|
---
library_name: transformers
language:
- en
- fr
- de
- es
- it
- pt
- ja
- ko
- zh
- ar
license: cc-by-nc-4.0
tags:
- exl2
---
|
|
|
# c4ai-command-r-v01 - EXL2 4.0bpw |
|
|
|
This is a 4.0bpw EXL2 quant of [CohereForAI/c4ai-command-r-v01](https://huggingface.co/CohereForAI/c4ai-command-r-v01).

Details about the base model can be found on the model page linked above.
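
For a quick smoke test, the quant can be loaded directly with the exllamav2 Python API. The sketch below is not part of the original release: the local model directory, sampling settings, and prompt are all assumptions, and it follows the stock `ExLlamaV2BaseGenerator` flow from the exllamav2 examples.

```python
# Minimal sketch: load the EXL2 quant with exllamav2 (>= 0.0.18) and
# generate a short completion. The model path is an assumption.
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "models/c4ai-command-r-v01_exl2_4.0bpw"  # assumed local path
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)  # split layers across available GPUs
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.8
settings.top_p = 0.9

# For chat use, wrap the prompt in the Command-R turn tokens shown in the
# template section below; a bare completion prompt is enough for a smoke test.
print(generator.generate_simple("The Cohere Command-R model is", settings, num_tokens=100))
```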
|
|
|
|
|
## EXL2 Version |
|
|
|
These quants were made with exllamav2 version 0.0.18. Quants made with this version may not load with older versions of the exllamav2 library.
|
|
|
If you have problems loading these models, please update Text Generation WebUI to the latest version. |
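
Updating the WebUI normally brings its bundled exllamav2 up to date; if you run your own environment, upgrading the library directly (for example `pip install --upgrade exllamav2`) has the same effect.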
|
|
|
### RP Calibrated |
|
|
|
The rpcal quants were made using `data/PIPPA-cleaned/pippa_raw_fix.parquet` for calibration.
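
Concretely, the only difference from the stock quants is that the measurement and conversion passes add `-c data/PIPPA-cleaned/pippa_raw_fix.parquet` to the `convert.py` calls; the commented-out lines in the Quant Details script below show where.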
|
|
|
## Perplexity Scoring |
|
|
|
Below are the perplexity scores for the EXL2 models. A lower score is better. |
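
These were produced with exllamav2's `test_inference.py` on wikitext-2 (see the Perplexity Script section below). The reported value is standard perplexity, i.e. the exponentiated mean negative log-likelihood of the evaluation tokens:

$$
\mathrm{PPL} = \exp\left(-\frac{1}{N}\sum_{i=1}^{N}\log p\left(x_i \mid x_{<i}\right)\right)
$$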
|
|
|
### Stock Quants |
|
|
|
| Quant Level (bpw) | Perplexity Score |
|-------------------|------------------|
| 8.0 | 6.4436 |
| 7.0 | 6.4372 |
| 6.0 | 6.4391 |
| 5.0 | 6.4526 |
| 4.5 | 6.4629 |
| 4.0 | 6.5081 |
| 3.5 | 6.6301 |
| 3.0 | 6.7974 |
|
|
|
### RP Calibrated Quants |
|
|
|
| Quant Level (bpw) | Perplexity Score |
|-------------------|------------------|
| 8.0 | 6.4331 |
| 7.0 | 6.4347 |
| 6.0 | 6.4356 |
| 5.0 | 6.4740 |
| 4.5 | 6.4875 |
| 4.0 | 6.5039 |
| 3.5 | 6.6928 |
| 3.0 | 6.8913 |
|
|
|
## EQ Bench |
|
|
|
Here are the EQ Bench scores for the EXL2 quants using the Alpaca, ChatML, Command-R, and Command-R-Plus prompt templates. A higher score is better.
|
|
|
### Stock Quants
|
|
|
| Quant Size (bpw) | Instruct Template | Score |
|------------------|-------------------|-------|
| 8.0 | Alpaca | 56.67 |
| 8.0 | ChatML | 47.28 |
| 8.0 | Command-R | 58.46 |
| 8.0 | Command-R-Plus | 58.49 |
| 7.0 | Alpaca | 57.5 |
| 7.0 | ChatML | 46.86 |
| 7.0 | Command-R | 57.29 |
| 7.0 | Command-R-Plus | 57.91 |
| 6.0 | Alpaca | 56.5 |
| 6.0 | ChatML | 48.61 |
| 6.0 | Command-R | 57.8 |
| 6.0 | Command-R-Plus | 58.64 |
| 5.0 | Alpaca | 54.64 |
| 5.0 | ChatML | 48.48 |
| 5.0 | Command-R | 57.14 |
| 5.0 | Command-R-Plus | 56.63 |
| 4.5 | Alpaca | 57.75 |
| 4.5 | ChatML | 48.1 |
| 4.5 | Command-R | 57.08 |
| 4.5 | Command-R-Plus | 56.7 |
| 4.0 | Alpaca | 53.41 |
| 4.0 | ChatML | 50.99 |
| 4.0 | Command-R | 57.46 |
| 4.0 | Command-R-Plus | 57.99 |
| 3.5 | Alpaca | 56.68 |
| 3.5 | ChatML | 52.72 |
| 3.5 | Command-R | 60.91 |
| 3.5 | Command-R-Plus | 60.91 |
| 3.0 | Alpaca | 36.45 |
| 3.0 | ChatML | 39.19 |
| 3.0 | Command-R | 49.17 |
| 3.0 | Command-R-Plus | 49.68 |
|
|
|
|
|
### RP Calibrated Quants |
|
|
|
| Quant Size (bpw) | Instruct Template | Score |
|------------------|-------------------|-------|
| 8.0 | Alpaca | 56.23 |
| 8.0 | ChatML | 48.42 |
| 8.0 | Command-R | 58.41 |
| 8.0 | Command-R-Plus | 58.41 |
| 7.0 | Alpaca | 57.01 |
| 7.0 | ChatML | 48.47 |
| 7.0 | Command-R | 57.85 |
| 7.0 | Command-R-Plus | 57.67 |
| 6.0 | Alpaca | 58.33 |
| 6.0 | ChatML | 50.93 |
| 6.0 | Command-R | 60.32 |
| 6.0 | Command-R-Plus | 59.83 |
| 5.0 | Alpaca | 55.28 |
| 5.0 | ChatML | 50.29 |
| 5.0 | Command-R | 58.96 |
| 5.0 | Command-R-Plus | 59.23 |
| 4.5 | Alpaca | 55.01 |
| 4.5 | ChatML | 46.63 |
| 4.5 | Command-R | 57.7 |
| 4.5 | Command-R-Plus | 59.24 |
| 4.0 | Alpaca | 49.76 |
| 4.0 | ChatML | 47.13 |
| 4.0 | Command-R | 54.76 |
| 4.0 | Command-R-Plus | 55.5 |
| 3.5 | Alpaca | 56.39 |
| 3.5 | ChatML | 52.98 |
| 3.5 | Command-R | 59.19 |
| 3.5 | Command-R-Plus | 58.32 |
| 3.0 | Alpaca | 50.36 |
| 3.0 | ChatML | 47.94 |
| 3.0 | Command-R | 54.89 |
| 3.0 | Command-R-Plus | 53.61 |
|
|
|
|
|
### Command-R-Plus Template |
|
|
|
This is the Command-R-Plus template YAML that was used in EQ Bench (which uses Text Generation WebUI YAML templates). It adds `<BOS_TOKEN>` to the start of the prompt. (The unreachable `elif false == true` branch is carried over from the upstream chat template, where it gates an optional default system prompt.)
|
|
|
_text-generation-webui/instruction-templates/Command-R-Plus.yaml_: |
|
```yaml
instruction_template: |-
  {%- if messages[0]['role'] == 'system' -%}
    {%- set loop_messages = messages[1:] -%}
    {%- set system_message = messages[0]['content'] -%}
  {%- elif false == true -%}
    {%- set loop_messages = messages -%}
    {%- set system_message = 'You are Command-R, a brilliant, sophisticated, AI-assistant trained to assist human users by providing thorough responses. You are trained by Cohere.' -%}
  {%- else -%}
    {%- set loop_messages = messages -%}
    {%- set system_message = false -%}
  {%- endif -%}
  {%- if system_message != false -%}
    {{ '<BOS_TOKEN><|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|>' + system_message + '<|END_OF_TURN_TOKEN|>' }}
  {%- endif -%}
  {%- for message in loop_messages -%}
    {%- set content = message['content'] -%}
    {%- if message['role'] == 'user' -%}
      {{ '<|START_OF_TURN_TOKEN|><|USER_TOKEN|>' + content.strip() + '<|END_OF_TURN_TOKEN|>' }}
    {%- elif message['role'] == 'assistant' -%}
      {{ '<|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>' + content.strip() + '<|END_OF_TURN_TOKEN|>' }}
    {%- endif -%}
  {%- endfor -%}
  {%- if add_generation_prompt -%}
    {{ '<|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>' }}
  {%- endif -%}
```
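
To reproduce the EQ Bench runs, this file goes in the WebUI's `instruction-templates/` directory (as in the path above), where it becomes selectable as an instruction template.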
|
|
|
### Perplexity Script |
|
|
|
This was the script used for perplexity testing. |
|
|
|
```bash
#!/bin/bash

# Activate the conda environment
source ~/miniconda3/etc/profile.d/conda.sh
conda activate exllamav2

# Set the model name and bit sizes
MODEL_NAME="c4ai-command-r-v01"
BIT_PRECISIONS=(8.0 7.0 6.0 5.0 4.5 4.0 3.5 3.0)

# Print the markdown table header
echo "| Quant Level | Perplexity Score |"
echo "|-------------|------------------|"

for BIT_PRECISION in "${BIT_PRECISIONS[@]}"
do
  MODEL_DIR="models/${MODEL_NAME}_exl2_${BIT_PRECISION}bpw"
  # MODEL_DIR="models/${MODEL_NAME}_exl2_${BIT_PRECISION}bpw-rpcal"
  if [ -d "$MODEL_DIR" ]; then
    output=$(python test_inference.py -m "$MODEL_DIR" -gs 22,24 -ed data/wikitext/wikitext-2-v1.parquet)
    score=$(echo "$output" | grep -oP 'Evaluation perplexity: \K[\d.]+')
    echo "| $BIT_PRECISION | $score |"
  fi
done
```
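
The `-gs 22,24` argument splits the model across two GPUs by VRAM (22 GB and 24 GB here) and should be adjusted or dropped for other hardware; switching to the commented-out `MODEL_DIR` line scores the RP-calibrated quants instead.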
|
|
|
|
|
## Quant Details |
|
|
|
This is the script used for quantization. |
|
|
|
```bash
#!/bin/bash

# Activate the conda environment
source ~/miniconda3/etc/profile.d/conda.sh
conda activate exllamav2

# Set the model name
MODEL_NAME="c4ai-command-r-v01"

# Define variables
MODEL_DIR="models/$MODEL_NAME"
OUTPUT_DIR="exl2_$MODEL_NAME"
MEASUREMENT_FILE="measurements/$MODEL_NAME.json"
# CALIBRATION_DATASET="data/PIPPA-cleaned/pippa_raw_fix.parquet"

# Create the measurement file if needed
if [ ! -f "$MEASUREMENT_FILE" ]; then
  echo "Creating $MEASUREMENT_FILE"

  # Create directories
  if [ -d "$OUTPUT_DIR" ]; then
    rm -r "$OUTPUT_DIR"
  fi
  mkdir "$OUTPUT_DIR"

  # python convert.py -i $MODEL_DIR -o $OUTPUT_DIR -nr -om $MEASUREMENT_FILE -c $CALIBRATION_DATASET
  python convert.py -i $MODEL_DIR -o $OUTPUT_DIR -nr -om $MEASUREMENT_FILE
fi

# Choose one of the below. Either create a single quant for testing or a batch of them.
# BIT_PRECISIONS=(5.0)
BIT_PRECISIONS=(8.0 7.0 6.0 5.0 4.5 4.0 3.5 3.0)

for BIT_PRECISION in "${BIT_PRECISIONS[@]}"
do
  CONVERTED_FOLDER="models/${MODEL_NAME}_exl2_${BIT_PRECISION}bpw"

  # If it doesn't already exist, make the quant
  if [ ! -d "$CONVERTED_FOLDER" ]; then
    echo "Creating $CONVERTED_FOLDER"

    # Create directories
    if [ -d "$OUTPUT_DIR" ]; then
      rm -r "$OUTPUT_DIR"
    fi
    mkdir "$OUTPUT_DIR"
    mkdir "$CONVERTED_FOLDER"

    # Run conversion commands
    # python convert.py -i $MODEL_DIR -o $OUTPUT_DIR -nr -m $MEASUREMENT_FILE -b $BIT_PRECISION -c $CALIBRATION_DATASET -cf $CONVERTED_FOLDER
    python convert.py -i $MODEL_DIR -o $OUTPUT_DIR -nr -m $MEASUREMENT_FILE -b $BIT_PRECISION -cf $CONVERTED_FOLDER
  fi
done
```
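
Note the design: the measurement pass (`-om`) runs only once per model, and every bitrate then reuses that measurement file via `-m`, so each loop iteration only pays for the final conversion. The commented-out `-c $CALIBRATION_DATASET` arguments are what get switched in for the rpcal variants.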
|
|