Quantization made by Richard Erkhov.

GitHub | Discord | Request more models

gemma-2-2b-jpn-it-abliterated-18-ORPO - GGUF

Original model description:

base_model: google/gemma-2-2b-jpn-it
language:
- multilingual
datasets:
- mlabonne/orpo-dpo-mix-40k
library_name: transformers
license: gemma
license_link: https://ai.google.dev/gemma/terms
pipeline_tag: text-generation
tags:
- nlp
- code
quantized_by: ymcki
widget:
- messages:
  - role: user
    content: Can you provide ways to eat combinations of bananas and dragonfruits?

Original model: https://huggingface.co/google/gemma-2-2b-jpn-it

Prompt format

<start_of_turn>user
{prompt}<end_of_turn>
<start_of_turn>model
<end_of_turn>
<start_of_turn>model

Note that this model does not support a system prompt.
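
For example, a single user turn such as "Write a hello world program" (the same request used in the code below) would be rendered as:

<start_of_turn>user
Write a hello world program<end_of_turn>
<start_of_turn>model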

This is an abliterated model of google/gemma-2-2b-jpn-it, created using the method described by mlabonne.

Layer 18 of the original model was chosen for abliteration. I also created another model abliterated at layer 17 for comparison.

ORPO fine-tuning was performed for four epochs.

| Epoch | loss | eval_loss |
|-------|------|-----------|
| 1 | 1.0452101707458496 | 1.0170862674713135 |
| 2 | 0.81533865332603454 | 0.9825302958488464 |
| 3 | 1.15400108695030208 | 0.9852740168571472 |
| 4 | 0.76560704708099362 | 0.9880287051200867 |

The fine-tuned model is uploaded here so it can be evaluated by the Open LLM Leaderboard, to see whether the slightly brain-damaged non-ORPO model can be healed. Again, the fine-tuning method is based on the one described by mlabonne, but the input model was loaded into VRAM by unsloth, which allowed the full 40k dataset to run on a single 3090.
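
For reference, here is a minimal sketch of what such a run could look like, assuming TRL's ORPOTrainer and unsloth for loading. The actual script and hyperparameters used for this model are not published, so the paths, batch size, and learning rate below are illustrative assumptions (only the four epochs and the dataset come from this card); dataset preprocessing such as chat-template formatting is omitted, and argument names vary across TRL versions.

from unsloth import FastLanguageModel
from datasets import load_dataset
from trl import ORPOConfig, ORPOTrainer

# Load the layer-18 abliterated model with unsloth to save VRAM.
# "gemma-2-2b-jpn-it-abliterated-18" is a hypothetical local path.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="gemma-2-2b-jpn-it-abliterated-18",
    max_seq_length=2048,
)

# The full 40k-sample preference dataset mentioned above.
dataset = load_dataset("mlabonne/orpo-dpo-mix-40k", split="train")

# Illustrative hyperparameters; only num_train_epochs=4 is taken from the card.
config = ORPOConfig(
    output_dir="gemma-2-2b-jpn-it-abliterated-18-ORPO",
    num_train_epochs=4,
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    learning_rate=5e-6,
)

trainer = ORPOTrainer(
    model=model,
    args=config,
    train_dataset=dataset,
    tokenizer=tokenizer,  # newer TRL versions name this processing_class
)
trainer.train()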

Benchmark (raw scores × 100 only)

Click on a model name to go to the raw-score JSON generated by the Open LLM Leaderboard.

| Model | Average | IFEval | BBH | Math Lv5 | GPQA | MUSR | MMLU-PRO |
|-------|---------|--------|-----|----------|------|------|----------|
| gemma-2-2b-jpn-it | 30.82 | 54.11 | 41.43 | 0.0 | 27.52 | 37.17 | 24.67 |
| gemma-2-2b-jpn-it-abliterated-17-ORPO | 29.99 | 50.94 | 38.59 | 2.87 | 27.43 | 38.23 | 21.86 |
| gemma-2-2b-jpn-it-abliterated-18-ORPO | 29.94 | 48.97 | 40.18 | 3.02 | 26.17 | 39.42 | 21.85 |
| gemma-2-2b-jpn-it-abliterated-17 | 30.29 | 52.65 | 40.46 | 0.0 | 27.18 | 36.90 | 24.55 |
| gemma-2-2b-jpn-it-abliterated-18 | 30.61 | 53.02 | 40.96 | 0.0 | 27.35 | 37.30 | 25.05 |

It looks like this fine-tuning is probably not enough. More epochs may be needed.

How to run this model

from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "gemma-2-2b-jpn-it-abliterated-18-ORPO"
dtype = torch.bfloat16

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    torch_dtype=dtype,
)

chat = [
    {"role": "user", "content": "Write a hello world program"},
]
# Render the Gemma chat template shown above (no system prompt is supported).
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)

# Generate a completion from the formatted prompt.
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))

Downloading using huggingface-cli

First, make sure you have huggingface-cli installed:

pip install -U "huggingface_hub[cli]"

Then, you can target the specific file you want:

huggingface-cli download ymcki/gemma-2-2b-jpn-it-abliterated-18-ORPO --include "*" --local-dir ./
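
Since this repository hosts GGUF quantizations, a downloaded file can also be run directly with llama.cpp. A minimal sketch, where the Q4_K_M file name is a hypothetical example of one of the quants in this repo, and -cnv starts an interactive chat using the template embedded in the GGUF:

./llama-cli -m gemma-2-2b-jpn-it-abliterated-18-ORPO.Q4_K_M.gguf -cnv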

Credits

Thanks to mlabonne for describing his fine-tuning method.

Thanks to FullOf_Bad_Ideas from LocalLlama for suggesting unsloth to save VRAM.

Available GGUF quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit (model size 2.61B params, gemma2 architecture).
