WIP
(Please bear with me; this model will improve and receive a license soon.)
Hermes + Leo + German AWQ = Germeo
# Germeo-7B-AWQ
A model that understands both German and English but replies only in German, AWQ-quantized from Hermeo-7B.
## Model details
- Merged from: leo-mistral-hessianai-7b-chat and DPOpenHermes-7B-v2
- Model type: Causal decoder-only transformer language model
- Languages: replies in German, with both German and English understanding
- Calibration Data: LeoLM/OpenSchnabeltier
## Quantization procedure and use case

The specialty of this model is that it replies exclusively in German, regardless of the system message or prompt. During the AWQ process I used OpenSchnabeltier as calibration data to stress the importance of German tokens.
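For reference, a minimal sketch of what such a quantization step could look like with AutoAWQ. This is not the exact script used; the base model path, the quantization config values, and the dataset column name are assumptions:

```python
# Hedged sketch of AWQ quantization with German calibration data.
# Base model path, quant_config values, and the "instruction" column
# name are assumptions; adjust them to your setup.
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer
from datasets import load_dataset

base_path = "malteos/hermeo-7b"   # assumed source model
quant_path = "germeo-7b-awq"
quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}

model = AutoAWQForCausalLM.from_pretrained(base_path)
tokenizer = AutoTokenizer.from_pretrained(base_path, trust_remote_code=True)

# Build calibration samples from OpenSchnabeltier so the activation
# statistics reflect German tokens
data = load_dataset("LeoLM/OpenSchnabeltier", split="train")
calib_data = [row["instruction"] for row in data]  # assumed column name

model.quantize(tokenizer, quant_config=quant_config, calib_data=calib_data)
model.save_quantized(quant_path)
tokenizer.save_pretrained(quant_path)
```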
## Usage

### Setup in AutoAWQ

```python
# setup AutoAWQ: https://github.com/casper-hansen/AutoAWQ
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer, TextStreamer

quant_path = "aari1995/germeo-7b-awq"

# Load model
model = AutoAWQForCausalLM.from_quantized(quant_path, fuse_layers=True)
tokenizer = AutoTokenizer.from_pretrained(quant_path, trust_remote_code=True)
```
### Setup in transformers (works in Colab)

```python
# pip install autoawq (https://github.com/casper-hansen/AutoAWQ) and pip install --upgrade transformers
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

quant_path = "aari1995/germeo-7b-awq"

# Load model
model = AutoModelForCausalLM.from_pretrained(quant_path, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(quant_path, trust_remote_code=True)
```
### Inference

```python
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

# Convert prompt to tokens
prompt_template = """<|im_start|>system
Du bist ein hilfreicher Assistent.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant"""

prompt = "Schreibe eine Stellenanzeige für Data Scientist bei AXA!"

tokens = tokenizer(
    prompt_template.format(prompt=prompt),
    return_tensors='pt'
).input_ids.cuda()

# Generate output, streamed to stdout
generation_output = model.generate(
    tokens,
    streamer=streamer,
    max_new_tokens=1012
)

# To get the full decoded string instead:
# tokenizer.decode(generation_output.flatten())
```
## FAQ

### The model continues generating user inputs after its reply

To solve this, implement a custom stopping criteria:
```python
from transformers import StoppingCriteria

class GermeoStoppingCriteria(StoppingCriteria):
    def __init__(self, target_sequence, prompt):
        self.target_sequence = target_sequence
        self.prompt = prompt

    def __call__(self, input_ids, scores, **kwargs):
        # Get the generated text as a string
        generated_text = tokenizer.decode(input_ids[0])
        generated_text = generated_text.replace(self.prompt, '')
        # Check if the target sequence appears in the generated text
        if self.target_sequence in generated_text:
            return True  # Stop generation
        return False  # Continue generation

    # Make the criteria usable wherever a StoppingCriteriaList is expected
    def __len__(self):
        return 1

    def __iter__(self):
        yield self
```
The criteria expects a target sequence, in this case the `<|im_end|>` token, and your input prompt (formatted exactly as it was given to the model). Simply add it to the generation call:
```python
generation_output = model.generate(
    tokens,
    streamer=streamer,
    max_new_tokens=1012,
    stopping_criteria=GermeoStoppingCriteria("<|im_end|>", prompt_template.format(prompt=prompt))
)
```
## Acknowledgements and Special Thanks

- Thank you, malteos, for Hermeo; without it this model would not be possible! (And for all your other contributions.)
- Thanks to the authors of the base models: Mistral, LAION, HessianAI, Open Access AI Collective, @teknium, @bjoernp
- Thanks also to @bjoernp for your contribution, and to LeoLM for OpenSchnabeltier.
## Evaluation and Benchmarks (German only)

### German benchmarks
| German tasks: | MMLU-DE | Hellaswag-DE | ARC-DE | Average |
|---|---|---|---|---|
| Models / Few-shots: | (5 shots) | (10 shots) | (24 shots) | |
| 7B parameters | | | | |
| llama-2-7b | 0.400 | 0.513 | 0.381 | 0.431 |
| leo-hessianai-7b | 0.400 | 0.609 | 0.429 | 0.479 |
| bloom-6b4-clp-german | 0.274 | 0.550 | 0.351 | 0.392 |
| mistral-7b | 0.524 | 0.588 | 0.473 | 0.528 |
| leo-mistral-hessianai-7b | 0.481 | 0.663 | 0.485 | 0.543 |
| leo-mistral-hessianai-7b-chat | 0.458 | 0.617 | 0.465 | 0.513 |
| DPOpenHermes-7B-v2 | 0.517 | 0.603 | 0.515 | 0.545 |
| hermeo-7b | 0.511 | 0.668 | 0.528 | 0.569 |
| germeo-7b-awq (this model) | 0.522 | 0.651 | 0.514 | 0.563 |
| 13B parameters | | | | |
| llama-2-13b | 0.469 | 0.581 | 0.468 | 0.506 |
| leo-hessianai-13b | 0.486 | 0.658 | 0.509 | 0.551 |
| 70B parameters | | | | |
| llama-2-70b | 0.597 | 0.674 | 0.561 | 0.611 |
| leo-hessianai-70b | 0.653 | 0.721 | 0.600 | 0.658 |
### German reply rate benchmark

The fraction of replies given in German, measured according to this benchmark:
| Models: | German Response Rate |
|---|---|
| hermeo-7b | tba |
| germeo-7b-awq (this model) | tba |
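Since these numbers are still tba, here is a minimal sketch of how a German reply rate could be computed, assuming a list of already-collected model replies and using the langdetect package; this illustrates the idea only and is not necessarily how the benchmark above measures it:

```python
# Hypothetical measurement, not the benchmark's actual code.
# Assumes `replies` holds model outputs collected beforehand.
from langdetect import detect

def german_reply_rate(replies):
    # Fraction of replies whose detected language is German
    return sum(detect(r) == "de" for r in replies) / len(replies)

replies = [
    "Hallo! Wie kann ich dir helfen?",
    "Sure, here is the answer in English.",
]
print(f"German reply rate: {german_reply_rate(replies):.0%}")  # 50%
```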
### Additional Benchmarks

- TruthfulQA-DE: 0.508