---
license: cc-by-nc-sa-4.0
datasets:
- SebastianBodza/Ger_WizardLM_evol_instruct_70k_V0
language:
- de
---
# DElefant:
<img src="https://huggingface.co/SebastianBodza/DElefant/resolve/main/badge_gerlefant.png" style="max-width:200px">

DElefant is an LLM developed for instruction-tuned German interactions. This version is built on top of the adapted BLOOM model from [Malte Ostendorff](https://huggingface.co/malteos/bloom-6b4-clp-german), fine-tuned on an opus-mt-translated and subsequently filtered [WizardLM](https://huggingface.co/datasets/SebastianBodza/Ger_WizardLM_evol_instruct_70k_V0) dataset. The evolved dataset led to SOTA English LLMs, and we hope that by incorporating it into a German base model we can leverage these capabilities for various tasks, including code generation.

Due to limitations in translation, the comments inside the code blocks remained English; the code itself, however, was kept in working condition.

## Model Description:
Full fine-tuning of the German BLOOM model on an RTX 3090 with the translated WizardLM dataset.

## Roadmap:
If there is sufficient demand, additional adjustments can be made:
- A dataset generated natively in German
- Full fine-tuning of larger LLMs, e.g. Falcon, StarCoderPlus, ...

## How to use:
Prompt template:
```
{instruction}\n\n### Response:
```
Code example for inference:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("SebastianBodza/DElefant")
model = AutoModelForCausalLM.from_pretrained("SebastianBodza/DElefant", device_map="auto")

# Example question in German ("What is the chancellor's name?"),
# formatted according to the prompt template above
frage = "Wie heißt der Bundeskanzler?"
prompt = f"{frage}\n\n### Response:"

# Tokenize, generate greedily, and decode the full sequence (prompt + answer)
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
output = model.generate(**inputs,
                        max_new_tokens=256,
                        eos_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
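The decoded text above contains the prompt as well as the answer. The following is a minimal sketch (not part of the original card) that continues from the example above, uses sampling instead of greedy decoding, and keeps only the generated answer; the `temperature` and `top_p` values are illustrative assumptions, not settings recommended by the authors.
```python
# Hedged sketch: sampling-based generation, continuing from the example above.
# temperature/top_p are illustrative values, not settings from the model card.
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
sampled = model.generate(**inputs,
                         max_new_tokens=256,
                         do_sample=True,
                         temperature=0.7,
                         top_p=0.9,
                         eos_token_id=tokenizer.eos_token_id)
antwort = tokenizer.decode(sampled[0], skip_special_tokens=True)
# Keep only the text after the "### Response:" marker
print(antwort.split("### Response:", 1)[-1].strip())
```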
## Training:
Training was based on Llama-X with the adaptations from WizardLM's training script.
```bash
deepspeed Llama-X/src/train_freeform.py \
    --model_name_or_path malteos/bloom-6b4-clp-german \
    --data_path ger_alpaca_evol_instruct_70k_e.json \
    --output_dir ./full_finetune \
    --num_train_epochs 2 \
    --model_max_length 2048 \
    --per_device_train_batch_size 2 \
    --per_device_eval_batch_size 1 \
    --gradient_accumulation_steps 8 \
    --evaluation_strategy "no" \
    --save_strategy "steps" \
    --save_steps 400 \
    --save_total_limit 3 \
    --learning_rate 2e-5 \
    --warmup_steps 2 \
    --logging_steps 2 \
    --lr_scheduler_type "cosine" \
    --report_to "tensorboard" \
    --gradient_checkpointing True \
    --deepspeed deepspeed.json \
    --bf16 True
```
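The `deepspeed.json` referenced above is not included in the card. Below is a minimal sketch of what such a config could look like, assuming ZeRO stage 3 with CPU offloading to fit the 6.4B-parameter model on a single RTX 3090; the configuration actually used may differ.
```json
{
  "bf16": { "enabled": true },
  "zero_optimization": {
    "stage": 3,
    "offload_optimizer": { "device": "cpu", "pin_memory": true },
    "offload_param": { "device": "cpu", "pin_memory": true },
    "overlap_comm": true,
    "contiguous_gradients": true
  },
  "gradient_accumulation_steps": "auto",
  "gradient_clipping": "auto",
  "train_batch_size": "auto",
  "train_micro_batch_size_per_gpu": "auto"
}
```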
<img src="https://huggingface.co/SebastianBodza/DElefant/resolve/main/train_loss_DElefant.svg" style="max-width:350px">