---
license: apache-2.0
language:
- de
tags:
- llama
- alpaca
- llm
- finetune
- german
- transformers
---

# Zicklein: German 🇩🇪 finetuned instruction LLaMA

Visit the GitHub repository for more information: https://github.com/avocardio/zicklein

## Usage

```python
import torch
from peft import PeftModel
from transformers import LlamaTokenizer, LlamaForCausalLM, GenerationConfig

# Load the original LLaMA-7B tokenizer and base model
tokenizer = LlamaTokenizer.from_pretrained("decapoda-research/llama-7b-hf")
model = LlamaForCausalLM.from_pretrained(
    "decapoda-research/llama-7b-hf",
    load_in_8bit=False,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Apply the German instruction-tuned LoRA adapter on top of the base model
model = PeftModel.from_pretrained(model, "avocardio/alpaca-lora-7b-german-base-52k")
```
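
Once the adapter is loaded, the model can be prompted with German instructions. Below is a minimal generation sketch continuing from the snippet above; the German Alpaca-style prompt template is an assumption based on the project's training data format, not a documented interface, so check the GitHub repository for the exact template.

```python
# Assumption: a German translation of the standard Alpaca prompt template.
# The exact wording used during finetuning may differ.
prompt = (
    "Nachfolgend ist eine Anweisung, die eine Aufgabe beschreibt. "
    "Schreibe eine Antwort, die die Aufgabe angemessen erfüllt.\n\n"
    "### Anweisung:\nErkläre den Unterschied zwischen einem Alpaka und einem Lama.\n\n"
    "### Antwort:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
generation_config = GenerationConfig(
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
    max_new_tokens=256,
)

with torch.no_grad():
    output = model.generate(**inputs, generation_config=generation_config)

print(tokenizer.decode(output[0], skip_special_tokens=True))
```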