emre570 committed on
Commit bd7197a
1 Parent(s): c1bf985

Update README.md

Files changed (1):
  1. README.md +30 -1
README.md CHANGED
@@ -24,7 +24,7 @@ You can access the fine-tuning code [here](https://colab.research.google.com/dri
 
 Trained with NVIDIA L4 with 150 steps, took around 8 minutes.
 
-## Example Usage
+## Example Usages
 You can use the adapter model with PEFT.
 ```py
 from peft import PeftModel, PeftConfig
@@ -55,6 +55,35 @@ inputs = tokenizer([
 outputs = model.generate(**inputs, max_new_tokens=256)
 print(tokenizer.decode(outputs[0], skip_special_tokens=True))
 ```
+
+You can use it from Transformers:
+```py
+from transformers import AutoTokenizer, AutoModelForCausalLM
+
+tokenizer = AutoTokenizer.from_pretrained("myzens/llama3-8b-tr-finetuned")
+model = AutoModelForCausalLM.from_pretrained("myzens/llama3-8b-tr-finetuned")
+
+alpaca_prompt = """
+Instruction:
+{}
+
+Input:
+{}
+
+Response:
+{}"""
+
+inputs = tokenizer([
+alpaca_prompt.format(
+    "",
+    "Ankara'da gezilebilecek 3 yeri söyle ve ne olduklarını kısaca açıkla.",
+    "",
+)], return_tensors = "pt").to("cuda")
+
+
+outputs = model.generate(**inputs, max_new_tokens=192)
+print(tokenizer.decode(outputs[0], skip_special_tokens=True))
+```
 Output:
 ```
 Instruction:
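The added Transformers example builds its input from an Alpaca-style template with three slots (instruction, input, response). As a minimal standalone sketch — the actual `generate` call needs a GPU and the `myzens/llama3-8b-tr-finetuned` weights, so only the prompt construction is reproduced here — this is how the template from the diff is filled in:

```python
# Sketch of the prompt construction from the added example; the model
# download and generation step are intentionally omitted.
alpaca_prompt = """
Instruction:
{}

Input:
{}

Response:
{}"""

prompt = alpaca_prompt.format(
    "",  # instruction slot left empty, as in the diff
    "Ankara'da gezilebilecek 3 yeri söyle ve ne olduklarını kısaca açıkla.",
    "",  # response slot left empty so the model completes it
)
print(prompt)
```

Because the response slot is empty, the rendered prompt ends right after `Response:`, which is where the fine-tuned model is expected to continue generating.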