---
library_name: peft
base_model: unsloth/tinyllama-bnb-4bit
---

# Steps to try the model

### Prompt template

```python
alpaca_prompt = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
{}

### Input:
{}

### Response:
{}"""
```
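For example (an illustrative snippet, not from the original card), the template is filled with an instruction, an optional input, and an empty response slot that the model will complete:

```python
# Hypothetical example prompt: fill the three slots (instruction, input, response).
# The response slot is left empty so the model generates it.
prompt = alpaca_prompt.format(
    "Summarize what a decision tree is",  # instruction (example of our own)
    "",                                   # input (optional extra context)
    "",                                   # response (left blank for the model)
)
print(prompt)
```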
### Load the model

```python
from peft import PeftModel, PeftConfig
from transformers import AutoModelForCausalLM, AutoTokenizer

# Optional: inspect the adapter configuration.
config = PeftConfig.from_pretrained("damerajee/Tinyllama-sft-small")

# Load the 4-bit base model and the tokenizer saved with the adapter,
# then attach the LoRA adapter to the base model.
# device_map="auto" places the model on the GPU to match the inputs below.
model = AutoModelForCausalLM.from_pretrained("unsloth/tinyllama-bnb-4bit", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("damerajee/Tinyllama-sft-small")
model = PeftModel.from_pretrained(model, "damerajee/Tinyllama-sft-small")
```
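Alternatively (a minimal sketch, not from the original card, assuming the adapter config records the base model), `peft.AutoPeftModelForCausalLM` can load the base model and attach the adapter in one call:

```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Reads the adapter config, loads the referenced base model, and attaches the adapter.
model = AutoPeftModelForCausalLM.from_pretrained(
    "damerajee/Tinyllama-sft-small",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("damerajee/Tinyllama-sft-small")
```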
### Inference

```python
# Build one prompt: the instruction is filled in, while the input and response
# slots are left empty so the model writes the response.
inputs = tokenizer(
    [
        alpaca_prompt.format(
            "i want to learn machine learning help me",  # instruction
            "",  # input
            "",  # response (generated by the model)
        )
    ],
    return_tensors="pt",
).to("cuda")

outputs = model.generate(**inputs, max_new_tokens=312, use_cache=True)
tokenizer.batch_decode(outputs)
```
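Optionally (an illustrative variation, not part of the original card), the output can be streamed to the console as it is generated using `transformers.TextStreamer`:

```python
from transformers import TextStreamer

# Print tokens as they are generated, skipping the echoed prompt.
streamer = TextStreamer(tokenizer, skip_prompt=True)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=312, use_cache=True)
```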

# Model Information

The base model [unsloth/tinyllama-bnb-4bit](https://huggingface.co/unsloth/tinyllama-bnb-4bit) was instruction-finetuned using [Unsloth](https://github.com/unslothai/unsloth).
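Since the adapter was trained with Unsloth, it can also be loaded through Unsloth for faster inference. This is a hedged sketch, not from the original card; the `max_seq_length` value is an assumption:

```python
from unsloth import FastLanguageModel

# Load the adapter (and its 4-bit base model) through Unsloth.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="damerajee/Tinyllama-sft-small",
    max_seq_length=2048,   # assumption: not stated on the card
    dtype=None,            # auto-detect
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's faster inference path
```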

# Training Details

The model was trained for 1 epoch on a free Google Colab instance, which took approximately 1 hour and 30 minutes.