Abhishek0323 committed on
Commit 08b5e51 · verified · 1 Parent(s): e3c1bbc

Create README.md

Files changed (1)
  1. README.md +45 -0
README.md ADDED
@@ -0,0 +1,45 @@
---
language:
- en
tags:
- llama
- text-generation
- fine-tuned
datasets:
- mlabonne/guanaco-llama2-1k
---

# Abhishek0323's Fine-tuned LLaMA-2 Model

## Model Description

This model is a fine-tuned version of the LLaMA-2 language model, optimized for generating responses to general knowledge questions. Fine-tuning helps it better understand and process prompts in a conversational context.

## How to Use

Load the model with the `transformers` text-generation pipeline and wrap your prompt in the LLaMA-2 `[INST]` instruction format:

```python
from transformers import AutoTokenizer, pipeline
import torch

model_name = "Abhishek0323/llama-2-7b-ftabhi"
prompt = "What is a large language model?"

# Load the tokenizer and build a text-generation pipeline in half precision,
# letting the model be placed automatically across available devices.
tokenizer = AutoTokenizer.from_pretrained(model_name)
gen_pipeline = pipeline(
    "text-generation",
    model=model_name,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Wrap the prompt in the LLaMA-2 instruction format and sample one response.
sequences = gen_pipeline(
    f"<s>[INST] {prompt} [/INST]",
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
    max_length=200,
)

for ans in sequences:
    print(f"Result: {ans['generated_text']}")
```
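
If you need finer control over generation than the pipeline exposes, the model can also be loaded directly with `AutoModelForCausalLM` and called through `generate`. The sketch below is a minimal example under that assumption; it reuses the same checkpoint and the `[INST]` prompt format shown above.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# Same checkpoint as above; float16 keeps memory usage modest on a single GPU.
model_name = "Abhishek0323/llama-2-7b-ftabhi"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Format the question with the LLaMA-2 instruction tags and tokenize it.
prompt = "<s>[INST] What is a large language model? [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Sample a single response, stopping at the end-of-sequence token.
output_ids = model.generate(
    **inputs,
    do_sample=True,
    top_k=10,
    max_new_tokens=200,
    eos_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

Using `max_new_tokens` here bounds only the generated continuation rather than the combined prompt-plus-output length, which is often easier to reason about for chat-style prompts.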