roger33303 committed on
Commit 0ba4c48 · verified · 1 Parent(s): 0e27cc6

Update README.md

Files changed (1): README.md (+65 −5)
README.md CHANGED
@@ -9,14 +9,74 @@ tags:
  license: apache-2.0
  language:
  - en
  ---

  # Uploaded model

  - **Developed by:** roger33303
  - **License:** apache-2.0
- - **Finetuned from model :** unsloth/Llama-3.2-3B-Instruct
-
- This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
-
- [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
  license: apache-2.0
  language:
  - en
+ metrics:
+ - bleu
+ - cer
+ - meteor
+ library_name: transformers
  ---

+ # Llama-3.2-3B Finetuned Model
+
+ ## 1. Introduction
+ This model is a finetuned version of the Llama-3.2-3B large language model, trained to give detailed and accurate answers to university course-related queries. It provides course details, fee structures, durations, and campus options, along with links to the corresponding course pages. Finetuning on a tailored, domain-specific dataset is what gives the model its accuracy on these queries.
+
+ ---
+
+ ## 2. Dataset Used for Finetuning
+ The Llama-3.2-3B model was finetuned on a private dataset collected by scraping the University of Westminster website. It included:
+
+ - Course titles
+ - Campus details
+ - Duration options (full-time, part-time, distance learning)
+ - Fee structures (for UK and international students)
+ - Course descriptions
+ - Direct links to course pages
+
+ This dataset was carefully cleaned and formatted to sharpen the model's ability to give precise answers to user queries.
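As a rough illustration, a single scraped record and its conversion into a question-answer training pair might look like the sketch below. All field names and values are invented placeholders for illustration; the actual private dataset's schema, fees, and URLs are not public.

```python
# Illustrative sketch only: field names and values are placeholders,
# not the actual private schema or real fee figures.
record = {
    "title": "AI, Data and Communication MA",
    "campus": "Harrow",
    "duration": {"full_time": "1 year", "part_time": "2 years"},
    "fees": {"uk": "<UK fee>", "international": "<international fee>"},
    "description": "Placeholder course description.",
    "url": "https://www.westminster.ac.uk/placeholder-course-page",
}

def to_training_pair(rec):
    # Render one record into an instruction-style (question, answer) pair.
    question = f"Tell me about the {rec['title']} course."
    answer = (
        f"{rec['title']} is taught at the {rec['campus']} campus. "
        f"Full-time duration: {rec['duration']['full_time']}. "
        f"UK fees: {rec['fees']['uk']}, "
        f"international fees: {rec['fees']['international']}. "
        f"More details: {rec['url']}"
    )
    return {"question": question, "answer": answer}

pair = to_training_pair(record)
print(pair["question"])
print(pair["answer"])
```

Pairs like these can then be rendered through the model's chat template during finetuning.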
+
+ ---
+
+ ## 3. How to Use This Model
+ To use the finetuned model, follow the steps below:
+
+ 1. **Prepare the Query Function**
+    - Define a function that takes a user question and streams the model's response (it assumes a `tokenizer` is already loaded in scope):
+
+ ```python
+ from transformers import TextStreamer
+
+ def chatml(question, model):
+     # Wrap the question as a single-turn chat and apply the chat template.
+     messages = [{"role": "user", "content": question}]
+     inputs = tokenizer.apply_chat_template(
+         messages,
+         tokenize=True,
+         add_generation_prompt=True,
+         return_tensors="pt",
+     ).to("cuda")
+
+     print(tokenizer.decode(inputs[0]))  # show the rendered prompt
+     # Stream tokens as they are generated, hiding the prompt and special tokens.
+     text_streamer = TextStreamer(tokenizer, skip_special_tokens=True,
+                                  skip_prompt=True)
+     return model.generate(input_ids=inputs,
+                           streamer=text_streamer,
+                           max_new_tokens=512)
+ ```
+
+ 2. **Query the Model**
+    - Use the following example to test the model:
+
+ ```python
+ question = "Does the University of Westminster offer a course on AI, Data and Communication MA?"
+ x = chatml(question, model)
+ ```
+
+ With this setup you can query the finetuned model and receive detailed, relevant responses.
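For intuition about what `tokenizer.apply_chat_template(..., add_generation_prompt=True)` renders before tokenization, here is a hand-written approximation of the Llama 3 instruct chat format. The authoritative template ships with the tokenizer; this sketch is illustrative only.

```python
# Hand-written approximation of the Llama 3 instruct chat template.
# The real template is defined by the tokenizer's config; this only
# illustrates the shape of the rendered prompt for a single user turn.
def render_llama3_prompt(messages):
    out = "<|begin_of_text|>"
    for m in messages:
        out += (f"<|start_header_id|>{m['role']}<|end_header_id|>\n\n"
                f"{m['content']}<|eot_id|>")
    # add_generation_prompt=True appends an open assistant header,
    # cueing the model to generate the assistant's reply next.
    out += "<|start_header_id|>assistant<|end_header_id|>\n\n"
    return out

prompt = render_llama3_prompt([{"role": "user", "content": "Hi"}])
print(prompt)
```

This is why the `print(tokenizer.decode(inputs[0]))` call in the query function shows header and end-of-turn tokens around the question.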
+
+ ---
+
  # Uploaded model

  - **Developed by:** roger33303
  - **License:** apache-2.0
+ - **Finetuned from model :** unsloth/Llama-3.2-3B-Instruct