tsessk committed on
Commit b32249c · verified · 1 Parent(s): 1558846

Update README.md

Files changed (1)
  1. README.md +15 -43
README.md CHANGED
@@ -7,53 +7,25 @@ tags:
  - generated_from_trainer
  - trl
  - reward-trainer
- licence: license
  ---

- # Model Card for llm-course-hw2-reward-model
-
- This model is a fine-tuned version of [HuggingFaceTB/SmolLM-135M-Instruct](https://huggingface.co/HuggingFaceTB/SmolLM-135M-Instruct) on the [HumanLLMs/Human-Like-DPO-Dataset](https://huggingface.co/datasets/HumanLLMs/Human-Like-DPO-Dataset) dataset.
- It has been trained using [TRL](https://github.com/huggingface/trl).
-
- ## Quick start
-
- ```python
- from transformers import pipeline
-
- question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
- generator = pipeline("text-generation", model="tsessk/llm-course-hw2-reward-model", device="cuda")
- output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
- print(output["generated_text"])
- ```
-
- ## Training procedure
-
- This model was trained with Reward.
-
- ### Framework versions
-
- - TRL: 0.15.2
- - Transformers: 4.48.3
- - Pytorch: 2.5.1+cu124
- - Datasets: 3.3.2
- - Tokenizers: 0.21.0
-
- ## Citations
-
- Cite TRL as:
-
- ```bibtex
- @misc{vonwerra2022trl,
-     title        = {{TRL: Transformer Reinforcement Learning}},
-     author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
-     year         = 2020,
-     journal      = {GitHub repository},
-     publisher    = {GitHub},
-     howpublished = {\url{https://github.com/huggingface/trl}}
- }
- ```
+ # 🏆 Model Card for llm-course-hw2-reward-model
+
+ This model is a **fine-tuned reward model** based on [HuggingFaceTB/SmolLM-135M-Instruct](https://huggingface.co/HuggingFaceTB/SmolLM-135M-Instruct), trained on the **[HumanLLMs/Human-Like-DPO-Dataset](https://huggingface.co/datasets/HumanLLMs/Human-Like-DPO-Dataset)** dataset.
+ It has been trained using **[TRL](https://github.com/huggingface/trl)** to **evaluate and rank responses based on human preferences**, providing the reward signal for **RLHF (Reinforcement Learning from Human Feedback)** training of models like **SmolLM-135M-PPO**.
+
+ ---
+
+ ## 📝 Overview
+
+ - **Base Model:** SmolLM-135M-Instruct
+ - **Fine-Tuned Dataset:** [HumanLLMs/Human-Like-DPO-Dataset](https://huggingface.co/datasets/HumanLLMs/Human-Like-DPO-Dataset)
+ - **Objective:** Learn to assign **higher scores** to more engaging, structured, and emotional responses.
+ - **Use Case:** Used in **PPO-based RLHF training** to reinforce **human-like response quality** (see the scoring sketch below).
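+
+ A minimal scoring sketch, assuming the checkpoint is a standard TRL reward model (a sequence-classification head that outputs a single reward logit); the example dialogue is illustrative only:
+
+ ```python
+ # Score one chat response with the reward model (higher = more preferred).
+ import torch
+ from transformers import AutoModelForSequenceClassification, AutoTokenizer
+
+ model_id = "tsessk/llm-course-hw2-reward-model"
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForSequenceClassification.from_pretrained(model_id)
+
+ chat = [
+     {"role": "user", "content": "If you had a time machine, would you visit the past or the future?"},
+     {"role": "assistant", "content": "Oh, tough one! I'd pick the future, I'm too curious about what comes next."},
+ ]
+ input_ids = tokenizer.apply_chat_template(chat, return_tensors="pt")
+
+ with torch.no_grad():
+     reward = model(input_ids=input_ids).logits[0, 0].item()
+ print(f"reward score: {reward:.3f}")
+ ```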
+
+ ### **Training Method**
+
+ - The model was fine-tuned using **Direct Preference Comparisons**:
+   - Each sample contains a **chosen response** (preferred) and a **rejected response**.
+   - The model **learns to assign higher rewards** to the chosen response and **lower rewards** to the rejected one.
+ - This reward function was used in **PPO fine-tuning** to optimize response generation, as sketched in the loss example below.
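+
+ A minimal sketch of this pairwise objective, assuming the standard Bradley-Terry reward-modeling loss used by TRL's `RewardTrainer` (the numbers are made up for illustration):
+
+ ```python
+ # Pairwise reward-modeling loss: push r(chosen) above r(rejected).
+ import torch
+ import torch.nn.functional as F
+
+ def pairwise_reward_loss(chosen_rewards: torch.Tensor, rejected_rewards: torch.Tensor) -> torch.Tensor:
+     # -log sigmoid(r_chosen - r_rejected): approaches 0 as the chosen margin grows.
+     return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()
+
+ # Toy batch of reward scores for two chosen/rejected pairs.
+ chosen = torch.tensor([1.8, 0.4])
+ rejected = torch.tensor([0.2, 0.9])
+ print(pairwise_reward_loss(chosen, rejected))  # lower loss when chosen outscores rejected
+ ```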
+
+ ---