---
library_name: transformers
tags:
- trl
- sft
license: apache-2.0
base_model: HuggingFaceTB/SmolLM2-1.7B-Instruct
datasets:
- EngSAF
metrics:
- accuracy
- f1
- precision
- recall
- cohen_kappa
- rmse
model-index:
- name: SmolLM2-1.7B-Instruct-EngSaf-429K
  results:
  - task:
      name: Text Generation
      type: text-generation
    dataset:
      name: EngSAF
      type: EngSAF
      config: EngSAF
      split: train
      args: EngSAF
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.4000
    - name: F1
      type: f1
      value: 0.3614
    - name: Precision
      type: precision
      value: 0.4496
    - name: Recall
      type: recall
      value: 0.3939
    - name: Cohen Kappa
      type: cohen_kappa
      value: 0.0789
    - name: RMSE
      type: rmse
      value: 1.0392
language:
- en
pipeline_tag: text-generation
---

# SmolLM2-1.7B-Instruct-EngSaf-429K
54 |
|
55 |
+
This model is a fine-tuned version of [HuggingFaceTB/SmolLM2-1.7B-Instruct](https://huggingface.co/HuggingFaceTB/SmolLM2-1.7B-Instruct) on the EngSAF dataset for Essay Grading.
|
56 |
|
57 |
|
58 |
+
- **Workflow**: GitHub Repository: [https://github.com/IsmaelMousa/automatic-essay-grading](https://github.com/IsmaelMousa/automatic-essay-grading).
|
59 |
+
- **Base Model:** SmolLM2-1.7B-Instruct: [https://doi.org/10.48550/arXiv.2502.02737](https://doi.org/10.48550/arXiv.2502.02737).
|
60 |
+
- **Fine-tuning Dataset:** EngSAF-428K: [https://github.com/IsmaelMousa/EngSAF/429K](https://github.com/IsmaelMousa/automatic-essay-grading/blob/main/data/engsaf/clean/train/3552_entries.csv).
|
61 |
+
- **Task:** Automatic Essay Grading (Text Generation).
|
62 |
|
63 |
+
[](https://api.wandb.ai/links/ismael-amjad/rav48wc1)
|
64 |
|
65 |
+
## Dataset
|
66 |
|
67 |
+
The EngSAF dataset, in its raw and unprocessed form, consists of approximately 5,800 short-answer responses collected
|
68 |
+
from real-life engineering examinations administered at a reputed academic institute. These responses are spread across
|
69 |
+
119 unique questions drawn from a wide range of engineering disciplines, making the dataset both diverse and
|
70 |
+
domain-specific. Each data point includes a student’s answer and an associated human-annotated score, serving as a
|
71 |
+
benchmark for evaluating automated grading models.
|
72 |
|
73 |
+
The dataset is divided into three primary subsets: 70% is allocated for training, 16% is reserved for evaluation on
|
74 |
+
unseen answers (UA), and 14% is dedicated to evaluating performance on entirely new questions (UQ). At this stage, it is
|
75 |
+
important to note that the dataset is considered in its original state; no preprocessing, transformation, or filtering
|
76 |
+
has yet been applied. All subsequent improvements and refinements to the data will be described in later sections.
|
77 |
+
This dataset is known as EngSAF version 1.0 and was introduced in the paper titled *"I understand why I got this grade":
|
78 |
+
Automatic Short Answer Grading (ASAG) with Feedback*, authored by Aggarwal et al., and set to appear in the proceedings
|
79 |
+
of AIED 2025. The dataset is released strictly for academic and research purposes; any commercial use or redistribution
|
80 |
+
without explicit permission is prohibited. Researchers are also urged to avoid publicly disclosing any sensitive content
|
81 |
+
that may be contained in the dataset.
|
82 |
|
83 |
+
For more details, the paper can be accessed at: [https://arxiv.org/abs/2407.12818](https://arxiv.org/abs/2407.12818).
|
|
|
|
|
|
|
|
|
|
|
|
|
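
For illustration, a rough sketch of how such a question-aware split could be reproduced is shown below; the CSV file name and column name are hypothetical placeholders, not the dataset's actual schema:

```python
# Question-aware 70/16/14 split (sketch); the "question" column and file name are hypothetical.
import pandas as pd

df = pd.read_csv("engsaf_v1.csv")

# Hold out ~14% of the questions so their answers form the unseen-question (UQ) split.
uq_questions = df["question"].drop_duplicates().sample(frac=0.14, random_state=42)
uq   = df[df["question"].isin(uq_questions)]
rest = df[~df["question"].isin(uq_questions)]

# Split the remaining answers into training (~70% overall) and unseen-answer (UA) evaluation.
train = rest.sample(frac=0.70 / 0.86, random_state=42)
ua    = rest.drop(train.index)
```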

## Modeling

The modeling approach for this study was designed to evaluate the performance of different large language models (LLMs) on the automated essay grading task. We selected the SmolLM2 architecture to represent a range of model sizes: 135M, 360M, and 1.7B. Each model was instruction-tuned on EngSAF subsets of varying sizes, with hyperparameters optimized to balance computational efficiency and performance. The experiments were conducted on GPU-accelerated hardware, leveraging techniques such as gradient checkpointing, flash attention, and mixed-precision training to maximize resource utilization.
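
As an illustration of this setup, here is a minimal TRL `SFTTrainer` sketch; the hyperparameter values, the CSV file name, and the assumption of a pre-formatted `text` column are illustrative, not the exact training recipe:

```python
# Fine-tuning sketch with TRL's SFTTrainer; concrete values below are assumptions.
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM
from trl import SFTConfig, SFTTrainer

checkpoint = "HuggingFaceTB/SmolLM2-1.7B-Instruct"
model = AutoModelForCausalLM.from_pretrained(
    checkpoint,
    torch_dtype=torch.bfloat16,               # mixed-precision weights
    attn_implementation="flash_attention_2",  # flash attention (requires flash-attn)
)

# Assumes the EngSAF CSV was already rendered into chat-formatted strings in a "text" column.
dataset = load_dataset("csv", data_files="3552_entries.csv", split="train")

args = SFTConfig(
    output_dir="SmolLM2-1.7B-Instruct-EngSaf-429K",
    bf16=True,                    # mixed-precision training
    gradient_checkpointing=True,  # trade compute for activation memory
    per_device_train_batch_size=2,
    gradient_accumulation_steps=8,
)

SFTTrainer(model=model, args=args, train_dataset=dataset).train()
```
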
## Evaluation

The evaluation methodology employed both quantitative metrics and qualitative analysis. For quantitative assessment, on a held-out test set of 100 samples, we computed accuracy, precision, recall, F1 score, root mean squared error (RMSE), and Cohen's kappa score (CKS) for the scoring task, and used BERT-Score precision, recall, and F1 for rationale evaluation. Qualitative examination of the models' outputs revealed cases where most models correctly identified key aspects of student answers but sometimes failed to align their scoring with the rubric criteria.

### Evaluation results for `score` and `rationale` outputs

| **Aspect** | **F1** | **Precision** | **Recall** | **Accuracy** | **CKS** | **RMSE** |
|:----------:|:------:|:-------------:|:----------:|:------------:|:-------:|:--------:|
| Score      | 0.3614 | 0.4496        | 0.3939     | 0.4000       | 0.0789  | 1.0392   |
| Rationale  | 0.6335 | 0.6381        | 0.6333     | --           | --      | --       |
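
For reference, these metrics can be computed roughly as follows; the `"weighted"` averaging and the toy inputs are assumptions, not the exact evaluation code:

```python
# Metric computation sketch using scikit-learn and bert-score.
from sklearn.metrics import (accuracy_score, cohen_kappa_score, f1_score,
                             precision_score, recall_score, root_mean_squared_error)
from bert_score import score as bert_score

y_true = [3, 1, 2, 0]  # gold scores (toy values)
y_pred = [3, 2, 2, 1]  # model scores (toy values)

print("accuracy :", accuracy_score(y_true, y_pred))
print("f1       :", f1_score(y_true, y_pred, average="weighted"))
print("precision:", precision_score(y_true, y_pred, average="weighted", zero_division=0))
print("recall   :", recall_score(y_true, y_pred, average="weighted"))
print("cks      :", cohen_kappa_score(y_true, y_pred))
print("rmse     :", root_mean_squared_error(y_true, y_pred))

# BERT-Score for generated rationales against reference rationales.
refs  = ["Correct definition, but one required detail is missing."]
cands = ["The answer defines the concept yet omits one detail."]
P, R, F1 = bert_score(cands, refs, lang="en")
print("bert-f1  :", F1.mean().item())
```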
97 |
|
|
|
98 |
|
99 |
+
## Usage
|
100 |
|
101 |
+
Below is an example of how to use the model with the Hugging Face Transformers library:
|
102 |
|
103 |
+
```python
|
104 |
+
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline
|
105 |
+
import torch
|
106 |
|
|
|
107 |
|
108 |
+
checkpoint = "IsmaelMousa/SmolLM2-1.7B-Instruct-EngSaf-429K"
|
109 |
+
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
|
110 |
|
111 |
+
tokenizer = AutoTokenizer .from_pretrained(checkpoint)
|
112 |
+
model = AutoModelForCausalLM.from_pretrained(checkpoint)
|
113 |
|
114 |
+
assistant = pipeline("text-generation", tokenizer=tokenizer, model=model, device=device)
|
115 |
|
116 |
+
question = input("Question : ")
|
117 |
+
reference_answer = input("Reference Answer: ")
|
118 |
+
student_answer = input("Student Answer : ")
|
119 |
+
mark_scheme = input("Mark Scheme : ")
|
120 |
|
121 |
+
system_content = "You are a grading assistant. Evaluate student answers based on the mark scheme. Respond only in JSON format with keys 'score' (int) and 'rationale' (string)."
|
122 |
|
123 |
+
user_content = ("Provide both a score and a rationale by evaluating the student's answer strictly within the mark scheme range,"
|
124 |
+
" grading based on how well it meets the question's requirements by comparing the student answer to the reference answer.\n"
|
125 |
+
f"Question: {question}\n"
|
126 |
+
f"Reference Answer: {reference_answer}\n"
|
127 |
+
f"Student Answer: {student_answer}\n"
|
128 |
+
f"Mark Scheme: {mark_scheme}")
|
129 |
|
130 |
+
messages = [{"role": "system", "content": system_content}, {"role": "user", "content": user_content}]
|
131 |
|
132 |
+
inputs = tokenizer.apply_chat_template(messages, tokenize=False)
|
133 |
|
134 |
+
output = assistant(inputs, max_new_tokens=128, do_sample=False, return_full_text=False)[0]["generated_text"]
|
135 |
|
136 |
+
print(output)
|
137 |
+
```
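
Because the model is instructed to reply in JSON but generations can be slightly malformed, the output is best parsed defensively, for example with `json-repair` (listed under Frameworks below). This continues the example above:

```python
# Parse the generated JSON defensively; repair_json fixes minor malformations
# (single quotes, trailing commas, truncated braces) before loading.
import json
from json_repair import repair_json

result = json.loads(repair_json(output))  # `output` comes from the usage example above
print(result["score"], result["rationale"])
```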

### Frameworks

- `datasets-3.6.0`
- `torch-2.7.0`
- `transformers-4.51.3`
- `trl-0.17.0`
- `scikit-learn-1.6.1`
- `bert-score-0.3.13`
- `json-repair-0.46.0`
|