# kgrammar-2-9b

kgrammar-2-9b is a state-of-the-art model for evaluating Korean sentences, focused on identifying responses that drift into a different language or mix multiple languages within a sentence. Built on the gemma-2-9b architecture, it aims to provide accurate assessments of language consistency and clarity in Korean text.
## Model Details
- Model Name: kgrammar-2-9b
- Base Model: google/gemma-2-9b-it
- Fine-Tuning Techniques: Supervised Fine-Tuning (SFT) and Direct Preference Optimization (DPO)
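The card lists SFT and DPO but does not publish the training code. The following is a minimal, hypothetical sketch of the DPO stage using TRL's `DPOTrainer`, assuming a preference dataset with `prompt`/`chosen`/`rejected` columns; the dataset path and hyperparameters are placeholders, not the authors' actual setup.

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base_id = "google/gemma-2-9b-it"
model = AutoModelForCausalLM.from_pretrained(base_id)
tokenizer = AutoTokenizer.from_pretrained(base_id)

# Hypothetical preference data with "prompt", "chosen", "rejected" columns;
# the actual kgrammar training data has not been released.
dataset = load_dataset("json", data_files="preferences.jsonl", split="train")

config = DPOConfig(output_dir="kgrammar-dpo", beta=0.1, per_device_train_batch_size=1)
trainer = DPOTrainer(
    model=model,                 # a frozen reference model is created automatically
    args=config,
    train_dataset=dataset,
    processing_class=tokenizer,  # older TRL releases use tokenizer= instead
)
trainer.train()
```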
## Benchmarks and Dataset

kgrammar-2-9b leverages the custom-built ko-bench dataset, which draws inspiration from MT-Bench but is tailored specifically to Korean-language assessment. The dataset spans a wide range of user scenarios to effectively evaluate key capabilities such as multi-turn conversation ability and instruction adherence.
## Usage Application Form

To use this model, please complete the application form below and submit it via email [[email protected]]. Access will be granted after your application has been reviewed and approved. We appreciate your cooperation and look forward to assisting you.
1. **Name:**
- (e.g., John Doe)
2. **Date of Birth:**
- (e.g., January 1, 1990)
3. **Affiliation:**
- Are you applying as a company or an individual? [ ] Company [ ] Individual
- Company Name (if applicable):
- Department (if applicable):
4. **Position/Role:**
- (e.g., Data Scientist, Researcher, etc.)
5. **Contact Information:**
- Email:
- Phone Number:
6. **Purpose of Use:**
- (e.g., Research and Development, Commercial use, Educational purposes, etc.)
7. **Detailed Reason for Use:**
- 1. Name and version of the model you wish to use:
- 2. Reason for selecting this model:
- 3. Objectives to achieve using this model:
- 4. Expected use cases (please describe in as much detail as possible):
8. **Data Security and Ethical Use Plan:**
- (Please describe your plans for data protection and ethical use.)
## Usage

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "davidkim205/kgrammar-2-9b"
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Load the model in 4-bit precision for memory efficiency
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=BitsAndBytesConfig(load_in_4bit=True),
    device_map="auto",
)

# Instruction (Korean). Translation: "Find the parts that are unnatural in the
# Korean context. The erroneous sentences and their count are wrapped in
# <incorrect grammar> </incorrect grammar> tags, i.e. <incorrect grammar>
# - erroneous sentence and explanation </incorrect grammar>, and in
# <wrong count> </wrong count> tags, i.e. <wrong count> number of errors </wrong count>."
prompt = "한국어 문맥상 부자연스러운 부분을 찾으시오. 오류 문장과 개수는 <incorrect grammar> </incorrect grammar> tag, 즉 <incorrect grammar> - 오류 문장과 설명 </incorrect grammar> 안에 담겨 있으며, <wrong count> </wrong count> tag, 즉 <wrong count> 오류 개수 </wrong count> 이다."

# Sample text to evaluate. It deliberately mixes the Spanish word "libros" into
# a Korean sentence. Translation: "Below, she bought libros at the first store
# for $19.6923077 per book (1280 ÷ 65). Since her average price per book is $18,
# she bought books at the second store for a total of $907.5 (18 × 120 - 1280 = 907.5)."
text = "아래는 첫 번째 상점에서 libros를 책당 $19.6923077에 구입했습니다 (1280 ÷ 65). 그녀의 평균 책당 가격은 $18이기 때문에 두 번째 상점에서 책을 총 $907.5에 구입했습니다 (18 × 120 - 1280 = 907.5)."

conversation = [
    {"role": "system", "content": ""},
    {"role": "user", "content": prompt + text},
]

formatted_conversation = tokenizer.apply_chat_template(
    conversation, tokenize=False, add_generation_prompt=True
)
inputs = tokenizer(formatted_conversation, return_tensors="pt", add_special_tokens=False)
inputs = {key: tensor.to(model.device) for key, tensor in inputs.items()}

with torch.no_grad():
    # Sample a response; do_sample=True so the temperature setting takes effect
    outputs = model.generate(**inputs, max_new_tokens=4096, temperature=0.7, do_sample=True)

response = tokenizer.decode(
    outputs[0][inputs["input_ids"].size(1):], skip_special_tokens=True
)
print(response)
```
Example output:

```
<incorrect grammar>
- "libros"는 스페인어로 "책"을 의미하며, 문맥상 한국어로 대체할 필요가 있습니다.
</incorrect grammar> <wrong count>1</wrong count>
```

(Translation: "libros" is Spanish for "book" and needs to be replaced with Korean to fit the context.)
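Because the verdict is wrapped in the tags described in the prompt, it can be recovered with a small parser. The sketch below is a minimal example, assuming the response follows the tag format exactly; `parse_kgrammar_output` is an illustrative helper, not part of the released code.

```python
import re

def parse_kgrammar_output(response: str):
    """Extract the tagged explanations and the error count from a kgrammar reply."""
    errors = re.findall(
        r"<incorrect grammar>(.*?)</incorrect grammar>", response, re.DOTALL
    )
    count_match = re.search(r"<wrong count>\s*(\d+)\s*</wrong count>", response)
    count = int(count_match.group(1)) if count_match else None
    return [e.strip() for e in errors], count

# e.g. with the response generated above:
# explanations, count = parse_kgrammar_output(response)
```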
## Evaluation

### Diff

The `diff` refers to the difference between the label score and the predicted score. The `wrong` count is the number of answers that do not match the required format, while `length` is the total number of test items. The remaining numbered columns give the count and percentage of items at each label-versus-prediction difference value.
The score is calculated by:
- computing the absolute difference between the label and the predicted score for each pair,
- assigning a full point for a difference of 0 and half a point for a difference of 1,
- summing all points and dividing by the number of data points.
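This scoring rule can be written out as a short function. The following is a minimal sketch, not the authors' published evaluation script; `diff_score` and its arguments are illustrative names.

```python
def diff_score(labels, predictions):
    """Score label/prediction pairs: 1 point for an exact match, 0.5 for off-by-one."""
    points = 0.0
    for label, pred in zip(labels, predictions):
        diff = abs(label - pred)
        if diff == 0:
            points += 1.0   # exact match: full point
        elif diff == 1:
            points += 0.5   # off by one: half point
    return points / len(labels)

# e.g. diff_score([2, 0, 1, 3], [2, 1, 1, 0]) == (1 + 0.5 + 1 + 0) / 4 == 0.625
```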
| model | wrong | score | length | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| kgrammar-2-9b | 0 (0.0%) | 80.0% | 80 | 54 (67.5%) | 20 (25.0%) | 4 (5.0%) | 1 (1.2%) | 0 | 1 (1.2%) | 0 | 0 | 0 | 0 | 0 | 0 |
| kgrammar-2-3b | 0 (0.0%) | 76.2% | 80 | 52 (65.0%) | 18 (22.5%) | 5 (6.2%) | 2 (2.5%) | 1 (1.2%) | 1 (1.2%) | 0 | 0 | 0 | 1 (1.2%) | 0 | 0 |
| kgrammar-2-1b | 1 (1.2%) | 71.9% | 80 | 50 (62.5%) | 15 (18.8%) | 6 (7.5%) | 5 (6.2%) | 0 | 2 (2.5%) | 0 | 0 | 1 (1.2%) | 0 | 0 | 0 |
### Accuracy

The `score` column represents the ratio of correctly predicted labels to the total number of data points. The `wrong` column shows the count and percentage of incorrectly formatted answers. The columns labeled "0" through "11" give, for each label, the number of correct predictions and the percentage of that label's instances the model predicted correctly.
| model | wrong | score | length | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| kgrammar-2-9b | 0 (0.0%) | 67.5% | 80 | 35 (97.2%) | 6 (66.7%) | 8 (53.3%) | 1 (33.3%) | 4 (44.4%) | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| kgrammar-2-3b | 0 (0.0%) | 65.0% | 80 | 35 (97.2%) | 2 (22.2%) | 8 (53.3%) | 1 (33.3%) | 3 (33.3%) | 1 (100.0%) | 0 | 0 | 1 (50.0%) | 1 (100.0%) | 0 | 0 |
| kgrammar-2-1b | 1 (1.2%) | 62.5% | 80 | 34 (94.4%) | 5 (55.6%) | 6 (40.0%) | 1 (33.3%) | 2 (22.2%) | 0 | 1 (50.0%) | 0 | 1 (50.0%) | 0 | 0 | 0 |
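A minimal sketch of how such a breakdown can be computed, assuming lists of integer labels and predictions (the names are illustrative, not the authors' evaluation code):

```python
from collections import Counter

def label_accuracy(labels, predictions):
    """Overall exact-match accuracy plus a per-label breakdown."""
    correct, total = Counter(), Counter()
    for label, pred in zip(labels, predictions):
        total[label] += 1
        if label == pred:
            correct[label] += 1
    overall = sum(correct.values()) / len(labels)
    per_label = {k: correct[k] / total[k] for k in sorted(total)}
    return overall, per_label
```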
### Error Detection Accuracy

This metric evaluates a model's error detection performance by measuring how well it identifies the presence or absence of errors. Unlike conventional accuracy, it counts a prediction as correct when it agrees with the label on whether any error exists, rather than requiring the exact label to match.
| model | score | wrong | length |
|---|---|---|---|
| kgrammar-2-9b | 95% | 0 | 80 |
| kgrammar-2-3b | 93.8% | 0 | 80 |
| kgrammar-2-1b | 93.7% | 1 | 79 |
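A minimal sketch of this binary metric, assuming labels and predictions are error counts (illustrative names, not the published evaluation code):

```python
def error_detection_accuracy(label_counts, predicted_counts):
    """Share of items where the model agrees with the label on whether any error is present."""
    correct = sum(
        (label > 0) == (pred > 0)
        for label, pred in zip(label_counts, predicted_counts)
    )
    return correct / len(label_counts)

# e.g. labels [0, 2, 1] vs predictions [0, 1, 0] -> 2/3 accuracy
```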