DNA 1.0 8B Instruct was fine-tuned on approximately 7B tokens of carefully curated data and has undergone extensive instruction tuning to enhance its ability to follow complex instructions and engage in natural conversations.

For more details, please refer to our [Technical Report](https://arxiv.org/abs/2501.10648).

- **Developed by:** Dnotitia Inc.
- **Supported Languages:** Korean, English
- **Model Release Date:** Dec 10, 2024
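As a quickstart, the model can be loaded with the standard Hugging Face `transformers` chat workflow. This is a minimal sketch, not the official usage snippet from the model card: the system prompt and generation settings are placeholder assumptions, and only the model ID (`dnotitia/Llama-DNA-1.0-8B-Instruct`, taken from the evaluation table below) is from this document.

```python
# Minimal usage sketch for DNA 1.0 8B Instruct, assuming the standard
# `transformers` chat-template API. Generation settings and the system
# prompt are illustrative assumptions, not official recommendations.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "dnotitia/Llama-DNA-1.0-8B-Instruct"


def build_messages(user_prompt: str) -> list[dict]:
    """Build a chat-format message list for the instruct model."""
    return [
        # Placeholder system prompt -- adjust for your use case.
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": user_prompt},
    ]


def generate(user_prompt: str, max_new_tokens: int = 256) -> str:
    """Download the checkpoint (several GB) and run one generation."""
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")
    # Render the messages with the model's own chat template.
    inputs = tokenizer.apply_chat_template(
        build_messages(user_prompt),
        add_generation_prompt=True,
        return_tensors="pt",
    ).to(model.device)
    output = model.generate(inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the prompt.
    return tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True)
```

Calling `generate("한국의 수도는 어디인가요?")` would then return the model's Korean-language answer; note that loading the 8B checkpoint requires a GPU with sufficient memory (or `device_map="auto"` offloading).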
## Evaluation

We evaluated DNA 1.0 8B Instruct against other prominent language models of similar size across various benchmarks, including Korean-specific tasks and general language understanding metrics.

| Language | Benchmark | **dnotitia/Llama-DNA-1.0-8B-Instruct** | LGAI-EXAONE/EXAONE-3.5-7.8B-Instruct | LGAI-EXAONE/EXAONE-3.0-7.8B-Instruct | yanolja/EEVE-Korean-Instruct-10.8B-v1.0 | Qwen/Qwen2.5-7B-Instruct | meta-llama/Llama-3.1-8B-Instruct | mistralai/Mistral-7B-Instruct-v0.3 | NCSOFT/Llama-VARCO-8B-Instruct | upstage/SOLAR-10.7B-Instruct-v1.0 |
|----------|------------|----------------------------------------|--------------------------------------|--------------------------------------|-----------------------------------------|--------------------------|----------------------------------|------------------------------------|--------------------------------|-----------------------------------|
If you use or discuss this model in your academic research, please cite the project to help spread awareness:

```
@misc{lee2025dna10technicalreport,
      title={DNA 1.0 Technical Report},
      author={Jungyup Lee and Jemin Kim and Sang Park and SeungJae Lee},
      year={2025},
      eprint={2501.10648},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2501.10648},
}
```