---
library_name: transformers
language:
- en
- ko
pipeline_tag: translation
tags:
- llama-3-ko
license: mit
datasets:
- 4yo1/llama3_enkor_testing_short
---
### Model Card for LLaMA3-ENG-KO-8B
### Model Details
Model overview for LLaMA3-ENG-KO-8B with fine-tuning:
- Model Name: LLaMA3-ENG-KO-8B
- Model Type: Transformer-based Language Model
- Model Size: 8 billion parameters
- Developed by: 4yo1
- Languages: English and Korean
### Model Description
LLaMA3-ENG-KO-8B is a language model pre-trained on a diverse corpus of English and Korean texts and then fine-tuned for English-Korean use. The fine-tuning approach adds only a minimal number of additional parameters, which allows the model to adapt to specific tasks or datasets efficiently and effectively for specialized applications.
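The card does not state which parameter-efficient method was used, so as a purely illustrative sketch, the kind of low-overhead fine-tuning described above could be set up with LoRA adapters from the `peft` library; the rank, target modules, and other settings below are assumptions, not the author's published recipe.

```python
# Hypothetical LoRA fine-tuning setup (assumed method, not confirmed by the card)
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("4yo1/llama3-eng-ko-8b")
tokenizer = AutoTokenizer.from_pretrained("4yo1/llama3-eng-ko-8b")

# LoRA inserts small low-rank matrices into the attention projections,
# so only a few million parameters are trained instead of all 8B.
lora_config = LoraConfig(
    r=16,                                # assumed rank
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"], # assumed target layers
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # reports the small trainable fraction
```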
### How to Use - Sample Code
```python
from transformers import AutoConfig, AutoModel, AutoTokenizer

# load the configuration, model weights, and tokenizer from the Hugging Face Hub
config = AutoConfig.from_pretrained("4yo1/llama3-eng-ko-8b")
model = AutoModel.from_pretrained("4yo1/llama3-eng-ko-8b")
tokenizer = AutoTokenizer.from_pretrained("4yo1/llama3-eng-ko-8b")
```
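For text generation, such as English-to-Korean translation, the model would typically be loaded with `AutoModelForCausalLM` instead of `AutoModel` so the language-modeling head is available. A minimal sketch follows; the prompt format is an assumption, since the card does not specify one.

```python
# Hypothetical translation example; the prompt template is an assumption.
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("4yo1/llama3-eng-ko-8b")
tokenizer = AutoTokenizer.from_pretrained("4yo1/llama3-eng-ko-8b")

prompt = "Translate the following English sentence into Korean:\nThe weather is nice today.\n"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```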