[Llama-3.1-8B-EZO-1.1-it] Model Card

Model Information

ใ“ใฎใƒขใƒ‡ใƒซใฏใ€Meta AI ใฎ Llama 3.1 ใ‚’ใƒ™ใƒผใ‚นใซใ€ๆ—ฅๆœฌ่ชžใ‚ฟใ‚นใ‚ฏใงใฎๆ€ง่ƒฝใ‚’ๅ‘ไธŠใ•ใ›ใ‚‹ใŸใ‚ใซใƒ•ใ‚กใ‚คใƒณใƒใƒฅใƒผใƒ‹ใƒณใ‚ฐใ‚’่กŒใฃใŸใ‚‚ใฎใงใ™ใ€‚ ใƒ™ใƒผใ‚นใจใชใ‚‹Llama-3.1-8B-Instructใ‹ใ‚‰ๅคงๅน…ใชๆ—ฅๆœฌ่ชžๆ€ง่ƒฝๅ‘ไธŠใ‚’้”ๆˆใ—ใพใ—ใŸใ€‚

This model is based on Meta AI's Llama 3.1, fine-tuned to improve performance on Japanese tasks. It achieves a significant Japanese-language performance improvement over the base Llama-3.1-8B-Instruct.

Legal Notice

This model is subject to the Llama 3.1 Community License Agreement. For detailed information, please refer to the official Llama license page: Llama 3.1 License

ใ“ใฎใƒขใƒ‡ใƒซใฏ Llama 3.1 Community License Agreement ใซๅพ“ใ„ใพใ™ใ€‚่ฉณ็ดฐใซใคใ„ใฆใฏใ€Llama ใฎๅ…ฌๅผใƒฉใ‚คใ‚ปใƒณใ‚นใƒšใƒผใ‚ธใ‚’ใ”ๅ‚็…งใใ ใ•ใ„ใ€‚

Usage

import transformers
import torch

model_id = "HODACHI/Llama-3.1-8B-EZO-1.1-it"

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

messages = [
    # System prompt: "You are a sincere and excellent Japanese assistant.
    # Unless instructed otherwise, respond in Japanese as a rule."
    {"role": "system", "content": "ใ‚ใชใŸใฏ่ช ๅฎŸใงๅ„ช็ง€ใชๆ—ฅๆœฌไบบใฎใ‚ขใ‚ทใ‚นใ‚ฟใƒณใƒˆใงใ™ใ€‚็‰นใซๆŒ‡็คบใŒ็„กใ„ๅ ดๅˆใฏใ€ๅŽŸๅ‰‡ๆ—ฅๆœฌ่ชžใงๅ›ž็ญ”ใ—ใฆใใ ใ•ใ„ใ€‚"},
    # User prompt: "List five ideas for regaining enthusiasm for work."
    {"role": "user", "content": "ไป•ไบ‹ใฎ็†ฑๆ„ใ‚’ๅ–ใ‚Šๆˆปใ™ใŸใ‚ใฎใ‚ขใ‚คใƒ‡ใ‚ขใ‚’5ใคๆŒ™ใ’ใฆใใ ใ•ใ„ใ€‚"},
]

outputs = pipeline(
    messages,
    max_new_tokens=512,
)
print(outputs[0]["generated_text"][-1])
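The chat pipeline returns the whole conversation under `generated_text`, so the last element is the assistant's reply. A minimal sketch of pulling out just the reply text (the structure below is illustrative placeholder data, not real model output):

```python
# Illustrative shape of `outputs` as returned by the chat pipeline above.
# The message contents here are placeholders, not real model output.
outputs = [
    {
        "generated_text": [
            {"role": "system", "content": "(system prompt)"},
            {"role": "user", "content": "(user prompt)"},
            {"role": "assistant", "content": "(model reply)"},
        ]
    }
]

# The conversation is returned in order, so the reply is the last message.
reply = outputs[0]["generated_text"][-1]
assert reply["role"] == "assistant"
print(reply["content"])  # -> (model reply)
```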

ใƒ™ใƒณใƒใƒžใƒผใ‚ฏ็ตๆžœ / Benchmark Results

(Benchmark results chart)

ๅˆถ้™ไบ‹้ …ใจๅ€ซ็†็š„่€ƒๆ…ฎไบ‹้ … / Limitations and Ethical Considerations

ๆœฌใƒขใƒ‡ใƒซใฏใ€Llama 3.1ใ‚’ใƒ™ใƒผใ‚นใซใ—ใฆใ„ใ‚‹ใŸใ‚ใ€ๅŒๆง˜ใฎๅˆถ้™ไบ‹้ …ใจๅ€ซ็†็š„่€ƒๆ…ฎไบ‹้ …ใŒ้ฉ็”จใ•ใ‚Œใพใ™๏ผš

  1. ไบˆๆธฌไธๅฏ่ƒฝใชๅ‡บๅŠ›: ๅ…จใฆใฎLLMใจๅŒๆง˜ใซใ€ๆœฌใƒขใƒ‡ใƒซใฎๆฝœๅœจ็š„ใชๅ‡บๅŠ›ใ‚’ไบ‹ๅ‰ใซไบˆๆธฌใ™ใ‚‹ใ“ใจใฏใงใใพใ›ใ‚“ใ€‚ๅ ดๅˆใซใ‚ˆใฃใฆใฏใ€ไธๆญฃ็ขบใ€ๅ่ฆ‹ใฎใ‚ใ‚‹ใ€ใ‚ใ‚‹ใ„ใฏๅ•้กŒใฎใ‚ใ‚‹ๅฟœ็ญ”ใ‚’็”Ÿๆˆใ™ใ‚‹ๅฏ่ƒฝๆ€งใŒใ‚ใ‚Šใพใ™ใ€‚

  2. ๅฎ‰ๅ…จๆ€งใƒ†ใ‚นใƒˆใฎๅฟ…่ฆๆ€ง: ้–‹็™บ่€…ใฏใ€ๆœฌใƒขใƒ‡ใƒซใ‚’็”จใ„ใŸใ‚ขใƒ—ใƒชใ‚ฑใƒผใ‚ทใƒงใƒณใ‚’ใƒ‡ใƒ—ใƒญใ‚คใ™ใ‚‹ๅ‰ใซใ€็‰นๅฎšใฎใ‚ขใƒ—ใƒชใ‚ฑใƒผใ‚ทใƒงใƒณใซๅˆใ‚ใ›ใŸๅฎ‰ๅ…จๆ€งใƒ†ใ‚นใƒˆใจใƒใƒฅใƒผใƒ‹ใƒณใ‚ฐใ‚’ๅฎŸๆ–ฝใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚

  3. ใƒžใƒซใƒใƒชใƒณใ‚ฌใƒซๅฏพๅฟœ: ๆœฌใƒขใƒ‡ใƒซใฏ่ค‡ๆ•ฐใฎ่จ€่ชžใ‚’ใ‚ตใƒใƒผใƒˆใ—ใฆใ„ใพใ™ใŒใ€ใ‚ตใƒใƒผใƒˆใ•ใ‚Œใฆใ„ใชใ„่จ€่ชžใงใฎไฝฟ็”จใฏๆŽจๅฅจใ•ใ‚Œใพใ›ใ‚“ใ€‚ใ‚ตใƒใƒผใƒˆใ•ใ‚Œใฆใ„ใชใ„่จ€่ชžใงไฝฟ็”จใ™ใ‚‹ๅ ดๅˆใฏใ€้ฉๅˆ‡ใชๆ–น้‡ใซๆฒฟใฃใŸใƒ•ใ‚กใ‚คใƒณใƒใƒฅใƒผใƒ‹ใƒณใ‚ฐใจใ‚ทใ‚นใƒ†ใƒ ๅˆถๅพกใ‚’ๅฎŸ่ฃ…ใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚

  4. ๆ–ฐใ—ใ„ๆŠ€่ก“ใจใ—ใฆใฎใƒชใ‚นใ‚ฏ: ๆœฌใƒขใƒ‡ใƒซใฏๆ–ฐใ—ใ„ๆŠ€่ก“ใงใ‚ใ‚Šใ€ไป–ใฎๆ–ฐๆŠ€่ก“ใจๅŒๆง˜ใซใ€ใใฎไฝฟ็”จใซใฏใƒชใ‚นใ‚ฏใŒไผดใ„ใพใ™ใ€‚ใ“ใ‚Œใพใงใฎใƒ†ใ‚นใƒˆใงใฏใ™ในใฆใฎใ‚ทใƒŠใƒชใ‚ชใ‚’ใ‚ซใƒใƒผใ—ใฆใ„ใชใ„ๅฏ่ƒฝๆ€งใŒใ‚ใ‚Šใพใ™ใ€‚

  5. ็ถ™็ถš็š„ใชๆ”นๅ–„ใฎๅฟ…่ฆๆ€ง: ใ‚ณใƒŸใƒฅใƒ‹ใƒ†ใ‚ฃใ‹ใ‚‰ใฎใƒ•ใ‚ฃใƒผใƒ‰ใƒใƒƒใ‚ฏใ‚„ๅ ฑๅ‘Šใƒกใ‚ซใƒ‹ใ‚บใƒ ใ‚’้€šใ˜ใฆใ€ใƒขใƒ‡ใƒซใฎ็ถ™็ถš็š„ใชๆ”นๅ–„ใŒๅฟ…่ฆใงใ™ใ€‚

้–‹็™บ่€…ใจไฝฟ็”จ่€…ใฏใ€ใ“ใ‚Œใ‚‰ใฎๅˆถ้™ไบ‹้ …ใ‚’่ช่ญ˜ใ—ใ€่ฒฌไปปใ‚ใ‚‹ไฝฟ็”จใ‚’ๅฟƒใŒใ‘ใ‚‹ใ“ใจใŒ้‡่ฆใงใ™ใ€‚่ฉณ็ดฐใซใคใ„ใฆใฏใ€Llama 3.1ใฎResponsible Use Guideใ‚’ๅ‚็…งใ—ใฆใใ ใ•ใ„ใ€‚

This model, being based on Llama 3.1, carries similar limitations and ethical considerations:

  1. Unpredictable Outputs: Like all LLMs, this model's potential outputs cannot be predicted in advance. It may sometimes generate inaccurate, biased, or problematic responses.

  2. Need for Safety Testing: Developers should perform safety testing and tuning tailored to their specific applications before deploying any applications using this model.

  3. Multilingual Considerations: While this model supports multiple languages, use in non-supported languages is not recommended without implementing fine-tuning and system controls aligned with appropriate policies.

  4. Risks as New Technology: This model represents new technology and, like any new technology, there are risks associated with its use. Testing to date may not have covered all scenarios.

  5. Need for Continuous Improvement: Continuous improvement of the model is necessary through community feedback and reporting mechanisms.

It's crucial for developers and users to be aware of these limitations and strive for responsible use. For more information, please refer to the Llama 3.1 Responsible Use Guide.

[Model Data]

[Training Dataset]

We extracted high-quality data from Japanese Wikipedia and FineWeb to create instruction data. Our innovative training approach allows for performance improvements across various languages and domains, making the model suitable for global use despite its focus on Japanese data.

ๆ—ฅๆœฌ่ชžใฎWikiใƒ‡ใƒผใ‚ฟใŠใ‚ˆใณใ€FineWebใ‹ใ‚‰่‰ฏ่ณชใชใƒ‡ใƒผใ‚ฟใฎใฟใ‚’ๆŠฝๅ‡บใ—ใ€Instructionใƒ‡ใƒผใ‚ฟใ‚’ไฝœๆˆใ—ใพใ—ใŸใ€‚ใ“ใฎใƒขใƒ‡ใƒซใงใฏๆ—ฅๆœฌ่ชžใซ็‰นๅŒ–ใ•ใ›ใฆใ„ใพใ™ใŒใ€ไธ–็•Œไธญใฎใฉใ‚“ใชใƒฆใƒผใ‚นใ‚ฑใƒผใ‚นใงใ‚‚ๅˆฉ็”จๅฏ่ƒฝใชใ‚ขใƒ—ใƒญใƒผใƒใงใ™ใ€‚

https://huggingface.co/datasets/legacy-datasets/wikipedia
https://huggingface.co/datasets/HuggingFaceFW/fineweb

Data Preprocessing

We used a plain instruction tuning method combined with QLoRA to train the model on exemplary responses. This approach enhances the model's ability to understand and generate high-quality responses across various languages and contexts.
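The exact fine-tuning configuration is not published in this card. As an illustration only, a typical QLoRA setup with Hugging Face `peft` and `bitsandbytes` looks roughly like the sketch below; every hyperparameter shown is an assumption, not a value used for this model:

```python
import torch
from transformers import BitsAndBytesConfig
from peft import LoraConfig

# 4-bit NF4 quantization of the frozen base model (the memory-saving half of QLoRA).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Small trainable LoRA adapters on the attention projections.
# r, alpha, dropout, and target_modules here are illustrative defaults.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
```

In such a setup, `bnb_config` would be passed as `quantization_config` to `AutoModelForCausalLM.from_pretrained`, and the model wrapped with `peft.get_peft_model(model, lora_config)` before instruction tuning on (prompt, exemplary response) pairs.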

Implementation Information

[Pre-Instruction Training]

https://huggingface.co/instruction-pretrain/instruction-synthesizer

[Disclaimer]

ใ“ใฎใƒขใƒ‡ใƒซใฏ็ ”็ฉถ้–‹็™บใฎใฟใ‚’็›ฎ็š„ใจใ—ใฆๆไพ›ใ•ใ‚Œใ‚‹ใ‚‚ใฎใงใ‚ใ‚Šใ€ๅฎŸ้จ“็š„ใชใƒ—ใƒญใƒˆใ‚ฟใ‚คใƒ—ใจใฟใชใ•ใ‚Œใ‚‹ในใใƒขใƒ‡ใƒซใงใ™ใ€‚ ๅ•†ๆฅญ็š„ใชไฝฟ็”จใ‚„ใƒŸใƒƒใ‚ทใƒงใƒณใ‚ฏใƒชใƒ†ใ‚ฃใ‚ซใƒซใช็’ฐๅขƒใธใฎ้…ๅ‚™ใ‚’ๆ„ๅ›ณใ—ใŸใ‚‚ใฎใงใฏใ‚ใ‚Šใพใ›ใ‚“ใ€‚ ๆœฌใƒขใƒ‡ใƒซใฎไฝฟ็”จใฏใ€ไฝฟ็”จ่€…ใฎ่ฒฌไปปใซใŠใ„ใฆ่กŒใ‚ใ‚Œใ‚‹ใ‚‚ใฎใจใ—ใ€ใใฎๆ€ง่ƒฝใŠใ‚ˆใณ็ตๆžœใฏไฟ่จผใ•ใ‚Œใพใ›ใ‚“ใ€‚ Axcxeptๆ ชๅผไผš็คพใฏใ€็›ดๆŽฅ็š„ใ€้–“ๆŽฅ็š„ใ€็‰นๅˆฅใ€ๅถ็™บ็š„ใ€็ตๆžœ็š„ใชๆๅฎณใ€ใพใŸใฏๆœฌใƒขใƒ‡ใƒซใฎไฝฟ็”จใ‹ใ‚‰็”Ÿใ˜ใ‚‹ใ„ใ‹ใชใ‚‹ๆๅคฑใซๅฏพใ—ใฆใ‚‚ใ€ๅพ—ใ‚‰ใ‚ŒใŸ็ตๆžœใซใ‹ใ‹ใ‚ใ‚‰ใšใ€ไธ€ๅˆ‡ใฎ่ฒฌไปปใ‚’่ฒ ใ„ใพใ›ใ‚“ใ€‚ ๅˆฉ็”จ่€…ใฏใ€ๆœฌใƒขใƒ‡ใƒซใฎไฝฟ็”จใซไผดใ†ใƒชใ‚นใ‚ฏใ‚’ๅๅˆ†ใซ็†่งฃใ—ใ€่‡ชๅทฑใฎๅˆคๆ–ญใงไฝฟ็”จใ™ใ‚‹ใ‚‚ใฎใจใ—ใพใ™ใ€‚

[Hardware]

1 ร— NVIDIA H100 (training ran for about 8 hours)

Credits

This model is based on Meta AI's Llama 3.1. We acknowledge and thank the Meta AI team for their work on the base model.

ใ“ใฎใƒขใƒ‡ใƒซใฏ Meta AI ใฎ Llama 3.1 ใ‚’ใƒ™ใƒผใ‚นใซใ—ใฆใ„ใพใ™ใ€‚ใƒ™ใƒผใ‚นใƒขใƒ‡ใƒซใฎ้–‹็™บใซๆบใ‚ใฃใŸ Meta AI ใƒใƒผใƒ ใซๆ„Ÿ่ฌใจๅฐŠๆ•ฌใฎๆ„ใ‚’่กจใ—ใพใ™ใ€‚

[We are.]

Axcxept logo

Model size: 8.03B parameters · Tensor type: BF16 (Safetensors)