Model Card for Gonsoo/AWS-HF-optimum-neuron-0-0-28-llama-3-Korean-Bllossom-8B
This model is a compiled version of the Korean fine-tuned model MLP-KTLim/llama-3-Korean-Bllossom-8B, built with HF Optimum Neuron 0.0.28 (AWS Neuron SDK 2.20.2). The original model is available at https://huggingface.co/MLP-KTLim/llama-3-Korean-Bllossom-8B. This compiled version is intended for deployment on Amazon EC2 Inferentia2 instances and Amazon SageMaker. For detailed information about the model and its license, please refer to the original MLP-KTLim/llama-3-Korean-Bllossom-8B model page.
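Below is a minimal sketch (not an official snippet) of loading the pre-compiled artifacts on an Inf2 instance with optimum-neuron. It assumes the tokenizer files are included in this repository and that optimum-neuron 0.0.28 is installed; the prompt is only an illustrative example.

```python
# Sketch: run the pre-compiled model on an Inf2 instance with optimum-neuron.
from transformers import AutoTokenizer
from optimum.neuron import NeuronModelForCausalLM

model_id = "Gonsoo/AWS-HF-optimum-neuron-0-0-28-llama-3-Korean-Bllossom-8B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# Loads the already-compiled Neuron artifacts; no recompilation is needed.
model = NeuronModelForCausalLM.from_pretrained(model_id)

prompt = "서울의 유명한 관광 코스를 만들어 줄래?"  # example Korean prompt
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```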
Model Details
This model was compiled with HF Optimum Neuron 0.0.28 and neuronx-cc 2.15.143, using the v1.2-hf-tgi-0.0.28-pt-2.1.2-inf-neuronx-py310 image. For a step-by-step deployment guide, please refer to https://github.com/aws-samples/aws-ai-ml-workshop-kr/tree/master/neuron/hf-optimum/04-Deploy-Qwen-25-8B-Llama3-8B-HF-TGI-Docker-On-INF2
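For reference, the following sketch shows how a comparable export is typically done with optimum-neuron. The batch size, sequence length, number of Neuron cores, and cast type shown here are assumptions for illustration, not the recorded settings used to produce this artifact.

```python
# Sketch: export/compile the original model for Neuron with optimum-neuron.
from optimum.neuron import NeuronModelForCausalLM

compiler_args = {"num_cores": 2, "auto_cast_type": "fp16"}   # assumed values
input_shapes = {"batch_size": 4, "sequence_length": 4096}    # assumed values

model = NeuronModelForCausalLM.from_pretrained(
    "MLP-KTLim/llama-3-Korean-Bllossom-8B",
    export=True,            # triggers compilation with neuronx-cc
    **compiler_args,
    **input_shapes,
)
model.save_pretrained("llama-3-Korean-Bllossom-8B-neuron")
```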
Hardware
At minimum, you can run this model on an Amazon EC2 inf2.xlarge instance; more powerful instances in the family, such as inf2.8xlarge, inf2.24xlarge, and inf2.48xlarge, also work, either directly on EC2 or behind a SageMaker Inference endpoint. For detailed specifications, see the Amazon EC2 Inf2 Instances page.
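The sketch below outlines a SageMaker endpoint deployment with the Hugging Face TGI Neuronx container. The container version, environment variables, and instance settings are assumptions chosen for illustration; the linked workshop guide describes the tested procedure.

```python
# Sketch: deploy the compiled model to a SageMaker Inference endpoint on Inf2.
import sagemaker
from sagemaker.huggingface import HuggingFaceModel, get_huggingface_llm_image_uri

role = sagemaker.get_execution_role()  # assumes a SageMaker execution role exists

# Assumed TGI Neuronx container version; pick one matching optimum-neuron 0.0.28.
image_uri = get_huggingface_llm_image_uri("huggingface-neuronx", version="0.0.28")

model = HuggingFaceModel(
    image_uri=image_uri,
    role=role,
    env={
        "HF_MODEL_ID": "Gonsoo/AWS-HF-optimum-neuron-0-0-28-llama-3-Korean-Bllossom-8B",
        "HF_NUM_CORES": "2",          # assumed: must match the compiled artifacts
        "MAX_INPUT_LENGTH": "4000",   # assumed
        "MAX_TOTAL_TOKENS": "4096",   # assumed
        "MAX_BATCH_SIZE": "4",        # assumed
    },
)

predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.inf2.xlarge",   # smallest Inf2 size; larger sizes also work
    volume_size=128,
)

print(predictor.predict({"inputs": "서울의 유명한 관광 명소를 알려줘."}))
```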
Model Card Contact
Gonsoo Moon, [email protected]
Model tree for Gonsoo/AWS-HF-optimum-neuron-0-0-28-llama-3-Korean-Bllossom-8B
Base model: meta-llama/Meta-Llama-3-8B