Fine-tuned Meta-Llama-3.1-8B-Instruct deployment on AWS SageMaker fails
I fine-tuned the Llama 3.1 8B Instruct model and deployed it to SageMaker. I configured SageMaker to use Hugging Face Transformers 4.43, and the deployment succeeded, but when I test the endpoint it returns the error below. How can I run pip install --upgrade transformers==4.43.2 inside the endpoint container?
Received client error (400) from 3VSBZEPFose1o1Q8vAytfGhMQD1cnCE5T83b with message:

{
  "code": 400,
  "type": "InternalServerException",
  "message": "`rope_scaling` must be a dictionary with two fields, `type` and `factor`, got {'factor': 8.0, 'high_freq_factor': 4.0, 'low_freq_factor': 1.0, 'original_max_position_embeddings': 8192, 'rope_type': 'llama3'}"
}
Have you tried version 4.43.3?
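If you want to pin the version yourself, the SageMaker Hugging Face inference container pip-installs a code/requirements.txt packaged inside model.tar.gz at startup. A minimal repackaging sketch (the directory layout is illustrative; copy your own model artifacts in before re-tarring):

```shell
# Rebuild the model archive with a requirements.txt so the endpoint
# container upgrades transformers when it starts. Paths are illustrative.
mkdir -p model/code
echo "transformers==4.43.3" > model/code/requirements.txt
# ...copy your fine-tuned model artifacts (config.json, *.safetensors,
# tokenizer files) into model/ here...
tar -czf model.tar.gz -C model .
```

Then upload the new model.tar.gz to S3 and point your SageMaker model at it.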
"rope_scaling": {
"factor": 8.0,
"rope_type": "linear"
}
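If you prefer to script that config.json edit as part of your packaging step, here is a minimal sketch; the helper name and path handling are my own assumptions, not part of any official SDK:

```python
import json
from pathlib import Path

def patch_rope_scaling(config_path: str) -> dict:
    """Rewrite a Llama 3.1 style rope_scaling entry into the two-field
    {"type", "factor"} schema that older inference stacks validate.
    Illustrative helper, not an official SageMaker/Transformers API."""
    path = Path(config_path)
    config = json.loads(path.read_text())
    rope = config.get("rope_scaling") or {}
    if rope.get("rope_type") == "llama3":  # new-style entry from Llama 3.1
        config["rope_scaling"] = {
            "type": "linear",                  # field name the old schema expects
            "factor": rope.get("factor", 8.0),
        }
        path.write_text(json.dumps(config, indent=2))
    return config["rope_scaling"]
```

Run it against the fine-tuned model's config.json before building model.tar.gz.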