Overview

This model is a fine-tuned version of Meta's Llama 3.1 8B Instruct model, trained with the Unsloth library using LoRA (Low-Rank Adaptation) and 4-bit quantization for efficient inference and deployment. Fine-tuning used a synthetic dataset from @AI Maker Space consisting of acronyms and their expanded forms in English, targeting instruction-style tasks. Thanks to the 4-bit quantization, the model can be deployed easily in low-resource environments.
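As a minimal sketch of how such a 4-bit Unsloth checkpoint can be queried, the snippet below builds a chat-format prompt and loads the model for generation. The repo name comes from this card; the prompt wording, generation settings, and the `expand_acronym` helper are illustrative assumptions, not part of the released model.

```python
def build_messages(acronym: str) -> list[dict]:
    """Build a chat-format prompt asking the model to expand an acronym."""
    return [{"role": "user", "content": f"Expand the acronym: {acronym}"}]


def expand_acronym(acronym: str,
                   model_name: str = "vhab10/Llama-3-1-8B-Instruct-Unsloth-LoRA-4bit") -> str:
    """Load the 4-bit model with Unsloth and generate an expansion (needs a GPU)."""
    from unsloth import FastLanguageModel  # deferred import: requires a CUDA environment

    model, tokenizer = FastLanguageModel.from_pretrained(
        model_name=model_name,
        max_seq_length=2048,
        load_in_4bit=True,  # matches the quantization used for this checkpoint
    )
    FastLanguageModel.for_inference(model)  # switch adapters into fast inference mode

    inputs = tokenizer.apply_chat_template(
        build_messages(acronym),
        add_generation_prompt=True,
        return_tensors="pt",
    ).to(model.device)
    outputs = model.generate(inputs, max_new_tokens=64)
    # Decode only the newly generated tokens, not the echoed prompt
    return tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)
```

`build_messages` is pure and can be reused with any Llama 3.1 chat template; `expand_acronym` defers the Unsloth import so the module can be inspected without GPU dependencies installed.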

Uploaded model

  • Developed by: vhab10
  • License: apache-2.0
  • Finetuned from model: unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit

This Llama model was trained 2x faster with Unsloth and Hugging Face's TRL library.
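The Unsloth + TRL setup mentioned above typically looks like the sketch below: LoRA adapters are attached with `FastLanguageModel.get_peft_model`, then training is driven by TRL's `SFTTrainer`. All hyperparameter values here (rank, alpha, target modules, batch sizes) are illustrative assumptions, not the exact settings used for this model, and newer TRL releases move some of these arguments into `SFTConfig`.

```python
# Hypothetical LoRA hyperparameters for illustration only.
LORA_CONFIG = {
    "r": 16,                # low-rank dimension of the adapter matrices
    "lora_alpha": 16,       # scaling factor applied to the adapter update
    "lora_dropout": 0.0,    # Unsloth is fastest with dropout disabled
    "target_modules": ["q_proj", "k_proj", "v_proj", "o_proj"],
}


def build_trainer(model, tokenizer, train_dataset):
    """Attach LoRA adapters and assemble an SFTTrainer (illustrative settings)."""
    from unsloth import FastLanguageModel  # deferred imports: need GPU environment
    from trl import SFTTrainer
    from transformers import TrainingArguments

    peft_model = FastLanguageModel.get_peft_model(
        model,
        r=LORA_CONFIG["r"],
        lora_alpha=LORA_CONFIG["lora_alpha"],
        lora_dropout=LORA_CONFIG["lora_dropout"],
        target_modules=LORA_CONFIG["target_modules"],
        bias="none",
    )
    return SFTTrainer(
        model=peft_model,
        tokenizer=tokenizer,
        train_dataset=train_dataset,   # e.g. acronym/expansion pairs as text
        dataset_text_field="text",
        max_seq_length=2048,
        args=TrainingArguments(
            per_device_train_batch_size=2,
            gradient_accumulation_steps=4,
            learning_rate=2e-4,
            num_train_epochs=1,
            output_dir="outputs",
        ),
    )
```

Keeping adapters low-rank means only a small fraction of the 8B parameters are updated, which is what makes fine-tuning fast and the resulting adapter cheap to distribute.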

Model details

  • Format: Safetensors
  • Model size: 4.65B params
  • Tensor types: FP16, F32, U8
