---
base_model: llama-3.2-3b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---

# Uploaded model

- **Developed by:** alphaaico
- **License:** apache-2.0
- **Finetuned from model:** llama-3.2-3b-instruct-bnb-4bit

This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

**AlphaAI-Chatty-INT1**

**Overview**

AlphaAI-Chatty-INT1 is a fine-tuned Llama 3.2 3B Instruct model optimized for chatty, engaging conversations. It was trained on a proprietary conversational dataset, making it well suited for local deployments that require a natural, interactive dialogue experience. The model is distributed in GGUF format and quantized at several levels to support a range of hardware configurations.

**Model Details**

- Base Model: Llama 3.2 3B Instruct
- Fine-tuned By: Alpha AI
- Training Framework: Unsloth
- Quantization Levels Available:
  - q4_k_m
  - q5_k_m
  - q8_0
  - 16-bit (full precision): https://huggingface.co/alphaaico/AlphaAI-Chatty-INT1-16bit
- Format: GGUF (optimized for local deployments)

**Use Cases**

- Conversational AI – Chatbots, virtual assistants, and customer support.
- Local AI Deployments – Runs efficiently on local machines without requiring cloud-based inference.
- Research & Experimentation – Studying conversational AI and further fine-tuning on domain-specific datasets.

**Model Performance**

The model has been optimized for chat-style interactions, providing:

- Engaging and context-aware responses
- Efficient performance on consumer hardware
- A balance of coherence and creativity in conversation

**Limitations & Biases**

Like any AI system, this model may reflect biases present in its training data. Use it responsibly and fine-tune it further as needed for specific applications.

**License**

This model is released under the Apache 2.0 license. Please check the Hugging Face repository for more details.

**Acknowledgments**

Special thanks to the Unsloth team for providing an optimized training pipeline for LLaMA models.
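
**Example Usage (local GGUF inference)**

The GGUF builds can be loaded with any llama.cpp-compatible runtime. Below is a minimal sketch using `llama-cpp-python`; the `repo_id` and `filename` are illustrative assumptions, so check the quantized repositories on Hugging Face for the exact GGUF filenames before running it.

```python
# Minimal local-inference sketch (pip install llama-cpp-python huggingface_hub).
# The repo_id and filename below are assumptions -- substitute the actual GGUF
# repository and the quant file that fits your hardware (q4_k_m, q5_k_m, or q8_0).
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="alphaaico/AlphaAI-Chatty-INT1",  # hypothetical GGUF repo id
    filename="*q4_k_m.gguf",                  # glob pattern for the chosen quant
    n_ctx=4096,                               # context window size
)

response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a friendly, chatty assistant."},
        {"role": "user", "content": "Hey! What can you help me with today?"},
    ],
    max_tokens=256,
    temperature=0.7,
)

print(response["choices"][0]["message"]["content"])
```

If you prefer full-precision GPU inference, the 16-bit weights linked above can instead be loaded through the standard `transformers` loading flow.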