---
base_model: llama-3.2-3b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---

<div align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/669777597cb32718c20d97e9/4emWK_PB-RrifIbrCUjE8.png"
     alt="Title card" 
     style="width: 500px;
            height: auto;
            object-position: center top;">
</div>

**Website - https://www.alphaai.biz**

# Uploaded model

- **Developed by:** alphaaico
- **License:** apache-2.0
- **Finetuned from model:** llama-3.2-3b-instruct-bnb-4bit

This LLaMA model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

**AlphaAI-Chatty-INT1**

**Overview**

AlphaAI-Chatty-INT1 is a fine-tuned LLaMA 3.2 3B Instruct model optimized for chatty, engaging conversations. It has been trained on a proprietary conversational dataset, making it well suited for local deployments that require a natural, interactive dialogue experience.

The model is available in GGUF format and has been quantized to different levels to support various hardware configurations.

**Model Details**
- Base Model: llama-3.2-3b-instruct-bnb-4bit
- Fine-tuned By: Alpha AI
- Training Framework: Unsloth

Quantization Levels Available:
- q4_k_m
- q5_k_m
- q8_0
- 16-bit (unquantized): https://huggingface.co/alphaaico/AlphaAI-Chatty-INT1-16bit

Format: GGUF (Optimized for local deployments)
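
As a rough illustration of local deployment, the sketch below loads one of the quantized GGUF files with the llama-cpp-python bindings and runs a single chat completion. The file name and path are placeholders for whichever quantization level you download (e.g. q4_k_m); they are not file names published by this repository.

```python
# Minimal local-inference sketch using llama-cpp-python (pip install llama-cpp-python).
# The model path below is a placeholder; point it at the GGUF file you downloaded
# from this repository (q4_k_m, q5_k_m, or q8_0).
from llama_cpp import Llama

llm = Llama(
    model_path="./AlphaAI-Chatty-INT1.Q4_K_M.gguf",  # hypothetical local filename
    n_ctx=4096,        # context window; adjust to your hardware
    n_gpu_layers=-1,   # offload all layers to GPU if available, otherwise set to 0
)

response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a friendly, chatty assistant."},
        {"role": "user", "content": "Hi! What can you help me with today?"},
    ],
    max_tokens=256,
    temperature=0.7,
)
print(response["choices"][0]["message"]["content"])
```

In general, lower quantization levels such as q4_k_m trade some output quality for a smaller memory footprint, while q8_0 and the 16-bit weights need more RAM/VRAM but stay closer to the original model.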

**Use Cases:**
- Conversational AI – Ideal for chatbots, virtual assistants, and customer support.
- Local AI Deployments – Runs efficiently on local machines without requiring cloud-based inference.
- Research & Experimentation – Suitable for studying conversational AI and fine-tuning on domain-specific datasets.

**Model Performance**

The model has been optimized for chat-style interactions, ensuring:
- Engaging and context-aware responses
- Efficient performance on consumer hardware
- Balanced coherence and creativity in conversations
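
As an illustration of the multi-turn, context-aware behaviour described above, the snippet below keeps a running message history and passes it back on every call. It assumes the same llama-cpp-python setup and placeholder GGUF path as the earlier sketch.

```python
# Multi-turn chat sketch: the full history is sent on each call so the model can
# stay context-aware across turns. Assumes llama-cpp-python and a locally
# downloaded GGUF file (the path is a placeholder).
from llama_cpp import Llama

llm = Llama(model_path="./AlphaAI-Chatty-INT1.Q4_K_M.gguf", n_ctx=4096)

history = [{"role": "system", "content": "You are a friendly, chatty assistant."}]

for user_turn in ["I'm planning a weekend trip.", "What should I pack for it?"]:
    history.append({"role": "user", "content": user_turn})
    reply = llm.create_chat_completion(messages=history, max_tokens=200)
    assistant_text = reply["choices"][0]["message"]["content"]
    history.append({"role": "assistant", "content": assistant_text})
    print(f"User: {user_turn}\nAssistant: {assistant_text}\n")
```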

**Limitations & Biases**

Like any AI system, this model may reflect biases present in its training data. Use it responsibly and fine-tune it further if needed for specific applications.

**License**

This model is released under the Apache 2.0 license, a permissive license. Please check the Hugging Face repository for more details.

**Acknowledgments**

Special thanks to the Unsloth team for providing an optimized training pipeline for LLaMA models.