alphaaico committed on
Commit 75a2d05 · verified · 1 Parent(s): b0ddbcc

Update README.md

Files changed (1)
  1. README.md +45 -3
README.md CHANGED
@@ -1,5 +1,5 @@
  ---
- base_model: unsloth/llama-3.2-3b-instruct-bnb-4bit
+ base_model: llama-3.2-3b-instruct-bnb-4bit
  tags:
  - text-generation-inference
  - transformers
@@ -15,8 +15,50 @@ language:
 
  - **Developed by:** alphaaico
  - **License:** apache-2.0
- - **Finetuned from model :** unsloth/llama-3.2-3b-instruct-bnb-4bit
+ - **Finetuned from model :** llama-3.2-3b-instruct-bnb-4bit
 
  This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
 
- [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
+ # AlphaAI-Chatty-INT1
+
+ ## Overview
+
+ AlphaAI-Chatty-INT1 is a fine-tuned Llama 3.2 3B Instruct model optimized for chatty, engaging conversations. It was trained on a proprietary conversational dataset, making it well suited for local deployments that need a natural, interactive dialogue experience.
+
+ The model is distributed in GGUF format and is quantized at several levels to support a range of hardware configurations.
+
+ ## Model Details
+
+ - **Base Model:** Llama 3.2 3B Instruct
+ - **Fine-tuned By:** Alpha AI
+ - **Training Framework:** Unsloth
+ - **Quantization Levels Available:**
+   - q4_k_m
+   - q5_k_m
+   - q8_0
+   - 16-bit (full precision): https://huggingface.co/alphaaico/AlphaAI-Chatty-INT1-16bit
+ - **Format:** GGUF (optimized for local deployments; see the loading sketch below)
+
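+ The GGUF builds run on any llama.cpp-compatible runtime. Below is a minimal local-inference sketch using llama-cpp-python; the file name `AlphaAI-Chatty-INT1-q4_k_m.gguf` and the sampling settings are assumptions, so substitute whichever quantization you actually downloaded.
+
+ ```python
+ # Minimal sketch: local chat inference over a GGUF quantization with llama-cpp-python.
+ # The model_path below is an assumed file name; point it at the q4_k_m, q5_k_m,
+ # or q8_0 file you downloaded from this repo.
+ from llama_cpp import Llama
+
+ llm = Llama(
+     model_path="AlphaAI-Chatty-INT1-q4_k_m.gguf",  # assumed local file name
+     n_ctx=4096,        # context window; lower it on memory-constrained machines
+     n_gpu_layers=-1,   # offload all layers to GPU if available; use 0 for CPU-only
+ )
+
+ # Chat-style completion using the chat template embedded in the GGUF metadata.
+ response = llm.create_chat_completion(
+     messages=[
+         {"role": "system", "content": "You are a friendly, engaging assistant."},
+         {"role": "user", "content": "Hey! Any tips for staying focused while working from home?"},
+     ],
+     max_tokens=256,
+     temperature=0.7,
+ )
+ print(response["choices"][0]["message"]["content"])
+ ```
+
+ As a rule of thumb, q4_k_m is the smallest and fastest of the listed quantizations, while q8_0 stays closest to the full-precision weights.
+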
+ ## Use Cases
+
+ - **Conversational AI:** chatbots, virtual assistants, and customer support.
+ - **Local AI Deployments:** runs efficiently on local machines without requiring cloud-based inference.
+ - **Research & Experimentation:** studying conversational AI and fine-tuning on domain-specific datasets (see the full-precision example below).
+
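+ For the research and fine-tuning use case, the full-precision checkpoint linked above can be loaded with Hugging Face transformers instead of a GGUF runtime. A minimal sketch, assuming https://huggingface.co/alphaaico/AlphaAI-Chatty-INT1-16bit hosts a standard transformers checkpoint with a chat template:
+
+ ```python
+ # Minimal sketch: load the assumed 16-bit transformers checkpoint and run one chat turn.
+ import torch
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ model_id = "alphaaico/AlphaAI-Chatty-INT1-16bit"
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(
+     model_id,
+     torch_dtype=torch.bfloat16,  # 16-bit weights; use float16 on GPUs without bf16 support
+     device_map="auto",
+ )
+
+ messages = [
+     {"role": "system", "content": "You are a friendly, engaging assistant."},
+     {"role": "user", "content": "What's a good icebreaker for a team meeting?"},
+ ]
+ input_ids = tokenizer.apply_chat_template(
+     messages, add_generation_prompt=True, return_tensors="pt"
+ ).to(model.device)
+
+ output_ids = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7)
+ print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
+ ```
+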
+ ## Model Performance
+
+ The model has been optimized for chat-style interactions, with a focus on:
+
+ - Engaging, context-aware responses
+ - Efficient performance on consumer hardware
+ - A balance of coherence and creativity in conversation
+
+ ## Limitations & Biases
+
+ Like any AI system, this model may reflect biases present in its training data. Use it responsibly and fine-tune it further where needed for specific applications.
+
+ ## License
+
+ This model is released under a permissive license (Apache 2.0). Please check the Hugging Face repository for details.
+
+ ## Acknowledgments
+
+ Special thanks to the Unsloth team for providing an optimized training pipeline for LLaMA models.