---
base_model: Qwen/Qwen2.5-3B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: other
license_name: qwen-research
license_link: https://huggingface.co/Spestly/Athena-1-3B/blob/main/LICENSE
language:
- en
---
![Header](https://raw.githubusercontent.com/Aayan-Mishra/Images/refs/heads/main/Athena.png)

# Athena-1 3B
Athena-1 3B is a fine-tuned, instruction-following large language model derived from [Qwen/Qwen2.5-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct). It is designed to provide efficient, high-quality text generation while maintaining a compact size. Athena-1 3B is optimized for lightweight applications, conversational AI, and structured data tasks, making it well suited to real-world use cases where performance and resource efficiency are critical.
---

## Key Features
### Lightweight and Efficient

- **Compact Size**: At just **3.09 billion parameters**, Athena-1 3B offers excellent performance with reduced computational requirements (see the loading sketch below).
- **Instruction Following**: Fine-tuned for precise and reliable adherence to user prompts.
- **Coding and Mathematics**: Proficient in solving coding challenges and handling mathematical tasks.
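The compact parameter count means the model can run on a single consumer GPU. A minimal loading sketch using the standard `transformers` API; the dtype and device settings are illustrative and `device_map="auto"` assumes the `accelerate` package is installed:

```python
# Minimal sketch: loading Athena-1 3B in half precision to keep memory use low.
# At 3.09B parameters, the 16-bit weights occupy roughly 6 GB.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Spestly/Athena-1-3B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # or torch.float16 on GPUs without bfloat16 support
    device_map="auto",           # requires the `accelerate` package
)
```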
### Long-Context Understanding

- **Context Length**: Supports up to **32,768 tokens**, enabling the processing of moderately lengthy documents or conversations.
- **Token Generation**: Can generate up to **8K tokens** of output (see the example below).
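A rough sketch of how these limits come into play when summarizing a long document. The file name is hypothetical, and the reply-extraction pattern assumes a recent `transformers` version where chat pipelines return the full conversation:

```python
# Sketch: check that a long prompt fits the 32,768-token window, then cap the reply length.
from transformers import AutoTokenizer, pipeline

model_id = "Spestly/Athena-1-3B"
tokenizer = AutoTokenizer.from_pretrained(model_id)

long_document = open("report.txt").read()  # hypothetical input file
n_tokens = len(tokenizer(long_document)["input_ids"])
assert n_tokens < 32_768, f"document is {n_tokens} tokens and will not fit in the context window"

pipe = pipeline("text-generation", model=model_id)
messages = [{"role": "user", "content": f"Summarize the following report:\n\n{long_document}"}]
result = pipe(messages, max_new_tokens=1024)  # raise toward 8K for longer outputs
print(result[0]["generated_text"][-1]["content"])  # last message holds the assistant reply
```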
### Multilingual Support

- Supports **29+ languages**, including:
  - English, Chinese, French, Spanish, Portuguese, German, Italian, Russian
  - Japanese, Korean, Vietnamese, Thai, Arabic, and more.
### Structured Data & Outputs

- **Structured Data Interpretation**: Processes structured formats such as tables and JSON.
- **Structured Output Generation**: Generates well-formatted outputs, including JSON and other structured formats (see the sketch below).
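A minimal sketch of prompting for JSON and parsing the reply. The prompt and field names are made up for illustration, and the model is not guaranteed to return valid JSON, so the parse is guarded:

```python
# Sketch: ask for JSON only, then parse the assistant reply.
import json
from transformers import pipeline

pipe = pipeline("text-generation", model="Spestly/Athena-1-3B")
messages = [{
    "role": "user",
    "content": (
        'Extract the person described below as JSON with keys "name", "city", and "age". '
        "Reply with JSON only.\n\nMaria, 34, moved to Lisbon last year."
    ),
}]
reply = pipe(messages, max_new_tokens=128)[0]["generated_text"][-1]["content"]

try:
    record = json.loads(reply)
    print(record["name"], record["city"], record["age"])
except json.JSONDecodeError:
    print("The reply was not valid JSON:", reply)
```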
---

## Model Details
- **Base Model**: [Qwen/Qwen2.5-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct)
- **Architecture**: Transformer with RoPE, SwiGLU, RMSNorm, attention QKV bias, and tied word embeddings.
- **Parameters**: 3.09B total (2.77B non-embedding).
- **Layers**: 36
- **Attention Heads**: 16 for Q, 2 for KV.
- **Context Length**: Up to **32,768 tokens**.
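These numbers can be checked directly against the model configuration; a small sketch, assuming the repository ships a standard Qwen2-style `config.json`:

```python
# Sketch: read the architecture details above from the model config.
from transformers import AutoConfig

cfg = AutoConfig.from_pretrained("Spestly/Athena-1-3B")
print(cfg.num_hidden_layers)        # 36 layers
print(cfg.num_attention_heads)      # 16 query heads
print(cfg.num_key_value_heads)      # 2 key/value heads (grouped-query attention)
print(cfg.max_position_embeddings)  # 32768-token context window
print(cfg.tie_word_embeddings)      # tied input/output embeddings
```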
---

## Applications
Athena-1 3B is designed for a variety of real-world applications:

- **Conversational AI**: Build fast, responsive, and lightweight chatbots.
- **Code Generation**: Generate, debug, or explain code snippets.
- **Mathematical Problem Solving**: Assist with calculations and reasoning.
- **Document Processing**: Summarize and analyze moderately large documents.
- **Multilingual Applications**: Support global use cases with diverse language requirements.
- **Structured Data**: Process and generate structured data, such as tables and JSON.
---

## Quickstart
Here's how you can use Athena-1 3B for quick text generation:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe = pipeline("text-generation", model="Spestly/Athena-1-3B")
pipe(messages)

# Or load the tokenizer and model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Spestly/Athena-1-3B")
model = AutoModelForCausalLM.from_pretrained("Spestly/Athena-1-3B")
```
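To run generation with the directly loaded model, one option is to format the conversation with the tokenizer's chat template (Qwen2.5-style) and call `generate`. A minimal sketch, with the prompt and `max_new_tokens` chosen arbitrarily:

```python
# Sketch: full generation loop with the directly loaded model and tokenizer.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Spestly/Athena-1-3B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

messages = [{"role": "user", "content": "Write a haiku about efficient language models."}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

output_ids = model.generate(**inputs, max_new_tokens=256)
# Drop the prompt tokens before decoding so only the new reply is printed.
reply = tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(reply)
```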