Athena-3-7B Model Card
Athena generated this model card!
Model Overview
Athena-3-7B is a 7.61-billion-parameter causal language model fine-tuned from Qwen2.5-Math-7B. It is designed to excel at STEM reasoning, mathematics, and general natural language processing tasks, offering strong instruction-following and problem-solving capabilities.
Model Details
- Model Developer: Aayan Mishra
- Model Type: Causal Language Model
- Architecture: Transformer with Rotary Position Embeddings (RoPE), SwiGLU activation, RMSNorm, and attention QKV bias
- Parameters: 7.61 billion total (6.53 billion non-embedding)
- Layers: 28
- Attention Heads: 28 for query and 4 for key-value (Grouped Query Attention); these values can be verified with the config check after this list
- Vocabulary Size: Approximately 151,646 tokens
- Context Length: Up to 4,096 tokens (inherited from the Qwen2.5-Math-7B base)
- Languages Supported: Over 29 languages, with strong emphasis on English and mathematical expressions
- License: MIT
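The architectural figures above can be cross-checked directly against the repository's configuration. A minimal sketch using `AutoConfig` (only the config file is fetched, no weights are downloaded):

```python
from transformers import AutoConfig

# Fetches only config.json, not the model weights.
config = AutoConfig.from_pretrained("Spestly/Athena-3-7B")

print("layers:         ", config.num_hidden_layers)
print("query heads:    ", config.num_attention_heads)
print("key-value heads:", config.num_key_value_heads)  # fewer than query heads => GQA
print("vocab size:     ", config.vocab_size)
print("max positions:  ", config.max_position_embeddings)
```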
Training Details
Athena-3-7B was fine-tuned using the Unsloth framework on a single NVIDIA A100 GPU. The fine-tuning process spanned approximately 90 minutes over 60 epochs, utilizing a curated dataset focused on instruction-following, problem-solving, and advanced mathematics. This approach enhances the model's capabilities in academic and analytical tasks.
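The exact training script and hyperparameters are not published, so the following is only a rough sketch of what a comparable Unsloth LoRA fine-tuning run might look like. The dataset file, LoRA rank, and training arguments are illustrative assumptions, not Athena-3-7B's actual configuration:

```python
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

# Load the base model with Unsloth's optimized loader.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Qwen/Qwen2.5-Math-7B",
    max_seq_length=4096,
    load_in_4bit=True,  # assumption: 4-bit quantization keeps a 7B model within one A100
)

# Attach LoRA adapters; rank and target modules are illustrative choices.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    use_gradient_checkpointing=True,
)

# Hypothetical SFT dataset with a pre-rendered "text" column.
dataset = load_dataset("json", data_files="athena_sft.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=4096,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=8,
        num_train_epochs=60,  # the card reports roughly 60 epochs
        learning_rate=2e-4,
        bf16=True,
        logging_steps=10,
        output_dir="athena-3-7b-sft",
    ),
)
trainer.train()
```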
Intended Use
Athena-3-7B is designed for a range of applications, including but not limited to:
- STEM Reasoning: Assisting with complex problem-solving and theoretical explanations.
- Academic Assistance: Supporting tutoring, step-by-step math solutions, and scientific writing.
- General NLP Tasks: Text generation, summarization, and question answering.
- Data Analysis: Interpreting and explaining mathematical and statistical data.
While Athena-3-7B is a powerful tool for various applications, it is not intended for real-time, safety-critical systems or for processing sensitive personal information.
How to Use
To use Athena-3-7B, make sure you have recent versions of the `transformers` and `accelerate` libraries installed (`accelerate` is required for `device_map="auto"` below):

```bash
pip install transformers accelerate
```
Here's an example of how to load the Athena-3-7B model and generate a response:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Spestly/Athena-3-7B"

# Load the model and tokenizer; device_map="auto" places weights automatically.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Explain the concept of entropy in thermodynamics."
messages = [
    {"role": "system", "content": "You are Athena, an AI assistant designed to be helpful."},
    {"role": "user", "content": prompt}
]

# Render the chat template and append the assistant turn marker.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)

# Strip the prompt tokens so only the newly generated text is decoded.
generated_ids = [
    output_ids[len(input_ids):]
    for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
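The call above uses the repository's default generation settings; you can also pass sampling parameters explicitly. The values below are illustrative, not tuned recommendations:

```python
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512,
    do_sample=True,   # sample instead of greedy decoding
    temperature=0.7,  # illustrative value
    top_p=0.8,        # illustrative value
)
```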
Maverick Search Usage
To use this model with Maverick Search, please refer to the Maverick Search repository.
Limitations
Users should be aware of the following limitations:
- Biases: Athena-3-7B may exhibit biases present in its training data. Users should critically assess outputs, especially in sensitive contexts.
- Knowledge Cutoff: The model's knowledge is current up to August 2024. It may not be aware of events or developments occurring after this date.
- Language Support: While the model supports multiple languages, performance is strongest in English and technical content.
Acknowledgements
Athena-3-7B builds upon the work of the Qwen team. Gratitude is also extended to the open-source AI community for their contributions to tools and frameworks that facilitated the development of Athena-3-7B.
License
Athena-3-7B is released under the MIT License, permitting wide usage with proper attribution.
Contact
- Email: [email protected]