Spestly committed on
Commit d7b1e8b · verified · 1 Parent(s): a63b764

Update README.md

Files changed (1)
  1. README.md +117 -1
README.md CHANGED
@@ -62,4 +62,120 @@ extra_gated_fields:
  value: other
  I agree to use this model in accordance with all applicable laws and ethical guidelines: checkbox
  I agree to use this model under the MIT licence: checkbox
- ---
+ ---
+ <div align="center">
+ <span style="font-family: default; font-size: 1.5em;">Athena-R3</span>
+ <div>
+ 🚀 Athena-R3: Think Deeper. Solve Smarter. 🤔
+ </div>
+ </div>
+ <br>
+ <div align="center" style="line-height: 1;">
+ <a href="https://github.com/Aayan-Mishra/Maverick-Search" style="margin: 2px;">
+ <img alt="Github Page" src="https://img.shields.io/badge/Toolkit-000000?style=for-the-badge&logo=github&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
+ </a>
+ <a href="https://aayanmishra.com/blog/athena-3" target="_blank" style="margin: 2px;">
+ <img alt="Blogpost" src="https://img.shields.io/badge/Blogpost-%23000000.svg?style=for-the-badge&logo=notion&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
+ </a>
+ <a href="https://huggingface.co/Spestly/Athena-R3-1.5B" style="margin: 2px;">
+ <img alt="HF Page" src="https://img.shields.io/badge/Athena-fcd022?style=for-the-badge&logo=huggingface&logoColor=000&labelColor" style="display: inline-block; vertical-align: middle;"/>
+ </a>
+ </div>
+
+ *Generated by Athena-3!*
+
+ ## **Model Overview**
+
+ **Athena-R3-1.5B** is a 1.5-billion-parameter causal language model fine-tuned from [DeepSeek-R1-Distill-Qwen-1.5B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B). The model is tailored to enhance reasoning capabilities, making it adept at handling complex problem-solving tasks and providing coherent, contextually relevant responses.
+
+ ## **Model Details**
+
+ - **Model Developer:** Aayan Mishra
+ - **Model Type:** Causal Language Model
+ - **Architecture:** Transformer with Rotary Position Embeddings (RoPE), SwiGLU activation, RMSNorm, and Attention QKV bias
+ - **Parameters:** 1.5 billion total
+ - **Layers:** 24
+ - **Attention Heads:** 16 for query and 2 for key-value (Grouped Query Attention)
+ - **Vocabulary Size:** Approximately 151,646 tokens
+ - **Context Length:** Supports up to 128,000 tokens
+ - **Languages Supported:** Primarily English, with capabilities in other languages
+ - **License:** MIT
+
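+ The figures above can be cross-checked against the repository's `config.json`. Below is a minimal sketch using `transformers.AutoConfig`; the attribute names follow the Qwen2-style configuration used by the DeepSeek-R1-Distill-Qwen-1.5B base, and the commented values are simply the numbers from the list above (the config's padded vocabulary size may differ slightly from the tokenizer count).
+
+ ```python
+ from transformers import AutoConfig
+
+ # Loads only the configuration file; no model weights are downloaded.
+ config = AutoConfig.from_pretrained("Spestly/Athena-R3-1.5B")
+
+ print("Layers:", config.num_hidden_layers)              # expected: 24
+ print("Query heads:", config.num_attention_heads)       # expected: 16
+ print("KV heads:", config.num_key_value_heads)          # expected: 2 (Grouped Query Attention)
+ print("Vocabulary size:", config.vocab_size)            # card lists ~151,646
+ print("Max positions:", config.max_position_embeddings) # card lists up to 128,000 tokens
+ ```
+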
+ ## **Training Details**
+
+ Athena-R3-1.5B was fine-tuned using the Unsloth framework on a single NVIDIA A100 GPU. The fine-tuning process involved 60 epochs over approximately 90 minutes, utilizing a curated dataset focused on reasoning tasks, including mathematical problem-solving and logical inference. This approach aimed to bolster the model's proficiency in complex reasoning and analytical tasks.
+
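+ The exact recipe (dataset, adapter settings, learning rate) is not published with this card, so the following is only an illustrative sketch of what a LoRA-style SFT run of this shape could look like with Unsloth and TRL's `SFTTrainer`. The dataset path and every hyperparameter other than the 60 epochs mentioned above are placeholders, and argument names follow the common Unsloth/TRL notebook pattern, which can vary slightly across TRL versions.
+
+ ```python
+ # Illustrative only — not the published Athena-R3 recipe.
+ from unsloth import FastLanguageModel
+ from trl import SFTTrainer
+ from transformers import TrainingArguments
+ from datasets import load_dataset
+
+ # Start from the same base checkpoint the card names.
+ model, tokenizer = FastLanguageModel.from_pretrained(
+     model_name="deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",
+     max_seq_length=4096,
+     load_in_4bit=True,
+ )
+
+ # Attach LoRA adapters; rank and alpha here are placeholder values.
+ model = FastLanguageModel.get_peft_model(
+     model,
+     r=16,
+     lora_alpha=16,
+     target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
+                     "gate_proj", "up_proj", "down_proj"],
+ )
+
+ # Hypothetical reasoning dataset with a pre-formatted "text" column.
+ dataset = load_dataset("json", data_files="reasoning_sft.jsonl", split="train")
+
+ trainer = SFTTrainer(
+     model=model,
+     tokenizer=tokenizer,
+     train_dataset=dataset,
+     dataset_text_field="text",
+     max_seq_length=4096,
+     args=TrainingArguments(
+         per_device_train_batch_size=2,
+         gradient_accumulation_steps=8,
+         num_train_epochs=60,   # the card reports 60 epochs on a single A100
+         learning_rate=2e-4,
+         bf16=True,
+         logging_steps=10,
+         output_dir="athena-r3-sft",
+     ),
+ )
+ trainer.train()
+ ```
+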
+ ## **Intended Use**
+
+ Athena-R3-1.5B is designed for a variety of applications, including but not limited to:
+
+ - **Advanced Reasoning:** Assisting with complex problem-solving and logical analysis.
+ - **Academic Support:** Providing explanations and solutions for mathematical and scientific queries.
+ - **General NLP Tasks:** Engaging in text completion, summarization, and question-answering tasks.
+ - **Data Interpretation:** Offering insights and explanations for data-centric inquiries.
+
+ While Athena-R3-1.5B is a powerful tool for various applications, it is not intended for real-time, safety-critical systems or for processing sensitive personal information.
+
+ ## **How to Use**
+
+ To use Athena-R3-1.5B, ensure that you have the latest version of the `transformers` library installed:
+
+ ```bash
+ pip install transformers
+ ```
+
+ Here's an example of how to load the Athena-R3-1.5B model and generate a response:
+
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ model_name = "Spestly/Athena-R3-1.5B"
+ model = AutoModelForCausalLM.from_pretrained(
+     model_name,
+     torch_dtype="auto",
+     device_map="auto"
+ )
+ tokenizer = AutoTokenizer.from_pretrained(model_name)
+
+ prompt = "Explain the concept of entropy in thermodynamics."
+ messages = [
+     {"role": "system", "content": "You are Athena, an AI assistant designed to be helpful."},
+     {"role": "user", "content": prompt}
+ ]
+ text = tokenizer.apply_chat_template(
+     messages,
+     tokenize=False,
+     add_generation_prompt=True
+ )
+ model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
+ generated_ids = model.generate(
+     **model_inputs,
+     max_new_tokens=512
+ )
+ generated_ids = [
+     output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
+ ]
+ response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
+ print(response)
+ ```
+
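+ One practical note: the DeepSeek-R1-Distill base that Athena-R3 starts from emits its chain of thought inside `<think>...</think>` tags before the final answer. This card does not state whether the fine-tune preserves that format, so the short continuation below is a hedged convenience: it splits the reasoning trace from the answer only if the closing tag actually appears in `response` from the example above.
+
+ ```python
+ # Optional post-processing, continuing from the example above.
+ # Assumes the R1-style "<think> ... </think>" convention carries over to this fine-tune;
+ # if it does not, the response is printed unchanged.
+ if "</think>" in response:
+     reasoning, answer = response.split("</think>", 1)
+     print("Reasoning trace:\n", reasoning.replace("<think>", "").strip())
+     print("\nFinal answer:\n", answer.strip())
+ else:
+     print(response)
+ ```
+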
+ ## **Limitations**
+
+ Users should be aware of the following limitations:
+
+ - **Biases:** Athena-R3-1.5B may exhibit biases present in its training data. Users should critically assess outputs, especially in sensitive contexts.
+ - **Knowledge Cutoff:** The model's knowledge is current up to August 2024. It may not be aware of events or developments occurring after this date.
+ - **Language Support:** While the model supports multiple languages, performance is strongest in English.
+
+ ## **Acknowledgements**
+
+ Athena-R3-1.5B builds upon the work of the DeepSeek team, particularly the [DeepSeek-R1-Distill-Qwen-1.5B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B) model. Gratitude is also extended to the open-source AI community for their contributions to the tools and frameworks that facilitated the development of Athena-R3-1.5B.
+
+ ## **License**
+
+ Athena-R3-1.5B is released under the MIT License, permitting wide usage with proper attribution.
+
+ ## **Contact**
+
+ - Email: [email protected]
+