Spestly committed on
Commit b8116f7 · verified · 1 Parent(s): 939575d

Update README.md

Files changed (1):
  1. README.md +103 -53

README.md CHANGED
@@ -1,8 +1,12 @@
  ---
  base_model: Spestly/Atlas-Pro-7B-Preview
- datasets:
- - openai/gsm8k
- - HuggingFaceH4/ultrachat_200k
  language:
  - en
  - zh
@@ -33,69 +37,115 @@ language:
  - km
  - tl
  - nl
  library_name: transformers
- license: mit
- quantized_by: mradermacher
- tags:
- - text-generation-inference
- - transformers
- - unsloth
- - qwen2
- - trl
  ---
- ## About
-
- <!-- ### quantize_version: 2 -->
- <!-- ### output_tensor_quantised: 1 -->
- <!-- ### convert_type: hf -->
- <!-- ### vocab_type: -->
- <!-- ### tags: -->
- static quants of https://huggingface.co/Spestly/Atlas-Pro-7B-Preview
-
- <!-- provided-files -->
- weighted/imatrix quants are available at https://huggingface.co/mradermacher/Atlas-Pro-7B-Preview-i1-GGUF
-
- ## Usage
-
- If you are unsure how to use GGUF files, refer to one of [TheBloke's
- READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
- more details, including on how to concatenate multi-part files.
-
- ## Provided Quants
-
- (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
-
- | Link | Type | Size/GB | Notes |
- |:-----|:-----|--------:|:------|
- | [GGUF](https://huggingface.co/mradermacher/Atlas-Pro-7B-Preview-GGUF/resolve/main/Atlas-Pro-7B-Preview.Q2_K.gguf) | Q2_K | 3.1 | |
- | [GGUF](https://huggingface.co/mradermacher/Atlas-Pro-7B-Preview-GGUF/resolve/main/Atlas-Pro-7B-Preview.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
- | [GGUF](https://huggingface.co/mradermacher/Atlas-Pro-7B-Preview-GGUF/resolve/main/Atlas-Pro-7B-Preview.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
- | [GGUF](https://huggingface.co/mradermacher/Atlas-Pro-7B-Preview-GGUF/resolve/main/Atlas-Pro-7B-Preview.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
- | [GGUF](https://huggingface.co/mradermacher/Atlas-Pro-7B-Preview-GGUF/resolve/main/Atlas-Pro-7B-Preview.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
- | [GGUF](https://huggingface.co/mradermacher/Atlas-Pro-7B-Preview-GGUF/resolve/main/Atlas-Pro-7B-Preview.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
- | [GGUF](https://huggingface.co/mradermacher/Atlas-Pro-7B-Preview-GGUF/resolve/main/Atlas-Pro-7B-Preview.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
- | [GGUF](https://huggingface.co/mradermacher/Atlas-Pro-7B-Preview-GGUF/resolve/main/Atlas-Pro-7B-Preview.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
- | [GGUF](https://huggingface.co/mradermacher/Atlas-Pro-7B-Preview-GGUF/resolve/main/Atlas-Pro-7B-Preview.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
- | [GGUF](https://huggingface.co/mradermacher/Atlas-Pro-7B-Preview-GGUF/resolve/main/Atlas-Pro-7B-Preview.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
- | [GGUF](https://huggingface.co/mradermacher/Atlas-Pro-7B-Preview-GGUF/resolve/main/Atlas-Pro-7B-Preview.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
- | [GGUF](https://huggingface.co/mradermacher/Atlas-Pro-7B-Preview-GGUF/resolve/main/Atlas-Pro-7B-Preview.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
-
- Here is a handy graph by ikawrakow comparing some lower-quality quant
- types (lower is better):
-
- ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)
-
- And here are Artefact2's thoughts on the matter:
- https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
-
- ## FAQ / Model Request
-
- See https://huggingface.co/mradermacher/model_requests for some answers to
- questions you might have and/or if you want some other model quantized.
-
- ## Thanks
-
- I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
- me use its servers and providing upgrades to my workstation to enable
- this work in my free time.
-
- <!-- end -->
 
  ---
  base_model: Spestly/Atlas-Pro-7B-Preview
+ tags:
+ - text-generation-inference
+ - transformers
+ - unsloth
+ - qwen2
+ - trl
+ license: mit
  language:
  - en
  - zh

  - km
  - tl
  - nl
+ datasets:
+ - openai/gsm8k
+ - HuggingFaceH4/ultrachat_200k
  library_name: transformers
+ extra_gated_prompt: "By accessing this model, you agree to comply with ethical usage guidelines and accept full responsibility for its applications. You will not use this model for harmful, malicious, or illegal activities, and you understand that the model's use is subject to ongoing monitoring for misuse. This model is provided 'AS IS' and agreeing to this means that you are responsible for all the outputs generated by you"
+ extra_gated_fields:
+   Name: text
+   Organization: text
+   Country: country
+   Date of Birth: date_picker
+   Intended Use:
+     type: select
+     options:
+       - Research
+       - Education
+       - Personal Development
+       - Commercial Use
+       - label: Other
+         value: other
+   I agree to use this model in accordance with all applicable laws and ethical guidelines: checkbox
+   I agree to use this model under the MIT licence: checkbox
+ ---
+ ![Header](./Atlas-Pro.png)
+ # **Atlas Pro**
+
+ ### **Model Overview**
+ **Atlas Pro** (previously known as '🏆 Atlas-Experiment 0403 🧪' in AtlasUI) is an advanced large language model (LLM) built on top of **Atlas Flash**. It is designed to deliver strong performance on professional tasks such as coding, mathematics, and scientific problem-solving. Atlas Pro extends Atlas Flash with additional fine-tuning and specialization, making it well suited to researchers and advanced users.
+
+ ---
+
+ ### **Key Features**
+ - **Improved Problem-Solving:** Handles difficult tasks in programming, math, and the sciences better than most comparable models.
+ - **Advanced Code Generation:** Produces clean, efficient code, though it may still miss edge cases occasionally.
+ - **Domain Expertise:** Focused on technical and scientific domains, but works well in general contexts too.
+ - **Reasoning Improvement:** This version of Atlas has enhanced reasoning, fine-tuned on synthetic data from models such as Gemini 2.0 Flash Thinking.
+
+ ---
+
+ # **Evaluation**
+ Below are evaluations of the Atlas Pro models and DeepSeek's R1 Qwen distills (the models that started the whole Atlas family):
+
+ | **Metric** | **Spestly Atlas Pro (7B)** | **Spestly Atlas Pro (1.5B)** | DeepSeek-R1-Distill-Qwen (7B) | DeepSeek-R1-Distill-Qwen (1.5B) |
+ |:--------------------------|-----------:|-----------:|-----------:|-----------:|
+ | **Average**               | **22.65%** | 12.93%     | 11.73%     | 7.53%      |
+ | **IFEval**                | 31.54%     | 24.30%     | **40.38%** | 34.63%     |
+ | **BBH**                   | **25.27%** | 9.08%      | 7.88%      | 4.73%      |
+ | **MATH**                  | **38.90%** | 25.83%     | 0.00%      | 0.00%      |
+ | **GPQA**                  | **11.63%** | 6.26%      | 3.91%      | 2.97%      |
+ | **MUSR**                  | **6.65%**  | 1.86%      | 3.55%      | 2.08%      |
+ | **MMLU-Pro**              | **21.89%** | 10.28%     | 14.68%     | 0.78%      |
+ | **Carbon Emissions (kg)** | 0.69       | **0.59**   | 0.68       | 0.62       |
+
  ---
+ ### **Intended Use Cases**
+ Atlas Pro works best for:
+ - **Technical Professionals:** Helping developers, engineers, and scientists solve complex problems.
+ - **Educational Assistance:** Offering clear, step-by-step help for students and teachers.
+ - **Research Support:** Assisting in theoretical and applied science work.
+ - **Enterprise Tools:** Integrating into company workflows for smarter systems.
+
+ ---
 
 
 
 
 
+ ### **NOTICE**
+ Atlas Pro is built on **Atlas Flash** and refined to meet high standards. Here's how it's made:
+
+ 1. **Base Model:** Built upon **Atlas Flash**, which is already quite capable.
+ 2. **Fine-Tuning Details:**
+    - Used datasets targeting programming, math, scientific challenges, and overall reasoning ability.
+    - Refined its performance for professional scenarios.
+ 3. **Performance Highlights:**
+    - Performs strongly across benchmarks, though occasional prompt adjustments may still improve outputs.
+
+ ---
+ ### **Limitations**
+ - **Knowledge Cutoff:** It doesn't know about recent events unless updated.
+ - **Hardware Requirements:** Needs high-end GPUs to run smoothly.
+ - **Specialization Bias:** While strong in its focus areas, its general chat capabilities may not match more general-purpose models.
+ - **Token Leakage:** In rare cases (~1/167 generations), Atlas Pro may leak special tokens into its output.
+
+ ---
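Given the token-leakage note above, a minimal post-processing sketch can strip stray special-token markers before display. The marker pattern below is an assumption based on the ChatML-style tokens common to Qwen2-family models, not something documented for Atlas Pro:

```python
import re

# Assumed pattern for ChatML-style special tokens (e.g. <|im_end|>, <|endoftext|>)
# used by Qwen2-family models; adjust to the tokenizer's actual special tokens.
LEAKED_TOKEN_PATTERN = re.compile(r"<\|[A-Za-z0-9_]+\|>")

def strip_leaked_tokens(text: str) -> str:
    """Remove leaked special-token markers from generated text."""
    return LEAKED_TOKEN_PATTERN.sub("", text).strip()

print(strip_leaked_tokens("The answer is 42.<|im_end|>"))  # → The answer is 42.
```

Decoding with `skip_special_tokens=True` (as in the usage snippet below) already removes registered special tokens; a regex like this is only a fallback for markers that survive decoding.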
+ ### **Licensing**
+ Atlas Pro is released under the **MIT License**; the access agreement above additionally prohibits harmful uses. Make sure to follow the terms of both.
+
+ ---
+
+ ### **Acknowledgments**
+ Created by **Spestly** as part of the **Atlas Model Family**, Atlas Pro builds on the strong foundation of **Atlas Flash**. Special thanks to **DeepSeek's R1 Qwen distills** for helping make it happen.
+ ---
+
+ ### **Usage**
+ You can use Atlas Pro with this code snippet:
+
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ # Load the Atlas Pro model
+ model_name = "Spestly/Atlas-R1-Pro-7B-Preview"
+ tokenizer = AutoTokenizer.from_pretrained(model_name)
+ model = AutoModelForCausalLM.from_pretrained(model_name)
+
+ # Generate a response
+ prompt = "Write a Python function to calculate the Fibonacci sequence."
+ inputs = tokenizer(prompt, return_tensors="pt")
+ outputs = model.generate(**inputs, max_length=200)
+ response = tokenizer.decode(outputs[0], skip_special_tokens=True)
+
+ print(response)
+ ```
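Since Atlas Pro derives from Qwen2, chat-style use presumably goes through a ChatML-like template; in practice, `tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)` is a safer route than the raw prompt above. The manual construction below is only an illustrative sketch of the assumed format (the exact markers are an assumption, not documented for Atlas Pro):

```python
# Hypothetical sketch of a ChatML-style prompt, the format Qwen2-family
# tokenizers typically apply; prefer tokenizer.apply_chat_template in practice.
def build_chatml_prompt(messages):
    parts = [f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>" for m in messages]
    parts.append("<|im_start|>assistant\n")  # generation prompt for the reply
    return "\n".join(parts)

prompt = build_chatml_prompt([
    {"role": "user", "content": "Write a Python function to calculate the Fibonacci sequence."},
])
print(prompt)
```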