Update README.md
README.md
---
base_model:
- rubenroy/Zurich-14B-GCv2-5m
library_name: transformers
---
# **Maverick Model Card**

## **Model Overview**

**Maverick** is a 14.7-billion-parameter causal language model fine-tuned from [Ruben Roy's Zurich-14B-GCv2-5m](https://huggingface.co/rubenroy/Zurich-14B-GCv2-5m). The base model, Zurich-14B-GCv2-5m, is itself a fine-tuned version of Alibaba's Qwen 2.5 14B Instruct model, trained on the GammaCorpus v2-5m dataset. Maverick is designed to excel in various STEM fields and general natural language processing tasks, offering enhanced reasoning and instruction-following capabilities.

## **Model Details**

- **Model Developer:** Aayan Mishra
- **Model Type:** Causal Language Model
- **Parameters:** 14.7 billion
- **Base Model:** [rubenroy/Zurich-14B-GCv2-5m](https://huggingface.co/rubenroy/Zurich-14B-GCv2-5m)
- **Languages Supported:** Over 29 languages, including Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, and Arabic
- **License:** MIT

## **Training Details**

Maverick was fine-tuned using the Unsloth framework on a single NVIDIA A100 GPU. The fine-tuning process spanned approximately 90 minutes over 60 epochs, utilising a curated dataset focused on instruction-following and STEM-related content. This approach aimed to enhance the model's performance in complex reasoning and academic tasks.
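
For readers who want to reproduce a comparable setup, the sketch below outlines a typical Unsloth LoRA fine-tuning run; the dataset path and hyperparameters are illustrative placeholders rather than Maverick's exact configuration.

```python
# Illustrative Unsloth SFT setup -- the dataset and hyperparameters are
# placeholders, not Maverick's actual training configuration.
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

# Load the base model in 4-bit so it fits comfortably on a single A100
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="rubenroy/Zurich-14B-GCv2-5m",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters so only a small fraction of the weights is updated
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_alpha=16,
)

# Placeholder dataset of instruction-following / STEM examples
dataset = load_dataset("json", data_files="instruct_stem.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        num_train_epochs=60,  # the card reports 60 epochs
        learning_rate=2e-4,
        bf16=True,
        output_dir="outputs",
    ),
)
trainer.train()
```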

## **Intended Use**

Maverick is designed for a range of applications, including but not limited to:

- **STEM Reasoning:** Solving problems in mathematics, science, and engineering
- **Academic Assistance:** Supporting research, tutoring, and study tasks
- **General NLP:** Text generation, summarisation, and question answering
- **Instruction Following:** Carrying out multi-step user instructions

While Maverick is a powerful tool for various applications, it is not intended for real-time, safety-critical systems or for processing sensitive personal information.

## **How to Use**

To utilise Maverick, ensure that you have the latest version of the `transformers` library installed:

```bash
pip install transformers
```

Then load the model and tokenizer and generate a response. The snippet below follows the standard Qwen 2.5 chat-template pattern; the model ID shown is a placeholder to substitute with the actual Maverick repository:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Spestly/Maverick-14B"  # placeholder ID -- substitute the real repository name

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Build a chat-formatted prompt
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Explain the Pythagorean theorem."},
]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# Generate, then drop the prompt tokens from the output
generated_ids = model.generate(**model_inputs, max_new_tokens=512)
generated_ids = [
    output_ids[len(input_ids):]
    for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```

## **Limitations**

Users should be aware of the following limitations:
- **Knowledge Cutoff:** The model's knowledge is current up to August 2024. It may not be aware of events or developments occurring after this date.
- **Language Support:** While primarily trained on English data, performance in other languages may be inconsistent.

## **Acknowledgements**

Maverick builds upon the work of [Ruben Roy](https://huggingface.co/rubenroy), particularly the Zurich-14B-GCv2-5m model, which is a fine-tuned version of Alibaba's Qwen 2.5 14B Instruct model. Gratitude is also extended to the open-source AI community for their contributions to tools and frameworks that facilitated the development of Maverick.

## **License**

Maverick is released under the [MIT License](https://opensource.org/license/mit), permitting wide usage with proper attribution.
## **Contact**
- Email: [email protected]