ZeroXClem committed
Commit 82928d5 · verified · 1 Parent(s): 44d4acb

Update README.md

Files changed (1): README.md (+159 -3)

README.md CHANGED

The previous card contained only the one-line description "ZeroXClem/LLama3.1-Hawkish-Theia-Fireball-8B is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):" above a bare 🧩 Configuration section; the updated card below replaces both.

---
tags:
- merge
- mergekit
- lazymergekit
- bfloat16
- text-generation-inference
- model_stock
- crypto
- finance
- llama
language:
- en
base_model:
- Chainbase-Labs/Theia-Llama-3.1-8B-v1
- EpistemeAI/Fireball-Meta-Llama-3.2-8B-Instruct-agent-003-128k-code-DPO
- mukaj/Llama-3.1-Hawkish-8B
pipeline_tag: text-generation
library_name: transformers
---

# ZeroXClem/LLama3.1-Hawkish-Theia-Fireball-8B

**ZeroXClem/LLama3.1-Hawkish-Theia-Fireball-8B** is a merge of three pre-trained Llama-based models, built with the [mergekit](https://github.com/cg123/mergekit) framework. It uses the **Model Stock** merge method to combine the specialized capabilities of **Theia-Llama**, **Fireball-Meta-Llama**, and **Llama-Hawkish**, targeting creative text generation, technical instruction following, financial reasoning, and conversational use.

## 🚀 Merged Models

This merge incorporates the following models:

- [**Chainbase-Labs/Theia-Llama-3.1-8B-v1**](https://huggingface.co/Chainbase-Labs/Theia-Llama-3.1-8B-v1): contributes cryptocurrency-oriented knowledge, improving the model's ability to generate and understand crypto-related content.

- [**EpistemeAI/Fireball-Meta-Llama-3.2-8B-Instruct-agent-003-128k-code-DPO**](https://huggingface.co/EpistemeAI/Fireball-Meta-Llama-3.2-8B-Instruct-agent-003-128k-code-DPO): contributes instruction-following and coding ability, improving how the model interprets user commands and produces executable code snippets.

- [**mukaj/Llama-3.1-Hawkish-8B**](https://huggingface.co/mukaj/Llama-3.1-Hawkish-8B): contributes financial reasoning and mathematical precision for financial analysis, economic discussion, and quantitative problem-solving.

## 🧩 Merge Configuration

The configuration below outlines how the models are merged using the **Model Stock** method, balancing the strengths of each source model.

```yaml
# Merge configuration for ZeroXClem/LLama3.1-Hawkish-Theia-Fireball-8B using Model Stock
# (unchanged middle of the file omitted from this diff: the models list,
#  merge_method: model_stock, and base_model: mukaj/Llama-3.1-Hawkish-8B)
normalize: false
int8_mask: true
dtype: bfloat16
```
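
To reproduce the merge locally, the configuration file can be passed to mergekit's `mergekit-yaml` command-line entry point. A minimal sketch, assuming mergekit is installed (`pip install mergekit`) and the complete configuration (including the sections omitted from this diff) is saved as `config.yaml`; the output directory name is illustrative:

```python
import subprocess

# Run mergekit on the saved configuration and write the merged model to disk.
subprocess.run(
    ["mergekit-yaml", "config.yaml", "./LLama3.1-Hawkish-Theia-Fireball-8B"],
    check=True,
)
```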

### Key Parameters

- **Merge Method (`merge_method`):** Uses the **Model Stock** method, described in [Model Stock](https://arxiv.org/abs/2403.19522), which averages the fine-tuned models' weights and interpolates the result back toward the base model. A toy sketch of the idea follows this list.

- **Models (`models`):** The models to be merged:
  - **Chainbase-Labs/Theia-Llama-3.1-8B-v1:** cryptocurrency-oriented knowledge and content generation.
  - **EpistemeAI/Fireball-Meta-Llama-3.2-8B-Instruct-agent-003-128k-code-DPO:** instruction-following and coding capabilities.
  - **mukaj/Llama-3.1-Hawkish-8B:** financial reasoning and mathematical precision.

- **Base Model (`base_model`):** The foundational model for the merge, here **mukaj/Llama-3.1-Hawkish-8B**.

- **Normalization (`normalize`):** Set to `false` so the original scaling of the model weights is retained during the merge.

- **INT8 Mask (`int8_mask`):** Enabled (`true`) so intermediate merge masks are stored in int8, reducing memory use during the merge without a meaningful loss in precision.

- **Data Type (`dtype`):** `bfloat16`, keeping the merged weights compact while preserving enough numerical precision.
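
For intuition, here is a toy, single-tensor sketch of the Model Stock idea from the paper linked above: average the fine-tuned weights, then interpolate back toward the base weights with a ratio derived from how similar the task vectors are. This is an illustration only, not mergekit's implementation; the helper function and tensors are hypothetical.

```python
import torch
import torch.nn.functional as F

def model_stock_merge(base: torch.Tensor, finetuned: list[torch.Tensor]) -> torch.Tensor:
    """Toy Model Stock-style merge for a single weight tensor (illustrative only)."""
    k = len(finetuned)
    deltas = [w - base for w in finetuned]        # task vectors relative to the base
    avg_delta = torch.stack(deltas).mean(dim=0)   # averaged task vector

    # Average pairwise cosine similarity between task vectors (the paper's cos(theta)),
    # clamped to stay positive for numerical stability in this toy example.
    cosines = [
        F.cosine_similarity(deltas[i].flatten(), deltas[j].flatten(), dim=0)
        for i in range(k) for j in range(i + 1, k)
    ]
    cos_theta = torch.stack(cosines).mean().clamp(min=1e-4)

    # Interpolation ratio t = k*cos / ((k-1)*cos + 1): t approaches 1 when the
    # task vectors agree, pulling the result toward the simple average; it
    # approaches 0 when they disagree, pulling the result toward the base.
    t = (k * cos_theta) / ((k - 1) * cos_theta + 1)
    return base + t * avg_delta

# Example with random tensors standing in for one layer's weights.
base = torch.randn(64, 64)
merged = model_stock_merge(base, [base + 0.1 * torch.randn(64, 64) for _ in range(3)])
print(merged.shape)
```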

## 🏆 Performance Highlights

- **Cryptocurrency Knowledge:** Stronger generation and comprehension of crypto-related content, useful for blockchain discussions, crypto market analysis, and related queries.

- **Instruction Following and Coding:** Better understanding and execution of user instructions, plus generation of accurate, executable code snippets for coding assistance and technical support.

- **Financial Reasoning and Mathematical Precision:** Handles financial analyses, economic discussions, and quantitative problem-solving, suiting the model to financial modeling, investment analysis, and education.

- **Smooth Weight Blending:** The Model Stock method integrates the source models' attributes into a single set of weights, giving balanced performance across these specialized tasks.

- **Efficient Inference:** The `bfloat16` weights keep memory use and latency modest at inference time.

## 🎯 Use Cases & Applications

**ZeroXClem/LLama3.1-Hawkish-Theia-Fireball-8B** is designed for environments that combine creative generation, technical instruction following, financial reasoning, and conversational interaction. Ideal applications include:

- **Cryptocurrency Analysis and Reporting:** Generating detailed reports, analyses, and summaries related to blockchain projects, crypto markets, and financial technologies.

- **Coding Assistance and Technical Support:** Providing accurate, executable code snippets, debugging assistance, and technical explanations for developers.

- **Financial Modeling and Investment Analysis:** Helping analysts and investors build models, perform economic analyses, and support investment decisions with precise calculations and reasoning.

- **Educational Tools and Tutoring Systems:** Offering detailed explanations, answering complex questions, and assisting with educational content in finance, economics, and mathematics.

- **Interactive Conversational Agents:** Powering chatbots and virtual assistants with specialized knowledge of cryptocurrency, finance, and technical domains.

- **Content Generation for Finance and Tech Blogs:** Creating high-quality, contextually relevant content for blogs, articles, and marketing material focused on finance, technology, and cryptocurrency.

## 📝 Usage

To use **ZeroXClem/LLama3.1-Hawkish-Theia-Fireball-8B**, follow the steps below.

### Installation

First, install the required libraries:

```bash
pip install -qU transformers accelerate
```
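
Optionally, check your hardware before loading the model; as a rough rule of thumb (not an exact requirement), an 8B-parameter model in `bfloat16` needs about 16 GB of GPU memory for the weights alone:

```python
import torch

# Report the available accelerator so you know whether device_map="auto" will
# place the model on a GPU or fall back to (much slower) CPU inference.
if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print(f"GPU: {props.name}, {props.total_memory / 1e9:.1f} GB total memory")
else:
    print("No CUDA device detected; the model will run on CPU.")
```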

### Example Code

Below is an example of how to load and use the model for text generation:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline
import torch

# Define the model name
model_name = "ZeroXClem/LLama3.1-Hawkish-Theia-Fireball-8B"

# Load the tokenizer
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Load the model in bfloat16 and place it automatically across available devices
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    device_map="auto"
)

# Initialize the pipeline (dtype and device placement are already set on the model above)
text_generator = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
)

# Define the input prompt
prompt = "Explain the impact of decentralized finance on traditional banking systems."

# Generate the output
outputs = text_generator(
    prompt,
    max_new_tokens=150,
    do_sample=True,
    temperature=0.7,
    top_k=50,
    top_p=0.95
)

# Print the generated text
print(outputs[0]["generated_text"])
```
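
Because the source models are instruction-tuned Llama variants, chat-style prompting through the tokenizer's chat template often works better than raw text prompts. A sketch, assuming the merged model ships a Llama-style chat template (the system prompt and user message are placeholders):

```python
# Build a chat-formatted prompt and generate a reply with the model loaded above.
messages = [
    {"role": "system", "content": "You are a concise financial analysis assistant."},
    {"role": "user", "content": "Compare dollar-cost averaging with lump-sum investing."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(
    input_ids,
    max_new_tokens=200,
    do_sample=True,
    temperature=0.7,
    top_p=0.95,
)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```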

### Notes

- **Fine-Tuning:** This merged model may require fine-tuning to optimize performance for specific applications, especially in highly specialized domains such as cryptocurrency and finance.

- **Resource Requirements:** Ensure your environment has sufficient computational resources, ideally GPU hardware, to run the model efficiently at inference time.

- **Customization:** Adjust parameters such as `temperature`, `top_k`, and `top_p` to control the creativity and diversity of the generated text; a small sketch follows this list.
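
As a starting point for that tuning, here are two illustrative sampling presets for the `text_generator` pipeline constructed in the example above; the exact values are arbitrary and should be adjusted for your task:

```python
# Lower temperature / tighter nucleus for factual, financial-style answers;
# higher values for more exploratory, creative output.
precise = dict(do_sample=True, temperature=0.3, top_k=20, top_p=0.9, max_new_tokens=200)
creative = dict(do_sample=True, temperature=0.9, top_k=100, top_p=0.98, max_new_tokens=200)

prompt = "Outline the main risks of holding a leveraged crypto position overnight."
print(text_generator(prompt, **precise)[0]["generated_text"])
print(text_generator(prompt, **creative)[0]["generated_text"])
```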

## 📜 License

This model is open-sourced under the **Apache-2.0 License**.

## 💡 Tags

- `merge`
- `mergekit`
- `model_stock`
- `Llama`
- `Hawkish`
- `Theia`
- `Fireball`
- `ZeroXClem/LLama3.1-Hawkish-Theia-Fireball-8B`
- `Chainbase-Labs/Theia-Llama-3.1-8B-v1`
- `EpistemeAI/Fireball-Meta-Llama-3.2-8B-Instruct-agent-003-128k-code-DPO`
- `mukaj/Llama-3.1-Hawkish-8B`