---
library_name: transformers
tags:
- text-generation-inference
---

![8.png](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/vebaBsL6MsLveGCH3y1ig.png)
Blaze.1-27B-Reflection is a Gemma 2-based, 27B-parameter model. Gemma is a family of lightweight, state-of-the-art open models from Google, built using the same research and technology behind the Gemini models. These models are text-to-text, decoder-only large language models, available in English, with open weights for both pre-trained and instruction-tuned variants. Gemma models are well suited to a variety of text generation tasks, including question answering, summarization, and reasoning. Blaze.1-27B-Reflection is fine-tuned on self-reflection and behavioral data, using synthetic long chain-of-thought reasoning datasets generated by models such as DeepSeek and QwQ.
# **Quickstart Chat Template**

Below are some code snippets to help you get started quickly with the model. First, install the Transformers library:

```sh
pip install -U transformers
```

Then copy the snippet from the section relevant to your use case.
# **Running with the `pipeline` API**

```python
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="prithivMLmods/Blaze.1-27B-Reflection",
    model_kwargs={"torch_dtype": torch.bfloat16},
    device="cuda",  # replace with "mps" to run on a Mac device
)

messages = [
    {"role": "user", "content": "Who are you? Please, answer in pirate-speak."},
]

outputs = pipe(messages, max_new_tokens=256)
assistant_response = outputs[0]["generated_text"][-1]["content"].strip()
print(assistant_response)
# Ahoy, matey! I be Gemma, a digital scallywag, a language-slingin' parrot of the digital seas. I be here to help ye with yer wordy woes, answer yer questions, and spin ye yarns of the digital world. So, what be yer pleasure, eh? 🦜
```
# **Running the model on a single / multi GPU**

```python
# pip install accelerate
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("prithivMLmods/Blaze.1-27B-Reflection")
model = AutoModelForCausalLM.from_pretrained(
    "prithivMLmods/Blaze.1-27B-Reflection",
    device_map="auto",
    torch_dtype=torch.bfloat16,
)

input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")

outputs = model.generate(**input_ids, max_new_tokens=32)
print(tokenizer.decode(outputs[0]))
```
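
For long generations, you may prefer to print tokens as they are produced rather than waiting for `generate` to finish. A minimal sketch using Transformers' `TextStreamer`, reusing the `model`, `tokenizer`, and `input_ids` objects from the snippet above:

```python
from transformers import TextStreamer

# Stream decoded tokens to stdout as they are generated;
# skip_prompt=True hides the echoed input prompt.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
_ = model.generate(**input_ids, max_new_tokens=256, streamer=streamer)
```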
You can ensure the correct chat template is applied by using `tokenizer.apply_chat_template`, as follows:

```python
messages = [
    {"role": "user", "content": "Write me a poem about Machine Learning."},
]
# add_generation_prompt=True appends the model-turn header so generation starts cleanly.
input_ids = tokenizer.apply_chat_template(
    messages, return_tensors="pt", return_dict=True, add_generation_prompt=True
).to("cuda")

outputs = model.generate(**input_ids, max_new_tokens=256)
print(tokenizer.decode(outputs[0]))
```
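
The same template handles multi-turn conversations: pass the full history with alternating `user` and `assistant` roles. Note that Gemma's chat template does not define a `system` role, so keep any instructions inside the user turns. A minimal sketch, with illustrative conversation content:

```python
# Illustrative multi-turn history; roles must alternate user/assistant.
chat = [
    {"role": "user", "content": "Write me a poem about Machine Learning."},
    {"role": "assistant", "content": "Silicon minds in layered thought..."},
    {"role": "user", "content": "Now shorten it to four lines."},
]
input_ids = tokenizer.apply_chat_template(
    chat, return_tensors="pt", return_dict=True, add_generation_prompt=True
).to("cuda")

outputs = model.generate(**input_ids, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```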
<a name="precisions"></a>

# **Running the model on a GPU using different precisions**

The native weights of this model were exported in `bfloat16` precision.

You can also use `float32` by skipping the dtype, but this brings no precision increase (the model weights are just upcast to `float32`). See the example below.

* _Upcasting to `torch.float32`_

```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("prithivMLmods/Blaze.1-27B-Reflection")
# With no torch_dtype specified, the weights are loaded (upcast) in float32.
model = AutoModelForCausalLM.from_pretrained(
    "prithivMLmods/Blaze.1-27B-Reflection",
    device_map="auto",
)

input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")

outputs = model.generate(**input_ids, max_new_tokens=32)
print(tokenizer.decode(outputs[0]))
```
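
To see what a given precision actually costs, you can check the loaded model's footprint. A minimal sketch using `get_memory_footprint`, which Transformers exposes on loaded models:

```python
# Size of the model's parameters and buffers, in bytes;
# a 27B-parameter model needs roughly 4 bytes per parameter in float32.
print(f"Memory footprint: {model.get_memory_footprint() / 1e9:.2f} GB")
```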
# **Intended Use**

Blaze.1-27B-Reflection is designed for advanced reasoning tasks that require long chain-of-thought processing, self-reflection, and behavioral analysis. Its primary applications include:

1. **Question Answering**: The model excels at providing detailed, step-by-step answers to complex queries (see the usage sketch after this list).
2. **Summarization**: It can generate concise summaries of long text inputs while preserving key information and logical flow.
3. **Reasoning and Decision Support**: With its fine-tuning on self-reflection data, it can assist with tasks that require thoughtful analysis, such as legal reasoning, policy development, and strategic planning.
4. **Conversational AI**: Due to its instruction-tuned nature, it performs well in interactive dialogue systems, offering coherent and context-aware responses.
5. **Creative Writing**: The model can generate high-quality content for creative tasks, including storytelling and content ideation.
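
As an illustration of the step-by-step question answering use case (item 1 above), a minimal sketch reusing the `pipe` object from the pipeline example; the prompt is hypothetical:

```python
# Hypothetical word problem; the model is tuned to produce long chain-of-thought answers.
messages = [
    {
        "role": "user",
        "content": "A tank holds 240 liters, drains at 8 liters per minute, and is "
                   "refilled at 5 liters per minute. How long until it is empty? "
                   "Think step by step.",
    },
]
outputs = pipe(messages, max_new_tokens=512)
print(outputs[0]["generated_text"][-1]["content"].strip())
```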
# **Limitations**

1. **Language and Domain Constraints**: While the model is effective in English, it may perform poorly on non-English inputs or domain-specific jargon outside its training scope.
2. **Context Retention Issues**: In very long conversations or documents, the model may lose track of earlier context, leading to incomplete or off-topic responses.
3. **Over-reliance on Synthetic Data**: Since Blaze.1-27B-Reflection is fine-tuned on synthetic datasets, it may exhibit biases or inconsistencies when faced with real-world, nuanced scenarios.
4. **Circular Reasoning**: The model may occasionally enter recursive reasoning loops, generating verbose responses without reaching a clear conclusion.
5. **Computational Demand**: As a 27B-parameter model, it requires substantial computational resources for both inference and fine-tuning, which may limit accessibility for users with modest hardware (see the quantization sketch after this list).
6. **Hallucinations**: Like most large language models, it may confidently generate incorrect information, especially when asked about facts or events outside its training data.
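
A common way to reduce the memory cost noted in item 5 is weight quantization. A minimal sketch using `bitsandbytes` 4-bit loading; quantization is not covered by this model card, so treat this as an illustrative option rather than a tested configuration:

```python
# pip install accelerate bitsandbytes
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig

# Load the weights in 4-bit NF4 with bfloat16 compute to cut memory use roughly 4x.
quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained("prithivMLmods/Blaze.1-27B-Reflection")
model = AutoModelForCausalLM.from_pretrained(
    "prithivMLmods/Blaze.1-27B-Reflection",
    device_map="auto",
    quantization_config=quantization_config,
)
```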