|
--- |
|
base_model: |
|
- huihui-ai/DeepSeek-R1-Distill-Qwen-32B-abliterated |
|
library_name: transformers |
|
tags: |
|
- Text Generation |
|
- text-generation-inference |
|
- Inference Endpoints |
|
- Transformers |
|
- Fusion |
|
language: |
|
- en |
|
--- |
|
# DeepSeek-R1-Distill-Qwen-Coder-32B-Fusion-9010 |
|
|
|
## Overview |
|
`DeepSeek-R1-Distill-Qwen-Coder-32B-Fusion-9010` is a mixed model that combines the strengths of two powerful Qwen2.5-based 32B models:
|
[huihui-ai/DeepSeek-R1-Distill-Qwen-32B-abliterated](https://huggingface.co/huihui-ai/DeepSeek-R1-Distill-Qwen-32B-abliterated) and |
|
[huihui-ai/Qwen2.5-Coder-32B-Instruct-abliterated](https://huggingface.co/huihui-ai/Qwen2.5-Coder-32B-Instruct-abliterated). |
|
|
|
**Although this is a simple mix, the model is usable and has not produced any gibberish so far.**
|
|
|
This is an experiment aimed at improving the model's reasoning ability in programming and code. If any of these models meets your expectations, please give it a thumbs up; this will help us determine which model best meets everyone's expectations.
|
|
|
## Model Details |
|
- **Base Models:** |
|
- [huihui-ai/DeepSeek-R1-Distill-Qwen-32B-abliterated](https://huggingface.co/huihui-ai/DeepSeek-R1-Distill-Qwen-32B-abliterated) (90%) |
|
- [huihui-ai/Qwen2.5-Coder-32B-Instruct-abliterated](https://huggingface.co/huihui-ai/Qwen2.5-Coder-32B-Instruct-abliterated) (10%) |
|
- **Model Size:** 32B parameters |
|
- **Architecture:** Qwen2.5 |
|
- **Mixing Ratio:** 9:1 (DeepSeek-R1-Distill-Qwen-32B-abliterated : Qwen2.5-Coder-32B-Instruct-abliterated); see the sketch below
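
The exact script used to produce this fusion is not published in this card. The sketch below is a hypothetical illustration of what a plain 90/10 linear interpolation of matching weights could look like, assuming both checkpoints share the same Qwen2.5 architecture and parameter names:

```python
# Hypothetical sketch of a 9:1 linear weight merge (not the authors' exact script).
# Assumes both checkpoints share the Qwen2.5 architecture and identical tensor names.
import torch
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained(
    "huihui-ai/DeepSeek-R1-Distill-Qwen-32B-abliterated", torch_dtype=torch.bfloat16
)
coder = AutoModelForCausalLM.from_pretrained(
    "huihui-ai/Qwen2.5-Coder-32B-Instruct-abliterated", torch_dtype=torch.bfloat16
)

alpha = 0.9  # 90% DeepSeek-R1-Distill weights, 10% Qwen2.5-Coder weights
coder_state = coder.state_dict()
merged_state = {}
for name, tensor in base.state_dict().items():
    # Linearly interpolate matching parameters; copy unmatched tensors unchanged.
    if name in coder_state and coder_state[name].shape == tensor.shape:
        merged_state[name] = alpha * tensor + (1.0 - alpha) * coder_state[name]
    else:
        merged_state[name] = tensor

base.load_state_dict(merged_state)
base.save_pretrained("DeepSeek-R1-Distill-Qwen-Coder-32B-Fusion-9010")
```

Loading two 32B checkpoints in memory at once is expensive; in practice a merge like this is usually performed shard-by-shard or with a dedicated merging tool such as mergekit.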
|
|
|
## Usage |
|
You can use this mixed model in your applications by loading it with Hugging Face's `transformers` library: |
|
|
|
```python |
|
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig |
|
import torch |
|
|
|
# Load the model and tokenizer |
|
model_name = "huihui-ai/DeepSeek-R1-Distill-Qwen-Coder-32B-Fusion-9010" |
|
#quant_config_4 = BitsAndBytesConfig( |
|
# load_in_4bit=True, |
|
# bnb_4bit_compute_dtype=torch.bfloat16, |
|
# bnb_4bit_use_double_quant=True, |
|
# llm_int8_enable_fp32_cpu_offload=True, |
|
#) |
|
|
|
quant_config_8 = BitsAndBytesConfig( |
|
load_in_8bit=True, |
|
llm_int8_enable_fp32_cpu_offload=True, |
|
llm_int8_has_fp16_weight=True, |
|
) |
|
|
|
model = AutoModelForCausalLM.from_pretrained( |
|
model_name, |
|
trust_remote_code=True, |
|
torch_dtype=torch.bfloat16, |
|
quantization_config=quant_config_8, |
|
device_map="auto", |
|
) |
|
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True) |
|
|
|
# Initialize conversation context |
|
initial_messages = [ |
|
{"role": "system", "content": "You are a helpful assistant."} |
|
] |
|
messages = initial_messages.copy() # Copy the initial conversation context |
|
|
|
# Enter conversation loop |
|
while True: |
|
# Get user input |
|
user_input = input("User: ").strip() # Strip leading and trailing spaces |
|
|
|
# If the user types '/exit', end the conversation |
|
if user_input.lower() == "/exit": |
|
print("Exiting chat.") |
|
break |
|
|
|
# If the user types '/clean', reset the conversation context |
|
if user_input.lower() == "/clean": |
|
messages = initial_messages.copy() # Reset conversation context |
|
print("Chat history cleared. Starting a new conversation.") |
|
continue |
|
|
|
# If input is empty, prompt the user and continue |
|
if not user_input: |
|
print("Input cannot be empty. Please enter something.") |
|
continue |
|
|
|
# Add user input to the conversation |
|
messages.append({"role": "user", "content": user_input}) |
|
|
|
# Build the chat template |
|
text = tokenizer.apply_chat_template( |
|
messages, |
|
tokenize=False, |
|
add_generation_prompt=True |
|
) |
|
|
|
# Tokenize input and prepare it for the model |
|
model_inputs = tokenizer([text], return_tensors="pt").to(model.device) |
|
|
|
# Generate a response from the model |
|
generated_ids = model.generate( |
|
**model_inputs, |
|
max_new_tokens=8192 |
|
) |
|
|
|
# Extract model output, removing special tokens |
|
generated_ids = [ |
|
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids) |
|
] |
|
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0] |
|
|
|
# Add the model's response to the conversation |
|
messages.append({"role": "assistant", "content": response}) |
|
|
|
# Print the model's response |
|
print(f"Response: {response}") |
|
|
|
``` |
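
The loop above prints each reply only after generation finishes. If you prefer to see tokens as they are produced, `transformers` provides a `TextStreamer` that can be passed to `generate`; a minimal sketch, assuming `model`, `tokenizer`, and `model_inputs` are defined as in the example above:

```python
from transformers import TextStreamer

# Stream decoded tokens to stdout as they are generated,
# skipping the prompt and any special tokens.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=8192,
    streamer=streamer,
)
```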
|
|
|
## Use with ollama |
|
|
|
You can use [huihui_ai/deepseek-r1-Fusion](https://ollama.com/huihui_ai/deepseek-r1-Fusion) directly:
|
``` |
|
ollama run huihui_ai/deepseek-r1-Fusion |
|
``` |
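
Besides the interactive CLI, ollama also serves a local HTTP API (on port 11434 by default). The example below is a minimal sketch using Python's `requests` package, assuming the model has already been pulled; the prompt text is only illustrative.

```python
import requests

# Send a single chat turn to the local ollama server (default port 11434).
response = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "huihui_ai/deepseek-r1-Fusion",
        "messages": [
            {"role": "user", "content": "Write a Python function that reverses a string."}
        ],
        "stream": False,
    },
)
print(response.json()["message"]["content"])
```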
|
|
|
### Donation |
|
|
|
If you like it, please click 'like' and follow us for more updates. |
|
|
|
##### Your donation helps us continue further development and improvement; even the cost of a cup of coffee makes a difference.
|
- bitcoin: |
|
``` |
|
bc1qqnkhuchxw0zqjh2ku3lu4hq45hc6gy84uk70ge |
|
``` |
|
|