---
library_name: transformers
tags:
- Uncensored
- Abliterated
- Cubed Reasoning
- QwQ-32B
- reasoning
- thinking
- r1
- cot
- deepseek
- Qwen2.5
- Hermes
- DeepHermes
- DeepSeek
- DeepSeek-R1-Distill
- 128k context
- not-for-all-audiences
- merge
- mlx
- mlx-my-repo
base_model: DavidAU/Qwen2.5-QwQ-37B-Eureka-Triple-Cubed-abliterated-uncensored
---

# cs2764/Qwen2.5-QwQ-37B-Eureka-Triple-Cubed-abliterated-uncensored-mlx-4Bit

The model [cs2764/Qwen2.5-QwQ-37B-Eureka-Triple-Cubed-abliterated-uncensored-mlx-4Bit](https://huggingface.co/cs2764/Qwen2.5-QwQ-37B-Eureka-Triple-Cubed-abliterated-uncensored-mlx-4Bit) was converted to MLX format (4-bit quantized) from [DavidAU/Qwen2.5-QwQ-37B-Eureka-Triple-Cubed-abliterated-uncensored](https://huggingface.co/DavidAU/Qwen2.5-QwQ-37B-Eureka-Triple-Cubed-abliterated-uncensored) using mlx-lm version **0.22.1**.
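
For reference, a conversion of this kind can be reproduced with the `convert` helper that mlx-lm exposes. The sketch below is an assumption based on the mlx-lm 0.22.x API, not the exact invocation used for this repo; `q_bits=4` is inferred from the `-mlx-4Bit` suffix in the repo name.

```python
# Sketch of the conversion step (assumed mlx-lm 0.22.x API; the exact
# command used for this repo is not recorded here).
from mlx_lm import convert

convert(
    hf_path="DavidAU/Qwen2.5-QwQ-37B-Eureka-Triple-Cubed-abliterated-uncensored",
    mlx_path="Qwen2.5-QwQ-37B-Eureka-Triple-Cubed-abliterated-uncensored-mlx-4Bit",
    quantize=True,  # quantize the weights during conversion
    q_bits=4,       # 4-bit weights, matching the "-mlx-4Bit" suffix
)
```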

## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

# Download (if needed) and load the 4-bit MLX weights and tokenizer
model, tokenizer = load("cs2764/Qwen2.5-QwQ-37B-Eureka-Triple-Cubed-abliterated-uncensored-mlx-4Bit")

prompt = "hello"

# Wrap the prompt in the model's chat template when one is available
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
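
With the defaults, `generate` decodes greedily and stops at a fairly low token limit, which can truncate this model's long reasoning traces. Continuing from the snippet above, the sketch below shows how longer, sampled output might be requested; it assumes the mlx-lm 0.22.x generation API (`max_tokens` keyword and a sampler built with `make_sampler`), so verify against your installed version.

```python
# Sketch: longer, sampled generation (assumed mlx-lm 0.22.x API),
# continuing from the model/tokenizer/prompt defined above.
from mlx_lm.sample_utils import make_sampler

sampler = make_sampler(temp=0.7, top_p=0.9)  # sample instead of greedy decoding
response = generate(
    model,
    tokenizer,
    prompt=prompt,
    max_tokens=1024,  # raise the cap for long reasoning/thinking outputs
    sampler=sampler,
    verbose=True,
)
```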