darkc0de committed · verified · Commit 31e7684 · 1 Parent(s): 6c10108

Update README.md

Files changed (1): README.md +25 -7
README.md CHANGED
@@ -1,13 +1,11 @@
 ---
-base_model: darkc0de/BuddyGlassUncensored2025.2
+base_model: huihui-ai/Falcon3-10B-Instruct-abliterated
 tags:
 - text-generation-inference
 - transformers
 - unsloth
 - llama
 - trl
-- llama-cpp
-- gguf-my-repo
 license: apache-2.0
 language:
 - en
@@ -15,9 +13,29 @@ datasets:
 - mlabonne/orpo-dpo-mix-40k
 ---
 
-# darkc0de/BuddyGlassUncensored2025.2-GGUF
-This model was converted to GGUF format from [`darkc0de/BuddyGlassUncensored2025.2`](https://huggingface.co/darkc0de/BuddyGlassUncensored2025.2) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
-Refer to the [original model card](https://huggingface.co/darkc0de/BuddyGlassUncensored2025.2) for more details on the model.
-
-Run with [LM Studio](https://lmstudio.ai/)
+
+![image/webp](https://cdn-uploads.huggingface.co/production/uploads/6540a02d1389943fef4d2640/AWgZEpPD-oxWMyOmHhoi1.webp)
+
+This is **huihui-ai/Falcon3-10B-Instruct-abliterated** finetuned with **unsloth** ORPO on **mlabonne/orpo-dpo-mix-40k** for one epoch.
+
+The result is a very good uncensored model for its size (10B): when prompted **correctly**, it shows no refusals. The model scores an average of 33.44% on the Open LLM Leaderboard, and with the Q4_K_M GGUF coming in at around 6.4 GB it is usable on most devices, even without a GPU.
+
+Run the GGUF with [LM Studio](https://lmstudio.ai/).
+
+## Open LLM Leaderboard Evaluation Results
+
+| Metric              | Value (%) |
+|---------------------|----------:|
+| **Average**         |     33.44 |
+| IFEval (0-shot)     |     77.31 |
+| BBH (3-shot)        |     43.57 |
+| MATH Lvl 5 (4-shot) |     22.89 |
+| GPQA (0-shot)       |     10.40 |
+| MuSR (0-shot)       |      9.39 |
+| MMLU-PRO (5-shot)   |     37.07 |
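
As a sanity check on the table above, the reported Average is simply the unweighted mean of the six benchmark scores. A minimal sketch (the values are copied from the table in this card):

```python
# Unweighted mean of the six Open LLM Leaderboard benchmark scores (in %),
# as reported in the table above.
scores = {
    "IFEval (0-shot)": 77.31,
    "BBH (3-shot)": 43.57,
    "MATH Lvl 5 (4-shot)": 22.89,
    "GPQA (0-shot)": 10.40,
    "MuSR (0-shot)": 9.39,
    "MMLU-PRO (5-shot)": 37.07,
}
average = sum(scores.values()) / len(scores)
print(f"Average: {average:.2f}%")  # → Average: 33.44%
```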
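
The ~6.4 GB figure is consistent with a back-of-the-envelope estimate: Q4_K_M mixes quantization types but averages roughly 4.8–5 bits per weight. A rough sketch, assuming ~10B parameters and ~4.85 effective bits per weight (both assumptions, not measured from the actual file):

```python
# Rough GGUF file-size estimate: params * bits-per-weight / 8 bytes.
# The real file is slightly larger due to embeddings kept at higher
# precision plus tokenizer/metadata overhead.
params = 10e9           # assumed ~10B parameters (Falcon3-10B)
bits_per_weight = 4.85  # assumed average for Q4_K_M (mixed 4/6-bit blocks)
size_gb = params * bits_per_weight / 8 / 1e9
print(f"~{size_gb:.1f} GB before overhead")  # ~6.1 GB, in line with ~6.4 GB
```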