soob3123 committed
Commit 807f8be · verified · 1 Parent(s): db5d2d0

Update README.md

Files changed (1)
  1. README.md +18 -48
README.md CHANGED
@@ -1,11 +1,5 @@
  ---
  base_model: soob3123/amoral-gemma3-4B-v2-qat
- datasets:
- - TheDrummer/AmoralQA-v2
- language:
- - en
- license: apache-2.0
- pipeline_tag: text-generation
  tags:
  - text-generation-inference
  - transformers
@@ -13,50 +7,26 @@ tags:
  - analytical-tasks
  - bias-neutralization
  - uncensored
- - llama-cpp
- - gguf-my-repo
+ language:
+ - en
+ license: apache-2.0
+ pipeline_tag: text-generation
+ datasets:
+ - TheDrummer/AmoralQA-v2
  ---
+ ![image/png](https://cdn-uploads.huggingface.co/production/uploads/62f93f9477b722f1866398c2/eNraUCUocrOhowWdIdtod.png)

- # soob3123/amoral-gemma3-4B-v2-qat-Q4_0-GGUF
- This model was converted to GGUF format from [`soob3123/amoral-gemma3-4B-v2-qat`](https://huggingface.co/soob3123/amoral-gemma3-4B-v2-qat) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
- Refer to the [original model card](https://huggingface.co/soob3123/amoral-gemma3-4B-v2-qat) for more details on the model.
-
- ## Use with llama.cpp
- Install llama.cpp through brew (works on Mac and Linux)
-
- ```bash
- brew install llama.cpp
-
- ```
- Invoke the llama.cpp server or the CLI.
-
- ### CLI:
- ```bash
- llama-cli --hf-repo soob3123/amoral-gemma3-4B-v2-qat-Q4_0-GGUF --hf-file amoral-gemma3-4b-v2-qat-q4_0.gguf -p "The meaning to life and the universe is"
- ```
-
- ### Server:
- ```bash
- llama-server --hf-repo soob3123/amoral-gemma3-4B-v2-qat-Q4_0-GGUF --hf-file amoral-gemma3-4b-v2-qat-q4_0.gguf -c 2048
- ```
-
- Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
+ > "Neutrality is not indifference. It is engagement with equal intensity."
+ > J. Robert Oppenheimer *[Lecture on Scientific Ethics, 1957]*

- Step 1: Clone llama.cpp from GitHub.
- ```
- git clone https://github.com/ggerganov/llama.cpp
- ```
+ ## QAT version of Amoral-Gemma-3

- Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
- ```
- cd llama.cpp && LLAMA_CURL=1 make
- ```
+ **Core Function:**
+ - Produces analytically neutral responses to sensitive queries
+ - Maintains factual integrity on controversial subjects
+ - Avoids value-judgment phrasing patterns

- Step 3: Run inference through the main binary.
- ```
- ./llama-cli --hf-repo soob3123/amoral-gemma3-4B-v2-qat-Q4_0-GGUF --hf-file amoral-gemma3-4b-v2-qat-q4_0.gguf -p "The meaning to life and the universe is"
- ```
- or
- ```
- ./llama-server --hf-repo soob3123/amoral-gemma3-4B-v2-qat-Q4_0-GGUF --hf-file amoral-gemma3-4b-v2-qat-q4_0.gguf -c 2048
- ```
+ **Response Characteristics:**
+ - No inherent moral framing ("evil slop" reduction)
+ - Emotionally neutral tone enforcement
+ - Epistemic humility protocols (avoids "thrilling", "wonderful", etc.)
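
The updated card drops the llama.cpp usage steps, so a minimal, hedged sketch of querying the Q4_0 GGUF build through llama-server's OpenAI-compatible endpoint is shown below. The repo and file names are taken from the removed section above; the default port and the `/v1/chat/completions` path assume a recent llama-server build and are not part of this commit.

```bash
# Sketch only: repo/file names come from the removed llama.cpp section above;
# assumes a recent llama-server build exposing the OpenAI-compatible API on port 8080.
llama-server --hf-repo soob3123/amoral-gemma3-4B-v2-qat-Q4_0-GGUF \
             --hf-file amoral-gemma3-4b-v2-qat-q4_0.gguf -c 2048 &

# Once the server reports it is listening, send a chat completion request:
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "messages": [
          {"role": "user", "content": "Summarize the main arguments on both sides of a contested policy question."}
        ],
        "max_tokens": 256
      }'
```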