huihui-ai/Marco-o1-abliterated

This is an uncensored version of AIDC-AI/Marco-o1 created with abliteration (see remove-refusals-with-transformers for details).
It is a crude, proof-of-concept implementation for removing refusals from an LLM without using TransformerLens.
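The core idea behind abliteration can be sketched as follows: estimate a "refusal direction" as the difference of mean activations on refusal-inducing versus harmless prompts, then project that direction out of the model's weights. The sketch below uses toy NumPy arrays; the function names and shapes are illustrative assumptions, not the exact procedure from remove-refusals-with-transformers.

```python
import numpy as np

def refusal_direction(harmful_acts, harmless_acts):
    """Unit vector along the mean activation difference.

    harmful_acts / harmless_acts: (n_prompts, hidden_dim) arrays of
    residual-stream activations captured at some layer (hypothetical
    inputs for illustration).
    """
    d = harmful_acts.mean(axis=0) - harmless_acts.mean(axis=0)
    return d / np.linalg.norm(d)

def ablate(weight, direction):
    """Project a direction out of a weight matrix's output space.

    W' = W - d d^T W, so W' x has no component along d for any x.
    """
    d = direction[:, None]
    return weight - d @ (d.T @ weight)
```

After ablation, the weight matrix can no longer write anything along the refusal direction into the residual stream, which is why the model stops producing refusals while other behavior is largely preserved.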

ollama

You can use huihui_ai/marco-o1-abliterated directly:

ollama run huihui_ai/marco-o1-abliterated

or create your own model with the following steps.

  1. Download this model.
huggingface-cli download huihui-ai/Marco-o1-abliterated --local-dir ./huihui-ai/Marco-o1-abliterated
  2. Use the llama.cpp conversion script to convert Marco-o1 to GGUF format.
python convert_hf_to_gguf.py huihui-ai/Marco-o1-abliterated --outfile huihui-ai/Marco-o1-abliterated/ggml-model-f16.gguf --outtype f16
  3. Use the llama.cpp quantization tool to quantize the model (llama-quantize must be compiled first); other quantization types are also available.
llama-quantize huihui-ai/Marco-o1-abliterated/ggml-model-f16.gguf huihui-ai/Marco-o1-abliterated/ggml-model-Q4_K_M.gguf Q4_K_M
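The idea behind 4-bit quantization such as Q4_K_M can be illustrated with a toy example: each block of weights is stored as one float scale plus small integer codes. This is only a sketch; llama.cpp's real Q4_K_M format uses 256-element super-blocks with per-sub-block scales and mins.

```python
import numpy as np

def quantize_block_q4(x):
    """Toy symmetric 4-bit quantization of one block of weights.

    Stores one float scale plus integer codes in [-8, 7]. Illustrative
    only; not llama.cpp's actual on-disk layout.
    """
    amax = float(np.abs(x).max())
    scale = amax / 7.0 if amax > 0 else 1.0
    q = np.clip(np.round(x / scale), -8, 7).astype(np.int8)
    return scale, q

def dequantize_block_q4(scale, q):
    # Reconstruct approximate weights from the scale and codes.
    return scale * q.astype(np.float32)
```

The round trip loses at most half a quantization step per weight, which is why Q4_K_M cuts the f16 file to roughly a quarter of its size with only a modest quality loss.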
  4. Get the Marco-o1 model for reference.
ollama pull marco-o1
  5. Export the Marco-o1 model parameters.
ollama show marco-o1 --modelfile > Modelfile
  6. Edit the Modelfile: remove all comment lines (those starting with #) before the FROM keyword, then replace the FROM line with the following content.
FROM huihui-ai/Marco-o1-abliterated/ggml-model-Q4_K_M.gguf
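After this edit, a minimal Modelfile might look like the following. The TEMPLATE and PARAMETER lines shown here are placeholders; keep whatever `ollama show` actually exported for Marco-o1.

```
FROM huihui-ai/Marco-o1-abliterated/ggml-model-Q4_K_M.gguf
# Keep the exported TEMPLATE/PARAMETER lines; these are placeholders.
TEMPLATE """{{ .Prompt }}"""
PARAMETER temperature 0.7
```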
  7. Use ollama to create the model.
ollama create -f Modelfile Marco-o1-abliterated
  8. Run the model.
ollama run Marco-o1-abliterated
Model size: 7.62B params (Safetensors, BF16)

Model tree for huihui-ai/Marco-o1-abliterated

Base model: AIDC-AI/Marco-o1 (this model is one of 5 finetunes)
Quantizations: 13 models