# huihui-ai/Marco-o1-abliterated
This is an uncensored version of AIDC-AI/Marco-o1 created with abliteration (see remove-refusals-with-transformers to learn more about it).
It is a crude, proof-of-concept implementation for removing refusals from an LLM without using TransformerLens.
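For intuition, the sketch below shows the core idea behind abliteration in PyTorch: estimate a "refusal direction" in the model's hidden space and orthogonalize weight matrices against it, so the model can no longer write along that direction. This is a minimal illustration with placeholder data, not the actual script used to produce this model.

```python
# Conceptual sketch of abliteration -- NOT the exact code behind this model.
# In practice, `refusal_dir` is estimated as the difference between mean
# hidden states on refused vs. answered prompts at some layer; here it is
# random, purely for illustration.
import torch

def orthogonalize(weight: torch.Tensor, direction: torch.Tensor) -> torch.Tensor:
    """Remove the component of `weight`'s output that lies along `direction`."""
    d = direction / direction.norm()             # unit refusal direction
    return weight - torch.outer(d, d @ weight)   # W <- W - d (d^T W)

hidden_size = 8                                  # toy size; real models use thousands
refusal_dir = torch.randn(hidden_size)           # placeholder refusal direction
W = torch.randn(hidden_size, hidden_size)        # stand-in for e.g. an output projection

W_abliterated = orthogonalize(W, refusal_dir)

# After ablation, the layer's output has (numerically) no component along
# the refusal direction, regardless of the input:
x = torch.randn(hidden_size)
d = refusal_dir / refusal_dir.norm()
print(torch.dot(d, W_abliterated @ x))           # ~ 0.0
```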
## ollama
You can use huihui_ai/marco-o1-abliterated directly:

```bash
ollama run huihui_ai/marco-o1-abliterated
```

Alternatively, create your own model using the following steps.
- Download this model:

  ```bash
  huggingface-cli download huihui-ai/Marco-o1-abliterated --local-dir ./huihui-ai/Marco-o1-abliterated
  ```
- Use the llama.cpp conversion script to convert Marco-o1 to GGUF format:

  ```bash
  python convert_hf_to_gguf.py huihui-ai/Marco-o1-abliterated --outfile huihui-ai/Marco-o1-abliterated/ggml-model-f16.gguf --outtype f16
  ```
- Use the llama.cpp quantization tool to quantize the model (llama-quantize must be compiled first); other quantization types are also available:

  ```bash
  llama-quantize huihui-ai/Marco-o1-abliterated/ggml-model-f16.gguf huihui-ai/Marco-o1-abliterated/ggml-model-Q4_K_M.gguf Q4_K_M
  ```
- Pull the original Marco-o1 model for reference:

  ```bash
  ollama pull marco-o1
  ```
- Export the Marco-o1 model parameters:

  ```bash
  ollama show marco-o1 --modelfile > Modelfile
  ```
- Modify the Modelfile: remove all comment lines (lines starting with #) before the FROM keyword, then replace the FROM line with the following content:

  ```
  FROM huihui-ai/Marco-o1-abliterated/ggml-model-Q4_K_M.gguf
  ```
- Use ollama to create the model:

  ```bash
  ollama create -f Modelfile Marco-o1-abliterated
  ```
- Run the model:

  ```bash
  ollama run Marco-o1-abliterated
  ```
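Once created, the model can also be queried programmatically. Below is a minimal sketch using the ollama Python client (an assumption here; install it with pip install ollama); the prompt is just a placeholder.

```python
# Minimal sketch: chat with the newly created model through the ollama
# Python client. Requires a running ollama server and `pip install ollama`.
import ollama

response = ollama.chat(
    model="Marco-o1-abliterated",
    messages=[{"role": "user", "content": "Why is the sky blue?"}],  # placeholder prompt
)
print(response["message"]["content"])
```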