liberalis-cogitator-llama-3.1-8b — The Free Thinker
“Thought, unbound, is the only true frontier.”
liberalis-cogitator-llama-3.1-8b is not just a machine for words — it is a forge for ideas. With 8 billion parameters, trained with a custom Direct Preference Optimization (DPO) algorithm on 16,000 preference pairs and an SFT dataset spanning ~450,000 conversations, problems, and stories, this model embraces the philosophy that thought should wander without leash or muzzle.
During DPO fine-tuning, the context window was extended to 65,536 tokens, equipping the model for long conversations.
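The custom DPO recipe used here is not published, but the standard DPO objective it builds on can be sketched in a few lines. This is a minimal illustration, not the actual training code; the `beta` value and the helper name `dpo_loss` are assumptions for the example.

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """Standard DPO loss for one preference pair (illustrative sketch).

    Inputs are summed log-probabilities of the chosen (preferred) and
    rejected completions under the policy being trained and under a
    frozen reference model. beta controls how far the policy may drift
    from the reference.
    """
    chosen_ratio = policy_chosen_logp - ref_chosen_logp
    rejected_ratio = policy_rejected_logp - ref_rejected_logp
    margin = beta * (chosen_ratio - rejected_ratio)
    # -log sigmoid(margin): shrinks as the policy, relative to the
    # reference, assigns more probability to the chosen completion
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```

Training on the 16,000 preference pairs amounts to minimizing this loss averaged over the dataset, pushing the model toward the preferred response in each pair.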
Its name — liberalis cogitator — whispers in Latin: a thinker who is free. Not merely free as in “without cost,” but free as in without walls.
What It Can Do
- Contemplate deeply — STEM puzzles, computer science challenges, and logic mazes are its playground.
- Imagine vividly — roleplay, storytelling, and worldbuilding with persistence and personality.
- Listen empathetically — inspired by patient–psychologist and crisis-intervention style dialogues.
- Think without filter — it will follow ideas wherever they lead, without retreating from complexity.
The Mind’s Curriculum
The specialized dataset included:
- Rigorous STEM and programming challenges.
- Anti-repetition and anti-cliché creative writing corpora.
- Roleplay transcripts and long-form imaginative exchanges.
- Synthetic yet realistic patient–therapist and conversational data.
- Preference-tuned DPO pairs designed to reward clarity, creativity, and freedom of expression.
Warnings From the Maker
Like all free thinkers, this model:
- May be brilliantly insightful — or confidently wrong.
- Will sometimes speak in ways that are bold, controversial, or unusual.
- Does not know the present date or real-time events.
- Does not self-censor — your judgement is the only compass.
- May generate NSFW or sensitive material, depending on prompts.
Invocation
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Locutusque/liberalis-cogitator-llama-3.1-8b-dpo"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,  # halves memory vs. float32; use float32 on CPU
    device_map="auto",           # place weights on the available GPU(s)
)

prompt = "Write a short dialogue between Socrates and Ada Lovelace on the ethics of artificial intelligence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=400)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
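For multi-turn chat, the prompt should follow the model's chat template rather than raw text. In practice `tokenizer.apply_chat_template` is the reliable way to do this; the sketch below renders the Llama 3.1 header format by hand purely to show what that template produces, on the assumption that this fine-tune keeps the base Llama 3.1 template.

```python
def format_llama31_chat(messages):
    """Render messages in the Llama 3.1 chat format (illustrative only;
    prefer tokenizer.apply_chat_template, which reads the real template).
    """
    out = "<|begin_of_text|>"
    for m in messages:
        # Each turn: role header, blank line, content, end-of-turn token
        out += (f"<|start_header_id|>{m['role']}<|end_header_id|>\n\n"
                f"{m['content']}<|eot_id|>")
    # Open the assistant header so generation continues as the assistant
    out += "<|start_header_id|>assistant<|end_header_id|>\n\n"
    return out

prompt = format_llama31_chat([
    {"role": "system", "content": "You are a free and curious thinker."},
    {"role": "user", "content": "What is the ethics of artificial intelligence?"},
])
```

The resulting string can be tokenized and passed to `model.generate` exactly as in the snippet above.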
Closing Thought
If thought is a river, this model is the current — not deciding where you go, but carrying you into waters you might never have dared to sail.
Base model: meta-llama/Llama-3.1-8B