Banner by CroissantWhyNot

N1 - A Chain-of-Thought Language Model

N1 is a small, experimental Chain-of-Thought (CoT) model based on the LLaMA architecture, developed by GoofyLM.

Model Details

  • Architecture: LLaMA-based
  • Parameter Count: 135M
  • Training Data: Closed-source dataset
  • Special Features: Chain-of-Thought reasoning capabilities
  • Note: The model often produces erratic, incoherent output
  • Note: You may need to add this Jinja chat template to the model:
{% for message in messages %}{% if loop.first and messages[0]['role'] != 'system' %}{{ '<|im_start|>system
You are a helpful AI assistant named N1, trained by GoofyLM<|im_end|>
' }}{% endif %}{{'<|im_start|>' + message['role'] + '
' + message['content'] + '<|im_end|>' + '
'}}{% endfor %}{% if add_generation_prompt %}{{ '<|im_start|>assistant
' }}{% endif %}
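As a sanity check, the template above can be approximated in plain Python (a minimal sketch without the Jinja engine; the example message is illustrative, not from the source):

```python
def render_chat(messages, add_generation_prompt=True):
    """Mimic the Jinja chat template above: prepend the default N1 system
    message when none is given, wrap each turn in ChatML-style
    <|im_start|>/<|im_end|> markers, and optionally open an assistant
    turn for generation."""
    out = ""
    # Matches the template's `loop.first and messages[0]['role'] != 'system'` check
    if messages and messages[0]["role"] != "system":
        out += ("<|im_start|>system\n"
                "You are a helpful AI assistant named N1, trained by GoofyLM<|im_end|>\n")
    for m in messages:
        out += f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n"
    if add_generation_prompt:
        out += "<|im_start|>assistant\n"
    return out

# Illustrative usage: format a single user turn into a ChatML prompt
prompt = render_chat([{"role": "user", "content": "What is 2 + 2?"}])
print(prompt)
```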

Intended Use

This model is designed for text generation tasks, with a focus on reasoning through problems step by step via its Chain-of-Thought training.

Limitations

  • Small parameter size may limit reasoning capabilities
  • May produce unstable or inconsistent outputs
  • Not suitable for production use without further testing

Usage

The model can be loaded using the following:

llama-cpp-python:

from llama_cpp import Llama

# Downloads the GGUF file from the Hub on first use, then loads it locally
llm = Llama.from_pretrained(
    repo_id="GoofyLM/N1",
    filename="N1_Q8_0.gguf",
)

Ollama:

ollama run hf.co/GoofyLM/N1:Q4_K_M

Model tree

  • Base model: GoofyLM/N1 (this repository, GoofyLM/N1-Quant, hosts its quantized GGUF versions)