Includes Unsloth chat template fixes!
For llama.cpp, use --jinja

Unsloth Dynamic 2.0 achieves superior accuracy & outperforms other leading quants.


Cogito v2 preview - 109B MoE

Blog Post

The Cogito v2 LLMs are instruction-tuned generative models. All models are released under an open license for commercial use.

  • Cogito v2 models are hybrid reasoning models. Each model can answer directly (standard LLM), or self-reflect before answering (like reasoning models).
  • The LLMs are trained using Iterated Distillation and Amplification (IDA) - a scalable and efficient alignment strategy for superintelligence using iterative self-improvement.
  • The models have been optimized for coding, STEM, instruction following and general helpfulness, and have significantly higher multilingual, coding and tool calling capabilities than size equivalent counterparts.
    • In both standard and reasoning modes, Cogito v2-preview models outperform their size equivalent counterparts on common industry benchmarks.
  • This model is trained in over 30 languages and supports long contexts (up to 10M tokens).

Evaluations

For detailed evaluations, please refer to the Blog Post.

Usage

Here is a snippet for usage with Transformers:

import transformers
import torch

model_id = "deepcogito/cogito-v2-preview-llama-109B-MoE"

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Give me a short introduction to LLMs."},
]

outputs = pipeline(
    messages,
    max_new_tokens=512,
)

print(outputs[0]["generated_text"][-1])
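
The pipeline returns the full conversation, with the assistant's reply appended as the final message. If you only need the reply text, you can pull it out like this (a small convenience sketch, assuming the message-dict output format used by recent Transformers versions):

# The last element of generated_text is the assistant turn; its "content"
# field holds the reply text.
assistant_reply = outputs[0]["generated_text"][-1]["content"]
print(assistant_reply)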

Implementing extended thinking

  • By default, the model will answer in the standard mode.
  • To enable thinking, use either of the following two methods:
    • Set enable_thinking=True while applying the chat template.
    • Add a specific system prompt, along with prefilling the response with "<think>\n".

NOTE: Unlike the Cogito v1 models, we initiate the response with "<think>\n" at the beginning of every output when reasoning is enabled. This is because hybrid models can be brittle at times (in <0.1% of cases), and prefilling with "<think>\n" ensures that the model does indeed engage in thinking.
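
When reasoning is enabled, the output therefore starts with a thinking trace wrapped in <think>...</think>, followed by the final answer. If you want to separate the two, here is a minimal post-processing sketch (it assumes the generated text follows this tag format; the helper name is illustrative):

def split_thinking(response: str):
    """Split a generated response into (thinking, answer).

    Assumes the response begins with a <think>...</think> block when
    reasoning is enabled; otherwise the whole response is the answer.
    """
    start_tag, end_tag = "<think>", "</think>"
    if start_tag in response and end_tag in response:
        thinking = response.split(start_tag, 1)[1].split(end_tag, 1)[0].strip()
        answer = response.split(end_tag, 1)[1].strip()
        return thinking, answer
    return "", response.strip()

# Usage (with `response` produced as in the examples below):
# thinking, answer = split_thinking(response)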

Method 1 - Set enable_thinking=True in the tokenizer

If you are using Hugging Face tokenizers, you can simply add the argument enable_thinking=True when applying the chat template (this option is built into the chat template).

Here is an example -

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "deepcogito/cogito-v2-preview-llama-109B-MoE"

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Give me a short introduction to LLMs."
messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": prompt}
]

text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)

Method 2 - Add a specific system prompt, along with prefilling the response with "<think>\n".

To enable thinking using this method, you need to do two things -

Step 1 - Use this as the system prompt: system_instruction = 'Enable deep thinking subroutine.'

If you already have a system_instruction, then use system_instruction = 'Enable deep thinking subroutine.' + '\n\n' + system_instruction.

Step 2 - Prefill the response with the tokens "<think>\n".

Here is an example -

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "deepcogito/cogito-v2-preview-llama-109B-MoE"

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Step 1 - Add deep thinking instruction.
DEEP_THINKING_INSTRUCTION = "Enable deep thinking subroutine."

messages = [
    {"role": "system", "content": DEEP_THINKING_INSTRUCTION},
    {"role": "user", "content": "Write a bash script that takes a matrix represented as a string with format '[1,2],[3,4],[5,6]' and prints the transpose in the same format."},
]

text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)

# Step 2 - Prefill response with "<think>\n".
text += "<think>\n"

# Now, continue as usual.
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)

Similarly, if you already have a system prompt, you can prepend the DEEP_THINKING_INSTRUCTION to it in this way -

DEEP_THINKING_INSTRUCTION = "Enable deep thinking subroutine."

system_prompt = "Reply to each prompt with only the actual code - no explanations."
prompt = "Write a bash script that takes a matrix represented as a string with format '[1,2],[3,4],[5,6]' and prints the transpose in the same format."

messages = [
    {"role": "system", "content": DEEP_THINKING_INSTRUCTION + '\n\n' + system_prompt},
    {"role": "user", "content": prompt}
]

Tool Calling

Cogito models support tool calling (single, parallel, multiple and parallel_multiple) in both standard and extended thinking modes.

Here is a snippet -

# First, define a tool
def get_current_temperature(location: str) -> float:
    """
    Get the current temperature at a location.
    
    Args:
        location: The location to get the temperature for, in the format "City, Country"
    Returns:
        The current temperature at the specified location in the specified units, as a float.
    """
    return 22.  # A real function should probably actually get the temperature!

# Next, create a chat and apply the chat template
messages = [
  {"role": "user", "content": "Hey, what's the temperature in Paris right now?"}
]

text = tokenizer.apply_chat_template(messages, tools=[get_current_temperature], add_generation_prompt=True, tokenize=False)
inputs = tokenizer(text, return_tensors="pt", add_special_tokens=False).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
output_text = tokenizer.batch_decode(outputs)[0][len(text):]
print(output_text)

This will result in the output -

<tool_call>
{"name": "get_current_temperature", "arguments": {"location": "Paris, France"}}
</tool_call><|eot|>
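
If you prefer to turn this text back into a structured call programmatically, you can parse the <tool_call> block yourself. Here is a minimal sketch (it assumes the JSON-between-tags format shown above; the helper name is illustrative):

import json
import re

def parse_tool_calls(output_text: str):
    """Extract tool calls of the form
    <tool_call>{"name": ..., "arguments": {...}}</tool_call> from generated text."""
    pattern = r"<tool_call>\s*(\{.*?\})\s*</tool_call>"
    return [json.loads(m) for m in re.findall(pattern, output_text, re.DOTALL)]

# parse_tool_calls(output_text)
# -> [{'name': 'get_current_temperature', 'arguments': {'location': 'Paris, France'}}]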

When the model generates a tool call like the one above, you should add it to the chat like so:

tool_call = {"name": "get_current_temperature", "arguments": {"location": "Paris, France"}}
messages.append({"role": "assistant", "tool_calls": [{"type": "function", "function": tool_call}]})

and then call the tool and append the result, with the tool role, like so:

messages.append({"role": "tool", "name": "get_current_temperature", "content": "22.0"})

After that, you can generate() again to let the model use the tool result in the chat:

text = tokenizer.apply_chat_template(messages, tools=[get_current_temperature], add_generation_prompt=True, tokenize=False)
inputs = tokenizer(text, return_tensors="pt", add_special_tokens=False).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
output_text = tokenizer.batch_decode(outputs)[0][len(text):]

This should result in the string -

'The current temperature in Paris is 22.0 degrees.<|eot|>'
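
Since tool calling works in extended thinking mode as well, you can combine it with Method 1 by passing both tools and enable_thinking when applying the chat template. A hedged sketch, assuming the chat template accepts the two keyword arguments together:

# Assumption: the chat template accepts tools and enable_thinking in the same call.
text = tokenizer.apply_chat_template(
    messages,
    tools=[get_current_temperature],
    add_generation_prompt=True,
    enable_thinking=True,
    tokenize=False,
)
inputs = tokenizer(text, return_tensors="pt", add_special_tokens=False).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.batch_decode(outputs)[0][len(text):])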

License

This repository and the model weights are licensed under the Llama 4 Community License Agreement (the default license agreement for Llama models).

Contact

If you would like to reach out to our team, send an email to [email protected].
