Thinking flag is flipped?

#16
by rasbt - opened

Looks like enable_thinking=True leaves out the <think></think> tokens, but shouldn't it be the other way around?

I.e.,

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-0.6B-Base")

prompt = "Give me a short introduction to large language models."
messages = [
    {"role": "user", "content": prompt},
]

token_ids = tokenizer.apply_chat_template(
    messages,
    tokenize=True,
    add_generation_prompt=True,
    enable_thinking=True,
)
print(tokenizer.decode(token_ids))

The decoded prompt does not contain any think tokens:

[screenshot: decoded prompt with no <think> tokens]

However, enable_thinking=False adds an empty <think></think> block:

[screenshot: decoded prompt ending with an empty <think></think> block]
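For what it's worth, this appears to match how Qwen3's chat template is written: with thinking enabled, the template adds nothing so the model can emit its own <think>...</think> block, while enable_thinking=False pre-fills an empty block so the model skips straight to the answer. A minimal sketch of that logic (my own simplified rendering for illustration, not the actual Jinja template shipped with the tokenizer):

```python
def render_chat_template(messages, add_generation_prompt=True, enable_thinking=True):
    """Simplified stand-in for tokenizer.apply_chat_template(..., tokenize=False)."""
    text = ""
    for m in messages:
        text += f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n"
    if add_generation_prompt:
        text += "<|im_start|>assistant\n"
        if not enable_thinking:
            # Thinking disabled: pre-fill an empty think block so the
            # model does not generate its own reasoning trace.
            text += "<think>\n\n</think>\n\n"
        # Thinking enabled: add nothing; the model itself opens <think>.
    return text

print(render_chat_template([{"role": "user", "content": "hi"}], enable_thinking=False))
```

So the flag isn't flipped; it controls whether the model is *allowed* to think, and an empty pre-filled block is how the template suppresses it.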
