Does it support thinking on/off?

#2
by CHNtentes

From the original model page, it seems that we can turn thinking on/off by modifying the system prompt. Does that also work with the GGUF quants?

Yeah, you should be able to edit your system prompt to just remove the tag.

Kinda struggling with activating "thinking".
Can I get an example of how to activate the reasoning process?
What do I need to add to the System Prompt?

The only thing I've figured out is to slap in this system prompt:

You are a deep thinking AI, you may use extremely long chains of thought to deeply consider the problem and deliberate with yourself via systematic reasoning processes to help come to a correct solution prior to answering. You should enclose your thoughts and internal monologue inside <think> </think> tags, and then provide your solution or response to the problem.

It works, but IDK if this is the right way.

What I did in LM Studio was add, in the System Prompt:
detailed thinking on

Conversely, you can turn off thinking by changing 'on' to 'off':
detailed thinking off

With logic questions, I'm seeing this model be much smarter with thinking on than with it off.
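
Side note: the same toggle works if you're scripting against LM Studio's local server, which speaks the OpenAI API (default http://localhost:1234/v1). A minimal Python sketch; the model name and question are just placeholders:

from openai import OpenAI

# LM Studio's local server is OpenAI-compatible; the API key can be any string.
client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

resp = client.chat.completions.create(
    model="local-model",  # placeholder; LM Studio serves whichever model is loaded
    messages=[
        {"role": "system", "content": "detailed thinking on"},  # flip to "detailed thinking off"
        {"role": "user", "content": "I have 3 apples, eat one, then buy two more. How many now?"},
    ],
)
print(resp.choices[0].message.content)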

I just prefill the answer with <thinking>\n and it thinks (I also have some reasoning instructions in the system prompt).
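
If anyone wants to reproduce that prefill trick outside a chat UI, here's a rough llama-cpp-python sketch. The file path is a placeholder, and I'm using the <think> tag from the system prompt earlier in the thread; match whatever tag your model actually emits:

from llama_cpp import Llama

llm = Llama(model_path="./model-Q4_K_S.gguf", n_ctx=8192)  # placeholder path

# Build the raw Llama-3-style prompt by hand so we can end it mid-turn:
prompt = (
    "<|start_header_id|>system<|end_header_id|>\n\n"
    "detailed thinking on<|eot_id|>"
    "<|start_header_id|>user<|end_header_id|>\n\n"
    "Why is the sky blue?<|eot_id|>"
    "<|start_header_id|>assistant<|end_header_id|>\n\n"
    "<think>\n"  # the prefill: generation continues inside the reasoning block
)
out = llm(prompt, max_tokens=2048, stop=["<|eot_id|>"])
print(out["choices"][0]["text"])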

Hey guys! I'm getting an error when trying to run this model. I downloaded the Q4_K_S quant.

I get this error: Failed to send message
Error rendering prompt with jinja template:

"Error: Parser Error: Expected closing statement token. OpenSquareBracket !== CloseStatement.
at _0x1a2f1b (/Applications/LM Studio.app/Contents/Resources/app/.webpack/lib/llmworker.js:77:227708)
at /Applications/LM Studio.app/Contents/Resources/app/.webpack/lib/llmworker.js:77:228311
at _0x488f77 (/Applications/LM Studio.app/Contents/Resources/app/.webpack/lib/llmworker.js:77:230505)
at _0x5c8a95 (/Applications/LM Studio.app/Contents/Resources/app/.webpack/lib/llmworker.js:77:232030)
at /Applications/LM Studio.app/Contents/Resources/app/.webpack/lib/llmworker.js:77:228417
at _0x488f77 (/Applications/LM Studio.app/Contents/Resources/app/.webpack/lib/llmworker.js:77:230505)
at /Applications/LM Studio.app/Contents/Resources/app/.webpack/lib/llmworker.js:77:229868
at /Applications/LM Studio.app/Contents/Resources/app/.webpack/lib/llmworker.js:77:230229
at _0x488f77 (/Applications/LM Studio.app/Contents/Resources/app/.webpack/lib/llmworker.js:77:230505)
at _0x1a61c9 (/Applications/LM Studio.app/Contents/Resources/app/.webpack/lib/llmworker.js:77:239767)".

This is usually an issue with the model's prompt template. If you are using a popular model, you can try to search the model under lmstudio-community, which will have fixed prompt templates. If you cannot find one, you are welcome to post this issue to our discord or issue tracker on GitHub. Alternatively, if you know how to write jinja templates, you can override the prompt template in My Models > model settings > Prompt Template.

The prompt template it's using is the default:

{{- bos_token }}
{%- if messages[0]['role'] == 'system' %}
    {%- set system_message = messages[0]['content'] | trim %}
    {%- set messages = messages[1:] %}
{%- else %}
    {%- set system_message = "" %}
{%- endif %}
{{- "<|start_header_id|>system<|end_header_id|>\n\n" }}{{- system_message }}{{- "<|eot_id|>" }}
{%- for message in messages %}
    {%- if message['role'] == 'assistant' and '</think>' in message['content'] %}
        {%- set content = message['content'].split('</think>')[-1].lstrip() %}
    {%- else %}
        {%- set content = message['content'] %}
    {%- endif %}
    {{- '<|start_header_id|>' + message['role'] + '<|end_header_id|>\n\n' + content | trim + '<|eot_id|>' }}
{%- endfor %}
{%- if add_generation_prompt %}
    {{- '<|start_header_id|>assistant<|end_header_id|>\n\n' }}
{%- endif %}

Any ideas what might be going on or how to fix this? Thanks!
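
Not sure exactly which construct LM Studio's Jinja parser dislikes here, but my guess is the messages[1:] slice in the set statement. As a workaround, you can override the template (My Models > model settings > Prompt Template) with an equivalent version that avoids the slice. This is my own rewrite, not an official template, so check it against the model card:

{{- bos_token }}
{%- if messages[0]['role'] == 'system' %}
    {%- set system_message = messages[0]['content'] | trim %}
{%- else %}
    {%- set system_message = "" %}
{%- endif %}
{{- "<|start_header_id|>system<|end_header_id|>\n\n" }}{{- system_message }}{{- "<|eot_id|>" }}
{%- for message in messages %}
    {%- if not (loop.first and message['role'] == 'system') %}
        {%- if message['role'] == 'assistant' and '</think>' in message['content'] %}
            {%- set content = message['content'].split('</think>')[-1].lstrip() %}
        {%- else %}
            {%- set content = message['content'] %}
        {%- endif %}
        {{- '<|start_header_id|>' + message['role'] + '<|end_header_id|>\n\n' + content | trim + '<|eot_id|>' }}
    {%- endif %}
{%- endfor %}
{%- if add_generation_prompt %}
    {{- '<|start_header_id|>assistant<|end_header_id|>\n\n' }}
{%- endif %}

If it still errors, updating LM Studio or grabbing the lmstudio-community upload (which the error message says ships fixed templates) is probably the easier route.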
