[Discussion] gpt-oss-120b hangs indefinitely ("thinking...") when using YaRN rope scaling to extend context length

#70
by RekklesAI - opened

I found an interesting issue when trying to extend the context window for gpt-oss-120b using YaRN rope scaling. When I launch the server with the following command, the model becomes stuck at "thinking..." and never produces any output.

vllm serve openai/gpt-oss-120b \
  --host 0.0.0.0 \
  --tensor-parallel-size 2 \
  --no-disable-sliding-window \
  --rope-scaling '{"rope_type":"yarn","factor":4.0,"original_max_position_embeddings":131072}' \
  --max-model-len 524288

Expected Behavior:
The model should be able to process and generate output for longer context windows using YaRN scaling.
Actual Behavior:
The server shows "thinking..." indefinitely, with no response or error message.

(Screenshot attached: Screenshot 2025-08-07 102129.png)

Questions:
Is YaRN rope scaling officially supported in vLLM for gpt-oss-120b?

read config.json

I did read the config.json — I'm actually discussing whether the context length can be extended beyond what's specified (131072), given that the model uses YaRN. That’s the main point I’m trying to clarify.

I've had some really weird behavior like that when the template gets screwed up, or when something is off about the temperature or other sampling parameters. And they've been subtle screwups.

That said, I haven't tried enabling YaRN yet. When I run into weird issues like that (this model delivers the weirdest token sequences when it isn't configured just right), I usually try eliminating the large serving subsystems like vLLM and build a small client that just does the one thing. I get that you're trying to do it in vLLM, but proving that it can work from a tiny example would be my first step; see the sketch below.
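For what it's worth, a minimal standalone script along those lines might look like the sketch below. It loads the model directly with Hugging Face Transformers (no vLLM), overrides the YaRN settings on the config, and generates once. The factor 128.0 and original_max_position_embeddings 4096 values are assumptions based on the stock config.json and the 4x extension being discussed here, and it assumes a recent Transformers release with gpt-oss support plus enough GPU memory for the checkpoint.

# Minimal repro outside vLLM: load gpt-oss-120b directly with Transformers,
# override the YaRN rope scaling, and generate a single short reply.
# The rope_scaling values below are assumptions; check config.json first.
from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer

model_id = "openai/gpt-oss-120b"

config = AutoConfig.from_pretrained(model_id)
config.rope_scaling = {
    "rope_type": "yarn",
    "factor": 128.0,                           # assumed: stock 32.0 * desired 4x
    "original_max_position_embeddings": 4096,  # assumed stock value
}
config.max_position_embeddings = 524288

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, config=config, torch_dtype="auto", device_map="auto"
)

messages = [{"role": "user", "content": "Reply with a single short sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=64, temperature=0.7, do_sample=True)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))

If that tiny path generates sane text with the scaled settings, the problem is likely in the serving configuration rather than the model itself.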

OpenAI org

If you'd like to increase YaRN scaling further, you need to multiply the original scaling factor by your new scaling factor: https://huggingface.co/openai/gpt-oss-120b/blob/main/config.json#L75
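For example, assuming the stock config ships rope_scaling with factor 32.0 over original_max_position_embeddings 4096 (which is what yields the default 131072 context), a 4x extension to 524288 tokens would mean passing 32.0 * 4.0 = 128.0 while keeping original_max_position_embeddings at 4096, rather than factor 4.0 over 131072. A sketch of the adjusted invocation (values are assumptions; verify them against the linked config.json):

vllm serve openai/gpt-oss-120b \
  --host 0.0.0.0 \
  --tensor-parallel-size 2 \
  --rope-scaling '{"rope_type":"yarn","factor":128.0,"original_max_position_embeddings":4096}' \
  --max-model-len 524288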

Wondering why this model responds to my first questions, but after the second or third reply it just hangs.
I'm using the 120b model with Ollama on my Mac Studio and set a context length of 16k.
Surprisingly, if I enter a "?" after a blank bottom line, it starts to think about my last response.
As I can see in Activity Monitor, the M4 Max GPU is fully used...
