Can you use the same method to train the qwen2.5 32b model?

#24
by xldistance - opened

o1's reasoning is amazing.

@xldistance This just dropped! https://huggingface.co/Qwen/QwQ-32B-Preview

I use the model for GraphRAG, and QwQ-32B is nowhere near as good at global search as Marco-o1.

I didn't know about GraphRAG before; it sounds awesome!
Are you referring to search query generation?

Marco-o1 is much better than qwen2.5:32b and qwq:32b at generating entities and querying entities in GraphRAG.

Interesting, thanks for the insight

@xldistance which graphrag implementation are you using? I was also wondering if you use a specific system prompt?

RTX 4090; I've tweaked the GraphRAG prompts with gpt-o1.

Oh I see, but how did you deal with the `<Thought>`/`<Output>` tags?
In addition to the prompts, did you also adapt the code to keep only the content of `<Output>` and strip out the rest?
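For example, something along these lines? This is just a minimal sketch of what I mean; the `<Thought>`/`<Output>` tag names are taken from Marco-o1's response format, and `extract_output` is a hypothetical helper, not part of any GraphRAG implementation:

```python
import re

def extract_output(response: str) -> str:
    """Keep only the content of the <Output> block from a Marco-o1-style
    response. If no <Output> block is found, strip any <Thought> block
    and return whatever remains."""
    match = re.search(r"<Output>(.*?)</Output>", response, re.DOTALL)
    if match:
        return match.group(1).strip()
    # Fallback: drop the reasoning block and keep the rest.
    return re.sub(r"<Thought>.*?</Thought>", "", response, flags=re.DOTALL).strip()

# Example
raw = "<Thought>step-by-step reasoning...</Thought><Output>final answer</Output>"
print(extract_output(raw))  # -> "final answer"
```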
