DeepSeek is the real OpenAI
We need more affordable hardware first; most people aren't running a 685B-parameter model locally, and those who do usually find it slow.
1000% agree.
Also, reasoning models sure spit out a lot of tokens. The same benchmark costs 4x or 5x as much money and time to run as with regular LLMs. Exciting times for inference players.
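Rough back-of-the-envelope sketch of why that multiplier shows up: benchmark cost is dominated by output tokens, and a reasoning model emits a long chain-of-thought trace before each answer. All numbers below (token counts, price) are illustrative assumptions, not measurements.

```python
# Illustrative cost comparison: same benchmark, different average output lengths.
PRICE_PER_1M_OUTPUT_TOKENS = 2.00  # USD, hypothetical flat rate for both models

def benchmark_cost(n_questions: int, avg_output_tokens: int) -> float:
    """Total output-token cost of one full benchmark run."""
    return n_questions * avg_output_tokens * PRICE_PER_1M_OUTPUT_TOKENS / 1_000_000

regular = benchmark_cost(n_questions=1_000, avg_output_tokens=500)      # terse answers
reasoning = benchmark_cost(n_questions=1_000, avg_output_tokens=2_500)  # long CoT traces

print(f"regular LLM:     ${regular:.2f}")
print(f"reasoning model: ${reasoning:.2f} ({reasoning / regular:.0f}x)")
```

With a 5x longer average completion, the run costs 5x more, and wall-clock time scales roughly the same way since decoding is token-by-token.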
Have you tried the distilled R1 models (Qwen and Llama)?
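For anyone who wants to try them, here's a minimal sketch of loading one of the public DeepSeek-R1 distills with Hugging Face transformers. The model ID and generation settings are just one reasonable choice; pick a distill size that fits your GPU.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# One of the distilled R1 checkpoints; DeepSeek-R1-Distill-Llama-8B is another option.
model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-7B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # use the checkpoint's native precision
    device_map="auto",    # needs `accelerate`; spreads layers across available GPUs
)

messages = [{"role": "user", "content": "How many prime numbers are there below 30?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# R1-style models emit a <think>...</think> trace before the final answer,
# so leave plenty of headroom in max_new_tokens.
outputs = model.generate(inputs, max_new_tokens=1024)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Even distilled, they keep the long-reasoning habit, so expect completions several times longer than a regular chat model's.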