SumBuddy (Insanelycool)
4 followers · 1 following
AI & ML interests: none yet
Recent Activity
reacted to m-ric's post 13 days ago
Scaling laws are not dead yet! A new blog post suggests Anthropic might have an extremely strong Opus-3.5 already available, but is not releasing it in order to keep its edge over the competition.

Since the release of Opus-3.5 has been delayed indefinitely, there have been many rumors and articles about LLMs plateauing. According to these rumors, scaling laws, the main driver of the increase in LLM capability, could have stopped working, causing this stall in progress. These rumors were quickly denied by many people at the leading LLM labs, including OpenAI and Anthropic. But those people would be expected to hype the future of LLMs even if scaling laws really had plateaued, so the jury is still out.

This new article by Semianalysis (generally a good source, especially on hardware) provides a counter-rumor that I find more convincing: maybe scaling laws still work and Opus-3.5 is ready and as good as planned, but they just don't release it, because the synthetic data it helps produce can bring the cheaper/smaller models Claude and Haiku up in performance, without risking leaking this precious high-quality synthetic data to competitors.

Time will tell! I feel like we'll know more soon. Read the article: https://semianalysis.com/2024/12/11/scaling-laws-o1-pro-architecture-reasoning-infrastructure-orion-and-claude-3-5-opus-failures/
replied to m-ric's post 13 days ago
updated a model 26 days ago: Insanelycool/QWQ-Rombos-SLERP-TEST2-Q8_0-GGUF
Organizations: none yet
Models (9), sorted by recently updated
Insanelycool/QWQ-Rombos-SLERP-TEST2-Q8_0-GGUF • Updated 26 days ago • 10
Insanelycool/Rombos-QWQ-SLERP-TEST-Q8_0-GGUF • Updated 26 days ago • 23
Insanelycool/RomDog-MGS-LLM-V2.5-Qwen-32b-Experiment2-GGUF • Updated Nov 24 • 6
Insanelycool/Qwen2.5-Coder-32B-Instruct-Q8_0-GGUF • Text Generation • Updated Nov 19 • 6
Insanelycool/Chocolatine-14B-Instruct-DPO-v1.2-Q4_K_M-GGUF • Text Generation • Updated Sep 13 • 2
Insanelycool/Rhea-72b-v0.5-Q4_K_M-GGUF • Updated Jun 5 • 4
Insanelycool/AutoCoder-Q8_0-GGUF • Updated Jun 5 • 4
Insanelycool/microsoft_WizardLM-2-7B-Q8_0-GGUF • Updated May 17 • 3 • 1
Insanelycool/llama-3-sqlcoder-8b-Q8_0-GGUF • Text Generation • Updated May 17 • 1
Datasets: none public yet