author_id: stringclasses (3 values)
created_at: timestamp[ns, tz=UTC]
text: stringlengths (23-357)
retweet_count: int64 (0-147)
reply_count: int64 (0-34)
like_count: int64 (0-1.15k)
quote_count: int64 (0-14)
bookmark_count: int64 (0-672)
impression_count: int64 (0-107k)
1473756922117513227
2025-02-27T01:47:46Z
🔥 Thrilled to share that [3/4] of our submissions to #CVPR2025 were accepted, particularly exciting given this year's ~22% acceptance rate out of 13K submissions. Here are some new directions we explored, mainly about generative AI, trustworthiness, robustness, and multimodal… https://t.co/ppq9lTTQOc https://t.co/aAMYs3xwoG
5
4
62
0
10
5,394
1473756922117513227
2025-02-27T00:05:25Z
@ICCVConference It's a very timely post
0
1
15
0
0
1,014
1473756922117513227
2025-02-26T23:44:30Z
@CVPR Thanks for the explanation; it seems like a "natural selection" and really depends on Reviewers-ACs.
0
1
5
0
0
1,534
1473756922117513227
2025-02-26T23:38:26Z
@CVPR This year's acceptance rate has reached a historic low. I've noticed that other major AI venues like ICLR/NeurIPS have been expanding to include more accepted papers. Do you have any thoughts on why PCs/SACs might choose not to adjust the acceptance rate if they could do so?
0
1
7
0
0
2,814
1473756922117513227
2025-02-26T19:54:12Z
Thanks for the hard work of PCs, SACs, and ACs! https://t.co/2wKT56BEG8 https://t.co/48prM2tyUM
0
0
17
0
0
2,073
1473756922117513227
2025-02-26T19:20:01Z
@CVPR results out: https://t.co/cFyEbrhZ5B https://t.co/pzEBRvpzsO
2
1
9
0
0
3,700
1473756922117513227
2025-02-26T18:56:13Z
@BKShalon @CVPR I waited till early morning (late midnight), then went to sleep. When I woke up again, still no results! 🤓
0
1
0
0
0
1,267
1473756922117513227
2025-02-26T17:55:08Z
Still waiting, huh? https://t.co/gxbpeFYO3i https://t.co/te3SdlxUCh
0
1
9
0
0
1,455
1473756922117513227
2025-02-26T17:27:29Z
@abby621 @Kenneth97180053 @CVPR @Qi12Tom @aliathar94 Very helpful! Thanks!
0
0
5
0
0
3,935
1473756922117513227
2025-02-26T17:27:13Z
RT @abby621: @Kenneth97180053 @CVPR @Qi12Tom @aliathar94 @_vztu I think that there's a lot of confusion from folks who haven't participated…
17
0
0
0
0
0
1473756922117513227
2025-02-26T06:13:59Z
@leehomyc @CVPR Wow, that's a good advertisement haha :)
0
0
5
0
0
4,291
1473756922117513227
2025-02-26T05:50:52Z
😟 Hey friends, star/reply to this tweet if you're also waiting for @CVPR decisions https://t.co/McLSuGUkzQ
4
15
186
1
5
55,038
1473756922117513227
2025-02-25T17:49:55Z
RT @_vztu: 🚨 Aligning Vision-Language Models Like Never Before, with Re-Align! I'm thrilled to introduce RE-ALIGN, our breakthrough framewor…
22
0
0
0
0
0
1473756922117513227
2025-02-24T19:23:44Z
🚨 Aligning Vision-Language Models Like Never Before, with Re-Align! I'm thrilled to introduce RE-ALIGN, our breakthrough framework that transforms Vision-Language Models (VLMs) by mitigating hallucinations and ensuring… https://t.co/WiRt49O3gM https://t.co/N9BKi2Pz39
22
2
91
0
36
6,916
1473756922117513227
2025-02-24T05:40:57Z
@xwang_lk What about Alexa?
0
1
1
0
0
1,082
1473756922117513227
2025-02-21T07:00:41Z
▀▄▀▄▀ Can We Truly Trust Generative AI? ▄▀▄▀▄ Generative Foundation Models (GenFMs) are advancing at an unprecedented pace, but can we trust them in high-stakes applications? Excited to share our latest research, a… https://t.co/WyLcUwPhyC https://t.co/6gQJoVzJ7z
9
1
37
1
10
2,828
1473756922117513227
2025-02-21T06:40:05Z
@HowieH36226 @hengjinlp @mohitban47 @MLamparth @jieyuzhao11 @JieyuZhang20 @WeijiaShi2 @HuaxiuYaoML @hhsun1 @ysu_nlp @CaimingXiong @UnrollHelper
0
1
0
0
0
93
1240355312
2025-02-28T02:56:07Z
@soldni slow in token output or in getting to know who Luca Soldaini is? ;)
0
0
1
0
0
91
1240355312
2025-02-28T02:50:38Z
- Paper: https://t.co/GrXT5DvQcW - Code: https://t.co/Xnwrr804Ng
0
0
1
0
0
156
1240355312
2025-02-28T02:50:38Z
Indexing cost https://t.co/RGs8Zw2jWx
0
1
1
0
1
158
1240355312
2025-02-28T02:50:37Z
Detailed QA performance comparison https://t.co/5Ukg7DfYNT
0
1
1
0
0
30
1240355312
2025-02-28T02:50:36Z
Sharing the work I'm most excited about lately! Meet HippoRAG 2, a drop-in replacement for your RAG solution. There's lots of enthusiasm about Graph + RAG, like GraphRAG or our own HippoRAG. However, while these methods fare favorably compared with early embedding models like… https://t.co/fkl0hT8lIY https://t.co/8uIie7zFdm https://t.co/kG47c3mGS4
5
1
16
1
1
941
1240355312
2025-02-26T03:18:38Z
- paper: https://t.co/I036ikZLtx - website (code/demo/etc): https://t.co/oPZH9zIi3y
0
0
5
0
2
657
1240355312
2025-02-26T03:01:46Z
Sparse Autoencoders have proven super useful for interpreting and steering LLMs. Now SAEs have finally come to vision models like CLIP and DINO! SAEs allow us to interpret and control vision models at a fine-grained concept level. Key findings: 1. SAEs can extract many crisp… https://t.co/lBO29r3Bwq https://t.co/rKic283mK5
11
2
48
0
26
5,983
1240355312
2025-02-26T02:23:39Z
@dawnsongtweets @HannaHajishirzi @UW @OhioState Thanks for having me! It was fun, and there were many great questions from the audience.
0
0
1
0
0
131
1240355312
2025-02-26T02:23:01Z
RT @dawnsongtweets: @HannaHajishirzi @UW 🙏 Huge thanks to @ysu_nlp @OhioState for the 3rd lecture On Reasoning, Memory, and Planning of Lan…
4
0
0
0
0
0
1240355312
2025-02-26T02:20:24Z
RT @samstevens6860: What's actually different between CLIP and DINOv2? CLIP knows what "Brazil" looks like: Rio's skyline, sidewalk pattern…
51
0
0
0
0
0
1240355312
2025-02-21T18:09:26Z
RT @HowieH36226: Toward Trustworthy Generative Foundation Models (GenFMs) 🚀 🎇 After six months of hard work and thanks to the efforts of th…
28
0
0
0
0
0
1240355312
2025-02-21T05:25:23Z
@jkkummerfeld @tallinzen @yuvalmarton @VeredShwartz I reviewed for CVPR'25. The policy seems to be that every author on any submission enters a 'may be selected for review' pool. PCs will filter the pool by publication record and add reviewers. Each such author-reviewer would be assigned at most 3 papers.
0
0
3
0
0
458
1141052916570214400
2025-02-27T22:21:52Z
@calebfahlgren You should add Gemini 2.0 Flash
0
0
2
0
0
114
1141052916570214400
2025-02-27T21:40:51Z
Gemini 2.0 Pro catches it. https://t.co/AQz4xidovC https://t.co/f6Nf5hmgvp
0
1
10
0
0
2,326
1141052916570214400
2025-02-27T21:20:25Z
@dgrreen Wondering if they don't have better, newer data, aren't seeing improvements from newer data, or want to avoid AI-generated data.
0
1
2
0
0
334
1141052916570214400
2025-02-27T21:16:33Z
The knowledge cutoff for GPT-4.5 is October 2023?
2
9
34
1
2
5,562
1141052916570214400
2025-02-27T20:32:22Z
Did you know @GoogleDeepMind Gemini 2.0 Flash is $0.1/$0.4 per million input/output tokens? Or you get 750 million input tokens for $75. 🔥
3
2
65
1
9
3,420
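The arithmetic in the tweet above checks out; here is a minimal sketch, assuming the rates quoted in the tweet (the helper name `gemini_flash_cost` is mine, and the numbers come from the tweet itself, not from any official price list):

```python
# Back-of-the-envelope check of the quoted Gemini 2.0 Flash pricing.
# Rates are taken from the tweet, not from an official SDK or price sheet.
def gemini_flash_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated cost in USD at $0.10 / $0.40 per 1M input/output tokens."""
    return input_tokens / 1e6 * 0.10 + output_tokens / 1e6 * 0.40

# 750M input tokens at $0.10 per million is the $75 figure from the tweet.
print(gemini_flash_cost(750_000_000, 0))  # 75.0
```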
1141052916570214400
2025-02-27T20:27:06Z
Holy. How big did @OpenAI go? $75/$150 per million input/output token. https://t.co/qDconiotxt
2
4
24
0
0
2,878
1141052916570214400
2025-02-27T20:17:27Z
@Kratius1 I am impressed by the results. It looks like a solid improvement over GPT-4o. But I expected a longer stream, more cool demos. Multimodality, an agent demo, something...
1
0
8
0
0
480
1141052916570214400
2025-02-27T20:14:18Z
That's it? Am I the only one who is confused?
1
11
71
0
1
6,025
1141052916570214400
2025-02-27T18:10:57Z
RT @freddy_alfonso_: Watch as one Gemini teaches another how to make breakfast, entirely coded in Python with FastRTC. Set up instructions…
18
0
0
0
0
0
1141052916570214400
2025-02-27T17:56:35Z
Demo: https://t.co/XIt4ndz9qw Code: https://t.co/6c7DJJbVsW Blog: https://t.co/5S5BYegXcF
0
1
11
0
21
1,006
1141052916570214400
2025-02-27T17:56:35Z
Excited to share a new demo that combines @GoogleDeepMind Gemini 2.0 with @nextjs to extract structured outputs from PDFs through natural language. Based on the "From PDFs to Insights: Structured Outputs from PDFs with Gemini 2.0" blog post. 👀 TL;DR: 📄 Upload PDFs and preview… https://t.co/05O7wNeJgg https://t.co/4JP8bjA9zS
12
7
109
0
95
6,833
1141052916570214400
2025-02-27T11:45:40Z
@altryne They will use both, if not more. https://t.co/ccHW7OC5qD
0
1
2
0
0
269
1141052916570214400
2025-02-27T06:35:58Z
Models: https://t.co/U3m0SvNz8M Paper: https://t.co/kc1GkDRPkR
2
2
14
0
6
1,688
1141052916570214400
2025-02-27T06:35:58Z
Phi-4 mini update! @MSFTResearch released Phi-4 mini Instruct (3.8B) and Phi-4 Multimodal Instruct (5.6B) with audio and image support by integrating modality-specific LoRAs while keeping the base language model entirely frozen. Multimodal TL;DR: 🖼️ Understands text, images, and… https://t.co/Sihznaydkt https://t.co/iZUC3RcPyE
27
1
129
1
63
7,832
1141052916570214400
2025-02-26T20:46:52Z
RT @SullyOmarr: It's official: swapped out everything in @ottogrid_ai from claude 3.5 to gemini 2.0 flash, getting better results at 1/30 t…
14
0
0
0
0
0
1141052916570214400
2025-02-26T20:45:35Z
@SullyOmarr @ottogrid_ai Great to hear! Let me know if we can be of any help as you keep scaling!
0
0
3
0
0
467
1141052916570214400
2025-02-26T19:29:58Z
RT @googleaidevs: A few quick updates from the PaliGemma 2 Mix announcement last week. 👇🧵 https://t.co/39cXxBgkIT
30
0
0
0
0
0
1141052916570214400
2025-02-26T18:28:12Z
https://t.co/vQkcYS1hyW
0
0
3
1
0
1,141
1141052916570214400
2025-02-26T18:28:12Z
Amazon wants to compete with @OpenAI ChatGPT and @GoogleDeepMind Gemini App 👀 @amazon just announced Alexa+, a complete refresh of Alexa. Here is what we technically know so far: 🚀 Alexa+ will be powered by Amazon Nova and @AnthropicAI Claude 🔗 New "Tool" APIs for 10k+… https://t.co/YjGubscZp4 https://t.co/jsgG4xRVlf
10
2
66
2
23
4,706
1141052916570214400
2025-02-26T16:50:00Z
Gemini Demo (fork and add your api key): https://t.co/1C0hJXrhRe Docs: https://t.co/R061c10xVs
1
0
14
0
18
5,684
1141052916570214400
2025-02-26T16:49:59Z
Want to build real-time apps with @GoogleDeepMind Gemini 2.0 Flash? FastRTC lets you build Python-based real-time apps using Gradio-UI. 🔥 🔄 Transforms Python functions into bidirectional audio/video streams with minimal code 🗣️ Built-in voice detection and automatic… https://t.co/zUO1WA1JMj https://t.co/o835htr0hl
15
2
80
4
65
19,119
1141052916570214400
2025-02-26T10:40:52Z
LLM pricing rush hours? 👀 https://t.co/7ycZeAi9JJ
10
7
97
2
18
8,209
1141052916570214400
2025-02-26T09:32:03Z
@jocarrasqueira @onyekaugo @googleaidevs @GitHubCopilot @patloeber @oneyekaugo You can get started without a GCP account using a regular Google account, similar to AI Studio. https://t.co/tHNpvaSFVm
0
2
2
0
0
51
1141052916570214400
2025-02-26T09:27:09Z
Paper: https://t.co/EkyainmCj2 Blog: https://t.co/O0uR9pwlN0
4
0
27
0
16
2,124
1141052916570214400
2025-02-26T09:27:09Z
SWE-RL from @AIatMeta is an implementation using reinforcement learning (GRPO) combined with data evolution and rule-based rewards to solve real-world software issues and fix bugs. SWE-RL achieves state-of-the-art performance among medium-sized models. Implementation 1️⃣ Collect… https://t.co/NHt2hFLAUn https://t.co/8E32bYKa2l
56
4
332
6
230
55,482
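The "rule-based rewards" the tweet above mentions can be sketched as a similarity score between a generated patch and the ground-truth patch. This is a toy illustration assuming a sequence-similarity reward; the function name is mine, and this is not Meta's actual implementation:

```python
import difflib

def patch_reward(predicted: str, oracle: str) -> float:
    """Toy rule-based reward: sequence similarity between a model's
    patch and the ground-truth patch, in [0, 1]. Illustrative only,
    not Meta's actual reward code."""
    return difflib.SequenceMatcher(None, predicted, oracle).ratio()

exact = patch_reward("x = value + 1", "x = value + 1")
partial = patch_reward("x = value + 1", "y = value - 2")
print(exact)    # 1.0 for an exact match
print(partial)  # strictly between 0 and 1 for a near miss
```

A dense, cheaply computable signal like this is what makes rule-based rewards attractive for RL over code: no test execution is needed to score a rollout.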
1141052916570214400
2025-02-26T08:03:04Z
We all know naming AI models is not easy. But I like this one. 🔦💡 https://t.co/1oyYZzL4Md
1
2
20
0
1
2,330
1141052916570214400
2025-02-26T07:46:11Z
Forking Linux will be the new forking Chrome. https://t.co/DLVXBgIqxM
0
0
11
0
2
2,647
1141052916570214400
2025-02-25T21:13:07Z
@casper_hansen_ @TheXeophon I don't know about the money part. But Veo 2 is coming to YouTube https://t.co/sZf5Xb1Mpe
0
0
2
0
0
64
1141052916570214400
2025-02-25T19:33:01Z
@TheXeophon Updated now! We keep working on making sure the experience is great everywhere.
0
1
3
0
0
167
1141052916570214400
2025-02-25T19:31:23Z
@HrishbhDalal @GoogleDeepMind 🔦
0
1
1
0
1
92
1141052916570214400
2025-02-25T18:04:29Z
Try it: https://t.co/Hn5c3VzfQh
0
1
2
0
1
969
1141052916570214400
2025-02-25T18:04:29Z
Model Update! @GoogleDeepMind Gemini 2.0 Flash-Lite is now generally available for production use! Model ID: `gemini-2.0-flash-lite` 💰 Free tier with 1,500 req/day, then $0.075/$0.3 per 1M input/output tokens. ⚡ Outperforms Gemini 1.5 Flash across benchmarks. Supports 1 million… https://t.co/Z4JkdnZAvR https://t.co/rvxsQaNIXo
16
4
113
0
25
5,670
1141052916570214400
2025-02-25T17:03:11Z
Currently live: https://t.co/zZu2ZXsWf1 https://t.co/0Z5ChkAxfB https://t.co/XGhlfZanvZ
1
1
31
0
3
3,067
1141052916570214400
2025-02-25T14:07:32Z
Start for free: https://t.co/3TgjZvIG4C https://t.co/Z9GRBhLXod
3
1
33
0
9
3,446
1141052916570214400
2025-02-25T13:49:42Z
RT @Thom_Wolf: Let me add a bit of context to the latest DeepSeek code release as I feel it was a bit bare bones. Mixture-of-Experts (MoE) is…
75
0
0
0
0
0
1141052916570214400
2025-02-25T08:40:54Z
Do you remember "Twitch Plays Pokemon"? It took the chat 02d 11h 29m to defeat Lt. Surge; that's how far Claude 3.7 got with ~30-35k actions. Now I am waiting for the first "AI plays Pokemon" stream. 👀
1
0
7
1
0
4,898
1141052916570214400
2025-02-25T08:20:58Z
Yesterday @AnthropicAI released Claude 3.7 with a focus on coding. Here is a TL;DR 🧵 > Excels at coding tasks, esp. JS/TS and Python; many good examples and vibes on social media. State-of-the-art on SWE-bench Verified (62.3%/70.2%) > Highest score on the Aider Polyglot… https://t.co/bwGNPEsrid https://t.co/IvVUX8B7FH
4
2
35
1
6
3,442
1141052916570214400
2025-02-25T07:55:48Z
@Teknium1 An experimental version. You can try it here: https://t.co/xRSLQHYDgy Would love to get your thoughts and feedback. https://t.co/a3FzBa1M3F
0
0
8
0
1
549
1141052916570214400
2025-02-24T23:42:10Z
Free Claude Stickers 😅 https://t.co/rhe1AF8ZAA https://t.co/Rm4lIuV7u5
2
3
18
0
1
4,013
1141052916570214400
2025-02-24T23:06:26Z
Easter Egg found? 🥚 > This tool should be used whenever a user expresses interest in receiving Anthropic or Claude stickers, swag, or merchandise. When triggered, it will display a shipping form for the user to enter their mailing address and contact details. Once submitted,… https://t.co/vSsq8orjuv https://t.co/gcZfpOBd3g
0
1
11
1
6
6,001
1141052916570214400
2025-02-24T23:04:22Z
https://t.co/ZqTed9mIrb
0
0
2
0
0
1,110
1141052916570214400
2025-02-24T23:04:22Z
If you want to see what prompts "Claude Code" uses, you can take a look at the cjs file on npm ⬇️ https://t.co/BEEAY0buvL
2
2
14
0
13
3,125
1141052916570214400
2025-02-24T22:50:09Z
RT @yacineMTB: anthropic looked at 98% of their tokens being generated being code only tokens and then said "hey maybe we should focus on m…
147
0
0
0
0
0
1141052916570214400
2025-02-24T22:37:11Z
Reading good feedback and vibes on Claude. Good job 🙌🏻 But surprised the price stayed at $3/$15. That's 30x more expensive than Gemini 2.0 Flash and ~3x more than OpenAI o3-mini. 👀 https://t.co/llTMbj2029
0
8
29
1
3
2,542
1141052916570214400
2025-02-24T17:45:42Z
RT @notthatkush: switched to gemini 2 flash because of constant tweets by @OfficialLoganK on my feed. now, for more than half of my use cas…
6
0
0
0
0
0
1141052916570214400
2025-02-24T16:43:19Z
You can now branch conversations in AI Studio to new ones to try out different prompts with history and not lose track. https://t.co/EOb5RDjvUQ https://t.co/6d85eAzKKT
1
1
37
0
6
3,950
1141052916570214400
2025-02-24T15:14:42Z
@God_Official__ @ekdnam @TheXeophon @matvelloso @patloeber Thanks for flagging this. We hear you and are actively working on updating the snippets. I'll post an update here once it's done.
0
1
2
0
0
53
1141052916570214400
2025-02-24T13:44:13Z
RT @vwxyzjn: https://t.co/8JLFbU4IY8 has some pretty amazing tricks. 🔥 E.g., it offloads vLLM weights to CPU and then brings them back. The impl…
40
0
0
0
0
0
1141052916570214400
2025-02-24T13:31:08Z
DeepSeek released open-source CUDA kernels optimized for NVIDIA Hopper GPUs: https://t.co/WLiT79ekgK Soon in vLLM: https://t.co/nNOtmR81Sw
3
0
24
0
8
2,281
1141052916570214400
2025-02-24T13:31:08Z
DeepSeek released their MLA implementation; here is how it works 💡 Multi-head Latent Attention (MLA) speeds up LLM inference and reduces memory needs. It uses "low-rank joint compression" to shrink the Key-Value (KV) cache, reducing memory usage by up to 93.3% and improving throughput… https://t.co/p7FwKhlgC3 https://t.co/CtfVRCULb1
106
12
555
7
288
36,601
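The "low-rank joint compression" described in the tweet above can be made concrete with a toy numpy sketch: cache only a small shared latent per token, and reconstruct full keys and values from it at attention time. All dimensions and weight names here are illustrative, not DeepSeek's real configuration (their 93.3% figure comes from the actual model sizes):

```python
import numpy as np

# Toy sketch of MLA's low-rank joint KV compression.
# Dimensions are illustrative, NOT DeepSeek's real configuration.
d_model, d_latent, n_heads, d_head = 1024, 64, 8, 128

rng = np.random.default_rng(0)
W_down = rng.standard_normal((d_model, d_latent))           # joint down-projection
W_up_k = rng.standard_normal((d_latent, n_heads * d_head))  # up-projection for keys
W_up_v = rng.standard_normal((d_latent, n_heads * d_head))  # up-projection for values

h = rng.standard_normal((1, d_model))  # hidden state of one new token

# Only the small latent goes into the KV cache...
c_kv = h @ W_down                      # shape (1, d_latent)
# ...and full keys/values are reconstructed from it when attending.
k = c_kv @ W_up_k                      # shape (1, n_heads * d_head)
v = c_kv @ W_up_v

full_cache = 2 * n_heads * d_head      # floats cached per token without MLA (K and V)
mla_cache = d_latent                   # floats cached per token with MLA
print(f"KV memory saved: {1 - mla_cache / full_cache:.1%}")
```

The saving is just the ratio of the latent width to the full K+V width, which is why shrinking `d_latent` translates directly into smaller caches and higher throughput.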
1141052916570214400
2025-02-24T08:38:12Z
Open Source Deep Research implementation using @GoogleDeepMind Gemini 2.0 Flash: 1. Query Analysis 2. Query Generation 3. Research Tree Building 4. Deep Research (Comprehensive Mode) 5. Report Generation https://t.co/8sTO6qgkmF
14
1
110
0
58
12,625
1141052916570214400
2025-02-24T08:03:58Z
LFG! @GoogleDeepMind Gemini 2.0 Flash (295B) had more usage last week than Claude 3.5 (236.6B) on @OpenRouterAI! 🔥 https://t.co/HrGOntDjut
5
11
136
2
22
9,706
1141052916570214400
2025-02-24T07:31:02Z
Repository: https://t.co/AHdWMa6Z3S
7
1
49
0
30
5,568
1141052916570214400
2025-02-24T07:31:01Z
Are LLMs ready to replace OCR solutions? Yes: the OmniAI OCR Benchmark compared OCR providers against LLMs across accuracy, cost, and latency metrics, showing multimodal LLMs are not only better, they are also cheaper, with @GoogleDeepMind Gemini 2.0 Flash offering the best… https://t.co/zzjPFEkw0j https://t.co/2enphYNd2f
97
34
772
9
672
83,303
1141052916570214400
2025-02-23T15:42:34Z
Documentation: https://t.co/n9aQRtvfSJ Code: https://t.co/FJCdXW3Qex
1
0
10
0
6
1,534
1141052916570214400
2025-02-23T15:42:33Z
Are you running open LLMs on @kubernetesio? Then you must take a look at AIBrix! AIBrix is @BytedanceTalk's production solution for open LLMs on Kubernetes running @vllm_project. 👀 It supports multi-LoRA management, intelligent routing, autoscaling, and fault tolerance. How is… https://t.co/FAYxyVMDCy https://t.co/dLYKkWx1yu
18
4
92
1
62
6,764
1141052916570214400
2025-02-22T09:18:34Z
https://t.co/lnSVLaPSO4
5
1
87
0
67
5,557
1141052916570214400
2025-02-22T09:18:33Z
2B is enough to match Google Translator and GPT-4 Turbo on Translation! https://t.co/84WIQkOyXd
99
17
1,150
6
504
107,428
1141052916570214400
2025-02-21T12:35:11Z
RT @patloeber: Google's new AI co-scientist, simply explained: It already helped advance biomedical research: 🔬 Proposed new drugs for bloo…
11
0
0
0
0
0
1141052916570214400
2025-02-21T10:51:59Z
@EastlondonDev @DynamicWebPaige Thank you! Investigating.
0
0
2
0
0
33
1141052916570214400
2025-02-21T10:25:43Z
@EastlondonDev @DynamicWebPaige Thank you! Trying to reproduce. Do you have any Advanced settings on? Here is the output I got: "Quantum computing is a type of computation that harnesses the principles of quantum mechanics, like superposition and entanglement, to solve complex problems that are beyond the… https://t.co/rT8fqgVvKi https://t.co/CEFw7pmM5t
0
1
1
0
0
66
1141052916570214400
2025-02-21T09:44:42Z
SigLIP 2 blog from @mervenoyann and team to learn more and try it out: https://t.co/Fvn7ZFc5Rc
0
0
7
0
3
1,431
1141052916570214400
2025-02-21T09:41:26Z
@EastlondonDev @DynamicWebPaige Hey, that should not be the case. Could you please share the model ID, the prompt you are using, and whether tools are enabled? Tested all 4 models and all responded https://t.co/FTW8NcuB6V
0
1
1
0
0
37
1141052916570214400
2025-02-21T09:28:50Z
Paper: https://t.co/zWqU8eKyIn Models: https://t.co/IycMeAMnSo
0
1
9
0
0
1,739
1141052916570214400
2025-02-21T09:28:50Z
One of the best vision-language encoders got an update! @GoogleDeepMind releases SigLIP 2! SigLIP 2 merges captioning pretraining, self-supervised learning, and online data curation, and outperforms its previous version in 10+ tasks, with support for flexible resolutions and… https://t.co/qFu7s9dW9y https://t.co/LCAs3JXc2Q
24
10
148
3
51
7,601
1141052916570214400
2025-02-21T08:44:27Z
A team at @deepseek_ai plans to open-source 5 repositories next week, one per day. Focused on infrastructure and building blocks of their online services. https://t.co/XFd8vARAIe
127
21
1,002
14
143
55,776