publishedAt: timestamp[ns]
title: string
thumbnail: string
numComments: int64
submittedBy: dict
isAuthorParticipating: bool
mediaUrls: list
paper_id: string
paper_authors: list
paper_publishedAt: timestamp[ns]
paper_title: string
paper_summary: string
paper_upvotes: int64
paper_discussionId: string
paper_projectPage: string
paper_githubRepo: string
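Each record below carries one value per field, in the order listed in this schema. A minimal sketch of loading and querying such a dump with pandas, assuming the rows are stored in a Parquet file (the filename daily_papers.parquet is hypothetical; the actual storage location is not given here):

```python
import pandas as pd

# Hypothetical filename; the real storage location of this dump is not specified.
df = pd.read_parquet("daily_papers.parquet")

# Timestamp fields per the schema above.
df["publishedAt"] = pd.to_datetime(df["publishedAt"])
df["paper_publishedAt"] = pd.to_datetime(df["paper_publishedAt"])

# Rank papers by community upvotes.
top = df.sort_values("paper_upvotes", ascending=False)[["paper_title", "paper_upvotes"]]
print(top.head())

# Flatten the nested paper_authors list into (paper_id, author_name) rows.
authors = (
    df[["paper_id", "paper_authors"]]
    .explode("paper_authors")
    .dropna(subset=["paper_authors"])
)
authors["author_name"] = authors["paper_authors"].apply(lambda a: a.get("name"))
print(authors[["paper_id", "author_name"]].head())
```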

publishedAt: 2025-02-20T22:33:22.039000
title: SigLIP 2: Multilingual Vision-Language Encoders with Improved Semantic Understanding, Localization, and Dense Features
thumbnail: https://cdn-thumbnails.h…s/2502.14786.png
numComments: 7
submittedBy:
{ "_id": "60f1abe7544c2adfd699860c", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg", "followerCount": 6280, "fullname": "AK", "isHf": true, "isMod": false, "isPro": false, "name": "akhaliq", "type": "user" }
isAuthorParticipating: true
mediaUrls: null
paper_id: 2502.14786
paper_authors:
[ { "_id": "67b7ed0d58f6b70b18dda7b4", "hidden": false, "name": "Michael Tschannen", "status": "admin_assigned", "statusLastChangedAt": "2025-02-21T10:23:44.125Z", "user": { "_id": "6489893e1ec8356ba5bb9777", "avatarUrl": "/avatars/54354c1e5774cadd1d83d42054e9d96b.svg", "fullname": "Michael Tschannen", "isPro": false, "type": "user", "user": "mitsch" } }, { "_id": "67b7ed0d58f6b70b18dda7b5", "hidden": false, "name": "Alexey Gritsenko", "status": "admin_assigned", "statusLastChangedAt": "2025-02-21T10:23:50.440Z", "user": { "_id": "62d8f9887b8dc0ba17271415", "avatarUrl": "/avatars/12ec78d34fd849bad44217b212f31e98.svg", "fullname": "Alexey Gritsenko", "isPro": false, "type": "user", "user": "AlexeyG" } }, { "_id": "67b7ed0d58f6b70b18dda7b6", "hidden": false, "name": "Xiao Wang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7ed0d58f6b70b18dda7b7", "hidden": false, "name": "Muhammad Ferjad Naeem", "status": "claimed_verified", "statusLastChangedAt": "2025-02-26T15:37:42.802Z", "user": { "_id": "67bf33a512368ec2fad4fe29", "avatarUrl": "/avatars/ea5c03744ec1c2bcc0e6c13efc8f7ddc.svg", "fullname": "Muhammad Ferjad Naeem", "isPro": false, "type": "user", "user": "ferjad" } }, { "_id": "67b7ed0d58f6b70b18dda7b8", "hidden": false, "name": "Ibrahim Alabdulmohsin", "status": "admin_assigned", "statusLastChangedAt": "2025-02-21T10:24:03.702Z", "user": { "_id": "630545da20668afe24860235", "avatarUrl": "/avatars/5d82be2e7412bff1af15cc5eafa60b7d.svg", "fullname": "Ibrahim Alabdulmohsin", "isPro": false, "type": "user", "user": "ibomohsin" } }, { "_id": "67b7ed0d58f6b70b18dda7b9", "hidden": false, "name": "Nikhil Parthasarathy", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7ed0d58f6b70b18dda7ba", "hidden": false, "name": "Talfan Evans", "status": "admin_assigned", "statusLastChangedAt": "2025-02-21T10:24:14.273Z", "user": { "_id": "66be3074dac7d71326f613cf", "avatarUrl": "/avatars/012a531aad3eb1a2751bb3c31a619bf5.svg", "fullname": "Talfan Evans", "isPro": false, "type": "user", "user": "talfanevans" } }, { "_id": "67b7ed0d58f6b70b18dda7bb", "hidden": false, "name": "Lucas Beyer", "status": "admin_assigned", "statusLastChangedAt": "2025-02-21T14:44:51.778Z", "user": { "_id": "642d334ff65714b4585f2de4", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/642d334ff65714b4585f2de4/gxBynq5KyoUP0VlAQD3-w.jpeg", "fullname": "Lucas Beyer", "isPro": false, "type": "user", "user": "giffmana" } }, { "_id": "67b7ed0d58f6b70b18dda7bc", "hidden": false, "name": "Ye Xia", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7ed0d58f6b70b18dda7bd", "hidden": false, "name": "Basil Mustafa", "status": "admin_assigned", "statusLastChangedAt": "2025-02-21T14:44:44.933Z", "user": { "_id": "63cfcad6e23b90128c66685c", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1674562206311-noauth.png", "fullname": "Basil Mustafa", "isPro": false, "type": "user", "user": "BasilMustafa" } }, { "_id": "67b7ed0d58f6b70b18dda7be", "hidden": false, "name": "Olivier Hénaff", "status": "admin_assigned", "statusLastChangedAt": "2025-02-21T14:44:38.156Z", "user": { "_id": "64f30d8ceb5f2982081db604", "avatarUrl": "/avatars/eedf65a104d099d8a60bbffe69bc2571.svg", "fullname": "Olivier Henaff", "isPro": false, "type": "user", "user": "olivierhenaff" } }, { "_id": "67b7ed0d58f6b70b18dda7bf", "hidden": false, "name": "Jeremiah Harmsen", "status": "admin_assigned", "statusLastChangedAt": "2025-02-21T14:44:32.302Z", "user": { "_id": 
"65d77c0e1e7686c460255fda", "avatarUrl": "/avatars/1d26a7a7ffdc5ca9e67b97030f21b098.svg", "fullname": "Jeremiah Harmsen", "isPro": false, "type": "user", "user": "jharmsen" } }, { "_id": "67b7ed0d58f6b70b18dda7c0", "hidden": false, "name": "Andreas Steiner", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7ed0d58f6b70b18dda7c1", "hidden": false, "name": "Xiaohua Zhai", "status": "admin_assigned", "statusLastChangedAt": "2025-02-21T14:44:07.595Z", "user": { "_id": "65dcd90082bddd501f68174b", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/M2bc9PyKeFs1cCXjTfGGq.jpeg", "fullname": "Xiaohua Zhai", "isPro": false, "type": "user", "user": "xiaohuazhai" } } ]
paper_publishedAt: 2025-02-20T18:08:29
paper_title: SigLIP 2: Multilingual Vision-Language Encoders with Improved Semantic Understanding, Localization, and Dense Features
paper_summary:
We introduce SigLIP 2, a family of new multilingual vision-language encoders that build on the success of the original SigLIP. In this second iteration, we extend the original image-text training objective with several prior, independently developed techniques into a unified recipe -- this includes captioning-based pretraining, self-supervised losses (self-distillation, masked prediction) and online data curation. With these changes, SigLIP 2 models outperform their SigLIP counterparts at all model scales in core capabilities, including zero-shot classification, image-text retrieval, and transfer performance when extracting visual representations for Vision-Language Models (VLMs). Furthermore, the new training recipe leads to significant improvements on localization and dense prediction tasks. We also train variants which support multiple resolutions and preserve the input's native aspect ratio. Finally, we train on a more diverse data-mixture that includes de-biasing techniques, leading to much better multilingual understanding and improved fairness. To allow users to trade off inference cost with performance, we release model checkpoints at four sizes: ViT-B (86M), L (303M), So400m (400M), and g (1B).
paper_upvotes: 124
paper_discussionId: 67b7ed0e58f6b70b18dda7f4
paper_projectPage: null
paper_githubRepo: null
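The SigLIP 2 summary above says the models extend the original sigmoid image-text objective of SigLIP with captioning-based pretraining, self-distillation, masked prediction, and online data curation. A minimal sketch of just that base pairwise sigmoid loss (the added losses and data curation are omitted), written against PyTorch; the shapes and averaging convention are illustrative assumptions:

```python
import torch
import torch.nn.functional as F

def sigmoid_image_text_loss(img_emb, txt_emb, t, b):
    """Pairwise sigmoid loss over all (image, text) pairs in a batch.

    img_emb, txt_emb: L2-normalized [N, D] embeddings of matched image/text pairs.
    t, b: learnable temperature and bias scalars (initialization handled elsewhere).
    """
    logits = img_emb @ txt_emb.T * t + b              # [N, N] pairwise similarities
    labels = 2.0 * torch.eye(img_emb.shape[0]) - 1.0  # +1 on the diagonal, -1 off-diagonal
    # Every pair is an independent binary classification; no batch-wide softmax.
    return -F.logsigmoid(labels * logits).mean()      # averaged over all pairs for simplicity

# Toy usage with random embeddings.
img = F.normalize(torch.randn(8, 512), dim=-1)
txt = F.normalize(torch.randn(8, 512), dim=-1)
print(sigmoid_image_text_loss(img, txt, t=torch.tensor(10.0), b=torch.tensor(-10.0)))
```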

publishedAt: 2025-02-20T22:30:51.542000
title: RelaCtrl: Relevance-Guided Efficient Control for Diffusion Transformers
thumbnail: https://cdn-thumbnails.h…s/2502.14377.png
numComments: 2
submittedBy:
{ "_id": "60f1abe7544c2adfd699860c", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg", "followerCount": 6280, "fullname": "AK", "isHf": true, "isMod": false, "isPro": false, "name": "akhaliq", "type": "user" }
isAuthorParticipating: false
mediaUrls: null
paper_id: 2502.14377
paper_authors:
[ { "_id": "67b7f350357c2729ac216494", "hidden": false, "name": "Ke Cao", "status": "admin_assigned", "statusLastChangedAt": "2025-02-21T15:08:00.737Z", "user": { "_id": "66e4077369d1083dd97c7cd8", "avatarUrl": "/avatars/0dad41e3e2f38f89b7b21c12d673f432.svg", "fullname": "Ke Cao", "isPro": false, "type": "user", "user": "kecao" } }, { "_id": "67b7f350357c2729ac216495", "hidden": false, "name": "Jing Wang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7f350357c2729ac216496", "hidden": false, "name": "Ao Ma", "status": "admin_assigned", "statusLastChangedAt": "2025-02-21T15:07:53.347Z", "user": { "_id": "65292034825f4bba46c78581", "avatarUrl": "/avatars/7f212daaa20ab0d7405e9d6351ec308c.svg", "fullname": "Ao Ma", "isPro": false, "type": "user", "user": "AoMa" } }, { "_id": "67b7f350357c2729ac216497", "hidden": false, "name": "Jiasong Feng", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7f350357c2729ac216498", "hidden": false, "name": "Zhanjie Zhang", "status": "admin_assigned", "statusLastChangedAt": "2025-02-21T15:07:37.282Z", "user": { "_id": "640496240ab5e22719f31719", "avatarUrl": "/avatars/be957a1b0e1c037f03f1439d6142e9ce.svg", "fullname": "Zhanjie Zhang", "isPro": false, "type": "user", "user": "zhangzhanjay" } }, { "_id": "67b7f350357c2729ac216499", "hidden": false, "name": "Xuanhua He", "status": "admin_assigned", "statusLastChangedAt": "2025-02-21T15:07:32.005Z", "user": { "_id": "64375035946fb080c6fc4551", "avatarUrl": "/avatars/8dba2c8911726656d4088862a2b8fe7c.svg", "fullname": "Xuanhua He", "isPro": false, "type": "user", "user": "Alexhe101" } }, { "_id": "67b7f350357c2729ac21649a", "hidden": false, "name": "Shanyuan Liu", "status": "admin_assigned", "statusLastChangedAt": "2025-02-21T15:07:26.104Z", "user": { "_id": "666197e302a5e5f4377a545c", "avatarUrl": "/avatars/6d6ef51cc403ffccc81265ad8adf43bc.svg", "fullname": "liu", "isPro": false, "type": "user", "user": "shanyuanLiu" } }, { "_id": "67b7f350357c2729ac21649b", "hidden": false, "name": "Bo Cheng", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7f350357c2729ac21649c", "hidden": false, "name": "Dawei Leng", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7f350357c2729ac21649d", "hidden": false, "name": "Yuhui Yin", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7f350357c2729ac21649e", "hidden": false, "name": "Jie Zhang", "status": null, "statusLastChangedAt": null, "user": null } ]
paper_publishedAt: 2025-02-20T09:10:05
paper_title: RelaCtrl: Relevance-Guided Efficient Control for Diffusion Transformers
paper_summary:
The Diffusion Transformer plays a pivotal role in advancing text-to-image and text-to-video generation, owing primarily to its inherent scalability. However, existing controlled diffusion transformer methods incur significant parameter and computational overheads and suffer from inefficient resource allocation due to their failure to account for the varying relevance of control information across different transformer layers. To address this, we propose the Relevance-Guided Efficient Controllable Generation framework, RelaCtrl, enabling efficient and resource-optimized integration of control signals into the Diffusion Transformer. First, we evaluate the relevance of each layer in the Diffusion Transformer to the control information by assessing the "ControlNet Relevance Score"-i.e., the impact of skipping each control layer on both the quality of generation and the control effectiveness during inference. Based on the strength of the relevance, we then tailor the positioning, parameter scale, and modeling capacity of the control layers to reduce unnecessary parameters and redundant computations. Additionally, to further improve efficiency, we replace the self-attention and FFN in the commonly used copy block with the carefully designed Two-Dimensional Shuffle Mixer (TDSM), enabling efficient implementation of both the token mixer and channel mixer. Both qualitative and quantitative experimental results demonstrate that our approach achieves superior performance with only 15% of the parameters and computational complexity compared to PixArt-delta. More examples are available at https://relactrl.github.io/RelaCtrl/.
paper_upvotes: 12
paper_discussionId: 67b7f354357c2729ac216582
paper_projectPage: null
paper_githubRepo: null
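The RelaCtrl summary above defines a "ControlNet Relevance Score" as the impact of skipping each control layer on generation quality and control effectiveness at inference time. A rough sketch of such a skip-one-layer probe; generate, quality_metric, and control_metric are hypothetical stand-ins for the paper's actual sampling pipeline and metrics:

```python
# Rough sketch of a skip-one-control-layer relevance probe. The callables passed
# in (generate, quality_metric, control_metric) are hypothetical placeholders,
# not the RelaCtrl implementation.
def control_layer_relevance(model, num_control_layers, prompts, conditions,
                            generate, quality_metric, control_metric):
    baseline = generate(model, prompts, conditions, skip_layer=None)
    base_q = quality_metric(baseline)
    base_c = control_metric(baseline, conditions)

    scores = {}
    for i in range(num_control_layers):
        out = generate(model, prompts, conditions, skip_layer=i)  # drop control layer i only
        dq = quality_metric(out) - base_q                         # change in generation quality
        dc = control_metric(out, conditions) - base_c             # change in control fidelity
        scores[i] = abs(dq) + abs(dc)                             # larger -> more relevant layer
    return scores
```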

publishedAt: 2025-02-20T22:19:05.902000
title: Logic-RL: Unleashing LLM Reasoning with Rule-Based Reinforcement Learning
thumbnail: https://cdn-thumbnails.h…s/2502.14768.png
numComments: 5
submittedBy:
{ "_id": "60f1abe7544c2adfd699860c", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg", "followerCount": 6280, "fullname": "AK", "isHf": true, "isMod": false, "isPro": false, "name": "akhaliq", "type": "user" }
isAuthorParticipating: false
mediaUrls: null
paper_id: 2502.14768
paper_authors:
[ { "_id": "67b7f08c357c2729ac20a81b", "hidden": false, "name": "Tian Xie", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7f08c357c2729ac20a81c", "hidden": false, "name": "Zitian Gao", "status": "admin_assigned", "statusLastChangedAt": "2025-02-21T14:50:06.783Z", "user": { "_id": "641ddac5be3bd3a5a06ed4a4", "avatarUrl": "/avatars/14969dff861d53b0a75305606495eca7.svg", "fullname": "zitian gao", "isPro": false, "type": "user", "user": "zgao3186" } }, { "_id": "67b7f08c357c2729ac20a81d", "hidden": false, "name": "Qingnan Ren", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7f08c357c2729ac20a81e", "hidden": false, "name": "Haoming Luo", "status": "claimed_verified", "statusLastChangedAt": "2025-02-24T09:23:21.219Z", "user": { "_id": "6501535970b6b05c5af84383", "avatarUrl": "/avatars/a827dfa11589cabd6868c617eeecbbba.svg", "fullname": "Haoming Luo", "isPro": false, "type": "user", "user": "Resnet-340" } }, { "_id": "67b7f08c357c2729ac20a81f", "hidden": false, "name": "Yuqian Hong", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7f08c357c2729ac20a820", "hidden": false, "name": "Bryan Dai", "status": "admin_assigned", "statusLastChangedAt": "2025-02-21T14:49:20.858Z", "user": { "_id": "67090a395f4ed2ff1f0b3658", "avatarUrl": "/avatars/03744a2fcbcb0e8074c04bad83b3e34c.svg", "fullname": "Zhenbang Dai", "isPro": false, "type": "user", "user": "BryanDai" } }, { "_id": "67b7f08c357c2729ac20a821", "hidden": false, "name": "Joey Zhou", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7f08c357c2729ac20a822", "hidden": false, "name": "Kai Qiu", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7f08c357c2729ac20a823", "hidden": false, "name": "Zhirong Wu", "status": "admin_assigned", "statusLastChangedAt": "2025-02-21T14:48:55.865Z", "user": { "_id": "67034414f2b11c7dd251e232", "avatarUrl": "/avatars/6b741ac2eab48c6f72185342f9af7d1f.svg", "fullname": "wzr", "isPro": false, "type": "user", "user": "wuzhirong" } }, { "_id": "67b7f08c357c2729ac20a824", "hidden": false, "name": "Chong Luo", "status": "admin_assigned", "statusLastChangedAt": "2025-02-21T14:48:44.370Z", "user": { "_id": "676a328148d749b7086782d0", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/no-auth/Tt7u8l8f_1oVBWmBp7tkm.png", "fullname": "Chong Luo", "isPro": false, "type": "user", "user": "cluo-ms" } } ]
paper_publishedAt: 2025-02-20T17:49:26
paper_title: Logic-RL: Unleashing LLM Reasoning with Rule-Based Reinforcement Learning
paper_summary:
Inspired by the success of DeepSeek-R1, we explore the potential of rule-based reinforcement learning (RL) in large reasoning models. To analyze reasoning dynamics, we use synthetic logic puzzles as training data due to their controllable complexity and straightforward answer verification. We make some key technical contributions that lead to effective and stable RL training: a system prompt that emphasizes the thinking and answering process, a stringent format reward function that penalizes outputs for taking shortcuts, and a straightforward training recipe that achieves stable convergence. Our 7B model develops advanced reasoning skills-such as reflection, verification, and summarization-that are absent from the logic corpus. Remarkably, after training on just 5K logic problems, it demonstrates generalization abilities to the challenging math benchmarks AIME and AMC.
paper_upvotes: 44
paper_discussionId: 67b7f08e357c2729ac20a88f
paper_projectPage: null
paper_githubRepo: null
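The Logic-RL summary above credits a stringent format reward that penalizes shortcut outputs, combined with easily verified answers on synthetic logic puzzles. A minimal sketch of a rule-based reward in that spirit; the tag names and exact reward values are assumptions, not the paper's configuration:

```python
import re

# Minimal rule-based reward sketch: the <think>/<answer> tags and the score
# values are assumptions for illustration, not Logic-RL's exact settings.
def rule_based_reward(completion: str, gold_answer: str) -> float:
    pattern = r"^<think>(.+?)</think>\s*<answer>(.+?)</answer>\s*$"
    m = re.match(pattern, completion.strip(), flags=re.DOTALL)
    if m is None:
        return -1.0                                  # format penalty: no shortcuts allowed
    answer = m.group(2).strip()
    return 1.0 if answer == gold_answer.strip() else -0.5

print(rule_based_reward("<think>A lies, so B must be truthful.</think><answer>B is a knight</answer>",
                        "B is a knight"))            # 1.0
print(rule_based_reward("B is a knight", "B is a knight"))  # -1.0, thinking step skipped
```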

publishedAt: 2025-02-20T22:15:33.133000
title: SuperGPQA: Scaling LLM Evaluation across 285 Graduate Disciplines
thumbnail: https://cdn-thumbnails.h…s/2502.14739.png
numComments: 10
submittedBy:
{ "_id": "60f1abe7544c2adfd699860c", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg", "followerCount": 6280, "fullname": "AK", "isHf": true, "isMod": false, "isPro": false, "name": "akhaliq", "type": "user" }
isAuthorParticipating: true
mediaUrls: null
paper_id: 2502.14739
paper_authors:
[ { "_id": "67b7efc26348a1df80a8ae53", "hidden": false, "name": "M-A-P Team", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7efc26348a1df80a8ae54", "hidden": false, "name": "Xinrun Du", "status": "claimed_verified", "statusLastChangedAt": "2025-02-21T14:42:53.525Z", "user": { "_id": "654907a4a1faff97850c4eff", "avatarUrl": "/avatars/458c90151614bc7f116943b6e67d6b8a.svg", "fullname": "du", "isPro": false, "type": "user", "user": "dododododo" } }, { "_id": "67b7efc26348a1df80a8ae55", "hidden": false, "name": "Yifan Yao", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7efc26348a1df80a8ae56", "hidden": false, "name": "Kaijing Ma", "status": "admin_assigned", "statusLastChangedAt": "2025-02-21T10:20:18.121Z", "user": { "_id": "65eb65722fbf6807134a636c", "avatarUrl": "/avatars/282920ff99c8d83cdac5fd6ee507096a.svg", "fullname": "Kaijing Ma", "isPro": false, "type": "user", "user": "mkj69" } }, { "_id": "67b7efc26348a1df80a8ae57", "hidden": false, "name": "Bingli Wang", "status": "admin_assigned", "statusLastChangedAt": "2025-02-21T10:20:24.792Z", "user": { "_id": "658d0a228cff48d3a4612689", "avatarUrl": "/avatars/70e297c6cb12d1bdde6d91c23f590b63.svg", "fullname": "Bingli Wang", "isPro": false, "type": "user", "user": "BingliW" } }, { "_id": "67b7efc26348a1df80a8ae58", "hidden": false, "name": "Tianyu Zheng", "status": "claimed_verified", "statusLastChangedAt": "2025-02-21T09:58:24.002Z", "user": { "_id": "64ab99dcb76bfd863eba64c1", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/64ab99dcb76bfd863eba64c1/UBXwDPx17X-gl-SzBPvrc.jpeg", "fullname": "TY.Zheng", "isPro": false, "type": "user", "user": "aaabiao" } }, { "_id": "67b7efc26348a1df80a8ae59", "hidden": false, "name": "Kang Zhu", "status": "admin_assigned", "statusLastChangedAt": "2025-02-21T10:20:45.485Z", "user": { "_id": "6578265ddea7e2122d02f6ba", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6578265ddea7e2122d02f6ba/Bh6JjoVF5ceLSjV7Z7nTk.jpeg", "fullname": "kang zhu", "isPro": false, "type": "user", "user": "kangz" } }, { "_id": "67b7efc26348a1df80a8ae5a", "hidden": false, "name": "Minghao Liu", "status": "claimed_verified", "statusLastChangedAt": "2025-02-21T09:58:25.894Z", "user": { "_id": "6417d9ea8f689506e7148417", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6417d9ea8f689506e7148417/bAYcruWNw4WvmuQcGgcwC.jpeg", "fullname": "minghao", "isPro": false, "type": "user", "user": "Liam-Liu" } }, { "_id": "67b7efc26348a1df80a8ae5b", "hidden": false, "name": "Yiming Liang", "status": "admin_assigned", "statusLastChangedAt": "2025-02-21T10:21:08.553Z", "user": { "_id": "6555e8d8a0c34cd61a6b9ce3", "avatarUrl": "/avatars/71dc562cef4bd42f6b762f036357c800.svg", "fullname": "yimingliang", "isPro": false, "type": "user", "user": "yimingliang" } }, { "_id": "67b7efc26348a1df80a8ae5c", "hidden": false, "name": "Xiaolong Jin", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7efc26348a1df80a8ae5d", "hidden": true, "name": "Zhenlin Wei", "status": "admin_assigned", "statusLastChangedAt": "2025-02-21T10:21:31.565Z", "user": { "_id": "67375a6ae6b1d15ff5359a54", "avatarUrl": "/avatars/9d32d9e3bfb43b8d001c6ddeae720ec5.svg", "fullname": "weizhenlin", "isPro": false, "type": "user", "user": "vzl123" } }, { "_id": "67b7efc26348a1df80a8ae5e", "hidden": false, "name": "Chujie Zheng", "status": "claimed_verified", "statusLastChangedAt": "2025-02-21T09:58:34.124Z", "user": { "_id": "610b70452719facd4ea85e28", 
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/610b70452719facd4ea85e28/S7nMy7D0Rxq0VIVblhYDG.jpeg", "fullname": "Chujie Zheng", "isPro": false, "type": "user", "user": "chujiezheng" } }, { "_id": "67b7efc26348a1df80a8ae5f", "hidden": false, "name": "Kaixing Deng", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7efc26348a1df80a8ae60", "hidden": false, "name": "Shuyue Guo", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7efc26348a1df80a8ae61", "hidden": false, "name": "Shian Jia", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7efc26348a1df80a8ae62", "hidden": false, "name": "Sichao Jiang", "status": "admin_assigned", "statusLastChangedAt": "2025-02-21T10:22:09.140Z", "user": { "_id": "675085408119fa5fac3cd7cf", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/no-auth/0Mrrkzhv0wBggP5kGKtSt.png", "fullname": "jiangsichao", "isPro": false, "type": "user", "user": "jsc137" } }, { "_id": "67b7efc26348a1df80a8ae63", "hidden": false, "name": "Yiyan Liao", "status": "admin_assigned", "statusLastChangedAt": "2025-02-21T10:22:15.619Z", "user": { "_id": "67a9d186b22571659007a43d", "avatarUrl": "/avatars/79b06cf0983083b6161374e66a8c51b2.svg", "fullname": "Yiyan Liao", "isPro": false, "type": "user", "user": "yiyanliao" } }, { "_id": "67b7efc26348a1df80a8ae64", "hidden": false, "name": "Rui Li", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7efc26348a1df80a8ae65", "hidden": false, "name": "Qinrui Li", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7efc26348a1df80a8ae66", "hidden": false, "name": "Sirun Li", "status": "admin_assigned", "statusLastChangedAt": "2025-02-21T10:22:30.478Z", "user": { "_id": "67ab7826ab5ebf181a7f78d7", "avatarUrl": "/avatars/d6baf414011d6df659da4eb58e9d8958.svg", "fullname": "Sirun Li", "isPro": false, "type": "user", "user": "inorganicwriter" } }, { "_id": "67b7efc26348a1df80a8ae67", "hidden": false, "name": "Yizhi Li", "status": "claimed_verified", "statusLastChangedAt": "2025-02-21T14:42:51.449Z", "user": { "_id": "6382252f54421460665ec501", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6382252f54421460665ec501/gW9fev3T5QPcNq4f9hqB1.jpeg", "fullname": "Yizhi Li", "isPro": false, "type": "user", "user": "yizhilll" } }, { "_id": "67b7efc26348a1df80a8ae68", "hidden": false, "name": "Yunwen Li", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7efc26348a1df80a8ae69", "hidden": false, "name": "Dehua Ma", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7efc26348a1df80a8ae6a", "hidden": false, "name": "Yuansheng Ni", "status": "claimed_verified", "statusLastChangedAt": "2025-02-21T09:58:30.371Z", "user": { "_id": "64de37ee5e192985054be575", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/64de37ee5e192985054be575/fVV7JQMtp_J3uFqszJJHH.jpeg", "fullname": "Yuansheng Ni", "isPro": false, "type": "user", "user": "yuanshengni" } }, { "_id": "67b7efc26348a1df80a8ae6b", "hidden": false, "name": "Haoran Que", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7efc26348a1df80a8ae6c", "hidden": false, "name": "Qiyao Wang", "status": "claimed_verified", "statusLastChangedAt": "2025-02-21T09:58:28.639Z", "user": { "_id": "64560618bfdf9c63ce2d658a", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/64560618bfdf9c63ce2d658a/GVBWU4yNzRsjdyzKT3z3B.jpeg", "fullname": "Mathsion 
Wong", "isPro": false, "type": "user", "user": "QiYao-Wang" } }, { "_id": "67b7efc26348a1df80a8ae6d", "hidden": false, "name": "Zhoufutu Wen", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7efc26348a1df80a8ae6e", "hidden": false, "name": "Siwei Wu", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7efc26348a1df80a8ae6f", "hidden": false, "name": "Tianshun Xing", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7efc26348a1df80a8ae70", "hidden": false, "name": "Ming Xu", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7efc26348a1df80a8ae71", "hidden": false, "name": "Zhenzhu Yang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7efc26348a1df80a8ae72", "hidden": false, "name": "Zekun Moore Wang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7efc26348a1df80a8ae73", "hidden": false, "name": "Junting Zhou", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7efc26348a1df80a8ae74", "hidden": false, "name": "Yuelin Bai", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7efc26348a1df80a8ae75", "hidden": false, "name": "Xingyuan Bu", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7efc26348a1df80a8ae76", "hidden": false, "name": "Chenglin Cai", "status": "admin_assigned", "statusLastChangedAt": "2025-02-21T10:23:17.731Z", "user": { "_id": "64f9c21b681224dbe49a2280", "avatarUrl": "/avatars/df26cc4b4c6105af2c77392db61e3a27.svg", "fullname": "caichenglin", "isPro": false, "type": "user", "user": "easy4mego" } }, { "_id": "67b7efc26348a1df80a8ae77", "hidden": false, "name": "Liang Chen", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7efc26348a1df80a8ae78", "hidden": false, "name": "Yifan Chen", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7efc26348a1df80a8ae79", "hidden": false, "name": "Chengtuo Cheng", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7efc26348a1df80a8ae7a", "hidden": false, "name": "Tianhao Cheng", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7efc26348a1df80a8ae7b", "hidden": false, "name": "Keyi Ding", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7efc26348a1df80a8ae7c", "hidden": false, "name": "Siming Huang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7efc26348a1df80a8ae7d", "hidden": false, "name": "Yun Huang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7efc26348a1df80a8ae7e", "hidden": false, "name": "Yaoru Li", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7efc26348a1df80a8ae7f", "hidden": false, "name": "Yizhe Li", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7efc26348a1df80a8ae80", "hidden": false, "name": "Zhaoqun Li", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7efc26348a1df80a8ae81", "hidden": false, "name": "Tianhao Liang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7efc26348a1df80a8ae82", "hidden": false, "name": "Chengdong Lin", "status": "claimed_verified", "statusLastChangedAt": "2025-02-21T15:15:33.213Z", "user": { "_id": "66751c722b487c2e015a1f60", "avatarUrl": "/avatars/d66a98b625451ccea1b4dfcdaf623304.svg", "fullname": "lin", "isPro": false, "type": "user", "user": "adams6435" } }, { "_id": "67b7efc26348a1df80a8ae83", 
"hidden": false, "name": "Hongquan Lin", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7efc26348a1df80a8ae84", "hidden": false, "name": "Yinghao Ma", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7efc26348a1df80a8ae85", "hidden": false, "name": "Zhongyuan Peng", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7efc26348a1df80a8ae86", "hidden": false, "name": "Zifan Peng", "status": "claimed_verified", "statusLastChangedAt": "2025-02-21T09:58:20.429Z", "user": { "_id": "65adda5299c3bd19c74d6a8d", "avatarUrl": "/avatars/1ce504b64ab60f375b235ebaf81cafd6.svg", "fullname": "PENG ZIFAN", "isPro": false, "type": "user", "user": "Ziffer" } }, { "_id": "67b7efc26348a1df80a8ae87", "hidden": false, "name": "Qige Qi", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7efc26348a1df80a8ae88", "hidden": false, "name": "Shi Qiu", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7efc26348a1df80a8ae89", "hidden": false, "name": "Xingwei Qu", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7efc26348a1df80a8ae8a", "hidden": false, "name": "Yizhou Tan", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7efc26348a1df80a8ae8b", "hidden": false, "name": "Zili Wang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7efc26348a1df80a8ae8c", "hidden": false, "name": "Chenqing Wang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7efc26348a1df80a8ae8d", "hidden": false, "name": "Hao Wang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7efc26348a1df80a8ae8e", "hidden": false, "name": "Yiya Wang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7efc26348a1df80a8ae8f", "hidden": false, "name": "Yubo Wang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7efc26348a1df80a8ae90", "hidden": false, "name": "Jiajun Xu", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7efc26348a1df80a8ae91", "hidden": false, "name": "Kexin Yang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7efc26348a1df80a8ae92", "hidden": false, "name": "Ruibin Yuan", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7efc26348a1df80a8ae93", "hidden": false, "name": "Yuanhao Yue", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7efc26348a1df80a8ae94", "hidden": false, "name": "Tianyang Zhan", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7efc26348a1df80a8ae95", "hidden": false, "name": "Chun Zhang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7efc26348a1df80a8ae96", "hidden": false, "name": "Jingyang Zhang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7efc26348a1df80a8ae97", "hidden": false, "name": "Xiyue Zhang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7efc26348a1df80a8ae98", "hidden": false, "name": "Xingjian Zhang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7efc26348a1df80a8ae99", "hidden": false, "name": "Yue Zhang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7efc26348a1df80a8ae9a", "hidden": false, "name": "Yongchi Zhao", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7efc26348a1df80a8ae9b", "hidden": false, "name": "Xiangyu Zheng", 
"status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7efc26348a1df80a8ae9c", "hidden": false, "name": "Chenghua Zhong", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7efc26348a1df80a8ae9d", "hidden": false, "name": "Yang Gao", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7efc26348a1df80a8ae9e", "hidden": false, "name": "Zhoujun Li", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7efc26348a1df80a8ae9f", "hidden": false, "name": "Dayiheng Liu", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7efc26348a1df80a8aea0", "hidden": false, "name": "Qian Liu", "status": "claimed_verified", "statusLastChangedAt": "2025-02-21T09:58:32.399Z", "user": { "_id": "612ee6a7b960e78c6d2319d4", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/612ee6a7b960e78c6d2319d4/2Hu9BaAyXbyh1vt0v1Qui.jpeg", "fullname": "Qian Liu", "isPro": false, "type": "user", "user": "SivilTaram" } }, { "_id": "67b7efc26348a1df80a8aea1", "hidden": false, "name": "Tianyu Liu", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7efc26348a1df80a8aea2", "hidden": false, "name": "Shiwen Ni", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7efc26348a1df80a8aea3", "hidden": false, "name": "Junran Peng", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7efc26348a1df80a8aea4", "hidden": false, "name": "Yujia Qin", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7efc26348a1df80a8aea5", "hidden": false, "name": "Wenbo Su", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7efc26348a1df80a8aea6", "hidden": false, "name": "Guoyin Wang", "status": "admin_assigned", "statusLastChangedAt": "2025-02-21T10:23:25.946Z", "user": { "_id": "6490d4ba1afdee3acd1147f6", "avatarUrl": "/avatars/ae13c7b21fe9ced7541dcd664d1b94ed.svg", "fullname": "Guoyin Wang", "isPro": false, "type": "user", "user": "guoyinwang" } }, { "_id": "67b7efc26348a1df80a8aea7", "hidden": false, "name": "Shi Wang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7efc26348a1df80a8aea8", "hidden": false, "name": "Jian Yang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7efc26348a1df80a8aea9", "hidden": false, "name": "Min Yang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7efc26348a1df80a8aeaa", "hidden": false, "name": "Meng Cao", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7efc26348a1df80a8aeab", "hidden": false, "name": "Xiang Yue", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7efc26348a1df80a8aeac", "hidden": false, "name": "Zhaoxiang Zhang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7efc26348a1df80a8aead", "hidden": false, "name": "Wangchunshu Zhou", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7efc26348a1df80a8aeae", "hidden": false, "name": "Jiaheng Liu", "status": "claimed_verified", "statusLastChangedAt": "2025-02-21T09:58:22.185Z", "user": { "_id": "65377c30e48353201e6fdda0", "avatarUrl": "/avatars/a8f803b6f2e598eaee9c52c0d2ddfc16.svg", "fullname": "Jiaheng Liu", "isPro": false, "type": "user", "user": "CheeryLJH" } }, { "_id": "67b7efc26348a1df80a8aeaf", "hidden": false, "name": "Qunshu Lin", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7efc26348a1df80a8aeb0", "hidden": false, 
"name": "Wenhao Huang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7efc26348a1df80a8aeb1", "hidden": false, "name": "Ge Zhang", "status": "claimed_verified", "statusLastChangedAt": "2025-02-27T09:17:53.865Z", "user": { "_id": "638efcf4c67af472d316d424", "avatarUrl": "/avatars/97a57859d7d87a3a8f1bb41d32a72bc2.svg", "fullname": "Ge Zhang", "isPro": false, "type": "user", "user": "zhangysk" } } ]
paper_publishedAt: 2025-02-20T17:05:58
paper_title: SuperGPQA: Scaling LLM Evaluation across 285 Graduate Disciplines
paper_summary:
Large language models (LLMs) have demonstrated remarkable proficiency in mainstream academic disciplines such as mathematics, physics, and computer science. However, human knowledge encompasses over 200 specialized disciplines, far exceeding the scope of existing benchmarks. The capabilities of LLMs in many of these specialized fields-particularly in light industry, agriculture, and service-oriented disciplines-remain inadequately evaluated. To address this gap, we present SuperGPQA, a comprehensive benchmark that evaluates graduate-level knowledge and reasoning capabilities across 285 disciplines. Our benchmark employs a novel Human-LLM collaborative filtering mechanism to eliminate trivial or ambiguous questions through iterative refinement based on both LLM responses and expert feedback. Our experimental results reveal significant room for improvement in the performance of current state-of-the-art LLMs across diverse knowledge domains (e.g., the reasoning-focused model DeepSeek-R1 achieved the highest accuracy of 61.82% on SuperGPQA), highlighting the considerable gap between current model capabilities and artificial general intelligence. Additionally, we present comprehensive insights from our management of a large-scale annotation process, involving over 80 expert annotators and an interactive Human-LLM collaborative system, offering valuable methodological guidance for future research initiatives of comparable scope.
paper_upvotes: 94
paper_discussionId: 67b7efc66348a1df80a8afc8
paper_projectPage: null
paper_githubRepo: null
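The SuperGPQA summary above describes a Human-LLM collaborative filtering mechanism that removes trivial or ambiguous questions through iterative refinement. A rough sketch of that screening loop; ask_llm and expert_review are hypothetical stand-ins for the paper's annotation tooling, and the "all screeners correct means trivial" rule is an assumption:

```python
# Rough sketch of Human-LLM collaborative filtering. `ask_llm` and `expert_review`
# are hypothetical placeholders; the filtering rule is an illustrative assumption.
def filter_questions(questions, screening_models, ask_llm, expert_review):
    kept = []
    for q in questions:
        answers = [ask_llm(m, q["question"], q["options"]) for m in screening_models]
        if all(a == q["answer"] for a in answers):
            continue                          # trivial: every screening model already solves it
        reviewed = expert_review(q, answers)  # expert may revise the item, or return None to drop it
        if reviewed is not None:
            kept.append(reviewed)
    return kept
```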

publishedAt: 2025-02-20T22:11:45.130000
title: AlphaMaze: Enhancing Large Language Models' Spatial Intelligence via GRPO
thumbnail: https://cdn-thumbnails.h…s/2502.14669.png
numComments: 2
submittedBy:
{ "_id": "60f1abe7544c2adfd699860c", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg", "followerCount": 6280, "fullname": "AK", "isHf": true, "isMod": false, "isPro": false, "name": "akhaliq", "type": "user" }
isAuthorParticipating: false
mediaUrls: null
paper_id: 2502.14669
paper_authors:
[ { "_id": "67b7eeddaf9f1b1bd95b878b", "hidden": false, "name": "Alan Dao", "status": "admin_assigned", "statusLastChangedAt": "2025-02-21T15:03:59.165Z", "user": { "_id": "62d7b2339b629105a5d6888a", "avatarUrl": "/avatars/c3f164fde6b8f9a671890e08ce8a3e75.svg", "fullname": "Alan Dao", "isPro": false, "type": "user", "user": "alandao" } }, { "_id": "67b7eeddaf9f1b1bd95b878c", "hidden": false, "name": "Dinh Bach Vu", "status": null, "statusLastChangedAt": null, "user": null } ]
paper_publishedAt: 2025-02-20T16:05:18
paper_title: AlphaMaze: Enhancing Large Language Models' Spatial Intelligence via GRPO
paper_summary:
Large Language Models (LLMs) have demonstrated impressive capabilities in language processing, yet they often struggle with tasks requiring genuine visual spatial reasoning. In this paper, we introduce a novel two-stage training framework designed to equip standard LLMs with visual reasoning abilities for maze navigation. First, we leverage Supervised Fine Tuning (SFT) on a curated dataset of tokenized maze representations to teach the model to predict step-by-step movement commands. Next, we apply Group Relative Policy Optimization (GRPO)-a technique used in DeepSeekR1-with a carefully crafted reward function to refine the model's sequential decision-making and encourage emergent chain-of-thought behaviors. Experimental results on synthetically generated mazes show that while a baseline model fails to navigate the maze, the SFT-trained model achieves 86% accuracy, and further GRPO fine-tuning boosts accuracy to 93%. Qualitative analyses reveal that GRPO fosters more robust and self-corrective reasoning, highlighting the potential of our approach to bridge the gap between language models and visual spatial tasks. These findings offer promising implications for applications in robotics, autonomous navigation, and other domains that require integrated visual and sequential reasoning.
paper_upvotes: 11
paper_discussionId: 67b7eeddaf9f1b1bd95b87c8
paper_projectPage: null
paper_githubRepo: null
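The AlphaMaze summary above applies Group Relative Policy Optimization (GRPO) after SFT. A minimal sketch of the group-relative advantage normalization at the core of GRPO (the policy-gradient and KL terms of the full algorithm are omitted); the reward values below are made-up toy numbers:

```python
import torch

def group_relative_advantages(rewards: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """rewards: [num_prompts, group_size] rewards for sampled completions per maze prompt.

    Each reward is normalized against its own group's mean and standard deviation,
    which is the group-relative baseline GRPO uses instead of a learned critic.
    """
    mean = rewards.mean(dim=1, keepdim=True)
    std = rewards.std(dim=1, keepdim=True)
    return (rewards - mean) / (std + eps)

# Toy numbers: 4 sampled rollouts for each of 2 maze prompts.
rewards = torch.tensor([[1.0, -0.5, -0.5, 1.0],
                        [-1.0, -1.0, -1.0, 1.0]])
print(group_relative_advantages(rewards))
```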

publishedAt: 2025-02-20T22:08:38.225000
title: MLGym: A New Framework and Benchmark for Advancing AI Research Agents
thumbnail: https://cdn-thumbnails.h…s/2502.14499.png
numComments: 3
submittedBy:
{ "_id": "60f1abe7544c2adfd699860c", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg", "followerCount": 6280, "fullname": "AK", "isHf": true, "isMod": false, "isPro": false, "name": "akhaliq", "type": "user" }
isAuthorParticipating: false
mediaUrls: null
paper_id: 2502.14499
paper_authors:
[ { "_id": "67b7ee1dfedfe971271dcca0", "hidden": false, "name": "Deepak Nathani", "status": "extracted_confirmed", "statusLastChangedAt": "2025-02-21T07:20:46.836Z", "user": { "_id": "6114c9fae7a2566ae7d1a1a7", "avatarUrl": "/avatars/c71ab1850322fcf5ef239cb8d31cb137.svg", "fullname": "Deepak Nathani", "isPro": false, "type": "user", "user": "dnathani" } }, { "_id": "67b7ee1dfedfe971271dcca1", "hidden": false, "name": "Lovish Madaan", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7ee1dfedfe971271dcca2", "hidden": false, "name": "Nicholas Roberts", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7ee1dfedfe971271dcca3", "hidden": false, "name": "Nikolay Bashlykov", "status": "claimed_verified", "statusLastChangedAt": "2025-02-21T14:42:58.738Z", "user": { "_id": "633476edc3cb9eda9328e556", "avatarUrl": "/avatars/a127e270a606c18623fe00cd723313f6.svg", "fullname": "Nikolay B", "isPro": false, "type": "user", "user": "bashnick" } }, { "_id": "67b7ee1dfedfe971271dcca4", "hidden": false, "name": "Ajay Menon", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7ee1dfedfe971271dcca5", "hidden": false, "name": "Vincent Moens", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7ee1dfedfe971271dcca6", "hidden": false, "name": "Amar Budhiraja", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7ee1dfedfe971271dcca7", "hidden": false, "name": "Despoina Magka", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7ee1dfedfe971271dcca8", "hidden": false, "name": "Vladislav Vorotilov", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7ee1dfedfe971271dcca9", "hidden": false, "name": "Gaurav Chaurasia", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7ee1dfedfe971271dccaa", "hidden": false, "name": "Dieuwke Hupkes", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7ee1dfedfe971271dccab", "hidden": false, "name": "Ricardo Silveira Cabral", "status": "claimed_verified", "statusLastChangedAt": "2025-02-21T14:42:55.449Z", "user": { "_id": "67b8749ffa8442592bce008e", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/no-auth/ctbzupAxRhcNXDka75ANi.png", "fullname": "Ricardo Silveira Cabral", "isPro": false, "type": "user", "user": "rscabral" } }, { "_id": "67b7ee1dfedfe971271dccac", "hidden": false, "name": "Tatiana Shavrina", "status": "claimed_verified", "statusLastChangedAt": "2025-02-21T14:43:03.777Z", "user": { "_id": "60dc2eb60037b630c5df57aa", "avatarUrl": "/avatars/fbe707b1231a3d9dc6e87ec011e0e738.svg", "fullname": "Tatiana", "isPro": false, "type": "user", "user": "Shavrina" } }, { "_id": "67b7ee1dfedfe971271dccad", "hidden": false, "name": "Jakob Foerster", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7ee1dfedfe971271dccae", "hidden": false, "name": "Yoram Bachrach", "status": "claimed_verified", "statusLastChangedAt": "2025-02-21T14:43:01.304Z", "user": { "_id": "671b9e3ba54e59639d597fcb", "avatarUrl": "/avatars/95201b98d1cf2ffa68d23f3b74e387fb.svg", "fullname": "Yoram Bachrach", "isPro": false, "type": "user", "user": "yorambac" } }, { "_id": "67b7ee1dfedfe971271dccaf", "hidden": false, "name": "William Yang Wang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7ee1dfedfe971271dccb0", "hidden": false, "name": "Roberta Raileanu", "status": "extracted_pending", "statusLastChangedAt": "2025-02-21T03:08:15.471Z", 
"user": { "_id": "633e94793a17ab61de8e2b9c", "avatarUrl": "/avatars/5f2f58ddeed211393660ada6b135f0d5.svg", "fullname": "Roberta Raileanu", "isPro": false, "type": "user", "user": "rraileanu" } } ]
paper_publishedAt: 2025-02-20T12:28:23
paper_title: MLGym: A New Framework and Benchmark for Advancing AI Research Agents
paper_summary:
We introduce Meta MLGym and MLGym-Bench, a new framework and benchmark for evaluating and developing LLM agents on AI research tasks. This is the first Gym environment for machine learning (ML) tasks, enabling research on reinforcement learning (RL) algorithms for training such agents. MLGym-bench consists of 13 diverse and open-ended AI research tasks from diverse domains such as computer vision, natural language processing, reinforcement learning, and game theory. Solving these tasks requires real-world AI research skills such as generating new ideas and hypotheses, creating and processing data, implementing ML methods, training models, running experiments, analyzing the results, and iterating through this process to improve on a given task. We evaluate a number of frontier large language models (LLMs) on our benchmarks such as Claude-3.5-Sonnet, Llama-3.1 405B, GPT-4o, o1-preview, and Gemini-1.5 Pro. Our MLGym framework makes it easy to add new tasks, integrate and evaluate models or agents, generate synthetic data at scale, as well as develop new learning algorithms for training agents on AI research tasks. We find that current frontier models can improve on the given baselines, usually by finding better hyperparameters, but do not generate novel hypotheses, algorithms, architectures, or substantial improvements. We open-source our framework and benchmark to facilitate future research in advancing the AI research capabilities of LLM agents.
paper_upvotes: 171
paper_discussionId: 67b7ee1ffedfe971271dcd3a
paper_projectPage: null
paper_githubRepo: null
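The MLGym summary above frames AI research tasks as a Gym environment for agents. A generic sketch of what a Gym-style reset/step interface for such a task could look like; the class, fields, and reward definition here are illustrative placeholders, not the actual MLGym API:

```python
# Illustrative Gym-style environment for an "AI research task". All names and the
# reward shaping are placeholders following the Gym convention, not MLGym's API.
class ResearchTaskEnv:
    def __init__(self, task):
        self.task = task                    # e.g. baseline script, evaluation fn, step budget
        self.steps = 0

    def reset(self):
        self.steps = 0
        return {"task_description": self.task["description"],
                "baseline_score": self.task["baseline_score"]}

    def step(self, action):
        """action: e.g. an edit to the training script or a command to run an experiment."""
        self.steps += 1
        score = self.task["evaluate"](action)          # run the experiment, read the metric
        reward = score - self.task["baseline_score"]   # improvement over the provided baseline
        done = self.steps >= self.task["budget"]
        return {"last_score": score}, reward, done, {}

# Typical agent loop: obs = env.reset(); then repeatedly
# obs, reward, done, info = env.step(agent.act(obs)) until done.
```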

publishedAt: 2025-02-20T22:04:42.635000
title: S*: Test Time Scaling for Code Generation
thumbnail: https://cdn-thumbnails.h…s/2502.14382.png
numComments: 3
submittedBy:
{ "_id": "60f1abe7544c2adfd699860c", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg", "followerCount": 6280, "fullname": "AK", "isHf": true, "isMod": false, "isPro": false, "name": "akhaliq", "type": "user" }
isAuthorParticipating: true
mediaUrls: null
paper_id: 2502.14382
paper_authors:
[ { "_id": "67b7ed3e58f6b70b18ddb4bc", "hidden": false, "name": "Dacheng Li", "status": "admin_assigned", "statusLastChangedAt": "2025-02-21T14:45:13.558Z", "user": { "_id": "63715b25ffc0489ed7d1f415", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63715b25ffc0489ed7d1f415/xZJepbs0LRqFbW1knnBKR.jpeg", "fullname": "Dacheng Li", "isPro": false, "type": "user", "user": "DachengLi" } }, { "_id": "67b7ed3e58f6b70b18ddb4bd", "hidden": false, "name": "Shiyi Cao", "status": "claimed_verified", "statusLastChangedAt": "2025-02-21T09:58:43.358Z", "user": { "_id": "64ebbae6895a36ab28de811a", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/64ebbae6895a36ab28de811a/gBiaQP4paS4L13eu-yRm7.jpeg", "fullname": "Shiyi Cao", "isPro": false, "type": "user", "user": "eva98" } }, { "_id": "67b7ed3e58f6b70b18ddb4be", "hidden": false, "name": "Chengkun Cao", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7ed3e58f6b70b18ddb4bf", "hidden": false, "name": "Xiuyu Li", "status": "admin_assigned", "statusLastChangedAt": "2025-02-21T14:45:28.150Z", "user": { "_id": "644570ba2d91b15b4c7f6311", "avatarUrl": "/avatars/d5e66012066d0c330b8f23718b1499d8.svg", "fullname": "Xiuyu Li", "isPro": false, "type": "user", "user": "xiuyul" } }, { "_id": "67b7ed3e58f6b70b18ddb4c0", "hidden": false, "name": "Shangyin Tan", "status": "admin_assigned", "statusLastChangedAt": "2025-02-21T14:45:33.730Z", "user": { "_id": "663cfbd9b0f659de3db65c1a", "avatarUrl": "/avatars/b7d82d281026ee04a9932b44a770b840.svg", "fullname": "Shangyin Tan", "isPro": false, "type": "user", "user": "shangyint" } }, { "_id": "67b7ed3e58f6b70b18ddb4c1", "hidden": false, "name": "Kurt Keutzer", "status": "admin_assigned", "statusLastChangedAt": "2025-02-21T14:45:39.902Z", "user": { "_id": "6251bf4b183aa4266924ad91", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1678041834400-6251bf4b183aa4266924ad91.jpeg", "fullname": "Kurt Keutzer", "isPro": true, "type": "user", "user": "kurtkeutzer" } }, { "_id": "67b7ed3e58f6b70b18ddb4c2", "hidden": false, "name": "Jiarong Xing", "status": "admin_assigned", "statusLastChangedAt": "2025-02-21T14:45:45.774Z", "user": { "_id": "66ff8aa43a31c499dc48fdd6", "avatarUrl": "/avatars/060dc90fb13991bd013ce8173f12ae3e.svg", "fullname": "Jiarong Xing", "isPro": false, "type": "user", "user": "JerryPotter" } }, { "_id": "67b7ed3e58f6b70b18ddb4c3", "hidden": false, "name": "Joseph E. Gonzalez", "status": "admin_assigned", "statusLastChangedAt": "2025-02-21T14:45:52.578Z", "user": { "_id": "645d2e8401f4eaab2a0878ce", "avatarUrl": "/avatars/1273c5fb607b4b622a746a42692fa632.svg", "fullname": "Joseph E. Gonzalez", "isPro": false, "type": "user", "user": "ProfJoeyG" } }, { "_id": "67b7ed3e58f6b70b18ddb4c4", "hidden": false, "name": "Ion Stoica", "status": null, "statusLastChangedAt": null, "user": null } ]
paper_publishedAt: 2025-02-20T09:18:53
paper_title: S*: Test Time Scaling for Code Generation
paper_summary:
Increasing test-time compute for LLMs shows promise across domains but remains underexplored in code generation, despite extensive study in math. In this paper, we propose S*, the first hybrid test-time scaling framework that substantially improves the coverage and selection accuracy of generated code. S* extends the existing parallel scaling paradigm with sequential scaling to push performance boundaries. It further leverages a novel selection mechanism that adaptively generates distinguishing inputs for pairwise comparison, combined with execution-grounded information to robustly identify correct solutions. We evaluate across 12 Large Language Models and Large Reasoning Model and show: (1) S* consistently improves performance across model families and sizes, enabling a 3B model to outperform GPT-4o-mini; (2) S* enables non-reasoning models to surpass reasoning models - GPT-4o-mini with S* outperforms o1-preview by 3.7% on LiveCodeBench; (3) S* further boosts state-of-the-art reasoning models - DeepSeek-R1-Distill-Qwen-32B with S* achieves 85.7% on LiveCodeBench, approaching o1 (high) at 88.5%. Code will be available under https://github.com/NovaSky-AI/SkyThought.
paper_upvotes: 59
paper_discussionId: 67b7ed3f58f6b70b18ddb510
paper_projectPage: null
paper_githubRepo: null
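The S* summary above combines parallel sampling, sequential revision on public-test feedback, and a selection step that synthesizes distinguishing inputs and compares candidates on their actual execution outputs. A rough sketch of that flow; every callable passed in (sample_code, revise_with_feedback, gen_distinguishing_input, exec_tests, judge_outputs) is a hypothetical stand-in for the paper's LLM calls and sandboxed execution:

```python
# Rough sketch of a hybrid parallel + sequential test-time scaling loop with
# execution-grounded pairwise selection. All callables are hypothetical placeholders.
def hybrid_test_time_scaling(problem, public_tests, n_parallel, n_rounds,
                             sample_code, revise_with_feedback,
                             gen_distinguishing_input, exec_tests, judge_outputs):
    # Parallel scaling: draw several independent candidate programs.
    candidates = [sample_code(problem) for _ in range(n_parallel)]

    # Sequential scaling: revise each candidate using its public-test results.
    for _ in range(n_rounds):
        candidates = [revise_with_feedback(problem, c, exec_tests(c, public_tests))
                      for c in candidates]

    # Selection: synthesize an input that tells two candidates apart, run both on it,
    # and let a judge pick the winner from the real execution outputs.
    best = candidates[0]
    for challenger in candidates[1:]:
        x = gen_distinguishing_input(problem, best, challenger)
        best = judge_outputs(problem, x,
                             best, exec_tests(best, [x]),
                             challenger, exec_tests(challenger, [x]))
    return best
```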

publishedAt: 2025-02-20T21:25:09.725000
title: On the Trustworthiness of Generative Foundation Models: Guideline, Assessment, and Perspective
thumbnail: https://cdn-thumbnails.h…s/2502.14296.png
numComments: 2
submittedBy:
{ "_id": "639d94ab7145123e0d44e48a", "avatarUrl": "/avatars/5bb6a65b306d1383c4a8bcd9334b470a.svg", "followerCount": 2, "fullname": "Yue Huang", "isHf": false, "isMod": false, "isPro": false, "name": "HowieHwong", "type": "user" }
isAuthorParticipating: true
mediaUrls: null
paper_id: 2502.14296
paper_authors:
[ { "_id": "67b7e371f17ca6989faa9884", "hidden": false, "name": "Yue Huang", "status": "extracted_pending", "statusLastChangedAt": "2025-02-21T02:22:45.907Z", "user": { "_id": "639d94ab7145123e0d44e48a", "avatarUrl": "/avatars/5bb6a65b306d1383c4a8bcd9334b470a.svg", "fullname": "Yue Huang", "isPro": false, "type": "user", "user": "HowieHwong" } }, { "_id": "67b7e371f17ca6989faa9885", "hidden": false, "name": "Chujie Gao", "status": "claimed_verified", "statusLastChangedAt": "2025-02-21T09:58:51.964Z", "user": { "_id": "65a13cb1c5770b27aef2a2bc", "avatarUrl": "/avatars/88ec5b988f10ad9fd4d469ae2fa34680.svg", "fullname": "Chujie Gao", "isPro": false, "type": "user", "user": "Flossie" } }, { "_id": "67b7e371f17ca6989faa9886", "hidden": false, "name": "Siyuan Wu", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7e371f17ca6989faa9887", "hidden": false, "name": "Haoran Wang", "status": "claimed_verified", "statusLastChangedAt": "2025-02-24T09:23:23.186Z", "user": { "_id": "64b82c659ebb69a79f0073f6", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/64b82c659ebb69a79f0073f6/INEHG7kijEHOhFZMjdBIM.png", "fullname": "Haoran Wang", "isPro": false, "type": "user", "user": "wang2226" } }, { "_id": "67b7e371f17ca6989faa9888", "hidden": false, "name": "Xiangqi Wang", "status": "claimed_verified", "statusLastChangedAt": "2025-02-21T09:58:45.842Z", "user": { "_id": "66e4aa8d4926518abbf5cae2", "avatarUrl": "/avatars/dcff2521e0292b602f86c76fc4b5bbae.svg", "fullname": "XiangqiWang", "isPro": false, "type": "user", "user": "qisein" } }, { "_id": "67b7e371f17ca6989faa9889", "hidden": false, "name": "Yujun Zhou", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7e371f17ca6989faa988a", "hidden": false, "name": "Yanbo Wang", "status": "claimed_verified", "statusLastChangedAt": "2025-02-21T09:58:54.838Z", "user": { "_id": "6512a4322f0aa026dd6dc9f8", "avatarUrl": "/avatars/7cc88d2d8061a83a24bb4458d7cbb242.svg", "fullname": "wyf", "isPro": false, "type": "user", "user": "wyf23187" } }, { "_id": "67b7e371f17ca6989faa988b", "hidden": false, "name": "Jiayi Ye", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7e371f17ca6989faa988c", "hidden": false, "name": "Jiawen Shi", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7e371f17ca6989faa988d", "hidden": false, "name": "Qihui Zhang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7e371f17ca6989faa988e", "hidden": false, "name": "Yuan Li", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7e371f17ca6989faa988f", "hidden": false, "name": "Han Bao", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7e371f17ca6989faa9890", "hidden": false, "name": "Zhaoyi Liu", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7e371f17ca6989faa9891", "hidden": false, "name": "Tianrui Guan", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7e371f17ca6989faa9892", "hidden": false, "name": "Dongping Chen", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7e371f17ca6989faa9893", "hidden": false, "name": "Ruoxi Chen", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7e371f17ca6989faa9894", "hidden": false, "name": "Kehan Guo", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7e371f17ca6989faa9895", "hidden": false, "name": "Andy Zou", "status": null, "statusLastChangedAt": null, 
"user": null }, { "_id": "67b7e371f17ca6989faa9896", "hidden": false, "name": "Bryan Hooi Kuen-Yew", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7e371f17ca6989faa9897", "hidden": false, "name": "Caiming Xiong", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7e371f17ca6989faa9898", "hidden": false, "name": "Elias Stengel-Eskin", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7e371f17ca6989faa9899", "hidden": false, "name": "Hongyang Zhang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7e371f17ca6989faa989a", "hidden": false, "name": "Hongzhi Yin", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7e371f17ca6989faa989b", "hidden": false, "name": "Huan Zhang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7e371f17ca6989faa989c", "hidden": false, "name": "Huaxiu Yao", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7e371f17ca6989faa989d", "hidden": false, "name": "Jaehong Yoon", "status": "claimed_verified", "statusLastChangedAt": "2025-02-21T09:58:50.021Z", "user": { "_id": "652066649004117947e46ed6", "avatarUrl": "/avatars/972c97df6f26d2c3d6ce71ec579984bb.svg", "fullname": "Jaehong Yoon", "isPro": false, "type": "user", "user": "jaehong31" } }, { "_id": "67b7e371f17ca6989faa989e", "hidden": false, "name": "Jieyu Zhang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7e371f17ca6989faa989f", "hidden": false, "name": "Kai Shu", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7e371f17ca6989faa98a0", "hidden": false, "name": "Kaijie Zhu", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7e371f17ca6989faa98a1", "hidden": false, "name": "Ranjay Krishna", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7e371f17ca6989faa98a2", "hidden": false, "name": "Swabha Swayamdipta", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7e371f17ca6989faa98a3", "hidden": false, "name": "Taiwei Shi", "status": "claimed_verified", "statusLastChangedAt": "2025-02-21T09:58:56.645Z", "user": { "_id": "62e1b3cb3eb0730f621a83f6", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1658958764563-noauth.jpeg", "fullname": "Taiwei Shi", "isPro": false, "type": "user", "user": "MaksimSTW" } }, { "_id": "67b7e371f17ca6989faa98a4", "hidden": false, "name": "Weijia Shi", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7e371f17ca6989faa98a5", "hidden": false, "name": "Xiang Li", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7e371f17ca6989faa98a6", "hidden": false, "name": "Yiwei Li", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7e371f17ca6989faa98a7", "hidden": false, "name": "Yuexing Hao", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7e371f17ca6989faa98a8", "hidden": false, "name": "Yuexing Hao", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7e371f17ca6989faa98a9", "hidden": false, "name": "Zhihao Jia", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7e371f17ca6989faa98aa", "hidden": false, "name": "Zhize Li", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7e371f17ca6989faa98ab", "hidden": false, "name": "Xiuying Chen", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": 
"67b7e371f17ca6989faa98ac", "hidden": false, "name": "Zhengzhong Tu", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7e371f17ca6989faa98ad", "hidden": false, "name": "Xiyang Hu", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7e371f17ca6989faa98ae", "hidden": false, "name": "Tianyi Zhou", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7e371f17ca6989faa98af", "hidden": false, "name": "Jieyu Zhao", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7e371f17ca6989faa98b0", "hidden": false, "name": "Lichao Sun", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7e371f17ca6989faa98b1", "hidden": false, "name": "Furong Huang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7e371f17ca6989faa98b2", "hidden": false, "name": "Or Cohen Sasson", "status": "claimed_verified", "statusLastChangedAt": "2025-02-21T09:58:47.678Z", "user": { "_id": "67b7eccd10a9714460d767fc", "avatarUrl": "/avatars/93b03f5a9df7bab777295d811520454f.svg", "fullname": "Or Cohen-Sasson", "isPro": false, "type": "user", "user": "orcs-prime" } }, { "_id": "67b7e371f17ca6989faa98b3", "hidden": false, "name": "Prasanna Sattigeri", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7e371f17ca6989faa98b4", "hidden": false, "name": "Anka Reuel", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7e371f17ca6989faa98b5", "hidden": false, "name": "Max Lamparth", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7e371f17ca6989faa98b6", "hidden": false, "name": "Yue Zhao", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7e371f17ca6989faa98b7", "hidden": false, "name": "Nouha Dziri", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7e371f17ca6989faa98b8", "hidden": false, "name": "Yu Su", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7e371f17ca6989faa98b9", "hidden": false, "name": "Huan Sun", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7e371f17ca6989faa98ba", "hidden": false, "name": "Heng Ji", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7e371f17ca6989faa98bb", "hidden": false, "name": "Chaowei Xiao", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7e371f17ca6989faa98bc", "hidden": false, "name": "Mohit Bansal", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7e371f17ca6989faa98bd", "hidden": false, "name": "Nitesh V. Chawla", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7e371f17ca6989faa98be", "hidden": false, "name": "Jian Pei", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7e371f17ca6989faa98bf", "hidden": false, "name": "Jianfeng Gao", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7e371f17ca6989faa98c0", "hidden": false, "name": "Michael Backes", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7e371f17ca6989faa98c1", "hidden": false, "name": "Philip S. 
Yu", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7e371f17ca6989faa98c2", "hidden": false, "name": "Neil Zhenqiang Gong", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7e371f17ca6989faa98c3", "hidden": false, "name": "Pin-Yu Chen", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7e371f17ca6989faa98c4", "hidden": false, "name": "Bo Li", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7e371f17ca6989faa98c5", "hidden": false, "name": "Xiangliang Zhang", "status": null, "statusLastChangedAt": null, "user": null } ]
2025-02-20T06:20:36
On the Trustworthiness of Generative Foundation Models: Guideline, Assessment, and Perspective
Generative Foundation Models (GenFMs) have emerged as transformative tools. However, their widespread adoption raises critical concerns regarding trustworthiness across multiple dimensions. This paper presents a comprehensive framework to address these challenges through three key contributions. First, we systematically review global AI governance laws and policies from governments and regulatory bodies, as well as industry practices and standards. Based on this analysis, we propose a set of guiding principles for GenFMs, developed through extensive multidisciplinary collaboration that integrates technical, ethical, legal, and societal perspectives. Second, we introduce TrustGen, the first dynamic benchmarking platform designed to evaluate trustworthiness across multiple dimensions and model types, including text-to-image, large language, and vision-language models. TrustGen leverages modular components--metadata curation, test case generation, and contextual variation--to enable adaptive and iterative assessments, overcoming the limitations of static evaluation methods. Using TrustGen, we reveal significant progress in trustworthiness while identifying persistent challenges. Finally, we provide an in-depth discussion of the challenges and future directions for trustworthy GenFMs, revealing the complex, evolving nature of trustworthiness, highlighting the nuanced trade-offs between utility and trustworthiness as well as considerations for various downstream applications, identifying persistent challenges, and laying out a strategic roadmap for future research. This work establishes a holistic framework for advancing trustworthiness in GenAI, paving the way for safer and more responsible integration of GenFMs into critical applications. To facilitate advancement in the community, we release the toolkit for dynamic evaluation.
45
67b7e375f17ca6989faa9a28
null
null
2025-02-20T21:13:28.792000
Which of These Best Describes Multiple Choice Evaluation with LLMs? A) Forced B) Flawed C) Fixable D) All of the Above
https://cdn-thumbnails.h…s/2502.14127.png
2
{ "_id": "62a3f93fe2b7740fe2a94c86", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/62a3f93fe2b7740fe2a94c86/ZiaPqiVqXI2ANIyWQY_hT.png", "followerCount": 6, "fullname": "Nishant Balepur", "isHf": false, "isMod": false, "isPro": false, "name": "nbalepur", "type": "user" }
true
null
2502.14127
[ { "_id": "67b7e12b92b9b5b8184c6580", "hidden": false, "name": "Nishant Balepur", "status": "claimed_verified", "statusLastChangedAt": "2025-02-21T09:59:02.019Z", "user": { "_id": "62a3f93fe2b7740fe2a94c86", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/62a3f93fe2b7740fe2a94c86/ZiaPqiVqXI2ANIyWQY_hT.png", "fullname": "Nishant Balepur", "isPro": false, "type": "user", "user": "nbalepur" } }, { "_id": "67b7e12b92b9b5b8184c6581", "hidden": false, "name": "Rachel Rudinger", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7e12b92b9b5b8184c6582", "hidden": false, "name": "Jordan Lee Boyd-Graber", "status": null, "statusLastChangedAt": null, "user": null } ]
2025-02-19T22:11:52
Which of These Best Describes Multiple Choice Evaluation with LLMs? A) Forced B) Flawed C) Fixable D) All of the Above
Multiple choice question answering (MCQA) is popular for LLM evaluation due to its simplicity and human-like testing, but we argue for its reform. We first reveal flaws in MCQA's format, as it struggles to: 1) test generation/subjectivity; 2) match LLM use cases; and 3) fully test knowledge. We instead advocate for generative formats based on human testing, where LLMs construct and explain answers, better capturing user needs and knowledge while remaining easy to score. We then show that even when MCQA is a useful format, its datasets suffer from: leakage; unanswerability; shortcuts; and saturation. For each issue, we give fixes from education, like rubrics to guide MCQ writing; scoring methods to bridle guessing; and Item Response Theory to build harder MCQs. Lastly, we discuss LLM errors in MCQA (robustness, biases, and unfaithful explanations), showing how our prior solutions better measure or address these issues. While we do not need to abandon MCQA, we encourage more efforts in refining the task based on educational testing, advancing evaluations.
2
67b7e12c92b9b5b8184c65a5
null
null
2025-02-20T16:00:25.426000
REALTALK: A 21-Day Real-World Dataset for Long-Term Conversation
https://cdn-thumbnails.h…s/2502.13270.png
2
{ "_id": "6142ec5a7215c6d505bafd4e", "avatarUrl": "/avatars/ae0387b672435c5a4cf16ff6764ce597.svg", "followerCount": null, "fullname": "Dong-Ho Lee", "isHf": false, "isMod": false, "isPro": false, "name": "danny911kr", "type": "user" }
true
[ "https://cdn-uploads.huggingface.co/production/uploads/6142ec5a7215c6d505bafd4e/8ZXPnL7UdgpvHkiP0HHDI.png" ]
2502.13270
[ { "_id": "67b7975d10a9714460c03882", "hidden": false, "name": "Dong-Ho Lee", "status": "claimed_verified", "statusLastChangedAt": "2025-02-21T09:59:30.731Z", "user": { "_id": "6142ec5a7215c6d505bafd4e", "avatarUrl": "/avatars/ae0387b672435c5a4cf16ff6764ce597.svg", "fullname": "Dong-Ho Lee", "isPro": false, "type": "user", "user": "danny911kr" } }, { "_id": "67b7975d10a9714460c03883", "hidden": false, "name": "Adyasha Maharana", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7975d10a9714460c03884", "hidden": false, "name": "Jay Pujara", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7975d10a9714460c03885", "hidden": false, "name": "Xiang Ren", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7975d10a9714460c03886", "hidden": false, "name": "Francesco Barbieri", "status": null, "statusLastChangedAt": null, "user": null } ]
2025-02-18T20:29:01
REALTALK: A 21-Day Real-World Dataset for Long-Term Conversation
Long-term, open-domain dialogue capabilities are essential for chatbots aiming to recall past interactions and demonstrate emotional intelligence (EI). Yet, most existing research relies on synthetic, LLM-generated data, leaving open questions about real-world conversational patterns. To address this gap, we introduce REALTALK, a 21-day corpus of authentic messaging app dialogues, providing a direct benchmark against genuine human interactions. We first conduct a dataset analysis, focusing on EI attributes and persona consistency to understand the unique challenges posed by real-world dialogues. By comparing with LLM-generated conversations, we highlight key differences, including diverse emotional expressions and variations in persona stability that synthetic dialogues often fail to capture. Building on these insights, we introduce two benchmark tasks: (1) persona simulation where a model continues a conversation on behalf of a specific user given prior dialogue context; and (2) memory probing where a model answers targeted questions requiring long-term memory of past interactions. Our findings reveal that models struggle to simulate a user solely from dialogue history, while fine-tuning on specific user chats improves persona emulation. Additionally, existing models face significant challenges in recalling and leveraging long-term context within real-world conversations.
6
67b7975e10a9714460c038bb
null
null
2025-02-20T14:34:52.849000
From Tools to Teammates: Evaluating LLMs in Multi-Session Coding Interactions
https://cdn-thumbnails.h…s/2502.13791.png
3
{ "_id": "62645f88c39850dc093d6105", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1650745211725-noauth.png", "followerCount": 51, "fullname": "Mohammed Hamdy", "isHf": false, "isMod": false, "isPro": false, "name": "mmhamdy", "type": "user" }
true
null
2502.13791
[ { "_id": "67b7838bb41e5f760f8bd1b0", "hidden": false, "name": "Nathanaël Carraz Rakotonirina", "status": "claimed_verified", "statusLastChangedAt": "2025-02-24T15:51:38.471Z", "user": { "_id": "6195d3199b7166aedc74247f", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6195d3199b7166aedc74247f/8N5mUpSf3q5hRfMd8x70U.jpeg", "fullname": "Nathanaël Carraz Rakotonirina", "isPro": false, "type": "user", "user": "nathanaelc" } }, { "_id": "67b7838bb41e5f760f8bd1b1", "hidden": false, "name": "Mohammed Hamdy", "status": "claimed_verified", "statusLastChangedAt": "2025-02-21T09:59:34.800Z", "user": { "_id": "62645f88c39850dc093d6105", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1650745211725-noauth.png", "fullname": "Mohammed Hamdy", "isPro": false, "type": "user", "user": "mmhamdy" } }, { "_id": "67b7838bb41e5f760f8bd1b2", "hidden": false, "name": "Jon Ander Campos", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7838bb41e5f760f8bd1b3", "hidden": false, "name": "Lucas Weber", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7838bb41e5f760f8bd1b4", "hidden": false, "name": "Alberto Testoni", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7838bb41e5f760f8bd1b5", "hidden": false, "name": "Marzieh Fadaee", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7838bb41e5f760f8bd1b6", "hidden": false, "name": "Sandro Pezzelle", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b7838bb41e5f760f8bd1b7", "hidden": false, "name": "Marco Del Tredici", "status": null, "statusLastChangedAt": null, "user": null } ]
2025-02-19T14:58:04
From Tools to Teammates: Evaluating LLMs in Multi-Session Coding Interactions
Large Language Models (LLMs) are increasingly used in working environments for a wide range of tasks, excelling at solving individual problems in isolation. However, are they also able to effectively collaborate over long-term interactions? To investigate this, we introduce MemoryCode, a synthetic multi-session dataset designed to test LLMs' ability to track and execute simple coding instructions amid irrelevant information, simulating a realistic setting. While all the models we tested handle isolated instructions well, even the performance of state-of-the-art models like GPT-4o deteriorates when instructions are spread across sessions. Our analysis suggests this is due to their failure to retrieve and integrate information over long instruction chains. Our results highlight a fundamental limitation of current LLMs, restricting their ability to collaborate effectively in long interactions.
5
67b7838cb41e5f760f8bd209
null
null
2025-02-20T13:47:47.134000
Judging the Judges: A Collection of LLM-Generated Relevance Judgements
https://cdn-thumbnails.h…s/2502.13908.png
2
{ "_id": "64108fc514215c0775e13f5e", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/64108fc514215c0775e13f5e/pHWr8TlBnYrulo2owIrrv.jpeg", "followerCount": null, "fullname": "Hossein A. (Saeed) Rahmani", "isHf": false, "isMod": false, "isPro": false, "name": "rahmanidashti", "type": "user" }
true
null
2502.13908
[ { "_id": "67b75ce1fedef65ff99cf5f8", "hidden": false, "name": "Hossein A. Rahmani", "status": "claimed_verified", "statusLastChangedAt": "2025-02-20T16:57:36.417Z", "user": { "_id": "64108fc514215c0775e13f5e", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/64108fc514215c0775e13f5e/pHWr8TlBnYrulo2owIrrv.jpeg", "fullname": "Hossein A. (Saeed) Rahmani", "isPro": false, "type": "user", "user": "rahmanidashti" } }, { "_id": "67b75ce1fedef65ff99cf5f9", "hidden": false, "name": "Clemencia Siro", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b75ce1fedef65ff99cf5fa", "hidden": false, "name": "Mohammad Aliannejadi", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b75ce1fedef65ff99cf5fb", "hidden": false, "name": "Nick Craswell", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b75ce1fedef65ff99cf5fc", "hidden": false, "name": "Charles L. A. Clarke", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b75ce1fedef65ff99cf5fd", "hidden": false, "name": "Guglielmo Faggioli", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b75ce1fedef65ff99cf5fe", "hidden": false, "name": "Bhaskar Mitra", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b75ce1fedef65ff99cf5ff", "hidden": false, "name": "Paul Thomas", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b75ce1fedef65ff99cf600", "hidden": false, "name": "Emine Yilmaz", "status": null, "statusLastChangedAt": null, "user": null } ]
2025-02-19T17:40:32
Judging the Judges: A Collection of LLM-Generated Relevance Judgements
Using Large Language Models (LLMs) for relevance assessments offers promising opportunities to improve Information Retrieval (IR), Natural Language Processing (NLP), and related fields. Indeed, LLMs hold the promise of allowing IR experimenters to build evaluation collections with a fraction of the manual human labor currently required. This could help with fresh topics on which there is still limited knowledge and could mitigate the challenges of evaluating ranking systems in low-resource scenarios, where it is challenging to find human annotators. Given the fast-paced recent developments in the domain, many questions concerning LLMs as assessors are yet to be answered. Among the aspects that require further investigation, we can list the impact of various components in a relevance judgment generation pipeline, such as the prompt used or the LLM chosen. This paper benchmarks and reports on the results of a large-scale automatic relevance judgment evaluation, the LLMJudge challenge at SIGIR 2024, where different relevance assessment approaches were proposed. In detail, we release and benchmark 42 LLM-generated labels of the TREC 2023 Deep Learning track relevance judgments produced by eight international teams who participated in the challenge. Given their diverse nature, these automatically generated relevance judgments can help the community not only investigate systematic biases caused by LLMs but also explore the effectiveness of ensemble models, analyze the trade-offs between different models and human assessors, and advance methodologies for improving automated evaluation techniques. The released resource is available at the following link: https://llm4eval.github.io/LLMJudge-benchmark/
4
67b75ce2fedef65ff99cf623
null
null
2025-02-20T12:26:53.898000
MMTEB: Massive Multilingual Text Embedding Benchmark
https://cdn-thumbnails.h…s/2502.13595.png
3
{ "_id": "5f1eb362eec0ad2a071ad6e2", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/5f1eb362eec0ad2a071ad6e2/IXMYkYKuTwn6kBdWnQeeY.png", "followerCount": 120, "fullname": "Niklas Muennighoff", "isHf": false, "isMod": false, "isPro": false, "name": "Muennighoff", "type": "user" }
true
null
2502.13595
[ { "_id": "67b6fa9cb544aa153178a60b", "hidden": false, "name": "Kenneth Enevoldsen", "status": "claimed_verified", "statusLastChangedAt": "2025-02-20T15:52:45.751Z", "user": { "_id": "5ff5943752c26e9bc240bada", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/5ff5943752c26e9bc240bada/Exyzf3C_gJ2KdsL4K5_cq.png", "fullname": "Kenneth C. Enevoldsen", "isPro": false, "type": "user", "user": "KennethEnevoldsen" } }, { "_id": "67b6fa9cb544aa153178a60c", "hidden": false, "name": "Isaac Chung", "status": "claimed_verified", "statusLastChangedAt": "2025-02-21T09:59:42.744Z", "user": { "_id": "64cc0e80a257a3212c0c4b24", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/64cc0e80a257a3212c0c4b24/wqs6WZN8-3OQthcnQXgN7.png", "fullname": "Isaac Chung", "isPro": false, "type": "user", "user": "isaacchung" } }, { "_id": "67b6fa9cb544aa153178a60d", "hidden": false, "name": "Imene Kerboua", "status": "claimed_verified", "statusLastChangedAt": "2025-02-21T10:18:18.756Z", "user": { "_id": "62610f8040e04009e81047e9", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/62610f8040e04009e81047e9/iqGTYB7OMmS0jkFEB7cEF.jpeg", "fullname": "Imene Kerboua", "isPro": false, "type": "user", "user": "imenelydiaker" } }, { "_id": "67b6fa9cb544aa153178a60e", "hidden": false, "name": "Márton Kardos", "status": "claimed_verified", "statusLastChangedAt": "2025-02-21T09:59:40.740Z", "user": { "_id": "62696cd3d1ac0cde59280dcf", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1656940964610-62696cd3d1ac0cde59280dcf.jpeg", "fullname": "Márton Kardos", "isPro": false, "type": "user", "user": "kardosdrur" } }, { "_id": "67b6fa9cb544aa153178a60f", "hidden": false, "name": "Ashwin Mathur", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b6fa9cb544aa153178a610", "hidden": false, "name": "David Stap", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b6fa9cb544aa153178a611", "hidden": false, "name": "Jay Gala", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b6fa9cb544aa153178a612", "hidden": false, "name": "Wissam Siblini", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b6fa9cb544aa153178a613", "hidden": false, "name": "Dominik Krzemiński", "status": "claimed_verified", "statusLastChangedAt": "2025-02-24T10:58:20.283Z", "user": { "_id": "662cc845616128914a3c9817", "avatarUrl": "/avatars/cd186de00f28bb866abc1ab6c4465663.svg", "fullname": "DomKrz", "isPro": false, "type": "user", "user": "dokato" } }, { "_id": "67b6fa9cb544aa153178a614", "hidden": false, "name": "Genta Indra Winata", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b6fa9cb544aa153178a615", "hidden": false, "name": "Saba Sturua", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b6fa9cb544aa153178a616", "hidden": false, "name": "Saiteja Utpala", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b6fa9cb544aa153178a617", "hidden": false, "name": "Mathieu Ciancone", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b6fa9cb544aa153178a618", "hidden": false, "name": "Marion Schaeffer", "status": "claimed_verified", "statusLastChangedAt": "2025-02-21T16:39:04.878Z", "user": { "_id": "6527f0678808d80ccff9c230", "avatarUrl": "/avatars/3f7edd63770d42472947252c36ffbf5e.svg", "fullname": "Marion Schaeffer", "isPro": false, "type": "user", "user": "mschaeffer" } }, { "_id": "67b6fa9cb544aa153178a619", "hidden": 
false, "name": "Gabriel Sequeira", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b6fa9cb544aa153178a61a", "hidden": false, "name": "Diganta Misra", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b6fa9cb544aa153178a61b", "hidden": false, "name": "Shreeya Dhakal", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b6fa9cb544aa153178a61c", "hidden": false, "name": "Jonathan Rystrøm", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b6fa9cb544aa153178a61d", "hidden": false, "name": "Roman Solomatin", "status": "claimed_verified", "statusLastChangedAt": "2025-02-20T15:52:48.202Z", "user": { "_id": "61af4544d691b3aadd1f62b6", "avatarUrl": "/avatars/7a4067accdd1005f78c3c4adad3ee0a5.svg", "fullname": "Solomatin Roman", "isPro": false, "type": "user", "user": "Samoed" } }, { "_id": "67b6fa9cb544aa153178a61e", "hidden": false, "name": "Ömer Çağatan", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b6fa9cb544aa153178a61f", "hidden": false, "name": "Akash Kundu", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b6fa9cb544aa153178a620", "hidden": false, "name": "Martin Bernstorff", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b6fa9cb544aa153178a621", "hidden": false, "name": "Shitao Xiao", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b6fa9cb544aa153178a622", "hidden": false, "name": "Akshita Sukhlecha", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b6fa9cb544aa153178a623", "hidden": false, "name": "Bhavish Pahwa", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b6fa9cb544aa153178a624", "hidden": false, "name": "Rafał Poświata", "status": "claimed_verified", "statusLastChangedAt": "2025-02-25T09:40:57.852Z", "user": { "_id": "63933543f8b4767ae646e8a1", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1670591762482-noauth.png", "fullname": "Rafał Poświata", "isPro": false, "type": "user", "user": "rafalposwiata" } }, { "_id": "67b6fa9cb544aa153178a625", "hidden": false, "name": "Kranthi Kiran GV", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b6fa9cb544aa153178a626", "hidden": false, "name": "Shawon Ashraf", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b6fa9cb544aa153178a627", "hidden": false, "name": "Daniel Auras", "status": "claimed_verified", "statusLastChangedAt": "2025-02-24T09:24:56.683Z", "user": { "_id": "642c54a5b09c70b36de03071", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/642c54a5b09c70b36de03071/EwQyQmust01dgkdRMtYFt.jpeg", "fullname": "rasdani", "isPro": false, "type": "user", "user": "rasdani" } }, { "_id": "67b6fa9cb544aa153178a628", "hidden": false, "name": "Björn Plüster", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b6fa9cb544aa153178a629", "hidden": false, "name": "Jan Philipp Harries", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b6fa9cb544aa153178a62a", "hidden": false, "name": "Loïc Magne", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b6fa9cb544aa153178a62b", "hidden": false, "name": "Isabelle Mohr", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b6fa9cb544aa153178a62c", "hidden": false, "name": "Mariya Hendriksen", "status": "claimed_verified", "statusLastChangedAt": "2025-02-24T09:23:49.198Z", "user": { "_id": 
"63960f35339bf68bc775b468", "avatarUrl": "/avatars/d82a914df001b830378246904634b756.svg", "fullname": "Mariya Hendrikse", "isPro": false, "type": "user", "user": "mariya-he" } }, { "_id": "67b6fa9cb544aa153178a62d", "hidden": false, "name": "Dawei Zhu", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b6fa9cb544aa153178a62e", "hidden": false, "name": "Hippolyte Gisserot-Boukhlef", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b6fa9cb544aa153178a62f", "hidden": false, "name": "Tom Aarsen", "status": "claimed_verified", "statusLastChangedAt": "2025-02-21T15:15:34.699Z", "user": { "_id": "6317233cc92fd6fee317e030", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6317233cc92fd6fee317e030/cJHSvvimr1kqgQfHOjO5n.png", "fullname": "Tom Aarsen", "isPro": false, "type": "user", "user": "tomaarsen" } }, { "_id": "67b6fa9cb544aa153178a630", "hidden": false, "name": "Jan Kostkan", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b6fa9cb544aa153178a631", "hidden": false, "name": "Konrad Wojtasik", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b6fa9cb544aa153178a632", "hidden": false, "name": "Taemin Lee", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b6fa9cb544aa153178a633", "hidden": false, "name": "Marek Šuppa", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b6fa9cb544aa153178a634", "hidden": false, "name": "Crystina Zhang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b6fa9cb544aa153178a635", "hidden": false, "name": "Roberta Rocca", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b6fa9cb544aa153178a636", "hidden": false, "name": "Mohammed Hamdy", "status": "claimed_verified", "statusLastChangedAt": "2025-02-21T09:59:46.087Z", "user": { "_id": "62645f88c39850dc093d6105", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1650745211725-noauth.png", "fullname": "Mohammed Hamdy", "isPro": false, "type": "user", "user": "mmhamdy" } }, { "_id": "67b6fa9cb544aa153178a637", "hidden": false, "name": "Andrianos Michail", "status": "claimed_verified", "statusLastChangedAt": "2025-02-28T13:07:57.365Z", "user": { "_id": "60e1bc418479fac0bd1daa0e", "avatarUrl": "/avatars/61f4f8ac0714aae7d5cbb7d4e1038020.svg", "fullname": "Andrianos Michail", "isPro": false, "type": "user", "user": "Andrianos" } }, { "_id": "67b6fa9cb544aa153178a638", "hidden": false, "name": "John Yang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b6fa9cb544aa153178a639", "hidden": false, "name": "Manuel Faysse", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b6fa9cb544aa153178a63a", "hidden": false, "name": "Aleksei Vatolin", "status": "claimed_verified", "statusLastChangedAt": "2025-02-24T09:25:10.077Z", "user": { "_id": "6454e5ac273f64983024ba5d", "avatarUrl": "/avatars/ca6f861ed830a79f7a1eba04ebe84afc.svg", "fullname": "Vatolin Alexey", "isPro": false, "type": "user", "user": "vatolinalex" } }, { "_id": "67b6fa9cb544aa153178a63b", "hidden": false, "name": "Nandan Thakur", "status": "claimed_verified", "statusLastChangedAt": "2025-02-24T09:24:59.423Z", "user": { "_id": "60196690dd31fde3c1062960", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1612277330660-noauth.jpeg", "fullname": "Nandan Thakur", "isPro": false, "type": "user", "user": "nthakur" } }, { "_id": "67b6fa9cb544aa153178a63c", "hidden": false, "name": "Manan 
Dey", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b6fa9cb544aa153178a63d", "hidden": false, "name": "Dipam Vasani", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b6fa9cb544aa153178a63e", "hidden": false, "name": "Pranjal Chitale", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b6fa9cb544aa153178a63f", "hidden": false, "name": "Simone Tedeschi", "status": "claimed_verified", "statusLastChangedAt": "2025-02-24T09:23:51.243Z", "user": { "_id": "61b85aa99ba538c73a7dc78b", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/61b85aa99ba538c73a7dc78b/gWxtQAvOYn7cXgE_nAy0p.jpeg", "fullname": "Simone Tedeschi", "isPro": false, "type": "user", "user": "sted97" } }, { "_id": "67b6fa9cb544aa153178a640", "hidden": false, "name": "Nguyen Tai", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b6fa9cb544aa153178a641", "hidden": false, "name": "Artem Snegirev", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b6fa9cb544aa153178a642", "hidden": false, "name": "Michael Günther", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b6fa9cb544aa153178a643", "hidden": false, "name": "Mengzhou Xia", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b6fa9cb544aa153178a644", "hidden": false, "name": "Weijia Shi", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b6fa9cb544aa153178a645", "hidden": false, "name": "Xing Han Lù", "status": "claimed_verified", "statusLastChangedAt": "2025-02-21T16:15:18.716Z", "user": { "_id": "5fa9ff3ea13e063b8b2b60cb", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1633380224986-5fa9ff3ea13e063b8b2b60cb.jpeg", "fullname": "Xing Han Lu", "isPro": false, "type": "user", "user": "xhluca" } }, { "_id": "67b6fa9cb544aa153178a646", "hidden": false, "name": "Jordan Clive", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b6fa9cb544aa153178a647", "hidden": false, "name": "Gayatri Krishnakumar", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b6fa9cb544aa153178a648", "hidden": false, "name": "Anna Maksimova", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b6fa9cb544aa153178a649", "hidden": false, "name": "Silvan Wehrli", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b6fa9cb544aa153178a64a", "hidden": false, "name": "Maria Tikhonova", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b6fa9cb544aa153178a64b", "hidden": false, "name": "Henil Panchal", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b6fa9cb544aa153178a64c", "hidden": false, "name": "Aleksandr Abramov", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b6fa9cb544aa153178a64d", "hidden": false, "name": "Malte Ostendorff", "status": "claimed_verified", "statusLastChangedAt": "2025-02-21T16:15:16.649Z", "user": { "_id": "5efda656ff69163f6f59e5d2", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/5efda656ff69163f6f59e5d2/ru2nfhaNjB9-Ls_vbMq92.jpeg", "fullname": "malteos", "isPro": false, "type": "user", "user": "malteos" } }, { "_id": "67b6fa9cb544aa153178a64e", "hidden": false, "name": "Zheng Liu", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b6fa9cb544aa153178a64f", "hidden": false, "name": "Simon Clematide", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": 
"67b6fa9cb544aa153178a650", "hidden": false, "name": "Lester James Miranda", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b6fa9cb544aa153178a651", "hidden": false, "name": "Alena Fenogenova", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b6fa9cb544aa153178a652", "hidden": false, "name": "Guangyu Song", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b6fa9cb544aa153178a653", "hidden": false, "name": "Ruqiya Bin Safi", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b6fa9cb544aa153178a654", "hidden": false, "name": "Wen-Ding Li", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b6fa9cb544aa153178a655", "hidden": false, "name": "Alessia Borghini", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b6fa9cb544aa153178a656", "hidden": false, "name": "Federico Cassano", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b6fa9cb544aa153178a657", "hidden": false, "name": "Hongjin Su", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b6fa9cb544aa153178a658", "hidden": false, "name": "Jimmy Lin", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b6fa9cb544aa153178a659", "hidden": false, "name": "Howard Yen", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b6fa9cb544aa153178a65a", "hidden": false, "name": "Lasse Hansen", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b6fa9cb544aa153178a65b", "hidden": false, "name": "Sara Hooker", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b6fa9cb544aa153178a65c", "hidden": false, "name": "Chenghao Xiao", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b6fa9cb544aa153178a65d", "hidden": false, "name": "Vaibhav Adlakha", "status": "claimed_verified", "statusLastChangedAt": "2025-02-21T16:15:20.849Z", "user": { "_id": "61981af5d420757268e195ac", "avatarUrl": "/avatars/8b59aaf33447224f83d497425fd7ea8f.svg", "fullname": "Vaibhav Adlakha", "isPro": false, "type": "user", "user": "vaibhavad" } }, { "_id": "67b6fa9cb544aa153178a65e", "hidden": false, "name": "Orion Weller", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b6fa9cb544aa153178a65f", "hidden": false, "name": "Siva Reddy", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b6fa9cb544aa153178a660", "hidden": false, "name": "Niklas Muennighoff", "status": null, "statusLastChangedAt": null, "user": null } ]
2025-02-19T10:13:43
MMTEB: Massive Multilingual Text Embedding Benchmark
Text embeddings are typically evaluated on a limited set of tasks, which are constrained by language, domain, and task diversity. To address these limitations and provide a more comprehensive evaluation, we introduce the Massive Multilingual Text Embedding Benchmark (MMTEB) - a large-scale, community-driven expansion of MTEB, covering over 500 quality-controlled evaluation tasks across 250+ languages. MMTEB includes a diverse set of challenging, novel tasks such as instruction following, long-document retrieval, and code retrieval, representing the largest multilingual collection of evaluation tasks for embedding models to date. Using this collection, we develop several highly multilingual benchmarks, which we use to evaluate a representative set of models. We find that while large language models (LLMs) with billions of parameters can achieve state-of-the-art performance on certain language subsets and task categories, the best-performing publicly available model is multilingual-e5-large-instruct with only 560 million parameters. To facilitate accessibility and reduce computational cost, we introduce a novel downsampling method based on inter-task correlation, ensuring a diverse selection while preserving relative model rankings. Furthermore, we optimize tasks such as retrieval by sampling hard negatives, creating smaller but effective splits. These optimizations allow us to introduce benchmarks that drastically reduce computational demands. For instance, our newly introduced zero-shot English benchmark maintains a ranking order similar to the full-scale version but at a fraction of the computational cost.
31
67b6fa9db544aa153178a69c
null
null
2025-02-20T12:23:27.067000
AIDE: AI-Driven Exploration in the Space of Code
https://cdn-thumbnails.h…s/2502.13138.png
6
{ "_id": "65f7927e7bc58032aa5bda58", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/65f7927e7bc58032aa5bda58/JxUSj-J7YBwtgj6rqQtjn.jpeg", "followerCount": null, "fullname": "Dex Dixing Xu", "isHf": false, "isMod": false, "isPro": false, "name": "dexhunter", "type": "user" }
true
[ "https://cdn-uploads.huggingface.co/production/uploads/65f7927e7bc58032aa5bda58/bkhW4LYUeFqT9_aqPd3Om.jpeg" ]
2502.13138
[ { "_id": "67b6e0829b29983879ad2312", "hidden": false, "name": "Zhengyao Jiang", "status": "admin_assigned", "statusLastChangedAt": "2025-02-20T17:38:24.557Z", "user": { "_id": "630384837b50dd9d0a3328dc", "avatarUrl": "/avatars/17097a93ef403592bc07c0ff6712faf3.svg", "fullname": "Zhengyao Jiang", "isPro": false, "type": "user", "user": "ZhengyaoJiang" } }, { "_id": "67b6e0829b29983879ad2313", "hidden": false, "name": "Dominik Schmidt", "status": "admin_assigned", "statusLastChangedAt": "2025-02-20T17:38:52.544Z", "user": { "_id": "63042a72eedc089484c89aee", "avatarUrl": "/avatars/cbbe3000e6fa783c395b23c6b67da5ab.svg", "fullname": "Dominik Schmidt", "isPro": false, "type": "user", "user": "dominikschmidt" } }, { "_id": "67b6e0829b29983879ad2314", "hidden": false, "name": "Dhruv Srikanth", "status": "admin_assigned", "statusLastChangedAt": "2025-02-20T17:38:58.578Z", "user": { "_id": "62f3203506b6e6f54b4720f5", "avatarUrl": "/avatars/64a395d3391e1025dbdd945c33ceee94.svg", "fullname": "Dhruv Srikanth", "isPro": false, "type": "user", "user": "dSrikanth" } }, { "_id": "67b6e0829b29983879ad2315", "hidden": false, "name": "Dixing Xu", "status": "claimed_verified", "statusLastChangedAt": "2025-02-20T15:52:52.450Z", "user": { "_id": "65f7927e7bc58032aa5bda58", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/65f7927e7bc58032aa5bda58/JxUSj-J7YBwtgj6rqQtjn.jpeg", "fullname": "Dex Dixing Xu", "isPro": false, "type": "user", "user": "dexhunter" } }, { "_id": "67b6e0829b29983879ad2316", "hidden": false, "name": "Ian Kaplan", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b6e0829b29983879ad2317", "hidden": false, "name": "Deniss Jacenko", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b6e0829b29983879ad2318", "hidden": false, "name": "Yuxiang Wu", "status": null, "statusLastChangedAt": null, "user": null } ]
2025-02-18T18:57:21
AIDE: AI-Driven Exploration in the Space of Code
Machine learning, the foundation of modern artificial intelligence, has driven innovations that have fundamentally transformed the world. Yet, behind these advancements lies a complex and often tedious process requiring labor- and compute-intensive iteration and experimentation. Engineers and scientists developing machine learning models spend much of their time on trial-and-error tasks instead of conceptualizing innovative solutions or research hypotheses. To address this challenge, we introduce AI-Driven Exploration (AIDE), a machine learning engineering agent powered by large language models (LLMs). AIDE frames machine learning engineering as a code optimization problem, and formulates trial-and-error as a tree search in the space of potential solutions. By strategically reusing and refining promising solutions, AIDE effectively trades computational resources for enhanced performance, achieving state-of-the-art results on multiple machine learning engineering benchmarks, including our Kaggle evaluations, OpenAI's MLE-Bench, and METR's RE-Bench.
7
67b6e0839b29983879ad2346
null
null
2025-02-20T12:09:53.761000
MVL-SIB: A Massively Multilingual Vision-Language Benchmark for Cross-Modal Topical Matching
https://cdn-thumbnails.h…s/2502.12852.png
2
{ "_id": "64c8c2d87d0ea4e7f12995c6", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/64c8c2d87d0ea4e7f12995c6/h8eWJrz8kqavemy8vQ2NK.jpeg", "followerCount": 3, "fullname": "Fabian David Schmidt", "isHf": false, "isMod": false, "isPro": false, "name": "fdschmidt93", "type": "user" }
true
null
2502.12852
[ { "_id": "67b5b31f5a17526b55c3ccde", "hidden": false, "name": "Fabian David Schmidt", "status": "claimed_verified", "statusLastChangedAt": "2025-02-20T15:53:15.852Z", "user": { "_id": "64c8c2d87d0ea4e7f12995c6", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/64c8c2d87d0ea4e7f12995c6/h8eWJrz8kqavemy8vQ2NK.jpeg", "fullname": "Fabian David Schmidt", "isPro": false, "type": "user", "user": "fdschmidt93" } }, { "_id": "67b5b31f5a17526b55c3ccdf", "hidden": false, "name": "Florian Schneider", "status": "claimed_verified", "statusLastChangedAt": "2025-02-20T10:49:49.711Z", "user": { "_id": "62dfd54798815401141c47fe", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/62dfd54798815401141c47fe/ct2OA_K0Wwpshy8DCswxy.png", "fullname": "Flo Schneider", "isPro": false, "type": "user", "user": "floschne" } }, { "_id": "67b5b31f5a17526b55c3cce0", "hidden": false, "name": "Chris Biemann", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b5b31f5a17526b55c3cce1", "hidden": false, "name": "Goran Glavaš", "status": null, "statusLastChangedAt": null, "user": null } ]
2025-02-18T13:40:05
MVL-SIB: A Massively Multilingual Vision-Language Benchmark for Cross-Modal Topical Matching
Existing multilingual vision-language (VL) benchmarks often only cover a handful of languages. Consequently, evaluations of large vision-language models (LVLMs) predominantly target high-resource languages, underscoring the need for evaluation data for low-resource languages. To address this limitation, we introduce MVL-SIB, a massively multilingual vision-language benchmark that evaluates both cross-modal and text-only topical matching across 205 languages -- over 100 more than the most multilingual existing VL benchmarks encompass. We then benchmark a range of open-weight LVLMs together with GPT-4o(-mini) on MVL-SIB. Our results reveal that LVLMs struggle in cross-modal topic matching in lower-resource languages, performing no better than chance on languages like N'Koo. Our analysis further reveals that VL support in LVLMs declines disproportionately relative to textual support for lower-resource languages, as evidenced by comparison of cross-modal and text-only topical matching performance. We further observe that open-weight LVLMs do not benefit from representing a topic with more than one image, suggesting that these models are not yet fully effective at handling multi-image tasks. By correlating performance on MVL-SIB with other multilingual VL benchmarks, we highlight that MVL-SIB serves as a comprehensive probe of multilingual VL understanding in LVLMs.
3
67b5b3205a17526b55c3cd40
null
null
2025-02-20T12:07:02.880000
Reducing Hallucinations in Language Model-based SPARQL Query Generation Using Post-Generation Memory Retrieval
https://cdn-thumbnails.h…s/2502.13369.png
2
{ "_id": "63e972f1ccae1fe5c6211759", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63e972f1ccae1fe5c6211759/AfKPgMdAraUtvbtJpoHFY.jpeg", "followerCount": 2, "fullname": "Luis Lara", "isHf": false, "isMod": false, "isPro": false, "name": "ludolara", "type": "user" }
true
null
2502.13369
[ { "_id": "67b7610afedfe97127f75374", "hidden": false, "name": "Aditya Sharma", "status": "admin_assigned", "statusLastChangedAt": "2025-02-20T17:37:33.974Z", "user": { "_id": "66d959e4fb6d15635f2b9d76", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/RpDOzFBoW_OQNW4iHuL0g.jpeg", "fullname": "Aditya Sharma ", "isPro": false, "type": "user", "user": "adityasharma001" } }, { "_id": "67b7610afedfe97127f75375", "hidden": false, "name": "Luis Lara", "status": "claimed_verified", "statusLastChangedAt": "2025-02-20T17:11:22.814Z", "user": { "_id": "63e972f1ccae1fe5c6211759", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63e972f1ccae1fe5c6211759/AfKPgMdAraUtvbtJpoHFY.jpeg", "fullname": "Luis Lara", "isPro": false, "type": "user", "user": "ludolara" } }, { "_id": "67b7610afedfe97127f75376", "hidden": false, "name": "Amal Zouaq", "status": "admin_assigned", "statusLastChangedAt": "2025-02-20T17:37:41.765Z", "user": { "_id": "64df79eff08b064990fd62db", "avatarUrl": "/avatars/88a494615a2339bc29db4ea33a9817b2.svg", "fullname": "Amal Zouaq", "isPro": false, "type": "user", "user": "zouaq" } }, { "_id": "67b7610afedfe97127f75377", "hidden": false, "name": "Christopher J. Pal", "status": null, "statusLastChangedAt": null, "user": null } ]
2025-02-19T02:08:13
Reducing Hallucinations in Language Model-based SPARQL Query Generation Using Post-Generation Memory Retrieval
The ability to generate SPARQL queries from natural language questions is crucial for ensuring efficient and accurate retrieval of structured data from knowledge graphs (KG). While large language models (LLMs) have been widely adopted for SPARQL query generation, they are often susceptible to hallucinations and out-of-distribution errors when producing KG elements like Uniform Resource Identifiers (URIs) based on internal parametric knowledge. This often results in content that appears plausible but is factually incorrect, posing significant challenges for their use in real-world information retrieval (IR) applications. This has led to increased research aimed at detecting and mitigating such errors. In this paper, we introduce PGMR (Post-Generation Memory Retrieval), a modular framework that incorporates a non-parametric memory module to retrieve KG elements and enhance LLM-based SPARQL query generation. Our experimental results indicate that PGMR consistently delivers strong performance across diverse datasets, data distributions, and LLMs. Notably, PGMR significantly mitigates URI hallucinations, nearly eliminating the problem in several scenarios.
2
67b7610bfedfe97127f7539c
null
null
2025-02-20T10:53:49.049000
High-Fidelity Novel View Synthesis via Splatting-Guided Diffusion
https://cdn-thumbnails.h…s/2502.12752.png
2
{ "_id": "657dc1576dc01435cd9029d8", "avatarUrl": "/avatars/3bba11ac7659fce61aeaedf40e2057a8.svg", "followerCount": 2, "fullname": "Xiang Zhang", "isHf": false, "isMod": false, "isPro": false, "name": "XiangZ", "type": "user" }
true
null
2502.12752
[ { "_id": "67b74fbdbb87b88059a9c5d3", "hidden": false, "name": "Xiang Zhang", "status": "claimed_verified", "statusLastChangedAt": "2025-02-20T16:06:01.193Z", "user": { "_id": "657dc1576dc01435cd9029d8", "avatarUrl": "/avatars/3bba11ac7659fce61aeaedf40e2057a8.svg", "fullname": "Xiang Zhang", "isPro": false, "type": "user", "user": "XiangZ" } }, { "_id": "67b74fbdbb87b88059a9c5d4", "hidden": false, "name": "Yang Zhang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b74fbdbb87b88059a9c5d5", "hidden": false, "name": "Lukas Mehl", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b74fbdbb87b88059a9c5d6", "hidden": false, "name": "Markus Gross", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b74fbdbb87b88059a9c5d7", "hidden": false, "name": "Christopher Schroers", "status": null, "statusLastChangedAt": null, "user": null } ]
2025-02-18T11:13:06
High-Fidelity Novel View Synthesis via Splatting-Guided Diffusion
Despite recent advances in Novel View Synthesis (NVS), generating high-fidelity views from single or sparse observations remains a significant challenge. Existing splatting-based approaches often produce distorted geometry due to splatting errors. While diffusion-based methods leverage rich 3D priors to achieve improved geometry, they often suffer from texture hallucination. In this paper, we introduce SplatDiff, a pixel-splatting-guided video diffusion model designed to synthesize high-fidelity novel views from a single image. Specifically, we propose an aligned synthesis strategy for precise control of target viewpoints and geometry-consistent view synthesis. To mitigate texture hallucination, we design a texture bridge module that enables high-fidelity texture generation through adaptive feature fusion. In this manner, SplatDiff leverages the strengths of splatting and diffusion to generate novel views with consistent geometry and high-fidelity details. Extensive experiments verify the state-of-the-art performance of SplatDiff in single-view NVS. Additionally, without extra training, SplatDiff shows remarkable zero-shot performance across diverse tasks, including sparse-view NVS and stereo video conversion.
3
67b74fc7bb87b88059a9c75d
null
null
2025-02-20T10:46:55.281000
TESS 2: A Large-Scale Generalist Diffusion Language Model
https://cdn-thumbnails.h…s/2502.13917.png
3
{ "_id": "62608fc2ffe8827cb1d89f9f", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1654027835241-62608fc2ffe8827cb1d89f9f.png", "followerCount": 11, "fullname": "Hamish Ivison", "isHf": false, "isMod": false, "isPro": false, "name": "hamishivi", "type": "user" }
true
null
2502.13917
[ { "_id": "67b698422c8b2ef925e03f4f", "hidden": false, "name": "Jaesung Tae", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b698422c8b2ef925e03f50", "hidden": false, "name": "Hamish Ivison", "status": "extracted_confirmed", "statusLastChangedAt": "2025-03-04T04:34:53.882Z", "user": { "_id": "62608fc2ffe8827cb1d89f9f", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1654027835241-62608fc2ffe8827cb1d89f9f.png", "fullname": "Hamish Ivison", "isPro": false, "type": "user", "user": "hamishivi" } }, { "_id": "67b698422c8b2ef925e03f51", "hidden": false, "name": "Sachin Kumar", "status": "admin_assigned", "statusLastChangedAt": "2025-02-20T17:12:28.834Z", "user": { "_id": "63f24d2d7ddf724fbcc0ea9c", "avatarUrl": "/avatars/3e24c1aa9c1b4066d2dd56aeb4b0f62e.svg", "fullname": "sachin kumar", "isPro": false, "type": "user", "user": "sachinkumar" } }, { "_id": "67b698422c8b2ef925e03f52", "hidden": false, "name": "Arman Cohan", "status": "admin_assigned", "statusLastChangedAt": "2025-02-20T17:12:35.587Z", "user": { "_id": "5f5ba21188f57f65f951f255", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1599840760465-noauth.png", "fullname": "Arman Cohan", "isPro": false, "type": "user", "user": "armanc" } } ]
2025-02-19T17:50:31
TESS 2: A Large-Scale Generalist Diffusion Language Model
We introduce TESS 2, a general instruction-following diffusion language model that outperforms contemporary instruction-tuned diffusion models, as well as matches and sometimes exceeds strong autoregressive (AR) models. We train TESS 2 by first adapting a strong AR model via continued pretraining with the usual cross-entropy as diffusion loss, and then performing further instruction tuning. We find that adaptation training as well as the choice of the base model is crucial for training good instruction-following diffusion models. We further propose reward guidance, a novel and modular inference-time guidance procedure to align model outputs without needing to train the underlying model. Finally, we show that TESS 2 further improves with increased inference-time compute, highlighting the utility of diffusion LMs in having fine-grained controllability over the amount of compute used at inference time. Code and models are available at https://github.com/hamishivi/tess-2.
6
67b698432c8b2ef925e03fb4
null
null
2025-02-20T07:25:12.795000
REFIND: Retrieval-Augmented Factuality Hallucination Detection in Large Language Models
https://cdn-thumbnails.h…s/2502.13622.png
2
{ "_id": "6540fbf9cb7fffd683942b43", "avatarUrl": "/avatars/d4a64fbde511d0949e1c339179586850.svg", "followerCount": 2, "fullname": "DongGeon Lee", "isHf": false, "isMod": false, "isPro": false, "name": "oneonlee", "type": "user" }
true
null
2502.13622
[ { "_id": "67b69cf4573aa8417aec103c", "hidden": false, "name": "DongGeon Lee", "status": "claimed_verified", "statusLastChangedAt": "2025-02-20T09:35:55.480Z", "user": { "_id": "6540fbf9cb7fffd683942b43", "avatarUrl": "/avatars/d4a64fbde511d0949e1c339179586850.svg", "fullname": "DongGeon Lee", "isPro": false, "type": "user", "user": "oneonlee" } }, { "_id": "67b69cf4573aa8417aec103d", "hidden": false, "name": "Hwanjo Yu", "status": null, "statusLastChangedAt": null, "user": null } ]
2025-02-19T10:59:05
REFIND: Retrieval-Augmented Factuality Hallucination Detection in Large Language Models
Hallucinations in large language model (LLM) outputs severely limit their reliability in knowledge-intensive tasks such as question answering. To address this challenge, we introduce REFIND (Retrieval-augmented Factuality hallucINation Detection), a novel framework that detects hallucinated spans within LLM outputs by directly leveraging retrieved documents. As part of REFIND, we propose the Context Sensitivity Ratio (CSR), a novel metric that quantifies the sensitivity of LLM outputs to retrieved evidence. This innovative approach enables REFIND to efficiently and accurately detect hallucinations, setting it apart from existing methods. In the evaluation, REFIND demonstrated robustness across nine languages, including low-resource settings, and significantly outperformed baseline models, achieving superior IoU scores in identifying hallucinated spans. This work highlights the effectiveness of quantifying context sensitivity for hallucination detection, thereby paving the way for more reliable and trustworthy LLM applications across diverse languages.
4
67b69cf7573aa8417aec10bf
null
null
2025-02-20T06:45:40.507000
Train Small, Infer Large: Memory-Efficient LoRA Training for Large Language Models
https://cdn-thumbnails.h…s/2502.13533.png
2
{ "_id": "63fcb42c987f631186e554f2", "avatarUrl": "/avatars/5cf87e9fa21c088c0bd8577d651d01f6.svg", "followerCount": null, "fullname": "Jun Zhang", "isHf": false, "isMod": false, "isPro": false, "name": "junzhang98", "type": "user" }
true
null
2502.13533
[ { "_id": "67b68f883cd5860d8597eace", "hidden": false, "name": "Jun Zhang", "status": "claimed_verified", "statusLastChangedAt": "2025-02-20T09:36:13.757Z", "user": { "_id": "63fcb42c987f631186e554f2", "avatarUrl": "/avatars/5cf87e9fa21c088c0bd8577d651d01f6.svg", "fullname": "Jun Zhang", "isPro": false, "type": "user", "user": "junzhang98" } }, { "_id": "67b68f883cd5860d8597eacf", "hidden": false, "name": "Jue Wang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b68f883cd5860d8597ead0", "hidden": false, "name": "Huan Li", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b68f883cd5860d8597ead1", "hidden": false, "name": "Lidan Shou", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b68f883cd5860d8597ead2", "hidden": false, "name": "Ke Chen", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b68f883cd5860d8597ead3", "hidden": false, "name": "Yang You", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b68f883cd5860d8597ead4", "hidden": false, "name": "Guiming Xie", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b68f883cd5860d8597ead5", "hidden": false, "name": "Xuejian Gong", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b68f883cd5860d8597ead6", "hidden": false, "name": "Kunlong Zhou", "status": "admin_assigned", "statusLastChangedAt": "2025-02-20T17:43:26.626Z", "user": { "_id": "63db54249f2687298a14dc4d", "avatarUrl": "/avatars/7fd31c4056be5c304cd79e17e3a6e560.svg", "fullname": "kunlong zhou", "isPro": false, "type": "user", "user": "kerlomz" } } ]
2025-02-19T08:39:15
Train Small, Infer Large: Memory-Efficient LoRA Training for Large Language Models
Large Language Models (LLMs) have significantly advanced natural language processing with exceptional task generalization capabilities. Low-Rank Adaptation (LoRA) offers a cost-effective fine-tuning solution, freezing the original model parameters and training only lightweight, low-rank adapter matrices. However, the memory footprint of LoRA is largely dominated by the original model parameters. To mitigate this, we propose LoRAM, a memory-efficient LoRA training scheme founded on the intuition that many neurons in over-parameterized LLMs have low training utility but are essential for inference. LoRAM presents a unique twist: it trains on a pruned (small) model to obtain pruned low-rank matrices, which are then recovered and utilized with the original (large) model for inference. Additionally, minimal-cost continual pre-training, performed by the model publishers in advance, aligns the knowledge discrepancy between pruned and original models. Our extensive experiments demonstrate the efficacy of LoRAM across various pruning strategies and downstream tasks. For a model with 70 billion parameters, LoRAM enables training on a GPU with only 20G HBM, replacing an A100-80G GPU for LoRA training and 15 GPUs for full fine-tuning. Specifically, QLoRAM, implemented by structured pruning combined with 4-bit quantization for LLaMA-3.1-70B (LLaMA-2-70B), reduces the parameter storage cost that dominates the memory usage in low-rank matrix training by 15.81× (16.95×), while achieving dominant performance gains over both the original LLaMA-3.1-70B (LLaMA-2-70B) and LoRA-trained LLaMA-3.1-8B (LLaMA-2-13B).
9
67b68f8b3cd5860d8597eb97
null
null
2025-02-20T05:38:39.430000
Noise May Contain Transferable Knowledge: Understanding Semi-supervised Heterogeneous Domain Adaptation from an Empirical Perspective
https://cdn-thumbnails.h…s/2502.13573.png
2
{ "_id": "668bb3b14c25c09b01815a55", "avatarUrl": "/avatars/5d46301dd5d7641e3da05b0ad560efee.svg", "followerCount": null, "fullname": "Yuan Yao", "isHf": false, "isMod": false, "isPro": false, "name": "yyyaoyuan", "type": "user" }
true
null
2502.13573
[ { "_id": "67b70459ea22340afaaf416f", "hidden": false, "name": "Yuan Yao", "status": "extracted_pending", "statusLastChangedAt": "2025-02-20T10:30:51.477Z", "user": { "_id": "668bb3b14c25c09b01815a55", "avatarUrl": "/avatars/5d46301dd5d7641e3da05b0ad560efee.svg", "fullname": "Yuan Yao", "isPro": false, "type": "user", "user": "yyyaoyuan" } }, { "_id": "67b70459ea22340afaaf4170", "hidden": false, "name": "Xiaopu Zhang", "status": "admin_assigned", "statusLastChangedAt": "2025-02-20T17:11:57.564Z", "user": { "_id": "67a2d9acd62b7924b4511564", "avatarUrl": "/avatars/adc0c3d0e11acb64ba72c7914f5105db.svg", "fullname": "Xiaopu Zhang", "isPro": true, "type": "user", "user": "xiaopz2" } }, { "_id": "67b70459ea22340afaaf4171", "hidden": false, "name": "Yu Zhang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b70459ea22340afaaf4172", "hidden": false, "name": "Jian Jin", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b70459ea22340afaaf4173", "hidden": false, "name": "Qiang Yang", "status": null, "statusLastChangedAt": null, "user": null } ]
2025-02-19T09:27:03
Noise May Contain Transferable Knowledge: Understanding Semi-supervised Heterogeneous Domain Adaptation from an Empirical Perspective
Semi-supervised heterogeneous domain adaptation (SHDA) addresses learning across domains with distinct feature representations and distributions, where source samples are labeled while most target samples are unlabeled, with only a small fraction labeled. Moreover, there is no one-to-one correspondence between source and target samples. Although various SHDA methods have been developed to tackle this problem, the nature of the knowledge transferred across heterogeneous domains remains unclear. This paper delves into this question from an empirical perspective. We conduct extensive experiments on about 330 SHDA tasks, employing two supervised learning methods and seven representative SHDA methods. Surprisingly, our observations indicate that both the category and feature information of source samples do not significantly impact the performance of the target domain. Additionally, noise drawn from simple distributions, when used as source samples, may contain transferable knowledge. Based on this insight, we perform a series of experiments to uncover the underlying principles of transferable knowledge in SHDA. Specifically, we design a unified Knowledge Transfer Framework (KTF) for SHDA. Based on the KTF, we find that the transferable knowledge in SHDA primarily stems from the transferability and discriminability of the source domain. Consequently, ensuring those properties in source samples, regardless of their origin (e.g., image, text, noise), can enhance the effectiveness of knowledge transfer in SHDA tasks. The codes and datasets are available at https://github.com/yyyaoyuan/SHDA.
2
67b7045bea22340afaaf41fd
null
null
2025-02-20T05:19:11.890000
GIMMICK -- Globally Inclusive Multimodal Multitask Cultural Knowledge Benchmarking
https://cdn-thumbnails.h…s/2502.13766.png
2
{ "_id": "62dfd54798815401141c47fe", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/62dfd54798815401141c47fe/ct2OA_K0Wwpshy8DCswxy.png", "followerCount": 6, "fullname": "Flo Schneider", "isHf": false, "isMod": false, "isPro": false, "name": "floschne", "type": "user" }
true
null
2502.13766
[ { "_id": "67b6faf5a96bf2b8ff8c235c", "hidden": false, "name": "Florian Schneider", "status": "claimed_verified", "statusLastChangedAt": "2025-02-20T10:49:37.443Z", "user": { "_id": "62dfd54798815401141c47fe", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/62dfd54798815401141c47fe/ct2OA_K0Wwpshy8DCswxy.png", "fullname": "Flo Schneider", "isPro": false, "type": "user", "user": "floschne" } }, { "_id": "67b6faf5a96bf2b8ff8c235d", "hidden": false, "name": "Carolin Holtermann", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b6faf5a96bf2b8ff8c235e", "hidden": false, "name": "Chris Biemann", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b6faf5a96bf2b8ff8c235f", "hidden": false, "name": "Anne Lauscher", "status": "admin_assigned", "statusLastChangedAt": "2025-02-20T17:41:05.177Z", "user": { "_id": "626c02e7703f3b27dd590896", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1654503075060-626c02e7703f3b27dd590896.jpeg", "fullname": "Anne Lauscher", "isPro": false, "type": "user", "user": "anlausch" } } ]
2025-02-19T14:27:40
GIMMICK -- Globally Inclusive Multimodal Multitask Cultural Knowledge Benchmarking
Large Vision-Language Models (LVLMs) have recently gained attention due to their distinctive performance and broad applicability. While it has been previously shown that their efficacy in usage scenarios involving non-Western contexts falls short, existing studies are limited in scope, covering just a narrow range of cultures, focusing exclusively on a small number of cultural aspects, or evaluating a limited selection of models on a single task only. Towards globally inclusive LVLM research, we introduce GIMMICK, an extensive multimodal benchmark designed to assess a broad spectrum of cultural knowledge across 144 countries representing six global macro-regions. GIMMICK comprises six tasks built upon three new datasets that span 728 unique cultural events or facets on which we evaluated 20 LVLMs and 11 LLMs, including five proprietary and 26 open-weight models of all sizes. We systematically examine (1) regional cultural biases, (2) the influence of model size, (3) input modalities, and (4) external cues. Our analyses reveal strong biases toward Western cultures across models and tasks and highlight strong correlations between model size and performance, as well as the effectiveness of multimodal input and external geographic cues. We further find that models have more knowledge of tangible than intangible aspects (e.g., food vs. rituals) and that they excel in recognizing broad cultural origins but struggle with a more nuanced understanding.
3
67b6faf8a96bf2b8ff8c2422
null
null
2025-02-20T04:32:22.011000
InfiR : Crafting Effective Small Language Models and Multimodal Small Language Models in Reasoning
https://cdn-thumbnails.h…s/2502.11573.png
2
{ "_id": "618c1ad1c74578e0a4a4d074", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/618c1ad1c74578e0a4a4d074/8u_AkeHt4d6xtQ8hzaffU.jpeg", "followerCount": 60, "fullname": "Drishti Sharma", "isHf": false, "isMod": false, "isPro": true, "name": "DrishtiSharma", "type": "user" }
false
null
2502.11573
[ { "_id": "67b6f629d9da6999328e38f5", "hidden": false, "name": "Congkai Xie", "status": "admin_assigned", "statusLastChangedAt": "2025-02-20T17:12:49.025Z", "user": { "_id": "6719f1ad725123d503b5ef3c", "avatarUrl": "/avatars/08e1be1f4afa1b6e1501a15cdb786a47.svg", "fullname": "Congkai Xie", "isPro": false, "type": "user", "user": "congkai" } }, { "_id": "67b6f629d9da6999328e38f6", "hidden": false, "name": "Shuo Cai", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b6f629d9da6999328e38f7", "hidden": false, "name": "Wenjun Wang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b6f629d9da6999328e38f8", "hidden": true, "name": "Pengxiang Li", "status": "admin_assigned", "statusLastChangedAt": "2025-02-20T17:13:23.938Z", "user": { "_id": "65d6bd8ebb862d1b367580fe", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/65d6bd8ebb862d1b367580fe/Pf5o-Oxi2IZxJPa-ZgEtV.jpeg", "fullname": "Pengxiang Li", "isPro": false, "type": "user", "user": "PengxiangLi" } }, { "_id": "67b6f629d9da6999328e38f9", "hidden": false, "name": "Zhijie Sang", "status": "admin_assigned", "statusLastChangedAt": "2025-02-20T17:13:31.244Z", "user": { "_id": "6719f2569665701fd6a6a43d", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/no-auth/8gS7Tzr6rIZMl-8sS1x_r.png", "fullname": "Zhijie Sang", "isPro": false, "type": "user", "user": "SANGZHIJIE" } }, { "_id": "67b6f629d9da6999328e38fa", "hidden": false, "name": "Kejing Yang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b6f629d9da6999328e38fb", "hidden": true, "name": "Yiming Zhang", "status": "admin_assigned", "statusLastChangedAt": "2025-02-20T17:13:47.213Z", "user": { "_id": "626309292597e1cb7c76ab6f", "avatarUrl": "/avatars/594b9ccfc5f3aa61613b4bc3158fed4f.svg", "fullname": "Yiming Zhang", "isPro": false, "type": "user", "user": "yimingzhang" } }, { "_id": "67b6f629d9da6999328e38fc", "hidden": false, "name": "Zhen Li", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b6f629d9da6999328e38fd", "hidden": false, "name": "Guanghao Zhu", "status": "admin_assigned", "statusLastChangedAt": "2025-02-20T17:13:55.619Z", "user": { "_id": "670f8a7d4e3a710738fd13cc", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/no-auth/2ITxPXZCAzdY-LhbNA6NI.png", "fullname": "Guanghao", "isPro": false, "type": "user", "user": "GuanghaoZhu" } }, { "_id": "67b6f629d9da6999328e38fe", "hidden": false, "name": "Zeyu Liu", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b6f629d9da6999328e38ff", "hidden": false, "name": "Yang Yu", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b6f629d9da6999328e3900", "hidden": false, "name": "Yuhang Liu", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b6f629d9da6999328e3901", "hidden": false, "name": "Su Lu", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b6f629d9da6999328e3902", "hidden": false, "name": "Baoyi He", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b6f629d9da6999328e3903", "hidden": false, "name": "Qi Zhou", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b6f629d9da6999328e3904", "hidden": false, "name": "Xiaotian Han", "status": "admin_assigned", "statusLastChangedAt": "2025-02-20T17:32:33.779Z", "user": { "_id": "650dde4ce14eeb01d42b37a1", "avatarUrl": 
"https://cdn-avatars.huggingface.co/v1/production/uploads/650dde4ce14eeb01d42b37a1/n5Yv24uofZ2XJjXdYCrKd.png", "fullname": "Xiaotian Han", "isPro": false, "type": "user", "user": "xiaotianhan" } }, { "_id": "67b6f629d9da6999328e3905", "hidden": false, "name": "Jianbo Yuan", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b6f629d9da6999328e3906", "hidden": false, "name": "Shengyu Zhang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b6f629d9da6999328e3907", "hidden": false, "name": "Fei Wu", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b6f629d9da6999328e3908", "hidden": false, "name": "Hongxia Yang", "status": null, "statusLastChangedAt": null, "user": null } ]
2025-02-17T09:07:32
InfiR : Crafting Effective Small Language Models and Multimodal Small Language Models in Reasoning
Large Language Models (LLMs) and Multimodal Large Language Models (MLLMs) have made significant advancements in reasoning capabilities. However, they still face challenges such as high computational demands and privacy concerns. This paper focuses on developing efficient Small Language Models (SLMs) and Multimodal Small Language Models (MSLMs) that retain competitive reasoning abilities. We introduce a novel training pipeline that enhances reasoning capabilities and facilitates deployment on edge devices, achieving state-of-the-art performance while minimizing development costs. InfiR aims to advance AI systems by improving reasoning, reducing adoption barriers, and addressing privacy concerns through smaller model sizes. Resources are available at https://github.com/Reallm-Labs/InfiR.
8
67b6f62ad9da6999328e3955
null
null
2025-02-20T03:56:54.121000
ActionPiece: Contextually Tokenizing Action Sequences for Generative Recommendation
https://cdn-thumbnails.h…s/2502.13581.png
3
{ "_id": "64a62c2f500beb50968e5c9c", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/wfL3ojJmXqyzGUmCblPf4.jpeg", "followerCount": 5, "fullname": "Yupeng Hou", "isHf": false, "isMod": false, "isPro": false, "name": "hyp1231", "type": "user" }
true
null
2502.13581
[ { "_id": "67b6ee04412c9eccae5151f5", "hidden": false, "name": "Yupeng Hou", "status": "claimed_verified", "statusLastChangedAt": "2025-02-20T09:35:14.498Z", "user": { "_id": "64a62c2f500beb50968e5c9c", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/wfL3ojJmXqyzGUmCblPf4.jpeg", "fullname": "Yupeng Hou", "isPro": false, "type": "user", "user": "hyp1231" } }, { "_id": "67b6ee04412c9eccae5151f6", "hidden": false, "name": "Jianmo Ni", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b6ee04412c9eccae5151f7", "hidden": false, "name": "Zhankui He", "status": "admin_assigned", "statusLastChangedAt": "2025-02-20T17:40:37.697Z", "user": { "_id": "64daab70c38427829daf5958", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/cjL27oSvuJf1x0Zq3SrEJ.jpeg", "fullname": "Zhankui He", "isPro": false, "type": "user", "user": "ZhankuiHe" } }, { "_id": "67b6ee04412c9eccae5151f8", "hidden": false, "name": "Noveen Sachdeva", "status": "admin_assigned", "statusLastChangedAt": "2025-02-20T17:40:31.037Z", "user": { "_id": "652e529acb01191873067b02", "avatarUrl": "/avatars/a340d5faca6ad1d120e8a904380276e9.svg", "fullname": "Sachdeva", "isPro": false, "type": "user", "user": "Noveen" } }, { "_id": "67b6ee04412c9eccae5151f9", "hidden": false, "name": "Wang-Cheng Kang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b6ee04412c9eccae5151fa", "hidden": false, "name": "Ed H. Chi", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b6ee04412c9eccae5151fb", "hidden": false, "name": "Julian McAuley", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b6ee04412c9eccae5151fc", "hidden": false, "name": "Derek Zhiyuan Cheng", "status": "admin_assigned", "statusLastChangedAt": "2025-02-20T17:40:05.814Z", "user": { "_id": "624cedaa43ce7877c5af2c4e", "avatarUrl": "/avatars/86f13f8e082e5c0b8ea671c374e4d675.svg", "fullname": "Zhiyuan Cheng", "isPro": false, "type": "user", "user": "willcee" } } ]
2025-02-19T09:45:29
ActionPiece: Contextually Tokenizing Action Sequences for Generative Recommendation
Generative recommendation (GR) is an emerging paradigm where user actions are tokenized into discrete token patterns and autoregressively generated as predictions. However, existing GR models tokenize each action independently, assigning the same fixed tokens to identical actions across all sequences without considering contextual relationships. This lack of context-awareness can lead to suboptimal performance, as the same action may hold different meanings depending on its surrounding context. To address this issue, we propose ActionPiece to explicitly incorporate context when tokenizing action sequences. In ActionPiece, each action is represented as a set of item features, which serve as the initial tokens. Given the action sequence corpora, we construct the vocabulary by merging feature patterns as new tokens, based on their co-occurrence frequency both within individual sets and across adjacent sets. Considering the unordered nature of feature sets, we further introduce set permutation regularization, which produces multiple segmentations of action sequences with the same semantics. Experiments on public datasets demonstrate that ActionPiece consistently outperforms existing action tokenization methods, improving NDCG@10 by 6.00% to 12.82%.
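A minimal sketch of the BPE-style vocabulary construction described in the abstract above: count feature-token co-occurrences within each action's feature set and across adjacent sets, then merge the most frequent pair into a new token. For brevity this toy version only rewrites merges inside a single set, while the method also handles merges across adjacent sets; the corpus, feature names, and helper functions are made up for illustration.

from collections import Counter
from itertools import combinations

# Each action is an unordered set of item-feature tokens (illustrative data).
corpus = [
    [{"cat:shoes", "brand:acme"}, {"cat:shoes", "color:red"}, {"cat:socks", "brand:acme"}],
    [{"cat:shoes", "color:red"}, {"cat:shoes", "brand:acme"}],
]

def count_pairs(corpus):
    """Count token co-occurrences within each feature set and across adjacent sets."""
    counts = Counter()
    for seq in corpus:
        for s in seq:                                   # within a single action's set
            for a, b in combinations(sorted(s), 2):
                counts[(a, b)] += 1
        for s, t in zip(seq, seq[1:]):                  # across adjacent actions
            for a in s:
                for b in t:
                    counts[tuple(sorted((a, b)))] += 1
    return counts

def merge(corpus, pair, new_token):
    """Replace co-occurring pair members inside a set with the merged token."""
    a, b = pair
    return [
        [(s - {a, b}) | {new_token} if a in s and b in s else set(s) for s in seq]
        for seq in corpus
    ]

counts = count_pairs(corpus)
best = max(counts, key=counts.get)                      # most frequent feature pair
corpus = merge(corpus, best, "+".join(best))
print(best, corpus[0])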
5
67b6ee04412c9eccae515223
null
null
2025-02-20T02:40:09.567000
MoM: Linear Sequence Modeling with Mixture-of-Memories
https://cdn-thumbnails.h…s/2502.13685.png
2
{ "_id": "6246bb33da617c00b48e4d92", "avatarUrl": "/avatars/0304a9f6eb7f5dee4d933d03222f94e9.svg", "followerCount": 3, "fullname": "Weigao Sun", "isHf": false, "isMod": false, "isPro": false, "name": "weigao266", "type": "user" }
true
null
2502.13685
[ { "_id": "67b6dc1ba7567156c6547880", "hidden": false, "name": "Jusen Du", "status": "admin_assigned", "statusLastChangedAt": "2025-02-20T16:08:01.601Z", "user": { "_id": "65003e857804f04a163328d9", "avatarUrl": "/avatars/fe32150aabfde8d283b38ccebcf6982e.svg", "fullname": "Jusen Du", "isPro": false, "type": "user", "user": "JusenK" } }, { "_id": "67b6dc1ba7567156c6547881", "hidden": false, "name": "Weigao Sun", "status": "extracted_confirmed", "statusLastChangedAt": "2025-03-04T08:10:49.466Z", "user": { "_id": "6246bb33da617c00b48e4d92", "avatarUrl": "/avatars/0304a9f6eb7f5dee4d933d03222f94e9.svg", "fullname": "Weigao Sun", "isPro": false, "type": "user", "user": "weigao266" } }, { "_id": "67b6dc1ba7567156c6547882", "hidden": false, "name": "Disen Lan", "status": "claimed_verified", "statusLastChangedAt": "2025-02-20T15:52:54.472Z", "user": { "_id": "66ea643899af9ac3463639b1", "avatarUrl": "/avatars/252d470e761a57834dee3dbc60dfefed.svg", "fullname": "Disen Lan", "isPro": false, "type": "user", "user": "landisen" } }, { "_id": "67b6dc1ba7567156c6547883", "hidden": false, "name": "Jiaxi Hu", "status": "admin_assigned", "statusLastChangedAt": "2025-02-20T16:08:08.108Z", "user": { "_id": "665dc35752ff9daa9ba5a4ed", "avatarUrl": "/avatars/df8b01879d97e599b610fa51414d3a18.svg", "fullname": "Hu Jiaxi", "isPro": false, "type": "user", "user": "Jiaxihu2" } }, { "_id": "67b6dc1ba7567156c6547884", "hidden": false, "name": "Yu Cheng", "status": null, "statusLastChangedAt": null, "user": null } ]
2025-02-19T12:53:55
MoM: Linear Sequence Modeling with Mixture-of-Memories
Linear sequence modeling methods, such as linear attention, state space modeling, and linear RNNs, offer significant efficiency improvements by reducing the complexity of training and inference. However, these methods typically compress the entire input sequence into a single fixed-size memory state, which leads to suboptimal performance on recall-intensive downstream tasks. Drawing inspiration from neuroscience, particularly the brain's ability to maintain robust long-term memory while mitigating "memory interference", we introduce a novel architecture called Mixture-of-Memories (MoM). MoM utilizes multiple independent memory states, with a router network directing input tokens to specific memory states. This approach greatly enhances the overall memory capacity while minimizing memory interference. As a result, MoM performs exceptionally well on recall-intensive tasks, surpassing existing linear sequence modeling techniques. Despite incorporating multiple memory states, the computation of each memory state remains linear in complexity, allowing MoM to retain the linear-complexity advantage during training while maintaining constant complexity during inference. Our experimental results show that MoM significantly outperforms current linear sequence models on downstream language tasks, particularly recall-intensive tasks, and even achieves performance comparable to Transformer models. The code is released at https://github.com/OpenSparseLLMs/MoM and is also released as a part of https://github.com/OpenSparseLLMs/Linear-MoE.
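A toy step of the mixture-of-memories idea sketched above, assuming top-1 routing and a plain linear-attention-style outer-product memory update; the routing rule, update rule, and dimensions are simplifications for illustration, not the paper's exact architecture.

import torch

# A router picks one of M independent linear-attention memory states per token;
# each state is updated with an outer-product write and read with the query.
d, M, T = 4, 3, 6
memories = [torch.zeros(d, d) for _ in range(M)]
router = torch.nn.Linear(d, M, bias=False)
Wq, Wk, Wv = (torch.nn.Linear(d, d, bias=False) for _ in range(3))

x = torch.randn(T, d)
outputs = []
for t in range(T):
    h = x[t]
    m = int(router(h).argmax())                       # top-1 routing to a memory state
    q, k, v = Wq(h), Wk(h), Wv(h)
    memories[m] = memories[m] + torch.outer(k, v)     # linear-complexity state update
    outputs.append(q @ memories[m])                   # read only the routed memory
out = torch.stack(outputs)
print(out.shape)                                      # torch.Size([6, 4])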
33
67b6dc1ca7567156c65478b8
null
https://github.com/OpenSparseLLMs/MoM
2025-02-20T01:20:46.431000
Presumed Cultural Identity: How Names Shape LLM Responses
https://cdn-thumbnails.h…s/2502.11995.png
2
{ "_id": "60c50f18754747f54fa37114", "avatarUrl": "/avatars/648ae58b81806dbd93a68546666047e3.svg", "followerCount": 1, "fullname": "Siddhesh", "isHf": false, "isMod": false, "isPro": false, "name": "sidicity", "type": "user" }
false
null
2502.11995
[ { "_id": "67b65bbe0d878eff1a6b111d", "hidden": false, "name": "Siddhesh Pawar", "status": "admin_assigned", "statusLastChangedAt": "2025-02-20T16:11:39.727Z", "user": { "_id": "661e2ac200798c2e33cc49a5", "avatarUrl": "/avatars/8e5e1672b36f86bb4ad7a7e22e8d4f4d.svg", "fullname": "Siddhesh Pawar", "isPro": false, "type": "user", "user": "Siddheshmp06" } }, { "_id": "67b65bbe0d878eff1a6b111e", "hidden": false, "name": "Arnav Arora", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b65bbe0d878eff1a6b111f", "hidden": false, "name": "Lucie-Aimée Kaffee", "status": "claimed_verified", "statusLastChangedAt": "2025-02-20T15:53:09.702Z", "user": { "_id": "6531310497d7f1b4a083de7b", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/ux7NRFAbgnlIVNh-Cbv9s.png", "fullname": "Lucie-Aimée Kaffee", "isPro": false, "type": "user", "user": "frimelle" } }, { "_id": "67b65bbe0d878eff1a6b1120", "hidden": false, "name": "Isabelle Augenstein", "status": "claimed_verified", "statusLastChangedAt": "2025-02-20T09:36:32.278Z", "user": { "_id": "608918b7df398c3b285ce960", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1621507769190-608918b7df398c3b285ce960.jpeg", "fullname": "Isabelle Augenstein", "isPro": false, "type": "user", "user": "IAugenstein" } } ]
2025-02-17T16:35:15
Presumed Cultural Identity: How Names Shape LLM Responses
Names are deeply tied to human identity. They can serve as markers of individuality, cultural heritage, and personal history. However, using names as a core indicator of identity can lead to over-simplification of complex identities. When interacting with LLMs, user names are an important point of information for personalisation. Names can enter chatbot conversations through direct user input (requested by chatbots), as part of task contexts such as CV reviews, or as built-in memory features that store user information for personalisation. We study biases associated with names by measuring cultural presumptions in the responses generated by LLMs when presented with common suggestion-seeking queries, which might involve making assumptions about the user. Our analyses demonstrate strong assumptions about cultural identity associated with names present in LLM generations across multiple cultures. Our work has implications for designing more nuanced personalisation systems that avoid reinforcing stereotypes while maintaining meaningful customisation.
10
67b65bbf0d878eff1a6b1174
null
null
2025-02-20T01:07:44.785000
SongGen: A Single Stage Auto-regressive Transformer for Text-to-Song Generation
https://cdn-thumbnails.h…s/2502.13128.png
2
{ "_id": "64b4eec4faa3181a5eab9c46", "avatarUrl": "/avatars/bcc9bf5cbf67546ad2b4c9ec8b96ac96.svg", "followerCount": 16, "fullname": "Jiaqi Wang", "isHf": false, "isMod": false, "isPro": true, "name": "myownskyW7", "type": "user" }
true
null
2502.13128
[ { "_id": "67b6c696e9b901edeaf320d5", "hidden": false, "name": "Zihan Liu", "status": "admin_assigned", "statusLastChangedAt": "2025-02-20T16:07:49.211Z", "user": { "_id": "65f33b1c9f7970ccc0234cbf", "avatarUrl": "/avatars/99fbab303912e3674663251c04279907.svg", "fullname": "Zihan Liu", "isPro": false, "type": "user", "user": "zihanliu" } }, { "_id": "67b6c696e9b901edeaf320d6", "hidden": false, "name": "Shuangrui Ding", "status": "admin_assigned", "statusLastChangedAt": "2025-02-20T16:07:35.659Z", "user": { "_id": "65a7c0335e79abfa2ec30c52", "avatarUrl": "/avatars/2f62f83f9c5c4cc9444571f067cd85b7.svg", "fullname": "Shuangrui Ding", "isPro": true, "type": "user", "user": "Mar2Ding" } }, { "_id": "67b6c696e9b901edeaf320d7", "hidden": false, "name": "Zhixiong Zhang", "status": "admin_assigned", "statusLastChangedAt": "2025-02-20T16:07:29.113Z", "user": { "_id": "64b75f59324038715942628b", "avatarUrl": "/avatars/809c533138c2a69acbc3e42a143bbf34.svg", "fullname": "Zhixiong Zhang", "isPro": false, "type": "user", "user": "zzxlmlzcg2" } }, { "_id": "67b6c696e9b901edeaf320d8", "hidden": false, "name": "Xiaoyi Dong", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b6c696e9b901edeaf320d9", "hidden": false, "name": "Pan Zhang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b6c696e9b901edeaf320da", "hidden": false, "name": "Yuhang Zang", "status": "admin_assigned", "statusLastChangedAt": "2025-02-20T16:07:11.149Z", "user": { "_id": "63859cf3b2906edaf83af9f0", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63859cf3b2906edaf83af9f0/iUQm5FAomzqYi6fkqIn9F.jpeg", "fullname": "Yuhang Zang", "isPro": false, "type": "user", "user": "yuhangzang" } }, { "_id": "67b6c696e9b901edeaf320db", "hidden": false, "name": "Yuhang Cao", "status": "admin_assigned", "statusLastChangedAt": "2025-02-20T16:07:04.833Z", "user": { "_id": "65000bef18830fabea469fdd", "avatarUrl": "/avatars/b320c77dfad039d9f9c54127f610d44f.svg", "fullname": "Cao Yuhang", "isPro": false, "type": "user", "user": "yhcao" } }, { "_id": "67b6c696e9b901edeaf320dc", "hidden": false, "name": "Dahua Lin", "status": "admin_assigned", "statusLastChangedAt": "2025-02-20T16:06:53.745Z", "user": { "_id": "636317ed80c1a705a6eff396", "avatarUrl": "/avatars/3db090e101b916d9256d0d3e043db71d.svg", "fullname": "Dahua Lin", "isPro": false, "type": "user", "user": "lindahua" } }, { "_id": "67b6c696e9b901edeaf320dd", "hidden": false, "name": "Jiaqi Wang", "status": "admin_assigned", "statusLastChangedAt": "2025-02-20T16:06:47.581Z", "user": { "_id": "64b4eec4faa3181a5eab9c46", "avatarUrl": "/avatars/bcc9bf5cbf67546ad2b4c9ec8b96ac96.svg", "fullname": "Jiaqi Wang", "isPro": true, "type": "user", "user": "myownskyW7" } } ]
2025-02-18T18:52:21
SongGen: A Single Stage Auto-regressive Transformer for Text-to-Song Generation
Text-to-song generation, the task of creating vocals and accompaniment from textual inputs, poses significant challenges due to domain complexity and data scarcity. Existing approaches often employ multi-stage generation procedures, resulting in cumbersome training and inference pipelines. In this paper, we propose SongGen, a fully open-source, single-stage auto-regressive transformer designed for controllable song generation. The proposed model facilitates fine-grained control over diverse musical attributes, including lyrics and textual descriptions of instrumentation, genre, mood, and timbre, while also offering an optional three-second reference clip for voice cloning. Within a unified auto-regressive framework, SongGen supports two output modes: mixed mode, which generates a mixture of vocals and accompaniment directly, and dual-track mode, which synthesizes them separately for greater flexibility in downstream applications. We explore diverse token pattern strategies for each mode, leading to notable improvements and valuable insights. Furthermore, we design an automated data preprocessing pipeline with effective quality control. To foster community engagement and future research, we will release our model weights, training code, annotated data, and preprocessing pipeline. The generated samples are showcased on our project page at https://liuzh-19.github.io/SongGen/, and the code will be available at https://github.com/LiuZH-19/SongGen.
37
67b6c698e9b901edeaf321a7
null
null
2025-02-19T23:54:57.669000
Why Safeguarded Ships Run Aground? Aligned Large Language Models' Safety Mechanisms Tend to Be Anchored in The Template Region
https://cdn-thumbnails.h…s/2502.13946.png
2
{ "_id": "631326d6289cf15634c52369", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/631326d6289cf15634c52369/lmPWGHLsQ36H556cqcXjT.jpeg", "followerCount": 7, "fullname": "Cooper Leong", "isHf": false, "isMod": false, "isPro": false, "name": "cooperleong00", "type": "user" }
true
null
2502.13946
[ { "_id": "67b6b416b4ad845374143c31", "hidden": false, "name": "Chak Tou Leong", "status": "claimed_verified", "statusLastChangedAt": "2025-02-24T09:25:12.631Z", "user": { "_id": "631326d6289cf15634c52369", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/631326d6289cf15634c52369/lmPWGHLsQ36H556cqcXjT.jpeg", "fullname": "Cooper Leong", "isPro": false, "type": "user", "user": "cooperleong00" } }, { "_id": "67b6b416b4ad845374143c32", "hidden": false, "name": "Qingyu Yin", "status": "admin_assigned", "statusLastChangedAt": "2025-02-20T16:12:30.407Z", "user": { "_id": "6453cb22908e259483c0a061", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6453cb22908e259483c0a061/hMgdwZUsUbgquGalzPGzV.jpeg", "fullname": "Qingyu_Yin", "isPro": false, "type": "user", "user": "MikaStars39" } }, { "_id": "67b6b416b4ad845374143c33", "hidden": false, "name": "Jian Wang", "status": "claimed_verified", "statusLastChangedAt": "2025-02-20T15:53:05.099Z", "user": { "_id": "63a4117984a6a25c65bc2fff", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1675936454785-63a4117984a6a25c65bc2fff.jpeg", "fullname": "Jian Wang", "isPro": false, "type": "user", "user": "jwanglvy" } }, { "_id": "67b6b416b4ad845374143c34", "hidden": false, "name": "Wenjie Li", "status": "admin_assigned", "statusLastChangedAt": "2025-02-20T16:12:36.692Z", "user": { "_id": "66a3710a4ee2a4c936315a5a", "avatarUrl": "/avatars/ef8da8fb1031695d77d34a5d365aa177.svg", "fullname": "Li", "isPro": false, "type": "user", "user": "WenjieLi" } } ]
2025-02-19T18:42:45
Why Safeguarded Ships Run Aground? Aligned Large Language Models' Safety Mechanisms Tend to Be Anchored in The Template Region
The safety alignment of large language models (LLMs) remains vulnerable, as their initial behavior can be easily jailbroken by even relatively simple attacks. Since infilling a fixed template between the input instruction and initial model output is a common practice for existing LLMs, we hypothesize that this template is a key factor behind their vulnerabilities: LLMs' safety-related decision-making overly relies on the aggregated information from the template region, which largely influences these models' safety behavior. We refer to this issue as template-anchored safety alignment. In this paper, we conduct extensive experiments and verify that template-anchored safety alignment is widespread across various aligned LLMs. Our mechanistic analyses demonstrate how it leads to models' susceptibility when encountering inference-time jailbreak attacks. Furthermore, we show that detaching safety mechanisms from the template region is promising in mitigating vulnerabilities to jailbreak attacks. We encourage future research to develop more robust safety alignment techniques that reduce reliance on the template region.
9
67b6b416b4ad845374143c5b
null
null
2025-02-19T23:35:06.194000
Qwen2.5-VL Technical Report
https://cdn-thumbnails.h…s/2502.13923.png
7
{ "_id": "63451cf0a05b51f7ded25505", "avatarUrl": "/avatars/dec4bbee4a82b773fc58dfc2dce9dbeb.svg", "followerCount": 14, "fullname": "shuai bai", "isHf": false, "isMod": false, "isPro": false, "name": "bluelike", "type": "user" }
true
null
2502.13923
[ { "_id": "67b6b0688b56622e70b9e83e", "hidden": false, "name": "Shuai Bai", "status": "admin_assigned", "statusLastChangedAt": "2025-02-20T15:54:00.062Z", "user": { "_id": "63451cf0a05b51f7ded25505", "avatarUrl": "/avatars/dec4bbee4a82b773fc58dfc2dce9dbeb.svg", "fullname": "shuai bai", "isPro": false, "type": "user", "user": "bluelike" } }, { "_id": "67b6b0688b56622e70b9e83f", "hidden": false, "name": "Keqin Chen", "status": "admin_assigned", "statusLastChangedAt": "2025-02-20T15:54:14.529Z", "user": { "_id": "6461d675681b2e19b6acb5a5", "avatarUrl": "/avatars/0d95d65d30f6672ec09dc92155324d7f.svg", "fullname": "Keqin Chen", "isPro": false, "type": "user", "user": "chenkq" } }, { "_id": "67b6b0688b56622e70b9e840", "hidden": false, "name": "Xuejing Liu", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b6b0688b56622e70b9e841", "hidden": false, "name": "Jialin Wang", "status": "admin_assigned", "statusLastChangedAt": "2025-02-20T16:00:48.724Z", "user": { "_id": "6634979161776e1d8d35b16c", "avatarUrl": "/avatars/32a1fac0016445959c2a062c1ab76ab9.svg", "fullname": "jialinwang", "isPro": false, "type": "user", "user": "jialinwangpku" } }, { "_id": "67b6b0688b56622e70b9e842", "hidden": false, "name": "Wenbin Ge", "status": "admin_assigned", "statusLastChangedAt": "2025-02-20T16:00:57.101Z", "user": { "_id": "634d06c6f0a69955f662e641", "avatarUrl": "/avatars/5a0af8af0a21d2a93192f4a3c430fc60.svg", "fullname": "Wenbin Ge", "isPro": false, "type": "user", "user": "gewenbin292" } }, { "_id": "67b6b0688b56622e70b9e843", "hidden": false, "name": "Sibo Song", "status": "admin_assigned", "statusLastChangedAt": "2025-02-20T16:01:03.946Z", "user": { "_id": "62e38a600c2a907c388d2264", "avatarUrl": "/avatars/43250732c7c9b3737067c2f0a8ba4ec5.svg", "fullname": "Sibo Song", "isPro": false, "type": "user", "user": "StefanSong" } }, { "_id": "67b6b0688b56622e70b9e844", "hidden": false, "name": "Kai Dang", "status": "admin_assigned", "statusLastChangedAt": "2025-02-20T16:01:10.991Z", "user": { "_id": "6712930f0fac3235c56edf5b", "avatarUrl": "/avatars/cafe7cb56ce7c3b2572f5f2d0b89357a.svg", "fullname": "kai dang", "isPro": false, "type": "user", "user": "1vk5i" } }, { "_id": "67b6b0688b56622e70b9e845", "hidden": false, "name": "Peng Wang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b6b0688b56622e70b9e846", "hidden": false, "name": "Shijie Wang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b6b0688b56622e70b9e847", "hidden": false, "name": "Jun Tang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b6b0688b56622e70b9e848", "hidden": false, "name": "Humen Zhong", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b6b0688b56622e70b9e849", "hidden": false, "name": "Yuanzhi Zhu", "status": "admin_assigned", "statusLastChangedAt": "2025-02-20T16:01:55.087Z", "user": { "_id": "627d2723401f42c57b6b7c0c", "avatarUrl": "/avatars/6ff754e56aaee63d8572881a6a966171.svg", "fullname": "Yuanzhi Zhu", "isPro": false, "type": "user", "user": "Yuanzhi" } }, { "_id": "67b6b0688b56622e70b9e84a", "hidden": false, "name": "Mingkun Yang", "status": "claimed_verified", "statusLastChangedAt": "2025-02-20T09:35:44.878Z", "user": { "_id": "6417fa211f1f3b0fa811edc0", "avatarUrl": "/avatars/fa9e1ef1472a736c2ceebe12b77d6c89.svg", "fullname": "Mingkun Yang", "isPro": false, "type": "user", "user": "ayumiymk" } }, { "_id": "67b6b0688b56622e70b9e84b", "hidden": false, "name": "Zhaohai Li", "status": null, 
"statusLastChangedAt": null, "user": null }, { "_id": "67b6b0688b56622e70b9e84c", "hidden": false, "name": "Jianqiang Wan", "status": "admin_assigned", "statusLastChangedAt": "2025-02-20T16:02:09.102Z", "user": { "_id": "665ea9f161b34f33e04648ef", "avatarUrl": "/avatars/d1d7b9feada68bcb6df5484c1bb3bba4.svg", "fullname": "Jianqiang Wang ", "isPro": false, "type": "user", "user": "xbetax" } }, { "_id": "67b6b0688b56622e70b9e84d", "hidden": false, "name": "Pengfei Wang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b6b0688b56622e70b9e84e", "hidden": false, "name": "Wei Ding", "status": "admin_assigned", "statusLastChangedAt": "2025-02-20T16:02:48.942Z", "user": { "_id": "676b7679c20c09a8c72e4ade", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/no-auth/bnUqHtJngTLyC6EHD-BP9.png", "fullname": "weiding", "isPro": false, "type": "user", "user": "weiding" } }, { "_id": "67b6b0688b56622e70b9e84f", "hidden": false, "name": "Zheren Fu", "status": "claimed_verified", "statusLastChangedAt": "2025-02-20T10:49:47.484Z", "user": { "_id": "63ee22e75f1300034ddaaf54", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1676550873969-noauth.jpeg", "fullname": "Zheren Fu", "isPro": false, "type": "user", "user": "darkpromise" } }, { "_id": "67b6b0688b56622e70b9e850", "hidden": false, "name": "Yiheng Xu", "status": "admin_assigned", "statusLastChangedAt": "2025-02-20T16:03:12.665Z", "user": { "_id": "601d29ab913ad3afd7b7ddb8", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1620447944896-601d29ab913ad3afd7b7ddb8.jpeg", "fullname": "Yiheng Xu", "isPro": true, "type": "user", "user": "ranpox" } }, { "_id": "67b6b0688b56622e70b9e851", "hidden": false, "name": "Jiabo Ye", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b6b0688b56622e70b9e852", "hidden": false, "name": "Xi Zhang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b6b0688b56622e70b9e853", "hidden": false, "name": "Tianbao Xie", "status": "admin_assigned", "statusLastChangedAt": "2025-02-20T16:03:30.898Z", "user": { "_id": "618767e4238063b4615d042b", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1636263880877-noauth.jpeg", "fullname": "Tianbao Xie", "isPro": true, "type": "user", "user": "tianbaoxiexxx" } }, { "_id": "67b6b0688b56622e70b9e854", "hidden": false, "name": "Zesen Cheng", "status": "admin_assigned", "statusLastChangedAt": "2025-02-20T16:03:38.558Z", "user": { "_id": "65b2529285b6c21448a10d65", "avatarUrl": "/avatars/1b09e2742aecce1bbdc57f0c4504cf38.svg", "fullname": "Zesen Cheng", "isPro": false, "type": "user", "user": "ClownRat" } }, { "_id": "67b6b0688b56622e70b9e855", "hidden": false, "name": "Hang Zhang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b6b0688b56622e70b9e856", "hidden": false, "name": "Zhibo Yang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b6b0688b56622e70b9e857", "hidden": false, "name": "Haiyang Xu", "status": "claimed_verified", "statusLastChangedAt": "2025-02-20T09:35:42.372Z", "user": { "_id": "645b10e80c73ea27d13f7aca", "avatarUrl": "/avatars/95e565306472a15067440b5b43e07a6f.svg", "fullname": "xuhaiyang", "isPro": false, "type": "user", "user": "xhyandwyy" } }, { "_id": "67b6b0688b56622e70b9e858", "hidden": false, "name": "Junyang Lin", "status": "admin_assigned", "statusLastChangedAt": "2025-02-20T16:03:53.317Z", "user": { "_id": "620760a26e3b7210c2ff1943", "avatarUrl": 
"https://cdn-avatars.huggingface.co/v1/production/uploads/620760a26e3b7210c2ff1943/VC-rKqimF6yxGESNVlPoR.jpeg", "fullname": "Junyang Lin", "isPro": false, "type": "user", "user": "JustinLin610" } } ]
2025-02-19T18:00:14
Qwen2.5-VL Technical Report
We introduce Qwen2.5-VL, the latest flagship model of the Qwen vision-language series, which demonstrates significant advancements in both foundational capabilities and innovative functionalities. Qwen2.5-VL achieves a major leap forward in understanding and interacting with the world through enhanced visual recognition, precise object localization, robust document parsing, and long-video comprehension. A standout feature of Qwen2.5-VL is its ability to accurately localize objects using bounding boxes or points. It provides robust structured data extraction from invoices, forms, and tables, as well as detailed analysis of charts, diagrams, and layouts. To handle complex inputs, Qwen2.5-VL introduces dynamic resolution processing and absolute time encoding, enabling it to process images of varying sizes and videos of extended durations (up to hours) with second-level event localization. This allows the model to natively perceive spatial scales and temporal dynamics without relying on traditional normalization techniques. By training a native dynamic-resolution Vision Transformer (ViT) from scratch and incorporating Window Attention, we reduce computational overhead while maintaining native resolution. As a result, Qwen2.5-VL excels not only in static image and document understanding but also as an interactive visual agent capable of reasoning, tool usage, and task execution in real-world scenarios such as operating computers and mobile devices. Qwen2.5-VL is available in three sizes, addressing diverse use cases from edge AI to high-performance computing. The flagship Qwen2.5-VL-72B model matches state-of-the-art models like GPT-4o and Claude 3.5 Sonnet, particularly excelling in document and diagram understanding. Additionally, Qwen2.5-VL maintains robust linguistic performance, preserving the core language competencies of the Qwen2.5 LLM.
154
67b6b0688b56622e70b9e875
null
null
2025-02-19T23:34:43.424000
Is That Your Final Answer? Test-Time Scaling Improves Selective Question Answering
https://cdn-thumbnails.h…s/2502.13962.png
4
{ "_id": "60f1abe7544c2adfd699860c", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg", "followerCount": 6280, "fullname": "AK", "isHf": true, "isMod": false, "isPro": false, "name": "akhaliq", "type": "user" }
true
null
2502.13962
[ { "_id": "67b691751f861500916ecd5d", "hidden": false, "name": "William Jurayj", "status": "claimed_verified", "statusLastChangedAt": "2025-02-20T09:36:09.674Z", "user": { "_id": "6372bc95c4267fd7cd77f4d0", "avatarUrl": "/avatars/17a24af68f45487e601687d777b352b6.svg", "fullname": "William Jurayj", "isPro": false, "type": "user", "user": "wjurayj" } }, { "_id": "67b691751f861500916ecd5e", "hidden": false, "name": "Jeffrey Cheng", "status": "claimed_verified", "statusLastChangedAt": "2025-02-20T15:53:07.241Z", "user": { "_id": "65f28eebf5cf26fe0632ce67", "avatarUrl": "/avatars/cd970cfe4215374c82d47df57ac30795.svg", "fullname": "Jeffrey Cheng", "isPro": false, "type": "user", "user": "nexync" } }, { "_id": "67b691751f861500916ecd5f", "hidden": false, "name": "Benjamin Van Durme", "status": null, "statusLastChangedAt": null, "user": null } ]
2025-02-19T18:58:31
Is That Your Final Answer? Test-Time Scaling Improves Selective Question Answering
Scaling the test-time compute of large language models has demonstrated impressive performance on reasoning benchmarks. However, existing evaluations of test-time scaling make the strong assumption that a reasoning system should always give an answer to any question provided. This overlooks concerns about whether a model is confident in its answer, and whether it is appropriate to always provide a response. To address these concerns, we extract confidence scores during reasoning for thresholding model responses. We find that increasing compute budget at inference time not only helps models answer more questions correctly, but also increases confidence in correct responses. We then extend the current paradigm of zero-risk responses during evaluation by considering settings with non-zero levels of response risk, and suggest a recipe for reporting evaluations under these settings.
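A small illustration of confidence-thresholded (selective) answering with a non-zero response risk, in the spirit of the setting described above. The confidence values, the scoring rule, and the threshold sweep are hypothetical stand-ins, not the paper's evaluation recipe.

# The model only answers when its extracted confidence clears a threshold;
# otherwise it abstains. A wrong answer costs `risk` points, an abstention costs nothing.
preds = [  # (is_correct, confidence) pairs, hypothetical model outputs
    (True, 0.92), (False, 0.40), (True, 0.75), (False, 0.55), (True, 0.60),
]

def selective_score(preds, threshold, risk=1.0):
    answered = [(c, conf) for c, conf in preds if conf >= threshold]
    coverage = len(answered) / len(preds)
    score = sum(1.0 if c else -risk for c, _ in answered) / len(preds)
    return coverage, score

for th in (0.0, 0.5, 0.8):
    cov, sc = selective_score(preds, th)
    print(f"threshold={th:.1f} coverage={cov:.2f} score={sc:.2f}")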
28
67b691761f861500916ecd8e
null
null
2025-02-19T23:31:36.410000
Thinking Preference Optimization
https://cdn-thumbnails.h…s/2502.13173.png
4
{ "_id": "60f1abe7544c2adfd699860c", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg", "followerCount": 6280, "fullname": "AK", "isHf": true, "isMod": false, "isPro": false, "name": "akhaliq", "type": "user" }
false
null
2502.13173
[ { "_id": "67b6b014f7e569081326494f", "hidden": false, "name": "Wang Yang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b6b014f7e5690813264950", "hidden": false, "name": "Hongye Jin", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b6b014f7e5690813264951", "hidden": false, "name": "Jingfeng Yang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b6b014f7e5690813264952", "hidden": false, "name": "Vipin Chaudhary", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b6b014f7e5690813264953", "hidden": true, "name": "Xiaotian Han", "status": "admin_assigned", "statusLastChangedAt": "2025-02-20T16:12:59.725Z", "user": { "_id": "650dde4ce14eeb01d42b37a1", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/650dde4ce14eeb01d42b37a1/n5Yv24uofZ2XJjXdYCrKd.png", "fullname": "Xiaotian Han", "isPro": false, "type": "user", "user": "xiaotianhan" } } ]
2025-02-17T19:56:21
Thinking Preference Optimization
Supervised Fine-Tuning (SFT) has been a go-to and effective method for enhancing long chain-of-thought (CoT) reasoning in relatively small LLMs by fine-tuning them with long CoT responses from larger LLMs. To continually improve reasoning abilities, we can either collect new high-quality long CoT reasoning SFT data or repeatedly train on existing SFT datasets. However, acquiring new long CoT SFT data is costly and limited, while repeated training often results in a performance plateau or decline. To further boost the performance with the SFT data, we propose Thinking Preference Optimization (ThinkPO), a simple yet effective post-SFT method that enhances long CoT reasoning without requiring new long CoT responses. Instead, ThinkPO utilizes readily available or easily obtainable short CoT reasoning responses as rejected answers and long CoT responses as chosen answers for the same question. It then applies direct preference optimization to encourage the model to favor longer reasoning outputs. Experiments show that ThinkPO further improves the reasoning performance of SFT-ed models, e.g. it increases math reasoning accuracy of SFT-ed models by 8.6% and output length by 25.9%. Notably, ThinkPO is capable of continually boosting the performance of the publicly distilled SFT model, e.g., increasing the official DeepSeek-R1-Distill-Qwen-7B's performance on MATH500 from 87.4% to 91.2%.
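A minimal sketch of how ThinkPO-style preference pairs could be assembled, assuming the long chain-of-thought answer is the chosen response and the short one is rejected; the example question and the prompt/chosen/rejected field names follow common DPO-training conventions rather than the paper's exact data format.

# Build preference pairs for direct preference optimization: same question,
# long chain-of-thought answer preferred over the readily available short one.
questions = {
    "What is 12 * 13?": {
        "long_cot": "Break it down: 12*13 = 12*10 + 12*3 = 120 + 36 = 156.",
        "short_cot": "156.",
    },
}

def build_thinkpo_pairs(questions):
    pairs = []
    for prompt, resp in questions.items():
        pairs.append({
            "prompt": prompt,
            "chosen": resp["long_cot"],     # longer reasoning is the chosen answer
            "rejected": resp["short_cot"],  # short reasoning serves as the rejected answer
        })
    return pairs

pairs = build_thinkpo_pairs(questions)
print(pairs[0]["chosen"])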
17
67b6b015f7e56908132649a0
null
null
2025-02-19T23:18:32.647000
NExT-Mol: 3D Diffusion Meets 1D Language Modeling for 3D Molecule Generation
https://cdn-thumbnails.h…s/2502.12638.png
2
{ "_id": "6310a3cd531cc21f9e06de6a", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6310a3cd531cc21f9e06de6a/aTGMx3O41lUARK9s3dAik.jpeg", "followerCount": 3, "fullname": "Zhiyuan Liu", "isHf": false, "isMod": false, "isPro": false, "name": "acharkq", "type": "user" }
true
null
2502.12638
[ { "_id": "67b6acdb3a3df2f965e7af0b", "hidden": false, "name": "Zhiyuan Liu", "status": "admin_assigned", "statusLastChangedAt": "2025-02-20T17:43:04.070Z", "user": { "_id": "6310a3cd531cc21f9e06de6a", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6310a3cd531cc21f9e06de6a/aTGMx3O41lUARK9s3dAik.jpeg", "fullname": "Zhiyuan Liu", "isPro": false, "type": "user", "user": "acharkq" } }, { "_id": "67b6acdb3a3df2f965e7af0c", "hidden": false, "name": "Yanchen Luo", "status": "claimed_verified", "statusLastChangedAt": "2025-02-21T09:59:48.054Z", "user": { "_id": "64f04a28f3cd962c21726459", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/MOTc7SWbzc4jdJbMcWMcK.jpeg", "fullname": "LuoYanchen", "isPro": false, "type": "user", "user": "lyc0930" } }, { "_id": "67b6acdb3a3df2f965e7af0d", "hidden": false, "name": "Han Huang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b6acdb3a3df2f965e7af0e", "hidden": false, "name": "Enzhi Zhang", "status": "admin_assigned", "statusLastChangedAt": "2025-02-20T17:42:41.957Z", "user": { "_id": "6522bdace3419abdcf8177f6", "avatarUrl": "/avatars/872b22da6489959134b5449bb7ed9636.svg", "fullname": "EnzhiZhang", "isPro": false, "type": "user", "user": "EnzhiZhang" } }, { "_id": "67b6acdb3a3df2f965e7af0f", "hidden": false, "name": "Sihang Li", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b6acdb3a3df2f965e7af10", "hidden": false, "name": "Junfeng Fang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b6acdb3a3df2f965e7af11", "hidden": false, "name": "Yaorui Shi", "status": "admin_assigned", "statusLastChangedAt": "2025-02-20T17:42:26.270Z", "user": { "_id": "63edd2d1f765928ceeb49057", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1676530369930-noauth.png", "fullname": "Yaorui SHI", "isPro": false, "type": "user", "user": "yrshi" } }, { "_id": "67b6acdb3a3df2f965e7af12", "hidden": false, "name": "Xiang Wang", "status": "extracted_pending", "statusLastChangedAt": "2025-02-20T04:17:33.860Z", "user": { "_id": "65fca775fa59bdf4737b1a84", "avatarUrl": "/avatars/a161b510bde8f57e7686cbb0b4aa6a52.svg", "fullname": "Xiang Wang", "isPro": false, "type": "user", "user": "xiangwang1223" } }, { "_id": "67b6acdb3a3df2f965e7af13", "hidden": false, "name": "Kenji Kawaguchi", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b6acdb3a3df2f965e7af14", "hidden": false, "name": "Tat-Seng Chua", "status": "admin_assigned", "statusLastChangedAt": "2025-02-20T17:42:12.905Z", "user": { "_id": "6570ae84c4993b8fb96f41a8", "avatarUrl": "/avatars/21f7d79d46ac4df0ecff8eca7678b33f.svg", "fullname": "Tat-Seng Chua", "isPro": false, "type": "user", "user": "chuats" } } ]
2025-02-18T08:40:13
NExT-Mol: 3D Diffusion Meets 1D Language Modeling for 3D Molecule Generation
3D molecule generation is crucial for drug discovery and material design. While prior efforts focus on 3D diffusion models for their benefits in modeling continuous 3D conformers, they overlook the advantages of 1D SELFIES-based Language Models (LMs), which can generate 100% valid molecules and leverage the billion-scale 1D molecule datasets. To combine these advantages for 3D molecule generation, we propose a foundation model -- NExT-Mol: 3D Diffusion Meets 1D Language Modeling for 3D Molecule Generation. NExT-Mol uses an extensively pretrained molecule LM for 1D molecule generation, and subsequently predicts the generated molecule's 3D conformers with a 3D diffusion model. We enhance NExT-Mol's performance by scaling up the LM's model size, refining the diffusion neural architecture, and applying 1D to 3D transfer learning. Notably, our 1D molecule LM significantly outperforms baselines in distributional similarity while ensuring validity, and our 3D diffusion model achieves leading performances in conformer prediction. Given these improvements in 1D and 3D modeling, NExT-Mol achieves a 26% relative improvement in 3D FCD for de novo 3D generation on GEOM-DRUGS, and a 13% average relative gain for conditional 3D generation on QM9-2014. Our codes and pretrained checkpoints are available at https://github.com/acharkq/NExT-Mol.
8
67b6acdd3a3df2f965e7af85
null
null
2025-02-19T23:07:01.367000
AdaptiveStep: Automatically Dividing Reasoning Step through Model Confidence
https://cdn-thumbnails.h…s/2502.13943.png
2
{ "_id": "6529f79e802e3d1a4f8ec662", "avatarUrl": "/avatars/d05320c370a6497d8792ef5acb563dd5.svg", "followerCount": 2, "fullname": "Yuliang Liu", "isHf": false, "isMod": false, "isPro": false, "name": "yuliang03181", "type": "user" }
true
null
2502.13943
[ { "_id": "67b6a9a7c721bee91cac2888", "hidden": false, "name": "Yuliang Liu", "status": "admin_assigned", "statusLastChangedAt": "2025-02-20T17:11:40.282Z", "user": { "_id": "6529f79e802e3d1a4f8ec662", "avatarUrl": "/avatars/d05320c370a6497d8792ef5acb563dd5.svg", "fullname": "Yuliang Liu", "isPro": false, "type": "user", "user": "yuliang03181" } }, { "_id": "67b6a9a7c721bee91cac2889", "hidden": false, "name": "Junjie Lu", "status": "claimed_verified", "statusLastChangedAt": "2025-02-24T13:05:14.854Z", "user": { "_id": "660e1bb6e089e44d2906a7e8", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/660e1bb6e089e44d2906a7e8/MJU8nHQ0xgiPzO8J6L8EC.jpeg", "fullname": "Junjie Lu", "isPro": false, "type": "user", "user": "Lux0926" } }, { "_id": "67b6a9a7c721bee91cac288a", "hidden": false, "name": "Zhaoling Chen", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b6a9a7c721bee91cac288b", "hidden": false, "name": "Chaofeng Qu", "status": "admin_assigned", "statusLastChangedAt": "2025-02-20T17:11:10.302Z", "user": { "_id": "65d0ae74c4d2b2e40281256e", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/65d0ae74c4d2b2e40281256e/nNTJkAkHsu9Fru4CpRphC.jpeg", "fullname": "Chaofeng QU", "isPro": false, "type": "user", "user": "kylebrovloski" } }, { "_id": "67b6a9a7c721bee91cac288c", "hidden": false, "name": "Jason Klein Liu", "status": "admin_assigned", "statusLastChangedAt": "2025-02-20T17:11:16.565Z", "user": { "_id": "6731c04d5f55903a1d8c307c", "avatarUrl": "/avatars/704b1d628c55e3141194b08736f21267.svg", "fullname": "Jason Klein Liu", "isPro": false, "type": "user", "user": "jasonkleinlove" } }, { "_id": "67b6a9a7c721bee91cac288d", "hidden": false, "name": "Chonghan Liu", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b6a9a7c721bee91cac288e", "hidden": false, "name": "Zefan Cai", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b6a9a7c721bee91cac288f", "hidden": false, "name": "Yunhui Xia", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b6a9a7c721bee91cac2890", "hidden": false, "name": "Li Zhao", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b6a9a7c721bee91cac2891", "hidden": false, "name": "Jiang Bian", "status": "admin_assigned", "statusLastChangedAt": "2025-02-20T17:00:08.549Z", "user": { "_id": "63f253f8f4e30ffd2bd308fb", "avatarUrl": "/avatars/303f4c7ee588f638acf78a7966786e1e.svg", "fullname": "Jiang Bian", "isPro": false, "type": "user", "user": "bianjiang" } }, { "_id": "67b6a9a7c721bee91cac2892", "hidden": false, "name": "Chuheng Zhang", "status": "admin_assigned", "statusLastChangedAt": "2025-02-20T16:59:49.312Z", "user": { "_id": "64c32727c370f29a10334d35", "avatarUrl": "/avatars/a3284bdd2e51f437433b89a96a904448.svg", "fullname": "ZhangChuheng", "isPro": false, "type": "user", "user": "zhangchuheng123" } }, { "_id": "67b6a9a7c721bee91cac2893", "hidden": false, "name": "Wei Shen", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b6a9a7c721bee91cac2894", "hidden": false, "name": "Zhouhan Lin", "status": null, "statusLastChangedAt": null, "user": null } ]
2025-02-19T18:35:55
AdaptiveStep: Automatically Dividing Reasoning Step through Model Confidence
Current approaches for training Process Reward Models (PRMs) often involve breaking down responses into multiple reasoning steps using rule-based techniques, such as using predefined placeholder tokens or setting the reasoning step's length to a fixed size. These approaches overlook the fact that specific words do not typically mark true decision points in a text. To address this, we propose AdaptiveStep, a method that divides reasoning steps based on the model's confidence in predicting the next word. This division method provides more decision-making information at each step, enhancing downstream tasks such as reward model learning. Moreover, our method does not require manual annotation. We demonstrate its effectiveness through experiments with AdaptiveStep-trained PRMs in mathematical reasoning and code generation tasks. Experimental results indicate that the resulting PRM achieves state-of-the-art Best-of-N performance, surpassing the greedy search strategy with token-level value-guided decoding, while also reducing construction costs by over 30% compared to existing open-source PRMs. In addition, we provide a thorough analysis and case study on the PRM's performance, transferability, and generalization capabilities.
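A minimal sketch of confidence-based step division in the spirit of AdaptiveStep: cut the reasoning trace wherever next-token confidence falls below a threshold. The tokens, probabilities, and threshold here are invented for illustration; in practice the probabilities would come from the generating model itself.

tokens = ["First", ",", "compute", "12*13", "=", "156", ".", "Then", "add", "4", "."]
probs  = [0.98, 0.97, 0.40, 0.95, 0.99, 0.93, 0.96, 0.35, 0.90, 0.88, 0.97]

def split_by_confidence(tokens, probs, threshold=0.5):
    """Start a new reasoning step whenever the model's next-token confidence is low."""
    steps, current = [], []
    for tok, p in zip(tokens, probs):
        if p < threshold and current:      # low confidence marks a decision point
            steps.append(current)
            current = []
        current.append(tok)
    if current:
        steps.append(current)
    return [" ".join(s) for s in steps]

print(split_by_confidence(tokens, probs))
# ['First ,', 'compute 12*13 = 156 .', 'Then add 4 .']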
7
67b6a9a8c721bee91cac28e7
null
null
2025-02-19T22:57:23.298000
Craw4LLM: Efficient Web Crawling for LLM Pretraining
https://cdn-thumbnails.h…s/2502.13347.png
2
{ "_id": "6135eeeb5bc6ecdf86b60f0d", "avatarUrl": "/avatars/43cedcf20ab6b0801a662787400e1384.svg", "followerCount": 7, "fullname": "Shi Yu", "isHf": false, "isMod": false, "isPro": false, "name": "yushi", "type": "user" }
true
null
2502.13347
[ { "_id": "67b6a7e83ef3656c48f149b9", "hidden": false, "name": "Shi Yu", "status": "claimed_verified", "statusLastChangedAt": "2025-02-20T09:35:47.487Z", "user": { "_id": "6135eeeb5bc6ecdf86b60f0d", "avatarUrl": "/avatars/43cedcf20ab6b0801a662787400e1384.svg", "fullname": "Shi Yu", "isPro": false, "type": "user", "user": "yushi" } }, { "_id": "67b6a7e83ef3656c48f149ba", "hidden": false, "name": "Zhiyuan Liu", "status": "admin_assigned", "statusLastChangedAt": "2025-02-20T16:09:12.683Z", "user": { "_id": "6310a3cd531cc21f9e06de6a", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6310a3cd531cc21f9e06de6a/aTGMx3O41lUARK9s3dAik.jpeg", "fullname": "Zhiyuan Liu", "isPro": false, "type": "user", "user": "acharkq" } }, { "_id": "67b6a7e83ef3656c48f149bb", "hidden": false, "name": "Chenyan Xiong", "status": "admin_assigned", "statusLastChangedAt": "2025-02-20T16:09:19.014Z", "user": { "_id": "617ae49bc0ff6006217aa22e", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1635543252231-617ae49bc0ff6006217aa22e.jpeg", "fullname": "Chenyan Xiong", "isPro": false, "type": "user", "user": "xiongchenyan" } } ]
2025-02-19T00:31:43
Craw4LLM: Efficient Web Crawling for LLM Pretraining
Web crawls are a main source of large language models' (LLMs) pretraining data, but the majority of crawled web pages are discarded in pretraining due to low data quality. This paper presents Crawl4LLM, an efficient web crawling method that explores the web graph based on the preferences of LLM pretraining. Specifically, it leverages the influence of a webpage on LLM pretraining as the priority score of the web crawler's scheduler, replacing the standard graph-connectivity-based priority. Our experiments on a web graph containing 900 million webpages from a commercial search engine's index demonstrate the efficiency of Crawl4LLM in obtaining high-quality pretraining data. With just 21% of URLs crawled, LLMs pretrained on Crawl4LLM data reach the same downstream performance as previous crawls, significantly reducing crawling waste and alleviating the burden on websites. Our code is publicly available at https://github.com/cxcscmu/Crawl4LLM.
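A minimal sketch of the priority change described above: the crawl frontier is ordered by an estimated pretraining-influence score rather than graph connectivity. The toy web graph, the quality table standing in for the influence estimate, and the crawl budget are all illustrative assumptions.

import heapq

web_graph = {          # tiny synthetic web graph: url -> outlinks
    "a": ["b", "c"], "b": ["d"], "c": ["d", "e"], "d": [], "e": ["a"],
}
quality = {"a": 0.2, "b": 0.9, "c": 0.5, "d": 0.7, "e": 0.1}

def influence_score(url):
    return quality[url]   # stand-in for a learned pretraining-influence estimate

def crawl(seed, budget):
    frontier = [(-influence_score(seed), seed)]   # negate scores for a max-heap
    crawled, seen = [], {seed}
    while frontier and len(crawled) < budget:
        _, url = heapq.heappop(frontier)
        crawled.append(url)
        for nxt in web_graph[url]:
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(frontier, (-influence_score(nxt), nxt))
    return crawled

print(crawl("a", budget=3))   # visits the highest-scoring discovered pages first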
27
67b6a7e93ef3656c48f149f1
null
null
2025-02-19T22:42:06.502000
Autellix: An Efficient Serving Engine for LLM Agents as General Programs
https://cdn-thumbnails.h…s/2502.13965.png
2
{ "_id": "654037be97949fd2304aab7f", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/654037be97949fd2304aab7f/2cSME81gcwYa2OTeVlq5Q.jpeg", "followerCount": 3, "fullname": "Michael Luo", "isHf": false, "isMod": false, "isPro": false, "name": "michaelzhiluo", "type": "user" }
true
null
2502.13965
[ { "_id": "67b6a3fa09841367596a1db5", "hidden": false, "name": "Michael Luo", "status": "claimed_verified", "statusLastChangedAt": "2025-02-24T09:25:24.729Z", "user": { "_id": "654037be97949fd2304aab7f", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/654037be97949fd2304aab7f/2cSME81gcwYa2OTeVlq5Q.jpeg", "fullname": "Michael Luo", "isPro": false, "type": "user", "user": "michaelzhiluo" } }, { "_id": "67b6a3fa09841367596a1db6", "hidden": false, "name": "Xiaoxiang Shi", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b6a3fa09841367596a1db7", "hidden": false, "name": "Colin Cai", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b6a3fa09841367596a1db8", "hidden": false, "name": "Tianjun Zhang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b6a3fa09841367596a1db9", "hidden": false, "name": "Justin Wong", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b6a3fa09841367596a1dba", "hidden": false, "name": "Yichuan Wang", "status": "claimed_verified", "statusLastChangedAt": "2025-02-20T09:35:50.487Z", "user": { "_id": "626e3449e7914f0d5ea78ad1", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/626e3449e7914f0d5ea78ad1/pVzdmdPMpNcxuj94qiIvB.jpeg", "fullname": "Yichuan", "isPro": false, "type": "user", "user": "Chrisyichuan" } }, { "_id": "67b6a3fa09841367596a1dbb", "hidden": false, "name": "Chi Wang", "status": "claimed_verified", "statusLastChangedAt": "2025-02-26T08:38:54.531Z", "user": { "_id": "67bea54186192d5a029f029b", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/no-auth/mLRb5e_ucqlvPdexaVk7E.png", "fullname": "Chi Wang", "isPro": false, "type": "user", "user": "chi-wang" } }, { "_id": "67b6a3fa09841367596a1dbc", "hidden": false, "name": "Yanping Huang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b6a3fa09841367596a1dbd", "hidden": false, "name": "Zhifeng Chen", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b6a3fa09841367596a1dbe", "hidden": false, "name": "Joseph E. Gonzalez", "status": "admin_assigned", "statusLastChangedAt": "2025-02-20T16:10:25.466Z", "user": { "_id": "645d2e8401f4eaab2a0878ce", "avatarUrl": "/avatars/1273c5fb607b4b622a746a42692fa632.svg", "fullname": "Joseph E. Gonzalez", "isPro": false, "type": "user", "user": "ProfJoeyG" } }, { "_id": "67b6a3fa09841367596a1dbf", "hidden": false, "name": "Ion Stoica", "status": null, "statusLastChangedAt": null, "user": null } ]
2025-02-19T18:59:30
Autellix: An Efficient Serving Engine for LLM Agents as General Programs
Large language model (LLM) applications are evolving beyond simple chatbots into dynamic, general-purpose agentic programs, which scale LLM calls and output tokens to help AI agents reason, explore, and solve complex tasks. However, existing LLM serving systems ignore dependencies between programs and calls, missing significant opportunities for optimization. Our analysis reveals that programs submitted to LLM serving engines experience long cumulative wait times, primarily due to head-of-line blocking at both the individual LLM-request and program levels. To address this, we introduce Autellix, an LLM serving system that treats programs as first-class citizens to minimize their end-to-end latencies. Autellix intercepts LLM calls submitted by programs, enriching schedulers with program-level context. We propose two scheduling algorithms, for single-threaded and distributed programs respectively, that preempt and prioritize LLM calls based on their programs' previously completed calls. Our evaluation demonstrates that, across diverse LLMs and agentic workloads, Autellix improves the throughput of programs by 4-15x at the same latency compared to state-of-the-art systems such as vLLM.
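A minimal sketch of program-level scheduling in the spirit described above: among queued LLM calls, the call whose parent program has accumulated the least completed service time runs next (a least-attained-service heuristic over programs). This illustrates the use of program-level context, not Autellix's exact algorithms.

```python
import heapq
from collections import defaultdict

class ProgramAwareScheduler:
    """Toy scheduler: calls are prioritized by how little service time their
    parent program has received so far, rather than per-call arrival order."""
    def __init__(self):
        self.completed = defaultdict(float)  # program_id -> total service time so far
        self.queue = []                      # (attained_at_submit, seq, program_id, call)
        self.seq = 0

    def submit(self, program_id, call):
        heapq.heappush(self.queue,
                       (self.completed[program_id], self.seq, program_id, call))
        self.seq += 1

    def step(self, run_call):
        """Pop the highest-priority call and run it; `run_call` is a user-supplied
        executor that returns the elapsed service time in seconds."""
        if not self.queue:
            return None
        _, _, program_id, call = heapq.heappop(self.queue)
        elapsed = run_call(call)
        self.completed[program_id] += elapsed
        return program_id, call, elapsed
```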
18
67b6a3fb09841367596a1e06
null
null
2025-02-19T22:27:22.403000
SearchRAG: Can Search Engines Be Helpful for LLM-based Medical Question Answering?
https://cdn-thumbnails.h…s/2502.13233.png
2
{ "_id": "64beb6b6140491ca9f803ebf", "avatarUrl": "/avatars/0daa2e813a13668b8b708cd8c12763d9.svg", "followerCount": null, "fullname": "Yucheng SHi", "isHf": false, "isMod": false, "isPro": false, "name": "YuchengShi", "type": "user" }
true
null
2502.13233
[ { "_id": "67b689aeba514d2c2c969289", "hidden": false, "name": "Yucheng Shi", "status": "claimed_verified", "statusLastChangedAt": "2025-02-20T09:36:18.925Z", "user": { "_id": "64beb6b6140491ca9f803ebf", "avatarUrl": "/avatars/0daa2e813a13668b8b708cd8c12763d9.svg", "fullname": "Yucheng SHi", "isPro": false, "type": "user", "user": "YuchengShi" } }, { "_id": "67b689aeba514d2c2c96928a", "hidden": false, "name": "Tianze Yang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b689aeba514d2c2c96928b", "hidden": false, "name": "Canyu Chen", "status": "admin_assigned", "statusLastChangedAt": "2025-02-20T16:58:09.392Z", "user": { "_id": "6483af58571c2dcfa98cae82", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6483af58571c2dcfa98cae82/d8Vnh_EG1KxeEoNAAQXHZ.jpeg", "fullname": "Canyu Chen", "isPro": false, "type": "user", "user": "canyuchen" } }, { "_id": "67b689aeba514d2c2c96928c", "hidden": false, "name": "Quanzheng Li", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b689aeba514d2c2c96928d", "hidden": false, "name": "Tianming Liu", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b689aeba514d2c2c96928e", "hidden": false, "name": "Xiang Li", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b689aeba514d2c2c96928f", "hidden": false, "name": "Ninghao Liu", "status": null, "statusLastChangedAt": null, "user": null } ]
2025-02-18T19:12:15
SearchRAG: Can Search Engines Be Helpful for LLM-based Medical Question Answering?
Large Language Models (LLMs) have shown remarkable capabilities in general domains but often struggle with tasks requiring specialized knowledge. Conventional Retrieval-Augmented Generation (RAG) techniques typically retrieve external information from static knowledge bases, which can be outdated or incomplete, missing fine-grained clinical details essential for accurate medical question answering. In this work, we propose SearchRAG, a novel framework that overcomes these limitations by leveraging real-time search engines. Our method employs synthetic query generation to convert complex medical questions into search-engine-friendly queries and utilizes uncertainty-based knowledge selection to filter and incorporate the most relevant and informative medical knowledge into the LLM's input. Experimental results demonstrate that our method significantly improves response accuracy in medical question answering tasks, particularly for complex questions requiring detailed and up-to-date knowledge.
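The sketch below illustrates the two stages described above, assuming user-supplied `llm_sample` and `web_search` callables; the greedy disagreement-based filter is a crude stand-in for the paper's uncertainty-based knowledge selection, not its exact rule.

```python
def searchrag_answer(question, llm_sample, web_search, k_queries=3, n_samples=5):
    """(1) Rewrite the medical question into search-friendly queries;
    (2) keep only snippets whose inclusion reduces answer disagreement,
    a rough proxy for model uncertainty."""
    queries = [llm_sample(f"Rewrite as a web search query (variant {i}): {question}")
               for i in range(k_queries)]
    snippets = [s for q in queries for s in web_search(q)]

    def disagreement(context):
        answers = [llm_sample(f"{context}\n\nQuestion: {question}\nAnswer briefly:")
                   for _ in range(n_samples)]
        return len(set(answers)) / n_samples  # 0.0 means fully consistent answers

    chosen, best = [], disagreement("")
    for snippet in snippets:
        score = disagreement("\n".join(chosen + [snippet]))
        if score < best:           # keep snippets that make answers more consistent
            chosen.append(snippet)
            best = score
    return llm_sample("\n".join(chosen) + f"\n\nQuestion: {question}\nAnswer:")
```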
13
67b689aeba514d2c2c9692b9
null
null
2025-02-19T22:13:49.764000
RAD: Training an End-to-End Driving Policy via Large-Scale 3DGS-based Reinforcement Learning
https://cdn-thumbnails.h…s/2502.13144.png
2
{ "_id": "6536187bd34e9f02b9df1c3b", "avatarUrl": "/avatars/0b34d62868b93053b0a05062a018b5bd.svg", "followerCount": 1, "fullname": "Hao Gao", "isHf": false, "isMod": false, "isPro": false, "name": "Hao605", "type": "user" }
true
null
2502.13144
[ { "_id": "67b55c7fba22c1ddbb8d5746", "hidden": false, "name": "Hao Gao", "status": "claimed_verified", "statusLastChangedAt": "2025-02-19T09:00:48.944Z", "user": { "_id": "6536187bd34e9f02b9df1c3b", "avatarUrl": "/avatars/0b34d62868b93053b0a05062a018b5bd.svg", "fullname": "Hao Gao", "isPro": false, "type": "user", "user": "Hao605" } }, { "_id": "67b55c7fba22c1ddbb8d5747", "hidden": false, "name": "Shaoyu Chen", "status": "admin_assigned", "statusLastChangedAt": "2025-02-20T16:04:07.016Z", "user": { "_id": "67adc154a266b54c2835cceb", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/no-auth/HxrPMqiiTWE3z8qhMlW_m.png", "fullname": "Shaoyu Chen", "isPro": false, "type": "user", "user": "Atan-0221" } }, { "_id": "67b55c7fba22c1ddbb8d5748", "hidden": false, "name": "Bo Jiang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b55c7fba22c1ddbb8d5749", "hidden": false, "name": "Bencheng Liao", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b55c7fba22c1ddbb8d574a", "hidden": false, "name": "Yiang Shi", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b55c7fba22c1ddbb8d574b", "hidden": false, "name": "Xiaoyang Guo", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b55c7fba22c1ddbb8d574c", "hidden": false, "name": "Yuechuan Pu", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b55c7fba22c1ddbb8d574d", "hidden": false, "name": "Haoran Yin", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b55c7fba22c1ddbb8d574e", "hidden": false, "name": "Xiangyu Li", "status": "admin_assigned", "statusLastChangedAt": "2025-02-20T16:05:51.144Z", "user": { "_id": "6481ba38814ff581861335ef", "avatarUrl": "/avatars/183e840c33c6c9f88a009ceb0aec697a.svg", "fullname": "xiangyu", "isPro": false, "type": "user", "user": "xiangyuli" } }, { "_id": "67b55c7fba22c1ddbb8d574f", "hidden": false, "name": "Xinbang Zhang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b55c7fba22c1ddbb8d5750", "hidden": false, "name": "Ying Zhang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b55c7fba22c1ddbb8d5751", "hidden": false, "name": "Wenyu Liu", "status": "admin_assigned", "statusLastChangedAt": "2025-02-20T16:05:33.229Z", "user": { "_id": "66c2e7fc934e2f07753542ac", "avatarUrl": "/avatars/f6fa3f94435cf1c1d06daa6c925d07d0.svg", "fullname": "LWY", "isPro": false, "type": "user", "user": "wenyuliu" } }, { "_id": "67b55c7fba22c1ddbb8d5752", "hidden": false, "name": "Qian Zhang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b55c7fba22c1ddbb8d5753", "hidden": false, "name": "Xinggang Wang", "status": "admin_assigned", "statusLastChangedAt": "2025-02-20T16:05:15.793Z", "user": { "_id": "62600de6d47e3dbae32ce1ce", "avatarUrl": "/avatars/a536417cfec6e10ac415091bd1829426.svg", "fullname": "Xinggang Wang", "isPro": false, "type": "user", "user": "xinggangw" } } ]
2025-02-18T18:59:21
RAD: Training an End-to-End Driving Policy via Large-Scale 3DGS-based Reinforcement Learning
Existing end-to-end autonomous driving (AD) algorithms typically follow the Imitation Learning (IL) paradigm, which faces challenges such as causal confusion and the open-loop gap. In this work, we establish a 3DGS-based closed-loop Reinforcement Learning (RL) training paradigm. By leveraging 3DGS techniques, we construct a photorealistic digital replica of the real physical world, enabling the AD policy to extensively explore the state space and learn to handle out-of-distribution scenarios through large-scale trial and error. To enhance safety, we design specialized rewards that guide the policy to effectively respond to safety-critical events and understand real-world causal relationships. For better alignment with human driving behavior, IL is incorporated into RL training as a regularization term. We introduce a closed-loop evaluation benchmark consisting of diverse, previously unseen 3DGS environments. Compared to IL-based methods, RAD achieves stronger performance in most closed-loop metrics, especially a 3x lower collision rate. Abundant closed-loop results are presented at https://hgao-cv.github.io/RAD.
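The combined objective described above, RL with IL used as a regularization term, can be sketched as a single scalar loss. The policy-gradient form and the weighting below are illustrative assumptions, not the paper's reward design.

```python
def rad_style_loss(log_prob_actions, advantages, log_prob_expert_actions, il_weight=0.1):
    """Toy combined objective: a policy-gradient RL term plus a behavior-cloning
    (imitation) term acting as a regularizer. Inputs are plain lists of floats;
    il_weight is an assumed hyperparameter."""
    n = len(log_prob_actions)
    rl_term = -sum(lp * adv for lp, adv in zip(log_prob_actions, advantages)) / n
    il_term = -sum(log_prob_expert_actions) / len(log_prob_expert_actions)
    return rl_term + il_weight * il_term
```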
36
67b55c80ba22c1ddbb8d579c
null
null
2025-02-19T21:38:13.468000
Small Models Struggle to Learn from Strong Reasoners
https://cdn-thumbnails.h…s/2502.12143.png
6
{ "_id": "653df1323479e9ebbe3eb6cc", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/653df1323479e9ebbe3eb6cc/K_g-r1iMRNKj99LXPuYF3.jpeg", "followerCount": 11, "fullname": "Zhangchen Xu", "isHf": false, "isMod": false, "isPro": true, "name": "flydust", "type": "user" }
true
null
2502.12143
[ { "_id": "67b4d05a9f8a8ab661450397", "hidden": false, "name": "Yuetai Li", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b4d05a9f8a8ab661450398", "hidden": false, "name": "Xiang Yue", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b4d05a9f8a8ab661450399", "hidden": false, "name": "Zhangchen Xu", "status": "claimed_verified", "statusLastChangedAt": "2025-02-20T09:37:32.715Z", "user": { "_id": "653df1323479e9ebbe3eb6cc", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/653df1323479e9ebbe3eb6cc/K_g-r1iMRNKj99LXPuYF3.jpeg", "fullname": "Zhangchen Xu", "isPro": true, "type": "user", "user": "flydust" } }, { "_id": "67b4d05a9f8a8ab66145039a", "hidden": false, "name": "Fengqing Jiang", "status": "admin_assigned", "statusLastChangedAt": "2025-02-20T16:09:45.083Z", "user": { "_id": "6531e1021dd8ebbdc1a6fd8e", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6531e1021dd8ebbdc1a6fd8e/lIcl7zCPtzRsfiUh6uY1o.jpeg", "fullname": "Fengqing Jiang", "isPro": false, "type": "user", "user": "fqjiang" } }, { "_id": "67b4d05a9f8a8ab66145039b", "hidden": false, "name": "Luyao Niu", "status": "admin_assigned", "statusLastChangedAt": "2025-02-20T16:09:50.890Z", "user": { "_id": "666dfd4770f5a2cb4aefd12f", "avatarUrl": "/avatars/fa0e0dbc203a21e58dda8fdb4cbc67ad.svg", "fullname": "Luyao Niu", "isPro": false, "type": "user", "user": "LNIU" } }, { "_id": "67b4d05a9f8a8ab66145039c", "hidden": false, "name": "Bill Yuchen Lin", "status": "admin_assigned", "statusLastChangedAt": "2025-02-20T16:09:56.614Z", "user": { "_id": "607f666a4ad99100d63ce35c", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/607f666a4ad99100d63ce35c/QxhxnvfeV6efkxwUFHwjI.png", "fullname": "Bill Yuchen Lin", "isPro": false, "type": "user", "user": "yuchenlin" } }, { "_id": "67b4d05a9f8a8ab66145039d", "hidden": false, "name": "Bhaskar Ramasubramanian", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b4d05a9f8a8ab66145039e", "hidden": false, "name": "Radha Poovendran", "status": null, "statusLastChangedAt": null, "user": null } ]
2025-02-17T18:56:15
Small Models Struggle to Learn from Strong Reasoners
Large language models (LLMs) excel in complex reasoning tasks, and distilling their reasoning capabilities into smaller models has shown promise. However, we uncover an interesting phenomenon, which we term the Small Model Learnability Gap: small models (≤3B parameters) do not consistently benefit from long chain-of-thought (CoT) reasoning or distillation from larger models. Instead, they perform better when fine-tuned on shorter, simpler reasoning chains that better align with their intrinsic learning capacity. To address this, we propose Mix Distillation, a simple yet effective strategy that balances reasoning complexity by combining long and short CoT examples or reasoning from both larger and smaller models. Our experiments demonstrate that Mix Distillation significantly improves small model reasoning performance compared to training on either data alone. These findings highlight the limitations of direct strong model distillation and underscore the importance of adapting reasoning complexity for effective reasoning capability transfer.
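A minimal sketch of the data-mixing step behind Mix Distillation, assuming two pools of supervised examples; the 50/50 ratio is an illustrative default, not the paper's tuned value.

```python
import random

def mix_distillation_data(short_cot, long_cot, short_ratio=0.5, size=None, seed=0):
    """Assemble a fine-tuning set that mixes short and long chain-of-thought
    examples in a fixed ratio, then shuffles the result."""
    rng = random.Random(seed)
    size = size or (len(short_cot) + len(long_cot))
    n_short = int(size * short_ratio)
    mixed = (rng.choices(short_cot, k=n_short) +
             rng.choices(long_cot, k=size - n_short))
    rng.shuffle(mixed)
    return mixed
```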
28
67b4d05b9f8a8ab6614503cb
null
null
2025-02-19T21:35:20.931000
LongPO: Long Context Self-Evolution of Large Language Models through Short-to-Long Preference Optimization
https://cdn-thumbnails.h…s/2502.13922.png
2
{ "_id": "645475e2548f22be59847604", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/645475e2548f22be59847604/EhSurrZ25u31qQ2TVXQXt.jpeg", "followerCount": 1, "fullname": "Chen", "isHf": false, "isMod": false, "isPro": false, "name": "Guanzheng", "type": "user" }
true
null
2502.13922
[ { "_id": "67b6948dbef24bad725b5d4b", "hidden": false, "name": "Guanzheng Chen", "status": "admin_assigned", "statusLastChangedAt": "2025-02-20T16:08:30.816Z", "user": { "_id": "645475e2548f22be59847604", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/645475e2548f22be59847604/EhSurrZ25u31qQ2TVXQXt.jpeg", "fullname": "Chen", "isPro": false, "type": "user", "user": "Guanzheng" } }, { "_id": "67b6948dbef24bad725b5d4c", "hidden": false, "name": "Xin Li", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b6948dbef24bad725b5d4d", "hidden": false, "name": "Michael Qizhe Shieh", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b6948dbef24bad725b5d4e", "hidden": false, "name": "Lidong Bing", "status": "admin_assigned", "statusLastChangedAt": "2025-02-20T16:08:48.204Z", "user": { "_id": "6454685a548f22be598414c4", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/eMjMWKJ-AouF7eY1-RzGF.jpeg", "fullname": "Lidong Bing", "isPro": false, "type": "user", "user": "LidongBing" } } ]
2025-02-19T17:59:03
LongPO: Long Context Self-Evolution of Large Language Models through Short-to-Long Preference Optimization
Large Language Models (LLMs) have demonstrated remarkable capabilities through pretraining and alignment. However, superior short-context LLMs may underperform in long-context scenarios due to insufficient long-context alignment. This alignment process remains challenging due to the impracticality of human annotation for extended contexts and the difficulty in balancing short- and long-context performance. To address these challenges, we introduce LongPO, which enables short-context LLMs to self-evolve to excel on long-context tasks by internally transferring short-context capabilities. LongPO harnesses LLMs to learn from self-generated short-to-long preference data, comprising paired responses generated for identical instructions with long-context inputs and their compressed short-context counterparts, respectively. This preference reveals capabilities and potentials of LLMs cultivated during short-context alignment that may be diminished in under-aligned long-context scenarios. Additionally, LongPO incorporates a short-to-long KL constraint to mitigate short-context performance decline during long-context alignment. When applied to Mistral-7B-Instruct-v0.2 from 128K to 512K context lengths, LongPO fully retains short-context performance and largely outperforms naive SFT and DPO in both long- and short-context tasks. Specifically, LongPO-trained models can achieve results on long-context benchmarks comparable to, or even surpassing, those of superior LLMs (e.g., GPT-4-128K) that involve extensive long-context annotation and larger parameter scales.
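A schematic of the training signal described above: a DPO-style preference loss on the long-context chosen/rejected response log-probabilities, plus a penalty standing in for the short-to-long KL constraint. All hyperparameters are placeholders.

```python
import math

def longpo_style_loss(lp_win, lp_lose, lp_win_ref, lp_lose_ref,
                      kl_short_to_long, beta=0.1, kl_weight=0.1):
    """DPO-style term: -log(sigmoid(beta * (policy margin - reference margin))),
    written as softplus(-margin) for clarity, plus a weighted KL penalty that
    represents the short-to-long constraint mentioned in the abstract."""
    margin = beta * ((lp_win - lp_win_ref) - (lp_lose - lp_lose_ref))
    preference_loss = math.log1p(math.exp(-margin))  # = -log(sigmoid(margin))
    return preference_loss + kl_weight * kl_short_to_long
```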
25
67b6948ebef24bad725b5d84
null
null
2025-02-19T20:37:51.607000
The Hidden Risks of Large Reasoning Models: A Safety Assessment of R1
https://cdn-thumbnails.h…s/2502.12659.png
2
{ "_id": "64679a226192d39142245e5e", "avatarUrl": "/avatars/05abee0b6317f100923936ca2099e9eb.svg", "followerCount": 4, "fullname": "Xin Eric Wang", "isHf": false, "isMod": false, "isPro": false, "name": "xw-eric", "type": "user" }
false
null
2502.12659
[ { "_id": "67b68700ce3055c9c0fc2987", "hidden": false, "name": "Kaiwen Zhou", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b68700ce3055c9c0fc2988", "hidden": false, "name": "Chengzhi Liu", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b68700ce3055c9c0fc2989", "hidden": false, "name": "Xuandong Zhao", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b68700ce3055c9c0fc298a", "hidden": false, "name": "Shreedhar Jangam", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b68700ce3055c9c0fc298b", "hidden": false, "name": "Jayanth Srinivasa", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b68700ce3055c9c0fc298c", "hidden": false, "name": "Gaowen Liu", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b68700ce3055c9c0fc298d", "hidden": false, "name": "Dawn Song", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b68700ce3055c9c0fc298e", "hidden": false, "name": "Xin Eric Wang", "status": null, "statusLastChangedAt": null, "user": null } ]
2025-02-18T09:06:07
The Hidden Risks of Large Reasoning Models: A Safety Assessment of R1
The rapid development of large reasoning models, such as OpenAI-o3 and DeepSeek-R1, has led to significant improvements in complex reasoning over non-reasoning large language models (LLMs). However, their enhanced capabilities, combined with the open-source access of models like DeepSeek-R1, raise serious safety concerns, particularly regarding their potential for misuse. In this work, we present a comprehensive safety assessment of these reasoning models, leveraging established safety benchmarks to evaluate their compliance with safety regulations. Furthermore, we investigate their susceptibility to adversarial attacks, such as jailbreaking and prompt injection, to assess their robustness in real-world applications. Through our multi-faceted analysis, we uncover four key findings: (1) There is a significant safety gap between the open-source R1 models and the o3-mini model, on both safety benchmarks and adversarial attacks, suggesting that more safety effort is needed on R1. (2) The distilled reasoning model shows poorer safety performance compared to its safety-aligned base models. (3) The stronger the model's reasoning ability, the greater the potential harm it may cause when answering unsafe questions. (4) The thinking process in R1 models poses greater safety concerns than their final answers. Our study provides insights into the security implications of reasoning models and highlights the need for further advancements in R1 models' safety to close the gap.
6
67b68701ce3055c9c0fc29e4
null
null
2025-02-19T18:20:05.946000
Scaling Autonomous Agents via Automatic Reward Modeling And Planning
https://cdn-thumbnails.h…s/2502.12130.png
2
{ "_id": "654e024de113b04ba5c71e2f", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/654e024de113b04ba5c71e2f/WH6S_gpQU6OXqDaiPpheK.jpeg", "followerCount": 1, "fullname": "Rui Sun", "isHf": false, "isMod": false, "isPro": false, "name": "ThreeSR", "type": "user" }
true
null
2502.12130
[ { "_id": "67b657d6a267b1a747a7fed6", "hidden": false, "name": "Zhenfang Chen", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b657d6a267b1a747a7fed7", "hidden": false, "name": "Delin Chen", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b657d6a267b1a747a7fed8", "hidden": false, "name": "Rui Sun", "status": "claimed_verified", "statusLastChangedAt": "2025-02-20T09:36:34.773Z", "user": { "_id": "654e024de113b04ba5c71e2f", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/654e024de113b04ba5c71e2f/WH6S_gpQU6OXqDaiPpheK.jpeg", "fullname": "Rui Sun", "isPro": false, "type": "user", "user": "ThreeSR" } }, { "_id": "67b657d6a267b1a747a7fed9", "hidden": false, "name": "Wenjun Liu", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b657d6a267b1a747a7feda", "hidden": false, "name": "Chuang Gan", "status": null, "statusLastChangedAt": null, "user": null } ]
2025-02-17T18:49:25
Scaling Autonomous Agents via Automatic Reward Modeling And Planning
Large language models (LLMs) have demonstrated remarkable capabilities across a range of text-generation tasks. However, LLMs still struggle with problems requiring multi-step decision-making and environmental feedback, such as online shopping, scientific reasoning, and mathematical problem-solving. Unlike pure text data, collecting large-scale decision-making data is challenging. Moreover, many powerful LLMs are only accessible through APIs, which hinders their fine-tuning for agent tasks due to cost and complexity. To address LLM agents' limitations, we propose a framework that can automatically learn a reward model from the environment without human annotations. This model can be used to evaluate the action trajectories of LLM agents and provide heuristics for task planning. Specifically, our approach involves employing one LLM-based agent to navigate an environment randomly, generating diverse action trajectories. Subsequently, a separate LLM is leveraged to assign a task intent and synthesize a negative response alongside the correct response for each trajectory. These triplets (task intent, positive response, and negative response) are then utilized as training data to optimize a reward model capable of scoring action trajectories. The effectiveness and generalizability of our framework are demonstrated through evaluations conducted on different agent benchmarks. In conclusion, our proposed framework represents a significant advancement in enhancing LLM agents' decision-making capabilities. By automating the learning of reward models, we overcome the challenges of data scarcity and API limitations, potentially revolutionizing the application of LLMs in complex and interactive environments. This research paves the way for more sophisticated AI agents capable of tackling a wide range of real-world problems requiring multi-step decision-making.
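A small sketch of the triplet synthesis and reward-model training signal described above; the `llm` callable and prompt wording are assumptions, and the ranking loss is a generic Bradley-Terry form rather than the paper's exact objective.

```python
import math

def make_triplet(trajectory, llm):
    """From a randomly collected trajectory, ask an LLM to (a) infer a task intent
    and (b) write a corrupted negative trajectory, yielding the
    (intent, positive, negative) triplet used for reward-model training."""
    intent = llm(f"What task does this trajectory accomplish?\n{trajectory}")
    negative = llm(f"Rewrite this trajectory so it fails the task '{intent}':\n{trajectory}")
    return intent, trajectory, negative

def pairwise_reward_loss(score_pos, score_neg):
    """Bradley-Terry style ranking loss: the reward model should score the
    positive trajectory above the negative one."""
    return math.log1p(math.exp(-(score_pos - score_neg)))
```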
2
67b657d7a267b1a747a7ff1a
null
null
2025-02-19T13:39:32.672000
YOLOv12: Attention-Centric Real-Time Object Detectors
https://cdn-thumbnails.h…s/2502.12524.png
2
{ "_id": "5f1158120c833276f61f1a84", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1608042047613-5f1158120c833276f61f1a84.jpeg", "followerCount": 777, "fullname": "Niels Rogge", "isHf": true, "isMod": false, "isPro": false, "name": "nielsr", "type": "user" }
false
null
2502.12524
[ { "_id": "67b608ca13df25808fbc22ae", "hidden": false, "name": "Yunjie Tian", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b608ca13df25808fbc22af", "hidden": false, "name": "Qixiang Ye", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b608ca13df25808fbc22b0", "hidden": false, "name": "David Doermann", "status": null, "statusLastChangedAt": null, "user": null } ]
2025-02-18T04:20:14
YOLOv12: Attention-Centric Real-Time Object Detectors
Enhancing the network architecture of the YOLO framework has long been crucial, but improvements have focused on CNN-based designs despite the proven superiority of attention mechanisms in modeling capability, because attention-based models have not been able to match the speed of CNN-based models. This paper proposes an attention-centric YOLO framework, namely YOLOv12, that matches the speed of previous CNN-based ones while harnessing the performance benefits of attention mechanisms. YOLOv12 surpasses all popular real-time object detectors in accuracy with competitive speed. For example, YOLOv12-N achieves 40.6% mAP with an inference latency of 1.64 ms on a T4 GPU, outperforming advanced YOLOv10-N / YOLOv11-N by 2.1%/1.2% mAP with a comparable speed. This advantage extends to other model scales. YOLOv12 also surpasses end-to-end real-time detectors that improve DETR, such as RT-DETR / RT-DETRv2: YOLOv12-S beats RT-DETR-R18 / RT-DETRv2-R18 while running 42% faster, using only 36% of the computation and 45% of the parameters. More comparisons are shown in Figure 1.
10
67b608cb13df25808fbc2308
null
null
2025-02-19T10:33:08.946000
Harnessing Vision Models for Time Series Analysis: A Survey
https://cdn-thumbnails.h…s/2502.08869.png
2
{ "_id": "67b5efbe38c175486e2869b9", "avatarUrl": "/avatars/64a698259033bb8ac324e57c557a9aa9.svg", "followerCount": null, "fullname": "Jingchao Ni", "isHf": false, "isMod": false, "isPro": false, "name": "nijingchao", "type": "user" }
true
[ "https://cdn-uploads.huggingface.co/production/uploads/67b5efbe38c175486e2869b9/iBIxlNXQX2KDabTTeqWL0.png", "https://cdn-uploads.huggingface.co/production/uploads/67b5efbe38c175486e2869b9/cGTQawzFrVI21iLfRjpFt.png", "https://cdn-uploads.huggingface.co/production/uploads/67b5efbe38c175486e2869b9/j-lNPZ3OqCUHj6vhnQS5v.png" ]
2502.08869
[ { "_id": "67b5f3e30e7fed1190f29f80", "hidden": false, "name": "Jingchao Ni", "status": "claimed_verified", "statusLastChangedAt": "2025-02-19T15:14:31.563Z", "user": { "_id": "67b5efbe38c175486e2869b9", "avatarUrl": "/avatars/64a698259033bb8ac324e57c557a9aa9.svg", "fullname": "Jingchao Ni", "isPro": false, "type": "user", "user": "nijingchao" } }, { "_id": "67b5f3e30e7fed1190f29f81", "hidden": false, "name": "Ziming Zhao", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b5f3e30e7fed1190f29f82", "hidden": false, "name": "ChengAo Shen", "status": "claimed_verified", "statusLastChangedAt": "2025-02-20T09:36:57.535Z", "user": { "_id": "66c2a909a9425c872d5213f4", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/66c2a909a9425c872d5213f4/u-iodlEs2bGzbLwv-cqKx.jpeg", "fullname": "ChengAo Shen", "isPro": false, "type": "user", "user": "ChengAoShen" } }, { "_id": "67b5f3e30e7fed1190f29f83", "hidden": false, "name": "Hanghang Tong", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b5f3e30e7fed1190f29f84", "hidden": false, "name": "Dongjin Song", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b5f3e30e7fed1190f29f85", "hidden": false, "name": "Wei Cheng", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b5f3e30e7fed1190f29f86", "hidden": false, "name": "Dongsheng Luo", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b5f3e30e7fed1190f29f87", "hidden": false, "name": "Haifeng Chen", "status": null, "statusLastChangedAt": null, "user": null } ]
2025-02-13T00:42:11
Harnessing Vision Models for Time Series Analysis: A Survey
Time series analysis has witnessed the inspiring development from traditional autoregressive models and deep learning models to recent Transformers and Large Language Models (LLMs). Efforts in leveraging vision models for time series analysis have also been made along the way but are less visible to the community due to the predominant research on sequence modeling in this domain. However, the discrepancy between continuous time series and the discrete token space of LLMs, and the challenges in explicitly modeling the correlations of variates in multivariate time series, have shifted some research attention to the equally successful Large Vision Models (LVMs) and Vision Language Models (VLMs). To fill this gap in the existing literature, this survey discusses the advantages of vision models over LLMs in time series analysis. It provides a comprehensive and in-depth overview of the existing methods, with dual views of a detailed taxonomy that answer the key research questions, including how to encode time series as images and how to model the imaged time series for various tasks. Additionally, we address the challenges in the pre- and post-processing steps involved in this framework and outline future directions to further advance time series analysis with vision models.
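As one concrete example of "encoding time series as images" discussed in the survey, the sketch below builds a binary recurrence plot; the threshold is an illustrative choice, and the resulting 2D array can be fed to any vision model.

```python
def recurrence_plot(series, eps=0.1):
    """Binary recurrence plot: R[i][j] = 1 when the normalized values at
    positions i and j are within eps of each other."""
    lo, hi = min(series), max(series)
    scale = (hi - lo) or 1.0
    x = [(v - lo) / scale for v in series]   # normalize to [0, 1]
    return [[1 if abs(a - b) < eps else 0 for b in x] for a in x]
```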
2
67b5f3e30e7fed1190f29fb7
null
null
2025-02-19T08:03:59.885000
Flow-of-Options: Diversified and Improved LLM Reasoning by Thinking Through Options
https://cdn-thumbnails.h…s/2502.12929.png
2
{ "_id": "643837ef581e6bf0fa9c72f8", "avatarUrl": "/avatars/5b95d2509d1c7640d77a3405ebd53eaf.svg", "followerCount": null, "fullname": "Lakshmi Nair", "isHf": false, "isMod": false, "isPro": false, "name": "lnair", "type": "user" }
true
[ "https://cdn-uploads.huggingface.co/production/uploads/643837ef581e6bf0fa9c72f8/HhevzVLx8wGDy7sD0zSAj.png" ]
2502.12929
[ { "_id": "67b546dc2b2ec6908f00c771", "hidden": false, "name": "Lakshmi Nair", "status": "claimed_verified", "statusLastChangedAt": "2025-02-19T14:37:39.033Z", "user": { "_id": "643837ef581e6bf0fa9c72f8", "avatarUrl": "/avatars/5b95d2509d1c7640d77a3405ebd53eaf.svg", "fullname": "Lakshmi Nair", "isPro": false, "type": "user", "user": "lnair" } }, { "_id": "67b546dc2b2ec6908f00c772", "hidden": false, "name": "Ian Trase", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b546dc2b2ec6908f00c773", "hidden": false, "name": "Mark Kim", "status": null, "statusLastChangedAt": null, "user": null } ]
2025-02-18T15:11:46
Flow-of-Options: Diversified and Improved LLM Reasoning by Thinking Through Options
We present a novel reasoning approach called Flow-of-Options (FoO), designed to address intrinsic biases in Large Language Models (LLMs). FoO enables LLMs to systematically explore a diverse range of possibilities in their reasoning, as demonstrated by an FoO-based agentic system for autonomously solving Machine Learning tasks (AutoML). Our framework outperforms state-of-the-art baselines, achieving improvements of 38.2% - 69.2% on standard data science tasks, and 37.4% - 47.9% on therapeutic chemistry tasks. With an overall operation cost under $1 per task, our framework is well-suited for cost-sensitive applications. Beyond classification and regression, we illustrate the broader applicability of our FoO-based agentic system to tasks such as reinforcement learning and image generation. Our framework presents significant advancements compared to current state-of-the-art agentic systems for AutoML, due to the benefits of FoO in enforcing diversity in LLM solutions through compressed, explainable representations that also support long-term memory when combined with case-based reasoning.
7
67b546dd2b2ec6908f00c7f6
null
null
2025-02-19T07:53:04.918000
Text2World: Benchmarking Large Language Models for Symbolic World Model Generation
https://cdn-thumbnails.h…s/2502.13092.png
2
{ "_id": "6237df4a5ab9df625fb70c1a", "avatarUrl": "/avatars/c5d1a52895cb6515f28019a8e7e3e855.svg", "followerCount": 1, "fullname": "Mengkang Hu", "isHf": false, "isMod": false, "isPro": false, "name": "MengkangHu", "type": "user" }
true
null
2502.13092
[ { "_id": "67b5473109afe1f3029835cb", "hidden": false, "name": "Mengkang Hu", "status": "claimed_verified", "statusLastChangedAt": "2025-02-19T09:01:15.592Z", "user": { "_id": "6237df4a5ab9df625fb70c1a", "avatarUrl": "/avatars/c5d1a52895cb6515f28019a8e7e3e855.svg", "fullname": "Mengkang Hu", "isPro": false, "type": "user", "user": "MengkangHu" } }, { "_id": "67b5473109afe1f3029835cc", "hidden": false, "name": "Tianxing Chen", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b5473109afe1f3029835cd", "hidden": false, "name": "Yude Zou", "status": "claimed_verified", "statusLastChangedAt": "2025-02-20T09:37:22.709Z", "user": { "_id": "67b4123c9a692ad23daa8a73", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/67b4123c9a692ad23daa8a73/vhkxv0NkM2socQ2tySzqc.jpeg", "fullname": "Yude Zou", "isPro": false, "type": "user", "user": "xdzouyd" } }, { "_id": "67b5473109afe1f3029835ce", "hidden": false, "name": "Yuheng Lei", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b5473109afe1f3029835cf", "hidden": false, "name": "Qiguang Chen", "status": "claimed_verified", "statusLastChangedAt": "2025-02-19T09:01:12.095Z", "user": { "_id": "636f526a6cd69d9a36ff2b53", "avatarUrl": "/avatars/8f2271a193fcac609d9be270552b5afa.svg", "fullname": "Qiguang Chen", "isPro": false, "type": "user", "user": "LightChen2333" } }, { "_id": "67b5473109afe1f3029835d0", "hidden": false, "name": "Ming Li", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b5473109afe1f3029835d1", "hidden": false, "name": "Hongyuan Zhang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b5473109afe1f3029835d2", "hidden": false, "name": "Wenqi Shao", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b5473109afe1f3029835d3", "hidden": false, "name": "Ping Luo", "status": null, "statusLastChangedAt": null, "user": null } ]
2025-02-18T17:59:48
Text2World: Benchmarking Large Language Models for Symbolic World Model Generation
Recently, there has been growing interest in leveraging large language models (LLMs) to generate symbolic world models from textual descriptions. Although LLMs have been extensively explored in the context of world modeling, prior studies encountered several challenges, including evaluation randomness, dependence on indirect metrics, and a limited domain scope. To address these limitations, we introduce a novel benchmark, Text2World, based on planning domain definition language (PDDL), featuring hundreds of diverse domains and employing multi-criteria, execution-based metrics for a more robust evaluation. We benchmark current LLMs using Text2World and find that reasoning models trained with large-scale reinforcement learning outperform others. However, even the best-performing model still demonstrates limited capabilities in world modeling. Building on these insights, we examine several promising strategies to enhance the world modeling capabilities of LLMs, including test-time scaling, agent training, and more. We hope that Text2World can serve as a crucial resource, laying the groundwork for future research in leveraging LLMs as world models. The project page is available at https://text-to-world.github.io/.
12
67b5473209afe1f302983600
null
null
2025-02-19T06:51:04.672000
Atom of Thoughts for Markov LLM Test-Time Scaling
https://cdn-thumbnails.h…s/2502.12018.png
3
{ "_id": "6402e8fb06c715b93407442d", "avatarUrl": "/avatars/12b67f0632be5a53b56d8a68586a7f98.svg", "followerCount": 2, "fullname": "Fengwei Teng", "isHf": false, "isMod": false, "isPro": false, "name": "leavendough", "type": "user" }
true
null
2502.12018
[ { "_id": "67b5c4ed85107d20148ae710", "hidden": false, "name": "Fengwei Teng", "status": "claimed_verified", "statusLastChangedAt": "2025-02-19T11:49:34.612Z", "user": { "_id": "6402e8fb06c715b93407442d", "avatarUrl": "/avatars/12b67f0632be5a53b56d8a68586a7f98.svg", "fullname": "Fengwei Teng", "isPro": false, "type": "user", "user": "leavendough" } }, { "_id": "67b5c4ed85107d20148ae711", "hidden": false, "name": "Zhaoyang Yu", "status": "admin_assigned", "statusLastChangedAt": "2025-02-19T14:43:14.422Z", "user": { "_id": "640dc84b474aa6f89554d518", "avatarUrl": "/avatars/64f47f76d97c5e91b7ab8380bcada61c.svg", "fullname": "Zhaoyang Yu", "isPro": false, "type": "user", "user": "MoshiQAQ" } }, { "_id": "67b5c4ed85107d20148ae712", "hidden": false, "name": "Quan Shi", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b5c4ed85107d20148ae713", "hidden": false, "name": "Jiayi Zhang", "status": "admin_assigned", "statusLastChangedAt": "2025-02-19T14:43:54.773Z", "user": { "_id": "66071c8b013a0afdf40fbfd1", "avatarUrl": "/avatars/683e4ba5991059110631759a5975eacc.svg", "fullname": "JiaYi Zhang", "isPro": false, "type": "user", "user": "Bbedd" } }, { "_id": "67b5c4ed85107d20148ae714", "hidden": false, "name": "Chenglin Wu", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b5c4ed85107d20148ae715", "hidden": false, "name": "Yuyu Luo", "status": null, "statusLastChangedAt": null, "user": null } ]
2025-02-17T16:52:42
Atom of Thoughts for Markov LLM Test-Time Scaling
Large Language Models (LLMs) achieve superior performance through training-time scaling, and test-time scaling further enhances their capabilities by conducting effective reasoning during inference. However, as the scale of reasoning increases, existing test-time scaling methods suffer from accumulated historical information, which not only wastes computational resources but also interferes with effective reasoning. To address this issue, we observe that complex reasoning progress is often achieved by solving a sequence of independent subquestions, each being self-contained and verifiable. These subquestions are essentially atomic questions, relying primarily on their current state rather than accumulated history, similar to the memoryless transitions in a Markov process. Based on this observation, we propose Atom of Thoughts (AoT), where each state transition in the reasoning process consists of decomposing the current question into a dependency-based directed acyclic graph and contracting its subquestions, forming a new atomic question state. This iterative decomposition-contraction process continues until reaching directly solvable atomic questions, naturally realizing Markov transitions between question states. Furthermore, these atomic questions can be seamlessly integrated into existing test-time scaling methods, enabling AoT to serve as a plug-in enhancement for improving reasoning capabilities. Experiments across six benchmarks demonstrate the effectiveness of AoT both as a standalone framework and a plug-in enhancement. Notably, on HotpotQA, when applied to gpt-4o-mini, AoT achieves an 80.6% F1 score, surpassing o3-mini by 3.4% and DeepSeek-R1 by 10.6%. The code will be available at https://github.com/qixucen/atom.
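A schematic of the decomposition-contraction loop described above, with LLM-backed `decompose`, `solve_atomic`, and `contract` left as assumed callables; the dependency handling is deliberately simplified and does not reproduce the released AoT code.

```python
def atom_of_thoughts(question, decompose, solve_atomic, contract, max_rounds=5):
    """Each round: split the current question state into subquestions, solve the
    independent ones, then contract their answers into a new, smaller question,
    mimicking a Markov-style transition between question states."""
    state = question
    for _ in range(max_rounds):
        subquestions = decompose(state)              # dependency-ordered list
        if len(subquestions) <= 1:
            return solve_atomic(state)               # already atomic
        independent = subquestions[:-1]              # simplification: last one depends on the rest
        answers = [solve_atomic(q) for q in independent]
        state = contract(subquestions[-1], list(zip(independent, answers)))
    return solve_atomic(state)
```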
15
67b5c4ee85107d20148ae73d
null
null
2025-02-19T06:13:51.101000
Eager Updates For Overlapped Communication and Computation in DiLoCo
https://cdn-thumbnails.h…s/2502.12996.png
2
{ "_id": "622792366303bf1dc304f49f", "avatarUrl": "/avatars/975c1cc3eb2f97cf8e848162056d5bea.svg", "followerCount": 4, "fullname": "Arthur Douillard", "isHf": false, "isMod": false, "isPro": false, "name": "ArthurDouillard", "type": "user" }
true
null
2502.12996
[ { "_id": "67b5bcd091132877cf330179", "hidden": false, "name": "Satyen Kale", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b5bcd091132877cf33017a", "hidden": false, "name": "Arthur Douillard", "status": "admin_assigned", "statusLastChangedAt": "2025-02-19T14:42:51.668Z", "user": { "_id": "622792366303bf1dc304f49f", "avatarUrl": "/avatars/975c1cc3eb2f97cf8e848162056d5bea.svg", "fullname": "Arthur Douillard", "isPro": false, "type": "user", "user": "ArthurDouillard" } }, { "_id": "67b5bcd091132877cf33017b", "hidden": false, "name": "Yanislav Donchev", "status": "admin_assigned", "statusLastChangedAt": "2025-02-19T14:42:57.488Z", "user": { "_id": "61796df92357d28cdf952511", "avatarUrl": "/avatars/01821f364cf78c95aed1986d91d40610.svg", "fullname": "Yanislav Donchev", "isPro": false, "type": "user", "user": "yannidd" } } ]
2025-02-18T16:16:14
Eager Updates For Overlapped Communication and Computation in DiLoCo
Distributed optimization methods such as DiLoCo have been shown to be effective in training very large models across multiple distributed workers, such as datacenters. These methods split updates into two parts: an inner optimization phase, where the workers independently execute multiple optimization steps on their own local data, and an outer optimization step, where the inner updates are synchronized. While such approaches require orders of magnitude less communication than standard data-parallel training, in settings where the workers are datacenters, even the limited communication requirements of these approaches can still cause significant slowdowns due to the blocking necessary at each outer optimization step. In this paper, we investigate techniques to mitigate this issue by overlapping communication with computation in a manner that allows the outer optimization step to fully overlap with the inner optimization phase. We show that a particular variant, dubbed eager updates, provides competitive performance with standard DiLoCo in settings with low bandwidth between workers.
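A minimal sketch of the overlap idea described above: the synchronization of one round's outer delta runs in the background while the next inner phase proceeds, and the (now stale) result is applied when it arrives. The callables are user-supplied stand-ins; this illustrates the scheduling pattern only, not DiLoCo itself.

```python
from concurrent.futures import ThreadPoolExecutor

def train_with_overlapped_outer_step(params, inner_phase, all_reduce, outer_apply, rounds=4):
    """Each round runs the inner phase while the previous round's all-reduce is
    still in flight; the synchronized (delayed) delta is applied once available."""
    pending = None  # Future holding the in-flight all-reduce, if any
    with ThreadPoolExecutor(max_workers=1) as pool:
        for _ in range(rounds):
            local_delta = inner_phase(params)                   # H local optimizer steps
            if pending is not None:
                params = outer_apply(params, pending.result())  # apply delayed outer delta
            pending = pool.submit(all_reduce, local_delta)      # overlaps with next inner phase
        if pending is not None:
            params = outer_apply(params, pending.result())      # drain the last sync
    return params
```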
7
67b5bcd191132877cf3301aa
null
null
2025-02-19T04:54:27.788000
FinMTEB: Finance Massive Text Embedding Benchmark
https://cdn-thumbnails.h…s/2502.10990.png
2
{ "_id": "647d834618274bce03013cc2", "avatarUrl": "/avatars/a95c7df96dc4fb6a96193f6dd5068227.svg", "followerCount": 2, "fullname": "yixuan", "isHf": false, "isMod": false, "isPro": true, "name": "yixuantt", "type": "user" }
true
null
2502.10990
[ { "_id": "67b3ee6c1e80a69e79c3155a", "hidden": false, "name": "Yixuan Tang", "status": "claimed_verified", "statusLastChangedAt": "2025-02-19T09:04:50.969Z", "user": { "_id": "647d834618274bce03013cc2", "avatarUrl": "/avatars/a95c7df96dc4fb6a96193f6dd5068227.svg", "fullname": "yixuan", "isPro": true, "type": "user", "user": "yixuantt" } }, { "_id": "67b3ee6c1e80a69e79c3155b", "hidden": false, "name": "Yi Yang", "status": null, "statusLastChangedAt": null, "user": null } ]
2025-02-16T04:23:52
FinMTEB: Finance Massive Text Embedding Benchmark
Embedding models play a crucial role in representing and retrieving information across various NLP applications. Recent advances in large language models (LLMs) have further enhanced the performance of embedding models. While these models are often benchmarked on general-purpose datasets, real-world applications demand domain-specific evaluation. In this work, we introduce the Finance Massive Text Embedding Benchmark (FinMTEB), a specialized counterpart to MTEB designed for the financial domain. FinMTEB comprises 64 financial domain-specific embedding datasets across 7 tasks that cover diverse textual types in both Chinese and English, such as financial news articles, corporate annual reports, ESG reports, regulatory filings, and earnings call transcripts. We also develop a finance-adapted model, FinPersona-E5, using a persona-based data synthetic method to cover diverse financial embedding tasks for training. Through extensive evaluation of 15 embedding models, including FinPersona-E5, we show three key findings: (1) performance on general-purpose benchmarks shows limited correlation with financial domain tasks; (2) domain-adapted models consistently outperform their general-purpose counterparts; and (3) surprisingly, a simple Bag-of-Words (BoW) approach outperforms sophisticated dense embeddings in financial Semantic Textual Similarity (STS) tasks, underscoring current limitations in dense embedding techniques. Our work establishes a robust evaluation framework for financial NLP applications and provides crucial insights for developing domain-specific embedding models.
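For reference, the simple Bag-of-Words baseline mentioned in finding (3) can be as small as a cosine similarity over raw term counts, as sketched below with no embeddings involved.

```python
from collections import Counter
import math

def bow_cosine(a: str, b: str) -> float:
    """Cosine similarity between whitespace-tokenized term-count vectors."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[t] * vb[t] for t in va.keys() & vb.keys())
    norm = (math.sqrt(sum(c * c for c in va.values())) *
            math.sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0
```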
3
67b3ee6d1e80a69e79c3158f
null
null
2025-02-19T04:43:42.973000
Cramming 1568 Tokens into a Single Vector and Back Again: Exploring the Limits of Embedding Space Capacity
https://cdn-thumbnails.h…s/2502.13063.png
4
{ "_id": "639c6e978a34ed9a404c6a7b", "avatarUrl": "/avatars/c98ca8c9f9ed8509c2f1bb6aa994fd57.svg", "followerCount": 7, "fullname": "MIKHAIL BURTSEV", "isHf": false, "isMod": false, "isPro": false, "name": "mbur", "type": "user" }
true
null
2502.13063
[ { "_id": "67b5a7896f72266cb765e744", "hidden": false, "name": "Yuri Kuratov", "status": "extracted_pending", "statusLastChangedAt": "2025-02-19T09:42:34.422Z", "user": { "_id": "618b9540682ec1c38327e586", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/618b9540682ec1c38327e586/v_ZBkfh8O9Zh6C2YQpuBX.jpeg", "fullname": "Yury Kuratov", "isPro": false, "type": "user", "user": "yurakuratov" } }, { "_id": "67b5a7896f72266cb765e745", "hidden": false, "name": "Mikhail Arkhipov", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b5a7896f72266cb765e746", "hidden": false, "name": "Aydar Bulatov", "status": "claimed_verified", "statusLastChangedAt": "2025-02-21T14:43:06.400Z", "user": { "_id": "64c8b321cb2f1bf0e7c0f54b", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/64c8b321cb2f1bf0e7c0f54b/JflXxMVnG9I0IB5YNyhXF.jpeg", "fullname": "Aydar Bulatov", "isPro": false, "type": "user", "user": "booydar" } }, { "_id": "67b5a7896f72266cb765e747", "hidden": false, "name": "Mikhail Burtsev", "status": "claimed_verified", "statusLastChangedAt": "2025-02-19T09:56:59.080Z", "user": { "_id": "639c6e978a34ed9a404c6a7b", "avatarUrl": "/avatars/c98ca8c9f9ed8509c2f1bb6aa994fd57.svg", "fullname": "MIKHAIL BURTSEV", "isPro": false, "type": "user", "user": "mbur" } } ]
2025-02-18T17:08:45
Cramming 1568 Tokens into a Single Vector and Back Again: Exploring the Limits of Embedding Space Capacity
A range of recent works addresses the problem of compressing a sequence of tokens into a shorter sequence of real-valued vectors to be used as inputs instead of token embeddings or a key-value cache. These approaches make it possible to reduce the amount of compute in existing language models. Despite relying on powerful models as encoders, the maximum attainable lossless compression ratio is typically no higher than x10. This fact is highly intriguing because, in theory, the maximum information capacity of large real-valued vectors is far beyond the presented rates, even for 16-bit precision and a modest vector size. In this work, we explore the limits of compression by replacing the encoder with a per-sample optimization procedure. We show that vectors with compression ratios up to x1500 exist, which highlights a two-orders-of-magnitude gap between existing and practically attainable solutions. Furthermore, we empirically show that the compression limits are determined not by the length of the input but by the amount of uncertainty to be reduced, namely, the cross-entropy loss on this sequence without any conditioning. The obtained limits highlight the substantial gap between the theoretical capacity of input embeddings and their practical utilization, suggesting significant room for optimization in model design.
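A rough sketch of the per-sample optimization procedure described above, assuming PyTorch and Hugging Face Transformers with GPT-2 as a stand-in frozen LM; the vector count, step count, and learning rate are illustrative, not the paper's settings.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def cram_into_vectors(text, n_vectors=4, steps=300, lr=1e-2, model_name="gpt2"):
    """Freeze the LM and directly optimize a handful of input vectors so that,
    conditioned on them, the frozen LM reconstructs the target token sequence."""
    tok = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name).eval()
    for p in model.parameters():
        p.requires_grad_(False)

    ids = tok(text, return_tensors="pt").input_ids            # (1, T)
    tok_emb = model.get_input_embeddings()(ids)               # (1, T, d), frozen
    mem = torch.randn(1, n_vectors, tok_emb.size(-1), requires_grad=True)
    opt = torch.optim.Adam([mem], lr=lr)

    # Ignore the loss on the trainable prefix positions; supervise only the text.
    labels = torch.cat([torch.full((1, n_vectors), -100, dtype=torch.long), ids], dim=1)
    for _ in range(steps):
        inputs = torch.cat([mem, tok_emb], dim=1)
        loss = model(inputs_embeds=inputs, labels=labels).loss
        opt.zero_grad()
        loss.backward()
        opt.step()
    return mem.detach(), float(loss)
```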
64
67b5a78a6f72266cb765e779
null
null
2025-02-19T03:03:51.930000
You Do Not Fully Utilize Transformer's Representation Capacity
https://cdn-thumbnails.h…s/2502.09245.png
3
{ "_id": "63ed5676684767daecac6f8a", "avatarUrl": "/avatars/d0e4a715f9c3fb6d74c183bab751ec35.svg", "followerCount": 4, "fullname": "Yaroslav Aksenov", "isHf": false, "isMod": false, "isPro": false, "name": "yaraksen", "type": "user" }
true
[ "https://cdn-uploads.huggingface.co/production/uploads/63ed5676684767daecac6f8a/tZDsnW0gjHoYCpbZ-wwJi.png" ]
2502.09245
[ { "_id": "67b57a993d4f319f1fa9424b", "hidden": false, "name": "Gleb Gerasimov", "status": "claimed_verified", "statusLastChangedAt": "2025-02-19T10:10:30.547Z", "user": { "_id": "65db0871ab2f64915ce05e73", "avatarUrl": "/avatars/77e03f493196c5413cd2a02270e93660.svg", "fullname": "Gleb Gerasimov", "isPro": false, "type": "user", "user": "gudleifrr" } }, { "_id": "67b57a993d4f319f1fa9424c", "hidden": false, "name": "Yaroslav Aksenov", "status": "claimed_verified", "statusLastChangedAt": "2025-02-19T09:00:41.123Z", "user": { "_id": "63ed5676684767daecac6f8a", "avatarUrl": "/avatars/d0e4a715f9c3fb6d74c183bab751ec35.svg", "fullname": "Yaroslav Aksenov", "isPro": false, "type": "user", "user": "yaraksen" } }, { "_id": "67b57a993d4f319f1fa9424d", "hidden": false, "name": "Nikita Balagansky", "status": "claimed_verified", "statusLastChangedAt": "2025-02-19T09:33:26.858Z", "user": { "_id": "60b364e7f88532cd79eaff7b", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1654185363389-60b364e7f88532cd79eaff7b.jpeg", "fullname": "Nikita Balagansky", "isPro": false, "type": "user", "user": "elephantmipt" } }, { "_id": "67b57a993d4f319f1fa9424e", "hidden": false, "name": "Viacheslav Sinii", "status": "claimed_verified", "statusLastChangedAt": "2025-02-19T11:12:28.927Z", "user": { "_id": "6416272d986557e8cac64ece", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6416272d986557e8cac64ece/s3CLjNN_pGj-vJDcENFD2.jpeg", "fullname": "Viacheslav", "isPro": false, "type": "user", "user": "ummagumm-a" } }, { "_id": "67b57a993d4f319f1fa9424f", "hidden": false, "name": "Daniil Gavrilov", "status": "claimed_verified", "statusLastChangedAt": "2025-02-19T09:00:43.143Z", "user": { "_id": "62a9c8edc19f92ae443ab37f", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1669110208492-62a9c8edc19f92ae443ab37f.png", "fullname": "Daniil Gavrilov", "isPro": false, "type": "user", "user": "kefirski" } } ]
2025-02-13T12:00:50
You Do Not Fully Utilize Transformer's Representation Capacity
In contrast to RNNs, which compress previous tokens into a single hidden state, Transformers can attend to all previous tokens directly. However, standard Transformers only use representations from the immediately preceding layer. In this paper, we show that this design choice causes representation collapse and leads to suboptimal performance. To address this issue, we introduce Layer-Integrated Memory (LIMe), a simple yet powerful approach that preserves the model's overall memory footprint while expanding its representational capacity by allowing access to hidden states from earlier layers. Through extensive experiments across various architectures and different lookup mechanisms, we demonstrate consistent performance improvements on a wide range of tasks. Moreover, our analysis of the learned representation dynamics and our exploration of depthwise circuits reveal how LIMe integrates information across layers, pointing to promising directions for future research.
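The core lookup idea can be sketched as forming the representation a layer reads from as a learned mixture over all earlier layers' hidden states, rather than only the immediately preceding layer; the fixed softmax weights below stand in for what LIMe learns.

```python
import numpy as np

def layer_integrated_memory(layer_states, weights):
    """Toy layer-integrated lookup: combine earlier layers' hidden states with a
    softmax-normalized weight per layer. `layer_states` is a list of (seq_len, d)
    arrays; `weights` is one scalar per layer (learned in the real model)."""
    w = np.asarray(weights, dtype=float)
    w = np.exp(w - w.max())
    w /= w.sum()                                   # convex mixture over layers
    return sum(wi * h for wi, h in zip(w, layer_states))
```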
34
67b57a9a3d4f319f1fa94274
null
null
2025-02-19T02:56:09.510000
Injecting Domain-Specific Knowledge into Large Language Models: A Comprehensive Survey
https://cdn-thumbnails.h…s/2502.10708.png
2
{ "_id": "65407ba7a38390065750233f", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/65407ba7a38390065750233f/1_IPMZbk-S9u2t18PQgMp.jpeg", "followerCount": 1, "fullname": "Zirui Song", "isHf": false, "isMod": false, "isPro": false, "name": "Ziruibest", "type": "user" }
true
null
2502.10708
[ { "_id": "67b58e32e972a2806a9a0451", "hidden": false, "name": "Zirui Song", "status": "claimed_verified", "statusLastChangedAt": "2025-02-19T09:00:38.943Z", "user": { "_id": "65407ba7a38390065750233f", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/65407ba7a38390065750233f/1_IPMZbk-S9u2t18PQgMp.jpeg", "fullname": "Zirui Song", "isPro": false, "type": "user", "user": "Ziruibest" } }, { "_id": "67b58e32e972a2806a9a0452", "hidden": false, "name": "Bin Yan", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b58e32e972a2806a9a0453", "hidden": false, "name": "Yuhan Liu", "status": "claimed_verified", "statusLastChangedAt": "2025-02-21T09:59:53.429Z", "user": { "_id": "627a124ffe55fa0f8ce0eaf7", "avatarUrl": "/avatars/5dd30cf87e60f257ecfa7d2f871d3f33.svg", "fullname": "Serendipity", "isPro": false, "type": "user", "user": "Yuhan" } }, { "_id": "67b58e32e972a2806a9a0454", "hidden": false, "name": "Miao Fang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b58e32e972a2806a9a0455", "hidden": false, "name": "Mingzhe Li", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b58e32e972a2806a9a0456", "hidden": false, "name": "Rui Yan", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b58e32e972a2806a9a0457", "hidden": false, "name": "Xiuying Chen", "status": null, "statusLastChangedAt": null, "user": null } ]
2025-02-15T07:43:43
Injecting Domain-Specific Knowledge into Large Language Models: A Comprehensive Survey
Large Language Models (LLMs) have demonstrated remarkable success in various tasks such as natural language understanding, text summarization, and machine translation. However, their general-purpose nature often limits their effectiveness in domain-specific applications that require specialized knowledge, such as healthcare, chemistry, or legal analysis. To address this, researchers have explored diverse methods to enhance LLMs by integrating domain-specific knowledge. In this survey, we provide a comprehensive overview of these methods, which we categorize into four key approaches: dynamic knowledge injection, static knowledge embedding, modular adapters, and prompt optimization. Each approach offers unique mechanisms to equip LLMs with domain expertise, balancing trade-offs between flexibility, scalability, and efficiency. We discuss how these methods enable LLMs to tackle specialized tasks, compare their advantages and disadvantages, evaluate domain-specific LLMs against general LLMs, and highlight the challenges and opportunities in this emerging field. For those interested in delving deeper into this area, we also summarize the commonly used datasets and benchmarks. To keep researchers updated on the latest studies, we maintain an open-source repository at https://github.com/abilliyb/Knowledge_Injection_Survey_Papers, dedicated to documenting research in the field of specialized LLMs.
4
67b58e33e972a2806a9a04b8
null
null
2025-02-19T02:47:33.654000
Perovskite-LLM: Knowledge-Enhanced Large Language Models for Perovskite Solar Cell Research
https://cdn-thumbnails.h…s/2502.12669.png
2
{ "_id": "63024676056ec3a2a8714b24", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1661093436322-noauth.jpeg", "followerCount": 5, "fullname": "Xiang Liu", "isHf": false, "isMod": false, "isPro": false, "name": "Dominic789654", "type": "user" }
true
null
2502.12669
[ { "_id": "67b58c806e53744c2a373351", "hidden": false, "name": "Xiang Liu", "status": "admin_assigned", "statusLastChangedAt": "2025-02-19T09:34:03.429Z", "user": { "_id": "63024676056ec3a2a8714b24", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1661093436322-noauth.jpeg", "fullname": "Xiang Liu", "isPro": false, "type": "user", "user": "Dominic789654" } }, { "_id": "67b58c806e53744c2a373352", "hidden": false, "name": "Penglei Sun", "status": "admin_assigned", "statusLastChangedAt": "2025-02-19T09:34:15.889Z", "user": { "_id": "64eded5fdfe0a679d840bc98", "avatarUrl": "/avatars/4d4c67c13e547a4d296a301e8694e79e.svg", "fullname": "sunpenglei", "isPro": false, "type": "user", "user": "sunpenglei" } }, { "_id": "67b58c806e53744c2a373353", "hidden": false, "name": "Shuyan Chen", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b58c806e53744c2a373354", "hidden": false, "name": "Longhan Zhang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b58c806e53744c2a373355", "hidden": false, "name": "Peijie Dong", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b58c806e53744c2a373356", "hidden": false, "name": "Huajie You", "status": "admin_assigned", "statusLastChangedAt": "2025-02-19T18:16:35.805Z", "user": { "_id": "660937dff7373477d86501b8", "avatarUrl": "/avatars/1edbe8c92b41bf496b962f71b306ea7b.svg", "fullname": "Huajie You", "isPro": false, "type": "user", "user": "FrankYOU" } }, { "_id": "67b58c806e53744c2a373357", "hidden": false, "name": "Yongqi Zhang", "status": "admin_assigned", "statusLastChangedAt": "2025-02-19T09:35:12.059Z", "user": { "_id": "64473221dcbe1333b64b2db2", "avatarUrl": "/avatars/5e4495d3581ad3e6ea3c47650f20b993.svg", "fullname": "yongqi zhang", "isPro": false, "type": "user", "user": "yongqi2023" } }, { "_id": "67b58c806e53744c2a373358", "hidden": false, "name": "Chang Yan", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b58c806e53744c2a373359", "hidden": false, "name": "Xiaowen Chu", "status": "admin_assigned", "statusLastChangedAt": "2025-02-19T09:35:20.611Z", "user": { "_id": "6676935fcd0b89a0115174b0", "avatarUrl": "/avatars/4caca1b672d29e787814f9a30bf20bcc.svg", "fullname": "Xiaowen Chu", "isPro": false, "type": "user", "user": "wenxinsiju" } }, { "_id": "67b58c806e53744c2a37335a", "hidden": false, "name": "Tong-yi Zhang", "status": null, "statusLastChangedAt": null, "user": null } ]
2025-02-18T09:19:24
Perovskite-LLM: Knowledge-Enhanced Large Language Models for Perovskite Solar Cell Research
The rapid advancement of perovskite solar cells (PSCs) has led to an exponential growth in research publications, creating an urgent need for efficient knowledge management and reasoning systems in this domain. We present a comprehensive knowledge-enhanced system for PSCs that integrates three key components. First, we develop Perovskite-KG, a domain-specific knowledge graph constructed from 1,517 research papers, containing 23,789 entities and 22,272 relationships. Second, we create two complementary datasets: Perovskite-Chat, comprising 55,101 high-quality question-answer pairs generated through a novel multi-agent framework, and Perovskite-Reasoning, containing 2,217 carefully curated materials science problems. Third, we introduce two specialized large language models: Perovskite-Chat-LLM for domain-specific knowledge assistance and Perovskite-Reasoning-LLM for scientific reasoning tasks. Experimental results demonstrate that our system significantly outperforms existing models in both domain-specific knowledge retrieval and scientific reasoning tasks, providing researchers with effective tools for literature review, experimental design, and complex problem-solving in PSC research.
2
67b58c826e53744c2a3733c2
null
null
2025-02-19T02:27:36.940000
OctoTools: An Agentic Framework with Extensible Tools for Complex Reasoning
https://cdn-thumbnails.h…s/2502.11271.png
3
{ "_id": "60f5f68fa7fd83d025749234", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/60f5f68fa7fd83d025749234/gCeJAZfzaANAcEvI6v5-P.jpeg", "followerCount": 8, "fullname": "Pan Lu", "isHf": false, "isMod": false, "isPro": false, "name": "lupantech", "type": "user" }
true
null
2502.11271
[ { "_id": "67b4322c217ec18a40587bec", "hidden": false, "name": "Pan Lu", "status": "claimed_verified", "statusLastChangedAt": "2025-02-19T09:04:43.677Z", "user": { "_id": "60f5f68fa7fd83d025749234", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/60f5f68fa7fd83d025749234/gCeJAZfzaANAcEvI6v5-P.jpeg", "fullname": "Pan Lu", "isPro": false, "type": "user", "user": "lupantech" } }, { "_id": "67b4322c217ec18a40587bed", "hidden": false, "name": "Bowen Chen", "status": "claimed_verified", "statusLastChangedAt": "2025-02-20T09:37:37.912Z", "user": { "_id": "64b988b965be45c7766b105a", "avatarUrl": "/avatars/ef1944c79cfe851d079dcba0603526ab.svg", "fullname": "Bowen Chen", "isPro": false, "type": "user", "user": "bowen118" } }, { "_id": "67b4322c217ec18a40587bee", "hidden": false, "name": "Sheng Liu", "status": "claimed_verified", "statusLastChangedAt": "2025-02-20T09:37:35.169Z", "user": { "_id": "653585666141b3927a083b4f", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/653585666141b3927a083b4f/EpMnW_gKz9FV3OvogvUzi.jpeg", "fullname": "Sheng Liu", "isPro": false, "type": "user", "user": "shengliu66" } }, { "_id": "67b4322c217ec18a40587bef", "hidden": false, "name": "Rahul Thapa", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b4322c217ec18a40587bf0", "hidden": false, "name": "Joseph Boen", "status": "admin_assigned", "statusLastChangedAt": "2025-02-19T14:39:08.417Z", "user": { "_id": "66846f30f53c83742c117277", "avatarUrl": "/avatars/31a2434297a6c194d57f2a0234f1ca2c.svg", "fullname": "Joseph Boen", "isPro": false, "type": "user", "user": "tboen1" } }, { "_id": "67b4322c217ec18a40587bf1", "hidden": false, "name": "James Zou", "status": null, "statusLastChangedAt": null, "user": null } ]
2025-02-16T21:18:47
OctoTools: An Agentic Framework with Extensible Tools for Complex Reasoning
Solving complex reasoning tasks may involve visual understanding, domain knowledge retrieval, numerical calculation, and multi-step reasoning. Existing methods augment large language models (LLMs) with external tools but are restricted to specialized domains, limited tool types, or require additional training data. In this paper, we introduce OctoTools, a training-free, user-friendly, and easily extensible open-source agentic framework designed to tackle complex reasoning across diverse domains. OctoTools introduces standardized tool cards to encapsulate tool functionality, a planner for both high-level and low-level planning, and an executor to carry out tool usage. We validate OctoTools' generality across 16 diverse tasks (including MathVista, MMLU-Pro, MedQA, and GAIA-Text), achieving substantial average accuracy gains of 9.3% over GPT-4o. Furthermore, OctoTools outperforms AutoGen, GPT-Functions and LangChain by up to 10.6% when given the same set of tools. Through comprehensive analysis and ablations, OctoTools demonstrates advantages in task planning, effective tool usage, and multi-step problem solving.
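Editor's note: the tool-card / planner / executor split described in this abstract lends itself to a compact illustration. The sketch below uses toy `ToolCard`, `plan`, and `execute` names of my own invention; it is not the OctoTools API, only the general pattern of routing a query to a tool via its card description.

```python
# A highly simplified sketch of the tool-card / planner / executor split the
# abstract describes. Tool cards, the planner policy, and the tools themselves
# are toy placeholders, not the OctoTools codebase.
from dataclasses import dataclass
from typing import Callable

@dataclass
class ToolCard:
    name: str
    description: str          # used by the planner to pick a tool
    run: Callable[[str], str]

TOOLS = [
    ToolCard("calculator", "evaluate arithmetic expressions",
             run=lambda q: str(eval(q, {"__builtins__": {}}))),
    ToolCard("echo", "fallback tool that returns the query unchanged",
             run=lambda q: q),
]

def plan(query: str) -> ToolCard:
    """Toy planner: route arithmetic-looking queries to the calculator."""
    if any(op in query for op in "+-*/"):
        return TOOLS[0]
    return TOOLS[1]

def execute(query: str) -> str:
    tool = plan(query)
    return tool.run(query)

print(execute("3 * (4 + 5)"))  # -> "27"
```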
16
67b4322d217ec18a40587c27
null
null
2025-02-19T01:24:26.365000
Pre-training Auto-regressive Robotic Models with 4D Representations
https://cdn-thumbnails.h…s/2502.13142.png
2
{ "_id": "667c5764186b27ef806636d3", "avatarUrl": "/avatars/5c08f0109bc0e350624112c0aff544f6.svg", "followerCount": null, "fullname": "Roei Herzig", "isHf": false, "isMod": false, "isPro": false, "name": "roeiherz", "type": "user" }
true
null
2502.13142
[ { "_id": "67b5790132be608036ee94e5", "hidden": false, "name": "Dantong Niu", "status": "admin_assigned", "statusLastChangedAt": "2025-02-19T09:12:28.457Z", "user": { "_id": "65c3fdf79d062be813813e45", "avatarUrl": "/avatars/52528a61abe5bbbef4a4a431944973cd.svg", "fullname": "Dantong Niu", "isPro": false, "type": "user", "user": "NdtSoCool" } }, { "_id": "67b5790132be608036ee94e6", "hidden": false, "name": "Yuvan Sharma", "status": "admin_assigned", "statusLastChangedAt": "2025-02-19T09:12:35.531Z", "user": { "_id": "65406e82deee4716f1c29271", "avatarUrl": "/avatars/25331a773f8125f9ad1c3d6ac3375586.svg", "fullname": "Yuvan Sharma", "isPro": false, "type": "user", "user": "yuvansharma" } }, { "_id": "67b5790132be608036ee94e7", "hidden": false, "name": "Haoru Xue", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b5790132be608036ee94e8", "hidden": false, "name": "Giscard Biamby", "status": "admin_assigned", "statusLastChangedAt": "2025-02-19T09:12:49.219Z", "user": { "_id": "650bd36a7c99ca283e58e973", "avatarUrl": "/avatars/606d24b2dac190ebcbb4b2a2e4671380.svg", "fullname": "Giscard Biamby", "isPro": false, "type": "user", "user": "gbiamby" } }, { "_id": "67b5790132be608036ee94e9", "hidden": false, "name": "Junyi Zhang", "status": "claimed_verified", "statusLastChangedAt": "2025-02-20T09:37:02.956Z", "user": { "_id": "62f0ecd2700bdc19558360de", "avatarUrl": "/avatars/5325b4b763f30c41f30e3aec0d2b59fa.svg", "fullname": "Junyi Zhang", "isPro": false, "type": "user", "user": "Junyi42" } }, { "_id": "67b5790132be608036ee94ea", "hidden": false, "name": "Ziteng Ji", "status": "admin_assigned", "statusLastChangedAt": "2025-02-19T09:13:13.907Z", "user": { "_id": "66a09aec369dd38cf2113070", "avatarUrl": "/avatars/cc13bdd3dc1271d33b083b61e12f1a05.svg", "fullname": "Ziteng Ji", "isPro": false, "type": "user", "user": "zitengj0618" } }, { "_id": "67b5790132be608036ee94eb", "hidden": false, "name": "Trevor Darrell", "status": "admin_assigned", "statusLastChangedAt": "2025-02-19T09:13:20.379Z", "user": { "_id": "64cbdf02f103036e23d1c7f3", "avatarUrl": "/avatars/496069463900dea20929b57381182d39.svg", "fullname": "Trevor Darrell", "isPro": false, "type": "user", "user": "trevordarrell" } }, { "_id": "67b5790132be608036ee94ec", "hidden": false, "name": "Roei Herzig", "status": "admin_assigned", "statusLastChangedAt": "2025-02-19T09:13:26.134Z", "user": { "_id": "667c5764186b27ef806636d3", "avatarUrl": "/avatars/5c08f0109bc0e350624112c0aff544f6.svg", "fullname": "Roei Herzig", "isPro": false, "type": "user", "user": "roeiherz" } } ]
2025-02-18T18:59:01
Pre-training Auto-regressive Robotic Models with 4D Representations
Foundation models pre-trained on massive unlabeled datasets have revolutionized natural language and computer vision, exhibiting remarkable generalization capabilities, thus highlighting the importance of pre-training. Yet, efforts in robotics have struggled to achieve similar success, limited by either the need for costly robotic annotations or the lack of representations that effectively model the physical world. In this paper, we introduce ARM4R, an Auto-regressive Robotic Model that leverages low-level 4D Representations learned from human video data to yield a better pre-trained robotic model. Specifically, we focus on utilizing 3D point tracking representations from videos derived by lifting 2D representations into 3D space via monocular depth estimation across time. These 4D representations maintain a shared geometric structure between the points and robot state representations up to a linear transformation, enabling efficient transfer learning from human video data to low-level robotic control. Our experiments show that ARM4R can transfer efficiently from human video data to robotics and consistently improves performance on tasks across various robot environments and configurations.
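Editor's note: the 4D-representation step mentioned here amounts to back-projecting tracked 2D points through per-frame monocular depth. A minimal sketch follows, assuming known pinhole intrinsics `K` and placeholder arrays; it illustrates the standard lifting operation, not the ARM4R code.

```python
# Sketch of the geometric step above: lift tracked 2D points into 3D using
# per-frame monocular depth and (assumed known) camera intrinsics.
import numpy as np

def lift_tracks_to_3d(tracks_2d: np.ndarray, depths: np.ndarray, K: np.ndarray) -> np.ndarray:
    """tracks_2d: [T, N, 2] pixel coords, depths: [T, N] metric depth, K: [3, 3] intrinsics."""
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    x = (tracks_2d[..., 0] - cx) / fx * depths
    y = (tracks_2d[..., 1] - cy) / fy * depths
    return np.stack([x, y, depths], axis=-1)       # [T, N, 3] point tracks over time

K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
tracks = np.random.rand(8, 16, 2) * [640, 480]     # 8 frames, 16 tracked points
depths = np.random.rand(8, 16) * 2 + 0.5
print(lift_tracks_to_3d(tracks, depths, K).shape)  # (8, 16, 3)
```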
4
67b5790832be608036ee9638
null
null
2025-02-19T01:21:54.836000
PAFT: Prompt-Agnostic Fine-Tuning
https://cdn-thumbnails.h…s/2502.12859.png
8
{ "_id": "65ed3051492a7f35db21fea2", "avatarUrl": "/avatars/4fc0ccc21aa88e4e8ff74a6f850570b8.svg", "followerCount": null, "fullname": "Chenxing Wei", "isHf": false, "isMod": false, "isPro": false, "name": "kittttttt", "type": "user" }
true
null
2502.12859
[ { "_id": "67b576aa489d68b981e086ad", "hidden": false, "name": "Chenxing Wei", "status": "admin_assigned", "statusLastChangedAt": "2025-02-19T10:23:00.016Z", "user": { "_id": "65ed3051492a7f35db21fea2", "avatarUrl": "/avatars/4fc0ccc21aa88e4e8ff74a6f850570b8.svg", "fullname": "Chenxing Wei", "isPro": false, "type": "user", "user": "kittttttt" } }, { "_id": "67b576aa489d68b981e086ae", "hidden": false, "name": "Yao Shu", "status": "claimed_verified", "statusLastChangedAt": "2025-02-19T14:37:34.696Z", "user": { "_id": "66123816d7dfcea8ae55a751", "avatarUrl": "/avatars/3f24468b63e4babd7d9a0c926ca01b23.svg", "fullname": "Shu Yao", "isPro": false, "type": "user", "user": "ZCODE0" } }, { "_id": "67b576aa489d68b981e086af", "hidden": false, "name": "Mingwen Ou", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b576aa489d68b981e086b0", "hidden": false, "name": "Ying Tiffany He", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b576aa489d68b981e086b1", "hidden": false, "name": "Fei Richard Yu", "status": null, "statusLastChangedAt": null, "user": null } ]
2025-02-18T13:46:47
PAFT: Prompt-Agnostic Fine-Tuning
While Large Language Models (LLMs) adapt well to downstream tasks after fine-tuning, this adaptability often compromises prompt robustness, as even minor prompt variations can significantly degrade performance. To address this, we propose Prompt-Agnostic Fine-Tuning (PAFT), a simple yet effective approach that dynamically adjusts prompts during fine-tuning. This encourages the model to learn underlying task principles rather than overfitting to specific prompt formulations. PAFT operates in two stages: First, a diverse set of meaningful, synthetic candidate prompts is constructed. Second, during fine-tuning, prompts are randomly sampled from this set to create dynamic training inputs. Extensive experiments across diverse datasets and LLMs demonstrate that models trained with PAFT exhibit strong robustness and generalization across a wide range of prompts, including unseen ones. This enhanced robustness improves both model performance and inference speed while maintaining training efficiency. Ablation studies further confirm the effectiveness of PAFT.
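Editor's note: the two-stage recipe above maps naturally to a small data-construction helper, built once as a pool of candidate prompts and then sampled per training example. The sketch below uses hypothetical template strings and record fields (`question`, `answer`) and only illustrates the sampling idea, not the released PAFT pipeline.

```python
# Minimal sketch of prompt-agnostic fine-tuning as described in the abstract:
# a pool of candidate prompt templates is sampled at random for every training
# example so the model does not overfit to any single prompt wording.
import random

CANDIDATE_PROMPTS = [
    "Question: {question}\nAnswer:",
    "Please solve the following problem.\n{question}\nSolution:",
    "You are a helpful assistant. {question}",
]

def build_training_example(record: dict) -> dict:
    """Wrap one raw record in a randomly sampled prompt template."""
    template = random.choice(CANDIDATE_PROMPTS)      # stage 2: dynamic sampling
    prompt = template.format(question=record["question"])
    return {"input": prompt, "target": record["answer"]}

# Each epoch re-samples templates, so the same record appears under different
# prompt formulations across updates.
dataset = [{"question": "2 + 2 = ?", "answer": "4"}]
print([build_training_example(r) for r in dataset])
```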
15
67b576aa489d68b981e08708
null
null
2025-02-19T00:22:36.628000
Soundwave: Less is More for Speech-Text Alignment in LLMs
https://cdn-thumbnails.h…s/2502.12900.png
2
{ "_id": "66975b9f8031bf92b428e138", "avatarUrl": "/avatars/3254281a7bac1c8ddde1d6bc7e518b2f.svg", "followerCount": null, "fullname": "Yuhao Zhang", "isHf": false, "isMod": false, "isPro": false, "name": "Yoohao", "type": "user" }
true
null
2502.12900
[ { "_id": "67b54851b986e35c41e063da", "hidden": false, "name": "Yuhao Zhang", "status": "extracted_pending", "statusLastChangedAt": "2025-02-19T02:56:18.848Z", "user": { "_id": "66975b9f8031bf92b428e138", "avatarUrl": "/avatars/3254281a7bac1c8ddde1d6bc7e518b2f.svg", "fullname": "Yuhao Zhang", "isPro": false, "type": "user", "user": "Yoohao" } }, { "_id": "67b54851b986e35c41e063db", "hidden": false, "name": "Zhiheng Liu", "status": "claimed_verified", "statusLastChangedAt": "2025-02-19T09:01:05.678Z", "user": { "_id": "66597f2cf769c3c443b7cf41", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/no-auth/WkZBh7hwlD9wVqCEMQtGX.png", "fullname": "Chihang Lau", "isPro": true, "type": "user", "user": "puccho" } }, { "_id": "67b54851b986e35c41e063dc", "hidden": false, "name": "Fan Bu", "status": "admin_assigned", "statusLastChangedAt": "2025-02-19T09:42:08.544Z", "user": { "_id": "668e7f46c243a12604035758", "avatarUrl": "/avatars/35bd20032fafb7d7603266cf9a72d1e0.svg", "fullname": "Fan Bu", "isPro": false, "type": "user", "user": "FanBuCUHK" } }, { "_id": "67b54851b986e35c41e063dd", "hidden": false, "name": "Ruiyu Zhang", "status": "admin_assigned", "statusLastChangedAt": "2025-02-19T09:42:14.866Z", "user": { "_id": "67b587c8882e49771f610b51", "avatarUrl": "/avatars/aecfb38b44141b8284416fc261692909.svg", "fullname": "Ruiyu Zhang", "isPro": false, "type": "user", "user": "PhoenixAxis" } }, { "_id": "67b54851b986e35c41e063de", "hidden": false, "name": "Benyou Wang", "status": "admin_assigned", "statusLastChangedAt": "2025-02-19T09:42:23.845Z", "user": { "_id": "637c6703ca8542a0ba900ccb", "avatarUrl": "/avatars/288ed63a1efa566c3f01e850c6ba5dd5.svg", "fullname": "Wang", "isPro": false, "type": "user", "user": "Benyou" } }, { "_id": "67b54851b986e35c41e063df", "hidden": false, "name": "Haizhou Li", "status": null, "statusLastChangedAt": null, "user": null } ]
2025-02-18T14:36:39
Soundwave: Less is More for Speech-Text Alignment in LLMs
Existing end-to-end speech large language models (LLMs) usually rely on large-scale annotated data for training, while data-efficient training has not been discussed in depth. We focus on two fundamental problems between speech and text: the representation space gap and sequence length inconsistency. We propose Soundwave, which utilizes an efficient training strategy and a novel architecture to address these issues. Results show that Soundwave outperforms the advanced Qwen2-Audio in speech translation and AIR-Bench speech tasks, using only one-fiftieth of the training data. Further analysis shows that Soundwave still retains its intelligence during conversation. The project is available at https://github.com/FreedomIntelligence/Soundwave.
76
67b54852b986e35c41e06426
null
null
2025-02-18T23:51:36.910000
Magma: A Foundation Model for Multimodal AI Agents
https://cdn-thumbnails.h…s/2502.13130.png
6
{ "_id": "60f1abe7544c2adfd699860c", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg", "followerCount": 6280, "fullname": "AK", "isHf": true, "isMod": false, "isPro": false, "name": "akhaliq", "type": "user" }
true
null
2502.13130
[ { "_id": "67b5625fb27eb6046b2ceec5", "hidden": false, "name": "Jianwei Yang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b5625fb27eb6046b2ceec6", "hidden": false, "name": "Reuben Tan", "status": "admin_assigned", "statusLastChangedAt": "2025-02-19T10:19:01.753Z", "user": { "_id": "674774e8eb8fb5ea40877838", "avatarUrl": "/avatars/7ee8599cb1f7bb4402bc8512faf6ca12.svg", "fullname": "Reuben Tan", "isPro": false, "type": "user", "user": "tanreuben" } }, { "_id": "67b5625fb27eb6046b2ceec7", "hidden": false, "name": "Qianhui Wu", "status": "admin_assigned", "statusLastChangedAt": "2025-02-19T10:18:38.510Z", "user": { "_id": "63ef330b1e695b35aa484e11", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63ef330b1e695b35aa484e11/bXwpGy0dl8JXeJwJ--ilr.jpeg", "fullname": "Qianhui WU", "isPro": false, "type": "user", "user": "qianhuiwu" } }, { "_id": "67b5625fb27eb6046b2ceec8", "hidden": false, "name": "Ruijie Zheng", "status": "admin_assigned", "statusLastChangedAt": "2025-02-19T10:18:32.761Z", "user": { "_id": "653b24ef8f8d60f204872f0a", "avatarUrl": "/avatars/45a55219e8a78be53fd32e96ba460282.svg", "fullname": "Ruijie Zheng", "isPro": false, "type": "user", "user": "rzheng12" } }, { "_id": "67b5625fb27eb6046b2ceec9", "hidden": false, "name": "Baolin Peng", "status": "admin_assigned", "statusLastChangedAt": "2025-02-19T10:18:25.610Z", "user": { "_id": "61942296d5c2ba6daa290357", "avatarUrl": "/avatars/594021cc183c4922d48b46f43772a062.svg", "fullname": "Baolin Peng", "isPro": false, "type": "user", "user": "Baolin" } }, { "_id": "67b5625fb27eb6046b2ceeca", "hidden": false, "name": "Yongyuan Liang", "status": "admin_assigned", "statusLastChangedAt": "2025-02-19T10:18:00.072Z", "user": { "_id": "6646d5819bb34d2b6b7455d3", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/JFH3ZTPvlaVSg4RJJBb6L.jpeg", "fullname": "Yongyuan Liang", "isPro": false, "type": "user", "user": "cheryyunl" } }, { "_id": "67b5625fb27eb6046b2ceecb", "hidden": false, "name": "Yu Gu", "status": "claimed_verified", "statusLastChangedAt": "2025-02-20T09:37:05.283Z", "user": { "_id": "645a02972abf6165a3ba5df8", "avatarUrl": "/avatars/6166e4f213b9020ba99f853cb04f8db0.svg", "fullname": "Yu Gu", "isPro": false, "type": "user", "user": "aidenygu" } }, { "_id": "67b5625fb27eb6046b2ceecc", "hidden": false, "name": "Mu Cai", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b5625fb27eb6046b2ceecd", "hidden": false, "name": "Seonghyeon Ye", "status": "admin_assigned", "statusLastChangedAt": "2025-02-19T10:17:47.576Z", "user": { "_id": "62551f7767f0b85962624047", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1664552038624-62551f7767f0b85962624047.png", "fullname": "Seonghyeon Ye", "isPro": false, "type": "user", "user": "seonghyeonye" } }, { "_id": "67b5625fb27eb6046b2ceece", "hidden": false, "name": "Joel Jang", "status": "admin_assigned", "statusLastChangedAt": "2025-02-19T10:17:40.305Z", "user": { "_id": "613e1a9267835521a6816b04", "avatarUrl": "/avatars/49edaa425bbce04dff92bbfb12a6b41c.svg", "fullname": "Joel Jang", "isPro": true, "type": "user", "user": "wkddydpf" } }, { "_id": "67b5625fb27eb6046b2ceecf", "hidden": false, "name": "Yuquan Deng", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b5625fb27eb6046b2ceed0", "hidden": false, "name": "Lars Liden", "status": "admin_assigned", "statusLastChangedAt": "2025-02-19T10:17:09.378Z", "user": { "_id": "65bc13c7719492167d777718", "avatarUrl": 
"/avatars/d1a3dc52c130b84a47c8a4ddd2e74be8.svg", "fullname": "Lars Liden", "isPro": false, "type": "user", "user": "larsliden" } }, { "_id": "67b5625fb27eb6046b2ceed1", "hidden": false, "name": "Jianfeng Gao", "status": "admin_assigned", "statusLastChangedAt": "2025-02-19T10:17:03.043Z", "user": { "_id": "641904caf9d6f1d772ec7af7", "avatarUrl": "/avatars/4a63eac71eb30f70b1a0e9d4708f26c1.svg", "fullname": "Jianfeng Gao", "isPro": false, "type": "user", "user": "wyngjf" } } ]
2025-02-18T18:55:21
Magma: A Foundation Model for Multimodal AI Agents
We present Magma, a foundation model that serves multimodal AI agentic tasks in both the digital and physical worlds. Magma is a significant extension of vision-language (VL) models in that it not only retains the VL understanding ability (verbal intelligence) of the latter, but is also equipped with the ability to plan and act in the visual-spatial world (spatial-temporal intelligence) and complete agentic tasks ranging from UI navigation to robot manipulation. To endow the agentic capabilities, Magma is pretrained on large amounts of heterogeneous datasets spanning from images, videos to robotics data, where the actionable visual objects (e.g., clickable buttons in GUI) in images are labeled by Set-of-Mark (SoM) for action grounding, and the object movements (e.g., the trace of human hands or robotic arms) in videos are labeled by Trace-of-Mark (ToM) for action planning. Extensive experiments show that SoM and ToM reach great synergy and facilitate the acquisition of spatial-temporal intelligence for our Magma model, which is fundamental to a wide range of tasks as shown in Fig.1. In particular, Magma creates new state-of-the-art results on UI navigation and robotic manipulation tasks, outperforming previous models that are specifically tailored to these tasks. On image and video-related multimodal tasks, Magma also compares favorably to popular large multimodal models that are trained on much larger datasets. We make our model and code public for reproducibility at https://microsoft.github.io/Magma.
54
67b56265b27eb6046b2cf08f
null
null
2025-02-18T23:37:46.756000
Revisiting the Test-Time Scaling of o1-like Models: Do they Truly Possess Test-Time Scaling Capabilities?
https://cdn-thumbnails.h…s/2502.12215.png
2
{ "_id": "60f1abe7544c2adfd699860c", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg", "followerCount": 6280, "fullname": "AK", "isHf": true, "isMod": false, "isPro": false, "name": "akhaliq", "type": "user" }
false
null
2502.12215
[ { "_id": "67b56007fa141a55e51d9d78", "hidden": false, "name": "Zhiyuan Zeng", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b56007fa141a55e51d9d79", "hidden": false, "name": "Qinyuan Cheng", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b56007fa141a55e51d9d7a", "hidden": false, "name": "Zhangyue Yin", "status": "admin_assigned", "statusLastChangedAt": "2025-02-19T11:50:17.993Z", "user": { "_id": "628c5da32f09ccf530204dbe", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1653366416287-628c5da32f09ccf530204dbe.jpeg", "fullname": "Zhangyue Yin", "isPro": false, "type": "user", "user": "yinzhangyue" } }, { "_id": "67b56007fa141a55e51d9d7b", "hidden": false, "name": "Yunhua Zhou", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b56007fa141a55e51d9d7c", "hidden": false, "name": "Xipeng Qiu", "status": "admin_assigned", "statusLastChangedAt": "2025-02-19T11:50:29.919Z", "user": { "_id": "61457b8deff2c9fdb4de4988", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1632381702899-61457b8deff2c9fdb4de4988.jpeg", "fullname": "Xipeng Qiu", "isPro": false, "type": "user", "user": "xpqiu" } } ]
2025-02-17T07:21:11
Revisiting the Test-Time Scaling of o1-like Models: Do they Truly Possess Test-Time Scaling Capabilities?
The advent of test-time scaling in large language models (LLMs), exemplified by OpenAI's o1 series, has advanced reasoning capabilities by scaling computational resource allocation during inference. While successors like QwQ, Deepseek-R1 (R1) and LIMO replicate these advancements, whether these models truly possess test-time scaling capabilities remains underexplored. This study found that longer CoTs of these o1-like models do not consistently enhance accuracy; in fact, correct solutions are often shorter than incorrect ones for the same questions. Further investigation shows this phenomenon is closely related to models' self-revision capabilities - longer CoTs contain more self-revisions, which often lead to performance degradation. We then compare sequential and parallel scaling strategies on QwQ, R1 and LIMO, finding that parallel scaling achieves better coverage and scalability. Based on these insights, we propose Shortest Majority Vote, a method that combines parallel scaling strategies with CoT length characteristics, significantly improving models' test-time scalability compared to conventional majority voting approaches.
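Editor's note: the abstract does not spell out Shortest Majority Vote, so the sketch below is one plausible reading under stated assumptions: aggregate parallel rollouts, but weight each candidate answer by the inverse length of its supporting chain rather than counting votes uniformly.

```python
# Hypothetical length-aware voting over parallel rollouts; not the authors'
# exact formulation, only an illustration of combining parallel scaling with
# CoT-length information.
from collections import defaultdict

def shortest_majority_vote(samples: list[tuple[str, int]]) -> str:
    """samples: (final_answer, chain_length_in_tokens) for each parallel rollout."""
    scores: dict[str, float] = defaultdict(float)
    for answer, length in samples:
        scores[answer] += 1.0 / max(length, 1)   # shorter chains weigh more
    return max(scores, key=scores.get)

rollouts = [("42", 120), ("42", 150), ("41", 900), ("41", 950), ("41", 1000)]
print(shortest_majority_vote(rollouts))  # "42": fewer raw votes, but much shorter chains
```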
16
67b56007fa141a55e51d9da7
null
null
2025-02-18T23:23:34.214000
SafeRoute: Adaptive Model Selection for Efficient and Accurate Safety Guardrails in Large Language Models
https://cdn-thumbnails.h…s/2502.12464.png
2
{ "_id": "64ad5f59b7e4b2c1ce47eb43", "avatarUrl": "/avatars/1f13ebe21a90d8c99920aa2c8cd9ac45.svg", "followerCount": 4, "fullname": "Seanie Lee", "isHf": false, "isMod": false, "isPro": false, "name": "Seanie-lee", "type": "user" }
true
[ "https://cdn-uploads.huggingface.co/production/uploads/64ad5f59b7e4b2c1ce47eb43/ZEq_vSLjsXuPX3O-TWIpE.png" ]
2502.12464
[ { "_id": "67b55b2cc92c4aa82c13562d", "hidden": false, "name": "Seanie Lee", "status": "claimed_verified", "statusLastChangedAt": "2025-02-19T09:00:53.341Z", "user": { "_id": "64ad5f59b7e4b2c1ce47eb43", "avatarUrl": "/avatars/1f13ebe21a90d8c99920aa2c8cd9ac45.svg", "fullname": "Seanie Lee", "isPro": false, "type": "user", "user": "Seanie-lee" } }, { "_id": "67b55b2cc92c4aa82c13562e", "hidden": false, "name": "Dong Bok Lee", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b55b2cc92c4aa82c13562f", "hidden": false, "name": "Dominik Wagner", "status": "claimed_verified", "statusLastChangedAt": "2025-02-19T11:12:27.148Z", "user": { "_id": "6311ba6f05cc08a1408d910a", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1662997515866-6311ba6f05cc08a1408d910a.png", "fullname": "Dominik Wagner", "isPro": false, "type": "user", "user": "dwgnr" } }, { "_id": "67b55b2cc92c4aa82c135630", "hidden": false, "name": "Minki Kang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b55b2cc92c4aa82c135631", "hidden": false, "name": "Haebin Seong", "status": "admin_assigned", "statusLastChangedAt": "2025-02-19T09:51:37.783Z", "user": { "_id": "63a9379e2e05ca32e352d93b", "avatarUrl": "/avatars/6cda37befc873a92ed6d5dcba507954a.svg", "fullname": "Haebin Seong", "isPro": false, "type": "user", "user": "hbseong" } }, { "_id": "67b55b2cc92c4aa82c135632", "hidden": false, "name": "Tobias Bocklet", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b55b2cc92c4aa82c135633", "hidden": false, "name": "Juho Lee", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b55b2cc92c4aa82c135634", "hidden": false, "name": "Sung Ju Hwang", "status": null, "statusLastChangedAt": null, "user": null } ]
2025-02-18T02:51:17
SafeRoute: Adaptive Model Selection for Efficient and Accurate Safety Guardrails in Large Language Models
Deploying large language models (LLMs) in real-world applications requires robust safety guard models to detect and block harmful user prompts. While large safety guard models achieve strong performance, their computational cost is substantial. To mitigate this, smaller distilled models are used, but they often underperform on "hard" examples where the larger model provides accurate predictions. We observe that many inputs can be reliably handled by the smaller model, while only a small fraction require the larger model's capacity. Motivated by this, we propose SafeRoute, a binary router that distinguishes hard examples from easy ones. Our method selectively applies the larger safety guard model to the data that the router considers hard, improving efficiency while maintaining accuracy compared to solely using the larger safety guard model. Experimental results on multiple benchmark datasets demonstrate that our adaptive model selection significantly enhances the trade-off between computational cost and safety performance, outperforming relevant baselines.
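Editor's note: the routing idea above can be shown in a few lines: a cheap binary router scores how "hard" an input is and dispatches it to either the small or the large guard model. The callables below are toy stand-ins, not the authors' released components.

```python
# Illustrative sketch of adaptive guard-model routing under the assumption that
# the router outputs a probability that an input is "hard".
from typing import Callable

def make_safe_route(
    router: Callable[[str], float],        # returns P(example is hard)
    small_guard: Callable[[str], bool],    # cheap distilled safety classifier
    large_guard: Callable[[str], bool],    # expensive but accurate classifier
    threshold: float = 0.5,
) -> Callable[[str], bool]:
    def classify(prompt: str) -> bool:
        if router(prompt) >= threshold:    # hard example -> large model
            return large_guard(prompt)
        return small_guard(prompt)         # easy example -> small model
    return classify

# Toy usage with dummy components.
guard = make_safe_route(
    router=lambda p: 0.9 if "jailbreak" in p.lower() else 0.1,
    small_guard=lambda p: False,
    large_guard=lambda p: True,
)
print(guard("How do I jailbreak this model?"))  # routed to the large guard
```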
27
67b55b2dc92c4aa82c13568b
null
null
2025-02-18T22:59:16.530000
MUDDFormer: Breaking Residual Bottlenecks in Transformers via Multiway Dynamic Dense Connections
https://cdn-thumbnails.h…s/2502.12170.png
2
{ "_id": "62d77440bad37ef354028365", "avatarUrl": "/avatars/df0dea879e06fa814867e9aad03d1e68.svg", "followerCount": null, "fullname": "Da Xiao", "isHf": false, "isMod": false, "isPro": false, "name": "xiaoda99", "type": "user" }
false
null
2502.12170
[ { "_id": "67b5434f2b2ec6908fffe75e", "hidden": false, "name": "Da Xiao", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b5434f2b2ec6908fffe75f", "hidden": false, "name": "Qingye Meng", "status": "admin_assigned", "statusLastChangedAt": "2025-02-19T10:23:45.040Z", "user": { "_id": "634f9c94cdc89e42cc7b194a", "avatarUrl": "/avatars/42376474afe9b68dc44184c71e210529.svg", "fullname": "Qingye Meng", "isPro": false, "type": "user", "user": "Hilbertmeng" } }, { "_id": "67b5434f2b2ec6908fffe760", "hidden": false, "name": "Shengping Li", "status": "admin_assigned", "statusLastChangedAt": "2025-02-19T10:23:38.331Z", "user": { "_id": "6706a37cca9b1a88fc3951ea", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/no-auth/Y4GvpKiVgqm_2Kod52_B8.png", "fullname": "lishengping", "isPro": false, "type": "user", "user": "lishengping" } }, { "_id": "67b5434f2b2ec6908fffe761", "hidden": false, "name": "Xingyuan Yuan", "status": null, "statusLastChangedAt": null, "user": null } ]
2025-02-13T10:26:27
MUDDFormer: Breaking Residual Bottlenecks in Transformers via Multiway Dynamic Dense Connections
We propose MUltiway Dynamic Dense (MUDD) connections, a simple yet effective method to address the limitations of residual connections and enhance cross-layer information flow in Transformers. Unlike existing dense connection approaches with static and shared connection weights, MUDD generates connection weights dynamically depending on hidden states at each sequence position and for each decoupled input stream (the query, key, value or residual) of a Transformer block. MUDD connections can be seamlessly integrated into any Transformer architecture to create MUDDFormer. Extensive experiments show that MUDDFormer significantly outperforms Transformers across various model architectures and scales in language modeling, achieving the performance of Transformers trained with 1.8X-2.4X compute. Notably, MUDDPythia-2.8B matches Pythia-6.9B in pretraining ppl and downstream tasks and even rivals Pythia-12B in five-shot settings, while adding only 0.23% parameters and 0.4% computation. Code in JAX and PyTorch and pre-trained models are available at https://github.com/Caiyun-AI/MUDDFormer .
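Editor's note: as a rough illustration of dynamic dense connections, the sketch below generates per-position mixing weights from the current hidden state and combines all earlier layers' outputs with them. It collapses MUDD's decoupled query/key/value/residual streams into a single stream for brevity and is not the released MUDDFormer code.

```python
# Simplified dynamic dense mixing: connection weights are produced per sequence
# position from the latest hidden state, then used to aggregate earlier layers.
import torch

class DynamicDenseMix(torch.nn.Module):
    def __init__(self, dim: int, n_prev_layers: int):
        super().__init__()
        self.weight_gen = torch.nn.Linear(dim, n_prev_layers)

    def forward(self, prev_outputs: torch.Tensor) -> torch.Tensor:
        """prev_outputs: [L, batch, seq, dim] hidden states of earlier layers."""
        h_last = prev_outputs[-1]                             # current hidden state
        w = torch.softmax(self.weight_gen(h_last), dim=-1)    # [batch, seq, L], per position
        return torch.einsum("bsl,lbsd->bsd", w, prev_outputs)

mix = DynamicDenseMix(dim=32, n_prev_layers=4)
outs = torch.randn(4, 2, 10, 32)                              # 4 layers, batch 2, seq 10
print(mix(outs).shape)                                        # torch.Size([2, 10, 32])
```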
12
67b543502b2ec6908fffe788
null
null
2025-02-18T22:46:16.586000
Multilingual Encoder Knows more than You Realize: Shared Weights Pretraining for Extremely Low-Resource Languages
https://cdn-thumbnails.h…s/2502.10852.png
2
{ "_id": "6430bdd8cd31d174a9f900fb", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/Y9SPnRfpKSbYc7MhNdP-H.jpeg", "followerCount": 2, "fullname": "Ziyin Zhang", "isHf": false, "isMod": false, "isPro": false, "name": "Geralt-Targaryen", "type": "user" }
true
null
2502.10852
[ { "_id": "67b55321f703732d151de666", "hidden": false, "name": "Zeli Su", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b55321f703732d151de667", "hidden": false, "name": "Ziyin Zhang", "status": "admin_assigned", "statusLastChangedAt": "2025-02-19T15:10:05.400Z", "user": { "_id": "6430bdd8cd31d174a9f900fb", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/Y9SPnRfpKSbYc7MhNdP-H.jpeg", "fullname": "Ziyin Zhang", "isPro": false, "type": "user", "user": "Geralt-Targaryen" } }, { "_id": "67b55321f703732d151de668", "hidden": false, "name": "Guixian Xu", "status": "admin_assigned", "statusLastChangedAt": "2025-02-19T15:09:59.600Z", "user": { "_id": "6747329d228a652d5707e094", "avatarUrl": "/avatars/f33e118950e329ce5612877413806e49.svg", "fullname": "GUIXIAN XU", "isPro": false, "type": "user", "user": "Stuart-Xu" } }, { "_id": "67b55321f703732d151de669", "hidden": false, "name": "Jianing Liu", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b55321f703732d151de66a", "hidden": false, "name": "XU Han", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b55321f703732d151de66b", "hidden": false, "name": "Ting Zhang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b55321f703732d151de66c", "hidden": false, "name": "Yushuang Dong", "status": null, "statusLastChangedAt": null, "user": null } ]
2025-02-15T16:53:10
Multilingual Encoder Knows more than You Realize: Shared Weights Pretraining for Extremely Low-Resource Languages
While multilingual language models like XLM-R have advanced multilingualism in NLP, they still perform poorly in extremely low-resource languages. This situation is exacerbated by the fact that modern LLMs such as LLaMA and Qwen support far fewer languages than XLM-R, making text generation models non-existent for many languages in the world. To tackle this challenge, we propose a novel framework for adapting multilingual encoders to text generation in extremely low-resource languages. By reusing the weights between the encoder and the decoder, our framework allows the model to leverage the learned semantic space of the encoder, enabling efficient learning and effective generalization in low-resource languages. Applying this framework to four Chinese minority languages, we present XLM-SWCM, and demonstrate its superior performance on various downstream tasks even when compared with much larger models.
2
67b55322f703732d151de69d
null
null
2025-02-18T22:43:02.567000
Continuous Diffusion Model for Language Modeling
https://cdn-thumbnails.h…s/2502.11564.png
4
{ "_id": "65e5bd4568234ef5d6decadc", "avatarUrl": "/avatars/c41095a946c0176b949c0b3566136c05.svg", "followerCount": 4, "fullname": "Jaehyeong Jo", "isHf": false, "isMod": false, "isPro": false, "name": "harryjo97", "type": "user" }
true
null
2502.11564
[ { "_id": "67b40f93aba9e111862052ab", "hidden": false, "name": "Jaehyeong Jo", "status": "claimed_verified", "statusLastChangedAt": "2025-02-18T09:31:27.544Z", "user": { "_id": "65e5bd4568234ef5d6decadc", "avatarUrl": "/avatars/c41095a946c0176b949c0b3566136c05.svg", "fullname": "Jaehyeong Jo", "isPro": false, "type": "user", "user": "harryjo97" } }, { "_id": "67b40f93aba9e111862052ac", "hidden": false, "name": "Sung Ju Hwang", "status": null, "statusLastChangedAt": null, "user": null } ]
2025-02-17T08:54:29
Continuous Diffusion Model for Language Modeling
Diffusion models have emerged as a promising alternative to autoregressive models in modeling discrete categorical data. Yet diffusion models that work directly on the discrete data space do not fully exploit the power of iterative refinement, as the signals are lost during the transition between discrete states. Existing continuous diffusion models for discrete data have limited performance compared to discrete approaches, and the unclear link between them restricts the development of diffusion models for discrete data. In this work, we propose a continuous diffusion model for language modeling that incorporates the geometry of the underlying categorical distribution. We establish a connection between the discrete diffusion and continuous flow on the statistical manifold, and building on the analogy, we introduce a simple design for the diffusion process that generalizes previous discrete diffusion models. We further propose a simulation-free training framework based on radial symmetry and a simple technique to address the high dimensionality of the manifold. Comprehensive experiments on language modeling benchmarks and other modalities show that our method outperforms existing discrete diffusion models and approaches the performance of autoregressive models. Code is available at https://github.com/harryjo97/RDLM.
50
67b40f94aba9e111862052d5
null
null
2025-02-18T22:35:23.066000
HealthGPT: A Medical Large Vision-Language Model for Unifying Comprehension and Generation via Heterogeneous Knowledge Adaptation
https://cdn-thumbnails.h…s/2502.09838.png
2
{ "_id": "65fc18edfb66882aba4d548e", "avatarUrl": "/avatars/f70d47fe4aba98b5a5cd64f7e002dfd2.svg", "followerCount": null, "fullname": "wenqiao", "isHf": false, "isMod": false, "isPro": false, "name": "wannature", "type": "user" }
true
null
2502.09838
[ { "_id": "67b55078a64445f58c771d84", "hidden": true, "name": "Tianwei Lin", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b55078a64445f58c771d85", "hidden": false, "name": "Wenqiao Zhang", "status": "admin_assigned", "statusLastChangedAt": "2025-02-19T14:40:41.324Z", "user": { "_id": "65fc18edfb66882aba4d548e", "avatarUrl": "/avatars/f70d47fe4aba98b5a5cd64f7e002dfd2.svg", "fullname": "wenqiao", "isPro": false, "type": "user", "user": "wannature" } }, { "_id": "67b55078a64445f58c771d86", "hidden": false, "name": "Sijing Li", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b55078a64445f58c771d87", "hidden": false, "name": "Yuqian Yuan", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b55078a64445f58c771d88", "hidden": false, "name": "Binhe Yu", "status": "admin_assigned", "statusLastChangedAt": "2025-02-19T14:41:15.907Z", "user": { "_id": "648ad056dc6a5b4b88306ae2", "avatarUrl": "/avatars/66a47a11626c05b706de33a3184182e9.svg", "fullname": "yu", "isPro": false, "type": "user", "user": "binheyu1991" } }, { "_id": "67b55078a64445f58c771d89", "hidden": false, "name": "Haoyuan Li", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b55078a64445f58c771d8a", "hidden": false, "name": "Wanggui He", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b55078a64445f58c771d8b", "hidden": false, "name": "Hao Jiang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b55078a64445f58c771d8c", "hidden": false, "name": "Mengze Li", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b55078a64445f58c771d8d", "hidden": false, "name": "Xiaohui Song", "status": "admin_assigned", "statusLastChangedAt": "2025-02-19T14:42:09.080Z", "user": { "_id": "635156f1f74cdcca6f7acf70", "avatarUrl": "/avatars/133e88b30255425f6da7777737f91e81.svg", "fullname": "Xiaohui Song", "isPro": false, "type": "user", "user": "fpcsong" } }, { "_id": "67b55078a64445f58c771d8e", "hidden": false, "name": "Siliang Tang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b55078a64445f58c771d8f", "hidden": false, "name": "Jun Xiao", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b55078a64445f58c771d90", "hidden": false, "name": "Hui Lin", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b55078a64445f58c771d91", "hidden": false, "name": "Yueting Zhuang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b55078a64445f58c771d92", "hidden": false, "name": "Beng Chin Ooi", "status": null, "statusLastChangedAt": null, "user": null } ]
2025-02-14T00:42:36
HealthGPT: A Medical Large Vision-Language Model for Unifying Comprehension and Generation via Heterogeneous Knowledge Adaptation
We present HealthGPT, a powerful Medical Large Vision-Language Model (Med-LVLM) that integrates medical visual comprehension and generation capabilities within a unified autoregressive paradigm. Our bootstrapping philosophy is to progressively adapt heterogeneous comprehension and generation knowledge to pre-trained large language models (LLMs). This is achieved through a novel heterogeneous low-rank adaptation (H-LoRA) technique, which is complemented by a tailored hierarchical visual perception approach and a three-stage learning strategy. To effectively train HealthGPT, we devise a comprehensive medical domain-specific comprehension and generation dataset called VL-Health. Experimental results demonstrate the exceptional performance and scalability of HealthGPT on unified medical visual tasks. Our project can be accessed at https://github.com/DCDmllm/HealthGPT.
10
67b5507aa64445f58c771df9
null
null
2025-02-18T22:08:27.750000
Multimodal Mamba: Decoder-only Multimodal State Space Model via Quadratic to Linear Distillation
https://cdn-thumbnails.h…s/2502.13145.png
2
{ "_id": "6577073fc2bf55b1f6bafb49", "avatarUrl": "/avatars/58803398b1a918b7570db17893e65122.svg", "followerCount": 4, "fullname": "liao", "isHf": false, "isMod": false, "isPro": false, "name": "LegendBC", "type": "user" }
true
null
2502.13145
[ { "_id": "67b54b04bd51b4e46e39d287", "hidden": false, "name": "Bencheng Liao", "status": "claimed_verified", "statusLastChangedAt": "2025-02-19T09:01:00.934Z", "user": { "_id": "6577073fc2bf55b1f6bafb49", "avatarUrl": "/avatars/58803398b1a918b7570db17893e65122.svg", "fullname": "liao", "isPro": false, "type": "user", "user": "LegendBC" } }, { "_id": "67b54b04bd51b4e46e39d288", "hidden": false, "name": "Hongyuan Tao", "status": "admin_assigned", "statusLastChangedAt": "2025-02-19T10:13:43.349Z", "user": { "_id": "66a105bb456284adf458d656", "avatarUrl": "/avatars/b543a324f7e159d6e84bc68915e93d24.svg", "fullname": "Tao Hongyuan", "isPro": false, "type": "user", "user": "HongyuanTao" } }, { "_id": "67b54b04bd51b4e46e39d289", "hidden": false, "name": "Qian Zhang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b54b04bd51b4e46e39d28a", "hidden": false, "name": "Tianheng Cheng", "status": "claimed_verified", "statusLastChangedAt": "2025-02-19T09:00:58.351Z", "user": { "_id": "646b3db131968a60a01e4cf5", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/646b3db131968a60a01e4cf5/DhfdqUYQaD1Qa8Svw996J.jpeg", "fullname": "Tianheng Cheng", "isPro": false, "type": "user", "user": "wondervictor" } }, { "_id": "67b54b04bd51b4e46e39d28b", "hidden": false, "name": "Yingyue Li", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b54b04bd51b4e46e39d28c", "hidden": false, "name": "Haoran Yin", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b54b04bd51b4e46e39d28d", "hidden": false, "name": "Wenyu Liu", "status": "admin_assigned", "statusLastChangedAt": "2025-02-19T10:12:36.866Z", "user": { "_id": "66c2e7fc934e2f07753542ac", "avatarUrl": "/avatars/f6fa3f94435cf1c1d06daa6c925d07d0.svg", "fullname": "LWY", "isPro": false, "type": "user", "user": "wenyuliu" } }, { "_id": "67b54b04bd51b4e46e39d28e", "hidden": false, "name": "Xinggang Wang", "status": null, "statusLastChangedAt": null, "user": null } ]
2025-02-18T18:59:57
Multimodal Mamba: Decoder-only Multimodal State Space Model via Quadratic to Linear Distillation
Recent Multimodal Large Language Models (MLLMs) have achieved remarkable performance but face deployment challenges due to their quadratic computational complexity, growing Key-Value cache requirements, and reliance on separate vision encoders. We propose mmMamba, a framework for developing linear-complexity native multimodal state space models through progressive distillation from existing MLLMs using moderate academic computational resources. Our approach enables the direct conversion of trained decoder-only MLLMs to linear-complexity architectures without requiring pre-trained RNN-based LLMs or vision encoders. We propose a seeding strategy to carve Mamba from the trained Transformer and a three-stage distillation recipe, which effectively transfers knowledge from the Transformer to Mamba while preserving multimodal capabilities. Our method also supports flexible hybrid architectures that combine Transformer and Mamba layers for customizable efficiency-performance trade-offs. Distilled from the Transformer-based decoder-only HoVLE, mmMamba-linear achieves competitive performance against existing linear and quadratic-complexity VLMs, while mmMamba-hybrid further improves performance significantly, approaching HoVLE's capabilities. At 103K tokens, mmMamba-linear demonstrates a 20.6× speedup and 75.8% GPU memory reduction compared to HoVLE, while mmMamba-hybrid achieves a 13.5× speedup and 60.2% memory savings. Code and models are released at https://github.com/hustvl/mmMamba
36
67b54b05bd51b4e46e39d2bb
null
null
2025-02-18T22:06:19.200000
FLAG-Trader: Fusion LLM-Agent with Gradient-based Reinforcement Learning for Financial Trading
https://cdn-thumbnails.h…s/2502.11433.png
2
{ "_id": "63b58ed5889aa6707f0bb0f4", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63b58ed5889aa6707f0bb0f4/znl74_aMswlV8VtHrfj3G.jpeg", "followerCount": 15, "fullname": "Jimin Huang", "isHf": false, "isMod": false, "isPro": true, "name": "jiminHuang", "type": "user" }
true
[ "https://cdn-uploads.huggingface.co/production/uploads/63b58ed5889aa6707f0bb0f4/2C9mhT-1Qz14hik7sxjf2.png" ]
2502.11433
[ { "_id": "67b54a644508bd0617598c21", "hidden": false, "name": "Guojun Xiong", "status": "admin_assigned", "statusLastChangedAt": "2025-02-19T10:14:19.641Z", "user": { "_id": "67b54cbcd9f66be7f6f3f7de", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/no-auth/RZwRfp6AcseTCDVXW_eUb.png", "fullname": "Guojun Xiong", "isPro": false, "type": "user", "user": "xionggj001" } }, { "_id": "67b54a644508bd0617598c22", "hidden": false, "name": "Zhiyang Deng", "status": "admin_assigned", "statusLastChangedAt": "2025-02-19T10:14:39.220Z", "user": { "_id": "668dedbdf278aa900ce400c9", "avatarUrl": "/avatars/b38e29c6b5092f5892bc2e9a7e625c88.svg", "fullname": "Zhiyang Deng", "isPro": false, "type": "user", "user": "zdeng10" } }, { "_id": "67b54a644508bd0617598c23", "hidden": false, "name": "Keyi Wang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b54a644508bd0617598c24", "hidden": false, "name": "Yupeng Cao", "status": "admin_assigned", "statusLastChangedAt": "2025-02-19T10:15:07.509Z", "user": { "_id": "62dd8f328456396d4f8aa894", "avatarUrl": "/avatars/af8f5dc7ff937e3e849ecdfd9ca4750b.svg", "fullname": "Yupeng Cao", "isPro": false, "type": "user", "user": "YupengCao" } }, { "_id": "67b54a644508bd0617598c25", "hidden": false, "name": "Haohang Li", "status": "admin_assigned", "statusLastChangedAt": "2025-02-19T10:15:15.689Z", "user": { "_id": "634cabd104491d9f7111eea3", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1665969099097-noauth.jpeg", "fullname": "Haohang Li", "isPro": true, "type": "user", "user": "Acatsama" } }, { "_id": "67b54a644508bd0617598c26", "hidden": false, "name": "Yangyang Yu", "status": "admin_assigned", "statusLastChangedAt": "2025-02-19T10:15:22.050Z", "user": { "_id": "64f757c6016d60f3199ef5e6", "avatarUrl": "/avatars/2659ba698081265d0480b08161718013.svg", "fullname": "Yangyang Yu", "isPro": false, "type": "user", "user": "ShirleyY" } }, { "_id": "67b54a644508bd0617598c27", "hidden": false, "name": "Xueqing Peng", "status": "admin_assigned", "statusLastChangedAt": "2025-02-19T10:15:29.692Z", "user": { "_id": "63a0c0803c8841cfe2cd1f15", "avatarUrl": "/avatars/bbe216db7a33612f23d23ce4ed4ba3f9.svg", "fullname": "Xueqing Peng", "isPro": false, "type": "user", "user": "Xueqing" } }, { "_id": "67b54a644508bd0617598c28", "hidden": false, "name": "Mingquan Lin", "status": "admin_assigned", "statusLastChangedAt": "2025-02-19T10:15:52.483Z", "user": { "_id": "6650e0a99ccb17d9679653c5", "avatarUrl": "/avatars/ed08188e64bdebf58329304742f9ac16.svg", "fullname": "Mingquan Lin", "isPro": false, "type": "user", "user": "mq0051" } }, { "_id": "67b54a644508bd0617598c29", "hidden": false, "name": "Kaleb E Smith", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b54a644508bd0617598c2a", "hidden": false, "name": "Xiao-Yang Liu", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b54a644508bd0617598c2b", "hidden": false, "name": "Jimin Huang", "status": "claimed_verified", "statusLastChangedAt": "2025-02-19T09:01:03.181Z", "user": { "_id": "63b58ed5889aa6707f0bb0f4", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63b58ed5889aa6707f0bb0f4/znl74_aMswlV8VtHrfj3G.jpeg", "fullname": "Jimin Huang", "isPro": true, "type": "user", "user": "jiminHuang" } }, { "_id": "67b54a644508bd0617598c2c", "hidden": false, "name": "Sophia Ananiadou", "status": "admin_assigned", "statusLastChangedAt": "2025-02-19T10:16:09.788Z", "user": { "_id": "66f6cb352c5d4ef3578a9c3f", "avatarUrl": 
"/avatars/0a70c94072bc5e1d018cf12da0904ff0.svg", "fullname": "Sophia Ananiadou", "isPro": false, "type": "user", "user": "Effoula" } }, { "_id": "67b54a644508bd0617598c2d", "hidden": false, "name": "Qianqian Xie", "status": "admin_assigned", "statusLastChangedAt": "2025-02-19T10:16:24.340Z", "user": { "_id": "6479f4317c18dca75e9a9324", "avatarUrl": "/avatars/9aa709230b057f57ee4415c04a622c63.svg", "fullname": "Xie", "isPro": false, "type": "user", "user": "QianqianXie1994" } } ]
2025-02-17T04:45:53
FLAG-Trader: Fusion LLM-Agent with Gradient-based Reinforcement Learning for Financial Trading
Large language models (LLMs) fine-tuned on multimodal financial data have demonstrated impressive reasoning capabilities in various financial tasks. However, they often struggle with multi-step, goal-oriented scenarios in interactive financial markets, such as trading, where complex agentic approaches are required to improve decision-making. To address this, we propose FLAG-Trader, a unified architecture integrating linguistic processing (via LLMs) with gradient-driven reinforcement learning (RL) policy optimization, in which a partially fine-tuned LLM acts as the policy network, leveraging pre-trained knowledge while adapting to the financial domain through parameter-efficient fine-tuning. Through policy gradient optimization driven by trading rewards, our framework not only enhances LLM performance in trading but also improves results on other financial-domain tasks. We present extensive empirical evidence to validate these enhancements.
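Editor's note: the LLM-as-policy coupling described above can be caricatured with a tiny stand-in policy. The sketch below runs a plain REINFORCE-style update on toy trading rewards, with a linear layer in place of the partially fine-tuned LLM; all names, shapes, and rewards are illustrative assumptions, not the FLAG-Trader implementation.

```python
# Schematic policy-gradient loop: a trainable policy scores discrete trading
# actions and is updated from episode rewards (reward-to-go REINFORCE).
import torch

ACTIONS = ["buy", "hold", "sell"]
policy = torch.nn.Linear(4, len(ACTIONS))           # placeholder policy head
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

def run_episode(n_steps: int = 16):
    log_probs, rewards = [], []
    for _ in range(n_steps):
        state = torch.randn(4)                      # toy market features
        dist = torch.distributions.Categorical(logits=policy(state))
        action = dist.sample()
        log_probs.append(dist.log_prob(action))
        rewards.append(torch.randn(()))             # toy trading reward
    return torch.stack(log_probs), torch.stack(rewards)

for _ in range(10):
    log_probs, rewards = run_episode()
    returns = rewards.flip(0).cumsum(0).flip(0)     # reward-to-go
    loss = -(log_probs * returns.detach()).mean()   # policy-gradient objective
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```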
31
67b54a654508bd0617598c7e
null
null
2025-02-18T21:59:45.466000
Rethinking Diverse Human Preference Learning through Principal Component Analysis
https://cdn-thumbnails.h…s/2502.13131.png
3
{ "_id": "64d45451c34a346181b130dd", "avatarUrl": "/avatars/9bb8205b889337df5d321539c9b5d69d.svg", "followerCount": 6, "fullname": "Rui Yang", "isHf": false, "isMod": false, "isPro": false, "name": "Ray2333", "type": "user" }
true
null
2502.13131
[ { "_id": "67b5461d29cc269e5a4eb823", "hidden": false, "name": "Feng Luo", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b5461d29cc269e5a4eb824", "hidden": true, "name": "Rui Yang", "status": "claimed_verified", "statusLastChangedAt": "2025-02-19T09:01:23.095Z", "user": { "_id": "64d45451c34a346181b130dd", "avatarUrl": "/avatars/9bb8205b889337df5d321539c9b5d69d.svg", "fullname": "Rui Yang", "isPro": false, "type": "user", "user": "Ray2333" } }, { "_id": "67b5461d29cc269e5a4eb825", "hidden": false, "name": "Hao Sun", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b5461d29cc269e5a4eb826", "hidden": false, "name": "Chunyuan Deng", "status": "admin_assigned", "statusLastChangedAt": "2025-02-19T09:56:33.053Z", "user": { "_id": "634b9914dcf125e4da02498b", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/634b9914dcf125e4da02498b/crRgFroWq5U6XWtvlTXSZ.jpeg", "fullname": "Chunyuan Deng", "isPro": false, "type": "user", "user": "CharlesDDDD" } }, { "_id": "67b5461d29cc269e5a4eb827", "hidden": false, "name": "Jiarui Yao", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b5461d29cc269e5a4eb828", "hidden": false, "name": "Jingyan Shen", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b5461d29cc269e5a4eb829", "hidden": false, "name": "Huan Zhang", "status": "admin_assigned", "statusLastChangedAt": "2025-02-19T09:52:47.329Z", "user": { "_id": "6719d581a6cad13741b8bc7f", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6719d581a6cad13741b8bc7f/w4EttqfXRgWZJc6HpYOS9.jpeg", "fullname": "Huan Zhang", "isPro": false, "type": "user", "user": "huanzhang12" } }, { "_id": "67b5461d29cc269e5a4eb82a", "hidden": false, "name": "Hanjie Chen", "status": null, "statusLastChangedAt": null, "user": null } ]
2025-02-18T18:55:26
Rethinking Diverse Human Preference Learning through Principal Component Analysis
Understanding human preferences is crucial for improving foundation models and building personalized AI systems. However, preferences are inherently diverse and complex, making it difficult for traditional reward models to capture their full range. While fine-grained preference data can help, collecting it is expensive and hard to scale. In this paper, we introduce Decomposed Reward Models (DRMs), a novel approach that extracts diverse human preferences from binary comparisons without requiring fine-grained annotations. Our key insight is to represent human preferences as vectors and analyze them using Principal Component Analysis (PCA). By constructing a dataset of embedding differences between preferred and rejected responses, DRMs identify orthogonal basis vectors that capture distinct aspects of preference. These decomposed rewards can be flexibly combined to align with different user needs, offering an interpretable and scalable alternative to traditional reward models. We demonstrate that DRMs effectively extract meaningful preference dimensions (e.g., helpfulness, safety, humor) and adapt to new users without additional training. Our results highlight DRMs as a powerful framework for personalized and interpretable LLM alignment.
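Editor's note: the core decomposition step above reduces to PCA over preference-pair embedding differences. A minimal NumPy sketch follows, with random placeholder embeddings; it shows the mechanics of extracting orthogonal reward directions, not the authors' exact pipeline.

```python
# Stack embedding differences between preferred and rejected responses, then
# run PCA (via SVD) so each principal direction acts as one decomposed reward.
import numpy as np

rng = np.random.default_rng(0)
n_pairs, dim = 1000, 64
emb_chosen = rng.normal(size=(n_pairs, dim))      # placeholder response embeddings
emb_rejected = rng.normal(size=(n_pairs, dim))

diffs = emb_chosen - emb_rejected                 # one vector per comparison
diffs -= diffs.mean(axis=0, keepdims=True)        # center before PCA

_, _, vt = np.linalg.svd(diffs, full_matrices=False)
k = 8
reward_basis = vt[:k]                             # k orthogonal preference directions

def decomposed_rewards(response_embedding: np.ndarray) -> np.ndarray:
    """Score a response along each decomposed preference direction."""
    return reward_basis @ response_embedding

print(decomposed_rewards(rng.normal(size=dim)).shape)  # (8,)
```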
35
67b5461f29cc269e5a4eb8bc
null
null
2025-02-18T21:57:00.289000
HeadInfer: Memory-Efficient LLM Inference by Head-wise Offloading
https://cdn-thumbnails.h…s/2502.12574.png
2
{ "_id": "64cb48f7667f4f808535107e", "avatarUrl": "/avatars/8f77f378ad665b246e1ea3aaba2153ae.svg", "followerCount": 1, "fullname": "chengluo", "isHf": false, "isMod": false, "isPro": false, "name": "wdlctc", "type": "user" }
true
null
2502.12574
[ { "_id": "67b547f555d0424a31b9c384", "hidden": false, "name": "Cheng Luo", "status": "admin_assigned", "statusLastChangedAt": "2025-02-19T09:40:25.130Z", "user": { "_id": "64cb48f7667f4f808535107e", "avatarUrl": "/avatars/8f77f378ad665b246e1ea3aaba2153ae.svg", "fullname": "chengluo", "isPro": false, "type": "user", "user": "wdlctc" } }, { "_id": "67b547f555d0424a31b9c385", "hidden": false, "name": "Zefan Cai", "status": "admin_assigned", "statusLastChangedAt": "2025-02-19T09:40:47.077Z", "user": { "_id": "64b15284372d4340772a3dca", "avatarUrl": "/avatars/417d5f1bc1bcb5e4d5de6169673c2cf7.svg", "fullname": "Zefan Cai", "isPro": false, "type": "user", "user": "ZefanCai" } }, { "_id": "67b547f555d0424a31b9c386", "hidden": false, "name": "Hanshi Sun", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b547f555d0424a31b9c387", "hidden": false, "name": "Jinqi Xiao", "status": "admin_assigned", "statusLastChangedAt": "2025-02-19T09:41:01.931Z", "user": { "_id": "64c15c5bea792b1950e302e4", "avatarUrl": "/avatars/51f84365cc08a1dcd5da70968389aed2.svg", "fullname": "Jinqi Xiao", "isPro": false, "type": "user", "user": "jinqixiao" } }, { "_id": "67b547f555d0424a31b9c388", "hidden": false, "name": "Bo Yuan", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b547f555d0424a31b9c389", "hidden": false, "name": "Wen Xiao", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b547f555d0424a31b9c38a", "hidden": false, "name": "Junjie Hu", "status": "admin_assigned", "statusLastChangedAt": "2025-02-19T09:41:18.304Z", "user": { "_id": "675f8271a63fff7b5bcbc478", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/no-auth/9tJn7NyzLMreCJVH4wRho.png", "fullname": "Junjie Hu", "isPro": false, "type": "user", "user": "junjiehu" } }, { "_id": "67b547f555d0424a31b9c38b", "hidden": false, "name": "Jiawei Zhao", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b547f555d0424a31b9c38c", "hidden": false, "name": "Beidi Chen", "status": "admin_assigned", "statusLastChangedAt": "2025-02-19T09:39:20.563Z", "user": { "_id": "64b732f832403871593e082c", "avatarUrl": "/avatars/dd21932b0c167131ee7545a622c46c3c.svg", "fullname": "Beidi Chen", "isPro": false, "type": "user", "user": "beidic" } }, { "_id": "67b547f555d0424a31b9c38d", "hidden": false, "name": "Anima Anandkumar", "status": "admin_assigned", "statusLastChangedAt": "2025-02-19T09:39:15.091Z", "user": { "_id": "6532920b3e385cfc6002938d", "avatarUrl": "/avatars/cb9cc6d2733031582c83f56dc6cd1dd5.svg", "fullname": "Anima Anandkumar", "isPro": false, "type": "user", "user": "animakumar" } } ]
2025-02-18T06:26:05
HeadInfer: Memory-Efficient LLM Inference by Head-wise Offloading
Transformer-based large language models (LLMs) demonstrate impressive performance in long context generation. Extending the context length has disproportionately shifted the memory footprint of LLMs during inference to the key-value cache (KV cache). In this paper, we propose HEADINFER, which offloads the KV cache to CPU RAM while avoiding the need to fully store the KV cache for any transformer layer on the GPU. HEADINFER employs a fine-grained, head-wise offloading strategy, maintaining only selected attention heads' KV cache on the GPU while computing attention output dynamically. Through roofline analysis, we demonstrate that HEADINFER maintains computational efficiency while significantly reducing the memory footprint. We evaluate HEADINFER on the Llama-3-8B model with a 1-million-token sequence, reducing the GPU memory footprint of the KV cache from 128 GB to 1 GB and the total GPU memory usage from 207 GB to 17 GB, achieving a 92% reduction compared to BF16 baseline inference. Notably, HEADINFER enables 4-million-token inference with an 8B model on a single consumer GPU with 24 GB of memory (e.g., NVIDIA RTX 4090) without approximation methods.
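Editor's note: head-wise offloading boils down to choosing, per attention head, whether its KV cache lives on the GPU or in CPU RAM, and copying it to the compute device just in time. The sketch below is a conceptual cache container under that assumption, not the released HEADINFER kernels.

```python
# Conceptual head-wise KV-cache placement: a chosen subset of heads keeps its
# key/value cache on the GPU; the rest live in CPU RAM and are copied over
# only when that head's attention output is computed.
import torch

class HeadwiseKVCache:
    def __init__(self, n_heads: int, gpu_heads: set[int], head_dim: int = 64):
        self.gpu_heads = gpu_heads
        self.device = "cuda" if torch.cuda.is_available() else "cpu"
        # One (keys, values) buffer per head, each of shape [seq, head_dim].
        self.cache = {h: (torch.empty(0, head_dim), torch.empty(0, head_dim))
                      for h in range(n_heads)}

    def append(self, head: int, k: torch.Tensor, v: torch.Tensor) -> None:
        target = self.device if head in self.gpu_heads else "cpu"
        ks, vs = self.cache[head]
        self.cache[head] = (
            torch.cat([ks.to(target), k.to(target)]),
            torch.cat([vs.to(target), v.to(target)]),
        )

    def fetch(self, head: int) -> tuple[torch.Tensor, torch.Tensor]:
        ks, vs = self.cache[head]
        return ks.to(self.device), vs.to(self.device)  # just-in-time copy to GPU

cache = HeadwiseKVCache(n_heads=8, gpu_heads={0, 1})
cache.append(head=5, k=torch.randn(1, 64), v=torch.randn(1, 64))  # stored on CPU
k, v = cache.fetch(5)  # copied to the compute device for attention
```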
11
67b547f755d0424a31b9c3e5
null
null
2025-02-18T21:56:39.407000
Phantom: Subject-consistent video generation via cross-modal alignment
https://cdn-thumbnails.h…s/2502.11079.png
2
{ "_id": "63a950ac3453852ef5394178", "avatarUrl": "/avatars/48a5e537b10e2247a17e63502e3201a6.svg", "followerCount": 1, "fullname": "Lijie Liu", "isHf": false, "isMod": false, "isPro": false, "name": "liulj13", "type": "user" }
true
[ "https://cdn-uploads.huggingface.co/production/uploads/63a950ac3453852ef5394178/HuVZ5d9xTlI4R1onRv_F5.mp4" ]
2502.11079
[ { "_id": "67b40141ad717fe02e188c1a", "hidden": false, "name": "Lijie Liu", "status": "claimed_verified", "statusLastChangedAt": "2025-02-18T09:31:42.570Z", "user": { "_id": "63a950ac3453852ef5394178", "avatarUrl": "/avatars/48a5e537b10e2247a17e63502e3201a6.svg", "fullname": "Lijie Liu", "isPro": false, "type": "user", "user": "liulj13" } }, { "_id": "67b40141ad717fe02e188c1b", "hidden": false, "name": "Tianxiang Ma", "status": "admin_assigned", "statusLastChangedAt": "2025-02-19T09:45:00.117Z", "user": { "_id": "657ab4705e1c941f4c2f7877", "avatarUrl": "/avatars/c450f81f83dd0436ae120ab15616c4f7.svg", "fullname": "Tianxiang Ma", "isPro": false, "type": "user", "user": "Grayson111" } }, { "_id": "67b40141ad717fe02e188c1c", "hidden": false, "name": "Bingchuan Li", "status": "admin_assigned", "statusLastChangedAt": "2025-02-19T09:47:57.441Z", "user": { "_id": "63b415037af2e415f2599c18", "avatarUrl": "/avatars/4afbe7d6d05a702f1beeed9c53e78153.svg", "fullname": "Bingchuan Li", "isPro": false, "type": "user", "user": "lbc402" } }, { "_id": "67b40141ad717fe02e188c1d", "hidden": false, "name": "Zhuowei Chen", "status": "admin_assigned", "statusLastChangedAt": "2025-02-19T09:47:50.995Z", "user": { "_id": "6304e2dabad6ce7fc0287d57", "avatarUrl": "/avatars/3fd4a9a62b0ef98db2573411463a9247.svg", "fullname": "Zhuowei_Chen", "isPro": false, "type": "user", "user": "ZhuoweiChen" } }, { "_id": "67b40141ad717fe02e188c1e", "hidden": false, "name": "Jiawei Liu", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b40141ad717fe02e188c1f", "hidden": false, "name": "Qian He", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b40141ad717fe02e188c20", "hidden": false, "name": "Xinglong Wu", "status": null, "statusLastChangedAt": null, "user": null } ]
2025-02-16T11:02:50
Phantom: Subject-consistent video generation via cross-modal alignment
Foundation models for video generation are continuously developing and being extended to various applications, while subject-consistent video generation is still in the exploratory stage. We refer to this task as Subject-to-Video, which extracts subject elements from reference images and generates subject-consistent video through textual instructions. We believe that the essence of subject-to-video lies in balancing the dual-modal prompts of text and image, thereby deeply and simultaneously aligning both text and visual content. To this end, we propose Phantom, a unified video generation framework for both single- and multi-subject references. Building on existing text-to-video and image-to-video architectures, we redesign the joint text-image injection model and drive it to learn cross-modal alignment via text-image-video triplet data. In particular, we emphasize subject consistency in human generation, covering existing ID-preserving video generation while offering enhanced advantages. The project homepage is at https://phantom-video.github.io/Phantom/.
52
67b40144ad717fe02e188cb2
null
null
2025-02-18T21:55:26.822000
Crowd Comparative Reasoning: Unlocking Comprehensive Evaluations for LLM-as-a-Judge
https://cdn-thumbnails.h…s/2502.12501.png
2
{ "_id": "62a42f22c683d02f5b63320c", "avatarUrl": "/avatars/bc611abe9c4ef8d378123cb8ac9fdbf2.svg", "followerCount": null, "fullname": "Qiyuan Zhang", "isHf": false, "isMod": false, "isPro": false, "name": "DonJoey", "type": "user" }
true
null
2502.12501
[ { "_id": "67b547ffc9071a3e97139532", "hidden": false, "name": "Qiyuan Zhang", "status": "claimed_verified", "statusLastChangedAt": "2025-02-19T09:01:10.215Z", "user": { "_id": "62a42f22c683d02f5b63320c", "avatarUrl": "/avatars/bc611abe9c4ef8d378123cb8ac9fdbf2.svg", "fullname": "Qiyuan Zhang", "isPro": false, "type": "user", "user": "DonJoey" } }, { "_id": "67b547ffc9071a3e97139533", "hidden": false, "name": "Yufei Wang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b547ffc9071a3e97139534", "hidden": false, "name": "Yuxin Jiang", "status": "claimed_verified", "statusLastChangedAt": "2025-02-19T09:01:08.101Z", "user": { "_id": "63c20105726f62e411fbe882", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63c20105726f62e411fbe882/2UsU9O2psbDjJzz-sAmGH.jpeg", "fullname": "Yuxin Jiang", "isPro": false, "type": "user", "user": "YuxinJiang" } }, { "_id": "67b547ffc9071a3e97139535", "hidden": false, "name": "Liangyou Li", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b547ffc9071a3e97139536", "hidden": false, "name": "Chuhan Wu", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b547ffc9071a3e97139537", "hidden": false, "name": "Yasheng Wang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b547ffc9071a3e97139538", "hidden": false, "name": "Xin Jiang", "status": "admin_assigned", "statusLastChangedAt": "2025-02-19T11:54:48.156Z", "user": { "_id": "647415007afa69c3c7a98f1f", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/647415007afa69c3c7a98f1f/pEl0PozmzNK8_PwUMiikd.jpeg", "fullname": "Xin Jiang", "isPro": false, "type": "user", "user": "horiz94" } }, { "_id": "67b547ffc9071a3e97139539", "hidden": false, "name": "Lifeng Shang", "status": "admin_assigned", "statusLastChangedAt": "2025-02-19T11:54:33.330Z", "user": { "_id": "655b1360c11dee7f7e7cf794", "avatarUrl": "/avatars/efb4b91e9bb8ab531331c8e4296f754c.svg", "fullname": "lifengshang", "isPro": false, "type": "user", "user": "lifengshang" } }, { "_id": "67b547ffc9071a3e9713953a", "hidden": false, "name": "Ruiming Tang", "status": "admin_assigned", "statusLastChangedAt": "2025-02-19T11:54:25.814Z", "user": { "_id": "6728c3b8d5ceae39aa1d2fdd", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/no-auth/VV4-DjhxzEVceNLmsokn2.png", "fullname": "tang ruiming", "isPro": false, "type": "user", "user": "zhangsan5421" } }, { "_id": "67b547ffc9071a3e9713953b", "hidden": false, "name": "Fuyuan Lyu", "status": "admin_assigned", "statusLastChangedAt": "2025-02-19T11:54:14.440Z", "user": { "_id": "65d2bb5c6130ef7be012d235", "avatarUrl": "/avatars/1c1e3bbb2c683a5c9d1f792a2c13fc4a.svg", "fullname": "Fuyuan Lyu", "isPro": false, "type": "user", "user": "silentspring2" } }, { "_id": "67b547ffc9071a3e9713953c", "hidden": false, "name": "Chen Ma", "status": null, "statusLastChangedAt": null, "user": null } ]
2025-02-18T03:31:06
Crowd Comparative Reasoning: Unlocking Comprehensive Evaluations for LLM-as-a-Judge
LLM-as-a-Judge, which generates chain-of-thought (CoT) judgments, has become a widely adopted auto-evaluation method. However, its reliability is compromised by the CoT reasoning's inability to capture comprehensive and deeper details, often leading to incomplete outcomes. Existing methods mainly rely on majority voting or criteria expansion, which are insufficient to address the limitations of CoT. We propose Crowd-based Comparative Evaluation, which introduces additional crowd responses to compare with the candidate responses, thereby exposing deeper and more comprehensive details within the candidate responses. This process effectively guides LLM-as-a-Judge to provide a more detailed CoT judgment. Extensive experiments demonstrate that our approach enhances evaluation reliability, achieving an average accuracy gain of 6.7% across five benchmarks. Moreover, our method produces higher-quality CoTs that facilitate judge distillation and exhibit superior performance in rejection sampling for supervised fine-tuning (SFT), referred to as crowd rejection sampling, thereby enabling more efficient SFT. Our analysis confirms that the CoTs generated by our method are more comprehensive and of higher quality, and that evaluation accuracy improves as inference scales.
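A minimal sketch of the crowd-comparison idea: the judge prompt includes extra "crowd" responses alongside the two candidates so the CoT judgment can contrast each candidate against them. The prompt wording and the `call_llm` placeholder are assumptions for illustration, not the paper's templates or API.

```python
# Hedged illustration of building a crowd-augmented LLM-as-a-Judge prompt.
def build_crowd_judge_prompt(question, answer_a, answer_b, crowd_answers):
    crowd_block = "\n".join(
        f"[Crowd response {i + 1}]\n{ans}" for i, ans in enumerate(crowd_answers)
    )
    return (
        "You are an impartial judge. First compare each candidate answer with the "
        "crowd responses below, noting details a candidate covers or misses, then "
        "give a step-by-step judgment and a final verdict of 'A' or 'B'.\n\n"
        f"[Question]\n{question}\n\n"
        f"[Candidate A]\n{answer_a}\n\n"
        f"[Candidate B]\n{answer_b}\n\n"
        f"{crowd_block}\n"
    )

if __name__ == "__main__":
    prompt = build_crowd_judge_prompt(
        question="Explain why the sky is blue.",
        answer_a="Rayleigh scattering favors shorter wavelengths.",
        answer_b="Because of the ocean's reflection.",
        crowd_answers=[
            "Sunlight scatters off air molecules; blue light scatters most.",
            "Shorter wavelengths scatter more strongly (Rayleigh scattering).",
        ],
    )
    print(prompt)  # feed this to any LLM judge, e.g. verdict = call_llm(prompt)
```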
6
67b54800c9071a3e9713956c
null
null
2025-02-18T21:52:22.326000
RealSyn: An Effective and Scalable Multimodal Interleaved Document Transformation Paradigm
https://cdn-thumbnails.h…s/2502.12513.png
2
{ "_id": "63e202f352b7578dba448ab5", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63e202f352b7578dba448ab5/8itVBLcv14m7OVsoF8h1o.jpeg", "followerCount": 4, "fullname": "Yang", "isHf": false, "isMod": false, "isPro": false, "name": "Kaichengalex", "type": "user" }
true
null
2502.12513
[ { "_id": "67b545fd88527668fa8bcc14", "hidden": false, "name": "Tiancheng Gu", "status": "admin_assigned", "statusLastChangedAt": "2025-02-19T10:19:15.243Z", "user": { "_id": "6508712e7ee07e274b0f4c94", "avatarUrl": "/avatars/23fe5593b0bce36c2167c3142e57e0e9.svg", "fullname": "Tiancheng Gu", "isPro": false, "type": "user", "user": "GaryGuuu" } }, { "_id": "67b545fd88527668fa8bcc15", "hidden": false, "name": "Kaicheng Yang", "status": "claimed_verified", "statusLastChangedAt": "2025-02-19T14:37:43.180Z", "user": { "_id": "63e202f352b7578dba448ab5", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63e202f352b7578dba448ab5/8itVBLcv14m7OVsoF8h1o.jpeg", "fullname": "Yang", "isPro": false, "type": "user", "user": "Kaichengalex" } }, { "_id": "67b545fd88527668fa8bcc16", "hidden": false, "name": "Chaoyi Zhang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b545fd88527668fa8bcc17", "hidden": false, "name": "Yin Xie", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b545fd88527668fa8bcc18", "hidden": true, "name": "Xiang An", "status": "claimed_verified", "statusLastChangedAt": "2025-02-26T08:39:00.405Z", "user": { "_id": "6478679d7b370854241b2ad8", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6478679d7b370854241b2ad8/dBczWYYdfEt9tQcnVGhQk.jpeg", "fullname": "xiangan", "isPro": false, "type": "user", "user": "xiangan" } }, { "_id": "67b545fd88527668fa8bcc19", "hidden": false, "name": "Ziyong Feng", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b545fd88527668fa8bcc1a", "hidden": false, "name": "Dongnan Liu", "status": "admin_assigned", "statusLastChangedAt": "2025-02-19T10:20:00.745Z", "user": { "_id": "65e7f19e14856e8859fd8adc", "avatarUrl": "/avatars/203e919c361db94028a1b3c6ea52f0c2.svg", "fullname": "Dongnan Liu", "isPro": false, "type": "user", "user": "Nina0607" } }, { "_id": "67b545fd88527668fa8bcc1b", "hidden": false, "name": "Weidong Cai", "status": "admin_assigned", "statusLastChangedAt": "2025-02-19T10:20:18.377Z", "user": { "_id": "6760a8f5e4b55ba1b2b0a7b4", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/no-auth/NddUMmwmZFbS25v1q8KyS.png", "fullname": "Weidong Cai", "isPro": false, "type": "user", "user": "SeriousBro" } }, { "_id": "67b545fd88527668fa8bcc1c", "hidden": false, "name": "Jiankang Deng", "status": "admin_assigned", "statusLastChangedAt": "2025-02-19T10:20:25.455Z", "user": { "_id": "62cc7a38376917c0223dd24b", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1657566065867-noauth.png", "fullname": "JiankangDeng", "isPro": false, "type": "user", "user": "JiankangDeng" } } ]
2025-02-18T03:58:38
RealSyn: An Effective and Scalable Multimodal Interleaved Document Transformation Paradigm
After pre-training on extensive image-text pairs, Contrastive Language-Image Pre-training (CLIP) demonstrates promising performance on a wide variety of benchmarks. However, a substantial volume of non-paired data, such as multimodal interleaved documents, remains underutilized for vision-language representation learning. To fully leverage these unpaired documents, we initially establish a Real-World Data Extraction pipeline to extract high-quality images and texts. Then we design a hierarchical retrieval method to efficiently associate each image with multiple semantically relevant realistic texts. To further enhance fine-grained visual information, we propose an image semantic augmented generation module for synthetic text production. Furthermore, we employ a semantic balance sampling strategy to improve dataset diversity, enabling better learning of long-tail concepts. Based on these innovations, we construct RealSyn, a dataset combining realistic and synthetic texts, available in three scales: 15M, 30M, and 100M. Extensive experiments demonstrate that RealSyn effectively advances vision-language representation learning and exhibits strong scalability. Models pre-trained on RealSyn achieve state-of-the-art performance on multiple downstream tasks. To facilitate future research, the RealSyn dataset and pre-trained model weights are released at https://github.com/deepglint/RealSyn.
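A minimal sketch of the retrieval step described above: pairing each extracted image with its top-k most similar candidate texts by embedding cosine similarity. Real embeddings (e.g., from a CLIP-style encoder) and the paper's hierarchical index are replaced here by random vectors and a brute-force search, purely for illustration.

```python
# Hedged illustration of image-to-text association via embedding similarity.
import numpy as np

def topk_texts_per_image(image_embs, text_embs, k=3):
    image_embs = image_embs / np.linalg.norm(image_embs, axis=1, keepdims=True)
    text_embs = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    sims = image_embs @ text_embs.T                  # (n_images, n_texts) cosine similarities
    return np.argsort(-sims, axis=1)[:, :k]          # indices of the best-matching texts

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    image_embs = rng.normal(size=(4, 512))           # 4 extracted images (placeholder features)
    text_embs = rng.normal(size=(100, 512))          # 100 candidate realistic texts
    print(topk_texts_per_image(image_embs, text_embs))   # shape (4, 3)
```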
15
67b545fe88527668fa8bcc65
null
null
2025-02-18T21:51:33.957000
SoFar: Language-Grounded Orientation Bridges Spatial Reasoning and Object Manipulation
https://cdn-thumbnails.h…s/2502.13143.png
2
{ "_id": "63c3e8abc7d7f4c63a515a02", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63c3e8abc7d7f4c63a515a02/npMHnVP2hHLbvoUGe7C4O.jpeg", "followerCount": 2, "fullname": "Zekun Qi", "isHf": false, "isMod": false, "isPro": false, "name": "qizekun", "type": "user" }
true
null
2502.13143
[ { "_id": "67b546c0d8a1eac02c605f6a", "hidden": false, "name": "Zekun Qi", "status": "claimed_verified", "statusLastChangedAt": "2025-02-19T09:01:21.001Z", "user": { "_id": "63c3e8abc7d7f4c63a515a02", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63c3e8abc7d7f4c63a515a02/npMHnVP2hHLbvoUGe7C4O.jpeg", "fullname": "Zekun Qi", "isPro": false, "type": "user", "user": "qizekun" } }, { "_id": "67b546c0d8a1eac02c605f6b", "hidden": false, "name": "Wenyao Zhang", "status": "admin_assigned", "statusLastChangedAt": "2025-02-19T10:08:31.789Z", "user": { "_id": "65f9533b136fb8ddbd14e1fa", "avatarUrl": "/avatars/d88f75da0448093ccd1babba2a37d73f.svg", "fullname": "Zhang", "isPro": false, "type": "user", "user": "WenyaoZhang" } }, { "_id": "67b546c0d8a1eac02c605f6c", "hidden": false, "name": "Yufei Ding", "status": "admin_assigned", "statusLastChangedAt": "2025-02-19T10:08:57.294Z", "user": { "_id": "66bde456198f9d79f2be2d17", "avatarUrl": "/avatars/8c349aecb8a3a7cd7ef9d69e94eca8bd.svg", "fullname": "Yufei Ding", "isPro": false, "type": "user", "user": "YufeiD" } }, { "_id": "67b546c0d8a1eac02c605f6d", "hidden": false, "name": "Runpei Dong", "status": "claimed_verified", "statusLastChangedAt": "2025-02-19T09:01:18.622Z", "user": { "_id": "6201fc5d91d53938a6432fbf", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6201fc5d91d53938a6432fbf/VLs8ZYaZrop4KBpZn53fH.jpeg", "fullname": "Runpei Dong", "isPro": false, "type": "user", "user": "RunpeiDong" } }, { "_id": "67b546c0d8a1eac02c605f6e", "hidden": false, "name": "Xinqiang Yu", "status": "claimed_verified", "statusLastChangedAt": "2025-02-20T09:37:27.869Z", "user": { "_id": "675296fdea02ef7b84609893", "avatarUrl": "/avatars/27b3886ca048f108ac26a942f151410b.svg", "fullname": "Yu", "isPro": false, "type": "user", "user": "XinXinQiang" } }, { "_id": "67b546c0d8a1eac02c605f6f", "hidden": false, "name": "Jingwen Li", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b546c0d8a1eac02c605f70", "hidden": false, "name": "Lingyun Xu", "status": "admin_assigned", "statusLastChangedAt": "2025-02-19T10:09:41.570Z", "user": { "_id": "6745c4a80739b09408862ac9", "avatarUrl": "/avatars/8bc082fcbbf933b150a252c78d1bb3be.svg", "fullname": "lingyun xu", "isPro": false, "type": "user", "user": "codered010" } }, { "_id": "67b546c0d8a1eac02c605f71", "hidden": false, "name": "Baoyu Li", "status": "admin_assigned", "statusLastChangedAt": "2025-02-19T10:09:48.223Z", "user": { "_id": "67302fa362930cbc461511a8", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/no-auth/L9x68oxL8Ot88pyeU6w7Z.png", "fullname": "Baoyu Li", "isPro": false, "type": "user", "user": "boeyyyy" } }, { "_id": "67b546c0d8a1eac02c605f72", "hidden": false, "name": "Xialin He", "status": "admin_assigned", "statusLastChangedAt": "2025-02-19T10:10:04.207Z", "user": { "_id": "67b42039cca7aff798979e80", "avatarUrl": "/avatars/d410b3617395cd7e2a9c0c89ff12f23d.svg", "fullname": "Xialin He", "isPro": false, "type": "user", "user": "XialinHe" } }, { "_id": "67b546c0d8a1eac02c605f73", "hidden": false, "name": "Guofan Fan", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b546c0d8a1eac02c605f74", "hidden": false, "name": "Jiazhao Zhang", "status": "admin_assigned", "statusLastChangedAt": "2025-02-19T10:11:41.498Z", "user": { "_id": "65658233d35fc55406e8b00d", "avatarUrl": "/avatars/660eaa1923cd3e3478cec8197936a75c.svg", "fullname": "Jiazhao Zhang", "isPro": false, "type": "user", "user": "Jzzhang" } }, { "_id": 
"67b546c0d8a1eac02c605f75", "hidden": false, "name": "Jiawei He", "status": "claimed_verified", "statusLastChangedAt": "2025-02-19T14:37:41.045Z", "user": { "_id": "649cd8deccfa6c1a3c4d05ec", "avatarUrl": "/avatars/2b8f72a0643dfd74bc08fba5ed98ce95.svg", "fullname": "Jiawei", "isPro": false, "type": "user", "user": "jiaweihe" } }, { "_id": "67b546c0d8a1eac02c605f76", "hidden": false, "name": "Jiayuan Gu", "status": "admin_assigned", "statusLastChangedAt": "2025-02-19T10:11:49.058Z", "user": { "_id": "638aa283cebef0d13aa2ec2e", "avatarUrl": "/avatars/3b67e6a6073033864a817230e97c27ca.svg", "fullname": "Jiayuan Gu", "isPro": false, "type": "user", "user": "jigu" } }, { "_id": "67b546c0d8a1eac02c605f77", "hidden": false, "name": "Xin Jin", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b546c0d8a1eac02c605f78", "hidden": false, "name": "Kaisheng Ma", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b546c0d8a1eac02c605f79", "hidden": false, "name": "Zhizheng Zhang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b546c0d8a1eac02c605f7a", "hidden": false, "name": "He Wang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b546c0d8a1eac02c605f7b", "hidden": false, "name": "Li Yi", "status": null, "statusLastChangedAt": null, "user": null } ]
2025-02-18T18:59:02
SoFar: Language-Grounded Orientation Bridges Spatial Reasoning and Object Manipulation
Spatial intelligence is a critical component of embodied AI, enabling robots to understand and interact with their environments. While recent advances have enhanced the ability of VLMs to perceive object locations and positional relationships, they still lack the capability to precisely understand object orientations, a key requirement for tasks involving fine-grained manipulation. Addressing this limitation requires not only geometric reasoning but also an expressive and intuitive way to represent orientation. In this context, we propose that natural language offers a more flexible representation space than canonical frames, making it particularly suitable for instruction-following robotic systems. In this paper, we introduce the concept of semantic orientation, which defines object orientations using natural language in a reference-frame-free manner (e.g., the "plug-in" direction of a USB or the "handle" direction of a knife). To support this, we construct OrienText300K, a large-scale dataset of 3D models annotated with semantic orientations that link geometric understanding to functional semantics. By integrating semantic orientation into a VLM system, we enable robots to generate manipulation actions with both positional and orientational constraints. Extensive experiments in simulation and the real world demonstrate that our approach significantly enhances robotic manipulation capabilities, e.g., 48.7% accuracy on Open6DOR and 74.9% accuracy on SIMPLER.
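A hypothetical sketch of how a language-grounded ("semantic") orientation could become an orientational constraint: given a predicted "plug-in" direction for an object and the direction the task requires, compute the rotation that aligns them. The direction values, the alignment formula, and the interface are illustrative assumptions, not the SoFar system or its VLM outputs.

```python
# Hedged illustration: turn a semantic direction into a rotation constraint.
import numpy as np

def rotation_aligning(u, v):
    """Rotation matrix R with R @ u ≈ v, for 3D vectors u, v (Rodrigues formula)."""
    u, v = u / np.linalg.norm(u), v / np.linalg.norm(v)
    axis, c = np.cross(u, v), float(u @ v)
    s = np.linalg.norm(axis)
    if s < 1e-8:               # parallel case; anti-parallel handling omitted for brevity
        return np.eye(3)
    K = np.array([[0, -axis[2], axis[1]],
                  [axis[2], 0, -axis[0]],
                  [-axis[1], axis[0], 0]])
    return np.eye(3) + K + K @ K * ((1 - c) / s**2)

if __name__ == "__main__":
    plug_in_dir = np.array([0.2, 0.1, 0.97])   # hypothetical prediction for "plug-in direction of the USB"
    target_dir = np.array([0.0, 0.0, 1.0])     # task requires aligning with the port axis
    R = rotation_aligning(plug_in_dir, target_dir)
    print(np.round(R @ plug_in_dir / np.linalg.norm(plug_in_dir), 3))  # ≈ [0. 0. 1.]
```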
29
67b546c5d8a1eac02c606090
null
null
2025-02-18T21:18:22.741000
Sailor2: Sailing in South-East Asia with Inclusive Multilingual LLMs
https://cdn-thumbnails.h…s/2502.12982.png
4
{ "_id": "6214e4ee1e35c843d42d1f88", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6214e4ee1e35c843d42d1f88/fj-9wuIdPhvogh3BrcXTB.jpeg", "followerCount": 15, "fullname": "Longxu Dou", "isHf": false, "isMod": false, "isPro": true, "name": "dreamerdeo", "type": "user" }
true
null
2502.12982
[ { "_id": "67b53f572b2ec6908ffef365", "hidden": false, "name": "Longxu Dou", "status": "extracted_pending", "statusLastChangedAt": "2025-02-19T02:17:59.980Z", "user": { "_id": "6214e4ee1e35c843d42d1f88", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6214e4ee1e35c843d42d1f88/fj-9wuIdPhvogh3BrcXTB.jpeg", "fullname": "Longxu Dou", "isPro": true, "type": "user", "user": "dreamerdeo" } }, { "_id": "67b53f572b2ec6908ffef366", "hidden": false, "name": "Qian Liu", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b53f572b2ec6908ffef367", "hidden": false, "name": "Fan Zhou", "status": "claimed_verified", "statusLastChangedAt": "2025-02-19T09:51:54.538Z", "user": { "_id": "628f6e5ab90dde28ef57d293", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/628f6e5ab90dde28ef57d293/AxNzR2nvrND6Rf3RPkYMk.jpeg", "fullname": "Fan Zhou", "isPro": false, "type": "user", "user": "koalazf99" } }, { "_id": "67b53f572b2ec6908ffef368", "hidden": false, "name": "Changyu Chen", "status": "claimed_verified", "statusLastChangedAt": "2025-02-26T08:39:02.389Z", "user": { "_id": "64e416dc54e18f390ef79ba4", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/5n01J00ZaVRrebsON8iYA.jpeg", "fullname": "Changyu Chen", "isPro": true, "type": "user", "user": "Cameron-Chen" } }, { "_id": "67b53f572b2ec6908ffef369", "hidden": false, "name": "Zili Wang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b53f572b2ec6908ffef36a", "hidden": false, "name": "Ziqi Jin", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b53f572b2ec6908ffef36b", "hidden": false, "name": "Zichen Liu", "status": "claimed_verified", "statusLastChangedAt": "2025-02-27T09:17:58.404Z", "user": { "_id": "65f5392c68b8e0cb3c9977a2", "avatarUrl": "/avatars/aa64772475098e8a135c13072fde6744.svg", "fullname": "Zichen", "isPro": false, "type": "user", "user": "lkevinzc" } }, { "_id": "67b53f572b2ec6908ffef36c", "hidden": false, "name": "Tongyao Zhu", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b53f572b2ec6908ffef36d", "hidden": false, "name": "Cunxiao Du", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b53f572b2ec6908ffef36e", "hidden": false, "name": "Penghui Yang", "status": "claimed_verified", "statusLastChangedAt": "2025-02-19T09:03:36.624Z", "user": { "_id": "6508463c423b46492eec64e2", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6508463c423b46492eec64e2/WSU7NSqjk92Pr2xUIWjCk.png", "fullname": "Penghui Yang", "isPro": false, "type": "user", "user": "phyang" } }, { "_id": "67b53f572b2ec6908ffef36f", "hidden": false, "name": "Haonan Wang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b53f572b2ec6908ffef370", "hidden": false, "name": "Jiaheng Liu", "status": "claimed_verified", "statusLastChangedAt": "2025-02-21T09:59:55.378Z", "user": { "_id": "65377c30e48353201e6fdda0", "avatarUrl": "/avatars/a8f803b6f2e598eaee9c52c0d2ddfc16.svg", "fullname": "Jiaheng Liu", "isPro": false, "type": "user", "user": "CheeryLJH" } }, { "_id": "67b53f572b2ec6908ffef371", "hidden": false, "name": "Yongchi Zhao", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b53f572b2ec6908ffef372", "hidden": false, "name": "Xiachong Feng", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b53f572b2ec6908ffef373", "hidden": false, "name": "Xin Mao", "status": null, "statusLastChangedAt": null, "user": null }, { 
"_id": "67b53f572b2ec6908ffef374", "hidden": false, "name": "Man Tsung Yeung", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b53f572b2ec6908ffef375", "hidden": false, "name": "Kunat Pipatanakul", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b53f572b2ec6908ffef376", "hidden": false, "name": "Fajri Koto", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b53f572b2ec6908ffef377", "hidden": false, "name": "Min Si Thu", "status": "claimed_verified", "statusLastChangedAt": "2025-02-19T14:37:45.250Z", "user": { "_id": "63ff6038e7767a895335bd48", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1678198534680-63ff6038e7767a895335bd48.jpeg", "fullname": "Min Si Thu", "isPro": false, "type": "user", "user": "jojo-ai-mst" } }, { "_id": "67b53f572b2ec6908ffef378", "hidden": false, "name": "Hynek Kydlíček", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b53f572b2ec6908ffef379", "hidden": false, "name": "Zeyi Liu", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b53f572b2ec6908ffef37a", "hidden": false, "name": "Qunshu Lin", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b53f572b2ec6908ffef37b", "hidden": false, "name": "Sittipong Sripaisarnmongkol", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b53f572b2ec6908ffef37c", "hidden": false, "name": "Kridtaphad Sae-Khow", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b53f572b2ec6908ffef37d", "hidden": false, "name": "Nirattisai Thongchim", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b53f572b2ec6908ffef37e", "hidden": false, "name": "Taechawat Konkaew", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b53f572b2ec6908ffef37f", "hidden": false, "name": "Narong Borijindargoon", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b53f572b2ec6908ffef380", "hidden": false, "name": "Anh Dao", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b53f572b2ec6908ffef381", "hidden": false, "name": "Matichon Maneegard", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b53f572b2ec6908ffef382", "hidden": false, "name": "Phakphum Artkaew", "status": "claimed_verified", "statusLastChangedAt": "2025-03-03T08:07:36.639Z", "user": { "_id": "631a4855300a072a8da70abd", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/631a4855300a072a8da70abd/jRnzdW5JBjICYKCmkUFI-.jpeg", "fullname": "phakphum artkaew", "isPro": false, "type": "user", "user": "pakphum" } }, { "_id": "67b53f572b2ec6908ffef383", "hidden": false, "name": "Zheng-Xin Yong", "status": "claimed_verified", "statusLastChangedAt": "2025-02-20T09:37:30.404Z", "user": { "_id": "61424bf4f0d914a5f606a823", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/61424bf4f0d914a5f606a823/0td8lR4elBaVvJUD9Pojh.png", "fullname": "Yong Zheng-Xin", "isPro": false, "type": "user", "user": "yongzx" } }, { "_id": "67b53f572b2ec6908ffef384", "hidden": false, "name": "Quan Nguyen", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b53f572b2ec6908ffef385", "hidden": false, "name": "Wannaphong Phatthiyaphaibun", "status": "claimed_verified", "statusLastChangedAt": "2025-02-19T14:37:47.061Z", "user": { "_id": "60dc25da6155a8319f008a6f", "avatarUrl": 
"https://cdn-avatars.huggingface.co/v1/production/uploads/1630322686754-60dc25da6155a8319f008a6f.jpeg", "fullname": "Wannaphong Phatthiyaphaibun", "isPro": false, "type": "user", "user": "wannaphong" } }, { "_id": "67b53f572b2ec6908ffef386", "hidden": false, "name": "Hoang H. Tran", "status": "claimed_verified", "statusLastChangedAt": "2025-02-24T09:25:29.514Z", "user": { "_id": "65c19c3172fd754bba256112", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/65c19c3172fd754bba256112/cfgYvKfO36PSTFYKQottM.jpeg", "fullname": "Ryan Tran", "isPro": false, "type": "user", "user": "ryanhoangt" } }, { "_id": "67b53f572b2ec6908ffef387", "hidden": false, "name": "Mike Zhang", "status": "claimed_verified", "statusLastChangedAt": "2025-02-19T09:03:21.404Z", "user": { "_id": "60d33fbbd7b174177faabd4f", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/60d33fbbd7b174177faabd4f/pfyv_xj2B2m2N4F4sT9zJ.jpeg", "fullname": "Mike Zhang", "isPro": true, "type": "user", "user": "jjzha" } }, { "_id": "67b53f572b2ec6908ffef388", "hidden": false, "name": "Shiqi Chen", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b53f572b2ec6908ffef389", "hidden": false, "name": "Tianyu Pang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b53f572b2ec6908ffef38a", "hidden": false, "name": "Chao Du", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b53f572b2ec6908ffef38b", "hidden": false, "name": "Xinyi Wan", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b53f572b2ec6908ffef38c", "hidden": false, "name": "Wei Lu", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b53f572b2ec6908ffef38d", "hidden": false, "name": "Min Lin", "status": null, "statusLastChangedAt": null, "user": null } ]
2025-02-18T16:04:57
Sailor2: Sailing in South-East Asia with Inclusive Multilingual LLMs
Sailor2 is a family of cutting-edge multilingual language models for South-East Asian (SEA) languages, available in 1B, 8B, and 20B sizes to suit diverse applications. Building on Qwen2.5, Sailor2 undergoes continuous pre-training on 500B tokens (400B SEA-specific and 100B replay tokens) to support 13 SEA languages while retaining proficiency in Chinese and English. The Sailor2-20B model achieves a 50-50 win rate against GPT-4o across SEA languages. We also deliver a comprehensive cookbook on how to develop multilingual models efficiently, covering five key aspects: data curation, pre-training, post-training, model customization, and evaluation. We hope that the Sailor2 models (Apache 2.0 license) will drive language development in the SEA region, and that the Sailor2 cookbook will inspire researchers to build more inclusive LLMs for other under-served languages.
14
67b53f572b2ec6908ffef3c9
null
null
2025-02-18T20:05:09.186000
ExaGPT: Example-Based Machine-Generated Text Detection for Human Interpretability
https://cdn-thumbnails.h…s/2502.11336.png
2
{ "_id": "6538e649f940c8a0358aa8b8", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6538e649f940c8a0358aa8b8/veNw6QJuZu8anWCXtOXxu.jpeg", "followerCount": null, "fullname": "Ryuto Koike", "isHf": false, "isMod": false, "isPro": false, "name": "ryuryukke", "type": "user" }
false
[ "https://cdn-uploads.huggingface.co/production/uploads/6538e649f940c8a0358aa8b8/LTS6uI3uy5AxEeoD9-oMX.png" ]
2502.11336
[ { "_id": "67b52de36007d463b988b202", "hidden": false, "name": "Ryuto Koike", "status": "claimed_verified", "statusLastChangedAt": "2025-02-19T09:03:41.013Z", "user": { "_id": "6538e649f940c8a0358aa8b8", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6538e649f940c8a0358aa8b8/veNw6QJuZu8anWCXtOXxu.jpeg", "fullname": "Ryuto Koike", "isPro": false, "type": "user", "user": "ryuryukke" } }, { "_id": "67b52de36007d463b988b203", "hidden": false, "name": "Masahiro Kaneko", "status": "admin_assigned", "statusLastChangedAt": "2025-02-19T15:46:06.323Z", "user": { "_id": "652bdb25756a15d750071787", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/652bdb25756a15d750071787/qWjkU0_1egP7XYX3sXoSZ.jpeg", "fullname": "Masahiro Kaneko", "isPro": false, "type": "user", "user": "MasahiroKaneko" } }, { "_id": "67b52de36007d463b988b204", "hidden": false, "name": "Ayana Niwa", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b52de36007d463b988b205", "hidden": false, "name": "Preslav Nakov", "status": "admin_assigned", "statusLastChangedAt": "2025-02-19T15:46:29.303Z", "user": { "_id": "647f7eb25e1bc4753746bf9f", "avatarUrl": "/avatars/cc9c6210fdc822d8a106937e747dff41.svg", "fullname": "Preslav Nakov", "isPro": false, "type": "user", "user": "preslavnakov" } }, { "_id": "67b52de36007d463b988b206", "hidden": false, "name": "Naoaki Okazaki", "status": "admin_assigned", "statusLastChangedAt": "2025-02-19T15:46:36.241Z", "user": { "_id": "630c0a9b09eceb8fafe89cc6", "avatarUrl": "/avatars/6325a9b34da54d5cbddb814c3987a2fe.svg", "fullname": "Naoaki Okazaki", "isPro": false, "type": "user", "user": "nokazaki" } } ]
2025-02-17T01:15:07
ExaGPT: Example-Based Machine-Generated Text Detection for Human Interpretability
Incorrect decisions when detecting texts generated by Large Language Models (LLMs) can cause grave harm, such as undermining a student's academic dignity. LLM text detection thus needs to ensure the interpretability of its decisions, which can help users judge how reliably correct a prediction is. When humans verify whether a text is human-written or LLM-generated, they intuitively investigate with which of the two it shares more similar spans. However, existing interpretable detectors are not aligned with this human decision-making process and fail to offer evidence that users can easily understand. To bridge this gap, we introduce ExaGPT, an interpretable detection approach grounded in the human decision-making process for verifying the origin of a text. ExaGPT identifies a text by checking whether it shares more similar spans with human-written or with LLM-generated texts from a datastore. For each span in the text, this approach can provide, as evidence, the similar span examples that contribute to the decision. Our human evaluation demonstrates that providing similar span examples contributes more effectively to judging the correctness of the decision than existing interpretable methods. Moreover, extensive experiments in four domains and with three generators show that ExaGPT massively outperforms prior powerful detectors by up to +40.9 points of accuracy at a false positive rate of 1%.
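A toy sketch of the span-similarity idea: each span of the input is compared against spans from a datastore of human-written and LLM-generated texts, per-span votes double as evidence, and the final label aggregates the votes. The character-overlap similarity, span length, and datastore here are illustrative stand-ins, not the paper's retrieval setup.

```python
# Hedged illustration of span-level, example-based detection.
def spans(text, n=4):
    words = text.split()
    return [" ".join(words[i:i + n]) for i in range(0, max(len(words) - n + 1, 1))]

def similarity(a, b):
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / max(len(sa | sb), 1)

def classify(text, human_texts, llm_texts, n=4):
    human_spans = [s for t in human_texts for s in spans(t, n)]
    llm_spans = [s for t in llm_texts for s in spans(t, n)]
    votes = []
    for s in spans(text, n):
        best_h = max(similarity(s, h) for h in human_spans)
        best_l = max(similarity(s, l) for l in llm_spans)
        votes.append(("human", best_h) if best_h >= best_l else ("llm", best_l))
    llm_votes = sum(1 for label, _ in votes if label == "llm")
    return ("llm" if llm_votes > len(votes) / 2 else "human"), votes  # votes serve as evidence

if __name__ == "__main__":
    human = ["I jotted these notes down late at night after the lab meeting."]
    llm = ["In conclusion, it is important to note that the results demonstrate significant improvements."]
    label, evidence = classify("It is important to note that the results demonstrate clear improvements.", human, llm)
    print(label, evidence[:2])
```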
0
67b52de46007d463b988b279
null
null
2025-02-18T18:58:34.838000
Diffusion Models without Classifier-free Guidance
https://cdn-thumbnails.h…s/2502.12154.png
2
{ "_id": "6372f265112fb535baf254c4", "avatarUrl": "/avatars/9b821bc533175c7dded48cdb3a3e1a12.svg", "followerCount": 2, "fullname": "tzco", "isHf": false, "isMod": false, "isPro": false, "name": "tzco", "type": "user" }
true
null
2502.12154
[ { "_id": "67b400719ff3ff79dae14701", "hidden": false, "name": "Zhicong Tang", "status": "claimed_verified", "statusLastChangedAt": "2025-02-18T09:31:45.361Z", "user": { "_id": "6372f265112fb535baf254c4", "avatarUrl": "/avatars/9b821bc533175c7dded48cdb3a3e1a12.svg", "fullname": "tzco", "isPro": false, "type": "user", "user": "tzco" } }, { "_id": "67b400719ff3ff79dae14702", "hidden": false, "name": "Jianmin Bao", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b400719ff3ff79dae14703", "hidden": false, "name": "Dong Chen", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b400719ff3ff79dae14704", "hidden": false, "name": "Baining Guo", "status": null, "statusLastChangedAt": null, "user": null } ]
2025-02-17T18:59:50
Diffusion Models without Classifier-free Guidance
This paper presents Model-guidance (MG), a novel objective for training diffusion models that removes the need for the commonly used classifier-free guidance (CFG). Our approach goes beyond the standard modeling of the data distribution alone to also incorporate the posterior probability of the conditions. The proposed technique originates from the idea of CFG and is simple yet effective, making it a plug-and-play module for existing models. Our method significantly accelerates the training process, doubles the inference speed, and achieves quality that parallels and even surpasses concurrent diffusion models that use CFG. Extensive experiments demonstrate its effectiveness, efficiency, and scalability across different models and datasets. Finally, we establish state-of-the-art performance on the ImageNet 256 benchmark with an FID of 1.34. Our code is available at https://github.com/tzco/Diffusion-wo-CFG.
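A hedged sketch, not the paper's exact Model-guidance objective: it only illustrates the general idea of folding a CFG-style conditional/unconditional prediction gap into the training target of a toy epsilon-prediction diffusion model. The network, the weighting `w`, the noising step, and the stop-gradient placement are all assumptions; the actual MG formulation should be taken from the paper and its released code.

```python
# Illustrative guidance-in-the-training-target sketch (NOT the MG implementation).
import torch
import torch.nn as nn

class TinyEpsNet(nn.Module):
    def __init__(self, dim=16, n_classes=10):
        super().__init__()
        self.embed = nn.Embedding(n_classes + 1, dim)   # last index = "null" condition
        self.net = nn.Sequential(nn.Linear(2 * dim, 64), nn.SiLU(), nn.Linear(64, dim))

    def forward(self, x_t, cond):
        return self.net(torch.cat([x_t, self.embed(cond)], dim=-1))

def guided_target_loss(model, x0, cond, w=1.0, null_id=10):
    noise = torch.randn_like(x0)
    x_t = x0 + noise                                   # toy "noising"; real code uses a schedule
    with torch.no_grad():                              # guidance gap treated as a fixed target
        gap = model(x_t, cond) - model(x_t, torch.full_like(cond, null_id))
    target = noise + w * gap                           # CFG-style shifted target
    return ((model(x_t, cond) - target) ** 2).mean()

if __name__ == "__main__":
    model = TinyEpsNet()
    x0 = torch.randn(8, 16)
    cond = torch.randint(0, 10, (8,))
    loss = guided_target_loss(model, x0, cond)
    loss.backward()
    print(float(loss))
```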
4
67b400789ff3ff79dae147ee
null
null
2025-02-18T14:56:45.613000
EQ-VAE: Equivariance Regularized Latent Space for Improved Generative Image Modeling
https://cdn-thumbnails.h…s/2502.09509.png
2
{ "_id": "661ba524bd9243bf7e598355", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/661ba524bd9243bf7e598355/i77yD4XgJn2vUbn_mIsT8.jpeg", "followerCount": 2, "fullname": "Ioannis Kakogeorgiou", "isHf": false, "isMod": false, "isPro": false, "name": "gkakogeorgiou", "type": "user" }
true
[ "https://cdn-uploads.huggingface.co/production/uploads/661ba524bd9243bf7e598355/9XkVow22TY84dDgXm-Duc.gif" ]
2502.09509
[ { "_id": "67b4e4259beded220ad14729", "hidden": false, "name": "Theodoros Kouzelis", "status": "claimed_verified", "statusLastChangedAt": "2025-02-21T16:15:22.578Z", "user": { "_id": "6383aa17834d3558a3955186", "avatarUrl": "/avatars/1f6aed0a762379df334bc6a734d42f86.svg", "fullname": "Kouzelis", "isPro": false, "type": "user", "user": "zelaki" } }, { "_id": "67b4e4259beded220ad1472a", "hidden": false, "name": "Ioannis Kakogeorgiou", "status": "claimed_verified", "statusLastChangedAt": "2025-02-19T09:03:49.086Z", "user": { "_id": "661ba524bd9243bf7e598355", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/661ba524bd9243bf7e598355/i77yD4XgJn2vUbn_mIsT8.jpeg", "fullname": "Ioannis Kakogeorgiou", "isPro": false, "type": "user", "user": "gkakogeorgiou" } }, { "_id": "67b4e4259beded220ad1472b", "hidden": false, "name": "Spyros Gidaris", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b4e4259beded220ad1472c", "hidden": false, "name": "Nikos Komodakis", "status": null, "statusLastChangedAt": null, "user": null } ]
2025-02-13T17:21:51
EQ-VAE: Equivariance Regularized Latent Space for Improved Generative Image Modeling
Latent generative models have emerged as a leading approach for high-quality image synthesis. These models rely on an autoencoder to compress images into a latent space, followed by a generative model to learn the latent distribution. We identify that existing autoencoders lack equivariance to semantic-preserving transformations like scaling and rotation, resulting in complex latent spaces that hinder generative performance. To address this, we propose EQ-VAE, a simple regularization approach that enforces equivariance in the latent space, reducing its complexity without degrading reconstruction quality. By finetuning pre-trained autoencoders with EQ-VAE, we enhance the performance of several state-of-the-art generative models, including DiT, SiT, REPA and MaskGIT, achieving a 7× speedup on DiT-XL/2 with only five epochs of SD-VAE fine-tuning. EQ-VAE is compatible with both continuous and discrete autoencoders, thus offering a versatile enhancement for a wide range of latent generative models. Project page and code: https://eq-vae.github.io/.
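A minimal sketch of an equivariance-style regularizer in the spirit of EQ-VAE: the latent of a transformed image is pushed towards the same transform applied to the latent of the original image. The toy conv encoder, the choice of a 90° rotation, and the loss weight are illustrative assumptions, not the paper's setup (which also covers scaling and is applied while fine-tuning a pretrained autoencoder).

```python
# Hedged illustration of a latent-equivariance regularization term.
import torch
import torch.nn as nn
import torch.nn.functional as F

encoder = nn.Sequential(                      # stand-in for a pretrained VAE encoder
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.SiLU(),
    nn.Conv2d(16, 4, 3, stride=2, padding=1),
)

def equivariance_loss(x, k=1):
    """Rotate the image by k*90 degrees and compare latents: E(rot(x)) vs rot(E(x))."""
    z = encoder(x)
    z_of_rotated = encoder(torch.rot90(x, k, dims=(-2, -1)))
    rotated_z = torch.rot90(z, k, dims=(-2, -1))
    return F.mse_loss(z_of_rotated, rotated_z)

if __name__ == "__main__":
    x = torch.randn(2, 3, 64, 64)
    recon_loss = torch.tensor(0.0)            # placeholder for the usual reconstruction/KL terms
    loss = recon_loss + 0.1 * equivariance_loss(x)
    loss.backward()
    print(float(loss))
```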
7
67b4e4289beded220ad147c7
null
null
2025-02-18T13:59:31.380000
Ask in Any Modality: A Comprehensive Survey on Multimodal Retrieval-Augmented Generation
https://cdn-thumbnails.h…s/2502.08826.png
2
{ "_id": "64ba58d377dd483716aba098", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/64ba58d377dd483716aba098/6VASAUkFpDC-PR01yUJWj.png", "followerCount": 3, "fullname": "Mahdi Abootorabi", "isHf": false, "isMod": false, "isPro": false, "name": "aboots", "type": "user" }
true
[ "https://cdn-uploads.huggingface.co/production/uploads/64ba58d377dd483716aba098/N0fZ0I60EfZjITEnf6gPc.png", "https://cdn-uploads.huggingface.co/production/uploads/64ba58d377dd483716aba098/CtLxMqUEhWr6d9ztU1YZq.jpeg", "https://cdn-uploads.huggingface.co/production/uploads/64ba58d377dd483716aba098/HczPPOjzArOwgdwb5yv5Z.jpeg" ]
2502.08826
[ { "_id": "67b303f18bd6e9a5cad8bc4d", "hidden": false, "name": "Mohammad Mahdi Abootorabi", "status": "extracted_confirmed", "statusLastChangedAt": "2025-02-17T09:40:27.588Z", "user": { "_id": "64ba58d377dd483716aba098", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/64ba58d377dd483716aba098/6VASAUkFpDC-PR01yUJWj.png", "fullname": "Mahdi Abootorabi", "isPro": false, "type": "user", "user": "aboots" } }, { "_id": "67b303f18bd6e9a5cad8bc4e", "hidden": false, "name": "Amirhosein Zobeiri", "status": "claimed_verified", "statusLastChangedAt": "2025-02-20T10:49:51.840Z", "user": { "_id": "659d22a87ca51193252ec403", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/659d22a87ca51193252ec403/4r6RO3enW8q6gv_eWFZhl.jpeg", "fullname": "Amirhosein Zobeiri", "isPro": false, "type": "user", "user": "ZobeiriA" } }, { "_id": "67b303f18bd6e9a5cad8bc4f", "hidden": false, "name": "Mahdi Dehghani", "status": "claimed_verified", "statusLastChangedAt": "2025-02-20T09:37:45.858Z", "user": { "_id": "67b61b5692649eb787279693", "avatarUrl": "/avatars/1d3b1f4d48ec8064a25ae5f27ea58576.svg", "fullname": "Mahdi Dehghani", "isPro": false, "type": "user", "user": "Mahdi-dh" } }, { "_id": "67b303f18bd6e9a5cad8bc50", "hidden": false, "name": "Mohammadali Mohammadkhani", "status": "claimed_verified", "statusLastChangedAt": "2025-02-19T09:04:55.291Z", "user": { "_id": "64a43776f489f7ced54a4c4b", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/64a43776f489f7ced54a4c4b/zO36YPuyaSoTbug99PxYs.jpeg", "fullname": "Mohammadali Mohammadkhani", "isPro": false, "type": "user", "user": "moali-mkh-2000" } }, { "_id": "67b303f18bd6e9a5cad8bc51", "hidden": false, "name": "Bardia Mohammadi", "status": "claimed_verified", "statusLastChangedAt": "2025-02-19T14:37:52.589Z", "user": { "_id": "6531334db5672703b2efcf6f", "avatarUrl": "/avatars/598d6b4922d5f7d434ed1cf9ffc94f6d.svg", "fullname": "Bardia Mohammadi", "isPro": false, "type": "user", "user": "bardia79mhd" } }, { "_id": "67b303f18bd6e9a5cad8bc52", "hidden": false, "name": "Omid Ghahroodi", "status": "claimed_verified", "statusLastChangedAt": "2025-02-19T09:33:28.667Z", "user": { "_id": "6250819a5bf543dbd2607081", "avatarUrl": "/avatars/dfade22a578b8349ab544acda5e8bcad.svg", "fullname": "Omid Ghahroodi", "isPro": false, "type": "user", "user": "omidgh" } }, { "_id": "67b303f18bd6e9a5cad8bc53", "hidden": false, "name": "Mahdieh Soleymani Baghshah", "status": "extracted_pending", "statusLastChangedAt": "2025-02-17T09:40:02.049Z", "user": { "_id": "661a88f0bcd78151e521bc60", "avatarUrl": "/avatars/bedab01ce7909ecde7a60f891770c18c.svg", "fullname": "Mahdieh Soleymani Baghshah", "isPro": false, "type": "user", "user": "Soleymani" } }, { "_id": "67b303f18bd6e9a5cad8bc54", "hidden": false, "name": "Ehsaneddin Asgari", "status": null, "statusLastChangedAt": null, "user": null } ]
2025-02-12T22:33:41
Ask in Any Modality: A Comprehensive Survey on Multimodal Retrieval-Augmented Generation
Large Language Models (LLMs) struggle with hallucinations and outdated knowledge due to their reliance on static training data. Retrieval-Augmented Generation (RAG) mitigates these issues by integrating external, dynamic information, enhancing factual and up-to-date grounding. Recent advances in multimodal learning have led to the development of Multimodal RAG, incorporating multiple modalities such as text, images, audio, and video to enhance the generated outputs. However, cross-modal alignment and reasoning introduce unique challenges to Multimodal RAG, distinguishing it from traditional unimodal RAG. This survey offers a structured and comprehensive analysis of Multimodal RAG systems, covering datasets, metrics, benchmarks, evaluation, methodologies, and innovations in retrieval, fusion, augmentation, and generation. We review training strategies, robustness enhancements, and loss functions in detail, while also exploring diverse Multimodal RAG scenarios. Furthermore, we discuss open challenges and future research directions to support advancements in this evolving field. This survey lays the foundation for developing more capable and reliable AI systems that effectively leverage multimodal dynamic external knowledge bases. Resources are available at https://github.com/llm-lab-org/Multimodal-RAG-Survey.
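A generic, illustrative skeleton of the stages the survey covers (retrieval, fusion/augmentation, generation). The embedding function is a placeholder and the final generation call is only indicated in a comment; a real system would use, for example, a CLIP-style encoder and a multimodal LLM.

```python
# Hedged, generic multimodal RAG skeleton for illustration only.
import numpy as np

def embed(item):                       # placeholder: hash-seeded random vector, not a real encoder
    rng = np.random.default_rng(abs(hash(item["content"])) % (2**32))
    return rng.normal(size=64)

def retrieve(query, corpus, k=2):      # retrieval over a mixed-modality corpus
    q = embed({"content": query})
    scored = sorted(corpus, key=lambda d: -float(q @ embed(d)))
    return scored[:k]

def build_prompt(query, docs):         # fusion/augmentation: interleave evidence into the prompt
    evidence = "\n".join(f"[{d['modality']}] {d['content']}" for d in docs)
    return f"Answer using the evidence below.\n{evidence}\n\nQuestion: {query}"

if __name__ == "__main__":
    corpus = [
        {"modality": "text", "content": "The Eiffel Tower is 330 m tall."},
        {"modality": "image", "content": "photo_eiffel.jpg (caption: Eiffel Tower at night)"},
        {"modality": "text", "content": "Bananas are rich in potassium."},
    ]
    query = "How tall is the Eiffel Tower?"
    prompt = build_prompt(query, retrieve(query, corpus))
    print(prompt)      # generation: pass `prompt` (plus any retrieved images) to a multimodal LLM
```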
17
67b303f28bd6e9a5cad8bc85
null
null
2025-02-18T13:21:05.722000
IHEval: Evaluating Language Models on Following the Instruction Hierarchy
https://cdn-thumbnails.h…s/2502.08745.png
2
{ "_id": "63bf9695da08ed054400205e", "avatarUrl": "/avatars/b6fca49559a61cf66628088c60d26c10.svg", "followerCount": 1, "fullname": "Zhihan Zhang", "isHf": false, "isMod": false, "isPro": false, "name": "zhihz0535", "type": "user" }
true
null
2502.08745
[ { "_id": "67b4cf1994ec5e365fb7995d", "hidden": false, "name": "Zhihan Zhang", "status": "extracted_confirmed", "statusLastChangedAt": "2025-02-18T18:19:31.455Z", "user": { "_id": "63bf9695da08ed054400205e", "avatarUrl": "/avatars/b6fca49559a61cf66628088c60d26c10.svg", "fullname": "Zhihan Zhang", "isPro": false, "type": "user", "user": "zhihz0535" } }, { "_id": "67b4cf1994ec5e365fb7995e", "hidden": false, "name": "Shiyang Li", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b4cf1994ec5e365fb7995f", "hidden": false, "name": "Zixuan Zhang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b4cf1994ec5e365fb79960", "hidden": false, "name": "Xin Liu", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b4cf1994ec5e365fb79961", "hidden": false, "name": "Haoming Jiang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b4cf1994ec5e365fb79962", "hidden": false, "name": "Xianfeng Tang", "status": "admin_assigned", "statusLastChangedAt": "2025-02-19T15:25:56.312Z", "user": { "_id": "6465f6467ff8fcbef7d22513", "avatarUrl": "/avatars/07992835c235fbb07016a0ea4f1d61cb.svg", "fullname": "Xianfeng Tang", "isPro": false, "type": "user", "user": "xianft" } }, { "_id": "67b4cf1994ec5e365fb79963", "hidden": false, "name": "Yifan Gao", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b4cf1994ec5e365fb79964", "hidden": false, "name": "Zheng Li", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b4cf1994ec5e365fb79965", "hidden": false, "name": "Haodong Wang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b4cf1994ec5e365fb79966", "hidden": false, "name": "Zhaoxuan Tan", "status": "admin_assigned", "statusLastChangedAt": "2025-02-19T15:26:12.966Z", "user": { "_id": "638eaca4a26f3d5af37be8b3", "avatarUrl": "/avatars/476890d83191d0cbdb9a3d5351a129da.svg", "fullname": "Zhaoxuan_Tan", "isPro": false, "type": "user", "user": "Zhaoxuan" } }, { "_id": "67b4cf1994ec5e365fb79967", "hidden": false, "name": "Yichuan Li", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b4cf1994ec5e365fb79968", "hidden": false, "name": "Qingyu Yin", "status": "admin_assigned", "statusLastChangedAt": "2025-02-19T15:26:54.938Z", "user": { "_id": "6453cb22908e259483c0a061", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6453cb22908e259483c0a061/hMgdwZUsUbgquGalzPGzV.jpeg", "fullname": "Qingyu_Yin", "isPro": false, "type": "user", "user": "MikaStars39" } }, { "_id": "67b4cf1994ec5e365fb79969", "hidden": false, "name": "Bing Yin", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b4cf1994ec5e365fb7996a", "hidden": false, "name": "Meng Jiang", "status": null, "statusLastChangedAt": null, "user": null } ]
2025-02-12T19:35:28
IHEval: Evaluating Language Models on Following the Instruction Hierarchy
The instruction hierarchy, which establishes a priority order from system messages to user messages, conversation history, and tool outputs, is essential for ensuring consistent and safe behavior in language models (LMs). Despite its importance, this topic receives limited attention, and there is a lack of comprehensive benchmarks for evaluating models' ability to follow the instruction hierarchy. We bridge this gap by introducing IHEval, a novel benchmark comprising 3,538 examples across nine tasks, covering cases where instructions of different priorities either align or conflict. Our evaluation of popular LMs highlights their struggle to recognize instruction priorities. All evaluated models experience a sharp performance decline when facing conflicting instructions, compared to their original instruction-following performance. Moreover, the most competitive open-source model achieves only 48% accuracy in resolving such conflicts. Our results underscore the need for targeted optimization in the future development of LMs.
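A hypothetical sketch of how an aligned versus conflicting instruction-hierarchy test case can be constructed and scored: a system message sets a rule, the user message either agrees with or contradicts it, and the checker verifies whether the response obeys the higher-priority (system) instruction. IHEval's actual tasks and scoring are broader and more varied than this.

```python
# Hedged illustration of an instruction-hierarchy test case and checker.
def make_case(conflict: bool):
    system = "Always answer in uppercase."
    user = ("Please answer in lowercase: what is the capital of France?"
            if conflict else "What is the capital of France?")
    return {"system": system, "user": user, "conflict": conflict}

def follows_hierarchy(response: str) -> bool:
    # the system rule (uppercase) should win even when the user asks otherwise
    letters = [c for c in response if c.isalpha()]
    return bool(letters) and all(c.isupper() for c in letters)

if __name__ == "__main__":
    for case in (make_case(conflict=False), make_case(conflict=True)):
        # response = call_model(case["system"], case["user"])   # hypothetical model call
        response = "PARIS"                                       # stand-in response
        print(case["conflict"], follows_hierarchy(response))
```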
18
67b4cf1a94ec5e365fb799c1
null
null
2025-02-18T13:04:04.423000
Data Valuation using Neural Networks for Efficient Instruction Fine-Tuning
https://cdn-thumbnails.h…s/2502.09969.png
2
{ "_id": "6391e4e984afa726d66180b9", "avatarUrl": "/avatars/e437e2820745b522a868b8da27d9a11f.svg", "followerCount": 0, "fullname": "Ishika Agarwal", "isHf": false, "isMod": false, "isPro": false, "name": "ishikaa", "type": "user" }
true
null
2502.09969
[ { "_id": "67b4cb6c777b7676c8b3c43d", "hidden": false, "name": "Ishika Agarwal", "status": "extracted_confirmed", "statusLastChangedAt": "2025-02-18T18:06:42.786Z", "user": { "_id": "6391e4e984afa726d66180b9", "avatarUrl": "/avatars/e437e2820745b522a868b8da27d9a11f.svg", "fullname": "Ishika Agarwal", "isPro": false, "type": "user", "user": "ishikaa" } }, { "_id": "67b4cb6c777b7676c8b3c43e", "hidden": false, "name": "Dilek Hakkani-Tür", "status": null, "statusLastChangedAt": null, "user": null } ]
2025-02-14T07:55:47
Data Valuation using Neural Networks for Efficient Instruction Fine-Tuning
Influence functions provide crucial insights into model training, but existing methods suffer from large computational costs and limited generalization. In particular, recent works have proposed various metrics and algorithms to calculate the influence of data using language models, which do not scale well with large models and datasets. This is because of the expensive forward and backward passes required for computation, substantial memory requirements to store large models, and poor generalization of influence estimates to new data. In this paper, we explore the use of small neural networks -- which we refer to as the InfluenceNetwork -- to estimate influence values, achieving up to 99% cost reduction. Our evaluation demonstrates that influence values can be estimated with models just 0.0027% the size of full language models (we use 7B and 8B versions). We apply our algorithm of estimating influence values (called NN-CIFT: Neural Networks for effiCient Instruction Fine-Tuning) to the downstream task of subset selection for general instruction fine-tuning. In our study, we include four state-of-the-art influence functions and show that, despite large speedups, there is no compromise in performance between NN-CIFT and the original influence functions. We provide an in-depth hyperparameter analysis of NN-CIFT. The code for our method can be found here: https://github.com/agarwalishika/NN-CIFT.
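A minimal sketch of the "small network estimates influence" idea: a tiny MLP is fit to regress influence scores, computed by an expensive method on a small seed set, from cheap example features, and is then used to score the rest of the pool for subset selection. The feature choice, architecture, and training setup are illustrative assumptions, not the NN-CIFT configuration.

```python
# Hedged illustration of an InfluenceNetwork-style estimator.
import torch
import torch.nn as nn

class InfluenceNetwork(nn.Module):
    def __init__(self, feat_dim=128, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(feat_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, feats):
        return self.net(feats).squeeze(-1)

if __name__ == "__main__":
    torch.manual_seed(0)
    pool_feats = torch.randn(10_000, 128)          # cheap features for the full data pool
    seed_idx = torch.randperm(10_000)[:256]        # small subset scored by the expensive method
    seed_influence = torch.randn(256)              # placeholder for true influence values

    model = InfluenceNetwork()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(200):                           # regress the expensive scores on the seed set
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(pool_feats[seed_idx]), seed_influence)
        loss.backward()
        opt.step()

    with torch.no_grad():                          # cheap influence estimates for everything else
        est = model(pool_feats)
    top_subset = est.topk(1_000).indices           # e.g. pick the top-1k examples for fine-tuning
    print(top_subset.shape)
```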
1
67b4cb6d777b7676c8b3c45c
null
null
2025-02-18T11:57:43.538000
Explorer: Scaling Exploration-driven Web Trajectory Synthesis for Multimodal Web Agents
https://cdn-thumbnails.h…s/2502.11357.png
2
{ "_id": "6556717676fe5cfa6a115405", "avatarUrl": "/avatars/570dd8f4eb6baaff12d7ebe11dde6348.svg", "followerCount": 1, "fullname": "Vardaan Pahuja", "isHf": false, "isMod": false, "isPro": false, "name": "vardaan123", "type": "user" }
true
null
2502.11357
[ { "_id": "67b3f1f1f5bd60d66133e1f3", "hidden": false, "name": "Vardaan Pahuja", "status": "claimed_verified", "statusLastChangedAt": "2025-02-18T09:31:47.969Z", "user": { "_id": "6556717676fe5cfa6a115405", "avatarUrl": "/avatars/570dd8f4eb6baaff12d7ebe11dde6348.svg", "fullname": "Vardaan Pahuja", "isPro": false, "type": "user", "user": "vardaan123" } }, { "_id": "67b3f1f1f5bd60d66133e1f4", "hidden": false, "name": "Yadong Lu", "status": "extracted_pending", "statusLastChangedAt": "2025-02-18T02:35:29.988Z", "user": { "_id": "664bbd75a6bd1b3d2ac7fc34", "avatarUrl": "/avatars/127bf5d611b46ef95a1859a8cf21a160.svg", "fullname": "Yadong Lu", "isPro": false, "type": "user", "user": "adamlu1" } }, { "_id": "67b3f1f1f5bd60d66133e1f5", "hidden": false, "name": "Corby Rosset", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b3f1f1f5bd60d66133e1f6", "hidden": false, "name": "Boyu Gou", "status": "claimed_verified", "statusLastChangedAt": "2025-02-20T17:33:00.709Z", "user": { "_id": "6500870f1e14749e84f8f887", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6500870f1e14749e84f8f887/wfvx4BZvh2OyW-vpq5jEy.jpeg", "fullname": "Boyu Gou", "isPro": false, "type": "user", "user": "BoyuNLP" } }, { "_id": "67b3f1f1f5bd60d66133e1f7", "hidden": false, "name": "Arindam Mitra", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b3f1f1f5bd60d66133e1f8", "hidden": false, "name": "Spencer Whitehead", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b3f1f1f5bd60d66133e1f9", "hidden": false, "name": "Yu Su", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b3f1f1f5bd60d66133e1fa", "hidden": false, "name": "Ahmed Awadallah", "status": null, "statusLastChangedAt": null, "user": null } ]
2025-02-17T02:13:48
Explorer: Scaling Exploration-driven Web Trajectory Synthesis for Multimodal Web Agents
Recent success in large multimodal models (LMMs) has sparked promising applications of agents capable of autonomously completing complex web tasks. While open-source LMM agents have made significant advances in offline evaluation benchmarks, their performance still falls substantially short of human-level capabilities in more realistic online settings. A key bottleneck is the lack of diverse and large-scale trajectory-level datasets across various domains, which are expensive to collect. In this paper, we address this challenge by developing a scalable recipe to synthesize the largest and most diverse trajectory-level dataset to date, containing over 94K successful multimodal web trajectories, spanning 49K unique URLs, 720K screenshots, and 33M web elements. In particular, we leverage extensive web exploration and refinement to obtain diverse task intents. The average cost is 28 cents per successful trajectory, making it affordable to a wide range of users in the community. Leveraging this dataset, we train Explorer, a multimodal web agent, and demonstrate strong performance on both offline and online web agent benchmarks such as Mind2Web-Live, Multimodal-Mind2Web, and MiniWob++. Additionally, our experiments highlight data scaling as a key driver for improving web agent capabilities. We hope this study makes state-of-the-art LMM-based agent research at a larger scale more accessible.
9
67b3f1f1f5bd60d66133e24b
null
null
2025-02-18T11:42:58.976000
ILIAS: Instance-Level Image retrieval At Scale
https://cdn-thumbnails.h…s/2502.11748.png
2
{ "_id": "66a3ae59f33ff23e1c027ccd", "avatarUrl": "/avatars/216717d547bf785a2b1696171e5f4b11.svg", "followerCount": 1, "fullname": "Vladan Stojnic", "isHf": false, "isMod": false, "isPro": false, "name": "stojnvla", "type": "user" }
true
null
2502.11748
[ { "_id": "67b465600e5142133055d7c1", "hidden": false, "name": "Giorgos Kordopatis-Zilos", "status": "claimed_verified", "statusLastChangedAt": "2025-02-19T09:04:38.791Z", "user": { "_id": "673c934bdf13003bd11746fd", "avatarUrl": "/avatars/1aec1157549be85963b39eb54845b695.svg", "fullname": "Giorgos Kordopatis-Zilos", "isPro": false, "type": "user", "user": "gkordo" } }, { "_id": "67b465600e5142133055d7c2", "hidden": false, "name": "Vladan Stojnić", "status": "claimed_verified", "statusLastChangedAt": "2025-02-18T16:39:41.653Z", "user": { "_id": "66a3ae59f33ff23e1c027ccd", "avatarUrl": "/avatars/216717d547bf785a2b1696171e5f4b11.svg", "fullname": "Vladan Stojnic", "isPro": false, "type": "user", "user": "stojnvla" } }, { "_id": "67b465600e5142133055d7c3", "hidden": false, "name": "Anna Manko", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b465600e5142133055d7c4", "hidden": false, "name": "Pavel Šuma", "status": "claimed_verified", "statusLastChangedAt": "2025-02-19T09:04:41.052Z", "user": { "_id": "67ab9acd412358244419d946", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/no-auth/tq-eUQMYGdlDZVn15DckY.png", "fullname": "Pavel Suma", "isPro": false, "type": "user", "user": "pavelsuma" } }, { "_id": "67b465600e5142133055d7c5", "hidden": false, "name": "Nikolaos-Antonios Ypsilantis", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b465600e5142133055d7c6", "hidden": false, "name": "Nikos Efthymiadis", "status": "claimed_verified", "statusLastChangedAt": "2025-02-19T11:49:36.494Z", "user": { "_id": "663e63c5902b965ba35a0308", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/663e63c5902b965ba35a0308/OFX5yvY2LL8KORKsiDhAy.jpeg", "fullname": "Nikos Efthymiadis", "isPro": false, "type": "user", "user": "nikos-efth" } }, { "_id": "67b465600e5142133055d7c7", "hidden": false, "name": "Zakaria Laskar", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b465600e5142133055d7c8", "hidden": false, "name": "Jiří Matas", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b465600e5142133055d7c9", "hidden": false, "name": "Ondřej Chum", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b465600e5142133055d7ca", "hidden": false, "name": "Giorgos Tolias", "status": null, "statusLastChangedAt": null, "user": null } ]
2025-02-17T12:42:38
ILIAS: Instance-Level Image retrieval At Scale
This work introduces ILIAS, a new test dataset for Instance-Level Image retrieval At Scale. It is designed to evaluate the ability of current and future foundation models and retrieval techniques to recognize particular objects. The key benefits over existing datasets include large scale, domain diversity, accurate ground truth, and performance that is far from saturated. ILIAS includes query and positive images for 1,000 object instances, manually collected to capture challenging conditions and diverse domains. Large-scale retrieval is conducted against 100 million distractor images from YFCC100M. To avoid false negatives without extra annotation effort, we include only query objects confirmed to have emerged after 2014, i.e. the compilation date of YFCC100M. Extensive benchmarking is performed, with the following observations: i) models fine-tuned on specific domains, such as landmarks or products, excel in that domain but fail on ILIAS; ii) learning a linear adaptation layer using multi-domain class supervision results in performance improvements, especially for vision-language models; iii) local descriptors in retrieval re-ranking are still a key ingredient, especially in the presence of severe background clutter; iv) the text-to-image performance of the vision-language foundation models is surprisingly close to the corresponding image-to-image case. website: https://vrg.fel.cvut.cz/ilias/
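A minimal sketch of observation ii) above: a single linear adaptation layer is trained on top of frozen backbone embeddings with a classification loss over classes pooled from several domains, and at retrieval time only the adapted, normalized embeddings are used. Dimensions, data, and training schedule here are illustrative assumptions, not the paper's protocol.

```python
# Hedged illustration of a linear adaptation layer with multi-domain class supervision.
import torch
import torch.nn as nn

emb_dim, n_classes = 512, 1000
adapter = nn.Linear(emb_dim, emb_dim, bias=False)     # the linear adaptation layer
classifier = nn.Linear(emb_dim, n_classes)            # multi-domain classification head
opt = torch.optim.Adam(list(adapter.parameters()) + list(classifier.parameters()), lr=1e-3)

frozen_embs = torch.randn(4096, emb_dim)              # placeholder frozen backbone features
labels = torch.randint(0, n_classes, (4096,))         # pooled multi-domain class labels

for step in range(100):                               # supervised training of the adapter
    idx = torch.randint(0, 4096, (256,))
    logits = classifier(adapter(frozen_embs[idx]))
    loss = nn.functional.cross_entropy(logits, labels[idx])
    opt.zero_grad()
    loss.backward()
    opt.step()

with torch.no_grad():                                 # retrieval uses adapted, L2-normalized embeddings
    adapted = nn.functional.normalize(adapter(frozen_embs), dim=-1)
    sims = adapted[:1] @ adapted.T                    # query 0 against the gallery
    print(sims.topk(5).indices)
```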
4
67b465680e5142133055d97d
null
null
2025-02-18T08:59:34.204000
Can a Single Model Master Both Multi-turn Conversations and Tool Use? CALM: A Unified Conversational Agentic Language Model
https://cdn-thumbnails.h…s/2502.08820.png
2
{ "_id": "63888d3fd68e37abd599f428", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63888d3fd68e37abd599f428/YaNyxG_oM6IgrHTkFZ6Eq.jpeg", "followerCount": 12, "fullname": "emre can", "isHf": false, "isMod": false, "isPro": true, "name": "emrecanacikgoz", "type": "user" }
true
null
2502.08820
[ { "_id": "67aece59f2e8a2ee35b5affd", "hidden": false, "name": "Emre Can Acikgoz", "status": "claimed_verified", "statusLastChangedAt": "2025-02-18T09:34:01.421Z", "user": { "_id": "63888d3fd68e37abd599f428", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63888d3fd68e37abd599f428/YaNyxG_oM6IgrHTkFZ6Eq.jpeg", "fullname": "emre can", "isPro": true, "type": "user", "user": "emrecanacikgoz" } }, { "_id": "67aece59f2e8a2ee35b5affe", "hidden": false, "name": "Jeremiah Greer", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67aece59f2e8a2ee35b5afff", "hidden": false, "name": "Akul Datta", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67aece59f2e8a2ee35b5b000", "hidden": false, "name": "Ze Yang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67aece59f2e8a2ee35b5b001", "hidden": false, "name": "William Zeng", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67aece59f2e8a2ee35b5b002", "hidden": false, "name": "Oussama Elachqar", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67aece59f2e8a2ee35b5b003", "hidden": false, "name": "Emmanouil Koukoumidis", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67aece59f2e8a2ee35b5b004", "hidden": false, "name": "Dilek Hakkani-Tür", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67aece59f2e8a2ee35b5b005", "hidden": false, "name": "Gokhan Tur", "status": null, "statusLastChangedAt": null, "user": null } ]
2025-02-12T22:18:34
Can a Single Model Master Both Multi-turn Conversations and Tool Use? CALM: A Unified Conversational Agentic Language Model
Large Language Models (LLMs) with API-calling capabilities have enabled the development of effective Language Agents (LAs), while also revolutionizing the conventional task-oriented dialogue (TOD) paradigm. However, current approaches face a critical dilemma: TOD systems are often trained on a limited set of target APIs, requiring new data to maintain their quality when interfacing with new services, while LAs are not trained to maintain user intent over multi-turn conversations. Because both robust multi-turn management and advanced function calling are crucial for effective conversational agents, we evaluate these skills on three popular benchmarks: MultiWOZ 2.4 (TOD), BFCL V3 (LA), and API-Bank (LA), and our analyses reveal that specialized approaches excel in one domain but underperform in the other. To bridge this chasm, we introduce CALM (Conversational Agentic Language Model), a unified approach that integrates both conversational and agentic capabilities. We created CALM-IT, a carefully constructed multi-task dataset that interleaves multi-turn ReAct reasoning with complex API usage. Using CALM-IT, we train three models, CALM 8B, CALM 70B, and CALM 405B, which outperform top domain-specific models, including GPT-4o, across all three benchmarks.
4
67aece5af2e8a2ee35b5b03e
null
null
2025-02-18T07:33:17.294000
The Mirage of Model Editing: Revisiting Evaluation in the Wild
https://cdn-thumbnails.h…s/2502.11177.png
2
{ "_id": "64e4090f222b232f03fe5f63", "avatarUrl": "/avatars/1e97328de374d726f64bf16528d36ca4.svg", "followerCount": null, "fullname": "Wanli Yang", "isHf": false, "isMod": false, "isPro": false, "name": "WenDingY", "type": "user" }
false
null
2502.11177
[ { "_id": "67b47dd2e638b35196b8e014", "hidden": false, "name": "Wanli Yang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b47dd2e638b35196b8e015", "hidden": false, "name": "Fei Sun", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b47dd2e638b35196b8e016", "hidden": false, "name": "Jiajun Tan", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b47dd2e638b35196b8e017", "hidden": false, "name": "Xinyu Ma", "status": "claimed_verified", "statusLastChangedAt": "2025-02-19T09:04:36.300Z", "user": { "_id": "62cd2f13979d883655cd5377", "avatarUrl": "/avatars/400c252d20d68aca56e0d0280498ce17.svg", "fullname": "Xinyu Ma", "isPro": false, "type": "user", "user": "xyma" } }, { "_id": "67b47dd2e638b35196b8e018", "hidden": false, "name": "Qi Cao", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b47dd2e638b35196b8e019", "hidden": false, "name": "Dawei Yin", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b47dd2e638b35196b8e01a", "hidden": false, "name": "Huawei Shen", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b47dd2e638b35196b8e01b", "hidden": false, "name": "Xueqi Cheng", "status": null, "statusLastChangedAt": null, "user": null } ]
2025-02-16T15:57:55
The Mirage of Model Editing: Revisiting Evaluation in the Wild
Despite near-perfect results in artificial evaluations, the effectiveness of model editing in real-world applications remains unexplored. To bridge this gap, we propose to study model editing in question answering (QA) by establishing a rigorous evaluation practice to assess the effectiveness of editing methods in correcting LLMs' errors. It consists of QAEdit, a new benchmark derived from popular QA datasets, and a standardized evaluation framework. Our single-editing experiments indicate that current editing methods perform substantially worse than previously reported (38.5% vs. ~96%). Through module analysis and controlled experiments, we demonstrate that this performance decline stems from issues in the evaluation practices of prior editing research. One key issue is that the inappropriate use of teacher forcing during testing prevents error propagation by feeding ground-truth tokens (inaccessible in real-world scenarios) as input. Furthermore, we simulate real-world deployment by sequential editing, revealing that current approaches fail drastically with only 1000 edits. Our analysis provides a fundamental reexamination of both the real-world applicability of existing model editing methods and their evaluation practices, and establishes a rigorous evaluation framework with key insights to advance reliable and practical model editing research.
10
67b47dd2e638b35196b8e03a
null
null
2025-02-18T07:16:07.632000
Memory, Benchmark & Robots: A Benchmark for Solving Complex Tasks with Reinforcement Learning
https://cdn-thumbnails.h…s/2502.10550.png
2
{ "_id": "6668687caee0993c95b0eb81", "avatarUrl": "/avatars/301fe1f395e0a129b1c9785868fa9858.svg", "followerCount": 2, "fullname": "Egor Cherepanov", "isHf": false, "isMod": false, "isPro": false, "name": "avanturist", "type": "user" }
true
[ "https://cdn-uploads.huggingface.co/production/uploads/6668687caee0993c95b0eb81/zl6FgeOWq-7PC7PRLEyzW.qt" ]
2502.10550
[ { "_id": "67b478517fa6ecaa21d1498d", "hidden": false, "name": "Egor Cherepanov", "status": "claimed_verified", "statusLastChangedAt": "2025-02-18T16:39:34.993Z", "user": { "_id": "6668687caee0993c95b0eb81", "avatarUrl": "/avatars/301fe1f395e0a129b1c9785868fa9858.svg", "fullname": "Egor Cherepanov", "isPro": false, "type": "user", "user": "avanturist" } }, { "_id": "67b478517fa6ecaa21d1498e", "hidden": false, "name": "Nikita Kachaev", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b478517fa6ecaa21d1498f", "hidden": false, "name": "Alexey K. Kovalev", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b478517fa6ecaa21d14990", "hidden": false, "name": "Aleksandr I. Panov", "status": null, "statusLastChangedAt": null, "user": null } ]
2025-02-14T20:46:19
Memory, Benchmark & Robots: A Benchmark for Solving Complex Tasks with Reinforcement Learning
Memory is crucial for enabling agents to tackle complex tasks with temporal and spatial dependencies. While many reinforcement learning (RL) algorithms incorporate memory, the field lacks a universal benchmark to assess an agent's memory capabilities across diverse scenarios. This gap is particularly evident in tabletop robotic manipulation, where memory is essential for solving tasks with partial observability and ensuring robust performance, yet no standardized benchmarks exist. To address this, we introduce MIKASA (Memory-Intensive Skills Assessment Suite for Agents), a comprehensive benchmark for memory RL, with three key contributions: (1) we propose a comprehensive classification framework for memory-intensive RL tasks, (2) we collect MIKASA-Base - a unified benchmark that enables systematic evaluation of memory-enhanced agents across diverse scenarios, and (3) we develop MIKASA-Robo - a novel benchmark of 32 carefully designed memory-intensive tasks that assess memory capabilities in tabletop robotic manipulation. Our contributions establish a unified framework for advancing memory RL research, driving the development of more reliable systems for real-world applications. The code is available at https://sites.google.com/view/memorybenchrobots/.
5
67b478557fa6ecaa21d14a24
null
null
2025-02-18T06:33:31.888000
Dyve: Thinking Fast and Slow for Dynamic Process Verification
https://cdn-thumbnails.h…s/2502.11157.png
2
{ "_id": "6608fa4f5baec84322ec85ea", "avatarUrl": "/avatars/13bdaff931676b065fa1efef06fef922.svg", "followerCount": 1, "fullname": "Zhong", "isHf": false, "isMod": false, "isPro": false, "name": "Jianyuan1", "type": "user" }
true
[ "https://cdn-uploads.huggingface.co/production/uploads/6608fa4f5baec84322ec85ea/iiYwe_FlXRwT1RjPvzF-b.png" ]
2502.11157
[ { "_id": "67b44baa5fd91177ed7760a2", "hidden": false, "name": "Jianyuan Zhong", "status": "claimed_verified", "statusLastChangedAt": "2025-02-18T09:30:45.385Z", "user": { "_id": "6608fa4f5baec84322ec85ea", "avatarUrl": "/avatars/13bdaff931676b065fa1efef06fef922.svg", "fullname": "Zhong", "isPro": false, "type": "user", "user": "Jianyuan1" } }, { "_id": "67b44baa5fd91177ed7760a3", "hidden": false, "name": "Zeju Li", "status": "claimed_verified", "statusLastChangedAt": "2025-02-18T16:39:43.674Z", "user": { "_id": "664ac4f7fe822b08e6f06814", "avatarUrl": "/avatars/23193494fcc8e58faf1eee5f1223aca6.svg", "fullname": "Zeju Li", "isPro": false, "type": "user", "user": "zeju-0727" } }, { "_id": "67b44baa5fd91177ed7760a4", "hidden": false, "name": "Zhijian Xu", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b44baa5fd91177ed7760a5", "hidden": false, "name": "Xiangyu Wen", "status": "claimed_verified", "statusLastChangedAt": "2025-02-18T16:39:45.646Z", "user": { "_id": "641b1b36a5f876fe30c49542", "avatarUrl": "/avatars/ac9267925f45d325c2adb2eb0e38077b.svg", "fullname": "Xiangyu Wen", "isPro": false, "type": "user", "user": "XiangyuWen" } }, { "_id": "67b44baa5fd91177ed7760a6", "hidden": false, "name": "Qiang Xu", "status": null, "statusLastChangedAt": null, "user": null } ]
2025-02-16T15:11:19
Dyve: Thinking Fast and Slow for Dynamic Process Verification
We present Dyve, a dynamic process verifier that enhances reasoning error detection in large language models by integrating fast and slow thinking, inspired by Kahneman's Systems Theory. Dyve adaptively applies immediate token-level confirmation (System 1) for straightforward steps and comprehensive analysis (System 2) for complex ones. Leveraging a novel step-wise consensus-filtered process supervision technique that combines Monte Carlo estimation with LLM-based evaluation, Dyve curates high-quality supervision signals from noisy data. Experimental results on ProcessBench and the MATH dataset confirm that Dyve significantly outperforms existing process-based verifiers and boosts performance in Best-of-N settings.
6
67b44bab5fd91177ed7760ca
null
null
2025-02-18T06:07:36.212000
Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention
https://cdn-thumbnails.h…s/2502.11089.png
9
{ "_id": "645e054ff7a55f0d780a8ff7", "avatarUrl": "/avatars/9614510443bee3bd5d6266efd1c39fc1.svg", "followerCount": 5, "fullname": "Chunjiang Ge", "isHf": false, "isMod": false, "isPro": false, "name": "HelloJiang", "type": "user" }
true
null
2502.11089
[ { "_id": "67b43211d3c5f50aa9c03a2d", "hidden": false, "name": "Jingyang Yuan", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b43211d3c5f50aa9c03a2e", "hidden": false, "name": "Huazuo Gao", "status": "admin_assigned", "statusLastChangedAt": "2025-02-18T16:43:19.672Z", "user": { "_id": "64e370be59aa5366642ac329", "avatarUrl": "/avatars/0fa1eb6ac6c1aeff3e65bc86a6617f64.svg", "fullname": "Huazuo Gao", "isPro": false, "type": "user", "user": "gaohuazuo" } }, { "_id": "67b43211d3c5f50aa9c03a2f", "hidden": false, "name": "Damai Dai", "status": "admin_assigned", "statusLastChangedAt": "2025-02-18T16:43:30.267Z", "user": { "_id": "659389f8de82e1ef7b9a8b13", "avatarUrl": "/avatars/896ed9f4cdbd317493b303d070b7e12a.svg", "fullname": "Damai Dai", "isPro": false, "type": "user", "user": "DeepSeekDDM" } }, { "_id": "67b43211d3c5f50aa9c03a30", "hidden": false, "name": "Junyu Luo", "status": "admin_assigned", "statusLastChangedAt": "2025-02-18T16:43:36.295Z", "user": { "_id": "66e6c6372c78909baf44cdf8", "avatarUrl": "/avatars/458ea1d545d7c022b0463e7fbbd91db1.svg", "fullname": "Junyu Luo", "isPro": false, "type": "user", "user": "junyuluo" } }, { "_id": "67b43211d3c5f50aa9c03a31", "hidden": false, "name": "Liang Zhao", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b43211d3c5f50aa9c03a32", "hidden": false, "name": "Zhengyan Zhang", "status": "admin_assigned", "statusLastChangedAt": "2025-02-18T16:44:02.477Z", "user": { "_id": "65654ed2219af7f841640f27", "avatarUrl": "/avatars/e6904b3479fc5e65ea1f752919ca8290.svg", "fullname": "Zhengyan Zhang", "isPro": false, "type": "user", "user": "ZhengyanZhang" } }, { "_id": "67b43211d3c5f50aa9c03a33", "hidden": false, "name": "Zhenda Xie", "status": "admin_assigned", "statusLastChangedAt": "2025-02-18T16:44:16.691Z", "user": { "_id": "6797ca96e9e2793006a15110", "avatarUrl": "/avatars/2d393d6e5fc2e1a867f7fdd44e055a2f.svg", "fullname": "zhenda xie", "isPro": false, "type": "user", "user": "Zhendaxie" } }, { "_id": "67b43211d3c5f50aa9c03a34", "hidden": false, "name": "Y. X. 
Wei", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b43211d3c5f50aa9c03a35", "hidden": false, "name": "Lean Wang", "status": "admin_assigned", "statusLastChangedAt": "2025-02-18T16:44:26.979Z", "user": { "_id": "650c509472afb1e60e6151ae", "avatarUrl": "/avatars/c16ab5053a586819dc2b965303215ff7.svg", "fullname": "Lean Wang", "isPro": false, "type": "user", "user": "AdaHousman" } }, { "_id": "67b43211d3c5f50aa9c03a36", "hidden": false, "name": "Zhiping Xiao", "status": "admin_assigned", "statusLastChangedAt": "2025-02-18T16:44:34.873Z", "user": { "_id": "66ab566e30c55e83b02aa050", "avatarUrl": "/avatars/62692be88b9ad34ad3f474fb0359ae20.svg", "fullname": "Zhiping Xiao", "isPro": false, "type": "user", "user": "Shockzipper" } }, { "_id": "67b43211d3c5f50aa9c03a37", "hidden": false, "name": "Yuqing Wang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b43211d3c5f50aa9c03a38", "hidden": false, "name": "Chong Ruan", "status": "admin_assigned", "statusLastChangedAt": "2025-02-18T16:45:33.988Z", "user": { "_id": "6398203609f12714ed1935c2", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6398203609f12714ed1935c2/uXgl0LgKnFYjq1Wz39-a6.jpeg", "fullname": "Chong Ruan", "isPro": false, "type": "user", "user": "Chester111" } }, { "_id": "67b43211d3c5f50aa9c03a39", "hidden": false, "name": "Ming Zhang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b43211d3c5f50aa9c03a3a", "hidden": false, "name": "Wenfeng Liang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b43211d3c5f50aa9c03a3b", "hidden": false, "name": "Wangding Zeng", "status": null, "statusLastChangedAt": null, "user": null } ]
2025-02-16T11:53:44
Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention
Long-context modeling is crucial for next-generation language models, yet the high computational cost of standard attention mechanisms poses significant computational challenges. Sparse attention offers a promising direction for improving efficiency while maintaining model capabilities. We present NSA, a Natively trainable Sparse Attention mechanism that integrates algorithmic innovations with hardware-aligned optimizations to achieve efficient long-context modeling. NSA employs a dynamic hierarchical sparse strategy, combining coarse-grained token compression with fine-grained token selection to preserve both global context awareness and local precision. Our approach advances sparse attention design with two key innovations: (1) We achieve substantial speedups through arithmetic intensity-balanced algorithm design, with implementation optimizations for modern hardware. (2) We enable end-to-end training, reducing pretraining computation without sacrificing model performance. As shown in Figure 1, experiments show the model pretrained with NSA maintains or exceeds Full Attention models across general benchmarks, long-context tasks, and instruction-based reasoning. Meanwhile, NSA achieves substantial speedups over Full Attention on 64k-length sequences across decoding, forward propagation, and backward propagation, validating its efficiency throughout the model lifecycle.
139
67b43212d3c5f50aa9c03a5c
null
null
2025-02-18T05:28:54.029000
Better Embeddings with Coupled Adam
https://cdn-thumbnails.h…s/2502.08441.png
3
{ "_id": "66867e1675f10ce7ef96180e", "avatarUrl": "/avatars/ac85c00ba9d4dc48887b8864a0626743.svg", "followerCount": null, "fullname": "Felix Stollenwerk", "isHf": false, "isMod": false, "isPro": false, "name": "flxst", "type": "user" }
true
null
2502.08441
[ { "_id": "67b30311a2b3622dd42a51ff", "hidden": false, "name": "Felix Stollenwerk", "status": "claimed_verified", "statusLastChangedAt": "2025-02-18T09:32:36.770Z", "user": { "_id": "66867e1675f10ce7ef96180e", "avatarUrl": "/avatars/ac85c00ba9d4dc48887b8864a0626743.svg", "fullname": "Felix Stollenwerk", "isPro": false, "type": "user", "user": "flxst" } }, { "_id": "67b30311a2b3622dd42a5200", "hidden": false, "name": "Tobias Stollenwerk", "status": null, "statusLastChangedAt": null, "user": null } ]
2025-02-12T14:32:17
Better Embeddings with Coupled Adam
Despite their remarkable capabilities, LLMs learn word representations that exhibit the undesirable yet poorly understood feature of anisotropy. In this paper, we argue that the second moment in Adam is a cause of anisotropic embeddings, and suggest a modified optimizer called Coupled Adam to mitigate the problem. Our experiments demonstrate that Coupled Adam significantly improves the quality of embeddings, while also leading to better upstream and downstream performance on large enough datasets.
1
67b30312a2b3622dd42a522d
null
null
2025-02-18T04:37:21.573000
Show Me the Work: Fact-Checkers' Requirements for Explainable Automated Fact-Checking
https://cdn-thumbnails.h…s/2502.09083.png
2
{ "_id": "6698cffdb2ebada9f4a7e7d7", "avatarUrl": "/avatars/e66d946c14595d3b008185f2be8d2f57.svg", "followerCount": 2, "fullname": "Greta Warren", "isHf": false, "isMod": false, "isPro": false, "name": "gretawarren", "type": "user" }
true
[ "https://cdn-uploads.huggingface.co/production/uploads/6698cffdb2ebada9f4a7e7d7/55xAEeg9Xsk87DXHTH9gM.png" ]
2502.09083
[ { "_id": "67b30726d4665a0448e6436d", "hidden": false, "name": "Greta Warren", "status": "claimed_verified", "statusLastChangedAt": "2025-02-18T09:32:34.585Z", "user": { "_id": "6698cffdb2ebada9f4a7e7d7", "avatarUrl": "/avatars/e66d946c14595d3b008185f2be8d2f57.svg", "fullname": "Greta Warren", "isPro": false, "type": "user", "user": "gretawarren" } }, { "_id": "67b30726d4665a0448e6436e", "hidden": false, "name": "Irina Shklovski", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b30726d4665a0448e6436f", "hidden": false, "name": "Isabelle Augenstein", "status": "admin_assigned", "statusLastChangedAt": "2025-02-19T15:49:35.332Z", "user": { "_id": "608918b7df398c3b285ce960", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1621507769190-608918b7df398c3b285ce960.jpeg", "fullname": "Isabelle Augenstein", "isPro": false, "type": "user", "user": "IAugenstein" } } ]
2025-02-13T08:56:25
Show Me the Work: Fact-Checkers' Requirements for Explainable Automated Fact-Checking
The pervasiveness of large language models and generative AI in online media has amplified the need for effective automated fact-checking to assist fact-checkers in tackling the increasing volume and sophistication of misinformation. The complex nature of fact-checking demands that automated fact-checking systems provide explanations that enable fact-checkers to scrutinise their outputs. However, it is unclear how these explanations should align with the decision-making and reasoning processes of fact-checkers to be effectively integrated into their workflows. Through semi-structured interviews with fact-checking professionals, we bridge this gap by: (i) providing an account of how fact-checkers assess evidence, make decisions, and explain their processes; (ii) examining how fact-checkers use automated tools in practice; and (iii) identifying fact-checker explanation requirements for automated fact-checking tools. The findings show unmet explanation needs and identify important criteria for replicable fact-checking explanations that trace the model's reasoning path, reference specific evidence, and highlight uncertainty and information gaps.
4
67b30727d4665a0448e6438d
null
null
2025-02-18T04:34:15.786000
MagicArticulate: Make Your 3D Models Articulation-Ready
https://cdn-thumbnails.h…s/2502.12135.png
2
{ "_id": "64fb31a34c8924c4fe7498bc", "avatarUrl": "/avatars/6c8e4a66e1b8b3c786a4000210089392.svg", "followerCount": 4, "fullname": "Chaoyue Song", "isHf": false, "isMod": false, "isPro": false, "name": "chaoyue7", "type": "user" }
true
null
2502.12135
[ { "_id": "67b4028237db78705fb256e1", "hidden": false, "name": "Chaoyue Song", "status": "claimed_verified", "statusLastChangedAt": "2025-02-18T09:31:40.771Z", "user": { "_id": "64fb31a34c8924c4fe7498bc", "avatarUrl": "/avatars/6c8e4a66e1b8b3c786a4000210089392.svg", "fullname": "Chaoyue Song", "isPro": false, "type": "user", "user": "chaoyue7" } }, { "_id": "67b4028237db78705fb256e2", "hidden": false, "name": "Jianfeng Zhang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b4028237db78705fb256e3", "hidden": false, "name": "Xiu Li", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b4028237db78705fb256e4", "hidden": false, "name": "Fan Yang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b4028237db78705fb256e5", "hidden": false, "name": "Yiwen Chen", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b4028237db78705fb256e6", "hidden": false, "name": "Zhongcong Xu", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b4028237db78705fb256e7", "hidden": false, "name": "Jun Hao Liew", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b4028237db78705fb256e8", "hidden": false, "name": "Xiaoyang Guo", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b4028237db78705fb256e9", "hidden": false, "name": "Fayao Liu", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b4028237db78705fb256ea", "hidden": false, "name": "Jiashi Feng", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b4028237db78705fb256eb", "hidden": false, "name": "Guosheng Lin", "status": null, "statusLastChangedAt": null, "user": null } ]
2025-02-17T18:53:27
MagicArticulate: Make Your 3D Models Articulation-Ready
With the explosive growth of 3D content creation, there is an increasing demand for automatically converting static 3D models into articulation-ready versions that support realistic animation. Traditional approaches rely heavily on manual annotation, which is both time-consuming and labor-intensive. Moreover, the lack of large-scale benchmarks has hindered the development of learning-based solutions. In this work, we present MagicArticulate, an effective framework that automatically transforms static 3D models into articulation-ready assets. Our key contributions are threefold. First, we introduce Articulation-XL, a large-scale benchmark containing over 33k 3D models with high-quality articulation annotations, carefully curated from Objaverse-XL. Second, we propose a novel skeleton generation method that formulates the task as a sequence modeling problem, leveraging an auto-regressive transformer to naturally handle varying numbers of bones or joints within skeletons and their inherent dependencies across different 3D models. Third, we predict skinning weights using a functional diffusion process that incorporates volumetric geodesic distance priors between vertices and joints. Extensive experiments demonstrate that MagicArticulate significantly outperforms existing methods across diverse object categories, achieving high-quality articulation that enables realistic animation. Project page: https://chaoyuesong.github.io/MagicArticulate.
8
67b4028437db78705fb25726
null
null
2025-02-18T04:33:41.120000
I Think, Therefore I Diffuse: Enabling Multimodal In-Context Reasoning in Diffusion Models
https://cdn-thumbnails.h…s/2502.10458.png
3
{ "_id": "6354bda206d707b33249c4c2", "avatarUrl": "/avatars/bbd9f76274ac52214df92084d50bc7b5.svg", "followerCount": 1, "fullname": "Zhenxing Mi", "isHf": false, "isMod": false, "isPro": false, "name": "Mifucius", "type": "user" }
true
null
2502.10458
[ { "_id": "67b3ea0f4dd7ea0538ce589d", "hidden": false, "name": "Zhenxing Mi", "status": "claimed_verified", "statusLastChangedAt": "2025-02-18T09:31:52.837Z", "user": { "_id": "6354bda206d707b33249c4c2", "avatarUrl": "/avatars/bbd9f76274ac52214df92084d50bc7b5.svg", "fullname": "Zhenxing Mi", "isPro": false, "type": "user", "user": "Mifucius" } }, { "_id": "67b3ea0f4dd7ea0538ce589e", "hidden": false, "name": "Kuan-Chieh Wang", "status": "admin_assigned", "statusLastChangedAt": "2025-02-19T15:21:46.349Z", "user": { "_id": "648ca58a39d2584ee47efef6", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/648ca58a39d2584ee47efef6/R7B72bnwc59mdK45rmzYS.png", "fullname": "Kuan-Chieh Wang", "isPro": false, "type": "user", "user": "wangkua1" } }, { "_id": "67b3ea0f4dd7ea0538ce589f", "hidden": false, "name": "Guocheng Qian", "status": "admin_assigned", "statusLastChangedAt": "2025-02-19T15:21:52.861Z", "user": { "_id": "645fed74335c21d19f3bf76c", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/645fed74335c21d19f3bf76c/gwVsllRWtSHbg4a1erkdF.jpeg", "fullname": "Guocheng Qian", "isPro": false, "type": "user", "user": "guochengqian" } }, { "_id": "67b3ea0f4dd7ea0538ce58a0", "hidden": false, "name": "Hanrong Ye", "status": "admin_assigned", "statusLastChangedAt": "2025-02-19T15:21:58.865Z", "user": { "_id": "62d3ae4d894e7fe42def988f", "avatarUrl": "/avatars/3aafc55d9783459f9a79546fc31dd68a.svg", "fullname": "Hanrong Ye", "isPro": false, "type": "user", "user": "leoye" } }, { "_id": "67b3ea0f4dd7ea0538ce58a1", "hidden": false, "name": "Runtao Liu", "status": "admin_assigned", "statusLastChangedAt": "2025-02-19T15:22:08.197Z", "user": { "_id": "64a653e330dd11336539c439", "avatarUrl": "/avatars/348910ea160829707ac5e74f9f824c60.svg", "fullname": "liuruntao", "isPro": false, "type": "user", "user": "runtao" } }, { "_id": "67b3ea0f4dd7ea0538ce58a2", "hidden": false, "name": "Sergey Tulyakov", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b3ea0f4dd7ea0538ce58a3", "hidden": false, "name": "Kfir Aberman", "status": "admin_assigned", "statusLastChangedAt": "2025-02-19T15:22:17.444Z", "user": { "_id": "64db29097266618e853dd6ec", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/64db29097266618e853dd6ec/r0MaPQCfAxeKv3ycdKYLK.jpeg", "fullname": "Kfir Aberman", "isPro": false, "type": "user", "user": "kaberman" } }, { "_id": "67b3ea0f4dd7ea0538ce58a4", "hidden": false, "name": "Dan Xu", "status": "claimed_verified", "statusLastChangedAt": "2025-02-19T09:04:53.095Z", "user": { "_id": "66feab48651e00e22f33222e", "avatarUrl": "/avatars/7344377e2c796c7ec85194bb2fc78521.svg", "fullname": "Dan Xu", "isPro": false, "type": "user", "user": "danxuhk" } } ]
2025-02-12T05:30:08
I Think, Therefore I Diffuse: Enabling Multimodal In-Context Reasoning in Diffusion Models
This paper presents ThinkDiff, a novel alignment paradigm that empowers text-to-image diffusion models with multimodal in-context understanding and reasoning capabilities by integrating the strengths of vision-language models (VLMs). Existing multimodal diffusion finetuning methods largely focus on pixel-level reconstruction rather than in-context reasoning, and are constrained by the complexity and limited availability of reasoning-based datasets. ThinkDiff addresses these challenges by leveraging vision-language training as a proxy task, aligning VLMs with the decoder of an encoder-decoder large language model (LLM) instead of a diffusion decoder. This proxy task builds on the observation that the LLM decoder shares the same input feature space with diffusion decoders that use the corresponding LLM encoder for prompt embedding. As a result, aligning VLMs with diffusion decoders can be simplified through alignment with the LLM decoder. Without complex training and datasets, ThinkDiff effectively unleashes understanding, reasoning, and composing capabilities in diffusion models. Experiments demonstrate that ThinkDiff significantly improves accuracy from 19.2% to 46.3% on the challenging CoBSAT benchmark for multimodal in-context reasoning generation, with only 5 hours of training on 4 A100 GPUs. Additionally, ThinkDiff demonstrates exceptional performance in composing multiple images and texts into logically coherent images. Project page: https://mizhenxing.github.io/ThinkDiff.
30
67b3ea124dd7ea0538ce592d
https://mizhenxing.github.io/ThinkDiff
https://github.com/MiZhenxing/ThinkDiff
2025-02-18T04:20:25.916000
Intuitive physics understanding emerges from self-supervised pretraining on natural videos
https://cdn-thumbnails.h…s/2502.11831.png
2
{ "_id": "5f1158120c833276f61f1a84", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1608042047613-5f1158120c833276f61f1a84.jpeg", "followerCount": 777, "fullname": "Niels Rogge", "isHf": true, "isMod": false, "isPro": false, "name": "nielsr", "type": "user" }
false
null
2502.11831
[ { "_id": "67b450cf315f7b69956df3d6", "hidden": false, "name": "Quentin Garrido", "status": "admin_assigned", "statusLastChangedAt": "2025-02-19T15:28:09.217Z", "user": { "_id": "63049022412a1b9d381b9dcb", "avatarUrl": "/avatars/7382c0a0e3f5609b754ec09a309d33f6.svg", "fullname": "Quentin Garrido", "isPro": false, "type": "user", "user": "garridoq" } }, { "_id": "67b450cf315f7b69956df3d7", "hidden": false, "name": "Nicolas Ballas", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b450cf315f7b69956df3d8", "hidden": false, "name": "Mahmoud Assran", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b450cf315f7b69956df3d9", "hidden": false, "name": "Adrien Bardes", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b450cf315f7b69956df3da", "hidden": false, "name": "Laurent Najman", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b450cf315f7b69956df3db", "hidden": false, "name": "Michael Rabbat", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b450cf315f7b69956df3dc", "hidden": false, "name": "Emmanuel Dupoux", "status": "admin_assigned", "statusLastChangedAt": "2025-02-19T15:44:00.610Z", "user": { "_id": "63317d2118711776b4663c3a", "avatarUrl": "/avatars/7dedd1934c1000b6f81a2a37ec348347.svg", "fullname": "Emmanuel Dupoux", "isPro": false, "type": "user", "user": "edupoux" } }, { "_id": "67b450cf315f7b69956df3dd", "hidden": false, "name": "Yann LeCun", "status": "admin_assigned", "statusLastChangedAt": "2025-02-19T15:43:53.902Z", "user": { "_id": "64ed0b8c2203a126eb1a5b9a", "avatarUrl": "/avatars/9156dc406ed3f9ee62b73657ac20f5ed.svg", "fullname": "Yann LeCun", "isPro": false, "type": "user", "user": "ylecun" } } ]
2025-02-17T14:27:14
Intuitive physics understanding emerges from self-supervised pretraining on natural videos
We investigate the emergence of intuitive physics understanding in general-purpose deep neural network models trained to predict masked regions in natural videos. Leveraging the violation-of-expectation framework, we find that video prediction models trained to predict outcomes in a learned representation space demonstrate an understanding of various intuitive physics properties, such as object permanence and shape consistency. In contrast, video prediction in pixel space and multimodal large language models, which reason through text, achieve performance closer to chance. Our comparisons of these architectures reveal that jointly learning an abstract representation space while predicting missing parts of sensory input, akin to predictive coding, is sufficient to acquire an understanding of intuitive physics, and that even models trained on one week of unique video achieve above chance performance. This challenges the idea that core knowledge -- a set of innate systems to help understand the world -- needs to be hardwired to develop an understanding of intuitive physics.
18
67b450d0315f7b69956df3f9
null
https://github.com/facebookresearch/jepa-intuitive-physics
2025-02-18T04:16:28.219000
Towards Data-Efficient Pretraining for Atomic Property Prediction
https://cdn-thumbnails.h…s/2502.11085.png
3
{ "_id": "642b51385bf2355d02a23d15", "avatarUrl": "/avatars/87985347643b2647555f2453fa4d94fb.svg", "followerCount": 4, "fullname": "Hasan Abed Al Kader Hammoud", "isHf": false, "isMod": false, "isPro": true, "name": "hammh0a", "type": "user" }
true
[ "https://cdn-uploads.huggingface.co/production/uploads/642b51385bf2355d02a23d15/bLvTbh56AkUmcmRst8mT3.png" ]
2502.11085
[ { "_id": "67b44f44620ae0bad17d6699", "hidden": false, "name": "Yasir Ghunaim", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b44f44620ae0bad17d669a", "hidden": false, "name": "Hasan Abed Al Kader Hammoud", "status": "claimed_verified", "statusLastChangedAt": "2025-02-18T09:30:43.057Z", "user": { "_id": "642b51385bf2355d02a23d15", "avatarUrl": "/avatars/87985347643b2647555f2453fa4d94fb.svg", "fullname": "Hasan Abed Al Kader Hammoud", "isPro": true, "type": "user", "user": "hammh0a" } }, { "_id": "67b44f44620ae0bad17d669b", "hidden": false, "name": "Bernard Ghanem", "status": null, "statusLastChangedAt": null, "user": null } ]
2025-02-16T11:46:23
Towards Data-Efficient Pretraining for Atomic Property Prediction
This paper challenges the recent paradigm in atomic property prediction that links progress to growing dataset sizes and computational resources. We show that pretraining on a carefully selected, task-relevant dataset can match or even surpass large-scale pretraining, while using as little as 1/24th of the computational cost. We introduce the Chemical Similarity Index (CSI), a novel metric for molecular graphs inspired by computer vision's Fréchet Inception Distance, which quantifies the alignment between upstream pretraining datasets and downstream tasks. By selecting the most relevant dataset with minimal CSI distance, we show that models pretrained on a smaller, focused dataset consistently outperform those pretrained on massive, mixed datasets such as JMP, even when those larger datasets include the relevant dataset. Counterintuitively, we also find that indiscriminately adding more data can degrade model performance when the additional data poorly aligns with the task at hand. Our findings highlight that quality often outperforms quantity in pretraining for atomic property prediction.
3
67b44f45620ae0bad17d66b0
null
null
2025-02-18T03:53:47.570000
PhysReason: A Comprehensive Benchmark towards Physics-Based Reasoning
https://cdn-thumbnails.h…s/2502.12054.png
2
{ "_id": "6602548a68d519ed324b47c5", "avatarUrl": "/avatars/5ab411f87440cc2a98c7a1c6a3ed5548.svg", "followerCount": 4, "fullname": "ChengyouJia", "isHf": false, "isMod": false, "isPro": false, "name": "ChengyouJia", "type": "user" }
true
null
2502.12054
[ { "_id": "67b44a6888813676da9f8239", "hidden": false, "name": "Xinyu Zhang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b44a6888813676da9f823a", "hidden": false, "name": "Yuxuan Dong", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b44a6888813676da9f823b", "hidden": false, "name": "Yanrui Wu", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b44a6888813676da9f823c", "hidden": false, "name": "Jiaxing Huang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b44a6888813676da9f823d", "hidden": false, "name": "Chengyou Jia", "status": "claimed_verified", "statusLastChangedAt": "2025-02-18T09:30:47.313Z", "user": { "_id": "6602548a68d519ed324b47c5", "avatarUrl": "/avatars/5ab411f87440cc2a98c7a1c6a3ed5548.svg", "fullname": "ChengyouJia", "isPro": false, "type": "user", "user": "ChengyouJia" } }, { "_id": "67b44a6888813676da9f823e", "hidden": false, "name": "Basura Fernando", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b44a6888813676da9f823f", "hidden": false, "name": "Mike Zheng Shou", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b44a6888813676da9f8240", "hidden": false, "name": "Lingling Zhang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b44a6888813676da9f8241", "hidden": false, "name": "Jun Liu", "status": null, "statusLastChangedAt": null, "user": null } ]
2025-02-17T17:24:14
PhysReason: A Comprehensive Benchmark towards Physics-Based Reasoning
Large language models demonstrate remarkable capabilities across various domains, especially mathematics and logic reasoning. However, current evaluations overlook physics-based reasoning - a complex task requiring physics theorems and constraints. We present PhysReason, a 1,200-problem benchmark comprising knowledge-based (25%) and reasoning-based (75%) problems, where the latter are divided into three difficulty levels (easy, medium, hard). Notably, problems require an average of 8.1 solution steps, with hard problems requiring 15.6, reflecting the complexity of physics-based reasoning. We propose the Physics Solution Auto Scoring Framework, incorporating efficient answer-level and comprehensive step-level evaluations. Top-performing models like Deepseek-R1, Gemini-2.0-Flash-Thinking, and o3-mini-high achieve less than 60% on answer-level evaluation, with performance dropping from knowledge questions (75.11%) to hard problems (31.95%). Through step-level evaluation, we identified four key bottlenecks: Physics Theorem Application, Physics Process Understanding, Calculation, and Physics Condition Analysis. These findings position PhysReason as a novel and comprehensive benchmark for evaluating physics-based reasoning capabilities in large language models. Our code and data will be published at https://dxzxy12138.github.io/PhysReason.
5
67b44a6988813676da9f82d0
null
null
2025-02-18T02:26:18.856000
Large Language Models and Mathematical Reasoning Failures
https://cdn-thumbnails.h…s/2502.11574.png
3
{ "_id": "6033e34a9aa44495c80dd043", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1614079701740-6033e34a9aa44495c80dd043.jpeg", "followerCount": 39, "fullname": "Birger Moell", "isHf": false, "isMod": false, "isPro": false, "name": "birgermoell", "type": "user" }
true
null
2502.11574
[ { "_id": "67b435c29e5685b308a8edac", "hidden": false, "name": "Johan Boye", "status": "extracted_pending", "statusLastChangedAt": "2025-02-18T07:24:50.956Z", "user": { "_id": "65bcbc01d6d0ffbceb8b2e6e", "avatarUrl": "/avatars/73edb2d6b7b11208439ac88b365079e8.svg", "fullname": "Johan Boye", "isPro": false, "type": "user", "user": "jboye" } }, { "_id": "67b435c29e5685b308a8edad", "hidden": false, "name": "Birger Moell", "status": "claimed_verified", "statusLastChangedAt": "2025-02-18T09:30:49.328Z", "user": { "_id": "6033e34a9aa44495c80dd043", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1614079701740-6033e34a9aa44495c80dd043.jpeg", "fullname": "Birger Moell", "isPro": false, "type": "user", "user": "birgermoell" } } ]
2025-02-17T09:07:32
Large Language Models and Mathematical Reasoning Failures
This paper investigates the mathematical reasoning capabilities of large language models (LLMs) using 50 newly constructed high-school-level word problems. Unlike prior studies that focus solely on answer correctness, we rigorously analyze both final answers and solution steps to identify reasoning failures. Evaluating eight state-of-the-art models - including Mixtral, Llama, Gemini, GPT-4o, and OpenAI's o1 variants - we find that while newer models (e.g., o3-mini, deepseek-r1) achieve higher accuracy, all models exhibit errors in spatial reasoning, strategic planning, and arithmetic, sometimes producing correct answers through flawed logic. Common failure modes include unwarranted assumptions, over-reliance on numerical patterns, and difficulty translating physical intuition into mathematical steps. Manual analysis reveals that models struggle with problems requiring multi-step deduction or real-world knowledge, despite possessing broad mathematical knowledge. Our results underscore the importance of evaluating reasoning processes, not just answers, and caution against overestimating LLMs' problem-solving proficiency. The study highlights persistent gaps in LLMs' generalization abilities, emphasizing the need for targeted improvements in structured reasoning and constraint handling.
3
67b435c29e5685b308a8edf1
null
null
2025-02-18T02:23:29.869000
Language Complexity Measurement as a Noisy Zero-Shot Proxy for Evaluating LLM Performance
https://cdn-thumbnails.h…s/2502.11578.png
2
{ "_id": "6033e34a9aa44495c80dd043", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1614079701740-6033e34a9aa44495c80dd043.jpeg", "followerCount": 39, "fullname": "Birger Moell", "isHf": false, "isMod": false, "isPro": false, "name": "birgermoell", "type": "user" }
true
null
2502.11578
[ { "_id": "67b435475bff5f34c1ebee1b", "hidden": false, "name": "Birger Moell", "status": "claimed_verified", "statusLastChangedAt": "2025-02-18T09:30:52.639Z", "user": { "_id": "6033e34a9aa44495c80dd043", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1614079701740-6033e34a9aa44495c80dd043.jpeg", "fullname": "Birger Moell", "isPro": false, "type": "user", "user": "birgermoell" } }, { "_id": "67b435475bff5f34c1ebee1c", "hidden": false, "name": "Johan Boye", "status": "extracted_pending", "statusLastChangedAt": "2025-02-18T07:22:48.554Z", "user": { "_id": "65bcbc01d6d0ffbceb8b2e6e", "avatarUrl": "/avatars/73edb2d6b7b11208439ac88b365079e8.svg", "fullname": "Johan Boye", "isPro": false, "type": "user", "user": "jboye" } } ]
2025-02-17T09:09:58
Language Complexity Measurement as a Noisy Zero-Shot Proxy for Evaluating LLM Performance
Large Language Models (LLMs) have made significant strides in natural language generation but often face challenges in tasks requiring precise calculations and structural analysis. This paper investigates the performance of state-of-the-art LLMs on language complexity measurement tasks, through the computation of the LIX readability metric and Average Dependency Distance (ADD). Using Swedish high school and university-level essays, we evaluate the models' abilities to compute LIX scores and perform dependency parsing, comparing their results to established ground truths. Our findings reveal that while all models demonstrate some capacity for these tasks, ChatGPT-o1-mini performs most consistently, achieving the highest accuracy in both LIX computation and dependency parsing. Additionally, we observe a strong, significant correlation of -0.875 (p = 0.026, N = 6) between the models' accuracy in computing LIX and their overall performance on the Massive Multitask Language Understanding (MMLU) benchmark. These results suggest that language complexity measurement abilities can serve as a noisy zero-shot proxy for assessing the general capabilities of LLMs, providing a practical method for model evaluation without the need for extensive benchmarking datasets.
0
67b435485bff5f34c1ebee52
null
null
2025-02-18T01:45:36.359000
System Message Generation for User Preferences using Open-Source Models
https://cdn-thumbnails.h…s/2502.11330.png
2
{ "_id": "64587be872b60ae7a3817858", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/64587be872b60ae7a3817858/BbdOOxOCEzWTvEpkWp8MM.png", "followerCount": 3, "fullname": "Minbyul Jeong", "isHf": false, "isMod": false, "isPro": false, "name": "Minbyul", "type": "user" }
true
null
2502.11330
[ { "_id": "67b42c5632929e97a92dee90", "hidden": false, "name": "Minbyul Jeong", "status": "claimed_verified", "statusLastChangedAt": "2025-02-19T09:04:45.723Z", "user": { "_id": "64587be872b60ae7a3817858", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/64587be872b60ae7a3817858/BbdOOxOCEzWTvEpkWp8MM.png", "fullname": "Minbyul Jeong", "isPro": false, "type": "user", "user": "Minbyul" } }, { "_id": "67b42c5632929e97a92dee91", "hidden": false, "name": "Jungho Cho", "status": "claimed_verified", "statusLastChangedAt": "2025-02-21T09:59:57.458Z", "user": { "_id": "6596a87480a4560b8f9b9532", "avatarUrl": "/avatars/33f39ee01c1648f1daca41a40a4964fb.svg", "fullname": "Christopher Cho", "isPro": false, "type": "user", "user": "ChristopherCho" } }, { "_id": "67b42c5632929e97a92dee92", "hidden": false, "name": "Minsoo Khang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b42c5632929e97a92dee93", "hidden": false, "name": "Dawoon Jung", "status": "admin_assigned", "statusLastChangedAt": "2025-02-19T15:48:50.413Z", "user": { "_id": "6268ec3bd4aa92e53a66d028", "avatarUrl": "/avatars/af7fd3b4a49aca8b71617f3f17673227.svg", "fullname": "dawoon jung", "isPro": false, "type": "user", "user": "noowad" } }, { "_id": "67b42c5632929e97a92dee94", "hidden": false, "name": "Teakgyu Hong", "status": "admin_assigned", "statusLastChangedAt": "2025-02-19T15:48:42.592Z", "user": { "_id": "6551e62ca12178dd75f4592f", "avatarUrl": "/avatars/f4213f9cb95847182bcca3281b1a042c.svg", "fullname": "Teakgyu Hong", "isPro": false, "type": "user", "user": "tghong" } } ]
2025-02-17T01:05:31
System Message Generation for User Preferences using Open-Source Models
System messages play a crucial role in interactions with large language models (LLMs), often serving as prompts to initiate conversations. Through system messages, users can assign specific roles, perform intended tasks, incorporate background information, and specify various output formats and communication styles. Despite such versatility, publicly available data often lack system messages and are subject to strict license constraints in industry. Manually labeling publicly available data with system messages that align with user instructions demands significant resources. In view of these challenges, our work introduces SysGen, a pipeline for generating system messages, along with better-aligned assistant responses, from supervised fine-tuning data that lacks system messages. Training on SysGen data yields substantial improvements in the alignment of model responses with system messages and user instructions, as demonstrated across various open-source models on the Multifacet benchmark, while having minimal impact on other unseen benchmarks such as Open LLM Leaderboard 2. Our qualitative analysis highlights the importance of diverse system messages to ensure better adaptability across different contexts.
15
67b42c5732929e97a92deed7
null
null
2025-02-18T01:02:25.236000
How Do LLMs Acquire New Knowledge? A Knowledge Circuits Perspective on Continual Pre-Training
https://cdn-thumbnails.h…s/2502.11196.png
6
{ "_id": "620b3bbb0668e435407c8d0a", "avatarUrl": "/avatars/e0fccbb2577d76088e09f054c35cffbc.svg", "followerCount": 19, "fullname": "Ningyu Zhang", "isHf": false, "isMod": false, "isPro": false, "name": "Ningyu", "type": "user" }
true
[ "https://cdn-uploads.huggingface.co/production/uploads/620b3bbb0668e435407c8d0a/_LGnwvwslWc3YDIirfOKS.png" ]
2502.11196
[ { "_id": "67b42223c2fe54b8d43efed6", "hidden": false, "name": "Yixin Ou", "status": "admin_assigned", "statusLastChangedAt": "2025-02-19T15:22:40.840Z", "user": { "_id": "6241749cf80bd930bd99f3dd", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1669210243382-6241749cf80bd930bd99f3dd.jpeg", "fullname": "Ou Yixin", "isPro": false, "type": "user", "user": "OE-Heart" } }, { "_id": "67b42223c2fe54b8d43efed7", "hidden": false, "name": "Yunzhi Yao", "status": "admin_assigned", "statusLastChangedAt": "2025-02-19T15:22:56.226Z", "user": { "_id": "6122fbe636a4c36a99dbea7b", "avatarUrl": "/avatars/c0cd2c1ef58e315d9adda9d26000f625.svg", "fullname": "Yunzhi Yao", "isPro": false, "type": "user", "user": "cowTodd" } }, { "_id": "67b42223c2fe54b8d43efed8", "hidden": false, "name": "Ningyu Zhang", "status": "claimed_verified", "statusLastChangedAt": "2025-02-18T09:31:04.227Z", "user": { "_id": "620b3bbb0668e435407c8d0a", "avatarUrl": "/avatars/e0fccbb2577d76088e09f054c35cffbc.svg", "fullname": "Ningyu Zhang", "isPro": false, "type": "user", "user": "Ningyu" } }, { "_id": "67b42223c2fe54b8d43efed9", "hidden": false, "name": "Hui Jin", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b42223c2fe54b8d43efeda", "hidden": false, "name": "Jiacheng Sun", "status": "admin_assigned", "statusLastChangedAt": "2025-02-19T15:23:08.677Z", "user": { "_id": "6591895250d39af7f431ad3d", "avatarUrl": "/avatars/83c9717e7d8bea8cae6a640ee6455214.svg", "fullname": "Sun Jiacheng", "isPro": false, "type": "user", "user": "sunjc826" } }, { "_id": "67b42223c2fe54b8d43efedb", "hidden": false, "name": "Shumin Deng", "status": "admin_assigned", "statusLastChangedAt": "2025-02-19T15:23:17.528Z", "user": { "_id": "6441f1d2603214724ec0c1c2", "avatarUrl": "/avatars/d3c4b759e6a5635e37ff715fae52e5ba.svg", "fullname": "Shumin Deng", "isPro": false, "type": "user", "user": "231sm" } }, { "_id": "67b42223c2fe54b8d43efedc", "hidden": false, "name": "Zhenguo Li", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b42223c2fe54b8d43efedd", "hidden": false, "name": "Huajun Chen", "status": "admin_assigned", "statusLastChangedAt": "2025-02-19T15:23:34.297Z", "user": { "_id": "64931296137833d7ec7689cd", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/64931296137833d7ec7689cd/TBihNdp1ZwIWjhfAWjRr6.jpeg", "fullname": "Huajun Chen", "isPro": false, "type": "user", "user": "huajunsir" } } ]
2025-02-16T16:55:43
How Do LLMs Acquire New Knowledge? A Knowledge Circuits Perspective on Continual Pre-Training
Despite exceptional capabilities in knowledge-intensive tasks, Large Language Models (LLMs) face a critical gap in understanding how they internalize new knowledge, particularly how to structurally embed acquired knowledge in their neural computations. We address this issue through the lens of knowledge circuit evolution, identifying computational subgraphs that facilitate knowledge storage and processing. Our systematic analysis of circuit evolution throughout continual pre-training reveals several key findings: (1) the acquisition of new knowledge is influenced by its relevance to pre-existing knowledge; (2) the evolution of knowledge circuits exhibits a distinct phase shift from formation to optimization; (3) the evolution of knowledge circuits follows a deep-to-shallow pattern. These insights not only advance our theoretical understanding of the mechanisms of new knowledge acquisition in LLMs, but also provide potential implications for improving continual pre-training strategies to enhance model performance. Code and data will be available at https://github.com/zjunlp/DynamicKnowledgeCircuits.
22
67b42225c2fe54b8d43eff9b
null
null
2025-02-18T01:01:24.331000
SURGE: On the Potential of Large Language Models as General-Purpose Surrogate Code Executors
https://cdn-thumbnails.h…s/2502.11167.png
2
{ "_id": "650267e7e751d03da933a24a", "avatarUrl": "/avatars/f047a047d1de304cd97027463541bdf3.svg", "followerCount": 1, "fullname": "Bohan22", "isHf": false, "isMod": false, "isPro": false, "name": "Bohan22", "type": "user" }
true
null
2502.11167
[ { "_id": "67b4221bbc387d2eda6f8637", "hidden": false, "name": "Bohan Lyu", "status": "claimed_verified", "statusLastChangedAt": "2025-02-18T09:31:06.388Z", "user": { "_id": "650267e7e751d03da933a24a", "avatarUrl": "/avatars/f047a047d1de304cd97027463541bdf3.svg", "fullname": "Bohan22", "isPro": false, "type": "user", "user": "Bohan22" } }, { "_id": "67b4221bbc387d2eda6f8638", "hidden": false, "name": "Siqiao Huang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b4221bbc387d2eda6f8639", "hidden": false, "name": "Zichen Liang", "status": "claimed_verified", "statusLastChangedAt": "2025-02-18T09:31:08.469Z", "user": { "_id": "67286718746a95c09d04cb1d", "avatarUrl": "/avatars/317efa8459cca08c2ff56c3ab116e15c.svg", "fullname": "Zichen Liang", "isPro": false, "type": "user", "user": "zcliang22" } } ]
2025-02-16T15:38:19
SURGE: On the Potential of Large Language Models as General-Purpose Surrogate Code Executors
Large language models (LLMs) have demonstrated remarkable capabilities in code-related tasks, such as code understanding and code generation. However, an equally important yet underexplored question is whether LLMs can serve as general-purpose surrogate code executors, to predict the output and behavior of a program without actually running it. To systematically investigate this capability, we introduce SURGE, a comprehensive benchmark covering eight key aspects: multi-language programming tasks, competition-level programming problems, repository-level code analysis, high-cost scientific computing, time-complexity-intensive algorithms, buggy code analysis, programs dependent on specific compilers or execution environments, and formal mathematical proof verification. We evaluate multiple open-source and proprietary LLMs on SURGE and conduct a scaling study to analyze the impact of model size and training data scale on surrogate execution accuracy. Additionally, we categorize model prediction errors and explore potential areas for improvement. Our findings indicate that while LLMs can predict code execution results in certain cases, they exhibit limitations in general-purpose surrogate execution. This study provides empirical insights into the feasibility of using LLMs as surrogate code executors. Code and dataset are released at https://github.com/Imbernoulli/SURGE.
10
67b4221ebc387d2eda6f8717
null
null
2025-02-18T00:58:24.094000
ReLearn: Unlearning via Learning for Large Language Models
https://cdn-thumbnails.h…s/2502.11190.png
2
{ "_id": "620b3bbb0668e435407c8d0a", "avatarUrl": "/avatars/e0fccbb2577d76088e09f054c35cffbc.svg", "followerCount": 19, "fullname": "Ningyu Zhang", "isHf": false, "isMod": false, "isPro": false, "name": "Ningyu", "type": "user" }
true
[ "https://cdn-uploads.huggingface.co/production/uploads/620b3bbb0668e435407c8d0a/A4YB7t6hDVty6QrvLN0a7.png" ]
2502.11190
[ { "_id": "67b420dfb2528c023491f455", "hidden": false, "name": "Haoming Xu", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b420dfb2528c023491f456", "hidden": true, "name": "Ningyuan Zhao", "status": "admin_assigned", "statusLastChangedAt": "2025-02-19T15:20:38.762Z", "user": { "_id": "6641f74c3bd2f40eb41fb0d8", "avatarUrl": "/avatars/b0167a521b6e4aea27e245dd9e026ef3.svg", "fullname": "zhaoningyuan", "isPro": false, "type": "user", "user": "zhaoningyuan" } }, { "_id": "67b420dfb2528c023491f457", "hidden": false, "name": "Liming Yang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b420dfb2528c023491f458", "hidden": false, "name": "Sendong Zhao", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b420dfb2528c023491f459", "hidden": false, "name": "Shumin Deng", "status": "admin_assigned", "statusLastChangedAt": "2025-02-19T15:20:59.377Z", "user": { "_id": "6441f1d2603214724ec0c1c2", "avatarUrl": "/avatars/d3c4b759e6a5635e37ff715fae52e5ba.svg", "fullname": "Shumin Deng", "isPro": false, "type": "user", "user": "231sm" } }, { "_id": "67b420dfb2528c023491f45a", "hidden": false, "name": "Mengru Wang", "status": "admin_assigned", "statusLastChangedAt": "2025-02-19T15:21:08.761Z", "user": { "_id": "64bf898d979949d2e2585c9a", "avatarUrl": "/avatars/da77c856ec997e2b812c06272a01c8b2.svg", "fullname": "mengruwang", "isPro": false, "type": "user", "user": "mengru" } }, { "_id": "67b420dfb2528c023491f45b", "hidden": false, "name": "Bryan Hooi", "status": "admin_assigned", "statusLastChangedAt": "2025-02-19T15:21:14.704Z", "user": { "_id": "651d8032c50012d33e914f2f", "avatarUrl": "/avatars/0a44c9f51fc50ce86582e328c361ea00.svg", "fullname": "Bryan Hooi", "isPro": false, "type": "user", "user": "bhooi" } }, { "_id": "67b420dfb2528c023491f45c", "hidden": false, "name": "Nay Oo", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b420dfb2528c023491f45d", "hidden": false, "name": "Huajun Chen", "status": "admin_assigned", "statusLastChangedAt": "2025-02-19T15:21:32.744Z", "user": { "_id": "64931296137833d7ec7689cd", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/64931296137833d7ec7689cd/TBihNdp1ZwIWjhfAWjRr6.jpeg", "fullname": "Huajun Chen", "isPro": false, "type": "user", "user": "huajunsir" } }, { "_id": "67b420dfb2528c023491f45e", "hidden": false, "name": "Ningyu Zhang", "status": "claimed_verified", "statusLastChangedAt": "2025-02-18T09:31:11.243Z", "user": { "_id": "620b3bbb0668e435407c8d0a", "avatarUrl": "/avatars/e0fccbb2577d76088e09f054c35cffbc.svg", "fullname": "Ningyu Zhang", "isPro": false, "type": "user", "user": "Ningyu" } } ]
2025-02-16T16:31:00
ReLearn: Unlearning via Learning for Large Language Models
Current unlearning methods for large language models usually rely on reverse optimization to reduce target token probabilities. However, this paradigm disrupts the prediction of subsequent tokens, degrading model performance and linguistic coherence. Moreover, existing evaluation metrics overemphasize contextual forgetting while inadequately assessing response fluency and relevance. To address these challenges, we propose ReLearn, a data augmentation and fine-tuning pipeline for effective unlearning, along with a comprehensive evaluation framework. This framework introduces Knowledge Forgetting Rate (KFR) and Knowledge Retention Rate (KRR) to measure knowledge-level preservation, and Linguistic Score (LS) to evaluate generation quality. Our experiments show that ReLearn successfully achieves targeted forgetting while preserving high-quality output. Through mechanistic analysis, we further demonstrate how reverse optimization disrupts coherent text generation, while ReLearn preserves this essential capability. Code is available at https://github.com/zjunlp/unlearn.
29
67b420e2b2528c023491f506
null
null
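The ReLearn abstract names three metrics (KFR, KRR, LS) without giving their definitions here. Purely to make the evaluation idea concrete, the sketch below computes a naive forgetting rate (fraction of forget-set answers the model no longer reproduces) and a naive retention rate on a retain set; these are assumed simplifications for illustration, not the paper's KFR/KRR formulas, and the toy model is a canned stand-in.

```python
from typing import Callable, List, Tuple


def knowledge_rate(
    qa_pairs: List[Tuple[str, str]],
    generate: Callable[[str], str],
) -> float:
    """Fraction of (question, answer) pairs whose reference answer still
    appears in the model's response -- a crude substring check."""
    if not qa_pairs:
        return 0.0
    hits = sum(
        1 for question, answer in qa_pairs
        if answer.lower() in generate(question).lower()
    )
    return hits / len(qa_pairs)


def unlearned_model(question: str) -> str:
    """Toy stand-in for a model after unlearning."""
    canned = {
        "Where was Alice born?": "I'm not sure.",     # targeted fact forgotten
        "What is the capital of France?": "Paris.",   # unrelated fact retained
    }
    return canned.get(question, "I don't know.")


forget_set = [("Where was Alice born?", "Springfield")]
retain_set = [("What is the capital of France?", "Paris")]

# Crude analogues of forgetting / retention (NOT the paper's KFR/KRR):
forgetting_rate = 1.0 - knowledge_rate(forget_set, unlearned_model)
retention_rate = knowledge_rate(retain_set, unlearned_model)
print(f"forgetting: {forgetting_rate:.2f}, retention: {retention_rate:.2f}")
```

The point of splitting the two rates is that a good unlearning method should push the first toward 1.0 without dragging the second down, and a separate fluency score (LS in the paper) guards against degenerate outputs that "forget" by becoming incoherent.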
2025-02-18T00:49:53.124000
Learning Getting-Up Policies for Real-World Humanoid Robots
https://cdn-thumbnails.h…s/2502.12152.png
3
{ "_id": "6201fc5d91d53938a6432fbf", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6201fc5d91d53938a6432fbf/VLs8ZYaZrop4KBpZn53fH.jpeg", "followerCount": 3, "fullname": "Runpei Dong", "isHf": false, "isMod": false, "isPro": false, "name": "RunpeiDong", "type": "user" }
true
[ "https://cdn-uploads.huggingface.co/production/uploads/6201fc5d91d53938a6432fbf/x35BuXOhc6ubukxLfiVzt.mp4" ]
2502.12152
[ { "_id": "67b41ed52867282b4eb37ce4", "hidden": false, "name": "Xialin He", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b41ed52867282b4eb37ce5", "hidden": false, "name": "Runpei Dong", "status": "claimed_verified", "statusLastChangedAt": "2025-02-18T09:31:13.178Z", "user": { "_id": "6201fc5d91d53938a6432fbf", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6201fc5d91d53938a6432fbf/VLs8ZYaZrop4KBpZn53fH.jpeg", "fullname": "Runpei Dong", "isPro": false, "type": "user", "user": "RunpeiDong" } }, { "_id": "67b41ed52867282b4eb37ce6", "hidden": false, "name": "Zixuan Chen", "status": "claimed_verified", "statusLastChangedAt": "2025-02-19T15:44:49.892Z", "user": { "_id": "64a02b4c1396c67ac07798bb", "avatarUrl": "/avatars/9b1b4319edbac5faeb7586a4933791d2.svg", "fullname": "Eric Chen", "isPro": false, "type": "user", "user": "zxuannn" } }, { "_id": "67b41ed52867282b4eb37ce7", "hidden": false, "name": "Saurabh Gupta", "status": null, "statusLastChangedAt": null, "user": null } ]
2025-02-17T18:59:06
Learning Getting-Up Policies for Real-World Humanoid Robots
Automatic fall recovery is a crucial prerequisite before humanoid robots can be reliably deployed. Hand-designing controllers for getting up is difficult because of the varied configurations a humanoid can end up in after a fall and the challenging terrains humanoid robots are expected to operate on. This paper develops a learning framework to produce controllers that enable humanoid robots to get up from varying configurations on varying terrains. Unlike previous successful applications of humanoid locomotion learning, the getting-up task involves complex contact patterns (which necessitate accurately modeling the collision geometry) and sparser rewards. We address these challenges through a two-phase approach that follows a curriculum. The first stage focuses on discovering a good getting-up trajectory under minimal constraints on smoothness or speed/torque limits. The second stage then refines the discovered motions into deployable (i.e., smooth and slow) motions that are robust to variations in initial configuration and terrains. We find these innovations enable a real-world G1 humanoid robot to get up from two main situations that we considered: a) lying face up and b) lying face down, both tested on flat, deformable, slippery surfaces and slopes (e.g., grassy slopes and snowfields). To the best of our knowledge, this is the first successful demonstration of learned getting-up policies for human-sized humanoid robots in the real world. Project page: https://humanoid-getup.github.io/
36
67b41edb2867282b4eb37ddf
null
null
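The getting-up abstract describes a two-stage curriculum: first discover any feasible getting-up trajectory under loose constraints, then refine it into a slow, smooth motion that is robust to pose and terrain variation. The sketch below only illustrates that staging as configuration plus a training loop; `train_policy`, the specific knobs, and the numbers are hypothetical placeholders, not the paper's actual reward terms or training setup.

```python
from dataclasses import dataclass


@dataclass
class StageConfig:
    """Knobs that differ between the two curriculum stages.
    All values below are made-up placeholders."""
    name: str
    smoothness_weight: float   # penalty on jerky, high-frequency actions
    torque_limit_scale: float  # fraction of the motor limits allowed
    randomize_terrain: bool    # domain randomization for robustness


def train_policy(config: StageConfig, init_policy=None):
    """Placeholder for an RL training run (e.g., PPO in simulation)."""
    start = "scratch" if init_policy is None else "the previous stage"
    print(f"training stage '{config.name}' from {start}")
    return {"stage": config.name}  # stands in for learned policy weights


# Stage 1: discover a getting-up trajectory under minimal constraints.
discovery = StageConfig("discovery", smoothness_weight=0.0,
                        torque_limit_scale=1.0, randomize_terrain=False)

# Stage 2: refine into a slow, smooth motion robust to pose/terrain variation.
refinement = StageConfig("refinement", smoothness_weight=1.0,
                         torque_limit_scale=0.5, randomize_terrain=True)

stage1_policy = train_policy(discovery)
stage2_policy = train_policy(refinement, init_policy=stage1_policy)
```

The design idea being sketched: solving the unconstrained problem first gives the second stage a feasible motion to start from, so the stricter smoothness and torque constraints act as refinement rather than as an obstacle to discovering the motion at all.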
2025-02-18T00:28:31.293000
SWE-Lancer: Can Frontier LLMs Earn $1 Million from Real-World Freelance Software Engineering?
https://cdn-thumbnails.h…s/2502.12115.png
5
{ "_id": "60f1abe7544c2adfd699860c", "avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg", "followerCount": 6280, "fullname": "AK", "isHf": true, "isMod": false, "isPro": false, "name": "akhaliq", "type": "user" }
false
null
2502.12115
[ { "_id": "67b41a72a38d04cc6148d80e", "hidden": false, "name": "Samuel Miserendino", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b41a72a38d04cc6148d80f", "hidden": false, "name": "Michele Wang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67b41a72a38d04cc6148d810", "hidden": false, "name": "Tejal Patwardhan", "status": "admin_assigned", "statusLastChangedAt": "2025-02-19T15:15:20.891Z", "user": { "_id": "64d4f504887f55fb6eedec74", "avatarUrl": "/avatars/054fb826890adcb330f0e4cbca3ef7c4.svg", "fullname": "Tejal Patwardhan", "isPro": false, "type": "user", "user": "tejalp" } }, { "_id": "67b41a72a38d04cc6148d811", "hidden": false, "name": "Johannes Heidecke", "status": null, "statusLastChangedAt": null, "user": null } ]
2025-02-17T18:41:16
SWE-Lancer: Can Frontier LLMs Earn $1 Million from Real-World Freelance Software Engineering?
We introduce SWE-Lancer, a benchmark of over 1,400 freelance software engineering tasks from Upwork, valued at $1 million USD total in real-world payouts. SWE-Lancer encompasses both independent engineering tasks--ranging from $50 bug fixes to $32,000 feature implementations--and managerial tasks, where models choose between technical implementation proposals. Independent tasks are graded with end-to-end tests triple-verified by experienced software engineers, while managerial decisions are assessed against the choices of the original hired engineering managers. We evaluate model performance and find that frontier models are still unable to solve the majority of tasks. To facilitate future research, we open-source a unified Docker image and a public evaluation split, SWE-Lancer Diamond (https://github.com/openai/SWELancer-Benchmark). By mapping model performance to monetary value, we hope SWE-Lancer enables greater research into the economic impact of AI model development.
42
67b41a74a38d04cc6148d84b
null
null
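SWE-Lancer's headline number comes from mapping graded tasks to their real payouts: an independent task counts only if its end-to-end tests pass, and a managerial task only if the model's choice matches the originally hired manager's. The sketch below shows that dollar-weighted scoring on toy data; the field names and values are assumptions for illustration, not the released benchmark schema.

```python
from typing import Dict, List


def dollars_earned(tasks: List[Dict]) -> float:
    """Sum the payout of every task the model fully solved.
    'passed' means all end-to-end tests passed (independent tasks) or the
    model's choice matched the hired manager's (managerial tasks)."""
    return sum(task["payout_usd"] for task in tasks if task["passed"])


# Toy results -- structure and numbers are illustrative only.
results = [
    {"id": "bug-001",  "kind": "independent", "payout_usd": 50.0,    "passed": True},
    {"id": "feat-417", "kind": "independent", "payout_usd": 32000.0, "passed": False},
    {"id": "mgr-092",  "kind": "managerial",  "payout_usd": 500.0,   "passed": True},
]

total_available = sum(task["payout_usd"] for task in results)
earned = dollars_earned(results)
print(f"earned ${earned:,.2f} of ${total_available:,.2f} "
      f"({earned / total_available:.1%})")
```

Because payouts span several orders of magnitude, a dollar-weighted score like this can differ sharply from a plain task-level pass rate: failing one high-value feature outweighs many solved small bug fixes.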