publishedAt (timestamp[ns]) | title (string) | thumbnail (string) | numComments (int64) | submittedBy (dict) | isAuthorParticipating (bool) | mediaUrls (sequence) | paper_id (string) | paper_authors (list) | paper_publishedAt (timestamp[ns]) | paper_title (string) | paper_summary (string) | paper_upvotes (int64) | paper_discussionId (string) | paper_projectPage (string) | paper_githubRepo (string) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2025-03-04T12:05:25.041000 | Efficient Test-Time Scaling via Self-Calibration | 1 | {
"_id": "62ea79dd01ed9b0e8f61ccd3",
"avatarUrl": "/avatars/70af83e0e267be39fcd5f23b85e2dafa.svg",
"followerCount": 2,
"fullname": "Chengsong Huang",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "ChengsongHuang",
"type": "user"
} | true | null | 2503.00031 | [
{
"_id": "67c732c14aaf26f75cea0d82",
"hidden": false,
"name": "Chengsong Huang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-03-04T21:15:36.013Z",
"user": {
"_id": "62ea79dd01ed9b0e8f61ccd3",
"avatarUrl": "/avatars/70af83e0e267be39fcd5f23b85e2dafa.svg",
"fullname": "Chengsong Huang",
"isPro": false,
"type": "user",
"user": "ChengsongHuang"
}
},
{
"_id": "67c732c14aaf26f75cea0d83",
"hidden": false,
"name": "Langlin Huang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c732c14aaf26f75cea0d84",
"hidden": false,
"name": "Jixuan Leng",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c732c14aaf26f75cea0d85",
"hidden": false,
"name": "Jiacheng Liu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c732c14aaf26f75cea0d86",
"hidden": false,
"name": "Jiaxin Huang",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-25T00:21:14 | Efficient Test-Time Scaling via Self-Calibration | Increasing test-time computation is a straightforward approach to enhancing
the quality of responses in Large Language Models (LLMs). While Best-of-N
sampling and Self-Consistency with majority voting are simple and effective,
they require a fixed number of sampled responses for each query, regardless of
its complexity. This could result in wasted computation for simpler questions
and insufficient exploration for more challenging ones. In this work, we argue
that a model's confidence in its responses can be used to improve the efficiency
of test-time scaling. Unfortunately, LLMs are known to be overconfident and
provide unreliable confidence estimation. To address this limitation, we
introduce Self-Calibration by distilling Self-Consistency-derived confidence
into the model itself. This enables reliable confidence estimation at test time
with one forward pass. We then design confidence-based efficient test-time
scaling methods to handle queries of various difficulty, such as Early-Stopping
for Best-of-N and Self-Consistency with calibrated confidence. Experiments on
three LLMs across six datasets demonstrate the effectiveness of our approach.
Specifically, applying confidence-based Early Stopping to Best-of-N improves
MathQA accuracy from 81.0 to 83.6 with a sample budget of 16 responses,
indicating the efficacy of a confidence-based sampling strategy at inference
time. | 8 | 67c732c34aaf26f75cea0df7 | null | null |
|
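The Early-Stopping variant of Best-of-N described in the abstract above reduces to a simple loop once the model emits a calibrated confidence alongside each answer. A minimal sketch, assuming a hypothetical `generate_with_confidence` helper (one forward pass returning an answer and a confidence in [0, 1], as Self-Calibration is said to enable); the budget and threshold values are illustrative, not the paper's:

```python
def best_of_n_early_stop(query, generate_with_confidence,
                         budget=16, threshold=0.9):
    """Sample up to `budget` responses; stop as soon as one response's
    calibrated confidence clears `threshold`, saving compute on easy
    queries while still exploring hard ones up to the full budget."""
    best_answer, best_conf = None, -1.0
    for _ in range(budget):
        answer, conf = generate_with_confidence(query)  # one forward pass
        if conf > best_conf:
            best_answer, best_conf = answer, conf
        if conf >= threshold:  # confident enough: stop early
            break
    return best_answer, best_conf
```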
2025-03-04T10:47:26.717000 | Why Are Web AI Agents More Vulnerable Than Standalone LLMs? A Security Analysis | 1 | {
"_id": "63e0b1925ba41def87930c47",
"avatarUrl": "/avatars/4d55fdbe979ddf72a21430d66518d24f.svg",
"followerCount": 1,
"fullname": "Jeffrey Yang Fan Chiang",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "RandomHakkaDude",
"type": "user"
} | true | [
"https://cdn-uploads.huggingface.co/production/uploads/63e0b1925ba41def87930c47/OQIn8hn8i8nP9HMjOk5cR.mp4"
] | 2502.20383 | [
{
"_id": "67c284e76e9f0735ea1c436d",
"hidden": false,
"name": "Jeffrey Yang Fan Chiang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-03-04T08:51:34.456Z",
"user": {
"_id": "63e0b1925ba41def87930c47",
"avatarUrl": "/avatars/4d55fdbe979ddf72a21430d66518d24f.svg",
"fullname": "Jeffrey Yang Fan Chiang",
"isPro": false,
"type": "user",
"user": "RandomHakkaDude"
}
},
{
"_id": "67c284e76e9f0735ea1c436e",
"hidden": false,
"name": "Seungjae Lee",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T16:07:28.645Z",
"user": {
"_id": "64081a908dca6cec91caf136",
"avatarUrl": "/avatars/c45d7fcdf879f4d6020863fd3be39771.svg",
"fullname": "SeungJae Lee",
"isPro": false,
"type": "user",
"user": "SeungJaeLee"
}
},
{
"_id": "67c284e76e9f0735ea1c436f",
"hidden": false,
"name": "Jia-Bin Huang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T16:07:55.181Z",
"user": {
"_id": "641c139b73296f7ee256970c",
"avatarUrl": "/avatars/5a2550d95e686640242840ad3bd0e680.svg",
"fullname": "Jiabin Huang",
"isPro": false,
"type": "user",
"user": "YellowAddice"
}
},
{
"_id": "67c284e76e9f0735ea1c4370",
"hidden": false,
"name": "Furong Huang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T16:07:44.428Z",
"user": {
"_id": "64cbc3e2a257a3212c00a115",
"avatarUrl": "/avatars/836e61be4aeda2080ddf2db9f2626cc6.svg",
"fullname": "Furong Huang Lab at UMD",
"isPro": false,
"type": "user",
"user": "furongh-lab"
}
},
{
"_id": "67c284e76e9f0735ea1c4371",
"hidden": false,
"name": "Yizheng Chen",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T16:07:37.149Z",
"user": {
"_id": "660daf1d62d63ad000a53b9b",
"avatarUrl": "/avatars/2f79d4b7db395e94b614358c7f322efe.svg",
"fullname": "Yizheng Chen",
"isPro": false,
"type": "user",
"user": "surrealyz"
}
}
] | 2025-02-27T18:56:26 | Why Are Web AI Agents More Vulnerable Than Standalone LLMs? A Security
Analysis | Recent advancements in Web AI agents have demonstrated remarkable
capabilities in addressing complex web navigation tasks. However, emerging
research shows that these agents exhibit greater vulnerability compared to
standalone Large Language Models (LLMs), despite both being built upon the same
safety-aligned models. This discrepancy is particularly concerning given the
greater flexibility of Web AI agents compared to standalone LLMs, which may
expose them to a wider range of adversarial user inputs. To build a scaffold
that addresses these concerns, this study investigates the underlying factors
that contribute to the increased vulnerability of Web AI agents. Notably, this
disparity stems from the multifaceted differences between Web AI agents and
standalone LLMs, as well as the complex signals - nuances that simple
evaluation metrics, such as success rate, often fail to capture. To tackle
these challenges, we propose a component-level analysis and a more granular,
systematic evaluation framework. Through this fine-grained investigation, we
identify three critical factors that amplify the vulnerability of Web AI
agents: (1) embedding user goals into the system prompt, (2) multi-step action
generation, and (3) observational capabilities. Our findings highlight the
pressing need to enhance security and robustness in AI agent design and provide
actionable insights for targeted defense strategies. | 1 | 67c284e96e9f0735ea1c43dd | https://vulnerable-ai-agents.github.io/ | null |
|
2025-03-04T08:19:57.557000 | General Reasoning Requires Learning to Reason from the Get-go | 1 | {
"_id": "6520d6db2a16045c092b3b36",
"avatarUrl": "/avatars/dab34f141a1aef39d00c789ff85e729f.svg",
"followerCount": null,
"fullname": "Seungwook Han",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "hanseungwook",
"type": "user"
} | true | null | 2502.19402 | [
{
"_id": "67c66a6321d722b4247e5959",
"hidden": false,
"name": "Seungwook Han",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T16:08:58.266Z",
"user": {
"_id": "6520d6db2a16045c092b3b36",
"avatarUrl": "/avatars/dab34f141a1aef39d00c789ff85e729f.svg",
"fullname": "Seungwook Han",
"isPro": false,
"type": "user",
"user": "hanseungwook"
}
},
{
"_id": "67c66a6321d722b4247e595a",
"hidden": false,
"name": "Jyothish Pari",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c66a6321d722b4247e595b",
"hidden": false,
"name": "Samuel J. Gershman",
"status": "extracted_confirmed",
"statusLastChangedAt": "2025-03-04T13:57:29.748Z",
"user": {
"_id": "6520d6db2a16045c092b3b36",
"avatarUrl": "/avatars/dab34f141a1aef39d00c789ff85e729f.svg",
"fullname": "Seungwook Han",
"isPro": false,
"type": "user",
"user": "hanseungwook"
}
},
{
"_id": "67c66a6321d722b4247e595c",
"hidden": false,
"name": "Pulkit Agrawal",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-26T18:51:12 | General Reasoning Requires Learning to Reason from the Get-go | Large Language Models (LLMs) have demonstrated impressive real-world utility,
exemplifying artificial useful intelligence (AUI). However, their ability to
reason adaptively and robustly -- the hallmarks of artificial general
intelligence (AGI) -- remains fragile. While LLMs seemingly succeed in
commonsense reasoning, programming, and mathematics, they struggle to
generalize algorithmic understanding across novel contexts. Our experiments
with algorithmic tasks in esoteric programming languages reveal that LLMs'
reasoning overfits to the training data and is limited in its transferability.
We hypothesize that the core issue underlying such limited transferability is
the coupling of reasoning and knowledge in LLMs.
To transition from AUI to AGI, we propose disentangling knowledge and
reasoning through three key directions: (1) pretraining to reason using RL from
scratch as an alternative to the widely used next-token prediction pretraining,
(2) using a curriculum of synthetic tasks to ease the learning of a
reasoning prior for RL that can then be transferred to natural
language tasks, and (3) learning more generalizable reasoning functions using a
small context window to reduce exploiting spurious correlations between tokens.
Such a reasoning system coupled with a trained retrieval system and a large
external memory bank as a knowledge store can overcome several limitations of
existing architectures at learning to reason in novel scenarios. | 4 | 67c66a6521d722b4247e59c8 | null | null |
|
2025-03-04T08:11:33.371000 | PodAgent: A Comprehensive Framework for Podcast Generation | 1 | {
"_id": "674836767b7151c3ff30f865",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/no-auth/jcwK5NW-efhCt8s2TE6vK.png",
"followerCount": null,
"fullname": "Yujia Xiao",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "Yogurt928",
"type": "user"
} | true | null | 2503.00455 | [
{
"_id": "67c6facdd8af5b36fd4b59cf",
"hidden": false,
"name": "Yujia Xiao",
"status": "claimed_verified",
"statusLastChangedAt": "2025-03-04T16:08:12.490Z",
"user": {
"_id": "674836767b7151c3ff30f865",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/no-auth/jcwK5NW-efhCt8s2TE6vK.png",
"fullname": "Yujia Xiao",
"isPro": false,
"type": "user",
"user": "Yogurt928"
}
},
{
"_id": "67c6facdd8af5b36fd4b59d0",
"hidden": false,
"name": "Lei He",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c6facdd8af5b36fd4b59d1",
"hidden": false,
"name": "Haohan Guo",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c6facdd8af5b36fd4b59d2",
"hidden": false,
"name": "Fenglong Xie",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c6facdd8af5b36fd4b59d3",
"hidden": false,
"name": "Tan Lee",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-03-01T11:35:17 | PodAgent: A Comprehensive Framework for Podcast Generation | Existing automatic audio generation methods struggle to generate
podcast-like audio programs effectively. The key challenges lie in in-depth
content generation and appropriate, expressive voice production. This paper
proposes PodAgent, a comprehensive framework for creating audio programs.
PodAgent 1) generates informative topic-discussion content by designing a
Host-Guest-Writer multi-agent collaboration system, 2) builds a voice pool for
suitable voice-role matching, and 3) utilizes an LLM-enhanced speech synthesis
method to generate expressive conversational speech. Given the absence of
standardized evaluation criteria for podcast-like audio generation, we
developed comprehensive assessment guidelines to effectively evaluate the
model's performance. Experimental results demonstrate PodAgent's effectiveness,
significantly surpassing direct GPT-4 generation in topic-discussion dialogue
content, achieving an 87.4% voice-matching accuracy, and producing more
expressive speech through LLM-guided synthesis. Demo page:
https://podcast-agent.github.io/demo/. Source code:
https://github.com/yujxx/PodAgent. | 5 | 67c6facfd8af5b36fd4b5a45 | https://podcast-agent.github.io/demo/ | https://github.com/yujxx/PodAgent |
|
2025-03-04T06:41:49.997000 | When an LLM is apprehensive about its answers -- and when its uncertainty is justified | 1 | {
"_id": "675708985b91dea24c3ef642",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/675708985b91dea24c3ef642/8KmerI1LwJEBHM2vrC54d.jpeg",
"followerCount": null,
"fullname": "Andrey Goncharov",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "aigoncharov",
"type": "user"
} | true | [
"https://cdn-uploads.huggingface.co/production/uploads/675708985b91dea24c3ef642/9wCzAalApYA8hPN94CaEu.png"
] | 2503.01688 | [
{
"_id": "67c6e6735aea9d8918635ac2",
"hidden": false,
"name": "Petr Sychev",
"status": "claimed_verified",
"statusLastChangedAt": "2025-03-04T12:01:33.230Z",
"user": {
"_id": "6728224623d75cbd1cdbe568",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/4sb6TjuzeDc8-PG9hYhjW.jpeg",
"fullname": "Petr Sychev",
"isPro": false,
"type": "user",
"user": "sspetya"
}
},
{
"_id": "67c6e6735aea9d8918635ac3",
"hidden": false,
"name": "Andrey Goncharov",
"status": "extracted_pending",
"statusLastChangedAt": "2025-03-04T11:39:33.550Z",
"user": {
"_id": "675708985b91dea24c3ef642",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/675708985b91dea24c3ef642/8KmerI1LwJEBHM2vrC54d.jpeg",
"fullname": "Andrey Goncharov",
"isPro": false,
"type": "user",
"user": "aigoncharov"
}
},
{
"_id": "67c6e6735aea9d8918635ac4",
"hidden": false,
"name": "Daniil Vyazhev",
"status": "claimed_verified",
"statusLastChangedAt": "2025-03-04T12:01:34.869Z",
"user": {
"_id": "659e049c01805191e5f67b12",
"avatarUrl": "/avatars/4f33e39d85f8fbdfaeb34143e5038b92.svg",
"fullname": "Vyazhev",
"isPro": false,
"type": "user",
"user": "DanielVyazhev"
}
},
{
"_id": "67c6e6735aea9d8918635ac5",
"hidden": false,
"name": "Edvard Khalafyan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c6e6735aea9d8918635ac6",
"hidden": false,
"name": "Alexey Zaytsev",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-03-03T16:03:46 | When an LLM is apprehensive about its answers -- and when its
uncertainty is justified | Uncertainty estimation is crucial for evaluating Large Language Models
(LLMs), particularly in high-stakes domains where incorrect answers result in
significant consequences. Numerous approaches address this problem, each
focusing on a specific type of uncertainty while ignoring others. We investigate
what estimates, specifically token-wise entropy and model-as-judge (MASJ),
would work for multiple-choice question-answering tasks for different question
topics. Our experiments cover three LLM families: Phi-4, Mistral, and Qwen, at
sizes from 1.5B to 72B, across 14 topics. While MASJ performs similarly
to a random error predictor, the response entropy predicts model error in
knowledge-dependent domains and serves as an effective indicator of question
difficulty: for biology, the ROC-AUC is 0.73. This correlation vanishes in the
reasoning-dependent domain: for math questions, the ROC-AUC is 0.55. More
fundamentally, we find that the entropy measure depends on the amount of
reasoning a question requires. Thus, entropy related to data uncertainty should
be integrated within uncertainty estimation frameworks, while MASJ requires
refinement. Moreover, existing MMLU-Pro samples are biased; the benchmark should
balance the amount of reasoning required across subdomains to provide a fairer
assessment of LLM performance. | 16 | 67c6e6755aea9d8918635b20 | null | https://github.com/LabARSS/question-complextiy-estimation |
|
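The token-wise entropy estimate discussed in the abstract above is straightforward to reproduce. A sketch, assuming access to the model's per-step probability distributions (the shapes and the simple mean aggregation are our assumptions; the paper may aggregate differently):

```python
import numpy as np

def mean_token_entropy(token_probs):
    """token_probs: array of shape (seq_len, vocab_size), one probability
    distribution per decoding step. Returns the mean per-token entropy in
    nats; higher values indicate greater uncertainty in the response."""
    p = np.clip(np.asarray(token_probs), 1e-12, 1.0)  # guard against log(0)
    step_entropy = -(p * np.log(p)).sum(axis=-1)      # H_t = -sum_v p log p
    return float(step_entropy.mean())
```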
2025-03-04T05:28:10.012000 | SampleMix: A Sample-wise Pre-training Data Mixing Strategy by Coordinating Data Quality and Diversity | 1 | {
"_id": "65a0aade5fafc248c2156e95",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/65a0aade5fafc248c2156e95/S9YjJMTuKc-U1cFizqUMA.jpeg",
"followerCount": 1,
"fullname": "DeyangKong",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "DeyangKong",
"type": "user"
} | true | null | 2503.01506 | [
{
"_id": "67c67cf5c8d296910ca74711",
"hidden": false,
"name": "Xiangyu Xi",
"status": "claimed_verified",
"statusLastChangedAt": "2025-03-04T12:01:25.632Z",
"user": {
"_id": "63edb098679c2cc40abc6c2e",
"avatarUrl": "/avatars/288c7229937c2c3f29fda6d17c7df2eb.svg",
"fullname": "Xiangyu",
"isPro": false,
"type": "user",
"user": "xixy"
}
},
{
"_id": "67c67cf5c8d296910ca74712",
"hidden": false,
"name": "Deyang Kong",
"status": "claimed_verified",
"statusLastChangedAt": "2025-03-04T08:40:21.910Z",
"user": {
"_id": "65a0aade5fafc248c2156e95",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/65a0aade5fafc248c2156e95/S9YjJMTuKc-U1cFizqUMA.jpeg",
"fullname": "DeyangKong",
"isPro": false,
"type": "user",
"user": "DeyangKong"
}
},
{
"_id": "67c67cf5c8d296910ca74713",
"hidden": false,
"name": "Jian Yang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c67cf5c8d296910ca74714",
"hidden": false,
"name": "Jiawei Yang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c67cf5c8d296910ca74715",
"hidden": false,
"name": "Zhengyu Chen",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T11:16:14.018Z",
"user": {
"_id": "67b7ebf3d00e69f10cfcf551",
"avatarUrl": "/avatars/8adea7ae44c459079113a690ec7da73a.svg",
"fullname": "Chen Zhengyu",
"isPro": false,
"type": "user",
"user": "WQYC"
}
},
{
"_id": "67c67cf5c8d296910ca74716",
"hidden": false,
"name": "Wei Wang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T11:16:23.635Z",
"user": {
"_id": "62fa0ffe0697d224219a0cb7",
"avatarUrl": "/avatars/f0ef59e1c0cf4ab4fe5cee08d488bd03.svg",
"fullname": "Wei Wang",
"isPro": false,
"type": "user",
"user": "WeiWang"
}
},
{
"_id": "67c67cf5c8d296910ca74717",
"hidden": false,
"name": "Jingang Wang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T11:16:00.303Z",
"user": {
"_id": "647097cbcfd57849518e656b",
"avatarUrl": "/avatars/c66fe0add29c1bde9e3a98bf4a8793b9.svg",
"fullname": "Jingang Wang",
"isPro": false,
"type": "user",
"user": "bitwjg"
}
},
{
"_id": "67c67cf5c8d296910ca74718",
"hidden": false,
"name": "Xunliang Cai",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c67cf5c8d296910ca74719",
"hidden": false,
"name": "Shikun Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c67cf5c8d296910ca7471a",
"hidden": false,
"name": "Wei Ye",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-03-03T13:22:11 | SampleMix: A Sample-wise Pre-training Data Mixing Strategy by
Coordinating Data Quality and Diversity | Existing pretraining data mixing methods for large language models (LLMs)
typically follow a domain-wise methodology, a top-down process that first
determines domain weights and then performs uniform data sampling across each
domain. However, these approaches neglect significant inter-domain overlaps and
commonalities, failing to control the global diversity of the constructed
training dataset. Further, uniform sampling within domains ignores fine-grained
sample-specific features, potentially leading to suboptimal data distribution.
To address these shortcomings, we propose a novel sample-wise data mixture
approach based on a bottom-up paradigm. This method performs global
cross-domain sampling by systematically evaluating the quality and diversity of
each sample, thereby dynamically determining the optimal domain distribution.
Comprehensive experiments across multiple downstream tasks and perplexity
assessments demonstrate that SampleMix surpasses existing domain-based methods.
Meanwhile, SampleMix requires 1.4x to 2.1x fewer training steps to reach the
baselines' performance, highlighting the substantial potential of SampleMix to
optimize pre-training data. | 7 | 67c67d03c8d296910ca7494f | null | null |
|
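The bottom-up mixing idea in the SampleMix abstract above can be pictured as one global weighted draw over the pooled corpus. A hedged sketch: the quality and diversity scores and their product combiner are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def sample_wise_mixture(quality, diversity, n_draw, seed=0):
    """quality, diversity: 1-D per-sample scores over the pooled corpus
    (all domains together). Draws `n_draw` indices with probability
    proportional to the combined score, so the domain distribution
    emerges bottom-up rather than from fixed top-down domain weights."""
    rng = np.random.default_rng(seed)
    score = np.asarray(quality) * np.asarray(diversity)  # assumed combiner
    probs = score / score.sum()
    return rng.choice(len(probs), size=n_draw, replace=True, p=probs)
```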
2025-03-04T05:13:44.578000 | Word Form Matters: LLMs' Semantic Reconstruction under Typoglycemia | 1 | {
"_id": "65407ba7a38390065750233f",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/65407ba7a38390065750233f/1_IPMZbk-S9u2t18PQgMp.jpeg",
"followerCount": 1,
"fullname": "Zirui Song",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "Ziruibest",
"type": "user"
} | true | null | 2503.01714 | [
{
"_id": "67c6d22d983375492193aab0",
"hidden": false,
"name": "Chenxi Wang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-03-04T11:16:44.551Z",
"user": {
"_id": "679bc0ec7f3c28bf968321c8",
"avatarUrl": "/avatars/9d5ab9c6af32878e28987518c0210c1a.svg",
"fullname": "Chenxi Wang",
"isPro": false,
"type": "user",
"user": "Aurora-cx"
}
},
{
"_id": "67c6d22d983375492193aab1",
"hidden": false,
"name": "Tianle Gu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T10:54:29.015Z",
"user": {
"_id": "6346361c5efccdc07f179cae",
"avatarUrl": "/avatars/217818114a4c19ea4f3e5cdafefb625e.svg",
"fullname": "Gu Tianle",
"isPro": false,
"type": "user",
"user": "Carol0110"
}
},
{
"_id": "67c6d22d983375492193aab2",
"hidden": false,
"name": "Zhongyu Wei",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c6d22d983375492193aab3",
"hidden": false,
"name": "Lang Gao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c6d22d983375492193aab4",
"hidden": false,
"name": "Zirui Song",
"status": "claimed_verified",
"statusLastChangedAt": "2025-03-04T10:17:25.935Z",
"user": {
"_id": "65407ba7a38390065750233f",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/65407ba7a38390065750233f/1_IPMZbk-S9u2t18PQgMp.jpeg",
"fullname": "Zirui Song",
"isPro": false,
"type": "user",
"user": "Ziruibest"
}
},
{
"_id": "67c6d22d983375492193aab5",
"hidden": false,
"name": "Xiuying Chen",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-03-03T16:31:45 | Word Form Matters: LLMs' Semantic Reconstruction under Typoglycemia | Human readers can efficiently comprehend scrambled words, a phenomenon known
as Typoglycemia, primarily by relying on word form; if word form alone is
insufficient, they further utilize contextual cues for interpretation. While
advanced large language models (LLMs) exhibit similar abilities, the underlying
mechanisms remain unclear. To investigate this, we conduct controlled
experiments to analyze the roles of word form and contextual information in
semantic reconstruction and examine LLM attention patterns. Specifically, we
first propose SemRecScore, a reliable metric to quantify the degree of semantic
reconstruction, and validate its effectiveness. Using this metric, we study how
word form and contextual information influence LLMs' semantic reconstruction
ability, identifying word form as the core factor in this process. Furthermore,
we analyze how LLMs utilize word form and find that they rely on specialized
attention heads to extract and process word form information, with this
mechanism remaining stable across varying levels of word scrambling. This
distinction between LLMs' fixed attention patterns primarily focused on word
form and human readers' adaptive strategy in balancing word form and contextual
information provides insights into enhancing LLM performance by incorporating
human-like, context-aware mechanisms. | 5 | 67c6d22e983375492193ab13 | null | null |
|
2025-03-04T05:12:10.849000 | Direct Discriminative Optimization: Your Likelihood-Based Visual Generative Model is Secretly a GAN Discriminator | 1 | {
"_id": "652bf7edc3cba555d5673c6e",
"avatarUrl": "/avatars/78f6416c30203b30671f8423f061c657.svg",
"followerCount": null,
"fullname": "Kaiwen Zheng",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "worstcoder",
"type": "user"
} | true | null | 2503.01103 | [
{
"_id": "67c6d1c35e896ed915374027",
"hidden": false,
"name": "Kaiwen Zheng",
"status": "claimed_verified",
"statusLastChangedAt": "2025-03-04T10:17:24.142Z",
"user": {
"_id": "652bf7edc3cba555d5673c6e",
"avatarUrl": "/avatars/78f6416c30203b30671f8423f061c657.svg",
"fullname": "Kaiwen Zheng",
"isPro": false,
"type": "user",
"user": "worstcoder"
}
},
{
"_id": "67c6d1c35e896ed915374028",
"hidden": false,
"name": "Yongxin Chen",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T11:16:59.170Z",
"user": {
"_id": "66f4cf1a03b5ba8a7f1f6522",
"avatarUrl": "/avatars/2768d6e37d3f280194cfb8ed274f6015.svg",
"fullname": "Yongxin Chen",
"isPro": false,
"type": "user",
"user": "Ema11"
}
},
{
"_id": "67c6d1c35e896ed915374029",
"hidden": false,
"name": "Huayu Chen",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T11:17:06.080Z",
"user": {
"_id": "6630f87ee53fcb71c3887df0",
"avatarUrl": "/avatars/50191a3d45bebf90cf08df09477e95db.svg",
"fullname": "HuayuChen",
"isPro": false,
"type": "user",
"user": "HuayuChen"
}
},
{
"_id": "67c6d1c35e896ed91537402a",
"hidden": false,
"name": "Guande He",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T11:17:20.266Z",
"user": {
"_id": "67492ee82ad3cfc108a41bbb",
"avatarUrl": "/avatars/7ad03e55a8791c62f1271a5c9bf8cc60.svg",
"fullname": "Guande He",
"isPro": false,
"type": "user",
"user": "gdhe17"
}
},
{
"_id": "67c6d1c35e896ed91537402b",
"hidden": false,
"name": "Ming-Yu Liu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T11:17:27.270Z",
"user": {
"_id": "62f049afdf4b93aad5c7f2d6",
"avatarUrl": "/avatars/e272e58ad996733d7098e50248e5b57e.svg",
"fullname": "Ming-Yu Liu",
"isPro": false,
"type": "user",
"user": "mingyuliutw"
}
},
{
"_id": "67c6d1c35e896ed91537402c",
"hidden": false,
"name": "Jun Zhu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c6d1c35e896ed91537402d",
"hidden": false,
"name": "Qinsheng Zhang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T11:17:33.763Z",
"user": {
"_id": "6732d5dea24987c43bfbafd8",
"avatarUrl": "/avatars/1581373b9de5069975716932fceb976b.svg",
"fullname": "Qinsheng Zhang",
"isPro": false,
"type": "user",
"user": "qsh-zh"
}
}
] | 2025-03-03T02:06:22 | Direct Discriminative Optimization: Your Likelihood-Based Visual
Generative Model is Secretly a GAN Discriminator | While likelihood-based generative models, particularly diffusion and
autoregressive models, have achieved remarkable fidelity in visual generation,
the maximum likelihood estimation (MLE) objective inherently suffers from a
mode-covering tendency that limits the generation quality under limited model
capacity. In this work, we propose Direct Discriminative Optimization (DDO) as
a unified framework that bridges likelihood-based generative training and the
GAN objective to bypass this fundamental constraint. Our key insight is to
parameterize a discriminator implicitly using the likelihood ratio between a
learnable target model and a fixed reference model, drawing parallels with the
philosophy of Direct Preference Optimization (DPO). Unlike GANs, this
parameterization eliminates the need for joint training of generator and
discriminator networks, allowing for direct, efficient, and effective
finetuning of a well-trained model to its full potential beyond the limits of
MLE. DDO can be performed iteratively in a self-play manner for progressive
model refinement, with each round requiring less than 1% of pretraining epochs.
Our experiments demonstrate the effectiveness of DDO by significantly advancing
the previous SOTA diffusion model EDM, reducing FID scores from 1.79/1.58 to
new records of 1.30/0.97 on CIFAR-10/ImageNet-64 datasets, and by consistently
improving both guidance-free and CFG-enhanced FIDs of visual autoregressive
models on ImageNet 256×256. | 2 | 67c6d1c65e896ed9153740e4 | https://research.nvidia.com/labs/dir/ddo/ | null |
|
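The DDO parameterization described above can be stated compactly. A sketch in our own notation (the paper's exact loss weighting and sampling distributions may differ): the implicit discriminator is the log-likelihood ratio between the learnable target model and the frozen reference model, trained with a GAN-style binary classification objective in which fake samples are assumed to come from the reference model.

```latex
% Implicit discriminator as a likelihood ratio (notation ours):
d_\theta(x) = \log \frac{p_\theta(x)}{p_{\mathrm{ref}}(x)},
\qquad
\mathcal{L}(\theta) =
  -\,\mathbb{E}_{x \sim p_{\mathrm{data}}}\!\big[\log \sigma(d_\theta(x))\big]
  -\,\mathbb{E}_{x \sim p_{\mathrm{ref}}}\!\big[\log \sigma(-d_\theta(x))\big]
```

As in DPO, no separate discriminator network is trained: optimizing θ through this classification loss directly finetunes the generative model itself.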
2025-03-04T04:56:33.061000 | From Hours to Minutes: Lossless Acceleration of Ultra Long Sequence Generation up to 100K Tokens | 1 | {
"_id": "63a95a6a7930fa8c7dd63d4e",
"avatarUrl": "/avatars/d9d0420f7ddfe2f3a7e029fb05f1c89f.svg",
"followerCount": 3,
"fullname": "Zilong Zheng",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "zlzheng",
"type": "user"
} | true | [
"https://cdn-uploads.huggingface.co/production/uploads/63a95a6a7930fa8c7dd63d4e/3WZ10b-Ku3GcY1fc1MWx8.mp4"
] | 2502.18890 | [
{
"_id": "67c6cbd6e52534aa6ada2e26",
"hidden": false,
"name": "Tong Wu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T10:58:45.670Z",
"user": {
"_id": "668f7fee5156d55f72af4f21",
"avatarUrl": "/avatars/02edf8d7d5f288d80dc665b18dda4d0a.svg",
"fullname": "TongWu",
"isPro": false,
"type": "user",
"user": "TongWu"
}
},
{
"_id": "67c6cbd6e52534aa6ada2e27",
"hidden": false,
"name": "Junzhe Shen",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T10:58:27.834Z",
"user": {
"_id": "6530c9d7d107f378e105d667",
"avatarUrl": "/avatars/889dfcb6514c90351802bebb4a34a78f.svg",
"fullname": "Junzhe Shen",
"isPro": false,
"type": "user",
"user": "JunzheS"
}
},
{
"_id": "67c6cbd6e52534aa6ada2e28",
"hidden": false,
"name": "Zixia Jia",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T10:58:34.128Z",
"user": {
"_id": "64b7ae6cf53ae848e72b997d",
"avatarUrl": "/avatars/b55dd3d6fcb3ccac2e3880d01a9bdc63.svg",
"fullname": "Zixia Jia",
"isPro": false,
"type": "user",
"user": "vickyandkekey"
}
},
{
"_id": "67c6cbd6e52534aa6ada2e29",
"hidden": false,
"name": "Yuxuan Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c6cbd6e52534aa6ada2e2a",
"hidden": false,
"name": "Zilong Zheng",
"status": "extracted_pending",
"statusLastChangedAt": "2025-03-04T09:45:59.571Z",
"user": {
"_id": "63a95a6a7930fa8c7dd63d4e",
"avatarUrl": "/avatars/d9d0420f7ddfe2f3a7e029fb05f1c89f.svg",
"fullname": "Zilong Zheng",
"isPro": false,
"type": "user",
"user": "zlzheng"
}
}
] | 2025-02-26T07:10:08 | From Hours to Minutes: Lossless Acceleration of Ultra Long Sequence
Generation up to 100K Tokens | Generating ultra-long sequences with large language models (LLMs) has become
increasingly crucial but remains a highly time-intensive task, particularly for
sequences up to 100K tokens. While traditional speculative decoding methods
exist, simply extending their generation limits fails to accelerate the process
and can be detrimental. Through an in-depth analysis, we identify three major
challenges hindering efficient generation: frequent model reloading, dynamic
key-value (KV) cache management, and repetitive generation. To address these issues,
we introduce TOKENSWIFT, a novel framework designed to substantially accelerate
the generation process of ultra-long sequences while maintaining the target
model's inherent quality. Experimental results demonstrate that TOKENSWIFT
achieves over a 3× speedup across models of varying scales (1.5B, 7B, 8B,
14B) and architectures (MHA, GQA). This acceleration translates to hours of
time savings for ultra-long sequence generation, establishing TOKENSWIFT as a
scalable and effective solution at unprecedented lengths. Code can be found at
https://github.com/bigai-nlco/TokenSwift. | 7 | 67c6cbd7e52534aa6ada2e79 | null | https://github.com/bigai-nlco/TokenSwift |
|
2025-03-04T04:54:04.054000 | DiffRhythm: Blazingly Fast and Embarrassingly Simple End-to-End Full-Length Song Generation with Latent Diffusion | 1 | {
"_id": "624bebf604abc7ebb01789af",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1649143001781-624bebf604abc7ebb01789af.jpeg",
"followerCount": 3863,
"fullname": "Apolinário from multimodal AI art",
"isHf": true,
"isMod": false,
"isPro": true,
"name": "multimodalart",
"type": "user"
} | false | null | 2503.01183 | [
{
"_id": "67c6a15e21d722b4248bd9c2",
"hidden": false,
"name": "Ziqian Ning",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c6a15e21d722b4248bd9c3",
"hidden": false,
"name": "Huakang Chen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c6a15e21d722b4248bd9c4",
"hidden": false,
"name": "Yuepeng Jiang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c6a15e21d722b4248bd9c5",
"hidden": false,
"name": "Chunbo Hao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c6a15e21d722b4248bd9c6",
"hidden": false,
"name": "Guobin Ma",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c6a15e21d722b4248bd9c7",
"hidden": false,
"name": "Shuai Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c6a15e21d722b4248bd9c8",
"hidden": false,
"name": "Jixun Yao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c6a15e21d722b4248bd9c9",
"hidden": false,
"name": "Lei Xie",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-03-03T05:15:34 | DiffRhythm: Blazingly Fast and Embarrassingly Simple End-to-End
Full-Length Song Generation with Latent Diffusion | Recent advancements in music generation have garnered significant attention,
yet existing approaches face critical limitations. Some current generative
models can only synthesize either the vocal track or the accompaniment track.
While some models can generate combined vocal and accompaniment, they typically
rely on meticulously designed multi-stage cascading architectures and intricate
data pipelines, hindering scalability. Additionally, most systems are
restricted to generating short musical segments rather than full-length songs.
Furthermore, widely used language model-based methods suffer from slow
inference speeds. To address these challenges, we propose DiffRhythm, the first
latent diffusion-based song generation model capable of synthesizing complete
songs with both vocal and accompaniment for durations of up to 4m45s in only
ten seconds, maintaining high musicality and intelligibility. Despite its
remarkable capabilities, DiffRhythm is designed to be simple and elegant: it
eliminates the need for complex data preparation, employs a straightforward
model structure, and requires only lyrics and a style prompt during inference.
Additionally, its non-autoregressive structure ensures fast inference speeds.
This simplicity guarantees the scalability of DiffRhythm. Moreover, we release
the complete training code along with the pre-trained model on large-scale data
to promote reproducibility and further research. | 18 | 67c6a16021d722b4248bda37 | https://aslp-lab.github.io/DiffRhythm.github.io/ | https://github.com/ASLP-lab/DiffRhythm |
|
2025-03-04T04:17:23.806000 | Unposed Sparse Views Room Layout Reconstruction in the Age of Pretrain Model | 1 | {
"_id": "642bdfc65edcc5760cb1ea12",
"avatarUrl": "/avatars/599b0bbb379b43cd39097c204c946075.svg",
"followerCount": null,
"fullname": "huang",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "yxuan",
"type": "user"
} | true | null | 2502.16779 | [
{
"_id": "67c65c06e116e361574405e9",
"hidden": false,
"name": "Yaxuan Huang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-03-04T08:51:27.582Z",
"user": {
"_id": "642bdfc65edcc5760cb1ea12",
"avatarUrl": "/avatars/599b0bbb379b43cd39097c204c946075.svg",
"fullname": "huang",
"isPro": false,
"type": "user",
"user": "yxuan"
}
},
{
"_id": "67c65c06e116e361574405ea",
"hidden": false,
"name": "Xili Dai",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c65c06e116e361574405eb",
"hidden": false,
"name": "Jianan Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c65c06e116e361574405ec",
"hidden": false,
"name": "Xianbiao Qi",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T11:01:12.106Z",
"user": {
"_id": "6494483aa13255720397287a",
"avatarUrl": "/avatars/61ff2e0371df513194246cf6fbb2b78a.svg",
"fullname": "Xianbiao Qi",
"isPro": false,
"type": "user",
"user": "qixianbiao"
}
},
{
"_id": "67c65c06e116e361574405ed",
"hidden": false,
"name": "Yixing Yuan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c65c06e116e361574405ee",
"hidden": false,
"name": "Xiangyu Yue",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T11:00:56.439Z",
"user": {
"_id": "666a8f24e2990b0cb16b7bf9",
"avatarUrl": "/avatars/fcbaf8f1e3e53a2a4a819b7cb2c53aa4.svg",
"fullname": "Xiangyu Yue",
"isPro": false,
"type": "user",
"user": "xyyue"
}
}
] | 2025-02-24T02:14:19 | Unposed Sparse Views Room Layout Reconstruction in the Age of Pretrain
Model | Room layout estimation from multiple-perspective images is poorly
investigated due to the complexities that emerge from multi-view geometry,
which requires multi-step solutions such as camera intrinsic and extrinsic
estimation, image matching, and triangulation. However, in 3D reconstruction,
the advancement of recent 3D foundation models such as DUSt3R has shifted the
paradigm from the traditional multi-step structure-from-motion process to an
end-to-end single-step approach. To this end, we introduce Plane-DUSt3R, a
novel method for multi-view room layout estimation leveraging the 3D foundation
model DUSt3R. Plane-DUSt3R incorporates the DUSt3R framework and fine-tunes on
a room layout dataset (Structure3D) with a modified objective to estimate
structural planes. By generating uniform and parsimonious results, Plane-DUSt3R
enables room layout estimation with only a single post-processing step and 2D
detection results. Unlike previous methods that rely on single-perspective or
panorama images, Plane-DUSt3R extends the setting to handle multiple-perspective
images. Moreover, it offers a streamlined, end-to-end solution that simplifies
the process and reduces error accumulation. Experimental results demonstrate
that Plane-DUSt3R not only outperforms state-of-the-art methods on the
synthetic dataset but also proves robust and effective on in-the-wild data with
different image styles such as cartoons. Our code is available at:
https://github.com/justacar/Plane-DUSt3R | 2 | 67c65c0be116e36157440751 | null | https://github.com/justacar/Plane-DUSt3R |
|
2025-03-04T03:56:04.503000 | OneRec: Unifying Retrieve and Rank with Generative Recommender and Iterative Preference Alignment | 1 | {
"_id": "668f5875b5b3081d776e4094",
"avatarUrl": "/avatars/8c763393f25afbe5fb8b132f775e746a.svg",
"followerCount": 1,
"fullname": "Xiaohuan Zhou",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "XiaohuanZhou",
"type": "user"
} | false | null | 2502.18965 | [
{
"_id": "67c6bfdf96b9f5fa18c517db",
"hidden": false,
"name": "Jiaxin Deng",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T10:16:32.410Z",
"user": {
"_id": "625f6ebee1994410eef16a42",
"avatarUrl": "/avatars/eaa353afe91e849adcd35656477a6462.svg",
"fullname": "Jiaxin Deng",
"isPro": false,
"type": "user",
"user": "OrpheusBetter"
}
},
{
"_id": "67c6bfdf96b9f5fa18c517dc",
"hidden": false,
"name": "Shiyao Wang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T10:16:39.957Z",
"user": {
"_id": "641f8e596d51620635e49707",
"avatarUrl": "/avatars/f30b24da53fea2278f343c318007bb60.svg",
"fullname": "shiyao wang",
"isPro": false,
"type": "user",
"user": "oneself"
}
},
{
"_id": "67c6bfdf96b9f5fa18c517dd",
"hidden": false,
"name": "Kuo Cai",
"status": "claimed_verified",
"statusLastChangedAt": "2025-03-04T12:01:27.669Z",
"user": {
"_id": "65e6cc77e999cde61fcbc097",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/no-auth/5NxDLRmS2cQgNeZ6ScSNW.png",
"fullname": "CaiKuo",
"isPro": false,
"type": "user",
"user": "caikuo"
}
},
{
"_id": "67c6bfdf96b9f5fa18c517de",
"hidden": false,
"name": "Lejian Ren",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c6bfdf96b9f5fa18c517df",
"hidden": false,
"name": "Qigen Hu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c6bfdf96b9f5fa18c517e0",
"hidden": false,
"name": "Weifeng Ding",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T10:17:03.422Z",
"user": {
"_id": "64aeb9342cda6a37a4781b7d",
"avatarUrl": "/avatars/c1584c10ff0f9871315872245c9934fc.svg",
"fullname": "Weifeng Ding",
"isPro": false,
"type": "user",
"user": "DingWF"
}
},
{
"_id": "67c6bfdf96b9f5fa18c517e1",
"hidden": false,
"name": "Qiang Luo",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c6bfdf96b9f5fa18c517e2",
"hidden": false,
"name": "Guorui Zhou",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T10:16:52.106Z",
"user": {
"_id": "67c6c570cf87e2d2ebfc81aa",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/67c6c570cf87e2d2ebfc81aa/7qAstZtIT86Uwrz3u_anv.jpeg",
"fullname": "Guorui Zhou",
"isPro": false,
"type": "user",
"user": "GuoruiZhou"
}
}
] | 2025-02-26T09:25:10 | OneRec: Unifying Retrieve and Rank with Generative Recommender and
Iterative Preference Alignment | Recently, generative retrieval-based recommendation systems have emerged as a
promising paradigm. However, most modern recommender systems adopt a
retrieve-and-rank strategy, where the generative model functions only as a
selector during the retrieval stage. In this paper, we propose OneRec, which
replaces the cascaded learning framework with a unified generative model. To
the best of our knowledge, this is the first end-to-end generative model that
significantly surpasses current complex and well-designed recommender systems
in real-world scenarios. Specifically, OneRec includes: 1) an encoder-decoder
structure, which encodes the user's historical behavior sequences and gradually
decodes the videos that the user may be interested in. We adopt sparse
Mixture-of-Experts (MoE) to scale model capacity without proportionally
increasing computational FLOPs. 2) a session-wise generation approach. In
contrast to traditional next-item prediction, we propose a session-wise
generation, which is more elegant and contextually coherent than point-by-point
generation that relies on hand-crafted rules to properly combine the generated
results. 3) an Iterative Preference Alignment module combined with Direct
Preference Optimization (DPO) to enhance the quality of the generated results.
Unlike DPO in NLP, a recommendation system typically has only one opportunity
to display results for each user's browsing request, making it impossible to
obtain positive and negative samples simultaneously. To address this
limitation, we design a reward model to simulate user generation and customize
the sampling strategy. Extensive experiments have demonstrated that a limited
number of DPO samples can align user interest preferences and significantly
improve the quality of generated results. We deployed OneRec in the main scene
of Kuaishou, achieving a 1.6% increase in watch-time, which is a substantial
improvement. | 18 | 67c6bfe396b9f5fa18c518e5 | null | null |
|
2025-03-04T03:20:03.380000 | AI-Invented Tonal Languages: Preventing a Machine Lingua Franca Beyond Human Understanding | 1 | {
"_id": "63136a82e29fb2e86d5e5bdd",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63136a82e29fb2e86d5e5bdd/pFZDuQtzfUStovbwwZGvn.png",
"followerCount": null,
"fullname": "David Noever",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "dnoever",
"type": "user"
} | true | [
"https://cdn-uploads.huggingface.co/production/uploads/63136a82e29fb2e86d5e5bdd/mgIPjnhtUaGLR2Iv4ViL6.jpeg"
] | 2503.01063 | [
{
"_id": "67c6b72b7aad9a016ae60797",
"hidden": false,
"name": "David Noever",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T11:17:50.200Z",
"user": {
"_id": "63136a82e29fb2e86d5e5bdd",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63136a82e29fb2e86d5e5bdd/pFZDuQtzfUStovbwwZGvn.png",
"fullname": "David Noever",
"isPro": false,
"type": "user",
"user": "dnoever"
}
}
] | 2025-03-02T23:59:52 | AI-Invented Tonal Languages: Preventing a Machine Lingua Franca Beyond
Human Understanding | This paper investigates the potential for large language models (LLMs) to
develop private tonal languages for machine-to-machine (M2M) communication.
Inspired by cryptophasia in human twins (affecting up to 50% of twin births)
and natural tonal languages like Mandarin and Vietnamese, we implement a
precise character-to-frequency mapping system that encodes the full ASCII
character set (32-126) using musical semitones. Each character is assigned a
unique frequency, creating a logarithmic progression beginning with space (220
Hz) and ending with tilde (50,175.42 Hz). This spans approximately 7.9 octaves,
with higher characters deliberately mapped to ultrasonic frequencies beyond
human perception (>20 kHz). Our implemented software prototype demonstrates
this encoding through visualization, auditory playback, and ABC musical
notation, allowing for analysis of information density and transmission speed.
Testing reveals that tonal encoding can achieve information rates exceeding
human speech while operating partially outside human perceptual boundaries.
This work responds directly to concerns about AI systems catastrophically
developing private languages within the next five years, providing a concrete
prototype software example of how such communication might function and the
technical foundation required for its emergence, detection, and governance. | 1 | 67c6b72c7aad9a016ae607bb | null | null |
|
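The character-to-frequency mapping above is fully determined by the quoted endpoints: anchoring space at 220 Hz and stepping one musical semitone (a factor of 2^(1/12)) per ASCII code reproduces the tilde frequency of 50,175.42 Hz. A minimal sketch, reconstructed from the figures in the abstract rather than taken from the authors' prototype:

```python
def char_to_freq(ch: str) -> float:
    """Map a printable ASCII character (codes 32-126) to a tone:
    one semitone (factor 2**(1/12)) per code point, space = 220 Hz."""
    code = ord(ch)
    assert 32 <= code <= 126, "printable ASCII only"
    return 220.0 * 2 ** ((code - 32) / 12)

print(char_to_freq(" "))  # 220.0 Hz
print(char_to_freq("~"))  # ~50175.42 Hz, matching the abstract
```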
2025-03-04T02:48:58.261000 | Liger: Linearizing Large Language Models to Gated Recurrent Structures | 1 | {
"_id": "6246bb33da617c00b48e4d92",
"avatarUrl": "/avatars/0304a9f6eb7f5dee4d933d03222f94e9.svg",
"followerCount": 3,
"fullname": "Weigao Sun",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "weigao266",
"type": "user"
} | true | null | 2503.01496 | [
{
"_id": "67c6b05f35198d0f397adc98",
"hidden": false,
"name": "Disen Lan",
"status": "claimed_verified",
"statusLastChangedAt": "2025-03-04T08:34:46.117Z",
"user": {
"_id": "66ea643899af9ac3463639b1",
"avatarUrl": "/avatars/252d470e761a57834dee3dbc60dfefed.svg",
"fullname": "Disen Lan",
"isPro": false,
"type": "user",
"user": "landisen"
}
},
{
"_id": "67c6b05f35198d0f397adc99",
"hidden": false,
"name": "Weigao Sun",
"status": "extracted_confirmed",
"statusLastChangedAt": "2025-03-04T08:10:52.130Z",
"user": {
"_id": "6246bb33da617c00b48e4d92",
"avatarUrl": "/avatars/0304a9f6eb7f5dee4d933d03222f94e9.svg",
"fullname": "Weigao Sun",
"isPro": false,
"type": "user",
"user": "weigao266"
}
},
{
"_id": "67c6b05f35198d0f397adc9a",
"hidden": false,
"name": "Jiaxi Hu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T10:04:18.982Z",
"user": {
"_id": "665dc35752ff9daa9ba5a4ed",
"avatarUrl": "/avatars/df8b01879d97e599b610fa51414d3a18.svg",
"fullname": "Hu Jiaxi",
"isPro": false,
"type": "user",
"user": "Jiaxihu2"
}
},
{
"_id": "67c6b05f35198d0f397adc9b",
"hidden": false,
"name": "Jusen Du",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T10:04:26.432Z",
"user": {
"_id": "65003e857804f04a163328d9",
"avatarUrl": "/avatars/fe32150aabfde8d283b38ccebcf6982e.svg",
"fullname": "Jusen Du",
"isPro": false,
"type": "user",
"user": "JusenK"
}
},
{
"_id": "67c6b05f35198d0f397adc9c",
"hidden": false,
"name": "Yu Cheng",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-03-03T13:08:00 | Liger: Linearizing Large Language Models to Gated Recurrent Structures | Transformers with linear recurrent modeling offer linear-time training and
constant-memory inference. Despite their demonstrated efficiency and
performance, pretraining such non-standard architectures from scratch remains
costly and risky. The linearization of large language models (LLMs) transforms
pretrained standard models into linear recurrent structures, enabling more
efficient deployment. However, current linearization methods typically
introduce additional feature map modules that require extensive fine-tuning and
overlook the gating mechanisms used in state-of-the-art linear recurrent
models. To address these issues, this paper presents Liger, short for
Linearizing LLMs to gated recurrent structures. Liger is a novel approach for
converting pretrained LLMs into gated linear recurrent models without adding
extra parameters. It repurposes the pretrained key matrix weights to construct
diverse gating mechanisms, facilitating the formation of various gated
recurrent structures while avoiding the need to train additional components
from scratch. Using lightweight fine-tuning with Low-Rank Adaptation (LoRA),
Liger restores the performance of the linearized gated recurrent models to
match that of the original LLMs. Additionally, we introduce Liger Attention, an
intra-layer hybrid attention mechanism, which recovers 93% of the performance of
the Transformer-based LLM using only 0.02% of the pre-training tokens during the
linearization process, achieving competitive results across multiple
benchmarks, as validated on models ranging from 1B to 8B parameters. Code is
available at https://github.com/OpenSparseLLMs/Linearization. | 13 | 67c6b06035198d0f397adcc4 | null | null |
|
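The gated recurrent structures that Liger targets share one generic state update. A sketch of that family (our reading of the abstract: the gate is assumed to reuse the pretrained key projection, and the paper's exact gate construction may differ):

```python
import numpy as np

def gated_linear_recurrence(queries, keys, values, gates):
    """Generic gated linear recurrent layer:
        S_t = diag(g_t) S_{t-1} + k_t v_t^T,   o_t = q_t^T S_t
    In Liger's setting the gate g_t is assumed to be derived from the
    pretrained key weights, e.g. g_t = sigmoid(x_t W_K) (our reading).
    Shapes: queries/keys (T, d_k), values (T, d_v), gates (T, d_k)."""
    d_k, d_v = keys.shape[1], values.shape[1]
    S = np.zeros((d_k, d_v))
    outputs = []
    for q, k, v, g in zip(queries, keys, values, gates):
        S = g[:, None] * S + np.outer(k, v)  # gated constant-memory state
        outputs.append(q @ S)                # linear-attention readout
    return np.stack(outputs)
```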
2025-03-04T02:27:17.351000 | CLEA: Closed-Loop Embodied Agent for Enhancing Task Execution in Dynamic Environments | 1 | {
"_id": "6628c6107751d297d7025a71",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6628c6107751d297d7025a71/S1rm5VIwV2Uxfv8GetKMU.jpeg",
"followerCount": 1,
"fullname": "Lei Mingcong",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "SP4595",
"type": "user"
} | true | null | 2503.00729 | [
{
"_id": "67c6ab3ec0b62d612c54ddf5",
"hidden": false,
"name": "Mingcong Lei",
"status": "claimed_verified",
"statusLastChangedAt": "2025-03-04T08:34:48.061Z",
"user": {
"_id": "6628c6107751d297d7025a71",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6628c6107751d297d7025a71/S1rm5VIwV2Uxfv8GetKMU.jpeg",
"fullname": "Lei Mingcong",
"isPro": false,
"type": "user",
"user": "SP4595"
}
},
{
"_id": "67c6ab3ec0b62d612c54ddf6",
"hidden": false,
"name": "Ge Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c6ab3ec0b62d612c54ddf7",
"hidden": false,
"name": "Yiming Zhao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c6ab3ec0b62d612c54ddf8",
"hidden": false,
"name": "Zhixin Mai",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c6ab3ec0b62d612c54ddf9",
"hidden": false,
"name": "Qing Zhao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c6ab3ec0b62d612c54ddfa",
"hidden": false,
"name": "Yao Guo",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c6ab3ec0b62d612c54ddfb",
"hidden": false,
"name": "Zhen Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c6ab3ec0b62d612c54ddfc",
"hidden": false,
"name": "Shuguang Cui",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c6ab3ec0b62d612c54ddfd",
"hidden": false,
"name": "Yatong Han",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c6ab3ec0b62d612c54ddfe",
"hidden": false,
"name": "Jinke Ren",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-03-02T04:50:59 | CLEA: Closed-Loop Embodied Agent for Enhancing Task Execution in Dynamic
Environments | Large Language Models (LLMs) exhibit remarkable capabilities in the
hierarchical decomposition of complex tasks through semantic reasoning.
However, their application in embodied systems faces challenges in ensuring
reliable execution of subtask sequences and achieving one-shot success in
long-term task completion. To address these limitations in dynamic
environments, we propose Closed-Loop Embodied Agent (CLEA) -- a novel
architecture incorporating four specialized open-source LLMs with functional
decoupling for closed-loop task management. The framework features two core
innovations: (1) Interactive task planner that dynamically generates executable
subtasks based on the environmental memory, and (2) Multimodal execution critic
employing an evaluation framework to conduct a probabilistic assessment of
action feasibility, triggering hierarchical re-planning mechanisms when
environmental perturbations exceed preset thresholds. To validate CLEA's
effectiveness, we conduct experiments in a real environment with manipulable
objects, using two heterogeneous robots for object search, manipulation, and
search-manipulation integration tasks. Across 12 task trials, CLEA outperforms
the baseline model, achieving a 67.3% improvement in success rate and a 52.8%
increase in task completion rate. These results demonstrate that CLEA
significantly enhances the robustness of task planning and execution in dynamic
environments. | 2 | 67c6ab42c0b62d612c54df71 | https://sp4595.github.io/CLEA/ | https://github.com/SP4595/CLEA-Closed-Loop-Embodied-Agent |
|
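CLEA's closed loop, as described above, alternates planning, feasibility assessment, and execution. An illustrative control-flow sketch: the component interfaces (`planner.propose`, `critic.assess`, `executor.run`) and the threshold are assumptions of ours, not the paper's API.

```python
def clea_closed_loop(planner, critic, executor, memory,
                     feasibility_threshold=0.5, max_steps=50):
    """Closed-loop task management: plan subtasks from environmental
    memory, score each action's feasibility with the critic, and
    trigger re-planning when the score falls below the threshold."""
    plan = planner.propose(memory)           # interactive task planner
    for _ in range(max_steps):
        if not plan:
            return True                      # all subtasks completed
        subtask = plan[0]
        feasibility = critic.assess(subtask, memory)  # probabilistic score
        if feasibility < feasibility_threshold:
            plan = planner.propose(memory)   # hierarchical re-planning
            continue
        observation = executor.run(subtask)
        memory.update(observation)           # closed-loop feedback
        plan = plan[1:]
    return False                             # step budget exhausted
```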
2025-03-04T02:21:00.460000 | Speculative Ad-hoc Querying | 1 | {
"_id": "6577437552f02732a463d97d",
"avatarUrl": "/avatars/8eb271ec249fa9b0d97dfe0eace6da88.svg",
"followerCount": null,
"fullname": "Haoyu Li",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "Haoyu0529",
"type": "user"
} | true | [
"https://cdn-uploads.huggingface.co/production/uploads/6577437552f02732a463d97d/fEkQ4BZ8Yx_CzsjvHBWFq.qt"
] | 2503.00714 | [
{
"_id": "67c6a803025b72f14ccb0939",
"hidden": false,
"name": "Haoyu Li",
"status": "extracted_pending",
"statusLastChangedAt": "2025-03-04T07:13:08.306Z",
"user": {
"_id": "6577437552f02732a463d97d",
"avatarUrl": "/avatars/8eb271ec249fa9b0d97dfe0eace6da88.svg",
"fullname": "Haoyu Li",
"isPro": false,
"type": "user",
"user": "Haoyu0529"
}
},
{
"_id": "67c6a803025b72f14ccb093a",
"hidden": false,
"name": "Srikanth Kandula",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c6a803025b72f14ccb093b",
"hidden": false,
"name": "Maria Angels de Luis Balaguer",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c6a803025b72f14ccb093c",
"hidden": false,
"name": "Aditya Akella",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c6a803025b72f14ccb093d",
"hidden": false,
"name": "Venkat Arun",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-03-02T03:44:31 | Speculative Ad-hoc Querying | Analyzing large datasets requires responsive query execution, but executing
SQL queries on massive datasets can be slow. This paper explores whether query
execution can begin even before the user has finished typing, allowing results
to appear almost instantly. We propose SpeQL, a system that leverages Large
Language Models (LLMs) to predict likely queries based on the database schema,
the user's past queries, and their incomplete query. Since exact query
prediction is infeasible, SpeQL speculates on partial queries in two ways: 1)
it predicts the query structure to compile and plan queries in advance, and 2)
it precomputes temporary tables that are much smaller than the original
database, but are still predicted to contain all information necessary to
answer the user's final query. Additionally, SpeQL continuously displays
results for speculated queries and subqueries in real time, aiding exploratory
analysis. A utility/user study showed that SpeQL reduced task completion time,
and participants reported that its speculative display of results helped them
discover patterns in the data more quickly. In the study, SpeQL improved users'
query latency by up to 289× and kept the overhead reasonable, at $4
per hour. | 8 | 67c6a804025b72f14ccb0994 | https://github.com/lihy0529/SpeQL | https://github.com/lihy0529/SpeQL |
|
2025-03-04T02:16:25.633000 | CodeArena: A Collective Evaluation Platform for LLM Code Generation | 1 | {
"_id": "61711f02e0b1ddb56eb9b526",
"avatarUrl": "/avatars/3e2fdf774f5bc1f73b450486d6da42d4.svg",
"followerCount": 3,
"fullname": "Mingzhe Du",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "Elfsong",
"type": "user"
} | true | null | 2503.01295 | [
{
"_id": "67c6a8b534aeb86063e94010",
"hidden": false,
"name": "Mingzhe Du",
"status": "claimed_verified",
"statusLastChangedAt": "2025-03-04T08:34:49.954Z",
"user": {
"_id": "61711f02e0b1ddb56eb9b526",
"avatarUrl": "/avatars/3e2fdf774f5bc1f73b450486d6da42d4.svg",
"fullname": "Mingzhe Du",
"isPro": false,
"type": "user",
"user": "Elfsong"
}
},
{
"_id": "67c6a8b534aeb86063e94011",
"hidden": false,
"name": "Anh Tuan Luu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T11:02:20.575Z",
"user": {
"_id": "655722e80438e0854fae7554",
"avatarUrl": "/avatars/b93a74f7c7880f9fe0f3ffb47e2aef5e.svg",
"fullname": "Luu Anh Tuan",
"isPro": false,
"type": "user",
"user": "anhtuanluu36"
}
},
{
"_id": "67c6a8b534aeb86063e94012",
"hidden": false,
"name": "Bin Ji",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c6a8b534aeb86063e94013",
"hidden": false,
"name": "Xiaobao Wu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T11:02:48.996Z",
"user": {
"_id": "64cb02869e30a46f7b80b355",
"avatarUrl": "/avatars/81ce4ba78826b54f0e1b53eeaff87ee6.svg",
"fullname": "Xiaobao Wu",
"isPro": false,
"type": "user",
"user": "bobxwu"
}
},
{
"_id": "67c6a8b534aeb86063e94014",
"hidden": false,
"name": "Dong Huang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T11:02:43.013Z",
"user": {
"_id": "67c56a7f083bb2c50254bbe5",
"avatarUrl": "/avatars/bdf6fd8934c2199ff169b178f6482773.svg",
"fullname": "Huang, Dong",
"isPro": false,
"type": "user",
"user": "DongHuang-ebay"
}
},
{
"_id": "67c6a8b534aeb86063e94015",
"hidden": false,
"name": "Terry Yue Zhuo",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T11:02:33.977Z",
"user": {
"_id": "62b7fb545233925f253531c8",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/62b7fb545233925f253531c8/W50u2G1HK3EtUKHRU189V.jpeg",
"fullname": "Terry Yue Zhuo",
"isPro": false,
"type": "user",
"user": "terryyz"
}
},
{
"_id": "67c6a8b534aeb86063e94016",
"hidden": false,
"name": "Qian Liu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c6a8b534aeb86063e94017",
"hidden": false,
"name": "See-Kiong Ng",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-03-03T08:31:16 | CodeArena: A Collective Evaluation Platform for LLM Code Generation | Large Language Models (LLMs) have reshaped code generation by combining
their exceptional comprehension of natural language and programming syntax,
thereby substantially boosting developer productivity. These advancements have
prompted numerous efforts to quantitatively evaluate their coding capabilities.
However, persistent challenges, such as benchmark leakage, data dissipation,
and limited system accessibility, continue to impede a timely and accurate
assessment. To address these limitations, we introduce CodeArena, an online
evaluation framework tailored for LLM code generation. The key innovation is a
collective evaluation mechanism, which dynamically recalibrates individual
model scores based on the holistic performance of all participating models,
mitigating score biases caused by widespread benchmark leakage. In addition,
CodeArena ensures open access to all submitted solutions and test cases and
provides automation-friendly APIs to streamline the code evaluation workflow.
Our main contributions are: (1) a collective evaluation system for unbiased
assessment, (2) a public repository of solutions and test cases, and (3)
automation-ready APIs for seamless integration. | 5 | 67c6a8b634aeb86063e9406a | null | null |
|
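The "collective evaluation mechanism" above recalibrates each model's score against the whole cohort's performance. The abstract does not give the exact rule; the z-score normalization below is one plausible, clearly-labeled stand-in.

```python
import statistics

def recalibrate(raw_scores: dict[str, float]) -> dict[str, float]:
    """Rescale raw benchmark scores relative to the participating-model cohort.

    If every model aces a problem set (a typical symptom of benchmark leakage),
    cohort-relative scores stop rewarding that inflation.
    """
    values = list(raw_scores.values())
    mu = statistics.mean(values)
    sigma = statistics.pstdev(values) or 1.0  # avoid division by zero
    return {model: (score - mu) / sigma for model, score in raw_scores.items()}
```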
2025-03-04T01:56:03.632000 | Qilin: A Multimodal Information Retrieval Dataset with APP-level User Sessions | 1 | {
"_id": "60c0ed29d8bc072769d78f48",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/60c0ed29d8bc072769d78f48/V6q6Tn4kzB46NIbTYw9pQ.jpeg",
"followerCount": 2,
"fullname": "Qian Dong",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "qian",
"type": "user"
} | true | null | 2503.00501 | [
{
"_id": "67c6a343ad6b7c2fa29d5e7e",
"hidden": false,
"name": "Jia Chen",
"status": "claimed_verified",
"statusLastChangedAt": "2025-03-04T16:08:10.744Z",
"user": {
"_id": "67c03221aed8409476d39da8",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/67c03221aed8409476d39da8/eQIhOPRLNoiphsR145mfB.png",
"fullname": "Jia Chen",
"isPro": false,
"type": "user",
"user": "Regulus309"
}
},
{
"_id": "67c6a343ad6b7c2fa29d5e7f",
"hidden": false,
"name": "Qian Dong",
"status": "claimed_verified",
"statusLastChangedAt": "2025-03-04T08:34:51.762Z",
"user": {
"_id": "60c0ed29d8bc072769d78f48",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/60c0ed29d8bc072769d78f48/V6q6Tn4kzB46NIbTYw9pQ.jpeg",
"fullname": "Qian Dong",
"isPro": false,
"type": "user",
"user": "qian"
}
},
{
"_id": "67c6a343ad6b7c2fa29d5e80",
"hidden": false,
"name": "Haitao Li",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T10:20:57.898Z",
"user": {
"_id": "67b5d91558369f6b38c5b596",
"avatarUrl": "/avatars/18b08d5d9b05786cad34bc000c7606aa.svg",
"fullname": "Haitao Li",
"isPro": false,
"type": "user",
"user": "haitaoli"
}
},
{
"_id": "67c6a343ad6b7c2fa29d5e81",
"hidden": false,
"name": "Xiaohui He",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c6a343ad6b7c2fa29d5e82",
"hidden": false,
"name": "Yan Gao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c6a343ad6b7c2fa29d5e83",
"hidden": false,
"name": "Shaosheng Cao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c6a343ad6b7c2fa29d5e84",
"hidden": false,
"name": "Yi Wu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c6a343ad6b7c2fa29d5e85",
"hidden": false,
"name": "Ping Yang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c6a343ad6b7c2fa29d5e86",
"hidden": false,
"name": "Chen Xu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c6a343ad6b7c2fa29d5e87",
"hidden": false,
"name": "Yao Hu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c6a343ad6b7c2fa29d5e88",
"hidden": false,
"name": "Qingyao Ai",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T10:21:22.100Z",
"user": {
"_id": "6657e7045f6e35c7d541bdd8",
"avatarUrl": "/avatars/368e5cef6c93543b2b92fbca79a4e4b9.svg",
"fullname": "Qingyao Ai",
"isPro": false,
"type": "user",
"user": "aiqy"
}
},
{
"_id": "67c6a343ad6b7c2fa29d5e89",
"hidden": false,
"name": "Yiqun Liu",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-03-01T14:15:00 | Qilin: A Multimodal Information Retrieval Dataset with APP-level User
Sessions | User-generated content (UGC) communities, especially those featuring
multimodal content, improve user experiences by integrating visual and textual
information into results (or items). The challenge of improving user
experiences in complex systems with search and recommendation (S&R) services
has drawn significant attention from both academia and industry in recent years.
However, the lack of high-quality datasets has limited the research progress on
multimodal S&R. To address the growing need for developing better S&R
services, we present a novel multimodal information retrieval dataset in this
paper, namely Qilin. The dataset is collected from Xiaohongshu, a popular
social platform with over 300 million monthly active users and an average
search penetration rate of over 70%. In contrast to existing datasets,
Qilin offers a comprehensive collection of user sessions with
heterogeneous results like image-text notes, video notes, commercial notes, and
direct answers, facilitating the development of advanced multimodal neural
retrieval models across diverse task settings. To better model user
satisfaction and support the analysis of heterogeneous user behaviors, we also
collect extensive APP-level contextual signals and genuine user feedback.
Notably, Qilin contains user-favored answers and their referred results for
search requests triggering the Deep Query Answering (DQA) module. This allows
not only the training & evaluation of a Retrieval-augmented Generation (RAG)
pipeline, but also the exploration of how such a module would affect users'
search behavior. Through comprehensive analysis and experiments, we provide
interesting findings and insights for further improving S&R systems. We hope
that Qilin will significantly contribute to the advancement of
multimodal content platforms with S&R services in the future. | 11 | 67c6a346ad6b7c2fa29d5f88 | https://huggingface.co/datasets/THUIR/Qilin | https://github.com/RED-Search/Qilin/ |
|
2025-03-04T01:19:45.715000 | Kiss3DGen: Repurposing Image Diffusion Models for 3D Asset Generation | 1 | {
"_id": "6332e2689bf698ce68a22e8c",
"avatarUrl": "/avatars/c1922acfda2e6d2fe7b03194a404eb10.svg",
"followerCount": 2,
"fullname": "JIANTAO LIN",
"isHf": false,
"isMod": false,
"isPro": true,
"name": "LTT",
"type": "user"
} | true | null | 2503.01370 | [
{
"_id": "67c691673ff65c55829685a0",
"hidden": false,
"name": "Jiantao Lin",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T10:52:36.682Z",
"user": {
"_id": "6332e2689bf698ce68a22e8c",
"avatarUrl": "/avatars/c1922acfda2e6d2fe7b03194a404eb10.svg",
"fullname": "JIANTAO LIN",
"isPro": true,
"type": "user",
"user": "LTT"
}
},
{
"_id": "67c691673ff65c55829685a1",
"hidden": false,
"name": "Xin Yang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c691673ff65c55829685a2",
"hidden": false,
"name": "Meixi Chen",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T10:52:44.047Z",
"user": {
"_id": "63641f09a53b71b7a1b02955",
"avatarUrl": "/avatars/2f43703cbbc56f3e3f98090f44bccfe6.svg",
"fullname": "Meixi Chen",
"isPro": false,
"type": "user",
"user": "MeixiChen"
}
},
{
"_id": "67c691673ff65c55829685a3",
"hidden": false,
"name": "Yingjie Xu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c691673ff65c55829685a4",
"hidden": false,
"name": "Dongyu Yan",
"status": "claimed_verified",
"statusLastChangedAt": "2025-03-04T08:34:56.252Z",
"user": {
"_id": "64049ae20ab5e22719f35103",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1678023295407-noauth.jpeg",
"fullname": "Dongyu Yan",
"isPro": false,
"type": "user",
"user": "StarYDY"
}
},
{
"_id": "67c691673ff65c55829685a5",
"hidden": false,
"name": "Leyi Wu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c691673ff65c55829685a6",
"hidden": false,
"name": "Xinli Xu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T10:53:15.555Z",
"user": {
"_id": "64b4ab62eec33e27dcd733b5",
"avatarUrl": "/avatars/0a9bf220c9a5efe7279f9b287b087d36.svg",
"fullname": "Xinli XU",
"isPro": false,
"type": "user",
"user": "Xxlbigbrother"
}
},
{
"_id": "67c691673ff65c55829685a7",
"hidden": false,
"name": "Lie XU",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c691673ff65c55829685a8",
"hidden": false,
"name": "Shunsi Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c691673ff65c55829685a9",
"hidden": false,
"name": "Ying-Cong Chen",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T10:53:33.509Z",
"user": {
"_id": "655cba1d87b67834000590e8",
"avatarUrl": "/avatars/3bd43b7c9351f65b8f38f4c8237a0146.svg",
"fullname": "Yingcong Chen",
"isPro": false,
"type": "user",
"user": "yingcongchen"
}
}
] | 2025-03-03T10:07:19 | Kiss3DGen: Repurposing Image Diffusion Models for 3D Asset Generation | Diffusion models have achieved great success in generating 2D images.
However, the quality and generalizability of 3D content generation remain
limited. State-of-the-art methods often require large-scale 3D assets for
training, which are challenging to collect. In this work, we introduce
Kiss3DGen (Keep It Simple and Straightforward in 3D Generation), an efficient
framework for generating, editing, and enhancing 3D objects by repurposing a
well-trained 2D image diffusion model for 3D generation. Specifically, we
fine-tune a diffusion model to generate a "3D Bundle Image", a tiled
representation composed of multi-view images and their corresponding normal
maps. The normal maps are then used to reconstruct a 3D mesh, and the
multi-view images provide texture mapping, resulting in a complete 3D model.
This simple method effectively transforms the 3D generation problem into a 2D
image generation task, maximizing the utilization of knowledge in pretrained
diffusion models. Furthermore, we demonstrate that our Kiss3DGen model is
compatible with various diffusion model techniques, enabling advanced features
such as 3D editing and mesh and texture enhancement. Through extensive
experiments, we demonstrate the effectiveness of our approach, showcasing its
ability to produce high-quality 3D models efficiently. | 7 | 67c6916b3ff65c5582968702 | https://ltt-o.github.io/Kiss3dgen.github.io/ | https://github.com/EnVision-Research/Kiss3DGen |
|
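A minimal sketch of the tiled "3D Bundle Image" described above: multi-view renders in one row and their normal maps in another. The exact grid layout is an assumption; the abstract only states that the representation tiles multi-view images with their corresponding normal maps.

```python
import numpy as np

def make_bundle_image(views: list[np.ndarray], normals: list[np.ndarray]) -> np.ndarray:
    """Tile N same-sized RGB views (top row) and their normal maps (bottom row)
    into a single (2H, N*W, 3) "bundle image" for a 2D diffusion model."""
    assert len(views) == len(normals)
    top = np.concatenate(views, axis=1)       # views left to right
    bottom = np.concatenate(normals, axis=1)  # matching normal maps beneath
    return np.concatenate([top, bottom], axis=0)
```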
2025-03-04T00:52:22.204000 | Difix3D+: Improving 3D Reconstructions with Single-Step Diffusion Models | 1 | {
"_id": "633aaf695df91da9cea92960",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/633aaf695df91da9cea92960/9T4y1ru5wt5iKUUqf9_Tt.png",
"followerCount": 12,
"fullname": "Jay Wu",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "jayw",
"type": "user"
} | true | null | 2503.01774 | [
{
"_id": "67c694febdab31ec59fea175",
"hidden": false,
"name": "Jay Zhangjie Wu",
"status": "claimed_verified",
"statusLastChangedAt": "2025-03-04T08:34:53.874Z",
"user": {
"_id": "633aaf695df91da9cea92960",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/633aaf695df91da9cea92960/9T4y1ru5wt5iKUUqf9_Tt.png",
"fullname": "Jay Wu",
"isPro": false,
"type": "user",
"user": "jayw"
}
},
{
"_id": "67c694febdab31ec59fea176",
"hidden": false,
"name": "Yuxuan Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c694febdab31ec59fea177",
"hidden": false,
"name": "Haithem Turki",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T09:13:26.878Z",
"user": {
"_id": "656e000253703dd78fd072a9",
"avatarUrl": "/avatars/6702ba8fabe3d08884aa757f90cea333.svg",
"fullname": "Haithem Turki",
"isPro": false,
"type": "user",
"user": "hturki"
}
},
{
"_id": "67c694febdab31ec59fea178",
"hidden": false,
"name": "Xuanchi Ren",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T09:13:33.467Z",
"user": {
"_id": "658529d61c461dfe88afe8e8",
"avatarUrl": "/avatars/a22c1b07d28c2662833c462c6537d835.svg",
"fullname": "Xuanchi Ren",
"isPro": false,
"type": "user",
"user": "xrenaa"
}
},
{
"_id": "67c694febdab31ec59fea179",
"hidden": false,
"name": "Jun Gao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c694febdab31ec59fea17a",
"hidden": false,
"name": "Mike Zheng Shou",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T09:27:21.825Z",
"user": {
"_id": "661ab3da2b14565c7acccf5c",
"avatarUrl": "/avatars/fa4fc03664803e02aede4d4c3d50b393.svg",
"fullname": "Mike Zheng Shou",
"isPro": false,
"type": "user",
"user": "AnalMom"
}
},
{
"_id": "67c694febdab31ec59fea17b",
"hidden": false,
"name": "Sanja Fidler",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c694febdab31ec59fea17c",
"hidden": false,
"name": "Zan Gojcic",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T09:27:34.034Z",
"user": {
"_id": "6366cda3361a96184dc22139",
"avatarUrl": "/avatars/d8a88c84cb5f69e69dd038674a29be89.svg",
"fullname": "Zan Gojcic",
"isPro": false,
"type": "user",
"user": "zgojcic"
}
},
{
"_id": "67c694febdab31ec59fea17d",
"hidden": false,
"name": "Huan Ling",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-03-03T17:58:33 | Difix3D+: Improving 3D Reconstructions with Single-Step Diffusion Models | Neural Radiance Fields and 3D Gaussian Splatting have revolutionized 3D
reconstruction and novel-view synthesis tasks. However, achieving photorealistic
rendering from extreme novel viewpoints remains challenging, as artifacts
persist across representations. In this work, we introduce Difix3D+, a novel
pipeline designed to enhance 3D reconstruction and novel-view synthesis through
single-step diffusion models. At the core of our approach is Difix, a
single-step image diffusion model trained to enhance and remove artifacts in
rendered novel views caused by underconstrained regions of the 3D
representation. Difix serves two critical roles in our pipeline. First, it is
used during the reconstruction phase to clean up pseudo-training views that are
rendered from the reconstruction and then distilled back into 3D. This greatly
enhances underconstrained regions and improves the overall 3D representation
quality. More importantly, Difix also acts as a neural enhancer during
inference, effectively removing residual artifacts arising from imperfect 3D
supervision and the limited capacity of current reconstruction models. Difix3D+
is a general solution, a single model compatible with both NeRF and 3DGS
representations, and it achieves an average 2x improvement in FID score
over baselines while maintaining 3D consistency. | 29 | 67c69500bdab31ec59fea24d | https://research.nvidia.com/labs/toronto-ai/difix3d | null |
|
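Schematically, the progressive loop described above renders pseudo-views, cleans them with the single-step enhancer, and distills them back into the 3D representation. The `reconstruct`, `render`, and `novel_poses` interfaces below are assumptions for illustration, not the paper's code.

```python
def difix_loop(reconstruct, difix, training_views, rounds=3):
    """Progressively improve a 3D scene with a single-step diffusion enhancer."""
    scene = reconstruct(training_views)
    for _ in range(rounds):
        pseudo = [scene.render(pose) for pose in scene.novel_poses()]
        cleaned = [difix(image) for image in pseudo]   # single-step artifact removal
        scene = reconstruct(training_views + cleaned)  # distill cleaned views into 3D
    return scene  # at render time, difix is applied once more as a neural enhancer
```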
2025-03-04T00:29:56.570000 | VideoUFO: A Million-Scale User-Focused Dataset for Text-to-Video Generation | 1 | {
"_id": "62b32a4429a410b7f6b06710",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/62b32a4429a410b7f6b06710/VzgvmnlYZWuifZTkIkCxy.jpeg",
"followerCount": 14,
"fullname": "Wenhao Wang",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "WenhaoWang",
"type": "user"
} | true | null | 2503.01739 | [
{
"_id": "67c68f7828a037872c5ce5bb",
"hidden": false,
"name": "Wenhao Wang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T11:14:37.907Z",
"user": {
"_id": "62b32a4429a410b7f6b06710",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/62b32a4429a410b7f6b06710/VzgvmnlYZWuifZTkIkCxy.jpeg",
"fullname": "Wenhao Wang",
"isPro": false,
"type": "user",
"user": "WenhaoWang"
}
},
{
"_id": "67c68f7828a037872c5ce5bc",
"hidden": false,
"name": "Yi Yang",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-03-03T17:00:36 | VideoUFO: A Million-Scale User-Focused Dataset for Text-to-Video
Generation | Text-to-video generative models convert textual prompts into dynamic visual
content, offering wide-ranging applications in film production, gaming, and
education. However, their real-world performance often falls short of user
expectations. One key reason is that these models have not been trained on
videos related to some topics users want to create. In this paper, we propose
VideoUFO, the first Video dataset specifically curated to align with Users'
FOcus in real-world scenarios. Beyond this, our VideoUFO also features: (1)
minimal (0.29%) overlap with existing video datasets, and (2) videos
searched exclusively via YouTube's official API under the Creative Commons
license. These two attributes provide future researchers with greater freedom
to broaden their training sources. VideoUFO comprises over 1.09 million
video clips, each paired with both a brief and a detailed caption
(description). Specifically, through clustering, we first identify 1,291
user-focused topics from the million-scale real text-to-video prompt dataset,
VidProM. Then, we use these topics to retrieve videos from YouTube, split the
retrieved videos into clips, and generate both brief and detailed captions for
each clip. After verifying the clips with specified topics, we are left with
about 1.09 million video clips. Our experiments reveal that (1) 16 current
text-to-video models do not achieve consistent performance across all
user-focused topics; and (2) a simple model trained on VideoUFO outperforms
others on worst-performing topics. The dataset is publicly available at
https://huggingface.co/datasets/WenhaoWang/VideoUFO under the CC BY 4.0
License. | 3 | 67c68f7a28a037872c5ce60d | null | null |
|
2025-03-04T00:09:04.418000 | Cognitive Behaviors that Enable Self-Improving Reasoners, or, Four Habits of Highly Effective STaRs | 1 | {
"_id": "63e6a880f2e9a8f22c5a1630",
"avatarUrl": "/avatars/53b57690fe052ce6882bbfc87b11567c.svg",
"followerCount": null,
"fullname": "Kanishk Gandhi",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "obiwan96",
"type": "user"
} | true | null | 2503.01307 | [
{
"_id": "67c68adc0457c9f809c22df8",
"hidden": false,
"name": "Kanishk Gandhi",
"status": "claimed_verified",
"statusLastChangedAt": "2025-03-04T08:35:01.161Z",
"user": {
"_id": "63e6a880f2e9a8f22c5a1630",
"avatarUrl": "/avatars/53b57690fe052ce6882bbfc87b11567c.svg",
"fullname": "Kanishk Gandhi",
"isPro": false,
"type": "user",
"user": "obiwan96"
}
},
{
"_id": "67c68adc0457c9f809c22df9",
"hidden": false,
"name": "Ayush Chakravarthy",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T10:04:44.344Z",
"user": {
"_id": "624f9e3d07bd004fb855f5e9",
"avatarUrl": "/avatars/86a349cd4053bc0317e27e75a51c69fa.svg",
"fullname": "Ayush Chakravarthy",
"isPro": false,
"type": "user",
"user": "ayushchakravarthy"
}
},
{
"_id": "67c68adc0457c9f809c22dfa",
"hidden": false,
"name": "Anikait Singh",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T10:05:05.759Z",
"user": {
"_id": "6511ee845b7e52b0251fdee9",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6511ee845b7e52b0251fdee9/hTIwiIYBGOVnIrxtpri83.png",
"fullname": "Anikait Singh",
"isPro": false,
"type": "user",
"user": "Asap7772"
}
},
{
"_id": "67c68adc0457c9f809c22dfb",
"hidden": false,
"name": "Nathan Lile",
"status": "claimed_verified",
"statusLastChangedAt": "2025-03-04T08:34:58.582Z",
"user": {
"_id": "61aa15fd8a9625ebfe284286",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/61aa15fd8a9625ebfe284286/KaGzIeijcgcN15JErCqft.jpeg",
"fullname": "nathan lile",
"isPro": false,
"type": "user",
"user": "nlile"
}
},
{
"_id": "67c68adc0457c9f809c22dfc",
"hidden": false,
"name": "Noah D. Goodman",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T10:05:12.186Z",
"user": {
"_id": "67321274c1f20c742bcf7a8d",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/no-auth/ltcQhre6eDRVzn6Vbbyhu.png",
"fullname": "Noah D. Goodman",
"isPro": false,
"type": "user",
"user": "ngoodman"
}
}
] | 2025-03-03T08:46:22 | Cognitive Behaviors that Enable Self-Improving Reasoners, or, Four
Habits of Highly Effective STaRs | Test-time inference has emerged as a powerful paradigm for enabling language
models to "think" longer and more carefully about complex challenges, much
like skilled human experts. While reinforcement learning (RL) can drive
self-improvement in language models on verifiable tasks, some models exhibit
substantial gains while others quickly plateau. For instance, we find that
Qwen-2.5-3B far exceeds Llama-3.2-3B under identical RL training for the game
of Countdown. This discrepancy raises a critical question: what intrinsic
properties enable effective self-improvement? We introduce a framework to
investigate this question by analyzing four key cognitive behaviors --
verification, backtracking, subgoal setting, and backward chaining -- that both
expert human problem solvers and successful language models employ. Our study
reveals that Qwen naturally exhibits these reasoning behaviors, whereas Llama
initially lacks them. In systematic experimentation with controlled behavioral
datasets, we find that priming Llama with examples containing these reasoning
behaviors enables substantial improvements during RL, matching or exceeding
Qwen's performance. Importantly, the presence of reasoning behaviors, rather
than correctness of answers, proves to be the critical factor -- models primed
with incorrect solutions containing proper reasoning patterns achieve
comparable performance to those trained on correct solutions. Finally,
leveraging continued pretraining with OpenWebMath data, filtered to amplify
reasoning behaviors, enables the Llama model to match Qwen's self-improvement
trajectory. Our findings establish a fundamental relationship between initial
reasoning behaviors and the capacity for improvement, explaining why some
language models effectively utilize additional computation while others
plateau. | 13 | 67c68add0457c9f809c22e31 | null | null |
|
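The four behaviors named above (verification, backtracking, subgoal setting, backward chaining) could be tagged in a reasoning trace with surface cues, as in this rough sketch. The cue patterns are illustrative assumptions; the paper's actual annotation method is not specified in the abstract.

```python
import re

# Illustrative surface cues for each behavior; not the paper's classifier.
BEHAVIOR_CUES = {
    "verification":      r"let me check|verify|double-check",
    "backtracking":      r"\bwait\b|that('s| is) wrong|try (again|another)",
    "subgoal_setting":   r"\bfirst\b|\bnext\b|step \d+",
    "backward_chaining": r"work(ing)? backwards?|to reach the target",
}

def tag_behaviors(trace: str) -> set[str]:
    """Return which cognitive behaviors a reasoning trace superficially exhibits."""
    return {name for name, pattern in BEHAVIOR_CUES.items()
            if re.search(pattern, trace, flags=re.IGNORECASE)}
```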
2025-03-03T23:44:06.105000 | Large-Scale Data Selection for Instruction Tuning | 1 | {
"_id": "62608fc2ffe8827cb1d89f9f",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1654027835241-62608fc2ffe8827cb1d89f9f.png",
"followerCount": 11,
"fullname": "Hamish Ivison",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "hamishivi",
"type": "user"
} | true | null | 2503.01807 | [
{
"_id": "67c67ff6dec55d10cb10fc9e",
"hidden": false,
"name": "Hamish Ivison",
"status": "claimed_verified",
"statusLastChangedAt": "2025-03-04T08:40:13.649Z",
"user": {
"_id": "62608fc2ffe8827cb1d89f9f",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1654027835241-62608fc2ffe8827cb1d89f9f.png",
"fullname": "Hamish Ivison",
"isPro": false,
"type": "user",
"user": "hamishivi"
}
},
{
"_id": "67c67ff6dec55d10cb10fc9f",
"hidden": false,
"name": "Muru Zhang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T11:14:59.402Z",
"user": {
"_id": "61cc2cf4dcb47bd5ed3cd3b8",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1640770780085-noauth.jpeg",
"fullname": "Muru Zhang",
"isPro": false,
"type": "user",
"user": "nanami"
}
},
{
"_id": "67c67ff6dec55d10cb10fca0",
"hidden": false,
"name": "Faeze Brahman",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T11:15:05.562Z",
"user": {
"_id": "65282b8d578679aac7888aec",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/65282b8d578679aac7888aec/dibBkhH-z1c70mJZZxJ7u.jpeg",
"fullname": "Faeze Brahman",
"isPro": false,
"type": "user",
"user": "faezeb"
}
},
{
"_id": "67c67ff6dec55d10cb10fca1",
"hidden": false,
"name": "Pang Wei Koh",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T11:15:14.558Z",
"user": {
"_id": "641b4263abfce26bcf7b27de",
"avatarUrl": "/avatars/e91b4205e4f74b0dd8c333c23203a924.svg",
"fullname": "Pang Wei Koh",
"isPro": false,
"type": "user",
"user": "pangwei"
}
},
{
"_id": "67c67ff6dec55d10cb10fca2",
"hidden": false,
"name": "Pradeep Dasigi",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T11:15:20.400Z",
"user": {
"_id": "6408fcc93461c51cf735a61e",
"avatarUrl": "/avatars/619f3653911d111f046a5a6c30fc8319.svg",
"fullname": "Pradeep Dasigi",
"isPro": false,
"type": "user",
"user": "pradeepd"
}
}
] | 2025-03-03T18:37:26 | Large-Scale Data Selection for Instruction Tuning | Selecting high-quality training data from a larger pool is a crucial step
when instruction-tuning language models, as carefully curated datasets often
produce models that outperform those trained on much larger, noisier datasets.
Automated data selection approaches for instruction-tuning are typically tested
by selecting small datasets (roughly 10k samples) from small pools (100-200k
samples). However, popular deployed instruction-tuned models often train on
hundreds of thousands to millions of samples, subsampled from even larger data
pools. We present a systematic study of how well data selection methods scale
to these settings, selecting up to 2.5M samples from pools of up to 5.8M
samples and evaluating across 7 diverse tasks. We show that many recently
proposed methods fall short of random selection in this setting (while using
more compute), and even decline in performance when given access to larger
pools of data to select over. However, we find that a variant of
representation-based data selection (RDS+), which uses weighted mean pooling of
pretrained LM hidden states, consistently outperforms more complex methods
across all settings tested -- all whilst being more compute-efficient. Our
findings highlight that the scaling properties of proposed automated selection
methods should be more closely examined. We release our code, data, and models
at https://github.com/hamishivi/automated-instruction-selection. | 5 | 67c67ff9dec55d10cb10fcef | null | null |
|
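The winning method above, RDS+, builds document embeddings via weighted mean pooling of pretrained LM hidden states. A rough sketch of that pooling step follows; the position-linear weighting is an assumption, not necessarily the paper's exact scheme.

```python
import torch

def weighted_mean_pool(hidden_states: torch.Tensor,
                       attention_mask: torch.Tensor) -> torch.Tensor:
    """Pool [batch, seq, dim] LM hidden states into one embedding per example.

    Later tokens get linearly increasing weight (an illustrative choice);
    padding positions are zeroed out via attention_mask ([batch, seq], 0/1).
    """
    positions = torch.arange(1, hidden_states.size(1) + 1,
                             device=hidden_states.device,
                             dtype=hidden_states.dtype)
    weights = positions * attention_mask
    weights = weights / weights.sum(dim=1, keepdim=True).clamp(min=1e-9)
    return (hidden_states * weights.unsqueeze(-1)).sum(dim=1)  # [batch, dim]
```

Candidate documents would then be scored, for example, by cosine similarity between their embeddings and embeddings of target-task examples, keeping the top-scoring samples.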
2025-03-03T23:29:27.952000 | Visual-RFT: Visual Reinforcement Fine-Tuning | 1 | {
"_id": "63fda3fced9eead590ff6918",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1677566802735-noauth.jpeg",
"followerCount": 16,
"fullname": "Zeyi Sun",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "Zery",
"type": "user"
} | true | null | 2503.01785 | [
{
"_id": "67c6816614a1bf9855188b8b",
"hidden": false,
"name": "Ziyu Liu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T09:12:57.481Z",
"user": {
"_id": "66fe1334ff3ee1f7569fab6d",
"avatarUrl": "/avatars/6868b1a545028a9b8bbded52490dc093.svg",
"fullname": "ziyuliu",
"isPro": false,
"type": "user",
"user": "ziyuliu"
}
},
{
"_id": "67c6816614a1bf9855188b8c",
"hidden": false,
"name": "Zeyi Sun",
"status": "claimed_verified",
"statusLastChangedAt": "2025-03-04T08:35:03.275Z",
"user": {
"_id": "63fda3fced9eead590ff6918",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1677566802735-noauth.jpeg",
"fullname": "Zeyi Sun",
"isPro": false,
"type": "user",
"user": "Zery"
}
},
{
"_id": "67c6816614a1bf9855188b8d",
"hidden": false,
"name": "Yuhang Zang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T09:12:32.723Z",
"user": {
"_id": "63859cf3b2906edaf83af9f0",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63859cf3b2906edaf83af9f0/iUQm5FAomzqYi6fkqIn9F.jpeg",
"fullname": "Yuhang Zang",
"isPro": false,
"type": "user",
"user": "yuhangzang"
}
},
{
"_id": "67c6816614a1bf9855188b8e",
"hidden": false,
"name": "Xiaoyi Dong",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T09:12:25.627Z",
"user": {
"_id": "67c0849ee08c178ef8d4e05c",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/no-auth/mQ6VdnjZnRhb0H_waPclo.png",
"fullname": "Xiaoyi Dong",
"isPro": false,
"type": "user",
"user": "sweetFruit"
}
},
{
"_id": "67c6816614a1bf9855188b8f",
"hidden": false,
"name": "Yuhang Cao",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T09:12:19.177Z",
"user": {
"_id": "65000bef18830fabea469fdd",
"avatarUrl": "/avatars/b320c77dfad039d9f9c54127f610d44f.svg",
"fullname": "Cao Yuhang",
"isPro": false,
"type": "user",
"user": "yhcao"
}
},
{
"_id": "67c6816614a1bf9855188b90",
"hidden": false,
"name": "Haodong Duan",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T09:12:05.281Z",
"user": {
"_id": "63ee1379190ddd6214efd73a",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1676546883247-noauth.png",
"fullname": "HAODONG DUAN",
"isPro": false,
"type": "user",
"user": "KennyUTC"
}
},
{
"_id": "67c6816614a1bf9855188b91",
"hidden": false,
"name": "Dahua Lin",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T09:11:57.087Z",
"user": {
"_id": "636317ed80c1a705a6eff396",
"avatarUrl": "/avatars/3db090e101b916d9256d0d3e043db71d.svg",
"fullname": "Dahua Lin",
"isPro": false,
"type": "user",
"user": "lindahua"
}
},
{
"_id": "67c6816614a1bf9855188b92",
"hidden": false,
"name": "Jiaqi Wang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T09:11:48.889Z",
"user": {
"_id": "64638c4d51fa6e63060521b5",
"avatarUrl": "/avatars/c863ace5b1dc788a341bcf4ddbdfaec1.svg",
"fullname": "JIaqi",
"isPro": false,
"type": "user",
"user": "Jiaqiwang"
}
}
] | 2025-03-03T18:16:32 | Visual-RFT: Visual Reinforcement Fine-Tuning | Reinforcement Fine-Tuning (RFT) in Large Reasoning Models like OpenAI o1
learns from feedback on its answers, which is especially useful in applications
where fine-tuning data is scarce. Recent open-source work like DeepSeek-R1
demonstrates that reinforcement learning with verifiable reward is one key
direction in reproducing o1. While the R1-style model has demonstrated success
in language models, its application in multi-modal domains remains
under-explored. This work introduces Visual Reinforcement Fine-Tuning
(Visual-RFT), which further extends the application areas of RFT on visual
tasks. Specifically, Visual-RFT first uses Large Vision-Language Models (LVLMs)
to generate multiple responses containing reasoning tokens and final answers
for each input, and then uses our proposed visual perception verifiable reward
functions to update the model via the policy optimization algorithm such as
Group Relative Policy Optimization (GRPO). We design different verifiable
reward functions for different perception tasks, such as the Intersection over
Union (IoU) reward for object detection. Experimental results on fine-grained
image classification, few-shot object detection, reasoning grounding, as well
as open-vocabulary object detection benchmarks show the competitive performance
and advanced generalization ability of Visual-RFT compared with Supervised
Fine-tuning (SFT). For example, Visual-RFT improves accuracy by 24.3% over
the baseline in one-shot fine-grained image classification with around 100
samples. In few-shot object detection, Visual-RFT also exceeds the baseline by
21.9 on COCO's two-shot setting and 15.4 on LVIS. Our Visual-RFT represents
a paradigm shift in fine-tuning LVLMs, offering a data-efficient, reward-driven
approach that enhances reasoning and adaptability for domain-specific tasks. | 43 | 67c6816c14a1bf9855188d8c | https://github.com/Liuziyu77/Visual-RFT | https://github.com/Liuziyu77/Visual-RFT |
|
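The IoU-based verifiable reward for detection mentioned above can be written directly. The [x1, y1, x2, y2] box format and the matching rule (mean best-match IoU per ground-truth box) are assumptions for illustration.

```python
def iou(a, b):
    """Intersection-over-Union of two boxes in [x1, y1, x2, y2] format."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def iou_reward(predicted_boxes, gold_boxes):
    """Verifiable reward for one response: mean best-match IoU per gold box."""
    if not gold_boxes:
        return 0.0
    return sum(max((iou(g, p) for p in predicted_boxes), default=0.0)
               for g in gold_boxes) / len(gold_boxes)
```

Under GRPO, this scalar would be computed for each sampled response and advantages taken relative to the group mean.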
2025-03-03T23:15:05.187000 | Phi-4-Mini Technical Report: Compact yet Powerful Multimodal Language Models via Mixture-of-LoRAs | 3 | {
"_id": "63f5173bb51da4d61da6c038",
"avatarUrl": "/avatars/0ee530cf80476aa3985c4d591cd384a1.svg",
"followerCount": 6,
"fullname": "Young Jin Kim",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "ykim362",
"type": "user"
} | true | null | 2503.01743 | [
{
"_id": "67c67d0dfe135a5f482599bb",
"hidden": false,
"name": "Abdelrahman Abouelenin",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c67d0dfe135a5f482599bc",
"hidden": false,
"name": "Atabak Ashfaq",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T09:45:15.511Z",
"user": {
"_id": "669ed17498ba26df962584f5",
"avatarUrl": "/avatars/996c9cf05a4f8e5447552220085157c7.svg",
"fullname": "Atabak Ashfaq",
"isPro": false,
"type": "user",
"user": "atabakashfaqMSFT"
}
},
{
"_id": "67c67d0dfe135a5f482599bd",
"hidden": false,
"name": "Adam Atkinson",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c67d0dfe135a5f482599be",
"hidden": false,
"name": "Hany Awadalla",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c67d0dfe135a5f482599bf",
"hidden": false,
"name": "Nguyen Bach",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c67d0dfe135a5f482599c0",
"hidden": false,
"name": "Jianmin Bao",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T09:46:34.578Z",
"user": {
"_id": "6481e690f9ed842838a2b106",
"avatarUrl": "/avatars/e89a3c8366df504a95dc08a1a412bf3d.svg",
"fullname": "Jianmin Bao",
"isPro": false,
"type": "user",
"user": "jianmin-ustc"
}
},
{
"_id": "67c67d0dfe135a5f482599c1",
"hidden": false,
"name": "Alon Benhaim",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T09:46:41.117Z",
"user": {
"_id": "65b9b627e7c838136275a681",
"avatarUrl": "/avatars/22423f3d9a6c4ee34cad3b0894d27d23.svg",
"fullname": "Alon Benhaim",
"isPro": false,
"type": "user",
"user": "alonbenhaim"
}
},
{
"_id": "67c67d0dfe135a5f482599c2",
"hidden": false,
"name": "Martin Cai",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T09:46:47.556Z",
"user": {
"_id": "66f81b5b3c7ffa7931b4829a",
"avatarUrl": "/avatars/a7f34e8e3fd92fdb96affc367b522fbe.svg",
"fullname": "cai",
"isPro": false,
"type": "user",
"user": "martincai"
}
},
{
"_id": "67c67d0dfe135a5f482599c3",
"hidden": false,
"name": "Vishrav Chaudhary",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T09:46:56.428Z",
"user": {
"_id": "659c7ac977ac6f1bf5e63d7e",
"avatarUrl": "/avatars/86a6efde0d483564a67ed5f344d479a0.svg",
"fullname": "Vishrav Chaudhary",
"isPro": false,
"type": "user",
"user": "vishravmsft"
}
},
{
"_id": "67c67d0dfe135a5f482599c4",
"hidden": false,
"name": "Congcong Chen",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T09:47:04.205Z",
"user": {
"_id": "66c7a93b92e9f5b19f7533ab",
"avatarUrl": "/avatars/e26ebf5cf083a3ec09fce24026ecc76e.svg",
"fullname": "Chen",
"isPro": false,
"type": "user",
"user": "congcongchen"
}
},
{
"_id": "67c67d0dfe135a5f482599c5",
"hidden": false,
"name": "Dong Chen",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T09:47:11.865Z",
"user": {
"_id": "666470a28f5513b0cf11e850",
"avatarUrl": "/avatars/7beea758882677ad32a12ce56d4d084a.svg",
"fullname": "Dong Chen",
"isPro": false,
"type": "user",
"user": "DongChen06"
}
},
{
"_id": "67c67d0dfe135a5f482599c6",
"hidden": false,
"name": "Dongdong Chen",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T09:47:18.197Z",
"user": {
"_id": "6567651c6fcc82e5e8c36d4d",
"avatarUrl": "/avatars/ba3cc037a7688c4f8d967fc6043e540d.svg",
"fullname": "Dongdong Chen",
"isPro": false,
"type": "user",
"user": "dongdongchen"
}
},
{
"_id": "67c67d0dfe135a5f482599c7",
"hidden": false,
"name": "Junkun Chen",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T09:47:43.236Z",
"user": {
"_id": "669db44d61278f96d8c608a4",
"avatarUrl": "/avatars/92a493da10c086af5f2af680f4e2c6c6.svg",
"fullname": "Junkun Chen",
"isPro": false,
"type": "user",
"user": "shtpgshus"
}
},
{
"_id": "67c67d0dfe135a5f482599c8",
"hidden": false,
"name": "Weizhu Chen",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T09:47:51.832Z",
"user": {
"_id": "64da876370446182be5b608d",
"avatarUrl": "/avatars/e412fdc71404ecdf638e416846e3ebfb.svg",
"fullname": "Weizhu Chen",
"isPro": false,
"type": "user",
"user": "chenweizhu"
}
},
{
"_id": "67c67d0dfe135a5f482599c9",
"hidden": false,
"name": "Yen-Chun Chen",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T09:47:58.051Z",
"user": {
"_id": "662d6b09a47b4da4b23c8b2a",
"avatarUrl": "/avatars/6770b1d7e25b2cdce04f9904b543d122.svg",
"fullname": "Yen-Chun Chen",
"isPro": false,
"type": "user",
"user": "Yen-ChunChen"
}
},
{
"_id": "67c67d0dfe135a5f482599ca",
"hidden": false,
"name": "Yi-ling Chen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c67d0dfe135a5f482599cb",
"hidden": false,
"name": "Qi Dai",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c67d0dfe135a5f482599cc",
"hidden": false,
"name": "Xiyang Dai",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c67d0dfe135a5f482599cd",
"hidden": false,
"name": "Ruchao Fan",
"status": "claimed_verified",
"statusLastChangedAt": "2025-03-04T08:40:17.936Z",
"user": {
"_id": "64a8b800b35f48e37dfd20fe",
"avatarUrl": "/avatars/1e66be9a5238ce86df8b54150520bcc8.svg",
"fullname": "Ruchao Fan",
"isPro": false,
"type": "user",
"user": "fanruchao"
}
},
{
"_id": "67c67d0dfe135a5f482599ce",
"hidden": false,
"name": "Mei Gao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c67d0dfe135a5f482599cf",
"hidden": false,
"name": "Min Gao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c67d0dfe135a5f482599d0",
"hidden": false,
"name": "Amit Garg",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c67d0dfe135a5f482599d1",
"hidden": false,
"name": "Abhishek Goswami",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T09:49:02.466Z",
"user": {
"_id": "62cdae333529c21a2283a0a1",
"avatarUrl": "/avatars/cafc2821e522bbd06d49830e36a073e3.svg",
"fullname": "Abhishek GOSWAMI",
"isPro": false,
"type": "user",
"user": "abgoswam"
}
},
{
"_id": "67c67d0dfe135a5f482599d2",
"hidden": false,
"name": "Junheng Hao",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T09:53:16.356Z",
"user": {
"_id": "5f04c4394ec31d33a72116d6",
"avatarUrl": "/avatars/75d4b9020070e73604b12e5adc1c8201.svg",
"fullname": "Junheng Hao",
"isPro": false,
"type": "user",
"user": "jeffhao"
}
},
{
"_id": "67c67d0dfe135a5f482599d3",
"hidden": false,
"name": "Amr Hendy",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T09:53:24.716Z",
"user": {
"_id": "660480db07619487a3718a16",
"avatarUrl": "/avatars/9c08d541913e57fd79988ef93d5095d4.svg",
"fullname": "Amr Hendy",
"isPro": false,
"type": "user",
"user": "amrhendy"
}
},
{
"_id": "67c67d0dfe135a5f482599d4",
"hidden": false,
"name": "Yuxuan Hu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c67d0dfe135a5f482599d5",
"hidden": false,
"name": "Xin Jin",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c67d0dfe135a5f482599d6",
"hidden": false,
"name": "Mahmoud Khademi",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T09:53:53.225Z",
"user": {
"_id": "6440905e27dc46cca590994c",
"avatarUrl": "/avatars/0346f8ad17038fba87649a0fc59d64ab.svg",
"fullname": "Mahmoud Khademi",
"isPro": false,
"type": "user",
"user": "mkhademi"
}
},
{
"_id": "67c67d0dfe135a5f482599d7",
"hidden": false,
"name": "Dongwoo Kim",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T09:54:04.257Z",
"user": {
"_id": "662476aec8920ec351b8d3d8",
"avatarUrl": "/avatars/791e40f53073563680ef18f75b3ea95e.svg",
"fullname": "Dongwoo Kim",
"isPro": false,
"type": "user",
"user": "dongwookim-ms"
}
},
{
"_id": "67c67d0dfe135a5f482599d8",
"hidden": false,
"name": "Young Jin Kim",
"status": "claimed_verified",
"statusLastChangedAt": "2025-03-04T08:40:19.902Z",
"user": {
"_id": "63f5173bb51da4d61da6c038",
"avatarUrl": "/avatars/0ee530cf80476aa3985c4d591cd384a1.svg",
"fullname": "Young Jin Kim",
"isPro": false,
"type": "user",
"user": "ykim362"
}
},
{
"_id": "67c67d0dfe135a5f482599d9",
"hidden": false,
"name": "Gina Lee",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c67d0dfe135a5f482599da",
"hidden": false,
"name": "Jinyu Li",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T09:54:17.115Z",
"user": {
"_id": "64004b72330a45b03604303b",
"avatarUrl": "/avatars/a1fa3fc700173238d0336258b000d934.svg",
"fullname": "Jinyu Li",
"isPro": false,
"type": "user",
"user": "FallTraveler"
}
},
{
"_id": "67c67d0dfe135a5f482599db",
"hidden": false,
"name": "Yunsheng Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c67d0dfe135a5f482599dc",
"hidden": false,
"name": "Chen Liang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c67d0dfe135a5f482599dd",
"hidden": false,
"name": "Xihui Lin",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T09:56:29.024Z",
"user": {
"_id": "6464f05e5cdb9ab50f846c98",
"avatarUrl": "/avatars/3cb2f60a909b59289209ecc7ba75a338.svg",
"fullname": "Xihui Lin",
"isPro": false,
"type": "user",
"user": "linxihui"
}
},
{
"_id": "67c67d0dfe135a5f482599de",
"hidden": false,
"name": "Zeqi Lin",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T09:56:38.534Z",
"user": {
"_id": "62c3a0caf5e2eb44f51de87d",
"avatarUrl": "/avatars/3c535c5488476b75443666176fcb4c9b.svg",
"fullname": "Zeqi Lin",
"isPro": false,
"type": "user",
"user": "linzeqi"
}
},
{
"_id": "67c67d0dfe135a5f482599df",
"hidden": false,
"name": "Mengchen Liu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c67d0dfe135a5f482599e0",
"hidden": false,
"name": "Yang Liu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c67d0dfe135a5f482599e1",
"hidden": false,
"name": "Gilsinia Lopez",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T09:59:55.169Z",
"user": {
"_id": "60c790f1accf7da31ed8240d",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/60c790f1accf7da31ed8240d/YDohCmgf9OUeWqZIs3Thh.jpeg",
"fullname": "Gilsinia Lopez",
"isPro": false,
"type": "user",
"user": "lgg"
}
},
{
"_id": "67c67d0dfe135a5f482599e2",
"hidden": false,
"name": "Chong Luo",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c67d0dfe135a5f482599e3",
"hidden": false,
"name": "Piyush Madan",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T10:02:38.019Z",
"user": {
"_id": "66269a329014ef4d10f55d9d",
"avatarUrl": "/avatars/d4866c32419a7dd07e9aa0660f4bafa9.svg",
"fullname": "Piyush Madan",
"isPro": false,
"type": "user",
"user": "PiyushMadan"
}
},
{
"_id": "67c67d0dfe135a5f482599e4",
"hidden": false,
"name": "Vadim Mazalov",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T10:02:47.430Z",
"user": {
"_id": "65301591944086d1d5fcf656",
"avatarUrl": "/avatars/250a2e898a4fcbe78feaf6e812851bd6.svg",
"fullname": "Vadim Mazalovskii",
"isPro": false,
"type": "user",
"user": "JakeRiley"
}
},
{
"_id": "67c67d0dfe135a5f482599e5",
"hidden": false,
"name": "Ali Mousavi",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c67d0dfe135a5f482599e6",
"hidden": false,
"name": "Anh Nguyen",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T09:57:52.311Z",
"user": {
"_id": "649bc84833486cdd77c01c66",
"avatarUrl": "/avatars/36f4e4bb15c337c4391bfbd234051f4c.svg",
"fullname": "Nguyen Anh",
"isPro": false,
"type": "user",
"user": "Anhnguyen"
}
},
{
"_id": "67c67d0dfe135a5f482599e7",
"hidden": false,
"name": "Jing Pan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c67d0dfe135a5f482599e8",
"hidden": false,
"name": "Daniel Perez-Becker",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T09:59:09.929Z",
"user": {
"_id": "673b7f70cdc852f69bebfed1",
"avatarUrl": "/avatars/1efad61a42b948c750c96472a6192de5.svg",
"fullname": "Daniel Perez-Becker",
"isPro": false,
"type": "user",
"user": "perezbecker"
}
},
{
"_id": "67c67d0dfe135a5f482599e9",
"hidden": false,
"name": "Jacob Platin",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c67d0dfe135a5f482599ea",
"hidden": false,
"name": "Thomas Portet",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T09:59:39.865Z",
"user": {
"_id": "65c52dad286bf45e79491697",
"avatarUrl": "/avatars/01ebc7979273df6e53971ae9835b503f.svg",
"fullname": "Thomas Portet",
"isPro": false,
"type": "user",
"user": "thopo"
}
},
{
"_id": "67c67d0dfe135a5f482599eb",
"hidden": false,
"name": "Kai Qiu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c67d0dfe135a5f482599ec",
"hidden": false,
"name": "Bo Ren",
"status": "claimed_verified",
"statusLastChangedAt": "2025-03-04T08:40:15.919Z",
"user": {
"_id": "668dcf92835bf7e64bbca904",
"avatarUrl": "/avatars/416eb3a3c5318a6a45aad87012296470.svg",
"fullname": "Bo Ren",
"isPro": false,
"type": "user",
"user": "rosrad"
}
},
{
"_id": "67c67d0dfe135a5f482599ed",
"hidden": false,
"name": "Liliang Ren",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T09:57:37.996Z",
"user": {
"_id": "63815eff4761ddfa00903762",
"avatarUrl": "/avatars/3419b239d42e091586f1c51b526d88e5.svg",
"fullname": "Liliang Ren",
"isPro": false,
"type": "user",
"user": "renll"
}
},
{
"_id": "67c67d0dfe135a5f482599ee",
"hidden": false,
"name": "Sambuddha Roy",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c67d0dfe135a5f482599ef",
"hidden": false,
"name": "Ning Shang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c67d0dfe135a5f482599f0",
"hidden": false,
"name": "Yelong Shen",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T10:00:05.457Z",
"user": {
"_id": "6454c337a13edf669cd5d8ea",
"avatarUrl": "/avatars/a383a0dda7c2ef6a0d6c3c64651f42ff.svg",
"fullname": "Yelong Shen",
"isPro": false,
"type": "user",
"user": "uuu6"
}
},
{
"_id": "67c67d0dfe135a5f482599f1",
"hidden": false,
"name": "Saksham Singhal",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T09:59:03.188Z",
"user": {
"_id": "62743aec8cb70eed79073bc0",
"avatarUrl": "/avatars/3c8b9a91d898f616265f823ab7d432df.svg",
"fullname": "Saksham Singhal",
"isPro": false,
"type": "user",
"user": "sakshamsinghal"
}
},
{
"_id": "67c67d0dfe135a5f482599f2",
"hidden": false,
"name": "Subhojit Som",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T09:59:47.241Z",
"user": {
"_id": "678bc6b432ee4968eca9bb6a",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/no-auth/wT-Xa3TYem_EzkZZMyDG0.png",
"fullname": "Subhojit Som",
"isPro": false,
"type": "user",
"user": "susom"
}
},
{
"_id": "67c67d0dfe135a5f482599f3",
"hidden": false,
"name": "Xia Song",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c67d0dfe135a5f482599f4",
"hidden": false,
"name": "Tetyana Sych",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T09:58:27.814Z",
"user": {
"_id": "64692ad25d701566394fd8da",
"avatarUrl": "/avatars/d6811ccceb14788bfa0aa10fe4ee1054.svg",
"fullname": "Tetyana Sych",
"isPro": false,
"type": "user",
"user": "tesych"
}
},
{
"_id": "67c67d0dfe135a5f482599f5",
"hidden": false,
"name": "Praneetha Vaddamanu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c67d0dfe135a5f482599f6",
"hidden": false,
"name": "Shuohang Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c67d0dfe135a5f482599f7",
"hidden": false,
"name": "Yiming Wang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-03-04T21:16:18.278Z",
"user": {
"_id": "6786f93b3ad5585f2c2828b1",
"avatarUrl": "/avatars/41411af6f7d547041032a29b34041fe8.svg",
"fullname": "Yiming Wang",
"isPro": false,
"type": "user",
"user": "freewym"
}
},
{
"_id": "67c67d0dfe135a5f482599f8",
"hidden": false,
"name": "Zhenghao Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c67d0dfe135a5f482599f9",
"hidden": false,
"name": "Haibin Wu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c67d0dfe135a5f482599fa",
"hidden": false,
"name": "Haoran Xu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T09:56:04.939Z",
"user": {
"_id": "61384b860317b0a5c10877d3",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1631080954171-61384b860317b0a5c10877d3.jpeg",
"fullname": "Haoran Xu",
"isPro": false,
"type": "user",
"user": "haoranxu"
}
},
{
"_id": "67c67d0dfe135a5f482599fb",
"hidden": false,
"name": "Weijian Xu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T09:58:36.082Z",
"user": {
"_id": "6398f4b32c20654083f36cde",
"avatarUrl": "/avatars/4591f514483890997c55e9e6d60bbb0f.svg",
"fullname": "Weijian Xu",
"isPro": false,
"type": "user",
"user": "xwjabc"
}
},
{
"_id": "67c67d0dfe135a5f482599fc",
"hidden": false,
"name": "Yifan Yang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c67d0dfe135a5f482599fd",
"hidden": false,
"name": "Ziyi Yang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c67d0dfe135a5f482599fe",
"hidden": false,
"name": "Donghan Yu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T09:55:41.798Z",
"user": {
"_id": "65b01b8a29ae836e9ed5af24",
"avatarUrl": "/avatars/a8b78a4b54d3f10858c5925521357001.svg",
"fullname": "Donghan Yu",
"isPro": false,
"type": "user",
"user": "donghanyu"
}
},
{
"_id": "67c67d0dfe135a5f482599ff",
"hidden": false,
"name": "Ishmam Zabir",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c67d0dfe135a5f48259a00",
"hidden": false,
"name": "Jianwen Zhang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T09:55:12.465Z",
"user": {
"_id": "63601ee38fb9c2420ffbe45d",
"avatarUrl": "/avatars/56af091aaff1b42dcfbae84a6ee1e7f7.svg",
"fullname": "Zhang",
"isPro": false,
"type": "user",
"user": "Jianwen"
}
},
{
"_id": "67c67d0dfe135a5f48259a01",
"hidden": false,
"name": "Li Lyna Zhang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T09:55:01.540Z",
"user": {
"_id": "62b0009c72043b05d29492b2",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/62b0009c72043b05d29492b2/NqRkX2YLhlfOLvYysa7dD.png",
"fullname": "Li Lyna Zhang",
"isPro": false,
"type": "user",
"user": "lynazhang"
}
},
{
"_id": "67c67d0dfe135a5f48259a02",
"hidden": false,
"name": "Yunan Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c67d0dfe135a5f48259a03",
"hidden": false,
"name": "Xiren Zhou",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T09:54:26.629Z",
"user": {
"_id": "66ce4c9f864befb39cfc74e9",
"avatarUrl": "/avatars/ef66398466c470fc1d384c6817d9e461.svg",
"fullname": "Xiren Zhou",
"isPro": false,
"type": "user",
"user": "XirenZhou"
}
}
] | 2025-03-03T17:05:52 | Phi-4-Mini Technical Report: Compact yet Powerful Multimodal Language
Models via Mixture-of-LoRAs | We introduce Phi-4-Mini and Phi-4-Multimodal, compact yet highly capable
language and multimodal models. Phi-4-Mini is a 3.8-billion-parameter language
model trained on high-quality web and synthetic data, significantly
outperforming recent open-source models of similar size and matching the
performance of models twice its size on math and coding tasks requiring complex
reasoning. This achievement is driven by a carefully curated synthetic data
recipe emphasizing high-quality math and coding datasets. Compared to its
predecessor, Phi-3.5-Mini, Phi-4-Mini features an expanded vocabulary size of
200K tokens to better support multilingual applications, as well as group query
attention for more efficient long-sequence generation. Phi-4-Multimodal is a
multimodal model that integrates text, vision, and speech/audio input
modalities into a single model. Its novel modality extension approach leverages
LoRA adapters and modality-specific routers to allow multiple inference modes
combining various modalities without interference. For example, it now ranks
first on the OpenASR leaderboard to date, although the LoRA component of the
speech/audio modality has just 460 million parameters. Phi-4-Multimodal
supports scenarios involving (vision + language), (vision + speech), and
(speech/audio) inputs, outperforming larger vision-language and speech-language
models on a wide range of tasks. Additionally, we experiment with further training
Phi-4-Mini to enhance its reasoning capabilities. Despite its compact
3.8-billion-parameter size, this experimental version achieves reasoning
performance on par with or surpassing significantly larger models, including
DeepSeek-R1-Distill-Qwen-7B and DeepSeek-R1-Distill-Llama-8B. | 42 | 67c67d0efe135a5f48259a38 | https://huggingface.co/microsoft/Phi-4-multimodal-instruct | null |
|
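A schematic of the mixture-of-LoRAs design described above: a frozen base projection plus per-modality low-rank adapters selected by a modality tag, so modalities can be added without interfering with the base language model. The rank, routing rule, and module layout are illustrative assumptions.

```python
import torch.nn as nn

class MixtureOfLoRAs(nn.Module):
    """Frozen base linear layer with per-modality LoRA deltas (schematic)."""

    def __init__(self, base: nn.Linear, rank: int = 8,
                 modalities=("vision", "speech")):
        super().__init__()
        self.base = base.requires_grad_(False)  # base LM weights stay frozen
        self.adapters = nn.ModuleDict({
            m: nn.Sequential(
                nn.Linear(base.in_features, rank, bias=False),   # LoRA "A"
                nn.Linear(rank, base.out_features, bias=False),  # LoRA "B"
            )
            for m in modalities
        })

    def forward(self, x, modality: str = "text"):
        out = self.base(x)
        if modality in self.adapters:  # text-only inputs use the base weights alone
            out = out + self.adapters[modality](x)
        return out
```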
2025-03-03T22:35:45.299000 | DuoDecoding: Hardware-aware Heterogeneous Speculative Decoding with Dynamic Multi-Sequence Drafting | 1 | {
"_id": "6485d5b300c9cfe5c2470c81",
"avatarUrl": "/avatars/c29aa81d2add795e8448b99274a04b83.svg",
"followerCount": 3,
"fullname": "Kai",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "KaiLv",
"type": "user"
} | true | null | 2503.00784 | [
{
"_id": "67c673bcf47209364f0cec96",
"hidden": false,
"name": "Kai Lv",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T10:14:11.523Z",
"user": {
"_id": "6485d5b300c9cfe5c2470c81",
"avatarUrl": "/avatars/c29aa81d2add795e8448b99274a04b83.svg",
"fullname": "Kai",
"isPro": false,
"type": "user",
"user": "KaiLv"
}
},
{
"_id": "67c673bcf47209364f0cec97",
"hidden": false,
"name": "Honglin Guo",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T10:14:04.672Z",
"user": {
"_id": "638ef0b0c67af472d31674a6",
"avatarUrl": "/avatars/02df97d15a0f46b47f9162221733b121.svg",
"fullname": "Honglin Guo",
"isPro": false,
"type": "user",
"user": "KYLN24"
}
},
{
"_id": "67c673bcf47209364f0cec98",
"hidden": false,
"name": "Qipeng Guo",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T10:13:46.322Z",
"user": {
"_id": "6491cd52b1e5d3444528edb1",
"avatarUrl": "/avatars/a85635d886c7f157b6723dec5c01c030.svg",
"fullname": "Qipeng Guo",
"isPro": false,
"type": "user",
"user": "QipengGuo"
}
},
{
"_id": "67c673bcf47209364f0cec99",
"hidden": false,
"name": "Xipeng Qiu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T10:13:40.885Z",
"user": {
"_id": "61457b8deff2c9fdb4de4988",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1632381702899-61457b8deff2c9fdb4de4988.jpeg",
"fullname": "Xipeng Qiu",
"isPro": false,
"type": "user",
"user": "xpqiu"
}
}
] | 2025-03-02T08:27:48 | DuoDecoding: Hardware-aware Heterogeneous Speculative Decoding with
Dynamic Multi-Sequence Drafting | Large language models (LLMs) exhibit exceptional performance across a wide
range of tasks; however, their token-by-token autoregressive generation process
significantly hinders inference speed. Speculative decoding presents a
promising draft-then-verify framework that reduces generation latency while
maintaining output distribution fidelity. Nevertheless, the draft model
introduces additional computational overhead, becoming a performance bottleneck
and increasing the time to first token (TTFT). Previous approaches to mitigate
draft model overhead have primarily relied on heuristics and generally failed
to match the quality of the draft language models. To address these challenges,
we propose DuoDecoding, a novel approach that strategically deploys the draft
and target models on the CPU and GPU respectively, enabling parallel decoding
while preserving draft quality. Our method incorporates a hardware-aware
optimal draft budget to minimize idle times and employs dynamic multi-sequence
drafting to enhance draft quality. Extensive experiments across seven tasks
show that DuoDecoding achieves up to 2.61x speedup in generation latency, while
reducing TTFT to 83% of that in conventional speculative decoding. The code is
available at https://github.com/KaiLv69/DuoDecoding. | 8 | 67c673bdf47209364f0cecb7 | null | https://github.com/KaiLv69/DuoDecoding |
|
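The DuoDecoding abstract above builds on the standard draft-then-verify loop of speculative decoding, with the draft model relocated to the CPU. As a reference point, here is a minimal sketch of that generic loop under greedy decoding; `draft_next` and `target_next` are hypothetical stand-ins for the small draft model and the large target model, and the paper's hardware scheduling and multi-sequence drafting are not reproduced.

```python
# Minimal sketch of the draft-then-verify loop underlying speculative
# decoding. `draft_next` and `target_next` are hypothetical stand-ins for a
# cheap CPU draft model and a large GPU target model.
from typing import Callable, List

def speculative_step(prefix: List[int],
                     draft_next: Callable[[List[int]], int],
                     target_next: Callable[[List[int]], int],
                     k: int) -> List[int]:
    """Propose k draft tokens, then keep the longest prefix the target agrees with."""
    # 1) Draft phase: the cheap model proposes k tokens autoregressively.
    draft, ctx = [], list(prefix)
    for _ in range(k):
        t = draft_next(ctx)
        draft.append(t)
        ctx.append(t)

    # 2) Verify phase: the target checks each proposal in order; with greedy
    #    decoding this preserves the target's output distribution exactly.
    accepted, ctx = [], list(prefix)
    for t in draft:
        if target_next(ctx) == t:
            accepted.append(t)
            ctx.append(t)
        else:
            break

    # 3) Always emit one target token so progress is guaranteed even when
    #    the very first draft token is rejected.
    accepted.append(target_next(ctx))
    return accepted
```

With greedy decoding this acceptance rule reproduces the target model's output token-for-token; DuoDecoding's contributions (hardware-aware draft budgets, CPU/GPU parallelism, dynamic multi-sequence drafting) slot into the draft and verify phases above.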
2025-03-03T21:22:16.512000 | Predictive Data Selection: The Data That Predicts Is the Data That Teaches | 1 | {
"_id": "641c9662043963b1c0a1df52",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/641c9662043963b1c0a1df52/L1o85EHztv_xP9r6ppljf.jpeg",
"followerCount": 2,
"fullname": "KaShun SHUM",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "ksshumab",
"type": "user"
} | true | null | 2503.00808 | [
{
"_id": "67c66382e5394bda7cbd03f9",
"hidden": false,
"name": "Kashun Shum",
"status": "claimed_verified",
"statusLastChangedAt": "2025-03-04T08:51:25.484Z",
"user": {
"_id": "641c9662043963b1c0a1df52",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/641c9662043963b1c0a1df52/L1o85EHztv_xP9r6ppljf.jpeg",
"fullname": "KaShun SHUM",
"isPro": false,
"type": "user",
"user": "ksshumab"
}
},
{
"_id": "67c66382e5394bda7cbd03fa",
"hidden": false,
"name": "Yuzhen Huang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-03-04T08:51:23.329Z",
"user": {
"_id": "6462def82a83863b97c0611e",
"avatarUrl": "/avatars/c03e9cc7d75b0266fcc56ecb6ee62148.svg",
"fullname": "Yuzhen Huang",
"isPro": false,
"type": "user",
"user": "yuzhen17"
}
},
{
"_id": "67c66382e5394bda7cbd03fb",
"hidden": false,
"name": "Hongjian Zou",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c66382e5394bda7cbd03fc",
"hidden": false,
"name": "Ding Qi",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c66382e5394bda7cbd03fd",
"hidden": false,
"name": "Yixuan Liao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c66382e5394bda7cbd03fe",
"hidden": false,
"name": "Xiaoxin Chen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c66382e5394bda7cbd03ff",
"hidden": false,
"name": "Qian Liu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c66382e5394bda7cbd0400",
"hidden": false,
"name": "Junxian He",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-03-02T09:21:28 | Predictive Data Selection: The Data That Predicts Is the Data That
Teaches | Language model pretraining involves training on extensive corpora, where data
quality plays a pivotal role. In this work, we aim to directly estimate the
contribution of data during pretraining and select pretraining data in an
efficient manner. Specifically, we draw inspiration from recent findings
showing that compression efficiency (i.e., the normalized loss) of diverse
models on certain text correlates strongly with their downstream performance,
when the text domain aligns with the downstream benchmark (Huang et al., 2024).
Building on this observation, we hypothesize that data on which model losses
are predictive of downstream abilities also contribute effectively to learning.
To leverage this insight, we introduce data selection based on data's
Predictive strength (PreSelect), a lightweight and efficient data selection
method that requires training and deploying only a fastText-based scorer.
Through comprehensive experiments with 1B and 3B parameter models, we
demonstrate that models trained on 30B tokens selected with PreSelect surpass
the performance of a vanilla baseline trained on 300B tokens, achieving a 10x
reduction in compute requirements. Furthermore, PreSelect significantly
outperforms other competitive data selection baselines, such as DCLM and
FineWeb-Edu on a scale of 3B models trained on 100B tokens. We open-source our
trained data selection scorer along with the curated datasets at
https://github.com/hkust-nlp/PreSelect. | 45 | 67c66383e5394bda7cbd0428 | null | https://github.com/hkust-nlp/PreSelect |
|
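At deployment time, PreSelect as described above reduces to a binary document filter driven by a fastText classifier. A minimal sketch of that filtering stage follows; the model filename and the `__label__keep` label convention are assumptions, since the abstract only states that a fastText-based scorer is trained and deployed.

```python
# Minimal sketch of scoring and filtering pretraining documents with a
# fastText classifier, in the spirit of PreSelect's deployment stage.
# The model path and the "__label__keep" label name are assumptions.
import fasttext  # pip install fasttext

def select_documents(docs, model_path="preselect_scorer.bin", threshold=0.5):
    model = fasttext.load_model(model_path)
    kept = []
    for doc in docs:
        # fastText expects a single line of text per prediction call.
        labels, probs = model.predict(doc.replace("\n", " "))
        score = probs[0] if labels[0] == "__label__keep" else 1.0 - probs[0]
        if score >= threshold:
            kept.append((score, doc))
    # Highest-scoring documents first, e.g. to fill a fixed token budget.
    kept.sort(key=lambda x: x[0], reverse=True)
    return [doc for _, doc in kept]
```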
2025-03-03T11:25:57.425000 | Multi-Turn Code Generation Through Single-Step Rewards | 2 | {
"_id": "6421d2972143035270db37b9",
"avatarUrl": "/avatars/4fadeafc273d32cf72fe2f12d444c5e8.svg",
"followerCount": 2,
"fullname": "Gonzalo Gonzalez",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "chalo2000",
"type": "user"
} | true | null | 2502.20380 | [
{
"_id": "67c34e3beae05d8f94f800b4",
"hidden": false,
"name": "Arnav Kumar Jain",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c34e3beae05d8f94f800b5",
"hidden": false,
"name": "Gonzalo Gonzalez-Pumariega",
"status": "claimed_verified",
"statusLastChangedAt": "2025-03-02T20:15:14.593Z",
"user": {
"_id": "6421d2972143035270db37b9",
"avatarUrl": "/avatars/4fadeafc273d32cf72fe2f12d444c5e8.svg",
"fullname": "Gonzalo Gonzalez",
"isPro": false,
"type": "user",
"user": "chalo2000"
}
},
{
"_id": "67c34e3beae05d8f94f800b6",
"hidden": false,
"name": "Wayne Chen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c34e3beae05d8f94f800b7",
"hidden": false,
"name": "Alexander M Rush",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c34e3beae05d8f94f800b8",
"hidden": false,
"name": "Wenting Zhao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c34e3beae05d8f94f800b9",
"hidden": false,
"name": "Sanjiban Choudhury",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-27T18:55:05 | Multi-Turn Code Generation Through Single-Step Rewards | We address the problem of code generation from multi-turn execution feedback.
Existing methods either generate code without feedback or use complex,
hierarchical reinforcement learning to optimize multi-turn rewards. We propose
a simple yet scalable approach, μCode, that solves multi-turn code
generation using only single-step rewards. Our key insight is that code
generation is a one-step recoverable MDP, where the correct code can be
recovered from any intermediate code state in a single turn. μCode
iteratively trains both a generator to provide code solutions conditioned on
multi-turn execution feedback and a verifier to score the newly generated code.
Experimental evaluations show that our approach achieves significant
improvements over the state-of-the-art baselines. We provide analysis of the
design choices of the reward models and policy, and show the efficacy of
μCode at utilizing the execution feedback. Our code is available at
https://github.com/portal-cornell/muCode. | 24 | 67c34e3ceae05d8f94f8010e | https://portal-cornell.github.io/muCode/ | https://github.com/portal-cornell/muCode |
|
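The μCode abstract above describes an iterate-and-rerank loop: at each turn a generator proposes candidate programs conditioned on execution feedback, and a learned verifier scores them. The sketch below shows only that control flow; `generate_candidates`, `verifier_score`, and `run_tests` are hypothetical interfaces, not the paper's actual code.

```python
# Minimal sketch of a multi-turn generate-then-rerank loop in the spirit of
# muCode: each turn, sample candidate programs, keep the verifier's favorite,
# and feed execution feedback back into the next prompt. All three callables
# are hypothetical stand-ins.
def solve(problem, generate_candidates, verifier_score, run_tests,
          max_turns=4, n_candidates=8):
    feedback = ""
    for _ in range(max_turns):
        # Generator conditions on the problem plus accumulated feedback.
        candidates = generate_candidates(problem, feedback, n=n_candidates)
        # Verifier reranks; a single-step score suffices because a correct
        # program is reachable from any intermediate state in one turn.
        best = max(candidates, key=verifier_score)
        passed, feedback = run_tests(best)
        if passed:
            return best
    return best  # best effort after the turn budget is exhausted
```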
2025-03-03T10:56:33.810000 | Preference Learning Unlocks LLMs' Psycho-Counseling Skills | 2 | {
"_id": "650857fef3060ea840ffbbfe",
"avatarUrl": "/avatars/3a339936021c040f19a21838ae1382c4.svg",
"followerCount": 1,
"fullname": "Mian Zhang",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "billmianz",
"type": "user"
} | true | null | 2502.19731 | [
{
"_id": "67c36b35e12b50f698e7db1d",
"hidden": false,
"name": "Mian Zhang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-03-04T08:51:31.238Z",
"user": {
"_id": "650857fef3060ea840ffbbfe",
"avatarUrl": "/avatars/3a339936021c040f19a21838ae1382c4.svg",
"fullname": "Mian Zhang",
"isPro": false,
"type": "user",
"user": "billmianz"
}
},
{
"_id": "67c36b35e12b50f698e7db1e",
"hidden": false,
"name": "Shaun M. Eack",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c36b35e12b50f698e7db1f",
"hidden": false,
"name": "Zhiyu Zoey Chen",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-27T03:50:25 | Preference Learning Unlocks LLMs' Psycho-Counseling Skills | Applying large language models (LLMs) to assist in psycho-counseling is an
emerging and meaningful approach, driven by the significant gap between patient
needs and the availability of mental health support. However, current LLMs
struggle to consistently provide effective responses to client speeches,
largely due to the lack of supervision from high-quality real psycho-counseling
data, whose content is typically inaccessible due to client privacy concerns.
Furthermore, the quality of therapists' responses in available sessions can
vary significantly based on their professional training and experience.
Assessing the quality of therapists' responses remains an open challenge. In
this work, we address these challenges by first proposing a set of professional
and comprehensive principles to evaluate therapists' responses to client
speeches. Using these principles, we create a preference dataset,
PsychoCounsel-Preference, which contains 36k high-quality preference comparison
pairs. This dataset aligns with the preferences of professional
psychotherapists, providing a robust foundation for evaluating and improving
LLMs in psycho-counseling. Experiments on reward modeling and preference
learning demonstrate that PsychoCounsel-Preference is an excellent resource for
LLMs to acquire essential skills for responding to clients in a counseling
session. Our best-aligned model, PsychoCounsel-Llama3-8B, achieves an
impressive win rate of 87% against GPT-4o. We release PsychoCounsel-Preference,
PsychoCounsel-Llama3-8B, and the reward model PsychoCounsel-Llama3-8B-Reward to
facilitate the research of psycho-counseling with LLMs at:
https://hf.co/Psychotherapy-LLM. | 6 | 67c36b36e12b50f698e7db51 | null | null |
|
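The PsychoCounsel work above follows the standard preference-learning recipe: a reward model is trained on chosen/rejected response pairs. For reference, the snippet below shows the usual Bradley-Terry pairwise loss on reward-model scores; it is a generic sketch, not the paper's training code, and the scalar rewards are assumed to come from a reward model with a scalar head.

```python
# Generic Bradley-Terry loss for reward modeling on preference pairs
# (chosen vs. rejected responses), as used in RLHF-style pipelines.
# r_chosen / r_rejected would come from a scalar-head reward model.
import torch
import torch.nn.functional as F

def pairwise_preference_loss(r_chosen: torch.Tensor,
                             r_rejected: torch.Tensor) -> torch.Tensor:
    # Maximize the log-probability that the chosen response outranks the
    # rejected one: -log sigmoid(r_chosen - r_rejected).
    return -F.logsigmoid(r_chosen - r_rejected).mean()

# Toy usage with random scores for a batch of 4 preference pairs.
loss = pairwise_preference_loss(torch.randn(4), torch.randn(4))
print(loss.item())
```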
2025-03-03T10:26:31.746000 | EgoNormia: Benchmarking Physical Social Norm Understanding | 2 | {
"_id": "61aa376688c20eebf1e8deb3",
"avatarUrl": "/avatars/7c11dcb232c73547d7d87834be287822.svg",
"followerCount": 7,
"fullname": "Hao Zhu",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "ProKil",
"type": "user"
} | true | null | 2502.20490 | [
{
"_id": "67c5c853e7c5cfb1d2b52858",
"hidden": false,
"name": "MohammadHossein Rezaei",
"status": "extracted_confirmed",
"statusLastChangedAt": "2025-03-03T16:56:51.354Z",
"user": {
"_id": "63f6ba02a67b8acfa50407bb",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63f6ba02a67b8acfa50407bb/ueUb01p1mhuRNrkyfEHtc.jpeg",
"fullname": "MohammadHossein Rezaei",
"isPro": false,
"type": "user",
"user": "mhr2004"
}
},
{
"_id": "67c5c853e7c5cfb1d2b52859",
"hidden": false,
"name": "Yicheng Fu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c5c853e7c5cfb1d2b5285a",
"hidden": false,
"name": "Phil Cuvin",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c5c853e7c5cfb1d2b5285b",
"hidden": false,
"name": "Caleb Ziems",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c5c853e7c5cfb1d2b5285c",
"hidden": false,
"name": "Yanzhe Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c5c853e7c5cfb1d2b5285d",
"hidden": false,
"name": "Hao Zhu",
"status": "claimed_verified",
"statusLastChangedAt": "2025-03-03T16:07:08.219Z",
"user": {
"_id": "61aa376688c20eebf1e8deb3",
"avatarUrl": "/avatars/7c11dcb232c73547d7d87834be287822.svg",
"fullname": "Hao Zhu",
"isPro": false,
"type": "user",
"user": "ProKil"
}
},
{
"_id": "67c5c853e7c5cfb1d2b5285e",
"hidden": false,
"name": "Diyi Yang",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-27T19:54:16 | EgoNormia: Benchmarking Physical Social Norm Understanding | Human activity is moderated by norms. When performing actions in the real
world, humans not only follow norms, but also consider the trade-off between
different norms. However, machines are often trained without explicit
supervision on norm understanding and reasoning, especially when the norms are
grounded in a physical and social context. To improve and evaluate the
normative reasoning capability of vision-language models (VLMs), we present
EgoNormia |ε|, consisting of 1,853 ego-centric videos of human
interactions, each of which has two related questions evaluating both the
prediction and justification of normative actions. The normative actions
encompass seven categories: safety, privacy, proxemics, politeness,
cooperation, coordination/proactivity, and communication/legibility. To compile
this dataset at scale, we propose a novel pipeline leveraging video sampling,
automatic answer generation, filtering, and human validation. Our work
demonstrates that current state-of-the-art vision-language models lack robust
norm understanding, scoring a maximum of 45% on EgoNormia (versus a human
benchmark of 92%). Our analysis of performance in each dimension highlights the
significant risks of safety, privacy, and the lack of collaboration and
communication capability when applied to real-world agents. We additionally
show that through a retrieval-based generation method, it is possible to use
EgoNormia to enhance normative reasoning in VLMs. | 4 | 67c5c857e7c5cfb1d2b52994 | https://egonormia.org | https://github.com/open-social-world/egonormia |
|
2025-03-03T09:49:10.381000 | How far can we go with ImageNet for Text-to-Image generation? | 2 | {
"_id": "630652803aed65d34e98eee3",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/630652803aed65d34e98eee3/XG_PuVFA6ziGQZd3UUZSF.jpeg",
"followerCount": 3,
"fullname": "Nicolas Dufour",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "nicolas-dufour",
"type": "user"
} | true | [
"https://cdn-uploads.huggingface.co/production/uploads/630652803aed65d34e98eee3/8GIi2e6959v5dl4XUVqkc.png"
] | 2502.21318 | [
{
"_id": "67c5c13ca10c7059c3d3d4c9",
"hidden": false,
"name": "L. Degeorge",
"status": "claimed_verified",
"statusLastChangedAt": "2025-03-03T16:07:10.195Z",
"user": {
"_id": "63bb08b07fd5e883e13efd32",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63bb08b07fd5e883e13efd32/aKAR8alYsYteEQImBrWO7.jpeg",
"fullname": "Lucas Degeorge",
"isPro": false,
"type": "user",
"user": "Lucasdegeorge"
}
},
{
"_id": "67c5c13ca10c7059c3d3d4ca",
"hidden": false,
"name": "A. Ghosh",
"status": "extracted_confirmed",
"statusLastChangedAt": "2025-03-03T18:07:11.151Z",
"user": {
"_id": "66f971c83d94062a4aa808ef",
"avatarUrl": "/avatars/f1d6c4d85d20fd4a614278ecd784c772.svg",
"fullname": "Arijit Ghosh",
"isPro": false,
"type": "user",
"user": "arijitghosh"
}
},
{
"_id": "67c5c13ca10c7059c3d3d4cb",
"hidden": false,
"name": "N. Dufour",
"status": "claimed_verified",
"statusLastChangedAt": "2025-03-03T16:07:14.366Z",
"user": {
"_id": "630652803aed65d34e98eee3",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/630652803aed65d34e98eee3/XG_PuVFA6ziGQZd3UUZSF.jpeg",
"fullname": "Nicolas Dufour",
"isPro": false,
"type": "user",
"user": "nicolas-dufour"
}
},
{
"_id": "67c5c13ca10c7059c3d3d4cc",
"hidden": false,
"name": "D. Picard",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c5c13ca10c7059c3d3d4cd",
"hidden": false,
"name": "V. Kalogeiton",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-28T18:59:42 | How far can we go with ImageNet for Text-to-Image generation? | Recent text-to-image (T2I) generation models have achieved remarkable results
by training on billion-scale datasets, following a "bigger is better" paradigm
that prioritizes data quantity over quality. We challenge this established
paradigm by demonstrating that strategic data augmentation of small,
well-curated datasets can match or outperform models trained on massive
web-scraped collections. Using only ImageNet enhanced with well-designed text
and image augmentations, we achieve a +2 overall score over SD-XL on GenEval
and +5 on DPGBench while using just 1/10th the parameters and 1/1000th the
training images. Our results suggest that strategic data augmentation, rather
than massive datasets, could offer a more sustainable path forward for T2I
generation. | 22 | 67c5c145a10c7059c3d3d693 | https://lucasdegeorge.github.io/projects/t2i_imagenet/ | https://github.com/lucasdegeorge/T2I-ImageNet |
|
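The claim above rests on strategic text and image augmentation of a small, curated dataset. The snippet below is only an illustrative torchvision-style image pipeline paired with a trivial caption templater; the paper's actual augmentations are more elaborate, so every specific choice here is an assumption.

```python
# Illustrative augmentation for (image, caption) training pairs. The
# specific transforms and caption templates are assumptions, not the
# paper's recipe.
import random
from torchvision import transforms

image_aug = transforms.Compose([
    transforms.RandomResizedCrop(256, scale=(0.8, 1.0)),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.ToTensor(),
])

CAPTION_TEMPLATES = [
    "a photo of a {}",
    "a close-up photograph of a {}",
    "an image showing a {} in its environment",
]

def augment_example(pil_image, class_name):
    # Pair a randomly augmented image with a randomly templated caption,
    # turning one labeled ImageNet example into many distinct pairs.
    caption = random.choice(CAPTION_TEMPLATES).format(class_name)
    return image_aug(pil_image), caption
```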
2025-03-03T09:44:46.734000 | DexGraspVLA: A Vision-Language-Action Framework Towards General Dexterous Grasping | 2 | {
"_id": "655d9f43b5da99edaf3f2f81",
"avatarUrl": "/avatars/c7225b3ed54d099a4fd87682427fb5bf.svg",
"followerCount": 2,
"fullname": "Yifan Zhong",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "Yifan-Zhong",
"type": "user"
} | false | null | 2502.20900 | [
{
"_id": "67c5beea1b2c18e03a3d5218",
"hidden": false,
"name": "Yifan Zhong",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c5beea1b2c18e03a3d5219",
"hidden": false,
"name": "Xuchuan Huang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c5beea1b2c18e03a3d521a",
"hidden": false,
"name": "Ruochong Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c5beea1b2c18e03a3d521b",
"hidden": false,
"name": "Ceyao Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c5beea1b2c18e03a3d521c",
"hidden": false,
"name": "Yitao Liang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c5beea1b2c18e03a3d521d",
"hidden": false,
"name": "Yaodong Yang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c5beea1b2c18e03a3d521e",
"hidden": false,
"name": "Yuanpei Chen",
"status": "extracted_pending",
"statusLastChangedAt": "2025-03-03T14:38:37.342Z",
"user": {
"_id": "6393a8af84c565d2c3419b7c",
"avatarUrl": "/avatars/f2a237a58dd0a25ef5c1a98e60acbb5c.svg",
"fullname": "chen",
"isPro": false,
"type": "user",
"user": "yuanpei"
}
}
] | 2025-02-28T09:57:20 | DexGraspVLA: A Vision-Language-Action Framework Towards General
Dexterous Grasping | Dexterous grasping remains a fundamental yet challenging problem in robotics.
A general-purpose robot must be capable of grasping diverse objects in
arbitrary scenarios. However, existing research typically relies on specific
assumptions, such as single-object settings or limited environments, leading to
constrained generalization. Our solution is DexGraspVLA, a hierarchical
framework that utilizes a pre-trained Vision-Language model as the high-level
task planner and learns a diffusion-based policy as the low-level Action
controller. The key insight lies in iteratively transforming diverse language
and visual inputs into domain-invariant representations, where imitation
learning can be effectively applied due to the alleviation of domain shift.
Thus, it enables robust generalization across a wide range of real-world
scenarios. Notably, our method achieves a 90+% success rate under thousands of
unseen object, lighting, and background combinations in a "zero-shot"
environment. Empirical analysis further confirms the consistency of internal
model behavior across environmental variations, thereby validating our design
and explaining its generalization performance. We hope our work can be a step
forward in achieving general dexterous grasping. Our demo and code can be found
at https://dexgraspvla.github.io/. | 6 | 67c5beed1b2c18e03a3d52c0 | null | null |
|
2025-03-03T09:33:49.658000 | TeleRAG: Efficient Retrieval-Augmented Generation Inference with Lookahead Retrieval | 2 | {
"_id": "6304ac1a412a1b9d381ca378",
"avatarUrl": "/avatars/f4724eb5afc2a3b0e61e6da7bfa7be27.svg",
"followerCount": null,
"fullname": "Keisuke Kamahori",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "kamahori",
"type": "user"
} | true | [
"https://cdn-uploads.huggingface.co/production/uploads/6304ac1a412a1b9d381ca378/BYM8EdFZVDrDbfX8LKVC2.png"
] | 2502.20969 | [
{
"_id": "67c5bc8babe08983d98a4248",
"hidden": false,
"name": "Chien-Yu Lin",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c5bc8babe08983d98a4249",
"hidden": false,
"name": "Keisuke Kamahori",
"status": "claimed_verified",
"statusLastChangedAt": "2025-03-03T16:07:17.078Z",
"user": {
"_id": "6304ac1a412a1b9d381ca378",
"avatarUrl": "/avatars/f4724eb5afc2a3b0e61e6da7bfa7be27.svg",
"fullname": "Keisuke Kamahori",
"isPro": false,
"type": "user",
"user": "kamahori"
}
},
{
"_id": "67c5bc8babe08983d98a424a",
"hidden": false,
"name": "Yiyu Liu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c5bc8babe08983d98a424b",
"hidden": false,
"name": "Xiaoxiang Shi",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c5bc8babe08983d98a424c",
"hidden": false,
"name": "Madhav Kashyap",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c5bc8babe08983d98a424d",
"hidden": false,
"name": "Yile Gu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c5bc8babe08983d98a424e",
"hidden": false,
"name": "Rulin Shao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c5bc8babe08983d98a424f",
"hidden": false,
"name": "Zihao Ye",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c5bc8babe08983d98a4250",
"hidden": false,
"name": "Kan Zhu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c5bc8babe08983d98a4251",
"hidden": false,
"name": "Stephanie Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c5bc8babe08983d98a4252",
"hidden": false,
"name": "Arvind Krishnamurthy",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c5bc8babe08983d98a4253",
"hidden": false,
"name": "Rohan Kadekodi",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c5bc8babe08983d98a4254",
"hidden": false,
"name": "Luis Ceze",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c5bc8babe08983d98a4255",
"hidden": false,
"name": "Baris Kasikci",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-28T11:32:22 | TeleRAG: Efficient Retrieval-Augmented Generation Inference with
Lookahead Retrieval | Retrieval-augmented generation (RAG) extends large language models (LLMs)
with external data sources to enhance factual correctness and domain coverage.
Modern RAG pipelines rely on large datastores, leading to system challenges in
latency-sensitive deployments, especially when limited GPU memory is available.
To address these challenges, we propose TeleRAG, an efficient inference system
that reduces RAG latency with minimal GPU memory requirements. The core
innovation of TeleRAG is lookahead retrieval, a prefetching mechanism that
anticipates required data and transfers it from CPU to GPU in parallel with LLM
generation. By leveraging the modularity of RAG pipelines, the inverted file
index (IVF) search algorithm and similarities between queries, TeleRAG
optimally overlaps data movement and computation. Experimental results show
that TeleRAG reduces end-to-end RAG inference latency by up to 1.72x on average
compared to state-of-the-art systems, enabling faster, more memory-efficient
deployments of advanced RAG applications. | 7 | 67c5bc8cabe08983d98a426c | null | null |
|
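TeleRAG's lookahead retrieval amounts to predicting which IVF clusters the upcoming query will touch and copying them from CPU to GPU while the LLM is still generating. The sketch below shows that overlap with a background thread; the cluster-prediction heuristic (reusing the nearest clusters of a draft query) and all function names are assumptions, not the system's actual interfaces.

```python
# Minimal sketch of lookahead prefetching: stage the IVF clusters that a
# draft (partially generated) query points to on the GPU while the LLM is
# still writing the final query. `nearest_clusters`, `final_query_fn`, and
# `clusters_cpu` are hypothetical interfaces.
import threading
import torch

def lookahead_retrieve(draft_emb, final_query_fn, nearest_clusters,
                       clusters_cpu, nprobe=8, k=5):
    gpu_cache = {}

    def prefetch():
        # Overlaps with LLM generation: guess clusters from the draft query.
        for cid in nearest_clusters(draft_emb, nprobe):
            gpu_cache[cid] = clusters_cpu[cid].to("cuda", non_blocking=True)

    worker = threading.Thread(target=prefetch)
    worker.start()
    final_emb = final_query_fn()  # blocks until the LLM emits the real query
    worker.join()

    hits = []
    for cid in nearest_clusters(final_emb, nprobe):
        # Use the prefetched copy when the prediction was right; otherwise
        # fall back to an on-demand CPU-to-GPU transfer (a cache miss).
        vecs = gpu_cache[cid] if cid in gpu_cache else clusters_cpu[cid].to("cuda")
        scores = vecs @ final_emb.to(vecs.device)  # inner-product search
        hits.append((cid, scores.topk(min(k, scores.numel()))))
    return hits
```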
2025-03-03T08:13:06.912000 | MIGE: A Unified Framework for Multimodal Instruction-Based Image Generation and Editing | 2 | {
"_id": "63468720dd6d90d82ccf3450",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63468720dd6d90d82ccf3450/tVBFlmZNz8FRMkOrDaDID.jpeg",
"followerCount": 32,
"fullname": "YSH",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "BestWishYsh",
"type": "user"
} | false | null | 2502.21291 | [
{
"_id": "67c5aad632a7208c9ae1d020",
"hidden": false,
"name": "Xueyun Tian",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c5aad632a7208c9ae1d021",
"hidden": false,
"name": "Wei Li",
"status": "extracted_pending",
"statusLastChangedAt": "2025-03-03T13:12:57.839Z",
"user": {
"_id": "63044e025c70c21d0eaf08bc",
"avatarUrl": "/avatars/a2d39973d7fbcbe9d4cce5648b3149c2.svg",
"fullname": "Wei Li",
"isPro": false,
"type": "user",
"user": "Wiley085"
}
},
{
"_id": "67c5aad632a7208c9ae1d022",
"hidden": false,
"name": "Bingbing Xu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c5aad632a7208c9ae1d023",
"hidden": false,
"name": "Yige Yuan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c5aad632a7208c9ae1d024",
"hidden": false,
"name": "Yuanzhuo Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c5aad632a7208c9ae1d025",
"hidden": false,
"name": "Huawei Shen",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-28T18:21:08 | MIGE: A Unified Framework for Multimodal Instruction-Based Image
Generation and Editing | Despite significant progress in diffusion-based image generation,
subject-driven generation and instruction-based editing remain challenging.
Existing methods typically treat them separately, struggling with limited
high-quality data and poor generalization. However, both tasks require
capturing complex visual variations while maintaining consistency between
inputs and outputs. Therefore, we propose MIGE, a unified framework that
standardizes task representations using multimodal instructions. It treats
subject-driven generation as creation on a blank canvas and instruction-based
editing as modification of an existing image, establishing a shared
input-output formulation. MIGE introduces a novel multimodal encoder that maps
free-form multimodal instructions into a unified vision-language space,
integrating visual and semantic features through a feature fusion
mechanism. This unification enables joint training of both tasks, providing two
key advantages: (1) Cross-Task Enhancement: By leveraging shared visual and
semantic representations, joint training improves instruction adherence and
visual consistency in both subject-driven generation and instruction-based
editing. (2) Generalization: Learning in a unified format facilitates
cross-task knowledge transfer, enabling MIGE to generalize to novel
compositional tasks, including instruction-based subject-driven editing.
Experiments show that MIGE excels in both subject-driven generation and
instruction-based editing while setting a state-of-the-art in the new task of
instruction-based subject-driven editing. Code and model are publicly
available at https://github.com/Eureka-Maggie/MIGE. | 4 | 67c5aad932a7208c9ae1d19a | null | https://github.com/Eureka-Maggie/MIGE |
|
2025-03-03T07:33:14.717000 | LettuceDetect: A Hallucination Detection Framework for RAG Applications | 2 | {
"_id": "646264832538819c729e32ba",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/646264832538819c729e32ba/syc-UpPQyR3Nbf-gYndc4.jpeg",
"followerCount": 1,
"fullname": "Adam Kovacs",
"isHf": false,
"isMod": false,
"isPro": true,
"name": "adaamko",
"type": "user"
} | true | null | 2502.17125 | [
{
"_id": "67c0536530abbab5c723f2e0",
"hidden": false,
"name": "Ádám Kovács",
"status": "claimed_verified",
"statusLastChangedAt": "2025-03-02T20:18:13.294Z",
"user": {
"_id": "646264832538819c729e32ba",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/646264832538819c729e32ba/syc-UpPQyR3Nbf-gYndc4.jpeg",
"fullname": "Adam Kovacs",
"isPro": true,
"type": "user",
"user": "adaamko"
}
},
{
"_id": "67c0536530abbab5c723f2e1",
"hidden": false,
"name": "Gábor Recski",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-24T13:11:47 | LettuceDetect: A Hallucination Detection Framework for RAG Applications | Retrieval Augmented Generation (RAG) systems remain vulnerable to
hallucinated answers despite incorporating external knowledge sources. We
present LettuceDetect, a framework that addresses two critical limitations in
existing hallucination detection methods: (1) the context window constraints of
traditional encoder-based methods, and (2) the computational inefficiency of
LLM-based approaches. Building on ModernBERT's extended context capabilities
(up to 8k tokens) and trained on the RAGTruth benchmark dataset, our approach
outperforms all previous encoder-based models and most prompt-based models,
while being approximately 30 times smaller than the best models. LettuceDetect
is a token-classification model that processes context-question-answer triples,
allowing for the identification of unsupported claims at the token level.
Evaluations on the RAGTruth corpus demonstrate an F1 score of 79.22% for
example-level detection, which is a 14.8% improvement over Luna, the previous
state-of-the-art encoder-based architecture. Additionally, the system can
process 30 to 60 examples per second on a single GPU, making it more practical
for real-world RAG applications. | 5 | 67c0536630abbab5c723f31e | null | https://github.com/KRLabsOrg/LettuceDetect |
|
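LettuceDetect is a token-classification model over context-question-answer triples, so inference follows the generic Hugging Face `transformers` token-classification pattern sketched below. The checkpoint name is a placeholder and the label convention (id 1 = unsupported token) is an assumption; the released models live under the KRLabsOrg organization.

```python
# Generic token-classification inference for hallucination detection over a
# (context, question, answer) triple. The checkpoint name and the label
# convention (1 = hallucinated/unsupported token) are assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

MODEL = "KRLabsOrg/lettucedetect-base"  # placeholder checkpoint name

def flag_unsupported_tokens(context, question, answer):
    tok = AutoTokenizer.from_pretrained(MODEL)
    model = AutoModelForTokenClassification.from_pretrained(MODEL)
    text = f"context: {context} question: {question} answer: {answer}"
    inputs = tok(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits  # (1, seq_len, num_labels)
    preds = logits.argmax(-1)[0]
    tokens = tok.convert_ids_to_tokens(inputs["input_ids"][0])
    # Return the tokens predicted as unsupported (assumed label id 1).
    return [t for t, p in zip(tokens, preds) if p.item() == 1]
```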
2025-03-03T07:04:47.515000 | Optimal Brain Apoptosis | 2 | {
"_id": "668e62f6514c46e257387f6b",
"avatarUrl": "/avatars/601b111141141cb2ea710b3166e62cd0.svg",
"followerCount": null,
"fullname": "Mingyuan Sun",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "mingyuansun",
"type": "user"
} | true | null | 2502.17941 | [
{
"_id": "67c59a7e6eb050aa82406452",
"hidden": false,
"name": "Mingyuan Sun",
"status": "claimed_verified",
"statusLastChangedAt": "2025-03-03T16:07:21.192Z",
"user": {
"_id": "668e62f6514c46e257387f6b",
"avatarUrl": "/avatars/601b111141141cb2ea710b3166e62cd0.svg",
"fullname": "Mingyuan Sun",
"isPro": false,
"type": "user",
"user": "mingyuansun"
}
},
{
"_id": "67c59a7e6eb050aa82406453",
"hidden": false,
"name": "Zheng Fang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c59a7e6eb050aa82406454",
"hidden": false,
"name": "Jiaxu Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c59a7e6eb050aa82406455",
"hidden": false,
"name": "Junjie Jiang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c59a7e6eb050aa82406456",
"hidden": false,
"name": "Delei Kong",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c59a7e6eb050aa82406457",
"hidden": false,
"name": "Chenming Hu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c59a7e6eb050aa82406458",
"hidden": false,
"name": "Yuetong Fang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c59a7e6eb050aa82406459",
"hidden": false,
"name": "Renjing Xu",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-25T08:03:04 | Optimal Brain Apoptosis | The increasing complexity and parameter count of Convolutional Neural
Networks (CNNs) and Transformers pose challenges in terms of computational
efficiency and resource demands. Pruning has been identified as an effective
strategy to address these challenges by removing redundant elements such as
neurons, channels, or connections, thereby enhancing computational efficiency
without heavily compromising performance. This paper builds on the foundational
work of Optimal Brain Damage (OBD) by advancing the methodology of parameter
importance estimation using the Hessian matrix. Unlike previous approaches that
rely on approximations, we introduce Optimal Brain Apoptosis (OBA), a novel
pruning method that calculates the Hessian-vector product value directly for
each parameter. By decomposing the Hessian matrix across network layers and
identifying conditions under which inter-layer Hessian submatrices are
non-zero, we propose a highly efficient technique for computing the
second-order Taylor expansion of parameters. This approach allows for a more
precise pruning process, particularly in the context of CNNs and Transformers,
as validated in our experiments including VGG19, ResNet32, ResNet50, and
ViT-B/16 on CIFAR10, CIFAR100, and ImageNet datasets. Our code is available at
https://github.com/NEU-REAL/OBA. | 7 | 67c59a7f6eb050aa824064b9 | null | https://github.com/NEU-REAL/OBA |
|
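OBA's core quantity is an exact Hessian-vector product per parameter rather than a Hessian approximation. Independent of the paper's efficient layer-wise decomposition (which is not reproduced here), the double-backward trick below is the standard way to get Hv in PyTorch, and the second-order Taylor term is the usual OBD-style importance attribution.

```python
# Standard double-backward Hessian-vector product in PyTorch, plus an
# OBD-style second-order Taylor importance score. A generic sketch of the
# quantities OBA builds on, not the paper's layer-wise algorithm.
import torch

def hvp(loss, params, vec):
    """Return (flat gradient, H @ vec); `vec` must be a flat, detached tensor."""
    grads = torch.autograd.grad(loss, params, create_graph=True)
    flat_g = torch.cat([g.reshape(-1) for g in grads])
    hv = torch.autograd.grad(flat_g @ vec, params, retain_graph=True)
    return flat_g.detach(), torch.cat([h.reshape(-1) for h in hv])

def taylor_importance(loss, params):
    """Per-entry attribution of the Taylor loss change -g*w + 0.5*w*(Hw)
    when weights are zeroed (perturbation delta = -w)."""
    w = torch.cat([p.detach().reshape(-1) for p in params])
    g, hw = hvp(loss, params, w)
    return -g * w + 0.5 * w * hw

# Toy usage: importance scores for a tiny linear model on random data.
torch.manual_seed(0)
lin = torch.nn.Linear(4, 1)
x, y = torch.randn(16, 4), torch.randn(16, 1)
loss = torch.nn.functional.mse_loss(lin(x), y)
scores = taylor_importance(loss, list(lin.parameters()))
```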
2025-03-03T04:21:42.563000 | Tell me why: Visual foundation models as self-explainable classifiers | 2 | {
"_id": "66588b6fd22637bfab498709",
"avatarUrl": "/avatars/9007f0d3b078bd6193912a5359107f24.svg",
"followerCount": null,
"fullname": "Hugues Turbé",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "hturbe",
"type": "user"
} | true | [
"https://cdn-uploads.huggingface.co/production/uploads/66588b6fd22637bfab498709/4VG_eDtZKZ4kj1AdG_P14.png"
] | 2502.19577 | [
{
"_id": "67c42356054ae6d1c760b643",
"hidden": false,
"name": "Hugues Turbé",
"status": "claimed_verified",
"statusLastChangedAt": "2025-03-02T20:15:04.391Z",
"user": {
"_id": "66588b6fd22637bfab498709",
"avatarUrl": "/avatars/9007f0d3b078bd6193912a5359107f24.svg",
"fullname": "Hugues Turbé",
"isPro": false,
"type": "user",
"user": "hturbe"
}
},
{
"_id": "67c42356054ae6d1c760b644",
"hidden": false,
"name": "Mina Bjelogrlic",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c42356054ae6d1c760b645",
"hidden": false,
"name": "Gianmarco Mengaldo",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c42356054ae6d1c760b646",
"hidden": false,
"name": "Christian Lovis",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-26T21:40:30 | Tell me why: Visual foundation models as self-explainable classifiers | Visual foundation models (VFMs) have become increasingly popular due to their
state-of-the-art performance. However, interpretability remains crucial for
critical applications. In this sense, self-explainable models (SEM) aim to
provide interpretable classifiers that decompose predictions into a weighted
sum of interpretable concepts. Despite their promise, recent studies have shown
that these explanations often lack faithfulness. In this work, we combine VFMs
with a novel prototypical architecture and specialized training objectives. By
training only a lightweight head (approximately 1M parameters) on top of frozen
VFMs, our approach (ProtoFM) offers an efficient and interpretable solution.
Evaluations demonstrate that our approach achieves competitive classification
performance while outperforming existing models across a range of
interpretability metrics derived from the literature. Code is available at
https://github.com/hturbe/proto-fm. | 9 | 67c4235c054ae6d1c760b806 | null | null |
|
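ProtoFM's design, a roughly 1M-parameter interpretable head on frozen VFM features, has a simple skeleton: learnable concept prototypes, cosine similarities to patch features, and a linear classifier over pooled concept activations. The module below is a hedged reconstruction of that pattern from the abstract alone; all dimensions and the max-pooling choice are assumptions.

```python
# Hedged sketch of a prototypical classification head on frozen
# vision-foundation-model patch features: predictions decompose into a
# weighted sum of concept (prototype) activations. Sizes are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PrototypeHead(nn.Module):
    def __init__(self, feat_dim=768, n_prototypes=64, n_classes=10):
        super().__init__()
        self.prototypes = nn.Parameter(torch.randn(n_prototypes, feat_dim))
        # A linear layer expresses each class as a weighted sum of concepts,
        # which is what makes the prediction self-explainable.
        self.classifier = nn.Linear(n_prototypes, n_classes, bias=False)

    def forward(self, patch_feats):  # (B, num_patches, feat_dim), frozen
        sims = F.normalize(patch_feats, dim=-1) @ F.normalize(
            self.prototypes, dim=-1).t()       # (B, patches, n_prototypes)
        concept_acts = sims.max(dim=1).values  # strongest patch per concept
        return self.classifier(concept_acts), concept_acts

head = PrototypeHead()
logits, acts = head(torch.randn(2, 196, 768))  # e.g. frozen ViT-B/16 patches
```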
2025-03-03T02:35:09.967000 | Chain of Draft: Thinking Faster by Writing Less | 4 | {
"_id": "63da3d7ae697e5898cb86854",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1675246771355-noauth.jpeg",
"followerCount": 86,
"fullname": "Talha Rüzgar Akkuş",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "Q-bert",
"type": "user"
} | true | null | 2502.18600 | [
{
"_id": "67c0a8058589d8ecb79d472b",
"hidden": false,
"name": "Silei Xu",
"status": "extracted_confirmed",
"statusLastChangedAt": "2025-02-27T18:01:14.543Z",
"user": {
"_id": "6594b1bb57a556fbe162915e",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6594b1bb57a556fbe162915e/WuYxqbbvaJaT-xsk5KhoT.jpeg",
"fullname": "Silei Xu",
"isPro": false,
"type": "user",
"user": "sileixu"
}
},
{
"_id": "67c0a8058589d8ecb79d472c",
"hidden": false,
"name": "Wenhao Xie",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c0a8058589d8ecb79d472d",
"hidden": false,
"name": "Lingxiao Zhao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c0a8058589d8ecb79d472e",
"hidden": false,
"name": "Pengcheng He",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-03T09:30:43.479Z",
"user": {
"_id": "5efd09cf49ed724c8a135868",
"avatarUrl": "/avatars/af12bc94657979677a9f26183f0c9727.svg",
"fullname": "Pengcheng He",
"isPro": false,
"type": "user",
"user": "DeBERTa"
}
}
] | 2025-02-25T19:36:06 | Chain of Draft: Thinking Faster by Writing Less | Large Language Models (LLMs) have demonstrated remarkable performance in
solving complex reasoning tasks through mechanisms like Chain-of-Thought (CoT)
prompting, which emphasizes verbose, step-by-step reasoning. However, humans
typically employ a more efficient strategy: drafting concise intermediate
thoughts that capture only essential information. In this work, we propose
Chain of Draft (CoD), a novel paradigm inspired by human cognitive processes,
where LLMs generate minimalistic yet informative intermediate reasoning outputs
while solving tasks. By reducing verbosity and focusing on critical insights,
CoD matches or surpasses CoT in accuracy while using as little as 7.6% of
the tokens, significantly reducing cost and latency across various reasoning
tasks. | 35 | 67c0a8078589d8ecb79d47ed | null | https://github.com/sileix/chain-of-draft |
|
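Chain of Draft is purely a prompting change, so the most faithful illustration is the instruction itself. Below is a hedged approximation of a CoD-style prompt next to a standard CoT one; the exact wording in the paper differs, and `ask_llm` is a hypothetical wrapper around whatever chat-completion client is in use.

```python
# Hedged approximation of Chain-of-Thought vs. Chain-of-Draft prompting.
# The exact instructions in the paper differ; `ask_llm` is a hypothetical
# client call.
COT_PROMPT = (
    "Think step by step to answer the question. Explain your reasoning in "
    "full sentences, then give the final answer after '####'."
)

COD_PROMPT = (
    "Think step by step, but keep only a minimal draft for each step, at "
    "most five words per step. Give the final answer after '####'."
)

def solve(question, ask_llm, concise=True):
    # Swapping the system prompt is the entire intervention: the drafts
    # carry the same reasoning in far fewer tokens.
    system = COD_PROMPT if concise else COT_PROMPT
    reply = ask_llm(system=system, user=question)
    return reply.split("####")[-1].strip()
```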
2025-03-02T22:22:01.895000 | ViDoRAG: Visual Document Retrieval-Augmented Generation via Dynamic Iterative Reasoning Agents | 2 | {
"_id": "657429d833e5a4bf5b278615",
"avatarUrl": "/avatars/ed7e28c1b9a7bed1cad864c992cdcc69.svg",
"followerCount": 1,
"fullname": "QiuchenWang",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "autumncc",
"type": "user"
} | true | null | 2502.18017 | [
{
"_id": "67bef5a6070ec160042d99f4",
"hidden": false,
"name": "Qiuchen Wang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-28T12:15:57.850Z",
"user": {
"_id": "657429d833e5a4bf5b278615",
"avatarUrl": "/avatars/ed7e28c1b9a7bed1cad864c992cdcc69.svg",
"fullname": "QiuchenWang",
"isPro": false,
"type": "user",
"user": "autumncc"
}
},
{
"_id": "67bef5a6070ec160042d99f5",
"hidden": false,
"name": "Ruixue Ding",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bef5a6070ec160042d99f6",
"hidden": false,
"name": "Zehui Chen",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-03T09:32:18.129Z",
"user": {
"_id": "64892d31cbda0d1cdb956897",
"avatarUrl": "/avatars/3cdafe03a8295124636347d15a099aaf.svg",
"fullname": "Zehui Chen",
"isPro": false,
"type": "user",
"user": "lovesnowbest"
}
},
{
"_id": "67bef5a6070ec160042d99f7",
"hidden": false,
"name": "Weiqi Wu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-03T09:32:12.075Z",
"user": {
"_id": "65351cbe6141b3927afaed17",
"avatarUrl": "/avatars/5abf5f2c4ab329e63a7f45c15c9dfb93.svg",
"fullname": "weiqi wu",
"isPro": false,
"type": "user",
"user": "vickywu"
}
},
{
"_id": "67bef5a6070ec160042d99f8",
"hidden": false,
"name": "Shihang Wang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-03T09:32:05.679Z",
"user": {
"_id": "62e8efb14210d3fe69eacb42",
"avatarUrl": "/avatars/2feadd75274bf353b910f4679ef72b39.svg",
"fullname": "Shihang Wang",
"isPro": false,
"type": "user",
"user": "shihang"
}
},
{
"_id": "67bef5a6070ec160042d99f9",
"hidden": false,
"name": "Pengjun Xie",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-03T09:31:59.813Z",
"user": {
"_id": "63a091e42fabbbb89991f5ce",
"avatarUrl": "/avatars/d55485b06461764c36c9edf9d6e8892c.svg",
"fullname": "pengjun xie",
"isPro": false,
"type": "user",
"user": "xpjandy"
}
},
{
"_id": "67bef5a6070ec160042d99fa",
"hidden": false,
"name": "Feng Zhao",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-25T09:26:12 | ViDoRAG: Visual Document Retrieval-Augmented Generation via Dynamic
Iterative Reasoning Agents | Understanding information from visually rich documents remains a significant
challenge for traditional Retrieval-Augmented Generation (RAG) methods.
Existing benchmarks predominantly focus on image-based question answering (QA),
overlooking the fundamental challenges of efficient retrieval, comprehension,
and reasoning within dense visual documents. To bridge this gap, we introduce
ViDoSeek, a novel dataset designed to evaluate RAG performance on visually rich
documents requiring complex reasoning. Based on it, we identify key limitations
in current RAG approaches: (i) purely visual retrieval methods struggle to
effectively integrate both textual and visual features, and (ii) previous
approaches often allocate insufficient reasoning tokens, limiting their
effectiveness. To address these challenges, we propose ViDoRAG, a novel
multi-agent RAG framework tailored for complex reasoning across visual
documents. ViDoRAG employs a Gaussian Mixture Model (GMM)-based hybrid strategy
to effectively handle multi-modal retrieval. To further elicit the model's
reasoning capabilities, we introduce an iterative agent workflow incorporating
exploration, summarization, and reflection, providing a framework for
investigating test-time scaling in RAG domains. Extensive experiments on
ViDoSeek validate the effectiveness and generalization of our approach.
Notably, ViDoRAG outperforms existing methods by over 10% on the competitive
ViDoSeek benchmark. | 17 | 67bef5a7070ec160042d9a65 | null | https://github.com/Alibaba-NLP/ViDoRAG |
|
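ViDoRAG's GMM-based hybrid retrieval decides, per query, how to blend and truncate textual and visual similarity scores. The abstract does not spell out the formulation, so the snippet below shows only one plausible ingredient: fitting a two-component Gaussian mixture to a query's similarity scores and keeping candidates from the high-score component. Treat it as an illustration, not the paper's method.

```python
# Illustrative use of a 2-component Gaussian mixture to pick an adaptive
# cutoff over retrieval scores (one plausible ingredient of a GMM-based
# hybrid strategy; not the paper's exact formulation).
import numpy as np
from sklearn.mixture import GaussianMixture

def adaptive_topk(scores: np.ndarray) -> np.ndarray:
    gmm = GaussianMixture(n_components=2, random_state=0)
    labels = gmm.fit_predict(scores.reshape(-1, 1))
    high = int(np.argmax(gmm.means_.ravel()))  # component with larger mean
    return np.where(labels == high)[0]         # indices of kept candidates

# Toy scores: 80 background candidates plus 8 genuinely relevant ones.
rng = np.random.default_rng(0)
scores = np.concatenate([rng.normal(0.2, 0.05, 80), rng.normal(0.8, 0.05, 8)])
print(adaptive_topk(scores))
```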
2025-03-02T22:08:44.891000 | Sim-to-Real Reinforcement Learning for Vision-Based Dexterous Manipulation on Humanoids | 2 | {
"_id": "60f1abe7544c2adfd699860c",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"followerCount": 6280,
"fullname": "AK",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "akhaliq",
"type": "user"
} | false | null | 2502.20396 | [
{
"_id": "67c51d36c830dcb76bbb5994",
"hidden": false,
"name": "Toru Lin",
"status": "claimed_verified",
"statusLastChangedAt": "2025-03-03T16:07:25.709Z",
"user": {
"_id": "65e8b34632f166badb8d893a",
"avatarUrl": "/avatars/a55da1d08dc1104e6c539cd3f1ef1ebe.svg",
"fullname": "T",
"isPro": false,
"type": "user",
"user": "toruowo"
}
},
{
"_id": "67c51d36c830dcb76bbb5995",
"hidden": false,
"name": "Kartik Sachdev",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c51d36c830dcb76bbb5996",
"hidden": false,
"name": "Linxi Fan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c51d36c830dcb76bbb5997",
"hidden": false,
"name": "Jitendra Malik",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-03T09:36:34.177Z",
"user": {
"_id": "65369a95605a07338de78ab0",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/sGFjOjLT2akN-sn5beVWL.jpeg",
"fullname": "Jitendra Malik ",
"isPro": false,
"type": "user",
"user": "jitendra1995"
}
},
{
"_id": "67c51d36c830dcb76bbb5998",
"hidden": false,
"name": "Yuke Zhu",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-27T18:59:52 | Sim-to-Real Reinforcement Learning for Vision-Based Dexterous
Manipulation on Humanoids | Reinforcement learning has delivered promising results in achieving human- or
even superhuman-level capabilities across diverse problem domains, but success
in dexterous robot manipulation remains limited. This work investigates the key
challenges in applying reinforcement learning to solve a collection of
contact-rich manipulation tasks on a humanoid embodiment. We introduce novel
techniques to overcome the identified challenges with empirical validation. Our
main contributions include an automated real-to-sim tuning module that brings
the simulated environment closer to the real world, a generalized reward design
scheme that simplifies reward engineering for long-horizon contact-rich
manipulation tasks, a divide-and-conquer distillation process that improves the
sample efficiency of hard-exploration problems while maintaining sim-to-real
performance, and a mixture of sparse and dense object representations to bridge
the sim-to-real perception gap. We show promising results on three humanoid
dexterous manipulation tasks, with ablation studies on each technique. Our work
presents a successful approach to learning humanoid dexterous manipulation
using sim-to-real reinforcement learning, achieving robust generalization and
high performance without the need for human demonstration. | 11 | 67c51d39c830dcb76bbb5a1f | null | null |
|
2025-03-02T22:04:15.087000 | HAIC: Improving Human Action Understanding and Generation with Better Captions for Multi-modal Large Language Models | 2 | {
"_id": "60f1abe7544c2adfd699860c",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"followerCount": 6280,
"fullname": "AK",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "akhaliq",
"type": "user"
} | false | null | 2502.20811 | [
{
"_id": "67c51c198d02783fa3a6249d",
"hidden": false,
"name": "Xiao Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c51c198d02783fa3a6249e",
"hidden": false,
"name": "Jingyun Hua",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c51c198d02783fa3a6249f",
"hidden": false,
"name": "Weihong Lin",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-03T09:42:30.547Z",
"user": {
"_id": "675a69699e086bd6250a36ef",
"avatarUrl": "/avatars/95c72e3975d1a37f8655a2fe629746ec.svg",
"fullname": "Weihong Lin",
"isPro": false,
"type": "user",
"user": "lwher1996"
}
},
{
"_id": "67c51c198d02783fa3a624a0",
"hidden": false,
"name": "Yuanxing Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c51c198d02783fa3a624a1",
"hidden": false,
"name": "Fuzheng Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c51c198d02783fa3a624a2",
"hidden": false,
"name": "Jianlong Wu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c51c198d02783fa3a624a3",
"hidden": false,
"name": "Di Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c51c198d02783fa3a624a4",
"hidden": false,
"name": "Liqiang Nie",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-28T07:53:40 | HAIC: Improving Human Action Understanding and Generation with Better
Captions for Multi-modal Large Language Models | Recent Multi-modal Large Language Models (MLLMs) have made great progress in
video understanding. However, their performance on videos involving human
actions is still limited by the lack of high-quality data. To address this, we
introduce a two-stage data annotation pipeline. First, we design strategies to
accumulate videos featuring clear human actions from the Internet. Second,
videos are annotated in a standardized caption format that uses human
attributes to distinguish individuals and chronologically details their actions
and interactions. Through this pipeline, we curate two datasets, namely
HAICTrain and HAICBench. HAICTrain comprises 126K video-caption pairs
generated by Gemini-Pro and verified for training purposes. Meanwhile,
HAICBench includes 500 manually annotated video-caption pairs and
1,400 QA pairs for a comprehensive evaluation of human action understanding.
Experimental results demonstrate that training with HAICTrain not only
significantly enhances human understanding abilities across 4 benchmarks, but
can also improve text-to-video generation results. Both the HAICTrain and
HAICBench are released at https://huggingface.co/datasets/KuaishouHAIC/HAIC. | 1 | 67c51c1b8d02783fa3a62543 | null | null |
|
2025-03-02T22:00:31.796000 | SoS1: O1 and R1-Like Reasoning LLMs are Sum-of-Square Solvers | 2 | {
"_id": "60f1abe7544c2adfd699860c",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"followerCount": 6280,
"fullname": "AK",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "akhaliq",
"type": "user"
} | false | null | 2502.20545 | [
{
"_id": "67c51b459d5807d6674b3d3c",
"hidden": false,
"name": "Kechen Li",
"status": "claimed_verified",
"statusLastChangedAt": "2025-03-04T08:51:29.578Z",
"user": {
"_id": "6742deb4d3ad4510c12da658",
"avatarUrl": "/avatars/91407d854560ef9a2facd80fa8fab6ec.svg",
"fullname": "Kechen Li",
"isPro": false,
"type": "user",
"user": "Kechen-Li"
}
},
{
"_id": "67c51b459d5807d6674b3d3d",
"hidden": false,
"name": "Wenqi Zhu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c51b459d5807d6674b3d3e",
"hidden": false,
"name": "Coralia Cartis",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c51b459d5807d6674b3d3f",
"hidden": false,
"name": "Tianbo Ji",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-03T09:35:49.782Z",
"user": {
"_id": "64bb61e876a6e2efcc728e22",
"avatarUrl": "/avatars/b0ed1c9f13fd1f2c99d202155001e39b.svg",
"fullname": "Tianbo Ji",
"isPro": false,
"type": "user",
"user": "jitianbo"
}
},
{
"_id": "67c51b459d5807d6674b3d40",
"hidden": false,
"name": "Shiwei Liu",
"status": "claimed_verified",
"statusLastChangedAt": "2025-03-03T11:14:45.635Z",
"user": {
"_id": "65b04d2291e63920a7898c9e",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/65b04d2291e63920a7898c9e/iUHs235G4bqK-KnH_94ti.jpeg",
"fullname": "Liu",
"isPro": false,
"type": "user",
"user": "Shiweiliuiiiiiii"
}
}
] | 2025-02-27T21:41:43 | SoS1: O1 and R1-Like Reasoning LLMs are Sum-of-Square Solvers | Large Language Models (LLMs) have achieved human-level proficiency across
diverse tasks, but their ability to perform rigorous mathematical problem
solving remains an open challenge. In this work, we investigate a fundamental
yet computationally intractable problem: determining whether a given
multivariate polynomial is nonnegative. This problem, closely related to
Hilbert's Seventeenth Problem, plays a crucial role in global polynomial
optimization and has applications in various fields. First, we introduce
SoS-1K, a meticulously curated dataset of approximately 1,000 polynomials,
along with expert-designed reasoning instructions based on five progressively
challenging criteria. Evaluating multiple state-of-the-art LLMs, we find that
without structured guidance, all models perform only slightly above the random
guess baseline of 50%. However, high-quality reasoning instructions significantly
improve accuracy, boosting performance up to 81%. Furthermore, our 7B model,
SoS-7B, fine-tuned on SoS-1K for just 4 hours, outperforms the 671B DeepSeek-V3
and GPT-4o-mini in accuracy while only requiring 1.8% and 5% of the computation
time needed by the latter, respectively. Our findings highlight the potential of
LLMs to push the boundaries of mathematical reasoning and tackle NP-hard
problems. | 17 | 67c51b469d5807d6674b3d88 | null | null |
|
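For intuition about the task in the SoS1 abstract, the snippet below runs a cheap necessary test for nonnegativity: sample random points and look for a negative value. A single negative sample disproves nonnegativity, while finding none proves nothing; that harder direction is exactly what makes the problem difficult. This is a didactic aid, unrelated to the paper's LLM pipeline.

```python
# Cheap necessary test for polynomial nonnegativity via random sampling.
# A negative sample is a counterexample; no counterexample proves nothing.
import numpy as np

def find_counterexample(poly, n_vars, trials=100_000, box=5.0, seed=0):
    rng = np.random.default_rng(seed)
    pts = rng.uniform(-box, box, size=(trials, n_vars))
    vals = np.apply_along_axis(poly, 1, pts)
    worst = np.argmin(vals)
    return (pts[worst], vals[worst]) if vals[worst] < 0 else None

# Motzkin polynomial: nonnegative everywhere, yet famously not a sum of
# squares, which is why sampling alone cannot settle such instances.
motzkin = lambda v: v[0]**4 * v[1]**2 + v[0]**2 * v[1]**4 - 3 * v[0]**2 * v[1]**2 + 1
print(find_counterexample(motzkin, n_vars=2))  # expected: None
```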
2025-03-02T21:48:46.577000 | LiteASR: Efficient Automatic Speech Recognition with Low-Rank Approximation | 2 | {
"_id": "6304ac1a412a1b9d381ca378",
"avatarUrl": "/avatars/f4724eb5afc2a3b0e61e6da7bfa7be27.svg",
"followerCount": null,
"fullname": "Keisuke Kamahori",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "kamahori",
"type": "user"
} | true | null | 2502.20583 | [
{
"_id": "67c516998d02783fa3a52dc8",
"hidden": false,
"name": "Keisuke Kamahori",
"status": "claimed_verified",
"statusLastChangedAt": "2025-03-03T08:07:02.986Z",
"user": {
"_id": "6304ac1a412a1b9d381ca378",
"avatarUrl": "/avatars/f4724eb5afc2a3b0e61e6da7bfa7be27.svg",
"fullname": "Keisuke Kamahori",
"isPro": false,
"type": "user",
"user": "kamahori"
}
},
{
"_id": "67c516998d02783fa3a52dc9",
"hidden": false,
"name": "Jungo Kasai",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-03T09:43:49.097Z",
"user": {
"_id": "62908273c740ebb981a6dba4",
"avatarUrl": "/avatars/465f50369c367b07670f5209c83d65f2.svg",
"fullname": "Jungo Kasai",
"isPro": false,
"type": "user",
"user": "jungok"
}
},
{
"_id": "67c516998d02783fa3a52dca",
"hidden": false,
"name": "Noriyuki Kojima",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-03T09:43:56.698Z",
"user": {
"_id": "628c26a8b80bb09700d6af86",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1653352051245-noauth.jpeg",
"fullname": "Noriyuki Kojima",
"isPro": false,
"type": "user",
"user": "kojimano"
}
},
{
"_id": "67c516998d02783fa3a52dcb",
"hidden": false,
"name": "Baris Kasikci",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-03T09:44:04.084Z",
"user": {
"_id": "654132fe5a9a913c6c870e79",
"avatarUrl": "/avatars/2f6807eddef1929c571977e9af35f952.svg",
"fullname": "Baris Kasikci",
"isPro": false,
"type": "user",
"user": "kasikci"
}
}
] | 2025-02-27T22:52:21 | LiteASR: Efficient Automatic Speech Recognition with Low-Rank
Approximation | Modern automatic speech recognition (ASR) models, such as OpenAI's Whisper,
rely on deep encoder-decoder architectures, and their encoders are a critical
bottleneck for efficient deployment due to high computational intensity. We
introduce LiteASR, a low-rank compression scheme for ASR encoders that
significantly reduces inference costs while maintaining transcription accuracy.
Our approach leverages the strong low-rank properties observed in intermediate
activations: by applying principal component analysis (PCA) with a small
calibration dataset, we approximate linear transformations with a chain of
low-rank matrix multiplications, and further optimize self-attention to work in
the reduced dimension. Evaluation results show that our method can compress
Whisper large-v3's encoder size by over 50%, matching Whisper medium's size
with better transcription accuracy, thereby establishing a new Pareto-optimal
frontier of efficiency and performance. The code of LiteASR is available at
https://github.com/efeslab/LiteASR. | 9 | 67c516998d02783fa3a52dfd | null | https://github.com/efeslab/LiteASR |
|
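LiteASR's compression step, approximating a linear layer by a chain of low-rank multiplications derived from PCA on calibration activations, can be illustrated with the closely related truncated-SVD factorization below. The activation-aware PCA and the attention-specific optimizations are the paper's; this sketch only shows replacing a weight matrix W with two thin factors.

```python
# Sketch of replacing a dense layer's weight W (out x in) with two thin
# factors so that x @ W.T ≈ (x @ B.T) @ A.T. Plain truncated SVD stands in
# for the paper's activation-aware PCA variant.
import torch

def low_rank_factors(W: torch.Tensor, rank: int):
    U, S, Vh = torch.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * S[:rank]  # (out, rank)
    B = Vh[:rank, :]            # (rank, in)
    return A, B

W = torch.randn(1024, 1024)
A, B = low_rank_factors(W, rank=256)
x = torch.randn(8, 1024)
approx = x @ B.t() @ A.t()  # two cheap matmuls instead of one large one
rel_err = (approx - x @ W.t()).norm() / (x @ W.t()).norm()
print(float(rel_err))
```

A rank of 256 on a 1024x1024 layer halves the multiply-add count; the paper's PCA-on-activations variant chooses the subspace that matters for real inputs, which is why it preserves accuracy at aggressive ranks.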
2025-03-02T21:35:24.437000 | DeepSolution: Boosting Complex Engineering Solution Design via Tree-based Exploration and Bi-point Thinking | 4 | {
"_id": "63664c8fa2abcdf2fd6425ed",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63664c8fa2abcdf2fd6425ed/IywpB0DXZ_twkmZmVSCCD.jpeg",
"followerCount": 1,
"fullname": "Li Zhuoqun",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "lzq2021",
"type": "user"
} | true | [
"https://cdn-uploads.huggingface.co/production/uploads/63664c8fa2abcdf2fd6425ed/y_kT4GP3xgm-5RdguMNV7.png",
"https://cdn-uploads.huggingface.co/production/uploads/63664c8fa2abcdf2fd6425ed/wDAS_USsxsVHbin1I5CEe.png",
"https://cdn-uploads.huggingface.co/production/uploads/63664c8fa2abcdf2fd6425ed/4lJgWp9V8pm4vDBUH4I5n.png"
] | 2502.20730 | [
{
"_id": "67c514aba3d873e41624a082",
"hidden": false,
"name": "Zhuoqun Li",
"status": "claimed_verified",
"statusLastChangedAt": "2025-03-03T08:07:26.218Z",
"user": {
"_id": "63664c8fa2abcdf2fd6425ed",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63664c8fa2abcdf2fd6425ed/IywpB0DXZ_twkmZmVSCCD.jpeg",
"fullname": "Li Zhuoqun",
"isPro": false,
"type": "user",
"user": "lzq2021"
}
},
{
"_id": "67c514aba3d873e41624a083",
"hidden": false,
"name": "Haiyang Yu",
"status": "claimed_verified",
"statusLastChangedAt": "2025-03-03T09:31:12.493Z",
"user": {
"_id": "64a4ceda9a90f701134189b7",
"avatarUrl": "/avatars/859a189c5d2ae2fcb9aa2d79104fbfe7.svg",
"fullname": "Haiyang Yu",
"isPro": false,
"type": "user",
"user": "yhycai"
}
},
{
"_id": "67c514aba3d873e41624a084",
"hidden": false,
"name": "Xuanang Chen",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-03T09:29:31.384Z",
"user": {
"_id": "63ef664304b0e373992a2633",
"avatarUrl": "/avatars/cba554ff88bd8b68ae51bea8ee991d13.svg",
"fullname": "Xuanang Chen",
"isPro": false,
"type": "user",
"user": "xuanang"
}
},
{
"_id": "67c514aba3d873e41624a085",
"hidden": false,
"name": "Hongyu Lin",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-03T09:28:09.791Z",
"user": {
"_id": "6711c702f858a456b4b9f3a4",
"avatarUrl": "/avatars/178e9567c3111ab22717c3c0dd003a6a.svg",
"fullname": "Hongyu Lin",
"isPro": false,
"type": "user",
"user": "sanmusunrise"
}
},
{
"_id": "67c514aba3d873e41624a086",
"hidden": false,
"name": "Yaojie Lu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-03T09:29:38.957Z",
"user": {
"_id": "6216496a9b34d2fb49144599",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6216496a9b34d2fb49144599/41CKA_h1Ffj3RzVabSAkm.jpeg",
"fullname": "Yaojie Lu",
"isPro": false,
"type": "user",
"user": "luyaojie"
}
},
{
"_id": "67c514aba3d873e41624a087",
"hidden": false,
"name": "Fei Huang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c514aba3d873e41624a088",
"hidden": false,
"name": "Xianpei Han",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-03T09:29:51.007Z",
"user": {
"_id": "65e99a77e71555ed193609cf",
"avatarUrl": "/avatars/38ceb127883944677665da967d17dd18.svg",
"fullname": "Xianpei Han",
"isPro": false,
"type": "user",
"user": "xphan"
}
},
{
"_id": "67c514aba3d873e41624a089",
"hidden": false,
"name": "Yongbin Li",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-03T09:29:57.561Z",
"user": {
"_id": "66641b2fd8e1e34bc621e688",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/66641b2fd8e1e34bc621e688/csPETwnx2zCIHSWi9uAi-.png",
"fullname": "Yongbin Li",
"isPro": false,
"type": "user",
"user": "Yongbin-Li"
}
},
{
"_id": "67c514aba3d873e41624a08a",
"hidden": false,
"name": "Le Sun",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-28T05:23:10 | DeepSolution: Boosting Complex Engineering Solution Design via
Tree-based Exploration and Bi-point Thinking | Designing solutions for complex engineering challenges is crucial in human
production activities. However, previous research in the retrieval-augmented
generation (RAG) field has not sufficiently addressed tasks related to the
design of complex engineering solutions. To fill this gap, we introduce a new
benchmark, SolutionBench, to evaluate a system's ability to generate complete
and feasible solutions for engineering problems with multiple complex
constraints. To further advance the design of complex engineering solutions, we
propose a novel system, SolutionRAG, that leverages the tree-based exploration
and bi-point thinking mechanism to generate reliable solutions. Extensive
experimental results demonstrate that SolutionRAG achieves state-of-the-art
(SOTA) performance on the SolutionBench, highlighting its potential to enhance
the automation and reliability of complex engineering solution design in
real-world applications. | 30 | 67c514aca3d873e41624a10b | null | https://github.com/Li-Z-Q/DeepSolution |
|
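The tree-based exploration with bi-point thinking described in the DeepSolution abstract can be pictured as a best-first search that alternates proposal and review nodes. The sketch below is a minimal Python illustration under that reading; `propose` and `review` are hypothetical stand-ins for the system's LLM and retrieval calls, not the authors' implementation.

```python
import heapq
import itertools

# Hypothetical stand-ins for SolutionRAG's LLM calls; the real system also
# retrieves engineering knowledge, which is omitted here.
def propose(problem, feedback, node_id):
    return f"solution #{node_id} for '{problem}', revised using: {feedback}"

def review(solution):
    # Returns (critique, score in [0, 1]); a toy heuristic instead of an LLM judge.
    return f"critique of {solution[:30]}...", (hash(solution) % 100) / 100

def solve(problem, max_nodes=8, beam=2):
    """Best-first search over a tree alternating 'solution' and 'review' nodes --
    one rough reading of tree-based exploration with bi-point thinking."""
    ids = itertools.count()
    frontier = [(0.0, next(ids), "no feedback yet")]
    best = ("", -1.0)
    for _ in range(max_nodes):
        if not frontier:
            break
        _, _, feedback = heapq.heappop(frontier)       # most promising branch first
        for _ in range(beam):
            node_id = next(ids)
            solution = propose(problem, feedback, node_id)
            critique, score = review(solution)
            best = max(best, (solution, score), key=lambda pair: pair[1])
            heapq.heappush(frontier, (-score, node_id, critique))
    return best

print(solve("design a cooling system under three constraints"))
```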
2025-02-28T16:51:51.551000 | PlanGEN: A Multi-Agent Framework for Generating Planning and Reasoning Trajectories for Complex Problem Solving | 3 | {
"_id": "61a00714f5119f1651f7e4be",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1651013366729-61a00714f5119f1651f7e4be.jpeg",
"followerCount": 1,
"fullname": "Mihir Parmar",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "Mihir3009",
"type": "user"
} | false | [
"https://cdn-uploads.huggingface.co/production/uploads/61a00714f5119f1651f7e4be/dZJBpAQlVaJSFYXhuE1Rl.png"
] | 2502.16111 | [
{
"_id": "67be18d2bb66802239ec8095",
"hidden": false,
"name": "Mihir Parmar",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67be18d2bb66802239ec8096",
"hidden": false,
"name": "Xin Liu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67be18d2bb66802239ec8097",
"hidden": false,
"name": "Palash Goyal",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67be18d2bb66802239ec8098",
"hidden": false,
"name": "Yanfei Chen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67be18d2bb66802239ec8099",
"hidden": false,
"name": "Long Le",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67be18d2bb66802239ec809a",
"hidden": false,
"name": "Swaroop Mishra",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67be18d2bb66802239ec809b",
"hidden": false,
"name": "Hossein Mobahi",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67be18d2bb66802239ec809c",
"hidden": false,
"name": "Jindong Gu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67be18d2bb66802239ec809d",
"hidden": false,
"name": "Zifeng Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67be18d2bb66802239ec809e",
"hidden": false,
"name": "Hootan Nakhost",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67be18d2bb66802239ec809f",
"hidden": false,
"name": "Chitta Baral",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67be18d2bb66802239ec80a0",
"hidden": false,
"name": "Chen-Yu Lee",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67be18d2bb66802239ec80a1",
"hidden": false,
"name": "Tomas Pfister",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67be18d2bb66802239ec80a2",
"hidden": false,
"name": "Hamid Palangi",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-22T06:21:56 | PlanGEN: A Multi-Agent Framework for Generating Planning and Reasoning
Trajectories for Complex Problem Solving | Recent agent frameworks and inference-time algorithms often struggle with
complex planning problems, due both to limitations in verifying generated plans or
reasoning and to the varying complexity of instances within a single task. Many
existing methods for these tasks either perform task-level verification without
considering constraints or apply inference-time algorithms without adapting to
instance-level complexity. To address these limitations, we propose PlanGEN, a
model-agnostic and easily scalable agent framework with three key components:
constraint, verification, and selection agents. Specifically, our approach
proposes constraint-guided iterative verification to enhance performance of
inference-time algorithms--Best of N, Tree-of-Thought, and REBASE. In the PlanGEN
framework, the selection agent optimizes algorithm choice based on instance
complexity, ensuring better adaptability to complex planning problems.
Experimental results demonstrate significant improvements over the strongest
baseline across multiple benchmarks, achieving state-of-the-art results on
NATURAL PLAN (~8%↑), OlympiadBench (~4%↑), DocFinQA
(~7%↑), and GPQA (~1%↑). Our key finding highlights
that constraint-guided iterative verification improves inference-time
algorithms, and adaptive selection further boosts performance on complex
planning and reasoning problems. | 7 | 67be18d3bb66802239ec80d1 | null | null |
|
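A minimal sketch of the PlanGEN loop as the abstract describes it: a constraint agent extracts instance constraints, a verification agent scores candidate plans against them, and a selection agent picks the inference-time algorithm by estimated instance complexity. All agent functions below are hypothetical stubs for LLM calls, not the paper's implementation.

```python
def constraint_agent(task):
    # Stub: in PlanGEN this is an LLM extracting instance-specific constraints.
    return ["respect budget", "no schedule overlap"]

def verification_agent(plan, constraints):
    # Stub: score = fraction of constraints the plan explicitly mentions satisfying.
    return sum(c in plan for c in constraints) / len(constraints)

def best_of_n(task, constraints, n=4):
    candidates = [f"plan {i} for {task}, satisfying: " +
                  ", ".join(constraints[: i % len(constraints) + 1])
                  for i in range(n)]
    return max(candidates, key=lambda p: verification_agent(p, constraints))

def tree_of_thought(task, constraints):
    return best_of_n(task, constraints, n=8)   # placeholder for a real ToT search

def selection_agent(constraints):
    # Stub complexity estimate: more constraints -> heavier inference algorithm.
    return tree_of_thought if len(constraints) > 3 else best_of_n

task = "schedule five meetings across two days"
constraints = constraint_agent(task)
algorithm = selection_agent(constraints)
print(algorithm(task, constraints))
```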
2025-02-28T13:21:13.227000 | Beyond Next-Token: Next-X Prediction for Autoregressive Visual Generation | 2 | {
"_id": "65317ea1501804124f011950",
"avatarUrl": "/avatars/b055c3aba0c65d5377c69472e4576480.svg",
"followerCount": 3,
"fullname": "Ren",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "OliverRen",
"type": "user"
} | false | null | 2502.20388 | [
{
"_id": "67c1643aa4ccbde471532ba6",
"hidden": false,
"name": "Sucheng Ren",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c1643aa4ccbde471532ba7",
"hidden": false,
"name": "Qihang Yu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c1643aa4ccbde471532ba8",
"hidden": false,
"name": "Ju He",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c1643aa4ccbde471532ba9",
"hidden": false,
"name": "Xiaohui Shen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c1643aa4ccbde471532baa",
"hidden": false,
"name": "Alan Yuille",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c1643aa4ccbde471532bab",
"hidden": false,
"name": "Liang-Chieh Chen",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-27T18:59:08 | Beyond Next-Token: Next-X Prediction for Autoregressive Visual
Generation | Autoregressive (AR) modeling, known for its next-token prediction paradigm,
underpins state-of-the-art language and visual generative models.
Traditionally, a "token" is treated as the smallest prediction unit, often a
discrete symbol in language or a quantized patch in vision. However, the
optimal token definition for 2D image structures remains an open question.
Moreover, AR models suffer from exposure bias, where teacher forcing during
training leads to error accumulation at inference. In this paper, we propose
xAR, a generalized AR framework that extends the notion of a token to an entity
X, which can represent an individual patch token, a cell (a k×k
grouping of neighboring patches), a subsample (a non-local grouping of distant
patches), a scale (coarse-to-fine resolution), or even a whole image.
Additionally, we reformulate discrete token classification as
continuous entity regression, leveraging flow-matching methods at each
AR step. This approach conditions training on noisy entities instead of ground
truth tokens, leading to Noisy Context Learning, which effectively alleviates
exposure bias. As a result, xAR offers two key advantages: (1) it enables
flexible prediction units that capture different contextual granularity and
spatial structures, and (2) it mitigates exposure bias by avoiding reliance on
teacher forcing. On the ImageNet-256 generation benchmark, our base model, xAR-B
(172M), outperforms DiT-XL/SiT-XL (675M) while achieving 20× faster
inference. Meanwhile, xAR-H sets a new state-of-the-art with an FID of 1.24,
running 2.2× faster than the previous best-performing model without
relying on vision foundation modules (e.g., DINOv2) or advanced guidance
interval sampling. | 13 | 67c1643ba4ccbde471532c03 | null | null |
|
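The "next-X" idea is easiest to see for the cell entity: regroup a flat sequence of patch tokens into k×k cells so that one autoregressive step predicts a whole cell. Below is a minimal PyTorch sketch of that regrouping only (the flow-matching regression head is omitted); the shapes are illustrative assumptions.

```python
import torch

def patches_to_cells(tokens, h, w, k):
    """Regroup a (B, h*w, D) sequence of patch tokens into (B, (h//k)*(w//k), k*k*D)
    'cell' entities, so one autoregressive step predicts a whole k x k cell."""
    b, n, d = tokens.shape
    assert n == h * w and h % k == 0 and w % k == 0
    x = tokens.view(b, h // k, k, w // k, k, d)       # split rows/cols into cells
    x = x.permute(0, 1, 3, 2, 4, 5)                   # (B, H', W', k, k, D)
    return x.reshape(b, (h // k) * (w // k), k * k * d)

tokens = torch.randn(2, 16 * 16, 64)                  # a 16x16 patch grid
cells = patches_to_cells(tokens, h=16, w=16, k=4)
print(cells.shape)                                    # torch.Size([2, 16, 1024])
```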
2025-02-28T08:54:03.125000 | On Relation-Specific Neurons in Large Language Models | 2 | {
"_id": "61bf84c8ca59d6d196a1b4e8",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/61bf84c8ca59d6d196a1b4e8/L_NvUwlMYcye9X35z6f7e.jpeg",
"followerCount": 44,
"fullname": "Amir Hossein Kargaran",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "kargaranamir",
"type": "user"
} | true | null | 2502.17355 | [
{
"_id": "67bf1808b91e7e6477d92c1e",
"hidden": false,
"name": "Yihong Liu",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-28T15:14:48.351Z",
"user": {
"_id": "653f7e569e84d1e8b6a66e70",
"avatarUrl": "/avatars/24eaa6434508a162c349aebfc51990ff.svg",
"fullname": "Yihong Liu",
"isPro": false,
"type": "user",
"user": "yihongLiu"
}
},
{
"_id": "67bf1808b91e7e6477d92c1f",
"hidden": false,
"name": "Runsheng Chen",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-28T15:16:28.041Z",
"user": {
"_id": "63629b9f2a84d82a8c8feb32",
"avatarUrl": "/avatars/8484b5bf8311b28249757729b1ce80f8.svg",
"fullname": "Chen",
"isPro": false,
"type": "user",
"user": "Runsheng"
}
},
{
"_id": "67bf1808b91e7e6477d92c20",
"hidden": false,
"name": "Lea Hirlimann",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-28T15:16:18.398Z",
"user": {
"_id": "658559148615630cb3ec5b6b",
"avatarUrl": "/avatars/dd804ca277e6b19903bb550cc167ba4a.svg",
"fullname": "Lea Hirlimann",
"isPro": false,
"type": "user",
"user": "hirlimann"
}
},
{
"_id": "67bf1808b91e7e6477d92c21",
"hidden": false,
"name": "Ahmad Dawar Hakimi",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-28T15:16:11.693Z",
"user": {
"_id": "62502669d2d191ac43320ade",
"avatarUrl": "/avatars/7997e9b2012059edb22b745c3b737481.svg",
"fullname": "Ahmad Dawar Hakimi",
"isPro": false,
"type": "user",
"user": "adhakimi"
}
},
{
"_id": "67bf1808b91e7e6477d92c22",
"hidden": false,
"name": "Mingyang Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bf1808b91e7e6477d92c23",
"hidden": false,
"name": "Amir Hossein Kargaran",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-26T15:37:07.932Z",
"user": {
"_id": "61bf84c8ca59d6d196a1b4e8",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/61bf84c8ca59d6d196a1b4e8/L_NvUwlMYcye9X35z6f7e.jpeg",
"fullname": "Amir Hossein Kargaran",
"isPro": false,
"type": "user",
"user": "kargaranamir"
}
},
{
"_id": "67bf1808b91e7e6477d92c24",
"hidden": false,
"name": "Sascha Rothe",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bf1808b91e7e6477d92c25",
"hidden": false,
"name": "François Yvon",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-28T15:16:57.343Z",
"user": {
"_id": "62ab10f04bd2ebf5dbad205c",
"avatarUrl": "/avatars/65356b3b057159cc67a86efb26b53486.svg",
"fullname": "François Yvon",
"isPro": false,
"type": "user",
"user": "fyvo"
}
},
{
"_id": "67bf1808b91e7e6477d92c26",
"hidden": false,
"name": "Hinrich Schütze",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-24T17:33:18 | On Relation-Specific Neurons in Large Language Models | In large language models (LLMs), certain neurons can store distinct pieces of
knowledge learned during pretraining. While knowledge typically appears as a
combination of relations and entities, it remains unclear whether some neurons
focus on a relation itself -- independent of any entity. We hypothesize such
neurons detect a relation in the input text and guide generation involving such
a relation. To investigate this, we study the Llama-2 family on a chosen set of
relations with a statistics-based method. Our experiments demonstrate the
existence of relation-specific neurons. We measure the effect of selectively
deactivating candidate neurons specific to relation r on the LLM's ability to
handle (1) facts whose relation is r and (2) facts whose relation is a
different relation r' ≠ r. With respect to their capacity for encoding
relation information, we give evidence for the following three properties of
relation-specific neurons. (i) Neuron cumulativity. The neurons for
r present a cumulative effect so that deactivating a larger portion of them
results in the degradation of more facts in r. (ii) Neuron
versatility. Neurons can be shared across multiple closely related as well as
less related relations. Some relation neurons transfer across languages.
(iii) Neuron interference. Deactivating neurons specific to one
relation can improve LLM generation performance for facts of other relations.
We will make our code publicly available at
https://github.com/cisnlp/relation-specific-neurons. | 6 | 67bf1808b91e7e6477d92c55 | null | null |
|
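Selective deactivation of candidate neurons can be reproduced in miniature with a forward hook that zeroes chosen hidden units. The sketch below uses a toy two-layer FFN rather than Llama-2, and the neuron indices are hypothetical; in the paper the candidates come from a statistics-based selection over the model's MLP layers.

```python
import torch
import torch.nn as nn

# Toy stand-in for one transformer FFN block.
ffn = nn.Sequential(nn.Linear(32, 128), nn.GELU(), nn.Linear(128, 32))
candidate_neurons = torch.tensor([3, 17, 42, 99])   # hypothetical relation-r neurons

def deactivate(module, inputs, output):
    output[..., candidate_neurons] = 0.0            # zero the relation-specific units
    return output

handle = ffn[1].register_forward_hook(deactivate)   # hook after the activation
x = torch.randn(4, 32)
with torch.no_grad():
    ablated = ffn(x)
handle.remove()
with torch.no_grad():
    original = ffn(x)
print(torch.allclose(ablated, original))            # False: ablation changed outputs
```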
2025-02-28T08:46:19.110000 | Guardians of the Agentic System: Preventing Many Shots Jailbreak with Agentic System | 2 | {
"_id": "653425f4ed74ace63395826c",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/no-auth/QJlB0DOEel6U9b-95wasK.png",
"followerCount": 3,
"fullname": "Saikat Barua",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "AlignAI",
"type": "user"
} | true | [
"https://cdn-uploads.huggingface.co/production/uploads/653425f4ed74ace63395826c/czZ9fF4yF6yz3E89YtU6e.jpeg"
] | 2502.16750 | [
{
"_id": "67c1b63744d780e60d7c5274",
"hidden": false,
"name": "Saikat Barua",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-28T13:24:57.086Z",
"user": {
"_id": "653425f4ed74ace63395826c",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/no-auth/QJlB0DOEel6U9b-95wasK.png",
"fullname": "Saikat Barua",
"isPro": false,
"type": "user",
"user": "AlignAI"
}
},
{
"_id": "67c1b63744d780e60d7c5275",
"hidden": false,
"name": "Mostafizur Rahman",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c1b63744d780e60d7c5276",
"hidden": false,
"name": "Md Jafor Sadek",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-28T15:21:48.563Z",
"user": {
"_id": "63c99ab3dfac8071d01b61d4",
"avatarUrl": "/avatars/9151241b8af4d64d7771740587d1b7a5.svg",
"fullname": "MD Jafor Sadek Khan",
"isPro": false,
"type": "user",
"user": "Jafor"
}
},
{
"_id": "67c1b63744d780e60d7c5277",
"hidden": false,
"name": "Rafiul Islam",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c1b63744d780e60d7c5278",
"hidden": false,
"name": "Shehnaz Khaled",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c1b63744d780e60d7c5279",
"hidden": false,
"name": "Ahmedul Kabir",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-23T23:35:15 | Guardians of the Agentic System: Preventing Many Shots Jailbreak with
Agentic System | Autonomous AI agents built on large language models can create undeniable
value across all spans of society, but they face security threats from
adversaries that warrant immediate protective solutions, because trust and
safety issues arise. Many-shot jailbreaking and deceptive alignment are among
the main advanced attacks that cannot be mitigated by the static guardrails
used during supervised training, which points to a crucial research priority
for real-world robustness. The combination of static guardrails in a dynamic
multi-agent system fails to defend against these attacks. We intend to enhance
security for LLM-based agents through the development of new evaluation
frameworks that identify and counter threats for safe operational deployment.
Our work uses three examination methods: detecting rogue agents through a
Reverse Turing Test, analyzing deceptive alignment through multi-agent
simulations, and developing an anti-jailbreaking system tested with GEMINI 1.5
Pro, Llama-3.3-70B, and DeepSeek-R1 models in tool-mediated adversarial
scenarios. Detection capabilities are strong (e.g., 94% accuracy for GEMINI
1.5 Pro), yet the system suffers persistent vulnerabilities under prolonged
attacks: as prompt length increases, attack success rates (ASR) rise, and
diversity metrics become ineffective predictors, revealing multiple complex
system faults. The findings demonstrate the necessity of flexible security
systems based on active monitoring that can be performed by the agents
themselves, together with adaptable interventions by system administrators,
since current models can create vulnerabilities that lead to unreliable and
insecure systems. In this work, we address such situations and propose a
comprehensive framework to counteract the security issues. | 10 | 67c1b63a44d780e60d7c5317 | null | null |
|
2025-02-28T07:55:48.923000 | Training Consistency Models with Variational Noise Coupling | 2 | {
"_id": "67c07f498589d8ecb7912686",
"avatarUrl": "/avatars/84e77389c211a7c4237f73208658c23a.svg",
"followerCount": null,
"fullname": "Gianluigi Silvestri",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "gisilvs",
"type": "user"
} | true | null | 2502.18197 | [
{
"_id": "67c07fa2a43d7939d6d90d54",
"hidden": false,
"name": "Gianluigi Silvestri",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-27T22:09:04.844Z",
"user": {
"_id": "67c07f498589d8ecb7912686",
"avatarUrl": "/avatars/84e77389c211a7c4237f73208658c23a.svg",
"fullname": "Gianluigi Silvestri",
"isPro": false,
"type": "user",
"user": "gisilvs"
}
},
{
"_id": "67c07fa2a43d7939d6d90d55",
"hidden": false,
"name": "Luca Ambrogioni",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c07fa2a43d7939d6d90d56",
"hidden": false,
"name": "Chieh-Hsin Lai",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c07fa2a43d7939d6d90d57",
"hidden": false,
"name": "Yuhta Takida",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-28T15:15:50.340Z",
"user": {
"_id": "66138c4074f830bc7d9d6622",
"avatarUrl": "/avatars/d50da6d7597d3bcf63f9f0c74e910155.svg",
"fullname": "Yuhta Takida",
"isPro": false,
"type": "user",
"user": "ytakida"
}
},
{
"_id": "67c07fa2a43d7939d6d90d58",
"hidden": false,
"name": "Yuki Mitsufuji",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-28T15:15:56.987Z",
"user": {
"_id": "665e32384ecc8a7181634f6d",
"avatarUrl": "/avatars/8752f952010540d14f45eac849e91371.svg",
"fullname": "Yuki Mitsufuji",
"isPro": false,
"type": "user",
"user": "mittu1204"
}
}
] | 2025-02-25T13:38:04 | Training Consistency Models with Variational Noise Coupling | Consistency Training (CT) has recently emerged as a promising alternative to
diffusion models, achieving competitive performance in image generation tasks.
However, non-distillation consistency training often suffers from high variance
and instability, and analyzing and improving its training dynamics is an active
area of research. In this work, we propose a novel CT training approach based
on the Flow Matching framework. Our main contribution is a trained
noise-coupling scheme inspired by the architecture of Variational Autoencoders
(VAE). By training a data-dependent noise emission model implemented as an
encoder architecture, our method can indirectly learn the geometry of the
noise-to-data mapping, which is instead fixed by the choice of the forward
process in classical CT. Empirical results across diverse image datasets show
significant generative improvements, with our model outperforming baselines and
achieving the state-of-the-art (SoTA) non-distillation CT FID on CIFAR-10, and
attaining FID on par with SoTA on ImageNet at 64×64 resolution in
2-step generation. Our code is available at https://github.com/sony/vct . | 5 | 67c07fa6a43d7939d6d90e1f | null | null |
|
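A data-dependent noise emission model of the kind described can be sketched as a VAE-style encoder that outputs a mean and log-variance for the noise endpoint, with a KL term pulling it back toward N(0, I). This is a minimal sketch under those assumptions, not the paper's architecture or loss.

```python
import torch
import torch.nn as nn

# Variational noise coupling in miniature: instead of pairing data x with
# independent Gaussian noise, an encoder emits a data-dependent distribution
# over the noise endpoint, regularized toward N(0, I) by a KL term.
class NoiseCoupling(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.mu = nn.Linear(dim, dim)
        self.logvar = nn.Linear(dim, dim)

    def forward(self, x):
        mu, logvar = self.mu(x), self.logvar(x)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterized noise
        kl = 0.5 * (mu.pow(2) + logvar.exp() - logvar - 1).sum(-1).mean()
        return z, kl

x = torch.randn(8, 64)                       # a batch of (flattened) data samples
coupling = NoiseCoupling()
z, kl = coupling(x)                          # z replaces i.i.d. noise in the CT loss
print(z.shape, kl.item())
```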
2025-02-28T07:25:35.166000 | Efficient Gaussian Splatting for Monocular Dynamic Scene Rendering via Sparse Time-Variant Attribute Modeling | 2 | {
"_id": "6442882f8443bce4c98a88aa",
"avatarUrl": "/avatars/70d5aa651b07b43629554096d76efd4c.svg",
"followerCount": 1,
"fullname": "Kong",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "imsuperkong",
"type": "user"
} | true | null | 2502.20378 | [
{
"_id": "67c1aa781c3a8036977ed8b1",
"hidden": false,
"name": "Hanyang Kong",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-28T15:13:50.949Z",
"user": {
"_id": "6442882f8443bce4c98a88aa",
"avatarUrl": "/avatars/70d5aa651b07b43629554096d76efd4c.svg",
"fullname": "Kong",
"isPro": false,
"type": "user",
"user": "imsuperkong"
}
},
{
"_id": "67c1aa781c3a8036977ed8b2",
"hidden": false,
"name": "Xingyi Yang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-28T12:45:42.180Z",
"user": {
"_id": "634cfebc350bcee9bed20a4d",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/634cfebc350bcee9bed20a4d/fN47nN5rhw-HJaFLBZWQy.png",
"fullname": "Xingyi Yang",
"isPro": false,
"type": "user",
"user": "adamdad"
}
},
{
"_id": "67c1aa781c3a8036977ed8b3",
"hidden": false,
"name": "Xinchao Wang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-28T12:45:56.197Z",
"user": {
"_id": "63fc03a50aab060792ffef39",
"avatarUrl": "/avatars/9d5b1bb2a41928e08176b703935133ab.svg",
"fullname": "Wangxinchao",
"isPro": false,
"type": "user",
"user": "wxcTest"
}
}
] | 2025-02-27T18:53:06 | Efficient Gaussian Splatting for Monocular Dynamic Scene Rendering via
Sparse Time-Variant Attribute Modeling | Rendering dynamic scenes from monocular videos is a crucial yet challenging
task. The recent deformable Gaussian Splatting has emerged as a robust solution
to represent real-world dynamic scenes. However, it often produces heavily
redundant Gaussians that attempt to fit every training view at various time
steps, slowing rendering. Additionally, the attributes of Gaussians in static
areas are time-invariant, so modeling every Gaussian with time-varying
attributes is unnecessary and can cause jittering in static regions. In practice, the
primary bottleneck in rendering speed for dynamic scenes is the number of
Gaussians. In response, we introduce Efficient Dynamic Gaussian Splatting
(EDGS), which represents dynamic scenes via sparse time-variant attribute
modeling. Our approach formulates dynamic scenes using a sparse anchor-grid
representation, with the motion flow of dense Gaussians calculated via a
classical kernel representation. Furthermore, we propose an unsupervised
strategy to efficiently filter out anchors corresponding to static areas. Only
anchors associated with deformable objects are input into MLPs to query
time-variant attributes. Experiments on two real-world datasets demonstrate
that our EDGS significantly improves the rendering speed with superior
rendering quality compared to previous state-of-the-art methods. | 4 | 67c1aa7a1c3a8036977ed977 | null | null |
|
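One simple way to realize the unsupervised static-anchor filtering described above is to threshold each anchor's temporal motion variation and route only dynamic anchors through the time-variant MLP. The threshold, shapes, and synthetic offsets below are assumptions for illustration, not EDGS's actual criterion.

```python
import torch

# Sketch: anchors whose predicted motion barely varies over sampled timesteps
# are frozen; only the rest are queried through the deformation MLP.
num_anchors, timesteps = 1000, 8
offsets = torch.randn(timesteps, num_anchors, 3) * 0.01          # mostly static anchors
offsets[:, :100] += torch.linspace(0, 1, timesteps)[:, None, None]  # 100 moving anchors

motion = offsets.std(dim=0).norm(dim=-1)       # per-anchor temporal variation
dynamic = motion > 0.05                        # unsupervised threshold (assumed)
print(int(dynamic.sum()), "anchors sent to the time-variant MLP")
```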
2025-02-28T04:47:08.197000 | Building Interactable Replicas of Complex Articulated Objects via Gaussian Splatting | 2 | {
"_id": "63c7a33121bd95f80ed74652",
"avatarUrl": "/avatars/7dd59afea785a2bff0ec2b757abd474e.svg",
"followerCount": 2,
"fullname": "Siyuan Huang",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "thuhsy",
"type": "user"
} | true | null | 2502.19459 | [
{
"_id": "67c185f46a31b8fe77434551",
"hidden": false,
"name": "Yu Liu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-28T12:35:40.098Z",
"user": {
"_id": "636de85cc4a7a729c164d2b5",
"avatarUrl": "/avatars/3e281e547e1697e1c06805e7e63f3918.svg",
"fullname": "Yu Liu",
"isPro": false,
"type": "user",
"user": "YuLiu"
}
},
{
"_id": "67c185f46a31b8fe77434552",
"hidden": false,
"name": "Baoxiong Jia",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-28T12:35:33.475Z",
"user": {
"_id": "6304b389bad6ce7fc02691d5",
"avatarUrl": "/avatars/a762ca59624ce409650165f36b973488.svg",
"fullname": "Baoxiong Jia",
"isPro": false,
"type": "user",
"user": "BuzzBeater"
}
},
{
"_id": "67c185f46a31b8fe77434553",
"hidden": false,
"name": "Ruijie Lu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-28T12:35:27.394Z",
"user": {
"_id": "64ab8cb76324705e6a65f7c4",
"avatarUrl": "/avatars/15dcae6c345d31ea6e17c11108a7deb7.svg",
"fullname": "Ruijie Lu",
"isPro": false,
"type": "user",
"user": "JasonAplp"
}
},
{
"_id": "67c185f46a31b8fe77434554",
"hidden": false,
"name": "Junfeng Ni",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-28T12:35:20.550Z",
"user": {
"_id": "65ae5edddacd99fd58277620",
"avatarUrl": "/avatars/5ba35c984d54eef4eacf11ebebafa3a0.svg",
"fullname": "Junfeng Ni",
"isPro": false,
"type": "user",
"user": "JunfengNi"
}
},
{
"_id": "67c185f46a31b8fe77434555",
"hidden": false,
"name": "Song-Chun Zhu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c185f46a31b8fe77434556",
"hidden": false,
"name": "Siyuan Huang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-28T12:35:02.710Z",
"user": {
"_id": "63c7a33121bd95f80ed74652",
"avatarUrl": "/avatars/7dd59afea785a2bff0ec2b757abd474e.svg",
"fullname": "Siyuan Huang",
"isPro": false,
"type": "user",
"user": "thuhsy"
}
}
] | 2025-02-26T10:25:32 | Building Interactable Replicas of Complex Articulated Objects via
Gaussian Splatting | Building articulated objects is a key challenge in computer vision. Existing
methods often fail to effectively integrate information across different object
states, limiting the accuracy of part-mesh reconstruction and part dynamics
modeling, particularly for complex multi-part articulated objects. We introduce
ArtGS, a novel approach that leverages 3D Gaussians as a flexible and efficient
representation to address these issues. Our method incorporates canonical
Gaussians with coarse-to-fine initialization and updates for aligning
articulated part information across different object states, and employs a
skinning-inspired part dynamics modeling module to improve both part-mesh
reconstruction and articulation learning. Extensive experiments on both
synthetic and real-world datasets, including a new benchmark for complex
multi-part objects, demonstrate that ArtGS achieves state-of-the-art
performance in joint parameter estimation and part mesh reconstruction. Our
approach significantly improves reconstruction quality and efficiency,
especially for multi-part articulated objects. Additionally, we provide
comprehensive analyses of our design choices, validating the effectiveness of
each component to highlight potential areas for future improvement. | 8 | 67c185f66a31b8fe774345d2 | https://articulate-gs.github.io | https://github.com/YuLiu-LY/ArtGS |
|
2025-02-28T04:36:05.045000 | MedVLM-R1: Incentivizing Medical Reasoning Capability of Vision-Language Models (VLMs) via Reinforcement Learning | 3 | {
"_id": "631b9ff5824f2502e3557c7e",
"avatarUrl": "/avatars/076043c9dba07644a570692563ef8114.svg",
"followerCount": 5,
"fullname": "liu",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "che111",
"type": "user"
} | true | null | 2502.19634 | [
{
"_id": "67c12bf3505a88e4a1866a01",
"hidden": false,
"name": "Jiazhen Pan",
"status": "claimed_verified",
"statusLastChangedAt": "2025-03-02T20:18:10.120Z",
"user": {
"_id": "66588c8a338165aad1516756",
"avatarUrl": "/avatars/c6539b4ef65f465f6f762628d6921be6.svg",
"fullname": "JZPeterPan",
"isPro": false,
"type": "user",
"user": "JZPeterPan"
}
},
{
"_id": "67c12bf3505a88e4a1866a02",
"hidden": false,
"name": "Che Liu",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-28T09:28:38.598Z",
"user": {
"_id": "631b9ff5824f2502e3557c7e",
"avatarUrl": "/avatars/076043c9dba07644a570692563ef8114.svg",
"fullname": "liu",
"isPro": false,
"type": "user",
"user": "che111"
}
},
{
"_id": "67c12bf3505a88e4a1866a03",
"hidden": false,
"name": "Junde Wu",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-28T12:14:18.528Z",
"user": {
"_id": "6317257fc92fd6fee317ff7c",
"avatarUrl": "/avatars/2f460a2f28562c987becb2acad8d93e7.svg",
"fullname": "Junde Wu",
"isPro": false,
"type": "user",
"user": "morson"
}
},
{
"_id": "67c12bf3505a88e4a1866a04",
"hidden": false,
"name": "Fenglin Liu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-28T12:23:44.550Z",
"user": {
"_id": "647c7c311f878439e2fe50e7",
"avatarUrl": "/avatars/be0ddfc98c98f66b88c939c0451907a5.svg",
"fullname": "Fenglin Liu",
"isPro": false,
"type": "user",
"user": "fenglinliu"
}
},
{
"_id": "67c12bf3505a88e4a1866a05",
"hidden": false,
"name": "Jiayuan Zhu",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-28T15:14:46.124Z",
"user": {
"_id": "66aff1d8ccc0fb3883dd19a8",
"avatarUrl": "/avatars/dc9a0c622f0509c5bc9bf82d8f6ad7e3.svg",
"fullname": "Jiayuan Zhu",
"isPro": false,
"type": "user",
"user": "jiayuanz3"
}
},
{
"_id": "67c12bf3505a88e4a1866a06",
"hidden": false,
"name": "Hongwei Bran Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c12bf3505a88e4a1866a07",
"hidden": false,
"name": "Chen Chen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c12bf3505a88e4a1866a08",
"hidden": false,
"name": "Cheng Ouyang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-28T17:49:25.580Z",
"user": {
"_id": "67c1e8cd73ae02d78044324b",
"avatarUrl": "/avatars/45cdcc832ee46da139e8163969186d26.svg",
"fullname": "C O",
"isPro": false,
"type": "user",
"user": "ellivreksaB"
}
},
{
"_id": "67c12bf3505a88e4a1866a09",
"hidden": false,
"name": "Daniel Rueckert",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-26T23:57:34 | MedVLM-R1: Incentivizing Medical Reasoning Capability of Vision-Language
Models (VLMs) via Reinforcement Learning | Reasoning is a critical frontier for advancing medical image analysis, where
transparency and trustworthiness play a central role in both clinician trust
and regulatory approval. Although Medical Visual Language Models (VLMs) show
promise for radiological tasks, most existing VLMs merely produce final answers
without revealing the underlying reasoning. To address this gap, we introduce
MedVLM-R1, a medical VLM that explicitly generates natural language reasoning
to enhance transparency and trustworthiness. Instead of relying on supervised
fine-tuning (SFT), which often suffers from overfitting to training
distributions and fails to foster genuine reasoning, MedVLM-R1 employs a
reinforcement learning framework that incentivizes the model to discover
human-interpretable reasoning paths without using any reasoning references.
Despite limited training data (600 visual question answering samples) and model
parameters (2B), MedVLM-R1 boosts accuracy from 55.11% to 78.22% across MRI,
CT, and X-ray benchmarks, outperforming larger models trained on over a million
samples. It also demonstrates robust domain generalization under
out-of-distribution tasks. By unifying medical image analysis with explicit
reasoning, MedVLM-R1 marks a pivotal step toward trustworthy and interpretable
AI in clinical practice. | 54 | 67c12bf4505a88e4a1866a35 | null | null |
|
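RL without reasoning references is typically driven by rule-based rewards; a common recipe combines a format reward for well-formed <think>/<answer> output with an accuracy reward on the final choice. The sketch below follows that recipe; the paper's exact reward definition may differ.

```python
import re

# Rule-based reward of the kind used to incentivize explicit reasoning without
# reasoning references: well-formed <think>/<answer> output plus a correct
# final choice. A hypothetical sketch, not MedVLM-R1's verified reward.
def reward(completion: str, gold_choice: str) -> float:
    fmt = 1.0 if re.fullmatch(r"(?s)\s*<think>.*</think>\s*<answer>.*</answer>\s*",
                              completion) else 0.0
    m = re.search(r"<answer>\s*([A-D])\s*</answer>", completion)
    acc = 1.0 if m and m.group(1) == gold_choice else 0.0
    return fmt + acc

out = "<think>The lesion is hyperintense on T2.</think><answer>B</answer>"
print(reward(out, "B"))   # 2.0
```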
2025-02-28T04:02:19.534000 | Multimodal Representation Alignment for Image Generation: Text-Image Interleaved Control Is Easier Than You Think | 3 | {
"_id": "63468720dd6d90d82ccf3450",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63468720dd6d90d82ccf3450/tVBFlmZNz8FRMkOrDaDID.jpeg",
"followerCount": 32,
"fullname": "YSH",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "BestWishYsh",
"type": "user"
} | false | null | 2502.20172 | [
{
"_id": "67c17b8f60206395233b7e46",
"hidden": false,
"name": "Liang Chen",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-28T12:34:40.397Z",
"user": {
"_id": "658c481dd1c8b106727a8b73",
"avatarUrl": "/avatars/d34a7a62c3a524e5fdd2d5994348db58.svg",
"fullname": "Liang Chen",
"isPro": false,
"type": "user",
"user": "liangchen-ms"
}
},
{
"_id": "67c17b8f60206395233b7e47",
"hidden": false,
"name": "Shuai Bai",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-28T12:33:50.796Z",
"user": {
"_id": "63451cf0a05b51f7ded25505",
"avatarUrl": "/avatars/dec4bbee4a82b773fc58dfc2dce9dbeb.svg",
"fullname": "shuai bai",
"isPro": false,
"type": "user",
"user": "bluelike"
}
},
{
"_id": "67c17b8f60206395233b7e48",
"hidden": false,
"name": "Wenhao Chai",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-28T12:33:57.358Z",
"user": {
"_id": "637c7503fe115289cfecbe6b",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1676361945047-637c7503fe115289cfecbe6b.jpeg",
"fullname": "Wenhao Chai",
"isPro": false,
"type": "user",
"user": "wchai"
}
},
{
"_id": "67c17b8f60206395233b7e49",
"hidden": false,
"name": "Weichu Xie",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-28T12:34:48.933Z",
"user": {
"_id": "678609789a285d232ee14157",
"avatarUrl": "/avatars/a6cb2c571d9ef6deb0b1659f754afe7f.svg",
"fullname": "Weichu Xie",
"isPro": false,
"type": "user",
"user": "akarinmoe"
}
},
{
"_id": "67c17b8f60206395233b7e4a",
"hidden": false,
"name": "Haozhe Zhao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c17b8f60206395233b7e4b",
"hidden": false,
"name": "Leon Vinci",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c17b8f60206395233b7e4c",
"hidden": false,
"name": "Junyang Lin",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-28T12:34:24.275Z",
"user": {
"_id": "620760a26e3b7210c2ff1943",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/620760a26e3b7210c2ff1943/VC-rKqimF6yxGESNVlPoR.jpeg",
"fullname": "Junyang Lin",
"isPro": false,
"type": "user",
"user": "JustinLin610"
}
},
{
"_id": "67c17b8f60206395233b7e4d",
"hidden": false,
"name": "Baobao Chang",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-27T15:08:39 | Multimodal Representation Alignment for Image Generation: Text-Image
Interleaved Control Is Easier Than You Think | The field of advanced text-to-image generation is witnessing the emergence of
unified frameworks that integrate powerful text encoders, such as CLIP and T5,
with Diffusion Transformer backbones. Although there have been efforts to
control output images with additional conditions, such as Canny edge and depth maps, a
comprehensive framework for arbitrary text-image interleaved control is still
lacking. This gap is especially evident when attempting to merge concepts or
visual elements from multiple images in the generation process. To mitigate the
gap, we conducted preliminary experiments showing that large multimodal models
(LMMs) offer an effective shared representation space, where image and text can
be well-aligned to serve as a condition for external diffusion models. Based on
this discovery, we propose Dream Engine, an efficient and unified framework
designed for arbitrary text-image interleaved control in image generation
models. Building on powerful text-to-image models like SD3.5, we replace the
original text-only encoders by incorporating versatile multimodal information
encoders such as QwenVL. Our approach utilizes a two-stage training paradigm,
consisting of joint text-image alignment and multimodal interleaved instruction
tuning. Our experiments demonstrate that this training method is effective,
achieving a 0.69 overall score on the GenEval benchmark, and matching the
performance of state-of-the-art text-to-image models like SD3.5 and FLUX. | 24 | 67c17b9160206395233b7e9c | null | null |
|
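The conditioning swap at the heart of this approach can be sketched as a multimodal encoder that maps interleaved text and image tokens into one shared embedding sequence, which then replaces the text-only condition of the diffusion model. The modules below are toy stand-ins, not QwenVL or SD3.5.

```python
import torch
import torch.nn as nn

# Toy multimodal condition encoder: text tokens and image patch features are
# projected into one shared dimension and concatenated into a single sequence
# that a diffusion transformer would consume via cross-attention.
class ToyLMMEncoder(nn.Module):
    def __init__(self, dim=256):
        super().__init__()
        self.text = nn.Embedding(1000, dim)
        self.image = nn.Linear(768, dim)      # e.g., ViT patch features -> shared dim

    def forward(self, text_ids, image_feats):
        return torch.cat([self.text(text_ids), self.image(image_feats)], dim=1)

encoder = ToyLMMEncoder()
cond = encoder(torch.randint(1000, (1, 12)), torch.randn(1, 64, 768))
print(cond.shape)    # (1, 76, 256): an interleaved text-image condition sequence
```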
2025-02-28T03:27:32.294000 | NeoBERT: A Next-Generation BERT | 6 | {
"_id": "6317233cc92fd6fee317e030",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6317233cc92fd6fee317e030/cJHSvvimr1kqgQfHOjO5n.png",
"followerCount": 1617,
"fullname": "Tom Aarsen",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "tomaarsen",
"type": "user"
} | false | null | 2502.19587 | [
{
"_id": "67c13aa6a43d7939d60eb02e",
"hidden": false,
"name": "Lola Le Breton",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-28T12:35:52.732Z",
"user": {
"_id": "6512e961332b85e7cf8c1431",
"avatarUrl": "/avatars/d4bdb9670166112dcb36753bc1823b28.svg",
"fullname": "Lola Le Breton",
"isPro": false,
"type": "user",
"user": "Lolalb"
}
},
{
"_id": "67c13aa6a43d7939d60eb02f",
"hidden": false,
"name": "Quentin Fournier",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c13aa6a43d7939d60eb030",
"hidden": false,
"name": "Mariam El Mezouar",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-28T12:36:28.502Z",
"user": {
"_id": "6504c139eac45ee2e4d36893",
"avatarUrl": "/avatars/cc3d65f558988ab885aa0357f6e2d29d.svg",
"fullname": "Mariam El Mezouar",
"isPro": false,
"type": "user",
"user": "mariamelm"
}
},
{
"_id": "67c13aa6a43d7939d60eb031",
"hidden": false,
"name": "Sarath Chandar",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-28T12:36:22.663Z",
"user": {
"_id": "66fabb66c2bf89d75e8cdd4d",
"avatarUrl": "/avatars/c40f55a77e2fb34ba38a79f04df82893.svg",
"fullname": "Sarath Chandar",
"isPro": false,
"type": "user",
"user": "apsarath"
}
}
] | 2025-02-26T22:00:22 | NeoBERT: A Next-Generation BERT | Recent innovations in architecture, pre-training, and fine-tuning have led to
the remarkable in-context learning and reasoning abilities of large
auto-regressive language models such as LLaMA and DeepSeek. In contrast,
encoders like BERT and RoBERTa have not seen the same level of progress despite
being foundational for many downstream NLP applications. To bridge this gap, we
introduce NeoBERT, a next-generation encoder that redefines the capabilities of
bidirectional models by integrating state-of-the-art advancements in
architecture, modern data, and optimized pre-training methodologies. NeoBERT is
designed for seamless adoption: it serves as a plug-and-play replacement for
existing base models, relies on an optimal depth-to-width ratio, and leverages
an extended context length of 4,096 tokens. Despite its compact 250M parameter
footprint, it achieves state-of-the-art results on the massive MTEB benchmark,
outperforming BERT-large, RoBERTa-large, NomicBERT, and ModernBERT under
identical fine-tuning conditions. In addition, we rigorously evaluate the
impact of each modification on GLUE and design a uniform fine-tuning and
evaluation framework for MTEB. We release all code, data, checkpoints, and
training scripts to accelerate research and real-world adoption. | 34 | 67c13aa7a43d7939d60eb065 | null | null |
|
2025-02-28T01:55:41.427000 | Lean and Mean: Decoupled Value Policy Optimization with Global Value Guidance | 2 | {
"_id": "669dcf6200970c3b27aafa5d",
"avatarUrl": "/avatars/bb9ed5ff86326fdaeb184c6b0e40f74f.svg",
"followerCount": null,
"fullname": "kaikai yang",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "keanudicap",
"type": "user"
} | true | null | 2502.16944 | [
{
"_id": "67be807e8a5a805423137ca2",
"hidden": false,
"name": "Chenghua Huang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-28T12:30:31.972Z",
"user": {
"_id": "664af07a691370727c281031",
"avatarUrl": "/avatars/e5ed17342e0ea953bacc7d57e9f3b686.svg",
"fullname": "Cheng Hua Huang",
"isPro": false,
"type": "user",
"user": "LanceZomax"
}
},
{
"_id": "67be807e8a5a805423137ca3",
"hidden": false,
"name": "Lu Wang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-28T12:30:39.129Z",
"user": {
"_id": "6406afa8a577649430c64363",
"avatarUrl": "/avatars/9bd1768c91d509c8c49970e9fd7775a5.svg",
"fullname": "LuWang",
"isPro": false,
"type": "user",
"user": "LuWang"
}
},
{
"_id": "67be807e8a5a805423137ca4",
"hidden": false,
"name": "Fangkai Yang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-27T09:17:46.382Z",
"user": {
"_id": "669dcf6200970c3b27aafa5d",
"avatarUrl": "/avatars/bb9ed5ff86326fdaeb184c6b0e40f74f.svg",
"fullname": "kaikai yang",
"isPro": false,
"type": "user",
"user": "keanudicap"
}
},
{
"_id": "67be807e8a5a805423137ca5",
"hidden": false,
"name": "Pu Zhao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67be807e8a5a805423137ca6",
"hidden": false,
"name": "Zhixu Li",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-28T12:30:46.084Z",
"user": {
"_id": "661dd71d2ae9013218415e6f",
"avatarUrl": "/avatars/389883e5886628c07cb0b08fc8c93c3b.svg",
"fullname": "Zhixu Li",
"isPro": false,
"type": "user",
"user": "ZhixuLi"
}
},
{
"_id": "67be807e8a5a805423137ca7",
"hidden": false,
"name": "Qingwei Lin",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-28T12:31:00.637Z",
"user": {
"_id": "652fc9f39bc50a6c0e435224",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/652fc9f39bc50a6c0e435224/70OBVDHHBsxG2giJ-E3_1.jpeg",
"fullname": "Lin Qingwei",
"isPro": false,
"type": "user",
"user": "Eliblo1969"
}
},
{
"_id": "67be807e8a5a805423137ca8",
"hidden": false,
"name": "Dongmei Zhang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-28T12:31:14.045Z",
"user": {
"_id": "66473d2c7abe6ad66e81a3dd",
"avatarUrl": "/avatars/82f40244806c06ffeaa1c4265e9725ea.svg",
"fullname": "ZHANGDONGMEI",
"isPro": false,
"type": "user",
"user": "ZDM6426"
}
},
{
"_id": "67be807e8a5a805423137ca9",
"hidden": false,
"name": "Saravan Rajmohan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67be807e8a5a805423137caa",
"hidden": false,
"name": "Qi Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-24T08:11:33 | Lean and Mean: Decoupled Value Policy Optimization with Global Value
Guidance | Proximal Policy Optimization (PPO)-based Reinforcement Learning from Human
Feedback (RLHF) is essential for aligning large language models (LLMs) with
human preferences. It requires joint training of an actor and critic with a
pretrained, fixed reward model for guidance. This approach increases
computational complexity and instability due to actor-critic interdependence.
Additionally, PPO lacks access to true environment rewards in LLM tasks,
limiting its adaptability. Under such conditions, pretraining a value model or
a reward model becomes equivalent, as both provide fixed supervisory signals
without new ground-truth feedback. To address these issues, we propose
Decoupled Value Policy Optimization (DVPO), a lean framework that
replaces traditional reward modeling with a pretrained global value model
(GVM). The GVM is conditioned on policy trajectories and predicts token-level
return-to-go estimates. By decoupling value model from policy training (via
frozen GVM-driven RL objectives), DVPO eliminates actor-critic interdependence,
reducing GPU memory usage by 40% and training time by 35% compared to
conventional RLHF. Experiments across benchmarks show DVPO outperforms
efficient RLHF methods (e.g., DPO) while matching state-of-the-art PPO in
performance. | 10 | 67be807e8a5a805423137cc2 | null | null |
|
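The decoupling can be sketched as a plain policy-gradient update in which a frozen global value model supplies token-level return-to-go estimates, so no critic is trained jointly with the actor. The tensors below are random stand-ins and the GVM call is simulated; this is a reading of the abstract, not DVPO's exact objective.

```python
import torch

# Frozen-GVM policy gradient in miniature.
B, T, V = 2, 6, 100                                 # batch, tokens, vocab
logits = torch.randn(B, T, V, requires_grad=True)   # policy outputs
actions = torch.randint(V, (B, T))                  # sampled tokens
with torch.no_grad():                               # the GVM is frozen
    returns_to_go = torch.randn(B, T)               # stand-in for GVM(trajectory)

logp = torch.log_softmax(logits, dim=-1)
logp_act = logp.gather(-1, actions.unsqueeze(-1)).squeeze(-1)   # (B, T)
advantage = returns_to_go - returns_to_go.mean()    # simple centered signal
loss = -(advantage.detach() * logp_act).mean()      # policy-gradient objective
loss.backward()
print(loss.item())
```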
2025-02-28T01:14:11.268000 | FINEREASON: Evaluating and Improving LLMs' Deliberate Reasoning through Reflective Puzzle Solving | 2 | {
"_id": "64e85b3edb3767299865e0e3",
"avatarUrl": "/avatars/fdbe121535dea940edd2766161393485.svg",
"followerCount": null,
"fullname": "Chen",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "Guizhen",
"type": "user"
} | true | null | 2502.20238 | [
{
"_id": "67c15306333e2f71f01c8e35",
"hidden": false,
"name": "Guizhen Chen",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-28T12:28:41.974Z",
"user": {
"_id": "64e85b3edb3767299865e0e3",
"avatarUrl": "/avatars/fdbe121535dea940edd2766161393485.svg",
"fullname": "Chen",
"isPro": false,
"type": "user",
"user": "Guizhen"
}
},
{
"_id": "67c15306333e2f71f01c8e36",
"hidden": false,
"name": "Weiwen Xu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-28T12:28:47.949Z",
"user": {
"_id": "67627b97fc88502751bfd2b8",
"avatarUrl": "/avatars/4b1f5c333f9255181d7b9078c5d4eb32.svg",
"fullname": "Wei",
"isPro": false,
"type": "user",
"user": "weiwenxu"
}
},
{
"_id": "67c15306333e2f71f01c8e37",
"hidden": false,
"name": "Hao Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c15306333e2f71f01c8e38",
"hidden": false,
"name": "Hou Pong Chan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c15306333e2f71f01c8e39",
"hidden": false,
"name": "Chaoqun Liu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-28T12:28:58.424Z",
"user": {
"_id": "61657b0b20606e5e73f611cc",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/61657b0b20606e5e73f611cc/6ZPne2GYlWkxrx35ND1P8.png",
"fullname": "CHAOQUN LIU",
"isPro": false,
"type": "user",
"user": "lukecq"
}
},
{
"_id": "67c15306333e2f71f01c8e3a",
"hidden": false,
"name": "Lidong Bing",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-28T12:29:04.845Z",
"user": {
"_id": "6454685a548f22be598414c4",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/eMjMWKJ-AouF7eY1-RzGF.jpeg",
"fullname": "Lidong Bing",
"isPro": false,
"type": "user",
"user": "LidongBing"
}
},
{
"_id": "67c15306333e2f71f01c8e3b",
"hidden": false,
"name": "Deli Zhao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c15306333e2f71f01c8e3c",
"hidden": false,
"name": "Anh Tuan Luu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-28T12:29:21.748Z",
"user": {
"_id": "655722e80438e0854fae7554",
"avatarUrl": "/avatars/b93a74f7c7880f9fe0f3ffb47e2aef5e.svg",
"fullname": "Luu Anh Tuan",
"isPro": false,
"type": "user",
"user": "anhtuanluu36"
}
},
{
"_id": "67c15306333e2f71f01c8e3d",
"hidden": false,
"name": "Yu Rong",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-27T16:23:25 | FINEREASON: Evaluating and Improving LLMs' Deliberate Reasoning through
Reflective Puzzle Solving | Many challenging reasoning tasks require not just rapid, intuitive responses,
but a more deliberate, multi-step approach. Recent progress in large language
models (LLMs) highlights an important shift from the "System 1" way of quick
reactions to the "System 2" style of reflection-and-correction problem solving.
However, current benchmarks rely heavily on final-answer accuracy, leaving
a model's intermediate reasoning steps largely unexamined. This fails to assess
the model's ability to reflect and rectify mistakes within the reasoning
process. To bridge this gap, we introduce FINEREASON, a logic-puzzle benchmark
for fine-grained evaluation of LLMs' reasoning capabilities. Each puzzle can be
decomposed into atomic steps, making it ideal for rigorous validation of
intermediate correctness. Building on this, we introduce two tasks, state
checking and state transition, for a comprehensive evaluation of how models
assess the current situation and plan the next move. To support broader
research, we also provide a puzzle training set aimed at enhancing performance
on general mathematical tasks. We show that models trained on our state
checking and transition data demonstrate gains in math reasoning by up to 5.1%
on GSM8K. | 23 | 67c15307333e2f71f01c8ebc | null | null |
|
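The two tasks are easy to make concrete on a toy puzzle. For a 4x4 mini-Sudoku, state checking asks whether an intermediate grid is still solvable, and state transition asks for a valid next move. The backtracking check below is our illustration, not the benchmark's puzzles.

```python
def ok(g, r, c, v):
    box = [g[r // 2 * 2 + i][c // 2 * 2 + j] for i in range(2) for j in range(2)]
    return v not in g[r] and all(g[i][c] != v for i in range(4)) and v not in box

def solvable(g):                       # state checking: can this state be completed?
    for r in range(4):
        for c in range(4):
            if g[r][c] == 0:
                for v in range(1, 5):
                    if ok(g, r, c, v):
                        g[r][c] = v
                        if solvable(g):
                            g[r][c] = 0
                            return True
                        g[r][c] = 0
                return False
    return True

def next_move(g):                      # state transition: one provably safe move
    for r in range(4):
        for c in range(4):
            if g[r][c] == 0:
                for v in range(1, 5):
                    if ok(g, r, c, v):
                        g[r][c] = v
                        if solvable(g):
                            g[r][c] = 0
                            return r, c, v
                        g[r][c] = 0
    return None

state = [[1, 0, 0, 0],
         [0, 0, 3, 0],
         [0, 1, 0, 0],
         [0, 0, 0, 2]]
print(solvable(state), next_move(state))   # True (0, 1, 3)
```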
2025-02-28T00:14:01.841000 | Mobius: Text to Seamless Looping Video Generation via Latent Shift | 2 | {
"_id": "60f1abe7544c2adfd699860c",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"followerCount": 6280,
"fullname": "AK",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "akhaliq",
"type": "user"
} | false | null | 2502.20307 | [
{
"_id": "67c1460201cef6d4b9b9ac73",
"hidden": false,
"name": "Xiuli Bi",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c1460201cef6d4b9b9ac74",
"hidden": false,
"name": "Jianfei Yuan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c1460201cef6d4b9b9ac75",
"hidden": false,
"name": "Bo Liu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c1460201cef6d4b9b9ac76",
"hidden": false,
"name": "Yong Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c1460201cef6d4b9b9ac77",
"hidden": false,
"name": "Xiaodong Cun",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-28T12:38:07.038Z",
"user": {
"_id": "63184c517ca1b876d99b7e0e",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63184c517ca1b876d99b7e0e/b-qDExoeJuDXK0cJBZKnz.jpeg",
"fullname": "Xiaodong Cun",
"isPro": false,
"type": "user",
"user": "vinthony"
}
},
{
"_id": "67c1460201cef6d4b9b9ac78",
"hidden": false,
"name": "Chi-Man Pun",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c1460201cef6d4b9b9ac79",
"hidden": false,
"name": "Bin Xiao",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-27T17:33:51 | Mobius: Text to Seamless Looping Video Generation via Latent Shift | We present Mobius, a novel method to generate seamlessly looping videos from
text descriptions directly, without any user annotations, thereby creating new
visual material for multimedia presentations. Our method repurposes the
pre-trained video latent diffusion model for generating looping videos from
text prompts without any training. During inference, we first construct a
latent cycle by connecting the starting and ending noise of the videos. Given
that temporal consistency can be maintained by the context of the video
diffusion model, we perform multi-frame latent denoising by gradually shifting
the first-frame latent to the end in each step. As a result, the denoising
context varies in each step while maintaining consistency throughout the
inference process. Moreover, the latent cycle in our method can be of any
length. This extends our latent-shifting approach to generate seamless looping
videos beyond the scope of the video diffusion model's context. Unlike previous
cinemagraphs, the proposed method does not require an image as an appearance
reference, which would restrict the motion of the generated results. Instead, our method
can produce more dynamic motion and better visual quality. We conduct multiple
experiments and comparisons to verify the effectiveness of the proposed method,
demonstrating its efficacy in different scenarios. All the code will be made
available. | 16 | 67c1460501cef6d4b9b9addf | null | null |
|
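The latent shift itself reduces to rotating a cyclic buffer of frame latents by one position between denoising steps, so the first-frame latent gradually moves to the end and every frame is denoised with loop-wrapping context. A minimal sketch with a stubbed one-step denoiser; shapes and step count are illustrative assumptions.

```python
import torch

def denoise_step(latents, t):
    return latents - 0.01 * t * latents            # stub for the diffusion model update

frames, c, h, w = 16, 4, 32, 32                    # the latent cycle can be any length
latents = torch.randn(frames, c, h, w)             # start and end noise joined as a cycle

for t in range(50, 0, -1):
    latents = denoise_step(latents, t)
    latents = torch.roll(latents, shifts=-1, dims=0)   # first-frame latent drifts to the end

print(latents.shape)   # torch.Size([16, 4, 32, 32]); decode to a seamlessly looping clip
```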
2025-02-28T00:10:30.864000 | FlexiDiT: Your Diffusion Transformer Can Easily Generate High-Quality Samples with Less Compute | 2 | {
"_id": "60f1abe7544c2adfd699860c",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"followerCount": 6280,
"fullname": "AK",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "akhaliq",
"type": "user"
} | false | null | 2502.20126 | [
{
"_id": "67c14524af5eaa8dd062a216",
"hidden": false,
"name": "Sotiris Anagnostidis",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-28T12:32:18.844Z",
"user": {
"_id": "62f8f4ff92e64c61bc6938da",
"avatarUrl": "/avatars/d386eb35d2c3d52186b2a8ec957f51bc.svg",
"fullname": "Sotiris Anagnostidis",
"isPro": false,
"type": "user",
"user": "sanagnos"
}
},
{
"_id": "67c14524af5eaa8dd062a217",
"hidden": false,
"name": "Gregor Bachmann",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-28T12:32:24.781Z",
"user": {
"_id": "64996389a3f227b05cbd956f",
"avatarUrl": "/avatars/a586e7f99efdc7b61e05d62945575096.svg",
"fullname": "Gregor Bachmann",
"isPro": false,
"type": "user",
"user": "gregorbachmann"
}
},
{
"_id": "67c14524af5eaa8dd062a218",
"hidden": false,
"name": "Yeongmin Kim",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-28T12:32:30.708Z",
"user": {
"_id": "65d6a010c57d1c140e395e31",
"avatarUrl": "/avatars/ddc9cfc98da36b639bd9205ee65b6967.svg",
"fullname": "Yeongmin Kim",
"isPro": false,
"type": "user",
"user": "YeongminKim"
}
},
{
"_id": "67c14524af5eaa8dd062a219",
"hidden": false,
"name": "Jonas Kohler",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-28T12:32:36.925Z",
"user": {
"_id": "6408a3a19e9f790c905281c2",
"avatarUrl": "/avatars/3517962e54bda141018e13f7e21fb1ae.svg",
"fullname": "jonas köhler",
"isPro": false,
"type": "user",
"user": "Jonaskohler"
}
},
{
"_id": "67c14524af5eaa8dd062a21a",
"hidden": false,
"name": "Markos Georgopoulos",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c14524af5eaa8dd062a21b",
"hidden": false,
"name": "Artsiom Sanakoyeu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c14524af5eaa8dd062a21c",
"hidden": false,
"name": "Yuming Du",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c14524af5eaa8dd062a21d",
"hidden": false,
"name": "Albert Pumarola",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c14524af5eaa8dd062a21e",
"hidden": false,
"name": "Ali Thabet",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c14524af5eaa8dd062a21f",
"hidden": false,
"name": "Edgar Schönfeld",
"status": "claimed_verified",
"statusLastChangedAt": "2025-03-02T20:18:03.571Z",
"user": {
"_id": "6571c662d5c6a6d3b0bdae88",
"avatarUrl": "/avatars/8080bcdf8b331f62383b724050189660.svg",
"fullname": "Edgar Schoenfeld",
"isPro": false,
"type": "user",
"user": "edgarschoenfeld"
}
}
] | 2025-02-27T14:16:56 | FlexiDiT: Your Diffusion Transformer Can Easily Generate High-Quality
Samples with Less Compute | Despite their remarkable performance, modern Diffusion Transformers are
hindered by substantial resource requirements during inference, stemming from
the fixed and large amount of compute needed for each denoising step. In this
work, we revisit the conventional static paradigm that allocates a fixed
compute budget per denoising iteration and propose a dynamic strategy instead.
Our simple and sample-efficient framework enables pre-trained DiT models to be
converted into flexible ones -- dubbed FlexiDiT -- allowing them to
process inputs at varying compute budgets. We demonstrate how a single
flexible model can generate images without any drop in quality, while
reducing the required FLOPs by more than 40% compared to their static
counterparts, for both class-conditioned and text-conditioned image generation.
Our method is general and agnostic to input and conditioning modalities. We
show how our approach can be readily extended for video generation, where
FlexiDiT models generate samples with up to 75% less compute without
compromising performance. | 18 | 67c14529af5eaa8dd062a38c | null | null |
|
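Dynamic per-step compute can be realized by changing the effective patch size, and hence the token count, across denoising steps. The schedule and the quadratic FLOP model below are illustrative assumptions, not the paper's numbers.

```python
# Spend fewer tokens (larger patches) on early, noisy steps and full
# resolution late; attention cost scales roughly quadratically in tokens.
def token_count(image_size=256, patch=2):
    return (image_size // (8 * patch)) ** 2      # tokens after an 8x VAE + patchify

def schedule(num_steps=50, switch=0.6):
    # Coarse patches for the first `switch` fraction of steps, fine afterwards.
    return [4 if s < switch * num_steps else 2 for s in range(num_steps)]

patches = schedule()
flops = sum(token_count(patch=p) ** 2 for p in patches)
static = len(patches) * token_count(patch=2) ** 2
print(f"relative compute: {flops / static:.2f}")   # ~0.44 under these assumptions
```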
2025-02-28T00:03:34.893000 | R1-T1: Fully Incentivizing Translation Capability in LLMs via Reasoning Learning | 2 | {
"_id": "60f1abe7544c2adfd699860c",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"followerCount": 6280,
"fullname": "AK",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "akhaliq",
"type": "user"
} | false | null | 2502.19735 | [
{
"_id": "67c1438fd7ffcd1cab1fc412",
"hidden": false,
"name": "Minggui He",
"status": "extracted_pending",
"statusLastChangedAt": "2025-02-28T05:03:12.675Z",
"user": {
"_id": "6727998d4fc2e4f7cc0c85d3",
"avatarUrl": "/avatars/ac18eaadd606f7fae64996502f393cf2.svg",
"fullname": "he",
"isPro": false,
"type": "user",
"user": "boommmmm"
}
},
{
"_id": "67c1438fd7ffcd1cab1fc413",
"hidden": false,
"name": "Yilun Liu",
"status": "claimed_verified",
"statusLastChangedAt": "2025-03-04T08:52:32.981Z",
"user": {
"_id": "67c676d517073afcaa053da9",
"avatarUrl": "/avatars/3bbfd9fd20b2e6f9dd40c2fc7f74e241.svg",
"fullname": "Yilun Liu",
"isPro": false,
"type": "user",
"user": "lunyiliu"
}
},
{
"_id": "67c1438fd7ffcd1cab1fc414",
"hidden": false,
"name": "Shimin Tao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c1438fd7ffcd1cab1fc415",
"hidden": false,
"name": "Yuanchang Luo",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c1438fd7ffcd1cab1fc416",
"hidden": false,
"name": "Hongyong Zeng",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c1438fd7ffcd1cab1fc417",
"hidden": false,
"name": "Chang Su",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c1438fd7ffcd1cab1fc418",
"hidden": false,
"name": "Li Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c1438fd7ffcd1cab1fc419",
"hidden": false,
"name": "Hongxia Ma",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c1438fd7ffcd1cab1fc41a",
"hidden": false,
"name": "Daimeng Wei",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c1438fd7ffcd1cab1fc41b",
"hidden": false,
"name": "Weibin Meng",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-28T12:44:36.712Z",
"user": {
"_id": "67285a87e347d62d66473b9a",
"avatarUrl": "/avatars/6aeb022c2728ace62bf6884fdb3c9f9c.svg",
"fullname": "WeibinMeng",
"isPro": false,
"type": "user",
"user": "weibinmeng"
}
},
{
"_id": "67c1438fd7ffcd1cab1fc41c",
"hidden": false,
"name": "Hao Yang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c1438fd7ffcd1cab1fc41d",
"hidden": false,
"name": "Boxing Chen",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-28T12:44:51.085Z",
"user": {
"_id": "66f98351ee969ff116986327",
"avatarUrl": "/avatars/e038ef1462f77f5e87e868339993f92d.svg",
"fullname": "Boxing Chen",
"isPro": false,
"type": "user",
"user": "BoxingChen"
}
},
{
"_id": "67c1438fd7ffcd1cab1fc41e",
"hidden": false,
"name": "Osamu Yoshie",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-27T03:57:00 | R1-T1: Fully Incentivizing Translation Capability in LLMs via Reasoning
Learning | Despite recent breakthroughs in reasoning-enhanced large language models
(LLMs) like DeepSeek-R1, incorporating inference-time reasoning into machine
translation (MT), where human translators naturally employ structured,
multi-layered reasoning chains of thought (CoTs), remains underexplored.
Existing methods either design a fixed CoT tailored for a specific MT sub-task
(e.g., literature translation), or rely on synthesizing CoTs unaligned with
humans and supervised fine-tuning (SFT) prone to catastrophic forgetting,
limiting their adaptability to diverse translation scenarios. This paper
introduces R1-Translator (R1-T1), a novel framework to achieve inference-time
reasoning for general MT via reinforcement learning (RL) with human-aligned
CoTs comprising six common patterns. Our approach pioneers three innovations:
(1) extending reasoning-based translation beyond MT sub-tasks to six languages
and diverse tasks (e.g., legal/medical domain adaptation, idiom resolution);
(2) formalizing six expert-curated CoT templates that mirror hybrid human
strategies like context-aware paraphrasing and back translation; and (3)
enabling self-evolving CoT discovery and anti-forgetting adaptation through RL
with KL-constrained rewards. Experimental results indicate a steady translation
performance improvement across 21 languages and 80 translation directions on
the Flores-101 test set, especially for the 15 languages unseen during training, with
its general multilingual abilities preserved compared with plain SFT. | 7 | 67c14390d7ffcd1cab1fc479 | null | null |
|
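The KL-constrained reward in innovation (3) is only named in the abstract; a minimal sketch of one common formulation follows, where a translation-quality reward is penalized by the policy's divergence from a frozen reference model (the anti-forgetting term). The function name, `beta`, and token-level averaging are assumptions, not the paper's exact recipe.

```python
import torch
import torch.nn.functional as F

def kl_constrained_reward(task_reward, policy_logits, ref_logits, beta=0.05):
    """Sequence reward = translation-quality reward minus a KL penalty that
    keeps the RL policy close to the reference model (anti-forgetting)."""
    logp = F.log_softmax(policy_logits, dim=-1)
    ref_logp = F.log_softmax(ref_logits, dim=-1)
    kl = (logp.exp() * (logp - ref_logp)).sum(-1).mean()  # mean per-token KL
    return task_reward - beta * kl.item()

# toy check: random logits for a 10-token sequence over a vocab of 100
policy = torch.randn(10, 100)
print(kl_constrained_reward(0.8, policy, policy + 0.1 * torch.randn(10, 100)))
```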
2025-02-27T23:34:45.416000 | UniTok: A Unified Tokenizer for Visual Generation and Understanding | 2 | {
"_id": "6344dcb1cd37e44d9ed46508",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6344dcb1cd37e44d9ed46508/J92UKSxKR3iziD2WJfih4.jpeg",
"followerCount": 7,
"fullname": "Yi Jiang",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "JiangYi",
"type": "user"
} | true | null | 2502.20321 | [
{
"_id": "67c13c68d8247a49b808fdac",
"hidden": false,
"name": "Chuofan Ma",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-28T12:31:29.232Z",
"user": {
"_id": "62c585eb09baf76938a70de8",
"avatarUrl": "/avatars/ae8cca53710b3325bf0dd0f08c2b1bbf.svg",
"fullname": "Chuofan Ma",
"isPro": false,
"type": "user",
"user": "cfma"
}
},
{
"_id": "67c13c68d8247a49b808fdad",
"hidden": false,
"name": "Yi Jiang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-03-02T20:18:07.566Z",
"user": {
"_id": "6344dcb1cd37e44d9ed46508",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6344dcb1cd37e44d9ed46508/J92UKSxKR3iziD2WJfih4.jpeg",
"fullname": "Yi Jiang",
"isPro": false,
"type": "user",
"user": "JiangYi"
}
},
{
"_id": "67c13c68d8247a49b808fdae",
"hidden": false,
"name": "Junfeng Wu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-28T12:31:35.467Z",
"user": {
"_id": "6572ac949a5c2d6df9fab3c5",
"avatarUrl": "/avatars/b5d70a86a452198381eee1c8f513ceec.svg",
"fullname": "Junfeng Wu",
"isPro": false,
"type": "user",
"user": "JunfengWu"
}
},
{
"_id": "67c13c68d8247a49b808fdaf",
"hidden": false,
"name": "Jihan Yang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-28T12:31:41.522Z",
"user": {
"_id": "6304baf041387c7f1177a5d2",
"avatarUrl": "/avatars/795c63f2394080eec78ca7981d4a1f78.svg",
"fullname": "Jihan Yang",
"isPro": false,
"type": "user",
"user": "jihanyang"
}
},
{
"_id": "67c13c68d8247a49b808fdb0",
"hidden": false,
"name": "Xin Yu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c13c68d8247a49b808fdb1",
"hidden": false,
"name": "Zehuan Yuan",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-28T12:32:06.178Z",
"user": {
"_id": "661a80af3557013b638061d5",
"avatarUrl": "/avatars/4c551aeb223e257a5fc45b5b6c7ded49.svg",
"fullname": "Zehuan Yuan",
"isPro": false,
"type": "user",
"user": "sweetrabor"
}
},
{
"_id": "67c13c68d8247a49b808fdb2",
"hidden": false,
"name": "Bingyue Peng",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c13c68d8247a49b808fdb3",
"hidden": false,
"name": "Xiaojuan Qi",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-27T17:47:01 | UniTok: A Unified Tokenizer for Visual Generation and Understanding | The representation disparity between visual generation and understanding
imposes a critical gap in integrating these capabilities into a single
framework. To bridge this gap, we introduce UniTok, a discrete visual tokenizer
that encodes fine-grained details for generation while also capturing
high-level semantics for understanding. Although recent studies have shown that
these objectives can induce loss conflicts in training, we reveal that the
underlying bottleneck stems from the limited representational capacity of
discrete tokens. We address this by introducing multi-codebook quantization,
which splits vector quantization across several independent sub-codebooks to
expand the latent feature space while avoiding the training instability caused by
overlarge codebooks. Our method significantly raises the upper limit of unified
discrete tokenizers to match or even surpass domain-specific continuous
tokenizers. For instance, UniTok achieves a remarkable rFID of 0.38 (versus
0.87 for SD-VAE) and a zero-shot accuracy of 78.6% (versus 76.2% for CLIP) on
ImageNet. Our code is available at https://github.com/FoundationVision/UniTok. | 25 | 67c13c6ad8247a49b8090003 | null | null |
|
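To make multi-codebook quantization concrete, here is a minimal sketch: each latent vector is split into chunks and every chunk is nearest-neighbour quantized against its own small sub-codebook, so the effective vocabulary grows multiplicatively without any single overlarge codebook. Shapes and codebook sizes below are illustrative.

```python
import torch

def multi_codebook_quantize(z, codebooks):
    """Split each latent into len(codebooks) chunks and quantize each chunk
    against its own small codebook (nearest neighbour), then concatenate."""
    chunks = z.chunk(len(codebooks), dim=-1)
    out, idxs = [], []
    for chunk, cb in zip(chunks, codebooks):      # cb: (K, d_chunk)
        dist = torch.cdist(chunk, cb)             # (N, K) pairwise distances
        idx = dist.argmin(dim=-1)                 # nearest code per latent
        out.append(cb[idx])
        idxs.append(idx)
    return torch.cat(out, dim=-1), torch.stack(idxs, dim=-1)

z = torch.randn(4, 64)                            # 4 latents of dim 64
books = [torch.randn(256, 16) for _ in range(4)]  # 4 independent sub-codebooks
q, codes = multi_codebook_quantize(z, books)
print(q.shape, codes.shape)                       # (4, 64) (4, 4)
```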
2025-02-27T23:04:14.619000 | CODESYNC: Synchronizing Large Language Models with Dynamic Code Evolution at Scale | 2 | {
"_id": "643be8879f5d314db2d9ed23",
"avatarUrl": "/avatars/64e9bb2c4e10fbe03e2b81afedf40865.svg",
"followerCount": 4,
"fullname": "Chen Dongping",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "shuaishuaicdp",
"type": "user"
} | false | null | 2502.16645 | [
{
"_id": "67c12e60d8247a49b805694f",
"hidden": false,
"name": "Chenlong Wang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-28T12:29:50.564Z",
"user": {
"_id": "6441270ead24e9b2cfbc45e0",
"avatarUrl": "/avatars/92eab1ae50efaaee070674ae20244fc0.svg",
"fullname": "Wang Chenlong",
"isPro": false,
"type": "user",
"user": "Wildxxxxx75"
}
},
{
"_id": "67c12e60d8247a49b8056950",
"hidden": false,
"name": "Zhaoyang Chu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-28T12:29:56.482Z",
"user": {
"_id": "64fb128552e82dd432682b06",
"avatarUrl": "/avatars/c141326a5d8c17d35be40e12579810bb.svg",
"fullname": "Zhaoyang Chu",
"isPro": false,
"type": "user",
"user": "chuzy"
}
},
{
"_id": "67c12e60d8247a49b8056951",
"hidden": false,
"name": "Zhengxiang Cheng",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-28T09:28:33.569Z",
"user": {
"_id": "669096da35cddb688a352ca8",
"avatarUrl": "/avatars/d01f34d99d89447d27c0fd43734ae6d9.svg",
"fullname": "zxiang",
"isPro": false,
"type": "user",
"user": "zx10086"
}
},
{
"_id": "67c12e60d8247a49b8056952",
"hidden": false,
"name": "Xuyi Yang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-28T09:28:31.564Z",
"user": {
"_id": "6743e9d4303e7ce5b9d13e9b",
"avatarUrl": "/avatars/cdaf150380e9c8916547185b968a2670.svg",
"fullname": "xy",
"isPro": false,
"type": "user",
"user": "yxy0807"
}
},
{
"_id": "67c12e60d8247a49b8056953",
"hidden": false,
"name": "Kaiyue Qiu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c12e60d8247a49b8056954",
"hidden": false,
"name": "Yao Wan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c12e60d8247a49b8056955",
"hidden": false,
"name": "Zhou Zhao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c12e60d8247a49b8056956",
"hidden": false,
"name": "Xuanhua Shi",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c12e60d8247a49b8056957",
"hidden": false,
"name": "Dongping Chen",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-28T12:30:19.705Z",
"user": {
"_id": "65e2be1e630e2db23829ee8d",
"avatarUrl": "/avatars/294f9ba909037f03669dc0bb80cabfe3.svg",
"fullname": "Dongping Chen",
"isPro": false,
"type": "user",
"user": "fjchendp"
}
}
] | 2025-02-23T16:46:18 | CODESYNC: Synchronizing Large Language Models with Dynamic Code
Evolution at Scale | Large Language Models (LLMs) have exhibited exceptional performance in
software engineering yet face challenges in adapting to continually evolving
code knowledge, particularly regarding the frequent updates of third-party
library APIs. This limitation, stemming from static pre-training datasets,
often results in non-executable code or implementations with suboptimal safety
and efficiency. To address this, this paper introduces CODESYNC, a data engine for
identifying outdated code patterns and collecting real-time code knowledge
updates from Python third-party libraries. Building upon CODESYNC, we develop
CODESYNCBENCH, a comprehensive benchmark for assessing LLMs' ability to stay
synchronized with code evolution, which covers real-world updates for 220 APIs
from six Python libraries. Our benchmark offers 3,300 test cases across three
evaluation tasks and an update-aware instruction tuning dataset consisting of
2,200 training samples. Extensive experiments on 14 state-of-the-art LLMs
reveal that they struggle with dynamic code evolution, even with the support of
advanced knowledge updating methods (e.g., DPO, ORPO, and SimPO). We believe
that our benchmark can offer a strong foundation for the development of more
effective methods for real-time code knowledge updating in the future. The
experimental code and dataset are publicly available at:
https://github.com/Lucky-voyage/Code-Sync. | 19 | 67c12e61d8247a49b805698f | null | null |
|
2025-02-27T22:38:04.562000 | SoRFT: Issue Resolving with Subtask-oriented Reinforced Fine-Tuning | 2 | {
"_id": "654da66fb36f85a025bc24b6",
"avatarUrl": "/avatars/e5542856ab4bf1845e8f546b5f17cd99.svg",
"followerCount": 1,
"fullname": "Zexiong Ma",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "mizersy",
"type": "user"
} | true | null | 2502.20127 | [
{
"_id": "67c12de08cd49ca63e230b99",
"hidden": false,
"name": "Zexiong Ma",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-28T09:28:35.503Z",
"user": {
"_id": "654da66fb36f85a025bc24b6",
"avatarUrl": "/avatars/e5542856ab4bf1845e8f546b5f17cd99.svg",
"fullname": "Zexiong Ma",
"isPro": false,
"type": "user",
"user": "mizersy"
}
},
{
"_id": "67c12de08cd49ca63e230b9a",
"hidden": false,
"name": "Chao Peng",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-28T15:13:55.041Z",
"user": {
"_id": "64425a502f4abae43fc0446c",
"avatarUrl": "/avatars/7448a8d024813d8a20e09c162a189304.svg",
"fullname": "Chao Peng",
"isPro": false,
"type": "user",
"user": "pengchao"
}
},
{
"_id": "67c12de08cd49ca63e230b9b",
"hidden": false,
"name": "Pengfei Gao",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-28T12:40:45.303Z",
"user": {
"_id": "663f2dde4aeb9c177297fbd8",
"avatarUrl": "/avatars/ddc143481d2af893c9cdff1a33ccda28.svg",
"fullname": "PengFei",
"isPro": false,
"type": "user",
"user": "PengFeiGao"
}
},
{
"_id": "67c12de08cd49ca63e230b9c",
"hidden": false,
"name": "Xiangxin Meng",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c12de08cd49ca63e230b9d",
"hidden": false,
"name": "Yanzhen Zou",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c12de08cd49ca63e230b9e",
"hidden": false,
"name": "Bing Xie",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-27T14:19:45 | SoRFT: Issue Resolving with Subtask-oriented Reinforced Fine-Tuning | Mainstream issue-resolving frameworks predominantly rely on commercial
models, leading to high costs and privacy concerns. Existing training
approaches for issue resolving struggle with poor generalization and fail to
fully leverage open-source development resources. We propose Subtask-oriented
Reinforced Fine-Tuning (SoRFT), a novel training approach to enhance the issue
resolving capability of LLMs. We decompose issue resolving into structured
subtasks: file localization, function localization, line localization, and code
edit generation. SoRFT consists of two training stages: (1) rejection-sampled
supervised fine-tuning, in which Chain of Thought (CoT) data is filtered
against the ground truth before fine-tuning the LLM, and (2) rule-based
reinforcement learning, which leverages PPO with ground-truth-based rewards. We
evaluate the
SoRFT-trained model on SWE-Bench Verified and SWE-Bench Lite, achieving
state-of-the-art (SOTA) performance among open-source models (e.g., resolve
21.4% issues on SWE-Bench Verified with SoRFT-Qwen-7B). The experimental
results demonstrate that SoRFT significantly enhances issue-resolving
performance, improves model generalization, and provides a cost-efficient
alternative to commercial models. | 9 | 67c12de08cd49ca63e230bd1 | null | null |
|
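A minimal sketch of stage (1), rejection-sampled SFT, under the assumption that each subtask has a checkable ground truth (e.g., the file a gold patch touches); `sample_fn` and `check_fn` are hypothetical placeholders for the LLM sampler and the rule check.

```python
def rejection_sample_sft(examples, sample_fn, check_fn, k=8):
    """Sample k CoT responses per issue, keep only those whose final answer
    matches the ground truth, and return survivors as SFT data (sketch)."""
    sft_data = []
    for prompt, gold in examples:
        for response in (sample_fn(prompt) for _ in range(k)):
            if check_fn(response, gold):          # e.g. localized file matches
                sft_data.append({"prompt": prompt, "response": response})
    return sft_data

data = rejection_sample_sft(
    [("locate the buggy file for issue #1", "src/utils.py")],
    sample_fn=lambda p: "reasoning ... answer: src/utils.py",
    check_fn=lambda r, gold: r.strip().endswith(gold),
    k=4,
)
print(len(data))  # 4, since the dummy sampler always matches the ground truth
```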
2025-02-27T22:27:24.486000 | R2-T2: Re-Routing in Test-Time for Multimodal Mixture-of-Experts | 5 | {
"_id": "647f5af5b0e96764589f3b2a",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/VJ4cDyjp5M3V5WmI5gPIU.jpeg",
"followerCount": 12,
"fullname": "Tianyi Zhou",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "zhoutianyi",
"type": "user"
} | true | [
"https://cdn-uploads.huggingface.co/production/uploads/647f5af5b0e96764589f3b2a/PaZkWIhqZBRCSfBA-k4OX.png",
"https://cdn-uploads.huggingface.co/production/uploads/647f5af5b0e96764589f3b2a/FASlyPDiSb9VHZaeWMj9H.png",
"https://cdn-uploads.huggingface.co/production/uploads/647f5af5b0e96764589f3b2a/kGeIJVMDDAbIassiuYIb2.png",
"https://cdn-uploads.huggingface.co/production/uploads/647f5af5b0e96764589f3b2a/Tw2Bf_RsFTPARKLJWIlKM.png"
] | 2502.20395 | [
{
"_id": "67c12b5def9af74902537b98",
"hidden": false,
"name": "Zhongyang Li",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-28T12:14:22.809Z",
"user": {
"_id": "671002fd13203512e7b8f9e3",
"avatarUrl": "/avatars/313d8ea313ed300750cfdaaca44fdb6e.svg",
"fullname": "Zhongyang Li",
"isPro": false,
"type": "user",
"user": "Lzy01241010"
}
},
{
"_id": "67c12b5def9af74902537b99",
"hidden": false,
"name": "Ziyue Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c12b5def9af74902537b9a",
"hidden": false,
"name": "Tianyi Zhou",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-28T12:14:16.482Z",
"user": {
"_id": "647f5af5b0e96764589f3b2a",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/VJ4cDyjp5M3V5WmI5gPIU.jpeg",
"fullname": "Tianyi Zhou",
"isPro": false,
"type": "user",
"user": "zhoutianyi"
}
}
] | 2025-02-27T18:59:32 | R2-T2: Re-Routing in Test-Time for Multimodal Mixture-of-Experts | In large multimodal models (LMMs), the perception of non-language modalities
(e.g., visual representations) is usually not on par with the powerful
reasoning capabilities of large language models (LLMs), limiting LMMs'
performance on challenging downstream tasks. This weakness has recently been
mitigated by
replacing the vision encoder with a mixture-of-experts (MoE), which provides
rich, multi-granularity, and diverse representations required by diverse
downstream tasks. The performance of multimodal MoE largely depends on its
router, which reweights and mixes the representations of different experts for
each input. However, we find that the end-to-end trained router does not always
produce the optimal routing weights for every test sample. To bridge the gap,
we propose a novel and efficient method, "Re-Routing in Test-Time" (R2-T2), that
locally optimizes the vector of routing weights at test time by moving it
toward those vectors of the correctly predicted samples in a neighborhood of
the test sample. We propose three R2-T2 strategies with different optimization
objectives and neighbor-search spaces. R2-T2 consistently and greatly improves
state-of-the-art LMMs' performance on challenging benchmarks of diverse tasks,
without training any base-model parameters. | 40 | 67c12b5eef9af74902537c00 | null | null |
|
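A gradient-free sketch of the core idea: pull the test sample's routing-weight vector toward the routing weights of its nearest correctly-predicted neighbors. The paper proposes three strategies with different objectives and neighbor-search spaces; the simple interpolation and `alpha` below are illustrative.

```python
import numpy as np

def reroute_weights(r_test, ref_embs, ref_weights, test_emb, k=3, alpha=0.5):
    """Nudge the test sample's routing weights toward the routing weights of
    its k nearest correctly-predicted reference samples (illustrative variant)."""
    dists = np.linalg.norm(ref_embs - test_emb, axis=1)
    nn = np.argsort(dists)[:k]                    # k nearest references
    target = ref_weights[nn].mean(axis=0)
    r_new = (1 - alpha) * r_test + alpha * target
    return r_new / r_new.sum()                    # keep a valid expert mixture

r = np.array([0.7, 0.2, 0.1])                     # router output, 3 experts
refs = np.random.rand(100, 8)                     # reference sample embeddings
ref_w = np.random.dirichlet(np.ones(3), size=100) # their routing weights
print(reroute_weights(r, refs, ref_w, np.random.rand(8)))
```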
2025-02-27T22:22:53.713000 | LongRoPE2: Near-Lossless LLM Context Window Scaling | 2 | {
"_id": "62b0009c72043b05d29492b2",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/62b0009c72043b05d29492b2/NqRkX2YLhlfOLvYysa7dD.png",
"followerCount": 27,
"fullname": "Li Lyna Zhang",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "lynazhang",
"type": "user"
} | true | null | 2502.20082 | [
{
"_id": "67c12b6d25c74ee5b6e2ce8e",
"hidden": false,
"name": "Ning Shang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-28T12:28:26.117Z",
"user": {
"_id": "632bc663eafe8eca5e9bfdbc",
"avatarUrl": "/avatars/787553c73e9a96adc5219e67acd29c00.svg",
"fullname": "Ning Shang",
"isPro": false,
"type": "user",
"user": "J-shang"
}
},
{
"_id": "67c12b6d25c74ee5b6e2ce8f",
"hidden": false,
"name": "Li Lyna Zhang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-28T12:26:58.131Z",
"user": {
"_id": "62b0009c72043b05d29492b2",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/62b0009c72043b05d29492b2/NqRkX2YLhlfOLvYysa7dD.png",
"fullname": "Li Lyna Zhang",
"isPro": false,
"type": "user",
"user": "lynazhang"
}
},
{
"_id": "67c12b6d25c74ee5b6e2ce90",
"hidden": false,
"name": "Siyuan Wang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-28T12:14:20.687Z",
"user": {
"_id": "6495b0b844bc2e9ce6cc849b",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/j6aucl_tefMHwtD-bdUAw.jpeg",
"fullname": "Siyuan Wang",
"isPro": false,
"type": "user",
"user": "OldKingMeister"
}
},
{
"_id": "67c12b6d25c74ee5b6e2ce91",
"hidden": false,
"name": "Gaokai Zhang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-28T12:28:05.040Z",
"user": {
"_id": "65efe691ccef3501d586bb62",
"avatarUrl": "/avatars/c4716c532754b487359e77e43afe09bc.svg",
"fullname": "Gaokai Zhang",
"isPro": false,
"type": "user",
"user": "gaokaiz2"
}
},
{
"_id": "67c12b6d25c74ee5b6e2ce92",
"hidden": false,
"name": "Gilsinia Lopez",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-28T12:27:52.164Z",
"user": {
"_id": "60c790f1accf7da31ed8240d",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/60c790f1accf7da31ed8240d/YDohCmgf9OUeWqZIs3Thh.jpeg",
"fullname": "Gilsinia Lopez",
"isPro": false,
"type": "user",
"user": "lgg"
}
},
{
"_id": "67c12b6d25c74ee5b6e2ce93",
"hidden": false,
"name": "Fan Yang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c12b6d25c74ee5b6e2ce94",
"hidden": false,
"name": "Weizhu Chen",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-28T12:27:17.122Z",
"user": {
"_id": "64da876370446182be5b608d",
"avatarUrl": "/avatars/e412fdc71404ecdf638e416846e3ebfb.svg",
"fullname": "Weizhu Chen",
"isPro": false,
"type": "user",
"user": "chenweizhu"
}
},
{
"_id": "67c12b6d25c74ee5b6e2ce95",
"hidden": false,
"name": "Mao Yang",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-27T13:41:07 | LongRoPE2: Near-Lossless LLM Context Window Scaling | LongRoPE2 is a novel approach that extends the effective context window of
pre-trained large language models (LLMs) to the target length, while preserving
the performance on the original shorter context window. This is achieved by
three contributions: (1) a hypothesis that insufficient training in higher RoPE
dimensions contributes to the persistent out-of-distribution (OOD) issues
observed in existing methods; (2) an effective RoPE rescaling algorithm that
adopts evolutionary search guided by "needle-driven" perplexity to address the
insufficient training problem; (3) a mixed context window training approach
that fine-tunes model weights to adopt rescaled RoPE for long-context sequences
while preserving the short-context performance with the original RoPE.
Extensive experiments on LLaMA3-8B and Phi3-mini-3.8B across various benchmarks
validate the hypothesis and demonstrate the effectiveness of LongRoPE2.
Remarkably, LongRoPE2 extends LLaMA3-8B to achieve a 128K effective context
length while retaining over 98.5% of short-context performance, using only 10B
tokens -- 80x fewer than Meta's approach, which fails to reach the target
effective context length. Code will be available at
https://github.com/microsoft/LongRoPE. | 29 | 67c12b6e25c74ee5b6e2ceb5 | null | null |
|
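A minimal sketch of the search space being described: RoPE inverse frequencies with per-dimension rescale factors, which an evolutionary search could then tune against "needle-driven" perplexity. The specific factors below are made up for illustration.

```python
import numpy as np

def rescaled_rope_freqs(dim=128, base=10000.0, scales=None):
    """RoPE inverse frequencies with per-dimension rescale factors -- the kind
    of parameterization a search could tune (factors here are illustrative;
    LongRoPE2 searches them against needle-driven perplexity)."""
    inv_freq = 1.0 / (base ** (np.arange(0, dim, 2) / dim))
    if scales is not None:
        inv_freq = inv_freq / scales              # stretch chosen dimensions
    return inv_freq

# e.g. leave well-trained low dims alone, stretch higher (OOD-prone) dims
scales = np.where(np.arange(64) < 16, 1.0, 8.0)
freqs = rescaled_rope_freqs(scales=scales)
print(freqs[:4], freqs[-2:])                      # low dims intact, high dims slowed
```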
2025-02-27T22:15:54.222000 | Self-rewarding correction for mathematical reasoning | 6 | {
"_id": "643e59806db6ba8c5ee123f3",
"avatarUrl": "/avatars/4052f2a250107f43b3634c3ee3cc30a1.svg",
"followerCount": 16,
"fullname": "Wei Xiong",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "weqweasdas",
"type": "user"
} | false | null | 2502.19613 | [
{
"_id": "67c12987505a88e4a185e0d7",
"hidden": false,
"name": "Wei Xiong",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c12987505a88e4a185e0d8",
"hidden": false,
"name": "Hanning Zhang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-28T12:22:33.128Z",
"user": {
"_id": "6470e0f1cfd57849519033a5",
"avatarUrl": "/avatars/7ffefee3e36a4e37b9f4510bc6b689d1.svg",
"fullname": "Hanning Zhang",
"isPro": false,
"type": "user",
"user": "HanningZhang"
}
},
{
"_id": "67c12987505a88e4a185e0d9",
"hidden": false,
"name": "Chenlu Ye",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-28T12:22:38.981Z",
"user": {
"_id": "65eec5c1d7d63c2ed0615421",
"avatarUrl": "/avatars/8c32f5e7d4b1940088bdec73c0b86fab.svg",
"fullname": "Chenlu Ye",
"isPro": false,
"type": "user",
"user": "Chenlu123"
}
},
{
"_id": "67c12987505a88e4a185e0da",
"hidden": false,
"name": "Lichang Chen",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-28T12:14:29.479Z",
"user": {
"_id": "62323bb408bcea92917e42ee",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/62323bb408bcea92917e42ee/2vHxkv-oSROtLteOnqa8P.jpeg",
"fullname": "Lichang Chen",
"isPro": false,
"type": "user",
"user": "Lichang-Chen"
}
},
{
"_id": "67c12987505a88e4a185e0db",
"hidden": false,
"name": "Nan Jiang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-28T12:23:02.992Z",
"user": {
"_id": "64b8922ca1827cc8d04ae919",
"avatarUrl": "/avatars/0aaa83e3d09a82434e1d6af724aaa485.svg",
"fullname": "Nan Jiang",
"isPro": false,
"type": "user",
"user": "nanjiang"
}
},
{
"_id": "67c12987505a88e4a185e0dc",
"hidden": false,
"name": "Tong Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-26T23:01:16 | Self-rewarding correction for mathematical reasoning | We study self-rewarding reasoning large language models (LLMs), which can
simultaneously generate step-by-step reasoning and evaluate the correctness of
their outputs at inference time, without external feedback. This
integrated approach allows a single model to independently guide its reasoning
process, offering computational advantages for model deployment. We
particularly focus on the representative task of self-correction, where models
autonomously detect errors in their responses, revise outputs, and decide when
to terminate iterative refinement loops. To enable this, we propose a
two-staged algorithmic framework for constructing self-rewarding reasoning
models using only self-generated data. In the first stage, we employ sequential
rejection sampling to synthesize long chain-of-thought trajectories that
incorporate both self-rewarding and self-correction mechanisms. Fine-tuning
models on these curated data allows them to learn the patterns of
self-rewarding and self-correction. In the second stage, we further enhance the
models' ability to assess response accuracy and refine outputs through
reinforcement learning with rule-based signals. Experiments with Llama-3 and
Qwen-2.5 demonstrate that our approach surpasses intrinsic self-correction
capabilities and achieves performance comparable to systems that rely on
external reward models. | 71 | 67c12989505a88e4a185e115 | null | null |
|
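A sketch of the inference-time behavior such a model is trained for: generate, self-assess, and revise until the self-reward says stop. The prompt strings and the single text-in/text-out `model` interface are assumptions for illustration.

```python
def self_rewarding_generate(model, prompt, max_rounds=3):
    """One model generates an answer, judges it, and either terminates the
    refinement loop or revises its own output (sketch of the inference loop)."""
    answer = model(f"{prompt}\nSolve step by step.")
    for _ in range(max_rounds):
        verdict = model(f"{prompt}\nProposed solution:\n{answer}\n"
                        f"Is this correct? Answer yes or no.")
        if verdict.strip().lower().startswith("yes"):
            break                                  # self-reward accepts: stop
        answer = model(f"{prompt}\nThe previous attempt was wrong:\n{answer}\n"
                       f"Revise it.")
    return answer

# dummy model that always self-approves, just to exercise the loop
print(self_rewarding_generate(lambda p: "yes, x = 42", "Solve for x: x = 6*7"))
```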
2025-02-27T21:19:58.170000 | Adapting Automatic Speech Recognition for Accented Air Traffic Control Communications | 2 | {
"_id": "60a546bdf9b53404e7806278",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1621444268349-noauth.png",
"followerCount": 2,
"fullname": "Prannaya Gupta",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "ThePyProgrammer",
"type": "user"
} | true | null | 2502.20311 | [
{
"_id": "67c11d0bd1f37121ad63acfb",
"hidden": false,
"name": "Marcus Yu Zhe Wee",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c11d0bd1f37121ad63acfc",
"hidden": false,
"name": "Justin Juin Hng Wong",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-28T12:15:29.818Z",
"user": {
"_id": "63b9189c060d6595d2af72a8",
"avatarUrl": "/avatars/dff8c3f215a2e5192baec752c34c5ed0.svg",
"fullname": "Justin Wong",
"isPro": false,
"type": "user",
"user": "amidstdebug"
}
},
{
"_id": "67c11d0bd1f37121ad63acfd",
"hidden": false,
"name": "Lynus Lim",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c11d0bd1f37121ad63acfe",
"hidden": false,
"name": "Joe Yu Wei Tan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c11d0bd1f37121ad63acff",
"hidden": false,
"name": "Prannaya Gupta",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-28T12:15:32.107Z",
"user": {
"_id": "60a546bdf9b53404e7806278",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1621444268349-noauth.png",
"fullname": "Prannaya Gupta",
"isPro": false,
"type": "user",
"user": "ThePyProgrammer"
}
},
{
"_id": "67c11d0bd1f37121ad63ad00",
"hidden": false,
"name": "Dillion Lim",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c11d0bd1f37121ad63ad01",
"hidden": false,
"name": "En Hao Tew",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c11d0bd1f37121ad63ad02",
"hidden": false,
"name": "Aloysius Keng Siew Han",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c11d0bd1f37121ad63ad03",
"hidden": false,
"name": "Yong Zhi Lim",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-27T17:35:59 | Adapting Automatic Speech Recognition for Accented Air Traffic Control
Communications | Effective communication in Air Traffic Control (ATC) is critical to
maintaining aviation safety, yet the challenges posed by accented English
remain largely unaddressed in Automatic Speech Recognition (ASR) systems.
Existing models struggle with transcription accuracy for Southeast
Asian-accented (SEA-accented) speech, particularly in noisy ATC environments.
This study presents the development of ASR models fine-tuned specifically for
Southeast Asian accents using a newly created dataset. Our research delivers
significant improvements, achieving a Word Error Rate (WER) of 0.0982 (9.82%)
on SEA-accented ATC speech. Additionally, the paper highlights the importance
of region-specific datasets and accent-focused training, offering a pathway for
deploying ASR systems in resource-constrained military operations. The findings
emphasize the need for noise-robust training techniques and region-specific
datasets to improve transcription accuracy for non-Western accents in ATC
communications. | 5 | 67c11d0cd1f37121ad63ad24 | null | null |
|
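For reference, the reported 0.0982 is the standard word error rate: word-level edit distance divided by reference length. A self-contained implementation with a toy ATC-style example:

```python
def word_error_rate(ref: str, hyp: str) -> float:
    """Standard WER via word-level Levenshtein distance over the reference."""
    r, h = ref.split(), hyp.split()
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i                               # deletions
    for j in range(len(h) + 1):
        d[0][j] = j                               # insertions
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            sub = d[i - 1][j - 1] + (r[i - 1] != h[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(r)][len(h)] / max(len(r), 1)

print(word_error_rate("cleared to land runway two zero",
                      "cleared to land runway tree zero"))  # 1/6 ≈ 0.167
```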
2025-02-27T21:02:33.864000 | MMKE-Bench: A Multimodal Editing Benchmark for Diverse Visual Knowledge | 2 | {
"_id": "65745569839aa08899ea5d27",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/4X8waDwiphbfKZySrYlFy.jpeg",
"followerCount": 2,
"fullname": "kailinjiang",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "kailinjiang",
"type": "user"
} | false | null | 2502.19870 | [
{
"_id": "67c11908dfcbe8a49cf19952",
"hidden": false,
"name": "Yuntao Du",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c11908dfcbe8a49cf19953",
"hidden": false,
"name": "Kailin Jiang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c11908dfcbe8a49cf19954",
"hidden": false,
"name": "Zhi Gao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c11908dfcbe8a49cf19955",
"hidden": false,
"name": "Chenrui Shi",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c11908dfcbe8a49cf19956",
"hidden": false,
"name": "Zilong Zheng",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c11908dfcbe8a49cf19957",
"hidden": false,
"name": "Siyuan Qi",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c11908dfcbe8a49cf19958",
"hidden": false,
"name": "Qing Li",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-27T08:21:28 | MMKE-Bench: A Multimodal Editing Benchmark for Diverse Visual Knowledge | Knowledge editing techniques have emerged as essential tools for updating the
factual knowledge of large language models (LLMs) and multimodal models (LMMs),
allowing them to correct outdated or inaccurate information without retraining
from scratch. However, existing benchmarks for multimodal knowledge editing
primarily focus on entity-level knowledge represented as simple triplets, which
fail to capture the complexity of real-world multimodal information. To address
this issue, we introduce MMKE-Bench, a comprehensive MultiModal Knowledge
Editing Benchmark, designed to evaluate the ability of LMMs to edit diverse
visual knowledge in real-world scenarios. MMKE-Bench addresses these
limitations by incorporating three types of editing tasks: visual entity
editing, visual semantic editing, and user-specific editing. In addition,
MMKE-Bench uses free-form natural language to represent and edit knowledge,
offering a more flexible and effective format. The benchmark consists of 2,940
pieces of knowledge and 8,363 images across 33 broad categories, with
evaluation questions automatically generated and human-verified. We assess five
state-of-the-art knowledge editing methods on three prominent LMMs, revealing
that no method excels across all criteria, and that visual and user-specific
edits are particularly challenging. MMKE-Bench sets a new standard for
evaluating the robustness of multimodal knowledge editing techniques, driving
progress in this rapidly evolving field. | 3 | 67c1190cdfcbe8a49cf19aac | null | null |
|
2025-02-27T14:03:36.365000 | Towards Optimal Multi-draft Speculative Decoding | 2 | {
"_id": "6623ea65b642e29cdf90a1b4",
"avatarUrl": "/avatars/e32e90574c1162b2be87ed78604e3e4d.svg",
"followerCount": 1,
"fullname": "TongZheng",
"isHf": false,
"isMod": false,
"isPro": true,
"name": "TongZheng1999",
"type": "user"
} | true | null | 2502.18779 | [
{
"_id": "67c0b4d0cda310c08781e820",
"hidden": false,
"name": "Zhengmian Hu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c0b4d0cda310c08781e821",
"hidden": false,
"name": "Tong Zheng",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-27T22:08:55.900Z",
"user": {
"_id": "6623ea65b642e29cdf90a1b4",
"avatarUrl": "/avatars/e32e90574c1162b2be87ed78604e3e4d.svg",
"fullname": "TongZheng",
"isPro": true,
"type": "user",
"user": "TongZheng1999"
}
},
{
"_id": "67c0b4d0cda310c08781e822",
"hidden": false,
"name": "Vignesh Viswanathan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c0b4d0cda310c08781e823",
"hidden": false,
"name": "Ziyi Chen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c0b4d0cda310c08781e824",
"hidden": false,
"name": "Ryan A. Rossi",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c0b4d0cda310c08781e825",
"hidden": false,
"name": "Yihan Wu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c0b4d0cda310c08781e826",
"hidden": false,
"name": "Dinesh Manocha",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c0b4d0cda310c08781e827",
"hidden": false,
"name": "Heng Huang",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-26T03:22:44 | Towards Optimal Multi-draft Speculative Decoding | Large Language Models (LLMs) have become an indispensable part of natural
language processing tasks. However, autoregressive sampling remains an
efficiency bottleneck. Multi-Draft Speculative Decoding (MDSD) is a recent
approach where, when generating each token, a small draft model generates
multiple drafts, and the target LLM verifies them in parallel, ensuring that
the final output conforms to the target model distribution. The two main design
choices in MDSD are the draft sampling method and the verification algorithm.
For a fixed draft sampling method, the optimal acceptance rate is a solution to
an optimal transport problem, but the complexity of this problem makes it
difficult to solve for the optimal acceptance rate and measure the gap between
existing verification algorithms and the theoretical upper bound. This paper
discusses the dual of the optimal transport problem, providing a way to
efficiently compute the optimal acceptance rate. For the first time, we measure
the theoretical upper bound of MDSD efficiency for vocabulary sizes in the
thousands and quantify the gap between existing verification algorithms and
this bound. We also compare different draft sampling methods based on their
optimal acceptance rates. Our results show that the draft sampling method
strongly influences the optimal acceptance rate, with sampling without
replacement outperforming sampling with replacement. Additionally, existing
verification algorithms do not reach the theoretical upper bound for both
without replacement and with replacement sampling. Our findings suggest that
carefully designed draft sampling methods can potentially improve the optimal
acceptance rate and enable the development of verification algorithms that
closely match the theoretical upper bound. | 4 | 67c0b4d1cda310c08781e864 | null | null |
|
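For grounding, here is the exact K=1 special case (vanilla speculative sampling) that MDSD generalizes: accept the draft token with probability min(1, p/q), otherwise sample the normalized residual, which preserves the target distribution; its acceptance rate is sum_x min(p(x), q(x)). With multiple drafts, the optimal acceptance rate instead becomes the optimal transport problem the paper analyzes.

```python
import numpy as np

def speculative_step(p, q, rng):
    """Exact single-draft speculative sampling for one token: accept with
    prob min(1, p/q), else sample from the residual max(0, p - q) normalized.
    The returned token is distributed exactly as p."""
    x = rng.choice(len(q), p=q)                    # draft model's proposal
    if rng.random() < min(1.0, p[x] / q[x]):
        return x, True
    resid = np.maximum(p - q, 0.0)
    return rng.choice(len(p), p=resid / resid.sum()), False

rng = np.random.default_rng(0)
p = np.array([0.5, 0.3, 0.2])                      # target distribution
q = np.array([0.6, 0.3, 0.1])                      # draft distribution
accepts = sum(speculative_step(p, q, rng)[1] for _ in range(10_000))
print(accepts / 10_000)  # approx sum_x min(p, q) = 0.9 acceptance rate
```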
2025-02-27T11:09:15.703000 | FSPO: Few-Shot Preference Optimization of Synthetic Preference Data in LLMs Elicits Effective Personalization to Real Users | 2 | {
"_id": "6511ee845b7e52b0251fdee9",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6511ee845b7e52b0251fdee9/hTIwiIYBGOVnIrxtpri83.png",
"followerCount": 4,
"fullname": "Anikait Singh",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "Asap7772",
"type": "user"
} | true | null | 2502.19312 | [
{
"_id": "67c01972d63ea6742473aa2a",
"hidden": false,
"name": "Anikait Singh",
"status": "extracted_pending",
"statusLastChangedAt": "2025-02-27T07:51:17.284Z",
"user": {
"_id": "6511ee845b7e52b0251fdee9",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6511ee845b7e52b0251fdee9/hTIwiIYBGOVnIrxtpri83.png",
"fullname": "Anikait Singh",
"isPro": false,
"type": "user",
"user": "Asap7772"
}
},
{
"_id": "67c01972d63ea6742473aa2b",
"hidden": false,
"name": "Sheryl Hsu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c01972d63ea6742473aa2c",
"hidden": false,
"name": "Kyle Hsu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c01972d63ea6742473aa2d",
"hidden": false,
"name": "Eric Mitchell",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c01972d63ea6742473aa2e",
"hidden": false,
"name": "Stefano Ermon",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c01972d63ea6742473aa2f",
"hidden": false,
"name": "Tatsunori Hashimoto",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c01972d63ea6742473aa30",
"hidden": false,
"name": "Archit Sharma",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c01972d63ea6742473aa31",
"hidden": false,
"name": "Chelsea Finn",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-26T17:08:46 | FSPO: Few-Shot Preference Optimization of Synthetic Preference Data in
LLMs Elicits Effective Personalization to Real Users | Effective personalization of LLMs is critical for a broad range of
user-interfacing applications such as virtual assistants and content curation.
Inspired by the strong in-context learning capabilities of LLMs, we propose
Few-Shot Preference Optimization (FSPO), which reframes reward modeling as a
meta-learning problem. Under this framework, an LLM learns to quickly adapt to
a user via a few labeled preferences from that user, constructing a
personalized reward function for them. Additionally, since real-world
preference data is scarce and challenging to collect at scale, we propose
careful design choices to construct synthetic preference datasets for
personalization, generating over 1M synthetic personalized preferences using
publicly available LLMs. In particular, to successfully transfer from synthetic
data to real users, we find it crucial for the data to exhibit both high
diversity and coherent, self-consistent structure. We evaluate FSPO on
personalized open-ended generation for up to 1,500 synthetic users across
three domains: movie reviews, pedagogical adaptation based on
educational background, and general question answering, along with a controlled
human study. Overall, FSPO achieves an 87% Alpaca Eval winrate on average in
generating responses that are personalized to synthetic users and a 72% winrate
with real human users in open-ended question answering. | 5 | 67c01975d63ea6742473aa52 | null | null |
|
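A minimal sketch of the meta-learning setup at inference time: condition the LLM on a few of the user's labeled preference pairs so it acts as a personalized reward model for a new pair. The prompt template below is illustrative, not the paper's.

```python
def build_fspo_prompt(user_prefs, query):
    """Assemble a few-shot reward-modeling prompt: a handful of the user's
    labeled preference pairs, then the new pair to judge (illustrative format)."""
    shots = "\n\n".join(
        f"Prompt: {p}\nResponse A: {a}\nResponse B: {b}\nUser prefers: {label}"
        for p, a, b, label in user_prefs
    )
    q, a, b = query
    return (f"{shots}\n\nPrompt: {q}\nResponse A: {a}\n"
            f"Response B: {b}\nUser prefers:")

prefs = [("Recommend a movie", "A gritty noir thriller", "A light rom-com", "A")]
print(build_fspo_prompt(prefs, ("Suggest a book", "Noir crime novel", "Cozy romance")))
```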
2025-02-27T10:12:03.128000 | Drop-Upcycling: Training Sparse Mixture of Experts with Partial Re-initialization | 3 | {
"_id": "6308c49c454dc257521bc7f9",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6308c49c454dc257521bc7f9/UWUS6OPa6OpVu1T0gd-wJ.jpeg",
"followerCount": 19,
"fullname": "Taishi",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "Taishi-N324",
"type": "user"
} | true | null | 2502.19261 | [
{
"_id": "67c07170af68756abc571ab8",
"hidden": false,
"name": "Taishi Nakamura",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-27T22:09:06.783Z",
"user": {
"_id": "6308c49c454dc257521bc7f9",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6308c49c454dc257521bc7f9/UWUS6OPa6OpVu1T0gd-wJ.jpeg",
"fullname": "Taishi",
"isPro": false,
"type": "user",
"user": "Taishi-N324"
}
},
{
"_id": "67c07170af68756abc571ab9",
"hidden": false,
"name": "Takuya Akiba",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-28T12:15:52.711Z",
"user": {
"_id": "6482810dba6c556892f6f257",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6482810dba6c556892f6f257/c7-wiVKenXiRtwnRpnjZN.jpeg",
"fullname": "Takuya Akiba",
"isPro": false,
"type": "user",
"user": "iwiwi"
}
},
{
"_id": "67c07170af68756abc571aba",
"hidden": false,
"name": "Kazuki Fujii",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c07170af68756abc571abb",
"hidden": false,
"name": "Yusuke Oda",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c07170af68756abc571abc",
"hidden": false,
"name": "Rio Yokota",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c07170af68756abc571abd",
"hidden": false,
"name": "Jun Suzuki",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-26T16:06:36 | Drop-Upcycling: Training Sparse Mixture of Experts with Partial
Re-initialization | The Mixture of Experts (MoE) architecture reduces the training and inference
cost significantly compared to a dense model of equivalent capacity. Upcycling
is an approach that initializes and trains an MoE model using a pre-trained
dense model. While upcycling leads to initial performance gains, the training
progresses more slowly than when training from scratch, leading to suboptimal
performance in the long term. We propose Drop-Upcycling - a method that
effectively addresses this problem. Drop-Upcycling combines two seemingly
contradictory approaches: utilizing the knowledge of pre-trained dense models
while statistically re-initializing some parts of the weights. This approach
strategically promotes expert specialization, significantly enhancing the MoE
model's efficiency in knowledge acquisition. Extensive large-scale experiments
demonstrate that Drop-Upcycling significantly outperforms previous MoE
construction methods in the long term, specifically when training on hundreds
of billions of tokens or more. As a result, our MoE model with 5.9B active
parameters achieves comparable performance to a 13B dense model in the same
model family, while requiring approximately 1/4 of the training FLOPs. All
experimental resources, including source code, training data, model checkpoints
and logs, are publicly available to promote reproducibility and future research
on MoE. | 6 | 67c07172af68756abc571b53 | null | null |
|
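A minimal sketch of the initialization, assuming one dense FFN weight is copied into every expert and a random fraction of columns is statistically re-initialized per expert; the drop ratio, init std, and the dimension dropped along are illustrative, as the paper ablates these choices.

```python
import torch

def drop_upcycle(dense_ffn_weight, n_experts=8, drop_ratio=0.5, std=0.02):
    """Initialize MoE experts from one dense FFN, re-initializing a random
    subset of each expert's columns to promote expert specialization (sketch)."""
    experts = []
    for _ in range(n_experts):
        w = dense_ffn_weight.clone()
        cols = torch.rand(w.shape[1]) < drop_ratio        # columns to reset
        w[:, cols] = torch.randn(w.shape[0], int(cols.sum())) * std
        experts.append(w)
    return experts

dense = torch.randn(4096, 1024)                           # pre-trained FFN weight
experts = drop_upcycle(dense)
print(len(experts), experts[0].shape)                     # 8 experts, same shape
```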
2025-02-27T09:41:49.469000 | Rank1: Test-Time Compute for Reranking in Information Retrieval | 2 | {
"_id": "6362d9712691058b19de1ba4",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6362d9712691058b19de1ba4/c9QrA2oE6lcs_46ShaTY1.jpeg",
"followerCount": 15,
"fullname": "Orion Weller",
"isHf": false,
"isMod": false,
"isPro": true,
"name": "orionweller",
"type": "user"
} | false | null | 2502.18418 | [
{
"_id": "67bf17b23f838c1e33ac7c4d",
"hidden": false,
"name": "Orion Weller",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bf17b23f838c1e33ac7c4e",
"hidden": false,
"name": "Kathryn Ricci",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bf17b23f838c1e33ac7c4f",
"hidden": false,
"name": "Eugene Yang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bf17b23f838c1e33ac7c50",
"hidden": false,
"name": "Andrew Yates",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bf17b23f838c1e33ac7c51",
"hidden": false,
"name": "Dawn Lawrie",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bf17b23f838c1e33ac7c52",
"hidden": false,
"name": "Benjamin Van Durme",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-25T18:14:06 | Rank1: Test-Time Compute for Reranking in Information Retrieval | We introduce Rank1, the first reranking model trained to take advantage of
test-time compute. Rank1 demonstrates the applicability within retrieval of
using a reasoning language model (i.e. OpenAI's o1, Deepseek's R1, etc.) for
distillation in order to rapidly improve the performance of a smaller model. We
gather and open-source a dataset of more than 600,000 examples of R1 reasoning
traces from queries and passages in MS MARCO. Models trained on this dataset:
(1) achieve state-of-the-art performance on advanced reasoning and
instruction-following datasets; (2) generalize remarkably well out of
distribution thanks to the ability to respond to user-input prompts; and (3)
produce explainable reasoning chains that can be given to users or RAG-based
systems. Further, we demonstrate
that quantized versions of these models retain strong performance while using
less compute/memory. Overall, Rank1 shows that test-time compute allows for a
fundamentally new type of explainable and performant reranker model for search. | 23 | 67bf17b33f838c1e33ac7c8e | null | null |
|
2025-02-27T07:31:45.499000 | DOEI: Dual Optimization of Embedding Information for Attention-Enhanced Class Activation Maps | 2 | {
"_id": "64ec877bb93654d4ca5c92e9",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/64ec877bb93654d4ca5c92e9/GvHk_KSdE9Rhnk_o-NaZX.jpeg",
"followerCount": 1,
"fullname": "Zeyu Zhang",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "SteveZeyuZhang",
"type": "user"
} | true | null | 2502.15885 | [
{
"_id": "67c05aeca2a76d8a27d33c8a",
"hidden": false,
"name": "Hongjie Zhu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c05aeca2a76d8a27d33c8b",
"hidden": false,
"name": "Zeyu Zhang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-27T12:53:51.821Z",
"user": {
"_id": "64ec877bb93654d4ca5c92e9",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/64ec877bb93654d4ca5c92e9/GvHk_KSdE9Rhnk_o-NaZX.jpeg",
"fullname": "Zeyu Zhang",
"isPro": false,
"type": "user",
"user": "SteveZeyuZhang"
}
},
{
"_id": "67c05aeca2a76d8a27d33c8c",
"hidden": false,
"name": "Guansong Pang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c05aeca2a76d8a27d33c8d",
"hidden": false,
"name": "Xu Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c05aeca2a76d8a27d33c8e",
"hidden": false,
"name": "Shimin Wen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c05aeca2a76d8a27d33c8f",
"hidden": false,
"name": "Yu Bai",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c05aeca2a76d8a27d33c90",
"hidden": false,
"name": "Daji Ergu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c05aeca2a76d8a27d33c91",
"hidden": false,
"name": "Ying Cai",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c05aeca2a76d8a27d33c92",
"hidden": false,
"name": "Yang Zhao",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-21T19:06:01 | DOEI: Dual Optimization of Embedding Information for Attention-Enhanced
Class Activation Maps | Weakly supervised semantic segmentation (WSSS) typically utilizes limited
semantic annotations to obtain initial Class Activation Maps (CAMs). However,
due to the inadequate coupling between class activation responses and semantic
information in high-dimensional space, the CAM is prone to object co-occurrence
or under-activation, resulting in inferior recognition accuracy. To tackle this
issue, we propose DOEI, Dual Optimization of Embedding Information, a novel
approach that reconstructs embedding representations through semantic-aware
attention weight matrices to optimize the expression capability of embedding
information. Specifically, DOEI amplifies tokens with high confidence and
suppresses those with low confidence during the class-to-patch interaction.
This alignment of activation responses with semantic information strengthens
the propagation and decoupling of target features, enabling the generated
embeddings to more accurately represent target features in high-level semantic
space. In addition, we propose a hybrid-feature alignment module in DOEI that
combines RGB values, embedding-guided features, and self-attention weights to
increase the reliability of candidate tokens. Comprehensive experiments show
that DOEI is an effective plug-and-play module that empowers state-of-the-art
visual transformer-based WSSS models to significantly improve the quality of
CAMs and segmentation performance on popular benchmarks, including PASCAL VOC
(+3.6%, +1.5%, +1.2% mIoU) and MS COCO (+1.2%, +1.6% mIoU). Code will be
available at https://github.com/AIGeeksGroup/DOEI. | 2 | 67c05af3a2a76d8a27d33faf | null | null |
|
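A minimal sketch of the amplify/suppress step: reweight patch embeddings by a power of the normalized class-to-patch attention, so high-confidence tokens dominate the interaction. The power form and `gamma` are illustrative, not DOEI's exact semantic-aware weight matrix.

```python
import torch

def reweight_tokens(patch_tokens, class_attn, gamma=2.0):
    """Amplify patch tokens the class token attends to strongly and suppress
    the rest (illustrative confidence-guided embedding reweighting)."""
    conf = class_attn / class_attn.max()          # normalize to [0, 1]
    weights = conf.pow(gamma).unsqueeze(-1)       # high conf -> ~1, low -> ~0
    return patch_tokens * weights

tokens = torch.randn(196, 768)                    # 14x14 ViT patch embeddings
attn = torch.rand(196)                            # class-to-patch attention
print(reweight_tokens(tokens, attn).shape)        # torch.Size([196, 768])
```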
2025-02-27T04:18:26.724000 | Project Alexandria: Towards Freeing Scientific Knowledge from Copyright Burdens via LLMs | 2 | {
"_id": "6464a0d41683d3c81f51924a",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6464a0d41683d3c81f51924a/s7yYVwfUB4WOhVFJS6A6T.jpeg",
"followerCount": 5,
"fullname": "Ameya Prabhu",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "AmeyaPrabhu",
"type": "user"
} | true | null | 2502.19413 | [
{
"_id": "67c02d6aa15ac71dcf1c754e",
"hidden": false,
"name": "Christoph Schuhmann",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c02d6aa15ac71dcf1c754f",
"hidden": false,
"name": "Gollam Rabby",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-27T12:54:33.105Z",
"user": {
"_id": "64ac21f11cacea8d4b8f2b3f",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/64ac21f11cacea8d4b8f2b3f/asQOf8wFZ4vmqIeyxfvUR.jpeg",
"fullname": "Gollam Rabby",
"isPro": false,
"type": "user",
"user": "tourist800"
}
},
{
"_id": "67c02d6aa15ac71dcf1c7550",
"hidden": false,
"name": "Ameya Prabhu",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-27T12:54:05.760Z",
"user": {
"_id": "6464a0d41683d3c81f51924a",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6464a0d41683d3c81f51924a/s7yYVwfUB4WOhVFJS6A6T.jpeg",
"fullname": "Ameya Prabhu",
"isPro": false,
"type": "user",
"user": "AmeyaPrabhu"
}
},
{
"_id": "67c02d6aa15ac71dcf1c7551",
"hidden": false,
"name": "Tawsif Ahmed",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-27T22:09:11.253Z",
"user": {
"_id": "635b9bc5cb0f36a40bb43ee3",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/635b9bc5cb0f36a40bb43ee3/SVe1ZvCIfNlYdpWJ35Nu0.jpeg",
"fullname": "tawsif",
"isPro": false,
"type": "user",
"user": "sleeping4cat"
}
},
{
"_id": "67c02d6aa15ac71dcf1c7552",
"hidden": false,
"name": "Andreas Hochlehnert",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-27T22:09:08.923Z",
"user": {
"_id": "64ff3944f0d65cca9b867ed2",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/64ff3944f0d65cca9b867ed2/jWnHkF4AUzh51MkC0UT6b.png",
"fullname": "Andreas Hochlehnert",
"isPro": false,
"type": "user",
"user": "libeanim"
}
},
{
"_id": "67c02d6aa15ac71dcf1c7553",
"hidden": false,
"name": "Huu Nguyen",
"status": "claimed_verified",
"statusLastChangedAt": "2025-03-02T20:18:16.174Z",
"user": {
"_id": "5fc6879e1c5ee87b1164876d",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/5fc6879e1c5ee87b1164876d/Tjnm_lv0Bq0gPbFOTDH6E.jpeg",
"fullname": "Huu Nguyen",
"isPro": false,
"type": "user",
"user": "huu-ontocord"
}
},
{
"_id": "67c02d6aa15ac71dcf1c7554",
"hidden": false,
"name": "Nick Akinci Heidrich",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c02d6aa15ac71dcf1c7555",
"hidden": false,
"name": "Ludwig Schmidt",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c02d6aa15ac71dcf1c7556",
"hidden": false,
"name": "Robert Kaczmarczyk",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c02d6aa15ac71dcf1c7557",
"hidden": false,
"name": "Sören Auer",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c02d6aa15ac71dcf1c7558",
"hidden": false,
"name": "Jenia Jitsev",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-28T12:15:55.422Z",
"user": {
"_id": "6355b485b8b79340d4630dd5",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6355b485b8b79340d4630dd5/HIZO4ybweRy48VdCtk2MB.jpeg",
"fullname": "Jenia Jitsev",
"isPro": false,
"type": "user",
"user": "JJitsev"
}
},
{
"_id": "67c02d6aa15ac71dcf1c7559",
"hidden": false,
"name": "Matthias Bethge",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-26T18:56:52 | Project Alexandria: Towards Freeing Scientific Knowledge from Copyright
Burdens via LLMs | Paywalls, licenses and copyright rules often restrict the broad dissemination
and reuse of scientific knowledge. We take the position that it is both legally
and technically feasible to extract the scientific knowledge in scholarly
texts. Current methods, like text embeddings, fail to reliably preserve factual
content, and simple paraphrasing may not be legally sound. We urge the
community to adopt a new idea: convert scholarly documents into Knowledge Units
using LLMs. These units use structured data capturing entities, attributes and
relationships without stylistic content. We provide evidence that Knowledge
Units: (1) form a legally defensible framework for sharing knowledge from
copyrighted research texts, based on legal analyses of German copyright law and
U.S. Fair Use doctrine, and (2) preserve most (~95%) factual knowledge from
original text, measured by MCQ performance on facts from the original
copyrighted text across four research domains. Freeing scientific knowledge
from copyright promises transformative benefits for scientific research and
education by allowing language models to reuse important facts from copyrighted
text. To support this, we share open-source tools for converting research
documents into Knowledge Units. Overall, our work posits the feasibility of
democratizing access to scientific knowledge while respecting copyright. | 19 | 67c02d6ba15ac71dcf1c7596 | null | null |
|
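One plausible shape for a Knowledge Unit, sketched as a dataclass that captures entities, attributes, and relationships with the source's stylistic content stripped. Field names are hypothetical; the project's open-source tools define the actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class KnowledgeUnit:
    """Structured, style-free facts extracted from a scholarly text (sketch)."""
    entities: list[str]
    attributes: dict[str, str] = field(default_factory=dict)
    relationships: list[tuple[str, str, str]] = field(default_factory=list)

ku = KnowledgeUnit(
    entities=["BERT", "masked language modeling"],
    attributes={"BERT.pretraining_corpus": "BooksCorpus + English Wikipedia"},
    relationships=[("BERT", "is_pretrained_with", "masked language modeling")],
)
print(ku.relationships[0])
```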
2025-02-27T04:15:43.126000 | GHOST 2.0: generative high-fidelity one shot transfer of heads | 2 | {
"_id": "67aafccd7517c92ba71142f2",
"avatarUrl": "/avatars/ef4b5c6867250b8b7af2c995dd7ad740.svg",
"followerCount": 2,
"fullname": "Anastasiia Iashchenko",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "nastasia-y",
"type": "user"
} | true | null | 2502.18417 | [
{
"_id": "67c02b2eb14cf3cbc800c292",
"hidden": false,
"name": "Alexander Groshev",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c02b2eb14cf3cbc800c293",
"hidden": false,
"name": "Anastasiia Iashchenko",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-27T09:13:49.896Z",
"user": {
"_id": "67aafccd7517c92ba71142f2",
"avatarUrl": "/avatars/ef4b5c6867250b8b7af2c995dd7ad740.svg",
"fullname": "Anastasiia Iashchenko",
"isPro": false,
"type": "user",
"user": "nastasia-y"
}
},
{
"_id": "67c02b2eb14cf3cbc800c294",
"hidden": false,
"name": "Pavel Paramonov",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c02b2eb14cf3cbc800c295",
"hidden": false,
"name": "Denis Dimitrov",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-27T12:54:35.272Z",
"user": {
"_id": "6669a678465d1d802181e456",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6669a678465d1d802181e456/ZCthBBhDFQnh0bBkgUQUU.png",
"fullname": "Denis Dimitrov",
"isPro": false,
"type": "user",
"user": "dendimitrov"
}
},
{
"_id": "67c02b2eb14cf3cbc800c296",
"hidden": false,
"name": "Andrey Kuznetsov",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-27T12:54:37.211Z",
"user": {
"_id": "643984dceb7c5616ef3f5d54",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/643984dceb7c5616ef3f5d54/10JRkblrRIEVci6UJwvPz.jpeg",
"fullname": "Andrey Kuznetsov",
"isPro": false,
"type": "user",
"user": "kuznetsoffandrey"
}
}
] | 2025-02-25T18:13:55 | GHOST 2.0: generative high-fidelity one shot transfer of heads | While the task of face swapping has recently gained attention in the research
community, a related problem of head swapping remains largely unexplored. In
addition to skin color transfer, head swap poses extra challenges, such as the
need to preserve structural information of the whole head during synthesis and
to inpaint gaps between the swapped head and the background. In this paper, we address
these concerns with GHOST 2.0, which consists of two problem-specific modules.
First, we introduce an enhanced Aligner model for head reenactment, which
preserves identity information at multiple scales and is robust to extreme pose
variations. Secondly, we use a Blender module that seamlessly integrates the
reenacted head into the target background by transferring skin color and
inpainting mismatched regions. Both modules outperform the baselines on the
corresponding tasks, allowing us to achieve state-of-the-art results in head
swapping. We also tackle complex cases, such as a large difference in hair
styles between the source and the target. Code is available at
https://github.com/ai-forever/ghost-2.0 | 61 | 67c02b31b14cf3cbc800c34b | null | null |
|
2025-02-27T02:43:05.341000 | BIG-Bench Extra Hard | 2 | {
"_id": "5f1158120c833276f61f1a84",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1608042047613-5f1158120c833276f61f1a84.jpeg",
"followerCount": 777,
"fullname": "Niels Rogge",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "nielsr",
"type": "user"
} | false | null | 2502.19187 | [
{
"_id": "67c01747e8c7d56a8e0cbdc3",
"hidden": false,
"name": "Mehran Kazemi",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c01747e8c7d56a8e0cbdc4",
"hidden": false,
"name": "Bahare Fatemi",
"status": "extracted_pending",
"statusLastChangedAt": "2025-02-27T07:42:00.525Z",
"user": {
"_id": "654e97ef5da3196a78409341",
"avatarUrl": "/avatars/1a5ea7351ca21960891cf9721b9f4667.svg",
"fullname": "Bahare Fatemi",
"isPro": false,
"type": "user",
"user": "baharefatemi"
}
},
{
"_id": "67c01747e8c7d56a8e0cbdc5",
"hidden": false,
"name": "Hritik Bansal",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c01747e8c7d56a8e0cbdc6",
"hidden": false,
"name": "John Palowitch",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c01747e8c7d56a8e0cbdc7",
"hidden": false,
"name": "Chrysovalantis Anastasiou",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c01747e8c7d56a8e0cbdc8",
"hidden": false,
"name": "Sanket Vaibhav Mehta",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c01747e8c7d56a8e0cbdc9",
"hidden": false,
"name": "Lalit K. Jain",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c01747e8c7d56a8e0cbdca",
"hidden": false,
"name": "Virginia Aglietti",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c01747e8c7d56a8e0cbdcb",
"hidden": false,
"name": "Disha Jindal",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c01747e8c7d56a8e0cbdcc",
"hidden": false,
"name": "Peter Chen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c01747e8c7d56a8e0cbdcd",
"hidden": false,
"name": "Nishanth Dikkala",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c01747e8c7d56a8e0cbdce",
"hidden": false,
"name": "Gladys Tyen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c01747e8c7d56a8e0cbdcf",
"hidden": false,
"name": "Xin Liu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c01747e8c7d56a8e0cbdd0",
"hidden": false,
"name": "Uri Shalit",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c01747e8c7d56a8e0cbdd1",
"hidden": false,
"name": "Silvia Chiappa",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c01747e8c7d56a8e0cbdd2",
"hidden": false,
"name": "Kate Olszewska",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c01747e8c7d56a8e0cbdd3",
"hidden": false,
"name": "Yi Tay",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c01747e8c7d56a8e0cbdd4",
"hidden": false,
"name": "Vinh Q. Tran",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c01747e8c7d56a8e0cbdd5",
"hidden": false,
"name": "Quoc V. Le",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c01747e8c7d56a8e0cbdd6",
"hidden": false,
"name": "Orhan Firat",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-26T14:50:50 | BIG-Bench Extra Hard | Large language models (LLMs) are increasingly deployed in everyday
applications, demanding robust general reasoning capabilities and a diverse
reasoning skill set. However, current LLM reasoning benchmarks predominantly
focus on mathematical and coding abilities, leaving a gap in evaluating broader
reasoning proficiencies. One particular exception is the BIG-Bench dataset,
which has served as a crucial benchmark for evaluating the general reasoning
capabilities of LLMs, thanks to its diverse set of challenging tasks that
allowed for a comprehensive assessment of general reasoning across various
skills within a unified framework. However, recent advances in LLMs have led to
saturation on BIG-Bench and its harder version, BIG-Bench Hard (BBH).
State-of-the-art models achieve near-perfect scores on many tasks in BBH, thus
diminishing its utility. To address this limitation, we introduce BIG-Bench
Extra Hard (BBEH), a new benchmark designed to push the boundaries of LLM
reasoning evaluation. BBEH replaces each task in BBH with a novel task that
probes a similar reasoning capability but exhibits significantly increased
difficulty. We evaluate various models on BBEH and observe a (harmonic) average
accuracy of 9.8% for the best general-purpose model and 44.8% for the best
reasoning-specialized model, indicating substantial room for improvement and
highlighting the ongoing challenge of achieving robust general reasoning in
LLMs. We release BBEH publicly at: https://github.com/google-deepmind/bbeh. | 6 | 67c01748e8c7d56a8e0cbe0b | null | null |
|
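The "(harmonic) average" in the BBEH abstract matters: a harmonic mean is dominated by the weakest tasks, so a model cannot hide a near-zero task behind a few easy ones. A tiny sketch with made-up per-task accuracies:

```python
from statistics import harmonic_mean, mean

task_accuracy = [0.62, 0.40, 0.05, 0.33]  # hypothetical per-task scores
print(f"arithmetic mean: {mean(task_accuracy):.3f}")           # 0.350
print(f"harmonic mean:   {harmonic_mean(task_accuracy):.3f}")  # ~0.147
```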
2025-02-27T02:36:29.037000 | Can Language Models Falsify? Evaluating Algorithmic Reasoning with Counterexample Creation | 2 | {
"_id": "6506832221ac448013f94995",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6506832221ac448013f94995/sVUI1JV4Dxan5l-MqNze4.jpeg",
"followerCount": 1,
"fullname": "Shashwat Goel",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "shash42",
"type": "user"
} | true | null | 2502.19414 | [
{
"_id": "67c01587925b73feaf61ac41",
"hidden": false,
"name": "Shiven Sinha",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-27T12:54:41.540Z",
"user": {
"_id": "66325cc59292069aed610056",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/66325cc59292069aed610056/acL_eIdQsBoeDiG9OAvvv.jpeg",
"fullname": "Shiven Sinha",
"isPro": false,
"type": "user",
"user": "shivensinha4"
}
},
{
"_id": "67c01587925b73feaf61ac42",
"hidden": false,
"name": "Shashwat Goel",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-27T22:09:13.498Z",
"user": {
"_id": "6506832221ac448013f94995",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6506832221ac448013f94995/sVUI1JV4Dxan5l-MqNze4.jpeg",
"fullname": "Shashwat Goel",
"isPro": false,
"type": "user",
"user": "shash42"
}
},
{
"_id": "67c01587925b73feaf61ac43",
"hidden": false,
"name": "Ponnurangam Kumaraguru",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c01587925b73feaf61ac44",
"hidden": false,
"name": "Jonas Geiping",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c01587925b73feaf61ac45",
"hidden": false,
"name": "Matthias Bethge",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c01587925b73feaf61ac46",
"hidden": false,
"name": "Ameya Prabhu",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-27T12:54:39.585Z",
"user": {
"_id": "6464a0d41683d3c81f51924a",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6464a0d41683d3c81f51924a/s7yYVwfUB4WOhVFJS6A6T.jpeg",
"fullname": "Ameya Prabhu",
"isPro": false,
"type": "user",
"user": "AmeyaPrabhu"
}
}
] | 2025-02-26T18:58:13 | Can Language Models Falsify? Evaluating Algorithmic Reasoning with
Counterexample Creation | There is growing excitement about the potential of Language Models (LMs) to
accelerate scientific discovery. Falsifying hypotheses is key to scientific
progress, as it allows claims to be iteratively refined over time. This process
requires significant researcher effort, reasoning, and ingenuity. Yet current
benchmarks for LMs predominantly assess their ability to generate solutions
rather than challenge them. We advocate for developing benchmarks that evaluate
this inverse capability - creating counterexamples for subtly incorrect
solutions. To demonstrate this approach, we start with the domain of
algorithmic problem solving, where counterexamples can be evaluated
automatically using code execution. Specifically, we introduce REFUTE, a
dynamically updating benchmark that includes recent problems and incorrect
submissions from programming competitions, where human experts successfully
identified counterexamples. Our analysis finds that the best reasoning agents,
even OpenAI o3-mini (high) with code execution feedback, can create
counterexamples for only <9% of incorrect solutions in REFUTE, even though
ratings indicate its ability to solve up to 48% of these problems from scratch.
We hope our work spurs progress in evaluating and enhancing LMs' ability to
falsify incorrect solutions - a capability that is crucial for both
accelerating research and making models self-improve through reliable
reflective reasoning. | 17 | 67c01588925b73feaf61ad2c | null | null |
|
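A minimal sketch of how counterexamples can be checked automatically in the spirit of REFUTE: a proposed input is a valid counterexample iff a trusted reference solution and the subtly incorrect submission disagree on it. Real grading executes contestant code in a sandbox; the plain Python callables below are an assumption for illustration.

```python
def is_counterexample(candidate, reference, incorrect) -> bool:
    """Return True if `candidate` exposes the bug in `incorrect`."""
    return reference(candidate) != incorrect(candidate)

# Toy problem: "return the maximum element". The buggy version ignores
# negatives because it initializes the running maximum to 0.
def reference(xs):  # correct solution
    return max(xs)

def incorrect(xs):  # wrong for all-negative inputs
    best = 0
    for x in xs:
        best = max(best, x)
    return best

print(is_counterexample([3, 1, 2], reference, incorrect))  # False: outputs agree
print(is_counterexample([-5, -2], reference, incorrect))   # True: 0 != -2
```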
2025-02-27T00:47:02.948000 | CritiQ: Mining Data Quality Criteria from Human Preferences | 2 | {
"_id": "638ef0b0c67af472d31674a6",
"avatarUrl": "/avatars/02df97d15a0f46b47f9162221733b121.svg",
"followerCount": 1,
"fullname": "Honglin Guo",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "KYLN24",
"type": "user"
} | true | null | 2502.19279 | [
{
"_id": "67bffaca3f838c1e33e074e7",
"hidden": false,
"name": "Honglin Guo",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-27T09:13:52.094Z",
"user": {
"_id": "638ef0b0c67af472d31674a6",
"avatarUrl": "/avatars/02df97d15a0f46b47f9162221733b121.svg",
"fullname": "Honglin Guo",
"isPro": false,
"type": "user",
"user": "KYLN24"
}
},
{
"_id": "67bffaca3f838c1e33e074e8",
"hidden": false,
"name": "Kai Lv",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bffaca3f838c1e33e074e9",
"hidden": false,
"name": "Qipeng Guo",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bffaca3f838c1e33e074ea",
"hidden": false,
"name": "Tianyi Liang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bffaca3f838c1e33e074eb",
"hidden": false,
"name": "Zhiheng Xi",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bffaca3f838c1e33e074ec",
"hidden": false,
"name": "Demin Song",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bffaca3f838c1e33e074ed",
"hidden": false,
"name": "Qiuyinzhe Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bffaca3f838c1e33e074ee",
"hidden": false,
"name": "Yu Sun",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bffaca3f838c1e33e074ef",
"hidden": false,
"name": "Kai Chen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bffaca3f838c1e33e074f0",
"hidden": false,
"name": "Xipeng Qiu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bffaca3f838c1e33e074f1",
"hidden": false,
"name": "Tao Gui",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-26T16:33:41 | CritiQ: Mining Data Quality Criteria from Human Preferences | Language models heavily depend on high-quality data for optimal performance.
Existing approaches rely on manually designed heuristics, the perplexity of
existing models, training classifiers, or careful prompt engineering, which
require significant expert experience and human annotation effort while
introducing biases. We introduce CritiQ, a novel data selection method that
automatically mines criteria from human preferences for data quality with only
~30 human-annotated pairs and performs efficient data selection. The main
component, CritiQ Flow, employs a manager agent to evolve quality criteria and
worker agents to make pairwise judgments. We build a knowledge base that
extracts quality criteria from previous work to boost CritiQ Flow. Compared to
perplexity- and classifier-based methods, verbal criteria are more
interpretable and possess reusable value. After deriving the criteria, we train
the CritiQ Scorer to give quality scores and perform efficient data selection.
We demonstrate the effectiveness of our method in the code, math, and logic
domains, achieving high accuracy on human-annotated test sets. To validate the
quality of the selected data, we continually train Llama 3.1 models and observe
improved performance on downstream tasks compared to uniform sampling. Ablation
studies validate the benefits of the knowledge base and the reflection process.
We analyze how criteria evolve and the effectiveness of majority voting. | 7 | 67bffacc3f838c1e33e075a2 | null | null |
|
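A minimal sketch of the pairwise-judgment step CritiQ Flow describes: several worker judges each pick the better of two documents under a verbal criterion, and a majority vote decides. The heuristic workers below stand in for LLM calls and are not CritiQ's actual agents.

```python
from collections import Counter

def majority_vote(doc_a: str, doc_b: str, workers) -> str:
    """Aggregate pairwise judgments from all workers by majority vote."""
    votes = Counter(w(doc_a, doc_b) for w in workers)
    return votes.most_common(1)[0][0]

# Toy verbal criteria as heuristics: prefer longer text, prefer fewer
# ALL-CAPS words, prefer text ending with proper punctuation.
workers = [
    lambda a, b: "A" if len(a) > len(b) else "B",
    lambda a, b: "A" if sum(w.isupper() for w in a.split())
                        <= sum(w.isupper() for w in b.split()) else "B",
    lambda a, b: "A" if a.rstrip().endswith((".", "!", "?")) else "B",
]

print(majority_vote("A clear, complete sentence.", "BUY NOW cheap", workers))  # "A"
```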
2025-02-27T00:37:24.965000 | PosterSum: A Multimodal Benchmark for Scientific Poster Summarization | 2 | {
"_id": "657ccbf2869d5bb0e53b482f",
"avatarUrl": "/avatars/2eae5a10bdc14814a04d9f255f16de6b.svg",
"followerCount": 4,
"fullname": "Rohit Saxena",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "rohitsaxena",
"type": "user"
} | true | null | 2502.17540 | [
{
"_id": "67bff9608d761fc6a75e24ad",
"hidden": false,
"name": "Rohit Saxena",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-27T09:13:54.284Z",
"user": {
"_id": "657ccbf2869d5bb0e53b482f",
"avatarUrl": "/avatars/2eae5a10bdc14814a04d9f255f16de6b.svg",
"fullname": "Rohit Saxena",
"isPro": false,
"type": "user",
"user": "rohitsaxena"
}
},
{
"_id": "67bff9608d761fc6a75e24ae",
"hidden": false,
"name": "Pasquale Minervini",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bff9608d761fc6a75e24af",
"hidden": false,
"name": "Frank Keller",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-24T18:35:39 | PosterSum: A Multimodal Benchmark for Scientific Poster Summarization | Generating accurate and concise textual summaries from multimodal documents
is challenging, especially when dealing with visually complex content like
scientific posters. We introduce PosterSum, a novel benchmark to advance the
development of vision-language models that can understand and summarize
scientific posters into research paper abstracts. Our dataset contains 16,305
conference posters paired with their corresponding abstracts as summaries. Each
poster is provided in image format and presents diverse visual understanding
challenges, such as complex layouts, dense text regions, tables, and figures.
We benchmark state-of-the-art Multimodal Large Language Models (MLLMs) on
PosterSum and demonstrate that they struggle to accurately interpret and
summarize scientific posters. We propose Segment & Summarize, a hierarchical
method that outperforms current MLLMs on automated metrics, achieving a 3.14%
gain in ROUGE-L. This will serve as a starting point for future research on
poster summarization. | 2 | 67bff96d8d761fc6a75e27a0 | null | null |
|
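A minimal sketch of the hierarchical Segment & Summarize idea: split the poster into regions, summarize each region, then fuse the region summaries into one abstract. The grid split and the `summarize` stub below stand in for the paper's segmentation and MLLM calls; both are assumptions for illustration.

```python
def split_into_regions(poster: list[str], rows: int = 2) -> list[list[str]]:
    """Toy segmentation: chunk the poster's text blocks into `rows` groups."""
    size = max(1, len(poster) // rows)
    return [poster[i:i + size] for i in range(0, len(poster), size)]

def summarize(blocks: list[str]) -> str:
    """Stub summarizer: keep the first clause of each block."""
    return " ".join(b.split(".")[0] for b in blocks)

def segment_and_summarize(poster: list[str]) -> str:
    region_summaries = [summarize(r) for r in split_into_regions(poster)]
    return summarize(region_summaries)  # fuse region summaries into one text

poster = ["We propose X. Details follow.", "Results beat baselines. See table."]
print(segment_and_summarize(poster))
```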
2025-02-27T00:17:58.262000 | Language Models' Factuality Depends on the Language of Inquiry | 2 | {
"_id": "65d2f1e0fe21569868393411",
"avatarUrl": "/avatars/1401020e76d958bef3f33e7449773694.svg",
"followerCount": 1,
"fullname": "Tushar Aggarwal",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "AggarwalTushar",
"type": "user"
} | true | null | 2502.17955 | [
{
"_id": "67bff526ca6e3c22b6e89d71",
"hidden": false,
"name": "Tushar Aggarwal",
"status": "extracted_confirmed",
"statusLastChangedAt": "2025-02-27T18:59:26.826Z",
"user": {
"_id": "65d2f1e0fe21569868393411",
"avatarUrl": "/avatars/1401020e76d958bef3f33e7449773694.svg",
"fullname": "Tushar Aggarwal",
"isPro": false,
"type": "user",
"user": "AggarwalTushar"
}
},
{
"_id": "67bff526ca6e3c22b6e89d72",
"hidden": false,
"name": "Kumar Tanmay",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bff526ca6e3c22b6e89d73",
"hidden": false,
"name": "Ayush Agrawal",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-27T09:13:56.625Z",
"user": {
"_id": "61a7cbb0fcbbebe775bf17fd",
"avatarUrl": "/avatars/8b54907c6a1ea90a1242f26e03e117af.svg",
"fullname": "Ayush Agrawal",
"isPro": false,
"type": "user",
"user": "ayush1801"
}
},
{
"_id": "67bff526ca6e3c22b6e89d74",
"hidden": false,
"name": "Kumar Ayush",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bff526ca6e3c22b6e89d75",
"hidden": false,
"name": "Hamid Palangi",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bff526ca6e3c22b6e89d76",
"hidden": false,
"name": "Paul Pu Liang",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-25T08:27:18 | Language Models' Factuality Depends on the Language of Inquiry | Multilingual language models (LMs) are expected to recall factual knowledge
consistently across languages, yet they often fail to transfer knowledge
between languages even when they possess the correct information in one of the
languages. For example, we find that an LM may correctly identify Rashed Al
Shashai as being from Saudi Arabia when asked in Arabic, but consistently fails
to do so when asked in English or Swahili. To systematically investigate this
limitation, we introduce a benchmark of 10,000 country-related facts across 13
languages and propose three novel metrics: Factual Recall Score, Knowledge
Transferability Score, and Cross-Lingual Factual Knowledge Transferability
Score - to quantify factual recall and knowledge transferability in LMs across
different languages. Our results reveal fundamental weaknesses in today's
state-of-the-art LMs, particularly in cross-lingual generalization where models
fail to transfer knowledge effectively across different languages, leading to
inconsistent performance sensitive to the language used. Our findings emphasize
the need for LMs to recognize language-specific factual reliability and
leverage the most trustworthy information across languages. We release our
benchmark and evaluation framework to drive future research in multilingual
knowledge transfer. | 29 | 67bff528ca6e3c22b6e89ddd | null | null |
|
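An illustrative sketch of the kind of metrics the abstract names; the paper's exact formulas may differ. Here, Factual Recall Score is per-language accuracy over a fact set, and a toy transferability score is the fraction of facts recalled in a source language that are also recalled in a target language.

```python
correct = {  # fact_id -> set of languages where the model answered correctly
    "fact1": {"ar", "en"},
    "fact2": {"ar"},
    "fact3": {"en", "sw"},
    "fact4": set(),
}
LANGS = ["ar", "en", "sw"]

def recall(lang: str) -> float:
    """Fraction of all facts recalled correctly in `lang`."""
    return sum(lang in v for v in correct.values()) / len(correct)

def transfer(src: str, tgt: str) -> float:
    """Of facts known in `src`, what fraction is also recalled in `tgt`?"""
    known = [f for f, v in correct.items() if src in v]
    if not known:
        return 0.0
    return sum(tgt in correct[f] for f in known) / len(known)

for lang in LANGS:
    print(f"recall[{lang}] = {recall(lang):.2f}")
print(f"transfer ar->en = {transfer('ar', 'en'):.2f}")  # 1 of 2 ar-facts -> 0.50
```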
2025-02-27T00:08:09.082000 | Plutus: Benchmarking Large Language Models in Low-Resource Greek Finance | 2 | {
"_id": "63b58ed5889aa6707f0bb0f4",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63b58ed5889aa6707f0bb0f4/znl74_aMswlV8VtHrfj3G.jpeg",
"followerCount": 15,
"fullname": "Jimin Huang",
"isHf": false,
"isMod": false,
"isPro": true,
"name": "jiminHuang",
"type": "user"
} | true | null | 2502.18772 | [
{
"_id": "67bfc297ca6e3c22b6d99c78",
"hidden": false,
"name": "Xueqing Peng",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bfc297ca6e3c22b6d99c79",
"hidden": false,
"name": "Triantafillos Papadopoulos",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bfc297ca6e3c22b6d99c7a",
"hidden": false,
"name": "Efstathia Soufleri",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bfc297ca6e3c22b6d99c7b",
"hidden": false,
"name": "Polydoros Giannouris",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-28T13:24:53.698Z",
"user": {
"_id": "673354dc5b8d4dccb4da9b63",
"avatarUrl": "/avatars/ae6bd67a1d93fee89bf5c576fd8ddc39.svg",
"fullname": "Polydoros Giannouris",
"isPro": false,
"type": "user",
"user": "PolydorosG"
}
},
{
"_id": "67bfc297ca6e3c22b6d99c7c",
"hidden": false,
"name": "Ruoyu Xiang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bfc297ca6e3c22b6d99c7d",
"hidden": false,
"name": "Yan Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bfc297ca6e3c22b6d99c7e",
"hidden": false,
"name": "Lingfei Qian",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bfc297ca6e3c22b6d99c7f",
"hidden": false,
"name": "Jimin Huang",
"status": "extracted_pending",
"statusLastChangedAt": "2025-02-27T01:40:40.189Z",
"user": {
"_id": "63b58ed5889aa6707f0bb0f4",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63b58ed5889aa6707f0bb0f4/znl74_aMswlV8VtHrfj3G.jpeg",
"fullname": "Jimin Huang",
"isPro": true,
"type": "user",
"user": "jiminHuang"
}
},
{
"_id": "67bfc297ca6e3c22b6d99c80",
"hidden": false,
"name": "Qianqian Xie",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bfc297ca6e3c22b6d99c81",
"hidden": false,
"name": "Sophia Ananiadou",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-26T03:04:01 | Plutus: Benchmarking Large Language Models in Low-Resource Greek Finance | Despite Greece's pivotal role in the global economy, large language models
(LLMs) remain underexplored for the Greek financial context due to the linguistic
complexity of Greek and the scarcity of domain-specific datasets. Previous
efforts in multilingual financial natural language processing (NLP) have
exposed considerable performance disparities, yet no dedicated Greek financial
benchmarks or Greek-specific financial LLMs have been developed until now. To
bridge this gap, we introduce Plutus-ben, the first Greek Financial Evaluation
Benchmark, and Plutus-8B, the pioneering Greek Financial LLM, fine-tuned with
Greek domain-specific data. Plutus-ben addresses five core financial NLP tasks
in Greek: numeric and textual named entity recognition, question answering,
abstractive summarization, and topic classification, thereby facilitating
systematic and reproducible LLM assessments. To underpin these tasks, we
present three novel, high-quality Greek financial datasets, thoroughly
annotated by expert native Greek speakers, augmented by two existing resources.
Our comprehensive evaluation of 22 LLMs on Plutus-ben reveals that Greek
financial NLP remains challenging due to linguistic complexity, domain-specific
terminology, and financial reasoning gaps. These findings underscore the
limitations of cross-lingual transfer, the necessity for financial expertise in
Greek-trained models, and the challenges of adapting financial LLMs to Greek
text. We release Plutus-ben, Plutus-8B, and all associated datasets publicly to
promote reproducible research and advance Greek financial NLP, fostering
broader multilingual inclusivity in finance. | 30 | 67bfc298ca6e3c22b6d99caa | null | null |
|
2025-02-26T23:05:13.440000 | Kanana: Compute-efficient Bilingual Language Models | 2 | {
"_id": "60436d159e905013ae8715d7",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1623809612769-60436d159e905013ae8715d7.jpeg",
"followerCount": 5,
"fullname": "Minho Ryu",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "bzantium",
"type": "user"
} | true | null | 2502.18934 | [
{
"_id": "67bfe1bf4426925c82fe5953",
"hidden": false,
"name": "Kanana LLM Team",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bfe1bf4426925c82fe5954",
"hidden": false,
"name": "Yunju Bak",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-27T12:55:35.505Z",
"user": {
"_id": "64d08bd75de9e1e911b24226",
"avatarUrl": "/avatars/e572bb47659393573a0c1fb3d333dd7b.svg",
"fullname": "Yunju Bak",
"isPro": false,
"type": "user",
"user": "yunjubak63"
}
},
{
"_id": "67bfe1bf4426925c82fe5955",
"hidden": false,
"name": "Hojin Lee",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bfe1bf4426925c82fe5956",
"hidden": false,
"name": "Minho Ryu",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-27T09:14:17.979Z",
"user": {
"_id": "60436d159e905013ae8715d7",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1623809612769-60436d159e905013ae8715d7.jpeg",
"fullname": "Minho Ryu",
"isPro": false,
"type": "user",
"user": "bzantium"
}
},
{
"_id": "67bfe1bf4426925c82fe5957",
"hidden": false,
"name": "Jiyeon Ham",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-27T09:14:11.786Z",
"user": {
"_id": "66ebb4fdc5b2c25450fd17de",
"avatarUrl": "/avatars/e6b40dcbe2eba838ba21be9221758a3c.svg",
"fullname": "Jiyeon Ham",
"isPro": false,
"type": "user",
"user": "jiyeonham"
}
},
{
"_id": "67bfe1bf4426925c82fe5958",
"hidden": false,
"name": "Seungjae Jung",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bfe1bf4426925c82fe5959",
"hidden": false,
"name": "Daniel Wontae Nam",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-27T09:14:09.613Z",
"user": {
"_id": "66c82a50c1b3c03c61aea140",
"avatarUrl": "/avatars/3c508f96bdca2f2ce9746d3decd4718e.svg",
"fullname": "daniel nam",
"isPro": false,
"type": "user",
"user": "daniel-rl2"
}
},
{
"_id": "67bfe1bf4426925c82fe595a",
"hidden": false,
"name": "Taegyeong Eo",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bfe1bf4426925c82fe595b",
"hidden": false,
"name": "Donghun Lee",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bfe1bf4426925c82fe595c",
"hidden": false,
"name": "Doohae Jung",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-27T09:14:06.858Z",
"user": {
"_id": "6142e17fe9e656d4459121e4",
"avatarUrl": "/avatars/6baebd4598a845ec7fdb735eb0d53139.svg",
"fullname": "Doohae Jung",
"isPro": false,
"type": "user",
"user": "Doohae"
}
},
{
"_id": "67bfe1bf4426925c82fe595d",
"hidden": false,
"name": "Boseop Kim",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-27T09:14:01.989Z",
"user": {
"_id": "60f559be68ee3ef098e407cf",
"avatarUrl": "/avatars/e1f00ff1c1c9fa7f591535d39c7d5e44.svg",
"fullname": "Boseop Kim",
"isPro": false,
"type": "user",
"user": "seopbo"
}
},
{
"_id": "67bfe1bf4426925c82fe595e",
"hidden": false,
"name": "Nayeon Kim",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-27T09:14:13.867Z",
"user": {
"_id": "6605028007a154c768e1c4c7",
"avatarUrl": "/avatars/88678edb83fdb466067e38acd22d07de.svg",
"fullname": "Nayeon Kim",
"isPro": false,
"type": "user",
"user": "lana-ny"
}
},
{
"_id": "67bfe1bf4426925c82fe595f",
"hidden": false,
"name": "Jaesun Park",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-27T09:14:15.898Z",
"user": {
"_id": "6136f65440e43b8f748a0833",
"avatarUrl": "/avatars/f72a5ae3d3e94485de8aed8df94abdad.svg",
"fullname": "Jaesun Park",
"isPro": false,
"type": "user",
"user": "jaesun"
}
},
{
"_id": "67bfe1bf4426925c82fe5960",
"hidden": false,
"name": "Hyunho Kim",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bfe1bf4426925c82fe5961",
"hidden": false,
"name": "Hyunwoong Ko",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-27T12:58:05.546Z",
"user": {
"_id": "5fd888cf61e46993190ce543",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1634604273263-5fd888cf61e46993190ce543.jpeg",
"fullname": "Hyunwoong Ko",
"isPro": false,
"type": "user",
"user": "hyunwoongko"
}
},
{
"_id": "67bfe1bf4426925c82fe5962",
"hidden": false,
"name": "Changmin Lee",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-27T09:14:04.506Z",
"user": {
"_id": "63d268bb57ab367124ea7b75",
"avatarUrl": "/avatars/11312cde1e9f077aa9e5103b48be5de6.svg",
"fullname": "Changmin Lee",
"isPro": false,
"type": "user",
"user": "changminlee"
}
},
{
"_id": "67bfe1bf4426925c82fe5963",
"hidden": false,
"name": "Kyoung-Woon On",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-27T12:58:11.269Z",
"user": {
"_id": "62bd31e1d2c8a6542f53fcba",
"avatarUrl": "/avatars/4ac18a7bcaf9dd3885b0478dea90818f.svg",
"fullname": "Kyoung-Woon On",
"isPro": false,
"type": "user",
"user": "kloud"
}
},
{
"_id": "67bfe1bf4426925c82fe5964",
"hidden": false,
"name": "Seulye Baeg",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bfe1bf4426925c82fe5965",
"hidden": false,
"name": "Junrae Cho",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bfe1bf4426925c82fe5966",
"hidden": false,
"name": "Sunghee Jung",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bfe1bf4426925c82fe5967",
"hidden": false,
"name": "Jieun Kang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bfe1bf4426925c82fe5968",
"hidden": false,
"name": "EungGyun Kim",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bfe1bf4426925c82fe5969",
"hidden": false,
"name": "Eunhwa Kim",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bfe1bf4426925c82fe596a",
"hidden": false,
"name": "Byeongil Ko",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bfe1bf4426925c82fe596b",
"hidden": false,
"name": "Daniel Lee",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bfe1bf4426925c82fe596c",
"hidden": false,
"name": "Minchul Lee",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bfe1bf4426925c82fe596d",
"hidden": false,
"name": "Miok Lee",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bfe1bf4426925c82fe596e",
"hidden": false,
"name": "Shinbok Lee",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bfe1bf4426925c82fe596f",
"hidden": false,
"name": "Gaeun Seo",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-27T12:59:39.670Z",
"user": {
"_id": "63148a8f5f47a18962765802",
"avatarUrl": "/avatars/bc58a863727794006dddf758efa09411.svg",
"fullname": "gaeunseo",
"isPro": true,
"type": "user",
"user": "gaeunseo"
}
}
] | 2025-02-26T08:36:20 | Kanana: Compute-efficient Bilingual Language Models | We introduce Kanana, a series of bilingual language models that demonstrate
superior performance in Korean and competitive performance in English. The
computational cost of Kanana is significantly lower than that of
state-of-the-art models of similar size. The report details the techniques
employed during pre-training to achieve compute-efficient yet competitive
models, including high quality data filtering, staged pre-training, depth
up-scaling, and pruning and distillation. Furthermore, the report outlines the
methodologies utilized during the post-training of the Kanana models,
encompassing supervised fine-tuning and preference optimization, aimed at
enhancing their capability for seamless interaction with users. Lastly, the
report elaborates on plausible approaches used for language model adaptation to
specific scenarios, such as embedding, retrieval augmented generation, and
function calling. The Kanana model series spans from 2.1B to 32.5B parameters
with 2.1B models (base, instruct, embedding) publicly released to promote
research on Korean language models. | 58 | 67bfe1c04426925c82fe59a1 | null | null |
|
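Of the pre-training techniques listed, depth up-scaling is easy to picture: grow a pretrained N-layer stack into a deeper one by duplicating a contiguous block of layers, so training resumes from a strong initialization instead of from scratch. The sketch below uses strings as stand-in layers; the duplication range is an illustrative choice, not Kanana's actual recipe.

```python
def depth_upscale(layers, start: int, end: int):
    """Return a deeper stack: the original layers plus a copy of layers[start:end]."""
    return layers[:end] + layers[start:end] + layers[end:]

base = [f"layer{i}" for i in range(8)]        # pretrained 8-layer model
deeper = depth_upscale(base, start=4, end=8)  # re-use the top half
print(len(base), "->", len(deeper))           # 8 -> 12
print(deeper)
```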
2025-02-26T23:04:47.406000 | Can Large Language Models Detect Errors in Long Chain-of-Thought Reasoning? | 2 | {
"_id": "65377c30e48353201e6fdda0",
"avatarUrl": "/avatars/a8f803b6f2e598eaee9c52c0d2ddfc16.svg",
"followerCount": 7,
"fullname": "Jiaheng Liu",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "CheeryLJH",
"type": "user"
} | false | null | 2502.19361 | [
{
"_id": "67bfe435ca6e3c22b6e29442",
"hidden": false,
"name": "Yancheng He",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bfe435ca6e3c22b6e29443",
"hidden": false,
"name": "Shilong Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bfe435ca6e3c22b6e29444",
"hidden": false,
"name": "Jiaheng Liu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bfe435ca6e3c22b6e29445",
"hidden": false,
"name": "Weixun Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bfe435ca6e3c22b6e29446",
"hidden": false,
"name": "Xingyuan Bu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bfe435ca6e3c22b6e29447",
"hidden": false,
"name": "Ge Zhang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-27T09:13:58.959Z",
"user": {
"_id": "638efcf4c67af472d316d424",
"avatarUrl": "/avatars/97a57859d7d87a3a8f1bb41d32a72bc2.svg",
"fullname": "Ge Zhang",
"isPro": false,
"type": "user",
"user": "zhangysk"
}
},
{
"_id": "67bfe435ca6e3c22b6e29448",
"hidden": false,
"name": "Zhongyuan Peng",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bfe435ca6e3c22b6e29449",
"hidden": false,
"name": "Zhaoxiang Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bfe435ca6e3c22b6e2944a",
"hidden": false,
"name": "Wenbo Su",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bfe435ca6e3c22b6e2944b",
"hidden": false,
"name": "Bo Zheng",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-26T17:59:27 | Can Large Language Models Detect Errors in Long Chain-of-Thought
Reasoning? | Recently, o1-like models have drawn significant attention; these models
produce long Chain-of-Thought (CoT) reasoning steps to improve the
reasoning abilities of existing Large Language Models (LLMs). In this paper, to
understand the quality of these long CoTs and measure the critique abilities
of existing LLMs on them, we introduce DeltaBench, which includes long CoTs
generated by different o1-like models (e.g., QwQ, DeepSeek-R1) for
different reasoning tasks (e.g., Math, Code, General Reasoning), to measure the
ability to detect errors in long CoT reasoning. Based on DeltaBench, we first
perform fine-grained analysis of the generated long CoTs to discover the
effectiveness and efficiency of different o1-like models. Then, we conduct
extensive evaluations of existing process reward models (PRMs) and critic
models to detect the errors of each annotated process, which aims to
investigate the boundaries and limitations of existing PRMs and critic models.
Finally, we hope that DeltaBench could guide developers to better understand
the long CoT reasoning abilities of their models. | 24 | 67bfe438ca6e3c22b6e2948e | null | null |
|
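A minimal sketch of the detection task DeltaBench poses: given per-step error annotations for a long CoT, score a critic by whether it flags the first erroneous step. The annotations and critic outputs below are made up for illustration.

```python
def first_error(flags: list[bool]) -> int | None:
    """Index of the first annotated/predicted erroneous step, if any."""
    return next((i for i, bad in enumerate(flags) if bad), None)

gold = [False, False, True, True, False]     # human annotation per CoT step
critic = [False, False, False, True, False]  # a critic model's predictions

hit = first_error(critic) == first_error(gold)
print(f"gold first error: step {first_error(gold)}, "
      f"critic: step {first_error(critic)}, exact match: {hit}")
```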
2025-02-26T22:29:40.056000 | MolSpectra: Pre-training 3D Molecular Representation with Multi-modal Energy Spectra | 2 | {
"_id": "64e84ec6d41a68b065bf78a7",
"avatarUrl": "/avatars/bae3c5e3210b40af6e4f113e85f3e206.svg",
"followerCount": null,
"fullname": "Liang Wang",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "AzureLeon1",
"type": "user"
} | true | null | 2502.16284 | [
{
"_id": "67bfdbd0302c06f220658e9d",
"hidden": false,
"name": "Liang Wang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-27T09:14:42.802Z",
"user": {
"_id": "64e84ec6d41a68b065bf78a7",
"avatarUrl": "/avatars/bae3c5e3210b40af6e4f113e85f3e206.svg",
"fullname": "Liang Wang",
"isPro": false,
"type": "user",
"user": "AzureLeon1"
}
},
{
"_id": "67bfdbd0302c06f220658e9e",
"hidden": false,
"name": "Shaozhen Liu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bfdbd0302c06f220658e9f",
"hidden": false,
"name": "Yu Rong",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bfdbd0302c06f220658ea0",
"hidden": false,
"name": "Deli Zhao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bfdbd0302c06f220658ea1",
"hidden": false,
"name": "Qiang Liu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bfdbd0302c06f220658ea2",
"hidden": false,
"name": "Shu Wu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bfdbd0302c06f220658ea3",
"hidden": false,
"name": "Liang Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-22T16:34:32 | MolSpectra: Pre-training 3D Molecular Representation with Multi-modal
Energy Spectra | Establishing the relationship between 3D structures and the energy states of
molecular systems has proven to be a promising approach for learning 3D
molecular representations. However, existing methods are limited to modeling
the molecular energy states from classical mechanics. This limitation results
in a significant oversight of quantum mechanical effects, such as quantized
(discrete) energy level structures, which offer a more accurate estimation of
molecular energy and can be experimentally measured through energy spectra. In
this paper, we propose to utilize the energy spectra to enhance the
pre-training of 3D molecular representations (MolSpectra), thereby infusing the
knowledge of quantum mechanics into the molecular representations.
Specifically, we propose SpecFormer, a multi-spectrum encoder for encoding
molecular spectra via masked patch reconstruction. By further aligning outputs
from the 3D encoder and spectrum encoder using a contrastive objective, we
enhance the 3D encoder's understanding of molecules. Evaluations on public
benchmarks reveal that our pre-trained representations surpass existing methods
in predicting molecular properties and modeling dynamics. | 5 | 67bfdbd1302c06f220658ece | null | null |
|
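A minimal numpy sketch of the contrastive alignment step: embeddings of the same molecule from the 3D encoder and the spectrum encoder (SpecFormer) should score higher together than mismatched pairs, as in an InfoNCE-style objective. Random vectors stand in for real encoder outputs, and the temperature value is an illustrative assumption.

```python
import numpy as np

def info_nce(z3d: np.ndarray, zspec: np.ndarray, tau: float = 0.1) -> float:
    """Symmetric InfoNCE over a batch; row i of each matrix is molecule i."""
    z3d = z3d / np.linalg.norm(z3d, axis=1, keepdims=True)
    zspec = zspec / np.linalg.norm(zspec, axis=1, keepdims=True)
    logits = z3d @ zspec.T / tau              # pairwise similarities
    labels = np.arange(len(z3d))              # matching pairs on the diagonal
    log_sm = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    loss_a = -log_sm[labels, labels].mean()   # 3D -> spectrum direction
    log_sm_t = logits.T - np.log(np.exp(logits.T).sum(axis=1, keepdims=True))
    loss_b = -log_sm_t[labels, labels].mean() # spectrum -> 3D direction
    return (loss_a + loss_b) / 2

rng = np.random.default_rng(0)
print(info_nce(rng.normal(size=(4, 16)), rng.normal(size=(4, 16))))
```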
2025-02-26T22:18:06.494000 | Towards an AI co-scientist | 2 | {
"_id": "60f1abe7544c2adfd699860c",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"followerCount": 6280,
"fullname": "AK",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "akhaliq",
"type": "user"
} | false | null | 2502.18864 | [
{
"_id": "67bfd957c2a9b64ab3f97aa7",
"hidden": false,
"name": "Juraj Gottweis",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bfd957c2a9b64ab3f97aa8",
"hidden": false,
"name": "Wei-Hung Weng",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bfd957c2a9b64ab3f97aa9",
"hidden": false,
"name": "Alexander Daryin",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bfd957c2a9b64ab3f97aaa",
"hidden": false,
"name": "Tao Tu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bfd957c2a9b64ab3f97aab",
"hidden": false,
"name": "Anil Palepu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bfd957c2a9b64ab3f97aac",
"hidden": false,
"name": "Petar Sirkovic",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bfd957c2a9b64ab3f97aad",
"hidden": false,
"name": "Artiom Myaskovsky",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bfd957c2a9b64ab3f97aae",
"hidden": false,
"name": "Felix Weissenberger",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bfd957c2a9b64ab3f97aaf",
"hidden": false,
"name": "Keran Rong",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bfd957c2a9b64ab3f97ab0",
"hidden": false,
"name": "Ryutaro Tanno",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bfd957c2a9b64ab3f97ab1",
"hidden": false,
"name": "Khaled Saab",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bfd957c2a9b64ab3f97ab2",
"hidden": false,
"name": "Dan Popovici",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bfd957c2a9b64ab3f97ab3",
"hidden": false,
"name": "Jacob Blum",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bfd957c2a9b64ab3f97ab4",
"hidden": false,
"name": "Fan Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bfd957c2a9b64ab3f97ab5",
"hidden": false,
"name": "Katherine Chou",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bfd957c2a9b64ab3f97ab6",
"hidden": false,
"name": "Avinatan Hassidim",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bfd957c2a9b64ab3f97ab7",
"hidden": false,
"name": "Burak Gokturk",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bfd957c2a9b64ab3f97ab8",
"hidden": false,
"name": "Amin Vahdat",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bfd957c2a9b64ab3f97ab9",
"hidden": false,
"name": "Pushmeet Kohli",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bfd957c2a9b64ab3f97aba",
"hidden": false,
"name": "Yossi Matias",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bfd957c2a9b64ab3f97abb",
"hidden": false,
"name": "Andrew Carroll",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bfd957c2a9b64ab3f97abc",
"hidden": false,
"name": "Kavita Kulkarni",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bfd957c2a9b64ab3f97abd",
"hidden": false,
"name": "Nenad Tomasev",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bfd957c2a9b64ab3f97abe",
"hidden": false,
"name": "Yuan Guan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bfd957c2a9b64ab3f97abf",
"hidden": false,
"name": "Vikram Dhillon",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bfd957c2a9b64ab3f97ac0",
"hidden": false,
"name": "Eeshit Dhaval Vaishnav",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bfd957c2a9b64ab3f97ac1",
"hidden": false,
"name": "Byron Lee",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bfd957c2a9b64ab3f97ac2",
"hidden": false,
"name": "Tiago R D Costa",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bfd957c2a9b64ab3f97ac3",
"hidden": false,
"name": "José R Penadés",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bfd957c2a9b64ab3f97ac4",
"hidden": false,
"name": "Gary Peltz",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bfd957c2a9b64ab3f97ac5",
"hidden": false,
"name": "Yunhan Xu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bfd957c2a9b64ab3f97ac6",
"hidden": false,
"name": "Annalisa Pawlosky",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bfd957c2a9b64ab3f97ac7",
"hidden": false,
"name": "Alan Karthikesalingam",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bfd957c2a9b64ab3f97ac8",
"hidden": false,
"name": "Vivek Natarajan",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-26T06:17:13 | Towards an AI co-scientist | Scientific discovery relies on scientists generating novel hypotheses that
undergo rigorous experimental validation. To augment this process, we introduce
an AI co-scientist, a multi-agent system built on Gemini 2.0. The AI
co-scientist is intended to help uncover new, original knowledge and to
formulate demonstrably novel research hypotheses and proposals, building upon
prior evidence and aligned to scientist-provided research objectives and
guidance. The system's design incorporates a generate, debate, and evolve
approach to hypothesis generation, inspired by the scientific method and
accelerated by scaling test-time compute. Key contributions include: (1) a
multi-agent architecture with an asynchronous task execution framework for
flexible compute scaling; (2) a tournament evolution process for self-improving
hypotheses generation. Automated evaluations show continued benefits of
test-time compute, improving hypothesis quality. While general purpose, we
focus development and validation in three biomedical areas: drug repurposing,
novel target discovery, and explaining mechanisms of bacterial evolution and
anti-microbial resistance. For drug repurposing, the system proposes candidates
with promising validation findings, including candidates for acute myeloid
leukemia that show tumor inhibition in vitro at clinically applicable
concentrations. For novel target discovery, the AI co-scientist proposed new
epigenetic targets for liver fibrosis, validated by anti-fibrotic activity and
liver cell regeneration in human hepatic organoids. Finally, the AI
co-scientist recapitulated unpublished experimental results via a parallel in
silico discovery of a novel gene transfer mechanism in bacterial evolution.
These results, detailed in separate, co-timed reports, demonstrate the
potential to augment biomedical and scientific discovery and usher in an era of
AI-empowered scientists. | 37 | 67bfd958c2a9b64ab3f97afa | null | null |
|
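A minimal sketch of a tournament over hypotheses: pairwise comparisons (stand-ins for the system's debate matches) update Elo-style ratings, and the ranking decides which hypotheses survive to be evolved. The match outcomes and K-factor below are made-up illustrative values.

```python
def elo_update(ra: float, rb: float, a_wins: bool, k: float = 32.0):
    """Standard Elo update for one match between hypotheses A and B."""
    expected_a = 1.0 / (1.0 + 10 ** ((rb - ra) / 400))
    score_a = 1.0 if a_wins else 0.0
    delta = k * (score_a - expected_a)
    return ra + delta, rb - delta

ratings = {"H1": 1200.0, "H2": 1200.0, "H3": 1200.0}
matches = [("H1", "H2", True), ("H2", "H3", False), ("H1", "H3", True)]
for a, b, a_wins in matches:
    ratings[a], ratings[b] = elo_update(ratings[a], ratings[b], a_wins)

for h, r in sorted(ratings.items(), key=lambda kv: -kv[1]):
    print(f"{h}: {r:.0f}")  # highest-rated hypotheses survive to be evolved
```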
2025-02-26T22:16:03.582000 | AISafetyLab: A Comprehensive Framework for AI Safety Evaluation and Improvement | 2 | {
"_id": "61b58aa0d65058ce70beb98c",
"avatarUrl": "/avatars/aefd9271b891abc6dd2ded1a30eebca4.svg",
"followerCount": 1,
"fullname": "Zhexin Zhang",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "nonstopfor",
"type": "user"
} | false | null | 2502.16776 | [
{
"_id": "67bfd8d546083445aacb4605",
"hidden": false,
"name": "Zhexin Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bfd8d546083445aacb4606",
"hidden": false,
"name": "Leqi Lei",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bfd8d546083445aacb4607",
"hidden": false,
"name": "Junxiao Yang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bfd8d546083445aacb4608",
"hidden": false,
"name": "Xijie Huang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bfd8d546083445aacb4609",
"hidden": false,
"name": "Yida Lu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bfd8d546083445aacb460a",
"hidden": false,
"name": "Shiyao Cui",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bfd8d546083445aacb460b",
"hidden": false,
"name": "Renmiao Chen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bfd8d546083445aacb460c",
"hidden": false,
"name": "Qinglin Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bfd8d546083445aacb460d",
"hidden": false,
"name": "Xinyuan Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bfd8d546083445aacb460e",
"hidden": false,
"name": "Hao Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bfd8d546083445aacb460f",
"hidden": false,
"name": "Hao Li",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-27T09:14:45.366Z",
"user": {
"_id": "653f1ef4aabbf15fc76a259c",
"avatarUrl": "/avatars/94e569999d913e961266394ea2875965.svg",
"fullname": "LLLeo Li",
"isPro": false,
"type": "user",
"user": "LLLeo612"
}
},
{
"_id": "67bfd8d546083445aacb4610",
"hidden": false,
"name": "Xianqi Lei",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bfd8d546083445aacb4611",
"hidden": false,
"name": "Chengwei Pan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bfd8d546083445aacb4612",
"hidden": false,
"name": "Lei Sha",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bfd8d546083445aacb4613",
"hidden": false,
"name": "Hongning Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bfd8d546083445aacb4614",
"hidden": false,
"name": "Minlie Huang",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-24T02:11:52 | AISafetyLab: A Comprehensive Framework for AI Safety Evaluation and
Improvement | As AI models are increasingly deployed across diverse real-world scenarios,
ensuring their safety remains a critical yet underexplored challenge. While
substantial efforts have been made to evaluate and enhance AI safety, the lack
of a standardized framework and comprehensive toolkit poses significant
obstacles to systematic research and practical adoption. To bridge this gap, we
introduce AISafetyLab, a unified framework and toolkit that integrates
representative attack, defense, and evaluation methodologies for AI safety.
AISafetyLab features an intuitive interface that enables developers to
seamlessly apply various techniques while maintaining a well-structured and
extensible codebase for future advancements. Additionally, we conduct empirical
studies on Vicuna, analyzing different attack and defense strategies to provide
valuable insights into their comparative effectiveness. To facilitate ongoing
research and development in AI safety, AISafetyLab is publicly available at
https://github.com/thu-coai/AISafetyLab, and we are committed to its continuous
maintenance and improvement. | 5 | 67bfd8d646083445aacb464f | null | null |
|
2025-02-26T22:10:20.646000 | Distill Any Depth: Distillation Creates a Stronger Monocular Depth Estimator | 4 | {
"_id": "64196320ed725fef64419c2a",
"avatarUrl": "/avatars/96feb22fb5e8931d6c9e0ea06148266f.svg",
"followerCount": 3,
"fullname": "Chi Zhang",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "DrChiZhang",
"type": "user"
} | false | [
"https://cdn-uploads.huggingface.co/production/uploads/64196320ed725fef64419c2a/k13rSuJPlDkMtzwdHXCXm.png"
] | 2502.19204 | [
{
"_id": "67bfd735ca6e3c22b6de43c7",
"hidden": false,
"name": "Xiankang He",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bfd735ca6e3c22b6de43c8",
"hidden": false,
"name": "Dongyan Guo",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bfd735ca6e3c22b6de43c9",
"hidden": false,
"name": "Hongji Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bfd735ca6e3c22b6de43ca",
"hidden": false,
"name": "Ruibo Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bfd735ca6e3c22b6de43cb",
"hidden": false,
"name": "Ying Cui",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bfd735ca6e3c22b6de43cc",
"hidden": false,
"name": "Chi Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-26T15:10:05 | Distill Any Depth: Distillation Creates a Stronger Monocular Depth
Estimator | Monocular depth estimation (MDE) aims to predict scene depth from a single
RGB image and plays a crucial role in 3D scene understanding. Recent advances
in zero-shot MDE leverage normalized depth representations and
distillation-based learning to improve generalization across diverse scenes.
However, current depth normalization methods for distillation, relying on
global normalization, can amplify noisy pseudo-labels, reducing distillation
effectiveness. In this paper, we systematically analyze the impact of different
depth normalization strategies on pseudo-label distillation. Based on our
findings, we propose Cross-Context Distillation, which integrates global and
local depth cues to enhance pseudo-label quality. Additionally, we introduce a
multi-teacher distillation framework that leverages complementary strengths of
different depth estimation models, leading to more robust and accurate depth
predictions. Extensive experiments on benchmark datasets demonstrate that our
approach significantly outperforms state-of-the-art methods, both
quantitatively and qualitatively. | 11 | 67bfd736ca6e3c22b6de441e | null | null |
|
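A minimal sketch contrasting the two normalization contexts the paper analyzes: normalizing a pseudo-label depth map over the whole image versus within a local crop, which gives the same pixels different distillation targets. The median/MAD normalizer and the crop coordinates below are illustrative choices, not the paper's exact recipe.

```python
import numpy as np

def normalize(depth: np.ndarray) -> np.ndarray:
    """Scale-and-shift-invariant normalization via median and mean abs. deviation."""
    med = np.median(depth)
    mad = np.mean(np.abs(depth - med)) + 1e-8
    return (depth - med) / mad

rng = np.random.default_rng(0)
depth = rng.uniform(1.0, 10.0, size=(32, 32))  # toy pseudo-label depth map

global_norm = normalize(depth)        # one context: the entire image
crop = depth[8:24, 8:24]
local_norm = normalize(crop)          # another context: a local crop

# The same pixels receive different targets under the two contexts:
print(global_norm[8:24, 8:24].mean(), local_norm.mean())
```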
2025-02-26T22:07:49.438000 | TheoremExplainAgent: Towards Multimodal Explanations for LLM Theorem Understanding | 2 | {
"_id": "60f1abe7544c2adfd699860c",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"followerCount": 6280,
"fullname": "AK",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "akhaliq",
"type": "user"
} | false | null | 2502.19400 | [
{
"_id": "67bfd6f15db054ee3c5a766b",
"hidden": false,
"name": "Max Ku",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-27T09:14:55.238Z",
"user": {
"_id": "631d760344503b7227837242",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/631d760344503b7227837242/3b6JRusFX6GKJpsN9ZdeJ.png",
"fullname": "Max Ku",
"isPro": false,
"type": "user",
"user": "vinesmsuic"
}
},
{
"_id": "67bfd6f15db054ee3c5a766c",
"hidden": false,
"name": "Thomas Chong",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-27T09:14:49.567Z",
"user": {
"_id": "6365d5baa7a1324ccd5ecdb9",
"avatarUrl": "/avatars/636d3f410b878e451a878a6cf171dd53.svg",
"fullname": "Thomas Chong",
"isPro": false,
"type": "user",
"user": "chongcht"
}
},
{
"_id": "67bfd6f15db054ee3c5a766d",
"hidden": false,
"name": "Jonathan Leung",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bfd6f15db054ee3c5a766e",
"hidden": false,
"name": "Krish Shah",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-27T09:14:47.269Z",
"user": {
"_id": "67bfdfdbf856fd8ddbb7e0f0",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/no-auth/rIR0QnVM3wxMCulG2R9SJ.png",
"fullname": "Krish Shah",
"isPro": false,
"type": "user",
"user": "KrishKrosh"
}
},
{
"_id": "67bfd6f15db054ee3c5a766f",
"hidden": false,
"name": "Alvin Yu",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-27T09:14:52.146Z",
"user": {
"_id": "6696061aa8dbb9a9997dfff6",
"avatarUrl": "/avatars/d8f0bbff362fd630e6e60aab141076d3.svg",
"fullname": "Alvin Yu",
"isPro": false,
"type": "user",
"user": "AlvinYuVotee"
}
},
{
"_id": "67bfd6f15db054ee3c5a7670",
"hidden": false,
"name": "Wenhu Chen",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-26T18:50:09 | TheoremExplainAgent: Towards Multimodal Explanations for LLM Theorem
Understanding | Understanding domain-specific theorems often requires more than just
text-based reasoning; effective communication through structured visual
explanations is crucial for deeper comprehension. While large language models
(LLMs) demonstrate strong performance in text-based theorem reasoning, their
ability to generate coherent and pedagogically meaningful visual explanations
remains an open challenge. In this work, we introduce TheoremExplainAgent, an
agentic approach for generating long-form theorem explanation videos (over 5
minutes) using Manim animations. To systematically evaluate multimodal theorem
explanations, we propose TheoremExplainBench, a benchmark covering 240 theorems
across multiple STEM disciplines, along with 5 automated evaluation metrics.
Our results reveal that agentic planning is essential for generating detailed
long-form videos, and the o3-mini agent achieves a success rate of 93.8% and an
overall score of 0.77. However, our quantitative and qualitative studies show
that most of the videos produced exhibit minor issues with visual element
layout. Furthermore, multimodal explanations expose deeper reasoning flaws that
text-based explanations fail to reveal, highlighting the importance of
multimodal explanations. | 41 | 67bfd6f25db054ee3c5a7699 | https://tiger-ai-lab.github.io/TheoremExplainAgent/ | https://github.com/TIGER-AI-Lab/TheoremExplainAgent |
|
2025-02-26T22:05:16.150000 | Agentic Reward Modeling: Integrating Human Preferences with Verifiable Correctness Signals for Reliable Reward Systems | 2 | {
"_id": "625a5446f1063e7085d5178a",
"avatarUrl": "/avatars/5e78186f13f74b14e01583e06ff6c4dc.svg",
"followerCount": 1,
"fullname": "Hao Peng",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "Wesleythu",
"type": "user"
} | true | null | 2502.19328 | [
{
"_id": "67bfcb774d22a9379b29334c",
"hidden": false,
"name": "Hao Peng",
"status": "claimed_verified",
"statusLastChangedAt": "2025-03-02T20:18:21.224Z",
"user": {
"_id": "625a5446f1063e7085d5178a",
"avatarUrl": "/avatars/5e78186f13f74b14e01583e06ff6c4dc.svg",
"fullname": "Hao Peng",
"isPro": false,
"type": "user",
"user": "Wesleythu"
}
},
{
"_id": "67bfcb774d22a9379b29334d",
"hidden": false,
"name": "Yunjia Qi",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bfcb774d22a9379b29334e",
"hidden": false,
"name": "Xiaozhi Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bfcb774d22a9379b29334f",
"hidden": false,
"name": "Zijun Yao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bfcb774d22a9379b293350",
"hidden": false,
"name": "Bin Xu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bfcb774d22a9379b293351",
"hidden": false,
"name": "Lei Hou",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bfcb774d22a9379b293352",
"hidden": false,
"name": "Juanzi Li",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-26T17:19:12 | Agentic Reward Modeling: Integrating Human Preferences with Verifiable
Correctness Signals for Reliable Reward Systems | Reward models (RMs) are crucial for the training and inference-time scaling
up of large language models (LLMs). However, existing reward models primarily
focus on human preferences, neglecting verifiable correctness signals which
have shown strong potential in training LLMs. In this paper, we propose agentic
reward modeling, a reward system that combines reward models with verifiable
correctness signals from different aspects to provide reliable rewards. We
empirically implement a reward agent, named RewardAgent, that combines human
preference rewards with two verifiable signals: factuality and instruction
following, to provide more reliable rewards. We conduct comprehensive
experiments on existing reward model benchmarks and inference-time best-of-n
searches on real-world downstream tasks. RewardAgent significantly outperforms
vanilla reward models, demonstrating its effectiveness. We further construct
training preference pairs using RewardAgent and train an LLM with the DPO
objective, achieving superior performance on various NLP benchmarks compared to
conventional reward models. Our codes are publicly released to facilitate
further research (https://github.com/THU-KEG/Agentic-Reward-Modeling). | 20 | 67bfcb784d22a9379b29338f | null | null |
|
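A minimal sketch in the abstract's spirit: a scalar preference score is only trusted when verifiable checks pass, here toy instruction-following and factuality verifiers. The hard gate and the stub verifiers are illustrative assumptions, not RewardAgent's actual design.

```python
def follows_instruction(response: str, max_words: int) -> bool:
    """Stand-in verifier: a length constraint as the 'instruction'."""
    return len(response.split()) <= max_words

def is_factual(response: str, facts: set[str]) -> bool:
    """Stand-in verifier: every sentence must appear in a trusted fact set."""
    return all(claim in facts for claim in response.split(". ") if claim)

def agentic_reward(pref_score: float, response: str,
                   max_words: int, facts: set[str]) -> float:
    checks = [follows_instruction(response, max_words),
              is_factual(response, facts)]
    return pref_score if all(checks) else 0.0  # hard gate on correctness

facts = {"Paris is the capital of France"}
print(agentic_reward(0.9, "Paris is the capital of France", 10, facts))  # 0.9
print(agentic_reward(0.9, "Lyon is the capital of France", 10, facts))   # 0.0
```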
2025-02-26T22:02:50.690000 | VEM: Environment-Free Exploration for Training GUI Agent with Value Environment Model | 2 | {
"_id": "654dbac9938fbf1e696be8aa",
"avatarUrl": "/avatars/b3c4035c48169c1bfb04a439fce3499f.svg",
"followerCount": 2,
"fullname": "Chaoyun Zhang",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "vyokky",
"type": "user"
} | true | null | 2502.18906 | [
{
"_id": "67bfd5d2381f8fcb67e5ad36",
"hidden": false,
"name": "Jiani Zheng",
"status": "claimed_verified",
"statusLastChangedAt": "2025-03-02T20:18:18.436Z",
"user": {
"_id": "64531f631a57e1179c203e6b",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/64531f631a57e1179c203e6b/C_J7pXFLqoJoHYPPhK3J9.jpeg",
"fullname": "zjn",
"isPro": false,
"type": "user",
"user": "garlicisnotmyfavor"
}
},
{
"_id": "67bfd5d2381f8fcb67e5ad37",
"hidden": false,
"name": "Lu Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bfd5d2381f8fcb67e5ad38",
"hidden": false,
"name": "Fangkai Yang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-27T09:14:57.452Z",
"user": {
"_id": "669dcf6200970c3b27aafa5d",
"avatarUrl": "/avatars/bb9ed5ff86326fdaeb184c6b0e40f74f.svg",
"fullname": "kaikai yang",
"isPro": false,
"type": "user",
"user": "keanudicap"
}
},
{
"_id": "67bfd5d2381f8fcb67e5ad39",
"hidden": false,
"name": "Chaoyun Zhang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-27T09:14:59.653Z",
"user": {
"_id": "654dbac9938fbf1e696be8aa",
"avatarUrl": "/avatars/b3c4035c48169c1bfb04a439fce3499f.svg",
"fullname": "Chaoyun Zhang",
"isPro": false,
"type": "user",
"user": "vyokky"
}
},
{
"_id": "67bfd5d2381f8fcb67e5ad3a",
"hidden": false,
"name": "Lingrui Mei",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bfd5d2381f8fcb67e5ad3b",
"hidden": false,
"name": "Wenjie Yin",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bfd5d2381f8fcb67e5ad3c",
"hidden": false,
"name": "Qingwei Lin",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bfd5d2381f8fcb67e5ad3d",
"hidden": false,
"name": "Dongmei Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bfd5d2381f8fcb67e5ad3e",
"hidden": false,
"name": "Saravan Rajmohan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bfd5d2381f8fcb67e5ad3f",
"hidden": false,
"name": "Qi Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-26T07:52:02 | VEM: Environment-Free Exploration for Training GUI Agent with Value
Environment Model | Training Vision-Language Models (VLMs) for Graphical User Interfaces (GUI)
agents via Reinforcement Learning (RL) faces critical challenges:
environment-based RL requires costly interactions, while environment-free
methods struggle with distribution shift and reward generalization. We propose
an environment-free RL framework that decouples value estimation from policy
optimization by leveraging a pretrained Value Environment Model (VEM). VEM
predicts state-action values directly from offline data, distilling human-like
priors about GUI interaction outcomes without requiring next-state prediction
or environmental feedback. This avoids compounding errors and enhances
resilience to UI changes by focusing on semantic reasoning (e.g., Does this
action advance the user's goal?). The framework operates in two stages: (1)
pretraining VEM to estimate long-term action utilities and (2) guiding policy
exploration with frozen VEM signals, enabling layout-agnostic GUI automation.
Evaluated on Android-in-the-Wild benchmarks, VEM achieves state-of-the-art
performance in both offline and online settings, outperforming environment-free
baselines significantly and matching environment-based approaches without
interaction costs. Importantly, VEM demonstrates that semantic-aware value
estimation can achieve comparable performance with online-trained methods. | 11 | 67bfd5d7381f8fcb67e5ae3d | null | null |
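Stage 2 of the framework, as described above, amounts to scoring candidate actions with the frozen pretrained value model and steering the policy toward the highest-value one, with no environment rollout. A minimal sketch under that reading (the function shape and stub scores are illustrative assumptions, not the paper's interface):

```python
from typing import Callable, List

def vem_guided_step(candidates: List[str], state: str,
                    vem: Callable[[str, str], float]) -> str:
    """Pick the GUI action the frozen Value Environment Model scores
    highest for the current state; no environment interaction needed.
    A sketch of the abstract's stage-2 idea, not the released code."""
    return max(candidates, key=lambda action: vem(state, action))

# Stub VEM that prefers tapping the search box for this toy state.
vem = lambda s, a: {"tap(search_box)": 0.9, "scroll(down)": 0.3, "tap(back)": 0.1}[a]
print(vem_guided_step(["tap(search_box)", "scroll(down)", "tap(back)"],
                      "home_screen", vem))  # tap(search_box)
```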
|
2025-02-26T18:40:15.965000 | Scaling LLM Pre-training with Vocabulary Curriculum | 2 | {
"_id": "64d98ef7a4839890b25eb78b",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/64d98ef7a4839890b25eb78b/215-CSVLl81z6CAq0ECWU.jpeg",
"followerCount": 14,
"fullname": "Fangyuan Yu",
"isHf": false,
"isMod": false,
"isPro": true,
"name": "Ksgk-fy",
"type": "user"
} | true | null | 2502.17910 | [
{
"_id": "67be7f96b4ca41e2807a4fb0",
"hidden": false,
"name": "Fangyuan Yu",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-26T08:26:07.778Z",
"user": {
"_id": "64d98ef7a4839890b25eb78b",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/64d98ef7a4839890b25eb78b/215-CSVLl81z6CAq0ECWU.jpeg",
"fullname": "Fangyuan Yu",
"isPro": true,
"type": "user",
"user": "Ksgk-fy"
}
}
] | 2025-02-25T07:18:29 | Scaling LLM Pre-training with Vocabulary Curriculum | Modern language models rely on static vocabularies, fixed before pretraining,
in contrast to the adaptive vocabulary acquisition observed in human language
learning. To bridge this gap, we introduce vocabulary curriculum learning, an
approach that improves pretraining efficiency with log-linear scaling gains
relative to vocabulary size. Our method alternates between entropy-guided
vocabulary expansion and model optimization, enabling models to learn
transferable representations across diverse tokenization granularities. This
approach naturally gives rise to an optimal computation allocation pattern:
longer tokens capture predictable content, while shorter tokens focus on more
complex, harder-to-predict contexts. Experiments on small-scale GPT models
demonstrate improved scaling efficiency, reinforcing the effectiveness of
dynamic tokenization. We release our code to support further research and plan
to extend our experiments to larger models and diverse domains. | 1 | 67be7f97b4ca41e2807a4fed | null | null |
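The entropy-guided expansion step can be pictured as follows: spans whose continuation is nearly deterministic (low conditional entropy) are candidates for longer tokens, while unpredictable contexts stay on short tokens. The sketch below illustrates that idea with simple bigram statistics; the threshold, the merge rule, and the alternation with model optimization are assumptions, not the paper's algorithm.

```python
import math
from collections import Counter, defaultdict

def low_entropy_merges(tokens: list, threshold_bits: float = 1.0) -> set:
    """Propose token pairs to merge into longer vocabulary entries:
    pairs whose conditional next-token entropy H(next | current) falls
    below a threshold, i.e. predictable content. Illustrative only."""
    follow = defaultdict(Counter)
    for cur, nxt in zip(tokens, tokens[1:]):
        follow[cur][nxt] += 1
    merges = set()
    for cur, counter in follow.items():
        total = sum(counter.values())
        entropy = -sum((c / total) * math.log2(c / total)
                       for c in counter.values())
        if entropy < threshold_bits:
            merges.add((cur, counter.most_common(1)[0][0]))
    return merges

toks = "the cat sat on the mat the cat ran".split()
print(low_entropy_merges(toks))  # e.g. {('the', 'cat'), ('sat', 'on'), ...}
```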
|
2025-02-26T16:56:34.818000 | LDGen: Enhancing Text-to-Image Synthesis via Large Language Model-Driven Language Representation | 2 | {
"_id": "64edcb9b84cc47a8b50bfab7",
"avatarUrl": "/avatars/1b4defb79eef3753a540efa76c16462a.svg",
"followerCount": 1,
"fullname": "Li",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "Kinpz",
"type": "user"
} | true | null | 2502.18302 | [
{
"_id": "67befd09afb202a5b7518572",
"hidden": false,
"name": "Pengzhi Li",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-26T15:37:26.179Z",
"user": {
"_id": "64edcb9b84cc47a8b50bfab7",
"avatarUrl": "/avatars/1b4defb79eef3753a540efa76c16462a.svg",
"fullname": "Li",
"isPro": false,
"type": "user",
"user": "Kinpz"
}
},
{
"_id": "67befd09afb202a5b7518573",
"hidden": false,
"name": "Pengfei Yu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67befd09afb202a5b7518574",
"hidden": false,
"name": "Zide Liu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-28T13:08:16.002Z",
"user": {
"_id": "6431974f034ecbefddd4b463",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/O6yDcmDTZk9IH2wIbuCQ3.jpeg",
"fullname": "刘自得",
"isPro": false,
"type": "user",
"user": "zideliu"
}
},
{
"_id": "67befd09afb202a5b7518575",
"hidden": false,
"name": "Wei He",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67befd09afb202a5b7518576",
"hidden": false,
"name": "Xuhao Pan",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-28T13:09:11.684Z",
"user": {
"_id": "64478f36a478b20f17543240",
"avatarUrl": "/avatars/d5d40abc458282852cb758c6ca664038.svg",
"fullname": "Pan Xuhao",
"isPro": false,
"type": "user",
"user": "qingtian0"
}
},
{
"_id": "67befd09afb202a5b7518577",
"hidden": false,
"name": "Xudong Rao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67befd09afb202a5b7518578",
"hidden": false,
"name": "Tao Wei",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67befd09afb202a5b7518579",
"hidden": false,
"name": "Wei Chen",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-28T13:09:29.481Z",
"user": {
"_id": "632acc6b7fb39c2b6351d84e",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/632acc6b7fb39c2b6351d84e/2_VLtRuRUl-hNW4a89qUp.jpeg",
"fullname": "wei chen",
"isPro": false,
"type": "user",
"user": "weichen"
}
}
] | 2025-02-25T15:42:34 | LDGen: Enhancing Text-to-Image Synthesis via Large Language Model-Driven
Language Representation | In this paper, we introduce LDGen, a novel method for integrating large
language models (LLMs) into existing text-to-image diffusion models while
minimizing computational demands. Traditional text encoders, such as CLIP and
T5, exhibit limitations in multilingual processing, hindering image generation
across diverse languages. We address these challenges by leveraging the
advanced capabilities of LLMs. Our approach employs a language representation
strategy that applies hierarchical caption optimization and human instruction
techniques to derive precise semantic information. Subsequently, we
incorporate a lightweight adapter and a cross-modal refiner to facilitate
efficient feature alignment and interaction between LLMs and image features.
LDGen reduces training time and enables zero-shot multilingual image
generation. Experimental results indicate that our method surpasses baseline
models in both prompt adherence and image aesthetic quality, while seamlessly
supporting multiple languages. Project page: https://zrealli.github.io/LDGen. | 4 | 67befd0cafb202a5b751865e | null | null |
|
2025-02-26T14:46:51.721000 | MLLMs Know Where to Look: Training-free Perception of Small Visual Details with Multimodal LLMs | 2 | {
"_id": "635b99d47a1656011516bff9",
"avatarUrl": "/avatars/7243c4171ff127ba90631f105881d9d7.svg",
"followerCount": 3,
"fullname": "jiarui zhang",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "jrzhang",
"type": "user"
} | true | null | 2502.17422 | [
{
"_id": "67bf6ea633d6740f711cc995",
"hidden": false,
"name": "Jiarui Zhang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-27T09:15:01.691Z",
"user": {
"_id": "635b99d47a1656011516bff9",
"avatarUrl": "/avatars/7243c4171ff127ba90631f105881d9d7.svg",
"fullname": "jiarui zhang",
"isPro": false,
"type": "user",
"user": "jrzhang"
}
},
{
"_id": "67bf6ea633d6740f711cc996",
"hidden": false,
"name": "Mahyar Khayatkhoei",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bf6ea633d6740f711cc997",
"hidden": false,
"name": "Prateek Chhikara",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bf6ea633d6740f711cc998",
"hidden": false,
"name": "Filip Ilievski",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-24T18:54:40 | MLLMs Know Where to Look: Training-free Perception of Small Visual
Details with Multimodal LLMs | Multimodal Large Language Models (MLLMs) have experienced rapid progress in
visual recognition tasks in recent years. Given their potential integration
into many critical applications, it is important to understand the limitations
of their visual perception. In this work, we study whether MLLMs can perceive
small visual details as effectively as large ones when answering questions
about images. We observe that their performance is very sensitive to the size
of the visual subject of the question, and further show that this effect is in
fact causal by conducting an intervention study. Next, we study the attention
patterns of MLLMs when answering visual questions, and intriguingly find that
they consistently know where to look, even when they provide the wrong answer.
Based on these findings, we then propose training-free visual intervention
methods that leverage the internal knowledge of any MLLM itself, in the form of
attention and gradient maps, to enhance its perception of small visual details.
We evaluate our proposed methods on two widely-used MLLMs and seven visual
question answering benchmarks and show that they can significantly improve
MLLMs' accuracy without requiring any training. Our results elucidate the risk
of applying MLLMs to visual recognition tasks concerning small details and
indicate that visual intervention using the model's internal state is a
promising direction to mitigate this risk. | 7 | 67bf6eaa33d6740f711ccac2 | null | null |
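One of the training-free interventions the abstract describes can be approximated as: read the model's own attention map, crop the image around its peak, and re-ask the question on the crop. The sketch below assumes a 2-D attention grid and a fixed-fraction crop; both are illustrative simplifications of the paper's actual attention- and gradient-based interventions.

```python
import numpy as np

def attention_crop(image: np.ndarray, attn: np.ndarray, frac: float = 0.4) -> np.ndarray:
    """Crop `image` around the peak of a (grid_h, grid_w) attention map,
    keeping a `frac` fraction of the grid per side. A toy sketch of the
    'MLLMs know where to look' intervention, not the paper's exact rule."""
    h, w = attn.shape
    y, x = np.unravel_index(np.argmax(attn), attn.shape)   # attention peak
    ch, cw = int(h * frac), int(w * frac)                  # crop size in grid cells
    y0, x0 = max(0, y - ch // 2), max(0, x - cw // 2)
    sy, sx = image.shape[0] / h, image.shape[1] / w        # grid -> pixel scale
    return image[int(y0 * sy): int((y0 + ch) * sy),
                 int(x0 * sx): int((x0 + cw) * sx)]

img = np.zeros((224, 224, 3), dtype=np.uint8)
attn = np.zeros((16, 16)); attn[3, 12] = 1.0               # toy attention map
print(attention_crop(img, attn).shape)                     # (84, 84, 3)
```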
|
2025-02-26T12:51:05.089000 | Curie: Toward Rigorous and Automated Scientific Experimentation with AI Agents | 5 | {
"_id": "648fc22019e7511674b31f12",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/648fc22019e7511674b31f12/9kRR00GMFYcuj6zR0BVfx.jpeg",
"followerCount": 1,
"fullname": "Amber",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "AmberLJC",
"type": "user"
} | false | null | 2502.16069 | [
{
"_id": "67bf51f8653c05485b571e71",
"hidden": false,
"name": "Patrick Tser Jern Kon",
"status": "claimed_verified",
"statusLastChangedAt": "2025-03-02T20:18:25.807Z",
"user": {
"_id": "64b7111e17681d64b19cf95e",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/64b7111e17681d64b19cf95e/VHPfCUl1nBS3OMMVi96CR.jpeg",
"fullname": "Patrick Kon",
"isPro": false,
"type": "user",
"user": "patkon"
}
},
{
"_id": "67bf51f8653c05485b571e72",
"hidden": false,
"name": "Jiachen Liu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bf51f8653c05485b571e73",
"hidden": false,
"name": "Qiuyi Ding",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bf51f8653c05485b571e74",
"hidden": false,
"name": "Yiming Qiu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bf51f8653c05485b571e75",
"hidden": false,
"name": "Zhenning Yang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bf51f8653c05485b571e76",
"hidden": false,
"name": "Yibo Huang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bf51f8653c05485b571e77",
"hidden": false,
"name": "Jayanth Srinivasa",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bf51f8653c05485b571e78",
"hidden": false,
"name": "Myungjin Lee",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bf51f8653c05485b571e79",
"hidden": false,
"name": "Mosharaf Chowdhury",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bf51f8653c05485b571e7a",
"hidden": false,
"name": "Ang Chen",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-22T03:58:19 | Curie: Toward Rigorous and Automated Scientific Experimentation with AI
Agents | Scientific experimentation, a cornerstone of human progress, demands rigor in
reliability, methodical control, and interpretability to yield meaningful
results. Despite the growing capabilities of large language models (LLMs) in
automating different aspects of the scientific process, automating rigorous
experimentation remains a significant challenge. To address this gap, we
propose Curie, an AI agent framework designed to embed rigor into the
experimentation process through three key components: an intra-agent rigor
module to enhance reliability, an inter-agent rigor module to maintain
methodical control, and an experiment knowledge module to enhance
interpretability. To evaluate Curie, we design a novel experimental benchmark
composed of 46 questions across four computer science domains, derived from
influential research papers and widely adopted open-source projects. Compared
to the strongest baseline tested, we achieve a 3.4× improvement in
correctly answering experimental questions. Curie is open-sourced at
https://github.com/Just-Curieous/Curie. | 17 | 67bf51fa653c05485b571f00 | null | null |
|
2025-02-26T12:34:59.916000 | An Overview of Large Language Models for Statisticians | 2 | {
"_id": "60f1abe7544c2adfd699860c",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"followerCount": 6280,
"fullname": "AK",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "akhaliq",
"type": "user"
} | false | null | 2502.17814 | [
{
"_id": "67bf50aa9a1df81dba235650",
"hidden": false,
"name": "Wenlong Ji",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bf50aa9a1df81dba235651",
"hidden": false,
"name": "Weizhe Yuan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bf50aa9a1df81dba235652",
"hidden": false,
"name": "Emily Getzen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bf50aa9a1df81dba235653",
"hidden": false,
"name": "Kyunghyun Cho",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bf50aa9a1df81dba235654",
"hidden": false,
"name": "Michael I. Jordan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bf50aa9a1df81dba235655",
"hidden": false,
"name": "Song Mei",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bf50aa9a1df81dba235656",
"hidden": false,
"name": "Jason E Weston",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bf50aa9a1df81dba235657",
"hidden": false,
"name": "Weijie J. Su",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bf50aa9a1df81dba235658",
"hidden": false,
"name": "Jing Xu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bf50aa9a1df81dba235659",
"hidden": false,
"name": "Linjun Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-25T03:40:36 | An Overview of Large Language Models for Statisticians | Large Language Models (LLMs) have emerged as transformative tools in
artificial intelligence (AI), exhibiting remarkable capabilities across diverse
tasks such as text generation, reasoning, and decision-making. While their
success has primarily been driven by advances in computational power and deep
learning architectures, emerging problems -- in areas such as uncertainty
quantification, decision-making, causal inference, and distribution shift --
require a deeper engagement with the field of statistics. This paper explores
potential areas where statisticians can make important contributions to the
development of LLMs, particularly those that aim to engender trustworthiness
and transparency for human users. Thus, we focus on issues such as uncertainty
quantification, interpretability, fairness, privacy, watermarking and model
adaptation. We also consider possible roles for LLMs in statistical analysis.
By bridging AI and statistics, we aim to foster a deeper collaboration that
advances both the theoretical foundations and practical applications of LLMs,
ultimately shaping their role in addressing complex societal challenges. | 4 | 67bf50ab9a1df81dba2356ba | null | null |
|
2025-02-26T10:53:44.153000 | WiCkeD: A Simple Method to Make Multiple Choice Benchmarks More Challenging | 2 | {
"_id": "6586f687ce38d143c4092ed7",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6586f687ce38d143c4092ed7/uPYZgk-lGGEfxa0kASX0y.jpeg",
"followerCount": null,
"fullname": "Ahmed Mohamed Elhady",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "ahmedselhady",
"type": "user"
} | true | null | 2502.18316 | [
{
"_id": "67bf0ccbb2f5c23eb0a69a7d",
"hidden": false,
"name": "Ahmed Elhady",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-26T15:37:14.872Z",
"user": {
"_id": "6586f687ce38d143c4092ed7",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6586f687ce38d143c4092ed7/uPYZgk-lGGEfxa0kASX0y.jpeg",
"fullname": "Ahmed Mohamed Elhady",
"isPro": false,
"type": "user",
"user": "ahmedselhady"
}
},
{
"_id": "67bf0ccbb2f5c23eb0a69a7e",
"hidden": false,
"name": "Eneko Agirre",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bf0ccbb2f5c23eb0a69a7f",
"hidden": false,
"name": "Mikel Artetxe",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-25T16:09:38 | WiCkeD: A Simple Method to Make Multiple Choice Benchmarks More
Challenging | We introduce WiCkeD, a simple method to increase the complexity of existing
multiple-choice benchmarks by randomly replacing a choice with "None of the
above", a method often used in educational tests. We show that WiCkeD can be
automatically applied to any existing benchmark, making it more challenging. We
apply WiCkeD to 6 popular benchmarks and use it to evaluate 18 open-weight
LLMs. The performance of the models drops 12.1 points on average with respect
to the original versions of the datasets. When using chain-of-thought on 3 MMLU
datasets, the performance drop for the WiCkeD variant is similar to the one
observed when using the LLMs directly, showing that WiCkeD is also challenging
for models with enhanced reasoning abilities. WiCkeD also uncovers that some
models are more sensitive to the extra reasoning required, providing additional
information with respect to the original benchmarks. We release our code and
data at https://github.com/ahmedselhady/wicked-benchmarks. | 2 | 67bf0ccdb2f5c23eb0a69b25 | null | null |
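The transformation itself is one line of logic: pick a random option slot and overwrite it with "None of the above". A minimal sketch (the data layout is an assumption; the authors' released code is the reference implementation):

```python
import random
from typing import List, Tuple

def wicked(choices: List[str], answer_idx: int,
           rng: random.Random) -> Tuple[List[str], int]:
    """Randomly replace one option with 'None of the above' (WiCkeD).
    The gold index stays put: if the replaced slot held the gold answer,
    'None of the above' in that slot becomes the correct choice."""
    new_choices = list(choices)
    new_choices[rng.randrange(len(new_choices))] = "None of the above"
    return new_choices, answer_idx

rng = random.Random(0)
print(wicked(["3", "4", "5", "6"], answer_idx=1, rng=rng))
```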
|
2025-02-26T10:43:07.864000 | Prompt-to-Leaderboard | 3 | {
"_id": "60f1abe7544c2adfd699860c",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"followerCount": 6280,
"fullname": "AK",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "akhaliq",
"type": "user"
} | false | null | 2502.14855 | [
{
"_id": "67b8e77477a3ed169f302415",
"hidden": false,
"name": "Evan Frick",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b8e77477a3ed169f302416",
"hidden": false,
"name": "Connor Chen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b8e77477a3ed169f302417",
"hidden": false,
"name": "Joseph Tennyson",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b8e77477a3ed169f302418",
"hidden": false,
"name": "Tianle Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b8e77477a3ed169f302419",
"hidden": false,
"name": "Wei-Lin Chiang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b8e77477a3ed169f30241a",
"hidden": false,
"name": "Anastasios N. Angelopoulos",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b8e77477a3ed169f30241b",
"hidden": false,
"name": "Ion Stoica",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-20T18:58:07 | Prompt-to-Leaderboard | Large language model (LLM) evaluations typically rely on aggregated metrics
like accuracy or human preference, averaging across users and prompts. This
averaging obscures user- and prompt-specific variations in model performance.
To address this, we propose Prompt-to-Leaderboard (P2L), a method that produces
leaderboards specific to a prompt. The core idea is to train an LLM taking
natural language prompts as input to output a vector of Bradley-Terry
coefficients which are then used to predict the human preference vote. The
resulting prompt-dependent leaderboards allow for unsupervised task-specific
evaluation, optimal routing of queries to models, personalization, and
automated evaluation of model strengths and weaknesses. Data from Chatbot Arena
suggest that P2L better captures the nuanced landscape of language model
performance than the averaged leaderboard. Furthermore, our findings suggest
that P2L's ability to produce prompt-specific evaluations follows a power law
scaling similar to that observed in LLMs themselves. In January 2025, the
router we trained based on this methodology achieved the #1 spot in the
Chatbot Arena leaderboard. Our code is available at this GitHub link:
https://github.com/lmarena/p2l. | 7 | 67b8e77577a3ed169f302470 | null | null |
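P2L's output is a per-prompt vector of Bradley-Terry coefficients, so the predicted win probability between two models follows the standard Bradley-Terry form. A small sketch of how such coefficients yield a prompt-specific leaderboard (the coefficient values below are made up):

```python
import math

def bt_win_prob(theta_a: float, theta_b: float) -> float:
    """Bradley-Terry probability that model A beats model B,
    given per-prompt coefficients theta (the quantities P2L predicts)."""
    return 1.0 / (1.0 + math.exp(theta_b - theta_a))

# Hypothetical per-prompt coefficients for three models on one prompt.
coeffs = {"model-x": 1.2, "model-y": 0.4, "model-z": -0.3}
ranking = sorted(coeffs, key=coeffs.get, reverse=True)    # prompt-specific leaderboard
print(ranking)                                            # ['model-x', 'model-y', 'model-z']
print(bt_win_prob(coeffs["model-x"], coeffs["model-y"]))  # ~0.69
```

Ranking models by these coefficients per prompt is also what makes the routing application natural: send each query to the model with the largest predicted coefficient.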
|
2025-02-26T09:40:23.169000 | Finding the Sweet Spot: Preference Data Construction for Scaling Preference Optimization | 2 | {
"_id": "6239888e7fef05b7bdd5fcff",
"avatarUrl": "/avatars/54fcc756b8c0936b6bb410c6e0e02d75.svg",
"followerCount": 1,
"fullname": "Hai Ye",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "oceanpty",
"type": "user"
} | false | null | 2502.16825 | [
{
"_id": "67bf243823f222a2cc2858d0",
"hidden": false,
"name": "Yao Xiao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bf243823f222a2cc2858d1",
"hidden": false,
"name": "Hai Ye",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bf243823f222a2cc2858d2",
"hidden": false,
"name": "Linyao Chen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bf243823f222a2cc2858d3",
"hidden": false,
"name": "Hwee Tou Ng",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bf243823f222a2cc2858d4",
"hidden": false,
"name": "Lidong Bing",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bf243823f222a2cc2858d5",
"hidden": false,
"name": "Xiaoli Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bf243823f222a2cc2858d6",
"hidden": false,
"name": "Roy Ka-wei Lee",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-24T04:22:57 | Finding the Sweet Spot: Preference Data Construction for Scaling
Preference Optimization | Iterative data generation and model retraining are widely used to align large
language models (LLMs). It typically involves a policy model to generate
on-policy responses and a reward model to guide training data selection. Direct
Preference Optimization (DPO) further enhances this process by constructing
preference pairs of chosen and rejected responses. In this work, we aim to
scale up the number of on-policy samples via repeated random sampling to
improve alignment performance. Conventional practice selects the sample with
the highest reward as chosen and the lowest as rejected for DPO. However, our
experiments reveal that this strategy leads to a decline in performance
as the sample size increases. To address this, we investigate preference data
construction through the lens of the underlying normal distribution of sample
rewards. We categorize the reward space into seven representative points and
systematically explore all 21 (C(7,2)) pairwise combinations. Through
evaluations on four models using AlpacaEval 2, we find that selecting the
rejected response at reward position μ − 2σ, rather than the minimum
reward, is crucial for optimal performance. We finally introduce a scalable
preference data construction strategy that consistently enhances model
performance as the sample scale increases. | 6 | 67bf243923f222a2cc285919 | null | null |
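Operationally, the finding reads as: keep the max-reward sample as chosen, but pick as rejected the sample whose reward sits nearest μ − 2σ of the empirical reward distribution, not the minimum. A hedged sketch of that selection rule (the paper's exact procedure is authoritative):

```python
import statistics
from typing import List, Tuple

def build_preference_pair(responses: List[str],
                          rewards: List[float]) -> Tuple[str, str]:
    """Pick (chosen, rejected) for DPO from n sampled responses:
    chosen = max-reward sample; rejected = sample closest in reward to
    mu - 2*sigma. A sketch of the abstract's recipe, not the exact code."""
    mu = statistics.mean(rewards)
    sigma = statistics.pstdev(rewards)
    target = mu - 2 * sigma
    chosen = max(range(len(rewards)), key=rewards.__getitem__)
    rejected = min(range(len(rewards)), key=lambda i: abs(rewards[i] - target))
    return responses[chosen], responses[rejected]

responses = [f"resp{i}" for i in range(8)]
rewards = [0.9, 0.2, 0.5, 0.7, 0.1, 0.6, 0.4, 0.8]
print(build_preference_pair(responses, rewards))  # ('resp0', 'resp4')
```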
|
2025-02-26T07:28:05.618000 | LaTIM: Measuring Latent Token-to-Token Interactions in Mamba Models | 2 | {
"_id": "62d19a4b1e36881a57f31c6a",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/62d19a4b1e36881a57f31c6a/C-tAc0uXvpIggh0nWB2Dy.jpeg",
"followerCount": 1,
"fullname": "Hugo Pitorro",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "twigs",
"type": "user"
} | true | [
"https://cdn-uploads.huggingface.co/production/uploads/62d19a4b1e36881a57f31c6a/GN78Zj956f5CoUuGof_RC.png"
] | 2502.15612 | [
{
"_id": "67bc8aeb70194f240328e1cf",
"hidden": false,
"name": "Hugo Pitorro",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-24T15:46:13.861Z",
"user": {
"_id": "62d19a4b1e36881a57f31c6a",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/62d19a4b1e36881a57f31c6a/C-tAc0uXvpIggh0nWB2Dy.jpeg",
"fullname": "Hugo Pitorro",
"isPro": false,
"type": "user",
"user": "twigs"
}
},
{
"_id": "67bc8aeb70194f240328e1d0",
"hidden": false,
"name": "Marcos Treviso",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-21T17:33:59 | LaTIM: Measuring Latent Token-to-Token Interactions in Mamba Models | State space models (SSMs), such as Mamba, have emerged as an efficient
alternative to transformers for long-context sequence modeling. However,
despite their growing adoption, SSMs lack the interpretability tools that have
been crucial for understanding and improving attention-based architectures.
While recent efforts provide insights into Mamba's internal mechanisms, they do
not explicitly decompose token-wise contributions, leaving gaps in
understanding how Mamba selectively processes sequences across layers. In this
work, we introduce LaTIM, a novel token-level decomposition method for both
Mamba-1 and Mamba-2 that enables fine-grained interpretability. We extensively
evaluate our method across diverse tasks, including machine translation,
copying, and retrieval-based generation, demonstrating its effectiveness in
revealing Mamba's token-to-token interaction patterns. | 4 | 67bc8aed70194f240328e2cc | null | null |
|
2025-02-26T02:37:36.287000 | Introducing Visual Perception Token into Multimodal Large Language Model | 2 | {
"_id": "635364b3c41f548fe39db945",
"avatarUrl": "/avatars/ad1916bbfabca0b6651c8eabacc5eba8.svg",
"followerCount": 2,
"fullname": "Runpeng Yu",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "rp-yu",
"type": "user"
} | true | null | 2502.17425 | [
{
"_id": "67bddd63c7d8b835b82ced9a",
"hidden": false,
"name": "Runpeng Yu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-26T09:14:07.580Z",
"user": {
"_id": "635364b3c41f548fe39db945",
"avatarUrl": "/avatars/ad1916bbfabca0b6651c8eabacc5eba8.svg",
"fullname": "Runpeng Yu",
"isPro": false,
"type": "user",
"user": "rp-yu"
}
},
{
"_id": "67bddd63c7d8b835b82ced9b",
"hidden": false,
"name": "Xinyin Ma",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-26T09:14:13.670Z",
"user": {
"_id": "64396ebc21221ac7411852b3",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/64396ebc21221ac7411852b3/SR0dC8N0bdj9tZFxYPpSf.jpeg",
"fullname": "Xinyin Ma",
"isPro": false,
"type": "user",
"user": "horseee"
}
},
{
"_id": "67bddd63c7d8b835b82ced9c",
"hidden": false,
"name": "Xinchao Wang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-26T09:14:53.838Z",
"user": {
"_id": "63fc03a50aab060792ffef39",
"avatarUrl": "/avatars/9d5b1bb2a41928e08176b703935133ab.svg",
"fullname": "Wangxinchao",
"isPro": false,
"type": "user",
"user": "wxcTest"
}
}
] | 2025-02-24T18:56:12 | Introducing Visual Perception Token into Multimodal Large Language Model | To utilize visual information, Multimodal Large Language Model (MLLM) relies
on the perception process of its vision encoder. The completeness and accuracy
of visual perception significantly influence the precision of spatial
reasoning, fine-grained understanding, and other tasks. However, MLLM still
lacks the autonomous capability to control its own visual perception processes,
for example, selectively reviewing specific regions of an image or focusing on
information related to specific object categories. In this work, we propose the
concept of Visual Perception Token, aiming to empower MLLM with a mechanism to
control its visual perception processes. We design two types of Visual
Perception Tokens, termed the Region Selection Token and the Vision Re-Encoding
Token. MLLMs autonomously generate these tokens, just as they generate text,
and use them to trigger additional visual perception actions. The Region
Selection Token explicitly identifies specific regions in an image that require
further perception, while the Vision Re-Encoding Token uses its hidden states
as control signals to guide additional visual perception processes. Extensive
experiments demonstrate the advantages of these tokens in handling spatial
reasoning, improving fine-grained understanding, and other tasks. On average,
the introduction of Visual Perception Tokens improves the performance of a 2B
model by 23.6%, increasing its score from 0.572 to 0.708, and even outperforms
a 7B parameter model by 13.4% (from 0.624). Please check out our repo
https://github.com/yu-rp/VisualPerceptionToken | 14 | 67bddd64c7d8b835b82cee5a | null | null |
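The control loop implied by the abstract, generate until a perception token appears and then act on it, can be sketched as below. The token syntax `<region x1,y1,x2,y2>` is a made-up placeholder; the real token format and the hidden-state-driven Vision Re-Encoding path are defined in the paper and repo.

```python
import re
from typing import Tuple, Union

def handle_perception_tokens(generation: str) -> Tuple[str, Union[str, tuple]]:
    """If the MLLM emitted a (hypothetical) Region Selection Token,
    return the region to crop and re-encode; otherwise return the answer.
    A sketch of the control flow, not the paper's token syntax."""
    match = re.search(r"<region (\d+),(\d+),(\d+),(\d+)>", generation)
    if match:
        box = tuple(int(v) for v in match.groups())
        return ("re-perceive", box)   # crop the image to `box`, re-encode, continue
    return ("answer", generation)

print(handle_perception_tokens("The sign reads... <region 10,20,110,90>"))
print(handle_perception_tokens("The sign reads 'STOP'."))
```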
|
2025-02-26T01:04:23.776000 | The Lottery LLM Hypothesis, Rethinking What Abilities Should LLM Compression Preserve? | 2 | {
"_id": "63024676056ec3a2a8714b24",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1661093436322-noauth.jpeg",
"followerCount": 5,
"fullname": "Xiang Liu",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "Dominic789654",
"type": "user"
} | true | null | 2502.17535 | [
{
"_id": "67beaec94a1d9d7e368a7840",
"hidden": false,
"name": "Zhenheng Tang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-26T09:15:34.971Z",
"user": {
"_id": "66a4a319a1711696948b045c",
"avatarUrl": "/avatars/1d92d57a949332cb8227697b9a0c2f39.svg",
"fullname": "Zhenheng Tang",
"isPro": false,
"type": "user",
"user": "coolzhtang"
}
},
{
"_id": "67beaec94a1d9d7e368a7841",
"hidden": false,
"name": "Xiang Liu",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-27T09:15:20.100Z",
"user": {
"_id": "63024676056ec3a2a8714b24",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1661093436322-noauth.jpeg",
"fullname": "Xiang Liu",
"isPro": false,
"type": "user",
"user": "Dominic789654"
}
},
{
"_id": "67beaec94a1d9d7e368a7842",
"hidden": false,
"name": "Qian Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67beaec94a1d9d7e368a7843",
"hidden": false,
"name": "Peijie Dong",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67beaec94a1d9d7e368a7844",
"hidden": false,
"name": "Bingsheng He",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67beaec94a1d9d7e368a7845",
"hidden": false,
"name": "Xiaowen Chu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-26T09:15:41.884Z",
"user": {
"_id": "6676935fcd0b89a0115174b0",
"avatarUrl": "/avatars/4caca1b672d29e787814f9a30bf20bcc.svg",
"fullname": "Xiaowen Chu",
"isPro": false,
"type": "user",
"user": "wenxinsiju"
}
},
{
"_id": "67beaec94a1d9d7e368a7846",
"hidden": false,
"name": "Bo Li",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-24T15:39:35 | The Lottery LLM Hypothesis, Rethinking What Abilities Should LLM
Compression Preserve? | Motivated by the goal of reducing the computational and storage costs of LLMs, model
compression and KV cache compression have attracted much attention from
researchers. However, current methods predominantly emphasize maintaining the
performance of compressed LLMs, as measured by perplexity or simple accuracy on
tasks of common sense knowledge QA and basic arithmetic reasoning. In this
blog, we present a brief review of recent advancements in LLMs related to
retrieval-augmented generation, multi-step reasoning, external tools, and
computational expressivity, all of which substantially enhance LLM performance.
Then, we propose a lottery LLM hypothesis suggesting that for a given LLM and
task, there exists a smaller lottery LLM capable of producing the same
performance as the original LLM with the assistance of multi-step reasoning and
external tools. Based on the review of current progress in LLMs, we discuss and
summarize the essential capabilities that the lottery LLM and KV cache
compression must possess, which are currently overlooked in existing methods. | 8 | 67beaeca4a1d9d7e368a7875 | null | null |