Dataset Viewer

Column schema (every column is nullable, ⌀):

- publishedAt: timestamp[ns], 2023-02-13 12:55:54 to 2025-05-02 03:36:49
- title: string, lengths 8 to 206
- thumbnail: string, lengths 77 to 77
- numComments: int64, 0 to 143
- submittedBy: dict
- isAuthorParticipating: bool, 2 classes
- mediaUrls: sequence, lengths 0 to 12
- paper_id: string, lengths 10 to 10
- paper_authors: list, lengths 1 to 942
- paper_publishedAt: timestamp[ns], 2023-02-13 17:55:54 to 2025-05-02 07:36:49
- paper_title: string, lengths 8 to 206
- paper_summary: string, lengths 165 to 1.92k
- paper_upvotes: int64, 0 to 615
- paper_discussionId: string, lengths 24 to 24
- paper_projectPage: string, 572 classes
- paper_githubRepo: string, 813 classes

publishedAt | title | thumbnail | numComments | submittedBy | isAuthorParticipating | mediaUrls | paper_id | paper_authors | paper_publishedAt | paper_title | paper_summary | paper_upvotes | paper_discussionId | paper_projectPage | paper_githubRepo |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2025-03-04T12:05:25.041000 | Efficient Test-Time Scaling via Self-Calibration | 1 | {
"_id": "62ea79dd01ed9b0e8f61ccd3",
"avatarUrl": "/avatars/70af83e0e267be39fcd5f23b85e2dafa.svg",
"followerCount": 2,
"fullname": "Chengsong Huang",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "ChengsongHuang",
"type": "user"
} | true | null | 2503.00031 | [
{
"_id": "67c732c14aaf26f75cea0d82",
"hidden": false,
"name": "Chengsong Huang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-03-04T21:15:36.013Z",
"user": {
"_id": "62ea79dd01ed9b0e8f61ccd3",
"avatarUrl": "/avatars/70af83e0e267be39fcd5f23b85e2dafa.svg",
"fullname": "Chengsong Huang",
"isPro": false,
"type": "user",
"user": "ChengsongHuang"
}
},
{
"_id": "67c732c14aaf26f75cea0d83",
"hidden": false,
"name": "Langlin Huang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c732c14aaf26f75cea0d84",
"hidden": false,
"name": "Jixuan Leng",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c732c14aaf26f75cea0d85",
"hidden": false,
"name": "Jiacheng Liu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c732c14aaf26f75cea0d86",
"hidden": false,
"name": "Jiaxin Huang",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-25T00:21:14 | Efficient Test-Time Scaling via Self-Calibration | Increasing test-time computation is a straightforward approach to enhancing
the quality of responses in Large Language Models (LLMs). While Best-of-N
sampling and Self-Consistency with majority voting are simple and effective,
they require a fixed number of sampling responses for each query, regardless of
its complexity. This could result in wasted computation for simpler questions
and insufficient exploration for more challenging ones. In this work, we argue
that a model's confidence in its responses can be used to improve the efficiency of
test-time scaling. Unfortunately, LLMs are known to be overconfident and
provide unreliable confidence estimation. To address this limitation, we
introduce Self-Calibration by distilling Self-Consistency-derived confidence
into the model itself. This enables reliable confidence estimation at test time
with one forward pass. We then design confidence-based efficient test-time
scaling methods to handle queries of various difficulty, such as Early-Stopping
for Best-of-N and Self-Consistency with calibrated confidence. Experiments on
three LLMs across six datasets demonstrate the effectiveness of our approach.
Specifically, applying confidence-based Early Stopping to Best-of-N improves
MathQA accuracy from 81.0 to 83.6 with a sample budget of 16 responses,
indicating the efficacy of the confidence-based sampling strategy at inference
time. | 8 | 67c732c34aaf26f75cea0df7 | null | null |
|
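A minimal sketch of the confidence-based Early-Stopping Best-of-N procedure the abstract above describes. The `generate_with_confidence` callable is a hypothetical stand-in for a Self-Calibrated model call, and the budget and threshold values are illustrative rather than the paper's:

```python
import random

def early_stopping_best_of_n(generate_with_confidence, query,
                             budget=16, threshold=0.9):
    """Sample up to `budget` responses; stop as soon as one response's
    self-calibrated confidence clears `threshold`, otherwise return the
    highest-confidence response seen."""
    best_response, best_conf = None, -1.0
    for _ in range(budget):
        response, conf = generate_with_confidence(query)
        if conf > best_conf:
            best_response, best_conf = response, conf
        if conf >= threshold:  # confident enough: skip the remaining samples
            break
    return best_response, best_conf

# Toy usage with a dummy generator (a real system would call an LLM).
dummy = lambda q: (f"answer to {q!r}", random.random())
print(early_stopping_best_of_n(dummy, "2 + 2 = ?"))
```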
2025-03-04T10:47:26.717000 | Why Are Web AI Agents More Vulnerable Than Standalone LLMs? A Security Analysis | 1 | {
"_id": "63e0b1925ba41def87930c47",
"avatarUrl": "/avatars/4d55fdbe979ddf72a21430d66518d24f.svg",
"followerCount": 1,
"fullname": "Jeffrey Yang Fan Chiang",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "RandomHakkaDude",
"type": "user"
} | true | [
"https://cdn-uploads.huggingface.co/production/uploads/63e0b1925ba41def87930c47/OQIn8hn8i8nP9HMjOk5cR.mp4"
] | 2502.20383 | [
{
"_id": "67c284e76e9f0735ea1c436d",
"hidden": false,
"name": "Jeffrey Yang Fan Chiang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-03-04T08:51:34.456Z",
"user": {
"_id": "63e0b1925ba41def87930c47",
"avatarUrl": "/avatars/4d55fdbe979ddf72a21430d66518d24f.svg",
"fullname": "Jeffrey Yang Fan Chiang",
"isPro": false,
"type": "user",
"user": "RandomHakkaDude"
}
},
{
"_id": "67c284e76e9f0735ea1c436e",
"hidden": false,
"name": "Seungjae Lee",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T16:07:28.645Z",
"user": {
"_id": "64081a908dca6cec91caf136",
"avatarUrl": "/avatars/c45d7fcdf879f4d6020863fd3be39771.svg",
"fullname": "SeungJae Lee",
"isPro": false,
"type": "user",
"user": "SeungJaeLee"
}
},
{
"_id": "67c284e76e9f0735ea1c436f",
"hidden": false,
"name": "Jia-Bin Huang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T16:07:55.181Z",
"user": {
"_id": "641c139b73296f7ee256970c",
"avatarUrl": "/avatars/5a2550d95e686640242840ad3bd0e680.svg",
"fullname": "Jiabin Huang",
"isPro": false,
"type": "user",
"user": "YellowAddice"
}
},
{
"_id": "67c284e76e9f0735ea1c4370",
"hidden": false,
"name": "Furong Huang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T16:07:44.428Z",
"user": {
"_id": "64cbc3e2a257a3212c00a115",
"avatarUrl": "/avatars/836e61be4aeda2080ddf2db9f2626cc6.svg",
"fullname": "Furong Huang Lab at UMD",
"isPro": false,
"type": "user",
"user": "furongh-lab"
}
},
{
"_id": "67c284e76e9f0735ea1c4371",
"hidden": false,
"name": "Yizheng Chen",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T16:07:37.149Z",
"user": {
"_id": "660daf1d62d63ad000a53b9b",
"avatarUrl": "/avatars/2f79d4b7db395e94b614358c7f322efe.svg",
"fullname": "Yizheng Chen",
"isPro": false,
"type": "user",
"user": "surrealyz"
}
}
] | 2025-02-27T18:56:26 | Why Are Web AI Agents More Vulnerable Than Standalone LLMs? A Security
Analysis | Recent advancements in Web AI agents have demonstrated remarkable
capabilities in addressing complex web navigation tasks. However, emerging
research shows that these agents exhibit greater vulnerability compared to
standalone Large Language Models (LLMs), despite both being built upon the same
safety-aligned models. This discrepancy is particularly concerning given the
greater flexibility of Web AI agents compared to standalone LLMs, which may
expose them to a wider range of adversarial user inputs. To build a scaffold
that addresses these concerns, this study investigates the underlying factors
that contribute to the increased vulnerability of Web AI agents. Notably, this
disparity stems from the multifaceted differences between Web AI agents and
standalone LLMs, as well as the complex signals - nuances that simple
evaluation metrics, such as success rate, often fail to capture. To tackle
these challenges, we propose a component-level analysis and a more granular,
systematic evaluation framework. Through this fine-grained investigation, we
identify three critical factors that amplify the vulnerability of Web AI
agents: (1) embedding user goals into the system prompt, (2) multi-step action
generation, and (3) observational capabilities. Our findings highlight the
pressing need to enhance security and robustness in AI agent design and provide
actionable insights for targeted defense strategies. | 1 | 67c284e96e9f0735ea1c43dd | https://vulnerable-ai-agents.github.io/ | null |
|
2025-03-04T08:19:57.557000 | General Reasoning Requires Learning to Reason from the Get-go | 1 | {
"_id": "6520d6db2a16045c092b3b36",
"avatarUrl": "/avatars/dab34f141a1aef39d00c789ff85e729f.svg",
"followerCount": null,
"fullname": "Seungwook Han",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "hanseungwook",
"type": "user"
} | true | null | 2502.19402 | [
{
"_id": "67c66a6321d722b4247e5959",
"hidden": false,
"name": "Seungwook Han",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T16:08:58.266Z",
"user": {
"_id": "6520d6db2a16045c092b3b36",
"avatarUrl": "/avatars/dab34f141a1aef39d00c789ff85e729f.svg",
"fullname": "Seungwook Han",
"isPro": false,
"type": "user",
"user": "hanseungwook"
}
},
{
"_id": "67c66a6321d722b4247e595a",
"hidden": false,
"name": "Jyothish Pari",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c66a6321d722b4247e595b",
"hidden": false,
"name": "Samuel J. Gershman",
"status": "extracted_confirmed",
"statusLastChangedAt": "2025-03-04T13:57:29.748Z",
"user": {
"_id": "6520d6db2a16045c092b3b36",
"avatarUrl": "/avatars/dab34f141a1aef39d00c789ff85e729f.svg",
"fullname": "Seungwook Han",
"isPro": false,
"type": "user",
"user": "hanseungwook"
}
},
{
"_id": "67c66a6321d722b4247e595c",
"hidden": false,
"name": "Pulkit Agrawal",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-26T18:51:12 | General Reasoning Requires Learning to Reason from the Get-go | Large Language Models (LLMs) have demonstrated impressive real-world utility,
exemplifying artificial useful intelligence (AUI). However, their ability to
reason adaptively and robustly -- the hallmarks of artificial general
intelligence (AGI) -- remains fragile. While LLMs seemingly succeed in
commonsense reasoning, programming, and mathematics, they struggle to
generalize algorithmic understanding across novel contexts. Our experiments
with algorithmic tasks in esoteric programming languages reveal that LLMs'
reasoning overfits to the training data and is limited in its transferability.
We hypothesize that the core issue underlying such limited transferability is
the coupling of reasoning and knowledge in LLMs.
To transition from AUI to AGI, we propose disentangling knowledge and
reasoning through three key directions: (1) pretraining to reason using RL from
scratch as an alternative to the widely used next-token prediction pretraining,
(2) using a curriculum of synthetic tasks to ease the learning of a
reasoning prior for RL that can then be transferred to natural
language tasks, and (3) learning more generalizable reasoning functions using a
small context window to reduce exploiting spurious correlations between tokens.
Such a reasoning system coupled with a trained retrieval system and a large
external memory bank as a knowledge store can overcome several limitations of
existing architectures at learning to reason in novel scenarios. | 4 | 67c66a6521d722b4247e59c8 | null | null |
|
2025-03-04T08:11:33.371000 | PodAgent: A Comprehensive Framework for Podcast Generation | 1 | {
"_id": "674836767b7151c3ff30f865",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/no-auth/jcwK5NW-efhCt8s2TE6vK.png",
"followerCount": null,
"fullname": "Yujia Xiao",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "Yogurt928",
"type": "user"
} | true | null | 2503.00455 | [
{
"_id": "67c6facdd8af5b36fd4b59cf",
"hidden": false,
"name": "Yujia Xiao",
"status": "claimed_verified",
"statusLastChangedAt": "2025-03-04T16:08:12.490Z",
"user": {
"_id": "674836767b7151c3ff30f865",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/no-auth/jcwK5NW-efhCt8s2TE6vK.png",
"fullname": "Yujia Xiao",
"isPro": false,
"type": "user",
"user": "Yogurt928"
}
},
{
"_id": "67c6facdd8af5b36fd4b59d0",
"hidden": false,
"name": "Lei He",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c6facdd8af5b36fd4b59d1",
"hidden": false,
"name": "Haohan Guo",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c6facdd8af5b36fd4b59d2",
"hidden": false,
"name": "Fenglong Xie",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c6facdd8af5b36fd4b59d3",
"hidden": false,
"name": "Tan Lee",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-03-01T11:35:17 | PodAgent: A Comprehensive Framework for Podcast Generation | Existing automatic audio generation methods struggle to generate
podcast-like audio programs effectively. The key challenges lie in in-depth
content generation and appropriate, expressive voice production. This paper
proposes PodAgent, a comprehensive framework for creating audio programs.
PodAgent 1) generates informative topic-discussion content by designing a
Host-Guest-Writer multi-agent collaboration system, 2) builds a voice pool for
suitable voice-role matching, and 3) utilizes an LLM-enhanced speech synthesis
method to generate expressive conversational speech. Given the absence of
standardized evaluation criteria for podcast-like audio generation, we
developed comprehensive assessment guidelines to effectively evaluate the
model's performance. Experimental results demonstrate PodAgent's effectiveness,
significantly surpassing direct GPT-4 generation in topic-discussion dialogue
content, achieving an 87.4% voice-matching accuracy, and producing more
expressive speech through LLM-guided synthesis. Demo page:
https://podcast-agent.github.io/demo/. Source code:
https://github.com/yujxx/PodAgent. | 5 | 67c6facfd8af5b36fd4b5a45 | https://podcast-agent.github.io/demo/ | https://github.com/yujxx/PodAgent |
|
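A rough sketch of the Host-Guest-Writer collaboration pattern named in the PodAgent abstract above, assuming only a generic `llm` callable (hypothetical); PodAgent's actual prompts, role design, and voice-matching stages are more elaborate:

```python
def podcast_script(llm, topic, turns=6):
    """Writer drafts an outline; Host and Guest then alternate turns,
    each conditioned on the outline and the dialogue so far."""
    outline = llm(f"As the Writer, outline a podcast discussion on: {topic}")
    dialogue = []
    for i in range(turns):
        role = "Host" if i % 2 == 0 else "Guest"
        prompt = (f"As the {role}, continue this podcast.\n"
                  f"Outline: {outline}\nDialogue so far: {dialogue}")
        dialogue.append((role, llm(prompt)))
    return dialogue

# Toy run with an echoing stand-in for a chat model.
print(podcast_script(lambda p: p[:40] + "...", "test-time scaling", turns=2))
```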
2025-03-04T06:41:49.997000 | When an LLM is apprehensive about its answers -- and when its uncertainty is justified | 1 | {
"_id": "675708985b91dea24c3ef642",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/675708985b91dea24c3ef642/8KmerI1LwJEBHM2vrC54d.jpeg",
"followerCount": null,
"fullname": "Andrey Goncharov",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "aigoncharov",
"type": "user"
} | true | [
"https://cdn-uploads.huggingface.co/production/uploads/675708985b91dea24c3ef642/9wCzAalApYA8hPN94CaEu.png"
] | 2503.01688 | [
{
"_id": "67c6e6735aea9d8918635ac2",
"hidden": false,
"name": "Petr Sychev",
"status": "claimed_verified",
"statusLastChangedAt": "2025-03-04T12:01:33.230Z",
"user": {
"_id": "6728224623d75cbd1cdbe568",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/4sb6TjuzeDc8-PG9hYhjW.jpeg",
"fullname": "Petr Sychev",
"isPro": false,
"type": "user",
"user": "sspetya"
}
},
{
"_id": "67c6e6735aea9d8918635ac3",
"hidden": false,
"name": "Andrey Goncharov",
"status": "extracted_pending",
"statusLastChangedAt": "2025-03-04T11:39:33.550Z",
"user": {
"_id": "675708985b91dea24c3ef642",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/675708985b91dea24c3ef642/8KmerI1LwJEBHM2vrC54d.jpeg",
"fullname": "Andrey Goncharov",
"isPro": false,
"type": "user",
"user": "aigoncharov"
}
},
{
"_id": "67c6e6735aea9d8918635ac4",
"hidden": false,
"name": "Daniil Vyazhev",
"status": "claimed_verified",
"statusLastChangedAt": "2025-03-04T12:01:34.869Z",
"user": {
"_id": "659e049c01805191e5f67b12",
"avatarUrl": "/avatars/4f33e39d85f8fbdfaeb34143e5038b92.svg",
"fullname": "Vyazhev",
"isPro": false,
"type": "user",
"user": "DanielVyazhev"
}
},
{
"_id": "67c6e6735aea9d8918635ac5",
"hidden": false,
"name": "Edvard Khalafyan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c6e6735aea9d8918635ac6",
"hidden": false,
"name": "Alexey Zaytsev",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-03-03T16:03:46 | When an LLM is apprehensive about its answers -- and when its
uncertainty is justified | Uncertainty estimation is crucial for evaluating Large Language Models
(LLMs), particularly in high-stakes domains where incorrect answers result in
significant consequences. Numerous approaches address this problem, but each
typically focuses on one type of uncertainty while ignoring the others. We investigate
what estimates, specifically token-wise entropy and model-as-judge (MASJ),
would work for multiple-choice question-answering tasks for different question
topics. Our experiments consider three LLM families (Phi-4, Mistral, and Qwen) at
sizes from 1.5B to 72B parameters, across 14 topics. While MASJ performs similarly
to a random error predictor, the response entropy predicts model error in
knowledge-dependent domains and serves as an effective indicator of question
difficulty: for biology the ROC AUC is 0.73. This correlation vanishes in the
reasoning-dependent domain: for math questions the ROC AUC is 0.55. More
fundamentally, we found that the entropy measure depends on the amount of
reasoning a question requires. Thus, entropy related to data uncertainty should
be integrated within uncertainty-estimation frameworks, while MASJ requires
refinement. Moreover, existing MMLU-Pro samples are biased and should balance the
amount of reasoning required across subdomains to provide a fairer assessment of
LLM performance. | 16 | 67c6e6755aea9d8918635b20 | null | https://github.com/LabARSS/question-complextiy-estimation |
|
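A sketch of the response-entropy error predictor evaluated in the abstract above: average token-level Shannon entropy as an uncertainty score, checked with ROC AUC against correctness labels. The token distributions and error labels below are synthetic stand-ins, not the paper's data:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def mean_token_entropy(token_probs):
    """token_probs: (seq_len, vocab) next-token distributions for one
    response; returns the average Shannon entropy over generated tokens."""
    p = np.clip(token_probs, 1e-12, 1.0)
    return float((-p * np.log(p)).sum(axis=-1).mean())

# Synthetic check: does higher entropy predict model error (ROC AUC)?
# entropy_scores stand in for mean_token_entropy(...) outputs, one per question.
rng = np.random.default_rng(0)
entropy_scores = rng.normal(2.0, 0.5, size=200)
is_error = (entropy_scores + rng.normal(0, 0.7, 200) > 2.2).astype(int)
print("ROC AUC:", round(roc_auc_score(is_error, entropy_scores), 3))
```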
2025-03-04T05:28:10.012000 | SampleMix: A Sample-wise Pre-training Data Mixing Strategy by Coordinating Data Quality and Diversity | 1 | {
"_id": "65a0aade5fafc248c2156e95",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/65a0aade5fafc248c2156e95/S9YjJMTuKc-U1cFizqUMA.jpeg",
"followerCount": 1,
"fullname": "DeyangKong",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "DeyangKong",
"type": "user"
} | true | null | 2503.01506 | [
{
"_id": "67c67cf5c8d296910ca74711",
"hidden": false,
"name": "Xiangyu Xi",
"status": "claimed_verified",
"statusLastChangedAt": "2025-03-04T12:01:25.632Z",
"user": {
"_id": "63edb098679c2cc40abc6c2e",
"avatarUrl": "/avatars/288c7229937c2c3f29fda6d17c7df2eb.svg",
"fullname": "Xiangyu",
"isPro": false,
"type": "user",
"user": "xixy"
}
},
{
"_id": "67c67cf5c8d296910ca74712",
"hidden": false,
"name": "Deyang Kong",
"status": "claimed_verified",
"statusLastChangedAt": "2025-03-04T08:40:21.910Z",
"user": {
"_id": "65a0aade5fafc248c2156e95",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/65a0aade5fafc248c2156e95/S9YjJMTuKc-U1cFizqUMA.jpeg",
"fullname": "DeyangKong",
"isPro": false,
"type": "user",
"user": "DeyangKong"
}
},
{
"_id": "67c67cf5c8d296910ca74713",
"hidden": false,
"name": "Jian Yang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c67cf5c8d296910ca74714",
"hidden": false,
"name": "Jiawei Yang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c67cf5c8d296910ca74715",
"hidden": false,
"name": "Zhengyu Chen",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T11:16:14.018Z",
"user": {
"_id": "67b7ebf3d00e69f10cfcf551",
"avatarUrl": "/avatars/8adea7ae44c459079113a690ec7da73a.svg",
"fullname": "Chen Zhengyu",
"isPro": false,
"type": "user",
"user": "WQYC"
}
},
{
"_id": "67c67cf5c8d296910ca74716",
"hidden": false,
"name": "Wei Wang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T11:16:23.635Z",
"user": {
"_id": "62fa0ffe0697d224219a0cb7",
"avatarUrl": "/avatars/f0ef59e1c0cf4ab4fe5cee08d488bd03.svg",
"fullname": "Wei Wang",
"isPro": false,
"type": "user",
"user": "WeiWang"
}
},
{
"_id": "67c67cf5c8d296910ca74717",
"hidden": false,
"name": "Jingang Wang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T11:16:00.303Z",
"user": {
"_id": "647097cbcfd57849518e656b",
"avatarUrl": "/avatars/c66fe0add29c1bde9e3a98bf4a8793b9.svg",
"fullname": "Jingang Wang",
"isPro": false,
"type": "user",
"user": "bitwjg"
}
},
{
"_id": "67c67cf5c8d296910ca74718",
"hidden": false,
"name": "Xunliang Cai",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c67cf5c8d296910ca74719",
"hidden": false,
"name": "Shikun Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c67cf5c8d296910ca7471a",
"hidden": false,
"name": "Wei Ye",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-03-03T13:22:11 | SampleMix: A Sample-wise Pre-training Data Mixing Strategy by
Coordinating Data Quality and Diversity | Existing pretraining data mixing methods for large language models (LLMs)
typically follow a domain-wise methodology, a top-down process that first
determines domain weights and then performs uniform data sampling across each
domain. However, these approaches neglect significant inter-domain overlaps and
commonalities, failing to control the global diversity of the constructed
training dataset. Further, uniform sampling within domains ignores fine-grained
sample-specific features, potentially leading to suboptimal data distribution.
To address these shortcomings, we propose a novel sample-wise data mixture
approach based on a bottom-up paradigm. This method performs global
cross-domain sampling by systematically evaluating the quality and diversity of
each sample, thereby dynamically determining the optimal domain distribution.
Comprehensive experiments across multiple downstream tasks and perplexity
assessments demonstrate that SampleMix surpasses existing domain-based methods.
Meanwhile, SampleMix requires 1.4x to 2.1x training steps to achieve the
baselines' performance, highlighting the substantial potential of SampleMix to
optimize pre-training data. | 7 | 67c67d03c8d296910ca7494f | null | null |
|
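A toy sketch of the bottom-up, sample-wise mixing idea from the SampleMix abstract above. The per-sample quality and diversity scores are assumed to be precomputed; how SampleMix actually scores samples is not specified in the abstract, and the `alpha`/`beta` weighting is my illustrative addition:

```python
import numpy as np

def sample_wise_mix(quality, diversity, n_draws, alpha=1.0, beta=1.0):
    """Each sample's weight combines its quality and diversity score;
    the domain mixture then emerges from which samples get drawn,
    instead of being fixed top-down per domain."""
    weights = (quality ** alpha) * (diversity ** beta)
    probs = weights / weights.sum()
    rng = np.random.default_rng(0)
    return rng.choice(len(probs), size=n_draws, replace=True, p=probs)

quality = np.array([0.9, 0.5, 0.8, 0.2])     # hypothetical per-sample scores
diversity = np.array([0.3, 0.9, 0.6, 0.7])
print(np.bincount(sample_wise_mix(quality, diversity, 10_000), minlength=4))
```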
2025-03-04T05:13:44.578000 | Word Form Matters: LLMs' Semantic Reconstruction under Typoglycemia | 1 | {
"_id": "65407ba7a38390065750233f",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/65407ba7a38390065750233f/1_IPMZbk-S9u2t18PQgMp.jpeg",
"followerCount": 1,
"fullname": "Zirui Song",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "Ziruibest",
"type": "user"
} | true | null | 2503.01714 | [
{
"_id": "67c6d22d983375492193aab0",
"hidden": false,
"name": "Chenxi Wang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-03-04T11:16:44.551Z",
"user": {
"_id": "679bc0ec7f3c28bf968321c8",
"avatarUrl": "/avatars/9d5ab9c6af32878e28987518c0210c1a.svg",
"fullname": "Chenxi Wang",
"isPro": false,
"type": "user",
"user": "Aurora-cx"
}
},
{
"_id": "67c6d22d983375492193aab1",
"hidden": false,
"name": "Tianle Gu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T10:54:29.015Z",
"user": {
"_id": "6346361c5efccdc07f179cae",
"avatarUrl": "/avatars/217818114a4c19ea4f3e5cdafefb625e.svg",
"fullname": "Gu Tianle",
"isPro": false,
"type": "user",
"user": "Carol0110"
}
},
{
"_id": "67c6d22d983375492193aab2",
"hidden": false,
"name": "Zhongyu Wei",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c6d22d983375492193aab3",
"hidden": false,
"name": "Lang Gao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c6d22d983375492193aab4",
"hidden": false,
"name": "Zirui Song",
"status": "claimed_verified",
"statusLastChangedAt": "2025-03-04T10:17:25.935Z",
"user": {
"_id": "65407ba7a38390065750233f",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/65407ba7a38390065750233f/1_IPMZbk-S9u2t18PQgMp.jpeg",
"fullname": "Zirui Song",
"isPro": false,
"type": "user",
"user": "Ziruibest"
}
},
{
"_id": "67c6d22d983375492193aab5",
"hidden": false,
"name": "Xiuying Chen",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-03-03T16:31:45 | Word Form Matters: LLMs' Semantic Reconstruction under Typoglycemia | Human readers can efficiently comprehend scrambled words, a phenomenon known
as Typoglycemia, primarily by relying on word form; if word form alone is
insufficient, they further utilize contextual cues for interpretation. While
advanced large language models (LLMs) exhibit similar abilities, the underlying
mechanisms remain unclear. To investigate this, we conduct controlled
experiments to analyze the roles of word form and contextual information in
semantic reconstruction and examine LLM attention patterns. Specifically, we
first propose SemRecScore, a reliable metric to quantify the degree of semantic
reconstruction, and validate its effectiveness. Using this metric, we study how
word form and contextual information influence LLMs' semantic reconstruction
ability, identifying word form as the core factor in this process. Furthermore,
we analyze how LLMs utilize word form and find that they rely on specialized
attention heads to extract and process word form information, with this
mechanism remaining stable across varying levels of word scrambling. This
distinction between LLMs' fixed attention patterns primarily focused on word
form and human readers' adaptive strategy in balancing word form and contextual
information provides insights into enhancing LLM performance by incorporating
human-like, context-aware mechanisms. | 5 | 67c6d22e983375492193ab13 | null | null |
|
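A small sketch of the typoglycemia perturbation studied above: scramble the interior letters of each word while keeping the first and last letters fixed. The `rate` knob, which varies the level of scrambling, is my addition:

```python
import random
import re

def typoglycemia(text, rate=1.0, seed=0):
    """Shuffle the inner letters of each word, keeping its first and
    last letters in place; `rate` controls how many words get scrambled."""
    rng = random.Random(seed)
    def scramble(match):
        word = match.group(0)
        if len(word) <= 3 or rng.random() > rate:
            return word  # too short to scramble, or skipped by rate
        inner = list(word[1:-1])
        rng.shuffle(inner)
        return word[0] + "".join(inner) + word[-1]
    return re.sub(r"[A-Za-z]+", scramble, text)

print(typoglycemia("Human readers can efficiently comprehend scrambled words"))
```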
2025-03-04T05:12:10.849000 | Direct Discriminative Optimization: Your Likelihood-Based Visual Generative Model is Secretly a GAN Discriminator | 1 | {
"_id": "652bf7edc3cba555d5673c6e",
"avatarUrl": "/avatars/78f6416c30203b30671f8423f061c657.svg",
"followerCount": null,
"fullname": "Kaiwen Zheng",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "worstcoder",
"type": "user"
} | true | null | 2503.01103 | [
{
"_id": "67c6d1c35e896ed915374027",
"hidden": false,
"name": "Kaiwen Zheng",
"status": "claimed_verified",
"statusLastChangedAt": "2025-03-04T10:17:24.142Z",
"user": {
"_id": "652bf7edc3cba555d5673c6e",
"avatarUrl": "/avatars/78f6416c30203b30671f8423f061c657.svg",
"fullname": "Kaiwen Zheng",
"isPro": false,
"type": "user",
"user": "worstcoder"
}
},
{
"_id": "67c6d1c35e896ed915374028",
"hidden": false,
"name": "Yongxin Chen",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T11:16:59.170Z",
"user": {
"_id": "66f4cf1a03b5ba8a7f1f6522",
"avatarUrl": "/avatars/2768d6e37d3f280194cfb8ed274f6015.svg",
"fullname": "Yongxin Chen",
"isPro": false,
"type": "user",
"user": "Ema11"
}
},
{
"_id": "67c6d1c35e896ed915374029",
"hidden": false,
"name": "Huayu Chen",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T11:17:06.080Z",
"user": {
"_id": "6630f87ee53fcb71c3887df0",
"avatarUrl": "/avatars/50191a3d45bebf90cf08df09477e95db.svg",
"fullname": "HuayuChen",
"isPro": false,
"type": "user",
"user": "HuayuChen"
}
},
{
"_id": "67c6d1c35e896ed91537402a",
"hidden": false,
"name": "Guande He",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T11:17:20.266Z",
"user": {
"_id": "67492ee82ad3cfc108a41bbb",
"avatarUrl": "/avatars/7ad03e55a8791c62f1271a5c9bf8cc60.svg",
"fullname": "Guande He",
"isPro": false,
"type": "user",
"user": "gdhe17"
}
},
{
"_id": "67c6d1c35e896ed91537402b",
"hidden": false,
"name": "Ming-Yu Liu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T11:17:27.270Z",
"user": {
"_id": "62f049afdf4b93aad5c7f2d6",
"avatarUrl": "/avatars/e272e58ad996733d7098e50248e5b57e.svg",
"fullname": "Ming-Yu Liu",
"isPro": false,
"type": "user",
"user": "mingyuliutw"
}
},
{
"_id": "67c6d1c35e896ed91537402c",
"hidden": false,
"name": "Jun Zhu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c6d1c35e896ed91537402d",
"hidden": false,
"name": "Qinsheng Zhang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T11:17:33.763Z",
"user": {
"_id": "6732d5dea24987c43bfbafd8",
"avatarUrl": "/avatars/1581373b9de5069975716932fceb976b.svg",
"fullname": "Qinsheng Zhang",
"isPro": false,
"type": "user",
"user": "qsh-zh"
}
}
] | 2025-03-03T02:06:22 | Direct Discriminative Optimization: Your Likelihood-Based Visual
Generative Model is Secretly a GAN Discriminator | While likelihood-based generative models, particularly diffusion and
autoregressive models, have achieved remarkable fidelity in visual generation,
the maximum likelihood estimation (MLE) objective inherently suffers from a
mode-covering tendency that limits the generation quality under limited model
capacity. In this work, we propose Direct Discriminative Optimization (DDO) as
a unified framework that bridges likelihood-based generative training and the
GAN objective to bypass this fundamental constraint. Our key insight is to
parameterize a discriminator implicitly using the likelihood ratio between a
learnable target model and a fixed reference model, drawing parallels with the
philosophy of Direct Preference Optimization (DPO). Unlike GANs, this
parameterization eliminates the need for joint training of generator and
discriminator networks, allowing for direct, efficient, and effective
finetuning of a well-trained model to its full potential beyond the limits of
MLE. DDO can be performed iteratively in a self-play manner for progressive
model refinement, with each round requiring less than 1% of pretraining epochs.
Our experiments demonstrate the effectiveness of DDO by significantly advancing
the previous SOTA diffusion model EDM, reducing FID scores from 1.79/1.58 to
new records of 1.30/0.97 on CIFAR-10/ImageNet-64 datasets, and by consistently
improving both guidance-free and CFG-enhanced FIDs of visual autoregressive
models on ImageNet 256×256. | 2 | 67c6d1c65e896ed9153740e4 | https://research.nvidia.com/labs/dir/ddo/ | null |
|
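A rough sketch of the likelihood-ratio parameterization described in the DDO abstract above: the implicit discriminator is d(x) = log p_target(x) - log p_ref(x), trained here with a DPO-style logistic loss on real versus generated samples. This is one reading of the abstract, not the paper's exact objective:

```python
import torch
import torch.nn.functional as F

def ddo_loss(logp_tgt_real, logp_ref_real, logp_tgt_fake, logp_ref_fake):
    """Implicit discriminator d(x) = log p_target(x) - log p_ref(x):
    push d up on real data and down on model samples. Only the target
    model is trainable; the reference model stays frozen."""
    d_real = logp_tgt_real - logp_ref_real
    d_fake = logp_tgt_fake - logp_ref_fake
    return -(F.logsigmoid(d_real).mean() + F.logsigmoid(-d_fake).mean())

# Toy check with random log-likelihoods standing in for model outputs.
lp = lambda: torch.randn(8)
print(ddo_loss(lp(), lp(), lp(), lp()))
```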
2025-03-04T04:56:33.061000 | From Hours to Minutes: Lossless Acceleration of Ultra Long Sequence Generation up to 100K Tokens | 1 | {
"_id": "63a95a6a7930fa8c7dd63d4e",
"avatarUrl": "/avatars/d9d0420f7ddfe2f3a7e029fb05f1c89f.svg",
"followerCount": 3,
"fullname": "Zilong Zheng",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "zlzheng",
"type": "user"
} | true | [
"https://cdn-uploads.huggingface.co/production/uploads/63a95a6a7930fa8c7dd63d4e/3WZ10b-Ku3GcY1fc1MWx8.mp4"
] | 2502.18890 | [
{
"_id": "67c6cbd6e52534aa6ada2e26",
"hidden": false,
"name": "Tong Wu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T10:58:45.670Z",
"user": {
"_id": "668f7fee5156d55f72af4f21",
"avatarUrl": "/avatars/02edf8d7d5f288d80dc665b18dda4d0a.svg",
"fullname": "TongWu",
"isPro": false,
"type": "user",
"user": "TongWu"
}
},
{
"_id": "67c6cbd6e52534aa6ada2e27",
"hidden": false,
"name": "Junzhe Shen",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T10:58:27.834Z",
"user": {
"_id": "6530c9d7d107f378e105d667",
"avatarUrl": "/avatars/889dfcb6514c90351802bebb4a34a78f.svg",
"fullname": "Junzhe Shen",
"isPro": false,
"type": "user",
"user": "JunzheS"
}
},
{
"_id": "67c6cbd6e52534aa6ada2e28",
"hidden": false,
"name": "Zixia Jia",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T10:58:34.128Z",
"user": {
"_id": "64b7ae6cf53ae848e72b997d",
"avatarUrl": "/avatars/b55dd3d6fcb3ccac2e3880d01a9bdc63.svg",
"fullname": "Zixia Jia",
"isPro": false,
"type": "user",
"user": "vickyandkekey"
}
},
{
"_id": "67c6cbd6e52534aa6ada2e29",
"hidden": false,
"name": "Yuxuan Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c6cbd6e52534aa6ada2e2a",
"hidden": false,
"name": "Zilong Zheng",
"status": "extracted_pending",
"statusLastChangedAt": "2025-03-04T09:45:59.571Z",
"user": {
"_id": "63a95a6a7930fa8c7dd63d4e",
"avatarUrl": "/avatars/d9d0420f7ddfe2f3a7e029fb05f1c89f.svg",
"fullname": "Zilong Zheng",
"isPro": false,
"type": "user",
"user": "zlzheng"
}
}
] | 2025-02-26T07:10:08 | From Hours to Minutes: Lossless Acceleration of Ultra Long Sequence
Generation up to 100K Tokens | Generating ultra-long sequences with large language models (LLMs) has become
increasingly crucial but remains a highly time-intensive task, particularly for
sequences up to 100K tokens. While traditional speculative decoding methods
exist, simply extending their generation limits fails to accelerate the process
and can be detrimental. Through an in-depth analysis, we identify three major
challenges hindering efficient generation: frequent model reloading, dynamic
key-value (KV) management, and repetitive generation. To address these issues,
we introduce TOKENSWIFT, a novel framework designed to substantially accelerate
the generation process of ultra-long sequences while maintaining the target
model's inherent quality. Experimental results demonstrate that TOKENSWIFT
achieves a more than 3x speedup across models of varying scales (1.5B, 7B, 8B,
14B) and architectures (MHA, GQA). This acceleration translates to hours of
time savings for ultra-long sequence generation, establishing TOKENSWIFT as a
scalable and effective solution at unprecedented lengths. Code can be found at
https://github.com/bigai-nlco/TokenSwift. | 7 | 67c6cbd7e52534aa6ada2e79 | null | https://github.com/bigai-nlco/TokenSwift |
|
2025-03-04T04:54:04.054000 | DiffRhythm: Blazingly Fast and Embarrassingly Simple End-to-End Full-Length Song Generation with Latent Diffusion | 1 | {
"_id": "624bebf604abc7ebb01789af",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1649143001781-624bebf604abc7ebb01789af.jpeg",
"followerCount": 3863,
"fullname": "Apolinário from multimodal AI art",
"isHf": true,
"isMod": false,
"isPro": true,
"name": "multimodalart",
"type": "user"
} | false | null | 2503.01183 | [
{
"_id": "67c6a15e21d722b4248bd9c2",
"hidden": false,
"name": "Ziqian Ning",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c6a15e21d722b4248bd9c3",
"hidden": false,
"name": "Huakang Chen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c6a15e21d722b4248bd9c4",
"hidden": false,
"name": "Yuepeng Jiang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c6a15e21d722b4248bd9c5",
"hidden": false,
"name": "Chunbo Hao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c6a15e21d722b4248bd9c6",
"hidden": false,
"name": "Guobin Ma",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c6a15e21d722b4248bd9c7",
"hidden": false,
"name": "Shuai Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c6a15e21d722b4248bd9c8",
"hidden": false,
"name": "Jixun Yao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c6a15e21d722b4248bd9c9",
"hidden": false,
"name": "Lei Xie",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-03-03T05:15:34 | DiffRhythm: Blazingly Fast and Embarrassingly Simple End-to-End
Full-Length Song Generation with Latent Diffusion | Recent advancements in music generation have garnered significant attention,
yet existing approaches face critical limitations. Some current generative
models can only synthesize either the vocal track or the accompaniment track.
While some models can generate combined vocal and accompaniment, they typically
rely on meticulously designed multi-stage cascading architectures and intricate
data pipelines, hindering scalability. Additionally, most systems are
restricted to generating short musical segments rather than full-length songs.
Furthermore, widely used language model-based methods suffer from slow
inference speeds. To address these challenges, we propose DiffRhythm, the first
latent diffusion-based song generation model capable of synthesizing complete
songs with both vocal and accompaniment for durations of up to 4m45s in only
ten seconds, maintaining high musicality and intelligibility. Despite its
remarkable capabilities, DiffRhythm is designed to be simple and elegant: it
eliminates the need for complex data preparation, employs a straightforward
model structure, and requires only lyrics and a style prompt during inference.
Additionally, its non-autoregressive structure ensures fast inference speeds.
This simplicity guarantees the scalability of DiffRhythm. Moreover, we release
the complete training code along with the pre-trained model on large-scale data
to promote reproducibility and further research. | 18 | 67c6a16021d722b4248bda37 | https://aslp-lab.github.io/DiffRhythm.github.io/ | https://github.com/ASLP-lab/DiffRhythm |
|
2025-03-04T04:17:23.806000 | Unposed Sparse Views Room Layout Reconstruction in the Age of Pretrain Model | 1 | {
"_id": "642bdfc65edcc5760cb1ea12",
"avatarUrl": "/avatars/599b0bbb379b43cd39097c204c946075.svg",
"followerCount": null,
"fullname": "huang",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "yxuan",
"type": "user"
} | true | null | 2502.16779 | [
{
"_id": "67c65c06e116e361574405e9",
"hidden": false,
"name": "Yaxuan Huang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-03-04T08:51:27.582Z",
"user": {
"_id": "642bdfc65edcc5760cb1ea12",
"avatarUrl": "/avatars/599b0bbb379b43cd39097c204c946075.svg",
"fullname": "huang",
"isPro": false,
"type": "user",
"user": "yxuan"
}
},
{
"_id": "67c65c06e116e361574405ea",
"hidden": false,
"name": "Xili Dai",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c65c06e116e361574405eb",
"hidden": false,
"name": "Jianan Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c65c06e116e361574405ec",
"hidden": false,
"name": "Xianbiao Qi",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T11:01:12.106Z",
"user": {
"_id": "6494483aa13255720397287a",
"avatarUrl": "/avatars/61ff2e0371df513194246cf6fbb2b78a.svg",
"fullname": "Xianbiao Qi",
"isPro": false,
"type": "user",
"user": "qixianbiao"
}
},
{
"_id": "67c65c06e116e361574405ed",
"hidden": false,
"name": "Yixing Yuan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c65c06e116e361574405ee",
"hidden": false,
"name": "Xiangyu Yue",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T11:00:56.439Z",
"user": {
"_id": "666a8f24e2990b0cb16b7bf9",
"avatarUrl": "/avatars/fcbaf8f1e3e53a2a4a819b7cb2c53aa4.svg",
"fullname": "Xiangyu Yue",
"isPro": false,
"type": "user",
"user": "xyyue"
}
}
] | 2025-02-24T02:14:19 | Unposed Sparse Views Room Layout Reconstruction in the Age of Pretrain
Model | Room layout estimation from multiple-perspective images is poorly
investigated due to the complexities that emerge from multi-view geometry,
which requires multi-step solutions such as camera intrinsic and extrinsic
estimation, image matching, and triangulation. However, in 3D reconstruction,
the advancement of recent 3D foundation models such as DUSt3R has shifted the
paradigm from the traditional multi-step structure-from-motion process to an
end-to-end single-step approach. To this end, we introduce Plane-DUSt3R, a
novel method for multi-view room layout estimation leveraging the 3D foundation
model DUSt3R. Plane-DUSt3R incorporates the DUSt3R framework and fine-tunes on
a room layout dataset (Structure3D) with a modified objective to estimate
structural planes. By generating uniform and parsimonious results, Plane-DUSt3R
enables room layout estimation with only a single post-processing step and 2D
detection results. Unlike previous methods that rely on single-perspective or
panorama images, Plane-DUSt3R extends the setting to handle multiple-perspective
images. Moreover, it offers a streamlined, end-to-end solution that simplifies
the process and reduces error accumulation. Experimental results demonstrate
that Plane-DUSt3R not only outperforms state-of-the-art methods on the
synthetic dataset but also proves robust and effective on in-the-wild data with
different image styles, such as cartoons. Our code is available at:
https://github.com/justacar/Plane-DUSt3R | 2 | 67c65c0be116e36157440751 | null | https://github.com/justacar/Plane-DUSt3R |
|
2025-03-04T03:56:04.503000 | OneRec: Unifying Retrieve and Rank with Generative Recommender and Iterative Preference Alignment | 1 | {
"_id": "668f5875b5b3081d776e4094",
"avatarUrl": "/avatars/8c763393f25afbe5fb8b132f775e746a.svg",
"followerCount": 1,
"fullname": "Xiaohuan Zhou",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "XiaohuanZhou",
"type": "user"
} | false | null | 2502.18965 | [
{
"_id": "67c6bfdf96b9f5fa18c517db",
"hidden": false,
"name": "Jiaxin Deng",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T10:16:32.410Z",
"user": {
"_id": "625f6ebee1994410eef16a42",
"avatarUrl": "/avatars/eaa353afe91e849adcd35656477a6462.svg",
"fullname": "Jiaxin Deng",
"isPro": false,
"type": "user",
"user": "OrpheusBetter"
}
},
{
"_id": "67c6bfdf96b9f5fa18c517dc",
"hidden": false,
"name": "Shiyao Wang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T10:16:39.957Z",
"user": {
"_id": "641f8e596d51620635e49707",
"avatarUrl": "/avatars/f30b24da53fea2278f343c318007bb60.svg",
"fullname": "shiyao wang",
"isPro": false,
"type": "user",
"user": "oneself"
}
},
{
"_id": "67c6bfdf96b9f5fa18c517dd",
"hidden": false,
"name": "Kuo Cai",
"status": "claimed_verified",
"statusLastChangedAt": "2025-03-04T12:01:27.669Z",
"user": {
"_id": "65e6cc77e999cde61fcbc097",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/no-auth/5NxDLRmS2cQgNeZ6ScSNW.png",
"fullname": "CaiKuo",
"isPro": false,
"type": "user",
"user": "caikuo"
}
},
{
"_id": "67c6bfdf96b9f5fa18c517de",
"hidden": false,
"name": "Lejian Ren",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c6bfdf96b9f5fa18c517df",
"hidden": false,
"name": "Qigen Hu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c6bfdf96b9f5fa18c517e0",
"hidden": false,
"name": "Weifeng Ding",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T10:17:03.422Z",
"user": {
"_id": "64aeb9342cda6a37a4781b7d",
"avatarUrl": "/avatars/c1584c10ff0f9871315872245c9934fc.svg",
"fullname": "Weifeng Ding",
"isPro": false,
"type": "user",
"user": "DingWF"
}
},
{
"_id": "67c6bfdf96b9f5fa18c517e1",
"hidden": false,
"name": "Qiang Luo",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c6bfdf96b9f5fa18c517e2",
"hidden": false,
"name": "Guorui Zhou",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T10:16:52.106Z",
"user": {
"_id": "67c6c570cf87e2d2ebfc81aa",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/67c6c570cf87e2d2ebfc81aa/7qAstZtIT86Uwrz3u_anv.jpeg",
"fullname": "Guorui Zhou",
"isPro": false,
"type": "user",
"user": "GuoruiZhou"
}
}
] | 2025-02-26T09:25:10 | OneRec: Unifying Retrieve and Rank with Generative Recommender and
Iterative Preference Alignment | Recently, generative retrieval-based recommendation systems have emerged as a
promising paradigm. However, most modern recommender systems adopt a
retrieve-and-rank strategy, where the generative model functions only as a
selector during the retrieval stage. In this paper, we propose OneRec, which
replaces the cascaded learning framework with a unified generative model. To
the best of our knowledge, this is the first end-to-end generative model that
significantly surpasses current complex and well-designed recommender systems
in real-world scenarios. Specifically, OneRec includes: 1) an encoder-decoder
structure, which encodes the user's historical behavior sequences and gradually
decodes the videos that the user may be interested in. We adopt sparse
Mixture-of-Experts (MoE) to scale model capacity without proportionally
increasing computational FLOPs. 2) a session-wise generation approach. In
contrast to traditional next-item prediction, we propose a session-wise
generation, which is more elegant and contextually coherent than point-by-point
generation that relies on hand-crafted rules to properly combine the generated
results. 3) an Iterative Preference Alignment module combined with Direct
Preference Optimization (DPO) to enhance the quality of the generated results.
Unlike DPO in NLP, a recommendation system typically has only one opportunity
to display results for each user's browsing request, making it impossible to
obtain positive and negative samples simultaneously. To address this
limitation, we design a reward model to simulate user generation and customize
the sampling strategy. Extensive experiments have demonstrated that a limited
number of DPO samples can align user interest preferences and significantly
improve the quality of generated results. We deployed OneRec in the main scene
of Kuaishou, achieving a 1.6% increase in watch-time, which is a substantial
improvement. | 18 | 67c6bfe396b9f5fa18c518e5 | null | null |
|
2025-03-04T03:20:03.380000 | AI-Invented Tonal Languages: Preventing a Machine Lingua Franca Beyond Human Understanding | 1 | {
"_id": "63136a82e29fb2e86d5e5bdd",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63136a82e29fb2e86d5e5bdd/pFZDuQtzfUStovbwwZGvn.png",
"followerCount": null,
"fullname": "David Noever",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "dnoever",
"type": "user"
} | true | [
"https://cdn-uploads.huggingface.co/production/uploads/63136a82e29fb2e86d5e5bdd/mgIPjnhtUaGLR2Iv4ViL6.jpeg"
] | 2503.01063 | [
{
"_id": "67c6b72b7aad9a016ae60797",
"hidden": false,
"name": "David Noever",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T11:17:50.200Z",
"user": {
"_id": "63136a82e29fb2e86d5e5bdd",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63136a82e29fb2e86d5e5bdd/pFZDuQtzfUStovbwwZGvn.png",
"fullname": "David Noever",
"isPro": false,
"type": "user",
"user": "dnoever"
}
}
] | 2025-03-02T23:59:52 | AI-Invented Tonal Languages: Preventing a Machine Lingua Franca Beyond
Human Understanding | This paper investigates the potential for large language models (LLMs) to
develop private tonal languages for machine-to-machine (M2M) communication.
Inspired by cryptophasia in human twins (affecting up to 50% of twin births)
and natural tonal languages like Mandarin and Vietnamese, we implement a
precise character-to-frequency mapping system that encodes the full ASCII
character set (32-126) using musical semitones. Each character is assigned a
unique frequency, creating a logarithmic progression beginning with space (220
Hz) and ending with tilde (50,175.42 Hz). This spans approximately 7.9 octaves,
with higher characters deliberately mapped to ultrasonic frequencies beyond
human perception (>20 kHz). Our implemented software prototype demonstrates
this encoding through visualization, auditory playback, and ABC musical
notation, allowing for analysis of information density and transmission speed.
Testing reveals that tonal encoding can achieve information rates exceeding
human speech while operating partially outside human perceptual boundaries.
This work responds directly to concerns about AI systems catastrophically
developing private languages within the next five years, providing a concrete
prototype software example of how such communication might function and the
technical foundation required for its emergence, detection, and governance. | 1 | 67c6b72c7aad9a016ae607bb | null | null |
|
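The character-to-frequency mapping in the abstract above is fully specified: ASCII 32-126 on a semitone ladder starting at 220 Hz. A minimal sketch that reproduces the quoted endpoints (function name is mine):

```python
def char_to_freq(ch, base=220.0):
    """Map printable ASCII (32..126) onto musical semitones: space is
    220 Hz and each subsequent character is one semitone (2**(1/12)) higher."""
    code = ord(ch)
    if not 32 <= code <= 126:
        raise ValueError("printable ASCII only")
    return base * 2 ** ((code - 32) / 12)

print(f"space: {char_to_freq(' '):9.2f} Hz")   # 220.00 Hz
print(f"tilde: {char_to_freq('~'):9.2f} Hz")   # 50175.42 Hz, ultrasonic
```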
2025-03-04T02:48:58.261000 | Liger: Linearizing Large Language Models to Gated Recurrent Structures | 1 | {
"_id": "6246bb33da617c00b48e4d92",
"avatarUrl": "/avatars/0304a9f6eb7f5dee4d933d03222f94e9.svg",
"followerCount": 3,
"fullname": "Weigao Sun",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "weigao266",
"type": "user"
} | true | null | 2503.01496 | [
{
"_id": "67c6b05f35198d0f397adc98",
"hidden": false,
"name": "Disen Lan",
"status": "claimed_verified",
"statusLastChangedAt": "2025-03-04T08:34:46.117Z",
"user": {
"_id": "66ea643899af9ac3463639b1",
"avatarUrl": "/avatars/252d470e761a57834dee3dbc60dfefed.svg",
"fullname": "Disen Lan",
"isPro": false,
"type": "user",
"user": "landisen"
}
},
{
"_id": "67c6b05f35198d0f397adc99",
"hidden": false,
"name": "Weigao Sun",
"status": "extracted_confirmed",
"statusLastChangedAt": "2025-03-04T08:10:52.130Z",
"user": {
"_id": "6246bb33da617c00b48e4d92",
"avatarUrl": "/avatars/0304a9f6eb7f5dee4d933d03222f94e9.svg",
"fullname": "Weigao Sun",
"isPro": false,
"type": "user",
"user": "weigao266"
}
},
{
"_id": "67c6b05f35198d0f397adc9a",
"hidden": false,
"name": "Jiaxi Hu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T10:04:18.982Z",
"user": {
"_id": "665dc35752ff9daa9ba5a4ed",
"avatarUrl": "/avatars/df8b01879d97e599b610fa51414d3a18.svg",
"fullname": "Hu Jiaxi",
"isPro": false,
"type": "user",
"user": "Jiaxihu2"
}
},
{
"_id": "67c6b05f35198d0f397adc9b",
"hidden": false,
"name": "Jusen Du",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T10:04:26.432Z",
"user": {
"_id": "65003e857804f04a163328d9",
"avatarUrl": "/avatars/fe32150aabfde8d283b38ccebcf6982e.svg",
"fullname": "Jusen Du",
"isPro": false,
"type": "user",
"user": "JusenK"
}
},
{
"_id": "67c6b05f35198d0f397adc9c",
"hidden": false,
"name": "Yu Cheng",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-03-03T13:08:00 | Liger: Linearizing Large Language Models to Gated Recurrent Structures | Transformers with linear recurrent modeling offer linear-time training and
constant-memory inference. Despite their demonstrated efficiency and
performance, pretraining such non-standard architectures from scratch remains
costly and risky. The linearization of large language models (LLMs) transforms
pretrained standard models into linear recurrent structures, enabling more
efficient deployment. However, current linearization methods typically
introduce additional feature map modules that require extensive fine-tuning and
overlook the gating mechanisms used in state-of-the-art linear recurrent
models. To address these issues, this paper presents Liger, short for
Linearizing LLMs to gated recurrent structures. Liger is a novel approach for
converting pretrained LLMs into gated linear recurrent models without adding
extra parameters. It repurposes the pretrained key matrix weights to construct
diverse gating mechanisms, facilitating the formation of various gated
recurrent structures while avoiding the need to train additional components
from scratch. Using lightweight fine-tuning with Low-Rank Adaptation (LoRA),
Liger restores the performance of the linearized gated recurrent models to
match that of the original LLMs. Additionally, we introduce Liger Attention, an
intra-layer hybrid attention mechanism, which recovers 93% of the
Transformer-based LLM's performance using only 0.02% of the pre-training tokens
during the linearization process, achieving competitive results across multiple
benchmarks, as validated on models ranging from 1B to 8B parameters. Code is
available at https://github.com/OpenSparseLLMs/Linearization. | 13 | 67c6b06035198d0f397adcc4 | null | null |
|
2025-03-04T02:27:17.351000 | CLEA: Closed-Loop Embodied Agent for Enhancing Task Execution in Dynamic Environments | 1 | {
"_id": "6628c6107751d297d7025a71",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6628c6107751d297d7025a71/S1rm5VIwV2Uxfv8GetKMU.jpeg",
"followerCount": 1,
"fullname": "Lei Mingcong",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "SP4595",
"type": "user"
} | true | null | 2503.00729 | [
{
"_id": "67c6ab3ec0b62d612c54ddf5",
"hidden": false,
"name": "Mingcong Lei",
"status": "claimed_verified",
"statusLastChangedAt": "2025-03-04T08:34:48.061Z",
"user": {
"_id": "6628c6107751d297d7025a71",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6628c6107751d297d7025a71/S1rm5VIwV2Uxfv8GetKMU.jpeg",
"fullname": "Lei Mingcong",
"isPro": false,
"type": "user",
"user": "SP4595"
}
},
{
"_id": "67c6ab3ec0b62d612c54ddf6",
"hidden": false,
"name": "Ge Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c6ab3ec0b62d612c54ddf7",
"hidden": false,
"name": "Yiming Zhao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c6ab3ec0b62d612c54ddf8",
"hidden": false,
"name": "Zhixin Mai",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c6ab3ec0b62d612c54ddf9",
"hidden": false,
"name": "Qing Zhao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c6ab3ec0b62d612c54ddfa",
"hidden": false,
"name": "Yao Guo",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c6ab3ec0b62d612c54ddfb",
"hidden": false,
"name": "Zhen Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c6ab3ec0b62d612c54ddfc",
"hidden": false,
"name": "Shuguang Cui",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c6ab3ec0b62d612c54ddfd",
"hidden": false,
"name": "Yatong Han",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c6ab3ec0b62d612c54ddfe",
"hidden": false,
"name": "Jinke Ren",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-03-02T04:50:59 | CLEA: Closed-Loop Embodied Agent for Enhancing Task Execution in Dynamic
Environments | Large Language Models (LLMs) exhibit remarkable capabilities in the
hierarchical decomposition of complex tasks through semantic reasoning.
However, their application in embodied systems faces challenges in ensuring
reliable execution of subtask sequences and achieving one-shot success in
long-term task completion. To address these limitations in dynamic
environments, we propose Closed-Loop Embodied Agent (CLEA) -- a novel
architecture incorporating four specialized open-source LLMs with functional
decoupling for closed-loop task management. The framework features two core
innovations: (1) Interactive task planner that dynamically generates executable
subtasks based on the environmental memory, and (2) Multimodal execution critic
employing an evaluation framework to conduct a probabilistic assessment of
action feasibility, triggering hierarchical re-planning mechanisms when
environmental perturbations exceed preset thresholds. To validate CLEA's
effectiveness, we conduct experiments in a real environment with manipulable
objects, using two heterogeneous robots for object search, manipulation, and
search-manipulation integration tasks. Across 12 task trials, CLEA outperforms
the baseline model, achieving a 67.3% improvement in success rate and a 52.8%
increase in task completion rate. These results demonstrate that CLEA
significantly enhances the robustness of task planning and execution in dynamic
environments. | 2 | 67c6ab42c0b62d612c54df71 | https://sp4595.github.io/CLEA/ | https://github.com/SP4595/CLEA-Closed-Loop-Embodied-Agent |
|
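A schematic sketch of the closed-loop plan-criticize-execute cycle the CLEA abstract above describes. The `planner` and `critic` callables stand in for CLEA's LLM modules, `execute`/`observe` for the robot interface; all names and the 0.5 feasibility threshold are hypothetical:

```python
def clea_loop(planner, critic, execute, observe, goal, max_steps=20):
    """Plan a subtask from current memory; let the critic veto infeasible
    actions (forcing re-observation and re-planning); otherwise execute
    the subtask and update the environmental memory."""
    memory = observe()
    for _ in range(max_steps):
        subtask = planner(goal, memory)
        if subtask is None:                  # planner signals completion
            return True
        if critic(subtask, memory) < 0.5:    # feasibility check failed
            memory = observe()               # refresh memory, re-plan
            continue
        memory = execute(subtask)
    return False

# Toy run: three dummy subtasks, then the planner reports completion.
steps = iter(["search", "grasp", "place", None])
print(clea_loop(lambda g, m: next(steps), lambda s, m: 1.0,
                lambda s: f"after:{s}", lambda: "initial", goal="tidy up"))
```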
2025-03-04T02:21:00.460000 | Speculative Ad-hoc Querying | 1 | {
"_id": "6577437552f02732a463d97d",
"avatarUrl": "/avatars/8eb271ec249fa9b0d97dfe0eace6da88.svg",
"followerCount": null,
"fullname": "Haoyu Li",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "Haoyu0529",
"type": "user"
} | true | [
"https://cdn-uploads.huggingface.co/production/uploads/6577437552f02732a463d97d/fEkQ4BZ8Yx_CzsjvHBWFq.qt"
] | 2503.00714 | [
{
"_id": "67c6a803025b72f14ccb0939",
"hidden": false,
"name": "Haoyu Li",
"status": "extracted_pending",
"statusLastChangedAt": "2025-03-04T07:13:08.306Z",
"user": {
"_id": "6577437552f02732a463d97d",
"avatarUrl": "/avatars/8eb271ec249fa9b0d97dfe0eace6da88.svg",
"fullname": "Haoyu Li",
"isPro": false,
"type": "user",
"user": "Haoyu0529"
}
},
{
"_id": "67c6a803025b72f14ccb093a",
"hidden": false,
"name": "Srikanth Kandula",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c6a803025b72f14ccb093b",
"hidden": false,
"name": "Maria Angels de Luis Balaguer",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c6a803025b72f14ccb093c",
"hidden": false,
"name": "Aditya Akella",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c6a803025b72f14ccb093d",
"hidden": false,
"name": "Venkat Arun",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-03-02T03:44:31 | Speculative Ad-hoc Querying | Analyzing large datasets requires responsive query execution, but executing
SQL queries on massive datasets can be slow. This paper explores whether query
execution can begin even before the user has finished typing, allowing results
to appear almost instantly. We propose SpeQL, a system that leverages Large
Language Models (LLMs) to predict likely queries based on the database schema,
the user's past queries, and their incomplete query. Since exact query
prediction is infeasible, SpeQL speculates on partial queries in two ways: 1)
it predicts the query structure to compile and plan queries in advance, and 2)
it precomputes temporary tables that are much smaller than the original
database, but are still predicted to contain all information necessary to
answer the user's final query. Additionally, SpeQL continuously displays
results for speculated queries and subqueries in real time, aiding exploratory
analysis. A utility/user study showed that SpeQL improved task completion time,
and participants reported that its speculative display of results helped them
discover patterns in the data more quickly. In the study, SpeQL improved users'
query latency by up to 289x and kept the overhead reasonable, at $4
per hour. | 8 | 67c6a804025b72f14ccb0994 | https://github.com/lihy0529/SpeQL | https://github.com/lihy0529/SpeQL |
|
2025-03-04T02:16:25.633000 | CodeArena: A Collective Evaluation Platform for LLM Code Generation | 1 | {
"_id": "61711f02e0b1ddb56eb9b526",
"avatarUrl": "/avatars/3e2fdf774f5bc1f73b450486d6da42d4.svg",
"followerCount": 3,
"fullname": "Mingzhe Du",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "Elfsong",
"type": "user"
} | true | null | 2503.01295 | [
{
"_id": "67c6a8b534aeb86063e94010",
"hidden": false,
"name": "Mingzhe Du",
"status": "claimed_verified",
"statusLastChangedAt": "2025-03-04T08:34:49.954Z",
"user": {
"_id": "61711f02e0b1ddb56eb9b526",
"avatarUrl": "/avatars/3e2fdf774f5bc1f73b450486d6da42d4.svg",
"fullname": "Mingzhe Du",
"isPro": false,
"type": "user",
"user": "Elfsong"
}
},
{
"_id": "67c6a8b534aeb86063e94011",
"hidden": false,
"name": "Anh Tuan Luu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T11:02:20.575Z",
"user": {
"_id": "655722e80438e0854fae7554",
"avatarUrl": "/avatars/b93a74f7c7880f9fe0f3ffb47e2aef5e.svg",
"fullname": "Luu Anh Tuan",
"isPro": false,
"type": "user",
"user": "anhtuanluu36"
}
},
{
"_id": "67c6a8b534aeb86063e94012",
"hidden": false,
"name": "Bin Ji",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c6a8b534aeb86063e94013",
"hidden": false,
"name": "Xiaobao Wu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T11:02:48.996Z",
"user": {
"_id": "64cb02869e30a46f7b80b355",
"avatarUrl": "/avatars/81ce4ba78826b54f0e1b53eeaff87ee6.svg",
"fullname": "Xiaobao Wu",
"isPro": false,
"type": "user",
"user": "bobxwu"
}
},
{
"_id": "67c6a8b534aeb86063e94014",
"hidden": false,
"name": "Dong Huang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T11:02:43.013Z",
"user": {
"_id": "67c56a7f083bb2c50254bbe5",
"avatarUrl": "/avatars/bdf6fd8934c2199ff169b178f6482773.svg",
"fullname": "Huang, Dong",
"isPro": false,
"type": "user",
"user": "DongHuang-ebay"
}
},
{
"_id": "67c6a8b534aeb86063e94015",
"hidden": false,
"name": "Terry Yue Zhuo",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T11:02:33.977Z",
"user": {
"_id": "62b7fb545233925f253531c8",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/62b7fb545233925f253531c8/W50u2G1HK3EtUKHRU189V.jpeg",
"fullname": "Terry Yue Zhuo",
"isPro": false,
"type": "user",
"user": "terryyz"
}
},
{
"_id": "67c6a8b534aeb86063e94016",
"hidden": false,
"name": "Qian Liu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c6a8b534aeb86063e94017",
"hidden": false,
"name": "See-Kiong Ng",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-03-03T08:31:16 | CodeArena: A Collective Evaluation Platform for LLM Code Generation | Large Language Models (LLMs) have reshaped code generation by combining
their exceptional comprehension of natural language with a strong command of
programming syntax, thereby substantially boosting developer productivity. These advancements have
prompted numerous efforts to quantitatively evaluate their coding capabilities.
However, persistent challenges, such as benchmark leakage, data dissipation,
and limited system accessibility, continue to impede a timely and accurate
assessment. To address these limitations, we introduce CodeArena, an online
evaluation framework tailored for LLM code generation. The key innovation is a
collective evaluation mechanism, which dynamically recalibrates individual
model scores based on the holistic performance of all participating models,
mitigating score biases caused by widespread benchmark leakage. In addition,
CodeArena ensures open access to all submitted solutions and test cases and
provides automation-friendly APIs to streamline the code evaluation workflow.
Our main contributions are: (1) a collective evaluation system for unbiased
assessment, (2) a public repository of solutions and test cases, and (3)
automation-ready APIs for seamless integration. | 5 | 67c6a8b634aeb86063e9406a | null | null |
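The abstract does not give CodeArena's recalibration formula, but one plausible reading of "collective evaluation" is difficulty weighting: problems that few submitted models solve carry more credit. The sketch below illustrates that idea only; the platform's actual scoring may differ.

```python
import numpy as np

# results[m, p] = 1 if model m solved problem p, else 0.
results = np.array([
    [1, 1, 1, 0],   # model A
    [1, 1, 0, 0],   # model B
    [1, 0, 0, 0],   # model C
])

# Difficulty estimated from the whole pool: rarely-solved problems get more
# weight, which dampens inflation from leaked, widely-memorized benchmarks.
solve_rate = results.mean(axis=0)            # per-problem solve rate
weight = 1.0 - solve_rate + 1e-9             # harder => higher weight
collective_score = (results * weight).sum(axis=1) / weight.sum()

for name, raw, adj in zip("ABC", results.mean(axis=1), collective_score):
    print(f"model {name}: raw pass rate {raw:.2f}, collective score {adj:.2f}")
```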
|
2025-03-04T01:56:03.632000 | Qilin: A Multimodal Information Retrieval Dataset with APP-level User Sessions | 1 | {
"_id": "60c0ed29d8bc072769d78f48",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/60c0ed29d8bc072769d78f48/V6q6Tn4kzB46NIbTYw9pQ.jpeg",
"followerCount": 2,
"fullname": "Qian Dong",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "qian",
"type": "user"
} | true | null | 2503.00501 | [
{
"_id": "67c6a343ad6b7c2fa29d5e7e",
"hidden": false,
"name": "Jia Chen",
"status": "claimed_verified",
"statusLastChangedAt": "2025-03-04T16:08:10.744Z",
"user": {
"_id": "67c03221aed8409476d39da8",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/67c03221aed8409476d39da8/eQIhOPRLNoiphsR145mfB.png",
"fullname": "Jia Chen",
"isPro": false,
"type": "user",
"user": "Regulus309"
}
},
{
"_id": "67c6a343ad6b7c2fa29d5e7f",
"hidden": false,
"name": "Qian Dong",
"status": "claimed_verified",
"statusLastChangedAt": "2025-03-04T08:34:51.762Z",
"user": {
"_id": "60c0ed29d8bc072769d78f48",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/60c0ed29d8bc072769d78f48/V6q6Tn4kzB46NIbTYw9pQ.jpeg",
"fullname": "Qian Dong",
"isPro": false,
"type": "user",
"user": "qian"
}
},
{
"_id": "67c6a343ad6b7c2fa29d5e80",
"hidden": false,
"name": "Haitao Li",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T10:20:57.898Z",
"user": {
"_id": "67b5d91558369f6b38c5b596",
"avatarUrl": "/avatars/18b08d5d9b05786cad34bc000c7606aa.svg",
"fullname": "Haitao Li",
"isPro": false,
"type": "user",
"user": "haitaoli"
}
},
{
"_id": "67c6a343ad6b7c2fa29d5e81",
"hidden": false,
"name": "Xiaohui He",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c6a343ad6b7c2fa29d5e82",
"hidden": false,
"name": "Yan Gao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c6a343ad6b7c2fa29d5e83",
"hidden": false,
"name": "Shaosheng Cao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c6a343ad6b7c2fa29d5e84",
"hidden": false,
"name": "Yi Wu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c6a343ad6b7c2fa29d5e85",
"hidden": false,
"name": "Ping Yang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c6a343ad6b7c2fa29d5e86",
"hidden": false,
"name": "Chen Xu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c6a343ad6b7c2fa29d5e87",
"hidden": false,
"name": "Yao Hu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c6a343ad6b7c2fa29d5e88",
"hidden": false,
"name": "Qingyao Ai",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T10:21:22.100Z",
"user": {
"_id": "6657e7045f6e35c7d541bdd8",
"avatarUrl": "/avatars/368e5cef6c93543b2b92fbca79a4e4b9.svg",
"fullname": "Qingyao Ai",
"isPro": false,
"type": "user",
"user": "aiqy"
}
},
{
"_id": "67c6a343ad6b7c2fa29d5e89",
"hidden": false,
"name": "Yiqun Liu",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-03-01T14:15:00 | Qilin: A Multimodal Information Retrieval Dataset with APP-level User
Sessions | User-generated content (UGC) communities, especially those featuring
multimodal content, improve user experiences by integrating visual and textual
information into results (or items). The challenge of improving user
experiences in complex systems with search and recommendation (S&R) services
has drawn significant attention from both academia and industry in recent years.
However, the lack of high-quality datasets has limited the research progress on
multimodal S&R. To address the growing need for developing better S&R
services, we present a novel multimodal information retrieval dataset in this
paper, namely Qilin. The dataset is collected from Xiaohongshu, a popular
social platform with over 300 million monthly active users and an average
search penetration rate of over 70%. In contrast to existing datasets,
Qilin offers a comprehensive collection of user sessions with
heterogeneous results like image-text notes, video notes, commercial notes, and
direct answers, facilitating the development of advanced multimodal neural
retrieval models across diverse task settings. To better model user
satisfaction and support the analysis of heterogeneous user behaviors, we also
collect extensive APP-level contextual signals and genuine user feedback.
Notably, Qilin contains user-favored answers and their referred results for
search requests triggering the Deep Query Answering (DQA) module. This allows
not only the training & evaluation of a Retrieval-augmented Generation (RAG)
pipeline, but also the exploration of how such a module would affect users'
search behavior. Through comprehensive analysis and experiments, we provide
interesting findings and insights for further improving S&R systems. We hope
that Qilin will significantly contribute to the advancement of
multimodal content platforms with S&R services in the future. | 11 | 67c6a346ad6b7c2fa29d5f88 | https://huggingface.co/datasets/THUIR/Qilin | https://github.com/RED-Search/Qilin/ |
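Since the dataset is public on the Hub (linked above), a first look takes only a few lines. Split and column names below are assumptions; consult the dataset card for the actual configuration and schema.

```python
from datasets import load_dataset

# Assumption: a "train" split exists and no config name is required; if the
# dataset exposes multiple configs, pass one as the second argument.
qilin = load_dataset("THUIR/Qilin", split="train", streaming=True)
for row in qilin.take(3):
    # Print a truncated preview of each field to inspect the schema.
    print({k: str(v)[:60] for k, v in row.items()})
```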
|
2025-03-04T01:19:45.715000 | Kiss3DGen: Repurposing Image Diffusion Models for 3D Asset Generation | 1 | {
"_id": "6332e2689bf698ce68a22e8c",
"avatarUrl": "/avatars/c1922acfda2e6d2fe7b03194a404eb10.svg",
"followerCount": 2,
"fullname": "JIANTAO LIN",
"isHf": false,
"isMod": false,
"isPro": true,
"name": "LTT",
"type": "user"
} | true | null | 2503.01370 | [
{
"_id": "67c691673ff65c55829685a0",
"hidden": false,
"name": "Jiantao Lin",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T10:52:36.682Z",
"user": {
"_id": "6332e2689bf698ce68a22e8c",
"avatarUrl": "/avatars/c1922acfda2e6d2fe7b03194a404eb10.svg",
"fullname": "JIANTAO LIN",
"isPro": true,
"type": "user",
"user": "LTT"
}
},
{
"_id": "67c691673ff65c55829685a1",
"hidden": false,
"name": "Xin Yang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c691673ff65c55829685a2",
"hidden": false,
"name": "Meixi Chen",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T10:52:44.047Z",
"user": {
"_id": "63641f09a53b71b7a1b02955",
"avatarUrl": "/avatars/2f43703cbbc56f3e3f98090f44bccfe6.svg",
"fullname": "Meixi Chen",
"isPro": false,
"type": "user",
"user": "MeixiChen"
}
},
{
"_id": "67c691673ff65c55829685a3",
"hidden": false,
"name": "Yingjie Xu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c691673ff65c55829685a4",
"hidden": false,
"name": "Dongyu Yan",
"status": "claimed_verified",
"statusLastChangedAt": "2025-03-04T08:34:56.252Z",
"user": {
"_id": "64049ae20ab5e22719f35103",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1678023295407-noauth.jpeg",
"fullname": "Dongyu Yan",
"isPro": false,
"type": "user",
"user": "StarYDY"
}
},
{
"_id": "67c691673ff65c55829685a5",
"hidden": false,
"name": "Leyi Wu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c691673ff65c55829685a6",
"hidden": false,
"name": "Xinli Xu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T10:53:15.555Z",
"user": {
"_id": "64b4ab62eec33e27dcd733b5",
"avatarUrl": "/avatars/0a9bf220c9a5efe7279f9b287b087d36.svg",
"fullname": "Xinli XU",
"isPro": false,
"type": "user",
"user": "Xxlbigbrother"
}
},
{
"_id": "67c691673ff65c55829685a7",
"hidden": false,
"name": "Lie XU",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c691673ff65c55829685a8",
"hidden": false,
"name": "Shunsi Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c691673ff65c55829685a9",
"hidden": false,
"name": "Ying-Cong Chen",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T10:53:33.509Z",
"user": {
"_id": "655cba1d87b67834000590e8",
"avatarUrl": "/avatars/3bd43b7c9351f65b8f38f4c8237a0146.svg",
"fullname": "Yingcong Chen",
"isPro": false,
"type": "user",
"user": "yingcongchen"
}
}
] | 2025-03-03T10:07:19 | Kiss3DGen: Repurposing Image Diffusion Models for 3D Asset Generation | Diffusion models have achieved great success in generating 2D images.
However, the quality and generalizability of 3D content generation remain
limited. State-of-the-art methods often require large-scale 3D assets for
training, which are challenging to collect. In this work, we introduce
Kiss3DGen (Keep It Simple and Straightforward in 3D Generation), an efficient
framework for generating, editing, and enhancing 3D objects by repurposing a
well-trained 2D image diffusion model for 3D generation. Specifically, we
fine-tune a diffusion model to generate a "3D Bundle Image", a tiled
representation composed of multi-view images and their corresponding normal
maps. The normal maps are then used to reconstruct a 3D mesh, and the
multi-view images provide texture mapping, resulting in a complete 3D model.
This simple method effectively transforms the 3D generation problem into a 2D
image generation task, maximizing the utilization of knowledge in pretrained
diffusion models. Furthermore, we demonstrate that our Kiss3DGen model is
compatible with various diffusion model techniques, enabling advanced features
such as 3D editing, mesh and texture enhancement, etc. Through extensive
experiments, we demonstrate the effectiveness of our approach, showcasing its
ability to produce high-quality 3D models efficiently. | 7 | 67c6916b3ff65c5582968702 | https://ltt-o.github.io/Kiss3dgen.github.io/ | https://github.com/EnVision-Research/Kiss3DGen |
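Downstream of generation, the "3D Bundle Image" must be untiled into per-view RGB crops and normal maps before mesh reconstruction. The sketch below assumes a simple 2×N grid (RGB row on top, normals below); the paper's actual tiling layout is not specified in the abstract, so treat this purely as an illustration.

```python
import numpy as np
from PIL import Image

def split_bundle(bundle: Image.Image, n_views: int = 4):
    """Split a tiled '3D Bundle Image' into per-view RGB and normal crops.

    Layout assumption (not confirmed by the abstract): the top row holds the
    n_views RGB renderings, the bottom row the matching normal maps.
    """
    w, h = bundle.size
    tile_w, tile_h = w // n_views, h // 2
    rgbs = [bundle.crop((i * tile_w, 0, (i + 1) * tile_w, tile_h))
            for i in range(n_views)]
    normals = [bundle.crop((i * tile_w, tile_h, (i + 1) * tile_w, 2 * tile_h))
               for i in range(n_views)]
    return rgbs, normals

bundle = Image.fromarray(np.zeros((512, 1024, 3), dtype=np.uint8))  # dummy image
rgbs, normals = split_bundle(bundle)
print(len(rgbs), rgbs[0].size, len(normals))  # 4 (256, 256) 4
```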
|
2025-03-04T00:52:22.204000 | Difix3D+: Improving 3D Reconstructions with Single-Step Diffusion Models | 1 | {
"_id": "633aaf695df91da9cea92960",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/633aaf695df91da9cea92960/9T4y1ru5wt5iKUUqf9_Tt.png",
"followerCount": 12,
"fullname": "Jay Wu",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "jayw",
"type": "user"
} | true | null | 2503.01774 | [
{
"_id": "67c694febdab31ec59fea175",
"hidden": false,
"name": "Jay Zhangjie Wu",
"status": "claimed_verified",
"statusLastChangedAt": "2025-03-04T08:34:53.874Z",
"user": {
"_id": "633aaf695df91da9cea92960",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/633aaf695df91da9cea92960/9T4y1ru5wt5iKUUqf9_Tt.png",
"fullname": "Jay Wu",
"isPro": false,
"type": "user",
"user": "jayw"
}
},
{
"_id": "67c694febdab31ec59fea176",
"hidden": false,
"name": "Yuxuan Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c694febdab31ec59fea177",
"hidden": false,
"name": "Haithem Turki",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T09:13:26.878Z",
"user": {
"_id": "656e000253703dd78fd072a9",
"avatarUrl": "/avatars/6702ba8fabe3d08884aa757f90cea333.svg",
"fullname": "Haithem Turki",
"isPro": false,
"type": "user",
"user": "hturki"
}
},
{
"_id": "67c694febdab31ec59fea178",
"hidden": false,
"name": "Xuanchi Ren",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T09:13:33.467Z",
"user": {
"_id": "658529d61c461dfe88afe8e8",
"avatarUrl": "/avatars/a22c1b07d28c2662833c462c6537d835.svg",
"fullname": "Xuanchi Ren",
"isPro": false,
"type": "user",
"user": "xrenaa"
}
},
{
"_id": "67c694febdab31ec59fea179",
"hidden": false,
"name": "Jun Gao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c694febdab31ec59fea17a",
"hidden": false,
"name": "Mike Zheng Shou",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T09:27:21.825Z",
"user": {
"_id": "661ab3da2b14565c7acccf5c",
"avatarUrl": "/avatars/fa4fc03664803e02aede4d4c3d50b393.svg",
"fullname": "Mike Zheng Shou",
"isPro": false,
"type": "user",
"user": "AnalMom"
}
},
{
"_id": "67c694febdab31ec59fea17b",
"hidden": false,
"name": "Sanja Fidler",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c694febdab31ec59fea17c",
"hidden": false,
"name": "Zan Gojcic",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T09:27:34.034Z",
"user": {
"_id": "6366cda3361a96184dc22139",
"avatarUrl": "/avatars/d8a88c84cb5f69e69dd038674a29be89.svg",
"fullname": "Zan Gojcic",
"isPro": false,
"type": "user",
"user": "zgojcic"
}
},
{
"_id": "67c694febdab31ec59fea17d",
"hidden": false,
"name": "Huan Ling",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-03-03T17:58:33 | Difix3D+: Improving 3D Reconstructions with Single-Step Diffusion Models | Neural Radiance Fields and 3D Gaussian Splatting have revolutionized 3D
reconstruction and novel-view synthesis tasks. However, achieving photorealistic
rendering from extreme novel viewpoints remains challenging, as artifacts
persist across representations. In this work, we introduce Difix3D+, a novel
pipeline designed to enhance 3D reconstruction and novel-view synthesis through
single-step diffusion models. At the core of our approach is Difix, a
single-step image diffusion model trained to enhance and remove artifacts in
rendered novel views caused by underconstrained regions of the 3D
representation. Difix serves two critical roles in our pipeline. First, it is
used during the reconstruction phase to clean up pseudo-training views that are
rendered from the reconstruction and then distilled back into 3D. This greatly
enhances underconstrained regions and improves the overall 3D representation
quality. More importantly, Difix also acts as a neural enhancer during
inference, effectively removing residual artifacts arising from imperfect 3D
supervision and the limited capacity of current reconstruction models. Difix3D+
is a general solution, a single model compatible with both NeRF and 3DGS
representations, and it achieves an average 2× improvement in FID score
over baselines while maintaining 3D consistency. | 29 | 67c69500bdab31ec59fea24d | https://research.nvidia.com/labs/toronto-ai/difix3d | null |
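Difix plays two roles, and the overall control flow can be captured in a short skeleton. This is a structural sketch only: `difix` is an identity placeholder for the single-step diffusion enhancer, and `scene` is a duck-typed stand-in for a NeRF or 3DGS model with render/optimize hooks.

```python
def difix(image):
    """Placeholder for the single-step image diffusion enhancer
    (assumption: one denoising pass conditioned on the rendered view)."""
    return image  # identity stand-in

def reconstruct_with_difix(scene, novel_poses, n_rounds=3):
    # Role 1: during reconstruction, render pseudo-training views from
    # underconstrained viewpoints, clean them with Difix, and distill
    # them back into the 3D representation.
    for _ in range(n_rounds):
        for pose in novel_poses:
            rendered = scene.render(pose)
            scene.add_training_view(pose, difix(rendered))
        scene.optimize(steps=1000)
    return scene

def render_at_inference(scene, pose):
    # Role 2: at inference, Difix acts as a neural enhancer that removes
    # residual artifacts from the final rendering.
    return difix(scene.render(pose))
```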
|
2025-03-04T00:29:56.570000 | VideoUFO: A Million-Scale User-Focused Dataset for Text-to-Video Generation | 1 | {
"_id": "62b32a4429a410b7f6b06710",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/62b32a4429a410b7f6b06710/VzgvmnlYZWuifZTkIkCxy.jpeg",
"followerCount": 14,
"fullname": "Wenhao Wang",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "WenhaoWang",
"type": "user"
} | true | null | 2503.01739 | [
{
"_id": "67c68f7828a037872c5ce5bb",
"hidden": false,
"name": "Wenhao Wang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T11:14:37.907Z",
"user": {
"_id": "62b32a4429a410b7f6b06710",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/62b32a4429a410b7f6b06710/VzgvmnlYZWuifZTkIkCxy.jpeg",
"fullname": "Wenhao Wang",
"isPro": false,
"type": "user",
"user": "WenhaoWang"
}
},
{
"_id": "67c68f7828a037872c5ce5bc",
"hidden": false,
"name": "Yi Yang",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-03-03T17:00:36 | VideoUFO: A Million-Scale User-Focused Dataset for Text-to-Video
Generation | Text-to-video generative models convert textual prompts into dynamic visual
content, offering wide-ranging applications in film production, gaming, and
education. However, their real-world performance often falls short of user
expectations. One key reason is that these models have not been trained on
videos related to some topics users want to create. In this paper, we propose
VideoUFO, the first Video dataset specifically curated to align with Users'
FOcus in real-world scenarios. Beyond this, our VideoUFO also features: (1)
minimal (0.29%) overlap with existing video datasets, and (2) videos
searched exclusively via YouTube's official API under the Creative Commons
license. These two attributes provide future researchers with greater freedom
to broaden their training sources. The VideoUFO comprises over 1.09 million
video clips, each paired with both a brief and a detailed caption
(description). Specifically, through clustering, we first identify 1,291
user-focused topics from the million-scale real text-to-video prompt dataset,
VidProM. Then, we use these topics to retrieve videos from YouTube, split the
retrieved videos into clips, and generate both brief and detailed captions for
each clip. After verifying the clips with specified topics, we are left with
about 1.09 million video clips. Our experiments reveal that (1) current 16
text-to-video models do not achieve consistent performance across all
user-focused topics; and (2) a simple model trained on VideoUFO outperforms
others on worst-performing topics. The dataset is publicly available at
https://huggingface.co/datasets/WenhaoWang/VideoUFO under the CC BY 4.0
License. | 3 | 67c68f7a28a037872c5ce60d | null | null |
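The first pipeline step, clustering a million-scale prompt set into user-focused topics, can be illustrated on a toy scale. The paper clusters ~1M VidProM prompts into 1,291 topics; the featurization below (TF-IDF plus KMeans) is an assumption chosen for brevity, not the paper's actual method.

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

prompts = [
    "a cat surfing a wave", "dog riding a skateboard",
    "timelapse of a city at night", "aerial view of a city skyline",
]
# Toy stand-in for clustering real text-to-video prompts into topics.
X = TfidfVectorizer().fit_transform(prompts)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
for prompt, cluster in zip(prompts, labels):
    print(cluster, prompt)
```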
|
2025-03-04T00:09:04.418000 | Cognitive Behaviors that Enable Self-Improving Reasoners, or, Four Habits of Highly Effective STaRs | 1 | {
"_id": "63e6a880f2e9a8f22c5a1630",
"avatarUrl": "/avatars/53b57690fe052ce6882bbfc87b11567c.svg",
"followerCount": null,
"fullname": "Kanishk Gandhi",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "obiwan96",
"type": "user"
} | true | null | 2503.01307 | [
{
"_id": "67c68adc0457c9f809c22df8",
"hidden": false,
"name": "Kanishk Gandhi",
"status": "claimed_verified",
"statusLastChangedAt": "2025-03-04T08:35:01.161Z",
"user": {
"_id": "63e6a880f2e9a8f22c5a1630",
"avatarUrl": "/avatars/53b57690fe052ce6882bbfc87b11567c.svg",
"fullname": "Kanishk Gandhi",
"isPro": false,
"type": "user",
"user": "obiwan96"
}
},
{
"_id": "67c68adc0457c9f809c22df9",
"hidden": false,
"name": "Ayush Chakravarthy",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T10:04:44.344Z",
"user": {
"_id": "624f9e3d07bd004fb855f5e9",
"avatarUrl": "/avatars/86a349cd4053bc0317e27e75a51c69fa.svg",
"fullname": "Ayush Chakravarthy",
"isPro": false,
"type": "user",
"user": "ayushchakravarthy"
}
},
{
"_id": "67c68adc0457c9f809c22dfa",
"hidden": false,
"name": "Anikait Singh",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T10:05:05.759Z",
"user": {
"_id": "6511ee845b7e52b0251fdee9",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6511ee845b7e52b0251fdee9/hTIwiIYBGOVnIrxtpri83.png",
"fullname": "Anikait Singh",
"isPro": false,
"type": "user",
"user": "Asap7772"
}
},
{
"_id": "67c68adc0457c9f809c22dfb",
"hidden": false,
"name": "Nathan Lile",
"status": "claimed_verified",
"statusLastChangedAt": "2025-03-04T08:34:58.582Z",
"user": {
"_id": "61aa15fd8a9625ebfe284286",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/61aa15fd8a9625ebfe284286/KaGzIeijcgcN15JErCqft.jpeg",
"fullname": "nathan lile",
"isPro": false,
"type": "user",
"user": "nlile"
}
},
{
"_id": "67c68adc0457c9f809c22dfc",
"hidden": false,
"name": "Noah D. Goodman",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T10:05:12.186Z",
"user": {
"_id": "67321274c1f20c742bcf7a8d",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/no-auth/ltcQhre6eDRVzn6Vbbyhu.png",
"fullname": "Noah D. Goodman",
"isPro": false,
"type": "user",
"user": "ngoodman"
}
}
] | 2025-03-03T08:46:22 | Cognitive Behaviors that Enable Self-Improving Reasoners, or, Four
Habits of Highly Effective STaRs | Test-time inference has emerged as a powerful paradigm for enabling language
models to "think" longer and more carefully about complex challenges, much
like skilled human experts. While reinforcement learning (RL) can drive
self-improvement in language models on verifiable tasks, some models exhibit
substantial gains while others quickly plateau. For instance, we find that
Qwen-2.5-3B far exceeds Llama-3.2-3B under identical RL training for the game
of Countdown. This discrepancy raises a critical question: what intrinsic
properties enable effective self-improvement? We introduce a framework to
investigate this question by analyzing four key cognitive behaviors --
verification, backtracking, subgoal setting, and backward chaining -- that both
expert human problem solvers and successful language models employ. Our study
reveals that Qwen naturally exhibits these reasoning behaviors, whereas Llama
initially lacks them. In systematic experimentation with controlled behavioral
datasets, we find that priming Llama with examples containing these reasoning
behaviors enables substantial improvements during RL, matching or exceeding
Qwen's performance. Importantly, the presence of reasoning behaviors, rather
than correctness of answers, proves to be the critical factor -- models primed
with incorrect solutions containing proper reasoning patterns achieve
comparable performance to those trained on correct solutions. Finally,
leveraging continued pretraining with OpenWebMath data, filtered to amplify
reasoning behaviors, enables the Llama model to match Qwen's self-improvement
trajectory. Our findings establish a fundamental relationship between initial
reasoning behaviors and the capacity for improvement, explaining why some
language models effectively utilize additional computation while others
plateau. | 13 | 67c68add0457c9f809c22e31 | null | null |
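Operationalizing the four behaviors requires tagging reasoning traces. The paper uses a model-based classifier; the sketch below substitutes crude lexical cues purely to make the bookkeeping concrete, so both the cue lists and the function are assumptions.

```python
import re

BEHAVIOR_CUES = {  # crude lexical cues; the paper classifies with a model
    "verification": r"let me check|verif(y|ied)|double-check",
    "backtracking": r"\bwait\b|that's wrong|try (a different|another)",
    "subgoal_setting": r"\bfirst\b|step \d|break (this|it) down",
    "backward_chaining": r"working backwards?|start(ing)? from the (goal|target)",
}

def tag_behaviors(trace: str) -> dict[str, int]:
    """Count occurrences of each cognitive-behavior cue in a reasoning trace."""
    return {name: len(re.findall(pat, trace, flags=re.I))
            for name, pat in BEHAVIOR_CUES.items()}

trace = ("First, I need 24. Wait, that's wrong. Let me check: "
         "working backwards from 24...")
print(tag_behaviors(trace))
```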
|
2025-03-03T23:44:06.105000 | Large-Scale Data Selection for Instruction Tuning | 1 | {
"_id": "62608fc2ffe8827cb1d89f9f",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1654027835241-62608fc2ffe8827cb1d89f9f.png",
"followerCount": 11,
"fullname": "Hamish Ivison",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "hamishivi",
"type": "user"
} | true | null | 2503.01807 | [
{
"_id": "67c67ff6dec55d10cb10fc9e",
"hidden": false,
"name": "Hamish Ivison",
"status": "claimed_verified",
"statusLastChangedAt": "2025-03-04T08:40:13.649Z",
"user": {
"_id": "62608fc2ffe8827cb1d89f9f",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1654027835241-62608fc2ffe8827cb1d89f9f.png",
"fullname": "Hamish Ivison",
"isPro": false,
"type": "user",
"user": "hamishivi"
}
},
{
"_id": "67c67ff6dec55d10cb10fc9f",
"hidden": false,
"name": "Muru Zhang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T11:14:59.402Z",
"user": {
"_id": "61cc2cf4dcb47bd5ed3cd3b8",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1640770780085-noauth.jpeg",
"fullname": "Muru Zhang",
"isPro": false,
"type": "user",
"user": "nanami"
}
},
{
"_id": "67c67ff6dec55d10cb10fca0",
"hidden": false,
"name": "Faeze Brahman",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T11:15:05.562Z",
"user": {
"_id": "65282b8d578679aac7888aec",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/65282b8d578679aac7888aec/dibBkhH-z1c70mJZZxJ7u.jpeg",
"fullname": "Faeze Brahman",
"isPro": false,
"type": "user",
"user": "faezeb"
}
},
{
"_id": "67c67ff6dec55d10cb10fca1",
"hidden": false,
"name": "Pang Wei Koh",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T11:15:14.558Z",
"user": {
"_id": "641b4263abfce26bcf7b27de",
"avatarUrl": "/avatars/e91b4205e4f74b0dd8c333c23203a924.svg",
"fullname": "Pang Wei Koh",
"isPro": false,
"type": "user",
"user": "pangwei"
}
},
{
"_id": "67c67ff6dec55d10cb10fca2",
"hidden": false,
"name": "Pradeep Dasigi",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T11:15:20.400Z",
"user": {
"_id": "6408fcc93461c51cf735a61e",
"avatarUrl": "/avatars/619f3653911d111f046a5a6c30fc8319.svg",
"fullname": "Pradeep Dasigi",
"isPro": false,
"type": "user",
"user": "pradeepd"
}
}
] | 2025-03-03T18:37:26 | Large-Scale Data Selection for Instruction Tuning | Selecting high-quality training data from a larger pool is a crucial step
when instruction-tuning language models, as carefully curated datasets often
produce models that outperform those trained on much larger, noisier datasets.
Automated data selection approaches for instruction-tuning are typically tested
by selecting small datasets (roughly 10k samples) from small pools (100-200k
samples). However, popular deployed instruction-tuned models often train on
hundreds of thousands to millions of samples, subsampled from even larger data
pools. We present a systematic study of how well data selection methods scale
to these settings, selecting up to 2.5M samples from pools of up to 5.8M
samples and evaluating across 7 diverse tasks. We show that many recently
proposed methods fall short of random selection in this setting (while using
more compute), and even decline in performance when given access to larger
pools of data to select over. However, we find that a variant of
representation-based data selection (RDS+), which uses weighted mean pooling of
pretrained LM hidden states, consistently outperforms more complex methods
across all settings tested -- all whilst being more compute-efficient. Our
findings highlight that the scaling properties of proposed automated selection
methods should be more closely examined. We release our code, data, and models
at https://github.com/hamishivi/automated-instruction-selection. | 5 | 67c67ff9dec55d10cb10fcef | null | null |
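The RDS+ embedding step, weighted mean pooling of pretrained LM hidden states followed by similarity ranking against target-task examples, can be sketched compactly. The weighting below (linearly increasing position weights) and the choice of `distilbert-base-uncased` are assumptions for illustration; see the released code above for the actual recipe.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModel.from_pretrained("distilbert-base-uncased")

def rds_embed(text: str) -> torch.Tensor:
    """Weighted mean pooling of hidden states (assumed linear position
    weights; the exact RDS+ weighting may differ)."""
    enc = tok(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]          # (seq, dim)
    w = torch.arange(1, hidden.size(0) + 1, dtype=torch.float)
    return (hidden * w[:, None]).sum(0) / w.sum()

pool = ["Solve 2x+3=7.", "Recipe for pancakes.", "Prove the triangle inequality."]
target = rds_embed("Find x such that 3x - 1 = 8.")
scores = [(torch.cosine_similarity(rds_embed(d), target, dim=0).item(), d)
          for d in pool]
print(max(scores))  # the highest-similarity candidate is selected first
```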
|
2025-03-03T23:29:27.952000 | Visual-RFT: Visual Reinforcement Fine-Tuning | 1 | {
"_id": "63fda3fced9eead590ff6918",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1677566802735-noauth.jpeg",
"followerCount": 16,
"fullname": "Zeyi Sun",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "Zery",
"type": "user"
} | true | null | 2503.01785 | [
{
"_id": "67c6816614a1bf9855188b8b",
"hidden": false,
"name": "Ziyu Liu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T09:12:57.481Z",
"user": {
"_id": "66fe1334ff3ee1f7569fab6d",
"avatarUrl": "/avatars/6868b1a545028a9b8bbded52490dc093.svg",
"fullname": "ziyuliu",
"isPro": false,
"type": "user",
"user": "ziyuliu"
}
},
{
"_id": "67c6816614a1bf9855188b8c",
"hidden": false,
"name": "Zeyi Sun",
"status": "claimed_verified",
"statusLastChangedAt": "2025-03-04T08:35:03.275Z",
"user": {
"_id": "63fda3fced9eead590ff6918",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1677566802735-noauth.jpeg",
"fullname": "Zeyi Sun",
"isPro": false,
"type": "user",
"user": "Zery"
}
},
{
"_id": "67c6816614a1bf9855188b8d",
"hidden": false,
"name": "Yuhang Zang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T09:12:32.723Z",
"user": {
"_id": "63859cf3b2906edaf83af9f0",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63859cf3b2906edaf83af9f0/iUQm5FAomzqYi6fkqIn9F.jpeg",
"fullname": "Yuhang Zang",
"isPro": false,
"type": "user",
"user": "yuhangzang"
}
},
{
"_id": "67c6816614a1bf9855188b8e",
"hidden": false,
"name": "Xiaoyi Dong",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T09:12:25.627Z",
"user": {
"_id": "67c0849ee08c178ef8d4e05c",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/no-auth/mQ6VdnjZnRhb0H_waPclo.png",
"fullname": "Xiaoyi Dong",
"isPro": false,
"type": "user",
"user": "sweetFruit"
}
},
{
"_id": "67c6816614a1bf9855188b8f",
"hidden": false,
"name": "Yuhang Cao",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T09:12:19.177Z",
"user": {
"_id": "65000bef18830fabea469fdd",
"avatarUrl": "/avatars/b320c77dfad039d9f9c54127f610d44f.svg",
"fullname": "Cao Yuhang",
"isPro": false,
"type": "user",
"user": "yhcao"
}
},
{
"_id": "67c6816614a1bf9855188b90",
"hidden": false,
"name": "Haodong Duan",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T09:12:05.281Z",
"user": {
"_id": "63ee1379190ddd6214efd73a",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1676546883247-noauth.png",
"fullname": "HAODONG DUAN",
"isPro": false,
"type": "user",
"user": "KennyUTC"
}
},
{
"_id": "67c6816614a1bf9855188b91",
"hidden": false,
"name": "Dahua Lin",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T09:11:57.087Z",
"user": {
"_id": "636317ed80c1a705a6eff396",
"avatarUrl": "/avatars/3db090e101b916d9256d0d3e043db71d.svg",
"fullname": "Dahua Lin",
"isPro": false,
"type": "user",
"user": "lindahua"
}
},
{
"_id": "67c6816614a1bf9855188b92",
"hidden": false,
"name": "Jiaqi Wang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T09:11:48.889Z",
"user": {
"_id": "64638c4d51fa6e63060521b5",
"avatarUrl": "/avatars/c863ace5b1dc788a341bcf4ddbdfaec1.svg",
"fullname": "JIaqi",
"isPro": false,
"type": "user",
"user": "Jiaqiwang"
}
}
] | 2025-03-03T18:16:32 | Visual-RFT: Visual Reinforcement Fine-Tuning | Reinforcement Fine-Tuning (RFT) in Large Reasoning Models like OpenAI o1
learns from feedback on its answers, which is especially useful in applications
where fine-tuning data is scarce. Recent open-source work like DeepSeek-R1
demonstrates that reinforcement learning with verifiable reward is one key
direction in reproducing o1. While the R1-style model has demonstrated success
in language models, its application in multi-modal domains remains
under-explored. This work introduces Visual Reinforcement Fine-Tuning
(Visual-RFT), which further extends the application areas of RFT on visual
tasks. Specifically, Visual-RFT first uses Large Vision-Language Models (LVLMs)
to generate multiple responses containing reasoning tokens and final answers
for each input, and then uses our proposed visual perception verifiable reward
functions to update the model via a policy optimization algorithm such as
Group Relative Policy Optimization (GRPO). We design different verifiable
reward functions for different perception tasks, such as the Intersection over
Union (IoU) reward for object detection. Experimental results on fine-grained
image classification, few-shot object detection, reasoning grounding, as well
as open-vocabulary object detection benchmarks show the competitive performance
and advanced generalization ability of Visual-RFT compared with Supervised
Fine-tuning (SFT). For example, Visual-RFT improves accuracy by 24.3% over
the baseline in one-shot fine-grained image classification with around 100
samples. In few-shot object detection, Visual-RFT also exceeds the baseline by
21.9 on COCO's two-shot setting and 15.4 on LVIS. Our Visual-RFT represents
a paradigm shift in fine-tuning LVLMs, offering a data-efficient, reward-driven
approach that enhances reasoning and adaptability for domain-specific tasks. | 43 | 67c6816c14a1bf9855188d8c | https://github.com/Liuziyu77/Visual-RFT | https://github.com/Liuziyu77/Visual-RFT |
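The detection case makes the "verifiable reward" idea concrete: reward the policy by the IoU between its predicted box and the ground truth. The sketch below shows that shape; the small format bonus is an assumption, and the paper's exact reward composition (see the repo above) may differ.

```python
def iou(box_a, box_b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda b: (b[2] - b[0]) * (b[3] - b[1])
    union = area(box_a) + area(box_b) - inter
    return inter / union if union else 0.0

def detection_reward(pred_box, gt_box, format_ok: bool) -> float:
    # Verifiable reward for GRPO rollouts: IoU with the ground-truth box,
    # plus a small (assumed) bonus for well-formed output.
    return iou(pred_box, gt_box) + (0.1 if format_ok else 0.0)

print(detection_reward((0, 0, 10, 10), (5, 5, 15, 15), True))  # ~0.243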
|
2025-03-03T23:15:05.187000 | Phi-4-Mini Technical Report: Compact yet Powerful Multimodal Language Models via Mixture-of-LoRAs | 3 | {
"_id": "63f5173bb51da4d61da6c038",
"avatarUrl": "/avatars/0ee530cf80476aa3985c4d591cd384a1.svg",
"followerCount": 6,
"fullname": "Young Jin Kim",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "ykim362",
"type": "user"
} | true | null | 2503.01743 | [
{
"_id": "67c67d0dfe135a5f482599bb",
"hidden": false,
"name": "Abdelrahman Abouelenin",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c67d0dfe135a5f482599bc",
"hidden": false,
"name": "Atabak Ashfaq",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T09:45:15.511Z",
"user": {
"_id": "669ed17498ba26df962584f5",
"avatarUrl": "/avatars/996c9cf05a4f8e5447552220085157c7.svg",
"fullname": "Atabak Ashfaq",
"isPro": false,
"type": "user",
"user": "atabakashfaqMSFT"
}
},
{
"_id": "67c67d0dfe135a5f482599bd",
"hidden": false,
"name": "Adam Atkinson",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c67d0dfe135a5f482599be",
"hidden": false,
"name": "Hany Awadalla",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c67d0dfe135a5f482599bf",
"hidden": false,
"name": "Nguyen Bach",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c67d0dfe135a5f482599c0",
"hidden": false,
"name": "Jianmin Bao",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T09:46:34.578Z",
"user": {
"_id": "6481e690f9ed842838a2b106",
"avatarUrl": "/avatars/e89a3c8366df504a95dc08a1a412bf3d.svg",
"fullname": "Jianmin Bao",
"isPro": false,
"type": "user",
"user": "jianmin-ustc"
}
},
{
"_id": "67c67d0dfe135a5f482599c1",
"hidden": false,
"name": "Alon Benhaim",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T09:46:41.117Z",
"user": {
"_id": "65b9b627e7c838136275a681",
"avatarUrl": "/avatars/22423f3d9a6c4ee34cad3b0894d27d23.svg",
"fullname": "Alon Benhaim",
"isPro": false,
"type": "user",
"user": "alonbenhaim"
}
},
{
"_id": "67c67d0dfe135a5f482599c2",
"hidden": false,
"name": "Martin Cai",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T09:46:47.556Z",
"user": {
"_id": "66f81b5b3c7ffa7931b4829a",
"avatarUrl": "/avatars/a7f34e8e3fd92fdb96affc367b522fbe.svg",
"fullname": "cai",
"isPro": false,
"type": "user",
"user": "martincai"
}
},
{
"_id": "67c67d0dfe135a5f482599c3",
"hidden": false,
"name": "Vishrav Chaudhary",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T09:46:56.428Z",
"user": {
"_id": "659c7ac977ac6f1bf5e63d7e",
"avatarUrl": "/avatars/86a6efde0d483564a67ed5f344d479a0.svg",
"fullname": "Vishrav Chaudhary",
"isPro": false,
"type": "user",
"user": "vishravmsft"
}
},
{
"_id": "67c67d0dfe135a5f482599c4",
"hidden": false,
"name": "Congcong Chen",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T09:47:04.205Z",
"user": {
"_id": "66c7a93b92e9f5b19f7533ab",
"avatarUrl": "/avatars/e26ebf5cf083a3ec09fce24026ecc76e.svg",
"fullname": "Chen",
"isPro": false,
"type": "user",
"user": "congcongchen"
}
},
{
"_id": "67c67d0dfe135a5f482599c5",
"hidden": false,
"name": "Dong Chen",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T09:47:11.865Z",
"user": {
"_id": "666470a28f5513b0cf11e850",
"avatarUrl": "/avatars/7beea758882677ad32a12ce56d4d084a.svg",
"fullname": "Dong Chen",
"isPro": false,
"type": "user",
"user": "DongChen06"
}
},
{
"_id": "67c67d0dfe135a5f482599c6",
"hidden": false,
"name": "Dongdong Chen",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T09:47:18.197Z",
"user": {
"_id": "6567651c6fcc82e5e8c36d4d",
"avatarUrl": "/avatars/ba3cc037a7688c4f8d967fc6043e540d.svg",
"fullname": "Dongdong Chen",
"isPro": false,
"type": "user",
"user": "dongdongchen"
}
},
{
"_id": "67c67d0dfe135a5f482599c7",
"hidden": false,
"name": "Junkun Chen",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T09:47:43.236Z",
"user": {
"_id": "669db44d61278f96d8c608a4",
"avatarUrl": "/avatars/92a493da10c086af5f2af680f4e2c6c6.svg",
"fullname": "Junkun Chen",
"isPro": false,
"type": "user",
"user": "shtpgshus"
}
},
{
"_id": "67c67d0dfe135a5f482599c8",
"hidden": false,
"name": "Weizhu Chen",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T09:47:51.832Z",
"user": {
"_id": "64da876370446182be5b608d",
"avatarUrl": "/avatars/e412fdc71404ecdf638e416846e3ebfb.svg",
"fullname": "Weizhu Chen",
"isPro": false,
"type": "user",
"user": "chenweizhu"
}
},
{
"_id": "67c67d0dfe135a5f482599c9",
"hidden": false,
"name": "Yen-Chun Chen",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T09:47:58.051Z",
"user": {
"_id": "662d6b09a47b4da4b23c8b2a",
"avatarUrl": "/avatars/6770b1d7e25b2cdce04f9904b543d122.svg",
"fullname": "Yen-Chun Chen",
"isPro": false,
"type": "user",
"user": "Yen-ChunChen"
}
},
{
"_id": "67c67d0dfe135a5f482599ca",
"hidden": false,
"name": "Yi-ling Chen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c67d0dfe135a5f482599cb",
"hidden": false,
"name": "Qi Dai",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c67d0dfe135a5f482599cc",
"hidden": false,
"name": "Xiyang Dai",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c67d0dfe135a5f482599cd",
"hidden": false,
"name": "Ruchao Fan",
"status": "claimed_verified",
"statusLastChangedAt": "2025-03-04T08:40:17.936Z",
"user": {
"_id": "64a8b800b35f48e37dfd20fe",
"avatarUrl": "/avatars/1e66be9a5238ce86df8b54150520bcc8.svg",
"fullname": "Ruchao Fan",
"isPro": false,
"type": "user",
"user": "fanruchao"
}
},
{
"_id": "67c67d0dfe135a5f482599ce",
"hidden": false,
"name": "Mei Gao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c67d0dfe135a5f482599cf",
"hidden": false,
"name": "Min Gao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c67d0dfe135a5f482599d0",
"hidden": false,
"name": "Amit Garg",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c67d0dfe135a5f482599d1",
"hidden": false,
"name": "Abhishek Goswami",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T09:49:02.466Z",
"user": {
"_id": "62cdae333529c21a2283a0a1",
"avatarUrl": "/avatars/cafc2821e522bbd06d49830e36a073e3.svg",
"fullname": "Abhishek GOSWAMI",
"isPro": false,
"type": "user",
"user": "abgoswam"
}
},
{
"_id": "67c67d0dfe135a5f482599d2",
"hidden": false,
"name": "Junheng Hao",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T09:53:16.356Z",
"user": {
"_id": "5f04c4394ec31d33a72116d6",
"avatarUrl": "/avatars/75d4b9020070e73604b12e5adc1c8201.svg",
"fullname": "Junheng Hao",
"isPro": false,
"type": "user",
"user": "jeffhao"
}
},
{
"_id": "67c67d0dfe135a5f482599d3",
"hidden": false,
"name": "Amr Hendy",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T09:53:24.716Z",
"user": {
"_id": "660480db07619487a3718a16",
"avatarUrl": "/avatars/9c08d541913e57fd79988ef93d5095d4.svg",
"fullname": "Amr Hendy",
"isPro": false,
"type": "user",
"user": "amrhendy"
}
},
{
"_id": "67c67d0dfe135a5f482599d4",
"hidden": false,
"name": "Yuxuan Hu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c67d0dfe135a5f482599d5",
"hidden": false,
"name": "Xin Jin",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c67d0dfe135a5f482599d6",
"hidden": false,
"name": "Mahmoud Khademi",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T09:53:53.225Z",
"user": {
"_id": "6440905e27dc46cca590994c",
"avatarUrl": "/avatars/0346f8ad17038fba87649a0fc59d64ab.svg",
"fullname": "Mahmoud Khademi",
"isPro": false,
"type": "user",
"user": "mkhademi"
}
},
{
"_id": "67c67d0dfe135a5f482599d7",
"hidden": false,
"name": "Dongwoo Kim",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T09:54:04.257Z",
"user": {
"_id": "662476aec8920ec351b8d3d8",
"avatarUrl": "/avatars/791e40f53073563680ef18f75b3ea95e.svg",
"fullname": "Dongwoo Kim",
"isPro": false,
"type": "user",
"user": "dongwookim-ms"
}
},
{
"_id": "67c67d0dfe135a5f482599d8",
"hidden": false,
"name": "Young Jin Kim",
"status": "claimed_verified",
"statusLastChangedAt": "2025-03-04T08:40:19.902Z",
"user": {
"_id": "63f5173bb51da4d61da6c038",
"avatarUrl": "/avatars/0ee530cf80476aa3985c4d591cd384a1.svg",
"fullname": "Young Jin Kim",
"isPro": false,
"type": "user",
"user": "ykim362"
}
},
{
"_id": "67c67d0dfe135a5f482599d9",
"hidden": false,
"name": "Gina Lee",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c67d0dfe135a5f482599da",
"hidden": false,
"name": "Jinyu Li",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T09:54:17.115Z",
"user": {
"_id": "64004b72330a45b03604303b",
"avatarUrl": "/avatars/a1fa3fc700173238d0336258b000d934.svg",
"fullname": "Jinyu Li",
"isPro": false,
"type": "user",
"user": "FallTraveler"
}
},
{
"_id": "67c67d0dfe135a5f482599db",
"hidden": false,
"name": "Yunsheng Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c67d0dfe135a5f482599dc",
"hidden": false,
"name": "Chen Liang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c67d0dfe135a5f482599dd",
"hidden": false,
"name": "Xihui Lin",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T09:56:29.024Z",
"user": {
"_id": "6464f05e5cdb9ab50f846c98",
"avatarUrl": "/avatars/3cb2f60a909b59289209ecc7ba75a338.svg",
"fullname": "Xihui Lin",
"isPro": false,
"type": "user",
"user": "linxihui"
}
},
{
"_id": "67c67d0dfe135a5f482599de",
"hidden": false,
"name": "Zeqi Lin",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T09:56:38.534Z",
"user": {
"_id": "62c3a0caf5e2eb44f51de87d",
"avatarUrl": "/avatars/3c535c5488476b75443666176fcb4c9b.svg",
"fullname": "Zeqi Lin",
"isPro": false,
"type": "user",
"user": "linzeqi"
}
},
{
"_id": "67c67d0dfe135a5f482599df",
"hidden": false,
"name": "Mengchen Liu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c67d0dfe135a5f482599e0",
"hidden": false,
"name": "Yang Liu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c67d0dfe135a5f482599e1",
"hidden": false,
"name": "Gilsinia Lopez",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T09:59:55.169Z",
"user": {
"_id": "60c790f1accf7da31ed8240d",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/60c790f1accf7da31ed8240d/YDohCmgf9OUeWqZIs3Thh.jpeg",
"fullname": "Gilsinia Lopez",
"isPro": false,
"type": "user",
"user": "lgg"
}
},
{
"_id": "67c67d0dfe135a5f482599e2",
"hidden": false,
"name": "Chong Luo",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c67d0dfe135a5f482599e3",
"hidden": false,
"name": "Piyush Madan",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T10:02:38.019Z",
"user": {
"_id": "66269a329014ef4d10f55d9d",
"avatarUrl": "/avatars/d4866c32419a7dd07e9aa0660f4bafa9.svg",
"fullname": "Piyush Madan",
"isPro": false,
"type": "user",
"user": "PiyushMadan"
}
},
{
"_id": "67c67d0dfe135a5f482599e4",
"hidden": false,
"name": "Vadim Mazalov",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T10:02:47.430Z",
"user": {
"_id": "65301591944086d1d5fcf656",
"avatarUrl": "/avatars/250a2e898a4fcbe78feaf6e812851bd6.svg",
"fullname": "Vadim Mazalovskii",
"isPro": false,
"type": "user",
"user": "JakeRiley"
}
},
{
"_id": "67c67d0dfe135a5f482599e5",
"hidden": false,
"name": "Ali Mousavi",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c67d0dfe135a5f482599e6",
"hidden": false,
"name": "Anh Nguyen",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T09:57:52.311Z",
"user": {
"_id": "649bc84833486cdd77c01c66",
"avatarUrl": "/avatars/36f4e4bb15c337c4391bfbd234051f4c.svg",
"fullname": "Nguyen Anh",
"isPro": false,
"type": "user",
"user": "Anhnguyen"
}
},
{
"_id": "67c67d0dfe135a5f482599e7",
"hidden": false,
"name": "Jing Pan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c67d0dfe135a5f482599e8",
"hidden": false,
"name": "Daniel Perez-Becker",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T09:59:09.929Z",
"user": {
"_id": "673b7f70cdc852f69bebfed1",
"avatarUrl": "/avatars/1efad61a42b948c750c96472a6192de5.svg",
"fullname": "Daniel Perez-Becker",
"isPro": false,
"type": "user",
"user": "perezbecker"
}
},
{
"_id": "67c67d0dfe135a5f482599e9",
"hidden": false,
"name": "Jacob Platin",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c67d0dfe135a5f482599ea",
"hidden": false,
"name": "Thomas Portet",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T09:59:39.865Z",
"user": {
"_id": "65c52dad286bf45e79491697",
"avatarUrl": "/avatars/01ebc7979273df6e53971ae9835b503f.svg",
"fullname": "Thomas Portet",
"isPro": false,
"type": "user",
"user": "thopo"
}
},
{
"_id": "67c67d0dfe135a5f482599eb",
"hidden": false,
"name": "Kai Qiu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c67d0dfe135a5f482599ec",
"hidden": false,
"name": "Bo Ren",
"status": "claimed_verified",
"statusLastChangedAt": "2025-03-04T08:40:15.919Z",
"user": {
"_id": "668dcf92835bf7e64bbca904",
"avatarUrl": "/avatars/416eb3a3c5318a6a45aad87012296470.svg",
"fullname": "Bo Ren",
"isPro": false,
"type": "user",
"user": "rosrad"
}
},
{
"_id": "67c67d0dfe135a5f482599ed",
"hidden": false,
"name": "Liliang Ren",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T09:57:37.996Z",
"user": {
"_id": "63815eff4761ddfa00903762",
"avatarUrl": "/avatars/3419b239d42e091586f1c51b526d88e5.svg",
"fullname": "Liliang Ren",
"isPro": false,
"type": "user",
"user": "renll"
}
},
{
"_id": "67c67d0dfe135a5f482599ee",
"hidden": false,
"name": "Sambuddha Roy",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c67d0dfe135a5f482599ef",
"hidden": false,
"name": "Ning Shang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c67d0dfe135a5f482599f0",
"hidden": false,
"name": "Yelong Shen",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T10:00:05.457Z",
"user": {
"_id": "6454c337a13edf669cd5d8ea",
"avatarUrl": "/avatars/a383a0dda7c2ef6a0d6c3c64651f42ff.svg",
"fullname": "Yelong Shen",
"isPro": false,
"type": "user",
"user": "uuu6"
}
},
{
"_id": "67c67d0dfe135a5f482599f1",
"hidden": false,
"name": "Saksham Singhal",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T09:59:03.188Z",
"user": {
"_id": "62743aec8cb70eed79073bc0",
"avatarUrl": "/avatars/3c8b9a91d898f616265f823ab7d432df.svg",
"fullname": "Saksham Singhal",
"isPro": false,
"type": "user",
"user": "sakshamsinghal"
}
},
{
"_id": "67c67d0dfe135a5f482599f2",
"hidden": false,
"name": "Subhojit Som",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T09:59:47.241Z",
"user": {
"_id": "678bc6b432ee4968eca9bb6a",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/no-auth/wT-Xa3TYem_EzkZZMyDG0.png",
"fullname": "Subhojit Som",
"isPro": false,
"type": "user",
"user": "susom"
}
},
{
"_id": "67c67d0dfe135a5f482599f3",
"hidden": false,
"name": "Xia Song",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c67d0dfe135a5f482599f4",
"hidden": false,
"name": "Tetyana Sych",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T09:58:27.814Z",
"user": {
"_id": "64692ad25d701566394fd8da",
"avatarUrl": "/avatars/d6811ccceb14788bfa0aa10fe4ee1054.svg",
"fullname": "Tetyana Sych",
"isPro": false,
"type": "user",
"user": "tesych"
}
},
{
"_id": "67c67d0dfe135a5f482599f5",
"hidden": false,
"name": "Praneetha Vaddamanu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c67d0dfe135a5f482599f6",
"hidden": false,
"name": "Shuohang Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c67d0dfe135a5f482599f7",
"hidden": false,
"name": "Yiming Wang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-03-04T21:16:18.278Z",
"user": {
"_id": "6786f93b3ad5585f2c2828b1",
"avatarUrl": "/avatars/41411af6f7d547041032a29b34041fe8.svg",
"fullname": "Yiming Wang",
"isPro": false,
"type": "user",
"user": "freewym"
}
},
{
"_id": "67c67d0dfe135a5f482599f8",
"hidden": false,
"name": "Zhenghao Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c67d0dfe135a5f482599f9",
"hidden": false,
"name": "Haibin Wu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c67d0dfe135a5f482599fa",
"hidden": false,
"name": "Haoran Xu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T09:56:04.939Z",
"user": {
"_id": "61384b860317b0a5c10877d3",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1631080954171-61384b860317b0a5c10877d3.jpeg",
"fullname": "Haoran Xu",
"isPro": false,
"type": "user",
"user": "haoranxu"
}
},
{
"_id": "67c67d0dfe135a5f482599fb",
"hidden": false,
"name": "Weijian Xu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T09:58:36.082Z",
"user": {
"_id": "6398f4b32c20654083f36cde",
"avatarUrl": "/avatars/4591f514483890997c55e9e6d60bbb0f.svg",
"fullname": "Weijian Xu",
"isPro": false,
"type": "user",
"user": "xwjabc"
}
},
{
"_id": "67c67d0dfe135a5f482599fc",
"hidden": false,
"name": "Yifan Yang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c67d0dfe135a5f482599fd",
"hidden": false,
"name": "Ziyi Yang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c67d0dfe135a5f482599fe",
"hidden": false,
"name": "Donghan Yu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T09:55:41.798Z",
"user": {
"_id": "65b01b8a29ae836e9ed5af24",
"avatarUrl": "/avatars/a8b78a4b54d3f10858c5925521357001.svg",
"fullname": "Donghan Yu",
"isPro": false,
"type": "user",
"user": "donghanyu"
}
},
{
"_id": "67c67d0dfe135a5f482599ff",
"hidden": false,
"name": "Ishmam Zabir",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c67d0dfe135a5f48259a00",
"hidden": false,
"name": "Jianwen Zhang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T09:55:12.465Z",
"user": {
"_id": "63601ee38fb9c2420ffbe45d",
"avatarUrl": "/avatars/56af091aaff1b42dcfbae84a6ee1e7f7.svg",
"fullname": "Zhang",
"isPro": false,
"type": "user",
"user": "Jianwen"
}
},
{
"_id": "67c67d0dfe135a5f48259a01",
"hidden": false,
"name": "Li Lyna Zhang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T09:55:01.540Z",
"user": {
"_id": "62b0009c72043b05d29492b2",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/62b0009c72043b05d29492b2/NqRkX2YLhlfOLvYysa7dD.png",
"fullname": "Li Lyna Zhang",
"isPro": false,
"type": "user",
"user": "lynazhang"
}
},
{
"_id": "67c67d0dfe135a5f48259a02",
"hidden": false,
"name": "Yunan Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c67d0dfe135a5f48259a03",
"hidden": false,
"name": "Xiren Zhou",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T09:54:26.629Z",
"user": {
"_id": "66ce4c9f864befb39cfc74e9",
"avatarUrl": "/avatars/ef66398466c470fc1d384c6817d9e461.svg",
"fullname": "Xiren Zhou",
"isPro": false,
"type": "user",
"user": "XirenZhou"
}
}
] | 2025-03-03T17:05:52 | Phi-4-Mini Technical Report: Compact yet Powerful Multimodal Language
Models via Mixture-of-LoRAs | We introduce Phi-4-Mini and Phi-4-Multimodal, compact yet highly capable
language and multimodal models. Phi-4-Mini is a 3.8-billion-parameter language
model trained on high-quality web and synthetic data, significantly
outperforming recent open-source models of similar size and matching the
performance of models twice its size on math and coding tasks requiring complex
reasoning. This achievement is driven by a carefully curated synthetic data
recipe emphasizing high-quality math and coding datasets. Compared to its
predecessor, Phi-3.5-Mini, Phi-4-Mini features an expanded vocabulary size of
200K tokens to better support multilingual applications, as well as group query
attention for more efficient long-sequence generation. Phi-4-Multimodal is a
multimodal model that integrates text, vision, and speech/audio input
modalities into a single model. Its novel modality extension approach leverages
LoRA adapters and modality-specific routers to allow multiple inference modes
combining various modalities without interference. For example, it now ranks
first on the OpenASR leaderboard to date, although the LoRA component of the
speech/audio modality has just 460 million parameters. Phi-4-Multimodal
supports scenarios involving (vision + language), (vision + speech), and
(speech/audio) inputs, outperforming larger vision-language and speech-language
models on a wide range of tasks. Additionally, we experiment with further training
Phi-4-Mini to enhance its reasoning capabilities. Despite its compact
3.8-billion-parameter size, this experimental version achieves reasoning
performance on par with or surpassing significantly larger models, including
DeepSeek-R1-Distill-Qwen-7B and DeepSeek-R1-Distill-Llama-8B. | 42 | 67c67d0efe135a5f48259a38 | https://huggingface.co/microsoft/Phi-4-multimodal-instruct | null |
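The Mixture-of-LoRAs idea — a frozen base model plus modality-specific LoRA adapters selected by a router — can be sketched at the level of a single linear layer. This is a structural sketch only; Phi-4-Multimodal's actual adapter placement, ranks, and routing are more involved than shown.

```python
import torch
import torch.nn as nn

class LoRA(nn.Module):
    def __init__(self, dim: int, rank: int = 8):
        super().__init__()
        self.down = nn.Linear(dim, rank, bias=False)
        self.up = nn.Linear(rank, dim, bias=False)
        nn.init.zeros_(self.up.weight)          # adapter starts as a no-op

    def forward(self, x):
        return self.up(self.down(x))

class MixtureOfLoRAsLinear(nn.Module):
    """Frozen base linear plus modality-routed LoRA adapters (sketch)."""
    def __init__(self, dim: int, modalities=("vision", "speech")):
        super().__init__()
        self.base = nn.Linear(dim, dim)
        self.base.requires_grad_(False)          # base LM stays frozen
        self.adapters = nn.ModuleDict({m: LoRA(dim) for m in modalities})

    def forward(self, x, modality: str):
        out = self.base(x)
        if modality in self.adapters:            # text-only input skips adapters
            out = out + self.adapters[modality](x)
        return out

layer = MixtureOfLoRAsLinear(64)
print(layer(torch.randn(2, 64), "vision").shape)  # torch.Size([2, 64])
```

Because adapters are additive and zero-initialized, different modality combinations can share the same frozen backbone without interfering with one another, which is the property the abstract emphasizes.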
|
2025-03-03T22:35:45.299000 | DuoDecoding: Hardware-aware Heterogeneous Speculative Decoding with Dynamic Multi-Sequence Drafting | 1 | {
"_id": "6485d5b300c9cfe5c2470c81",
"avatarUrl": "/avatars/c29aa81d2add795e8448b99274a04b83.svg",
"followerCount": 3,
"fullname": "Kai",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "KaiLv",
"type": "user"
} | true | null | 2503.00784 | [
{
"_id": "67c673bcf47209364f0cec96",
"hidden": false,
"name": "Kai Lv",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T10:14:11.523Z",
"user": {
"_id": "6485d5b300c9cfe5c2470c81",
"avatarUrl": "/avatars/c29aa81d2add795e8448b99274a04b83.svg",
"fullname": "Kai",
"isPro": false,
"type": "user",
"user": "KaiLv"
}
},
{
"_id": "67c673bcf47209364f0cec97",
"hidden": false,
"name": "Honglin Guo",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T10:14:04.672Z",
"user": {
"_id": "638ef0b0c67af472d31674a6",
"avatarUrl": "/avatars/02df97d15a0f46b47f9162221733b121.svg",
"fullname": "Honglin Guo",
"isPro": false,
"type": "user",
"user": "KYLN24"
}
},
{
"_id": "67c673bcf47209364f0cec98",
"hidden": false,
"name": "Qipeng Guo",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T10:13:46.322Z",
"user": {
"_id": "6491cd52b1e5d3444528edb1",
"avatarUrl": "/avatars/a85635d886c7f157b6723dec5c01c030.svg",
"fullname": "Qipeng Guo",
"isPro": false,
"type": "user",
"user": "QipengGuo"
}
},
{
"_id": "67c673bcf47209364f0cec99",
"hidden": false,
"name": "Xipeng Qiu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-04T10:13:40.885Z",
"user": {
"_id": "61457b8deff2c9fdb4de4988",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1632381702899-61457b8deff2c9fdb4de4988.jpeg",
"fullname": "Xipeng Qiu",
"isPro": false,
"type": "user",
"user": "xpqiu"
}
}
] | 2025-03-02T08:27:48 | DuoDecoding: Hardware-aware Heterogeneous Speculative Decoding with
Dynamic Multi-Sequence Drafting | Large language models (LLMs) exhibit exceptional performance across a wide
range of tasks; however, their token-by-token autoregressive generation process
significantly hinders inference speed. Speculative decoding presents a
promising draft-then-verify framework that reduces generation latency while
maintaining output distribution fidelity. Nevertheless, the draft model
introduces additional computational overhead, becoming a performance bottleneck
and increasing the time to first token (TTFT). Previous approaches to mitigate
draft model overhead have primarily relied on heuristics and generally failed
to match the quality of the draft language models. To address these challenges,
we propose DuoDecoding, a novel approach that strategically deploys the draft
and target models on the CPU and GPU respectively, enabling parallel decoding
while preserving draft quality. Our method incorporates a hardware-aware
optimal draft budget to minimize idle times and employs dynamic multi-sequence
drafting to enhance draft quality. Extensive experiments across seven tasks
show that DuoDecoding achieves up to 2.61x speedup in generation latency, while
reducing TTFT to 83% of that in conventional speculative decoding. The code is
available at https://github.com/KaiLv69/DuoDecoding. | 8 | 67c673bdf47209364f0cecb7 | null | https://github.com/KaiLv69/DuoDecoding |
|
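The DuoDecoding record above builds on speculative decoding's draft-then-verify loop. The sketch below shows a minimal greedy variant of that generic loop with toy deterministic stand-ins for the draft and target models; it is not the paper's CPU/GPU scheduling or dynamic multi-sequence drafting, whose details live in the linked repo.

```python
VOCAB = 16

def draft_next(ctx):
    # cheap draft model: toy deterministic rule
    return (sum(ctx) * 3 + 1) % VOCAB

def target_next(ctx):
    # expensive target model: agrees with the draft except every 5th step
    bump = 0 if len(ctx) % 5 == 0 else 1
    return (sum(ctx) * 3 + bump) % VOCAB

def speculative_greedy(prompt, new_tokens=24, k=4):
    seq = list(prompt)
    end = len(prompt) + new_tokens
    while len(seq) < end:
        # 1) draft k tokens autoregressively with the cheap model
        draft, ctx = [], list(seq)
        for _ in range(k):
            t = draft_next(ctx)
            draft.append(t)
            ctx.append(t)
        # 2) verify: the target scores all k positions (one batched pass in
        #    practice) and we keep the longest prefix matching its choices
        ctx = list(seq)
        for t in draft:
            want = target_next(ctx)
            ctx.append(want)                 # target's token wins on mismatch
            if want != t:
                break
        seq = ctx[:end]
    return seq  # identical to pure greedy decoding with the target model

print(speculative_greedy([1, 2, 3]))
```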
2025-03-03T21:22:16.512000 | Predictive Data Selection: The Data That Predicts Is the Data That Teaches | 1 | {
"_id": "641c9662043963b1c0a1df52",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/641c9662043963b1c0a1df52/L1o85EHztv_xP9r6ppljf.jpeg",
"followerCount": 2,
"fullname": "KaShun SHUM",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "ksshumab",
"type": "user"
} | true | null | 2503.00808 | [
{
"_id": "67c66382e5394bda7cbd03f9",
"hidden": false,
"name": "Kashun Shum",
"status": "claimed_verified",
"statusLastChangedAt": "2025-03-04T08:51:25.484Z",
"user": {
"_id": "641c9662043963b1c0a1df52",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/641c9662043963b1c0a1df52/L1o85EHztv_xP9r6ppljf.jpeg",
"fullname": "KaShun SHUM",
"isPro": false,
"type": "user",
"user": "ksshumab"
}
},
{
"_id": "67c66382e5394bda7cbd03fa",
"hidden": false,
"name": "Yuzhen Huang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-03-04T08:51:23.329Z",
"user": {
"_id": "6462def82a83863b97c0611e",
"avatarUrl": "/avatars/c03e9cc7d75b0266fcc56ecb6ee62148.svg",
"fullname": "Yuzhen Huang",
"isPro": false,
"type": "user",
"user": "yuzhen17"
}
},
{
"_id": "67c66382e5394bda7cbd03fb",
"hidden": false,
"name": "Hongjian Zou",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c66382e5394bda7cbd03fc",
"hidden": false,
"name": "Ding Qi",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c66382e5394bda7cbd03fd",
"hidden": false,
"name": "Yixuan Liao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c66382e5394bda7cbd03fe",
"hidden": false,
"name": "Xiaoxin Chen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c66382e5394bda7cbd03ff",
"hidden": false,
"name": "Qian Liu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c66382e5394bda7cbd0400",
"hidden": false,
"name": "Junxian He",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-03-02T09:21:28 | Predictive Data Selection: The Data That Predicts Is the Data That
Teaches | Language model pretraining involves training on extensive corpora, where data
quality plays a pivotal role. In this work, we aim to directly estimate the
contribution of data during pretraining and select pretraining data in an
efficient manner. Specifically, we draw inspiration from recent findings
showing that compression efficiency (i.e., the normalized loss) of diverse
models on certain text correlates strongly with their downstream performance,
when the text domain aligns with the downstream benchmark (Huang et al., 2024).
Building on this observation, we hypothesize that data on which model losses
are predictive of downstream abilities also contribute effectively to learning.
To leverage this insight, we introduce data selection based on data's
predictive strength (PreSelect), a lightweight and efficient data selection
method that requires training and deploying only a fastText-based scorer.
Through comprehensive experiments with 1B and 3B parameter models, we
demonstrate that models trained on 30B tokens selected with PreSelect surpass
the performance of a vanilla baseline trained on 300B tokens, achieving a 10x
reduction in compute requirements. Furthermore, PreSelect significantly
outperforms other competitive data selection baselines, such as DCLM and
FineWeb-Edu, at the scale of 3B models trained on 100B tokens. We open-source our
trained data selection scorer along with the curated datasets at
https://github.com/hkust-nlp/PreSelect. | 45 | 67c66383e5394bda7cbd0428 | null | https://github.com/hkust-nlp/PreSelect |
|
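The PreSelect record above deploys only a fastText-based scorer at selection time. The sketch below shows, under assumed file names and labels, how such a keep/drop scorer could be trained and applied with the fasttext package; the paper's actual scorer and training data are in the linked repo.

```python
import fasttext  # pip install fasttext

# 1) Write supervised training data: one __label__keep / __label__drop line
#    per document (labels and examples here are illustrative assumptions).
with open("scorer_train.txt", "w") as f:
    f.write("__label__keep a clean expository paragraph about linear algebra\n")
    f.write("__label__drop click here to win a free prize now\n")

# 2) Train a lightweight text classifier.
model = fasttext.train_supervised(input="scorer_train.txt", epoch=25, wordNgrams=2)

# 3) Score candidate pretraining documents and keep the high-scoring ones.
def keep_probability(doc: str) -> float:
    labels, probs = model.predict(doc.replace("\n", " "))
    p = float(probs[0])
    return p if labels[0] == "__label__keep" else 1.0 - p

docs = ["a clean expository paragraph about algebra", "BUY NOW!!! limited offer"]
selected = [d for d in docs if keep_probability(d) > 0.5]
print(selected)
```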
2025-03-03T11:25:57.425000 | Multi-Turn Code Generation Through Single-Step Rewards | 2 | {
"_id": "6421d2972143035270db37b9",
"avatarUrl": "/avatars/4fadeafc273d32cf72fe2f12d444c5e8.svg",
"followerCount": 2,
"fullname": "Gonzalo Gonzalez",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "chalo2000",
"type": "user"
} | true | null | 2502.20380 | [
{
"_id": "67c34e3beae05d8f94f800b4",
"hidden": false,
"name": "Arnav Kumar Jain",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c34e3beae05d8f94f800b5",
"hidden": false,
"name": "Gonzalo Gonzalez-Pumariega",
"status": "claimed_verified",
"statusLastChangedAt": "2025-03-02T20:15:14.593Z",
"user": {
"_id": "6421d2972143035270db37b9",
"avatarUrl": "/avatars/4fadeafc273d32cf72fe2f12d444c5e8.svg",
"fullname": "Gonzalo Gonzalez",
"isPro": false,
"type": "user",
"user": "chalo2000"
}
},
{
"_id": "67c34e3beae05d8f94f800b6",
"hidden": false,
"name": "Wayne Chen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c34e3beae05d8f94f800b7",
"hidden": false,
"name": "Alexander M Rush",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c34e3beae05d8f94f800b8",
"hidden": false,
"name": "Wenting Zhao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c34e3beae05d8f94f800b9",
"hidden": false,
"name": "Sanjiban Choudhury",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-27T18:55:05 | Multi-Turn Code Generation Through Single-Step Rewards | We address the problem of code generation from multi-turn execution feedback.
Existing methods either generate code without feedback or use complex,
hierarchical reinforcement learning to optimize multi-turn rewards. We propose
a simple yet scalable approach, muCode, that solves multi-turn code
generation using only single-step rewards. Our key insight is that code
generation is a one-step recoverable MDP, where the correct code can be
recovered from any intermediate code state in a single turn. muCode
iteratively trains both a generator to provide code solutions conditioned on
multi-turn execution feedback and a verifier to score the newly generated code.
Experimental evaluations show that our approach achieves significant
improvements over the state-of-the-art baselines. We provide an analysis of the
design choices of the reward models and policy, and show the efficacy of
muCode at utilizing the execution feedback. Our code is available at
https://github.com/portal-cornell/muCode. | 24 | 67c34e3ceae05d8f94f8010e | https://portal-cornell.github.io/muCode/ | https://github.com/portal-cornell/muCode |
|
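The muCode record above iterates a generator (proposing code conditioned on execution feedback) against a verifier (scoring candidates). The sketch below mimics that loop with a canned list of candidate programs and a unit-test pass rate as the single-step reward; in the actual system both components are learned models.

```python
# Candidate programs stand in for a learned generator's samples.
CANDIDATES = [
    "def add(a, b):\n    return a - b\n",   # buggy first attempt
    "def add(a, b):\n    return a + b\n",   # corrected attempt
]
TESTS = [((1, 2), 3), ((0, 0), 0), ((-1, 1), 0)]

def run_tests(src: str) -> float:
    """Verifier: fraction of unit tests the candidate passes."""
    ns = {}
    try:
        exec(src, ns)
        passed = sum(ns["add"](*args) == want for args, want in TESTS)
        return passed / len(TESTS)
    except Exception:
        return 0.0

feedback = None
for turn, src in enumerate(CANDIDATES):
    score = run_tests(src)                  # single-step reward for this turn
    print(f"turn {turn}: reward={score:.2f}")
    if score == 1.0:                        # correct code recovered in one step
        break
    feedback = f"candidate failed {1 - score:.0%} of tests"  # would condition the generator
```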
2025-03-03T10:56:33.810000 | Preference Learning Unlocks LLMs' Psycho-Counseling Skills | 2 | {
"_id": "650857fef3060ea840ffbbfe",
"avatarUrl": "/avatars/3a339936021c040f19a21838ae1382c4.svg",
"followerCount": 1,
"fullname": "Mian Zhang",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "billmianz",
"type": "user"
} | true | null | 2502.19731 | [
{
"_id": "67c36b35e12b50f698e7db1d",
"hidden": false,
"name": "Mian Zhang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-03-04T08:51:31.238Z",
"user": {
"_id": "650857fef3060ea840ffbbfe",
"avatarUrl": "/avatars/3a339936021c040f19a21838ae1382c4.svg",
"fullname": "Mian Zhang",
"isPro": false,
"type": "user",
"user": "billmianz"
}
},
{
"_id": "67c36b35e12b50f698e7db1e",
"hidden": false,
"name": "Shaun M. Eack",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c36b35e12b50f698e7db1f",
"hidden": false,
"name": "Zhiyu Zoey Chen",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-27T03:50:25 | Preference Learning Unlocks LLMs' Psycho-Counseling Skills | Applying large language models (LLMs) to assist in psycho-counseling is an
emerging and meaningful approach, driven by the significant gap between patient
needs and the availability of mental health support. However, current LLMs
struggle to consistently provide effective responses to client speeches,
largely due to the lack of supervision from high-quality real psycho-counseling
data, whose content is typically inaccessible due to client privacy concerns.
Furthermore, the quality of therapists' responses in available sessions can
vary significantly based on their professional training and experience.
Assessing the quality of therapists' responses remains an open challenge. In
this work, we address these challenges by first proposing a set of professional
and comprehensive principles to evaluate therapists' responses to client
speeches. Using these principles, we create a preference dataset,
PsychoCounsel-Preference, which contains 36k high-quality preference comparison
pairs. This dataset aligns with the preferences of professional
psychotherapists, providing a robust foundation for evaluating and improving
LLMs in psycho-counseling. Experiments on reward modeling and preference
learning demonstrate that PsychoCounsel-Preference is an excellent resource for
LLMs to acquire essential skills for responding to clients in a counseling
session. Our best-aligned model, PsychoCounsel-Llama3-8B, achieves an
impressive win rate of 87% against GPT-4o. We release PsychoCounsel-Preference,
PsychoCounsel-Llama3-8B and the reward model PsychoCounsel Llama3-8B-Reward to
facilitate the research of psycho-counseling with LLMs at:
https://hf.co/Psychotherapy-LLM. | 6 | 67c36b36e12b50f698e7db51 | null | null |
|
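The PsychoCounsel record above trains reward models on 36k preference pairs. A standard objective for such data is the Bradley-Terry loss sketched below with toy scalar rewards; whether the paper uses exactly this formulation is an assumption, as the abstract only names reward modeling and preference learning.

```python
import torch
import torch.nn.functional as F

def preference_loss(r_chosen: torch.Tensor, r_rejected: torch.Tensor) -> torch.Tensor:
    # maximize P(chosen > rejected) = sigmoid(r_chosen - r_rejected)
    return -F.logsigmoid(r_chosen - r_rejected).mean()

# Toy scalar rewards for a batch of (chosen, rejected) response pairs;
# a real reward model would produce these from the dialogue text.
r_chosen = torch.tensor([1.2, 0.7, 2.0], requires_grad=True)
r_rejected = torch.tensor([0.3, 0.9, 1.1])
loss = preference_loss(r_chosen, r_rejected)
loss.backward()
print(float(loss))
```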
2025-03-03T10:26:31.746000 | EgoNormia: Benchmarking Physical Social Norm Understanding | 2 | {
"_id": "61aa376688c20eebf1e8deb3",
"avatarUrl": "/avatars/7c11dcb232c73547d7d87834be287822.svg",
"followerCount": 7,
"fullname": "Hao Zhu",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "ProKil",
"type": "user"
} | true | null | 2502.20490 | [
{
"_id": "67c5c853e7c5cfb1d2b52858",
"hidden": false,
"name": "MohammadHossein Rezaei",
"status": "extracted_confirmed",
"statusLastChangedAt": "2025-03-03T16:56:51.354Z",
"user": {
"_id": "63f6ba02a67b8acfa50407bb",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63f6ba02a67b8acfa50407bb/ueUb01p1mhuRNrkyfEHtc.jpeg",
"fullname": "MohammadHossein Rezaei",
"isPro": false,
"type": "user",
"user": "mhr2004"
}
},
{
"_id": "67c5c853e7c5cfb1d2b52859",
"hidden": false,
"name": "Yicheng Fu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c5c853e7c5cfb1d2b5285a",
"hidden": false,
"name": "Phil Cuvin",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c5c853e7c5cfb1d2b5285b",
"hidden": false,
"name": "Caleb Ziems",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c5c853e7c5cfb1d2b5285c",
"hidden": false,
"name": "Yanzhe Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c5c853e7c5cfb1d2b5285d",
"hidden": false,
"name": "Hao Zhu",
"status": "claimed_verified",
"statusLastChangedAt": "2025-03-03T16:07:08.219Z",
"user": {
"_id": "61aa376688c20eebf1e8deb3",
"avatarUrl": "/avatars/7c11dcb232c73547d7d87834be287822.svg",
"fullname": "Hao Zhu",
"isPro": false,
"type": "user",
"user": "ProKil"
}
},
{
"_id": "67c5c853e7c5cfb1d2b5285e",
"hidden": false,
"name": "Diyi Yang",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-27T19:54:16 | EgoNormia: Benchmarking Physical Social Norm Understanding | Human activity is moderated by norms. When performing actions in the real
world, humans not only follow norms, but also consider the trade-off between
different norms. However, machines are often trained without explicit
supervision on norm understanding and reasoning, especially when the norms are
grounded in a physical and social context. To improve and evaluate the
normative reasoning capability of vision-language models (VLMs), we present
EgoNormia (ε), consisting of 1,853 ego-centric videos of human
interactions, each of which has two related questions evaluating both the
prediction and justification of normative actions. The normative actions
encompass seven categories: safety, privacy, proxemics, politeness,
cooperation, coordination/proactivity, and communication/legibility. To compile
this dataset at scale, we propose a novel pipeline leveraging video sampling,
automatic answer generation, filtering, and human validation. Our work
demonstrates that current state-of-the-art vision-language models lack robust
norm understanding, scoring a maximum of 45% on EgoNormia (versus a human
benchmark of 92%). Our analysis of performance in each dimension highlights the
significant risks of safety, privacy, and the lack of collaboration and
communication capability when applied to real-world agents. We additionally
show that through a retrieval-based generation method, it is possible to use
EgoNormia to enhance normative reasoning in VLMs. | 4 | 67c5c857e7c5cfb1d2b52994 | https://egonormia.org | https://github.com/open-social-world/egonormia |
|
2025-03-03T09:49:10.381000 | How far can we go with ImageNet for Text-to-Image generation? | 2 | {
"_id": "630652803aed65d34e98eee3",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/630652803aed65d34e98eee3/XG_PuVFA6ziGQZd3UUZSF.jpeg",
"followerCount": 3,
"fullname": "Nicolas Dufour",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "nicolas-dufour",
"type": "user"
} | true | [
"https://cdn-uploads.huggingface.co/production/uploads/630652803aed65d34e98eee3/8GIi2e6959v5dl4XUVqkc.png"
] | 2502.21318 | [
{
"_id": "67c5c13ca10c7059c3d3d4c9",
"hidden": false,
"name": "L. Degeorge",
"status": "claimed_verified",
"statusLastChangedAt": "2025-03-03T16:07:10.195Z",
"user": {
"_id": "63bb08b07fd5e883e13efd32",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63bb08b07fd5e883e13efd32/aKAR8alYsYteEQImBrWO7.jpeg",
"fullname": "Lucas Degeorge",
"isPro": false,
"type": "user",
"user": "Lucasdegeorge"
}
},
{
"_id": "67c5c13ca10c7059c3d3d4ca",
"hidden": false,
"name": "A. Ghosh",
"status": "extracted_confirmed",
"statusLastChangedAt": "2025-03-03T18:07:11.151Z",
"user": {
"_id": "66f971c83d94062a4aa808ef",
"avatarUrl": "/avatars/f1d6c4d85d20fd4a614278ecd784c772.svg",
"fullname": "Arijit Ghosh",
"isPro": false,
"type": "user",
"user": "arijitghosh"
}
},
{
"_id": "67c5c13ca10c7059c3d3d4cb",
"hidden": false,
"name": "N. Dufour",
"status": "claimed_verified",
"statusLastChangedAt": "2025-03-03T16:07:14.366Z",
"user": {
"_id": "630652803aed65d34e98eee3",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/630652803aed65d34e98eee3/XG_PuVFA6ziGQZd3UUZSF.jpeg",
"fullname": "Nicolas Dufour",
"isPro": false,
"type": "user",
"user": "nicolas-dufour"
}
},
{
"_id": "67c5c13ca10c7059c3d3d4cc",
"hidden": false,
"name": "D. Picard",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c5c13ca10c7059c3d3d4cd",
"hidden": false,
"name": "V. Kalogeiton",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-28T18:59:42 | How far can we go with ImageNet for Text-to-Image generation? | Recent text-to-image (T2I) generation models have achieved remarkable results
by training on billion-scale datasets, following a 'bigger is better' paradigm
that prioritizes data quantity over quality. We challenge this established
paradigm by demonstrating that strategic data augmentation of small,
well-curated datasets can match or outperform models trained on massive
web-scraped collections. Using only ImageNet enhanced with well-designed text
and image augmentations, we achieve a +2 overall score over SD-XL on GenEval
and +5 on DPGBench while using just 1/10th the parameters and 1/1000th the
training images. Our results suggest that strategic data augmentation, rather
than massive datasets, could offer a more sustainable path forward for T2I
generation. | 22 | 67c5c145a10c7059c3d3d693 | https://lucasdegeorge.github.io/projects/t2i_imagenet/ | https://github.com/lucasdegeorge/T2I-ImageNet |
|
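The record above attributes its gains to "well-designed text and image augmentations" without listing them, so the sketch below is only a plausible shape of such a pipeline: standard torchvision image transforms plus templated caption rewrites. The authors' actual augmentations are in the linked repo.

```python
import random
from PIL import Image
from torchvision import transforms

# Illustrative image augmentations (choices assumed, not from the paper).
image_aug = transforms.Compose([
    transforms.RandomResizedCrop(256, scale=(0.6, 1.0)),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.ToTensor(),
])

# Illustrative caption templates turning a class label into varied text.
CAPTION_TEMPLATES = [
    "a photo of a {cls}",
    "a close-up picture of a {cls} on a plain background",
    "an artistic rendering of a {cls}",
]

def augment(img: Image.Image, cls_name: str):
    caption = random.choice(CAPTION_TEMPLATES).format(cls=cls_name)
    return image_aug(img), caption

dummy = Image.new("RGB", (512, 512), color=(120, 80, 40))
tensor, caption = augment(dummy, "tabby cat")
print(tensor.shape, caption)
```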
2025-03-03T09:44:46.734000 | DexGraspVLA: A Vision-Language-Action Framework Towards General Dexterous Grasping | 2 | {
"_id": "655d9f43b5da99edaf3f2f81",
"avatarUrl": "/avatars/c7225b3ed54d099a4fd87682427fb5bf.svg",
"followerCount": 2,
"fullname": "Yifan Zhong",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "Yifan-Zhong",
"type": "user"
} | false | null | 2502.20900 | [
{
"_id": "67c5beea1b2c18e03a3d5218",
"hidden": false,
"name": "Yifan Zhong",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c5beea1b2c18e03a3d5219",
"hidden": false,
"name": "Xuchuan Huang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c5beea1b2c18e03a3d521a",
"hidden": false,
"name": "Ruochong Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c5beea1b2c18e03a3d521b",
"hidden": false,
"name": "Ceyao Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c5beea1b2c18e03a3d521c",
"hidden": false,
"name": "Yitao Liang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c5beea1b2c18e03a3d521d",
"hidden": false,
"name": "Yaodong Yang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c5beea1b2c18e03a3d521e",
"hidden": false,
"name": "Yuanpei Chen",
"status": "extracted_pending",
"statusLastChangedAt": "2025-03-03T14:38:37.342Z",
"user": {
"_id": "6393a8af84c565d2c3419b7c",
"avatarUrl": "/avatars/f2a237a58dd0a25ef5c1a98e60acbb5c.svg",
"fullname": "chen",
"isPro": false,
"type": "user",
"user": "yuanpei"
}
}
] | 2025-02-28T09:57:20 | DexGraspVLA: A Vision-Language-Action Framework Towards General
Dexterous Grasping | Dexterous grasping remains a fundamental yet challenging problem in robotics.
A general-purpose robot must be capable of grasping diverse objects in
arbitrary scenarios. However, existing research typically relies on specific
assumptions, such as single-object settings or limited environments, leading to
constrained generalization. Our solution is DexGraspVLA, a hierarchical
framework that utilizes a pre-trained Vision-Language model as the high-level
task planner and learns a diffusion-based policy as the low-level Action
controller. The key insight lies in iteratively transforming diverse language
and visual inputs into domain-invariant representations, where imitation
learning can be effectively applied due to the alleviation of domain shift.
Thus, it enables robust generalization across a wide range of real-world
scenarios. Notably, our method achieves a 90+% success rate under thousands of
unseen object, lighting, and background combinations in a "zero-shot"
environment. Empirical analysis further confirms the consistency of internal
model behavior across environmental variations, thereby validating our design
and explaining its generalization performance. We hope our work can be a step
forward in achieving general dexterous grasping. Our demo and code can be found
at https://dexgraspvla.github.io/. | 6 | 67c5beed1b2c18e03a3d52c0 | null | null |
|
2025-03-03T09:33:49.658000 | TeleRAG: Efficient Retrieval-Augmented Generation Inference with Lookahead Retrieval | 2 | {
"_id": "6304ac1a412a1b9d381ca378",
"avatarUrl": "/avatars/f4724eb5afc2a3b0e61e6da7bfa7be27.svg",
"followerCount": null,
"fullname": "Keisuke Kamahori",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "kamahori",
"type": "user"
} | true | [
"https://cdn-uploads.huggingface.co/production/uploads/6304ac1a412a1b9d381ca378/BYM8EdFZVDrDbfX8LKVC2.png"
] | 2502.20969 | [
{
"_id": "67c5bc8babe08983d98a4248",
"hidden": false,
"name": "Chien-Yu Lin",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c5bc8babe08983d98a4249",
"hidden": false,
"name": "Keisuke Kamahori",
"status": "claimed_verified",
"statusLastChangedAt": "2025-03-03T16:07:17.078Z",
"user": {
"_id": "6304ac1a412a1b9d381ca378",
"avatarUrl": "/avatars/f4724eb5afc2a3b0e61e6da7bfa7be27.svg",
"fullname": "Keisuke Kamahori",
"isPro": false,
"type": "user",
"user": "kamahori"
}
},
{
"_id": "67c5bc8babe08983d98a424a",
"hidden": false,
"name": "Yiyu Liu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c5bc8babe08983d98a424b",
"hidden": false,
"name": "Xiaoxiang Shi",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c5bc8babe08983d98a424c",
"hidden": false,
"name": "Madhav Kashyap",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c5bc8babe08983d98a424d",
"hidden": false,
"name": "Yile Gu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c5bc8babe08983d98a424e",
"hidden": false,
"name": "Rulin Shao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c5bc8babe08983d98a424f",
"hidden": false,
"name": "Zihao Ye",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c5bc8babe08983d98a4250",
"hidden": false,
"name": "Kan Zhu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c5bc8babe08983d98a4251",
"hidden": false,
"name": "Stephanie Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c5bc8babe08983d98a4252",
"hidden": false,
"name": "Arvind Krishnamurthy",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c5bc8babe08983d98a4253",
"hidden": false,
"name": "Rohan Kadekodi",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c5bc8babe08983d98a4254",
"hidden": false,
"name": "Luis Ceze",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c5bc8babe08983d98a4255",
"hidden": false,
"name": "Baris Kasikci",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-28T11:32:22 | TeleRAG: Efficient Retrieval-Augmented Generation Inference with
Lookahead Retrieval | Retrieval-augmented generation (RAG) extends large language models (LLMs)
with external data sources to enhance factual correctness and domain coverage.
Modern RAG pipelines rely on large datastores, leading to system challenges in
latency-sensitive deployments, especially when limited GPU memory is available.
To address these challenges, we propose TeleRAG, an efficient inference system
that reduces RAG latency with minimal GPU memory requirements. The core
innovation of TeleRAG is lookahead retrieval, a prefetching mechanism that
anticipates required data and transfers it from CPU to GPU in parallel with LLM
generation. By leveraging the modularity of RAG pipelines, the inverted file
index (IVF) search algorithm and similarities between queries, TeleRAG
optimally overlaps data movement and computation. Experimental results show
that TeleRAG reduces end-to-end RAG inference latency by up to 1.72x on average
compared to state-of-the-art systems, enabling faster, more memory-efficient
deployments of advanced RAG applications. | 7 | 67c5bc8cabe08983d98a426c | null | null |
|
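The TeleRAG record above overlaps CPU-to-GPU data movement with LLM generation. The sketch below reduces that lookahead-retrieval idea to a background thread, with simulated sleeps standing in for cluster prediction, transfer, and decoding; the real system's IVF-specific prefetching is not reproduced here.

```python
import threading
import time

def transfer_clusters_to_gpu(cluster_ids):
    time.sleep(0.3)                      # simulated CPU->GPU copy
    print("prefetched clusters", cluster_ids)

def llm_generate(prompt):
    time.sleep(0.5)                      # simulated decoding work
    return prompt + " ... generated retrieval query"

query = "who wrote the paper?"
predicted = [3, 17, 42]                  # clusters guessed from the query (toy)

t = threading.Thread(target=transfer_clusters_to_gpu, args=(predicted,))
t.start()                                # overlap transfer with generation
draft = llm_generate(query)
t.join()                                 # data is on the "GPU" by retrieval time
print("retrieve using prefetched clusters for:", draft)
```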
2025-03-03T08:13:06.912000 | MIGE: A Unified Framework for Multimodal Instruction-Based Image Generation and Editing | 2 | {
"_id": "63468720dd6d90d82ccf3450",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63468720dd6d90d82ccf3450/tVBFlmZNz8FRMkOrDaDID.jpeg",
"followerCount": 32,
"fullname": "YSH",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "BestWishYsh",
"type": "user"
} | false | null | 2502.21291 | [
{
"_id": "67c5aad632a7208c9ae1d020",
"hidden": false,
"name": "Xueyun Tian",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c5aad632a7208c9ae1d021",
"hidden": false,
"name": "Wei Li",
"status": "extracted_pending",
"statusLastChangedAt": "2025-03-03T13:12:57.839Z",
"user": {
"_id": "63044e025c70c21d0eaf08bc",
"avatarUrl": "/avatars/a2d39973d7fbcbe9d4cce5648b3149c2.svg",
"fullname": "Wei Li",
"isPro": false,
"type": "user",
"user": "Wiley085"
}
},
{
"_id": "67c5aad632a7208c9ae1d022",
"hidden": false,
"name": "Bingbing Xu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c5aad632a7208c9ae1d023",
"hidden": false,
"name": "Yige Yuan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c5aad632a7208c9ae1d024",
"hidden": false,
"name": "Yuanzhuo Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c5aad632a7208c9ae1d025",
"hidden": false,
"name": "Huawei Shen",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-28T18:21:08 | MIGE: A Unified Framework for Multimodal Instruction-Based Image
Generation and Editing | Despite significant progress in diffusion-based image generation,
subject-driven generation and instruction-based editing remain challenging.
Existing methods typically treat them separately, struggling with limited
high-quality data and poor generalization. However, both tasks require
capturing complex visual variations while maintaining consistency between
inputs and outputs. Therefore, we propose MIGE, a unified framework that
standardizes task representations using multimodal instructions. It treats
subject-driven generation as creation on a blank canvas and instruction-based
editing as modification of an existing image, establishing a shared
input-output formulation. MIGE introduces a novel multimodal encoder that maps
free-form multimodal instructions into a unified vision-language space,
integrating visual and semantic features through a feature fusion
mechanism. This unification enables joint training of both tasks, providing two
key advantages: (1) Cross-Task Enhancement: By leveraging shared visual and
semantic representations, joint training improves instruction adherence and
visual consistency in both subject-driven generation and instruction-based
editing. (2) Generalization: Learning in a unified format facilitates
cross-task knowledge transfer, enabling MIGE to generalize to novel
compositional tasks, including instruction-based subject-driven editing.
Experiments show that MIGE excels in both subject-driven generation and
instruction-based editing while setting a state-of-the-art in the new task of
instruction-based subject-driven editing. Code and model are publicly
available at https://github.com/Eureka-Maggie/MIGE. | 4 | 67c5aad932a7208c9ae1d19a | null | https://github.com/Eureka-Maggie/MIGE |
|
2025-03-03T07:33:14.717000 | LettuceDetect: A Hallucination Detection Framework for RAG Applications | 2 | {
"_id": "646264832538819c729e32ba",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/646264832538819c729e32ba/syc-UpPQyR3Nbf-gYndc4.jpeg",
"followerCount": 1,
"fullname": "Adam Kovacs",
"isHf": false,
"isMod": false,
"isPro": true,
"name": "adaamko",
"type": "user"
} | true | null | 2502.17125 | [
{
"_id": "67c0536530abbab5c723f2e0",
"hidden": false,
"name": "Ádám Kovács",
"status": "claimed_verified",
"statusLastChangedAt": "2025-03-02T20:18:13.294Z",
"user": {
"_id": "646264832538819c729e32ba",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/646264832538819c729e32ba/syc-UpPQyR3Nbf-gYndc4.jpeg",
"fullname": "Adam Kovacs",
"isPro": true,
"type": "user",
"user": "adaamko"
}
},
{
"_id": "67c0536530abbab5c723f2e1",
"hidden": false,
"name": "Gábor Recski",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-24T13:11:47 | LettuceDetect: A Hallucination Detection Framework for RAG Applications | Retrieval Augmented Generation (RAG) systems remain vulnerable to
hallucinated answers despite incorporating external knowledge sources. We
present LettuceDetect, a framework that addresses two critical limitations in
existing hallucination detection methods: (1) the context window constraints of
traditional encoder-based methods, and (2) the computational inefficiency of
LLM-based approaches. Building on ModernBERT's extended context capabilities
(up to 8k tokens) and trained on the RAGTruth benchmark dataset, our approach
outperforms all previous encoder-based models and most prompt-based models,
while being approximately 30 times smaller than the best models. LettuceDetect
is a token-classification model that processes context-question-answer triples,
allowing for the identification of unsupported claims at the token level.
Evaluations on the RAGTruth corpus demonstrate an F1 score of 79.22% for
example-level detection, which is a 14.8% improvement over Luna, the previous
state-of-the-art encoder-based architecture. Additionally, the system can
process 30 to 60 examples per second on a single GPU, making it more practical
for real-world RAG applications. | 5 | 67c0536630abbab5c723f31e | null | https://github.com/KRLabsOrg/LettuceDetect |
|
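The LettuceDetect record above frames hallucination detection as token classification over context-question-answer triples. The sketch below shows the I/O shape of that setup with the transformers pipeline; the base ModernBERT checkpoint used here is an untrained stand-in (its labels are meaningless), and the trained detector checkpoints are linked from the paper's repo.

```python
from transformers import pipeline

# Stand-in encoder, NOT the trained detector; requires a recent transformers
# release with ModernBERT support. The head is randomly initialized here.
MODEL_ID = "answerdotai/ModernBERT-base"

tagger = pipeline("token-classification", model=MODEL_ID,
                  aggregation_strategy="simple")

context = "The Eiffel Tower is 330 metres tall and located in Paris."
question = "How tall is the Eiffel Tower and where is it?"
answer = "It is 330 metres tall and located in Rome."

# The detector consumes the whole triple and labels answer tokens as
# supported vs. hallucinated; with this untrained stand-in the labels are
# meaningless, but the input/output shape matches the setup.
print(tagger(f"{context} [SEP] {question} [SEP] {answer}"))
```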
2025-03-03T07:04:47.515000 | Optimal Brain Apoptosis | 2 | {
"_id": "668e62f6514c46e257387f6b",
"avatarUrl": "/avatars/601b111141141cb2ea710b3166e62cd0.svg",
"followerCount": null,
"fullname": "Mingyuan Sun",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "mingyuansun",
"type": "user"
} | true | null | 2502.17941 | [
{
"_id": "67c59a7e6eb050aa82406452",
"hidden": false,
"name": "Mingyuan Sun",
"status": "claimed_verified",
"statusLastChangedAt": "2025-03-03T16:07:21.192Z",
"user": {
"_id": "668e62f6514c46e257387f6b",
"avatarUrl": "/avatars/601b111141141cb2ea710b3166e62cd0.svg",
"fullname": "Mingyuan Sun",
"isPro": false,
"type": "user",
"user": "mingyuansun"
}
},
{
"_id": "67c59a7e6eb050aa82406453",
"hidden": false,
"name": "Zheng Fang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c59a7e6eb050aa82406454",
"hidden": false,
"name": "Jiaxu Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c59a7e6eb050aa82406455",
"hidden": false,
"name": "Junjie Jiang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c59a7e6eb050aa82406456",
"hidden": false,
"name": "Delei Kong",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c59a7e6eb050aa82406457",
"hidden": false,
"name": "Chenming Hu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c59a7e6eb050aa82406458",
"hidden": false,
"name": "Yuetong Fang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c59a7e6eb050aa82406459",
"hidden": false,
"name": "Renjing Xu",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-25T08:03:04 | Optimal Brain Apoptosis | The increasing complexity and parameter count of Convolutional Neural
Networks (CNNs) and Transformers pose challenges in terms of computational
efficiency and resource demands. Pruning has been identified as an effective
strategy to address these challenges by removing redundant elements such as
neurons, channels, or connections, thereby enhancing computational efficiency
without heavily compromising performance. This paper builds on the foundational
work of Optimal Brain Damage (OBD) by advancing the methodology of parameter
importance estimation using the Hessian matrix. Unlike previous approaches that
rely on approximations, we introduce Optimal Brain Apoptosis (OBA), a novel
pruning method that calculates the Hessian-vector product value directly for
each parameter. By decomposing the Hessian matrix across network layers and
identifying conditions under which inter-layer Hessian submatrices are
non-zero, we propose a highly efficient technique for computing the
second-order Taylor expansion of parameters. This approach allows for a more
precise pruning process, particularly in the context of CNNs and Transformers,
as validated in our experiments including VGG19, ResNet32, ResNet50, and
ViT-B/16 on the CIFAR10, CIFAR100, and ImageNet datasets. Our code is available at
https://github.com/NEU-REAL/OBA. | 7 | 67c59a7f6eb050aa824064b9 | null | https://github.com/NEU-REAL/OBA |
|
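The OBA record above rests on computing exact Hessian-vector products per parameter. The sketch below shows the standard double-backprop HVP in PyTorch on a toy model, which is the primitive the method builds on; the paper's layer-wise decomposition of the Hessian is not reproduced here.

```python
import torch

# Toy model and data; the paper applies this inside large CNNs/Transformers.
model = torch.nn.Linear(4, 1)
x, y = torch.randn(8, 4), torch.randn(8, 1)
loss = torch.nn.functional.mse_loss(model(x), y)

params = list(model.parameters())
# First backward pass keeps the graph so we can differentiate again.
grads = torch.autograd.grad(loss, params, create_graph=True)

v = [torch.randn_like(p) for p in params]            # direction vector
g_dot_v = sum((g * vi).sum() for g, vi in zip(grads, v))
hvp = torch.autograd.grad(g_dot_v, params)           # H @ v, per parameter,
print([h.shape for h in hvp])                        # without forming H
```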
2025-03-03T04:21:42.563000 | Tell me why: Visual foundation models as self-explainable classifiers | 2 | {
"_id": "66588b6fd22637bfab498709",
"avatarUrl": "/avatars/9007f0d3b078bd6193912a5359107f24.svg",
"followerCount": null,
"fullname": "Hugues Turbé",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "hturbe",
"type": "user"
} | true | [
"https://cdn-uploads.huggingface.co/production/uploads/66588b6fd22637bfab498709/4VG_eDtZKZ4kj1AdG_P14.png"
] | 2502.19577 | [
{
"_id": "67c42356054ae6d1c760b643",
"hidden": false,
"name": "Hugues Turbé",
"status": "claimed_verified",
"statusLastChangedAt": "2025-03-02T20:15:04.391Z",
"user": {
"_id": "66588b6fd22637bfab498709",
"avatarUrl": "/avatars/9007f0d3b078bd6193912a5359107f24.svg",
"fullname": "Hugues Turbé",
"isPro": false,
"type": "user",
"user": "hturbe"
}
},
{
"_id": "67c42356054ae6d1c760b644",
"hidden": false,
"name": "Mina Bjelogrlic",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c42356054ae6d1c760b645",
"hidden": false,
"name": "Gianmarco Mengaldo",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c42356054ae6d1c760b646",
"hidden": false,
"name": "Christian Lovis",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-26T21:40:30 | Tell me why: Visual foundation models as self-explainable classifiers | Visual foundation models (VFMs) have become increasingly popular due to their
state-of-the-art performance. However, interpretability remains crucial for
critical applications. In this sense, self-explainable models (SEM) aim to
provide interpretable classifiers that decompose predictions into a weighted
sum of interpretable concepts. Despite their promise, recent studies have shown
that these explanations often lack faithfulness. In this work, we combine VFMs
with a novel prototypical architecture and specialized training objectives. By
training only a lightweight head (approximately 1M parameters) on top of frozen
VFMs, our approach (ProtoFM) offers an efficient and interpretable solution.
Evaluations demonstrate that our approach achieves competitive classification
performance while outperforming existing models across a range of
interpretability metrics derived from the literature. Code is available at
https://github.com/hturbe/proto-fm. | 9 | 67c4235c054ae6d1c760b806 | null | null |
|
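The ProtoFM record above trains only a small prototypical head (~1M parameters) on a frozen backbone. The sketch below is a simplified guess at that structure: cosine similarities to learned prototypes feeding a linear classifier, with a small torchvision model standing in for the frozen VFM; the paper's actual architecture and training objectives differ.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

backbone = resnet18(weights=None)
backbone.fc = nn.Identity()                 # expose 512-d features
for p in backbone.parameters():
    p.requires_grad = False                 # frozen "foundation model"

class PrototypeHead(nn.Module):
    def __init__(self, dim=512, n_prototypes=32, n_classes=10):
        super().__init__()
        self.prototypes = nn.Parameter(torch.randn(n_prototypes, dim))
        self.class_weights = nn.Linear(n_prototypes, n_classes, bias=False)

    def forward(self, feats):
        # cosine similarity to each concept prototype, then a weighted sum
        sims = nn.functional.normalize(feats, dim=-1) @ \
               nn.functional.normalize(self.prototypes, dim=-1).T
        return self.class_weights(sims), sims   # logits + interpretable sims

head = PrototypeHead()
logits, sims = head(backbone(torch.randn(2, 3, 224, 224)))
print(logits.shape, sims.shape)
```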
2025-03-03T02:35:09.967000 | Chain of Draft: Thinking Faster by Writing Less | 4 | {
"_id": "63da3d7ae697e5898cb86854",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1675246771355-noauth.jpeg",
"followerCount": 86,
"fullname": "Talha Rüzgar Akkuş",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "Q-bert",
"type": "user"
} | true | null | 2502.18600 | [
{
"_id": "67c0a8058589d8ecb79d472b",
"hidden": false,
"name": "Silei Xu",
"status": "extracted_confirmed",
"statusLastChangedAt": "2025-02-27T18:01:14.543Z",
"user": {
"_id": "6594b1bb57a556fbe162915e",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6594b1bb57a556fbe162915e/WuYxqbbvaJaT-xsk5KhoT.jpeg",
"fullname": "Silei Xu",
"isPro": false,
"type": "user",
"user": "sileixu"
}
},
{
"_id": "67c0a8058589d8ecb79d472c",
"hidden": false,
"name": "Wenhao Xie",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c0a8058589d8ecb79d472d",
"hidden": false,
"name": "Lingxiao Zhao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c0a8058589d8ecb79d472e",
"hidden": false,
"name": "Pengcheng He",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-03T09:30:43.479Z",
"user": {
"_id": "5efd09cf49ed724c8a135868",
"avatarUrl": "/avatars/af12bc94657979677a9f26183f0c9727.svg",
"fullname": "Pengcheng He",
"isPro": false,
"type": "user",
"user": "DeBERTa"
}
}
] | 2025-02-25T19:36:06 | Chain of Draft: Thinking Faster by Writing Less | Large Language Models (LLMs) have demonstrated remarkable performance in
solving complex reasoning tasks through mechanisms like Chain-of-Thought (CoT)
prompting, which emphasizes verbose, step-by-step reasoning. However, humans
typically employ a more efficient strategy: drafting concise intermediate
thoughts that capture only essential information. In this work, we propose
Chain of Draft (CoD), a novel paradigm inspired by human cognitive processes,
where LLMs generate minimalistic yet informative intermediate reasoning outputs
while solving tasks. By reducing verbosity and focusing on critical insights,
CoD matches or surpasses CoT in accuracy while using as little as 7.6% of
the tokens, significantly reducing cost and latency across various reasoning
tasks. | 35 | 67c0a8078589d8ecb79d47ed | null | https://github.com/sileix/chain-of-draft |
|
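The Chain of Draft record above is a prompting change, so the most direct illustration is the prompt itself. The sketch below contrasts a CoT-style instruction with a CoD-style one; the exact wording the authors used may differ (see the linked repo).

```python
COT_PROMPT = (
    "Think step by step to answer the question. "
    "Explain each step fully, then give the final answer after ####."
)
COD_PROMPT = (
    "Think step by step, but keep only a minimal draft of each step, "
    "at most five words per step. Give the final answer after ####."
)

question = ("Jason had 20 lollipops. He gave some to Denny and now has 12. "
            "How many did he give?")
# Typical terse trace a CoD-steered model is nudged toward:
draft = "20 - x = 12; x = 8 #### 8"
print(COD_PROMPT, "\nQ:", question, "\nA:", draft)
```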
2025-03-02T22:22:01.895000 | ViDoRAG: Visual Document Retrieval-Augmented Generation via Dynamic Iterative Reasoning Agents | 2 | {
"_id": "657429d833e5a4bf5b278615",
"avatarUrl": "/avatars/ed7e28c1b9a7bed1cad864c992cdcc69.svg",
"followerCount": 1,
"fullname": "QiuchenWang",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "autumncc",
"type": "user"
} | true | null | 2502.18017 | [
{
"_id": "67bef5a6070ec160042d99f4",
"hidden": false,
"name": "Qiuchen Wang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-28T12:15:57.850Z",
"user": {
"_id": "657429d833e5a4bf5b278615",
"avatarUrl": "/avatars/ed7e28c1b9a7bed1cad864c992cdcc69.svg",
"fullname": "QiuchenWang",
"isPro": false,
"type": "user",
"user": "autumncc"
}
},
{
"_id": "67bef5a6070ec160042d99f5",
"hidden": false,
"name": "Ruixue Ding",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bef5a6070ec160042d99f6",
"hidden": false,
"name": "Zehui Chen",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-03T09:32:18.129Z",
"user": {
"_id": "64892d31cbda0d1cdb956897",
"avatarUrl": "/avatars/3cdafe03a8295124636347d15a099aaf.svg",
"fullname": "Zehui Chen",
"isPro": false,
"type": "user",
"user": "lovesnowbest"
}
},
{
"_id": "67bef5a6070ec160042d99f7",
"hidden": false,
"name": "Weiqi Wu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-03T09:32:12.075Z",
"user": {
"_id": "65351cbe6141b3927afaed17",
"avatarUrl": "/avatars/5abf5f2c4ab329e63a7f45c15c9dfb93.svg",
"fullname": "weiqi wu",
"isPro": false,
"type": "user",
"user": "vickywu"
}
},
{
"_id": "67bef5a6070ec160042d99f8",
"hidden": false,
"name": "Shihang Wang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-03T09:32:05.679Z",
"user": {
"_id": "62e8efb14210d3fe69eacb42",
"avatarUrl": "/avatars/2feadd75274bf353b910f4679ef72b39.svg",
"fullname": "Shihang Wang",
"isPro": false,
"type": "user",
"user": "shihang"
}
},
{
"_id": "67bef5a6070ec160042d99f9",
"hidden": false,
"name": "Pengjun Xie",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-03T09:31:59.813Z",
"user": {
"_id": "63a091e42fabbbb89991f5ce",
"avatarUrl": "/avatars/d55485b06461764c36c9edf9d6e8892c.svg",
"fullname": "pengjun xie",
"isPro": false,
"type": "user",
"user": "xpjandy"
}
},
{
"_id": "67bef5a6070ec160042d99fa",
"hidden": false,
"name": "Feng Zhao",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-25T09:26:12 | ViDoRAG: Visual Document Retrieval-Augmented Generation via Dynamic
Iterative Reasoning Agents | Understanding information from visually rich documents remains a significant
challenge for traditional Retrieval-Augmented Generation (RAG) methods.
Existing benchmarks predominantly focus on image-based question answering (QA),
overlooking the fundamental challenges of efficient retrieval, comprehension,
and reasoning within dense visual documents. To bridge this gap, we introduce
ViDoSeek, a novel dataset designed to evaluate RAG performance on visually rich
documents requiring complex reasoning. Using this benchmark, we identify key limitations
in current RAG approaches: (i) purely visual retrieval methods struggle to
effectively integrate both textual and visual features, and (ii) previous
approaches often allocate insufficient reasoning tokens, limiting their
effectiveness. To address these challenges, we propose ViDoRAG, a novel
multi-agent RAG framework tailored for complex reasoning across visual
documents. ViDoRAG employs a Gaussian Mixture Model (GMM)-based hybrid strategy
to effectively handle multi-modal retrieval. To further elicit the model's
reasoning capabilities, we introduce an iterative agent workflow incorporating
exploration, summarization, and reflection, providing a framework for
investigating test-time scaling in RAG domains. Extensive experiments on
ViDoSeek validate the effectiveness and generalization of our approach.
Notably, ViDoRAG outperforms existing methods by over 10% on the competitive
ViDoSeek benchmark. | 17 | 67bef5a7070ec160042d9a65 | null | https://github.com/Alibaba-NLP/ViDoRAG |
|
2025-03-02T22:08:44.891000 | Sim-to-Real Reinforcement Learning for Vision-Based Dexterous Manipulation on Humanoids | 2 | {
"_id": "60f1abe7544c2adfd699860c",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"followerCount": 6280,
"fullname": "AK",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "akhaliq",
"type": "user"
} | false | null | 2502.20396 | [
{
"_id": "67c51d36c830dcb76bbb5994",
"hidden": false,
"name": "Toru Lin",
"status": "claimed_verified",
"statusLastChangedAt": "2025-03-03T16:07:25.709Z",
"user": {
"_id": "65e8b34632f166badb8d893a",
"avatarUrl": "/avatars/a55da1d08dc1104e6c539cd3f1ef1ebe.svg",
"fullname": "T",
"isPro": false,
"type": "user",
"user": "toruowo"
}
},
{
"_id": "67c51d36c830dcb76bbb5995",
"hidden": false,
"name": "Kartik Sachdev",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c51d36c830dcb76bbb5996",
"hidden": false,
"name": "Linxi Fan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c51d36c830dcb76bbb5997",
"hidden": false,
"name": "Jitendra Malik",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-03T09:36:34.177Z",
"user": {
"_id": "65369a95605a07338de78ab0",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/sGFjOjLT2akN-sn5beVWL.jpeg",
"fullname": "Jitendra Malik ",
"isPro": false,
"type": "user",
"user": "jitendra1995"
}
},
{
"_id": "67c51d36c830dcb76bbb5998",
"hidden": false,
"name": "Yuke Zhu",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-27T18:59:52 | Sim-to-Real Reinforcement Learning for Vision-Based Dexterous
Manipulation on Humanoids | Reinforcement learning has delivered promising results in achieving human- or
even superhuman-level capabilities across diverse problem domains, but success
in dexterous robot manipulation remains limited. This work investigates the key
challenges in applying reinforcement learning to solve a collection of
contact-rich manipulation tasks on a humanoid embodiment. We introduce novel
techniques to overcome the identified challenges with empirical validation. Our
main contributions include an automated real-to-sim tuning module that brings
the simulated environment closer to the real world, a generalized reward design
scheme that simplifies reward engineering for long-horizon contact-rich
manipulation tasks, a divide-and-conquer distillation process that improves the
sample efficiency of hard-exploration problems while maintaining sim-to-real
performance, and a mixture of sparse and dense object representations to bridge
the sim-to-real perception gap. We show promising results on three humanoid
dexterous manipulation tasks, with ablation studies on each technique. Our work
presents a successful approach to learning humanoid dexterous manipulation
using sim-to-real reinforcement learning, achieving robust generalization and
high performance without the need for human demonstration. | 11 | 67c51d39c830dcb76bbb5a1f | null | null |
|
2025-03-02T22:04:15.087000 | HAIC: Improving Human Action Understanding and Generation with Better Captions for Multi-modal Large Language Models | 2 | {
"_id": "60f1abe7544c2adfd699860c",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"followerCount": 6280,
"fullname": "AK",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "akhaliq",
"type": "user"
} | false | null | 2502.20811 | [
{
"_id": "67c51c198d02783fa3a6249d",
"hidden": false,
"name": "Xiao Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c51c198d02783fa3a6249e",
"hidden": false,
"name": "Jingyun Hua",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c51c198d02783fa3a6249f",
"hidden": false,
"name": "Weihong Lin",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-03T09:42:30.547Z",
"user": {
"_id": "675a69699e086bd6250a36ef",
"avatarUrl": "/avatars/95c72e3975d1a37f8655a2fe629746ec.svg",
"fullname": "Weihong Lin",
"isPro": false,
"type": "user",
"user": "lwher1996"
}
},
{
"_id": "67c51c198d02783fa3a624a0",
"hidden": false,
"name": "Yuanxing Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c51c198d02783fa3a624a1",
"hidden": false,
"name": "Fuzheng Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c51c198d02783fa3a624a2",
"hidden": false,
"name": "Jianlong Wu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c51c198d02783fa3a624a3",
"hidden": false,
"name": "Di Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c51c198d02783fa3a624a4",
"hidden": false,
"name": "Liqiang Nie",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-28T07:53:40 | HAIC: Improving Human Action Understanding and Generation with Better
Captions for Multi-modal Large Language Models | Recent Multi-modal Large Language Models (MLLMs) have made great progress in
video understanding. However, their performance on videos involving human
actions is still limited by the lack of high-quality data. To address this, we
introduce a two-stage data annotation pipeline. First, we design strategies to
accumulate videos featuring clear human actions from the Internet. Second,
videos are annotated in a standardized caption format that uses human
attributes to distinguish individuals and chronologically details their actions
and interactions. Through this pipeline, we curate two datasets, namely
HAICTrain and HAICBench. HAICTrain comprises 126K video-caption pairs
generated by Gemini-Pro and verified for training purposes. Meanwhile,
HAICBench includes 500 manually annotated video-caption pairs and
1,400 QA pairs, for a comprehensive evaluation of human action understanding.
Experimental results demonstrate that training with HAICTrain not only
significantly enhances human action understanding abilities across 4 benchmarks, but
can also improve text-to-video generation results. Both the HAICTrain and
HAICBench are released at https://huggingface.co/datasets/KuaishouHAIC/HAIC. | 1 | 67c51c1b8d02783fa3a62543 | null | null |
|
2025-03-02T22:00:31.796000 | SoS1: O1 and R1-Like Reasoning LLMs are Sum-of-Square Solvers | 2 | {
"_id": "60f1abe7544c2adfd699860c",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"followerCount": 6280,
"fullname": "AK",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "akhaliq",
"type": "user"
} | false | null | 2502.20545 | [
{
"_id": "67c51b459d5807d6674b3d3c",
"hidden": false,
"name": "Kechen Li",
"status": "claimed_verified",
"statusLastChangedAt": "2025-03-04T08:51:29.578Z",
"user": {
"_id": "6742deb4d3ad4510c12da658",
"avatarUrl": "/avatars/91407d854560ef9a2facd80fa8fab6ec.svg",
"fullname": "Kechen Li",
"isPro": false,
"type": "user",
"user": "Kechen-Li"
}
},
{
"_id": "67c51b459d5807d6674b3d3d",
"hidden": false,
"name": "Wenqi Zhu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c51b459d5807d6674b3d3e",
"hidden": false,
"name": "Coralia Cartis",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c51b459d5807d6674b3d3f",
"hidden": false,
"name": "Tianbo Ji",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-03T09:35:49.782Z",
"user": {
"_id": "64bb61e876a6e2efcc728e22",
"avatarUrl": "/avatars/b0ed1c9f13fd1f2c99d202155001e39b.svg",
"fullname": "Tianbo Ji",
"isPro": false,
"type": "user",
"user": "jitianbo"
}
},
{
"_id": "67c51b459d5807d6674b3d40",
"hidden": false,
"name": "Shiwei Liu",
"status": "claimed_verified",
"statusLastChangedAt": "2025-03-03T11:14:45.635Z",
"user": {
"_id": "65b04d2291e63920a7898c9e",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/65b04d2291e63920a7898c9e/iUHs235G4bqK-KnH_94ti.jpeg",
"fullname": "Liu",
"isPro": false,
"type": "user",
"user": "Shiweiliuiiiiiii"
}
}
] | 2025-02-27T21:41:43 | SoS1: O1 and R1-Like Reasoning LLMs are Sum-of-Square Solvers | Large Language Models (LLMs) have achieved human-level proficiency across
diverse tasks, but their ability to perform rigorous mathematical problem
solving remains an open challenge. In this work, we investigate a fundamental
yet computationally intractable problem: determining whether a given
multivariate polynomial is nonnegative. This problem, closely related to
Hilbert's Seventeenth Problem, plays a crucial role in global polynomial
optimization and has applications in various fields. First, we introduce
SoS-1K, a meticulously curated dataset of approximately 1,000 polynomials,
along with expert-designed reasoning instructions based on five progressively
challenging criteria. Evaluating multiple state-of-the-art LLMs, we find that
without structured guidance, all models perform only slightly above the random
guess baseline 50%. However, high-quality reasoning instructions significantly
improve accuracy, boosting performance up to 81%. Furthermore, our 7B model,
SoS-7B, fine-tuned on SoS-1K for just 4 hours, outperforms the 671B DeepSeek-V3
and GPT-4o-mini in accuracy while only requiring 1.8% and 5% of the computation
time needed by the latter, respectively. Our findings highlight the potential of
LLMs to push the boundaries of mathematical reasoning and tackle NP-hard
problems. | 17 | 67c51b469d5807d6674b3d88 | null | null |
|
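The SoS1 record above asks models to reason about polynomial nonnegativity via sum-of-squares structure. The sympy sketch below checks a given SOS certificate for a toy polynomial, which is the easy direction; finding a certificate, or ruling one out, is the hard reasoning task the benchmark probes.

```python
import sympy as sp

x, y = sp.symbols("x y")
p = x**2 - 2*x*y + y**2            # candidate polynomial
certificate = (x - y)**2           # claimed SOS decomposition

# A sum of squares is pointwise nonnegative, so a valid certificate
# proves p >= 0 everywhere.
assert sp.expand(p - certificate) == 0
print("p is a sum of squares, hence nonnegative")
```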
2025-03-02T21:48:46.577000 | LiteASR: Efficient Automatic Speech Recognition with Low-Rank Approximation | 2 | {
"_id": "6304ac1a412a1b9d381ca378",
"avatarUrl": "/avatars/f4724eb5afc2a3b0e61e6da7bfa7be27.svg",
"followerCount": null,
"fullname": "Keisuke Kamahori",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "kamahori",
"type": "user"
} | true | null | 2502.20583 | [
{
"_id": "67c516998d02783fa3a52dc8",
"hidden": false,
"name": "Keisuke Kamahori",
"status": "claimed_verified",
"statusLastChangedAt": "2025-03-03T08:07:02.986Z",
"user": {
"_id": "6304ac1a412a1b9d381ca378",
"avatarUrl": "/avatars/f4724eb5afc2a3b0e61e6da7bfa7be27.svg",
"fullname": "Keisuke Kamahori",
"isPro": false,
"type": "user",
"user": "kamahori"
}
},
{
"_id": "67c516998d02783fa3a52dc9",
"hidden": false,
"name": "Jungo Kasai",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-03T09:43:49.097Z",
"user": {
"_id": "62908273c740ebb981a6dba4",
"avatarUrl": "/avatars/465f50369c367b07670f5209c83d65f2.svg",
"fullname": "Jungo Kasai",
"isPro": false,
"type": "user",
"user": "jungok"
}
},
{
"_id": "67c516998d02783fa3a52dca",
"hidden": false,
"name": "Noriyuki Kojima",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-03T09:43:56.698Z",
"user": {
"_id": "628c26a8b80bb09700d6af86",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1653352051245-noauth.jpeg",
"fullname": "Noriyuki Kojima",
"isPro": false,
"type": "user",
"user": "kojimano"
}
},
{
"_id": "67c516998d02783fa3a52dcb",
"hidden": false,
"name": "Baris Kasikci",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-03T09:44:04.084Z",
"user": {
"_id": "654132fe5a9a913c6c870e79",
"avatarUrl": "/avatars/2f6807eddef1929c571977e9af35f952.svg",
"fullname": "Baris Kasikci",
"isPro": false,
"type": "user",
"user": "kasikci"
}
}
] | 2025-02-27T22:52:21 | LiteASR: Efficient Automatic Speech Recognition with Low-Rank
Approximation | Modern automatic speech recognition (ASR) models, such as OpenAI's Whisper,
rely on deep encoder-decoder architectures, and their encoders are a critical
bottleneck for efficient deployment due to high computational intensity. We
introduce LiteASR, a low-rank compression scheme for ASR encoders that
significantly reduces inference costs while maintaining transcription accuracy.
Our approach leverages the strong low-rank properties observed in intermediate
activations: by applying principal component analysis (PCA) with a small
calibration dataset, we approximate linear transformations with a chain of
low-rank matrix multiplications, and further optimize self-attention to work in
the reduced dimension. Evaluation results show that our method can compress
Whisper large-v3's encoder size by over 50%, matching Whisper medium's size
with better transcription accuracy, thereby establishing a new Pareto-optimal
frontier of efficiency and performance. The code of LiteASR is available at
https://github.com/efeslab/LiteASR. | 9 | 67c516998d02783fa3a52dfd | null | https://github.com/efeslab/LiteASR |
|
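The LiteASR record above approximates linear layers using PCA of intermediate activations. The sketch below illustrates that idea on a synthetic weight matrix: principal directions of calibration outputs define a rank-k projection that factors one matmul into two thin ones. It is an illustration of the mechanism, not the authors' exact procedure (which also optimizes self-attention in the reduced dimension).

```python
import torch

torch.manual_seed(0)
d_in, d_out, rank, n_calib = 256, 256, 32, 1024

# Synthetic weight matrix with a decaying spectrum (stand-in for a real layer).
W = torch.randn(d_out, d_in) @ torch.diag(torch.logspace(0, -3, d_in))

X = torch.randn(n_calib, d_in)               # calibration inputs
Y = X @ W.T                                  # layer outputs on calibration data

# PCA of the outputs: top-k principal directions in output space.
_, _, Vh = torch.linalg.svd(Y - Y.mean(0), full_matrices=False)
Uk = Vh[:rank].T                             # (d_out, rank)

# Factor W ~= Uk @ (Uk^T W): one big matmul becomes two thin ones.
W1 = Uk.T @ W                                # (rank, d_in)
W2 = Uk                                      # (d_out, rank)

x = torch.randn(4, d_in)
y_full = x @ W.T
y_low = (x @ W1.T) @ W2.T
print("relative error:", float((y_full - y_low).norm() / y_full.norm()))
```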
2025-03-02T21:35:24.437000 | DeepSolution: Boosting Complex Engineering Solution Design via Tree-based Exploration and Bi-point Thinking | 4 | {
"_id": "63664c8fa2abcdf2fd6425ed",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63664c8fa2abcdf2fd6425ed/IywpB0DXZ_twkmZmVSCCD.jpeg",
"followerCount": 1,
"fullname": "Li Zhuoqun",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "lzq2021",
"type": "user"
} | true | [
"https://cdn-uploads.huggingface.co/production/uploads/63664c8fa2abcdf2fd6425ed/y_kT4GP3xgm-5RdguMNV7.png",
"https://cdn-uploads.huggingface.co/production/uploads/63664c8fa2abcdf2fd6425ed/wDAS_USsxsVHbin1I5CEe.png",
"https://cdn-uploads.huggingface.co/production/uploads/63664c8fa2abcdf2fd6425ed/4lJgWp9V8pm4vDBUH4I5n.png"
] | 2502.20730 | [
{
"_id": "67c514aba3d873e41624a082",
"hidden": false,
"name": "Zhuoqun Li",
"status": "claimed_verified",
"statusLastChangedAt": "2025-03-03T08:07:26.218Z",
"user": {
"_id": "63664c8fa2abcdf2fd6425ed",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63664c8fa2abcdf2fd6425ed/IywpB0DXZ_twkmZmVSCCD.jpeg",
"fullname": "Li Zhuoqun",
"isPro": false,
"type": "user",
"user": "lzq2021"
}
},
{
"_id": "67c514aba3d873e41624a083",
"hidden": false,
"name": "Haiyang Yu",
"status": "claimed_verified",
"statusLastChangedAt": "2025-03-03T09:31:12.493Z",
"user": {
"_id": "64a4ceda9a90f701134189b7",
"avatarUrl": "/avatars/859a189c5d2ae2fcb9aa2d79104fbfe7.svg",
"fullname": "Haiyang Yu",
"isPro": false,
"type": "user",
"user": "yhycai"
}
},
{
"_id": "67c514aba3d873e41624a084",
"hidden": false,
"name": "Xuanang Chen",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-03T09:29:31.384Z",
"user": {
"_id": "63ef664304b0e373992a2633",
"avatarUrl": "/avatars/cba554ff88bd8b68ae51bea8ee991d13.svg",
"fullname": "Xuanang Chen",
"isPro": false,
"type": "user",
"user": "xuanang"
}
},
{
"_id": "67c514aba3d873e41624a085",
"hidden": false,
"name": "Hongyu Lin",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-03T09:28:09.791Z",
"user": {
"_id": "6711c702f858a456b4b9f3a4",
"avatarUrl": "/avatars/178e9567c3111ab22717c3c0dd003a6a.svg",
"fullname": "Hongyu Lin",
"isPro": false,
"type": "user",
"user": "sanmusunrise"
}
},
{
"_id": "67c514aba3d873e41624a086",
"hidden": false,
"name": "Yaojie Lu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-03T09:29:38.957Z",
"user": {
"_id": "6216496a9b34d2fb49144599",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6216496a9b34d2fb49144599/41CKA_h1Ffj3RzVabSAkm.jpeg",
"fullname": "Yaojie Lu",
"isPro": false,
"type": "user",
"user": "luyaojie"
}
},
{
"_id": "67c514aba3d873e41624a087",
"hidden": false,
"name": "Fei Huang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c514aba3d873e41624a088",
"hidden": false,
"name": "Xianpei Han",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-03T09:29:51.007Z",
"user": {
"_id": "65e99a77e71555ed193609cf",
"avatarUrl": "/avatars/38ceb127883944677665da967d17dd18.svg",
"fullname": "Xianpei Han",
"isPro": false,
"type": "user",
"user": "xphan"
}
},
{
"_id": "67c514aba3d873e41624a089",
"hidden": false,
"name": "Yongbin Li",
"status": "admin_assigned",
"statusLastChangedAt": "2025-03-03T09:29:57.561Z",
"user": {
"_id": "66641b2fd8e1e34bc621e688",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/66641b2fd8e1e34bc621e688/csPETwnx2zCIHSWi9uAi-.png",
"fullname": "Yongbin Li",
"isPro": false,
"type": "user",
"user": "Yongbin-Li"
}
},
{
"_id": "67c514aba3d873e41624a08a",
"hidden": false,
"name": "Le Sun",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-28T05:23:10 | DeepSolution: Boosting Complex Engineering Solution Design via
Tree-based Exploration and Bi-point Thinking | Designing solutions for complex engineering challenges is crucial in human
production activities. However, previous research in the retrieval-augmented
generation (RAG) field has not sufficiently addressed tasks related to the
design of complex engineering solutions. To fill this gap, we introduce a new
benchmark, SolutionBench, to evaluate a system's ability to generate complete
and feasible solutions for engineering problems with multiple complex
constraints. To further advance the design of complex engineering solutions, we
propose a novel system, SolutionRAG, that leverages the tree-based exploration
and bi-point thinking mechanism to generate reliable solutions. Extensive
experimental results demonstrate that SolutionRAG achieves state-of-the-art
(SOTA) performance on the SolutionBench, highlighting its potential to enhance
the automation and reliability of complex engineering solution design in
real-world applications. | 30 | 67c514aca3d873e41624a10b | null | https://github.com/Li-Z-Q/DeepSolution |
|
2025-02-28T16:51:51.551000 | PlanGEN: A Multi-Agent Framework for Generating Planning and Reasoning Trajectories for Complex Problem Solving | 3 | {
"_id": "61a00714f5119f1651f7e4be",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1651013366729-61a00714f5119f1651f7e4be.jpeg",
"followerCount": 1,
"fullname": "Mihir Parmar",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "Mihir3009",
"type": "user"
} | false | [
"https://cdn-uploads.huggingface.co/production/uploads/61a00714f5119f1651f7e4be/dZJBpAQlVaJSFYXhuE1Rl.png"
] | 2502.16111 | [
{
"_id": "67be18d2bb66802239ec8095",
"hidden": false,
"name": "Mihir Parmar",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67be18d2bb66802239ec8096",
"hidden": false,
"name": "Xin Liu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67be18d2bb66802239ec8097",
"hidden": false,
"name": "Palash Goyal",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67be18d2bb66802239ec8098",
"hidden": false,
"name": "Yanfei Chen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67be18d2bb66802239ec8099",
"hidden": false,
"name": "Long Le",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67be18d2bb66802239ec809a",
"hidden": false,
"name": "Swaroop Mishra",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67be18d2bb66802239ec809b",
"hidden": false,
"name": "Hossein Mobahi",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67be18d2bb66802239ec809c",
"hidden": false,
"name": "Jindong Gu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67be18d2bb66802239ec809d",
"hidden": false,
"name": "Zifeng Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67be18d2bb66802239ec809e",
"hidden": false,
"name": "Hootan Nakhost",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67be18d2bb66802239ec809f",
"hidden": false,
"name": "Chitta Baral",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67be18d2bb66802239ec80a0",
"hidden": false,
"name": "Chen-Yu Lee",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67be18d2bb66802239ec80a1",
"hidden": false,
"name": "Tomas Pfister",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67be18d2bb66802239ec80a2",
"hidden": false,
"name": "Hamid Palangi",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-22T06:21:56 | PlanGEN: A Multi-Agent Framework for Generating Planning and Reasoning
Trajectories for Complex Problem Solving | Recent agent frameworks and inference-time algorithms often struggle with
complex planning problems due to limitations in verifying generated plans or
reasoning and varying complexity of instances within a single task. Many
existing methods for these tasks either perform task-level verification without
considering constraints or apply inference-time algorithms without adapting to
instance-level complexity. To address these limitations, we propose PlanGEN, a
model-agnostic and easily scalable agent framework with three key components:
constraint, verification, and selection agents. Specifically, our approach
proposes constraint-guided iterative verification to enhance performance of
inference-time algorithms--Best of N, Tree-of-Thought, and REBASE. In PlanGEN
framework, the selection agent optimizes algorithm choice based on instance
complexity, ensuring better adaptability to complex planning problems.
Experimental results demonstrate significant improvements over the strongest
baseline across multiple benchmarks, achieving state-of-the-art results on
NATURAL PLAN ($\sim$8%$\uparrow$), OlympiadBench ($\sim$4%$\uparrow$), DocFinQA
($\sim$7%$\uparrow$), and GPQA ($\sim$1%$\uparrow$). Our key finding highlights
that constraint-guided iterative verification improves inference-time
algorithms, and adaptive selection further boosts performance on complex
planning and reasoning problems. | 7 | 67be18d3bb66802239ec80d1 | null | null |
|
2025-02-28T13:21:13.227000 | Beyond Next-Token: Next-X Prediction for Autoregressive Visual Generation | 2 | {
"_id": "65317ea1501804124f011950",
"avatarUrl": "/avatars/b055c3aba0c65d5377c69472e4576480.svg",
"followerCount": 3,
"fullname": "Ren",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "OliverRen",
"type": "user"
} | false | null | 2502.20388 | [
{
"_id": "67c1643aa4ccbde471532ba6",
"hidden": false,
"name": "Sucheng Ren",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c1643aa4ccbde471532ba7",
"hidden": false,
"name": "Qihang Yu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c1643aa4ccbde471532ba8",
"hidden": false,
"name": "Ju He",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c1643aa4ccbde471532ba9",
"hidden": false,
"name": "Xiaohui Shen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c1643aa4ccbde471532baa",
"hidden": false,
"name": "Alan Yuille",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c1643aa4ccbde471532bab",
"hidden": false,
"name": "Liang-Chieh Chen",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-27T18:59:08 | Beyond Next-Token: Next-X Prediction for Autoregressive Visual
Generation | Autoregressive (AR) modeling, known for its next-token prediction paradigm,
underpins state-of-the-art language and visual generative models.
Traditionally, a "token" is treated as the smallest prediction unit, often a
discrete symbol in language or a quantized patch in vision. However, the
optimal token definition for 2D image structures remains an open question.
Moreover, AR models suffer from exposure bias, where teacher forcing during
training leads to error accumulation at inference. In this paper, we propose
xAR, a generalized AR framework that extends the notion of a token to an entity
$X$, which can represent an individual patch token, a cell (a $k \times k$
grouping of neighboring patches), a subsample (a non-local grouping of distant
patches), a scale (coarse-to-fine resolution), or even a whole image.
Additionally, we reformulate discrete token classification as
continuous entity regression, leveraging flow-matching methods at each
AR step. This approach conditions training on noisy entities instead of ground
truth tokens, leading to Noisy Context Learning, which effectively alleviates
exposure bias. As a result, xAR offers two key advantages: (1) it enables
flexible prediction units that capture different contextual granularity and
spatial structures, and (2) it mitigates exposure bias by avoiding reliance on
teacher forcing. On ImageNet-256 generation benchmark, our base model, xAR-B
(172M), outperforms DiT-XL/SiT-XL (675M) while achieving $20\times$ faster
inference. Meanwhile, xAR-H sets a new state-of-the-art with an FID of 1.24,
running $2.2\times$ faster than the previous best-performing model without
relying on vision foundation modules (e.g., DINOv2) or advanced guidance
interval sampling. | 13 | 67c1643ba4ccbde471532c03 | null | null |
|
2025-02-28T08:54:03.125000 | On Relation-Specific Neurons in Large Language Models | 2 | {
"_id": "61bf84c8ca59d6d196a1b4e8",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/61bf84c8ca59d6d196a1b4e8/L_NvUwlMYcye9X35z6f7e.jpeg",
"followerCount": 44,
"fullname": "Amir Hossein Kargaran",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "kargaranamir",
"type": "user"
} | true | null | 2502.17355 | [
{
"_id": "67bf1808b91e7e6477d92c1e",
"hidden": false,
"name": "Yihong Liu",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-28T15:14:48.351Z",
"user": {
"_id": "653f7e569e84d1e8b6a66e70",
"avatarUrl": "/avatars/24eaa6434508a162c349aebfc51990ff.svg",
"fullname": "Yihong Liu",
"isPro": false,
"type": "user",
"user": "yihongLiu"
}
},
{
"_id": "67bf1808b91e7e6477d92c1f",
"hidden": false,
"name": "Runsheng Chen",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-28T15:16:28.041Z",
"user": {
"_id": "63629b9f2a84d82a8c8feb32",
"avatarUrl": "/avatars/8484b5bf8311b28249757729b1ce80f8.svg",
"fullname": "Chen",
"isPro": false,
"type": "user",
"user": "Runsheng"
}
},
{
"_id": "67bf1808b91e7e6477d92c20",
"hidden": false,
"name": "Lea Hirlimann",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-28T15:16:18.398Z",
"user": {
"_id": "658559148615630cb3ec5b6b",
"avatarUrl": "/avatars/dd804ca277e6b19903bb550cc167ba4a.svg",
"fullname": "Lea Hirlimann",
"isPro": false,
"type": "user",
"user": "hirlimann"
}
},
{
"_id": "67bf1808b91e7e6477d92c21",
"hidden": false,
"name": "Ahmad Dawar Hakimi",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-28T15:16:11.693Z",
"user": {
"_id": "62502669d2d191ac43320ade",
"avatarUrl": "/avatars/7997e9b2012059edb22b745c3b737481.svg",
"fullname": "Ahmad Dawar Hakimi",
"isPro": false,
"type": "user",
"user": "adhakimi"
}
},
{
"_id": "67bf1808b91e7e6477d92c22",
"hidden": false,
"name": "Mingyang Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bf1808b91e7e6477d92c23",
"hidden": false,
"name": "Amir Hossein Kargaran",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-26T15:37:07.932Z",
"user": {
"_id": "61bf84c8ca59d6d196a1b4e8",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/61bf84c8ca59d6d196a1b4e8/L_NvUwlMYcye9X35z6f7e.jpeg",
"fullname": "Amir Hossein Kargaran",
"isPro": false,
"type": "user",
"user": "kargaranamir"
}
},
{
"_id": "67bf1808b91e7e6477d92c24",
"hidden": false,
"name": "Sascha Rothe",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bf1808b91e7e6477d92c25",
"hidden": false,
"name": "François Yvon",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-28T15:16:57.343Z",
"user": {
"_id": "62ab10f04bd2ebf5dbad205c",
"avatarUrl": "/avatars/65356b3b057159cc67a86efb26b53486.svg",
"fullname": "François Yvon",
"isPro": false,
"type": "user",
"user": "fyvo"
}
},
{
"_id": "67bf1808b91e7e6477d92c26",
"hidden": false,
"name": "Hinrich Schütze",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-24T17:33:18 | On Relation-Specific Neurons in Large Language Models | In large language models (LLMs), certain neurons can store distinct pieces of
knowledge learned during pretraining. While knowledge typically appears as a
combination of relations and entities, it remains unclear whether some neurons
focus on a relation itself -- independent of any entity. We hypothesize such
neurons detect a relation in the input text and guide generation involving such
a relation. To investigate this, we study the Llama-2 family on a chosen set of
relations with a statistics-based method. Our experiments demonstrate the
existence of relation-specific neurons. We measure the effect of selectively
deactivating candidate neurons specific to relation $r$ on the LLM's ability to
handle (1) facts whose relation is $r$ and (2) facts whose relation is a
different relation $r' \neq r$. With respect to their capacity for encoding
relation information, we give evidence for the following three properties of
relation-specific neurons. (i) Neuron cumulativity. The neurons for
$r$ present a cumulative effect so that deactivating a larger portion of them
results in the degradation of more facts in $r$. (ii) Neuron
versatility. Neurons can be shared across multiple closely related as well as
less related relations. Some relation neurons transfer across languages.
(iii) Neuron interference. Deactivating neurons specific to one
relation can improve LLM generation performance for facts of other relations.
We will make our code publicly available at
https://github.com/cisnlp/relation-specific-neurons. | 6 | 67bf1808b91e7e6477d92c55 | null | null |
|
2025-02-28T08:46:19.110000 | Guardians of the Agentic System: Preventing Many Shots Jailbreak with Agentic System | 2 | {
"_id": "653425f4ed74ace63395826c",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/no-auth/QJlB0DOEel6U9b-95wasK.png",
"followerCount": 3,
"fullname": "Saikat Barua",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "AlignAI",
"type": "user"
} | true | [
"https://cdn-uploads.huggingface.co/production/uploads/653425f4ed74ace63395826c/czZ9fF4yF6yz3E89YtU6e.jpeg"
] | 2502.16750 | [
{
"_id": "67c1b63744d780e60d7c5274",
"hidden": false,
"name": "Saikat Barua",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-28T13:24:57.086Z",
"user": {
"_id": "653425f4ed74ace63395826c",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/no-auth/QJlB0DOEel6U9b-95wasK.png",
"fullname": "Saikat Barua",
"isPro": false,
"type": "user",
"user": "AlignAI"
}
},
{
"_id": "67c1b63744d780e60d7c5275",
"hidden": false,
"name": "Mostafizur Rahman",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c1b63744d780e60d7c5276",
"hidden": false,
"name": "Md Jafor Sadek",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-28T15:21:48.563Z",
"user": {
"_id": "63c99ab3dfac8071d01b61d4",
"avatarUrl": "/avatars/9151241b8af4d64d7771740587d1b7a5.svg",
"fullname": "MD Jafor Sadek Khan",
"isPro": false,
"type": "user",
"user": "Jafor"
}
},
{
"_id": "67c1b63744d780e60d7c5277",
"hidden": false,
"name": "Rafiul Islam",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c1b63744d780e60d7c5278",
"hidden": false,
"name": "Shehnaz Khaled",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67c1b63744d780e60d7c5279",
"hidden": false,
"name": "Ahmedul Kabir",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-23T23:35:15 | Guardians of the Agentic System: Preventing Many Shots Jailbreak with
Agentic System | The autonomous AI agents using large language models can create undeniable
values in all span of the society but they face security threats from
adversaries that warrants immediate protective solutions because trust and
safety issues arise. Considering the many-shot jailbreaking and deceptive
alignment as some of the main advanced attacks, that cannot be mitigated by the
static guardrails used during the supervised training, points out a crucial
research priority for real world robustness. The combination of static
guardrails in dynamic multi-agent system fails to defend against those attacks.
We intend to enhance security for LLM-based agents through the development of
new evaluation frameworks which identify and counter threats for safe
operational deployment. Our work uses three examination methods to detect rogue
agents through a Reverse Turing Test and analyze deceptive alignment through
multi-agent simulations and develops an anti-jailbreaking system by testing it
with GEMINI 1.5 pro and llama-3.3-70B, deepseek r1 models using tool-mediated
adversarial scenarios. The detection capabilities are strong, such as 94%
accuracy for GEMINI 1.5 pro yet the system suffers persistent vulnerabilities
when under long attacks as prompt length increases attack success rates (ASR)
and diversity metrics become ineffective in prediction while revealing multiple
complex system faults. The findings demonstrate the necessity of adopting
flexible security systems based on active monitoring that can be performed by
the agents themselves together with adaptable interventions by system admin as
the current models can create vulnerabilities that can lead to the unreliable
and vulnerable system. So, in our work, we try to address such situations and
propose a comprehensive framework to counteract the security issues. | 10 | 67c1b63a44d780e60d7c5317 | null | null |
End of preview.
Weekly snapshots of Models, Datasets and Papers on the HF Hub
Sample code
To query the dataset and see which snapshots are available, use e.g.:
import json
from datasets import load_dataset
from huggingface_hub import HfApi

REPO_ID = "hfmlsoc/hub_weekly_snapshots"

hf_api = HfApi()
all_files = hf_api.list_repo_files(repo_id=REPO_ID, repo_type="dataset")

# Group the parquet snapshot files by repo type (the top-level directory)
repo_type_to_snapshots = {}
for repo_fpath in all_files:
    if ".parquet" in repo_fpath:
        repo_type = repo_fpath.split("/")[0]
        repo_type_to_snapshots[repo_type] = repo_type_to_snapshots.get(repo_type, []) + [repo_fpath]

# Sort each repo type's snapshots chronologically by the date component of the path
for repo_type in repo_type_to_snapshots:
    repo_type_to_snapshots[repo_type] = sorted(repo_type_to_snapshots[repo_type], key=lambda x: x.split("/")[1])

repo_type_to_snapshots
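For orientation, the resulting mapping groups the snapshot files by repo type and sorts them by date. An illustrative (not verified) excerpt might look like:

{
    "datasets": ["datasets/2025-01-01/datasets.parquet", "datasets/2025-01-08/datasets.parquet"],
    "models": ["models/2025-01-01/models.parquet"],
    "papers": ["papers/2025-01-01/papers.parquet"],
}

Only the datasets/<date>/datasets.parquet pattern is confirmed by the loading example below; the models and papers paths are assumed to follow the same convention.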
You can then load a specific snapshot as e.g.:
date = "2025-01-01"
snapshot = load_dataset(REPO_ID, data_files={date.replace("-",""): f"datasets/{date}/datasets.parquet"})
snapshot
Returning:
DatasetDict({
    20250101: Dataset({
        features: ['_id', 'id', 'author', 'cardData', 'disabled', 'gated', 'lastModified', 'likes', 'trendingScore', 'private', 'sha', 'description', 'downloads', 'tags', 'createdAt', 'key', 'paperswithcode_id', 'citation'],
        num_rows: 276421
    })
})
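The same call pattern should extend to the other repo types. Below is a minimal sketch for loading a papers snapshot, assuming papers files follow the same <repo_type>/<date>/<repo_type>.parquet layout; since the path and schema here are assumptions, inspect .features before relying on column names:

date = "2025-01-01"
papers = load_dataset(REPO_ID, data_files={date.replace("-", ""): f"papers/{date}/papers.parquet"})  # assumed path
print(papers["20250101"].features)  # check the actual schema first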
Sample analysis of top datasets
To look at the 10 most liked datasets as of January 1st 2025, you can then run:
# Keep a few fields per row; cardData is stored as a JSON-formatted string
[{
    "id": row["id"],
    "tags": json.loads(row["cardData"]).get("tags", []),
    "tasks": json.loads(row["cardData"]).get("task_categories", []),
    "likes": row["likes"],
} for row in snapshot["20250101"].sort("likes", reverse=True).select(range(10))]
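Here sort("likes", reverse=True) orders the whole snapshot by like count and select(range(10)) keeps the first ten rows, so the comprehension only ever reads those ten records.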
Most of the user-maintained metadata for Hub repositories is stored in the cardData field, which is saved as a JSON-formatted string
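Because some repositories may have an empty or missing cardData field, a small defensive wrapper (a sketch, not an official API) avoids json.loads errors; the license key below is only an illustrative cardData field:

def parse_card_data(row):
    """Parse the JSON-formatted cardData field, tolerating missing or invalid values."""
    raw = row.get("cardData")
    if not raw:
        return {}
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        return {}

# Example: collect the (illustrative) license field for the ten most-liked datasets
top10 = snapshot["20250101"].sort("likes", reverse=True).select(range(10))
licenses = [parse_card_data(row).get("license") for row in top10]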