publishedAt | title | thumbnail | numComments | submittedBy | isAuthorParticipating | mediaUrls | paper_id | paper_authors | paper_publishedAt | paper_title | paper_summary | paper_upvotes | paper_discussionId | paper_projectPage | paper_githubRepo |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2025-02-18T00:06:55.671000 | video-SALMONN-o1: Reasoning-enhanced Audio-visual Large Language Model | 2 | {
"_id": "60f1abe7544c2adfd699860c",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"followerCount": 6280,
"fullname": "AK",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "akhaliq",
"type": "user"
} | false | null | 2502.11775 | [
{
"_id": "67b4147f7721b4fe4d2bd466",
"hidden": false,
"name": "Guangzhi Sun",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-19T15:50:36.976Z",
"user": {
"_id": "64ab544489aa67e4a2505eeb",
"avatarUrl": "/avatars/f1a9def3afbec2f8b89ef4450770d67e.svg",
"fullname": "Guangzhi Sun",
"isPro": false,
"type": "user",
"user": "BrianatCambridge"
}
},
{
"_id": "67b4147f7721b4fe4d2bd467",
"hidden": false,
"name": "Yudong Yang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-19T15:50:43.942Z",
"user": {
"_id": "66ea99f751c3bcd917ae1265",
"avatarUrl": "/avatars/17c4206c39d4da49d2516422c392e0ee.svg",
"fullname": "Yudong Yang",
"isPro": false,
"type": "user",
"user": "yangyudong2020"
}
},
{
"_id": "67b4147f7721b4fe4d2bd468",
"hidden": false,
"name": "Jimin Zhuang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-19T15:50:50.168Z",
"user": {
"_id": "66825d0e971dff9d3a24d912",
"avatarUrl": "/avatars/024b704a1f8c3aec20474e162ac326ad.svg",
"fullname": "Jimin Zhuang",
"isPro": false,
"type": "user",
"user": "OctaAcid"
}
},
{
"_id": "67b4147f7721b4fe4d2bd469",
"hidden": false,
"name": "Changli Tang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-19T15:50:57.247Z",
"user": {
"_id": "63770389cdcc1bf630870758",
"avatarUrl": "/avatars/6bc7c602e79688bc42a4c79ecf5b6d2d.svg",
"fullname": "Changli Tang",
"isPro": false,
"type": "user",
"user": "Changli"
}
},
{
"_id": "67b4147f7721b4fe4d2bd46a",
"hidden": false,
"name": "Yixuan Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b4147f7721b4fe4d2bd46b",
"hidden": false,
"name": "Wei Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b4147f7721b4fe4d2bd46c",
"hidden": false,
"name": "Zejun MA",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b4147f7721b4fe4d2bd46d",
"hidden": false,
"name": "Chao Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-17T13:07:40 | video-SALMONN-o1: Reasoning-enhanced Audio-visual Large Language Model | While recent advancements in reasoning optimization have significantly
enhanced the capabilities of large language models (LLMs), existing efforts to
improve reasoning have been limited to solving mathematical problems and
focusing on visual graphical inputs, neglecting broader applications in general
video understanding. This paper proposes video-SALMONN-o1, the first open-source
reasoning-enhanced audio-visual LLM designed for general video understanding
tasks. To enhance its reasoning abilities, we develop a reasoning-intensive
dataset featuring challenging audio-visual questions with step-by-step
solutions. We also propose process direct preference optimization (pDPO), which
leverages contrastive step selection to achieve efficient step-level reward
modelling tailored for multimodal inputs. Additionally, we introduce RivaBench,
the first reasoning-intensive video understanding benchmark, featuring over
4,000 high-quality, expert-curated question-answer pairs across scenarios such
as standup comedy, academic presentations, and synthetic video detection.
video-SALMONN-o1 achieves 3-8% accuracy improvements over the LLaVA-OneVision
baseline across different video reasoning benchmarks. Besides, pDPO achieves
6-8% improvements compared to the supervised fine-tuning model on RivaBench.
Enhanced reasoning also equips video-SALMONN-o1 with zero-shot synthetic video detection
capabilities. | 8 | 67b414827721b4fe4d2bd534 | null | null |
|
2025-02-17T23:51:50.821000 | Talk Structurally, Act Hierarchically: A Collaborative Framework for LLM Multi-Agent Systems | 2 | {
"_id": "60f1abe7544c2adfd699860c",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"followerCount": 6280,
"fullname": "AK",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "akhaliq",
"type": "user"
} | false | null | 2502.11098 | [
{
"_id": "67b411e45e634139c0d86a1e",
"hidden": false,
"name": "Zhao Wang",
"status": "extracted_pending",
"statusLastChangedAt": "2025-02-28T05:58:41.057Z",
"user": {
"_id": "67c150247372d4d6c4aa5f2d",
"avatarUrl": "/avatars/c847859068ad868b3313e14fa850f3b6.svg",
"fullname": "wang",
"isPro": false,
"type": "user",
"user": "wang1946may7"
}
},
{
"_id": "67b411e45e634139c0d86a1f",
"hidden": false,
"name": "Sota Moriyama",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b411e45e634139c0d86a20",
"hidden": false,
"name": "Wei-Yao Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b411e45e634139c0d86a21",
"hidden": false,
"name": "Briti Gangopadhyay",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b411e45e634139c0d86a22",
"hidden": false,
"name": "Shingo Takamatsu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-19T15:47:05.161Z",
"user": {
"_id": "630776fafd79b417f1bd6268",
"avatarUrl": "/avatars/7712ec70dc56124bd123ba48db11e239.svg",
"fullname": "Shingo Takamatsu",
"isPro": false,
"type": "user",
"user": "ts0184"
}
}
] | 2025-02-16T12:26:58 | Talk Structurally, Act Hierarchically: A Collaborative Framework for LLM
Multi-Agent Systems | Recent advancements in LLM-based multi-agent (LLM-MA) systems have shown
promise, yet significant challenges remain in managing communication and
refinement when agents collaborate on complex tasks. In this paper, we propose
Talk Structurally, Act Hierarchically (TalkHier), a novel framework
that introduces a structured communication protocol for context-rich exchanges
and a hierarchical refinement system to address issues such as incorrect
outputs, falsehoods, and biases. TalkHier surpasses various types of SoTA
baselines, including an inference scaling model (OpenAI-o1), open-source
multi-agent models (e.g., AgentVerse), and majority voting strategies on current LLM and
single-agent baselines (e.g., ReAct, GPT4o), across diverse tasks, including
open-domain question answering, domain-specific selective questioning, and
practical advertisement text generation. These results highlight its potential
to set a new standard for LLM-MA systems, paving the way for more effective,
adaptable, and collaborative multi-agent frameworks. The code is available at
https://github.com/sony/talkhier. | 12 | 67b411e55e634139c0d86a4c | null | null |
|
2025-02-17T23:37:16.770000 | One Example Shown, Many Concepts Known! Counterexample-Driven Conceptual Reasoning in Mathematical LLMs | 2 | {
"_id": "60f1abe7544c2adfd699860c",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"followerCount": 6280,
"fullname": "AK",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "akhaliq",
"type": "user"
} | false | null | 2502.10454 | [
{
"_id": "67b40e56bffd44cc85976ecd",
"hidden": false,
"name": "Yinghui Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b40e56bffd44cc85976ece",
"hidden": false,
"name": "Jiayi Kuang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b40e56bffd44cc85976ecf",
"hidden": false,
"name": "Haojing Huang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b40e56bffd44cc85976ed0",
"hidden": false,
"name": "Zhikun Xu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b40e56bffd44cc85976ed1",
"hidden": false,
"name": "Xinnian Liang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b40e56bffd44cc85976ed2",
"hidden": false,
"name": "Yi Yu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b40e56bffd44cc85976ed3",
"hidden": false,
"name": "Wenlian Lu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b40e56bffd44cc85976ed4",
"hidden": false,
"name": "Yangning Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b40e56bffd44cc85976ed5",
"hidden": false,
"name": "Xiaoyu Tan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b40e56bffd44cc85976ed6",
"hidden": false,
"name": "Chao Qu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b40e56bffd44cc85976ed7",
"hidden": false,
"name": "Ying Shen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b40e56bffd44cc85976ed8",
"hidden": false,
"name": "Hai-Tao Zheng",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b40e56bffd44cc85976ed9",
"hidden": false,
"name": "Philip S. Yu",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-12T02:01:10 | One Example Shown, Many Concepts Known! Counterexample-Driven Conceptual
Reasoning in Mathematical LLMs | Leveraging mathematical Large Language Models (LLMs) for proof generation is
a fundamental topic in LLM research. We argue that the ability of current LLMs
to prove statements largely depends on whether they have encountered the
relevant proof process during training. This reliance limits their deeper
understanding of mathematical theorems and related concepts. Inspired by the
pedagogical method of "proof by counterexamples" commonly used in human
mathematics education, our work aims to enhance LLMs' ability to conduct
mathematical reasoning and proof through counterexamples. Specifically, we
manually create a high-quality, university-level mathematical benchmark,
CounterMATH, which requires LLMs to prove mathematical statements by providing
counterexamples, thereby assessing their grasp of mathematical concepts.
Additionally, we develop a data engineering framework to automatically obtain
training data for further model improvement. Extensive experiments and detailed
analyses demonstrate that CounterMATH is challenging, indicating that LLMs,
such as OpenAI o1, have insufficient counterexample-driven proof capabilities.
Moreover, our exploration into model training reveals that strengthening LLMs'
counterexample-driven conceptual reasoning abilities is crucial for improving
their overall mathematical capabilities. We believe that our work offers new
perspectives to the community of mathematical LLMs. | 7 | 67b40e57bffd44cc85976f0e | null | null |
|
2025-02-17T23:30:53.097000 | Diffusion-Sharpening: Fine-tuning Diffusion Models with Denoising Trajectory Sharpening | 3 | {
"_id": "653e5d31ffd60206c8b64bb5",
"avatarUrl": "/avatars/5076795722ec1f9e031654f301d30e8f.svg",
"followerCount": 13,
"fullname": "Xinchen Zhang",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "comin",
"type": "user"
} | true | null | 2502.12146 | [
{
"_id": "67b40ce4d3c5f50aa9b71df5",
"hidden": false,
"name": "Ye Tian",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b40ce4d3c5f50aa9b71df6",
"hidden": false,
"name": "Ling Yang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-20T09:37:40.151Z",
"user": {
"_id": "64fde4e252e82dd432b74ce9",
"avatarUrl": "/avatars/061a69d858b86d1600be916122cae7fc.svg",
"fullname": "Ling Yang",
"isPro": false,
"type": "user",
"user": "Lingaaaaaaa"
}
},
{
"_id": "67b40ce4d3c5f50aa9b71df7",
"hidden": false,
"name": "Xinchen Zhang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-18T09:31:29.697Z",
"user": {
"_id": "653e5d31ffd60206c8b64bb5",
"avatarUrl": "/avatars/5076795722ec1f9e031654f301d30e8f.svg",
"fullname": "Xinchen Zhang",
"isPro": false,
"type": "user",
"user": "comin"
}
},
{
"_id": "67b40ce4d3c5f50aa9b71df8",
"hidden": false,
"name": "Yunhai Tong",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b40ce4d3c5f50aa9b71df9",
"hidden": false,
"name": "Mengdi Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b40ce4d3c5f50aa9b71dfa",
"hidden": false,
"name": "Bin Cui",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-17T18:57:26 | Diffusion-Sharpening: Fine-tuning Diffusion Models with Denoising
Trajectory Sharpening | We propose Diffusion-Sharpening, a fine-tuning approach that enhances
downstream alignment by optimizing sampling trajectories. Existing RL-based
fine-tuning methods focus on single training timesteps and neglect
trajectory-level alignment, while recent sampling trajectory optimization
methods incur significant inference NFE costs. Diffusion-Sharpening overcomes
this by using a path integral framework to select optimal trajectories during
training, leveraging reward feedback, and amortizing inference costs. Our
method demonstrates superior training efficiency with faster convergence and
the best inference efficiency without requiring additional NFEs. Extensive
experiments show that Diffusion-Sharpening outperforms RL-based fine-tuning
methods (e.g., Diffusion-DPO) and sampling trajectory optimization methods
(e.g., Inference Scaling) across diverse metrics including text alignment,
compositional capabilities, and human preferences, offering a scalable and
efficient solution for future diffusion model fine-tuning. Code:
https://github.com/Gen-Verse/Diffusion-Sharpening | 16 | 67b40ce8d3c5f50aa9b71f9a | null | null |
|
2025-02-17T23:29:29.396000 | HermesFlow: Seamlessly Closing the Gap in Multimodal Understanding and Generation | 2 | {
"_id": "653e5d31ffd60206c8b64bb5",
"avatarUrl": "/avatars/5076795722ec1f9e031654f301d30e8f.svg",
"followerCount": 13,
"fullname": "Xinchen Zhang",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "comin",
"type": "user"
} | true | null | 2502.12148 | [
{
"_id": "67b40c8cdb88dfd19ab917f3",
"hidden": false,
"name": "Ling Yang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-20T09:37:43.227Z",
"user": {
"_id": "64fde4e252e82dd432b74ce9",
"avatarUrl": "/avatars/061a69d858b86d1600be916122cae7fc.svg",
"fullname": "Ling Yang",
"isPro": false,
"type": "user",
"user": "Lingaaaaaaa"
}
},
{
"_id": "67b40c8cdb88dfd19ab917f4",
"hidden": false,
"name": "Xinchen Zhang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-18T09:31:31.841Z",
"user": {
"_id": "653e5d31ffd60206c8b64bb5",
"avatarUrl": "/avatars/5076795722ec1f9e031654f301d30e8f.svg",
"fullname": "Xinchen Zhang",
"isPro": false,
"type": "user",
"user": "comin"
}
},
{
"_id": "67b40c8cdb88dfd19ab917f5",
"hidden": false,
"name": "Ye Tian",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b40c8cdb88dfd19ab917f6",
"hidden": false,
"name": "Chenming Shang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b40c8cdb88dfd19ab917f7",
"hidden": false,
"name": "Minghao Xu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b40c8cdb88dfd19ab917f8",
"hidden": false,
"name": "Wentao Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b40c8cdb88dfd19ab917f9",
"hidden": false,
"name": "Bin Cui",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-17T18:57:51 | HermesFlow: Seamlessly Closing the Gap in Multimodal Understanding and
Generation | The remarkable success of the autoregressive paradigm has driven significant
advances in Multimodal Large Language Models (MLLMs), with powerful models
like Show-o, Transfusion and Emu3 achieving notable progress in unified image
understanding and generation. For the first time, we uncover a common
phenomenon: the understanding capabilities of MLLMs are typically stronger than
their generative capabilities, with a significant gap between the two. Building
on this insight, we propose HermesFlow, a simple yet general framework designed
to seamlessly bridge the gap between understanding and generation in MLLMs.
Specifically, we take the homologous data as input to curate homologous
preference data of both understanding and generation. Through Pair-DPO and
self-play iterative optimization, HermesFlow effectively aligns multimodal
understanding and generation using homologous preference data. Extensive
experiments demonstrate the significant superiority of our approach over prior
methods, particularly in narrowing the gap between multimodal understanding and
generation. These findings highlight the potential of HermesFlow as a general
alignment framework for next-generation multimodal foundation models. Code:
https://github.com/Gen-Verse/HermesFlow | 16 | 67b40c8edb88dfd19ab9183f | null | null |
|
2025-02-17T23:06:03.562000 | SAFE-SQL: Self-Augmented In-Context Learning with Fine-grained Example Selection for Text-to-SQL | 2 | {
"_id": "63f6f245e94ed998c46316df",
"avatarUrl": "/avatars/9c0ec8682d4a85b96d2180602b1bbe6c.svg",
"followerCount": 3,
"fullname": "ingeolbaek",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "ingeol",
"type": "user"
} | false | null | 2502.11438 | [
{
"_id": "67b406993d0f54ab381594f5",
"hidden": false,
"name": "Jimin Lee",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b406993d0f54ab381594f6",
"hidden": false,
"name": "Ingeol Baek",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b406993d0f54ab381594f7",
"hidden": false,
"name": "Byeongjeong Kim",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b406993d0f54ab381594f8",
"hidden": false,
"name": "Hwanhee Lee",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-17T04:52:24 | SAFE-SQL: Self-Augmented In-Context Learning with Fine-grained Example
Selection for Text-to-SQL | Text-to-SQL aims to convert natural language questions into executable SQL
queries. While previous approaches, such as skeleton-masked selection, have
demonstrated strong performance by retrieving similar training examples to
guide large language models (LLMs), they struggle in real-world scenarios where
such examples are unavailable. To overcome this limitation, we propose
Self-Augmentation in-context learning with Fine-grained Example selection for
Text-to-SQL (SAFE-SQL), a novel framework that improves SQL generation by
generating and filtering self-augmented examples. SAFE-SQL first prompts an LLM
to generate multiple Text-to-SQL examples relevant to the test input. Then
SAFE-SQL filters these examples through three relevance assessments,
constructing high-quality in-context learning examples. Using self-generated
examples, SAFE-SQL surpasses previous zero-shot and few-shot Text-to-SQL
frameworks, achieving higher execution accuracy. Notably, our approach provides
additional performance gains in extra hard and unseen scenarios, where
conventional methods often fail. | 7 | 67b4069a3d0f54ab38159520 | null | null |
|
2025-02-17T22:43:51.555000 | CRANE: Reasoning with constrained LLM generation | 2 | {
"_id": "65e7bb35e5e78134ab049942",
"avatarUrl": "/avatars/3c0972f0d59e51ebb5c218ee736d4458.svg",
"followerCount": 2,
"fullname": "Tarun Suresh",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "tarsur909",
"type": "user"
} | true | null | 2502.09061 | [
{
"_id": "67b401de3995f28d45c212d6",
"hidden": false,
"name": "Debangshu Banerjee",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-19T15:24:10.076Z",
"user": {
"_id": "675f9306e0b5cb5bc241e8cc",
"avatarUrl": "/avatars/566385d2049b353ea38ddbb9f9105dd0.svg",
"fullname": "Debangshu Banerjee",
"isPro": false,
"type": "user",
"user": "debangshubanerjee"
}
},
{
"_id": "67b401de3995f28d45c212d7",
"hidden": false,
"name": "Tarun Suresh",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-19T15:24:17.773Z",
"user": {
"_id": "65e7bb35e5e78134ab049942",
"avatarUrl": "/avatars/3c0972f0d59e51ebb5c218ee736d4458.svg",
"fullname": "Tarun Suresh",
"isPro": false,
"type": "user",
"user": "tarsur909"
}
},
{
"_id": "67b401de3995f28d45c212d8",
"hidden": false,
"name": "Shubham Ugare",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-19T15:24:24.384Z",
"user": {
"_id": "656c29ed271c5c4e3308a008",
"avatarUrl": "/avatars/9b58072230baf3ac941d476d69356fda.svg",
"fullname": "Shubham Ugare",
"isPro": false,
"type": "user",
"user": "shubhamugare"
}
},
{
"_id": "67b401de3995f28d45c212d9",
"hidden": false,
"name": "Sasa Misailovic",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b401de3995f28d45c212da",
"hidden": false,
"name": "Gagandeep Singh",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-13T08:23:42 | CRANE: Reasoning with constrained LLM generation | Code generation, symbolic math reasoning, and other tasks require LLMs to
produce outputs that are both syntactically and semantically correct.
Constrained LLM generation is a promising direction to enforce adherence to
formal grammar, but prior works have empirically observed that strict
enforcement of formal constraints often diminishes the reasoning capabilities
of LLMs. In this work, we first provide a theoretical explanation for why
constraining LLM outputs to very restrictive grammars that only allow
syntactically valid final answers reduces the reasoning capabilities of the
model. Second, we demonstrate that by augmenting the output grammar with
carefully designed additional rules, it is always possible to preserve the
reasoning capabilities of the LLM while ensuring syntactic and semantic
correctness in its outputs. Building on these theoretical insights, we propose
a reasoning-augmented constrained decoding algorithm, CRANE, which effectively
balances the correctness of constrained generation with the flexibility of
unconstrained generation. Experiments on multiple open-source LLMs and
benchmarks show that CRANE significantly outperforms both state-of-the-art
constrained decoding strategies and standard unconstrained decoding, showing up
to a 10 percentage point accuracy improvement over baselines on the challenging symbolic
reasoning benchmarks GSM-symbolic and FOLIO. | 18 | 67b401e03995f28d45c21354 | null | null |
|
2025-02-17T22:10:49.900000 | Cuckoo: An IE Free Rider Hatched by Massive Nutrition in LLM's Nest | 2 | {
"_id": "64323dd503d81fa4d26deaf9",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/64323dd503d81fa4d26deaf9/x3ES8VXEZJljxDWvFWaAf.png",
"followerCount": 7,
"fullname": "Letian Peng",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "KomeijiForce",
"type": "user"
} | true | null | 2502.11275 | [
{
"_id": "67b3fa2862838a378b21860d",
"hidden": false,
"name": "Letian Peng",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-19T09:04:47.684Z",
"user": {
"_id": "64323dd503d81fa4d26deaf9",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/64323dd503d81fa4d26deaf9/x3ES8VXEZJljxDWvFWaAf.png",
"fullname": "Letian Peng",
"isPro": false,
"type": "user",
"user": "KomeijiForce"
}
},
{
"_id": "67b3fa2862838a378b21860e",
"hidden": false,
"name": "Zilong Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b3fa2862838a378b21860f",
"hidden": false,
"name": "Feng Yao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b3fa2862838a378b218610",
"hidden": false,
"name": "Jingbo Shang",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-16T21:32:20 | Cuckoo: An IE Free Rider Hatched by Massive Nutrition in LLM's Nest | Massive high-quality data, both pre-training raw texts and post-training
annotations, have been carefully prepared to incubate advanced large language
models (LLMs). In contrast, for information extraction (IE), pre-training data,
such as BIO-tagged sequences, are hard to scale up. We show that IE models can
act as free riders on LLM resources by reframing next-token prediction
into extraction for tokens already present in the context. Specifically,
our proposed next tokens extraction (NTE) paradigm learns a versatile IE model,
Cuckoo, with 102.6M extractive data converted from LLM's pre-training
and post-training data. Under the few-shot setting, Cuckoo adapts effectively
to traditional and complex instruction-following IE with better performance
than existing pre-trained IE models. As a free rider, Cuckoo can naturally
evolve with the ongoing advancements in LLM data preparation, benefiting from
improvements in LLM training pipelines without additional manual effort. | 6 | 67b3fa2962838a378b21867b | null | null |
|
2025-02-17T22:05:54.047000 | Building A Proof-Oriented Programmer That Is 64% Better Than GPT-4o Under Data Scarcity | 2 | {
"_id": "642b8add48f67b6f21d4eb20",
"avatarUrl": "/avatars/f15025b39248daa19a18e6ccb2eaaa0c.svg",
"followerCount": 1,
"fullname": "Dylan",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "shizhuo2",
"type": "user"
} | false | null | 2502.11901 | [
{
"_id": "67b3f8cc1bfe04e82830b752",
"hidden": false,
"name": "Dylan Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b3f8cc1bfe04e82830b753",
"hidden": false,
"name": "Justin Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b3f8cc1bfe04e82830b754",
"hidden": false,
"name": "Tianran Sun",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-17T15:24:11 | Building A Proof-Oriented Programmer That Is 64% Better Than GPT-4o
Under Data Scarcity | Existing LMs struggle with proof-oriented programming due to data scarcity,
which manifests in two key ways: (1) a lack of sufficient corpora for
proof-oriented programming languages such as F*, and (2) the absence of
large-scale, project-level proof-oriented implementations that can teach the
model the intricate reasoning process when performing proof-oriented
programming. We present the first work on synthetic data augmentation for
project-level proof-oriented programming, covering both generation and repair. Our method
addresses data scarcity by synthesizing basic proof-oriented programming
problems for proficiency in that language; incorporating diverse coding data
for reasoning capability elicitation and creating new proofs and repair data
within existing repositories. This approach enables language models to both
synthesize and repair proofs for function- and repository-level code. We show
that our fine-tuned 14B-parameter model, PoPilot, can exceed the performance of
models that outperform GPT-4o in project-level proof-oriented programming by a
64% relative margin, and can improve GPT-4o's performance by 54% by repairing
its outputs, compared to GPT-4o's own self-repair. | 6 | 67b3f8cd1bfe04e82830b77f | null | null |
|
2025-02-17T17:09:38.653000 | The Danger of Overthinking: Examining the Reasoning-Action Dilemma in Agentic Tasks | 2 | {
"_id": "652a656d1a3250bbfe3bb92d",
"avatarUrl": "/avatars/a1c25150d55c493edd9a7f81287fc449.svg",
"followerCount": null,
"fullname": "Alejandro Cuadron Lafuente",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "AlexCuadron",
"type": "user"
} | true | null | 2502.08235 | [
{
"_id": "67b078cb1c879c0cbb785d5f",
"hidden": false,
"name": "Alejandro Cuadron",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-18T09:33:36.741Z",
"user": {
"_id": "652a656d1a3250bbfe3bb92d",
"avatarUrl": "/avatars/a1c25150d55c493edd9a7f81287fc449.svg",
"fullname": "Alejandro Cuadron Lafuente",
"isPro": false,
"type": "user",
"user": "AlexCuadron"
}
},
{
"_id": "67b078cb1c879c0cbb785d60",
"hidden": false,
"name": "Dacheng Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b078cb1c879c0cbb785d61",
"hidden": false,
"name": "Wenjie Ma",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b078cb1c879c0cbb785d62",
"hidden": false,
"name": "Xingyao Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b078cb1c879c0cbb785d63",
"hidden": false,
"name": "Yichuan Wang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-18T09:33:40.048Z",
"user": {
"_id": "626e3449e7914f0d5ea78ad1",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/626e3449e7914f0d5ea78ad1/pVzdmdPMpNcxuj94qiIvB.jpeg",
"fullname": "Yichuan",
"isPro": false,
"type": "user",
"user": "Chrisyichuan"
}
},
{
"_id": "67b078cb1c879c0cbb785d64",
"hidden": false,
"name": "Siyuan Zhuang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b078cb1c879c0cbb785d65",
"hidden": false,
"name": "Shu Liu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b078cb1c879c0cbb785d66",
"hidden": false,
"name": "Luis Gaspar Schroeder",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b078cb1c879c0cbb785d67",
"hidden": false,
"name": "Tian Xia",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b078cb1c879c0cbb785d68",
"hidden": false,
"name": "Huanzhi Mao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b078cb1c879c0cbb785d69",
"hidden": false,
"name": "Nicholas Thumiger",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b078cb1c879c0cbb785d6a",
"hidden": false,
"name": "Aditya Desai",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b078cb1c879c0cbb785d6b",
"hidden": false,
"name": "Ion Stoica",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b078cb1c879c0cbb785d6c",
"hidden": false,
"name": "Ana Klimovic",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b078cb1c879c0cbb785d6d",
"hidden": false,
"name": "Graham Neubig",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b078cb1c879c0cbb785d6e",
"hidden": false,
"name": "Joseph E. Gonzalez",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-12T09:23:26 | The Danger of Overthinking: Examining the Reasoning-Action Dilemma in
Agentic Tasks | Large Reasoning Models (LRMs) represent a breakthrough in AI problem-solving
capabilities, but their effectiveness in interactive environments can be
limited. This paper introduces and analyzes overthinking in LRMs, a phenomenon
where models favor extended internal reasoning chains over environmental
interaction. Through experiments on software engineering tasks using SWE Bench
Verified, we observe three recurring patterns: Analysis Paralysis, Rogue
Actions, and Premature Disengagement. We propose a framework to study these
behaviors, which correlates with human expert assessments, and use it to analyze 4018
trajectories. We observe that higher overthinking scores correlate with
decreased performance, with reasoning models exhibiting stronger tendencies
toward overthinking compared to non-reasoning models. Our analysis reveals that
simple efforts to mitigate overthinking in agentic environments, such as
selecting the solution with the lower overthinking score, can improve model
performance by almost 30% while reducing computational costs by 43%. These
results suggest that mitigating overthinking has strong practical implications.
We suggest that by leveraging native function-calling capabilities and
selective reinforcement learning, overthinking tendencies could be mitigated. We
also open-source our evaluation framework and dataset to facilitate research in
this direction at https://github.com/AlexCuadron/Overthinking. | 54 | 67b078cc1c879c0cbb785dbb | null | null |
|
2025-02-17T12:27:43.231000 | Selective Self-to-Supervised Fine-Tuning for Generalization in Large Language Models | 2 | {
"_id": "638324f862badff43269e588",
"avatarUrl": "/avatars/907a39a9b44fc8b7f3fad35858b01fb7.svg",
"followerCount": 6,
"fullname": "Asaf Yehudai",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "Asaf-Yehudai",
"type": "user"
} | true | null | 2502.08130 | [
{
"_id": "67b3716bab1b992c7f4599da",
"hidden": false,
"name": "Sonam Gupta",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b3716bab1b992c7f4599db",
"hidden": false,
"name": "Yatin Nandwani",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-19T15:52:54.491Z",
"user": {
"_id": "6626284411772517e546d799",
"avatarUrl": "/avatars/a9f2f1958c9f17ccc911fa323d9af2f3.svg",
"fullname": "Yatin Nandwani",
"isPro": false,
"type": "user",
"user": "ynandwan"
}
},
{
"_id": "67b3716bab1b992c7f4599dc",
"hidden": false,
"name": "Asaf Yehudai",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-19T15:53:01.485Z",
"user": {
"_id": "638324f862badff43269e588",
"avatarUrl": "/avatars/907a39a9b44fc8b7f3fad35858b01fb7.svg",
"fullname": "Asaf Yehudai",
"isPro": false,
"type": "user",
"user": "Asaf-Yehudai"
}
},
{
"_id": "67b3716bab1b992c7f4599dd",
"hidden": false,
"name": "Dinesh Khandelwal",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b3716bab1b992c7f4599de",
"hidden": false,
"name": "Dinesh Raghu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b3716bab1b992c7f4599df",
"hidden": false,
"name": "Sachindra Joshi",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-12T05:24:21 | Selective Self-to-Supervised Fine-Tuning for Generalization in Large
Language Models | Fine-tuning Large Language Models (LLMs) on specific datasets is a common
practice to improve performance on target tasks. However, this performance gain
often leads to overfitting, where the model becomes too specialized in either
the task or the characteristics of the training data, resulting in a loss of
generalization. This paper introduces Selective Self-to-Supervised Fine-Tuning
(S3FT), a fine-tuning approach that achieves better performance than the
standard supervised fine-tuning (SFT) while improving generalization. S3FT
leverages the existence of multiple valid responses to a query. By utilizing
the model's correct responses, S3FT reduces model specialization during the
fine-tuning stage. S3FT first identifies the correct model responses from the
training set by deploying an appropriate judge. Then, it fine-tunes the model
using the correct model responses and the gold response (or its paraphrase) for
the remaining samples. The effectiveness of S3FT is demonstrated through
experiments on mathematical reasoning, Python programming and reading
comprehension tasks. The results show that standard SFT can lead to an average
performance drop of up to 4.4 on multiple benchmarks, such as MMLU and
TruthfulQA. In contrast, S3FT reduces this drop by half, i.e. to 2.5, indicating
better generalization capabilities than SFT while performing significantly
better on the fine-tuning tasks. | 9 | 67b3716bab1b992c7f459a15 | null | null |
|
2025-02-17T10:18:04.718000 | CLaMP 3: Universal Music Information Retrieval Across Unaligned Modalities and Unseen Languages | 2 | {
"_id": "60f1abe7544c2adfd699860c",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"followerCount": 6280,
"fullname": "AK",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "akhaliq",
"type": "user"
} | false | null | 2502.10362 | [
{
"_id": "67b2e11dd2ee8e627dec1bc2",
"hidden": false,
"name": "Shangda Wu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2e11dd2ee8e627dec1bc3",
"hidden": false,
"name": "Zhancheng Guo",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2e11dd2ee8e627dec1bc4",
"hidden": false,
"name": "Ruibin Yuan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2e11dd2ee8e627dec1bc5",
"hidden": false,
"name": "Junyan Jiang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2e11dd2ee8e627dec1bc6",
"hidden": false,
"name": "Seungheon Doh",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-28T12:58:42.624Z",
"user": {
"_id": "637c3504c292c0fd3f37361f",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/637c3504c292c0fd3f37361f/wyTkbYKi8HufRT65LGN0P.jpeg",
"fullname": "seungheon.doh",
"isPro": false,
"type": "user",
"user": "seungheondoh"
}
},
{
"_id": "67b2e11dd2ee8e627dec1bc7",
"hidden": false,
"name": "Gus Xia",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2e11dd2ee8e627dec1bc8",
"hidden": false,
"name": "Juhan Nam",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2e11dd2ee8e627dec1bc9",
"hidden": false,
"name": "Xiaobing Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2e11dd2ee8e627dec1bca",
"hidden": false,
"name": "Feng Yu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2e11dd2ee8e627dec1bcb",
"hidden": false,
"name": "Maosong Sun",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-14T18:42:25 | CLaMP 3: Universal Music Information Retrieval Across Unaligned
Modalities and Unseen Languages | CLaMP 3 is a unified framework developed to address challenges of cross-modal
and cross-lingual generalization in music information retrieval. Using
contrastive learning, it aligns all major music modalities--including sheet
music, performance signals, and audio recordings--with multilingual text in a
shared representation space, enabling retrieval across unaligned modalities
with text as a bridge. It features a multilingual text encoder adaptable to
unseen languages, exhibiting strong cross-lingual generalization. Leveraging
retrieval-augmented generation, we curated M4-RAG, a web-scale dataset
consisting of 2.31 million music-text pairs. This dataset is enriched with
detailed metadata that represents a wide array of global musical traditions. To
advance future research, we release WikiMT-X, a benchmark comprising 1,000
triplets of sheet music, audio, and richly varied text descriptions.
Experiments show that CLaMP 3 achieves state-of-the-art performance on multiple
MIR tasks, significantly surpassing previous strong baselines and demonstrating
excellent generalization in multimodal and multilingual music contexts. | 4 | 67b2e11ed2ee8e627dec1c25 | null | null |
|
2025-02-17T09:25:39.949000 | Text-guided Sparse Voxel Pruning for Efficient 3D Visual Grounding | 2 | {
"_id": "648ac65fd044b25978015634",
"avatarUrl": "/avatars/2278a66fdc953220e9f8fc0ccce3ff00.svg",
"followerCount": null,
"fullname": "Xiuwei Xu",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "xuxw98",
"type": "user"
} | false | null | 2502.10392 | [
{
"_id": "67b346bab6c58a3e0a26190a",
"hidden": false,
"name": "Wenxuan Guo",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-18T09:32:20.352Z",
"user": {
"_id": "67b2cf648a276e7b4856e307",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/no-auth/lHepLDwyYeQC4IQhVKgz6.png",
"fullname": "Wenxuan Guo",
"isPro": false,
"type": "user",
"user": "gwx22"
}
},
{
"_id": "67b346bab6c58a3e0a26190b",
"hidden": false,
"name": "Xiuwei Xu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b346bab6c58a3e0a26190c",
"hidden": false,
"name": "Ziwei Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b346bab6c58a3e0a26190d",
"hidden": false,
"name": "Jianjiang Feng",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b346bab6c58a3e0a26190e",
"hidden": false,
"name": "Jie Zhou",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b346bab6c58a3e0a26190f",
"hidden": false,
"name": "Jiwen Lu",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-14T18:59:59 | Text-guided Sparse Voxel Pruning for Efficient 3D Visual Grounding | In this paper, we propose an efficient multi-level convolution architecture
for 3D visual grounding. Conventional methods are difficult to meet the
requirements of real-time inference due to the two-stage or point-based
architecture. Inspired by the success of multi-level fully sparse convolutional
architecture in 3D object detection, we aim to build a new 3D visual grounding
framework following this technical route. However, as in 3D visual grounding
task the 3D scene representation should be deeply interacted with text
features, sparse convolution-based architecture is inefficient for this
interaction due to the large amount of voxel features. To this end, we propose
text-guided pruning (TGP) and completion-based addition (CBA) to deeply fuse 3D
scene representation and text features in an efficient way by gradual region
pruning and target completion. Specifically, TGP iteratively sparsifies the 3D
scene representation and thus efficiently interacts the voxel features with
text features by cross-attention. To mitigate the effect of pruning on delicate
geometric information, CBA adaptively fixes the over-pruned region by voxel
completion with negligible computational overhead. Compared with previous
single-stage methods, our method achieves top inference speed and surpasses
the previous fastest method by 100% in FPS. Our method also achieves state-of-the-art
accuracy even compared with two-stage methods, with +1.13 lead of Acc@0.5 on
ScanRefer, and +2.6 and +3.2 leads on NR3D and SR3D respectively. The code
is available at
https://github.com/GWxuan/TSP3D. | 6 | 67b346bcb6c58a3e0a26195c | null | null |
|
2025-02-17T08:54:04.307000 | DarwinLM: Evolutionary Structured Pruning of Large Language Models | 7 | {
"_id": "63e76e2bfdb4097ef65e0745",
"avatarUrl": "/avatars/6d4d94ab6f44e23437488fd9fed2a383.svg",
"followerCount": 4,
"fullname": "Tang",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "Shengkun",
"type": "user"
} | true | null | 2502.07780 | [
{
"_id": "67b33f632f3994b7d95b6e77",
"hidden": false,
"name": "Shengkun Tang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-18T09:32:22.646Z",
"user": {
"_id": "63e76e2bfdb4097ef65e0745",
"avatarUrl": "/avatars/6d4d94ab6f44e23437488fd9fed2a383.svg",
"fullname": "Tang",
"isPro": false,
"type": "user",
"user": "Shengkun"
}
},
{
"_id": "67b33f632f3994b7d95b6e78",
"hidden": false,
"name": "Oliver Sieberling",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b33f632f3994b7d95b6e79",
"hidden": false,
"name": "Eldar Kurtic",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b33f632f3994b7d95b6e7a",
"hidden": false,
"name": "Zhiqiang Shen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b33f632f3994b7d95b6e7b",
"hidden": false,
"name": "Dan Alistarh",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-11T18:59:35 | DarwinLM: Evolutionary Structured Pruning of Large Language Models | Large Language Models (LLMs) have achieved significant success across various
NLP tasks. However, their massive computational costs limit their widespread
use, particularly in real-time applications. Structured pruning offers an
effective solution by compressing models and directly providing end-to-end
speed improvements, regardless of the hardware environment. Meanwhile,
different components of the model exhibit varying sensitivities towards
pruning, calling for non-uniform model compression. However, a pruning
method should not only identify a capable substructure, but also account for
post-compression training. To this end, we propose DarwinLM, a method for
training-aware structured pruning. DarwinLM builds upon an evolutionary
search process, generating multiple offspring models in each generation through
mutation, and selecting the fittest for survival. To assess the effect of
post-training, we incorporate a lightweight, multistep training process within
the offspring population, progressively increasing the number of tokens and
eliminating poorly performing models in each selection stage. We validate our
method through extensive experiments on Llama-2-7B, Llama-3.1-8B and
Qwen-2.5-14B-Instruct, achieving state-of-the-art performance for structured
pruning. For instance, DarwinLM surpasses ShearedLlama while requiring
5× less training data during post-compression training. | 17 | 67b33f642f3994b7d95b6eb1 | null | null |
|
2025-02-17T08:41:41.933000 | ImageRAG: Dynamic Image Retrieval for Reference-Guided Image Generation | 2 | {
"_id": "627c1360f19c5eb46d55ba05",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1652712871747-627c1360f19c5eb46d55ba05.jpeg",
"followerCount": 3,
"fullname": "Rinon Gal",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "rinong",
"type": "user"
} | false | null | 2502.09411 | [
{
"_id": "67b33c3f8904ba09caa986fb",
"hidden": false,
"name": "Rotem Shalev-Arkushin",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-18T09:32:25.103Z",
"user": {
"_id": "63999742ab8f65687a062763",
"avatarUrl": "/avatars/813bd7f2ec125e209a2e10e29d9386db.svg",
"fullname": "Rotem Shalev-Arkushin",
"isPro": false,
"type": "user",
"user": "Rotemsha"
}
},
{
"_id": "67b33c3f8904ba09caa986fc",
"hidden": false,
"name": "Rinon Gal",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b33c3f8904ba09caa986fd",
"hidden": false,
"name": "Amit H. Bermano",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b33c3f8904ba09caa986fe",
"hidden": false,
"name": "Ohad Fried",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-13T15:36:12 | ImageRAG: Dynamic Image Retrieval for Reference-Guided Image Generation | Diffusion models enable high-quality and diverse visual content synthesis.
However, they struggle to generate rare or unseen concepts. To address this
challenge, we explore the usage of Retrieval-Augmented Generation (RAG) with
image generation models. We propose ImageRAG, a method that dynamically
retrieves relevant images based on a given text prompt, and uses them as
context to guide the generation process. Prior approaches that used retrieved
images to improve generation trained models specifically for retrieval-based
generation. In contrast, ImageRAG leverages the capabilities of existing image
conditioning models, and does not require RAG-specific training. Our approach
is highly adaptable and can be applied across different model types, showing
significant improvement in generating rare and fine-grained concepts using
different base models.
Our project page is available at: https://rotem-shalev.github.io/ImageRAG | 18 | 67b33c478904ba09caa988dd | https://rotem-shalev.github.io/ImageRAG/ | https://github.com/rotem-shalev/ImageRAG |
|
2025-02-17T08:29:25.102000 | Small Models, Big Impact: Efficient Corpus and Graph-Based Adaptation of Small Multilingual Language Models for Low-Resource Languages | 2 | {
"_id": "6427f45beb320ead3d287acf",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6427f45beb320ead3d287acf/J7DLdQM6ehiJBke112s0q.jpeg",
"followerCount": 12,
"fullname": "Daniil Gurgurov",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "DGurgurov",
"type": "user"
} | true | null | 2502.10140 | [
{
"_id": "67b32e9dff65b4ec02cb6d81",
"hidden": false,
"name": "Daniil Gurgurov",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-18T16:39:55.120Z",
"user": {
"_id": "6427f45beb320ead3d287acf",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6427f45beb320ead3d287acf/J7DLdQM6ehiJBke112s0q.jpeg",
"fullname": "Daniil Gurgurov",
"isPro": false,
"type": "user",
"user": "DGurgurov"
}
},
{
"_id": "67b32e9dff65b4ec02cb6d82",
"hidden": false,
"name": "Ivan Vykopal",
"status": "extracted_pending",
"statusLastChangedAt": "2025-02-17T12:42:07.190Z",
"user": {
"_id": "64b7e5a89e7deb6a7824301b",
"avatarUrl": "/avatars/34d8843fe6203c4d09bf1b442405a4ab.svg",
"fullname": "Ivan Vykopal",
"isPro": false,
"type": "user",
"user": "ivykopal"
}
},
{
"_id": "67b32e9dff65b4ec02cb6d83",
"hidden": false,
"name": "Josef van Genabith",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b32e9dff65b4ec02cb6d84",
"hidden": false,
"name": "Simon Ostermann",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-21T10:00:05.016Z",
"user": {
"_id": "664de652b3eaebda418c4ce9",
"avatarUrl": "/avatars/7f35afa838566daaeebb6e2915013871.svg",
"fullname": "Simon Ostermann",
"isPro": false,
"type": "user",
"user": "simonost"
}
}
] | 2025-02-14T13:10:39 | Small Models, Big Impact: Efficient Corpus and Graph-Based Adaptation of
Small Multilingual Language Models for Low-Resource Languages | Low-resource languages (LRLs) face significant challenges in natural language
processing (NLP) due to limited data. While current state-of-the-art large
language models (LLMs) still struggle with LRLs, smaller multilingual models
(mLMs) such as mBERT and XLM-R offer greater promise due to a better fit of
their capacity to low training data sizes. This study systematically
investigates parameter-efficient adapter-based methods for adapting mLMs to
LRLs, evaluating three architectures: Sequential Bottleneck, Invertible
Bottleneck, and Low-Rank Adaptation. Using unstructured text from GlotCC and
structured knowledge from ConceptNet, we show that small adaptation datasets
(e.g., up to 1 GB of free-text or a few MB of knowledge graph data) yield gains
in intrinsic (masked language modeling) and extrinsic tasks (topic
classification, sentiment analysis, and named entity recognition). We find that
Sequential Bottleneck adapters excel in language modeling, while Invertible
Bottleneck adapters slightly outperform other methods on downstream tasks due
to better embedding alignment and larger parameter counts. Adapter-based
methods match or outperform full fine-tuning while using far fewer parameters,
and smaller mLMs prove more effective for LRLs than massive LLMs like LLaMA-3,
GPT-4, and DeepSeek-R1-based distilled models. While adaptation improves
performance, pre-training data size remains the dominant factor, especially for
languages with extensive pre-training coverage. | 9 | 67b32e9fff65b4ec02cb6dcd | null | null |
|
2025-02-17T07:24:28.545000 | Cluster and Predict Latents Patches for Improved Masked Image Modeling | 2 | {
"_id": "5f1158120c833276f61f1a84",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1608042047613-5f1158120c833276f61f1a84.jpeg",
"followerCount": 777,
"fullname": "Niels Rogge",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "nielsr",
"type": "user"
} | false | null | 2502.08769 | [
{
"_id": "67b32a554d60b7d162dffd89",
"hidden": false,
"name": "Timothée Darcet",
"status": "claimed_verified",
"statusLastChangedAt": "2025-03-03T11:15:21.470Z",
"user": {
"_id": "63047da67424d937fa31b2b0",
"avatarUrl": "/avatars/cfdbb2054da0af37834a619f6c03db52.svg",
"fullname": "Timothée Darcet",
"isPro": false,
"type": "user",
"user": "TimDarcet"
}
},
{
"_id": "67b32a554d60b7d162dffd8a",
"hidden": false,
"name": "Federico Baldassarre",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b32a554d60b7d162dffd8b",
"hidden": false,
"name": "Maxime Oquab",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b32a554d60b7d162dffd8c",
"hidden": false,
"name": "Julien Mairal",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b32a554d60b7d162dffd8d",
"hidden": false,
"name": "Piotr Bojanowski",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-12T20:17:10 | Cluster and Predict Latents Patches for Improved Masked Image Modeling | Masked Image Modeling (MIM) offers a promising approach to self-supervised
representation learning; however, existing MIM models still lag behind the
state-of-the-art. In this paper, we systematically analyze target
representations, loss functions, and architectures, to introduce CAPI - a novel
pure-MIM framework that relies on the prediction of latent clusterings. Our
approach leverages a clustering-based loss, which is stable to train, and
exhibits promising scaling properties. Our ViT-L backbone, CAPI, achieves 83.8%
accuracy on ImageNet and 32.1% mIoU on ADE20K with simple linear probes,
substantially outperforming previous MIM methods and approaching the
performance of the current state-of-the-art, DINOv2. We release all our code
and models. | 4 | 67b32a574d60b7d162dffdd4 | null | null |
|
2025-02-17T05:36:23.051000 | AdaPTS: Adapting Univariate Foundation Models to Probabilistic Multivariate Time Series Forecasting | 2 | {
"_id": "621d59ebd3df05d67132e8d9",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/621d59ebd3df05d67132e8d9/0gPfPTRKKnz5kq0InTqm5.jpeg",
"followerCount": 7,
"fullname": "Abdelhakim Benechehab",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "abenechehab",
"type": "user"
} | true | [
"https://cdn-uploads.huggingface.co/production/uploads/621d59ebd3df05d67132e8d9/J5xioTqwZTvZTFQ1cnhbr.png",
"https://cdn-uploads.huggingface.co/production/uploads/621d59ebd3df05d67132e8d9/i8pNDJflC9XZ1IqO_Y3iX.png",
"https://cdn-uploads.huggingface.co/production/uploads/621d59ebd3df05d67132e8d9/_J0c16BuklV33a43j_Pxk.png",
"https://cdn-uploads.huggingface.co/production/uploads/621d59ebd3df05d67132e8d9/c_kr2HOzAyrzC9SsQ4kgk.png",
"https://cdn-uploads.huggingface.co/production/uploads/621d59ebd3df05d67132e8d9/dlZ9r1oZh3ogB6-p1sewz.png",
"https://cdn-uploads.huggingface.co/production/uploads/621d59ebd3df05d67132e8d9/yEX6Nt-xHKP3pbCQ9LmE8.png"
] | 2502.10235 | [
{
"_id": "67b30f528904ba09ca9d9ab4",
"hidden": false,
"name": "Abdelhakim Benechehab",
"status": "extracted_confirmed",
"statusLastChangedAt": "2025-02-17T10:50:21.800Z",
"user": {
"_id": "621d59ebd3df05d67132e8d9",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/621d59ebd3df05d67132e8d9/0gPfPTRKKnz5kq0InTqm5.jpeg",
"fullname": "Abdelhakim Benechehab",
"isPro": false,
"type": "user",
"user": "abenechehab"
}
},
{
"_id": "67b30f528904ba09ca9d9ab5",
"hidden": false,
"name": "Vasilii Feofanov",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-18T09:32:27.648Z",
"user": {
"_id": "66f4159b7b9d607cb86a290e",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/66f4159b7b9d607cb86a290e/dx7Gl7flzTEGWDz1Efcdo.jpeg",
"fullname": "Vasilii Feofanov",
"isPro": false,
"type": "user",
"user": "vasilii-feofanov"
}
},
{
"_id": "67b30f528904ba09ca9d9ab6",
"hidden": false,
"name": "Giuseppe Paolo",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-21T14:43:09.340Z",
"user": {
"_id": "65e98cd8e19214e9d151f29e",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/65e98cd8e19214e9d151f29e/XjQzoVgKVzv8AZBWFQnHz.jpeg",
"fullname": "Giuseppe Paolo",
"isPro": false,
"type": "user",
"user": "GPaolo"
}
},
{
"_id": "67b30f528904ba09ca9d9ab7",
"hidden": false,
"name": "Albert Thomas",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b30f528904ba09ca9d9ab8",
"hidden": false,
"name": "Maurizio Filippone",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b30f528904ba09ca9d9ab9",
"hidden": false,
"name": "Balázs Kégl",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-18T09:32:30.076Z",
"user": {
"_id": "672cb69f250205b317235571",
"avatarUrl": "/avatars/37afc1506b15a2f9c37e3e8769142580.svg",
"fullname": "Balazs Kegl",
"isPro": false,
"type": "user",
"user": "balazskegl"
}
}
] | 2025-02-14T15:46:19 | AdaPTS: Adapting Univariate Foundation Models to Probabilistic
Multivariate Time Series Forecasting | Pre-trained foundation models (FMs) have shown exceptional performance in
univariate time series forecasting tasks. However, several practical challenges
persist, including managing intricate dependencies among features and
quantifying uncertainty in predictions. This study aims to tackle these
critical limitations by introducing adapters: feature-space transformations
that facilitate the effective use of pre-trained univariate time series FMs for
multivariate tasks. Adapters operate by projecting multivariate inputs into a
suitable latent space and applying the FM independently to each dimension.
Inspired by the literature on representation learning and partially stochastic
Bayesian neural networks, we present a range of adapters and
optimization/inference strategies. Experiments conducted on both synthetic and
real-world datasets confirm the efficacy of adapters, demonstrating substantial
enhancements in forecasting accuracy and uncertainty quantification compared to
baseline methods. Our framework, AdaPTS, positions adapters as a modular,
scalable, and effective solution for leveraging time series FMs in multivariate
contexts, thereby promoting their wider adoption in real-world applications. We
release the code at https://github.com/abenechehab/AdaPTS. | 8 | 67b30f548904ba09ca9d9b1e | null | null |
|
2025-02-17T05:09:33.663000 | Agentic End-to-End De Novo Protein Design for Tailored Dynamics Using a Language Diffusion Model | 2 | {
"_id": "623ce1c6b66fedf374859fe7",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/623ce1c6b66fedf374859fe7/lhbMLg6BxLCb9DD4rgjfx.jpeg",
"followerCount": 24,
"fullname": "Markus Buehler",
"isHf": false,
"isMod": false,
"isPro": true,
"name": "mjbuehler",
"type": "user"
} | false | [
"https://cdn-uploads.huggingface.co/production/uploads/623ce1c6b66fedf374859fe7/rcgnOK5A9wV0qO9I3Mxny.png",
"https://cdn-uploads.huggingface.co/production/uploads/623ce1c6b66fedf374859fe7/xD8WOPTgKHpIIPwHh9KHf.mp4"
] | 2502.10173 | [
{
"_id": "67b306ba817e86482ef224d5",
"hidden": false,
"name": "Bo Ni",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b306ba817e86482ef224d6",
"hidden": false,
"name": "Markus J. Buehler",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-14T14:07:54 | Agentic End-to-End De Novo Protein Design for Tailored Dynamics Using a
Language Diffusion Model | Proteins are dynamic molecular machines whose biological functions, spanning
enzymatic catalysis, signal transduction, and structural adaptation, are
intrinsically linked to their motions. Designing proteins with targeted dynamic
properties, however, remains a challenge due to the complex, degenerate
relationships between sequence, structure, and molecular motion. Here, we
introduce VibeGen, a generative AI framework that enables end-to-end de novo
protein design conditioned on normal mode vibrations. VibeGen employs an
agentic dual-model architecture, comprising a protein designer that generates
sequence candidates based on specified vibrational modes and a protein
predictor that evaluates their dynamic accuracy. This approach synergizes
diversity, accuracy, and novelty during the design process. Via full-atom
molecular simulations as direct validation, we demonstrate that the designed
proteins accurately reproduce the prescribed normal mode amplitudes across the
backbone while adopting various stable, functionally relevant structures.
Notably, generated sequences are de novo, exhibiting no significant similarity
to natural proteins, thereby expanding the accessible protein space beyond
evolutionary constraints. Our work integrates protein dynamics into generative
protein design, and establishes a direct, bidirectional link between sequence
and vibrational behavior, unlocking new pathways for engineering biomolecules
with tailored dynamical and functional properties. This framework holds broad
implications for the rational design of flexible enzymes, dynamic scaffolds,
and biomaterials, paving the way toward dynamics-informed AI-driven protein
engineering. | 3 | 67b306ba817e86482ef224fa | null | null |
|
2025-02-17T04:28:55.526000 | We Can't Understand AI Using our Existing Vocabulary | 4 | {
"_id": "5e7749883d77a72421292d07",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1670231290373-5e7749883d77a72421292d07.jpeg",
"followerCount": 213,
"fullname": "Gabriele Sarti",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "gsarti",
"type": "user"
} | false | null | 2502.07586 | [
{
"_id": "67b30146b02f929c82ce075e",
"hidden": false,
"name": "John Hewitt",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-19T15:54:11.819Z",
"user": {
"_id": "646ec1161a198f8520f3c705",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/QPCRraySl9zTnf7eG5ZJk.png",
"fullname": "John Hewitt",
"isPro": false,
"type": "user",
"user": "johnhew"
}
},
{
"_id": "67b30146b02f929c82ce075f",
"hidden": false,
"name": "Robert Geirhos",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-19T15:53:50.762Z",
"user": {
"_id": "673bbe0d7dfcdedd52619ec2",
"avatarUrl": "/avatars/531a44f05d0c738bbe3e028c76c2e948.svg",
"fullname": "Robert Geirhos",
"isPro": false,
"type": "user",
"user": "rgeirhos"
}
},
{
"_id": "67b30146b02f929c82ce0760",
"hidden": false,
"name": "Been Kim",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-11T14:34:05 | We Can't Understand AI Using our Existing Vocabulary | This position paper argues that, in order to understand AI, we cannot rely on
our existing vocabulary of human words. Instead, we should strive to develop
neologisms: new words that represent precise human concepts that we want to
teach machines, or machine concepts that we need to learn. We start from the
premise that humans and machines have differing concepts. This means
interpretability can be framed as a communication problem: humans must be able
to reference and control machine concepts, and communicate human concepts to
machines. Creating a shared human-machine language through developing
neologisms, we believe, could solve this communication problem. Successful
neologisms achieve a useful amount of abstraction: not too detailed, so they're
reusable in many contexts, and not too high-level, so they convey precise
information. As a proof of concept, we demonstrate how a "length neologism"
enables controlling LLM response length, while a "diversity neologism" allows
sampling more variable responses. Taken together, we argue that we cannot
understand AI using our existing vocabulary, and expanding it through
neologisms creates opportunities for both controlling and understanding
machines better. | 10 | 67b30147b02f929c82ce079c | null | null |
|
2025-02-17T03:06:17.932000 | Precise Parameter Localization for Textual Generation in Diffusion Models | 2 | {
"_id": "63c7c19721bd95f80ed8ed80",
"avatarUrl": "/avatars/0b1c1ace991e0290118d4f99f619d809.svg",
"followerCount": null,
"fullname": "Lukasz Staniszewski",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "lukasz-staniszewski",
"type": "user"
} | true | null | 2502.09935 | [
{
"_id": "67b2e6939edebc815a35eec8",
"hidden": false,
"name": "Łukasz Staniszewski",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-18T09:32:40.751Z",
"user": {
"_id": "63c7c19721bd95f80ed8ed80",
"avatarUrl": "/avatars/0b1c1ace991e0290118d4f99f619d809.svg",
"fullname": "Lukasz Staniszewski",
"isPro": false,
"type": "user",
"user": "lukasz-staniszewski"
}
},
{
"_id": "67b2e6939edebc815a35eec9",
"hidden": false,
"name": "Bartosz Cywiński",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-18T09:32:39.005Z",
"user": {
"_id": "6422f416a73327caad9d1d86",
"avatarUrl": "/avatars/aa3639277cd1732504402fc64a57eff8.svg",
"fullname": "Bartosz Cywiński",
"isPro": false,
"type": "user",
"user": "bcywinski"
}
},
{
"_id": "67b2e6939edebc815a35eeca",
"hidden": false,
"name": "Franziska Boenisch",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2e6939edebc815a35eecb",
"hidden": false,
"name": "Kamil Deja",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2e6939edebc815a35eecc",
"hidden": false,
"name": "Adam Dziedzic",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-14T06:11:23 | Precise Parameter Localization for Textual Generation in Diffusion
Models | Novel diffusion models can synthesize photo-realistic images with integrated
high-quality text. Surprisingly, we demonstrate through attention activation
patching that only less than 1% of diffusion models' parameters, all contained
in attention layers, influence the generation of textual content within the
images. Building on this observation, we improve textual generation efficiency
and performance by targeting cross and joint attention layers of diffusion
models. We introduce several applications that benefit from localizing the
layers responsible for textual content generation. We first show that a
LoRA-based fine-tuning of only the localized layers further enhances the
general text-generation capabilities of large diffusion models while preserving
the quality and diversity of the diffusion models' generations. Then, we
demonstrate how we can use the localized layers to edit textual content in
generated images. Finally, we extend this idea to the practical use case of
preventing the generation of toxic text in a cost-free manner. In contrast to
prior work, our localization approach is broadly applicable across various
diffusion model architectures, including U-Net (e.g., LDM and SDXL) and
transformer-based (e.g., DeepFloyd IF and Stable Diffusion 3), utilizing
diverse text encoders (e.g., from CLIP to the large language models like T5).
Project page available at https://t2i-text-loc.github.io/. | 11 | 67b2e6979edebc815a35efbc | null | null |
|
2025-02-17T02:03:05.624000 | MRS: A Fast Sampler for Mean Reverting Diffusion based on ODE and SDE Solvers | 2 | {
"_id": "64100834c025ddf6189c415e",
"avatarUrl": "/avatars/9b9bbecef5d5815540abf92d74012f55.svg",
"followerCount": 2,
"fullname": "Hongbo Zhao",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "z-hb",
"type": "user"
} | true | null | 2502.07856 | [
{
"_id": "67b2dedc8a276e7b485a9bcd",
"hidden": false,
"name": "Ao Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2dedc8a276e7b485a9bce",
"hidden": false,
"name": "Wei Fang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2dedc8a276e7b485a9bcf",
"hidden": false,
"name": "Hongbo Zhao",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-18T09:32:42.698Z",
"user": {
"_id": "64100834c025ddf6189c415e",
"avatarUrl": "/avatars/9b9bbecef5d5815540abf92d74012f55.svg",
"fullname": "Hongbo Zhao",
"isPro": false,
"type": "user",
"user": "z-hb"
}
},
{
"_id": "67b2dedc8a276e7b485a9bd0",
"hidden": false,
"name": "Le Lu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2dedc8a276e7b485a9bd1",
"hidden": false,
"name": "Ge Yang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2dedc8a276e7b485a9bd2",
"hidden": false,
"name": "Minfeng Xu",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-11T14:57:33 | MRS: A Fast Sampler for Mean Reverting Diffusion based on ODE and SDE
Solvers | In applications of diffusion models, controllable generation is of practical
significance, but is also challenging. Current methods for controllable
generation primarily focus on modifying the score function of diffusion models,
while Mean Reverting (MR) Diffusion directly modifies the structure of the
stochastic differential equation (SDE), making the incorporation of image
conditions simpler and more natural. However, current training-free fast
samplers are not directly applicable to MR Diffusion, which therefore
requires hundreds of NFEs (number of function evaluations) to obtain
high-quality samples. In this paper, we propose a new algorithm named MRS (MR
Sampler) to reduce the sampling NFEs of MR Diffusion. We solve the reverse-time
SDE and the probability flow ordinary differential equation (PF-ODE) associated
with MR Diffusion, and derive semi-analytical solutions. The solutions consist
of an analytical function and an integral parameterized by a neural network.
Based on this solution, we can generate high-quality samples in fewer steps.
Our approach does not require training and supports all mainstream
parameterizations, including noise prediction, data prediction and velocity
prediction. Extensive experiments demonstrate that MR Sampler maintains high
sampling quality with a speedup of 10 to 20 times across ten different image
restoration tasks. Our algorithm accelerates the sampling procedure of MR
Diffusion, making it more practical in controllable generation. | 4 | 67b2dedd8a276e7b485a9c0b | https://github.com/grrrute/mr-sampler | https://github.com/grrrute/mr-sampler |
|
2025-02-17T01:33:15.971000 | V2V-LLM: Vehicle-to-Vehicle Cooperative Autonomous Driving with Multi-Modal Large Language Models | 2 | {
"_id": "64ae22dd1aee69ece065cdcd",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/64ae22dd1aee69ece065cdcd/JG7QaHIrr4i2k4uwR4pZK.png",
"followerCount": 3,
"fullname": "Min-Hung Chen",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "cmhungsteve",
"type": "user"
} | true | null | 2502.09980 | [
{
"_id": "67b2d7e86a002d59a415fc99",
"hidden": false,
"name": "Hsu-kuang Chiu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2d7e86a002d59a415fc9a",
"hidden": false,
"name": "Ryo Hachiuma",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2d7e86a002d59a415fc9b",
"hidden": false,
"name": "Chien-Yi Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2d7e86a002d59a415fc9c",
"hidden": false,
"name": "Stephen F. Smith",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2d7e86a002d59a415fc9d",
"hidden": false,
"name": "Yu-Chiang Frank Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2d7e86a002d59a415fc9e",
"hidden": false,
"name": "Min-Hung Chen",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-18T09:32:44.929Z",
"user": {
"_id": "64ae22dd1aee69ece065cdcd",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/64ae22dd1aee69ece065cdcd/JG7QaHIrr4i2k4uwR4pZK.png",
"fullname": "Min-Hung Chen",
"isPro": false,
"type": "user",
"user": "cmhungsteve"
}
}
] | 2025-02-14T08:05:41 | V2V-LLM: Vehicle-to-Vehicle Cooperative Autonomous Driving with
Multi-Modal Large Language Models | Current autonomous driving vehicles rely mainly on their individual sensors
to understand surrounding scenes and plan for future trajectories, which can be
unreliable when the sensors are malfunctioning or occluded. To address this
problem, cooperative perception methods via vehicle-to-vehicle (V2V)
communication have been proposed, but they have tended to focus on detection
and tracking. How those approaches contribute to overall cooperative planning
performance is still under-explored. Inspired by recent progress using Large
Language Models (LLMs) to build autonomous driving systems, we propose a novel
problem setting that integrates an LLM into cooperative autonomous driving,
with the proposed Vehicle-to-Vehicle Question-Answering (V2V-QA) dataset and
benchmark. We also propose our baseline method Vehicle-to-Vehicle Large
Language Model (V2V-LLM), which uses an LLM to fuse perception information from
multiple connected autonomous vehicles (CAVs) and answer driving-related
questions: grounding, notable object identification, and planning. Experimental
results show that our proposed V2V-LLM can be a promising unified model
architecture for performing various tasks in cooperative autonomous driving,
and outperforms other baseline methods that use different fusion approaches.
Our work also creates a new research direction that can improve the safety of
future autonomous driving systems. Our project website:
https://eddyhkchiu.github.io/v2vllm.github.io/ . | 4 | 67b2d7ee6a002d59a415fe34 | null | null |
|
2025-02-17T00:04:19.389000 | Jailbreaking to Jailbreak | 2 | {
"_id": "60f1abe7544c2adfd699860c",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"followerCount": 6280,
"fullname": "AK",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "akhaliq",
"type": "user"
} | false | null | 2502.09638 | [
{
"_id": "67b2c3386ccf462ccaa45860",
"hidden": false,
"name": "Jeremy Kritz",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2c3386ccf462ccaa45861",
"hidden": false,
"name": "Vaughn Robinson",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2c3386ccf462ccaa45862",
"hidden": false,
"name": "Robert Vacareanu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2c3386ccf462ccaa45863",
"hidden": false,
"name": "Bijan Varjavand",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2c3386ccf462ccaa45864",
"hidden": false,
"name": "Michael Choi",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2c3386ccf462ccaa45865",
"hidden": false,
"name": "Bobby Gogov",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2c3386ccf462ccaa45866",
"hidden": false,
"name": "Scale Red Team",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2c3386ccf462ccaa45867",
"hidden": false,
"name": "Summer Yue",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2c3386ccf462ccaa45868",
"hidden": false,
"name": "Willow E. Primack",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2c3386ccf462ccaa45869",
"hidden": false,
"name": "Zifan Wang",
"status": "extracted_pending",
"statusLastChangedAt": "2025-02-17T05:03:53.788Z",
"user": {
"_id": "66976d1007b36ccd01586ce5",
"avatarUrl": "/avatars/5811e350907a29b71f6e4d57ffd53e66.svg",
"fullname": "Wang",
"isPro": false,
"type": "user",
"user": "ZifanScale"
}
}
] | 2025-02-09T20:49:16 | Jailbreaking to Jailbreak | Refusal training on Large Language Models (LLMs) prevents harmful outputs,
yet this defense remains vulnerable to both automated and human-crafted
jailbreaks. We present a novel LLM-as-red-teamer approach in which a human
jailbreaks a refusal-trained LLM to make it willing to jailbreak itself or
other LLMs. We refer to the jailbroken LLMs as J_2 attackers, which can
systematically evaluate target models using various red teaming strategies and
improve their performance via in-context learning from previous failures. Our
experiments demonstrate that Sonnet 3.5 and Gemini 1.5 Pro outperform other
LLMs as J_2, achieving 93.0% and 91.0% attack success rates (ASRs)
respectively against GPT-4o (and similar results across other capable LLMs) on
HarmBench. Our work not only introduces a scalable approach to strategic red
teaming, drawing inspiration from human red teamers, but also highlights
jailbreaking-to-jailbreak as an overlooked failure mode of the safeguard.
Specifically, an LLM can bypass its own safeguards by employing a jailbroken
version of itself that is willing to assist in further jailbreaking. To prevent
any direct misuse with J_2, while advancing research in AI safety, we
publicly share our methodology while keeping specific prompting details
private. | 4 | 67b2c3396ccf462ccaa458b3 | null | null |
|
2025-02-17T00:03:18.228000 | Large Language Diffusion Models | 9 | {
"_id": "60f1abe7544c2adfd699860c",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"followerCount": 6280,
"fullname": "AK",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "akhaliq",
"type": "user"
} | true | null | 2502.09992 | [
{
"_id": "67b2c31125f77e5fc242f4f8",
"hidden": false,
"name": "Shen Nie",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-28T12:48:04.969Z",
"user": {
"_id": "640dd9a5fdeaae13908208a7",
"avatarUrl": "/avatars/61f8f1d5f6ef1c4af1f47285e9cc0217.svg",
"fullname": "nieshen",
"isPro": false,
"type": "user",
"user": "nieshen"
}
},
{
"_id": "67b2c31125f77e5fc242f4f9",
"hidden": false,
"name": "Fengqi Zhu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2c31125f77e5fc242f4fa",
"hidden": false,
"name": "Zebin You",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-28T12:47:21.498Z",
"user": {
"_id": "624f909eac5dd186b01ac3f5",
"avatarUrl": "/avatars/71a5c93c491064ef9e1eda80fda90665.svg",
"fullname": "Zebin You",
"isPro": false,
"type": "user",
"user": "yyyou"
}
},
{
"_id": "67b2c31125f77e5fc242f4fb",
"hidden": false,
"name": "Xiaolu Zhang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-28T12:47:27.956Z",
"user": {
"_id": "67513d6d3b8586521cda5d76",
"avatarUrl": "/avatars/0f95cc5c23a0a1da289aa785bd33b616.svg",
"fullname": "Xiaolu Zhang",
"isPro": false,
"type": "user",
"user": "xiaolu0714"
}
},
{
"_id": "67b2c31125f77e5fc242f4fc",
"hidden": false,
"name": "Jingyang Ou",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-19T15:51:46.024Z",
"user": {
"_id": "656949b71d7c2ca7b7aae5f2",
"avatarUrl": "/avatars/e7b23e260eb348cc26b849aaa601a503.svg",
"fullname": "Jingyang Ou",
"isPro": false,
"type": "user",
"user": "JingyangOu"
}
},
{
"_id": "67b2c31125f77e5fc242f4fd",
"hidden": false,
"name": "Jun Hu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2c31125f77e5fc242f4fe",
"hidden": false,
"name": "Jun Zhou",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2c31125f77e5fc242f4ff",
"hidden": false,
"name": "Yankai Lin",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-19T15:51:39.766Z",
"user": {
"_id": "657a651e1433ea7d44de6397",
"avatarUrl": "/avatars/ccfc76f94595a38ff4a80f77c911eabf.svg",
"fullname": "Yankai Lin",
"isPro": false,
"type": "user",
"user": "lyk423"
}
},
{
"_id": "67b2c31125f77e5fc242f500",
"hidden": false,
"name": "Ji-Rong Wen",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-19T15:51:33.428Z",
"user": {
"_id": "64b8c89052b7353d8c6a1013",
"avatarUrl": "/avatars/cd59fffe81f6b07b4519540b8ff3d95f.svg",
"fullname": "Ji-Rong Wen",
"isPro": false,
"type": "user",
"user": "jrwen"
}
},
{
"_id": "67b2c31125f77e5fc242f501",
"hidden": false,
"name": "Chongxuan Li",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-19T15:51:26.668Z",
"user": {
"_id": "64c07b488e2612254361153b",
"avatarUrl": "/avatars/ade0f783cc4c2d3e73f402637f595471.svg",
"fullname": "chongxuan li",
"isPro": false,
"type": "user",
"user": "zhenxuan00"
}
}
] | 2025-02-14T08:23:51 | Large Language Diffusion Models | Autoregressive models (ARMs) are widely regarded as the cornerstone of large
language models (LLMs). We challenge this notion by introducing LLaDA, a
diffusion model trained from scratch under the pre-training and supervised
fine-tuning (SFT) paradigm. LLaDA models distributions through a forward data
masking process and a reverse process, parameterized by a vanilla Transformer
to predict masked tokens. By optimizing a likelihood bound, it provides a
principled generative approach for probabilistic inference. Across extensive
benchmarks, LLaDA demonstrates strong scalability, outperforming our
self-constructed ARM baselines. Remarkably, LLaDA 8B is competitive with strong
LLMs like LLaMA3 8B in in-context learning and, after SFT, exhibits impressive
instruction-following abilities in case studies such as multi-turn dialogue.
Moreover, LLaDA addresses the reversal curse, surpassing GPT-4o in a reversal
poem completion task. Our findings establish diffusion models as a viable and
promising alternative to ARMs, challenging the assumption that key LLM
capabilities discussed above are inherently tied to ARMs. | 94 | 67b2c31225f77e5fc242f527 | null | null |
|
2025-02-16T23:57:43.710000 | Diverse Inference and Verification for Advanced Reasoning | 3 | {
"_id": "60f1abe7544c2adfd699860c",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"followerCount": 6280,
"fullname": "AK",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "akhaliq",
"type": "user"
} | false | null | 2502.09955 | [
{
"_id": "67b2c1ac0303a07acd3f9443",
"hidden": false,
"name": "Iddo Drori",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2c1ac0303a07acd3f9444",
"hidden": false,
"name": "Gaston Longhitano",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-19T15:54:41.134Z",
"user": {
"_id": "665167c3b1181f7d10bd3b99",
"avatarUrl": "/avatars/443a63003b15c2312bc59fea2e018362.svg",
"fullname": "Gaston Longhitano",
"isPro": false,
"type": "user",
"user": "glongh"
}
},
{
"_id": "67b2c1ac0303a07acd3f9445",
"hidden": false,
"name": "Mao Mao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2c1ac0303a07acd3f9446",
"hidden": false,
"name": "Seunghwan Hyun",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2c1ac0303a07acd3f9447",
"hidden": false,
"name": "Yuke Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2c1ac0303a07acd3f9448",
"hidden": false,
"name": "Sungjun Park",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2c1ac0303a07acd3f9449",
"hidden": false,
"name": "Zachary Meeks",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2c1ac0303a07acd3f944a",
"hidden": false,
"name": "Xin-Yu Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2c1ac0303a07acd3f944b",
"hidden": false,
"name": "Ben Segev",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2c1ac0303a07acd3f944c",
"hidden": false,
"name": "Howard Yong",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2c1ac0303a07acd3f944d",
"hidden": false,
"name": "Nakul Verma",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2c1ac0303a07acd3f944e",
"hidden": false,
"name": "Avi Shporer",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2c1ac0303a07acd3f944f",
"hidden": false,
"name": "Alon Amit",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2c1ac0303a07acd3f9450",
"hidden": false,
"name": "Madeleine Udell",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-14T07:22:25 | Diverse Inference and Verification for Advanced Reasoning | Reasoning LLMs such as OpenAI o1, o3 and DeepSeek R1 have made significant
progress in mathematics and coding, yet find challenging advanced tasks such as
International Mathematical Olympiad (IMO) combinatorics problems, Abstraction
and Reasoning Corpus (ARC) puzzles, and Humanity's Last Exam (HLE) questions.
We use a diverse inference approach that combines multiple models and methods
at test time. We find that verifying mathematics and code problems, and
rejection sampling on other problems is simple and effective. We automatically
verify the correctness of solutions to IMO problems with Lean, and of ARC puzzles with
code, and find that best-of-N effectively answers HLE questions. Our approach
increases answer accuracy on IMO combinatorics problems from 33.3% to 77.8%,
accuracy on HLE questions from 8% to 37%, and solves 80% of ARC puzzles that
948 humans could not and 26.5% of ARC puzzles that o3 with high compute does not.
Test-time simulations, reinforcement learning, and meta-learning with inference
feedback improve generalization by adapting agent graph representations and
varying prompts, code, and datasets. Our approach is reliable, robust, and
scalable, and in the spirit of reproducible research, we will make it publicly
available upon publication. | 16 | 67b2c1b10303a07acd3f9532 | null | null |
|
2025-02-16T23:07:53.170000 | FoNE: Precise Single-Token Number Embeddings via Fourier Features | 3 | {
"_id": "63c8454e46421a2efe82709d",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63c8454e46421a2efe82709d/3BcSk4KOwAgWHEPVtsAV3.png",
"followerCount": 5,
"fullname": "Deqing Fu",
"isHf": false,
"isMod": false,
"isPro": true,
"name": "deqing",
"type": "user"
} | true | null | 2502.09741 | [
{
"_id": "67b2b58f9edebc815a2a938c",
"hidden": false,
"name": "Tianyi Zhou",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2b58f9edebc815a2a938d",
"hidden": false,
"name": "Deqing Fu",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-18T09:32:57.319Z",
"user": {
"_id": "63c8454e46421a2efe82709d",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63c8454e46421a2efe82709d/3BcSk4KOwAgWHEPVtsAV3.png",
"fullname": "Deqing Fu",
"isPro": true,
"type": "user",
"user": "deqing"
}
},
{
"_id": "67b2b58f9edebc815a2a938e",
"hidden": false,
"name": "Mahdi Soltanolkotabi",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2b58f9edebc815a2a938f",
"hidden": false,
"name": "Robin Jia",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2b58f9edebc815a2a9390",
"hidden": false,
"name": "Vatsal Sharan",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-13T19:54:59 | FoNE: Precise Single-Token Number Embeddings via Fourier Features | Large Language Models (LLMs) typically represent numbers using multiple
tokens, which requires the model to aggregate these tokens to interpret
numerical values. This fragmentation makes both training and inference less
efficient and adversely affects the model's performance on number-related
tasks. Inspired by the observation that pre-trained LLMs internally learn
Fourier-like features for number tokens, we propose Fourier Number Embedding
(FoNE), a novel method that directly maps numbers into the embedding space with
their Fourier features. FoNE encodes each number as a single token with only
two embedding dimensions per digit, effectively capturing numerical values
without fragmentation. This compact representation accelerates both training
and inference. Compared to traditional subword and digit-wise embeddings, FoNE
not only reduces computational overhead but also achieves higher accuracy
across various numerical tasks including addition, subtraction and
multiplication. On 6-digit decimal addition, FoNE requires 64× less data
than subword and digit-wise embeddings to achieve 99% accuracy, while using
3× and 6× fewer tokens per number, respectively. Furthermore,
FoNE is the only method that yields 100% accuracy on over 100,000 test examples
for addition, subtraction, and multiplication. The codes and visualization are
available at https://fouriernumber.github.io/. | 11 | 67b2b5919edebc815a2a93fc | null | null |
|
2025-02-16T22:51:55.408000 | MM-RLHF: The Next Step Forward in Multimodal LLM Alignment | 5 | {
"_id": "623d8ca4c29adf5ef6175615",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/623d8ca4c29adf5ef6175615/q7lHao7UPwU1u7YLSP56m.jpeg",
"followerCount": 7,
"fullname": "Yi-Fan Zhang",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "yifanzhang114",
"type": "user"
} | false | [
"https://cdn-uploads.huggingface.co/production/uploads/623d8ca4c29adf5ef6175615/YtpeHGys5Zs3bqPlOGs94.png",
"https://cdn-uploads.huggingface.co/production/uploads/623d8ca4c29adf5ef6175615/8mE0hOEgm-if-9zaLyMGn.png"
] | 2502.10391 | [
{
"_id": "67b2ab548191c180b9c4eb83",
"hidden": false,
"name": "Yi-Fan Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2ab548191c180b9c4eb84",
"hidden": false,
"name": "Tao Yu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2ab548191c180b9c4eb85",
"hidden": false,
"name": "Haochen Tian",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-19T14:37:54.641Z",
"user": {
"_id": "6448baa3e780dbfc89058bc3",
"avatarUrl": "/avatars/44c243c8eb4714cdcdaa7cd04e5a9716.svg",
"fullname": "Micheal Tian",
"isPro": false,
"type": "user",
"user": "StarBurger"
}
},
{
"_id": "67b2ab548191c180b9c4eb86",
"hidden": false,
"name": "Chaoyou Fu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2ab548191c180b9c4eb87",
"hidden": false,
"name": "Peiyan Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2ab548191c180b9c4eb88",
"hidden": false,
"name": "Jianshu Zeng",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2ab548191c180b9c4eb89",
"hidden": false,
"name": "Wulin Xie",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2ab548191c180b9c4eb8a",
"hidden": false,
"name": "Yang Shi",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-18T09:32:59.588Z",
"user": {
"_id": "673c7319d11b1c2e246ead9c",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/673c7319d11b1c2e246ead9c/IjFIO--N7Hm_BOEafhEQv.jpeg",
"fullname": "Yang Shi",
"isPro": false,
"type": "user",
"user": "DogNeverSleep"
}
},
{
"_id": "67b2ab548191c180b9c4eb8b",
"hidden": false,
"name": "Huanyu Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2ab548191c180b9c4eb8c",
"hidden": false,
"name": "Junkang Wu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2ab548191c180b9c4eb8d",
"hidden": false,
"name": "Xue Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2ab548191c180b9c4eb8e",
"hidden": false,
"name": "Yibo Hu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2ab548191c180b9c4eb8f",
"hidden": false,
"name": "Bin Wen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2ab548191c180b9c4eb90",
"hidden": false,
"name": "Fan Yang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2ab548191c180b9c4eb91",
"hidden": false,
"name": "Zhang Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2ab548191c180b9c4eb92",
"hidden": false,
"name": "Tingting Gao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2ab548191c180b9c4eb93",
"hidden": false,
"name": "Di Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2ab548191c180b9c4eb94",
"hidden": false,
"name": "Liang Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2ab548191c180b9c4eb95",
"hidden": false,
"name": "Rong Jin",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2ab548191c180b9c4eb96",
"hidden": false,
"name": "Tieniu Tan",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-14T18:59:51 | MM-RLHF: The Next Step Forward in Multimodal LLM Alignment | Despite notable advancements in Multimodal Large Language Models (MLLMs),
most state-of-the-art models have not undergone thorough alignment with human
preferences. This gap exists because current alignment research has primarily
achieved progress in specific areas (e.g., hallucination reduction), while the
broader question of whether aligning models with human preferences can
systematically enhance MLLM capability remains largely unexplored. To this end,
we introduce MM-RLHF, a dataset containing 120k fine-grained,
human-annotated preference comparison pairs. This dataset represents a
substantial advancement over existing resources, offering superior size,
diversity, annotation granularity, and quality. Leveraging this dataset, we
propose several key innovations to improve both the quality of reward models
and the efficiency of alignment algorithms. Notably, we introduce a
Critique-Based Reward Model, which generates critiques of model outputs before
assigning scores, offering enhanced interpretability and more informative
feedback compared to traditional scalar reward mechanisms. Additionally, we
propose Dynamic Reward Scaling, a method that adjusts the loss weight of each
sample according to the reward signal, thereby optimizing the use of
high-quality comparison pairs. Our approach is rigorously evaluated across
10 distinct dimensions and 27 benchmarks, with results
demonstrating significant and consistent improvements in model performance.
Specifically, fine-tuning LLaVA-ov-7B with MM-RLHF and our alignment algorithm
leads to a 19.5% increase in conversational abilities and a
60% improvement in safety.
We have open-sourced the preference dataset, reward model, training and
evaluation code, as well as reward modeling and safety benchmarks. For more
details, please visit our project page: https://mm-rlhf.github.io. | 30 | 67b2ab598191c180b9c4ec10 | null | null |
|
2025-02-16T22:50:38.622000 | Step-Video-T2V Technical Report: The Practice, Challenges, and Future of Video Foundation Model | 3 | {
"_id": "60efceb38432bc401cd0abc8",
"avatarUrl": "/avatars/c3331d9a46da4afcb90a25691d47aed4.svg",
"followerCount": null,
"fullname": "tongwang",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "turrf",
"type": "user"
} | false | null | 2502.10248 | [
{
"_id": "67b2a72e7a49eaea082b9dcf",
"hidden": false,
"name": "Guoqing Ma",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2a72e7a49eaea082b9dd0",
"hidden": false,
"name": "Haoyang Huang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2a72e7a49eaea082b9dd1",
"hidden": false,
"name": "Kun Yan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2a72e7a49eaea082b9dd2",
"hidden": false,
"name": "Liangyu Chen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2a72e7a49eaea082b9dd3",
"hidden": false,
"name": "Nan Duan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2a72e7a49eaea082b9dd4",
"hidden": false,
"name": "Shengming Yin",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2a72e7a49eaea082b9dd5",
"hidden": false,
"name": "Changyi Wan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2a72e7a49eaea082b9dd6",
"hidden": false,
"name": "Ranchen Ming",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2a72e7a49eaea082b9dd7",
"hidden": false,
"name": "Xiaoniu Song",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2a72e7a49eaea082b9dd8",
"hidden": false,
"name": "Xing Chen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2a72e7a49eaea082b9dd9",
"hidden": false,
"name": "Yu Zhou",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2a72e7a49eaea082b9dda",
"hidden": false,
"name": "Deshan Sun",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2a72e7a49eaea082b9ddb",
"hidden": false,
"name": "Deyu Zhou",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2a72e7a49eaea082b9ddc",
"hidden": false,
"name": "Jian Zhou",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2a72e7a49eaea082b9ddd",
"hidden": false,
"name": "Kaijun Tan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2a72e7a49eaea082b9dde",
"hidden": false,
"name": "Kang An",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2a72e7a49eaea082b9ddf",
"hidden": false,
"name": "Mei Chen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2a72e7a49eaea082b9de0",
"hidden": false,
"name": "Wei Ji",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2a72e7a49eaea082b9de1",
"hidden": false,
"name": "Qiling Wu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2a72e7a49eaea082b9de2",
"hidden": false,
"name": "Wen Sun",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2a72e7a49eaea082b9de3",
"hidden": false,
"name": "Xin Han",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2a72e7a49eaea082b9de4",
"hidden": false,
"name": "Yanan Wei",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2a72e7a49eaea082b9de5",
"hidden": false,
"name": "Zheng Ge",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2a72e7a49eaea082b9de6",
"hidden": false,
"name": "Aojie Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2a72e7a49eaea082b9de7",
"hidden": false,
"name": "Bin Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2a72e7a49eaea082b9de8",
"hidden": false,
"name": "Bizhu Huang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2a72e7a49eaea082b9de9",
"hidden": false,
"name": "Bo Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2a72e7a49eaea082b9dea",
"hidden": false,
"name": "Brian Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2a72e7a49eaea082b9deb",
"hidden": false,
"name": "Changxing Miao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2a72e7a49eaea082b9dec",
"hidden": false,
"name": "Chen Xu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2a72e7a49eaea082b9ded",
"hidden": false,
"name": "Chenfei Wu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2a72e7a49eaea082b9dee",
"hidden": false,
"name": "Chenguang Yu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2a72e7a49eaea082b9def",
"hidden": false,
"name": "Dapeng Shi",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2a72e7a49eaea082b9df0",
"hidden": false,
"name": "Dingyuan Hu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2a72e7a49eaea082b9df1",
"hidden": false,
"name": "Enle Liu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2a72e7a49eaea082b9df2",
"hidden": false,
"name": "Gang Yu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2a72e7a49eaea082b9df3",
"hidden": false,
"name": "Ge Yang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2a72e7a49eaea082b9df4",
"hidden": false,
"name": "Guanzhe Huang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2a72e7a49eaea082b9df5",
"hidden": false,
"name": "Gulin Yan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2a72e7a49eaea082b9df6",
"hidden": false,
"name": "Haiyang Feng",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2a72e7a49eaea082b9df7",
"hidden": false,
"name": "Hao Nie",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2a72e7a49eaea082b9df8",
"hidden": false,
"name": "Haonan Jia",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2a72e7a49eaea082b9df9",
"hidden": false,
"name": "Hanpeng Hu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2a72e7a49eaea082b9dfa",
"hidden": false,
"name": "Hanqi Chen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2a72e7a49eaea082b9dfb",
"hidden": false,
"name": "Haolong Yan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2a72e7a49eaea082b9dfc",
"hidden": false,
"name": "Heng Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2a72e7a49eaea082b9dfd",
"hidden": false,
"name": "Hongcheng Guo",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2a72e7a49eaea082b9dfe",
"hidden": false,
"name": "Huilin Xiong",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2a72e7a49eaea082b9dff",
"hidden": false,
"name": "Huixin Xiong",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2a72e7a49eaea082b9e00",
"hidden": false,
"name": "Jiahao Gong",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2a72e7a49eaea082b9e01",
"hidden": false,
"name": "Jianchang Wu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2a72e7a49eaea082b9e02",
"hidden": false,
"name": "Jiaoren Wu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2a72e7a49eaea082b9e03",
"hidden": false,
"name": "Jie Wu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2a72e7a49eaea082b9e04",
"hidden": false,
"name": "Jie Yang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2a72e7a49eaea082b9e05",
"hidden": false,
"name": "Jiashuai Liu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2a72e7a49eaea082b9e06",
"hidden": false,
"name": "Jiashuo Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2a72e7a49eaea082b9e07",
"hidden": false,
"name": "Jingyang Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2a72e7a49eaea082b9e08",
"hidden": false,
"name": "Junjing Guo",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2a72e7a49eaea082b9e09",
"hidden": false,
"name": "Junzhe Lin",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2a72e7a49eaea082b9e0a",
"hidden": false,
"name": "Kaixiang Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2a72e7a49eaea082b9e0b",
"hidden": false,
"name": "Lei Liu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2a72e7a49eaea082b9e0c",
"hidden": false,
"name": "Lei Xia",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2a72e7a49eaea082b9e0d",
"hidden": false,
"name": "Liang Zhao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2a72e7a49eaea082b9e0e",
"hidden": false,
"name": "Liguo Tan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2a72e7a49eaea082b9e0f",
"hidden": false,
"name": "Liwen Huang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2a72e7a49eaea082b9e10",
"hidden": false,
"name": "Liying Shi",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2a72e7a49eaea082b9e11",
"hidden": false,
"name": "Ming Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2a72e7a49eaea082b9e12",
"hidden": false,
"name": "Mingliang Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2a72e7a49eaea082b9e13",
"hidden": false,
"name": "Muhua Cheng",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2a72e7a49eaea082b9e14",
"hidden": false,
"name": "Na Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2a72e7a49eaea082b9e15",
"hidden": false,
"name": "Qiaohui Chen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2a72e7a49eaea082b9e16",
"hidden": false,
"name": "Qinglin He",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2a72e7a49eaea082b9e17",
"hidden": false,
"name": "Qiuyan Liang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2a72e7a49eaea082b9e18",
"hidden": false,
"name": "Quan Sun",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2a72e7a49eaea082b9e19",
"hidden": false,
"name": "Ran Sun",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2a72e7a49eaea082b9e1a",
"hidden": false,
"name": "Rui Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2a72e7a49eaea082b9e1b",
"hidden": false,
"name": "Shaoliang Pang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2a72e7a49eaea082b9e1c",
"hidden": false,
"name": "Shiliang Yang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2a72e7a49eaea082b9e1d",
"hidden": false,
"name": "Sitong Liu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2a72e7a49eaea082b9e1e",
"hidden": false,
"name": "Siqi Liu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2a72e7a49eaea082b9e1f",
"hidden": false,
"name": "Shuli Gao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2a72e7a49eaea082b9e20",
"hidden": false,
"name": "Tiancheng Cao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2a72e7a49eaea082b9e21",
"hidden": false,
"name": "Tianyu Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2a72e7a49eaea082b9e22",
"hidden": false,
"name": "Weipeng Ming",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2a72e7a49eaea082b9e23",
"hidden": false,
"name": "Wenqing He",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2a72e7a49eaea082b9e24",
"hidden": false,
"name": "Xu Zhao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2a72e7a49eaea082b9e25",
"hidden": false,
"name": "Xuelin Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2a72e7a49eaea082b9e26",
"hidden": false,
"name": "Xianfang Zeng",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2a72e7a49eaea082b9e27",
"hidden": false,
"name": "Xiaojia Liu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2a72e7a49eaea082b9e28",
"hidden": false,
"name": "Xuan Yang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2a72e7a49eaea082b9e29",
"hidden": false,
"name": "Yaqi Dai",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2a72e7a49eaea082b9e2a",
"hidden": false,
"name": "Yanbo Yu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2a72e7a49eaea082b9e2b",
"hidden": false,
"name": "Yang Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2a72e7a49eaea082b9e2c",
"hidden": false,
"name": "Yineng Deng",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2a72e7a49eaea082b9e2d",
"hidden": false,
"name": "Yingming Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2a72e7a49eaea082b9e2e",
"hidden": false,
"name": "Yilei Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2a72e7a49eaea082b9e2f",
"hidden": false,
"name": "Yuanwei Lu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2a72e7a49eaea082b9e30",
"hidden": false,
"name": "Yu Chen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2a72e7a49eaea082b9e31",
"hidden": false,
"name": "Yu Luo",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2a72e7a49eaea082b9e32",
"hidden": false,
"name": "Yuchu Luo",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2a72e7a49eaea082b9e33",
"hidden": false,
"name": "Yuhe Yin",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2a72e7a49eaea082b9e34",
"hidden": false,
"name": "Yuheng Feng",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2a72e7a49eaea082b9e35",
"hidden": false,
"name": "Yuxiang Yang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2a72e7a49eaea082b9e36",
"hidden": false,
"name": "Zecheng Tang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2a72e7a49eaea082b9e37",
"hidden": false,
"name": "Zekai Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2a72e7a49eaea082b9e38",
"hidden": false,
"name": "Zidong Yang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2a72e7a49eaea082b9e39",
"hidden": false,
"name": "Binxing Jiao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2a72e7a49eaea082b9e3a",
"hidden": false,
"name": "Jiansheng Chen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2a72e7a49eaea082b9e3b",
"hidden": false,
"name": "Jing Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2a72e7a49eaea082b9e3c",
"hidden": false,
"name": "Shuchang Zhou",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2a72e7a49eaea082b9e3d",
"hidden": false,
"name": "Xiangyu Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2a72e7a49eaea082b9e3e",
"hidden": false,
"name": "Xinhao Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2a72e7a49eaea082b9e3f",
"hidden": false,
"name": "Yibo Zhu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2a72e7a49eaea082b9e40",
"hidden": false,
"name": "Heung-Yeung Shum",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2a72e7a49eaea082b9e41",
"hidden": false,
"name": "Daxin Jiang",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-14T15:58:10 | Step-Video-T2V Technical Report: The Practice, Challenges, and Future of
Video Foundation Model | We present Step-Video-T2V, a state-of-the-art text-to-video pre-trained model
with 30B parameters and the ability to generate videos up to 204 frames in
length. A deep compression Variational Autoencoder, Video-VAE, is designed for
video generation tasks, achieving 16x16 spatial and 8x temporal compression
ratios, while maintaining exceptional video reconstruction quality. User
prompts are encoded using two bilingual text encoders to handle both English
and Chinese. A DiT with 3D full attention is trained using Flow Matching and is
employed to denoise input noise into latent frames. A video-based DPO approach,
Video-DPO, is applied to reduce artifacts and improve the visual quality of the
generated videos. We also detail our training strategies and share key
observations and insights. Step-Video-T2V's performance is evaluated on a novel
video generation benchmark, Step-Video-T2V-Eval, demonstrating its
state-of-the-art text-to-video quality when compared with both open-source and
commercial engines. Additionally, we discuss the limitations of the current
diffusion-based model paradigm and outline future directions for video
foundation models. We make both Step-Video-T2V and Step-Video-T2V-Eval
available at https://github.com/stepfun-ai/Step-Video-T2V. The online version
can be accessed from https://yuewen.cn/videos as well. Our goal is to
accelerate the innovation of video foundation models and empower video content
creators. | 51 | 67b2a7357a49eaea082b9fbf | null | null |
|
2025-02-16T22:22:08.102000 | Region-Adaptive Sampling for Diffusion Transformers | 3 | {
"_id": "62d18eb81e36881a57f29bf4",
"avatarUrl": "/avatars/104851421b4ee9641daaf15942fa7ea1.svg",
"followerCount": 3,
"fullname": "Yif Yang",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "Yif29",
"type": "user"
} | true | null | 2502.10389 | [
{
"_id": "67b2a89ebe31bfaa7cd2bff1",
"hidden": false,
"name": "Ziming Liu",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-18T09:33:14.074Z",
"user": {
"_id": "632924029c3f42ca7149f305",
"avatarUrl": "/avatars/080bc7da4ad2875bdfa359213c88feb7.svg",
"fullname": "Liu Ziming",
"isPro": false,
"type": "user",
"user": "MaruyamaAya"
}
},
{
"_id": "67b2a89ebe31bfaa7cd2bff2",
"hidden": false,
"name": "Yifan Yang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-18T09:33:16.182Z",
"user": {
"_id": "62d18eb81e36881a57f29bf4",
"avatarUrl": "/avatars/104851421b4ee9641daaf15942fa7ea1.svg",
"fullname": "Yif Yang",
"isPro": false,
"type": "user",
"user": "Yif29"
}
},
{
"_id": "67b2a89ebe31bfaa7cd2bff3",
"hidden": false,
"name": "Chengruidong Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2a89ebe31bfaa7cd2bff4",
"hidden": false,
"name": "Yiqi Zhang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-18T09:33:07.641Z",
"user": {
"_id": "6356bf52e983d3c51d212205",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1666629451430-noauth.jpeg",
"fullname": "Yiqi Zhang",
"isPro": false,
"type": "user",
"user": "Viscent"
}
},
{
"_id": "67b2a89ebe31bfaa7cd2bff5",
"hidden": false,
"name": "Lili Qiu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2a89ebe31bfaa7cd2bff6",
"hidden": false,
"name": "Yang You",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2a89ebe31bfaa7cd2bff7",
"hidden": false,
"name": "Yuqing Yang",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-14T18:59:36 | Region-Adaptive Sampling for Diffusion Transformers | Diffusion models (DMs) have become the leading choice for generative tasks
across diverse domains. However, their reliance on multiple sequential forward
passes significantly limits real-time performance. Previous acceleration
methods have primarily focused on reducing the number of sampling steps or
reusing intermediate results, failing to leverage variations across spatial
regions within the image due to the constraints of convolutional U-Net
structures. By harnessing the flexibility of Diffusion Transformers (DiTs) in
handling a variable number of tokens, we introduce RAS, a novel, training-free
sampling strategy that dynamically assigns different sampling ratios to regions
within an image based on the focus of the DiT model. Our key observation is
that during each sampling step, the model concentrates on semantically
meaningful regions, and these areas of focus exhibit strong continuity across
consecutive steps. Leveraging this insight, RAS updates only the regions
currently in focus, while other regions are updated using cached noise from the
previous step. The model's focus is determined based on the output from the
preceding step, capitalizing on the temporal consistency we observed. We
evaluate RAS on Stable Diffusion 3 and Lumina-Next-T2I, achieving speedups up
to 2.36x and 2.51x, respectively, with minimal degradation in generation
quality. Additionally, a user study reveals that RAS delivers comparable
qualities under human evaluation while achieving a 1.6x speedup. Our approach
makes a significant step towards more efficient diffusion transformers,
enhancing their potential for real-time applications. | 52 | 67b2a8a4be31bfaa7cd2c1ad | null | null |
|
2025-02-16T22:20:53.227000 | ZeroBench: An Impossible Visual Benchmark for Contemporary Large Multimodal Models | 5 | {
"_id": "6039478ab3ecf716b1a5fd4d",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6039478ab3ecf716b1a5fd4d/_Thy4E7taiSYBLKxEKJbT.jpeg",
"followerCount": 65,
"fullname": "taesiri",
"isHf": false,
"isMod": false,
"isPro": true,
"name": "taesiri",
"type": "user"
} | true | [
"https://cdn-uploads.huggingface.co/production/uploads/6039478ab3ecf716b1a5fd4d/QJdJ_pJPI20MjNz_q8PTw.png"
] | 2502.09696 | [
{
"_id": "67b2aae22a4cd186392a18b2",
"hidden": false,
"name": "Jonathan Roberts",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-18T09:33:01.809Z",
"user": {
"_id": "632456d21ed511c0c5231afd",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1671535517748-632456d21ed511c0c5231afd.jpeg",
"fullname": "Jonathan Roberts",
"isPro": true,
"type": "user",
"user": "jonathan-roberts1"
}
},
{
"_id": "67b2aae22a4cd186392a18b3",
"hidden": false,
"name": "Mohammad Reza Taesiri",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-18T09:33:04.484Z",
"user": {
"_id": "6039478ab3ecf716b1a5fd4d",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6039478ab3ecf716b1a5fd4d/_Thy4E7taiSYBLKxEKJbT.jpeg",
"fullname": "taesiri",
"isPro": true,
"type": "user",
"user": "taesiri"
}
},
{
"_id": "67b2aae22a4cd186392a18b4",
"hidden": false,
"name": "Ansh Sharma",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2aae22a4cd186392a18b5",
"hidden": false,
"name": "Akash Gupta",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2aae22a4cd186392a18b6",
"hidden": false,
"name": "Samuel Roberts",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2aae22a4cd186392a18b7",
"hidden": false,
"name": "Ioana Croitoru",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2aae22a4cd186392a18b8",
"hidden": false,
"name": "Simion-Vlad Bogolin",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2aae22a4cd186392a18b9",
"hidden": false,
"name": "Jialu Tang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2aae22a4cd186392a18ba",
"hidden": false,
"name": "Florian Langer",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2aae22a4cd186392a18bb",
"hidden": false,
"name": "Vyas Raina",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2aae22a4cd186392a18bc",
"hidden": false,
"name": "Vatsal Raina",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2aae22a4cd186392a18bd",
"hidden": false,
"name": "Hanyi Xiong",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2aae22a4cd186392a18be",
"hidden": false,
"name": "Vishaal Udandarao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2aae22a4cd186392a18bf",
"hidden": false,
"name": "Jingyi Lu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2aae22a4cd186392a18c0",
"hidden": false,
"name": "Shiyang Chen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2aae22a4cd186392a18c1",
"hidden": false,
"name": "Sam Purkis",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2aae22a4cd186392a18c2",
"hidden": false,
"name": "Tianshuo Yan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2aae22a4cd186392a18c3",
"hidden": false,
"name": "Wenye Lin",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2aae22a4cd186392a18c4",
"hidden": false,
"name": "Gyungin Shin",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2aae22a4cd186392a18c5",
"hidden": false,
"name": "Qiaochu Yang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2aae22a4cd186392a18c6",
"hidden": false,
"name": "Anh Totti Nguyen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2aae22a4cd186392a18c7",
"hidden": false,
"name": "Kai Han",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b2aae22a4cd186392a18c8",
"hidden": false,
"name": "Samuel Albanie",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-13T18:59:11 | ZeroBench: An Impossible Visual Benchmark for Contemporary Large
Multimodal Models | Large Multimodal Models (LMMs) exhibit major shortfalls when interpreting
images and, by some measures, have poorer spatial cognition than small children
or animals. Despite this, they attain high scores on many popular visual
benchmarks, with headroom rapidly eroded by an ongoing surge of model progress.
To address this, there is a pressing need for difficult benchmarks that remain
relevant for longer. We take this idea to its limit by introducing ZeroBench, a
lightweight visual reasoning benchmark that is entirely impossible for
contemporary frontier LMMs. Our benchmark consists of 100 manually curated
questions and 334 less difficult subquestions. We evaluate 20 LMMs on
ZeroBench, all of which score 0.0%, and rigorously analyse the errors. To
encourage progress in visual understanding, we publicly release ZeroBench. | 38 | 67b2aae42a4cd186392a195b | null | null |
|
2025-02-16T21:31:11.459000 | STMA: A Spatio-Temporal Memory Agent for Long-Horizon Embodied Task Planning | 2 | {
"_id": "6628c6107751d297d7025a71",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6628c6107751d297d7025a71/S1rm5VIwV2Uxfv8GetKMU.jpeg",
"followerCount": 1,
"fullname": "Lei Mingcong",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "SP4595",
"type": "user"
} | true | null | 2502.10177 | [
{
"_id": "67b29f472ea5fd965beb91ed",
"hidden": false,
"name": "Mingcong Lei",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-20T15:53:20.170Z",
"user": {
"_id": "6628c6107751d297d7025a71",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6628c6107751d297d7025a71/S1rm5VIwV2Uxfv8GetKMU.jpeg",
"fullname": "Lei Mingcong",
"isPro": false,
"type": "user",
"user": "SP4595"
}
},
{
"_id": "67b29f472ea5fd965beb91ee",
"hidden": false,
"name": "Yiming Zhao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b29f472ea5fd965beb91ef",
"hidden": false,
"name": "Ge Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b29f472ea5fd965beb91f0",
"hidden": false,
"name": "Zhixin Mai",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b29f472ea5fd965beb91f1",
"hidden": false,
"name": "Shuguang Cui",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b29f472ea5fd965beb91f2",
"hidden": false,
"name": "Yatong Han",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b29f472ea5fd965beb91f3",
"hidden": false,
"name": "Jinke Ren",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-14T14:12:09 | STMA: A Spatio-Temporal Memory Agent for Long-Horizon Embodied Task
Planning | A key objective of embodied intelligence is enabling agents to perform
long-horizon tasks in dynamic environments while maintaining robust
decision-making and adaptability. To achieve this goal, we propose the
Spatio-Temporal Memory Agent (STMA), a novel framework designed to enhance task
planning and execution by integrating spatio-temporal memory. STMA is built
upon three critical components: (1) a spatio-temporal memory module that
captures historical and environmental changes in real time, (2) a dynamic
knowledge graph that facilitates adaptive spatial reasoning, and (3) a
planner-critic mechanism that iteratively refines task strategies. We evaluate
STMA in the TextWorld environment on 32 tasks, involving multi-step planning
and exploration under varying levels of complexity. Experimental results
demonstrate that STMA achieves a 31.25% improvement in success rate and a 24.7%
increase in average score compared to the state-of-the-art model. The results
highlight the effectiveness of spatio-temporal memory in advancing the memory
capabilities of embodied agents. | 6 | 67b29f4a2ea5fd965beb9286 | null | null |
|
2025-02-14T21:20:14.771000 | Latent Radiance Fields with 3D-aware 2D Representations | 2 | {
"_id": "65495d1008775ce78e43e77d",
"avatarUrl": "/avatars/172678e8187acfec0aa8e647479cbb81.svg",
"followerCount": 1,
"fullname": "Chaoyi Zhou",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "chaoyizh",
"type": "user"
} | true | null | 2502.09613 | [
{
"_id": "67aff152910c82946ece0343",
"hidden": false,
"name": "Chaoyi Zhou",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-18T09:33:45.516Z",
"user": {
"_id": "65495d1008775ce78e43e77d",
"avatarUrl": "/avatars/172678e8187acfec0aa8e647479cbb81.svg",
"fullname": "Chaoyi Zhou",
"isPro": false,
"type": "user",
"user": "chaoyizh"
}
},
{
"_id": "67aff152910c82946ece0344",
"hidden": false,
"name": "Xi Liu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aff152910c82946ece0345",
"hidden": false,
"name": "Feng Luo",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aff152910c82946ece0346",
"hidden": false,
"name": "Siyu Huang",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-13T18:59:09 | Latent Radiance Fields with 3D-aware 2D Representations | Latent 3D reconstruction has shown great promise in empowering 3D semantic
understanding and 3D generation by distilling 2D features into the 3D space.
However, existing approaches struggle with the domain gap between 2D feature
space and 3D representations, resulting in degraded rendering performance. To
address this challenge, we propose a novel framework that integrates 3D
awareness into the 2D latent space. The framework consists of three stages: (1)
a correspondence-aware autoencoding method that enhances the 3D consistency of
2D latent representations, (2) a latent radiance field (LRF) that lifts these
3D-aware 2D representations into 3D space, and (3) a VAE-Radiance Field
(VAE-RF) alignment strategy that improves image decoding from the rendered 2D
representations. Extensive experiments demonstrate that our method outperforms
the state-of-the-art latent 3D reconstruction approaches in terms of synthesis
performance and cross-dataset generalizability across diverse indoor and
outdoor scenes. To our knowledge, this is the first work showing that radiance
field representations constructed from 2D latent representations can yield
photorealistic 3D reconstruction performance. | 6 | 67aff158910c82946ece0458 | null | null |
|
2025-02-14T09:18:18.443000 | Mathematical Reasoning in Large Language Models: Assessing Logical and Arithmetic Errors across Wide Numerical Ranges | 2 | {
"_id": "64e77b47d96966317b45eeb3",
"avatarUrl": "/avatars/6b67eba3f15d6cd86ac3ad55c1daf166.svg",
"followerCount": 1,
"fullname": "Minwu Kim",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "guactastesgood",
"type": "user"
} | true | null | 2502.08680 | [
{
"_id": "67af2b2d1f297f2bdacded89",
"hidden": false,
"name": "Safal Shrestha",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-14T12:04:44.693Z",
"user": {
"_id": "64cb922ec7f30fbf7b91a9a7",
"avatarUrl": "/avatars/457eae5e56b9641ee5543146447d1755.svg",
"fullname": "Safal Shrestha",
"isPro": false,
"type": "user",
"user": "safal312"
}
},
{
"_id": "67af2b2d1f297f2bdacded8a",
"hidden": false,
"name": "Minwu Kim",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-14T12:00:37.837Z",
"user": {
"_id": "64e77b47d96966317b45eeb3",
"avatarUrl": "/avatars/6b67eba3f15d6cd86ac3ad55c1daf166.svg",
"fullname": "Minwu Kim",
"isPro": false,
"type": "user",
"user": "guactastesgood"
}
},
{
"_id": "67af2b2d1f297f2bdacded8b",
"hidden": false,
"name": "Keith Ross",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-12T09:53:10 | Mathematical Reasoning in Large Language Models: Assessing Logical and
Arithmetic Errors across Wide Numerical Ranges | Mathematical reasoning in Large Language Models (LLMs) is often evaluated
using benchmarks with limited numerical ranges, failing to reflect real-world
problem-solving across diverse scales. Furthermore, most existing evaluation
methods only compare model outputs to ground-truth answers, obscuring insights
into reasoning processes. To address these limitations, we introduce
GSM-Ranges, a dataset generator derived from GSM8K that systematically perturbs
numerical values in math problems to assess model robustness across varying
numerical scales. Additionally, we propose a novel grading methodology that
distinguishes between logical and non-logical errors, offering a more precise
evaluation of reasoning processes beyond computational accuracy. Our
experiments with various models reveal a significant increase in logical error
rates (up to 14 percentage points) as numerical complexity rises, demonstrating a
general weakness in reasoning with out-of-distribution numerical values.
Moreover, while models demonstrate high accuracy on standalone arithmetic
tasks, their performance deteriorates substantially when computations are
embedded within word problems. These findings provide a comprehensive
evaluation of LLMs' mathematical reasoning capabilities and inform future
research directions for improving numerical generalization in language models. | 11 | 67af2b2e1f297f2bdacdedd3 | null | null |
|
2025-02-14T08:47:33.396000 | VFX Creator: Animated Visual Effect Generation with Controllable Diffusion Transformer | 2 | {
"_id": "63468720dd6d90d82ccf3450",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63468720dd6d90d82ccf3450/tVBFlmZNz8FRMkOrDaDID.jpeg",
"followerCount": 32,
"fullname": "YSH",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "BestWishYsh",
"type": "user"
} | false | null | 2502.05979 | [
{
"_id": "67ab5cc2b8d7fe3b96361e31",
"hidden": false,
"name": "Xinyu Liu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ab5cc2b8d7fe3b96361e32",
"hidden": false,
"name": "Ailing Zeng",
"status": "extracted_pending",
"statusLastChangedAt": "2025-02-11T14:21:00.414Z",
"user": {
"_id": "665030b8c08859923b274a55",
"avatarUrl": "/avatars/06a3f3cd533caa459f3da1bb5e0f4c1f.svg",
"fullname": "aznukad",
"isPro": false,
"type": "user",
"user": "alkxncda"
}
},
{
"_id": "67ab5cc2b8d7fe3b96361e33",
"hidden": false,
"name": "Wei Xue",
"status": "extracted_pending",
"statusLastChangedAt": "2025-02-11T14:21:00.414Z",
"user": {
"_id": "6628adb14277eae0da5eee28",
"avatarUrl": "/avatars/6cb41b80cc5e014e455dfc2a22682e64.svg",
"fullname": "HKUST Audio",
"isPro": true,
"type": "user",
"user": "HKUST-Audio"
}
},
{
"_id": "67ab5cc2b8d7fe3b96361e34",
"hidden": false,
"name": "Harry Yang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ab5cc2b8d7fe3b96361e35",
"hidden": false,
"name": "Wenhan Luo",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ab5cc2b8d7fe3b96361e36",
"hidden": false,
"name": "Qifeng Liu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ab5cc2b8d7fe3b96361e37",
"hidden": false,
"name": "Yike Guo",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-09T18:12:25 | VFX Creator: Animated Visual Effect Generation with Controllable
Diffusion Transformer | Crafting magic and illusions is one of the most thrilling aspects of
filmmaking, with visual effects (VFX) serving as the powerhouse behind
unforgettable cinematic experiences. While recent advances in generative
artificial intelligence have driven progress in generic image and video
synthesis, the domain of controllable VFX generation remains relatively
underexplored. In this work, we propose a novel paradigm for animated VFX
generation as image animation, where dynamic effects are generated from
user-friendly textual descriptions and static reference images.
Our work makes two primary contributions: (i) Open-VFX, the first
high-quality VFX video dataset spanning 15 diverse effect categories, annotated
with textual descriptions, instance segmentation masks for spatial
conditioning, and start-end timestamps for temporal control. (ii) VFX Creator,
a simple yet effective controllable VFX generation framework based on a Video
Diffusion Transformer. The model incorporates a spatial and temporal
controllable LoRA adapter, requiring minimal training videos. Specifically, a
plug-and-play mask control module enables instance-level spatial manipulation,
while tokenized start-end motion timestamps embedded in the diffusion process,
alongside the text encoder, allow precise temporal control over effect timing
and pace.
Extensive experiments on the Open-VFX test set demonstrate the superiority of
the proposed system in generating realistic and dynamic effects, achieving
state-of-the-art performance and generalization ability in both spatial and
temporal controllability. Furthermore, we introduce a specialized metric to
evaluate the precision of temporal control. By bridging traditional VFX
techniques with generative approaches, VFX Creator unlocks new possibilities
for efficient and high-quality video effect generation, making advanced VFX
accessible to a broader audience. | 8 | 67ab5cccb8d7fe3b96361ff7 | null | null |
|
2025-02-14T04:50:27.474000 | DexTrack: Towards Generalizable Neural Tracking Control for Dexterous Manipulation from Human References | 2 | {
"_id": "65b8070ad49f4330ab0ca5f7",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/t4fI-3djMfgXCchU_xpjL.png",
"followerCount": 2,
"fullname": "Xueyi Liu",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "xymeow7",
"type": "user"
} | true | [
"https://cdn-uploads.huggingface.co/production/uploads/65b8070ad49f4330ab0ca5f7/Ir-_GtsnqYII8yhrpJRD5.mp4"
] | 2502.09614 | [
{
"_id": "67af107d6bd28b8bd4e13c38",
"hidden": false,
"name": "Xueyi Liu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-14T12:55:06.626Z",
"user": {
"_id": "65b8070ad49f4330ab0ca5f7",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/t4fI-3djMfgXCchU_xpjL.png",
"fullname": "Xueyi Liu",
"isPro": false,
"type": "user",
"user": "xymeow7"
}
},
{
"_id": "67af107d6bd28b8bd4e13c39",
"hidden": false,
"name": "Jianibieke Adalibieke",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67af107d6bd28b8bd4e13c3a",
"hidden": false,
"name": "Qianwei Han",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67af107d6bd28b8bd4e13c3b",
"hidden": false,
"name": "Yuzhe Qin",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67af107d6bd28b8bd4e13c3c",
"hidden": false,
"name": "Li Yi",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-13T18:59:13 | DexTrack: Towards Generalizable Neural Tracking Control for Dexterous
Manipulation from Human References | We address the challenge of developing a generalizable neural tracking
controller for dexterous manipulation from human references. This controller
aims to manage a dexterous robot hand to manipulate diverse objects for various
purposes defined by kinematic human-object interactions. Developing such a
controller is complicated by the intricate contact dynamics of dexterous
manipulation and the need for adaptivity, generalizability, and robustness.
Current reinforcement learning and trajectory optimization methods often fall
short due to their dependence on task-specific rewards or precise system
models. We introduce an approach that curates large-scale successful robot
tracking demonstrations, comprising pairs of human references and robot
actions, to train a neural controller. Utilizing a data flywheel, we
iteratively enhance the controller's performance, as well as the number and
quality of successful tracking demonstrations. We exploit available tracking
demonstrations and carefully integrate reinforcement learning and imitation
learning to boost the controller's performance in dynamic environments. At the
same time, to obtain high-quality tracking demonstrations, we individually
optimize per-trajectory tracking by leveraging the learned tracking controller
in a homotopy optimization method. The homotopy optimization, mimicking
chain-of-thought, aids in solving challenging trajectory tracking problems to
increase demonstration diversity. We showcase our success by training a
generalizable neural controller and evaluating it in both simulation and real
world. Our method achieves over a 10% improvement in success rates compared to
leading baselines. The project website with animated results is available at
https://meowuu7.github.io/DexTrack/. | 12 | 67af10806bd28b8bd4e13ce5 | null | null |
|
2025-02-14T04:00:29.585000 | 3CAD: A Large-Scale Real-World 3C Product Dataset for Unsupervised Anomaly | 2 | {
"_id": "648bf9afded4c3eb970eca85",
"avatarUrl": "/avatars/a4b7b7fd6c1fca0eac85da7383f58361.svg",
"followerCount": null,
"fullname": "enquan yang",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "enquan2022",
"type": "user"
} | true | [
"https://cdn-uploads.huggingface.co/production/uploads/648bf9afded4c3eb970eca85/n-ufwo6Smo9TdMiTqKG8_.png"
] | 2502.05761 | [
{
"_id": "67aee1cd7af05a21a72e793d",
"hidden": false,
"name": "Enquan Yang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-14T08:01:03.483Z",
"user": {
"_id": "648bf9afded4c3eb970eca85",
"avatarUrl": "/avatars/a4b7b7fd6c1fca0eac85da7383f58361.svg",
"fullname": "enquan yang",
"isPro": false,
"type": "user",
"user": "enquan2022"
}
},
{
"_id": "67aee1cd7af05a21a72e793e",
"hidden": false,
"name": "Peng Xing",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aee1cd7af05a21a72e793f",
"hidden": false,
"name": "Hanyang Sun",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aee1cd7af05a21a72e7940",
"hidden": false,
"name": "Wenbo Guo",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aee1cd7af05a21a72e7941",
"hidden": false,
"name": "Yuanwei Ma",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aee1cd7af05a21a72e7942",
"hidden": false,
"name": "Zechao Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aee1cd7af05a21a72e7943",
"hidden": false,
"name": "Dan Zeng",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-09T03:37:54 | 3CAD: A Large-Scale Real-World 3C Product Dataset for Unsupervised
Anomaly | Industrial anomaly detection achieves progress thanks to datasets such as
MVTec-AD and VisA. However, they suffer from limitations in terms of the
number of defect samples, types of defects, and availability of real-world
scenes. These constraints inhibit researchers from further exploring the
performance of industrial detection with higher accuracy. To this end, we
propose a new large-scale anomaly detection dataset called 3CAD, which is
derived from real 3C production lines. Specifically, the proposed 3CAD
includes eight different types of manufactured parts, totaling 27,039
high-resolution images labeled with pixel-level anomalies. The key features of
3CAD are that it covers anomalous regions of different sizes, multiple anomaly
types, and the possibility of multiple anomalous regions and multiple anomaly
types per anomaly image. This is the largest and first anomaly detection
dataset dedicated to 3C product quality control for community exploration and
development. Meanwhile, we introduce a simple yet effective framework for
unsupervised anomaly detection: a Coarse-to-Fine detection paradigm with
Recovery Guidance (CFRG). To detect small defect anomalies, the proposed CFRG
utilizes a coarse-to-fine detection paradigm. Specifically, we utilize a
heterogeneous distillation model for coarse localization and then fine
localization through a segmentation model. In addition, to better capture
normal patterns, we introduce recovery features as guidance. Finally, we report
the results of our CFRG framework and popular anomaly detection methods on
the 3CAD dataset, demonstrating strong competitiveness and providing a highly
challenging benchmark to promote the development of the anomaly detection
field. Data and code are available: https://github.com/EnquanYang2022/3CAD. | 6 | 67aee1cf7af05a21a72e799b | null | null |
|
2025-02-14T02:58:25.756000 | Can this Model Also Recognize Dogs? Zero-Shot Model Search from Weights | 2 | {
"_id": "6465fd33dac127ac80f0b334",
"avatarUrl": "/avatars/113f02c1b1f8d33d3487daa867afcd3f.svg",
"followerCount": 2,
"fullname": "Jonathan Kahana",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "jonkahana",
"type": "user"
} | true | null | 2502.09619 | [
{
"_id": "67aef6212c36e4d8bd23740e",
"hidden": false,
"name": "Jonathan Kahana",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-14T12:41:31.020Z",
"user": {
"_id": "6465fd33dac127ac80f0b334",
"avatarUrl": "/avatars/113f02c1b1f8d33d3487daa867afcd3f.svg",
"fullname": "Jonathan Kahana",
"isPro": false,
"type": "user",
"user": "jonkahana"
}
},
{
"_id": "67aef6212c36e4d8bd23740f",
"hidden": false,
"name": "Or Nathan",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-20T09:37:48.235Z",
"user": {
"_id": "67b603c90918c8645fff91e9",
"avatarUrl": "/avatars/d590a2055b5553ba6e7487f156aaf06c.svg",
"fullname": "Or Nathan",
"isPro": false,
"type": "user",
"user": "OrNathan"
}
},
{
"_id": "67aef6212c36e4d8bd237410",
"hidden": false,
"name": "Eliahu Horwitz",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-14T11:10:52.689Z",
"user": {
"_id": "630dd4218df86f1e5beb2ed7",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/630dd4218df86f1e5beb2ed7/fKvNWyWv6CVBdbXXUlrYv.jpeg",
"fullname": "Eliahu Horwitz",
"isPro": false,
"type": "user",
"user": "Eliahu"
}
},
{
"_id": "67aef6212c36e4d8bd237411",
"hidden": false,
"name": "Yedid Hoshen",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-14T12:41:43.672Z",
"user": {
"_id": "646cfc3b4220471ca0c56b20",
"avatarUrl": "/avatars/19d6ab141ec2cd25c1c3b45fd8f69910.svg",
"fullname": "Yedid Hoshen",
"isPro": false,
"type": "user",
"user": "yedid"
}
}
] | 2025-02-13T18:59:44 | Can this Model Also Recognize Dogs? Zero-Shot Model Search from Weights | With the increasing numbers of publicly available models, there are probably
pretrained, online models for most tasks users require. However, current model
search methods are rudimentary, essentially a text-based search in the
documentation, so users often cannot find the relevant models. This paper presents
ProbeLog, a method for retrieving classification models that can recognize a
target concept, such as "Dog", without access to model metadata or training
data. Differently from previous probing methods, ProbeLog computes a descriptor
for each output dimension (logit) of each model, by observing its responses on
a fixed set of inputs (probes). Our method supports both logit-based retrieval
("find more logits like this") and zero-shot, text-based retrieval ("find all
logits corresponding to dogs"). As probing-based representations require
multiple costly feedforward passes through the model, we develop a method,
based on collaborative filtering, that reduces the cost of encoding
repositories by 3x. We demonstrate that ProbeLog achieves high retrieval
accuracy, both in real-world and fine-grained search tasks and is scalable to
full-size repositories. | 31 | 67aef6222c36e4d8bd237472 | null | null |
|
2025-02-14T02:50:35.108000 | CoSER: Coordinating LLM-Based Persona Simulation of Established Roles | 2 | {
"_id": "64c7bf2c4524c2aea7eac0b3",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/64c7bf2c4524c2aea7eac0b3/5ocZ69MvN4RFv86Aa7ks3.png",
"followerCount": 3,
"fullname": "Xintao Wang",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "Neph0s",
"type": "user"
} | true | null | 2502.09082 | [
{
"_id": "67aee90c208d299238758622",
"hidden": true,
"name": "Xintao Wang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-19T14:37:56.459Z",
"user": {
"_id": "64c7bf2c4524c2aea7eac0b3",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/64c7bf2c4524c2aea7eac0b3/5ocZ69MvN4RFv86Aa7ks3.png",
"fullname": "Xintao Wang",
"isPro": false,
"type": "user",
"user": "Neph0s"
}
},
{
"_id": "67aee90c208d299238758623",
"hidden": false,
"name": "Heng Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aee90c208d299238758624",
"hidden": false,
"name": "Yifei Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aee90c208d299238758625",
"hidden": false,
"name": "Xinfeng Yuan",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-20T09:37:50.630Z",
"user": {
"_id": "6749b9b54431ba7184411328",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/no-auth/c2DvvGF_Ga5rKY9iJuyib.png",
"fullname": "Xinfeng",
"isPro": false,
"type": "user",
"user": "Joanna-Yuan"
}
},
{
"_id": "67aee90c208d299238758626",
"hidden": false,
"name": "Rui Xu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aee90c208d299238758627",
"hidden": false,
"name": "Jen-tse Huang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aee90c208d299238758628",
"hidden": false,
"name": "Siyu Yuan",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-14T12:43:00.148Z",
"user": {
"_id": "62d62b333bf5e059f7d2b286",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1668513815771-62d62b333bf5e059f7d2b286.jpeg",
"fullname": "Siyu Yuan",
"isPro": false,
"type": "user",
"user": "siyuyuan"
}
},
{
"_id": "67aee90c208d299238758629",
"hidden": false,
"name": "Haoran Guo",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-14T12:42:42.152Z",
"user": {
"_id": "64f06b030ba7caa06a8bf13e",
"avatarUrl": "/avatars/a9dce43ecdd0339c438a64e28dcd3fcf.svg",
"fullname": "Haoran Guo",
"isPro": false,
"type": "user",
"user": "Haoran-Guo"
}
},
{
"_id": "67aee90c208d29923875862a",
"hidden": false,
"name": "Jiangjie Chen",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-14T12:42:33.652Z",
"user": {
"_id": "606ed1884ffe81d1e03e81e5",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1639375346654-606ed1884ffe81d1e03e81e5.png",
"fullname": "Jiangjie Chen",
"isPro": false,
"type": "user",
"user": "jiangjiechen"
}
},
{
"_id": "67aee90c208d29923875862b",
"hidden": false,
"name": "Wei Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aee90c208d29923875862c",
"hidden": false,
"name": "Yanghua Xiao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aee90c208d29923875862d",
"hidden": false,
"name": "Shuchang Zhou",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-14T12:42:04.561Z",
"user": {
"_id": "620531f8522e40b4a18d872f",
"avatarUrl": "/avatars/cd2fba21c499e27dea75e571a3b75228.svg",
"fullname": "Shuchang Zhou",
"isPro": false,
"type": "user",
"user": "zsc"
}
}
] | 2025-02-13T08:55:24 | CoSER: Coordinating LLM-Based Persona Simulation of Established Roles | Role-playing language agents (RPLAs) have emerged as promising applications
of large language models (LLMs). However, simulating established characters
presents a challenging task for RPLAs, due to the lack of authentic character
datasets and nuanced evaluation methods using such data. In this paper, we
present CoSER, a collection of a high-quality dataset, open models, and an
evaluation protocol towards effective RPLAs of established characters. The
CoSER dataset covers 17,966 characters from 771 renowned books. It provides
authentic dialogues with real-world intricacies, as well as diverse data types
such as conversation setups, character experiences and internal thoughts.
Drawing from acting methodology, we introduce given-circumstance acting for
training and evaluating role-playing LLMs, where LLMs sequentially portray
multiple characters in book scenes. Using our dataset, we develop CoSER 8B and
CoSER 70B, i.e., advanced open role-playing LLMs built on LLaMA-3.1 models.
Extensive experiments demonstrate the value of the CoSER dataset for RPLA
training, evaluation and retrieval. Moreover, CoSER 70B exhibits
state-of-the-art performance surpassing or matching GPT-4o on our evaluation
and three existing benchmarks, i.e., achieving 75.80% and 93.47% accuracy on
the InCharacter and LifeChoice benchmarks respectively. | 27 | 67aee90f208d2992387586d1 | null | null |
|
2025-02-14T02:35:53.718000 | SQuARE: Sequential Question Answering Reasoning Engine for Enhanced Chain-of-Thought in Large Language Models | 2 | {
"_id": "62d93cd728f9c86a4031562e",
"avatarUrl": "/avatars/4619930d15512ec9b80b01c62e986217.svg",
"followerCount": null,
"fullname": "Daniel Fleischer",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "danf",
"type": "user"
} | true | null | 2502.09390 | [
{
"_id": "67aef17da9f929ce0ca3e36b",
"hidden": false,
"name": "Daniel Fleischer",
"status": "extracted_pending",
"statusLastChangedAt": "2025-02-14T07:32:14.019Z",
"user": {
"_id": "62d93cd728f9c86a4031562e",
"avatarUrl": "/avatars/4619930d15512ec9b80b01c62e986217.svg",
"fullname": "Daniel Fleischer",
"isPro": false,
"type": "user",
"user": "danf"
}
},
{
"_id": "67aef17da9f929ce0ca3e36c",
"hidden": false,
"name": "Moshe Berchansky",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-14T12:52:07.600Z",
"user": {
"_id": "63e0c8875c6964861ebb0c49",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63e0c8875c6964861ebb0c49/yzkhPSxgXtJCM62iMBOOK.jpeg",
"fullname": "Moshe Berchansky",
"isPro": false,
"type": "user",
"user": "mber"
}
},
{
"_id": "67aef17da9f929ce0ca3e36d",
"hidden": false,
"name": "Gad Markovits",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-14T12:52:13.549Z",
"user": {
"_id": "666154a2694ec45eaabc19c3",
"avatarUrl": "/avatars/b0022ad6dd992c75a51973070b302315.svg",
"fullname": "Gad Markovits",
"isPro": false,
"type": "user",
"user": "gadmarkovits"
}
},
{
"_id": "67aef17da9f929ce0ca3e36e",
"hidden": false,
"name": "Moshe Wasserblat",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-14T12:52:19.452Z",
"user": {
"_id": "6013c64031d84a3bb7650038",
"avatarUrl": "/avatars/e05ca715004b39e79472399f75010bda.svg",
"fullname": "Moshe Wasserblat",
"isPro": false,
"type": "user",
"user": "moshew"
}
}
] | 2025-02-13T15:07:20 | SQuARE: Sequential Question Answering Reasoning Engine for Enhanced
Chain-of-Thought in Large Language Models | In the rapidly evolving field of Natural Language Processing, Large Language
Models (LLMs) are tasked with increasingly complex reasoning challenges.
Traditional methods like chain-of-thought prompting have shown promise but
often fall short in fully leveraging a model's reasoning capabilities. This
paper introduces SQuARE (Sequential Question Answering Reasoning Engine), a
novel prompting technique designed to improve reasoning through a
self-interrogation paradigm. Building upon CoT frameworks, SQuARE prompts
models to generate and resolve multiple auxiliary questions before tackling the
main query, promoting a more thorough exploration of various aspects of a
topic. Our expansive evaluations, conducted with Llama 3 and GPT-4o models
across multiple question-answering datasets, demonstrate that SQuARE
significantly surpasses traditional CoT prompts and existing
rephrase-and-respond methods. By systematically decomposing queries, SQuARE
advances LLM capabilities in reasoning tasks. The code is publicly available at
https://github.com/IntelLabs/RAG-FiT/tree/square. | 16 | 67aef17ea9f929ce0ca3e3bf | null | null |
|
2025-02-14T02:27:45.749000 | Exploring the Potential of Encoder-free Architectures in 3D LMMs | 2 | {
"_id": "647d9ab61a1fcad2fdbf2d3d",
"avatarUrl": "/avatars/48c8aeae8979d2c87df8bde922437d62.svg",
"followerCount": 8,
"fullname": "Ziyu Guo",
"isHf": false,
"isMod": false,
"isPro": true,
"name": "ZiyuG",
"type": "user"
} | false | null | 2502.09620 | [
{
"_id": "67aeec91b1bbfb68824df5d1",
"hidden": false,
"name": "Yiwen Tang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-14T08:00:57.216Z",
"user": {
"_id": "6552f1ad5d55ccb20e9142a0",
"avatarUrl": "/avatars/0e3e80cba64b5ae0bc5638694ac33dbf.svg",
"fullname": "Ivan Tang",
"isPro": false,
"type": "user",
"user": "IvanTang"
}
},
{
"_id": "67aeec91b1bbfb68824df5d2",
"hidden": false,
"name": "Zoey Guo",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-14T12:02:40.282Z",
"user": {
"_id": "642a8302d651bae3c11b72b1",
"avatarUrl": "/avatars/4d2d422613e274d80482fed9a7d3f785.svg",
"fullname": "Zoey Guo",
"isPro": false,
"type": "user",
"user": "Purple1288"
}
},
{
"_id": "67aeec91b1bbfb68824df5d3",
"hidden": false,
"name": "Zhuhao Wang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-14T12:02:47.566Z",
"user": {
"_id": "672489e32fd598f719be33ba",
"avatarUrl": "/avatars/63a6edf7bf38957e029fa52c1a7f9061.svg",
"fullname": "Zhuhao Wang",
"isPro": false,
"type": "user",
"user": "zhuhaow"
}
},
{
"_id": "67aeec91b1bbfb68824df5d4",
"hidden": false,
"name": "Ray Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aeec91b1bbfb68824df5d5",
"hidden": false,
"name": "Qizhi Chen",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-14T12:03:12.952Z",
"user": {
"_id": "6535045a910b844786a6642f",
"avatarUrl": "/avatars/37a94864a7a348151837b421ea6d77e3.svg",
"fullname": "Qizhi Chen",
"isPro": false,
"type": "user",
"user": "Tavish9"
}
},
{
"_id": "67aeec91b1bbfb68824df5d6",
"hidden": false,
"name": "Junli Liu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aeec91b1bbfb68824df5d7",
"hidden": false,
"name": "Delin Qu",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-14T08:00:55.263Z",
"user": {
"_id": "64daecec888b7e9c400f59b5",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/64daecec888b7e9c400f59b5/f4pfOfWk6jYJX-Nf2-qHn.png",
"fullname": "Delin Qu",
"isPro": false,
"type": "user",
"user": "delinqu"
}
},
{
"_id": "67aeec91b1bbfb68824df5d8",
"hidden": false,
"name": "Zhigang Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aeec91b1bbfb68824df5d9",
"hidden": false,
"name": "Dong Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aeec91b1bbfb68824df5da",
"hidden": false,
"name": "Xuelong Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aeec91b1bbfb68824df5db",
"hidden": false,
"name": "Bin Zhao",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-13T18:59:45 | Exploring the Potential of Encoder-free Architectures in 3D LMMs | Encoder-free architectures have been preliminarily explored in the 2D visual
domain, yet it remains an open question whether they can be effectively applied
to 3D understanding scenarios. In this paper, we present the first
comprehensive investigation into the potential of encoder-free architectures to
overcome the challenges of encoder-based 3D Large Multimodal Models (LMMs).
These challenges include the failure to adapt to varying point cloud
resolutions and the point features from the encoder not meeting the semantic
needs of Large Language Models (LLMs). We identify key aspects for 3D LMMs to
remove the encoder and enable the LLM to assume the role of the 3D encoder: 1)
We propose the LLM-embedded Semantic Encoding strategy in the pre-training
stage, exploring the effects of various point cloud self-supervised losses, and
we present the Hybrid Semantic Loss to extract high-level semantics. 2) We
introduce the Hierarchical Geometry Aggregation strategy in the instruction
tuning stage. This incorporates inductive bias into the LLM early layers to
focus on the local details of the point clouds. To this end, we present the
first Encoder-free 3D LMM, ENEL. Our 7B model rivals the current
state-of-the-art model, ShapeLLM-13B, achieving 55.0%, 50.92%, and 42.7% on the
classification, captioning, and VQA tasks, respectively. Our results
demonstrate that the encoder-free architecture is highly promising for
replacing encoder-based architectures in the field of 3D understanding. The
code is released at https://github.com/Ivan-Tang-3D/ENEL | 25 | 67aeec92b1bbfb68824df61f | null | null |
|
2025-02-14T01:34:58.800000 | MME-CoT: Benchmarking Chain-of-Thought in Large Multimodal Models for Reasoning Quality, Robustness, and Efficiency | 2 | {
"_id": "6349214f8146350b3a4c5cdf",
"avatarUrl": "/avatars/cfd24caac9a87efb528d0f4c375932bc.svg",
"followerCount": 8,
"fullname": "Dongzhi Jiang",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "CaraJ",
"type": "user"
} | true | null | 2502.09621 | [
{
"_id": "67aee0229e69670f49533146",
"hidden": false,
"name": "Dongzhi Jiang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-14T08:01:05.736Z",
"user": {
"_id": "6349214f8146350b3a4c5cdf",
"avatarUrl": "/avatars/cfd24caac9a87efb528d0f4c375932bc.svg",
"fullname": "Dongzhi Jiang",
"isPro": false,
"type": "user",
"user": "CaraJ"
}
},
{
"_id": "67aee0229e69670f49533147",
"hidden": false,
"name": "Renrui Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aee0229e69670f49533148",
"hidden": false,
"name": "Ziyu Guo",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aee0229e69670f49533149",
"hidden": false,
"name": "Yanwei Li",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-14T12:50:20.626Z",
"user": {
"_id": "643ff78cdc984afcbbbc3b1a",
"avatarUrl": "/avatars/eec5198ce88aaf8156840bec0d190a7f.svg",
"fullname": "Yanwei Li",
"isPro": false,
"type": "user",
"user": "YanweiLi"
}
},
{
"_id": "67aee0229e69670f4953314a",
"hidden": false,
"name": "Yu Qi",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aee0229e69670f4953314b",
"hidden": false,
"name": "Xinyan Chen",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-14T12:50:07.239Z",
"user": {
"_id": "647c7a4ed412b3b376572a00",
"avatarUrl": "/avatars/9cc310fd3f9e3f211475816ed9b0cdaa.svg",
"fullname": "Xinyan Chen",
"isPro": false,
"type": "user",
"user": "xy06"
}
},
{
"_id": "67aee0229e69670f4953314c",
"hidden": false,
"name": "Liuhui Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aee0229e69670f4953314d",
"hidden": false,
"name": "Jianhan Jin",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-14T12:49:46.156Z",
"user": {
"_id": "67aeecf31f297f2bdabd91bf",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/no-auth/mZBgh3oudPtx_rgSt8PJ7.png",
"fullname": "Jianhan Jin",
"isPro": false,
"type": "user",
"user": "Pala718"
}
},
{
"_id": "67aee0229e69670f4953314e",
"hidden": false,
"name": "Claire Guo",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-14T12:49:39.014Z",
"user": {
"_id": "64862c6e19b74d6d646baa88",
"avatarUrl": "/avatars/5ca91ff2b7dddb36378edf74067cd9e2.svg",
"fullname": "Claire Guo",
"isPro": false,
"type": "user",
"user": "clairerg"
}
},
{
"_id": "67aee0229e69670f4953314f",
"hidden": false,
"name": "Shen Yan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aee0229e69670f49533150",
"hidden": false,
"name": "Bo Zhang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-20T15:53:22.348Z",
"user": {
"_id": "643dfd235aafbdca3a5792c0",
"avatarUrl": "/avatars/ce8553cf5936012c692e08054ee27937.svg",
"fullname": "Bo Zhang",
"isPro": false,
"type": "user",
"user": "BoZhang"
}
},
{
"_id": "67aee0229e69670f49533151",
"hidden": false,
"name": "Chaoyou Fu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aee0229e69670f49533152",
"hidden": false,
"name": "Peng Gao",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-14T12:49:22.182Z",
"user": {
"_id": "6759af3eccbc8817f9169179",
"avatarUrl": "/avatars/49e64c7ccf71b8f25c52783b6ae93620.svg",
"fullname": "Peng Gao",
"isPro": false,
"type": "user",
"user": "gaopenghigh"
}
},
{
"_id": "67aee0229e69670f49533153",
"hidden": false,
"name": "Hongsheng Li",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-14T12:49:00.794Z",
"user": {
"_id": "65c04e9c27a5fdca81abcbd9",
"avatarUrl": "/avatars/12a155683c824fa23da4a9e2bed4f64e.svg",
"fullname": "Hongsheng LI",
"isPro": false,
"type": "user",
"user": "hsli-cuhk"
}
}
] | 2025-02-13T18:59:46 | MME-CoT: Benchmarking Chain-of-Thought in Large Multimodal Models for
Reasoning Quality, Robustness, and Efficiency | Answering questions with Chain-of-Thought (CoT) has significantly enhanced
the reasoning capabilities of Large Language Models (LLMs), yet its impact on
Large Multimodal Models (LMMs) still lacks a systematic assessment and in-depth
investigation. In this paper, we introduce MME-CoT, a specialized benchmark
evaluating the CoT reasoning performance of LMMs, spanning six domains: math,
science, OCR, logic, space-time, and general scenes. As the first comprehensive
study in this area, we propose a thorough evaluation suite incorporating three
novel metrics that assess the reasoning quality, robustness, and efficiency at
a fine-grained level. Leveraging curated high-quality data and a unique
evaluation strategy, we conduct an in-depth analysis of state-of-the-art LMMs,
uncovering several key insights: 1) Models with reflection mechanism
demonstrate a superior CoT quality, with Kimi k1.5 outperforming GPT-4o and
demonstrating the highest quality results; 2) CoT prompting often degrades LMM
performance on perception-heavy tasks, suggesting a potentially harmful
overthinking behavior; and 3) Although the CoT quality is high, LMMs with
reflection exhibit significant inefficiency in both normal response and
self-correction phases. We hope MME-CoT serves as a foundation for advancing
multimodal reasoning in LMMs. Project Page: https://mmecot.github.io/ | 27 | 67aee0249e69670f495331d8 | null | null |
|
2025-02-14T01:29:44.233000 | Typhoon T1: An Open Thai Reasoning Model | 2 | {
"_id": "615313b0793ef66b3324da1f",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/615313b0793ef66b3324da1f/VyJniD3dxbV5a2CMgVVQ2.jpeg",
"followerCount": 3,
"fullname": "Pittawat Taveekitworachai",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "pittawat",
"type": "user"
} | true | null | 2502.09042 | [
{
"_id": "67aea8c94d4cb38be4a40c55",
"hidden": false,
"name": "Pittawat Taveekitworachai",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-14T08:01:24.073Z",
"user": {
"_id": "615313b0793ef66b3324da1f",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/615313b0793ef66b3324da1f/VyJniD3dxbV5a2CMgVVQ2.jpeg",
"fullname": "Pittawat Taveekitworachai",
"isPro": false,
"type": "user",
"user": "pittawat"
}
},
{
"_id": "67aea8c94d4cb38be4a40c56",
"hidden": false,
"name": "Potsawee Manakul",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-14T12:48:21.806Z",
"user": {
"_id": "63f6a050b4c9a104f4b95755",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63f6a050b4c9a104f4b95755/eJQyJkenSz536j-EGcpkH.jpeg",
"fullname": "Potsawee Manakul",
"isPro": false,
"type": "user",
"user": "potsawee"
}
},
{
"_id": "67aea8c94d4cb38be4a40c57",
"hidden": false,
"name": "Kasima Tharnpipitchai",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aea8c94d4cb38be4a40c58",
"hidden": false,
"name": "Kunat Pipatanakul",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-14T12:41:14.906Z",
"user": {
"_id": "62d192c2d50433c35eb1b48e",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/62d192c2d50433c35eb1b48e/VjmDu8GOIuLuQNBQdQLLS.png",
"fullname": "Kunat Pipatanakul",
"isPro": true,
"type": "user",
"user": "kunato"
}
}
] | 2025-02-13T07:55:54 | Typhoon T1: An Open Thai Reasoning Model | This paper introduces Typhoon T1, an open effort to develop an open Thai
reasoning model. A reasoning model is a relatively new type of generative model
built on top of large language models (LLMs). A reasoning model generates a
long chain of thought before arriving at a final answer, an approach found to
improve performance on complex tasks. However, details on developing such a
model are limited, especially for reasoning models that can generate traces in
a low-resource language. Typhoon T1 presents an open effort that dives into the
details of developing a reasoning model in a more cost-effective way by
leveraging supervised fine-tuning using open datasets, instead of reinforcement
learning. This paper shares the details about synthetic data generation and
training, as well as our dataset and model weights. Additionally, we provide
insights gained from developing a reasoning model that generalizes across
domains and is capable of generating reasoning traces in a low-resource
language, using Thai as an example. We hope this open effort provides a
foundation for further research in this field. | 16 | 67aea8ca4d4cb38be4a40cab | null | null |
|
2025-02-14T00:16:30.034000 | CoT-Valve: Length-Compressible Chain-of-Thought Tuning | 2 | {
"_id": "64396ebc21221ac7411852b3",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/64396ebc21221ac7411852b3/SR0dC8N0bdj9tZFxYPpSf.jpeg",
"followerCount": 3,
"fullname": "Xinyin Ma",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "horseee",
"type": "user"
} | true | null | 2502.09601 | [
{
"_id": "67aed173e6952709b47c0c5c",
"hidden": false,
"name": "Xinyin Ma",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-14T12:50:50.673Z",
"user": {
"_id": "64396ebc21221ac7411852b3",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/64396ebc21221ac7411852b3/SR0dC8N0bdj9tZFxYPpSf.jpeg",
"fullname": "Xinyin Ma",
"isPro": false,
"type": "user",
"user": "horseee"
}
},
{
"_id": "67aed173e6952709b47c0c5d",
"hidden": false,
"name": "Guangnian Wan",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-14T12:51:10.430Z",
"user": {
"_id": "6627cccfded9b7936d5d1d21",
"avatarUrl": "/avatars/9216bdd9ed10c226f2d14edce4a10daa.svg",
"fullname": "Guangnian Wan",
"isPro": false,
"type": "user",
"user": "bigglesworthnotcat"
}
},
{
"_id": "67aed173e6952709b47c0c5e",
"hidden": false,
"name": "Runpeng Yu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-14T12:51:16.707Z",
"user": {
"_id": "635364b3c41f548fe39db945",
"avatarUrl": "/avatars/ad1916bbfabca0b6651c8eabacc5eba8.svg",
"fullname": "Runpeng Yu",
"isPro": false,
"type": "user",
"user": "rp-yu"
}
},
{
"_id": "67aed173e6952709b47c0c5f",
"hidden": false,
"name": "Gongfan Fang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-14T12:51:22.345Z",
"user": {
"_id": "646a1939c37ca1e12308fe81",
"avatarUrl": "/avatars/752e9d86018e7d33ad8bcd741203fd86.svg",
"fullname": "Gongfan Fang",
"isPro": false,
"type": "user",
"user": "Vinnnf"
}
},
{
"_id": "67aed173e6952709b47c0c60",
"hidden": false,
"name": "Xinchao Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-13T18:52:36 | CoT-Valve: Length-Compressible Chain-of-Thought Tuning | Chain-of-Thought significantly enhances a model's reasoning capability, but
it also comes with a considerable increase in inference costs due to long
chains. With the observation that the reasoning path can be easily compressed
under easy tasks but struggles on hard tasks, we explore the feasibility of
elastically controlling the length of reasoning paths with only one model,
thereby reducing the inference overhead of reasoning models dynamically based
on task difficulty. We introduce a new tuning and inference strategy named
CoT-Valve, designed to allow models to generate reasoning chains of varying
lengths. To achieve this, we propose to identify a direction in the parameter
space that, when manipulated, can effectively control the length of generated
CoT. Moreover, we show that this property is valuable for compressing the
reasoning chain. We construct datasets with chains from long to short for the
same questions and explore two enhanced strategies for CoT-Valve: (1) a precise
length-compressible CoT tuning method, and (2) a progressive chain length
compression approach. Our experiments show that CoT-Valve successfully enables
controllability and compressibility of the chain and shows better performance
than the prompt-based control. We applied this method to QwQ-32B-Preview,
reducing reasoning chains on GSM8K from 741 to 225 tokens with a minor
performance drop (95.07% to 94.92%) and on AIME from 6827 to 4629 tokens, with
only one additional incorrect answer. | 14 | 67aed174e6952709b47c0ca1 | null | null |
|
2025-02-13T23:32:15.420000 | mmE5: Improving Multimodal Multilingual Embeddings via High-quality Synthetic Data | 2 | {
"_id": "66add675c7a575aa0e03d5f3",
"avatarUrl": "/avatars/b72b18130664c1de197c1f8df371aa70.svg",
"followerCount": 4,
"fullname": "Haonan Chen",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "Haon-Chen",
"type": "user"
} | true | null | 2502.08468 | [
{
"_id": "67ad5f3fcad644864b4366ca",
"hidden": false,
"name": "Haonan Chen",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-13T08:21:55.329Z",
"user": {
"_id": "66add675c7a575aa0e03d5f3",
"avatarUrl": "/avatars/b72b18130664c1de197c1f8df371aa70.svg",
"fullname": "Haonan Chen",
"isPro": false,
"type": "user",
"user": "Haon-Chen"
}
},
{
"_id": "67ad5f3fcad644864b4366cb",
"hidden": false,
"name": "Liang Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ad5f3fcad644864b4366cc",
"hidden": false,
"name": "Nan Yang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ad5f3fcad644864b4366cd",
"hidden": false,
"name": "Yutao Zhu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-14T12:53:09.223Z",
"user": {
"_id": "625e62452a7279d3c77b5c38",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/625e62452a7279d3c77b5c38/zJINew6U4_Gup4WTobb-0.jpeg",
"fullname": "Yutao Zhu",
"isPro": false,
"type": "user",
"user": "yutaozhu94"
}
},
{
"_id": "67ad5f3fcad644864b4366ce",
"hidden": false,
"name": "Ziliang Zhao",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-14T12:53:01.987Z",
"user": {
"_id": "6639d5c106b25a7ea6f18391",
"avatarUrl": "/avatars/788e339472999a9159f77f857817d618.svg",
"fullname": "Ziliang Zhao",
"isPro": false,
"type": "user",
"user": "ZillionZhao"
}
},
{
"_id": "67ad5f3fcad644864b4366cf",
"hidden": false,
"name": "Furu Wei",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-14T12:52:40.042Z",
"user": {
"_id": "6368c512fbfe97c16a40baba",
"avatarUrl": "/avatars/1c23bc7c0b6d9225699ce27647623d7a.svg",
"fullname": "Furu Wei",
"isPro": false,
"type": "user",
"user": "thegenerality"
}
},
{
"_id": "67ad5f3fcad644864b4366d0",
"hidden": false,
"name": "Zhicheng Dou",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-14T12:52:33.880Z",
"user": {
"_id": "66f0bf59e9d50ec57febf751",
"avatarUrl": "/avatars/be97941e60064e5dd806c6fe9db3c537.svg",
"fullname": "Zhicheng Dou",
"isPro": false,
"type": "user",
"user": "douzc"
}
}
] | 2025-02-12T15:03:33 | mmE5: Improving Multimodal Multilingual Embeddings via High-quality
Synthetic Data | Multimodal embedding models have gained significant attention for their
ability to map data from different modalities, such as text and images, into a
unified representation space. However, the limited labeled multimodal data
often hinders embedding performance. Recent approaches have leveraged data
synthesis to address this problem, yet the quality of synthetic data remains a
critical bottleneck. In this work, we identify three criteria for high-quality
synthetic multimodal data. First, broad scope ensures that the generated data
covers diverse tasks and modalities, making it applicable to various downstream
scenarios. Second, robust cross-modal alignment makes different modalities
semantically consistent. Third, high fidelity ensures that the synthetic data
maintains realistic details to enhance its reliability. Guided by these
principles, we synthesize datasets that: (1) cover a wide range of tasks,
modality combinations, and languages, (2) are generated via a deep thinking
process within a single pass of a multimodal large language model, and (3)
incorporate real-world images with accurate and relevant texts, ensuring
fidelity through self-evaluation and refinement. Leveraging these high-quality
synthetic and labeled datasets, we train a multimodal multilingual E5 model
mmE5. Extensive experiments demonstrate that mmE5 achieves state-of-the-art
performance on the MMEB Benchmark and superior multilingual performance on the
XTD benchmark. Our codes, datasets and models are released in
https://github.com/haon-chen/mmE5. | 13 | 67ad5f3fcad644864b4366f5 | null | null |
|
2025-02-13T23:23:42.492000 | EmbodiedBench: Comprehensive Benchmarking Multi-modal Large Language Models for Vision-Driven Embodied Agents | 2 | {
"_id": "64d45451c34a346181b130dd",
"avatarUrl": "/avatars/9bb8205b889337df5d321539c9b5d69d.svg",
"followerCount": 6,
"fullname": "Rui Yang",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "Ray2333",
"type": "user"
} | true | null | 2502.09560 | [
{
"_id": "67aec4285b9801b819449b84",
"hidden": false,
"name": "Rui Yang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-18T09:34:05.913Z",
"user": {
"_id": "64d45451c34a346181b130dd",
"avatarUrl": "/avatars/9bb8205b889337df5d321539c9b5d69d.svg",
"fullname": "Rui Yang",
"isPro": false,
"type": "user",
"user": "Ray2333"
}
},
{
"_id": "67aec4285b9801b819449b85",
"hidden": false,
"name": "Hanyang Chen",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-14T08:01:08.365Z",
"user": {
"_id": "6700b1f93381f2db06857fb5",
"avatarUrl": "/avatars/c8b9ec7c00773c5a4055ba50de0c6b2f.svg",
"fullname": "Hanyang Chen",
"isPro": false,
"type": "user",
"user": "Hanyang81"
}
},
{
"_id": "67aec4285b9801b819449b86",
"hidden": false,
"name": "Junyu Zhang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-18T09:34:03.521Z",
"user": {
"_id": "6719bfd07c6e6c83a388aeae",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6719bfd07c6e6c83a388aeae/jHxryk04dzHo23TX5F5sz.png",
"fullname": "Junyu Zhang",
"isPro": false,
"type": "user",
"user": "jyzhang1208"
}
},
{
"_id": "67aec4285b9801b819449b87",
"hidden": false,
"name": "Mark Zhao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aec4285b9801b819449b88",
"hidden": false,
"name": "Cheng Qian",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aec4285b9801b819449b89",
"hidden": false,
"name": "Kangrui Wang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-14T12:47:36.061Z",
"user": {
"_id": "66f2b8602ef2817ec3cb65f6",
"avatarUrl": "/avatars/c8c5b2706644fb45a75f13af99fa7ae9.svg",
"fullname": "Kangrui Wang",
"isPro": false,
"type": "user",
"user": "JamesK2W"
}
},
{
"_id": "67aec4285b9801b819449b8a",
"hidden": false,
"name": "Qineng Wang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-14T12:46:06.895Z",
"user": {
"_id": "640c3b28c5fa12d61a50cd92",
"avatarUrl": "/avatars/81556de3214c848b3c3e118f50fd2968.svg",
"fullname": "Qineng Wang",
"isPro": false,
"type": "user",
"user": "Inevitablevalor"
}
},
{
"_id": "67aec4285b9801b819449b8b",
"hidden": false,
"name": "Teja Venkat Koripella",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aec4285b9801b819449b8c",
"hidden": false,
"name": "Marziyeh Movahedi",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-14T12:46:13.104Z",
"user": {
"_id": "64e10c263209bf4194912319",
"avatarUrl": "/avatars/02f1a9e2ce333ff521d901cf83fcdff3.svg",
"fullname": "Marziyeh Movahedi",
"isPro": false,
"type": "user",
"user": "Marzimv"
}
},
{
"_id": "67aec4285b9801b819449b8d",
"hidden": false,
"name": "Manling Li",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-14T12:46:50.911Z",
"user": {
"_id": "6746a140f2ca2162e3bcfe2b",
"avatarUrl": "/avatars/d9d8cfb5f112e6ed7f6152fc230135d3.svg",
"fullname": "Manling Li",
"isPro": false,
"type": "user",
"user": "ManlingLi"
}
},
{
"_id": "67aec4285b9801b819449b8e",
"hidden": false,
"name": "Heng Ji",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aec4285b9801b819449b8f",
"hidden": false,
"name": "Huan Zhang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-14T12:47:14.741Z",
"user": {
"_id": "6719d581a6cad13741b8bc7f",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6719d581a6cad13741b8bc7f/w4EttqfXRgWZJc6HpYOS9.jpeg",
"fullname": "Huan Zhang",
"isPro": false,
"type": "user",
"user": "huanzhang12"
}
},
{
"_id": "67aec4285b9801b819449b90",
"hidden": false,
"name": "Tong Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-13T18:11:34 | EmbodiedBench: Comprehensive Benchmarking Multi-modal Large Language
Models for Vision-Driven Embodied Agents | Leveraging Multi-modal Large Language Models (MLLMs) to create embodied
agents offers a promising avenue for tackling real-world tasks. While
language-centric embodied agents have garnered substantial attention,
MLLM-based embodied agents remain underexplored due to the lack of
comprehensive evaluation frameworks. To bridge this gap, we introduce
EmbodiedBench, an extensive benchmark designed to evaluate vision-driven
embodied agents. EmbodiedBench features: (1) a diverse set of 1,128 testing
tasks across four environments, ranging from high-level semantic tasks (e.g.,
household) to low-level tasks involving atomic actions (e.g., navigation and
manipulation); and (2) six meticulously curated subsets evaluating essential
agent capabilities like commonsense reasoning, complex instruction
understanding, spatial awareness, visual perception, and long-term planning.
Through extensive experiments, we evaluated 13 leading proprietary and
open-source MLLMs within EmbodiedBench. Our findings reveal that: MLLMs excel
at high-level tasks but struggle with low-level manipulation, with the best
model, GPT-4o, scoring only 28.9% on average. EmbodiedBench provides a
multifaceted standardized evaluation platform that not only highlights existing
challenges but also offers valuable insights to advance MLLM-based embodied
agents. Our code is available at https://embodiedbench.github.io. | 33 | 67aec42b5b9801b819449bf5 | null | null |
|
2025-02-13T23:10:44.295000 | Skrr: Skip and Re-use Text Encoder Layers for Memory Efficient Text-to-Image Generation | 2 | {
"_id": "633e6f07309a99325095dd42",
"avatarUrl": "/avatars/57b91a488ac1745b3c0509c04eb6ad93.svg",
"followerCount": 1,
"fullname": "Hoigi Seo",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "Agorium",
"type": "user"
} | true | null | 2502.08690 | [
{
"_id": "67aec0a203bf3301ec29ac39",
"hidden": false,
"name": "Hoigi Seo",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-14T08:01:10.420Z",
"user": {
"_id": "633e6f07309a99325095dd42",
"avatarUrl": "/avatars/57b91a488ac1745b3c0509c04eb6ad93.svg",
"fullname": "Hoigi Seo",
"isPro": false,
"type": "user",
"user": "Agorium"
}
},
{
"_id": "67aec0a203bf3301ec29ac3a",
"hidden": false,
"name": "Wongi Jeong",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aec0a203bf3301ec29ac3b",
"hidden": false,
"name": "Jae-sun Seo",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aec0a203bf3301ec29ac3c",
"hidden": false,
"name": "Se Young Chun",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-12T15:03:26 | Skrr: Skip and Re-use Text Encoder Layers for Memory Efficient
Text-to-Image Generation | Large-scale text encoders in text-to-image (T2I) diffusion models have
demonstrated exceptional performance in generating high-quality images from
textual prompts. Unlike denoising modules that rely on multiple iterative
steps, text encoders require only a single forward pass to produce text
embeddings. However, despite their minimal contribution to total inference time
and floating-point operations (FLOPs), text encoders demand significantly
higher memory usage, up to eight times more than denoising modules. To address
this inefficiency, we propose Skip and Re-use layers (Skrr), a simple yet
effective pruning strategy specifically designed for text encoders in T2I
diffusion models. Skrr exploits the inherent redundancy in transformer blocks
by selectively skipping or reusing certain layers in a manner tailored for T2I
tasks, thereby reducing memory consumption without compromising performance.
Extensive experiments demonstrate that Skrr maintains image quality comparable
to the original model even under high sparsity levels, outperforming existing
blockwise pruning methods. Furthermore, Skrr achieves state-of-the-art memory
efficiency while preserving performance across multiple evaluation metrics,
including the FID, CLIP, DreamSim, and GenEval scores. | 41 | 67aec0a903bf3301ec29adf3 | null | null |
|
2025-02-13T22:57:03.709000 | InfiniteHiP: Extending Language Model Context Up to 3 Million Tokens on a Single GPU | 6 | {
"_id": "646cae3093badbc8c2e891c7",
"avatarUrl": "/avatars/4aae2aca70ea9dc58dd6f9f9b2be15e1.svg",
"followerCount": 8,
"fullname": "Geon Park",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "geonp",
"type": "user"
} | true | [
"https://cdn-uploads.huggingface.co/production/uploads/646cae3093badbc8c2e891c7/upRSt7mdOUX5vJZTWKG8D.png"
] | 2502.08910 | [
{
"_id": "67aebd48225614bbe7f6f271",
"hidden": false,
"name": "Heejun Lee",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-14T08:01:15.423Z",
"user": {
"_id": "62e622d08e0b2dc6707f8794",
"avatarUrl": "/avatars/8c47b5c862f82d4258ba707c932f7f87.svg",
"fullname": "Heejun Lee",
"isPro": false,
"type": "user",
"user": "gmlwns5176"
}
},
{
"_id": "67aebd48225614bbe7f6f272",
"hidden": false,
"name": "Geon Park",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-14T08:01:12.988Z",
"user": {
"_id": "646cae3093badbc8c2e891c7",
"avatarUrl": "/avatars/4aae2aca70ea9dc58dd6f9f9b2be15e1.svg",
"fullname": "Geon Park",
"isPro": false,
"type": "user",
"user": "geonp"
}
},
{
"_id": "67aebd48225614bbe7f6f273",
"hidden": false,
"name": "Jaduk Suh",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-14T12:00:46.888Z",
"user": {
"_id": "657ffd9ba2ef32167533f04a",
"avatarUrl": "/avatars/e180e063c810c15d02b494727e962b84.svg",
"fullname": "Jaduk Suh",
"isPro": false,
"type": "user",
"user": "Losif63"
}
},
{
"_id": "67aebd48225614bbe7f6f274",
"hidden": false,
"name": "Sung Ju Hwang",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-13T02:52:01 | InfiniteHiP: Extending Language Model Context Up to 3 Million Tokens on
a Single GPU | In modern large language models (LLMs), handling very long context lengths
presents significant challenges as it causes slower inference speeds and
increased memory costs. Additionally, most existing pre-trained LLMs fail to
generalize beyond their original training sequence lengths. To enable efficient
and practical long-context utilization, we introduce InfiniteHiP, a novel and
practical LLM inference framework that accelerates processing by dynamically
eliminating irrelevant context tokens through a modular hierarchical token
pruning algorithm. Our method also allows generalization to longer sequences by
selectively applying various RoPE adjustment methods according to the internal
attention patterns within LLMs. Furthermore, we offload the key-value cache to
host memory during inference, significantly reducing GPU memory pressure. As a
result, InfiniteHiP enables the processing of up to 3 million tokens on a
single L40s 48GB GPU -- 3x larger -- without any permanent loss of context
information. Our framework achieves an 18.95x speedup in attention decoding for
a 1 million token context without requiring additional training. We implement
our method in the SGLang framework and demonstrate its effectiveness and
practicality through extensive evaluations. | 142 | 67aebd4a225614bbe7f6f2d6 | null | null |
|
2025-02-13T22:56:23.567000 | TripoSG: High-Fidelity 3D Shape Synthesis using Large-Scale Rectified Flow Models | 3 | {
"_id": "64d71083a787c9bc7b9f1238",
"avatarUrl": "/avatars/d0b0546dec7fc5792921154bec41385a.svg",
"followerCount": 1,
"fullname": "Yangguang Li",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "Lp256",
"type": "user"
} | true | null | 2502.06608 | [
{
"_id": "67aebe57f47426f753bc3b07",
"hidden": false,
"name": "Yangguang Li",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-14T16:20:05.446Z",
"user": {
"_id": "64d71083a787c9bc7b9f1238",
"avatarUrl": "/avatars/d0b0546dec7fc5792921154bec41385a.svg",
"fullname": "Yangguang Li",
"isPro": false,
"type": "user",
"user": "Lp256"
}
},
{
"_id": "67aebe57f47426f753bc3b08",
"hidden": false,
"name": "Zi-Xin Zou",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-14T12:44:17.925Z",
"user": {
"_id": "644dbf6453ad80c6593bf748",
"avatarUrl": "/avatars/0e170cf2aa8d7f0f3f83e36f06f023f8.svg",
"fullname": "Zixin Zou",
"isPro": false,
"type": "user",
"user": "zouzx"
}
},
{
"_id": "67aebe57f47426f753bc3b09",
"hidden": false,
"name": "Zexiang Liu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-14T12:44:23.987Z",
"user": {
"_id": "65ff33a830b872fe2ccc6c1e",
"avatarUrl": "/avatars/cf15da8fc5a51f0b3ae0e11e3ff685cf.svg",
"fullname": "Zexiang Liu",
"isPro": false,
"type": "user",
"user": "zexiangliu"
}
},
{
"_id": "67aebe57f47426f753bc3b0a",
"hidden": false,
"name": "Dehu Wang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-14T12:44:35.149Z",
"user": {
"_id": "666c4e95dc348adcab8f2836",
"avatarUrl": "/avatars/875545d88ba7d935340015a719e6e5f0.svg",
"fullname": "dehuwang",
"isPro": false,
"type": "user",
"user": "dehu168"
}
},
{
"_id": "67aebe57f47426f753bc3b0b",
"hidden": false,
"name": "Yuan Liang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aebe57f47426f753bc3b0c",
"hidden": false,
"name": "Zhipeng Yu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aebe57f47426f753bc3b0d",
"hidden": false,
"name": "Xingchao Liu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-14T12:45:05.465Z",
"user": {
"_id": "646b0bbdec9a61e871799339",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/646b0bbdec9a61e871799339/Xippv-ajkHkrGGAA7caLn.jpeg",
"fullname": "Xingchao Liu",
"isPro": false,
"type": "user",
"user": "XCLiu"
}
},
{
"_id": "67aebe57f47426f753bc3b0e",
"hidden": false,
"name": "Yuan-Chen Guo",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-14T12:45:11.634Z",
"user": {
"_id": "6346aaa3f06b237ba4e297b0",
"avatarUrl": "/avatars/5acb986e993eab1461200f3e9d99d022.svg",
"fullname": "Yuan-Chen Guo",
"isPro": false,
"type": "user",
"user": "bennyguo"
}
},
{
"_id": "67aebe57f47426f753bc3b0f",
"hidden": false,
"name": "Ding Liang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-14T12:45:32.067Z",
"user": {
"_id": "6476b01cbb7fdd7f425dcefb",
"avatarUrl": "/avatars/65efe7067f47a68c204a1ab5a772b939.svg",
"fullname": "dingliang",
"isPro": false,
"type": "user",
"user": "dingliang01"
}
},
{
"_id": "67aebe57f47426f753bc3b10",
"hidden": false,
"name": "Wanli Ouyang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aebe57f47426f753bc3b11",
"hidden": false,
"name": "Yan-Pei Cao",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-10T16:07:54 | TripoSG: High-Fidelity 3D Shape Synthesis using Large-Scale Rectified
Flow Models | Recent advancements in diffusion techniques have propelled image and video
generation to unprecedented levels of quality, significantly accelerating the
deployment and application of generative AI. However, 3D shape generation
technology has so far lagged behind, constrained by limitations in 3D data
scale, complexity of 3D data processing, and insufficient exploration of
advanced techniques in the 3D domain. Current approaches to 3D shape
generation face substantial challenges in terms of output quality,
generalization capability, and alignment with input conditions. We present
TripoSG, a new streamlined shape diffusion paradigm capable of generating
high-fidelity 3D meshes with precise correspondence to input images.
Specifically, we propose: 1) A large-scale rectified flow transformer for 3D
shape generation, achieving state-of-the-art fidelity through training on
extensive, high-quality data. 2) A hybrid supervised training strategy
combining SDF, normal, and eikonal losses for 3D VAE, achieving high-quality
3D reconstruction performance. 3) A data processing pipeline to generate 2
million high-quality 3D samples, highlighting the crucial rules for data
quality and quantity in training 3D generative models. Through comprehensive
experiments, we have validated the effectiveness of each component in our new
framework. The seamless integration of these parts has enabled TripoSG to
achieve state-of-the-art performance in 3D shape generation. The resulting 3D
shapes exhibit enhanced detail due to high-resolution capabilities and
demonstrate exceptional fidelity to input images. Moreover, TripoSG
demonstrates improved versatility in generating 3D models from diverse image
styles and contents, showcasing strong generalization capabilities. To foster
progress and innovation in the field of 3D generation, we will make our model
publicly available. | 32 | 67aebe5ef47426f753bc3d31 | null | null |
|
2025-02-13T22:01:48.364000 | An Open Recipe: Adapting Language-Specific LLMs to a Reasoning Model in One Day via Model Merging | 4 | {
"_id": "60f1abe7544c2adfd699860c",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"followerCount": 6280,
"fullname": "AK",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "akhaliq",
"type": "user"
} | true | null | 2502.09056 | [
{
"_id": "67aea8d7926b659c7e959bbc",
"hidden": false,
"name": "Kunat Pipatanakul",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-14T12:04:29.437Z",
"user": {
"_id": "62d192c2d50433c35eb1b48e",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/62d192c2d50433c35eb1b48e/VjmDu8GOIuLuQNBQdQLLS.png",
"fullname": "Kunat Pipatanakul",
"isPro": true,
"type": "user",
"user": "kunato"
}
},
{
"_id": "67aea8d7926b659c7e959bbd",
"hidden": false,
"name": "Pittawat Taveekitworachai",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-14T08:01:21.838Z",
"user": {
"_id": "615313b0793ef66b3324da1f",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/615313b0793ef66b3324da1f/VyJniD3dxbV5a2CMgVVQ2.jpeg",
"fullname": "Pittawat Taveekitworachai",
"isPro": false,
"type": "user",
"user": "pittawat"
}
},
{
"_id": "67aea8d7926b659c7e959bbe",
"hidden": false,
"name": "Potsawee Manakul",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-14T12:04:35.434Z",
"user": {
"_id": "63f6a050b4c9a104f4b95755",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63f6a050b4c9a104f4b95755/eJQyJkenSz536j-EGcpkH.jpeg",
"fullname": "Potsawee Manakul",
"isPro": false,
"type": "user",
"user": "potsawee"
}
},
{
"_id": "67aea8d7926b659c7e959bbf",
"hidden": false,
"name": "Kasima Tharnpipitchai",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-13T08:10:45 | An Open Recipe: Adapting Language-Specific LLMs to a Reasoning Model in
One Day via Model Merging | This paper investigates data selection and model merging methodologies aimed
at incorporating advanced reasoning capabilities such as those of DeepSeek R1
into language-specific large language models (LLMs), with a particular focus on
the Thai LLM. Our goal is to enhance the reasoning capabilities of
language-specific LLMs while maintaining their target language abilities.
DeepSeek R1 excels in reasoning but primarily benefits high-resource languages
such as English and Chinese. However, low-resource languages remain underserved
due to the dominance of English-centric training data and model optimizations,
which limit performance in these languages. This limitation results in
unreliable code-switching and diminished effectiveness on tasks in low-resource
languages. Meanwhile, local and regional LLM initiatives have attempted to
bridge this gap by developing language-specific LLMs that focus on improving
local linguistic fidelity. We demonstrate that, with only publicly available
datasets and a computational budget of $120, it is possible to enhance the
reasoning capabilities of language-specific LLMs to match the level of DeepSeek
R1, without compromising their performance on target language tasks. | 30 | 67aea8d8926b659c7e959bee | null | null |
|
2025-02-13T21:59:28.400000 | The Stochastic Parrot on LLM's Shoulder: A Summative Assessment of Physical Concept Understanding | 3 | {
"_id": "60f1abe7544c2adfd699860c",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"followerCount": 6280,
"fullname": "AK",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "akhaliq",
"type": "user"
} | true | null | 2502.08946 | [
{
"_id": "67aeb180cb3be2cefd46ed07",
"hidden": false,
"name": "Mo Yu",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-18T09:34:13.763Z",
"user": {
"_id": "67af92045a86287292026808",
"avatarUrl": "/avatars/8bad9272fe73ba04e077b5484837c8d3.svg",
"fullname": "Mo",
"isPro": false,
"type": "user",
"user": "BishopGorov"
}
},
{
"_id": "67aeb180cb3be2cefd46ed08",
"hidden": false,
"name": "Lemao Liu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-14T12:53:38.597Z",
"user": {
"_id": "64462d77efa0723f3946356b",
"avatarUrl": "/avatars/f9757030d82c69aef933309e0c83ccd0.svg",
"fullname": "Lemao Liu",
"isPro": false,
"type": "user",
"user": "lemaoliu"
}
},
{
"_id": "67aeb180cb3be2cefd46ed09",
"hidden": false,
"name": "Junjie Wu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-14T12:53:53.052Z",
"user": {
"_id": "64d36ca3036ae0b3756588e3",
"avatarUrl": "/avatars/0d7dfbd681b1157a38d0f0a86f19b702.svg",
"fullname": "Junjie Wu",
"isPro": false,
"type": "user",
"user": "junjiewu"
}
},
{
"_id": "67aeb180cb3be2cefd46ed0a",
"hidden": false,
"name": "Tsz Ting Chung",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-18T09:34:11.765Z",
"user": {
"_id": "60ab6b2ee3de7c7440abb845",
"avatarUrl": "/avatars/22916bece3b5b951c016bf2ddd8dda1c.svg",
"fullname": "Cindy",
"isPro": false,
"type": "user",
"user": "ttchungc"
}
},
{
"_id": "67aeb180cb3be2cefd46ed0b",
"hidden": false,
"name": "Shunchi Zhang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-14T12:54:03.865Z",
"user": {
"_id": "63fe24448b3c5087ff866b39",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63fe24448b3c5087ff866b39/CaTM4yAfj9tJj53ciQWJk.jpeg",
"fullname": "Shunchi Zhang",
"isPro": false,
"type": "user",
"user": "ShunchiZhang"
}
},
{
"_id": "67aeb180cb3be2cefd46ed0c",
"hidden": false,
"name": "Jiangnan Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aeb180cb3be2cefd46ed0d",
"hidden": false,
"name": "Dit-Yan Yeung",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aeb180cb3be2cefd46ed0e",
"hidden": false,
"name": "Jie Zhou",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-13T04:00:03 | The Stochastic Parrot on LLM's Shoulder: A Summative Assessment of
Physical Concept Understanding | In a systematic way, we investigate a widely asked question: Do LLMs really
understand what they say?, which relates to the more familiar term Stochastic
Parrot. To this end, we propose a summative assessment over a carefully
designed physical concept understanding task, PhysiCo. Our task alleviates the
memorization issue via the usage of grid-format inputs that abstractly describe
physical phenomena. The grids represent varying levels of understanding, from
the core phenomenon and application examples to analogies with other abstract
patterns in the grid world. A comprehensive study on our task demonstrates: (1)
state-of-the-art LLMs, including GPT-4o, o1 and Gemini 2.0 flash thinking, lag
behind humans by ~40%; (2) the stochastic parrot phenomenon is present in LLMs,
as they fail on our grid task but can describe and recognize the same concepts
well in natural language; (3) our task challenges the LLMs due to intrinsic
difficulties rather than the unfamiliar grid format, as in-context learning and
fine-tuning on same-formatted data added little to their performance.
|
2025-02-13T21:55:58.708000 | Logical Reasoning in Large Language Models: A Survey | 5 | {
"_id": "60f1abe7544c2adfd699860c",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"followerCount": 6280,
"fullname": "AK",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "akhaliq",
"type": "user"
} | true | null | 2502.09100 | [
{
"_id": "67aeb0a3d58f4990b384d83e",
"hidden": false,
"name": "Hanmeng Liu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aeb0a3d58f4990b384d83f",
"hidden": false,
"name": "Zhizhang Fu",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-14T11:10:54.325Z",
"user": {
"_id": "6467136a8334813a7ae1d1b0",
"avatarUrl": "/avatars/d0fd37532c830e8bef14995148190f9f.svg",
"fullname": "Zhizhang Fu",
"isPro": false,
"type": "user",
"user": "HarryFu"
}
},
{
"_id": "67aeb0a3d58f4990b384d840",
"hidden": false,
"name": "Mengru Ding",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aeb0a3d58f4990b384d841",
"hidden": false,
"name": "Ruoxi Ning",
"status": "extracted_confirmed",
"statusLastChangedAt": "2025-02-14T06:28:50.414Z",
"user": {
"_id": "62e47d1b6a82e063860c587e",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/62e47d1b6a82e063860c587e/jvFt1caSZNWDQTYKZQ9K-.jpeg",
"fullname": "ruoxining",
"isPro": false,
"type": "user",
"user": "ruoxining"
}
},
{
"_id": "67aeb0a3d58f4990b384d842",
"hidden": false,
"name": "Chaoli Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aeb0a3d58f4990b384d843",
"hidden": false,
"name": "Xiaozhang Liu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aeb0a3d58f4990b384d844",
"hidden": false,
"name": "Yue Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-13T09:19:14 | Logical Reasoning in Large Language Models: A Survey | With the emergence of advanced reasoning models like OpenAI o3 and
DeepSeek-R1, large language models (LLMs) have demonstrated remarkable
reasoning capabilities. However, their ability to perform rigorous logical
reasoning remains an open question. This survey synthesizes recent advancements
in logical reasoning within LLMs, a critical area of AI research. It outlines
the scope of logical reasoning in LLMs, its theoretical foundations, and the
benchmarks used to evaluate reasoning proficiency. We analyze existing
capabilities across different reasoning paradigms - deductive, inductive,
abductive, and analogical - and assess strategies to enhance reasoning
performance, including data-centric tuning, reinforcement learning, decoding
strategies, and neuro-symbolic approaches. The review concludes with future
directions, emphasizing the need for further exploration to strengthen logical
reasoning in AI systems. | 22 | 67aeb0a4d58f4990b384d871 | null | null |
|
2025-02-13T21:42:37.926000 | SelfCite: Self-Supervised Alignment for Context Attribution in Large Language Models | 2 | {
"_id": "5df84571da6d0311fd3d5407",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1650651305661-5df84571da6d0311fd3d5407.png",
"followerCount": 3,
"fullname": "Yung-Sung Chuang",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "voidism",
"type": "user"
} | true | [
"https://cdn-uploads.huggingface.co/production/uploads/5df84571da6d0311fd3d5407/YmJO6H2Wa0ZVw31qeHZi0.png"
] | 2502.09604 | [
{
"_id": "67aeac4f2d48d9bf7728334e",
"hidden": false,
"name": "Yung-Sung Chuang",
"status": "extracted_confirmed",
"statusLastChangedAt": "2025-02-14T02:37:32.909Z",
"user": {
"_id": "5df84571da6d0311fd3d5407",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1650651305661-5df84571da6d0311fd3d5407.png",
"fullname": "Yung-Sung Chuang",
"isPro": false,
"type": "user",
"user": "voidism"
}
},
{
"_id": "67aeac4f2d48d9bf7728334f",
"hidden": false,
"name": "Benjamin Cohen-Wang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-14T08:01:17.696Z",
"user": {
"_id": "639aaf82a4c528850bba2bfe",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/639aaf82a4c528850bba2bfe/nn23r8bsNiOJzVUxAPfo7.png",
"fullname": "Benjamin Cohen-Wang",
"isPro": false,
"type": "user",
"user": "bencw"
}
},
{
"_id": "67aeac4f2d48d9bf77283350",
"hidden": false,
"name": "Shannon Zejiang Shen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aeac4f2d48d9bf77283351",
"hidden": false,
"name": "Zhaofeng Wu",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-14T08:01:19.691Z",
"user": {
"_id": "6351712b40dffad651f128c7",
"avatarUrl": "/avatars/87708c86c1baef548ef556f5d32dca71.svg",
"fullname": "Zhaofeng Wu",
"isPro": false,
"type": "user",
"user": "ZhaofengWu"
}
},
{
"_id": "67aeac4f2d48d9bf77283352",
"hidden": false,
"name": "Hu Xu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aeac4f2d48d9bf77283353",
"hidden": false,
"name": "Xi Victoria Lin",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aeac4f2d48d9bf77283354",
"hidden": false,
"name": "James Glass",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aeac4f2d48d9bf77283355",
"hidden": false,
"name": "Shang-Wen Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aeac4f2d48d9bf77283356",
"hidden": false,
"name": "Wen-tau Yih",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-13T18:55:13 | SelfCite: Self-Supervised Alignment for Context Attribution in Large
Language Models | We introduce SelfCite, a novel self-supervised approach that aligns LLMs to
generate high-quality, fine-grained, sentence-level citations for the
statements in their generated responses. Instead of only relying on costly and
labor-intensive annotations, SelfCite leverages a reward signal provided by the
LLM itself through context ablation: If a citation is necessary, removing the
cited text from the context should prevent the same response; if sufficient,
retaining the cited text alone should preserve the same response. This reward
can guide the inference-time best-of-N sampling strategy to improve citation
quality significantly, as well as be used in preference optimization to
directly fine-tune the models for generating better citations. The
effectiveness of SelfCite is demonstrated by increasing citation F1 up to 5.3
points on the LongBench-Cite benchmark across five long-form question answering
tasks. | 32 | 67aeac502d48d9bf77283380 | null | null |
|
2025-02-13T14:57:40.061000 | Homeomorphism Prior for False Positive and Negative Problem in Medical Image Dense Contrastive Representation Learning | 2 | {
"_id": "66dd6db79231031ad305efde",
"avatarUrl": "/avatars/e74138c5a59d28b62b3f1c58d0f27821.svg",
"followerCount": null,
"fullname": "Yuting He",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "YutingHe-list",
"type": "user"
} | false | null | 2502.05282 | [
{
"_id": "67ae4ddc8a3f2c111fa97e8d",
"hidden": false,
"name": "Yuting He",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ae4ddc8a3f2c111fa97e8e",
"hidden": false,
"name": "Boyu Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ae4ddc8a3f2c111fa97e8f",
"hidden": false,
"name": "Rongjun Ge",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ae4ddc8a3f2c111fa97e90",
"hidden": false,
"name": "Yang Chen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ae4ddc8a3f2c111fa97e91",
"hidden": false,
"name": "Guanyu Yang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ae4ddc8a3f2c111fa97e92",
"hidden": false,
"name": "Shuo Li",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-07T19:34:22 | Homeomorphism Prior for False Positive and Negative Problem in Medical
Image Dense Contrastive Representation Learning | Dense contrastive representation learning (DCRL) has greatly improved the
learning efficiency for image-dense prediction tasks, showing its great
potential to reduce the large costs of medical image collection and dense
annotation. However, the properties of medical images make correspondence
discovery unreliable, raising the open problem of large-scale false
positive and negative (FP&N) pairs in DCRL. In this paper, we propose GEoMetric
vIsual deNse sImilarity (GEMINI) learning which embeds the homeomorphism prior
into DCRL and enables reliable correspondence discovery for effective dense
contrast. We propose a deformable homeomorphism learning (DHL) which models the
homeomorphism of medical images and learns to estimate a deformable mapping to
predict the pixels' correspondence under topological preservation. It
effectively reduces the searching space of pairing and drives an implicit and
soft learning of negative pairs via a gradient. We also propose a geometric
semantic similarity (GSS) which extracts semantic information in features to
measure the alignment degree for the correspondence learning. It will promote
the learning efficiency and performance of deformation, constructing positive
pairs reliably. We implement two practical variants on two typical
representation learning tasks in our experiments. Our promising results on
seven datasets which outperform the existing methods show our great
superiority. We will release our code on a companion link:
https://github.com/YutingHe-list/GEMINI. | 0 | 67ae4dde8a3f2c111fa97ee6 | null | null |
|
2025-02-13T11:41:16.499000 | PDE-Controller: LLMs for Autoformalization and Reasoning of PDEs | 2 | {
"_id": "60f1abe7544c2adfd699860c",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"followerCount": 6280,
"fullname": "AK",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "akhaliq",
"type": "user"
} | false | null | 2502.00963 | [
{
"_id": "67ad85a2e0aed508231406b6",
"hidden": false,
"name": "Mauricio Soroco",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-14T08:01:31.427Z",
"user": {
"_id": "666c8c53a119281ee0f1aeb0",
"avatarUrl": "/avatars/9a009cd36c9e7da4aae78d66bdafe61f.svg",
"fullname": "Mauricio Soroco",
"isPro": false,
"type": "user",
"user": "maurice1671"
}
},
{
"_id": "67ad85a2e0aed508231406b7",
"hidden": false,
"name": "Jialin Song",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-18T09:34:16.573Z",
"user": {
"_id": "64d686f379d99b87003d6100",
"avatarUrl": "/avatars/f57708dd923fe0a5dd05ccda6415b9df.svg",
"fullname": "Jialin Song",
"isPro": false,
"type": "user",
"user": "jsong2333333"
}
},
{
"_id": "67ad85a2e0aed508231406b8",
"hidden": false,
"name": "Mengzhou Xia",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ad85a2e0aed508231406b9",
"hidden": false,
"name": "Kye Emond",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ad85a2e0aed508231406ba",
"hidden": false,
"name": "Weiran Sun",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ad85a2e0aed508231406bb",
"hidden": false,
"name": "Wuyang Chen",
"status": "extracted_confirmed",
"statusLastChangedAt": "2025-02-13T05:40:20.546Z",
"user": {
"_id": "64df97c628d5d234ce0bf83c",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/64df97c628d5d234ce0bf83c/BIEH7Ep4ffytiAcUIPFe-.jpeg",
"fullname": "Wuyang Chen",
"isPro": false,
"type": "user",
"user": "wuyangchen"
}
}
] | 2025-02-03T00:03:41 | PDE-Controller: LLMs for Autoformalization and Reasoning of PDEs | While recent AI-for-math has made strides in pure mathematics, areas of
applied mathematics, particularly PDEs, remain underexplored despite their
significant real-world applications. We present PDE-Controller, a framework
that enables large language models (LLMs) to control systems governed by
partial differential equations (PDEs). Our approach enables LLMs to transform
informal natural language instructions into formal specifications, and then
execute reasoning and planning steps to improve the utility of PDE control. We
build a holistic solution comprising datasets (both human-written cases and 2
million synthetic samples), math-reasoning models, and novel evaluation
metrics, all of which require significant effort. Our PDE-Controller
significantly outperforms prompting the latest open-source and GPT models in
reasoning, autoformalization, and program synthesis, achieving up to a 62%
improvement in utility gain for PDE control. By bridging the gap between
language generation and PDE systems, we demonstrate the potential of LLMs in
addressing complex scientific and engineering challenges. We will release all
data, model checkpoints, and code at https://pde-controller.github.io/. | 16 | 67ad85a3e0aed50823140705 | null | null |
|
2025-02-13T05:48:33.939000 | LLM Modules: Knowledge Transfer from a Large to a Small Model using Enhanced Cross-Attention | 2 | {
"_id": "63df748df0c75dfb87621f93",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63df748df0c75dfb87621f93/Hzc3Du2BqnTP_8R9JsP-4.jpeg",
"followerCount": 1,
"fullname": "Konstantin Kolomeitsev",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "kkolomeitsev",
"type": "user"
} | true | null | 2502.08213 | [
{
"_id": "67ad7c47565fbcfa66777f5a",
"hidden": false,
"name": "Konstantin Kolomeitsev",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-13T08:21:29.819Z",
"user": {
"_id": "63df748df0c75dfb87621f93",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63df748df0c75dfb87621f93/Hzc3Du2BqnTP_8R9JsP-4.jpeg",
"fullname": "Konstantin Kolomeitsev",
"isPro": false,
"type": "user",
"user": "kkolomeitsev"
}
}
] | 2025-02-12T08:48:55 | LLM Modules: Knowledge Transfer from a Large to a Small Model using
Enhanced Cross-Attention | In this work, we propose an architecture of LLM Modules that enables the
transfer of knowledge from a large pre-trained model to a smaller model using
an Enhanced Cross-Attention mechanism. In the proposed scheme, the Qwen2-1.5B
model is frozen and its representations are passed through specially designed
attention layers to the GPT-Neo-125M model, which is trained on limited
computational resources. Experimental results on the Bespoke-Stratos-17k
dataset demonstrate that after 15 epochs of training, the combined model
generates responses comparable in quality to those obtained by distillation. We
discuss the advantages of the modular approach, provide examples of input
queries and comparative analysis, and outline prospects for further extension
of the method. | 4 | 67ad7c47565fbcfa66777f80 | null | null |
|
2025-02-13T03:47:28.654000 | Ignore the KL Penalty! Boosting Exploration on Critical Tokens to Enhance RL Fine-Tuning | 2 | {
"_id": "66470e227d73a39a342866e4",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/no-auth/mDB2nCMAgcX8bQ1u1p7P4.png",
"followerCount": 5,
"fullname": "Roman Plaud",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "lecraquito",
"type": "user"
} | true | null | 2502.06533 | [
{
"_id": "67accc647e1fcf03e14b1033",
"hidden": false,
"name": "Jean Vassoyan",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-13T08:24:24.993Z",
"user": {
"_id": "6637cd3e691043ccb248d0fd",
"avatarUrl": "/avatars/94cf09cf817327be50ecba75f7f60fa1.svg",
"fullname": "Jean Vassoyan",
"isPro": false,
"type": "user",
"user": "supertardigrade"
}
},
{
"_id": "67accc647e1fcf03e14b1034",
"hidden": false,
"name": "Nathanaël Beau",
"status": "extracted_pending",
"statusLastChangedAt": "2025-02-12T16:29:25.829Z",
"user": {
"_id": "63da60458658cbc1cc489bd7",
"avatarUrl": "/avatars/620ce7ea229de7abe4dc9ea93021f0e4.svg",
"fullname": "Nathanaël Beau",
"isPro": false,
"type": "user",
"user": "Nbeau"
}
},
{
"_id": "67accc647e1fcf03e14b1035",
"hidden": false,
"name": "Roman Plaud",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-13T08:24:27.330Z",
"user": {
"_id": "66470e227d73a39a342866e4",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/no-auth/mDB2nCMAgcX8bQ1u1p7P4.png",
"fullname": "Roman Plaud",
"isPro": false,
"type": "user",
"user": "lecraquito"
}
}
] | 2025-02-10T14:56:25 | Ignore the KL Penalty! Boosting Exploration on Critical Tokens to
Enhance RL Fine-Tuning | The ability to achieve long-term goals is a key challenge in the current
development of large language models (LLMs). To address this, pre-trained LLMs
can be fine-tuned with reinforcement learning (RL) to explore solutions that
optimize a given goal. However, exploration with LLMs is difficult, as a
balance has to be struck between discovering new solutions and staying close
enough to the pre-trained model, so as not to degrade basic capabilities. This
is typically controlled with a Kullback-Leibler (KL) penalty. In this paper, we
investigate the exploration dynamics of a small language model on a simple
arithmetic task. We show how varying degrees of pre-training influence
exploration and demonstrate the importance of "critical tokens" which have a
dramatic impact on the final outcome. Consequently, we introduce a simple
modification to the KL penalty that favors exploration on critical tokens,
increasing the efficiency of the RL fine-tuning stage. | 18 | 67accc657e1fcf03e14b109e | null | null |
|
2025-02-13T03:45:43.646000 | Animate Anyone 2: High-Fidelity Character Image Animation with Environment Affordance | 4 | {
"_id": "67ad9f06040354c9105b00bc",
"avatarUrl": "/avatars/39e9f4c48c93bb33f155390653936fc1.svg",
"followerCount": null,
"fullname": "LiHu",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "Hookszdp",
"type": "user"
} | true | null | 2502.06145 | [
{
"_id": "67ad9fb9731ff0d7da9f40e9",
"hidden": false,
"name": "Li Hu",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-13T08:21:24.286Z",
"user": {
"_id": "67ad9f06040354c9105b00bc",
"avatarUrl": "/avatars/39e9f4c48c93bb33f155390653936fc1.svg",
"fullname": "LiHu",
"isPro": false,
"type": "user",
"user": "Hookszdp"
}
},
{
"_id": "67ad9fb9731ff0d7da9f40ea",
"hidden": false,
"name": "Guangyuan Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ad9fb9731ff0d7da9f40eb",
"hidden": false,
"name": "Zhen Shen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ad9fb9731ff0d7da9f40ec",
"hidden": false,
"name": "Xin Gao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ad9fb9731ff0d7da9f40ed",
"hidden": false,
"name": "Dechao Meng",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ad9fb9731ff0d7da9f40ee",
"hidden": false,
"name": "Lian Zhuo",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ad9fb9731ff0d7da9f40ef",
"hidden": false,
"name": "Peng Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ad9fb9731ff0d7da9f40f0",
"hidden": false,
"name": "Bang Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ad9fb9731ff0d7da9f40f1",
"hidden": false,
"name": "Liefeng Bo",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-10T04:20:11 | Animate Anyone 2: High-Fidelity Character Image Animation with
Environment Affordance | Recent character image animation methods based on diffusion models, such as
Animate Anyone, have made significant progress in generating consistent and
generalizable character animations. However, these approaches fail to produce
reasonable associations between characters and their environments. To address
this limitation, we introduce Animate Anyone 2, aiming to animate characters
with environment affordance. Beyond extracting motion signals from source
video, we additionally capture environmental representations as conditional
inputs. The environment is formulated as the region excluding the
characters, and our model generates characters to populate these regions while
maintaining coherence with the environmental context. We propose a
shape-agnostic mask strategy that more effectively characterizes the
relationship between character and environment. Furthermore, to enhance the
fidelity of object interactions, we leverage an object guider to extract
features of interacting objects and employ spatial blending for feature
injection. We also introduce a pose modulation strategy that enables the model
to handle more diverse motion patterns. Experimental results demonstrate the
superior performance of the proposed method. | 16 | 67ad9fbb731ff0d7da9f4145 | null | null |
|
2025-02-13T03:34:47.873000 | BenchMAX: A Comprehensive Multilingual Evaluation Suite for Large Language Models | 2 | {
"_id": "649d1d4c379eada9a580cf59",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/649d1d4c379eada9a580cf59/ucXv7KoJDEB3Phgn-Dn5E.png",
"followerCount": 3,
"fullname": "xuhuang",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "xuhuang87",
"type": "user"
} | true | null | 2502.07346 | [
{
"_id": "67ac4e046b8c86e0cc7988f0",
"hidden": false,
"name": "Xu Huang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-13T08:25:17.555Z",
"user": {
"_id": "649d1d4c379eada9a580cf59",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/649d1d4c379eada9a580cf59/ucXv7KoJDEB3Phgn-Dn5E.png",
"fullname": "xuhuang",
"isPro": false,
"type": "user",
"user": "xuhuang87"
}
},
{
"_id": "67ac4e046b8c86e0cc7988f1",
"hidden": false,
"name": "Wenhao Zhu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac4e046b8c86e0cc7988f2",
"hidden": false,
"name": "Hanxu Hu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac4e046b8c86e0cc7988f3",
"hidden": false,
"name": "Conghui He",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac4e046b8c86e0cc7988f4",
"hidden": false,
"name": "Lei Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac4e046b8c86e0cc7988f5",
"hidden": false,
"name": "Shujian Huang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac4e046b8c86e0cc7988f6",
"hidden": false,
"name": "Fei Yuan",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-11T08:17:19 | BenchMAX: A Comprehensive Multilingual Evaluation Suite for Large
Language Models | Previous multilingual benchmarks focus primarily on simple understanding
tasks, but for large language models (LLMs), we emphasize proficiency in
instruction following, reasoning, long context understanding, code generation,
and so on. However, measuring these advanced capabilities across languages is
underexplored. To address the disparity, we introduce BenchMAX, a multi-way
multilingual evaluation benchmark that allows for fair comparisons of these
important abilities across languages. To maintain high quality, three distinct
native-speaking annotators independently annotated each sample across all tasks
after the data was machine-translated from English into 16 other languages.
Additionally, we present a novel translation challenge stemming from dataset
construction. Extensive experiments on BenchMAX reveal varying effectiveness of
core capabilities across languages, highlighting performance gaps that cannot
be bridged by simply scaling up model size. BenchMAX serves as a comprehensive
multilingual evaluation platform, providing a promising test bed to promote the
development of multilingual language models. The dataset and code are publicly
accessible. | 51 | 67ac4e056b8c86e0cc798952 | null | null |
|
2025-02-13T03:30:35.137000 | Mediator: Memory-efficient LLM Merging with Less Parameter Conflicts and Uncertainty Based Routing | 2 | {
"_id": "63024676056ec3a2a8714b24",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1661093436322-noauth.jpeg",
"followerCount": 5,
"fullname": "Xiang Liu",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "Dominic789654",
"type": "user"
} | true | null | 2502.04411 | [
{
"_id": "67adad972883187d78409a7a",
"hidden": false,
"name": "Kunfeng Lai",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67adad972883187d78409a7b",
"hidden": false,
"name": "Zhenheng Tang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67adad972883187d78409a7c",
"hidden": false,
"name": "Xinglin Pan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67adad972883187d78409a7d",
"hidden": false,
"name": "Peijie Dong",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67adad972883187d78409a7e",
"hidden": false,
"name": "Xiang Liu",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-13T08:45:17.030Z",
"user": {
"_id": "63024676056ec3a2a8714b24",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1661093436322-noauth.jpeg",
"fullname": "Xiang Liu",
"isPro": false,
"type": "user",
"user": "Dominic789654"
}
},
{
"_id": "67adad972883187d78409a7f",
"hidden": false,
"name": "Haolan Chen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67adad972883187d78409a80",
"hidden": false,
"name": "Li Shen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67adad972883187d78409a81",
"hidden": false,
"name": "Bo Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67adad972883187d78409a82",
"hidden": false,
"name": "Xiaowen Chu",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-06T11:26:30 | Mediator: Memory-efficient LLM Merging with Less Parameter Conflicts and
Uncertainty Based Routing | Model merging aggregates Large Language Models (LLMs) finetuned on different
tasks into a stronger one. However, parameter conflicts between models lead to
performance degradation in averaging. While model routing addresses this issue
by selecting individual models during inference, it imposes excessive storage
and compute costs, and fails to leverage the common knowledge from different
models. In this work, we observe that different layers exhibit varying levels
of parameter conflicts. Building on this insight, we average layers with
minimal parameter conflicts and use a novel task-level expert routing for
layers with significant conflicts. To further reduce storage costs, inspired by
task arithmetic sparsity, we decouple multiple fine-tuned experts into a dense
expert and several sparse experts. Considering the out-of-distribution samples,
we select and merge appropriate experts based on the task uncertainty of the
input data. We conduct extensive experiments on both LLaMA and Qwen with
varying parameter scales, and evaluate on real-world reasoning tasks. Results
demonstrate that our method consistently achieves significant performance
improvements at a lower system cost than existing methods.
|
2025-02-13T01:47:30.377000 | MetaSC: Test-Time Safety Specification Optimization for Language Models | 2 | {
"_id": "5fad8602b8423e1d80b8a965",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/5fad8602b8423e1d80b8a965/tRqTwcZmrGka8c1vFq2wX.jpeg",
"followerCount": 120,
"fullname": "Victor Gallego",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "vicgalle",
"type": "user"
} | true | null | 2502.07985 | [
{
"_id": "67ad9577b469222e0df18134",
"hidden": false,
"name": "Víctor Gallego",
"status": "extracted_pending",
"statusLastChangedAt": "2025-02-13T06:47:20.731Z",
"user": {
"_id": "5fad8602b8423e1d80b8a965",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/5fad8602b8423e1d80b8a965/tRqTwcZmrGka8c1vFq2wX.jpeg",
"fullname": "Victor Gallego",
"isPro": false,
"type": "user",
"user": "vicgalle"
}
}
] | 2025-02-11T22:06:25 | MetaSC: Test-Time Safety Specification Optimization for Language Models | We propose a novel dynamic safety framework that optimizes language model
(LM) safety reasoning at inference time without modifying model weights.
Building on recent advances in self-critique methods, our approach leverages a
meta-critique mechanism that iteratively updates safety prompts (termed
specifications) to drive the critique and revision process adaptively. This
test-time optimization not only improves performance against adversarial
jailbreak requests but also in diverse general safety-related tasks, such as
avoiding moral harm or pursuing honest responses. Our empirical evaluations
across several language models demonstrate that dynamically optimized safety
prompts yield significantly higher safety scores compared to fixed system
prompts and static self-critique defenses. Code to be released at
https://github.com/vicgalle/meta-self-critique.git . | 3 | 67ad9578b469222e0df18162 | null | null |
|
2025-02-13T01:39:08.775000 | WorldGUI: Dynamic Testing for Comprehensive Desktop GUI Automation | 4 | {
"_id": "647d7eb9770c299e56f5b39b",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/647d7eb9770c299e56f5b39b/CC5JJgkyLkXOxw-BeT4G5.jpeg",
"followerCount": 1,
"fullname": "Henry Hengyuan Zhao",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "hhenryz",
"type": "user"
} | true | null | 2502.08047 | [
{
"_id": "67ad92bfbbf3810ab20595c2",
"hidden": false,
"name": "Henry Hengyuan Zhao",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-14T15:30:16.490Z",
"user": {
"_id": "647d7eb9770c299e56f5b39b",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/647d7eb9770c299e56f5b39b/CC5JJgkyLkXOxw-BeT4G5.jpeg",
"fullname": "Henry Hengyuan Zhao",
"isPro": false,
"type": "user",
"user": "hhenryz"
}
},
{
"_id": "67ad92bfbbf3810ab20595c3",
"hidden": false,
"name": "Difei Gao",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-14T15:30:23.319Z",
"user": {
"_id": "661abfba70ec567e154fbb00",
"avatarUrl": "/avatars/46c004650ba15525ccf02d40c9314085.svg",
"fullname": "Difei Gao",
"isPro": false,
"type": "user",
"user": "QuStar"
}
},
{
"_id": "67ad92bfbbf3810ab20595c4",
"hidden": false,
"name": "Mike Zheng Shou",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-14T15:30:30.271Z",
"user": {
"_id": "661ab3da2b14565c7acccf5c",
"avatarUrl": "/avatars/fa4fc03664803e02aede4d4c3d50b393.svg",
"fullname": "Mike Zheng Shou",
"isPro": false,
"type": "user",
"user": "AnalMom"
}
}
] | 2025-02-12T01:06:10 | WorldGUI: Dynamic Testing for Comprehensive Desktop GUI Automation | Current GUI agents have achieved outstanding performance in GUI element
grounding. However, planning remains highly challenging, especially due to
sensitivity to the initial state of the environment. Specifically, slight
differences in the initial state (such as the target software not being open or
the interface not being in its default state) often lead to planning errors.
This issue is widespread in real user scenarios, but existing benchmarks fail
to evaluate it. In this paper, we present WorldGUI, a novel GUI benchmark that
designs GUI tasks with various initial states to simulate real computer-user
interactions. The benchmark spans a wide range of tasks across 10 popular
software applications, including PowerPoint, VSCode, and Adobe Acrobat. In
addition, to address the challenges of dynamic GUI automation tasks, we propose
GUI-Thinker, a holistic framework, leveraging a critique mechanism, that
effectively manages the unpredictability and complexity of GUI interactions.
Experimental results demonstrate that GUI-Thinker significantly outperforms
Claude-3.5 (Computer Use) by 14.9% in success rate on WorldGUI tasks. This
improvement underscores the effectiveness of our critical-thinking-based
framework in enhancing GUI automation. | 26 | 67ad92c1bbf3810ab205961c | https://showlab.github.io/WorldGUI/ | https://github.com/showlab/WorldGUI |
|
2025-02-13T00:06:04.056000 | Towards Trustworthy Retrieval Augmented Generation for Large Language Models: A Survey | 2 | {
"_id": "62c5947524171688a9feb992",
"avatarUrl": "/avatars/5a151713b9eae8dc566f5957acee3475.svg",
"followerCount": 8,
"fullname": "Franck Dernoncourt",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "Franck-Dernoncourt",
"type": "user"
} | false | null | 2502.06872 | [
{
"_id": "67ad7da995ff670869168209",
"hidden": false,
"name": "Bo Ni",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ad7da995ff67086916820a",
"hidden": false,
"name": "Zheyuan Liu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ad7da995ff67086916820b",
"hidden": false,
"name": "Leyao Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ad7da995ff67086916820c",
"hidden": false,
"name": "Yongjia Lei",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ad7da995ff67086916820d",
"hidden": false,
"name": "Yuying Zhao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ad7da995ff67086916820e",
"hidden": false,
"name": "Xueqi Cheng",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ad7da995ff67086916820f",
"hidden": false,
"name": "Qingkai Zeng",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ad7da995ff670869168210",
"hidden": false,
"name": "Luna Dong",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ad7da995ff670869168211",
"hidden": false,
"name": "Yinglong Xia",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ad7da995ff670869168212",
"hidden": false,
"name": "Krishnaram Kenthapadi",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ad7da995ff670869168213",
"hidden": false,
"name": "Ryan Rossi",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ad7da995ff670869168214",
"hidden": false,
"name": "Franck Dernoncourt",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-13T08:21:27.740Z",
"user": {
"_id": "62c5947524171688a9feb992",
"avatarUrl": "/avatars/5a151713b9eae8dc566f5957acee3475.svg",
"fullname": "Franck Dernoncourt",
"isPro": false,
"type": "user",
"user": "Franck-Dernoncourt"
}
},
{
"_id": "67ad7da995ff670869168215",
"hidden": false,
"name": "Md Mehrab Tanjim",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ad7da995ff670869168216",
"hidden": false,
"name": "Nesreen Ahmed",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ad7da995ff670869168217",
"hidden": false,
"name": "Xiaorui Liu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ad7da995ff670869168218",
"hidden": false,
"name": "Wenqi Fan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ad7da995ff670869168219",
"hidden": false,
"name": "Erik Blasch",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ad7da995ff67086916821a",
"hidden": false,
"name": "Yu Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ad7da995ff67086916821b",
"hidden": false,
"name": "Meng Jiang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ad7da995ff67086916821c",
"hidden": false,
"name": "Tyler Derr",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-08T06:50:47 | Towards Trustworthy Retrieval Augmented Generation for Large Language
Models: A Survey | Retrieval-Augmented Generation (RAG) is an advanced technique designed to
address the challenges of Artificial Intelligence-Generated Content (AIGC). By
integrating context retrieval into content generation, RAG provides reliable
and up-to-date external knowledge, reduces hallucinations, and ensures relevant
context across a wide range of tasks. However, despite RAG's success and
potential, recent studies have shown that the RAG paradigm also introduces new
risks, including robustness issues, privacy concerns, adversarial attacks, and
accountability issues. Addressing these risks is critical for future
applications of RAG systems, as they directly impact their trustworthiness.
Although various methods have been developed to improve the trustworthiness of
RAG systems, there is a lack of a unified perspective and framework for
research on this topic. Thus, in this paper, we aim to address this gap by
providing a comprehensive roadmap for developing trustworthy RAG systems. We
place our discussion around six key perspectives: reliability, privacy,
safety, fairness, explainability, and accountability. For each perspective, we
present a general framework and taxonomy, offering a structured approach to
understanding the current challenges, evaluating existing solutions, and
identifying promising future research directions. To encourage broader adoption
and innovation, we also highlight the downstream applications where trustworthy
RAG systems have a significant impact. | 8 | 67ad7daa95ff670869168251 | null | null |
|
2025-02-13T00:04:29.194000 | NoLiMa: Long-Context Evaluation Beyond Literal Matching | 2 | {
"_id": "62c5947524171688a9feb992",
"avatarUrl": "/avatars/5a151713b9eae8dc566f5957acee3475.svg",
"followerCount": 8,
"fullname": "Franck Dernoncourt",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "Franck-Dernoncourt",
"type": "user"
} | true | null | 2502.05167 | [
{
"_id": "67aa583c3a878652daeae02e",
"hidden": false,
"name": "Ali Modarressi",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-11T07:55:42.560Z",
"user": {
"_id": "60e4738a8c0ddd18fc27ff88",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/60e4738a8c0ddd18fc27ff88/lpLeeIW8r85RTY4fGZTva.jpeg",
"fullname": "Ali Modarressi",
"isPro": false,
"type": "user",
"user": "amodaresi"
}
},
{
"_id": "67aa583c3a878652daeae02f",
"hidden": false,
"name": "Hanieh Deilamsalehy",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aa583c3a878652daeae030",
"hidden": false,
"name": "Franck Dernoncourt",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-13T08:26:01.327Z",
"user": {
"_id": "62c5947524171688a9feb992",
"avatarUrl": "/avatars/5a151713b9eae8dc566f5957acee3475.svg",
"fullname": "Franck Dernoncourt",
"isPro": false,
"type": "user",
"user": "Franck-Dernoncourt"
}
},
{
"_id": "67aa583c3a878652daeae031",
"hidden": false,
"name": "Trung Bui",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aa583c3a878652daeae032",
"hidden": false,
"name": "Ryan A. Rossi",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aa583c3a878652daeae033",
"hidden": false,
"name": "Seunghyun Yoon",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aa583c3a878652daeae034",
"hidden": false,
"name": "Hinrich Schütze",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-07T18:49:46 | NoLiMa: Long-Context Evaluation Beyond Literal Matching | Recent large language models (LLMs) support long contexts ranging from 128K
to 1M tokens. A popular method for evaluating these capabilities is the
needle-in-a-haystack (NIAH) test, which involves retrieving a "needle"
(relevant information) from a "haystack" (long irrelevant context). Extensions
of this approach include increasing distractors, fact chaining, and in-context
reasoning. However, in these benchmarks, models can exploit existing literal
matches between the needle and haystack to simplify the task. To address this,
we introduce NoLiMa, a benchmark extending NIAH with a carefully designed
needle set, where questions and needles have minimal lexical overlap, requiring
models to infer latent associations to locate the needle within the haystack.
We evaluate 12 popular LLMs that claim to support contexts of at least 128K
tokens. While they perform well in short contexts (<1K), performance degrades
significantly as context length increases. At 32K, for instance, 10 models drop
below 50% of their strong short-length baselines. Even GPT-4o, one of the
top-performing exceptions, experiences a reduction from an almost-perfect
baseline of 99.3% to 69.7%. Our analysis suggests these declines stem from the
increased difficulty the attention mechanism faces in longer contexts when
literal matches are absent, making it harder to retrieve relevant information. | 15 | 67aa583d3a878652daeae06c | null | null |
|
2025-02-12T23:50:07.130000 | TextAtlas5M: A Large-scale Dataset for Dense Text Image Generation | 2 | {
"_id": "62333a88fd7bb4a39b92d387",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/62333a88fd7bb4a39b92d387/e21AhpcXq37Ak_7rZ-Ca9.png",
"followerCount": 6,
"fullname": "Alex Jinpeng Wang",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "Awiny",
"type": "user"
} | false | null | 2502.07870 | [
{
"_id": "67ad79cb60ec3f444b21cbcb",
"hidden": false,
"name": "Alex Jinpeng Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ad79cb60ec3f444b21cbcc",
"hidden": false,
"name": "Dongxing Mao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ad79cb60ec3f444b21cbcd",
"hidden": false,
"name": "Jiawei Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ad79cb60ec3f444b21cbce",
"hidden": false,
"name": "Weiming Han",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ad79cb60ec3f444b21cbcf",
"hidden": false,
"name": "Zhuobai Dong",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ad79cb60ec3f444b21cbd0",
"hidden": false,
"name": "Linjie Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ad79cb60ec3f444b21cbd1",
"hidden": false,
"name": "Yiqi Lin",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ad79cb60ec3f444b21cbd2",
"hidden": false,
"name": "Zhengyuan Yang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ad79cb60ec3f444b21cbd3",
"hidden": false,
"name": "Libo Qin",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ad79cb60ec3f444b21cbd4",
"hidden": false,
"name": "Fuwei Zhang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-13T20:35:54.519Z",
"user": {
"_id": "615181cd0d6899cb526dbd8d",
"avatarUrl": "/avatars/4e29a953db4edee0abfc3485be4faf3d.svg",
"fullname": "zhangfuwei",
"isPro": false,
"type": "user",
"user": "fuwei"
}
},
{
"_id": "67ad79cb60ec3f444b21cbd5",
"hidden": false,
"name": "Lijuan Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ad79cb60ec3f444b21cbd6",
"hidden": false,
"name": "Min Li",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-11T18:59:19 | TextAtlas5M: A Large-scale Dataset for Dense Text Image Generation | Text-conditioned image generation has gained significant attention in recent
years and are processing increasingly longer and comprehensive text prompt. In
everyday life, dense and intricate text appears in contexts like
advertisements, infographics, and signage, where the integration of both text
and visuals is essential for conveying complex information. However, despite
these advances, the generation of images containing long-form text remains a
persistent challenge, largely due to the limitations of existing datasets,
which often focus on shorter and simpler text. To address this gap, we
introduce TextAtlas5M, a novel dataset specifically designed to evaluate
long-text rendering in text-conditioned image generation. Our dataset consists
of 5 million long-text generated and collected images across diverse data
types, enabling comprehensive evaluation of large-scale generative models on
long-text image generation. We further curate TextAtlasEval, a human-improved
test set of 3000 samples across 3 data domains, establishing one of the most extensive
benchmarks for text-conditioned generation. Evaluations suggest that the
TextAtlasEval benchmarks present significant challenges even for the most
advanced proprietary models (e.g. GPT4o with DallE-3), while their open-source
counterparts show an even larger performance gap. This evidence positions
TextAtlas5M as a valuable dataset for training and evaluating future-generation
text-conditioned image generation models. | 43 | 67ad79d260ec3f444b21cd1f | null | null |
|
2025-02-12T23:47:56.223000 | Light-A-Video: Training-free Video Relighting via Progressive Light Fusion | 2 | {
"_id": "64b4eec4faa3181a5eab9c46",
"avatarUrl": "/avatars/bcc9bf5cbf67546ad2b4c9ec8b96ac96.svg",
"followerCount": 16,
"fullname": "Jiaqi Wang",
"isHf": false,
"isMod": false,
"isPro": true,
"name": "myownskyW7",
"type": "user"
} | false | null | 2502.08590 | [
{
"_id": "67ad79552fdac6537b43f120",
"hidden": false,
"name": "Yujie Zhou",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ad79552fdac6537b43f121",
"hidden": false,
"name": "Jiazi Bu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ad79552fdac6537b43f122",
"hidden": false,
"name": "Pengyang Ling",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ad79552fdac6537b43f123",
"hidden": false,
"name": "Pan Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ad79552fdac6537b43f124",
"hidden": false,
"name": "Tong Wu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ad79552fdac6537b43f125",
"hidden": false,
"name": "Qidong Huang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ad79552fdac6537b43f126",
"hidden": false,
"name": "Jinsong Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ad79552fdac6537b43f127",
"hidden": false,
"name": "Xiaoyi Dong",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ad79552fdac6537b43f128",
"hidden": false,
"name": "Yuhang Zang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-13T08:21:31.817Z",
"user": {
"_id": "63859cf3b2906edaf83af9f0",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63859cf3b2906edaf83af9f0/iUQm5FAomzqYi6fkqIn9F.jpeg",
"fullname": "Yuhang Zang",
"isPro": false,
"type": "user",
"user": "yuhangzang"
}
},
{
"_id": "67ad79552fdac6537b43f129",
"hidden": false,
"name": "Yuhang Cao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ad79552fdac6537b43f12a",
"hidden": false,
"name": "Anyi Rao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ad79552fdac6537b43f12b",
"hidden": false,
"name": "Jiaqi Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ad79552fdac6537b43f12c",
"hidden": false,
"name": "Li Niu",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-12T17:24:19 | Light-A-Video: Training-free Video Relighting via Progressive Light
Fusion | Recent advancements in image relighting models, driven by large-scale
datasets and pre-trained diffusion models, have enabled the imposition of
consistent lighting. However, video relighting still lags, primarily due to the
excessive training costs and the scarcity of diverse, high-quality video
relighting datasets. A simple application of image relighting models on a
frame-by-frame basis leads to several issues: lighting source inconsistency and
relighted appearance inconsistency, resulting in flickers in the generated
videos. In this work, we propose Light-A-Video, a training-free approach to
achieve temporally smooth video relighting. Adapted from image relighting
models, Light-A-Video introduces two key techniques to enhance lighting
consistency. First, we design a Consistent Light Attention (CLA) module, which
enhances cross-frame interactions within the self-attention layers to stabilize
the generation of the background lighting source. Second, leveraging the
physical principle of light transport independence, we apply linear blending
between the source video's appearance and the relighted appearance, using a
Progressive Light Fusion (PLF) strategy to ensure smooth temporal transitions
in illumination. Experiments show that Light-A-Video improves the temporal
consistency of relighted video while maintaining the image quality, ensuring
coherent lighting transitions across frames. Project page:
https://bujiazi.github.io/light-a-video.github.io/. | 39 | 67ad79572fdac6537b43f189 | null | null |
|
2025-02-12T23:47:31.651000 | LASP-2: Rethinking Sequence Parallelism for Linear Attention and Its Hybrid | 2 | {
"_id": "6246bb33da617c00b48e4d92",
"avatarUrl": "/avatars/0304a9f6eb7f5dee4d933d03222f94e9.svg",
"followerCount": 3,
"fullname": "Weigao Sun",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "weigao266",
"type": "user"
} | false | null | 2502.07563 | [
{
"_id": "67ad7929dc2968691c241147",
"hidden": false,
"name": "Weigao Sun",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-13T08:21:37.445Z",
"user": {
"_id": "6246bb33da617c00b48e4d92",
"avatarUrl": "/avatars/0304a9f6eb7f5dee4d933d03222f94e9.svg",
"fullname": "Weigao Sun",
"isPro": false,
"type": "user",
"user": "weigao266"
}
},
{
"_id": "67ad7929dc2968691c241148",
"hidden": false,
"name": "Disen Lan",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-13T08:21:33.746Z",
"user": {
"_id": "66ea643899af9ac3463639b1",
"avatarUrl": "/avatars/252d470e761a57834dee3dbc60dfefed.svg",
"fullname": "Disen Lan",
"isPro": false,
"type": "user",
"user": "landisen"
}
},
{
"_id": "67ad7929dc2968691c241149",
"hidden": false,
"name": "Yiran Zhong",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ad7929dc2968691c24114a",
"hidden": false,
"name": "Xiaoye Qu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ad7929dc2968691c24114b",
"hidden": false,
"name": "Yu Cheng",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-11T14:01:39 | LASP-2: Rethinking Sequence Parallelism for Linear Attention and Its
Hybrid | Linear sequence modeling approaches, such as linear attention, provide
advantages like linear-time training and constant-memory inference over
sequence lengths. However, existing sequence parallelism (SP) methods are
either not optimized for the right-product-first feature of linear attention or
use a ring-style communication strategy, which results in lower computation
parallelism and limits their scalability for longer sequences in distributed
systems. In this paper, we introduce LASP-2, a new SP method to enhance both
communication and computation parallelism when training linear attention
transformer models with very-long input sequences. Compared to previous work
LASP, LASP-2 rethinks the minimal communication requirement for SP on linear
attention layers and reorganizes the whole communication-computation workflow of
LASP. In this way, only one single AllGather collective communication is needed
on intermediate memory states, whose sizes are independent of the sequence
length, leading to significant improvements of both communication and
computation parallelism, as well as their overlap. Additionally, we extend
LASP-2 to LASP-2H by applying similar communication redesign to standard
attention modules, offering an efficient SP solution for hybrid models that
blend linear and standard attention layers. Our evaluation on a Linear-Llama3
model, a variant of Llama3 with linear attention replacing standard attention,
demonstrates the effectiveness of LASP-2 and LASP-2H. Specifically, LASP-2
achieves training speed improvements of 15.2% over LASP and 36.6% over Ring
Attention, with a sequence length of 2048K across 64 GPUs. The Code is released
as a part of: https://github.com/OpenSparseLLMs/Linear-MoE. | 24 | 67ad792adc2968691c241173 | null | null |
|
2025-02-12T23:42:44.287000 | LLM Pretraining with Continuous Concepts | 4 | {
"_id": "60f1abe7544c2adfd699860c",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"followerCount": 6280,
"fullname": "AK",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "akhaliq",
"type": "user"
} | true | null | 2502.08524 | [
{
"_id": "67ad783da2808b57a3cd3316",
"hidden": false,
"name": "Jihoon Tack",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-14T15:31:46.173Z",
"user": {
"_id": "65b15a471566a60bda376e02",
"avatarUrl": "/avatars/413340cb2d44c765accb5b3fc450ccc4.svg",
"fullname": "Jihoon Tack",
"isPro": false,
"type": "user",
"user": "jihoontack"
}
},
{
"_id": "67ad783da2808b57a3cd3317",
"hidden": false,
"name": "Jack Lanchantin",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-14T15:31:36.784Z",
"user": {
"_id": "65dd30a6ea1a202d3cb469f7",
"avatarUrl": "/avatars/14029def026380e6e20822d916b25a72.svg",
"fullname": "Jack Lanchantin",
"isPro": false,
"type": "user",
"user": "jcklcn"
}
},
{
"_id": "67ad783da2808b57a3cd3318",
"hidden": false,
"name": "Jane Yu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ad783da2808b57a3cd3319",
"hidden": false,
"name": "Andrew Cohen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ad783da2808b57a3cd331a",
"hidden": false,
"name": "Ilia Kulikov",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ad783da2808b57a3cd331b",
"hidden": false,
"name": "Janice Lan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ad783da2808b57a3cd331c",
"hidden": false,
"name": "Shibo Hao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ad783da2808b57a3cd331d",
"hidden": false,
"name": "Yuandong Tian",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ad783da2808b57a3cd331e",
"hidden": false,
"name": "Jason Weston",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ad783da2808b57a3cd331f",
"hidden": false,
"name": "Xian Li",
"status": "extracted_pending",
"statusLastChangedAt": "2025-02-13T04:42:38.302Z",
"user": {
"_id": "659a395421a7431643caedda",
"avatarUrl": "/avatars/c1e0bbcedce68fe3b4fe39e0cf01c65c.svg",
"fullname": "Xian Li",
"isPro": false,
"type": "user",
"user": "xlxxl"
}
}
] | 2025-02-12T16:00:11 | LLM Pretraining with Continuous Concepts | Next token prediction has been the standard training objective used in large
language model pretraining. Representations are learned as a result of
optimizing for token-level perplexity. We propose Continuous Concept Mixing
(CoCoMix), a novel pretraining framework that combines discrete next token
prediction with continuous concepts. Specifically, CoCoMix predicts continuous
concepts learned from a pretrained sparse autoencoder and mixes them into the
model's hidden state by interleaving with token hidden representations. Through
experiments on multiple benchmarks, including language modeling and downstream
reasoning tasks, we show that CoCoMix is more sample efficient and consistently
outperforms standard next token prediction, knowledge distillation and
inserting pause tokens. We find that combining both concept learning and
interleaving in an end-to-end framework is critical to performance gains.
Furthermore, CoCoMix enhances interpretability and steerability by allowing
direct inspection and modification of the predicted concept, offering a
transparent way to guide the model's internal reasoning process. | 28 | 67ad783ea2808b57a3cd3361 | null | null |
|
2025-02-12T23:41:41.281000 | Distillation Scaling Laws | 4 | {
"_id": "60f1abe7544c2adfd699860c",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"followerCount": 6280,
"fullname": "AK",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "akhaliq",
"type": "user"
} | false | null | 2502.08606 | [
{
"_id": "67ad77f9cd8de299e5049c05",
"hidden": false,
"name": "Dan Busbridge",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-13T20:35:56.843Z",
"user": {
"_id": "64c3726f2a5eaefd000cdedd",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/64c3726f2a5eaefd000cdedd/iwFifH1sWQy7agW3eTmNQ.png",
"fullname": "Dan Busbridge",
"isPro": false,
"type": "user",
"user": "dbusbridge"
}
},
{
"_id": "67ad77f9cd8de299e5049c06",
"hidden": false,
"name": "Amitis Shidani",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-18T09:34:18.456Z",
"user": {
"_id": "65bbdf954d1795fa8ee6795e",
"avatarUrl": "/avatars/209a656248bf57b9e717ed628414e34b.svg",
"fullname": "Amitis Shidani",
"isPro": false,
"type": "user",
"user": "AmitisShidani"
}
},
{
"_id": "67ad77f9cd8de299e5049c07",
"hidden": false,
"name": "Floris Weers",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ad77f9cd8de299e5049c08",
"hidden": false,
"name": "Jason Ramapuram",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ad77f9cd8de299e5049c09",
"hidden": false,
"name": "Etai Littwin",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ad77f9cd8de299e5049c0a",
"hidden": false,
"name": "Russ Webb",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-12T17:52:47 | Distillation Scaling Laws | We provide a distillation scaling law that estimates distilled model
performance based on a compute budget and its allocation between the student
and teacher. Our findings reduce the risks associated with using distillation
at scale; compute allocation for both the teacher and student models can now be
done to maximize student performance. We provide compute-optimal distillation
recipes for when 1) a teacher exists, or 2) a teacher needs training. If many
students are to be distilled, or a teacher already exists, distillation
outperforms supervised pretraining until a compute level which grows
predictably with student size. If one student is to be distilled and a teacher
also needs training, supervised learning should be done instead. Additionally,
we provide insights across our large scale study of distillation, which
increase our understanding of distillation and inform experimental design. | 46 | 67ad77fccd8de299e5049d06 | null | null |
|
2025-02-12T21:57:30.420000 | SARChat-Bench-2M: A Multi-Task Vision-Language Benchmark for SAR Image Interpretation | 4 | {
"_id": "64a0ed5ed5374ca472cfb0ac",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/64a0ed5ed5374ca472cfb0ac/n_wXamXfR_PPn0hRbnR1X.jpeg",
"followerCount": 1,
"fullname": "ZhimingMa",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "JimmyMa99",
"type": "user"
} | true | [
"https://cdn-uploads.huggingface.co/production/uploads/64a0ed5ed5374ca472cfb0ac/LvHzRQCttMAvKS-LM0ZDH.png"
] | 2502.08168 | [
{
"_id": "67ad5f32d1a5243cc4fa38ad",
"hidden": false,
"name": "Zhiming Ma",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-13T08:21:57.239Z",
"user": {
"_id": "64a0ed5ed5374ca472cfb0ac",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/64a0ed5ed5374ca472cfb0ac/n_wXamXfR_PPn0hRbnR1X.jpeg",
"fullname": "ZhimingMa",
"isPro": false,
"type": "user",
"user": "JimmyMa99"
}
},
{
"_id": "67ad5f32d1a5243cc4fa38ae",
"hidden": false,
"name": "Xiayang Xiao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ad5f32d1a5243cc4fa38af",
"hidden": false,
"name": "Sihao Dong",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ad5f32d1a5243cc4fa38b0",
"hidden": false,
"name": "Peidong Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ad5f32d1a5243cc4fa38b1",
"hidden": false,
"name": "HaiPeng Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ad5f32d1a5243cc4fa38b2",
"hidden": false,
"name": "Qingyun Pan",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-12T07:19:36 | SARChat-Bench-2M: A Multi-Task Vision-Language Benchmark for SAR Image
Interpretation | In the field of synthetic aperture radar (SAR) remote sensing image
interpretation, although Vision language models (VLMs) have made remarkable
progress in natural language processing and image understanding, their
applications remain limited in professional domains due to insufficient domain
expertise. This paper innovatively proposes the first large-scale multimodal
dialogue dataset for SAR images, named SARChat-2M, which contains approximately
2 million high-quality image-text pairs, encompasses diverse scenarios with
detailed target annotations. This dataset not only supports several key tasks
such as visual understanding and object detection, but also has unique
innovative aspects: this study develops a visual-language dataset and benchmark
for the SAR domain, enabling and evaluating VLMs' capabilities in SAR image
interpretation, which provides a paradigmatic framework for constructing
multimodal datasets across various remote sensing vertical domains. Through
experiments on 16 mainstream VLMs, the effectiveness of the dataset has been
fully verified, and the first multi-task dialogue benchmark in the SAR field
has been successfully established. The project will be released at
https://github.com/JimmyMa99/SARChat, aiming to promote the in-depth
development and wide application of SAR visual language models. | 11 | 67ad5f37d1a5243cc4fa399c | null | null |
|
2025-02-12T21:55:44.479000 | CineMaster: A 3D-Aware and Controllable Framework for Cinematic Text-to-Video Generation | 2 | {
"_id": "60f1abe7544c2adfd699860c",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"followerCount": 6280,
"fullname": "AK",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "akhaliq",
"type": "user"
} | false | null | 2502.08639 | [
{
"_id": "67ad5f25cad644864b436186",
"hidden": false,
"name": "Qinghe Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ad5f25cad644864b436187",
"hidden": false,
"name": "Yawen Luo",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-14T15:30:49.636Z",
"user": {
"_id": "66743477ab975c859114d410",
"avatarUrl": "/avatars/ac692cc336e383fb2cb53db6d1e3fe8c.svg",
"fullname": "yawenluo",
"isPro": false,
"type": "user",
"user": "yawenluo"
}
},
{
"_id": "67ad5f25cad644864b436188",
"hidden": false,
"name": "Xiaoyu Shi",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ad5f25cad644864b436189",
"hidden": false,
"name": "Xu Jia",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ad5f25cad644864b43618a",
"hidden": false,
"name": "Huchuan Lu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ad5f25cad644864b43618b",
"hidden": false,
"name": "Tianfan Xue",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-14T15:31:01.219Z",
"user": {
"_id": "67631f57abfbd60470d4b3c3",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/no-auth/5VrG0IZjYrDLiatOr4y06.png",
"fullname": "Tianfan Xue",
"isPro": false,
"type": "user",
"user": "littlemouse9"
}
},
{
"_id": "67ad5f25cad644864b43618c",
"hidden": false,
"name": "Xintao Wang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-14T15:31:06.645Z",
"user": {
"_id": "60e272ca6c78a8c122b12127",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/60e272ca6c78a8c122b12127/xldEGBzGrU-bX6IwAw0Ie.jpeg",
"fullname": "Xintao Wang",
"isPro": false,
"type": "user",
"user": "Xintao"
}
},
{
"_id": "67ad5f25cad644864b43618d",
"hidden": false,
"name": "Pengfei Wan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ad5f25cad644864b43618e",
"hidden": false,
"name": "Di Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ad5f25cad644864b43618f",
"hidden": false,
"name": "Kun Gai",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-12T18:55:36 | CineMaster: A 3D-Aware and Controllable Framework for Cinematic
Text-to-Video Generation | In this work, we present CineMaster, a novel framework for 3D-aware and
controllable text-to-video generation. Our goal is to empower users with
comparable controllability as professional film directors: precise placement of
objects within the scene, flexible manipulation of both objects and camera in
3D space, and intuitive layout control over the rendered frames. To achieve
this, CineMaster operates in two stages. In the first stage, we design an
interactive workflow that allows users to intuitively construct 3D-aware
conditional signals by positioning object bounding boxes and defining camera
movements within the 3D space. In the second stage, these control
signals--comprising rendered depth maps, camera trajectories and object class
labels--serve as guidance for a text-to-video diffusion model, ensuring
generation of the user-intended video content. Furthermore, to overcome the scarcity
of in-the-wild datasets with 3D object motion and camera pose annotations, we
carefully establish an automated data annotation pipeline that extracts 3D
bounding boxes and camera trajectories from large-scale video data. Extensive
qualitative and quantitative experiments demonstrate that CineMaster
significantly outperforms existing methods and achieves prominent 3D-aware
text-to-video generation. Project page: https://cinemaster-dev.github.io/. | 37 | 67ad5f26cad644864b4361cf | null | null |
|
2025-02-12T21:48:00.325000 | Next Block Prediction: Video Generation via Semi-Autoregressive Modeling | 2 | {
"_id": "60d2e681b8448e1785bbda06",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1624434302056-noauth.jpeg",
"followerCount": 5,
"fullname": "Shuhuai Ren",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "ShuhuaiRen",
"type": "user"
} | false | null | 2502.07737 | [
{
"_id": "67ad5d2f8436e8ea7abb7a15",
"hidden": false,
"name": "Shuhuai Ren",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ad5d2f8436e8ea7abb7a16",
"hidden": false,
"name": "Shuming Ma",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ad5d2f8436e8ea7abb7a17",
"hidden": false,
"name": "Xu Sun",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ad5d2f8436e8ea7abb7a18",
"hidden": false,
"name": "Furu Wei",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-11T17:57:53 | Next Block Prediction: Video Generation via Semi-Autoregressive Modeling | Next-Token Prediction (NTP) is a de facto approach for autoregressive (AR)
video generation, but it suffers from suboptimal unidirectional dependencies
and slow inference speed. In this work, we propose a semi-autoregressive
(semi-AR) framework, called Next-Block Prediction (NBP), for video generation.
By uniformly decomposing video content into equal-sized blocks (e.g., rows or
frames), we shift the generation unit from individual tokens to blocks,
allowing each token in the current block to simultaneously predict the
corresponding token in the next block. Unlike traditional AR modeling, our
framework employs bidirectional attention within each block, enabling tokens to
capture more robust spatial dependencies. By predicting multiple tokens in
parallel, NBP models significantly reduce the number of generation steps,
leading to faster and more efficient inference. Our model achieves FVD scores
of 103.3 on UCF101 and 25.5 on K600, outperforming the vanilla NTP model by an
average of 4.4. Furthermore, thanks to the reduced number of inference steps,
the NBP model generates 8.89 frames (128x128 resolution) per second, achieving
an 11x speedup. We also explored model scales ranging from 700M to 3B
parameters, observing significant improvements in generation quality, with FVD
scores dropping from 103.3 to 55.3 on UCF101 and from 25.5 to 19.5 on K600,
demonstrating the scalability of our approach. | 9 | 67ad5d308436e8ea7abb7a3d | null | null |
|
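The Next-Block Prediction abstract above describes attention that is bidirectional within a block but causal across blocks. A minimal sketch of how such a semi-autoregressive attention mask could be constructed (an illustrative reconstruction in NumPy, not the authors' code; `semi_ar_mask` and the 6-token/2-token-block sizes are assumptions for demonstration):

```python
import numpy as np

def semi_ar_mask(num_tokens: int, block_size: int) -> np.ndarray:
    """Boolean attention mask for semi-autoregressive (block-wise) decoding:
    attention is bidirectional *within* a block and causal *across* blocks,
    i.e. token i may attend to token j iff block(j) <= block(i).
    mask[i, j] == True means "i may attend to j"."""
    blocks = np.arange(num_tokens) // block_size
    return blocks[None, :] <= blocks[:, None]

# 6 tokens split into 3 blocks of 2 tokens each
mask = semi_ar_mask(num_tokens=6, block_size=2)
```

With this mask, every token in the current block sees the whole current block plus all earlier blocks, which is what allows the tokens of the next block to be predicted in parallel rather than one at a time.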
2025-02-12T21:45:28.944000 | Fino1: On the Transferability of Reasoning Enhanced LLMs to Finance | 5 | {
"_id": "63b58ed5889aa6707f0bb0f4",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63b58ed5889aa6707f0bb0f4/znl74_aMswlV8VtHrfj3G.jpeg",
"followerCount": 15,
"fullname": "Jimin Huang",
"isHf": false,
"isMod": false,
"isPro": true,
"name": "jiminHuang",
"type": "user"
} | true | null | 2502.08127 | [
{
"_id": "67ad5ca29109885ce9b859e4",
"hidden": false,
"name": "Lingfei Qian",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ad5ca29109885ce9b859e5",
"hidden": false,
"name": "Weipeng Zhou",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ad5ca29109885ce9b859e6",
"hidden": false,
"name": "Yan Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ad5ca29109885ce9b859e7",
"hidden": false,
"name": "Xueqing Peng",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ad5ca29109885ce9b859e8",
"hidden": false,
"name": "Jimin Huang",
"status": "extracted_confirmed",
"statusLastChangedAt": "2025-02-19T03:28:40.861Z",
"user": {
"_id": "63b58ed5889aa6707f0bb0f4",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63b58ed5889aa6707f0bb0f4/znl74_aMswlV8VtHrfj3G.jpeg",
"fullname": "Jimin Huang",
"isPro": true,
"type": "user",
"user": "jiminHuang"
}
},
{
"_id": "67ad5ca29109885ce9b859e9",
"hidden": false,
"name": "Qianqian Xie",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-13T08:22:01.539Z",
"user": {
"_id": "6479f4317c18dca75e9a9324",
"avatarUrl": "/avatars/9aa709230b057f57ee4415c04a622c63.svg",
"fullname": "Xie",
"isPro": false,
"type": "user",
"user": "QianqianXie1994"
}
}
] | 2025-02-12T05:13:04 | Fino1: On the Transferability of Reasoning Enhanced LLMs to Finance | Recent advancements in large language models (LLMs) have shown strong general
reasoning abilities, yet their effectiveness in financial reasoning remains
underexplored. In this study, we comprehensively evaluate 16 powerful reasoning
and general LLMs on three complex financial tasks involving financial text,
tabular data, and equations, assessing numerical reasoning, tabular
interpretation, financial terminology comprehension, long-context processing,
and equation-based problem solving. Our results show that while better datasets
and pretraining improve financial reasoning, general enhancements like CoT
fine-tuning do not always yield consistent gains. Moreover, all reasoning
strategies face challenges in improving performance on long-context and
multi-table tasks. To address these limitations, we develop a financial
reasoning-enhanced model based on Llama-3.1-8B-Instruct, by CoT fine-tuning and
reinforcement learning with domain-specific reasoning paths. Even with simple
fine-tuning with one financial dataset, our model achieves a consistent 10%
performance improvement across tasks, surpassing all 8B models and even
Llama3-70B-Instruct and Llama3.1-70B-Instruct on average. Our results highlight
the need for domain-specific adaptations in financial tasks, emphasizing future
directions such as multi-table reasoning, long-context processing, and
financial terminology comprehension. All our datasets, models, and codes are
publicly available. Furthermore, we introduce a leaderboard for benchmarking
future datasets and models. | 50 | 67ad5ca59109885ce9b85a5b | null | null |
|
2025-02-12T21:43:42.404000 | DPO-Shift: Shifting the Distribution of Direct Preference Optimization | 2 | {
"_id": "66270fcef7cf69d4223a8a3f",
"avatarUrl": "/avatars/115db0326737e65318c92a7b8dc5ed6a.svg",
"followerCount": null,
"fullname": "Xiao Li",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "xli0982",
"type": "user"
} | false | null | 2502.07599 | [
{
"_id": "67ad5bd2ac32a8e230fc8996",
"hidden": false,
"name": "Xiliang Yang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-18T16:39:57.092Z",
"user": {
"_id": "667fddfd10d2f8d7d62cb635",
"avatarUrl": "/avatars/9c2a5cd059653fbbfe2ed0b73d50316d.svg",
"fullname": "Xiliang Yang",
"isPro": false,
"type": "user",
"user": "NoManDeRY"
}
},
{
"_id": "67ad5bd2ac32a8e230fc8997",
"hidden": false,
"name": "Feng Jiang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ad5bd2ac32a8e230fc8998",
"hidden": false,
"name": "Qianen Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ad5bd2ac32a8e230fc8999",
"hidden": false,
"name": "Lei Zhao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ad5bd2ac32a8e230fc899a",
"hidden": false,
"name": "Xiao Li",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-11T14:49:44 | DPO-Shift: Shifting the Distribution of Direct Preference Optimization | Direct Preference Optimization (DPO) and its variants have become
increasingly popular for aligning language models with human preferences. These
methods aim to teach models to better distinguish between chosen (or preferred)
and rejected (or dispreferred) responses. However, prior research has
identified that the probability of chosen responses often decreases during
training, and this phenomenon is known as likelihood displacement. To tackle
this challenge, in this work we introduce DPO-Shift to controllably shift the
distribution of the chosen probability. Then, we show that DPO-Shift exhibits a
fundamental trade-off between improving the chosen probability and sacrificing
the reward margin, as supported by both theoretical analysis and experimental
validation. Furthermore, we demonstrate the superiority of DPO-Shift over DPO on
downstream tasks such as MT-Bench and a designed win rate experiment. We
believe this study shows that the likelihood displacement issue of DPO can be
effectively mitigated with a simple, theoretically grounded solution. Our code
is available at https://github.com/Meaquadddd/DPO-Shift. | 15 | 67ad5bd3ac32a8e230fc89a7 | null | https://github.com/Meaquadddd/DPO-Shift |
|
2025-02-12T21:41:19.791000 | TransMLA: Multi-head Latent Attention Is All You Need | 9 | {
"_id": "643f55d4ec817b766686438a",
"avatarUrl": "/avatars/0feb460432c92ab9ada0d417a7a38f6a.svg",
"followerCount": 17,
"fullname": "mengfanxu",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "fxmeng",
"type": "user"
} | true | null | 2502.07864 | [
{
"_id": "67ad5b3a007d78b391946a57",
"hidden": false,
"name": "Fanxu Meng",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-13T08:22:03.808Z",
"user": {
"_id": "643f55d4ec817b766686438a",
"avatarUrl": "/avatars/0feb460432c92ab9ada0d417a7a38f6a.svg",
"fullname": "mengfanxu",
"isPro": false,
"type": "user",
"user": "fxmeng"
}
},
{
"_id": "67ad5b3a007d78b391946a58",
"hidden": false,
"name": "Zengwei Yao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ad5b3a007d78b391946a59",
"hidden": false,
"name": "Muhan Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-11T18:20:18 | TransMLA: Multi-head Latent Attention Is All You Need | Modern large language models (LLMs) often encounter communication bottlenecks
on current hardware, rather than purely computational constraints. Multi-head
Latent Attention (MLA) tackles this challenge by using low-rank matrices in the
key-value (KV) layers, thereby allowing compressed latent KV states to be
cached. This approach significantly reduces the KV cache size relative to
traditional multi-head attention, leading to faster inference. Moreover, MLA
employs an up-projection matrix to increase expressiveness, trading additional
computation for reduced communication overhead. Although MLA has demonstrated
efficiency and effectiveness in Deepseek V2/V3/R1, many major model providers
still rely on Group Query Attention (GQA) and have not announced any plans to
adopt MLA. In this paper, we show that GQA can always be represented by MLA
while maintaining the same KV cache overhead, but the converse does not hold.
To encourage broader use of MLA, we introduce **TransMLA**, a post-training
method that converts widely used GQA-based pre-trained models (e.g., LLaMA,
Qwen, Mixtral) into MLA-based models. After conversion, the model can undergo
additional training to boost expressiveness without increasing the KV cache
size. Furthermore, we plan to develop MLA-specific inference acceleration
techniques to preserve low latency in transformed models, thus enabling more
efficient distillation of Deepseek R1. | 46 | 67ad5b3b007d78b391946a79 | null | null |
|
2025-02-12T18:40:34.235000 | Learning Conformal Abstention Policies for Adaptive Risk Management in Large Language and Vision-Language Models | 2 | {
"_id": "655ec30b12fb73960ceb048f",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/655ec30b12fb73960ceb048f/q7zVSStJWBywrtPoL2ChO.png",
"followerCount": null,
"fullname": "Sina Tayebati",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "sinatayebati",
"type": "user"
} | true | null | 2502.06884 | [
{
"_id": "67ad315adbc8ae7b3ca9f17d",
"hidden": false,
"name": "Sina Tayebati",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-13T08:22:22.896Z",
"user": {
"_id": "655ec30b12fb73960ceb048f",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/655ec30b12fb73960ceb048f/q7zVSStJWBywrtPoL2ChO.png",
"fullname": "Sina Tayebati",
"isPro": false,
"type": "user",
"user": "sinatayebati"
}
},
{
"_id": "67ad315adbc8ae7b3ca9f17e",
"hidden": false,
"name": "Divake Kumar",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ad315adbc8ae7b3ca9f17f",
"hidden": false,
"name": "Nastaran Darabi",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-14T15:29:20.675Z",
"user": {
"_id": "671acb0de80155d7f9e162b0",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/no-auth/g7hnS2Mrjyy-RudyIxvVX.png",
"fullname": "Nastaran Darabi",
"isPro": false,
"type": "user",
"user": "Nstrndrbi"
}
},
{
"_id": "67ad315adbc8ae7b3ca9f180",
"hidden": false,
"name": "Dinithi Jayasuriya",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ad315adbc8ae7b3ca9f181",
"hidden": false,
"name": "Ranganath Krishnan",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-14T15:29:30.994Z",
"user": {
"_id": "647a45aeccb84c6180b41b54",
"avatarUrl": "/avatars/cd0db59a1b7f49f53f65751a8efc1033.svg",
"fullname": "Ranganath Krishnan",
"isPro": false,
"type": "user",
"user": "ranganathkrishnan"
}
},
{
"_id": "67ad315adbc8ae7b3ca9f182",
"hidden": false,
"name": "Amit Ranjan Trivedi",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-08T21:30:41 | Learning Conformal Abstention Policies for Adaptive Risk Management in
Large Language and Vision-Language Models | Large Language and Vision-Language Models (LLMs/VLMs) are increasingly used
in safety-critical applications, yet their opaque decision-making complicates
risk assessment and reliability. Uncertainty quantification (UQ) helps assess
prediction confidence and enables abstention when uncertainty is high.
Conformal prediction (CP), a leading UQ method, provides statistical guarantees
but relies on static thresholds, which fail to adapt to task complexity and
evolving data distributions, leading to suboptimal trade-offs in accuracy,
coverage, and informativeness. To address this, we propose learnable conformal
abstention, integrating reinforcement learning (RL) with CP to optimize
abstention thresholds dynamically. By treating CP thresholds as adaptive
actions, our approach balances multiple objectives, minimizing prediction set
size while maintaining reliable coverage. Extensive evaluations across diverse
LLM/VLM benchmarks show our method outperforms Least Ambiguous Classifiers
(LAC) and Adaptive Prediction Sets (APS), improving accuracy by up to 3.2%,
boosting AUROC for hallucination detection by 22.19%, enhancing
uncertainty-guided selective generation (AUARC) by 21.17%, and reducing
calibration error by 70%-85%. These improvements hold across multiple models
and datasets while consistently meeting the 90% coverage target, establishing
our approach as a more effective and flexible solution for reliable
decision-making in safety-critical applications. The code is available at:
{https://github.com/sinatayebati/vlm-uncertainty}. | 0 | 67ad315bdbc8ae7b3ca9f1b7 | null | null |
|
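The conformal-abstention abstract above combines a standard split-conformal threshold with an adaptively tuned abstention rule. A minimal NumPy sketch of the two ingredients (the function names, the `max_set_size` abstention criterion, and treating the threshold as a plain argument rather than an RL action are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def conformal_threshold(cal_scores: np.ndarray, alpha: float) -> float:
    """Standard split-conformal quantile: given calibration nonconformity
    scores and miscoverage level alpha, return the threshold that yields
    (1 - alpha) marginal coverage."""
    n = len(cal_scores)
    q = np.ceil((n + 1) * (1 - alpha)) / n
    return float(np.quantile(cal_scores, min(q, 1.0), method="higher"))

def predict_or_abstain(label_scores: np.ndarray, tau: float, max_set_size: int):
    """Form the prediction set {labels with score <= tau}; abstain (return
    None) when the set is empty or larger than max_set_size, i.e. too
    uncertain to be useful. In the paper tau is adjusted dynamically by an
    RL policy; here it is simply passed in."""
    pred_set = np.flatnonzero(label_scores <= tau)
    if len(pred_set) == 0 or len(pred_set) > max_set_size:
        return None  # abstain
    return pred_set
```

The paper's contribution is to make `tau` a learned, state-dependent action rather than the fixed quantile above, trading off set size against coverage per input.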
2025-02-12T13:41:46.312000 | Pippo: High-Resolution Multi-View Humans from a Single Image | 2 | {
"_id": "638546d0a179f856005ae310",
"avatarUrl": "/avatars/f7f2fface336c8e168a1daaf9fd4d40c.svg",
"followerCount": 3,
"fullname": "yashkant",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "yashkant",
"type": "user"
} | true | [
"https://cdn-uploads.huggingface.co/production/uploads/638546d0a179f856005ae310/WpddysA9AZ5Y3Fgo_XA75.mp4"
] | 2502.07785 | [
{
"_id": "67aceb3315317375eccddc3b",
"hidden": false,
"name": "Yash Kant",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-13T08:24:09.195Z",
"user": {
"_id": "638546d0a179f856005ae310",
"avatarUrl": "/avatars/f7f2fface336c8e168a1daaf9fd4d40c.svg",
"fullname": "yashkant",
"isPro": false,
"type": "user",
"user": "yashkant"
}
},
{
"_id": "67aceb3315317375eccddc3c",
"hidden": false,
"name": "Ethan Weber",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aceb3315317375eccddc3d",
"hidden": false,
"name": "Jin Kyu Kim",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aceb3315317375eccddc3e",
"hidden": false,
"name": "Rawal Khirodkar",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aceb3315317375eccddc3f",
"hidden": false,
"name": "Su Zhaoen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aceb3315317375eccddc40",
"hidden": false,
"name": "Julieta Martinez",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aceb3315317375eccddc41",
"hidden": false,
"name": "Igor Gilitschenski",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aceb3315317375eccddc42",
"hidden": false,
"name": "Shunsuke Saito",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aceb3315317375eccddc43",
"hidden": false,
"name": "Timur Bagautdinov",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-11T18:59:59 | Pippo: High-Resolution Multi-View Humans from a Single Image | We present Pippo, a generative model capable of producing 1K resolution dense
turnaround videos of a person from a single casually clicked photo. Pippo is a
multi-view diffusion transformer and does not require any additional inputs -
e.g., a fitted parametric model or camera parameters of the input image. We
pre-train Pippo on 3B human images without captions, and conduct multi-view
mid-training and post-training on studio captured humans. During mid-training,
to quickly absorb the studio dataset, we denoise several (up to 48) views at
low-resolution, and encode target cameras coarsely using a shallow MLP. During
post-training, we denoise fewer views at high-resolution and use pixel-aligned
controls (e.g., Spatial anchor and Plucker rays) to enable 3D consistent
generations. At inference, we propose an attention biasing technique that
allows Pippo to simultaneously generate greater than 5 times as many views as
seen during training. Finally, we also introduce an improved metric to evaluate
3D consistency of multi-view generations, and show that Pippo outperforms
existing works on multi-view human generation from a single image. | 11 | 67aceb3b15317375eccddd5b | null | null |
|
2025-02-12T10:35:08.488000 | Hypencoder: Hypernetworks for Information Retrieval | 2 | {
"_id": "635582fb6b58fa7cc8701580",
"avatarUrl": "/avatars/a00791ba70a2de40dacac4582307c0f2.svg",
"followerCount": 1,
"fullname": "Julian Killingback",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "jfkback",
"type": "user"
} | true | null | 2502.05364 | [
{
"_id": "67ab99c8bb44ec714c6a4a96",
"hidden": false,
"name": "Julian Killingback",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-12T09:16:46.458Z",
"user": {
"_id": "635582fb6b58fa7cc8701580",
"avatarUrl": "/avatars/a00791ba70a2de40dacac4582307c0f2.svg",
"fullname": "Julian Killingback",
"isPro": false,
"type": "user",
"user": "jfkback"
}
},
{
"_id": "67ab99c8bb44ec714c6a4a97",
"hidden": false,
"name": "Hansi Zeng",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ab99c8bb44ec714c6a4a98",
"hidden": false,
"name": "Hamed Zamani",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-07T22:31:38 | Hypencoder: Hypernetworks for Information Retrieval | The vast majority of retrieval models depend on vector inner products to
produce a relevance score between a query and a document. This naturally limits
the expressiveness of the relevance score that can be employed. We propose a
new paradigm, instead of producing a vector to represent the query we produce a
small neural network which acts as a learned relevance function. This small
neural network takes in a representation of the document (in this paper, a
single vector) and produces a scalar relevance score. To produce the small
neural network we use a hypernetwork, a network that produces the weights of
other networks, as our query encoder, which we call a Hypencoder. Experiments
on in-domain search tasks show that Hypencoder is able to significantly
outperform strong dense retrieval models and achieves higher metrics than reranking
models and models an order of magnitude larger. Hypencoder is also shown to
generalize well to out-of-domain search tasks. To assess the extent of
Hypencoder's capabilities, we evaluate on a set of hard retrieval tasks
including tip-of-the-tongue retrieval and instruction-following retrieval tasks
and find that the performance gap widens substantially compared to standard
retrieval tasks. Furthermore, to demonstrate the practicality of our method we
implement an approximate search algorithm and show that our model is able to
search 8.8M documents in under 60ms. | 11 | 67ab99c9bb44ec714c6a4ac1 | null | null |
|
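The Hypencoder abstract above describes a hypernetwork that maps a query embedding to the weights of a small per-query relevance network, which then scores document vectors. A minimal NumPy sketch of that idea (the dimensions, the linear hypernetwork, and the one-hidden-layer ReLU relevance net are illustrative assumptions, not the paper's architecture):

```python
import numpy as np

rng = np.random.default_rng(0)
D, H = 8, 4  # document-embedding dim, hidden width of the little relevance net

# Hypothetical hypernetwork: a single linear map from the query embedding to
# the flattened weights of a tiny one-hidden-layer relevance network.
n_params = D * H + H + H  # W1 (D x H), b1 (H), w2 (H)
hyper_W = rng.normal(size=(D, n_params)) * 0.1

def hypencoder(query_emb: np.ndarray):
    """Produce the per-query relevance network's weights from the query."""
    theta = query_emb @ hyper_W
    W1 = theta[: D * H].reshape(D, H)
    b1 = theta[D * H : D * H + H]
    w2 = theta[D * H + H :]

    def relevance(doc_emb: np.ndarray) -> float:
        h = np.maximum(doc_emb @ W1 + b1, 0.0)  # ReLU hidden layer
        return float(h @ w2)                    # scalar relevance score
    return relevance

score_fn = hypencoder(rng.normal(size=D))   # encode one query
score = score_fn(rng.normal(size=D))        # score one document
```

Because the relevance function is a small network rather than an inner product, it can express non-linear query-document interactions while still scoring each document from a single cached vector.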
2025-02-12T09:56:56.287000 | Goedel-Prover: A Frontier Model for Open-Source Automated Theorem Proving | 2 | {
"_id": "60f1abe7544c2adfd699860c",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"followerCount": 6280,
"fullname": "AK",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "akhaliq",
"type": "user"
} | false | null | 2502.07640 | [
{
"_id": "67acb6af4335fbde70348fc1",
"hidden": false,
"name": "Yong Lin",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67acb6af4335fbde70348fc2",
"hidden": false,
"name": "Shange Tang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67acb6af4335fbde70348fc3",
"hidden": false,
"name": "Bohan Lyu",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-18T09:34:20.642Z",
"user": {
"_id": "650267e7e751d03da933a24a",
"avatarUrl": "/avatars/f047a047d1de304cd97027463541bdf3.svg",
"fullname": "Bohan22",
"isPro": false,
"type": "user",
"user": "Bohan22"
}
},
{
"_id": "67acb6af4335fbde70348fc4",
"hidden": false,
"name": "Jiayun Wu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67acb6af4335fbde70348fc5",
"hidden": false,
"name": "Hongzhou Lin",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67acb6af4335fbde70348fc6",
"hidden": false,
"name": "Kaiyu Yang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67acb6af4335fbde70348fc7",
"hidden": false,
"name": "Jia Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67acb6af4335fbde70348fc8",
"hidden": false,
"name": "Mengzhou Xia",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67acb6af4335fbde70348fc9",
"hidden": false,
"name": "Danqi Chen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67acb6af4335fbde70348fca",
"hidden": false,
"name": "Sanjeev Arora",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67acb6af4335fbde70348fcb",
"hidden": false,
"name": "Chi Jin",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-11T15:27:35 | Goedel-Prover: A Frontier Model for Open-Source Automated Theorem
Proving | We introduce Goedel-Prover, an open-source large language model (LLM) that
achieves the state-of-the-art (SOTA) performance in automated formal proof
generation for mathematical problems. The key challenge in this field is the
scarcity of formalized math statements and proofs, which we tackle in the
following ways. We train statement formalizers to translate the natural
language math problems from Numina into formal language (Lean 4), creating a
dataset of 1.64 million formal statements. LLMs are used to check that the
formal statements accurately preserve the content of the original natural
language problems. We then iteratively build a large dataset of formal proofs
by training a series of provers. Each prover succeeds in proving many
statements that the previous ones could not, and these new proofs are added to
the training set for the next prover. The final prover outperforms all existing
open-source models in whole-proof generation. On the miniF2F benchmark, it
achieves a 57.6% success rate (Pass@32), exceeding the previous best
open-source model by 7.6%. On PutnamBench, Goedel-Prover successfully solves 7
problems (Pass@512), ranking first on the leaderboard. Furthermore, it
generates 29.7K formal proofs for Lean Workbook problems, nearly doubling the
15.7K produced by earlier works. | 8 | 67acb6b04335fbde70349021 | null | null |
|
2025-02-12T09:54:39.924000 | Sparse Autoencoders for Scientifically Rigorous Interpretation of Vision Models | 1 | {
"_id": "64df8592d27135dd568380b5",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/64df8592d27135dd568380b5/1-YkiTAkI11QBbZnVRjJu.jpeg",
"followerCount": 3,
"fullname": "Samuel Stevens",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "samuelstevens",
"type": "user"
} | false | [
"https://cdn-uploads.huggingface.co/production/uploads/64df8592d27135dd568380b5/xfktZouikPLtypubL69TW.webp"
] | 2502.06755 | [
{
"_id": "67ab7c4f84bf8aaa60cf1d8d",
"hidden": false,
"name": "Samuel Stevens",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-12T09:16:50.887Z",
"user": {
"_id": "64df8592d27135dd568380b5",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/64df8592d27135dd568380b5/1-YkiTAkI11QBbZnVRjJu.jpeg",
"fullname": "Samuel Stevens",
"isPro": false,
"type": "user",
"user": "samuelstevens"
}
},
{
"_id": "67ab7c4f84bf8aaa60cf1d8e",
"hidden": false,
"name": "Wei-Lun Chao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ab7c4f84bf8aaa60cf1d8f",
"hidden": false,
"name": "Tanya Berger-Wolf",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ab7c4f84bf8aaa60cf1d90",
"hidden": false,
"name": "Yu Su",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-10T18:32:41 | Sparse Autoencoders for Scientifically Rigorous Interpretation of Vision
Models | To truly understand vision models, we must not only interpret their learned
features but also validate these interpretations through controlled
experiments. Current approaches either provide interpretable features without
the ability to test their causal influence, or enable model editing without
interpretable controls. We present a unified framework using sparse
autoencoders (SAEs) that bridges this gap, allowing us to discover
human-interpretable visual features and precisely manipulate them to test
hypotheses about model behavior. By applying our method to state-of-the-art
vision models, we reveal key differences in the semantic abstractions learned
by models with different pre-training objectives. We then demonstrate the
practical usage of our framework through controlled interventions across
multiple vision tasks. We show that SAEs can reliably identify and manipulate
interpretable visual features without model re-training, providing a powerful
tool for understanding and controlling vision model behavior. We provide code,
demos and models on our project website: https://osu-nlp-group.github.io/SAE-V. | 7 | 67ab7c5784bf8aaa60cf1f4c | null | null |
|
2025-02-12T07:51:18.930000 | CoS: Chain-of-Shot Prompting for Long Video Understanding | 2 | {
"_id": "65e1b6e9501590df0173cbd3",
"avatarUrl": "/avatars/a73e2139700e23eff455734c99cef5ba.svg",
"followerCount": null,
"fullname": "Jian Hu",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "lwpyh",
"type": "user"
} | true | null | 2502.06428 | [
{
"_id": "67ac99089e12456bdb1d2e9d",
"hidden": false,
"name": "Jian Hu",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-19T11:12:25.146Z",
"user": {
"_id": "65e1b6e9501590df0173cbd3",
"avatarUrl": "/avatars/a73e2139700e23eff455734c99cef5ba.svg",
"fullname": "Jian Hu",
"isPro": false,
"type": "user",
"user": "lwpyh"
}
},
{
"_id": "67ac99089e12456bdb1d2e9e",
"hidden": false,
"name": "Zixu Cheng",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac99089e12456bdb1d2e9f",
"hidden": false,
"name": "Chenyang Si",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac99089e12456bdb1d2ea0",
"hidden": false,
"name": "Wei Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac99089e12456bdb1d2ea1",
"hidden": false,
"name": "Shaogang Gong",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-10T13:03:05 | CoS: Chain-of-Shot Prompting for Long Video Understanding | Multi-modal Large Language Models (MLLMs) struggle with long videos due to
the need for excessive visual tokens. These tokens massively exceed the context
length of MLLMs, so the context becomes filled with redundant, task-irrelevant shots. How to
select shots is an unsolved critical problem: sparse sampling risks missing key
details, while exhaustive sampling overwhelms the model with irrelevant
content, leading to video misunderstanding. To solve this problem, we propose
Chain-of-Shot prompting (CoS). The key idea is to frame shot selection as
test-time visual prompt optimisation, choosing shots adapted to the semantics of
the video understanding task by optimising shot-task alignment. CoS has two key
parts: (1) a binary video summary mechanism that performs pseudo temporal
grounding, discovering a binary coding to identify task-relevant shots, and (2)
a video co-reasoning module that deploys the binary coding to pair (learning to
align) task-relevant positive shots with irrelevant negative shots. It embeds
the optimised shot selections into the original video, facilitating a focus on
relevant context to optimize long video understanding. Experiments across three
baselines and five datasets demonstrate the effectiveness and adaptability of
CoS. Code is available at https://lwpyh.github.io/CoS. | 10 | 67ac990b9e12456bdb1d2efe | null | null |
|
2025-02-12T07:18:08.463000 | Gemstones: A Model Suite for Multi-Faceted Scaling Laws | 2 | {
"_id": "65255f1073a043e50d043641",
"avatarUrl": "/avatars/257085f01c439d7c84787a4e6d085b3d.svg",
"followerCount": 6,
"fullname": "Sean McLeish",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "smcleish",
"type": "user"
} | true | null | 2502.06857 | [
{
"_id": "67ac9127184de583cc7daa75",
"hidden": false,
"name": "Sean McLeish",
"status": "extracted_confirmed",
"statusLastChangedAt": "2025-02-12T12:21:11.370Z",
"user": {
"_id": "65255f1073a043e50d043641",
"avatarUrl": "/avatars/257085f01c439d7c84787a4e6d085b3d.svg",
"fullname": "Sean McLeish",
"isPro": false,
"type": "user",
"user": "smcleish"
}
},
{
"_id": "67ac9127184de583cc7daa76",
"hidden": false,
"name": "John Kirchenbauer",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac9127184de583cc7daa77",
"hidden": false,
"name": "David Yu Miller",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-13T20:35:58.819Z",
"user": {
"_id": "67ae2de52f89d658be916ff0",
"avatarUrl": "/avatars/55f6bea7b6727f5f3f152cd8659f75f6.svg",
"fullname": "David Miller",
"isPro": false,
"type": "user",
"user": "dymil"
}
},
{
"_id": "67ac9127184de583cc7daa78",
"hidden": false,
"name": "Siddharth Singh",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac9127184de583cc7daa79",
"hidden": false,
"name": "Abhinav Bhatele",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac9127184de583cc7daa7a",
"hidden": false,
"name": "Micah Goldblum",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac9127184de583cc7daa7b",
"hidden": false,
"name": "Ashwinee Panda",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac9127184de583cc7daa7c",
"hidden": false,
"name": "Tom Goldstein",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-07T18:09:38 | Gemstones: A Model Suite for Multi-Faceted Scaling Laws | Scaling laws are typically fit using a family of models with a narrow range
of frozen hyper-parameter choices. In this work we study scaling laws using a
wide range of architecture and hyper-parameter choices, and highlight their
impact on resulting prescriptions. As a primary artifact of our research, we
release the Gemstones: the most comprehensive open-source scaling law dataset
to date, consisting of over 4000 checkpoints from transformers with up to 2
billion parameters; these models have been trained with different learning
rates, cooldown schedules, and architectural shapes. Our checkpoints enable
more complex studies of scaling, such as a law that predicts language modeling
performance as a function of model width and depth. By examining the various
facets of our model suite, we find that the prescriptions of scaling laws can
be highly sensitive to the experimental design process and the specific model
checkpoints used during fitting. Code:
https://github.com/mcleish7/gemstone-scaling-laws | 24 | 67ac9128184de583cc7daaba | null | null |
|
2025-02-12T07:10:24.189000 | Retrieval-augmented Large Language Models for Financial Time Series Forecasting | 2 | {
"_id": "63b58ed5889aa6707f0bb0f4",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63b58ed5889aa6707f0bb0f4/znl74_aMswlV8VtHrfj3G.jpeg",
"followerCount": 15,
"fullname": "Jimin Huang",
"isHf": false,
"isMod": false,
"isPro": true,
"name": "jiminHuang",
"type": "user"
} | false | [
"https://cdn-uploads.huggingface.co/production/uploads/63b58ed5889aa6707f0bb0f4/Y3YaXroeoJ851uS_hS3j0.png"
] | 2502.05878 | [
{
"_id": "67aab4a98642da4d695cf045",
"hidden": false,
"name": "Mengxi Xiao",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-18T09:34:50.108Z",
"user": {
"_id": "663adb42e14047f710dc1d29",
"avatarUrl": "/avatars/7ca49d67a4a8b4cf0ee896e07646715f.svg",
"fullname": "Mengxi Xiao",
"isPro": false,
"type": "user",
"user": "ElsaShaw"
}
},
{
"_id": "67aab4a98642da4d695cf046",
"hidden": false,
"name": "Zihao Jiang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aab4a98642da4d695cf047",
"hidden": false,
"name": "Lingfei Qian",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aab4a98642da4d695cf048",
"hidden": false,
"name": "Zhengyu Chen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aab4a98642da4d695cf049",
"hidden": false,
"name": "Yueru He",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aab4a98642da4d695cf04a",
"hidden": false,
"name": "Yijing Xu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aab4a98642da4d695cf04b",
"hidden": false,
"name": "Yuecheng Jiang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aab4a98642da4d695cf04c",
"hidden": false,
"name": "Dong Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aab4a98642da4d695cf04d",
"hidden": false,
"name": "Ruey-Ling Weng",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aab4a98642da4d695cf04e",
"hidden": false,
"name": "Min Peng",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aab4a98642da4d695cf04f",
"hidden": false,
"name": "Jimin Huang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aab4a98642da4d695cf050",
"hidden": false,
"name": "Sophia Ananiadou",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aab4a98642da4d695cf051",
"hidden": false,
"name": "Qianqian Xie",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-13T08:25:37.116Z",
"user": {
"_id": "6479f4317c18dca75e9a9324",
"avatarUrl": "/avatars/9aa709230b057f57ee4415c04a622c63.svg",
"fullname": "Xie",
"isPro": false,
"type": "user",
"user": "QianqianXie1994"
}
}
] | 2025-02-09T12:26:05 | Retrieval-augmented Large Language Models for Financial Time Series
Forecasting | Stock movement prediction, a fundamental task in financial time-series
forecasting, requires identifying and retrieving critical influencing factors
from vast amounts of time-series data. However, existing text-trained or
numeric similarity-based retrieval methods fall short in handling complex
financial analysis. To address this, we propose the first retrieval-augmented
generation (RAG) framework for financial time-series forecasting, featuring
three key innovations: a fine-tuned 1B parameter large language model
(StockLLM) as the backbone, a novel candidate selection method leveraging LLM
feedback, and a training objective that maximizes similarity between queries
and historically significant sequences. This enables our retriever, FinSeer, to
uncover meaningful patterns while minimizing noise in complex financial data.
We also construct new datasets integrating financial indicators and historical
stock prices to train FinSeer and ensure robust evaluation. Experimental
results demonstrate that our RAG framework outperforms bare StockLLM and random
retrieval, highlighting its effectiveness, while FinSeer surpasses existing
retrieval methods, achieving an 8% higher accuracy on BIGDATA22 and retrieving
more impactful sequences. This work underscores the importance of tailored
retrieval models in financial forecasting and provides a novel framework for
future research. | 39 | 67aab4ac8642da4d695cf101 | null | null |
|
2025-02-12T06:55:30.432000 | Mask-Enhanced Autoregressive Prediction: Pay Less Attention to Learn More | 2 | {
"_id": "65b04d2291e63920a7898c9e",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/65b04d2291e63920a7898c9e/iUHs235G4bqK-KnH_94ti.jpeg",
"followerCount": 1,
"fullname": "Liu",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "Shiweiliuiiiiiii",
"type": "user"
} | false | null | 2502.07490 | [
{
"_id": "67ac8bc95cc4f961c9550320",
"hidden": false,
"name": "Xialie Zhuang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-13T08:25:06.266Z",
"user": {
"_id": "6575a625b951d40e7a4d8685",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6575a625b951d40e7a4d8685/mcnyqU--r-vi11aXN02cZ.jpeg",
"fullname": "zhuangxialie",
"isPro": false,
"type": "user",
"user": "ZhuangXialie"
}
},
{
"_id": "67ac8bc95cc4f961c9550321",
"hidden": false,
"name": "Zhikai Jia",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac8bc95cc4f961c9550322",
"hidden": false,
"name": "Jianjin Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac8bc95cc4f961c9550323",
"hidden": false,
"name": "Zhenyu Zhang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-26T08:39:17.521Z",
"user": {
"_id": "649c888f67fd6c6aa97e5f85",
"avatarUrl": "/avatars/9967b729916d1128773102797fed1673.svg",
"fullname": "Zhenyu Zhang",
"isPro": false,
"type": "user",
"user": "Kyriection"
}
},
{
"_id": "67ac8bc95cc4f961c9550324",
"hidden": false,
"name": "Li Shen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac8bc95cc4f961c9550325",
"hidden": false,
"name": "Zheng Cao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac8bc95cc4f961c9550326",
"hidden": false,
"name": "Shiwei Liu",
"status": "claimed_verified",
"statusLastChangedAt": "2025-03-03T11:15:23.571Z",
"user": {
"_id": "65b04d2291e63920a7898c9e",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/65b04d2291e63920a7898c9e/iUHs235G4bqK-KnH_94ti.jpeg",
"fullname": "Liu",
"isPro": false,
"type": "user",
"user": "Shiweiliuiiiiiii"
}
}
] | 2025-02-11T11:49:03 | Mask-Enhanced Autoregressive Prediction: Pay Less Attention to Learn
More | Large Language Models (LLMs) are discovered to suffer from accurately
retrieving key information. To address this, we propose Mask-Enhanced
Autoregressive Prediction (MEAP), a simple yet effective training paradigm that
seamlessly integrates Masked Language Modeling (MLM) into Next-Token Prediction
(NTP) to enhance the latter's in-context retrieval capabilities. Specifically,
MEAP first randomly masks a small fraction of input tokens and then directly
performs standard autoregressive next-token prediction using a decoder-only
Transformer. MEAP eliminates the need for bidirectional attention or
encoder-decoder architectures for MLM, incurring no additional computational
overhead during pre-training or inference. Intensive experiments demonstrate
that MEAP substantially outperforms NTP on key information retrieval and
long-context reasoning tasks, while performing on par or better on commonsense
reasoning tasks. The benefits of MEAP also extend to supervised fine-tuning,
where it shows remarkable advantages in lost-in-the-middle scenarios,
outperforming NTP by 11.77 percentage points. Our analysis indicates that
MEAP's effectiveness arises from its ability to promote more distinguishable
attention scores by concentrating on a reduced set of non-masked tokens. This
mechanism improves the model's focus on task-relevant signals while mitigating
the influence of peripheral context. These findings position MEAP as a
promising training paradigm for large language models. | 9 | 67ac8bc95cc4f961c955035f | null | null |
|
2025-02-12T04:53:50.325000 | Skill Expansion and Composition in Parameter Space | 3 | {
"_id": "67ac430c4ab9207cc227d23f",
"avatarUrl": "/avatars/59c499cc191e28a66ae917963c28ffb3.svg",
"followerCount": null,
"fullname": "Tenglong Liu",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "LTL07",
"type": "user"
} | true | null | 2502.05932 | [
{
"_id": "67ac4356401012b81050022a",
"hidden": false,
"name": "Tenglong Liu",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-12T09:15:09.627Z",
"user": {
"_id": "67ac430c4ab9207cc227d23f",
"avatarUrl": "/avatars/59c499cc191e28a66ae917963c28ffb3.svg",
"fullname": "Tenglong Liu",
"isPro": false,
"type": "user",
"user": "LTL07"
}
},
{
"_id": "67ac4356401012b81050022b",
"hidden": false,
"name": "Jianxiong Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac4356401012b81050022c",
"hidden": false,
"name": "Yinan Zheng",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac4356401012b81050022d",
"hidden": false,
"name": "Haoyi Niu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac4356401012b81050022e",
"hidden": false,
"name": "Yixing Lan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac4356401012b81050022f",
"hidden": false,
"name": "Xin Xu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac4356401012b810500230",
"hidden": false,
"name": "Xianyuan Zhan",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-09T15:22:38 | Skill Expansion and Composition in Parameter Space | Humans excel at reusing prior knowledge to address new challenges and at
developing skills while solving problems. This paradigm becomes increasingly
popular in the development of autonomous agents, as it yields systems that
can self-evolve in response to new challenges, much as humans do. However,
previous methods suffer from limited training efficiency when expanding new
skills and fail to fully leverage prior knowledge to facilitate new task
learning. In this paper, we propose Parametric Skill Expansion and Composition
(PSEC), a new framework designed to iteratively evolve the agents' capabilities
and efficiently address new challenges by maintaining a manageable skill
library. This library can progressively integrate skill primitives as
plug-and-play Low-Rank Adaptation (LoRA) modules in parameter-efficient
finetuning, facilitating efficient and flexible skill expansion. This structure
also enables the direct skill compositions in parameter space by merging LoRA
modules that encode different skills, leveraging shared information across
skills to effectively program new skills. Based on this, we propose a
context-aware module to dynamically activate different skills to
collaboratively handle new tasks. Empowering diverse applications including
multi-objective composition, dynamics shift, and continual policy shift, the
results on D4RL, DSRL benchmarks, and the DeepMind Control Suite show that PSEC
exhibits superior capacity to leverage prior knowledge to efficiently tackle
new challenges, as well as to expand its skill library and evolve its
capabilities. Project website: https://ltlhuuu.github.io/PSEC/. | 4 | 67ac435b401012b8105003dc | null | null |
|
2025-02-12T04:25:54.558000 | Éclair -- Extracting Content and Layout with Integrated Reading Order for Documents | 3 | {
"_id": "60098ca06e8ac78787773f85",
"avatarUrl": "/avatars/be6539e5706bf07c71e553254c1751b5.svg",
"followerCount": 1,
"fullname": "Jarno Seppänen",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "jseppanen",
"type": "user"
} | true | [
"https://cdn-uploads.huggingface.co/production/uploads/60098ca06e8ac78787773f85/BfZ57W-gCoY32J60tx7dN.png"
] | 2502.04223 | [
{
"_id": "67ac5e0d653d273eeaf25e59",
"hidden": false,
"name": "Ilia Karmanov",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-13T08:25:14.762Z",
"user": {
"_id": "630602e8660f01f1509f791d",
"avatarUrl": "/avatars/23522c79b471010cfd969d2c34675b34.svg",
"fullname": "Ilia Karmanov",
"isPro": false,
"type": "user",
"user": "iliauk"
}
},
{
"_id": "67ac5e0d653d273eeaf25e5a",
"hidden": false,
"name": "Amala Sanjay Deshmukh",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-12T09:14:49.009Z",
"user": {
"_id": "67ac5d85a19e34140ea1013b",
"avatarUrl": "/avatars/e5b7446787dbbd17553dc9e11b58a0b4.svg",
"fullname": "Amala Sanjay Deshmukh",
"isPro": false,
"type": "user",
"user": "amalad"
}
},
{
"_id": "67ac5e0d653d273eeaf25e5b",
"hidden": false,
"name": "Lukas Voegtle",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac5e0d653d273eeaf25e5c",
"hidden": false,
"name": "Philipp Fischer",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac5e0d653d273eeaf25e5d",
"hidden": false,
"name": "Kateryna Chumachenko",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-12T09:14:47.025Z",
"user": {
"_id": "64c7a43e0d3d1b209df90b9c",
"avatarUrl": "/avatars/1d0d2f129b799a72345b17fd5307aa5e.svg",
"fullname": "Kateryna Chumachenko",
"isPro": false,
"type": "user",
"user": "katerynaCh"
}
},
{
"_id": "67ac5e0d653d273eeaf25e5e",
"hidden": false,
"name": "Timo Roman",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac5e0d653d273eeaf25e5f",
"hidden": false,
"name": "Jarno Seppänen",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-12T09:14:51.062Z",
"user": {
"_id": "60098ca06e8ac78787773f85",
"avatarUrl": "/avatars/be6539e5706bf07c71e553254c1751b5.svg",
"fullname": "Jarno Seppänen",
"isPro": false,
"type": "user",
"user": "jseppanen"
}
},
{
"_id": "67ac5e0d653d273eeaf25e60",
"hidden": false,
"name": "Jupinder Parmar",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac5e0d653d273eeaf25e61",
"hidden": false,
"name": "Joseph Jennings",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac5e0d653d273eeaf25e62",
"hidden": false,
"name": "Andrew Tao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac5e0d653d273eeaf25e63",
"hidden": false,
"name": "Karan Sapra",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-06T17:07:22 | Éclair -- Extracting Content and Layout with Integrated Reading Order
for Documents | Optical Character Recognition (OCR) technology is widely used to extract text
from images of documents, facilitating efficient digitization and data
retrieval. However, merely extracting text is insufficient when dealing with
complex documents. Fully comprehending such documents requires an understanding
of their structure -- including formatting, formulas, tables, and the reading
order of multiple blocks and columns across multiple pages -- as well as
semantic information for detecting elements like footnotes and image captions.
This comprehensive understanding is crucial for downstream tasks such as
retrieval, document question answering, and data curation for training Large
Language Models (LLMs) and Vision Language Models (VLMs). To address this, we
introduce Éclair, a general-purpose text-extraction tool specifically
designed to process a wide range of document types. Given an image, Éclair is
able to extract formatted text in reading order, along with bounding boxes and
their corresponding semantic classes. To thoroughly evaluate these novel
capabilities, we introduce our diverse human-annotated benchmark for
document-level OCR and semantic classification. Éclair achieves
state-of-the-art accuracy on this benchmark, outperforming other methods across
key metrics. Additionally, we evaluate Éclair on established benchmarks,
demonstrating its versatility and strength across several evaluation standards. | 11 | 67ac5e0f653d273eeaf25eea | null | null |
|
2025-02-12T02:51:41.003000 | Expect the Unexpected: FailSafe Long Context QA for Finance | 4 | {
"_id": "60e61b3969bd0df25c9375da",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1625692968400-noauth.jpeg",
"followerCount": 28,
"fullname": "Melisa Russak",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "melisa",
"type": "user"
} | false | null | 2502.06329 | [
{
"_id": "67ab4174757d2eb190af0375",
"hidden": false,
"name": "Kiran Kamble",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-12T09:16:55.367Z",
"user": {
"_id": "621d6f532165dc431641e438",
"avatarUrl": "/avatars/56ccef10a8426d7160ef3586a771bd63.svg",
"fullname": "Kiran Kamble",
"isPro": false,
"type": "user",
"user": "kiranr"
}
},
{
"_id": "67ab4174757d2eb190af0376",
"hidden": false,
"name": "Melisa Russak",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ab4174757d2eb190af0377",
"hidden": false,
"name": "Dmytro Mozolevskyi",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-13T08:25:31.358Z",
"user": {
"_id": "64f13a7c9be8cab82d9b5a55",
"avatarUrl": "/avatars/a00b5d386016697b4c4cc746bac16168.svg",
"fullname": "Dmytro Mozolevskyi",
"isPro": false,
"type": "user",
"user": "dmytro-writer"
}
},
{
"_id": "67ab4174757d2eb190af0378",
"hidden": false,
"name": "Muayad Ali",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-12T09:16:53.157Z",
"user": {
"_id": "6320a906a023aad6a7670e99",
"avatarUrl": "/avatars/48071559b0c7660bf6861cfe008b3006.svg",
"fullname": "Muayad Sayed Ali",
"isPro": false,
"type": "user",
"user": "muayad"
}
},
{
"_id": "67ab4174757d2eb190af0379",
"hidden": false,
"name": "Mateusz Russak",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ab4174757d2eb190af037a",
"hidden": false,
"name": "Waseem AlShikh",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-19T09:05:02.858Z",
"user": {
"_id": "60cd486d723acf5eb46fe8d3",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/60cd486d723acf5eb46fe8d3/Z1bD1kjvZ0QAOjZna41Xr.jpeg",
"fullname": "Waseem AlShikh",
"isPro": false,
"type": "user",
"user": "wassemgtk"
}
}
] | 2025-02-10T10:29:28 | Expect the Unexpected: FailSafe Long Context QA for Finance | We propose a new long-context financial benchmark, FailSafeQA, designed to
test the robustness and context-awareness of LLMs against six variations in
human-interface interactions in LLM-based query-answer systems within finance.
We concentrate on two case studies: Query Failure and Context Failure. In the
Query Failure scenario, we perturb the original query to vary in domain
expertise, completeness, and linguistic accuracy. In the Context Failure case,
we simulate the uploads of degraded, irrelevant, and empty documents. We employ
the LLM-as-a-Judge methodology with Qwen2.5-72B-Instruct and use fine-grained
rating criteria to define and calculate Robustness, Context Grounding, and
Compliance scores for 24 off-the-shelf models. The results suggest that
although some models excel at mitigating input perturbations, they must balance
robust answering with the ability to refrain from hallucinating. Notably,
Palmyra-Fin-128k-Instruct, recognized as the most compliant model, maintained
strong baseline performance but encountered challenges in sustaining robust
predictions in 17% of test cases. On the other hand, the most robust model,
OpenAI o3-mini, fabricated information in 41% of tested cases. The results
demonstrate that even high-performing models have significant room for
improvement and highlight the role of FailSafeQA as a tool for developing LLMs
optimized for dependability in financial applications. The dataset is available
at: https://huggingface.co/datasets/Writer/FailSafeQA | 126 | 67ab4175757d2eb190af03ca | null | null |
|
2025-02-12T01:31:44.368000 | FocalCodec: Low-Bitrate Speech Coding via Focal Modulation Networks | 2 | {
"_id": "63195d0582e7eec0eac040e3",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63195d0582e7eec0eac040e3/0tXOYkMfmv9e53zBWgqz7.png",
"followerCount": 1,
"fullname": "Luca Della Libera",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "lucadellalib",
"type": "user"
} | true | null | 2502.04465 | [
{
"_id": "67a953844ea315a67e02461d",
"hidden": false,
"name": "Luca Della Libera",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-11T10:03:10.257Z",
"user": {
"_id": "63195d0582e7eec0eac040e3",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63195d0582e7eec0eac040e3/0tXOYkMfmv9e53zBWgqz7.png",
"fullname": "Luca Della Libera",
"isPro": false,
"type": "user",
"user": "lucadellalib"
}
},
{
"_id": "67a953844ea315a67e02461e",
"hidden": false,
"name": "Francesco Paissan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a953844ea315a67e02461f",
"hidden": false,
"name": "Cem Subakan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a953844ea315a67e024620",
"hidden": false,
"name": "Mirco Ravanelli",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-06T19:24:50 | FocalCodec: Low-Bitrate Speech Coding via Focal Modulation Networks | Large language models have revolutionized natural language processing through
self-supervised pretraining on massive datasets. Inspired by this success,
researchers have explored adapting these methods to speech by discretizing
continuous audio into tokens using neural audio codecs. However, existing
approaches face limitations, including high bitrates, the loss of either
semantic or acoustic information, and the reliance on multi-codebook designs
when trying to capture both, which increases architectural complexity for
downstream tasks. To address these challenges, we introduce FocalCodec, an
efficient low-bitrate codec based on focal modulation that utilizes a single
binary codebook to compress speech between 0.16 and 0.65 kbps. FocalCodec
delivers competitive performance in speech resynthesis and voice conversion at
lower bitrates than the current state-of-the-art, while effectively handling
multilingual speech and noisy environments. Evaluation on downstream tasks
shows that FocalCodec successfully preserves sufficient semantic and acoustic
information, while also being well-suited for generative modeling. Demo
samples, code and checkpoints are available at
https://lucadellalib.github.io/focalcodec-web/. | 3 | 67a953854ea315a67e024659 | null | null |
|
2025-02-11T23:55:37.671000 | Teaching Language Models to Critique via Reinforcement Learning | 2 | {
"_id": "622f103fc78da4c7ebd7c887",
"avatarUrl": "/avatars/b0c7cd29835d92c2cd584947fcd5d520.svg",
"followerCount": 4,
"fullname": "Xie",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "Zhihui",
"type": "user"
} | true | null | 2502.03492 | [
{
"_id": "67a5a8e595df68b0a167c298",
"hidden": false,
"name": "Zhihui Xie",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-12T09:17:02.682Z",
"user": {
"_id": "622f103fc78da4c7ebd7c887",
"avatarUrl": "/avatars/b0c7cd29835d92c2cd584947fcd5d520.svg",
"fullname": "Xie",
"isPro": false,
"type": "user",
"user": "Zhihui"
}
},
{
"_id": "67a5a8e595df68b0a167c299",
"hidden": false,
"name": "Jie chen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a5a8e595df68b0a167c29a",
"hidden": false,
"name": "Liyu Chen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a5a8e595df68b0a167c29b",
"hidden": false,
"name": "Weichao Mao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a5a8e595df68b0a167c29c",
"hidden": false,
"name": "Jingjing Xu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a5a8e595df68b0a167c29d",
"hidden": false,
"name": "Lingpeng Kong",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-05T02:18:46 | Teaching Language Models to Critique via Reinforcement Learning | Teaching large language models (LLMs) to critique and refine their outputs is
crucial for building systems that can iteratively improve, yet it is
fundamentally limited by the ability to provide accurate judgments and
actionable suggestions. In this work, we study LLM critics for code generation
and propose CTRL, a framework for Critic
Training via Reinforcement Learning, which
trains a critic model to generate feedback that maximizes correction
performance for a fixed generator model without human supervision. Our results
demonstrate that critics trained with CTRL significantly enhance
pass rates and mitigate compounding errors across both base and stronger
generator models. Furthermore, we show that these critic models act as accurate
generative reward models and enable test-time scaling through iterative
critique-revision, achieving up to 106.1% relative improvements across
challenging code generation benchmarks. | 24 | 67a5a8e695df68b0a167c2c6 | null | null |
|
2025-02-11T23:27:13.769000 | Magic 1-For-1: Generating One Minute Video Clips within One Minute | 4 | {
"_id": "60f1abe7544c2adfd699860c",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"followerCount": 6280,
"fullname": "AK",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "akhaliq",
"type": "user"
} | true | null | 2502.07701 | [
{
"_id": "67ac23166def89f9aae56abd",
"hidden": false,
"name": "Hongwei Yi",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac23166def89f9aae56abe",
"hidden": false,
"name": "Shitong Shao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac23166def89f9aae56abf",
"hidden": false,
"name": "Tian Ye",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-12T09:16:12.141Z",
"user": {
"_id": "66015e8aa4d296af07de538e",
"avatarUrl": "/avatars/a1295c631cc2646282c545859975ce4c.svg",
"fullname": "Ye",
"isPro": false,
"type": "user",
"user": "Owen777"
}
},
{
"_id": "67ac23166def89f9aae56ac0",
"hidden": false,
"name": "Jiantong Zhao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac23166def89f9aae56ac1",
"hidden": false,
"name": "Qingyu Yin",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac23166def89f9aae56ac2",
"hidden": false,
"name": "Michael Lingelbach",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac23166def89f9aae56ac3",
"hidden": false,
"name": "Li Yuan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac23166def89f9aae56ac4",
"hidden": false,
"name": "Yonghong Tian",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac23166def89f9aae56ac5",
"hidden": false,
"name": "Enze Xie",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac23166def89f9aae56ac6",
"hidden": false,
"name": "Daquan Zhou",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-11T16:58:15 | Magic 1-For-1: Generating One Minute Video Clips within One Minute | In this technical report, we present Magic 1-For-1 (Magic141), an efficient
video generation model with optimized memory consumption and inference latency.
The key idea is simple: factorize the text-to-video generation task into two
separate easier tasks for diffusion step distillation, namely text-to-image
generation and image-to-video generation. We verify that with the same
optimization algorithm, the image-to-video task is indeed easier to converge
over the text-to-video task. We also explore a bag of optimization tricks to
reduce the computational cost of training the image-to-video (I2V) models from
three aspects: 1) model convergence speedup by using a multi-modal prior
condition injection; 2) inference latency speedup by applying adversarial
step distillation; and 3) inference memory cost optimization with parameter
sparsification. With those techniques, we are able to generate 5-second video
clips within 3 seconds. By applying a test time sliding window, we are able to
generate a minute-long video within one minute with significantly improved
visual quality and motion dynamics, spending on average less than 1 second to
generate each 1-second video clip. We conduct a series of preliminary
explorations to find out the optimal tradeoff between computational cost and
video quality during diffusion step distillation and hope this could be a good
foundation model for open-source explorations. The code and the model weights
are available at https://github.com/DA-Group-PKU/Magic-1-For-1. | 33 | 67ac23186def89f9aae56b69 | null | null |
|
2025-02-11T23:22:50.454000 | Forget What You Know about LLMs Evaluations - LLMs are Like a Chameleon | 3 | {
"_id": "6731e56a07cf693a1104d2cb",
"avatarUrl": "/avatars/46a3269a19c7e6bfb7004a5da9701459.svg",
"followerCount": null,
"fullname": "Seffi Cohen",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "seffico",
"type": "user"
} | false | null | 2502.07445 | [
{
"_id": "67ac216d602eb9ca8a517be6",
"hidden": false,
"name": "Nurit Cohen-Inger",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac216d602eb9ca8a517be7",
"hidden": false,
"name": "Yehonatan Elisha",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-13T20:36:02.389Z",
"user": {
"_id": "662a80358a721ebd0b4f358b",
"avatarUrl": "/avatars/5be5a3c8b13f6f663206a19d0525c18e.svg",
"fullname": "Yehonatan Elisha",
"isPro": false,
"type": "user",
"user": "Yoniel"
}
},
{
"_id": "67ac216d602eb9ca8a517be8",
"hidden": false,
"name": "Bracha Shapira",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac216d602eb9ca8a517be9",
"hidden": false,
"name": "Lior Rokach",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac216d602eb9ca8a517bea",
"hidden": false,
"name": "Seffi Cohen",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-11T10:43:36 | Forget What You Know about LLMs Evaluations - LLMs are Like a Chameleon | Large language models (LLMs) often appear to excel on public benchmarks, but
these high scores may mask an overreliance on dataset-specific surface cues
rather than true language understanding. We introduce the Chameleon Benchmark
Overfit Detector (C-BOD), a meta-evaluation framework that systematically
distorts benchmark prompts via a parametric transformation and detects
overfitting of LLMs. By rephrasing inputs while preserving their semantic
content and labels, C-BOD exposes whether a model's performance is driven by
memorized patterns. Evaluated on the MMLU benchmark using 26 leading LLMs, our
method reveals an average performance degradation of 2.15% under modest
perturbations, with 20 out of 26 models exhibiting statistically significant
differences. Notably, models with higher baseline accuracy exhibit larger
performance differences under perturbation, and larger LLMs tend to be more
sensitive to rephrasings, indicating that both cases may overrely on fixed
prompt patterns. In contrast, the Llama family and models with lower baseline
accuracy show insignificant degradation, suggesting reduced dependency on
superficial cues. Moreover, C-BOD's dataset- and model-agnostic design allows
easy integration into training pipelines to promote more robust language
understanding. Our findings challenge the community to look beyond leaderboard
scores and prioritize resilience and generalization in LLM evaluation. | 11 | 67ac216e602eb9ca8a517c1d | null | null |
|
2025-02-11T23:21:13.452000 | VidCRAFT3: Camera, Object, and Lighting Control for Image-to-Video Generation | 3 | {
"_id": "60f1abe7544c2adfd699860c",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"followerCount": 6280,
"fullname": "AK",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "akhaliq",
"type": "user"
} | false | null | 2502.07531 | [
{
"_id": "67ac21acaa680a0f8782d273",
"hidden": false,
"name": "Sixiao Zheng",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac21acaa680a0f8782d274",
"hidden": false,
"name": "Zimian Peng",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac21acaa680a0f8782d275",
"hidden": false,
"name": "Yanpeng Zhou",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac21acaa680a0f8782d276",
"hidden": false,
"name": "Yi Zhu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac21acaa680a0f8782d277",
"hidden": false,
"name": "Hang Xu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac21acaa680a0f8782d278",
"hidden": false,
"name": "Xiangru Huang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac21acaa680a0f8782d279",
"hidden": false,
"name": "Yanwei Fu",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-11T13:11:59 | VidCRAFT3: Camera, Object, and Lighting Control for Image-to-Video
Generation | Recent image-to-video generation methods have demonstrated success in
enabling control over one or two visual elements, such as camera trajectory or
object motion. However, these methods are unable to offer control over multiple
visual elements due to limitations in data and network efficacy. In this paper,
we introduce VidCRAFT3, a novel framework for precise image-to-video generation
that enables control over camera motion, object motion, and lighting direction
simultaneously. To better decouple control over each visual element, we propose
the Spatial Triple-Attention Transformer, which integrates lighting direction,
text, and image in a symmetric way. Since most real-world video datasets lack
lighting annotations, we construct a high-quality synthetic video dataset, the
VideoLightingDirection (VLD) dataset. This dataset includes lighting direction
annotations and objects of diverse appearance, enabling VidCRAFT3 to
effectively handle strong light transmission and reflection effects.
Additionally, we propose a three-stage training strategy that eliminates the
need for training data annotated with multiple visual elements (camera motion,
object motion, and lighting direction) simultaneously. Extensive experiments on
benchmark datasets demonstrate the efficacy of VidCRAFT3 in producing
high-quality video content, surpassing existing state-of-the-art methods in
terms of control granularity and visual coherence. All code and data will be
publicly available. Project page: https://sixiaozheng.github.io/VidCRAFT3/. | 13 | 67ac21b2aa680a0f8782d3bd | null | null |
|
2025-02-11T23:16:28.213000 | CAD-Editor: A Locate-then-Infill Framework with Automated Training Data Synthesis for Text-Based CAD Editing | 2 | {
"_id": "63eb00a191a1b8ec4fbba2a9",
"avatarUrl": "/avatars/0cc7cf9b6d05337603f700e0d592edf5.svg",
"followerCount": 3,
"fullname": "ShizhaoSun",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "ShizhaoSun",
"type": "user"
} | true | null | 2502.03997 | [
{
"_id": "67ac206214d5fe7767e7ec4e",
"hidden": false,
"name": "Yu Yuan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac206214d5fe7767e7ec4f",
"hidden": false,
"name": "Shizhao Sun",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-12T09:16:14.580Z",
"user": {
"_id": "63eb00a191a1b8ec4fbba2a9",
"avatarUrl": "/avatars/0cc7cf9b6d05337603f700e0d592edf5.svg",
"fullname": "ShizhaoSun",
"isPro": false,
"type": "user",
"user": "ShizhaoSun"
}
},
{
"_id": "67ac206214d5fe7767e7ec50",
"hidden": false,
"name": "Qi Liu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac206214d5fe7767e7ec51",
"hidden": false,
"name": "Jiang Bian",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-06T11:57:14 | CAD-Editor: A Locate-then-Infill Framework with Automated Training Data
Synthesis for Text-Based CAD Editing | Computer Aided Design (CAD) is indispensable across various industries.
Text-based CAD editing, which automates the modification of CAD models based on
textual instructions, holds great potential but remains underexplored.
Existing methods primarily focus on design variation generation or text-based
CAD generation, either lacking support for text-based control or neglecting
existing CAD models as constraints. We introduce CAD-Editor, the first
framework for text-based CAD editing. To address the challenge of demanding
triplet data with accurate correspondence for training, we propose an automated
data synthesis pipeline. This pipeline utilizes design variation models to
generate pairs of original and edited CAD models and employs Large
Vision-Language Models (LVLMs) to summarize their differences into editing
instructions. To tackle the composite nature of text-based CAD editing, we
propose a locate-then-infill framework that decomposes the task into two
focused sub-tasks: locating regions requiring modification and infilling these
regions with appropriate edits. Large Language Models (LLMs) serve as the
backbone for both sub-tasks, leveraging their capabilities in natural language
understanding and CAD knowledge. Experiments show that CAD-Editor achieves
superior performance both quantitatively and qualitatively. | 9 | 67ac206314d5fe7767e7ec98 | null | null |
|
2025-02-11T23:14:10.293000 | Enhance-A-Video: Better Generated Video for Free | 2 | {
"_id": "60f1abe7544c2adfd699860c",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"followerCount": 6280,
"fullname": "AK",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "akhaliq",
"type": "user"
} | false | null | 2502.07508 | [
{
"_id": "67ac2006a6b5a26040fc94f7",
"hidden": false,
"name": "Yang Luo",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac2006a6b5a26040fc94f8",
"hidden": false,
"name": "Xuanlei Zhao",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-18T09:34:22.430Z",
"user": {
"_id": "64f5937db6d7050b19c68fec",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/64f5937db6d7050b19c68fec/4lWYw6-VxZsbl-1Rd9FSy.jpeg",
"fullname": "Xuanlei Zhao",
"isPro": false,
"type": "user",
"user": "oahzxl"
}
},
{
"_id": "67ac2006a6b5a26040fc94f9",
"hidden": false,
"name": "Mengzhao Chen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac2006a6b5a26040fc94fa",
"hidden": false,
"name": "Kaipeng Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac2006a6b5a26040fc94fb",
"hidden": false,
"name": "Wenqi Shao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac2006a6b5a26040fc94fc",
"hidden": false,
"name": "Kai Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac2006a6b5a26040fc94fd",
"hidden": false,
"name": "Zhangyang Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac2006a6b5a26040fc94fe",
"hidden": false,
"name": "Yang You",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-11T12:22:35 | Enhance-A-Video: Better Generated Video for Free | DiT-based video generation has achieved remarkable results, but research into
enhancing existing models remains relatively underexplored. In this work, we
introduce a training-free approach to enhance the coherence and quality of
DiT-based generated videos, named Enhance-A-Video. The core idea is enhancing
the cross-frame correlations based on non-diagonal temporal attention
distributions. Thanks to its simple design, our approach can be easily applied
to most DiT-based video generation frameworks without any retraining or
fine-tuning. Across various DiT-based video generation models, our approach
demonstrates promising improvements in both temporal consistency and visual
quality. We hope this research can inspire future explorations in video
generation enhancement. | 21 | 67ac200ea6b5a26040fc9709 | null | null |
|
2025-02-11T23:11:49.993000 | Auditing Prompt Caching in Language Model APIs | 3 | {
"_id": "60f1abe7544c2adfd699860c",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"followerCount": 6280,
"fullname": "AK",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "akhaliq",
"type": "user"
} | false | null | 2502.07776 | [
{
"_id": "67ac1f7851c7f3b53ffc4def",
"hidden": false,
"name": "Chenchen Gu",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-13T20:36:04.615Z",
"user": {
"_id": "64b78d941d913123e6de2ecc",
"avatarUrl": "/avatars/f1225e5fc2033c6a473db8adcb911e3d.svg",
"fullname": "Chenchen Gu",
"isPro": false,
"type": "user",
"user": "cygu"
}
},
{
"_id": "67ac1f7851c7f3b53ffc4df0",
"hidden": false,
"name": "Xiang Lisa Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac1f7851c7f3b53ffc4df1",
"hidden": false,
"name": "Rohith Kuditipudi",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac1f7851c7f3b53ffc4df2",
"hidden": false,
"name": "Percy Liang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac1f7851c7f3b53ffc4df3",
"hidden": false,
"name": "Tatsunori Hashimoto",
"status": "extracted_pending",
"statusLastChangedAt": "2025-02-12T04:11:36.912Z",
"user": {
"_id": "661595d1b3d0b21da55cde7d",
"avatarUrl": "/avatars/ba3fa065536518637d21a5c46cee5dd1.svg",
"fullname": "Tatsu Hashimoto",
"isPro": false,
"type": "user",
"user": "thashim"
}
}
] | 2025-02-11T18:58:04 | Auditing Prompt Caching in Language Model APIs | Prompt caching in large language models (LLMs) results in data-dependent
timing variations: cached prompts are processed faster than non-cached prompts.
These timing differences introduce the risk of side-channel timing attacks. For
example, if the cache is shared across users, an attacker could identify cached
prompts from fast API response times to learn information about other users'
prompts. Because prompt caching may cause privacy leakage, transparency around
the caching policies of API providers is important. To this end, we develop and
conduct statistical audits to detect prompt caching in real-world LLM API
providers. We detect global cache sharing across users in seven API providers,
including OpenAI, resulting in potential privacy leakage about users' prompts.
Timing variations due to prompt caching can also result in leakage of
information about model architecture. Namely, we find evidence that OpenAI's
embedding model is a decoder-only Transformer, which was previously not
publicly known. | 4 | 67ac1f7851c7f3b53ffc4e1b | null | null |
|
2025-02-11T23:10:26.895000 | NatureLM: Deciphering the Language of Nature for Scientific Discovery | 2 | {
"_id": "60f1abe7544c2adfd699860c",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"followerCount": 6280,
"fullname": "AK",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "akhaliq",
"type": "user"
} | false | null | 2502.07527 | [
{
"_id": "67ac1eaac61306b0ac95d2c6",
"hidden": false,
"name": "Yingce Xia",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac1eaac61306b0ac95d2c7",
"hidden": false,
"name": "Peiran Jin",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac1eaac61306b0ac95d2c8",
"hidden": false,
"name": "Shufang Xie",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac1eaac61306b0ac95d2c9",
"hidden": false,
"name": "Liang He",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac1eaac61306b0ac95d2ca",
"hidden": false,
"name": "Chuan Cao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac1eaac61306b0ac95d2cb",
"hidden": false,
"name": "Renqian Luo",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac1eaac61306b0ac95d2cc",
"hidden": false,
"name": "Guoqing Liu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac1eaac61306b0ac95d2cd",
"hidden": false,
"name": "Yue Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac1eaac61306b0ac95d2ce",
"hidden": false,
"name": "Zequn Liu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac1eaac61306b0ac95d2cf",
"hidden": false,
"name": "Yuan-Jyue Chen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac1eaac61306b0ac95d2d0",
"hidden": false,
"name": "Zekun Guo",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac1eaac61306b0ac95d2d1",
"hidden": false,
"name": "Yeqi Bai",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac1eaac61306b0ac95d2d2",
"hidden": false,
"name": "Pan Deng",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac1eaac61306b0ac95d2d3",
"hidden": false,
"name": "Yaosen Min",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac1eaac61306b0ac95d2d4",
"hidden": false,
"name": "Ziheng Lu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac1eaac61306b0ac95d2d5",
"hidden": false,
"name": "Hongxia Hao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac1eaac61306b0ac95d2d6",
"hidden": false,
"name": "Han Yang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac1eaac61306b0ac95d2d7",
"hidden": false,
"name": "Jielan Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac1eaac61306b0ac95d2d8",
"hidden": false,
"name": "Chang Liu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac1eaac61306b0ac95d2d9",
"hidden": false,
"name": "Jia Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac1eaac61306b0ac95d2da",
"hidden": false,
"name": "Jianwei Zhu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac1eaac61306b0ac95d2db",
"hidden": false,
"name": "Kehan Wu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac1eaac61306b0ac95d2dc",
"hidden": false,
"name": "Wei Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac1eaac61306b0ac95d2dd",
"hidden": false,
"name": "Kaiyuan Gao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac1eaac61306b0ac95d2de",
"hidden": false,
"name": "Qizhi Pei",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac1eaac61306b0ac95d2df",
"hidden": false,
"name": "Qian Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac1eaac61306b0ac95d2e0",
"hidden": false,
"name": "Xixian Liu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac1eaac61306b0ac95d2e1",
"hidden": false,
"name": "Yanting Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac1eaac61306b0ac95d2e2",
"hidden": false,
"name": "Houtian Zhu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac1eaac61306b0ac95d2e3",
"hidden": false,
"name": "Yeqing Lu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac1eaac61306b0ac95d2e4",
"hidden": false,
"name": "Mingqian Ma",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-18T09:34:24.363Z",
"user": {
"_id": "662f15453f9804ca5f9c1fd3",
"avatarUrl": "/avatars/f7d514a7c7ef08f16dbde35310d0d8b6.svg",
"fullname": "Mingqian Ma",
"isPro": false,
"type": "user",
"user": "Mishamq"
}
},
{
"_id": "67ac1eaac61306b0ac95d2e5",
"hidden": false,
"name": "Zun Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac1eaac61306b0ac95d2e6",
"hidden": false,
"name": "Tian Xie",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac1eaac61306b0ac95d2e7",
"hidden": false,
"name": "Krzysztof Maziarz",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac1eaac61306b0ac95d2e8",
"hidden": false,
"name": "Marwin Segler",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac1eaac61306b0ac95d2e9",
"hidden": false,
"name": "Zhao Yang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac1eaac61306b0ac95d2ea",
"hidden": false,
"name": "Zilong Chen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac1eaac61306b0ac95d2eb",
"hidden": false,
"name": "Yu Shi",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac1eaac61306b0ac95d2ec",
"hidden": false,
"name": "Shuxin Zheng",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac1eaac61306b0ac95d2ed",
"hidden": false,
"name": "Lijun Wu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac1eaac61306b0ac95d2ee",
"hidden": false,
"name": "Chen Hu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac1eaac61306b0ac95d2ef",
"hidden": false,
"name": "Peggy Dai",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac1eaac61306b0ac95d2f0",
"hidden": false,
"name": "Tie-Yan Liu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac1eaac61306b0ac95d2f1",
"hidden": false,
"name": "Haiguang Liu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac1eaac61306b0ac95d2f2",
"hidden": false,
"name": "Tao Qin",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-11T13:08:03 | NatureLM: Deciphering the Language of Nature for Scientific Discovery | Foundation models have revolutionized natural language processing and
artificial intelligence, significantly enhancing how machines comprehend and
generate human languages. Inspired by the success of these foundation models,
researchers have developed foundation models for individual scientific domains,
including small molecules, materials, proteins, DNA, and RNA. However, these
models are typically trained in isolation, lacking the ability to integrate
across different scientific domains. Recognizing that entities within these
domains can all be represented as sequences, which together form the "language
of nature", we introduce Nature Language Model (briefly, NatureLM), a
sequence-based science foundation model designed for scientific discovery.
Pre-trained with data from multiple scientific domains, NatureLM offers a
unified, versatile model that enables various applications including: (i)
generating and optimizing small molecules, proteins, RNA, and materials using
text instructions; (ii) cross-domain generation/design, such as
protein-to-molecule and protein-to-RNA generation; and (iii) achieving
state-of-the-art performance in tasks like SMILES-to-IUPAC translation and
retrosynthesis on USPTO-50k. NatureLM offers a promising generalist approach
for various scientific tasks, including drug discovery (hit
generation/optimization, ADMET optimization, synthesis), novel material design,
and the development of therapeutic proteins or nucleotides. We have developed
NatureLM models in different sizes (1 billion, 8 billion, and 46.7 billion
parameters) and observed a clear improvement in performance as the model size
increases. | 19 | 67ac1eabc61306b0ac95d346 | null | null |
|
2025-02-11T23:04:08.153000 | Hephaestus: Improving Fundamental Agent Capabilities of Large Language Models through Continual Pre-Training | 2 | {
"_id": "6471bddd609ae9f56368f132",
"avatarUrl": "/avatars/71a80127a01e662ab2790de0511326b6.svg",
"followerCount": 1,
"fullname": "Yuchen Zhuang",
"isHf": false,
"isMod": false,
"isPro": true,
"name": "yczhuang",
"type": "user"
} | true | null | 2502.06589 | [
{
"_id": "67ac1d45e6f1e95ccf6de3b7",
"hidden": false,
"name": "Yuchen Zhuang",
"status": "extracted_pending",
"statusLastChangedAt": "2025-02-12T04:02:14.866Z",
"user": {
"_id": "6471bddd609ae9f56368f132",
"avatarUrl": "/avatars/71a80127a01e662ab2790de0511326b6.svg",
"fullname": "Yuchen Zhuang",
"isPro": true,
"type": "user",
"user": "yczhuang"
}
},
{
"_id": "67ac1d45e6f1e95ccf6de3b8",
"hidden": false,
"name": "Jingfeng Yang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac1d45e6f1e95ccf6de3b9",
"hidden": false,
"name": "Haoming Jiang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac1d45e6f1e95ccf6de3ba",
"hidden": false,
"name": "Xin Liu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac1d45e6f1e95ccf6de3bb",
"hidden": false,
"name": "Kewei Cheng",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac1d45e6f1e95ccf6de3bc",
"hidden": false,
"name": "Sanket Lokegaonkar",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac1d45e6f1e95ccf6de3bd",
"hidden": false,
"name": "Yifan Gao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac1d45e6f1e95ccf6de3be",
"hidden": false,
"name": "Qing Ping",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac1d45e6f1e95ccf6de3bf",
"hidden": false,
"name": "Tianyi Liu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac1d45e6f1e95ccf6de3c0",
"hidden": false,
"name": "Binxuan Huang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac1d45e6f1e95ccf6de3c1",
"hidden": false,
"name": "Zheng Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac1d45e6f1e95ccf6de3c2",
"hidden": false,
"name": "Zhengyang Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac1d45e6f1e95ccf6de3c3",
"hidden": false,
"name": "Pei Chen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac1d45e6f1e95ccf6de3c4",
"hidden": false,
"name": "Ruijie Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac1d45e6f1e95ccf6de3c5",
"hidden": false,
"name": "Rongzhi Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac1d45e6f1e95ccf6de3c6",
"hidden": false,
"name": "Nasser Zalmout",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac1d45e6f1e95ccf6de3c7",
"hidden": false,
"name": "Priyanka Nigam",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac1d45e6f1e95ccf6de3c8",
"hidden": false,
"name": "Bing Yin",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac1d45e6f1e95ccf6de3c9",
"hidden": false,
"name": "Chao Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-10T15:54:34 | Hephaestus: Improving Fundamental Agent Capabilities of Large Language
Models through Continual Pre-Training | Due to the scarcity of agent-oriented pre-training data, LLM-based autonomous
agents typically rely on complex prompting or extensive fine-tuning, which
often fails to introduce new capabilities while preserving strong
generalizability. We introduce Hephaestus-Forge, the first large-scale
pre-training corpus designed to enhance the fundamental capabilities of LLM
agents in API function calling, intrinsic reasoning and planning, and adapting
to environmental feedback. Hephaestus-Forge comprises 103B agent-specific data
encompassing 76,537 APIs, including both tool documentation to introduce
knowledge of API functions and function calling trajectories to strengthen
intrinsic reasoning. To explore effective training protocols, we investigate
scaling laws to identify the optimal recipe in data mixing ratios. By continual
pre-training on Hephaestus-Forge, Hephaestus outperforms small- to medium-scale
open-source LLMs and rivals commercial LLMs on three agent benchmarks,
demonstrating the effectiveness of our pre-training corpus in enhancing
fundamental agentic capabilities and generalization of LLMs to new tasks or
environments. | 18 | 67ac1d46e6f1e95ccf6de419 | null | null |
|
2025-02-11T23:03:08.578000 | Scaling Pre-training to One Hundred Billion Data for Vision Language Models | 4 | {
"_id": "60f1abe7544c2adfd699860c",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"followerCount": 6280,
"fullname": "AK",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "akhaliq",
"type": "user"
} | false | null | 2502.07617 | [
{
"_id": "67ac1d68c29356f92ed772c5",
"hidden": false,
"name": "Xiao Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac1d68c29356f92ed772c6",
"hidden": false,
"name": "Ibrahim Alabdulmohsin",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac1d68c29356f92ed772c7",
"hidden": false,
"name": "Daniel Salz",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac1d68c29356f92ed772c8",
"hidden": false,
"name": "Zhe Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac1d68c29356f92ed772c9",
"hidden": false,
"name": "Keran Rong",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-13T08:25:28.934Z",
"user": {
"_id": "648a9083abcf427a9a498679",
"avatarUrl": "/avatars/e32d413ce0a48d83f95e29c11a8a8ae8.svg",
"fullname": "Keran ",
"isPro": false,
"type": "user",
"user": "Keeera"
}
},
{
"_id": "67ac1d68c29356f92ed772ca",
"hidden": false,
"name": "Xiaohua Zhai",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-11T15:05:33 | Scaling Pre-training to One Hundred Billion Data for Vision Language
Models | We provide an empirical investigation of the potential of pre-training
vision-language models on an unprecedented scale: 100 billion examples. We find
that model performance tends to saturate at this scale on many common
Western-centric classification and retrieval benchmarks, such as COCO Captions.
Nevertheless, tasks of cultural diversity achieve more substantial gains from
the 100-billion scale web data, thanks to its coverage of long-tail concepts.
Furthermore, we analyze the model's multilinguality and show gains in
low-resource languages as well. In addition, we observe that reducing the size
of the pretraining dataset via quality filters like using CLIP, typically used
to enhance performance, may inadvertently reduce the cultural diversity
represented even in large-scale datasets. Our results highlight that while
traditional benchmarks may not benefit significantly from scaling noisy, raw
web data to 100 billion examples, this data scale is vital for building truly
inclusive multimodal systems. | 29 | 67ac1d6ac29356f92ed77354 | null | null |
|
2025-02-11T23:00:20.080000 | CodeI/O: Condensing Reasoning Patterns via Code Input-Output Prediction | 3 | {
"_id": "621e40ac944c7e36aaec2369",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/621e40ac944c7e36aaec2369/Yj-FJRWps3rvsS_B2bnKo.jpeg",
"followerCount": 6,
"fullname": "Junlong Li",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "lockon",
"type": "user"
} | false | null | 2502.07316 | [
{
"_id": "67ac0ab720e98bddc5c19fed",
"hidden": false,
"name": "Junlong Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac0ab720e98bddc5c19fee",
"hidden": false,
"name": "Daya Guo",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac0ab720e98bddc5c19fef",
"hidden": false,
"name": "Dejian Yang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac0ab720e98bddc5c19ff0",
"hidden": false,
"name": "Runxin Xu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac0ab720e98bddc5c19ff1",
"hidden": false,
"name": "Yu Wu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac0ab720e98bddc5c19ff2",
"hidden": false,
"name": "Junxian He",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-11T07:26:50 | CodeI/O: Condensing Reasoning Patterns via Code Input-Output Prediction | Reasoning is a fundamental capability of Large Language Models. While prior
research predominantly focuses on enhancing narrow skills like math or code
generation, improving performance on many other reasoning tasks remains
challenging due to sparse and fragmented training data. To address this issue,
we propose CodeI/O, a novel approach that systematically condenses diverse
reasoning patterns inherently embedded in contextually-grounded codes, through
transforming the original code into a code input-output prediction format. By
training models to predict inputs/outputs given code and test cases entirely in
natural language as Chain-of-Thought (CoT) rationales, we expose them to
universal reasoning primitives -- like logic flow planning, state-space
searching, decision tree traversal, and modular decomposition -- while
decoupling structured reasoning from code-specific syntax and preserving
procedural rigor. Experimental results demonstrate CodeI/O leads to consistent
improvements across symbolic, scientific, logic, math & numerical, and
commonsense reasoning tasks. By matching the existing ground-truth outputs or
re-executing the code with predicted inputs, we can verify each prediction and
further enhance the CoTs through multi-turn revision, resulting in CodeI/O++
and achieving higher performance. Our data and models are available at
https://github.com/hkust-nlp/CodeIO. | 46 | 67ac0ab820e98bddc5c1a039 | null | null |
|
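The CodeI/O abstract above describes turning code plus test cases into natural-language input/output prediction tasks, with ground truth recoverable by executing the code. A minimal sketch of that idea (all function names here are hypothetical illustrations, not the authors' actual pipeline):

```python
# Hypothetical sketch of the CodeI/O data-construction idea: from one piece of
# code and one test case, build two prediction prompts -- "predict the output
# given the input" and "predict an input given the output" -- and obtain the
# ground-truth answer by simply running the code.

def make_io_examples(code_str, func, test_input):
    """Build output-prediction and input-prediction prompts from one test case."""
    expected_output = func(*test_input)  # ground truth via execution
    output_task = (
        f"Given the following code:\n{code_str}\n"
        f"Predict the output for input {test_input!r}. "
        f"Explain your reasoning step by step."
    )
    input_task = (
        f"Given the following code:\n{code_str}\n"
        f"Predict an input that produces output {expected_output!r}. "
        f"Explain your reasoning step by step."
    )
    return output_task, input_task, expected_output

# Example: a small function whose behavior the model must trace in words.
code_str = (
    "def collatz_steps(n):\n"
    "    s = 0\n"
    "    while n != 1:\n"
    "        n = 3 * n + 1 if n % 2 else n // 2\n"
    "        s += 1\n"
    "    return s"
)

def collatz_steps(n):
    s = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        s += 1
    return s

out_task, in_task, gold = make_io_examples(code_str, collatz_steps, (6,))
print(gold)  # 8  (6 -> 3 -> 10 -> 5 -> 16 -> 8 -> 4 -> 2 -> 1)
```

Re-executing the code with a model's predicted input, as the abstract notes, gives a cheap verifier for multi-turn revision of the CoT rationales.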
2025-02-11T22:58:37.585000 | LLMs Can Easily Learn to Reason from Demonstrations Structure, not content, is what matters! | 2 | {
"_id": "60f1abe7544c2adfd699860c",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"followerCount": 6280,
"fullname": "AK",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "akhaliq",
"type": "user"
} | false | null | 2502.07374 | [
{
"_id": "67ac1c6436464325ebe3c6e3",
"hidden": false,
"name": "Dacheng Li",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-27T09:18:06.131Z",
"user": {
"_id": "63715b25ffc0489ed7d1f415",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63715b25ffc0489ed7d1f415/xZJepbs0LRqFbW1knnBKR.jpeg",
"fullname": "Dacheng Li",
"isPro": false,
"type": "user",
"user": "DachengLi"
}
},
{
"_id": "67ac1c6436464325ebe3c6e4",
"hidden": false,
"name": "Shiyi Cao",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-14T08:01:33.967Z",
"user": {
"_id": "64ebbae6895a36ab28de811a",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/64ebbae6895a36ab28de811a/gBiaQP4paS4L13eu-yRm7.jpeg",
"fullname": "Shiyi Cao",
"isPro": false,
"type": "user",
"user": "eva98"
}
},
{
"_id": "67ac1c6436464325ebe3c6e5",
"hidden": false,
"name": "Tyler Griggs",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac1c6436464325ebe3c6e6",
"hidden": false,
"name": "Shu Liu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac1c6436464325ebe3c6e7",
"hidden": false,
"name": "Xiangxi Mo",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac1c6436464325ebe3c6e8",
"hidden": false,
"name": "Shishir G. Patil",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac1c6436464325ebe3c6e9",
"hidden": false,
"name": "Matei Zaharia",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac1c6436464325ebe3c6ea",
"hidden": false,
"name": "Joseph E. Gonzalez",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac1c6436464325ebe3c6eb",
"hidden": false,
"name": "Ion Stoica",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-11T08:48:48 | LLMs Can Easily Learn to Reason from Demonstrations Structure, not
content, is what matters! | Large reasoning models (LRMs) tackle complex reasoning problems by following
long chain-of-thoughts (Long CoT) that incorporate reflection, backtracking,
and self-validation. However, the training techniques and data requirements to
elicit Long CoT remain poorly understood. In this work, we find that a Large
Language model (LLM) can effectively learn Long CoT reasoning through
data-efficient supervised fine-tuning (SFT) and parameter-efficient low-rank
adaptation (LoRA). With just 17k long CoT training samples, the
Qwen2.5-32B-Instruct model achieves significant improvements on a wide range of
math and coding benchmarks, including 56.7% (+40.0%) on AIME 2024 and 57.0%
(+8.1%) on LiveCodeBench, competitive to the proprietary o1-preview model's
score of 44.6% and 59.1%. More importantly, we find that the structure of Long
CoT is critical to the learning process, whereas the content of individual
reasoning steps has minimal impact. Perturbations affecting content, such as
training on incorrect samples or removing reasoning keywords, have little
impact on performance. In contrast, structural modifications that disrupt
logical consistency in the Long CoT, such as shuffling or deleting reasoning
steps, significantly degrade accuracy. For example, a model trained on Long CoT
samples with incorrect answers still achieves only 3.2% lower accuracy compared
to training with fully correct samples. These insights deepen our understanding
of how to elicit reasoning capabilities in LLMs and highlight key
considerations for efficiently training the next generation of reasoning
models. This is the academic paper of our previous released Sky-T1-32B-Preview
model. Codes are available at https://github.com/NovaSky-AI/SkyThought. | 36 | 67ac1c6536464325ebe3c723 | null | null |