publishedAt (timestamp[ns], 2023-02-13 12:55:54 to 2025-05-02 03:36:49, ⌀) | title (string, lengths 8 to 206, ⌀) | thumbnail (string, length 77, ⌀) | numComments (int64, 0 to 143, ⌀) | submittedBy (dict) | isAuthorParticipating (bool, 2 classes) | mediaUrls (sequence, lengths 0 to 12, ⌀) | paper_id (string, length 10, ⌀) | paper_authors (list, lengths 1 to 942, ⌀) | paper_publishedAt (timestamp[ns], 2023-02-13 17:55:54 to 2025-05-02 07:36:49, ⌀) | paper_title (string, lengths 8 to 206, ⌀) | paper_summary (string, lengths 165 to 1.92k, ⌀) | paper_upvotes (int64, 0 to 615, ⌀) | paper_discussionId (string, length 24, ⌀) | paper_projectPage (string, 572 classes) | paper_githubRepo (string, 813 classes) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2025-02-11T22:53:19.310000 | Competitive Programming with Large Reasoning Models | 2 | {
"_id": "60f1abe7544c2adfd699860c",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"followerCount": 6280,
"fullname": "AK",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "akhaliq",
"type": "user"
} | false | null | 2502.06807 | [
{
"_id": "67ac1b080686a1e0690741ce",
"hidden": false,
"name": "OpenAI",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac1b080686a1e0690741d0",
"hidden": false,
"name": "Ahmed El-Kishky",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac1b080686a1e0690741d1",
"hidden": false,
"name": "Alexander Wei",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac1b080686a1e0690741d2",
"hidden": false,
"name": "Andre Saraiva",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac1b080686a1e0690741d3",
"hidden": false,
"name": "Borys Minaev",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac1b080686a1e0690741d4",
"hidden": false,
"name": "Daniel Selsam",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac1b080686a1e0690741d5",
"hidden": false,
"name": "David Dohan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac1b080686a1e0690741d6",
"hidden": false,
"name": "Francis Song",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac1b080686a1e0690741d7",
"hidden": false,
"name": "Hunter Lightman",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac1b080686a1e0690741d8",
"hidden": false,
"name": "Ignasi Clavera",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac1b080686a1e0690741d9",
"hidden": false,
"name": "Jakub Pachocki",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac1b080686a1e0690741da",
"hidden": false,
"name": "Jerry Tworek",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac1b080686a1e0690741db",
"hidden": false,
"name": "Lorenz Kuhn",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac1b080686a1e0690741dc",
"hidden": false,
"name": "Lukasz Kaiser",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac1b080686a1e0690741dd",
"hidden": false,
"name": "Mark Chen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac1b080686a1e0690741de",
"hidden": false,
"name": "Max Schwarzer",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac1b080686a1e0690741df",
"hidden": false,
"name": "Mostafa Rohaninejad",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac1b080686a1e0690741e0",
"hidden": false,
"name": "Nat McAleese",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac1b080686a1e0690741e1",
"hidden": false,
"name": "o3 contributors",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac1b080686a1e0690741e2",
"hidden": false,
"name": "Oleg Mürk",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac1b080686a1e0690741e3",
"hidden": false,
"name": "Rhythm Garg",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac1b080686a1e0690741e4",
"hidden": false,
"name": "Rui Shu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac1b080686a1e0690741e5",
"hidden": false,
"name": "Szymon Sidor",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac1b080686a1e0690741e6",
"hidden": false,
"name": "Vineet Kosaraju",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac1b080686a1e0690741e7",
"hidden": false,
"name": "Wenda Zhou",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-03T23:00:15 | Competitive Programming with Large Reasoning Models | We show that reinforcement learning applied to large language models (LLMs)
significantly boosts performance on complex coding and reasoning tasks.
Additionally, we compare two general-purpose reasoning models - OpenAI o1 and
an early checkpoint of o3 - with a domain-specific system, o1-ioi, which uses
hand-engineered inference strategies designed for competing in the 2024
International Olympiad in Informatics (IOI). We competed live at IOI 2024 with
o1-ioi and, using hand-crafted test-time strategies, placed in the 49th
percentile. Under relaxed competition constraints, o1-ioi achieved a gold
medal. However, when evaluating later models such as o3, we find that o3
achieves gold without hand-crafted domain-specific strategies or relaxed
constraints. Our findings show that although specialized pipelines such as
o1-ioi yield solid improvements, the scaled-up, general-purpose o3 model
surpasses those results without relying on hand-crafted inference heuristics.
Notably, o3 achieves a gold medal at the 2024 IOI and obtains a Codeforces
rating on par with elite human competitors. Overall, these results indicate
that scaling general-purpose reinforcement learning, rather than relying on
domain-specific techniques, offers a robust path toward state-of-the-art AI in
reasoning domains, such as competitive programming. | 67 | 67ac1b090686a1e069074208 | null | null |
|
2025-02-11T21:08:23.528000 | Forbidden Science: Dual-Use AI Challenge Benchmark and Scientific Refusal Tests | 2 | {
"_id": "63136a82e29fb2e86d5e5bdd",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63136a82e29fb2e86d5e5bdd/pFZDuQtzfUStovbwwZGvn.png",
"followerCount": null,
"fullname": "David Noever",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "dnoever",
"type": "user"
} | false | null | 2502.06867 | [
{
"_id": "67ac026e401012b81040ae8b",
"hidden": false,
"name": "David Noever",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ac026e401012b81040ae8c",
"hidden": false,
"name": "Forrest McKee",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-08T04:27:33 | Forbidden Science: Dual-Use AI Challenge Benchmark and Scientific
Refusal Tests | The development of robust safety benchmarks for large language models
requires open, reproducible datasets that can measure both appropriate refusal
of harmful content and potential over-restriction of legitimate scientific
discourse. We present an open-source dataset and testing framework for
evaluating LLM safety mechanisms, primarily on controlled-substance queries,
analyzing four major models' responses to systematically varied prompts. Our
results reveal distinct safety profiles: Claude-3.5-sonnet demonstrated the
most conservative approach with 73% refusals and 27% allowances, while Mistral
attempted to answer 100% of queries. GPT-3.5-turbo showed moderate restriction
with 10% refusals and 90% allowances, and Grok-2 registered 20% refusals and
80% allowances. Testing prompt variation strategies revealed decreasing
response consistency, from 85% with single prompts to 65% with five variations.
This publicly available benchmark enables systematic evaluation of the critical
balance between necessary safety restrictions and potential over-censorship of
legitimate scientific inquiry, while providing a foundation for measuring
progress in AI safety implementation. Chain-of-thought analysis reveals
potential vulnerabilities in safety mechanisms, highlighting the complexity of
implementing robust safeguards without unduly restricting desirable and valid
scientific discourse. | 1 | 67ac026f401012b81040aeb0 | null | null |
|
2025-02-11T20:30:51.808000 | Jakiro: Boosting Speculative Decoding with Decoupled Multi-Head via MoE | 2 | {
"_id": "65780c60411e14898b8da93e",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/no-auth/fvjP6qICiuR09LV8Xzahb.png",
"followerCount": null,
"fullname": "Haiduo Huang",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "Hhaiduo",
"type": "user"
} | true | null | 2502.06282 | [
{
"_id": "67ab6c9867a1607ab478b975",
"hidden": false,
"name": "Haiduo Huang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-11T15:33:50.773Z",
"user": {
"_id": "65780c60411e14898b8da93e",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/no-auth/fvjP6qICiuR09LV8Xzahb.png",
"fullname": "Haiduo Huang",
"isPro": false,
"type": "user",
"user": "Hhaiduo"
}
},
{
"_id": "67ab6c9867a1607ab478b976",
"hidden": false,
"name": "Fuwei Yang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ab6c9867a1607ab478b977",
"hidden": false,
"name": "Zhenhua Liu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ab6c9867a1607ab478b978",
"hidden": false,
"name": "Yixing Xu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ab6c9867a1607ab478b979",
"hidden": false,
"name": "Jinze Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ab6c9867a1607ab478b97a",
"hidden": false,
"name": "Yang Liu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ab6c9867a1607ab478b97b",
"hidden": false,
"name": "Xuanwu Yin",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ab6c9867a1607ab478b97c",
"hidden": false,
"name": "Dong Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ab6c9867a1607ab478b97d",
"hidden": false,
"name": "Pengju Ren",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ab6c9867a1607ab478b97e",
"hidden": false,
"name": "Emad Barsoum",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-10T09:24:06 | Jakiro: Boosting Speculative Decoding with Decoupled Multi-Head via MoE | Speculative decoding (SD) accelerates large language model inference by using
a smaller draft model to predict multiple tokens, which are then verified in
parallel by the larger target model. However, the limited capacity of the draft
model often necessitates tree-based sampling to improve prediction accuracy,
where multiple candidates are generated at each step. We identify a key
limitation in this approach: the candidates at the same step are derived from
the same representation, limiting diversity and reducing overall effectiveness.
To address this, we propose Jakiro, leveraging Mixture of Experts (MoE), where
independent experts generate diverse predictions, effectively decoupling
correlations among candidates. Furthermore, we introduce a hybrid inference
strategy, combining autoregressive decoding for initial tokens with parallel
decoding for subsequent stages, and enhance the latter with a contrastive
mechanism over features to improve accuracy. Our method significantly boosts
prediction accuracy and achieves higher inference speedups. Extensive
experiments across diverse models validate the effectiveness and robustness of
our approach, establishing a new SOTA in speculative decoding. Our codes are
available at https://github.com/haiduo/Jakiro. | 5 | 67ab6c9967a1607ab478b9d0 | null | null |
|
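For readers unfamiliar with the baseline that Jakiro accelerates, the sketch below shows the vanilla speculative-decoding loop (draft with a small model, verify in one pass with the large model) in its greedy-verification form. `draft_step` and `target_logits` are hypothetical callables standing in for the two models; this is not the paper's code, and Jakiro's MoE draft heads and contrastive verification are not reproduced here.

```python
import numpy as np

def speculative_decode(tokens, draft_step, target_logits, k=4, max_new=64):
    """Greedy speculative decoding: draft k tokens with a small model,
    verify them with one forward pass of the large model, and keep the
    longest prefix that matches the target's own greedy choices."""
    tokens = list(tokens)
    start = len(tokens)
    while len(tokens) - start < max_new:
        # 1) Draft k candidate tokens autoregressively with the cheap model.
        ctx, draft = list(tokens), []
        for _ in range(k):
            t = draft_step(ctx)          # hypothetical: returns one token id
            draft.append(t)
            ctx.append(t)
        # 2) Score the whole drafted block in a single target-model pass.
        logits = np.asarray(target_logits(tokens + draft))  # (seq_len, vocab)
        n = len(tokens)
        # 3) Walk the draft: logits[n-1+i] predicts the token at position n+i.
        for i, t in enumerate(draft):
            target_t = int(logits[n - 1 + i].argmax())
            tokens.append(target_t)      # the target's choice is always safe
            if target_t != t:
                break                    # first mismatch ends this round
    return tokens
```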
2025-02-11T14:26:46.017000 | Towards Internet-Scale Training For Agents | 2 | {
"_id": "632d8b2e1d8a018adf4f98f1",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/632d8b2e1d8a018adf4f98f1/vGvpkxyGLNQSJmEONR2uX.jpeg",
"followerCount": null,
"fullname": "Brandon Trabucco",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "btrabucco",
"type": "user"
} | true | null | 2502.06776 | [
{
"_id": "67aae05bb893603a0b4b241d",
"hidden": false,
"name": "Brandon Trabucco",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-20T17:37:50.173Z",
"user": {
"_id": "632d8b2e1d8a018adf4f98f1",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/632d8b2e1d8a018adf4f98f1/vGvpkxyGLNQSJmEONR2uX.jpeg",
"fullname": "Brandon Trabucco",
"isPro": false,
"type": "user",
"user": "btrabucco"
}
},
{
"_id": "67aae05bb893603a0b4b241e",
"hidden": false,
"name": "Gunnar Sigurdsson",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aae05bb893603a0b4b241f",
"hidden": false,
"name": "Robinson Piramuthu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aae05bb893603a0b4b2420",
"hidden": false,
"name": "Ruslan Salakhutdinov",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-10T18:54:05 | Towards Internet-Scale Training For Agents | The predominant approach for training web navigation agents gathers human
demonstrations for a set of popular websites and hand-written tasks, but it is
becoming clear that human data are an inefficient resource. We develop a
pipeline to facilitate Internet-scale training for agents without laborious
human annotations. In the first stage, an LLM generates tasks for 150k diverse
websites. In the next stage, LLM agents complete tasks and produce
trajectories. In the final stage, an LLM reviews the trajectories and judges
their success. Language models are competitive with human annotators, detecting
and filtering out harmful content with an accuracy of 97%, generating feasible
tasks with an 89% rate, and judging successful trajectories with an 82.6%
accuracy. Scaling the pipeline, agents based on Llama 3.1 70B solve 16.7% of
tasks for 150k sites. Training on the data generated by our pipeline is
competitive with training on human demonstrations. In data-limited settings
derived from Mind2Web and WebLINX, we improve Step Accuracy by up to +89.5% and
+122.1% respectively for agents trained on mixtures of data from our pipeline,
and human data. When training agents with all available human data from these
benchmarks, agents fail to generalize to diverse real sites, and adding our
data improves their generalization by +149.0% for WebLINX and +156.3% for
Mind2Web. Code will be available at: data-for-agents.github.io. | 6 | 67aae05cb893603a0b4b2480 | null | null |
|
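The three-stage pipeline described in the abstract (task generation, trajectory collection, LLM judging) can be sketched as plain Python. Everything below is a hedged illustration: `llm` and `run_agent` are hypothetical callables, and the prompts are placeholders rather than the authors' actual prompts.

```python
def generate_tasks(llm, site_url, n=3):
    """Stage 1: propose feasible tasks for a website."""
    prompt = (f"Propose {n} realistic user tasks that could be completed "
              f"on {site_url}. One task per line.")
    return [t.strip() for t in llm(prompt).splitlines() if t.strip()]

def collect_trajectories(run_agent, site_url, tasks):
    """Stage 2: an LLM agent attempts each task, producing a trajectory."""
    return [(task, run_agent(site_url, task)) for task in tasks]

def judge(llm, task, trajectory):
    """Stage 3: an LLM judge labels the trajectory as success or failure."""
    prompt = (f"Task: {task}\nTrajectory: {trajectory}\n"
              "Did the agent complete the task? Answer yes or no.")
    return llm(prompt).strip().lower().startswith("yes")

def build_dataset(llm, run_agent, site_urls):
    """Keep only trajectories the judge marks as successful."""
    data = []
    for url in site_urls:
        tasks = generate_tasks(llm, url)
        for task, traj in collect_trajectories(run_agent, url, tasks):
            if judge(llm, task, traj):
                data.append({"site": url, "task": task, "trajectory": traj})
    return data
```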
2025-02-11T12:06:30.185000 | Embodied Red Teaming for Auditing Robotic Foundation Models | 2 | {
"_id": "60f1abe7544c2adfd699860c",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"followerCount": 6280,
"fullname": "AK",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "akhaliq",
"type": "user"
} | false | null | 2411.18676 | [
{
"_id": "67ab837b02329ca8f809ceae",
"hidden": false,
"name": "Sathwik Karnik",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-12T09:16:48.921Z",
"user": {
"_id": "67ab79ead3f12c4887867022",
"avatarUrl": "/avatars/d8e1dd4afabd4c2d04a73712a610cea6.svg",
"fullname": "Sathwik Karnik",
"isPro": false,
"type": "user",
"user": "S-Karnik"
}
},
{
"_id": "67ab837b02329ca8f809ceaf",
"hidden": false,
"name": "Zhang-Wei Hong",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ab837b02329ca8f809ceb0",
"hidden": false,
"name": "Nishant Abhangi",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ab837b02329ca8f809ceb1",
"hidden": false,
"name": "Yen-Chen Lin",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ab837b02329ca8f809ceb2",
"hidden": false,
"name": "Tsun-Hsuan Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ab837b02329ca8f809ceb3",
"hidden": false,
"name": "Christophe Dupuy",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ab837b02329ca8f809ceb4",
"hidden": false,
"name": "Rahul Gupta",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ab837b02329ca8f809ceb5",
"hidden": false,
"name": "Pulkit Agrawal",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2024-11-27T18:57:26 | Embodied Red Teaming for Auditing Robotic Foundation Models | Language-conditioned robot models have the potential to enable robots to
perform a wide range of tasks based on natural language instructions. However,
assessing their safety and effectiveness remains challenging because it is
difficult to test all the different ways a single task can be phrased. Current
benchmarks have two key limitations: they rely on a limited set of
human-generated instructions, missing many challenging cases, and focus only on
task performance without assessing safety, such as avoiding damage. To address
these gaps, we introduce Embodied Red Teaming (ERT), a new evaluation method
that generates diverse and challenging instructions to test these models. ERT
uses automated red teaming techniques with Vision Language Models (VLMs) to
create contextually grounded, difficult instructions. Experimental results show
that state-of-the-art language-conditioned robot models fail or behave unsafely
on ERT-generated instructions, underscoring the shortcomings of current
benchmarks in evaluating real-world performance and safety. Code and videos are
available at: https://s-karnik.github.io/embodied-red-team-project-page. | 1 | 67ab837d02329ca8f809cef0 | null | null |
|
2025-02-11T09:36:24.937000 | CODESIM: Multi-Agent Code Generation and Problem Solving through Simulation-Driven Planning and Debugging | 3 | {
"_id": "63fe51dcc0ec83fda436d558",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63fe51dcc0ec83fda436d558/22wrFA08OxRLIsRVxPts0.jpeg",
"followerCount": 1,
"fullname": "Md. Ashraful Islam",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "ashraful",
"type": "user"
} | true | null | 2502.05664 | [
{
"_id": "67ab56dc0bc5f6a94eb49892",
"hidden": false,
"name": "Md. Ashraful Islam",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-11T14:25:21.564Z",
"user": {
"_id": "63fe51dcc0ec83fda436d558",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63fe51dcc0ec83fda436d558/22wrFA08OxRLIsRVxPts0.jpeg",
"fullname": "Md. Ashraful Islam",
"isPro": false,
"type": "user",
"user": "ashraful"
}
},
{
"_id": "67ab56dc0bc5f6a94eb49893",
"hidden": false,
"name": "Mohammed Eunus Ali",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ab56dc0bc5f6a94eb49894",
"hidden": false,
"name": "Md Rizwan Parvez",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-08T18:43:59 | CODESIM: Multi-Agent Code Generation and Problem Solving through
Simulation-Driven Planning and Debugging | Large Language Models (LLMs) have made significant strides in code generation
and problem solving. Current approaches employ external tool-based iterative
debuggers that use compiler or other tool-based runtime feedback to refine
coarse programs generated by various methods. However, the effectiveness of
these approaches heavily relies on the quality of the initial code generation,
which remains an open challenge. In this paper, we introduce CodeSim, a novel
multi-agent code generation framework that comprehensively addresses the stages
of program synthesis (planning, coding, and debugging) through a human-like
perception approach. Just as humans verify their understanding of an algorithm
through visual simulation, CodeSim uniquely features a method of plan
verification and internal debugging through the step-by-step simulation of
input/output. Extensive experiments across seven challenging competitive
problem-solving and program synthesis benchmarks demonstrate CodeSim's
remarkable code generation capabilities. Our framework achieves new
state-of-the-art pass@1 results (HumanEval 95.1%, MBPP 90.7%, APPS 22%, and
CodeContests 29.1%). Furthermore, our method shows potential for even greater
enhancement when cascaded with external debuggers. To facilitate further
research and development in this area, we have open-sourced our framework in
this link (https://kagnlp.github.io/codesim.github.io/). | 23 | 67ab56de0bc5f6a94eb49918 | null | null |
|
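A minimal, hedged sketch of the simulation-driven plan verification idea described in the abstract: the model is asked to "execute" its plan on a sample input step by step, and the traced output is compared against the expected one. `llm` is a hypothetical text-completion callable; the real framework is multi-agent and considerably more elaborate.

```python
def verify_plan_by_simulation(llm, plan, sample_input, expected_output):
    """Ask the LLM to simulate the plan on a sample input and check the
    final traced output against the expected answer."""
    prompt = (
        "Plan:\n" + plan +
        f"\nSimulate this plan step by step on the input {sample_input!r} "
        "and state the final output alone on the last line."
    )
    trace = llm(prompt)
    simulated = trace.strip().splitlines()[-1].strip()
    return simulated == str(expected_output), trace
```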
2025-02-11T04:30:30.043000 | The Curse of Depth in Large Language Models | 5 | {
"_id": "64245f2c089d5fae56b4549a",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/64245f2c089d5fae56b4549a/qUHFsL9Svwyj5BKpfMtaY.jpeg",
"followerCount": 3,
"fullname": "Pengxiang Li",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "pengxiang",
"type": "user"
} | true | null | 2502.05795 | [
{
"_id": "67ab189a8087b66340398b01",
"hidden": false,
"name": "Wenfang Sun",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-11T14:25:29.711Z",
"user": {
"_id": "643ce831a16fa581f3f826c9",
"avatarUrl": "/avatars/2f2dffb660eee3d3c7029dd7305f5226.svg",
"fullname": "Wenfang Sun",
"isPro": false,
"type": "user",
"user": "lmsdss"
}
},
{
"_id": "67ab189a8087b66340398b02",
"hidden": false,
"name": "Xinyuan Song",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-13T20:36:06.467Z",
"user": {
"_id": "6728a79b2a2e92f37afb900d",
"avatarUrl": "/avatars/92106187ccbe34dfdbfc5d6d6fd63210.svg",
"fullname": "XinyuanSong",
"isPro": false,
"type": "user",
"user": "XinyuanSong"
}
},
{
"_id": "67ab189a8087b66340398b03",
"hidden": false,
"name": "Pengxiang Li",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-11T09:51:15.671Z",
"user": {
"_id": "64245f2c089d5fae56b4549a",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/64245f2c089d5fae56b4549a/qUHFsL9Svwyj5BKpfMtaY.jpeg",
"fullname": "Pengxiang Li",
"isPro": false,
"type": "user",
"user": "pengxiang"
}
},
{
"_id": "67ab189a8087b66340398b04",
"hidden": false,
"name": "Lu Yin",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ab189a8087b66340398b05",
"hidden": false,
"name": "Yefeng Zheng",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ab189a8087b66340398b06",
"hidden": false,
"name": "Shiwei Liu",
"status": "claimed_verified",
"statusLastChangedAt": "2025-03-03T11:15:25.316Z",
"user": {
"_id": "65b04d2291e63920a7898c9e",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/65b04d2291e63920a7898c9e/iUHs235G4bqK-KnH_94ti.jpeg",
"fullname": "Liu",
"isPro": false,
"type": "user",
"user": "Shiweiliuiiiiiii"
}
}
] | 2025-02-09T07:03:36 | The Curse of Depth in Large Language Models | In this paper, we introduce the Curse of Depth, a concept that highlights,
explains, and addresses the recent observation in modern Large Language
Models (LLMs) that nearly half of the layers are less effective than expected.
We first confirm the wide existence of this phenomenon across the most popular
families of LLMs such as Llama, Mistral, DeepSeek, and Qwen. Our analysis,
theoretically and empirically, identifies that the underlying reason for the
ineffectiveness of deep layers in LLMs is the widespread usage of Pre-Layer
Normalization (Pre-LN). While Pre-LN stabilizes the training of Transformer
LLMs, its output variance exponentially grows with the model depth, which
undesirably causes the derivative of the deep Transformer blocks to be an
identity matrix, so these blocks barely contribute to training. To resolve
this training pitfall, we propose LayerNorm Scaling, which scales the output
of the layer normalization by the inverse square root of its depth.
This simple modification mitigates the output variance explosion of deeper
Transformer layers, improving their contribution. Our experimental results,
spanning model sizes from 130M to 1B, demonstrate that LayerNorm Scaling
significantly enhances LLM pre-training performance compared to Pre-LN.
Moreover, this improvement seamlessly carries over to supervised fine-tuning.
All these gains can be attributed to the fact that LayerNorm Scaling enables
deeper layers to contribute more effectively during training. | 35 | 67ab189b8087b66340398b3b | null | null |
|
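LayerNorm Scaling as described in the abstract reduces to a one-line change: multiply each layer normalization's output by the inverse square root of the layer index. A minimal PyTorch sketch with illustrative names, not the authors' code:

```python
import math
import torch.nn as nn

class ScaledLayerNorm(nn.Module):
    """Pre-LN layer norm whose output is scaled by 1/sqrt(layer_depth)."""

    def __init__(self, hidden_size: int, layer_depth: int):
        super().__init__()
        self.ln = nn.LayerNorm(hidden_size)
        # Deeper layers get a smaller scale, curbing the growth of output
        # variance with depth that plain Pre-LN exhibits.
        self.scale = 1.0 / math.sqrt(layer_depth)

    def forward(self, x):
        return self.ln(x) * self.scale

# Usage sketch: in Transformer block l (1-indexed), replace each
# nn.LayerNorm(d_model) with ScaledLayerNorm(d_model, l).
```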
2025-02-11T04:08:55.672000 | Training Language Models for Social Deduction with Multi-Agent Reinforcement Learning | 3 | {
"_id": "63abbf74ad514ca8d14a0548",
"avatarUrl": "/avatars/b1357b73b8f9a8ff9908710ad64154ef.svg",
"followerCount": 3,
"fullname": "Bidipta Sarkar",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "bidiptas",
"type": "user"
} | true | null | 2502.06060 | [
{
"_id": "67ab1314385da1f07cda1271",
"hidden": false,
"name": "Bidipta Sarkar",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-11T09:51:17.933Z",
"user": {
"_id": "63abbf74ad514ca8d14a0548",
"avatarUrl": "/avatars/b1357b73b8f9a8ff9908710ad64154ef.svg",
"fullname": "Bidipta Sarkar",
"isPro": false,
"type": "user",
"user": "bidiptas"
}
},
{
"_id": "67ab1314385da1f07cda1272",
"hidden": false,
"name": "Warren Xia",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ab1314385da1f07cda1273",
"hidden": false,
"name": "C. Karen Liu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67ab1314385da1f07cda1274",
"hidden": false,
"name": "Dorsa Sadigh",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-09T22:44:45 | Training Language Models for Social Deduction with Multi-Agent
Reinforcement Learning | Communicating in natural language is a powerful tool in multi-agent settings,
as it enables independent agents to share information in partially observable
settings and allows zero-shot coordination with humans. However, most prior
works are limited as they either rely on training with large amounts of human
demonstrations or lack the ability to generate natural and useful communication
strategies. In this work, we train language models to have productive
discussions about their environment in natural language without any human
demonstrations. We decompose the communication problem into listening and
speaking. Our key idea is to leverage the agent's goal to predict useful
information about the world as a dense reward signal that guides communication.
Specifically, we improve a model's listening skills by training it to predict
information about the environment based on discussions, and we simultaneously
improve a model's speaking skills with multi-agent reinforcement learning by
rewarding messages based on their influence on other agents. To investigate the
role and necessity of communication in complex social settings, we study an
embodied social deduction game based on Among Us, where the key question to
answer is the identity of an adversarial imposter. We analyze emergent
behaviors due to our technique, such as accusing suspects and providing
evidence, and find that it enables strong discussions, doubling the win rates
compared to standard RL. We release our code and models at
https://socialdeductionllm.github.io/ | 34 | 67ab1315385da1f07cda12a5 | null | null |
|
2025-02-11T03:03:12.135000 | SynthDetoxM: Modern LLMs are Few-Shot Parallel Detoxification Data Annotators | 2 | {
"_id": "61ade264f602880813dbe10b",
"avatarUrl": "/avatars/a92dea7d853bbabbf60b351c207b6875.svg",
"followerCount": 3,
"fullname": "Daniil Moskovskiy",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "etomoscow",
"type": "user"
} | true | null | 2502.06394 | [
{
"_id": "67aafead3711ca5b760f324c",
"hidden": false,
"name": "Daniil Moskovskiy",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-11T07:54:17.448Z",
"user": {
"_id": "61ade264f602880813dbe10b",
"avatarUrl": "/avatars/a92dea7d853bbabbf60b351c207b6875.svg",
"fullname": "Daniil Moskovskiy",
"isPro": false,
"type": "user",
"user": "etomoscow"
}
},
{
"_id": "67aafead3711ca5b760f324d",
"hidden": false,
"name": "Nikita Sushko",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-11T07:54:21.453Z",
"user": {
"_id": "634c72e6fe1bfa967d6c2b5c",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/634c72e6fe1bfa967d6c2b5c/WFWIAlWl-FsiJRyGxQTTx.jpeg",
"fullname": "Nikita Sushko",
"isPro": false,
"type": "user",
"user": "chameleon-lizard"
}
},
{
"_id": "67aafead3711ca5b760f324e",
"hidden": false,
"name": "Sergey Pletenev",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-11T09:59:47.063Z",
"user": {
"_id": "5dfa8e07da6d0311fd3d5430",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1651090418656-5dfa8e07da6d0311fd3d5430.png",
"fullname": "Sergey Pletenev",
"isPro": false,
"type": "user",
"user": "memyprokotow"
}
},
{
"_id": "67aafead3711ca5b760f324f",
"hidden": false,
"name": "Elena Tutubalina",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-11T09:59:50.003Z",
"user": {
"_id": "662f8d645c4db70c77a203b0",
"avatarUrl": "/avatars/72f9a3c39b3ba5114388d16a35524835.svg",
"fullname": "Elena Tutubalina",
"isPro": false,
"type": "user",
"user": "tlenusik"
}
},
{
"_id": "67aafead3711ca5b760f3250",
"hidden": false,
"name": "Alexander Panchenko",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-11T15:54:34.688Z",
"user": {
"_id": "605473729d7c1d4d81b7e52b",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1662046050710-605473729d7c1d4d81b7e52b.jpeg",
"fullname": "Alexander Panchenko",
"isPro": false,
"type": "user",
"user": "apanc"
}
}
] | 2025-02-10T12:30:25 | SynthDetoxM: Modern LLMs are Few-Shot Parallel Detoxification Data
Annotators | Existing approaches to multilingual text detoxification are hampered by the
scarcity of parallel multilingual datasets. In this work, we introduce a
pipeline for the generation of multilingual parallel detoxification data. We
also introduce SynthDetoxM, a manually collected and synthetically generated
multilingual parallel text detoxification dataset comprising 16,000
high-quality detoxification sentence pairs across German, French, Spanish and
Russian. The data was sourced from different toxicity evaluation datasets and
then rewritten with nine modern open-source LLMs in a few-shot setting. Our
experiments demonstrate that models trained on the produced synthetic datasets
have superior performance to those trained on the human-annotated
MultiParaDetox dataset, even in a data-limited setting. Models trained on
SynthDetoxM outperform all evaluated LLMs in a few-shot setting. We release our
dataset and code to help further research in multilingual text detoxification. | 86 | 67aafeae3711ca5b760f3280 | null | null |
|
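The few-shot rewriting step at the core of the data pipeline can be sketched as a simple prompt builder; the wording below is an illustrative placeholder, not the authors' prompt.

```python
def build_detox_prompt(few_shot_pairs, toxic_sentence):
    """Assemble a few-shot prompt asking an LLM to rewrite a toxic
    sentence into a neutral paraphrase that preserves its meaning."""
    lines = ["Rewrite each toxic sentence as a polite paraphrase that "
             "keeps the original meaning."]
    for toxic, neutral in few_shot_pairs:
        lines += [f"Toxic: {toxic}", f"Neutral: {neutral}"]
    lines += [f"Toxic: {toxic_sentence}", "Neutral:"]
    return "\n".join(lines)
```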
2025-02-11T02:46:33.870000 | DreamDPO: Aligning Text-to-3D Generation with Human Preferences via Direct Preference Optimization | 2 | {
"_id": "6425318d175bd2952281065e",
"avatarUrl": "/avatars/37deb6ceb1552dece43a1c8c13c1c871.svg",
"followerCount": 1,
"fullname": "ZhenglinZhou",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "zhenglin",
"type": "user"
} | true | [
"https://cdn-uploads.huggingface.co/production/uploads/6425318d175bd2952281065e/R7cMLIsmYovAMtL1vhsDn.mp4"
] | 2502.04370 | [
{
"_id": "67aafd90141fac22732a79b3",
"hidden": false,
"name": "Zhenglin Zhou",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-11T15:53:27.764Z",
"user": {
"_id": "6425318d175bd2952281065e",
"avatarUrl": "/avatars/37deb6ceb1552dece43a1c8c13c1c871.svg",
"fullname": "ZhenglinZhou",
"isPro": false,
"type": "user",
"user": "zhenglin"
}
},
{
"_id": "67aafd90141fac22732a79b4",
"hidden": false,
"name": "Xiaobo Xia",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-11T15:53:52.391Z",
"user": {
"_id": "66c48f944511077b9ff5ce9d",
"avatarUrl": "/avatars/a4977246bc951e9da0cb2301bedd8249.svg",
"fullname": "Xiaobo Xia",
"isPro": false,
"type": "user",
"user": "XiaoboXia1997"
}
},
{
"_id": "67aafd90141fac22732a79b5",
"hidden": false,
"name": "Fan Ma",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aafd90141fac22732a79b6",
"hidden": false,
"name": "Hehe Fan",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-11T15:54:14.222Z",
"user": {
"_id": "64ad04020fb9b20dbabbd30e",
"avatarUrl": "/avatars/a6bae4a3a4bcd6b54c33860fe14c7923.svg",
"fullname": "Hehe Fan",
"isPro": false,
"type": "user",
"user": "hehefan"
}
},
{
"_id": "67aafd90141fac22732a79b7",
"hidden": false,
"name": "Yi Yang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aafd90141fac22732a79b8",
"hidden": false,
"name": "Tat-Seng Chua",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-11T15:54:04.147Z",
"user": {
"_id": "6570ae84c4993b8fb96f41a8",
"avatarUrl": "/avatars/21f7d79d46ac4df0ecff8eca7678b33f.svg",
"fullname": "Tat-Seng Chua",
"isPro": false,
"type": "user",
"user": "chuats"
}
}
] | 2025-02-05T11:03:08 | DreamDPO: Aligning Text-to-3D Generation with Human Preferences via
Direct Preference Optimization | Text-to-3D generation automates 3D content creation from textual
descriptions, which offers transformative potential across various fields.
However, existing methods often struggle to align generated content with human
preferences, limiting their applicability and flexibility. To address these
limitations, in this paper, we propose DreamDPO, an optimization-based
framework that integrates human preferences into the 3D generation process,
through direct preference optimization. Practically, DreamDPO first constructs
pairwise examples, then compares their alignment with human preferences using
reward or large multimodal models, and lastly optimizes the 3D representation
with a preference-driven loss function. By leveraging pairwise comparison to
reflect preferences, DreamDPO reduces reliance on precise pointwise quality
evaluations while enabling fine-grained controllability through
preference-guided optimization. Experiments demonstrate that DreamDPO achieves
competitive results, and provides higher-quality and more controllable 3D
content compared to existing methods. The code and models will be open-sourced. | 6 | 67aafd94141fac22732a7adc | null | null |
|
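The preference-driven loss the abstract describes can be illustrated with a DPO-style logistic loss on the score gap between a preferred and a rejected candidate. This is a hedged sketch, not the paper's exact objective; `render` and `reward` in the usage comment are hypothetical and would need to be differentiable for gradients to reach the 3D representation.

```python
import torch
import torch.nn.functional as F

def preference_loss(score_win: torch.Tensor, score_lose: torch.Tensor,
                    beta: float = 0.1) -> torch.Tensor:
    """Logistic (DPO-style) loss on the score gap: minimizing it pushes
    the generator to make the preferred sample win by a larger margin."""
    return -F.logsigmoid(beta * (score_win - score_lose)).mean()

# Illustrative usage with hypothetical, differentiable render/reward fns:
#   img_a, img_b = render(theta, view_a), render(theta, view_b)
#   loss = preference_loss(reward(img_a), reward(img_b))
#   loss.backward()   # gradients flow into the 3D representation theta
```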
2025-02-11T02:09:27.778000 | Show-o Turbo: Towards Accelerated Unified Multimodal Understanding and Generation | 2 | {
"_id": "64bba541da140e461924dfed",
"avatarUrl": "/avatars/367993765b0ca3734b2b100db33ed787.svg",
"followerCount": 2,
"fullname": "zhijie deng",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "zhijie3",
"type": "user"
} | true | null | 2502.05415 | [
{
"_id": "67aaea0a0acaa007694aed73",
"hidden": false,
"name": "Chenkai Xu",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-11T07:54:28.861Z",
"user": {
"_id": "65708920806dee337da0eef5",
"avatarUrl": "/avatars/945e328dedc8e1e3111f48c344ad5b03.svg",
"fullname": "xuchenkai",
"isPro": false,
"type": "user",
"user": "UnhurriedDawn"
}
},
{
"_id": "67aaea0a0acaa007694aed74",
"hidden": false,
"name": "Xu Wang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-11T07:54:26.432Z",
"user": {
"_id": "6644548a3a16452261cdb173",
"avatarUrl": "/avatars/4643db904204e3a60202a29e8c884139.svg",
"fullname": "wangxu",
"isPro": false,
"type": "user",
"user": "asunalove"
}
},
{
"_id": "67aaea0a0acaa007694aed75",
"hidden": false,
"name": "Zhenyi Liao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aaea0a0acaa007694aed76",
"hidden": false,
"name": "Yishun Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aaea0a0acaa007694aed77",
"hidden": false,
"name": "Tianqi Hou",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aaea0a0acaa007694aed78",
"hidden": false,
"name": "Zhijie Deng",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-11T07:54:24.089Z",
"user": {
"_id": "64bba541da140e461924dfed",
"avatarUrl": "/avatars/367993765b0ca3734b2b100db33ed787.svg",
"fullname": "zhijie deng",
"isPro": false,
"type": "user",
"user": "zhijie3"
}
}
] | 2025-02-08T02:52:25 | Show-o Turbo: Towards Accelerated Unified Multimodal Understanding and
Generation | There has been increasing research interest in building unified multimodal
understanding and generation models, among which Show-o stands as a notable
representative, demonstrating great promise for both text-to-image and
image-to-text generation. The inference of Show-o involves progressively
denoising image tokens and autoregressively decoding text tokens, and hence,
unfortunately, suffers from inefficiency issues from both sides. This paper
introduces Show-o Turbo to bridge the gap. We first identify a unified
denoising perspective for the generation of images and text in Show-o based on
the parallel decoding of text tokens. We then propose to extend consistency
distillation (CD), a qualified approach for shortening the denoising process of
diffusion models, to the multimodal denoising trajectories of Show-o. We
introduce a trajectory segmentation strategy and a curriculum learning
procedure to improve the training convergence. Empirically, in text-to-image
generation, Show-o Turbo displays a GenEval score of 0.625 at 4 sampling steps
without using classifier-free guidance (CFG), outperforming that of the
original Show-o with 8 steps and CFG; in image-to-text generation, Show-o Turbo
exhibits a 1.5x speedup without significantly sacrificing performance. The code
is available at https://github.com/zhijie-group/Show-o-Turbo. | 22 | 67aaea100acaa007694aeea5 | null | null |
|
2025-02-11T01:33:35.134000 | MetaChain: A Fully-Automated and Zero-Code Framework for LLM Agents | 2 | {
"_id": "643b751cc5f633a7fa84b325",
"avatarUrl": "/avatars/a094b856cf3d51eb78d16a14361def62.svg",
"followerCount": 12,
"fullname": "Tang",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "Jiabin99",
"type": "user"
} | false | null | 2502.05957 | [
{
"_id": "67aaecec114e64d6e15e7f41",
"hidden": false,
"name": "Jiabin Tang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aaecec114e64d6e15e7f42",
"hidden": false,
"name": "Tianyu Fan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aaecec114e64d6e15e7f43",
"hidden": false,
"name": "Chao Huang",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-09T16:53:56 | MetaChain: A Fully-Automated and Zero-Code Framework for LLM Agents | Large Language Model (LLM) Agents have demonstrated remarkable capabilities
in task automation and intelligent decision-making, driving the widespread
adoption of agent development frameworks such as LangChain and AutoGen.
However, these frameworks predominantly serve developers with extensive
technical expertise, a significant limitation considering that only 0.03% of
the global population possesses the necessary programming skills. This stark
accessibility gap raises a fundamental question: Can we enable everyone,
regardless of technical background, to build their own LLM agents using natural
language alone? To address this challenge, we introduce MetaChain, a
fully-automated and highly self-developing framework that enables users to
create and deploy LLM agents through natural language alone. Operating as an
autonomous Agent Operating System, MetaChain comprises four key components: i)
Agentic System Utilities, ii) LLM-powered Actionable Engine, iii) Self-Managing
File System, and iv) Self-Play Agent Customization module. This lightweight yet
powerful system enables efficient and dynamic creation and modification of
tools, agents, and workflows without coding requirements or manual
intervention. Beyond its code-free agent development capabilities, MetaChain
also serves as a versatile multi-agent system for General AI Assistants.
Comprehensive evaluations on the GAIA benchmark demonstrate MetaChain's
effectiveness in generalist multi-agent tasks, surpassing existing
state-of-the-art methods. Furthermore, MetaChain's Retrieval-Augmented
Generation (RAG)-related capabilities have shown consistently superior
performance compared to many alternative LLM-based solutions. | 16 | 67aaecef114e64d6e15e802c | null | null |
|
2025-02-11T01:07:50.116000 | Matryoshka Quantization | 4 | {
"_id": "60f1abe7544c2adfd699860c",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"followerCount": 6280,
"fullname": "AK",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "akhaliq",
"type": "user"
} | false | null | 2502.06786 | [
{
"_id": "67aae91b83b1182df7c0cf54",
"hidden": false,
"name": "Pranav Nair",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aae91b83b1182df7c0cf55",
"hidden": false,
"name": "Puranjay Datta",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aae91b83b1182df7c0cf56",
"hidden": false,
"name": "Jeff Dean",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aae91b83b1182df7c0cf57",
"hidden": false,
"name": "Prateek Jain",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aae91b83b1182df7c0cf58",
"hidden": false,
"name": "Aditya Kusupati",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-10T18:59:10 | Matryoshka Quantization | Quantizing model weights is critical for reducing the communication and
inference costs of large models. However, quantizing models -- especially to
low precisions like int4 or int2 -- requires a trade-off in model quality;
int2, in particular, is known to severely degrade model quality. Consequently,
practitioners are often forced to maintain multiple models with different
quantization levels or serve a single model that best satisfies the
quality-latency trade-off. On the other hand, integer data types, such as int8,
inherently possess a nested (Matryoshka) structure where smaller bit-width
integers, like int4 or int2, are nested within the most significant bits. This
paper proposes Matryoshka Quantization (MatQuant), a novel multi-scale
quantization technique that addresses the challenge of needing multiple
quantized models. It allows training and maintaining just one model, which can
then be served at different precision levels. Furthermore, due to the
co-training and co-distillation regularization provided by MatQuant, the int2
precision models extracted by MatQuant can be up to 10% more accurate than
standard int2 quantization (using techniques like QAT or OmniQuant). This
represents significant progress in model quantization, demonstrated by the fact
that, with the same recipe, an int2 FFN-quantized Gemma-2 9B model is more
accurate than an int8 FFN-quantized Gemma-2 2B model. | 29 | 67aae91d83b1182df7c0cff6 | null | null |
|
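The nested (Matryoshka) structure of integer types that the abstract refers to is easy to see in code: an int4 or int2 weight is just the most significant bits of the int8 weight. A small NumPy illustration of the slicing alone, without MatQuant's co-training and co-distillation:

```python
import numpy as np

def slice_bits(w_int8: np.ndarray, bits: int) -> np.ndarray:
    """Keep the top `bits` bits of a signed 8-bit weight tensor."""
    shift = 8 - bits
    # Arithmetic right shift preserves sign; results live in
    # [-2**(bits-1), 2**(bits-1) - 1]. To dequantize, rescale by
    # 2**shift before applying the original quantization scale.
    return (w_int8.astype(np.int8) >> shift).astype(np.int8)

w8 = np.array([-128, -37, 0, 55, 127], dtype=np.int8)
print(slice_bits(w8, 4))   # int4 view: [-8, -3, 0, 3, 7]
print(slice_bits(w8, 2))   # int2 view: [-2, -1, 0, 0, 1]
```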
2025-02-11T01:00:25.383000 | Lumina-Video: Efficient and Flexible Video Generation with Multi-scale Next-DiT | 2 | {
"_id": "60f1abe7544c2adfd699860c",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"followerCount": 6280,
"fullname": "AK",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "akhaliq",
"type": "user"
} | false | null | 2502.06782 | [
{
"_id": "67aae76c71a9983f50e134ef",
"hidden": false,
"name": "Dongyang Liu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aae76c71a9983f50e134f0",
"hidden": false,
"name": "Shicheng Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aae76c71a9983f50e134f1",
"hidden": false,
"name": "Yutong Liu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aae76c71a9983f50e134f2",
"hidden": false,
"name": "Zhen Li",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-26T15:37:44.523Z",
"user": {
"_id": "6285a9133ab6642179158944",
"avatarUrl": "/avatars/6e10fa07c94141fcdbe0cab02bb731ca.svg",
"fullname": "Zhen Li",
"isPro": false,
"type": "user",
"user": "Paper99"
}
},
{
"_id": "67aae76c71a9983f50e134f3",
"hidden": false,
"name": "Kai Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aae76c71a9983f50e134f4",
"hidden": false,
"name": "Xinyue Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aae76c71a9983f50e134f5",
"hidden": false,
"name": "Qi Qin",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aae76c71a9983f50e134f6",
"hidden": false,
"name": "Yufei Liu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aae76c71a9983f50e134f7",
"hidden": false,
"name": "Yi Xin",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aae76c71a9983f50e134f8",
"hidden": false,
"name": "Zhongyu Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aae76c71a9983f50e134f9",
"hidden": false,
"name": "Bin Fu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aae76c71a9983f50e134fa",
"hidden": false,
"name": "Chenyang Si",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aae76c71a9983f50e134fb",
"hidden": false,
"name": "Yuewen Cao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aae76c71a9983f50e134fc",
"hidden": false,
"name": "Conghui He",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aae76c71a9983f50e134fd",
"hidden": false,
"name": "Ziwei Liu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aae76c71a9983f50e134fe",
"hidden": false,
"name": "Yu Qiao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aae76c71a9983f50e134ff",
"hidden": false,
"name": "Qibin Hou",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aae76c71a9983f50e13500",
"hidden": false,
"name": "Hongsheng Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aae76c71a9983f50e13501",
"hidden": false,
"name": "Peng Gao",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-10T18:58:11 | Lumina-Video: Efficient and Flexible Video Generation with Multi-scale
Next-DiT | Recent advancements have established Diffusion Transformers (DiTs) as a
dominant framework in generative modeling. Building on this success,
Lumina-Next achieves exceptional performance in the generation of
photorealistic images with Next-DiT. However, its potential for video
generation remains largely untapped, with significant challenges in modeling
the spatiotemporal complexity inherent to video data. To address this, we
introduce Lumina-Video, a framework that leverages the strengths of Next-DiT
while introducing tailored solutions for video synthesis. Lumina-Video
incorporates a Multi-scale Next-DiT architecture, which jointly learns multiple
patchifications to enhance both efficiency and flexibility. By incorporating
the motion score as an explicit condition, Lumina-Video also enables direct
control of generated videos' dynamic degree. Combined with a progressive
training scheme with increasingly higher resolution and FPS, and a multi-source
training scheme with mixed natural and synthetic data, Lumina-Video achieves
remarkable aesthetic quality and motion smoothness at high training and
inference efficiency. We additionally propose Lumina-V2A, a video-to-audio
model based on Next-DiT, to create synchronized sounds for generated videos.
Codes are released at https://www.github.com/Alpha-VLLM/Lumina-Video. | 12 | 67aae76e71a9983f50e1357d | null | null |
|
2025-02-11T00:55:33.866000 | History-Guided Video Diffusion | 2 | {
"_id": "60f1abe7544c2adfd699860c",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"followerCount": 6280,
"fullname": "AK",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "akhaliq",
"type": "user"
} | false | null | 2502.06764 | [
{
"_id": "67aac6052c02e43558b6b4b0",
"hidden": false,
"name": "Kiwhan Song",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-12T09:16:57.802Z",
"user": {
"_id": "6613663bcfbba5e761a69531",
"avatarUrl": "/avatars/2baf348371d87a2f9dd4b9c56f1483a9.svg",
"fullname": "Kiwhan Song",
"isPro": true,
"type": "user",
"user": "kiwhansong"
}
},
{
"_id": "67aac6052c02e43558b6b4b1",
"hidden": false,
"name": "Boyuan Chen",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-18T09:34:45.325Z",
"user": {
"_id": "646935e05d70156639500beb",
"avatarUrl": "/avatars/1373ef2a40145aa505a03bfbae37c95a.svg",
"fullname": "Boyuan Chen",
"isPro": false,
"type": "user",
"user": "buoyancy99"
}
},
{
"_id": "67aac6052c02e43558b6b4b2",
"hidden": false,
"name": "Max Simchowitz",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aac6052c02e43558b6b4b3",
"hidden": false,
"name": "Yilun Du",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aac6052c02e43558b6b4b4",
"hidden": false,
"name": "Russ Tedrake",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aac6052c02e43558b6b4b5",
"hidden": false,
"name": "Vincent Sitzmann",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-10T18:44:25 | History-Guided Video Diffusion | Classifier-free guidance (CFG) is a key technique for improving conditional
generation in diffusion models, enabling more accurate control while enhancing
sample quality. It is natural to extend this technique to video diffusion,
which generates video conditioned on a variable number of context frames,
collectively referred to as history. However, we find two key challenges to
guiding with variable-length history: architectures that only support
fixed-size conditioning, and the empirical observation that CFG-style history
dropout performs poorly. To address this, we propose the Diffusion Forcing
Transformer (DFoT), a video diffusion architecture and theoretically grounded
training objective that jointly enable conditioning on a flexible number of
history frames. We then introduce History Guidance, a family of guidance
methods uniquely enabled by DFoT. We show that its simplest form, vanilla
history guidance, already significantly improves video generation quality and
temporal consistency. A more advanced method, history guidance across time and
frequency, further enhances motion dynamics, enables compositional
generalization to out-of-distribution history, and can stably roll out
extremely long videos. Website: https://boyuan.space/history-guidance | 11 | 67aac6072c02e43558b6b543 | null | null |
|
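Vanilla history guidance, the simplest member of the family the abstract introduces, follows the familiar classifier-free-guidance recipe with the history frames as the condition. A hedged sketch with a hypothetical `model` signature:

```python
def history_guided_eps(model, x_t, t, history, w=1.5):
    """Classifier-free-style guidance over history frames:
    eps = eps_uncond + w * (eps_cond - eps_uncond). With w > 1 the
    history's influence on the denoising direction is amplified."""
    eps_cond = model(x_t, t, history=history)   # conditioned on history
    eps_uncond = model(x_t, t, history=None)    # history dropped out
    return eps_uncond + w * (eps_cond - eps_uncond)
```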
2025-02-11T00:46:11.168000 | CustomVideoX: 3D Reference Attention Driven Dynamic Adaptation for Zero-Shot Customized Video Diffusion Transformers | 2 | {
"_id": "60f1abe7544c2adfd699860c",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"followerCount": 6280,
"fullname": "AK",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "akhaliq",
"type": "user"
} | false | null | 2502.06527 | [
{
"_id": "67aae4128d478dcb4b39a097",
"hidden": false,
"name": "D. She",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aae4128d478dcb4b39a098",
"hidden": false,
"name": "Mushui Liu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aae4128d478dcb4b39a099",
"hidden": false,
"name": "Jingxuan Pang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aae4128d478dcb4b39a09a",
"hidden": false,
"name": "Jin Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aae4128d478dcb4b39a09b",
"hidden": false,
"name": "Zhen Yang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aae4128d478dcb4b39a09c",
"hidden": false,
"name": "Wanggui He",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aae4128d478dcb4b39a09d",
"hidden": false,
"name": "Guanghao Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aae4128d478dcb4b39a09e",
"hidden": false,
"name": "Yi Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aae4128d478dcb4b39a09f",
"hidden": false,
"name": "Qihan Huang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aae4128d478dcb4b39a0a0",
"hidden": false,
"name": "Haobin Tang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aae4128d478dcb4b39a0a1",
"hidden": false,
"name": "Yunlong Yu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aae4128d478dcb4b39a0a2",
"hidden": false,
"name": "Siming Fu",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-10T14:50:32 | CustomVideoX: 3D Reference Attention Driven Dynamic Adaptation for
Zero-Shot Customized Video Diffusion Transformers | Customized generation has achieved significant progress in image synthesis,
yet personalized video generation remains challenging due to temporal
inconsistencies and quality degradation. In this paper, we introduce
CustomVideoX, an innovative framework leveraging the video diffusion
transformer for personalized video generation from a reference image.
CustomVideoX capitalizes on pre-trained video networks by exclusively training
the LoRA parameters to extract reference features, ensuring both efficiency and
adaptability. To facilitate seamless interaction between the reference image
and video content, we propose 3D Reference Attention, which enables direct and
simultaneous engagement of reference image features with all video frames
across spatial and temporal dimensions. To mitigate the excessive influence of
reference image features and textual guidance on generated video content during
inference, we implement the Time-Aware Reference Attention Bias (TAB) strategy,
dynamically modulating reference bias over different time steps. Additionally,
we introduce the Entity Region-Aware Enhancement (ERAE) module, aligning highly
activated regions of key entity tokens with reference feature injection by
adjusting attention bias. To thoroughly evaluate personalized video generation,
we establish a new benchmark, VideoBench, comprising over 50 objects and 100
prompts for extensive assessment. Experimental results show that CustomVideoX
significantly outperforms existing methods in terms of video consistency and
quality. | 10 | 67aae4178d478dcb4b39a1e7 | null | null |
|
2025-02-11T00:36:11.270000 | Can 1B LLM Surpass 405B LLM? Rethinking Compute-Optimal Test-Time Scaling | 6 | {
"_id": "60f1abe7544c2adfd699860c",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"followerCount": 6280,
"fullname": "AK",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "akhaliq",
"type": "user"
} | false | null | 2502.06703 | [
{
"_id": "67aabf93c0f8648f68c68ce4",
"hidden": false,
"name": "Runze Liu",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-11T07:55:22.940Z",
"user": {
"_id": "667187ba9ab144eb3ac43a1b",
"avatarUrl": "/avatars/db5558aa1c5160b9aee8b58573271959.svg",
"fullname": "Runze Liu",
"isPro": false,
"type": "user",
"user": "RyanLiu112"
}
},
{
"_id": "67aabf93c0f8648f68c68ce5",
"hidden": false,
"name": "Junqi Gao",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-11T15:54:46.128Z",
"user": {
"_id": "67ab05fe4c6ca2d5db4c0c52",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/no-auth/QpGUNDkeuKjX71s2GXlXF.png",
"fullname": "Junqi Gao",
"isPro": false,
"type": "user",
"user": "ChetKao"
}
},
{
"_id": "67aabf93c0f8648f68c68ce6",
"hidden": false,
"name": "Jian Zhao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aabf93c0f8648f68c68ce7",
"hidden": false,
"name": "Kaiyan Zhang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-11T07:55:18.725Z",
"user": {
"_id": "60bc94cd85a3ab33829b6211",
"avatarUrl": "/avatars/b57d36c7577fbbb42ea5b963eef4144a.svg",
"fullname": "Kaiyan Zhang",
"isPro": false,
"type": "user",
"user": "iseesaw"
}
},
{
"_id": "67aabf93c0f8648f68c68ce8",
"hidden": false,
"name": "Xiu Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aabf93c0f8648f68c68ce9",
"hidden": false,
"name": "Biqing Qi",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-11T15:55:23.328Z",
"user": {
"_id": "645d9c3058f9ee315148116d",
"avatarUrl": "/avatars/165e18f27b5a50738bf1d22857118478.svg",
"fullname": "Biqing Qi",
"isPro": false,
"type": "user",
"user": "jackqi7"
}
},
{
"_id": "67aabf93c0f8648f68c68cea",
"hidden": false,
"name": "Wanli Ouyang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aabf93c0f8648f68c68ceb",
"hidden": false,
"name": "Bowen Zhou",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-11T15:55:11.315Z",
"user": {
"_id": "669f614b59adf5b56e05bce3",
"avatarUrl": "/avatars/ffd4189efbceb0e63a03db273065a44b.svg",
"fullname": "BowenZhou",
"isPro": false,
"type": "user",
"user": "bowenZhou"
}
}
] | 2025-02-10T17:30:23 | Can 1B LLM Surpass 405B LLM? Rethinking Compute-Optimal Test-Time
Scaling | Test-Time Scaling (TTS) is an important method for improving the performance
of Large Language Models (LLMs) by using additional computation during the
inference phase. However, current studies do not systematically analyze how
policy models, Process Reward Models (PRMs), and problem difficulty influence
TTS. This lack of analysis limits the understanding and practical use of TTS
methods. In this paper, we focus on two core questions: (1) What is the optimal
approach to scale test-time computation across different policy models, PRMs,
and problem difficulty levels? (2) To what extent can extended computation
improve the performance of LLMs on complex tasks, and can smaller language
models outperform larger ones through this approach? Through comprehensive
experiments on MATH-500 and the challenging AIME24 tasks, we make the following
observations: (1) The compute-optimal TTS strategy is highly dependent on the
choice of policy model, PRM, and problem difficulty. (2) With our
compute-optimal TTS strategy, extremely small policy models can outperform
larger models. For example, a 1B LLM can exceed a 405B LLM on MATH-500.
Moreover, on both MATH-500 and AIME24, a 0.5B LLM outperforms GPT-4o, a 3B LLM
surpasses a 405B LLM, and a 7B LLM beats o1 and DeepSeek-R1, all with higher
inference efficiency. These findings show the significance of adapting TTS
strategies to the specific characteristics of each task and model and indicate
that TTS is a promising approach for enhancing the reasoning abilities of LLMs. | 141 | 67aabf94c0f8648f68c68d19 | null | null |
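The general TTS recipe behind these results pairs a small policy model with a PRM-guided search over sampled solutions. As a minimal illustration (not the paper's exact algorithm), the sketch below scores N candidates with a process reward model and keeps the best; `policy_sample` and `prm_score` are hypothetical stand-ins for any generator and verifier.

```python
import math
import random

def best_of_n(policy_sample, prm_score, question, n=16):
    """Minimal best-of-N test-time scaling: sample n candidate solutions
    and keep the one a process reward model prefers. `policy_sample` and
    `prm_score` are hypothetical callables standing in for an LLM
    generator and a PRM verifier."""
    candidates = [policy_sample(question) for _ in range(n)]
    # PRMs typically score per reasoning step; aggregate step scores per
    # solution (minimum is one common choice).
    def aggregate(steps):
        return min(steps) if steps else -math.inf
    scored = [(aggregate(prm_score(question, c)), c) for c in candidates]
    return max(scored, key=lambda t: t[0])[1]

# Toy usage with random stand-ins for the policy and the PRM.
random.seed(0)
toy_policy = lambda q: f"solution-{random.randint(0, 99)}"
toy_prm = lambda q, c: [random.random() for _ in range(3)]  # 3 "steps"
print(best_of_n(toy_policy, toy_prm, "What is 2+2?", n=4))
```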
|
2025-02-10T23:18:11.727000 | Exploring the Limit of Outcome Reward for Learning Mathematical Reasoning | 6 | {
"_id": "6601196cc91ba4c08ad6e270",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6601196cc91ba4c08ad6e270/X2YPNzUOQXBz5Gv-xR9LW.jpeg",
"followerCount": 2,
"fullname": "yuzhe gu",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "vanilla1116",
"type": "user"
} | true | null | 2502.06781 | [
{
"_id": "67aacd7e078cdf445284f9f6",
"hidden": false,
"name": "Chengqi Lyu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aacd7e078cdf445284f9f7",
"hidden": false,
"name": "Songyang Gao",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-11T15:56:04.271Z",
"user": {
"_id": "650ab54e23196fb2d86b486b",
"avatarUrl": "/avatars/e0506393589695b553ec9ee3fe99b93a.svg",
"fullname": "SongYang Gao",
"isPro": false,
"type": "user",
"user": "Wizardcoast"
}
},
{
"_id": "67aacd7e078cdf445284f9f8",
"hidden": false,
"name": "Yuzhe Gu",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-18T16:40:46.962Z",
"user": {
"_id": "6601196cc91ba4c08ad6e270",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6601196cc91ba4c08ad6e270/X2YPNzUOQXBz5Gv-xR9LW.jpeg",
"fullname": "yuzhe gu",
"isPro": false,
"type": "user",
"user": "vanilla1116"
}
},
{
"_id": "67aacd7e078cdf445284f9f9",
"hidden": false,
"name": "Wenwei Zhang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-11T07:54:40.279Z",
"user": {
"_id": "64e8505321540e1da3226b54",
"avatarUrl": "/avatars/18958b8406d1ce492b54c1c839f18c54.svg",
"fullname": "Wenwei Zhang",
"isPro": false,
"type": "user",
"user": "ZwwWayne"
}
},
{
"_id": "67aacd7e078cdf445284f9fa",
"hidden": false,
"name": "Jianfei Gao",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-11T15:56:31.173Z",
"user": {
"_id": "64070c5c4dc5f2846c925e93",
"avatarUrl": "/avatars/ac2d7c1cd4ecccd6a88b85767c963ec7.svg",
"fullname": "Gao Jianfei",
"isPro": false,
"type": "user",
"user": "pppppM"
}
},
{
"_id": "67aacd7e078cdf445284f9fb",
"hidden": false,
"name": "Kuikun Liu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aacd7e078cdf445284f9fc",
"hidden": false,
"name": "Ziyi Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aacd7e078cdf445284f9fd",
"hidden": false,
"name": "Shuaibin Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aacd7e078cdf445284f9fe",
"hidden": false,
"name": "Qian Zhao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aacd7e078cdf445284f9ff",
"hidden": false,
"name": "Haian Huang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aacd7e078cdf445284fa00",
"hidden": false,
"name": "Weihan Cao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aacd7e078cdf445284fa01",
"hidden": false,
"name": "Jiangning Liu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aacd7e078cdf445284fa02",
"hidden": false,
"name": "Hongwei Liu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aacd7e078cdf445284fa03",
"hidden": false,
"name": "Junnan Liu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aacd7e078cdf445284fa04",
"hidden": false,
"name": "Songyang Zhang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-11T07:54:37.733Z",
"user": {
"_id": "630716d11801ecc7d2595021",
"avatarUrl": "/avatars/2d36a880ce4a3cf7efc5ff3987dbeaf3.svg",
"fullname": "Songyang Zhang",
"isPro": false,
"type": "user",
"user": "zsytony"
}
},
{
"_id": "67aacd7e078cdf445284fa05",
"hidden": false,
"name": "Dahua Lin",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-19T16:16:07.907Z",
"user": {
"_id": "636317ed80c1a705a6eff396",
"avatarUrl": "/avatars/3db090e101b916d9256d0d3e043db71d.svg",
"fullname": "Dahua Lin",
"isPro": false,
"type": "user",
"user": "lindahua"
}
},
{
"_id": "67aacd7e078cdf445284fa06",
"hidden": false,
"name": "Kai Chen",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-10T18:57:29 | Exploring the Limit of Outcome Reward for Learning Mathematical
Reasoning | Reasoning abilities, especially those for solving complex math problems, are
crucial components of general intelligence. Recent advances from proprietary
models, such as OpenAI's o-series, have made remarkable progress on reasoning
tasks. However, the complete technical details remain undisclosed, and the
only techniques widely believed to be adopted are reinforcement learning (RL)
and long chains of thought. This paper proposes a new RL
framework, termed OREAL, to pursue the performance limit that can be achieved
through Outcome REwArd-based reinforcement
Learning for mathematical reasoning tasks, where only binary outcome
rewards are easily accessible. We theoretically prove that behavior cloning on
positive trajectories from best-of-N (BoN) sampling is sufficient to learn the
KL-regularized optimal policy in binary feedback environments. This formulation
further implies that the rewards of negative samples should be reshaped to
ensure the gradient consistency between positive and negative samples. To
alleviate the long-existing difficulties brought by sparse rewards in RL, which
are even exacerbated by the partial correctness of the long chain of thought
for reasoning tasks, we further apply a token-level reward model to sample
important tokens in reasoning trajectories for learning. With OREAL, for the
first time, a 7B model can obtain 94.0 pass@1 accuracy on MATH-500 through RL,
being on par with 32B models. OREAL-32B also surpasses previous 32B models
trained by distillation with 95.0 pass@1 accuracy on MATH-500. Our
investigation also indicates the importance of initial policy models and
training queries for RL. Code, models, and data will be released to benefit
future research: https://github.com/InternLM/OREAL. | 60 | 67aacd7f078cdf445284fa4b | null | null |
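The core training signal described here (behavior cloning on positive best-of-N trajectories, with negative rewards reshaped for gradient consistency) can be sketched as a loss over sampled trajectories. The snippet below is a schematic reading of that idea, not the released OREAL implementation; the negative-weight heuristic is an assumption for illustration.

```python
import torch

def oreal_style_loss(logprobs, rewards, neg_weight=None):
    """Schematic outcome-reward loss: behavior-clone positive trajectories
    (reward 1) and push down negatives (reward 0), with a reshaped weight
    so positive and negative gradients stay on a comparable scale.
    `logprobs`: per-trajectory log-likelihoods under the current policy;
    `rewards`: binary outcome rewards. Illustrative, not the exact paper
    objective."""
    logprobs = torch.as_tensor(logprobs, dtype=torch.float32)
    rewards = torch.as_tensor(rewards, dtype=torch.float32)
    pos, neg = rewards == 1, rewards == 0
    if neg_weight is None:
        # Balance by the positive/negative count ratio (an assumption).
        neg_weight = pos.sum() / neg.sum().clamp(min=1)
    bc_loss = -logprobs[pos].mean() if pos.any() else logprobs.new_zeros(())
    neg_loss = logprobs[neg].mean() if neg.any() else logprobs.new_zeros(())
    return bc_loss + neg_weight * neg_loss

print(oreal_style_loss([-1.2, -0.5, -2.0, -0.8], [1, 1, 0, 0]))
```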
|
2025-02-10T22:58:41.471000 | Lossless Acceleration of Large Language Models with Hierarchical Drafting based on Temporal Locality in Speculative Decoding | 3 | {
"_id": "64ec4c04c782d648d28d70fc",
"avatarUrl": "/avatars/6975526fcf4b513cc934b5bc45370a48.svg",
"followerCount": 2,
"fullname": "Sukmin Cho",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "zomss",
"type": "user"
} | true | null | 2502.05609 | [
{
"_id": "67aacaaaa03eecbc2d72835f",
"hidden": false,
"name": "Sukmin Cho",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-11T07:54:43.377Z",
"user": {
"_id": "64ec4c04c782d648d28d70fc",
"avatarUrl": "/avatars/6975526fcf4b513cc934b5bc45370a48.svg",
"fullname": "Sukmin Cho",
"isPro": false,
"type": "user",
"user": "zomss"
}
},
{
"_id": "67aacaaaa03eecbc2d728360",
"hidden": false,
"name": "Sangjin Choi",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aacaaaa03eecbc2d728361",
"hidden": false,
"name": "Taeho Hwang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-11T07:54:45.737Z",
"user": {
"_id": "64d1e70a84f205869017703b",
"avatarUrl": "/avatars/215d0d4db5f79cb74df4d888b18c6a0d.svg",
"fullname": "Taeho Hwang",
"isPro": false,
"type": "user",
"user": "doubleyyh"
}
},
{
"_id": "67aacaaaa03eecbc2d728362",
"hidden": false,
"name": "Jeongyeon Seo",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aacaaaa03eecbc2d728363",
"hidden": false,
"name": "Soyeong Jeong",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aacaaaa03eecbc2d728364",
"hidden": false,
"name": "Huije Lee",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aacaaaa03eecbc2d728365",
"hidden": false,
"name": "Hoyun Song",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aacaaaa03eecbc2d728366",
"hidden": false,
"name": "Jong C. Park",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aacaaaa03eecbc2d728367",
"hidden": false,
"name": "Youngjin Kwon",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-08T15:32:53 | Lossless Acceleration of Large Language Models with Hierarchical
Drafting based on Temporal Locality in Speculative Decoding | Accelerating inference in Large Language Models (LLMs) is critical for
real-time interactions, as they have been widely incorporated into real-world
services. Speculative decoding, a fully algorithmic solution, has gained
attention for improving inference speed by drafting and verifying tokens,
thereby generating multiple tokens in a single forward pass. However, current
drafting strategies usually require significant fine-tuning or have
inconsistent performance across tasks. To address these challenges, we propose
Hierarchy Drafting (HD), a novel lossless drafting approach that organizes
various token sources into multiple databases in a hierarchical framework based
on temporal locality. In the drafting step, HD sequentially accesses multiple
databases to obtain draft tokens from the highest to the lowest locality,
ensuring consistent acceleration across diverse tasks and minimizing drafting
latency. Our experiments on Spec-Bench using LLMs with 7B and 13B parameters
demonstrate that HD outperforms existing database drafting methods, achieving
robust inference speedups across model sizes, tasks, and temperatures. | 17 | 67aacaaca03eecbc2d728394 | null | null |
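HD's drafting step, as described, walks token databases from highest to lowest temporal locality until one yields a draft continuation. A minimal dictionary-based sketch of that lookup order follows; the three database levels and the two-token n-gram keying are assumptions for illustration, not the paper's exact data structures.

```python
def hierarchical_draft(context, databases, max_draft=4):
    """Return draft tokens by querying databases in order of temporal
    locality (e.g., current-output cache -> prompt cache -> global corpus).
    Each database maps an n-gram key (here, the last two tokens -- an
    assumption) to a list of likely continuations."""
    key = tuple(context[-2:])
    for db in databases:  # highest locality first
        if key in db:
            return db[key][:max_draft]
    return []  # no draft found; fall back to normal decoding

# Toy usage: three locality levels keyed by the last two tokens.
output_cache = {("the", "quick"): ["brown", "fox"]}
prompt_cache = {("over", "the"): ["lazy", "dog"]}
global_db = {("hello", "world"): ["!"]}
ctx = ["jumps", "over", "the"]
print(hierarchical_draft(ctx, [output_cache, prompt_cache, global_db]))
```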
|
2025-02-10T22:49:56.390000 | ReasonFlux: Hierarchical LLM Reasoning via Scaling Thought Templates | 3 | {
"_id": "64fde4e252e82dd432b74ce9",
"avatarUrl": "/avatars/061a69d858b86d1600be916122cae7fc.svg",
"followerCount": 6,
"fullname": "Ling Yang",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "Lingaaaaaaa",
"type": "user"
} | true | null | 2502.06772 | [
{
"_id": "67aac8adfe33f6d8d695bc40",
"hidden": false,
"name": "Ling Yang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-11T14:25:31.970Z",
"user": {
"_id": "64fde4e252e82dd432b74ce9",
"avatarUrl": "/avatars/061a69d858b86d1600be916122cae7fc.svg",
"fullname": "Ling Yang",
"isPro": false,
"type": "user",
"user": "Lingaaaaaaa"
}
},
{
"_id": "67aac8adfe33f6d8d695bc41",
"hidden": false,
"name": "Zhaochen Yu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aac8adfe33f6d8d695bc42",
"hidden": false,
"name": "Bin Cui",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aac8adfe33f6d8d695bc43",
"hidden": false,
"name": "Mengdi Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-10T18:51:47 | ReasonFlux: Hierarchical LLM Reasoning via Scaling Thought Templates | We show that hierarchical LLM reasoning via scaling thought templates can
effectively optimize the reasoning search space and outperform the mathematical
reasoning capabilities of powerful LLMs like OpenAI o1-preview and DeepSeek V3.
We train our ReasonFlux-32B model with only 8 GPUs and introduce three
innovations: (i) a structured and generic thought template library, containing
around 500 high-level thought templates capable of generalizing to similar or
relevant reasoning problems; (ii) performing hierarchical reinforcement
learning on a sequence of thought templates instead of long CoTs, optimizing a
base LLM to plan out an optimal template trajectory for gradually handling
complex problems; (iii) a brand new inference scaling system that enables
hierarchical LLM reasoning by adaptively scaling thought templates at inference
time. With a template trajectory containing sequential thought templates, our
ReasonFlux-32B significantly advances math reasoning capabilities to
state-of-the-art levels. Notably, on the MATH benchmark, it achieves an
accuracy of 91.2% and surpasses o1-preview by 6.7%. On the USA Math Olympiad
(AIME) benchmark, ReasonFlux-32B solves an average of 56.7% of problems,
surpassing o1-preview and DeepSeek-V3 by 27% and 45%, respectively. Code:
https://github.com/Gen-Verse/ReasonFlux | 20 | 67aac8affe33f6d8d695bcbd | null | null |
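The inference-time system sketched in the abstract plans a trajectory of high-level thought templates and instantiates them in sequence. The toy loop below renders that idea under assumptions: templates are retrieved by string similarity from a tiny library (a stand-in for learned retrieval), and the solve step is mocked.

```python
from difflib import SequenceMatcher

TEMPLATES = [  # stand-ins for the ~500 high-level thought templates
    "Simplify the expression before substituting values.",
    "Introduce a substitution to reduce the equation's degree.",
    "Check boundary cases after deriving a general formula.",
]

def retrieve_template(problem_state, used):
    """Pick the most relevant unused template by string similarity
    (a deliberately simple stand-in for learned retrieval)."""
    return max(
        (t for t in TEMPLATES if t not in used),
        key=lambda t: SequenceMatcher(None, problem_state, t).ratio(),
        default=None,
    )

def template_trajectory(problem, steps=2):
    """Adaptively build a sequential template trajectory for a problem."""
    state, used = problem, []
    for _ in range(steps):
        t = retrieve_template(state, used)
        if t is None:
            break
        used.append(t)
        state = f"{state} [applied: {t}]"  # stand-in for the LLM solve step
    return used

print(template_trajectory("Solve a quartic equation in one variable"))
```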
|
2025-02-10T22:40:39.442000 | EVEv2: Improved Baselines for Encoder-Free Vision-Language Models | 2 | {
"_id": "64b4a717aa03b6520839e9b8",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/64b4a717aa03b6520839e9b8/Rt3ERG-6BVEA4hAwOz0_I.jpeg",
"followerCount": 3,
"fullname": "Haiwen Diao",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "Paranioar",
"type": "user"
} | false | null | 2502.06788 | [
{
"_id": "67aac64de37429ebdbdafc40",
"hidden": false,
"name": "Haiwen Diao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aac64de37429ebdbdafc41",
"hidden": false,
"name": "Xiaotong Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aac64de37429ebdbdafc42",
"hidden": false,
"name": "Yufeng Cui",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aac64de37429ebdbdafc43",
"hidden": false,
"name": "Yueze Wang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-11T14:25:34.818Z",
"user": {
"_id": "6458b59c7a7e192202df8fa0",
"avatarUrl": "/avatars/33ee716477e5686da8723d01e199cd27.svg",
"fullname": "Yueze Wang",
"isPro": false,
"type": "user",
"user": "yzwang"
}
},
{
"_id": "67aac64de37429ebdbdafc44",
"hidden": false,
"name": "Haoge Deng",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aac64de37429ebdbdafc45",
"hidden": false,
"name": "Ting Pan",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-11T07:55:09.401Z",
"user": {
"_id": "6565bc5ee5aac326bfc98e39",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/vIfHy9Y1yAK6A96UCHNBH.jpeg",
"fullname": "Ting Pan",
"isPro": false,
"type": "user",
"user": "PhyscalX"
}
},
{
"_id": "67aac64de37429ebdbdafc46",
"hidden": false,
"name": "Wenxuan Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aac64de37429ebdbdafc47",
"hidden": false,
"name": "Huchuan Lu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aac64de37429ebdbdafc48",
"hidden": false,
"name": "Xinlong Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-10T18:59:58 | EVEv2: Improved Baselines for Encoder-Free Vision-Language Models | Existing encoder-free vision-language models (VLMs) are rapidly narrowing the
performance gap with their encoder-based counterparts, highlighting the
promising potential for unified multimodal systems with structural simplicity
and efficient deployment. We systematically clarify the performance gap between
VLMs using pre-trained vision encoders, discrete tokenizers, and minimalist
visual layers trained from scratch, and thoroughly examine the under-explored
characteristics of encoder-free VLMs. We develop efficient strategies for
encoder-free VLMs that rival mainstream encoder-based ones. After an in-depth
investigation, we launch EVEv2.0, a new and improved family of encoder-free
VLMs. We show that: (i) Properly decomposing and hierarchically associating
vision and language within a unified model reduces interference between
modalities. (ii) A well-designed training strategy enables effective
optimization for encoder-free VLMs. Through extensive evaluation, our EVEv2.0
represents a thorough study for developing a decoder-only architecture across
modalities, demonstrating superior data efficiency and strong vision-reasoning
capability. Code is publicly available at: https://github.com/baaivision/EVE. | 12 | 67aac64ee37429ebdbdafc96 | null | null |
|
2025-02-10T22:33:17.468000 | Dual Caption Preference Optimization for Diffusion Models | 2 | {
"_id": "640f6299ef5c6dcac8b1df52",
"avatarUrl": "/avatars/022f21183abc8a8b5ce1b198d3ba96dc.svg",
"followerCount": null,
"fullname": "Amir",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "sahsaeedi",
"type": "user"
} | true | null | 2502.06023 | [
{
"_id": "67aac3a9ef5570c0c9047095",
"hidden": false,
"name": "Amir Saeidi",
"status": "extracted_confirmed",
"statusLastChangedAt": "2025-02-11T03:31:48.492Z",
"user": {
"_id": "640f6299ef5c6dcac8b1df52",
"avatarUrl": "/avatars/022f21183abc8a8b5ce1b198d3ba96dc.svg",
"fullname": "Amir",
"isPro": false,
"type": "user",
"user": "sahsaeedi"
}
},
{
"_id": "67aac3a9ef5570c0c9047096",
"hidden": false,
"name": "Yiran Luo",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aac3a9ef5570c0c9047097",
"hidden": false,
"name": "Agneet Chatterjee",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-13T08:25:34.589Z",
"user": {
"_id": "6320c537a023aad6a7680c8b",
"avatarUrl": "/avatars/057dc492b8f756b83f12ced0b74fae65.svg",
"fullname": "Agneet Chatterjee",
"isPro": false,
"type": "user",
"user": "agneet"
}
},
{
"_id": "67aac3a9ef5570c0c9047098",
"hidden": false,
"name": "Shamanthak Hegde",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aac3a9ef5570c0c9047099",
"hidden": false,
"name": "Bimsara Pathiraja",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aac3a9ef5570c0c904709a",
"hidden": false,
"name": "Yezhou Yang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aac3a9ef5570c0c904709b",
"hidden": false,
"name": "Chitta Baral",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-09T20:34:43 | Dual Caption Preference Optimization for Diffusion Models | Recent advancements in human preference optimization, originally developed
for Large Language Models (LLMs), have shown significant potential in improving
text-to-image diffusion models. These methods aim to learn the distribution of
preferred samples while distinguishing them from less preferred ones. However,
existing preference datasets often exhibit overlap between these distributions,
leading to a conflict distribution. Additionally, we identified that input
prompts contain irrelevant information for less preferred images, limiting the
denoising network's ability to accurately predict noise during preference
optimization, which we call the irrelevant prompt issue. To address these
challenges, we propose Dual Caption Preference Optimization (DCPO), a novel
approach that utilizes two distinct captions to mitigate irrelevant prompts. To
tackle conflict distribution, we introduce the Pick-Double Caption dataset, a
modified version of Pick-a-Pic v2 with separate captions for preferred and less
preferred images. We further propose three different strategies for generating
distinct captions: captioning, perturbation, and hybrid methods. Our
experiments show that DCPO significantly improves image quality and relevance
to prompts, outperforming Stable Diffusion (SD) 2.1, SFT_Chosen, Diffusion-DPO,
and MaPO across multiple metrics, including Pickscore, HPSv2.1, GenEval,
CLIPscore, and ImageReward, with all methods fine-tuned on SD 2.1 as the backbone. | 9 | 67aac3b1ef5570c0c9047264 | null | null |
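DCPO's central move -- conditioning the preferred and less-preferred images on two distinct captions inside a Diffusion-DPO-style objective -- can be written compactly. The sketch below assumes per-sample denoising errors have already been computed for the policy and a frozen reference model; it is an illustrative form of such a loss, not the authors' code.

```python
import torch
import torch.nn.functional as F

def dcpo_style_loss(err_w, err_l, ref_err_w, ref_err_l, beta=1.0):
    """Diffusion-DPO-style preference loss with dual captions.
    err_w / err_l: denoising MSE of the policy on the preferred image
    (conditioned on its own caption) and on the less-preferred image
    (conditioned on a distinct caption); ref_* are the same quantities
    under a frozen reference model. Lower error = higher likelihood, so
    the preference margin uses error differences. Illustrative only."""
    margin = (ref_err_w - err_w) - (ref_err_l - err_l)
    return -F.logsigmoid(beta * margin).mean()

# Toy usage with scalar per-batch errors.
e = lambda v: torch.tensor([v])
print(dcpo_style_loss(e(0.10), e(0.30), e(0.20), e(0.25)))
```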
|
2025-02-10T22:29:36.102000 | APE: Faster and Longer Context-Augmented Generation via Adaptive Parallel Encoding | 4 | {
"_id": "64f58b970b24e548a85522bc",
"avatarUrl": "/avatars/c8ca1294b5a1edd609694877e335b22f.svg",
"followerCount": null,
"fullname": "Xinyu Yang",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "Hanyuezhuohua",
"type": "user"
} | true | null | 2502.05431 | [
{
"_id": "67aac392385da1f07cc7fcbd",
"hidden": false,
"name": "Xinyu Yang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-11T07:55:13.131Z",
"user": {
"_id": "64f58b970b24e548a85522bc",
"avatarUrl": "/avatars/c8ca1294b5a1edd609694877e335b22f.svg",
"fullname": "Xinyu Yang",
"isPro": false,
"type": "user",
"user": "Hanyuezhuohua"
}
},
{
"_id": "67aac392385da1f07cc7fcbe",
"hidden": false,
"name": "Tianqi Chen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aac392385da1f07cc7fcbf",
"hidden": false,
"name": "Beidi Chen",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-08T03:41:16 | APE: Faster and Longer Context-Augmented Generation via Adaptive
Parallel Encoding | Context-augmented generation (CAG) techniques, including RAG and ICL, require
the efficient combination of multiple contexts to generate responses to user
queries. Directly inputting these contexts as a sequence introduces a
considerable computational burden by re-encoding the combined selection of
contexts for every request. To address this, we explore the promising potential
of parallel encoding to independently pre-compute and cache each context's KV
states. This approach enables the direct loading of cached states during
inference while accommodating more contexts through position reuse across
contexts. However, due to misalignments in attention distribution, directly
applying parallel encoding results in a significant performance drop. To enable
effective and efficient CAG, we propose Adaptive Parallel Encoding
(APE), which brings shared prefix, attention temperature, and
scaling factor to align the distribution of parallel encoding with sequential
encoding. Results on RAG and ICL tasks demonstrate that APE can preserve 98%
and 93% of sequential encoding performance using the same inputs while
outperforming parallel encoding by 3.6% and 7.9%, respectively. It also scales
to many-shot CAG, effectively encoding hundreds of contexts in parallel.
Efficiency evaluation shows that APE can achieve an end-to-end 4.5x
speedup by reducing prefilling time by 28x for a 128K-length context. | 6 | 67aac393385da1f07cc7fd17 | null | null |
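APE aligns parallel encoding with sequential encoding through three knobs: a shared prefix, an attention temperature, and a scaling factor. A minimal attention-level sketch follows; where the temperature and scale enter the softmax is an assumption for illustration, and in practice the context keys/values would come from pre-computed, cached KV states.

```python
import torch

def ape_attention(q, ctx_k, ctx_v, temperature=0.9, scale=1.2):
    """Attention over independently cached context KV states, with a
    temperature that sharpens/softens scores and a scaling factor on
    context logits so parallel-encoded contexts better match the
    attention distribution of sequential encoding. Illustrative placement
    of the knobs, not the paper's exact formulation.
    q: (d,); ctx_k / ctx_v: lists of (n_i, d) tensors, one per context."""
    k = torch.cat(ctx_k, dim=0)            # (N, d)
    v = torch.cat(ctx_v, dim=0)            # (N, d)
    d = q.shape[-1]
    logits = (k @ q) / (d ** 0.5)          # (N,)
    logits = scale * logits / temperature  # align with sequential encoding
    attn = torch.softmax(logits, dim=0)
    return attn @ v                        # (d,)

torch.manual_seed(0)
q = torch.randn(8)
ctx_k = [torch.randn(3, 8), torch.randn(5, 8)]  # two cached contexts
ctx_v = [torch.randn(3, 8), torch.randn(5, 8)]
print(ape_attention(q, ctx_k, ctx_v).shape)
```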
|
2025-02-10T22:20:38.168000 | Steel-LLM: From Scratch to Open Source -- A Personal Journey in Building a Chinese-Centric LLM | 2 | {
"_id": "64ab99dcb76bfd863eba64c1",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/64ab99dcb76bfd863eba64c1/UBXwDPx17X-gl-SzBPvrc.jpeg",
"followerCount": 12,
"fullname": "TY.Zheng",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "aaabiao",
"type": "user"
} | true | null | 2502.06635 | [
{
"_id": "67aac0ba91e6f5eb5476ea76",
"hidden": false,
"name": "Qingshui Gu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aac0ba91e6f5eb5476ea77",
"hidden": false,
"name": "Shu Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aac0ba91e6f5eb5476ea78",
"hidden": false,
"name": "Tianyu Zheng",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-11T07:55:15.968Z",
"user": {
"_id": "64ab99dcb76bfd863eba64c1",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/64ab99dcb76bfd863eba64c1/UBXwDPx17X-gl-SzBPvrc.jpeg",
"fullname": "TY.Zheng",
"isPro": false,
"type": "user",
"user": "aaabiao"
}
},
{
"_id": "67aac0ba91e6f5eb5476ea79",
"hidden": false,
"name": "Zhaoxiang Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-10T16:31:37 | Steel-LLM: From Scratch to Open Source -- A Personal Journey in Building
a Chinese-Centric LLM | Steel-LLM is a Chinese-centric language model developed from scratch with the
goal of creating a high-quality, open-source model despite limited
computational resources. Launched in March 2024, the project aimed to train a
1-billion-parameter model on a large-scale dataset, prioritizing transparency
and the sharing of practical insights to assist others in the community. The
training process primarily focused on Chinese data, with a small proportion of
English data included, addressing gaps in existing open-source LLMs by
providing a more detailed and practical account of the model-building journey.
Steel-LLM has demonstrated competitive performance on benchmarks such as CEVAL
and CMMLU, outperforming early models from larger institutions. This paper
provides a comprehensive summary of the project's key contributions, including
data collection, model design, training methodologies, and the challenges
encountered along the way, offering a valuable resource for researchers and
practitioners looking to develop their own LLMs. The model checkpoints and
training script are available at https://github.com/zhanshijinwat/Steel-LLM. | 4 | 67aac0bb91e6f5eb5476eab8 | null | null |
|
2025-02-10T22:13:17.117000 | LM2: Large Memory Models | 7 | {
"_id": "6489e10ca13f65198dc6e122",
"avatarUrl": "/avatars/4aa9eab488157711b2f0298ddadee2f4.svg",
"followerCount": null,
"fullname": "Kang",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "JaxonK",
"type": "user"
} | true | null | 2502.06049 | [
{
"_id": "67aac01bd7b18841e7c266df",
"hidden": false,
"name": "Jikun Kang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-11T15:26:38.660Z",
"user": {
"_id": "6489e10ca13f65198dc6e122",
"avatarUrl": "/avatars/4aa9eab488157711b2f0298ddadee2f4.svg",
"fullname": "Kang",
"isPro": false,
"type": "user",
"user": "JaxonK"
}
},
{
"_id": "67aac01bd7b18841e7c266e0",
"hidden": false,
"name": "Wenqi Wu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aac01bd7b18841e7c266e1",
"hidden": false,
"name": "Filippos Christianos",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aac01bd7b18841e7c266e2",
"hidden": false,
"name": "Alex J. Chan",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-11T15:26:36.110Z",
"user": {
"_id": "636c1e4415cd58e915bc45df",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/636c1e4415cd58e915bc45df/KnPgdPe0G5ngvXaCBua6R.jpeg",
"fullname": "Alex J. Chan",
"isPro": false,
"type": "user",
"user": "XanderJC"
}
},
{
"_id": "67aac01bd7b18841e7c266e3",
"hidden": false,
"name": "Fraser Greenlee",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aac01bd7b18841e7c266e4",
"hidden": false,
"name": "George Thomas",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aac01bd7b18841e7c266e5",
"hidden": false,
"name": "Marvin Purtorab",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aac01bd7b18841e7c266e6",
"hidden": false,
"name": "Andy Toulis",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-09T22:11:42 | LM2: Large Memory Models | This paper introduces the Large Memory Model (LM2), a decoder-only
Transformer architecture enhanced with an auxiliary memory module that aims to
address the limitations of standard Transformers in multi-step reasoning,
relational argumentation, and synthesizing information distributed over long
contexts. The proposed LM2 incorporates a memory module that acts as a
contextual representation repository, interacting with input tokens via cross
attention and updating through gating mechanisms. To preserve the Transformer's
general-purpose capabilities, LM2 maintains the original information flow while
integrating a complementary memory pathway. Experimental results on the
BABILong benchmark demonstrate that the LM2 model outperforms both the
memory-augmented RMT model by 37.1% and the baseline Llama-3.2 model by 86.3%
on average across tasks. LM2 exhibits exceptional capabilities in multi-hop
inference, numerical reasoning, and large-context question-answering. On the
MMLU dataset, it achieves a 5.0% improvement over a pre-trained vanilla model,
demonstrating that its memory module does not degrade performance on general
tasks. Further, in our analysis, we explore the memory interpretability,
effectiveness of memory modules, and test-time behavior. Our findings emphasize
the importance of explicit memory in enhancing Transformer architectures. | 30 | 67aac01dd7b18841e7c26739 | null | null |
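The memory module described above reads via cross-attention and injects the result through a learned gate, leaving the main Transformer pathway intact. The sketch below shows one such read with a gated injection into the residual stream; shapes and the gate's form are illustrative assumptions, not the LM2 architecture.

```python
import torch
import torch.nn as nn

class MemorySlot(nn.Module):
    """Schematic LM2-style memory: tokens cross-attend into memory slots,
    and the read is merged through a learned gate so the original
    residual stream is complemented, never replaced. Shapes and the gate
    form are illustrative assumptions."""
    def __init__(self, d=32, slots=8, heads=4):
        super().__init__()
        self.mem = nn.Parameter(torch.randn(slots, d) * 0.02)
        self.read = nn.MultiheadAttention(d, heads, batch_first=True)
        self.gate = nn.Linear(2 * d, d)

    def forward(self, x):                    # x: (B, T, d)
        B = x.shape[0]
        mem = self.mem.unsqueeze(0).expand(B, -1, -1)
        read, _ = self.read(x, mem, mem)     # tokens query the memory
        g = torch.sigmoid(self.gate(torch.cat([x, read], dim=-1)))
        return x + g * read                  # complementary memory pathway

x = torch.randn(2, 5, 32)
print(MemorySlot()(x).shape)                 # torch.Size([2, 5, 32])
```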
|
2025-02-10T22:09:58.181000 | Efficient-vDiT: Efficient Video Diffusion Transformers With Attention Tile | 2 | {
"_id": "63565cc56d7fcf1bedb7d347",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63565cc56d7fcf1bedb7d347/XGcHP4VkO_oieA1gZ4IAX.jpeg",
"followerCount": 82,
"fullname": "Zhang Peiyuan",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "PY007",
"type": "user"
} | false | null | 2502.06155 | [
{
"_id": "67aab9b4a2bf5e5ea03d4c19",
"hidden": false,
"name": "Hangliang Ding",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-11T07:55:29.115Z",
"user": {
"_id": "643a451ee2b979ae6141329d",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/643a451ee2b979ae6141329d/HN3M5vyroanQoUEiXJFyB.jpeg",
"fullname": "Hangliang Ding",
"isPro": false,
"type": "user",
"user": "foreverpiano"
}
},
{
"_id": "67aab9b4a2bf5e5ea03d4c1a",
"hidden": false,
"name": "Dacheng Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aab9b4a2bf5e5ea03d4c1b",
"hidden": false,
"name": "Runlong Su",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aab9b4a2bf5e5ea03d4c1c",
"hidden": false,
"name": "Peiyuan Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aab9b4a2bf5e5ea03d4c1d",
"hidden": false,
"name": "Zhijie Deng",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-11T07:55:25.471Z",
"user": {
"_id": "64bba541da140e461924dfed",
"avatarUrl": "/avatars/367993765b0ca3734b2b100db33ed787.svg",
"fullname": "zhijie deng",
"isPro": false,
"type": "user",
"user": "zhijie3"
}
},
{
"_id": "67aab9b4a2bf5e5ea03d4c1e",
"hidden": false,
"name": "Ion Stoica",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aab9b4a2bf5e5ea03d4c1f",
"hidden": false,
"name": "Hao Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-10T05:00:56 | Efficient-vDiT: Efficient Video Diffusion Transformers With Attention
Tile | Despite the promise of synthesizing high-fidelity videos, Diffusion
Transformers (DiTs) with 3D full attention suffer from expensive inference due
to the complexity of attention computation and numerous sampling steps. For
example, the popular Open-Sora-Plan model consumes more than 9 minutes for
generating a single video of 29 frames. This paper addresses the inefficiency
issue from two aspects: 1) Prune the 3D full attention based on the redundancy
within video data; We identify a prevalent tile-style repetitive pattern in the
3D attention maps for video data, and advocate a new family of sparse 3D
attention that holds a linear complexity w.r.t. the number of video frames. 2)
Shorten the sampling process by adopting existing multi-step consistency
distillation; We split the entire sampling trajectory into several segments and
perform consistency distillation within each one to activate few-step
generation capacities. We further devise a three-stage training pipeline to
conjoin the low-complexity attention and few-step generation capacities.
Notably, with 0.1% pretraining data, we turn the Open-Sora-Plan-1.2 model into
an efficient one that is 7.4x-7.8x faster for 29- and 93-frame 720p video
generation with a marginal performance trade-off in VBench. In addition, we
demonstrate that our approach is amenable to distributed inference, achieving
an additional 3.91x speedup when running on 4 GPUs with sequence parallelism. | 8 | 67aab9bca2bf5e5ea03d4e3c | null | null |
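The "attention tile" observation motivates a sparse mask in which each frame attends only to itself, its temporal neighbors, and a small set of global anchor tokens, giving complexity linear in the number of frames. The mask builder below is a toy rendering of that pattern; the exact tile layout in the paper differs.

```python
import numpy as np

def tile_sparse_mask(frames, tokens_per_frame, window=1, global_frames=(0,)):
    """Boolean attention mask over frames * tokens_per_frame tokens:
    each frame attends to itself, +/- `window` neighboring frames, and a
    few global anchor frames. Linear in the number of frames for a fixed
    window. Toy version of tile-style sparsity."""
    n = frames * tokens_per_frame
    mask = np.zeros((n, n), dtype=bool)
    def span(f):
        return slice(f * tokens_per_frame, (f + 1) * tokens_per_frame)
    for f in range(frames):
        for g in range(max(0, f - window), min(frames, f + window + 1)):
            mask[span(f), span(g)] = True   # temporal neighborhood
        for g in global_frames:
            mask[span(f), span(g)] = True   # global anchor frames
    return mask

m = tile_sparse_mask(frames=6, tokens_per_frame=2)
print(m.sum(), "of", m.size, "entries attended")
```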
|
2025-02-10T21:38:53.032000 | The Hidden Life of Tokens: Reducing Hallucination of Large Vision-Language Models via Visual Information Steering | 3 | {
"_id": "64dfcc62e8b6f3f3baa950e0",
"avatarUrl": "/avatars/21bbff67d46c08044efe2406575aa77e.svg",
"followerCount": null,
"fullname": "Zhenting Wang",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "ztwang",
"type": "user"
} | false | null | 2502.03628 | [
{
"_id": "67aab82e6024056209d727a8",
"hidden": false,
"name": "Zhuowei Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aab82e6024056209d727a9",
"hidden": false,
"name": "Haizhou Shi",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aab82e6024056209d727aa",
"hidden": false,
"name": "Yunhe Gao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aab82e6024056209d727ab",
"hidden": false,
"name": "Di Liu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aab82e6024056209d727ac",
"hidden": false,
"name": "Zhenting Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aab82e6024056209d727ad",
"hidden": false,
"name": "Yuxiao Chen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aab82e6024056209d727ae",
"hidden": false,
"name": "Ting Liu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aab82e6024056209d727af",
"hidden": false,
"name": "Long Zhao",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-18T09:34:47.676Z",
"user": {
"_id": "650c249887dcda6616baa040",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/tIVfxoAHAJ0sWyDMkaarA.jpeg",
"fullname": "Long Zhao",
"isPro": false,
"type": "user",
"user": "garyzhao9012"
}
},
{
"_id": "67aab82e6024056209d727b0",
"hidden": false,
"name": "Hao Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aab82e6024056209d727b1",
"hidden": false,
"name": "Dimitris N. Metaxas",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-05T21:34:02 | The Hidden Life of Tokens: Reducing Hallucination of Large
Vision-Language Models via Visual Information Steering | Large Vision-Language Models (LVLMs) can reason effectively over both textual
and visual inputs, but they tend to hallucinate syntactically coherent yet
visually ungrounded contents. In this paper, we investigate the internal
dynamics of hallucination by examining the tokens logits rankings throughout
the generation process, revealing three key patterns in how LVLMs process
information: (1) gradual visual information loss -- visually grounded tokens
become progressively less favored throughout generation; (2) early excitation
-- semantically meaningful tokens reach peak activation in layers earlier than
the final layer; and (3) hidden genuine information -- visually grounded
tokens, though not ultimately decoded, still retain relatively high rankings
at inference. Based on these insights, we propose VISTA (Visual
Information Steering with Token-logit Augmentation), a training-free
inference-time intervention framework that reduces hallucination while
promoting genuine information. VISTA works by combining two complementary
approaches: reinforcing visual information in activation space and leveraging
early layer activations to promote semantically meaningful decoding. Compared
to existing methods, VISTA requires no external supervision and is applicable
to various decoding strategies. Extensive experiments show that VISTA on
average reduces hallucination by about 40% on the evaluated open-ended generation
task, and it consistently outperforms existing methods on four benchmarks
across four architectures under three decoding strategies. | 12 | 67aab82f6024056209d727f6 | null | null |
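VISTA's two components -- reinforcing visual information in activation space and using early-layer activations to promote semantically meaningful tokens -- both act on token logits at inference time. The sketch below is a schematic blend of final-layer logits with early-layer logits plus a visual-steering term; the coefficients and the steering vector are illustrative assumptions.

```python
import torch

def vista_style_logits(final_logits, early_logits, visual_dir,
                       alpha=0.3, gamma=0.2):
    """Training-free logit adjustment in the spirit of VISTA:
    - add a visual steering term (a visually grounded direction projected
      into vocabulary space) to counter visual information loss;
    - mix in early-layer logits, where semantically meaningful tokens
      peak ('early excitation').
    All coefficients and the steering vector are illustrative assumptions."""
    return final_logits + alpha * visual_dir + gamma * early_logits

torch.manual_seed(0)
V = 10
final_l, early_l, steer = torch.randn(V), torch.randn(V), torch.randn(V)
adjusted = vista_style_logits(final_l, early_l, steer)
print(int(final_l.argmax()), "->", int(adjusted.argmax()))
```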
|
2025-02-10T19:59:41.241000 | Adaptive Semantic Prompt Caching with VectorQ | 2 | {
"_id": "652a656d1a3250bbfe3bb92d",
"avatarUrl": "/avatars/a1c25150d55c493edd9a7f81287fc449.svg",
"followerCount": null,
"fullname": "Alejandro Cuadron Lafuente",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "AlexCuadron",
"type": "user"
} | true | null | 2502.03771 | [
{
"_id": "67aaa0ebe37429ebdbd113cf",
"hidden": false,
"name": "Luis Gaspar Schroeder",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aaa0ebe37429ebdbd113d0",
"hidden": false,
"name": "Shu Liu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aaa0ebe37429ebdbd113d1",
"hidden": false,
"name": "Alejandro Cuadron",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-11T07:55:36.306Z",
"user": {
"_id": "652a656d1a3250bbfe3bb92d",
"avatarUrl": "/avatars/a1c25150d55c493edd9a7f81287fc449.svg",
"fullname": "Alejandro Cuadron Lafuente",
"isPro": false,
"type": "user",
"user": "AlexCuadron"
}
},
{
"_id": "67aaa0ebe37429ebdbd113d2",
"hidden": false,
"name": "Mark Zhao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aaa0ebe37429ebdbd113d3",
"hidden": false,
"name": "Stephan Krusche",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aaa0ebe37429ebdbd113d4",
"hidden": false,
"name": "Alfons Kemper",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aaa0ebe37429ebdbd113d5",
"hidden": false,
"name": "Matei Zaharia",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aaa0ebe37429ebdbd113d6",
"hidden": false,
"name": "Joseph E. Gonzalez",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-06T04:16:20 | Adaptive Semantic Prompt Caching with VectorQ | Semantic prompt caches reduce the latency and cost of large language model
(LLM) inference by reusing cached LLM-generated responses for semantically
similar prompts. Vector similarity metrics assign a numerical score to quantify
the similarity between an embedded prompt and its nearest neighbor in the
cache. Existing systems rely on a static threshold to classify whether the
similarity score is sufficiently high to result in a cache hit. We show that
this one-size-fits-all threshold is insufficient across different prompts. We
propose VectorQ, a framework to learn embedding-specific threshold regions that
adapt to the complexity and uncertainty of an embedding. Through evaluations on
a combination of four diverse datasets, we show that VectorQ consistently
outperforms state-of-the-art systems across all static thresholds, achieving up
to 12x increases in cache hit rate and error rate reductions up to 92%. | 3 | 67aaa0ebe37429ebdbd113fb | null | null |
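The key idea -- replacing one global similarity threshold with per-embedding threshold regions that adapt to observed correctness -- can be shown with a tiny cache. The update rule below (tighten on a bad reuse, relax on a verified good one) is a simplified stand-in for VectorQ's learned regions, not its actual mechanism.

```python
import numpy as np

class AdaptiveSemanticCache:
    """Toy semantic prompt cache with a per-entry similarity threshold.
    Each cached embedding starts with a conservative threshold that is
    tightened when a reused response turns out wrong and relaxed when it
    is verified correct -- a simplified stand-in for VectorQ's regions."""
    def __init__(self, init_thresh=0.9, step=0.02):
        self.entries = []  # (embedding, response, threshold)
        self.init_thresh, self.step = init_thresh, step

    def lookup(self, emb):
        best, best_sim = None, -1.0
        for i, (e, _resp, _th) in enumerate(self.entries):
            sim = float(e @ emb / (np.linalg.norm(e) * np.linalg.norm(emb)))
            if sim > best_sim:
                best, best_sim = i, sim
        if best is not None and best_sim >= self.entries[best][2]:
            return best, self.entries[best][1]   # cache hit
        return None, None                        # cache miss

    def insert(self, emb, response):
        self.entries.append((emb, response, self.init_thresh))

    def feedback(self, idx, correct):
        e, r, th = self.entries[idx]
        th += -self.step if correct else self.step  # relax or tighten
        self.entries[idx] = (e, r, min(max(th, 0.5), 0.999))

cache = AdaptiveSemanticCache()
cache.insert(np.array([1.0, 0.0]), "cached answer")
print(cache.lookup(np.array([0.99, 0.05])))
```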
|
2025-02-10T18:55:57.167000 | SPARC: Subspace-Aware Prompt Adaptation for Robust Continual Learning in LLMs | 2 | {
"_id": "655ec30b12fb73960ceb048f",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/655ec30b12fb73960ceb048f/q7zVSStJWBywrtPoL2ChO.png",
"followerCount": null,
"fullname": "Sina Tayebati",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "sinatayebati",
"type": "user"
} | true | null | 2502.02909 | [
{
"_id": "67aa91fd5f845ebfe01d7769",
"hidden": false,
"name": "Dinithi Jayasuriya",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aa91fd5f845ebfe01d776a",
"hidden": false,
"name": "Sina Tayebati",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-13T08:25:43.230Z",
"user": {
"_id": "655ec30b12fb73960ceb048f",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/655ec30b12fb73960ceb048f/q7zVSStJWBywrtPoL2ChO.png",
"fullname": "Sina Tayebati",
"isPro": false,
"type": "user",
"user": "sinatayebati"
}
},
{
"_id": "67aa91fd5f845ebfe01d776b",
"hidden": false,
"name": "Davide Ettori",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aa91fd5f845ebfe01d776c",
"hidden": false,
"name": "Ranganath Krishnan",
"status": "extracted_pending",
"statusLastChangedAt": "2025-02-10T23:55:43.302Z",
"user": {
"_id": "647a45aeccb84c6180b41b54",
"avatarUrl": "/avatars/cd0db59a1b7f49f53f65751a8efc1033.svg",
"fullname": "Ranganath Krishnan",
"isPro": false,
"type": "user",
"user": "ranganathkrishnan"
}
},
{
"_id": "67aa91fd5f845ebfe01d776d",
"hidden": false,
"name": "Amit Ranjan Trivedi",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-05T06:11:55 | SPARC: Subspace-Aware Prompt Adaptation for Robust Continual Learning in
LLMs | We propose SPARC, a lightweight continual learning framework for large
language models (LLMs) that enables efficient task adaptation through prompt
tuning in a lower-dimensional space. By leveraging principal component analysis
(PCA), we identify a compact subspace of the training data. Optimizing prompts
in this lower-dimensional space enhances training efficiency, as it focuses
updates on the most relevant features while reducing computational overhead.
Furthermore, since the model's internal structure remains unaltered, the
extensive knowledge gained from pretraining is fully preserved, ensuring that
previously learned information is not compromised during adaptation. Our method
achieves high knowledge retention in both task-incremental and
domain-incremental continual learning setups while fine-tuning only 0.04% of
the model's parameters. Additionally, by integrating LoRA, we enhance
adaptability to computational constraints, allowing for a tradeoff between
accuracy and training cost. Experiments on the SuperGLUE benchmark demonstrate
that our PCA-based prompt tuning combined with LoRA maintains full knowledge
retention while improving accuracy, utilizing only 1% of the model's
parameters. These results establish our approach as a scalable and
resource-efficient solution for continual learning in LLMs. | 2 | 67aa91ff5f845ebfe01d77fc | null | null |
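SPARC's recipe -- fit PCA on training-data representations, then optimize the prompt inside that compact subspace and project back up -- is easy to sketch. The snippet below shows the projection mechanics with a toy least-squares objective; the dimensions and loss are illustrative, and in the real method the projected soft prompt would be prepended to inputs of a frozen LLM.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, n = 64, 8, 200                     # embed dim, subspace dim, samples

# 1) PCA on (toy) training representations -> top-k principal directions.
X = rng.normal(size=(n, d))
X -= X.mean(axis=0)
_, _, Vt = np.linalg.svd(X, full_matrices=False)
P = Vt[:k]                               # (k, d) subspace basis

# 2) Optimize a prompt in the k-dim subspace; full prompt = z @ P.
target = rng.normal(size=d)              # stand-in for a useful prompt
z = np.zeros(k)
for _ in range(200):                     # plain gradient descent on a
    grad = 2 * (z @ P - target) @ P.T    # toy least-squares objective
    z -= 0.1 * grad

prompt = z @ P                           # (d,) projected soft prompt
print("trainable params:", k, "of", d,
      "| residual:", round(float(np.linalg.norm(prompt - target)), 3))
```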
|
2025-02-10T18:54:04.415000 | Intelligent Sensing-to-Action for Robust Autonomy at the Edge: Opportunities and Challenges | 2 | {
"_id": "655ec30b12fb73960ceb048f",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/655ec30b12fb73960ceb048f/q7zVSStJWBywrtPoL2ChO.png",
"followerCount": null,
"fullname": "Sina Tayebati",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "sinatayebati",
"type": "user"
} | true | null | 2502.02692 | [
{
"_id": "67aa915d2e821999a96f8d85",
"hidden": false,
"name": "Amit Ranjan Trivedi",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aa915d2e821999a96f8d86",
"hidden": false,
"name": "Sina Tayebati",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-13T08:25:47.153Z",
"user": {
"_id": "655ec30b12fb73960ceb048f",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/655ec30b12fb73960ceb048f/q7zVSStJWBywrtPoL2ChO.png",
"fullname": "Sina Tayebati",
"isPro": false,
"type": "user",
"user": "sinatayebati"
}
},
{
"_id": "67aa915d2e821999a96f8d87",
"hidden": false,
"name": "Hemant Kumawat",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aa915d2e821999a96f8d88",
"hidden": false,
"name": "Nastaran Darabi",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-27T22:09:23.540Z",
"user": {
"_id": "671acb0de80155d7f9e162b0",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/no-auth/g7hnS2Mrjyy-RudyIxvVX.png",
"fullname": "Nastaran Darabi",
"isPro": false,
"type": "user",
"user": "Nstrndrbi"
}
},
{
"_id": "67aa915d2e821999a96f8d89",
"hidden": false,
"name": "Divake Kumar",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aa915d2e821999a96f8d8a",
"hidden": false,
"name": "Adarsh Kumar Kosta",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aa915d2e821999a96f8d8b",
"hidden": false,
"name": "Yeshwanth Venkatesha",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aa915d2e821999a96f8d8c",
"hidden": false,
"name": "Dinithi Jayasuriya",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aa915d2e821999a96f8d8d",
"hidden": false,
"name": "Nethmi Jayasinghe",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aa915d2e821999a96f8d8e",
"hidden": false,
"name": "Priyadarshini Panda",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aa915d2e821999a96f8d8f",
"hidden": false,
"name": "Saibal Mukhopadhyay",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aa915d2e821999a96f8d90",
"hidden": false,
"name": "Kaushik Roy",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-04T20:13:58 | Intelligent Sensing-to-Action for Robust Autonomy at the Edge:
Opportunities and Challenges | Autonomous edge computing in robotics, smart cities, and autonomous vehicles
relies on the seamless integration of sensing, processing, and actuation for
real-time decision-making in dynamic environments. At its core is the
sensing-to-action loop, which iteratively aligns sensor inputs with
computational models to drive adaptive control strategies. These loops can
adapt to hyper-local conditions, enhancing resource efficiency and
responsiveness, but also face challenges such as resource constraints,
synchronization delays in multi-modal data fusion, and the risk of cascading
errors in feedback loops. This article explores how proactive, context-aware
sensing-to-action and action-to-sensing adaptations can enhance efficiency by
dynamically adjusting sensing and computation based on task demands, such as
sensing a very limited part of the environment and predicting the rest. By
guiding sensing through control actions, action-to-sensing pathways can improve
task relevance and resource use, but they also require robust monitoring to
prevent cascading errors and maintain reliability. Multi-agent sensing-action
loops further extend these capabilities through coordinated sensing and actions
across distributed agents, optimizing resource use via collaboration.
Additionally, neuromorphic computing, inspired by biological systems, provides
an efficient framework for spike-based, event-driven processing that conserves
energy, reduces latency, and supports hierarchical control--making it ideal for
multi-agent optimization. This article highlights the importance of end-to-end
co-design strategies that align algorithmic models with hardware and
environmental dynamics and exploit cross-layer interdependencies to improve
throughput, precision, and adaptability for energy-efficient edge autonomy in
complex environments. | 0 | 67aa91602e821999a96f8e79 | null | null |
|
2025-02-10T14:43:39.581000 | Continuous 3D Perception Model with Persistent State | 2 | {
"_id": "5f1158120c833276f61f1a84",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1608042047613-5f1158120c833276f61f1a84.jpeg",
"followerCount": 777,
"fullname": "Niels Rogge",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "nielsr",
"type": "user"
} | false | null | 2501.12387 | [
{
"_id": "67a1596a167bea74d5057f25",
"hidden": false,
"name": "Qianqian Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a1596a167bea74d5057f26",
"hidden": false,
"name": "Yifei Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a1596a167bea74d5057f27",
"hidden": false,
"name": "Aleksander Holynski",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a1596a167bea74d5057f28",
"hidden": false,
"name": "Alexei A. Efros",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a1596a167bea74d5057f29",
"hidden": false,
"name": "Angjoo Kanazawa",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-01-21T18:59:23 | Continuous 3D Perception Model with Persistent State | We present a unified framework capable of solving a broad range of 3D tasks.
Our approach features a stateful recurrent model that continuously updates its
state representation with each new observation. Given a stream of images, this
evolving state can be used to generate metric-scale pointmaps (per-pixel 3D
points) for each new input in an online fashion. These pointmaps reside within
a common coordinate system, and can be accumulated into a coherent, dense scene
reconstruction that updates as new images arrive. Our model, called CUT3R
(Continuous Updating Transformer for 3D Reconstruction), captures rich priors
of real-world scenes: not only can it predict accurate pointmaps from image
observations, but it can also infer unseen regions of the scene by probing at
virtual, unobserved views. Our method is simple yet highly flexible, naturally
accepting varying lengths of images that may be either video streams or
unordered photo collections, containing both static and dynamic content. We
evaluate our method on various 3D/4D tasks and demonstrate competitive or
state-of-the-art performance in each. Project Page: https://cut3r.github.io/ | 3 | 67a1596d167bea74d5057fa9 | null | null |
|
2025-02-10T13:27:42.383000 | Value-Based Deep RL Scales Predictably | 5 | {
"_id": "64d1161315b26cc7f70f37e6",
"avatarUrl": "/avatars/c020b7d2c6c2bb1ca289a9cf0c4eaf00.svg",
"followerCount": null,
"fullname": "Oleh Rybkin",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "orybkin",
"type": "user"
} | true | [
"https://cdn-uploads.huggingface.co/production/uploads/64d1161315b26cc7f70f37e6/BNKgENZKBSBIAAhDgUpwL.qt"
] | 2502.04327 | [
{
"_id": "67aa44a0927861da2b7a3479",
"hidden": false,
"name": "Oleh Rybkin",
"status": "extracted_pending",
"statusLastChangedAt": "2025-02-10T18:25:37.833Z",
"user": {
"_id": "64d1161315b26cc7f70f37e6",
"avatarUrl": "/avatars/c020b7d2c6c2bb1ca289a9cf0c4eaf00.svg",
"fullname": "Oleh Rybkin",
"isPro": false,
"type": "user",
"user": "orybkin"
}
},
{
"_id": "67aa44a0927861da2b7a347a",
"hidden": false,
"name": "Michal Nauman",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aa44a0927861da2b7a347b",
"hidden": false,
"name": "Preston Fu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aa44a0927861da2b7a347c",
"hidden": false,
"name": "Charlie Snell",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aa44a0927861da2b7a347d",
"hidden": false,
"name": "Pieter Abbeel",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aa44a0927861da2b7a347e",
"hidden": false,
"name": "Sergey Levine",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67aa44a0927861da2b7a347f",
"hidden": false,
"name": "Aviral Kumar",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-11T14:25:38.016Z",
"user": {
"_id": "67315a324d2eab8035de786a",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/no-auth/INH5_PlcJYfYcGkyibjgP.png",
"fullname": "Aviral Kumar",
"isPro": false,
"type": "user",
"user": "aviralku"
}
}
] | 2025-02-06T18:59:47 | Value-Based Deep RL Scales Predictably | Scaling data and compute is critical to the success of machine learning.
However, scaling demands predictability: we want methods to not only perform
well with more compute or data, but also have their performance be predictable
from small-scale runs, without running the large-scale experiment. In this
paper, we show that value-based off-policy RL methods are predictable despite
community lore regarding their pathological behavior. First, we show that data
and compute requirements to attain a given performance level lie on a Pareto
frontier, controlled by the updates-to-data (UTD) ratio. By estimating this
frontier, we can predict this data requirement when given more compute, and
this compute requirement when given more data. Second, we determine the optimal
allocation of a total resource budget across data and compute for a given
performance and use it to determine hyperparameters that maximize performance
for a given budget. Third, this scaling behavior is enabled by first estimating
predictable relationships between hyperparameters, which is used to manage
effects of overfitting and plasticity loss unique to RL. We validate our
approach using three algorithms: SAC, BRO, and PQL on DeepMind Control, OpenAI
Gym, and IsaacGym, when extrapolating to higher levels of data, compute,
budget, or performance. | 6 | 67aa44a1927861da2b7a34bc | null | null |
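The extrapolation workflow here rests on fitting simple predictable relationships (e.g., a power-law data-compute frontier indexed by the UTD ratio) from small runs and reading off requirements at larger scale. The snippet below fits a log-log linear frontier to fabricated small-scale points and extrapolates; both the data values and the functional form are assumptions for illustration.

```python
import numpy as np

# Toy small-scale measurements: data needed (env steps) to hit a fixed
# return at several compute budgets (gradient updates). Assumed to lie on
# a power-law Pareto frontier controlled by the UTD ratio.
compute = np.array([1e5, 2e5, 4e5, 8e5])
data = np.array([9.5e5, 6.8e5, 5.1e5, 3.6e5])

# Fit log(data) = a + b * log(compute) by least squares.
b, a = np.polyfit(np.log(compute), np.log(data), deg=1)

def data_required(c):
    """Predicted data requirement at a larger compute budget."""
    return float(np.exp(a + b * np.log(c)))

print(f"fit exponent b = {b:.2f}")
print(f"predicted data at 3.2e6 updates: {data_required(3.2e6):.3g}")
```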
|
2025-02-10T08:59:36.230000 | Lost in Time: Clock and Calendar Understanding Challenges in Multimodal LLMs | 4 | {
"_id": "657ccbf2869d5bb0e53b482f",
"avatarUrl": "/avatars/2eae5a10bdc14814a04d9f255f16de6b.svg",
"followerCount": 4,
"fullname": "Rohit Saxena",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "rohitsaxena",
"type": "user"
} | true | null | 2502.05092 | [
{
"_id": "67aa05c5ffb9f6b5b2f658b2",
"hidden": false,
"name": "Rohit Saxena",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-27T09:18:07.805Z",
"user": {
"_id": "657ccbf2869d5bb0e53b482f",
"avatarUrl": "/avatars/2eae5a10bdc14814a04d9f255f16de6b.svg",
"fullname": "Rohit Saxena",
"isPro": false,
"type": "user",
"user": "rohitsaxena"
}
},
{
"_id": "67aa05c5ffb9f6b5b2f658b3",
"hidden": false,
"name": "Aryo Pradipta Gema",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-13T08:26:03.971Z",
"user": {
"_id": "644f895e23d7eb05ca695054",
"avatarUrl": "/avatars/3fb04dd8544b403262bf98507de05453.svg",
"fullname": "Aryo Pradipta Gema",
"isPro": true,
"type": "user",
"user": "aryopg"
}
},
{
"_id": "67aa05c5ffb9f6b5b2f658b4",
"hidden": false,
"name": "Pasquale Minervini",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-10T15:51:20.711Z",
"user": {
"_id": "61001311e043e15c13412d30",
"avatarUrl": "/avatars/eea1e4c39decee282f2940d122090491.svg",
"fullname": "Pasquale Minervini",
"isPro": false,
"type": "user",
"user": "pminervini"
}
}
] | 2025-02-07T17:11:23 | Lost in Time: Clock and Calendar Understanding Challenges in Multimodal
LLMs | Understanding time from visual representations is a fundamental cognitive
skill, yet it remains a challenge for multimodal large language models (MLLMs).
In this work, we investigate the capabilities of MLLMs in interpreting time and
date through analogue clocks and yearly calendars. To facilitate this, we
curated a structured dataset comprising two subsets: 1) ClockQA,
which covers various clock styles (standard, black-dial, no-second-hand,
Roman numeral, and arrow-hand clocks) paired with time-related
questions; and 2) CalendarQA, which consists of yearly calendar
images with questions ranging from commonly known dates (e.g., Christmas, New
Year's Day) to computationally derived ones (e.g., the 100th or 153rd day of
the year). We aim to analyse how MLLMs can perform visual recognition,
numerical reasoning, and temporal inference when presented with time-related
visual data. Our evaluations show that despite recent advancements, reliably
understanding time remains a significant challenge for MLLMs. | 7 | 67aa05c6ffb9f6b5b2f658fb | null | null |
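The "computationally derived" CalendarQA questions (e.g., the 100th or 153rd day of the year) reduce to simple date arithmetic that the MLLM must perform from a rendered calendar image. For reference, the ground-truth computation is a two-liner:

```python
from datetime import date, timedelta

def nth_day_of_year(year: int, n: int) -> date:
    """Ground truth for 'what is the nth day of the year?' questions."""
    return date(year, 1, 1) + timedelta(days=n - 1)

print(nth_day_of_year(2025, 100))  # 2025-04-10
print(nth_day_of_year(2025, 153))  # 2025-06-02
```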
|
2025-02-10T07:46:25.333000 | No Task Left Behind: Isotropic Model Merging with Common and Task-Specific Subspaces | 2 | {
"_id": "65a5358ddb5c00652ef24c8d",
"avatarUrl": "/avatars/d50b6297584c9b4c2ccd93e64477b940.svg",
"followerCount": null,
"fullname": "Daniel Marczak",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "danielm1405",
"type": "user"
} | true | null | 2502.04959 | [
{
"_id": "67a9f4900b97667e0a82ad3d",
"hidden": false,
"name": "Daniel Marczak",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-10T13:11:49.393Z",
"user": {
"_id": "65a5358ddb5c00652ef24c8d",
"avatarUrl": "/avatars/d50b6297584c9b4c2ccd93e64477b940.svg",
"fullname": "Daniel Marczak",
"isPro": false,
"type": "user",
"user": "danielm1405"
}
},
{
"_id": "67a9f4900b97667e0a82ad3e",
"hidden": false,
"name": "Simone Magistri",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a9f4900b97667e0a82ad3f",
"hidden": false,
"name": "Sebastian Cygert",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-10T16:17:02.946Z",
"user": {
"_id": "6763f031c0a39e58c57ed9f9",
"avatarUrl": "/avatars/2e864838a571f52b5316f90d60b763f1.svg",
"fullname": "Sebastian Cygert",
"isPro": false,
"type": "user",
"user": "cygerts"
}
},
{
"_id": "67a9f4900b97667e0a82ad40",
"hidden": false,
"name": "Bartłomiej Twardowski",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a9f4900b97667e0a82ad41",
"hidden": false,
"name": "Andrew D. Bagdanov",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a9f4900b97667e0a82ad42",
"hidden": false,
"name": "Joost van de Weijer",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-07T14:22:56 | No Task Left Behind: Isotropic Model Merging with Common and
Task-Specific Subspaces | Model merging integrates the weights of multiple task-specific models into a
single multi-task model. Despite recent interest in the problem, a significant
performance gap between the combined and single-task models remains. In this
paper, we investigate the key characteristics of task matrices -- weight update
matrices applied to a pre-trained model -- that enable effective merging. We
show that alignment between singular components of task-specific and merged
matrices strongly correlates with performance improvement over the pre-trained
model. Based on this, we propose an isotropic merging framework that flattens
the singular value spectrum of task matrices, enhances alignment, and reduces
the performance gap. Additionally, we incorporate both common and task-specific
subspaces to further improve alignment and performance. Our proposed approach
achieves state-of-the-art performance across multiple scenarios, including
various sets of tasks and model scales. This work advances the understanding of
model merging dynamics, offering an effective methodology to merge models
without requiring additional training. Code is available at
https://github.com/danielm1405/iso-merging. | 11 | 67a9f4920b97667e0a82adeb | null | null |
|
2025-02-10T05:25:07.375000 | CMoE: Fast Carving of Mixture-of-Experts for Efficient LLM Inference | 2 | {
"_id": "6527c063e86758eb6ca800a1",
"avatarUrl": "/avatars/9091be87eea518209c1de9eebfa663c0.svg",
"followerCount": null,
"fullname": "JarvisPei",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "Eleven-P",
"type": "user"
} | true | null | 2502.04416 | [
{
"_id": "67a970920d2e1d1311d04053",
"hidden": false,
"name": "Zehua Pei",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-10T09:50:03.881Z",
"user": {
"_id": "6527c063e86758eb6ca800a1",
"avatarUrl": "/avatars/9091be87eea518209c1de9eebfa663c0.svg",
"fullname": "JarvisPei",
"isPro": false,
"type": "user",
"user": "Eleven-P"
}
},
{
"_id": "67a970920d2e1d1311d04054",
"hidden": false,
"name": "Lancheng Zou",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-10T13:12:04.847Z",
"user": {
"_id": "65392c3429f8a911550fb9d8",
"avatarUrl": "/avatars/f12cd0ace82072817baea0d72f158de5.svg",
"fullname": "LANCHENG ZOU",
"isPro": false,
"type": "user",
"user": "culczou"
}
},
{
"_id": "67a970920d2e1d1311d04055",
"hidden": false,
"name": "Hui-Ling Zhen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a970920d2e1d1311d04056",
"hidden": false,
"name": "Xianzhi Yu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a970920d2e1d1311d04057",
"hidden": false,
"name": "Wulong Liu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-10T16:18:49.935Z",
"user": {
"_id": "67444dc518f0b9f39c48aa2f",
"avatarUrl": "/avatars/c911660165cc4c0abf6e0dcf6fa46034.svg",
"fullname": "liuwulong",
"isPro": false,
"type": "user",
"user": "long202005589"
}
},
{
"_id": "67a970920d2e1d1311d04058",
"hidden": false,
"name": "Sinno Jialin Pan",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-10T16:19:06.682Z",
"user": {
"_id": "6751cc1807c0a99c402af739",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/no-auth/GPK4RlHBRFxbgrSKfkxUz.png",
"fullname": "Sinno Pan",
"isPro": false,
"type": "user",
"user": "SinnoPan"
}
},
{
"_id": "67a970920d2e1d1311d04059",
"hidden": false,
"name": "Mingxuan Yuan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a970920d2e1d1311d0405a",
"hidden": false,
"name": "Bei Yu",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-06T14:05:30 | CMoE: Fast Carving of Mixture-of-Experts for Efficient LLM Inference | Large language models (LLMs) achieve impressive performance by scaling model
parameters, but this comes with significant inference overhead. Feed-forward
networks (FFNs), which dominate LLM parameters, exhibit high activation
sparsity in hidden neurons. To exploit this, researchers have proposed using a
mixture-of-experts (MoE) architecture, where only a subset of parameters is
activated. However, existing approaches often require extensive training data
and resources, limiting their practicality. We propose CMoE (Carved MoE), a
novel framework to efficiently carve MoE models from dense models. CMoE
achieves remarkable performance through efficient expert grouping and
lightweight adaptation. First, neurons are grouped into shared and routed
experts based on activation rates. Next, we construct a routing mechanism
without training from scratch, incorporating a differentiable routing process
and load balancing. Using modest data, CMoE produces a well-designed, usable
MoE from a 7B dense model within five minutes. With lightweight fine-tuning, it
achieves high-performance recovery in under an hour. We make our code publicly
available at https://github.com/JarvisPei/CMoE. | 12 | 67a970970d2e1d1311d040ff | null | null |
|
2025-02-10T03:30:51.974000 | ARR: Question Answering with Large Language Models via Analyzing, Retrieving, and Reasoning | 3 | {
"_id": "64510a21f800611f94f0d9f8",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/lOeHK9Bvt3IXcB7Urx6jZ.jpeg",
"followerCount": 4,
"fullname": "Yuwei Yin",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "yuweiyin",
"type": "user"
} | true | null | 2502.04689 | [
{
"_id": "67a9b911b1f5eece682d7961",
"hidden": false,
"name": "Yuwei Yin",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-10T09:49:32.672Z",
"user": {
"_id": "64510a21f800611f94f0d9f8",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/lOeHK9Bvt3IXcB7Urx6jZ.jpeg",
"fullname": "Yuwei Yin",
"isPro": false,
"type": "user",
"user": "yuweiyin"
}
},
{
"_id": "67a9b911b1f5eece682d7962",
"hidden": false,
"name": "Giuseppe Carenini",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-07T06:30:33 | ARR: Question Answering with Large Language Models via Analyzing,
Retrieving, and Reasoning | Large language models (LLMs) achieve remarkable performance on challenging
benchmarks that are often structured as multiple-choice question-answering (QA)
tasks. Zero-shot Chain-of-Thought (CoT) prompting enhances reasoning in LLMs
but provides only vague and generic guidance ("think step by step"). This paper
introduces ARR, an intuitive and effective zero-shot prompting method that
explicitly incorporates three key steps in QA solving: analyzing the intent of
the question, retrieving relevant information, and reasoning step by step.
Comprehensive experiments across diverse and challenging QA tasks demonstrate
that ARR consistently improves the Baseline (without ARR prompting) and
outperforms CoT. Ablation and case studies further validate the positive
contributions of each component: analyzing, retrieving, and reasoning. Notably,
intent analysis plays a vital role in ARR. Additionally, extensive evaluations
across various model sizes, LLM series, and generation settings solidify the
effectiveness, robustness, and generalizability of ARR. | 7 | 67a9b911b1f5eece682d798c | null | null |
|
2025-02-10T03:00:12.065000 | QuEST: Stable Training of LLMs with 1-Bit Weights and Activations | 3 | {
"_id": "64ef52c2718f94ae8e78a5e7",
"avatarUrl": "/avatars/d169f4ee62786a3eb4a3fa9d1fec52e9.svg",
"followerCount": 6,
"fullname": "Alistarh",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "d-alistarh",
"type": "user"
} | true | null | 2502.05003 | [
{
"_id": "67a9b1a69a99341e859c488d",
"hidden": false,
"name": "Andrei Panferov",
"status": "extracted_confirmed",
"statusLastChangedAt": "2025-02-10T08:09:18.686Z",
"user": {
"_id": "623753b5eddd7763adc9346a",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/623753b5eddd7763adc9346a/rcpQAKZNrkn1-tMtraQBX.jpeg",
"fullname": "Andrei Panferov",
"isPro": false,
"type": "user",
"user": "BlackSamorez"
}
},
{
"_id": "67a9b1a69a99341e859c488e",
"hidden": false,
"name": "Jiale Chen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a9b1a69a99341e859c488f",
"hidden": false,
"name": "Soroush Tabesh",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-10T09:49:37.573Z",
"user": {
"_id": "632a2e325f2ff1958c0103be",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/632a2e325f2ff1958c0103be/Tb0ql9e4LcaFktTK1hzqe.jpeg",
"fullname": "Soroush Tabesh",
"isPro": false,
"type": "user",
"user": "soroushtabesh"
}
},
{
"_id": "67a9b1a69a99341e859c4890",
"hidden": false,
"name": "Roberto L. Castro",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a9b1a69a99341e859c4891",
"hidden": false,
"name": "Mahdi Nikdan",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-10T09:50:25.944Z",
"user": {
"_id": "6526b8ebba9a8279c139616b",
"avatarUrl": "/avatars/09f6b677603a03be128996a0765233e6.svg",
"fullname": "Mahdi Nikdan",
"isPro": false,
"type": "user",
"user": "mnikdan97"
}
},
{
"_id": "67a9b1a69a99341e859c4892",
"hidden": false,
"name": "Dan Alistarh",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-10T09:49:35.449Z",
"user": {
"_id": "64ef52c2718f94ae8e78a5e7",
"avatarUrl": "/avatars/d169f4ee62786a3eb4a3fa9d1fec52e9.svg",
"fullname": "Alistarh",
"isPro": false,
"type": "user",
"user": "d-alistarh"
}
}
] | 2025-02-07T15:23:34 | QuEST: Stable Training of LLMs with 1-Bit Weights and Activations | One approach to reducing the massive costs of large language models (LLMs) is
the use of quantized or sparse representations for training or deployment.
While post-training compression methods are very popular, the question of
obtaining even more accurate compressed models by directly training over such
representations, i.e., Quantization-Aware Training (QAT), is still open: for
example, a recent study (arXiv:2411.04330v2) put the "optimal" bit-width at
which models can be trained using QAT, while staying accuracy-competitive with
standard FP16/BF16 precision, at 8 bits for weights and activations.
We advance this state-of-the-art via a new method called QuEST, which is
Pareto-competitive with FP16, i.e., it provides better accuracy at lower model
size, while training models with weights and activations in 4 bits or less.
Moreover, QuEST allows stable training with 1-bit weights and activations.
QuEST achieves this by improving two key aspects of QAT methods: (1) accurate
and fast quantization of the (continuous) distributions of weights and
activations via Hadamard normalization and MSE-optimal fitting; (2) a new trust
gradient estimator based on the idea of explicitly minimizing the error between
the noisy gradient computed over quantized states and the "true" (but unknown)
full-precision gradient. Experiments on Llama-type architectures show that
QuEST induces stable scaling laws across the entire range of hardware-supported
precisions, and can be extended to sparse representations. We provide GPU
kernel support showing that models produced by QuEST can be executed
efficiently. Our code is available at https://github.com/IST-DASLab/QuEST. | 42 | 67a9b1a79a99341e859c48c7 | null | null |
|
2025-02-10T02:34:31.480000 | Scaling Laws in Patchification: An Image Is Worth 50,176 Tokens And More | 2 | {
"_id": "5f1158120c833276f61f1a84",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1608042047613-5f1158120c833276f61f1a84.jpeg",
"followerCount": 777,
"fullname": "Niels Rogge",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "nielsr",
"type": "user"
} | false | null | 2502.03738 | [
{
"_id": "67a8d049406cb5a65f847eb1",
"hidden": false,
"name": "Feng Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a8d049406cb5a65f847eb2",
"hidden": false,
"name": "Yaodong Yu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-10T16:22:16.391Z",
"user": {
"_id": "6100e69a393be1b5c4c83867",
"avatarUrl": "/avatars/1b87098cffb9c50345789808daea4f68.svg",
"fullname": "Yaodong Yu",
"isPro": false,
"type": "user",
"user": "yaodongyu"
}
},
{
"_id": "67a8d049406cb5a65f847eb3",
"hidden": false,
"name": "Guoyizhe Wei",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a8d049406cb5a65f847eb4",
"hidden": false,
"name": "Wei Shao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a8d049406cb5a65f847eb5",
"hidden": false,
"name": "Yuyin Zhou",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-10T16:22:41.354Z",
"user": {
"_id": "66c7fb4ce2c92fe5b132f314",
"avatarUrl": "/avatars/22d915fa339a70803c5c748255250256.svg",
"fullname": "Yuyin Zhou",
"isPro": false,
"type": "user",
"user": "RitaCoding"
}
},
{
"_id": "67a8d049406cb5a65f847eb6",
"hidden": false,
"name": "Alan Yuille",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a8d049406cb5a65f847eb7",
"hidden": false,
"name": "Cihang Xie",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-10T16:22:31.527Z",
"user": {
"_id": "645eb61da3c5cd8a16efffff",
"avatarUrl": "/avatars/9112bfeed598dfabf9e077e69e09ecc9.svg",
"fullname": "Cihang Xie",
"isPro": false,
"type": "user",
"user": "cihangxie"
}
}
] | 2025-02-06T03:01:38 | Scaling Laws in Patchification: An Image Is Worth 50,176 Tokens And More | Since the introduction of Vision Transformer (ViT), patchification has long
been regarded as a de facto image tokenization approach for plain visual
architectures. By compressing the spatial size of images, this approach can
effectively shorten the token sequence and reduce the computational cost of
ViT-like plain architectures. In this work, we aim to thoroughly examine the
information loss caused by this patchification-based compressive encoding
paradigm and how it affects visual understanding. We conduct extensive patch
size scaling experiments and observe an intriguing scaling law in
patchification: models consistently benefit from decreased patch sizes,
attaining improved predictive performance until the patch size reaches the
minimum of 1x1, i.e., pixel tokenization. This conclusion is broadly applicable
across different vision tasks, various input scales, and diverse architectures
such as ViT and the recent Mamba models. Moreover, as a by-product, we discover
that with smaller patches, task-specific decoder heads become less critical for
dense prediction. In the experiments, we successfully scale up the visual
sequence to an exceptional length of 50,176 tokens, achieving a competitive
test accuracy of 84.6% with a base-sized model on the ImageNet-1k benchmark. We
hope this study can provide insights and theoretical foundations for future
works of building non-compressive vision models. Code is available at
https://github.com/wangf3014/Patch_Scaling. | 10 | 67a8d04a406cb5a65f847ed3 | null | null |
|
2025-02-10T02:21:52.370000 | YINYANG-ALIGN: Benchmarking Contradictory Objectives and Proposing Multi-Objective Optimization based DPO for Text-to-Image Alignment | 2 | {
"_id": "63a4754927f1f64ed7238dac",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63a4754927f1f64ed7238dac/aH-eJF-31g4vof9jv2gmI.jpeg",
"followerCount": 3,
"fullname": "Aman Chadha",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "amanchadha",
"type": "user"
} | true | null | 2502.03512 | [
{
"_id": "67a9a7cb6be3ca4a7ede471e",
"hidden": false,
"name": "Amitava Das",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a9a7cb6be3ca4a7ede471f",
"hidden": false,
"name": "Yaswanth Narsupalli",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a9a7cb6be3ca4a7ede4720",
"hidden": false,
"name": "Gurpreet Singh",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a9a7cb6be3ca4a7ede4721",
"hidden": false,
"name": "Vinija Jain",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a9a7cb6be3ca4a7ede4722",
"hidden": false,
"name": "Vasu Sharma",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a9a7cb6be3ca4a7ede4723",
"hidden": false,
"name": "Suranjana Trivedy",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a9a7cb6be3ca4a7ede4724",
"hidden": false,
"name": "Aman Chadha",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-10T09:49:39.550Z",
"user": {
"_id": "63a4754927f1f64ed7238dac",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63a4754927f1f64ed7238dac/aH-eJF-31g4vof9jv2gmI.jpeg",
"fullname": "Aman Chadha",
"isPro": false,
"type": "user",
"user": "amanchadha"
}
},
{
"_id": "67a9a7cb6be3ca4a7ede4725",
"hidden": false,
"name": "Amit Sheth",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-05T18:46:20 | YINYANG-ALIGN: Benchmarking Contradictory Objectives and Proposing
Multi-Objective Optimization based DPO for Text-to-Image Alignment | Precise alignment in Text-to-Image (T2I) systems is crucial to ensure that
generated visuals not only accurately encapsulate user intents but also conform
to stringent ethical and aesthetic benchmarks. Incidents like the Google Gemini
fiasco, where misaligned outputs triggered significant public backlash,
underscore the critical need for robust alignment mechanisms. In contrast,
Large Language Models (LLMs) have achieved notable success in alignment.
Building on these advancements, researchers are eager to apply similar
alignment techniques, such as Direct Preference Optimization (DPO), to T2I
systems to enhance image generation fidelity and reliability.
We present YinYangAlign, an advanced benchmarking framework that
systematically quantifies the alignment fidelity of T2I systems, addressing six
fundamental and inherently contradictory design objectives. Each pair
represents fundamental tensions in image generation, such as balancing
adherence to user prompts with creative modifications or maintaining diversity
alongside visual coherence. YinYangAlign includes detailed axiom datasets
featuring human prompts, aligned (chosen) responses, misaligned (rejected)
AI-generated outputs, and explanations of the underlying contradictions. | 5 | 67a9a7cf6be3ca4a7ede47d5 | null | null |
|
2025-02-10T01:35:35.818000 | QLIP: Text-Aligned Visual Tokenization Unifies Auto-Regressive Multimodal Understanding and Generation | 2 | {
"_id": "638fe91639f7e2a7f9d2a8c6",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/638fe91639f7e2a7f9d2a8c6/hB7DMVODcdAEUdQnXxWA8.jpeg",
"followerCount": 3,
"fullname": "Yue Zhao",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "zhaoyue-zephyrus",
"type": "user"
} | true | null | 2502.05178 | [
{
"_id": "67a99dfe98423dca45d8f659",
"hidden": false,
"name": "Yue Zhao",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-10T09:49:43.493Z",
"user": {
"_id": "638fe91639f7e2a7f9d2a8c6",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/638fe91639f7e2a7f9d2a8c6/hB7DMVODcdAEUdQnXxWA8.jpeg",
"fullname": "Yue Zhao",
"isPro": false,
"type": "user",
"user": "zhaoyue-zephyrus"
}
},
{
"_id": "67a99dfe98423dca45d8f65a",
"hidden": false,
"name": "Fuzhao Xue",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a99dfe98423dca45d8f65b",
"hidden": false,
"name": "Scott Reed",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a99dfe98423dca45d8f65c",
"hidden": false,
"name": "Linxi Fan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a99dfe98423dca45d8f65d",
"hidden": false,
"name": "Yuke Zhu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a99dfe98423dca45d8f65e",
"hidden": false,
"name": "Jan Kautz",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a99dfe98423dca45d8f65f",
"hidden": false,
"name": "Zhiding Yu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a99dfe98423dca45d8f660",
"hidden": false,
"name": "Philipp Krähenbühl",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a99dfe98423dca45d8f661",
"hidden": false,
"name": "De-An Huang",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-07T18:59:57 | QLIP: Text-Aligned Visual Tokenization Unifies Auto-Regressive
Multimodal Understanding and Generation | We introduce Quantized Language-Image Pretraining (QLIP), a visual
tokenization method that combines state-of-the-art reconstruction quality with
state-of-the-art zero-shot image understanding. QLIP trains a
binary-spherical-quantization-based autoencoder with reconstruction and
language-image alignment objectives. We are the first to show that the two
objectives do not need to be at odds. We balance the two loss terms dynamically
during training and show that a two-stage training pipeline effectively mixes
the large-batch requirements of image-language pre-training with the memory
bottleneck imposed by the reconstruction objective. We validate the
effectiveness of QLIP for multimodal understanding and text-conditioned image
generation with a single model. Specifically, QLIP serves as a drop-in
replacement for the visual encoder for LLaVA and the image tokenizer for
LlamaGen with comparable or even better performance. Finally, we demonstrate
that QLIP enables a unified mixed-modality auto-regressive model for
understanding and generation. | 10 | 67a99dfe98423dca45d8f691 | null | null |
|
2025-02-10T01:15:52.070000 | MEETING DELEGATE: Benchmarking LLMs on Attending Meetings on Our Behalf | 3 | {
"_id": "662b0bc9c709a61df8291c0f",
"avatarUrl": "/avatars/16dd4d945e9fbef5ac889a8087101ded.svg",
"followerCount": null,
"fullname": "Xiaoting Qin",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "XiaotingQin",
"type": "user"
} | false | null | 2502.04376 | [
{
"_id": "67a998fe495b23306cdbf51d",
"hidden": false,
"name": "Lingxiang Hu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a998fe495b23306cdbf51e",
"hidden": false,
"name": "Shurun Yuan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a998fe495b23306cdbf51f",
"hidden": false,
"name": "Xiaoting Qin",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a998fe495b23306cdbf520",
"hidden": false,
"name": "Jue Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a998fe495b23306cdbf521",
"hidden": false,
"name": "Qingwei Lin",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a998fe495b23306cdbf522",
"hidden": false,
"name": "Dongmei Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a998fe495b23306cdbf523",
"hidden": false,
"name": "Saravan Rajmohan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a998fe495b23306cdbf524",
"hidden": false,
"name": "Qi Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-05T16:25:43 | MEETING DELEGATE: Benchmarking LLMs on Attending Meetings on Our Behalf | In contemporary workplaces, meetings are essential for exchanging ideas and
ensuring team alignment but often face challenges such as time consumption,
scheduling conflicts, and inefficient participation. Recent advancements in
Large Language Models (LLMs) have demonstrated their strong capabilities in
natural language generation and reasoning, prompting the question: can LLMs
effectively stand in for participants in meetings? To explore this, we develop a
prototype LLM-powered meeting delegate system and create a comprehensive
benchmark using real meeting transcripts. Our evaluation reveals that GPT-4/4o
maintain balanced performance between active and cautious engagement
strategies. In contrast, Gemini 1.5 Pro tends to be more cautious, while Gemini
1.5 Flash and Llama3-8B/70B display more active tendencies. Overall, about 60%
of responses address at least one key point from the ground truth. However,
improvements are needed to reduce irrelevant or repetitive content and enhance
tolerance for transcription errors commonly found in real-world settings.
Additionally, we implement the system in practical settings and collect
real-world feedback from demos. Our findings underscore the potential and
challenges of utilizing LLMs as meeting delegates, offering valuable insights
into their practical application for alleviating the burden of meetings. | 3 | 67a99900495b23306cdbf57e | null | null |
|
2025-02-10T00:43:32.191000 | DuoGuard: A Two-Player RL-Driven Framework for Multilingual LLM Guardrails | 2 | {
"_id": "642f4c789b2484d7d8551a93",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/642f4c789b2484d7d8551a93/0lH4YXcbZa-Xlzj6ESo7F.jpeg",
"followerCount": 8,
"fullname": "Yihe Deng",
"isHf": false,
"isMod": false,
"isPro": true,
"name": "ydeng9",
"type": "user"
} | true | null | 2502.05163 | [
{
"_id": "67a9604851169a582d14c113",
"hidden": false,
"name": "Yihe Deng",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-10T09:50:06.136Z",
"user": {
"_id": "642f4c789b2484d7d8551a93",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/642f4c789b2484d7d8551a93/0lH4YXcbZa-Xlzj6ESo7F.jpeg",
"fullname": "Yihe Deng",
"isPro": true,
"type": "user",
"user": "ydeng9"
}
},
{
"_id": "67a9604851169a582d14c114",
"hidden": false,
"name": "Yu Yang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-11T10:01:45.439Z",
"user": {
"_id": "62f82e52870a3f98bbf9e302",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/62f82e52870a3f98bbf9e302/5pN3oNBouZWlYu-uKa7lA.jpeg",
"fullname": "Yu Yang",
"isPro": false,
"type": "user",
"user": "yuyangy"
}
},
{
"_id": "67a9604851169a582d14c115",
"hidden": false,
"name": "Junkai Zhang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-10T16:11:40.494Z",
"user": {
"_id": "64e7bb81b159a6f87be99459",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/64e7bb81b159a6f87be99459/cxvzoEHg1YATnPJ9d3PTg.jpeg",
"fullname": "Junkai Zhang",
"isPro": false,
"type": "user",
"user": "JunkaiZ"
}
},
{
"_id": "67a9604851169a582d14c116",
"hidden": false,
"name": "Wei Wang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-10T16:11:53.569Z",
"user": {
"_id": "62fa0ffe0697d224219a0cb7",
"avatarUrl": "/avatars/f0ef59e1c0cf4ab4fe5cee08d488bd03.svg",
"fullname": "Wei Wang",
"isPro": false,
"type": "user",
"user": "WeiWang"
}
},
{
"_id": "67a9604851169a582d14c117",
"hidden": false,
"name": "Bo Li",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-10T16:12:08.449Z",
"user": {
"_id": "6493236b70d925ae8050a1bf",
"avatarUrl": "/avatars/b16069de1445cfa8608567175deaa2ae.svg",
"fullname": "Bo Li",
"isPro": false,
"type": "user",
"user": "BoLi-aisecure"
}
}
] | 2025-02-07T18:45:03 | DuoGuard: A Two-Player RL-Driven Framework for Multilingual LLM
Guardrails | The rapid advancement of large language models (LLMs) has increased the need
for guardrail models to ensure responsible use, particularly in detecting
unsafe and illegal content. While substantial safety data exist in English,
multilingual guardrail modeling remains underexplored due to the scarcity of
open-source safety data in other languages. To address this gap, we propose a
novel two-player Reinforcement Learning (RL) framework, where a generator and a
guardrail model co-evolve adversarially to produce high-quality synthetic data
for multilingual guardrail training. We theoretically formalize this
interaction as a two-player game, proving convergence to a Nash equilibrium.
Empirical evaluations show that our model, DuoGuard, outperforms state-of-the-art
models, achieving nearly 10% improvement over LlamaGuard3 (8B) on English
benchmarks while being 4.5x faster at inference with a significantly smaller
model (0.5B). We achieve substantial advancements in multilingual safety tasks,
particularly in addressing the imbalance for lower-resource languages in a
collected real dataset. Ablation studies emphasize the critical role of
synthetic data generation in bridging the imbalance in open-source data between
English and other languages. These findings establish a scalable and efficient
approach to synthetic data generation, paving the way for improved multilingual
guardrail models to enhance LLM safety. Code, model, and data will be
open-sourced at https://github.com/yihedeng9/DuoGuard. | 22 | 67a9604951169a582d14c14d | null | null |
|
2025-02-10T00:35:37.019000 | FlashVideo:Flowing Fidelity to Detail for Efficient High-Resolution Video Generation | 3 | {
"_id": "60f1abe7544c2adfd699860c",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"followerCount": 6280,
"fullname": "AK",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "akhaliq",
"type": "user"
} | true | null | 2502.05179 | [
{
"_id": "67a9901cc0310368e2488929",
"hidden": false,
"name": "Shilong Zhang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-10T16:14:33.747Z",
"user": {
"_id": "6424ffce46d202ad3d918a67",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6424ffce46d202ad3d918a67/gmYmOA072fP_5cJLc9Qs4.jpeg",
"fullname": "Shilong Zhang",
"isPro": false,
"type": "user",
"user": "shilongz"
}
},
{
"_id": "67a9901cc0310368e248892a",
"hidden": false,
"name": "Wenbo Li",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-10T16:14:43.524Z",
"user": {
"_id": "6538cc7f43d9189cdcbd1e6a",
"avatarUrl": "/avatars/6d06005601aeb665de37cc93f1fd03d3.svg",
"fullname": "wenboli",
"isPro": false,
"type": "user",
"user": "wenboli"
}
},
{
"_id": "67a9901cc0310368e248892b",
"hidden": false,
"name": "Shoufa Chen",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-10T09:49:46.264Z",
"user": {
"_id": "6412a33900634c4fe9873652",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6412a33900634c4fe9873652/Nmn_yRA1gGD2VO1YbSOYF.jpeg",
"fullname": "Shoufa Chen",
"isPro": false,
"type": "user",
"user": "ShoufaChen"
}
},
{
"_id": "67a9901cc0310368e248892c",
"hidden": false,
"name": "Chongjian Ge",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-10T16:14:54.589Z",
"user": {
"_id": "620f126891e167b068fa76f8",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/620f126891e167b068fa76f8/NaPyS5lFjgZYJZrWaf0OI.jpeg",
"fullname": "ChongjianGE",
"isPro": false,
"type": "user",
"user": "RhettGee"
}
},
{
"_id": "67a9901cc0310368e248892d",
"hidden": false,
"name": "Peize Sun",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-10T16:15:01.448Z",
"user": {
"_id": "640dc9bf8512ec51d7f0ac1a",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/640dc9bf8512ec51d7f0ac1a/sT4rdEoQbzfW6D3xDVdqt.jpeg",
"fullname": "peizesun",
"isPro": false,
"type": "user",
"user": "peizesun"
}
},
{
"_id": "67a9901cc0310368e248892e",
"hidden": false,
"name": "Yida Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a9901cc0310368e248892f",
"hidden": false,
"name": "Yi Jiang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-03-02T20:18:49.413Z",
"user": {
"_id": "6344dcb1cd37e44d9ed46508",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6344dcb1cd37e44d9ed46508/J92UKSxKR3iziD2WJfih4.jpeg",
"fullname": "Yi Jiang",
"isPro": false,
"type": "user",
"user": "JiangYi"
}
},
{
"_id": "67a9901cc0310368e2488930",
"hidden": false,
"name": "Zehuan Yuan",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-10T16:15:29.017Z",
"user": {
"_id": "661a80af3557013b638061d5",
"avatarUrl": "/avatars/4c551aeb223e257a5fc45b5b6c7ded49.svg",
"fullname": "Zehuan Yuan",
"isPro": false,
"type": "user",
"user": "sweetrabor"
}
},
{
"_id": "67a9901cc0310368e2488931",
"hidden": false,
"name": "Binyue Peng",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a9901cc0310368e2488932",
"hidden": false,
"name": "Ping Luo",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-07T18:59:59 | FlashVideo:Flowing Fidelity to Detail for Efficient High-Resolution
Video Generation | DiT diffusion models have achieved great success in text-to-video generation,
leveraging their scalability in model capacity and data scale. High content and
motion fidelity aligned with text prompts, however, often require large model
parameters and a substantial number of function evaluations (NFEs). Realistic
and visually appealing details are typically reflected in high-resolution
outputs, further amplifying computational demands, especially for single-stage
DiT models. To address these challenges, we propose a novel two-stage
framework, FlashVideo, which strategically allocates model capacity and NFEs
across stages to balance generation fidelity and quality. In the first stage,
prompt fidelity is prioritized through a low-resolution generation process
utilizing large parameters and sufficient NFEs to enhance computational
efficiency. The second stage establishes flow matching between low and high
resolutions, effectively generating fine details with minimal NFEs.
Quantitative and visual results demonstrate that FlashVideo achieves
state-of-the-art high-resolution video generation with superior computational
efficiency. Additionally, the two-stage design enables users to preview the
initial output before committing to full-resolution generation, thereby
significantly reducing computational costs and wait times as well as enhancing
commercial viability. | 24 | 67a9901ec0310368e24889c2 | null | null |
|
2025-02-10T00:22:26.568000 | Fast Video Generation with Sliding Tile Attention | 2 | {
"_id": "63565cc56d7fcf1bedb7d347",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63565cc56d7fcf1bedb7d347/XGcHP4VkO_oieA1gZ4IAX.jpeg",
"followerCount": 82,
"fullname": "Zhang Peiyuan",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "PY007",
"type": "user"
} | true | null | 2502.04507 | [
{
"_id": "67a98cd1b8b21202c9004628",
"hidden": false,
"name": "Peiyuan Zhang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-10T16:07:27.309Z",
"user": {
"_id": "63565cc56d7fcf1bedb7d347",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63565cc56d7fcf1bedb7d347/XGcHP4VkO_oieA1gZ4IAX.jpeg",
"fullname": "Zhang Peiyuan",
"isPro": false,
"type": "user",
"user": "PY007"
}
},
{
"_id": "67a98cd1b8b21202c9004629",
"hidden": false,
"name": "Yongqi Chen",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-10T09:49:48.410Z",
"user": {
"_id": "65416817271d3bc4d70f6745",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/65416817271d3bc4d70f6745/1YkW0MpuufejvxqksVMIx.jpeg",
"fullname": "Yongqi Chen",
"isPro": false,
"type": "user",
"user": "BrianChen1129"
}
},
{
"_id": "67a98cd1b8b21202c900462a",
"hidden": false,
"name": "Runlong Su",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-11T10:00:40.942Z",
"user": {
"_id": "65d7ed4823e83e1591beacc7",
"avatarUrl": "/avatars/2a6714a2a7bbd591f6b726a7330bafbc.svg",
"fullname": "Su",
"isPro": false,
"type": "user",
"user": "r3su9"
}
},
{
"_id": "67a98cd1b8b21202c900462b",
"hidden": false,
"name": "Hangliang Ding",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-11T07:56:04.110Z",
"user": {
"_id": "643a451ee2b979ae6141329d",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/643a451ee2b979ae6141329d/HN3M5vyroanQoUEiXJFyB.jpeg",
"fullname": "Hangliang Ding",
"isPro": false,
"type": "user",
"user": "foreverpiano"
}
},
{
"_id": "67a98cd1b8b21202c900462c",
"hidden": false,
"name": "Ion Stoica",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a98cd1b8b21202c900462d",
"hidden": false,
"name": "Zhenghong Liu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a98cd1b8b21202c900462e",
"hidden": false,
"name": "Hao Zhang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-11T10:00:38.019Z",
"user": {
"_id": "62d363143eebd640a4fa41fa",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/62d363143eebd640a4fa41fa/pvPwXlJ5OOb-UIfmffv4E.jpeg",
"fullname": "Hao Zhang",
"isPro": false,
"type": "user",
"user": "zhisbug"
}
}
] | 2025-02-06T21:17:09 | Fast Video Generation with Sliding Tile Attention | Diffusion Transformers (DiTs) with 3D full attention power state-of-the-art
video generation, but suffer from prohibitive compute cost -- when generating
just a 5-second 720P video, attention alone takes 800 out of 945 seconds of
total inference time. This paper introduces sliding tile attention (STA) to
address this challenge. STA leverages the observation that attention scores in
pretrained video diffusion models predominantly concentrate within localized 3D
windows. By sliding and attending over the local spatial-temporal region, STA
eliminates redundancy from full attention. Unlike traditional token-wise
sliding window attention (SWA), STA operates tile-by-tile with a novel
hardware-aware sliding window design, preserving expressiveness while being
hardware-efficient. With careful kernel-level optimizations, STA offers the
first efficient 2D/3D sliding-window-like attention implementation, achieving
58.79% MFU. Concretely, STA accelerates attention by 2.8-17x over
FlashAttention-2 (FA2) and 1.6-10x over FlashAttention-3 (FA3). On the leading
video DiT, HunyuanVideo, STA reduces end-to-end latency from 945s (FA3) to 685s
without quality degradation, requiring no training. Enabling finetuning further
lowers latency to 268s with only a 0.09% drop on VBench. | 48 | 67a98cd7b8b21202c90047c5 | null | null |
|
2025-02-10T00:05:28.205000 | AuraFusion360: Augmented Unseen Region Alignment for Reference-based 360° Unbounded Scene Inpainting | 3 | {
"_id": "6459d5da3b6fafd9664807ab",
"avatarUrl": "/avatars/57430d1bbde3a2fe5586e5fbcafb0e74.svg",
"followerCount": 3,
"fullname": "Yu-Lun Liu",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "yulunliu",
"type": "user"
} | true | [
"https://cdn-uploads.huggingface.co/production/uploads/6459d5da3b6fafd9664807ab/KMKt5j_3UB0zDhxjSiyxI.mp4"
] | 2502.05176 | [
{
"_id": "67a9889dc1fbde5146aba8b1",
"hidden": false,
"name": "Chung-Ho Wu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-10T16:08:15.192Z",
"user": {
"_id": "65d70288ca16ef9ba7f72542",
"avatarUrl": "/avatars/8ceec128b7e7be6d1b4c615b9eced98d.svg",
"fullname": "Chung-Ho Wu",
"isPro": false,
"type": "user",
"user": "kkennethwu"
}
},
{
"_id": "67a9889dc1fbde5146aba8b2",
"hidden": false,
"name": "Yang-Jung Chen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a9889dc1fbde5146aba8b3",
"hidden": false,
"name": "Ying-Huan Chen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a9889dc1fbde5146aba8b4",
"hidden": false,
"name": "Jie-Ying Lee",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-18T09:34:52.729Z",
"user": {
"_id": "655f1770f74fa124d1172ec1",
"avatarUrl": "/avatars/e4413693c34974fac75a438ffe2cc630.svg",
"fullname": "Jay Lee",
"isPro": false,
"type": "user",
"user": "jayinnn"
}
},
{
"_id": "67a9889dc1fbde5146aba8b5",
"hidden": false,
"name": "Bo-Hsu Ke",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-10T16:08:43.078Z",
"user": {
"_id": "67173302fd698e5b2a9c91dd",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/no-auth/7_5xqQEShNkkKpGbjFIjG.png",
"fullname": "Bo-Hsu Ke",
"isPro": false,
"type": "user",
"user": "Hentci"
}
},
{
"_id": "67a9889dc1fbde5146aba8b6",
"hidden": false,
"name": "Chun-Wei Tuan Mu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a9889dc1fbde5146aba8b7",
"hidden": false,
"name": "Yi-Chuan Huang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-10T16:09:08.174Z",
"user": {
"_id": "665d84f05fdfe8f923fb0fe2",
"avatarUrl": "/avatars/71fa629eda3d34d5d854055f2a905b53.svg",
"fullname": "Yichuan Huang",
"isPro": false,
"type": "user",
"user": "yichuan-huang"
}
},
{
"_id": "67a9889dc1fbde5146aba8b8",
"hidden": false,
"name": "Chin-Yang Lin",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-10T16:09:19.714Z",
"user": {
"_id": "66b9d0996f861799b80b457a",
"avatarUrl": "/avatars/31d4989c5e0983283a6a8e8a152b82e6.svg",
"fullname": "CY Lin",
"isPro": false,
"type": "user",
"user": "chinyanglin"
}
},
{
"_id": "67a9889dc1fbde5146aba8b9",
"hidden": false,
"name": "Min-Hung Chen",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-10T09:49:50.370Z",
"user": {
"_id": "64ae22dd1aee69ece065cdcd",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/64ae22dd1aee69ece065cdcd/JG7QaHIrr4i2k4uwR4pZK.png",
"fullname": "Min-Hung Chen",
"isPro": false,
"type": "user",
"user": "cmhungsteve"
}
},
{
"_id": "67a9889dc1fbde5146aba8ba",
"hidden": false,
"name": "Yen-Yu Lin",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-10T16:09:38.418Z",
"user": {
"_id": "65cd7863b2e8d2486a01bd49",
"avatarUrl": "/avatars/33a9c978924e69bcc5db1e620ff3c0f7.svg",
"fullname": "YenYu Lin",
"isPro": false,
"type": "user",
"user": "Yenyu"
}
},
{
"_id": "67a9889dc1fbde5146aba8bb",
"hidden": false,
"name": "Yu-Lun Liu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-10T16:09:49.464Z",
"user": {
"_id": "6459d5da3b6fafd9664807ab",
"avatarUrl": "/avatars/57430d1bbde3a2fe5586e5fbcafb0e74.svg",
"fullname": "Yu-Lun Liu",
"isPro": false,
"type": "user",
"user": "yulunliu"
}
}
] | 2025-02-07T18:59:55 | AuraFusion360: Augmented Unseen Region Alignment for Reference-based
360° Unbounded Scene Inpainting | Three-dimensional scene inpainting is crucial for applications from virtual
reality to architectural visualization, yet existing methods struggle with view
consistency and geometric accuracy in 360° unbounded scenes. We present
AuraFusion360, a novel reference-based method that enables high-quality object
removal and hole filling in 3D scenes represented by Gaussian Splatting. Our
approach introduces (1) depth-aware unseen mask generation for accurate
occlusion identification, (2) Adaptive Guided Depth Diffusion, a zero-shot
method for accurate initial point placement without requiring additional
training, and (3) SDEdit-based detail enhancement for multi-view coherence. We
also introduce 360-USID, the first comprehensive dataset for 360°
unbounded scene inpainting with ground truth. Extensive experiments demonstrate
that AuraFusion360 significantly outperforms existing methods, achieving
superior perceptual quality while maintaining geometric accuracy across
dramatic viewpoint changes. See our project page for video results and the
dataset at https://kkennethwu.github.io/aurafusion360/. | 31 | 67a988a4c1fbde5146abaa3b | null | null |
|
2025-02-09T23:43:39.239000 | Goku: Flow Based Video Generative Foundation Models | 12 | {
"_id": "60f1abe7544c2adfd699860c",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"followerCount": 6280,
"fullname": "AK",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "akhaliq",
"type": "user"
} | true | null | 2502.04896 | [
{
"_id": "67a983ea9b72585dd12587fb",
"hidden": false,
"name": "Shoufa Chen",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-10T09:49:52.136Z",
"user": {
"_id": "6412a33900634c4fe9873652",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6412a33900634c4fe9873652/Nmn_yRA1gGD2VO1YbSOYF.jpeg",
"fullname": "Shoufa Chen",
"isPro": false,
"type": "user",
"user": "ShoufaChen"
}
},
{
"_id": "67a983ea9b72585dd12587fc",
"hidden": false,
"name": "Chongjian Ge",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-10T15:54:30.233Z",
"user": {
"_id": "620f126891e167b068fa76f8",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/620f126891e167b068fa76f8/NaPyS5lFjgZYJZrWaf0OI.jpeg",
"fullname": "ChongjianGE",
"isPro": false,
"type": "user",
"user": "RhettGee"
}
},
{
"_id": "67a983ea9b72585dd12587fd",
"hidden": false,
"name": "Yuqi Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a983ea9b72585dd12587fe",
"hidden": false,
"name": "Yida Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a983ea9b72585dd12587ff",
"hidden": false,
"name": "Fengda Zhu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-10T15:55:24.292Z",
"user": {
"_id": "656971db2f7ea4b5ac238169",
"avatarUrl": "/avatars/29eca045338f1b9a272c42cf10a62823.svg",
"fullname": "Fengda Zhu",
"isPro": false,
"type": "user",
"user": "zhufengdaaa"
}
},
{
"_id": "67a983ea9b72585dd1258800",
"hidden": false,
"name": "Hao Yang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-18T09:34:54.824Z",
"user": {
"_id": "67b06737019b7825d9fb508e",
"avatarUrl": "/avatars/80502db1a7fba7398e08dacbf401f152.svg",
"fullname": "Hanish",
"isPro": false,
"type": "user",
"user": "Hannah12"
}
},
{
"_id": "67a983ea9b72585dd1258801",
"hidden": false,
"name": "Hongxiang Hao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a983ea9b72585dd1258802",
"hidden": false,
"name": "Hui Wu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a983ea9b72585dd1258803",
"hidden": false,
"name": "Zhichao Lai",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-10T15:53:38.146Z",
"user": {
"_id": "6673e67d65b9964067706db9",
"avatarUrl": "/avatars/45018a5fffa77643b7a6d476f6063151.svg",
"fullname": "Zhichao Lai",
"isPro": false,
"type": "user",
"user": "sgcc-chao"
}
},
{
"_id": "67a983ea9b72585dd1258804",
"hidden": false,
"name": "Yifei Hu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-10T15:53:30.624Z",
"user": {
"_id": "64832c6675779e269260e98e",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/64832c6675779e269260e98e/r-d14egc7wRBBY7_pD9dr.jpeg",
"fullname": "Yifei Hu",
"isPro": false,
"type": "user",
"user": "yifeihu"
}
},
{
"_id": "67a983ea9b72585dd1258805",
"hidden": false,
"name": "Ting-Che Lin",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-10T15:53:19.492Z",
"user": {
"_id": "63f89398da440a47e9f6b782",
"avatarUrl": "/avatars/6e2b4994a59b38add1332cc07b0ff3de.svg",
"fullname": "Ting-Che Lin",
"isPro": false,
"type": "user",
"user": "dronchego"
}
},
{
"_id": "67a983ea9b72585dd1258806",
"hidden": false,
"name": "Shilong Zhang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-10T15:53:12.376Z",
"user": {
"_id": "6424ffce46d202ad3d918a67",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6424ffce46d202ad3d918a67/gmYmOA072fP_5cJLc9Qs4.jpeg",
"fullname": "Shilong Zhang",
"isPro": false,
"type": "user",
"user": "shilongz"
}
},
{
"_id": "67a983ea9b72585dd1258807",
"hidden": false,
"name": "Fu Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a983ea9b72585dd1258808",
"hidden": false,
"name": "Chuan Li",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-11T10:00:44.165Z",
"user": {
"_id": "67aa537bdc097a969e614493",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/67aa537bdc097a969e614493/A3LR9rOsOO5V5F7nscSr6.jpeg",
"fullname": "Chuan Li",
"isPro": false,
"type": "user",
"user": "chuanrichardli"
}
},
{
"_id": "67a983ea9b72585dd1258809",
"hidden": false,
"name": "Xing Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a983ea9b72585dd125880a",
"hidden": false,
"name": "Yanghua Peng",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a983ea9b72585dd125880b",
"hidden": false,
"name": "Peize Sun",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a983ea9b72585dd125880c",
"hidden": false,
"name": "Ping Luo",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a983ea9b72585dd125880d",
"hidden": false,
"name": "Yi Jiang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-03-02T20:18:51.440Z",
"user": {
"_id": "6344dcb1cd37e44d9ed46508",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6344dcb1cd37e44d9ed46508/J92UKSxKR3iziD2WJfih4.jpeg",
"fullname": "Yi Jiang",
"isPro": false,
"type": "user",
"user": "JiangYi"
}
},
{
"_id": "67a983ea9b72585dd125880e",
"hidden": false,
"name": "Zehuan Yuan",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-10T15:52:06.140Z",
"user": {
"_id": "661a80af3557013b638061d5",
"avatarUrl": "/avatars/4c551aeb223e257a5fc45b5b6c7ded49.svg",
"fullname": "Zehuan Yuan",
"isPro": false,
"type": "user",
"user": "sweetrabor"
}
},
{
"_id": "67a983ea9b72585dd125880f",
"hidden": false,
"name": "Bingyue Peng",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a983ea9b72585dd1258810",
"hidden": false,
"name": "Xiaobing Liu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-10T15:51:52.195Z",
"user": {
"_id": "66dbf16d7ec0e5f42175dbcb",
"avatarUrl": "/avatars/d28477ac9f02b633300cd51dea78704f.svg",
"fullname": "liuxiaobing",
"isPro": false,
"type": "user",
"user": "xiaobinggg"
}
}
] | 2025-02-07T13:03:55 | Goku: Flow Based Video Generative Foundation Models | This paper introduces Goku, a state-of-the-art family of joint
image-and-video generation models leveraging rectified flow Transformers to
achieve industry-leading performance. We detail the foundational elements
enabling high-quality visual generation, including the data curation pipeline,
model architecture design, flow formulation, and advanced infrastructure for
efficient and robust large-scale training. The Goku models demonstrate superior
performance in both qualitative and quantitative evaluations, setting new
benchmarks across major tasks. Specifically, Goku achieves 0.76 on GenEval and
83.65 on DPG-Bench for text-to-image generation, and 84.85 on VBench for
text-to-video tasks. We believe that this work provides valuable insights and
practical advancements for the research community in developing joint
image-and-video generation models. | 93 | 67a983ee9b72585dd125890f | null | null |
|
2025-02-09T23:33:13.185000 | On-device Sora: Enabling Diffusion-Based Text-to-Video Generation for Mobile Devices | 3 | {
"_id": "60f1abe7544c2adfd699860c",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"followerCount": 6280,
"fullname": "AK",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "akhaliq",
"type": "user"
} | false | null | 2502.04363 | [
{
"_id": "67a98180d0dc1ed664297368",
"hidden": false,
"name": "Bosung Kim",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a98180d0dc1ed664297369",
"hidden": false,
"name": "Kyuhwan Lee",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a98180d0dc1ed66429736a",
"hidden": false,
"name": "Isu Jeong",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a98180d0dc1ed66429736b",
"hidden": false,
"name": "Jungmin Cheon",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-10T16:24:07.147Z",
"user": {
"_id": "65c9911ad3870f24084060f9",
"avatarUrl": "/avatars/889ff75133203f9ed5b3c46cc67fb068.svg",
"fullname": "Jungmin Cheon",
"isPro": false,
"type": "user",
"user": "Ruyan2"
}
},
{
"_id": "67a98180d0dc1ed66429736c",
"hidden": false,
"name": "Yeojin Lee",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a98180d0dc1ed66429736d",
"hidden": false,
"name": "Seulki Lee",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-05T05:42:29 | On-device Sora: Enabling Diffusion-Based Text-to-Video Generation for
Mobile Devices | We present On-device Sora, a pioneering solution for diffusion-based
on-device text-to-video generation that operates efficiently on
smartphone-grade devices. Building on Open-Sora, On-device Sora applies three
novel techniques to address the challenges of diffusion-based text-to-video
generation on computation- and memory-limited mobile devices. First, Linear
Proportional Leap (LPL) reduces the excessive denoising steps required in video
diffusion through an efficient leap-based approach. Second, Temporal Dimension
Token Merging (TDTM) minimizes intensive token-processing computation in
attention layers by merging consecutive tokens along the temporal dimension.
Third, Concurrent Inference with Dynamic Loading (CI-DL) dynamically partitions
large models into smaller blocks and loads them into memory for concurrent
model inference, effectively addressing the challenges of limited device
memory. We implement On-device Sora on the iPhone 15 Pro, and the experimental
evaluations demonstrate that it is capable of generating high-quality videos on
the device, comparable to those produced by Open-Sora running on high-end GPUs.
These results show that On-device Sora enables efficient and high-quality video
generation on resource-constrained mobile devices, expanding accessibility,
ensuring user privacy, reducing dependence on cloud infrastructure, and
lowering associated costs. We envision the proposed On-device Sora as a
significant first step toward democratizing state-of-the-art generative
technologies, enabling video generation capabilities on commodity mobile and
embedded devices. The code implementation is publicly available at a GitHub
repository: https://github.com/eai-lab/On-device-Sora. | 11 | 67a98185d0dc1ed664297491 | null | null |
|
2025-02-09T23:22:06.784000 | Linear Correlation in LM's Compositional Generalization and Hallucination | 3 | {
"_id": "60f1abe7544c2adfd699860c",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"followerCount": 6280,
"fullname": "AK",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "akhaliq",
"type": "user"
} | false | null | 2502.04520 | [
{
"_id": "67a97eea96d822bc6e13a1bb",
"hidden": false,
"name": "Letian Peng",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-10T16:17:52.581Z",
"user": {
"_id": "64323dd503d81fa4d26deaf9",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/64323dd503d81fa4d26deaf9/x3ES8VXEZJljxDWvFWaAf.png",
"fullname": "Letian Peng",
"isPro": false,
"type": "user",
"user": "KomeijiForce"
}
},
{
"_id": "67a97eea96d822bc6e13a1bc",
"hidden": false,
"name": "Chenyang An",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-10T16:18:00.824Z",
"user": {
"_id": "6546d644bab28a482e1956c3",
"avatarUrl": "/avatars/b35e53afd1acf56534338b7788b49ee1.svg",
"fullname": "Chenyang An",
"isPro": false,
"type": "user",
"user": "chenyang-an"
}
},
{
"_id": "67a97eea96d822bc6e13a1bd",
"hidden": false,
"name": "Shibo Hao",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-10T16:18:07.047Z",
"user": {
"_id": "660ee5df35d092e3fc2a3685",
"avatarUrl": "/avatars/a7e0472fb7ea49973f74e3eea13dc964.svg",
"fullname": "Shibo Hao",
"isPro": false,
"type": "user",
"user": "Shibo-UCSD"
}
},
{
"_id": "67a97eea96d822bc6e13a1be",
"hidden": false,
"name": "Chengyu Dong",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-10T16:18:12.963Z",
"user": {
"_id": "668640a1369b09d564b75509",
"avatarUrl": "/avatars/ef70bfdaae307a602f0ce0a0753596c7.svg",
"fullname": "CHENGYU_DONG",
"isPro": false,
"type": "user",
"user": "sakuraCY"
}
},
{
"_id": "67a97eea96d822bc6e13a1bf",
"hidden": false,
"name": "Jingbo Shang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-10T09:49:54.200Z",
"user": {
"_id": "660655119e3555d648f6c6b5",
"avatarUrl": "/avatars/ae1e2c97a08be39b77a9f1a5c2a718ef.svg",
"fullname": "Jingbo Shang",
"isPro": false,
"type": "user",
"user": "shangjingbo"
}
}
] | 2025-02-06T21:44:30 | Linear Correlation in LM's Compositional Generalization and
Hallucination | The generalization of language models (LMs) is undergoing active debates,
contrasting their potential for general intelligence with their struggles with
basic knowledge composition (e.g., reverse/transition curse). This paper
uncovers the phenomenon of linear correlations in LMs during knowledge
composition. Specifically, there exists a linear transformation between
certain related knowledge that maps the next-token prediction logits from one
prompt to another, e.g., "X lives in the city of" → "X lives in the
country of" for every given X. This mirrors the linearity in human knowledge
composition, such as Paris → France. Our findings indicate that the
linear transformation is resilient to large-scale fine-tuning, generalizing
updated knowledge when aligned with real-world relationships, but causing
hallucinations when it deviates. Empirical results suggest that linear
correlation can serve as a potential identifier of LM's generalization.
Finally, we show such linear correlations can be learned with a single
feedforward network and pre-trained vocabulary representations, indicating LM
generalization heavily relies on the latter. | 11 | 67a97eea96d822bc6e13a1e7 | null | null |
|
2025-02-09T23:19:16.714000 | Scaling up Test-Time Compute with Latent Reasoning: A Recurrent Depth Approach | 12 | {
"_id": "60f1abe7544c2adfd699860c",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"followerCount": 6280,
"fullname": "AK",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "akhaliq",
"type": "user"
} | true | null | 2502.05171 | [
{
"_id": "67a97e27495b23306cd5ea56",
"hidden": false,
"name": "Jonas Geiping",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-10T16:10:13.470Z",
"user": {
"_id": "63d86dbf3130cadcaf8bdd11",
"avatarUrl": "/avatars/29d79a0c6dcec01111ef192fecd0fa7a.svg",
"fullname": "Jonas Geiping",
"isPro": false,
"type": "user",
"user": "JonasGeiping"
}
},
{
"_id": "67a97e27495b23306cd5ea57",
"hidden": false,
"name": "Sean McLeish",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-10T13:11:57.835Z",
"user": {
"_id": "65255f1073a043e50d043641",
"avatarUrl": "/avatars/257085f01c439d7c84787a4e6d085b3d.svg",
"fullname": "Sean McLeish",
"isPro": false,
"type": "user",
"user": "smcleish"
}
},
{
"_id": "67a97e27495b23306cd5ea58",
"hidden": false,
"name": "Neel Jain",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-10T16:10:34.976Z",
"user": {
"_id": "63e2b1ec282ee5f9624cfbcb",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63e2b1ec282ee5f9624cfbcb/4SVTp93cvRevacoJgiXzS.jpeg",
"fullname": "Neel Jain",
"isPro": false,
"type": "user",
"user": "nsjain"
}
},
{
"_id": "67a97e27495b23306cd5ea59",
"hidden": false,
"name": "John Kirchenbauer",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-10T16:10:40.754Z",
"user": {
"_id": "63d98af1897746d6496177df",
"avatarUrl": "/avatars/c5d0031c796a3c11bcb0d01b959168dc.svg",
"fullname": "John Kirchenbauer",
"isPro": false,
"type": "user",
"user": "jwkirchenbauer"
}
},
{
"_id": "67a97e27495b23306cd5ea5a",
"hidden": false,
"name": "Siddharth Singh",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a97e27495b23306cd5ea5b",
"hidden": false,
"name": "Brian R. Bartoldson",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a97e27495b23306cd5ea5c",
"hidden": false,
"name": "Bhavya Kailkhura",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-10T16:11:04.907Z",
"user": {
"_id": "65cb79db6427380bc21261e2",
"avatarUrl": "/avatars/a003eb5d0955417329c1a4170ae65879.svg",
"fullname": "Bhavya Kailkhura",
"isPro": false,
"type": "user",
"user": "bhavyakailkhura"
}
},
{
"_id": "67a97e27495b23306cd5ea5d",
"hidden": false,
"name": "Abhinav Bhatele",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-10T16:11:12.069Z",
"user": {
"_id": "6361d9ce6bd72c97d005b4db",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6361d9ce6bd72c97d005b4db/vjKaW2JFavVffoRxwgFwn.jpeg",
"fullname": "Abhinav Bhatele",
"isPro": false,
"type": "user",
"user": "bhatele"
}
},
{
"_id": "67a97e27495b23306cd5ea5e",
"hidden": false,
"name": "Tom Goldstein",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-10T16:11:20.220Z",
"user": {
"_id": "6381ca7d65dc156aba0b933d",
"avatarUrl": "/avatars/84dfdca8e1cd6fbf50d6fb2a6f1b488d.svg",
"fullname": "Tom Goldstein",
"isPro": false,
"type": "user",
"user": "tomgoldstein"
}
}
] | 2025-02-07T18:55:02 | Scaling up Test-Time Compute with Latent Reasoning: A Recurrent Depth
Approach | We study a novel language model architecture that is capable of scaling
test-time computation by implicitly reasoning in latent space. Our model works
by iterating a recurrent block, thereby unrolling to arbitrary depth at
test-time. This stands in contrast to mainstream reasoning models that scale up
compute by producing more tokens. Unlike approaches based on chain-of-thought,
our approach does not require any specialized training data, can work with
small context windows, and can capture types of reasoning that are not easily
represented in words. We scale a proof-of-concept model to 3.5 billion
parameters and 800 billion tokens. We show that the resulting model can improve
its performance on reasoning benchmarks, sometimes dramatically, up to a
computation load equivalent to 50 billion parameters. | 121 | 67a97e29495b23306cd5eae5 | null | null |
|
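As a toy illustration of the recurrent-depth idea, the sketch below iterates one shared transformer block a caller-chosen number of times at inference, re-injecting the input embedding at every step so extra iterations refine a latent state rather than emit more tokens. Module names, sizes, and the zero-initialized state are assumptions for illustration, not the released 3.5B model:

```python
# Minimal sketch of recurrent-depth inference, assuming an embed/core/head
# split; shapes and initialization are illustrative only.
import torch
import torch.nn as nn

class RecurrentDepthLM(nn.Module):
    def __init__(self, d_model=256, vocab=1000):
        super().__init__()
        self.embed = nn.Embedding(vocab, d_model)           # "prelude"
        self.core = nn.TransformerEncoderLayer(d_model, nhead=4,
                                               batch_first=True)
        self.head = nn.Linear(d_model, vocab)               # "coda"

    def forward(self, tokens, num_iterations=4):
        h = self.embed(tokens)
        s = torch.zeros_like(h)              # latent state to iterate
        for _ in range(num_iterations):      # the test-time compute knob
            s = self.core(s + h)             # re-inject the input each step
        return self.head(s)

model = RecurrentDepthLM()
logits = model(torch.randint(0, 1000, (2, 16)), num_iterations=8)
print(logits.shape)   # torch.Size([2, 16, 1000])
```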
2025-02-09T23:17:42.258000 | Generating Symbolic World Models via Test-time Scaling of Large Language Models | 2 | {
"_id": "60f1abe7544c2adfd699860c",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"followerCount": 6280,
"fullname": "AK",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "akhaliq",
"type": "user"
} | false | null | 2502.04728 | [
{
"_id": "67a97d1c02da0cdf059cb0d8",
"hidden": false,
"name": "Zhouliang Yu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-10T16:16:00.654Z",
"user": {
"_id": "62a80fe3ac97233f1625235a",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/62a80fe3ac97233f1625235a/_rGtpqdY7OEBz3pyqb6fE.jpeg",
"fullname": "Zhouliang Yu",
"isPro": false,
"type": "user",
"user": "zhouliang"
}
},
{
"_id": "67a97d1c02da0cdf059cb0d9",
"hidden": false,
"name": "Yuhuan Yuan",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-10T16:16:06.566Z",
"user": {
"_id": "632ff46bf242a8532b713381",
"avatarUrl": "/avatars/72e96e0dd7d7b4fce64a07def170174f.svg",
"fullname": "yuhuanyuan",
"isPro": false,
"type": "user",
"user": "yuhuanyuan"
}
},
{
"_id": "67a97d1c02da0cdf059cb0da",
"hidden": false,
"name": "Tim Z. Xiao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a97d1c02da0cdf059cb0db",
"hidden": false,
"name": "Fuxiang Frank Xia",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a97d1c02da0cdf059cb0dc",
"hidden": false,
"name": "Jie Fu",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-10T13:12:00.097Z",
"user": {
"_id": "641a6895fb5ffff5ac79d593",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/641a6895fb5ffff5ac79d593/dFR_ofjbqCrcqGa9R3MMq.jpeg",
"fullname": "Jie Fu",
"isPro": false,
"type": "user",
"user": "bigaidream"
}
},
{
"_id": "67a97d1c02da0cdf059cb0dd",
"hidden": false,
"name": "Ge Zhang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-10T09:49:56.250Z",
"user": {
"_id": "638efcf4c67af472d316d424",
"avatarUrl": "/avatars/97a57859d7d87a3a8f1bb41d32a72bc2.svg",
"fullname": "Ge Zhang",
"isPro": false,
"type": "user",
"user": "zhangysk"
}
},
{
"_id": "67a97d1c02da0cdf059cb0de",
"hidden": false,
"name": "Ge Lin",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a97d1c02da0cdf059cb0df",
"hidden": false,
"name": "Weiyang Liu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-10T16:16:29.068Z",
"user": {
"_id": "648905d1a15c43c791d4381f",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/648905d1a15c43c791d4381f/GpqGBzsLiMHX0gWZEz3qn.jpeg",
"fullname": "Weiyang Liu",
"isPro": false,
"type": "user",
"user": "wy1iu"
}
}
] | 2025-02-07T07:52:25 | Generating Symbolic World Models via Test-time Scaling of Large Language
Models | Solving complex planning problems requires Large Language Models (LLMs) to
explicitly model state transitions to avoid rule violations, comply with
constraints, and ensure optimality, a task hindered by the inherent ambiguity of
natural language. To overcome this ambiguity, the Planning Domain Definition
Language (PDDL) is leveraged as a planning abstraction that enables precise and
formal state descriptions. With PDDL, we can generate a symbolic world model
where classic search algorithms, such as A*, can be seamlessly applied to
find optimal plans. However, directly generating PDDL domains with current LLMs
remains an open challenge due to the lack of PDDL training data. To address
this challenge, we propose to scale up the test-time computation of LLMs to
enhance their PDDL reasoning capabilities, thereby enabling the generation of
high-quality PDDL domains. Specifically, we introduce a simple yet effective
algorithm, which first employs a Best-of-N sampling approach to improve the
quality of the initial solution and then refines the solution in a fine-grained
manner with verbalized machine learning. Our method outperforms o1-mini by a
considerable margin in PDDL domain generation, achieving over 50% success
rate on two tasks (i.e., generating PDDL domains from natural language
descriptions or from PDDL problems). This is done without requiring additional
training. By taking advantage of PDDL as a state abstraction, our method is able
to outperform current state-of-the-art methods on almost all competition-level
planning tasks. | 19 | 67a97d1d02da0cdf059cb11a | null | null |
|
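Stripped of the PDDL specifics, the test-time scaling recipe described above (Best-of-N sampling, then fine-grained refinement) reduces to a small generic loop. A hedged sketch with toy callables standing in for the LLM sampler, the domain validator, and the refiner; none of these are the paper's actual components:

```python
import random

def best_of_n_then_refine(sample_fn, score_fn, refine_fn,
                          n_samples=8, max_refinements=3):
    """Best-of-N sampling, then iterative refinement of the best candidate."""
    candidates = [sample_fn() for _ in range(n_samples)]
    best = max(candidates, key=score_fn)        # keep the highest-scoring one
    for _ in range(max_refinements):
        if score_fn(best) >= 1.0:               # validator fully satisfied
            break
        best = refine_fn(best)                  # e.g. LLM + verbalized errors
    return best

# Toy stand-ins: "solutions" are floats, the validator score is the value.
random.seed(0)
out = best_of_n_then_refine(
    sample_fn=lambda: random.random(),
    score_fn=lambda s: s,
    refine_fn=lambda s: min(1.0, s + 0.1),
)
print(f"final score: {out:.2f}")   # refinement tops up the best sample
```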
2025-02-09T23:11:57.959000 | Agency Is Frame-Dependent | 4 | {
"_id": "60f1abe7544c2adfd699860c",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"followerCount": 6280,
"fullname": "AK",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "akhaliq",
"type": "user"
} | false | null | 2502.04403 | [
{
"_id": "67a97c7542d4d2f92ee57d20",
"hidden": false,
"name": "David Abel",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a97c7542d4d2f92ee57d21",
"hidden": false,
"name": "André Barreto",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-10T16:12:50.057Z",
"user": {
"_id": "6602bfb4a44fc523256912b0",
"avatarUrl": "/avatars/d23fd4d654fa490389dd6dfb37c0e834.svg",
"fullname": "Andre Barreto",
"isPro": false,
"type": "user",
"user": "andrebarreto"
}
},
{
"_id": "67a97c7542d4d2f92ee57d22",
"hidden": false,
"name": "Michael Bowling",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-10T16:13:08.978Z",
"user": {
"_id": "64929b5d53b71f9cf934dcb8",
"avatarUrl": "/avatars/462add1d4f423f831481acf53217f900.svg",
"fullname": "Michael Bowling",
"isPro": false,
"type": "user",
"user": "Alkaroth"
}
},
{
"_id": "67a97c7542d4d2f92ee57d23",
"hidden": false,
"name": "Will Dabney",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a97c7542d4d2f92ee57d24",
"hidden": false,
"name": "Shi Dong",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a97c7542d4d2f92ee57d25",
"hidden": false,
"name": "Steven Hansen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a97c7542d4d2f92ee57d26",
"hidden": false,
"name": "Anna Harutyunyan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a97c7542d4d2f92ee57d27",
"hidden": false,
"name": "Khimya Khetarpal",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a97c7542d4d2f92ee57d28",
"hidden": false,
"name": "Clare Lyle",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-10T16:13:37.970Z",
"user": {
"_id": "6313f69d54e6e5d9f0fb9f10",
"avatarUrl": "/avatars/12114d479bb6dc394d29f988944f6d47.svg",
"fullname": "Clare Lyle",
"isPro": false,
"type": "user",
"user": "justclarifying"
}
},
{
"_id": "67a97c7542d4d2f92ee57d29",
"hidden": false,
"name": "Razvan Pascanu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-10T16:13:44.165Z",
"user": {
"_id": "64b9310403124195cd9778ec",
"avatarUrl": "/avatars/57c594d3d0f97d3010b15b6a0806451c.svg",
"fullname": "Razvan Pascanu",
"isPro": false,
"type": "user",
"user": "razp"
}
},
{
"_id": "67a97c7542d4d2f92ee57d2a",
"hidden": false,
"name": "Georgios Piliouras",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a97c7542d4d2f92ee57d2b",
"hidden": false,
"name": "Doina Precup",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a97c7542d4d2f92ee57d2c",
"hidden": false,
"name": "Jonathan Richens",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a97c7542d4d2f92ee57d2d",
"hidden": false,
"name": "Mark Rowland",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a97c7542d4d2f92ee57d2e",
"hidden": false,
"name": "Tom Schaul",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a97c7542d4d2f92ee57d2f",
"hidden": false,
"name": "Satinder Singh",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-06T08:34:57 | Agency Is Frame-Dependent | Agency is a system's capacity to steer outcomes toward a goal, and is a
central topic of study across biology, philosophy, cognitive science, and
artificial intelligence. Determining if a system exhibits agency is a
notoriously difficult question: Dennett (1989), for instance, highlights the
puzzle of determining which principles can decide whether a rock, a thermostat,
or a robot possesses agency. We address this puzzle from the viewpoint
of reinforcement learning by arguing that agency is fundamentally
frame-dependent: Any measurement of a system's agency must be made relative to
a reference frame. We support this claim by presenting a philosophical argument
that each of the essential properties of agency proposed by Barandiaran et al.
(2009) and Moreno (2018) are themselves frame-dependent. We conclude that any
basic science of agency requires frame-dependence, and discuss the implications
of this claim for reinforcement learning. | 22 | 67a97c7642d4d2f92ee57d77 | null | null |
|
2025-02-09T23:09:01.160000 | Step Back to Leap Forward: Self-Backtracking for Boosting Reasoning of Language Models | 2 | {
"_id": "60f1abe7544c2adfd699860c",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"followerCount": 6280,
"fullname": "AK",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "akhaliq",
"type": "user"
} | false | null | 2502.04404 | [
{
"_id": "67a97bc5500b3bcf5babc5e8",
"hidden": false,
"name": "Xiao-Wen Yang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-10T09:49:57.842Z",
"user": {
"_id": "64bb3d1eb1a618880956da76",
"avatarUrl": "/avatars/ec393b5eee8a3ccec61107b4aa63c4d9.svg",
"fullname": "Xiao-Wen Yang",
"isPro": false,
"type": "user",
"user": "yangxw"
}
},
{
"_id": "67a97bc5500b3bcf5babc5e9",
"hidden": false,
"name": "Xuan-Yi Zhu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a97bc5500b3bcf5babc5ea",
"hidden": false,
"name": "Wen-Da Wei",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a97bc5500b3bcf5babc5eb",
"hidden": false,
"name": "Ding-Chu Zhang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-10T16:34:44.713Z",
"user": {
"_id": "65542c8c9bd4907a067050b2",
"avatarUrl": "/avatars/031a2fdcc7d73da4f88fbcfca6ad3920.svg",
"fullname": "Zhang Dingchu",
"isPro": false,
"type": "user",
"user": "zhangdingchu"
}
},
{
"_id": "67a97bc5500b3bcf5babc5ec",
"hidden": false,
"name": "Jie-Jing Shao",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-10T16:25:55.507Z",
"user": {
"_id": "640731714dc5f2846c945251",
"avatarUrl": "/avatars/a15695d306f05dd10a7b7f636af6a4f5.svg",
"fullname": "Jie-Jing Shao",
"isPro": false,
"type": "user",
"user": "shjj"
}
},
{
"_id": "67a97bc5500b3bcf5babc5ed",
"hidden": false,
"name": "Zhi Zhou",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-12T09:17:00.318Z",
"user": {
"_id": "64675fd0b990713c50317559",
"avatarUrl": "/avatars/931ad545e6b889b5fa02a96411bcb2f3.svg",
"fullname": "Zhi Zhou",
"isPro": false,
"type": "user",
"user": "WNJXYK"
}
},
{
"_id": "67a97bc5500b3bcf5babc5ee",
"hidden": false,
"name": "Lan-Zhe Guo",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-10T16:25:47.379Z",
"user": {
"_id": "63fc116b1b4b1bd4e707d198",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63fc116b1b4b1bd4e707d198/kM1pL6_FUVwM2PXpNV160.jpeg",
"fullname": "Lan-Zhe Guo",
"isPro": false,
"type": "user",
"user": "Guolz"
}
},
{
"_id": "67a97bc5500b3bcf5babc5ef",
"hidden": false,
"name": "Yu-Feng Li",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-06T08:52:43 | Step Back to Leap Forward: Self-Backtracking for Boosting Reasoning of
Language Models | The integration of slow-thinking mechanisms into large language models (LLMs)
offers a promising way toward achieving Level 2 AGI Reasoners, as exemplified
by systems like OpenAI's o1. However, several significant challenges remain,
including inefficient overthinking and an overreliance on auxiliary reward
models. We point out that these limitations stem from LLMs' inability to
internalize the search process, a key component of effective reasoning. A
critical step toward addressing this issue is enabling LLMs to autonomously
determine when and where to backtrack, a fundamental operation in traditional
search algorithms. To this end, we propose a self-backtracking mechanism that
equips LLMs with the ability to backtrack during both training and inference.
This mechanism not only enhances reasoning ability but also efficiency by
transforming slow-thinking processes into fast-thinking through
self-improvement. Empirical evaluations demonstrate that our proposal
significantly enhances the reasoning capabilities of LLMs, achieving a
performance gain of over 40 percent compared to the optimal-path supervised
fine-tuning method. We believe this study introduces a novel and promising
pathway for developing more advanced and robust Reasoners. | 23 | 67a97bc7500b3bcf5babc64e | null | null |
|
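A minimal way to picture the proposed mechanism is a reasoning loop in which the model itself may emit a backtrack action that pops the current partial path. The sketch below uses toy callables in place of an LLM; the special token name and the stack discipline are illustrative assumptions, not the paper's training setup:

```python
import random

def reason_with_backtracking(propose_step, is_dead_end, is_answer,
                             max_steps=5000):
    """Reasoning loop in which the model itself may emit a backtrack action."""
    stack = [[]]                            # stack of partial reasoning paths
    for _ in range(max_steps):
        path = stack[-1]
        step = propose_step(path)
        if step == "<backtrack>" or is_dead_end(path + [step]):
            if len(stack) > 1:
                stack.pop()                 # step back to the previous state
            continue
        new_path = path + [step]
        if is_answer(new_path):
            return new_path
        stack.append(new_path)
    return None

# Toy task: recover the sequence [1, 2, 3] with a noisy step proposer.
random.seed(1)
found = reason_with_backtracking(
    propose_step=lambda p: random.choice([len(p) + 1, 9, "<backtrack>"]),
    is_dead_end=lambda p: p != [1, 2, 3][: len(p)],
    is_answer=lambda p: p == [1, 2, 3],
)
print(found)   # [1, 2, 3]
```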
2025-02-09T23:03:21.947000 | VideoRoPE: What Makes for Good Video Rotary Position Embedding? | 2 | {
"_id": "64b4eec4faa3181a5eab9c46",
"avatarUrl": "/avatars/bcc9bf5cbf67546ad2b4c9ec8b96ac96.svg",
"followerCount": 16,
"fullname": "Jiaqi Wang",
"isHf": false,
"isMod": false,
"isPro": true,
"name": "myownskyW7",
"type": "user"
} | false | null | 2502.05173 | [
{
"_id": "67a97a47174028234b74f687",
"hidden": false,
"name": "Xilin Wei",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-10T13:12:02.432Z",
"user": {
"_id": "62eb70462f0f5e54df42f778",
"avatarUrl": "/avatars/456049dba67638d3cdb330cdf383f272.svg",
"fullname": "Xilin Wei",
"isPro": false,
"type": "user",
"user": "Wiselnn"
}
},
{
"_id": "67a97a47174028234b74f688",
"hidden": false,
"name": "Xiaoran Liu",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-10T09:49:59.999Z",
"user": {
"_id": "64f033ef82c6eea604c4da8b",
"avatarUrl": "/avatars/51b93fea7fd68b4274ee03701245dcca.svg",
"fullname": "Liu Xiaoran",
"isPro": false,
"type": "user",
"user": "LiuXR"
}
},
{
"_id": "67a97a47174028234b74f689",
"hidden": false,
"name": "Yuhang Zang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-10T09:50:02.011Z",
"user": {
"_id": "63859cf3b2906edaf83af9f0",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63859cf3b2906edaf83af9f0/iUQm5FAomzqYi6fkqIn9F.jpeg",
"fullname": "Yuhang Zang",
"isPro": false,
"type": "user",
"user": "yuhangzang"
}
},
{
"_id": "67a97a47174028234b74f68a",
"hidden": false,
"name": "Xiaoyi Dong",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a97a47174028234b74f68b",
"hidden": false,
"name": "Pan Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a97a47174028234b74f68c",
"hidden": false,
"name": "Yuhang Cao",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-10T16:01:29.622Z",
"user": {
"_id": "65000bef18830fabea469fdd",
"avatarUrl": "/avatars/b320c77dfad039d9f9c54127f610d44f.svg",
"fullname": "Cao Yuhang",
"isPro": false,
"type": "user",
"user": "yhcao"
}
},
{
"_id": "67a97a47174028234b74f68d",
"hidden": false,
"name": "Jian Tong",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a97a47174028234b74f68e",
"hidden": false,
"name": "Haodong Duan",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-10T16:01:41.685Z",
"user": {
"_id": "63ee1379190ddd6214efd73a",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1676546883247-noauth.png",
"fullname": "HAODONG DUAN",
"isPro": false,
"type": "user",
"user": "KennyUTC"
}
},
{
"_id": "67a97a47174028234b74f68f",
"hidden": false,
"name": "Qipeng Guo",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-10T16:01:48.883Z",
"user": {
"_id": "6491cd52b1e5d3444528edb1",
"avatarUrl": "/avatars/a85635d886c7f157b6723dec5c01c030.svg",
"fullname": "Qipeng Guo",
"isPro": false,
"type": "user",
"user": "QipengGuo"
}
},
{
"_id": "67a97a47174028234b74f690",
"hidden": false,
"name": "Jiaqi Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a97a47174028234b74f691",
"hidden": false,
"name": "Xipeng Qiu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-10T16:02:04.834Z",
"user": {
"_id": "61457b8deff2c9fdb4de4988",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1632381702899-61457b8deff2c9fdb4de4988.jpeg",
"fullname": "Xipeng Qiu",
"isPro": false,
"type": "user",
"user": "xpqiu"
}
},
{
"_id": "67a97a47174028234b74f692",
"hidden": false,
"name": "Dahua Lin",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-10T16:02:10.781Z",
"user": {
"_id": "636317ed80c1a705a6eff396",
"avatarUrl": "/avatars/3db090e101b916d9256d0d3e043db71d.svg",
"fullname": "Dahua Lin",
"isPro": false,
"type": "user",
"user": "lindahua"
}
}
] | 2025-02-07T18:56:04 | VideoRoPE: What Makes for Good Video Rotary Position Embedding? | While Rotary Position Embedding (RoPE) and its variants are widely adopted
for their long-context capabilities, the extension of the 1D RoPE to video,
with its complex spatio-temporal structure, remains an open challenge. This
work first introduces a comprehensive analysis that identifies four key
characteristics essential for the effective adaptation of RoPE to video, which
have not been fully considered in prior work. As part of our analysis, we
introduce a challenging V-NIAH-D (Visual Needle-In-A-Haystack with Distractors)
task, which adds periodic distractors into V-NIAH. The V-NIAH-D task
demonstrates that previous RoPE variants, lacking appropriate temporal
dimension allocation, are easily misled by distractors. Based on our analysis,
we introduce VideoRoPE, with a 3D structure designed to
preserve spatio-temporal relationships. VideoRoPE features
low-frequency temporal allocation to mitigate periodic oscillations, a
diagonal layout to maintain spatial symmetry, and adjustable
temporal spacing to decouple temporal and spatial indexing. VideoRoPE
consistently surpasses previous RoPE variants, across diverse downstream tasks
such as long video retrieval, video understanding, and video hallucination. Our
code will be available at
https://github.com/Wiselnn570/VideoRoPE. | 63 | 67a97a4a174028234b74f707 | null | null |
|
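To make the frequency-allocation idea concrete, here is a toy 3D rotary embedding that reserves the lowest-frequency channels for the temporal axis and splits the rest between the two spatial axes. The split sizes and pairing scheme are invented for illustration; VideoRoPE's actual allocation, diagonal layout, and adjustable temporal spacing are more involved:

```python
import torch

def rope_3d_angles(t, x, y, dim=96, base=10000.0):
    """Rotation angles for one token at grid position (t, x, y)."""
    half = dim // 2                      # one angle per channel pair
    freqs = base ** (-torch.arange(half, dtype=torch.float32) / half)
    n_t, n_s = half // 2, half // 4
    f_t = freqs[-n_t:]                   # lowest frequencies -> time, to damp
                                         #   periodic aliasing by distractors
    f_x = freqs[:n_s]                    # remaining frequencies -> height
    f_y = freqs[n_s:2 * n_s]             #   and width
    return torch.cat([t * f_t, x * f_x, y * f_y])

def apply_rope(vec, angles):
    """Rotate consecutive channel pairs of `vec` by `angles`."""
    v = vec.view(-1, 2)
    cos, sin = angles.cos(), angles.sin()
    return torch.stack([v[:, 0] * cos - v[:, 1] * sin,
                        v[:, 0] * sin + v[:, 1] * cos], dim=-1).flatten()

q = torch.randn(96)
q_rot = apply_rope(q, rope_3d_angles(t=5.0, x=2.0, y=3.0))
print(q_rot.shape)   # torch.Size([96])
```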
2025-02-09T23:03:14.294000 | CodeSteer: Symbolic-Augmented Language Models via Code/Text Guidance | 3 | {
"_id": "60f1abe7544c2adfd699860c",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"followerCount": 6280,
"fullname": "AK",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "akhaliq",
"type": "user"
} | false | null | 2502.04350 | [
{
"_id": "67a97a77d163c9e6ea2bdb85",
"hidden": false,
"name": "Yongchao Chen",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-13T08:26:07.141Z",
"user": {
"_id": "67266d21b7d88dbcf9e6c4aa",
"avatarUrl": "/avatars/0328058e74c424473ef890d1fbdd3e4d.svg",
"fullname": "Yongchao Chen",
"isPro": false,
"type": "user",
"user": "yongchao98"
}
},
{
"_id": "67a97a77d163c9e6ea2bdb86",
"hidden": false,
"name": "Yilun Hao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a97a77d163c9e6ea2bdb87",
"hidden": false,
"name": "Yueying Liu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a97a77d163c9e6ea2bdb88",
"hidden": false,
"name": "Yang Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a97a77d163c9e6ea2bdb89",
"hidden": false,
"name": "Chuchu Fan",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-04T15:53:59 | CodeSteer: Symbolic-Augmented Language Models via Code/Text Guidance | Existing methods fail to effectively steer Large Language Models (LLMs)
between textual reasoning and code generation, leaving symbolic computing
capabilities underutilized. We introduce CodeSteer, an effective method for
guiding LLM code/text generation. We construct a comprehensive benchmark
SymBench comprising 37 symbolic tasks with adjustable complexity and also
synthesize datasets of 12k multi-round guidance/generation trajectories and
5.5k guidance comparison pairs. We fine-tune the Llama-3-8B model with a newly
designed multi-round supervised fine-tuning (SFT) and direct preference
optimization (DPO). The resulting model, CodeSteerLLM, augmented with the
proposed symbolic and self-answer checkers, effectively guides the code/text
generation of larger models. Augmenting GPT-4o with CodeSteer raises its
average performance score from 53.3 to 86.4, even outperforming the best
existing LLMs, OpenAI o1 (82.7), o1-preview (74.8), and DeepSeek R1 (76.8),
across all 37 tasks (28 seen, 9 unseen). Trained for GPT-4o, CodeSteer demonstrates
superior generalizability, providing an average 41.8 performance boost on
Claude, Mistral, and GPT-3.5. CodeSteer-guided LLMs fully harness symbolic
computing to maintain strong performance on highly complex tasks. Models,
datasets, and code are available at
https://github.com/yongchao98/CodeSteer-v1.0. | 11 | 67a97a79d163c9e6ea2bdc0c | null | null |
|
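Setting aside the fine-tuning details, the guidance scheme above amounts to a review-and-retry loop: a small steering model inspects each answer from the large model and either accepts it or sends back code/text guidance for another round. A schematic sketch; every callable and prompt string here is a placeholder, not the released interface:

```python
def codesteer_loop(task, solve, steer, max_rounds=5):
    """Let a small steering model guide a large model's code/text choice."""
    guidance = "Answer however you prefer."
    answer = None
    for _ in range(max_rounds):
        answer = solve(task, guidance)     # large model (e.g. GPT-4o) attempt
        verdict = steer(task, answer)      # small model: "ok" or new guidance
        if verdict == "ok":
            break
        guidance = verdict                 # e.g. switch from text to code
    return answer

# Toy stand-ins: the steerer insists on code until it sees a function.
out = codesteer_loop(
    task="count letters in a word",
    solve=lambda t, g: ("def count(w): return len(w)"
                        if "code" in g else "three"),
    steer=lambda t, a: "ok" if a.startswith("def ") else "Please use code.",
)
print(out)   # def count(w): return len(w)
```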
2025-02-07T12:46:43.929000 | ConceptAttention: Diffusion Transformers Learn Highly Interpretable Features | 3 | {
"_id": "64f8b03f83807928d25e766f",
"avatarUrl": "/avatars/68fd4ee967a1673a1d78a7581be8b3da.svg",
"followerCount": null,
"fullname": "Tuna Han Salih Meral",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "tmeral",
"type": "user"
} | true | [
"https://cdn-uploads.huggingface.co/production/uploads/64f8b03f83807928d25e766f/t0642OHdPxXymRKmI5l-g.jpeg"
] | 2502.04320 | [
{
"_id": "67a6431d0fdd5543151da7d2",
"hidden": false,
"name": "Alec Helbling",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-08T13:56:13.003Z",
"user": {
"_id": "62d757a22d32f0bff5710596",
"avatarUrl": "/avatars/f3e05ea4fb853420923e04b6bf3a1a6e.svg",
"fullname": "Alec Helbling",
"isPro": true,
"type": "user",
"user": "helblazer811"
}
},
{
"_id": "67a6431d0fdd5543151da7d3",
"hidden": false,
"name": "Tuna Han Salih Meral",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-08T13:56:15.868Z",
"user": {
"_id": "64f8b03f83807928d25e766f",
"avatarUrl": "/avatars/68fd4ee967a1673a1d78a7581be8b3da.svg",
"fullname": "Tuna Han Salih Meral",
"isPro": false,
"type": "user",
"user": "tmeral"
}
},
{
"_id": "67a6431d0fdd5543151da7d4",
"hidden": false,
"name": "Ben Hoover",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a6431d0fdd5543151da7d5",
"hidden": false,
"name": "Pinar Yanardag",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a6431d0fdd5543151da7d6",
"hidden": false,
"name": "Duen Horng Chau",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-06T18:59:00 | ConceptAttention: Diffusion Transformers Learn Highly Interpretable
Features | Do the rich representations of multi-modal diffusion transformers (DiTs)
exhibit unique properties that enhance their interpretability? We introduce
ConceptAttention, a novel method that leverages the expressive power of DiT
attention layers to generate high-quality saliency maps that precisely locate
textual concepts within images. Without requiring additional training,
ConceptAttention repurposes the parameters of DiT attention layers to produce
highly contextualized concept embeddings, contributing the major discovery that
performing linear projections in the output space of DiT attention layers
yields significantly sharper saliency maps compared to commonly used
cross-attention mechanisms. Remarkably, ConceptAttention even achieves
state-of-the-art performance on zero-shot image segmentation benchmarks,
outperforming 11 other zero-shot interpretability methods on the
ImageNet-Segmentation dataset and on a single-class subset of PascalVOC. Our
work contributes the first evidence that the representations of multi-modal DiT
models like Flux are highly transferable to vision tasks like segmentation,
even outperforming multi-modal foundation models like CLIP. | 34 | 67a643200fdd5543151da869 | null | null |
|
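The core operation the abstract names, a linear projection in the output space of an attention layer, fits in a few lines: project image-token outputs onto concept-token outputs and read each column as a saliency map. The shapes and the softmax normalization are illustrative assumptions, not Flux's dimensions or the paper's exact scoring:

```python
import torch

n_patch, d = 32 * 32, 3072            # hypothetical DiT token count / width
img_out = torch.randn(n_patch, d)     # attention-layer outputs, image tokens
concept_out = torch.randn(4, d)       # outputs for concept tokens, e.g.
                                      # ["cat", "sky", "grass", "tree"]

scores = img_out @ concept_out.T      # linear projection in output space
maps = scores.softmax(dim=-1)         # per-patch distribution over concepts
cat_map = maps[:, 0].reshape(32, 32)  # saliency map for the first concept
print(cat_map.shape)                  # torch.Size([32, 32])
```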
2025-02-07T07:08:30.818000 | Weak-to-Strong Diffusion with Reflection | 2 | {
"_id": "66348bf4e1555067669870fa",
"avatarUrl": "/avatars/8b8bbc7dff7d9a0a02b0960084bc95ab.svg",
"followerCount": null,
"fullname": "白立忱",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "Indulge-Bai",
"type": "user"
} | true | null | 2502.00473 | [
{
"_id": "67a5f635c20315f5e3f16f62",
"hidden": false,
"name": "Lichen Bai",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-07T12:56:27.462Z",
"user": {
"_id": "66348bf4e1555067669870fa",
"avatarUrl": "/avatars/8b8bbc7dff7d9a0a02b0960084bc95ab.svg",
"fullname": "白立忱",
"isPro": false,
"type": "user",
"user": "Indulge-Bai"
}
},
{
"_id": "67a5f635c20315f5e3f16f63",
"hidden": false,
"name": "Masashi Sugiyama",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a5f635c20315f5e3f16f64",
"hidden": false,
"name": "Zeke Xie",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-01T16:00:08 | Weak-to-Strong Diffusion with Reflection | The goal of diffusion generative models is to align the learned distribution
with the real data distribution through gradient score matching. However,
inherent limitations in training data quality, modeling strategies, and
architectural design lead to an inevitable gap between generated outputs and real
data. To reduce this gap, we propose Weak-to-Strong Diffusion (W2SD), a novel
framework that utilizes the estimated difference between existing weak and
strong models (i.e., weak-to-strong difference) to approximate the gap between
an ideal model and a strong model. By employing a reflective operation that
alternates between denoising and inversion with the weak-to-strong difference,
we theoretically show that W2SD steers latent variables along sampling
trajectories toward regions of the real data distribution. W2SD is highly
flexible and broadly applicable, enabling diverse improvements through the
strategic selection of weak-to-strong model pairs (e.g., DreamShaper vs. SD1.5,
good experts vs. bad experts in MoE). Extensive experiments demonstrate that
W2SD significantly improves human preference, aesthetic quality, and prompt
adherence, achieving SOTA performance across various modalities (e.g., image,
video), architectures (e.g., UNet-based, DiT-based, MoE), and benchmarks. For
example, Juggernaut-XL with W2SD can improve with the HPSv2 winning rate up to
90% over the original results. Moreover, the performance gains achieved by W2SD
markedly outweigh its additional computational overhead, while the cumulative
improvements from different weak-to-strong differences further solidify its
practical utility and deployability. | 22 | 67a5f638c20315f5e3f17086 | null | null |
|
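The reflective operation described above alternates one denoising step of the strong model with one inversion step of the weak model, so each round nudges the sample along the weak-to-strong difference. A deliberately crude one-dimensional caricature of that extrapolation effect; the "models" below are made-up maps, not diffusion samplers:

```python
def w2sd_step(x, strong_denoise, weak_invert):
    """One reflection: denoise with the strong model, invert with the weak."""
    return weak_invert(strong_denoise(x))

# Toy 1-D stand-ins: each model pulls samples toward its own target.
strong = lambda x: x + 0.1 * (1.0 - x)     # strong model's target: 1.0
weak_inv = lambda x: x - 0.1 * (0.6 - x)   # undoes a weak step toward 0.6

x = 0.0
for _ in range(100):
    x = w2sd_step(x, strong, weak_inv)
print(round(x, 3))   # ~3.17: past both targets, along the strong-minus-weak gap
```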
2025-02-07T05:29:23.184000 | PlotGen: Multi-Agent LLM-based Scientific Data Visualization via Multimodal Feedback | 2 | {
"_id": "62c5947524171688a9feb992",
"avatarUrl": "/avatars/5a151713b9eae8dc566f5957acee3475.svg",
"followerCount": 8,
"fullname": "Franck Dernoncourt",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "Franck-Dernoncourt",
"type": "user"
} | false | null | 2502.00988 | [
{
"_id": "67a5e076b94446dfc848533b",
"hidden": false,
"name": "Kanika Goswami",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a5e076b94446dfc848533c",
"hidden": false,
"name": "Puneet Mathur",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a5e076b94446dfc848533d",
"hidden": false,
"name": "Ryan Rossi",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a5e076b94446dfc848533e",
"hidden": false,
"name": "Franck Dernoncourt",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-07T10:30:03.421Z",
"user": {
"_id": "62c5947524171688a9feb992",
"avatarUrl": "/avatars/5a151713b9eae8dc566f5957acee3475.svg",
"fullname": "Franck Dernoncourt",
"isPro": false,
"type": "user",
"user": "Franck-Dernoncourt"
}
}
] | 2025-02-03T02:00:29 | PlotGen: Multi-Agent LLM-based Scientific Data Visualization via
Multimodal Feedback | Scientific data visualization is pivotal for transforming raw data into
comprehensible visual representations, enabling pattern recognition,
forecasting, and the presentation of data-driven insights. However, novice
users often face difficulties due to the complexity of selecting appropriate
tools and mastering visualization techniques. Large Language Models (LLMs) have
recently demonstrated potential in assisting code generation, though they
struggle with accuracy and require iterative debugging. In this paper, we
propose PlotGen, a novel multi-agent framework aimed at automating the creation
of precise scientific visualizations. PlotGen orchestrates multiple LLM-based
agents, including a Query Planning Agent that breaks down complex user requests
into executable steps, a Code Generation Agent that converts pseudocode into
executable Python code, and three retrieval feedback agents - a Numeric
Feedback Agent, a Lexical Feedback Agent, and a Visual Feedback Agent - that
leverage multimodal LLMs to iteratively refine the data accuracy, textual
labels, and visual correctness of generated plots via self-reflection.
Extensive experiments show that PlotGen outperforms strong baselines, achieving
a 4-6 percent improvement on the MatPlotBench dataset, leading to enhanced user
trust in LLM-generated visualizations and improved novice productivity due to a
reduction in debugging time needed for plot errors. | 5 | 67a5e077b94446dfc8485375 | null | null |
|
2025-02-07T05:25:27.744000 | Enhancing Code Generation for Low-Resource Languages: No Silver Bullet | 2 | {
"_id": "663486a1f64712540644cb68",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/663486a1f64712540644cb68/YZFR41ERY6UrC6rCC6Nan.jpeg",
"followerCount": 2,
"fullname": "Alessandro",
"isHf": false,
"isMod": false,
"isPro": true,
"name": "Devy1",
"type": "user"
} | true | null | 2501.19085 | [
{
"_id": "67a5b65fe7798ca5b7473a45",
"hidden": false,
"name": "Alessandro Giagnorio",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-07T09:57:46.514Z",
"user": {
"_id": "663486a1f64712540644cb68",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/663486a1f64712540644cb68/YZFR41ERY6UrC6rCC6Nan.jpeg",
"fullname": "Alessandro",
"isPro": true,
"type": "user",
"user": "Devy1"
}
},
{
"_id": "67a5b65fe7798ca5b7473a46",
"hidden": false,
"name": "Alberto Martin-Lopez",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-07T10:46:32.306Z",
"user": {
"_id": "65a7cb0fc5ffe1d019a21cb3",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/GShcO1DwVNzlIUr3n1ifi.jpeg",
"fullname": "Alberto Martín López ",
"isPro": false,
"type": "user",
"user": "AML14"
}
},
{
"_id": "67a5b65fe7798ca5b7473a47",
"hidden": false,
"name": "Gabriele Bavota",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-07T10:46:38.124Z",
"user": {
"_id": "6638bea59e57161faac814e7",
"avatarUrl": "/avatars/91375b88945af50e51b7229a789a31b8.svg",
"fullname": "Gabriele Bavota",
"isPro": false,
"type": "user",
"user": "gbavota"
}
}
] | 2025-01-31T12:23:28 | Enhancing Code Generation for Low-Resource Languages: No Silver Bullet | The advent of Large Language Models (LLMs) has significantly advanced the
field of automated code generation. LLMs rely on large and diverse datasets to
learn syntax, semantics, and usage patterns of programming languages. For
low-resource languages (i.e., niche programming languages characterized by the
scarcity of training data), the limited availability of such data hampers the
models' ability to generalize effectively, resulting in poorer code generation
performance as compared to high-resource languages. For this reason, there is a
quest for techniques able to close this performance gap. We present an
empirical study investigating the effectiveness of several approaches for
boosting LLMs' performance on low-resource languages, namely: (i) a classic
fine-tuning, which is however capped in size by the scarcity of training data;
(ii) three variants of in-context learning, with prompts crafted to provide the
LLM with additional information about the low-resource language (e.g., few-shot
examples showcasing features of the targeted language); and (iii) a
pre-training objective teaching the model how to translate between high- and
low-resource languages. The context of our study is two low-resource languages
(R and Racket) and six LLMs with different architectures and sizes. Our
findings reveal that fine-tuning is usually the best choice for smaller LLMs,
possibly due to the fact that even a small dataset is sufficient to train their
limited number of parameters. As model size increases, in-context learning
becomes increasingly effective, representing a safe and cheap bet (i.e., it
always helps, but with different magnitudes). In contrast, the performance of
very large LLMs on low-resource languages may deteriorate when fine-tuning is
performed, possibly due to the lack of enough data needed
to effectively update their weights. | 5 | 67a5b660e7798ca5b7473a6b | null | null |
|
2025-02-07T03:42:17.799000 | ChartCitor: Multi-Agent Framework for Fine-Grained Chart Visual Attribution | 2 | {
"_id": "62c5947524171688a9feb992",
"avatarUrl": "/avatars/5a151713b9eae8dc566f5957acee3475.svg",
"followerCount": 8,
"fullname": "Franck Dernoncourt",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "Franck-Dernoncourt",
"type": "user"
} | false | null | 2502.00989 | [
{
"_id": "67a5c7601e6db426653ebc3d",
"hidden": false,
"name": "Kanika Goswami",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a5c7601e6db426653ebc3e",
"hidden": false,
"name": "Puneet Mathur",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-07T10:47:20.093Z",
"user": {
"_id": "65c16444d4c3b8dff2f0d78d",
"avatarUrl": "/avatars/4ed764c1657bd260d2a12ba61c111062.svg",
"fullname": "Puneet Mathur",
"isPro": false,
"type": "user",
"user": "puneetm"
}
},
{
"_id": "67a5c7601e6db426653ebc3f",
"hidden": false,
"name": "Ryan Rossi",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-07T10:47:32.496Z",
"user": {
"_id": "62a3ab83e4dd6252344d27cd",
"avatarUrl": "/avatars/7ca8510f70a58dc207b104240e30c35c.svg",
"fullname": "Ryan A. Rossi",
"isPro": false,
"type": "user",
"user": "ryanrossi"
}
},
{
"_id": "67a5c7601e6db426653ebc40",
"hidden": false,
"name": "Franck Dernoncourt",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-07T10:30:50.575Z",
"user": {
"_id": "62c5947524171688a9feb992",
"avatarUrl": "/avatars/5a151713b9eae8dc566f5957acee3475.svg",
"fullname": "Franck Dernoncourt",
"isPro": false,
"type": "user",
"user": "Franck-Dernoncourt"
}
}
] | 2025-02-03T02:00:51 | ChartCitor: Multi-Agent Framework for Fine-Grained Chart Visual
Attribution | Large Language Models (LLMs) can perform chart question-answering tasks but
often generate unverified hallucinated responses. Existing answer attribution
methods struggle to ground responses in source charts due to limited
visual-semantic context, complex visual-text alignment requirements, and
difficulties in bounding box prediction across complex layouts. We present
ChartCitor, a multi-agent framework that provides fine-grained bounding box
citations by identifying supporting evidence within chart images. The system
orchestrates LLM agents to perform chart-to-table extraction, answer
reformulation, table augmentation, evidence retrieval through pre-filtering and
re-ranking, and table-to-chart mapping. ChartCitor outperforms existing
baselines across different chart types. Qualitative user studies show that
ChartCitor helps increase user trust in Generative AI by providing enhanced
explainability for LLM-assisted chart QA and enables professionals to be more
productive. | 7 | 67a5c7621e6db426653ebc8a | null | null |
|
2025-02-07T02:46:29.675000 | Great Models Think Alike and this Undermines AI Oversight | 2 | {
"_id": "6506832221ac448013f94995",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6506832221ac448013f94995/sVUI1JV4Dxan5l-MqNze4.jpeg",
"followerCount": 1,
"fullname": "Shashwat Goel",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "shash42",
"type": "user"
} | true | [
"https://cdn-uploads.huggingface.co/production/uploads/6506832221ac448013f94995/pXBCc2dpWXCw6JinTbiFP.png"
] | 2502.04313 | [
{
"_id": "67a5b9107897c8f5406155e0",
"hidden": false,
"name": "Shashwat Goel",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-07T10:39:36.508Z",
"user": {
"_id": "6506832221ac448013f94995",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6506832221ac448013f94995/sVUI1JV4Dxan5l-MqNze4.jpeg",
"fullname": "Shashwat Goel",
"isPro": false,
"type": "user",
"user": "shash42"
}
},
{
"_id": "67a5b9107897c8f5406155e1",
"hidden": false,
"name": "Joschka Struber",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-07T10:39:43.250Z",
"user": {
"_id": "6728c6113d35dd53cfe9f30c",
"avatarUrl": "/avatars/7f93b9d41446cce382f63c78ca5059a1.svg",
"fullname": "Joschka Strüber",
"isPro": false,
"type": "user",
"user": "Klingspor"
}
},
{
"_id": "67a5b9107897c8f5406155e2",
"hidden": false,
"name": "Ilze Amanda Auzina",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-07T10:40:04.242Z",
"user": {
"_id": "671b49503fd1d03dc69194b0",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/no-auth/tnkR0j1VaWClUcumXcgjQ.png",
"fullname": "Ilze Amanda Auzina",
"isPro": false,
"type": "user",
"user": "iaa01"
}
},
{
"_id": "67a5b9107897c8f5406155e3",
"hidden": false,
"name": "Karuna K Chandra",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a5b9107897c8f5406155e4",
"hidden": false,
"name": "Ponnurangam Kumaraguru",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-08T13:56:22.133Z",
"user": {
"_id": "67a6a4b7f379cef464950268",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/yaMsT2jYsj6mvvYU1gvO_.jpeg",
"fullname": "ponnurangam kumaraguru",
"isPro": false,
"type": "user",
"user": "pk-profgiri"
}
},
{
"_id": "67a5b9107897c8f5406155e5",
"hidden": false,
"name": "Douwe Kiela",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-07T10:40:22.153Z",
"user": {
"_id": "61dc997715b47073db1620dc",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1641847245435-61dc997715b47073db1620dc.jpeg",
"fullname": "Douwe Kiela",
"isPro": false,
"type": "user",
"user": "douwekiela"
}
},
{
"_id": "67a5b9107897c8f5406155e6",
"hidden": false,
"name": "Ameya Prabhu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-07T10:40:34.763Z",
"user": {
"_id": "6464a0d41683d3c81f51924a",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6464a0d41683d3c81f51924a/s7yYVwfUB4WOhVFJS6A6T.jpeg",
"fullname": "Ameya Prabhu",
"isPro": false,
"type": "user",
"user": "AmeyaPrabhu"
}
},
{
"_id": "67a5b9107897c8f5406155e7",
"hidden": false,
"name": "Matthias Bethge",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a5b9107897c8f5406155e8",
"hidden": false,
"name": "Jonas Geiping",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-07T10:40:49.233Z",
"user": {
"_id": "63d86dbf3130cadcaf8bdd11",
"avatarUrl": "/avatars/29d79a0c6dcec01111ef192fecd0fa7a.svg",
"fullname": "Jonas Geiping",
"isPro": false,
"type": "user",
"user": "JonasGeiping"
}
}
] | 2025-02-06T18:56:01 | Great Models Think Alike and this Undermines AI Oversight | As Language Model (LM) capabilities advance, evaluating and supervising them
at scale is getting harder for humans. There is hope that other language models
can automate both these tasks, which we refer to as "AI Oversight". We study
how model similarity affects both aspects of AI oversight by proposing a
probabilistic metric for LM similarity based on overlap in model mistakes.
Using this metric, we first show that LLM-as-a-judge scores favor models
similar to the judge, generalizing recent self-preference results. Then, we
study training on LM annotations, and find that complementary knowledge between the
weak supervisor and strong student model plays a crucial role in gains from
"weak-to-strong generalization". As model capabilities increase, it becomes
harder to find their mistakes, and we might defer more to AI oversight.
However, we observe a concerning trend -- model mistakes are becoming more
similar with increasing capabilities, pointing to risks from correlated
failures. Our work underscores the importance of reporting and correcting for
model similarity, especially in the emerging paradigm of AI oversight. | 31 | 67a5b9137897c8f540615673 | null | null |
|
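The mistake-overlap idea can be approximated in a few lines: measure how often two models err on the same examples, relative to what independent errors would predict. This discrete variant is a simplification; the paper's metric is probabilistic, and the normalization below is one common choice, not necessarily theirs:

```python
# Minimal error-overlap similarity between two models, in the spirit of
# the abstract's mistake-based metric.
import numpy as np

def error_consistency(preds_a, preds_b, labels):
    """Agreement on errors, adjusted for the independent-errors baseline."""
    err_a = preds_a != labels
    err_b = preds_b != labels
    both_wrong = np.mean(err_a & err_b)
    expected = np.mean(err_a) * np.mean(err_b)    # if errors were independent
    denom = min(np.mean(err_a), np.mean(err_b)) - expected
    return (both_wrong - expected) / denom if denom > 0 else 0.0

labels = np.array([0, 1, 1, 0, 1, 0, 1, 1])
m1 = np.array([0, 1, 0, 0, 1, 1, 1, 1])   # two errors
m2 = np.array([0, 1, 0, 0, 1, 1, 1, 1])   # identical mistakes
print(error_consistency(m1, m2, labels))   # 1.0: maximally similar failures
```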
2025-02-07T01:37:25.953000 | Speak Easy: Eliciting Harmful Jailbreaks from LLMs with Simple Interactions | 2 | {
"_id": "64bf072bae436c8813494ba3",
"avatarUrl": "/avatars/afb96d2bbf90411f4b1a030ebebff300.svg",
"followerCount": 1,
"fullname": "Yuxin Xiao",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "YuxinXiao",
"type": "user"
} | true | null | 2502.04322 | [
{
"_id": "67a5a9357415f9155e9b4b58",
"hidden": false,
"name": "Yik Siu Chan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a5a9357415f9155e9b4b59",
"hidden": true,
"name": "Narutatsu Ri",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-07T09:57:58.519Z",
"user": {
"_id": "64698ed0dcbb937d56b9dd02",
"avatarUrl": "/avatars/835ce9bf6e2cd1d4b7a709cf41a884e2.svg",
"fullname": "Edward Ri",
"isPro": false,
"type": "user",
"user": "narutatsuri"
}
},
{
"_id": "67a5a9357415f9155e9b4b5a",
"hidden": false,
"name": "Yuxin Xiao",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-07T09:58:00.910Z",
"user": {
"_id": "64bf072bae436c8813494ba3",
"avatarUrl": "/avatars/afb96d2bbf90411f4b1a030ebebff300.svg",
"fullname": "Yuxin Xiao",
"isPro": false,
"type": "user",
"user": "YuxinXiao"
}
},
{
"_id": "67a5a9357415f9155e9b4b5b",
"hidden": false,
"name": "Marzyeh Ghassemi",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-06T18:59:02 | Speak Easy: Eliciting Harmful Jailbreaks from LLMs with Simple
Interactions | Despite extensive safety alignment efforts, large language models (LLMs)
remain vulnerable to jailbreak attacks that elicit harmful behavior. While
existing studies predominantly focus on attack methods that require technical
expertise, two critical questions remain underexplored: (1) Are jailbroken
responses truly useful in enabling average users to carry out harmful actions?
(2) Do safety vulnerabilities exist in more common, simple human-LLM
interactions? In this paper, we demonstrate that LLM responses most effectively
facilitate harmful actions when they are both actionable and informative--two
attributes easily elicited in multi-step, multilingual interactions. Using this
insight, we propose HarmScore, a jailbreak metric that measures how effectively
an LLM response enables harmful actions, and Speak Easy, a simple multi-step,
multilingual attack framework. Notably, by incorporating Speak Easy into direct
request and jailbreak baselines, we see an average absolute increase of 0.319
in Attack Success Rate and 0.426 in HarmScore in both open-source and
proprietary LLMs across four safety benchmarks. Our work reveals a critical yet
often overlooked vulnerability: Malicious users can easily exploit common
interaction patterns for harmful intentions. | 3 | 67a5a9367415f9155e9b4bbb | null | null |
|
2025-02-07T01:29:53.798000 | Analyze Feature Flow to Enhance Interpretation and Steering in Language Models | 2 | {
"_id": "62a9c8edc19f92ae443ab37f",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1669110208492-62a9c8edc19f92ae443ab37f.png",
"followerCount": 10,
"fullname": "Daniil Gavrilov",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "kefirski",
"type": "user"
} | true | null | 2502.03032 | [
{
"_id": "67a59c4e7ffacd843a56404a",
"hidden": false,
"name": "Daniil Laptev",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-07T09:58:04.546Z",
"user": {
"_id": "634c5f8cfb80cc6bcaf42c03",
"avatarUrl": "/avatars/1f37db0e70cbaf9707f4c8cbcee37ca0.svg",
"fullname": "Daniil Laptev",
"isPro": false,
"type": "user",
"user": "dlaptev"
}
},
{
"_id": "67a59c4e7ffacd843a56404b",
"hidden": false,
"name": "Nikita Balagansky",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-07T09:58:02.693Z",
"user": {
"_id": "60b364e7f88532cd79eaff7b",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1654185363389-60b364e7f88532cd79eaff7b.jpeg",
"fullname": "Nikita Balagansky",
"isPro": false,
"type": "user",
"user": "elephantmipt"
}
},
{
"_id": "67a59c4e7ffacd843a56404c",
"hidden": false,
"name": "Yaroslav Aksenov",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-13T20:36:09.037Z",
"user": {
"_id": "63ed5676684767daecac6f8a",
"avatarUrl": "/avatars/d0e4a715f9c3fb6d74c183bab751ec35.svg",
"fullname": "Yaroslav Aksenov",
"isPro": false,
"type": "user",
"user": "yaraksen"
}
},
{
"_id": "67a59c4e7ffacd843a56404d",
"hidden": false,
"name": "Daniil Gavrilov",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-07T09:58:06.718Z",
"user": {
"_id": "62a9c8edc19f92ae443ab37f",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1669110208492-62a9c8edc19f92ae443ab37f.png",
"fullname": "Daniil Gavrilov",
"isPro": false,
"type": "user",
"user": "kefirski"
}
}
] | 2025-02-05T09:39:34 | Analyze Feature Flow to Enhance Interpretation and Steering in Language
Models | We introduce a new approach to systematically map features discovered by
sparse autoencoders across consecutive layers of large language models,
extending earlier work that examined inter-layer feature links. By using a
data-free cosine similarity technique, we trace how specific features persist,
transform, or first appear at each stage. This method yields granular flow
graphs of feature evolution, enabling fine-grained interpretability and
mechanistic insights into model computations. Crucially, we demonstrate how
these cross-layer feature maps facilitate direct steering of model behavior by
amplifying or suppressing chosen features, achieving targeted thematic control
in text generation. Together, our findings highlight the utility of a causal,
cross-layer interpretability framework that not only clarifies how features
develop through forward passes but also provides new means for transparent
manipulation of large language models. | 56 | 67a59c4f7ffacd843a56408f | null | null |
|
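Because the matching step is data-free, it can be sketched from decoder weights alone: compare SAE decoder directions of adjacent layers by cosine similarity and read off, for each feature, whether it persists or disappears. Sizes and the threshold are hypothetical; real SAEs are much wider and the random weights here will, by design, barely match:

```python
import torch
import torch.nn.functional as F

d_model, n_feats = 512, 2048
dec_l = F.normalize(torch.randn(n_feats, d_model), dim=-1)   # layer L decoder
dec_l1 = F.normalize(torch.randn(n_feats, d_model), dim=-1)  # layer L+1 decoder

sim = dec_l @ dec_l1.T                    # (n_feats, n_feats) cosine matrix
score, match = sim.max(dim=-1)            # best successor for each feature

threshold = 0.7                           # below this: feature "disappears"
persists = score > threshold
print(f"{persists.float().mean().item():.0%} of layer-L features persist")
# ~0% for random directions; trained SAEs show substantial persistence.
```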
2025-02-07T00:56:20.873000 | MAGA: MAssive Genre-Audience Reformulation to Pretraining Corpus Expansion | 2 | {
"_id": "64b764bffdb702b3d8640610",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/64b764bffdb702b3d8640610/lpHg0AX_NOmzw-ZxeOa1s.png",
"followerCount": 3,
"fullname": "haoxintong",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "haoxintong",
"type": "user"
} | true | null | 2502.04235 | [
{
"_id": "67a56af6d7c26c7497a86308",
"hidden": false,
"name": "Xintong Hao",
"status": "extracted_confirmed",
"statusLastChangedAt": "2025-02-07T04:41:11.249Z",
"user": {
"_id": "64b764bffdb702b3d8640610",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/64b764bffdb702b3d8640610/lpHg0AX_NOmzw-ZxeOa1s.png",
"fullname": "haoxintong",
"isPro": false,
"type": "user",
"user": "haoxintong"
}
},
{
"_id": "67a56af6d7c26c7497a86309",
"hidden": false,
"name": "Ke Shen",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-07T12:56:30.330Z",
"user": {
"_id": "645604eebabbbbd3486dc615",
"avatarUrl": "/avatars/17a5ca8274e2bfc8f183a4af9878a930.svg",
"fullname": "shenke",
"isPro": false,
"type": "user",
"user": "shenke18"
}
},
{
"_id": "67a56af6d7c26c7497a8630a",
"hidden": false,
"name": "Chenggang Li",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-06T17:19:55 | MAGA: MAssive Genre-Audience Reformulation to Pretraining Corpus
Expansion | Despite the remarkable capabilities of large language models across various
tasks, their continued scaling faces a critical challenge: the scarcity of
high-quality pretraining data. While model architectures continue to evolve,
natural language data struggles to scale up. To tackle this bottleneck, we
propose the MAssive Genre-Audience (MAGA) reformulation method, which
systematically synthesizes diverse, contextually rich pretraining data from
existing corpora. This work makes three main contributions: (1) We propose the
MAGA reformulation method, a lightweight and scalable approach for pretraining
corpus expansion, and build MAGACorpus, a 770B-token corpus. (2) We evaluate
MAGACorpus with different data budget scaling strategies, demonstrating
consistent improvements across various model sizes (134M-13B) and establishing
the necessity of large-scale synthetic data for pretraining next-generation
language models. (3) Through comprehensive analysis, we investigate the impact
of prompt engineering on synthetic training collapse and reveal the limitations
of conventional collapse-detection metrics based on validation losses. Our work
shows that MAGA can substantially expand training datasets while maintaining
quality, offering a reliable pathway for scaling models beyond data
limitations. | 21 | 67a56af8d7c26c7497a86359 | null | null |
|
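Mechanically, genre-audience reformulation is a prompting loop over (genre, audience) pairs applied to each source document. A minimal sketch; the pair lists, prompt wording, and the `llm` callable are invented placeholders rather than the paper's configuration:

```python
import itertools
import random

GENRES = ["textbook chapter", "casual dialogue", "news explainer"]
AUDIENCES = ["middle schoolers", "domain experts", "hobbyists"]

def maga_expand(llm, document, pairs_per_doc=3, seed=0):
    """Reformulate one document under several (genre, audience) pairs."""
    rng = random.Random(seed)
    pairs = rng.sample(list(itertools.product(GENRES, AUDIENCES)),
                       k=pairs_per_doc)
    return [llm(f"Rewrite the following text as a {g} for {a}, "
                f"preserving all facts:\n\n{document}")
            for g, a in pairs]

# Toy stand-in LLM that just truncates the prompt, to show the plumbing.
outs = maga_expand(lambda p: p[:60] + "...", "Water boils at 100 °C.")
print(len(outs))   # 3 reformulations per source document
```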
2025-02-07T00:54:43.254000 | Ola: Pushing the Frontiers of Omni-Modal Language Model with Progressive Modality Alignment | 2 | {
"_id": "64f001bfabd9fb1914398bd5",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/64f001bfabd9fb1914398bd5/9teH82hkBI4csIz_WQh5q.jpeg",
"followerCount": 2,
"fullname": "liuzuyan",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "Zuyan",
"type": "user"
} | true | null | 2502.04328 | [
{
"_id": "67a586fad177de2eeba7de7b",
"hidden": false,
"name": "Zuyan Liu",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-07T09:58:10.679Z",
"user": {
"_id": "64f001bfabd9fb1914398bd5",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/64f001bfabd9fb1914398bd5/9teH82hkBI4csIz_WQh5q.jpeg",
"fullname": "liuzuyan",
"isPro": false,
"type": "user",
"user": "Zuyan"
}
},
{
"_id": "67a586fad177de2eeba7de7c",
"hidden": false,
"name": "Yuhao Dong",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-07T10:37:45.556Z",
"user": {
"_id": "652965773a416e1f2173443b",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/652965773a416e1f2173443b/y9MB8YgHzbwCXAc4EI9T3.jpeg",
"fullname": "Yuhao Dong",
"isPro": false,
"type": "user",
"user": "THUdyh"
}
},
{
"_id": "67a586fad177de2eeba7de7d",
"hidden": false,
"name": "Jiahui Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a586fad177de2eeba7de7e",
"hidden": false,
"name": "Ziwei Liu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-07T10:38:14.852Z",
"user": {
"_id": "62ab1ac1d48b4d8b048a3473",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1656826685333-62ab1ac1d48b4d8b048a3473.png",
"fullname": "Ziwei Liu",
"isPro": false,
"type": "user",
"user": "liuziwei7"
}
},
{
"_id": "67a586fad177de2eeba7de7f",
"hidden": false,
"name": "Winston Hu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-07T10:38:23.468Z",
"user": {
"_id": "63673bb9d0ee6e2662be0ec1",
"avatarUrl": "/avatars/1b8976785d64bc4e3f7159ccdb7f06c5.svg",
"fullname": "Qingqiao Hu",
"isPro": false,
"type": "user",
"user": "WinstonHu"
}
},
{
"_id": "67a586fad177de2eeba7de80",
"hidden": false,
"name": "Jiwen Lu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-07T10:38:29.456Z",
"user": {
"_id": "66c44203ea476bea05e9fcd7",
"avatarUrl": "/avatars/b061eebec609446e669f5ad6365959f9.svg",
"fullname": "lu",
"isPro": false,
"type": "user",
"user": "jiwenlu"
}
},
{
"_id": "67a586fad177de2eeba7de81",
"hidden": false,
"name": "Yongming Rao",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-07T10:38:35.766Z",
"user": {
"_id": "63e4865354f51ea342d45d78",
"avatarUrl": "/avatars/2e7eccc878751331ca8b282f53e38899.svg",
"fullname": "Yongming Rao",
"isPro": false,
"type": "user",
"user": "raoyongming"
}
}
] | 2025-02-06T18:59:55 | Ola: Pushing the Frontiers of Omni-Modal Language Model with Progressive
Modality Alignment | Recent advances in large language models, particularly following GPT-4o, have
sparked increasing interest in developing omni-modal models capable of
understanding more modalities. While some open-source alternatives have
emerged, there is still a notable lag behind specialized single-modality models
in performance. In this paper, we present Ola, an Omni-modal language model
that achieves competitive performance across image, video, and audio
understanding compared to specialized counterparts. The core design of Ola lies
in its progressive modality alignment strategy, which progressively extends the
set of modalities the language model supports. Our training pipeline begins with
the most distinct modalities: image and text, then gradually expands the skill
sets of the model using speech data that connects language and audio knowledge,
and video data that connects all modalities. The progressive learning pipeline
also keeps the cross-modal alignment data relatively small, making it easy and
inexpensive to develop omni-modal models from existing vision-language
models. Moreover, to unlock an advanced interactive
experience like GPT-4o, we further design a sentence-wise decoding solution for
streaming speech generation. Extensive experiments demonstrate that Ola
surpasses existing open omni-modal LLMs across all modalities while achieving
highly competitive performance compared to state-of-the-art specialized models
of similar sizes. We aim to make Ola a fully open omni-modal understanding
solution to advance future research in this emerging field. Model weights,
code, and data are open-sourced at https://github.com/Ola-Omni/Ola. | 28 | 67a586fbd177de2eeba7deae | null | null |
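The progressive modality alignment described above is essentially a staged curriculum. A minimal, runnable sketch follows; the stage names, data mixtures, and the `train_step` stub are illustrative assumptions, not Ola's actual recipe:

```python
import random

# Hypothetical stage list: each later stage mixes in earlier-stage data
# so previously learned modalities are not forgotten.
STAGES = [
    ("image-text", ["image_caption", "text_instruct"]),
    ("speech",     ["asr", "speech_qa", "text_instruct"]),
    ("video",      ["video_qa", "audio_visual_qa", "image_caption"]),
]

def train_step(model_state, sample):
    # Stand-in for one gradient step: just record what the model has seen.
    model_state.setdefault("seen", []).append(sample)
    return model_state

def progressive_alignment(steps_per_stage=3, seed=0):
    rng = random.Random(seed)
    model_state = {}
    for name, mixture in STAGES:
        for _ in range(steps_per_stage):
            model_state = train_step(model_state, rng.choice(mixture))
        print("finished stage:", name)
    return model_state

if __name__ == "__main__":
    progressive_alignment()
```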
|
2025-02-07T00:48:49.217000 | DynVFX: Augmenting Real Videos with Dynamic Content | 3 | {
"_id": "6181c72cdcc1df2c9de8a4d8",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1655248010394-6181c72cdcc1df2c9de8a4d8.jpeg",
"followerCount": 14,
"fullname": "Hila Chefer",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "Hila",
"type": "user"
} | false | null | 2502.03621 | [
{
"_id": "67a59e5298f41a0460ee5282",
"hidden": false,
"name": "Danah Yatim",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-07T10:37:14.464Z",
"user": {
"_id": "6301d8324ccccaa23d3864f4",
"avatarUrl": "/avatars/148b1b1d1460e26f03a1f2ce0feacf78.svg",
"fullname": "Danah Yatim",
"isPro": false,
"type": "user",
"user": "DanahY"
}
},
{
"_id": "67a59e5298f41a0460ee5283",
"hidden": false,
"name": "Rafail Fridman",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-07T10:37:21.549Z",
"user": {
"_id": "62627f3c02cd5952e013c843",
"avatarUrl": "/avatars/1d76689d75d670630b6fa0307309c31f.svg",
"fullname": "Rafail Fridman",
"isPro": false,
"type": "user",
"user": "RafailFridman"
}
},
{
"_id": "67a59e5298f41a0460ee5284",
"hidden": false,
"name": "Omer Bar-Tal",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-07T10:37:28.382Z",
"user": {
"_id": "62e29044a133a252b5cf70b2",
"avatarUrl": "/avatars/6d09ddcba9bc47c309150a8d77815891.svg",
"fullname": "Omer Bar-Tal",
"isPro": false,
"type": "user",
"user": "omerbartal"
}
},
{
"_id": "67a59e5298f41a0460ee5285",
"hidden": false,
"name": "Tali Dekel",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-07T10:37:34.275Z",
"user": {
"_id": "631cddec68f7da9ad24f6fc7",
"avatarUrl": "/avatars/7d4f1ce805e5889ca6594bd4a93f2583.svg",
"fullname": "Tali Dekel",
"isPro": false,
"type": "user",
"user": "talidekel"
}
}
] | 2025-02-05T21:14:55 | DynVFX: Augmenting Real Videos with Dynamic Content | We present a method for augmenting real-world videos with newly generated
dynamic content. Given an input video and a simple user-provided text
instruction describing the desired content, our method synthesizes dynamic
objects or complex scene effects that naturally interact with the existing
scene over time. The position, appearance, and motion of the new content are
seamlessly integrated into the original footage while accounting for camera
motion, occlusions, and interactions with other dynamic objects in the scene,
resulting in a cohesive and realistic output video. We achieve this via a
zero-shot, training-free framework that harnesses a pre-trained text-to-video
diffusion transformer to synthesize the new content and a pre-trained Vision
Language Model to envision the augmented scene in detail. Specifically, we
introduce a novel inference-based method that manipulates features within the
attention mechanism, enabling accurate localization and seamless integration of
the new content while preserving the integrity of the original scene. Our
method is fully automated, requiring only a simple user instruction. We
demonstrate its effectiveness on a wide range of edits applied to real-world
videos, encompassing diverse objects and scenarios involving both camera and
object motion. | 28 | 67a59e5798f41a0460ee5389 | null | null |
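One common way to "manipulate features within the attention mechanism" of a video diffusion model is to let queries attend jointly over generated and original-scene keys/values, so new content stays anchored to the existing footage. The sketch below shows that extended-attention pattern in plain PyTorch; it is a generic illustration under that assumption, not DynVFX's exact mechanism:

```python
import torch

def extended_attention(q, k_gen, v_gen, k_orig, v_orig):
    # Attend jointly over generated and original-video keys/values so the
    # synthesized content remains grounded in the existing scene.
    k = torch.cat([k_gen, k_orig], dim=1)   # (B, N_gen + N_orig, D)
    v = torch.cat([v_gen, v_orig], dim=1)
    attn = torch.softmax(q @ k.transpose(1, 2) / k.shape[-1] ** 0.5, dim=-1)
    return attn @ v                          # (B, N_q, D)

B, Nq, N, D = 1, 16, 16, 32
q = torch.randn(B, Nq, D)
out = extended_attention(q, torch.randn(B, N, D), torch.randn(B, N, D),
                         torch.randn(B, N, D), torch.randn(B, N, D))
print(out.shape)  # torch.Size([1, 16, 32])
```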
|
2025-02-06T23:52:49.331000 | Towards Physical Understanding in Video Generation: A 3D Point Regularization Approach | 3 | {
"_id": "60f1abe7544c2adfd699860c",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"followerCount": 6280,
"fullname": "AK",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "akhaliq",
"type": "user"
} | false | null | 2502.03639 | [
{
"_id": "67a59193f86e1b9d7ae7cd55",
"hidden": false,
"name": "Yunuo Chen",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-07T10:50:19.710Z",
"user": {
"_id": "65a47b4d60cc6b04c9ebb0ff",
"avatarUrl": "/avatars/b35ae99eab95e95a327c30b6d3ad6c83.svg",
"fullname": "Yunuo Chen",
"isPro": false,
"type": "user",
"user": "yunuoch"
}
},
{
"_id": "67a59193f86e1b9d7ae7cd56",
"hidden": false,
"name": "Junli Cao",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-07T10:50:25.916Z",
"user": {
"_id": "63f54aa73aa49d8cb97b84bc",
"avatarUrl": "/avatars/c73c5870039611ab9162daad46a1ba20.svg",
"fullname": "junli cao",
"isPro": false,
"type": "user",
"user": "jlcao2"
}
},
{
"_id": "67a59193f86e1b9d7ae7cd57",
"hidden": false,
"name": "Anil Kag",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-07T10:50:32.227Z",
"user": {
"_id": "66b01ee8e53bbad918362856",
"avatarUrl": "/avatars/293529589a91dd7a95909d66727db224.svg",
"fullname": "Anil Kag",
"isPro": false,
"type": "user",
"user": "anilkagak2"
}
},
{
"_id": "67a59193f86e1b9d7ae7cd58",
"hidden": false,
"name": "Vidit Goel",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-07T10:50:37.829Z",
"user": {
"_id": "636c0c1a15cd58e915bb8139",
"avatarUrl": "/avatars/7c675ac6a7d303d3425e498c4e939eb0.svg",
"fullname": "Vidit Goel",
"isPro": false,
"type": "user",
"user": "vidit98"
}
},
{
"_id": "67a59193f86e1b9d7ae7cd59",
"hidden": false,
"name": "Sergei Korolev",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a59193f86e1b9d7ae7cd5a",
"hidden": false,
"name": "Chenfanfu Jiang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-07T10:50:50.619Z",
"user": {
"_id": "655683727be68c0961673f45",
"avatarUrl": "/avatars/cddca36c041fa04860a4d42c0feaa07f.svg",
"fullname": "Chenfanfu Jiang",
"isPro": false,
"type": "user",
"user": "cffjiang"
}
},
{
"_id": "67a59193f86e1b9d7ae7cd5b",
"hidden": false,
"name": "Sergey Tulyakov",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a59193f86e1b9d7ae7cd5c",
"hidden": false,
"name": "Jian Ren",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-10T09:50:19.428Z",
"user": {
"_id": "61f19829233c91cbd2f79e70",
"avatarUrl": "/avatars/a0735a94542b4f7cda5aed8bc4be0538.svg",
"fullname": "Jian Ren",
"isPro": false,
"type": "user",
"user": "alanspike"
}
}
] | 2025-02-05T21:49:06 | Towards Physical Understanding in Video Generation: A 3D Point
Regularization Approach | We present a novel video generation framework that integrates 3-dimensional
geometry and dynamic awareness. To achieve this, we augment 2D videos with 3D
point trajectories and align them in pixel space. The resulting 3D-aware video
dataset, PointVid, is then used to fine-tune a latent diffusion model, enabling
it to track 2D objects with 3D Cartesian coordinates. Building on this, we
regularize the shape and motion of objects in the video to eliminate undesired
artifacts, e.g., nonphysical deformation. Consequently, we enhance the quality
of generated RGB videos and alleviate common issues like object morphing, which
are prevalent in current video models due to a lack of shape awareness. With
our 3D augmentation and regularization, our model is capable of handling
contact-rich scenarios such as task-oriented videos. These videos involve
complex interactions of solids, where 3D information is essential for
perceiving deformation and contact. Furthermore, our model improves the overall
quality of video generation by promoting the 3D consistency of moving objects
and reducing abrupt changes in shape and motion. | 8 | 67a59195f86e1b9d7ae7cd97 | null | null |
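A simple instance of regularizing "the shape and motion of objects" via 3D point trajectories is to penalize frame-to-frame changes in pairwise 3D distances, which discourages nonphysical deformation. The loss below is a toy stand-in under that assumption, not the paper's exact regularizer:

```python
import torch

def rigidity_loss(points):
    """points: (T, N, 3) tensor of N tracked 3D points over T frames."""
    d = torch.cdist(points, points)        # (T, N, N) pairwise distances
    return (d[1:] - d[:-1]).abs().mean()   # distances should stay stable

traj = torch.randn(8, 32, 3, requires_grad=True)
loss = rigidity_loss(traj)
loss.backward()                            # differentiable, so usable as a training penalty
print(float(loss))
```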
|
2025-02-06T23:50:54.836000 | MotionCanvas: Cinematic Shot Design with Controllable Image-to-Video Generation | 3 | {
"_id": "60f1abe7544c2adfd699860c",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"followerCount": 6280,
"fullname": "AK",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "akhaliq",
"type": "user"
} | false | null | 2502.04299 | [
{
"_id": "67a591234020a3bfdb8cb2e5",
"hidden": false,
"name": "Jinbo Xing",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-07T10:43:19.911Z",
"user": {
"_id": "64770e86d7cf39f2e937ae9a",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/64770e86d7cf39f2e937ae9a/pLqGg2z1KzQxCGpMwds-9.jpeg",
"fullname": "Jinbo Xing",
"isPro": false,
"type": "user",
"user": "Doubiiu"
}
},
{
"_id": "67a591234020a3bfdb8cb2e6",
"hidden": false,
"name": "Long Mai",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a591234020a3bfdb8cb2e7",
"hidden": false,
"name": "Cusuh Ham",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-07T10:43:42.324Z",
"user": {
"_id": "6372fab1bd1595ae66a62543",
"avatarUrl": "/avatars/783bdae07b2663eebeea4c7919a87c91.svg",
"fullname": "Cusuh Ham",
"isPro": false,
"type": "user",
"user": "cusuh"
}
},
{
"_id": "67a591234020a3bfdb8cb2e8",
"hidden": true,
"name": "Jiahui Huang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-07T10:44:01.034Z",
"user": {
"_id": "644a717e75fce8ebef4e4955",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/zLga4NZBohFPlv50dcAo9.png",
"fullname": "Jiahui Huang",
"isPro": false,
"type": "user",
"user": "heiwang1997"
}
},
{
"_id": "67a591234020a3bfdb8cb2e9",
"hidden": false,
"name": "Aniruddha Mahapatra",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-07T10:44:08.344Z",
"user": {
"_id": "633bd831d5935998f74c4156",
"avatarUrl": "/avatars/feb4976ad10dd678ccad2652acf8a611.svg",
"fullname": "Aniruddha Mahapatra",
"isPro": false,
"type": "user",
"user": "aniruddha26398"
}
},
{
"_id": "67a591234020a3bfdb8cb2ea",
"hidden": false,
"name": "Chi-Wing Fu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a591234020a3bfdb8cb2eb",
"hidden": false,
"name": "Tien-Tsin Wong",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-07T10:44:26.835Z",
"user": {
"_id": "65574f0fc4865c852d5eec15",
"avatarUrl": "/avatars/1e03db4f2de4959dee620c577fbbb063.svg",
"fullname": "Tien-Tsin Wong",
"isPro": false,
"type": "user",
"user": "ttwong"
}
},
{
"_id": "67a591234020a3bfdb8cb2ec",
"hidden": false,
"name": "Feng Liu",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-06T18:41:04 | MotionCanvas: Cinematic Shot Design with Controllable Image-to-Video
Generation | This paper presents a method that allows users to design cinematic video
shots in the context of image-to-video generation. Shot design, a critical
aspect of filmmaking, involves meticulously planning both camera movements and
object motions in a scene. However, enabling intuitive shot design in modern
image-to-video generation systems presents two main challenges: first,
effectively capturing user intentions on the motion design, where both camera
movements and scene-space object motions must be specified jointly; and second,
representing motion information that can be effectively utilized by a video
diffusion model to synthesize the image animations. To address these
challenges, we introduce MotionCanvas, a method that integrates user-driven
controls into image-to-video (I2V) generation models, allowing users to control
both object and camera motions in a scene-aware manner. By connecting insights
from classical computer graphics and contemporary video generation techniques,
we demonstrate the ability to achieve 3D-aware motion control in I2V synthesis
without requiring costly 3D-related training data. MotionCanvas enables users
to intuitively depict scene-space motion intentions, and translates them into
spatiotemporal motion-conditioning signals for video diffusion models. We
demonstrate the effectiveness of our method on a wide range of real-world image
content and shot-design scenarios, highlighting its potential to enhance the
creative workflows in digital content creation and adapt to various image and
video editing applications. | 17 | 67a5912b4020a3bfdb8cb4d5 | null | null |
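Translating a user-drawn trajectory into "spatiotemporal motion-conditioning signals" can be sketched as resampling the path to one waypoint per frame and encoding per-frame displacements. The NumPy snippet below is a simplified stand-in for such a signal; MotionCanvas's actual representation is richer:

```python
import numpy as np

def trajectory_to_conditioning(points, num_frames):
    points = np.asarray(points, dtype=float)         # (K, 2) user waypoints
    t_src = np.linspace(0, 1, len(points))
    t_dst = np.linspace(0, 1, num_frames)
    # Resample the sketched path to one 2D waypoint per output frame.
    path = np.stack([np.interp(t_dst, t_src, points[:, i]) for i in range(2)], axis=1)
    disp = np.diff(path, axis=0, prepend=path[:1])   # per-frame motion vectors
    return np.concatenate([path, disp], axis=1)      # (num_frames, 4)

cond = trajectory_to_conditioning([(0, 0), (5, 2), (8, 8)], num_frames=16)
print(cond.shape)  # (16, 4)
```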
|
2025-02-06T23:38:19.926000 | MotionLab: Unified Human Motion Generation and Editing via the Motion-Condition-Motion Paradigm | 3 | {
"_id": "659faf1d874e583fed79d09b",
"avatarUrl": "/avatars/178a18686426908b9496ce71f6550655.svg",
"followerCount": 1,
"fullname": "Ziyan Guo",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "ZiyanGuo",
"type": "user"
} | true | null | 2502.02358 | [
{
"_id": "67a43546f6caedc30f9d8c71",
"hidden": false,
"name": "Ziyan Guo",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-06T14:15:01.599Z",
"user": {
"_id": "659faf1d874e583fed79d09b",
"avatarUrl": "/avatars/178a18686426908b9496ce71f6550655.svg",
"fullname": "Ziyan Guo",
"isPro": false,
"type": "user",
"user": "ZiyanGuo"
}
},
{
"_id": "67a43546f6caedc30f9d8c72",
"hidden": false,
"name": "Zeyu Hu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-07T10:38:48.101Z",
"user": {
"_id": "65fbc3c6f52ac1107f5b1677",
"avatarUrl": "/avatars/b8373c039c3d978510b89d057bd9b5e8.svg",
"fullname": "Zeyu Hu",
"isPro": false,
"type": "user",
"user": "zeyuhu"
}
},
{
"_id": "67a43546f6caedc30f9d8c73",
"hidden": false,
"name": "Na Zhao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a43546f6caedc30f9d8c74",
"hidden": false,
"name": "De Wen Soh",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-04T14:43:26 | MotionLab: Unified Human Motion Generation and Editing via the
Motion-Condition-Motion Paradigm | Human motion generation and editing are key components of computer graphics
and vision. However, current approaches in this field tend to offer isolated
solutions tailored to specific tasks, which can be inefficient and impractical
for real-world applications. While some efforts have aimed to unify
motion-related tasks, these methods simply use different modalities as
conditions to guide motion generation. Consequently, they lack editing
capabilities, fine-grained control, and fail to facilitate knowledge sharing
across tasks. To address these limitations and provide a versatile, unified
framework capable of handling both human motion generation and editing, we
introduce a novel paradigm: Motion-Condition-Motion, which enables the unified
formulation of diverse tasks with three concepts: source motion, condition, and
target motion. Based on this paradigm, we propose a unified framework,
MotionLab, which incorporates rectified flows to learn the mapping from source
motion to target motion, guided by the specified conditions. In MotionLab, we
introduce the 1) MotionFlow Transformer to enhance conditional generation and
editing without task-specific modules; 2) Aligned Rotational Position Encoding
to guarantee the time synchronization between source motion and target motion;
3) Task Specified Instruction Modulation; and 4) Motion Curriculum Learning for
effective multi-task learning and knowledge sharing across tasks. Notably, our
MotionLab demonstrates promising generalization capabilities and inference
efficiency across multiple benchmarks for human motion. Our code and additional
video results are available at: https://diouo.github.io/motionlab.github.io/. | 17 | 67a43547f6caedc30f9d8c9b | null | null |
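The rectified-flow component maps source motion to target motion along straight-line interpolations. Below is a minimal, standard rectified-flow training step, with a toy MLP and random tensors standing in for motions and conditions:

```python
import torch
import torch.nn as nn

dim = 16
net = nn.Sequential(nn.Linear(dim + 1, 64), nn.SiLU(), nn.Linear(64, dim))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

source = torch.randn(32, dim)   # stand-in for source motion (plus condition)
target = torch.randn(32, dim)   # stand-in for target motion

t = torch.rand(32, 1)
x_t = (1 - t) * source + t * target          # straight-line interpolation
v_pred = net(torch.cat([x_t, t], dim=-1))    # predicted velocity field
loss = ((v_pred - (target - source)) ** 2).mean()
loss.backward()
opt.step()
print(float(loss))
```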
|
2025-02-06T23:20:09.641000 | Gold-medalist Performance in Solving Olympiad Geometry with AlphaGeometry2 | 5 | {
"_id": "60f1abe7544c2adfd699860c",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"followerCount": 6280,
"fullname": "AK",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "akhaliq",
"type": "user"
} | false | null | 2502.03544 | [
{
"_id": "67a589ebb16fabcdd2dea1eb",
"hidden": false,
"name": "Yuri Chervonyi",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a589ebb16fabcdd2dea1ec",
"hidden": false,
"name": "Trieu H. Trinh",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a589ebb16fabcdd2dea1ed",
"hidden": false,
"name": "Miroslav Olšák",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a589ebb16fabcdd2dea1ee",
"hidden": false,
"name": "Xiaomeng Yang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a589ebb16fabcdd2dea1ef",
"hidden": false,
"name": "Hoang Nguyen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a589ebb16fabcdd2dea1f0",
"hidden": false,
"name": "Marcelo Menegali",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-07T10:41:38.940Z",
"user": {
"_id": "60cc0c3494ab6115ab6ecf12",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1623985197562-noauth.jpeg",
"fullname": "Marcelo Menegali",
"isPro": false,
"type": "user",
"user": "mmenegali"
}
},
{
"_id": "67a589ebb16fabcdd2dea1f1",
"hidden": false,
"name": "Junehyuk Jung",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a589ebb16fabcdd2dea1f2",
"hidden": false,
"name": "Vikas Verma",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a589ebb16fabcdd2dea1f3",
"hidden": false,
"name": "Quoc V. Le",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a589ebb16fabcdd2dea1f4",
"hidden": false,
"name": "Thang Luong",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-07T10:42:08.011Z",
"user": {
"_id": "65ee0b97306927c125d65779",
"avatarUrl": "/avatars/637129308a95efdf8faac9fb81a66589.svg",
"fullname": "Thang Luong",
"isPro": false,
"type": "user",
"user": "lmthang"
}
}
] | 2025-02-05T19:02:03 | Gold-medalist Performance in Solving Olympiad Geometry with
AlphaGeometry2 | We present AlphaGeometry2, a significantly improved version of AlphaGeometry
introduced in Trinh et al. (2024), which has now surpassed an average gold
medalist in solving Olympiad geometry problems. To achieve this, we first
extend the original AlphaGeometry language to tackle harder problems involving
movements of objects, and problems containing linear equations of angles,
ratios, and distances. This, together with other additions, has markedly
improved the coverage rate of the AlphaGeometry language on International Math
Olympiads (IMO) 2000-2024 geometry problems from 66% to 88%. The search process
of AlphaGeometry2 has also been greatly improved through the use of the Gemini
architecture for better language modeling, and a novel knowledge-sharing
mechanism that combines multiple search trees. Together with further
enhancements to the symbolic engine and synthetic data generation, we have
significantly boosted the overall solving rate of AlphaGeometry2 to 84% for
all geometry problems over the last 25 years, compared to 54%
previously. AlphaGeometry2 was also part of the system that achieved
silver-medal standard at IMO 2024 https://dpmd.ai/imo-silver. Last but not
least, we report progress towards using AlphaGeometry2 as a part of a fully
automated system that reliably solves geometry problems directly from natural
language input. | 43 | 67a589ecb16fabcdd2dea259 | null | null |
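The knowledge-sharing mechanism that "combines multiple search trees" can be pictured as parallel searches depositing deduced facts into a shared pool so that no tree repeats another's work. The toy sketch below illustrates only that control flow; the real symbolic engine and search are far more sophisticated:

```python
shared_facts = set()

def expand(frontier, deduce):
    new_frontier = []
    for state in frontier:
        for fact in deduce(state):
            if fact not in shared_facts:      # skip work another tree already did
                shared_facts.add(fact)
                new_frontier.append(fact)
    return new_frontier

def toy_deduce(state):                        # stand-in for symbolic deduction
    return [state + 1, state * 2]

frontiers = {0: [1], 1: [3]}                  # two search trees, different seeds
for step in range(3):
    for tid in frontiers:
        frontiers[tid] = expand(frontiers[tid], toy_deduce)
print(sorted(shared_facts))
```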
|
2025-02-06T23:17:40.725000 | Llasa: Scaling Train-Time and Inference-Time Compute for Llama-based Speech Synthesis | 4 | {
"_id": "60f1abe7544c2adfd699860c",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"followerCount": 6280,
"fullname": "AK",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "akhaliq",
"type": "user"
} | true | null | 2502.04128 | [
{
"_id": "67a5894db16fabcdd2de5459",
"hidden": false,
"name": "Zhen Ye",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-07T09:58:08.787Z",
"user": {
"_id": "645f172d7c6bff8577353d1a",
"avatarUrl": "/avatars/a83682e1343809257b082b78d58c582a.svg",
"fullname": "ZhenYE",
"isPro": false,
"type": "user",
"user": "ZhenYe234"
}
},
{
"_id": "67a5894db16fabcdd2de545a",
"hidden": false,
"name": "Xinfa Zhu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a5894db16fabcdd2de545b",
"hidden": false,
"name": "Chi-Min Chan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a5894db16fabcdd2de545c",
"hidden": false,
"name": "Xinsheng Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a5894db16fabcdd2de545d",
"hidden": false,
"name": "Xu Tan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a5894db16fabcdd2de545e",
"hidden": false,
"name": "Jiahe Lei",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a5894db16fabcdd2de545f",
"hidden": false,
"name": "Yi Peng",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a5894db16fabcdd2de5460",
"hidden": false,
"name": "Haohe Liu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a5894db16fabcdd2de5461",
"hidden": false,
"name": "Yizhu Jin",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a5894db16fabcdd2de5462",
"hidden": false,
"name": "Zheqi DAI",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a5894db16fabcdd2de5463",
"hidden": false,
"name": "Hongzhan Lin",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-19T09:05:09.037Z",
"user": {
"_id": "6499466c7d1edf7cb612a9a6",
"avatarUrl": "/avatars/c2e18594aa0879db8226f2a04496fb0b.svg",
"fullname": "Hongzhan Lin",
"isPro": false,
"type": "user",
"user": "danielhzlin"
}
},
{
"_id": "67a5894db16fabcdd2de5464",
"hidden": false,
"name": "Jianyi Chen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a5894db16fabcdd2de5465",
"hidden": false,
"name": "Xingjian Du",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a5894db16fabcdd2de5466",
"hidden": false,
"name": "Liumeng Xue",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a5894db16fabcdd2de5467",
"hidden": false,
"name": "Yunlin Chen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a5894db16fabcdd2de5468",
"hidden": false,
"name": "Zhifei Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a5894db16fabcdd2de5469",
"hidden": false,
"name": "Lei Xie",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a5894db16fabcdd2de546a",
"hidden": false,
"name": "Qiuqiang Kong",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a5894db16fabcdd2de546b",
"hidden": false,
"name": "Yike Guo",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a5894db16fabcdd2de546c",
"hidden": false,
"name": "Wei Xue",
"status": "extracted_pending",
"statusLastChangedAt": "2025-02-07T04:17:17.888Z",
"user": {
"_id": "6628adb14277eae0da5eee28",
"avatarUrl": "/avatars/6cb41b80cc5e014e455dfc2a22682e64.svg",
"fullname": "HKUST Audio",
"isPro": true,
"type": "user",
"user": "HKUST-Audio"
}
}
] | 2025-02-06T15:04:00 | Llasa: Scaling Train-Time and Inference-Time Compute for Llama-based
Speech Synthesis | Recent advances in text-based large language models (LLMs), particularly in
the GPT series and the o1 model, have demonstrated the effectiveness of scaling
both training-time and inference-time compute. However, current
state-of-the-art TTS systems leveraging LLMs are often multi-stage, requiring
separate models (e.g., diffusion models after the LLM), complicating the decision
of whether to scale a particular model during training or testing. This work
makes the following contributions: First, we explore the scaling of train-time
and inference-time compute for speech synthesis. Second, we propose a simple
framework Llasa for speech synthesis that employs a single-layer vector
quantizer (VQ) codec and a single Transformer architecture to fully align with
standard LLMs such as Llama. Our experiments reveal that scaling train-time
compute for Llasa consistently improves the naturalness of synthesized speech
and enables the generation of more complex and accurate prosody patterns.
Furthermore, from the perspective of scaling inference-time compute, we employ
speech understanding models as verifiers during the search, finding that
scaling inference-time compute shifts the sampling modes toward the preferences
of specific verifiers, thereby improving emotional expressiveness, timbre
consistency, and content accuracy. In addition, we publicly release the
checkpoints and training code for our TTS models (1B, 3B, 8B) and our codec
model. | 24 | 67a5894db16fabcdd2de54d3 | null | null |
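Using "speech understanding models as verifiers during the search" is, in its simplest form, best-of-N sampling scored by a verifier. The sketch below illustrates that pattern with stub generator and scorer functions; Llasa's actual search is more involved:

```python
def generate(prompt, seed):
    # Stand-in for sampling one candidate output from the model.
    return f"{prompt}-candidate-{seed}"

def verifier(candidate):
    # Stand-in for a speech-understanding scorer; deterministic toy score.
    return (sum(map(ord, candidate)) % 100) / 100.0

def best_of_n(prompt, n=8):
    candidates = [generate(prompt, s) for s in range(n)]
    return max(candidates, key=verifier)   # more samples -> more verifier-preferred output

print(best_of_n("hello"))
```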
|
2025-02-06T23:13:23.158000 | PILAF: Optimal Human Preference Sampling for Reward Modeling | 2 | {
"_id": "65cbfa6c968742be942e6cba",
"avatarUrl": "/avatars/1a6cc0983edc28fa92178d3abc283ba1.svg",
"followerCount": null,
"fullname": "Feng",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "Yunzhen",
"type": "user"
} | false | null | 2502.04270 | [
{
"_id": "67a5882fa8e877ef10b8d1fd",
"hidden": false,
"name": "Yunzhen Feng",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-07T10:49:26.187Z",
"user": {
"_id": "664187fa1cd689758847f44b",
"avatarUrl": "/avatars/501ed1d5bcffd7466fd8b8c8d3b758f0.svg",
"fullname": "Yunzhen Feng",
"isPro": false,
"type": "user",
"user": "Coolfyz"
}
},
{
"_id": "67a5882fa8e877ef10b8d1fe",
"hidden": false,
"name": "Ariel Kwiatkowski",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-07T10:49:33.321Z",
"user": {
"_id": "625de0717341c641426e7932",
"avatarUrl": "/avatars/9deb06fc565a80002c3ae75c6f4cd9e7.svg",
"fullname": "Ariel Kwiatkowski",
"isPro": false,
"type": "user",
"user": "RedTachyon"
}
},
{
"_id": "67a5882fa8e877ef10b8d1ff",
"hidden": false,
"name": "Kunhao Zheng",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-07T10:49:41.052Z",
"user": {
"_id": "6424123d3fa01ecba6fd94e8",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/B-1YSkTJMVBBQDX3WVxIL.jpeg",
"fullname": "Kunhao Zheng",
"isPro": false,
"type": "user",
"user": "Kunhao"
}
},
{
"_id": "67a5882fa8e877ef10b8d200",
"hidden": false,
"name": "Julia Kempe",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-07T10:49:48.009Z",
"user": {
"_id": "65ce30e06da01df536eded5a",
"avatarUrl": "/avatars/04c32cba7a3bbaf9ea5dee88c96cf87b.svg",
"fullname": "Julia Kempe",
"isPro": false,
"type": "user",
"user": "Knykny"
}
},
{
"_id": "67a5882fa8e877ef10b8d201",
"hidden": false,
"name": "Yaqi Duan",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-07T10:49:54.220Z",
"user": {
"_id": "66494b428d50b4b0efceab9c",
"avatarUrl": "/avatars/ac7293aafaf15759d53cf62f4e1ae874.svg",
"fullname": "Yaqi Duan",
"isPro": false,
"type": "user",
"user": "duanyq"
}
}
] | 2025-02-06T18:09:00 | PILAF: Optimal Human Preference Sampling for Reward Modeling | As large language models increasingly drive real-world applications, aligning
them with human values becomes paramount. Reinforcement Learning from Human
Feedback (RLHF) has emerged as a key technique, translating preference data
into reward models when oracle human values remain inaccessible. In practice,
RLHF mostly relies on approximate reward models, which may not consistently
guide the policy toward maximizing the underlying human values. We propose
Policy-Interpolated Learning for Aligned Feedback (PILAF), a novel response
sampling strategy for preference labeling that explicitly aligns preference
learning with maximizing the underlying oracle reward. PILAF is theoretically
grounded, demonstrating optimality from both an optimization and a statistical
perspective. The method is straightforward to implement and demonstrates strong
performance in iterative and online RLHF settings where feedback curation is
critical. | 11 | 67a58830a8e877ef10b8d226 | null | null |
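One way to read "policy-interpolated" response sampling is drawing candidates from logits interpolated between the current policy and a reference policy, so the resulting preference pairs are informative about the policy gradient. The snippet below sketches that idea; the coefficients and pairing scheme are assumptions, not PILAF's exact construction:

```python
import torch

def interpolated_sample(logits_policy, logits_ref, beta):
    # Sample from a log-linear interpolation of the two policies.
    mixed = beta * logits_policy + (1 - beta) * logits_ref
    return torch.distributions.Categorical(logits=mixed).sample()

vocab = 10
lp, lr = torch.randn(vocab), torch.randn(vocab)
# One response leans toward the policy, the other toward the reference,
# so labeling the pair yields an informative preference signal.
resp_a = interpolated_sample(lp, lr, beta=1.2)   # slight extrapolation
resp_b = interpolated_sample(lp, lr, beta=0.8)
print(int(resp_a), int(resp_b))
```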
|
2025-02-06T23:12:15.874000 | BOLT: Bootstrap Long Chain-of-Thought in Language Models without Distillation | 2 | {
"_id": "60f1abe7544c2adfd699860c",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"followerCount": 6280,
"fullname": "AK",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "akhaliq",
"type": "user"
} | false | null | 2502.03860 | [
{
"_id": "67a5880c886a1e223b1d57ec",
"hidden": false,
"name": "Bo Pang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-07T10:46:17.484Z",
"user": {
"_id": "63e08acbf351dc0745749d56",
"avatarUrl": "/avatars/8e2d5ce9db5bd8008ac2ad80f6025553.svg",
"fullname": "Bo Pang",
"isPro": false,
"type": "user",
"user": "bpucla"
}
},
{
"_id": "67a5880c886a1e223b1d57ed",
"hidden": false,
"name": "Hanze Dong",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-07T10:46:02.107Z",
"user": {
"_id": "63a3ff69f91ad3ea5703841d",
"avatarUrl": "/avatars/69227c4bce01d33747c1377b6f9672db.svg",
"fullname": "Hanze Dong",
"isPro": false,
"type": "user",
"user": "hendrydong"
}
},
{
"_id": "67a5880c886a1e223b1d57ee",
"hidden": false,
"name": "Jiacheng Xu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-07T10:45:54.892Z",
"user": {
"_id": "631983d5cb116eab31df5821",
"avatarUrl": "/avatars/6a42c842a9439241ead2ace1d79fc32c.svg",
"fullname": "Jiacheng Xu",
"isPro": false,
"type": "user",
"user": "jcxu"
}
},
{
"_id": "67a5880c886a1e223b1d57ef",
"hidden": false,
"name": "Silvio Savarese",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a5880c886a1e223b1d57f0",
"hidden": false,
"name": "Yingbo Zhou",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-07T10:45:09.887Z",
"user": {
"_id": "649bc93758d8b19de0c7785f",
"avatarUrl": "/avatars/3ed9473aee23d99f4ee949d3705089ea.svg",
"fullname": "Yingbo Zhou",
"isPro": false,
"type": "user",
"user": "yingbozhou"
}
},
{
"_id": "67a5880c886a1e223b1d57f1",
"hidden": false,
"name": "Caiming Xiong",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-07T10:45:02.543Z",
"user": {
"_id": "649dbcc4e0fff1ed099dc80a",
"avatarUrl": "/avatars/c87c273ca628dbcddccbf1ee19b2ce33.svg",
"fullname": "Caiming Xiong",
"isPro": false,
"type": "user",
"user": "cxiong"
}
}
] | 2025-02-06T08:19:59 | BOLT: Bootstrap Long Chain-of-Thought in Language Models without
Distillation | Large language models (LLMs), such as o1 from OpenAI, have demonstrated
remarkable reasoning capabilities. o1 generates a long chain-of-thought
(LongCoT) before answering a question. LongCoT allows LLMs to analyze problems,
devise plans, reflect, and backtrack effectively. These actions empower LLMs to
solve complex problems. After the release of o1, many teams have attempted to
replicate its LongCoT and reasoning capabilities. In terms of methods, they
primarily rely on knowledge distillation with data from existing models with
LongCoT capacities (e.g., OpenAI-o1, Qwen-QwQ, DeepSeek-R1-Preview), leaving
significant uncertainty about how to systematically develop such reasoning
abilities. In terms of data domains, these works focus narrowly on math while a
few others include coding, limiting their generalizability. This paper
introduces a novel approach to enable LLMs' LongCoT capacity without
distillation from o1-like models or expensive human annotations, where we
bootstrap LongCoT (BOLT) from a standard instruct model. BOLT involves three
stages: 1) LongCoT data bootstrapping with in-context learning on a standard
instruct model; 2) LongCoT supervised finetuning; 3) online training to further
refine LongCoT capacities. In BOLT, only a few in-context examples need to be
constructed during the bootstrapping stage; in our experiments, we created 10
examples, demonstrating the feasibility of this approach. We use
Llama-3.1-70B-Instruct to bootstrap LongCoT and apply our method to various
model scales (7B, 8B, 70B). We achieve impressive performance on a variety of
benchmarks, including Arena-Hard, MT-Bench, WildBench, ZebraLogic, and MATH500,
which evaluate diverse task-solving and reasoning capabilities. | 24 | 67a5880e886a1e223b1d58ca | null | null |
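The three BOLT stages can be outlined as a short pipeline; in the sketch below every function body is a stub and only the control flow mirrors the paper's description:

```python
def bootstrap_longcot(instruct_model, icl_examples, queries):
    # Stage 1: elicit long chains of thought via in-context learning.
    return [instruct_model(ex + q) for ex in icl_examples for q in queries]

def sft(model, data):
    # Stage 2: supervised finetuning on the bootstrapped LongCoT data (stubbed).
    return lambda q: model(q) + " [longcot]"

def online_train(model, reward):
    # Stage 3: online training to further refine LongCoT behavior (stubbed).
    return model

base = lambda q: f"answer({q})"
data = bootstrap_longcot(base, ["example: think step by step\n"], ["q1", "q2"])
model = online_train(sft(base, data), reward=lambda y: len(y))
print(model("q3"))
```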
|
2025-02-06T22:34:42.483000 | ScoreFlow: Mastering LLM Agent Workflows via Score-based Preference Optimization | 2 | {
"_id": "64fde4e252e82dd432b74ce9",
"avatarUrl": "/avatars/061a69d858b86d1600be916122cae7fc.svg",
"followerCount": 6,
"fullname": "Ling Yang",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "Lingaaaaaaa",
"type": "user"
} | true | null | 2502.04306 | [
{
"_id": "67a57f334e50b2956b13f4e0",
"hidden": false,
"name": "Yinjie Wang",
"status": "extracted_pending",
"statusLastChangedAt": "2025-02-07T03:34:13.176Z",
"user": {
"_id": "6730dc8df84c8aac97451e57",
"avatarUrl": "/avatars/4f2cf5363b17744daca41d2a18ddfeb8.svg",
"fullname": "Yinjie Wang",
"isPro": false,
"type": "user",
"user": "yinjiewang"
}
},
{
"_id": "67a57f334e50b2956b13f4e1",
"hidden": false,
"name": "Ling Yang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-11T14:36:48.541Z",
"user": {
"_id": "64fde4e252e82dd432b74ce9",
"avatarUrl": "/avatars/061a69d858b86d1600be916122cae7fc.svg",
"fullname": "Ling Yang",
"isPro": false,
"type": "user",
"user": "Lingaaaaaaa"
}
},
{
"_id": "67a57f334e50b2956b13f4e2",
"hidden": false,
"name": "Guohao Li",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-07T10:43:05.909Z",
"user": {
"_id": "6338790e76421c054310c96b",
"avatarUrl": "/avatars/112e3d88d155bc998a89fef6f33af64d.svg",
"fullname": "Guohao Li",
"isPro": false,
"type": "user",
"user": "lightaime"
}
},
{
"_id": "67a57f334e50b2956b13f4e3",
"hidden": false,
"name": "Mengdi Wang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-07T10:42:45.269Z",
"user": {
"_id": "6599415e8c8ac79295e0b5e3",
"avatarUrl": "/avatars/85500bc8d2cd51444adcc19b1f8db313.svg",
"fullname": "Mengdi Wang",
"isPro": false,
"type": "user",
"user": "Edify-Kd2024"
}
},
{
"_id": "67a57f334e50b2956b13f4e4",
"hidden": false,
"name": "Bryon Aragam",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-06T18:47:49 | ScoreFlow: Mastering LLM Agent Workflows via Score-based Preference
Optimization | Recent research has leveraged large language model multi-agent systems for
complex problem-solving while trying to reduce the manual effort required to
build them, driving the development of automated agent workflow optimization
methods. However, existing methods remain inflexible due to representational
limitations, a lack of adaptability, and poor scalability when relying on
discrete optimization techniques. We address these challenges with ScoreFlow, a
simple yet high-performance framework that leverages efficient gradient-based
optimization in a continuous space. ScoreFlow incorporates Score-DPO, a novel
variant of the direct preference optimization method that accounts for
quantitative feedback. Across six benchmarks spanning question answering,
coding, and mathematical reasoning, ScoreFlow achieves an 8.2% improvement over
existing baselines. Moreover, it empowers smaller models to outperform larger
ones with lower inference costs. Project:
https://github.com/Gen-Verse/ScoreFlow | 19 | 67a57f354e50b2956b13f53d | null | null |
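Score-DPO is described as a DPO variant that "accounts for quantitative feedback". One plausible reading, sketched below, scales the standard DPO margin loss by the score gap between the preferred and dispreferred responses; this is an assumption about the objective's spirit, not its exact formula:

```python
import torch
import torch.nn.functional as F

def score_dpo_loss(logp_w, logp_l, ref_w, ref_l, score_w, score_l, beta=0.1):
    # Standard DPO margin between chosen (w) and rejected (l) responses...
    margin = beta * ((logp_w - ref_w) - (logp_l - ref_l))
    # ...weighted by the quantitative score gap (assumed weighting scheme).
    gap = (score_w - score_l).clamp(min=0)
    return -(gap * F.logsigmoid(margin)).mean()

loss = score_dpo_loss(torch.randn(4), torch.randn(4),
                      torch.zeros(4), torch.zeros(4),
                      torch.tensor([0.9, 0.8, 0.7, 0.6]),
                      torch.tensor([0.2, 0.5, 0.1, 0.4]))
print(float(loss))
```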
|
2025-02-06T22:27:51.425000 | UltraIF: Advancing Instruction Following from the Wild | 2 | {
"_id": "66c89152d33e34fbc29497d7",
"avatarUrl": "/avatars/bbddabf6532393951c4759e5915a065b.svg",
"followerCount": 2,
"fullname": "KaikaiAn",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "kkk-an",
"type": "user"
} | false | null | 2502.04153 | [
{
"_id": "67a57b1fdea89ffe80d9fe56",
"hidden": false,
"name": "Kaikai An",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-07T09:58:18.320Z",
"user": {
"_id": "66c89152d33e34fbc29497d7",
"avatarUrl": "/avatars/bbddabf6532393951c4759e5915a065b.svg",
"fullname": "KaikaiAn",
"isPro": false,
"type": "user",
"user": "kkk-an"
}
},
{
"_id": "67a57b1fdea89ffe80d9fe57",
"hidden": false,
"name": "Li Sheng",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-14T11:10:55.558Z",
"user": {
"_id": "65c874daa3ea4f6d8df75dd1",
"avatarUrl": "/avatars/871c5d59c8cca32a3849c6ea56f5a2a7.svg",
"fullname": "li sheng",
"isPro": false,
"type": "user",
"user": "bambisheng"
}
},
{
"_id": "67a57b1fdea89ffe80d9fe58",
"hidden": false,
"name": "Ganqu Cui",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-07T10:31:05.333Z",
"user": {
"_id": "650eba9555dc1e841746f132",
"avatarUrl": "/avatars/af6f5ee78f161d25ec0afc45d2def8eb.svg",
"fullname": "Ganqu Cui",
"isPro": false,
"type": "user",
"user": "ganqu"
}
},
{
"_id": "67a57b1fdea89ffe80d9fe59",
"hidden": false,
"name": "Shuzheng Si",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-07T09:58:16.229Z",
"user": {
"_id": "637c99bbfe115289cfedfb44",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/637c99bbfe115289cfedfb44/344NN9KKF_XXTlVYaGaMW.png",
"fullname": "ssz",
"isPro": false,
"type": "user",
"user": "ssz1111"
}
},
{
"_id": "67a57b1fdea89ffe80d9fe5a",
"hidden": false,
"name": "Ning Ding",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a57b1fdea89ffe80d9fe5b",
"hidden": false,
"name": "Yu Cheng",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a57b1fdea89ffe80d9fe5c",
"hidden": false,
"name": "Baobao Chang",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-06T15:39:16 | UltraIF: Advancing Instruction Following from the Wild | Instruction following is what makes modern large language models (LLMs) helpful
assistants. However, the key to taming LLMs on complex instructions remains
elusive, as there are large gaps between models trained by the open-source
community and those trained by leading companies. To bridge the gap, we propose
a simple and scalable approach UltraIF for building LLMs that can follow
complex instructions with open-source data. UltraIF first decomposes real-world
user prompts into simpler queries, constraints, and corresponding evaluation
questions for the constraints. Then, we train an UltraComposer to compose
constraint-associated prompts with evaluation questions. This prompt composer
allows us to synthesize complicated instructions as well as filter responses
with evaluation questions. In our experiment, for the first time, we
successfully align LLaMA-3.1-8B-Base to catch up with its instruct version on 5
instruction-following benchmarks without any benchmark information, using only
an 8B model as the response generator and evaluator. The aligned model also achieved
competitive scores on other benchmarks. Moreover, we also show that UltraIF
could further improve LLaMA-3.1-8B-Instruct through self-alignment, motivating
broader use cases for the method. Our code will be available at
https://github.com/kkk-an/UltraIF. | 22 | 67a57b1fdea89ffe80d9fe93 | null | null |
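The UltraIF data pipeline (decompose a prompt into a base query plus constraints and evaluation questions, recompose harder prompts, then filter responses with those questions) can be sketched with stub functions standing in for the LLM calls:

```python
def decompose(prompt):
    # Stand-in for an LLM that splits a real user prompt.
    return {"query": "write a poem",
            "constraints": ["exactly 4 lines"],
            "eval_questions": ["Does the poem have exactly 4 lines?"]}

def compose(query, constraint):
    # Stand-in for the UltraComposer recomposing a harder prompt.
    return f"{query}, with the constraint: {constraint}"

def passes(response, eval_question):
    # Stand-in for an LLM evaluator answering the evaluation question.
    return response.count("\n") == 3

parts = decompose("write a 4-line poem")
prompt = compose(parts["query"], parts["constraints"][0])
response = "a\nb\nc\nd"
print(prompt, "| kept:", passes(response, parts["eval_questions"][0]))
```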
|
2025-02-06T22:27:24.284000 | Beyond Prompt Content: Enhancing LLM Performance via Content-Format Integrated Prompt Optimization | 2 | {
"_id": "62abdf657b037eafffc48808",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1655430982462-noauth.jpeg",
"followerCount": 9,
"fullname": "Jiahang Xu",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "Jiahang",
"type": "user"
} | true | null | 2502.04295 | [
{
"_id": "67a57d32bc587f5b57a3f24f",
"hidden": false,
"name": "Yuanye Liu",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-13T08:26:09.117Z",
"user": {
"_id": "66825791c238f05e95f53f60",
"avatarUrl": "/avatars/db771a39095ec5e9125959d1d919d593.svg",
"fullname": "Yuanye Liu",
"isPro": false,
"type": "user",
"user": "HenryLau7"
}
},
{
"_id": "67a57d32bc587f5b57a3f250",
"hidden": false,
"name": "Jiahang Xu",
"status": "extracted_pending",
"statusLastChangedAt": "2025-02-07T03:25:39.760Z",
"user": {
"_id": "62abdf657b037eafffc48808",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1655430982462-noauth.jpeg",
"fullname": "Jiahang Xu",
"isPro": false,
"type": "user",
"user": "Jiahang"
}
},
{
"_id": "67a57d32bc587f5b57a3f251",
"hidden": false,
"name": "Li Lyna Zhang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-07T10:47:55.350Z",
"user": {
"_id": "62b0009c72043b05d29492b2",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/62b0009c72043b05d29492b2/NqRkX2YLhlfOLvYysa7dD.png",
"fullname": "Li Lyna Zhang",
"isPro": false,
"type": "user",
"user": "lynazhang"
}
},
{
"_id": "67a57d32bc587f5b57a3f252",
"hidden": false,
"name": "Qi Chen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a57d32bc587f5b57a3f253",
"hidden": false,
"name": "Xuan Feng",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a57d32bc587f5b57a3f254",
"hidden": false,
"name": "Yang Chen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a57d32bc587f5b57a3f255",
"hidden": false,
"name": "Zhongxin Guo",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a57d32bc587f5b57a3f256",
"hidden": false,
"name": "Yuqing Yang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a57d32bc587f5b57a3f257",
"hidden": false,
"name": "Cheng Peng",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-06T18:36:44 | Beyond Prompt Content: Enhancing LLM Performance via Content-Format
Integrated Prompt Optimization | Large Language Models (LLMs) have shown significant capability across various
tasks, with their real-world effectiveness often driven by prompt design. While
recent research has focused on optimizing prompt content, the role of prompt
formatting, a critical but often overlooked dimension, has received limited
systematic investigation. In this paper, we introduce Content-Format Integrated
Prompt Optimization (CFPO), an innovative methodology that jointly optimizes
both prompt content and formatting through an iterative refinement process.
CFPO leverages natural language mutations to explore content variations and
employs a dynamic format exploration strategy that systematically evaluates
diverse format options. Our extensive evaluations across multiple tasks and
open-source LLMs show that CFPO delivers measurable performance
improvements compared to content-only optimization methods. This highlights the
importance of integrated content-format optimization and offers a practical,
model-agnostic approach to enhancing LLM performance. Code will be available at
https://github.com/HenryLau7/CFPO. | 13 | 67a57d33bc587f5b57a3f29d | null | null |
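The joint content-format search can be sketched as alternating content mutations with a sweep over format templates, keeping the best (content, format) pair. Below, the candidate formats, mutation operator, and scoring function are toy stand-ins:

```python
import random

FORMATS = ["plain: {}", "markdown:\n## Task\n{}", "xml: <task>{}</task>"]

def mutate(content, seed):
    # Stand-in for a natural-language mutation of the prompt content.
    return content + random.Random(seed).choice([" Be concise.", " Think step by step."])

def score(prompt):
    # Stand-in for benchmark evaluation of a full prompt.
    return -abs(len(prompt) - 60)

def cfpo(content, rounds=3):
    best = max(((f.format(content), content, f) for f in FORMATS),
               key=lambda t: score(t[0]))
    for r in range(rounds):
        cand = mutate(best[1], r)
        for f in FORMATS:                     # dynamic format exploration
            p = f.format(cand)
            if score(p) > score(best[0]):
                best = (p, cand, f)
    return best[0]

print(cfpo("Summarize the text."))
```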
|
2025-02-06T22:17:36.193000 | Learning Real-World Action-Video Dynamics with Heterogeneous Masked Autoregression | 3 | {
"_id": "63151385b031f7b1c7c0871c",
"avatarUrl": "/avatars/0088eb929866face5f95218943e3f478.svg",
"followerCount": 4,
"fullname": "Lirui Wang",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "liruiw",
"type": "user"
} | true | null | 2502.04296 | [
{
"_id": "67a57a4637e2abc28667ec1b",
"hidden": false,
"name": "Lirui Wang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-07T10:51:11.182Z",
"user": {
"_id": "63151385b031f7b1c7c0871c",
"avatarUrl": "/avatars/0088eb929866face5f95218943e3f478.svg",
"fullname": "Lirui Wang",
"isPro": false,
"type": "user",
"user": "liruiw"
}
},
{
"_id": "67a57a4637e2abc28667ec1c",
"hidden": false,
"name": "Kevin Zhao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a57a4637e2abc28667ec1d",
"hidden": false,
"name": "Chaoqi Liu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-07T10:51:27.451Z",
"user": {
"_id": "6747a05a736eaadf2eec50ff",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/mF6_-m3GRm5OfG2HDNorC.jpeg",
"fullname": "Chaoqi Liu",
"isPro": false,
"type": "user",
"user": "chaoqi-liu"
}
},
{
"_id": "67a57a4637e2abc28667ec1e",
"hidden": false,
"name": "Xinlei Chen",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-07T10:51:33.905Z",
"user": {
"_id": "63e58e3a006a775275e59e41",
"avatarUrl": "/avatars/75262a35b27a2ae1939df9118120d99e.svg",
"fullname": "Xinlei Chen",
"isPro": false,
"type": "user",
"user": "endernewton"
}
}
] | 2025-02-06T18:38:26 | Learning Real-World Action-Video Dynamics with Heterogeneous Masked
Autoregression | We propose Heterogeneous Masked Autoregression (HMA) for modeling
action-video dynamics to generate high-quality data and evaluation in scaling
robot learning. Building interactive video world models and policies for
robotics is difficult due to the challenge of handling diverse settings while
maintaining computational efficiency to run in real time. HMA uses
heterogeneous pre-training from observations and action sequences across
different robotic embodiments, domains, and tasks. HMA uses masked
autoregression to generate quantized or soft tokens for video predictions.
HMA achieves better visual fidelity and controllability than previous robotic
video generation models while running 15 times faster in the real world.
After post-training, this model can be used as a video simulator from low-level
action inputs for evaluating policies and generating synthetic data. See this
link https://liruiw.github.io/hma for more information. | 6 | 67a57a4737e2abc28667ec58 | null | null |
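Masked autoregression generates tokens by unmasking a chunk of positions per step rather than one token at a time. The toy decoder below shows only that scheduling mechanism over a token grid; in HMA the predictions come from a learned model rather than random draws:

```python
import torch

def masked_ar_decode(T=4, N=8, vocab=16, steps=4, seed=0):
    g = torch.Generator().manual_seed(seed)
    tokens = torch.full((T, N), -1)              # -1 marks masked positions
    for s in range(steps):
        masked = (tokens == -1).nonzero()
        k = max(1, len(masked) // (steps - s))   # unmask a chunk per step
        pick = masked[torch.randperm(len(masked), generator=g)[:k]]
        # Stand-in for the model's token predictions at the chosen positions.
        tokens[pick[:, 0], pick[:, 1]] = torch.randint(vocab, (len(pick),), generator=g)
    return tokens

print(masked_ar_decode())
```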
|
2025-02-06T13:40:55.430000 | HackerRank-ASTRA: Evaluating Correctness & Consistency of Large Language Models on cross-domain multi-file project problems | 2 | {
"_id": "63eff09f4a788ed1dd863b09",
"avatarUrl": "/avatars/b557d83cf6d6b3b46dfbe9b7727ae16d.svg",
"followerCount": null,
"fullname": "Jun",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "oldteacherjoy",
"type": "user"
} | true | null | 2502.00226 | [
{
"_id": "67a3d37e2d9a08978848c657",
"hidden": false,
"name": "Jun Xing",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-06T14:15:09.857Z",
"user": {
"_id": "63eff09f4a788ed1dd863b09",
"avatarUrl": "/avatars/b557d83cf6d6b3b46dfbe9b7727ae16d.svg",
"fullname": "Jun",
"isPro": false,
"type": "user",
"user": "oldteacherjoy"
}
},
{
"_id": "67a3d37e2d9a08978848c658",
"hidden": false,
"name": "Mayur Bhatia",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a3d37e2d9a08978848c659",
"hidden": false,
"name": "Sahil Phulwani",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a3d37e2d9a08978848c65a",
"hidden": false,
"name": "Darshan Suresh",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a3d37e2d9a08978848c65b",
"hidden": false,
"name": "Rafik Matta",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-01-31T23:47:02 | HackerRank-ASTRA: Evaluating Correctness & Consistency of Large Language
Models on cross-domain multi-file project problems | Evaluating the real-world applicability of large language models (LLMs)
provides valuable insights for their development and use in software
development tasks. Existing benchmarks often focus on standalone coding
problems or specific libraries, overlooking multi-file, project-based scenarios
and lacking a rigorous evaluation of consistency. The HackerRank-ASTRA
Benchmark introduces project-based coding problems that mirror real-world
scenarios. It evaluates model consistency through 32 runs per problem (k = 32)
and the median standard deviation, while incorporating taxonomy-level analysis to assess
sub-skill capabilities. Initial evaluations on 65 problems show that the top
three models -- o1, o1-preview, and Claude-3.5-Sonnet-1022 -- achieved
comparable average scores of 75%, with no statistically significant differences
in performance. Notably, Claude-3.5-Sonnet-1022 demonstrated the highest
consistency across problems, with low variability (SD = 0.0497), which was
statistically significant compared to other models, highlighting its
reliability for real-world software development tasks. | 0 | 67a3d37f2d9a08978848c6b0 | null | null |
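The consistency metric (k = 32 runs per problem, then the median of per-problem standard deviations) is straightforward to compute; the scores below are simulated:

```python
import random
import statistics

def median_std(scores_per_problem):
    # Per-problem standard deviation, then the median across problems.
    sds = [statistics.pstdev(scores) for scores in scores_per_problem]
    return statistics.median(sds)

random.seed(0)
k = 32
runs = [[random.gauss(0.75, 0.05) for _ in range(k)] for _ in range(65)]
print(round(median_std(runs), 4))
```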
|
2025-02-06T13:16:05.750000 | Activation-Informed Merging of Large Language Models | 2 | {
"_id": "64d516ba80d47a6b76fc1015",
"avatarUrl": "/avatars/e520825f2ac9ff047844496ae2dad7d6.svg",
"followerCount": null,
"fullname": "Amin Heyrani Nobari",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "ahn1376",
"type": "user"
} | true | [
"https://cdn-uploads.huggingface.co/production/uploads/64d516ba80d47a6b76fc1015/1uM4nP_D1hLZFUonzTJfU.png"
] | 2502.02421 | [
{
"_id": "67a4fc2450641c7d60cead58",
"hidden": false,
"name": "Amin Heyrani Nobari",
"status": "extracted_pending",
"statusLastChangedAt": "2025-02-06T18:15:01.447Z",
"user": {
"_id": "64d516ba80d47a6b76fc1015",
"avatarUrl": "/avatars/e520825f2ac9ff047844496ae2dad7d6.svg",
"fullname": "Amin Heyrani Nobari",
"isPro": false,
"type": "user",
"user": "ahn1376"
}
},
{
"_id": "67a4fc2450641c7d60cead59",
"hidden": false,
"name": "Kaveh Alimohammadi",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a4fc2450641c7d60cead5a",
"hidden": false,
"name": "Ali ArjomandBigdeli",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a4fc2450641c7d60cead5b",
"hidden": false,
"name": "Akash Srivastava",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a4fc2450641c7d60cead5c",
"hidden": false,
"name": "Faez Ahmed",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a4fc2450641c7d60cead5d",
"hidden": false,
"name": "Navid Azizan",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-04T15:42:03 | Activation-Informed Merging of Large Language Models | Model merging, a method that combines the parameters and embeddings of
multiple fine-tuned large language models (LLMs), offers a promising approach
to enhance model performance across various tasks while maintaining
computational efficiency. This paper introduces Activation-Informed Merging
(AIM), a technique that integrates the information from the activation space of
LLMs into the merging process to improve performance and robustness. AIM is
designed as a flexible, complementary solution that is applicable to any
existing merging method. It aims to preserve critical weights from the base
model, drawing on principles from continual learning (CL) and model
compression. Utilizing a task-agnostic calibration set, AIM selectively
prioritizes essential weights during merging. We empirically demonstrate that
AIM significantly enhances the performance of merged models across multiple
benchmarks. Our findings suggest that considering the activation-space
information can provide substantial advancements in the model merging
strategies for LLMs, with up to a 40% increase in benchmark performance. | 5 | 67a4fc2550641c7d60ceada5 | null | null |
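A minimal sketch of the activation-informed idea: estimate per-neuron importance from base-model activations on a calibration set, then pull merged weights back toward the base model where importance is high. Both the importance measure and the update rule below are assumptions about AIM's spirit, not its exact procedure:

```python
import torch

def aim_merge(w_base, w_merged, calib_acts, strength=0.5):
    # Importance of each output neuron ~ mean |activation| on calibration data.
    imp = calib_acts.abs().mean(dim=0)                 # (d_out,)
    imp = (imp / imp.max()).unsqueeze(1)               # normalize to [0, 1]
    # High-importance rows stay closer to the base model's weights.
    return (1 - strength * imp) * w_merged + strength * imp * w_base

d_out, d_in = 8, 4
w = aim_merge(torch.randn(d_out, d_in), torch.randn(d_out, d_in),
              calib_acts=torch.randn(100, d_out))
print(w.shape)  # torch.Size([8, 4])
```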
|
2025-02-06T10:25:05.958000 | Riddle Me This! Stealthy Membership Inference for Retrieval-Augmented Generation | 2 | {
"_id": "647a1010ffe1b559f5418534",
"avatarUrl": "/avatars/fed1a8dbd1090d8f48dc6c2d321a6212.svg",
"followerCount": 5,
"fullname": "Anshuman Suri",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "iamgroot42",
"type": "user"
} | true | null | 2502.00306 | [
{
"_id": "67a4d341784a1ad88b6110a0",
"hidden": false,
"name": "Ali Naseh",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a4d341784a1ad88b6110a1",
"hidden": false,
"name": "Yuefeng Peng",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-06T16:14:27.273Z",
"user": {
"_id": "66c6763f2402eab42a0ed395",
"avatarUrl": "/avatars/a6f47924d3a705dd350327f9814eb77e.svg",
"fullname": "Yuefeng Peng",
"isPro": false,
"type": "user",
"user": "yfp16443"
}
},
{
"_id": "67a4d341784a1ad88b6110a2",
"hidden": false,
"name": "Anshuman Suri",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-06T16:06:34.501Z",
"user": {
"_id": "647a1010ffe1b559f5418534",
"avatarUrl": "/avatars/fed1a8dbd1090d8f48dc6c2d321a6212.svg",
"fullname": "Anshuman Suri",
"isPro": false,
"type": "user",
"user": "iamgroot42"
}
},
{
"_id": "67a4d341784a1ad88b6110a3",
"hidden": false,
"name": "Harsh Chaudhari",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a4d341784a1ad88b6110a4",
"hidden": false,
"name": "Alina Oprea",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a4d341784a1ad88b6110a5",
"hidden": false,
"name": "Amir Houmansadr",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-01T04:01:18 | Riddle Me This! Stealthy Membership Inference for Retrieval-Augmented
Generation | Retrieval-Augmented Generation (RAG) enables Large Language Models (LLMs) to
generate grounded responses by leveraging external knowledge databases without
altering model parameters. Although the absence of weight tuning prevents
leakage via model parameters, it introduces the risk of inference adversaries
exploiting retrieved documents in the model's context. Existing methods for
membership inference and data extraction often rely on jailbreaking or
carefully crafted unnatural queries, which can be easily detected or thwarted
with query rewriting techniques common in RAG systems. In this work, we present
Interrogation Attack (IA), a membership inference technique targeting documents
in the RAG datastore. By crafting natural-text queries that are answerable only
with the target document's presence, our approach demonstrates successful
inference with just 30 queries while remaining stealthy; straightforward
detectors identify adversarial prompts from existing methods up to ~76x more
frequently than those generated by our attack. We observe a 2x improvement in
TPR@1%FPR over prior inference attacks across diverse RAG configurations, all
while costing less than $0.02 per document inference. | 5 | 67a4d342784a1ad88b6110d9 | null | null |
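The inference signal behind the Interrogation Attack can be sketched as asking several natural questions answerable only if the target document is in the datastore, then thresholding the fraction answered correctly. The RAG system below is faked with a dictionary:

```python
def rag_answer(question, datastore):
    # Stand-in: answers correctly only when the supporting doc is retrievable.
    return datastore.get(question)

def infer_membership(questions_with_answers, datastore, threshold=0.7):
    correct = sum(rag_answer(q, datastore) == a for q, a in questions_with_answers)
    return correct / len(questions_with_answers) >= threshold

qa = [("Who signed the Q3 memo?", "A. Smith"),
      ("What budget does the memo cite?", "$1.2M")]
member_store = {q: a for q, a in qa}       # target document present
print(infer_membership(qa, member_store))  # True  -> likely a member
print(infer_membership(qa, {}))            # False -> likely a non-member
```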
|
2025-02-06T09:35:37.969000 | Large Language Model Guided Self-Debugging Code Generation | 2 | {
"_id": "65eef9ce7443c09267513796",
"avatarUrl": "/avatars/62547f99130557f54093b2ff4d6c9c24.svg",
"followerCount": 1,
"fullname": "Muntasir Adnan",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "adnaan525",
"type": "user"
} | true | [
"https://cdn-uploads.huggingface.co/production/uploads/65eef9ce7443c09267513796/TfB30iULajOm37wTzZDER.png"
] | 2502.02928 | [
{
"_id": "67a4213c54bfb820ffb26f4a",
"hidden": false,
"name": "Muntasir Adnan",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-06T14:15:07.849Z",
"user": {
"_id": "65eef9ce7443c09267513796",
"avatarUrl": "/avatars/62547f99130557f54093b2ff4d6c9c24.svg",
"fullname": "Muntasir Adnan",
"isPro": false,
"type": "user",
"user": "adnaan525"
}
},
{
"_id": "67a4213c54bfb820ffb26f4b",
"hidden": false,
"name": "Zhiwei Xu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-06T16:13:47.448Z",
"user": {
"_id": "659fa5d10183046e16b6f993",
"avatarUrl": "/avatars/744497ac023d9ebc21e0c297d4f15fca.svg",
"fullname": "Zhiwei Xu",
"isPro": false,
"type": "user",
"user": "zhiwei555"
}
},
{
"_id": "67a4213c54bfb820ffb26f4c",
"hidden": false,
"name": "Carlos C. N. Kuhn",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-06T16:13:58.336Z",
"user": {
"_id": "6510df53469c325dc4dc69a5",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/no-auth/NBydVZ-3cc8ORQpQeQCf_.png",
"fullname": "Carlos Kuhn",
"isPro": false,
"type": "user",
"user": "CarlosKuhn"
}
}
] | 2025-02-05T06:43:40 | Large Language Model Guided Self-Debugging Code Generation | Automated code generation is gaining significant importance in intelligent
computer programming and system deployment. However, current approaches often
face challenges in computational efficiency and lack robust mechanisms for code
parsing and error correction. In this work, we propose a novel framework,
PyCapsule, with a simple yet effective two-agent pipeline and efficient
self-debugging modules for Python code generation. PyCapsule features
sophisticated prompt inference, iterative error handling, and case testing,
ensuring high generation stability, safety, and correctness. Empirically,
PyCapsule achieves up to a 5.7% improvement in success rate on HumanEval, 10.3%
on HumanEval-ET, and 24.4% on BigCodeBench compared to state-of-the-art
methods. We also observe a decrease in normalized success rate with more
self-debugging attempts, potentially due to the limited and noisy error
feedback retained across attempts. PyCapsule demonstrates broader impacts on advancing
lightweight and efficient code generation for artificial intelligence systems. | 12 | 67a4213d54bfb820ffb26f75 | null | null |
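The self-debugging loop (generate code, execute it, feed the error trace back for another attempt) is easy to sketch end to end; the generator below is a stub that "fixes" the bug once it sees an error, whereas in the framework it would be an LLM agent:

```python
import traceback

def generator(task, error=None):
    # Stub: the first attempt is buggy; the retry succeeds after seeing the error.
    return "result = 1/0" if error is None else "result = 42"

def run(code):
    env = {}
    try:
        exec(code, env)
        return env.get("result"), None
    except Exception:
        return None, traceback.format_exc()

def self_debug(task, max_attempts=3):
    error = None
    for attempt in range(max_attempts):
        result, error = run(generator(task, error))
        if error is None:
            return result, attempt + 1
    return None, max_attempts

print(self_debug("compute the answer"))  # (42, 2)
```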
|
2025-02-06T06:15:51.159000 | On Teacher Hacking in Language Model Distillation | 2 | {
"_id": "6262880c5eb4fa93219f0064",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6262880c5eb4fa93219f0064/6yyBvRK4Oh7OhjaaweaVN.jpeg",
"followerCount": 2,
"fullname": "Daniil Tiapkin",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "dtiapkin",
"type": "user"
} | true | null | 2502.02671 | [
{
"_id": "67a495ce0f2d0f0303a3af71",
"hidden": false,
"name": "Daniil Tiapkin",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-06T14:14:19.738Z",
"user": {
"_id": "6262880c5eb4fa93219f0064",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6262880c5eb4fa93219f0064/6yyBvRK4Oh7OhjaaweaVN.jpeg",
"fullname": "Daniil Tiapkin",
"isPro": false,
"type": "user",
"user": "dtiapkin"
}
},
{
"_id": "67a495ce0f2d0f0303a3af72",
"hidden": false,
"name": "Daniele Calandriello",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a495ce0f2d0f0303a3af73",
"hidden": false,
"name": "Johan Ferret",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-06T16:12:15.183Z",
"user": {
"_id": "65afb7dbdd6bdfd73cd8e609",
"avatarUrl": "/avatars/b21069bc2d7ee4cc1508008e3c8ade64.svg",
"fullname": "Johan Ferret",
"isPro": false,
"type": "user",
"user": "ferretj"
}
},
{
"_id": "67a495ce0f2d0f0303a3af74",
"hidden": false,
"name": "Sarah Perrin",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-06T16:12:39.678Z",
"user": {
"_id": "66328157b270ae503e91339b",
"avatarUrl": "/avatars/ea7a52060f5360f523ca28e137e85e33.svg",
"fullname": "Sarah Perrin",
"isPro": false,
"type": "user",
"user": "Sper42"
}
},
{
"_id": "67a495ce0f2d0f0303a3af75",
"hidden": false,
"name": "Nino Vieillard",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a495ce0f2d0f0303a3af76",
"hidden": false,
"name": "Alexandre Ramé",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-06T16:12:53.881Z",
"user": {
"_id": "63c94ede00104ea998de19a6",
"avatarUrl": "/avatars/273959d87f0c67747588cf0700d64039.svg",
"fullname": "Alexandre Rame",
"isPro": false,
"type": "user",
"user": "alexrame"
}
},
{
"_id": "67a495ce0f2d0f0303a3af77",
"hidden": false,
"name": "Mathieu Blondel",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-06T16:13:01.896Z",
"user": {
"_id": "66d093b681e0683bca48bed6",
"avatarUrl": "/avatars/cc1fbccb0b6aa93d648bcbdf9c3a35e1.svg",
"fullname": "Mathieu Blondel",
"isPro": false,
"type": "user",
"user": "mblondel"
}
}
] | 2025-02-04T19:26:28 | On Teacher Hacking in Language Model Distillation | Post-training of language models (LMs) increasingly relies on the following
two stages: (i) knowledge distillation, where the LM is trained to imitate a
larger teacher LM, and (ii) reinforcement learning from human feedback (RLHF),
where the LM is aligned by optimizing a reward model. In the second RLHF stage,
a well-known challenge is reward hacking, where the LM over-optimizes the
reward model. Such a phenomenon is in line with Goodhart's law and can lead to
degraded performance on the true objective. In this paper, we investigate
whether a similar phenomenon, that we call teacher hacking, can occur during
knowledge distillation. This could arise because the teacher LM is itself an
imperfect approximation of the true distribution. To study this, we propose a
controlled experimental setup involving: (i) an oracle LM representing the
ground-truth distribution, (ii) a teacher LM distilled from the oracle, and
(iii) a student LM distilled from the teacher. Our experiments reveal the
following insights. When using a fixed offline dataset for distillation,
teacher hacking occurs; moreover, we can detect it by observing when the
optimization process deviates from polynomial convergence laws. In contrast,
employing online data generation techniques effectively mitigates teacher
hacking. More precisely, we identify data diversity as the key factor in
preventing hacking. Overall, our findings provide a deeper understanding of the
benefits and limitations of distillation for building robust and efficient LMs. | 18 | 67a495d00f2d0f0303a3afde | null | null |
|
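The controlled setup above (oracle → teacher → student) suggests a simple diagnostic: track the student's distance to the teacher (the training proxy) and to the oracle (the ground truth) during distillation. A sketch assuming HF-style models that return `.logits` (an illustration, not the paper's code):

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def mean_kl(p_model, q_model, input_ids):
    """Mean token-level KL(p || q) on a validation batch of input ids."""
    log_p = F.log_softmax(p_model(input_ids).logits, dim=-1)
    log_q = F.log_softmax(q_model(input_ids).logits, dim=-1)
    return F.kl_div(log_q, log_p, log_target=True, reduction="batchmean").item()

def hacking_curves(student, teacher, oracle, val_batches):
    to_teacher = sum(mean_kl(teacher, student, b) for b in val_batches) / len(val_batches)
    to_oracle = sum(mean_kl(oracle, student, b) for b in val_batches) / len(val_batches)
    # teacher hacking: distance to the teacher keeps falling while distance
    # to the oracle (the true objective) starts rising
    return to_teacher, to_oracle
```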
2025-02-06T02:11:41.374000 | Jailbreaking with Universal Multi-Prompts | 2 | {
"_id": "608abf1272b50b02c4b02865",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1619708309549-608abf1272b50b02c4b02865.jpeg",
"followerCount": 2,
"fullname": "Hsuan Su",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "jacksukk",
"type": "user"
} | true | null | 2502.01154 | [
{
"_id": "67a4609af2e553c1d0da914d",
"hidden": false,
"name": "Yu-Ling Hsu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a4609af2e553c1d0da914e",
"hidden": false,
"name": "Hsuan Su",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-06T14:14:29.721Z",
"user": {
"_id": "608abf1272b50b02c4b02865",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1619708309549-608abf1272b50b02c4b02865.jpeg",
"fullname": "Hsuan Su",
"isPro": false,
"type": "user",
"user": "jacksukk"
}
},
{
"_id": "67a4609af2e553c1d0da914f",
"hidden": false,
"name": "Shang-Tse Chen",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-03T08:44:24 | Jailbreaking with Universal Multi-Prompts | Large language models (LLMs) have seen rapid development in recent years,
revolutionizing various applications and significantly enhancing convenience
and productivity. However, alongside their impressive capabilities, ethical
concerns and new types of attacks, such as jailbreaking, have emerged. Most
prompting techniques focus on optimizing adversarial inputs for individual
cases, resulting in higher computational costs when dealing with large
datasets; less research has addressed the more general setting of training a
universal attacker that can transfer to unseen tasks. In this paper, we
introduce JUMP, a prompt-based method designed to jailbreak LLMs using
universal multi-prompts. We also adapt our approach for defense, which we term
DUMP. Experimental results demonstrate that our method for optimizing universal
multi-prompts outperforms existing techniques. | 9 | 67a4609bf2e553c1d0da9181 | null | null |
|
2025-02-06T01:55:37.207000 | LayerTracer: Cognitive-Aligned Layered SVG Synthesis via Diffusion Transformer | 4 | {
"_id": "64311a95034ecbefddd141ef",
"avatarUrl": "/avatars/b6dc5ca373bedbaa368208517954c375.svg",
"followerCount": 4,
"fullname": "Yiren Song",
"isHf": false,
"isMod": false,
"isPro": true,
"name": "yiren98",
"type": "user"
} | false | null | 2502.01105 | [
{
"_id": "67a45c85e73ad243c0b9529e",
"hidden": false,
"name": "Yiren Song",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a45c85e73ad243c0b9529f",
"hidden": false,
"name": "Danze Chen",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-07T09:58:20.740Z",
"user": {
"_id": "6729d1fed3ec5370cb035901",
"avatarUrl": "/avatars/50f7ce9c635148df76d1c63ebf3efa38.svg",
"fullname": "1",
"isPro": false,
"type": "user",
"user": "DANNY621"
}
},
{
"_id": "67a45c85e73ad243c0b952a0",
"hidden": false,
"name": "Mike Zheng Shou",
"status": "extracted_pending",
"statusLastChangedAt": "2025-02-06T06:54:02.195Z",
"user": {
"_id": "63a55320ce5763e06f78519c",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1671779060549-noauth.jpeg",
"fullname": "Mike Shou",
"isPro": false,
"type": "user",
"user": "mikeshou"
}
}
] | 2025-02-03T06:49:58 | LayerTracer: Cognitive-Aligned Layered SVG Synthesis via Diffusion
Transformer | Generating cognitive-aligned layered SVGs remains challenging due to existing
methods' tendencies toward either oversimplified single-layer outputs or
optimization-induced shape redundancies. We propose LayerTracer, a diffusion
transformer based framework that bridges this gap by learning designers'
layered SVG creation processes from a novel dataset of sequential design
operations. Our approach operates in two phases: First, a text-conditioned DiT
generates multi-phase rasterized construction blueprints that simulate human
design workflows. Second, layer-wise vectorization with path deduplication
produces clean, editable SVGs. For image vectorization, we introduce a
conditional diffusion mechanism that encodes reference images into latent
tokens, guiding hierarchical reconstruction while preserving structural
integrity. Extensive experiments demonstrate LayerTracer's superior performance
against optimization-based and neural baselines in both generation quality and
editability, effectively aligning AI-generated vectors with professional design
cognition. | 20 | 67a45c8ae73ad243c0b953ea | null | null |
|
2025-02-06T00:29:44.686000 | Token Assorted: Mixing Latent and Text Tokens for Improved Language Model Reasoning | 2 | {
"_id": "60f1abe7544c2adfd699860c",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"followerCount": 6280,
"fullname": "AK",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "akhaliq",
"type": "user"
} | false | null | 2502.03275 | [
{
"_id": "67a448b69ca42c642a723a7d",
"hidden": false,
"name": "DiJia Su",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a448b69ca42c642a723a7e",
"hidden": false,
"name": "Hanlin Zhu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-06T16:10:24.407Z",
"user": {
"_id": "6467bc59b990713c50339d2d",
"avatarUrl": "/avatars/064aba45e37040f7b1de8f76169f5174.svg",
"fullname": "Hanlin Zhu",
"isPro": false,
"type": "user",
"user": "hanlinzhu"
}
},
{
"_id": "67a448b69ca42c642a723a7f",
"hidden": false,
"name": "Yingchen Xu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-06T16:09:40.111Z",
"user": {
"_id": "6481333f8c6a3b8f11fc4114",
"avatarUrl": "/avatars/15c194296506b32e3f218530382c9f78.svg",
"fullname": "Yingchen Xu",
"isPro": false,
"type": "user",
"user": "xuyingchen"
}
},
{
"_id": "67a448b69ca42c642a723a80",
"hidden": false,
"name": "Jiantao Jiao",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-06T16:09:29.786Z",
"user": {
"_id": "653b306986b88947d5cacfa4",
"avatarUrl": "/avatars/21ebd4daf35ec67c7d5f9b0a53628b00.svg",
"fullname": "Jiantao Jiao",
"isPro": false,
"type": "user",
"user": "nexus-jt-llm"
}
},
{
"_id": "67a448b69ca42c642a723a81",
"hidden": false,
"name": "Yuandong Tian",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-06T16:09:19.670Z",
"user": {
"_id": "6344cf73ee1504dbcd5bdfe7",
"avatarUrl": "/avatars/6dd2bf1f9c5679e5c8c85d62c9836aac.svg",
"fullname": "Yuandong Tian",
"isPro": false,
"type": "user",
"user": "tydsh"
}
},
{
"_id": "67a448b69ca42c642a723a82",
"hidden": false,
"name": "Qinqing Zheng",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-06T16:09:11.277Z",
"user": {
"_id": "64d27579dafee18faf9308ac",
"avatarUrl": "/avatars/8914a47244017c3541d3d5ac5b2d0372.svg",
"fullname": "Qinqing Zheng",
"isPro": false,
"type": "user",
"user": "goodsleep"
}
}
] | 2025-02-05T15:33:00 | Token Assorted: Mixing Latent and Text Tokens for Improved Language
Model Reasoning | Large Language Models (LLMs) excel at reasoning and planning when trained on
chain-of-thought (CoT) data, where the step-by-step thought process is
explicitly outlined by text tokens. However, this results in lengthy inputs
where many words support textual coherence rather than core reasoning
information, and processing these inputs consumes substantial computational
resources. In this work, we propose a hybrid representation of the reasoning
process, where we partially abstract away the initial reasoning steps using
latent discrete tokens generated by VQ-VAE, significantly reducing the length
of reasoning traces. We explore the use of latent trace abstractions in two
scenarios: 1) training the model from scratch for the Keys-Finding Maze
problem, 2) fine-tuning LLMs on this hybrid data with an extended vocabulary
including unseen latent tokens, for both logical and mathematical reasoning
problems. To facilitate effective learning, we introduce a simple training
procedure that randomly mixes latent and text tokens, which enables fast
adaptation to new latent tokens. Our approach consistently outperforms the
baseline methods across various benchmarks. | 15 | 67a448b89ca42c642a723ac6 | null | null |
|
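The hybrid representation described above abstracts the initial reasoning steps into VQ-VAE codes and randomly mixes latent and text tokens during training. A minimal sketch of that per-example mixing (the token layout and `encode_latent` are assumptions for illustration):

```python
import random

def mix_latent_and_text(question_ids, reasoning_steps, answer_ids, encode_latent):
    """reasoning_steps: list of token-id lists, one per CoT step."""
    r = random.randint(0, len(reasoning_steps))  # abstraction depth varies per sample
    latent_ids = [tok for step in reasoning_steps[:r]
                  for tok in encode_latent(step)]  # short VQ-VAE code per step
    text_ids = [tok for step in reasoning_steps[r:] for tok in step]
    return question_ids + latent_ids + text_ids + answer_ids
```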
2025-02-06T00:26:02.483000 | LIMO: Less is More for Reasoning | 4 | {
"_id": "60f1abe7544c2adfd699860c",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"followerCount": 6280,
"fullname": "AK",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "akhaliq",
"type": "user"
} | true | null | 2502.03387 | [
{
"_id": "67a445ccbdd74b63b4e52a7d",
"hidden": false,
"name": "Yixin Ye",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a445ccbdd74b63b4e52a7e",
"hidden": false,
"name": "Zhen Huang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-06T14:37:03.643Z",
"user": {
"_id": "643581a4f3b08e267d990499",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/643581a4f3b08e267d990499/KRhB-48W4IPuB0bX16Ahj.png",
"fullname": "Zhen Huang",
"isPro": false,
"type": "user",
"user": "ZhenHuang"
}
},
{
"_id": "67a445ccbdd74b63b4e52a7f",
"hidden": false,
"name": "Yang Xiao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a445ccbdd74b63b4e52a80",
"hidden": false,
"name": "Ethan Chern",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-06T14:36:45.045Z",
"user": {
"_id": "64bb5f9d8e051085bace4d1e",
"avatarUrl": "/avatars/15ccbb78c6131dfe46b7a9d8e7d1a31f.svg",
"fullname": "Ethan Chern",
"isPro": true,
"type": "user",
"user": "ethanchern"
}
},
{
"_id": "67a445ccbdd74b63b4e52a81",
"hidden": false,
"name": "Shijie Xia",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-06T14:14:59.334Z",
"user": {
"_id": "65900d4ff5a209eeac08b463",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/65900d4ff5a209eeac08b463/PJNNBRJIk1qR24oaRLTex.jpeg",
"fullname": "shijie xia",
"isPro": false,
"type": "user",
"user": "seven-cat"
}
},
{
"_id": "67a445ccbdd74b63b4e52a82",
"hidden": false,
"name": "Pengfei Liu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-06T14:36:38.049Z",
"user": {
"_id": "6144a0c4ff1146bbd84d9865",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1661715958139-6144a0c4ff1146bbd84d9865.png",
"fullname": "Pengfei Liu",
"isPro": true,
"type": "user",
"user": "Pengfei"
}
}
] | 2025-02-05T17:23:45 | LIMO: Less is More for Reasoning | We present a fundamental discovery that challenges our understanding of how
complex reasoning emerges in large language models. While conventional wisdom
suggests that sophisticated reasoning tasks demand extensive training data
(>100,000 examples), we demonstrate that complex mathematical reasoning
abilities can be effectively elicited with surprisingly few examples. Through
comprehensive experiments, our proposed model LIMO demonstrates unprecedented
performance in mathematical reasoning. With merely 817 curated training
samples, LIMO achieves 57.1% accuracy on AIME and 94.8% on MATH, improving from
previous SFT-based models' 6.5% and 59.2% respectively, while only using 1% of
the training data required by previous approaches. LIMO demonstrates
exceptional out-of-distribution generalization, achieving 40.5% absolute
improvement across 10 diverse benchmarks, outperforming models trained on 100x
more data, challenging the notion that SFT leads to memorization rather than
generalization. Based on these results, we propose the Less-Is-More Reasoning
Hypothesis (LIMO Hypothesis): In foundation models where domain knowledge has
been comprehensively encoded during pre-training, sophisticated reasoning
capabilities can emerge through minimal but precisely orchestrated
demonstrations of cognitive processes. This hypothesis posits that the
elicitation threshold for complex reasoning is determined by two key factors:
(1) the completeness of the model's encoded knowledge foundation during
pre-training, and (2) the effectiveness of post-training examples as "cognitive
templates" that show the model how to utilize its knowledge base to solve
complex reasoning tasks. To facilitate reproducibility and future research in
data-efficient reasoning, we release LIMO as a comprehensive open-source suite
at https://github.com/GAIR-NLP/LIMO. | 57 | 67a445cdbdd74b63b4e52af7 | null | null |
|
2025-02-06T00:20:51.704000 | SmolLM2: When Smol Goes Big -- Data-Centric Training of a Small Language Model | 5 | {
"_id": "60f1abe7544c2adfd699860c",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"followerCount": 6280,
"fullname": "AK",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "akhaliq",
"type": "user"
} | false | null | 2502.02737 | [
{
"_id": "67a446a9430e358f5d5ac4c3",
"hidden": false,
"name": "Loubna Ben Allal",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-06T14:14:56.506Z",
"user": {
"_id": "61c141342aac764ce1654e43",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/61c141342aac764ce1654e43/81AwoT5IQ_Xdw0OVw7TKu.jpeg",
"fullname": "Loubna Ben Allal",
"isPro": false,
"type": "user",
"user": "loubnabnl"
}
},
{
"_id": "67a446a9430e358f5d5ac4c4",
"hidden": false,
"name": "Anton Lozhkov",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-06T14:14:39.237Z",
"user": {
"_id": "602e6dee60e3dd96631c906e",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1613655355830-noauth.png",
"fullname": "Anton Lozhkov",
"isPro": false,
"type": "user",
"user": "anton-l"
}
},
{
"_id": "67a446a9430e358f5d5ac4c5",
"hidden": false,
"name": "Elie Bakouch",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-06T14:14:45.734Z",
"user": {
"_id": "651e96991b97c9f33d26bde6",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/651e96991b97c9f33d26bde6/-Bqs6qrmz0yCfwtB2e-6q.jpeg",
"fullname": "Elie Bakouch",
"isPro": false,
"type": "user",
"user": "eliebak"
}
},
{
"_id": "67a446a9430e358f5d5ac4c6",
"hidden": false,
"name": "Gabriel Martín Blázquez",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-06T14:14:43.746Z",
"user": {
"_id": "60f2fc91b92afccb7c34b8ed",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/60f2fc91b92afccb7c34b8ed/W2-Nay12Ef4Ltyaf8EKE9.jpeg",
"fullname": "Gabriel Martín Blázquez",
"isPro": false,
"type": "user",
"user": "gabrielmbmb"
}
},
{
"_id": "67a446a9430e358f5d5ac4c7",
"hidden": false,
"name": "Guilherme Penedo",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-06T14:14:50.085Z",
"user": {
"_id": "62596f9e1c0a084224b93e00",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/62596f9e1c0a084224b93e00/X2aLkJ0ofhkXwAg7lXvxD.jpeg",
"fullname": "Guilherme Penedo",
"isPro": false,
"type": "user",
"user": "guipenedo"
}
},
{
"_id": "67a446a9430e358f5d5ac4c8",
"hidden": false,
"name": "Lewis Tunstall",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-06T14:16:30.456Z",
"user": {
"_id": "5f0c746619cb630495b814fd",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1594651707950-noauth.jpeg",
"fullname": "Lewis Tunstall",
"isPro": true,
"type": "user",
"user": "lewtun"
}
},
{
"_id": "67a446a9430e358f5d5ac4c9",
"hidden": false,
"name": "Andrés Marafioti",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-06T14:16:37.742Z",
"user": {
"_id": "65d66b494bbd0d92b641cdbb",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/65d66b494bbd0d92b641cdbb/6-7dm7B-JxcoS1QlCPdMN.jpeg",
"fullname": "Andres Marafioti",
"isPro": false,
"type": "user",
"user": "andito"
}
},
{
"_id": "67a446a9430e358f5d5ac4ca",
"hidden": false,
"name": "Hynek Kydlíček",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-06T14:16:43.590Z",
"user": {
"_id": "626ede24d2fa9e7d598c8709",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/626ede24d2fa9e7d598c8709/JKS8-Y2Jw87EgNQZBRswq.jpeg",
"fullname": "Hynek Kydlicek",
"isPro": true,
"type": "user",
"user": "hynky"
}
},
{
"_id": "67a446a9430e358f5d5ac4cb",
"hidden": false,
"name": "Agustín Piqueres Lajarín",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-06T14:16:49.324Z",
"user": {
"_id": "6435d564a4bd75c62cc03701",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6435d564a4bd75c62cc03701/7P2G_wVNB6MISp2Phh427.jpeg",
"fullname": "Agustín Piqueres Lajarín",
"isPro": false,
"type": "user",
"user": "plaguss"
}
},
{
"_id": "67a446a9430e358f5d5ac4cc",
"hidden": false,
"name": "Vaibhav Srivastav",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-06T14:14:52.239Z",
"user": {
"_id": "61b85ce86eb1f2c5e6233736",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1655385361868-61b85ce86eb1f2c5e6233736.jpeg",
"fullname": "Vaibhav Srivastav",
"isPro": true,
"type": "user",
"user": "reach-vb"
}
},
{
"_id": "67a446a9430e358f5d5ac4cd",
"hidden": false,
"name": "Joshua Lochner",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-06T14:14:36.878Z",
"user": {
"_id": "61b253b7ac5ecaae3d1efe0c",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/61b253b7ac5ecaae3d1efe0c/hwiQ0uvz3t-L5a-NtBIO6.png",
"fullname": "Joshua",
"isPro": false,
"type": "user",
"user": "Xenova"
}
},
{
"_id": "67a446a9430e358f5d5ac4ce",
"hidden": false,
"name": "Caleb Fahlgren",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-06T14:16:56.849Z",
"user": {
"_id": "648a374f00f7a3374ee64b99",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/648a374f00f7a3374ee64b99/YPwSOrronoozwHbJchPn3.jpeg",
"fullname": "Caleb Fahlgren",
"isPro": true,
"type": "user",
"user": "cfahlgren1"
}
},
{
"_id": "67a446a9430e358f5d5ac4cf",
"hidden": false,
"name": "Xuan-Son Nguyen",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-06T14:17:02.477Z",
"user": {
"_id": "63ca214abedad7e2bf1d1517",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1674191139776-noauth.png",
"fullname": "Xuan-Son Nguyen",
"isPro": false,
"type": "user",
"user": "ngxson"
}
},
{
"_id": "67a446a9430e358f5d5ac4d0",
"hidden": false,
"name": "Clémentine Fourrier",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-06T14:14:54.591Z",
"user": {
"_id": "6202a599216215a22221dea9",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1644340617257-noauth.png",
"fullname": "Clémentine Fourrier",
"isPro": false,
"type": "user",
"user": "clefourrier"
}
},
{
"_id": "67a446a9430e358f5d5ac4d1",
"hidden": false,
"name": "Ben Burtenshaw",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-06T14:14:41.918Z",
"user": {
"_id": "62d648291fa3e4e7ae3fa6e8",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/62d648291fa3e4e7ae3fa6e8/oatOwf8Xqe5eDbCSuYqCd.png",
"fullname": "ben burtenshaw",
"isPro": false,
"type": "user",
"user": "burtenshaw"
}
},
{
"_id": "67a446a9430e358f5d5ac4d2",
"hidden": false,
"name": "Hugo Larcher",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-06T14:17:14.767Z",
"user": {
"_id": "641cc77c92cd25302998b740",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/641cc77c92cd25302998b740/5A81W5s3ecLaLXFir52Rw.jpeg",
"fullname": "Hugo Larcher",
"isPro": false,
"type": "user",
"user": "hlarcher"
}
},
{
"_id": "67a446a9430e358f5d5ac4d3",
"hidden": false,
"name": "Haojun Zhao",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-06T14:17:33.798Z",
"user": {
"_id": "660ed80b1889bf2cd53cab7f",
"avatarUrl": "/avatars/93ee6ff00668c2698ad8b6fa6f072b92.svg",
"fullname": "Haojun Zhao",
"isPro": false,
"type": "user",
"user": "zzhhjjj"
}
},
{
"_id": "67a446a9430e358f5d5ac4d4",
"hidden": false,
"name": "Cyril Zakka",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-06T14:17:43.679Z",
"user": {
"_id": "66ba71a4447411b9c0e19d71",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/4f93ZrYdaKfK3F53IB51x.jpeg",
"fullname": "Cyril",
"isPro": false,
"type": "user",
"user": "cyrilzakka"
}
},
{
"_id": "67a446a9430e358f5d5ac4d5",
"hidden": false,
"name": "Mathieu Morlon",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-06T14:17:50.199Z",
"user": {
"_id": "664d7d1e4f54c9372970e121",
"avatarUrl": "/avatars/695a209d6951a4623eceedcd2eed3a68.svg",
"fullname": "Mathieu Morlon",
"isPro": false,
"type": "user",
"user": "glutamatt"
}
},
{
"_id": "67a446a9430e358f5d5ac4d6",
"hidden": false,
"name": "Colin Raffel",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-06T14:17:57.936Z",
"user": {
"_id": "6079c29765b9d0165cb18392",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1618592397610-noauth.jpeg",
"fullname": "Colin Raffel",
"isPro": false,
"type": "user",
"user": "craffel"
}
},
{
"_id": "67a446a9430e358f5d5ac4d7",
"hidden": false,
"name": "Leandro von Werra",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-06T11:03:02.572Z",
"user": {
"_id": "5e48005437cb5b49818287a5",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/5e48005437cb5b49818287a5/4uCXGGui-9QifAT4qelxU.png",
"fullname": "Leandro von Werra",
"isPro": false,
"type": "user",
"user": "lvwerra"
}
},
{
"_id": "67a446a9430e358f5d5ac4d8",
"hidden": false,
"name": "Thomas Wolf",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-06T14:14:14.159Z",
"user": {
"_id": "5df7e9e5da6d0311fd3d53f9",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1583857746553-5df7e9e5da6d0311fd3d53f9.jpeg",
"fullname": "Thomas Wolf",
"isPro": true,
"type": "user",
"user": "thomwolf"
}
}
] | 2025-02-04T21:43:16 | SmolLM2: When Smol Goes Big -- Data-Centric Training of a Small Language
Model | While large language models have facilitated breakthroughs in many
applications of artificial intelligence, their inherent largeness makes them
computationally expensive and challenging to deploy in resource-constrained
settings. In this paper, we document the development of SmolLM2, a
state-of-the-art "small" (1.7 billion parameter) language model (LM). To attain
strong performance, we overtrain SmolLM2 on ~11 trillion tokens of data using a
multi-stage training process that mixes web text with specialized math, code,
and instruction-following data. We additionally introduce new specialized
datasets (FineMath, Stack-Edu, and SmolTalk) at stages where we found existing
datasets to be problematically small or low-quality. To inform our design
decisions, we perform both small-scale ablations as well as a manual refinement
process that updates the dataset mixing rates at each stage based on the
performance at the previous stage. Ultimately, we demonstrate that SmolLM2
outperforms other recent small LMs including Qwen2.5-1.5B and Llama3.2-1B. To
facilitate future research on LM development as well as applications of small
LMs, we release both SmolLM2 as well as all of the datasets we prepared in the
course of this project. | 198 | 67a446a9430e358f5d5ac4f8 | null | null |
|
2025-02-05T23:23:08.428000 | A Probabilistic Inference Approach to Inference-Time Scaling of LLMs using Particle-Based Monte Carlo Methods | 3 | {
"_id": "648b3f3208c4a9d807a90a99",
"avatarUrl": "/avatars/03634b4e7f8afe9b589a2d7370e29960.svg",
"followerCount": 9,
"fullname": "Akash Srivastava",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "akashsri",
"type": "user"
} | false | [
"https://cdn-uploads.huggingface.co/production/uploads/648b3f3208c4a9d807a90a99/gwgJD14Bd0fdz7xpcHdHe.mp4",
"https://cdn-uploads.huggingface.co/production/uploads/648b3f3208c4a9d807a90a99/KHcaqxZL3wiloAm7x-7nA.mp4"
] | 2502.01618 | [
{
"_id": "67a438d26bb8caaab06f5a5e",
"hidden": false,
"name": "Isha Puri",
"status": "extracted_pending",
"statusLastChangedAt": "2025-02-06T04:21:39.202Z",
"user": {
"_id": "64c2abe8c43875b438efef25",
"avatarUrl": "/avatars/6efda081f52cf56db2d29a5ec05cb557.svg",
"fullname": "isha",
"isPro": false,
"type": "user",
"user": "ishapuri-mit"
}
},
{
"_id": "67a438d26bb8caaab06f5a5f",
"hidden": false,
"name": "Shivchander Sudalairaj",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a438d26bb8caaab06f5a60",
"hidden": false,
"name": "Guangxuan Xu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-06T16:07:05.566Z",
"user": {
"_id": "66104696134c832243bde60d",
"avatarUrl": "/avatars/b5a0d194d0e12c60fc5599f81f75c205.svg",
"fullname": "Guangxuan Xu",
"isPro": false,
"type": "user",
"user": "gx-ai-architect"
}
},
{
"_id": "67a438d26bb8caaab06f5a61",
"hidden": false,
"name": "Kai Xu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a438d26bb8caaab06f5a62",
"hidden": false,
"name": "Akash Srivastava",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-03T18:50:50 | A Probabilistic Inference Approach to Inference-Time Scaling of LLMs
using Particle-Based Monte Carlo Methods | Large language models (LLMs) have achieved significant performance gains via
scaling up model sizes and/or data. However, recent evidence suggests
diminishing returns from such approaches, motivating scaling the computation
spent at inference time. Existing inference-time scaling methods, usually with
reward models, cast the task as a search problem, which tends to be vulnerable
to reward hacking as a consequence of approximation errors in reward models. In
this paper, we instead cast inference-time scaling as a probabilistic inference
task and leverage sampling-based techniques to explore the typical set of the
state distribution of a state-space model with an approximate likelihood,
rather than optimize for its mode directly. We propose a novel inference-time
scaling approach by adapting particle-based Monte Carlo methods to this task.
Our empirical evaluation demonstrates that our methods have a 4-16x better
scaling rate over our deterministic search counterparts on various challenging
mathematical reasoning tasks. Using our approach, we show that
Qwen2.5-Math-1.5B-Instruct can surpass GPT-4o accuracy in only 4 rollouts,
while Qwen2.5-Math-7B-Instruct scales to o1 level accuracy in only 32 rollouts.
Our work not only presents an effective method for inference-time scaling, but
also connects the rich literature in probabilistic inference with
inference-time scaling of LLMs to develop more robust algorithms in future
work. Code and further information is available at
https://probabilistic-inference-scaling.github.io. | 10 | 67a438d36bb8caaab06f5a87 | null | null |
|
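The abstract above casts inference-time scaling as probabilistic inference over partial solutions rather than mode-seeking search. A minimal particle-filtering sketch of that idea (`extend`, `reward`, and `is_done` are hypothetical stand-ins for step generation, a process reward model, and a termination check):

```python
import math, random

def particle_search(prompt, extend, reward, is_done, n_particles=16, max_steps=40):
    particles = [prompt] * n_particles
    for _ in range(max_steps):
        particles = [p if is_done(p) else extend(p) for p in particles]  # one step each
        if all(is_done(p) for p in particles):
            break
        weights = [math.exp(reward(p)) for p in particles]  # approximate likelihood
        # multinomial resampling keeps the population in the typical set
        particles = random.choices(particles, weights=weights, k=n_particles)
    return max(particles, key=reward)
```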
2025-02-05T22:27:48.348000 | Demystifying Long Chain-of-Thought Reasoning in LLMs | 3 | {
"_id": "6230d750d93e84e233882dbc",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6230d750d93e84e233882dbc/4MGEekLW3oWzqeFWDWvIK.jpeg",
"followerCount": 29,
"fullname": "Xiang Yue",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "yuexiang96",
"type": "user"
} | true | null | 2502.03373 | [
{
"_id": "67a42c079a4fb11b11cc4f6f",
"hidden": false,
"name": "Edward Yeo",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a42c079a4fb11b11cc4f70",
"hidden": false,
"name": "Yuxuan Tong",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-06T14:34:50.960Z",
"user": {
"_id": "6448e1fbe988635a3d6aa97d",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/eG4R9-3hgrimttP7ep3dN.jpeg",
"fullname": "Shawn/Yuxuan Tong",
"isPro": false,
"type": "user",
"user": "tongyx361"
}
},
{
"_id": "67a42c079a4fb11b11cc4f71",
"hidden": false,
"name": "Morry Niu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-06T14:34:57.424Z",
"user": {
"_id": "65bb14f139c4e7087640a91c",
"avatarUrl": "/avatars/dbf75dd161d22b4511e9fccff6afc515.svg",
"fullname": "Morry Niu",
"isPro": false,
"type": "user",
"user": "bl1ndbot"
}
},
{
"_id": "67a42c079a4fb11b11cc4f72",
"hidden": false,
"name": "Graham Neubig",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-06T14:35:04.994Z",
"user": {
"_id": "60de14638bedd2315529d43f",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1625166923504-noauth.png",
"fullname": "Graham Neubig",
"isPro": false,
"type": "user",
"user": "gneubig"
}
},
{
"_id": "67a42c079a4fb11b11cc4f73",
"hidden": false,
"name": "Xiang Yue",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-06T14:35:19.222Z",
"user": {
"_id": "6230d750d93e84e233882dbc",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6230d750d93e84e233882dbc/4MGEekLW3oWzqeFWDWvIK.jpeg",
"fullname": "Xiang Yue",
"isPro": false,
"type": "user",
"user": "yuexiang96"
}
}
] | 2025-02-05T17:13:32 | Demystifying Long Chain-of-Thought Reasoning in LLMs | Scaling inference compute enhances reasoning in large language models (LLMs),
with long chains-of-thought (CoTs) enabling strategies like backtracking and
error correction. Reinforcement learning (RL) has emerged as a crucial method
for developing these capabilities, yet the conditions under which long CoTs
emerge remain unclear, and RL training requires careful design choices. In this
study, we systematically investigate the mechanics of long CoT reasoning,
identifying the key factors that enable models to generate long CoT
trajectories. Through extensive supervised fine-tuning (SFT) and RL
experiments, we present four main findings: (1) While SFT is not strictly
necessary, it simplifies training and improves efficiency; (2) Reasoning
capabilities tend to emerge with increased training compute, but their
development is not guaranteed, making reward shaping crucial for stabilizing
CoT length growth; (3) Scaling verifiable reward signals is critical for RL. We
find that leveraging noisy, web-extracted solutions with filtering mechanisms
shows strong potential, particularly for out-of-distribution (OOD) tasks such
as STEM reasoning; and (4) Core abilities like error correction are inherently
present in base models, but incentivizing these skills effectively for complex
tasks via RL demands significant compute, and measuring their emergence
requires a nuanced approach. These insights provide practical guidance for
optimizing training strategies to enhance long CoT reasoning in LLMs. Our code
is available at: https://github.com/eddycmu/demystify-long-cot. | 55 | 67a42c089a4fb11b11cc4fae | null | null |
|
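Finding (2) above points to reward shaping as the lever that stabilizes CoT length growth under RL. One plausible shaping, shown only as an illustration (not necessarily the paper's exact scheme), scales the correctness reward by how far the trace overshoots a length budget:

```python
def shaped_reward(correct: bool, length: int, budget: int = 4096) -> float:
    base = 1.0 if correct else -0.5
    overflow = max(0, length - budget) / budget  # fraction beyond the budget
    return base - 0.5 * overflow  # discourage unbounded CoT growth
```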
2025-02-05T21:45:32.304000 | Boosting Multimodal Reasoning with MCTS-Automated Structured Thinking | 4 | {
"_id": "6747de57f8cab58c22ec94a2",
"avatarUrl": "/avatars/5bae0341862fac24564781c0fa32aac5.svg",
"followerCount": 5,
"fullname": "Jinyang Wu",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "Jinyang23",
"type": "user"
} | true | null | 2502.02339 | [
{
"_id": "67a3262873bdaf626f1e9eab",
"hidden": false,
"name": "Jinyang Wu",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-06T14:15:16.426Z",
"user": {
"_id": "6747de57f8cab58c22ec94a2",
"avatarUrl": "/avatars/5bae0341862fac24564781c0fa32aac5.svg",
"fullname": "Jinyang Wu",
"isPro": false,
"type": "user",
"user": "Jinyang23"
}
},
{
"_id": "67a3262873bdaf626f1e9eac",
"hidden": false,
"name": "Mingkuan Feng",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-06T14:37:21.287Z",
"user": {
"_id": "660d13b85e00095e45ee28e0",
"avatarUrl": "/avatars/8f06c01edc2a791266feadc775acb901.svg",
"fullname": "FengMingkuan",
"isPro": false,
"type": "user",
"user": "fmk345"
}
},
{
"_id": "67a3262873bdaf626f1e9ead",
"hidden": false,
"name": "Shuai Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a3262873bdaf626f1e9eae",
"hidden": false,
"name": "Ruihan Jin",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-06T14:37:48.323Z",
"user": {
"_id": "64be16d8ef8c0e42bf3d27f6",
"avatarUrl": "/avatars/6ae308088f6f196d9f470655dae0c14d.svg",
"fullname": "Ruihan Jin",
"isPro": false,
"type": "user",
"user": "RuihanJin"
}
},
{
"_id": "67a3262873bdaf626f1e9eaf",
"hidden": false,
"name": "Feihu Che",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-06T14:37:55.510Z",
"user": {
"_id": "63ef2de81e695b35aa4813a2",
"avatarUrl": "/avatars/6abd1918c1b94d927c7c976054e16322.svg",
"fullname": "feihu",
"isPro": false,
"type": "user",
"user": "feihuchen"
}
},
{
"_id": "67a3262873bdaf626f1e9eb0",
"hidden": false,
"name": "Zengqi Wen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a3262873bdaf626f1e9eb1",
"hidden": false,
"name": "Jianhua Tao",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-04T14:18:29 | Boosting Multimodal Reasoning with MCTS-Automated Structured Thinking | Multimodal large language models (MLLMs) exhibit impressive capabilities but
still face challenges in complex visual reasoning. While recent efforts attempt
to enhance MLLMs' reasoning by incorporating OpenAI o1-like structured thinking
through explicit search structures or teacher-guided distillation, they often
struggle to balance performance and efficiency. A critical limitation is their
heavy reliance on extensive data and search spaces, resulting in low-efficiency
implicit insight extraction and data utilization. To address this, we propose
AStar, an Automated Structured thinking paradigm for multimodal reasoning via
Monte Carlo Tree Search (MCTS). AStar automatically derives high-level
cognitive reasoning patterns from limited data using MCTS-powered hierarchical
structures. Building on these explicit patterns, we design a unified reasoning
framework that seamlessly integrates models' internal reasoning capabilities
and external reasoning guidelines, enabling efficient inference with minimal
tree iterations. This novel paradigm strikes a compelling balance between
performance and efficiency. Extensive experiments demonstrate AStar's
effectiveness, achieving superior accuracy (54.0%) on the MathVerse
benchmark with a 7B backbone, surpassing GPT-4o (50.2%) while maintaining
substantial data and computational efficiency. | 22 | 67a3262973bdaf626f1e9edb | null | null |
|
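AStar, as summarized above, derives reasoning patterns via MCTS-powered hierarchical structures. A compact UCT selection/expansion skeleton of the kind such a search relies on (`expand` and `evaluate` are hypothetical placeholders):

```python
import math

class Node:
    def __init__(self, state, parent=None):
        self.state, self.parent = state, parent
        self.children, self.visits, self.value = [], 0, 0.0

def uct_select(node, c=1.4):
    # pick the child balancing exploitation (mean value) and exploration
    return max(node.children,
               key=lambda ch: ch.value / (ch.visits + 1e-9)
               + c * math.sqrt(math.log(node.visits + 1) / (ch.visits + 1e-9)))

def mcts_iteration(root, expand, evaluate):
    node = root
    while node.children:                       # selection
        node = uct_select(node)
    node.children = [Node(s, node) for s in expand(node.state)]  # expansion
    leaf = node.children[0] if node.children else node
    reward = evaluate(leaf.state)              # simulation / value estimate
    while leaf:                                # backpropagation
        leaf.visits += 1
        leaf.value += reward
        leaf = leaf.parent
```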
2025-02-05T21:44:36.248000 | TwinMarket: A Scalable Behavioral and Social Simulation for Financial Markets | 3 | {
"_id": "643c047326f177a3e41627b6",
"avatarUrl": "/avatars/ade75cebd049daf080ba80a80d516240.svg",
"followerCount": 2,
"fullname": "Yifei Zhang",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "amstrongzyf",
"type": "user"
} | true | null | 2502.01506 | [
{
"_id": "67a4214f12b90b15dc5a648e",
"hidden": false,
"name": "Yuzhe Yang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-06T14:15:03.582Z",
"user": {
"_id": "63f622c69cbd6730302783eb",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63f622c69cbd6730302783eb/9cb96JVKiOm_JhF-shbFw.jpeg",
"fullname": "Yuzhe Yang",
"isPro": false,
"type": "user",
"user": "TobyYang7"
}
},
{
"_id": "67a4214f12b90b15dc5a648f",
"hidden": false,
"name": "Yifei Zhang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-06T14:15:05.578Z",
"user": {
"_id": "643c047326f177a3e41627b6",
"avatarUrl": "/avatars/ade75cebd049daf080ba80a80d516240.svg",
"fullname": "Yifei Zhang",
"isPro": false,
"type": "user",
"user": "amstrongzyf"
}
},
{
"_id": "67a4214f12b90b15dc5a6490",
"hidden": false,
"name": "Minghao Wu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-06T14:18:20.423Z",
"user": {
"_id": "62d4bf8c97ab9eb08762a975",
"avatarUrl": "/avatars/73c6228e317cf37b4e3c3e7a4b3d8ae8.svg",
"fullname": "Minghao Wu",
"isPro": false,
"type": "user",
"user": "minghaowu"
}
},
{
"_id": "67a4214f12b90b15dc5a6491",
"hidden": false,
"name": "Kaidi Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a4214f12b90b15dc5a6492",
"hidden": false,
"name": "Yunmiao Zhang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-11T10:03:21.159Z",
"user": {
"_id": "67aafd473b9cb77cc2223819",
"avatarUrl": "/avatars/e285666fdc918564071be26136fe3312.svg",
"fullname": "Yunmiao Zhang",
"isPro": false,
"type": "user",
"user": "Yunwater"
}
},
{
"_id": "67a4214f12b90b15dc5a6493",
"hidden": false,
"name": "Honghai Yu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a4214f12b90b15dc5a6494",
"hidden": false,
"name": "Yan Hu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a4214f12b90b15dc5a6495",
"hidden": false,
"name": "Benyou Wang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-06T14:34:30.901Z",
"user": {
"_id": "637c6703ca8542a0ba900ccb",
"avatarUrl": "/avatars/288ed63a1efa566c3f01e850c6ba5dd5.svg",
"fullname": "Wang",
"isPro": false,
"type": "user",
"user": "Benyou"
}
}
] | 2025-02-03T16:39:48 | TwinMarket: A Scalable Behavioral and Social Simulation for Financial
Markets | The study of social emergence has long been a central focus in social
science. Traditional modeling approaches, such as rule-based Agent-Based Models
(ABMs), struggle to capture the diversity and complexity of human behavior,
particularly the irrational factors emphasized in behavioral economics.
Recently, large language model (LLM) agents have gained traction as simulation
tools for modeling human behavior in social science and role-playing
applications. Studies suggest that LLMs can account for cognitive biases,
emotional fluctuations, and other non-rational influences, enabling more
realistic simulations of socio-economic dynamics. In this work, we introduce
TwinMarket, a novel multi-agent framework that leverages LLMs to simulate
socio-economic systems. Specifically, we examine how individual behaviors,
through interactions and feedback mechanisms, give rise to collective dynamics
and emergent phenomena. Through experiments in a simulated stock market
environment, we demonstrate how individual actions can trigger group behaviors,
leading to emergent outcomes such as financial bubbles and recessions. Our
approach provides valuable insights into the complex interplay between
individual decision-making and collective socio-economic patterns. | 33 | 67a4215212b90b15dc5a650a | null | null |
|
2025-02-05T21:08:28.323000 | Text-to-CAD Generation Through Infusing Visual Feedback in Large Language Models | 2 | {
"_id": "63eb00a191a1b8ec4fbba2a9",
"avatarUrl": "/avatars/0cc7cf9b6d05337603f700e0d592edf5.svg",
"followerCount": 3,
"fullname": "ShizhaoSun",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "ShizhaoSun",
"type": "user"
} | true | null | 2501.19054 | [
{
"_id": "67a33e60b793ca5296f2a6d1",
"hidden": false,
"name": "Ruiyu Wang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-05T13:37:40.966Z",
"user": {
"_id": "67a30a243d5a32b36c7d7d0b",
"avatarUrl": "/avatars/4aa43f0d79797be1063f246d63638a85.svg",
"fullname": "Ruiyu Wang",
"isPro": false,
"type": "user",
"user": "rywang37"
}
},
{
"_id": "67a33e60b793ca5296f2a6d2",
"hidden": false,
"name": "Yu Yuan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a33e60b793ca5296f2a6d3",
"hidden": false,
"name": "Shizhao Sun",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-06T14:15:14.093Z",
"user": {
"_id": "63eb00a191a1b8ec4fbba2a9",
"avatarUrl": "/avatars/0cc7cf9b6d05337603f700e0d592edf5.svg",
"fullname": "ShizhaoSun",
"isPro": false,
"type": "user",
"user": "ShizhaoSun"
}
},
{
"_id": "67a33e60b793ca5296f2a6d4",
"hidden": false,
"name": "Jiang Bian",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-01-31T11:28:16 | Text-to-CAD Generation Through Infusing Visual Feedback in Large
Language Models | Creating Computer-Aided Design (CAD) models requires significant expertise
and effort. Text-to-CAD, which converts textual descriptions into CAD
parametric sequences, is crucial in streamlining this process. Recent studies
have utilized ground-truth parametric sequences, known as sequential signals,
as supervision to achieve this goal. However, CAD models are inherently
multimodal, comprising parametric sequences and corresponding rendered visual
objects. Besides, the rendering process from parametric sequences to visual
objects is many-to-one. Therefore, both sequential and visual signals are
critical for effective training. In this work, we introduce CADFusion, a
framework that uses Large Language Models (LLMs) as the backbone and alternates
between two training stages: the sequential learning (SL) stage and the visual
feedback (VF) stage. In the SL stage, we train LLMs using ground-truth
parametric sequences, enabling the generation of logically coherent parametric
sequences. In the VF stage, we reward parametric sequences that render into
visually preferred objects and penalize those that do not, allowing LLMs to
learn how rendered visual objects are perceived and evaluated. These two stages
alternate throughout the training, ensuring balanced learning and preserving
benefits of both signals. Experiments demonstrate that CADFusion significantly
improves performance, both qualitatively and quantitatively. | 9 | 67a33e67b793ca5296f2a8a6 | null | null |
|
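CADFusion's training, per the abstract above, alternates a sequential-learning (SL) stage with a visual-feedback (VF) stage. A heavily simplified sketch of that alternation (all helpers are hypothetical placeholders, not the CADFusion code):

```python
def train_cadfusion(llm, sl_batches, prompts, render, visual_reward,
                    sft_step, reward_step, rounds=10):
    for _ in range(rounds):
        for batch in sl_batches:          # SL stage: ground-truth parametric sequences
            sft_step(llm, batch)
        for prompt in prompts:            # VF stage: reward visually preferred renders
            seq = llm(prompt)
            score = visual_reward(render(seq))
            reward_step(llm, prompt, seq, score)
```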
2025-02-05T17:48:01.059000 | Activation Approximations Can Incur Safety Vulnerabilities Even in Aligned LLMs: Comprehensive Analysis and Defense | 3 | {
"_id": "6433707307bad11484af1d2a",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6433707307bad11484af1d2a/w5zB-zstJzY561n6q7m4D.jpeg",
"followerCount": null,
"fullname": "Lipeng (Tony) He",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "ttttonyhe",
"type": "user"
} | true | null | 2502.00840 | [
{
"_id": "67a3e9fc2955dee2f54fb307",
"hidden": false,
"name": "Jiawen Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a3e9fc2955dee2f54fb308",
"hidden": false,
"name": "Kejia Chen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a3e9fc2955dee2f54fb309",
"hidden": false,
"name": "Lipeng He",
"status": "extracted_pending",
"statusLastChangedAt": "2025-02-05T22:45:18.208Z",
"user": {
"_id": "6433707307bad11484af1d2a",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6433707307bad11484af1d2a/w5zB-zstJzY561n6q7m4D.jpeg",
"fullname": "Lipeng (Tony) He",
"isPro": false,
"type": "user",
"user": "ttttonyhe"
}
},
{
"_id": "67a3e9fc2955dee2f54fb30a",
"hidden": false,
"name": "Jian Lou",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a3e9fc2955dee2f54fb30b",
"hidden": false,
"name": "Dan Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a3e9fc2955dee2f54fb30c",
"hidden": false,
"name": "Zunlei Feng",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a3e9fc2955dee2f54fb30d",
"hidden": false,
"name": "Mingli Song",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a3e9fc2955dee2f54fb30e",
"hidden": false,
"name": "Jian Liu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a3e9fc2955dee2f54fb30f",
"hidden": false,
"name": "Kui Ren",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a3e9fc2955dee2f54fb310",
"hidden": false,
"name": "Xiaohu Yang",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-02T16:25:48 | Activation Approximations Can Incur Safety Vulnerabilities Even in
Aligned LLMs: Comprehensive Analysis and Defense | Large Language Models (LLMs) have showcased remarkable capabilities across
various domains. Accompanying the evolving capabilities and expanding
deployment scenarios of LLMs, their deployment challenges escalate due to their
sheer scale and the advanced yet complex activation designs prevalent in
notable model series, such as Llama, Gemma, and Mistral. These challenges have
become particularly pronounced in resource-constrained deployment scenarios,
where mitigating inference efficiency bottlenecks is imperative. Among various
recent efforts, activation approximation has emerged as a promising avenue for
pursuing inference efficiency, sometimes considered indispensable in
applications such as private inference. Despite achieving substantial speedups
with minimal impact on utility, even appearing sound and practical for
real-world deployment, the safety implications of activation approximations
remain unclear. In this work, we fill this critical gap in LLM safety by
conducting the first systematic safety evaluation of activation approximations.
Our safety vetting spans seven state-of-the-art techniques across three popular categories,
revealing consistent safety degradation across ten safety-aligned LLMs. | 1 | 67a3e9fe2955dee2f54fb36e | null | null |
|
2025-02-05T15:45:57.451000 | Federated Sketching LoRA: On-Device Collaborative Fine-Tuning of Large Language Models | 3 | {
"_id": "671ff4124b2e5a664aae01e1",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/no-auth/8PQkyFF-fc9W2K8uArMXn.png",
"followerCount": null,
"fullname": "Wenzhi Fang",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "wenzhifang",
"type": "user"
} | true | null | 2501.19389 | [
{
"_id": "67a2a05be5b870d51558fc00",
"hidden": false,
"name": "Wenzhi Fang",
"status": "extracted_pending",
"statusLastChangedAt": "2025-02-04T23:18:52.300Z",
"user": {
"_id": "671ff4124b2e5a664aae01e1",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/no-auth/8PQkyFF-fc9W2K8uArMXn.png",
"fullname": "Wenzhi Fang",
"isPro": false,
"type": "user",
"user": "wenzhifang"
}
},
{
"_id": "67a2a05be5b870d51558fc01",
"hidden": false,
"name": "Dong-Jun Han",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a2a05be5b870d51558fc02",
"hidden": false,
"name": "Liangqi Yuan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a2a05be5b870d51558fc03",
"hidden": false,
"name": "Seyyedali Hosseinalipour",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a2a05be5b870d51558fc04",
"hidden": false,
"name": "Christopher G. Brinton",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-01-31T18:44:35 | Federated Sketching LoRA: On-Device Collaborative Fine-Tuning of Large
Language Models | Fine-tuning large language models (LLMs) on devices is attracting increasing
interest. Recent works have fused low-rank adaptation (LoRA) techniques with
federated fine-tuning to mitigate challenges associated with device model sizes
and data scarcity. Still, the heterogeneity of computational resources remains
a critical bottleneck: while higher-rank modules generally enhance performance,
varying device capabilities constrain LoRA's feasible rank range. Existing
approaches attempting to resolve this issue either lack analytical
justification or impose additional computational overhead, leaving a wide gap
for an efficient and theoretically-grounded solution. To address these
challenges, we propose federated sketching LoRA (FSLoRA), which leverages a
sketching mechanism to enable devices to selectively update submatrices of
global LoRA modules maintained by the server. By adjusting the sketching
ratios, which determine the ranks of the submatrices on the devices, FSLoRA
flexibly adapts to device-specific communication and computational constraints.
We provide a rigorous convergence analysis of FSLoRA that characterizes how the
sketching ratios affect the convergence rate. Through comprehensive experiments
on multiple datasets and LLM models, we demonstrate FSLoRA's superior
performance compared to various baselines. | 4 | 67a2a05ce5b870d51558fc57 | null | null |
|
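The sketching mechanism described above lets a device update only a submatrix of the global LoRA factors, with the sketching ratio setting the effective rank. A minimal sketch of one device round (the shapes and index-sharing scheme are assumptions for illustration):

```python
import torch

def sketch_lora_round(A: torch.Tensor, B: torch.Tensor, ratio: float, local_step):
    """A: (r, d_in) down-projection, B: (d_out, r) up-projection."""
    r = A.shape[0]
    k = max(1, int(ratio * r))               # device-feasible sub-rank
    idx = torch.randperm(r)[:k]              # sketch: sampled rank indices
    sub_A, sub_B = A[idx].clone(), B[:, idx].clone()
    sub_A, sub_B = local_step(sub_A, sub_B)  # on-device fine-tuning of the submatrices
    A[idx], B[:, idx] = sub_A, sub_B         # server merges the sub-update
    return A, B
```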
2025-02-05T13:27:36.138000 | COCONut-PanCap: Joint Panoptic Segmentation and Grounded Captions for Fine-Grained Understanding and Generation | 2 | {
"_id": "65ca9b1743207e438a95e90c",
"avatarUrl": "/avatars/8f7bde1c44d8e665a29ee08ce7fedfa4.svg",
"followerCount": null,
"fullname": "Xueqing Deng",
"isHf": false,
"isMod": false,
"isPro": true,
"name": "xdeng77",
"type": "user"
} | true | null | 2502.02589 | [
{
"_id": "67a3ad7447edcbb9e1f1e2f0",
"hidden": false,
"name": "Xueqing Deng",
"status": "extracted_pending",
"statusLastChangedAt": "2025-02-05T18:27:02.349Z",
"user": {
"_id": "65ca9b1743207e438a95e90c",
"avatarUrl": "/avatars/8f7bde1c44d8e665a29ee08ce7fedfa4.svg",
"fullname": "Xueqing Deng",
"isPro": true,
"type": "user",
"user": "xdeng77"
}
},
{
"_id": "67a3ad7447edcbb9e1f1e2f1",
"hidden": false,
"name": "Qihang Yu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a3ad7447edcbb9e1f1e2f2",
"hidden": false,
"name": "Ali Athar",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a3ad7447edcbb9e1f1e2f3",
"hidden": false,
"name": "Chenglin Yang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a3ad7447edcbb9e1f1e2f4",
"hidden": false,
"name": "Linjie Yang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a3ad7447edcbb9e1f1e2f5",
"hidden": false,
"name": "Xiaojie Jin",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a3ad7447edcbb9e1f1e2f6",
"hidden": false,
"name": "Xiaohui Shen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a3ad7447edcbb9e1f1e2f7",
"hidden": false,
"name": "Liang-Chieh Chen",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-04T18:59:46 | COCONut-PanCap: Joint Panoptic Segmentation and Grounded Captions for
Fine-Grained Understanding and Generation | This paper introduces the COCONut-PanCap dataset, created to enhance panoptic
segmentation and grounded image captioning. Building upon the COCO dataset with
advanced COCONut panoptic masks, this dataset aims to overcome limitations in
existing image-text datasets that often lack detailed, scene-comprehensive
descriptions. The COCONut-PanCap dataset incorporates fine-grained,
region-level captions grounded in panoptic segmentation masks, ensuring
consistency and improving the detail of generated captions. Through
human-edited, densely annotated descriptions, COCONut-PanCap supports improved
training of vision-language models (VLMs) for image understanding and
generative models for text-to-image tasks. Experimental results demonstrate
that COCONut-PanCap significantly boosts performance across understanding and
generation tasks, offering complementary benefits to large-scale datasets. This
dataset sets a new benchmark for evaluating models on joint panoptic
segmentation and grounded captioning tasks, addressing the need for
high-quality, detailed image-text annotations in multi-modal learning. | 10 | 67a3ad7647edcbb9e1f1e378 | null | null |
|
2025-02-05T12:16:39.189000 | Sample, Scrutinize and Scale: Effective Inference-Time Search by Scaling Verification | 2 | {
"_id": "66824dacdd73c6dd2996c166",
"avatarUrl": "/avatars/7c43ccca705bfb608c8d46b68f62a89d.svg",
"followerCount": null,
"fullname": "Eric",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "ericzhao28",
"type": "user"
} | true | null | 2502.01839 | [
{
"_id": "67a394a6049991184002e7f4",
"hidden": false,
"name": "Eric Zhao",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-05T16:53:58.892Z",
"user": {
"_id": "66824dacdd73c6dd2996c166",
"avatarUrl": "/avatars/7c43ccca705bfb608c8d46b68f62a89d.svg",
"fullname": "Eric",
"isPro": false,
"type": "user",
"user": "ericzhao28"
}
},
{
"_id": "67a394a6049991184002e7f5",
"hidden": false,
"name": "Pranjal Awasthi",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a394a6049991184002e7f6",
"hidden": false,
"name": "Sreenivas Gollapudi",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-03T21:31:07 | Sample, Scrutinize and Scale: Effective Inference-Time Search by Scaling
Verification | Sampling-based search, a simple paradigm for utilizing test-time compute,
involves generating multiple candidate responses and selecting the best one --
typically by verifying each response for correctness. In this paper, we study
the scaling trends governing sampling-based search. Among our findings is that
simply scaling up a minimalist implementation that uses only random sampling
and direct self-verification results in sustained performance improvements
that, for example, elevate the Gemini v1.5 Pro model's reasoning capabilities
past that of o1-Preview on popular benchmarks. We partially attribute the
scalability of sampling-based search to a phenomenon of implicit scaling, where
sampling a larger pool of responses in turn improves verification accuracy. We
further identify two useful principles for improving self-verification
capabilities with test-time compute: (1) comparing across responses provides
helpful signals about the locations of errors and hallucinations, and (2)
different model output styles are useful for different contexts -- chains of
thought are useful for reasoning but harder to verify. We also find that,
though accurate verification can be elicited, frontier models demonstrate
remarkably weak out-of-the-box verification capabilities and introduce a benchmark
to measure progress on these deficiencies. | 7 | 67a394a7049991184002e82d | null | null |
|
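The sampling-based search loop the abstract describes is compact enough to sketch: sample k candidates, verify each, keep the best. The `generate` and `verify` callables below are placeholders standing in for LLM calls; none of these names come from the paper.

```python
# Minimal sampling-based search with direct self-verification (sketch).
import random

def generate(prompt: str) -> str:
    """Stand-in for sampling one response from a model at temperature > 0."""
    return random.choice(["A: 408", "A: 418", "A: 408"])

def verify(prompt: str, response: str, pool: list[str]) -> float:
    """Stand-in for self-verification. Scoring a response against the rest of
    the pool mirrors the paper's findings that cross-response comparison helps
    localize errors, and that larger pools improve verification accuracy
    (the "implicit scaling" effect)."""
    return sum(other == response for other in pool) / len(pool)

def sampling_based_search(prompt: str, k: int = 16) -> str:
    candidates = [generate(prompt) for _ in range(k)]
    scores = [verify(prompt, c, candidates) for c in candidates]
    best_score, best = max(zip(scores, candidates), key=lambda p: p[0])
    return best

print(sampling_based_search("What is 17 * 24?"))
```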
2025-02-05T08:09:02.787000 | Rethinking Mixture-of-Agents: Is Mixing Different Large Language Models Beneficial? | 4 | {
"_id": "62f32eab52ad88c930bb3f3b",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1677134945205-62f32eab52ad88c930bb3f3b.png",
"followerCount": 55,
"fullname": "Asankhaya Sharma",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "codelion",
"type": "user"
} | false | null | 2502.00674 | [
{
"_id": "67a362c9b9a2bb11fdba4b9f",
"hidden": false,
"name": "Wenzhe Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a362c9b9a2bb11fdba4ba0",
"hidden": false,
"name": "Yong Lin",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a362c9b9a2bb11fdba4ba1",
"hidden": false,
"name": "Mengzhou Xia",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a362c9b9a2bb11fdba4ba2",
"hidden": false,
"name": "Chi Jin",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-02T05:23:29 | Rethinking Mixture-of-Agents: Is Mixing Different Large Language Models
Beneficial? | Ensembling outputs from diverse sources is a straightforward yet effective
approach to boost performance. Mixture-of-Agents (MoA) is one such popular
ensemble method that aggregates outputs from multiple different Large Language
Models (LLMs). This paper raises the question in the context of language
models: is mixing different LLMs truly beneficial? We propose Self-MoA -- an
ensemble method that aggregates outputs from only the single top-performing
LLM. Our extensive experiments reveal that, surprisingly, Self-MoA outperforms
standard MoA that mixes different LLMs in a large number of scenarios: Self-MoA
achieves 6.6% improvement over MoA on the AlpacaEval 2.0 benchmark, and an
average of 3.8% improvement across various benchmarks, including MMLU, CRUX,
and MATH. Applying Self-MoA to one of the top-ranking models in AlpacaEval 2.0
directly achieves the new state-of-the-art performance on the leaderboard. To
understand the effectiveness of Self-MoA, we systematically investigate the
trade-off between diversity and quality of outputs under various MoA settings.
We confirm that MoA performance is quite sensitive to output quality, and that
mixing different LLMs often lowers the average quality of the models. To
complement the study, we identify the scenarios where mixing different LLMs
could be helpful. This paper further introduces a sequential version of
Self-MoA that can aggregate a large number of LLM outputs
on-the-fly over multiple rounds, and is as effective as aggregating all outputs
at once. | 13 | 67a362cab9a2bb11fdba4bdc | null | null |
|
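The Self-MoA recipe above reduces to two steps: sample several outputs from the single top-performing model, then have the same model aggregate them. The `llm` callable and prompt wording below are placeholders, not the paper's implementation.

```python
# Self-MoA sketch: in-model ensembling instead of mixing different LLMs.
def llm(prompt: str) -> str:
    raise NotImplementedError("plug in the single top-performing model here")

def self_moa(task: str, n_samples: int = 6) -> str:
    # 1) Sample n outputs from the SAME model (standard MoA would instead
    #    query several different LLMs here).
    proposals = [llm(task) for _ in range(n_samples)]
    # 2) Ask that same model to synthesize a final answer from its proposals.
    numbered = "\n".join(f"[{i}] {p}" for i, p in enumerate(proposals, 1))
    return llm(
        f"Task: {task}\n\nCandidate responses:\n{numbered}\n\n"
        "Synthesize the single best response from the candidates above."
    )
```

The sequential variant mentioned at the end of the abstract would invoke the aggregation step round by round, carrying the running synthesis forward so an arbitrary number of outputs can be folded in on-the-fly.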
2025-02-05T07:44:45.130000 | Concept Steerers: Leveraging K-Sparse Autoencoders for Controllable Generations | 2 | {
"_id": "64bcc06fb567ae97c3272d3d",
"avatarUrl": "/avatars/bcb61fe9e575154d84913a1501971f1a.svg",
"followerCount": null,
"fullname": "kim",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "dahyekim",
"type": "user"
} | true | null | 2501.19066 | [
{
"_id": "67a0f59c5685d37e28880943",
"hidden": false,
"name": "Dahye Kim",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-03T16:59:53.099Z",
"user": {
"_id": "64bcc06fb567ae97c3272d3d",
"avatarUrl": "/avatars/bcb61fe9e575154d84913a1501971f1a.svg",
"fullname": "kim",
"isPro": false,
"type": "user",
"user": "dahyekim"
}
},
{
"_id": "67a0f59c5685d37e28880944",
"hidden": false,
"name": "Deepti Ghadiyaram",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-01-31T11:52:47 | Concept Steerers: Leveraging K-Sparse Autoencoders for Controllable
Generations | Despite the remarkable progress in text-to-image generative models, they are
prone to adversarial attacks and inadvertently generate unsafe, unethical
content. Existing approaches often rely on fine-tuning models to remove
specific concepts, which is computationally expensive, lacks scalability, and/or
compromises generation quality. In this work, we propose a novel framework
leveraging k-sparse autoencoders (k-SAEs) to enable efficient and interpretable
concept manipulation in diffusion models. Specifically, we first identify
interpretable monosemantic concepts in the latent space of text embeddings and
leverage them to precisely steer the generation away or towards a given concept
(e.g., nudity) or to introduce a new concept (e.g., photographic style).
Through extensive experiments, we demonstrate that our approach is very simple,
requires no retraining of the base model or LoRA adapters, does not compromise
the generation quality, and is robust to adversarial prompt manipulations. Our
method yields an improvement of 20.01% in unsafe concept removal,
is effective in style manipulation, and is ~5x faster than the
current state-of-the-art. | 12 | 67a0f5a05685d37e28880a1e | null | null |
|
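The steering mechanism described above lends itself to a short sketch: encode a text embedding with a k-sparse autoencoder, rescale one identified monosemantic latent, and decode. Everything here — the weights, the concept index, the top-k recipe — is an illustrative assumption, not the paper's released code.

```python
import torch

def k_sae_steer(emb, W_enc, b_enc, W_dec, b_dec, concept_idx, k=32, scale=0.0):
    z = torch.relu(emb @ W_enc + b_enc)               # latent activations
    top = torch.topk(z, k, dim=-1)
    z_sparse = torch.zeros_like(z).scatter(-1, top.indices, top.values)  # k-sparsity
    z_sparse[..., concept_idx] *= scale               # 0.0 erases, >1.0 amplifies
    return z_sparse @ W_dec + b_dec                   # steered text embedding

# Toy shapes; in practice the k-SAE is trained on the text encoder's
# embeddings and `concept_idx` is found by inspecting monosemantic latents.
d_model, d_latent = 768, 4096
emb = torch.randn(1, d_model)
W_enc, b_enc = torch.randn(d_model, d_latent) * 0.02, torch.zeros(d_latent)
W_dec, b_dec = torch.randn(d_latent, d_model) * 0.02, torch.zeros(d_model)
steered = k_sae_steer(emb, W_enc, b_enc, W_dec, b_dec, concept_idx=7, scale=0.0)
```

Because only the text embedding is edited, the base diffusion model and any LoRA adapters stay untouched, which is where the no-retraining property comes from.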
2025-02-05T03:01:40.464000 | Inverse Bridge Matching Distillation | 2 | {
"_id": "672503c59f68afdd63cc81a2",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/672503c59f68afdd63cc81a2/lw4ApCTwAKgt_uUyfSVRH.jpeg",
"followerCount": null,
"fullname": "Nikita Gushchin",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "ngushchin",
"type": "user"
} | true | null | 2502.01362 | [
{
"_id": "67a2ad6ac7caec9bf5a45e61",
"hidden": false,
"name": "Nikita Gushchin",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-05T10:14:26.177Z",
"user": {
"_id": "672503c59f68afdd63cc81a2",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/672503c59f68afdd63cc81a2/lw4ApCTwAKgt_uUyfSVRH.jpeg",
"fullname": "Nikita Gushchin",
"isPro": false,
"type": "user",
"user": "ngushchin"
}
},
{
"_id": "67a2ad6ac7caec9bf5a45e62",
"hidden": false,
"name": "David Li",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-05T10:14:24.236Z",
"user": {
"_id": "656a2e59b4020389028dc85f",
"avatarUrl": "/avatars/6fda3bddc3cecba2894233bebb3de968.svg",
"fullname": "David Li",
"isPro": false,
"type": "user",
"user": "kekchpek"
}
},
{
"_id": "67a2ad6ac7caec9bf5a45e63",
"hidden": false,
"name": "Daniil Selikhanovych",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-05T10:17:09.668Z",
"user": {
"_id": "64a42977250bfdecd9570a9e",
"avatarUrl": "/avatars/df5d7cf159e6bb9e961e1c77d1b89d36.svg",
"fullname": "Daniil Selikhanovych",
"isPro": false,
"type": "user",
"user": "apryc1"
}
},
{
"_id": "67a2ad6ac7caec9bf5a45e64",
"hidden": false,
"name": "Evgeny Burnaev",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a2ad6ac7caec9bf5a45e65",
"hidden": false,
"name": "Dmitry Baranchuk",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-05T10:17:26.518Z",
"user": {
"_id": "62b6cc49752323892323bc04",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/62b6cc49752323892323bc04/gGBld1KJIP9AIpd81L3PC.jpeg",
"fullname": "Dmitry Baranchuk",
"isPro": true,
"type": "user",
"user": "dbaranchuk"
}
},
{
"_id": "67a2ad6ac7caec9bf5a45e66",
"hidden": false,
"name": "Alexander Korotin",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-05T10:17:33.816Z",
"user": {
"_id": "67a31c9ae5b870d5157657db",
"avatarUrl": "/avatars/ca5fd356e3656e1beacb5a28ecaad5be.svg",
"fullname": "Alexander Korotin",
"isPro": false,
"type": "user",
"user": "akorotin"
}
}
] | 2025-02-03T13:56:03 | Inverse Bridge Matching Distillation | Learning diffusion bridge models is easy; making them fast and practical is
an art. Diffusion bridge models (DBMs) are a promising extension of diffusion
models for applications in image-to-image translation. However, like many
modern diffusion and flow models, DBMs suffer from the problem of slow
inference. To address it, we propose a novel distillation technique based on
the inverse bridge matching formulation and derive the tractable objective to
solve it in practice. Unlike previously developed DBM distillation techniques,
the proposed method can distill both conditional and unconditional types of
DBMs, distill models into a one-step generator, and use only corrupted images
for training. We evaluate our approach for both conditional and unconditional
types of bridge matching on a wide set of setups, including super-resolution,
JPEG restoration, sketch-to-image, and other tasks, and show that our
distillation technique allows us to accelerate the inference of DBMs from 4x to
100x and even provide better generation quality than the teacher model,
depending on the particular setup. | 27 | 67a2ad70c7caec9bf5a45fb0 | null | null |
|
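At the risk of oversimplifying, the training-loop shape the abstract implies can be sketched: a one-step student generator sees only corrupted inputs, and supervision comes from the frozen teacher DBM's drift evaluated on interpolated bridge states. The deterministic interpolant and squared-error matching below are crude stand-ins for the paper's tractable inverse bridge matching objective, not a faithful reproduction of it.

```python
import torch
import torch.nn as nn

d = 64
student = nn.Sequential(nn.Linear(d, 128), nn.SiLU(), nn.Linear(128, d))
teacher = nn.Sequential(nn.Linear(d + 1, 128), nn.SiLU(), nn.Linear(128, d))
for p in teacher.parameters():             # teacher DBM drift is frozen
    p.requires_grad_(False)
opt = torch.optim.Adam(student.parameters(), lr=1e-4)

for step in range(100):
    x1 = torch.randn(32, d)                # corrupted inputs only
    x0 = student(x1)                       # one-step clean prediction
    t = torch.rand(32, 1) * 0.95           # bridge time, bounded below 1
    xt = (1 - t) * x0 + t * x1             # noiseless bridge interpolant
    bridge_drift = (x1 - xt) / (1 - t)     # drift of the x0 -> x1 bridge
    teacher_drift = teacher(torch.cat([xt, t], dim=-1))
    # Push the student so that bridges built on its outputs are the ones the
    # teacher's drift field already describes; gradients reach the student
    # through xt and bridge_drift.
    loss = ((teacher_drift - bridge_drift) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```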
2025-02-05T00:59:11.275000 | Generating Multi-Image Synthetic Data for Text-to-Image Customization | 2 | {
"_id": "62f6a894c3372328414c7021",
"avatarUrl": "/avatars/e8b10912355712f38f10805c31bea962.svg",
"followerCount": 10,
"fullname": "Nupur Kumari",
"isHf": false,
"isMod": false,
"isPro": true,
"name": "nupurkmr9",
"type": "user"
} | true | null | 2502.01720 | [
{
"_id": "67a2fddb4044bf1c86f765a3",
"hidden": false,
"name": "Nupur Kumari",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-05T13:37:30.825Z",
"user": {
"_id": "62f6a894c3372328414c7021",
"avatarUrl": "/avatars/e8b10912355712f38f10805c31bea962.svg",
"fullname": "Nupur Kumari",
"isPro": true,
"type": "user",
"user": "nupurkmr9"
}
},
{
"_id": "67a2fddb4044bf1c86f765a4",
"hidden": false,
"name": "Xi Yin",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a2fddb4044bf1c86f765a5",
"hidden": false,
"name": "Jun-Yan Zhu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a2fddb4044bf1c86f765a6",
"hidden": false,
"name": "Ishan Misra",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a2fddb4044bf1c86f765a7",
"hidden": false,
"name": "Samaneh Azadi",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-03T18:59:41 | Generating Multi-Image Synthetic Data for Text-to-Image Customization | Customization of text-to-image models enables users to insert custom concepts
and generate the concepts in unseen settings. Existing methods either rely on
costly test-time optimization or train encoders on single-image training
datasets without multi-image supervision, leading to worse image quality. We
propose a simple approach that addresses both limitations. We first leverage
existing text-to-image models and 3D datasets to create a high-quality
Synthetic Customization Dataset (SynCD) consisting of multiple images of the
same object in different lighting, backgrounds, and poses. We then propose a
new encoder architecture based on shared attention mechanisms that better
incorporates fine-grained visual details from the input images. Finally, we propose
a new inference technique that mitigates overexposure
by normalizing the text and image guidance vectors. Through extensive
experiments, we show that our model, trained on the synthetic dataset with the
proposed encoder and inference algorithm, outperforms existing tuning-free
methods on standard customization benchmarks. | 8 | 67a2fde34044bf1c86f767ba | null | null |
|
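The overexposure fix mentioned in the abstract also admits a small sketch: with both text and reference-image conditioning, each classifier-free-guidance delta is rescaled before being added to the unconditional prediction. The abstract does not spell out the exact normalization, so the per-vector norm matching below is an assumption.

```python
import torch

def guided_eps(eps_uncond, eps_text, eps_image, w_text=7.5, w_image=3.0):
    g_text = eps_text - eps_uncond          # text guidance vector
    g_image = eps_image - eps_uncond        # image guidance vector
    # Rescale each guidance vector to the scale of the unconditional
    # prediction so large deltas cannot blow up brightness statistics.
    g_text = g_text * eps_uncond.norm() / (g_text.norm() + 1e-8)
    g_image = g_image * eps_uncond.norm() / (g_image.norm() + 1e-8)
    return eps_uncond + w_text * g_text + w_image * g_image

eps_u, eps_t, eps_i = (torch.randn(1, 4, 64, 64) for _ in range(3))
print(guided_eps(eps_u, eps_t, eps_i).shape)   # torch.Size([1, 4, 64, 64])
```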
2025-02-04T23:46:17.626000 | VideoJAM: Joint Appearance-Motion Representations for Enhanced Motion Generation in Video Models | 7 | {
"_id": "6181c72cdcc1df2c9de8a4d8",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1655248010394-6181c72cdcc1df2c9de8a4d8.jpeg",
"followerCount": 14,
"fullname": "Hila Chefer",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "Hila",
"type": "user"
} | true | null | 2502.02492 | [
{
"_id": "67a2ec904ea0e3138ac966f2",
"hidden": false,
"name": "Hila Chefer",
"status": "extracted_pending",
"statusLastChangedAt": "2025-02-05T04:44:03.218Z",
"user": {
"_id": "6181c72cdcc1df2c9de8a4d8",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1655248010394-6181c72cdcc1df2c9de8a4d8.jpeg",
"fullname": "Hila Chefer",
"isPro": false,
"type": "user",
"user": "Hila"
}
},
{
"_id": "67a2ec904ea0e3138ac966f3",
"hidden": false,
"name": "Uriel Singer",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-05T16:53:54.046Z",
"user": {
"_id": "6345b71843f4f2d2ed113355",
"avatarUrl": "/avatars/a497669a4c53a724c4f6ea615d1dda59.svg",
"fullname": "Uriel Singer",
"isPro": false,
"type": "user",
"user": "urielsinger"
}
},
{
"_id": "67a2ec904ea0e3138ac966f4",
"hidden": false,
"name": "Amit Zohar",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a2ec904ea0e3138ac966f5",
"hidden": false,
"name": "Yuval Kirstain",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a2ec904ea0e3138ac966f6",
"hidden": false,
"name": "Adam Polyak",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a2ec904ea0e3138ac966f7",
"hidden": false,
"name": "Yaniv Taigman",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a2ec904ea0e3138ac966f8",
"hidden": false,
"name": "Lior Wolf",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a2ec904ea0e3138ac966f9",
"hidden": false,
"name": "Shelly Sheynin",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-04T17:07:10 | VideoJAM: Joint Appearance-Motion Representations for Enhanced Motion
Generation in Video Models | Despite tremendous recent progress, generative video models still struggle to
capture real-world motion, dynamics, and physics. We show that this limitation
arises from the conventional pixel reconstruction objective, which biases
models toward appearance fidelity at the expense of motion coherence. To
address this, we introduce VideoJAM, a novel framework that instills an
effective motion prior into video generators by encouraging the model to learn a
joint appearance-motion representation. VideoJAM is composed of two
complementary units. During training, we extend the objective to predict both
the generated pixels and their corresponding motion from a single learned
representation. During inference, we introduce Inner-Guidance, a mechanism that
steers the generation toward coherent motion by leveraging the model's own
evolving motion prediction as a dynamic guidance signal. Notably, our framework
can be applied to any video model with minimal adaptations, requiring no
modifications to the training data or scaling of the model. VideoJAM achieves
state-of-the-art performance in motion coherence, surpassing highly competitive
proprietary models while also enhancing the perceived visual quality of the
generations. These findings emphasize that appearance and motion can be
complementary and, when effectively integrated, enhance both the visual quality
and the coherence of video generation. Project website:
https://hila-chefer.github.io/videojam-paper.github.io/ | 58 | 67a2ec934ea0e3138ac9678e | null | null |
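The training-side change VideoJAM describes — predicting appearance and motion from one representation — reduces to adding a second head and a second loss term. The head shapes, the optical-flow target, and the 0.5 weight below are illustrative assumptions layered on a generic video backbone, not the paper's architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class JointHeads(nn.Module):
    """Two lightweight heads over one shared video representation."""
    def __init__(self, d_model=256, pixel_dim=3, motion_dim=2):
        super().__init__()
        self.pixel_head = nn.Linear(d_model, pixel_dim)    # appearance
        self.motion_head = nn.Linear(d_model, motion_dim)  # e.g., optical flow

    def forward(self, h):                  # h: (B, T, H, W, d_model) features
        return self.pixel_head(h), self.motion_head(h)

heads = JointHeads()
h = torch.randn(2, 8, 16, 16, 256)         # backbone features for 8 frames
pixels_gt = torch.randn(2, 8, 16, 16, 3)
flow_gt = torch.randn(2, 8, 16, 16, 2)     # motion target, e.g., precomputed flow
pixels, motion = heads(h)
loss = F.mse_loss(pixels, pixels_gt) + 0.5 * F.mse_loss(motion, flow_gt)
```

At inference, Inner-Guidance would then reuse the model's own evolving motion prediction as a guidance signal, analogous to how classifier-free guidance reuses an unconditional prediction.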