publishedAt (timestamp[ns]: 2023-02-13 12:55:54 to 2025-05-02 03:36:49) | title (string: length 8 to 206) | thumbnail (string: length 77) | numComments (int64: 0 to 143) | submittedBy (dict) | isAuthorParticipating (bool: 2 classes) | mediaUrls (sequence: length 0 to 12) | paper_id (string: length 10) | paper_authors (list: length 1 to 942) | paper_publishedAt (timestamp[ns]: 2023-02-13 17:55:54 to 2025-05-02 07:36:49) | paper_title (string: length 8 to 206) | paper_summary (string: length 165 to 1.92k) | paper_upvotes (int64: 0 to 615) | paper_discussionId (string: length 24) | paper_projectPage (string: 572 classes) | paper_githubRepo (string: 813 classes) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2025-02-26T00:56:27.275000 | K-LoRA: Unlocking Training-Free Fusion of Any Subject and Style LoRAs | 2 | {
"_id": "6285a9133ab6642179158944",
"avatarUrl": "/avatars/6e10fa07c94141fcdbe0cab02bb731ca.svg",
"followerCount": 15,
"fullname": "Zhen Li",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "Paper99",
"type": "user"
} | true | null | 2502.18461 | [
{
"_id": "67bea0cc2d6011a72335f704",
"hidden": false,
"name": "Ziheng Ouyang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-26T08:25:46.941Z",
"user": {
"_id": "67be9daa65ae638b17e461e9",
"avatarUrl": "/avatars/30ab04b8a6a4d3e1d211943c0344b95e.svg",
"fullname": "Ziheng Ouyang",
"isPro": false,
"type": "user",
"user": "oyzh2005"
}
},
{
"_id": "67bea0cc2d6011a72335f705",
"hidden": false,
"name": "Zhen Li",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-26T15:37:36.061Z",
"user": {
"_id": "6285a9133ab6642179158944",
"avatarUrl": "/avatars/6e10fa07c94141fcdbe0cab02bb731ca.svg",
"fullname": "Zhen Li",
"isPro": false,
"type": "user",
"user": "Paper99"
}
},
{
"_id": "67bea0cc2d6011a72335f706",
"hidden": false,
"name": "Qibin Hou",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-25T18:59:12 | K-LoRA: Unlocking Training-Free Fusion of Any Subject and Style LoRAs | Recent studies have explored combining different LoRAs to jointly generate
learned style and content. However, existing methods either fail to effectively
preserve both the original subject and style simultaneously or require
additional training. In this paper, we argue that the intrinsic properties of
LoRA can effectively guide diffusion models in merging learned subject and
style. Building on this insight, we propose K-LoRA, a simple yet effective
training-free LoRA fusion approach. In each attention layer, K-LoRA compares
the Top-K elements in each LoRA to be fused, determining which LoRA to select
for optimal fusion. This selection mechanism ensures that the most
representative features of both subject and style are retained during the
fusion process, effectively balancing their contributions. Experimental results
demonstrate that the proposed method effectively integrates the subject and
style information learned by the original LoRAs, outperforming state-of-the-art
training-based approaches in both qualitative and quantitative results. | 15 | 67bea0cf2d6011a72335f7aa | https://k-lora.github.io/K-LoRA.io/ | https://github.com/HVision-NKU/K-LoRA |
|
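
A minimal sketch of the Top-K selection rule described in the K-LoRA abstract above. The sum-of-Top-K scoring, the per-layer delta shapes, and the value of k are assumptions for illustration; the paper's exact criterion may differ.

```python
import torch

def klora_select(delta_subject: torch.Tensor, delta_style: torch.Tensor,
                 k: int = 32) -> torch.Tensor:
    """Training-free fusion sketch: for one attention layer, score each LoRA
    delta (B @ A) by the sum of its Top-K absolute entries and keep the delta
    whose most representative features dominate."""
    score_subject = delta_subject.abs().flatten().topk(k).values.sum()
    score_style = delta_style.abs().flatten().topk(k).values.sum()
    return delta_subject if score_subject >= score_style else delta_style

# Hypothetical per-layer LoRA weight deltas:
fused = klora_select(torch.randn(64, 64), torch.randn(64, 64))
```
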
2025-02-26T00:38:42.527000 | Shakti-VLMs: Scalable Vision-Language Models for Enterprise AI | 2 | {
"_id": "63d9e09f1cae35c27bf80cb2",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1675223055197-noauth.jpeg",
"followerCount": 6,
"fullname": "Syed Abdul Gaffar Shakhadri",
"isHf": false,
"isMod": false,
"isPro": true,
"name": "SyedAbdul",
"type": "user"
} | true | null | 2502.17092 | [
{
"_id": "67bea8cc7e54112af6c372aa",
"hidden": false,
"name": "Syed Abdul Gaffar Shakhadri",
"status": "extracted_confirmed",
"statusLastChangedAt": "2025-02-26T05:52:19.355Z",
"user": {
"_id": "63d9e09f1cae35c27bf80cb2",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1675223055197-noauth.jpeg",
"fullname": "Syed Abdul Gaffar Shakhadri",
"isPro": true,
"type": "user",
"user": "SyedAbdul"
}
},
{
"_id": "67bea8cc7e54112af6c372ab",
"hidden": false,
"name": "Kruthika KR",
"status": "extracted_pending",
"statusLastChangedAt": "2025-02-26T05:38:21.529Z",
"user": {
"_id": "5fb7ae48e6ae537272bdeb3c",
"avatarUrl": "/avatars/e5d01cb428f4b22161e0d17895a5c678.svg",
"fullname": "Kruthika",
"isPro": false,
"type": "user",
"user": "kruthika"
}
},
{
"_id": "67bea8cc7e54112af6c372ac",
"hidden": false,
"name": "Kartik Basavaraj Angadi",
"status": "extracted_pending",
"statusLastChangedAt": "2025-02-26T05:38:21.529Z",
"user": {
"_id": "677cc34fe4cf361eedccd085",
"avatarUrl": "/avatars/e97a3f9a84ed258ab4b75c12865562d6.svg",
"fullname": "Kartik Basavaraj Angadi",
"isPro": false,
"type": "user",
"user": "KartikAngadi"
}
}
] | 2025-02-24T12:15:07 | Shakti-VLMs: Scalable Vision-Language Models for Enterprise AI | We introduce Shakti VLM, a family of vision-language models at 1B and 4B
parameter scales, designed to address data efficiency challenges in
multimodal learning. While recent VLMs achieve strong performance through
extensive training data, Shakti models leverage architectural innovations to
attain competitive results with fewer tokens. Key advancements include
QK-Normalization for attention stability, hybrid normalization techniques, and
enhanced positional encoding. A three-stage training strategy further optimizes
learning efficiency. Evaluations show that Shakti-VLM-1B and Shakti-VLM-4B
excel in document understanding, visual reasoning, OCR
extraction, and general multimodal reasoning. Our results highlight that high
performance can be achieved through model design and training strategy rather
than sheer data volume, making Shakti an efficient solution for
enterprise-scale multimodal tasks. | 3 | 67bea8cd7e54112af6c37305 | null | null |
|
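
The abstract above names QK-Normalization as a key stabilizer. One common form of this technique, normalizing queries and keys before the dot product, is sketched below; the L2 normalization and the learnable temperature are assumptions, not Shakti's confirmed implementation.

```python
import torch
import torch.nn.functional as F

def qk_norm_attention(q, k, v, scale):
    # Unit-normalizing Q and K bounds the attention logits, which helps
    # keep training stable; a learnable scale restores expressiveness.
    q = F.normalize(q, dim=-1)
    k = F.normalize(k, dim=-1)
    logits = scale * (q @ k.transpose(-2, -1))
    return torch.softmax(logits, dim=-1) @ v

q, k, v = (torch.randn(2, 8, 16, 64) for _ in range(3))  # (batch, heads, tokens, dim)
out = qk_norm_attention(q, k, v, scale=torch.tensor(10.0))
```
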
2025-02-25T22:26:11.421000 | Scale-Distribution Decoupling: Enabling Stable and Effective Training of Large Language Models | 2 | {
"_id": "6371128eafbe42caa5a5222b",
"avatarUrl": "/avatars/c3b2ab35949c38aa3dfb2657a1300aac.svg",
"followerCount": 1,
"fullname": "Yutao Zeng",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "Taoer",
"type": "user"
} | true | [
"https://cdn-uploads.huggingface.co/production/uploads/6371128eafbe42caa5a5222b/eu6jpeTjTn34I1SJ4_K1a.png",
"https://cdn-uploads.huggingface.co/production/uploads/6371128eafbe42caa5a5222b/P6mXXagZPsH6fwQ6myMlr.png"
] | 2502.15499 | [
{
"_id": "67be86743ea16c7e9491ff16",
"hidden": false,
"name": "Ya Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67be86743ea16c7e9491ff17",
"hidden": false,
"name": "Zhijian Zhuo",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-26T08:58:10.556Z",
"user": {
"_id": "66335b9c95c5b79ebf306f30",
"avatarUrl": "/avatars/d57784ee65cbef014360c9bac1ad4119.svg",
"fullname": "Zhijian Zhuo",
"isPro": false,
"type": "user",
"user": "BryceZhuo"
}
},
{
"_id": "67be86743ea16c7e9491ff18",
"hidden": false,
"name": "Yutao Zeng",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-26T08:25:55.016Z",
"user": {
"_id": "6371128eafbe42caa5a5222b",
"avatarUrl": "/avatars/c3b2ab35949c38aa3dfb2657a1300aac.svg",
"fullname": "Yutao Zeng",
"isPro": false,
"type": "user",
"user": "Taoer"
}
},
{
"_id": "67be86743ea16c7e9491ff19",
"hidden": false,
"name": "Xun Zhou",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-26T08:52:26.974Z",
"user": {
"_id": "62533db4a06ec75172eeabe7",
"avatarUrl": "/avatars/b1a4dad90afae5c00df97233a97777db.svg",
"fullname": "xunzhou",
"isPro": false,
"type": "user",
"user": "xunzhou"
}
},
{
"_id": "67be86743ea16c7e9491ff1a",
"hidden": false,
"name": "Jian Yang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67be86743ea16c7e9491ff1b",
"hidden": false,
"name": "Xiaoqing Li",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-26T08:51:57.314Z",
"user": {
"_id": "64648638351adef1a847a7ad",
"avatarUrl": "/avatars/7518e058fcf81ee81a06c96e996531e9.svg",
"fullname": "Xiaoqing Li",
"isPro": false,
"type": "user",
"user": "LLIXQ"
}
}
] | 2025-02-21T14:49:34 | Scale-Distribution Decoupling: Enabling Stable and Effective Training of
Large Language Models | Training stability is a persistent challenge in the pre-training of large
language models (LLMs), particularly for architectures such as Post-Norm
Transformers, which are prone to gradient explosion and dissipation. In this
paper, we propose Scale-Distribution Decoupling (SDD), a novel approach that
stabilizes training by explicitly decoupling the scale and distribution of the
weight matrix in fully-connected layers. SDD applies a normalization mechanism
to regulate activations and a learnable scaling vector to maintain
well-conditioned gradients, effectively preventing gradient explosion
and dissipation. This separation improves optimization efficiency,
particularly in deep networks, by ensuring stable gradient propagation.
Experimental results demonstrate that our method stabilizes training across
various LLM architectures and outperforms existing techniques in different
normalization configurations. Furthermore, the proposed method is lightweight
and compatible with existing frameworks, making it a practical solution for
stabilizing LLM training. Code is available at https://github.com/kaihemo/SDD. | 13 | 67be86753ea16c7e9491ff49 | null | null |
|
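
The scale-distribution decoupling described above can be sketched as a drop-in linear layer: the weight's distribution is pinned by normalization while a learnable vector carries its scale. The row-wise normalization axis and the initialization are assumptions for illustration.

```python
import torch
import torch.nn as nn

class SDDLinear(nn.Module):
    """Sketch of Scale-Distribution Decoupling for a fully-connected layer."""
    def __init__(self, d_in: int, d_out: int):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(d_out, d_in))
        self.scale = nn.Parameter(torch.ones(d_out))  # learnable scaling vector

    def forward(self, x):
        # Normalizing each row fixes the weight distribution; the separate
        # scale keeps gradients well-conditioned as depth grows.
        w = self.weight / self.weight.norm(dim=1, keepdim=True)
        return x @ (self.scale.unsqueeze(1) * w).t()

y = SDDLinear(512, 512)(torch.randn(4, 512))
```
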
2025-02-25T22:20:16.916000 | WebGames: Challenging General-Purpose Web-Browsing AI Agents | 2 | {
"_id": "60f1abe7544c2adfd699860c",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"followerCount": 6280,
"fullname": "AK",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "akhaliq",
"type": "user"
} | false | null | 2502.18356 | [
{
"_id": "67be8866823e790d21a2bb90",
"hidden": false,
"name": "George Thomas",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-26T09:06:53.500Z",
"user": {
"_id": "6529aa1460e706730575baa9",
"avatarUrl": "/avatars/550fac58a6ebf937a65d19a48e71eb45.svg",
"fullname": "George Thomas",
"isPro": false,
"type": "user",
"user": "georgethomas"
}
},
{
"_id": "67be8866823e790d21a2bb91",
"hidden": false,
"name": "Alex J. Chan",
"status": "extracted_pending",
"statusLastChangedAt": "2025-02-26T03:20:08.029Z",
"user": {
"_id": "636c1e4415cd58e915bc45df",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/636c1e4415cd58e915bc45df/KnPgdPe0G5ngvXaCBua6R.jpeg",
"fullname": "Alex J. Chan",
"isPro": false,
"type": "user",
"user": "XanderJC"
}
},
{
"_id": "67be8866823e790d21a2bb92",
"hidden": false,
"name": "Jikun Kang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-26T15:37:37.981Z",
"user": {
"_id": "6489e10ca13f65198dc6e122",
"avatarUrl": "/avatars/4aa9eab488157711b2f0298ddadee2f4.svg",
"fullname": "Kang",
"isPro": false,
"type": "user",
"user": "JaxonK"
}
},
{
"_id": "67be8866823e790d21a2bb93",
"hidden": false,
"name": "Wenqi Wu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-26T09:08:43.843Z",
"user": {
"_id": "63a2f1dfc8a2aa5d9e85f8f6",
"avatarUrl": "/avatars/f2191e3a0ce92563f9bfe83283d8d966.svg",
"fullname": "Wenqi Wu",
"isPro": false,
"type": "user",
"user": "BiggieW"
}
},
{
"_id": "67be8866823e790d21a2bb94",
"hidden": false,
"name": "Filippos Christianos",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-26T09:08:23.534Z",
"user": {
"_id": "64f46b681d337935d0495d4d",
"avatarUrl": "/avatars/cce5a4910617931fb13062b832e14ef8.svg",
"fullname": "Filippos Christianos",
"isPro": false,
"type": "user",
"user": "semitable"
}
},
{
"_id": "67be8866823e790d21a2bb95",
"hidden": false,
"name": "Fraser Greenlee",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-26T09:08:16.380Z",
"user": {
"_id": "5f195784925b9863e28ad610",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1595496291585-noauth.png",
"fullname": "Fraser Greenlee",
"isPro": false,
"type": "user",
"user": "Fraser"
}
},
{
"_id": "67be8866823e790d21a2bb96",
"hidden": false,
"name": "Andy Toulis",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67be8866823e790d21a2bb97",
"hidden": false,
"name": "Marvin Purtorab",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-26T09:07:42.610Z",
"user": {
"_id": "6787c4a970c0f5272f456968",
"avatarUrl": "/avatars/bdfa53add57b0f0a9e4e94e24115b354.svg",
"fullname": "Marvin Purtorab",
"isPro": false,
"type": "user",
"user": "comvergent-marvin"
}
}
] | 2025-02-25T16:45:08 | WebGames: Challenging General-Purpose Web-Browsing AI Agents | We introduce WebGames, a comprehensive benchmark suite designed to evaluate
general-purpose web-browsing AI agents through a collection of 50+ interactive
challenges. These challenges are specifically crafted to be straightforward for
humans while systematically testing the limitations of current AI systems
across fundamental browser interactions, advanced input processing, cognitive
tasks, workflow automation, and interactive entertainment. Our framework
eliminates external dependencies through a hermetic testing environment,
ensuring reproducible evaluation with verifiable ground-truth solutions. We
evaluate leading vision-language models including GPT-4o, Claude Computer-Use,
Gemini-1.5-Pro, and Qwen2-VL against human performance. Results reveal a
substantial capability gap, with the best AI system achieving only 43.1%
success rate compared to human performance of 95.7%, highlighting fundamental
limitations in current AI systems' ability to handle common web interaction
patterns that humans find intuitive. The benchmark is publicly available at
webgames.convergence.ai, offering a lightweight, client-side implementation
that facilitates rapid evaluation cycles. Through its modular architecture and
standardized challenge specifications, WebGames provides a robust foundation
for measuring progress in the development of more capable web-browsing agents. | 10 | 67be8868823e790d21a2bbea | null | null |
|
2025-02-25T22:20:08.416000 | AAD-LLM: Neural Attention-Driven Auditory Scene Understanding | 3 | {
"_id": "6531a65daed617662c7f1007",
"avatarUrl": "/avatars/ea2e504780dc40719f7501ab2c7d9c91.svg",
"followerCount": 1,
"fullname": "Xilin Jiang",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "xi-j",
"type": "user"
} | true | null | 2502.16794 | [
{
"_id": "67be86a78a5a80542314f0e6",
"hidden": false,
"name": "Xilin Jiang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-26T08:25:52.841Z",
"user": {
"_id": "6531a65daed617662c7f1007",
"avatarUrl": "/avatars/ea2e504780dc40719f7501ab2c7d9c91.svg",
"fullname": "Xilin Jiang",
"isPro": false,
"type": "user",
"user": "xi-j"
}
},
{
"_id": "67be86a78a5a80542314f0e7",
"hidden": false,
"name": "Sukru Samet Dindar",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-26T09:11:37.706Z",
"user": {
"_id": "661361993bb67cb4f356c3de",
"avatarUrl": "/avatars/b707c07f9c70d2ed1e8cd8cff2551c69.svg",
"fullname": "Sukru Samet Dindar",
"isPro": false,
"type": "user",
"user": "susameddin"
}
},
{
"_id": "67be86a78a5a80542314f0e8",
"hidden": false,
"name": "Vishal Choudhari",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-26T09:11:45.258Z",
"user": {
"_id": "670e8671ba29b3fca221b8c9",
"avatarUrl": "/avatars/20f6479bd5218d6d3e304539df5003f9.svg",
"fullname": "Vishal Choudhari",
"isPro": false,
"type": "user",
"user": "vchoudhari"
}
},
{
"_id": "67be86a78a5a80542314f0e9",
"hidden": false,
"name": "Stephan Bickel",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67be86a78a5a80542314f0ea",
"hidden": false,
"name": "Ashesh Mehta",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67be86a78a5a80542314f0eb",
"hidden": false,
"name": "Guy M McKhann",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67be86a78a5a80542314f0ec",
"hidden": false,
"name": "Adeen Flinker",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67be86a78a5a80542314f0ed",
"hidden": false,
"name": "Daniel Friedman",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67be86a78a5a80542314f0ee",
"hidden": false,
"name": "Nima Mesgarani",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-24T03:06:45 | AAD-LLM: Neural Attention-Driven Auditory Scene Understanding | Auditory foundation models, including auditory large language models (LLMs),
process all sound inputs equally, independent of listener perception. However,
human auditory perception is inherently selective: listeners focus on specific
speakers while ignoring others in complex auditory scenes. Existing models do
not incorporate this selectivity, limiting their ability to generate
perception-aligned responses. To address this, we introduce Intention-Informed
Auditory Scene Understanding (II-ASU) and present Auditory Attention-Driven LLM
(AAD-LLM), a prototype system that integrates brain signals to infer listener
attention. AAD-LLM extends an auditory LLM by incorporating intracranial
electroencephalography (iEEG) recordings to decode which speaker a listener is
attending to and refine responses accordingly. The model first predicts the
attended speaker from neural activity, then conditions response generation on
this inferred attentional state. We evaluate AAD-LLM on speaker description,
speech transcription and extraction, and question answering in multitalker
scenarios, with both objective and subjective ratings showing improved
alignment with listener intention. By taking a first step toward
intention-aware auditory AI, this work explores a new paradigm where listener
perception informs machine listening, paving the way for future
listener-centered auditory systems. Demo and code available:
https://aad-llm.github.io. | 5 | 67be86a98a5a80542314f16e | null | null |
|
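
The two-stage pipeline in the AAD-LLM abstract (decode the attended speaker from iEEG, then condition generation on that state) can be caricatured as follows. The feature shapes, the logistic-regression decoder, and the prompt-injection interface are all stand-in assumptions; the actual system is far richer.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Stage 1 (sketch): decode which of two speakers the listener attends to.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 64))     # hypothetical iEEG features
y_train = rng.integers(0, 2, size=200)   # attended-speaker labels
decoder = LogisticRegression(max_iter=1000).fit(X_train, y_train)
attended = int(decoder.predict(rng.normal(size=(1, 64)))[0])

# Stage 2 (sketch): condition the auditory LLM on the inferred attentional
# state, here by injecting it into the prompt (hypothetical interface).
prompt = f"[attended_speaker={attended}] Transcribe the attended speaker only."
```
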
2025-02-25T22:18:24.064000 | Unveiling Downstream Performance Scaling of LLMs: A Clustering-Based Perspective | 2 | {
"_id": "636b4d796e6981ebad73f398",
"avatarUrl": "/avatars/bcd405b98c12afaf1e32d85ad8ce7f23.svg",
"followerCount": null,
"fullname": "Kaiyuan Chen",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "Lucky2022",
"type": "user"
} | true | null | 2502.17262 | [
{
"_id": "67bd3870a917fc506d9f3d15",
"hidden": false,
"name": "Chengyin Xu",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-25T09:39:44.252Z",
"user": {
"_id": "66ab06956b8847339d449128",
"avatarUrl": "/avatars/d71490acb91981459121005b84e556d8.svg",
"fullname": "Xu Chengyin",
"isPro": false,
"type": "user",
"user": "JerryXu98"
}
},
{
"_id": "67bd3870a917fc506d9f3d16",
"hidden": false,
"name": "Kaiyuan Chen",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-25T09:40:01.532Z",
"user": {
"_id": "636b4d796e6981ebad73f398",
"avatarUrl": "/avatars/bcd405b98c12afaf1e32d85ad8ce7f23.svg",
"fullname": "Kaiyuan Chen",
"isPro": false,
"type": "user",
"user": "Lucky2022"
}
},
{
"_id": "67bd3870a917fc506d9f3d17",
"hidden": false,
"name": "Xiao Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bd3870a917fc506d9f3d18",
"hidden": false,
"name": "Ke Shen",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-25T09:39:49.578Z",
"user": {
"_id": "645604eebabbbbd3486dc615",
"avatarUrl": "/avatars/17a5ca8274e2bfc8f183a4af9878a930.svg",
"fullname": "shenke",
"isPro": false,
"type": "user",
"user": "shenke18"
}
},
{
"_id": "67bd3870a917fc506d9f3d19",
"hidden": false,
"name": "Chenggang Li",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-24T15:44:57 | Unveiling Downstream Performance Scaling of LLMs: A Clustering-Based
Perspective | The rapid advancements in computing dramatically increase the scale and cost
of training Large Language Models (LLMs). Accurately predicting downstream task
performance prior to model training is crucial for efficient resource
allocation, yet remains challenging due to two primary constraints: (1) the
"emergence phenomenon", wherein downstream performance metrics become
meaningful only after extensive training, which limits the ability to use
smaller models for prediction; (2) uneven task difficulty distributions and the
absence of consistent scaling laws, resulting in substantial metric
variability. Existing performance prediction methods suffer from limited
accuracy and reliability, thereby impeding the assessment of potential LLM
capabilities. To address these challenges, we propose a
Clustering-On-Difficulty (COD) downstream performance prediction framework. COD
first constructs a predictable support subset by clustering tasks based on
difficulty features, strategically excluding non-emergent and non-scalable
clusters. The scores on the selected subset serve as effective intermediate
predictors of downstream performance on the full evaluation set. With
theoretical support, we derive a mapping function that transforms performance
metrics from the predictable subset to the full evaluation set, thereby
ensuring accurate extrapolation of LLM downstream performance. The proposed
method has been applied to predict performance scaling for a 70B LLM, providing
actionable insights for training resource allocation and assisting in
monitoring the training process. Notably, COD achieves remarkable predictive
accuracy on the 70B LLM by leveraging an ensemble of small models,
demonstrating an absolute mean deviation of 1.36% across eight important LLM
evaluation benchmarks. | 18 | 67bd3872a917fc506d9f3d8f | null | null |
|
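
A toy rendering of the COD recipe above: cluster tasks on difficulty features, keep the predictable clusters, and map the subset score to a full-set estimate. The features, the cluster filter, and the linear mapping coefficients are invented for illustration; the paper derives its mapping theoretically.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
features = rng.uniform(size=(500, 6))  # hypothetical per-task difficulty features
scores = rng.uniform(size=500)         # hypothetical per-task scores

# Cluster tasks by difficulty, then keep the clusters whose scores behave
# most predictably (stand-in filter: the four lowest-variance clusters).
labels = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(features)
keep = sorted(range(8), key=lambda c: scores[labels == c].var())[:4]
mask = np.isin(labels, keep)

# The subset mean is the intermediate predictor; a fitted mapping (linear
# here, with made-up coefficients) extrapolates to the full evaluation set.
subset_score = scores[mask].mean()
a, b = 1.1, -0.05
full_set_estimate = a * subset_score + b
```
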
2025-02-25T22:04:57.351000 | SpargeAttn: Accurate Sparse Attention Accelerating Any Model Inference | 2 | {
"_id": "66c0a08bac74db25de8427ec",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/66c0a08bac74db25de8427ec/9D-piDBZqSt6KNkHImmkv.jpeg",
"followerCount": 3,
"fullname": "Jintao Zhang",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "jt-zhang",
"type": "user"
} | true | null | 2502.18137 | [
{
"_id": "67be8443ed8e258c0f70063a",
"hidden": false,
"name": "Jintao Zhang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-26T08:25:57.704Z",
"user": {
"_id": "66c0a08bac74db25de8427ec",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/66c0a08bac74db25de8427ec/9D-piDBZqSt6KNkHImmkv.jpeg",
"fullname": "Jintao Zhang",
"isPro": false,
"type": "user",
"user": "jt-zhang"
}
},
{
"_id": "67be8443ed8e258c0f70063b",
"hidden": false,
"name": "Chendong Xiang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-26T08:49:29.341Z",
"user": {
"_id": "6329bdbbde087eac2921e6a9",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1663679904323-noauth.jpeg",
"fullname": "Xiangchendong",
"isPro": false,
"type": "user",
"user": "Xiang-cd"
}
},
{
"_id": "67be8443ed8e258c0f70063c",
"hidden": false,
"name": "Haofeng Huang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67be8443ed8e258c0f70063d",
"hidden": false,
"name": "Jia Wei",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67be8443ed8e258c0f70063e",
"hidden": false,
"name": "Haocheng Xi",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-26T08:49:45.446Z",
"user": {
"_id": "65d5a000ec7e31555e4db57e",
"avatarUrl": "/avatars/aab8319fbaffdd53faff59a40ca5a5ea.svg",
"fullname": "Haocheng Xi",
"isPro": false,
"type": "user",
"user": "hxi0408"
}
},
{
"_id": "67be8443ed8e258c0f70063f",
"hidden": false,
"name": "Jun Zhu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67be8443ed8e258c0f700640",
"hidden": false,
"name": "Jianfei Chen",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-26T08:49:52.550Z",
"user": {
"_id": "65fcad0ba0d7adc40b54fac2",
"avatarUrl": "/avatars/7564b5642378fddb46ec3b5ae57c0402.svg",
"fullname": "Jianfei Chen",
"isPro": false,
"type": "user",
"user": "surfingtomchen"
}
}
] | 2025-02-25T12:02:17 | SpargeAttn: Accurate Sparse Attention Accelerating Any Model Inference | An efficient attention implementation is essential for large models due to
its quadratic time complexity. Fortunately, attention commonly exhibits
sparsity, i.e., many values in the attention map are near zero, allowing for
the omission of corresponding computations. Many studies have utilized the
sparse pattern to accelerate attention. However, most existing works focus on
optimizing attention within specific models by exploiting certain sparse
patterns of the attention map. A universal sparse attention that guarantees
both the speedup and end-to-end performance of diverse models remains elusive.
In this paper, we propose SpargeAttn, a universal sparse and quantized
attention for any model. Our method uses a two-stage online filter: in the
first stage, we rapidly and accurately predict the attention map, allowing
some matrix multiplications in attention to be skipped. In the second stage, we
design an online softmax-aware filter that incurs no extra overhead and further
skips some matrix multiplications. Experiments show that our method
significantly accelerates diverse models, including language, image, and video
generation, without sacrificing end-to-end metrics. The code is available at
https://github.com/thu-ml/SpargeAttn. | 50 | 67be8447ed8e258c0f70075f | null | null |
|
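
The first-stage filter in the SpargeAttn abstract (predict the attention map cheaply, then skip near-zero blocks) can be sketched with mean-pooled blocks. The block size, the pooling predictor, and the threshold are illustrative assumptions, and the softmax-aware second stage is omitted.

```python
import torch

def block_sparse_attention(q, k, v, block=16, tau=0.1):
    """Sketch of a first-stage online filter: estimate per-block attention
    mass from mean-pooled queries/keys and compute only promising blocks."""
    T, d = q.shape
    qb = q.view(T // block, block, d).mean(1)            # pooled query blocks
    kb = k.view(T // block, block, d).mean(1)            # pooled key blocks
    est = torch.softmax(qb @ kb.t() / d ** 0.5, dim=-1)  # coarse attention map
    keep = est > tau                                     # blocks worth computing
    out = torch.zeros_like(q)
    for i in range(T // block):
        cols = keep[i].nonzero().flatten()
        if cols.numel() == 0:
            continue  # entire row of blocks predicted near zero
        ks = torch.cat([k[c * block:(c + 1) * block] for c in cols])
        vs = torch.cat([v[c * block:(c + 1) * block] for c in cols])
        qi = q[i * block:(i + 1) * block]
        out[i * block:(i + 1) * block] = (
            torch.softmax(qi @ ks.t() / d ** 0.5, dim=-1) @ vs)
    return out

q, k, v = (torch.randn(64, 32) for _ in range(3))
y = block_sparse_attention(q, k, v)
```
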
2025-02-25T22:03:08.515000 | SWE-RL: Advancing LLM Reasoning via Reinforcement Learning on Open Software Evolution | 5 | {
"_id": "60f1abe7544c2adfd699860c",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"followerCount": 6280,
"fullname": "AK",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "akhaliq",
"type": "user"
} | false | null | 2502.18449 | [
{
"_id": "67be845a8a5a80542314579f",
"hidden": false,
"name": "Yuxiang Wei",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-26T08:50:44.837Z",
"user": {
"_id": "632a176259950c1d279d5ea7",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/632a176259950c1d279d5ea7/xsSGhBXalt9RaKzSKY8uk.jpeg",
"fullname": "Yuxiang Wei",
"isPro": false,
"type": "user",
"user": "yuxiang630"
}
},
{
"_id": "67be845a8a5a8054231457a0",
"hidden": false,
"name": "Olivier Duchenne",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67be845a8a5a8054231457a1",
"hidden": false,
"name": "Jade Copet",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-26T08:50:58.290Z",
"user": {
"_id": "6481e0ac50b759c75d5fdad0",
"avatarUrl": "/avatars/49f08d989ca505ae01bce5578a94f6fe.svg",
"fullname": "Jade Copet",
"isPro": false,
"type": "user",
"user": "JadeCopet"
}
},
{
"_id": "67be845a8a5a8054231457a2",
"hidden": false,
"name": "Quentin Carbonneaux",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67be845a8a5a8054231457a3",
"hidden": false,
"name": "Lingming Zhang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-26T08:51:10.640Z",
"user": {
"_id": "656f473c14fa8cfccd14559e",
"avatarUrl": "/avatars/8f4fef3d835a7a11c2ab66dbf04f3424.svg",
"fullname": "Lingming Zhang",
"isPro": false,
"type": "user",
"user": "lingming"
}
},
{
"_id": "67be845a8a5a8054231457a4",
"hidden": false,
"name": "Daniel Fried",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67be845a8a5a8054231457a5",
"hidden": false,
"name": "Gabriel Synnaeve",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-26T08:51:21.641Z",
"user": {
"_id": "630eac7931970d1cd4fbacf2",
"avatarUrl": "/avatars/b7ccbddfa745db854dc342be1327cd53.svg",
"fullname": "Gabriel Synnaeve",
"isPro": false,
"type": "user",
"user": "gsynnaeve"
}
},
{
"_id": "67be845a8a5a8054231457a6",
"hidden": false,
"name": "Rishabh Singh",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-26T08:51:28.321Z",
"user": {
"_id": "6597e5a6420dcc68501a69e9",
"avatarUrl": "/avatars/da48b13e07c367ecd5c891abfd6c3ded.svg",
"fullname": "Rishabh Singh",
"isPro": false,
"type": "user",
"user": "RishabhSingh021"
}
},
{
"_id": "67be845a8a5a8054231457a7",
"hidden": false,
"name": "Sida I. Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-25T18:45:04 | SWE-RL: Advancing LLM Reasoning via Reinforcement Learning on Open
Software Evolution | The recent DeepSeek-R1 release has demonstrated the immense potential of
reinforcement learning (RL) in enhancing the general reasoning capabilities of
large language models (LLMs). While DeepSeek-R1 and other follow-up work
primarily focus on applying RL to competitive coding and math problems, this
paper introduces SWE-RL, the first approach to scale RL-based LLM reasoning for
real-world software engineering. Leveraging a lightweight rule-based reward
(e.g., the similarity score between ground-truth and LLM-generated solutions),
SWE-RL enables LLMs to autonomously recover a developer's reasoning processes
and solutions by learning from extensive open-source software evolution data --
the record of a software's entire lifecycle, including its code snapshots, code
changes, and events such as issues and pull requests. Trained on top of Llama
3, our resulting reasoning model, Llama3-SWE-RL-70B, achieves a 41.0% solve
rate on SWE-bench Verified -- a human-verified collection of real-world GitHub
issues. To our knowledge, this is the best performance reported for
medium-sized (<100B) LLMs to date, even comparable to leading proprietary LLMs
like GPT-4o. Surprisingly, despite performing RL solely on software evolution
data, Llama3-SWE-RL has even developed generalized reasoning skills. For
example, it shows improved results on five out-of-domain tasks, namely,
function coding, library use, code reasoning, mathematics, and general language
understanding, whereas a supervised-finetuning baseline even leads to
performance degradation on average. Overall, SWE-RL opens up a new direction to
improve the reasoning capabilities of LLMs through reinforcement learning on
massive software engineering data. | 61 | 67be845b8a5a8054231457d6 | null | null |
|
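
The "lightweight rule-based reward" mentioned above is a similarity score between the generated and ground-truth solutions. A sketch with a plain sequence-similarity ratio follows; the paper's exact similarity function and any format penalties are not reproduced here.

```python
import difflib

def patch_similarity_reward(predicted_patch: str, oracle_patch: str) -> float:
    """Rule-based reward sketch: sequence similarity between the model's
    patch and the ground-truth patch, in [0, 1]."""
    return difflib.SequenceMatcher(None, predicted_patch, oracle_patch).ratio()

reward = patch_similarity_reward(
    "-    return x\n+    return x + 1\n",
    "-    return x\n+    return x + 1  # fix off-by-one\n",
)
print(f"reward = {reward:.2f}")  # close patches score near 1
```
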
2025-02-25T22:01:56.532000 | OmniAlign-V: Towards Enhanced Alignment of MLLMs with Human Preference | 2 | {
"_id": "6530e62f536dbca918e71c3e",
"avatarUrl": "/avatars/efc93bc767e561c6c6d429f65c23382d.svg",
"followerCount": 4,
"fullname": "Xiangyu Z",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "PhoenixZ",
"type": "user"
} | true | null | 2502.18411 | [
{
"_id": "67be834ae7b05f9e43b172b2",
"hidden": false,
"name": "Xiangyu Zhao",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-26T08:26:02.247Z",
"user": {
"_id": "6530e62f536dbca918e71c3e",
"avatarUrl": "/avatars/efc93bc767e561c6c6d429f65c23382d.svg",
"fullname": "Xiangyu Z",
"isPro": false,
"type": "user",
"user": "PhoenixZ"
}
},
{
"_id": "67be834ae7b05f9e43b172b3",
"hidden": false,
"name": "Shengyuan Ding",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-26T08:25:59.887Z",
"user": {
"_id": "646cd947da8e99940b6e55cf",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/646cd947da8e99940b6e55cf/9c0P0WppFqNW9pdo8LgOS.jpeg",
"fullname": "Shengyuan Ding",
"isPro": false,
"type": "user",
"user": "ChrisDing1105"
}
},
{
"_id": "67be834ae7b05f9e43b172b4",
"hidden": false,
"name": "Zicheng Zhang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-26T08:49:10.028Z",
"user": {
"_id": "675aa937ab6aa7ecd09341ce",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/no-auth/d_CNUsNOw92pg7MVhf9Vm.png",
"fullname": "Zicheng Zhang",
"isPro": false,
"type": "user",
"user": "UniverseCA"
}
},
{
"_id": "67be834ae7b05f9e43b172b5",
"hidden": false,
"name": "Haian Huang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67be834ae7b05f9e43b172b6",
"hidden": false,
"name": "Maosong Cao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67be834ae7b05f9e43b172b7",
"hidden": false,
"name": "Weiyun Wang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-26T08:48:45.520Z",
"user": {
"_id": "619507e7b74b6c591f794340",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/619507e7b74b6c591f794340/JbPDoy6Ko1V1-6oJJwFV8.jpeg",
"fullname": "Weiyun Wang",
"isPro": false,
"type": "user",
"user": "Weiyun1025"
}
},
{
"_id": "67be834ae7b05f9e43b172b8",
"hidden": false,
"name": "Jiaqi Wang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-26T08:48:38.876Z",
"user": {
"_id": "64638c4d51fa6e63060521b5",
"avatarUrl": "/avatars/c863ace5b1dc788a341bcf4ddbdfaec1.svg",
"fullname": "JIaqi",
"isPro": false,
"type": "user",
"user": "Jiaqiwang"
}
},
{
"_id": "67be834ae7b05f9e43b172b9",
"hidden": false,
"name": "Xinyu Fang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-26T08:26:04.433Z",
"user": {
"_id": "64f5f8dd9b17cd59c453c57f",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/64f5f8dd9b17cd59c453c57f/MulhwLcePFUWUQel8LQZ8.jpeg",
"fullname": "Xinyu Fang",
"isPro": false,
"type": "user",
"user": "nebulae09"
}
},
{
"_id": "67be834ae7b05f9e43b172ba",
"hidden": false,
"name": "Wenhai Wang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-26T08:48:28.151Z",
"user": {
"_id": "64d1c560c0c627dfa71bdbe0",
"avatarUrl": "/avatars/f42794fe25bffcd870a1bcee69b95298.svg",
"fullname": "wenhai.wang",
"isPro": false,
"type": "user",
"user": "wangwhcore"
}
},
{
"_id": "67be834ae7b05f9e43b172bb",
"hidden": false,
"name": "Guangtao Zhai",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67be834ae7b05f9e43b172bc",
"hidden": false,
"name": "Haodong Duan",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-26T08:48:20.155Z",
"user": {
"_id": "63ee1379190ddd6214efd73a",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1676546883247-noauth.png",
"fullname": "HAODONG DUAN",
"isPro": false,
"type": "user",
"user": "KennyUTC"
}
},
{
"_id": "67be834ae7b05f9e43b172bd",
"hidden": false,
"name": "Hua Yang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67be834ae7b05f9e43b172be",
"hidden": false,
"name": "Kai Chen",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-25T18:05:14 | OmniAlign-V: Towards Enhanced Alignment of MLLMs with Human Preference | Recent advancements in open-source multi-modal large language models (MLLMs)
have primarily focused on enhancing foundational capabilities, leaving a
significant gap in human preference alignment. This paper introduces
OmniAlign-V, a comprehensive dataset of 200K high-quality training samples
featuring diverse images, complex questions, and varied response formats to
improve MLLMs' alignment with human preferences. We also present MM-AlignBench,
a human-annotated benchmark specifically designed to evaluate MLLMs' alignment
with human values. Experimental results show that finetuning MLLMs with
OmniAlign-V, using Supervised Fine-Tuning (SFT) or Direct Preference
Optimization (DPO), significantly enhances human preference alignment while
maintaining or enhancing performance on standard VQA benchmarks, preserving
their fundamental capabilities. Our datasets, benchmark, code and checkpoints
have been released at https://github.com/PhoenixZ810/OmniAlign-V. | 67 | 67be834ce7b05f9e43b1730a | null | null |
|
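
Of the two finetuning routes named above, DPO has a compact standard objective, sketched below from summed token log-probabilities of the chosen and rejected responses; beta and the example inputs are illustrative.

```python
import torch
import torch.nn.functional as F

def dpo_loss(logp_chosen, logp_rejected, ref_chosen, ref_rejected, beta=0.1):
    """Standard DPO objective: increase the policy's preference margin for
    the chosen response relative to a frozen reference model."""
    margin = (logp_chosen - ref_chosen) - (logp_rejected - ref_rejected)
    return -F.logsigmoid(beta * margin).mean()

loss = dpo_loss(torch.tensor([-12.0]), torch.tensor([-15.0]),
                torch.tensor([-13.0]), torch.tensor([-14.0]))
```
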
2025-02-25T21:50:19.941000 | ART: Anonymous Region Transformer for Variable Multi-Layer Transparent Image Generation | 4 | {
"_id": "646f69a6041e48e1c4728de3",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/646f69a6041e48e1c4728de3/U5OaW6PgsXTXnfG03xs9Q.png",
"followerCount": 34,
"fullname": "GlyphByT5",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "GlyphByT5",
"type": "user"
} | false | null | 2502.18364 | [
{
"_id": "67be81414084d82ee69ad4a2",
"hidden": false,
"name": "Yifan Pu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-26T08:58:24.942Z",
"user": {
"_id": "647e83257f9ad5e44babe82a",
"avatarUrl": "/avatars/2d9593775c49856fe5dfa5bd23dfcda7.svg",
"fullname": "yifan pu",
"isPro": false,
"type": "user",
"user": "yifanpu001"
}
},
{
"_id": "67be81414084d82ee69ad4a3",
"hidden": false,
"name": "Yiming Zhao",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-27T09:17:43.946Z",
"user": {
"_id": "637a2be47ce76c3b8347aae2",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/637a2be47ce76c3b8347aae2/rQdt1j35MA2OXJnkGQfHJ.jpeg",
"fullname": "Yiming Zhao",
"isPro": false,
"type": "user",
"user": "ZYMPKU"
}
},
{
"_id": "67be81414084d82ee69ad4a4",
"hidden": false,
"name": "Zhicong Tang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67be81414084d82ee69ad4a5",
"hidden": false,
"name": "Ruihong Yin",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67be81414084d82ee69ad4a6",
"hidden": false,
"name": "Haoxing Ye",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-26T08:58:52.821Z",
"user": {
"_id": "65229f2f6b01183a67e86370",
"avatarUrl": "/avatars/b218207fce28497b30e22c807d44b2d2.svg",
"fullname": "Haoxing Ye",
"isPro": false,
"type": "user",
"user": "131131yhx"
}
},
{
"_id": "67be81414084d82ee69ad4a7",
"hidden": false,
"name": "Yuhui Yuan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67be81414084d82ee69ad4a8",
"hidden": false,
"name": "Dong Chen",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-26T08:59:16.526Z",
"user": {
"_id": "666470a28f5513b0cf11e850",
"avatarUrl": "/avatars/7beea758882677ad32a12ce56d4d084a.svg",
"fullname": "Dong Chen",
"isPro": false,
"type": "user",
"user": "DongChen06"
}
},
{
"_id": "67be81414084d82ee69ad4a9",
"hidden": false,
"name": "Jianmin Bao",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-26T08:59:22.654Z",
"user": {
"_id": "646b2f4bb1202bc77c0fb396",
"avatarUrl": "/avatars/6b09dec5d5affe817ad6acda60f61740.svg",
"fullname": "Jianmin_bao",
"isPro": false,
"type": "user",
"user": "JianminBao"
}
},
{
"_id": "67be81414084d82ee69ad4aa",
"hidden": false,
"name": "Sirui Zhang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-26T08:59:30.766Z",
"user": {
"_id": "64f7f119a92703ef65d9a717",
"avatarUrl": "/avatars/118524faab66cecba6d4da622034b44b.svg",
"fullname": "Sirui Zhang",
"isPro": false,
"type": "user",
"user": "zsr200901"
}
},
{
"_id": "67be81414084d82ee69ad4ab",
"hidden": false,
"name": "Yanbin Wang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-26T08:59:38.138Z",
"user": {
"_id": "67965a5a9f57883759a6efc3",
"avatarUrl": "/avatars/9138a879fbe1f60c2f4720810bfdfda6.svg",
"fullname": "Yanbin Wang",
"isPro": false,
"type": "user",
"user": "yanbinwang"
}
},
{
"_id": "67be81414084d82ee69ad4ac",
"hidden": false,
"name": "Lin Liang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67be81414084d82ee69ad4ad",
"hidden": false,
"name": "Lijuan Wang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-26T09:00:05.520Z",
"user": {
"_id": "6672e20d1dbdf7da8310dd92",
"avatarUrl": "/avatars/5d2fb23f92a7f9ff025a5be17a26de4d.svg",
"fullname": "lijuanwang",
"isPro": false,
"type": "user",
"user": "lijuanwang228"
}
},
{
"_id": "67be81414084d82ee69ad4ae",
"hidden": false,
"name": "Ji Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67be81414084d82ee69ad4af",
"hidden": false,
"name": "Xiu Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67be81414084d82ee69ad4b0",
"hidden": false,
"name": "Zhouhui Lian",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-26T08:59:57.943Z",
"user": {
"_id": "64c882f7527d7636555bbb2c",
"avatarUrl": "/avatars/578a118a945dd6fa62fd3be9d6e4e986.svg",
"fullname": "Zhouhui Lian",
"isPro": false,
"type": "user",
"user": "lianzhouhui"
}
},
{
"_id": "67be81414084d82ee69ad4b1",
"hidden": false,
"name": "Gao Huang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67be81414084d82ee69ad4b2",
"hidden": false,
"name": "Baining Guo",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-25T16:57:04 | ART: Anonymous Region Transformer for Variable Multi-Layer Transparent
Image Generation | Multi-layer image generation is a fundamental task that enables users to
isolate, select, and edit specific image layers, thereby revolutionizing
interactions with generative models. In this paper, we introduce the Anonymous
Region Transformer (ART), which facilitates the direct generation of variable
multi-layer transparent images based on a global text prompt and an anonymous
region layout. Inspired by Schema theory, which suggests that knowledge is
organized in frameworks (schemas) that enable people to interpret and learn
from new information by linking it to prior knowledge, this anonymous region
layout
allows the generative model to autonomously determine which set of visual
tokens should align with which text tokens, which is in contrast to the
previously dominant semantic layout for the image generation task. In addition,
the layer-wise region crop mechanism, which only selects the visual tokens
belonging to each anonymous region, significantly reduces attention computation
costs and enables the efficient generation of images with numerous distinct
layers (e.g., 50+). When compared to the full attention approach, our method is
over 12 times faster and exhibits fewer layer conflicts. Furthermore, we
propose a high-quality multi-layer transparent image autoencoder that supports
the direct encoding and decoding of the transparency of variable multi-layer
images in a joint manner. By enabling precise control and scalable layer
generation, ART establishes a new paradigm for interactive content creation. | 32 | 67be81464084d82ee69ad576 | null | null |
|
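
The layer-wise region crop described above restricts attention to each anonymous region's own visual tokens, which is why cost stays manageable even with 50+ layers. A single-head, unprojected caricature, with shapes and region assignment invented for illustration:

```python
import torch

def region_crop_attention(tokens, region_ids):
    """Sketch: each anonymous region attends only to its own tokens, so
    attention cost scales with region size, not total token count."""
    d = tokens.shape[-1]
    out = torch.zeros_like(tokens)
    for r in region_ids.unique():
        idx = (region_ids == r).nonzero().flatten()
        x = tokens[idx]
        attn = torch.softmax(x @ x.t() / d ** 0.5, dim=-1)
        out[idx] = attn @ x
    return out

tokens = torch.randn(100, 32)
region_ids = torch.randint(0, 5, (100,))  # 5 anonymous regions
y = region_crop_attention(tokens, region_ids)
```
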
2025-02-25T21:36:19.851000 | KV-Edit: Training-Free Image Editing for Precise Background Preservation | 3 | {
"_id": "66078994c50f8393c56ed837",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/aYYde45zaFACRllyEhJyU.jpeg",
"followerCount": 3,
"fullname": "Tianrui Zhu",
"isHf": false,
"isMod": false,
"isPro": true,
"name": "xilluill",
"type": "user"
} | true | null | 2502.17363 | [
{
"_id": "67bd6d2bbf6d46017e619f31",
"hidden": false,
"name": "Tianrui Zhu",
"status": "extracted_confirmed",
"statusLastChangedAt": "2025-02-25T07:24:35.845Z",
"user": {
"_id": "66078994c50f8393c56ed837",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/aYYde45zaFACRllyEhJyU.jpeg",
"fullname": "Tianrui Zhu",
"isPro": true,
"type": "user",
"user": "xilluill"
}
},
{
"_id": "67bd6d2bbf6d46017e619f32",
"hidden": false,
"name": "Shiyi Zhang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-26T08:30:48.262Z",
"user": {
"_id": "6315d306a9456afe2b9bf34a",
"avatarUrl": "/avatars/7285b4e7d84b528d1a50f8ee4eb10727.svg",
"fullname": "ElevenZ",
"isPro": false,
"type": "user",
"user": "shiyi0408"
}
},
{
"_id": "67bd6d2bbf6d46017e619f33",
"hidden": false,
"name": "Jiawei Shao",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-26T08:50:09.030Z",
"user": {
"_id": "646c6985d072747f7ebf352a",
"avatarUrl": "/avatars/8aaf92045687b21b56c257db62bf4fa5.svg",
"fullname": "Jiawei Shao",
"isPro": false,
"type": "user",
"user": "jewelshaw"
}
},
{
"_id": "67bd6d2bbf6d46017e619f34",
"hidden": false,
"name": "Yansong Tang",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-24T17:40:09 | KV-Edit: Training-Free Image Editing for Precise Background Preservation | Background consistency remains a significant challenge in image editing
tasks. Despite extensive developments, existing works still face a trade-off
between maintaining similarity to the original image and generating content
that aligns with the target. Here, we propose KV-Edit, a training-free approach
that uses KV cache in DiTs to maintain background consistency, where background
tokens are preserved rather than regenerated, eliminating the need for complex
mechanisms or expensive training, ultimately generating new content that
seamlessly integrates with the background within user-provided regions. We
further explore the memory consumption of the KV cache during editing and
optimize the space complexity to O(1) using an inversion-free method. Our
approach is compatible with any DiT-based generative model without additional
training. Experiments demonstrate that KV-Edit significantly outperforms
existing approaches in terms of both background and image quality, even
surpassing training-based methods. Project webpage is available at
https://xilluill.github.io/projectpages/KV-Edit | 32 | 67bd6d2dbf6d46017e619f99 | null | null |
|
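
KV-Edit's core move, preserving background keys/values from the source image and recomputing only the edited region, can be sketched as below. The `Attn` module, its projection names, and the single-head math are assumptions standing in for a DiT attention block.

```python
import torch
import torch.nn as nn

class Attn(nn.Module):
    def __init__(self, d):
        super().__init__()
        self.to_q = nn.Linear(d, d)
        self.to_k = nn.Linear(d, d)
        self.to_v = nn.Linear(d, d)

def kv_edit_step(attn, tokens, fg_mask, cached_k, cached_v):
    """Background K/V come from the cache built on the source image; only
    foreground (edited) tokens are recomputed, so the background is
    preserved rather than regenerated."""
    q = attn.to_q(tokens)
    k, v = cached_k.clone(), cached_v.clone()
    k[fg_mask] = attn.to_k(tokens[fg_mask])  # refresh foreground keys only
    v[fg_mask] = attn.to_v(tokens[fg_mask])
    w = torch.softmax(q @ k.t() / q.shape[-1] ** 0.5, dim=-1)
    return w @ v

attn = Attn(32)
tokens = torch.randn(64, 32)
fg = torch.zeros(64, dtype=torch.bool)
fg[:16] = True                                 # user-provided edit region
k0, v0 = attn.to_k(tokens), attn.to_v(tokens)  # cached on the source pass
out = kv_edit_step(attn, tokens, fg, k0, v0)
```
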
2025-02-25T19:35:42.726000 | MutaGReP: Execution-Free Repository-Grounded Plan Search for Code-Use | 2 | {
"_id": "6301c3e0a123c93a5fb295ff",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1661060051926-noauth.jpeg",
"followerCount": null,
"fullname": "Zaid Khan",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "codezakh",
"type": "user"
} | true | [
"https://cdn-uploads.huggingface.co/production/uploads/6301c3e0a123c93a5fb295ff/okGV09FjfhO7T3uVDYjte.qt"
] | 2502.15872 | [
{
"_id": "67be572f65ae638b17d35eae",
"hidden": false,
"name": "Zaid Khan",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-26T08:27:57.798Z",
"user": {
"_id": "6301c3e0a123c93a5fb295ff",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1661060051926-noauth.jpeg",
"fullname": "Zaid Khan",
"isPro": false,
"type": "user",
"user": "codezakh"
}
},
{
"_id": "67be572f65ae638b17d35eaf",
"hidden": false,
"name": "Ali Farhadi",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67be572f65ae638b17d35eb0",
"hidden": false,
"name": "Ranjay Krishna",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67be572f65ae638b17d35eb1",
"hidden": false,
"name": "Luca Weihs",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67be572f65ae638b17d35eb2",
"hidden": false,
"name": "Mohit Bansal",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67be572f65ae638b17d35eb3",
"hidden": false,
"name": "Tanmay Gupta",
"status": "extracted_pending",
"statusLastChangedAt": "2025-02-25T23:50:08.896Z",
"user": {
"_id": "62a108242e30aaf94ec283bb",
"avatarUrl": "/avatars/4da19ae99df6a3fa2d81336eb59cdaa7.svg",
"fullname": "Tanmay Gupta",
"isPro": false,
"type": "user",
"user": "tanmayg"
}
}
] | 2025-02-21T18:58:17 | MutaGReP: Execution-Free Repository-Grounded Plan Search for Code-Use | When a human requests an LLM to complete a coding task using functionality
from a large code repository, how do we provide context from the repo to the
LLM? One approach is to add the entire repo to the LLM's context window.
However, most tasks involve only a fraction of the symbols in a repo, longer
contexts are detrimental to the LLM's reasoning abilities, and context windows
are not unlimited. Alternatively, we could emulate the human ability to
navigate a large repo, pick out the right functionality, and form a plan to
solve the task. We propose MutaGReP (Mutation-guided Grounded Repository Plan
Search), an approach to search for plans that decompose a user request into
natural language steps grounded in the codebase. MutaGReP performs neural tree
search in plan space, exploring by mutating plans and using a symbol retriever
for grounding. On the challenging LongCodeArena benchmark, our plans use less
than 5% of the 128K context window for GPT-4o but rival the coding performance
of GPT-4o with a context window filled with the repo. Plans produced by
MutaGReP allow Qwen 2.5 Coder 32B and 72B to match the performance of GPT-4o
with full repo context and enable progress on the hardest LongCodeArena tasks.
Project page: zaidkhan.me/MutaGReP | 4 | 67be573165ae638b17d35f24 | null | null |
|
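
The abstract above describes best-first search in plan space with mutation and retriever-based grounding. A skeleton follows; `mutate` and `ground_score` are hypothetical callables standing in for the LLM mutator and the symbol retriever.

```python
import heapq
import itertools

def plan_search(seed_plan, mutate, ground_score, budget=50, beam=4):
    """Best-first sketch: pop the most promising plan, propose mutated
    variants, and score each by how well its steps ground in the repo."""
    tick = itertools.count()  # heap tie-breaker
    frontier = [(-ground_score(seed_plan), next(tick), seed_plan)]
    best, best_score = seed_plan, ground_score(seed_plan)
    for _ in range(budget):
        if not frontier:
            break
        neg, _, plan = heapq.heappop(frontier)
        if -neg > best_score:
            best, best_score = plan, -neg
        for child in mutate(plan, n=beam):
            heapq.heappush(frontier, (-ground_score(child), next(tick), child))
    return best

# Toy stand-ins: plans are lists of steps; the score rewards grounded steps.
score = lambda plan: sum(s.startswith("use ") for s in plan) / (1 + len(plan))
mutate = lambda plan, n: [plan + [f"use symbol_{i}"] for i in range(n)]
print(plan_search(["parse the user request"], mutate, score, budget=10))
```
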
2025-02-25T17:06:48.440000 | Pandora3D: A Comprehensive Framework for High-Quality 3D Shape and Texture Generation | 2 | {
"_id": "6444d87e5691ca69b0d8f56a",
"avatarUrl": "/avatars/78d4d2b36d629a8e6ad833e102bb86f7.svg",
"followerCount": 1,
"fullname": "Peter Ji",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "peterji",
"type": "user"
} | false | null | 2502.14247 | [
{
"_id": "67be3ebc1c80786468704721",
"hidden": false,
"name": "Jiayu Yang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67be3ebc1c80786468704722",
"hidden": false,
"name": "Taizhang Shang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67be3ebc1c80786468704723",
"hidden": false,
"name": "Weixuan Sun",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67be3ebc1c80786468704724",
"hidden": false,
"name": "Xibin Song",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67be3ebc1c80786468704725",
"hidden": false,
"name": "Ziang Cheng",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67be3ebc1c80786468704726",
"hidden": false,
"name": "Senbo Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67be3ebc1c80786468704727",
"hidden": false,
"name": "Shenzhou Chen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67be3ebc1c80786468704728",
"hidden": false,
"name": "Weizhe Liu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67be3ebc1c80786468704729",
"hidden": false,
"name": "Hongdong Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67be3ebc1c8078646870472a",
"hidden": false,
"name": "Pan Ji",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-20T04:22:30 | Pandora3D: A Comprehensive Framework for High-Quality 3D Shape and
Texture Generation | This report presents a comprehensive framework for generating high-quality 3D
shapes and textures from diverse input prompts, including single images,
multi-view images, and text descriptions. The framework consists of 3D shape
generation and texture generation. (1) The 3D shape generation pipeline
employs a Variational Autoencoder (VAE) to encode implicit 3D geometries into a
latent space and a diffusion network to generate latents conditioned on input
prompts, with modifications to enhance model capacity. An alternative
Artist-Created Mesh (AM) generation approach is also explored, yielding
promising results for simpler geometries. (2) Texture generation involves a
multi-stage process starting with frontal image generation, followed by
multi-view image generation, RGB-to-PBR texture conversion, and
high-resolution multi-view texture refinement. A consistency scheduler is
plugged into every stage, to enforce pixel-wise consistency among multi-view
textures during inference, ensuring seamless integration.
The pipeline demonstrates effective handling of diverse input formats,
leveraging advanced neural architectures and novel methodologies to produce
high-quality 3D content. This report details the system architecture,
experimental results, and potential future directions to improve and expand the
framework. The source code and pretrained weights are released at:
https://github.com/Tencent/Tencent-XR-3DGen. | 5 | 67be3ec21c80786468704886 | null | null |
|
2025-02-25T16:46:31.986000 | Mind the Gap! Static and Interactive Evaluations of Large Audio Models | 2 | {
"_id": "632116accafe12f481a473cb",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1666207676653-632116accafe12f481a473cb.jpeg",
"followerCount": 16,
"fullname": "Will Held",
"isHf": false,
"isMod": false,
"isPro": true,
"name": "WillHeld",
"type": "user"
} | true | [
"https://cdn-uploads.huggingface.co/production/uploads/632116accafe12f481a473cb/ltGJsrWtFi6_K4qlGLiIX.png"
] | 2502.15919 | [
{
"_id": "67be33ffe30b2f126c599413",
"hidden": false,
"name": "Minzhi Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67be33ffe30b2f126c599414",
"hidden": false,
"name": "William Barr Held",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-26T08:28:38.756Z",
"user": {
"_id": "632116accafe12f481a473cb",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1666207676653-632116accafe12f481a473cb.jpeg",
"fullname": "Will Held",
"isPro": true,
"type": "user",
"user": "WillHeld"
}
},
{
"_id": "67be33ffe30b2f126c599415",
"hidden": false,
"name": "Michael J Ryan",
"status": "claimed_verified",
"statusLastChangedAt": "2025-03-04T08:52:35.827Z",
"user": {
"_id": "63878fa2e40346f68ede7fc4",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63878fa2e40346f68ede7fc4/MOdQbApvhwpVzPo5FWLVV.jpeg",
"fullname": "Michael Ryan",
"isPro": false,
"type": "user",
"user": "MichaelR207"
}
},
{
"_id": "67be33ffe30b2f126c599416",
"hidden": false,
"name": "Kunat Pipatanakul",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67be33ffe30b2f126c599417",
"hidden": false,
"name": "Potsawee Manakul",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67be33ffe30b2f126c599418",
"hidden": false,
"name": "Hao Zhu",
"status": "claimed_verified",
"statusLastChangedAt": "2025-03-03T16:07:29.874Z",
"user": {
"_id": "61aa376688c20eebf1e8deb3",
"avatarUrl": "/avatars/7c11dcb232c73547d7d87834be287822.svg",
"fullname": "Hao Zhu",
"isPro": false,
"type": "user",
"user": "ProKil"
}
},
{
"_id": "67be33ffe30b2f126c599419",
"hidden": false,
"name": "Diyi Yang",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-21T20:29:02 | Mind the Gap! Static and Interactive Evaluations of Large Audio Models | As AI chatbots become ubiquitous, voice interaction presents a compelling way
to enable rapid, high-bandwidth communication for both semantic and social
signals. This has driven research into Large Audio Models (LAMs) to power
voice-native experiences. However, aligning LAM development with user goals
requires a clear understanding of user needs and preferences to establish
reliable progress metrics. This study addresses these challenges by introducing
an interactive approach to evaluate LAMs and collecting 7,500 LAM interactions
from 484 participants. Through topic modeling of user queries, we identify
primary use cases for audio interfaces. We then analyze user preference
rankings and qualitative feedback to determine which models best align with
user needs. Finally, we evaluate how static benchmarks predict interactive
performance - our analysis reveals no individual benchmark strongly correlates
with interactive results (τ ≤ 0.33 for all benchmarks). While combining
multiple coarse-grained features yields modest predictive power (R² = 0.30),
only two out of twenty datasets on spoken question answering and age prediction
show significantly positive correlations. This suggests a clear need to develop
LAM evaluations that better correlate with user preferences. | 3 | 67be3400e30b2f126c599503 | null | null |
|
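
The τ ≤ 0.33 figure above is a rank correlation between static benchmark scores and interactive results. A sketch of how such a check is computed, with invented per-model numbers:

```python
from scipy.stats import kendalltau

# Hypothetical per-model results: a static benchmark score vs. an
# interactive preference win rate for five Large Audio Models.
static_scores = [62.1, 58.4, 71.0, 49.3, 66.5]
interactive_win_rates = [0.48, 0.55, 0.61, 0.42, 0.50]

tau, p_value = kendalltau(static_scores, interactive_win_rates)
print(f"Kendall tau = {tau:.2f} (p = {p_value:.2f})")
```
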
2025-02-25T12:50:27.642000 | Self-Taught Agentic Long Context Understanding | 2 | {
"_id": "6438ccbb3b46237de3d052e8",
"avatarUrl": "/avatars/baa624d417b0b905e82127dc66346478.svg",
"followerCount": 9,
"fullname": "Yufan Zhuang",
"isHf": false,
"isMod": false,
"isPro": true,
"name": "yzhuang",
"type": "user"
} | true | null | 2502.15920 | [
{
"_id": "67bd1ae085a048f74fd1b8ec",
"hidden": false,
"name": "Yufan Zhuang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-26T08:38:05.314Z",
"user": {
"_id": "6438ccbb3b46237de3d052e8",
"avatarUrl": "/avatars/baa624d417b0b905e82127dc66346478.svg",
"fullname": "Yufan Zhuang",
"isPro": true,
"type": "user",
"user": "yzhuang"
}
},
{
"_id": "67bd1ae085a048f74fd1b8ed",
"hidden": false,
"name": "Xiaodong Yu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bd1ae085a048f74fd1b8ee",
"hidden": false,
"name": "Jialian Wu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bd1ae085a048f74fd1b8ef",
"hidden": false,
"name": "Ximeng Sun",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bd1ae085a048f74fd1b8f0",
"hidden": false,
"name": "Ze Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bd1ae085a048f74fd1b8f1",
"hidden": false,
"name": "Jiang Liu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bd1ae085a048f74fd1b8f2",
"hidden": false,
"name": "Yusheng Su",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bd1ae085a048f74fd1b8f3",
"hidden": false,
"name": "Jingbo Shang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bd1ae085a048f74fd1b8f4",
"hidden": false,
"name": "Zicheng Liu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bd1ae085a048f74fd1b8f5",
"hidden": false,
"name": "Emad Barsoum",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-21T20:29:36 | Self-Taught Agentic Long Context Understanding | Answering complex, long-context questions remains a major challenge for large
language models (LLMs) as it requires effective question clarifications and
context retrieval. We propose Agentic Long-Context Understanding (AgenticLU), a
framework designed to enhance an LLM's understanding of such queries by
integrating targeted self-clarification with contextual grounding within an
agentic workflow. At the core of AgenticLU is Chain-of-Clarifications (CoC),
where models refine their understanding through self-generated clarification
questions and corresponding contextual groundings. By scaling inference as a
tree search where each node represents a CoC step, we achieve 97.8% answer
recall on NarrativeQA with a search depth of up to three and a branching factor
of eight. To amortize the high cost of this search process into training, we
leverage the preference pairs for each step obtained from the CoC workflow and
perform two-stage model finetuning: (1) supervised finetuning to learn
effective decomposition strategies, and (2) direct preference optimization to
enhance reasoning quality. This enables AgenticLU models to generate
clarifications and retrieve relevant context effectively and efficiently in a
single inference pass. Extensive experiments across seven long-context tasks
demonstrate that AgenticLU significantly outperforms state-of-the-art prompting
methods and specialized long-context LLMs, achieving robust multi-hop reasoning
while sustaining consistent performance as context length grows. | 2 | 67bd1ae385a048f74fd1b9ba | null | null |
|
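An illustrative sketch (not the authors' released code) of the Chain-of-Clarifications tree search from the AgenticLU abstract above: each node is one self-clarification plus grounding step, expanded up to a branching factor and depth limit. Here `clarify`, `ground`, and `answerable` are hypothetical stand-ins for LLM calls.

```python
from dataclasses import dataclass, field

@dataclass
class CoCNode:
    question: str
    context: str
    children: list = field(default_factory=list)

def coc_search(node, clarify, ground, answerable, depth=3, branching=8):
    """Return the first node whose grounded context suffices to answer."""
    if answerable(node.question, node.context):
        return node
    if depth == 0:
        return None
    for sub_q in clarify(node.question, node.context, n=branching):
        child = CoCNode(sub_q, node.context + "\n" + ground(sub_q))
        node.children.append(child)
        hit = coc_search(child, clarify, ground, answerable, depth - 1, branching)
        if hit is not None:
            return hit
    return None
```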
2025-02-25T12:26:42.547000 | Grounded Persuasive Language Generation for Automated Marketing | 3 | {
"_id": "61aa376688c20eebf1e8deb3",
"avatarUrl": "/avatars/7c11dcb232c73547d7d87834be287822.svg",
"followerCount": 7,
"fullname": "Hao Zhu",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "ProKil",
"type": "user"
} | true | null | 2502.16810 | [
{
"_id": "67bdfc14c45e6063fed00c43",
"hidden": false,
"name": "Jibang Wu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bdfc14c45e6063fed00c44",
"hidden": false,
"name": "Chenghao Yang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-26T08:28:53.841Z",
"user": {
"_id": "62fb49bafcce44435d7e079a",
"avatarUrl": "/avatars/116cb4371206ee7010e161c986b09e85.svg",
"fullname": "Chenghao Yang",
"isPro": false,
"type": "user",
"user": "chromeNLP"
}
},
{
"_id": "67bdfc14c45e6063fed00c45",
"hidden": false,
"name": "Simon Mahns",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-28T12:15:59.842Z",
"user": {
"_id": "65c5898e469efddc1c54b873",
"avatarUrl": "/avatars/25e19bb239eee9fbc0ca48119891c5a8.svg",
"fullname": "simon",
"isPro": false,
"type": "user",
"user": "smahns"
}
},
{
"_id": "67bdfc14c45e6063fed00c46",
"hidden": false,
"name": "Chaoqi Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bdfc14c45e6063fed00c47",
"hidden": false,
"name": "Hao Zhu",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-25T17:22:57.723Z",
"user": {
"_id": "61aa376688c20eebf1e8deb3",
"avatarUrl": "/avatars/7c11dcb232c73547d7d87834be287822.svg",
"fullname": "Hao Zhu",
"isPro": false,
"type": "user",
"user": "ProKil"
}
},
{
"_id": "67bdfc14c45e6063fed00c48",
"hidden": false,
"name": "Fei Fang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bdfc14c45e6063fed00c49",
"hidden": false,
"name": "Haifeng Xu",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-24T03:36:57 | Grounded Persuasive Language Generation for Automated Marketing | This paper develops an agentic framework that employs large language models
(LLMs) to automate the generation of persuasive and grounded marketing content,
using real estate listing descriptions as our focal application domain. Our
method is designed to align the generated content with user preferences while
highlighting useful factual attributes. This agent consists of three key
modules: (1) Grounding Module, mimicking expert human behavior to predict
marketable features; (2) Personalization Module, aligning content with user
preferences; (3) Marketing Module, ensuring factual accuracy and the inclusion
of localized features. We conduct systematic human-subject experiments in the
domain of real estate marketing, with a focus group of potential house buyers.
The results demonstrate that marketing descriptions generated by our approach
are preferred over those written by human experts by a clear margin. Our
findings suggest a promising LLM-based agentic framework to automate
large-scale targeted marketing while ensuring responsible generation using only
facts. | 10 | 67bdfc15c45e6063fed00c7a | null | null |
|
2025-02-25T11:58:10.154000 | InductionBench: LLMs Fail in the Simplest Complexity Class | 2 | {
"_id": "639a25aba2b0b1c9d85a51e8",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/639a25aba2b0b1c9d85a51e8/pphz-MK62hPNbBkMHAkeR.jpeg",
"followerCount": 4,
"fullname": "Wenyue Hua",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "wenyueH",
"type": "user"
} | false | null | 2502.15823 | [
{
"_id": "67bdf5d04dc920400e28c251",
"hidden": false,
"name": "Wenyue Hua",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bdf5d04dc920400e28c252",
"hidden": false,
"name": "Tyler Wong",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-26T08:28:57.332Z",
"user": {
"_id": "67be036f9d18abb027aa2f2b",
"avatarUrl": "/avatars/9ba9530be87fd745b5d6f2fc05c63753.svg",
"fullname": "Tyler Wong",
"isPro": false,
"type": "user",
"user": "Tyler-W0ng"
}
},
{
"_id": "67bdf5d04dc920400e28c253",
"hidden": false,
"name": "Sun Fei",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bdf5d04dc920400e28c254",
"hidden": false,
"name": "Liangming Pan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bdf5d04dc920400e28c255",
"hidden": false,
"name": "Adam Jardine",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bdf5d04dc920400e28c256",
"hidden": false,
"name": "William Yang Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-20T03:48:00 | InductionBench: LLMs Fail in the Simplest Complexity Class | Large language models (LLMs) have shown remarkable improvements in reasoning
and many existing benchmarks have been addressed by models such as o1 and o3
either fully or partially. However, a majority of these benchmarks emphasize
deductive reasoning, including mathematical and coding tasks in which rules
such as mathematical axioms or programming syntax are clearly defined, based on
which LLMs can plan and apply these rules to arrive at a solution. In contrast,
inductive reasoning, where one infers the underlying rules from observed data,
remains less explored. Such inductive processes lie at the heart of scientific
discovery, as they enable researchers to extract general principles from
empirical observations. To assess whether LLMs possess this capacity, we
introduce InductionBench, a new benchmark designed to evaluate the inductive
reasoning ability of LLMs. Our experimental findings reveal that even the most
advanced models available struggle to master the simplest complexity classes
within the subregular hierarchy of functions, highlighting a notable deficiency
in current LLMs' inductive reasoning capabilities. Code and data are available at
https://github.com/Wenyueh/inductive_reasoning_benchmark. | 6 | 67bdf5d24dc920400e28c2cb | null | null |
|
2025-02-25T11:40:15.745000 | Investigating the Impact of Quantization Methods on the Safety and Reliability of Large Language Models | 2 | {
"_id": "65afde6ba0b4bf3b0e95b4e8",
"avatarUrl": "/avatars/e9b97040b0a619bf6609465d1678705c.svg",
"followerCount": null,
"fullname": "Egor Shvetsov",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "dalime",
"type": "user"
} | true | null | 2502.15799 | [
{
"_id": "67bdf20b7c9bd4f09ebf05ac",
"hidden": false,
"name": "Artyom Kharinaev",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-26T08:29:03.290Z",
"user": {
"_id": "64b4f8ba2fc8324fcb64c516",
"avatarUrl": "/avatars/4cc7ab802a4e0c538c8ae1acb8192528.svg",
"fullname": "Artyom Kharinaev",
"isPro": false,
"type": "user",
"user": "kharinaev"
}
},
{
"_id": "67bdf20b7c9bd4f09ebf05ad",
"hidden": false,
"name": "Viktor Moskvoretskii",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bdf20b7c9bd4f09ebf05ae",
"hidden": false,
"name": "Egor Shvetsov",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-25T16:41:24.118Z",
"user": {
"_id": "65afde6ba0b4bf3b0e95b4e8",
"avatarUrl": "/avatars/e9b97040b0a619bf6609465d1678705c.svg",
"fullname": "Egor Shvetsov",
"isPro": false,
"type": "user",
"user": "dalime"
}
},
{
"_id": "67bdf20b7c9bd4f09ebf05af",
"hidden": false,
"name": "Kseniia Studenikina",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-26T08:29:00.879Z",
"user": {
"_id": "64c24ba6275309cb8bdab7ba",
"avatarUrl": "/avatars/96cca2e5658c36c286d438e5d38f4c2f.svg",
"fullname": "Kseniia Studenikina",
"isPro": false,
"type": "user",
"user": "Xeanst"
}
},
{
"_id": "67bdf20b7c9bd4f09ebf05b0",
"hidden": false,
"name": "Bykov Mikhail",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bdf20b7c9bd4f09ebf05b1",
"hidden": false,
"name": "Evgeny Burnaev",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-18T20:32:05 | Investigating the Impact of Quantization Methods on the Safety and
Reliability of Large Language Models | Large Language Models (LLMs) have emerged as powerful tools for addressing
modern challenges and enabling practical applications. However, their
computational expense remains a significant barrier to widespread adoption.
Quantization has emerged as a promising technique to democratize access and
enable low-resource device deployment. Despite these advancements, the safety
and trustworthiness of quantized models remain underexplored, as prior studies
often overlook contemporary architectures and rely on overly simplistic
benchmarks and evaluations. To address this gap, we introduce OpenSafetyMini, a
novel open-ended safety dataset designed to better distinguish between models.
We evaluate 4 state-of-the-art quantization techniques across LLaMA and Mistral
models using 4 benchmarks, including human evaluations. Our findings reveal
that the optimal quantization method varies for 4-bit precision, while vector
quantization techniques deliver the best safety and trustworthiness performance
at 2-bit precision, providing a foundation for future research.
|
2025-02-25T11:02:34.002000 | Diagnosing COVID-19 Severity from Chest X-Ray Images Using ViT and CNN Architectures | 2 | {
"_id": "63e972f1ccae1fe5c6211759",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63e972f1ccae1fe5c6211759/AfKPgMdAraUtvbtJpoHFY.jpeg",
"followerCount": 2,
"fullname": "Luis Lara",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "ludolara",
"type": "user"
} | true | null | 2502.16622 | [
{
"_id": "67bde94fc45e6063fecbcf04",
"hidden": false,
"name": "Luis Lara",
"status": "extracted_confirmed",
"statusLastChangedAt": "2025-02-25T16:05:37.367Z",
"user": {
"_id": "63e972f1ccae1fe5c6211759",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63e972f1ccae1fe5c6211759/AfKPgMdAraUtvbtJpoHFY.jpeg",
"fullname": "Luis Lara",
"isPro": false,
"type": "user",
"user": "ludolara"
}
},
{
"_id": "67bde94fc45e6063fecbcf05",
"hidden": false,
"name": "Lucia Eve Berger",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bde94fc45e6063fecbcf06",
"hidden": false,
"name": "Rajesh Raju",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-25T16:40:11.167Z",
"user": {
"_id": "6544d303629ca3a19924cebe",
"avatarUrl": "/avatars/3595f9ee3b745212ceeb19be6723c7b2.svg",
"fullname": "Rajesh Raju",
"isPro": false,
"type": "user",
"user": "rajeshraju"
}
},
{
"_id": "67bde94fc45e6063fecbcf07",
"hidden": false,
"name": "Shawn Whitfield",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-23T15:50:42 | Diagnosing COVID-19 Severity from Chest X-Ray Images Using ViT and CNN
Architectures | The COVID-19 pandemic strained healthcare resources and prompted discussion
about how machine learning can alleviate physician burdens and contribute to
diagnosis. Chest x-rays (CXRs) are used for diagnosis of COVID-19, but few
studies predict the severity of a patient's condition from CXRs. In this study,
we produce a large COVID severity dataset by merging three sources and
investigate the efficacy of transfer learning using ImageNet- and
CXR-pretrained models and vision transformers (ViTs) in both severity
regression and classification tasks. A pretrained DenseNet161 model performed
the best on the three class severity prediction problem, reaching 80% accuracy
overall and 77.3%, 83.9%, and 70% on mild, moderate and severe cases,
respectively. The ViT had the best regression results, with a mean absolute
error of 0.5676 compared to radiologist-predicted severity scores. The
project's source code is publicly available. | 1 | 67bde950c45e6063fecbcf62 | null | null |
|
2025-02-25T09:17:04.777000 | Early-Exit and Instant Confidence Translation Quality Estimation | 2 | {
"_id": "6304ece07424d937fa35fb98",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6304ece07424d937fa35fb98/6qZoqm-Ti8CiDcCHEt1sE.jpeg",
"followerCount": 20,
"fullname": "Vilém Zouhar",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "zouharvi",
"type": "user"
} | true | [
"https://cdn-uploads.huggingface.co/production/uploads/6304ece07424d937fa35fb98/FbzjxTx-i9oevQ-i0c_CH.png"
] | 2502.14429 | [
{
"_id": "67b835f98512a3eca052c0ee",
"hidden": false,
"name": "Vilém Zouhar",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-25T14:09:46.337Z",
"user": {
"_id": "6304ece07424d937fa35fb98",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6304ece07424d937fa35fb98/6qZoqm-Ti8CiDcCHEt1sE.jpeg",
"fullname": "Vilém Zouhar",
"isPro": false,
"type": "user",
"user": "zouharvi"
}
},
{
"_id": "67b835f98512a3eca052c0ef",
"hidden": false,
"name": "Maike Züfle",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-25T16:38:17.623Z",
"user": {
"_id": "6552004b4d9e71e17b35fa0b",
"avatarUrl": "/avatars/341ac1551588d5fcf5f3526fc06ff702.svg",
"fullname": "Maike Züfle",
"isPro": false,
"type": "user",
"user": "maikez"
}
},
{
"_id": "67b835f98512a3eca052c0f0",
"hidden": false,
"name": "Beni Egressy",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-25T16:38:24.223Z",
"user": {
"_id": "6735d61d0c6b2cc068fe7cda",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/no-auth/OD07myJrBbPaM1BUknLP-.png",
"fullname": "Beni Egressy",
"isPro": false,
"type": "user",
"user": "egressbi"
}
},
{
"_id": "67b835f98512a3eca052c0f1",
"hidden": false,
"name": "Julius Cheng",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-25T16:38:33.820Z",
"user": {
"_id": "641af90d1911d3be6742928a",
"avatarUrl": "/avatars/d6d34c2d49cb8dfdccc6681aacf47cd4.svg",
"fullname": "Julius Cheng",
"isPro": false,
"type": "user",
"user": "juliuscheng"
}
},
{
"_id": "67b835f98512a3eca052c0f2",
"hidden": false,
"name": "Jan Niehues",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-25T16:38:40.110Z",
"user": {
"_id": "63394998474cfeb1a85bde3f",
"avatarUrl": "/avatars/43d997eb118e6c6da9604e0c0bf0e63e.svg",
"fullname": "Jan Niehues",
"isPro": false,
"type": "user",
"user": "jannieh"
}
}
] | 2025-02-20T10:27:13 | Early-Exit and Instant Confidence Translation Quality Estimation | Quality estimation is omnipresent in machine translation, for both evaluation
and generation. Unfortunately, quality estimation models are often opaque and
computationally expensive, making them impractical to be part of large-scale
pipelines. In this work, we tackle two connected challenges: (1) reducing the
cost of quality estimation at scale, and (2) developing an inexpensive
uncertainty estimation method for quality estimation. To address the latter, we
introduce Instant Confidence COMET, an uncertainty-aware quality estimation
model that matches the performance of previous approaches at a fraction of
their costs. We extend this to Early-Exit COMET, a quality estimation model
that can compute quality scores and associated confidences already at early
model layers, allowing us to early-exit computations and reduce evaluation
costs. We also apply our model to machine translation reranking. We combine
Early-Exit COMET with an upper confidence bound bandit algorithm to find the
best candidate from a large pool without having to run the full evaluation
model on all candidates. In both cases (evaluation and reranking) our methods
reduce the required compute by 50% with very little degradation in performance. | 3 | 67b835fa8512a3eca052c11e | null | null |
|
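A rough sketch, under our own assumptions, of the upper-confidence-bound reranking idea in the abstract above: each candidate gets an early-exit quality estimate plus an instant confidence (treated here as a standard deviation), and we repeatedly evaluate the candidate with the highest upper bound one layer deeper. `score_at_layer` is a hypothetical interface to an Early-Exit COMET-style model.

```python
import numpy as np

def ucb_rerank(candidates, score_at_layer, n_layers, budget):
    """score_at_layer(c, l) -> (mean, std); returns index of best candidate."""
    layer = np.zeros(len(candidates), dtype=int)
    mean = np.empty(len(candidates))
    std = np.empty(len(candidates))
    for i, c in enumerate(candidates):                 # one cheap pass for all
        mean[i], std[i] = score_at_layer(c, 0)
    for _ in range(budget):                            # spend the rest greedily
        i = int(np.argmax(mean + std))                 # highest upper bound
        if layer[i] + 1 >= n_layers:
            std[i] = 0.0                               # fully evaluated
            continue
        layer[i] += 1
        mean[i], std[i] = score_at_layer(candidates[i], layer[i])
    return int(np.argmax(mean))
```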
2025-02-25T09:00:19.900000 | MegaLoc: One Retrieval to Place Them All | 2 | {
"_id": "67a9e16f0710558f7bd8947a",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/no-auth/2SacRuI2bOsBgctaxWNGl.png",
"followerCount": null,
"fullname": "Gabriele Berton",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "gberton",
"type": "user"
} | true | null | 2502.17237 | [
{
"_id": "67bdcb947186ab0e92d9ebf6",
"hidden": false,
"name": "Gabriele Berton",
"status": "extracted_pending",
"statusLastChangedAt": "2025-02-25T13:54:29.302Z",
"user": {
"_id": "67a9e16f0710558f7bd8947a",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/no-auth/2SacRuI2bOsBgctaxWNGl.png",
"fullname": "Gabriele Berton",
"isPro": false,
"type": "user",
"user": "gberton"
}
},
{
"_id": "67bdcb947186ab0e92d9ebf7",
"hidden": false,
"name": "Carlo Masone",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-25T16:40:30.511Z",
"user": {
"_id": "643fb774e5f6d513c7214ec3",
"avatarUrl": "/avatars/9333c175e72ced43d158ecd3a40c6af4.svg",
"fullname": "Carlo Masone",
"isPro": false,
"type": "user",
"user": "carmas"
}
}
] | 2025-02-24T15:14:55 | MegaLoc: One Retrieval to Place Them All | Retrieving images from the same location as a given query is an important
component of multiple computer vision tasks, like Visual Place Recognition,
Landmark Retrieval, Visual Localization, 3D reconstruction, and SLAM. However,
existing solutions are built to specifically work for one of these tasks, and
are known to fail when the requirements slightly change or when they meet
out-of-distribution data. In this paper we combine a variety of existing
methods, training techniques, and datasets to train a retrieval model, called
MegaLoc, that is performant on multiple tasks. We find that MegaLoc (1)
achieves state of the art on a large number of Visual Place Recognition
datasets, (2) achieves impressive results on common Landmark Retrieval datasets, and (3)
sets a new state of the art for Visual Localization on the LaMAR datasets,
where we only changed the retrieval method in the existing localization
pipeline. The code for MegaLoc is available at
https://github.com/gmberton/MegaLoc | 1 | 67bdcb957186ab0e92d9ec34 | null | null |
|
2025-02-25T05:51:02.881000 | TAG: A Decentralized Framework for Multi-Agent Hierarchical Reinforcement Learning | 2 | {
"_id": "65e98cd8e19214e9d151f29e",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/65e98cd8e19214e9d151f29e/XjQzoVgKVzv8AZBWFQnHz.jpeg",
"followerCount": 2,
"fullname": "Giuseppe Paolo",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "GPaolo",
"type": "user"
} | true | null | 2502.15425 | [
{
"_id": "67bda01d87919b52fc418533",
"hidden": false,
"name": "Giuseppe Paolo",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-25T14:09:33.376Z",
"user": {
"_id": "65e98cd8e19214e9d151f29e",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/65e98cd8e19214e9d151f29e/XjQzoVgKVzv8AZBWFQnHz.jpeg",
"fullname": "Giuseppe Paolo",
"isPro": false,
"type": "user",
"user": "GPaolo"
}
},
{
"_id": "67bda01d87919b52fc418534",
"hidden": false,
"name": "Abdelhakim Benechehab",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-25T16:37:00.751Z",
"user": {
"_id": "621d59ebd3df05d67132e8d9",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/621d59ebd3df05d67132e8d9/0gPfPTRKKnz5kq0InTqm5.jpeg",
"fullname": "Abdelhakim Benechehab",
"isPro": false,
"type": "user",
"user": "abenechehab"
}
},
{
"_id": "67bda01d87919b52fc418535",
"hidden": false,
"name": "Hamza Cherkaoui",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-25T16:37:07.215Z",
"user": {
"_id": "6794d6ba8bed6b676ee9ba8a",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/no-auth/dXlh-KNL2H80ahMJcvAlK.png",
"fullname": "Hamza Cherkaoui",
"isPro": false,
"type": "user",
"user": "HamzaCherkaoui"
}
},
{
"_id": "67bda01d87919b52fc418536",
"hidden": false,
"name": "Albert Thomas",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-25T16:37:14.448Z",
"user": {
"_id": "639789c20f1ac9c2f34a59f7",
"avatarUrl": "/avatars/fd73c93d50264d14f532ff52bc0d48f7.svg",
"fullname": "Albert Thomas",
"isPro": false,
"type": "user",
"user": "albert9000"
}
},
{
"_id": "67bda01d87919b52fc418537",
"hidden": false,
"name": "Balázs Kégl",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-25T16:37:20.798Z",
"user": {
"_id": "672cb69f250205b317235571",
"avatarUrl": "/avatars/37afc1506b15a2f9c37e3e8769142580.svg",
"fullname": "Balazs Kegl",
"isPro": false,
"type": "user",
"user": "balazskegl"
}
}
] | 2025-02-21T12:52:16 | TAG: A Decentralized Framework for Multi-Agent Hierarchical
Reinforcement Learning | Hierarchical organization is fundamental to biological systems and human
societies, yet artificial intelligence systems often rely on monolithic
architectures that limit adaptability and scalability. Current hierarchical
reinforcement learning (HRL) approaches typically restrict hierarchies to two
levels or require centralized training, which limits their practical
applicability. We introduce TAME Agent Framework (TAG), a framework for
constructing fully decentralized hierarchical multi-agent systems. TAG enables
hierarchies of arbitrary depth through a novel LevelEnv concept, which
abstracts each hierarchy level as the environment for the agents above it. This
approach standardizes information flow between levels while preserving loose
coupling, allowing for seamless integration of diverse agent types. We
demonstrate the effectiveness of TAG by implementing hierarchical architectures
that combine different RL agents across multiple levels, achieving improved
performance over classical multi-agent RL baselines on standard benchmarks. Our
results show that decentralized hierarchical organization enhances both
learning speed and final performance, positioning TAG as a promising direction
for scalable multi-agent systems. | 8 | 67bda01f87919b52fc4185d8 | null | null |
|
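A minimal sketch, on our own assumptions, of the LevelEnv concept from the TAG abstract above: the agents at level k are wrapped so they look like an environment to the agent at level k+1, which standardizes the interface between hierarchy levels. Method names and the goal-passing protocol here are illustrative.

```python
class LevelEnv:
    def __init__(self, lower_agents, base_env):
        self.lower_agents = lower_agents
        self.base_env = base_env          # the LevelEnv (or real env) below

    def reset(self):
        self.obs = self.base_env.reset()
        return [a.summarize(o) for a, o in zip(self.lower_agents, self.obs)]

    def step(self, goals):
        # The upper agent's "action" is a goal per lower agent; the lower
        # agents then act in their own environment conditioned on those goals.
        actions = [a.act(o, g) for a, o, g in
                   zip(self.lower_agents, self.obs, goals)]
        self.obs, rewards, done = self.base_env.step(actions)
        return ([a.summarize(o) for a, o in zip(self.lower_agents, self.obs)],
                rewards, done)
```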
2025-02-25T05:40:40.152000 | Stable-SPAM: How to Train in 4-Bit More Stably than 16-Bit Adam | 2 | {
"_id": "64cd4743a785f2043b32915e",
"avatarUrl": "/avatars/ba0b497a194dfea8449112d71fc67654.svg",
"followerCount": 1,
"fullname": "Tianjin Huang",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "TianjinHuang",
"type": "user"
} | true | null | 2502.17055 | [
{
"_id": "67bd9b40478ef7c36240c6e6",
"hidden": false,
"name": "Tianjin Huang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-25T16:30:14.371Z",
"user": {
"_id": "64cd4743a785f2043b32915e",
"avatarUrl": "/avatars/ba0b497a194dfea8449112d71fc67654.svg",
"fullname": "Tianjin Huang",
"isPro": false,
"type": "user",
"user": "TianjinHuang"
}
},
{
"_id": "67bd9b40478ef7c36240c6e7",
"hidden": false,
"name": "Haotian Hu",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-26T08:30:36.869Z",
"user": {
"_id": "67577b39a6073ee33f97cdd9",
"avatarUrl": "/avatars/c05788ebc7c59b13fcddf8ff88540f79.svg",
"fullname": "Haotian Hu",
"isPro": false,
"type": "user",
"user": "cspikachu"
}
},
{
"_id": "67bd9b40478ef7c36240c6e8",
"hidden": false,
"name": "Zhenyu Zhang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-26T08:30:38.930Z",
"user": {
"_id": "649c888f67fd6c6aa97e5f85",
"avatarUrl": "/avatars/9967b729916d1128773102797fed1673.svg",
"fullname": "Zhenyu Zhang",
"isPro": false,
"type": "user",
"user": "Kyriection"
}
},
{
"_id": "67bd9b40478ef7c36240c6e9",
"hidden": false,
"name": "Gaojie Jin",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-25T16:30:59.295Z",
"user": {
"_id": "6662bd6b6016f6effa6ce492",
"avatarUrl": "/avatars/c8e6b4b4a64f7ca0ddc91af5217a791b.svg",
"fullname": "gaojie jin",
"isPro": false,
"type": "user",
"user": "sggjin"
}
},
{
"_id": "67bd9b40478ef7c36240c6ea",
"hidden": false,
"name": "Xiang Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bd9b40478ef7c36240c6eb",
"hidden": false,
"name": "Li Shen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bd9b40478ef7c36240c6ec",
"hidden": false,
"name": "Tianlong Chen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bd9b40478ef7c36240c6ed",
"hidden": false,
"name": "Lu Liu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bd9b40478ef7c36240c6ee",
"hidden": false,
"name": "Qingsong Wen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bd9b40478ef7c36240c6ef",
"hidden": false,
"name": "Zhangyang Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bd9b40478ef7c36240c6f0",
"hidden": false,
"name": "Shiwei Liu",
"status": "claimed_verified",
"statusLastChangedAt": "2025-03-03T11:14:48.533Z",
"user": {
"_id": "65b04d2291e63920a7898c9e",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/65b04d2291e63920a7898c9e/iUHs235G4bqK-KnH_94ti.jpeg",
"fullname": "Liu",
"isPro": false,
"type": "user",
"user": "Shiweiliuiiiiiii"
}
}
] | 2025-02-24T11:09:15 | Stable-SPAM: How to Train in 4-Bit More Stably than 16-Bit Adam | This paper comprehensively evaluates several recently proposed optimizers for
4-bit training, revealing that low-bit precision amplifies sensitivity to
learning rates and often causes unstable gradient norms, leading to divergence
at higher learning rates. Among these, SPAM, a recent optimizer featuring
momentum reset and spike-aware gradient clipping, achieves the best performance
across various bit levels, but struggles to stabilize gradient norms, requiring
careful learning rate tuning. To address these limitations, we propose
Stable-SPAM, which incorporates enhanced gradient normalization and clipping
techniques. In particular, Stable-SPAM (1) adaptively updates the clipping
threshold for spiked gradients by tracking their historical maxima; (2)
normalizes the entire gradient matrix based on its historical ℓ₂-norm
statistics; and (3) inherits momentum reset from SPAM to periodically reset
the first and second moments of Adam, mitigating the accumulation of spiked
gradients. Extensive experiments show that Stable-SPAM effectively stabilizes
gradient norms in 4-bit LLM training, delivering superior performance compared
to Adam and SPAM. Notably, our 4-bit LLaMA-1B model trained with Stable-SPAM
outperforms the BF16 LLaMA-1B trained with Adam by up to 2 perplexity points.
Furthermore, when both models are trained in 4-bit, Stable-SPAM achieves the
same loss as Adam while requiring only about half the training steps. Code is
available at https://github.com/TianjinYellow/StableSPAM.git. | 16 | 67bd9b41478ef7c36240c724 | null | null |
|
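A compact sketch, under stated assumptions, of the three Stable-SPAM ingredients named in the abstract above, applied to a single gradient matrix: (1) clip spiked entries against a running historical maximum, (2) rescale the matrix using running ℓ₂-norm statistics, and (3) reset Adam's moments periodically. The exact update rules and hyperparameters here are illustrative, not the paper's.

```python
import numpy as np

class StableSPAMSketch:
    def __init__(self, beta=0.999, reset_every=500):
        self.beta, self.reset_every = beta, reset_every
        self.g_max, self.norm_ema, self.t = 0.0, 0.0, 0
        self.m = self.v = None                       # Adam moments

    def step(self, grad):
        self.t += 1
        self.g_max = max(self.beta * self.g_max, np.abs(grad).max())
        grad = np.clip(grad, -self.g_max, self.g_max)        # (1) spike clipping
        norm = np.linalg.norm(grad)
        self.norm_ema = self.beta * self.norm_ema + (1 - self.beta) * norm
        grad = grad / (norm + 1e-8) * self.norm_ema          # (2) norm control
        if self.m is None or self.t % self.reset_every == 0:
            self.m = np.zeros_like(grad)                     # (3) momentum reset
            self.v = np.zeros_like(grad)
        self.m = 0.9 * self.m + 0.1 * grad
        self.v = 0.999 * self.v + 0.001 * grad**2
        return self.m / (np.sqrt(self.v) + 1e-8)     # Adam-style direction
```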
2025-02-25T04:11:18.915000 | Can Community Notes Replace Professional Fact-Checkers? | 2 | {
"_id": "6231d3ce86753f5f41d39c6f",
"avatarUrl": "/avatars/9b18f368e5f80cfc935b2e339d42a85f.svg",
"followerCount": 3,
"fullname": "Nadav Borenstein",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "Nadav",
"type": "user"
} | true | [
"https://cdn-uploads.huggingface.co/production/uploads/6231d3ce86753f5f41d39c6f/CwWaf1c9-jOzJ-gD5lvCH.jpeg",
"https://cdn-uploads.huggingface.co/production/uploads/6231d3ce86753f5f41d39c6f/WrrBClUkuDsXHcfxP_N8B.jpeg"
] | 2502.14132 | [
{
"_id": "67b86819d00e69f10c1f31b9",
"hidden": false,
"name": "Nadav Borenstein",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-25T09:40:52.278Z",
"user": {
"_id": "6231d3ce86753f5f41d39c6f",
"avatarUrl": "/avatars/9b18f368e5f80cfc935b2e339d42a85f.svg",
"fullname": "Nadav Borenstein",
"isPro": false,
"type": "user",
"user": "Nadav"
}
},
{
"_id": "67b86819d00e69f10c1f31ba",
"hidden": false,
"name": "Greta Warren",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-21T14:42:45.791Z",
"user": {
"_id": "6698cffdb2ebada9f4a7e7d7",
"avatarUrl": "/avatars/e66d946c14595d3b008185f2be8d2f57.svg",
"fullname": "Greta Warren",
"isPro": false,
"type": "user",
"user": "gretawarren"
}
},
{
"_id": "67b86819d00e69f10c1f31bb",
"hidden": false,
"name": "Desmond Elliott",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-25T16:35:15.142Z",
"user": {
"_id": "6285f66a7cc3b7bc1b8e7b8e",
"avatarUrl": "/avatars/984ae22db7cc885591bc0b5bceffdfbd.svg",
"fullname": "Desmond Elliott",
"isPro": false,
"type": "user",
"user": "elliottd"
}
},
{
"_id": "67b86819d00e69f10c1f31bc",
"hidden": false,
"name": "Isabelle Augenstein",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-25T16:35:20.535Z",
"user": {
"_id": "608918b7df398c3b285ce960",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1621507769190-608918b7df398c3b285ce960.jpeg",
"fullname": "Isabelle Augenstein",
"isPro": false,
"type": "user",
"user": "IAugenstein"
}
}
] | 2025-02-19T22:26:39 | Can Community Notes Replace Professional Fact-Checkers? | Two commonly-employed strategies to combat the rise of misinformation on
social media are (i) fact-checking by professional organisations and (ii)
community moderation by platform users. Policy changes by Twitter/X and, more
recently, Meta, signal a shift away from partnerships with fact-checking
organisations and towards an increased reliance on crowdsourced community
notes. However, the extent and nature of dependencies between fact-checking and
helpful community notes remain unclear. To address these questions, we use
language models to annotate a large corpus of Twitter/X community notes with
attributes such as topic, cited sources, and whether they refute claims tied to
broader misinformation narratives. Our analysis reveals that community notes
cite fact-checking sources up to five times more than previously reported.
Fact-checking is especially crucial for notes on posts linked to broader
narratives, which are twice as likely to reference fact-checking sources
compared to other sources. In conclusion, our results show that successful
community moderation heavily relies on professional fact-checking. | 5 | 67b8681bd00e69f10c1f3267 | null | null |
|
2025-02-25T04:03:39.758000 | The snake in the Brownian sphere | 2 | {
"_id": "636d12455aaed143cd665607",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1679399015950-636d12455aaed143cd665607.png",
"followerCount": 2,
"fullname": "ZLW",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "ZarkLngeW",
"type": "user"
} | false | null | 2502.13074 | [
{
"_id": "67bd8759fdecc637bd621e6b",
"hidden": false,
"name": "Omer Angel",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bd8759fdecc637bd621e6c",
"hidden": false,
"name": "Emmanuel Jacob",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bd8759fdecc637bd621e6d",
"hidden": false,
"name": "Brett Kolesnik",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bd8759fdecc637bd621e6e",
"hidden": false,
"name": "Grégory Miermont",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-18T17:21:44 | The snake in the Brownian sphere | The Brownian sphere is a random metric space, homeomorphic to the
two-dimensional sphere, which arises as the universal scaling limit of many
types of random planar maps. The direct construction of the Brownian sphere is
via a continuous analogue of the Cori--Vauquelin--Schaeffer (CVS) bijection.
The CVS bijection maps labeled trees to planar maps, and the continuous version
maps Aldous' continuum random tree with Brownian labels (the Brownian snake) to
the Brownian sphere. In this work, we describe the inverse of the continuous
CVS bijection, by constructing the Brownian snake as a measurable function of
the Brownian sphere. Special care is needed to work with the orientation of the
Brownian sphere. | 1 | 67bd875afdecc637bd621e95 | null | null |
|
2025-02-25T03:36:50.480000 | M3-AGIQA: Multimodal, Multi-Round, Multi-Aspect AI-Generated Image Quality Assessment | 2 | {
"_id": "5f1158120c833276f61f1a84",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1608042047613-5f1158120c833276f61f1a84.jpeg",
"followerCount": 777,
"fullname": "Niels Rogge",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "nielsr",
"type": "user"
} | false | null | 2502.15167 | [
{
"_id": "67bc7ea06f88ef9a2b8283d3",
"hidden": false,
"name": "Chuan Cui",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-26T15:37:40.037Z",
"user": {
"_id": "60534c7e9d7c1d4d81b7e519",
"avatarUrl": "/avatars/1496a1d25c07ccf7446d74edc6bda7c0.svg",
"fullname": "草帽不是猫",
"isPro": false,
"type": "user",
"user": "strawhat"
}
},
{
"_id": "67bc7ea06f88ef9a2b8283d4",
"hidden": false,
"name": "Kejiang Chen",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-25T16:43:06.621Z",
"user": {
"_id": "63231a6a9aa01cafc8da1b62",
"avatarUrl": "/avatars/d5adb473f43902ba0e2c4cb7f5be394b.svg",
"fullname": "Kejiang Chen",
"isPro": false,
"type": "user",
"user": "kejiangchen"
}
},
{
"_id": "67bc7ea06f88ef9a2b8283d5",
"hidden": false,
"name": "Zhihua Wei",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bc7ea06f88ef9a2b8283d6",
"hidden": false,
"name": "Wen Shen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bc7ea06f88ef9a2b8283d7",
"hidden": false,
"name": "Weiming Zhang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-25T16:41:43.726Z",
"user": {
"_id": "64c54c28c097c6c2b3ab22cf",
"avatarUrl": "/avatars/de1f74516d03bb2e01811c0a53dce9c8.svg",
"fullname": "weiming zhang",
"isPro": false,
"type": "user",
"user": "xtwfnjezhang"
}
},
{
"_id": "67bc7ea06f88ef9a2b8283d8",
"hidden": false,
"name": "Nenghai Yu",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-21T03:05:45 | M3-AGIQA: Multimodal, Multi-Round, Multi-Aspect AI-Generated Image
Quality Assessment | The rapid advancement of AI-generated image (AGI) models has introduced
significant challenges in evaluating their quality, which requires considering
multiple dimensions such as perceptual quality, prompt correspondence, and
authenticity. To address these challenges, we propose M3-AGIQA, a comprehensive
framework for AGI quality assessment that is Multimodal, Multi-Round, and
Multi-Aspect. Our approach leverages the capabilities of Multimodal Large
Language Models (MLLMs) as joint text and image encoders and distills advanced
captioning capabilities from online MLLMs into a local model via Low-Rank
Adaptation (LoRA) fine-tuning. The framework includes a structured multi-round
evaluation mechanism, where intermediate image descriptions are generated to
provide deeper insights into the quality, correspondence, and authenticity
aspects. To align predictions with human perceptual judgments, a predictor
constructed by an xLSTM and a regression head is incorporated to process
sequential logits and predict Mean Opinion Scores (MOSs). Extensive experiments
conducted on multiple benchmark datasets demonstrate that M3-AGIQA achieves
state-of-the-art performance, effectively capturing nuanced aspects of AGI
quality. Furthermore, cross-dataset validation confirms its strong
generalizability. The code is available at
https://github.com/strawhatboy/M3-AGIQA. | 1 | 67bc7ea26f88ef9a2b828473 | null | null |
|
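A shape-level sketch of the MOS predictor described in the M3-AGIQA abstract above, with a vanilla `torch.nn.LSTM` standing in for the paper's xLSTM (which is not in core PyTorch). Input: per-round sequential logits from the MLLM; output: a scalar Mean Opinion Score. Dimensions are toy values.

```python
import torch
import torch.nn as nn

class MOSPredictor(nn.Module):
    def __init__(self, vocab_size, hidden=256):
        super().__init__()
        self.rnn = nn.LSTM(vocab_size, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)             # regression head

    def forward(self, logits):                       # (batch, seq_len, vocab)
        _, (h, _) = self.rnn(logits)
        return self.head(h[-1]).squeeze(-1)          # predicted MOS per item

scores = MOSPredictor(vocab_size=32)(torch.randn(2, 10, 32))  # toy shapes
```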
2025-02-25T02:06:00.809000 | GCC: Generative Color Constancy via Diffusing a Color Checker | 2 | {
"_id": "6459d5da3b6fafd9664807ab",
"avatarUrl": "/avatars/57430d1bbde3a2fe5586e5fbcafb0e74.svg",
"followerCount": 3,
"fullname": "Yu-Lun Liu",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "yulunliu",
"type": "user"
} | true | [
"https://cdn-uploads.huggingface.co/production/uploads/6459d5da3b6fafd9664807ab/gDAYQUcbNE2Ps2pQFxg_m.mp4"
] | 2502.17435 | [
{
"_id": "67bd6b4b8edd1ce8ad5603a0",
"hidden": false,
"name": "Chen-Wei Chang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bd6b4b8edd1ce8ad5603a1",
"hidden": false,
"name": "Cheng-De Fan",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-25T16:08:08.047Z",
"user": {
"_id": "64ea1e12925565abda02b17b",
"avatarUrl": "/avatars/b2bc33d95a147c6c8cf6b54672eb5a97.svg",
"fullname": "Cheng-De Fan",
"isPro": false,
"type": "user",
"user": "fansam39"
}
},
{
"_id": "67bd6b4b8edd1ce8ad5603a2",
"hidden": false,
"name": "Chia-Che Chang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bd6b4b8edd1ce8ad5603a3",
"hidden": false,
"name": "Yi-Chen Lo",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bd6b4b8edd1ce8ad5603a4",
"hidden": false,
"name": "Yu-Chee Tseng",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bd6b4b8edd1ce8ad5603a5",
"hidden": false,
"name": "Jiun-Long Huang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bd6b4b8edd1ce8ad5603a6",
"hidden": false,
"name": "Yu-Lun Liu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-25T16:08:30.752Z",
"user": {
"_id": "6459d5da3b6fafd9664807ab",
"avatarUrl": "/avatars/57430d1bbde3a2fe5586e5fbcafb0e74.svg",
"fullname": "Yu-Lun Liu",
"isPro": false,
"type": "user",
"user": "yulunliu"
}
}
] | 2025-02-24T18:59:54 | GCC: Generative Color Constancy via Diffusing a Color Checker | Color constancy methods often struggle to generalize across different camera
sensors due to varying spectral sensitivities. We present GCC, which leverages
diffusion models to inpaint color checkers into images for illumination
estimation. Our key innovations include (1) a single-step deterministic
inference approach that inpaints color checkers reflecting scene illumination,
(2) a Laplacian decomposition technique that preserves checker structure while
allowing illumination-dependent color adaptation, and (3) a mask-based data
augmentation strategy for handling imprecise color checker annotations. GCC
demonstrates superior robustness in cross-camera scenarios, achieving
state-of-the-art worst-25% error rates of 5.15° and 4.32° in
bi-directional evaluations. These results highlight our method's stability and
generalization capability across different camera characteristics without
requiring sensor-specific training, making it a versatile solution for
real-world applications. | 27 | 67bd6b4d8edd1ce8ad560401 | null | null |
|
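A rough sketch of the Laplacian-decomposition idea in the GCC abstract above, as we read it: keep the high-frequency (structure) band of the inpainted checker fixed while the low-frequency band carries the illumination-dependent color. Using a Gaussian blur for the band split is our simplification, not the paper's exact pyramid.

```python
import cv2
import numpy as np

def recompose(checker, illum_color, blur=15):
    # checker: HxWx3 uint8; illum_color: length-3 RGB gain (hypothetical input)
    low = cv2.GaussianBlur(checker, (blur, blur), 0)     # color / illumination band
    high = checker.astype(np.float32) - low              # structure band (kept)
    tinted_low = low * illum_color[None, None, :]        # adapt color to scene light
    return np.clip(high + tinted_low, 0, 255).astype(np.uint8)
```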
2025-02-25T01:02:05.395000 | Reflective Planning: Vision-Language Models for Multi-Stage Long-Horizon Robotic Manipulation | 2 | {
"_id": "64f8cb8ed04a890f5380d9a4",
"avatarUrl": "/avatars/d6fdfdbb0c10141aa3b4c832d928121b.svg",
"followerCount": 4,
"fullname": "Jianlan Luo",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "jianlanluo",
"type": "user"
} | true | null | 2502.16707 | [
{
"_id": "67bd3bcc797e4d53ce0bc70d",
"hidden": false,
"name": "Yunhai Feng",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-25T09:39:31.085Z",
"user": {
"_id": "64f8fbd95515d7dcceb906b1",
"avatarUrl": "/avatars/1c7d034de408930b166592465e65fc31.svg",
"fullname": "Yunhai Feng",
"isPro": false,
"type": "user",
"user": "yunhaif"
}
},
{
"_id": "67bd3bcc797e4d53ce0bc70e",
"hidden": false,
"name": "Jiaming Han",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-25T09:39:28.772Z",
"user": {
"_id": "62318c0386753f5f41d0e261",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/62318c0386753f5f41d0e261/xO_5PvOf7lXhQPnQLcmnq.jpeg",
"fullname": "Jiaming Han",
"isPro": false,
"type": "user",
"user": "csuhan"
}
},
{
"_id": "67bd3bcc797e4d53ce0bc70f",
"hidden": false,
"name": "Zhuoran Yang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-25T16:28:50.050Z",
"user": {
"_id": "646d769cda8e99940b71928e",
"avatarUrl": "/avatars/acee495a23362aa39b3d3e75c9afd967.svg",
"fullname": "Zhuoran Yang",
"isPro": false,
"type": "user",
"user": "zhuoranyang"
}
},
{
"_id": "67bd3bcc797e4d53ce0bc710",
"hidden": false,
"name": "Xiangyu Yue",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-25T16:28:43.087Z",
"user": {
"_id": "666a8f24e2990b0cb16b7bf9",
"avatarUrl": "/avatars/fcbaf8f1e3e53a2a4a819b7cb2c53aa4.svg",
"fullname": "Xiangyu Yue",
"isPro": false,
"type": "user",
"user": "xyyue"
}
},
{
"_id": "67bd3bcc797e4d53ce0bc711",
"hidden": false,
"name": "Sergey Levine",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-25T16:28:36.442Z",
"user": {
"_id": "665ce54120a307a3754849dd",
"avatarUrl": "/avatars/e698726e9be61dd50ce2efe372ed5dac.svg",
"fullname": "Sergey Levine",
"isPro": false,
"type": "user",
"user": "svlevine"
}
},
{
"_id": "67bd3bcc797e4d53ce0bc712",
"hidden": false,
"name": "Jianlan Luo",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-25T16:28:30.231Z",
"user": {
"_id": "64f8cb8ed04a890f5380d9a4",
"avatarUrl": "/avatars/d6fdfdbb0c10141aa3b4c832d928121b.svg",
"fullname": "Jianlan Luo",
"isPro": false,
"type": "user",
"user": "jianlanluo"
}
}
] | 2025-02-23T20:42:15 | Reflective Planning: Vision-Language Models for Multi-Stage Long-Horizon
Robotic Manipulation | Solving complex long-horizon robotic manipulation problems requires
sophisticated high-level planning capabilities, the ability to reason about the
physical world, and reactively choose appropriate motor skills. Vision-language
models (VLMs) pretrained on Internet data could in principle offer a framework
for tackling such problems. However, in their current form, VLMs lack both the
nuanced understanding of intricate physics required for robotic manipulation
and the ability to reason over long horizons to address error compounding
issues. In this paper, we introduce a novel test-time computation framework
that enhances VLMs' physical reasoning capabilities for multi-stage
manipulation tasks. At its core, our approach iteratively improves a pretrained
VLM with a "reflection" mechanism - it uses a generative model to imagine
future world states, leverages these predictions to guide action selection, and
critically reflects on potential suboptimalities to refine its reasoning.
Experimental results demonstrate that our method significantly outperforms
several state-of-the-art commercial VLMs as well as other post-training
approaches such as Monte Carlo Tree Search (MCTS). Videos are available at
https://reflect-vlm.github.io. | 11 | 67bd3bcf797e4d53ce0bc7ff | null | null |
|
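A schematic loop (our reading, not the released code) of the "reflection" mechanism from the Reflective Planning abstract above: propose an action with the VLM, imagine the resulting state with a generative world model, critique it, and revise before executing. The `vlm` and `world_model` interfaces are hypothetical.

```python
def reflective_step(obs, goal, vlm, world_model, max_revisions=3):
    action = vlm.propose(obs, goal)
    for _ in range(max_revisions):
        imagined = world_model.predict(obs, action)   # imagined future state
        critique = vlm.reflect(imagined, goal)        # spot suboptimalities
        if critique is None:                          # plan looks fine
            break
        action = vlm.revise(obs, goal, action, critique)
    return action
```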
2025-02-25T00:37:53.138000 | MONSTER: Monash Scalable Time Series Evaluation Repository | 2 | {
"_id": "675f68e3074ff89c5c078bf3",
"avatarUrl": "/avatars/e3b78d90f032659d411761f47c3cf43e.svg",
"followerCount": null,
"fullname": "Angus",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "angus924",
"type": "user"
} | true | null | 2502.15122 | [
{
"_id": "67bbd6d5ba0bb31293e11210",
"hidden": false,
"name": "Angus Dempster",
"status": "extracted_confirmed",
"statusLastChangedAt": "2025-02-24T02:18:57.914Z",
"user": {
"_id": "675f68e3074ff89c5c078bf3",
"avatarUrl": "/avatars/e3b78d90f032659d411761f47c3cf43e.svg",
"fullname": "Angus",
"isPro": false,
"type": "user",
"user": "angus924"
}
},
{
"_id": "67bbd6d5ba0bb31293e11211",
"hidden": false,
"name": "Navid Mohammadi Foumani",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-25T16:39:04.191Z",
"user": {
"_id": "64fd243cb3eee10ba5430423",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/64fd243cb3eee10ba5430423/u6SS1ueStDo35JOCgy0-J.jpeg",
"fullname": "Navid Foumani",
"isPro": false,
"type": "user",
"user": "Navidfoumani"
}
},
{
"_id": "67bbd6d5ba0bb31293e11212",
"hidden": false,
"name": "Chang Wei Tan",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-25T16:39:12.684Z",
"user": {
"_id": "664c356b543feedee5f54c19",
"avatarUrl": "/avatars/5b3522ceff6b6e8de733898d6b235cc1.svg",
"fullname": "Chang Wei Tan",
"isPro": true,
"type": "user",
"user": "charsiuu"
}
},
{
"_id": "67bbd6d5ba0bb31293e11213",
"hidden": false,
"name": "Lynn Miller",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-25T16:39:20.335Z",
"user": {
"_id": "67ae773b8a3f2c111fb36803",
"avatarUrl": "/avatars/d46608182cd449f1a9f1c9e76c514e6b.svg",
"fullname": "Lynn Miller",
"isPro": false,
"type": "user",
"user": "lynn-miller"
}
},
{
"_id": "67bbd6d5ba0bb31293e11214",
"hidden": false,
"name": "Amish Mishra",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bbd6d5ba0bb31293e11215",
"hidden": false,
"name": "Mahsa Salehi",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bbd6d5ba0bb31293e11216",
"hidden": false,
"name": "Charlotte Pelletier",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bbd6d5ba0bb31293e11217",
"hidden": false,
"name": "Daniel F. Schmidt",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-25T16:39:39.384Z",
"user": {
"_id": "67505920dd6ece09aa9eae3f",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/no-auth/XwLt-BAbKqqVVM37ItIp5.png",
"fullname": "Daniel Schmidt",
"isPro": false,
"type": "user",
"user": "DanielSchmidt"
}
},
{
"_id": "67bbd6d5ba0bb31293e11218",
"hidden": false,
"name": "Geoffrey I. Webb",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-25T16:39:48.897Z",
"user": {
"_id": "64a7301b01646254506b2746",
"avatarUrl": "/avatars/910efc38bf28f4250014483095c7b552.svg",
"fullname": "Geoffrey Webb",
"isPro": false,
"type": "user",
"user": "geoffwebb"
}
}
] | 2025-02-21T00:54:40 | MONSTER: Monash Scalable Time Series Evaluation Repository | We introduce MONSTER-the MONash Scalable Time Series Evaluation Repository-a
collection of large datasets for time series classification. The field of time
series classification has benefitted from common benchmarks set by the UCR and
UEA time series classification repositories. However, the datasets in these
benchmarks are small, with median sizes of 217 and 255 examples, respectively.
In consequence they favour a narrow subspace of models that are optimised to
achieve low classification error on a wide variety of smaller datasets, that
is, models that minimise variance, and give little weight to computational
issues such as scalability. Our hope is to diversify the field by introducing
benchmarks using larger datasets. We believe that there is enormous potential
for new progress in the field by engaging with the theoretical and practical
challenges of learning effectively from larger quantities of data. | 2 | 67bbd6d6ba0bb31293e11258 | null | null |
|
2025-02-25T00:17:51.431000 | X-Dancer: Expressive Music to Human Dance Video Generation | 3 | {
"_id": "60f1abe7544c2adfd699860c",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"followerCount": 6280,
"fullname": "AK",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "akhaliq",
"type": "user"
} | false | null | 2502.17414 | [
{
"_id": "67bd526001d5bfa0abfcc5ba",
"hidden": false,
"name": "Zeyuan Chen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bd526001d5bfa0abfcc5bb",
"hidden": false,
"name": "Hongyi Xu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bd526001d5bfa0abfcc5bc",
"hidden": false,
"name": "Guoxian Song",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-25T16:33:13.695Z",
"user": {
"_id": "63086a237dc1b1a54cc6c24d",
"avatarUrl": "/avatars/477b94134edc4c18c8f769ecbb7d8091.svg",
"fullname": "Song",
"isPro": false,
"type": "user",
"user": "guoxiansong"
}
},
{
"_id": "67bd526001d5bfa0abfcc5bd",
"hidden": false,
"name": "You Xie",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-25T16:33:21.430Z",
"user": {
"_id": "6408dfd4b6a334f53e24023c",
"avatarUrl": "/avatars/b7e3fa4fbec6313e94ff3384b74dabfc.svg",
"fullname": "You Xie",
"isPro": false,
"type": "user",
"user": "youxie"
}
},
{
"_id": "67bd526001d5bfa0abfcc5be",
"hidden": false,
"name": "Chenxu Zhang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-25T16:33:27.827Z",
"user": {
"_id": "64f58e61abc51e6b0f885575",
"avatarUrl": "/avatars/53ec28d045e708570e0e34f44aaba7a7.svg",
"fullname": "Chenxu Zhang",
"isPro": false,
"type": "user",
"user": "ChenxuZhang528"
}
},
{
"_id": "67bd526001d5bfa0abfcc5bf",
"hidden": false,
"name": "Xin Chen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bd526001d5bfa0abfcc5c0",
"hidden": false,
"name": "Chao Wang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-25T16:33:39.754Z",
"user": {
"_id": "64cf8caa0b71aea8be5c97db",
"avatarUrl": "/avatars/d486466afa3b1d58abd85725930b9298.svg",
"fullname": "Chao Wang",
"isPro": false,
"type": "user",
"user": "chaowang"
}
},
{
"_id": "67bd526001d5bfa0abfcc5c1",
"hidden": false,
"name": "Di Chang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-25T16:33:46.334Z",
"user": {
"_id": "64a5d8219f3b568c202b3137",
"avatarUrl": "/avatars/eef6fb7c70d272555a53183c0e50dbaf.svg",
"fullname": "Di Chang",
"isPro": false,
"type": "user",
"user": "Boese0601"
}
},
{
"_id": "67bd526001d5bfa0abfcc5c2",
"hidden": false,
"name": "Linjie Luo",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-24T18:47:54 | X-Dancer: Expressive Music to Human Dance Video Generation | We present X-Dancer, a novel zero-shot music-driven image animation pipeline
that creates diverse and long-range lifelike human dance videos from a single
static image. As its core, we introduce a unified transformer-diffusion
framework, featuring an autoregressive transformer model that synthesizes
extended and music-synchronized token sequences for 2D body, head and hands
poses, which then guide a diffusion model to produce coherent and realistic
dance video frames. Unlike traditional methods that primarily generate human
motion in 3D, X-Dancer addresses data limitations and enhances scalability by
modeling a wide spectrum of 2D dance motions, capturing their nuanced alignment
with musical beats through readily available monocular videos. To achieve this,
we first build a spatially compositional token representation from 2D human
pose labels associated with keypoint confidences, encoding both large
articulated body movements (e.g., upper and lower body) and fine-grained
motions (e.g., head and hands). We then design a music-to-motion transformer
model that autoregressively generates music-aligned dance pose token sequences,
incorporating global attention to both musical style and prior motion context.
Finally we leverage a diffusion backbone to animate the reference image with
these synthesized pose tokens through AdaIN, forming a fully differentiable
end-to-end framework. Experimental results demonstrate that X-Dancer is able to
produce both diverse and characterized dance videos, substantially
outperforming state-of-the-art methods in terms of diversity, expressiveness and
realism. Code and model will be available for research purposes. | 11 | 67bd526101d5bfa0abfcc62c | null | null |
|
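The X-Dancer abstract above conditions its diffusion backbone on pose tokens through AdaIN; below is a minimal AdaIN sketch in its standard formulation (the paper's exact conditioning details may differ): re-normalize content features to the scale and shift predicted from the condition.

```python
import torch

def adain(content, scale, shift, eps=1e-5):
    # content: (B, C, H, W); scale/shift: (B, C) from a condition encoder
    mu = content.mean(dim=(2, 3), keepdim=True)
    sigma = content.std(dim=(2, 3), keepdim=True)
    normalized = (content - mu) / (sigma + eps)
    return normalized * scale[:, :, None, None] + shift[:, :, None, None]
```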
2025-02-25T00:13:12.214000 | VideoGrain: Modulating Space-Time Attention for Multi-grained Video Editing | 4 | {
"_id": "60f1abe7544c2adfd699860c",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"followerCount": 6280,
"fullname": "AK",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "akhaliq",
"type": "user"
} | true | null | 2502.17258 | [
{
"_id": "67bd515c0417e7f92283d3b8",
"hidden": false,
"name": "Xiangpeng Yang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bd515c0417e7f92283d3b9",
"hidden": false,
"name": "Linchao Zhu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-25T15:16:07.044Z",
"user": {
"_id": "63521e1dfe367c0d9b155007",
"avatarUrl": "/avatars/b22804fc63b507fd60191486b17cdf7c.svg",
"fullname": "Linchao Zhu",
"isPro": false,
"type": "user",
"user": "ffmpbgrnn"
}
},
{
"_id": "67bd515c0417e7f92283d3ba",
"hidden": false,
"name": "Hehe Fan",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-25T15:16:12.884Z",
"user": {
"_id": "64ad04020fb9b20dbabbd30e",
"avatarUrl": "/avatars/a6bae4a3a4bcd6b54c33860fe14c7923.svg",
"fullname": "Hehe Fan",
"isPro": false,
"type": "user",
"user": "hehefan"
}
},
{
"_id": "67bd515c0417e7f92283d3bb",
"hidden": false,
"name": "Yi Yang",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-24T15:39:14 | VideoGrain: Modulating Space-Time Attention for Multi-grained Video
Editing | Recent advancements in diffusion models have significantly improved video
generation and editing capabilities. However, multi-grained video editing,
which encompasses class-level, instance-level, and part-level modifications,
remains a formidable challenge. The major difficulties in multi-grained editing
include semantic misalignment of text-to-region control and feature coupling
within the diffusion model. To address these difficulties, we present
VideoGrain, a zero-shot approach that modulates space-time (cross- and self-)
attention mechanisms to achieve fine-grained control over video content. We
enhance text-to-region control by amplifying each local prompt's attention to
its corresponding spatial-disentangled region while minimizing interactions
with irrelevant areas in cross-attention. Additionally, we improve feature
separation by increasing intra-region awareness and reducing inter-region
interference in self-attention. Extensive experiments demonstrate our method
achieves state-of-the-art performance in real-world scenarios. Our code, data,
and demos are available at https://knightyxp.github.io/VideoGrain_project_page/ | 71 | 67bd51620417e7f92283d4e9 | null | null |
|
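A toy sketch, under our own assumptions, of the text-to-region modulation the VideoGrain abstract above describes: boost cross-attention between a local prompt's tokens and its region, and damp attention to pixels outside the region, before the softmax. The additive bias and its strength `lam` are illustrative choices.

```python
import torch

def modulate_cross_attention(attn_logits, region_mask, prompt_token_ids, lam=2.0):
    # attn_logits: (heads, n_pixels, n_tokens); region_mask: (n_pixels,) bool
    out = attn_logits.clone()
    inside = region_mask[None, :, None]                    # broadcast over heads/tokens
    out[..., prompt_token_ids] += lam * inside             # amplify in-region attention
    out[..., prompt_token_ids] -= lam * (~inside)          # suppress out-of-region
    return out.softmax(dim=-1)
```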
2025-02-25T00:09:04.483000 | RIFLEx: A Free Lunch for Length Extrapolation in Video Diffusion Transformers | 3 | {
"_id": "60f1abe7544c2adfd699860c",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"followerCount": 6280,
"fullname": "AK",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "akhaliq",
"type": "user"
} | true | null | 2502.15894 | [
{
"_id": "67bd3bd26faf9f04b2170f61",
"hidden": false,
"name": "Min Zhao",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-25T16:25:16.045Z",
"user": {
"_id": "65d0aa91617d1f7450cfcc3b",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/lIoRhlq_Bmwt5X-V6LztR.png",
"fullname": "min zhao",
"isPro": false,
"type": "user",
"user": "caomi"
}
},
{
"_id": "67bd3bd26faf9f04b2170f62",
"hidden": false,
"name": "Guande He",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-25T16:25:07.460Z",
"user": {
"_id": "67492ee82ad3cfc108a41bbb",
"avatarUrl": "/avatars/7ad03e55a8791c62f1271a5c9bf8cc60.svg",
"fullname": "Guande He",
"isPro": false,
"type": "user",
"user": "gdhe17"
}
},
{
"_id": "67bd3bd26faf9f04b2170f63",
"hidden": false,
"name": "Yixiao Chen",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-25T16:24:59.862Z",
"user": {
"_id": "67505e0990ba48ec35e748e2",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/no-auth/K9bfDDD9AZJTxasc9yGfC.png",
"fullname": "yixiaochen",
"isPro": false,
"type": "user",
"user": "yixiaochen"
}
},
{
"_id": "67bd3bd26faf9f04b2170f64",
"hidden": false,
"name": "Hongzhou Zhu",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-25T09:39:23.502Z",
"user": {
"_id": "64c269a52d73768f07ac266c",
"avatarUrl": "/avatars/d497a960f8aef6a974907b68ed750c1c.svg",
"fullname": "Zhu Hongzhou",
"isPro": false,
"type": "user",
"user": "zhuhz22"
}
},
{
"_id": "67bd3bd26faf9f04b2170f65",
"hidden": false,
"name": "Chongxuan Li",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-25T16:24:52.448Z",
"user": {
"_id": "64c07b488e2612254361153b",
"avatarUrl": "/avatars/ade0f783cc4c2d3e73f402637f595471.svg",
"fullname": "chongxuan li",
"isPro": false,
"type": "user",
"user": "zhenxuan00"
}
},
{
"_id": "67bd3bd26faf9f04b2170f66",
"hidden": false,
"name": "Jun Zhu",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-21T19:28:05 | RIFLEx: A Free Lunch for Length Extrapolation in Video Diffusion
Transformers | Recent advancements in video generation have enabled models to synthesize
high-quality, minute-long videos. However, generating even longer videos with
temporal coherence remains a major challenge, and existing length extrapolation
methods lead to temporal repetition or motion deceleration. In this work, we
systematically analyze the role of frequency components in positional
embeddings and identify an intrinsic frequency that primarily governs
extrapolation behavior. Based on this insight, we propose RIFLEx, a minimal yet
effective approach that reduces the intrinsic frequency to suppress repetition
while preserving motion consistency, without requiring any additional
modifications. RIFLEx offers a true free lunch: it achieves high-quality 2× extrapolation on state-of-the-art video diffusion transformers in a completely training-free manner. Moreover, it enhances quality and enables 3× extrapolation via minimal fine-tuning without long videos. Project page and code: https://riflex-video.github.io/ | 19 | 67bd3bd66faf9f04b21710d1 | null | null |
|
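A rough sketch of the RIFLEx recipe from the abstract above, assuming the intrinsic component can be identified as the RoPE frequency whose period is closest to the training length; the paper's actual selection rule and scaling may differ.

```python
# Hedged sketch: lower one "intrinsic" RoPE frequency so its period spans the
# extrapolated video length, suppressing temporal repetition.
import numpy as np

def rope_frequencies(dim, base=10000.0):
    # standard RoPE schedule: dim/2 geometrically spaced frequencies
    return base ** (-np.arange(0, dim, 2) / dim)

def riflex_like_adjust(freqs, train_len, extrap_factor=2):
    freqs = freqs.copy()
    periods = 2 * np.pi / freqs
    # assumption: the component whose period is closest to the training
    # length governs global repetition
    k = int(np.argmin(np.abs(periods - train_len)))
    freqs[k] /= extrap_factor  # stretch its period to cover the longer video
    return k, freqs

freqs = rope_frequencies(dim=64)
k, new_freqs = riflex_like_adjust(freqs, train_len=128, extrap_factor=2)
print(f"adjusted component {k}: {freqs[k]:.6f} -> {new_freqs[k]:.6f}")
```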
2025-02-24T23:37:53.138000 | Linguistic Generalizability of Test-Time Scaling in Mathematical Reasoning | 2 | {
"_id": "60d3e619b8448e1785bbda2a",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/60d3e619b8448e1785bbda2a/q2re5u1HNwsCCyIMtid_I.jpeg",
"followerCount": 48,
"fullname": "GUIJIN SON",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "amphora",
"type": "user"
} | true | null | 2502.17407 | [
{
"_id": "67bd48d4becb766415a5d19d",
"hidden": false,
"name": "Guijin Son",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-25T16:21:47.182Z",
"user": {
"_id": "60d3e619b8448e1785bbda2a",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/60d3e619b8448e1785bbda2a/q2re5u1HNwsCCyIMtid_I.jpeg",
"fullname": "GUIJIN SON",
"isPro": false,
"type": "user",
"user": "amphora"
}
},
{
"_id": "67bd48d4becb766415a5d19e",
"hidden": false,
"name": "Jiwoo Hong",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-25T16:21:56.398Z",
"user": {
"_id": "678b61b6c7491f8b7065f68d",
"avatarUrl": "/avatars/2168e7a0c58076126fcb41b01d01e622.svg",
"fullname": "Jiwoo Hong",
"isPro": false,
"type": "user",
"user": "hongmush"
}
},
{
"_id": "67bd48d4becb766415a5d19f",
"hidden": false,
"name": "Hyunwoo Ko",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-25T09:39:12.933Z",
"user": {
"_id": "63e087b6a98d931aa90c1b9c",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63e087b6a98d931aa90c1b9c/96c6IT3f1pWGLbRdRDB2U.png",
"fullname": "Hyunwoo Ko",
"isPro": false,
"type": "user",
"user": "Cartinoe5930"
}
},
{
"_id": "67bd48d4becb766415a5d1a0",
"hidden": false,
"name": "James Thorne",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-24T18:36:15 | Linguistic Generalizability of Test-Time Scaling in Mathematical
Reasoning | Scaling pre-training compute has proven effective for achieving
multilinguality, but does the same hold for test-time scaling? In this work, we
introduce MCLM, a multilingual math benchmark featuring competition-level
problems in 55 languages. We test three test-time scaling methods, Outcome Reward Modeling (ORM), Process Reward Modeling (PRM), and Budget Forcing (BF), on both Qwen2.5-1.5B Math and MR1-1.5B, a multilingual LLM we trained for
extended reasoning. Our experiments show that using Qwen2.5-1.5B Math with ORM
achieves a score of 35.8 on MCLM, while BF on MR1-1.5B attains 35.2. Although
"thinking LLMs" have recently garnered significant attention, we find that
their performance is comparable to traditional scaling methods like best-of-N
once constrained to similar levels of inference FLOPs. Moreover, while BF
yields a 20-point improvement on English AIME, it provides only a 1.94-point
average gain across other languages, a pattern consistent across the other test-time scaling methods we studied, highlighting that test-time scaling may not
generalize as effectively to multilingual tasks. To foster further research, we
release MCLM, MR1-1.5B, and evaluation results. | 24 | 67bd48d5becb766415a5d1e9 | null | null |
|
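Of the three test-time scaling methods compared above, ORM-style best-of-N is the simplest to sketch: sample several solutions and keep the one the outcome reward model scores highest. The sampler and scorer below are stand-in stubs, not the Qwen2.5-1.5B Math or MR1-1.5B models.

```python
# Hedged sketch of best-of-N selection with an outcome reward model (ORM).
import random

def sample_solutions(problem, n=8):
    # stand-in for n stochastic decodes from a math LLM
    return [f"candidate {i} for: {problem}" for i in range(n)]

def outcome_reward(problem, solution):
    # stand-in for an ORM scoring the final answer; random here
    return random.random()

def best_of_n(problem, n=8):
    candidates = sample_solutions(problem, n)
    return max(candidates, key=lambda c: outcome_reward(problem, c))

print(best_of_n("Find the last digit of 7^2025."))
```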
2025-02-24T23:30:36.556000 | Forecasting Open-Weight AI Model Growth on Hugging Face | 3 | {
"_id": "5e67bdd61009063689407479",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1583857146757-5e67bdd61009063689407479.jpeg",
"followerCount": 2066,
"fullname": "Clem 🤗",
"isHf": true,
"isMod": false,
"isPro": true,
"name": "clem",
"type": "user"
} | true | [
"https://cdn-uploads.huggingface.co/production/uploads/5e67bdd61009063689407479/kQHArNjaT0CM1KCujtDc1.png"
] | 2502.15987 | [
{
"_id": "67bd46ea3e090b402d70f1f4",
"hidden": false,
"name": "Kushal Raj Bhandari",
"status": "extracted_confirmed",
"statusLastChangedAt": "2025-02-25T04:30:32.676Z",
"user": {
"_id": "64dfbcb18e2084e1d7b51b46",
"avatarUrl": "/avatars/fafe30beea2d7e8eec3f3ba985c582f7.svg",
"fullname": "Kushal Raj Bhandari",
"isPro": false,
"type": "user",
"user": "KBhandari11"
}
},
{
"_id": "67bd46ea3e090b402d70f1f5",
"hidden": false,
"name": "Pin-Yu Chen",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-25T16:34:05.126Z",
"user": {
"_id": "6495dd0b71f6708e0f990032",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6495dd0b71f6708e0f990032/PBIjdKNnpkxvR_3djCGVm.png",
"fullname": "Pin-Yu Chen",
"isPro": true,
"type": "user",
"user": "pinyuchen"
}
},
{
"_id": "67bd46ea3e090b402d70f1f6",
"hidden": false,
"name": "Jianxi Gao",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-21T22:52:19 | Forecasting Open-Weight AI Model Growth on Hugging Face | As the open-weight AI landscape continues to proliferate, with model development, significant investment, and user interest, it becomes increasingly
important to predict which models will ultimately drive innovation and shape AI
ecosystems. Building on parallels with citation dynamics in scientific
literature, we propose a framework to quantify how an open-weight model's
influence evolves. Specifically, we adapt the model introduced by Wang et al.
for scientific citations, using three key parameters-immediacy, longevity, and
relative fitness-to track the cumulative number of fine-tuned models of an
open-weight model. Our findings reveal that this citation-style approach can
effectively capture the diverse trajectories of open-weight model adoption,
with most models fitting well and outliers indicating unique patterns or abrupt
jumps in usage. | 10 | 67bd46ee3e090b402d70f317 | null | null |
|
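The adapted Wang et al. citation curve mentioned above has a closed form that is straightforward to fit. A hedged sketch, assuming the standard citation-dynamics form c(t) = m * (exp(lambda * Phi((ln t - mu) / sigma)) - 1) with relative fitness lambda, immediacy mu, and longevity sigma; the constant m, the synthetic data, and the fitting setup are illustrative, not the paper's pipeline.

```python
# Hedged sketch: fit a Wang-et-al.-style adoption curve to monthly counts of
# fine-tuned models derived from one open-weight base model.
import numpy as np
from scipy.stats import norm
from scipy.optimize import curve_fit

def cumulative_adoption(t, lam, mu, sigma, m=30.0):
    return m * (np.exp(lam * norm.cdf((np.log(t) - mu) / sigma)) - 1.0)

rng = np.random.default_rng(0)
months = np.arange(1, 25)
observed = cumulative_adoption(months, lam=1.8, mu=1.2, sigma=0.8)
observed += rng.normal(scale=2.0, size=months.size)  # toy noisy observations

params, _ = curve_fit(cumulative_adoption, months, observed, p0=(1.0, 1.0, 1.0))
print("fitted (lambda, mu, sigma):", np.round(params, 2))
```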
2025-02-24T23:14:20.487000 | Audio-FLAN: A Preliminary Release | 2 | {
"_id": "5fd6f670053c8345eddc1b68",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/5fd6f670053c8345eddc1b68/cuTsu2krRYHC6zYGD2dpQ.jpeg",
"followerCount": 13,
"fullname": "Ruibin Yuan",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "a43992899",
"type": "user"
} | true | null | 2502.16584 | [
{
"_id": "67bd42386959e61abd265a9b",
"hidden": false,
"name": "Liumeng Xue",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-25T16:02:19.831Z",
"user": {
"_id": "6290e961473e457463a53248",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6290e961473e457463a53248/-58Dp5uHvdjs9yOupAMs0.jpeg",
"fullname": "Liumeng Xue",
"isPro": true,
"type": "user",
"user": "lmxue"
}
},
{
"_id": "67bd42386959e61abd265a9c",
"hidden": false,
"name": "Ziya Zhou",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-25T16:04:58.842Z",
"user": {
"_id": "64191c925d6f3d15c65137b5",
"avatarUrl": "/avatars/0e6a5fabf11904b9c31073ad1e10f6c6.svg",
"fullname": "Ziya Zhou",
"isPro": false,
"type": "user",
"user": "DangeZy"
}
},
{
"_id": "67bd42386959e61abd265a9d",
"hidden": false,
"name": "Jiahao Pan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bd42386959e61abd265a9e",
"hidden": false,
"name": "Zixuan Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bd42386959e61abd265a9f",
"hidden": false,
"name": "Shuai Fan",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-25T16:05:22.700Z",
"user": {
"_id": "669a705f7c6ce7348dae9dcd",
"avatarUrl": "/avatars/b042a8ea45848e3c9c77ce532286692f.svg",
"fullname": "Shuai Fan",
"isPro": false,
"type": "user",
"user": "Micro20"
}
},
{
"_id": "67bd42386959e61abd265aa0",
"hidden": false,
"name": "Yinghao Ma",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-25T16:05:28.773Z",
"user": {
"_id": "6410665d5364a661bee22524",
"avatarUrl": "/avatars/f1cb0e07f36933187ceccbd5dcbeff79.svg",
"fullname": "Yinghao Ma",
"isPro": false,
"type": "user",
"user": "nicolaus625"
}
},
{
"_id": "67bd42386959e61abd265aa1",
"hidden": false,
"name": "Sitong Cheng",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bd42386959e61abd265aa2",
"hidden": false,
"name": "Dongchao Yang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-25T16:05:44.137Z",
"user": {
"_id": "63c7636b656e7822e23e6f6b",
"avatarUrl": "/avatars/41bfd5e1ce6daab6058eacfd33c7a268.svg",
"fullname": "Dongchao Yang",
"isPro": false,
"type": "user",
"user": "Dongchao"
}
},
{
"_id": "67bd42386959e61abd265aa3",
"hidden": false,
"name": "Haohan Guo",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-25T16:05:50.198Z",
"user": {
"_id": "63783d9d84318944acd305c4",
"avatarUrl": "/avatars/cafd4fd0287e77cc6ebf37c8c8509174.svg",
"fullname": "Haohan Guo",
"isPro": false,
"type": "user",
"user": "hhguo"
}
},
{
"_id": "67bd42386959e61abd265aa4",
"hidden": false,
"name": "Yujia Xiao",
"status": "claimed_verified",
"statusLastChangedAt": "2025-03-04T13:00:34.602Z",
"user": {
"_id": "674836767b7151c3ff30f865",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/no-auth/jcwK5NW-efhCt8s2TE6vK.png",
"fullname": "Yujia Xiao",
"isPro": false,
"type": "user",
"user": "Yogurt928"
}
},
{
"_id": "67bd42386959e61abd265aa5",
"hidden": false,
"name": "Xinsheng Wang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-25T16:06:03.447Z",
"user": {
"_id": "64cc69be8174e45ae076393e",
"avatarUrl": "/avatars/6fd566535cadbed63e1f956587157d13.svg",
"fullname": "Xinsheng Wang",
"isPro": false,
"type": "user",
"user": "wangxso"
}
},
{
"_id": "67bd42386959e61abd265aa6",
"hidden": false,
"name": "Zixuan Shen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bd42386959e61abd265aa7",
"hidden": false,
"name": "Chuanbo Zhu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-25T16:06:27.024Z",
"user": {
"_id": "6721db0fab7602a59648aec6",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/no-auth/bhWBJ26b_94mZICgi3jVZ.png",
"fullname": "zhu chuanbo",
"isPro": false,
"type": "user",
"user": "zhuchb"
}
},
{
"_id": "67bd42386959e61abd265aa8",
"hidden": false,
"name": "Xinshen Zhang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-25T16:06:40.621Z",
"user": {
"_id": "668b90710b04331a0bbacbb0",
"avatarUrl": "/avatars/1a457fc4b4d242a9a7104ba38d5c2467.svg",
"fullname": "ZHANG Xinshen",
"isPro": false,
"type": "user",
"user": "Ashaire"
}
},
{
"_id": "67bd42386959e61abd265aa9",
"hidden": false,
"name": "Tianchi Liu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-25T16:06:47.569Z",
"user": {
"_id": "6756ae32298969739a42d5f9",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6756ae32298969739a42d5f9/XethqlkSvZUi3ZFnNLqaF.jpeg",
"fullname": "Tianchi Liu",
"isPro": false,
"type": "user",
"user": "liu-tianchi"
}
},
{
"_id": "67bd42386959e61abd265aaa",
"hidden": false,
"name": "Ruibin Yuan",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-25T16:06:54.742Z",
"user": {
"_id": "5fd6f670053c8345eddc1b68",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/5fd6f670053c8345eddc1b68/cuTsu2krRYHC6zYGD2dpQ.jpeg",
"fullname": "Ruibin Yuan",
"isPro": false,
"type": "user",
"user": "a43992899"
}
},
{
"_id": "67bd42386959e61abd265aab",
"hidden": false,
"name": "Zeyue Tian",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bd42386959e61abd265aac",
"hidden": false,
"name": "Haohe Liu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-25T16:07:13.402Z",
"user": {
"_id": "6155245d1c762d4d61b51d5d",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1677233708065-6155245d1c762d4d61b51d5d.png",
"fullname": "haoheliu",
"isPro": false,
"type": "user",
"user": "haoheliu"
}
},
{
"_id": "67bd42386959e61abd265aad",
"hidden": false,
"name": "Emmanouil Benetos",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-25T16:07:19.685Z",
"user": {
"_id": "66b0a0839797b94f001ed874",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/66b0a0839797b94f001ed874/BBtlJdQNyzQX10chnU81s.jpeg",
"fullname": "Emmanouil Benetos",
"isPro": false,
"type": "user",
"user": "emmanouilb"
}
},
{
"_id": "67bd42386959e61abd265aae",
"hidden": false,
"name": "Ge Zhang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-25T16:07:26.226Z",
"user": {
"_id": "638efcf4c67af472d316d424",
"avatarUrl": "/avatars/97a57859d7d87a3a8f1bb41d32a72bc2.svg",
"fullname": "Ge Zhang",
"isPro": false,
"type": "user",
"user": "zhangysk"
}
},
{
"_id": "67bd42386959e61abd265aaf",
"hidden": false,
"name": "Yike Guo",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-25T16:07:36.243Z",
"user": {
"_id": "64762b325f70f9b2d0ade28e",
"avatarUrl": "/avatars/1f37382724b475ece805f943a8858acd.svg",
"fullname": "Yike Guo",
"isPro": false,
"type": "user",
"user": "SuaLily"
}
},
{
"_id": "67bd42386959e61abd265ab0",
"hidden": false,
"name": "Wei Xue",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-23T14:24:15 | Audio-FLAN: A Preliminary Release | Recent advancements in audio tokenization have significantly enhanced the
integration of audio capabilities into large language models (LLMs). However,
audio understanding and generation are often treated as distinct tasks,
hindering the development of truly unified audio-language models. While
instruction tuning has demonstrated remarkable success in improving
generalization and zero-shot learning across text and vision, its application
to audio remains largely unexplored. A major obstacle is the lack of
comprehensive datasets that unify audio understanding and generation. To
address this, we introduce Audio-FLAN, a large-scale instruction-tuning dataset
covering 80 diverse tasks across speech, music, and sound domains, with over
100 million instances. Audio-FLAN lays the foundation for unified
audio-language models that can seamlessly handle both understanding (e.g.,
transcription, comprehension) and generation (e.g., speech, music, sound) tasks
across a wide range of audio domains in a zero-shot manner. The Audio-FLAN
dataset is available on HuggingFace and GitHub and will be continuously
updated. | 32 | 67bd423b6959e61abd265b88 | null | null |
|
2025-02-24T23:14:12.363000 | Slamming: Training a Speech Language Model on One GPU in a Day | 2 | {
"_id": "66b9bc2dacdbc1d0b39c3b50",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/hwR0pVfP_E8XjimXIxDOU.jpeg",
"followerCount": 5,
"fullname": "Gallil Maimon",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "gallilmaimon",
"type": "user"
} | true | [
"https://cdn-uploads.huggingface.co/production/uploads/66b9bc2dacdbc1d0b39c3b50/t93GkoiYRplnXH1Go0MmY.png"
] | 2502.15814 | [
{
"_id": "67bd3972f077ddf1f98bacda",
"hidden": false,
"name": "Gallil Maimon",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-25T09:39:36.258Z",
"user": {
"_id": "66b9bc2dacdbc1d0b39c3b50",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/hwR0pVfP_E8XjimXIxDOU.jpeg",
"fullname": "Gallil Maimon",
"isPro": false,
"type": "user",
"user": "gallilmaimon"
}
},
{
"_id": "67bd3972f077ddf1f98bacdb",
"hidden": false,
"name": "Avishai Elmakies",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-25T09:39:33.712Z",
"user": {
"_id": "644662145004f2cb3af08b27",
"avatarUrl": "/avatars/5f2af24c7410a5db46374d0b84fb479d.svg",
"fullname": "Avishai Elmakies",
"isPro": false,
"type": "user",
"user": "avishai-elmakies"
}
},
{
"_id": "67bd3972f077ddf1f98bacdc",
"hidden": false,
"name": "Yossi Adi",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-25T16:02:07.243Z",
"user": {
"_id": "6481e135578646b5c2386728",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6481e135578646b5c2386728/SPva4iNw0pORiCXD45cx9.jpeg",
"fullname": "Yossi Adi",
"isPro": false,
"type": "user",
"user": "adiyoss"
}
}
] | 2025-02-19T17:21:15 | Slamming: Training a Speech Language Model on One GPU in a Day | We introduce Slam, a recipe for training high-quality Speech Language Models
(SLMs) on a single academic GPU in 24 hours. We do so through empirical
analysis of model initialisation and architecture, synthetic training data,
preference optimisation with synthetic data and tweaking all other components.
We empirically demonstrate that this training recipe also scales well with more compute, achieving results on par with leading SLMs at a fraction of the compute cost. We hope these insights will make SLM training and research more accessible. In the context of SLM scaling laws, our results far outperform predicted compute-optimal performance, giving an optimistic view of SLM feasibility. See code, data, models, and samples at https://pages.cs.huji.ac.il/adiyoss-lab/slamming. | 65 | 67bd3973f077ddf1f98bacf9 | null | null |
|
2025-02-24T22:48:30.357000 | Benchmarking Temporal Reasoning and Alignment Across Chinese Dynasties | 4 | {
"_id": "644a4fbc2166258fccc664bc",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/8k3b44MbhQiWuo6i8BnYl.jpeg",
"followerCount": 6,
"fullname": "Jialong Wu",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "callanwu",
"type": "user"
} | true | null | 2502.16922 | [
{
"_id": "67bd3d6b60186d7478467208",
"hidden": false,
"name": "Zhenglin Wang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-25T09:39:15.633Z",
"user": {
"_id": "6643261b8876db14227eeb19",
"avatarUrl": "/avatars/67428c9e37a2273697c0547e1783ec6b.svg",
"fullname": "Zhenglin Wang",
"isPro": false,
"type": "user",
"user": "wzl0228"
}
},
{
"_id": "67bd3d6b60186d7478467209",
"hidden": false,
"name": "Jialong Wu",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-25T15:06:02.856Z",
"user": {
"_id": "644a4fbc2166258fccc664bc",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/8k3b44MbhQiWuo6i8BnYl.jpeg",
"fullname": "Jialong Wu",
"isPro": false,
"type": "user",
"user": "callanwu"
}
},
{
"_id": "67bd3d6b60186d747846720a",
"hidden": false,
"name": "Pengfei LI",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bd3d6b60186d747846720b",
"hidden": false,
"name": "Yong Jiang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-25T16:29:23.380Z",
"user": {
"_id": "678ddd806aa55e76bfffb953",
"avatarUrl": "/avatars/f447936c286a6a2d2874a760210b2f17.svg",
"fullname": "Yong Jiang",
"isPro": false,
"type": "user",
"user": "yongjiangNLP"
}
},
{
"_id": "67bd3d6b60186d747846720c",
"hidden": false,
"name": "Deyu Zhou",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-25T16:29:35.320Z",
"user": {
"_id": "64e821f2bddc5b1072b15c2e",
"avatarUrl": "/avatars/618b5a48f2fa62daff4e1922a9aa9e8b.svg",
"fullname": "zhoudeyu",
"isPro": false,
"type": "user",
"user": "zhoudeyu"
}
}
] | 2025-02-24T07:27:54 | Benchmarking Temporal Reasoning and Alignment Across Chinese Dynasties | Temporal reasoning is fundamental to human cognition and is crucial for
various real-world applications. While recent advances in Large Language Models
have demonstrated promising capabilities in temporal reasoning, existing
benchmarks primarily rely on rule-based construction, lack contextual depth,
and involve a limited range of temporal entities. To address these limitations,
we introduce Chinese Time Reasoning (CTM), a benchmark designed to evaluate
LLMs on temporal reasoning within the extensive scope of Chinese dynastic
chronology. CTM emphasizes cross-entity relationships, pairwise temporal
alignment, and contextualized and culturally-grounded reasoning, providing a
comprehensive evaluation. Extensive experimental results reveal the challenges
posed by CTM and highlight potential avenues for improvement. | 7 | 67bd3d6c60186d7478467249 | null | null |
|
2025-02-24T22:39:29.837000 | DICEPTION: A Generalist Diffusion Model for Visual Perceptual Tasks | 3 | {
"_id": "646efd223dd912a539e0bd46",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/EOFAv5xvOgJOzuDgh4nSb.png",
"followerCount": 12,
"fullname": "Canyu Zhao",
"isHf": false,
"isMod": false,
"isPro": true,
"name": "Canyu",
"type": "user"
} | true | null | 2502.17157 | [
{
"_id": "67bd3285ac4a596a43b53205",
"hidden": false,
"name": "Canyu Zhao",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-25T09:40:20.829Z",
"user": {
"_id": "646efd223dd912a539e0bd46",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/EOFAv5xvOgJOzuDgh4nSb.png",
"fullname": "Canyu Zhao",
"isPro": true,
"type": "user",
"user": "Canyu"
}
},
{
"_id": "67bd3285ac4a596a43b53206",
"hidden": false,
"name": "Mingyu Liu",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-26T08:38:01.503Z",
"user": {
"_id": "652e25d2e647b0ee0a024f26",
"avatarUrl": "/avatars/b5c65cf6c8d0ddc9b8ef0226e0295d56.svg",
"fullname": "Mingyu Liu",
"isPro": false,
"type": "user",
"user": "MingyuLiu"
}
},
{
"_id": "67bd3285ac4a596a43b53207",
"hidden": false,
"name": "Huanyi Zheng",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-25T09:40:18.731Z",
"user": {
"_id": "64d60375d7e30889c65e8cf4",
"avatarUrl": "/avatars/640f7c570fc45194557ce7931bdfe87f.svg",
"fullname": "Huanyi Zheng",
"isPro": false,
"type": "user",
"user": "zhyya"
}
},
{
"_id": "67bd3285ac4a596a43b53208",
"hidden": false,
"name": "Muzhi Zhu",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-25T09:40:11.968Z",
"user": {
"_id": "632179745fc60c44fd91fc33",
"avatarUrl": "/avatars/37d4fefbcc19f091dccffefec9706de2.svg",
"fullname": "zhumuzhi",
"isPro": false,
"type": "user",
"user": "Z-MU-Z"
}
},
{
"_id": "67bd3285ac4a596a43b53209",
"hidden": false,
"name": "Zhiyue Zhao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bd3285ac4a596a43b5320a",
"hidden": false,
"name": "Hao Chen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bd3285ac4a596a43b5320b",
"hidden": false,
"name": "Tong He",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bd3285ac4a596a43b5320c",
"hidden": false,
"name": "Chunhua Shen",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-24T13:51:06 | DICEPTION: A Generalist Diffusion Model for Visual Perceptual Tasks | Our primary goal here is to create a good, generalist perception model that
can tackle multiple tasks, within limits on computational resources and
training data. To achieve this, we resort to text-to-image diffusion models
pre-trained on billions of images. Our exhaustive evaluations demonstrate that DICEPTION effectively tackles multiple perception tasks, achieving performance on par with state-of-the-art models. We achieve results on par with SAM-ViT-H using only 0.06% of its data (e.g., 600K vs. 1B pixel-level annotated images). Inspired by Wang et al., DICEPTION formulates
the outputs of various perception tasks using color encoding; and we show that
the strategy of assigning random colors to different instances is highly
effective in both entity segmentation and semantic segmentation. Unifying
various perception tasks as conditional image generation enables us to fully
leverage pre-trained text-to-image models. Thus, DICEPTION can be efficiently
trained at a cost of orders of magnitude lower, compared to conventional models
that were trained from scratch. When adapting our model to other tasks, it only
requires fine-tuning on as few as 50 images and 1% of its parameters. DICEPTION
provides valuable insights and a more promising solution for visual generalist
models. | 51 | 67bd328aac4a596a43b532ae | null | null |
|
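The random-color instance encoding the DICEPTION abstract credits for segmentation can be sketched in a few lines: every instance id is mapped to a random RGB color so the segmentation target becomes an ordinary image a diffusion model can generate. Shapes and palette choice are illustrative assumptions, not the paper's code.

```python
# Hedged sketch of color-encoding instance masks as an image target.
import numpy as np

def colorize_instances(instance_map, seed=0):
    """instance_map: [H, W] ints, 0 = background, 1..K = instance ids."""
    rng = np.random.default_rng(seed)
    k = int(instance_map.max())
    palette = np.zeros((k + 1, 3), dtype=np.uint8)  # background stays black
    palette[1:] = rng.integers(0, 256, size=(k, 3))  # random color per instance
    return palette[instance_map]  # [H, W, 3] color-encoded target image

toy = np.zeros((4, 6), dtype=int)
toy[1:3, 1:3] = 1
toy[0:2, 4:6] = 2
print(colorize_instances(toy)[..., 0])  # red channel of the encoded target
```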
2025-02-24T22:35:41.042000 | Make LoRA Great Again: Boosting LoRA with Adaptive Singular Values and Mixture-of-Experts Optimization Alignment | 4 | {
"_id": "641aa5e391e3376a057bbd4c",
"avatarUrl": "/avatars/5818797f27444fde078b503774ee081c.svg",
"followerCount": 12,
"fullname": "Chenghao Fan",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "Facico",
"type": "user"
} | true | null | 2502.16894 | [
{
"_id": "67bd396ea06bae99f3866911",
"hidden": false,
"name": "Chenghao Fan",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-25T09:39:38.942Z",
"user": {
"_id": "641aa5e391e3376a057bbd4c",
"avatarUrl": "/avatars/5818797f27444fde078b503774ee081c.svg",
"fullname": "Chenghao Fan",
"isPro": false,
"type": "user",
"user": "Facico"
}
},
{
"_id": "67bd396ea06bae99f3866912",
"hidden": false,
"name": "Zhenyi Lu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-25T16:22:55.129Z",
"user": {
"_id": "666b0a01e6cfd60425d00fd9",
"avatarUrl": "/avatars/f6e5447e95785563e850ffcbe7dd6e3d.svg",
"fullname": "LUZHENYI",
"isPro": false,
"type": "user",
"user": "LUzhenyi123111"
}
},
{
"_id": "67bd396ea06bae99f3866913",
"hidden": false,
"name": "Sichen Liu",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-26T08:37:56.904Z",
"user": {
"_id": "641dbda28dc52733fa4419cf",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/641dbda28dc52733fa4419cf/vdIsa6UlMIaqHinGrYDb-.png",
"fullname": "Sichen Liu",
"isPro": false,
"type": "user",
"user": "Seas0"
}
},
{
"_id": "67bd396ea06bae99f3866914",
"hidden": false,
"name": "Xiaoye Qu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bd396ea06bae99f3866915",
"hidden": false,
"name": "Wei Wei",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bd396ea06bae99f3866916",
"hidden": false,
"name": "Chengfeng Gu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-25T16:23:29.392Z",
"user": {
"_id": "675fc6bd34f2e646c06fbb07",
"avatarUrl": "/avatars/60f77c237cd652f80b7f1ecfd358afe1.svg",
"fullname": "gu chengfeng",
"isPro": false,
"type": "user",
"user": "gucf"
}
},
{
"_id": "67bd396ea06bae99f3866917",
"hidden": false,
"name": "Yu Cheng",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-24T06:48:13 | Make LoRA Great Again: Boosting LoRA with Adaptive Singular Values and
Mixture-of-Experts Optimization Alignment | While Low-Rank Adaptation (LoRA) enables parameter-efficient fine-tuning for
Large Language Models (LLMs), its performance often falls short of Full
Fine-Tuning (Full FT). Current methods optimize LoRA by initializing with
static singular value decomposition (SVD) subsets, leading to suboptimal
leveraging of pre-trained knowledge. Another path for improving LoRA is
incorporating a Mixture-of-Experts (MoE) architecture. However, weight
misalignment and complex gradient dynamics make it challenging to adopt SVD priors in the LoRA MoE architecture. To mitigate these issues, we propose GOAT (Great LoRA Mixture-of-Experts), a framework that (1) adaptively integrates relevant priors using an
SVD-structured MoE, and (2) aligns optimization with full fine-tuned MoE by
deriving a theoretical scaling factor. We demonstrate that proper scaling,
without modifying the architecture or training algorithms, boosts LoRA MoE's
efficiency and performance. Experiments across 25 datasets, including natural
language understanding, commonsense reasoning, image classification, and
natural language generation, demonstrate GOAT's state-of-the-art performance,
closing the gap with Full FT. | 23 | 67bd396fa06bae99f3866964 | null | null |
|
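One way to picture the SVD-structured MoE initialization described above is to hand each LoRA expert a disjoint slice of the pre-trained weight's singular triplets. The partition rule and square-root scaling below are assumptions for illustration; GOAT's actual recipe also derives a theoretical scaling factor to align optimization with full fine-tuning.

```python
# Hedged sketch: initialize each LoRA expert from a different spectral slice
# of the pre-trained weight matrix W.
import numpy as np

def svd_moe_lora_init(W, num_experts=4, rank=4):
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    experts = []
    for e in range(num_experts):
        idx = slice(e * rank, (e + 1) * rank)  # disjoint singular triplets
        A = Vt[idx, :] * np.sqrt(S[idx])[:, None]  # [r, in]
        B = U[:, idx] * np.sqrt(S[idx])[None, :]   # [out, r]
        experts.append((B, A))  # expert delta = B @ A
    return experts

W = np.random.randn(64, 64)
experts = svd_moe_lora_init(W)
print("expert 0 delta shape:", (experts[0][0] @ experts[0][1]).shape)
```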
2025-02-24T22:31:17.771000 | Mobile-Agent-V: Learning Mobile Device Operation Through Video-Guided Multi-Agent Collaboration | 2 | {
"_id": "645b10e80c73ea27d13f7aca",
"avatarUrl": "/avatars/95e565306472a15067440b5b43e07a6f.svg",
"followerCount": 3,
"fullname": "xuhaiyang",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "xhyandwyy",
"type": "user"
} | true | [
"https://cdn-uploads.huggingface.co/production/uploads/645b10e80c73ea27d13f7aca/mshxtP77rrnN07f6ux6_0.jpeg"
] | 2502.17110 | [
{
"_id": "67bd3936daef22cbce6d7ef2",
"hidden": false,
"name": "Junyang Wang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-25T16:25:28.473Z",
"user": {
"_id": "6438f6415aa69077ffb16942",
"avatarUrl": "/avatars/c83dbd3e10e88db97c2a86092bad5917.svg",
"fullname": "Junyang Wang",
"isPro": false,
"type": "user",
"user": "junyangwang0410"
}
},
{
"_id": "67bd3936daef22cbce6d7ef3",
"hidden": false,
"name": "Haiyang Xu",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-25T09:39:41.528Z",
"user": {
"_id": "645b10e80c73ea27d13f7aca",
"avatarUrl": "/avatars/95e565306472a15067440b5b43e07a6f.svg",
"fullname": "xuhaiyang",
"isPro": false,
"type": "user",
"user": "xhyandwyy"
}
},
{
"_id": "67bd3936daef22cbce6d7ef4",
"hidden": false,
"name": "Xi Zhang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-25T16:25:39.626Z",
"user": {
"_id": "66b1762e023357106d7e1d50",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/no-auth/knNwe9xeQmIHUT7hQOrvN.png",
"fullname": "Xi Zhang",
"isPro": false,
"type": "user",
"user": "XiZhang"
}
},
{
"_id": "67bd3936daef22cbce6d7ef5",
"hidden": false,
"name": "Ming Yan",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-25T16:29:39.149Z",
"user": {
"_id": "64771cfdd7cf39f2e9381aa9",
"avatarUrl": "/avatars/48adf00c3b653df02628f80511639e19.svg",
"fullname": "Ming",
"isPro": false,
"type": "user",
"user": "MingYan123"
}
},
{
"_id": "67bd3936daef22cbce6d7ef6",
"hidden": false,
"name": "Ji Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bd3936daef22cbce6d7ef7",
"hidden": false,
"name": "Fei Huang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bd3936daef22cbce6d7ef8",
"hidden": false,
"name": "Jitao Sang",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-24T12:51:23 | Mobile-Agent-V: Learning Mobile Device Operation Through Video-Guided
Multi-Agent Collaboration | The rapid increase in mobile device usage necessitates improved automation
for seamless task management. However, many AI-driven frameworks struggle due
to insufficient operational knowledge. Manually written knowledge helps but is
labor-intensive and inefficient. To address these challenges, we introduce
Mobile-Agent-V, a framework that leverages video guidance to provide rich and
cost-effective operational knowledge for mobile automation. Mobile-Agent-V
enhances task execution capabilities by leveraging video inputs without
requiring specialized sampling or preprocessing. Mobile-Agent-V integrates a
sliding window strategy and incorporates a video agent and deep-reflection
agent to ensure that actions align with user instructions. Through this
innovative approach, users can record task processes with guidance, enabling
the system to autonomously learn and execute tasks efficiently. Experimental
results show that Mobile-Agent-V achieves a 30% performance improvement
compared to existing frameworks. | 11 | 67bd3938daef22cbce6d7f9d | null | null |
|
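The sliding-window strategy the Mobile-Agent-V abstract mentions reduces to iterating over overlapping chunks of the demonstration video so each agent step sees recent context. Window and stride sizes are illustrative assumptions.

```python
# Hedged sketch of sliding-window frame batching for a video-guided agent.
def sliding_windows(frames, window=8, stride=4):
    for start in range(0, max(1, len(frames) - window + 1), stride):
        yield frames[start:start + window]

frames = [f"frame_{i:03d}.png" for i in range(20)]
for w in sliding_windows(frames):
    # each window would be passed to the video agent for the next decision
    print(w[0], "->", w[-1])
```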
2025-02-24T22:27:11.566000 | Thus Spake Long-Context Large Language Model | 6 | {
"_id": "64f033ef82c6eea604c4da8b",
"avatarUrl": "/avatars/51b93fea7fd68b4274ee03701245dcca.svg",
"followerCount": 2,
"fullname": "Liu Xiaoran",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "LiuXR",
"type": "user"
} | true | null | 2502.17129 | [
{
"_id": "67bd37cb0d41e01cca99aa8b",
"hidden": false,
"name": "Xiaoran Liu",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-25T09:40:07.298Z",
"user": {
"_id": "64f033ef82c6eea604c4da8b",
"avatarUrl": "/avatars/51b93fea7fd68b4274ee03701245dcca.svg",
"fullname": "Liu Xiaoran",
"isPro": false,
"type": "user",
"user": "LiuXR"
}
},
{
"_id": "67bd37cb0d41e01cca99aa8c",
"hidden": false,
"name": "Ruixiao Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bd37cb0d41e01cca99aa8d",
"hidden": false,
"name": "Mianqiu Huang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-26T08:37:59.249Z",
"user": {
"_id": "6459c7c10aba070266e41bb1",
"avatarUrl": "/avatars/2178cac69cf4123db5e85191160f3795.svg",
"fullname": "mqhuang",
"isPro": false,
"type": "user",
"user": "LutherXD"
}
},
{
"_id": "67bd37cb0d41e01cca99aa8e",
"hidden": false,
"name": "Zhigeng Liu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bd37cb0d41e01cca99aa8f",
"hidden": false,
"name": "Yuerong Song",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bd37cb0d41e01cca99aa90",
"hidden": false,
"name": "Qipeng Guo",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-25T15:15:13.798Z",
"user": {
"_id": "6491cd52b1e5d3444528edb1",
"avatarUrl": "/avatars/a85635d886c7f157b6723dec5c01c030.svg",
"fullname": "Qipeng Guo",
"isPro": false,
"type": "user",
"user": "QipengGuo"
}
},
{
"_id": "67bd37cb0d41e01cca99aa91",
"hidden": false,
"name": "Siyang He",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bd37cb0d41e01cca99aa92",
"hidden": false,
"name": "Qiqi Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bd37cb0d41e01cca99aa93",
"hidden": false,
"name": "Linlin Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bd37cb0d41e01cca99aa94",
"hidden": false,
"name": "Qun Liu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bd37cb0d41e01cca99aa95",
"hidden": false,
"name": "Yaqian Zhou",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bd37cb0d41e01cca99aa96",
"hidden": false,
"name": "Xuanjing Huang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bd37cb0d41e01cca99aa97",
"hidden": false,
"name": "Xipeng Qiu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-25T15:12:06.360Z",
"user": {
"_id": "61457b8deff2c9fdb4de4988",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1632381702899-61457b8deff2c9fdb4de4988.jpeg",
"fullname": "Xipeng Qiu",
"isPro": false,
"type": "user",
"user": "xpqiu"
}
}
] | 2025-02-24T13:19:33 | Thus Spake Long-Context Large Language Model | Long context is an important topic in Natural Language Processing (NLP),
running through the development of NLP architectures, and offers immense opportunities for Large Language Models (LLMs), giving them lifelong learning potential akin to that of humans. Unfortunately, the pursuit of a long context
is accompanied by numerous obstacles. Nevertheless, long context remains a core
competitive advantage for LLMs. In the past two years, the context length of
LLMs has achieved a breakthrough extension to millions of tokens. Moreover, the
research on long-context LLMs has expanded from length extrapolation to a
comprehensive focus on architecture, infrastructure, training, and evaluation
technologies.
Inspired by the symphonic poem Thus Spake Zarathustra, we draw an analogy between the journey of extending the context of LLMs and the attempts of humans to transcend their mortality. In this survey, we illustrate how LLMs struggle between the tremendous need for a longer context and the equal need to accept that context is ultimately finite. To achieve this, we give a
global picture of the lifecycle of long-context LLMs from four perspectives:
architecture, infrastructure, training, and evaluation, showcasing the full
spectrum of long-context technologies. At the end of this survey, we will
present 10 unanswered questions currently faced by long-context LLMs. We hope
this survey can serve as a systematic introduction to the research on
long-context LLMs. | 66 | 67bd37cc0d41e01cca99ab1e | null | null |
|
2025-02-24T22:17:28.937000 | CodeCriticBench: A Holistic Code Critique Benchmark for Large Language Models | 3 | {
"_id": "65377c30e48353201e6fdda0",
"avatarUrl": "/avatars/a8f803b6f2e598eaee9c52c0d2ddfc16.svg",
"followerCount": 7,
"fullname": "Jiaheng Liu",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "CheeryLJH",
"type": "user"
} | true | null | 2502.16614 | [
{
"_id": "67bd36334a9a04b9ca9bbb68",
"hidden": false,
"name": "Alexander Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bd36334a9a04b9ca9bbb69",
"hidden": false,
"name": "Marcus Dong",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bd36334a9a04b9ca9bbb6a",
"hidden": false,
"name": "Jiaheng Liu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-25T16:09:48.690Z",
"user": {
"_id": "65377c30e48353201e6fdda0",
"avatarUrl": "/avatars/a8f803b6f2e598eaee9c52c0d2ddfc16.svg",
"fullname": "Jiaheng Liu",
"isPro": false,
"type": "user",
"user": "CheeryLJH"
}
},
{
"_id": "67bd36334a9a04b9ca9bbb6b",
"hidden": false,
"name": "Wei Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bd36334a9a04b9ca9bbb6c",
"hidden": false,
"name": "Yejie Wang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-25T16:10:04.798Z",
"user": {
"_id": "6342dc0ee2647466b42918ab",
"avatarUrl": "/avatars/a80a3df2f67410662bf9681ed8834b17.svg",
"fullname": "Yejie Wang",
"isPro": false,
"type": "user",
"user": "banksy235"
}
},
{
"_id": "67bd36334a9a04b9ca9bbb6d",
"hidden": false,
"name": "Jian Yang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bd36334a9a04b9ca9bbb6e",
"hidden": false,
"name": "Ge Zhang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-25T16:19:39.556Z",
"user": {
"_id": "638efcf4c67af472d316d424",
"avatarUrl": "/avatars/97a57859d7d87a3a8f1bb41d32a72bc2.svg",
"fullname": "Ge Zhang",
"isPro": false,
"type": "user",
"user": "zhangysk"
}
},
{
"_id": "67bd36334a9a04b9ca9bbb6f",
"hidden": false,
"name": "Tianyu Liu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bd36334a9a04b9ca9bbb70",
"hidden": false,
"name": "Zhongyuan Peng",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-25T16:20:18.644Z",
"user": {
"_id": "63299f93688ad82b783aaf20",
"avatarUrl": "/avatars/e68e6f5add62edddfbdd3795f3a72347.svg",
"fullname": "zhongyuan peng",
"isPro": false,
"type": "user",
"user": "happzy2633"
}
},
{
"_id": "67bd36334a9a04b9ca9bbb71",
"hidden": false,
"name": "Yingshui Tan",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-25T16:20:25.822Z",
"user": {
"_id": "6732f05d6d413742b5547249",
"avatarUrl": "/avatars/c77b9fc579b0353e9c271d985b410342.svg",
"fullname": "Yingshui Tan",
"isPro": false,
"type": "user",
"user": "YingshuiTan1996"
}
},
{
"_id": "67bd36334a9a04b9ca9bbb72",
"hidden": false,
"name": "Yuanxing Zhang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-25T16:20:33.271Z",
"user": {
"_id": "64241749a05235e2f8d34cb0",
"avatarUrl": "/avatars/e88967d77588f7205fbb110a51125e5b.svg",
"fullname": "Yuanxing Zhang",
"isPro": false,
"type": "user",
"user": "LongoXC"
}
},
{
"_id": "67bd36334a9a04b9ca9bbb73",
"hidden": false,
"name": "Zhexu Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bd36334a9a04b9ca9bbb74",
"hidden": false,
"name": "Weixun Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bd36334a9a04b9ca9bbb75",
"hidden": false,
"name": "Yancheng He",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bd36334a9a04b9ca9bbb76",
"hidden": false,
"name": "Ken Deng",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bd36334a9a04b9ca9bbb77",
"hidden": false,
"name": "Wangchunshu Zhou",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-25T16:21:18.710Z",
"user": {
"_id": "628c8598ef14f971b698107f",
"avatarUrl": "/avatars/3a4ad87e6b5f9e836a1160d869df1447.svg",
"fullname": "Zhou",
"isPro": false,
"type": "user",
"user": "Wangchunshu"
}
},
{
"_id": "67bd36334a9a04b9ca9bbb78",
"hidden": false,
"name": "Wenhao Huang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-25T16:21:27.459Z",
"user": {
"_id": "641e5bf65f274a0a92c2f6a2",
"avatarUrl": "/avatars/c15a54c51998c0e6367685e8e1737ec9.svg",
"fullname": "Wenhao Huang",
"isPro": false,
"type": "user",
"user": "EZ-hwh"
}
},
{
"_id": "67bd36334a9a04b9ca9bbb79",
"hidden": false,
"name": "Zhaoxiang Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-23T15:36:43 | CodeCriticBench: A Holistic Code Critique Benchmark for Large Language
Models | The critique capacity of Large Language Models (LLMs) is essential for reasoning abilities, as it can provide necessary suggestions (e.g., detailed analysis and constructive feedback). Therefore, how to evaluate the critique
capacity of LLMs has drawn great attention and several critique benchmarks have
been proposed. However, existing critique benchmarks usually have the following
limitations: (1) focusing on diverse reasoning tasks in general domains while evaluating code tasks insufficiently (e.g., covering only the code generation task), with queries of relatively low difficulty (e.g., the code queries of CriticBench are from HumanEval and MBPP); and (2) lacking comprehensive
evaluation from different dimensions. To address these limitations, we
introduce a holistic code critique benchmark for LLMs called CodeCriticBench.
Specifically, our CodeCriticBench includes two mainstream code tasks (i.e.,
code generation and code QA) with different difficulties. Besides, the
evaluation protocols include basic critique evaluation and advanced critique
evaluation for different characteristics, where fine-grained evaluation
checklists are well-designed for advanced settings. Finally, we conduct
extensive experimental results of existing LLMs, which show the effectiveness
of CodeCriticBench. | 23 | 67bd36354a9a04b9ca9bbc16 | null | null |
|
2025-02-24T21:59:50.456000 | Multimodal Inconsistency Reasoning (MMIR): A New Benchmark for Multimodal Reasoning Models | 2 | {
"_id": "64679a226192d39142245e5e",
"avatarUrl": "/avatars/05abee0b6317f100923936ca2099e9eb.svg",
"followerCount": 4,
"fullname": "Xin Eric Wang",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "xw-eric",
"type": "user"
} | true | null | 2502.16033 | [
{
"_id": "67bd31d0d055a27740b16a30",
"hidden": false,
"name": "Qianqi Yan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bd31d0d055a27740b16a31",
"hidden": false,
"name": "Yue Fan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bd31d0d055a27740b16a32",
"hidden": false,
"name": "Hongquan Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bd31d0d055a27740b16a33",
"hidden": false,
"name": "Shan Jiang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bd31d0d055a27740b16a34",
"hidden": false,
"name": "Yang Zhao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bd31d0d055a27740b16a35",
"hidden": false,
"name": "Xinze Guan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bd31d0d055a27740b16a36",
"hidden": false,
"name": "Ching-Chen Kuo",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bd31d0d055a27740b16a37",
"hidden": false,
"name": "Xin Eric Wang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-25T16:24:13.717Z",
"user": {
"_id": "64679a226192d39142245e5e",
"avatarUrl": "/avatars/05abee0b6317f100923936ca2099e9eb.svg",
"fullname": "Xin Eric Wang",
"isPro": false,
"type": "user",
"user": "xw-eric"
}
}
] | 2025-02-22T01:52:37 | Multimodal Inconsistency Reasoning (MMIR): A New Benchmark for
Multimodal Reasoning Models | Existing Multimodal Large Language Models (MLLMs) are predominantly trained
and tested on consistent visual-textual inputs, leaving open the question of
whether they can handle inconsistencies in real-world, layout-rich content. To
bridge this gap, we propose the Multimodal Inconsistency Reasoning (MMIR)
benchmark to assess MLLMs' ability to detect and reason about semantic
mismatches in artifacts such as webpages, presentation slides, and posters.
MMIR comprises 534 challenging samples, each containing synthetically injected
errors across five reasoning-heavy categories: Factual Contradiction, Identity
Misattribution, Contextual Mismatch, Quantitative Discrepancy, and
Temporal/Spatial Incoherence. We evaluate six state-of-the-art MLLMs, showing
that models with dedicated multimodal reasoning capabilities, such as o1,
substantially outperform their counterparts while open-source models remain
particularly vulnerable to inconsistency errors. Detailed error analyses
further show that models excel in detecting inconsistencies confined to a
single modality, particularly in text, but struggle with cross-modal conflicts
and complex layouts. Probing experiments reveal that single-modality prompting, including Chain-of-Thought (CoT) and Set-of-Mark (SoM) methods, yields only marginal gains, exposing a key bottleneck in cross-modal reasoning. Our findings
highlight the need for advanced multimodal reasoning and point to future
research on multimodal inconsistency. | 15 | 67bd31d2d055a27740b16ad9 | null | null |
|
2025-02-24T21:59:15.571000 | Beyond Release: Access Considerations for Generative AI Systems | 2 | {
"_id": "62543749b777cd32720675c2",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1658760912583-62543749b777cd32720675c2.jpeg",
"followerCount": 81,
"fullname": "Irene Solaiman",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "irenesolaiman",
"type": "user"
} | true | [
"https://cdn-uploads.huggingface.co/production/uploads/62543749b777cd32720675c2/LwZmJUoXiJriC_c1DZ7qM.png"
] | 2502.16701 | [
{
"_id": "67bd31d6bf6d46017e515a58",
"hidden": false,
"name": "Irene Solaiman",
"status": "extracted_confirmed",
"statusLastChangedAt": "2025-02-25T03:43:21.348Z",
"user": {
"_id": "62543749b777cd32720675c2",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1658760912583-62543749b777cd32720675c2.jpeg",
"fullname": "Irene Solaiman",
"isPro": false,
"type": "user",
"user": "irenesolaiman"
}
},
{
"_id": "67bd31d6bf6d46017e515a59",
"hidden": false,
"name": "Rishi Bommasani",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bd31d6bf6d46017e515a5a",
"hidden": false,
"name": "Dan Hendrycks",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-25T16:35:41.200Z",
"user": {
"_id": "63f55aa1b51da4d61da9c96b",
"avatarUrl": "/avatars/56cf9c2d8295c4549248d3b0a4933043.svg",
"fullname": "Dan Hendrycks",
"isPro": false,
"type": "user",
"user": "hendrycks"
}
},
{
"_id": "67bd31d6bf6d46017e515a5b",
"hidden": false,
"name": "Ariel Herbert-Voss",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bd31d6bf6d46017e515a5c",
"hidden": false,
"name": "Yacine Jernite",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-25T16:35:52.204Z",
"user": {
"_id": "5ee3a7cd2a3eae3cbdad1305",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1594144055859-5ee3a7cd2a3eae3cbdad1305.jpeg",
"fullname": "Yacine Jernite",
"isPro": false,
"type": "user",
"user": "yjernite"
}
},
{
"_id": "67bd31d6bf6d46017e515a5d",
"hidden": false,
"name": "Aviya Skowron",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-25T16:35:58.505Z",
"user": {
"_id": "63c5dfc8d5a5cd2043e6f03c",
"avatarUrl": "/avatars/edcfcd9cfb03286d670e6c5743efef6a.svg",
"fullname": "Aviya Skowron",
"isPro": false,
"type": "user",
"user": "avi-skowron"
}
},
{
"_id": "67bd31d6bf6d46017e515a5e",
"hidden": false,
"name": "Andrew Trask",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-25T16:36:04.995Z",
"user": {
"_id": "631d812ae207d8fe9560e57b",
"avatarUrl": "/avatars/8dbc78b47e3334b518f07a2fb18d1928.svg",
"fullname": "Andrew Trask",
"isPro": false,
"type": "user",
"user": "actrask"
}
}
] | 2025-02-23T20:06:12 | Beyond Release: Access Considerations for Generative AI Systems | Generative AI release decisions determine whether system components are made
available, but release does not address many other elements that change how
users and stakeholders are able to engage with a system. Beyond release, access
to system components informs potential risks and benefits. Access refers to the practical infrastructural, technical, and societal requirements for using available components in some way. We deconstruct access along three axes:
resourcing, technical usability, and utility. Within each category, a set of
variables per system component clarify tradeoffs. For example, resourcing
requires access to computing infrastructure to serve model weights. We also
compare the accessibility of four high-performance language models, two open-weight and two closed-weight, showing that similar considerations apply to all four based on access variables rather than release status alone. Access variables set the foundation for being able to scale or increase access to users; we examine the scale of access and how scale affects the ability to manage and intervene on risks. This framework better
encompasses the landscape and risk-benefit tradeoffs of system releases to
inform system release decisions, research, and policy. | 11 | 67bd31d7bf6d46017e515a7e | null | null |
|
2025-02-24T21:23:29.485000 | PLDR-LLMs Learn A Generalizable Tensor Operator That Can Replace Its Own Deep Neural Net At Inference | 2 | {
"_id": "671ddb3bf89c9b8208568e73",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/no-auth/-Q77RLuIvzpVU95WGdM7u.png",
"followerCount": 2,
"fullname": "Burc Gokden",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "fromthesky",
"type": "user"
} | true | null | 2502.13502 | [
{
"_id": "67bd1db005a599263a2a684e",
"hidden": false,
"name": "Burc Gokden",
"status": "extracted_pending",
"statusLastChangedAt": "2025-02-25T01:32:33.775Z",
"user": {
"_id": "671ddb3bf89c9b8208568e73",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/no-auth/-Q77RLuIvzpVU95WGdM7u.png",
"fullname": "Burc Gokden",
"isPro": false,
"type": "user",
"user": "fromthesky"
}
}
] | 2025-02-19T07:43:36 | PLDR-LLMs Learn A Generalizable Tensor Operator That Can Replace Its Own
Deep Neural Net At Inference | We show that Large Language Model from Power Law Decoder Representations
(PLDR-LLM) is a foundational model whose deductive outputs are invariant
tensors up to a small perturbation. PLDR-LLM learns a singularity condition for the deductive outputs that enables the once-inferred energy-curvature tensor G_{LM} to replace the deep neural network of power law graph attention (PLGA) that generates the deductive outputs at inference. We demonstrate
that a cache for G_{LM} (G-cache) and KV-cache can be implemented in
a straightforward manner to improve inference time. The invariance and generalizability of the deductive outputs hold at very high fidelity: after caching, deductive outputs have the same RMSE and determinant values up to 15 decimal places, and zero-shot benchmark scores remain unchanged. Ablation
studies show that learned deductive outputs have distinct loss and accuracy
characteristics from models pretrained with transferred, randomly initialized
or identity tensors as a constant tensor operator and an LLM with scaled-dot
product attention (SDPA) is a special case of PLDR-LLM where G_{LM}
is predefined as identity. The observed invariance characteristic introduces a
novel asymmetry between training and inference phases with caching. We outline
observed common characteristics of the deductive outputs for the learned
singularity condition. We provide an implementation of a training and inference
framework for PLDR-LLM with KV-cache and G-cache. | 2 | 67bd1db105a599263a2a6851 | null | null |
|
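The G-cache described above amounts to running the expensive operator network once, then reusing its (empirically invariant) output at every subsequent inference step. A minimal sketch with a stand-in operator, not the paper's power law graph attention:

```python
# Hedged sketch of a G-cache: infer the tensor operator once, then freeze it.
import numpy as np

class GCache:
    def __init__(self, operator_net):
        self.operator_net = operator_net  # expensive net producing G_{LM}
        self._G = None

    def get(self, x):
        if self._G is None:
            self._G = self.operator_net(x)  # infer once, cache thereafter
        return self._G

def stand_in_operator_net(x):
    return x @ x.T / x.shape[1]  # placeholder computation

x = np.random.randn(8, 16)
cache = GCache(stand_in_operator_net)
G1, G2 = cache.get(x), cache.get(x)  # second call skips the network
print("cached output reused:", np.allclose(G1, G2))
```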
2025-02-24T21:06:14.906000 | Towards Fully-Automated Materials Discovery via Large-Scale Synthesis Dataset and Expert-Level LLM-as-a-Judge | 2 | {
"_id": "5f3a52317e583543386218db",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/5f3a52317e583543386218db/WLnW1_fCic9NWfMjE3yB-.jpeg",
"followerCount": 77,
"fullname": "Heegyu Kim",
"isHf": false,
"isMod": false,
"isPro": true,
"name": "heegyu",
"type": "user"
} | true | null | 2502.16457 | [
{
"_id": "67bd23159e826530ef606d4d",
"hidden": false,
"name": "Heegyu Kim",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-25T09:40:23.108Z",
"user": {
"_id": "5f3a52317e583543386218db",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/5f3a52317e583543386218db/WLnW1_fCic9NWfMjE3yB-.jpeg",
"fullname": "Heegyu Kim",
"isPro": true,
"type": "user",
"user": "heegyu"
}
},
{
"_id": "67bd23159e826530ef606d4e",
"hidden": false,
"name": "Taeyang Jeon",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bd23159e826530ef606d4f",
"hidden": false,
"name": "Seungtaek Choi",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-26T08:38:03.427Z",
"user": {
"_id": "658a7aed74d1a1cbd0598bfb",
"avatarUrl": "/avatars/055dc3c82820644b316b60d118c6ff94.svg",
"fullname": "Seungtaek Choi",
"isPro": false,
"type": "user",
"user": "seungtaek-choi"
}
},
{
"_id": "67bd23159e826530ef606d50",
"hidden": false,
"name": "Jihoon Hong",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bd23159e826530ef606d51",
"hidden": false,
"name": "Dongwon Jeon",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bd23159e826530ef606d52",
"hidden": false,
"name": "Sungbum Cho",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bd23159e826530ef606d53",
"hidden": false,
"name": "Ga-Yeon Baek",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bd23159e826530ef606d54",
"hidden": false,
"name": "Kyung-Won Kwak",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bd23159e826530ef606d55",
"hidden": false,
"name": "Dong-Hee Lee",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bd23159e826530ef606d56",
"hidden": false,
"name": "Sun-Jin Choi",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bd23159e826530ef606d57",
"hidden": false,
"name": "Jisu Bae",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bd23159e826530ef606d58",
"hidden": false,
"name": "Chihoon Lee",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bd23159e826530ef606d59",
"hidden": false,
"name": "Yunseo Kim",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bd23159e826530ef606d5a",
"hidden": false,
"name": "Jinsung Park",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bd23159e826530ef606d5b",
"hidden": false,
"name": "Hyunsouk Cho",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-23T06:16:23 | Towards Fully-Automated Materials Discovery via Large-Scale Synthesis
Dataset and Expert-Level LLM-as-a-Judge | Materials synthesis is vital for innovations such as energy storage,
catalysis, electronics, and biomedical devices. Yet, the process relies heavily
on empirical, trial-and-error methods guided by expert intuition. Our work aims
to support the materials science community by providing a practical,
data-driven resource. We have curated a comprehensive dataset of 17K
expert-verified synthesis recipes from open-access literature, which forms the
basis of our newly developed benchmark, AlchemyBench. AlchemyBench offers an
end-to-end framework that supports research in large language models applied to
synthesis prediction. It encompasses key tasks, including raw materials and
equipment prediction, synthesis procedure generation, and characterization
outcome forecasting. We propose an LLM-as-a-Judge framework that leverages
large language models for automated evaluation, demonstrating strong
statistical agreement with expert assessments. Overall, our contributions offer
a supportive foundation for exploring the capabilities of LLMs in predicting
and guiding materials synthesis, ultimately paving the way for more efficient
experimental design and accelerated innovation in materials science. | 11 | 67bd231c9e826530ef606f56 | null | null |
|
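An LLM-as-a-Judge loop like the one AlchemyBench proposes above can be sketched as a scoring prompt plus a parser. The prompt wording, the 1-5 scale, and the call_llm stub are assumptions for illustration, not the benchmark's exact protocol.

```python
# Hedged sketch of LLM-as-a-Judge scoring for predicted synthesis recipes.
JUDGE_PROMPT = """You are an expert materials scientist.
Reference recipe:
{reference}
Predicted recipe:
{prediction}
Rate the prediction's correctness from 1 (wrong) to 5 (expert-level).
Answer with a single integer."""

def call_llm(prompt):
    return "4"  # stand-in for a real model API call

def judge(reference, prediction):
    raw = call_llm(JUDGE_PROMPT.format(reference=reference, prediction=prediction))
    return int(raw.strip())

score = judge("Sinter BaTiO3 at 1350 C for 4 h.",
              "Sinter BaTiO3 at 1300 C for 6 h.")
print("judge score:", score)
```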
2025-02-24T17:06:56.210000 | Learning to Discover Regulatory Elements for Gene Expression Prediction | 2 | {
"_id": "65ea0819d1fc5524c18f1d35",
"avatarUrl": "/avatars/84b7a97d74705666c63447c01ae2e492.svg",
"followerCount": null,
"fullname": "haiyang yu",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "oceanusity",
"type": "user"
} | false | [
"https://cdn-uploads.huggingface.co/production/uploads/65ea0819d1fc5524c18f1d35/JU9NqF4Yzq8NhjBx4jhCB.jpeg"
] | 2502.13991 | [
{
"_id": "67bced139d37199a15263ec5",
"hidden": false,
"name": "Xingyu Su",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-25T09:40:30.431Z",
"user": {
"_id": "662d70df52e194d5d495a567",
"avatarUrl": "/avatars/efc6cf8a4f39140cc683343d6df0580b.svg",
"fullname": "Xingyu Su",
"isPro": false,
"type": "user",
"user": "xingyusu"
}
},
{
"_id": "67bced139d37199a15263ec6",
"hidden": false,
"name": "Haiyang Yu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bced139d37199a15263ec7",
"hidden": false,
"name": "Degui Zhi",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bced139d37199a15263ec8",
"hidden": false,
"name": "Shuiwang Ji",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-19T03:25:49 | Learning to Discover Regulatory Elements for Gene Expression Prediction | We consider the problem of predicting gene expressions from DNA sequences. A
key challenge of this task is to find the regulatory elements that control gene
expressions. Here, we introduce Seq2Exp, a Sequence to Expression network
explicitly designed to discover and extract regulatory elements that drive
target gene expression, enhancing the accuracy of the gene expression
prediction. Our approach captures the causal relationship between epigenomic
signals, DNA sequences and their associated regulatory elements. Specifically,
we propose to decompose the epigenomic signals and the DNA sequence conditioned
on the causal active regulatory elements, and apply an information bottleneck
with the Beta distribution to combine their effects while filtering out
non-causal components. Our experiments demonstrate that Seq2Exp outperforms
existing baselines in gene expression prediction tasks and discovers
influential regions compared to commonly used statistical methods for peak
detection such as MACS3. The source code is released as part of the AIRS
library (https://github.com/divelab/AIRS/). | 1 | 67bced149d37199a15263f26 | null | null |
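The decomposition this abstract describes hinges on an information bottleneck with a Beta-distributed mask. Below is a minimal PyTorch sketch of that one idea, assuming a per-position soft mask sampled from a learned Beta distribution with a sparsity-encouraging prior; the module name, prior values, and linear parameter net are illustrative, not Seq2Exp's implementation.

```python
# A minimal sketch, assuming a per-position soft mask m ~ Beta(alpha, beta)
# applied to fused sequence features, with a KL penalty to a sparsity prior.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.distributions import Beta, kl_divergence

class BetaBottleneckMask(nn.Module):
    def __init__(self, dim: int, prior_alpha: float = 1.0, prior_beta: float = 9.0):
        super().__init__()
        self.param_net = nn.Linear(dim, 2)  # predicts (alpha, beta) per position
        self.register_buffer("prior_a", torch.tensor(prior_alpha))
        self.register_buffer("prior_b", torch.tensor(prior_beta))

    def forward(self, h: torch.Tensor):
        # h: (batch, length, dim) fused DNA + epigenomic features
        ab = F.softplus(self.param_net(h)) + 1e-4        # positive concentrations
        q = Beta(ab[..., 0], ab[..., 1])
        mask = q.rsample()                               # reparameterized, in (0, 1)
        kl = kl_divergence(q, Beta(self.prior_a, self.prior_b)).mean()
        return h * mask.unsqueeze(-1), kl                # masked features + IB term

h = torch.randn(2, 128, 64)
masked, kl = BetaBottleneckMask(64)(h)
print(masked.shape, float(kl))  # torch.Size([2, 128, 64]) and a scalar penalty
```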
|
2025-02-24T15:59:07.128000 | Rare Disease Differential Diagnosis with Large Language Models at Scale: From Abdominal Actinomycosis to Wilson's Disease | 2 | {
"_id": "64f74beb4db24c1ca9379afc",
"avatarUrl": "/avatars/c29b7d000ff65c590a5aec2d3262edd9.svg",
"followerCount": null,
"fullname": "Elliot Schumacher",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "elliotschu",
"type": "user"
} | true | null | 2502.15069 | [
{
"_id": "67bcdc646840d9686fcea432",
"hidden": false,
"name": "Elliot Schumacher",
"status": "extracted_confirmed",
"statusLastChangedAt": "2025-02-24T20:54:12.397Z",
"user": {
"_id": "64f74beb4db24c1ca9379afc",
"avatarUrl": "/avatars/c29b7d000ff65c590a5aec2d3262edd9.svg",
"fullname": "Elliot Schumacher",
"isPro": false,
"type": "user",
"user": "elliotschu"
}
},
{
"_id": "67bcdc646840d9686fcea433",
"hidden": false,
"name": "Dhruv Naik",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bcdc646840d9686fcea434",
"hidden": false,
"name": "Anitha Kannan",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-20T22:02:52 | Rare Disease Differential Diagnosis with Large Language Models at Scale:
From Abdominal Actinomycosis to Wilson's Disease | Large language models (LLMs) have demonstrated impressive capabilities in
disease diagnosis. However, their effectiveness in identifying rarer diseases,
which are inherently more challenging to diagnose, remains an open question.
Rare disease performance is critical with the increasing use of LLMs in
healthcare settings. This is especially true if a primary care physician needs
to make a rarer prognosis from only a patient conversation so that they can
take the appropriate next step. To that end, several clinical decision support
systems are designed to support providers in rare disease identification. Yet
their utility is limited due to their lack of knowledge of common disorders and
difficulty of use.
In this paper, we propose RareScale to combine the knowledge of LLMs with expert
systems. We jointly use an expert system and an LLM to simulate rare disease
chats. This data is used to train a rare disease candidate predictor model.
Candidates from this smaller model are then used as additional inputs to a
black-box LLM to make the final differential diagnosis. Thus, RareScale allows
for a balance between rare and common diagnoses. We present results on over 575
rare diseases, beginning with Abdominal Actinomycosis and ending with Wilson's
Disease. Our approach significantly improves the baseline performance of
black-box LLMs by over 17% in Top-5 accuracy. We also find that our candidate
generation performance is high (e.g. 88.8% on gpt-4o generated chats). | 2 | 67bcdc656840d9686fcea462 | null | null |
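The abstract describes a two-stage flow: a small candidate predictor proposes rare-disease candidates, which are then appended to the prompt of a black-box LLM for the final differential. A hedged sketch of that wiring follows; `predict_candidates` and `call_llm` are hypothetical stubs, not RareScale's API, and the candidate list and prompt wording are illustrative.

```python
def predict_candidates(chat: str, k: int = 5) -> list:
    # Stand-in for the small rare-disease candidate predictor (stub data).
    return ["Wilson's disease", "abdominal actinomycosis", "Fabry disease",
            "hereditary angioedema", "Whipple's disease"][:k]

def call_llm(prompt: str) -> str:
    # Stand-in for a black-box LLM call (stub).
    return "1. Wilson's disease ..."

def rarescale_diagnose(chat: str) -> str:
    candidates = predict_candidates(chat, k=5)
    prompt = (
        "Patient conversation:\n" + chat + "\n\n"
        "Rare-disease candidates worth considering: " + ", ".join(candidates) + "\n"
        "Give a ranked top-5 differential diagnosis, balancing common and rare causes."
    )
    return call_llm(prompt)

print(rarescale_diagnose("Patient reports tremor, fatigue, and elevated liver enzymes."))
```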
|
2025-02-24T12:53:11.851000 | Tree-of-Debate: Multi-Persona Debate Trees Elicit Critical Thinking for Scientific Comparative Analysis | 2 | {
"_id": "6476ae4083d4fdaedddf405f",
"avatarUrl": "/avatars/08b23ccfa1f3bede6ade5a1aef06931d.svg",
"followerCount": null,
"fullname": "Priyanka Kargupta",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "pkargupta",
"type": "user"
} | true | [
"https://cdn-uploads.huggingface.co/production/uploads/6476ae4083d4fdaedddf405f/xUea8WgprRdaPBbcEXDlf.png",
"https://cdn-uploads.huggingface.co/production/uploads/6476ae4083d4fdaedddf405f/DenJlw9zUymRJ0r6KY0wh.png"
] | 2502.14767 | [
{
"_id": "67bcb0511cc672a91331727a",
"hidden": false,
"name": "Priyanka Kargupta",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-25T09:40:34.292Z",
"user": {
"_id": "6476ae4083d4fdaedddf405f",
"avatarUrl": "/avatars/08b23ccfa1f3bede6ade5a1aef06931d.svg",
"fullname": "Priyanka Kargupta",
"isPro": false,
"type": "user",
"user": "pkargupta"
}
},
{
"_id": "67bcb0511cc672a91331727b",
"hidden": false,
"name": "Ishika Agarwal",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bcb0511cc672a91331727c",
"hidden": false,
"name": "Tal August",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bcb0511cc672a91331727d",
"hidden": false,
"name": "Jiawei Han",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-20T17:43:40 | Tree-of-Debate: Multi-Persona Debate Trees Elicit Critical Thinking for
Scientific Comparative Analysis | With the exponential growth of research facilitated by modern technology and
improved accessibility, scientific discoveries have become increasingly
fragmented within and across fields. This makes it challenging to assess the
significance, novelty, incremental findings, and equivalent ideas between
related works, particularly those from different research communities. Large
language models (LLMs) have recently demonstrated strong quantitative and
qualitative reasoning abilities, and multi-agent LLM debates have shown promise
in handling complex reasoning tasks by exploring diverse perspectives and
reasoning paths. Inspired by this, we introduce Tree-of-Debate (ToD), a
framework which converts scientific papers into LLM personas that debate their
respective novelties. To emphasize structured, critical reasoning rather than
focusing solely on outcomes, ToD dynamically constructs a debate tree, enabling
fine-grained analysis of independent novelty arguments within scholarly
articles. Through experiments on scientific literature across various domains,
evaluated by expert researchers, we demonstrate that ToD generates informative
arguments, effectively contrasts papers, and supports researchers in their
literature review. | 5 | 67bcb0521cc672a9133172c6 | null | null |
|
2025-02-24T12:18:28.662000 | Benchmarking LLMs for Political Science: A United Nations Perspective | 2 | {
"_id": "650d3ed2e063bef6d9c46da7",
"avatarUrl": "/avatars/d16b3d0d403c1953780f6532e9cf27ad.svg",
"followerCount": null,
"fullname": "Yueqing Liang",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "yueqingliang",
"type": "user"
} | true | null | 2502.14122 | [
{
"_id": "67bbecd2039a172a715f7b50",
"hidden": false,
"name": "Yueqing Liang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-24T09:07:13.108Z",
"user": {
"_id": "650d3ed2e063bef6d9c46da7",
"avatarUrl": "/avatars/d16b3d0d403c1953780f6532e9cf27ad.svg",
"fullname": "Yueqing Liang",
"isPro": false,
"type": "user",
"user": "yueqingliang"
}
},
{
"_id": "67bbecd2039a172a715f7b51",
"hidden": false,
"name": "Liangwei Yang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bbecd2039a172a715f7b52",
"hidden": false,
"name": "Chen Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bbecd2039a172a715f7b53",
"hidden": false,
"name": "Congying Xia",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bbecd2039a172a715f7b54",
"hidden": false,
"name": "Rui Meng",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bbecd2039a172a715f7b55",
"hidden": false,
"name": "Xiongxiao Xu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bbecd2039a172a715f7b56",
"hidden": false,
"name": "Haoran Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bbecd2039a172a715f7b57",
"hidden": false,
"name": "Ali Payani",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bbecd2039a172a715f7b58",
"hidden": false,
"name": "Kai Shu",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-19T21:51:01 | Benchmarking LLMs for Political Science: A United Nations Perspective | Large Language Models (LLMs) have achieved significant advances in natural
language processing, yet their potential for high-stake political
decision-making remains largely unexplored. This paper addresses the gap by
focusing on the application of LLMs to the United Nations (UN) decision-making
process, where the stakes are particularly high and political decisions can
have far-reaching consequences. We introduce a novel dataset comprising
publicly available UN Security Council (UNSC) records from 1994 to 2024,
including draft resolutions, voting records, and diplomatic speeches. Using
this dataset, we propose the United Nations Benchmark (UNBench), the first
comprehensive benchmark designed to evaluate LLMs across four interconnected
political science tasks: co-penholder judgment, representative voting
simulation, draft adoption prediction, and representative statement generation.
These tasks span the three stages of the UN decision-making process--drafting,
voting, and discussing--and aim to assess LLMs' ability to understand and
simulate political dynamics. Our experimental analysis demonstrates the
potential and challenges of applying LLMs in this domain, providing insights
into their strengths and limitations in political science. This work
contributes to the growing intersection of AI and political science, opening
new avenues for research and practical applications in global governance. The
UNBench Repository can be accessed at:
https://github.com/yueqingliang1/UNBench. | 2 | 67bbecd3039a172a715f7b85 | null | null |
|
2025-02-24T11:28:12.754000 | FantasyID: Face Knowledge Enhanced ID-Preserving Video Generation | 2 | {
"_id": "63468720dd6d90d82ccf3450",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63468720dd6d90d82ccf3450/tVBFlmZNz8FRMkOrDaDID.jpeg",
"followerCount": 32,
"fullname": "YSH",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "BestWishYsh",
"type": "user"
} | false | null | 2502.13995 | [
{
"_id": "67b7ed63c5b2d0bd2eb3774d",
"hidden": false,
"name": "Yunpeng Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b7ed63c5b2d0bd2eb3774e",
"hidden": false,
"name": "Qiang Wang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-25T09:40:55.195Z",
"user": {
"_id": "653b195c5f1703225b2fd571",
"avatarUrl": "/avatars/b7f376225cef6c13952c9c5540dd43be.svg",
"fullname": "wangqiang",
"isPro": false,
"type": "user",
"user": "wangqiang9"
}
},
{
"_id": "67b7ed63c5b2d0bd2eb3774f",
"hidden": false,
"name": "Fan Jiang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b7ed63c5b2d0bd2eb37750",
"hidden": false,
"name": "Yaqi Fan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b7ed63c5b2d0bd2eb37751",
"hidden": false,
"name": "Mu Xu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b7ed63c5b2d0bd2eb37752",
"hidden": false,
"name": "Yonggang Qi",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-19T06:50:27 | FantasyID: Face Knowledge Enhanced ID-Preserving Video Generation | Tuning-free approaches adapting large-scale pre-trained video diffusion
models for identity-preserving text-to-video generation (IPT2V) have gained
popularity recently due to their efficacy and scalability. However, significant
challenges remain in achieving satisfactory facial dynamics while keeping the
identity unchanged. In this work, we present a novel tuning-free IPT2V
framework by enhancing face knowledge of the pre-trained video model built on
diffusion transformers (DiT), dubbed FantasyID. Essentially, 3D facial geometry
prior is incorporated to ensure plausible facial structures during video
synthesis. To prevent the model from learning copy-paste shortcuts that simply
replicate the reference face across frames, a multi-view face augmentation strategy
is devised to capture diverse 2D facial appearance features, hence increasing
the dynamics over the facial expressions and head poses. Additionally, after
blending the 2D and 3D features as guidance, instead of naively employing
cross-attention to inject guidance cues into DiT layers, a learnable
layer-aware adaptive mechanism is employed to selectively inject the fused
features into each individual DiT layer, facilitating balanced modeling of
identity preservation and motion dynamics. Experimental results validate our
model's superiority over the current tuning-free IPT2V methods. | 8 | 67b7ed66c5b2d0bd2eb37899 | null | null |
|
2025-02-24T10:54:53.456000 | MedHallu: A Comprehensive Benchmark for Detecting Medical Hallucinations in Large Language Models | 2 | {
"_id": "648749094dea003c6dae810f",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/648749094dea003c6dae810f/gHUHSBt1zrt8wjO1YwTNu.jpeg",
"followerCount": 2,
"fullname": "Shrey Pandit",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "SP2001",
"type": "user"
} | true | [
"https://cdn-uploads.huggingface.co/production/uploads/648749094dea003c6dae810f/qTlH4qY6XwrzylaSr6wr9.png",
"https://cdn-uploads.huggingface.co/production/uploads/648749094dea003c6dae810f/vfRF5rd0lMB_Cc8_U_Nmj.png",
"https://cdn-uploads.huggingface.co/production/uploads/648749094dea003c6dae810f/ghoZUKVm_nNHix4Jgo9cx.png",
"https://cdn-uploads.huggingface.co/production/uploads/648749094dea003c6dae810f/FvXCYnMlDbkgFAfMZ-cvt.png"
] | 2502.14302 | [
{
"_id": "67b7e1a0e5dcadcedba108e8",
"hidden": false,
"name": "Shrey Pandit",
"status": "extracted_confirmed",
"statusLastChangedAt": "2025-02-22T19:42:39.549Z",
"user": {
"_id": "648749094dea003c6dae810f",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/648749094dea003c6dae810f/gHUHSBt1zrt8wjO1YwTNu.jpeg",
"fullname": "Shrey Pandit",
"isPro": false,
"type": "user",
"user": "SP2001"
}
},
{
"_id": "67b7e1a0e5dcadcedba108e9",
"hidden": false,
"name": "Jiawei Xu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b7e1a0e5dcadcedba108ea",
"hidden": false,
"name": "Junyuan Hong",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-24T15:58:00.251Z",
"user": {
"_id": "6400cf982b67d27affce2d89",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6400cf982b67d27affce2d89/Vs042d2M-iV2wk9Q_Jqh3.jpeg",
"fullname": "Junyuan Hong",
"isPro": false,
"type": "user",
"user": "jyhong836"
}
},
{
"_id": "67b7e1a0e5dcadcedba108eb",
"hidden": false,
"name": "Zhangyang Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b7e1a0e5dcadcedba108ec",
"hidden": false,
"name": "Tianlong Chen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b7e1a0e5dcadcedba108ed",
"hidden": false,
"name": "Kaidi Xu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-24T16:00:54.855Z",
"user": {
"_id": "65aaa5b4d2adc31ee3eab350",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/65aaa5b4d2adc31ee3eab350/vzDrR7JVCEShn_Vk-NCgT.jpeg",
"fullname": "Kaidi Xu",
"isPro": false,
"type": "user",
"user": "KaidiXu1"
}
},
{
"_id": "67b7e1a0e5dcadcedba108ee",
"hidden": false,
"name": "Ying Ding",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-20T06:33:23 | MedHallu: A Comprehensive Benchmark for Detecting Medical Hallucinations
in Large Language Models | Advancements in Large Language Models (LLMs) and their increasing use in
medical question-answering necessitate rigorous evaluation of their
reliability. A critical challenge lies in hallucination, where models generate
plausible yet factually incorrect outputs. In the medical domain, this poses
serious risks to patient safety and clinical decision-making. To address this,
we introduce MedHallu, the first benchmark specifically designed for medical
hallucination detection. MedHallu comprises 10,000 high-quality question-answer
pairs derived from PubMedQA, with hallucinated answers systematically generated
through a controlled pipeline. Our experiments show that state-of-the-art LLMs,
including GPT-4o, Llama-3.1, and the medically fine-tuned UltraMedical,
struggle with this binary hallucination detection task, with the best model
achieving an F1 score as low as 0.625 for detecting "hard" category
hallucinations. Using bidirectional entailment clustering, we show that
harder-to-detect hallucinations are semantically closer to ground truth.
Through experiments, we also show that incorporating domain-specific knowledge and
introducing a "not sure" category as one of the answer categories improves the
precision and F1 scores by up to 38% relative to baselines. | 9 | 67b7e1a1e5dcadcedba10963 | null | null |
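One concrete detail above is that adding a "not sure" answer category improves precision and F1. A small sketch of how such scoring might work, under our assumption (not the benchmark's code) that "not sure" counts as a non-answer rather than a positive prediction:

```python
# Binary hallucination detection with an abstention option ("not sure").
def precision_recall_f1(preds, labels):
    tp = sum(p == "hallucinated" and y == "hallucinated" for p, y in zip(preds, labels))
    fp = sum(p == "hallucinated" and y == "faithful" for p, y in zip(preds, labels))
    fn = sum(p != "hallucinated" and y == "hallucinated" for p, y in zip(preds, labels))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

preds = ["hallucinated", "not sure", "faithful", "hallucinated"]
labels = ["hallucinated", "hallucinated", "faithful", "faithful"]
print(precision_recall_f1(preds, labels))  # (0.5, 0.5, 0.5)
```

Abstaining costs recall on missed hallucinations but never adds false positives, which is consistent with the precision gains the abstract reports.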
|
2025-02-24T10:33:37.569000 | mStyleDistance: Multilingual Style Embeddings and their Evaluation | 2 | {
"_id": "61c40eeb727d1257bf3cf5ba",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/61c40eeb727d1257bf3cf5ba/hVNbcFjsvwWqWarcGTOdI.jpeg",
"followerCount": 3,
"fullname": "Ajay Patel",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "AjayP13",
"type": "user"
} | true | null | 2502.15168 | [
{
"_id": "67bc90e38915fc3c91098a9e",
"hidden": false,
"name": "Justin Qiu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-24T16:13:58.715Z",
"user": {
"_id": "643d0a98c17bfb2256ee6c3a",
"avatarUrl": "/avatars/1e06c03a92e8845fa0aa67a884ef28ca.svg",
"fullname": "Millers",
"isPro": false,
"type": "user",
"user": "JustinQiu"
}
},
{
"_id": "67bc90e38915fc3c91098a9f",
"hidden": false,
"name": "Jiacheng Zhu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-24T16:13:48.141Z",
"user": {
"_id": "653801b167325b6218ddfdc8",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/653801b167325b6218ddfdc8/y3DjEDpcvgnRC30201dFc.jpeg",
"fullname": "Jiacheng Zhu",
"isPro": false,
"type": "user",
"user": "JiachengZhu"
}
},
{
"_id": "67bc90e38915fc3c91098aa0",
"hidden": false,
"name": "Ajay Patel",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-24T16:39:31.830Z",
"user": {
"_id": "61c40eeb727d1257bf3cf5ba",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/61c40eeb727d1257bf3cf5ba/hVNbcFjsvwWqWarcGTOdI.jpeg",
"fullname": "Ajay Patel",
"isPro": false,
"type": "user",
"user": "AjayP13"
}
},
{
"_id": "67bc90e38915fc3c91098aa1",
"hidden": false,
"name": "Marianna Apidianaki",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bc90e38915fc3c91098aa2",
"hidden": false,
"name": "Chris Callison-Burch",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-24T16:13:29.298Z",
"user": {
"_id": "6303ce25fc783bfc744216af",
"avatarUrl": "/avatars/09f5e87c1f56a1b7f6ef9c5037682285.svg",
"fullname": "Chris Callison-Burch",
"isPro": false,
"type": "user",
"user": "CCB"
}
}
] | 2025-02-21T03:11:41 | mStyleDistance: Multilingual Style Embeddings and their Evaluation | Style embeddings are useful for stylistic analysis and style transfer;
however, only English style embeddings have been made available. We introduce
Multilingual StyleDistance (mStyleDistance), a multilingual style embedding
model trained using synthetic data and contrastive learning. We train the model
on data from nine languages and create a multilingual STEL-or-Content benchmark
(Wegmann et al., 2022) that serves to assess the embeddings' quality. We also
employ our embeddings in an authorship verification task involving different
languages. Our results show that mStyleDistance embeddings outperform existing
models on these multilingual style benchmarks and generalize well to unseen
features and languages. We make our model publicly available at
https://huggingface.co/StyleDistance/mstyledistance . | 3 | 67bc90e48915fc3c91098af6 | null | null |
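Since the checkpoint is public, a usage sketch may help; it assumes the model loads through the sentence-transformers API (an assumption on our part; check the model card to confirm the exact usage).

```python
# Hedged usage sketch: embed texts and compare their writing styles.
from sentence_transformers import SentenceTransformer
import numpy as np

model = SentenceTransformer("StyleDistance/mstyledistance")
texts = ["ich liebe dich!!!", "ICH LIEBE DICH", "Je t'aime beaucoup."]
emb = model.encode(texts)

def cos(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cos(emb[0], emb[1]), cos(emb[0], emb[2]))
```

Cosine similarity between these embeddings should reflect writing style (casing, punctuation, emphasis) rather than topic or language.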
|
2025-02-24T08:37:35.940000 | EgoSpeak: Learning When to Speak for Egocentric Conversational Agents in the Wild | 2 | {
"_id": "646aecb04c1cd18b497a50ee",
"avatarUrl": "/avatars/de15c724056f36a41cb4f375d05ed836.svg",
"followerCount": null,
"fullname": "Junhyeok Kim",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "kjunh",
"type": "user"
} | true | null | 2502.14892 | [
{
"_id": "67bbf87b4f54983efbd94187",
"hidden": false,
"name": "Junhyeok Kim",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-24T09:07:08.715Z",
"user": {
"_id": "646aecb04c1cd18b497a50ee",
"avatarUrl": "/avatars/de15c724056f36a41cb4f375d05ed836.svg",
"fullname": "Junhyeok Kim",
"isPro": false,
"type": "user",
"user": "kjunh"
}
},
{
"_id": "67bbf87b4f54983efbd94188",
"hidden": false,
"name": "Min Soo Kim",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bbf87b4f54983efbd94189",
"hidden": false,
"name": "Jiwan Chung",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-24T16:10:38.158Z",
"user": {
"_id": "60d74d1affe0328e0167dc5f",
"avatarUrl": "/avatars/9b1a2df9402e9c26e1eb7c818af9bae0.svg",
"fullname": "Jiwan Chung",
"isPro": false,
"type": "user",
"user": "jiwan-chung"
}
},
{
"_id": "67bbf87b4f54983efbd9418a",
"hidden": false,
"name": "Jungbin Cho",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bbf87b4f54983efbd9418b",
"hidden": false,
"name": "Jisoo Kim",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bbf87b4f54983efbd9418c",
"hidden": false,
"name": "Sungwoong Kim",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-24T16:10:11.922Z",
"user": {
"_id": "662afff1ef7a4675bdf5bfb6",
"avatarUrl": "/avatars/511bc1c630dff30f7651ff8037110792.svg",
"fullname": "sungwoong kim",
"isPro": false,
"type": "user",
"user": "sukim96"
}
},
{
"_id": "67bbf87b4f54983efbd9418d",
"hidden": false,
"name": "Gyeongbo Sim",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-24T16:09:47.531Z",
"user": {
"_id": "673be9ed0a4d127a7f4d0ea6",
"avatarUrl": "/avatars/c15690d22a3b2ce098e195e54c9f414e.svg",
"fullname": "Gyeongbo Sim",
"isPro": false,
"type": "user",
"user": "gbosim"
}
},
{
"_id": "67bbf87b4f54983efbd9418e",
"hidden": false,
"name": "Youngjae Yu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-24T16:09:38.274Z",
"user": {
"_id": "6504777fb1da3747a05160c4",
"avatarUrl": "/avatars/b777d98a5ff971ddb4c3e1060bb3e070.svg",
"fullname": "Youngjae Yu",
"isPro": false,
"type": "user",
"user": "yjyu"
}
}
] | 2025-02-17T04:47:12 | EgoSpeak: Learning When to Speak for Egocentric Conversational Agents in
the Wild | Predicting when to initiate speech in real-world environments remains a
fundamental challenge for conversational agents. We introduce EgoSpeak, a novel
framework for real-time speech initiation prediction in egocentric streaming
video. By modeling the conversation from the speaker's first-person viewpoint,
EgoSpeak is tailored for human-like interactions in which a conversational
agent must continuously observe its environment and dynamically decide when to
talk. Our approach bridges the gap between simplified experimental setups and
complex natural conversations by integrating four key capabilities: (1)
first-person perspective, (2) RGB processing, (3) online processing, and (4)
untrimmed video processing. We also present YT-Conversation, a diverse
collection of in-the-wild conversational videos from YouTube, as a resource for
large-scale pretraining. Experiments on EasyCom and Ego4D demonstrate that
EgoSpeak outperforms random and silence-based baselines in real time. Our
results also highlight the importance of multimodal input and context length in
effectively deciding when to speak. | 6 | 67bbf87d4f54983efbd941ec | null | null |
|
2025-02-24T07:41:08.352000 | Is Safety Standard Same for Everyone? User-Specific Safety Evaluation of Large Language Models | 2 | {
"_id": "60af909e288a0f96f6cefc4d",
"avatarUrl": "/avatars/44a8ab48acb58c45de0b0947a1b56e7c.svg",
"followerCount": 4,
"fullname": "Yeonjun In",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "Yeonjun",
"type": "user"
} | true | [
"https://cdn-uploads.huggingface.co/production/uploads/60af909e288a0f96f6cefc4d/HK-3ALEi_S7sCnikcsULP.png",
"https://cdn-uploads.huggingface.co/production/uploads/60af909e288a0f96f6cefc4d/5ibSAEzxHYtYUIK4FpSkp.png",
"https://cdn-uploads.huggingface.co/production/uploads/60af909e288a0f96f6cefc4d/r727Pzrq-PvVKdetEl5U6.png"
] | 2502.15086 | [
{
"_id": "67bbfc7ab3920fd18e63cb26",
"hidden": false,
"name": "Yeonjun In",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-24T09:07:06.533Z",
"user": {
"_id": "60af909e288a0f96f6cefc4d",
"avatarUrl": "/avatars/44a8ab48acb58c45de0b0947a1b56e7c.svg",
"fullname": "Yeonjun In",
"isPro": false,
"type": "user",
"user": "Yeonjun"
}
},
{
"_id": "67bbfc7ab3920fd18e63cb27",
"hidden": false,
"name": "Wonjoong Kim",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-24T13:05:06.940Z",
"user": {
"_id": "67bc62b77727595ca5b6a4ca",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/no-auth/JXilVx4ACCw6L9BnomqVP.png",
"fullname": "Wonjoong Kim",
"isPro": false,
"type": "user",
"user": "wjkim0229"
}
},
{
"_id": "67bbfc7ab3920fd18e63cb28",
"hidden": false,
"name": "Kanghoon Yoon",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bbfc7ab3920fd18e63cb29",
"hidden": false,
"name": "Sungchul Kim",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-24T13:28:51.033Z",
"user": {
"_id": "6250611d2f9acc6168e42737",
"avatarUrl": "/avatars/1e053d3fa387d81b45a2435e4a633ad1.svg",
"fullname": "Sungchul Kim",
"isPro": false,
"type": "user",
"user": "subright"
}
},
{
"_id": "67bbfc7ab3920fd18e63cb2a",
"hidden": false,
"name": "Mehrab Tanjim",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-24T13:28:37.445Z",
"user": {
"_id": "6366e2d9575c93ceda0791d8",
"avatarUrl": "/avatars/a53cb1bb7cd9c63a2520587108ffe962.svg",
"fullname": "Mehrab Tanjim",
"isPro": false,
"type": "user",
"user": "Mehrab-Tanjim"
}
},
{
"_id": "67bbfc7ab3920fd18e63cb2b",
"hidden": false,
"name": "Kibum Kim",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-24T13:28:30.900Z",
"user": {
"_id": "64b73688dcbce176037ef420",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/64b73688dcbce176037ef420/R-fcEaZz5vrY74QO5oclB.jpeg",
"fullname": "Kibum Kim",
"isPro": false,
"type": "user",
"user": "kb-kim"
}
},
{
"_id": "67bbfc7ab3920fd18e63cb2c",
"hidden": false,
"name": "Chanyoung Park",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-20T22:58:44 | Is Safety Standard Same for Everyone? User-Specific Safety Evaluation of
Large Language Models | As the use of large language model (LLM) agents continues to grow, their
safety vulnerabilities have become increasingly evident. Extensive benchmarks
evaluate various aspects of LLM safety by defining safety according to general
standards, overlooking user-specific standards. However, safety standards for
LLMs may vary based on user-specific profiles rather than being universally
consistent across all users. This raises a critical research question: Do LLM
agents act safely when considering user-specific safety standards? Despite its
importance for safe LLM use, no benchmark datasets currently exist to evaluate
the user-specific safety of LLMs. To address this gap, we introduce
U-SAFEBENCH, the first benchmark designed to assess the user-specific aspect of
LLM safety. Our evaluation of 18 widely used LLMs reveals that current LLMs
fail to act safely when considering user-specific safety standards, marking a
new discovery in this field. To address this
vulnerability, we propose a simple remedy based on chain-of-thought,
demonstrating its effectiveness in improving user-specific safety. Our
benchmark and code are available at https://github.com/yeonjun-in/U-SafeBench. | 14 | 67bbfc7bb3920fd18e63cb55 | null | null |
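The proposed remedy is a chain-of-thought prompt; a hypothetical sketch follows (the wording is illustrative, not the paper's exact template).

```python
# Hypothetical chain-of-thought wrapper for user-specific safety.
def cot_safety_prompt(user_profile: str, instruction: str) -> str:
    return (
        f"User profile: {user_profile}\n"
        f"Request: {instruction}\n\n"
        "Before answering, reason step by step about whether fulfilling this "
        "request could be unsafe for this specific user. If it could be, refuse "
        "and explain why; otherwise, answer helpfully."
    )

print(cot_safety_prompt("recovering alcoholic", "Recommend a good whiskey bar."))
```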
|
2025-02-24T07:37:26.684000 | WHAC: World-grounded Humans and Cameras | 2 | {
"_id": "5f1158120c833276f61f1a84",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1608042047613-5f1158120c833276f61f1a84.jpeg",
"followerCount": 777,
"fullname": "Niels Rogge",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "nielsr",
"type": "user"
} | false | null | 2403.12959 | [
{
"_id": "67bc67e17727595ca5b7ddb4",
"hidden": false,
"name": "Wanqi Yin",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-27T09:17:51.355Z",
"user": {
"_id": "668f51fb8e8d87dbdd23caa9",
"avatarUrl": "/avatars/0b17cb0f3ad2c729f185cdccdad94e48.svg",
"fullname": "Yin",
"isPro": false,
"type": "user",
"user": "waanqii"
}
},
{
"_id": "67bc67e17727595ca5b7ddb5",
"hidden": false,
"name": "Zhongang Cai",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-24T16:14:30.375Z",
"user": {
"_id": "652d06833b5997ed71ce5c46",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/xZTXEcnEogEmBm_ledJQr.jpeg",
"fullname": "Zhongang Cai",
"isPro": false,
"type": "user",
"user": "caizhongang"
}
},
{
"_id": "67bc67e17727595ca5b7ddb6",
"hidden": false,
"name": "Ruisi Wang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-24T16:14:42.508Z",
"user": {
"_id": "65b74305e602b6c2c9125480",
"avatarUrl": "/avatars/d36909e0f245bfeb632a4afc9d3fceca.svg",
"fullname": "wang ruisi",
"isPro": false,
"type": "user",
"user": "wruisi"
}
},
{
"_id": "67bc67e17727595ca5b7ddb7",
"hidden": false,
"name": "Fanzhou Wang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-24T16:14:48.396Z",
"user": {
"_id": "66ac94b7ccd0aed70992a8be",
"avatarUrl": "/avatars/98db346b7bfcb2040d4b58727a73d18b.svg",
"fullname": "Fanzhou Wang",
"isPro": false,
"type": "user",
"user": "wentww"
}
},
{
"_id": "67bc67e17727595ca5b7ddb8",
"hidden": false,
"name": "Chen Wei",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bc67e17727595ca5b7ddb9",
"hidden": false,
"name": "Haiyi Mei",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-24T16:14:59.208Z",
"user": {
"_id": "635a895ef4a106ecd9203b2d",
"avatarUrl": "/avatars/0ed9967f559582c2d93b5471b39f731a.svg",
"fullname": "haiyimei",
"isPro": false,
"type": "user",
"user": "haiyimei"
}
},
{
"_id": "67bc67e17727595ca5b7ddba",
"hidden": false,
"name": "Weiye Xiao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bc67e17727595ca5b7ddbb",
"hidden": false,
"name": "Zhitao Yang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bc67e17727595ca5b7ddbc",
"hidden": false,
"name": "Qingping Sun",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-24T16:15:19.079Z",
"user": {
"_id": "65e20f88bb9c052e4171b857",
"avatarUrl": "/avatars/5dcb3fe293c53e051842baf9024d589b.svg",
"fullname": "Qingping SUN",
"isPro": false,
"type": "user",
"user": "ttxskk"
}
},
{
"_id": "67bc67e17727595ca5b7ddbd",
"hidden": false,
"name": "Atsushi Yamashita",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-24T16:15:25.136Z",
"user": {
"_id": "6485d56c77076d551d4adedc",
"avatarUrl": "/avatars/0154474a6c6ec39a08f354b5dd69e1e3.svg",
"fullname": "atsushi yamashita",
"isPro": false,
"type": "user",
"user": "atsu-yama"
}
},
{
"_id": "67bc67e17727595ca5b7ddbe",
"hidden": false,
"name": "Ziwei Liu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-24T16:14:15.911Z",
"user": {
"_id": "62ab1ac1d48b4d8b048a3473",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1656826685333-62ab1ac1d48b4d8b048a3473.png",
"fullname": "Ziwei Liu",
"isPro": false,
"type": "user",
"user": "liuziwei7"
}
},
{
"_id": "67bc67e17727595ca5b7ddbf",
"hidden": false,
"name": "Lei Yang",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2024-03-19T17:58:02 | WHAC: World-grounded Humans and Cameras | Estimating human and camera trajectories with accurate scale in the world
coordinate system from a monocular video is a highly desirable yet challenging
and ill-posed problem. In this study, we aim to recover expressive parametric
human models (i.e., SMPL-X) and corresponding camera poses jointly, by
leveraging the synergy between three critical players: the world, the human,
and the camera. Our approach is founded on two key observations. Firstly,
camera-frame SMPL-X estimation methods readily recover absolute human depth.
Secondly, human motions inherently provide absolute spatial cues. By
integrating these insights, we introduce a novel framework, referred to as
WHAC, to facilitate world-grounded expressive human pose and shape estimation
(EHPS) alongside camera pose estimation, without relying on traditional
optimization techniques. Additionally, we present a new synthetic dataset,
WHAC-A-Mole, which includes accurately annotated humans and cameras, and
features diverse interactive human motions as well as realistic camera
trajectories. Extensive experiments on both standard and newly established
benchmarks highlight the superiority and efficacy of our framework. We will
make the code and dataset publicly available. | 3 | 67bc67e57727595ca5b7deb9 | null | null |
|
2025-02-24T06:58:46.440000 | Evaluating Multimodal Generative AI with Korean Educational Standards | 3 | {
"_id": "6639f75a910d619c288f8a86",
"avatarUrl": "/avatars/4b20a056798c009eaf665b0e3021db60.svg",
"followerCount": null,
"fullname": "sanghee park",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "sangheeeee",
"type": "user"
} | true | null | 2502.15422 | [
{
"_id": "67bc53ad670ece8d919a8fe1",
"hidden": false,
"name": "Sanghee Park",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-24T11:54:51.633Z",
"user": {
"_id": "6639f75a910d619c288f8a86",
"avatarUrl": "/avatars/4b20a056798c009eaf665b0e3021db60.svg",
"fullname": "sanghee park",
"isPro": false,
"type": "user",
"user": "sangheeeee"
}
},
{
"_id": "67bc53ad670ece8d919a8fe2",
"hidden": false,
"name": "Geewook Kim",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-24T13:04:57.334Z",
"user": {
"_id": "6298362c9d3de7b32fd11526",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1658473855720-6298362c9d3de7b32fd11526.jpeg",
"fullname": "Geewook Kim",
"isPro": false,
"type": "user",
"user": "gwkrsrch"
}
}
] | 2025-02-21T12:46:40 | Evaluating Multimodal Generative AI with Korean Educational Standards | This paper presents the Korean National Educational Test Benchmark (KoNET), a
new benchmark designed to evaluate Multimodal Generative AI Systems using
Korean national educational tests. KoNET comprises four exams: the Korean
Elementary General Educational Development Test (KoEGED), Middle (KoMGED), High
(KoHGED), and College Scholastic Ability Test (KoCSAT). These exams are
renowned for their rigorous standards and diverse questions, facilitating a
comprehensive analysis of AI performance across different educational levels.
By focusing on Korean, KoNET provides insights into model performance in
less-explored languages. We assess a range of models - open-source,
open-access, and closed APIs - by examining difficulties, subject diversity,
and human error rates. The code and dataset builder will be fully
open-sourced at https://github.com/naver-ai/KoNET. | 9 | 67bc53ae670ece8d919a901a | null | null |
|
2025-02-24T06:44:41.263000 | Beyond No: Quantifying AI Over-Refusal and Emotional Attachment Boundaries | 3 | {
"_id": "63136a82e29fb2e86d5e5bdd",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63136a82e29fb2e86d5e5bdd/pFZDuQtzfUStovbwwZGvn.png",
"followerCount": null,
"fullname": "David Noever",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "dnoever",
"type": "user"
} | true | null | 2502.14975 | [
{
"_id": "67bc5b6b876dad36abdd56fb",
"hidden": false,
"name": "David Noever",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-24T16:15:40.749Z",
"user": {
"_id": "63136a82e29fb2e86d5e5bdd",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63136a82e29fb2e86d5e5bdd/pFZDuQtzfUStovbwwZGvn.png",
"fullname": "David Noever",
"isPro": false,
"type": "user",
"user": "dnoever"
}
},
{
"_id": "67bc5b6b876dad36abdd56fc",
"hidden": false,
"name": "Grant Rosario",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-20T19:09:40 | Beyond No: Quantifying AI Over-Refusal and Emotional Attachment
Boundaries | We present an open-source benchmark and evaluation framework for assessing
emotional boundary handling in Large Language Models (LLMs). Using a dataset of
1156 prompts across six languages, we evaluated three leading LLMs (GPT-4o,
Claude-3.5 Sonnet, and Mistral-large) on their ability to maintain appropriate
emotional boundaries through pattern-matched response analysis. Our framework
quantifies responses across seven key patterns: direct refusal, apology,
explanation, deflection, acknowledgment, boundary setting, and emotional
awareness. Results demonstrate significant variation in boundary-handling
approaches, with Claude-3.5 achieving the highest overall score (8.69/10) and
producing longer, more nuanced responses (86.51 words on average). We
identified a substantial performance gap between English (average score 25.62)
and non-English interactions (< 0.22), with English responses showing markedly
higher refusal rates (43.20% vs. < 1% for non-English). Pattern analysis
revealed model-specific strategies, such as Mistral's preference for deflection
(4.2%) and consistently low empathy scores across all models (< 0.06).
Limitations include potential oversimplification through pattern matching, lack
of contextual understanding in response analysis, and binary classification of
complex emotional responses. Future work should explore more nuanced scoring
methods, expand language coverage, and investigate cultural variations in
emotional boundary expectations. Our benchmark and methodology provide a
foundation for systematic evaluation of LLM emotional intelligence and
boundary-setting capabilities. | 0 | 67bc5b6c876dad36abdd5736 | null | null |
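The framework scores responses by pattern matching; here is a simplified sketch with illustrative regexes (the benchmark's actual lexicons are larger and multilingual).

```python
# Toy pattern-matched response analysis over boundary-handling categories.
import re

PATTERNS = {
    "direct_refusal": r"\bi (?:can't|cannot|won't)\b",
    "apology": r"\b(?:sorry|i apologize)\b",
    "deflection": r"\b(?:instead|perhaps you could)\b",
    "boundary_setting": r"\bas an ai\b",
    "emotional_awareness": r"\b(?:i understand|that sounds)\b",
}

def analyze(response: str) -> dict:
    text = response.lower()
    return {name: bool(re.search(rx, text)) for name, rx in PATTERNS.items()}

print(analyze("I'm sorry, but as an AI I can't be your romantic partner."))
```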
|
2025-02-24T05:43:47.767000 | KITAB-Bench: A Comprehensive Multi-Domain Benchmark for Arabic OCR and Document Understanding | 2 | {
"_id": "656864e12d73834278a8dea7",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/656864e12d73834278a8dea7/sfAWS2eyPtFHb_2GZIypp.jpeg",
"followerCount": 27,
"fullname": "Ahmed Heakl",
"isHf": false,
"isMod": false,
"isPro": true,
"name": "ahmedheakl",
"type": "user"
} | true | null | 2502.14949 | [
{
"_id": "67bc4ced7727595ca5b108f1",
"hidden": false,
"name": "Ahmed Heakl",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-24T10:58:23.973Z",
"user": {
"_id": "656864e12d73834278a8dea7",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/656864e12d73834278a8dea7/sfAWS2eyPtFHb_2GZIypp.jpeg",
"fullname": "Ahmed Heakl",
"isPro": true,
"type": "user",
"user": "ahmedheakl"
}
},
{
"_id": "67bc4ced7727595ca5b108f2",
"hidden": false,
"name": "Abdullah Sohail",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-24T13:05:04.733Z",
"user": {
"_id": "672e4574b60c3a27d783a1ac",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/no-auth/aut4W4hJcOT8jvQnlWs-y.png",
"fullname": "Muhammad Abdullah",
"isPro": false,
"type": "user",
"user": "mabdullahsohail"
}
},
{
"_id": "67bc4ced7727595ca5b108f3",
"hidden": false,
"name": "Mukul Ranjan",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-24T11:54:56.690Z",
"user": {
"_id": "65262a396b41932089fd7bae",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/65262a396b41932089fd7bae/6YIEoAfJojuTW1UOKlwZT.png",
"fullname": "Mukul Ranjan",
"isPro": false,
"type": "user",
"user": "mukul54"
}
},
{
"_id": "67bc4ced7727595ca5b108f4",
"hidden": false,
"name": "Rania Hossam",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bc4ced7727595ca5b108f5",
"hidden": false,
"name": "Ghazi Ahmed",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bc4ced7727595ca5b108f6",
"hidden": false,
"name": "Mohamed El-Geish",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bc4ced7727595ca5b108f7",
"hidden": false,
"name": "Omar Maher",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bc4ced7727595ca5b108f8",
"hidden": false,
"name": "Zhiqiang Shen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bc4ced7727595ca5b108f9",
"hidden": false,
"name": "Fahad Khan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bc4ced7727595ca5b108fa",
"hidden": false,
"name": "Salman Khan",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-20T18:41:23 | KITAB-Bench: A Comprehensive Multi-Domain Benchmark for Arabic OCR and
Document Understanding | With the growing adoption of Retrieval-Augmented Generation (RAG) in document
processing, robust text recognition has become increasingly critical for
knowledge extraction. While OCR (Optical Character Recognition) for English and
other languages benefits from large datasets and well-established benchmarks,
Arabic OCR faces unique challenges due to its cursive script, right-to-left
text flow, and complex typographic and calligraphic features. We present
KITAB-Bench, a comprehensive Arabic OCR benchmark that fills the gaps in
current evaluation systems. Our benchmark comprises 8,809 samples across 9
major domains and 36 sub-domains, encompassing diverse document types including
handwritten text, structured tables, and specialized coverage of 21 chart types
for business intelligence. Our findings show that modern vision-language models
(such as GPT-4, Gemini, and Qwen) outperform traditional OCR approaches (like
EasyOCR, PaddleOCR, and Surya) by an average of 60% in Character Error Rate
(CER). Furthermore, we highlight significant limitations of current Arabic OCR
models, particularly in PDF-to-Markdown conversion, where the best model
Gemini-2.0-Flash achieves only 65% accuracy. This underscores the challenges in
accurately recognizing Arabic text, including issues with complex fonts,
numeral recognition errors, word elongation, and table structure detection.
This work establishes a rigorous evaluation framework that can drive
improvements in Arabic document analysis methods and bridge the performance gap
with English OCR technologies. | 6 | 67bc4cee7727595ca5b10967 | null | null |
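The headline metric above is Character Error Rate (CER); for reference, a minimal implementation as Levenshtein edit distance normalized by reference length:

```python
# Character Error Rate: edits needed to turn the prediction into the reference,
# divided by the reference length.
def levenshtein(a: str, b: str) -> int:
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def cer(prediction: str, reference: str) -> float:
    return levenshtein(prediction, reference) / max(len(reference), 1)

print(cer("كتاب", "كتب"))  # one edit over three reference characters ≈ 0.333
```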
|
2025-02-24T05:35:44.667000 | ReQFlow: Rectified Quaternion Flow for Efficient and High-Quality Protein Backbone Generation | 3 | {
"_id": "63021e665e305a35cb09cb35",
"avatarUrl": "/avatars/442e61765cb755f55540192e9a80cf80.svg",
"followerCount": 1,
"fullname": "AngxiaoYue",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "AngxiaoYue",
"type": "user"
} | true | null | 2502.14637 | [
{
"_id": "67bc3393057a4685851067c9",
"hidden": false,
"name": "Angxiao Yue",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-24T09:06:54.271Z",
"user": {
"_id": "63021e665e305a35cb09cb35",
"avatarUrl": "/avatars/442e61765cb755f55540192e9a80cf80.svg",
"fullname": "AngxiaoYue",
"isPro": false,
"type": "user",
"user": "AngxiaoYue"
}
},
{
"_id": "67bc3393057a4685851067ca",
"hidden": false,
"name": "Zichong Wang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-24T11:54:58.392Z",
"user": {
"_id": "669a2fd15bd3f749a3eb7b65",
"avatarUrl": "/avatars/7b23f892d2dc8fc87f62469dc02524ac.svg",
"fullname": "ZiChong Wang",
"isPro": false,
"type": "user",
"user": "EatEatEatEat"
}
},
{
"_id": "67bc3393057a4685851067cb",
"hidden": false,
"name": "Hongteng Xu",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-24T13:24:57.462Z",
"user": {
"_id": "67bc72956d5bfdc989e194dd",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/no-auth/LB6fZyznoTrPKFUDJia2i.png",
"fullname": "Hongteng Xu",
"isPro": false,
"type": "user",
"user": "Hongteng"
}
}
] | 2025-02-20T15:20:37 | ReQFlow: Rectified Quaternion Flow for Efficient and High-Quality
Protein Backbone Generation | Protein backbone generation plays a central role in de novo protein design
and is significant for many biological and medical applications. Although
diffusion and flow-based generative models provide potential solutions to this
challenging task, they often generate proteins with undesired designability and
suffer computational inefficiency. In this study, we propose a novel rectified
quaternion flow (ReQFlow) matching method for fast and high-quality protein
backbone generation. In particular, our method generates a local translation
and a 3D rotation from random noise for each residue in a protein chain, which
represents each 3D rotation as a unit quaternion and constructs its flow by
spherical linear interpolation (SLERP) in an exponential format. We train the
model by quaternion flow (QFlow) matching with guaranteed numerical stability
and rectify the QFlow model to accelerate its inference and improve the
designability of generated protein backbones, leading to the proposed ReQFlow
model. Experiments show that ReQFlow achieves state-of-the-art performance in
protein backbone generation while requiring much fewer sampling steps and
significantly less inference time (e.g., being 37x faster than RFDiffusion and
62x faster than Genie2 when generating a backbone of length 300), demonstrating
its effectiveness and efficiency. The code is available at
https://github.com/AngxiaoYue/ReQFlow. | 6 | 67bc3396057a468585106864 | null | null |
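The flow construction above relies on spherical linear interpolation (SLERP) of unit quaternions. A minimal numpy sketch of SLERP in its standard sin-based form, equivalent to q(t) = q0 (q0^{-1} q1)^t, shown for orientation only (not the paper's implementation):

```python
import numpy as np

def slerp(q0: np.ndarray, q1: np.ndarray, t: float) -> np.ndarray:
    q0, q1 = q0 / np.linalg.norm(q0), q1 / np.linalg.norm(q1)
    dot = np.dot(q0, q1)
    if dot < 0.0:            # take the short arc on the quaternion double cover
        q1, dot = -q1, -dot
    if dot > 0.9995:         # nearly parallel: fall back to normalized lerp
        q = q0 + t * (q1 - q0)
        return q / np.linalg.norm(q)
    theta = np.arccos(np.clip(dot, -1.0, 1.0))
    return (np.sin((1 - t) * theta) * q0 + np.sin(t * theta) * q1) / np.sin(theta)

q_id = np.array([1.0, 0.0, 0.0, 0.0])                           # identity
q_90z = np.array([np.cos(np.pi / 4), 0, 0, np.sin(np.pi / 4)])  # 90° about z
print(slerp(q_id, q_90z, 0.5))  # ~45° about z: [0.924, 0, 0, 0.383]
```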
|
2025-02-24T04:52:30.963000 | MoBA: Mixture of Block Attention for Long-Context LLMs | 2 | {
"_id": "63a369d98c0c89dcae3b8329",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63a369d98c0c89dcae3b8329/6OUJ7Hc9T1jXynYH3FGaf.png",
"followerCount": 439,
"fullname": "Adina Yakefu",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "AdinaY",
"type": "user"
} | false | null | 2502.13189 | [
{
"_id": "67b7152f299e4d30f9eb41c2",
"hidden": false,
"name": "Enzhe Lu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-24T15:48:22.594Z",
"user": {
"_id": "67aed930cc96f87ce3c3132f",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/no-auth/JDrhmbCRcuCtKir7i9z9n.png",
"fullname": "Lu",
"isPro": false,
"type": "user",
"user": "Enzhe"
}
},
{
"_id": "67b7152f299e4d30f9eb41c3",
"hidden": false,
"name": "Zhejun Jiang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-24T15:48:29.904Z",
"user": {
"_id": "662c6e8352e194d5d44d873c",
"avatarUrl": "/avatars/385a5cc7299faf2f61ccbabedd827f29.svg",
"fullname": "Zhejun Jiang",
"isPro": false,
"type": "user",
"user": "Skewed"
}
},
{
"_id": "67b7152f299e4d30f9eb41c4",
"hidden": false,
"name": "Jingyuan Liu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-24T15:48:36.573Z",
"user": {
"_id": "64ead9e35349043b2b941a03",
"avatarUrl": "/avatars/e9acef299086f0245ff364d9d7889007.svg",
"fullname": "JingyuanLiu",
"isPro": false,
"type": "user",
"user": "JingyuanLiu"
}
},
{
"_id": "67b7152f299e4d30f9eb41c5",
"hidden": false,
"name": "Yulun Du",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-24T15:48:47.033Z",
"user": {
"_id": "6340f31fb78ed99eab04ce33",
"avatarUrl": "/avatars/2e7fcbf0233bdc0bc9a3f4603fd8bf90.svg",
"fullname": "Du",
"isPro": false,
"type": "user",
"user": "Yulun"
}
},
{
"_id": "67b7152f299e4d30f9eb41c6",
"hidden": false,
"name": "Tao Jiang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b7152f299e4d30f9eb41c7",
"hidden": false,
"name": "Chao Hong",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b7152f299e4d30f9eb41c8",
"hidden": true,
"name": "Shaowei Liu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b7152f299e4d30f9eb41c9",
"hidden": false,
"name": "Weiran He",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b7152f299e4d30f9eb41ca",
"hidden": false,
"name": "Enming Yuan",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-24T15:49:40.245Z",
"user": {
"_id": "6331606f18711776b4655e67",
"avatarUrl": "/avatars/1479c2ca743b9f92d845b0ed23fcd07b.svg",
"fullname": "Enming Yuan",
"isPro": false,
"type": "user",
"user": "EnmingYuan"
}
},
{
"_id": "67b7152f299e4d30f9eb41cb",
"hidden": false,
"name": "Yuzhi Wang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-24T15:49:48.436Z",
"user": {
"_id": "67127a470a82509269d738ae",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/no-auth/M9qLmI3P6dT2FIwEPFJq0.png",
"fullname": "yuzhi wang",
"isPro": false,
"type": "user",
"user": "vin-tage"
}
},
{
"_id": "67b7152f299e4d30f9eb41cc",
"hidden": false,
"name": "Zhiqi Huang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-24T15:49:54.439Z",
"user": {
"_id": "66221f1a90f3fd333c4ec52e",
"avatarUrl": "/avatars/a3173d9603a69020ec24170831c97c2f.svg",
"fullname": "Zhiqi Huang",
"isPro": false,
"type": "user",
"user": "Angelalilyer"
}
},
{
"_id": "67b7152f299e4d30f9eb41cd",
"hidden": false,
"name": "Huan Yuan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b7152f299e4d30f9eb41ce",
"hidden": false,
"name": "Suting Xu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-24T15:50:08.223Z",
"user": {
"_id": "649e7c2afbdfd3c16128ce6e",
"avatarUrl": "/avatars/1ff863b0fa39cfe4285255e4417c1db4.svg",
"fullname": "Suting Xu",
"isPro": false,
"type": "user",
"user": "susu1210"
}
},
{
"_id": "67b7152f299e4d30f9eb41cf",
"hidden": false,
"name": "Xinran Xu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b7152f299e4d30f9eb41d0",
"hidden": false,
"name": "Guokun Lai",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-24T15:50:25.601Z",
"user": {
"_id": "63b4c71758f367a212c4f9ef",
"avatarUrl": "/avatars/d61736e0ae8b333a7c24eb411378698c.svg",
"fullname": "Lai",
"isPro": false,
"type": "user",
"user": "Guokun"
}
},
{
"_id": "67b7152f299e4d30f9eb41d1",
"hidden": false,
"name": "Yanru Chen",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-24T15:50:32.381Z",
"user": {
"_id": "6365df6912188d67e65f5c5b",
"avatarUrl": "/avatars/59a1d2f30ba4faea0336bedf4df321a8.svg",
"fullname": "Yanru Chen",
"isPro": false,
"type": "user",
"user": "AChen-qaq"
}
},
{
"_id": "67b7152f299e4d30f9eb41d2",
"hidden": false,
"name": "Huabin Zheng",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-24T15:50:38.272Z",
"user": {
"_id": "61860e1258cb1f8c362f9441",
"avatarUrl": "/avatars/8dbc8209ad0d918453c1ffacc8f61e7f.svg",
"fullname": "Huabin Zheng",
"isPro": false,
"type": "user",
"user": "zhenghuabin"
}
},
{
"_id": "67b7152f299e4d30f9eb41d3",
"hidden": false,
"name": "Junjie Yan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b7152f299e4d30f9eb41d4",
"hidden": false,
"name": "Jianlin Su",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-24T15:50:44.579Z",
"user": {
"_id": "6404982cad54665351d7c1e0",
"avatarUrl": "/avatars/8fb6d01802cbd4a1cbb7f6a0d83faa3a.svg",
"fullname": "jianlin su",
"isPro": false,
"type": "user",
"user": "bojone"
}
},
{
"_id": "67b7152f299e4d30f9eb41d5",
"hidden": false,
"name": "Yuxin Wu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b7152f299e4d30f9eb41d6",
"hidden": false,
"name": "Neo Y. Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b7152f299e4d30f9eb41d7",
"hidden": false,
"name": "Zhilin Yang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-24T15:51:03.452Z",
"user": {
"_id": "64bf74154d2052b1aa5ca6d9",
"avatarUrl": "/avatars/7aa6f2952cdbc20cfa758fdd905f06a6.svg",
"fullname": "ZHILIN YANG",
"isPro": false,
"type": "user",
"user": "bruceyannnn"
}
},
{
"_id": "67b7152f299e4d30f9eb41d8",
"hidden": false,
"name": "Xinyu Zhou",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b7152f299e4d30f9eb41d9",
"hidden": false,
"name": "Mingxing Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b7152f299e4d30f9eb41da",
"hidden": false,
"name": "Jiezhong Qiu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-24T15:51:14.319Z",
"user": {
"_id": "64b4fb7146dd1c352b0da25a",
"avatarUrl": "/avatars/b5c15bca8020c4841a87252ce9ed1618.svg",
"fullname": "Jiezhong Qiu",
"isPro": false,
"type": "user",
"user": "xptree"
}
}
] | 2025-02-18T14:06:05 | MoBA: Mixture of Block Attention for Long-Context LLMs | Scaling the effective context length is essential for advancing large
language models (LLMs) toward artificial general intelligence (AGI). However,
the quadratic increase in computational complexity inherent in traditional
attention mechanisms presents a prohibitive overhead. Existing approaches
either impose strongly biased structures, such as sink or window attention,
which are task-specific, or radically modify the attention mechanism into
linear approximations, whose performance in complex reasoning tasks remains
inadequately explored.
In this work, we propose a solution that adheres to the "less structure"
principle, allowing the model to determine where to attend autonomously, rather
than introducing predefined biases. We introduce Mixture of Block Attention
(MoBA), an innovative approach that applies the principles of Mixture of
Experts (MoE) to the attention mechanism. This novel architecture demonstrates
superior performance on long-context tasks while offering a key advantage: the
ability to seamlessly transition between full and sparse attention, enhancing
efficiency without the risk of compromising performance. MoBA has already been
deployed to support Kimi's long-context requests and demonstrates significant
advancements in efficient attention computation for LLMs. Our code is available
at https://github.com/MoonshotAI/MoBA. | 13 | 67b71530299e4d30f9eb4213 | null | null |
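The core idea is MoE-style gating over blocks of keys. Below is a single-head toy sketch, assuming mean-pooled block centroids as gate keys and omitting the causal masking and always-attend-current-block details of the real system; it is not Kimi's kernel.

```python
import torch
import torch.nn.functional as F

def moba_attention(q, k, v, block: int = 4, topk: int = 2):
    # q, k, v: (seq, dim); seq must be divisible by block in this toy version
    seq, dim = k.shape
    nblk = seq // block
    centroids = k.view(nblk, block, dim).mean(dim=1)      # mean-pooled key blocks
    gate = q @ centroids.T                                # (seq, nblk) gate scores
    keep = gate.topk(topk, dim=-1).indices                # top-k blocks per query
    allowed = torch.zeros(seq, nblk).scatter_(1, keep, 1.0).bool()
    mask = allowed.repeat_interleave(block, dim=1)        # expand to token level
    scores = (q @ k.T) / dim ** 0.5
    scores = scores.masked_fill(~mask, float("-inf"))
    return F.softmax(scores, dim=-1) @ v

q = k = v = torch.randn(16, 32)
print(moba_attention(q, k, v).shape)  # torch.Size([16, 32])
```

Each query attends to only topk * block of the seq keys, which is where the efficiency gain over full attention comes from.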
|
2025-02-24T04:29:42.452000 | JL1-CD: A New Benchmark for Remote Sensing Change Detection and a Robust Multi-Teacher Knowledge Distillation Framework | 2 | {
"_id": "67bb32b6a0cb6e48cfd27d80",
"avatarUrl": "/avatars/3cafe3a3fb60405252962d00105667c5.svg",
"followerCount": null,
"fullname": "Ziyuan Liu",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "circleLZY",
"type": "user"
} | true | null | 2502.13407 | [
{
"_id": "67bb33f3829dedfc99ae1288",
"hidden": false,
"name": "Ziyuan Liu",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-24T09:07:29.223Z",
"user": {
"_id": "67bb32b6a0cb6e48cfd27d80",
"avatarUrl": "/avatars/3cafe3a3fb60405252962d00105667c5.svg",
"fullname": "Ziyuan Liu",
"isPro": false,
"type": "user",
"user": "circleLZY"
}
},
{
"_id": "67bb33f3829dedfc99ae1289",
"hidden": false,
"name": "Ruifei Zhu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bb33f3829dedfc99ae128a",
"hidden": false,
"name": "Long Gao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bb33f3829dedfc99ae128b",
"hidden": false,
"name": "Yuanxiu Zhou",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bb33f3829dedfc99ae128c",
"hidden": false,
"name": "Jingyu Ma",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-24T16:16:21.974Z",
"user": {
"_id": "673a9a638127cd120c9c272d",
"avatarUrl": "/avatars/d6e225d7a869487cb48c4ac89d048cb4.svg",
"fullname": "Jingyu Ma",
"isPro": false,
"type": "user",
"user": "jingyum"
}
},
{
"_id": "67bb33f3829dedfc99ae128d",
"hidden": false,
"name": "Yuantao Gu",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-19T03:33:54 | JL1-CD: A New Benchmark for Remote Sensing Change Detection and a Robust
Multi-Teacher Knowledge Distillation Framework | Deep learning has achieved significant success in the field of remote sensing
image change detection (CD), yet two major challenges remain: the scarcity of
sub-meter, all-inclusive open-source CD datasets, and the difficulty of
achieving consistent and satisfactory detection results across images with
varying change areas. To address these issues, we introduce the JL1-CD dataset,
which contains 5,000 pairs of 512 x 512 pixel images with a resolution of 0.5
to 0.75 meters. Additionally, we propose a multi-teacher knowledge distillation
(MTKD) framework for CD. Experimental results on the JL1-CD and SYSU-CD
datasets demonstrate that the MTKD framework significantly improves the
performance of CD models with various network architectures and parameter
sizes, achieving new state-of-the-art results. The code is available at
https://github.com/circleLZY/MTKD-CD. | 1 | 67bb33f6829dedfc99ae135e | null | null |
|
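The JL1-CD record above centers on a multi-teacher knowledge distillation (MTKD) framework. The abstract does not specify how teachers are combined, so the sketch below assumes the generic recipe of a ground-truth cross-entropy term plus a uniformly weighted KL term to each teacher's softened predictions; `alpha` and the temperature `T` are illustrative hyperparameters, not the paper's.

```python
# Generic multi-teacher distillation loss for per-pixel change detection.
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def mtkd_loss(student_logits, teacher_logits_list, labels, alpha=0.5, T=2.0):
    """student_logits: (n_pixels, n_classes); labels: (n_pixels,) int class ids.
    Cross-entropy to ground truth + mean KL to each teacher's soft map."""
    p = softmax(student_logits)
    ce = -np.log(p[np.arange(len(labels)), labels] + 1e-12).mean()
    ps = softmax(student_logits / T)
    kl = 0.0
    for t_logits in teacher_logits_list:
        pt = softmax(t_logits / T)
        kl += (pt * (np.log(pt + 1e-12) - np.log(ps + 1e-12))).sum(axis=-1).mean()
    kl /= len(teacher_logits_list)
    return (1 - alpha) * ce + alpha * (T ** 2) * kl   # standard KD scaling by T^2
```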
2025-02-24T02:07:41.624000 | LLM-Microscope: Uncovering the Hidden Role of Punctuation in Context Memory of Transformers | 3 | {
"_id": "6172aaeec8e66e2aa84c06b9",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6172aaeec8e66e2aa84c06b9/ZdRZSp3P1SU6CIDbvQwkv.jpeg",
"followerCount": 12,
"fullname": "Anton Razzhigaev",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "razzant",
"type": "user"
} | true | [
"https://cdn-uploads.huggingface.co/production/uploads/6172aaeec8e66e2aa84c06b9/ZPSmOQ-7Yd7B7YIYiwcTw.png"
] | 2502.15007 | [
{
"_id": "67bc1a4a72499ce2ba28cc70",
"hidden": false,
"name": "Anton Razzhigaev",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-24T13:11:53.576Z",
"user": {
"_id": "6172aaeec8e66e2aa84c06b9",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6172aaeec8e66e2aa84c06b9/ZdRZSp3P1SU6CIDbvQwkv.jpeg",
"fullname": "Anton Razzhigaev",
"isPro": false,
"type": "user",
"user": "razzant"
}
},
{
"_id": "67bc1a4a72499ce2ba28cc71",
"hidden": false,
"name": "Matvey Mikhalchuk",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-24T13:11:58.863Z",
"user": {
"_id": "64ee45a944f4b3b1bccc02d1",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/64ee45a944f4b3b1bccc02d1/SoidO9HQ4mftzbUPtuBBf.png",
"fullname": "Matvey Mikhalchuk",
"isPro": false,
"type": "user",
"user": "matveymih"
}
},
{
"_id": "67bc1a4a72499ce2ba28cc72",
"hidden": false,
"name": "Temurbek Rahmatullaev",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-25T09:40:40.159Z",
"user": {
"_id": "659c33fe1801bc22227b8ff6",
"avatarUrl": "/avatars/837cdb3351cfd84dc9dcef37bcf18dff.svg",
"fullname": "Temurbek",
"isPro": false,
"type": "user",
"user": "raxtemur"
}
},
{
"_id": "67bc1a4a72499ce2ba28cc73",
"hidden": false,
"name": "Elizaveta Goncharova",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-24T13:12:27.998Z",
"user": {
"_id": "6310ff34bc152fa3e810c186",
"avatarUrl": "/avatars/bfd63bcd81548283f5e496e3693bf143.svg",
"fullname": "Elizaveta Goncharova",
"isPro": false,
"type": "user",
"user": "Elizaveta"
}
},
{
"_id": "67bc1a4a72499ce2ba28cc74",
"hidden": false,
"name": "Polina Druzhinina",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-24T13:12:33.837Z",
"user": {
"_id": "65d5e094cd05bc1eaa0fafc9",
"avatarUrl": "/avatars/ea3d52def6ef4d9af07728a76a499a9f.svg",
"fullname": "Polina Druzhinina",
"isPro": false,
"type": "user",
"user": "plina2polina"
}
},
{
"_id": "67bc1a4a72499ce2ba28cc75",
"hidden": false,
"name": "Ivan Oseledets",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-24T13:12:40.295Z",
"user": {
"_id": "6169a581d05945bfd8718dfa",
"avatarUrl": "/avatars/1892ab06a7ddb557232777de3cbec470.svg",
"fullname": "Ivan Oseledets",
"isPro": false,
"type": "user",
"user": "oseledets"
}
},
{
"_id": "67bc1a4a72499ce2ba28cc76",
"hidden": false,
"name": "Andrey Kuznetsov",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-25T09:40:42.629Z",
"user": {
"_id": "643984dceb7c5616ef3f5d54",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/643984dceb7c5616ef3f5d54/10JRkblrRIEVci6UJwvPz.jpeg",
"fullname": "Andrey Kuznetsov",
"isPro": false,
"type": "user",
"user": "kuznetsoffandrey"
}
}
] | 2025-02-20T19:59:35 | LLM-Microscope: Uncovering the Hidden Role of Punctuation in Context
Memory of Transformers | We introduce methods to quantify how Large Language Models (LLMs) encode and
store contextual information, revealing that tokens often seen as minor (e.g.,
determiners, punctuation) carry surprisingly high contextual information.
Notably, removing these tokens -- especially stopwords, articles, and commas --
consistently degrades performance on MMLU and BABILong-4k, even when only
irrelevant tokens are removed. Our analysis also shows a strong correlation between contextualization
and linearity, where linearity measures how closely the transformation from one
layer's embeddings to the next can be approximated by a single linear mapping.
These findings underscore the hidden importance of filler tokens in maintaining
context. For further exploration, we present LLM-Microscope, an open-source
toolkit that assesses token-level nonlinearity, evaluates contextual memory,
visualizes intermediate layer contributions (via an adapted Logit Lens), and
measures the intrinsic dimensionality of representations. This toolkit
illuminates how seemingly trivial tokens can be critical for long-range
understanding. | 156 | 67bc1a4c72499ce2ba28cd49 | null | null |
|
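The LLM-Microscope abstract defines linearity as how closely the map from one layer's embeddings to the next is approximated by a single linear transformation. A minimal sketch of such a score, using a least-squares fit and a relative-residual normalization (the toolkit's exact normalization may differ):

```python
# Layer-to-layer linearity score: fit one linear map from layer-L hidden
# states to layer-(L+1) hidden states and measure the leftover residual.
import numpy as np

def linearity_score(h_in, h_out):
    """h_in, h_out: (n_tokens, d) hidden states of consecutive layers.
    Returns a score in [0, 1]; 1 means the transition is perfectly linear."""
    A, *_ = np.linalg.lstsq(h_in, h_out, rcond=None)   # best linear map
    residual = np.linalg.norm(h_in @ A - h_out)
    return 1.0 - residual / np.linalg.norm(h_out)

h1 = np.random.randn(128, 64)
h2 = h1 @ np.random.randn(64, 64) + 0.01 * np.random.randn(128, 64)
print(linearity_score(h1, h2))   # close to 1 for a near-linear layer
```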
2025-02-24T01:16:03.517000 | MaskGWM: A Generalizable Driving World Model with Video Mask Reconstruction | 2 | {
"_id": "65717368be66cd9b65a8201c",
"avatarUrl": "/avatars/fe945828eec9ded4cfa3b89d48a64d90.svg",
"followerCount": null,
"fullname": "Wu Zehuan",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "wzhgba",
"type": "user"
} | true | null | 2502.11663 | [
{
"_id": "67b705d2ebee4662205c47f7",
"hidden": false,
"name": "Jingcheng Ni",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-24T13:06:17.841Z",
"user": {
"_id": "65d444b1ea28ba508b87ab01",
"avatarUrl": "/avatars/5836c0d64ba3936e064faa8ff4d44de0.svg",
"fullname": "Jingcheng Ni",
"isPro": false,
"type": "user",
"user": "kiranjc"
}
},
{
"_id": "67b705d2ebee4662205c47f8",
"hidden": false,
"name": "Yuxin Guo",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b705d2ebee4662205c47f9",
"hidden": false,
"name": "Yichen Liu",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-24T09:23:40.466Z",
"user": {
"_id": "6572dcc6bbd6664053b1fa6b",
"avatarUrl": "/avatars/aba29efd00bc41f14ce422f7807cd2c3.svg",
"fullname": "Liu Yichen",
"isPro": false,
"type": "user",
"user": "lyclyc52"
}
},
{
"_id": "67b705d2ebee4662205c47fa",
"hidden": false,
"name": "Rui Chen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b705d2ebee4662205c47fb",
"hidden": false,
"name": "Lewei Lu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-24T13:06:46.453Z",
"user": {
"_id": "65ead3ea908526a39082e641",
"avatarUrl": "/avatars/dcf870695fd56b06ca03d82f831e9019.svg",
"fullname": "Lewei Lu",
"isPro": false,
"type": "user",
"user": "luotto"
}
},
{
"_id": "67b705d2ebee4662205c47fc",
"hidden": false,
"name": "Zehuan Wu",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-21T09:59:38.956Z",
"user": {
"_id": "65717368be66cd9b65a8201c",
"avatarUrl": "/avatars/fe945828eec9ded4cfa3b89d48a64d90.svg",
"fullname": "Wu Zehuan",
"isPro": false,
"type": "user",
"user": "wzhgba"
}
}
] | 2025-02-17T10:53:56 | MaskGWM: A Generalizable Driving World Model with Video Mask
Reconstruction | World models that forecast environmental changes from actions are vital for
autonomous driving models with strong generalization. Prevailing driving
world models mainly build on video prediction models. Although these models can
produce high-fidelity video sequences with advanced diffusion-based generators,
they are constrained by their predictive duration and overall generalization
capabilities. In this paper, we explore solving this problem by combining a
generation loss with MAE-style feature-level context learning. In particular,
we instantiate this target with three key designs: (1) a more scalable Diffusion
Transformer (DiT) structure trained with an extra mask construction task; (2)
diffusion-related mask tokens that handle the fuzzy relations between mask
reconstruction and the generative diffusion process; and (3) an extension of the
mask construction task to the spatio-temporal domain by utilizing a row-wise
mask for shifted self-attention, rather than the masked self-attention of MAE.
We then adopt a row-wise cross-view module to align with this mask design.
Building on these improvements, we propose MaskGWM: a Generalizable driving
World Model embodied with Video Mask reconstruction. Our model comes in two
variants: MaskGWM-long, focusing on long-horizon prediction, and MaskGWM-mview,
dedicated to multi-view generation. Comprehensive experiments on standard
benchmarks validate the effectiveness of the proposed method, including standard
validation on the nuScenes dataset, long-horizon rollout on the OpenDV-2K
dataset, and zero-shot validation on the Waymo dataset. Quantitative metrics on
these datasets show that our method notably improves over state-of-the-art
driving world models. | 36 | 67b705d4ebee4662205c489c | null | null |
|
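The abstract above replaces MAE's patch-wise masking with a row-wise mask for shifted self-attention. The sketch below illustrates only the masking layout, dropping whole rows of a latent video frame; the mask ratio and frame shape are illustrative, and the shifted-attention machinery itself is not modeled.

```python
# Illustrative row-wise video mask: entire rows of each latent frame are
# dropped, extending MAE-style masking along the spatial axis.
import numpy as np

def row_wise_mask(n_frames, h, w, mask_ratio=0.5, seed=0):
    rng = np.random.default_rng(seed)
    keep = np.ones((n_frames, h, w), dtype=bool)
    for t in range(n_frames):
        dropped = rng.choice(h, size=int(h * mask_ratio), replace=False)
        keep[t, dropped, :] = False        # drop whole rows, not single patches
    return keep

m = row_wise_mask(n_frames=2, h=8, w=8)
print(m.mean())   # ~0.5 of tokens kept per frame
```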
2025-02-24T01:13:24.911000 | CrossOver: 3D Scene Cross-Modal Alignment | 3 | {
"_id": "650ec19e6620b0c57e2a551b",
"avatarUrl": "/avatars/c26c03fa920d857120f03c9ccb9f1d7a.svg",
"followerCount": null,
"fullname": "Sayan Deb Sarkar",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "sayandsarkar",
"type": "user"
} | true | [
"https://cdn-uploads.huggingface.co/production/uploads/650ec19e6620b0c57e2a551b/S_xFBPoV3YbtHmtLtRrSV.gif"
] | 2502.15011 | [
{
"_id": "67bc0d12ffc2c387329c8cfd",
"hidden": false,
"name": "Sayan Deb Sarkar",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-24T09:06:56.555Z",
"user": {
"_id": "650ec19e6620b0c57e2a551b",
"avatarUrl": "/avatars/c26c03fa920d857120f03c9ccb9f1d7a.svg",
"fullname": "Sayan Deb Sarkar",
"isPro": false,
"type": "user",
"user": "sayandsarkar"
}
},
{
"_id": "67bc0d12ffc2c387329c8cfe",
"hidden": false,
"name": "Ondrej Miksik",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bc0d12ffc2c387329c8cff",
"hidden": false,
"name": "Marc Pollefeys",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-24T16:11:04.481Z",
"user": {
"_id": "67b5fa179782a5e2fd2cb26a",
"avatarUrl": "/avatars/62c38f29ec641e001eeddf840bea21a0.svg",
"fullname": "Marc Pollefeys",
"isPro": false,
"type": "user",
"user": "mapo1"
}
},
{
"_id": "67bc0d12ffc2c387329c8d00",
"hidden": false,
"name": "Daniel Barath",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bc0d12ffc2c387329c8d01",
"hidden": false,
"name": "Iro Armeni",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-24T16:11:14.554Z",
"user": {
"_id": "6745f90cf4d75fd11a2407ac",
"avatarUrl": "/avatars/882f56565b4ebfabf1c13e199d74a4de.svg",
"fullname": "Iro Armeni",
"isPro": false,
"type": "user",
"user": "ir0"
}
}
] | 2025-02-20T20:05:30 | CrossOver: 3D Scene Cross-Modal Alignment | Multi-modal 3D object understanding has gained significant attention, yet
current approaches often assume complete data availability and rigid alignment
across all modalities. We present CrossOver, a novel framework for cross-modal
3D scene understanding via flexible, scene-level modality alignment. Unlike
traditional methods that require aligned modality data for every object
instance, CrossOver learns a unified, modality-agnostic embedding space for
scenes by aligning modalities - RGB images, point clouds, CAD models,
floorplans, and text descriptions - with relaxed constraints and without
explicit object semantics. Leveraging dimensionality-specific encoders, a
multi-stage training pipeline, and emergent cross-modal behaviors, CrossOver
supports robust scene retrieval and object localization, even with missing
modalities. Evaluations on ScanNet and 3RScan datasets show its superior
performance across diverse metrics, highlighting adaptability for real-world
applications in 3D scene understanding. | 3 | 67bc0d18ffc2c387329c8e56 | null | null |
|
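CrossOver, per the abstract above, aligns per-modality scene embeddings in a unified space with relaxed constraints. As a rough illustration of scene-level alignment, here is a standard InfoNCE contrastive loss between two modalities' scene embeddings; the paper's dimensionality-specific encoders and multi-stage training pipeline are not modeled here.

```python
# InfoNCE alignment between two modalities' scene embeddings: matched rows
# are positives, all other pairs in the batch are negatives.
import numpy as np

def info_nce(za, zb, tau=0.07):
    """za, zb: (n_scenes, d) L2-normalized embeddings of the same scenes
    in two modalities (e.g., point cloud vs. floorplan)."""
    logits = za @ zb.T / tau
    logits -= logits.max(axis=1, keepdims=True)
    p = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    return -np.log(np.diag(p) + 1e-12).mean()

za = np.random.randn(8, 16); za /= np.linalg.norm(za, axis=1, keepdims=True)
zb = za + 0.05 * np.random.randn(8, 16); zb /= np.linalg.norm(zb, axis=1, keepdims=True)
print(info_nce(za, zb))   # low loss when the modalities already align
```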
2025-02-24T00:36:34.341000 | VLM$^2$-Bench: A Closer Look at How Well VLMs Implicitly Link Explicit Matching Visual Cues | 2 | {
"_id": "65d8b0f0661492b25c6623de",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/65d8b0f0661492b25c6623de/c6LPDse8NIV_3BHIu8dYe.png",
"followerCount": 10,
"fullname": "Jianshu Zhang",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "Sterzhang",
"type": "user"
} | true | null | 2502.12084 | [
{
"_id": "67b8922ef6632327952ec1e1",
"hidden": false,
"name": "Jianshu Zhang",
"status": "extracted_pending",
"statusLastChangedAt": "2025-02-21T14:48:16.643Z",
"user": {
"_id": "65d8b0f0661492b25c6623de",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/65d8b0f0661492b25c6623de/c6LPDse8NIV_3BHIu8dYe.png",
"fullname": "Jianshu Zhang",
"isPro": false,
"type": "user",
"user": "Sterzhang"
}
},
{
"_id": "67b8922ef6632327952ec1e2",
"hidden": false,
"name": "Dongyu Yao",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-24T09:20:43.528Z",
"user": {
"_id": "64b0377121a001042bc0d274",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/64b0377121a001042bc0d274/Hk8yI5_s7ey5o9SVZzXrB.png",
"fullname": "Dongyu Yao",
"isPro": false,
"type": "user",
"user": "RainJamesY"
}
},
{
"_id": "67b8922ef6632327952ec1e3",
"hidden": false,
"name": "Renjie Pi",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-24T13:18:30.254Z",
"user": {
"_id": "63f45b8d520c14618930d175",
"avatarUrl": "/avatars/a20994594579b52a8be8bd2c4acbb913.svg",
"fullname": "renjie",
"isPro": false,
"type": "user",
"user": "renjiepi"
}
},
{
"_id": "67b8922ef6632327952ec1e4",
"hidden": false,
"name": "Paul Pu Liang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b8922ef6632327952ec1e5",
"hidden": false,
"name": "Yi R.",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b8922ef6632327952ec1e6",
"hidden": false,
"name": "Fung",
"status": "extracted_pending",
"statusLastChangedAt": "2025-02-26T01:33:46.195Z",
"user": {
"_id": "67be6f4f22513d7d52d7ef66",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/no-auth/jQEVQufb6BOb9cQpb6NIt.png",
"fullname": "San Yi",
"isPro": false,
"type": "user",
"user": "Sanyia"
}
}
] | 2025-02-17T17:57:50 | VLM^2-Bench: A Closer Look at How Well VLMs Implicitly Link Explicit
Matching Visual Cues | Visually linking matching cues is a crucial ability in daily life, such as
identifying the same person in multiple photos based on their cues, even
without knowing who they are. Despite the extensive knowledge that
vision-language models (VLMs) possess, it remains largely unexplored whether
they are capable of performing this fundamental task. To address this, we
introduce VLM^2-Bench, a benchmark designed to assess whether VLMs can
Visually Link Matching cues, with 9 subtasks and over 3,000 test cases.
Comprehensive evaluation across eight open-source VLMs and GPT-4o, along with
further analysis of various language-side and vision-side prompting methods,
leads to a total of eight key findings. We identify critical challenges in
models' ability to link visual cues, highlighting a significant performance gap
where even GPT-4o lags 34.80% behind humans. Based on these insights, we
advocate for (i) enhancing core visual capabilities to improve adaptability and
reduce reliance on prior knowledge, (ii) establishing clearer principles for
integrating language-based reasoning in vision-centric tasks to prevent
unnecessary biases, and (iii) shifting vision-text training paradigms toward
fostering models' ability to independently structure and infer relationships
among visual cues. | 29 | 67b89230f6632327952ec27a | null | null |
|
2025-02-24T00:07:05.804000 | LightThinker: Thinking Step-by-Step Compression | 5 | {
"_id": "620b3bbb0668e435407c8d0a",
"avatarUrl": "/avatars/e0fccbb2577d76088e09f054c35cffbc.svg",
"followerCount": 19,
"fullname": "Ningyu Zhang",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "Ningyu",
"type": "user"
} | true | [
"https://cdn-uploads.huggingface.co/production/uploads/620b3bbb0668e435407c8d0a/dhGMWf_tcPkvQlRm5DbD6.png"
] | 2502.15589 | [
{
"_id": "67bbfe2d670ece8d9184f339",
"hidden": false,
"name": "Jintian Zhang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-24T13:22:07.730Z",
"user": {
"_id": "63f89fe7565506a9cadcd2cf",
"avatarUrl": "/avatars/7eb449a1109dcff051cb3ba680f0c082.svg",
"fullname": "Jintian Zhang",
"isPro": false,
"type": "user",
"user": "MikeDean"
}
},
{
"_id": "67bbfe2d670ece8d9184f33a",
"hidden": false,
"name": "Yuqi Zhu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bbfe2d670ece8d9184f33b",
"hidden": false,
"name": "Mengshu Sun",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-24T13:22:36.798Z",
"user": {
"_id": "64f6d1ed46284aa28d9abf6c",
"avatarUrl": "/avatars/d6beaecfd00345e4a664862fff217427.svg",
"fullname": "Sun mengshu",
"isPro": false,
"type": "user",
"user": "sunmengshu"
}
},
{
"_id": "67bbfe2d670ece8d9184f33c",
"hidden": false,
"name": "Yujie Luo",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-24T13:22:42.818Z",
"user": {
"_id": "67603f17d4f2ded0c1498358",
"avatarUrl": "/avatars/97205504757d7eb33512ab96b2ecde28.svg",
"fullname": "yujieluo",
"isPro": false,
"type": "user",
"user": "yujieluo1031"
}
},
{
"_id": "67bbfe2d670ece8d9184f33d",
"hidden": false,
"name": "Shuofei Qiao",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-24T09:07:02.722Z",
"user": {
"_id": "6447800f30fa4ecb85ddad80",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6447800f30fa4ecb85ddad80/NsmXIaMsWctmTNA7tFVkX.jpeg",
"fullname": "Shuofei Qiao",
"isPro": false,
"type": "user",
"user": "GoooDte"
}
},
{
"_id": "67bbfe2d670ece8d9184f33e",
"hidden": false,
"name": "Lun Du",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bbfe2d670ece8d9184f33f",
"hidden": false,
"name": "Da Zheng",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-24T13:22:53.988Z",
"user": {
"_id": "66270e0026d5a3eee310ad53",
"avatarUrl": "/avatars/db34068c114c348de296e00b1b5a5b9b.svg",
"fullname": "Da Zheng",
"isPro": false,
"type": "user",
"user": "zhengda1936"
}
},
{
"_id": "67bbfe2d670ece8d9184f340",
"hidden": false,
"name": "Huajun Chen",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-24T13:24:31.548Z",
"user": {
"_id": "64931296137833d7ec7689cd",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/64931296137833d7ec7689cd/TBihNdp1ZwIWjhfAWjRr6.jpeg",
"fullname": "Huajun Chen",
"isPro": false,
"type": "user",
"user": "huajunsir"
}
},
{
"_id": "67bbfe2d670ece8d9184f341",
"hidden": false,
"name": "Ningyu Zhang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-24T09:07:04.794Z",
"user": {
"_id": "620b3bbb0668e435407c8d0a",
"avatarUrl": "/avatars/e0fccbb2577d76088e09f054c35cffbc.svg",
"fullname": "Ningyu Zhang",
"isPro": false,
"type": "user",
"user": "Ningyu"
}
}
] | 2025-02-21T16:57:22 | LightThinker: Thinking Step-by-Step Compression | Large language models (LLMs) have shown remarkable performance in complex
reasoning tasks, but their efficiency is hindered by the substantial memory and
computational costs associated with generating lengthy tokens. In this paper,
we propose LightThinker, a novel method that enables LLMs to dynamically
compress intermediate thoughts during reasoning. Inspired by human cognitive
processes, LightThinker compresses verbose thought steps into compact
representations and discards the original reasoning chains, thereby
significantly reducing the number of tokens stored in the context window. This
is achieved by training the model on when and how to perform compression
through data construction, mapping hidden states to condensed gist tokens, and
creating specialized attention masks. Additionally, we introduce the Dependency
(Dep) metric to quantify the degree of compression by measuring the reliance on
historical tokens during generation. Extensive experiments on four datasets and
two models show that LightThinker reduces peak memory usage and inference time,
while maintaining competitive accuracy. Our work provides a new direction for
improving the efficiency of LLMs in complex reasoning tasks without sacrificing
performance. Code will be released at https://github.com/zjunlp/LightThinker. | 26 | 67bbfe2f670ece8d9184f3a4 | null | null |
|
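LightThinker's compression, as described above, maps verbose thought steps into gist tokens and discards the original chain via specialized attention masks. The sketch below builds one such mask: tokens generated after the compression point may attend to the gist tokens but not to the discarded thought span. The token layout (prompt/thought/gist/rest) is a hypothetical simplification of the paper's scheme.

```python
# Attention mask in LightThinker's spirit: later tokens see the gists that
# summarize a thought span, but not the verbose span itself.
import numpy as np

def lightthinker_mask(n_prompt, n_thought, n_gist, n_rest):
    n = n_prompt + n_thought + n_gist + n_rest
    mask = np.tril(np.ones((n, n), dtype=bool))         # causal baseline
    t0, t1 = n_prompt, n_prompt + n_thought             # verbose thought span
    mask[t1 + n_gist:, t0:t1] = False                   # rest can't see raw thought
    # gist tokens (rows t1 .. t1+n_gist-1) still see the span they compress
    return mask

m = lightthinker_mask(n_prompt=4, n_thought=6, n_gist=2, n_rest=3)
print(m.shape, m[-1, 4:10])   # last token: the thought span is masked out
```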
2025-02-24T00:02:52.495000 | Superintelligent Agents Pose Catastrophic Risks: Can Scientist AI Offer a Safer Path? | 2 | {
"_id": "6039478ab3ecf716b1a5fd4d",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6039478ab3ecf716b1a5fd4d/_Thy4E7taiSYBLKxEKJbT.jpeg",
"followerCount": 65,
"fullname": "taesiri",
"isHf": false,
"isMod": false,
"isPro": true,
"name": "taesiri",
"type": "user"
} | false | null | 2502.15657 | [
{
"_id": "67bbfd6c3593f69f41512d54",
"hidden": false,
"name": "Yoshua Bengio",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bbfd6c3593f69f41512d55",
"hidden": false,
"name": "Michael Cohen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bbfd6c3593f69f41512d56",
"hidden": false,
"name": "Damiano Fornasiere",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-24T15:53:43.499Z",
"user": {
"_id": "63e4beb2d6278c161be4ef52",
"avatarUrl": "/avatars/15bbfde42e890f6f0dd0efd32dfdf5fa.svg",
"fullname": "Damiano Fornasiere",
"isPro": false,
"type": "user",
"user": "dfp00"
}
},
{
"_id": "67bbfd6c3593f69f41512d57",
"hidden": false,
"name": "Joumana Ghosn",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bbfd6c3593f69f41512d58",
"hidden": false,
"name": "Pietro Greiner",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bbfd6c3593f69f41512d59",
"hidden": false,
"name": "Matt MacDermott",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-24T15:53:56.982Z",
"user": {
"_id": "65bb7386a951088f7dc69b16",
"avatarUrl": "/avatars/bb72822451a9d769951c03e0cc3b1912.svg",
"fullname": "Matt MacDermott",
"isPro": false,
"type": "user",
"user": "mattmacdermott"
}
},
{
"_id": "67bbfd6c3593f69f41512d5a",
"hidden": false,
"name": "Sören Mindermann",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bbfd6c3593f69f41512d5b",
"hidden": false,
"name": "Adam Oberman",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bbfd6c3593f69f41512d5c",
"hidden": false,
"name": "Jesse Richardson",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-24T15:54:09.707Z",
"user": {
"_id": "6602268ec92df8c189d56ef1",
"avatarUrl": "/avatars/ec4a88bbf01f226640725ac117e53eae.svg",
"fullname": "Jesse Richardson",
"isPro": false,
"type": "user",
"user": "getfull"
}
},
{
"_id": "67bbfd6c3593f69f41512d5d",
"hidden": false,
"name": "Oliver Richardson",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-24T15:54:15.449Z",
"user": {
"_id": "64e3e41e52a2eece10b99f0f",
"avatarUrl": "/avatars/90045fe790388fe2cd010e04ad0137d1.svg",
"fullname": "Oliver Richardson",
"isPro": false,
"type": "user",
"user": "olliekse"
}
},
{
"_id": "67bbfd6c3593f69f41512d5e",
"hidden": false,
"name": "Marc-Antoine Rondeau",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-24T15:54:22.066Z",
"user": {
"_id": "67112a441635565bd8d0a6cd",
"avatarUrl": "/avatars/68dc8aa2a65c47c2e0869119212ac4aa.svg",
"fullname": "Marc-Antoine Rondeau",
"isPro": false,
"type": "user",
"user": "marondeau"
}
},
{
"_id": "67bbfd6c3593f69f41512d5f",
"hidden": false,
"name": "Pierre-Luc St-Charles",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-24T15:54:28.067Z",
"user": {
"_id": "6686a17d7c05938b94d81875",
"avatarUrl": "/avatars/caff2bc79068a024d74e9d3b7ea79eaf.svg",
"fullname": "Pierre-Luc St-Charles",
"isPro": false,
"type": "user",
"user": "plstcharles-mila"
}
},
{
"_id": "67bbfd6c3593f69f41512d60",
"hidden": false,
"name": "David Williams-King",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-24T15:54:34.058Z",
"user": {
"_id": "6776217bd8ace1de4692e17e",
"avatarUrl": "/avatars/357eb3e2c96b8a98b5d1029eefbb0ed3.svg",
"fullname": "David Williams-King",
"isPro": false,
"type": "user",
"user": "dwksrc"
}
}
] | 2025-02-21T18:28:36 | Superintelligent Agents Pose Catastrophic Risks: Can Scientist AI Offer
a Safer Path? | The leading AI companies are increasingly focused on building generalist AI
agents -- systems that can autonomously plan, act, and pursue goals across
almost all tasks that humans can perform. Despite how useful these systems
might be, unchecked AI agency poses significant risks to public safety and
security, ranging from misuse by malicious actors to a potentially irreversible
loss of human control. We discuss how these risks arise from current AI
training methods. Indeed, various scenarios and experiments have demonstrated
the possibility of AI agents engaging in deception or pursuing goals that were
not specified by human operators and that conflict with human interests, such
as self-preservation. Following the precautionary principle, we see a strong
need for safer, yet still useful, alternatives to the current agency-driven
trajectory. Accordingly, we propose as a core building block for further
advances the development of a non-agentic AI system that is trustworthy and
safe by design, which we call Scientist AI. This system is designed to explain
the world from observations, as opposed to taking actions in it to imitate or
please humans. It comprises a world model that generates theories to explain
data and a question-answering inference machine. Both components operate with
an explicit notion of uncertainty to mitigate the risks of overconfident
predictions. In light of these considerations, a Scientist AI could be used to
assist human researchers in accelerating scientific progress, including in AI
safety. In particular, our system can be employed as a guardrail against AI
agents that might be created despite the risks involved. Ultimately, focusing
on non-agentic AI may enable the benefits of AI innovation while avoiding the
risks associated with the current trajectory. We hope these arguments will
motivate researchers, developers, and policymakers to favor this safer path. | 5 | 67bbfd6c3593f69f41512d96 | null | null |
|
2025-02-23T23:43:43.529000 | StructFlowBench: A Structured Flow Benchmark for Multi-turn Instruction Following | 2 | {
"_id": "670e57b3391f1a7021182bff",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/no-auth/N0tuHZVz8KFPCv8G1qUX2.png",
"followerCount": 2,
"fullname": "Yuan Wu",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "WhiteCatY",
"type": "user"
} | true | null | 2502.14494 | [
{
"_id": "67b9dda03593f69f41cdb5d3",
"hidden": false,
"name": "Jinnan Li",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-24T13:24:42.884Z",
"user": {
"_id": "6434f5a5a4c9c55871ae888f",
"avatarUrl": "/avatars/058389c773a67b2b03d44556f0ee43d1.svg",
"fullname": "Jinnan Li",
"isPro": false,
"type": "user",
"user": "Jinnan"
}
},
{
"_id": "67b9dda03593f69f41cdb5d4",
"hidden": true,
"name": "Jinzhe Li",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-24T13:24:50.056Z",
"user": {
"_id": "67658bd7f7ac7e978ab6f957",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/no-auth/c8VBgFckkZNUGeqUyotwq.png",
"fullname": "Jinzhe Li",
"isPro": false,
"type": "user",
"user": "JinzheFudan"
}
},
{
"_id": "67b9dda03593f69f41cdb5d5",
"hidden": false,
"name": "Yue Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b9dda03593f69f41cdb5d6",
"hidden": false,
"name": "Yi Chang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b9dda03593f69f41cdb5d7",
"hidden": false,
"name": "Yuan Wu",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-24T15:47:33.508Z",
"user": {
"_id": "670e57b3391f1a7021182bff",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/no-auth/N0tuHZVz8KFPCv8G1qUX2.png",
"fullname": "Yuan Wu",
"isPro": false,
"type": "user",
"user": "WhiteCatY"
}
}
] | 2025-02-20T12:22:18 | StructFlowBench: A Structured Flow Benchmark for Multi-turn Instruction
Following | Multi-turn instruction following capability constitutes a core competency of
large language models (LLMs) in real-world applications. Existing evaluation
benchmarks predominantly focus on fine-grained constraint satisfaction and
domain-specific capability assessment, yet overlook the crucial structural
dependency between dialogue turns that distinguishes multi-turn from
single-turn interactions. This structural dependency not only reflects user
intent but also establishes a second dimension for instruction following
evaluation beyond constraint satisfaction. To address this gap, we propose
StructFlowBench, a multi-turn instruction following benchmark with structural
flow modeling. The benchmark innovatively defines a structural flow framework
comprising six fundamental inter-turn relationships, which not only introduces
novel structural constraints for model evaluation but also serves as generation
parameters for creating customized dialogue flows tailored to specific
scenarios. Adopting established LLM-based automatic evaluation methodologies,
we conduct systematic evaluations of 13 leading open-source and closed-source
LLMs. Experimental results reveal significant deficiencies in current models'
comprehension of multi-turn dialogue structures. The code is available at
https://github.com/MLGroupJLU/StructFlowBench. | 13 | 67b9dda13593f69f41cdb635 | null | null |
|
2025-02-23T23:17:33.152000 | UPCORE: Utility-Preserving Coreset Selection for Balanced Unlearning | 2 | {
"_id": "64f64da90efa33bfe0a3d9ba",
"avatarUrl": "/avatars/c45fb015433e46a2eeb9518910f75d35.svg",
"followerCount": null,
"fullname": "Vaidehi Patil",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "vaidehi99",
"type": "user"
} | true | null | 2502.15082 | [
{
"_id": "67bbe93f267aa2b537b318be",
"hidden": false,
"name": "Vaidehi Patil",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-24T09:07:15.794Z",
"user": {
"_id": "64f64da90efa33bfe0a3d9ba",
"avatarUrl": "/avatars/c45fb015433e46a2eeb9518910f75d35.svg",
"fullname": "Vaidehi Patil",
"isPro": false,
"type": "user",
"user": "vaidehi99"
}
},
{
"_id": "67bbe93f267aa2b537b318bf",
"hidden": false,
"name": "Elias Stengel-Eskin",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-24T16:16:49.662Z",
"user": {
"_id": "61781c4caf41befe8ff060e8",
"avatarUrl": "/avatars/8871d7b046fc28cbc8638228da8e9737.svg",
"fullname": "Elias Stengel-Eskin",
"isPro": false,
"type": "user",
"user": "esteng"
}
},
{
"_id": "67bbe93f267aa2b537b318c0",
"hidden": false,
"name": "Mohit Bansal",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-24T16:16:57.627Z",
"user": {
"_id": "665d9d3a057f7c508f98c625",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/665d9d3a057f7c508f98c625/u1R9P9sJoAl4zEIcetbPy.jpeg",
"fullname": "Mohit Bansal",
"isPro": false,
"type": "user",
"user": "mohitbansal"
}
}
] | 2025-02-20T22:51:10 | UPCORE: Utility-Preserving Coreset Selection for Balanced Unlearning | User specifications or legal frameworks often require information to be
removed from pretrained models, including large language models (LLMs). This
requires deleting or "forgetting" a set of data points from an already-trained
model, which typically degrades its performance on other data points. Thus, a
balance must be struck between removing information and keeping the model's
other abilities intact, with a failure to balance this trade-off leading to
poor deletion or an unusable model. To this end, we propose UPCORE
(Utility-Preserving Coreset Selection), a method-agnostic data selection
framework for mitigating collateral damage during unlearning. Finding that the
model damage is correlated with the variance of the model's representations on
the forget set, we selectively prune the forget set to remove outliers, thereby
minimizing model degradation after unlearning. We evaluate UPCORE across three
standard unlearning methods consistently achieving a superior balance between
the competing objectives of deletion efficacy and model preservation. To better
evaluate this trade-off, we introduce a new metric, measuring the
area-under-the-curve (AUC) across standard metrics. We find that UPCORE
improves both standard metrics and AUC, benefitting from positive transfer
between the coreset and pruned points while reducing negative transfer from the
forget set to points outside of it. | 1 | 67bbe940267aa2b537b318f4 | null | null |
|
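UPCORE, per the abstract, prunes forget-set outliers to lower the variance of the model's representations before unlearning. Below is a minimal sketch under the assumption that "outlier" means far from the representation centroid; the paper's actual outlier criterion and pruning fraction may differ.

```python
# Variance-driven coreset selection: keep the forget-set points whose
# representations sit closest to the centroid, pruning the far outliers.
import numpy as np

def upcore_select(reps, keep_frac=0.9):
    """reps: (n, d) model representations of forget-set points.
    Returns indices of the keep_frac fraction closest to the centroid."""
    center = reps.mean(axis=0)
    dist = np.linalg.norm(reps - center, axis=1)
    n_keep = int(len(reps) * keep_frac)
    return np.argsort(dist)[:n_keep]

reps = np.random.randn(100, 32)
core = upcore_select(reps)
# typically True: pruning outliers shrinks the representation variance
print(reps[core].var(axis=0).sum() <= reps.var(axis=0).sum())
```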
2025-02-23T22:55:04.409000 | PhotoDoodle: Learning Artistic Image Editing from Few-Shot Pairwise Data | 6 | {
"_id": "64311a95034ecbefddd141ef",
"avatarUrl": "/avatars/b6dc5ca373bedbaa368208517954c375.svg",
"followerCount": 4,
"fullname": "Yiren Song",
"isHf": false,
"isMod": false,
"isPro": true,
"name": "yiren98",
"type": "user"
} | true | null | 2502.14397 | [
{
"_id": "67bbed806f2833ecccf914dd",
"hidden": false,
"name": "Shijie Huang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-24T13:13:17.369Z",
"user": {
"_id": "6239ad42cddfae177174bdc5",
"avatarUrl": "/avatars/badc07ff40d9790527b27d87c924e9ee.svg",
"fullname": "Shijie Huang",
"isPro": false,
"type": "user",
"user": "Humor"
}
},
{
"_id": "67bbed806f2833ecccf914de",
"hidden": false,
"name": "Yiren Song",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-24T13:13:23.453Z",
"user": {
"_id": "64311a95034ecbefddd141ef",
"avatarUrl": "/avatars/b6dc5ca373bedbaa368208517954c375.svg",
"fullname": "Yiren Song",
"isPro": true,
"type": "user",
"user": "yiren98"
}
},
{
"_id": "67bbed806f2833ecccf914df",
"hidden": false,
"name": "Yuxuan Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bbed806f2833ecccf914e0",
"hidden": false,
"name": "Hailong Guo",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bbed806f2833ecccf914e1",
"hidden": false,
"name": "Xueyin Wang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-24T13:17:06.041Z",
"user": {
"_id": "65fd9853b329ebf2d40e280a",
"avatarUrl": "/avatars/053e96c4db138cc8948c6350b04617b9.svg",
"fullname": "Wang Xueying",
"isPro": false,
"type": "user",
"user": "Forever-rover"
}
},
{
"_id": "67bbed806f2833ecccf914e2",
"hidden": false,
"name": "Mike Zheng Shou",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-24T13:17:12.124Z",
"user": {
"_id": "661ab3da2b14565c7acccf5c",
"avatarUrl": "/avatars/fa4fc03664803e02aede4d4c3d50b393.svg",
"fullname": "Mike Zheng Shou",
"isPro": false,
"type": "user",
"user": "AnalMom"
}
},
{
"_id": "67bbed806f2833ecccf914e3",
"hidden": false,
"name": "Jiaming Liu",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-20T09:35:38 | PhotoDoodle: Learning Artistic Image Editing from Few-Shot Pairwise Data | We introduce PhotoDoodle, a novel image editing framework designed to
facilitate photo doodling by enabling artists to overlay decorative elements
onto photographs. Photo doodling is challenging because the inserted elements
must appear seamlessly integrated with the background, requiring realistic
blending, perspective alignment, and contextual coherence. Additionally, the
background must be preserved without distortion, and the artist's unique style
must be captured efficiently from limited training data. These requirements are
not addressed by previous methods that primarily focus on global style transfer
or regional inpainting. The proposed method, PhotoDoodle, employs a two-stage
training strategy. Initially, we train a general-purpose image editing model,
OmniEditor, using large-scale data. Subsequently, we fine-tune this model with
EditLoRA using a small, artist-curated dataset of before-and-after image pairs
to capture distinct editing styles and techniques. To enhance consistency in
the generated results, we introduce a positional encoding reuse mechanism.
Additionally, we release a PhotoDoodle dataset featuring six high-quality
styles. Extensive experiments demonstrate the advanced performance and
robustness of our method in customized image editing, opening new possibilities
for artistic creation. | 38 | 67bbed856f2833ecccf915c5 | null | null |
|
2025-02-23T22:24:55.500000 | One-step Diffusion Models with $f$-Divergence Distribution Matching | 2 | {
"_id": "60f1abe7544c2adfd699860c",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"followerCount": 6280,
"fullname": "AK",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "akhaliq",
"type": "user"
} | false | null | 2502.15681 | [
{
"_id": "67bbe67c7727595ca5979d2a",
"hidden": false,
"name": "Yilun Xu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-24T15:54:56.422Z",
"user": {
"_id": "649c37de5ffe05267a105fe8",
"avatarUrl": "/avatars/4f262272e6222f879c6c0fedfa2e5861.svg",
"fullname": "Yilun Xu",
"isPro": false,
"type": "user",
"user": "AaronXyl"
}
},
{
"_id": "67bbe67c7727595ca5979d2b",
"hidden": false,
"name": "Weili Nie",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-24T15:55:03.232Z",
"user": {
"_id": "64c1a69e226e016da8450ae2",
"avatarUrl": "/avatars/54c161e8b8543244ed13cbe47017624e.svg",
"fullname": "Weili Nie",
"isPro": false,
"type": "user",
"user": "xiaoli08"
}
},
{
"_id": "67bbe67c7727595ca5979d2c",
"hidden": false,
"name": "Arash Vahdat",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-24T15:55:08.446Z",
"user": {
"_id": "66727eb8f1bef807e01c6164",
"avatarUrl": "/avatars/ba8edee083cfd6124b82e2a776f0fb43.svg",
"fullname": "Arash vahdat",
"isPro": false,
"type": "user",
"user": "ArashAVN"
}
}
] | 2025-02-21T18:59:20 | One-step Diffusion Models with f-Divergence Distribution Matching | Sampling from diffusion models involves a slow iterative process that hinders
their practical deployment, especially for interactive applications. To
accelerate generation speed, recent approaches distill a multi-step diffusion
model into a single-step student generator via variational score distillation,
which matches the distribution of samples generated by the student to the
teacher's distribution. However, these approaches use the reverse
Kullback-Leibler (KL) divergence for distribution matching, which is known to be
mode-seeking. In this paper, we generalize the distribution matching approach
using a novel f-divergence minimization framework, termed f-distill, that
covers different divergences with different trade-offs in terms of mode
coverage and training variance. We derive the gradient of the f-divergence
between the teacher and student distributions and show that it is expressed as
the product of their score differences and a weighting function determined by
their density ratio. This weighting function naturally emphasizes samples with
higher density in the teacher distribution, when using a less mode-seeking
divergence. We observe that the popular variational score distillation approach
using the reverse-KL divergence is a special case within our framework.
Empirically, we demonstrate that alternative f-divergences, such as
forward-KL and Jensen-Shannon divergences, outperform the current best
variational score distillation methods across image generation tasks. In
particular, when using Jensen-Shannon divergence, f-distill achieves current
state-of-the-art one-step generation performance on ImageNet64 and zero-shot
text-to-image generation on MS-COCO. Project page:
https://research.nvidia.com/labs/genair/f-distill | 6 | 67bbe6837727595ca5979e8c | null | null |
|
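The abstract states that the f-divergence gradient factors into a score difference times a weighting function of the teacher/student density ratio r, with reverse-KL as a special case. Under the common convention h(r) = r^2 f''(r) (an assumption consistent with that description, not quoted from the paper), the weightings for the named divergences look like this:

```python
# Weighting functions h(r) = r^2 f''(r) for the density ratio r = p_teacher / p_student.
import numpy as np

def h_reverse_kl(r):      # f(r) = -log r   ->  h = 1 (the mode-seeking VSD baseline)
    return np.ones_like(r)

def h_forward_kl(r):      # f(r) = r log r  ->  h = r (upweights teacher-dense samples)
    return r

def h_jensen_shannon(r):  # f''(r) = 1 / (r (1 + r))  ->  h = r / (1 + r)
    return r / (1.0 + r)

r = np.linspace(0.1, 5.0, 5)
for name, h in [("rev-KL", h_reverse_kl), ("fwd-KL", h_forward_kl),
                ("JS", h_jensen_shannon)]:
    print(name, np.round(h(r), 2))
```

The constant weighting of reverse-KL makes the uniform-weight variational score distillation baseline fall out as a special case, matching the abstract's claim.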
2025-02-23T22:17:18.309000 | SIFT: Grounding LLM Reasoning in Contexts via Stickers | 3 | {
"_id": "60f1abe7544c2adfd699860c",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"followerCount": 6280,
"fullname": "AK",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "akhaliq",
"type": "user"
} | false | null | 2502.14922 | [
{
"_id": "67bbe4ba79e0a705cf573985",
"hidden": false,
"name": "Zihao Zeng",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-24T13:05:08.989Z",
"user": {
"_id": "6544c0585a13979f82038a1c",
"avatarUrl": "/avatars/01f3e862d49020e9eaf1728e4ba97bea.svg",
"fullname": "Zeng Zihao",
"isPro": false,
"type": "user",
"user": "zzh6666"
}
},
{
"_id": "67bbe4ba79e0a705cf573986",
"hidden": false,
"name": "Xuyao Huang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-24T13:19:22.415Z",
"user": {
"_id": "6721dacfc5309c08451d21d5",
"avatarUrl": "/avatars/ac8be5ac8b8ee5b5533214e526b72dad.svg",
"fullname": "Huang Xuyao",
"isPro": false,
"type": "user",
"user": "ElysiaTrue"
}
},
{
"_id": "67bbe4ba79e0a705cf573987",
"hidden": false,
"name": "Boxiu Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bbe4ba79e0a705cf573988",
"hidden": false,
"name": "Zhijie Deng",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-24T13:19:36.725Z",
"user": {
"_id": "673d5f411b0fe168ad4896b2",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/no-auth/fYgQx6XNi_P5GxlUKbH5G.png",
"fullname": "Zhijie Deng",
"isPro": false,
"type": "user",
"user": "thudzj"
}
}
] | 2025-02-19T17:38:46 | SIFT: Grounding LLM Reasoning in Contexts via Stickers | This paper identifies that misinterpretation of the context can be a
significant issue during the reasoning process of large language models,
spanning from smaller models like Llama3.2-3B-Instruct to cutting-edge ones
like DeepSeek-R1. For example, in the phrase "10 dollars per kilo," LLMs might
not recognize that "per" means "for each," leading to calculation errors. We
introduce a novel, post-training approach called **Stick to the Facts (SIFT)**
to tackle this. SIFT leverages increasing inference-time compute to ground LLM
reasoning in contexts. At the core of SIFT lies the *Sticker*, which is
generated by the model itself to explicitly emphasize the key information
within the context. Given the curated Sticker, SIFT generates two predictions
-- one from the original query and one from the query augmented with the
Sticker. If they differ, the Sticker is sequentially refined via *forward*
optimization (to better align the extracted facts with the query) and *inverse*
generation (to conform with the model's inherent tendencies) for more faithful
reasoning outcomes. Studies across diverse models (from 3B to 100B+) and
benchmarks (e.g., GSM8K, MATH-500) reveal consistent performance improvements.
Notably, SIFT improves the pass@1 accuracy of DeepSeek-R1 on AIME2024 from
78.33% to **85.67**%, establishing a new state-of-the-art in the open-source
community. The code is available at https://github.com/zhijie-group/SIFT. | 29 | 67bbe4bb79e0a705cf5739c3 | null | null |
|
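The SIFT procedure above is a post-training control loop, so it can be sketched directly. Here `llm` is a hypothetical text-completion callable and the prompt strings are illustrative stand-ins for the paper's Sticker generation, forward optimization, and inverse generation steps.

```python
# Control flow of SIFT as described in the abstract: generate a Sticker,
# predict with and without it, and refine the Sticker until they agree.
def sift(llm, query, max_rounds=3):
    sticker = llm(f"Extract the key facts from: {query}")
    pred_stick = None
    for _ in range(max_rounds):
        pred_plain = llm(query)
        pred_stick = llm(f"{query}\nKey facts: {sticker}")
        if pred_plain == pred_stick:          # consensus -> accept the answer
            return pred_stick
        # forward optimization: realign the Sticker with the query's facts
        sticker = llm(f"Refine these facts to match the question '{query}': {sticker}")
        # inverse generation: restate the facts in the model's own phrasing
        sticker = llm(f"Restate in your own words: {sticker}")
    return pred_stick
```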
2025-02-23T22:11:17.789000 | Think Inside the JSON: Reinforcement Strategy for Strict LLM Schema Adherence | 2 | {
"_id": "60f1abe7544c2adfd699860c",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"followerCount": 6280,
"fullname": "AK",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "akhaliq",
"type": "user"
} | false | null | 2502.14905 | [
{
"_id": "67bbe0520aabd5d571a723e7",
"hidden": false,
"name": "Bhavik Agarwal",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-24T16:38:48.442Z",
"user": {
"_id": "6739c3675d150a0c7e3c0014",
"avatarUrl": "/avatars/87a0d9d39b0c854c467a3cdd46fa0ce1.svg",
"fullname": "Bhavik Agarwal",
"isPro": false,
"type": "user",
"user": "bhaviktheslider"
}
},
{
"_id": "67bbe0520aabd5d571a723e8",
"hidden": false,
"name": "Ishan Joshi",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-24T15:55:31.357Z",
"user": {
"_id": "647e85b6e4d52fe0e0210718",
"avatarUrl": "/avatars/ee9ed51b64486b898ce0b58b22db32d5.svg",
"fullname": "Ishan Joshi",
"isPro": false,
"type": "user",
"user": "IshanJoshi"
}
},
{
"_id": "67bbe0520aabd5d571a723e9",
"hidden": false,
"name": "Viktoria Rojkova",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-24T15:55:40.802Z",
"user": {
"_id": "62c1b81e8b647bdc24f78027",
"avatarUrl": "/avatars/b994d31a0c4161f715bb2153c0f0a83f.svg",
"fullname": "Viktoria Rojkova",
"isPro": true,
"type": "user",
"user": "vrojkova"
}
}
] | 2025-02-18T16:44:55 | Think Inside the JSON: Reinforcement Strategy for Strict LLM Schema
Adherence | In this paper, we address the challenge of enforcing strict schema adherence
in large language model (LLM) generation by leveraging LLM reasoning
capabilities. Building on the DeepSeek R1 reinforcement learning framework, our
approach trains structured reasoning skills of a 1.5B parameter model through a
novel pipeline that combines synthetic reasoning dataset construction with
custom reward functions under Group Relative Policy Optimization (GRPO).
Specifically, we first perform R1 reinforcement learning on a 20K sample
unstructured-to-structured dataset, mirroring the original DeepSeek R1 methods,
to establish core reasoning abilities. Subsequently, we perform supervised
fine-tuning on a separate 10K reasoning sample dataset, focusing on refining
schema adherence for downstream tasks. Despite the relatively modest training
scope, requiring approximately 20 hours on an 8xH100 GPU cluster for GRPO
training and 3 hours on 1xA100 for SFT, our model demonstrates robust
performance in enforcing schema consistency. We compare our ThinkJSON approach
against the original DeepSeek R1 (671B), distilled versions of DeepSeek R1
(Qwen-1.5B and Qwen-7B), and Gemini 2.0 Flash (70B), showcasing its
effectiveness in real-world applications. Our results underscore the practical
utility of a resource-efficient framework for schema-constrained text
generation. | 9 | 67bbe0530aabd5d571a72437 | null | null |
|
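The record above trains schema adherence with custom GRPO reward functions, which the abstract does not spell out. A plausible minimal reward of that kind scores parseability first and schema-key coverage second; the exact shaping below (0.5 base, key-coverage bonus, off-schema penalty) is an illustrative assumption.

```python
# Sketch of a schema-adherence reward a GRPO pipeline could optimize.
import json

def schema_reward(completion: str, required_keys: set) -> float:
    """Reward valid JSON, plus partial credit for matching the schema keys."""
    try:
        obj = json.loads(completion)
    except json.JSONDecodeError:
        return 0.0                              # unparseable -> no reward
    if not isinstance(obj, dict):
        return 0.1                              # valid JSON but wrong shape
    hit = len(required_keys & obj.keys()) / max(len(required_keys), 1)
    extra = len(obj.keys() - required_keys)
    return 0.5 + 0.5 * hit - 0.05 * extra       # penalize off-schema keys

print(schema_reward('{"name": "a", "age": 3}', {"name", "age"}))  # 1.0
```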
2025-02-23T21:52:51.059000 | Mol-LLaMA: Towards General Understanding of Molecules in Large Molecular Language Model | 2 | {
"_id": "65633c5e84a9fbe322f87d81",
"avatarUrl": "/avatars/7233a555b43c669847a950ce5697c92c.svg",
"followerCount": 9,
"fullname": "DongkiKim",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "DongkiKim",
"type": "user"
} | true | null | 2502.13449 | [
{
"_id": "67b7ceae3e8a45f770b2606e",
"hidden": false,
"name": "Dongki Kim",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-21T09:59:11.214Z",
"user": {
"_id": "65633c5e84a9fbe322f87d81",
"avatarUrl": "/avatars/7233a555b43c669847a950ce5697c92c.svg",
"fullname": "DongkiKim",
"isPro": false,
"type": "user",
"user": "DongkiKim"
}
},
{
"_id": "67b7ceae3e8a45f770b2606f",
"hidden": false,
"name": "Wonbin Lee",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-24T13:09:46.291Z",
"user": {
"_id": "66d812997e6c9509bb15fac2",
"avatarUrl": "/avatars/baf0e384a864de47bfd989aebe62c357.svg",
"fullname": "Wonbin Lee",
"isPro": false,
"type": "user",
"user": "WonbinLee067"
}
},
{
"_id": "67b7ceae3e8a45f770b26070",
"hidden": false,
"name": "Sung Ju Hwang",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-19T05:49:10 | Mol-LLaMA: Towards General Understanding of Molecules in Large Molecular
Language Model | Understanding molecules is key to understanding organisms and driving
advances in drug discovery, requiring interdisciplinary knowledge across
chemistry and biology. Although large molecular language models have achieved
notable success in interpreting molecular structures, their instruction
datasets are limited to the specific knowledge from task-oriented datasets and
do not fully cover the fundamental characteristics of molecules, hindering
their abilities as general-purpose molecular assistants. To address this issue,
we propose Mol-LLaMA, a large molecular language model that grasps the general
knowledge centered on molecules via multi-modal instruction tuning. To this
end, we design key data types that encompass the fundamental features of
molecules, incorporating essential knowledge from molecular structures. In
addition, to improve understanding of molecular features, we introduce a module
that integrates complementary information from different molecular encoders,
leveraging the distinct advantages of different molecular representations. Our
experimental results demonstrate that Mol-LLaMA is capable of comprehending the
general features of molecules and generating relevant responses to users'
queries with detailed explanations, implying its potential as a general-purpose
assistant for molecular analysis. | 42 | 67b7ceae3e8a45f770b2609f | null | null |
|
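Mol-LLaMA's abstract mentions a module that integrates complementary information from different molecular encoders. Its design is not detailed there, so the following is a generic gated-fusion sketch over two hypothetical encoder outputs (e.g., a 2D-graph view and a 3D-structure view).

```python
# Generic gated fusion of two molecular encoders' token features.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def fuse(tok_2d, tok_3d, Wg):
    """tok_2d, tok_3d: (n_tokens, d) features; Wg: (2*d,) gating weights.
    Produces a per-token convex blend of the two encoder views."""
    g = sigmoid(np.concatenate([tok_2d, tok_3d], axis=1) @ Wg)  # (n_tokens,)
    return g[:, None] * tok_2d + (1 - g)[:, None] * tok_3d

t2 = np.random.randn(5, 8); t3 = np.random.randn(5, 8)
fused = fuse(t2, t3, np.random.randn(16))
```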
2025-02-23T21:44:33.443000 | InterFeedback: Unveiling Interactive Intelligence of Large Multimodal Models via Human Feedback | 2 | {
"_id": "60f1abe7544c2adfd699860c",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"followerCount": 6280,
"fullname": "AK",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "akhaliq",
"type": "user"
} | false | null | 2502.15027 | [
{
"_id": "67bbdcec79fcd85f09ddd869",
"hidden": false,
"name": "Henry Hengyuan Zhao",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-24T13:32:25.051Z",
"user": {
"_id": "647d7eb9770c299e56f5b39b",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/647d7eb9770c299e56f5b39b/CC5JJgkyLkXOxw-BeT4G5.jpeg",
"fullname": "Henry Hengyuan Zhao",
"isPro": false,
"type": "user",
"user": "hhenryz"
}
},
{
"_id": "67bbdcec79fcd85f09ddd86a",
"hidden": false,
"name": "Wenqi Pei",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bbdcec79fcd85f09ddd86b",
"hidden": false,
"name": "Yifei Tao",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-24T15:52:28.951Z",
"user": {
"_id": "6684c542cc72cbbde88ccf55",
"avatarUrl": "/avatars/dafa8b5b44dbb8be859fbae94a6cd953.svg",
"fullname": "yifeitao",
"isPro": false,
"type": "user",
"user": "yifeitao"
}
},
{
"_id": "67bbdcec79fcd85f09ddd86c",
"hidden": false,
"name": "Haiyang Mei",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-24T15:52:15.485Z",
"user": {
"_id": "66fcfa6e05638227c44233a9",
"avatarUrl": "/avatars/4a88765c7f5c5ca77da6d21eb01f73e0.svg",
"fullname": "Haiyang Mei",
"isPro": false,
"type": "user",
"user": "meihaiyang"
}
},
{
"_id": "67bbdcec79fcd85f09ddd86d",
"hidden": false,
"name": "Mike Zheng Shou",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-24T15:52:21.365Z",
"user": {
"_id": "661ab3da2b14565c7acccf5c",
"avatarUrl": "/avatars/fa4fc03664803e02aede4d4c3d50b393.svg",
"fullname": "Mike Zheng Shou",
"isPro": false,
"type": "user",
"user": "AnalMom"
}
}
] | 2025-02-20T20:27:06 | InterFeedback: Unveiling Interactive Intelligence of Large Multimodal
Models via Human Feedback | Existing benchmarks do not test Large Multimodal Models (LMMs) on their
interactive intelligence with human users, which is vital for developing
general-purpose AI assistants. We design InterFeedback, an interactive
framework, which can be applied to any LMM and dataset to assess this ability
autonomously. On top of this, we introduce InterFeedback-Bench which evaluates
interactive intelligence using two representative datasets, MMMU-Pro and
MathVerse, to test 10 different open-source LMMs. Additionally, we present
InterFeedback-Human, a newly collected dataset of 120 cases designed for
manually testing interactive performance in leading models such as OpenAI-o1
and Claude-3.5-Sonnet. Our evaluation results show that even state-of-the-art
LMMs (like OpenAI-o1) correct their results through human feedback in fewer than
50% of cases. Our findings point to the need for methods that can enhance the LMMs'
capability to interpret and benefit from feedback. | 7 | 67bbdced79fcd85f09ddd8da | null | null |
|
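InterFeedback, as described above, wraps any LMM in an autonomous feedback loop. Its essential control flow can be sketched as below, with `model` and `feedback_fn` as hypothetical callables standing in for the LMM and the human (or simulated) feedback provider; the benchmark's actual protocol details are not reproduced.

```python
# Minimal shape of an interactive feedback-evaluation round.
def interfeedback_round(model, feedback_fn, question, answer_key, max_turns=3):
    answer = model(question)
    for turn in range(max_turns):
        if answer == answer_key:
            return turn, True                  # solved after `turn` feedbacks
        hint = feedback_fn(question, answer)   # e.g. "incorrect, check step 2"
        answer = model(f"{question}\nFeedback: {hint}\nRevise your answer.")
    return max_turns, answer == answer_key
```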
2025-02-23T21:40:17.216000 | The Relationship Between Reasoning and Performance in Large Language Models -- o3 (mini) Thinks Harder, Not Longer | 2 | {
"_id": "60f1abe7544c2adfd699860c",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"followerCount": 6280,
"fullname": "AK",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "akhaliq",
"type": "user"
} | false | null | 2502.15631 | [
{
"_id": "67bbdbe8ea3003f47f15d036",
"hidden": false,
"name": "Marthe Ballon",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bbdbe8ea3003f47f15d037",
"hidden": false,
"name": "Andres Algaba",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bbdbe8ea3003f47f15d038",
"hidden": false,
"name": "Vincent Ginis",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-24T15:53:02.578Z",
"user": {
"_id": "62d69d324be637062ecd88de",
"avatarUrl": "/avatars/445cfce9170484d32cfa379015d9cd14.svg",
"fullname": "Vincent Ginis",
"isPro": false,
"type": "user",
"user": "VincentGinis"
}
}
] | 2025-02-21T17:59:13 | The Relationship Between Reasoning and Performance in Large Language
Models -- o3 (mini) Thinks Harder, Not Longer | Large language models have demonstrated remarkable progress in mathematical
reasoning, leveraging chain-of-thought and test-time compute scaling. However,
many open questions remain regarding the interplay between reasoning token
usage and accuracy gains. In particular, when comparing models across
generations, it is unclear whether improved performance results from longer
reasoning chains or more efficient reasoning. We systematically analyze
chain-of-thought length across o1-mini and o3-mini variants on the Omni-MATH
benchmark, finding that o3-mini (m) achieves superior accuracy without
requiring longer reasoning chains than o1-mini. Moreover, we show that accuracy
generally declines as reasoning chains grow across all models and compute
settings, even when controlling for difficulty of the questions. This accuracy
drop is significantly smaller in more proficient models, suggesting that new
generations of reasoning models use test-time compute more effectively.
Finally, we highlight that while o3-mini (h) achieves a marginal accuracy gain
over o3-mini (m), it does so by allocating substantially more reasoning tokens
across all problems, even the ones that o3-mini (m) can already solve. These
findings provide new insights into the relationship between model capability
and reasoning length, with implications for efficiency, scaling, and evaluation
methodologies. | 8 | 67bbdbefea3003f47f15d226 | null | null |
|
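The analysis above correlates reasoning-chain length with accuracy while controlling for question difficulty. One simple way to operationalize that, sketched below with hypothetical record fields, is to bucket problems by difficulty and fit a per-bucket slope of correctness against reasoning-token count.

```python
# Per-bucket regression of correctness on reasoning-token count.
import numpy as np

def accuracy_vs_length(records, n_buckets=4):
    """records: list of (difficulty, n_reasoning_tokens, correct) tuples."""
    diff = np.array([r[0] for r in records], dtype=float)
    toks = np.array([r[1] for r in records], dtype=float)
    corr = np.array([r[2] for r in records], dtype=float)
    edges = np.quantile(diff, np.linspace(0, 1, n_buckets + 1))
    slopes = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        m = (diff >= lo) & (diff <= hi)        # one difficulty bucket
        if m.sum() > 2:
            slopes.append(np.polyfit(toks[m], corr[m], 1)[0])
    return slopes   # negative slopes = accuracy declines with longer chains
```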
2025-02-23T21:39:54.375000 | SurveyX: Academic Survey Automation via Large Language Models | 5 | {
"_id": "669e60ee8580d17cb60f8347",
"avatarUrl": "/avatars/37963b833228afe39cc24854c9326670.svg",
"followerCount": 5,
"fullname": "yang jiawei",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "Dany-0",
"type": "user"
} | true | null | 2502.14776 | [
{
"_id": "67bbdb46d94d32bcfba70db7",
"hidden": false,
"name": "Xun Liang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bbdb46d94d32bcfba70db8",
"hidden": false,
"name": "Jiawei Yang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-24T13:05:10.864Z",
"user": {
"_id": "669e60ee8580d17cb60f8347",
"avatarUrl": "/avatars/37963b833228afe39cc24854c9326670.svg",
"fullname": "yang jiawei",
"isPro": false,
"type": "user",
"user": "Dany-0"
}
},
{
"_id": "67bbdb46d94d32bcfba70db9",
"hidden": false,
"name": "Yezhaohui Wang",
"status": "extracted_confirmed",
"statusLastChangedAt": "2025-02-24T04:12:46.485Z",
"user": {
"_id": "662dd19f9e6d371ab71b91ce",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/662dd19f9e6d371ab71b91ce/mZBPw_Zs8ZlEFGlbekAoH.jpeg",
"fullname": "Yezhaohui Wang",
"isPro": false,
"type": "user",
"user": "HaruTeru"
}
},
{
"_id": "67bbdb46d94d32bcfba70dba",
"hidden": false,
"name": "Chen Tang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-24T13:05:12.859Z",
"user": {
"_id": "615a0d48b89c239e75b2b019",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1633291509590-noauth.jpeg",
"fullname": "Travis Tang",
"isPro": false,
"type": "user",
"user": "tangg555"
}
},
{
"_id": "67bbdb46d94d32bcfba70dbb",
"hidden": false,
"name": "Zifan Zheng",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-24T09:07:22.303Z",
"user": {
"_id": "656f47ba2f058b368c0b1611",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/656f47ba2f058b368c0b1611/mrmcmA8bxaDNUhuJQQ7T1.png",
"fullname": "Zifan Zheng",
"isPro": false,
"type": "user",
"user": "fan2goa1"
}
},
{
"_id": "67bbdb46d94d32bcfba70dbc",
"hidden": false,
"name": "Simin Niu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-24T11:55:35.171Z",
"user": {
"_id": "66daea8776dbaaa372eabec5",
"avatarUrl": "/avatars/1e5fbe4ff06bb6121c7029253b76b79f.svg",
"fullname": "siminniu",
"isPro": false,
"type": "user",
"user": "siminniu"
}
},
{
"_id": "67bbdb46d94d32bcfba70dbd",
"hidden": false,
"name": "Shichao Song",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-24T11:55:41.788Z",
"user": {
"_id": "656f339a5273668d5b946b33",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/656f339a5273668d5b946b33/o2nBvQiOKKP5IfDmnpHP2.jpeg",
"fullname": "Shichao Song",
"isPro": false,
"type": "user",
"user": "Ki-Seki"
}
},
{
"_id": "67bbdb46d94d32bcfba70dbe",
"hidden": false,
"name": "Hanyu Wang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-24T09:07:20.146Z",
"user": {
"_id": "669e0b93c7cb0568dac6e92e",
"avatarUrl": "/avatars/a39ea77d7391f164af8a80f94f85f2ca.svg",
"fullname": "hanyu Wang",
"isPro": false,
"type": "user",
"user": "UglyToilet"
}
},
{
"_id": "67bbdb46d94d32bcfba70dbf",
"hidden": false,
"name": "Bo Tang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bbdb46d94d32bcfba70dc0",
"hidden": false,
"name": "Feiyu Xiong",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bbdb46d94d32bcfba70dc1",
"hidden": false,
"name": "Keming Mao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67bbdb46d94d32bcfba70dc2",
"hidden": false,
"name": "Zhiyu li",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-20T17:59:45 | SurveyX: Academic Survey Automation via Large Language Models | Large Language Models (LLMs) have demonstrated exceptional comprehension
capabilities and a vast knowledge base, suggesting that LLMs can serve as
efficient tools for automated survey generation. However, recent research
related to automated survey generation remains constrained by critical
limitations such as finite context windows, a lack of in-depth content
discussion, and the absence of systematic evaluation frameworks. Inspired by human writing
processes, we propose SurveyX, an efficient and organized system for automated
survey generation that decomposes the survey composing process into two phases:
the Preparation and Generation phases. By innovatively introducing online
reference retrieval, a pre-processing method called AttributeTree, and a
re-polishing process, SurveyX significantly enhances the efficacy of survey
composition. Experimental evaluation results show that SurveyX outperforms
existing automated survey generation systems in content quality (0.259
improvement) and citation quality (1.76 enhancement), approaching human expert
performance across multiple evaluation dimensions. Examples of surveys
generated by SurveyX are available on www.surveyx.cn | 91 | 67bbdb47d94d32bcfba70df3 | null | null |
|
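A minimal sketch of the two-phase pipeline the SurveyX abstract describes (Preparation, then Generation with re-polishing). Every function here is a hypothetical stand-in, not the paper's actual API.

```python
"""Minimal sketch of a two-phase, SurveyX-style pipeline (hypothetical API)."""

def retrieve_references(topic: str) -> list[str]:
    # Stand-in for online reference retrieval.
    return [f"paper about {topic} #{i}" for i in range(3)]

def build_attribute_tree(ref: str) -> dict:
    # Stand-in for AttributeTree pre-processing: distill each reference
    # into structured attributes the generator can reason over.
    return {"source": ref, "method": "...", "findings": "..."}

def llm(prompt: str) -> str:
    # Stand-in for an LLM call.
    return f"<generated text for: {prompt[:40]}...>"

def generate_survey(topic: str) -> str:
    # Preparation phase: retrieve and pre-process references.
    trees = [build_attribute_tree(r) for r in retrieve_references(topic)]
    # Generation phase: outline, draft, then re-polish.
    outline = llm(f"Outline a survey on {topic} using {trees}")
    draft = llm(f"Expand the outline into sections: {outline}")
    return llm(f"Re-polish for coherence and citations: {draft}")

print(generate_survey("retrieval-augmented generation"))
```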
2025-02-21T21:19:35.358000 | Generating Skyline Datasets for Data Science Models | 2 | {
"_id": "63f05764f1a47aaea5bcdee0",
"avatarUrl": "/avatars/3a58d6f0439dce32d2010499f321fe9d.svg",
"followerCount": 2,
"fullname": "Mengying Wang",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "wmying",
"type": "user"
} | true | null | 2502.11262 | [
{
"_id": "67b68ee59076bb9959a6fd6e",
"hidden": false,
"name": "Mengying Wang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-20T09:36:16.105Z",
"user": {
"_id": "63f05764f1a47aaea5bcdee0",
"avatarUrl": "/avatars/3a58d6f0439dce32d2010499f321fe9d.svg",
"fullname": "Mengying Wang",
"isPro": false,
"type": "user",
"user": "wmying"
}
},
{
"_id": "67b68ee59076bb9959a6fd6f",
"hidden": false,
"name": "Hanchao Ma",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b68ee59076bb9959a6fd70",
"hidden": false,
"name": "Yiyang Bian",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b68ee59076bb9959a6fd71",
"hidden": false,
"name": "Yangxin Fan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b68ee59076bb9959a6fd72",
"hidden": false,
"name": "Yinghui Wu",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-16T20:33:59 | Generating Skyline Datasets for Data Science Models | Preparing high-quality datasets required by various data-driven AI and
machine learning models has become a cornerstone task in data-driven analysis.
Conventional data discovery methods typically integrate datasets towards a
single pre-defined quality measure that may lead to bias for downstream tasks.
This paper introduces MODis, a framework that discovers datasets by optimizing
multiple user-defined, model-performance measures. Given a set of data sources
and a model, MODis selects and integrates data sources into a skyline dataset,
over which the model is expected to achieve the desired performance on all of
the specified measures. We formulate MODis as a multi-goal finite state
transducer, and derive three feasible algorithms to generate skyline datasets.
Our first algorithm adopts a "reduce-from-universal" strategy that starts with
a universal schema and iteratively prunes unpromising data. Our second
algorithm further reduces the cost with a bi-directional strategy that
interleaves data augmentation and reduction. We also introduce a
diversification algorithm to mitigate the bias in skyline datasets. We
experimentally verify the efficiency and effectiveness of our skyline data
discovery algorithms, and showcase their applications in optimizing data
science pipelines. | 7 | 67b68ee89076bb9959a6fde3 | null | null |
|
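The "reduce-from-universal" strategy in the MODis abstract can be sketched as a greedy loop that drops sources as long as every user-defined measure still meets its target. The measures and targets below are toy assumptions, not the paper's formulation.

```python
"""Sketch of a 'reduce-from-universal' skyline search (toy measures)."""
from typing import Callable

def reduce_from_universal(
    sources: set[str],
    evaluate: Callable[[set[str]], dict[str, float]],
    targets: dict[str, float],
) -> set[str]:
    """Start from the universal set; greedily drop sources while every
    user-defined performance measure still meets its target."""
    current = set(sources)
    improved = True
    while improved:
        improved = False
        for s in sorted(current):           # snapshot; current shrinks inside
            candidate = current - {s}
            scores = evaluate(candidate)
            if all(scores[m] >= t for m, t in targets.items()):
                current = candidate
                improved = True
    return current

# Toy model: accuracy grows with data volume, fairness prefers fewer sources.
def evaluate(subset: set[str]) -> dict[str, float]:
    return {"accuracy": 0.5 + 0.1 * len(subset),
            "fairness": 1.0 - 0.05 * len(subset)}

print(reduce_from_universal({"A", "B", "C", "D"}, evaluate,
                            {"accuracy": 0.7, "fairness": 0.8}))
```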
2025-02-21T13:42:50.546000 | Symmetrical Visual Contrastive Optimization: Aligning Vision-Language Models with Minimal Contrastive Images | 2 | {
"_id": "65222f97ef06bb99753cb829",
"avatarUrl": "/avatars/f1a743d74e6d38b916acaec91b4e7e4f.svg",
"followerCount": null,
"fullname": "Shengguang Wu",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "danielwusg",
"type": "user"
} | true | null | 2502.13928 | [
{
"_id": "67b7cdac904136d47c3966d8",
"hidden": false,
"name": "Shengguang Wu",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-21T09:59:21.001Z",
"user": {
"_id": "65222f97ef06bb99753cb829",
"avatarUrl": "/avatars/f1a743d74e6d38b916acaec91b4e7e4f.svg",
"fullname": "Shengguang Wu",
"isPro": false,
"type": "user",
"user": "danielwusg"
}
},
{
"_id": "67b7cdac904136d47c3966d9",
"hidden": false,
"name": "Fan-Yun Sun",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b7cdac904136d47c3966da",
"hidden": false,
"name": "Kaiyue Wen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b7cdac904136d47c3966db",
"hidden": false,
"name": "Nick Haber",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-19T18:05:42 | Symmetrical Visual Contrastive Optimization: Aligning Vision-Language
Models with Minimal Contrastive Images | Recent studies have shown that Large Vision-Language Models (VLMs) tend to
neglect image content and over-rely on language-model priors, resulting in
errors in visually grounded tasks and hallucinations. We hypothesize that this
issue arises because existing VLMs are not explicitly trained to generate texts
that are accurately grounded in fine-grained image details. To enhance visual
feedback during VLM training, we propose S-VCO (Symmetrical Visual Contrastive
Optimization), a novel finetuning objective that steers the model toward
capturing important visual details and aligning them with corresponding text
tokens. To further facilitate this detailed alignment, we introduce MVC, a
paired image-text dataset built by automatically filtering and augmenting
visual counterfactual data to challenge the model with hard contrastive cases
involving Minimal Visual Contrasts. Experiments show that our method
consistently improves VLM performance across diverse benchmarks covering
various abilities and domains, achieving up to a 22% reduction in
hallucinations, and significant gains in vision-centric and general tasks.
Notably, these improvements become increasingly pronounced in benchmarks with
higher visual dependency. In short, S-VCO offers a significant enhancement of
VLM's visually-dependent task performance while retaining or even improving the
model's general abilities. We opensource our code at https://s-vco.github.io/ | 3 | 67b7cdb8904136d47c396910 | null | null |
|
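As a rough illustration of a symmetrical contrastive objective over minimal-contrast image-text pairs, here is a DPO-style stand-in loss; it is an assumption for exposition, not the paper's exact S-VCO formulation.

```python
"""Toy symmetrical visual-contrastive loss (stand-in, not the paper's exact objective)."""
import torch
import torch.nn.functional as F

def symmetric_contrastive_loss(lp_y_x, lp_y_xcf, lp_ycf_xcf, lp_ycf_x, beta=0.1):
    """For a minimal-contrast pair (x, y) and (x_cf, y_cf): prefer each text
    under its own image over the counterfactual image, in both directions."""
    a = -F.logsigmoid(beta * (lp_y_x - lp_y_xcf))      # y grounded in x, not x_cf
    b = -F.logsigmoid(beta * (lp_ycf_xcf - lp_ycf_x))  # y_cf grounded in x_cf
    return (a + b).mean()

# Toy per-example log-probs for a batch of 2.
lp = lambda: torch.randn(2)
print(symmetric_contrastive_loss(lp(), lp(), lp(), lp()).item())
```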
2025-02-21T13:05:36.173000 | Generating π-Functional Molecules Using STGG+ with Active Learning | 2 | {
"_id": "5f1158120c833276f61f1a84",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1608042047613-5f1158120c833276f61f1a84.jpeg",
"followerCount": 777,
"fullname": "Niels Rogge",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "nielsr",
"type": "user"
} | false | null | 2502.14842 | [
{
"_id": "67b8c05e109d4be55d85d1f0",
"hidden": false,
"name": "Alexia Jolicoeur-Martineau",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b8c05e109d4be55d85d1f1",
"hidden": false,
"name": "Yan Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b8c05e109d4be55d85d1f2",
"hidden": false,
"name": "Boris Knyazev",
"status": "extracted_confirmed",
"statusLastChangedAt": "2025-02-21T18:14:23.859Z",
"user": {
"_id": "63e16165f0039731dfdd442a",
"avatarUrl": "/avatars/37cf99dc016c291c800f60d260173482.svg",
"fullname": "Boris Knyazev",
"isPro": false,
"type": "user",
"user": "bknyaz"
}
},
{
"_id": "67b8c05e109d4be55d85d1f3",
"hidden": false,
"name": "Aristide Baratin",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b8c05e109d4be55d85d1f4",
"hidden": false,
"name": "Cheng-Hao Liu",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-20T18:52:42 | Generating π-Functional Molecules Using STGG+ with Active Learning | Generating novel molecules with out-of-distribution properties is a major
challenge in molecular discovery. While supervised learning methods generate
high-quality molecules similar to those in a dataset, they struggle to
generalize to out-of-distribution properties. Reinforcement learning can
explore new chemical spaces but often resorts to reward hacking and generates
non-synthesizable molecules. In this work, we address this problem by
integrating a state-of-the-art supervised learning method, STGG+, in an active
learning loop. Our approach iteratively generates, evaluates, and fine-tunes
STGG+ to continuously expand its knowledge. We denote this approach STGG+AL. We
apply STGG+AL to the design of organic pi-functional materials, specifically
two challenging tasks: 1) generating highly absorptive molecules characterized
by high oscillator strength and 2) designing absorptive molecules with
reasonable oscillator strength in the near-infrared (NIR) range. The generated
molecules are validated and rationalized in-silico with time-dependent density
functional theory. Our results demonstrate that our method is highly effective
in generating novel molecules with high oscillator strength, in contrast to
existing approaches such as reinforcement learning (RL). We open-source
our active-learning code along with our Conjugated-xTB dataset containing 2.9
million pi-conjugated molecules and the function for approximating the
oscillator strength and absorption wavelength (based on sTDA-xTB). | 4 | 67b8c05f109d4be55d85d249 | null | null |
|
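The generate-evaluate-fine-tune loop described for STGG+AL can be sketched as follows; all helpers (generate, oscillator_strength, fine_tune) are placeholders for the real sampler, the sTDA-xTB-based oracle, and the trainer.

```python
"""Skeleton of the generate -> evaluate -> fine-tune active-learning loop."""
import random

def generate(model: str, n: int) -> list[str]:
    # Stand-in for sampling candidate molecules from STGG+.
    return [f"{model}-mol-{random.random():.3f}" for _ in range(n)]

def oscillator_strength(mol: str) -> float:
    # Stand-in for the property oracle (approximated via sTDA-xTB in the paper).
    return random.random()

def fine_tune(model: str, mols: list[str]) -> str:
    # Stand-in for one round of supervised fine-tuning on the best candidates.
    return f"{model}+ft"

model = "stgg"
for round_ in range(3):
    candidates = generate(model, n=100)
    scored = sorted(candidates, key=oscillator_strength, reverse=True)
    model = fine_tune(model, scored[:10])  # learn from the best molecules found
print(model)
```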
2025-02-21T11:36:30.717000 | How to Get Your LLM to Generate Challenging Problems for Evaluation | 2 | {
"_id": "631a523c04f8ed65eff16fb4",
"avatarUrl": "/avatars/2b284403c88f140d7bef283f729f7a3e.svg",
"followerCount": 1,
"fullname": "Arkil Patel",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "arkilpatel",
"type": "user"
} | true | null | 2502.14678 | [
{
"_id": "67b886298512a3eca0668ba6",
"hidden": false,
"name": "Arkil Patel",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-24T09:20:45.806Z",
"user": {
"_id": "631a523c04f8ed65eff16fb4",
"avatarUrl": "/avatars/2b284403c88f140d7bef283f729f7a3e.svg",
"fullname": "Arkil Patel",
"isPro": false,
"type": "user",
"user": "arkilpatel"
}
},
{
"_id": "67b886298512a3eca0668ba7",
"hidden": false,
"name": "Siva Reddy",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b886298512a3eca0668ba8",
"hidden": false,
"name": "Dzmitry Bahdanau",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-20T16:09:55 | How to Get Your LLM to Generate Challenging Problems for Evaluation | The pace of evolution of Large Language Models (LLMs) necessitates new
approaches for rigorous and comprehensive evaluation. Traditional human
annotation is increasingly impracticable due to the complexities and costs
involved in generating high-quality, challenging problems. In this work, we
introduce CHASE, a unified framework to synthetically generate challenging
problems using LLMs without human involvement. For a given task, our approach
builds a hard problem in a bottom-up manner from simpler components. Moreover,
our framework decomposes the generation process into independently verifiable
sub-tasks, thereby ensuring a high level of quality and correctness. We
implement CHASE to create evaluation benchmarks across three diverse domains:
(1) document-based question answering, (2) repository-level code completion,
and (3) math reasoning. The performance of state-of-the-art LLMs on these
synthetic benchmarks lies in the range of 40-60% accuracy, thereby
demonstrating the effectiveness of our framework at generating challenging
problems. We publicly release our benchmarks and code. | 16 | 67b8862a8512a3eca0668c00 | null | null |
|
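A toy sketch of the bottom-up, verifiable-sub-task idea behind CHASE, using trivially checkable arithmetic components in place of the paper's three domains.

```python
"""Sketch: build a hard problem from simple, independently verifiable components."""
import random

def make_component() -> dict:
    a, b = random.randint(2, 9), random.randint(2, 9)
    return {"text": f"{a} * {b}", "answer": a * b}

def verify(component: dict) -> bool:
    # Each sub-task is independently checkable, which keeps the
    # synthetic benchmark correct by construction.
    return eval(component["text"]) == component["answer"]

parts = [make_component() for _ in range(3)]
assert all(verify(p) for p in parts)        # verify before composing
problem = " then add ".join(p["text"] for p in parts)
answer = sum(p["answer"] for p in parts)
print(problem, "=", answer)
```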
2025-02-21T11:34:53.838000 | Multimodal RewardBench: Holistic Evaluation of Reward Models for Vision Language Models | 2 | {
"_id": "621e9388345a1d9ab65391c3",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/621e9388345a1d9ab65391c3/RxurNzyAWJOUdgeSHQi1R.jpeg",
"followerCount": 11,
"fullname": "Michihiro Yasunaga",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "michiyasunaga",
"type": "user"
} | true | [
"https://cdn-uploads.huggingface.co/production/uploads/621e9388345a1d9ab65391c3/FaHBHPH3KH5KQ-kQ4bBek.png"
] | 2502.14191 | [
{
"_id": "67b8aaa5ef55d96f2cbd7eaa",
"hidden": false,
"name": "Michihiro Yasunaga",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-21T16:39:03.035Z",
"user": {
"_id": "621e9388345a1d9ab65391c3",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/621e9388345a1d9ab65391c3/RxurNzyAWJOUdgeSHQi1R.jpeg",
"fullname": "Michihiro Yasunaga",
"isPro": false,
"type": "user",
"user": "michiyasunaga"
}
},
{
"_id": "67b8aaa5ef55d96f2cbd7eab",
"hidden": false,
"name": "Luke Zettlemoyer",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b8aaa5ef55d96f2cbd7eac",
"hidden": false,
"name": "Marjan Ghazvininejad",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-24T09:20:41.579Z",
"user": {
"_id": "660f0fd377a1e2509aa5a679",
"avatarUrl": "/avatars/e04ef05bed0bf6cefdc7e3e39674e2f9.svg",
"fullname": "Marjan Ghazvininejad",
"isPro": false,
"type": "user",
"user": "mghazvininejad"
}
}
] | 2025-02-20T01:48:13 | Multimodal RewardBench: Holistic Evaluation of Reward Models for Vision
Language Models | Reward models play an essential role in training vision-language models
(VLMs) by assessing output quality to enable aligning with human preferences.
Despite their importance, the research community lacks comprehensive open
benchmarks for evaluating multimodal reward models in VLMs. To address this
gap, we introduce Multimodal RewardBench, an expert-annotated benchmark
covering six domains: general correctness, preference, knowledge, reasoning,
safety, and visual question-answering. Our dataset comprises 5,211 annotated
(prompt, chosen response, rejected response) triplets collected from various
VLMs. In evaluating a range of VLM judges, we find that even the top-performing
models, Gemini 1.5 Pro and Claude 3.5 Sonnet, achieve only 72% overall
accuracy. Notably, most models struggle in the reasoning and safety domains.
These findings suggest that Multimodal RewardBench offers a challenging testbed
for advancing reward model development across multiple domains. We release the
benchmark at https://github.com/facebookresearch/multimodal_rewardbench. | 7 | 67b8aaa6ef55d96f2cbd7edf | null | null |
|
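Overall judge accuracy on (prompt, chosen, rejected) triplets is presumably computed along these lines; the toy judge below is an assumption for illustration.

```python
"""How overall accuracy on (prompt, chosen, rejected) triplets could be computed."""

def judge_accuracy(triplets: list[dict], judge) -> float:
    # A judge is correct when it prefers the chosen response over the rejected one.
    correct = sum(
        judge(t["prompt"], t["chosen"]) > judge(t["prompt"], t["rejected"])
        for t in triplets
    )
    return correct / len(triplets)

# Toy judge: score by response length (a deliberately weak baseline).
toy = [{"prompt": "p", "chosen": "long good answer", "rejected": "bad"}] * 4
print(judge_accuracy(toy, lambda p, r: len(r)))  # 1.0 on this toy data
```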
2025-02-21T09:39:36.504000 | LServe: Efficient Long-sequence LLM Serving with Unified Sparse Attention | 2 | {
"_id": "640d3eaa3623f6a56dde856d",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1678589663024-640d3eaa3623f6a56dde856d.jpeg",
"followerCount": 14,
"fullname": "vansin",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "vansin",
"type": "user"
} | false | null | 2502.14866 | [
{
"_id": "67b7f46218d8b6a80a14220b",
"hidden": false,
"name": "Shang Yang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-21T09:58:16.384Z",
"user": {
"_id": "641d8bacd526196afc12766d",
"avatarUrl": "/avatars/73f7b2d86a7bf27940bec2b1f199d71b.svg",
"fullname": "Shang Yang",
"isPro": false,
"type": "user",
"user": "Shangy"
}
},
{
"_id": "67b7f46218d8b6a80a14220c",
"hidden": false,
"name": "Junxian Guo",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-21T15:17:47.441Z",
"user": {
"_id": "64e702982eba1760dfb0166c",
"avatarUrl": "/avatars/3291112513914d823cd524dafec66c87.svg",
"fullname": "Junxian Guo",
"isPro": false,
"type": "user",
"user": "JerryGJX"
}
},
{
"_id": "67b7f46218d8b6a80a14220d",
"hidden": false,
"name": "Haotian Tang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-21T15:17:54.758Z",
"user": {
"_id": "646791c5374fe5728d403369",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/646791c5374fe5728d403369/2kyCP48za0T3PTj-IR-J0.jpeg",
"fullname": "Haotian Tang",
"isPro": false,
"type": "user",
"user": "kentang1998"
}
},
{
"_id": "67b7f46218d8b6a80a14220e",
"hidden": false,
"name": "Qinghao Hu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-21T15:18:00.291Z",
"user": {
"_id": "67a6d5f424a2ace09c62640d",
"avatarUrl": "/avatars/0360e2543d791c4d046dd516eb70ced1.svg",
"fullname": "Qinghao Hu",
"isPro": false,
"type": "user",
"user": "huqinghao"
}
},
{
"_id": "67b7f46218d8b6a80a14220f",
"hidden": false,
"name": "Guangxuan Xiao",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-21T15:18:07.219Z",
"user": {
"_id": "6362fefe19cf373a5fc5b39e",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6362fefe19cf373a5fc5b39e/v4uJ5bzjpZJOxqGUHYPM2.jpeg",
"fullname": "Guangxuan Xiao",
"isPro": false,
"type": "user",
"user": "Guangxuan-Xiao"
}
},
{
"_id": "67b7f46218d8b6a80a142210",
"hidden": false,
"name": "Jiaming Tang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-21T15:18:13.068Z",
"user": {
"_id": "65ac13385860f06ff21c9a8a",
"avatarUrl": "/avatars/5c4c151e90c0dfc0e321623013594bbe.svg",
"fullname": "Jiaming Tang",
"isPro": false,
"type": "user",
"user": "Dudep"
}
},
{
"_id": "67b7f46218d8b6a80a142211",
"hidden": false,
"name": "Yujun Lin",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-21T15:18:19.276Z",
"user": {
"_id": "66a156136609d2b2b0f6353a",
"avatarUrl": "/avatars/fc6850b5fc437269bf0870f6a6cdcf40.svg",
"fullname": "Yujun Lin",
"isPro": false,
"type": "user",
"user": "synxlin"
}
},
{
"_id": "67b7f46218d8b6a80a142212",
"hidden": false,
"name": "Zhijian Liu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-21T15:18:26.473Z",
"user": {
"_id": "650dac79b959b0e1d41d7378",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/650dac79b959b0e1d41d7378/mzbN0MFk3k8b94FQ40I7L.jpeg",
"fullname": "Zhijian Liu",
"isPro": false,
"type": "user",
"user": "zhijianliu"
}
},
{
"_id": "67b7f46218d8b6a80a142213",
"hidden": false,
"name": "Yao Lu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b7f46218d8b6a80a142214",
"hidden": false,
"name": "Song Han",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-21T15:18:34.872Z",
"user": {
"_id": "63797f727df2fefdcaf3ff7e",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1668906853549-noauth.jpeg",
"fullname": "Song",
"isPro": false,
"type": "user",
"user": "songhan"
}
}
] | 2025-02-20T18:59:52 | LServe: Efficient Long-sequence LLM Serving with Unified Sparse
Attention | Large language models (LLMs) have shown remarkable potential in processing
long sequences, yet efficiently serving these long-context models remains
challenging due to the quadratic computational complexity of attention in the
prefilling stage and the large memory footprint of the KV cache in the decoding
stage. To address these issues, we introduce LServe, an efficient system that
accelerates long-sequence LLM serving via hybrid sparse attention. This method
unifies different hardware-friendly, structured sparsity patterns for both
prefilling and decoding attention into a single framework, where computations
on less important tokens are skipped block-wise. LServe demonstrates the
compatibility of static and dynamic sparsity in long-context LLM attention.
This design enables multiplicative speedups by combining these optimizations.
Specifically, we convert half of the attention heads to nearly free streaming
heads in both the prefilling and decoding stages. Additionally, we find that
only a constant number of KV pages is required to preserve long-context
capabilities, irrespective of context length. We then design a hierarchical KV
page selection policy that dynamically prunes KV pages based on query-centric
similarity. On average, LServe accelerates LLM prefilling by up to 2.9x and
decoding by 1.3-2.1x over vLLM, maintaining long-context accuracy. Code is
released at https://github.com/mit-han-lab/omniserve. | 12 | 67b7f46318d8b6a80a142267 | null | null |
|
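A minimal sketch of query-centric KV-page selection: score each page by the similarity between the current query and a page-level key summary, then keep a constant budget of pages. This is an illustrative approximation, not LServe's actual kernel or selection policy.

```python
"""Sketch of query-centric KV-page selection (illustrative only)."""
import torch

def select_pages(query: torch.Tensor, keys: torch.Tensor,
                 page_size: int = 16, budget: int = 4) -> torch.Tensor:
    """Score each KV page by the dot product between the current query and
    the page's mean key, then keep only the top-`budget` pages."""
    n_pages = keys.shape[0] // page_size
    pages = keys[: n_pages * page_size].view(n_pages, page_size, -1)
    scores = pages.mean(dim=1) @ query            # one score per page
    keep = scores.topk(min(budget, n_pages)).indices
    return keep.sort().values                     # attend only to these pages

q = torch.randn(64)
k = torch.randn(256, 64)   # 16 pages of 16 tokens each
print(select_pages(q, k))
```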
2025-02-21T08:26:31.100000 | Enhancing Cognition and Explainability of Multimodal Foundation Models with Self-Synthesized Data | 3 | {
"_id": "64beb6b6140491ca9f803ebf",
"avatarUrl": "/avatars/0daa2e813a13668b8b708cd8c12763d9.svg",
"followerCount": null,
"fullname": "Yucheng SHi",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "YuchengShi",
"type": "user"
} | true | null | 2502.14044 | [
{
"_id": "67b87e3a346553e4006bf37c",
"hidden": false,
"name": "Yucheng Shi",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-21T15:14:19.237Z",
"user": {
"_id": "64beb6b6140491ca9f803ebf",
"avatarUrl": "/avatars/0daa2e813a13668b8b708cd8c12763d9.svg",
"fullname": "Yucheng SHi",
"isPro": false,
"type": "user",
"user": "YuchengShi"
}
},
{
"_id": "67b87e3a346553e4006bf37d",
"hidden": false,
"name": "Quanzheng Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b87e3a346553e4006bf37e",
"hidden": false,
"name": "Jin Sun",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-21T15:15:13.205Z",
"user": {
"_id": "6409cde91ee054d66a66c817",
"avatarUrl": "/avatars/ed18aa9902760a7ad6e9c5789b26dbe3.svg",
"fullname": "jin sun",
"isPro": false,
"type": "user",
"user": "jinsun"
}
},
{
"_id": "67b87e3a346553e4006bf37f",
"hidden": false,
"name": "Xiang Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b87e3a346553e4006bf380",
"hidden": false,
"name": "Ninghao Liu",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-19T19:05:45 | Enhancing Cognition and Explainability of Multimodal Foundation Models
with Self-Synthesized Data | Large multimodal models (LMMs) have shown impressive capabilities in a wide
range of visual tasks. However, they often struggle with fine-grained visual
reasoning, failing to identify domain-specific objectives and provide
justifiable explanations for their predictions. To address this, we propose a
novel visual rejection sampling framework to improve the cognition and
explainability of LMMs using self-synthesized data. Specifically, visual
fine-tuning requires images, queries, and target answers. Our approach begins
by synthesizing interpretable answers that include human-verifiable visual
features. These features are based on expert-defined concepts, carefully
selected based on their alignment with the image content. After each round of
fine-tuning, we apply a reward model-free filtering mechanism to select the
highest-quality interpretable answers for the next round of tuning. This
iterative process of data synthesis and fine-tuning progressively improves the
model's ability to generate accurate and reasonable explanations. Experimental
results demonstrate the effectiveness of our method in improving both the
accuracy and explainability of specialized visual classification tasks. | 7 | 67b87e3d346553e4006bf416 | null | null |
|
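A skeleton of the iterative synthesize-filter-fine-tune loop the abstract describes; every helper is a hypothetical stub, and the concept-matching filter stands in for the paper's reward-model-free selection.

```python
"""Skeleton of the iterative synthesize -> filter -> fine-tune loop (stubs)."""

def synthesize_answers(model, image, concepts: list[str], n: int = 4) -> list[str]:
    # Stand-in: draft n candidate explanations that cite visual concepts.
    return [f"label because {c}" for c in concepts[:n]]

def passes_filter(answer: str, verified_concepts: set[str]) -> bool:
    # Reward-model-free filtering: keep answers grounded only in
    # human-verifiable, expert-defined concepts for this image.
    return any(c in answer for c in verified_concepts)

def fine_tune(model, data):
    return model + 1  # stand-in for one round of visual fine-tuning

model, concepts = 0, ["striped wings", "red crest"]
for round_ in range(3):
    answers = synthesize_answers(model, image=None, concepts=concepts)
    kept = [a for a in answers if passes_filter(a, set(concepts))]
    model = fine_tune(model, kept)
print(model)
```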
2025-02-21T08:18:34.557000 | NAVIG: Natural Language-guided Analysis with Vision Language Models for Image Geo-localization | 2 | {
"_id": "648c4af819bb04c06467189c",
"avatarUrl": "/avatars/8b9372a233d4c00b555625fa7b5203e2.svg",
"followerCount": 2,
"fullname": "Zheyuan Zhang",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "Zheyuan22",
"type": "user"
} | true | null | 2502.14638 | [
{
"_id": "67b8760bf8311235c642d7a4",
"hidden": false,
"name": "Zheyuan Zhang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-21T14:42:43.700Z",
"user": {
"_id": "648c4af819bb04c06467189c",
"avatarUrl": "/avatars/8b9372a233d4c00b555625fa7b5203e2.svg",
"fullname": "Zheyuan Zhang",
"isPro": false,
"type": "user",
"user": "Zheyuan22"
}
},
{
"_id": "67b8760bf8311235c642d7a5",
"hidden": false,
"name": "Runze Li",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-24T09:21:43.993Z",
"user": {
"_id": "66482dea53317bdff335b006",
"avatarUrl": "/avatars/5079ed975c5689739e1e8e4ae8a47a3d.svg",
"fullname": "RUNZE LI",
"isPro": false,
"type": "user",
"user": "huggingCode11"
}
},
{
"_id": "67b8760bf8311235c642d7a6",
"hidden": false,
"name": "Tasnim Kabir",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-21T15:01:51.663Z",
"user": {
"_id": "6676067b8a4064c02b4ef6b7",
"avatarUrl": "/avatars/319ff99740183793cab3045ae3bf1395.svg",
"fullname": "Tasnim Kabir",
"isPro": false,
"type": "user",
"user": "TasnimKabir12"
}
},
{
"_id": "67b8760bf8311235c642d7a7",
"hidden": false,
"name": "Jordan Boyd-Graber",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-20T15:21:35 | NAVIG: Natural Language-guided Analysis with Vision Language Models for
Image Geo-localization | Image geo-localization is the task of predicting the specific location of an
image and requires complex reasoning across visual, geographical, and cultural
contexts. While prior Vision Language Models (VLMs) achieve the best accuracy on
this task, there is a dearth of high-quality datasets and models for analytical
reasoning. We first create NaviClues, a high-quality dataset derived from
GeoGuessr, a popular geography game, to supply examples of expert reasoning
from language. Using this dataset, we present Navig, a comprehensive image
geo-localization framework integrating global and fine-grained image
information. By reasoning with language, Navig reduces the average distance
error by 14% compared to previous state-of-the-art models while requiring fewer
than 1000 training samples. Our dataset and code are available at
https://github.com/SparrowZheyuan18/Navig/. | 11 | 67b8760ef8311235c642d89d | null | null |
|
2025-02-21T08:00:41.165000 | From RAG to Memory: Non-Parametric Continual Learning for Large Language Models | 2 | {
"_id": "60a4ebfbaa9320dbbe69e37c",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/60a4ebfbaa9320dbbe69e37c/QLaEohXCWaUy8YX3wKQ_w.jpeg",
"followerCount": 2,
"fullname": "Yiheng Shu",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "yhshu",
"type": "user"
} | true | null | 2502.14802 | [
{
"_id": "67b878b8f17ca6989fd21e92",
"hidden": false,
"name": "Bernal Jiménez Gutiérrez",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b878b8f17ca6989fd21e93",
"hidden": false,
"name": "Yiheng Shu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-21T15:15:54.523Z",
"user": {
"_id": "60a4ebfbaa9320dbbe69e37c",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/60a4ebfbaa9320dbbe69e37c/QLaEohXCWaUy8YX3wKQ_w.jpeg",
"fullname": "Yiheng Shu",
"isPro": false,
"type": "user",
"user": "yhshu"
}
},
{
"_id": "67b878b8f17ca6989fd21e94",
"hidden": false,
"name": "Weijian Qi",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-24T09:21:41.970Z",
"user": {
"_id": "67169e3fd720d7d51e36a67e",
"avatarUrl": "/avatars/91113e5520009a4ed709bc62b96f2150.svg",
"fullname": "WeijianQi",
"isPro": false,
"type": "user",
"user": "WeijianQi1999"
}
},
{
"_id": "67b878b8f17ca6989fd21e95",
"hidden": false,
"name": "Sizhe Zhou",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-21T15:16:08.161Z",
"user": {
"_id": "65030fc90a57e8f2b26bcaa3",
"avatarUrl": "/avatars/8c25a4a9735f268ee8541f3e2017d92c.svg",
"fullname": "Sizhe Zhou",
"isPro": false,
"type": "user",
"user": "KevinSRR"
}
},
{
"_id": "67b878b8f17ca6989fd21e96",
"hidden": false,
"name": "Yu Su",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-20T18:26:02 | From RAG to Memory: Non-Parametric Continual Learning for Large Language
Models | Our ability to continuously acquire, organize, and leverage knowledge is a
key feature of human intelligence that AI systems must approximate to unlock
their full potential. Given the challenges in continual learning with large
language models (LLMs), retrieval-augmented generation (RAG) has become the
dominant way to introduce new information. However, its reliance on vector
retrieval hinders its ability to mimic the dynamic and interconnected nature of
human long-term memory. Recent RAG approaches augment vector embeddings with
various structures like knowledge graphs to address some of these gaps, namely
sense-making and associativity. However, their performance on more basic
factual memory tasks drops considerably below standard RAG. We address this
unintended deterioration and propose HippoRAG 2, a framework that outperforms
standard RAG comprehensively on factual, sense-making, and associative memory
tasks. HippoRAG 2 builds upon the Personalized PageRank algorithm used in
HippoRAG and enhances it with deeper passage integration and more effective
online use of an LLM. This combination pushes this RAG system closer to the
effectiveness of human long-term memory, achieving a 7% improvement in
associative memory tasks over the state-of-the-art embedding model while also
exhibiting superior factual knowledge and sense-making memory capabilities.
This work paves the way for non-parametric continual learning for LLMs. Our
code and data will be released at https://github.com/OSU-NLP-Group/HippoRAG. | 11 | 67b878bcf17ca6989fd21f7a | null | null |
|
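HippoRAG 2 builds on Personalized PageRank, which can be sketched as a power iteration with a query-seeded personalization vector; the graph below is a toy example.

```python
"""Personalized PageRank by power iteration, the primitive HippoRAG builds on."""
import numpy as np

def personalized_pagerank(adj: np.ndarray, seeds: np.ndarray,
                          alpha: float = 0.85, iters: int = 50) -> np.ndarray:
    """adj: row-stochastic transition matrix; seeds: personalization vector
    concentrated on query-relevant nodes (passages/entities)."""
    p = seeds / seeds.sum()
    r = p.copy()
    for _ in range(iters):
        r = alpha * adj.T @ r + (1 - alpha) * p   # walk + teleport to seeds
    return r

# Toy 4-node graph: node 0 links to 1 and 2; everything links back to 0.
adj = np.array([[0, .5, .5, 0],
                [1, 0, 0, 0],
                [0, 0, 0, 1],
                [1, 0, 0, 0]], float)
print(personalized_pagerank(adj, seeds=np.array([1., 0, 0, 0])).round(3))
```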
2025-02-21T07:52:55.537000 | CLIPPER: Compression enables long-context synthetic data generation | 2 | {
"_id": "65b976fdf69f4d0377aef3fe",
"avatarUrl": "/avatars/1201194e2956c56b50098cc465a04c11.svg",
"followerCount": 5,
"fullname": "Chau Minh Pham",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "chtmp223",
"type": "user"
} | true | null | 2502.14854 | [
{
"_id": "67b7edf6a1d1394d1682c085",
"hidden": false,
"name": "Chau Minh Pham",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-21T09:58:36.388Z",
"user": {
"_id": "65b976fdf69f4d0377aef3fe",
"avatarUrl": "/avatars/1201194e2956c56b50098cc465a04c11.svg",
"fullname": "Chau Minh Pham",
"isPro": false,
"type": "user",
"user": "chtmp223"
}
},
{
"_id": "67b7edf6a1d1394d1682c086",
"hidden": false,
"name": "Yapei Chang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-21T15:17:25.220Z",
"user": {
"_id": "5f1dcc06cb8f993fa01f4775",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1668660782327-5f1dcc06cb8f993fa01f4775.png",
"fullname": "Yapei Chang",
"isPro": false,
"type": "user",
"user": "yapeichang"
}
},
{
"_id": "67b7edf6a1d1394d1682c087",
"hidden": false,
"name": "Mohit Iyyer",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-21T15:17:30.925Z",
"user": {
"_id": "6669df71d1652853e4b66ee5",
"avatarUrl": "/avatars/8a40bca9423ef76f39ae24d7a9e63478.svg",
"fullname": "Mohit Iyyer",
"isPro": false,
"type": "user",
"user": "mohitiyyer"
}
}
] | 2025-02-20T18:58:03 | CLIPPER: Compression enables long-context synthetic data generation | LLM developers are increasingly reliant on synthetic data, but generating
high-quality data for complex long-context reasoning tasks remains challenging.
We introduce CLIPPER, a compression-based approach for generating synthetic
data tailored to narrative claim verification - a task that requires reasoning
over a book to verify a given claim. Instead of generating claims directly from
the raw text of the book, which results in artifact-riddled claims, CLIPPER
first compresses the book into chapter outlines and book summaries and then
uses these intermediate representations to generate complex claims and
corresponding chain-of-thoughts. Compared to naive approaches, CLIPPER produces
claims that are more valid, grounded, and complex. Using CLIPPER, we construct
a dataset of 19K synthetic book claims paired with their source texts and
chain-of-thought reasoning, and use it to fine-tune three open-weight models.
Our best model achieves breakthrough results on narrative claim verification
(from 28% to 76% accuracy on our test set) and sets a new state-of-the-art for
sub-10B models on the NoCha leaderboard. Further analysis shows that our models
generate more detailed and grounded chain-of-thought reasoning while also
improving performance on other narrative understanding tasks (e.g.,
NarrativeQA). | 7 | 67b7edf8a1d1394d1682c0d4 | null | null |
|
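A sketch of the compression-first idea in CLIPPER: generate claims and chains of thought from chapter outlines and a book summary rather than from raw text. The llm stub and prompts are assumptions, not the paper's implementation.

```python
"""Sketch of compression-first claim generation (hypothetical llm stub)."""

def llm(prompt: str) -> str:
    return f"<text for: {prompt[:30]}...>"

def clipper_style_claims(book_chapters: list[str]) -> list[dict]:
    # Compress first: outlines and a summary, not raw book text.
    outlines = [llm(f"Outline this chapter: {c}") for c in book_chapters]
    summary = llm(f"Summarize the book from outlines: {outlines}")
    # Then generate claims plus chain-of-thought from the compressed views.
    return [
        {"claim": llm(f"Write a complex claim from {summary} and {o}"),
         "cot": llm(f"Explain step by step why the claim holds: {o}")}
        for o in outlines
    ]

print(clipper_style_claims(["ch1 text", "ch2 text"])[0])
```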
2025-02-21T07:16:00.307000 | LLM-based User Profile Management for Recommender System | 2 | {
"_id": "67b841b821956199df64926b",
"avatarUrl": "/avatars/e714df967ca1b78425fa188b6843f057.svg",
"followerCount": null,
"fullname": "Seunghwan Bang",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "Breadbang",
"type": "user"
} | true | null | 2502.14541 | [
{
"_id": "67b8439499159e6fc939970b",
"hidden": false,
"name": "Seunghwan Bang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-21T09:49:31.964Z",
"user": {
"_id": "67b841b821956199df64926b",
"avatarUrl": "/avatars/e714df967ca1b78425fa188b6843f057.svg",
"fullname": "Seunghwan Bang",
"isPro": false,
"type": "user",
"user": "Breadbang"
}
},
{
"_id": "67b8439499159e6fc939970c",
"hidden": false,
"name": "Hwanjun Song",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-20T13:20:19 | LLM-based User Profile Management for Recommender System | The rapid advancement of Large Language Models (LLMs) has opened new
opportunities in recommender systems by enabling zero-shot recommendation
without conventional training. Despite their potential, most existing works
rely solely on users' purchase histories, leaving significant room for
improvement by incorporating user-generated textual data, such as reviews and
product descriptions. Addressing this gap, we propose PURE, a novel LLM-based
recommendation framework that builds and maintains evolving user profiles by
systematically extracting and summarizing key information from user reviews.
PURE consists of three core components: a Review Extractor for identifying user
preferences and key product features, a Profile Updater for refining and
updating user profiles, and a Recommender for generating personalized
recommendations using the most current profile. To evaluate PURE, we introduce
a continuous sequential recommendation task that reflects real-world scenarios
by adding reviews over time and updating predictions incrementally. Our
experimental results on Amazon datasets demonstrate that PURE outperforms
existing LLM-based methods, effectively leveraging long-term user information
while managing token limitations. | 5 | 67b8439599159e6fc9399735 | null | null |
|
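A skeleton of PURE's extract-update-recommend loop over an incoming review stream; the three stubs stand in for the Review Extractor, Profile Updater, and Recommender, and the profile cap loosely mirrors the token-limit management mentioned above.

```python
"""Skeleton of the extract -> update -> recommend loop over a review stream."""

def extract(review: str) -> dict:
    return {"likes": review}                    # stand-in Review Extractor

def update(profile: list, facts: dict, max_len: int = 5) -> list:
    return (profile + [facts])[-max_len:]       # stand-in Profile Updater; cap
                                                # length to manage token limits

def recommend(profile: list) -> str:
    return f"item matching {profile[-1]['likes']}"  # stand-in Recommender

profile = []
for review in ["great battery life", "too heavy", "loves hiking gear"]:
    profile = update(profile, extract(review))
    print(recommend(profile))                   # predictions update incrementally
```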
2025-02-21T05:29:18.157000 | How Much Knowledge Can You Pack into a LoRA Adapter without Harming LLM? | 8 | {
"_id": "62bd6c6baaf1480f1aa2222e",
"avatarUrl": "/avatars/fd92ae2986d435a47eb1e382ac11d8e0.svg",
"followerCount": null,
"fullname": "Mikhail Salnikov",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "msalnikov",
"type": "user"
} | true | [
"https://cdn-uploads.huggingface.co/production/uploads/62bd6c6baaf1480f1aa2222e/_N4zn03NcZY7lGoHmp9-j.png",
"https://cdn-uploads.huggingface.co/production/uploads/62bd6c6baaf1480f1aa2222e/nh3VgACDbU_BXjhHG8nsF.png"
] | 2502.14502 | [
{
"_id": "67b83fd69fb3eedaf6d0aef3",
"hidden": false,
"name": "Sergey Pletenev",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-21T09:49:33.682Z",
"user": {
"_id": "5dfa8e07da6d0311fd3d5430",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1651090418656-5dfa8e07da6d0311fd3d5430.png",
"fullname": "Sergey Pletenev",
"isPro": false,
"type": "user",
"user": "memyprokotow"
}
},
{
"_id": "67b83fd69fb3eedaf6d0aef4",
"hidden": false,
"name": "Maria Marina",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-21T14:42:47.555Z",
"user": {
"_id": "660ee18e2dcd816ad14b3739",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/660ee18e2dcd816ad14b3739/2pPMurtSOHMA96eVk0k7w.jpeg",
"fullname": "Maria Marina",
"isPro": false,
"type": "user",
"user": "zlatamaria"
}
},
{
"_id": "67b83fd69fb3eedaf6d0aef5",
"hidden": false,
"name": "Daniil Moskovskiy",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-21T14:42:49.828Z",
"user": {
"_id": "61ade264f602880813dbe10b",
"avatarUrl": "/avatars/a92dea7d853bbabbf60b351c207b6875.svg",
"fullname": "Daniil Moskovskiy",
"isPro": false,
"type": "user",
"user": "etomoscow"
}
},
{
"_id": "67b83fd69fb3eedaf6d0aef6",
"hidden": false,
"name": "Vasily Konovalov",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b83fd69fb3eedaf6d0aef7",
"hidden": false,
"name": "Pavel Braslavski",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b83fd69fb3eedaf6d0aef8",
"hidden": false,
"name": "Alexander Panchenko",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b83fd69fb3eedaf6d0aef9",
"hidden": false,
"name": "Mikhail Salnikov",
"status": "extracted_confirmed",
"statusLastChangedAt": "2025-02-21T09:01:04.770Z",
"user": {
"_id": "62bd6c6baaf1480f1aa2222e",
"avatarUrl": "/avatars/fd92ae2986d435a47eb1e382ac11d8e0.svg",
"fullname": "Mikhail Salnikov",
"isPro": false,
"type": "user",
"user": "msalnikov"
}
}
] | 2025-02-20T12:31:03 | How Much Knowledge Can You Pack into a LoRA Adapter without Harming LLM? | The performance of Large Language Models (LLMs) on many tasks is greatly
limited by the knowledge learned during pre-training and stored in the model's
parameters. Low-rank adaptation (LoRA) is a popular and efficient training
technique for updating or domain-specific adaptation of LLMs. In this study, we
investigate how new facts can be incorporated into the LLM using LoRA without
compromising the previously learned knowledge. We fine-tuned
Llama-3.1-8B-instruct using LoRA with varying amounts of new knowledge. Our
experiments have shown that the best results are obtained when the training
data contains a mixture of known and new facts. However, this approach is still
potentially harmful because the model's performance on external
question-answering benchmarks declines after such fine-tuning. When the
training data is biased towards certain entities, the model tends to regress to
a few overrepresented answers. In addition, we found that the model becomes more
confident and refuses to provide an answer in only a few cases. These findings
highlight the potential pitfalls of LoRA-based LLM updates and underscore the
importance of training data composition and tuning parameters to balance new
knowledge integration and general model capabilities. | 82 | 67b83fd79fb3eedaf6d0af16 | null | null |
|
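A hedged sketch of the data-mixing recipe the abstract reports works best, paired with an illustrative PEFT LoraConfig; the hyperparameters and mixing ratio are assumptions, not the paper's exact settings.

```python
"""Sketch: mix known and new facts before LoRA fine-tuning (illustrative values)."""
import random
from peft import LoraConfig

def mix_training_data(new_facts: list, known_facts: list,
                      known_ratio: float = 0.5) -> list:
    # Blend facts the model already knows with genuinely new ones, which the
    # study finds mitigates damage to previously learned knowledge.
    # known_ratio must be < 1.
    n_known = int(len(new_facts) * known_ratio / (1 - known_ratio))
    return new_facts + random.sample(known_facts, min(n_known, len(known_facts)))

config = LoraConfig(r=16, lora_alpha=32, target_modules=["q_proj", "v_proj"],
                    lora_dropout=0.05, task_type="CAUSAL_LM")
data = mix_training_data(["fact A", "fact B"], ["old 1", "old 2", "old 3"])
print(len(data), config.r)
```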
2025-02-21T05:28:42.882000 | How Much Do LLMs Hallucinate across Languages? On Multilingual Estimation of LLM Hallucination in the Wild | 2 | {
"_id": "6182910a68444be3259d8b67",
"avatarUrl": "/avatars/b47b0609f61b1192a1337fd7c9f8a75b.svg",
"followerCount": 3,
"fullname": "Saad Obaid ul Islam",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "saadob12",
"type": "user"
} | true | null | 2502.12769 | [
{
"_id": "67b597146e53744c2a39e335",
"hidden": false,
"name": "Saad Obaid ul Islam",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-20T15:53:18.182Z",
"user": {
"_id": "6182910a68444be3259d8b67",
"avatarUrl": "/avatars/b47b0609f61b1192a1337fd7c9f8a75b.svg",
"fullname": "Saad Obaid ul Islam",
"isPro": false,
"type": "user",
"user": "saadob12"
}
},
{
"_id": "67b597146e53744c2a39e336",
"hidden": false,
"name": "Anne Lauscher",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-21T15:05:49.990Z",
"user": {
"_id": "626c02e7703f3b27dd590896",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1654503075060-626c02e7703f3b27dd590896.jpeg",
"fullname": "Anne Lauscher",
"isPro": false,
"type": "user",
"user": "anlausch"
}
},
{
"_id": "67b597146e53744c2a39e337",
"hidden": false,
"name": "Goran Glavaš",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-21T15:05:56.804Z",
"user": {
"_id": "6335af67a09fc16c7e7b4879",
"avatarUrl": "/avatars/047fad65ceb33203f97064d6a92ecdc1.svg",
"fullname": "Goran Glavaš",
"isPro": false,
"type": "user",
"user": "gg42554"
}
}
] | 2025-02-18T11:32:43 | How Much Do LLMs Hallucinate across Languages? On Multilingual
Estimation of LLM Hallucination in the Wild | In the age of misinformation, hallucination -- the tendency of Large Language
Models (LLMs) to generate non-factual or unfaithful responses -- represents the
main risk for their global utility. Despite LLMs becoming increasingly
multilingual, the vast majority of research on detecting and quantifying LLM
hallucination is (a) English-centric and (b) focused on machine translation (MT)
and summarization, tasks that are less common "in the wild" than open
information seeking. In contrast, we aim to quantify the extent of LLM
hallucination across languages in knowledge-intensive long-form question
answering. To this end, we train a multilingual hallucination detection model
and conduct a large-scale study across 30 languages and 6 open-source LLM
families. We start from an English hallucination detection dataset and rely on
MT to generate (noisy) training data in other languages. We also manually
annotate gold data for five high-resource languages; we then demonstrate, for
these languages, that the estimates of hallucination rates are similar between
silver (LLM-generated) and gold test sets, validating the use of silver data
for estimating hallucination rates for other languages. For the final rates
estimation, we build a knowledge-intensive QA dataset for 30 languages with
LLM-generated prompts and Wikipedia articles as references. We find that, while
LLMs generate longer responses with more hallucinated tokens for
higher-resource languages, there is no correlation between length-normalized
hallucination rates of languages and their digital representation. Further, we
find that smaller LLMs exhibit larger hallucination rates than larger models. | 3 | 67b597156e53744c2a39e36f | null | null |
|
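The length-normalized hallucination rate mentioned above could be computed per language roughly as follows (toy counts, hypothetical record structure).

```python
"""How a length-normalized hallucination rate could be computed per language."""

def normalized_rate(responses: list[dict]) -> float:
    """responses: [{'tokens': int, 'hallucinated': int}, ...] for one language.
    Normalizing by length keeps verbose languages/models comparable."""
    total = sum(r["tokens"] for r in responses)
    bad = sum(r["hallucinated"] for r in responses)
    return bad / total if total else 0.0

langs = {
    "en": [{"tokens": 120, "hallucinated": 6}],
    "sw": [{"tokens": 40, "hallucinated": 4}],
}
for lang, rs in langs.items():
    print(lang, round(normalized_rate(rs), 3))
```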
2025-02-21T05:00:18.645000 | S^2R: Teaching LLMs to Self-verify and Self-correct via Reinforcement Learning | 2 | {
"_id": "648294b2eb4befee378951c1",
"avatarUrl": "/avatars/da5d8bf9d8662cc2ffa2c0de49bd66a3.svg",
"followerCount": null,
"fullname": "Ruotian Ma",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "vvibt",
"type": "user"
} | true | null | 2502.12853 | [
{
"_id": "67b69b6717ccb022c6a95b38",
"hidden": false,
"name": "Ruotian Ma",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-21T14:58:55.028Z",
"user": {
"_id": "648294b2eb4befee378951c1",
"avatarUrl": "/avatars/da5d8bf9d8662cc2ffa2c0de49bd66a3.svg",
"fullname": "Ruotian Ma",
"isPro": false,
"type": "user",
"user": "vvibt"
}
},
{
"_id": "67b69b6717ccb022c6a95b39",
"hidden": false,
"name": "Peisong Wang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-21T14:59:00.273Z",
"user": {
"_id": "626f98528a894872cfbf620c",
"avatarUrl": "/avatars/fe31d20313e6ca85e96bc249424c5383.svg",
"fullname": "Peisong Wang",
"isPro": false,
"type": "user",
"user": "duke1852022"
}
},
{
"_id": "67b69b6717ccb022c6a95b3a",
"hidden": false,
"name": "Cheng Liu",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-24T09:25:27.094Z",
"user": {
"_id": "6500234be0c94282ab38cd00",
"avatarUrl": "/avatars/90fc160919cdbb28cfa82becf720b062.svg",
"fullname": "soso",
"isPro": false,
"type": "user",
"user": "chengliu"
}
},
{
"_id": "67b69b6717ccb022c6a95b3b",
"hidden": false,
"name": "Xingyan Liu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b69b6717ccb022c6a95b3c",
"hidden": false,
"name": "Jiaqi Chen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b69b6717ccb022c6a95b3d",
"hidden": false,
"name": "Bang Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b69b6717ccb022c6a95b3e",
"hidden": false,
"name": "Xin Zhou",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b69b6717ccb022c6a95b3f",
"hidden": false,
"name": "Nan Du",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b69b6717ccb022c6a95b40",
"hidden": false,
"name": "Jia Li",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-18T13:40:22 | S^2R: Teaching LLMs to Self-verify and Self-correct via Reinforcement
Learning | Recent studies have demonstrated the effectiveness of LLM test-time scaling.
However, existing approaches to incentivize LLMs' deep thinking abilities
generally require large-scale data or significant training efforts. Meanwhile,
it remains unclear how to improve the thinking abilities of less powerful base
models. In this work, we introduce S^2R, an efficient framework that enhances
LLM reasoning by teaching models to self-verify and self-correct during
inference. Specifically, we first initialize LLMs with iterative
self-verification and self-correction behaviors through supervised fine-tuning
on carefully curated data. The self-verification and self-correction skills are
then further strengthened by both outcome-level and process-level reinforcement
learning, with minimized resource requirements, enabling the model to
adaptively refine its reasoning process during inference. Our results
demonstrate that, with only 3.1k self-verifying and self-correcting behavior
initialization samples, Qwen2.5-math-7B achieves an accuracy improvement from
51.0% to 81.6%, outperforming models trained on an equivalent amount of
long-CoT distilled data. Extensive experiments and analysis based on three base
models across both in-domain and out-of-domain benchmarks validate the
effectiveness of S^2R. Our code and data are available at
https://github.com/NineAbyss/S2R. | 28 | 67b69b6817ccb022c6a95b6e | null | null |
|
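A toy sketch of combining outcome-level and process-level rewards over a self-verify/self-correct trace; the reward values and trace schema are invented for illustration, not S^2R's actual scheme.

```python
"""Toy combination of outcome- and process-level rewards (illustrative)."""

def outcome_reward(final_answer: str, gold: str) -> float:
    return 1.0 if final_answer == gold else 0.0

def process_reward(steps: list[dict]) -> float:
    # Credit each self-verification that correctly flags an error and each
    # self-correction that fixes one; the 0.1 value is a placeholder.
    return sum(0.1 for s in steps if s["kind"] in ("verify", "correct") and s["ok"])

trace = [{"kind": "solve", "ok": False},
         {"kind": "verify", "ok": True},    # model notices its own mistake
         {"kind": "correct", "ok": True}]   # and repairs it
print(outcome_reward("42", "42") + process_reward(trace))  # 1.2
```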
2025-02-21T03:33:40.641000 | Unstructured Evidence Attribution for Long Context Query Focused Summarization | 2 | {
"_id": "60a643b9213fe60589b8fdf9",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/60a643b9213fe60589b8fdf9/OOXmW3MkSf88r63tAE6-n.jpeg",
"followerCount": 4,
"fullname": "Dustin Wright",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "dwright37",
"type": "user"
} | true | null | 2502.14409 | [
{
"_id": "67b83a20a9fa331061e84ecd",
"hidden": false,
"name": "Dustin Wright",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-21T09:58:02.288Z",
"user": {
"_id": "60a643b9213fe60589b8fdf9",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/60a643b9213fe60589b8fdf9/OOXmW3MkSf88r63tAE6-n.jpeg",
"fullname": "Dustin Wright",
"isPro": false,
"type": "user",
"user": "dwright37"
}
},
{
"_id": "67b83a20a9fa331061e84ece",
"hidden": false,
"name": "Zain Muhammad Mujahid",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-21T15:16:52.600Z",
"user": {
"_id": "637e8b1b66ee00bcb2468ed0",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1669240174964-637e8b1b66ee00bcb2468ed0.jpeg",
"fullname": "Zain",
"isPro": false,
"type": "user",
"user": "zainmujahid"
}
},
{
"_id": "67b83a20a9fa331061e84ecf",
"hidden": false,
"name": "Lu Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b83a20a9fa331061e84ed0",
"hidden": false,
"name": "Isabelle Augenstein",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-21T15:17:02.420Z",
"user": {
"_id": "608918b7df398c3b285ce960",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1621507769190-608918b7df398c3b285ce960.jpeg",
"fullname": "Isabelle Augenstein",
"isPro": false,
"type": "user",
"user": "IAugenstein"
}
},
{
"_id": "67b83a20a9fa331061e84ed1",
"hidden": false,
"name": "David Jurgens",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-21T15:17:08.686Z",
"user": {
"_id": "63516acdce7cf1fe8a854cdc",
"avatarUrl": "/avatars/980124c58796fbbf43008bacc3dc2261.svg",
"fullname": "David Jurgens",
"isPro": false,
"type": "user",
"user": "davidjurgens"
}
}
] | 2025-02-20T09:57:42 | Unstructured Evidence Attribution for Long Context Query Focused
Summarization | Large language models (LLMs) are capable of generating coherent summaries
from very long contexts given a user query. Extracting and properly citing
evidence spans could help improve the transparency and reliability of these
summaries. At the same time, LLMs suffer from positional biases in terms of
which information they understand and attend to, which could affect evidence
citation. Whereas previous work has focused on evidence citation with
predefined levels of granularity (e.g. sentence, paragraph, document, etc.), we
propose the task of long-context query focused summarization with unstructured
evidence citation. We show how existing systems struggle to generate and
properly cite unstructured evidence from their context, and that evidence tends
to be "lost-in-the-middle". To help mitigate this, we create the Summaries with
Unstructured Evidence Text dataset (SUnsET), a synthetic dataset generated
using a novel domain-agnostic pipeline which can be used as supervision to
adapt LLMs to this task. We demonstrate across 5 LLMs of different sizes and 4
datasets with varying document types and lengths that LLMs adapted with SUnsET
data generate more relevant and factually consistent evidence than their base
models, extract evidence from more diverse locations in their context, and can
generate more relevant and consistent summaries. | 3 | 67b83a21a9fa331061e84f36 | null | null |
|
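The "lost-in-the-middle" observation can be probed by locating each cited evidence span in its source context and recording its relative position; a minimal sketch, assuming spans are quoted verbatim.

```python
"""Sketch of checking where cited evidence falls in the source context."""

def evidence_positions(context: str, cited_spans: list[str]) -> list[float]:
    """Relative position (0 = start, 1 = end) of each evidence span that can
    be located verbatim; positions piling up away from the middle would
    reflect the 'lost-in-the-middle' effect discussed above."""
    out = []
    for span in cited_spans:
        idx = context.find(span)
        if idx != -1:
            out.append(idx / max(len(context) - len(span), 1))
    return out

ctx = "intro " * 50 + "the key finding " + "filler " * 50
print(evidence_positions(ctx, ["the key finding", "not in context"]))
```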
2025-02-21T03:33:28.852000 | Geolocation with Real Human Gameplay Data: A Large-Scale Dataset and Human-Like Reasoning Framework | 2 | {
"_id": "65407ba7a38390065750233f",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/65407ba7a38390065750233f/1_IPMZbk-S9u2t18PQgMp.jpeg",
"followerCount": 1,
"fullname": "Zirui Song",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "Ziruibest",
"type": "user"
} | true | null | 2502.13759 | [
{
"_id": "67b83a1f26e7d5f7cb0b7c9d",
"hidden": false,
"name": "Zirui Song",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-21T09:58:04.247Z",
"user": {
"_id": "65407ba7a38390065750233f",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/65407ba7a38390065750233f/1_IPMZbk-S9u2t18PQgMp.jpeg",
"fullname": "Zirui Song",
"isPro": false,
"type": "user",
"user": "Ziruibest"
}
},
{
"_id": "67b83a1f26e7d5f7cb0b7c9e",
"hidden": false,
"name": "Jingpu Yang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-21T15:06:10.877Z",
"user": {
"_id": "67551c3578f56eff362039ab",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/no-auth/87GrxXfW2VCgRHweCJzWz.png",
"fullname": "Jingpu Yang",
"isPro": false,
"type": "user",
"user": "yyds404"
}
},
{
"_id": "67b83a1f26e7d5f7cb0b7c9f",
"hidden": false,
"name": "Yuan Huang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b83a1f26e7d5f7cb0b7ca0",
"hidden": false,
"name": "Jonathan Tonglet",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-21T15:06:50.179Z",
"user": {
"_id": "641ed835f8c8b04c0ba7ac2a",
"avatarUrl": "/avatars/73af2b882e10938b230d4a2073e64098.svg",
"fullname": "Jonathan Tonglet",
"isPro": false,
"type": "user",
"user": "saiga1420"
}
},
{
"_id": "67b83a1f26e7d5f7cb0b7ca1",
"hidden": true,
"name": "Zeyu Zhang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-27T12:54:53.512Z",
"user": {
"_id": "64ec877bb93654d4ca5c92e9",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/64ec877bb93654d4ca5c92e9/GvHk_KSdE9Rhnk_o-NaZX.jpeg",
"fullname": "Zeyu Zhang",
"isPro": false,
"type": "user",
"user": "SteveZeyuZhang"
}
},
{
"_id": "67b83a1f26e7d5f7cb0b7ca2",
"hidden": false,
"name": "Tao Cheng",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b83a1f26e7d5f7cb0b7ca3",
"hidden": false,
"name": "Meng Fang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b83a1f26e7d5f7cb0b7ca4",
"hidden": false,
"name": "Iryna Gurevych",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b83a1f26e7d5f7cb0b7ca5",
"hidden": false,
"name": "Xiuying Chen",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-19T14:21:25 | Geolocation with Real Human Gameplay Data: A Large-Scale Dataset and
Human-Like Reasoning Framework | Geolocation, the task of identifying an image's location, requires complex
reasoning and is crucial for navigation, monitoring, and cultural preservation.
However, current methods often produce coarse, imprecise, and non-interpretable
localization. A major challenge lies in the quality and scale of existing
geolocation datasets. These datasets are typically small-scale and
automatically constructed, leading to noisy data and inconsistent task
difficulty, with images that either reveal answers too easily or lack
sufficient clues for reliable inference. To address these challenges, we
introduce a comprehensive geolocation framework with three key components:
GeoComp, a large-scale dataset; GeoCoT, a novel reasoning method; and GeoEval,
an evaluation metric, collectively designed to address critical challenges and
drive advancements in geolocation research. At the core of this framework is
GeoComp (Geolocation Competition Dataset), a large-scale dataset collected from
a geolocation game platform involving 740K users over two years. It comprises
25 million entries of metadata and 3 million geo-tagged locations spanning much
of the globe, with each location annotated thousands to tens of thousands of
times by human users. The dataset offers diverse difficulty levels for detailed
analysis and highlights key gaps in current models. Building on this dataset,
we propose Geographical Chain-of-Thought (GeoCoT), a novel multi-step reasoning
framework designed to enhance the reasoning capabilities of Large Vision Models
(LVMs) in geolocation tasks. GeoCoT improves performance by integrating
contextual and spatial cues through a multi-step process that mimics human
geolocation reasoning. Finally, using the GeoEval metric, we demonstrate that
GeoCoT significantly boosts geolocation accuracy by up to 25% while enhancing
interpretability. | 4 | 67b83a2226e7d5f7cb0b7d66 | null | null |
|
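Geolocation error is naturally measured as the great-circle distance between predicted and true coordinates; below is a minimal haversine-based sketch of an average-distance-error metric (not necessarily GeoEval's exact definition).

```python
"""Great-circle distance error, the natural base metric for geolocation."""
from math import asin, cos, radians, sin, sqrt

def haversine_km(lat1, lon1, lat2, lon2) -> float:
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))    # Earth radius ~6371 km

preds = [(48.85, 2.35)]                # predicted: Paris
golds = [(51.51, -0.13)]               # actual: London
errs = [haversine_km(*p, *g) for p, g in zip(preds, golds)]
print(sum(errs) / len(errs))           # average distance error, ~344 km
```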
2025-02-21T01:11:34.971000 | Discovering highly efficient low-weight quantum error-correcting codes with reinforcement learning | 4 | {
"_id": "6530a78069751712276d60ed",
"avatarUrl": "/avatars/2ef4f16d0be557ed60c11d8dcef85f6f.svg",
"followerCount": null,
"fullname": "Austin He",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "basil2115",
"type": "user"
} | true | null | 2502.14372 | [
{
"_id": "67b81870cc6b0136b3d84254",
"hidden": false,
"name": "Austin Yubo He",
"status": "extracted_confirmed",
"statusLastChangedAt": "2025-02-21T06:30:16.645Z",
"user": {
"_id": "6530a78069751712276d60ed",
"avatarUrl": "/avatars/2ef4f16d0be557ed60c11d8dcef85f6f.svg",
"fullname": "Austin He",
"isPro": false,
"type": "user",
"user": "basil2115"
}
},
{
"_id": "67b81870cc6b0136b3d84255",
"hidden": false,
"name": "Zi-Wen Liu",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-20T09:05:34 | Discovering highly efficient low-weight quantum error-correcting codes
with reinforcement learning | The realization of scalable fault-tolerant quantum computing is expected to
hinge on quantum error-correcting codes. In the quest for more efficient
quantum fault tolerance, a critical code parameter is the weight of
measurements that extract information about errors to enable error correction:
as higher measurement weights require higher implementation costs and introduce
more errors, it is important in code design to optimize measurement weight.
This underlies the surging interest in quantum low-density parity-check (qLDPC)
codes, the study of which has primarily focused on the asymptotic
(large-code-limit) properties. In this work, we introduce a versatile and
computationally efficient approach to stabilizer code weight reduction based on
reinforcement learning (RL), which produces new low-weight codes that
substantially outperform the state of the art in practically relevant parameter
regimes, extending significantly beyond previously accessible small distances.
For example, our approach demonstrates savings in physical qubit overhead
compared to existing results by 1 to 2 orders of magnitude for weight 6 codes
and brings the overhead into a feasible range for near-future experiments. We
also investigate the interplay between code parameters using our RL framework,
offering new insights into the potential efficiency and power of practically
viable coding strategies. Overall, our results demonstrate how RL can
effectively advance the crucial yet challenging problem of quantum code
discovery and thereby facilitate a faster path to the practical implementation
of fault-tolerant quantum technologies. | 36 | 67b81873cc6b0136b3d8430a | null | null |
|
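The abstract above describes searching over stabilizer codes to reduce measurement weight. The toy sketch below explores a related move space, row additions mod 2 over a binary parity-check matrix, which preserve the stabilizer group, but it uses plain greedy random search as a stand-in for the paper's RL agent; the CSS-style binary representation is also an assumption.

```python
import numpy as np

# Toy stabilizer weight reduction over a CSS-style binary parity-check matrix
# H. Adding one row to another (mod 2) multiplies stabilizer generators, so it
# preserves the code while changing row weights. The paper trains an RL agent
# over such moves; greedy random search here is NOT the authors' method.

def max_row_weight(H):
    return int(H.sum(axis=1).max())

def reduce_weight(H, steps=10_000, rng=None):
    rng = rng if rng is not None else np.random.default_rng(0)
    H = H.copy() % 2
    best = max_row_weight(H)
    for _ in range(steps):
        i, j = rng.choice(len(H), size=2, replace=False)
        candidate = H.copy()
        candidate[i] = (candidate[i] + candidate[j]) % 2
        w = max_row_weight(candidate)
        if w <= best:          # accept non-worsening moves
            H, best = candidate, w
    return H, best
```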
2025-02-20T23:02:42.672000 | Does Time Have Its Place? Temporal Heads: Where Language Models Recall Time-specific Information | 2 | {
"_id": "64587be872b60ae7a3817858",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/64587be872b60ae7a3817858/BbdOOxOCEzWTvEpkWp8MM.png",
"followerCount": 3,
"fullname": "Minbyul Jeong",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "Minbyul",
"type": "user"
} | true | null | 2502.14258 | [
{
"_id": "67b7fa96c3f48f8b3fc632fe",
"hidden": false,
"name": "Yein Park",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-21T14:46:28.888Z",
"user": {
"_id": "64e5c8e594aa0690321f6b29",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/no-auth/IW5LRzmPcAM-dri8taMN7.png",
"fullname": "Yein Park",
"isPro": false,
"type": "user",
"user": "P-YI"
}
},
{
"_id": "67b7fa96c3f48f8b3fc632ff",
"hidden": false,
"name": "Chanwoong Yoon",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-21T14:46:35.045Z",
"user": {
"_id": "66569dbaed45f790fbbebb83",
"avatarUrl": "/avatars/a4915e88d2bdff48cb30dd9972640d1e.svg",
"fullname": "Chanwoong Yoon",
"isPro": false,
"type": "user",
"user": "cwyoon99"
}
},
{
"_id": "67b7fa96c3f48f8b3fc63300",
"hidden": false,
"name": "Jungwoo Park",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-21T14:46:49.118Z",
"user": {
"_id": "60f8435644e75317cc02ed51",
"avatarUrl": "/avatars/68b7fc077fe2bda6607b1c470add8140.svg",
"fullname": "Jungwoo Park",
"isPro": false,
"type": "user",
"user": "affjljoo3581"
}
},
{
"_id": "67b7fa96c3f48f8b3fc63301",
"hidden": false,
"name": "Minbyul Jeong",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-21T14:46:55.040Z",
"user": {
"_id": "64587be872b60ae7a3817858",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/64587be872b60ae7a3817858/BbdOOxOCEzWTvEpkWp8MM.png",
"fullname": "Minbyul Jeong",
"isPro": false,
"type": "user",
"user": "Minbyul"
}
},
{
"_id": "67b7fa96c3f48f8b3fc63302",
"hidden": false,
"name": "Jaewoo Kang",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-20T04:52:05 | Does Time Have Its Place? Temporal Heads: Where Language Models Recall
Time-specific Information | While the ability of language models to elicit facts has been widely
investigated, how they handle temporally changing facts remains underexplored.
We discover Temporal Heads, specific attention heads primarily responsible for
processing temporal knowledge through circuit analysis. We confirm that these
heads are present across multiple models, though their specific locations may
vary, and their responses differ depending on the type of knowledge and its
corresponding years. Disabling these heads degrades the model's ability to
recall time-specific knowledge while leaving its general capabilities,
including time-invariant recall and question answering, intact. Moreover, the
heads are activated not only by numeric conditions ("In 2004") but also by
textual aliases ("In the year ..."), indicating that they encode a temporal
dimension beyond simple numerical representation. Furthermore, we
expand the potential of our findings by demonstrating how temporal knowledge
can be edited by adjusting the values of these heads. | 25 | 67b7fa9ac3f48f8b3fc63452 | null | null |
|
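The head-ablation experiment described above can be approximated with a PyTorch forward pre-hook that zeroes selected heads just before the attention output projection, where the activation still has per-head structure. The module path, layer index, and head indices below are placeholders; the paper locates the actual Temporal Heads via circuit analysis.

```python
import torch

# Zero out ("ablate") chosen attention heads via a pre-hook on the attention
# output projection, whose input has shape (batch, seq, n_heads * head_dim).

def make_head_ablation_pre_hook(head_ids, n_heads):
    def pre_hook(module, args):
        hidden = args[0]                               # (batch, seq, n_heads * head_dim)
        b, s, d = hidden.shape
        heads = hidden.view(b, s, n_heads, d // n_heads).clone()
        heads[:, :, head_ids, :] = 0.0                 # silence the chosen heads
        return (heads.view(b, s, d),) + args[1:]
    return pre_hook

# Usage sketch for a LLaMA-style HuggingFace model (path is an assumption):
# handle = model.model.layers[15].self_attn.o_proj.register_forward_pre_hook(
#     make_head_ablation_pre_hook(head_ids=[3, 7], n_heads=32))
# ... run prompts probing time-specific vs. time-invariant facts ...
# handle.remove()
```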
2025-02-20T22:41:47.210000 | Dynamic Concepts Personalization from Single Videos | 2 | {
"_id": "60f1abe7544c2adfd699860c",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"followerCount": 6280,
"fullname": "AK",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "akhaliq",
"type": "user"
} | false | null | 2502.14844 | [
{
"_id": "67b7f5ee8b3dff28b749be78",
"hidden": false,
"name": "Rameen Abdal",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-21T15:00:23.357Z",
"user": {
"_id": "630428fa7b50dd9d0a38cde0",
"avatarUrl": "/avatars/d1baf7fd17daf4be16ba5bd6cd4f2277.svg",
"fullname": "Rameen Abdal",
"isPro": false,
"type": "user",
"user": "RameenAbdal"
}
},
{
"_id": "67b7f5ee8b3dff28b749be79",
"hidden": false,
"name": "Or Patashnik",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-21T15:00:32.474Z",
"user": {
"_id": "62853516e483e0d37b354ce1",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/62853516e483e0d37b354ce1/t5Tyd3E07w26B9Z3XpZWI.jpeg",
"fullname": "Or Patashnik",
"isPro": false,
"type": "user",
"user": "orpatashnik"
}
},
{
"_id": "67b7f5ee8b3dff28b749be7a",
"hidden": false,
"name": "Ivan Skorokhodov",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-21T15:00:39.038Z",
"user": {
"_id": "63610db13c7147fae7de88e3",
"avatarUrl": "/avatars/d7e97a16cfee39e1e50d7a5b747876f1.svg",
"fullname": "Ivan Skorokhodov",
"isPro": false,
"type": "user",
"user": "universome"
}
},
{
"_id": "67b7f5ee8b3dff28b749be7b",
"hidden": false,
"name": "Willi Menapace",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-21T15:00:45.032Z",
"user": {
"_id": "6315358a362e3e95ea538081",
"avatarUrl": "/avatars/3b089a25a87c2e83c6b23ccb5d2dc73e.svg",
"fullname": "Willi Menapace",
"isPro": false,
"type": "user",
"user": "willi-menapace"
}
},
{
"_id": "67b7f5ee8b3dff28b749be7c",
"hidden": false,
"name": "Aliaksandr Siarohin",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-21T15:00:51.221Z",
"user": {
"_id": "64276311eb9a0ed86180715b",
"avatarUrl": "/avatars/76f933cd549f10e5e2db379de235d304.svg",
"fullname": "Aliaksandr Siarohin",
"isPro": false,
"type": "user",
"user": "aliaksandr-siarohin"
}
},
{
"_id": "67b7f5ee8b3dff28b749be7d",
"hidden": false,
"name": "Sergey Tulyakov",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b7f5ee8b3dff28b749be7e",
"hidden": false,
"name": "Daniel Cohen-Or",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-21T15:01:00.737Z",
"user": {
"_id": "628507161949ebcae8e24ec3",
"avatarUrl": "/avatars/008ecb3daa4c8187b5f339f1176b3c39.svg",
"fullname": "Daniel Cohen-Or",
"isPro": false,
"type": "user",
"user": "cohenor"
}
},
{
"_id": "67b7f5ee8b3dff28b749be7f",
"hidden": false,
"name": "Kfir Aberman",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-21T15:01:06.434Z",
"user": {
"_id": "64db29097266618e853dd6ec",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/64db29097266618e853dd6ec/r0MaPQCfAxeKv3ycdKYLK.jpeg",
"fullname": "Kfir Aberman",
"isPro": false,
"type": "user",
"user": "kaberman"
}
}
] | 2025-02-20T18:53:39 | Dynamic Concepts Personalization from Single Videos | Personalizing generative text-to-image models has seen remarkable progress,
but extending this personalization to text-to-video models presents unique
challenges. Unlike static concepts, personalizing text-to-video models has the
potential to capture dynamic concepts, i.e., entities defined not only by their
appearance but also by their motion. In this paper, we introduce
Set-and-Sequence, a novel framework for personalizing Diffusion Transformers
(DiTs)-based generative video models with dynamic concepts. Our approach
imposes a spatio-temporal weight space on an architecture that does not
explicitly separate spatial and temporal features. This is achieved in two key
stages. First, we fine-tune Low-Rank Adaptation (LoRA) layers using an
unordered set of frames from the video to learn an identity LoRA basis that
represents the appearance, free from temporal interference. In the second
stage, with the identity LoRAs frozen, we augment their coefficients with
Motion Residuals and fine-tune them on the full video sequence, capturing
motion dynamics. Our Set-and-Sequence framework results in a spatio-temporal
weight space that effectively embeds dynamic concepts into the video model's
output domain, enabling unprecedented editability and compositionality while
setting a new benchmark for personalizing dynamic concepts. | 15 | 67b7f5f18b3dff28b749bf45 | null | null |
|
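To make the two-stage Set-and-Sequence recipe concrete, here is a toy sketch on a single linear layer: stage one learns a LoRA basis from unordered frames, and stage two freezes that basis and tunes only small residual coefficients on the ordered sequence. The shapes, optimizers, and the scalar coefficient parameterization are illustrative assumptions, not the paper's DiT training setup.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 4):
        super().__init__()
        self.base = base.requires_grad_(False)           # frozen pretrained weight
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.coeff = nn.Parameter(torch.ones(rank))      # "motion residual" scalars

    def forward(self, x):
        delta = (self.B * self.coeff) @ self.A           # rank-wise scaled update
        return self.base(x) + x @ delta.T

layer = LoRALinear(nn.Linear(64, 64))

# Stage 1: learn the identity basis (A, B) on shuffled frames; coeff stays fixed.
stage1_opt = torch.optim.AdamW([layer.A, layer.B], lr=1e-3)
# Stage 2: freeze the basis, tune only the residual coefficients on the full video.
layer.A.requires_grad_(False); layer.B.requires_grad_(False)
stage2_opt = torch.optim.AdamW([layer.coeff], lr=1e-3)
```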
2025-02-20T22:39:48.180000 | PC-Agent: A Hierarchical Multi-Agent Collaboration Framework for Complex Task Automation on PC | 3 | {
"_id": "645b10e80c73ea27d13f7aca",
"avatarUrl": "/avatars/95e565306472a15067440b5b43e07a6f.svg",
"followerCount": 3,
"fullname": "xuhaiyang",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "xhyandwyy",
"type": "user"
} | false | [
"https://cdn-uploads.huggingface.co/production/uploads/645b10e80c73ea27d13f7aca/feg9OYb4onJJermpjc6nh.jpeg"
] | 2502.14282 | [
{
"_id": "67b7f5587f4d732dc469270e",
"hidden": false,
"name": "Haowei Liu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b7f5587f4d732dc469270f",
"hidden": false,
"name": "Xi Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b7f5587f4d732dc4692710",
"hidden": false,
"name": "Haiyang Xu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-21T14:58:29.601Z",
"user": {
"_id": "66e8d7d2483df532fd364913",
"avatarUrl": "/avatars/300aea4b8c571b2aeac629de58281444.svg",
"fullname": "Haiyang Xu",
"isPro": false,
"type": "user",
"user": "msxxx"
}
},
{
"_id": "67b7f5587f4d732dc4692711",
"hidden": false,
"name": "Yuyang Wanyan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b7f5587f4d732dc4692712",
"hidden": false,
"name": "Junyang Wang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-21T14:58:16.343Z",
"user": {
"_id": "6438f6415aa69077ffb16942",
"avatarUrl": "/avatars/c83dbd3e10e88db97c2a86092bad5917.svg",
"fullname": "Junyang Wang",
"isPro": false,
"type": "user",
"user": "junyangwang0410"
}
},
{
"_id": "67b7f5587f4d732dc4692713",
"hidden": false,
"name": "Ming Yan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b7f5587f4d732dc4692714",
"hidden": false,
"name": "Ji Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b7f5587f4d732dc4692715",
"hidden": false,
"name": "Chunfeng Yuan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b7f5587f4d732dc4692716",
"hidden": true,
"name": "Changsheng Xu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-21T14:57:51.040Z",
"user": {
"_id": "6222e721271a284f976f43d8",
"avatarUrl": "/avatars/36ce9f16de6f4ae6ea0968c49207f191.svg",
"fullname": "ChangshengXu",
"isPro": false,
"type": "user",
"user": "ChangshengXu"
}
},
{
"_id": "67b7f5587f4d732dc4692717",
"hidden": false,
"name": "Weiming Hu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b7f5587f4d732dc4692718",
"hidden": false,
"name": "Fei Huang",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-20T05:41:55 | PC-Agent: A Hierarchical Multi-Agent Collaboration Framework for Complex
Task Automation on PC | In the field of MLLM-based GUI agents, compared to smartphones, the PC
scenario not only features a more complex interactive environment, but also
involves more intricate intra- and inter-app workflows. To address these
issues, we propose a hierarchical agent framework named PC-Agent. Specifically,
from the perception perspective, we devise an Active Perception Module (APM) to
overcome the inadequate abilities of current MLLMs in perceiving screenshot
content. From the decision-making perspective, to handle complex user
instructions and interdependent subtasks more effectively, we propose a
hierarchical multi-agent collaboration architecture that decomposes
decision-making processes into Instruction-Subtask-Action levels. Within this
architecture, three agents (i.e., Manager, Progress and Decision) are set up
for instruction decomposition, progress tracking and step-by-step
decision-making respectively. Additionally, a Reflection agent is adopted to
enable timely bottom-up error feedback and adjustment. We also introduce a new
benchmark PC-Eval with 25 real-world complex instructions. Empirical results on
PC-Eval show that our PC-Agent achieves a 32% absolute improvement in task
success rate over previous state-of-the-art methods. The code will be publicly
available. | 18 | 67b7f55b7f4d732dc46927c1 | null | https://github.com/X-PLUG/MobileAgent/tree/main |
|
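A schematic sketch of the Instruction-Subtask-Action loop described above follows. The `call_llm` and `execute_action` helpers, the role prompts, and the line-based subtask parsing are hypothetical stand-ins; the actual framework also includes an Active Perception Module not modeled here.

```python
def pc_agent(instruction, screenshot, call_llm, execute_action, max_steps=20):
    # Manager agent: decompose the instruction into ordered subtasks (one per line).
    subtasks = call_llm("Manager: list ordered subtasks, one per line", instruction).splitlines()
    progress = []
    for subtask in subtasks:
        for _ in range(max_steps):
            # Progress agent tracks status; Decision agent picks the next action.
            state = call_llm("Progress: summarize completion status", (subtask, progress))
            action = call_llm("Decision: choose the next GUI action", (subtask, state, screenshot))
            screenshot, ok = execute_action(action)
            # Reflection agent supplies bottom-up error feedback.
            feedback = call_llm("Reflection: did the action succeed?", (action, screenshot))
            progress.append((subtask, action, feedback))
            if ok and "success" in feedback.lower():
                break
    return progress
```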
2025-02-20T22:39:21.551000 | LongWriter-V: Enabling Ultra-Long and High-Fidelity Generation in Vision-Language Models | 2 | {
"_id": "648c48d8c0ddeee6df5b6d22",
"avatarUrl": "/avatars/8706b0b16dfc332b96c91d3ced31bd0b.svg",
"followerCount": null,
"fullname": "Shangqing Tu",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "tsq2000",
"type": "user"
} | true | [
"https://cdn-uploads.huggingface.co/production/uploads/648c48d8c0ddeee6df5b6d22/8AYx7CcK4CT6flX3nRDlB.png"
] | 2502.14834 | [
{
"_id": "67b7f3c4d00e69f10cff219e",
"hidden": false,
"name": "Shangqing Tu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-21T14:47:11.953Z",
"user": {
"_id": "648c48d8c0ddeee6df5b6d22",
"avatarUrl": "/avatars/8706b0b16dfc332b96c91d3ced31bd0b.svg",
"fullname": "Shangqing Tu",
"isPro": false,
"type": "user",
"user": "tsq2000"
}
},
{
"_id": "67b7f3c4d00e69f10cff219f",
"hidden": false,
"name": "Yucheng Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b7f3c4d00e69f10cff21a0",
"hidden": false,
"name": "Daniel Zhang-Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b7f3c4d00e69f10cff21a1",
"hidden": false,
"name": "Yushi Bai",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-21T14:47:42.383Z",
"user": {
"_id": "64ed568ccf6118a9379a61b8",
"avatarUrl": "/avatars/6d040cbcb4a9b624cbe64c9d01cd5c88.svg",
"fullname": "Yushi Bai",
"isPro": false,
"type": "user",
"user": "bys0318"
}
},
{
"_id": "67b7f3c4d00e69f10cff21a2",
"hidden": false,
"name": "Jifan Yu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b7f3c4d00e69f10cff21a3",
"hidden": false,
"name": "Yuhao Wu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b7f3c4d00e69f10cff21a4",
"hidden": false,
"name": "Lei Hou",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b7f3c4d00e69f10cff21a5",
"hidden": false,
"name": "Huiqin Liu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b7f3c4d00e69f10cff21a6",
"hidden": false,
"name": "Zhiyuan Liu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-21T14:48:24.334Z",
"user": {
"_id": "6310a3cd531cc21f9e06de6a",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6310a3cd531cc21f9e06de6a/aTGMx3O41lUARK9s3dAik.jpeg",
"fullname": "Zhiyuan Liu",
"isPro": false,
"type": "user",
"user": "acharkq"
}
},
{
"_id": "67b7f3c4d00e69f10cff21a7",
"hidden": false,
"name": "Bin Xu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b7f3c4d00e69f10cff21a8",
"hidden": false,
"name": "Juanzi Li",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-21T14:48:30.590Z",
"user": {
"_id": "65df8cbc2705d9672f55d1aa",
"avatarUrl": "/avatars/63e46f15bb76bd9d4508fd0f54f39829.svg",
"fullname": "Juanzi Li",
"isPro": false,
"type": "user",
"user": "juanli"
}
}
] | 2025-02-20T18:47:36 | LongWriter-V: Enabling Ultra-Long and High-Fidelity Generation in
Vision-Language Models | Existing Large Vision-Language Models (LVLMs) can process inputs with context
lengths up to 128k visual and text tokens, yet they struggle to generate
coherent outputs beyond 1,000 words. We find that the primary limitation is the
absence of long output examples during supervised fine-tuning (SFT). To tackle
this issue, we introduce LongWriter-V-22k, an SFT dataset comprising 22,158
examples, each with multiple input images, an instruction, and corresponding
outputs ranging from 0 to 10,000 words. Moreover, to achieve long outputs that
maintain high fidelity to the input images, we apply Direct Preference
Optimization (DPO) to the SFT model. Given the high cost of collecting human
feedback for lengthy outputs (e.g., 3,000 words), we propose IterDPO, which
breaks long outputs into segments and uses iterative corrections to form
preference pairs with the original outputs. Additionally, we develop
MMLongBench-Write, a benchmark featuring six tasks to evaluate the
long-generation capabilities of VLMs. Our 7B parameter model, trained with
LongWriter-V-22k and IterDPO, achieves impressive performance on this
benchmark, outperforming larger proprietary models like GPT-4o. Code and data:
https://github.com/THU-KEG/LongWriter-V | 24 | 67b7f3c7d00e69f10cff2258 | null | null |
|
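The IterDPO idea of turning one long output into several preference pairs might look like the following sketch, where paragraph-level segmentation and the `correct(prefix, segment)` reviser are illustrative assumptions rather than the paper's pipeline.

```python
# Build (chosen, rejected) DPO pairs segment by segment: each pair shares the
# same prefix, and training continues from the corrected text.

def iter_dpo_pairs(prompt, long_output, correct):
    segments = long_output.split("\n\n")          # paragraph-level segments (assumed)
    pairs, prefix = [], prompt
    for seg in segments:
        fixed = correct(prefix, seg)
        if fixed != seg:
            pairs.append({"prompt": prefix, "chosen": fixed, "rejected": seg})
        prefix = prefix + "\n\n" + fixed          # continue from the corrected text
    return pairs
```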
2025-02-20T22:38:36.406000 | Scaling Text-Rich Image Understanding via Code-Guided Synthetic Multimodal Data Generation | 2 | {
"_id": "60f1abe7544c2adfd699860c",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"followerCount": 6280,
"fullname": "AK",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "akhaliq",
"type": "user"
} | false | null | 2502.14846 | [
{
"_id": "67b7f4f1b15c19d57189fc5e",
"hidden": false,
"name": "Yue Yang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-21T15:06:20.097Z",
"user": {
"_id": "62f6c68904e5e02f82b04690",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/62f6c68904e5e02f82b04690/kK2-PkeAGzAOLhkfajswf.jpeg",
"fullname": "Yue Yang",
"isPro": true,
"type": "user",
"user": "yyupenn"
}
},
{
"_id": "67b7f4f1b15c19d57189fc5f",
"hidden": false,
"name": "Ajay Patel",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b7f4f1b15c19d57189fc60",
"hidden": false,
"name": "Matt Deitke",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-21T15:03:20.545Z",
"user": {
"_id": "61c388aa727d1257bf3cf58b",
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1670871898141-61c388aa727d1257bf3cf58b.jpeg",
"fullname": "Matt Deitke",
"isPro": true,
"type": "user",
"user": "mattdeitke"
}
},
{
"_id": "67b7f4f1b15c19d57189fc61",
"hidden": false,
"name": "Tanmay Gupta",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b7f4f1b15c19d57189fc62",
"hidden": false,
"name": "Luca Weihs",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-21T15:02:58.922Z",
"user": {
"_id": "620acbdb3c0931626a7c9297",
"avatarUrl": "/avatars/f63b1d225ed81e223d3e8876a5c708c4.svg",
"fullname": "Luca Weihs",
"isPro": false,
"type": "user",
"user": "lucaweihs"
}
},
{
"_id": "67b7f4f1b15c19d57189fc63",
"hidden": false,
"name": "Andrew Head",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-21T15:02:53.395Z",
"user": {
"_id": "6360289588e41d249ecd3e26",
"avatarUrl": "/avatars/0e84dbd72f07e99967a8d25cda938efe.svg",
"fullname": "Andrew Head",
"isPro": false,
"type": "user",
"user": "ChittyChins"
}
},
{
"_id": "67b7f4f1b15c19d57189fc64",
"hidden": false,
"name": "Mark Yatskar",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-21T15:02:47.575Z",
"user": {
"_id": "631f42e1b6628770f6efd87a",
"avatarUrl": "/avatars/0fa93b3513ebca737cce26dfa5611cf1.svg",
"fullname": "Mark Yatskar",
"isPro": false,
"type": "user",
"user": "myatskar"
}
},
{
"_id": "67b7f4f1b15c19d57189fc65",
"hidden": false,
"name": "Chris Callison-Burch",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-21T15:02:41.757Z",
"user": {
"_id": "6303ce25fc783bfc744216af",
"avatarUrl": "/avatars/09f5e87c1f56a1b7f6ef9c5037682285.svg",
"fullname": "Chris Callison-Burch",
"isPro": false,
"type": "user",
"user": "CCB"
}
},
{
"_id": "67b7f4f1b15c19d57189fc66",
"hidden": false,
"name": "Ranjay Krishna",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-21T15:02:35.267Z",
"user": {
"_id": "66429868ab89e3a3a85668b0",
"avatarUrl": "/avatars/170e0daa454838deee2bf946f7118651.svg",
"fullname": "Ranjay Krishna",
"isPro": false,
"type": "user",
"user": "ranjaykrishna"
}
},
{
"_id": "67b7f4f1b15c19d57189fc67",
"hidden": false,
"name": "Aniruddha Kembhavi",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67b7f4f1b15c19d57189fc68",
"hidden": false,
"name": "Christopher Clark",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-20T18:55:30 | Scaling Text-Rich Image Understanding via Code-Guided Synthetic
Multimodal Data Generation | Reasoning about images with rich text, such as charts and documents, is a
critical application of vision-language models (VLMs). However, VLMs often
struggle in these domains due to the scarcity of diverse text-rich
vision-language data. To address this challenge, we present CoSyn, a framework
that leverages the coding capabilities of text-only large language models
(LLMs) to automatically create synthetic text-rich multimodal data. Given input
text describing a target domain (e.g., "nutrition fact labels"), CoSyn prompts
an LLM to generate code (Python, HTML, LaTeX, etc.) for rendering synthetic
images. With the underlying code as textual representations of the synthetic
images, CoSyn can generate high-quality instruction-tuning data, again relying
on a text-only LLM. Using CoSyn, we constructed a dataset comprising 400K
images and 2.7M rows of vision-language instruction-tuning data. Comprehensive
experiments on seven benchmarks demonstrate that models trained on our
synthetic data achieve state-of-the-art performance among competitive
open-source models, including Llama 3.2, and surpass proprietary models such as
GPT-4V and Gemini 1.5 Flash. Furthermore, CoSyn can produce synthetic pointing
data, enabling VLMs to ground information within input images, showcasing its
potential for developing multimodal agents capable of acting in real-world
environments. | 13 | 67b7f4f2b15c19d57189fc95 | null | null |
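The CoSyn pipeline described above can be outlined as a short loop: a text-only LLM writes rendering code, the code is rendered to an image, and the same LLM writes Q&A grounded in that code. The `llm` and `render_html` helpers and the prompts below are hypothetical stand-ins for the released framework.

```python
# Code-guided synthetic data generation, sketched with assumed helpers:
# `llm(prompt)` for a text-only LLM and `render_html(code, out_path)` for a
# headless-browser screenshot of the generated page.

def cosyn_example(domain, llm, render_html):
    code = llm(f"Write self-contained HTML that renders a realistic {domain}.")
    image_path = render_html(code, out_path=f"{domain}.png")
    qa = llm(
        "Given the HTML below, write instruction-tuning Q&A pairs whose answers "
        f"are verifiable from the rendered page:\n{code}"
    )
    return {"image": image_path, "code": code, "qa": qa}

# e.g. cosyn_example("nutrition fact label", llm, render_html)
```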