arxiv_id | github | title | upvotes | num_comments | hf_mention | num_models | num_datasets | num_spaces | reached_out_link | reached_out_success | has_artifact | submitted_by | is_staff | reached_out_note | date |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2406.12042 | https://github.com/rezashkv/diffusion_pruning | Not All Prompts Are Made Equal: Prompt-based Pruning of Text-to-Image Diffusion Models | 8 | 1 | 1 | 1 | 0 | 0 | null | 0 | null | rezashkv | false | | 2024-06-19 |
2406.12303 | | Immiscible Diffusion: Accelerating Diffusion Training with Noise Assignment | 4 | 1 | 0 | 0 | 0 | 0 | null | 0 | null | chenfengx | false | | 2024-06-19 |
2406.11909 | https://github.com/wutaiqiang/moslora | Mixture-of-Subspaces in Low-Rank Adaptation | 3 | 1 | 0 | 0 | 0 | 0 | null | 0 | null | taki555 | false | | 2024-06-19 |
2406.12274 | | SafeInfer: Context Adaptive Decoding Time Safety Alignment for Large Language Models | 13 | 2 | 0 | 0 | 1 | 0 | null | 0 | null | rimahazra | false | | 2024-06-19 |
2406.12031 | https://github.com/mlfoundations/tabliblib | Large Scale Transfer Learning for Tabular Data via Language Modeling | 8 | 1 | 1 | 1 | 2 | 0 | null | 0 | null | davanstrien | true | | 2024-06-19 |
2406.12742 | https://github.com/dtennant/mirb_eval | Benchmarking Multi-Image Understanding in Vision and Language Models: Perception, Knowledge, Reasoning, and Multi-Hop Reasoning | 14 | 3 | 1 | 0 | 1 | 0 | null | 0 | null | tennant | false | | 2024-06-19 |
2406.12311 | | Mixture of Scales: Memory-Efficient Token-Adaptive Binarization for Large Language Models | 7 | 1 | 0 | 0 | 0 | 0 | null | 0 | null | dongwonjo | false | | 2024-06-19 |
2406.12459 | https://github.com/humansplat/humansplat.github.io | HumanSplat: Generalizable Single-Image Human Gaussian Splatting with Structure Priors | 11 | 1 | 0 | 0 | 0 | 0 | null | 0 | null | paulpanwang | false | | 2024-06-19 |
2406.12066 | https://github.com/bittermanlab/rabbits | Language Models are Surprisingly Fragile to Drug Names in Biomedical Benchmarks | 8 | 1 | 1 | 0 | 1 | 1 | null | 0 | null | shanchen | false | | 2024-06-19 |
2406.11811 | | RepLiQA: A Question-Answering Dataset for Benchmarking LLMs on Unseen Reference Content | 15 | 1 | 0 | 0 | 1 | 0 | null | 0 | null | hughesthe1st | false | | 2024-06-19 |
2406.12275 | https://github.com/Yxxxb/VoCo-LLaMA | VoCo-LLaMA: Towards Vision Compression with Large Language Models | 29 | 2 | 1 | 0 | 0 | 0 | null | 0 | null | akhaliq | true | | 2024-06-19 |
2406.12753 | https://github.com/gair-nlp/olympicarena | OlympicArena: Benchmarking Multi-discipline Cognitive Reasoning for Superintelligent AI | 14 | 2 | 1 | 0 | 1 | 1 | null | 0 | null | akhaliq | true | | 2024-06-19 |
2406.12246 | https://github.com/byungkwanlee/trol | TroL: Traversal of Layers for Large Language and Vision Models | 34 | 2 | 1 | 3 | 0 | 1 | null | 0 | null | BK-Lee | false | | 2024-06-19 |
2406.11931 | https://github.com/deepseek-ai/deepseek-coder-v2 | DeepSeek-Coder-V2: Breaking the Barrier of Closed-Source Models in Code Intelligence | 54 | 2 | 1 | 0 | 0 | 0 | null | 0 | null | akhaliq | true | | 2024-06-19 |
2406.12824 | | From RAGs to rich parameters: Probing how language models utilize external knowledge over parametric information for factual queries | 20 | 2 | 0 | 0 | 0 | 0 | null | 0 | null | samyadeepbasu | false | | 2024-06-19 |
2406.12644 | https://github.com/devichand579/HPT | Hierarchical Prompting Taxonomy: A Universal Evaluation Framework for Large Language Models | 4 | 1 | 1 | 0 | 0 | 0 | null | 0 | null | amanchadha | false | | 2024-06-19 |
2406.12292 | | JEN-1 DreamStyler: Customized Musical Concept Learning via Pivotal Parameters Tuning | 4 | 2 | 0 | 0 | 0 | 0 | null | 0 | null | akhaliq | true | | 2024-06-19 |
2406.12168 | | BPO: Supercharging Online Preference Learning by Adhering to the Proximity of Behavior LLM | 7 | 1 | 0 | 0 | 0 | 0 | null | 0 | null | jiachenli-ucsb | false | | 2024-06-19 |
2406.09760 | https://github.com/sail-sg/dice | Bootstrapping Language Models with DPO Implicit Rewards | 37 | 1 | 1 | 4 | 0 | 0 | null | 0 | null | SivilTaram | false | | 2024-06-19 |
2406.11687 | | Tokenization Falling Short: The Curse of Tokenization | 13 | 1 | 0 | 0 | 0 | 0 | null | 0 | null | cyk1337 | false | | 2024-06-19 |
2406.12793 | https://github.com/thudm/chatglm-6b | ChatGLM: A Family of Large Language Models from GLM-130B to GLM-4 All Tools | 30 | 2 | 1 | 4 | 0 | 20 | null | 0 | null | akhaliq | true | | 2024-06-19 |
2406.11614 | https://github.com/yihuaihong/conceptvectors | Intrinsic Evaluation of Unlearning Using Parametric Knowledge Traces | 3 | 2 | 1 | 0 | 1 | 0 | null | 0 | null | YihuaiHong | false | | 2024-06-20 |
2406.11715 | | Measuring memorization in RLHF for code completion | 6 | 1 | 0 | 0 | 0 | 0 | null | 0 | null | billyporter | false | | 2024-06-20 |
2406.12209 | https://github.com/atosystem/ssl_interface | Interface Design for Self-Supervised Speech Models | 6 | 1 | 0 | 0 | 0 | 0 | null | 0 | null | atosystem | false | | 2024-06-20 |
2406.12649 | | Probabilistic Conceptual Explainers: Trustworthy Conceptual Explanations for Vision Foundation Models | 15 | 1 | 0 | 0 | 0 | 0 | null | 0 | null | aaronwhy | false | | 2024-06-20 |
2406.12034 | | Self-MoE: Towards Compositional Large Language Models with Self-Specialized Experts | 12 | 1 | 0 | 0 | 0 | 0 | null | 0 | null | jm-kang | false | | 2024-06-20 |
2406.11230 | https://github.com/wang-ml-lab/multimodal-needle-in-a-haystack | Multimodal Needle in a Haystack: Benchmarking Long-Context Capability of Multimodal Large Language Models | 34 | 1 | 0 | 0 | 0 | 0 | null | 0 | null | aaronwhy | false | | 2024-06-20 |
2406.11612 | https://github.com/jetbrains-research/lca-baselines | Long Code Arena: a Set of Benchmarks for Long-Context Code Models | 20 | 1 | 1 | 0 | 6 | 1 | null | 0 | null | jenyag | false | | 2024-06-20 |
2406.11139 | | Breaking Boundaries: Investigating the Effects of Model Editing on Cross-linguistic Performance | 12 | 1 | 0 | 0 | 0 | 0 | null | 0 | null | leowin | false | | 2024-06-20 |
2406.14539 | | Invertible Consistency Distillation for Text-Guided Image Editing in Around 7 Steps | 26 | 1 | 0 | 0 | 0 | 2 | null | 0 | null | dbaranchuk | false | | 2024-06-21 |
2406.13099 | | Sampling 3D Gaussian Scenes in Seconds with Latent Diffusion Models | 4 | 1 | 0 | 0 | 0 | 0 | null | 0 | null | pmh47 | false | | 2024-06-21 |
2406.11289 | | A Systematic Survey of Text Summarization: From Statistical Methods to Large Language Models | 5 | 2 | 0 | 0 | 0 | 0 | null | 0 | null | hpzhang | false | | 2024-06-21 |
2406.13735 | | StableSemantics: A Synthetic Language-Vision Dataset of Semantic Representations in Naturalistic Images | 5 | 1 | 0 | 0 | 0 | 0 | null | 0 | null | aluo-x | false | | 2024-06-21 |
2406.12618 | | From Insights to Actions: The Impact of Interpretability and Analysis Research on NLP | 5 | 1 | 0 | 0 | 0 | 0 | null | 0 | null | dippedrusk | false | | 2024-06-21 |
2406.13663 | https://github.com/betswish/mirage | Model Internals-based Answer Attribution for Trustworthy Retrieval-Augmented Generation | 7 | 1 | 0 | 0 | 0 | 0 | null | 0 | null | gsarti | false | | 2024-06-21 |
2406.10601 | https://github.com/airi-institute/stylefeatureeditor | The Devil is in the Details: StyleFeatureEditor for Detail-Rich StyleGAN Inversion and High Quality Image Editing | 65 | 2 | 1 | 1 | 0 | 1 | null | 0 | null | ai-alanov | false | | 2024-06-21 |
2406.13621 | https://github.com/guyyariv/vlmig | Improving Visual Commonsense in Language Models via Multiple Image Generation | 13 | 2 | 1 | 0 | 0 | 0 | null | 0 | null | GuyYariv | false | | 2024-06-21 |
2406.13542 | https://github.com/QwenLM/AutoIF | Self-play with Execution Feedback: Improving Instruction-following Capabilities of Large Language Models | 16 | 2 | 1 | 0 | 0 | 0 | null | 0 | null | davanstrien | true | | 2024-06-21 |
2406.14319 | https://github.com/chuangtaochen-tum/livemind | LiveMind: Low-latency Large Language Models with Simultaneous Inference | 14 | 2 | 1 | 0 | 0 | 0 | null | 0 | null | ChuangtaoChen-TUM | false | | 2024-06-21 |
2406.11410 | https://github.com/liteai-team/hare | HARE: HumAn pRiors, a key to small language model Efficiency | 38 | 1 | 1 | 3 | 0 | 0 | null | 0 | null | lingyun1 | false | | 2024-06-21 |
2406.14563 | | Model Merging and Safety Alignment: One Bad Model Spoils the Bunch | 30 | 1 | 0 | 0 | 0 | 0 | null | 0 | null | hammh0a | false | | 2024-06-21 |
2406.12925 | | GLiNER multi-task: Generalist Lightweight Model for Various Information Extraction Tasks | 20 | 2 | 0 | 1 | 0 | 7 | null | 0 | null | whitemetalicdragon | false | | 2024-06-21 |
2406.11927 | https://github.com/FSoft-AI4Code/RepoExec | REPOEXEC: Evaluate Code Generation with a Repository-Level Executable Benchmark | 9 | 1 | 1 | 0 | 0 | 0 | null | 0 | null | bdqnghi | false | | 2024-06-21 |
2406.11896 | | DigiRL: Training In-The-Wild Device-Control Agents with Autonomous Reinforcement Learning | 18 | 1 | 0 | 0 | 0 | 0 | null | 0 | null | Jiayi-Pan | false | | 2024-06-21 |
2406.14130 | https://github.com/modelscope/DiffSynth-Studio | ExVideo: Extending Video Diffusion Models via Parameter-Efficient Post-Tuning | 10 | 2 | 1 | 1 | 0 | 4 | null | 0 | null | akhaliq | true | | 2024-06-21 |
2406.14491 | https://github.com/microsoft/lmops | Instruction Pre-Training: Language Models are Supervised Multitask Learners | 76 | 8 | 0 | 15 | 3 | 1 | null | 0 | null | daixuancheng | false | | 2024-06-21 |
2406.12045 | https://github.com/sierra-research/tau-bench | τ-bench: A Benchmark for Tool-Agent-User Interaction in Real-World Domains | 4 | 1 | 0 | 0 | 0 | 0 | null | 0 | null | sohampnow | false | | 2024-06-21 |
2406.14347 | | nabla^2DFT: A Universal Quantum Chemistry Dataset of Drug-Like Molecules and a Benchmark for Neural Network Potentials | 99 | 1 | 0 | 0 | 0 | 0 | null | 0 | null | ofantomas | false | | 2024-06-21 |
2406.14544 | https://github.com/sparksjoe/prism | Prism: A Framework for Decoupling and Assessing the Capabilities of VLMs | 34 | 2 | 1 | 2 | 0 | 0 | null | 0 | null | Lin-Chen | false | | 2024-06-21 |
2406.14515 | https://github.com/open-compass/vlmevalkit | MMBench-Video: A Long-Form Multi-Shot Benchmark for Holistic Video Understanding | 29 | 1 | 1 | 0 | 0 | 0 | null | 0 | null | KennyUTC | false | | 2024-06-21 |
2406.14562 | | Whiteboard-of-Thought: Thinking Step-by-Step Across Modalities | 27 | 1 | 0 | 0 | 0 | 0 | null | 0 | null | sachit-menon | false | | 2024-06-21 |
2406.11817 | | Iterative Length-Regularized Direct Preference Optimization: A Case Study on Improving 7B Language Models to GPT-4 Level | 13 | 1 | 0 | 1 | 0 | 0 | null | 0 | null | zhangysk | false | | 2024-06-21 |
2406.13923 | | PIN: A Knowledge-Intensive Dataset for Paired and Interleaved Multimodal Documents | 21 | 1 | 0 | 0 | 1 | 0 | null | 0 | null | zhangysk | false | | 2024-06-21 |
2406.14938 | | Towards Retrieval Augmented Generation over Large Video Libraries | 18 | 1 | 0 | 0 | 0 | 0 | null | 0 | null | YannisTevissen | false | | 2024-06-24 |
2406.13393 | | Style-NeRF2NeRF: 3D Style Transfer From Style-Aligned Multi-View Images | 5 | 1 | 0 | 0 | 0 | 0 | null | 0 | null | haruo666 | false | | 2024-06-24 |
2406.12564 | https://github.com/vityavitalich/meritfed | Low-Resource Machine Translation through the Lens of Personalized Federated Learning | 3 | 1 | 0 | 0 | 0 | 0 | null | 0 | null | VityaVitalich | false | | 2024-06-24 |
2406.14213 | | Complexity of Symbolic Representation in Working Memory of Transformer Correlates with the Complexity of a Task | 20 | 3 | 0 | 0 | 0 | 0 | null | 0 | null | alsu-sagirova | false | | 2024-06-24 |
2406.14764 | | RE-AdaptIR: Improving Information Retrieval through Reverse Engineered Adaptation | 4 | 1 | 0 | 0 | 0 | 0 | null | 0 | null | will-fleshman | false | | 2024-06-24 |
2406.11617 | https://github.com/declare-lab/della | DELLA-Merging: Reducing Interference in Model Merging through Magnitude-Based Sampling | 7 | 1 | 0 | 0 | 0 | 0 | null | 0 | null | RishabhBhardwaj | false | | 2024-06-24 |
2406.13527 | | 4K4DGen: Panoramic 4D Generation at 4K Resolution | 7 | 1 | 0 | 0 | 0 | 0 | null | 0 | null | Rj-Lee | false | | 2024-06-24 |
2406.11654 | | Ruby Teaming: Improving Quality Diversity Search with Memory for Automated Red Teaming | 6 | 1 | 0 | 0 | 0 | 0 | null | 0 | null | RishabhBhardwaj | false | | 2024-06-24 |
2406.13236 | https://github.com/shangdatalab/deep-contam | Data Contamination Can Cross Language Barriers | 8 | 2 | 0 | 0 | 0 | 0 | null | 0 | null | fengyao1909 | false | | 2024-06-24 |
2406.14972 | https://github.com/florin-git/Base-vs-Instruct-LLMs-in-RAG-Systems | A Tale of Trust and Accuracy: Base vs. Instruct LLMs in RAG Systems | 6 | 1 | 1 | 0 | 1 | 0 | null | 0 | null | florin-hf | false | | 2024-06-24 |
2406.15877 | https://github.com/bigcode-project/bigcodebench-annotation | BigCodeBench: Benchmarking Code Generation with Diverse Function Calls and Complex Instructions | 43 | 5 | 0 | 0 | 4 | 1 | null | 0 | null | terryyz | false | | 2024-06-24 |
2406.14783 | | Evaluating RAG-Fusion with RAGElo: an Automated Elo-based Framework | 15 | 2 | 0 | 0 | 0 | 0 | null | 0 | null | din0s | false | | 2024-06-24 |
2406.14596 | | ICAL: Continual Learning of Multimodal Agents by Transforming Trajectories into Actionable Insights | 4 | 2 | 0 | 0 | 0 | 0 | null | 0 | null | gsarch | false | | 2024-06-24 |
2406.13457 | https://github.com/dachunkai/evtexture | EvTexture: Event-driven Texture Enhancement for Video Super-Resolution | 15 | 1 | 0 | 0 | 0 | 0 | null | 0 | null | BoyDachun | false | | 2024-06-24 |
2406.11403 | https://github.com/leloykun/mmfm-challenge | Multimodal Structured Generation: CVPR's 2nd MMFM Challenge Technical Report | 4 | 1 | 1 | 0 | 0 | 0 | null | 0 | null | leloy | false | | 2024-06-24 |
2406.15349 | https://github.com/autonomousvision/navsim | NAVSIM: Data-Driven Non-Reactive Autonomous Vehicle Simulation and Benchmarking | 5 | 1 | 0 | 0 | 0 | 0 | null | 0 | null | kashyap7x | false | | 2024-06-24 |
2406.14835 | | ToVo: Toxicity Taxonomy via Voting | 3 | 1 | 0 | 1 | 0 | 0 | null | 0 | null | convoicon | false | | 2024-06-24 |
2406.14805 | | How Well Do LLMs Represent Values Across Cultures? Empirical Analysis of LLM Responses Based on Hofstede Cultural Dimensions | 3 | 1 | 0 | 0 | 0 | 0 | null | 0 | null | amanchadha | false | | 2024-06-24 |
2406.15319 | | LongRAG: Enhancing Retrieval-Augmented Generation with Long-context LLMs | 57 | 5 | 0 | 0 | 1 | 0 | null | 0 | null | wenhu | false | | 2024-06-24 |
2406.15252 | | MantisScore: Building Automatic Metrics to Simulate Fine-grained Human Feedback for Video Generation | 14 | 1 | 0 | 2 | 2 | 1 | null | 0 | null | wenhu | false | | 2024-06-24 |
2406.14393 | https://github.com/zhxieml/remiss-jailbreak | Jailbreaking as a Reward Misspecification Problem | 12 | 2 | 1 | 0 | 0 | 0 | null | 0 | null | Zhihui | false | | 2024-06-24 |
2406.14599 | | Stylebreeder: Exploring and Democratizing Artistic Styles through Text-to-Image Models | 16 | 2 | 0 | 0 | 1 | 0 | null | 0 | null | akhaliq | true | | 2024-06-24 |
2406.12624 | https://github.com/UMass-Meta-LLM-Eval/llm_eval | Judging the Judges: Evaluating Alignment and Vulnerabilities in LLMs-as-Judges | 35 | 2 | 1 | 0 | 0 | 0 | null | 0 | null | singh96aman | false | | 2024-06-24 |
2406.14035 | | Two Giraffes in a Dirt Field: Using Game Play to Investigate Situation Modelling in Large Multimodal Models | 10 | 1 | 0 | 0 | 0 | 0 | null | 0 | null | sherzod-hakimov | false | | 2024-06-24 |
2406.15275 | | Cognitive Map for Language Models: Optimal Planning via Verbally Representing the World Model | 10 | 0 | 0 | 0 | 0 | 0 | null | 0 | null | doyoungkim | false | | 2024-06-24 |
2406.12056 | https://github.com/liugangcode/InfoAlign | Learning Molecular Representation in a Cell | 6 | 1 | 1 | 1 | 1 | 0 | null | 0 | null | liuganghuggingface | false | | 2024-06-24 |
2406.15193 | https://github.com/declare-lab/darwin | Reward Steering with Evolutionary Heuristics for Decoding-time Alignment | 12 | 3 | 0 | 0 | 0 | 0 | null | 0 | null | hungchiayu | false | | 2024-06-24 |
2406.16772 | https://github.com/gair-nlp/olympicarena | OlympicArena Medal Ranks: Who Is the Most Intelligent AI So Far? | 2 | 2 | 1 | 0 | 0 | 0 | null | 0 | null | SinclairWang | false | | 2024-06-25 |
2406.16048 | | Evaluating D-MERIT of Partial-annotation on Information Retrieval | 34 | 2 | 0 | 0 | 0 | 0 | null | 0 | null | Royir | false | | 2024-06-25 |
2406.14051 | | How Many Parameters Does it Take to Change a Light Bulb? Evaluating Performance in Self-Play of Conversational Games as a Function of Model Characteristics | 9 | 1 | 0 | 0 | 0 | 0 | null | 0 | null | sherzod-hakimov | false | | 2024-06-25 |
2406.16714 | https://github.com/thu-coai/autodetect | AutoDetect: Towards a Unified Framework for Automated Weakness Detection in Large Language Models | 10 | 2 | 1 | 0 | 0 | 0 | null | 0 | null | CCCCCC | false | | 2024-06-25 |
2406.16254 | | Confidence Regulation Neurons in Language Models | 10 | 1 | 0 | 0 | 0 | 0 | null | 0 | null | gsarti | false | | 2024-06-25 |
2406.15718 | https://github.com/thunlp/duplex-model | Beyond the Turn-Based Game: Enabling Real-Time Conversations with Duplex Models | 14 | 2 | 1 | 1 | 1 | 0 | null | 0 | null | ShengdingHu | false | | 2024-06-25 |
2406.13632 | | Can Few-shot Work in Long-Context? Recycling the Context to Generate Demonstrations | 5 | 1 | 0 | 0 | 0 | 0 | null | 0 | null | cattana | false | | 2024-06-25 |
2406.16008 | | Found in the Middle: Calibrating Positional Attention Bias Improves Long Context Utilization | 6 | 1 | 0 | 0 | 0 | 0 | null | 0 | null | cydhsieh01 | false | | 2024-06-25 |
2406.16860 | https://github.com/cambrian-mllm/cambrian | Cambrian-1: A Fully Open, Vision-Centric Exploration of Multimodal LLMs | 52 | 4 | 1 | 4 | 3 | 0 | null | 0 | null | ellisbrown | false | | 2024-06-25 |
2406.16747 | | Sparser is Faster and Less is More: Efficient Sparse Attention for Long-Range Transformers | 16 | 1 | 0 | 0 | 0 | 0 | null | 0 | null | zlzheng | false | | 2024-06-25 |
2406.15927 | | Semantic Entropy Probes: Robust and Cheap Hallucination Detection in LLMs | 13 | 1 | 0 | 0 | 0 | 0 | null | 0 | null | jlko | false | | 2024-06-25 |
2406.16683 | https://github.com/nzilberstein/Repulsive-score-distillation-RSD- | Repulsive Score Distillation for Diverse Sampling of Diffusion Models | 4 | 2 | 1 | 0 | 0 | 0 | null | 0 | null | nicozilber | false | | 2024-06-25 |
2406.16815 | | ClotheDreamer: Text-Guided Garment Generation with 3D Gaussians | 7 | 1 | 0 | 0 | 0 | 0 | null | 0 | null | tangjs | false | | 2024-06-25 |
2406.16260 | | Video-Infinity: Distributed Long Video Generation | 28 | 2 | 0 | 0 | 0 | 0 | null | 0 | null | adamdad | false | | 2024-06-25 |
2406.16338 | | VideoHallucer: Evaluating Intrinsic and Extrinsic Hallucinations in Large Video-Language Models | 23 | 2 | 0 | 0 | 1 | 0 | null | 0 | null | zlzheng | false | | 2024-06-25 |
2406.16758 | https://github.com/Kthyeon/Multilingual-SpecBench | Towards Fast Multilingual LLM Inference: Speculative Decoding and Specialized Drafters | 18 | 2 | 0 | 0 | 0 | 0 | null | 0 | null | Kthyeon | false | | 2024-06-25 |
2406.16855 | https://github.com/yuangpeng/dreambench_plus | DreamBench++: A Human-Aligned Benchmark for Personalized Image Generation | 53 | 3 | 1 | 0 | 0 | 0 | null | 0 | null | yuangpeng | false | | 2024-06-25 |
2406.14833 | | Efficient Continual Pre-training by Mitigating the Stability Gap | 19 | 1 | 0 | 0 | 0 | 0 | null | 0 | null | YiDuo1999 | false | | 2024-06-25 |
2406.15704 | https://github.com/bytedance/salmonn | video-SALMONN: Speech-Enhanced Audio-Visual Large Language Models | 5 | 1 | 1 | 1 | 0 | 1 | null | 0 | null | BrianatCambridge | false | | 2024-06-25 |
2406.16235 | https://github.com/batsresearch/cross-lingual-detox | Preference Tuning For Toxicity Mitigation Generalizes Across Languages | 12 | 1 | 1 | 6 | 0 | 0 | null | 0 | null | yongzx | false | | 2024-06-25 |
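For readers who want to work with the table programmatically, the sketch below parses the pipe-delimited rows into Python dictionaries. It is a minimal illustration rather than an official loader for this dataset: the column names are taken from the table header above, and the file path `daily_papers.md` is a hypothetical placeholder for wherever the table has been saved.

```python
# Minimal sketch (not part of the original table): parse the pipe-delimited
# rows into dicts keyed by the column names from the header. The path
# "daily_papers.md" is a hypothetical placeholder.

COLUMNS = [
    "arxiv_id", "github", "title", "upvotes", "num_comments", "hf_mention",
    "num_models", "num_datasets", "num_spaces", "reached_out_link",
    "reached_out_success", "has_artifact", "submitted_by", "is_staff",
    "reached_out_note", "date",
]

def parse_rows(text: str):
    """Yield one dict per data row; the header and separator lines are skipped."""
    for line in text.splitlines():
        # Drop the trailing pipe, then split into cells.
        cells = [c.strip() for c in line.rstrip().rstrip("|").split("|")]
        # Data rows start with an arXiv id such as "2406.12042";
        # the header ("arxiv_id") and the separator ("---") do not.
        if len(cells) != len(COLUMNS) or not cells[0][:1].isdigit():
            continue
        yield dict(zip(COLUMNS, cells))

if __name__ == "__main__":
    with open("daily_papers.md", encoding="utf-8") as fh:  # hypothetical path
        rows = list(parse_rows(fh.read()))
    # Example query: papers submitted on 2024-06-21 with at least 30 upvotes.
    for row in rows:
        if row["date"] == "2024-06-21" and int(row["upvotes"]) >= 30:
            print(row["arxiv_id"], row["title"])
```

The same text could also be loaded with pandas (`read_csv` with `sep="|"`), at the cost of filtering out the markdown separator row and dropping the empty column produced by the trailing pipes.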