[FEEDBACK] Daily Papers
Note that this is not a post for submitting new papers; it's for feedback on the Daily Papers community update feature.
How to submit a paper to the Daily Papers, like @akhaliq (AK)?
- Submitting is available to paper authors
- Only recent papers (less than 7 days old) can be featured on the Daily
- Drop the arXiv ID in the form at https://huggingface.co/papers/submit
- Add media (images, videos) to the paper when relevant
- You can start a discussion to engage with the community
Please check out the documentation
We are excited to share our recent work on MLLM architecture design titled "Ovis: Structural Embedding Alignment for Multimodal Large Language Model".
Paper: https://arxiv.org/abs/2405.20797
Github: https://github.com/AIDC-AI/Ovis
Model: https://huggingface.co/AIDC-AI/Ovis-Clip-Llama3-8B
Data: https://huggingface.co/datasets/AIDC-AI/Ovis-dataset
@Yiwen-ntu For now, we support only videos as paper covers in the Daily.
We are excited to share our work titled "Hierarchical Prompting Taxonomy: A Universal Evaluation Framework for Large Language Models": https://arxiv.org/abs/2406.12644
@akhaliq
@kramp
Dear AK and HF team,
We are pleased to share our latest research paper, "Beyond Examples: High-level Automated Reasoning Paradigm in In-Context Learning via MCTS," for your consideration, as we believe it may be of significant interest for HF Daily Papers. This work introduces HiAR-ICL, a novel paradigm to enhance the complex reasoning capabilities of large language models.
Unlike traditional in-context learning, HiAR-ICL shifts the focus from example-based analogical learning to abstract thinking patterns. It employs Monte Carlo Tree Search to explore reasoning paths and creates "thought cards" to guide inferences. By dynamically matching test problems with appropriate thought cards through a proposed cognitive complexity framework, HiAR-ICL achieves state-of-the-art accuracy of 79.6% with a 7B model on the challenging MATH benchmark, surpassing both GPT-4o and Claude 3.5.
Paper: https://arxiv.org/pdf/2411.18478
Project Page: https://jinyangwu.github.io/hiar-icl/
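As a rough illustration of the card-matching step, here is a toy sketch. Everything in it (the class names, the complexity heuristic, the card contents) is an illustrative assumption for this post, not the paper's actual implementation:

```python
# Toy sketch of matching a test problem to a "thought card" by complexity.
# In HiAR-ICL, cards are discovered via MCTS and complexity comes from a
# cognitive-complexity framework; here both are simplified stand-ins.
from dataclasses import dataclass

@dataclass
class ThoughtCard:
    pattern: list        # abstract reasoning actions, e.g. "decompose", "verify"
    complexity: float    # complexity score attached to this reasoning pattern

def estimate_complexity(problem: str) -> float:
    # Toy proxy: longer problems with more sub-clauses score higher.
    return len(problem.split()) / 10 + problem.count(",")

def match_card(problem: str, cards: list) -> ThoughtCard:
    # Route the problem to the card whose complexity is closest to its own.
    target = estimate_complexity(problem)
    return min(cards, key=lambda c: abs(c.complexity - target))

cards = [
    ThoughtCard(["recall", "answer"], complexity=1.0),
    ThoughtCard(["decompose", "solve-subgoals", "verify"], complexity=3.0),
]
card = match_card("Find x if 2x + 3 = 11, then compute x^2.", cards)
```

The selected card's abstract pattern would then guide the model's step-by-step inference instead of concrete worked examples.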
We would greatly appreciate your consideration of our paper for inclusion.
Best regards,
Jinyang Wu, Mingkuan Feng, Shuai Zhang, Feihu Che, Zengqi Wen, Jianhua Tao
Hi @kramp and @akhaliq, could you please help me verify my authorship claim for this paper? https://huggingface.co/papers/2411.15640
Today marks the 6th day, and I need the claim verified in time for the paper to be featured on the Daily Papers.
I hope you're doing well! I would like to kindly request your assistance in verifying my authorship claim for this paper: https://huggingface.co/papers/2411.18478. Today marks the 6th day, and I would appreciate it if you could help expedite the verification process so that the paper can be featured on the daily papers.
Thank you so much for your help!
Best regards,
Jinyang Wu
Self-Supervised Unified Generation with Universal Editing: https://arxiv.org/pdf/2412.02114
Dear AK and HF team,
I would like to kindly request your assistance in verifying my authorship claim for this paper: https://huggingface.co/papers/2411.18478. Today marks the 7th day, and I would appreciate it if you could help expedite the verification process so that the paper can be featured on the daily papers.
Thank you so much for your help!
Best regards,
Mingyu Xu
@akhaliq
@kramp
Dear AK and HF team,
We would like to kindly request your assistance in sharing our latest research paper, published less than one month ago (Nov. 14): "Golden Noise for Diffusion Models: A Learning Framework". We believe it may be of significant interest for HF Daily Papers.
First, we identify a new concept termed the noise prompt, which aims at turning a random noise into a golden noise by adding a small desirable perturbation derived from the text prompt. The golden noise perturbation can be considered a kind of prompt for noise, as it is rich in semantic information and tailored to the given text prompt. Building upon this concept, we formulate a noise prompt learning framework that learns "prompted" golden noises associated with text prompts for diffusion models.
Second, to implement this framework, we propose a training dataset, the noise prompt dataset (NPD), and a learning model, the noise prompt network (NPNet). Specifically, we design a noise prompt data collection pipeline via re-denoise sampling, a way to produce noise pairs, and incorporate AI-driven feedback mechanisms to ensure that the noise pairs are highly valuable. This pipeline enables us to collect a large-scale training dataset for noise prompt learning, so the trained NPNet can directly transform a random Gaussian noise into a golden noise that boosts the performance of the T2I diffusion model.
Third, we conduct extensive experiments across various mainstream diffusion models, including StableDiffusion-xl (SDXL), DreamShaper-xl-v2-turbo, and Hunyuan-DiT, with 7 different samplers on 4 different datasets. We evaluate our model using 6 human preference metrics: Human Preference Score v2 (HPSv2), PickScore, Aesthetic Score (AES), ImageReward, CLIPScore, and Multi-dimensional Preference Score (MPS). As illustrated in Fig. 1, by leveraging the learned golden noises, not only are the overall quality and aesthetic style of the synthesized images visually enhanced, but all metrics also show significant improvements, demonstrating the effectiveness and generalization ability of our NPNet. For instance, on GenEval, our NPNet improves SDXL's HPSv2 by 18% (24.04 → 28.41), even surpassing the much stronger recent DiT-based diffusion model Hunyuan-DiT (27.78). Furthermore, NPNet is a compact and efficient neural network that functions as a plug-and-play module, introducing only about 3% extra inference time per image compared to the standard pipeline, while requiring approximately 3% of the memory of the standard pipeline. This efficiency underscores the practical applicability of NPNet in real-world scenarios.
Paper: https://arxiv.org/abs/2411.09502
Project Page: https://github.com/xie-lab-ml/Golden-Noise-for-Diffusion-Models
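To make the core idea concrete, here is a minimal sketch of a text-conditioned noise perturbation. The layer sizes, weights, and function names are simplified stand-ins for this post, not the released NPNet:

```python
# Toy sketch of the noise-prompt idea: a small network predicts a
# text-conditioned perturbation that is added to the initial Gaussian
# noise, producing a "golden" noise tailored to the prompt.
import numpy as np

rng = np.random.default_rng(0)

NOISE_DIM, TEXT_DIM = 16, 8
# Stand-in weights for a one-layer "noise prompt network".
W = rng.normal(scale=0.1, size=(NOISE_DIM + TEXT_DIM, NOISE_DIM))

def golden_noise(noise: np.ndarray, text_emb: np.ndarray) -> np.ndarray:
    # Concatenate the noise with the text embedding, predict a small
    # bounded perturbation, and add it to the original noise.
    x = np.concatenate([noise, text_emb], axis=-1)
    delta = np.tanh(x @ W)
    return noise + delta

noise = rng.normal(size=(2, NOISE_DIM))       # random initial latent noise
text_emb = rng.normal(size=(2, TEXT_DIM))     # stand-in text-prompt embedding
golden = golden_noise(noise, text_emb)        # same shape as the input noise
```

In the actual framework, the network is trained on noise pairs collected via re-denoise sampling, and the resulting golden noise replaces the random initial latent in the standard sampling pipeline.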
We would greatly appreciate your assistance and consideration of our paper for inclusion.
Best regards,
Zikai Zhou, Shitong Shao, Lichen Bai, Zhiqiang Xu, Bo Han, Zeke Xie