| column | type | min | max |
| --- | --- | --- | --- |
| title | stringlengths | 4 | 246 |
| id | stringlengths | 32 | 39 |
| arxiv_url | stringlengths | 32 | 39 |
| pdf_url | stringlengths | 32 | 39 |
| published_date | stringlengths | 10 | 10 |
| updated_date | stringlengths | 10 | 10 |
| authors | sequencelengths | 1 | 535 |
| affiliations | sequencelengths | 1 | 535 |
| summary | stringlengths | 23 | 3.54k |
| comment | stringlengths | 0 | 762 |
| journal_ref | stringlengths | 0 | 545 |
| doi | stringlengths | 0 | 151 |
| primary_category | stringclasses | 156 values | |
| categories | sequencelengths | 1 | 11 |
SAM2Point: Segment Any 3D as Videos in Zero-shot and Promptable Manners
http://arxiv.org/abs/2408.16768v1
http://arxiv.org/abs/2408.16768v1
http://arxiv.org/pdf/2408.16768v1
2024-08-29
2024-08-29
[ "Ziyu Guo", "Renrui Zhang", "Xiangyang Zhu", "Chengzhuo Tong", "Peng Gao", "Chunyuan Li", "Pheng-Ann Heng" ]
[ "", "", "", "", "", "", "" ]
We introduce SAM2Point, a preliminary exploration adapting Segment Anything Model 2 (SAM 2) for zero-shot and promptable 3D segmentation. SAM2Point interprets any 3D data as a series of multi-directional videos and leverages SAM 2 for 3D-space segmentation, without further training or 2D-3D projection. Our framework supports various prompt types, including 3D points, boxes, and masks, and can generalize across diverse scenarios, such as 3D objects, indoor scenes, outdoor environments, and raw sparse LiDAR. Demonstrations on multiple 3D datasets, e.g., Objaverse, S3DIS, ScanNet, Semantic3D, and KITTI, highlight the robust generalization capabilities of SAM2Point. To the best of our knowledge, we present the most faithful implementation of SAM in 3D, which may serve as a starting point for future research in promptable 3D segmentation. Online Demo: https://huggingface.co/spaces/ZiyuG/SAM2Point . Code: https://github.com/ZiyuGuo99/SAM2Point .
Work in progress. Online Demo: https://huggingface.co/spaces/ZiyuG/SAM2Point . Code: https://github.com/ZiyuGuo99/SAM2Point
cs.CV
[ "cs.CV", "cs.AI", "cs.CL" ]
ReconX: Reconstruct Any Scene from Sparse Views with Video Diffusion Model
http://arxiv.org/abs/2408.16767v1
http://arxiv.org/abs/2408.16767v1
http://arxiv.org/pdf/2408.16767v1
2024-08-29
2024-08-29
[ "Fangfu Liu", "Wenqiang Sun", "Hanyang Wang", "Yikai Wang", "Haowen Sun", "Junliang Ye", "Jun Zhang", "Yueqi Duan" ]
[ "", "", "", "", "", "", "", "" ]
Advancements in 3D scene reconstruction have transformed 2D images from the real world into 3D models, producing realistic 3D results from hundreds of input photos. Despite great success in dense-view reconstruction scenarios, rendering a detailed scene from too few captured views is still an ill-posed optimization problem, often resulting in artifacts and distortions in unseen areas. In this paper, we propose ReconX, a novel 3D scene reconstruction paradigm that reframes the ambiguous reconstruction challenge as a temporal generation task. The key insight is to unleash the strong generative prior of large pre-trained video diffusion models for sparse-view reconstruction. However, it is difficult to preserve 3D view consistency in video frames generated directly by pre-trained models. To address this, given limited input views, the proposed ReconX first constructs a global point cloud and encodes it into a contextual space as the 3D structure condition. Guided by the condition, the video diffusion model then synthesizes video frames that are both detail-preserved and exhibit a high degree of 3D consistency, ensuring the coherence of the scene from various perspectives. Finally, we recover the 3D scene from the generated video through a confidence-aware 3D Gaussian Splatting optimization scheme. Extensive experiments on various real-world datasets show the superiority of our ReconX over state-of-the-art methods in terms of quality and generalizability.
Project page: https://liuff19.github.io/ReconX
cs.CV
[ "cs.CV", "cs.AI", "cs.GR" ]
A Score-Based Density Formula, with Applications in Diffusion Generative Models
http://arxiv.org/abs/2408.16765v1
http://arxiv.org/abs/2408.16765v1
http://arxiv.org/pdf/2408.16765v1
2024-08-29
2024-08-29
[ "Gen Li", "Yuling Yan" ]
[ "", "" ]
Score-based generative models (SGMs) have revolutionized the field of generative modeling, achieving unprecedented success in generating realistic and diverse content. Despite empirical advances, the theoretical basis for why optimizing the evidence lower bound (ELBO) on the log-likelihood is effective for training diffusion generative models, such as DDPMs, remains largely unexplored. In this paper, we address this question by establishing a density formula for a continuous-time diffusion process, which can be viewed as the continuous-time limit of the forward process in an SGM. This formula reveals the connection between the target density and the score function associated with each step of the forward process. Building on this, we demonstrate that the minimizer of the optimization objective for training DDPMs nearly coincides with that of the true objective, providing a theoretical foundation for optimizing DDPMs using the ELBO. Furthermore, we offer new insights into the role of score-matching regularization in training GANs, the use of ELBO in diffusion classifiers, and the recently proposed diffusion loss.
cs.LG
[ "cs.LG", "cs.AI", "math.PR", "math.ST", "stat.ML", "stat.TH" ]
Dissecting Out-of-Distribution Detection and Open-Set Recognition: A Critical Analysis of Methods and Benchmarks
http://arxiv.org/abs/2408.16757v1
http://arxiv.org/abs/2408.16757v1
http://arxiv.org/pdf/2408.16757v1
2024-08-29
2024-08-29
[ "Hongjun Wang", "Sagar Vaze", "Kai Han" ]
[ "", "", "" ]
Detecting test-time distribution shift has emerged as a key capability for safely deployed machine learning models, with the question being tackled under various guises in recent years. In this paper, we aim to provide a consolidated view of the two largest sub-fields within the community: out-of-distribution (OOD) detection and open-set recognition (OSR). In particular, we aim to provide rigorous empirical analysis of different methods across settings and provide actionable takeaways for practitioners and researchers. Concretely, we make the following contributions: (i) We perform rigorous cross-evaluation between state-of-the-art methods in the OOD detection and OSR settings and identify a strong correlation between the performances of methods for them; (ii) We propose a new, large-scale benchmark setting which we suggest better disentangles the problem tackled by OOD detection and OSR, re-evaluating state-of-the-art OOD detection and OSR methods in this setting; (iii) We surprisingly find that the best performing method on standard benchmarks (Outlier Exposure) struggles when tested at scale, while scoring rules which are sensitive to the deep feature magnitude consistently show promise; and (iv) We conduct empirical analysis to explain these phenomena and highlight directions for future research. Code: https://github.com/Visual-AI/Dissect-OOD-OSR
Accepted to IJCV, preprint version
cs.CV
[ "cs.CV", "cs.AI" ]
Assessing Large Language Models for Online Extremism Research: Identification, Explanation, and New Knowledge
http://arxiv.org/abs/2408.16749v1
http://arxiv.org/abs/2408.16749v1
http://arxiv.org/pdf/2408.16749v1
2024-08-29
2024-08-29
[ "Beidi Dong", "Jin R. Lee", "Ziwei Zhu", "Balassubramanian Srinivasan" ]
[ "", "", "", "" ]
The United States has experienced a significant increase in violent extremism, prompting the need for automated tools to detect and limit the spread of extremist ideology online. This study evaluates the performance of Bidirectional Encoder Representations from Transformers (BERT) and Generative Pre-Trained Transformers (GPT) in detecting and classifying online domestic extremist posts. We collected social media posts containing "far-right" and "far-left" ideological keywords and manually labeled them as extremist or non-extremist. Extremist posts were further classified into one or more of five contributing elements of extremism based on a working definitional framework. The BERT model's performance was evaluated based on training data size and knowledge transfer between categories. We also compared the performance of GPT 3.5 and GPT 4 models using different prompts: naïve, layperson-definition, role-playing, and professional-definition. Results showed that the best performing GPT models outperformed the best performing BERT models, with more detailed prompts generally yielding better results. However, overly complex prompts may impair performance. Different versions of GPT have unique sensitivities to what they consider extremist. GPT 3.5 performed better at classifying far-left extremist posts, while GPT 4 performed better at classifying far-right extremist posts. Large language models, represented by GPT models, hold significant potential for online extremism classification tasks, surpassing traditional BERT models in a zero-shot setting. Future research should explore human-computer interactions in optimizing GPT models for extremist detection and classification tasks to develop more efficient (e.g., quicker, less effort) and effective (e.g., fewer errors or mistakes) methods for identifying extremist content.
cs.CL
[ "cs.CL", "cs.AI" ]
Smaller, Weaker, Yet Better: Training LLM Reasoners via Compute-Optimal Sampling
http://arxiv.org/abs/2408.16737v1
http://arxiv.org/abs/2408.16737v1
http://arxiv.org/pdf/2408.16737v1
2024-08-29
2024-08-29
[ "Hritik Bansal", "Arian Hosseini", "Rishabh Agarwal", "Vinh Q. Tran", "Mehran Kazemi" ]
[ "", "", "", "", "" ]
Training on high-quality synthetic data from strong language models (LMs) is a common strategy to improve the reasoning performance of LMs. In this work, we revisit whether this strategy is compute-optimal under a fixed inference budget (e.g., FLOPs). To do so, we investigate the trade-offs between generating synthetic data using a stronger but more expensive (SE) model versus a weaker but cheaper (WC) model. We evaluate the generated data across three key metrics: coverage, diversity, and false positive rate, and show that the data from WC models may have higher coverage and diversity, but also exhibit higher false positive rates. We then finetune LMs on data from SE and WC models in different settings: knowledge distillation, self-improvement, and a novel weak-to-strong improvement setup where a weaker LM teaches reasoning to a stronger LM. Our findings reveal that models finetuned on WC-generated data consistently outperform those trained on SE-generated data across multiple benchmarks and multiple choices of WC and SE models. These results challenge the prevailing practice of relying on SE models for synthetic data generation, suggesting that WC may be the compute-optimal approach for training advanced LM reasoners.
cs.CL
[ "cs.CL", "cs.AI" ]
Mini-Omni: Language Models Can Hear, Talk While Thinking in Streaming
http://arxiv.org/abs/2408.16725v1
http://arxiv.org/abs/2408.16725v1
http://arxiv.org/pdf/2408.16725v1
2024-08-29
2024-08-29
[ "Zhifei Xie", "Changqiao Wu" ]
[ "", "" ]
Recent advances in language models have achieved significant progress. GPT-4o, as a new milestone, has enabled real-time conversations with humans, demonstrating near-human natural fluency. Such human-computer interaction necessitates models with the capability to perform reasoning directly with the audio modality and generate streaming output. However, this remains beyond the reach of current academic models, as they typically depend on extra TTS systems for speech synthesis, resulting in undesirable latency. This paper introduces Mini-Omni, an audio-based end-to-end conversational model capable of real-time speech interaction. To achieve this capability, we propose a text-instructed speech generation method, along with batch-parallel strategies during inference to further boost performance. Our method also helps to retain the original model's language capabilities with minimal degradation, enabling other works to establish real-time interaction capabilities. We call this training method "Any Model Can Talk". We also introduce the VoiceAssistant-400K dataset to fine-tune models optimized for speech output. To the best of our knowledge, Mini-Omni is the first fully end-to-end, open-source model for real-time speech interaction, offering valuable potential for future research.
10 pages
cs.AI
[ "cs.AI", "cs.CL", "cs.HC", "cs.LG", "cs.SD", "eess.AS" ]
A GREAT Architecture for Edge-Based Graph Problems Like TSP
http://arxiv.org/abs/2408.16717v1
http://arxiv.org/abs/2408.16717v1
http://arxiv.org/pdf/2408.16717v1
2024-08-29
2024-08-29
[ "Attila Lischka", "Jiaming Wu", "Morteza Haghir Chehreghani", "Balázs Kulcsár" ]
[ "", "", "", "" ]
In recent years, many neural network-based approaches have been proposed to tackle combinatorial optimization problems such as routing problems. Many of these approaches are based on graph neural networks (GNNs) or related transformers, operating on the Euclidean coordinates representing the routing problems. However, GNNs are inherently not well suited to operate on dense graphs, such as those arising in routing problems. Furthermore, models operating on Euclidean coordinates cannot be applied to non-Euclidean versions of routing problems that are often found in real-world settings. To overcome these limitations, we propose a novel GNN-related edge-based neural model called Graph Edge Attention Network (GREAT). We evaluate the performance of GREAT on the edge-classification task of predicting optimal edges in the Traveling Salesman Problem (TSP). Such a trained GREAT model can be used to produce sparse TSP graph instances, keeping only the edges GREAT finds promising. Compared to other, non-learning-based methods for sparsifying TSP graphs, GREAT can produce very sparse graphs while keeping most of the optimal edges. Furthermore, we build a reinforcement learning-based GREAT framework which we apply to Euclidean and non-Euclidean asymmetric TSP. This framework achieves state-of-the-art results.
15 pages, 7 figures
cs.LG
[ "cs.LG", "cs.AI" ]
Jina-ColBERT-v2: A General-Purpose Multilingual Late Interaction Retriever
http://arxiv.org/abs/2408.16672v1
http://arxiv.org/abs/2408.16672v1
http://arxiv.org/pdf/2408.16672v1
2024-08-29
2024-08-29
[ "Rohan Jha", "Bo Wang", "Michael Günther", "Saba Sturua", "Mohammad Kalim Akram", "Han Xiao" ]
[ "", "", "", "", "", "" ]
Multi-vector dense models, such as ColBERT, have proven highly effective in information retrieval. ColBERT's late interaction scoring approximates the joint query-document attention seen in cross-encoders while maintaining inference efficiency closer to traditional dense retrieval models, thanks to its bi-encoder architecture and recent optimizations in indexing and search. In this paper, we introduce several improvements to the ColBERT model architecture and training pipeline, leveraging techniques successful in the more established single-vector embedding model paradigm, particularly those suited for heterogeneous multilingual data. Our new model, Jina-ColBERT-v2, demonstrates strong performance across a range of English and multilingual retrieval tasks, while also cutting storage requirements by up to 50% compared to previous models.
cs.IR
[ "cs.IR", "cs.AI", "cs.CL", "68T50", "I.2.7" ]
Entropic Distribution Matching in Supervised Fine-tuning of LLMs: Less Overfitting and Better Diversity
http://arxiv.org/abs/2408.16673v1
http://arxiv.org/abs/2408.16673v1
http://arxiv.org/pdf/2408.16673v1
2024-08-29
2024-08-29
[ "Ziniu Li", "Congliang Chen", "Tian Xu", "Zeyu Qin", "Jiancong Xiao", "Ruoyu Sun", "Zhi-Quan Luo" ]
[ "", "", "", "", "", "", "" ]
Large language models rely on Supervised Fine-Tuning (SFT) to specialize in downstream tasks. Cross Entropy (CE) loss is the de facto choice in SFT, but it often leads to overfitting and limited output diversity due to its aggressive updates to the data distribution. This paper aims to address these issues by introducing the maximum entropy principle, which favors models with flatter distributions that still effectively capture the data. Specifically, we develop a new distribution matching method called GEM, which solves reverse Kullback-Leibler divergence minimization with an entropy regularizer. For the SFT of Llama-3-8B models, GEM outperforms CE in several aspects. First, when applied to the UltraFeedback dataset to develop general instruction-following abilities, GEM exhibits reduced overfitting, evidenced by lower perplexity and better performance on the IFEval benchmark. Furthermore, GEM enhances output diversity, leading to performance gains of up to 7 points on math reasoning and code generation tasks using best-of-n sampling, even without domain-specific data. Second, when fine-tuning with domain-specific datasets for math reasoning and code generation, GEM also shows less overfitting and improvements of up to 10 points compared with CE.
cs.LG
[ "cs.LG", "cs.AI" ]
Iterative Graph Alignment
http://arxiv.org/abs/2408.16667v1
http://arxiv.org/abs/2408.16667v1
http://arxiv.org/pdf/2408.16667v1
2024-08-29
2024-08-29
[ "Fangyuan Yu", "Hardeep Singh Arora", "Matt Johnson" ]
[ "", "", "" ]
By compressing diverse narratives, LLMs go beyond memorization, achieving intelligence by capturing generalizable causal relationships. However, they suffer from local 'representation gaps' due to insufficient training data diversity, limiting their real-world utility, especially in tasks requiring strict alignment to rules. Traditional alignment methods relying on heavy human annotations are inefficient and unscalable. Recent self-alignment techniques also fall short, as they often depend on self-selection based prompting and memorization-based learning. To address these issues, we introduce Iterative Graph Alignment (IGA), an annotation-free rule-based alignment algorithm. A teacher model (VLM) employs Iterative Graph Prompting (IGP) to create logical graphs and reference answers. The student model (LLM) identifies local knowledge gaps by attempting to align its responses with these references, collaborating with helper models to generate diverse answers. These aligned responses are then used for iterative supervised fine-tuning (SFT). Our evaluations across five rule-based scenarios demonstrate IGP's effectiveness, with a 73.12% alignment improvement in Claude Sonnet 3.5, and Llama3-8B-Instruct achieving an 86.20% improvement, outperforming Claude Sonnet 3.5 in rule-based alignment.
12 pages, 4 figures
cs.LG
[ "cs.LG", "cs.AI", "cs.CL", "cs.MA" ]
DriveGenVLM: Real-world Video Generation for Vision Language Model based Autonomous Driving
http://arxiv.org/abs/2408.16647v1
http://arxiv.org/abs/2408.16647v1
http://arxiv.org/pdf/2408.16647v1
2024-08-29
2024-08-29
[ "Yongjie Fu", "Anmol Jain", "Xuan Di", "Xu Chen", "Zhaobin Mo" ]
[ "", "", "", "", "" ]
The advancement of autonomous driving technologies necessitates increasingly sophisticated methods for understanding and predicting real-world scenarios. Vision language models (VLMs) are emerging as revolutionary tools with significant potential to influence autonomous driving. In this paper, we propose the DriveGenVLM framework to generate driving videos and use VLMs to understand them. To achieve this, we employ a video generation framework grounded in denoising diffusion probabilistic models (DDPM) aimed at predicting real-world video sequences. We then explore the adequacy of our generated videos for use in VLMs by employing a pre-trained model known as Efficient In-context Learning on Egocentric Videos (EILEV). The diffusion model is trained with the Waymo open dataset and evaluated using the Fréchet Video Distance (FVD) score to ensure the quality and realism of the generated videos. Corresponding narrations are provided by EILEV for these generated videos, which may be beneficial in the autonomous driving domain. These narrations can enhance traffic scene understanding, aid in navigation, and improve planning capabilities. The integration of video generation with VLMs in the DriveGenVLM framework represents a significant step forward in leveraging advanced AI models to address complex challenges in autonomous driving.
cs.CV
[ "cs.CV", "cs.AI" ]
RLCP: A Reinforcement Learning-based Copyright Protection Method for Text-to-Image Diffusion Model
http://arxiv.org/abs/2408.16634v1
http://arxiv.org/abs/2408.16634v1
http://arxiv.org/pdf/2408.16634v1
2024-08-29
2024-08-29
[ "Zhuan Shi", "Jing Yan", "Xiaoli Tang", "Lingjuan Lyu", "Boi Faltings" ]
[ "", "", "", "", "" ]
The increasing sophistication of text-to-image generative models has led to complex challenges in defining and enforcing copyright infringement criteria and protection. Existing methods, such as watermarking and dataset deduplication, fail to provide comprehensive solutions due to the lack of standardized metrics and the inherent complexity of addressing copyright infringement in diffusion models. To deal with these challenges, we propose a Reinforcement Learning-based Copyright Protection (RLCP) method for text-to-image diffusion models, which minimizes the generation of copyright-infringing content while maintaining the quality of the model-generated dataset. Our approach begins with the introduction of a novel copyright metric grounded in copyright law and court precedents on infringement. We then utilize the Denoising Diffusion Policy Optimization (DDPO) framework to guide the model through a multi-step decision-making process, optimizing it using a reward function that incorporates our proposed copyright metric. Additionally, we employ KL divergence as a regularization term to mitigate some failure modes and stabilize RL fine-tuning. Experiments conducted on 3 mixed datasets of copyright and non-copyright images demonstrate that our approach significantly reduces copyright infringement risk while maintaining image quality.
arXiv admin note: text overlap with arXiv:2403.12052 by other authors
cs.CY
[ "cs.CY", "cs.AI", "cs.CR" ]
Optimizing Automated Picking Systems in Warehouse Robots Using Machine Learning
http://arxiv.org/abs/2408.16633v1
http://arxiv.org/abs/2408.16633v1
http://arxiv.org/pdf/2408.16633v1
2024-08-29
2024-08-29
[ "Keqin Li", "Jin Wang", "Xubo Wu", "Xirui Peng", "Runmian Chang", "Xiaoyu Deng", "Yiwen Kang", "Yue Yang", "Fanghao Ni", "Bo Hong" ]
[ "", "", "", "", "", "", "", "", "", "" ]
With the rapid growth of global e-commerce, the demand for automation in the logistics industry is increasing. This study focuses on automated picking systems in warehouses, utilizing deep learning and reinforcement learning technologies to enhance picking efficiency and accuracy while reducing system failure rates. Through empirical analysis, we demonstrate the effectiveness of these technologies in improving robot picking performance and adaptability to complex environments. The results show that the integrated machine learning model significantly outperforms traditional methods, effectively addressing the challenges of peak order processing, reducing operational errors, and improving overall logistics efficiency. Additionally, by analyzing environmental factors, this study further optimizes system design to ensure efficient and stable operation under variable conditions. This research not only provides innovative solutions for logistics automation but also offers a theoretical and empirical foundation for future technological development and application.
cs.RO
[ "cs.RO", "cs.AI" ]
Maelstrom Networks
http://arxiv.org/abs/2408.16632v1
http://arxiv.org/abs/2408.16632v1
http://arxiv.org/pdf/2408.16632v1
2024-08-29
2024-08-29
[ "Matthew Evanusa", "Cornelia Fermüller", "Yiannis Aloimonos" ]
[ "", "", "" ]
Artificial neural network research has struggled to devise a way to incorporate working memory into neural networks. While "long term" memory can be seen as the learned weights, working memory likely consists more of dynamical activity, which is missing from feed-forward models. Current state-of-the-art models such as transformers tend to "solve" this by ignoring working memory entirely and simply processing the sequence as one entire piece of data; however, this means the network cannot process the sequence in an online fashion and leads to an immense explosion in memory requirements. Here, inspired by a combination of controls, reservoir computing, deep learning, and recurrent neural networks, we offer an alternative paradigm that combines the strength of recurrent networks with the pattern-matching capability of feed-forward neural networks, which we call the Maelstrom Networks paradigm. This paradigm leaves the recurrent component - the Maelstrom - unlearned and offloads the learning to a powerful feed-forward network. This allows the network to leverage the strength of feed-forward training without unrolling the network, and allows the memory to be implemented in new neuromorphic hardware. It endows a neural network with a sequential memory that takes advantage of the inductive bias that data is organized causally in the temporal domain, and imbues the network with a state that represents the agent's "self", moving through the environment. This could also lead the way to continual learning, with the network modularized and "protected" from overwrites that come with new data. In addition to helping solve the performance problems that plague current non-temporal deep networks, this could finally lead toward endowing artificial networks with a sense of "self".
cs.NE
[ "cs.NE", "cs.AI" ]
LLMs generate structurally realistic social networks but overestimate political homophily
http://arxiv.org/abs/2408.16629v1
http://arxiv.org/abs/2408.16629v1
http://arxiv.org/pdf/2408.16629v1
2024-08-29
2024-08-29
[ "Serina Chang", "Alicja Chaszczewicz", "Emma Wang", "Maya Josifovska", "Emma Pierson", "Jure Leskovec" ]
[ "", "", "", "", "", "" ]
Generating social networks is essential for many applications, such as epidemic modeling and social simulations. Prior approaches either involve deep learning models, which require many observed networks for training, or stylized models, which are limited in their realism and flexibility. In contrast, LLMs offer the potential for zero-shot and flexible network generation. However, two key questions arise: (1) are the networks LLMs generate realistic, and (2) what are the risks of bias, given the importance of demographics in forming social ties? To answer these questions, we develop three prompting methods for network generation and compare the generated networks to real social networks. We find that more realistic networks are generated with "local" methods, where the LLM constructs relations for one persona at a time, compared to "global" methods that construct the entire network at once. We also find that the generated networks match real networks on many characteristics, including density, clustering, community structure, and degree. However, we find that LLMs emphasize political homophily over all other types of homophily and overestimate political homophily relative to real-world measures.
cs.CY
[ "cs.CY", "cs.AI", "cs.SI" ]
Towards Infusing Auxiliary Knowledge for Distracted Driver Detection
http://arxiv.org/abs/2408.16621v1
http://arxiv.org/abs/2408.16621v1
http://arxiv.org/pdf/2408.16621v1
2024-08-29
2024-08-29
[ "Ishwar B Balappanawar", "Ashmit Chamoli", "Ruwan Wickramarachchi", "Aditya Mishra", "Ponnurangam Kumaraguru", "Amit P. Sheth" ]
[ "", "", "", "", "", "" ]
Distracted driving is a leading cause of road accidents globally. Identification of distracted driving involves reliably detecting and classifying various forms of driver distraction (e.g., texting, eating, or using in-car devices) from in-vehicle camera feeds to enhance road safety. This task is challenging due to the need for robust models that can generalize to a diverse set of driver behaviors without requiring extensive annotated datasets. In this paper, we propose KiD3, a novel method for distracted driver detection (DDD) by infusing auxiliary knowledge about semantic relations between entities in a scene and the structural configuration of the driver's pose. Specifically, we construct a unified framework that integrates scene graphs and driver pose information with the visual cues in video frames to create a holistic representation of the driver's actions. Our results indicate that KiD3 achieves a 13.64% accuracy improvement over the vision-only baseline by incorporating such auxiliary knowledge with visual information.
Accepted at KiL 2024: Workshop on Knowledge-infused Learning co-located with 30th ACM KDD Conference
cs.CV
[ "cs.CV", "cs.AI", "cs.LG", "I.2.0" ]
Hyperdimensional Vector Tsetlin Machines with Applications to Sequence Learning and Generation
http://arxiv.org/abs/2408.16620v1
http://arxiv.org/abs/2408.16620v1
http://arxiv.org/pdf/2408.16620v1
2024-08-29
2024-08-29
[ "Christian D. Blakely" ]
[ "" ]
We construct a two-layered model for learning and generating sequential data that is both computationally fast and competitive with vanilla Tsetlin machines, adding numerous advantages. Through the use of hyperdimensional vector computing (HVC) algebras and Tsetlin machine clause structures, we demonstrate that the combination of both inherits the generality of data encoding and decoding of HVC with the fast interpretable nature of Tsetlin machines to yield a powerful machine learning model. We apply the approach in two areas: forecasting and generating new sequences, and classification. For the latter, we derive results for the entire UCR Time Series Archive and compare with the standard benchmarks to see how well the method competes in time series classification.
cs.LG
[ "cs.LG", "cs.AI" ]
Examination of Code generated by Large Language Models
http://arxiv.org/abs/2408.16601v1
http://arxiv.org/abs/2408.16601v1
http://arxiv.org/pdf/2408.16601v1
2024-08-29
2024-08-29
[ "Robin Beer", "Alexander Feix", "Tim Guttzeit", "Tamara Muras", "Vincent Müller", "Maurice Rauscher", "Florian Schäffler", "Welf Löwe" ]
[ "", "", "", "", "", "", "", "" ]
Large language models (LLMs), such as ChatGPT and Copilot, are transforming software development by automating code generation and, arguably, enabling rapid prototyping, supporting education, and boosting productivity. Therefore, the correctness and quality of the generated code should be on par with manually written code. To assess the current state of LLMs in generating correct code of high quality, we conducted controlled experiments with ChatGPT and Copilot: we let the LLMs generate simple algorithms in Java and Python along with the corresponding unit tests and assessed the correctness and the quality (coverage) of the generated (test) code. We observed significant differences between the LLMs, between the languages, between algorithm and test code, and over time. The present paper reports these results together with the experimental methods allowing repeated and comparable assessments for more algorithms, languages, and LLMs over time.
cs.SE
[ "cs.SE", "cs.AI", "I.2.2" ]
Enhancing Dialogue Generation in Werewolf Game Through Situation Analysis and Persuasion Strategies
http://arxiv.org/abs/2408.16586v1
http://arxiv.org/abs/2408.16586v1
http://arxiv.org/pdf/2408.16586v1
2024-08-29
2024-08-29
[ "Zhiyang Qi", "Michimasa Inaba" ]
[ "", "" ]
Recent advancements in natural language processing, particularly with large language models (LLMs) like GPT-4, have significantly enhanced dialogue systems, enabling them to generate more natural and fluent conversations. Despite these improvements, challenges persist, such as managing continuous dialogues, memory retention, and minimizing hallucinations. AIWolfDial2024 addresses these challenges by employing the Werewolf Game, an incomplete information game, to test the capabilities of LLMs in complex interactive environments. This paper introduces an LLM-based Werewolf Game AI, where each role is supported by situation analysis to aid response generation. Additionally, for the werewolf role, various persuasion strategies, including logical appeal, credibility appeal, and emotional appeal, are employed to effectively persuade other players to align with its actions.
Accepted to the AIWolfDial2024 workshop at INLG 2024
cs.CL
[ "cs.CL", "cs.AI" ]
Seeking the Sufficiency and Necessity Causal Features in Multimodal Representation Learning
http://arxiv.org/abs/2408.16577v1
http://arxiv.org/abs/2408.16577v1
http://arxiv.org/pdf/2408.16577v1
2024-08-29
2024-08-29
[ "Boyu Chen", "Junjie Liu", "Zhu Li", "Mengyue yang" ]
[ "", "", "", "" ]
Learning representations with a high Probability of Necessary and Sufficient Causes (PNS) has been shown to enhance the capabilities of deep learning models. This task involves identifying causal features that are both sufficient (guaranteeing the outcome) and necessary (without which the outcome cannot occur). However, current research predominantly focuses on unimodal data, and extending PNS learning to multimodal settings presents significant challenges. The challenges arise as the conditions for PNS identifiability, Exogeneity and Monotonicity, need to be reconsidered in a multimodal context, where sufficient and necessary causal features are distributed across different modalities. To address this, we first propose conceptualizing multimodal representations as comprising modality-invariant and modality-specific components. We then analyze PNS identifiability for each component, while ensuring non-trivial PNS estimation. Finally, we formulate tractable optimization objectives that enable multimodal models to learn high-PNS representations, thereby enhancing their predictive performance. Experiments demonstrate the effectiveness of our method on both synthetic and real-world data.
cs.LG
[ "cs.LG", "cs.AI" ]
SFR-GNN: Simple and Fast Robust GNNs against Structural Attacks
http://arxiv.org/abs/2408.16537v1
http://arxiv.org/abs/2408.16537v1
http://arxiv.org/pdf/2408.16537v1
2024-08-29
2024-08-29
[ "Xing Ai", "Guanyu Zhu", "Yulin Zhu", "Yu Zheng", "Gaolei Li", "Jianhua Li", "Kai Zhou" ]
[ "", "", "", "", "", "", "" ]
Graph Neural Networks (GNNs) have demonstrated commendable performance for graph-structured data. Yet, GNNs are often vulnerable to adversarial structural attacks as embedding generation relies on graph topology. Existing efforts are dedicated to purifying the maliciously modified structure or applying adaptive aggregation, thereby enhancing the robustness against adversarial structural attacks. Because a defender lacks prior knowledge of which structures were modified, such defenses inevitably incur heavy computational costs. To this end, we propose an efficient defense method, called Simple and Fast Robust Graph Neural Network (SFR-GNN), supported by mutual information theory. The SFR-GNN first pre-trains a GNN model using node attributes and then fine-tunes it over the modified graph in the manner of contrastive learning, which is free of purifying modified structures and adaptive aggregation, thus achieving great efficiency gains. Consequently, SFR-GNN exhibits a 24%–162% speedup compared to advanced robust models, demonstrating superior robustness for node classification tasks.
cs.LG
[ "cs.LG", "cs.AI" ]
Adaptive Variational Continual Learning via Task-Heuristic Modelling
http://arxiv.org/abs/2408.16517v1
http://arxiv.org/abs/2408.16517v1
http://arxiv.org/pdf/2408.16517v1
2024-08-29
2024-08-29
[ "Fan Yang" ]
[ "" ]
Variational continual learning (VCL) is a turn-key learning algorithm that has state-of-the-art performance among the best continual learning models. In our work, we explore an extension of the generalized variational continual learning (GVCL) model, named AutoVCL, which combines task heuristics for informed learning and model optimization. We demonstrate that our model outperforms the standard GVCL with fixed hyperparameters, benefiting from the automatic adjustment of the hyperparameter based on the difficulty and similarity of the incoming task compared to the previous tasks.
4 pages, 2 figures, 3 tables
cs.LG
[ "cs.LG", "cs.AI" ]
On-device AI: Quantization-aware Training of Transformers in Time-Series
http://arxiv.org/abs/2408.16495v1
http://arxiv.org/abs/2408.16495v1
http://arxiv.org/pdf/2408.16495v1
2024-08-29
2024-08-29
[ "Tianheng Ling", "Gregor Schiele" ]
[ "", "" ]
Artificial Intelligence (AI) models for time-series in pervasive computing keep getting larger and more complicated. The Transformer model is by far the most compelling of these AI models. However, it is difficult to obtain the desired performance when deploying such a massive model on a sensor device with limited resources. My research focuses on optimizing the Transformer model for time-series forecasting tasks. The optimized model will be deployed as hardware accelerators on embedded Field Programmable Gate Arrays (FPGAs). I will investigate the impact of applying Quantization-aware Training to the Transformer model to reduce its size and runtime memory footprint while maximizing the advantages of FPGAs.
This paper is accepted by the 2023 IEEE International Conference on Pervasive Computing and Communications (PhD Forum)
10.1109/PerComWorkshops56833.2023.10150339
cs.LG
[ "cs.LG", "cs.AI" ]
Integrating Features for Recognizing Human Activities through Optimized Parameters in Graph Convolutional Networks and Transformer Architectures
http://arxiv.org/abs/2408.16442v1
http://arxiv.org/abs/2408.16442v1
http://arxiv.org/pdf/2408.16442v1
2024-08-29
2024-08-29
[ "Mohammad Belal", "Taimur Hassan", "Abdelfatah Hassan", "Nael Alsheikh", "Noureldin Elhendawi", "Irfan Hussain" ]
[ "", "", "", "", "", "" ]
Human activity recognition is a major field of study that employs computer vision, machine vision, and deep learning techniques to categorize human actions. The field of deep learning has made significant progress, with architectures that are extremely effective at capturing human dynamics. This study emphasizes the influence of feature fusion on the accuracy of activity recognition. This technique addresses the limitation of conventional models, which struggle to identify activities because of their limited capacity to capture spatial and temporal features. The technique employs sensory data obtained from four publicly available datasets: HuGaDB, PKU-MMD, LARa, and TUG. The accuracy and F1-score of two deep learning models, specifically a Transformer model and a Parameter-Optimized Graph Convolutional Network (PO-GCN), were evaluated using these datasets. The feature fusion technique integrated the final-layer features from both models and fed them into a classifier. Empirical evidence demonstrates that PO-GCN outperforms standard models in activity recognition. HuGaDB demonstrated a 2.3% improvement in accuracy and a 2.2% increase in F1-score. TUG showed a 5% increase in accuracy and a 0.5% rise in F1-score. On the other hand, LARa and PKU-MMD achieved lower accuracies of 64% and 69%, respectively. This indicates that the integration of features enhanced the performance of both the Transformer model and PO-GCN.
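A minimal sketch of the late feature-fusion step described above, assuming each backbone (e.g., the Transformer and the PO-GCN) exposes a final-layer feature vector; all dimensions are placeholders.

```python
# Sketch: concatenate final-layer features from two backbones and feed the
# fused vector into a classifier head.
import torch
import torch.nn as nn

class FusionClassifier(nn.Module):
    def __init__(self, dim_a: int, dim_b: int, n_classes: int):
        super().__init__()
        self.head = nn.Linear(dim_a + dim_b, n_classes)

    def forward(self, feat_a: torch.Tensor, feat_b: torch.Tensor):
        fused = torch.cat([feat_a, feat_b], dim=-1)  # late feature fusion
        return self.head(fused)

clf = FusionClassifier(dim_a=256, dim_b=128, n_classes=10)
logits = clf(torch.randn(32, 256), torch.randn(32, 128))  # batch of 32
```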
6 pages, 1 figure, conference
cs.CV
[ "cs.CV", "cs.AI", "cs.RO" ]
Gradient-free variational learning with conditional mixture networks
http://arxiv.org/abs/2408.16429v1
http://arxiv.org/abs/2408.16429v1
http://arxiv.org/pdf/2408.16429v1
2024-08-29
2024-08-29
[ "Conor Heins", "Hao Wu", "Dimitrije Markovic", "Alexander Tschantz", "Jeff Beck", "Christopher Buckley" ]
[ "", "", "", "", "", "" ]
Balancing computational efficiency with robust predictive performance is crucial in supervised learning, especially for critical applications. Standard deep learning models, while accurate and scalable, often lack probabilistic features like calibrated predictions and uncertainty quantification. Bayesian methods address these issues but can be computationally expensive as model and data complexity increase. Previous work shows that fast variational methods can reduce the compute requirements of Bayesian methods by eliminating the need for gradient computation or sampling, but are often limited to simple models. We demonstrate that conditional mixture networks (CMNs), a probabilistic variant of the mixture-of-experts (MoE) model, are suitable for fast, gradient-free inference and can solve complex classification tasks. CMNs employ linear experts and a softmax gating network. By exploiting conditional conjugacy and P\'olya-Gamma augmentation, we furnish Gaussian likelihoods for the weights of both the linear experts and the gating network. This enables efficient variational updates using coordinate ascent variational inference (CAVI), avoiding traditional gradient-based optimization. We validate this approach by training two-layer CMNs on standard benchmarks from the UCI repository. Our method, CAVI-CMN, achieves competitive and often superior predictive accuracy compared to maximum likelihood estimation (MLE) with backpropagation, while maintaining competitive runtime and full posterior distributions over all model parameters. Moreover, as input size or the number of experts increases, computation time scales competitively with MLE and other gradient-based solutions like black-box variational inference (BBVI), making CAVI-CMN a promising tool for deep, fast, and gradient-free Bayesian networks.
16 pages main text (3 figures), including references. 9 pages supplementary material (5 figures)
cs.LG
[ "cs.LG", "cs.AI", "stat.ML" ]
COIN: Control-Inpainting Diffusion Prior for Human and Camera Motion Estimation
http://arxiv.org/abs/2408.16426v1
http://arxiv.org/abs/2408.16426v1
http://arxiv.org/pdf/2408.16426v1
2024-08-29
2024-08-29
[ "Jiefeng Li", "Ye Yuan", "Davis Rempe", "Haotian Zhang", "Pavlo Molchanov", "Cewu Lu", "Jan Kautz", "Umar Iqbal" ]
[ "", "", "", "", "", "", "", "" ]
Estimating global human motion from moving cameras is challenging due to the entanglement of human and camera motions. To mitigate the ambiguity, existing methods leverage learned human motion priors, which however often result in oversmoothed motions with misaligned 2D projections. To tackle this problem, we propose COIN, a control-inpainting motion diffusion prior that enables fine-grained control to disentangle human and camera motions. Although pre-trained motion diffusion models encode rich motion priors, we find it non-trivial to leverage such knowledge to guide global motion estimation from RGB videos. COIN introduces a novel control-inpainting score distillation sampling method to ensure well-aligned, consistent, and high-quality motion from the diffusion prior within a joint optimization framework. Furthermore, we introduce a new human-scene relation loss to alleviate the scale ambiguity by enforcing consistency among the humans, camera, and scene. Experiments on three challenging benchmarks demonstrate the effectiveness of COIN, which outperforms the state-of-the-art methods in terms of global human motion estimation and camera motion estimation. As an illustrative example, COIN outperforms the state-of-the-art method by 33% in world joint position error (W-MPJPE) on the RICH dataset.
ECCV 2024
cs.CV
[ "cs.CV", "cs.AI" ]
Fourier Spectral Physics Informed Neural Network: An Efficient and Low-Memory PINN
http://arxiv.org/abs/2408.16414v1
http://arxiv.org/abs/2408.16414v1
http://arxiv.org/pdf/2408.16414v1
2024-08-29
2024-08-29
[ "Tianchi Yu", "Yiming Qi", "Ivan Oseledets", "Shiyi Chen" ]
[ "", "", "", "" ]
With growing investigations into solving partial differential equations by physics-informed neural networks (PINNs), more accurate and efficient PINNs are required to meet the practical demands of scientific computing. One bottleneck of current PINNs is computing the high-order derivatives via automatic differentiation, which often necessitates substantial computing resources. In this paper, we focus on removing the automatic differentiation of the spatial derivatives and propose a spectral-based neural network that substitutes the differential operator with a multiplication. Compared to standard PINNs, our approach requires less memory and shorter training time. Thanks to the exponential convergence of the spectral basis, our approach is also more accurate. Moreover, to handle the different situations between the physics domain and the spectral domain, we provide two strategies to train networks by their spectral information. Through a series of comprehensive experiments, we validate the aforementioned merits of our proposed network.
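The core substitution described above, differentiation as multiplication in Fourier space, can be sketched in a few lines of NumPy for a periodic 1D function; the surrounding network is omitted.

```python
# Sketch: spectral differentiation in place of automatic differentiation.
# For a periodic function on [0, 2*pi), d/dx corresponds to multiplication
# by i*k in Fourier space.
import numpy as np

n = 128
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
u = np.sin(3.0 * x)

k = 2.0 * np.pi * np.fft.fftfreq(n, d=x[1] - x[0])  # angular wavenumbers
du_dx = np.fft.ifft(1j * k * np.fft.fft(u)).real    # spectral derivative

assert np.allclose(du_dx, 3.0 * np.cos(3.0 * x), atol=1e-10)
```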
cs.LG
[ "cs.LG", "cs.AI", "cs.NA", "math.NA", "physics.comp-ph" ]
DetectBERT: Towards Full App-Level Representation Learning to Detect Android Malware
http://arxiv.org/abs/2408.16353v1
http://arxiv.org/abs/2408.16353v1
http://arxiv.org/pdf/2408.16353v1
2024-08-29
2024-08-29
[ "Tiezhu Sun", "Nadia Daoudi", "Kisub Kim", "Kevin Allix", "Tegawendé F. Bissyandé", "Jacques Klein" ]
[ "", "", "", "", "", "" ]
Recent advancements in ML and DL have significantly improved Android malware detection, yet many methodologies still rely on basic static analysis, bytecode, or function call graphs that often fail to capture complex malicious behaviors. DexBERT, a pre-trained BERT-like model tailored for Android representation learning, enriches class-level representations by analyzing Smali code extracted from APKs. However, its functionality is constrained by its inability to process multiple Smali classes simultaneously. This paper introduces DetectBERT, which integrates correlated Multiple Instance Learning (c-MIL) with DexBERT to handle the high dimensionality and variability of Android malware, enabling effective app-level detection. By treating class-level features as instances within MIL bags, DetectBERT aggregates these into a comprehensive app-level representation. Our evaluation demonstrates that DetectBERT not only surpasses existing state-of-the-art detection methods but also adapts to evolving malware threats. Moreover, the versatility of the DetectBERT framework holds promising potential for broader applications in app-level analysis and other software engineering tasks, offering new avenues for research and development.
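A minimal sketch of the bag-level aggregation idea: per-class embeddings are treated as instances and pooled into one app-level vector with attention-based MIL. The dimensions and the exact c-MIL formulation used by DetectBERT are assumptions, not the paper's code.

```python
# Sketch: attention-based Multiple Instance Learning pooling over Smali
# class embeddings, producing a single app-level representation.
import torch
import torch.nn as nn

class AttentionMILPooling(nn.Module):
    def __init__(self, dim: int, hidden: int = 128):
        super().__init__()
        self.score = nn.Sequential(
            nn.Linear(dim, hidden), nn.Tanh(), nn.Linear(hidden, 1)
        )

    def forward(self, instances: torch.Tensor) -> torch.Tensor:
        # instances: (num_classes_in_apk, dim), one embedding per Smali class
        weights = torch.softmax(self.score(instances), dim=0)  # (N, 1)
        return (weights * instances).sum(dim=0)                # app-level vector

bag = torch.randn(57, 768)  # e.g., 57 class-level embeddings from DexBERT
app_repr = AttentionMILPooling(768)(bag)
```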
Accepted at ESEM 2024
cs.SE
[ "cs.SE", "cs.AI", "cs.CR" ]
Toward Robust Early Detection of Alzheimer's Disease via an Integrated Multimodal Learning Approach
http://arxiv.org/abs/2408.16343v1
http://arxiv.org/abs/2408.16343v1
http://arxiv.org/pdf/2408.16343v1
2024-08-29
2024-08-29
[ "Yifei Chen", "Shenghao Zhu", "Zhaojie Fang", "Chang Liu", "Binfeng Zou", "Yuhe Wang", "Shuo Chang", "Fan Jia", "Feiwei Qin", "Jin Fan", "Yong Peng", "Changmiao Wang" ]
[ "", "", "", "", "", "", "", "", "", "", "", "" ]
Alzheimer's Disease (AD) is a complex neurodegenerative disorder marked by memory loss, executive dysfunction, and personality changes. Early diagnosis is challenging due to subtle symptoms and varied presentations, often leading to misdiagnosis with traditional unimodal diagnostic methods due to their limited scope. This study introduces an advanced multimodal classification model that integrates clinical, cognitive, neuroimaging, and EEG data to enhance diagnostic accuracy. The model incorporates a feature tagger with a tabular data coding architecture and utilizes the TimesBlock module to capture intricate temporal patterns in Electroencephalograms (EEG) data. By employing Cross-modal Attention Aggregation module, the model effectively fuses Magnetic Resonance Imaging (MRI) spatial information with EEG temporal data, significantly improving the distinction between AD, Mild Cognitive Impairment, and Normal Cognition. Simultaneously, we have constructed the first AD classification dataset that includes three modalities: EEG, MRI, and tabular data. Our innovative approach aims to facilitate early diagnosis and intervention, potentially slowing the progression of AD. The source code and our private ADMC dataset are available at https://github.com/JustlfC03/MSTNet.
5 pages, 2 figures
cs.CV
[ "cs.CV", "cs.AI" ]
Self-Improving Diffusion Models with Synthetic Data
http://arxiv.org/abs/2408.16333v1
http://arxiv.org/abs/2408.16333v1
http://arxiv.org/pdf/2408.16333v1
2024-08-29
2024-08-29
[ "Sina Alemohammad", "Ahmed Imtiaz Humayun", "Shruti Agarwal", "John Collomosse", "Richard Baraniuk" ]
[ "", "", "", "", "" ]
The artificial intelligence (AI) world is running out of real data for training increasingly large generative models, resulting in accelerating pressure to train on synthetic data. Unfortunately, training new generative models with synthetic data from current or past generation models creates an autophagous (self-consuming) loop that degrades the quality and/or diversity of the synthetic data in what has been termed model autophagy disorder (MAD) and model collapse. Current thinking around model autophagy recommends that synthetic data is to be avoided for model training lest the system deteriorate into MADness. In this paper, we take a different tack that treats synthetic data differently from real data. Self-IMproving diffusion models with Synthetic data (SIMS) is a new training concept for diffusion models that uses self-synthesized data to provide negative guidance during the generation process to steer a model's generative process away from the non-ideal synthetic data manifold and towards the real data distribution. We demonstrate that SIMS is capable of self-improvement; it establishes new records based on the Fr\'echet inception distance (FID) metric for CIFAR-10 and ImageNet-64 generation and achieves competitive results on FFHQ-64 and ImageNet-512. Moreover, SIMS is, to the best of our knowledge, the first prophylactic generative AI algorithm that can be iteratively trained on self-generated synthetic data without going MAD. As a bonus, SIMS can adjust a diffusion model's synthetic data distribution to match any desired in-domain target distribution to help mitigate biases and ensure fairness.
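A sketch of the negative-guidance mechanism the abstract describes, written in the style of classifier-free guidance; the exact SIMS update rule is an assumption here, with `eps_base` standing for the denoiser trained on real data and `eps_synth` for the one adapted on self-generated data.

```python
# Sketch: steering sampling away from the synthetic-data manifold by using
# self-synthesized data as a negative guide (assumed guidance form).
import torch

def sims_guided_noise(eps_base: torch.Tensor,
                      eps_synth: torch.Tensor,
                      w: float = 1.5) -> torch.Tensor:
    # Push away from the synthetic prediction, toward the real distribution.
    return eps_base + w * (eps_base - eps_synth)

eps_base = torch.randn(4, 3, 64, 64)   # denoiser output, real-data model
eps_synth = torch.randn(4, 3, 64, 64)  # denoiser output, synthetic-data model
eps = sims_guided_noise(eps_base, eps_synth)
```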
cs.LG
[ "cs.LG", "cs.AI" ]
Guided Reasoning: A Non-Technical Introduction
http://arxiv.org/abs/2408.16331v1
http://arxiv.org/abs/2408.16331v1
http://arxiv.org/pdf/2408.16331v1
2024-08-29
2024-08-29
[ "Gregor Betz" ]
[ "" ]
We introduce the concept and a default implementation of Guided Reasoning. A multi-agent system is a Guided Reasoning system iff one agent (the guide) primarily interacts with other agents in order to improve reasoning quality. We describe Logikon's default implementation of Guided Reasoning in non-technical terms. This is a living document we'll gradually enrich with more detailed information and examples. Code: https://github.com/logikon-ai/logikon
cs.AI
[ "cs.AI", "cs.HC" ]
FA-YOLO: Research On Efficient Feature Selection YOLO Improved Algorithm Based On FMDS and AGMF Modules
http://arxiv.org/abs/2408.16313v1
http://arxiv.org/abs/2408.16313v1
http://arxiv.org/pdf/2408.16313v1
2024-08-29
2024-08-29
[ "Yukang Huo", "Mingyuan Yao", "Qingbin Tian", "Tonghao Wang", "Ruifeng Wang", "Haihua Wang" ]
[ "", "", "", "", "", "" ]
Over the past few years, the YOLO series of models has emerged as one of the dominant methodologies in the realm of object detection. Many studies have advanced these baseline models by modifying their architectures, enhancing data quality, and developing new loss functions. However, current models still exhibit deficiencies in processing feature maps, such as overlooking the fusion of cross-scale features and a static fusion approach that lacks the capability for dynamic feature adjustment. To address these issues, this paper introduces an efficient Fine-grained Multi-scale Dynamic Selection Module (FMDS Module), which applies a more effective dynamic feature selection and fusion method on fine-grained multi-scale feature maps, significantly enhancing the detection accuracy of small, medium, and large-sized targets in complex environments. Furthermore, this paper proposes an Adaptive Gated Multi-branch Focus Fusion Module (AGMF Module), which utilizes multiple parallel branches to perform complementary fusion of various features captured by the gated unit branch, FMDS Module branch, and TripletAttention branch. This approach further enhances the comprehensiveness, diversity, and integrity of feature fusion. This paper has integrated the FMDS Module, AGMF Module, into Yolov9 to develop a novel object detection model named FA-YOLO. Extensive experimental results show that under identical experimental conditions, FA-YOLO achieves an outstanding 66.1% mean Average Precision (mAP) on the PASCAL VOC 2007 dataset, representing 1.0% improvement over YOLOv9's 65.1%. Additionally, the detection accuracies of FA-YOLO for small, medium, and large targets are 44.1%, 54.6%, and 70.8%, respectively, showing improvements of 2.0%, 3.1%, and 0.9% compared to YOLOv9's 42.1%, 51.5%, and 69.9%.
11 pages and 4 figures
cs.CV
[ "cs.CV", "cs.AI" ]
Safe Bayesian Optimization for High-Dimensional Control Systems via Additive Gaussian Processes
http://arxiv.org/abs/2408.16307v1
http://arxiv.org/abs/2408.16307v1
http://arxiv.org/pdf/2408.16307v1
2024-08-29
2024-08-29
[ "Hongxuan Wang", "Xiaocong Li", "Adrish Bhaumik", "Prahlad Vadakkepat" ]
[ "", "", "", "" ]
Controller tuning and optimization have been among the most fundamental problems in robotics and mechatronic systems. The traditional methodology is usually model-based, but its performance heavily relies on an accurate mathematical model of the system. In control applications with complex dynamics, obtaining a precise model is often challenging, leading us towards a data-driven approach. While optimizing a single controller has been explored by various researchers, it remains a challenge to obtain the optimal controller parameters safely and efficiently when multiple controllers are involved. In this paper, we propose a high-dimensional safe Bayesian optimization method based on additive Gaussian processes to optimize multiple controllers simultaneously and safely. Additive Gaussian kernels replace the traditional squared-exponential kernels or Mat\'ern kernels, enhancing the efficiency with which Gaussian processes update information on unknown functions. Experimental results on a permanent magnet synchronous motor (PMSM) demonstrate that compared to existing safe Bayesian optimization algorithms, our method can obtain optimal parameters more efficiently while ensuring safety.
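The kernel substitution at the heart of the method can be sketched directly: an additive kernel is a sum of one-dimensional RBF kernels, one per controller parameter. The safe-Bayesian-optimization machinery around it is omitted, and the shared lengthscale is a simplifying assumption.

```python
# Sketch: additive RBF kernel, k(x, x') = sum_i k_i(x_i, x'_i), replacing a
# single squared-exponential kernel over all dimensions.
import numpy as np

def additive_rbf_kernel(X1: np.ndarray, X2: np.ndarray,
                        lengthscale: float = 1.0) -> np.ndarray:
    """k(x, x') = sum_i exp(-(x_i - x'_i)^2 / (2 * lengthscale^2))."""
    diff = X1[:, None, :] - X2[None, :, :]             # (n1, n2, d)
    return np.exp(-0.5 * (diff / lengthscale) ** 2).sum(axis=-1)

X = np.random.rand(20, 8)                # 8 controller parameters, 20 points
K = additive_rbf_kernel(X, X)            # (20, 20) Gram matrix
assert np.allclose(K, K.T)               # symmetric, as a kernel must be
```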
cs.RO
[ "cs.RO", "cs.AI" ]
Physics of Language Models: Part 2.2, How to Learn From Mistakes on Grade-School Math Problems
http://arxiv.org/abs/2408.16293v1
http://arxiv.org/abs/2408.16293v1
http://arxiv.org/pdf/2408.16293v1
2024-08-29
2024-08-29
[ "Tian Ye", "Zicheng Xu", "Yuanzhi Li", "Zeyuan Allen-Zhu" ]
[ "", "", "", "" ]
Language models have demonstrated remarkable performance in solving reasoning tasks; however, even the strongest models still occasionally make reasoning mistakes. Recently, there has been active research aimed at improving reasoning accuracy, particularly by using pretrained language models to "self-correct" their mistakes via multi-round prompting. In this paper, we follow this line of work but focus on understanding the usefulness of incorporating "error-correction" data directly into the pretraining stage. This data consists of erroneous solution steps immediately followed by their corrections. Using a synthetic math dataset, we show promising results: this type of pretrain data can help language models achieve higher reasoning accuracy directly (i.e., through simple auto-regression, without multi-round prompting) compared to pretraining on the same amount of error-free data. We also delve into many details, such as (1) how this approach differs from beam search, (2) how such data can be prepared, (3) whether masking is needed on the erroneous tokens, (4) the amount of error required, (5) whether such data can be deferred to the fine-tuning stage, and many others.
arXiv admin note: text overlap with arXiv:2407.20311
cs.CL
[ "cs.CL", "cs.AI", "cs.LG" ]
OpenFGL: A Comprehensive Benchmark for Federated Graph Learning
http://arxiv.org/abs/2408.16288v1
http://arxiv.org/abs/2408.16288v1
http://arxiv.org/pdf/2408.16288v1
2024-08-29
2024-08-29
[ "Xunkai Li", "Yinlin Zhu", "Boyang Pang", "Guochen Yan", "Yeyu Yan", "Zening Li", "Zhengyu Wu", "Wentao Zhang", "Rong-Hua Li", "Guoren Wang" ]
[ "", "", "", "", "", "", "", "", "", "" ]
Federated graph learning (FGL) has emerged as a promising distributed training paradigm for graph neural networks across multiple local systems without direct data sharing. This approach is particularly beneficial in privacy-sensitive scenarios and offers a new perspective on addressing scalability challenges in large-scale graph learning. Despite the proliferation of FGL, the diverse motivations from practical applications, spanning various research backgrounds and experimental settings, pose a significant challenge to fair evaluation. To fill this gap, we propose OpenFGL, a unified benchmark designed for the primary FGL scenarios: Graph-FL and Subgraph-FL. Specifically, OpenFGL includes 38 graph datasets from 16 application domains, 8 federated data simulation strategies that emphasize graph properties, and 5 graph-based downstream tasks. Additionally, it offers 18 recently proposed SOTA FGL algorithms through a user-friendly API, enabling a thorough comparison and comprehensive evaluation of their effectiveness, robustness, and efficiency. Empirical results demonstrate the ability of FGL while also revealing its potential limitations, offering valuable insights for future exploration in this thriving field.
Under Review
cs.LG
[ "cs.LG", "cs.AI", "cs.DB", "cs.SI" ]
Beyond Uncertainty: Evidential Deep Learning for Robust Video Temporal Grounding
http://arxiv.org/abs/2408.16272v1
http://arxiv.org/abs/2408.16272v1
http://arxiv.org/pdf/2408.16272v1
2024-08-29
2024-08-29
[ "Kaijing Ma", "Haojian Huang", "Jin Chen", "Haodong Chen", "Pengliang Ji", "Xianghao Zang", "Han Fang", "Chao Ban", "Hao Sun", "Mulin Chen", "Xuelong Li" ]
[ "", "", "", "", "", "", "", "", "", "", "" ]
Existing Video Temporal Grounding (VTG) models excel in accuracy but often overlook open-world challenges posed by open-vocabulary queries and untrimmed videos. This leads to unreliable predictions for noisy, corrupted, and out-of-distribution data. Adapting VTG models to dynamically estimate uncertainties based on user input can address this issue. To this end, we introduce SRAM, a robust network module that benefits from a two-stage cross-modal alignment task. More importantly, it integrates Deep Evidential Regression (DER) to explicitly and thoroughly quantify uncertainty during training, thus allowing the model to say "I do not know" in scenarios beyond its handling capacity. However, the direct application of traditional DER theory and its regularizer reveals structural flaws, leading to unintended constraints in VTG tasks. In response, we develop a simple yet effective Geom-regularizer that enhances the uncertainty learning framework from the ground up. To the best of our knowledge, this marks the first successful application of DER to VTG. Our extensive quantitative and qualitative results affirm the effectiveness, robustness, and interpretability of our modules and the uncertainty learning paradigm in VTG tasks. The code will be made available.
Ongoing work: 28pages, 19 figures, 7 tables. Code is available at: https://kaijing.space/SRAM/
cs.CV
[ "cs.CV", "cs.AI" ]
LoraMap: Harnessing the Power of LoRA Connections
http://arxiv.org/abs/2408.16264v1
http://arxiv.org/abs/2408.16264v1
http://arxiv.org/pdf/2408.16264v1
2024-08-29
2024-08-29
[ "Hyeryun Park", "Jeongwon Kwak", "Dongsuk Jang", "Sumin Park", "Jinwook Choi" ]
[ "", "", "", "", "" ]
Large Language Models (LLMs) can benefit from mitigating hallucinations through fact-checking and overcoming substantial computational overhead with parameter-efficient techniques such as Low-Rank Adaptation (LoRA). While some studies have explored the parallel integration of multiple LoRAs, these approaches have paid little attention to the connections between them. This paper investigates methods to establish connections among multiple LoRAs. We create three reasoning datasets tailored to fact-checking and fine-tune individual LoRAs, allowing them to view and reason from diverse perspectives. Then, we explore strategies for allocating these reasoning LoRAs and introduce LoraMap, an approach to map connections between them. The results on the fact-checking task demonstrate that the performance of LoraMap is superior to LoraHub, an existing LoRA composition method. LoraMap also outperforms LoraConcat, which concatenates LoRAs and further fine-tunes them, while using significantly fewer parameters.
13 pages, 9 figures, 5 tables
cs.CL
[ "cs.CL", "cs.AI" ]
Evaluating Time-Series Training Dataset through Lens of Spectrum in Deep State Space Models
http://arxiv.org/abs/2408.16261v1
http://arxiv.org/abs/2408.16261v1
http://arxiv.org/pdf/2408.16261v1
2024-08-29
2024-08-29
[ "Sekitoshi Kanai", "Yasutoshi Ida", "Kazuki Adachi", "Mihiro Uchida", "Tsukasa Yoshida", "Shin'ya Yamaguchi" ]
[ "", "", "", "", "", "" ]
This study investigates a method to evaluate time-series datasets in terms of the performance of deep neural networks (DNNs) with state space models (deep SSMs) trained on the dataset. SSMs have attracted attention as components inside DNNs to address time-series data. Since deep SSMs have powerful representation capacities, training datasets play a crucial role in solving a new task. However, the effectiveness of training datasets cannot be known until deep SSMs are actually trained on them. This can increase the cost of data collection for new tasks, as a trial-and-error process of data collection and time-consuming training are needed to achieve the necessary performance. To advance the practical use of deep SSMs, a metric that evaluates datasets and estimates performance early in training would be one key element. To this end, we introduce the concept of data evaluation methods used in system identification. In system identification of linear dynamical systems, the effectiveness of datasets is evaluated by using the spectrum of input signals. We introduce this concept to deep SSMs, which are nonlinear dynamical systems. We propose the K-spectral metric, which is the sum of the top-K spectra of signals inside deep SSMs, by focusing on the fact that each layer of a deep SSM can be regarded as a linear dynamical system. Our experiments show that the K-spectral metric has a large absolute value of the correlation coefficient with the performance and can be used to evaluate the quality of training datasets.
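A sketch of how such a metric could be computed, assuming access to the signals entering each layer; the exact signal choice and normalization in the paper's K-spectral metric are assumptions here.

```python
# Sketch: sum the top-K Fourier-magnitude components of per-layer signals,
# in the spirit of the K-spectral metric described above.
import numpy as np

def k_spectral_metric(layer_signals: list, k: int = 10) -> float:
    total = 0.0
    for s in layer_signals:                    # s: (timesteps,) one layer's signal
        spectrum = np.abs(np.fft.rfft(s))      # magnitude spectrum
        total += np.sort(spectrum)[-k:].sum()  # top-K spectral components
    return total

signals = [np.sin(np.linspace(0, 20 * np.pi, 512)) + 0.1 * np.random.randn(512)
           for _ in range(4)]                  # stand-ins for per-layer inputs
print(k_spectral_metric(signals, k=5))
```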
11 pages, 5 figures
cs.LG
[ "cs.LG", "cs.AI" ]
Coalitions of AI-based Methods Predict 15-Year Risks of Breast Cancer Metastasis Using Real-World Clinical Data with AUC up to 0.9
http://arxiv.org/abs/2408.16256v1
http://arxiv.org/abs/2408.16256v1
http://arxiv.org/pdf/2408.16256v1
2024-08-29
2024-08-29
[ "Xia Jiang", "Yijun Zhou", "Alan Wells", "Adam Brufsky" ]
[ "", "", "", "" ]
Breast cancer is one of the two cancers responsible for the most deaths in women, with about 42,000 deaths each year in the US. That there are over 300,000 breast cancers newly diagnosed each year suggests that only a fraction of the cancers result in mortality. Thus, most of the women undergo seemingly curative treatment for localized cancers, but a significant fraction later succumb to metastatic disease, for which current treatments are only temporizing for the vast majority. The current prognostic metrics are of little actionable value for 4 of the 5 women seemingly cured after local treatment, and many women are exposed to morbid and even mortal adjuvant therapies unnecessarily, with these adjuvant therapies reducing metastatic recurrence by only a third. Thus, there is a need for better prognostics to target aggressive treatment at those who are likely to relapse and spare those who were actually cured. While there is a plethora of molecular and tumor-marker assays in use and under development to detect recurrence early, these are time consuming, expensive, and often still unvalidated with respect to actionable prognostic utility. A different approach would use large-data techniques to determine clinical and histopathological parameters that provide accurate prognostics using existing data. Herein, we report on using machine learning, together with grid search and Bayesian networks, to develop algorithms that present an AUC of up to 0.9 in ROC analyses, using only extant data. Such algorithms could be rapidly translated to clinical management as they do not require testing beyond routine tumor evaluations.
cs.LG
[ "cs.LG", "cs.AI", "cs.NE", "q-bio.QM" ]
Enhancing Conditional Image Generation with Explainable Latent Space Manipulation
http://arxiv.org/abs/2408.16232v1
http://arxiv.org/abs/2408.16232v1
http://arxiv.org/pdf/2408.16232v1
2024-08-29
2024-08-29
[ "Kshitij Pathania" ]
[ "" ]
In the realm of image synthesis, achieving fidelity to a reference image while adhering to conditional prompts remains a significant challenge. This paper proposes a novel approach that integrates a diffusion model with latent space manipulation and gradient-based selective attention mechanisms to address this issue. Leveraging Grad-SAM (Gradient-based Selective Attention Manipulation), we analyze the cross-attention maps of the cross-attention layers and the gradients for the denoised latent vector, deriving importance scores for the elements of the denoised latent vector related to the subject of interest. Using this information, we create masks at specific timesteps during denoising to preserve subjects while seamlessly integrating the reference image features. This approach ensures the faithful formation of subjects based on conditional prompts, while concurrently refining the background for a more coherent composition. Our experiments on the Places365 dataset demonstrate promising results, with our proposed model achieving the lowest mean and median Frechet Inception Distance (FID) scores compared to baseline models, indicating superior fidelity preservation. Furthermore, our model exhibits competitive performance in aligning the generated images with provided textual descriptions, as evidenced by high CLIP scores. These results highlight the effectiveness of our approach in both fidelity preservation and textual context preservation, offering a significant advancement in text-to-image synthesis tasks.
7 pages , 5 figures
cs.CV
[ "cs.CV", "cs.AI", "cs.LG", "26B10, 53A35,", "I.2.10; I.4.10" ]
Anchor-Controlled Generative Adversarial Network for High-Fidelity Electromagnetic and Structurally Diverse Metasurface Design
http://arxiv.org/abs/2408.16231v1
http://arxiv.org/abs/2408.16231v1
http://arxiv.org/pdf/2408.16231v1
2024-08-29
2024-08-29
[ "Yunhui Zeng", "Hongkun Cao", "Xin Jin" ]
[ "", "", "" ]
In optoelectronics, designing free-form metasurfaces presents significant challenges, particularly in achieving high electromagnetic response fidelity due to the complex relationship between physical structures and electromagnetic behaviors. A key difficulty arises from the one-to-many mapping dilemma, where multiple distinct physical structures can yield similar electromagnetic responses, complicating the design process. This paper introduces a novel generative framework, the Anchor-controlled Generative Adversarial Network (AcGAN), which prioritizes electromagnetic fidelity while effectively navigating the one-to-many challenge to create structurally diverse metasurfaces. Unlike existing methods that mainly replicate physical appearances, AcGAN excels in generating a variety of structures that, despite their differences in physical attributes, exhibit similar electromagnetic responses, thereby accommodating fabrication constraints and tolerances. We introduce the Spectral Overlap Coefficient (SOC) as a precise metric to measure the spectral fidelity between generated designs and their targets. Additionally, a cluster-guided controller refines input processing, ensuring multi-level spectral integration and enhancing electromagnetic fidelity. The integration of AnchorNet into our loss function facilitates a nuanced assessment of electromagnetic qualities, supported by a dynamic loss weighting strategy that optimizes spectral alignment. Collectively, these innovations represent a transformative stride in metasurface inverse design, advancing electromagnetic response-oriented engineering and overcoming the complexities of the one-to-many mapping dilemma. Empirical evidence underscores AcGAN's effectiveness in streamlining the design process, achieving superior electromagnetic precision, and fostering a broad spectrum of design possibilities.
physics.optics
[ "physics.optics", "cs.AI", "physics.app-ph" ]
LLaVA-SG: Leveraging Scene Graphs as Visual Semantic Expression in Vision-Language Models
http://arxiv.org/abs/2408.16224v1
http://arxiv.org/abs/2408.16224v1
http://arxiv.org/pdf/2408.16224v1
2024-08-29
2024-08-29
[ "Jingyi Wang", "Jianzhong Ju", "Jian Luan", "Zhidong Deng" ]
[ "", "", "", "" ]
Recent advances in large vision-language models (VLMs) typically employ vision encoders based on the Vision Transformer (ViT) architecture. The division of the images into patches by ViT results in a fragmented perception, thereby hindering the visual understanding capabilities of VLMs. In this paper, we propose an innovative enhancement to address this limitation by introducing a Scene Graph Expression (SGE) module in VLMs. This module extracts and structurally expresses the complex semantic information within images, thereby improving the foundational perception and understanding abilities of VLMs. Extensive experiments demonstrate that integrating our SGE module significantly enhances the VLM's performance in vision-language tasks, indicating its effectiveness in preserving intricate semantic details and facilitating better visual understanding. Code and data will be made available.
cs.CV
[ "cs.CV", "cs.AI" ]
SSDM: Scalable Speech Dysfluency Modeling
http://arxiv.org/abs/2408.16221v1
http://arxiv.org/abs/2408.16221v1
http://arxiv.org/pdf/2408.16221v1
2024-08-29
2024-08-29
[ "Jiachen Lian", "Xuanru Zhou", "Zoe Ezzes", "Jet Vonk", "Brittany Morin", "David Baquirin", "Zachary Mille", "Maria Luisa Gorno Tempini", "Gopala Anumanchipalli" ]
[ "", "", "", "", "", "", "", "", "" ]
Speech dysfluency modeling is the core module for spoken language learning and speech therapy. However, there are three challenges. First, current state-of-the-art solutions suffer from poor scalability. Second, there is a lack of a large-scale dysfluency corpus. Third, there is no effective learning framework. In this paper, we propose \textit{SSDM: Scalable Speech Dysfluency Modeling}, which (1) adopts articulatory gestures as scalable forced alignment; (2) introduces a connectionist subsequence aligner (CSA) to achieve dysfluency alignment; (3) introduces a large-scale simulated dysfluency corpus called Libri-Dys; and (4) develops an end-to-end system by leveraging the power of large language models (LLMs). We expect SSDM to serve as a standard in the area of dysfluency modeling. Demo is available at \url{https://eureka235.github.io}.
eess.AS
[ "eess.AS", "cs.AI", "cs.CL", "cs.SD" ]
M4CXR: Exploring Multi-task Potentials of Multi-modal Large Language Models for Chest X-ray Interpretation
http://arxiv.org/abs/2408.16213v1
http://arxiv.org/abs/2408.16213v1
http://arxiv.org/pdf/2408.16213v1
2024-08-29
2024-08-29
[ "Jonggwon Park", "Soobum Kim", "Byungmu Yoon", "Jihun Hyun", "Kyoyun Choi" ]
[ "", "", "", "", "" ]
The rapid evolution of artificial intelligence, especially in large language models (LLMs), has significantly impacted various domains, including healthcare. In chest X-ray (CXR) analysis, previous studies have employed LLMs, but with limitations: either underutilizing the multi-tasking capabilities of LLMs or lacking clinical accuracy. This paper presents M4CXR, a multi-modal LLM designed to enhance CXR interpretation. The model is trained on a visual instruction-following dataset that integrates various task-specific datasets in a conversational format. As a result, the model supports multiple tasks such as medical report generation (MRG), visual grounding, and visual question answering (VQA). M4CXR achieves state-of-the-art clinical accuracy in MRG by employing a chain-of-thought prompting strategy, in which it identifies findings in CXR images and subsequently generates corresponding reports. The model is adaptable to various MRG scenarios depending on the available inputs, such as single-image, multi-image, and multi-study contexts. In addition to MRG, M4CXR performs visual grounding at a level comparable to specialized models and also demonstrates outstanding performance in VQA. Both quantitative and qualitative assessments reveal M4CXR's versatility in MRG, visual grounding, and VQA, while consistently maintaining clinical accuracy.
cs.CV
[ "cs.CV", "cs.AI", "cs.CL" ]
Short-Term Electricity-Load Forecasting by Deep Learning: A Comprehensive Survey
http://arxiv.org/abs/2408.16202v1
http://arxiv.org/abs/2408.16202v1
http://arxiv.org/pdf/2408.16202v1
2024-08-29
2024-08-29
[ "Qi Dong", "Rubing Huang", "Chenhui Cui", "Dave Towey", "Ling Zhou", "Jinyu Tian", "Jianzhou Wang" ]
[ "", "", "", "", "", "", "" ]
Short-Term Electricity-Load Forecasting (STELF) refers to the prediction of the immediate demand (in the next few hours to several days) for the power system. Various external factors, such as weather changes and the emergence of new electricity consumption scenarios, can impact electricity demand, causing load data to fluctuate and become non-linear, which increases the complexity and difficulty of STELF. In the past decade, deep learning has been applied to STELF, modeling and predicting electricity demand with high accuracy, and contributing significantly to the development of STELF. This paper provides a comprehensive survey on deep-learning-based STELF over the past ten years. It examines the entire forecasting process, including data pre-processing, feature extraction, deep-learning modeling and optimization, and results evaluation. This paper also identifies some research challenges and potential research directions to be further investigated in future work.
cs.LG
[ "cs.LG", "cs.AI" ]
PolarBEVDet: Exploring Polar Representation for Multi-View 3D Object Detection in Bird's-Eye-View
http://arxiv.org/abs/2408.16200v1
http://arxiv.org/abs/2408.16200v1
http://arxiv.org/pdf/2408.16200v1
2024-08-29
2024-08-29
[ "Zichen Yu", "Quanli Liu", "Wei Wang", "Liyong Zhang", "Xiaoguang Zhao" ]
[ "", "", "", "", "" ]
Recently, LSS-based multi-view 3D object detection provides an economical and deployment-friendly solution for autonomous driving. However, all the existing LSS-based methods transform multi-view image features into a Cartesian Bird's-Eye-View (BEV) representation, which does not take into account the non-uniform distribution of image information and hardly exploits the view symmetry. In this paper, in order to adapt to the image information distribution and preserve the view symmetry under regular convolution, we propose to employ a polar BEV representation to substitute for the Cartesian BEV representation. To achieve this, we elaborately tailor three modules: a polar view transformer to generate the polar BEV representation, a polar temporal fusion module for fusing historical polar BEV features, and a polar detection head to predict the polar-parameterized representation of the object. In addition, we design a 2D auxiliary detection head and a spatial attention enhancement module to improve the quality of feature extraction in the perspective view and BEV, respectively. Finally, we integrate the above improvements into a novel multi-view 3D object detector, PolarBEVDet. Experiments on nuScenes show that PolarBEVDet achieves superior performance. The code is available at https://github.com/Yzichen/PolarBEVDet.git.
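The representation change itself can be sketched as a resampling of a Cartesian BEV feature map onto a polar (r, theta) grid; nearest-neighbor sampling and all grid sizes below are simplifying placeholders, not the paper's polar view transformer.

```python
# Sketch: resample a Cartesian BEV feature map (ego vehicle at the center)
# onto a polar grid via nearest-neighbor lookup.
import numpy as np

def cartesian_to_polar_bev(bev: np.ndarray, n_r: int = 128, n_theta: int = 128,
                           max_range: float = 51.2) -> np.ndarray:
    c, h, w = bev.shape                                # (C, H, W)
    r = np.linspace(0.0, max_range, n_r)
    theta = np.linspace(-np.pi, np.pi, n_theta, endpoint=False)
    rr, tt = np.meshgrid(r, theta, indexing="ij")
    xs, ys = rr * np.cos(tt), rr * np.sin(tt)          # polar cell centers
    cols = np.clip(((xs / max_range + 1) / 2 * (w - 1)).round().astype(int), 0, w - 1)
    rows = np.clip(((ys / max_range + 1) / 2 * (h - 1)).round().astype(int), 0, h - 1)
    return bev[:, rows, cols]                          # (C, n_r, n_theta)

polar = cartesian_to_polar_bev(np.random.rand(64, 200, 200))
```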
11 pages, 6 figures
cs.CV
[ "cs.CV", "cs.AI" ]
A More Unified Theory of Transfer Learning
http://arxiv.org/abs/2408.16189v1
http://arxiv.org/abs/2408.16189v1
http://arxiv.org/pdf/2408.16189v1
2024-08-29
2024-08-29
[ "Steve Hanneke", "Samory Kpotufe" ]
[ "", "" ]
We show that some basic moduli of continuity $\delta$ -- which measure how fast target risk decreases as source risk decreases -- appear to be at the root of many of the classical relatedness measures in transfer learning and related literature. Namely, bounds in terms of $\delta$ recover many of the existing bounds in terms of other measures of relatedness -- both in regression and classification -- and can at times be tighter. We are particularly interested in general situations where the learner has access to both source data and some or no target data. The unified perspective allowed by the moduli $\delta$ allows us to extend many existing notions of relatedness at once to these scenarios involving target data: interestingly, while $\delta$ itself might not be efficiently estimated, adaptive procedures exist -- based on reductions to confidence sets -- which can achieve nearly tight rates in terms of $\delta$ with no prior distributional knowledge. Such adaptivity to unknown $\delta$ immediately implies adaptivity to many classical relatedness notions, in terms of combined source and target sample sizes.
stat.ML
[ "stat.ML", "cs.AI", "cs.LG", "math.ST", "stat.TH" ]
Real-Time Energy Pricing in New Zealand: An Evolving Stream Analysis
http://arxiv.org/abs/2408.16187v1
http://arxiv.org/abs/2408.16187v1
http://arxiv.org/pdf/2408.16187v1
2024-08-29
2024-08-29
[ "Yibin Sun", "Heitor Murilo Gomes", "Bernhard Pfahringer", "Albert Bifet" ]
[ "", "", "", "" ]
This paper introduces a group of novel datasets representing real-time time-series and streaming data of energy prices in New Zealand, sourced from the Electricity Market Information (EMI) website maintained by the New Zealand government. The datasets are intended to address the scarcity of proper datasets for streaming regression learning tasks. We conduct extensive analyses and experiments on these datasets, covering preprocessing techniques, regression tasks, prediction intervals, concept drift detection, and anomaly detection. Our experiments demonstrate the datasets' utility and highlight the challenges and opportunities for future research in energy price forecasting.
12 Pages, 8 figures, short version accepted by PRICAI
cs.LG
[ "cs.LG", "cs.AI" ]
LLM-assisted Labeling Function Generation for Semantic Type Detection
http://arxiv.org/abs/2408.16173v1
http://arxiv.org/abs/2408.16173v1
http://arxiv.org/pdf/2408.16173v1
2024-08-28
2024-08-28
[ "Chenjie Li", "Dan Zhang", "Jin Wang" ]
[ "", "", "" ]
Detecting semantic types of columns in data lake tables is an important application. A key bottleneck in semantic type detection is the availability of human annotation due to the inherent complexity of data lakes. In this paper, we propose using programmatic weak supervision to assist in annotating the training data for semantic type detection by leveraging labeling functions. One challenge in this process is the difficulty of manually writing labeling functions due to the large volume and low quality of the data lake table datasets. To address this issue, we explore employing Large Language Models (LLMs) for labeling function generation and introduce several prompt engineering strategies for this purpose. We conduct experiments on real-world web table datasets. Based on the initial results, we perform extensive analysis and provide empirical insights and future directions for researchers in this field.
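To make the setup concrete, here is a minimal sketch of programmatic weak supervision for semantic type detection: labeling functions vote on a column's type or abstain, and votes are aggregated (majority vote below; Snorkel-style generative models are also common). The functions are illustrative stand-ins for LLM-generated ones, not the paper's prompts or outputs.

```python
# Sketch: labeling functions over table columns, aggregated by majority vote.
from collections import Counter

ABSTAIN = None

def lf_looks_like_year(values):
    ok = sum(v.isdigit() and 1800 <= int(v) <= 2100 for v in values)
    return "year" if ok / len(values) > 0.8 else ABSTAIN

def lf_looks_like_country(values):
    known = {"france", "japan", "brazil", "canada"}
    ok = sum(v.strip().lower() in known for v in values)
    return "country" if ok / len(values) > 0.5 else ABSTAIN

def weak_label(column, lfs):
    votes = [v for v in (lf(column) for lf in lfs) if v is not ABSTAIN]
    return Counter(votes).most_common(1)[0][0] if votes else ABSTAIN

print(weak_label(["1999", "2004", "2021"],
                 [lf_looks_like_year, lf_looks_like_country]))  # -> "year"
```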
VLDB'24-DATAI
cs.DB
[ "cs.DB", "cs.AI" ]
Simulating realistic short tandem repeat capillary electrophoretic signal using a generative adversarial network
http://arxiv.org/abs/2408.16169v1
http://arxiv.org/abs/2408.16169v1
http://arxiv.org/pdf/2408.16169v1
2024-08-28
2024-08-28
[ "Duncan Taylor", "Melissa Humphries" ]
[ "", "" ]
DNA profiles are made up of multiple series of electrophoretic signal measuring fluorescence over time. Typically, human DNA analysts 'read' DNA profiles using their experience to distinguish instrument noise, artefactual signal, and signal corresponding to DNA fragments of interest. Recent work has developed an artificial neural network (ANN) to carry out the task of classifying fluorescence types into categories in DNA profile electrophoretic signal. But the creation of the necessarily large amount of labelled training data for the ANN is time consuming and expensive, and a limiting factor in the ability to robustly train the ANN. If realistic, pre-labelled training data could be simulated, this barrier to training an ANN with high efficacy would be removed. Here we develop a generative adversarial network (GAN), modified from the pix2pix GAN, to achieve this task. With 1078 DNA profiles we train the GAN and achieve the ability to simulate DNA profile information, and then use the generator from the GAN as a 'realism filter' that applies the noise and artefact elements exhibited in typical electrophoretic signal.
29 pages, 9 Figures
cs.LG
[ "cs.LG", "cs.AI" ]
FRACTURED-SORRY-Bench: Framework for Revealing Attacks in Conversational Turns Undermining Refusal Efficacy and Defenses over SORRY-Bench
http://arxiv.org/abs/2408.16163v1
http://arxiv.org/abs/2408.16163v1
http://arxiv.org/pdf/2408.16163v1
2024-08-28
2024-08-28
[ "Aman Priyanshu", "Supriti Vijay" ]
[ "", "" ]
This paper introduces FRACTURED-SORRY-Bench, a framework for evaluating the safety of Large Language Models (LLMs) against multi-turn conversational attacks. Building upon the SORRY-Bench dataset, we propose a simple yet effective method for generating adversarial prompts by breaking down harmful queries into seemingly innocuous sub-questions. Our approach achieves a maximum increase of +46.22\% in Attack Success Rates (ASRs) across GPT-4, GPT-4o, GPT-4o-mini, and GPT-3.5-Turbo models compared to baseline methods. We demonstrate that this technique poses a challenge to current LLM safety measures and highlights the need for more robust defenses against subtle, multi-turn attacks.
4 pages, 2 tables
cs.CL
[ "cs.CL", "cs.AI" ]
Improving Generalization of Speech Separation in Real-World Scenarios: Strategies in Simulation, Optimization, and Evaluation
http://arxiv.org/abs/2408.16126v1
http://arxiv.org/abs/2408.16126v1
http://arxiv.org/pdf/2408.16126v1
2024-08-28
2024-08-28
[ "Ke Chen", "Jiaqi Su", "Taylor Berg-Kirkpatrick", "Shlomo Dubnov", "Zeyu Jin" ]
[ "", "", "", "", "" ]
Achieving robust speech separation for overlapping speakers in various acoustic environments with noise and reverberation remains an open challenge. Although existing datasets are available to train separators for specific scenarios, they do not effectively generalize across diverse real-world scenarios. In this paper, we present a novel data simulation pipeline that produces diverse training data from a range of acoustic environments and content, and propose new training paradigms to improve quality of a general speech separation model. Specifically, we first introduce AC-SIM, a data simulation pipeline that incorporates broad variations in both content and acoustics. Then we integrate multiple training objectives into the permutation invariant training (PIT) to enhance separation quality and generalization of the trained model. Finally, we conduct comprehensive objective and human listening experiments across separation architectures and benchmarks to validate our methods, demonstrating substantial improvement of generalization on both non-homologous and real-world test sets.
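Since the proposed objectives plug into permutation invariant training, a minimal PIT sketch may help: the loss is evaluated for every speaker-to-output assignment and the best one is kept. MSE stands in for the SI-SDR-style losses typically used in practice.

```python
# Sketch: permutation invariant training (PIT) loss for source separation.
import itertools
import torch

def pit_loss(est: torch.Tensor, ref: torch.Tensor) -> torch.Tensor:
    # est, ref: (n_speakers, time)
    n = est.shape[0]
    losses = []
    for perm in itertools.permutations(range(n)):
        losses.append(torch.stack(
            [torch.mean((est[i] - ref[p]) ** 2) for i, p in enumerate(perm)]
        ).mean())
    return torch.stack(losses).min()  # best speaker-to-output assignment wins

est = torch.randn(2, 16000)  # two estimated sources, 1 s at 16 kHz
ref = torch.randn(2, 16000)  # two reference sources
loss = pit_loss(est, ref)
```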
In Proceedings of the 25th Annual Conference of the International Speech Communication Association, Interspeech 2024
cs.SD
[ "cs.SD", "cs.AI", "cs.LG", "eess.AS" ]
ChartEye: A Deep Learning Framework for Chart Information Extraction
http://arxiv.org/abs/2408.16123v1
http://arxiv.org/abs/2408.16123v1
http://arxiv.org/pdf/2408.16123v1
2024-08-28
2024-08-28
[ "Osama Mustafa", "Muhammad Khizer Ali", "Momina Moetesum", "Imran Siddiqi" ]
[ "", "", "", "" ]
The widespread use of charts and infographics as a means of data visualization in various domains has inspired recent research in automated chart understanding. However, information extraction from chart images is a complex, multitasked process due to style variations and, as a consequence, it is challenging to design an end-to-end system. In this study, we propose a deep learning-based framework that provides a solution for key steps in the chart information extraction pipeline. The proposed framework utilizes hierarchical vision transformers for the tasks of chart-type and text-role classification, and YOLOv7 for text detection. The detected text is then enhanced using Super Resolution Generative Adversarial Networks to improve the recognition output of the OCR. Experimental results on a benchmark dataset show that our proposed framework achieves excellent performance at every stage, with F1-scores of 0.97 for chart-type classification, 0.91 for text-role classification, and a mean Average Precision of 0.95 for text detection.
8 Pages, and 11 Figures
10.1109/DICTA60407.2023.00082
cs.CV
[ "cs.CV", "cs.AI", "cs.LG" ]
Data Formulator 2: Iteratively Creating Rich Visualizations with AI
http://arxiv.org/abs/2408.16119v1
http://arxiv.org/abs/2408.16119v1
http://arxiv.org/pdf/2408.16119v1
2024-08-28
2024-08-28
[ "Chenglong Wang", "Bongshin Lee", "Steven Drucker", "Dan Marshall", "Jianfeng Gao" ]
[ "", "", "", "", "" ]
To create rich visualizations, data analysts often need to iterate back and forth between data processing and chart specification to achieve their goals. To do so, analysts need not only proficiency in data transformation and visualization tools but also effort to manage the branching history consisting of many different versions of data and charts. Recent LLM-powered AI systems have greatly improved visualization authoring experiences, for example by mitigating manual data transformation barriers via LLMs' code generation ability. However, these systems do not work well for iterative visualization authoring, because they often require analysts to provide, in a single turn, a text-only prompt that fully describes the complex visualization task to be performed, which is unrealistic for both users and models in many cases. In this paper, we present Data Formulator 2, an LLM-powered visualization system that addresses these challenges. With Data Formulator 2, users describe their visualization intent with blended UI and natural language inputs, and data transformations are delegated to AI. To support iteration, Data Formulator 2 lets users navigate their iteration history and reuse previous designs towards new ones so that they don't need to start from scratch every time. In a user study with eight participants, we observed that Data Formulator 2 allows participants to develop their own iteration strategies to complete challenging data exploration sessions.
cs.HC
[ "cs.HC", "cs.AI" ]
Logic-Enhanced Language Model Agents for Trustworthy Social Simulations
http://arxiv.org/abs/2408.16081v1
http://arxiv.org/abs/2408.16081v1
http://arxiv.org/pdf/2408.16081v1
2024-08-28
2024-08-28
[ "Agnieszka Mensfelt", "Kostas Stathis", "Vince Trencsenyi" ]
[ "", "", "" ]
We introduce the Logic-Enhanced Language Model Agents (LELMA) framework, a novel approach to enhance the trustworthiness of social simulations that utilize large language models (LLMs). While LLMs have gained attention as agents for simulating human behaviour, their applicability in this role is limited by issues such as inherent hallucinations and logical inconsistencies. LELMA addresses these challenges by integrating LLMs with symbolic AI, enabling logical verification of the reasoning generated by LLMs. This verification process provides corrective feedback, refining the reasoning output. The framework consists of three main components: an LLM-Reasoner for producing strategic reasoning, an LLM-Translator for mapping natural language reasoning to logic queries, and a Solver for evaluating these queries. This study focuses on decision-making in game-theoretic scenarios as a model of human interaction. Experiments involving the Hawk-Dove game, Prisoner's Dilemma, and Stag Hunt highlight the limitations of state-of-the-art LLMs, GPT-4 Omni and Gemini 1.0 Pro, in producing correct reasoning in these contexts. LELMA demonstrates high accuracy in error detection and improves the reasoning correctness of LLMs via self-refinement, particularly in GPT-4 Omni.
Source code: https://github.com/dicelab-rhul/LELMA
cs.AI
[ "cs.AI", "cs.CL", "cs.GT", "cs.LO" ]
Verification methods for international AI agreements
http://arxiv.org/abs/2408.16074v1
http://arxiv.org/abs/2408.16074v1
http://arxiv.org/pdf/2408.16074v1
2024-08-28
2024-08-28
[ "Akash R. Wasil", "Tom Reed", "Jack William Miller", "Peter Barnett" ]
[ "", "", "", "" ]
What techniques can be used to verify compliance with international agreements about advanced AI development? In this paper, we examine 10 verification methods that could detect two types of potential violations: unauthorized AI training (e.g., training runs above a certain FLOP threshold) and unauthorized data centers. We divide the verification methods into three categories: (a) national technical means (methods requiring minimal or no access from suspected non-compliant nations), (b) access-dependent methods (methods that require approval from the nation suspected of unauthorized activities), and (c) hardware-dependent methods (methods that require rules around advanced hardware). For each verification method, we provide a description, historical precedents, and possible evasion techniques. We conclude by offering recommendations for future work related to the verification and enforcement of international AI governance agreements.
cs.CY
[ "cs.CY", "cs.AI" ]
Using Large Language Models to Create AI Personas for Replication and Prediction of Media Effects: An Empirical Test of 133 Published Experimental Research Findings
http://arxiv.org/abs/2408.16073v1
http://arxiv.org/abs/2408.16073v1
http://arxiv.org/pdf/2408.16073v1
2024-08-28
2024-08-28
[ "Leo Yeykelis", "Kaavya Pichai", "James J. Cummings", "Byron Reeves" ]
[ "", "", "", "" ]
This report analyzes the potential for large language models (LLMs) to expedite accurate replication of published message effects studies. We tested LLM-powered participants (personas) by replicating 133 experimental findings from 14 papers containing 45 recent studies in the Journal of Marketing (January 2023-May 2024). We used a new software tool, Viewpoints AI (https://viewpoints.ai/), that takes study designs, stimuli, and measures as input, automatically generates prompts for LLMs to act as a specified sample of unique personas, and collects their responses to produce a final output in the form of a complete dataset and statistical analysis. The underlying LLM used was Anthropic's Claude Sonnet 3.5. We generated 19,447 AI personas to replicate these studies with the exact same sample attributes, study designs, stimuli, and measures reported in the original human research. Our LLM replications successfully reproduced 76% of the original main effects (84 out of 111), demonstrating strong potential for AI-assisted replication of studies in which people respond to media stimuli. When including interaction effects, the overall replication rate was 68% (90 out of 133). The use of LLMs to replicate and accelerate marketing research on media effects is discussed with respect to the replication crisis in social science, potential solutions to generalizability problems in sampling subjects and experimental conditions, and the ability to rapidly test consumer responses to various media stimuli. We also address the limitations of this approach, particularly in replicating complex interaction effects in media response studies, and suggest areas for future research and improvement in AI-assisted experimental replication of media effects.
24 pages, 3 figures, 2 tables
cs.CL
[ "cs.CL", "cs.AI" ]
Identification of Prognostic Biomarkers for Stage III Non-Small Cell Lung Carcinoma in Female Nonsmokers Using Machine Learning
http://arxiv.org/abs/2408.16068v1
http://arxiv.org/abs/2408.16068v1
http://arxiv.org/pdf/2408.16068v1
2024-08-28
2024-08-28
[ "Huili Zheng", "Qimin Zhang", "Yiru Gong", "Zheyan Liu", "Shaohan Chen" ]
[ "", "", "", "", "" ]
Lung cancer remains a leading cause of cancer-related deaths globally, with non-small cell lung cancer (NSCLC) being the most common subtype. This study aimed to identify key biomarkers associated with stage III NSCLC in non-smoking females using gene expression profiling from the GDS3837 dataset. Utilizing XGBoost, a machine learning algorithm, the analysis achieved a strong predictive performance with an AUC score of 0.835. The top biomarkers identified - CCAAT enhancer binding protein alpha (C/EBP-alpha), lactate dehydrogenase A4 (LDHA), UNC-45 myosin chaperone B (UNC-45B), checkpoint kinase 1 (CHK1), and hypoxia-inducible factor 1 subunit alpha (HIF-1-alpha) - have been validated in the literature as being significantly linked to lung cancer. These findings highlight the potential of these biomarkers for early diagnosis and personalized therapy, emphasizing the value of integrating machine learning with molecular profiling in cancer research.
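A minimal sketch of the modeling recipe the abstract describes, XGBoost on expression features evaluated by AUC; the data below is synthetic, and the GDS3837 preprocessing and reported biomarkers are not reproduced.

```python
# Sketch: gradient-boosted classification of gene expression profiles with
# AUC evaluation, on synthetic stand-in data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
from xgboost import XGBClassifier

rng = np.random.default_rng(42)
X = rng.normal(size=(200, 500))  # 200 samples x 500 gene-expression features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=200) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = XGBClassifier(n_estimators=200, max_depth=3, learning_rate=0.1)
model.fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```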
This paper has been accepted for publication in the IEEE ICBASE 2024 conference
q-bio.GN
[ "q-bio.GN", "cs.AI", "stat.ML" ]
Eagle: Exploring The Design Space for Multimodal LLMs with Mixture of Encoders
http://arxiv.org/abs/2408.15998v1
http://arxiv.org/abs/2408.15998v1
http://arxiv.org/pdf/2408.15998v1
2024-08-28
2024-08-28
[ "Min Shi", "Fuxiao Liu", "Shihao Wang", "Shijia Liao", "Subhashree Radhakrishnan", "De-An Huang", "Hongxu Yin", "Karan Sapra", "Yaser Yacoob", "Humphrey Shi", "Bryan Catanzaro", "Andrew Tao", "Jan Kautz", "Zhiding Yu", "Guilin Liu" ]
[ "", "", "", "", "", "", "", "", "", "", "", "", "", "", "" ]
The ability to accurately interpret complex visual information is a crucial focus of multimodal large language models (MLLMs). Recent work indicates that enhanced visual perception significantly reduces hallucinations and improves performance on resolution-sensitive tasks, such as optical character recognition and document analysis. A number of recent MLLMs achieve this goal using a mixture of vision encoders. Despite their success, there is a lack of systematic comparisons and detailed ablation studies addressing critical aspects, such as expert selection and the integration of multiple vision experts. This study provides an extensive exploration of the design space for MLLMs using a mixture of vision encoders and resolutions. Our findings reveal several underlying principles common to various existing strategies, leading to a streamlined yet effective design approach. We discover that simply concatenating visual tokens from a set of complementary vision encoders is as effective as more complex mixing architectures or strategies. We additionally introduce Pre-Alignment to bridge the gap between vision-focused encoders and language tokens, enhancing model coherence. The resulting family of MLLMs, Eagle, surpasses other leading open-source models on major MLLM benchmarks. Models and code: https://github.com/NVlabs/Eagle
Github: https://github.com/NVlabs/Eagle, HuggingFace: https://huggingface.co/NVEagle
cs.CV
[ "cs.CV", "cs.AI", "cs.LG", "cs.RO" ]
Mamba or Transformer for Time Series Forecasting? Mixture of Universals (MoU) Is All You Need
http://arxiv.org/abs/2408.15997v1
http://arxiv.org/abs/2408.15997v1
http://arxiv.org/pdf/2408.15997v1
2024-08-28
2024-08-28
[ "Sijia Peng", "Yun Xiong", "Yangyong Zhu", "Zhiqiang Shen" ]
[ "", "", "", "" ]
Time series forecasting requires balancing short-term and long-term dependencies for accurate predictions. Existing methods mainly focus on long-term dependency modeling, neglecting the complexities of short-term dynamics, which may hinder performance. Transformers are superior in modeling long-term dependencies but are criticized for their quadratic computational cost. Mamba provides a near-linear alternative but is reported less effective in time series long-term forecasting due to potential information loss. Current architectures fall short in offering both high efficiency and strong performance for long-term dependency modeling. To address these challenges, we introduce Mixture of Universals (MoU), a versatile model to capture both short-term and long-term dependencies for enhancing performance in time series forecasting. MoU is composed of two novel designs: Mixture of Feature Extractors (MoF), an adaptive method designed to improve time series patch representations for short-term dependency, and Mixture of Architectures (MoA), which hierarchically integrates Mamba, FeedForward, Convolution, and Self-Attention architectures in a specialized order to model long-term dependency from a hybrid perspective. The proposed approach achieves state-of-the-art performance while maintaining relatively low computational costs. Extensive experiments on seven real-world datasets demonstrate the superiority of MoU. Code is available at https://github.com/lunaaa95/mou/.
Code at https://github.com/lunaaa95/mou/
cs.LG
[ "cs.LG", "cs.AI" ]
Spatio-Temporal Context Prompting for Zero-Shot Action Detection
http://arxiv.org/abs/2408.15996v2
http://arxiv.org/abs/2408.15996v2
http://arxiv.org/pdf/2408.15996v2
2024-08-28
2024-08-29
[ "Wei-Jhe Huang", "Min-Hung Chen", "Shang-Hong Lai" ]
[ "", "", "" ]
Spatio-temporal action detection encompasses the tasks of localizing and classifying individual actions within a video. Recent works aim to enhance this process by incorporating interaction modeling, which captures the relationship between people and their surrounding context. However, these approaches have primarily focused on fully-supervised learning, and the current limitation lies in the lack of generalization capability to recognize unseen action categories. In this paper, we aim to adapt the pretrained image-language models to detect unseen actions. To this end, we propose a method which can effectively leverage the rich knowledge of visual-language models to perform Person-Context Interaction. Meanwhile, our Context Prompting module will utilize contextual information to prompt labels, thereby enhancing the generation of more representative text features. Moreover, to address the challenge of recognizing distinct actions by multiple people at the same timestamp, we design the Interest Token Spotting mechanism which employs pretrained visual knowledge to find each person's interest context tokens, and then these tokens will be used for prompting to generate text features tailored to each individual. To evaluate the ability to detect unseen actions, we propose a comprehensive benchmark on J-HMDB, UCF101-24, and AVA datasets. The experiments show that our method achieves superior results compared to previous approaches and can be further extended to multi-action videos, bringing it closer to real-world applications. The code and data can be found in https://webber2933.github.io/ST-CLIP-project-page.
Project page: https://webber2933.github.io/ST-CLIP-project-page
cs.CV
[ "cs.CV", "cs.AI" ]
CoGen: Learning from Feedback with Coupled Comprehension and Generation
http://arxiv.org/abs/2408.15992v1
http://arxiv.org/abs/2408.15992v1
http://arxiv.org/pdf/2408.15992v1
2024-08-28
2024-08-28
[ "Mustafa Omer Gul", "Yoav Artzi" ]
[ "", "" ]
Systems with both language comprehension and generation capabilities can benefit from the tight connection between the two. This work studies coupling comprehension and generation with a focus on continually learning from interaction with users. We propose techniques to tightly integrate the two capabilities for both learning and inference. We situate our studies in two-player reference games, and deploy various models for thousands of interactions with human users, while learning from interaction feedback signals. We show dramatic improvements in performance over time, with comprehension-generation coupling leading to performance improvements up to 26% in absolute terms and up to 17% higher accuracies compared to a non-coupled system. Our analysis also shows coupling has a substantial qualitative impact on the system's language, making it significantly more human-like.
17 pages, 9 figures
cs.CL
[ "cs.CL", "cs.AI", "cs.CV", "cs.LG" ]
In-Context Imitation Learning via Next-Token Prediction
http://arxiv.org/abs/2408.15980v1
http://arxiv.org/abs/2408.15980v1
http://arxiv.org/pdf/2408.15980v1
2024-08-28
2024-08-28
[ "Letian Fu", "Huang Huang", "Gaurav Datta", "Lawrence Yunliang Chen", "William Chung-Ho Panitch", "Fangchen Liu", "Hui Li", "Ken Goldberg" ]
[ "", "", "", "", "", "", "", "" ]
We explore how to enhance next-token prediction models to perform in-context imitation learning on a real robot, where the robot executes new tasks by interpreting contextual information provided during the input phase, without updating its underlying policy parameters. We propose In-Context Robot Transformer (ICRT), a causal transformer that performs autoregressive prediction on sensorimotor trajectories without relying on any linguistic data or reward function. This formulation enables flexible and training-free execution of new tasks at test time, achieved by prompting the model with sensorimotor trajectories of the new task, comprising image observation, action, and state tuples, collected through human teleoperation. Experiments with a Franka Emika robot demonstrate that ICRT can adapt to new tasks specified by prompts, even in environment configurations that differ from both the prompt and the training data. In a multitask environment setup, ICRT significantly outperforms current state-of-the-art next-token prediction models in robotics on generalizing to unseen tasks. Code, checkpoints and data are available on https://icrt.dev/
cs.RO
[ "cs.RO", "cs.AI" ]
WebPilot: A Versatile and Autonomous Multi-Agent System for Web Task Execution with Strategic Exploration
http://arxiv.org/abs/2408.15978v1
http://arxiv.org/abs/2408.15978v1
http://arxiv.org/pdf/2408.15978v1
2024-08-28
2024-08-28
[ "Yao Zhang", "Zijian Ma", "Yunpu Ma", "Zhen Han", "Yu Wu", "Volker Tresp" ]
[ "", "", "", "", "", "" ]
LLM-based autonomous agents often fail to execute complex web tasks that require dynamic interaction due to the inherent uncertainty and complexity of these environments. Existing LLM-based web agents typically rely on rigid, expert-designed policies specific to certain states and actions, which lack the flexibility and generalizability needed to adapt to unseen tasks. In contrast, humans excel by exploring unknowns, continuously adapting strategies, and resolving ambiguities through exploration. To emulate human-like adaptability, web agents need strategic exploration and complex decision-making. Monte Carlo Tree Search (MCTS) is well-suited for this, but classical MCTS struggles with vast action spaces, unpredictable state transitions, and incomplete information in web tasks. In light of this, we develop WebPilot, a multi-agent system with a dual optimization strategy that improves MCTS to better handle complex web environments. Specifically, the Global Optimization phase involves generating a high-level plan by breaking down tasks into manageable subtasks and continuously refining this plan, thereby focusing the search process and mitigating the challenges posed by vast action spaces in classical MCTS. Subsequently, the Local Optimization phase executes each subtask using a tailored MCTS designed for complex environments, effectively addressing uncertainties and managing incomplete information. Experimental results on WebArena and MiniWoB++ demonstrate the effectiveness of WebPilot. Notably, on WebArena, WebPilot achieves SOTA performance with GPT-4, achieving a 93% relative increase in success rate over the concurrent tree search-based method. WebPilot marks a significant advancement in general autonomous agent capabilities, paving the way for more advanced and reliable decision-making in practical environments.
cs.AI
[ "cs.AI" ]
Stability of Primal-Dual Gradient Flow Dynamics for Multi-Block Convex Optimization Problems
http://arxiv.org/abs/2408.15969v1
http://arxiv.org/abs/2408.15969v1
http://arxiv.org/pdf/2408.15969v1
2024-08-28
2024-08-28
[ "Ibrahim K. Ozaslan", "Panagiotis Patrinos", "Mihailo R. Jovanović" ]
[ "", "", "" ]
We examine stability properties of primal-dual gradient flow dynamics for composite convex optimization problems with multiple, possibly nonsmooth, terms in the objective function under the generalized consensus constraint. The proposed dynamics are based on the proximal augmented Lagrangian and they provide a viable alternative to ADMM which faces significant challenges from both analysis and implementation viewpoints in large-scale multi-block scenarios. In contrast to customized algorithms with individualized convergence guarantees, we provide a systematic approach for solving a broad class of challenging composite optimization problems. We leverage various structural properties to establish global (exponential) convergence guarantees for the proposed dynamics. Our assumptions are much weaker than those required to prove (exponential) stability of various primal-dual dynamics as well as (linear) convergence of discrete-time methods, e.g., standard two-block and multi-block ADMM and EXTRA algorithms. Finally, we show necessity of some of our structural assumptions for exponential stability and provide computational experiments to demonstrate the convenience of the proposed dynamics for parallel and distributed computing applications.
31 pages; 4 figures
math.OC
[ "math.OC", "cs.AI", "cs.LG", "cs.SY", "eess.SY" ]
More Text, Less Point: Towards 3D Data-Efficient Point-Language Understanding
http://arxiv.org/abs/2408.15966v1
http://arxiv.org/abs/2408.15966v1
http://arxiv.org/pdf/2408.15966v1
2024-08-28
2024-08-28
[ "Yuan Tang", "Xu Han", "Xianzhi Li", "Qiao Yu", "Jinfeng Xu", "Yixue Hao", "Long Hu", "Min Chen" ]
[ "", "", "", "", "", "", "", "" ]
Enabling Large Language Models (LLMs) to comprehend the 3D physical world remains a significant challenge. Due to the lack of large-scale 3D-text pair datasets, the success of LLMs has yet to be replicated in 3D understanding. In this paper, we rethink this issue and propose a new task: 3D Data-Efficient Point-Language Understanding. The goal is to enable LLMs to achieve robust 3D object understanding with minimal 3D point cloud and text data pairs. To address this task, we introduce GreenPLM, which leverages more text data to compensate for the lack of 3D data. First, inspired by using CLIP to align images and text, we utilize a pre-trained point cloud-text encoder to map the 3D point cloud space to the text space. This mapping allows us to seamlessly connect the text space with LLMs. Once the point-text-LLM connection is established, we further enhance text-LLM alignment by expanding the intermediate text space, thereby reducing the reliance on 3D point cloud data. Specifically, we generate 6M free-text descriptions of 3D objects, and design a three-stage training strategy to help LLMs better explore the intrinsic connections between different modalities. To achieve efficient modality alignment, we design a zero-parameter cross-attention module for token pooling. Extensive experimental results show that GreenPLM requires only 12% of the 3D training data used by existing state-of-the-art models to achieve superior 3D understanding. Remarkably, GreenPLM also achieves competitive performance using text-only data. The code and weights are available at: https://github.com/TangYuan96/GreenPLM.
cs.CV
[ "cs.CV", "cs.AI", "cs.CL" ]
Atari-GPT: Investigating the Capabilities of Multimodal Large Language Models as Low-Level Policies for Atari Games
http://arxiv.org/abs/2408.15950v1
http://arxiv.org/abs/2408.15950v1
http://arxiv.org/pdf/2408.15950v1
2024-08-28
2024-08-28
[ "Nicholas R. Waytowich", "Devin White", "MD Sunbeam", "Vinicius G. Goecks" ]
[ "", "", "", "" ]
Recent advancements in large language models (LLMs) have expanded their capabilities beyond traditional text-based tasks to multimodal domains, integrating visual, auditory, and textual data. While multimodal LLMs have been extensively explored for high-level planning in domains like robotics and games, their potential as low-level controllers remains largely untapped. This paper explores the application of multimodal LLMs as low-level controllers in the domain of Atari video games, introducing Atari game performance as a new benchmark for evaluating the ability of multimodal LLMs to perform low-level control tasks. Unlike traditional reinforcement learning (RL) and imitation learning (IL) methods that require extensive computational resources as well as reward function specification, these LLMs utilize pre-existing multimodal knowledge to directly engage with game environments. Our study assesses the performance of multiple multimodal LLMs against traditional RL agents, human players, and random agents, focusing on their ability to understand and interact with complex visual scenes and formulate strategic responses. Additionally, we examine the impact of In-Context Learning (ICL) by incorporating human-demonstrated game-play trajectories to enhance the models' contextual understanding. Through this investigation, we aim to determine the extent to which multimodal LLMs can leverage their extensive training to effectively function as low-level controllers, thereby redefining potential applications in dynamic and visually complex environments. Additional results and videos are available at our project webpage: https://sites.google.com/view/atari-gpt/.
Currently under review
cs.AI
[ "cs.AI" ]
Local Descriptors Weighted Adaptive Threshold Filtering For Few-Shot Learning
http://arxiv.org/abs/2408.15924v1
http://arxiv.org/abs/2408.15924v1
http://arxiv.org/pdf/2408.15924v1
2024-08-28
2024-08-28
[ "Bingchen Yan" ]
[ "" ]
Few-shot image classification is a challenging task in the field of machine learning, involving the identification of new categories using a limited number of labeled samples. In recent years, methods based on local descriptors have made significant progress in this area. However, the key to improving classification accuracy lies in effectively filtering background noise and accurately selecting critical local descriptors highly relevant to image category information. To address this challenge, we propose an innovative weighted adaptive threshold filtering (WATF) strategy for local descriptors. This strategy can dynamically adjust based on the current task and image context, thereby selecting local descriptors most relevant to the image category. This enables the model to better focus on category-related information while effectively mitigating interference from irrelevant background regions. To evaluate the effectiveness of our method, we adopted the N-way K-shot experimental framework. Experimental results show that our method not only improves the clustering effect of selected local descriptors but also significantly enhances the discriminative ability between image categories. Notably, our method maintains a simple and lightweight design philosophy without introducing additional learnable parameters. This feature ensures consistency in filtering capability during both training and testing phases, further enhancing the reliability and practicality of the method.
cs.CV
[ "cs.CV", "cs.AI" ]
Leveraging Open Knowledge for Advancing Task Expertise in Large Language Models
http://arxiv.org/abs/2408.15915v1
http://arxiv.org/abs/2408.15915v1
http://arxiv.org/pdf/2408.15915v1
2024-08-28
2024-08-28
[ "Yuncheng Yang", "Yulei Qin", "Tong Wu", "Zihan Xu", "Gang Li", "Pengcheng Guo", "Hang Shao", "Yucheng Shi", "Ke Li", "Xing Sun", "Jie Yang", "Yun Gu" ]
[ "", "", "", "", "", "", "", "", "", "", "", "" ]
The cultivation of expertise for large language models (LLMs) to solve tasks in specific areas often requires special-purpose tuning with calibrated behaviors on the expected stable outputs. To avoid the huge cost of manually preparing instruction datasets and of training for up to hundreds of hours, the exploitation of open knowledge, including a wealth of low-rank adaptation (LoRA) models and instruction datasets, serves as a good starting point. However, existing methods for model and data selection focus on the performance of general-purpose capabilities while neglecting the knowledge gap exposed in domain-specific deployment. In the present study, we propose to bridge this gap by introducing a few human-annotated samples (i.e., K-shot) for advancing task expertise of LLMs with open knowledge. Specifically, we develop an efficient and scalable pipeline to cost-efficiently produce task experts, where K-shot data intervene in selecting the most promising expert candidates and the task-relevant instructions. A mixture-of-experts (MoE) system is built to make the best use of individual-yet-complementary knowledge between multiple experts. We unveil the two keys to the success of an MoE system: 1) adherence to K-shot, and 2) insistence on diversity. For the former, we ensure that models that truly possess problem-solving abilities on K-shot are selected rather than blind guessers. Besides, during data selection, instructions that share task-relevant contexts with K-shot are prioritized. For the latter, we highlight the diversity of the constituting experts and that of the fine-tuning instructions throughout the model and data selection process. Extensive experimental results confirm the superiority of our approach over existing methods in the utilization of open knowledge across various tasks. Codes and models will be released later.
28 pages, 12 tables, 10 figures
cs.CV
[ "cs.CV", "cs.AI", "cs.CL" ]
Efficient $k$-NN Search in IoT Data: Overlap Optimization in Tree-Based Indexing Structures
http://arxiv.org/abs/2408.16036v1
http://arxiv.org/abs/2408.16036v1
http://arxiv.org/pdf/2408.16036v1
2024-08-28
2024-08-28
[ "Ala-Eddine Benrazek", "Zineddine Kouahla", "Brahim Farou", "Hamid Seridi", "Ibtissem Kemouguette" ]
[ "", "", "", "", "" ]
The proliferation of interconnected devices in the Internet of Things (IoT) has led to an exponential increase in data, commonly known as Big IoT Data. Efficient retrieval of this heterogeneous data demands a robust indexing mechanism for effective organization. However, a significant challenge remains: the overlap in data space partitions during index construction. This overlap increases node access during search and retrieval, resulting in higher resource consumption and performance bottlenecks, and impeding system scalability. To address this issue, we propose three innovative heuristics designed to quantify and strategically reduce data space partition overlap. The volume-based method (VBM) offers a detailed assessment by calculating the intersection volume between partitions, providing deeper insights into spatial relationships. The distance-based method (DBM) enhances efficiency by using the distance between partition centers and radii to evaluate overlap, offering a streamlined yet accurate approach. Finally, the object-based method (OBM) provides a practical solution by counting objects across multiple partitions, delivering an intuitive understanding of data space dynamics. Experimental results demonstrate the effectiveness of these methods in reducing search time, underscoring their potential to improve data space partitioning and enhance overall system performance.
28 pages, 21 figures, 1 table
cs.DB
[ "cs.DB", "cs.AI", "cs.IR", "cs.PF", "68P05, 68T01, 68P20", "E.1; H.2; H.3; I.2" ]
Nexus: Specialization meets Adaptability for Efficiently Training Mixture of Experts
http://arxiv.org/abs/2408.15901v1
http://arxiv.org/abs/2408.15901v1
http://arxiv.org/pdf/2408.15901v1
2024-08-28
2024-08-28
[ "Nikolas Gritsch", "Qizhen Zhang", "Acyr Locatelli", "Sara Hooker", "Ahmet Üstün" ]
[ "", "", "", "", "" ]
Efficiency, specialization, and adaptability to new data distributions are qualities that are hard to combine in current Large Language Models. The Mixture of Experts (MoE) architecture has been the focus of significant research because its inherent conditional computation enables such desirable properties. In this work, we focus on "upcycling" dense expert models into an MoE, aiming to improve specialization while also adding the ability to adapt to new tasks easily. We introduce Nexus, an enhanced MoE architecture with adaptive routing where the model learns to project expert embeddings from domain representations. This approach allows Nexus to flexibly add new experts after the initial upcycling through separately trained dense models, without requiring large-scale MoE training for unseen data domains. Our experiments show that Nexus achieves a relative gain of up to 2.1% over the baseline for initial upcycling, and an 18.8% relative gain for extending the MoE with a new expert by using limited finetuning data. This flexibility of Nexus is crucial to enable an open-source ecosystem where every user continuously assembles their own MoE-mix according to their needs.
cs.CL
[ "cs.CL", "cs.AI", "cs.LG" ]
Airfoil Diffusion: Denoising Diffusion Model For Conditional Airfoil Generation
http://arxiv.org/abs/2408.15898v1
http://arxiv.org/abs/2408.15898v1
http://arxiv.org/pdf/2408.15898v1
2024-08-28
2024-08-28
[ "Reid Graves", "Amir Barati Farimani" ]
[ "", "" ]
The design of aerodynamic shapes, such as airfoils, has traditionally required significant computational resources and relied on predefined design parameters, which limit the potential for novel shape synthesis. In this work, we introduce a data-driven methodology for airfoil generation using a diffusion model. Trained on a dataset of preexisting airfoils, our model can generate an arbitrary number of new airfoils from random vectors, which can be conditioned on specific aerodynamic performance metrics such as lift and drag, or geometric criteria. Our results demonstrate that the diffusion model effectively produces airfoil shapes with realistic aerodynamic properties, offering substantial improvements in efficiency, flexibility, and the potential for discovering innovative airfoil designs. This approach significantly expands the design space, facilitating the synthesis of high-performance aerodynamic shapes that transcend the limitations of traditional methods.
12 Pages, 6 figures
cs.LG
[ "cs.LG", "cs.AI" ]
A New Method for Cross-Lingual-based Semantic Role Labeling
http://arxiv.org/abs/2408.15896v1
http://arxiv.org/abs/2408.15896v1
http://arxiv.org/pdf/2408.15896v1
2024-08-28
2024-08-28
[ "Mohammad Ebrahimi", "Behrouz Minaei Bidgoli", "Nasim Khozouei" ]
[ "", "", "" ]
Semantic role labeling is a crucial task in natural language processing, enabling better comprehension of natural language. However, the lack of annotated data in multiple languages has posed a challenge for researchers. To address this, a deep learning algorithm based on model transfer has been proposed. The algorithm utilizes a dataset consisting of the English portion of CoNLL2009 and a corpus of semantic roles in Persian. To optimize the efficiency of training, only ten percent of the training data from each language is used. The results of the proposed model demonstrate significant improvements compared to Niksirt et al.'s model. In monolingual mode, the proposed model achieved a 2.05 percent improvement in F1-score, while in cross-lingual mode, the improvement was even more substantial, reaching 6.23 percent. It is worth noting that the compared model only trained two of the four stages of semantic role labeling and employed gold data for the remaining two stages. This suggests that the actual superiority of the proposed model surpasses the reported numbers by a significant margin. The development of cross-lingual methods for semantic role labeling holds promise, particularly in addressing the scarcity of annotated data for various languages. These advancements pave the way for further research in understanding and processing natural language across different linguistic contexts.
cs.CL
[ "cs.CL", "cs.AI", "cs.LG" ]
Enhancing Intrusion Detection in IoT Environments: An Advanced Ensemble Approach Using Kolmogorov-Arnold Networks
http://arxiv.org/abs/2408.15886v2
http://arxiv.org/abs/2408.15886v2
http://arxiv.org/pdf/2408.15886v2
2024-08-28
2024-08-29
[ "Amar Amouri", "Mohamad Mahmoud Al Rahhal", "Yakoub Bazi", "Ismail Butun", "Imad Mahgoub" ]
[ "", "", "", "", "" ]
In recent years, the evolution of machine learning techniques has significantly impacted the field of intrusion detection, particularly within the context of the Internet of Things (IoT). As IoT networks expand, the need for robust security measures to counteract potential threats has become increasingly critical. This paper introduces a hybrid Intrusion Detection System (IDS) that synergistically combines Kolmogorov-Arnold Networks (KANs) with the XGBoost algorithm. Our proposed IDS leverages the unique capabilities of KANs, which utilize learnable activation functions to model complex relationships within data, alongside the powerful ensemble learning techniques of XGBoost, known for its high performance in classification tasks. This hybrid approach not only enhances the detection accuracy but also improves the interpretability of the model, making it suitable for dynamic and intricate IoT environments. Experimental evaluations demonstrate that our hybrid IDS achieves an impressive detection accuracy exceeding 99% in distinguishing between benign and malicious activities. Additionally, we were able to achieve F1 scores, precision, and recall that exceeded 98%. Furthermore, we conduct a comparative analysis against traditional Multi-Layer Perceptron (MLP) networks, assessing performance metrics such as Precision, Recall, and F1-score. The results underscore the efficacy of integrating KANs with XGBoost, highlighting the potential of this innovative approach to significantly strengthen the security framework of IoT networks.
To be presented at the 11th International Symposium on Networks, Computers and Communications (ISNCC'24), to be held in Washington, DC, USA, from October 22 to 25, 2024. Accepted (6 pages and 5 figures)
cs.CR
[ "cs.CR", "cs.AI" ]
Persuasion Games using Large Language Models
http://arxiv.org/abs/2408.15879v1
http://arxiv.org/abs/2408.15879v1
http://arxiv.org/pdf/2408.15879v1
2024-08-28
2024-08-28
[ "Ganesh Prasath Ramani", "Shirish Karande", "Santhosh V", "Yash Bhatia" ]
[ "", "", "", "" ]
Large Language Models (LLMs) have emerged as formidable instruments capable of comprehending and producing human-like text. This paper explores the potential of LLMs to shape human perspectives and subsequently influence their decisions on particular tasks. This capability finds applications in diverse domains such as investment, credit cards, insurance, and retail, wherein LLMs assist users in selecting appropriate insurance policies, investment plans, and credit cards, as well as in Behavioral Change Support Systems (BCSS). We present a sophisticated multi-agent framework wherein a consortium of agents operates in a collaborative manner. The primary agent engages directly with users through persuasive dialogue, while the auxiliary agents perform tasks such as information retrieval, response analysis, development of persuasion strategies, and validation of facts. Empirical evidence from our experiments demonstrates that this collaborative methodology significantly enhances the persuasive efficacy of the LLM. We analyze user resistance to persuasive efforts continuously and counteract it by employing a combination of rule-based and LLM-based resistance-persuasion mapping techniques. We employ simulated personas and generate conversations in insurance, banking, and retail domains to evaluate the proficiency of large language models (LLMs) in recognizing, adjusting to, and influencing various personality types. Concurrently, we examine the resistance mechanisms employed by LLM-simulated personas. Persuasion is quantified via measurable surveys before and after interaction, LLM-generated scores on conversation, and user decisions (purchase or non-purchase).
cs.AI
[ "cs.AI", "cs.CL" ]
Robust Statistical Scaling of Outlier Scores: Improving the Quality of Outlier Probabilities for Outliers (Extended Version)
http://arxiv.org/abs/2408.15874v1
http://arxiv.org/abs/2408.15874v1
http://arxiv.org/pdf/2408.15874v1
2024-08-28
2024-08-28
[ "Philipp Röchner", "Henrique O. Marques", "Ricardo J. G. B. Campello", "Arthur Zimek", "Franz Rothlauf" ]
[ "", "", "", "", "" ]
Outlier detection algorithms typically assign an outlier score to each observation in a dataset, indicating the degree to which an observation is an outlier. However, these scores are often not comparable across algorithms and can be difficult for humans to interpret. Statistical scaling addresses this problem by transforming outlier scores into outlier probabilities without using ground-truth labels, thereby improving interpretability and comparability across algorithms. However, the quality of this transformation can be different for outliers and inliers. Missing outliers in scenarios where they are of particular interest - such as healthcare, finance, or engineering - can be costly or dangerous. Thus, ensuring good probabilities for outliers is essential. This paper argues that statistical scaling, as commonly used in the literature, does not produce equally good probabilities for outliers as for inliers. Therefore, we propose robust statistical scaling, which uses robust estimators to improve the probabilities for outliers. We evaluate several variants of our method against other outlier score transformations for real-world datasets and outlier detection algorithms, where it can improve the probabilities for outliers.
15 pages, 4 figures, accepted for publication in SISAP 2024
cs.LG
[ "cs.LG", "cs.AI" ]
GenDDS: Generating Diverse Driving Video Scenarios with Prompt-to-Video Generative Model
http://arxiv.org/abs/2408.15868v1
http://arxiv.org/abs/2408.15868v1
http://arxiv.org/pdf/2408.15868v1
2024-08-28
2024-08-28
[ "Yongjie Fu", "Yunlong Li", "Xuan Di" ]
[ "", "", "" ]
Autonomous driving training requires a diverse range of datasets encompassing various traffic conditions, weather scenarios, and road types. Traditional data augmentation methods often struggle to generate datasets that represent rare occurrences. To address this challenge, we propose GenDDS, a novel approach for driving scenario generation that leverages the capabilities of Stable Diffusion XL (SDXL), an advanced latent diffusion model. Our methodology involves the use of descriptive prompts to guide the synthesis process, aimed at producing realistic and diverse driving scenarios. With the power of the latest computer vision techniques, such as ControlNet and Hotshot-XL, we have built a complete pipeline for video generation together with SDXL. We employ the KITTI dataset, which includes real-world driving videos, to train the model. Through a series of experiments, we demonstrate that our model can generate high-quality driving videos that closely replicate the complexity and variability of real-world driving scenarios. This research contributes to the development of sophisticated training data for autonomous driving systems and opens new avenues for creating virtual environments for simulation and validation purposes.
cs.CV
[ "cs.CV", "cs.AI" ]
Retrieval-Augmented Instruction Tuning for Automated Process Engineering Calculations : A Tool-Chaining Problem-Solving Framework with Attributable Reflection
http://arxiv.org/abs/2408.15866v1
http://arxiv.org/abs/2408.15866v1
http://arxiv.org/pdf/2408.15866v1
2024-08-28
2024-08-28
[ "Sagar Srinivas Sakhinana", "Geethan Sannidhi", "Venkataramana Runkana" ]
[ "", "", "" ]
The current technology landscape lacks a foundational AI model for solving process engineering calculations. In this work, we introduce a novel autonomous agent framework leveraging Retrieval-Augmented Instruction-Tuning (RAIT) to enhance open, customizable small code language models (SLMs) for these calculations. By combining instruction-tuned code SLMs with Retrieval-Augmented Code Generation (RACG) using external tools, the agent generates, debugs, and optimizes code from natural language specifications. Our approach addresses the current lack of a foundational AI model for specialized process engineering tasks and offers the benefits of explainability, knowledge editing, and cost-effectiveness. Additionally, we curate custom datasets of chemical and process engineering problems and solutions to overcome data scarcity. Experimental results show that our framework matches the performance of large-scale proprietary models on benchmark datasets, proving its effectiveness and usability.
Accepted for publication at ML4CCE workshop at ECML PKDD 2024. Please find the link: https://ml4cce-ecml.com/#agenda
cs.SE
[ "cs.SE", "cs.AI", "cs.LG" ]
microYOLO: Towards Single-Shot Object Detection on Microcontrollers
http://arxiv.org/abs/2408.15865v1
http://arxiv.org/abs/2408.15865v1
http://arxiv.org/pdf/2408.15865v1
2024-08-28
2024-08-28
[ "Mark Deutel", "Christopher Mutschler", "Jürgen Teich" ]
[ "", "", "" ]
This work-in-progress paper presents results on the feasibility of single-shot object detection on microcontrollers using YOLO. Single-shot object detectors like YOLO are widely used; however, due to their complexity, they run mainly on larger GPU-based platforms. We present microYOLO, which can be used on Cortex-M based microcontrollers, such as the OpenMV H7 R2, achieving about 3.5 FPS when classifying 128x128 RGB images while using less than 800 KB Flash and less than 350 KB RAM. Furthermore, we share experimental results for three different object detection tasks, analyzing the accuracy of microYOLO on them.
Published at the ECML PKDD Conference 2023, at the 4th Workshop on IoT, Edge, and Mobile for Embedded Machine Learning
cs.CV
[ "cs.CV", "cs.AI", "cs.LG" ]
Knowledge Navigator: LLM-guided Browsing Framework for Exploratory Search in Scientific Literature
http://arxiv.org/abs/2408.15836v1
http://arxiv.org/abs/2408.15836v1
http://arxiv.org/pdf/2408.15836v1
2024-08-28
2024-08-28
[ "Uri Katz", "Mosh Levy", "Yoav Goldberg" ]
[ "", "", "" ]
The exponential growth of scientific literature necessitates advanced tools for effective knowledge exploration. We present Knowledge Navigator, a system designed to enhance exploratory search abilities by organizing and structuring the retrieved documents from broad topical queries into a navigable, two-level hierarchy of named and descriptive scientific topics and subtopics. This structured organization provides an overall view of the research themes in a domain, while also enabling iterative search and deeper knowledge discovery within specific subtopics by allowing users to refine their focus and retrieve additional relevant documents. Knowledge Navigator combines LLM capabilities with cluster-based methods to enable an effective browsing method. We demonstrate our approach's effectiveness through automatic and manual evaluations on two novel benchmarks, CLUSTREC-COVID and SCITOC. Our code, prompts, and benchmarks are made publicly available.
cs.IR
[ "cs.IR", "cs.AI", "cs.CL" ]
Object Detection for Vehicle Dashcams using Transformers
http://arxiv.org/abs/2408.15809v1
http://arxiv.org/abs/2408.15809v1
http://arxiv.org/pdf/2408.15809v1
2024-08-28
2024-08-28
[ "Osama Mustafa", "Khizer Ali", "Anam Bibi", "Imran Siddiqi", "Momina Moetesum" ]
[ "", "", "", "", "" ]
The use of intelligent automation is growing significantly in the automotive industry, as it assists drivers and fleet management companies, thus increasing their productivity. Dash cams are now being used for this purpose, which enables the instant identification and understanding of multiple objects and occurrences in the surroundings. In this paper, we propose a novel approach for object detection in dashcams using transformers. Our system is based on the state-of-the-art DEtection TRansformer (DETR), which has demonstrated strong performance in a variety of conditions, including different weather and illumination scenarios. The use of transformers allows for the consideration of contextual information in decision-making, improving the accuracy of object detection. To validate our approach, we have trained our DETR model on a dataset that represents real-world conditions. Our results show that the use of intelligent automation through transformers can significantly enhance the capabilities of dashcam systems. The model achieves an mAP of 0.95 on detection.
7 Pages, and 6 Figures
cs.CV
[ "cs.CV", "cs.AI" ]
ModalityMirror: Improving Audio Classification in Modality Heterogeneity Federated Learning with Multimodal Distillation
http://arxiv.org/abs/2408.15803v1
http://arxiv.org/abs/2408.15803v1
http://arxiv.org/pdf/2408.15803v1
2024-08-28
2024-08-28
[ "Tiantian Feng", "Tuo Zhang", "Salman Avestimehr", "Shrikanth S. Narayanan" ]
[ "", "", "", "" ]
Multimodal Federated Learning frequently encounters challenges of client modality heterogeneity, leading to undesired performance for the secondary modality in multimodal learning. This is particularly prevalent in audiovisual learning, where audio is often assumed to be the weaker modality in recognition tasks. To address this challenge, we introduce ModalityMirror, which improves audio model performance by leveraging knowledge distillation from an audiovisual federated learning model. ModalityMirror involves two phases: a modality-wise FL stage to aggregate uni-modal encoders, and a federated knowledge distillation stage on multi-modality clients to train a unimodal student model. Our results demonstrate that ModalityMirror significantly improves audio classification compared to state-of-the-art FL methods such as Harmony, particularly in audiovisual FL with missing video. Our approach unlocks the potential for exploiting the diverse modality spectrum inherent in multi-modal FL.
eess.AS
[ "eess.AS", "cs.AI", "cs.SD" ]
Emulating Brain-like Rapid Learning in Neuromorphic Edge Computing
http://arxiv.org/abs/2408.15800v1
http://arxiv.org/abs/2408.15800v1
http://arxiv.org/pdf/2408.15800v1
2024-08-28
2024-08-28
[ "Kenneth Stewart", "Michael Neumeier", "Sumit Bam Shrestha", "Garrick Orchard", "Emre Neftci" ]
[ "", "", "", "", "" ]
Achieving personalized intelligence at the edge with real-time learning capabilities holds enormous promise in enhancing our daily experiences and helping decision making, planning, and sensing. However, efficient and reliable edge learning remains difficult with current technology due to the lack of personalized data, insufficient hardware capabilities, and inherent challenges posed by online learning. Over time and across multiple developmental stages, the brain has evolved to efficiently incorporate new knowledge by gradually building on previous knowledge. In this work, we emulate the multiple stages of learning with digital neuromorphic technology that simulates the neural and synaptic processes of the brain using two stages of learning. First, a meta-training stage trains the hyperparameters of synaptic plasticity for one-shot learning using a differentiable simulation of the neuromorphic hardware. This meta-training process refines a hardware local three-factor synaptic plasticity rule and its associated hyperparameters to align with the trained task domain. In a subsequent deployment stage, these optimized hyperparameters enable fast, data-efficient, and accurate learning of new classes. We demonstrate our approach using event-driven vision sensor data and the Intel Loihi neuromorphic processor with its plasticity dynamics, achieving real-time one-shot learning of new classes that is vastly improved over transfer learning. Our methodology can be deployed with arbitrary plasticity models and can be applied to situations demanding quick learning and adaptation at the edge, such as navigating unfamiliar environments or learning unexpected categories of data through user engagement.
17 page journal article. Submitted to IOP NCE
cs.NE
[ "cs.NE", "cs.AI" ]
Evaluating Named Entity Recognition Using Few-Shot Prompting with Large Language Models
http://arxiv.org/abs/2408.15796v1
http://arxiv.org/abs/2408.15796v1
http://arxiv.org/pdf/2408.15796v1
2024-08-28
2024-08-28
[ "Hédi Zhegidi", "Ludovic Moncla" ]
[ "", "" ]
This paper evaluates Few-Shot Prompting with Large Language Models for Named Entity Recognition (NER). Traditional NER systems rely on extensive labeled datasets, which are costly and time-consuming to obtain. Few-Shot Prompting or in-context learning enables models to recognize entities with minimal examples. We assess state-of-the-art models like GPT-4 in NER tasks, comparing their few-shot performance to fully supervised benchmarks. Results show that while there is a performance gap, large models excel in adapting to new entity types and domains with very limited data. We also explore the effects of prompt engineering, guided output format and context length on performance. This study underscores Few-Shot Learning's potential to reduce the need for large labeled datasets, enhancing NER scalability and accessibility.
Github repo: https://github.com/GEODE-project/ner-llm
cs.IR
[ "cs.IR", "cs.AI" ]
LogicGame: Benchmarking Rule-Based Reasoning Abilities of Large Language Models
http://arxiv.org/abs/2408.15778v1
http://arxiv.org/abs/2408.15778v1
http://arxiv.org/pdf/2408.15778v1
2024-08-28
2024-08-28
[ "Jiayi Gui", "Yiming Liu", "Jiale Cheng", "Xiaotao Gu", "Xiao Liu", "Hongning Wang", "Yuxiao Dong", "Jie Tang", "Minlie Huang" ]
[ "", "", "", "", "", "", "", "", "" ]
Large Language Models (LLMs) have demonstrated notable capabilities across various tasks, showcasing complex problem-solving abilities. Understanding and executing complex rules, along with multi-step planning, are fundamental to logical reasoning and critical for practical LLM agents and decision-making systems. However, evaluating LLMs as effective rule-based executors and planners remains underexplored. In this paper, we introduce LogicGame, a novel benchmark designed to evaluate the comprehensive rule understanding, execution, and planning capabilities of LLMs. Unlike traditional benchmarks, LogicGame provides diverse games that contain a series of rules with an initial state, requiring models to comprehend and apply predefined regulations to solve problems. We create simulated scenarios in which models execute or plan operations to achieve specific outcomes. These game scenarios are specifically designed to distinguish logical reasoning from mere knowledge by relying exclusively on predefined rules. This separation allows for a pure assessment of rule-based reasoning capabilities. The evaluation considers not only final outcomes but also intermediate steps, providing a comprehensive assessment of model performance. Moreover, these intermediate steps are deterministic and can be automatically verified. LogicGame defines game scenarios with varying difficulty levels, from simple rule applications to complex reasoning chains, in order to offer a precise evaluation of model performance on rule understanding and multi-step execution. Utilizing LogicGame, we test various LLMs and identify notable shortcomings in their rule-based logical reasoning abilities.
cs.AI
[ "cs.AI", "cs.CL" ]
Easy, Interpretable, Effective: openSMILE for voice deepfake detection
http://arxiv.org/abs/2408.15775v2
http://arxiv.org/abs/2408.15775v2
http://arxiv.org/pdf/2408.15775v2
2024-08-28
2024-08-29
[ "Octavian Pascu", "Dan Oneata", "Horia Cucu", "Nicolas M. Müller" ]
[ "", "", "", "" ]
In this paper, we demonstrate that attacks in the latest ASVspoof5 dataset -- a de facto standard in the field of voice authenticity and deepfake detection -- can be identified with surprising accuracy using a small subset of very simplistic features. These are derived from the openSMILE library, and are scalar-valued, easy to compute, and human interpretable. For example, attack A10's unvoiced segments have a mean length of 0.09 +- 0.02, while bona fide instances have a mean length of 0.18 +- 0.07. Using this feature alone, a threshold classifier achieves an Equal Error Rate (EER) of 10.3% for attack A10. Similarly, across all attacks, we achieve EERs as low as 0.8%, with an overall EER of 15.7 +- 6.0%. We explore the generalization capabilities of these features and find that some of them transfer effectively between attacks, primarily when the attacks originate from similar Text-to-Speech (TTS) architectures. This finding may indicate that voice anti-spoofing is, in part, a problem of identifying and remembering signatures or fingerprints of individual TTS systems. This allows us to better understand anti-spoofing models and their challenges in real-world application.
eess.AS
[ "eess.AS", "cs.AI", "cs.SD" ]
A Survey on Evaluation of Multimodal Large Language Models
http://arxiv.org/abs/2408.15769v1
http://arxiv.org/abs/2408.15769v1
http://arxiv.org/pdf/2408.15769v1
2024-08-28
2024-08-28
[ "Jiaxing Huang", "Jingyi Zhang" ]
[ "", "" ]
Multimodal Large Language Models (MLLMs) mimic the human perception and reasoning system by integrating powerful Large Language Models (LLMs) with various modality encoders (e.g., vision, audio), positioning LLMs as the "brain" and various modality encoders as sensory organs. This framework endows MLLMs with human-like capabilities and suggests a potential pathway towards achieving artificial general intelligence (AGI). With the emergence of all-round MLLMs like GPT-4V and Gemini, a multitude of evaluation methods have been developed to assess their capabilities across different dimensions. This paper presents a systematic and comprehensive review of MLLM evaluation methods, covering the following key aspects: (1) the background of MLLMs and their evaluation; (2) "what to evaluate", which reviews and categorizes existing MLLM evaluation tasks based on the capabilities assessed, including general multimodal recognition, perception, reasoning and trustworthiness, and domain-specific applications such as socioeconomic, natural sciences and engineering, medical usage, AI agent, remote sensing, video and audio processing, 3D point cloud analysis, and others; (3) "where to evaluate", which summarizes MLLM evaluation benchmarks into general and specific benchmarks; (4) "how to evaluate", which reviews and illustrates MLLM evaluation steps and metrics. Our overarching goal is to provide valuable insights for researchers in the field of MLLM evaluation, thereby facilitating the development of more capable and reliable MLLMs. We emphasize that evaluation should be regarded as a critical discipline, essential for advancing the field of MLLMs.
cs.CV
[ "cs.CV", "cs.AI", "cs.CL" ]