Dataset schema (field: type, observed range):
title: stringlengths (4 to 246)
id: stringlengths (32 to 39)
arxiv_url: stringlengths (32 to 39)
pdf_url: stringlengths (32 to 39)
published_date: stringlengths (10 to 10)
updated_date: stringlengths (10 to 10)
authors: sequencelengths (1 to 535)
affiliations: sequencelengths (1 to 535)
summary: stringlengths (23 to 3.54k)
comment: stringlengths (0 to 762)
journal_ref: stringlengths (0 to 545)
doi: stringlengths (0 to 151)
primary_category: stringclasses (156 values)
categories: sequencelengths (1 to 11)
FedGlu: A personalized federated learning-based glucose forecasting algorithm for improved performance in glycemic excursion regions
http://arxiv.org/abs/2408.13926v1
http://arxiv.org/abs/2408.13926v1
http://arxiv.org/pdf/2408.13926v1
2024-08-25
2024-08-25
[ "Darpit Dave", "Kathan Vyas", "Jagadish Kumaran Jayagopal", "Alfredo Garcia", "Madhav Erraguntla", "Mark Lawley" ]
[ "", "", "", "", "", "" ]
Continuous glucose monitoring (CGM) devices provide real-time glucose monitoring and timely alerts for glycemic excursions, improving glycemic control among patients with diabetes. However, identifying rare events like hypoglycemia and hyperglycemia remains challenging due to their infrequency. Moreover, limited access to sensitive patient data hampers the development of robust machine learning models. Our objective is to accurately predict glycemic excursions while addressing data privacy concerns. To tackle excursion prediction, we propose a novel Hypo-Hyper (HH) loss function, which significantly improves performance in the glycemic excursion regions. The HH loss function demonstrates a 46% improvement over mean-squared error (MSE) loss across 125 patients. To address privacy concerns, we propose FedGlu, a machine learning model trained in a federated learning (FL) framework. FL allows collaborative learning without sharing sensitive data by training models locally and sharing only model parameters across patients. FedGlu achieves a 35% superior glycemic excursion detection rate compared to local models. This improvement translates to enhanced performance in predicting both hypoglycemia and hyperglycemia for 105 out of 125 patients. These results underscore the effectiveness of the proposed HH loss function in improving the predictive capabilities of glucose forecasting. Moreover, implementing models within a federated learning framework not only ensures better predictive capabilities but also safeguards sensitive data.
cs.LG
[ "cs.LG", "cs.AI" ]
Geo-Llama: Leveraging LLMs for Human Mobility Trajectory Generation with Spatiotemporal Constraints
http://arxiv.org/abs/2408.13918v2
http://arxiv.org/abs/2408.13918v2
http://arxiv.org/pdf/2408.13918v2
2024-08-25
2024-08-28
[ "Siyu Li", "Toan Tran", "Haowen Lin", "John Krumm", "Cyrus Shahabi", "Li Xiong" ]
[ "", "", "", "", "", "" ]
Simulating human mobility data is essential for various application domains, including transportation, urban planning, and epidemic control, since real data are often inaccessible to researchers due to expensive costs and privacy issues. Several existing deep generative solutions propose learning from real trajectories to generate synthetic ones. Despite the progress, most of them suffer from training stability issues and scale poorly with growing data size. More importantly, they generally lack control mechanisms to steer the generated trajectories based on spatiotemporal constraints such as fixing specific visits. To address such limitations, we formally define the controlled trajectory generation problem with spatiotemporal constraints and propose Geo-Llama. This novel LLM-inspired framework enforces explicit visit constraints in a contextually coherent way. It fine-tunes pre-trained LLMs on trajectories with a visit-wise permutation strategy where each visit corresponds to a time and location. This enables the model to capture the spatiotemporal patterns regardless of visit orders and allows flexible and in-context constraint integration through prompts during generation. Extensive experiments on real-world and synthetic datasets validate the effectiveness of Geo-Llama, demonstrating its versatility and robustness in handling a broad range of constraints to generate more realistic trajectories compared to existing methods.
cs.AI
[ "cs.AI" ]
LLMs are Superior Feedback Providers: Bootstrapping Reasoning for Lie Detection with Self-Generated Feedback
http://arxiv.org/abs/2408.13915v1
http://arxiv.org/abs/2408.13915v1
http://arxiv.org/pdf/2408.13915v1
2024-08-25
2024-08-25
[ "Tanushree Banerjee", "Richard Zhu", "Runzhe Yang", "Karthik Narasimhan" ]
[ "", "", "", "" ]
Large Language Models (LLMs) excel at generating human-like dialogues and comprehending text. However, understanding the subtleties of complex exchanges in language remains a challenge. We propose a bootstrapping framework that leverages self-generated feedback to enhance LLM reasoning capabilities for lie detection. The framework consists of three stages: suggestion, feedback collection, and modification. In the suggestion stage, a cost-effective language model generates initial predictions based on game state and dialogue. The feedback-collection stage involves a language model providing feedback on these predictions. In the modification stage, a more advanced language model refines the initial predictions using the auto-generated feedback. We investigate the application of the proposed framework for detecting betrayal and deception in Diplomacy games, and compare it with feedback from professional human players. The LLM-generated feedback exhibits superior quality and significantly enhances the performance of the model. Our approach achieves a 39% improvement over the zero-shot baseline in lying-F1 without the need for any training data, rivaling state-of-the-art supervised learning results.
19 pages, 18 figures
cs.CL
[ "cs.CL", "cs.AI" ]
ConVis: Contrastive Decoding with Hallucination Visualization for Mitigating Hallucinations in Multimodal Large Language Models
http://arxiv.org/abs/2408.13906v1
http://arxiv.org/abs/2408.13906v1
http://arxiv.org/pdf/2408.13906v1
2024-08-25
2024-08-25
[ "Yeji Park", "Deokyeong Lee", "Junsuk Choe", "Buru Chang" ]
[ "", "", "", "" ]
Hallucinations in Multimodal Large Language Models (MLLMs) where generated responses fail to accurately reflect the given image pose a significant challenge to their reliability. To address this, we introduce ConVis, a novel training-free contrastive decoding method. ConVis leverages a text-to-image (T2I) generation model to semantically reconstruct the given image from hallucinated captions. By comparing the contrasting probability distributions produced by the original and reconstructed images, ConVis enables MLLMs to capture visual contrastive signals that penalize hallucination generation. Notably, this method operates purely within the decoding process, eliminating the need for additional data or model updates. Our extensive experiments on five popular benchmarks demonstrate that ConVis effectively reduces hallucinations across various MLLMs, highlighting its potential to enhance model reliability.
First two authors contributed equally. Source code is available at https://github.com/yejipark-m/ConVis
cs.CV
[ "cs.CV", "cs.AI", "cs.LG" ]
SPICED: Syntactical Bug and Trojan Pattern Identification in A/MS Circuits using LLM-Enhanced Detection
http://arxiv.org/abs/2408.16018v1
http://arxiv.org/abs/2408.16018v1
http://arxiv.org/pdf/2408.16018v1
2024-08-25
2024-08-25
[ "Jayeeta Chaudhuri", "Dhruv Thapar", "Arjun Chaudhuri", "Farshad Firouzi", "Krishnendu Chakrabarty" ]
[ "", "", "", "", "" ]
Analog and mixed-signal (A/MS) integrated circuits (ICs) are crucial in modern electronics, playing key roles in signal processing, amplification, sensing, and power management. Many IC companies outsource manufacturing to third-party foundries, creating security risks such as stealthy analog Trojans. Traditional detection methods, including embedding circuit watermarks or conducting hardware-based monitoring, often impose significant area and power overheads, and may not effectively identify all types of Trojans. To address these shortcomings, we propose SPICED, a Large Language Model (LLM)-based framework that operates within the software domain, eliminating the need for hardware modifications for Trojan detection and localization. This is the first work using LLM-aided techniques for detecting and localizing syntactical bugs and analog Trojans in circuit netlists, requiring no explicit training and incurring zero area overhead. Our framework employs chain-of-thought reasoning and few-shot examples to teach anomaly detection rules to LLMs. With the proposed method, we achieve an average Trojan coverage of 93.32% and an average true positive rate of 93.4% in identifying Trojan-impacted nodes for the evaluated analog benchmark circuits. These experimental results validate the effectiveness of LLMs in detecting and locating both syntactical bugs and Trojans within analog netlists.
Accepted at PAINE'24
cs.CR
[ "cs.CR", "cs.AI", "cs.LG" ]
Enhancing SQL Query Generation with Neurosymbolic Reasoning
http://arxiv.org/abs/2408.13888v1
http://arxiv.org/abs/2408.13888v1
http://arxiv.org/pdf/2408.13888v1
2024-08-25
2024-08-25
[ "Henrijs Princis", "Cristina David", "Alan Mycroft" ]
[ "", "", "" ]
Neurosymbolic approaches blend the effectiveness of symbolic reasoning with the flexibility of neural networks. In this work, we propose a neurosymbolic architecture for generating SQL queries that builds and explores a solution tree using Best-First Search, with the possibility of backtracking. For this purpose, it integrates a Language Model (LM) with symbolic modules that help catch and correct errors made by the LM on SQL queries, as well as guiding the exploration of the solution tree. We focus on improving the performance of smaller open-source LMs, and we find that our tool, Xander, increases accuracy by an average of 10.9% and reduces runtime by an average of 28% compared to the LM without Xander, enabling a smaller LM (with Xander) to outperform its four-times larger counterpart (without Xander).
11 pages, 8 figures
cs.DB
[ "cs.DB", "cs.AI", "cs.SE", "I.2" ]
Flexible game-playing AI with AlphaViT: adapting to multiple games and board sizes
http://arxiv.org/abs/2408.13871v1
http://arxiv.org/abs/2408.13871v1
http://arxiv.org/pdf/2408.13871v1
2024-08-25
2024-08-25
[ "Kazuhisa Fujita" ]
[ "" ]
This paper presents novel game AI agents based on the AlphaZero framework, enhanced with Vision Transformers (ViT): AlphaViT, AlphaViD, and AlphaVDA. These agents are designed to play various board games of different sizes using a single model, overcoming AlphaZero's limitation of being restricted to a fixed board size. AlphaViT uses only a transformer encoder, while AlphaViD and AlphaVDA contain both an encoder and a decoder. AlphaViD's decoder receives input from the encoder output, while AlphaVDA uses a learnable matrix as decoder input. Using the AlphaZero framework, the three proposed methods demonstrate their versatility in different game environments, including Connect4, Gomoku, and Othello. Experimental results show that these agents, whether trained on a single game or on multiple games simultaneously, consistently outperform traditional algorithms such as Minimax and Monte Carlo tree search using a single DNN with shared weights, while approaching the performance of AlphaZero. In particular, AlphaViT and AlphaViD show strong performance across games, with AlphaViD benefiting from an additional decoder layer that enhances its ability to adapt to different action spaces and board sizes. These results may suggest the potential of transformer-based architectures to develop more flexible and robust game AI agents capable of excelling in multiple games and dynamic environments.
cs.LG
[ "cs.LG", "cs.AI" ]
CodeGraph: Enhancing Graph Reasoning of LLMs with Code
http://arxiv.org/abs/2408.13863v1
http://arxiv.org/abs/2408.13863v1
http://arxiv.org/pdf/2408.13863v1
2024-08-25
2024-08-25
[ "Qiaolong Cai", "Zhaowei Wang", "Shizhe Diao", "James Kwok", "Yangqiu Song" ]
[ "", "", "", "", "" ]
With the increasing popularity of large language models (LLMs), reasoning on basic graph algorithm problems is an essential intermediate step in assessing their abilities to process and infer complex graph reasoning tasks. Existing methods usually convert graph-structured data to textual descriptions and then use LLMs for reasoning and computation. However, LLMs often produce computation errors on arithmetic parts in basic graph algorithm problems, such as counting the number of edges. In addition, they struggle to control or understand the output of the reasoning process, raising concerns about whether LLMs are simply guessing. In this paper, we introduce CodeGraph, a method that encodes graph problem solutions as code. The method solves new graph problems by learning from exemplars, generating programs, and executing them via a program interpreter. In the few-shot setting, we evaluate CodeGraph with the base LLM being GPT-3.5 Turbo, Llama3-70B Instruct, Mixtral-8x22B Instruct, and Mixtral-8x7B Instruct. Experimental results on six tasks with six graph encoding methods in the GraphQA dataset demonstrate that CodeGraph can boost performance on graph reasoning tasks inside LLMs by 1.3% to 58.6%, depending on the task. Compared to the existing methods, CodeGraph demonstrates strong performance on arithmetic problems in graph tasks and offers a more controllable and interpretable approach to the reasoning process.
In Progress
cs.CL
[ "cs.CL", "cs.AI" ]
Tangram: A Challenging Benchmark for Geometric Element Recognizing
http://arxiv.org/abs/2408.13854v1
http://arxiv.org/abs/2408.13854v1
http://arxiv.org/pdf/2408.13854v1
2024-08-25
2024-08-25
[ "Jiamin Tang", "Chao Zhang", "Xudong Zhu", "Mengchi Liu" ]
[ "", "", "", "" ]
Significant advancements in Large Multimodal Models (LMMs) have enabled them to tackle complex problems involving visual-mathematical reasoning. However, their ability to identify geometric elements remains understudied. To bridge this gap, we introduce Tangram, a novel benchmark designed to evaluate the performance of LMMs on geometric element recognition. Tangram includes 1,080 diverse geometric diagrams sourced from primary and secondary school exams, competitions, and textbooks, ranging from simple basic geometric shapes to complex combinations. Each diagram is associated with four questions, resulting in a total of 4,320 visual-question-answer pairs. Unlike existing benchmarks that seek higher-level cognition and reasoning, Tangram focuses on the understanding of geometric elements, requiring models to perform a "simple but interesting" counting task. Systematic evaluation of 10 prominent LMMs, such as GPT-4o and Claude 3.5 Sonnet, shows that even on this seemingly simple task, these models still face significant challenges. Notably, the overall accuracy of the top performer across all tested models is only 56.8%, marking a significant gap when compared to human performance. These findings highlight the limitations of current multimodal artificial intelligence systems in handling basic perception tasks, and will inspire the development of the next generation of expert-level multimodal foundational models. Tangram and the evaluation code will be available soon.
12 pages, 7 figures
cs.CV
[ "cs.CV", "cs.AI" ]
Condensed Sample-Guided Model Inversion for Knowledge Distillation
http://arxiv.org/abs/2408.13850v1
http://arxiv.org/abs/2408.13850v1
http://arxiv.org/pdf/2408.13850v1
2024-08-25
2024-08-25
[ "Kuluhan Binici", "Shivam Aggarwal", "Cihan Acar", "Nam Trung Pham", "Karianto Leman", "Gim Hee Lee", "Tulika Mitra" ]
[ "", "", "", "", "", "", "" ]
Knowledge distillation (KD) is a key element in neural network compression that allows knowledge transfer from a pre-trained teacher model to a more compact student model. KD relies on access to the training dataset, which may not always be fully available due to privacy concerns or logistical issues related to the size of the data. To address this, "data-free" KD methods use synthetic data, generated through model inversion, to mimic the target data distribution. However, conventional model inversion methods are not designed to utilize supplementary information from the target dataset, and thus, cannot leverage it to improve performance, even when it is available. In this paper, we consider condensed samples, as a form of supplementary information, and introduce a method for using them to better approximate the target data distribution, thereby enhancing the KD performance. Our approach is versatile, evidenced by improvements of up to 11.4% in KD accuracy across various datasets and model inversion-based methods. Importantly, it remains effective even when using as few as one condensed sample per class, and can also enhance performance in few-shot scenarios where only limited real data samples are available.
cs.LG
[ "cs.LG", "cs.AI" ]
PropSAM: A Propagation-Based Model for Segmenting Any 3D Objects in Multi-Modal Medical Images
http://arxiv.org/abs/2408.13836v1
http://arxiv.org/abs/2408.13836v1
http://arxiv.org/pdf/2408.13836v1
2024-08-25
2024-08-25
[ "Zifan Chen", "Xinyu Nan", "Jiazheng Li", "Jie Zhao", "Haifeng Li", "Zilin Lin", "Haoshen Li", "Heyun Chen", "Yiting Liu", "Bin Dong", "Li Zhang", "Lei Tang" ]
[ "", "", "", "", "", "", "", "", "", "", "", "" ]
Volumetric segmentation is crucial for medical imaging but is often constrained by labor-intensive manual annotations and the need for scenario-specific model training. Furthermore, existing general segmentation models are inefficient due to their design and inferential approaches. Addressing this clinical demand, we introduce PropSAM, a propagation-based segmentation model that optimizes the use of 3D medical structure information. PropSAM integrates a CNN-based UNet for intra-slice processing with a Transformer-based module for inter-slice propagation, focusing on structural and semantic continuities to enhance segmentation across various modalities. Distinctively, PropSAM operates on a one-view prompt, such as a 2D bounding box or sketch mask, unlike conventional models that require two-view prompts. It has demonstrated superior performance, significantly improving the Dice Similarity Coefficient (DSC) across 44 medical datasets and various imaging modalities, outperforming models like MedSAM and SegVol with an average DSC improvement of 18.1%. PropSAM also maintains stable predictions despite prompt deviations and varying propagation configurations, confirmed by one-way ANOVA tests with P>0.5985 and P>0.6131, respectively. Moreover, PropSAM's efficient architecture enables faster inference speeds (Wilcoxon rank-sum test, P<0.001) and reduces user interaction time by 37.8% compared to two-view prompt models. Its ability to handle irregular and complex objects with robust performance further demonstrates its potential in clinical settings, facilitating more automated and reliable medical imaging analyses with minimal retraining.
26 pages, 6 figures
cs.CV
[ "cs.CV", "cs.AI" ]
Guardians of the Machine Translation Meta-Evaluation: Sentinel Metrics Fall In!
http://arxiv.org/abs/2408.13831v1
http://arxiv.org/abs/2408.13831v1
http://arxiv.org/pdf/2408.13831v1
2024-08-25
2024-08-25
[ "Stefano Perrella", "Lorenzo Proietti", "Alessandro Scirè", "Edoardo Barba", "Roberto Navigli" ]
[ "", "", "", "", "" ]
Annually, at the Conference of Machine Translation (WMT), the Metrics Shared Task organizers conduct the meta-evaluation of Machine Translation (MT) metrics, ranking them according to their correlation with human judgments. Their results guide researchers toward enhancing the next generation of metrics and MT systems. With the recent introduction of neural metrics, the field has witnessed notable advancements. Nevertheless, the inherent opacity of these metrics has posed substantial challenges to the meta-evaluation process. This work highlights two issues with the meta-evaluation framework currently employed in WMT, and assesses their impact on the metrics rankings. To do this, we introduce the concept of sentinel metrics, which are designed explicitly to scrutinize the meta-evaluation process's accuracy, robustness, and fairness. By employing sentinel metrics, we aim to validate our findings, and shed light on and monitor the potential biases or inconsistencies in the rankings. We discover that the present meta-evaluation framework favors two categories of metrics: i) those explicitly trained to mimic human quality assessments, and ii) continuous metrics. Finally, we raise concerns regarding the evaluation capabilities of state-of-the-art metrics, emphasizing that they might be basing their assessments on spurious correlations found in their training data.
Presented at ACL 2024 Main Conference. 29 pages
cs.CL
[ "cs.CL", "cs.AI" ]
RoCP-GNN: Robust Conformal Prediction for Graph Neural Networks in Node-Classification
http://arxiv.org/abs/2408.13825v1
http://arxiv.org/abs/2408.13825v1
http://arxiv.org/pdf/2408.13825v1
2024-08-25
2024-08-25
[ "S. Akansha" ]
[ "" ]
Graph Neural Networks (GNNs) have emerged as powerful tools for predicting outcomes in graph-structured data. However, a notable limitation of GNNs is their inability to provide robust uncertainty estimates, which undermines their reliability in contexts where errors are costly. One way to address this issue is by providing prediction sets that contain the true label with a predefined probability margin. Our approach builds upon conformal prediction (CP), a framework that promises to construct statistically robust prediction sets or intervals. There are two primary challenges: first, given dependent data like graphs, it is unclear whether the critical assumption in CP - exchangeability - still holds when applied to node classification. Second, even if the exchangeability assumption is valid for conformalized link prediction, we need to ensure high efficiency, i.e., the resulting prediction set or the interval length is small enough to provide useful information. In this article, we propose a novel approach termed Robust Conformal Prediction for GNNs (RoCP-GNN), which integrates conformal prediction (CP) directly into the GNN training process. This method generates prediction sets, instead of just point predictions, that are valid at a user-defined confidence level, assuming only exchangeability. Our approach robustly predicts outcomes with any predictive GNN model while quantifying the uncertainty in predictions within the realm of graph-based semi-supervised learning (SSL). Experimental results demonstrate that GNN models with size loss provide a statistically significant increase in performance. We validate our approach on standard graph benchmark datasets by coupling it with various state-of-the-art GNNs in node classification. The code will be made available after publication.
12 pages, 5 figures
cs.LG
[ "cs.LG", "cs.AI", "stat.ML" ]
A Joint Learning Model with Variational Interaction for Multilingual Program Translation
http://arxiv.org/abs/2408.14515v1
http://arxiv.org/abs/2408.14515v1
http://arxiv.org/pdf/2408.14515v1
2024-08-25
2024-08-25
[ "Yali Du", "Hui Sun", "Ming Li" ]
[ "", "", "" ]
Programs implemented in various programming languages form the foundation of software applications. To alleviate the burden of program migration and facilitate the development of software systems, automated program translation across languages has garnered significant attention. Previous approaches primarily focus on pairwise translation paradigms, learning translation between pairs of languages using bilingual parallel data. However, parallel data is difficult to collect for some language pairs, and the distribution of program semantics across languages can shift, posing challenges for pairwise program translation. In this paper, we argue that jointly learning a unified model to translate code across multiple programming languages is superior to separately learning from bilingual parallel data. We propose Variational Interaction for Multilingual Program Translation (VIM-PT), a disentanglement-based generative approach that jointly trains a unified model for multilingual program translation across multiple languages. VIM-PT disentangles code into language-shared and language-specific features, using variational inference and interaction information with a novel lower bound, then achieves program translation through conditional generation. VIM-PT demonstrates four advantages: 1) it captures language-shared information more accurately from various implementations and improves the quality of multilingual program translation, 2) it mines and leverages the capability of non-parallel data, 3) it addresses the distribution shift of program semantics across languages, and 4) it serves as a unified model, reducing deployment complexity.
Accepted by the 39th IEEE/ACM International Conference on Automated Software Engineering (ASE 2024)
cs.SE
[ "cs.SE", "cs.AI", "cs.LG", "cs.PL" ]
Localization of Synthetic Manipulations in Western Blot Images
http://arxiv.org/abs/2408.13786v1
http://arxiv.org/abs/2408.13786v1
http://arxiv.org/pdf/2408.13786v1
2024-08-25
2024-08-25
[ "Anmol Manjunath", "Viola Negroni", "Sara Mandelli", "Daniel Moreira", "Paolo Bestagini" ]
[ "", "", "", "", "" ]
Recent breakthroughs in deep learning and generative systems have significantly fostered the creation of synthetic media, as well as the local alteration of real content via the insertion of highly realistic synthetic manipulations. Local image manipulation, in particular, poses serious challenges to the integrity of digital content and societal trust. This problem is not only confined to multimedia data, but also extends to biological images included in scientific publications, like images depicting Western blots. In this work, we address the task of localizing synthetic manipulations in Western blot images. To discriminate between pristine and synthetic pixels of an analyzed image, we propose a synthetic detector that operates on small patches extracted from the image. We aggregate patch contributions to estimate a tampering heatmap, highlighting synthetic pixels out of pristine ones. Our methodology proves effective when tested over two manipulated Western blot image datasets, one altered automatically and the other manually by exploiting advanced AI-based image manipulation tools that are unknown at our training stage. We also explore the robustness of our method over an external dataset of other scientific images depicting different semantics, manipulated through unseen generation techniques.
cs.CV
[ "cs.CV", "cs.AI", "cs.MM" ]
Analyzing the Impact of Splicing Artifacts in Partially Fake Speech Signals
http://arxiv.org/abs/2408.13784v1
http://arxiv.org/abs/2408.13784v1
http://arxiv.org/pdf/2408.13784v1
2024-08-25
2024-08-25
[ "Viola Negroni", "Davide Salvi", "Paolo Bestagini", "Stefano Tubaro" ]
[ "", "", "", "" ]
Speech deepfake detection has recently gained significant attention within the multimedia forensics community. Related issues have also been explored, such as the identification of partially fake signals, i.e., tracks that include both real and fake speech segments. However, generating high-quality spliced audio is not as straightforward as it may appear. Spliced signals are typically created through basic signal concatenation. This process could introduce noticeable artifacts that can make the generated data easier to detect. We analyze spliced audio tracks resulting from signal concatenation, investigate their artifacts and assess whether such artifacts introduce any bias in existing datasets. Our findings reveal that by analyzing splicing artifacts, we can achieve a detection EER of 6.16% and 7.36% on PartialSpoof and HAD datasets, respectively, without needing to train any detector. These results underscore the complexities of generating reliable spliced audio data and lead to discussions that can help improve future research in this area.
Accepted at ASVspoof 5 Workshop (Interspeech2024 Satellite)
cs.SD
[ "cs.SD", "cs.AI", "cs.MM", "eess.AS" ]
Variational autoencoder-based neural network model compression
http://arxiv.org/abs/2408.14513v1
http://arxiv.org/abs/2408.14513v1
http://arxiv.org/pdf/2408.14513v1
2024-08-25
2024-08-25
[ "Liang Cheng", "Peiyuan Guan", "Amir Taherkordi", "Lei Liu", "Dapeng Lan" ]
[ "", "", "", "", "" ]
Variational Autoencoders (VAEs), a form of deep generative model, have been widely used in recent years and have shown great performance in a number of domains, including image generation and anomaly detection. This paper explores a neural network model compression method based on VAEs. The experiments use different neural network models for MNIST recognition as compression targets, including the Feedforward Neural Network (FNN), Convolutional Neural Network (CNN), Recurrent Neural Network (RNN) and Long Short-Term Memory (LSTM). These are the most basic models in deep learning; more complex and advanced models build on them or inherit and extend their features. In the experiments, the first step is to train the models mentioned above; each trained model has a different accuracy and number of parameters. Variants of each model's parameters are then used separately as training data for the VAEs, and the trained VAEs are tested on the true model parameters. The experimental results show that using the latent space as a representation for model compression can improve the compression rate compared to traditional methods such as pruning and quantization, while accuracy is not greatly affected when the model parameters are reconstructed from the latent space. As a variety of large-scale deep learning models come into wider use, exploring ways to save time and space when storing or transferring models will become necessary, and the use of VAEs in this paper can provide a basis for these further explorations.
cs.LG
[ "cs.LG", "cs.AI" ]
SAB: A Stealing and Robust Backdoor Attack based on Steganographic Algorithm against Federated Learning
http://arxiv.org/abs/2408.13773v1
http://arxiv.org/abs/2408.13773v1
http://arxiv.org/pdf/2408.13773v1
2024-08-25
2024-08-25
[ "Weida Xu", "Yang Xu", "Sicong Zhang" ]
[ "", "", "" ]
Federated learning, an innovative network architecture designed to safeguard user privacy, is gaining widespread adoption. However, given the existence of backdoor attacks in federated learning, exploring its security is of significance. The backdoors investigated in current federated learning research can be readily detected by human inspection or resisted by detection algorithms. Accordingly, a new goal is to develop stealthy and robust federated learning backdoor attacks. In this paper, we introduce SAB, a novel approach tailored specifically for backdoor attacks in federated learning that presents an alternative gradient updating mechanism. SAB is based on a steganographic algorithm: it uses image steganography to build a full-size trigger that improves backdoor accuracy, and jointly computes multiple losses to produce triggers. SAB exhibits smaller distances to benign samples and greater imperceptibility to the human eye. As such, our triggers are capable of mitigating or evading specific backdoor defense methods. In SAB, the bottom-95% method is applied to extend the lifespan of backdoor attacks: it updates gradients at minor value points to reduce the probability of being cleaned. Finally, the generalization of backdoors is enhanced with Sparse-update to improve backdoor accuracy.
cs.CR
[ "cs.CR", "cs.AI" ]
Lecture Notes on Linear Neural Networks: A Tale of Optimization and Generalization in Deep Learning
http://arxiv.org/abs/2408.13767v1
http://arxiv.org/abs/2408.13767v1
http://arxiv.org/pdf/2408.13767v1
2024-08-25
2024-08-25
[ "Nadav Cohen", "Noam Razin" ]
[ "", "" ]
These notes are based on a lecture delivered by NC in March 2021, as part of an advanced course at Princeton University on the mathematical understanding of deep learning. They present a theory (developed by NC, NR and collaborators) of linear neural networks -- a fundamental model in the study of optimization and generalization in deep learning. Practical applications born from the presented theory are also discussed. The theory is based on mathematical tools that are dynamical in nature. It showcases the potential of such tools to push the envelope of our understanding of optimization and generalization in deep learning. The text assumes familiarity with the basics of statistical learning theory. Exercises (without solutions) are included.
Lecture notes
cs.LG
[ "cs.LG", "cs.AI", "stat.ML" ]
Multimodal Ensemble with Conditional Feature Fusion for Dysgraphia Diagnosis in Children from Handwriting Samples
http://arxiv.org/abs/2408.13754v1
http://arxiv.org/abs/2408.13754v1
http://arxiv.org/pdf/2408.13754v1
2024-08-25
2024-08-25
[ "Jayakanth Kunhoth", "Somaya Al-Maadeed", "Moutaz Saleh", "Younes Akbari" ]
[ "", "", "", "" ]
Developmental dysgraphia is a neurological disorder that hinders children's writing skills. In recent years, researchers have increasingly explored machine learning methods to support the diagnosis of dysgraphia based on offline and online handwriting. In most previous studies, the two types of handwriting have been analysed separately, which does not necessarily lead to promising results and leaves the relationship between online and offline data unexplored. To address this limitation, we propose a novel multimodal machine learning approach utilizing both online and offline handwriting data. We created a new dataset by transforming an existing online handwriting dataset, generating corresponding offline handwriting images. We considered only different types of word data (simple word, pseudoword & difficult word) in our multimodal analysis. We trained SVM and XGBoost classifiers separately on online and offline features, and also implemented multimodal feature fusion and a soft-voted ensemble. Furthermore, we proposed a novel ensemble with a conditional feature fusion method that intelligently combines predictions from online and offline classifiers, selectively incorporating feature fusion when confidence scores fall below a threshold. Our novel approach achieves an accuracy of 88.8%, outperforming SVMs for single modalities by 12-14%, existing methods by 8-9%, and traditional multimodal approaches (soft-vote ensemble and feature fusion) by 3% and 5%, respectively. Our methodology contributes to the development of accurate and efficient dysgraphia diagnosis tools, requiring only a single instance of multimodal word/pseudoword data to determine the handwriting impairment. This work highlights the potential of multimodal learning in enhancing dysgraphia diagnosis, paving the way for accessible and practical diagnostic tools.
cs.CV
[ "cs.CV", "cs.AI", "I.2.6; I.2.10; I.4.9; I.5.1; I.5.4" ]
Multi-Agent Target Assignment and Path Finding for Intelligent Warehouse: A Cooperative Multi-Agent Deep Reinforcement Learning Perspective
http://arxiv.org/abs/2408.13750v1
http://arxiv.org/abs/2408.13750v1
http://arxiv.org/pdf/2408.13750v1
2024-08-25
2024-08-25
[ "Qi Liu", "Jianqi Gao", "Dongjie Zhu", "Xizheng Pang", "Pengbin Chen", "Jingxiang Guo", "Yanjie Li" ]
[ "", "", "", "", "", "", "" ]
Multi-agent target assignment and path planning (TAPF) are two key problems in intelligent warehouses. However, most of the literature addresses only one of these two problems. In this study, we propose a method to simultaneously solve target assignment and path planning from the perspective of cooperative multi-agent deep reinforcement learning (RL). To the best of our knowledge, this is the first work to model the TAPF problem for intelligent warehouses as cooperative multi-agent deep RL, and the first to simultaneously address TAPF based on multi-agent deep RL. Furthermore, previous literature rarely considers the physical dynamics of agents; in this study, the physical dynamics of the agents are considered. Experimental results show that our method performs well in various task settings, meaning that target assignment is solved reasonably well and the planned paths are nearly the shortest. Moreover, our method is more time-efficient than the baselines.
cs.AI
[ "cs.AI", "cs.MA" ]
DOCE: Finding the Sweet Spot for Execution-Based Code Generation
http://arxiv.org/abs/2408.13745v1
http://arxiv.org/abs/2408.13745v1
http://arxiv.org/pdf/2408.13745v1
2024-08-25
2024-08-25
[ "Haau-Sing Li", "Patrick Fernandes", "Iryna Gurevych", "André F. T. Martins" ]
[ "", "", "", "" ]
Recently, a diverse set of decoding and reranking procedures have been shown to be effective for LLM-based code generation. However, a comprehensive framework that links and experimentally compares these methods is missing. We address this by proposing Decoding Objectives for Code Execution, a comprehensive framework that includes candidate generation, $n$-best reranking, minimum Bayes risk (MBR) decoding, and self-debugging as the core components. We then study the contributions of these components through execution-based evaluation metrics. Our findings highlight the importance of execution-based methods and the gap between execution-based and execution-free methods. Furthermore, we assess the impact of filtering based on trial unit tests, a simple and effective strategy that has often been overlooked in prior work. We also propose self-debugging on multiple candidates, obtaining state-of-the-art performance on reranking for code generation. We expect our framework to provide a solid guideline for future research on code generation.
10 pages (32 including appendix), 5 figures, 25 tables. arXiv admin note: text overlap with arXiv:2304.05128 by other authors
cs.CL
[ "cs.CL", "cs.AI", "cs.PL" ]
LogParser-LLM: Advancing Efficient Log Parsing with Large Language Models
http://arxiv.org/abs/2408.13727v1
http://arxiv.org/abs/2408.13727v1
http://arxiv.org/pdf/2408.13727v1
2024-08-25
2024-08-25
[ "Aoxiao Zhong", "Dengyao Mo", "Guiyang Liu", "Jinbu Liu", "Qingda Lu", "Qi Zhou", "Jiesheng Wu", "Quanzheng Li", "Qingsong Wen" ]
[ "", "", "", "", "", "", "", "", "" ]
Logs are ubiquitous digital footprints, playing an indispensable role in system diagnostics, security analysis, and performance optimization. The extraction of actionable insights from logs is critically dependent on the log parsing process, which converts raw logs into structured formats for downstream analysis. Yet, the complexities of contemporary systems and the dynamic nature of logs pose significant challenges to existing automatic parsing techniques. The emergence of Large Language Models (LLM) offers new horizons. With their expansive knowledge and contextual prowess, LLMs have been transformative across diverse applications. Building on this, we introduce LogParser-LLM, a novel log parser integrated with LLM capabilities. This union seamlessly blends semantic insights with statistical nuances, obviating the need for hyper-parameter tuning and labeled training data, while ensuring rapid adaptability through online parsing. Further deepening our exploration, we address the intricate challenge of parsing granularity, proposing a new metric and integrating human interactions to allow users to calibrate granularity to their specific needs. Our method's efficacy is empirically demonstrated through evaluations on the Loghub-2k and the large-scale LogPub benchmark. In evaluations on the LogPub benchmark, involving an average of 3.6 million logs per dataset across 14 datasets, our LogParser-LLM requires only 272.5 LLM invocations on average, achieving a 90.6% F1 score for grouping accuracy and an 81.1% for parsing accuracy. These results demonstrate the method's high efficiency and accuracy, outperforming current state-of-the-art log parsers, including pattern-based, neural network-based, and existing LLM-enhanced approaches.
Accepted by ACM KDD 2024
cs.SE
[ "cs.SE", "cs.AI" ]
LLMs as Zero-shot Graph Learners: Alignment of GNN Representations with LLM Token Embeddings
http://arxiv.org/abs/2408.14512v1
http://arxiv.org/abs/2408.14512v1
http://arxiv.org/pdf/2408.14512v1
2024-08-25
2024-08-25
[ "Duo Wang", "Yuan Zuo", "Fengzhi Li", "Junjie Wu" ]
[ "", "", "", "" ]
Zero-shot graph machine learning, especially with graph neural networks (GNNs), has garnered significant interest due to the challenge of scarce labeled data. While methods like self-supervised learning and graph prompt learning have been extensively explored, they often rely on fine-tuning with task-specific labels, limiting their effectiveness in zero-shot scenarios. Inspired by the zero-shot capabilities of instruction-fine-tuned large language models (LLMs), we introduce a novel framework named Token Embedding-Aligned Graph Language Model (TEA-GLM) that leverages LLMs as cross-dataset and cross-task zero-shot learners for graph machine learning. Concretely, we pretrain a GNN, aligning its representations with token embeddings of an LLM. We then train a linear projector that transforms the GNN's representations into a fixed number of graph token embeddings without tuning the LLM. A unified instruction is designed for various graph tasks at different levels, such as node classification (node-level) and link prediction (edge-level). These design choices collectively enhance our method's effectiveness in zero-shot learning, setting it apart from existing methods. Experiments show that our graph token embeddings help the LLM predictor achieve state-of-the-art performance on unseen datasets and tasks compared to other methods using LLMs as predictors.
cs.LG
[ "cs.LG", "cs.AI", "cs.CL" ]
Count-based Novelty Exploration in Classical Planning
http://arxiv.org/abs/2408.13719v1
http://arxiv.org/abs/2408.13719v1
http://arxiv.org/pdf/2408.13719v1
2024-08-25
2024-08-25
[ "Giacomo Rosa", "Nir Lipovetzky" ]
[ "", "" ]
Count-based exploration methods are widely employed to improve the exploratory behavior of learning agents over sequential decision problems. Meanwhile, Novelty search has achieved success in Classical Planning through recording of the first, but not successive, occurrences of tuples. In order to structure the exploration, however, the number of tuples considered needs to grow exponentially as the search progresses. We propose a new novelty technique, classical count-based novelty, which aims to explore the state space with a constant number of tuples, by leveraging the frequency of each tuple's appearance in a search tree. We then justify the mechanisms through which lower tuple counts lead the search towards novel tuples. We also introduce algorithmic contributions in the form of a trimmed open list that maintains a constant size by pruning nodes with bad novelty values. These techniques are shown to complement existing novelty heuristics when integrated in a classical solver, achieving competitive results in challenging benchmarks from recent International Planning Competitions. Moreover, adapting our solver as the frontend planner in dual configurations that utilize both memory and time thresholds demonstrates a significant increase in instance coverage, surpassing current state-of-the-art solvers.
Extended version of paper accepted for publication at ECAI 2024
cs.AI
[ "cs.AI" ]
Unveiling the Statistical Foundations of Chain-of-Thought Prompting Methods
http://arxiv.org/abs/2408.14511v2
http://arxiv.org/abs/2408.14511v2
http://arxiv.org/pdf/2408.14511v2
2024-08-25
2024-08-28
[ "Xinyang Hu", "Fengzhuo Zhang", "Siyu Chen", "Zhuoran Yang" ]
[ "", "", "", "" ]
Chain-of-Thought (CoT) prompting and its variants have gained popularity as effective methods for solving multi-step reasoning problems using pretrained large language models (LLMs). In this work, we analyze CoT prompting from a statistical estimation perspective, providing a comprehensive characterization of its sample complexity. To this end, we introduce a multi-step latent variable model that encapsulates the reasoning process, where the latent variable encodes the task information. Under this framework, we demonstrate that when the pretraining dataset is sufficiently large, the estimator formed by CoT prompting is equivalent to a Bayesian estimator. This estimator effectively solves the multi-step reasoning problem by aggregating a posterior distribution inferred from the demonstration examples in the prompt. Moreover, we prove that the statistical error of the CoT estimator can be decomposed into two main components: (i) a prompting error, which arises from inferring the true task using CoT prompts, and (ii) the statistical error of the pretrained LLM. We establish that, under appropriate assumptions, the prompting error decays exponentially to zero as the number of demonstrations increases. Additionally, we explicitly characterize the approximation and generalization errors of the pretrained LLM. Notably, we construct a transformer model that approximates the target distribution of the multi-step reasoning problem with an error that decreases exponentially in the number of transformer blocks. Our analysis extends to other variants of CoT, including Self-Consistent CoT, Tree-of-Thought, and Selection-Inference, offering a broad perspective on the efficacy of these methods. We also provide numerical experiments to validate the theoretical findings.
150 pages, 18 figures, 3 tables
cs.AI
[ "cs.AI", "cs.CL", "cs.LG", "math.ST", "stat.ML", "stat.TH" ]
DHP Benchmark: Are LLMs Good NLG Evaluators?
http://arxiv.org/abs/2408.13704v1
http://arxiv.org/abs/2408.13704v1
http://arxiv.org/pdf/2408.13704v1
2024-08-25
2024-08-25
[ "Yicheng Wang", "Jiayi Yuan", "Yu-Neng Chuang", "Zhuoer Wang", "Yingchi Liu", "Mark Cusick", "Param Kulkarni", "Zhengping Ji", "Yasser Ibrahim", "Xia Hu" ]
[ "", "", "", "", "", "", "", "", "", "" ]
Large Language Models (LLMs) are increasingly serving as evaluators in Natural Language Generation (NLG) tasks. However, the capabilities of LLMs in scoring NLG quality remain inadequately explored. Current studies depend on human assessments and simple metrics that fail to capture the discernment of LLMs across diverse NLG tasks. To address this gap, we propose the Discernment of Hierarchical Perturbation (DHP) benchmarking framework, which provides quantitative discernment scores for LLMs utilizing hierarchically perturbed text data and statistical tests to measure the NLG evaluation capabilities of LLMs systematically. We have re-established six evaluation datasets for this benchmark, covering four NLG tasks: Summarization, Story Completion, Question Answering, and Translation. Our comprehensive benchmarking of five major LLM series provides critical insight into their strengths and limitations as NLG evaluators.
cs.CL
[ "cs.CL", "cs.AI" ]
Differentially Private Publication of Electricity Time Series Data in Smart Grids
http://arxiv.org/abs/2408.16017v1
http://arxiv.org/abs/2408.16017v1
http://arxiv.org/pdf/2408.16017v1
2024-08-24
2024-08-24
[ "Sina Shaham", "Gabriel Ghinita", "Bhaskar Krishnamachari", "Cyrus Shahabi" ]
[ "", "", "", "" ]
Smart grids are a valuable data source to study consumer behavior and guide energy policy decisions. In particular, time-series of power consumption over geographical areas are essential in deciding the optimal placement of expensive resources (e.g., transformers, storage elements) and their activation schedules. However, publication of such data raises significant privacy issues, as it may reveal sensitive details about personal habits and lifestyles. Differential privacy (DP) is well-suited for sanitization of individual data, but current DP techniques for time series lead to significant loss in utility, due to the existence of temporal correlation between data readings. We introduce {\em STPT (Spatio-Temporal Private Timeseries)}, a novel method for DP-compliant publication of electricity consumption data that analyzes spatio-temporal attributes and captures both micro and macro patterns by leveraging RNNs. Additionally, it employs a partitioning method for releasing electricity consumption time series based on identified patterns. We demonstrate through extensive experiments, on both real-world and synthetic datasets, that STPT significantly outperforms existing benchmarks, providing a well-balanced trade-off between data utility and user privacy.
cs.CR
[ "cs.CR", "cs.AI", "cs.LG" ]
Evaluating Alternative Training Interventions Using Personalized Computational Models of Learning
http://arxiv.org/abs/2408.13684v1
http://arxiv.org/abs/2408.13684v1
http://arxiv.org/pdf/2408.13684v1
2024-08-24
2024-08-24
[ "Christopher James MacLellan", "Kimberly Stowers", "Lisa Brady" ]
[ "", "", "" ]
Evaluating different training interventions to determine which produce the best learning outcomes is one of the main challenges faced by instructional designers. Typically, these designers use A/B experiments to evaluate each intervention; however, it is costly and time consuming to run such studies. To address this issue, we explore how computational models of learning might support designers in reasoning causally about alternative interventions within a fractions tutor. We present an approach for automatically tuning models to specific individuals and show that personalized models make better predictions of students' behavior than generic ones. Next, we conduct simulations to generate counterfactual predictions of performance and learning for two students (high and low performing) in different versions of the fractions tutor. Our approach makes predictions that align with previous human findings, as well as testable predictions that might be evaluated with future human experiments.
18 pages, 7 figures
Advances in Cognitive Systems, 10, 35-52 (2023)
cs.AI
[ "cs.AI", "cs.CY", "cs.HC" ]
Submodular Maximization Approaches for Equitable Client Selection in Federated Learning
http://arxiv.org/abs/2408.13683v2
http://arxiv.org/abs/2408.13683v2
http://arxiv.org/pdf/2408.13683v2
2024-08-24
2024-08-27
[ "Andrés Catalino Castillo Jiménez", "Ege C. Kaya", "Lintao Ye", "Abolfazl Hashemi" ]
[ "", "", "", "" ]
In a conventional Federated Learning framework, client selection for training typically involves the random sampling of a subset of clients in each iteration. However, this random selection often leads to disparate performance among clients, raising concerns regarding fairness, particularly in applications where equitable outcomes are crucial, such as in medical or financial machine learning tasks. This disparity typically becomes more pronounced with the advent of performance-centric client sampling techniques. This paper introduces two novel methods, namely SUBTRUNC and UNIONFL, designed to address the limitations of random client selection. Both approaches utilize submodular function maximization to achieve more balanced models. By modifying the facility location problem, they aim to mitigate the fairness concerns associated with random selection. SUBTRUNC leverages client loss information to diversify solutions, while UNIONFL relies on historical client selection data to ensure a more equitable performance of the final model. Moreover, these algorithms are accompanied by robust theoretical guarantees regarding convergence under reasonable assumptions. The efficacy of these methods is demonstrated through extensive evaluations across heterogeneous scenarios, revealing significant improvements in fairness as measured by a client dissimilarity metric.
13 pages
cs.LG
[ "cs.LG", "cs.AI", "cs.SY", "eess.SP", "eess.SY" ]
Hierarchical Network Fusion for Multi-Modal Electron Micrograph Representation Learning with Foundational Large Language Models
http://arxiv.org/abs/2408.13661v1
http://arxiv.org/abs/2408.13661v1
http://arxiv.org/pdf/2408.13661v1
2024-08-24
2024-08-24
[ "Sakhinana Sagar Srinivas", "Geethan Sannidhi", "Venkataramana Runkana" ]
[ "", "", "" ]
Characterizing materials with electron micrographs is a crucial task in fields such as semiconductors and quantum materials. The complex hierarchical structure of micrographs often poses challenges for traditional classification methods. In this study, we propose an innovative backbone architecture for analyzing electron micrographs. We create multi-modal representations of the micrographs by tokenizing them into patch sequences and, additionally, representing them as vision graphs, commonly referred to as patch attributed graphs. We introduce the Hierarchical Network Fusion (HNF), a multi-layered network architecture that facilitates information exchange between the multi-modal representations and knowledge integration across different patch resolutions. Furthermore, we leverage large language models (LLMs) to generate detailed technical descriptions of nanomaterials as auxiliary information to assist in the downstream task. We utilize a cross-modal attention mechanism for knowledge fusion across cross-domain representations (both image-based and linguistic insights) to predict the nanomaterial category. This multi-faceted approach promises a more comprehensive and accurate representation and classification of micrographs for nanomaterial identification. Our framework outperforms traditional methods, overcoming challenges posed by distributional shifts, and facilitating high-throughput screening.
Our paper is published at the workshop on Robustness of Few-shot and Zero-shot Learning in Foundation Models at NeurIPS 2023
cs.CV
[ "cs.CV", "cs.AI", "cs.LG" ]
Reactzyme: A Benchmark for Enzyme-Reaction Prediction
http://arxiv.org/abs/2408.13659v1
http://arxiv.org/abs/2408.13659v1
http://arxiv.org/pdf/2408.13659v1
2024-08-24
2024-08-24
[ "Chenqing Hua", "Bozitao Zhong", "Sitao Luan", "Liang Hong", "Guy Wolf", "Doina Precup", "Shuangjia Zheng" ]
[ "", "", "", "", "", "", "" ]
Enzymes, with their specific catalyzed reactions, are necessary for all aspects of life, enabling diverse biological processes and adaptations. Predicting enzyme functions is essential for understanding biological pathways, guiding drug development, enhancing bioproduct yields, and facilitating evolutionary studies. Addressing the inherent complexities, we introduce a new approach to annotating enzymes based on their catalyzed reactions. This method provides detailed insights into specific reactions and is adaptable to newly discovered reactions, diverging from traditional classifications by protein family or expert-derived reaction classes. We employ machine learning algorithms to analyze enzyme reaction datasets, delivering a much more refined view on the functionality of enzymes. Our evaluation leverages the largest enzyme-reaction dataset to date, derived from the SwissProt and Rhea databases with entries up to January 8, 2024. We frame the enzyme-reaction prediction as a retrieval problem, aiming to rank enzymes by their catalytic ability for specific reactions. With our model, we can recruit proteins for novel reactions and predict reactions in novel proteins, facilitating enzyme discovery and function annotation.
cs.LG
[ "cs.LG", "cs.AI", "cs.CE", "q-bio.QM" ]
Artificial intelligence for science: The easy and hard problems
http://arxiv.org/abs/2408.14508v1
http://arxiv.org/abs/2408.14508v1
http://arxiv.org/pdf/2408.14508v1
2024-08-24
2024-08-24
[ "Ruairidh M. Battleday", "Samuel J. Gershman" ]
[ "", "" ]
A suite of impressive scientific discoveries have been driven by recent advances in artificial intelligence. These almost all result from training flexible algorithms to solve difficult optimization problems specified in advance by teams of domain scientists and engineers with access to large amounts of data. Although extremely useful, this kind of problem solving only corresponds to one part of science - the "easy problem." The other part of scientific research is coming up with the problem itself - the "hard problem." Solving the hard problem is beyond the capacities of current algorithms for scientific discovery because it requires continual conceptual revision based on poorly defined constraints. We can make progress on understanding how humans solve the hard problem by studying the cognitive science of scientists, and then use the results to design new computational agents that automatically infer and update their scientific paradigms.
16 pages, 3 boxes, 4 figures
cs.AI
[ "cs.AI", "cs.LG", "q-bio.NC" ]
Studying the Effect of Audio Filters in Pre-Trained Models for Environmental Sound Classification
http://arxiv.org/abs/2408.13644v1
http://arxiv.org/abs/2408.13644v1
http://arxiv.org/pdf/2408.13644v1
2024-08-24
2024-08-24
[ "Aditya Dawn", "Wazib Ansar" ]
[ "", "" ]
Environmental Sound Classification is an important problem in sound recognition and is more complicated than speech recognition because environmental sounds are not well structured with respect to time and frequency. Over the past years, researchers have used various CNN models to learn audio representations from features such as log mel spectrograms, gammatone spectral coefficients, and mel-frequency spectral coefficients generated from the audio files. In this paper, we propose a new methodology: Two-Level Classification; the Level 1 Classifier is responsible for classifying the audio signal into a broader class, and the Level 2 Classifiers are responsible for finding the actual class to which the audio belongs, based on the output of the Level 1 Classifier. We also show the effects of different audio filters, among which a new method, Audio Crop, is introduced in this paper and gave the highest accuracies in most cases. We used the ESC-50 dataset for our experiments and obtained a maximum accuracy of 78.75% for Level 1 Classification and 98.04% for Level 2 Classifications.
19 pages, 16 figures
cs.SD
[ "cs.SD", "cs.AI", "eess.AS" ]
Temporal Elections: Welfare, Strategyproofness, and Proportionality
http://arxiv.org/abs/2408.13637v1
http://arxiv.org/abs/2408.13637v1
http://arxiv.org/pdf/2408.13637v1
2024-08-24
2024-08-24
[ "Edith Elkind", "Tzeh Yuan Neoh", "Nicholas Teh" ]
[ "", "", "" ]
We investigate a model of sequential decision-making where a single alternative is chosen at each round. We focus on two objectives, utilitarian welfare (Util) and egalitarian welfare (Egal), and consider the computational complexity of the associated maximization problems, as well as their compatibility with strategyproofness and proportionality. We observe that maximizing Util is easy, but the corresponding decision problem for Egal is NP-complete even in restricted cases. We complement this hardness result for Egal with parameterized complexity analysis and an approximation algorithm. Additionally, we show that, while a mechanism that outputs a Util outcome is strategyproof, all deterministic mechanisms for computing Egal outcomes fail a very weak variant of strategyproofness, called non-obvious manipulability (NOM). However, we show that when agents have non-empty approval sets at each timestep, choosing an Egal-maximizing outcome while breaking ties lexicographically satisfies NOM. Regarding proportionality, we prove that a proportional (PROP) outcome can be computed efficiently, but finding an outcome that maximizes Util while guaranteeing PROP is NP-hard. We also derive upper and lower bounds on the price of proportionality with respect to Util and Egal.
Appears in the 27th European Conference on Artificial Intelligence (ECAI), 2024
cs.GT
[ "cs.GT", "cs.AI" ]
DeepVoting: Learning Voting Rules with Tailored Embeddings
http://arxiv.org/abs/2408.13630v1
http://arxiv.org/abs/2408.13630v1
http://arxiv.org/pdf/2408.13630v1
2024-08-24
2024-08-24
[ "Leonardo Matone", "Ben Abramowitz", "Nicholas Mattei", "Avinash Balakrishnan" ]
[ "", "", "", "" ]
Aggregating the preferences of multiple agents into a collective decision is a common step in many important problems across areas of computer science including information retrieval, reinforcement learning, and recommender systems. As Social Choice Theory has shown, the problem of designing algorithms for aggregation rules with specific properties (axioms) can be difficult, or provably impossible in some cases. Instead of designing algorithms by hand, one can learn aggregation rules, particularly voting rules, from data. However, prior work in this area has required extremely large models or been limited by the choice of preference representation, i.e., embedding. We recast the problem of designing a good voting rule into one of learning probabilistic versions of voting rules that output distributions over a set of candidates. Specifically, we use neural networks to learn probabilistic social choice functions from the literature. We show that embeddings of preference profiles derived from the social choice literature allow us to learn existing voting rules more efficiently and scale to larger populations of voters more easily than other work, provided the embedding is tailored to the learning objective. Moreover, we show that rules learned using embeddings can be tweaked to create novel voting rules with improved axiomatic properties. Namely, we show that existing voting rules require only minor modification to combat a probabilistic version of the No Show Paradox.
cs.MA
[ "cs.MA", "cs.AI", "cs.GT", "cs.LG", "econ.GN", "q-fin.EC" ]
Enhancing Uplift Modeling in Multi-Treatment Marketing Campaigns: Leveraging Score Ranking and Calibration Techniques
http://arxiv.org/abs/2408.13628v2
http://arxiv.org/abs/2408.13628v2
http://arxiv.org/pdf/2408.13628v2
2024-08-24
2024-08-27
[ "Yoon Tae Park", "Ting Xu", "Mohamed Anany" ]
[ "", "", "" ]
Uplift modeling is essential for optimizing marketing strategies by selecting individuals likely to respond positively to specific marketing campaigns. This importance escalates in multi-treatment marketing campaigns, where diverse treatments are available and we may want to assign each customer the treatment that can make the most impact. While there are existing approaches with convenient frameworks like Causalml, there is room to enhance the effect of uplift modeling in multi-treatment cases. This paper introduces a novel approach to uplift modeling in multi-treatment campaigns, leveraging score ranking and calibration techniques to improve the overall performance of the marketing campaign. We review existing uplift models, including Meta Learner frameworks (S, T, X), and their application in real-world scenarios. Additionally, we delve into insights from multi-treatment studies to highlight the complexities and potential advancements in the field. Our methodology incorporates Meta-Learner calibration and a scoring rank-based offer selection strategy. Extensive experimental results with real-world datasets demonstrate the practical benefits and superior performance of our approach. The findings underscore the critical role of integrating score ranking and calibration techniques in refining the performance and reliability of uplift predictions, thereby advancing predictive modeling in marketing analytics and providing actionable insights for practitioners seeking to optimize their campaign strategies.
stat.ML
[ "stat.ML", "cs.AI", "cs.LG", "stat.AP" ]
Cost-Aware Uncertainty Reduction in Schema Matching with GPT-4: The Prompt-Matcher Framework
http://arxiv.org/abs/2408.14507v1
http://arxiv.org/abs/2408.14507v1
http://arxiv.org/pdf/2408.14507v1
2024-08-24
2024-08-24
[ "Longyu Feng", "Huahang Li", "Chen Jason Zhang" ]
[ "", "", "" ]
Schema matching is the process of identifying correspondences between the elements of two given schemata, essential for database management systems, data integration, and data warehousing. The inherent uncertainty of current schema matching algorithms leads to the generation of a set of candidate matches. Storing these results necessitates the use of databases and systems capable of handling probabilistic queries. This complicates the querying process and increases the associated storage costs. Motivated by GPT-4's outstanding performance, we explore its potential to reduce this uncertainty. Our proposal is to supplant the role of crowdworkers with GPT-4 for querying the set of candidate matches. To obtain more precise correspondence verification responses from GPT-4, we have crafted Semantic-match and Abbreviation-match prompts for GPT-4, achieving state-of-the-art recall rates on two benchmark datasets: DeepMDatasets 100% (+0.0) and Fabricated-Datasets 91.8% (+2.2). To optimise budget utilisation, we have devised a cost-aware solution. Within the constraints of the budget, our solution delivers favourable outcomes with minimal time expenditure. We introduce a novel framework, Prompt-Matcher, to reduce the uncertainty in the process of integrating multiple automatic schema matching algorithms and selecting complex parameterizations. It assists users in diminishing the uncertainty associated with candidate schema match results and in optimally ranking the most promising matches. We formally define the Correspondence Selection Problem (CSP), aiming to optimise the revenue within the confines of the GPT-4 budget. We demonstrate that CSP is NP-Hard and propose an approximation algorithm with minimal time expenditure. Ultimately, we demonstrate the efficacy of Prompt-Matcher through rigorous experiments.
cs.DB
[ "cs.DB", "cs.AI" ]
Towards Case-based Interpretability for Medical Federated Learning
http://arxiv.org/abs/2408.13626v1
http://arxiv.org/abs/2408.13626v1
http://arxiv.org/pdf/2408.13626v1
2024-08-24
2024-08-24
[ "Laura Latorre", "Liliana Petrychenko", "Regina Beets-Tan", "Taisiya Kopytova", "Wilson Silva" ]
[ "", "", "", "", "" ]
We explore deep generative models to generate case-based explanations in a medical federated learning setting. Explaining AI model decisions through case-based interpretability is paramount to increasing trust and allowing widespread adoption of AI in clinical practice. However, medical AI training paradigms are shifting towards federated learning settings in order to comply with data protection regulations. In a federated scenario, past data is inaccessible to the current user. Thus, we use a deep generative model to generate synthetic examples that protect privacy and explain decisions. Our proof-of-concept focuses on pleural effusion diagnosis and uses publicly available Chest X-ray data.
© 20XX IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works
cs.LG
[ "cs.LG", "cs.AI" ]
No Dataset Needed for Downstream Knowledge Benchmarking: Response Dispersion Inversely Correlates with Accuracy on Domain-specific QA
http://arxiv.org/abs/2408.13624v1
http://arxiv.org/abs/2408.13624v1
http://arxiv.org/pdf/2408.13624v1
2024-08-24
2024-08-24
[ "Robert L Simione II" ]
[ "" ]
This research seeks to obviate the need for creating QA datasets and grading (chatbot) LLM responses when comparing LLMs' knowledge in specific topic domains. This is done in an entirely end-user-centric way, without the need for access to any inner workings of the LLM, so long as it can be prompted and given a random seed to create different generations for the same prompt. The paper does this by, for a given topic domain, defining the "response dispersion" of an LLM by repeatedly asking an LLM the same opinion question about that topic domain. Namely, the response dispersion is the count of singular values needed to explain 95% of the variance in the embedding matrix of the LLM's responses. It is found that the response dispersion is inversely correlated with accuracy on relevant QA evaluations (average Spearman rank correlation stronger than -.59). A use-case analysis shows that when comparing two different LLMs on the same topic domain, comparing their response dispersion is a suitable replacement for comparing their QA accuracy between 74% and 89% of the time, with the range depending on the accuracy-difference tolerance an end-user finds acceptable in exchange for the labor saved by using response dispersion instead of QA accuracy for comparison. Two response embeddings are studied for creating the embedding matrix in this study: one is from OpenAI's APIs and one is a novel embedding, here named reference sentence similarity embeddings, that can be computed locally and performs very nearly as well in calculating response dispersion. Also in this research, a pre-existing dataset called the IRC-Wiki Trivia dataset, originally developed for trivia games, has been re-purposed and curated, and the curation, called IRC-WikiTriviaQA, is made available for the purposes of this research.
16 pages, 3 tables, 1 figure
cs.CL
[ "cs.CL", "cs.AI", "I.2.7" ]
Advancing Enterprise Spatio-Temporal Forecasting Applications: Data Mining Meets Instruction Tuning of Language Models For Multi-modal Time Series Analysis in Low-Resource Settings
http://arxiv.org/abs/2408.13622v1
http://arxiv.org/abs/2408.13622v1
http://arxiv.org/pdf/2408.13622v1
2024-08-24
2024-08-24
[ "Sagar Srinivas Sakhinana", "Geethan Sannidhi", "Chidaksh Ravuru", "Venkataramana Runkana" ]
[ "", "", "", "" ]
Spatio-temporal forecasting is crucial in transportation, logistics, and supply chain management. However, current methods struggle with large, complex datasets. We propose a dynamic, multi-modal approach that integrates the strengths of traditional forecasting methods and instruction tuning of small language models for time series trend analysis. This approach utilizes a mixture of experts (MoE) architecture with parameter-efficient fine-tuning (PEFT) methods, tailored for consumer hardware to scale up AI solutions in low-resource settings while balancing performance and latency tradeoffs. Additionally, our approach leverages related past experiences for similar input time series to efficiently handle both intra-series and inter-series dependencies of non-stationary data with a time-then-space modeling approach, using grouped-query attention, while mitigating the limitations of traditional forecasting techniques in handling distributional shifts. Our approach models predictive uncertainty to improve decision-making. Our framework enables on-premises customization with reduced computational and memory demands, while maintaining inference speed and data privacy/security. Extensive experiments on various real-world datasets demonstrate that our framework provides robust and accurate forecasts, significantly outperforming existing methods.
Published at the ICLR 2024 Workshop on Practical ML for Low Resource Settings(PML4LRS)
cs.LG
[ "cs.LG", "cs.AI" ]
Preliminary Investigations of a Multi-Faceted Robust and Synergistic Approach in Semiconductor Electron Micrograph Analysis: Integrating Vision Transformers with Large Language and Multimodal Models
http://arxiv.org/abs/2408.13621v1
http://arxiv.org/abs/2408.13621v1
http://arxiv.org/pdf/2408.13621v1
2024-08-24
2024-08-24
[ "Sakhinana Sagar Srinivas", "Geethan Sannidhi", "Sreeja Gangasani", "Chidaksh Ravuru", "Venkataramana Runkana" ]
[ "", "", "", "", "" ]
Characterizing materials using electron micrographs is crucial in areas such as semiconductors and quantum materials. Traditional classification methods falter due to the intricate structures of these micrographs. This study introduces an innovative architecture that leverages the generative capabilities of zero-shot prompting in Large Language Models (LLMs) such as GPT-4 (language only), the predictive ability of few-shot (in-context) learning in Large Multimodal Models (LMMs) such as GPT-4V(ision), and fuses knowledge across image-based and linguistic insights for accurate nanomaterial category prediction. This comprehensive approach aims to provide a robust solution for the automated nanomaterial identification task in semiconductor manufacturing, blending performance, efficiency, and interpretability. Our method surpasses conventional approaches, offering precise nanomaterial identification and facilitating high-throughput screening.
Published at Deployable AI (DAI) Workshop at AAAI-2024
cs.CV
[ "cs.CV", "cs.AI", "cs.CL", "cs.LG" ]
Balancing Diversity and Risk in LLM Sampling: How to Select Your Method and Parameter for Open-Ended Text Generation
http://arxiv.org/abs/2408.13586v1
http://arxiv.org/abs/2408.13586v1
http://arxiv.org/pdf/2408.13586v1
2024-08-24
2024-08-24
[ "Yuxuan Zhou", "Margret Keuper", "Mario Fritz" ]
[ "", "", "" ]
Sampling-based decoding strategies have been widely adopted for Large Language Models (LLMs) in numerous applications, which target a balance between diversity and quality via temperature tuning and tail truncation (e.g., top-k and top-p sampling). Considering the high dynamic range of the candidate next-token distribution given different prefixes, recent studies propose to adaptively truncate the tail of the LLM's predicted distribution. Although improved results have been reported with these methods on open-ended text generation tasks, the results are highly dependent on the curated truncation parameters and exemplar text. In this paper, we propose a systematic way to estimate the intrinsic capacity of a truncation sampling method by considering the trade-off between diversity and risk at each decoding step, based on our collected prefix tree which preserves the context of a full sentence. Our work provides a comprehensive comparison between existing truncation sampling methods, as well as their recommended parameters as a guideline for users.
cs.CL
[ "cs.CL", "cs.AI" ]
Synesthesia of Machines (SoM)-Enhanced ISAC Precoding for Vehicular Networks with Double Dynamics
http://arxiv.org/abs/2408.13546v1
http://arxiv.org/abs/2408.13546v1
http://arxiv.org/pdf/2408.13546v1
2024-08-24
2024-08-24
[ "Zonghui Yang", "Shijian Gao", "Xiang Cheng", "Liuqing Yang" ]
[ "", "", "", "" ]
Integrated sensing and communication (ISAC) technology plays a crucial role in vehicular networks. However, the communication channel within this context exhibits time-varying characteristics, and potential targets may move rapidly, resulting in double dynamics. These present significant challenges for real-time ISAC precoding design that have not been thoroughly explored. While optimization-based precoding methods have been extensively studied, they are computationally complex and heavily rely on perfect prior information that is rarely available in situations with double dynamics. In this paper, we propose a synesthesia of machines (SoM)-enhanced precoding paradigm, where the base station leverages various modalities such as positioning and channel information to adapt to double dynamics, and effectively utilizes environmental information to stretch ISAC performance boundaries through a deep reinforcement learning framework. Additionally, a parameter-shared actor-critic architecture is tailored to expedite training in complex state and action spaces. Extensive experimental validation has demonstrated the multifaceted superiority of our method over existing approaches.
13 pages, 17 figures, 4 tables
eess.SP
[ "eess.SP", "cs.AI" ]
Selective Preference Optimization via Token-Level Reward Function Estimation
http://arxiv.org/abs/2408.13518v1
http://arxiv.org/abs/2408.13518v1
http://arxiv.org/pdf/2408.13518v1
2024-08-24
2024-08-24
[ "Kailai Yang", "Zhiwei Liu", "Qianqian Xie", "Jimin Huang", "Erxue Min", "Sophia Ananiadou" ]
[ "", "", "", "", "", "" ]
Recent advancements in large language model alignment leverage token-level supervisions to perform fine-grained preference optimization. However, existing token-level alignment methods either optimize on all available tokens, which can be noisy and inefficient, or perform selective training with complex and expensive key token selection strategies. In this work, we propose Selective Preference Optimization (SePO), a novel selective alignment strategy that centers on efficient key token selection. SePO proposes the first token selection method based on Direct Preference Optimization (DPO), which trains an oracle model to estimate a token-level reward function on the target data. This method applies to any existing alignment datasets with response-level annotations and enables cost-efficient token selection with small-scale oracle models and training data. The estimated reward function is then utilized to score all tokens within the target dataset, where only the key tokens are selected to supervise the target policy model with a reference model-free contrastive objective function. Extensive experiments on three public evaluation benchmarks show that SePO significantly outperforms competitive baseline methods by only optimizing 30% key tokens on the target dataset. SePO applications on weak-to-strong generalization show that weak oracle models effectively supervise strong policy models with up to 16.8x more parameters. SePO also effectively selects key tokens from out-of-distribution data to enhance strong policy models and alleviate the over-optimization problem.
Work in progress
cs.CL
[ "cs.CL", "cs.AI", "cs.LG" ]
AnoPLe: Few-Shot Anomaly Detection via Bi-directional Prompt Learning with Only Normal Samples
http://arxiv.org/abs/2408.13516v1
http://arxiv.org/abs/2408.13516v1
http://arxiv.org/pdf/2408.13516v1
2024-08-24
2024-08-24
[ "Yujin Lee", "Seoyoon Jang", "Hyunsoo Yoon" ]
[ "", "", "" ]
Few-shot Anomaly Detection (FAD) poses significant challenges due to the limited availability of training samples and the frequent absence of abnormal samples. Previous approaches often rely on annotations or true abnormal samples to improve detection, but such textual or visual cues are not always accessible. To address this, we introduce AnoPLe, a multi-modal prompt learning method designed for anomaly detection without prior knowledge of anomalies. AnoPLe simulates anomalies and employs bidirectional coupling of textual and visual prompts to facilitate deep interaction between the two modalities. Additionally, we integrate a lightweight decoder with a learnable multi-view signal, trained on multi-scale images to enhance local semantic comprehension. To further improve performance, we align global and local semantics, enriching the image-level understanding of anomalies. The experimental results demonstrate that AnoPLe achieves strong FAD performance, recording 94.1% and 86.2% Image AUROC on MVTec-AD and VisA respectively, with only around a 1% gap compared to the SoTA, despite not being exposed to true anomalies. Code is available at https://github.com/YoojLee/AnoPLe.
Code is available at https://github.com/YoojLee/AnoPLe
cs.CV
[ "cs.CV", "cs.AI" ]
Empowering Pre-Trained Language Models for Spatio-Temporal Forecasting via Decoupling Enhanced Discrete Reprogramming
http://arxiv.org/abs/2408.14505v1
http://arxiv.org/abs/2408.14505v1
http://arxiv.org/pdf/2408.14505v1
2024-08-24
2024-08-24
[ "Hao Wang", "Jindong Han", "Wei Fan", "Hao Liu" ]
[ "", "", "", "" ]
Spatio-temporal time series forecasting plays a critical role in various real-world applications, such as transportation optimization, energy management, and climate analysis. The recent advancements in Pre-trained Language Models (PLMs) have inspired efforts to reprogram these models for time series forecasting tasks, by leveraging their superior reasoning and generalization capabilities. However, existing approaches fall short in handling complex spatial inter-series dependencies and intrinsic intra-series frequency components, limiting their spatio-temporal forecasting performance. Moreover, the linear mapping of continuous time series to a compressed subset vocabulary in reprogramming constrains the spatio-temporal semantic expressivity of PLMs and may lead to a potential information bottleneck. To overcome the above limitations, we propose \textsc{RePST}, a tailored PLM reprogramming framework for spatio-temporal forecasting. The key insight of \textsc{RePST} is to decouple the spatio-temporal dynamics in the frequency domain, allowing better alignment with the PLM text space. Specifically, we first decouple spatio-temporal data in Fourier space and devise a structural diffusion operator to obtain temporal intrinsic and spatial diffusion signals, making the dynamics more comprehensible and predictable for PLMs. To avoid the information bottleneck caused by a limited vocabulary, we further propose a discrete reprogramming strategy that selects relevant discrete textual information from an expanded vocabulary space in a differentiable manner. Extensive experiments on four real-world datasets show that our proposed approach significantly outperforms state-of-the-art spatio-temporal forecasting models, particularly in data-scarce scenarios.
cs.LG
[ "cs.LG", "cs.AI", "cs.CL" ]
Is Functional Correctness Enough to Evaluate Code Language Models? Exploring Diversity of Generated Codes
http://arxiv.org/abs/2408.14504v1
http://arxiv.org/abs/2408.14504v1
http://arxiv.org/pdf/2408.14504v1
2024-08-24
2024-08-24
[ "Heejae Chon", "Seonghyeon Lee", "Jinyoung Yeo", "Dongha Lee" ]
[ "", "", "", "" ]
Language models (LMs) have exhibited impressive abilities in generating codes from natural language requirements. In this work, we highlight the diversity of code generated by LMs as a critical criterion for evaluating their code generation capabilities, in addition to functional correctness. Despite its practical implications, there is a lack of studies focused on assessing the diversity of generated code, which overlooks its importance in the development of code LMs. We propose a systematic approach to evaluate the diversity of generated code, utilizing various metrics for inter-code similarity as well as functional correctness. Specifically, we introduce a pairwise code similarity measure that leverages large LMs' capabilities in code understanding and reasoning, demonstrating the highest correlation with human judgment. We extensively investigate the impact of various factors on the quality of generated code, including model sizes, temperatures, training approaches, prompting strategies, and the difficulty of input problems. Our consistent observation of a positive correlation between the test pass score and the inter-code similarity score indicates that current LMs tend to produce functionally correct code with limited diversity.
15pages, 6 figures, 8 tables
cs.SE
[ "cs.SE", "cs.AI", "cs.PL" ]
Thresholded Lexicographic Ordered Multiobjective Reinforcement Learning
http://arxiv.org/abs/2408.13493v1
http://arxiv.org/abs/2408.13493v1
http://arxiv.org/pdf/2408.13493v1
2024-08-24
2024-08-24
[ "Alperen Tercan", "Vinayak S. Prabhu" ]
[ "", "" ]
Lexicographic multi-objective problems, which impose a lexicographic importance order over the objectives, arise in many real-life scenarios. Existing Reinforcement Learning work directly addressing lexicographic tasks has been scarce. The few proposed approaches were all noted to be heuristics without theoretical guarantees as the Bellman equation is not applicable to them. Additionally, the practical applicability of these prior approaches also suffers from various issues such as not being able to reach the goal state. While some of these issues have been known before, in this work we investigate further shortcomings, and propose fixes for improving practical performance in many cases. We also present a policy optimization approach using our Lexicographic Projection Optimization (LPO) algorithm that has the potential to address these theoretical and practical concerns. Finally, we demonstrate our proposed algorithms on benchmark problems.
Full version of ECAI 2024 paper
cs.LG
[ "cs.LG", "cs.AI" ]
MPruner: Optimizing Neural Network Size with CKA-Based Mutual Information Pruning
http://arxiv.org/abs/2408.13482v1
http://arxiv.org/abs/2408.13482v1
http://arxiv.org/pdf/2408.13482v1
2024-08-24
2024-08-24
[ "Seungbeom Hu", "ChanJun Park", "Andrew Ferraiuolo", "Sang-Ki Ko", "Jinwoo Kim", "Haein Song", "Jieung Kim" ]
[ "", "", "", "", "", "", "" ]
Determining the optimal size of a neural network is critical, as it directly impacts runtime performance and memory usage. Pruning is a well-established model compression technique that reduces the size of neural networks while mathematically guaranteeing accuracy preservation. However, many recent pruning methods overlook the global contributions of individual model components, making it difficult to ensure that a pruned model meets the desired dataset and performance requirements. To address these challenges, we developed a new pruning algorithm, MPruner, that leverages mutual information through vector similarity. MPruner utilizes layer clustering with the Centered Kernel Alignment (CKA) similarity metric, allowing us to incorporate global information from the neural network for more precise and efficient layer-wise pruning. We evaluated MPruner across various architectures and configurations, demonstrating its versatility and providing practical guidelines. MPruner achieved up to a 50% reduction in parameters and memory usage for CNN and transformer-based models, with minimal to no loss in accuracy.
cs.LG
[ "cs.LG", "cs.AI" ]
Disentangled Generative Graph Representation Learning
http://arxiv.org/abs/2408.13471v1
http://arxiv.org/abs/2408.13471v1
http://arxiv.org/pdf/2408.13471v1
2024-08-24
2024-08-24
[ "Xinyue Hu", "Zhibin Duan", "Xinyang Liu", "Yuxin Li", "Bo Chen", "Mingyuan Zhou" ]
[ "", "", "", "", "", "" ]
Recently, generative graph models have shown promising results in learning graph representations through self-supervised methods. However, most existing generative graph representation learning (GRL) approaches rely on random masking across the entire graph, which overlooks the entanglement of learned representations. This oversight results in non-robustness and a lack of explainability. Furthermore, disentangling the learned representations remains a significant challenge and has not been sufficiently explored in GRL research. Based on these insights, this paper introduces DiGGR (Disentangled Generative Graph Representation Learning), a self-supervised learning framework. DiGGR aims to learn latent disentangled factors and utilizes them to guide graph mask modeling, thereby enhancing the disentanglement of learned representations and enabling end-to-end joint learning. Extensive experiments on 11 public datasets for two different graph learning tasks demonstrate that DiGGR consistently outperforms many previous self-supervised methods, verifying the effectiveness of the proposed approach.
cs.LG
[ "cs.LG", "cs.AI" ]
LlamaDuo: LLMOps Pipeline for Seamless Migration from Service LLMs to Small-Scale Local LLMs
http://arxiv.org/abs/2408.13467v2
http://arxiv.org/abs/2408.13467v2
http://arxiv.org/pdf/2408.13467v2
2024-08-24
2024-08-29
[ "Chansung Park", "Juyong Jiang", "Fan Wang", "Sayak Paul", "Jing Tang" ]
[ "", "", "", "", "" ]
The widespread adoption of cloud-based proprietary large language models (LLMs) has introduced significant challenges, including operational dependencies, privacy concerns, and the necessity of continuous internet connectivity. In this work, we introduce an LLMOps pipeline, "LlamaDuo", for the seamless migration of knowledge and abilities from service-oriented LLMs to smaller, locally manageable models. This pipeline is crucial for ensuring service continuity in the presence of operational failures, strict privacy policies, or offline requirements. Our LlamaDuo involves fine-tuning a small language model against the service LLM using a synthetic dataset generated by the latter. If the performance of the fine-tuned model falls short of expectations, it is enhanced by further fine-tuning with additional similar data created by the service LLM. This iterative process guarantees that the smaller model can eventually match or even surpass the service LLM's capabilities in specific downstream tasks, offering a practical and scalable solution for managing AI deployments in constrained environments. Extensive experiments with leading-edge LLMs are conducted to demonstrate the effectiveness, adaptability, and affordability of LlamaDuo across various downstream tasks. Our pipeline implementation is available at https://github.com/deep-diver/llamaduo.
28 pages, 18 figures, 6 tables
cs.LG
[ "cs.LG", "cs.AI", "cs.DC" ]
Uncovering Biases with Reflective Large Language Models
http://arxiv.org/abs/2408.13464v1
http://arxiv.org/abs/2408.13464v1
http://arxiv.org/pdf/2408.13464v1
2024-08-24
2024-08-24
[ "Edward Y. Chang" ]
[ "" ]
Biases inherent in human endeavors pose significant challenges for machine learning, particularly in supervised learning that relies on potentially biased "ground truth" data. This reliance, coupled with models' tendency to generalize based on statistical maximal likelihood, can propagate and amplify biases, exacerbating societal issues. To address this, our study proposes a reflective methodology utilizing multiple Large Language Models (LLMs) engaged in a dynamic dialogue to uncover diverse perspectives. By leveraging conditional statistics, information theory, and divergence metrics, this novel approach fosters context-dependent linguistic behaviors, promoting unbiased outputs. Furthermore, it enables measurable progress tracking and explainable remediation actions to address identified biases.
16 pages, 3 figures, 8 tables
cs.AI
[ "cs.AI", "cs.CL", "cs.LG", "I.2.7" ]
Probing the Robustness of Vision-Language Pretrained Models: A Multimodal Adversarial Attack Approach
http://arxiv.org/abs/2408.13461v1
http://arxiv.org/abs/2408.13461v1
http://arxiv.org/pdf/2408.13461v1
2024-08-24
2024-08-24
[ "Jiwei Guan", "Tianyu Ding", "Longbing Cao", "Lei Pan", "Chen Wang", "Xi Zheng" ]
[ "", "", "", "", "", "" ]
Vision-language pretraining (VLP) with transformers has demonstrated exceptional performance across numerous multimodal tasks. However, the adversarial robustness of these models has not been thoroughly investigated. Existing multimodal attack methods have largely overlooked cross-modal interactions between visual and textual modalities, particularly in the context of cross-attention mechanisms. In this paper, we study the adversarial vulnerability of recent VLP transformers and design a novel Joint Multimodal Transformer Feature Attack (JMTFA) that concurrently introduces adversarial perturbations in both visual and textual modalities under white-box settings. JMTFA strategically targets attention relevance scores to disrupt important features within each modality, generating adversarial samples by fusing perturbations and leading to erroneous model predictions. Experimental results indicate that the proposed approach achieves high attack success rates on vision-language understanding and reasoning downstream tasks compared to existing baselines. Notably, our findings reveal that the textual modality significantly influences the complex fusion processes within VLP transformers. Moreover, we observe no apparent relationship between model size and adversarial robustness under our proposed attacks. These insights emphasize a new dimension of adversarial robustness and underscore potential risks in the reliable deployment of multimodal AI systems.
cs.CV
[ "cs.CV", "cs.AI" ]
Make Every Penny Count: Difficulty-Adaptive Self-Consistency for Cost-Efficient Reasoning
http://arxiv.org/abs/2408.13457v1
http://arxiv.org/abs/2408.13457v1
http://arxiv.org/pdf/2408.13457v1
2024-08-24
2024-08-24
[ "Xinglin Wang", "Shaoxiong Feng", "Yiwei Li", "Peiwen Yuan", "Yueqi Zhang", "Boyuan Pan", "Heda Wang", "Yao Hu", "Kan Li" ]
[ "", "", "", "", "", "", "", "", "" ]
Self-consistency (SC), a widely used decoding strategy for chain-of-thought reasoning, shows significant gains across various multi-step reasoning tasks but comes with a high cost due to multiple sampling with a preset sample size. Its variants, Adaptive self-consistency (ASC) and Early-stopping self-consistency (ESC), dynamically adjust the number of samples based on the posterior distribution of a set of pre-samples, reducing the cost of SC with minimal impact on performance. Both methods, however, do not exploit prior information about question difficulty. This often results in unnecessary repeated sampling for easy questions that could be accurately answered with just one attempt, wasting resources. To tackle this problem, we propose Difficulty-Adaptive Self-Consistency (DSC), which leverages difficulty information from both prior and posterior perspectives to adaptively allocate inference resources, further reducing the cost of SC. To demonstrate the effectiveness of DSC, we conduct extensive experiments on three popular categories of reasoning tasks: arithmetic, commonsense, and symbolic reasoning, on six benchmarks. The empirical results show that DSC consistently surpasses the strong baselines ASC and ESC in terms of cost by a significant margin, while attaining comparable performance.
Preprint
cs.CL
[ "cs.CL", "cs.AI" ]
A Law of Next-Token Prediction in Large Language Models
http://arxiv.org/abs/2408.13442v1
http://arxiv.org/abs/2408.13442v1
http://arxiv.org/pdf/2408.13442v1
2024-08-24
2024-08-24
[ "Hangfeng He", "Weijie J. Su" ]
[ "", "" ]
Large language models (LLMs) have been widely employed across various application domains, yet their black-box nature poses significant challenges to understanding how these models process input data internally to make predictions. In this paper, we introduce a precise and quantitative law that governs the learning of contextualized token embeddings through intermediate layers in pre-trained LLMs for next-token prediction. Our findings reveal that each layer contributes equally to enhancing prediction accuracy, from the lowest to the highest layer -- a universal phenomenon observed across a diverse array of open-source LLMs, built on architectures such as Transformer, RWKV, and Mamba. We demonstrate that this law offers new perspectives and insights to inform and guide practices in LLM development and applications, including model scaling, pre-training tasks, and information flow. Overall, our law enables more fine-grained approaches to the design, training, and interpretation of LLMs through scrutinizing their internal data processing mechanisms.
cs.LG
[ "cs.LG", "cs.AI", "cs.CL", "stat.ML" ]
Applying graph neural network to SupplyGraph for supply chain network
http://arxiv.org/abs/2408.14501v1
http://arxiv.org/abs/2408.14501v1
http://arxiv.org/pdf/2408.14501v1
2024-08-23
2024-08-23
[ "Kihwan Han" ]
[ "" ]
Supply chain networks describe interactions between products, manufacturing facilities, and storage facilities in the context of supply and demand of the products. Supply chain data are inherently graph-structured; thus, they can be fertile ground for applications of graph neural networks (GNN). Very recently, a supply chain dataset, SupplyGraph, has been released to the public. Though the SupplyGraph dataset is valuable given the scarcity of publicly available data, there was limited clarity in the description of the dataset, the data quality assurance process, and the hyperparameters of the selected models. Further, for generalizability of findings, it would be more convincing to present the findings by performing statistical analyses on the distribution of errors rather than showing the average value of the errors. Therefore, this study assessed the supply chain dataset, SupplyGraph, with better clarity on the analysis process, data quality assurance, and machine learning (ML) model specifications. After data quality assurance procedures, this study compared the performance of the Multilayer Perceptron (MLP), Graph Convolution Network (GCN), and Graph Attention Network (GAT) on a demand forecasting task while matching hyperparameters as closely as feasible. The analyses revealed that GAT performed best, followed by GCN and MLP. Those performance improvements were statistically significant at $\alpha = 0.05$ after correction for multiple comparisons. This study also discussed several considerations in applying GNN to supply chain networks. The current study reinforces the previous study on the supply chain benchmark dataset with respect to dataset description and methodology, so that future research in applications of GNN to supply chains becomes more reproducible.
8 pages, 5 figures
cs.LG
[ "cs.LG", "cs.AI" ]
Optimizing Collaboration of LLM based Agents for Finite Element Analysis
http://arxiv.org/abs/2408.13406v1
http://arxiv.org/abs/2408.13406v1
http://arxiv.org/pdf/2408.13406v1
2024-08-23
2024-08-23
[ "Chuan Tian", "Yilei Zhang" ]
[ "", "" ]
This paper investigates the interactions between multiple agents within Large Language Models (LLMs) in the context of programming and coding tasks. We utilize the AutoGen framework to facilitate communication among agents, evaluating different configurations based on the success rates from 40 random runs for each setup. The study focuses on developing a flexible automation framework for applying the Finite Element Method (FEM) to solve linear elastic problems. Our findings emphasize the importance of optimizing agent roles and clearly defining their responsibilities, rather than merely increasing the number of agents. Effective collaboration among agents is shown to be crucial for addressing general FEM challenges. This research demonstrates the potential of LLM multi-agent systems to enhance computational automation in simulation methodologies, paving the way for future advancements in engineering and artificial intelligence.
cs.AI
[ "cs.AI", "cs.CE", "cs.MA" ]
Transforming Location Retrieval at Airbnb: A Journey from Heuristics to Reinforcement Learning
http://arxiv.org/abs/2408.13399v1
http://arxiv.org/abs/2408.13399v1
http://arxiv.org/pdf/2408.13399v1
2024-08-23
2024-08-23
[ "Dillon Davis", "Huiji Gao", "Weiwei Guo", "Thomas Legrand", "Malay Haldar", "Alex Deng", "Han Zhao", "Liwei He", "Sanjeev Katariya" ]
[ "", "", "", "", "", "", "", "", "" ]
The Airbnb search system grapples with many unique challenges as it continues to evolve. We oversee a marketplace that is nuanced by geography, diversity of homes, and guests with a variety of preferences. Crafting an efficient search system that can accommodate diverse guest needs, while showcasing relevant homes lies at the heart of Airbnb's success. Airbnb search has many challenges that parallel other recommendation and search systems but it has a unique information retrieval problem, upstream of ranking, called location retrieval. It requires defining a topological map area that is relevant to the searched query for homes listing retrieval. The purpose of this paper is to demonstrate the methodology, challenges, and impact of building a machine learning based location retrieval product from the ground up. Despite the lack of suitable, prevalent machine learning based approaches, we tackle cold start, generalization, differentiation and algorithmic bias. We detail the efficacy of heuristics, statistics, machine learning, and reinforcement learning approaches to solve these challenges, particularly for systems that are often unexplored by current literature.
cs.IR
[ "cs.IR", "cs.AI" ]
N-DriverMotion: Driver motion learning and prediction using an event-based camera and directly trained spiking neural networks
http://arxiv.org/abs/2408.13379v1
http://arxiv.org/abs/2408.13379v1
http://arxiv.org/pdf/2408.13379v1
2024-08-23
2024-08-23
[ "Hyo Jong Chung", "Byungkon Kang", "Yoonseok Yang" ]
[ "", "", "" ]
Driver motion recognition is a principal factor in ensuring the safety of driving systems. This paper presents a novel system for learning and predicting driver motions and an event-based high-resolution (1280x720) dataset, N-DriverMotion, newly collected to train on a neuromorphic vision system. The system comprises an event-based camera that generates the first high-resolution driver motion dataset representing spike inputs and efficient spiking neural networks (SNNs) that are effective in training and predicting the driver's gestures. The event dataset consists of 13 driver motion categories classified by direction (front, side), illumination (bright, moderate, dark), and participant. A novel simplified four-layer convolutional spiking neural network (CSNN) that we proposed was directly trained using the high-resolution dataset without any time-consuming preprocessing. This enables efficient adaptation to on-device SNNs for real-time inference on high-resolution event-based streams. Compared with recent gesture recognition systems adopting neural networks for vision processing, the proposed neuromorphic vision system achieves comparable accuracy, 94.04\%, in recognizing driver motions with the CSNN architecture. Our proposed CSNN and the dataset can be used to develop safer and more efficient driver monitoring systems for autonomous vehicles or edge devices requiring an efficient neural network architecture.
10 pages, 5 figures
cs.CV
[ "cs.CV", "cs.AI", "68T45", "I.4.8; I.4.9" ]
DrugAgent: Explainable Drug Repurposing Agent with Large Language Model-based Reasoning
http://arxiv.org/abs/2408.13378v1
http://arxiv.org/abs/2408.13378v1
http://arxiv.org/pdf/2408.13378v1
2024-08-23
2024-08-23
[ "Yoshitaka Inoue", "Tianci Song", "Tianfan Fu" ]
[ "", "", "" ]
Drug repurposing offers a promising avenue for accelerating drug development by identifying new therapeutic potentials of existing drugs. In this paper, we propose a multi-agent framework to enhance the drug repurposing process using state-of-the-art machine learning techniques and knowledge integration. Our framework comprises several specialized agents: an AI Agent trains robust drug-target interaction (DTI) models; a Knowledge Graph Agent utilizes the drug-gene interaction database (DGIdb), DrugBank, Comparative Toxicogenomics Database (CTD), and Search Tool for Interactions of Chemicals (STITCH) to systematically extract DTIs; and a Search Agent interacts with biomedical literature to annotate and verify computational predictions. By integrating outputs from these agents, our system effectively harnesses diverse data sources, including external databases, to propose viable repurposing candidates. Preliminary results demonstrate the potential of our approach in not only predicting drug-disease interactions but also in reducing the time and cost associated with traditional drug discovery methods. This paper highlights the scalability of multi-agent systems in biomedical research and their role in driving innovation in drug repurposing. Our approach not only outperforms existing methods in predicting drug repurposing potential but also provides interpretable results, paving the way for more efficient and cost-effective drug discovery processes.
18 pages, 1 figure
cs.AI
[ "cs.AI", "cs.CL", "cs.IR", "cs.LG", "q-bio.QM" ]
Reduce, Reuse, Recycle: Categories for Compositional Reinforcement Learning
http://arxiv.org/abs/2408.13376v1
http://arxiv.org/abs/2408.13376v1
http://arxiv.org/pdf/2408.13376v1
2024-08-23
2024-08-23
[ "Georgios Bakirtzis", "Michail Savvas", "Ruihan Zhao", "Sandeep Chinchali", "Ufuk Topcu" ]
[ "", "", "", "", "" ]
In reinforcement learning, conducting task composition by forming cohesive, executable sequences from multiple tasks remains challenging. However, the ability to (de)compose tasks is a linchpin in developing robotic systems capable of learning complex behaviors. Yet, compositional reinforcement learning is beset with difficulties, including the high dimensionality of the problem space, scarcity of rewards, and absence of system robustness after task composition. To surmount these challenges, we view task composition through the prism of category theory -- a mathematical discipline exploring structures and their compositional relationships. The categorical properties of Markov decision processes untangle complex tasks into manageable sub-tasks, allowing for strategical reduction of dimensionality, facilitating more tractable reward structures, and bolstering system robustness. Experimental results support the categorical theory of reinforcement learning by enabling skill reduction, reuse, and recycling when learning complex robotic arm tasks.
ECAI 2024
cs.AI
[ "cs.AI", "cs.LG", "cs.SY", "eess.SY", "math.CT" ]
Understanding Defects in Generated Codes by Language Models
http://arxiv.org/abs/2408.13372v1
http://arxiv.org/abs/2408.13372v1
http://arxiv.org/pdf/2408.13372v1
2024-08-23
2024-08-23
[ "Ali Mohammadi Esfahani", "Nafiseh Kahani", "Samuel A. Ajila" ]
[ "", "", "" ]
This study investigates the reliability of code generation by Large Language Models (LLMs), focusing on identifying and analyzing defects in the generated code. Despite the advanced capabilities of LLMs in automating code generation, ensuring the accuracy and functionality of the output remains a significant challenge. By using a structured defect classification method to understand their nature and origins, this study categorizes and analyzes 367 identified defects from code snippets generated by LLMs, with a significant proportion being functionality and algorithm errors. These error categories indicate key areas where LLMs frequently fail, underscoring the need for targeted improvements. To enhance the accuracy of code generation, this paper implemented five prompt engineering techniques, including Scratchpad Prompting, Program of Thoughts Prompting, Chain-of-Thought Prompting, Chain of Code Prompting, and Structured Chain-of-Thought Prompting. These techniques were applied to refine the input prompts, aiming to reduce ambiguities and improve the models' accuracy rate. The research findings suggest that precise and structured prompting significantly mitigates common defects, thereby increasing the reliability of LLM-generated code.
cs.SE
[ "cs.SE", "cs.AI" ]
CodeRefine: A Pipeline for Enhancing LLM-Generated Code Implementations of Research Papers
http://arxiv.org/abs/2408.13366v1
http://arxiv.org/abs/2408.13366v1
http://arxiv.org/pdf/2408.13366v1
2024-08-23
2024-08-23
[ "Ekaterina Trofimova", "Emil Sataev", "Abhijit Singh Jowhari" ]
[ "", "", "" ]
This paper presents CodeRefine, a novel framework for automatically transforming research paper methodologies into functional code using Large Language Models (LLMs). Our multi-step approach first extracts and summarizes key text chunks from papers, analyzes their code relevance, and creates a knowledge graph using a predefined ontology. Code is then generated from this structured representation and enhanced through a proposed retrospective retrieval-augmented generation approach. CodeRefine addresses the challenge of bridging theoretical research and practical implementation, offering a more accurate alternative to LLM zero-shot prompting. Evaluations on diverse scientific papers demonstrate CodeRefine's ability to improve code implementation from the paper, potentially accelerating the adoption of cutting-edge algorithms in real-world applications.
cs.CL
[ "cs.CL", "cs.AI", "cs.LG" ]
Reconciling Different Theories of Learning with an Agent-based Model of Procedural Learning
http://arxiv.org/abs/2408.13364v1
http://arxiv.org/abs/2408.13364v1
http://arxiv.org/pdf/2408.13364v1
2024-08-23
2024-08-23
[ "Sina Rismanchian", "Shayan Doroudi" ]
[ "", "" ]
Computational models of human learning can play a significant role in enhancing our knowledge about nuances in theoretical and qualitative learning theories and frameworks. There are many existing frameworks in educational settings that have shown to be verified using empirical studies, but at times we find these theories make conflicting claims or recommendations for instruction. In this study, we propose a new computational model of human learning, Procedural ABICAP, that reconciles the ICAP, Knowledge-Learning-Instruction (KLI), and cognitive load theory (CLT) frameworks for learning procedural knowledge. ICAP assumes that constructive learning generally yields better learning outcomes, while theories such as KLI and CLT claim that this is not always true. We suppose that one reason for this may be that ICAP is primarily used for conceptual learning and is underspecified as a framework for thinking about procedural learning. We show how our computational model, both by design and through simulations, can be used to reconcile different results in the literature. More generally, we position our computational model as an executable theory of learning that can be used to simulate various educational settings.
cs.CY
[ "cs.CY", "cs.AI" ]
Power Scheduler: A Batch Size and Token Number Agnostic Learning Rate Scheduler
http://arxiv.org/abs/2408.13359v1
http://arxiv.org/abs/2408.13359v1
http://arxiv.org/pdf/2408.13359v1
2024-08-23
2024-08-23
[ "Yikang Shen", "Matthew Stallone", "Mayank Mishra", "Gaoyuan Zhang", "Shawn Tan", "Aditya Prasad", "Adriana Meza Soria", "David D. Cox", "Rameswar Panda" ]
[ "", "", "", "", "", "", "", "", "" ]
Finding the optimal learning rate for language model pretraining is a challenging task. This is not only because there is a complicated correlation between learning rate, batch size, number of training tokens, model size, and other hyperparameters but also because it is prohibitively expensive to perform a hyperparameter search for large language models with billions or trillions of parameters. Recent studies propose using small proxy models and small corpora to perform hyperparameter searches and transposing the optimal parameters to large models and large corpora. While the zero-shot transferability is theoretically and empirically proven for model-size-related hyperparameters, like depth and width, the zero-shot transfer from small to large corpora is underexplored. In this paper, we study the correlation between optimal learning rate, batch size, and number of training tokens for the recently proposed WSD scheduler. After thousands of small experiments, we found a power-law relationship between variables and demonstrated its transferability across model sizes. Based on this observation, we propose a new learning rate scheduler, the Power scheduler, that is agnostic about the number of training tokens and batch size. Experiments show that combining the Power scheduler with Maximum Update Parameterization (muP) can consistently achieve impressive performance with one set of hyperparameters regardless of the number of training tokens, batch size, model size, and even model architecture. Our 3B dense and MoE models trained with the Power scheduler achieve comparable performance to state-of-the-art small language models. We open-source these pretrained models at https://ibm.biz/BdKhLa.
cs.CL
[ "cs.CL", "cs.AI", "cs.LG" ]
Disentangled Training with Adversarial Examples For Robust Small-footprint Keyword Spotting
http://arxiv.org/abs/2408.13355v1
http://arxiv.org/abs/2408.13355v1
http://arxiv.org/pdf/2408.13355v1
2024-08-23
2024-08-23
[ "Zhenyu Wang", "Li Wan", "Biqiao Zhang", "Yiteng Huang", "Shang-Wen Li", "Ming Sun", "Xin Lei", "Zhaojun Yang" ]
[ "", "", "", "", "", "", "", "" ]
A keyword spotting (KWS) engine that is continuously running on device is exposed to various speech signals that are usually unseen before. It is a challenging problem to build a small-footprint and high-performing KWS model with robustness under different acoustic environments. In this paper, we explore how to effectively apply adversarial examples to improve KWS robustness. We propose datasource-aware disentangled learning with adversarial examples to reduce the mismatch between the original and adversarial data as well as the mismatch across original training datasources. The KWS model architecture is based on depth-wise separable convolution and a simple attention module. Experimental results demonstrate that the proposed learning strategy improves false reject rate by 40.31% at 1% false accept rate on the internal dataset, compared to the strongest baseline without using adversarial examples. Our best-performing system achieves 98.06% accuracy on the Google Speech Commands V1 dataset.
ICASSP 2023
cs.SD
[ "cs.SD", "cs.AI", "eess.AS" ]
Toward Improving Synthetic Audio Spoofing Detection Robustness via Meta-Learning and Disentangled Training With Adversarial Examples
http://arxiv.org/abs/2408.13341v1
http://arxiv.org/abs/2408.13341v1
http://arxiv.org/pdf/2408.13341v1
2024-08-23
2024-08-23
[ "Zhenyu Wang", "John H. L. Hansen" ]
[ "", "" ]
Advances in automatic speaker verification (ASV) promote research into the formulation of spoofing detection systems for real-world applications. The performance of ASV systems can be degraded severely by multiple types of spoofing attacks, namely, synthetic speech (SS), voice conversion (VC), replay, twins, and impersonation, especially in the case of unseen synthetic spoofing attacks. A reliable and robust spoofing detection system can act as a security gate to filter out spoofing attacks instead of having them reach the ASV system. A weighted additive angular margin loss is proposed to address the data imbalance issue, and different margins have been assigned to improve generalization to unseen spoofing attacks in this study. Meanwhile, we incorporate a meta-learning loss function to optimize differences between the embeddings of the support versus the query set in order to learn a spoofing-category-independent embedding space for utterances. Furthermore, we craft adversarial examples by adding imperceptible perturbations to spoofing speech as a data augmentation strategy, and then we use an auxiliary batch normalization (BN) to guarantee that corresponding normalization statistics are performed exclusively on the adversarial examples. Additionally, a simple attention module is integrated into the residual block to refine the feature extraction process. Evaluation results on the Logical Access (LA) track of the ASVspoof 2019 corpus confirm our proposed approaches' effectiveness in terms of a pooled EER of 0.87% and a min t-DCF of 0.0277. These advancements offer effective options to reduce the impact of spoofing attacks on voice recognition/authentication systems.
IEEE ACCESS 2024
IEEE ACCESS 2024
10.1109/ACCESS.2024.3421281
cs.SD
[ "cs.SD", "cs.AI", "eess.AS" ]
SHEDAD: SNN-Enhanced District Heating Anomaly Detection for Urban Substations
http://arxiv.org/abs/2408.14499v1
http://arxiv.org/abs/2408.14499v1
http://arxiv.org/pdf/2408.14499v1
2024-08-23
2024-08-23
[ "Jonne van Dreven", "Abbas Cheddad", "Sadi Alawadi", "Ahmad Nauman Ghazi", "Jad Al Koussa", "Dirk Vanhoudt" ]
[ "", "", "", "", "", "" ]
District Heating (DH) systems are essential for energy-efficient urban heating. However, despite the advancements in automated fault detection and diagnosis (FDD), DH still faces challenges in operational faults that impact efficiency. This study introduces the Shared Nearest Neighbor Enhanced District Heating Anomaly Detection (SHEDAD) approach, designed to approximate the DH network topology and allow for local anomaly detection without disclosing sensitive information, such as substation locations. The approach leverages a multi-adaptive k-Nearest Neighbor (k-NN) graph to improve the initial neighborhood creation. Moreover, it introduces a merging technique that reduces noise and eliminates trivial edges. We use the Median Absolute Deviation (MAD) and modified z-scores to flag anomalous substations. The results reveal that SHEDAD outperforms traditional clustering methods, achieving significantly lower intra-cluster variance and distance. Additionally, SHEDAD effectively isolates and identifies two distinct categories of anomalies: supply temperatures and substation performance. We identified 30 anomalous substations and reached a sensitivity of approximately 65\% and specificity of approximately 97\%. By focusing on this subset of poor-performing substations in the network, SHEDAD enables more targeted and effective maintenance interventions, which can reduce energy usage while optimizing network performance.
12 pages, 5 figures, FMEC2024
cs.LG
[ "cs.LG", "cs.AI" ]
LalaEval: A Holistic Human Evaluation Framework for Domain-Specific Large Language Models
http://arxiv.org/abs/2408.13338v1
http://arxiv.org/abs/2408.13338v1
http://arxiv.org/pdf/2408.13338v1
2024-08-23
2024-08-23
[ "Chongyan Sun", "Ken Lin", "Shiwei Wang", "Hulong Wu", "Chengfei Fu", "Zhen Wang" ]
[ "", "", "", "", "", "" ]
This paper introduces LalaEval, a holistic framework designed for the human evaluation of domain-specific large language models (LLMs). LalaEval proposes a comprehensive suite of end-to-end protocols that cover five main components including domain specification, criteria establishment, benchmark dataset creation, construction of evaluation rubrics, and thorough analysis and interpretation of evaluation outcomes. This initiative aims to fill a crucial research gap by providing a systematic methodology for conducting standardized human evaluations within specific domains, a practice that, despite its widespread application, lacks substantial coverage in the literature. Moreover, human evaluation is often criticized as less reliable due to subjective factors, so standardized procedures adapted to the nuanced requirements of specific domains or even individual organizations are greatly needed. Furthermore, the paper demonstrates the framework's application within the logistics industry, presenting domain-specific evaluation benchmarks, datasets, and a comparative analysis of LLMs for use in the logistics domain, highlighting the framework's capacity to elucidate performance differences and guide model selection and development for domain-specific LLMs. Through real-world deployment, the paper underscores the framework's effectiveness in advancing the field of domain-specific LLM evaluation, thereby contributing significantly to the ongoing discussion on LLMs' practical utility and performance in domain-specific applications.
cs.HC
[ "cs.HC", "cs.AI", "cs.CL" ]
Mastering the Digital Art of War: Developing Intelligent Combat Simulation Agents for Wargaming Using Hierarchical Reinforcement Learning
http://arxiv.org/abs/2408.13333v1
http://arxiv.org/abs/2408.13333v1
http://arxiv.org/pdf/2408.13333v1
2024-08-23
2024-08-23
[ "Scotty Black" ]
[ "" ]
In today's rapidly evolving military landscape, advancing artificial intelligence (AI) in support of wargaming becomes essential. Despite reinforcement learning (RL) showing promise for developing intelligent agents, conventional RL faces limitations in handling the complexity inherent in combat simulations. This dissertation proposes a comprehensive approach, including targeted observation abstractions, multi-model integration, a hybrid AI framework, and an overarching hierarchical reinforcement learning (HRL) framework. Our localized observation abstraction using piecewise linear spatial decay simplifies the RL problem, enhancing computational efficiency and demonstrating superior efficacy over traditional global observation methods. Our multi-model framework combines various AI methodologies, optimizing performance while still enabling the use of diverse, specialized individual behavior models. Our hybrid AI framework synergizes RL with scripted agents, leveraging RL for high-level decisions and scripted agents for lower-level tasks, enhancing adaptability, reliability, and performance. Our HRL architecture and training framework decomposes complex problems into manageable subproblems, aligning with military decision-making structures. Although initial tests did not show improved performance, insights were gained to improve future iterations. This study underscores AI's potential to revolutionize wargaming, emphasizing the need for continued research in this domain.
cs.LG
[ "cs.LG", "cs.AI" ]
Localized Observation Abstraction Using Piecewise Linear Spatial Decay for Reinforcement Learning in Combat Simulations
http://arxiv.org/abs/2408.13328v1
http://arxiv.org/abs/2408.13328v1
http://arxiv.org/pdf/2408.13328v1
2024-08-23
2024-08-23
[ "Scotty Black", "Christian Darken" ]
[ "", "" ]
In the domain of combat simulations, the training and deployment of deep reinforcement learning (RL) agents still face substantial challenges due to the dynamic and intricate nature of such environments. Unfortunately, as the complexity of the scenarios and available information increases, the training time required to achieve a certain threshold of performance does not just increase, but often does so exponentially. This relationship underscores the profound impact of complexity in training RL agents. This paper introduces a novel approach that addresses this limitation in training artificial intelligence (AI) agents using RL. Traditional RL methods have been shown to struggle in these high-dimensional, dynamic environments due to real-world computational constraints and the known sample inefficiency challenges of RL. To overcome these limitations, we propose a method of localized observation abstraction using piecewise linear spatial decay. This technique simplifies the state space, reducing computational demands while still preserving essential information, thereby enhancing AI training efficiency in dynamic environments where spatial relationships are often critical. Our analysis reveals that this localized observation approach consistently outperforms the more traditional global observation approach across increasing scenario complexity levels. This paper advances the research on observation abstractions for RL, illustrating how localized observation with piecewise linear spatial decay can provide an effective solution to large state representation challenges in dynamic environments.
cs.LG
[ "cs.LG", "cs.AI" ]
How Diffusion Models Learn to Factorize and Compose
http://arxiv.org/abs/2408.13256v1
http://arxiv.org/abs/2408.13256v1
http://arxiv.org/pdf/2408.13256v1
2024-08-23
2024-08-23
[ "Qiyao Liang", "Ziming Liu", "Mitchell Ostrow", "Ila Fiete" ]
[ "", "", "", "" ]
Diffusion models are capable of generating photo-realistic images that combine elements which likely do not appear together in the training set, demonstrating the ability to compositionally generalize. Nonetheless, the precise mechanism of compositionality and how it is acquired through training remains elusive. Inspired by cognitive neuroscientific approaches, we consider a highly reduced setting to examine whether and when diffusion models learn semantically meaningful and factorized representations of composable features. We performed extensive controlled experiments on conditional Denoising Diffusion Probabilistic Models (DDPMs) trained to generate various forms of 2D Gaussian data. We found that the models learn factorized but not fully continuous manifold representations for encoding continuous features of variation underlying the data. With such representations, models demonstrate superior feature compositionality but limited ability to interpolate over unseen values of a given feature. Our experimental results further demonstrate that diffusion models can attain compositionality with few compositional examples, suggesting a more efficient way to train DDPMs. Finally, we connect manifold formation in diffusion models to percolation theory in physics, offering insight into the sudden onset of factorized representation learning. Our thorough toy experiments thus contribute a deeper understanding of how diffusion models capture compositional structure in data.
11 pages, 6 figures, plus appendix, some content overlap with arXiv:2402.03305
cs.AI
[ "cs.AI", "cs.CV", "cs.LG" ]
Ensemble Modeling of Multiple Physical Indicators to Dynamically Phenotype Autism Spectrum Disorder
http://arxiv.org/abs/2408.13255v1
http://arxiv.org/abs/2408.13255v1
http://arxiv.org/pdf/2408.13255v1
2024-08-23
2024-08-23
[ "Marie Huynh", "Aaron Kline", "Saimourya Surabhi", "Kaitlyn Dunlap", "Onur Cezmi Mutlu", "Mohammadmahdi Honarmand", "Parnian Azizian", "Peter Washington", "Dennis P. Wall" ]
[ "", "", "", "", "", "", "", "", "" ]
Early detection of autism, a neurodevelopmental disorder marked by social communication challenges, is crucial for timely intervention. Recent advancements have utilized naturalistic home videos captured via the mobile application GuessWhat. Through interactive games played between children and their guardians, GuessWhat has amassed over 3,000 structured videos from 382 children, both diagnosed with and without Autism Spectrum Disorder (ASD). This collection provides a robust dataset for training computer vision models to detect ASD-related phenotypic markers, including variations in emotional expression, eye contact, and head movements. We have developed a protocol to curate high-quality videos from this dataset, forming a comprehensive training set. Utilizing this set, we trained individual LSTM-based models using eye gaze, head positions, and facial landmarks as input features, achieving test AUCs of 86%, 67%, and 78%, respectively. To boost diagnostic accuracy, we applied late fusion techniques to create ensemble models, improving the overall AUC to 90%. This approach also yielded more equitable results across different genders and age groups. Our methodology offers a significant step forward in the early detection of ASD by potentially reducing the reliance on subjective assessments and making early identification more accessible and equitable.
cs.CV
[ "cs.CV", "cs.AI" ]
Foundational Model for Electron Micrograph Analysis: Instruction-Tuning Small-Scale Language-and-Vision Assistant for Enterprise Adoption
http://arxiv.org/abs/2408.13248v1
http://arxiv.org/abs/2408.13248v1
http://arxiv.org/pdf/2408.13248v1
2024-08-23
2024-08-23
[ "Sakhinana Sagar Srinivas", "Chidaksh Ravuru", "Geethan Sannidhi", "Venkataramana Runkana" ]
[ "", "", "", "" ]
Semiconductor imaging and analysis are critical yet understudied in deep learning, limiting our ability for precise control and optimization in semiconductor manufacturing. We introduce a small-scale multimodal framework for analyzing semiconductor electron microscopy images (MAEMI) through vision-language instruction tuning. We generate a customized instruction-following dataset using large multimodal models on microscopic image analysis. We perform knowledge transfer from larger to smaller models through knowledge distillation, resulting in improved accuracy of smaller models on visual question answering (VQA) tasks. This approach eliminates the need for expensive, human expert-annotated datasets for microscopic image analysis tasks. Enterprises can further finetune MAEMI on their intellectual data, enhancing privacy and performance on low-cost consumer hardware. Our experiments show that MAEMI outperforms traditional methods, adapts to data distribution shifts, and supports high-throughput screening.
Our paper is published at ICML 2024 Workshop ML for Life and Material Science: From Theory to Industry Applications, Vienna, Austria
cs.CV
[ "cs.CV", "cs.AI", "cs.LG" ]
Data Exposure from LLM Apps: An In-depth Investigation of OpenAI's GPTs
http://arxiv.org/abs/2408.13247v1
http://arxiv.org/abs/2408.13247v1
http://arxiv.org/pdf/2408.13247v1
2024-08-23
2024-08-23
[ "Evin Jaff", "Yuhao Wu", "Ning Zhang", "Umar Iqbal" ]
[ "", "", "", "" ]
LLM app ecosystems are quickly maturing and supporting a wide range of use cases, which requires them to collect excessive user data. Given that the LLM apps are developed by third parties and that anecdotal evidence suggests LLM platforms currently do not strictly enforce their policies, user data shared with arbitrary third parties poses a significant privacy risk. In this paper we aim to bring transparency in data practices of LLM apps. As a case study, we study OpenAI's GPT app ecosystem. We develop an LLM-based framework to conduct the static analysis of natural language-based source code of GPTs and their Actions (external services) to characterize their data collection practices. Our findings indicate that Actions collect expansive data about users, including sensitive information prohibited by OpenAI, such as passwords. We find that some Actions, including those related to advertising and analytics, are embedded in multiple GPTs, allowing them to track user activities across GPTs. Additionally, the co-occurrence of Actions exposes as much as 9.5x more data to them than is exposed to individual Actions. Lastly, we develop an LLM-based privacy policy analysis framework to automatically check the consistency of data collection by Actions with disclosures in their privacy policies. Our measurements indicate that the disclosures for most of the collected data types are omitted in privacy policies, with only 5.8% of Actions clearly disclosing their data collection practices.
cs.CR
[ "cs.CR", "cs.AI", "cs.CL", "cs.CY", "cs.LG" ]
JacNet: Learning Functions with Structured Jacobians
http://arxiv.org/abs/2408.13237v1
http://arxiv.org/abs/2408.13237v1
http://arxiv.org/pdf/2408.13237v1
2024-08-23
2024-08-23
[ "Jonathan Lorraine", "Safwan Hossain" ]
[ "", "" ]
Neural networks are trained to learn an approximate mapping from an input domain to a target domain. Incorporating prior knowledge about true mappings is critical to learning a useful approximation. With current architectures, it is challenging to enforce structure on the derivatives of the input-output mapping. We propose to use a neural network to directly learn the Jacobian of the input-output function, which allows easy control of the derivative. We focus on structuring the derivative to allow invertibility and also demonstrate that other useful priors, such as $k$-Lipschitz, can be enforced. Using this approach, we can learn approximations to simple functions that are guaranteed to be invertible and easily compute the inverse. We also show similar results for 1-Lipschitz functions.
6 pages, 3 Figures, ICML 2019 INNF Workshop
cs.LG
[ "cs.LG", "cs.AI", "stat.ML", "68T07", "I.2.6; G.1.0; I.5.1" ]
Multi-Layer Transformers Gradient Can be Approximated in Almost Linear Time
http://arxiv.org/abs/2408.13233v1
http://arxiv.org/abs/2408.13233v1
http://arxiv.org/pdf/2408.13233v1
2024-08-23
2024-08-23
[ "Yingyu Liang", "Zhizhou Sha", "Zhenmei Shi", "Zhao Song", "Yufa Zhou" ]
[ "", "", "", "", "" ]
The quadratic computational complexity in the self-attention mechanism of popular transformer architectures poses significant challenges for training and inference, particularly in terms of efficiency and memory requirements. Towards addressing these challenges, this paper introduces a novel fast computation method for gradient calculation in multi-layer transformer models. Our approach enables the computation of gradients for the entire multi-layer transformer model in almost linear time $n^{1+o(1)}$, where $n$ is the input sequence length. This breakthrough significantly reduces the computational bottleneck associated with the traditional quadratic time complexity. Our theory holds for any loss function and maintains a bounded approximation error across the entire model. Furthermore, our analysis holds even when the multi-layer transformer model contains many practical sub-modules, such as residual connections, causal masks, and multi-head attention. By improving the efficiency of gradient computation in large language models, we hope that our work will facilitate the more effective training and deployment of long-context language models based on our theoretical results.
cs.LG
[ "cs.LG", "cs.AI", "cs.CL" ]
Enhancing Few-Shot Transfer Learning with Optimized Multi-Task Prompt Tuning through Modular Prompt Composition
http://arxiv.org/abs/2408.13227v1
http://arxiv.org/abs/2408.13227v1
http://arxiv.org/pdf/2408.13227v1
2024-08-23
2024-08-23
[ "Ahmad Pouramini", "Hesham Faili" ]
[ "", "" ]
In recent years, multi-task prompt tuning has garnered considerable attention for its inherent modularity and potential to enhance parameter-efficient transfer learning across diverse tasks. This paper aims to analyze and improve the performance of multiple tasks by facilitating the transfer of knowledge between their corresponding prompts in a multi-task setting. Our proposed approach decomposes the prompt for each target task into a combination of shared prompts (source prompts) and a task-specific prompt (private prompt). During training, the source prompts undergo fine-tuning and are integrated with the private prompt to drive the target prompt for each task. We present and compare multiple methods for combining source prompts to construct the target prompt, analyzing the roles of both source and private prompts within each method. We investigate their contributions to task performance and offer flexible, adjustable configurations based on these insights to optimize performance. Our empirical findings clearly showcase improvements in accuracy and robustness compared to the conventional practice of prompt tuning and related works. Notably, our results substantially outperform other methods in the field in few-shot settings, demonstrating superior performance on various tasks across the GLUE benchmark, among others. This achievement is attained with a significantly reduced amount of training data, making our method a promising one for few-shot settings.
cs.AI
[ "cs.AI", "cs.CL" ]
HBIC: A Biclustering Algorithm for Heterogeneous Datasets
http://arxiv.org/abs/2408.13217v1
http://arxiv.org/abs/2408.13217v1
http://arxiv.org/pdf/2408.13217v1
2024-08-23
2024-08-23
[ "Adán José-García", "Julie Jacques", "Clément Chauvet", "Vincent Sobanski", "Clarisse Dhaenens" ]
[ "", "", "", "", "" ]
Biclustering is an unsupervised machine-learning approach aiming to cluster rows and columns simultaneously in a data matrix. Several biclustering algorithms have been proposed for handling numeric datasets. However, real-world data mining problems often involve heterogeneous datasets with mixed attributes. To address this challenge, we introduce a biclustering approach called HBIC, capable of discovering meaningful biclusters in complex heterogeneous data, including numeric, binary, and categorical data. The approach comprises two stages: bicluster generation and bicluster model selection. In the initial stage, several candidate biclusters are generated iteratively by adding and removing rows and columns based on the frequency of values in the original matrix. In the second stage, we introduce two approaches for selecting the most suitable biclusters by considering their size and homogeneity. Through a series of experiments, we investigated the suitability of our approach on a synthetic benchmark and in a biomedical application involving clinical data of systemic sclerosis patients. The evaluation comparing our method to existing approaches demonstrates its ability to discover high-quality biclusters from heterogeneous data. Our biclustering approach is a starting point for heterogeneous bicluster discovery, leading to a better understanding of complex underlying data structures.
11 pages, 5 figures
cs.LG
[ "cs.LG", "cs.AI" ]
EUR-USD Exchange Rate Forecasting Based on Information Fusion with Large Language Models and Deep Learning Methods
http://arxiv.org/abs/2408.13214v1
http://arxiv.org/abs/2408.13214v1
http://arxiv.org/pdf/2408.13214v1
2024-08-23
2024-08-23
[ "Hongcheng Ding", "Xuanze Zhao", "Zixiao Jiang", "Shamsul Nahar Abdullah", "Deshinta Arrova Dewi" ]
[ "", "", "", "", "" ]
Accurate forecasting of the EUR/USD exchange rate is crucial for investors, businesses, and policymakers. This paper proposes a novel framework, IUS, that integrates unstructured textual data from news and analysis with structured data on exchange rates and financial indicators to enhance exchange rate prediction. The IUS framework employs large language models for sentiment polarity scoring and exchange rate movement classification of texts. These textual features are combined with quantitative features and input into a Causality-Driven Feature Generator. An Optuna-optimized Bi-LSTM model is then used to forecast the EUR/USD exchange rate. Experiments demonstrate that the proposed method outperforms benchmark models, reducing MAE by 10.69% and RMSE by 9.56% compared to the best performing baseline. Results also show the benefits of data fusion, with the combination of unstructured and structured data yielding higher accuracy than structured data alone. Furthermore, feature selection using the top 12 important quantitative features combined with the textual features proves most effective. The proposed IUS framework and Optuna-Bi-LSTM model provide a powerful new approach for exchange rate forecasting through multi-source data integration.
q-fin.CP
[ "q-fin.CP", "cs.AI", "cs.CE", "cs.CL" ]
Optimal Quantum Circuit Design via Unitary Neural Networks
http://arxiv.org/abs/2408.13211v1
http://arxiv.org/abs/2408.13211v1
http://arxiv.org/pdf/2408.13211v1
2024-08-23
2024-08-23
[ "M. Zomorodi", "H. Amini", "M. Abbaszadeh", "J. Sohrabi", "V. Salari", "P. Plawiak" ]
[ "", "", "", "", "", "" ]
The process of translating a quantum algorithm into a form suitable for implementation on a quantum computing platform is crucial yet challenging. This entails specifying quantum operations with precision, a typically intricate task. In this paper, we present an alternative approach: an automated method for synthesizing the functionality of a quantum algorithm into a quantum circuit model representation. Our methodology involves training a neural network model using diverse input-output mappings of the quantum algorithm. We demonstrate that this trained model can effectively generate a quantum circuit model equivalent to the original algorithm. Remarkably, our observations indicate that the trained model achieves near-perfect mapping of unseen inputs to their respective outputs.
quant-ph
[ "quant-ph", "cs.AI" ]
Temporal Fairness in Decision Making Problems
http://arxiv.org/abs/2408.13208v1
http://arxiv.org/abs/2408.13208v1
http://arxiv.org/pdf/2408.13208v1
2024-08-23
2024-08-23
[ "Manuel R. Torres", "Parisa Zehtabi", "Michael Cashmore", "Daniele Magazzeni", "Manuela Veloso" ]
[ "", "", "", "", "" ]
In this work we consider a new interpretation of fairness in decision making problems. Building upon existing fairness formulations, we focus on how to reason over fairness from a temporal perspective, taking into account the fairness of a history of past decisions. After introducing the concept of temporal fairness, we propose three approaches that incorporate temporal fairness in decision making problems formulated as optimization problems. We present a qualitative evaluation of our approach in four different domains and compare the solutions against a baseline approach that does not consider the temporal aspect of fairness.
Paper accepted at ECAI 2024. This is an extended version that includes Supplementary Material
cs.AI
[ "cs.AI" ]
DOMAINEVAL: An Auto-Constructed Benchmark for Multi-Domain Code Generation
http://arxiv.org/abs/2408.13204v1
http://arxiv.org/abs/2408.13204v1
http://arxiv.org/pdf/2408.13204v1
2024-08-23
2024-08-23
[ "Qiming Zhu", "Jialun Cao", "Yaojie Lu", "Hongyu Lin", "Xianpei Han", "Le Sun", "Shing-Chi Cheung" ]
[ "", "", "", "", "", "", "" ]
Code benchmarks such as HumanEval are widely adopted to evaluate the capabilities of Large Language Models (LLMs), providing insights into their strengths and weaknesses. However, current benchmarks primarily exercise LLMs' capability on common coding tasks (e.g., bubble sort, greatest common divisor), leaving domain-specific coding tasks (e.g., computation, system, cryptography) unexplored. To fill this gap, we propose a multi-domain code benchmark, DOMAINEVAL, designed to evaluate LLMs' coding capabilities thoroughly. Our pipeline works in a fully automated manner, enabling a push-button construction from code repositories into formatted subjects under study. Interesting findings are observed by evaluating 12 representative LLMs against DOMAINEVAL. We notice that LLMs are generally good at computation tasks while falling short on cryptography and system coding tasks. The performance gap can be as much as 68.94% (80.94% - 12.0%) in some LLMs. We also observe that generating more samples can increase the overall performance of LLMs, while the domain bias may even increase. The contributions of this study include a code generation benchmark dataset DOMAINEVAL, encompassing six popular domains, a fully automated pipeline for constructing code benchmarks, and an identification of the limitations of LLMs in code generation tasks based on their performance on DOMAINEVAL, providing directions for future research improvements. The leaderboard is available at https://domaineval.github.io/.
cs.AI
[ "cs.AI", "cs.SE" ]
A New Era in Computational Pathology: A Survey on Foundation and Vision-Language Models
http://arxiv.org/abs/2408.14496v1
http://arxiv.org/abs/2408.14496v1
http://arxiv.org/pdf/2408.14496v1
2024-08-23
2024-08-23
[ "Dibaloke Chanda", "Milan Aryal", "Nasim Yahya Soltani", "Masoud Ganji" ]
[ "", "", "", "" ]
Recent advances in deep learning have completely transformed the domain of computational pathology (CPath), which in turn altered the diagnostic workflow of pathologists by integrating foundation models (FMs) and vision-language models (VLMs) in their assessment and decision-making process. FMs overcome the limitations of existing deep learning approaches in CPath by learning a representation space that can be adapted to a wide variety of downstream tasks without explicit supervision. VLMs allow pathology reports written in natural language to be used as a rich semantic information source to improve existing models as well as generate predictions in natural language form. In this survey, a holistic and systematic overview of recent innovations in FMs and VLMs in CPath is presented. Furthermore, the tools, datasets and training schemes for these models are summarized in addition to categorizing them into distinct groups. This extensive survey highlights the current trends in CPath and the way it is going to be transformed through FMs and VLMs in the future.
Initial Version
cs.LG
[ "cs.LG", "cs.AI", "cs.CL", "eess.IV" ]
Instruct-DeBERTa: A Hybrid Approach for Aspect-based Sentiment Analysis on Textual Reviews
http://arxiv.org/abs/2408.13202v1
http://arxiv.org/abs/2408.13202v1
http://arxiv.org/pdf/2408.13202v1
2024-08-23
2024-08-23
[ "Dineth Jayakody", "A V A Malkith", "Koshila Isuranda", "Vishal Thenuwara", "Nisansa de Silva", "Sachintha Rajith Ponnamperuma", "G G N Sandamali", "K L K Sudheera" ]
[ "", "", "", "", "", "", "", "" ]
Aspect-based Sentiment Analysis (ABSA) is a critical task in Natural Language Processing (NLP) that focuses on extracting sentiments related to specific aspects within a text, offering deep insights into customer opinions. Traditional sentiment analysis methods, while useful for determining overall sentiment, often miss the implicit opinions about particular product or service features. This paper presents a comprehensive review of the evolution of ABSA methodologies, from lexicon-based approaches to machine learning and deep learning techniques. We emphasize the recent advancements in Transformer-based models, particularly Bidirectional Encoder Representations from Transformers (BERT) and its variants, which have set new benchmarks in ABSA tasks. We focused on finetuning Llama and Mistral models, building hybrid models using the SetFit framework, and developing our own model by exploiting the strengths of state-of-the-art (SOTA) Transformer-based models for aspect term extraction (ATE) and aspect sentiment classification (ASC). Our hybrid model Instruct - DeBERTa uses SOTA InstructABSA for aspect extraction and DeBERTa-V3-baseabsa-V1 for aspect sentiment classification. We utilize datasets from different domains to evaluate our model's performance. Our experiments indicate that the proposed hybrid model significantly improves the accuracy and reliability of sentiment analysis across all experimented domains. As per our findings, our hybrid model Instruct - DeBERTa is the best-performing model for the joint task of ATE and ASC for both SemEval restaurant 2014 and SemEval laptop 2014 datasets separately. By addressing the limitations of existing methodologies, our approach provides a robust solution for understanding detailed consumer feedback, thus offering valuable insights for businesses aiming to enhance customer satisfaction and product development.
cs.CL
[ "cs.CL", "cs.AI" ]
An Overview and Comparison of Axiomatization Structures Regarding Inconsistency Indices' Properties in Pairwise Comparisons Methods
http://arxiv.org/abs/2408.13297v1
http://arxiv.org/abs/2408.13297v1
http://arxiv.org/pdf/2408.13297v1
2024-08-23
2024-08-23
[ "Sangeeta Pant", "Anuj Kumar", "Jiří Mazurek" ]
[ "", "", "" ]
Mathematical analysis of the analytic hierarchy process (AHP) led to the development of a mathematical function, usually called the inconsistency index, which plays the central role in measuring the inconsistency of the judgements in AHP. An inconsistency index is a mathematical function that maps every pairwise comparison matrix (PCM) to a real number. An inconsistency index can be considered more trustworthy when it satisfies a set of suitable properties. Therefore, the research community has been trying to postulate a set of desirable rules (axioms, properties) for inconsistency indices. Subsequently, many axiomatic frameworks for these functions have been suggested independently; however, the literature on the topic is fragmented and lacks a broader framework. Therefore, the objective of this article is twofold. Firstly, we provide a comprehensive review of the advancements in the axiomatization of inconsistency indices' properties during the last decade. Secondly, we provide a comparison and discussion of the aforementioned axiomatic structures along with directions for future research.
21 pages, 2 figures
cs.LO
[ "cs.LO", "cs.AI" ]
Accelerating the k-means++ Algorithm by Using Geometric Information
http://arxiv.org/abs/2408.13189v1
http://arxiv.org/abs/2408.13189v1
http://arxiv.org/pdf/2408.13189v1
2024-08-23
2024-08-23
[ "Guillem Rodríguez Corominas", "Maria J. Blesa", "Christian Blum" ]
[ "", "", "" ]
In this paper, we propose an acceleration of the exact k-means++ algorithm using geometric information, specifically the Triangle Inequality and additional norm filters, along with a two-step sampling procedure. Our experiments demonstrate that the accelerated version outperforms the standard k-means++ version in terms of the number of visited points and distance calculations, achieving greater speedup as the number of clusters increases. The version utilizing the Triangle Inequality is particularly effective for low-dimensional data, while the additional norm-based filter enhances performance in high-dimensional instances with greater norm variance among points. Additional experiments show the behavior of our algorithms when executed concurrently across multiple jobs and examine how memory performance impacts practical speedup.
cs.LG
[ "cs.LG", "cs.AI", "91C20" ]
Say No to Freeloader: Protecting Intellectual Property of Your Deep Model
http://arxiv.org/abs/2408.13161v2
http://arxiv.org/abs/2408.13161v2
http://arxiv.org/pdf/2408.13161v2
2024-08-23
2024-08-27
[ "Lianyu Wang", "Meng Wang", "Huazhu Fu", "Daoqiang Zhang" ]
[ "", "", "", "" ]
Model intellectual property (IP) protection has attracted growing attention as science and technology advancements stem from human intellectual labor and computational expenses. Ensuring IP safety for trainers and owners is of utmost importance, particularly in domains where ownership verification and applicability authorization are required. A notable approach to safeguarding model IP involves proactively preventing the use of well-trained models of authorized domains from unauthorized domains. In this paper, we introduce a novel Compact Un-transferable Pyramid Isolation Domain (CUPI-Domain) which serves as a barrier against illegal transfers from authorized to unauthorized domains. Drawing inspiration from human transitive inference and learning abilities, the CUPI-Domain is designed to obstruct cross-domain transfers by emphasizing the distinctive style features of the authorized domain. This emphasis leads to failure in recognizing irrelevant private style features on unauthorized domains. To this end, we propose novel CUPI-Domain generators, which select features from both authorized and CUPI-Domain as anchors. Then, we fuse the style features and semantic features of these anchors to generate labeled and style-rich CUPI-Domain. Additionally, we design external Domain-Information Memory Banks (DIMB) for storing and updating labeled pyramid features to obtain stable domain class features and domain class-wise style features. Based on the proposed whole method, the novel style and discriminative loss functions are designed to effectively enhance the distinction in style and discriminative features between authorized and unauthorized domains, respectively. Moreover, we provide two solutions for utilizing CUPI-Domain based on whether the unauthorized domain is known: target-specified CUPI-Domain and target-free CUPI-Domain.
cs.AI
[ "cs.AI" ]
Causal machine learning for sustainable agroecosystems
http://arxiv.org/abs/2408.13155v1
http://arxiv.org/abs/2408.13155v1
http://arxiv.org/pdf/2408.13155v1
2024-08-23
2024-08-23
[ "Vasileios Sitokonstantinou", "Emiliano Díaz Salas Porras", "Jordi Cerdà Bautista", "Maria Piles", "Ioannis Athanasiadis", "Hannah Kerner", "Giulia Martini", "Lily-belle Sweet", "Ilias Tsoumas", "Jakob Zscheischler", "Gustau Camps-Valls" ]
[ "", "", "", "", "", "", "", "", "", "", "" ]
In a changing climate, sustainable agriculture is essential for food security and environmental health. However, it is challenging to understand the complex interactions among its biophysical, social, and economic components. Predictive machine learning (ML), with its capacity to learn from data, is leveraged in sustainable agriculture for applications like yield prediction and weather forecasting. Nevertheless, it cannot explain causal mechanisms and remains descriptive rather than prescriptive. To address this gap, we propose causal ML, which merges ML's data processing with causality's ability to reason about change. This facilitates quantifying intervention impacts for evidence-based decision-making and enhances predictive model robustness. We showcase causal ML through eight diverse applications that benefit stakeholders across the agri-food chain, including farmers, policymakers, and researchers.
cs.LG
[ "cs.LG", "cs.AI", "cs.CY" ]
ShapeICP: Iterative Category-level Object Pose and Shape Estimation from Depth
http://arxiv.org/abs/2408.13147v1
http://arxiv.org/abs/2408.13147v1
http://arxiv.org/pdf/2408.13147v1
2024-08-23
2024-08-23
[ "Yihao Zhang", "John J. Leonard" ]
[ "", "" ]
Category-level object pose and shape estimation from a single depth image has recently drawn research attention due to its wide applications in robotics and self-driving. The task is particularly challenging because the three unknowns, object pose, object shape, and model-to-measurement correspondences, are compounded together but only a single view of depth measurements is provided. The vast majority of the prior work heavily relies on data-driven approaches to obtain solutions to at least one of the unknowns and typically two, running with the risk of failing to generalize to unseen domains. The shape representations used in the prior work also mainly focus on point cloud and signed distance field (SDF). In stark contrast to the prior work, we approach the problem using an iterative estimation method that does not require learning from any pose-annotated data. In addition, we adopt a novel mesh-based object active shape model that has not been explored by the previous literature. Our algorithm, named ShapeICP, has its foundation in the iterative closest point (ICP) algorithm but is equipped with additional features for the category-level pose and shape estimation task. The results show that even without using any pose-annotated data, ShapeICP surpasses many data-driven approaches that rely on the pose data for training, opening up new solution space for researchers to consider.
cs.CV
[ "cs.CV", "cs.AI", "cs.RO" ]
Verification of Geometric Robustness of Neural Networks via Piecewise Linear Approximation and Lipschitz Optimisation
http://arxiv.org/abs/2408.13140v2
http://arxiv.org/abs/2408.13140v2
http://arxiv.org/pdf/2408.13140v2
2024-08-23
2024-08-29
[ "Ben Batten", "Yang Zheng", "Alessandro De Palma", "Panagiotis Kouvaros", "Alessio Lomuscio" ]
[ "", "", "", "", "" ]
We address the problem of verifying neural networks against geometric transformations of the input image, including rotation, scaling, shearing, and translation. The proposed method computes provably sound piecewise linear constraints for the pixel values by using sampling and linear approximations in combination with branch-and-bound Lipschitz optimisation. The method obtains provably tighter over-approximations of the perturbation region than the present state-of-the-art. We report results from experiments on a comprehensive set of verification benchmarks on MNIST and CIFAR10. We show that our proposed implementation resolves up to 32% more verification cases than present approaches.
ECAI 2024
cs.LG
[ "cs.LG", "cs.AI", "cs.CV" ]
Deep Learning at the Intersection: Certified Robustness as a Tool for 3D Vision
http://arxiv.org/abs/2408.13135v1
http://arxiv.org/abs/2408.13135v1
http://arxiv.org/pdf/2408.13135v1
2024-08-23
2024-08-23
[ "Gabriel Pérez S", "Juan C. Pérez", "Motasem Alfarra", "Jesús Zarzar", "Sara Rojas", "Bernard Ghanem", "Pablo Arbeláez" ]
[ "", "", "", "", "", "", "" ]
This paper presents preliminary work on a novel connection between certified robustness in machine learning and the modeling of 3D objects. We highlight an intriguing link between the Maximal Certified Radius (MCR) of a classifier representing a space's occupancy and the space's Signed Distance Function (SDF). Leveraging this relationship, we propose to use the certification method of randomized smoothing (RS) to compute SDFs. Since RS' high computational cost prevents its practical usage as a way to compute SDFs, we propose an algorithm to efficiently run RS in low-dimensional applications, such as 3D space, by expressing RS' fundamental operations as Gaussian smoothing on pre-computed voxel grids. Our approach offers an innovative and practical tool to compute SDFs, validated through proof-of-concept experiments in novel view synthesis. This paper bridges two previously disparate areas of machine learning, opening new avenues for further exploration and potential cross-domain advancements.
This paper is an accepted extended abstract to the LatinX workshop at ICCV 2023. This was uploaded a year late
cs.CV
[ "cs.CV", "cs.AI" ]
DeTPP: Leveraging Object Detection for Robust Long-Horizon Event Prediction
http://arxiv.org/abs/2408.13131v1
http://arxiv.org/abs/2408.13131v1
http://arxiv.org/pdf/2408.13131v1
2024-08-23
2024-08-23
[ "Ivan Karpukhin", "Andrey Savchenko" ]
[ "", "" ]
Forecasting future events over extended periods, known as long-horizon prediction, is a fundamental task in various domains, including retail, finance, healthcare, and social networks. Traditional methods, such as Marked Temporal Point Processes (MTPP), typically use autoregressive models to predict multiple future events. However, these models frequently encounter issues such as converging to constant or repetitive outputs, which significantly limits their effectiveness and applicability. To overcome these limitations, we propose DeTPP (Detection-based Temporal Point Processes), a novel approach inspired by object detection methods from computer vision. DeTPP utilizes a novel matching-based loss function that selectively focuses on reliably predictable events, enhancing both training robustness and inference diversity. Our method sets a new state-of-the-art in long-horizon event prediction, significantly outperforming existing MTPP and next-K approaches. The implementation of DeTPP is publicly available on GitHub.
cs.LG
[ "cs.LG", "cs.AI" ]
Causally-Aware Spatio-Temporal Multi-Graph Convolution Network for Accurate and Reliable Traffic Prediction
http://arxiv.org/abs/2408.13293v1
http://arxiv.org/abs/2408.13293v1
http://arxiv.org/pdf/2408.13293v1
2024-08-23
2024-08-23
[ "Pingping Dong", "Xiao-Lin Wang", "Indranil Bose", "Kam K. H. Ng", "Xiaoning Zhang", "Xiaoge Zhang" ]
[ "", "", "", "", "", "" ]
Accurate and reliable prediction has profound implications to a wide range of applications. In this study, we focus on an instance of spatio-temporal learning problem--traffic prediction--to demonstrate an advanced deep learning model developed for making accurate and reliable forecast. Despite the significant progress in traffic prediction, limited studies have incorporated both explicit and implicit traffic patterns simultaneously to improve prediction performance. Meanwhile, the variability nature of traffic states necessitates quantifying the uncertainty of model predictions in a statistically principled way; however, extant studies offer no provable guarantee on the statistical validity of confidence intervals in reflecting its actual likelihood of containing the ground truth. In this paper, we propose an end-to-end traffic prediction framework that leverages three primary components to generate accurate and reliable traffic predictions: dynamic causal structure learning for discovering implicit traffic patterns from massive traffic data, causally-aware spatio-temporal multi-graph convolution network (CASTMGCN) for learning spatio-temporal dependencies, and conformal prediction for uncertainty quantification. CASTMGCN fuses several graphs that characterize different important aspects of traffic networks and an auxiliary graph that captures the effect of exogenous factors on the road network. On this basis, a conformal prediction approach tailored to spatio-temporal data is further developed for quantifying the uncertainty in node-wise traffic predictions over varying prediction horizons. Experimental results on two real-world traffic datasets demonstrate that the proposed method outperforms several state-of-the-art models in prediction accuracy; moreover, it generates more efficient prediction regions than other methods while strictly satisfying the statistical validity in coverage.
cs.LG
[ "cs.LG", "cs.AI" ]
Map-Free Visual Relocalization Enhanced by Instance Knowledge and Depth Knowledge
http://arxiv.org/abs/2408.13085v1
http://arxiv.org/abs/2408.13085v1
http://arxiv.org/pdf/2408.13085v1
2024-08-23
2024-08-23
[ "Mingyu Xiao", "Runze Chen", "Haiyong Luo", "Fang Zhao", "Juan Wang", "Xuepeng Ma" ]
[ "", "", "", "", "", "" ]
Map-free relocalization technology is crucial for applications in autonomous navigation and augmented reality, but relying on pre-built maps is often impractical. It faces significant challenges due to limitations in matching methods and the inherent lack of scale in monocular images. These issues lead to substantial rotational and metric errors and even localization failures in real-world scenarios. Large matching errors significantly impact the overall relocalization process, affecting both rotational and translational accuracy. Due to the inherent limitations of the camera itself, recovering the metric scale from a single image is crucial, as this significantly impacts the translation error. To address these challenges, we propose a map-free relocalization method enhanced by instance knowledge and depth knowledge. By leveraging instance-based matching information to improve global matching results, our method significantly reduces the possibility of mismatching across different objects. The robustness of instance knowledge across the scene helps the feature point matching model focus on relevant regions and enhance matching accuracy. Additionally, we use estimated metric depth from a single image to reduce metric errors and improve scale recovery accuracy. By integrating methods dedicated to mitigating large translational and rotational errors, our approach demonstrates superior performance in map-free relocalization techniques.
17 pages,6 figures
cs.CV
[ "cs.CV", "cs.AI" ]
Avatar Visual Similarity for Social HCI: Increasing Self-Awareness
http://arxiv.org/abs/2408.13084v1
http://arxiv.org/abs/2408.13084v1
http://arxiv.org/pdf/2408.13084v1
2024-08-23
2024-08-23
[ "Bernhard Hilpert", "Claudio Alves da Silva", "Leon Christidis", "Chirag Bhuvaneshwara", "Patrick Gebhard", "Fabrizio Nunnari", "Dimitra Tsovaltzi" ]
[ "", "", "", "", "", "", "" ]
Self-awareness is a critical factor in social human-human interaction and, hence, in social HCI interaction. Increasing self-awareness through mirrors or video recordings is common in face-to-face trainings, since it influences antecedents of self-awareness like explicit identification and implicit affective identification (affinity). However, increasing self-awareness has been scarcely examined in virtual trainings with virtual avatars, which allow for adjusting the similarity, e.g. to avoid negative effects of self-consciousness. Automatic visual similarity in avatars is an open issue related to high costs. It is important to understand which features need to be manipulated and which degree of similarity is necessary for self-awareness to leverage the added value of using avatars for self-awareness. This article examines the relationship between avatar visual similarity and increasing self-awareness in virtual training environments. We define visual similarity based on perceptually important facial features for human-human identification and develop a theory-based methodology to systematically manipulate visual similarity of virtual avatars and support self-awareness. Three personalized versions of virtual avatars with varying degrees of visual similarity to participants were created (weak, medium and strong facial features manipulation). In a within-subject study (N=33), we tested effects of degree of similarity on perceived similarity, explicit identification and implicit affective identification (affinity). Results show significant differences between the weak similarity manipulation, and both the strong manipulation and the random avatar for all three antecedents of self-awareness. An increasing degree of avatar visual similarity influences antecedents of self-awareness in virtual environments.
cs.HC
[ "cs.HC", "cs.AI" ]
Multivariate Time-Series Anomaly Detection based on Enhancing Graph Attention Networks with Topological Analysis
http://arxiv.org/abs/2408.13082v1
http://arxiv.org/abs/2408.13082v1
http://arxiv.org/pdf/2408.13082v1
2024-08-23
2024-08-23
[ "Zhe Liu", "Xiang Huang", "Jingyun Zhang", "Zhifeng Hao", "Li Sun", "Hao Peng" ]
[ "", "", "", "", "", "" ]
Unsupervised anomaly detection in time series is essential in industrial applications, as it significantly reduces the need for manual intervention. Multivariate time series pose a complex challenge due to their feature and temporal dimensions. Traditional methods use Graph Neural Networks (GNNs) or Transformers to analyze spatial dependencies, while RNNs model temporal dependencies. These methods focus narrowly on one dimension or engage in coarse-grained feature extraction, which can be inadequate for large datasets characterized by intricate relationships and dynamic changes. This paper introduces a novel temporal model built on an enhanced Graph Attention Network (GAT) for multivariate time series anomaly detection called TopoGDN. Our model analyzes both time and feature dimensions from a fine-grained perspective. First, we introduce a multi-scale temporal convolution module to extract detailed temporal features. Additionally, we present an augmented GAT to manage complex inter-feature dependencies, which incorporates graph topology into node features across multiple scales, a versatile, plug-and-play enhancement that significantly boosts the performance of GAT. Our experimental results confirm that our approach surpasses the baseline models on four datasets, demonstrating its potential for widespread application in fields requiring robust anomaly detection. The code is available at https://github.com/ljj-cyber/TopoGDN.
10 pages, 5 figures, to be published in CIKM 2024
10.1145/3627673.3679614
cs.LG
[ "cs.LG", "cs.AI" ]
AEMLO: AutoEncoder-Guided Multi-Label Oversampling
http://arxiv.org/abs/2408.13078v1
http://arxiv.org/abs/2408.13078v1
http://arxiv.org/pdf/2408.13078v1
2024-08-23
2024-08-23
[ "Ao Zhou", "Bin Liu", "Jin Wang", "Kaiwei Sun", "Kelin Liu" ]
[ "", "", "", "", "" ]
Class imbalance significantly impacts the performance of multi-label classifiers. Oversampling is one of the most popular approaches, as it augments instances associated with less frequent labels to balance the class distribution. Existing oversampling methods generate feature vectors of synthetic samples through replication or linear interpolation and assign labels through neighborhood information. Linear interpolation typically generates new samples between existing data points, which may result in insufficient diversity of synthesized samples and further lead to the overfitting issue. Deep learning-based methods, such as AutoEncoders, have been proposed to generate more diverse and complex synthetic samples, achieving excellent performance on imbalanced binary or multi-class datasets. In this study, we introduce AEMLO, an AutoEncoder-guided Oversampling technique specifically designed for tackling imbalanced multi-label data. AEMLO is built upon two fundamental components. The first is an encoder-decoder architecture that enables the model to encode input data into a low-dimensional feature space, learn its latent representations, and then reconstruct it back to its original dimension, thus applying to the generation of new data. The second is an objective function tailored to optimize the sampling task for multi-label scenarios. We show that AEMLO outperforms the existing state-of-the-art methods with extensive empirical studies.
cs.LG
[ "cs.LG", "cs.AI" ]
Hierarchical Spatio-Temporal State-Space Modeling for fMRI Analysis
http://arxiv.org/abs/2408.13074v1
http://arxiv.org/abs/2408.13074v1
http://arxiv.org/pdf/2408.13074v1
2024-08-23
2024-08-23
[ "Yuxiang Wei", "Anees Abrol", "Reihaneh Hassanzadeh", "Vince Calhoun" ]
[ "", "", "", "" ]
Recent advances in deep learning structured state space models, especially the Mamba architecture, have demonstrated remarkable performance improvements while maintaining linear complexity. In this study, we introduce functional spatiotemporal Mamba (FST-Mamba), a Mamba-based model designed for discovering neurological biomarkers using functional magnetic resonance imaging (fMRI). We focus on dynamic functional network connectivity (dFNC) derived from fMRI and propose a hierarchical spatiotemporal Mamba-based network that processes spatial and temporal information separately using Mamba-based encoders. Leveraging the topological uniqueness of the FNC matrix, we introduce a component-wise varied-scale aggregation (CVA) mechanism to aggregate connectivity across individual components within brain networks, enabling the model to capture both inter-component and inter-network information. To better handle the FNC data, we develop a new component-specific scanning order. Additionally, we propose symmetric rotary position encoding (SymRope) to encode the relative positions of each functional connection while considering the symmetric nature of the FNC matrix. Experimental results demonstrate significant improvements in the proposed FST-Mamba model on various brain-based classification and regression tasks. Our work reveals the substantial potential of attention-free sequence modeling in brain discovery.
cs.LG
[ "cs.LG", "cs.AI" ]