title | id | arxiv_url | pdf_url | published_date | updated_date | authors | affiliations | summary | comment | journal_ref | doi | primary_category | categories
---|---|---|---|---|---|---|---|---|---|---|---|---|---
Optimizing Structured Data Processing through Robotic Process Automation | http://arxiv.org/abs/2408.14791v1 | http://arxiv.org/abs/2408.14791v1 | http://arxiv.org/pdf/2408.14791v1 | 2024-08-27 | 2024-08-27 | [
"Vivek Bhardwaj",
"Ajit Noonia",
"Sandeep Chaurasia",
"Mukesh Kumar",
"Abdulnaser Rashid",
"Mohamed Tahar Ben Othman"
] | [
"",
"",
"",
"",
"",
""
] | Robotic Process Automation (RPA) has emerged as a game-changing technology in
data extraction, revolutionizing the way organizations process and analyze
large volumes of documents such as invoices, purchase orders, and payment
advices. This study investigates the use of RPA for structured data extraction
and evaluates its advantages over manual processes. By comparing
human-performed tasks with those executed by RPA software bots, we assess
efficiency and accuracy in data extraction from invoices, focusing on the
effectiveness of the RPA system. Through four distinct scenarios involving
varying numbers of invoices, we measure efficiency in terms of time and effort
required for task completion, as well as accuracy by comparing error rates
between manual and RPA processes. Our findings highlight the significant
efficiency gains achieved by RPA, with bots completing tasks in significantly
less time compared to manual efforts across all cases. Moreover, the RPA system
consistently achieves perfect accuracy, mitigating the risk of errors and
enhancing process reliability. These results underscore the transformative
potential of RPA in optimizing operational efficiency, reducing human labor
costs, and improving overall business performance. | This manuscript has been accepted for publication in the journal
Revue d'Intelligence Artificielle | cs.AI | [
"cs.AI",
"cs.RO"
] |
||
GINN-KAN: Interpretability pipelining with applications in Physics
Informed Neural Networks | http://arxiv.org/abs/2408.14780v2 | http://arxiv.org/abs/2408.14780v2 | http://arxiv.org/pdf/2408.14780v2 | 2024-08-27 | 2024-08-28 | [
"Nisal Ranasinghe",
"Yu Xia",
"Sachith Seneviratne",
"Saman Halgamuge"
] | [
"",
"",
"",
""
] | Neural networks are powerful function approximators, yet their "black-box"
nature often renders them opaque and difficult to interpret. While many
post-hoc explanation methods exist, they typically fail to capture the
underlying reasoning processes of the networks. A truly interpretable neural
network would be trained similarly to conventional models using techniques such
as backpropagation, but additionally provide insights into the learned
input-output relationships. In this work, we introduce the concept of
interpretability pipelining, which combines multiple interpretability
techniques to outperform each individual technique. To this end, we first
evaluate several architectures that promise such interpretability, with a
particular focus on two recent models selected for their potential to
incorporate interpretability into standard neural network architectures while
still leveraging backpropagation: the Growing Interpretable Neural Network
(GINN) and Kolmogorov Arnold Networks (KAN). We analyze the limitations and
strengths of each and introduce a novel interpretable neural network GINN-KAN
that synthesizes the advantages of both models. When tested on the Feynman
symbolic regression benchmark datasets, GINN-KAN outperforms both GINN and KAN.
To highlight the capabilities and the generalizability of this approach, we
position GINN-KAN as an alternative to conventional black-box networks in
Physics-Informed Neural Networks (PINNs). We expect this to have far-reaching
implications in the application of deep learning pipelines in the natural
sciences. Our experiments with this interpretable PINN on 15 different partial
differential equations demonstrate that GINN-KAN augmented PINNs outperform
PINNs with black-box networks in solving differential equations and surpass the
capabilities of both GINN and KAN. | cs.LG | [
"cs.LG",
"cs.AI"
] |
|||
MROVSeg: Breaking the Resolution Curse of Vision-Language Models in
Open-Vocabulary Semantic Segmentation | http://arxiv.org/abs/2408.14776v1 | http://arxiv.org/abs/2408.14776v1 | http://arxiv.org/pdf/2408.14776v1 | 2024-08-27 | 2024-08-27 | [
"Yuanbing Zhu",
"Bingke Zhu",
"Zhen Chen",
"Huan Xu",
"Ming Tang",
"Jinqiao Wang"
] | [
"",
"",
"",
"",
"",
""
] | Open-vocabulary semantic segmentation aims to segment and recognize
semantically meaningful regions based on text-based descriptions during
inference. A typical solution to address this task is to leverage powerful
vision-language models (VLMs), such as CLIP, to bridge the gap between open-
and closed-vocabulary recognition. As VLMs are usually pretrained with
low-resolution images (e.g. $224\times224$), most previous methods operate only
on downscaled images. We question this design, as low-resolution features often
fail to preserve fine details. Although employing additional image backbones
for high-resolution inputs can mitigate this issue, it may also introduce
significant computation overhead. Therefore, we propose MROVSeg, a
multi-resolution training framework for open-vocabulary semantic segmentation
with a single pretrained CLIP backbone, which uses sliding windows to slice the
high-resolution input into uniform patches, each matching the input size of the
well-trained image encoder. Its key components include a Multi-Res Adapter,
which restores the spatial geometry and grasps local-global correspondences
across patches by learnable convolutional and scale attention layers. To
achieve accurate segmentation, we introduce a Multi-grained Masked Attention
scheme to aggregate multi-grained semantics by performing cross-attention
between object queries and multi-resolution CLIP features within the regions of
interest. Through comprehensive experiments, we demonstrate the superiority of
MROVSeg on well-established open-vocabulary semantic segmentation benchmarks,
particularly for high-resolution inputs, establishing new standards for
open-vocabulary semantic segmentation. | Technical report | cs.CV | [
"cs.CV",
"cs.AI"
] |
||
A global AI community requires language-diverse publishing | http://arxiv.org/abs/2408.14772v1 | http://arxiv.org/abs/2408.14772v1 | http://arxiv.org/pdf/2408.14772v1 | 2024-08-27 | 2024-08-27 | [
"Haley Lepp",
"Parth Sarin"
] | [
"",
""
] | In this provocation, we discuss the English dominance of the AI research
community, arguing that the requirement for English language publishing upholds
and reinforces broader regimes of extraction in AI. While large language models
and machine translation have been celebrated as a way to break down barriers,
we regard their use as a symptom of linguistic exclusion of scientists and
potential readers. We propose alternative futures for a healthier publishing
culture, organized around three themes: administering conferences in the
languages of the country in which they are held, instructing peer reviewers not
to adjudicate the language appropriateness of papers, and offering
opportunities to publish and present in multiple languages. We welcome new
translations of this piece. Please contact the authors if you would like to
contribute one. | Translations by Michael Hardy (Guarani), Vandana Sarin and Vivek
Sarin (Hindi), Roshna Omer Abdulrahman (Soranî Kurdish), Gabriel Poesia
(Portuguese), and Matías Grinberg (Spanish). In the proceedings of the
Global AI Cultures Workshop at the Twelfth International Conference on
Learning Representations (ICLR) 2024, Vienna, Austria, May 7-11, 2024 | cs.CL | [
"cs.CL",
"cs.AI",
"K.7.0; K.4.2; I.2.m"
] |
||
Sequential-Scanning Dual-Energy CT Imaging Using High Temporal
Resolution Image Reconstruction and Error-Compensated Material Basis Image
Generation | http://arxiv.org/abs/2408.14754v1 | http://arxiv.org/abs/2408.14754v1 | http://arxiv.org/pdf/2408.14754v1 | 2024-08-27 | 2024-08-27 | [
"Qiaoxin Li",
"Ruifeng Chen",
"Peng Wang",
"Guotao Quan",
"Yanfeng Du",
"Dong Liang",
"Yinsheng Li"
] | [
"",
"",
"",
"",
"",
"",
""
] | Dual-energy computed tomography (DECT) has been widely used to obtain
quantitative elemental composition of imaged subjects for personalized and
precise medical diagnosis. Compared with DECT leveraging advanced X-ray source
and/or detector technologies, the use of the sequential-scanning data
acquisition scheme to implement DECT may make a broader impact on clinical
practice because this scheme requires no specialized hardware designs and can
be directly implemented into conventional CT systems. However, since the
concentration of iodinated contrast agent in the imaged subject varies over
time, sequentially scanned data sets acquired at two tube potentials are
temporally inconsistent. As existing material basis image reconstruction
approaches assume that the data sets acquired at two tube potentials are
temporally consistent, the violation of this assumption results in inaccurate
quantification of material concentration. In this work, we developed
sequential-scanning DECT imaging using high temporal resolution image
reconstruction and error-compensated material basis image generation,
ACCELERATION in short, to address the technical challenge induced by temporal
inconsistency of sequentially scanned data sets and improve quantification
accuracy of material concentration in sequential-scanning DECT. ACCELERATION
has been validated and evaluated using numerical simulation data sets generated
from clinical human subject exams and experimental human subject studies.
Results demonstrated the improvement of quantification accuracy and image
quality using ACCELERATION. | physics.med-ph | [
"physics.med-ph",
"cs.AI",
"cs.CV",
"physics.ins-det"
] |
|||
CoopASD: Cooperative Machine Anomalous Sound Detection with Privacy
Concerns | http://arxiv.org/abs/2408.14753v1 | http://arxiv.org/abs/2408.14753v1 | http://arxiv.org/pdf/2408.14753v1 | 2024-08-27 | 2024-08-27 | [
"Anbai Jiang",
"Yuchen Shi",
"Pingyi Fan",
"Wei-Qiang Zhang",
"Jia Liu"
] | [
"",
"",
"",
"",
""
] | Machine anomalous sound detection (ASD) has emerged as one of the most
promising applications in the Industrial Internet of Things (IIoT) due to its
unprecedented efficacy in mitigating risks of malfunctions and promoting
production efficiency. Previous works mainly investigated the machine ASD task
under centralized settings. However, developing the ASD system under
decentralized settings is crucial in practice, since the machine data are
dispersed in various factories and the data should not be explicitly shared due
to privacy concerns. To enable these factories to cooperatively develop a
scalable ASD model while preserving their privacy, we propose a novel framework
named CoopASD, where each factory trains an ASD model on its local dataset, and
a central server aggregates these local models periodically. We employ a
pre-trained model as the backbone of the ASD model to improve its robustness
and develop specialized techniques to stabilize the model under a completely
non-IID and domain-shift setting. Compared with previous state-of-the-art
(SOTA) models trained in centralized settings, CoopASD showcases competitive
results with negligible degradation of 0.08%. We also conduct extensive
ablation studies to demonstrate the effectiveness of CoopASD. | Accepted by GLOBECOM 2024 | cs.SD | [
"cs.SD",
"cs.AI",
"cs.DC",
"eess.AS"
] |
||
Benchmarking Reinforcement Learning Methods for Dexterous Robotic
Manipulation with a Three-Fingered Gripper | http://arxiv.org/abs/2408.14747v1 | http://arxiv.org/abs/2408.14747v1 | http://arxiv.org/pdf/2408.14747v1 | 2024-08-27 | 2024-08-27 | [
"Elizabeth Cutler",
"Yuning Xing",
"Tony Cui",
"Brendan Zhou",
"Koen van Rijnsoever",
"Ben Hart",
"David Valencia",
"Lee Violet C. Ong",
"Trevor Gee",
"Minas Liarokapis",
"Henry Williams"
] | [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
] | Reinforcement Learning (RL) training is predominantly conducted in
cost-effective and controlled simulation environments. However, the transfer of
these trained models to real-world tasks often presents unavoidable challenges.
This research explores the direct training of RL algorithms in controlled yet
realistic real-world settings for the execution of dexterous manipulation. The
benchmarking results of three RL algorithms trained on intricate in-hand
manipulation tasks within practical real-world contexts are presented. Our
study not only demonstrates the practicality of RL training in authentic
real-world scenarios, facilitating direct real-world applications, but also
provides insights into the associated challenges and considerations.
Additionally, our experiences with the employed experimental methods are
shared, with the aim of empowering and engaging fellow researchers and
practitioners in this dynamic field of robotics. | Australasian conference on robotics and automation (ACRA 2023) | cs.RO | [
"cs.RO",
"cs.AI",
"cs.LG"
] |
||
RSTeller: Scaling Up Visual Language Modeling in Remote Sensing with
Rich Linguistic Semantics from Openly Available Data and Large Language
Models | http://arxiv.org/abs/2408.14744v1 | http://arxiv.org/abs/2408.14744v1 | http://arxiv.org/pdf/2408.14744v1 | 2024-08-27 | 2024-08-27 | [
"Junyao Ge",
"Yang Zheng",
"Kaitai Guo",
"Jimin Liang"
] | [
"",
"",
"",
""
] | Abundant, well-annotated multimodal data in remote sensing are pivotal for
aligning complex visual remote sensing (RS) scenes with human language,
enabling the development of specialized vision language models across diverse
RS interpretation tasks. However, annotating RS images with rich linguistic
semantics at scale demands expertise in RS and substantial human labor, making
it costly and often impractical. In this study, we propose a workflow that
leverages large language models (LLMs) to generate multimodal datasets with
semantically rich captions at scale from plain OpenStreetMap (OSM) data for
images sourced from the Google Earth Engine (GEE) platform. This approach
facilitates the generation of paired remote sensing data and can be readily
scaled up using openly available data. Within this framework, we present
RSTeller, a multimodal dataset comprising over 1 million RS images, each
accompanied by multiple descriptive captions. Extensive experiments demonstrate
that RSTeller enhances the performance of multiple existing vision language
models for RS scene understanding through continual pre-training. Our
methodology significantly reduces the manual effort and expertise needed for
annotating remote sensing imagery while democratizing access to high-quality
annotated data. This advancement fosters progress in visual language modeling
and encourages broader participation in remote sensing research and
applications. The RSTeller dataset is available at
https://github.com/SlytherinGe/RSTeller. | Submitted to ISPRS | cs.CV | [
"cs.CV",
"cs.AI",
"I.4.8; I.2.10"
] |
||
TART: Boosting Clean Accuracy Through Tangent Direction Guided
Adversarial Training | http://arxiv.org/abs/2408.14728v1 | http://arxiv.org/abs/2408.14728v1 | http://arxiv.org/pdf/2408.14728v1 | 2024-08-27 | 2024-08-27 | [
"Bongsoo Yi",
"Rongjie Lai",
"Yao Li"
] | [
"",
"",
""
] | Adversarial training has been shown to be successful in enhancing the
robustness of deep neural networks against adversarial attacks. However, this
robustness is accompanied by a significant decline in accuracy on clean data.
In this paper, we propose a novel method, called Tangent Direction Guided
Adversarial Training (TART), that leverages the tangent space of the data
manifold to ameliorate the existing adversarial defense algorithms. We argue
that training with adversarial examples having large normal components
significantly alters the decision boundary and hurts accuracy. TART mitigates
this issue by estimating the tangent direction of adversarial examples and
allocating an adaptive perturbation limit according to the norm of their
tangential component. To the best of our knowledge, our paper is the first work
to consider the concept of tangent space and direction in the context of
adversarial defense. We validate the effectiveness of TART through extensive
experiments on both simulated and benchmark datasets. The results demonstrate
that TART consistently boosts clean accuracy while retaining a high level of
robustness against adversarial attacks. Our findings suggest that incorporating
the geometric properties of data can lead to more effective and efficient
adversarial training methods. | cs.LG | [
"cs.LG",
"cs.AI",
"cs.CR"
] |
|||
XG-NID: Dual-Modality Network Intrusion Detection using a Heterogeneous
Graph Neural Network and Large Language Model | http://arxiv.org/abs/2408.16021v1 | http://arxiv.org/abs/2408.16021v1 | http://arxiv.org/pdf/2408.16021v1 | 2024-08-27 | 2024-08-27 | [
"Yasir Ali Farrukh",
"Syed Wali",
"Irfan Khan",
"Nathaniel D. Bastian"
] | [
"",
"",
"",
""
] | In the rapidly evolving field of cybersecurity, the integration of flow-level
and packet-level information for real-time intrusion detection remains a
largely untapped area of research. This paper introduces "XG-NID," a novel
framework that, to the best of our knowledge, is the first to fuse flow-level
and packet-level data within a heterogeneous graph structure, offering a
comprehensive analysis of network traffic. Leveraging a heterogeneous graph
neural network (GNN) with graph-level classification, XG-NID uniquely enables
real-time inference while effectively capturing the intricate relationships
between flow and packet payload data. Unlike traditional GNN-based
methodologies that predominantly analyze historical data, XG-NID is designed to
accommodate the heterogeneous nature of network traffic, providing a robust and
real-time defense mechanism. Our framework extends beyond mere classification;
it integrates Large Language Models (LLMs) to generate detailed, human-readable
explanations and suggest potential remedial actions, ensuring that the insights
produced are both actionable and comprehensible. Additionally, we introduce a
new set of flow features based on temporal information, further enhancing the
contextual and explainable inferences provided by our model. To facilitate
practical application and accessibility, we developed "GNN4ID," an open-source
tool that enables the extraction and transformation of raw network traffic into
the proposed heterogeneous graph structure, seamlessly integrating flow and
packet-level data. Our comprehensive quantitative comparative analysis
demonstrates that XG-NID achieves an F1 score of 97% in multi-class
classification, outperforming existing baseline and state-of-the-art methods.
This sets a new standard in Network Intrusion Detection Systems by combining
innovative data fusion with enhanced interpretability and real-time
capabilities. | 19 pages, 6 figures | cs.CR | [
"cs.CR",
"cs.AI",
"cs.LG"
] |
||
PAT: Pruning-Aware Tuning for Large Language Models | http://arxiv.org/abs/2408.14721v1 | http://arxiv.org/abs/2408.14721v1 | http://arxiv.org/pdf/2408.14721v1 | 2024-08-27 | 2024-08-27 | [
"Yijiang Liu",
"Huanrui Yang",
"Youxin Chen",
"Rongyu Zhang",
"Miao Wang",
"Yuan Du",
"Li Du"
] | [
"",
"",
"",
"",
"",
"",
""
] | Large language models (LLMs) excel in language tasks, especially with
supervised fine-tuning after pre-training. However, their substantial memory
and computational requirements hinder practical applications. Structural
pruning, which reduces less significant weight dimensions, is one solution.
Yet, traditional post-hoc pruning often leads to significant performance loss,
with limited recovery from further fine-tuning due to reduced capacity. Since
the model fine-tuning refines the general and chaotic knowledge in pre-trained
models, we aim to incorporate structural pruning into fine-tuning, and
propose the Pruning-Aware Tuning (PAT) paradigm to eliminate model redundancy
while preserving model performance to the maximum extent. Specifically, we
insert the innovative Hybrid Sparsification Modules (HSMs) between the
Attention and FFN components to accordingly sparsify the upstream and
downstream linear modules. The HSM comprises a lightweight operator and a
globally shared trainable mask. The lightweight operator maintains a training
overhead comparable to that of LoRA, while the trainable mask unifies the
channels to be sparsified, ensuring structural pruning. Additionally, we
propose the Identity Loss which decouples the transformation and scaling
properties of the HSMs to enhance training robustness. Extensive experiments
demonstrate that PAT excels in both performance and efficiency. For example,
our Llama2-7b model with a 25% pruning ratio achieves 1.33$\times$ speedup
while outperforming the LoRA-finetuned model by up to 1.26% in accuracy with a
similar training cost. Code:
https://github.com/kriskrisliu/PAT_Pruning-Aware-Tuning | cs.LG | [
"cs.LG",
"cs.AI",
"cs.CL"
] |
|||
Residual-based Adaptive Huber Loss (RAHL) -- Design of an improved Huber
loss for CQI prediction in 5G networks | http://arxiv.org/abs/2408.14718v1 | http://arxiv.org/abs/2408.14718v1 | http://arxiv.org/pdf/2408.14718v1 | 2024-08-27 | 2024-08-27 | [
"Mina Kaviani",
"Jurandy Almeida",
"Fabio L. Verdi"
] | [
"",
"",
""
] | The Channel Quality Indicator (CQI) plays a pivotal role in 5G networks,
optimizing infrastructure dynamically to ensure high Quality of Service (QoS).
Recent research has focused on improving CQI estimation in 5G networks using
machine learning. In this field, the selection of the proper loss function is
critical for training an accurate model. Two commonly used loss functions are
Mean Squared Error (MSE) and Mean Absolute Error (MAE). Roughly speaking, MSE
puts more weight on outliers, MAE on the majority. Here, we argue that the Huber
loss function is more suitable for CQI prediction, since it combines the
benefits of both MSE and MAE. To achieve this, the Huber loss transitions
smoothly between MSE and MAE, controlled by a user-defined hyperparameter
called delta. However, finding the right balance between sensitivity to small
errors (MSE) and robustness to outliers (MAE) by manually choosing the optimal
delta is challenging. To address this issue, we propose a novel loss function,
named Residual-based Adaptive Huber Loss (RAHL). In RAHL, a learnable residual
is added to the delta, enabling the model to adapt based on the distribution of
errors in the data. Our approach effectively balances model robustness against
outliers while preserving inlier data precision. The widely recognized Long
Short-Term Memory (LSTM) model is employed in conjunction with RAHL, showcasing
significantly improved results compared to the aforementioned loss functions.
The obtained results affirm the superiority of RAHL, offering a promising
avenue for enhanced CQI prediction in 5G networks. | https://sol.sbc.org.br/index.php/sbrc/article/view/29822/29625 | cs.NI | [
"cs.NI",
"cs.AI"
] |
||
Text2SQL is Not Enough: Unifying AI and Databases with TAG | http://arxiv.org/abs/2408.14717v1 | http://arxiv.org/abs/2408.14717v1 | http://arxiv.org/pdf/2408.14717v1 | 2024-08-27 | 2024-08-27 | [
"Asim Biswal",
"Liana Patel",
"Siddarth Jha",
"Amog Kamsetty",
"Shu Liu",
"Joseph E. Gonzalez",
"Carlos Guestrin",
"Matei Zaharia"
] | [
"",
"",
"",
"",
"",
"",
"",
""
] | AI systems that serve natural language questions over databases promise to
unlock tremendous value. Such systems would allow users to leverage the
powerful reasoning and knowledge capabilities of language models (LMs)
alongside the scalable computational power of data management systems. These
combined capabilities would empower users to ask arbitrary natural language
questions over custom data sources. However, existing methods and benchmarks
insufficiently explore this setting. Text2SQL methods focus solely on natural
language questions that can be expressed in relational algebra, representing a
small subset of the questions real users wish to ask. Likewise,
Retrieval-Augmented Generation (RAG) considers the limited subset of queries
that can be answered with point lookups to one or a few data records within the
database. We propose Table-Augmented Generation (TAG), a unified and
general-purpose paradigm for answering natural language questions over
databases. The TAG model represents a wide range of interactions between the LM
and database that have been previously unexplored and creates exciting research
opportunities for leveraging the world knowledge and reasoning capabilities of
LMs over data. We systematically develop benchmarks to study the TAG problem
and find that standard methods answer no more than 20% of queries correctly,
confirming the need for further research in this area. We release code for the
benchmark at https://github.com/TAG-Research/TAG-Bench. | cs.DB | [
"cs.DB",
"cs.AI"
] |
|||
StyleSpeech: Parameter-efficient Fine Tuning for Pre-trained
Controllable Text-to-Speech | http://arxiv.org/abs/2408.14713v1 | http://arxiv.org/abs/2408.14713v1 | http://arxiv.org/pdf/2408.14713v1 | 2024-08-27 | 2024-08-27 | [
"Haowei Lou",
"Helen Paik",
"Wen Hu",
"Lina Yao"
] | [
"",
"",
"",
""
] | This paper introduces StyleSpeech, a novel Text-to-Speech (TTS) system that
enhances the naturalness and accuracy of synthesized speech. Building upon
existing TTS technologies, StyleSpeech incorporates a unique Style Decorator
structure that enables deep learning models to simultaneously learn style and
phoneme features, improving adaptability and efficiency through the principles
of Low-Rank Adaptation (LoRA). LoRA allows efficient adaptation of style
features in pre-trained models. Additionally, we introduce a novel automatic
evaluation metric, the LLM-Guided Mean Opinion Score (LLM-MOS), which employs
large language models to offer an objective and robust protocol for
automatically assessing TTS system performance. Extensive testing on benchmark
datasets shows that our approach markedly outperforms existing state-of-the-art
baseline methods in producing natural, accurate, and high-quality speech. These
advancements not only push the boundaries of current TTS system capabilities,
but also facilitate the application of TTS systems in more dynamic and
specialized scenarios, such as interactive virtual assistants, adaptive
audiobooks, and customized voices for gaming. Speech samples can be found at
https://style-speech.vercel.app | cs.SD | [
"cs.SD",
"cs.AI",
"cs.MM",
"eess.AS"
] |
|||
Artificial Intelligence in Landscape Architecture: A Survey | http://arxiv.org/abs/2408.14700v1 | http://arxiv.org/abs/2408.14700v1 | http://arxiv.org/pdf/2408.14700v1 | 2024-08-26 | 2024-08-26 | [
"Yue Xing",
"Wensheng Gan",
"Qidi Chen"
] | [
"",
"",
""
] | The development history of landscape architecture (LA) reflects the human
pursuit of environmental beautification and ecological balance. With the
advancement of artificial intelligence (AI) technologies that simulate and
extend human intelligence, immense opportunities have been provided for LA,
offering scientific and technological support throughout the entire workflow.
In this article, we comprehensively review the applications of AI technology in
the field of LA. First, we introduce the many potential benefits that AI brings
to the design, planning, and management aspects of LA. Second, we discuss how
AI can assist the LA field in solving its current development problems,
including urbanization, environmental degradation and ecological decline,
irrational planning, insufficient management and maintenance, and lack of
public participation. Furthermore, we summarize the key technologies and
practical cases of applying AI in the LA domain, from design assistance to
intelligent management, all of which provide innovative solutions for the
planning, design, and maintenance of LA. Finally, we look ahead to the problems
and opportunities in LA, emphasizing the need to combine human expertise and
judgment for rational decision-making. This article provides both theoretical
and practical guidance for LA designers, researchers, and technology
developers. The successful integration of AI technology into LA holds great
promise for enhancing the field's capabilities and achieving more sustainable,
efficient, and user-friendly outcomes. | Preprint. 3 figures, 2 tables | cs.AI | [
"cs.AI"
] |
||
Smart Multi-Modal Search: Contextual Sparse and Dense Embedding
Integration in Adobe Express | http://arxiv.org/abs/2408.14698v2 | http://arxiv.org/abs/2408.14698v2 | http://arxiv.org/pdf/2408.14698v2 | 2024-08-26 | 2024-08-29 | [
"Cherag Aroraa",
"Tracy Holloway King",
"Jayant Kumar",
"Yi Lu",
"Sanat Sharma",
"Arvind Srikantan",
"David Uvalle",
"Josep Valls-Vargas",
"Harsha Vardhan"
] | [
"",
"",
"",
"",
"",
"",
"",
"",
""
] | As user content and queries become increasingly multi-modal, the need for
effective multi-modal search systems has grown. Traditional search systems
often rely on textual and metadata annotations for indexed images, while
multi-modal embeddings like CLIP enable direct search using text and image
embeddings. However, embedding-based approaches face challenges in integrating
contextual features such as user locale and recency. Building a scalable
multi-modal search system requires fine-tuning several components. This paper
presents a multi-modal search architecture and a series of AB tests that
optimize embeddings and multi-modal technologies in Adobe Express template
search. We address considerations such as embedding model selection, the roles
of embeddings in matching and ranking, and the balance between dense and sparse
embeddings. Our iterative approach demonstrates how utilizing sparse, dense,
and contextual features enhances short and long query search, significantly
reduces null rates (over 70%), and increases click-through rates (CTR). Our
findings provide insights into developing robust multi-modal search systems,
thereby enhancing relevance for complex queries. | CIKM 2024 (International Conference on Information and Knowledge
Management), Multimodal Search and Recommendations Workshop | cs.IR | [
"cs.IR",
"cs.AI",
"cs.CL",
"cs.CV"
] |
||
Training-Free Activation Sparsity in Large Language Models | http://arxiv.org/abs/2408.14690v1 | http://arxiv.org/abs/2408.14690v1 | http://arxiv.org/pdf/2408.14690v1 | 2024-08-26 | 2024-08-26 | [
"James Liu",
"Pragaash Ponnusamy",
"Tianle Cai",
"Han Guo",
"Yoon Kim",
"Ben Athiwaratkun"
] | [
"",
"",
"",
"",
"",
""
] | Activation sparsity can enable practical inference speedups in large language
models (LLMs) by reducing the compute and memory-movement required for matrix
multiplications during the forward pass. However, existing methods face
limitations that inhibit widespread adoption. Some approaches are tailored
towards older models with ReLU-based sparsity, while others require extensive
continued pre-training on up to hundreds of billions of tokens. This paper
describes TEAL, a simple training-free method that applies magnitude-based
activation sparsity to hidden states throughout the entire model. TEAL achieves
40-50% model-wide sparsity with minimal performance degradation across Llama-2,
Llama-3, and Mistral families, with sizes varying from 7B to 70B. We improve
existing sparse kernels and demonstrate wall-clock decoding speed-ups of up to
1.53$\times$ and 1.8$\times$ at 40% and 50% model-wide sparsity. TEAL is
compatible with weight quantization, enabling further efficiency gains. | cs.CL | [
"cs.CL",
"cs.AI"
] |
|||
Bridging the Gap: Unpacking the Hidden Challenges in Knowledge
Distillation for Online Ranking Systems | http://arxiv.org/abs/2408.14678v1 | http://arxiv.org/abs/2408.14678v1 | http://arxiv.org/pdf/2408.14678v1 | 2024-08-26 | 2024-08-26 | [
"Nikhil Khani",
"Shuo Yang",
"Aniruddh Nath",
"Yang Liu",
"Pendo Abbo",
"Li Wei",
"Shawn Andrews",
"Maciej Kula",
"Jarrod Kahn",
"Zhe Zhao",
"Lichan Hong",
"Ed Chi"
] | [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
] | Knowledge Distillation (KD) is a powerful approach for compressing a large
model into a smaller, more efficient model, particularly beneficial for
latency-sensitive applications like recommender systems. However, current KD
research predominantly focuses on Computer Vision (CV) and NLP tasks,
overlooking unique data characteristics and challenges inherent to recommender
systems. This paper addresses these overlooked challenges, specifically: (1)
mitigating data distribution shifts between teacher and student models, (2)
efficiently identifying optimal teacher configurations within time and
budgetary constraints, and (3) enabling computationally efficient and rapid
sharing of teacher labels to support multiple students. We present a robust KD
system developed and rigorously evaluated on multiple large-scale personalized
video recommendation systems within Google. Our live experiment results
demonstrate significant improvements in student model performance while
ensuring the consistent and reliable generation of high-quality teacher labels from
a continuous stream of data. | cs.IR | [
"cs.IR",
"cs.AI",
"cs.LG"
] |
|||
KGPrune: a Web Application to Extract Subgraphs of Interest from
Wikidata with Analogical Pruning | http://arxiv.org/abs/2408.14658v1 | http://arxiv.org/abs/2408.14658v1 | http://arxiv.org/pdf/2408.14658v1 | 2024-08-26 | 2024-08-26 | [
"Pierre Monnin",
"Cherif-Hassan Nousradine",
"Lucas Jarnac",
"Laurel Zuckerman",
"Miguel Couceiro"
] | [
"",
"",
"",
"",
""
] | Knowledge graphs (KGs) have become ubiquitous publicly available knowledge
sources, nowadays covering an ever-increasing array of domains.
However, not all of the knowledge they represent is useful or pertinent for
a new application or specific task. Moreover, due to their increasing size,
handling large KGs in their entirety entails scalability issues. These two
aspects call for efficient methods to extract subgraphs of interest from
existing KGs. To this end, we introduce KGPrune, a Web Application that, given
seed entities of interest and properties to traverse, extracts their
neighboring subgraphs from Wikidata. To avoid topical drift, KGPrune relies on
a frugal pruning algorithm based on analogical reasoning to only keep relevant
neighbors while pruning irrelevant ones. The interest of KGPrune is illustrated
by two concrete applications, namely, bootstrapping an enterprise KG and
extracting knowledge related to looted artworks. | Accepted as a demo paper at ECAI 2024 | cs.AI | [
"cs.AI",
"cs.DB",
"cs.IR",
"cs.LG"
] |
||
Emergent Language in Open-Ended Environments | http://arxiv.org/abs/2408.14649v1 | http://arxiv.org/abs/2408.14649v1 | http://arxiv.org/pdf/2408.14649v1 | 2024-08-26 | 2024-08-26 | [
"Cornelius Wolff",
"Julius Mayer",
"Elia Bruni",
"Xenia Ohmer"
] | [
"",
"",
"",
""
] | Emergent language research has made significant progress in recent years, but
still largely fails to explore how communication emerges in more complex and
situated multi-agent systems. Existing setups often employ a reference game,
which limits the range of language emergence phenomena that can be studied, as
the game consists of a single, purely language-based interaction between the
agents. In this paper, we address these limitations and explore the emergence
and utility of token-based communication in open-ended multi-agent
environments, where situated agents interact with the environment through
movement and communication over multiple time-steps. Specifically, we introduce
two novel cooperative environments: Multi-Agent Pong and Collectors. These
environments are interesting because optimal performance requires the emergence
of a communication protocol, but moderate success can be achieved without one.
By employing various methods from explainable AI research, such as saliency
maps, perturbation, and diagnostic classifiers, we are able to track and
interpret the agents' language channel use over time. We find that the emerging
communication is sparse, with the agents only generating meaningful messages
and acting upon incoming messages in states where they cannot succeed without
coordination. | 10 pages, 4 figures, 4 tables, preprint | cs.AI | [
"cs.AI"
] |
||
Visions of Destruction: Exploring a Potential of Generative AI in
Interactive Art | http://arxiv.org/abs/2408.14644v1 | http://arxiv.org/abs/2408.14644v1 | http://arxiv.org/pdf/2408.14644v1 | 2024-08-26 | 2024-08-26 | [
"Mar Canet Sola",
"Varvara Guljajeva"
] | [
"",
""
] | This paper explores the potential of generative AI within interactive art,
employing a practice-based research approach. It presents the interactive
artwork "Visions of Destruction" as a detailed case study, highlighting its
innovative use of generative AI to create a dynamic, audience-responsive
experience. This artwork applies gaze-based interaction to dynamically alter
digital landscapes, symbolizing the impact of human activities on the
environment by generating contemporary collages created with AI, trained on
data about human damage to nature, and guided by audience interaction. The
transformation of pristine natural scenes into human-made and industrialized
landscapes through viewer interaction serves as a stark reminder of
environmental degradation. The paper thoroughly explores the technical
challenges and artistic innovations involved in creating such an interactive
art installation, emphasizing the potential of generative AI to revolutionize
artistic expression, audience engagement, and especially the opportunities for
the interactive art field. It offers insights into the conceptual framework
behind the artwork, aiming to evoke a deeper understanding and reflection on
the Anthropocene era and human-induced climate change. This study contributes
significantly to the field of creative AI and interactive art, blending
technology and environmental consciousness in a compelling, thought-provoking
manner. | 10.1145/3678698.3687185 | cs.HC | [
"cs.HC",
"cs.AI",
"I.2; J.5"
] |
||
Effect of Adaptation Rate and Cost Display in a Human-AI Interaction
Game | http://arxiv.org/abs/2408.14640v1 | http://arxiv.org/abs/2408.14640v1 | http://arxiv.org/pdf/2408.14640v1 | 2024-08-26 | 2024-08-26 | [
"Jason T. Isa",
"Bohan Wu",
"Qirui Wang",
"Yilin Zhang",
"Samuel A. Burden",
"Lillian J. Ratliff",
"Benjamin J. Chasnov"
] | [
"",
"",
"",
"",
"",
"",
""
] | As interactions between humans and AI become more prevalent, it is critical
to have better predictors of human behavior in these interactions. We
investigated how changes in the AI's adaptive algorithm impact behavior
predictions in two-player continuous games. In our experiments, the AI adapted
its actions using a gradient descent algorithm under different adaptation rates
while human participants were provided cost feedback. The cost feedback was
provided by one of two types of visual displays: (a) cost at the current joint
action vector, or (b) cost in a local neighborhood of the current joint action
vector. Our results demonstrate that AI adaptation rate can significantly
affect human behavior, with the ability to shift the outcome between two
game-theoretic equilibria. We observed that slow adaptation rates shift the outcome
towards the Nash equilibrium, while fast rates shift the outcome towards the
human-led Stackelberg equilibrium. The addition of localized cost information
had the effect of shifting outcomes towards Nash, compared to the outcomes from
cost information at only the current joint action vector. Future work will
investigate other effects that influence the convergence of gradient descent
games. | cs.AI | [
"cs.AI",
"cs.GT",
"cs.HC"
] |
|||
Hybrid Deep Convolutional Neural Networks Combined with Autoencoders And
Augmented Data To Predict The Look-Up Table 2006 | http://arxiv.org/abs/2408.14626v1 | http://arxiv.org/abs/2408.14626v1 | http://arxiv.org/pdf/2408.14626v1 | 2024-08-26 | 2024-08-26 | [
"Messaoud Djeddou",
"Aouatef Hellal",
"Ibrahim A. Hameed",
"Xingang Zhao",
"Djehad Al Dallal"
] | [
"",
"",
"",
"",
""
] | This study explores the development of a hybrid deep convolutional neural
network (DCNN) model enhanced by autoencoders and data augmentation techniques
to predict critical heat flux (CHF) with high accuracy. By augmenting the
original input features using three different autoencoder configurations, the
model's predictive capabilities were significantly improved. The hybrid models
were trained and tested on a dataset of 7225 samples, with performance metrics
including the coefficient of determination (R2), Nash-Sutcliffe efficiency
(NSE), mean absolute error (MAE), and normalized root-mean-squared error
(NRMSE) used for evaluation. Among the tested models, the DCNN_3F-A2
configuration demonstrated the highest accuracy, achieving an R2 of 0.9908
during training and 0.9826 during testing, outperforming the base model and
other augmented versions. These results suggest that the proposed hybrid
approach, combining deep learning with feature augmentation, offers a robust
solution for CHF prediction, with the potential to generalize across a wider
range of conditions. | 11 pages, 6 figures | cs.LG | [
"cs.LG",
"cs.AI"
] |
||
On Centralized Critics in Multi-Agent Reinforcement Learning | http://arxiv.org/abs/2408.14597v1 | http://arxiv.org/abs/2408.14597v1 | http://arxiv.org/pdf/2408.14597v1 | 2024-08-26 | 2024-08-26 | [
"Xueguang Lyu",
"Andrea Baisero",
"Yuchen Xiao",
"Brett Daley",
"Christopher Amato"
] | [
"",
"",
"",
"",
""
] | Centralized Training for Decentralized Execution where agents are trained
offline in a centralized fashion and execute online in a decentralized manner,
has become a popular approach in Multi-Agent Reinforcement Learning (MARL). In
particular, it has become popular to develop actor-critic methods that train
decentralized actors with a centralized critic, where the centralized critic is
allowed access to global information about the entire system, including the true
system state. Such centralized critics are possible given offline information
and are not used for online execution. While these methods perform well in a
number of domains and have become a de facto standard in MARL, using a
centralized critic in this context has yet to be sufficiently analyzed
theoretically or empirically. In this paper, we therefore formally analyze
centralized and decentralized critic approaches, and analyze the effect of
using state-based critics in partially observable environments. We derive
theories contrary to the common intuition: critic centralization is not
strictly beneficial, and using state values can be harmful. We further prove
that, in particular, state-based critics can introduce unexpected bias and
variance compared to history-based critics. Finally, we demonstrate how the
theory applies in practice by comparing different forms of critics on a wide
range of common multi-agent benchmarks. The experiments show practical issues
such as the difficulty of representation learning with partial observability,
which highlights why the theoretical problems are often overlooked in the
literature. | Journal of Artificial Intelligence Research 77 (2023): 295-354 | cs.AI | [
"cs.AI"
] |
||
How to build trust in answers given by Generative AI for specific, and
vague, financial questions | http://arxiv.org/abs/2408.14593v1 | http://arxiv.org/abs/2408.14593v1 | http://arxiv.org/pdf/2408.14593v1 | 2024-08-26 | 2024-08-26 | [
"Alex Zarifis",
"Xusen Cheng"
] | [
"",
""
] | Purpose: Generative artificial intelligence (GenAI) has progressed in its
ability and has seen explosive growth in adoption. However, the consumer's
perspective on its use, particularly in specific scenarios such as financial
advice, is unclear. This research develops a model of how to build trust in the
advice given by GenAI when answering financial questions.
Design/methodology/approach: The model is tested with survey data using
structural equation modelling (SEM) and multi-group analysis (MGA). The MGA
compares two scenarios, one where the consumer asks a specific question and
one where a vague question is asked. Findings: This research identifies that
building trust for consumers is different when they ask a specific financial
question in comparison to a vague one. Humanness has a different effect in the
two scenarios. When a financial question is specific, human-like interaction
does not strengthen trust, while (1) when a question is vague, humanness builds
trust. The four ways to build trust in both scenarios are (2) human oversight
and being in the loop, (3) transparency and control, (4) accuracy and
usefulness and finally (5) ease of use and support. Originality/value: This
research contributes to a better understanding of the consumer's perspective
when using GenAI for financial questions and highlights the importance of
understanding GenAI in specific contexts from specific stakeholders. | Journal of Electronic Business & Digital Economics, pp.1-15 | 10.1108/JEBDE-11-2023-0028 | cs.HC | [
"cs.HC",
"cs.AI"
] |
|
DIAGen: Diverse Image Augmentation with Generative Models | http://arxiv.org/abs/2408.14584v1 | http://arxiv.org/abs/2408.14584v1 | http://arxiv.org/pdf/2408.14584v1 | 2024-08-26 | 2024-08-26 | [
"Tobias Lingenberg",
"Markus Reuter",
"Gopika Sudhakaran",
"Dominik Gojny",
"Stefan Roth",
"Simone Schaub-Meyer"
] | [
"",
"",
"",
"",
"",
""
] | Simple data augmentation techniques, such as rotations and flips, are widely
used to enhance the generalization power of computer vision models. However,
these techniques often fail to modify high-level semantic attributes of a
class. To address this limitation, researchers have explored generative
augmentation methods like the recently proposed DA-Fusion. Despite some
progress, the variations are still largely limited to textural changes, thus
falling short on aspects like varied viewpoints, environment, weather
conditions, or even class-level semantic attributes (e.g., variations in a dog's
breed). To overcome this challenge, we propose DIAGen, building upon DA-Fusion.
First, we apply Gaussian noise to the embeddings of an object learned with
Textual Inversion to diversify generations using a pre-trained diffusion
model's knowledge. Second, we exploit the general knowledge of a text-to-text
generative model to guide the image generation of the diffusion model with
varied class-specific prompts. Finally, we introduce a weighting mechanism to
mitigate the impact of poorly generated samples. Experimental results across
various datasets show that DIAGen not only enhances semantic diversity but also
improves the performance of subsequent classifiers. The advantages of DIAGen
over standard augmentations and the DA-Fusion baseline are particularly
pronounced with out-of-distribution samples. | Accepted for publication in GCPR 2024 | cs.CV | [
"cs.CV",
"cs.AI"
] |
||
EVINCE: Optimizing Adversarial LLM Dialogues via Conditional Statistics
and Information Theory | http://arxiv.org/abs/2408.14575v1 | http://arxiv.org/abs/2408.14575v1 | http://arxiv.org/pdf/2408.14575v1 | 2024-08-26 | 2024-08-26 | [
"Edward Y. Chang"
] | [
""
] | This paper introduces EVINCE (Entropy and Variation IN Conditional
Exchanges), a dialogue framework advancing Artificial General Intelligence
(AGI) by enhancing versatility, adaptivity, and reasoning in large language
models (LLMs). Leveraging adversarial debate and a novel dual entropy theory,
EVINCE improves prediction accuracy, robustness, and stability in LLMs by
integrating statistical modeling, information theory, and machine learning to
balance diverse perspective exploration with strong prior exploitation. The
framework's effectiveness is demonstrated through consistent convergence of
information-theoretic metrics, particularly improved mutual information,
fostering productive LLM collaboration. We apply EVINCE to healthcare, showing
improved disease diagnosis, and discuss its broader implications for
decision-making across domains. This work provides theoretical foundations and
empirical validation for EVINCE, paving the way for advancements in LLM
collaboration and AGI development. | 19 pages, 7 figures, four tables | cs.AI | [
"cs.AI",
"I.2.7"
] |
||
CURLoRA: Stable LLM Continual Fine-Tuning and Catastrophic Forgetting
Mitigation | http://arxiv.org/abs/2408.14572v1 | http://arxiv.org/abs/2408.14572v1 | http://arxiv.org/pdf/2408.14572v1 | 2024-08-26 | 2024-08-26 | [
"Muhammad Fawi"
] | [
""
] | This paper introduces CURLoRA, a novel approach to fine-tuning large language
models (LLMs) that leverages CUR matrix decomposition in the context of
Low-Rank Adaptation (LoRA). Our method addresses two critical challenges in LLM
fine-tuning: mitigating catastrophic forgetting during continual learning and
reducing the number of trainable parameters. We propose a unique modification
to the CUR decomposition process, utilizing inverted probabilities for column
and row selection which acts as an implicit regularization, and initializing
the $U$ matrix as a zero matrix, and only fine-tuning it. We demonstrate
through experiments on multiple datasets that CURLoRA outperforms standard LoRA
in mitigating catastrophic forgetting. It maintains model stability and
performance across tasks while significantly reducing the number of trainable
parameters. Our results show that CURLoRA achieves very good and stable task
accuracy while keeping the base model's perplexity scores fixed, compared to
LoRA upon continual fine-tuning, particularly in scenarios with limited data. | Code available at https://github.com/MNoorFawi/curlora | 10.5281/zenodo.12730055 | cs.LG | [
"cs.LG",
"cs.AI",
"cs.CL"
] |
|
Improving Clinical Note Generation from Complex Doctor-Patient
Conversation | http://arxiv.org/abs/2408.14568v1 | http://arxiv.org/abs/2408.14568v1 | http://arxiv.org/pdf/2408.14568v1 | 2024-08-26 | 2024-08-26 | [
"Yizhan Li",
"Sifan Wu",
"Christopher Smith",
"Thomas Lo",
"Bang Liu"
] | [
"",
"",
"",
"",
""
] | Writing clinical notes and documenting medical exams is a critical task for
healthcare professionals, serving as a vital component of patient care
documentation. However, manually writing these notes is time-consuming and can
impact the amount of time clinicians can spend on direct patient interaction
and other tasks. Consequently, the development of automated clinical note
generation systems has emerged as a clinically meaningful area of research
within AI for health. In this paper, we present three key contributions to the
field of clinical note generation using large language models (LLMs). First, we
introduce CliniKnote, a comprehensive dataset consisting of 1,200 complex
doctor-patient conversations paired with their full clinical notes. This
dataset, created and curated by medical experts with the help of modern neural
networks, provides a valuable resource for training and evaluating models in
clinical note generation tasks. Second, we propose the K-SOAP (Keyword,
Subjective, Objective, Assessment, and Plan) note format, which enhances
traditional SOAP (Subjective, Objective, Assessment, and
Plan) notes by adding a keyword section at the top, allowing for quick
identification of essential information. Third, we develop an automatic
pipeline to generate K-SOAP notes from doctor-patient conversations and
benchmark various modern LLMs using various metrics. Our results demonstrate
significant improvements in efficiency and performance compared to standard LLM
finetuning methods. | cs.CL | [
"cs.CL",
"cs.AI"
] |
|||
A Survey of Camouflaged Object Detection and Beyond | http://arxiv.org/abs/2408.14562v1 | http://arxiv.org/abs/2408.14562v1 | http://arxiv.org/pdf/2408.14562v1 | 2024-08-26 | 2024-08-26 | [
"Fengyang Xiao",
"Sujie Hu",
"Yuqi Shen",
"Chengyu Fang",
"Jinfa Huang",
"Chunming He",
"Longxiang Tang",
"Ziyun Yang",
"Xiu Li"
] | [
"",
"",
"",
"",
"",
"",
"",
"",
""
] | Camouflaged Object Detection (COD) refers to the task of identifying and
segmenting objects that blend seamlessly into their surroundings, posing a
significant challenge for computer vision systems. In recent years, COD has
garnered widespread attention due to its potential applications in
surveillance, wildlife conservation, autonomous systems, and more. While
several surveys on COD exist, they often have limitations in terms of the
number and scope of papers covered, particularly regarding the rapid
advancements made in the field since mid-2023. To address this void, we present
the most comprehensive review of COD to date, encompassing both theoretical
frameworks and practical contributions to the field. This paper explores
various COD methods across four domains, including both image-level and
video-level solutions, from the perspectives of traditional and deep learning
approaches. We thoroughly investigate the correlations between COD and other
camouflaged scenario methods, thereby laying the theoretical foundation for
subsequent analyses. Beyond object-level detection, we also summarize extended
methods for instance-level tasks, including camouflaged instance segmentation,
counting, and ranking. Additionally, we provide an overview of commonly used
benchmarks and evaluation metrics in COD tasks, conducting a comprehensive
evaluation of deep learning-based techniques in both image and video domains,
considering both qualitative and quantitative performance. Finally, we discuss
the limitations of current COD models and propose 9 promising directions for
future research, focusing on addressing inherent challenges and exploring
novel, meaningful technologies. For those interested, a curated list of
COD-related techniques, datasets, and additional resources can be found at
https://github.com/ChunmingHe/awesome-concealed-object-segmentation | 26 pages, 10 figures, 8 tables | cs.CV | [
"cs.CV",
"cs.AI"
] |
||
Revisiting Image Captioning Training Paradigm via Direct CLIP-based
Optimization | http://arxiv.org/abs/2408.14547v1 | http://arxiv.org/abs/2408.14547v1 | http://arxiv.org/pdf/2408.14547v1 | 2024-08-26 | 2024-08-26 | [
"Nicholas Moratelli",
"Davide Caffagni",
"Marcella Cornia",
"Lorenzo Baraldi",
"Rita Cucchiara"
] | [
"",
"",
"",
"",
""
] | The conventional training approach for image captioning involves pre-training
a network using teacher forcing and subsequent fine-tuning with Self-Critical
Sequence Training to maximize hand-crafted captioning metrics. However, when
attempting to optimize modern and higher-quality metrics like CLIP-Score and
PAC-Score, this training method often encounters instability and fails to
acquire the genuine descriptive capabilities needed to produce fluent and
informative captions. In this paper, we propose a new training paradigm termed
Direct CLIP-Based Optimization (DiCO). Our approach jointly learns and
optimizes a reward model that is distilled from a learnable captioning
evaluator with high human correlation. This is done by solving a weighted
classification problem directly inside the captioner. At the same time, DiCO
prevents divergence from the original model, ensuring that fluency is
maintained. DiCO not only exhibits improved stability and enhanced quality in
the generated captions but also aligns more closely with human preferences
compared to existing methods, especially in modern metrics. Additionally, it
maintains competitive performance in traditional metrics. Our source code and
trained models are publicly available at https://github.com/aimagelab/DiCO. | BMVC 2024 | cs.CV | [
"cs.CV",
"cs.AI",
"cs.CL",
"cs.MM"
] |
||
Advancing Humanoid Locomotion: Mastering Challenging Terrains with
Denoising World Model Learning | http://arxiv.org/abs/2408.14472v1 | http://arxiv.org/abs/2408.14472v1 | http://arxiv.org/pdf/2408.14472v1 | 2024-08-26 | 2024-08-26 | [
"Xinyang Gu",
"Yen-Jen Wang",
"Xiang Zhu",
"Chengming Shi",
"Yanjiang Guo",
"Yichen Liu",
"Jianyu Chen"
] | [
"",
"",
"",
"",
"",
"",
""
] | Humanoid robots, with their human-like skeletal structure, are especially
suited for tasks in human-centric environments. However, this structure is
accompanied by additional challenges in locomotion controller design,
especially in complex real-world environments. As a result, existing humanoid
robots are limited to relatively simple terrains, either with model-based
control or model-free reinforcement learning. In this work, we introduce
Denoising World Model Learning (DWL), an end-to-end reinforcement learning
framework for humanoid locomotion control, which demonstrates the world's first
humanoid robot to master real-world challenging terrains such as snowy and
inclined land in the wild, up and down stairs, and extremely uneven terrains.
All scenarios run the same learned neural network with zero-shot sim-to-real
transfer, indicating the superior robustness and generalization capability of
the proposed method. | Robotics: Science and Systems (RSS), 2024. (Best Paper Award
Finalist) | cs.RO | [
"cs.RO",
"cs.AI",
"cs.SY",
"eess.SY"
] |
||
K-Sort Arena: Efficient and Reliable Benchmarking for Generative Models
via K-wise Human Preferences | http://arxiv.org/abs/2408.14468v1 | http://arxiv.org/abs/2408.14468v1 | http://arxiv.org/pdf/2408.14468v1 | 2024-08-26 | 2024-08-26 | [
"Zhikai Li",
"Xuewen Liu",
"Dongrong Fu",
"Jianquan Li",
"Qingyi Gu",
"Kurt Keutzer",
"Zhen Dong"
] | [
"",
"",
"",
"",
"",
"",
""
] | The rapid advancement of visual generative models necessitates efficient and
reliable evaluation methods. Arena platform, which gathers user votes on model
comparisons, can rank models with human preferences. However, traditional Arena
methods, while established, require an excessive number of comparisons for
ranking to converge and are vulnerable to preference noise in voting,
suggesting the need for better approaches tailored to contemporary evaluation
challenges. In this paper, we introduce K-Sort Arena, an efficient and reliable
platform based on a key insight: images and videos possess higher perceptual
intuitiveness than texts, enabling rapid evaluation of multiple samples
simultaneously. Consequently, K-Sort Arena employs K-wise comparisons, allowing
K models to engage in free-for-all competitions, which yield much richer
information than pairwise comparisons. To enhance the robustness of the system,
we leverage probabilistic modeling and Bayesian updating techniques. We propose
an exploration-exploitation-based matchmaking strategy to facilitate more
informative comparisons. In our experiments, K-Sort Arena exhibits 16.3x faster
convergence compared to the widely used ELO algorithm. To further validate the
superiority and obtain a comprehensive leaderboard, we collect human feedback
via crowdsourced evaluations of numerous cutting-edge text-to-image and
text-to-video models. Thanks to its high efficiency, K-Sort Arena can
continuously incorporate emerging models and update the leaderboard with
minimal votes. Our project has undergone several months of internal testing and
is now available at https://huggingface.co/spaces/ksort/K-Sort-Arena | Project page: https://huggingface.co/spaces/ksort/K-Sort-Arena | cs.AI | [
"cs.AI",
"cs.CV",
"cs.HC"
] |
||
Temporal Ensemble Logic | http://arxiv.org/abs/2408.14443v1 | http://arxiv.org/abs/2408.14443v1 | http://arxiv.org/pdf/2408.14443v1 | 2024-08-26 | 2024-08-26 | [
"Guo-Qiang Zhang"
] | [
""
] | We introduce Temporal Ensemble Logic (TEL), a monadic, first-order modal
logic for linear-time temporal reasoning. TEL includes primitive temporal
constructs such as ``always up to $t$ time later'' ($\Box_t$), ``sometimes
before $t$ time in the future'' ($\Diamond_t$), and ``$t$-time later''
$\varphi_t$. TEL has been motivated from the requirement for rigor and
reproducibility for cohort specification and discovery in clinical and
population health research, to fill a gap in formalizing temporal reasoning in
biomedicine. In this paper, we first introduce TEL in a general set up, with
discrete and dense time as special cases. We then focus on the theoretical
development of discrete TEL on the temporal domain of positive integers
$\mathbb{N}^+$, denoted as ${\rm TEL}_{\mathbb{N}^+}$. ${\rm
TEL}_{\mathbb{N}^+}$ is strictly more expressive than the standard monadic
second order logic, characterized by B\"{u}chi automata. We present its formal
semantics, a proof system, and provide a proof for the undecidability of the
satisfiability of ${\rm TEL}_{\mathbb{N}^+}$. We also discuss expressiveness
and decidability fragments for ${\rm TEL}_{\mathbb{N}^+}$, followed by
illustrative applications. | 47 pages, 2 figures | cs.LO | [
"cs.LO",
"cs.AI",
"cs.FL"
] |
||
Attend-Fusion: Efficient Audio-Visual Fusion for Video Classification | http://arxiv.org/abs/2408.14441v1 | http://arxiv.org/abs/2408.14441v1 | http://arxiv.org/pdf/2408.14441v1 | 2024-08-26 | 2024-08-26 | [
"Mahrukh Awan",
"Asmar Nadeem",
"Muhammad Junaid Awan",
"Armin Mustafa",
"Syed Sameed Husain"
] | [
"",
"",
"",
"",
""
] | Exploiting both audio and visual modalities for video classification is a
challenging task, as the existing methods require large model architectures,
leading to high computational complexity and resource requirements. Smaller
architectures, on the other hand, struggle to achieve optimal performance. In
this paper, we propose Attend-Fusion, an audio-visual (AV) fusion approach that
introduces a compact model architecture specifically designed to capture
intricate audio-visual relationships in video data. Through extensive
experiments on the challenging YouTube-8M dataset, we demonstrate that
Attend-Fusion achieves an F1 score of 75.64\% with only 72M parameters, which
is comparable to the performance of larger baseline models such as
Fully-Connected Late Fusion (75.96\% F1 score, 341M parameters). Attend-Fusion
achieves similar performance to the larger baseline model while reducing the
model size by nearly 80\%, highlighting its efficiency in terms of model
complexity. Our work demonstrates that the Attend-Fusion model effectively
combines audio and visual information for video classification, achieving
competitive performance with significantly reduced model size. This approach
opens new possibilities for deploying high-performance video understanding
systems in resource-constrained environments across various applications. | cs.CV | [
"cs.CV",
"cs.AI"
] |
|||
Sparsity-Aware Hardware-Software Co-Design of Spiking Neural Networks:
An Overview | http://arxiv.org/abs/2408.14437v1 | http://arxiv.org/abs/2408.14437v1 | http://arxiv.org/pdf/2408.14437v1 | 2024-08-26 | 2024-08-26 | [
"Ilkin Aliyev",
"Kama Svoboda",
"Tosiron Adegbija",
"Jean-Marc Fellous"
] | [
"",
"",
"",
""
] | Spiking Neural Networks (SNNs) are inspired by the sparse and event-driven
nature of biological neural processing, and offer the potential for
ultra-low-power artificial intelligence. However, realizing their efficiency
benefits requires specialized hardware and a co-design approach that
effectively leverages sparsity. We explore the hardware-software co-design of
sparse SNNs, examining how sparsity representation, hardware architectures, and
training techniques influence hardware efficiency. We analyze the impact of
static and dynamic sparsity, discuss the implications of different neuron
models and encoding schemes, and investigate the need for adaptability in
hardware designs. Our work aims to illuminate the path towards embedded
neuromorphic systems that fully exploit the computational advantages of sparse
SNNs. | IEEE International Symposium on Embedded Multicore/Many-core
Systems-on-Chip (MCSoC 2024) | cs.AR | [
"cs.AR",
"cs.AI"
] |
||
Social perception of faces in a vision-language model | http://arxiv.org/abs/2408.14435v1 | http://arxiv.org/abs/2408.14435v1 | http://arxiv.org/pdf/2408.14435v1 | 2024-08-26 | 2024-08-26 | [
"Carina I. Hausladen",
"Manuel Knott",
"Colin F. Camerer",
"Pietro Perona"
] | [
"",
"",
"",
""
] | We explore social perception of human faces in CLIP, a widely used
open-source vision-language model. To this end, we compare the similarity in
CLIP embeddings between different textual prompts and a set of face images. Our
textual prompts are constructed from well-validated social psychology terms
denoting social perception. The face images are synthetic and are
systematically and independently varied along six dimensions: the legally
protected attributes of age, gender, and race, as well as facial expression,
lighting, and pose. Independently and systematically manipulating face
attributes allows us to study the effect of each on social perception and
avoids confounds that can occur in wild-collected data due to uncontrolled
systematic correlations between attributes. Thus, our findings are experimental
rather than observational. Our main findings are three. First, while CLIP is
trained on the widest variety of images and texts, it is able to make
fine-grained human-like social judgments on face images. Second, age, gender,
and race do systematically impact CLIP's social perception of faces, suggesting
an undesirable bias in CLIP vis-a-vis legally protected attributes. Most
strikingly, we find a strong pattern of bias concerning the faces of Black
women, where CLIP produces extreme values of social perception across different
ages and facial expressions. Third, facial expression impacts social perception
more than age does, and lighting impacts it as much as age does. The last finding predicts that
studies that do not control for unprotected visual attributes may reach the
wrong conclusions on bias. Our novel method of investigation, which is founded
on the social psychology literature and on the experiments involving the
manipulation of individual attributes, yields sharper and more reliable
observations than previous observational methods and may be applied to study
biases in any vision-language model. | cs.CV | [
"cs.CV",
"cs.AI",
"cs.CY",
"cs.LG"
] |
|||
Contextual Bandit with Herding Effects: Algorithms and Recommendation
Applications | http://arxiv.org/abs/2408.14432v2 | http://arxiv.org/abs/2408.14432v2 | http://arxiv.org/pdf/2408.14432v2 | 2024-08-26 | 2024-08-28 | [
"Luyue Xu",
"Liming Wang",
"Hong Xie",
"Mingqiang Zhou"
] | [
"",
"",
"",
""
] | Contextual bandits serve as a fundamental algorithmic framework for
optimizing recommendation decisions online. Though extensive attention has been
paid to tailoring contextual bandits for recommendation applications, the
"herding effects" in user feedback have been ignored. These herding effects
bias user feedback toward historical ratings, breaking down the assumption of
unbiased feedback inherent in contextual bandits. This paper develops a novel
variant of the contextual bandit that is tailored to address the feedback bias
caused by the herding effects. A user feedback model is formulated to capture
this feedback bias. We design the TS-Conf (Thompson Sampling under Conformity)
algorithm, which employs posterior sampling to balance the exploration and
exploitation tradeoff. We prove an upper bound for the regret of the algorithm,
revealing the impact of herding effects on learning speed. Extensive
experiments on datasets demonstrate that TS-Conf outperforms four benchmark
algorithms. Analysis reveals that TS-Conf effectively mitigates the negative
impact of herding effects, resulting in faster learning and improved
recommendation accuracy. | Published as a conference paper at PRICAI 2024 | cs.LG | [
"cs.LG",
"cs.AI",
"cs.IR"
] |
||
CHARTOM: A Visual Theory-of-Mind Benchmark for Multimodal Large Language
Models | http://arxiv.org/abs/2408.14419v1 | http://arxiv.org/abs/2408.14419v1 | http://arxiv.org/pdf/2408.14419v1 | 2024-08-26 | 2024-08-26 | [
"Shubham Bharti",
"Shiyun Cheng",
"Jihyun Rho",
"Martina Rao",
"Xiaojin Zhu"
] | [
"",
"",
"",
"",
""
] | We introduce CHARTOM, a visual theory-of-mind benchmark for multimodal large
language models. CHARTOM consists of specially designed data-visualizing
charts. Given a chart, a language model needs to not only correctly comprehend
the chart (the FACT question) but also judge if the chart will be misleading to
a human reader (the MIND question). Both questions have significant societal
benefits. We detail the construction of the CHARTOM benchmark including its
calibration on human performance. | cs.AI | [
"cs.AI",
"cs.CL",
"cs.CV"
] |
|||
MEDSAGE: Enhancing Robustness of Medical Dialogue Summarization to ASR
Errors with LLM-generated Synthetic Dialogues | http://arxiv.org/abs/2408.14418v1 | http://arxiv.org/abs/2408.14418v1 | http://arxiv.org/pdf/2408.14418v1 | 2024-08-26 | 2024-08-26 | [
"Kuluhan Binici",
"Abhinav Ramesh Kashyap",
"Viktor Schlegel",
"Andy T. Liu",
"Vijay Prakash Dwivedi",
"Thanh-Tung Nguyen",
"Xiaoxue Gao",
"Nancy F. Chen",
"Stefan Winkler"
] | [
"",
"",
"",
"",
"",
"",
"",
"",
""
] | Automatic Speech Recognition (ASR) systems are pivotal in transcribing speech
into text, yet the errors they introduce can significantly degrade the
performance of downstream tasks like summarization. This issue is particularly
pronounced in clinical dialogue summarization, a low-resource domain where
supervised data for fine-tuning is scarce, necessitating the use of ASR models
as black-box solutions. Employing conventional data augmentation for enhancing
the noise robustness of summarization models is not feasible either due to the
unavailability of sufficient medical dialogue audio recordings and
corresponding ASR transcripts. To address this challenge, we propose MEDSAGE,
an approach for generating synthetic samples for data augmentation using Large
Language Models (LLMs). Specifically, we leverage the in-context learning
capabilities of LLMs and instruct them to generate ASR-like errors based on a
few available medical dialogue examples with audio recordings. Experimental
results show that LLMs can effectively model ASR noise, and incorporating this
noisy data into the training process significantly improves the robustness and
accuracy of medical dialogue summarization systems. This approach addresses the
challenges of noisy ASR outputs in critical applications, offering a robust
solution to enhance the reliability of clinical dialogue summarization. | cs.CL | [
"cs.CL",
"cs.AI"
] |
|||
Language-specific Calibration for Pruning Multilingual Language Models | http://arxiv.org/abs/2408.14398v2 | http://arxiv.org/abs/2408.14398v2 | http://arxiv.org/pdf/2408.14398v2 | 2024-08-26 | 2024-08-28 | [
"Simon Kurz",
"Jian-Jia Chen",
"Lucie Flek",
"Zhixue Zhao"
] | [
"",
"",
"",
""
] | Recent advances in large language model (LLM) pruning have shown
state-of-the-art compression results in post-training and retraining-free
settings while maintaining high predictive performance. However, such research
mainly considers calibrating pruning using English text, despite the
multilingual nature of modern LLMs and their frequent use in non-English
languages. In this paper, we set out to explore effective strategies for
calibrating the pruning of multilingual language models. We present the first
comprehensive empirical study, comparing different calibration languages for
pruning multilingual models across diverse tasks, models, and state-of-the-art
pruning techniques. Our results present practical suggestions, for example,
calibrating in the target language can efficiently yield lower perplexity, but
does not necessarily benefit downstream tasks. Our further analysis experiments
unveil that calibration in the target language mainly contributes to preserving
language-specific features related to fluency and coherence, but might not
contribute to capturing language-agnostic features such as language
understanding and reasoning. Last, we provide practical recommendations for
future practitioners. | cs.CL | [
"cs.CL",
"cs.AI",
"cs.LG"
] |
|||
Uncovering Knowledge Gaps in Radiology Report Generation Models through
Knowledge Graphs | http://arxiv.org/abs/2408.14397v1 | http://arxiv.org/abs/2408.14397v1 | http://arxiv.org/pdf/2408.14397v1 | 2024-08-26 | 2024-08-26 | [
"Xiaoman Zhang",
"Julián N. Acosta",
"Hong-Yu Zhou",
"Pranav Rajpurkar"
] | [
"",
"",
"",
""
] | Recent advancements in artificial intelligence have significantly improved
the automatic generation of radiology reports. However, existing evaluation
methods fail to reveal the models' understanding of radiological images and
their capacity to achieve human-level granularity in descriptions. To bridge
this gap, we introduce a system, named ReXKG, which extracts structured
information from processed reports to construct a comprehensive radiology
knowledge graph. We then propose three metrics to evaluate the similarity of
nodes (ReXKG-NSC), distribution of edges (ReXKG-AMS), and coverage of subgraphs
(ReXKG-SCS) across various knowledge graphs. We conduct an in-depth comparative
analysis of AI-generated and human-written radiology reports, assessing the
performance of both specialist and generalist models. Our study provides a
deeper understanding of the capabilities and limitations of current AI models
in radiology report generation, offering valuable insights for improving model
performance and clinical applicability. | Code is available at: https://github.com/rajpurkarlab/ReXKG | cs.AI | [
"cs.AI",
"cs.CL",
"cs.CV"
] |
||
Reprogramming Foundational Large Language Models(LLMs) for Enterprise
Adoption for Spatio-Temporal Forecasting Applications: Unveiling a New Era in
Copilot-Guided Cross-Modal Time Series Representation Learning | http://arxiv.org/abs/2408.14387v1 | http://arxiv.org/abs/2408.14387v1 | http://arxiv.org/pdf/2408.14387v1 | 2024-08-26 | 2024-08-26 | [
"Sakhinana Sagar Srinivas",
"Chidaksh Ravuru",
"Geethan Sannidhi",
"Venkataramana Runkana"
] | [
"",
"",
"",
""
] | Spatio-temporal forecasting plays a crucial role in various sectors such as
transportation systems, logistics, and supply chain management. However,
existing methods are limited by their ability to handle large, complex
datasets. To overcome this limitation, we introduce a hybrid approach that
combines the strengths of open-source large and small-scale language models
(LLMs and LMs) with traditional forecasting methods. We augment traditional
methods with dynamic prompting and a grouped-query, multi-head attention
mechanism to more effectively capture both intra-series and inter-series
dependencies in evolving nonlinear time series data. In addition, we facilitate
on-premises customization by fine-tuning smaller open-source LMs for time
series trend analysis utilizing descriptions generated by open-source large LMs
on consumer-grade hardware using Low-Rank Adaptation with Activation Memory
Reduction (LoRA-AMR) technique to reduce computational overhead and activation
storage memory demands while preserving inference latency. We combine language
model processing for time series trend analysis with traditional time series
representation learning method for cross-modal integration, achieving robust
and accurate forecasts. The framework's effectiveness is demonstrated through
extensive experiments on various real-world datasets, outperforming existing
methods by significant margins in terms of forecast accuracy. | Paper published at the Deployable AI (DAI) workshop at AAAI-2024 | cs.LG | [
"cs.LG",
"cs.AI"
] |
||
Probing Causality Manipulation of Large Language Models | http://arxiv.org/abs/2408.14380v1 | http://arxiv.org/abs/2408.14380v1 | http://arxiv.org/pdf/2408.14380v1 | 2024-08-26 | 2024-08-26 | [
"Chenyang Zhang",
"Haibo Tong",
"Bin Zhang",
"Dongyu Zhang"
] | [
"",
"",
"",
""
] | Large language models (LLMs) have shown various abilities in natural language
processing, including on problems about causality. It is not intuitive for LLMs to
command causality, since pretrained models usually work on statistical
associations and do not focus on causes and effects in sentences. Probing the
internal manipulation of causality in LLMs is therefore necessary. This paper
proposes a novel approach to probe causality manipulation hierarchically, by
providing different shortcuts to models and observing their behaviors. We exploit
retrieval augmented generation (RAG) and in-context learning (ICL) for models
on a designed causality classification task. We conduct experiments on
mainstream LLMs, including GPT-4 and some smaller and domain-specific models.
Our results suggest that LLMs can detect entities related to causality and
recognize direct causal relationships. However, LLMs lack specialized cognition
for causality, merely treating it as part of the global semantics of the
sentence. | cs.CL | [
"cs.CL",
"cs.AI"
] |
|||
SelEx: Self-Expertise in Fine-Grained Generalized Category Discovery | http://arxiv.org/abs/2408.14371v1 | http://arxiv.org/abs/2408.14371v1 | http://arxiv.org/pdf/2408.14371v1 | 2024-08-26 | 2024-08-26 | [
"Sarah Rastegar",
"Mohammadreza Salehi",
"Yuki M. Asano",
"Hazel Doughty",
"Cees G. M. Snoek"
] | [
"",
"",
"",
"",
""
] | In this paper, we address Generalized Category Discovery, aiming to
simultaneously uncover novel categories and accurately classify known ones.
Traditional methods, which lean heavily on self-supervision and contrastive
learning, often fall short when distinguishing between fine-grained categories.
To address this, we introduce a novel concept called `self-expertise', which
enhances the model's ability to recognize subtle differences and uncover
unknown categories. Our approach combines unsupervised and supervised
self-expertise strategies to refine the model's discernment and generalization.
Initially, hierarchical pseudo-labeling is used to provide `soft supervision',
improving the effectiveness of self-expertise. Our supervised technique differs
from traditional methods by utilizing more abstract positive and negative
samples, aiding in the formation of clusters that can generalize to novel
categories. Meanwhile, our unsupervised strategy encourages the model to
sharpen its category distinctions by considering within-category examples as
`hard' negatives. Supported by theoretical insights, our empirical results
showcase that our method outperforms existing state-of-the-art techniques in
Generalized Category Discovery across several fine-grained datasets. Our code
is available at: https://github.com/SarahRastegar/SelEx. | Accepted by ECCV 2024 | cs.CV | [
"cs.CV",
"cs.AI",
"cs.LG"
] |
||
GR-MG: Leveraging Partially Annotated Data via Multi-Modal Goal
Conditioned Policy | http://arxiv.org/abs/2408.14368v1 | http://arxiv.org/abs/2408.14368v1 | http://arxiv.org/pdf/2408.14368v1 | 2024-08-26 | 2024-08-26 | [
"Peiyan Li",
"Hongtao Wu",
"Yan Huang",
"Chilam Cheang",
"Liang Wang",
"Tao Kong"
] | [
"",
"",
"",
"",
"",
""
] | The robotics community has consistently aimed to achieve generalizable robot
manipulation with flexible natural language instructions. One of the primary
challenges is that obtaining robot data fully annotated with both actions and
texts is time-consuming and labor-intensive. However, partially annotated data,
such as human activity videos without action labels and robot play data without
language labels, is much easier to collect. Can we leverage these data to
enhance the generalization capability of robots? In this paper, we propose
GR-MG, a novel method which supports conditioning on both a language
instruction and a goal image. During training, GR-MG samples goal images from
trajectories and conditions on both the text and the goal image or solely on
the image when text is unavailable. During inference, where only the text is
provided, GR-MG generates the goal image via a diffusion-based image-editing
model and conditions on both the text and the generated image. This approach
enables GR-MG to leverage large amounts of partially annotated data while still
using language to flexibly specify tasks. To generate accurate goal images, we
propose a novel progress-guided goal image generation model which injects task
progress information into the generation process, significantly improving the
fidelity and the performance. In simulation experiments, GR-MG improves the
average number of tasks completed in a row of 5 from 3.35 to 4.04. In
real-robot experiments, GR-MG is able to perform 47 different tasks and
improves the success rate from 62.5% to 75.0% and 42.4% to 57.6% in simple and
generalization settings, respectively. Code and checkpoints will be available
at the project page: https://gr-mg.github.io/. | 9 pages, 7 figures, letter | cs.RO | [
"cs.RO",
"cs.AI"
] |
||
SWE-bench-java: A GitHub Issue Resolving Benchmark for Java | http://arxiv.org/abs/2408.14354v1 | http://arxiv.org/abs/2408.14354v1 | http://arxiv.org/pdf/2408.14354v1 | 2024-08-26 | 2024-08-26 | [
"Daoguang Zan",
"Zhirong Huang",
"Ailun Yu",
"Shaoxin Lin",
"Yifan Shi",
"Wei Liu",
"Dong Chen",
"Zongshuai Qi",
"Hao Yu",
"Lei Yu",
"Dezhi Ran",
"Muhan Zeng",
"Bo Shen",
"Pan Bian",
"Guangtai Liang",
"Bei Guan",
"Pengjie Huang",
"Tao Xie",
"Yongji Wang",
"Qianxiang Wang"
] | [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
] | GitHub issue resolving is a critical task in software engineering, recently
gaining significant attention in both industry and academia. Within this task,
SWE-bench has been released to evaluate issue resolving capabilities of large
language models (LLMs), but has so far only focused on the Python version. However,
supporting more programming languages is also important, as there is a strong
demand in industry. As a first step toward multilingual support, we have
developed a Java version of SWE-bench, called SWE-bench-java. We have publicly
released the dataset, along with the corresponding Docker-based evaluation
environment and leaderboard, which will be continuously maintained and updated
in the coming months. To verify the reliability of SWE-bench-java, we implement
a classic method SWE-agent and test several powerful LLMs on it. As is well
known, developing a high-quality multi-lingual benchmark is time-consuming and
labor-intensive, so we welcome contributions through pull requests or
collaboration to accelerate its iteration and refinement, paving the way for
fully automated programming. | This work is in progress | cs.SE | [
"cs.SE",
"cs.AI",
"cs.CL"
] |
||
Assessing Contamination in Large Language Models: Introducing the
LogProber method | http://arxiv.org/abs/2408.14352v1 | http://arxiv.org/abs/2408.14352v1 | http://arxiv.org/pdf/2408.14352v1 | 2024-08-26 | 2024-08-26 | [
"Nicolas Yax",
"Pierre-Yves Oudeyer",
"Stefano Palminteri"
] | [
"",
"",
""
] | In machine learning, contamination refers to situations where testing data
leak into the training set. The issue is particularly relevant for the
evaluation of the performance of Large Language Models (LLMs), which are
generally trained on gargantuan, and generally opaque, corpora of text scraped
from the world wide web. Developing tools to detect contamination is therefore
crucial to be able to fairly and properly track the evolution of the
performance of LLMs. Most recent works in the field are not tailored to
quantify contamination on short sequences of text like we find in psychology
questionnaires. In the present paper we introduce LogProber, a novel,
efficient algorithm that we show is able to detect contamination using token
probability in given sentences. In the second part we investigate the
limitations of the method and discuss how different training methods can
contaminate models without leaving traces in the token probabilities. | cs.CL | [
"cs.CL",
"cs.AI",
"cs.LG"
] |
|||
Multi-Agent Path Finding with Real Robot Dynamics and Interdependent
Tasks for Automated Warehouses | http://arxiv.org/abs/2408.14527v1 | http://arxiv.org/abs/2408.14527v1 | http://arxiv.org/pdf/2408.14527v1 | 2024-08-26 | 2024-08-26 | [
"Vassilissa Lehoux-Lebacque",
"Tomi Silander",
"Christelle Loiodice",
"Seungjoon Lee",
"Albert Wang",
"Sofia Michel"
] | [
"",
"",
"",
"",
"",
""
] | Multi-Agent Path Finding (MAPF) is an important optimization problem
underlying the deployment of robots in automated warehouses and factories.
Despite the large body of work on this topic, most approaches make heavy
simplifications, both on the environment and the agents, which make the
resulting algorithms impractical for real-life scenarios. In this paper, we
consider a realistic problem of online order delivery in a warehouse, where a
fleet of robots bring the products belonging to each order from shelves to
workstations. This creates a stream of inter-dependent pickup and delivery
tasks and the associated MAPF problem consists of computing realistic
collision-free robot trajectories fulfilling these tasks. To solve this MAPF
problem, we propose an extension of the standard Prioritized Planning algorithm
to deal with the inter-dependent tasks (Interleaved Prioritized Planning) and a
novel Via-Point Star (VP*) algorithm to compute an optimal dynamics-compliant
robot trajectory to visit a sequence of goal locations while avoiding moving
obstacles. We prove the completeness of our approach and evaluate it in
simulation as well as in a real warehouse. | Accepted to ECAI-2024. For related videos, see
https://europe.naverlabs.com/research/publications/MAPF_IPP | cs.RO | [
"cs.RO",
"cs.AI",
"cs.MA"
] |
||
Foundation Models for Music: A Survey | http://arxiv.org/abs/2408.14340v2 | http://arxiv.org/abs/2408.14340v2 | http://arxiv.org/pdf/2408.14340v2 | 2024-08-26 | 2024-08-27 | [
"Yinghao Ma",
"Anders Øland",
"Anton Ragni",
"Bleiz MacSen Del Sette",
"Charalampos Saitis",
"Chris Donahue",
"Chenghua Lin",
"Christos Plachouras",
"Emmanouil Benetos",
"Elio Quinton",
"Elona Shatri",
"Fabio Morreale",
"Ge Zhang",
"György Fazekas",
"Gus Xia",
"Huan Zhang",
"Ilaria Manco",
"Jiawen Huang",
"Julien Guinot",
"Liwei Lin",
"Luca Marinelli",
"Max W. Y. Lam",
"Megha Sharma",
"Qiuqiang Kong",
"Roger B. Dannenberg",
"Ruibin Yuan",
"Shangda Wu",
"Shih-Lun Wu",
"Shuqi Dai",
"Shun Lei",
"Shiyin Kang",
"Simon Dixon",
"Wenhu Chen",
"Wenhao Huang",
"Xingjian Du",
"Xingwei Qu",
"Xu Tan",
"Yizhi Li",
"Zeyue Tian",
"Zhiyong Wu",
"Zhizheng Wu",
"Ziyang Ma",
"Ziyu Wang"
] | [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
] | In recent years, foundation models (FMs) such as large language models (LLMs)
and latent diffusion models (LDMs) have profoundly impacted diverse sectors,
including music. This comprehensive review examines state-of-the-art (SOTA)
pre-trained models and foundation models in music, spanning representation
learning, generative learning, and multimodal learning. We first contextualise
the significance of music in various industries and trace the evolution of AI
in music. By delineating the modalities targeted by foundation models, we
discover that many music representations are underexplored in FM development.
Then, emphasis is placed on the lack of versatility of previous methods on
diverse music applications, along with the potential of FMs in music
understanding, generation and medical application. By comprehensively exploring
the details of the model pre-training paradigm, architectural choices,
tokenisation, finetuning methodologies and controllability, we emphasise the
important topics that should have been well explored, like instruction tuning
and in-context learning, scaling law and emergent ability, as well as
long-sequence modelling etc. A dedicated section presents insights into music
agents, accompanied by a thorough analysis of datasets and evaluations
essential for pre-training and downstream tasks. Finally, by underscoring the
vital importance of ethical considerations, we advocate that future research
on FMs for music should focus more on issues such as interpretability,
transparency, human responsibility, and copyright issues. The paper offers
insights into future challenges and trends on FMs for music, aiming to shape
the trajectory of human-AI collaboration in the music realm. | cs.SD | [
"cs.SD",
"cs.AI",
"cs.CL",
"cs.LG",
"eess.AS"
] |
|||
Machine Learning for Quantifier Selection in cvc5 | http://arxiv.org/abs/2408.14338v1 | http://arxiv.org/abs/2408.14338v1 | http://arxiv.org/pdf/2408.14338v1 | 2024-08-26 | 2024-08-26 | [
"Jan Jakubův",
"Mikoláš Janota",
"Jelle Piepenbrock",
"Josef Urban"
] | [
"",
"",
"",
""
] | In this work we considerably improve the state-of-the-art SMT solving on
first-order quantified problems by efficient machine learning guidance of
quantifier selection. Quantifiers represent a significant challenge for SMT and
are technically a source of undecidability. In our approach, we train an
efficient machine learning model that informs the solver which quantifiers
should be instantiated and which should not. Each quantifier may be instantiated
multiple times and the set of the active quantifiers changes as the solving
progresses. Therefore, we invoke the ML predictor many times, during the whole
run of the solver. To make this efficient, we use fast ML models based on
gradient boosting decision trees. We integrate our approach into the
state-of-the-art cvc5 SMT solver and show a considerable increase of the
system's holdout-set performance after training it on a large set of
first-order problems collected from the Mizar Mathematical Library. | cs.AI | [
"cs.AI",
"cs.LG",
"cs.LO"
] |
|||
Equivariant Reinforcement Learning under Partial Observability | http://arxiv.org/abs/2408.14336v1 | http://arxiv.org/abs/2408.14336v1 | http://arxiv.org/pdf/2408.14336v1 | 2024-08-26 | 2024-08-26 | [
"Hai Nguyen",
"Andrea Baisero",
"David Klee",
"Dian Wang",
"Robert Platt",
"Christopher Amato"
] | [
"",
"",
"",
"",
"",
""
] | Incorporating inductive biases is a promising approach for tackling
challenging robot learning domains with sample-efficient solutions. This paper
identifies partially observable domains where symmetries can be a useful
inductive bias for efficient learning. Specifically, by encoding the
equivariance regarding specific group symmetries into the neural networks, our
actor-critic reinforcement learning agents can reuse solutions in the past for
related scenarios. Consequently, our equivariant agents outperform
non-equivariant approaches significantly in terms of sample efficiency and
final performance, demonstrated through experiments on a range of robotic tasks
in simulation and real hardware. | Conference on Robot Learning, 2023 | cs.RO | [
"cs.RO",
"cs.AI",
"cs.CV"
] |
||
PHEVA: A Privacy-preserving Human-centric Video Anomaly Detection
Dataset | http://arxiv.org/abs/2408.14329v1 | http://arxiv.org/abs/2408.14329v1 | http://arxiv.org/pdf/2408.14329v1 | 2024-08-26 | 2024-08-26 | [
"Ghazal Alinezhad Noghre",
"Shanle Yao",
"Armin Danesh Pazho",
"Babak Rahimi Ardabili",
"Vinit Katariya",
"Hamed Tabkhi"
] | [
"",
"",
"",
"",
"",
""
] | We present PHEVA, a Privacy-preserving Human-centric Ethical Video Anomaly detection
dataset. By removing pixel information and providing only de-identified human
annotations, PHEVA safeguards personally identifiable information. The dataset
includes seven indoor/outdoor scenes, featuring one novel, context-specific
camera, and offers over 5x the pose-annotated frames compared to the largest
previous dataset. This study benchmarks state-of-the-art methods on PHEVA using
a comprehensive set of metrics, including the 10% Error Rate (10ER), a metric
used for anomaly detection for the first time, providing insights relevant to
real-world deployment. As the first of its kind, PHEVA bridges the gap between
conventional training and real-world deployment by introducing continual
learning benchmarks, with models outperforming traditional methods in 82.14% of
cases. The dataset is publicly available at
https://github.com/TeCSAR-UNCC/PHEVA.git. | cs.CV | [
"cs.CV",
"cs.AI"
] |
|||
Streamline tractography of the fetal brain in utero with machine
learning | http://arxiv.org/abs/2408.14326v1 | http://arxiv.org/abs/2408.14326v1 | http://arxiv.org/pdf/2408.14326v1 | 2024-08-26 | 2024-08-26 | [
"Weide Liu",
"Camilo Calixto",
"Simon K. Warfield",
"Davood Karimi"
] | [
"",
"",
"",
""
] | Diffusion-weighted magnetic resonance imaging (dMRI) is the only non-invasive
tool for studying white matter tracts and structural connectivity of the brain.
These assessments rely heavily on tractography techniques, which reconstruct
virtual streamlines representing white matter fibers. Much effort has been
devoted to improving tractography methodology for adult brains, while
tractography of the fetal brain has been largely neglected. Fetal tractography
faces unique difficulties due to low dMRI signal quality, immature and rapidly
developing brain structures, and paucity of reference data. This work presents
the first machine learning model for fetal tractography. The model input
consists of five sources of information: (1) Fiber orientation, inferred from a
diffusion tensor fit to the dMRI signal; (2) Directions of recent propagation
steps; (3) Global spatial information, encoded as distances to keypoints in the
brain cortex; (4) Tissue segmentation information; and (5) Prior information
about the expected local fiber orientations supplied with an atlas. In order to
mitigate the local tensor estimation error, a large spatial context around the
current point in the diffusion tensor image is encoded using convolutional and
attention neural network modules. Moreover, the diffusion tensor information at
a hypothetical next point is included in the model input. Filtering rules based
on anatomically constrained tractography are applied to prune implausible
streamlines. We trained the model on manually-refined whole-brain fetal
tractograms and validated the trained model on an independent set of 11 test
scans with gestational ages between 23 and 36 weeks. Results show that our
proposed method achieves superior performance across all evaluated tracts. The
new method can significantly advance the capabilities of dMRI for studying
normal and abnormal brain development in utero. | cs.CV | [
"cs.CV",
"cs.AI",
"cs.LG",
"q-bio.NC"
] |
|||
Claim Verification in the Age of Large Language Models: A Survey | http://arxiv.org/abs/2408.14317v1 | http://arxiv.org/abs/2408.14317v1 | http://arxiv.org/pdf/2408.14317v1 | 2024-08-26 | 2024-08-26 | [
"Alphaeus Dmonte",
"Roland Oruche",
"Marcos Zampieri",
"Prasad Calyam",
"Isabelle Augenstein"
] | [
"",
"",
"",
"",
""
] | The large and ever-increasing amount of data available on the Internet
coupled with the laborious task of manual claim and fact verification has
sparked the interest in the development of automated claim verification
systems. Several deep learning and transformer-based models have been proposed
for this task over the years. With the introduction of Large Language Models
(LLMs) and their superior performance in several NLP tasks, we have seen a
surge of LLM-based approaches to claim verification along with the use of novel
methods such as Retrieval Augmented Generation (RAG). In this survey, we
present a comprehensive account of recent claim verification frameworks using
LLMs. We describe the different components of the claim verification pipeline
used in these frameworks in detail including common approaches to retrieval,
prompting, and fine-tuning. Finally, we describe publicly available English
datasets created for this task. | cs.CL | [
"cs.CL",
"cs.AI"
] |
|||
Logic interpretations of ANN partition cells | http://arxiv.org/abs/2408.14314v1 | http://arxiv.org/abs/2408.14314v1 | http://arxiv.org/pdf/2408.14314v1 | 2024-08-26 | 2024-08-26 | [
"Ingo Schmitt"
] | [
""
] | Consider a binary classification problem solved using a feed-forward
artificial neural network (ANN). Let the ANN be composed of a ReLU layer and
several linear layers (convolution, sum-pooling, or fully connected). We assume
the network was trained with high accuracy. Despite numerous suggested
approaches, interpreting an artificial neural network remains challenging for
humans. For a new method of interpretation, we construct a bridge between a
simple ANN and logic. As a result, we can analyze and manipulate the semantics
of an ANN using the powerful tool set of logic. To achieve this, we decompose
the input space of the ANN into several network partition cells. Each network
partition cell represents a linear combination that maps input values to a
classifying output value. For interpreting the linear map of a partition cell
using logic expressions, we suggest minterm values as the input of a simple
ANN. We derive logic expressions representing interaction patterns for
separating objects classified as 1 from those classified as 0. To facilitate an
interpretation of logic expressions, we present them as binary logic trees. | cs.LO | [
"cs.LO",
"cs.AI",
"I.2.4; I.2.6; F.4.1"
] |
|||
LLM-3D Print: Large Language Models To Monitor and Control 3D Printing | http://arxiv.org/abs/2408.14307v1 | http://arxiv.org/abs/2408.14307v1 | http://arxiv.org/pdf/2408.14307v1 | 2024-08-26 | 2024-08-26 | [
"Yayati Jadhav",
"Peter Pak",
"Amir Barati Farimani"
] | [
"",
"",
""
] | Industry 4.0 has revolutionized manufacturing by driving digitalization and
shifting the paradigm toward additive manufacturing (AM). Fused Deposition
Modeling (FDM), a key AM technology, enables the creation of highly customized,
cost-effective products with minimal material waste through layer-by-layer
extrusion, posing a significant challenge to traditional subtractive methods.
However, the susceptibility of material extrusion techniques to errors often
requires expert intervention to detect and mitigate defects that can severely
compromise product quality. While automated error detection and machine
learning models exist, their generalizability across diverse 3D printer setups,
firmware, and sensors is limited, and deep learning methods require extensive
labeled datasets, hindering scalability and adaptability. To address these
challenges, we present a process monitoring and control framework that
leverages pre-trained Large Language Models (LLMs) alongside 3D printers to
detect and address printing defects. The LLM evaluates print quality by
analyzing images captured after each layer or print segment, identifying
failure modes and querying the printer for relevant parameters. It then
generates and executes a corrective action plan. We validated the effectiveness
of the proposed framework in identifying defects by comparing it against a
control group of engineers with diverse AM expertise. Our evaluation
demonstrated that LLM-based agents not only accurately identify common 3D
printing errors, such as inconsistent extrusion, stringing, warping, and layer
adhesion, but also effectively determine the parameters causing these failures
and autonomously correct them without any need for human intervention. | cs.CL | [
"cs.CL",
"cs.AI",
"cs.LG"
] |
|||
May the Forgetting Be with You: Alternate Replay for Learning with Noisy
Labels | http://arxiv.org/abs/2408.14284v1 | http://arxiv.org/abs/2408.14284v1 | http://arxiv.org/pdf/2408.14284v1 | 2024-08-26 | 2024-08-26 | [
"Monica Millunzi",
"Lorenzo Bonicelli",
"Angelo Porrello",
"Jacopo Credi",
"Petter N. Kolm",
"Simone Calderara"
] | [
"",
"",
"",
"",
"",
""
] | Forgetting presents a significant challenge during incremental training,
making it particularly demanding for contemporary AI systems to assimilate new
knowledge in streaming data environments. To address this issue, most
approaches in Continual Learning (CL) rely on the replay of a restricted buffer
of past data. However, the presence of noise in real-world scenarios, where
human annotation is constrained by time limitations or where data is
automatically gathered from the web, frequently renders these strategies
vulnerable. In this study, we address the problem of CL under Noisy Labels
(CLN) by introducing Alternate Experience Replay (AER), which takes advantage
of forgetting to maintain a clear distinction between clean, complex, and noisy
samples in the memory buffer. The idea is that complex or mislabeled examples,
which hardly fit the previously learned data distribution, are most likely to
be forgotten. To grasp the benefits of such a separation, we equip AER with
Asymmetric Balanced Sampling (ABS): a new sample selection strategy that
prioritizes purity on the current task while retaining relevant samples from
the past. Through extensive computational comparisons, we demonstrate the
effectiveness of our approach in terms of both accuracy and purity of the
obtained buffer, resulting in a remarkable average gain of 4.71% points in
accuracy with respect to existing loss-based purification strategies. Code is
available at https://github.com/aimagelab/mammoth. | 25 pages, 5 figures. Accepted at the The 35th British Machine Vision
Conference 2024 (BMVC 2024), Glasgow, UK | cs.LG | [
"cs.LG",
"cs.AI",
"cs.CV"
] |
||
Uncertainties of Latent Representations in Computer Vision | http://arxiv.org/abs/2408.14281v1 | http://arxiv.org/abs/2408.14281v1 | http://arxiv.org/pdf/2408.14281v1 | 2024-08-26 | 2024-08-26 | [
"Michael Kirchhof"
] | [
""
] | Uncertainty quantification is a key pillar of trustworthy machine learning.
It enables safe reactions under unsafe inputs, like predicting only when the
machine learning model detects sufficient evidence, discarding anomalous data,
or emitting warnings when an error is likely to be inbound. This is
particularly crucial in safety-critical areas like medical image classification
or self-driving cars. Despite the plethora of proposed uncertainty
quantification methods achieving increasingly higher scores on performance
benchmarks, uncertainty estimates are often shied away from in practice. Many
machine learning projects start from pretrained latent representations that
come without uncertainty estimates. Uncertainties would need to be trained by
practitioners on their own, which is notoriously difficult and
resource-intense.
This thesis makes uncertainty estimates easily accessible by adding them to
the latent representation vectors of pretrained computer vision models. Besides
proposing approaches rooted in probability and decision theory, such as
Monte-Carlo InfoNCE (MCInfoNCE) and loss prediction, we delve into both
theoretical and empirical questions. We show that these unobservable
uncertainties about unobservable latent representations are indeed provably
correct. We also provide an uncertainty-aware representation learning (URL)
benchmark to compare these unobservables against observable ground-truths.
Finally, we compile our findings to pretrain lightweight representation
uncertainties on large-scale computer vision models that transfer to unseen
datasets in a zero-shot manner.
Our findings do not only advance the current theoretical understanding of
uncertainties over latent variables, but also facilitate the access to
uncertainty quantification for future researchers inside and outside the field,
enabling straightforward but trustworthy machine learning. | Doctoral thesis | 10.15496/publikation-98103 | cs.LG | [
"cs.LG",
"cs.AI",
"cs.CV"
] |
|
Estimating Uncertainty with Implicit Quantile Network | http://arxiv.org/abs/2408.14525v1 | http://arxiv.org/abs/2408.14525v1 | http://arxiv.org/pdf/2408.14525v1 | 2024-08-26 | 2024-08-26 | [
"Yi Hung Lim"
] | [
""
] | Uncertainty quantification is an important part of many performance critical
applications. This paper provides a simple alternative to existing approaches
such as ensemble learning and Bayesian neural networks. By directly modeling
the loss distribution with an Implicit Quantile Network, we get an estimate of
how uncertain the model is of its predictions. For experiments with MNIST and
CIFAR datasets, the mean of the estimated loss distribution is 2x higher for
incorrect predictions. When data with high estimated uncertainty is removed
from the test dataset, the accuracy of the model goes up as much as 10%. This
method is simple to implement while offering important information to
applications where the user has to know when the model could be wrong (e.g.
deep learning for healthcare). | This method is simple to implement and offers important information
for performance critical applications | cs.LG | [
"cs.LG",
"cs.AI",
"cs.NE"
] |
||
Text3DAug -- Prompted Instance Augmentation for LiDAR Perception | http://arxiv.org/abs/2408.14253v2 | http://arxiv.org/abs/2408.14253v2 | http://arxiv.org/pdf/2408.14253v2 | 2024-08-26 | 2024-08-27 | [
"Laurenz Reichardt",
"Luca Uhr",
"Oliver Wasenmüller"
] | [
"",
"",
""
] | LiDAR data of urban scenarios poses unique challenges, such as heterogeneous
characteristics and inherent class imbalance. Therefore, large-scale datasets
are necessary to apply deep learning methods. Instance augmentation has emerged
as an efficient method to increase dataset diversity. However, current methods
require the time-consuming curation of 3D models or costly manual data
annotation. To overcome these limitations, we propose Text3DAug, a novel
approach leveraging generative models for instance augmentation. Text3DAug does
not depend on labeled data and is the first of its kind to generate instances
and annotations from text. This allows for a fully automated pipeline,
eliminating the need for manual effort in practical applications. Additionally,
Text3DAug is sensor agnostic and can be applied regardless of the LiDAR sensor
used. Comprehensive experimental analysis on LiDAR segmentation, detection and
novel class discovery demonstrates that Text3DAug is effective in supplementing
existing methods or as a standalone method, performing on par or better than
established methods, however while overcoming their specific drawbacks. The
code is publicly available. | Accepted at the 2024 IEEE/RSJ International Conference on Intelligent
Robots and Systems (IROS 2024) | cs.CV | [
"cs.CV",
"cs.AI"
] |
||
Beyond Few-shot Object Detection: A Detailed Survey | http://arxiv.org/abs/2408.14249v1 | http://arxiv.org/abs/2408.14249v1 | http://arxiv.org/pdf/2408.14249v1 | 2024-08-26 | 2024-08-26 | [
"Vishal Chudasama",
"Hiran Sarkar",
"Pankaj Wasnik",
"Vineeth N Balasubramanian",
"Jayateja Kalla"
] | [
"",
"",
"",
"",
""
] | Object detection is a critical field in computer vision focusing on
accurately identifying and locating specific objects in images or videos.
Traditional methods for object detection rely on large labeled training
datasets for each object category, which can be time-consuming and expensive to
collect and annotate. To address this issue, researchers have introduced
few-shot object detection (FSOD) approaches that merge few-shot learning and
object detection principles. These approaches allow models to quickly adapt to
new object categories with only a few annotated samples. While traditional FSOD
methods have been studied before, this survey paper comprehensively reviews
FSOD research with a specific focus on covering different FSOD settings such as
standard FSOD, generalized FSOD, incremental FSOD, open-set FSOD, and domain
adaptive FSOD. These approaches play a vital role in reducing the reliance on
extensive labeled datasets, particularly as the need for efficient machine
learning models continues to rise. This survey paper aims to provide a
comprehensive understanding of the above-mentioned few-shot settings and
explore the methodologies for each FSOD task. It thoroughly compares
state-of-the-art methods across different FSOD settings, analyzing them in
detail based on their evaluation protocols. Additionally, it offers insights
into their applications, challenges, and potential future directions in the
evolving field of object detection with limited data. | 43 pages, 8 figures | cs.CV | [
"cs.CV",
"cs.AI",
"I.2.10; I.4.8; I.5"
] |
||
Celtibero: Robust Layered Aggregation for Federated Learning | http://arxiv.org/abs/2408.14240v1 | http://arxiv.org/abs/2408.14240v1 | http://arxiv.org/pdf/2408.14240v1 | 2024-08-26 | 2024-08-26 | [
"Borja Molina-Coronado"
] | [
""
] | Federated Learning (FL) is an innovative approach to distributed machine
learning. While FL offers significant privacy advantages, it also faces
security challenges, particularly from poisoning attacks where adversaries
deliberately manipulate local model updates to degrade model performance or
introduce hidden backdoors. Existing defenses against these attacks have been
shown to be effective when the data on the nodes is identically and
independently distributed (i.i.d.), but they often fail under less restrictive,
non-i.i.d data conditions. To overcome these limitations, we introduce
Celtibero, a novel defense mechanism that integrates layered aggregation to
enhance robustness against adversarial manipulation. Through extensive
experiments on the MNIST and IMDB datasets, we demonstrate that Celtibero
consistently achieves high main task accuracy (MTA) while maintaining minimal
attack success rates (ASR) across a range of untargeted and targeted poisoning
attacks. Our results highlight the superiority of Celtibero over existing
defenses such as FL-Defender, LFighter, and FLAME, establishing it as a highly
effective solution for securing federated learning systems against
sophisticated poisoning attacks. | cs.CR | [
"cs.CR",
"cs.AI",
"cs.DC"
] |
|||
DSTI at LLMs4OL 2024 Task A: Intrinsic versus extrinsic knowledge for
type classification | http://arxiv.org/abs/2408.14236v1 | http://arxiv.org/abs/2408.14236v1 | http://arxiv.org/pdf/2408.14236v1 | 2024-08-26 | 2024-08-26 | [
"Hanna Abi Akl"
] | [
""
] | We introduce semantic towers, an extrinsic knowledge representation method,
and compare it to intrinsic knowledge in large language models for ontology
learning. Our experiments show a trade-off between performance and semantic
grounding for extrinsic knowledge compared to a fine-tuned model intrinsic
knowledge. We report our findings on the Large Language Models for Ontology
Learning (LLMs4OL) 2024 challenge. | 8 pages, 4 figures, accepted for the LLMs4OL challenge at the
International Semantic Web Conference (ISWC) 2024 | cs.CL | [
"cs.CL",
"cs.AI",
"cs.LG"
] |
||
Gallery-Aware Uncertainty Estimation For Open-Set Face Recognition | http://arxiv.org/abs/2408.14229v1 | http://arxiv.org/abs/2408.14229v1 | http://arxiv.org/pdf/2408.14229v1 | 2024-08-26 | 2024-08-26 | [
"Leonid Erlygin",
"Alexey Zaytsev"
] | [
"",
""
] | Accurately estimating image quality and model robustness improvement are
critical challenges in unconstrained face recognition, which can be addressed
through uncertainty estimation via probabilistic face embeddings. Previous
research mainly focused on uncertainty estimation in face verification, leaving
the open-set face recognition task underexplored. In open-set face recognition,
one seeks to classify an image, which could also be unknown. Here, the low
variance of probabilistic embedding does not imply a low error probability: an
image embedding could be close to several classes in a gallery, thus yielding
high uncertainty. We propose a method aware of two sources of ambiguity in the
open-set recognition system: (1) the gallery uncertainty caused by overlapping
classes and (2) the uncertainty of the face embeddings. To detect both types,
we use a Bayesian probabilistic model of embedding distribution, which provides
a principled uncertainty estimate. Challenging open-set face recognition
datasets, such as IJB-C, serve as a testbed for our method. We also propose a
new open-set recognition protocol for whale and dolphin identification. The
proposed approach better identifies recognition errors than uncertainty
estimation methods based solely on image quality. | cs.CV | [
"cs.CV",
"cs.AI",
"cs.LG"
] |
|||
Fact Probability Vector Based Goal Recognition | http://arxiv.org/abs/2408.14224v1 | http://arxiv.org/abs/2408.14224v1 | http://arxiv.org/pdf/2408.14224v1 | 2024-08-26 | 2024-08-26 | [
"Nils Wilken",
"Lea Cohausz",
"Christian Bartelt",
"Heiner Stuckenschmidt"
] | [
"",
"",
"",
""
] | We present a new approach to goal recognition that involves comparing
observed facts with their expected probabilities. These probabilities depend on
a specified goal g and initial state s0. Our method maps these probabilities
and observed facts into a real vector space to compute heuristic values for
potential goals. These values estimate the likelihood of a given goal being the
true objective of the observed agent. As obtaining exact expected probabilities
for observed facts in an observation sequence is often practically infeasible,
we propose and empirically validate a method for approximating these
probabilities. Our empirical results show that the proposed approach offers
improved goal recognition precision compared to state-of-the-art techniques
while reducing computational complexity. | Will be presented at ECAI 2024 | cs.AI | [
"cs.AI"
] |
||
MagicMan: Generative Novel View Synthesis of Humans with 3D-Aware
Diffusion and Iterative Refinement | http://arxiv.org/abs/2408.14211v1 | http://arxiv.org/abs/2408.14211v1 | http://arxiv.org/pdf/2408.14211v1 | 2024-08-26 | 2024-08-26 | [
"Xu He",
"Xiaoyu Li",
"Di Kang",
"Jiangnan Ye",
"Chaopeng Zhang",
"Liyang Chen",
"Xiangjun Gao",
"Han Zhang",
"Zhiyong Wu",
"Haolin Zhuang"
] | [
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
] | Existing works in single-image human reconstruction suffer from weak
generalizability due to insufficient training data or 3D inconsistencies for a
lack of comprehensive multi-view knowledge. In this paper, we introduce
MagicMan, a human-specific multi-view diffusion model designed to generate
high-quality novel view images from a single reference image. As its core, we
leverage a pre-trained 2D diffusion model as the generative prior for
generalizability, with the parametric SMPL-X model as the 3D body prior to
promote 3D awareness. To tackle the critical challenge of maintaining
consistency while achieving dense multi-view generation for improved 3D human
reconstruction, we first introduce hybrid multi-view attention to facilitate
both efficient and thorough information interchange across different views.
Additionally, we present a geometry-aware dual branch to perform concurrent
generation in both RGB and normal domains, further enhancing consistency via
geometry cues. Last but not least, to address ill-shaped issues arising from
inaccurate SMPL-X estimation that conflicts with the reference image, we
propose a novel iterative refinement strategy, which progressively optimizes
SMPL-X accuracy while enhancing the quality and consistency of the generated
multi-views. Extensive experimental results demonstrate that our method
significantly outperforms existing approaches in both novel view synthesis and
subsequent 3D human reconstruction tasks. | Project Page: https://thuhcsi.github.io/MagicMan | cs.CV | [
"cs.CV",
"cs.AI"
] |
||
Representative Arm Identification: A fixed confidence approach to
identify cluster representatives | http://arxiv.org/abs/2408.14195v1 | http://arxiv.org/abs/2408.14195v1 | http://arxiv.org/pdf/2408.14195v1 | 2024-08-26 | 2024-08-26 | [
"Sarvesh Gharat",
"Aniket Yadav",
"Nikhil Karamchandani",
"Jayakrishnan Nair"
] | [
"",
"",
"",
""
] | We study the representative arm identification (RAI) problem in the
multi-armed bandits (MAB) framework, wherein we have a collection of arms, each
associated with an unknown reward distribution. An underlying instance is
defined by a partitioning of the arms into clusters of predefined sizes, such
that for any $j > i$, all arms in cluster $i$ have a larger mean reward than
those in cluster $j$. The goal in RAI is to reliably identify a certain
prespecified number of arms from each cluster, while using as few arm pulls as
possible. The RAI problem covers as special cases several well-studied MAB
problems such as identifying the best arm or any $M$ out of the top $K$, as
well as both full and coarse ranking. We start by providing an
instance-dependent lower bound on the sample complexity of any feasible
algorithm for this setting. We then propose two algorithms, based on the idea
of confidence intervals, and provide high probability upper bounds on their
sample complexity, which orderwise match the lower bound. Finally, we do an
empirical comparison of both algorithms along with an LUCB-type alternative on
both synthetic and real-world datasets, and demonstrate the superior
performance of our proposed schemes in most cases. | We analyse a clustered multi-armed bandit formulation, where the
learning objective is to identify representative arms from each cluster, in a
fixed confidence setting | cs.LG | [
"cs.LG",
"cs.AI",
"math.PR",
"stat.ML"
] |
||
DynamicRouteGPT: A Real-Time Multi-Vehicle Dynamic Navigation Framework
Based on Large Language Models | http://arxiv.org/abs/2408.14185v1 | http://arxiv.org/abs/2408.14185v1 | http://arxiv.org/pdf/2408.14185v1 | 2024-08-26 | 2024-08-26 | [
"Ziai Zhou",
"Bin Zhou",
"Hao Liu"
] | [
"",
"",
""
] | Real-time dynamic path planning in complex traffic environments presents
challenges, such as varying traffic volumes and signal wait times. Traditional
static routing algorithms like Dijkstra and A* compute shortest paths but often
fail under dynamic conditions. Recent Reinforcement Learning (RL) approaches
offer improvements but tend to focus on local optima, risking dead-ends or
boundary issues. This paper proposes a novel approach based on causal inference
for real-time dynamic path planning, balancing global and local optimality. We
first use the static Dijkstra algorithm to compute a globally optimal baseline
path. A distributed control strategy then guides vehicles along this path. At
intersections, DynamicRouteGPT performs real-time decision-making for local
path selection, considering real-time traffic, driving preferences, and
unexpected events. DynamicRouteGPT integrates Markov chains, Bayesian
inference, and large-scale pretrained language models like Llama3 8B to provide
an efficient path planning solution. It dynamically adjusts to traffic
scenarios and driver preferences and requires no pre-training, offering broad
applicability across road networks. A key innovation is the construction of
causal graphs for counterfactual reasoning, optimizing path decisions.
Experimental results show that our method achieves state-of-the-art performance
in real-time dynamic path planning for multiple vehicles while providing
explainable path selections, offering a novel and efficient solution for
complex traffic environments. | This paper is 12 pages long and represents the initial draft, version
1 | cs.AI | [
"cs.AI",
"cs.RO"
] |
||
Robot Navigation with Entity-Based Collision Avoidance using Deep
Reinforcement Learning | http://arxiv.org/abs/2408.14183v1 | http://arxiv.org/abs/2408.14183v1 | http://arxiv.org/pdf/2408.14183v1 | 2024-08-26 | 2024-08-26 | [
"Yury Kolomeytsev",
"Dmitry Golembiovsky"
] | [
"",
""
] | Efficient navigation in dynamic environments is crucial for autonomous robots
interacting with various environmental entities, including both moving agents
and static obstacles. In this study, we present a novel methodology that
enhances the robot's interaction with different types of agents and obstacles
based on specific safety requirements. This approach uses information about the
entity types, improving collision avoidance and ensuring safer navigation. We
introduce a new reward function that penalizes the robot for collisions with
different entities such as adults, bicyclists, children, and static obstacles,
and additionally encourages the robot's proximity to the goal. It also
penalizes the robot for being close to entities, and the safe distance also
depends on the entity type. Additionally, we propose an optimized algorithm for
training and testing, which significantly accelerates train, validation, and
test steps and enables training in complex environments. Comprehensive
experiments conducted using simulation demonstrate that our approach
consistently outperforms conventional navigation and collision avoidance
methods, including state-of-the-art techniques. To sum up, this work
contributes to enhancing the safety and efficiency of navigation systems for
autonomous robots in dynamic, crowded environments. | 14 pages, 5 figures | cs.RO | [
"cs.RO",
"cs.AI",
"cs.LG"
] |
||
I2EBench: A Comprehensive Benchmark for Instruction-based Image Editing | http://arxiv.org/abs/2408.14180v1 | http://arxiv.org/abs/2408.14180v1 | http://arxiv.org/pdf/2408.14180v1 | 2024-08-26 | 2024-08-26 | [
"Yiwei Ma",
"Jiayi Ji",
"Ke Ye",
"Weihuang Lin",
"Zhibin Wang",
"Yonghan Zheng",
"Qiang Zhou",
"Xiaoshuai Sun",
"Rongrong Ji"
] | [
"",
"",
"",
"",
"",
"",
"",
"",
""
] | Significant progress has been made in the field of Instruction-based Image
Editing (IIE). However, evaluating these models poses a significant challenge.
A crucial requirement in this field is the establishment of a comprehensive
evaluation benchmark for accurately assessing editing results and providing
valuable insights for its further development. In response to this need, we
propose I2EBench, a comprehensive benchmark designed to automatically evaluate
the quality of edited images produced by IIE models from multiple dimensions.
I2EBench consists of 2,000+ images for editing, along with 4,000+ corresponding
original and diverse instructions. It offers three distinctive characteristics:
1) Comprehensive Evaluation Dimensions: I2EBench comprises 16 evaluation
dimensions that cover both high-level and low-level aspects, providing a
comprehensive assessment of each IIE model. 2) Human Perception Alignment: To
ensure the alignment of our benchmark with human perception, we conducted an
extensive user study for each evaluation dimension. 3) Valuable Research
Insights: By analyzing the advantages and disadvantages of existing IIE models
across the 16 dimensions, we offer valuable research insights to guide future
development in the field. We will open-source I2EBench, including all
instructions, input images, human annotations, edited images from all evaluated
methods, and a simple script for evaluating the results from new IIE models.
The code, dataset and generated images from all IIE models are provided in
github: https://github.com/cocoshe/I2EBench. | Tech report, 39 pages, 41 figures | cs.CV | [
"cs.CV",
"cs.AI"
] |
||
SwiftBrush v2: Make Your One-step Diffusion Model Better Than Its
Teacher | http://arxiv.org/abs/2408.14176v2 | http://arxiv.org/abs/2408.14176v2 | http://arxiv.org/pdf/2408.14176v2 | 2024-08-26 | 2024-08-27 | [
"Trung Dao",
"Thuan Hoang Nguyen",
"Thanh Le",
"Duc Vu",
"Khoi Nguyen",
"Cuong Pham",
"Anh Tran"
] | [
"",
"",
"",
"",
"",
"",
""
] | In this paper, we aim to enhance the performance of SwiftBrush, a prominent
one-step text-to-image diffusion model, to be competitive with its multi-step
Stable Diffusion counterpart. Initially, we explore the quality-diversity
trade-off between SwiftBrush and SD Turbo: the former excels in image
diversity, while the latter excels in image quality. This observation motivates
our proposed modifications in the training methodology, including better weight
initialization and efficient LoRA training. Moreover, our introduction of a
novel clamped CLIP loss enhances image-text alignment and results in improved
image quality. Remarkably, by combining the weights of models trained with
efficient LoRA and full training, we achieve a new state-of-the-art one-step
diffusion model, achieving an FID of 8.14 and surpassing all GAN-based and
multi-step Stable Diffusion models. The project page is available at
https://swiftbrushv2.github.io. | Accepted to ECCV'24 | cs.CV | [
"cs.CV",
"cs.AI"
] |
||
Dynamic Pricing for Electric Vehicle Charging | http://arxiv.org/abs/2408.14169v1 | http://arxiv.org/abs/2408.14169v1 | http://arxiv.org/pdf/2408.14169v1 | 2024-08-26 | 2024-08-26 | [
"Arun Kumar Kalakanti",
"Shrisha Rao"
] | [
"",
""
] | Dynamic pricing is a promising strategy to address the challenges of smart
charging, as traditional time-of-use (ToU) rates and stationary pricing (SP) do
not dynamically react to changes in operating conditions, reducing revenue for
charging station (CS) vendors and affecting grid stability. Previous studies
evaluated single objectives or linear combinations of objectives for EV CS
pricing solutions, simplifying trade-offs and preferences among objectives. We
develop a novel formulation for the dynamic pricing problem by addressing
multiple conflicting objectives efficiently instead of solely focusing on one
objective or metric, as in earlier works. We find optimal trade-offs or Pareto
solutions efficiently using Non-dominated Sorting Genetic Algorithms (NSGA) II
and NSGA III. A dynamic pricing model quantifies the relationship between
demand and price while simultaneously solving multiple conflicting objectives,
such as revenue, quality of service (QoS), and peak-to-average ratios (PAR). A
single method can only address some of the above aspects of dynamic pricing
comprehensively. We present a three-part dynamic pricing approach using a
Bayesian model, multi-objective optimization, and multi-criteria
decision-making (MCDM) using pseudo-weight vectors. To address the research gap
in CS pricing, our method selects solutions using revenue, QoS, and PAR metrics
simultaneously. Two California charging sites' real-world data validates our
approach. | 12 pages | cs.DC | [
"cs.DC",
"cs.AI"
] |
||
Fire-Flyer AI-HPC: A Cost-Effective Software-Hardware Co-Design for Deep
Learning | http://arxiv.org/abs/2408.14158v1 | http://arxiv.org/abs/2408.14158v1 | http://arxiv.org/pdf/2408.14158v1 | 2024-08-26 | 2024-08-26 | [
"Wei An",
"Xiao Bi",
"Guanting Chen",
"Shanhuang Chen",
"Chengqi Deng",
"Honghui Ding",
"Kai Dong",
"Qiushi Du",
"Wenjun Gao",
"Kang Guan",
"Jianzhong Guo",
"Yongqiang Guo",
"Zhe Fu",
"Ying He",
"Panpan Huang",
"Jiashi Li",
"Wenfeng Liang",
"Xiaodong Liu",
"Xin Liu",
"Yiyuan Liu",
"Yuxuan Liu",
"Shanghao Lu",
"Xuan Lu",
"Xiaotao Nie",
"Tian Pei",
"Junjie Qiu",
"Hui Qu",
"Zehui Ren",
"Zhangli Sha",
"Xuecheng Su",
"Xiaowen Sun",
"Yixuan Tan",
"Minghui Tang",
"Shiyu Wang",
"Yaohui Wang",
"Yongji Wang",
"Ziwei Xie",
"Yiliang Xiong",
"Yanhong Xu",
"Shengfeng Ye",
"Shuiping Yu",
"Yukun Zha",
"Liyue Zhang",
"Haowei Zhang",
"Mingchuan Zhang",
"Wentao Zhang",
"Yichao Zhang",
"Chenggang Zhao",
"Yao Zhao",
"Shangyan Zhou",
"Shunfeng Zhou",
"Yuheng Zou"
] | [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
] | The rapid progress in Deep Learning (DL) and Large Language Models (LLMs) has
exponentially increased demands of computational power and bandwidth. This,
combined with the high costs of faster computing chips and interconnects, has
significantly inflated High Performance Computing (HPC) construction costs. To
address these challenges, we introduce the Fire-Flyer AI-HPC architecture, a
synergistic hardware-software co-design framework and its best practices. For
DL training, we deployed the Fire-Flyer 2 with 10,000 PCIe A100 GPUs, achieved
performance approximating the DGX-A100 while reducing costs by half and energy
consumption by 40%. We specifically engineered HFReduce to accelerate allreduce
communication and implemented numerous measures to keep our Computation-Storage
Integrated Network congestion-free. Through our software stack, including
HaiScale, 3FS, and HAI-Platform, we achieved substantial scalability by
overlapping computation and communication. Our system-oriented experience from
DL training provides valuable insights to drive future advancements in AI-HPC. | This is the preprint version of the paper accepted for presentation
at the 2024 International Conference for High Performance Computing,
Networking, Storage, and Analysis (SC'24). © 2024 IEEE. Personal
use of this material is permitted. For other uses, permission from IEEE must
be obtained. Please refer to IEEE Xplore for the final published version | cs.DC | [
"cs.DC",
"cs.AI"
] |
||
Explaining Vision-Language Similarities in Dual Encoders with
Feature-Pair Attributions | http://arxiv.org/abs/2408.14153v1 | http://arxiv.org/abs/2408.14153v1 | http://arxiv.org/pdf/2408.14153v1 | 2024-08-26 | 2024-08-26 | [
"Lucas Möller",
"Pascal Tilli",
"Ngoc Thang Vu",
"Sebastian Padó"
] | [
"",
"",
"",
""
] | Dual encoder architectures like CLIP models map two types of inputs into a
shared embedding space and learn similarities between them. However, it is not
understood how such models compare two inputs. Here, we address this research
gap with two contributions. First, we derive a method to attribute predictions
of any differentiable dual encoder onto feature-pair interactions between its
inputs. Second, we apply our method to CLIP-type models and show that they
learn fine-grained correspondences between parts of captions and regions in
images. They match objects across input modes and also account for mismatches.
However, this visual-linguistic grounding ability heavily varies between object
classes, depends on the training data distribution, and largely improves after
in-domain training. Using our method we can identify knowledge gaps about
specific object classes in individual models and can monitor their improvement
upon fine-tuning. | cs.CV | [
"cs.CV",
"cs.AI",
"cs.CL"
] |
|||
Exploring the Potential of Large Language Models for Heterophilic Graphs | http://arxiv.org/abs/2408.14134v1 | http://arxiv.org/abs/2408.14134v1 | http://arxiv.org/pdf/2408.14134v1 | 2024-08-26 | 2024-08-26 | [
"Yuxia Wu",
"Shujie Li",
"Yuan Fang",
"Chuan Shi"
] | [
"",
"",
"",
""
] | Graph Neural Networks (GNNs) are essential for various graph-based learning
tasks. Notably, classical GNN architectures operate under the assumption of
homophily, which posits that connected nodes are likely to share similar
features. However, this assumption limits the effectiveness of GNNs in handling
heterophilic graphs where connected nodes often exhibit dissimilar
characteristics. Existing approaches for heterophilic graphs, such as non-local
neighbor extension and architectural refinement overlook the rich textual data
associated with nodes, which could unlock deeper insights into these
heterophilic contexts. With advancements in Large Language Models (LLMs), there
is significant promise to enhance GNNs by leveraging the extensive open-world
knowledge within LLMs to more effectively interpret and utilize textual data
for characterizing heterophilic graphs. In this work, we explore the potential
of LLMs for modeling heterophilic graphs and propose a novel two-stage
framework: LLM-enhanced edge discriminator and LLM-guided edge reweighting.
Specifically, in the first stage, we fine-tune the LLM to better identify
homophilic and heterophilic edges based on the textual information of their
nodes. In the second stage, we adaptively manage message propagation in GNNs
for different edge types based on node features, structures, and heterophilic
or homophilic characteristics. To cope with the computational demands when
deploying LLMs in practical scenarios, we further explore model distillation
techniques to fine-tune smaller, more efficient models that maintain
competitive performance. Extensive experiments validate the effectiveness of
our framework, demonstrating the feasibility of using LLMs to enhance GNNs for
node classification on heterophilic graphs. | Under review | cs.LG | [
"cs.LG",
"cs.AI",
"cs.CL",
"cs.SI"
] |
||
Retrieval Augmented Generation for Dynamic Graph Modeling | http://arxiv.org/abs/2408.14523v1 | http://arxiv.org/abs/2408.14523v1 | http://arxiv.org/pdf/2408.14523v1 | 2024-08-26 | 2024-08-26 | [
"Yuxia Wu",
"Yuan Fang",
"Lizi Liao"
] | [
"",
"",
""
] | Dynamic graph modeling is crucial for analyzing evolving patterns in various
applications. Existing approaches often integrate graph neural networks with
temporal modules or redefine dynamic graph modeling as a generative sequence
task. However, these methods typically rely on isolated historical contexts of
the target nodes from a narrow perspective, neglecting occurrences of similar
patterns or relevant cases associated with other nodes. In this work, we
introduce the Retrieval-Augmented Generation for Dynamic Graph Modeling
(RAG4DyG) framework, which leverages guidance from contextually and temporally
analogous examples to broaden the perspective of each node. This approach
presents two critical challenges: (1) How to identify and retrieve high-quality
demonstrations that are contextually and temporally analogous to dynamic graph
samples? (2) How can these demonstrations be effectively integrated to improve
dynamic graph modeling? To address these challenges, we propose RAG4DyG, which
enriches the understanding of historical contexts by retrieving and learning
from contextually and temporally pertinent demonstrations. Specifically, we
employ a time- and context-aware contrastive learning module to identify and
retrieve relevant cases for each query sequence. Moreover, we design a graph
fusion strategy to integrate the retrieved cases, thereby augmenting the
inherent historical contexts for improved prediction. Extensive experiments on
real-world datasets across different domains demonstrate the effectiveness of
RAG4DyG for dynamic graph modeling. | Under review | cs.LG | [
"cs.LG",
"cs.AI"
] |
||
Contrastive Learning Subspace for Text Clustering | http://arxiv.org/abs/2408.14119v1 | http://arxiv.org/abs/2408.14119v1 | http://arxiv.org/pdf/2408.14119v1 | 2024-08-26 | 2024-08-26 | [
"Qian Yong",
"Chen Chen",
"Xiabing Zhou"
] | [
"",
"",
""
] | Contrastive learning has been frequently investigated to learn effective
representations for text clustering tasks. While existing contrastive
learning-based text clustering methods only focus on modeling instance-wise
semantic similarity relationships, they ignore contextual information and
underlying relationships among all instances that need to be clustered. In
this paper, we propose a novel text clustering approach called Subspace
Contrastive Learning (SCL) which models cluster-wise relationships among
instances. Specifically, the proposed SCL consists of two main modules: (1) a
self-expressive module that constructs virtual positive samples and (2) a
contrastive learning module that further learns a discriminative subspace to
capture task-specific cluster-wise relationships among texts. Experimental
results show that the proposed SCL method not only achieves superior
results on multiple text clustering datasets but also has lower complexity in
positive sample construction. | cs.CL | [
"cs.CL",
"cs.AI"
] |
|||
Estimating Causal Effects from Learned Causal Networks | http://arxiv.org/abs/2408.14101v2 | http://arxiv.org/abs/2408.14101v2 | http://arxiv.org/pdf/2408.14101v2 | 2024-08-26 | 2024-08-27 | [
"Anna Raichev",
"Alexander Ihler",
"Jin Tian",
"Rina Dechter"
] | [
"",
"",
"",
""
] | The standard approach to answering an identifiable causal-effect query (e.g.,
$P(Y|do(X))$) when given a causal diagram and observational data is to first
generate an estimand, or probabilistic expression over the observable
variables, which is then evaluated using the observational data. In this paper,
we propose an alternative paradigm for answering causal-effect queries over
discrete observable variables. We propose to instead learn the causal Bayesian
network and its confounding latent variables directly from the observational
data. Then, efficient probabilistic graphical model (PGM) algorithms can be
applied to the learned model to answer queries. Perhaps surprisingly, we show
that this \emph{model completion} learning approach can be more effective than
estimand approaches, particularly for larger models in which the estimand
expressions become computationally difficult.
We illustrate our method's potential using a benchmark collection of Bayesian
networks and synthetically generated causal models. | cs.AI | [
"cs.AI",
"cs.LG"
] |
|||
Exploring GPU-to-GPU Communication: Insights into Supercomputer
Interconnects | http://arxiv.org/abs/2408.14090v1 | http://arxiv.org/abs/2408.14090v1 | http://arxiv.org/pdf/2408.14090v1 | 2024-08-26 | 2024-08-26 | [
"Daniele De Sensi",
"Lorenzo Pichetti",
"Flavio Vella",
"Tiziano De Matteis",
"Zebin Ren",
"Luigi Fusco",
"Matteo Turisini",
"Daniele Cesarini",
"Kurt Lust",
"Animesh Trivedi",
"Duncan Roweth",
"Filippo Spiga",
"Salvatore Di Girolamo",
"Torsten Hoefler"
] | [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
] | Multi-GPU nodes are increasingly common in the rapidly evolving landscape of
exascale supercomputers. On these systems, GPUs on the same node are connected
through dedicated networks, with bandwidths up to a few terabits per second.
However, gauging performance expectations and maximizing system efficiency is
challenging due to different technologies, design options, and software layers.
This paper comprehensively characterizes three supercomputers - Alps, Leonardo,
and LUMI - each with a unique architecture and design. We focus on performance
evaluation of intra-node and inter-node interconnects on up to 4096 GPUs, using
a mix of intra-node and inter-node benchmarks. By analyzing their limitations and
opportunities, we aim to offer practical guidance to researchers, system
architects, and software developers dealing with multi-GPU supercomputing. Our
results show that there is untapped bandwidth, and there are still many
opportunities for optimization, ranging from network to software optimization. | Published in Proceedings of The International Conference for High
Performance Computing Networking, Storage, and Analysis (SC '24) (2024) | cs.DC | [
"cs.DC",
"cs.AI",
"cs.AR",
"cs.NI",
"cs.PF",
"C.2.4; C.5.1; C.2.1; C.4"
] |
||
SONICS: Synthetic Or Not -- Identifying Counterfeit Songs | http://arxiv.org/abs/2408.14080v2 | http://arxiv.org/abs/2408.14080v2 | http://arxiv.org/pdf/2408.14080v2 | 2024-08-26 | 2024-08-27 | [
"Md Awsafur Rahman",
"Zaber Ibn Abdul Hakim",
"Najibul Haque Sarker",
"Bishmoy Paul",
"Shaikh Anowarul Fattah"
] | [
"",
"",
"",
"",
""
] | The recent surge in AI-generated songs presents exciting possibilities and
challenges. While these tools democratize music creation, they also necessitate
the ability to distinguish between human-composed and AI-generated songs for
safeguarding artistic integrity and content curation. Existing research and
datasets in fake song detection only focus on singing voice deepfake detection
(SVDD), where the vocals are AI-generated but the instrumental music is sourced
from real songs. However, this approach is inadequate for contemporary
end-to-end AI-generated songs where all components (vocals, lyrics, music, and
style) could be AI-generated. Additionally, existing datasets lack lyrics-music
diversity, long-duration songs, and open fake songs. To address these gaps, we
introduce SONICS, a novel dataset for end-to-end Synthetic Song Detection
(SSD), comprising over 97k songs with over 49k synthetic songs from popular
platforms like Suno and Udio. Furthermore, we highlight the importance of
modeling long-range temporal dependencies in songs for effective authenticity
detection, an aspect overlooked in existing methods. To capture these patterns,
we propose a novel model, SpecTTTra, that is up to 3 times faster and 6 times
more memory efficient compared to popular CNN and Transformer-based models
while maintaining competitive performance. Finally, we offer both AI-based and
Human evaluation benchmarks, addressing another deficiency in current research. | cs.SD | [
"cs.SD",
"cs.AI",
"cs.CV",
"cs.LG",
"eess.AS"
] |
|||
Revisiting Vacuous Reduct Semantics for Abstract Argumentation (Extended
Version) | http://arxiv.org/abs/2408.14069v1 | http://arxiv.org/abs/2408.14069v1 | http://arxiv.org/pdf/2408.14069v1 | 2024-08-26 | 2024-08-26 | [
"Lydia Blümel",
"Matthias Thimm"
] | [
"",
""
] | We consider the notion of a vacuous reduct semantics for abstract
argumentation frameworks, which, given two abstract argumentation semantics
{\sigma} and {\tau}, refines {\sigma} (base condition) by accepting only those
{\sigma}-extensions that have no non-empty {\tau}-extension in their reduct
(vacuity condition). We give a systematic overview on vacuous reduct semantics
resulting from combining different admissibility-based and conflict-free
semantics and present a principle-based analysis of vacuous reduct semantics in
general. We provide criteria for the inheritance of principle satisfaction by a
vacuous reduct semantics from its base and vacuity condition for established as
well as recently introduced principles in the context of weak argumentation
semantics. We also conduct a principle-based analysis for the special case of
undisputed semantics. | The paper has been accepted at ECAI 2024, this is an extended version
including proofs of technical results | cs.AI | [
"cs.AI"
] |
||
HAPM -- Hardware Aware Pruning Method for CNN hardware accelerators in
resource constrained devices | http://arxiv.org/abs/2408.14055v1 | http://arxiv.org/abs/2408.14055v1 | http://arxiv.org/pdf/2408.14055v1 | 2024-08-26 | 2024-08-26 | [
"Federico Nicolas Peccia",
"Luciano Ferreyro",
"Alejandro Furfaro"
] | [
"",
"",
""
] | In recent years, algorithms known as Convolutional Neural Networks
(CNNs) have become increasingly popular, expanding their application range to
several areas. In particular, the image processing field has experienced a
remarkable advance thanks to these algorithms. In IoT, a wide research field
aims to develop hardware capable of executing them at the lowest possible energy
cost while keeping acceptable image inference times. One can reconcile these
apparently conflicting objectives by applying design and training techniques.
The present work proposes a generic hardware architecture ready to be
implemented on FPGA devices, supporting a wide range of configurations which
allows the system to run different neural network architectures, dynamically
exploiting the sparsity caused by pruning techniques in the mathematical
operations present in this kind of algorithms. The inference speed of the
design is evaluated over different resource constrained FPGA devices. Finally,
the standard pruning algorithm is compared against a custom pruning technique
specifically designed to exploit the scheduling properties of this hardware
accelerator. We demonstrate that our hardware-aware pruning algorithm achieves
a remarkable 45% improvement in inference time compared to a network
pruned using the standard algorithm. | 8 pages, 7 figure, thesis for the title of Electronic Engineer
attained in 2021 at the Universidad Tecnologica Nacional (UTN), Argentina | cs.AR | [
"cs.AR",
"cs.AI"
] |
||
Beyond Detection: Leveraging Large Language Models for Cyber Attack
Prediction in IoT Networks | http://arxiv.org/abs/2408.14045v1 | http://arxiv.org/abs/2408.14045v1 | http://arxiv.org/pdf/2408.14045v1 | 2024-08-26 | 2024-08-26 | [
"Alaeddine Diaf",
"Abdelaziz Amara Korba",
"Nour Elislem Karabadji",
"Yacine Ghamri-Doudane"
] | [
"",
"",
"",
""
] | In recent years, numerous large-scale cyberattacks have exploited Internet of
Things (IoT) devices, a phenomenon that is expected to escalate with the
continuing proliferation of IoT technology. Despite considerable efforts in
attack detection, intrusion detection systems remain mostly reactive,
responding to specific patterns or observed anomalies. This work proposes a
proactive approach to anticipate and mitigate malicious activities before they
cause damage. This paper proposes a novel network intrusion prediction
framework that combines Large Language Models (LLMs) with Long Short Term
Memory (LSTM) networks. The framework incorporates two LLMs in a feedback loop:
a fine-tuned Generative Pre-trained Transformer (GPT) model for predicting
network traffic and a fine-tuned Bidirectional Encoder Representations from
Transformers (BERT) for evaluating the predicted traffic. The LSTM classifier
model then identifies malicious packets among these predictions. Our framework,
evaluated on the CICIoT2023 IoT attack dataset, demonstrates a significant
improvement in predictive capabilities, achieving an overall accuracy of 98%,
offering a robust solution to IoT cybersecurity challenges. | cs.CR | [
"cs.CR",
"cs.AI"
] |
|||
PAGE: Parametric Generative Explainer for Graph Neural Network | http://arxiv.org/abs/2408.14042v1 | http://arxiv.org/abs/2408.14042v1 | http://arxiv.org/pdf/2408.14042v1 | 2024-08-26 | 2024-08-26 | [
"Yang Qiu",
"Wei Liu",
"Jun Wang",
"Ruixuan Li"
] | [
"",
"",
"",
""
] | This article introduces PAGE, a parameterized generative interpretive
framework. PAGE is capable of providing faithful explanations for any graph
neural network without necessitating prior knowledge or internal details.
Specifically, we train the auto-encoder to generate explanatory substructures
by designing an appropriate training strategy. Due to the dimensionality reduction
of features in the latent space of the auto-encoder, it becomes easier to
extract causal features leading to the model's output, which can be easily
employed to generate explanations. To accomplish this, we introduce an
additional discriminator to capture the causality between latent causal
features and the model's output. By designing appropriate optimization
objectives, the well-trained discriminator can be employed to constrain the
encoder in generating enhanced causal features. Finally, these features are
mapped to substructures of the input graph through the decoder to serve as
explanations. Compared to existing methods, PAGE operates at the sample scale
rather than nodes or edges, eliminating the need for perturbation or encoding
processes as seen in previous methods. Experimental results on both
artificially synthesized and real-world datasets demonstrate that our approach
not only exhibits the highest faithfulness and accuracy but also significantly
outperforms baseline models in terms of efficiency. | cs.LG | [
"cs.LG",
"cs.AI"
] |
|||
Towards Graph Prompt Learning: A Survey and Beyond | http://arxiv.org/abs/2408.14520v1 | http://arxiv.org/abs/2408.14520v1 | http://arxiv.org/pdf/2408.14520v1 | 2024-08-26 | 2024-08-26 | [
"Qingqing Long",
"Yuchen Yan",
"Peiyan Zhang",
"Chen Fang",
"Wentao Cui",
"Zhiyuan Ning",
"Meng Xiao",
"Ning Cao",
"Xiao Luo",
"Lingjun Xu",
"Shiyue Jiang",
"Zheng Fang",
"Chong Chen",
"Xian-Sheng Hua",
"Yuanchun Zhou"
] | [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
] | Large-scale "pre-train and prompt learning" paradigms have demonstrated
remarkable adaptability, enabling broad applications across diverse domains
such as question answering, image recognition, and multimodal retrieval. This
approach fully leverages the potential of large-scale pre-trained models,
reducing downstream data requirements and computational costs while enhancing
model applicability across various tasks. Graphs, as versatile data structures
that capture relationships between entities, play pivotal roles in fields such
as social network analysis, recommender systems, and biological graphs. Despite
the success of pre-train and prompt learning paradigms in Natural Language
Processing (NLP) and Computer Vision (CV), their application in graph domains
remains nascent. In graph-structured data, not only do the node and edge
features often have disparate distributions, but the topological structures
also differ significantly. This diversity in graph data can lead to
incompatible patterns or gaps between pre-training and fine-tuning on
downstream graphs. We aim to bridge this gap by summarizing methods for
alleviating these disparities. This includes exploring prompt design
methodologies, comparing related techniques, assessing application scenarios
and datasets, and identifying unresolved problems and challenges. This survey
categorizes over 100 relevant works in this field, summarizing general design
principles and the latest applications, including text-attributed graphs,
molecules, proteins, and recommendation systems. Through this extensive review,
we provide a foundational understanding of graph prompt learning, aiming to
impact not only the graph mining community but also the broader Artificial
General Intelligence (AGI) community. | 19 pages, 2 figures | cs.LG | [
"cs.LG",
"cs.AI",
"cs.SI"
] |
||
MLR-Copilot: Autonomous Machine Learning Research based on Large
Language Models Agents | http://arxiv.org/abs/2408.14033v1 | http://arxiv.org/abs/2408.14033v1 | http://arxiv.org/pdf/2408.14033v1 | 2024-08-26 | 2024-08-26 | [
"Ruochen Li",
"Teerth Patel",
"Qingyun Wang",
"Xinya Du"
] | [
"",
"",
"",
""
] | Machine learning research, crucial for technological advancements and
innovation, often faces significant challenges due to its inherent complexity,
slow pace of experimentation, and the necessity for specialized expertise.
Motivated by this, we present a new systematic framework, autonomous Machine
Learning Research with large language models (MLR-Copilot), designed to enhance
machine learning research productivity through the automatic generation and
implementation of research ideas using Large Language Model (LLM) agents. The
framework consists of three phases: research idea generation, experiment
implementation, and implementation execution. First, existing research papers
are used to generate hypotheses and experimental plans via IdeaAgent powered by
LLMs. Next, the implementation generation phase translates these plans into
executables with ExperimentAgent. This phase leverages retrieved prototype code
and optionally retrieves candidate models and data. Finally, the execution
phase, also managed by ExperimentAgent, involves running experiments with
mechanisms for human feedback and iterative debugging to enhance the likelihood
of achieving executable research outcomes. We evaluate our framework on five
machine learning research tasks and the experimental results show the
framework's potential to facilitate the research progress and innovations. | cs.AI | [
"cs.AI",
"cs.CL",
"cs.LG"
] |
|||
SurGen: Text-Guided Diffusion Model for Surgical Video Generation | http://arxiv.org/abs/2408.14028v2 | http://arxiv.org/abs/2408.14028v2 | http://arxiv.org/pdf/2408.14028v2 | 2024-08-26 | 2024-08-28 | [
"Joseph Cho",
"Samuel Schmidgall",
"Cyril Zakka",
"Mrudang Mathur",
"Rohan Shad",
"William Hiesinger"
] | [
"",
"",
"",
"",
"",
""
] | Diffusion-based video generation models have made significant strides,
producing outputs with improved visual fidelity, temporal coherence, and user
control. These advancements hold great promise for improving surgical education
by enabling more realistic, diverse, and interactive simulation environments.
In this study, we introduce SurGen, a text-guided diffusion model tailored for
surgical video synthesis, producing the highest resolution and longest duration
videos among existing surgical video generation models. We validate the visual
and temporal quality of the outputs using standard image and video generation
metrics. Additionally, we assess their alignment to the corresponding text
prompts through a deep learning classifier trained on surgical data. Our
results demonstrate the potential of diffusion models to serve as valuable
educational tools for surgical trainees. | cs.CV | [
"cs.CV",
"cs.AI",
"cs.CL",
"cs.LG"
] |
|||
Video-CCAM: Enhancing Video-Language Understanding with Causal
Cross-Attention Masks for Short and Long Videos | http://arxiv.org/abs/2408.14023v1 | http://arxiv.org/abs/2408.14023v1 | http://arxiv.org/pdf/2408.14023v1 | 2024-08-26 | 2024-08-26 | [
"Jiajun Fei",
"Dian Li",
"Zhidong Deng",
"Zekun Wang",
"Gang Liu",
"Hui Wang"
] | [
"",
"",
"",
"",
"",
""
] | Multi-modal large language models (MLLMs) have demonstrated considerable
potential across various downstream tasks that require cross-domain knowledge.
MLLMs capable of processing videos, known as Video-MLLMs, have attracted broad
interest in video-language understanding. However, videos, especially long
videos, contain more visual tokens than images, making them difficult for LLMs
to process. Existing works either downsample visual features or extend the LLM
context size, risking the loss of high-resolution information or slowing down
inference speed. To address these limitations, we apply cross-attention layers
in the intermediate projector between the visual encoder and the large language
model (LLM). As the naive cross-attention mechanism is insensitive to temporal
order, we further introduce causal cross-attention masks (CCAMs) within the
cross-attention layers. This Video-MLLM, named Video-CCAM, is trained in a
straightforward two-stage fashion: feature alignment and visual instruction
tuning. We develop several Video-CCAM models based on LLMs of different sizes
(4B, 9B, and 14B). Video-CCAM proves to be a robust Video-MLLM and shows
outstanding performance from short videos to long ones. On standard video
benchmarks such as MVBench and VideoChatGPT-QA, Video-CCAM achieves strong
results (1st/2nd/3rd in MVBench and TGIF-QA, 2nd/3rd/4th in MSVD-QA,
MSRVTT-QA, and ActivityNet-QA). In benchmarks encompassing long videos,
Video-CCAM models can be directly adapted to long video understanding and still
achieve exceptional scores despite being trained solely with images and
16-frame videos. Using 96 frames (6$\times$ the training number of frames),
Video-CCAM models rank 1st/2nd/3rd in VideoVista and 1st/2nd/4th in MLVU among
all open-source Video-MLLMs, respectively. The code is publicly available in
\url{https://github.com/QQ-MM/Video-CCAM}. | 10 pages, 5 figures | cs.CV | [
"cs.CV",
"cs.AI"
] |
||
Pixel-Aligned Multi-View Generation with Depth Guided Decoder | http://arxiv.org/abs/2408.14016v1 | http://arxiv.org/abs/2408.14016v1 | http://arxiv.org/pdf/2408.14016v1 | 2024-08-26 | 2024-08-26 | [
"Zhenggang Tang",
"Peiye Zhuang",
"Chaoyang Wang",
"Aliaksandr Siarohin",
"Yash Kant",
"Alexander Schwing",
"Sergey Tulyakov",
"Hsin-Ying Lee"
] | [
"",
"",
"",
"",
"",
"",
"",
""
] | The task of image-to-multi-view generation refers to generating novel views
of an instance from a single image. Recent methods achieve this by extending
text-to-image latent diffusion models to a multi-view version, which contains a
VAE image encoder and a U-Net diffusion model. Specifically, these generation
methods usually fix the VAE and finetune only the U-Net. However, the significant
downscaling of the latent vectors computed from the input images and
independent decoding leads to notable pixel-level misalignment across multiple
views. To address this, we propose a novel method for pixel-level
image-to-multi-view generation. Unlike prior work, we incorporate attention
layers across multi-view images in the VAE decoder of a latent video diffusion
model. Specifically, we introduce a depth-truncated epipolar attention,
enabling the model to focus on spatially adjacent regions while remaining
memory efficient. Applying depth-truncated attention is challenging during
inference, as ground-truth depth is usually difficult to obtain and pre-trained
depth estimation models struggle to provide accurate depth. Thus, to enhance the
generalization to inaccurate depth when ground truth depth is missing, we
perturb depth inputs during training. During inference, we employ a rapid
multi-view to 3D reconstruction approach, NeuS, to obtain coarse depth for the
depth-truncated epipolar attention. Our model enables better pixel alignment
across multi-view images. Moreover, we demonstrate the efficacy of our approach
in improving downstream multi-view to 3D reconstruction tasks. | cs.CV | [
"cs.CV",
"cs.AI"
] |
|||
Optimizing TD3 for 7-DOF Robotic Arm Grasping: Overcoming Suboptimality
with Exploration-Enhanced Contrastive Learning | http://arxiv.org/abs/2408.14009v1 | http://arxiv.org/abs/2408.14009v1 | http://arxiv.org/pdf/2408.14009v1 | 2024-08-26 | 2024-08-26 | [
"Wen-Han Hsieh",
"Jen-Yuan Chang"
] | [
"",
""
] | In actor-critic-based reinforcement learning algorithms such as Twin Delayed
Deep Deterministic policy gradient (TD3), insufficient exploration of the
spatial space can result in suboptimal policies when controlling 7-DOF robotic
arms. To address this issue, we propose a novel Exploration-Enhanced
Contrastive Learning (EECL) module that improves exploration by providing
additional rewards for encountering novel states. Our module stores previously
explored states in a buffer and identifies new states by comparing them with
historical data using Euclidean distance within a K-dimensional tree (KDTree)
framework. When the agent explores new states, exploration rewards are
assigned. These rewards are then integrated into the TD3 algorithm, ensuring
that the Q-learning process incorporates these signals, promoting more
effective strategy optimization. We evaluate our method on the robosuite panda
lift task, demonstrating that it significantly outperforms the baseline TD3 in
terms of both efficiency and convergence speed in the tested environment. | 4 pages, 2 figures, IEEE-ICKII-2024 | cs.RO | [
"cs.RO",
"cs.AI"
] |
||
LMM-VQA: Advancing Video Quality Assessment with Large Multimodal Models | http://arxiv.org/abs/2408.14008v1 | http://arxiv.org/abs/2408.14008v1 | http://arxiv.org/pdf/2408.14008v1 | 2024-08-26 | 2024-08-26 | [
"Qihang Ge",
"Wei Sun",
"Yu Zhang",
"Yunhao Li",
"Zhongpeng Ji",
"Fengyu Sun",
"Shangling Jui",
"Xiongkuo Min",
"Guangtao Zhai"
] | [
"",
"",
"",
"",
"",
"",
"",
"",
""
] | The explosive growth of videos on streaming media platforms has underscored
the urgent need for effective video quality assessment (VQA) algorithms to
monitor and perceptually optimize the quality of streaming videos. However, VQA
remains an extremely challenging task due to the diverse video content and the
complex spatial and temporal distortions, thus necessitating more advanced
methods to address these issues. Nowadays, large multimodal models (LMMs), such
as GPT-4V, have exhibited strong capabilities for various visual understanding
tasks, motivating us to leverage the powerful multimodal representation ability
of LMMs to solve the VQA task. Therefore, we propose the first Large
Multi-Modal Video Quality Assessment (LMM-VQA) model, which introduces a novel
spatiotemporal visual modeling strategy for quality-aware feature extraction.
Specifically, we first reformulate the quality regression problem into a
question-answering (Q&A) task and construct Q&A prompts for VQA instruction
tuning. Then, we design a spatiotemporal vision encoder to extract spatial and
temporal features to represent the quality characteristics of videos, which are
subsequently mapped into the language space by the spatiotemporal projector for
modality alignment. Finally, the aligned visual tokens and the quality-inquired
text tokens are aggregated as inputs for the large language model (LLM) to
generate the quality score and level. Extensive experiments demonstrate that
LMM-VQA achieves state-of-the-art performance across five VQA benchmarks,
exhibiting an average improvement of $5\%$ in generalization ability over
existing methods. Furthermore, due to the advanced design of the spatiotemporal
encoder and projector, LMM-VQA also performs exceptionally well on general
video understanding tasks, further validating its effectiveness. Our code will
be released at https://github.com/Sueqk/LMM-VQA. | cs.CV | [
"cs.CV",
"cs.AI"
] |
|||
Dual-CBA: Improving Online Continual Learning via Dual Continual Bias
Adaptors from a Bi-level Optimization Perspective | http://arxiv.org/abs/2408.13991v1 | http://arxiv.org/abs/2408.13991v1 | http://arxiv.org/pdf/2408.13991v1 | 2024-08-26 | 2024-08-26 | [
"Quanziang Wang",
"Renzhen Wang",
"Yichen Wu",
"Xixi Jia",
"Minghao Zhou",
"Deyu Meng"
] | [
"",
"",
"",
"",
"",
""
] | In online continual learning (CL), models trained on changing distributions
easily forget previously learned knowledge and bias toward newly received
tasks. To address this issue, we present Continual Bias Adaptor (CBA), a
bi-level framework that augments the classification network to adapt to
catastrophic distribution shifts during training, enabling the network to
achieve a stable consolidation of all seen tasks. However, the CBA module
adjusts distribution shifts in a class-specific manner, exacerbating the
stability gap issue and, to some extent, failing to meet the need for continual
testing in online CL. To mitigate this challenge, we further propose a novel
class-agnostic CBA module that separately aggregates the posterior
probabilities of classes from new and old tasks, and applies a stable
adjustment to the resulting posterior probabilities. We combine the two kinds
of CBA modules into a unified Dual-CBA module, which is thus capable of
adapting to catastrophic distribution shifts and simultaneously meets the
real-time testing requirements of online CL. Besides, we propose Incremental
Batch Normalization (IBN), a tailored BN module to re-estimate its population
statistics for alleviating the feature bias arising from the inner loop
optimization problem of our bi-level framework. To validate the effectiveness
of the proposed method, we theoretically provide some insights into how it
mitigates catastrophic distribution shifts, and empirically demonstrate its
superiority through extensive experiments based on four rehearsal-based
baselines and three public continual learning benchmarks. | cs.LG | [
"cs.LG",
"cs.AI"
] |
|||
Automatic Medical Report Generation: Methods and Applications | http://arxiv.org/abs/2408.13988v1 | http://arxiv.org/abs/2408.13988v1 | http://arxiv.org/pdf/2408.13988v1 | 2024-08-26 | 2024-08-26 | [
"Li Guo",
"Anas M. Tahir",
"Dong Zhang",
"Z. Jane Wang",
"Rabab K. Ward"
] | [
"",
"",
"",
"",
""
] | The increasing demand for medical imaging has surpassed the capacity of
available radiologists, leading to diagnostic delays and potential
misdiagnoses. Artificial intelligence (AI) techniques, particularly in
automatic medical report generation (AMRG), offer a promising solution to this
dilemma. This review comprehensively examines AMRG methods from 2021 to 2024.
It (i) presents solutions to primary challenges in this field, (ii) explores
AMRG applications across various imaging modalities, (iii) introduces publicly
available datasets, (iv) outlines evaluation metrics, (v) identifies techniques
that significantly enhance model performance, and (vi) discusses unresolved
issues and potential future research directions. This paper aims to provide a
comprehensive understanding of the existing literature and inspire valuable
future research. | 42 pages and 9 figures | cs.CV | [
"cs.CV",
"cs.AI"
] |
||
Focused Large Language Models are Stable Many-Shot Learners | http://arxiv.org/abs/2408.13987v1 | http://arxiv.org/abs/2408.13987v1 | http://arxiv.org/pdf/2408.13987v1 | 2024-08-26 | 2024-08-26 | [
"Peiwen Yuan",
"Shaoxiong Feng",
"Yiwei Li",
"Xinglin Wang",
"Yueqi Zhang",
"Chuyi Tan",
"Boyuan Pan",
"Heda Wang",
"Yao Hu",
"Kan Li"
] | [
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
] | In-Context Learning (ICL) enables large language models (LLMs) to achieve
rapid task adaptation by learning from demonstrations. With the increase in
available context length of LLMs, recent experiments have shown that the
performance of ICL does not necessarily scale well in many-shot (demonstration)
settings. We theoretically and experimentally confirm that the reason lies in
more demonstrations dispersing the model's attention away from the query, hindering
its understanding of key content. Inspired by how humans learn from examples,
we propose a training-free method FocusICL, which conducts triviality filtering
to prevent attention from being diverted by unimportant content at the token
level, and applies hierarchical attention to further ensure sufficient attention
toward the current query at the demonstration level. We also design an efficient
hyperparameter searching strategy for FocusICL based on model perplexity of
demonstrations. Comprehensive experiments validate that FocusICL achieves an
average performance improvement of 5.2% over vanilla ICL and scales well with
many-shot demonstrations. | 15 pages | cs.CL | [
"cs.CL",
"cs.AI"
] |
||
AgentMove: Predicting Human Mobility Anywhere Using Large Language Model
based Agentic Framework | http://arxiv.org/abs/2408.13986v1 | http://arxiv.org/abs/2408.13986v1 | http://arxiv.org/pdf/2408.13986v1 | 2024-08-26 | 2024-08-26 | [
"Jie Feng",
"Yuwei Du",
"Jie Zhao",
"Yong Li"
] | [
"",
"",
"",
""
] | Human mobility prediction plays a crucial role in various real-world
applications. Although deep learning based models have shown promising results
over the past decade, their reliance on extensive private mobility data for
training and their inability to perform zero-shot predictions, have hindered
further advancements. Recently, attempts have been made to apply large language
models (LLMs) to the mobility prediction task. However, their performance has been
constrained by the absence of a systematically designed workflow. They directly
generate the final output using LLMs, which limits the potential of LLMs to
uncover complex mobility patterns and underestimates their extensive reserve of
global geospatial knowledge. In this paper, we introduce AgentMove, a
systematic agentic prediction framework to achieve generalized mobility
prediction for any cities worldwide. In AgentMove, we first decompose the
mobility prediction task into three sub-tasks and then design corresponding
modules to complete these subtasks: a spatial-temporal memory for individual
mobility pattern mining, a world knowledge generator for modeling the effects of
urban structure, and a collective knowledge extractor for capturing the shared
patterns among the population. Finally, we combine the results of the three
modules and conduct a reasoning step to generate the final predictions.
Extensive experiments on mobility data from two sources in 12 cities
demonstrate that AgentMove outperforms the best baseline by more than 8% on
various metrics, delivers robust predictions with various LLMs as the base model,
and exhibits less geographical bias across cities. Code and data can be found at
https://github.com/tsinghua-fib-lab/AgentMove. | 13 pages | cs.LG | [
"cs.LG",
"cs.AI",
"cs.CL",
"cs.IR"
] |
||
Nemesis: Normalizing the Soft-prompt Vectors of Vision-Language Models | http://arxiv.org/abs/2408.13979v1 | http://arxiv.org/abs/2408.13979v1 | http://arxiv.org/pdf/2408.13979v1 | 2024-08-26 | 2024-08-26 | [
"Shuai Fu",
"Xiequn Wang",
"Qiushi Huang",
"Yu Zhang"
] | [
"",
"",
"",
""
] | With the prevalence of large-scale pretrained vision-language models (VLMs),
such as CLIP, soft-prompt tuning has become a popular method for adapting these
models to various downstream tasks. However, few works delve into the inherent
properties of learnable soft-prompt vectors, specifically the impact of their
norms on the performance of VLMs. This motivates us to pose an unexplored
research question: ``Do we need to normalize the soft prompts in VLMs?'' To
fill this research gap, we first uncover a phenomenon, called the
\textbf{Low-Norm Effect} by performing extensive corruption experiments,
suggesting that reducing the norms of certain learned prompts occasionally
enhances the performance of VLMs, while increasing them often degrades it. To
harness this effect, we propose a novel method named \textbf{N}ormalizing
th\textbf{e} soft-pro\textbf{m}pt v\textbf{e}ctors of vi\textbf{si}on-language
model\textbf{s} (\textbf{Nemesis}) to normalize soft-prompt vectors in VLMs. To
the best of our knowledge, our work is the first to systematically investigate
the role of the norms of soft-prompt vectors in VLMs, offering valuable insights for
future research in soft-prompt tuning. The code is available at
\texttt{\href{https://github.com/ShyFoo/Nemesis}{https://github.com/ShyFoo/Nemesis}}. | Accepted at ICLR 2024 (Spotlight) | cs.CV | [
"cs.CV",
"cs.AI",
"cs.CL",
"cs.LG"
] |
||
Time Series Analysis for Education: Methods, Applications, and Future
Directions | http://arxiv.org/abs/2408.13960v2 | http://arxiv.org/abs/2408.13960v2 | http://arxiv.org/pdf/2408.13960v2 | 2024-08-25 | 2024-08-27 | [
"Shengzhong Mao",
"Chaoli Zhang",
"Yichi Song",
"Jindong Wang",
"Xiao-Jun Zeng",
"Zenglin Xu",
"Qingsong Wen"
] | [
"",
"",
"",
"",
"",
"",
""
] | Recent advancements in the collection and analysis of sequential educational
data have brought time series analysis to a pivotal position in educational
research, highlighting its essential role in facilitating data-driven
decision-making. However, there is a lack of comprehensive summaries that
consolidate these advancements. To the best of our knowledge, this paper is the
first to provide a comprehensive review of time series analysis techniques
specifically within the educational context. We begin by exploring the
landscape of educational data analytics, categorizing various data sources and
types relevant to education. We then review four prominent time series
methods (forecasting, classification, clustering, and anomaly
detection), illustrating their specific application points in educational
settings. Subsequently, we present a range of educational scenarios and
applications, focusing on how these methods are employed to address diverse
educational tasks, which highlights the practical integration of multiple time
series methods to solve complex educational problems. Finally, we conclude with
a discussion on future directions, including personalized learning analytics,
multimodal data fusion, and the role of large language models (LLMs) in
educational time series. The contributions of this paper include a detailed
taxonomy of educational data, a synthesis of time series techniques with
specific educational applications, and a forward-looking perspective on
emerging trends and future research opportunities in educational analysis. The
related papers and resources are available and regularly updated at the project
page. | 24 pages, 3 figures, 6 tables, project page: see
https://github.com/ai-for-edu/time-series-analysis-for-education | cs.LG | [
"cs.LG",
"cs.AI",
"cs.CY"
] |
||
Bridging the Gap between Real-world and Synthetic Images for Testing
Autonomous Driving Systems | http://arxiv.org/abs/2408.13950v1 | http://arxiv.org/abs/2408.13950v1 | http://arxiv.org/pdf/2408.13950v1 | 2024-08-25 | 2024-08-25 | [
"Mohammad Hossein Amini",
"Shiva Nejati"
] | [
"",
""
] | Deep Neural Networks (DNNs) for Autonomous Driving Systems (ADS) are
typically trained on real-world images and tested using synthetic simulator
images. This approach results in training and test datasets with dissimilar
distributions, which can potentially lead to erroneously decreased test
accuracy. To address this issue, the literature suggests applying
domain-to-domain translators to test datasets to bring them closer to the
training datasets. However, translating images used for testing may
unpredictably affect the reliability, effectiveness and efficiency of the
testing process. Hence, this paper investigates the following questions in the
context of ADS: Could translators reduce the effectiveness of images used for
ADS-DNN testing and their ability to reveal faults in ADS-DNNs? Can translators
result in excessive time overhead during simulation-based testing? To address
these questions, we consider three domain-to-domain translators: CycleGAN and
neural style transfer, from the literature, and SAEVAE, our proposed
translator. Our results for two critical ADS tasks -- lane keeping and object
detection -- indicate that translators significantly narrow the gap in ADS test
accuracy caused by distribution dissimilarities between training and test data,
with SAEVAE outperforming the other two translators. We show that, based on the
recent diversity, coverage, and fault-revealing ability metrics for testing
deep-learning systems, translators do not compromise the diversity and the
coverage of test data, nor do they lead to revealing fewer faults in ADS-DNNs.
Further, among the translators considered, SAEVAE incurs a negligible overhead
in simulation time and can be efficiently integrated into simulation-based
testing. Finally, we show that translators increase the correlation between
offline and simulation-based testing results, which can help reduce the cost of
simulation-based testing. | Accepted for publication by the International Conference on Automated
Software Engineering (ASE 2024) | cs.SE | [
"cs.SE",
"cs.AI"
] |
||
Learning to Move Like Professional Counter-Strike Players | http://arxiv.org/abs/2408.13934v1 | http://arxiv.org/abs/2408.13934v1 | http://arxiv.org/pdf/2408.13934v1 | 2024-08-25 | 2024-08-25 | [
"David Durst",
"Feng Xie",
"Vishnu Sarukkai",
"Brennan Shacklett",
"Iuri Frosio",
"Chen Tessler",
"Joohwan Kim",
"Carly Taylor",
"Gilbert Bernstein",
"Sanjiban Choudhury",
"Pat Hanrahan",
"Kayvon Fatahalian"
] | [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
] | In multiplayer, first-person shooter games like Counter-Strike: Global
Offensive (CS:GO), coordinated movement is a critical component of high-level
strategic play. However, the complexity of team coordination and the variety of
conditions present in popular game maps make it impractical to author
hand-crafted movement policies for every scenario. We show that it is possible
to take a data-driven approach to creating human-like movement controllers for
CS:GO. We curate a team movement dataset comprising 123 hours of professional
game play traces, and use this dataset to train a transformer-based movement
model that generates human-like team movement for all players in a "Retakes"
round of the game. Importantly, the movement prediction model is efficient.
Performing inference for all players takes less than 0.5 ms per game step
(amortized cost) on a single CPU core, making it plausible for use in
commercial games today. Human evaluators assess that our model behaves more
like humans than both commercially-available bots and procedural movement
controllers scripted by experts (16% to 59% higher by TrueSkill rating of
"human-like"). Using experiments involving in-game bot vs. bot self-play, we
demonstrate that our model performs simple forms of teamwork, makes fewer
common movement mistakes, and yields movement distributions, player lifetimes,
and kill locations similar to those observed in professional CS:GO match play. | The project website is at https://davidbdurst.com/mlmove/ | ACM SIGGRAPH / Eurographics Symposium on Computer Animation (SCA),
August 21-23, 2024, Montreal, Canada | cs.LG | [
"cs.LG",
"cs.AI",
"cs.GR"
] |