arxiv_id (string) | published (string) | titles (string) | authors (sequence) | abstract (string) | categories (sequence) | selected (bool)
---|---|---|---|---|---|---
2305.04534 | 2023-05-08T08:10:24Z | Smart Home Device Detection Algorithm Based on FSA-YOLOv5 | [
"Jiafeng Zhang",
"Xuejing Pu"
] | Smart home device detection is a critical aspect of human-computer
interaction. However, detecting targets in indoor environments can be
challenging due to interference from ambient light and background noise. In
this paper, we present a new model called FSA-YOLOv5, which addresses the
limitations of traditional convolutional neural networks by introducing the
Transformer to learn long-range dependencies. Additionally, we propose a new
attention module, the full-separation attention module, which integrates
spatial and channel dimensional information to learn contextual information. To
improve tiny device detection, we include a prediction head for the indoor
smart home device detection task. We also release the Southeast University
Indoor Smart Speaker Dataset (SUSSD) to supplement existing data samples.
Through a series of experiments on SUSSD, we demonstrate that our method
outperforms other methods, highlighting the effectiveness of FSA-YOLOv5. | [
"cs.CV"
] | false |
2305.04536 | 2023-05-08T08:14:46Z | LMPT: Prompt Tuning with Class-Specific Embedding Loss for Long-tailed
Multi-Label Visual Recognition | [
"Peng Xia",
"Di Xu",
"Lie Ju",
"Ming Hu",
"Jun Chen",
"Zongyuan Ge"
] | Long-tailed multi-label visual recognition (LTML) is a highly challenging
task due to label co-occurrence and imbalanced data distribution. In this
work, we propose a unified framework for LTML, namely
prompt tuning with class-specific embedding loss (LMPT), capturing the semantic
feature interactions between categories by combining text and image modality
data and improving the performance synchronously on both head and tail classes.
Specifically, LMPT introduces the embedding loss function with class-aware soft
margin and re-weighting to learn class-specific contexts with the benefit of
textual descriptions (captions), which could help establish semantic
relationships between classes, especially between the head and tail classes.
Furthermore, taking into account the class imbalance, the distribution-balanced
loss is adopted as the classification loss function to further improve the
performance on the tail classes without compromising head classes. Extensive
experiments are conducted on VOC-LT and COCO-LT datasets, which demonstrate
that the proposed method significantly surpasses the previous state-of-the-art
methods and zero-shot CLIP in LTML. Our codes are fully available at
\url{https://github.com/richard-peng-xia/LMPT}. | [
"cs.CV"
] | false |
2305.04541 | 2023-05-08T08:29:21Z | High Quality Large-Scale 3-D Urban Mapping with Multi-Master TomoSAR | [
"Yilei Shi",
"Richard Bamler",
"Yuanyuan Wang",
"Xiao Xiang Zhu"
] | Multi-baseline interferometric synthetic aperture radar (InSAR) techniques
are effective approaches for retrieving the 3-D information of urban areas. In
order to obtain a plausible reconstruction, it is necessary to use large-stack
interferograms. Hence, these methods are commonly not appropriate for
large-scale 3-D urban mapping using TanDEM-X data where only a few acquisitions
are available on average for each city. This work proposes a new SAR
tomographic processing framework to work with those extremely small stacks,
which integrates the non-local filtering into SAR tomography inversion. The
applicability of the algorithm is demonstrated using a TanDEM-X multi-baseline
stack with 5 bistatic interferograms over the whole city of Munich, Germany.
Systematic comparison of our result with airborne LiDAR data shows that the
relative height accuracy of two thirds of the buildings is within two meters,
which outperforms the TanDEM-X raw DEM. The promising performance of the proposed
algorithm paves the way toward high-quality, large-scale 3-D urban
mapping. | [
"cs.CV"
] | false |
2305.04603 | 2023-05-08T10:25:09Z | Privacy-Preserving Representations are not Enough -- Recovering Scene
Content from Camera Poses | [
"Kunal Chelani",
"Torsten Sattler",
"Fredrik Kahl",
"Zuzana Kukelova"
] | Visual localization is the task of estimating the camera pose from which a
given image was taken and is central to several 3D computer vision
applications. With the rapid growth in the popularity of AR/VR/MR devices and
cloud-based applications, privacy issues are becoming a very important aspect
of the localization process. Existing work on privacy-preserving localization
aims to defend against an attacker who has access to a cloud-based service. In
this paper, we show that an attacker can learn about details of a scene without
any access by simply querying a localization service. The attack is based on
the observation that modern visual localization algorithms are robust to
variations in appearance and geometry. While this is in general a desired
property, it also leads to algorithms localizing objects that are similar
enough to those present in a scene. An attacker can thus query a server with a
large enough set of images of objects, \eg, obtained from the Internet, and
some of them will be localized. The attacker can thus learn about object
placements from the camera poses returned by the service (which is the minimal
information returned by such a service). In this paper, we develop a
proof-of-concept version of this attack and demonstrate its practical
feasibility. The attack does not place any requirements on the localization
algorithm used, and thus also applies to privacy-preserving representations.
Current work on privacy-preserving representations alone is thus insufficient. | [
"cs.CV"
] | false |
2305.04651 | 2023-05-08T12:08:12Z | ReGeneration Learning of Diffusion Models with Rich Prompts for
Zero-Shot Image Translation | [
"Yupei Lin",
"Sen Zhang",
"Xiaojun Yang",
"Xiao Wang",
"Yukai Shi"
] | Large-scale text-to-image models have demonstrated amazing ability to
synthesize diverse and high-fidelity images. However, these models often
suffer from several limitations. Firstly, they require the user to provide
precise and contextually relevant descriptions for the desired image
modifications. Secondly, current models can impose significant changes to the
original image content during the editing process. In this paper, we explore
ReGeneration learning in an image-to-image Diffusion model (ReDiffuser), which
preserves the content of the original image without human prompting and the
requisite editing direction is automatically discovered within the text
embedding space. To ensure consistent preservation of the shape during image
editing, we propose cross-attention guidance based on regeneration learning.
This novel approach allows for enhanced expression of the target domain
features while preserving the original shape of the image. In addition, we
introduce a cooperative update strategy, which allows for efficient
preservation of the original shape of an image, thereby improving the quality
and consistency of shape preservation throughout the editing process. Our
proposed method leverages an existing pre-trained text-image diffusion model
without any additional training. Extensive experiments show that the proposed
method outperforms existing work in both real and synthetic image editing. | [
"cs.CV"
] | false |
2305.04691 | 2023-05-08T13:20:55Z | Self-supervised Learning for Pre-Training 3D Point Clouds: A Survey | [
"Ben Fei",
"Weidong Yang",
"Liwen Liu",
"Tianyue Luo",
"Rui Zhang",
"Yixuan Li",
"Ying He"
] | Point cloud data has been extensively studied due to its compact form and
flexibility in representing complex 3D structures. The ability of point cloud
data to accurately capture and represent intricate 3D geometry makes it an
ideal choice for a wide range of applications, including computer vision,
robotics, and autonomous driving, all of which require an understanding of the
underlying spatial structures. Given the challenges associated with annotating
large-scale point clouds, self-supervised point cloud representation learning
has attracted increasing attention in recent years. This approach aims to learn
generic and useful point cloud representations from unlabeled data,
circumventing the need for extensive manual annotations. In this paper, we
present a comprehensive survey of self-supervised point cloud representation
learning using deep neural networks (DNNs). We begin by presenting the motivation and general trends
in recent research. We then briefly introduce the commonly used datasets and
evaluation metrics. Following that, we delve into an extensive exploration of
existing self-supervised point cloud representation learning methods. Finally,
we share our thoughts on some of the challenges and
potential issues that future research in self-supervised learning for
pre-training 3D point clouds may encounter. | [
"cs.CV"
] | false |
2305.04719 | 2023-05-08T14:10:10Z | Learning to Generate Poetic Chinese Landscape Painting with Calligraphy | [
"Shaozu Yuan",
"Aijun Dai",
"Zhiling Yan",
"Ruixue Liu",
"Meng Chen",
"Baoyang Chen",
"Zhijie Qiu",
"Xiaodong He"
] | In this paper, we present a novel system (denoted as Polaca) to generate
poetic Chinese landscape painting with calligraphy. Unlike previous single
image-to-image painting generation, Polaca takes the classic poetry as input
and outputs the artistic landscape painting image with the corresponding
calligraphy. It is equipped with three different modules to complete the whole
piece of landscape painting artwork: the first one is a text-to-image module to
generate landscape painting image, the second one is an image-to-image module
to generate stylistic calligraphy image, and the third one is an image fusion
module to fuse the two images into a whole piece of aesthetic artwork. | [
"cs.CV"
] | false |
2305.04722 | 2023-05-08T14:12:25Z | Understanding Gaussian Attention Bias of Vision Transformers Using
Effective Receptive Fields | [
"Bum Jun Kim",
"Hyeyeon Choi",
"Hyeonah Jang",
"Sang Woo Kim"
] | Vision transformers (ViTs) that model an image as a sequence of partitioned
patches have shown notable performance in diverse vision tasks. Because
partitioning into patches eliminates the image structure, ViTs utilize an
explicit component called positional embedding to reflect the order of patches.
However, we claim that the use of positional embedding does not simply
guarantee the order-awareness of ViT. To support this claim, we analyze the
actual behavior of ViTs using an effective receptive field. We demonstrate that
during training, ViT acquires an understanding of patch order from the
positional embedding that is trained to be a specific pattern. Based on this
observation, we propose explicitly adding a Gaussian attention bias that guides
the positional embedding to have the corresponding pattern from the beginning
of training. We evaluated the influence of Gaussian attention bias on the
performance of ViTs in several image classification, object detection, and
semantic segmentation experiments. The results showed that the proposed method not
only facilitates ViTs to understand images but also boosts their performance on
various datasets, including ImageNet, COCO 2017, and ADE20K. | [
"cs.CV"
] | false |
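As a rough illustration of the mechanism this abstract describes, the sketch below adds a fixed 2D Gaussian bias, computed from inter-patch distances on the patch grid, to a ViT's attention logits before the softmax. The function names and the `sigma` value are illustrative assumptions, not the paper's exact formulation.

```python
import torch

def gaussian_attention_bias(grid_size: int, sigma: float = 2.0) -> torch.Tensor:
    """Additive bias[i, j] = -||p_i - p_j||^2 / (2 * sigma^2), where p_i is
    patch i's (row, col) position on the grid_size x grid_size patch grid."""
    ys, xs = torch.meshgrid(torch.arange(grid_size), torch.arange(grid_size),
                            indexing="ij")
    coords = torch.stack([ys, xs], dim=-1).reshape(-1, 2).float()   # (N, 2)
    d2 = ((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1)   # (N, N)
    return -d2 / (2.0 * sigma ** 2)

def biased_attention(q, k, v, bias):
    # q, k, v: (batch, heads, N, dim); bias (N, N) broadcasts over batch/heads.
    logits = q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5
    return torch.softmax(logits + bias, dim=-1) @ v
```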
2305.04763 | 2023-05-08T15:11:28Z | Large-scale and Efficient Texture Mapping Algorithm via Loopy Belief
Propagation | [
"Xiao ling",
"Rongjun Qin"
] | Texture mapping as a fundamental task in 3D modeling has been well
established for well-acquired aerial assets under consistent illumination, yet
it remains a challenge when it is scaled to large datasets with images under
varying views and illuminations. A well-performed texture mapping algorithm
must be able to efficiently select views, fuse and map textures from these
views to mesh models, at the same time, achieve consistent radiometry over the
entire model. Existing approaches achieve efficiency either by limiting the
number of images to one view per face, or simplifying global inferences to only
achieve local color consistency. In this paper, we break this tie by proposing
a novel and efficient texture mapping framework that allows the use of multiple
views of texture per face, at the same time to achieve global color
consistency. The proposed method leverages a loopy belief propagation algorithm
to perform efficient, global-level probabilistic inference to rank
candidate views per face, which enables face-level multi-view texture fusion
and blending. The texture fusion algorithm, being non-parametric, brings
another advantage over typical parametric post color correction methods, due to
its improved robustness to non-linear illumination differences. The experiments
on three different types of datasets (i.e. satellite dataset, unmanned-aerial
vehicle dataset and close-range dataset) show that the proposed method has
produced visually pleasant and texturally consistent results in all scenarios,
with the added advantage of consuming less running time than
state-of-the-art methods, especially for large-scale datasets such as
satellite-derived models. | [
"cs.CV"
] | false |
2305.04766 | 2023-05-08T15:15:37Z | OSTA: One-shot Task-adaptive Channel Selection for Semantic Segmentation
of Multichannel Images | [
"Yuanzhi Cai",
"Jagannath Aryal",
"Yuan Fang",
"Hong Huang",
"Lei Fan"
] | Semantic segmentation of multichannel images is a fundamental task for many
applications. Selecting an appropriate channel combination from the original
multichannel image can improve the accuracy of semantic segmentation and reduce
the cost of data storage, processing and future acquisition. Existing channel
selection methods typically use a reasonable selection procedure to determine a
desirable channel combination, and then train a semantic segmentation network
using that combination. In this study, the concept of pruning from a supernet
is used for the first time to integrate the selection of channel combination
and the training of a semantic segmentation network. Based on this concept, a
One-Shot Task-Adaptive (OSTA) channel selection method is proposed for the
semantic segmentation of multichannel images. OSTA has three stages, namely the
supernet training stage, the pruning stage and the fine-tuning stage. The
outcomes of six groups of experiments (L7Irish3C, L7Irish2C, L8Biome3C,
L8Biome2C, RIT-18 and Semantic3D) demonstrated the effectiveness and efficiency
of OSTA. OSTA achieved the highest segmentation accuracies in all tests (62.49%
(mIoU), 75.40% (mIoU), 68.38% (mIoU), 87.63% (mIoU), 66.53% (mA) and 70.86%
(mIoU), respectively). It even exceeded the highest accuracies of exhaustive
tests (61.54% (mIoU), 74.91% (mIoU), 67.94% (mIoU), 87.32% (mIoU), 65.32% (mA)
and 70.27% (mIoU), respectively), where all possible channel combinations were
tested. All of this can be accomplished within a predictable and relatively
efficient timeframe, ranging from 101.71% to 298.1% of the time required to
train the segmentation network alone. In addition, there were interesting
findings that were deemed valuable for several fields. | [
"cs.CV"
] | false |
2305.04868 | 2023-05-08T17:16:38Z | SignBERT+: Hand-model-aware Self-supervised Pre-training for Sign
Language Understanding | [
"Hezhen Hu",
"Weichao Zhao",
"Wengang Zhou",
"Houqiang Li"
] | Hand gestures play a crucial role in the expression of sign language.
Current deep learning based methods for sign language understanding (SLU) are
prone to over-fitting due to insufficient sign data resources and suffer from
limited interpretability. In this paper, we propose the first self-supervised
pre-trainable SignBERT+ framework with model-aware hand prior incorporated. In
our framework, the hand pose is regarded as a visual token, which is derived
from an off-the-shelf detector. Each visual token is embedded with gesture
state and spatial-temporal position encoding. To take full advantage of current
sign data resources, we first perform self-supervised learning to model their
statistics. To this end, we design multi-level masked modeling strategies
(joint, frame and clip) to mimic common failure detection cases. Jointly with
these masked modeling strategies, we incorporate a model-aware hand prior to
better capture hierarchical context over the sequence. After the pre-training,
we carefully design simple yet effective prediction heads for downstream tasks.
To validate the effectiveness of our framework, we perform extensive
experiments on three main SLU tasks, involving isolated and continuous sign
language recognition (SLR), and sign language translation (SLT). Experimental
results demonstrate the effectiveness of our method, achieving new
state-of-the-art performance with a notable gain. | [
"cs.CV"
] | false |
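A minimal sketch of what multi-level masked modeling over pose tokens might look like; the array shapes, ratios, and zero-filling are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def mask_pose_sequence(poses, level="frame", ratio=0.15, clip_len=5, rng=None):
    """poses: (T, J, 2) sequence of 2D hand joints. Returns a masked copy and
    the boolean mask marking which joints the model must reconstruct."""
    rng = rng or np.random.default_rng()
    masked = poses.copy()
    T, J, _ = poses.shape
    mask = np.zeros((T, J), dtype=bool)
    if level == "joint":            # hide random joints across the sequence
        mask = rng.random((T, J)) < ratio
    elif level == "frame":          # hide all joints in random frames
        mask[rng.random(T) < ratio, :] = True
    elif level == "clip":           # hide a contiguous clip of frames
        start = rng.integers(0, max(T - clip_len, 1))
        mask[start:start + clip_len, :] = True
    masked[mask] = 0.0              # zero out masked joints as prediction targets
    return masked, mask
```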
2305.04925 | 2023-05-08T17:59:14Z | PillarNeXt: Rethinking Network Designs for 3D Object Detection in LiDAR
Point Clouds | [
"Jinyu Li",
"Chenxu Luo",
"Xiaodong Yang"
] | In order to deal with the sparse and unstructured raw point clouds, LiDAR
based 3D object detection research mostly focuses on designing dedicated local
point aggregators for fine-grained geometrical modeling. In this paper, we
revisit the local point aggregators from the perspective of allocating
computational resources. We find that the simplest pillar based models perform
surprisingly well considering both accuracy and latency. Additionally, we show
that minimal adaptations from the success of 2D object detection, such as
enlarging the receptive field, significantly boost the performance. Extensive
experiments reveal that our pillar based networks with modernized designs in
terms of architecture and training render the state-of-the-art performance on
the two popular benchmarks: Waymo Open Dataset and nuScenes. Our results
challenge the common intuition that the detailed geometry modeling is essential
to achieve high performance for 3D object detection. | [
"cs.CV"
] | false |
2305.04994 | 2023-05-08T19:03:21Z | Crop identification using deep learning on LUCAS crop cover photos | [
"Momchil Yordanov",
"Raphael d'Andrimont",
"Laura Martinez-Sanchez",
"Guido Lemoine",
"Dominique Fasbender",
"Marijn van der Velde"
] | Crop classification via deep learning on ground imagery can deliver timely
and accurate crop-specific information to various stakeholders. Dedicated
ground-based image acquisition exercises can help to collect data in data
scarce regions, improve control on the timing of collection, or help when study
areas are too small to monitor via satellite. Automatic labelling is essential when
collecting large volumes of data. One such data collection is the EU's Land Use
Cover Area frame Survey (LUCAS), and in particular, the recently published
LUCAS Cover photos database. The aim of this paper is to select and publish a
subset of LUCAS Cover photos for 12 mature major crops across the EU, to
deploy, benchmark, and identify the best configuration of Mobile-net for the
classification task, to showcase the possibility of using entropy-based metrics
for post-processing of results, and finally to show the applications and
limitations of the model in a practical and policy relevant context. In
particular, the usefulness of automatically identifying crops on geo-tagged
photos is illustrated in the context of the EU's Common Agricultural Policy.
The work has produced a dataset of 169,460 images of mature crops for the 12
classes, out of which 15,876 were manually selected as representing a clean
sample without any foreign objects or unfavorable conditions. The best
performing model achieved a Macro F1 (M-F1) of 0.75 on an imbalanced test
dataset of 8,642 photos. Using a metric from information theory, namely the
Equivalence Reference Probability, resulted in an increase of 6%. The
most unfavorable conditions for taking such images, across all crop classes,
were found to be too early or late in the season. The proposed methodology
shows the possibility for using minimal auxiliary data, outside the images
themselves, in order to achieve a M-F1 of 0.817 for labelling between 12 major
European crops. | [
"cs.CV"
] | false |
2305.05026 | 2023-05-08T20:09:19Z | Self-supervised Pre-training with Masked Shape Prediction for 3D Scene
Understanding | [
"Li Jiang",
"Zetong Yang",
"Shaoshuai Shi",
"Vladislav Golyanik",
"Dengxin Dai",
"Bernt Schiele"
] | Masked signal modeling has greatly advanced self-supervised pre-training for
language and 2D images. However, it is still not fully explored in 3D scene
understanding. Thus, this paper introduces Masked Shape Prediction (MSP), a new
framework to conduct masked signal modeling in 3D scenes. MSP uses the
essential 3D semantic cue, i.e., geometric shape, as the prediction target for
masked points. The context-enhanced shape target consisting of explicit shape
context and implicit deep shape feature is proposed to facilitate exploiting
contextual cues in shape prediction. Meanwhile, the pre-training architecture
in MSP is carefully designed to alleviate the masked shape leakage from point
coordinates. Experiments on multiple 3D understanding tasks on both indoor and
outdoor datasets demonstrate the effectiveness of MSP in learning good feature
representations to consistently boost downstream performance. | [
"cs.CV"
] | false |
2305.05057 | 2023-05-08T21:28:40Z | Crack Detection of Asphalt Concrete Using Combined Fracture Mechanics
and Digital Image Correlation | [
"Zehui Zhu",
"Imad L. Al-Qadi"
] | Cracking is a common failure mode in asphalt concrete (AC) pavements. Many
tests have been developed to characterize the fracture behavior of AC. Accurate
crack detection during testing is crucial to describe AC fracture behavior.
This paper proposes a framework to detect surface cracks in AC specimens using
two-dimensional digital image correlation (DIC). Two significant drawbacks in
previous research in this field were addressed. First, a multi-seed incremental
reliability-guided DIC was proposed to solve the decorrelation issue due to
large deformation and discontinuities. The method was validated using synthetic
deformed images. A correctly implemented analysis could accurately measure
strains up to 450\%, even with significant discontinuities (cracks) present in
the deformed image. Second, a robust method was developed to detect cracks
based on displacement fields. The proposed method uses critical crack tip
opening displacement ($\delta_c$) to define the onset of cleavage fracture. The
proposed method relies on well-developed fracture mechanics theory. The
proposed threshold $\delta_c$ has a physical meaning and can be easily
determined from DIC measurement. The method was validated using an extended
finite element model. The framework was implemented to measure the crack
propagation rate while conducting the Illinois-flexibility index test on two AC
mixes. The calculated rates could distinguish mixes based on their cracking
potential. The proposed framework could be applied to characterize AC cracking
phenomenon, evaluate its fracture properties, assess asphalt mixture testing
protocols, and develop theoretical models. | [
"cs.CV"
] | false |
2305.05391 | 2023-05-08T08:52:08Z | Privacy-preserving Adversarial Facial Features | [
"Zhibo Wang",
"He Wang",
"Shuaifan Jin",
"Wenwen Zhang",
"Jiahui Hu",
"Yan Wang",
"Peng Sun",
"Wei Yuan",
"Kaixin Liu",
"Kui Ren"
] | Face recognition service providers protect face privacy by extracting compact
and discriminative facial features (representations) from images, and storing
the facial features for real-time recognition. However, such features can still
be exploited to recover the appearance of the original face by building a
reconstruction network. Although several privacy-preserving methods have been
proposed, the enhancement of face privacy protection is at the expense of
accuracy degradation. In this paper, we propose an adversarial features-based
face privacy protection (AdvFace) approach to generate privacy-preserving
adversarial features, which can disrupt the mapping from adversarial features
to facial images to defend against reconstruction attacks. To this end, we
design a shadow model which simulates the attackers' behavior to capture the
mapping function from facial features to images and generate adversarial latent
noise to disrupt the mapping. The adversarial features rather than the original
features are stored in the server's database to prevent leaked features from
exposing facial information. Moreover, AdvFace requires no changes to the
face recognition network and can be implemented as a privacy-enhancing plugin
in deployed face recognition systems. Extensive experimental results
demonstrate that AdvFace outperforms the state-of-the-art face
privacy-preserving methods in defending against reconstruction attacks while
maintaining face recognition accuracy. | [
"cs.CV"
] | false |
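The core idea, perturbing stored features so that a shadow reconstruction network fails, can be sketched roughly as follows; `shadow_decoder`, the step count, and the bound `eps` are illustrative assumptions rather than the paper's actual training setup.

```python
import torch

def adversarial_features(features, shadow_decoder, image,
                         steps=10, lr=0.01, eps=0.5):
    """Perturb facial features so a shadow reconstruction network can no longer
    recover the face, while keeping the perturbation bounded so the features
    remain usable for recognition (a sketch, not the authors' code)."""
    delta = torch.zeros_like(features, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        recon = shadow_decoder(features + delta)
        loss = -torch.nn.functional.mse_loss(recon, image)  # maximize recon error
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():       # keep the perturbation bounded
            delta.clamp_(-eps, eps)
    return (features + delta).detach()
```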
2305.04443 | 2023-05-08T03:43:51Z | Towards Accurate Human Motion Prediction via Iterative Refinement | [
"Jiarui Sun",
"Girish Chowdhary"
] | Human motion prediction aims to forecast an upcoming pose sequence given a
past human motion trajectory. To address the problem, in this work we propose
FreqMRN, a human motion prediction framework that takes into account both the
kinematic structure of the human body and the temporal smoothness nature of
motion. Specifically, FreqMRN first generates a fixed-size motion history
summary using a motion attention module, which helps avoid inaccurate motion
predictions due to excessively long motion inputs. Then, supervised by the
proposed spatial-temporal-aware, velocity-aware and global-smoothness-aware
losses, FreqMRN iteratively refines the predicted motion through the proposed
motion refinement module, which converts motion representations back and forth
between pose space and frequency space. We evaluate FreqMRN on several standard
benchmark datasets, including Human3.6M, AMASS and 3DPW. Experimental results
demonstrate that FreqMRN outperforms previous methods by large margins for both
short-term and long-term predictions, while demonstrating superior robustness. | [
"cs.CV",
"cs.LG"
] | false |
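The abstract's "back and forth between pose space and frequency space" suggests a transform pair such as the discrete cosine transform, a common choice in motion prediction; the sketch below uses SciPy's DCT as an assumed stand-in for the paper's actual conversion, with low-pass truncation illustrating temporal smoothing.

```python
import numpy as np
from scipy.fft import dct, idct

def to_frequency(motion):
    # motion: (T, D) joint trajectories over T frames, D pose dimensions.
    return dct(motion, type=2, axis=0, norm="ortho")

def to_pose(coeffs, keep=10):
    # Keeping only the lowest `keep` frequencies enforces temporal smoothness.
    truncated = np.zeros_like(coeffs)
    truncated[:keep] = coeffs[:keep]
    return idct(truncated, type=2, axis=0, norm="ortho")

motion = np.random.randn(50, 66)            # e.g. 50 frames, 22 joints x 3
smooth = to_pose(to_frequency(motion), keep=10)
```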
2305.04497 | 2023-05-08T06:46:56Z | IIITD-20K: Dense captioning for Text-Image ReID | [
"A V Subramanyam",
"Niranjan Sundararajan",
"Vibhu Dubey",
"Brejesh Lall"
] | Text-to-Image (T2I) ReID has attracted a lot of attention in the recent past.
CUHK-PEDES, RSTPReid and ICFG-PEDES are the three available benchmarks to
evaluate T2I ReID methods. RSTPReid and ICFG-PEDES comprise identities from
MSMT17, but due to the limited number of unique persons, their diversity is limited.
On the other hand, CUHK-PEDES comprises 13,003 identities but has relatively
short text descriptions on average. Further, these datasets are captured in a
restricted environment with a limited number of cameras. In order to further
diversify the identities and provide dense captions, we propose a novel dataset
called IIITD-20K. IIITD-20K comprises 20,000 unique identities captured in
the wild and provides a rich dataset for text-to-image ReID. With a minimum of
26 words for a description, each image is densely captioned. We further
synthetically generate images and fine-grained captions using Stable-diffusion
and BLIP models trained on our dataset. We perform elaborate experiments using
state-of-the-art text-to-image ReID models and vision-language pre-trained models
and present a comprehensive analysis of the dataset. Our experiments also
reveal that synthetically generated data leads to a substantial performance
improvement in both same dataset as well as cross dataset settings. Our dataset
is available at https://bit.ly/3pkA3Rj. | [
"cs.CV",
"cs.MM"
] | false |
2305.04499 | 2023-05-08T06:50:05Z | Building Footprint Extraction with Graph Convolutional Network | [
"Yilei Shi",
"Qinyu Li",
"Xiaoxiang Zhu"
] | Building footprint information is an essential ingredient for 3-D
reconstruction of urban models. The automatic generation of building footprints
from satellite images presents a considerable challenge due to the complexity
of building shapes. Recent developments in deep convolutional neural networks
(DCNNs) have enabled accurate pixel-level labeling tasks. One central issue
remains, which is the precise delineation of boundaries. Deep architectures
generally fail to produce fine-grained segmentation with accurate boundaries
due to progressive downsampling. In this work, we propose an end-to-end
framework to overcome this issue, which uses the graph convolutional network
(GCN) for building footprint extraction task. Our proposed framework
outperforms state-of-the-art methods. | [
"cs.CV",
"eess.IV"
] | false |
2305.04506 | 2023-05-08T07:03:26Z | Pedestrian Behavior Maps for Safety Advisories: CHAMP Framework and
Real-World Data Analysis | [
"Ross Greer",
"Samveed Desai",
"Lulua Rakla",
"Akshay Gopalkrishnan",
"Afnan Alofi",
"Mohan Trivedi"
] | It is critical for vehicles to prevent any collisions with pedestrians.
Current methods for pedestrian collision prevention focus on integrating visual
pedestrian detectors with Automatic Emergency Braking (AEB) systems which can
trigger warnings and apply brakes as a pedestrian enters a vehicle's path.
Unfortunately, pedestrian-detection-based systems can be hindered in certain
situations such as night-time or when pedestrians are occluded. Our system
addresses such issues using an online, map-based pedestrian detection
aggregation system where common pedestrian locations are learned after repeated
passes of locations. Using a carefully collected and annotated dataset in La
Jolla, CA, we demonstrate the system's ability to learn pedestrian zones and
generate advisory notices when a vehicle is approaching a pedestrian despite
challenges like dark lighting or pedestrian occlusion. Using the number of
correct advisories, false advisories, and missed advisories to define precision
and recall performance metrics, we evaluate our system and discuss future
positive effects with further data collection. We have made our code available
at https://github.com/s7desai/ped-mapping, and a video demonstration of the
CHAMP system at https://youtu.be/dxeCrS_Gpkw. | [
"cs.CV",
"cs.AI"
] | false |
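A toy sketch of the map-based aggregation the abstract describes: pedestrian detections from repeated passes are binned into grid cells, and cells observed often enough become advisory zones. The cell size and hit threshold are made-up parameters, not the CHAMP system's.

```python
from collections import Counter

class PedestrianMap:
    """Grid-based aggregation of pedestrian detections over repeated passes;
    frequently observed cells trigger advisories on later approaches."""
    def __init__(self, cell_size=5.0, min_hits=3):
        self.cell_size, self.min_hits = cell_size, min_hits
        self.hits = Counter()

    def _cell(self, x, y):
        return (int(x // self.cell_size), int(y // self.cell_size))

    def observe(self, detections):
        # detections: [(x, y), ...] pedestrian positions in map coordinates
        for x, y in detections:
            self.hits[self._cell(x, y)] += 1

    def advisory(self, x, y):
        # True if the vehicle is approaching a learned pedestrian zone.
        return self.hits[self._cell(x, y)] >= self.min_hits
```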
2305.04542 | 2023-05-08T08:30:52Z | Multi-Temporal Lip-Audio Memory for Visual Speech Recognition | [
"Jeong Hun Yeo",
"Minsu Kim",
"Yong Man Ro"
] | Visual Speech Recognition (VSR) is a task to predict a sentence or word from
lip movements. Some works have recently been presented that use audio signals
to supplement visual information. However, existing methods utilize only
limited information such as phoneme-level features and soft labels of Automatic
Speech Recognition (ASR) networks. In this paper, we present a Multi-Temporal
Lip-Audio Memory (MTLAM) that makes the best use of audio signals to complement
insufficient information of lip movements. The proposed method is mainly
composed of two parts: 1) MTLAM saves multi-temporal audio features produced
from short- and long-term audio signals, and the MTLAM memorizes a
visual-to-audio mapping to load stored multi-temporal audio features from
visual features at the inference phase. 2) We design an audio temporal model to
produce multi-temporal audio features capturing the context of neighboring
words. In addition, to construct effective visual-to-audio mapping, the audio
temporal models can generate audio features time-aligned with visual features.
Through extensive experiments, we validate the effectiveness of the MTLAM
achieving state-of-the-art performances on two public VSR datasets. | [
"cs.CV",
"eess.AS"
] | false |
2305.04609 | 2023-05-08T10:38:14Z | SwinDocSegmenter: An End-to-End Unified Domain Adaptive Transformer for
Document Instance Segmentation | [
"Ayan Banerjee",
"Sanket Biswas",
"Josep Lladós",
"Umapada Pal"
] | Instance-level segmentation of documents consists in assigning a class-aware
and instance-aware label to each pixel of the image. It is a key step in
parsing documents for their understanding. In this paper, we present a unified
transformer encoder-decoder architecture for end-to-end instance segmentation of
complex layouts in document images. The method adapts a contrastive training
with a mixed query selection for anchor initialization in the decoder. Later
on, it performs a dot product between the obtained query embeddings and the
pixel embedding map (coming from the encoder) for semantic reasoning. Extensive
experimentation on competitive benchmarks like PubLayNet, PRIMA, Historical
Japanese (HJ), and TableBank demonstrate that our model with SwinL backbone
achieves better segmentation performance than the existing state-of-the-art
approaches with the average precision of \textbf{93.72}, \textbf{54.39},
\textbf{84.65} and \textbf{98.04} respectively under one billion parameters.
The code is made publicly available at:
\href{https://github.com/ayanban011/SwinDocSegmenter}{github.com/ayanban011/SwinDocSegmenter} | [
"cs.CV",
"cs.LG"
] | false |
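The semantic-reasoning step named in this abstract, a dot product between decoder query embeddings and the encoder's pixel embedding map, can be written in one line; the tensor shapes below are assumptions.

```python
import torch

def semantic_masks(query_embeds: torch.Tensor,
                   pixel_embeds: torch.Tensor) -> torch.Tensor:
    # query_embeds: (B, Q, C) instance queries; pixel_embeds: (B, C, H, W).
    # Each query scores every pixel, yielding per-instance mask logits.
    return torch.einsum("bqc,bchw->bqhw", query_embeds, pixel_embeds)
```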
2305.04710 | 2023-05-08T13:50:47Z | ElasticHash: Semantic Image Similarity Search by Deep Hashing with
Elasticsearch | [
"Nikolaus Korfhage",
"Markus Mühling",
"Bernd Freisleben"
] | We present ElasticHash, a novel approach for high-quality, efficient, and
large-scale semantic image similarity search. It is based on a deep hashing
model to learn hash codes for fine-grained image similarity search in natural
images and a two-stage method for efficiently searching binary hash codes using
Elasticsearch (ES). In the first stage, a coarse search based on short hash
codes is performed using multi-index hashing and ES terms lookup of neighboring
hash codes. In the second stage, the list of results is re-ranked by computing
the Hamming distance on long hash codes. We evaluate the retrieval performance
of \textit{ElasticHash} for more than 120,000 query images on about 6.9 million
database images of the OpenImages data set. The results show that our approach
achieves high-quality retrieval results and low search latencies. | [
"cs.CV",
"cs.MM"
] | false |
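A simplified, self-contained sketch of the two-stage scheme (without Elasticsearch): multi-index lookup on short codes to gather candidates, then Hamming re-ranking on long codes. Bit widths and table layout are assumptions for illustration.

```python
from collections import defaultdict

def hamming(a: int, b: int) -> int:
    return bin(a ^ b).count("1")

class TwoStageHashIndex:
    """Stage 1: multi-index hashing on short codes split into substrings.
    Stage 2: re-rank candidates by Hamming distance on long codes."""
    def __init__(self, short_bits=16, substrings=4):
        self.sub_bits = short_bits // substrings
        self.tables = [defaultdict(list) for _ in range(substrings)]
        self.long_codes = {}

    def _subkeys(self, code):
        mask = (1 << self.sub_bits) - 1
        return [(code >> (i * self.sub_bits)) & mask
                for i in range(len(self.tables))]

    def add(self, item_id, short_code, long_code):
        self.long_codes[item_id] = long_code
        for table, key in zip(self.tables, self._subkeys(short_code)):
            table[key].append(item_id)

    def search(self, short_code, long_code, k=10):
        cands = set()   # any item sharing a short-code substring is a candidate
        for table, key in zip(self.tables, self._subkeys(short_code)):
            cands.update(table[key])
        return sorted(cands,
                      key=lambda i: hamming(self.long_codes[i], long_code))[:k]
```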
2305.04724 | 2023-05-08T14:17:33Z | Strategy for Rapid Diabetic Retinopathy Exposure Based on Enhanced
Feature Extraction Processing | [
"V. Banupriya",
"S. Anusuya"
] | In the modern world, one of the most severe eye infections brought on by
diabetes is diabetic retinopathy, which results in retinal damage
and, thus, can lead to blindness. Diabetic retinopathy can be well treated with
early diagnosis. Retinal fundus images of humans are used to screen for lesions
in the retina. However, detecting DR in the early stages is challenging due to
the minimal symptoms. Furthermore, the occurrence of diseases linked to
vascular anomalies brought on by DR aids in diagnosing the condition.
Nevertheless, the resources required for manually identifying the lesions are
high. Similarly, training for Convolutional Neural Networks is more
time-consuming. This proposed research aims to improve diabetic retinopathy
diagnosis by developing an enhanced deep learning model (EDLM) for timely DR
identification that is potentially more accurate than existing CNN-based
models. The proposed model will detect various lesions from retinal images in
the early stages. First, characteristics are retrieved from the retinal fundus
picture and put into the EDLM for classification. For dimensionality reduction,
EDLM is used. Additionally, the classification and feature extraction processes
are optimized using the stochastic gradient descent optimizer. The EDLM
effectiveness is assessed on the Kaggle dataset with 3459 retinal images, and
results are compared over VGG16, VGG19, RESNET18, RESNET34, and RESNET50. | [
"cs.CV",
"cs.AI"
] | false |
2305.04745 | 2023-05-08T14:46:28Z | Controllable Light Diffusion for Portraits | [
"David Futschik",
"Kelvin Ritland",
"James Vecore",
"Sean Fanello",
"Sergio Orts-Escolano",
"Brian Curless",
"Daniel Sýkora",
"Rohit Pandey"
] | We introduce light diffusion, a novel method to improve lighting in
portraits, softening harsh shadows and specular highlights while preserving
overall scene illumination. Inspired by professional photographers' diffusers
and scrims, our method softens lighting given only a single portrait photo.
Previous portrait relighting approaches focus on changing the entire lighting
environment, removing shadows (ignoring strong specular highlights), or
removing shading entirely. In contrast, we propose a learning based method that
allows us to control the amount of light diffusion and apply it on in-the-wild
portraits. Additionally, we design a method to synthetically generate plausible
external shadows with sub-surface scattering effects while conforming to the
shape of the subject's face. Finally, we show how our approach can increase the
robustness of higher level vision applications, such as albedo estimation,
geometry estimation and semantic segmentation. | [
"cs.CV",
"cs.GR",
"I.4.3"
] | true |
2305.04749 | 2023-05-08T14:49:01Z | Toeplitz Neural Network for Sequence Modeling | [
"Zhen Qin",
"Xiaodong Han",
"Weixuan Sun",
"Bowen He",
"Dong Li",
"Dongxu Li",
"Yuchao Dai",
"Lingpeng Kong",
"Yiran Zhong"
] | Sequence modeling has important applications in natural language processing
and computer vision. Recently, the transformer-based models have shown strong
performance on various sequence modeling tasks, which rely on attention to
capture pairwise token relations, and position embedding to inject positional
information. While showing good performance, the transformer models are
inefficient to scale to long input sequences, mainly due to the quadratic
space-time complexity of attention. To overcome this inefficiency, we propose
to model sequences with a relative position encoded Toeplitz matrix and use a
Toeplitz matrix-vector product trick to reduce the space-time complexity of
the sequence modeling to log linear. A lightweight sub-network called relative
position encoder is proposed to generate relative position coefficients with a
fixed budget of parameters, enabling the proposed Toeplitz neural network to
deal with varying sequence lengths. In addition, despite being trained on
512-token sequences, our model can extrapolate input sequence length up to 14K
tokens in inference with consistent performance. Extensive experiments on
autoregressive and bidirectional language modeling, image modeling, and the
challenging Long-Range Arena benchmark show that our method achieves better
performance than its competitors in most downstream tasks while being
significantly faster. The code is available at
https://github.com/OpenNLPLab/Tnn. | [
"cs.CL",
"cs.CV"
] | false |
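The "Toeplitz matrix-vector product trick" that yields log-linear complexity is the classic circulant embedding: pad the n x n Toeplitz matrix into a 2n circulant and multiply via FFT. The sketch below illustrates the trick itself (with a self-check against the dense product), not the paper's network.

```python
import numpy as np

def toeplitz_matvec(col, row, x):
    """Multiply the Toeplitz matrix T (first column `col`, first row `row`,
    with row[0] == col[0]) by x in O(n log n) via circulant embedding."""
    n = len(x)
    # First column of the 2n circulant: [col, 0, reversed tail of row].
    c = np.concatenate([col, [0.0], row[1:][::-1]])
    y = np.fft.ifft(np.fft.fft(c) *
                    np.fft.fft(np.concatenate([x, np.zeros(n)])))
    return y[:n].real

# Self-check against the explicit dense product.
n = 8
col, row = np.random.randn(n), np.random.randn(n)
row[0] = col[0]
T = np.array([[col[i - j] if i >= j else row[j - i] for j in range(n)]
              for i in range(n)])
x = np.random.randn(n)
assert np.allclose(T @ x, toeplitz_matvec(col, row, x))
```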
2305.04789 | 2023-05-08T15:43:00Z | AvatarReX: Real-time Expressive Full-body Avatars | [
"Zerong Zheng",
"Xiaochen Zhao",
"Hongwen Zhang",
"Boning Liu",
"Yebin Liu"
] | We present AvatarReX, a new method for learning NeRF-based full-body avatars
from video data. The learnt avatar not only provides expressive control of the
body, hands and the face together, but also supports real-time animation and
rendering. To this end, we propose a compositional avatar representation, where
the body, hands and the face are separately modeled in a way that the
structural prior from parametric mesh templates is properly utilized without
compromising representation flexibility. Furthermore, we disentangle the
geometry and appearance for each part. With these technical designs, we propose
a dedicated deferred rendering pipeline, which can be executed in real-time
framerate to synthesize high-quality free-view images. The disentanglement of
geometry and appearance also allows us to design a two-pass training strategy
that combines volume rendering and surface rendering for network training. In
this way, patch-level supervision can be applied to force the network to learn
sharp appearance details on the basis of geometry estimation. Overall, our
method enables automatic construction of expressive full-body avatars with
real-time rendering capability, and can generate photo-realistic images with
dynamic details for novel body motions and facial expressions. | [
"cs.CV",
"cs.GR"
] | true |
2305.04844 | 2023-05-08T16:42:55Z | Compressed Video Quality Assessment for Super-Resolution: a Benchmark
and a Quality Metric | [
"Evgeney Bogatyrev",
"Ivan Molodetskikh",
"Dmitriy Vatolin"
] | We developed a super-resolution (SR) benchmark to analyze SR's capacity to
upscale compressed videos. Our dataset employed video codecs based on five
compression standards: H.264, H.265, H.266, AV1, and AVS3. We assessed 17
state-of-the-art SR models using our benchmark and evaluated their ability to
preserve scene context and their susceptibility to compression artifacts. To
get an accurate perceptual ranking of SR models, we conducted a crowd-sourced
side-by-side comparison of their outputs. The benchmark is publicly available
at
https://videoprocessing.ai/benchmarks/super-resolutionfor-video-compression.html.
We also analyzed benchmark results and developed an
objective-quality-assessment metric based on the current best-performing
objective metrics. Our metric outperforms others, according to Spearman
correlation with subjective scores for compressed video upscaling. It is
publicly available at
https://github.com/EvgeneyBogatyrev/super-resolution-metric. | [
"eess.IV",
"cs.CV"
] | false |
2305.04923 | 2023-05-08T17:58:27Z | Learning to Evaluate the Artness of AI-generated Images | [
"Junyu Chen",
"Jie An",
"Hanjia Lyu",
"Jiebo Luo"
] | Assessing the artness of AI-generated images continues to be a challenge
within the realm of image generation. Most existing metrics cannot be used to
perform instance-level and reference-free artness evaluation. This paper
presents ArtScore, a metric designed to evaluate the degree to which an image
resembles authentic artworks by artists (or conversely photographs), thereby
offering a novel approach to artness assessment. We first blend pre-trained
models for photo and artwork generation, resulting in a series of mixed models.
Subsequently, we utilize these mixed models to generate images exhibiting
varying degrees of artness with pseudo-annotations. Each photorealistic image
has a corresponding artistic counterpart and a series of interpolated images
that range from realistic to artistic. This dataset is then employed to train a
neural network that learns to estimate quantized artness levels of arbitrary
images. Extensive experiments reveal that the artness levels predicted by
ArtScore align more closely with human artistic evaluation than existing
evaluation metrics, such as Gram loss and ArtFID. | [
"cs.CV",
"cs.AI"
] | false |
2305.05095 | 2023-05-08T23:47:07Z | Less is More: Removing Text-regions Improves CLIP Training Efficiency
and Robustness | [
"Liangliang Cao",
"Bowen Zhang",
"Chen Chen",
"Yinfei Yang",
"Xianzhi Du",
"Wencong Zhang",
"Zhiyun Lu",
"Yantao Zheng"
] | The CLIP (Contrastive Language-Image Pre-training) model and its variants are
becoming the de facto backbone in many applications. However, training a CLIP
model from hundreds of millions of image-text pairs can be prohibitively
expensive. Furthermore, the conventional CLIP model doesn't differentiate
between the visual semantics and meaning of text regions embedded in images.
This can lead to non-robustness when the text in the embedded region doesn't
match the image's visual appearance. In this paper, we discuss two effective
approaches to improve the efficiency and robustness of CLIP training: (1)
augmenting the training dataset while maintaining the same number of
optimization steps, and (2) filtering out samples that contain text regions in
the image. By doing so, we significantly improve the classification and
retrieval accuracy on public benchmarks like ImageNet and CoCo. Filtering out
images with text regions also protects the model from typographic attacks. To
verify this, we build a new dataset named ImageNet with Adversarial Text
Regions (ImageNet-Attr). Our filter-based CLIP model demonstrates a top-1
accuracy of 68.78\%, outperforming previous models whose accuracy was all below
50\%. | [
"cs.CV",
"cs.AI"
] | false |
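Approach (2), filtering out samples whose images contain text regions, reduces to a simple predicate over detector output. Here `detect_text_regions` is a hypothetical stand-in for any OCR/text detector; the zero-area default mirrors strict filtering and is an assumption.

```python
def filter_text_images(samples, detect_text_regions, max_text_area=0.0):
    """Drop (image, caption) pairs whose image contains text regions.
    `detect_text_regions` should return boxes as (x0, y0, x1, y1) given in
    fractions of the image, so box areas sum to a fraction of total area."""
    kept = []
    for image, caption in samples:
        boxes = detect_text_regions(image)
        text_area = sum((x1 - x0) * (y1 - y0) for x0, y0, x1, y1 in boxes)
        if text_area <= max_text_area:
            kept.append((image, caption))
    return kept
```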
2305.04516 | 2023-05-08T07:22:15Z | Robust Traffic Light Detection Using Salience-Sensitive Loss:
Computational Framework and Evaluations | [
"Ross Greer",
"Akshay Gopalkrishnan",
"Jacob Landgren",
"Lulua Rakla",
"Anish Gopalan",
"Mohan Trivedi"
] | One of the most important tasks for ensuring safe autonomous driving systems
is accurately detecting road traffic lights and accurately determining how they
impact the driver's actions. In various real-world driving situations, a scene
may have numerous traffic lights with varying levels of relevance to the
driver, and thus, distinguishing and detecting the lights that are relevant to
the driver and influence the driver's actions is a critical safety task. This
paper proposes a traffic light detection model which focuses on this task by
first defining salient lights as the lights that affect the driver's future
decisions. We then use this salience property to construct the LAVA Salient
Lights Dataset, the first US traffic light dataset with an annotated salience
property. Subsequently, we train a Deformable DETR object detection transformer
model using Salience-Sensitive Focal Loss to emphasize stronger performance on
salient traffic lights, showing that a model trained with this loss function
has stronger recall than one trained without. | [
"cs.CV",
"cs.AI",
"cs.LG"
] | false |
2305.04605 | 2023-05-08T10:26:46Z | Development of a Vision System to Enhance the Reliability of the
Pick-and-Place Robot for Autonomous Testing of Camera Module used in
Smartphones | [
"Hoang-Anh Phan",
"Duy Nam Bui",
"Tuan Nguyen Dinh",
"Bao-Anh Hoang",
"An Nguyen Ngoc",
"Dong Tran Huu Quoc",
"Ha Tran Thi Thuy",
"Tung Thanh Bui",
"Van Nguyen Thi Thanh"
] | Pick-and-place robots are commonly used in modern industrial manufacturing.
For complex devices/parts like camera modules used in smartphones, which
contain optical parts, electrical components and interfacing connectors, the
placement operation may not be absolutely accurate, which may damage the
device under test during the mechanical movement to make good contact for
electrical functions inspection. In this paper, we proposed an effective vision
system including hardware and algorithm to enhance the reliability of the
pick-and-place robot for autonomously testing the memory of camera modules. With
limited hardware based on a camera and a Raspberry Pi, and using a simplified
image-processing algorithm based on histogram information, the vision system can
confirm the presence of the camera modules in the feeding tray and the placement
accuracy of the camera module in the test socket. As a result, the system can work
with more flexibility and avoid damaging the device under test. The system was
experimentally quantified through testing approximately 2000 camera modules in
a stable light condition. Experimental results demonstrate that the system
achieves accuracy of more than 99.92%. With its simplicity and effectiveness,
the proposed vision system can be considered a useful solution for use in
industrial pick-and-place systems. | [
"eess.SY",
"cs.CV",
"cs.SY"
] | false |
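A rough sketch of the histogram-based presence check such a system might use: correlate the region of interest's intensity histogram with a reference histogram from a known-good placement. The bin count and threshold are illustrative assumptions.

```python
import numpy as np

def module_present(roi_gray, reference_hist, threshold=0.8):
    """Decide whether a camera module is present in a tray/socket region of
    interest by correlating its grayscale histogram with a reference one."""
    hist, _ = np.histogram(roi_gray, bins=64, range=(0, 256))
    h = hist / max(hist.sum(), 1)                    # normalized observed hist
    r = reference_hist / max(reference_hist.sum(), 1)
    return float(np.corrcoef(h, r)[0, 1]) >= threshold
```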
2305.04769 | 2023-05-08T15:19:39Z | BiRT: Bio-inspired Replay in Vision Transformers for Continual Learning | [
"Kishaan Jeeveswaran",
"Prashant Bhat",
"Bahram Zonooz",
"Elahe Arani"
] | The ability of deep neural networks to continually learn and adapt to a
sequence of tasks has remained challenging due to catastrophic forgetting of
previously learned tasks. Humans, on the other hand, have a remarkable ability
to acquire, assimilate, and transfer knowledge across tasks throughout their
lifetime without catastrophic forgetting. The versatility of the brain can be
attributed to the rehearsal of abstract experiences through a complementary
learning system. However, representation rehearsal in vision transformers lacks
diversity, resulting in overfitting and, consequently, significant performance
drops compared to raw image rehearsal. Therefore, we propose BiRT, a
novel representation rehearsal-based continual learning approach using vision
transformers. Specifically, we introduce constructive noises at various stages
of the vision transformer and enforce consistency in predictions with respect
to an exponential moving average of the working model. Our method provides
consistent performance gain over raw image and vanilla representation rehearsal
on several challenging CL benchmarks, while being memory efficient and robust
to natural and adversarial corruptions. | [
"cs.CV",
"cs.LG",
"cs.NE"
] | false |
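Two ingredients named in this abstract, an exponential moving average of the working model and consistency between its predictions on noised representations and the EMA model's, might look like the sketch below; the model interfaces, KL form, and noise scale are assumptions.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def ema_update(stable_model, working_model, decay=0.999):
    """The stable (semantic) model tracks an exponential moving average of
    the working (plastic) model's parameters."""
    for s, w in zip(stable_model.parameters(), working_model.parameters()):
        s.mul_(decay).add_(w, alpha=1.0 - decay)

def consistency_loss(working_model, stable_model, reps, noise_std=0.1):
    # Constructive noise on rehearsed representations; the working model's
    # predictions are kept consistent with the EMA (stable) teacher's.
    noisy = reps + noise_std * torch.randn_like(reps)
    student_logits = working_model(noisy)
    with torch.no_grad():
        teacher_probs = stable_model(reps).softmax(dim=-1)
    return F.kl_div(student_logits.log_softmax(dim=-1), teacher_probs,
                    reduction="batchmean")
```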
2305.04961 | 2023-05-08T18:00:33Z | Joint Moment Retrieval and Highlight Detection Via Natural Language
Queries | [
"Richard Luo",
"Austin Peng",
"Heidi Yap",
"Koby Beard"
] | Video summarization has become an increasingly important task in the field of
computer vision due to the vast amount of video content available on the
internet. In this project, we propose a new method for natural language query
based joint video summarization and highlight detection using multi-modal
transformers. This approach uses both visual and audio cues to match a
user's natural language query to retrieve the most relevant and interesting
moments from a video. Our approach employs multiple recent techniques used in
Vision Transformers (ViTs) to create a transformer-like encoder-decoder model.
We evaluated our approach on multiple datasets such as YouTube Highlights and
TVSum to demonstrate the flexibility of our proposed method. | [
"cs.CV",
"cs.CL",
"cs.LG"
] | false |
2305.04430 | 2023-05-08T02:59:02Z | Breaking Through the Haze: An Advanced Non-Homogeneous Dehazing Method
based on Fast Fourier Convolution and ConvNeXt | [
"Han Zhou",
"Wei Dong",
"Yangyi Liu",
"Jun Chen"
] | Haze usually leads to deteriorated images with low contrast, color shift and
structural distortion. We observe that many deep learning based models exhibit
exceptional performance on removing homogeneous haze, but they usually fail to
address the challenge of non-homogeneous dehazing. Two main factors account for
this situation. Firstly, due to the intricate and non-uniform distribution of
dense haze, the recovery of structural and chromatic features with high
fidelity is challenging, particularly in regions with heavy haze. Secondly, the
existing small scale datasets for non-homogeneous dehazing are inadequate to
support reliable learning of feature mappings between hazy images and their
corresponding haze-free counterparts by convolutional neural network
(CNN)-based models. To tackle these two challenges, we propose a novel two
branch network that leverages 2D discrete wavelet transform (DWT), fast
Fourier convolution (FFC) residual block and a pretrained ConvNeXt model.
Specifically, in the DWT-FFC frequency branch, our model exploits DWT to
capture more high-frequency features. Moreover, by taking advantage of the
large receptive field provided by FFC residual blocks, our model is able to
effectively explore global contextual information and produce images with
better perceptual quality. In the prior knowledge branch, an ImageNet
pretrained ConvNeXt as opposed to Res2Net is adopted. This enables our model to
learn more supplementary information and acquire a stronger generalization
ability. The feasibility and effectiveness of the proposed method is
demonstrated via extensive experiments and ablation studies. The code is
available at https://github.com/zhouh115/DWT-FFC. | [
"cs.CV",
"cs.AI",
"cs.GR",
"cs.IR",
"cs.LG"
] | false |
2305.04429 | 2023-05-08T02:50:41Z | Improving Cross-Task Generalization with Step-by-Step Instructions | [
"Yang Wu",
"Yanyan Zhao",
"Zhongyang Li",
"Bing Qin",
"Kai Xiong"
] | Instruction tuning has been shown to be able to improve cross-task
generalization of language models. However, it is still challenging for
language models to complete the target tasks following the instructions, as the
instructions are general and lack intermediate steps. To address this problem,
we propose to incorporate the step-by-step instructions to help language models
to decompose the tasks, which can provide the detailed and specific procedures
for completing the target tasks. The step-by-step instructions are obtained
automatically by prompting ChatGPT, which are further combined with the
original instructions to tune language models. The extensive experiments on
SUP-NATINST show that the high-quality step-by-step instructions can improve
cross-task generalization across different model sizes. Moreover, the further
analysis indicates the importance of the order of steps of the step-by-step
instruction for the improvement. To facilitate future research, we release the
step-by-step instructions and their human quality evaluation results. | [
"cs.CL"
] | false |
2305.04465 | 2023-05-08T05:32:22Z | Can Diffusion Model Achieve Better Performance in Text Generation?
Bridging the Gap between Training and Inference! | [
"Zecheng Tang",
"Pinzheng Wang",
"Keyan Zhou",
"Juntao Li",
"Ziqiang Cao",
"Min Zhang"
] | Diffusion models have been successfully adapted to text generation tasks by
mapping the discrete text into the continuous space. However, there exist
non-negligible gaps between training and inference, owing to the absence of the
forward process during inference. Thus, the model only predicts based on the
previously generated reverse noise rather than the noise computed by the
forward process. Besides, the widely-used downsampling strategy in speeding up
the inference will cause the mismatch of diffusion trajectories between
training and inference. To understand and mitigate the above two types of
training-inference discrepancies, we launch a thorough preliminary study. Based
on our observations, we propose two simple yet effective methods to bridge the
gaps mentioned above, named Distance Penalty and Adaptive Decay Sampling.
Extensive experiments on \textbf{6} generation tasks confirm the superiority of
our methods, which can achieve $100\times \rightarrow 200\times$ speedup with
better performance. | [
"cs.CL"
] | false |
2305.04522 | 2023-05-08T07:45:12Z | Event Knowledge Incorporation with Posterior Regularization for
Event-Centric Question Answering | [
"Junru Lu",
"Gabriele Pergola",
"Lin Gui",
"Yulan He"
] | We propose a simple yet effective strategy to incorporate event knowledge
extracted from event trigger annotations via posterior regularization to
improve the event reasoning capability of mainstream question-answering (QA)
models for event-centric QA. In particular, we define event-related knowledge
constraints based on the event trigger annotations in the QA datasets, and
subsequently use them to regularize the posterior answer output probabilities
from the backbone pre-trained language models used in the QA setting. We
explore two different posterior regularization strategies for extractive and
generative QA separately. For extractive QA, the sentence-level event knowledge
constraint is defined by assessing if a sentence contains an answer event or
not, which is later used to modify the answer span extraction probability. For
generative QA, the token-level event knowledge constraint is defined by
comparing the generated token from the backbone language model with the answer
event in order to introduce a reward or penalty term, which essentially adjusts
the answer generative probability indirectly. We conduct experiments on two
event-centric QA datasets, TORQUE and ESTER. The results show that our proposed
approach can effectively inject event knowledge into existing pre-trained
language models and achieves strong performance compared to existing QA models
in answer evaluation. Code and models can be found:
https://github.com/LuJunru/EventQAviaPR. | [
"cs.CL"
] | false |
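For the extractive case, the sentence-level constraint can be sketched as a bias on span logits: tokens in sentences judged not to contain an answer event are down-weighted before span extraction. The mask and penalty are illustrative, not the paper's exact regularizer.

```python
import torch

def regularize_span_logits(start_logits, end_logits,
                           event_sentence_mask, penalty=5.0):
    """event_sentence_mask: (seq_len,) tensor, 1 for tokens in sentences that
    contain an answer event, 0 otherwise. Down-weights start/end logits of
    tokens outside event sentences before the answer span is extracted."""
    bias = (1.0 - event_sentence_mask.float()) * -penalty
    return start_logits + bias, end_logits + bias
```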
2305.04530 | 2023-05-08T08:05:40Z | A Multi-Modal Context Reasoning Approach for Conditional Inference on
Joint Textual and Visual Clues | [
"Yunxin Li",
"Baotian Hu",
"Xinyu Chen",
"Yuxin Ding",
"Lin Ma",
"Min Zhang"
] | Conditional inference on joint textual and visual clues is a multi-modal
reasoning task in which textual clues provide prior permutation or external
knowledge, which is complementary to visual content and pivotal to deducing
the correct option. Previous methods utilizing pretrained vision-language
models (VLMs) have achieved impressive performances, yet they show a lack of
multimodal context reasoning capability, especially for text-modal information.
To address this issue, we propose a Multi-modal Context Reasoning approach,
named ModCR. Compared to VLMs performing reasoning via cross modal semantic
alignment, it regards the given abstract textual semantics and objective image
information as pre-context information and embeds them into the language
model to perform context reasoning. Different from recent vision-aided language
models used in natural language processing, ModCR incorporates the multi-view
semantic alignment information between language and vision by introducing the
learnable alignment prefix between image and text in the pretrained language
model. This makes the language model well suited to such multi-modal
reasoning scenarios on joint textual and visual clues. We conduct extensive
experiments on two corresponding datasets, and the results show
significantly improved performance (an exact gain of 4.8% on the PMR test set)
compared to previous strong baselines. Code Link:
\url{https://github.com/YunxinLi/Multimodal-Context-Reasoning}. | [
"cs.CL"
] | false |
2305.04547 | 2023-05-08T08:40:30Z | Diffusion Theory as a Scalpel: Detecting and Purifying Poisonous
Dimensions in Pre-trained Language Models Caused by Backdoor or Bias | [
"Zhiyuan Zhang",
"Deli Chen",
"Hao Zhou",
"Fandong Meng",
"Jie Zhou",
"Xu Sun"
] | Pre-trained Language Models (PLMs) may be poisoned with backdoors or bias
injected by a malicious attacker during the fine-tuning process. A core
challenge of purifying potentially poisoned PLMs is precisely finding
poisonous dimensions. To address this issue, we propose the Fine-purifying
approach, which utilizes the diffusion theory to study the dynamic process of
fine-tuning for finding potentially poisonous dimensions. According to the
relationship between parameter drifts and Hessians of different dimensions, we
can detect poisonous dimensions with abnormal dynamics, purify them by
resetting them to clean pre-trained weights, and then fine-tune the purified
weights on a small clean dataset. To the best of our knowledge, we are the
first to study the dynamics guided by the diffusion theory for safety or
defense purposes. Experimental results validate the effectiveness of
Fine-purifying even with a small clean dataset. | [
"cs.CL"
] | false |
2305.04557 | 2023-05-08T08:56:51Z | Toward Adversarial Training on Contextualized Language Representation | [
"Hongqiu Wu",
"Yongxiang Liu",
"Hanwen Shi",
"Hai Zhao",
"Min Zhang"
] | Beyond the success story of adversarial training (AT) in the recent text
domain on top of pre-trained language models (PLMs), our empirical study
showcases the inconsistent gains from AT on some tasks, e.g. commonsense
reasoning, named entity recognition. This paper investigates AT from the
perspective of the contextualized language representation outputted by PLM
encoders. We find that current AT attacks tend to generate sub-optimal
adversarial examples that can fool the decoder part but have only a minor
effect on the encoder, whereas effectively deviating the encoder is necessary
for AT to yield gains. Based on this observation, we propose the simple yet
effective \textit{Contextualized representation-Adversarial Training} (CreAT),
in which the attack is explicitly optimized to deviate the contextualized
representation of the encoder. It allows a global optimization of adversarial
examples that can fool the entire model. We also find that CreAT provides a
better direction for optimizing the adversarial examples, making them less
sensitive to hyperparameters. Compared to AT, CreAT produces consistent
performance gains on a wider range of tasks and is proven to be more effective
for language pre-training where only the encoder part is kept for downstream
tasks. We achieve the new state-of-the-art performances on a series of
challenging benchmarks, e.g. AdvGLUE (59.1 $ \rightarrow $ 61.1), HellaSWAG
(93.0 $ \rightarrow $ 94.9), ANLI (68.1 $ \rightarrow $ 69.3). | [
"cs.CL"
] | false |
2305.04573 | 2023-05-08T09:31:13Z | HiFi: High-Information Attention Heads Hold for Parameter-Efficient
Model Adaptation | [
"Anchun Gui",
"Han Xiao"
] | To fully leverage the advantages of large-scale pre-trained language models
(PLMs) on downstream tasks, it has become a ubiquitous adaptation paradigm to
fine-tune all the parameters of PLMs. However, this paradigm poses issues of
inefficient updating and excessive resource consumption for fine-tuning in
data-scarce and resource-limited scenarios, because of the large scale of parameters in
PLMs. To alleviate these concerns, in this paper, we propose a
parameter-efficient fine-tuning method HiFi, that is, only the highly
informative and strongly correlated attention heads for the specific task are
fine-tuned. To search for those significant attention heads, we develop a novel
framework to analyze the effectiveness of heads. Specifically, we first model
the relationship between heads into a graph from two perspectives of
information richness and correlation, and then apply the PageRank algorithm to
determine the relative importance of each head. Extensive experiments on the
GLUE benchmark demonstrate the effectiveness of our method, and show that HiFi
obtains state-of-the-art performance over the prior baselines. | [
"cs.CL"
] | false |
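The head-ranking step described in the HiFi abstract above can be sketched with plain power-iteration PageRank; the random adjacency matrix below is a stand-in for the paper's information-richness and correlation scores.

```python
# Sketch: rank attention heads with PageRank over a head-to-head graph.
# Edge weights are random placeholders for the actual head scores.
import numpy as np

rng = np.random.default_rng(0)
n_heads = 12
W = rng.random((n_heads, n_heads))            # hypothetical edge weights
np.fill_diagonal(W, 0.0)
P = W / W.sum(axis=1, keepdims=True)          # row-stochastic transition matrix

d = 0.85                                      # damping factor
rank = np.full(n_heads, 1.0 / n_heads)
for _ in range(100):                          # power iteration
    rank = (1 - d) / n_heads + d * (P.T @ rank)

top = np.argsort(rank)[::-1][:4]
print("heads selected for fine-tuning:", top)
```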
2305.04599 | 2023-05-08T10:18:30Z | Cone: Unsupervised Contrastive Opinion Extraction | [
"Runcong Zhao",
"Lin Gui",
"Yulan He"
] | Contrastive opinion extraction aims to extract a structured summary or key
points organised as positive and negative viewpoints towards a common aspect or
topic. Most recent works on unsupervised key point extraction are largely built
on sentence clustering or opinion summarisation based on the popularity of
opinions expressed in text. However, these methods tend to generate aspect
clusters with incoherent sentences, conflicting viewpoints, and redundant aspects.
To address these problems, we propose a novel unsupervised Contrastive OpinioN
Extraction model, called Cone, which learns disentangled latent aspect and
sentiment representations based on pseudo aspect and sentiment labels by
combining contrastive learning with iterative aspect/sentiment clustering
refinement. Apart from being able to extract contrastive opinions, it is also
able to quantify the relative popularity of aspects and their associated
sentiment distributions. The model has been evaluated on both a hotel review
dataset and a Twitter dataset about COVID vaccines. The results show that
despite using no label supervision or aspect-denoted seed words, Cone
outperforms a number of competitive baselines on contrastive opinion
extraction. The results of Cone can be used to offer a better recommendation of
products and services online. | [
"cs.CL"
] | false |
2305.04636 | 2023-05-08T11:29:33Z | Enhancing Continual Relation Extraction via Classifier Decomposition | [
"Heming Xia",
"Peiyi Wang",
"Tianyu Liu",
"Binghuai Lin",
"Yunbo Cao",
"Zhifang Sui"
] | Continual relation extraction (CRE) models aim at handling emerging new
relations while avoiding catastrophically forgetting old ones in the streaming
data. Though improvements have been shown by previous CRE studies, most of them
only adopt a vanilla strategy when models first learn representations of new
relations. In this work, we point out that two typical biases exist after
training with this vanilla strategy: classifier bias and representation bias,
which cause the previously learned knowledge of the model to be overshadowed. To
alleviate those biases, we propose a simple yet effective classifier
decomposition framework that splits the last FFN layer into separated previous
and current classifiers, so as to maintain previous knowledge and encourage the
model to learn more robust representations at this training stage. Experimental
results on two standard benchmarks show that our proposed framework
consistently outperforms the state-of-the-art CRE models, which indicates that
the importance of the first training stage to CRE models may be underestimated.
Our code is available at https://github.com/hemingkx/CDec. | [
"cs.CL"
] | false |
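A minimal PyTorch sketch of the classifier-decomposition idea follows; hidden size and class counts are illustrative, and freezing the previous head is an assumption about how "maintaining previous knowledge" might be realized.

```python
# Sketch: split the last FFN layer into a frozen "previous" classifier and a
# trainable "current" one, concatenating their logits.
import torch
import torch.nn as nn

class DecomposedClassifier(nn.Module):
    def __init__(self, hidden=768, n_old=10, n_new=4):
        super().__init__()
        self.old_head = nn.Linear(hidden, n_old)   # relations seen so far
        self.new_head = nn.Linear(hidden, n_new)   # newly arriving relations
        for p in self.old_head.parameters():       # preserve old knowledge
            p.requires_grad = False

    def forward(self, h):
        return torch.cat([self.old_head(h), self.new_head(h)], dim=-1)

logits = DecomposedClassifier()(torch.randn(2, 768))
print(logits.shape)   # torch.Size([2, 14])
```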
2305.04676 | 2023-05-08T12:53:06Z | Enhancing Knowledge Graph Construction Using Large Language Models | [
"Milena Trajanoska",
"Riste Stojanov",
"Dimitar Trajanov"
] | The growing trend of Large Language Models (LLM) development has attracted
significant attention, with models for various applications emerging
consistently. However, the combined application of Large Language Models with
semantic technologies for reasoning and inference is still a challenging task.
This paper analyzes how current foundational LLMs, such as ChatGPT,
compare with specialized pretrained models, such as REBEL, for joint
entity and relation extraction. To evaluate this approach, we conducted several
experiments using sustainability-related text as our use case. We created
pipelines for the automatic creation of Knowledge Graphs from raw texts, and
our findings indicate that using advanced LLM models can improve the accuracy
of the process of creating these graphs from unstructured text. Furthermore, we
explored the potential of automatic ontology creation using foundation LLM
models, which resulted in even more relevant and accurate knowledge graphs. | [
"cs.CL"
] | false |
2305.04737 | 2023-05-08T14:40:48Z | SkillQG: Learning to Generate Question for Reading Comprehension
Assessment | [
"Xiaoqiang Wang",
"Bang Liu",
"Siliang Tang",
"Lingfei Wu"
] | We present $\textbf{$\texttt{SkillQG}$}$: a question generation framework
with controllable comprehension types for assessing and improving machine
reading comprehension models. Existing question generation systems widely
differentiate questions by $\textit{literal}$ information such as question
words and answer types to generate semantically relevant questions for a given
context. However, they rarely consider the $\textit{comprehension}$ nature of
questions, i.e. the different comprehension capabilities embodied by different
questions. In comparison, our $\texttt{SkillQG}$ is able to tailor a
fine-grained assessment and improvement to the capabilities of question
answering models built on it. Specifically, we first frame the comprehension
type of questions based on a hierarchical skill-based schema, then formulate
$\texttt{SkillQG}$ as a skill-conditioned question generator. Furthermore, to
improve the controllability of generation, we augment the input text with
question focus and skill-specific knowledge, which are constructed by
iteratively prompting the pre-trained language models. Empirical results
demonstrate that $\texttt{SkillQG}$ outperforms baselines in terms of quality,
relevance, and skill-controllability while showing a promising performance
boost in the downstream question answering task. | [
"cs.CL"
] | false |
2305.04824 | 2023-05-08T16:24:46Z | Learning Summary-Worthy Visual Representation for Abstractive
Summarization in Video | [
"Zenan Xu",
"Xiaojun Meng",
"Yasheng Wang",
"Qinliang Su",
"Zexuan Qiu",
"Xin Jiang",
"Qun Liu"
] | Multimodal abstractive summarization for videos (MAS) requires generating a
concise textual summary to describe the highlights of a video according to
multimodal resources, in our case, the video content and its transcript.
Inspired by the success of the large-scale generative pre-trained language
model (GPLM) in generating high-quality textual content (e.g., summary), recent
MAS methods have proposed to adapt the GPLM to this task by equipping it with
the visual information, which is often obtained through a general-purpose
visual feature extractor. However, the generally extracted visual features may
overlook some summary-worthy visual information, which impedes model
performance. In this work, we propose a novel approach to learning the
summary-worthy visual representation that facilitates abstractive
summarization. Our method exploits the summary-worthy information from both the
cross-modal transcript data and the knowledge distilled from the pseudo
summary. Extensive experiments on three public multimodal datasets show that
our method outperforms all competing baselines. Furthermore, with the
advantages of summary-worthy visual information, our model achieves
significant improvements on small datasets, or even on datasets with limited
training data. | [
"cs.CL"
] | false |
2305.05001 | 2023-05-08T19:16:26Z | GersteinLab at MEDIQA-Chat 2023: Clinical Note Summarization from
Doctor-Patient Conversations through Fine-tuning and In-context Learning | [
"Xiangru Tang",
"Andrew Tran",
"Jeffrey Tan",
"Mark Gerstein"
] | This paper presents our contribution to the MEDIQA-2023 Dialogue2Note shared
task, encompassing both subtask A and subtask B. We approach the task as a
dialogue summarization problem and implement two distinct pipelines: (a) a
fine-tuning of a pre-trained dialogue summarization model and GPT-3, and (b)
few-shot in-context learning (ICL) using a large language model, GPT-4. Both
methods achieve excellent results in terms of ROUGE-1 F1, BERTScore F1
(deberta-xlarge-mnli), and BLEURT, with scores of 0.4011, 0.7058, and 0.5421,
respectively. Additionally, we predict the associated section headers using
RoBERTa and SciBERT based classification models. Our team ranked fourth among
all teams; each team was allowed to submit three runs as part of its
submission. We also utilize expert annotations to demonstrate that the notes
generated through the ICL GPT-4 are better than all other baselines. The code
for our submission is available. | [
"cs.CL"
] | false |
2305.05003 | 2023-05-08T19:19:07Z | Revisiting Relation Extraction in the era of Large Language Models | [
"Somin Wadhwa",
"Silvio Amir",
"Byron C. Wallace"
] | Relation extraction (RE) is the core NLP task of inferring semantic
relationships between entities from text. Standard supervised RE techniques
entail training modules to tag tokens comprising entity spans and then predict
the relationship between them. Recent work has instead treated the problem as a
\emph{sequence-to-sequence} task, linearizing relations between entities as
target strings to be generated conditioned on the input. Here we push the
limits of this approach, using larger language models (GPT-3 and Flan-T5 large)
than considered in prior work and evaluating their performance on standard RE
tasks under varying levels of supervision. We address issues inherent to
evaluating generative approaches to RE by doing human evaluations, in lieu of
relying on exact matching. Under this refined evaluation, we find that: (1)
Few-shot prompting with GPT-3 achieves near SOTA performance, i.e., roughly
equivalent to existing fully supervised models; (2) Flan-T5 is not as capable
in the few-shot setting, but supervising and fine-tuning it with
Chain-of-Thought (CoT) style explanations (generated via GPT-3) yields SOTA
results. We release this model as a new baseline for RE tasks. | [
"cs.CL"
] | false |
2305.05054 | 2023-05-08T21:24:12Z | Dreams Are More ``Predictable'' Than You Think | [
"Lorenzo Bertolini"
] | A consistent body of evidence suggests that dream reports significantly vary
from other types of textual transcripts with respect to semantic content.
Furthermore, it appears to be a widespread belief in the dream/sleep research
community that dream reports constitute rather ``unique'' strings of text. This
might be a notable issue for the growing amount of approaches using natural
language processing (NLP) tools to automatically analyse dream reports, as they
largely rely on neural models trained on non-dream corpora scraped from the
web. In this work, I will adopt state-of-the-art (SotA) large language models
(LLMs), to study if and how dream reports deviate from other human-generated
text strings, such as Wikipedia. Results show that, taken as a whole, DreamBank
does not deviate from Wikipedia. Moreover, on average, single dream reports are
significantly more predictable than Wikipedia articles. Preliminary evidence
suggests that word count, gender, and visual impairment can significantly shape
how predictable a dream report can appear to the model. | [
"cs.CL"
] | false |
2305.05079 | 2023-05-08T22:37:30Z | A Unified Evaluation Framework for Novelty Detection and Accommodation
in NLP with an Instantiation in Authorship Attribution | [
"Neeraj Varshney",
"Himanshu Gupta",
"Eric Robertson",
"Bing Liu",
"Chitta Baral"
] | State-of-the-art natural language processing models have been shown to
achieve remarkable performance in 'closed-world' settings where all the labels
in the evaluation set are known at training time. However, in real-world
settings, 'novel' instances that do not belong to any known class are often
observed. This renders the ability to deal with novelties crucial. To initiate
systematic research in this important area of 'dealing with novelties', we
introduce 'NoveltyTask', a multi-stage task to evaluate a system's performance
on pipelined novelty 'detection' and 'accommodation' tasks. We provide
mathematical formulation of NoveltyTask and instantiate it with the authorship
attribution task that pertains to identifying the correct author of a given
text. We use Amazon reviews corpus and compile a large dataset (consisting of
250k instances across 200 authors/labels) for NoveltyTask. We conduct
comprehensive experiments and explore several baseline methods for the task.
Our results show that the methods achieve considerably low performance, making
the task challenging and leaving sufficient room for improvement. Finally, we
believe our work will encourage research in this underexplored area of dealing
with novelties, an important step en route to developing robust systems. | [
"cs.CL"
] | false |
2305.04417 | 2023-05-08T01:55:53Z | Unlocking Practical Applications in Legal Domain: Evaluation of GPT for
Zero-Shot Semantic Annotation of Legal Texts | [
"Jaromir Savelka"
] | We evaluated the capability of a state-of-the-art generative pre-trained
transformer (GPT) model to perform semantic annotation of short text snippets
(one to few sentences) coming from legal documents of various types.
Discussions of potential uses (e.g., document drafting, summarization) of this
emerging technology in legal domain have intensified, but to date there has not
been a rigorous analysis of these large language models' (LLM) capacity in
sentence-level semantic annotation of legal texts in zero-shot learning
settings. Yet, this particular type of use could unlock many practical
applications (e.g., in contract review) and research opportunities (e.g., in
empirical legal studies). We fill the gap with this study. We examined if and
how successfully the model can semantically annotate small batches of short
text snippets (10-50) based exclusively on concise definitions of the semantic
types. We found that the GPT model performs surprisingly well in zero-shot
settings on diverse types of documents (F1=.73 on a task involving court
opinions, .86 for contracts, and .54 for statutes and regulations). These
findings can be leveraged by legal scholars and practicing lawyers alike to
guide their decisions in integrating LLMs into a wide range of workflows involving
semantic annotation of legal texts. | [
"cs.CL",
"cs.AI"
] | false |
2305.04446 | 2023-05-08T03:50:38Z | Facilitating Fine-grained Detection of Chinese Toxic Language:
Hierarchical Taxonomy, Resources, and Benchmarks | [
"Junyu Lu",
"Bo Xu",
"Xiaokun Zhang",
"Changrong Min",
"Liang Yang",
"Hongfei Lin"
] | The widespread dissemination of toxic online posts is increasingly damaging
to society. However, research on detecting toxic language in Chinese has lagged
significantly. Existing datasets lack fine-grained annotation of toxic types
and expressions, and ignore the samples with indirect toxicity. In addition, it
is crucial to introduce lexical knowledge to detect the toxicity of posts,
which has been a challenge for researchers. In this paper, we facilitate the
fine-grained detection of Chinese toxic language. First, we built Monitor Toxic
Frame, a hierarchical taxonomy to analyze toxic types and expressions. Then, a
fine-grained dataset ToxiCN is presented, including both direct and indirect
toxic samples. We also build an insult lexicon containing implicit profanity
and propose Toxic Knowledge Enhancement (TKE) as a benchmark, incorporating the
lexical feature to detect toxic language. In the experimental stage, we
demonstrate the effectiveness of TKE. After that, a systematic quantitative and
qualitative analysis of the findings is given. | [
"cs.CL",
"cs.AI"
] | false |
2305.04460 | 2023-05-08T05:03:07Z | Language Independent Neuro-Symbolic Semantic Parsing for Form
Understanding | [
"Bhanu Prakash Voutharoja",
"Lizhen Qu",
"Fatemeh Shiri"
] | Recent works on form understanding mostly employ multimodal transformers or
large-scale pre-trained language models. These models need ample data for
pre-training. In contrast, humans can usually identify key-value pairings from
a form only by looking at layouts, even if they don't comprehend the language
used. No prior research has been conducted to investigate how helpful layout
information alone is for form understanding. Hence, we propose a unique
entity-relation graph parsing method for scanned forms called LAGNN, a
language-independent Graph Neural Network model. Our model parses a form into a
word-relation graph in order to identify entities and relations jointly and
reduce the time complexity of inference. This graph is then transformed by
deterministic rules into a fully connected entity-relation graph. Our model
simply takes into account relative spacing between bounding boxes from layout
information to facilitate easy transfer across languages. To further improve
the performance of LAGNN, and achieve isomorphism between entity-relation
graphs and word-relation graphs, we use integer linear programming (ILP) based
inference. Code is publicly available at https://github.com/Bhanu068/LAGNN | [
"cs.CL",
"cs.AI"
] | false |
2305.04631 | 2023-05-08T11:19:21Z | XAI in Computational Linguistics: Understanding Political Leanings in
the Slovenian Parliament | [
"Bojan Evkoski",
"Senja Pollak"
] | The work covers the development and explainability of machine learning models
for predicting political leanings through parliamentary transcriptions. We
concentrate on the Slovenian parliament and the heated debate on the European
migrant crisis, with transcriptions from 2014 to 2020. We develop both
classical machine learning and transformer language models to predict the left-
or right-leaning of parliamentarians based on their given speeches on the topic
of migrants. With both types of models showing great predictive success, we
continue with explaining their decisions. Using explainability techniques, we
identify keywords and phrases that have the strongest influence in predicting
political leanings on the topic: left-leaning parliamentarians use
concepts such as people and unity and speak about refugees, while right-leaning
parliamentarians use concepts such as nationality and focus more on illegal
migrants. This research illustrates that understanding the reasoning behind
predictions is beneficial not just for AI engineers seeking to improve their
models, but also as a tool in the qualitative analysis steps
in interdisciplinary research. | [
"cs.CL",
"cs.AI"
] | false |
2305.04843 | 2023-05-08T16:41:08Z | Reinforcement Learning for Topic Models | [
"Jeremy Costello",
"Marek Z. Reformat"
] | We apply reinforcement learning techniques to topic modeling by replacing the
variational autoencoder in ProdLDA with a continuous action space reinforcement
learning policy. We train the system with the policy gradient algorithm
REINFORCE. Additionally, we introduce several modifications: modernizing the
neural network architecture, weighting the ELBO loss, using contextual embeddings,
and monitoring the learning process by computing topic diversity and coherence
for each training step. Experiments are performed on 11 data sets. Our
unsupervised model outperforms all other unsupervised models and performs on
par with or better than most models using supervised labeling. Our model is
outperformed on certain data sets by a model using supervised labeling and
contrastive learning. We have also conducted an ablation study to provide
empirical evidence of performance improvements from changes we made to ProdLDA
and found that the reinforcement learning formulation boosts performance. | [
"cs.CL",
"cs.LG"
] | false |
2305.04859 | 2023-05-08T17:08:14Z | A Frustratingly Easy Improvement for Position Embeddings via Random
Padding | [
"Mingxu Tao",
"Yansong Feng",
"Dongyan Zhao"
] | Position embeddings, encoding the positional relationships among tokens in
text sequences, make great contributions to modeling local context features in
Transformer-based pre-trained language models. However, in Extractive Question
Answering, position embeddings trained with instances of varied context lengths
may not perform as well as we expect. Since the embeddings of rear positions are
updated fewer times than the front position embeddings, the rear ones may not
be properly trained. In this paper, we propose a simple but effective strategy,
Random Padding, without any modifications to architectures of existing
pre-trained language models. We adjust the token order of input sequences when
fine-tuning, to balance the number of updating times of every position
embedding. Experiments show that Random Padding can significantly improve model
performance on the instances whose answers are located at rear positions,
especially when models are trained on short contexts but evaluated on long
contexts. Our code and data will be released for future research. | [
"cs.CL",
"cs.AI"
] | false |
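A hedged sketch of the Random Padding idea described above: pad tokens are split at random between the front and rear of the sequence so that rear position embeddings receive updates more evenly. The pad id and the exact splitting rule are assumptions for illustration.

```python
# Sketch: randomly split padding between front and rear so content tokens
# land at varied positions and rear position embeddings get trained.
import random

PAD = 0  # assumed pad token id

def random_padding(ids, max_len, seed=None):
    rng = random.Random(seed)
    n_pad = max_len - len(ids)
    front = rng.randint(0, n_pad)          # how many pads go to the front
    return [PAD] * front + ids + [PAD] * (n_pad - front)

print(random_padding([101, 7592, 2088, 102], max_len=8, seed=3))
```

A real implementation would shift the attention mask and answer-span offsets consistently with the padding.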
2305.04971 | 2023-05-08T18:04:18Z | LABO: Towards Learning Optimal Label Regularization via Bi-level
Optimization | [
"Peng Lu",
"Ahmad Rashid",
"Ivan Kobyzev",
"Mehdi Rezagholizadeh",
"Philippe Langlais"
] | Regularization techniques are crucial to improving the generalization
performance and training efficiency of deep neural networks. Many deep learning
algorithms rely on weight decay, dropout, batch/layer normalization to converge
faster and generalize. Label Smoothing (LS) is another simple, versatile and
efficient regularization which can be applied to various supervised
classification tasks. Conventional LS, however, regardless of the training
instance assumes that each non-target class is equally likely. In this work, we
present a general framework for training with label regularization, which
includes conventional LS but can also model instance-specific variants. Based
on this formulation, we propose an efficient way of learning LAbel
regularization by devising a Bi-level Optimization (LABO) problem. We derive a
deterministic and interpretable solution of the inner loop as the optimal label
smoothing without the need to store the parameters or the output of a trained
model. Finally, we conduct extensive experiments and demonstrate our LABO
consistently yields improvements over conventional label regularization in
various fields, including seven machine translation and three image
classification tasks across various neural network architectures. | [
"cs.LG",
"cs.CL"
] | false |
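For reference, below is a minimal sketch of the conventional label smoothing objective that LABO generalizes; LABO itself learns instance-specific smoothing via bi-level optimization, which is not reproduced here.

```python
# Sketch: conventional label smoothing cross-entropy, where each non-target
# class receives the same eps/(K-1) probability mass.
import torch
import torch.nn.functional as F

def ls_cross_entropy(logits, target, eps=0.1):
    K = logits.size(-1)
    logp = F.log_softmax(logits, dim=-1)
    smooth = torch.full_like(logp, eps / (K - 1))      # uniform off-target mass
    smooth.scatter_(-1, target.unsqueeze(-1), 1.0 - eps)
    return -(smooth * logp).sum(dim=-1).mean()

loss = ls_cross_entropy(torch.randn(4, 5), torch.tensor([0, 2, 1, 4]))
print(loss.item())
```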
2305.04989 | 2023-05-08T18:53:14Z | Knowledge Graph Guided Semantic Evaluation of Language Models For User
Trust | [
"Kaushik Roy",
"Tarun Garg",
"Vedant Palit",
"Yuxin Zi",
"Vignesh Narayanan",
"Amit Sheth"
] | A fundamental question in natural language processing is - what kind of
language structure and semantics is the language model capturing? Graph formats
such as knowledge graphs are easy to evaluate as they explicitly express
language semantics and structure. This study evaluates the semantics encoded in
the self-attention transformers by leveraging explicit knowledge graph
structures. We propose novel metrics to measure the reconstruction error when
providing graph path sequences from a knowledge graph and trying to
reproduce/reconstruct the same from the outputs of the self-attention
transformer models. The opacity of language models has an immense bearing on
societal issues of trust and explainable decision outcomes. Our findings
suggest that language models are models of stochastic control processes for
plausible language pattern generation. However, they do not ascribe object and
concept-level meaning and semantics to the learned stochastic patterns such as
those described in knowledge graphs. Furthermore, to enable robust evaluation
of concept understanding by language models, we construct and make public an
augmented language understanding benchmark built on the General Language
Understanding Evaluation (GLUE) benchmark. This has significant
application-level user trust implications as stochastic patterns without a
strong sense of meaning cannot be trusted in high-stakes applications. | [
"cs.CL",
"cs.AI"
] | false |
2305.05010 | 2023-05-08T19:31:09Z | Do Not Blindly Imitate the Teacher: Using Perturbed Loss for Knowledge
Distillation | [
"Rongzhi Zhang",
"Jiaming Shen",
"Tianqi Liu",
"Jialu Liu",
"Michael Bendersky",
"Marc Najork",
"Chao Zhang"
] | Knowledge distillation is a popular technique to transfer knowledge from
large teacher models to a small student model. Typically, the student learns to
imitate the teacher by minimizing the KL divergence of its output distribution
with the teacher's output distribution. In this work, we argue that such a
learning objective is sub-optimal because there exists a discrepancy between
the teacher's output distribution and the ground truth label distribution.
Therefore, forcing the student to blindly imitate the unreliable teacher output
distribution leads to inferior performance. To this end, we propose a novel
knowledge distillation objective PTLoss by first representing the vanilla
KL-based distillation loss function via a Maclaurin series and then perturbing
the leading-order terms in this series. This perturbed loss implicitly
transforms the original teacher into a proxy teacher with a distribution closer
to the ground truth distribution. We establish the theoretical connection
between this "distribution closeness" and the student model generalizability,
which enables us to select the PTLoss's perturbation coefficients in a
principled way. Extensive experiments on five datasets demonstrate PTLoss can
significantly improve the distillation effectiveness for teachers of various
scales. | [
"cs.LG",
"cs.CL"
] | false |
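For context, this is a sketch of the vanilla KL-based distillation loss that PTLoss starts from; the Maclaurin-series rewriting and the principled perturbation coefficients from the paper are not reproduced, and the temperature is an illustrative assumption.

```python
# Sketch: vanilla knowledge distillation objective (the baseline PTLoss
# perturbs), i.e., KL divergence between temperature-softened distributions.
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, T=2.0):
    p_t = F.softmax(teacher_logits / T, dim=-1)          # teacher distribution
    logp_s = F.log_softmax(student_logits / T, dim=-1)   # student log-probs
    return F.kl_div(logp_s, p_t, reduction="batchmean") * T * T

print(kd_loss(torch.randn(8, 10), torch.randn(8, 10)).item())
```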
2305.05061 | 2023-05-08T21:35:12Z | Coherent Wave Dynamics and Language Generation of a Generative
Pre-trained Transformer | [
"Tao Hong"
] | Large Language Models (LLMs), such as the Generative Pretrained Transformer
(GPT), have achieved tremendous success in various language tasks, but their
emergent abilities have also raised many questions, concerns, and challenges
that need to be addressed. To gain a better understanding of the models' inner
mechanisms, we analyze the hidden state and channel wave dynamics in a small
GPT, focusing on the coherence of wave patterns in terms of cross-channel
correlation and individual auto-correlation. Our findings suggest that wave
dynamics offer consistent and repeatable intrinsic oscillation modes, along
with context-aware plasticity and expressiveness in language generation. By
analyzing wave patterns, coherence, and clustering, we provide a systematic way
to identify and interpret the functionality of the hidden state channels,
paving the way to understand and control higher-level language pattern
formation. In addition, we investigate the Poisson statistics of spelling
errors in text sequence generation across various levels of model training and
observe a phase-transition-like process. As coherence builds up, there is a
competition between the generation of correct and misspelled words. However,
once the model is adequately trained and significant coherence has emerged, the
coherent process becomes strong enough to effectively suppress spelling errors,
preventing the cascade amplification of defects. The distribution of correct
spellings transitions from Poissonian to Sub-Poissonian, while the distribution
of misspellings shows the opposite trend. By leveraging concepts and techniques
from quantum physics, we gain novel insights into the dynamics of the small
GPT. This approach can be extended to larger language models that exhibit more
complex coherent language patterns, opening up opportunities to interpret their
emergent capabilities and develop more specialized models. | [
"cs.CL",
"nlin.PS",
"68T07",
"I.2.7"
] | false |
2305.05094 | 2023-05-08T23:43:15Z | Interactive Concept Learning for Uncovering Latent Themes in Large Text
Collections | [
"Maria Leonor Pacheco",
"Tunazzina Islam",
"Lyle Ungar",
"Ming Yin",
"Dan Goldwasser"
] | Experts across diverse disciplines are often interested in making sense of
large text collections. Traditionally, this challenge is approached either by
noisy unsupervised techniques such as topic models, or by following a manual
theme discovery process. In this paper, we expand the definition of a theme to
account for more than just a word distribution, and include generalized
concepts deemed relevant by domain experts. Then, we propose an interactive
framework that receives and encodes expert feedback at different levels of
abstraction. Our framework strikes a balance between automation and manual
coding, allowing experts to maintain control of their study while reducing the
manual effort required. | [
"cs.CL",
"cs.HC"
] | false |
2305.06163 | 2023-05-08T15:51:38Z | Algebra Error Classification with Large Language Models | [
"Hunter McNichols",
"Mengxue Zhang",
"Andrew Lan"
] | Automated feedback as students answer open-ended math questions has
significant potential in improving learning outcomes at large scale. A key part
of automated feedback systems is an error classification component, which
identifies student errors and enables appropriate, predefined feedback to be
deployed. Most existing approaches to error classification use a rule-based
method, which has limited capacity to generalize. Existing data-driven methods
avoid these limitations but specifically require mathematical expressions in
student responses to be parsed into syntax trees. This requirement is itself a
limitation, since student responses are not always syntactically valid and
cannot be converted into trees. In this work, we introduce a flexible method
for error classification using pre-trained large language models. We
demonstrate that our method can outperform existing methods in algebra error
classification, and is able to classify a larger set of student responses.
Additionally, we analyze common classification errors made by our method and
discuss limitations of automated error classification. | [
"cs.CL",
"cs.AI"
] | false |
2305.06358 | 2023-05-08T23:57:26Z | Accessible Instruction-Following Agent | [
"Kairui Zhou"
] | Humans can collaborate and complete tasks based on visual signals and
instructions from the environment. Training such a robot is difficult,
especially because of the need to understand instructions and complicated
environments. Previous instruction-following agents are biased toward
English-centric corpora, making them inapplicable to users who speak multiple
languages or even low-resource languages. Moreover, instruction-following
agents are pre-trained in a mode that assumes the user can observe the
environment, which limits their accessibility. In this work, we aim to
generalize the success of instruction-following agents to non-English
languages with little corpus resources, and to improve their interactivity
and accessibility. We introduce UVLN
(Universal Vision-Language Navigation), a novel machine-translation
instructional augmented framework for cross-lingual vision-language navigation,
with a novel composition of state-of-the-art large language model (GPT3) with
the image caption model (BLIP). We first collect a multilanguage
vision-language navigation dataset via machine translation. Then we extend the
standard VLN training objectives to a multilingual setting via a cross-lingual
language encoder. The alignment between different languages is captured through
a shared vision and action context via a cross-modal transformer, which encodes
the inputs of language instruction, visual observation, and action decision
sequences. To improve interactivity, we connect our agent with a large
language model that reports the situation and current state to the user and
also explains the action decisions. Experiments on the Room Across Room dataset
prove the effectiveness of our approach, and the qualitative results show the
promising interactivity and accessibility of our instruction-following agent. | [
"cs.AI",
"cs.CL"
] | false |
2305.07666 | 2023-05-08T18:26:39Z | Imitation versus Innovation: What children can do that large language
and language-and-vision models cannot (yet)? | [
"Eunice Yiu",
"Eliza Kosoy",
"Alison Gopnik"
] | Much discussion about large language models and language-and-vision models
has focused on whether these models are intelligent agents. We present an
alternative perspective. We argue that these artificial intelligence models are
cultural technologies that enhance cultural transmission in the modern world,
and are efficient imitation engines. We explore what AI models can tell us
about imitation and innovation by evaluating their capacity to design new tools
and discover novel causal structures, and contrast their responses with those
of human children. Our work serves as a first step in determining which
particular representations and competences, as well as which kinds of knowledge
or skill, can be derived from particular learning techniques and data.
Critically, our findings suggest that machines may need more than large scale
language and images to achieve what a child can do. | [
"cs.AI",
"cs.CL"
] | false |
2305.15323 | 2023-05-08T14:54:44Z | ChatGPT: Vision and Challenges | [
"Sukhpal Singh Gill",
"Rupinder Kaur"
] | Artificial intelligence (AI) and machine learning have changed the nature of
scientific inquiry in recent years. Of these, the development of virtual
assistants has accelerated greatly in the past few years, with ChatGPT becoming
a prominent AI language model. In this study, we examine the foundations,
vision, and research challenges of ChatGPT. This article investigates the
background and development of the technology behind it, as well as its popular
applications. Moreover, we discuss the advantages of bringing everything
together through ChatGPT and Internet of Things (IoT). Further, we speculate on
the future of ChatGPT by considering various possibilities for study and
development, such as energy-efficiency, cybersecurity, enhancing its
applicability to additional technologies (Robotics and Computer Vision),
strengthening human-AI communications, and bridging the technological gap.
Finally, we discuss the important ethics and current trends of ChatGPT. | [
"cs.CY",
"cs.CL"
] | false |
2305.18086 | 2023-05-08T17:57:34Z | The impact and applications of ChatGPT: a systematic review of
literature reviews | [
"Irene S. Gabashvili"
] | The conversational artificial-intelligence (AI) technology ChatGPT has become
one of the most widely used natural language processing tools. With thousands
of published papers demonstrating its applications across various industries
and fields, ChatGPT has sparked significant interest in the research community.
Reviews of primary data have also begun to emerge. An overview of the available
evidence from multiple reviews and studies could provide further insights,
minimize redundancy, and identify areas where further research is needed.
Objective: To evaluate the existing reviews and literature related to ChatGPT's
applications and its potential impact on different fields by conducting a
systematic review of reviews and bibliometric analysis of primary literature.
Methods: PubMed, EuropePMC, Dimensions AI, medRxiv, bioRxiv, arXiv, and Google
Scholar were searched for ChatGPT-related publications from 2022 to 4/30/2023.
Studies including secondary data related to the application of ChatGPT were
considered. Reporting and risk of bias assessment was performed using PRISMA
guidelines. Results: A total of 305 unique records with potential relevance to
the review were identified from a pool of over 2,000 original articles. After
a multi-step screening process, 11 reviews were selected, consisting of 9 reviews
specifically focused on ChatGPT and 2 reviews on broader AI topics that also
included discussions on ChatGPT. We also conducted bibliometric analysis of
primary data. Conclusions: While AI has the potential to revolutionize various
industries, further interdisciplinary research, customized integrations, and
ethical innovation are necessary to address existing concerns and ensure its
responsible use. Protocol Registration: PROSPERO registration no.
CRD42023417336, DOI 10.17605/OSF.IO/87U6Q. | [
"cs.CY",
"cs.CL"
] | false |
2305.04400 | 2023-05-08T01:02:52Z | Do Large Language Models Show Decision Heuristics Similar to Humans? A
Case Study Using GPT-3.5 | [
"Gaurav Suri",
"Lily R. Slater",
"Ali Ziaee",
"Morgan Nguyen"
] | A Large Language Model (LLM) is an artificial intelligence system that has
been trained on vast amounts of natural language data, enabling it to generate
human-like responses to written or spoken language input. GPT-3.5 is an example
of an LLM that supports a conversational agent called ChatGPT. In this work, we
used a series of novel prompts to determine whether ChatGPT shows heuristics,
biases, and other decision effects. We also tested the same prompts on human
participants. Across four studies, we found that ChatGPT was influenced by
random anchors in making estimates (Anchoring Heuristic, Study 1); it judged
the likelihood of two events occurring together to be higher than the
likelihood of either event occurring alone, and it was erroneously influenced
by salient anecdotal information (Representativeness and Availability
Heuristic, Study 2); it found an item to be more efficacious when its features
were presented positively rather than negatively - even though both
presentations contained identical information (Framing Effect, Study 3); and it
valued an owned item more than a newly found item even though the two items
were identical (Endowment Effect, Study 4). In each study, human participants
showed similar effects. Heuristics and related decision effects in humans are
thought to be driven by cognitive and affective processes such as loss aversion
and effort reduction. The fact that an LLM - which lacks these processes - also
shows such effects invites consideration of the possibility that language may
play a role in generating these effects in humans. | [
"cs.AI",
"cs.CL",
"q-bio.NC"
] | false |
2305.04518 | 2023-05-08T07:28:16Z | Sparks of Artificial General Recommender (AGR): Early Experiments with
ChatGPT | [
"Guo Lin",
"Yongfeng Zhang"
] | This study investigates the feasibility of developing an Artificial General
Recommender (AGR), facilitated by recent advancements in Large Language Models
(LLMs). An AGR comprises both conversationality and universality to engage in
natural dialogues and generate recommendations across various domains. We
propose ten fundamental principles that an AGR should adhere to, each with its
corresponding testing protocols. We proceed to assess whether ChatGPT, a
sophisticated LLM, can comply with the proposed principles by engaging in
recommendation-oriented dialogues with the model while observing its behavior.
Our findings demonstrate the potential for ChatGPT to serve as an AGR, though
several limitations and areas for improvement are identified. | [
"cs.IR",
"cs.CL",
"cs.LG"
] | false |
2305.04533 | 2023-05-08T08:09:00Z | Prompted LLMs as Chatbot Modules for Long Open-domain Conversation | [
"Gibbeum Lee",
"Volker Hartmann",
"Jongho Park",
"Dimitris Papailiopoulos",
"Kangwook Lee"
] | In this paper, we propose MPC (Modular Prompted Chatbot), a new approach for
creating high-quality conversational agents without the need for fine-tuning.
Our method utilizes pre-trained large language models (LLMs) as individual
modules for long-term consistency and flexibility, by using techniques such as
few-shot prompting, chain-of-thought (CoT), and external memory. Our human
evaluation results show that MPC is on par with fine-tuned chatbot models in
open-domain conversations, making it an effective solution for creating
consistent and engaging chatbots. | [
"cs.CL",
"cs.AI",
"cs.LG"
] | false |
2305.04905 | 2023-05-08T17:42:23Z | What Do Patients Say About Their Disease Symptoms? Deep Multilabel Text
Classification With Human-in-the-Loop Curation for Automatic Labeling of
Patient Self Reports of Problems | [
"Lakshmi Arbatti",
"Abhishek Hosamath",
"Vikram Ramanarayanan",
"Ira Shoulson"
] | The USA Food and Drug Administration has accorded increasing importance to
patient-reported problems in clinical and research settings. In this paper, we
explore one of the largest online datasets comprising 170,141 open-ended
self-reported responses (called "verbatims") from patients with Parkinson's
(PwPs) to questions about what bothers them about their Parkinson's Disease and
how it affects their daily functioning, also known as the Parkinson's Disease
Patient Report of Problems. Classifying such verbatims into multiple clinically
relevant symptom categories is an important problem and requires multiple steps
- expert curation, a multi-label text classification (MLTC) approach and large
amounts of labelled training data. Further, human annotation of such large
datasets is tedious and expensive. We present a novel solution to this problem
where we build a baseline dataset using 2,341 (of the 170,141) verbatims
annotated by nine curators including clinical experts and PwPs. We develop a
rules-based linguistic dictionary using NLP techniques and a graph-database-based
expert phrase-query system to scale the annotation to the remaining cohort,
generating the machine-annotated dataset, and finally build a Keras-TensorFlow
based MLTC model for both datasets. The machine annotated model significantly
outperforms the baseline model with a F1-score of 95% across 65 symptom
categories on a held-out test set. | [
"cs.CL",
"cs.LG",
"eess.AS"
] | false |
2305.05383 | 2023-05-08T10:00:05Z | Code Execution with Pre-trained Language Models | [
"Chenxiao Liu",
"Shuai Lu",
"Weizhu Chen",
"Daxin Jiang",
"Alexey Svyatkovskiy",
"Shengyu Fu",
"Neel Sundaresan",
"Nan Duan"
] | Code execution is a fundamental aspect of programming language semantics that
reflects the exact behavior of the code. However, most pre-trained models for
code intelligence ignore the execution trace and only rely on source code and
syntactic structures. In this paper, we investigate how well pre-trained models
can understand and perform code execution. We develop a mutation-based data
augmentation technique to create a large-scale and realistic Python dataset and
task for code execution, which challenges existing models such as Codex. We
then present CodeExecutor, a Transformer model that leverages code execution
pre-training and curriculum learning to enhance its semantic comprehension. We
evaluate CodeExecutor on code execution and show its promising performance and
limitations. We also demonstrate its potential benefits for code intelligence
tasks such as zero-shot code-to-code search and text-to-code generation. Our
analysis provides insights into the learning and generalization abilities of
pre-trained models for code execution. | [
"cs.PL",
"cs.AI",
"cs.CL",
"cs.SE"
] | true |
2305.06218 | 2023-05-08T22:42:48Z | Multi-Task End-to-End Training Improves Conversational Recommendation | [
"Naveen Ram",
"Dima Kuzmin",
"Ellie Ka In Chio",
"Moustafa Farid Alzantot",
"Santiago Ontanon",
"Ambarish Jash",
"Judith Yue Li"
] | In this paper, we analyze the performance of a multitask end-to-end
transformer model on the task of conversational recommendations, which aim to
provide recommendations based on a user's explicit preferences expressed in
dialogue. While previous works in this area adopt complex multi-component
approaches where the dialogue management and entity recommendation tasks are
handled by separate components, we show that a unified transformer model, based
on the T5 text-to-text transformer model, can perform competitively in both
recommending relevant items and generating conversation dialogue. We fine-tune
our model on the ReDIAL conversational movie recommendation dataset, and create
additional training tasks derived from MovieLens (such as the prediction of
movie attributes and related movies based on an input movie), in a multitask
learning setting. Using a series of probe studies, we demonstrate that the
learned knowledge in the additional tasks is transferred to the conversational
setting, where each task leads to a 9%-52% increase in its related probe score. | [
"cs.CL",
"cs.AI",
"cs.IR"
] | true |
2305.06223 | 2023-05-08T19:21:41Z | ComputeGPT: A computational chat model for numerical problems | [
"Ryan Hardesty Lewis",
"Junfeng Jiao"
] | Language models are not accurate in numerical problems. Their architecture
does not allow for anything more than probabilistic next-word prediction. This paper
introduces ComputeGPT: an approach of creating a chat model able to answer
computational problems through running on-demand code. ComputeGPT converts each
question to relevant code, runs the code, and returns the computed answer as
part of the chat. We combine this approach with a local browser-based Python
interpretation and fine-tuned prompts in order to achieve state-of-the-art
efficiency on numerical problems and provide a suitable front-end and safe
environment for the code to be executed in. | [
"cs.PL",
"cs.AI",
"cs.CL",
"68T50, 68N18, 97R50",
"I.2.7; I.2.6; H.5.2"
] | false |
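A toy sketch of the question-to-code-to-answer loop described above; `question_to_code` is a hypothetical stand-in for the fine-tuned model, and the builtin allowlist is a simplification, not a real sandbox.

```python
# Sketch: convert a question to code, execute it, and return the result.
def question_to_code(question: str) -> str:
    # A real system would call the chat model here (assumption).
    return "result = sum(i * i for i in range(1, 11))"

def answer(question: str):
    allowed = {"__builtins__": {"sum": sum, "range": range}}  # toy allowlist
    namespace: dict = {}
    exec(question_to_code(question), allowed, namespace)
    return namespace["result"]

print(answer("What is the sum of squares from 1 to 10?"))  # 385
```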
2305.04477 | 2023-05-08T06:02:11Z | Behavior Contrastive Learning for Unsupervised Skill Discovery | [
"Rushuai Yang",
"Chenjia Bai",
"Hongyi Guo",
"Siyuan Li",
"Bin Zhao",
"Zhen Wang",
"Peng Liu",
"Xuelong Li"
] | In reinforcement learning, unsupervised skill discovery aims to learn diverse
skills without extrinsic rewards. Previous methods discover skills by
maximizing the mutual information (MI) between states and skills. However, such
an MI objective tends to learn simple and static skills and may hinder
exploration. In this paper, we propose a novel unsupervised skill discovery
method through contrastive learning among behaviors, which makes the agent
produce similar behaviors for the same skill and diverse behaviors for
different skills. Under mild assumptions, our objective maximizes the MI
between different behaviors based on the same skill, which serves as an upper
bound of the previous MI objective. Meanwhile, our method implicitly increases
the state entropy to obtain better state coverage. We evaluate our method on
challenging mazes and continuous control tasks. The results show that our
method generates diverse and far-reaching skills, and also obtains competitive
performance in downstream tasks compared to the state-of-the-art methods. | [
"cs.LG"
] | false |
2305.04618 | 2023-05-08T10:56:06Z | A LSTM and Cost-Sensitive Learning-Based Real-Time Warning for Civil
Aviation Over-limit | [
"Yiming Bian"
] | The issue of over-limit during passenger aircraft flights has drawn
increasing attention in civil aviation due to its potential safety risks. To
address this issue, real-time automated warning systems are essential. In this
study, a real-time warning model for civil aviation over-limit is proposed
based on QAR data monitoring. Firstly, highly correlated attributes to
over-limit are extracted from a vast QAR dataset using the Spearman rank
correlation coefficient. Because flight over-limit poses a binary
classification problem with unbalanced samples, this paper incorporates
cost-sensitive learning in the LSTM model. Finally, the time step length,
number of LSTM cells, and learning rate in the LSTM model are optimized using a
grid search approach. The model is trained on a real dataset, and its
performance is evaluated on a validation set. The experimental results show
that the proposed model achieves an F1 score of 0.991 and an accuracy of 0.978,
indicating its effectiveness in real-time warning of civil aviation over-limit. | [
"cs.LG"
] | false |
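A minimal sketch of the cost-sensitive LSTM ingredient described in the abstract above; feature dimensions and the class weight are illustrative assumptions, not values from the paper.

```python
# Sketch: binary LSTM classifier whose loss up-weights the rare
# over-limit class via pos_weight in the BCE objective.
import torch
import torch.nn as nn

class WarningLSTM(nn.Module):
    def __init__(self, n_features=16, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                 # x: (batch, time, features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])      # logit from the last time step

model = WarningLSTM()
x = torch.randn(32, 20, 16)
y = torch.randint(0, 2, (32, 1)).float()
# pos_weight > 1 penalizes missed over-limit events more than false alarms
loss_fn = nn.BCEWithLogitsLoss(pos_weight=torch.tensor([20.0]))
print(loss_fn(model(x), y).item())
```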
2305.04670 | 2023-05-08T12:48:18Z | Analysis of Numerical Integration in RNN-Based Residuals for Fault
Diagnosis of Dynamic Systems | [
"Arman Mohammadi",
"Theodor Westny",
"Daniel Jung",
"Mattias Krysander"
] | Data-driven modeling and machine learning are widely used to model the
behavior of dynamic systems. One application is the residual evaluation of
technical systems where model predictions are compared with measurement data to
create residuals for fault diagnosis applications. While recurrent neural
network models have been shown capable of modeling complex non-linear dynamic
systems, they are limited to fixed steps discrete-time simulation. Modeling
using neural ordinary differential equations, however, make it possible to
evaluate the state variables at specific times, compute gradients when training
the model and use standard numerical solvers to explicitly model the underlying
dynamic of the time-series data. Here, the effect of solver selection on the
performance of neural ordinary differential equation residuals during training
and evaluation is investigated. The paper includes a case study of a heavy-duty
truck's after-treatment system to highlight the potential of these techniques
for improving fault diagnosis performance. | [
"cs.LG"
] | false |
2305.04684 | 2023-05-08T12:59:49Z | ASDL: A Unified Interface for Gradient Preconditioning in PyTorch | [
"Kazuki Osawa",
"Satoki Ishikawa",
"Rio Yokota",
"Shigang Li",
"Torsten Hoefler"
] | Gradient preconditioning is a key technique to integrate the second-order
information into gradients for improving and extending gradient-based learning
algorithms. In deep learning, stochasticity, nonconvexity, and high
dimensionality lead to a wide variety of gradient preconditioning methods, with
implementation complexity and inconsistent performance and feasibility. We
propose the Automatic Second-order Differentiation Library (ASDL), an extension
library for PyTorch, which offers various implementations and a plug-and-play
unified interface for gradient preconditioning. ASDL enables the study and
structured comparison of a range of gradient preconditioning methods. | [
"cs.LG"
] | false |
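For intuition, here is a generic diagonal (Adagrad-style) gradient-preconditioning sketch in plain PyTorch; it does not use or depict ASDL's actual interface.

```python
# Sketch: scale each gradient by the inverse square root of an accumulated
# second-moment estimate -- a simple diagonal preconditioner.
import torch

w = torch.randn(5, requires_grad=True)
state = torch.zeros_like(w)               # running sum of squared gradients
lr, eps = 0.1, 1e-8

for step in range(3):
    loss = (w ** 2).sum()                 # toy objective
    loss.backward()
    with torch.no_grad():
        state += w.grad ** 2
        w -= lr * w.grad / (state.sqrt() + eps)   # preconditioned update
        w.grad.zero_()
print(w)
```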
2305.04754 | 2023-05-08T14:57:06Z | Is AUC the best measure for practical comparison of anomaly detectors? | [
"Vít Škvára",
"Tomáš Pevný",
"Václav Šmídl"
] | The area under receiver operating characteristics (AUC) is the standard
measure for comparison of anomaly detectors. Its advantage is in providing a
scalar number that allows a natural ordering and is independent on a threshold,
which allows to postpone the choice. In this work, we question whether AUC is a
good metric for anomaly detection, or if it gives a false sense of comfort, due
to relying on assumptions which are unlikely to hold in practice. Our
investigation shows that variations of AUC emphasizing accuracy at low false
positive rate seem to be better correlated with the needs of practitioners, but
also that we can compare anomaly detectors only in the case when we have
representative examples of anomalous samples. This last result is disturbing,
as it suggests that in many cases, we should do active or few-shot learning
instead of pure anomaly detection. | [
"cs.LG"
] | false |
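The low-false-positive-rate variant of AUC discussed above can be computed with scikit-learn's standardized partial AUC; the synthetic scores below are only for illustration.

```python
# Sketch: full AUC vs. partial AUC restricted to FPR <= 0.05, which the
# abstract argues better matches practitioners' needs.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
y = np.concatenate([np.zeros(950), np.ones(50)])          # rare anomalies
scores = np.concatenate([rng.normal(0, 1, 950), rng.normal(1.5, 1, 50)])

print("full AUC:          ", roc_auc_score(y, scores))
print("pAUC (FPR <= 0.05):", roc_auc_score(y, scores, max_fpr=0.05))
```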
2305.04432 | 2023-05-08T03:00:59Z | Goal-oriented inference of environment from redundant observations | [
"Kazuki Takahashi",
"Tomoki Fukai",
"Yutaka Sakai",
"Takashi Takekawa"
] | An agent learns to organize its decision behavior to achieve a behavioral goal,
such as reward maximization, and reinforcement learning is often used for this
optimization. Learning an optimal behavioral strategy is difficult under the
uncertainty that events necessary for learning are only partially observable,
a setting called a Partially Observable Markov Decision Process (POMDP). However, the
real-world environment also yields many events irrelevant to reward delivery and
to an optimal behavioral strategy. The conventional methods for POMDP, which
attempt to infer transition rules among the entire observations, including
irrelevant states, are ineffective in such an environment. Assuming a
Redundantly Observable Markov Decision Process (ROMDP), here we propose a
method for goal-oriented reinforcement learning to efficiently learn state
transition rules among reward-related "core states'' from redundant
observations. Starting with a small number of initial core states, our model
gradually adds new core states to the transition diagram until it achieves an
optimal behavioral strategy consistent with the Bellman equation. We
demonstrate that the resultant inference model outperforms the conventional
method for POMDP. We emphasize that our model, which contains only the core
states, is highly explainable. Furthermore, the proposed method is well suited
to online learning, as it reduces memory consumption and improves learning speed. | [
"cs.LG",
"cs.AI"
] | false |
2305.04433 | 2023-05-08T03:05:55Z | Accelerated Algorithms for a Class of Optimization Problems with
Equality and Box Constraints | [
"Anjali Parashar",
"Priyank Srivastava",
"Anuradha M. Annaswamy"
] | Convex optimization with equality and inequality constraints is a ubiquitous
problem in several optimization and control problems in large-scale systems.
Recently there has been considerable interest in establishing accelerated
convergence of the loss function. A class of high-order tuners was recently
proposed to achieve accelerated convergence when no
constraints are present. In this paper, we propose a new high-order tuner that
can accommodate the presence of equality constraints. In order to accommodate
the underlying box constraints, time-varying gains are introduced in the
high-order tuner which leverage convexity and ensure anytime feasibility of the
constraints. Numerical examples are provided to support the theoretical
derivations. | [
"math.OC",
"cs.LG"
] | false |
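For intuition on anytime feasibility under box constraints, here is a minimal
baseline sketch (plain projected gradient descent, not the paper's high-order
tuner): clipping each iterate back into the box keeps it feasible at every
step.

```python
import numpy as np

def projected_gradient_descent(grad_f, x0, lower, upper, lr=0.1, steps=200):
    """Baseline sketch, not the paper's method: gradient descent with a
    projection onto the box [lower, upper] after every update, so the
    iterates are feasible at all times."""
    x = np.clip(np.asarray(x0, dtype=float), lower, upper)
    for _ in range(steps):
        x = np.clip(x - lr * grad_f(x), lower, upper)
    return x

# Minimize f(x) = ||x - c||^2 with c partly outside the box [0, 1]^2.
c = np.array([1.5, -0.3])
x_star = projected_gradient_descent(lambda x: 2 * (x - c), [0.5, 0.5], 0.0, 1.0)
print(x_star)  # approximately [1.0, 0.0], the projection of c onto the box
```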
2305.04468 | 2023-05-08T05:42:24Z | AnomalyBERT: Self-Supervised Transformer for Time Series Anomaly
Detection using Data Degradation Scheme | [
"Yungi Jeong",
"Eunseok Yang",
"Jung Hyun Ryu",
"Imseong Park",
"Myungjoo Kang"
] | Mechanical defects in real situations affect observation values and cause
abnormalities in multivariate time series, such as sensor values or network
data. To perceive abnormalities in such data, it is crucial to understand the
temporal context and interrelation between variables simultaneously. The
anomaly detection task for time series, especially for unlabeled data, has been
a challenging problem, and we address it by applying a suitable data
degradation scheme to self-supervised model training. We define four types of
synthetic outliers and propose the degradation scheme in which a portion of
input data is replaced with one of the synthetic outliers. Inspired by the
self-attention mechanism, we design a Transformer-based architecture to
recognize the temporal context and detect unnatural sequences with high
efficiency. Our model converts multivariate data points into temporal
representations with relative position bias and yields anomaly scores from
these representations. Our method, AnomalyBERT, shows a strong capability for
detecting anomalies contained in complex time series and surpasses previous
state-of-the-art methods on five real-world benchmarks. Our code is available
at https://github.com/Jhryu30/AnomalyBERT. | [
"cs.LG",
"cs.AI"
] | false |
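A minimal sketch of what one data-degradation step could look like, assuming a
simple amplitude-spike outlier; the paper defines four synthetic outlier types,
so this only illustrates the general recipe of replacing a sub-sequence and
supervising on the replacement mask.

```python
import numpy as np

def degrade(window, rng, scale=3.0):
    """Replace a random sub-sequence of a multivariate window (T, D) with
    a synthetic outlier and return the degraded window plus a 0/1 mask
    marking the replaced points. The spike outlier here is one
    illustrative choice, not one of the paper's exact four types."""
    T, _ = window.shape
    length = rng.integers(T // 20 + 1, T // 5 + 1)
    start = rng.integers(0, T - length)
    out = window.copy()
    out[start:start + length] += scale * window.std(axis=0)  # amplitude spike
    mask = np.zeros(T)
    mask[start:start + length] = 1.0  # self-supervised target: "is degraded?"
    return out, mask

rng = np.random.default_rng(0)
x = np.sin(np.linspace(0, 20, 512))[:, None] + 0.1 * rng.normal(size=(512, 1))
x_degraded, y = degrade(x, rng)
```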
2305.04513 | 2023-05-08T07:14:50Z | Blockchained Federated Learning for Internet of Things: A Comprehensive
Survey | [
"Yanna Jiang",
"Baihe Ma",
"Xu Wang",
"Ping Yu",
"Guangsheng Yu",
"Zhe Wang",
"Wei Ni",
"Ren Ping Liu"
] | The demand for intelligent industries and smart services based on big data is
rising rapidly with the increasing digitization and intelligence of the modern
world. This survey comprehensively reviews Blockchained Federated Learning
(BlockFL), which joins the benefits of both Blockchain and Federated Learning
to provide a secure and efficient solution for this demand. We compare existing
BlockFL models in four Internet-of-Things (IoT) application scenarios: Personal
IoT (PIoT), Industrial IoT (IIoT), Internet of Vehicles (IoV), and Internet of
Health Things (IoHT), with a focus on security and privacy, trust and
reliability, efficiency, and data heterogeneity. Our analysis shows that the
features of decentralization and transparency make BlockFL a secure and
effective solution for distributed model training, while the overhead and
compatibility still need further study. It also reveals that each domain
presents unique challenges, e.g., the requirement of accommodating dynamic
environments in IoV and the high demands of identity and permission management
in IoHT, in addition to common challenges such as privacy, resource
constraints, and data heterogeneity.
Furthermore, we examine the existing technologies that can benefit BlockFL,
thereby helping researchers and practitioners to make informed decisions about
the selection and development of BlockFL for various IoT application scenarios. | [
"cs.LG",
"cs.CR"
] | false |
2305.04539 | 2023-05-08T08:22:18Z | Q&A Label Learning | [
"Kota Kawamoto",
"Masato Uchida"
] | Assigning labels to instances is crucial for supervised machine learning. In
this paper, we propose a novel annotation method called Q&A labeling, which
involves a question generator that asks questions about the labels of the
instances to be assigned, and an annotator who answers the questions and
assigns the corresponding labels to the instances. We derived a generative
model of labels assigned according to two different Q&A labeling procedures
that differ in the way questions are asked and answered. We showed that, in
both procedures, the derived model is partially consistent with that assumed in
previous studies. The main distinction of this study from previous studies lies
in the fact that the label generative model was not assumed, but rather derived
based on the definition of a specific annotation method, Q&A labeling. We also
derived a loss function to evaluate the classification risk of ordinary
supervised machine learning using instances assigned Q&A labels and evaluated
the upper bound of the classification error. The results indicate statistical
consistency in learning with Q&A labels. | [
"cs.LG",
"stat.ML"
] | false |
2305.04638 | 2023-05-08T11:35:22Z | Learning Good Interventions in Causal Graphs via Covering | [
"Ayush Sawarni",
"Rahul Madhavan",
"Gaurav Sinha",
"Siddharth Barman"
] | We study the causal bandit problem that entails identifying a near-optimal
intervention from a specified set $A$ of (possibly non-atomic) interventions
over a given causal graph. Here, an optimal intervention in ${A}$ is one that
maximizes the expected value for a designated reward variable in the graph, and
we use the standard notion of simple regret to quantify near optimality.
Considering Bernoulli random variables and for causal graphs on $N$ vertices
with constant in-degree, prior work has achieved a worst case guarantee of
$\widetilde{O} (N/\sqrt{T})$ for simple regret. The current work utilizes the
idea of covering interventions (which are not necessarily contained within
${A}$) and establishes a simple regret guarantee of
$\widetilde{O}(\sqrt{N/T})$. Notably, and in contrast to prior work, our simple
regret bound depends only on explicit parameters of the problem instance. We
also go beyond prior work and achieve a simple regret guarantee for causal
graphs with unobserved variables. Further, we perform experiments to show
improvements over baselines in this setting. | [
"cs.LG",
"cs.AI"
] | false |
2305.04675 | 2023-05-08T12:51:16Z | Predicting nuclear masses with product-unit networks | [
"Babette Dellen",
"Uwe Jaekel",
"Paulo S. A. Freitas",
"John W. Clark"
] | Accurate estimation of nuclear masses and their prediction beyond the
experimentally explored domains of the nuclear landscape are crucial to an
understanding of the fundamental origin of nuclear properties and to many
applications of nuclear science, most notably in quantifying the $r$-process of
stellar nucleosynthesis. Neural networks have been applied with some success to
the prediction of nuclear masses, but they are known to have shortcomings in
application to extrapolation tasks. In this work, we propose and explore a
novel type of neural network for mass prediction in which the usual neuron-like
processing units are replaced by complex-valued product units that permit
multiplicative couplings of inputs to be learned from the input data. This
generalized network model is tested on both interpolation and extrapolation
data sets drawn from the Atomic Mass Evaluation. Its performance is compared
with that of several neural-network architectures, substantiating its
suitability for nuclear mass prediction. Additionally, a prediction-uncertainty
measure for such complex-valued networks is proposed that serves to identify
regions of expected low prediction error. | [
"nucl-th",
"cs.LG"
] | false |
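A minimal PyTorch sketch of a product-unit layer under the standard exp-log
formulation, y_j = prod_i x_i^{w_ji} = exp(sum_i w_ji log x_i); the layer
shape, initialization, and the choice to return the real part are illustrative
assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class ProductUnitLayer(nn.Module):
    """Sketch: each output multiplies the inputs raised to learnable
    powers. Casting to complex lets the logarithm handle negative
    inputs, in the spirit of the complex-valued networks above."""

    def __init__(self, in_features, out_features):
        super().__init__()
        self.weight = nn.Parameter(0.1 * torch.randn(out_features, in_features))

    def forward(self, x):
        z = torch.log(x.to(torch.cfloat))                   # complex log
        y = torch.exp(z @ self.weight.t().to(torch.cfloat))
        return y.real                                       # illustrative choice

layer = ProductUnitLayer(4, 2)
out = layer(torch.tensor([[1.0, 2.0, 0.5, -1.5]]))
```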
2305.04701 | 2023-05-08T13:32:41Z | Differentially Private Attention Computation | [
"Yeqi Gao",
"Zhao Song",
"Xin Yang"
Large language models (LLMs) have had a profound impact on numerous aspects
of daily life, including natural language processing, content generation,
research methodologies, and so on. However, one crucial issue concerning the
inference results of large language models is security and privacy. In many
scenarios, the results generated by LLMs could leak confidential or
copyrighted information. A recent breakthrough work [Vyas, Kakade and Barak
2023] focuses on this privacy issue of LLMs from a theoretical perspective. It
is well known that computing the attention matrix is one of the major tasks in
LLM computation. Thus, how to give provable privacy guarantees for computing
the attention matrix is an important research direction.
Previous work [Alman and Song 2023; Brand, Song and Zhou 2023] has proposed
provably tight results for the fast computation of attention without
considering privacy concerns. One natural mathematical formulation for
quantifying privacy in theoretical computer science is differential privacy.
Inspired by [Vyas, Kakade and Barak 2023], in this work we provide a provable
result showing how to approximate the attention matrix in a differentially
private manner.
From a technical perspective, our result relies on pioneering work in the
area of differential privacy by [Alabi, Kothari, Tankala, Venkat and Zhang
2022]. | [
"cs.LG",
"cs.CR"
] | false |
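For intuition only, here is a naive sketch that perturbs the attention logits
with Gaussian noise in the spirit of the Gaussian mechanism; this is not the
paper's algorithm, and the noise scale would have to be calibrated to the
sensitivity of $QK^T$ to obtain a formal $(\epsilon, \delta)$-DP guarantee.

```python
import numpy as np

def noisy_attention(Q, K, sigma=0.1, rng=None):
    """Naive illustration, not the paper's mechanism: add Gaussian noise
    to the scaled attention logits before the row-softmax."""
    rng = rng or np.random.default_rng(0)
    d = Q.shape[-1]
    logits = Q @ K.T / np.sqrt(d)
    logits = logits + rng.normal(0.0, sigma, logits.shape)
    logits -= logits.max(axis=-1, keepdims=True)  # numerically stable softmax
    weights = np.exp(logits)
    return weights / weights.sum(axis=-1, keepdims=True)
```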
2305.04727 | 2023-05-08T14:23:27Z | DEFENDER: DTW-Based Episode Filtering Using Demonstrations for Enhancing
RL Safety | [
"André Correia",
"Luís Alexandre"
] | Deploying reinforcement learning agents in the real world can be challenging
due to the risks associated with learning through trial and error. We propose a
task-agnostic method that leverages small sets of safe and unsafe
demonstrations to improve the safety of RL agents during learning. The method
compares the current trajectory of the agent with both sets of demonstrations
at every step, and filters the trajectory if it resembles the unsafe
demonstrations. We perform ablation studies on different filtering strategies
and investigate the impact of the number of demonstrations on performance. Our
method is compatible with any stand-alone RL algorithm and can be applied to
any task. We evaluate our method on three tasks from OpenAI Gym's Mujoco
benchmark and two state-of-the-art RL algorithms. The results demonstrate that
our method significantly reduces the crash rate of the agent while converging
to, and in most cases even improving, the performance of the stand-alone agent. | [
"cs.LG",
"cs.AI"
] | false |
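Since the method compares trajectories by similarity at every step, a textbook
dynamic-time-warping distance plus a nearest-demonstration filtering rule is
sketched below; the decision rule is an illustrative assumption, not
necessarily the paper's exact criterion.

```python
import numpy as np

def dtw_distance(a, b):
    """Textbook DTW distance between two trajectories (rows = timesteps)."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def looks_unsafe(traj, safe_demos, unsafe_demos):
    """Filter the episode if the current trajectory is closer (in DTW
    distance) to the unsafe demonstrations than to the safe ones."""
    d_safe = min(dtw_distance(traj, d) for d in safe_demos)
    d_unsafe = min(dtw_distance(traj, d) for d in unsafe_demos)
    return d_unsafe < d_safe
```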
2305.04746 | 2023-05-08T14:46:34Z | Understanding Noise-Augmented Training for Randomized Smoothing | [
"Ambar Pal",
"Jeremias Sulam"
] | Randomized smoothing is a technique for providing provable robustness
guarantees against adversarial attacks while making minimal assumptions about a
classifier. This method relies on taking a majority vote of any base classifier
over multiple noise-perturbed inputs to obtain a smoothed classifier, and it
remains the tool of choice to certify deep and complex neural network models.
Nonetheless, non-trivial performance of such a smoothed classifier crucially
depends on the base model being trained on noise-augmented data, i.e., on a
smoothed input distribution. While widely adopted in practice, it is still
unclear how this noisy training of the base classifier precisely affects the
risk of the robust smoothed classifier, leading to heuristics and tricks that
are poorly understood. In this work we analyze these trade-offs theoretically
in a binary classification setting, proving that these common observations are
not universal. We show that, without making stronger distributional
assumptions, no benefit can be expected from predictors trained with
noise-augmentation, and we further characterize distributions where such
benefit is obtained. Our analysis has direct implications to the practical
deployment of randomized smoothing, and we illustrate some of these via
experiments on CIFAR-10 and MNIST, as well as on synthetic datasets. | [
"cs.LG",
"cs.AI"
] | false |
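A minimal sketch of the smoothing step itself: a majority vote of the base
classifier over Gaussian-perturbed copies of the input. Certification of a
robust radius (e.g., via a binomial confidence test) is omitted here.

```python
import numpy as np

def smoothed_predict(base_classifier, x, sigma=0.25, n=1000, rng=None):
    """Majority-vote prediction of the smoothed classifier
    g(x) = argmax_c P(f(x + noise) = c), noise ~ N(0, sigma^2 I)."""
    rng = rng or np.random.default_rng(0)
    noise = rng.normal(0.0, sigma, size=(n,) + x.shape)
    votes = np.array([base_classifier(x + eps) for eps in noise])
    classes, counts = np.unique(votes, return_counts=True)
    return classes[np.argmax(counts)]
```

The noise-augmented training question studied above concerns how the base
classifier passed in here should be trained so that this vote behaves well.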
2305.04792 | 2023-05-08T15:48:53Z | Global Update Tracking: A Decentralized Learning Algorithm for
Heterogeneous Data | [
"Sai Aparna Aketi",
"Abolfazl Hashemi",
"Kaushik Roy"
] | Decentralized learning enables the training of deep learning models over
large distributed datasets generated at different locations, without the need
for a central server. However, in practical scenarios, the data distribution
across these devices can be significantly different, leading to a degradation
in model performance. In this paper, we focus on designing a decentralized
learning algorithm that is less susceptible to variations in data distribution
across devices. We propose Global Update Tracking (GUT), a novel tracking-based
method that aims to mitigate the impact of heterogeneous data in decentralized
learning without introducing any communication overhead. We demonstrate the
effectiveness of the proposed technique through an exhaustive set of
experiments on various Computer Vision datasets (CIFAR-10, CIFAR-100, Fashion
MNIST, and ImageNette), model architectures, and network topologies. Our
experiments show that the proposed method achieves state-of-the-art performance
for decentralized learning on heterogeneous data via a $1-6\%$ improvement in
test accuracy compared to other existing techniques. | [
"cs.LG",
"cs.MA"
] | false |
2305.04912 | 2023-05-08T17:47:28Z | On User-Level Private Convex Optimization | [
"Badih Ghazi",
"Pritish Kamath",
"Ravi Kumar",
"Raghu Meka",
"Pasin Manurangsi",
"Chiyuan Zhang"
] | We introduce a new mechanism for stochastic convex optimization (SCO) with
user-level differential privacy guarantees. The convergence rates of this
mechanism are similar to those in the prior work of Levy et al. (2021);
Narayanan et al. (2022), but with two important improvements. Our mechanism
does not require any smoothness assumptions on the loss. Furthermore, our
bounds are also the first where the minimum number of users needed for
user-level privacy has no dependence on the dimension and only a logarithmic
dependence on the desired excess error. The main idea underlying the new
mechanism is to show that the optimizers of strongly convex losses have low
local deletion sensitivity, along with an output perturbation method for
functions with low local deletion sensitivity, which could be of independent
interest. | [
"cs.LG",
"cs.CR"
] | false |
2305.04963 | 2023-05-08T18:00:50Z | From Relational Pooling to Subgraph GNNs: A Universal Framework for More
Expressive Graph Neural Networks | [
"Cai Zhou",
"Xiyuan Wang",
"Muhan Zhang"
] | Relational pooling (RP) is a framework for building more expressive and
permutation-invariant graph neural networks. However, there is limited
understanding of the exact enhancement in the expressivity of RP and its
connection with the Weisfeiler-Lehman (WL) hierarchy. Starting from RP, we propose
to explicitly assign labels to nodes as additional features to improve the
expressive power of message passing neural networks. The method is then
extended to higher dimensional WL, leading to a novel $k,l$-WL algorithm, a
more general framework than $k$-WL. Theoretically, we analyze the expressivity
of $k,l$-WL with respect to $k$ and $l$ and unify it with a large number of
subgraph GNNs. Complexity reduction methods are also systematically discussed
to build powerful and practical $k,l$-GNN instances. We theoretically and
experimentally prove that our method is universally compatible and capable of
improving the expressivity of any base GNN model. Our $k,l$-GNNs achieve
superior performance on many synthetic and real-world datasets, which verifies
the effectiveness of our framework. | [
"cs.LG",
"cs.AI"
] | false |
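A minimal sketch of the labeling idea: augment node features with indicator
columns that mark a few nodes, so an otherwise permutation-equivariant
message-passing network can break symmetries. The random choice of nodes and
the one-hot scheme are illustrative assumptions, not the paper's exact
construction.

```python
import torch

def add_node_labels(x, num_label_nodes, seed=0):
    """x: (N, F) node-feature matrix. Returns (N, F + num_label_nodes)
    with one-hot indicator columns marking randomly chosen nodes."""
    g = torch.Generator().manual_seed(seed)
    N = x.size(0)
    chosen = torch.randperm(N, generator=g)[:num_label_nodes]
    labels = torch.zeros(N, num_label_nodes)
    labels[chosen, torch.arange(num_label_nodes)] = 1.0
    return torch.cat([x, labels], dim=1)
```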
2305.04992 | 2023-05-08T18:56:37Z | Autoencoder-based prediction of ICU clinical codes | [
"Tsvetan R. Yordanov",
"Ameen Abu-Hanna",
"Anita CJ Ravelli",
"Iacopo Vagliano"
] | Availability of diagnostic codes in Electronic Health Records (EHRs) is
crucial for patient care as well as reimbursement purposes. However, entering
them in the EHR is tedious, and some clinical codes may be overlooked. Given an
incomplete list of clinical codes, we investigate the performance of ML
methods in predicting the complete ones, and assess the added predictive value
of including other clinical patient data in this task. We used the MIMIC-III
dataset and framed the task of completing the clinical codes as a recommendation
problem. We considered various autoencoder approaches plus two strong baselines:
item co-occurrence and Singular Value Decomposition (SVD). Inputs are 1) a
record's known clinical codes, or 2) the codes plus clinical variables. The
co-occurrence-based approach performed slightly better (F1 score=0.26, Mean
Average Precision [MAP]=0.19) than the SVD (F1=0.24, MAP=0.18). However, the
adversarial autoencoder achieved the best performance when using the codes plus
variables (F1=0.32, MAP=0.25). Adversarial autoencoders performed best in terms
of F1 and were on par with vanilla and denoising autoencoders in terms of MAP.
Using clinical variables in addition to the incomplete code list improves the
predictive performance of the models. | [
"cs.LG",
"cs.IR",
"68",
"J.3"
] | false |
2305.05020 | 2023-05-08T19:57:18Z | Domain independent post-processing with graph U-nets: Applications to
Electrical Impedance Tomographic Imaging | [
"William Herzberg",
"Andreas Hauptmann",
"Sarah J. Hamilton"
] | Reconstruction of tomographic images from boundary measurements requires
flexibility with respect to target domains. For instance, when the system
equations are modeled by partial differential equations the reconstruction is
usually done on finite element (FE) meshes, allowing for flexible geometries.
Thus, any processing of the obtained reconstructions should be ideally done on
the FE mesh as well. For this purpose, we extend the hugely successful U-Net
architecture that is limited to rectangular pixel or voxel domains to an
equivalent that works flexibly on FE meshes. To achieve this, the FE mesh is
converted into a graph and we formulate a graph U-Net with a new cluster
pooling and unpooling on the graph that mimics the classic neighborhood-based
max-pooling. We demonstrate the effectiveness and flexibility of the graph U-Net
for improving reconstructions from electrical impedance tomographic (EIT)
measurements, a nonlinear and highly ill-posed inverse problem. The performance
is evaluated on simulated data and on measurements from three devices with
different measurement geometries and instrumentation. We successfully show
that such networks can be trained with a simple two-dimensional simulated
training set and generalize to very different domains, including measurements
from a three-dimensional device and subsequent 3D reconstructions. | [
"eess.IV",
"cs.LG"
] | false |
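A minimal sketch of cluster-based pooling and unpooling on a graph, assuming a
precomputed node-to-cluster assignment; it mimics the classic neighborhood
max-pooling that the graph U-Net generalizes, and is not the authors'
implementation.

```python
import torch

def cluster_max_pool(x, cluster):
    """x: (N, F) node features; cluster: (N,) long tensor of cluster ids
    in [0, C). Returns (C, F) cluster-wise feature maxima."""
    C, F = int(cluster.max()) + 1, x.size(1)
    pooled = torch.full((C, F), float("-inf"))
    return pooled.scatter_reduce(0, cluster[:, None].expand(-1, F), x,
                                 reduce="amax", include_self=True)

def cluster_unpool(pooled, cluster):
    """Copy each cluster's pooled features back to its member nodes."""
    return pooled[cluster]
```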
2305.05082 | 2023-05-08T22:46:54Z | A Unifying Framework of Attention-based Neural Load Forecasting | [
"Jing Xiong",
"Yu Zhang"
] | Accurate load forecasting is critical for reliable and efficient planning and
operation of electric power grids. In this paper, we propose a unifying deep
learning framework for load forecasting, which includes time-varying feature
weighting, hierarchical temporal attention, and feature-reinforced error
correction. Our framework adopts a modular design with good generalization
capability. First, the feature-weighting mechanism assigns input features with
temporal weights. Second, a recurrent encoder-decoder structure with
hierarchical attention is developed as a load predictor. The hierarchical
attention enables a similar day selection, which re-evaluates the importance of
historical information at each time step. Third, we develop an error correction
module that explores the errors and learned feature hidden information to
further improve the model's forecasting performance. Experimental results
demonstrate that our proposed framework outperforms existing methods on two
public datasets and performance metrics, with the feature weighting mechanism
and error correction module being critical to achieving superior performance.
Our framework provides an effective solution to the electric load forecasting
problem, which can be further adapted to many other forecasting tasks. | [
"cs.LG",
"eess.SP"
] | false |
2305.05670 | 2023-05-08T21:05:36Z | Enhancing Road Safety through Accurate Detection of Hazardous Driving
Behaviors with Graph Convolutional Recurrent Networks | [
"Pooyan Khosravinia",
"Thinagaran Perumal",
"Javad Zarrin"
] | Car accidents remain a significant public safety issue worldwide, with the
majority of them attributed to driver errors stemming from inadequate driving
knowledge, non-compliance with regulations, and poor driving habits. To improve
road safety, Driving Behavior Detection (DBD) systems have been proposed in
several studies to identify safe and unsafe driving behavior. Many of these
studies have utilized sensor data obtained from the Controller Area Network
(CAN) bus to construct their models. However, the use of publicly available
sensors is known to reduce the accuracy of detection models, while
incorporating vendor-specific sensors into the dataset increases accuracy. To
address the limitations of existing approaches, we present a reliable DBD
system based on Graph Convolutional Long Short-Term Memory Networks (GConvLSTM)
that enhances the precision and practicality of DBD models using public
sensors. Additionally, we incorporate non-public sensors to evaluate the
model's effectiveness. Our proposed model achieved a high accuracy of 97.5\%
for public sensors and an average accuracy of 98.1\% for non-public sensors,
indicating its consistency and accuracy in both settings. To enable local
driver behavior analysis, we deployed our DBD system on a Raspberry Pi at the
network edge, with drivers able to access daily driving condition reports,
sensor data, and prediction results through a monitoring dashboard.
Furthermore, the dashboard issues voice warnings to alert drivers of hazardous
driving conditions. Our findings demonstrate that the proposed system can
effectively detect hazardous and unsafe driving behavior, with potential
applications in improving road safety and reducing the number of accidents
caused by driver errors. | [
"cs.LG",
"cs.AI"
] | false |
2305.13929 | 2023-05-08T05:40:54Z | Deep Learning and Image Super-Resolution-Guided Beam and Power
Allocation for mmWave Networks | [
"Yuwen Cao",
"Tomoaki Ohtsuki",
"Setareh Maghsudi",
"Tony Q. S. Quek"
] | In this paper, we develop a deep learning (DL)-guided hybrid beam and power
allocation approach for multiuser millimeter-wave (mmWave) networks, which
facilitates swift beamforming at the base station (BS). The following
persisting challenges motivated our research: (i) User and vehicular mobility,
as well as redundant beam-reselections in mmWave networks, degrade the
efficiency; (ii) Due to the large beamforming dimension at the BS, the
beamforming weights predicted by the cutting-edge DL-based methods often do not
suit the channel distributions; (iii) Co-located user devices may cause a
severe beam conflict, thus deteriorating system performance. To address the
aforementioned challenges, we exploit the synergy of supervised learning and
super-resolution technology to enable low-overhead beam- and power allocation.
In the first step, we propose a method for beam-quality prediction. It is based
on deep learning and explores the relationship between high- and low-resolution
beam images (energy). Afterward, we develop a DL-based allocation approach,
which enables high-accuracy beam and power allocation with only a portion of
the available time-sequential low-resolution images. Theoretical and numerical
results verify the effectiveness of our proposed approach. | [
"eess.SP",
"cs.LG"
] | false |
2305.04412 | 2023-05-08T01:39:35Z | Efficient Reinforcement Learning for Autonomous Driving with
Parameterized Skills and Priors | [
"Letian Wang",
"Jie Liu",
"Hao Shao",
"Wenshuo Wang",
"Ruobing Chen",
"Yu Liu",
"Steven L. Waslander"
] | When autonomous vehicles are deployed on public roads, they will encounter
countless and diverse driving situations. Many manually designed driving
policies are difficult to scale to the real world. Fortunately, reinforcement
learning has shown great success in many tasks by automatic trial and error.
However, when it comes to autonomous driving in interactive dense traffic, RL
agents either fail to learn reasonable performance or necessitate a large
amount of data. Our insight is that when humans learn to drive, they will 1)
make decisions over the high-level skill space instead of the low-level control
space and 2) leverage expert prior knowledge rather than learning from scratch.
Inspired by this, we propose ASAP-RL, an efficient reinforcement learning
algorithm for autonomous driving that simultaneously leverages motion skills
and expert priors. We first parameterize motion skills, which are diverse
enough to cover various complex driving scenarios and situations. A skill
parameter inverse recovery method is proposed to convert expert demonstrations
from control space to skill space. A simple but effective double initialization
technique is proposed to leverage expert priors while bypassing the issue of
expert suboptimality and early performance degradation. We validate our
proposed method on interactive dense-traffic driving tasks given simple and
sparse rewards. Experimental results show that our method can lead to higher
learning efficiency and better driving performance relative to previous methods
that exploit skills and priors differently. Code is open-sourced to facilitate
further research. | [
"cs.RO",
"cs.AI",
"cs.LG"
] | false |
2305.04625 | 2023-05-08T11:05:44Z | The Signature Kernel | [
"Darrick Lee",
"Harald Oberhauser"
] | The signature kernel is a positive definite kernel for sequential data. It
inherits theoretical guarantees from stochastic analysis, has efficient
algorithms for computation, and shows strong empirical performance. In this
short survey paper for a forthcoming Springer handbook, we give an elementary
introduction to the signature kernel and highlight these theoretical and
computational properties. | [
"math.PR",
"cs.LG",
"stat.ML"
] | false |
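For piecewise-linear paths, the (untruncated) signature kernel is known to
solve a Goursat PDE, $\partial^2 k / \partial s \partial t = \langle \dot x(s),
\dot y(t) \rangle \, k(s,t)$, with unit boundary conditions; a first-order
finite-difference sketch of that computation follows, with accuracy improving
under finer sampling or higher-order schemes.

```python
import numpy as np

def signature_kernel(x, y):
    """First-order finite-difference sketch of the signature kernel for
    sampled paths x, y of shape (length, dim)."""
    dx, dy = np.diff(x, axis=0), np.diff(y, axis=0)  # path increments
    inc = dx @ dy.T                                   # <dx_i, dy_j>
    K = np.ones((len(dx) + 1, len(dy) + 1))           # unit boundary data
    for i in range(len(dx)):
        for j in range(len(dy)):
            K[i + 1, j + 1] = (K[i + 1, j] + K[i, j + 1]
                               + K[i, j] * (inc[i, j] - 1.0))
    return K[-1, -1]

t = np.linspace(0, 1, 100)[:, None]
k = signature_kernel(np.hstack([t, np.sin(2 * np.pi * t)]),
                     np.hstack([t, np.cos(2 * np.pi * t)]))
```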
2305.04646 | 2023-05-08T11:58:49Z | CURTAINs Flows For Flows: Constructing Unobserved Regions with Maximum
Likelihood Estimation | [
"Debajyoti Sengupta",
"Samuel Klein",
"John Andrew Raine",
"Tobias Golling"
] | Model-independent techniques for constructing background data templates using
generative models have shown great promise for use in searches for new physics
processes at the LHC. We introduce a major improvement to the CURTAINs method
by training the conditional normalizing flow between two side-band regions
using maximum likelihood estimation instead of an optimal transport loss. The
new training objective improves the robustness and fidelity of the transformed
data and makes the flow much faster and easier to train.
We compare the performance against the previous approach and the current
state of the art using the LHC Olympics anomaly detection dataset, where we see
a significant improvement in sensitivity over the original CURTAINs method.
Furthermore, CURTAINsF4F requires substantially fewer computational resources to
cover a large number of signal regions than other fully data driven approaches.
When using an efficient configuration, an order of magnitude more models can be
trained in the same time required for ten signal regions, without a significant
drop in performance. | [
"hep-ph",
"cs.LG",
"hep-ex"
] | false |