arXiv paper records with the following fields per entry: Title, Authors, Abstract, entry_id (arXiv URL), Date, Categories, year.
Semantic Knowledge Discovery and Discussion Mining of Incel Online Community: Topic modeling
Hamed Jelodar, Richard Frank
Online forums provide a unique opportunity for online users to share comments and exchange information on a particular topic. Understanding user behaviour is valuable to organizations and has applications for social and security strategies, for instance, identifying user opinions within a community or predicting future behaviour. Discovering the semantic aspects of Incel forums is the main goal of this research; we apply natural language processing techniques based on topic modeling to latent topic discovery and opinion mining of users from a popular online Incel discussion forum. To prepare the input data for our study, we extracted the comments from Incels.co. The research experiments show that Artificial Intelligence (AI) based on NLP models can be effective for semantic and emotion knowledge discovery and the retrieval of useful information from the Incel community. For example, we discovered semantically related words that describe issues within a large volume of Incel comments, which would be difficult with manual methods.
http://arxiv.org/abs/2104.09586v2
"2021-04-19T19:39:07Z"
cs.AI
2,021
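A minimal sketch of the kind of LDA pipeline the paper above applies, using gensim; the input file comments.txt, the preprocessing choices, and the 20-topic setting are illustrative assumptions, not the authors' exact setup.

```python
# Minimal LDA topic-discovery sketch for forum comments.  Assumptions:
# one comment per line in "comments.txt"; gensim's built-in stopword list.
from gensim.corpora import Dictionary
from gensim.models import LdaModel
from gensim.parsing.preprocessing import STOPWORDS
from gensim.utils import simple_preprocess

with open("comments.txt", encoding="utf-8") as f:
    texts = [[w for w in simple_preprocess(line) if w not in STOPWORDS]
             for line in f]

dictionary = Dictionary(texts)
dictionary.filter_extremes(no_below=5, no_above=0.5)  # drop rare/ubiquitous terms
corpus = [dictionary.doc2bow(doc) for doc in texts]

lda = LdaModel(corpus, id2word=dictionary, num_topics=20, passes=5, random_state=0)
for topic_id, words in lda.show_topics(num_topics=5, num_words=8, formatted=False):
    print(topic_id, [w for w, _ in words])
```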
Few-shot Learning for Topic Modeling
Tomoharu Iwata
Topic models have been successfully used for analyzing text documents. However, with existing topic models, many documents are required for training. In this paper, we propose a neural network-based few-shot learning method that can learn a topic model from just a few documents. The neural networks in our model take a small number of documents as inputs, and output topic model priors. The proposed method trains the neural networks such that the expected test likelihood is improved when topic model parameters are estimated by maximizing the posterior probability using the priors based on the EM algorithm. Since each step in the EM algorithm is differentiable, the proposed method can backpropagate the loss through the EM algorithm to train the neural networks. The expected test likelihood is maximized by a stochastic gradient descent method using a set of multiple text corpora with an episodic training framework. In our experiments, we demonstrate that the proposed method achieves better perplexity than existing methods using three real-world text document sets.
http://arxiv.org/abs/2104.09011v1
"2021-04-19T01:56:48Z"
cs.CL, cs.LG, stat.ML
2,021
Multi-source Neural Topic Modeling in Multi-view Embedding Spaces
Pankaj Gupta, Yatin Chaudhary, Hinrich Schütze
Though word embeddings and topics are complementary representations, several past works have used only pretrained word embeddings in (neural) topic modeling to address data sparsity in short texts or small collections of documents. This work presents a novel neural topic modeling framework using multi-view embedding spaces: (1) pretrained topic embeddings, and (2) pretrained word embeddings (context-insensitive from GloVe and context-sensitive from BERT models) jointly from one or many sources to improve topic quality and better deal with polysemy. In doing so, we first build respective pools of pretrained topic embeddings (TopicPool) and word embeddings (WordPool). We then identify one or more relevant source domains and transfer knowledge to guide meaningful learning in the sparse target domain. Within neural topic modeling, we quantify the quality of topics and document representations via generalization (perplexity), interpretability (topic coherence) and information retrieval (IR) using short-text, long-text, small and large document collections from news and medical domains. Introducing the multi-source multi-view embedding spaces, we demonstrate state-of-the-art neural topic modeling using 6 source (high-resource) and 5 target (low-resource) corpora.
http://arxiv.org/abs/2104.08551v1
"2021-04-17T14:08:00Z"
cs.CL, cs.AI, cs.LG
2,021
Hierarchical Topic Presence Models
Jason Wang, Robert E. Weiss
Topic models analyze text from a set of documents. Documents are modeled as a mixture of topics, with topics defined as probability distributions on words. Inferences of interest include the most probable topics and characterization of a topic by inspecting the topic's highest probability words. Motivated by a data set of web pages (documents) nested in web sites, we extend the Poisson factor analysis topic model to hierarchical topic presence models for analyzing text from documents nested in known groups. We incorporate an unknown binary topic presence parameter for each topic at the web site and/or the web page level to allow web sites and/or web pages to be sparse mixtures of topics and we propose logistic regression modeling of topic presence conditional on web site covariates. We introduce local topics into the Poisson factor analysis framework, where each web site has a local topic not found in other web sites. Two data augmentation methods, the Chinese table distribution and Pólya-Gamma augmentation, aid in constructing our sampler. We analyze text from web pages nested in United States local public health department web sites to abstract topical information and understand national patterns in topic presence.
http://arxiv.org/abs/2104.07969v1
"2021-04-16T08:41:07Z"
cs.IR
2,021
AI supported Topic Modeling using KNIME-Workflows
Jamal Al Qundus, Silvio Peikert, Adrian Paschke
Topic modeling algorithms traditionally model topics as lists of weighted terms. These topic models can be used effectively to classify texts or to support text mining tasks such as text summarization or fact extraction. The general procedure relies on statistical analysis of term frequencies. The focus of this work is on the implementation of knowledge-based topic modeling services in a KNIME workflow. A brief description and evaluation of the DBpedia-based enrichment approach and a comparative evaluation of enriched topic models are outlined based on our previous work. DBpedia-Spotlight is used to identify entities in the input text, and information from DBpedia is used to extend these entities. We provide a workflow developed in KNIME implementing this approach and compare topic modeling supported by knowledge-base information to traditional LDA. This topic modeling approach allows semantic interpretation both by algorithms and by humans.
http://arxiv.org/abs/2104.09428v1
"2021-04-15T10:19:58Z"
cs.IR, cs.AI, cs.LG
2,021
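The enrichment step above relies on DBpedia-Spotlight entity linking; a minimal sketch against Spotlight's public REST endpoint (the confidence threshold and sample sentence are illustrative, and the paper's actual KNIME workflow is not reproduced here):

```python
# Sketch: annotate text with DBpedia-Spotlight entities via its public REST
# API, collecting the linked DBpedia resources for topic enrichment.
import requests

def spotlight_entities(text, confidence=0.5):
    resp = requests.get(
        "https://api.dbpedia-spotlight.org/en/annotate",
        params={"text": text, "confidence": confidence},
        headers={"Accept": "application/json"},
        timeout=30,
    )
    resp.raise_for_status()
    # "Resources" is absent when no entities are found
    return [r["@URI"] for r in resp.json().get("Resources", [])]

print(spotlight_entities("Topic models such as LDA were introduced by David Blei."))
```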
Restoring and Mining the Records of the Joseon Dynasty via Neural Language Modeling and Machine Translation
Kyeongpil Kang, Kyohoon Jin, Soyoung Yang, Sujin Jang, Jaegul Choo, Youngbin Kim
Understanding voluminous historical records provides clues about the past in various aspects, such as social and political issues and even natural science facts. However, it is generally difficult to fully utilize historical records, since most of the documents are not written in a modern language and parts of the contents have been damaged over time. As a result, restoring the damaged or unrecognizable parts as well as translating the records into modern languages are crucial tasks. In response, we present a multi-task learning approach to restore and translate historical documents based on a self-attention mechanism, specifically utilizing two Korean historical records that are among the most voluminous in the world. Experimental results show that our approach significantly improves the accuracy of the translation task compared to baselines without multi-task learning. In addition, we present an in-depth exploratory analysis of our translated results via topic modeling, uncovering several significant historical events.
http://arxiv.org/abs/2104.05964v3
"2021-04-13T06:40:25Z"
cs.CL, cs.AI
2,021
Fine-tuning Encoders for Improved Monolingual and Zero-shot Polylingual Neural Topic Modeling
Aaron Mueller, Mark Dredze
Neural topic models can augment or replace bag-of-words inputs with the learned representations of deep pre-trained transformer-based word prediction models. One added benefit when using representations from multilingual models is that they facilitate zero-shot polylingual topic modeling. However, while it has been widely observed that pre-trained embeddings should be fine-tuned to a given task, it is not immediately clear what supervision should look like for an unsupervised task such as topic modeling. Thus, we propose several methods for fine-tuning encoders to improve both monolingual and zero-shot polylingual neural topic modeling. We consider fine-tuning on auxiliary tasks, constructing a new topic classification task, integrating the topic classification objective directly into topic model training, and continued pre-training. We find that fine-tuning encoder representations on topic classification and integrating the topic classification task directly into topic modeling improves topic quality, and that fine-tuning encoder representations on any task is the most important factor for facilitating cross-lingual transfer.
http://arxiv.org/abs/2104.05064v1
"2021-04-11T18:03:57Z"
cs.CL
2,021
Learning Graph Structures with Transformer for Multivariate Time Series Anomaly Detection in IoT
Zekai Chen, Dingshuo Chen, Xiao Zhang, Zixuan Yuan, Xiuzhen Cheng
Many real-world IoT systems, which include a variety of internet-connected sensory devices, produce substantial amounts of multivariate time series data. Meanwhile, vital IoT infrastructures like smart power grids and water distribution networks are frequently targeted by cyber-attacks, making anomaly detection an important study topic. Modeling such relatedness is nevertheless unavoidable for any efficient and effective anomaly detection system, given the intricate topological and nonlinear connections that are initially unknown among sensors. Furthermore, detecting anomalies in multivariate time series is difficult due to their temporal dependency and stochasticity. This paper presents GTA, a new framework for multivariate time series anomaly detection that involves automatically learning a graph structure, graph convolution, and modeling temporal dependency using a Transformer-based architecture. At the heart of learning the graph structure is the connection learning policy, which is based on Gumbel-softmax sampling to learn bi-directed links among sensors directly. To describe the anomaly information flow between network nodes, we introduce a new graph convolution called Influence Propagation convolution. In addition, to tackle the quadratic complexity barrier, we propose a multi-branch attention mechanism to replace the original multi-head self-attention method. Extensive experiments on four publicly available anomaly detection benchmarks further demonstrate the superiority of our approach over state-of-the-art alternatives. Code is available at https://github.com/ZEKAICHEN/GTA.
http://arxiv.org/abs/2104.03466v3
"2021-04-08T01:45:28Z"
cs.LG, cs.CR, cs.SY, eess.SY
2,021
Min(d)ing the President: A text analytic approach to measuring tax news
Adam Jassem, Lenard Lieb, Rui Jorge Almeida, Nalan Baştürk, Stephan Smeekes
We propose a novel text-analytic approach for incorporating textual information into structural economic models and apply it to study the effects of tax news. We first develop a novel semi-supervised two-step topic model that automatically extracts specific information regarding future tax policy changes from text. We also propose an approach for transforming such textual information into an economically meaningful time series to be included in a structural econometric model as a variable of interest or an instrument. We apply our method to study the effects of fiscal foresight, in particular the informational content of speeches by the U.S. president about future tax reforms, and find that our semi-supervised topic model can successfully extract information about the direction of tax changes. The extracted information predicts (exogenous) future tax changes and contains signals that are not present in previously considered (narrative) measures of (exogenous) tax changes. We find that tax news triggers a significant yet delayed response in output.
http://arxiv.org/abs/2104.03261v2
"2021-04-07T17:08:16Z"
econ.EM
2,021
Exploring Topic-Metadata Relationships with the STM: A Bayesian Approach
P. Schulze, S. Wiegrebe, P. W. Thurner, C. Heumann, M. Aßenmacher, S. Wankmüller
Topic models such as the Structural Topic Model (STM) estimate latent topical clusters within text. An important step in many topic modeling applications is to explore relationships between the discovered topical structure and metadata associated with the text documents. Methods used to estimate such relationships must take into account that the topical structure is not directly observed, but is instead itself estimated. The authors of the STM, for instance, perform repeated OLS regressions of sampled topic proportions on metadata covariates by using a Monte Carlo sampling technique known as the method of composition. In this paper, we propose two improvements: first, we replace OLS with the more appropriate Beta regression. Second, we suggest a fully Bayesian approach instead of the current blending of frequentist and Bayesian methods. We demonstrate our improved methodology by exploring relationships between Twitter posts by German members of parliament (MPs) and different metadata covariates.
http://arxiv.org/abs/2104.02496v1
"2021-04-06T13:28:04Z"
cs.CL, cs.LG, stat.ML
2,021
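The method of composition mentioned above repeatedly regresses posterior draws of topic proportions on metadata; a minimal numpy sketch of that loop with plain OLS, the step the paper proposes replacing with Beta regression (the simulated data and draw counts are illustrative):

```python
# Method-of-composition sketch: draw topic proportions from their (here
# simulated) posterior, regress each draw on metadata, and pool the
# coefficient draws.  The paper argues for Beta regression in place of OLS.
import numpy as np

rng = np.random.default_rng(0)
n_docs, n_draws = 500, 200
x = rng.normal(size=n_docs)                  # one metadata covariate
X = np.column_stack([np.ones(n_docs), x])    # design matrix with intercept

betas = []
for _ in range(n_draws):
    # stand-in for one posterior draw of a topic's proportion per document
    theta = 1 / (1 + np.exp(-(0.3 * x + rng.normal(scale=0.5, size=n_docs))))
    beta, *_ = np.linalg.lstsq(X, theta, rcond=None)
    betas.append(beta)

betas = np.array(betas)
print("posterior mean of slope:", betas[:, 1].mean())
print("95% interval:", np.percentile(betas[:, 1], [2.5, 97.5]))
```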
Mining DEV for social and technical insights about software development
Maria Papoutsoglou, Johannes Wachs, Georgia M. Kapitsaki
Software developers are social creatures: they communicate, collaborate, and promote their work in a variety of channels. Twitter, GitHub, Stack Overflow, and other platforms offer developers opportunities to network and exchange ideas. Researchers analyze content on these sites to learn about trends and topics in software engineering. However, insight mined from the text of Stack Overflow questions or GitHub issues is highly focused on detailed and technical aspects of software development. In this paper, we present a relatively new online community for software developers called DEV. On DEV users write long-form posts about their experiences, preferences, and working life in software, zooming out from specific issues and files to reflect on broader topics. About 50,000 users have posted over 140,000 articles related to software development. In this work, we describe the content of posts on DEV using a topic model, showing that developers discuss a rich variety and mixture of social and technical aspects of software development. We show that developers use DEV to promote themselves and their work: 83% link their profiles to their GitHub profiles and 56% to their Twitter profiles. 14% of users pin specific GitHub repos in their profiles. We argue that DEV is emerging as an important hub for software developers, and a valuable source of insight for researchers to complement data from platforms like GitHub and Stack Overflow.
http://arxiv.org/abs/2103.17054v2
"2021-03-31T13:21:36Z"
cs.SE, cs.SI
2,021
The Kaleidoscope of Privacy: Differences across French, German, UK, and US GDPR Media Discourse
Mary Sanford, Taha Yasseri
Conceptions of privacy differ by culture. In the Internet age, digital tools continuously challenge the way users, technologists, and governments define, value, and protect privacy. National and supranational entities attempt to regulate privacy and protect data managed online. The European Union passed the General Data Protection Regulation (GDPR), which took effect on 25 May 2018. The research presented here draws on two years of media reporting on GDPR from French, German, UK, and US sources. We use the unsupervised machine learning method of topic modelling to compare the thematic structure of the news articles across time and geographic regions. Our work emphasises the relevance of regional differences regarding valuations of privacy and potential obstacles to the implementation of unilateral data protection regulation such as GDPR. We find that the topics and trends over time in GDPR media coverage of the four countries reflect the differences found across their traditional privacy cultures.
http://arxiv.org/abs/2104.04074v1
"2021-03-31T12:46:23Z"
cs.CY, cs.LG, cs.SI
2,021
Topic Scaling: A Joint Document Scaling -- Topic Model Approach To Learn Time-Specific Topics
Sami Diaf, Ulrich Fritsche
This paper proposes a new methodology for studying sequential corpora: a two-stage algorithm that learns time-based topics with respect to a scale of document positions, and introduces the concept of Topic Scaling, which ranks the learned topics on that same document scale. The first stage ranks documents using Wordfish, a Poisson-based document scaling method, to estimate document positions that serve, in the second stage, as a dependent variable for learning relevant topics via a supervised Latent Dirichlet Allocation. This brings two innovations to text mining: it explains document positions, whose scale is a latent variable, and it ranks the inferred topics on the document scale to match their occurrences within the corpus and track their evolution. Tested on two-party U.S. State of the Union addresses, this inductive approach reveals that each party dominates one end of the learned scale, with transitions that follow the parties' terms of office. Besides demonstrating high accuracy in predicting in-sample document positions from topic scores, this method reveals further hidden topics that differentiate similar documents when the number of learned topics is increased, unfolding potential nested hierarchical topic structures. Compared to other popular topic models, Topic Scaling learns topics with respect to document similarities without specifying a time frequency for topic evolution, thus capturing broader topic patterns than dynamic topic models and yielding more interpretable outputs than a plain latent Dirichlet allocation.
http://arxiv.org/abs/2104.01117v1
"2021-03-31T12:35:36Z"
cs.IR, cs.CL, cs.LG
2,021
Local and Global Topics in Text Modeling of Web Pages Nested in Web Sites
Jason Wang, Robert E. Weiss
Topic models are popular models for analyzing a collection of text documents. The models assert that documents are distributions over latent topics and latent topics are distributions over words. A nested document collection is one where documents are nested inside a higher-order structure such as stories in a book, articles in a journal, or web pages in a web site. In a single collection of documents, topics are global, or shared across all documents. For web pages nested in web sites, topic frequencies likely vary between web sites. Within a web site, topic frequencies almost certainly vary between web pages. A hierarchical prior for topic frequencies models this hierarchical structure and specifies a global topic distribution. Web site topic distributions vary around the global topic distribution, and web page topic distributions vary around the web site topic distribution. In a nested collection of web pages, some topics are likely unique to a single web site. Local topics in a nested collection of web pages are topics unique to one web site. For US local health department web sites, brief inspection of the text shows local geographic and news topics specific to each department that are not present in others. Topic models that ignore the nesting may identify local topics, but they neither label topics as local nor explicitly identify the web site owner of the local topic. For web pages nested inside web sites, local topic models explicitly label local topics and identify the owning web site. This identification can be used to adjust inferences about global topics. In the US public health web site data, topic coverage is defined at the web site level after removing local topic words from pages. Hierarchical local topic models can be used to identify local topics, adjust inferences about whether web sites cover particular health topics, and study how well health topics are covered.
http://arxiv.org/abs/2104.01115v1
"2021-03-30T23:16:46Z"
cs.IR, stat.ME
2,021
An Embedding-based Joint Sentiment-Topic Model for Short Texts
Ayan Sengupta, William Scott Paka, Suman Roy, Gaurav Ranjan, Tanmoy Chakraborty
Short texts are a popular avenue for sharing feedback, opinions and reviews on social media, e-commerce platforms, etc. Many companies need to extract meaningful information (which may include thematic content as well as semantic polarity) from such short texts to understand users' behaviour. However, obtaining high-quality sentiment-associated and human-interpretable themes still remains a challenge for short texts. In this paper we develop ELJST, an embedding-enhanced generative joint sentiment-topic model that can discover more coherent and diverse topics from short texts. It uses a Markov Random Field regularizer that can be seen as a generalisation of skip-gram based models. Further, it can leverage higher-order semantic information appearing in word embeddings, such as self-attention weights in graphical models. Our results show an average improvement of 10% in topic coherence and 5% in topic diversification over baselines. Finally, ELJST helps understand users' behaviour at more granular, explainable levels. All these can bring significant value to the service and healthcare industries, which often deal with customers.
http://arxiv.org/abs/2103.14410v1
"2021-03-26T11:41:21Z"
cs.CL, cs.AI
2,021
TOUR: Dynamic Topic and Sentiment Analysis of User Reviews for Assisting App Release
Tianyi Yang, Cuiyun Gao, Jingya Zang, David Lo, Michael R. Lyu
App reviews deliver user opinions and emerging issues (e.g., new bugs) about app releases. Due to the dynamic nature of app reviews, the topics and sentiment of the reviews change along with app release versions. Although several studies have focused on summarizing user opinions by analyzing user sentiment towards app features, no practical tool has been released. The large quantity of reviews and noise words also necessitates an automated tool for monitoring user reviews. In this paper, we introduce TOUR for dynamic TOpic and sentiment analysis of User Reviews. TOUR is able to (i) detect and summarize emerging app issues over app versions, (ii) identify user sentiment towards app features, and (iii) prioritize important user reviews to facilitate developers' examination. The core techniques of TOUR include an online topic modeling approach and a sentiment prediction strategy. TOUR provides entries for developers to customize the hyper-parameters, and the results are presented in an interactive way. We evaluate TOUR by conducting a developer survey involving 15 developers, all of whom confirm the practical usefulness of the feature changes recommended by TOUR.
http://arxiv.org/abs/2103.15774v2
"2021-03-26T08:44:55Z"
cs.SE, cs.CL
2,021
Biwhitening Reveals the Rank of a Count Matrix
Boris Landa, Thomas T. C. K. Zhang, Yuval Kluger
Estimating the rank of a corrupted data matrix is an important task in data analysis, most notably for choosing the number of components in PCA. Significant progress on this task was achieved using random matrix theory by characterizing the spectral properties of large noise matrices. However, utilizing such tools is not straightforward when the data matrix consists of count random variables, e.g., Poisson, in which case the noise can be heteroskedastic with an unknown variance in each entry. In this work, we consider a Poisson random matrix with independent entries, and propose a simple procedure termed "biwhitening" for estimating the rank of the underlying signal matrix (i.e., the Poisson parameter matrix) without any prior knowledge. Our approach is based on the key observation that one can scale the rows and columns of the data matrix simultaneously so that the spectrum of the corresponding noise agrees with the standard Marchenko-Pastur (MP) law, justifying the use of the MP upper edge as a threshold for rank selection. Importantly, the required scaling factors can be estimated directly from the observations by solving a matrix scaling problem via the Sinkhorn-Knopp algorithm. Aside from the Poisson, our approach is extended to families of distributions that satisfy a quadratic relation between the mean and the variance, such as the generalized Poisson, binomial, negative binomial, gamma, and many others. This quadratic relation can also account for missing entries in the data. We conduct numerical experiments that corroborate our theoretical findings, and showcase the advantage of our approach for rank estimation in challenging regimes. Furthermore, we demonstrate the favorable performance of our approach on several real datasets of single-cell RNA sequencing (scRNA-seq), High-Throughput Chromosome Conformation Capture (Hi-C), and document topic modeling.
http://arxiv.org/abs/2103.13840v2
"2021-03-25T13:48:42Z"
math.ST, cs.IT, math.IT, stat.TH, 62H12, 62H25
2,021
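A minimal numpy sketch of the biwhitening recipe for the Poisson case: Sinkhorn-Knopp scaling brings the noise variance to roughly one per entry on average, after which singular values above the Marchenko-Pastur upper edge sqrt(m) + sqrt(n) are counted as signal (the iteration count and the simulated rank-3 example are illustrative choices):

```python
# Biwhitening sketch (Poisson case): Sinkhorn-scale the count matrix so the
# noise variance averages 1 per row and column, then count singular values
# above the MP upper edge sqrt(m) + sqrt(n).  Assumes no all-zero rows/cols.
import numpy as np

def estimate_rank(Y, n_iter=50):
    m, n = Y.shape
    r, c = np.ones(m), np.ones(n)
    for _ in range(n_iter):                   # Sinkhorn-Knopp iterations
        r = n / (Y * c).sum(axis=1)           # scaled rows sum to n
        c = m / (r[:, None] * Y).sum(axis=0)  # scaled columns sum to m
    Z = np.sqrt(r)[:, None] * Y * np.sqrt(c)[None, :]  # biwhitened matrix
    s = np.linalg.svd(Z, compute_uv=False)
    return int((s > np.sqrt(m) + np.sqrt(n)).sum())

# Simulated check: rank-3 Poisson parameter matrix.
rng = np.random.default_rng(0)
A = rng.gamma(2.0, 1.0, size=(300, 3)) @ rng.gamma(2.0, 1.0, size=(3, 200))
print(estimate_rank(rng.poisson(A)))          # expected output: 3
```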
Term-community-based topic detection with variable resolution
Andreas Hamm, Simon Odrowski
Network-based procedures for topic detection in huge text collections offer an intuitive alternative to probabilistic topic models. We present in detail a method that is especially designed with the requirements of domain experts in mind. Like similar methods, it employs community detection in term co-occurrence graphs, but it is enhanced by including a resolution parameter that can be used for changing the targeted topic granularity. We also establish a term ranking and use semantic word-embedding for presenting term communities in a way that facilitates their interpretation. We demonstrate the application of our method with a widely used corpus of general news articles and show the results of detailed social-sciences expert evaluations of detected topics at various resolutions. A comparison with topics detected by Latent Dirichlet Allocation is also included. Finally, we discuss factors that influence topic interpretation.
http://arxiv.org/abs/2103.13550v2
"2021-03-25T01:29:39Z"
cs.CL, I.2.7; I.5.3
2,021
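The resolution-controlled term-community idea can be approximated with off-the-shelf tools; a minimal sketch using networkx's Louvain implementation on a toy co-occurrence graph (the documents and resolution values are invented, and this is not the authors' exact enhanced method):

```python
# Sketch: build a term co-occurrence graph and detect term communities
# ("topics") at two resolutions; higher resolution -> finer-grained topics.
import itertools
import networkx as nx

docs = [["topic", "model", "text"],
        ["graph", "community", "detection"],
        ["topic", "text", "corpus"],
        ["graph", "network", "community"]]

G = nx.Graph()
for doc in docs:
    for u, v in itertools.combinations(sorted(set(doc)), 2):  # co-occurrence
        if G.has_edge(u, v):
            G[u][v]["weight"] += 1
        else:
            G.add_edge(u, v, weight=1)

for res in (0.5, 2.0):  # resolution parameter controls topic granularity
    communities = nx.community.louvain_communities(G, resolution=res, seed=0)
    print(f"resolution={res}:", [sorted(c) for c in communities])
```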
Topic Modeling Genre: An Exploration of French Classical and Enlightenment Drama
Christof Schöch
The concept of literary genre is a highly complex one: not only are different genres frequently defined on several, but not necessarily the same, levels of description, but consideration of genres as cognitive, social, or scholarly constructs with a rich history further complicates the matter. This contribution focuses on thematic aspects of genre with a quantitative approach, namely Topic Modeling. Topic Modeling has proven useful for discovering thematic patterns and trends in large collections of texts, with a view to classifying or browsing them on the basis of their dominant themes. It has rarely, if ever, been applied to collections of dramatic texts, however. In this contribution, Topic Modeling is used to analyze a collection of French drama of the Classical Age and the Enlightenment. The general aim of this contribution is to discover what semantic types of topics are found in this collection, whether different dramatic subgenres have distinctive dominant topics and plot-related topic patterns, and, inversely, to what extent clustering methods based on topic scores per play produce groupings of texts which agree with more conventional genre distinctions. This contribution shows that interesting topic patterns can be detected which provide new insights into the thematic, subgenre-related structure of French drama as well as into the history of French drama of the Classical Age and the Enlightenment.
http://arxiv.org/abs/2103.13019v1
"2021-03-24T06:57:00Z"
cs.CL, J.5
2,021
TeCoMiner: Topic Discovery Through Term Community Detection
Andreas Hamm, Jana Thelen, Rasmus Beckmann, Simon Odrowski
This note is a short description of TeCoMiner, an interactive tool for exploring the topic content of text collections. Unlike other topic modeling tools, TeCoMiner is not based on some generative probabilistic model but on topological considerations about co-occurrence networks of terms. We outline the methods used for identifying topics, describe the features of the tool, and sketch an application, using a corpus of policy related scientific news on environmental issues published by the European Commission over the last decade.
http://arxiv.org/abs/2103.12882v1
"2021-03-23T23:08:46Z"
cs.CL, I.2.7; I.5.3; H.5.2
2,021
Bridging the gap between supervised classification and unsupervised topic modelling for social-media assisted crisis management
Mikael Brunila, Rosie Zhao, Andrei Mircea, Sam Lumley, Renee Sieber
Social media such as Twitter provide valuable information to crisis managers and affected people during natural disasters. Machine learning can help structure and extract information from the large volume of messages shared during a crisis; however, the constantly evolving nature of crises makes effective domain adaptation essential. Supervised classification is limited by unchangeable class labels that may not be relevant to new events, and unsupervised topic modelling by insufficient prior knowledge. In this paper, we bridge the gap between the two and show that BERT embeddings finetuned on crisis-related tweet classification can effectively be used to adapt to a new crisis, discovering novel topics while preserving relevant classes from supervised training, and leveraging bidirectional self-attention to extract topic keywords. We create a dataset of tweets from a snowstorm to evaluate our method's transferability to new crises, and find that it outperforms traditional topic models in both automatic and human evaluations grounded in the needs of crisis managers. More broadly, our method can be used for textual domain adaptation where the latent classes are unknown but overlap with known classes from other domains.
http://arxiv.org/abs/2103.11835v1
"2021-03-22T13:30:39Z"
cs.CL
2,021
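The core step above, clustering contextual tweet embeddings into topics, can be sketched as follows; a generic pre-trained sentence encoder stands in for the crisis-classification-fine-tuned BERT the paper uses, and the tweets are invented:

```python
# Sketch: embed tweets with a pre-trained sentence encoder and cluster the
# embeddings into topics.  The paper fine-tunes BERT on crisis-tweet
# classification first; this uses an off-the-shelf model for illustration.
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

tweets = [
    "Power is out across the north end after the storm",
    "Roads are closed, please avoid highway 40",
    "Shelter open at the community center tonight",
    "Anyone know when power will be restored downtown?",
]

model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = model.encode(tweets, normalize_embeddings=True)

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(embeddings)
for label, tweet in zip(km.labels_, tweets):
    print(label, tweet)
```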
An Empirical Study of Developer Discussions on Low-Code Software Development Challenges
Md Abdullah Al Alamin, Sanjay Malakar, Gias Uddin, Sadia Afroz, Tameem Bin Haider, Anindya Iqbal
Low-code software development (LCSD) is an emerging paradigm that combines minimal source code with interactive graphical interfaces to promote rapid application development. LCSD aims to democratize application development for software practitioners with diverse backgrounds. Given that LCSD is a relatively new paradigm, it is vital to learn about the challenges developers face when adopting LCSD platforms. The online developer forum Stack Overflow (SO) is popular among software developers for asking for solutions to technical problems. We observe a growing body of posts in SO with discussions of LCSD platforms. In this paper, we present an empirical study of around 5K SO posts (questions + accepted answers) that contain discussions of nine popular LCSD platforms. We apply topic modeling on the posts to determine the types of topics discussed. We find 13 topics related to LCSD in SO, grouped into four categories: Customization, Platform Adoption, Database Management, and Third-Party Integration. More than 40% of the questions are about customization, i.e., developers frequently face challenges with customizing user interfaces or services offered by LCSD platforms. The topic "Dynamic Event Handling" under the "Customization" category is the most popular (in terms of average view counts per question) as well as the most difficult: developers frequently search for customization solutions such as how to attach dynamic events to a form in a low-code UI, yet most (75.9%) of their questions remain without an accepted answer. We manually label 900 questions from the posts to determine the prevalence of the topics' challenges across LCSD phases. We find that most of the questions are related to the development phase, and that low-code developers also face challenges with automated testing.
http://arxiv.org/abs/2103.11429v1
"2021-03-21T16:24:43Z"
cs.SE
2,021
Posterior distributions for Hierarchical Spike and Slab Indian Buffet processes
Lancelot F. James, Juho Lee, Abhinav Pandey
Bayesian nonparametric hierarchical priors are highly effective in providing flexible models for latent data structures exhibiting sharing of information between and across groups. Most prominent is the Hierarchical Dirichlet Process (HDP), and its subsequent variants, which model latent clustering between and across groups. The HDP may be viewed as a more flexible extension of Latent Dirichlet Allocation models (LDA), and has been applied to, for example, topic modelling, natural language processing, and datasets arising in health care. We focus on analogous latent feature allocation models, where the data structures correspond to multisets or unbounded sparse matrices. The fundamental development in this regard is the Hierarchical Indian Buffet Process (HIBP), which utilizes a hierarchy of Beta processes over J groups, where each group generates binary random matrices, reflecting within-group sharing of features, according to beta-Bernoulli IBP priors. To encompass HIBP versions of non-Bernoulli extensions of the IBP, we introduce hierarchical versions of general spike and slab IBPs. We provide explicit novel descriptions of the marginal, posterior and predictive distributions of the HIBP and its generalizations, which allow for exact sampling and simpler practical implementation. We highlight common structural properties of these processes and establish relationships to existing IBP-type and related models arising in the literature. Examples of potential applications may involve topic models, Poisson factorization models, random count matrix priors and neural network models.
http://arxiv.org/abs/2103.11407v1
"2021-03-21T14:16:28Z"
math.ST, math.PR, stat.TH, 60C05, 60G09 (Primary), 60G57, 60E99 (Secondary)
2,021
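As background for the hierarchical constructions above, a minimal forward simulation of the (single-group) Indian Buffet Process prior, the binary-matrix building block that the HIBP places a hierarchy of Beta processes over (the customer count and alpha value are illustrative):

```python
# Sketch: forward-simulate an Indian Buffet Process prior.  Customer n takes
# each existing dish with probability (count so far)/n, then samples
# Poisson(alpha/n) new dishes; the result is a binary feature matrix Z.
import numpy as np

def sample_ibp(n_customers, alpha, rng):
    dish_counts = []           # how many customers have taken each dish
    rows = []
    for n in range(1, n_customers + 1):
        row = [rng.random() < c / n for c in dish_counts]  # existing dishes
        for i, took in enumerate(row):
            dish_counts[i] += took
        new = rng.poisson(alpha / n)                       # brand-new dishes
        dish_counts.extend([1] * new)
        rows.append(row + [True] * new)
    Z = np.zeros((n_customers, len(dish_counts)), dtype=int)
    for n, row in enumerate(rows):
        Z[n, :len(row)] = row
    return Z

print(sample_ibp(5, alpha=2.0, rng=np.random.default_rng(0)))
```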
Extractive Summarization of Call Transcripts
Pratik K. Biswas, Aleksandr Iakubovich
Text summarization is the process of extracting the most important information from a text and presenting it concisely in fewer sentences. A call transcript is a textual record of a phone conversation between a customer (caller) and agent(s) (customer representatives). This paper presents an indigenously developed method that combines topic modeling and sentence selection with punctuation restoration to condense ill-punctuated or un-punctuated call transcripts into summaries that are more readable. Extensive testing, evaluation and comparisons have demonstrated the efficacy of this summarizer for call transcript summarization.
http://arxiv.org/abs/2103.10599v2
"2021-03-19T02:40:59Z"
cs.CL
2,021
Covid-19 Discourse on Twitter: How the Topics, Sentiments, Subjectivity, and Figurative Frames Changed Over Time
Philipp Wicke, Marianna M. Bolognesi
The words we use to talk about the current epidemiological crisis on social media can inform us about how we are conceptualizing the pandemic and how we are reacting to its development. This paper provides an extensive explorative analysis of how the discourse about Covid-19 reported on Twitter changed through time, focusing on the first wave of the pandemic. Based on an extensive corpus of tweets (produced between 20th March and 1st July 2020), first we show, using topic modeling, how the topics associated with the development of the pandemic changed through time. Second, we show how the sentiment polarity of the language used in the tweets changed from a relatively positive valence during the first lockdown toward a more negative valence in correspondence with the reopening. Third, we show how the average subjectivity of the tweets increased linearly, and fourth, how the popular and frequently used figurative frame of WAR changed when real riots and fights entered the discourse.
http://arxiv.org/abs/2103.08952v1
"2021-03-16T10:22:39Z"
cs.CL, cs.SI
2,021
Topical Language Generation using Transformers
Rohola Zandie, Mohammad H. Mahoor
Large-scale transformer-based language models (LMs) demonstrate impressive capabilities in open text generation. However, controlling the generated text's properties such as the topic, style, and sentiment is challenging and often requires significant changes to the model architecture or retraining and fine-tuning the model on new supervised data. This paper presents a novel approach for Topical Language Generation (TLG) by combining a pre-trained LM with topic modeling information. We cast the problem using Bayesian probability formulation with topic probabilities as a prior, LM probabilities as the likelihood, and topical language generation probability as the posterior. In learning the model, we derive the topic probability distribution from the user-provided document's natural structure. Furthermore, we extend our model by introducing new parameters and functions to influence the quantity of the topical features presented in the generated text. This feature would allow us to easily control the topical properties of the generated text. Our experimental results demonstrate that our model outperforms the state-of-the-art results on coherency, diversity, and fluency while being faster in decoding.
http://arxiv.org/abs/2103.06434v1
"2021-03-11T03:45:24Z"
cs.CL, cs.AI
2,021
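The Bayesian formulation above amounts to multiplying LM next-token probabilities by a topic prior and renormalizing; a minimal numpy sketch over a toy vocabulary (the vocabulary, the two distributions, and the gamma strength knob are illustrative assumptions):

```python
# Sketch of topical reweighting: posterior ∝ LM likelihood × topic prior^gamma,
# where gamma controls how strongly topical words are favoured.
import numpy as np

vocab = ["the", "game", "team", "market", "stocks"]
p_lm = np.array([0.40, 0.15, 0.15, 0.15, 0.15])     # LM next-token probs
p_topic = np.array([0.20, 0.05, 0.05, 0.35, 0.35])  # "finance" topic-word prior

def topical_posterior(p_lm, p_topic, gamma=1.0):
    post = p_lm * p_topic**gamma
    return post / post.sum()                        # renormalize

for gamma in (0.0, 1.0, 2.0):                       # gamma=0 recovers the LM
    probs = topical_posterior(p_lm, p_topic, gamma)
    print(f"gamma={gamma}:", dict(zip(vocab, probs.round(2))))
```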
The Unfolding Structure of Arguments in Online Debates: The case of a No-Deal Brexit
Carlo Santagiustina, Massimo Warglien
In the last decade, political debates have progressively shifted to social media. Rhetorical devices employed by online actors and factions that operate in these debating arenas can be captured and analysed to conduct a statistical reading of societal controversies and their argumentation dynamics. In this paper, we propose a five-step methodology, to extract, categorize and explore the latent argumentation structures of online debates. Using Twitter data about a "no-deal" Brexit, we focus on the expected effects in case of materialisation of this event. First, we extract cause-effect claims contained in tweets using RegEx that exploit verbs related to Creation, Destruction and Causation. Second, we categorise extracted "no-deal" effects using a Structural Topic Model estimated on unigrams and bigrams. Third, we select controversial effect topics and explore within-topic argumentation differences between self-declared partisan user factions. We hence type topics using estimated covariate effects on topic propensities, then, using the topics correlation network, we study the topological structure of the debate to identify coherent topical constellations. Finally, we analyse the debate time dynamics and infer lead/follow relations among factions. Results show that the proposed methodology can be employed to perform a statistical rhetorics analysis of debates, and map the architecture of controversies across time. In particular, the "no-deal" Brexit debate is shown to have an assortative argumentation structure heavily characterized by factional constellations of arguments, as well as by polarized narrative frames invoked through verbs related to Creation and Destruction. Our findings highlight the benefits of implementing a systemic approach to the analysis of debates, which allows the unveiling of topical and factional dependencies between arguments employed in online debates.
http://arxiv.org/abs/2103.16387v1
"2021-03-09T12:29:43Z"
cs.CL, cs.CY, stat.AP, stat.ME, J.4; I.7
2,021
Counterfactuals and Causability in Explainable Artificial Intelligence: Theory, Algorithms, and Applications
Yu-Liang Chou, Catarina Moreira, Peter Bruza, Chun Ouyang, Joaquim Jorge
There has been a growing interest in model-agnostic methods that can make deep learning models more transparent and explainable to a user. Some researchers recently argued that for a machine to achieve a certain degree of human-level explainability, this machine needs to provide human causally understandable explanations, also known as causability. A specific class of algorithms that have the potential to provide causability are counterfactuals. This paper presents an in-depth systematic review of the diverse existing body of literature on counterfactuals and causability for explainable artificial intelligence. We performed an LDA topic modelling analysis under a PRISMA framework to find the most relevant literature articles. This analysis resulted in a novel taxonomy that considers the grounding theories of the surveyed algorithms, together with their underlying properties and applications in real-world data. This research suggests that current model-agnostic counterfactual algorithms for explainable AI are not grounded on a causal theoretical formalism and, consequently, cannot promote causability to a human decision-maker. Our findings suggest that the explanations derived from major algorithms in the literature provide spurious correlations rather than cause-effect relationships, leading to sub-optimal, erroneous or even biased explanations. This paper also advances the literature with new directions and challenges on promoting causability in model-agnostic approaches for explainable artificial intelligence.
http://arxiv.org/abs/2103.04244v2
"2021-03-07T03:11:39Z"
cs.AI, cs.LG
2,021
A Statistical Perspective on the Challenges in Molecular Microbial Biology
Pratheepa Jeganathan, Susan P. Holmes
High throughput sequencing (HTS)-based technology enables identifying and quantifying non-culturable microbial organisms in all environments. Microbial sequences have enhanced our understanding of the human microbiome, the soil and plant environment, and the marine environment. All molecular microbial data pose statistical challenges due to contamination sequences from reagents, batch effects, unequal sampling, and undetected taxa. Technical biases and heteroscedasticity have the strongest effects, but different strains across subjects and environments also make direct differential abundance testing unwieldy. We provide an introduction to a few statistical tools that can overcome some of these difficulties and demonstrate those tools on an example. We show how standard statistical methods, such as simple hierarchical mixture and topic models, can facilitate inferences on latent microbial communities. We also review some nonparametric Bayesian approaches that combine visualization and uncertainty quantification. The intersection of molecular microbial biology and statistics is an exciting new venue. Finally, we list some of the important open problems that would benefit from more careful statistical method development.
http://arxiv.org/abs/2103.04198v1
"2021-03-06T21:14:23Z"
stat.AP
2,021
COVID-19: Detecting Depression Signals during Stay-At-Home Period
Jean Marie Tshimula, Belkacem Chikhaoui, Shengrui Wang
The new coronavirus outbreak has been officially declared a global pandemic by the World Health Organization. To grapple with the rapid spread of this ongoing pandemic, most countries have banned indoor and outdoor gatherings and ordered their residents to stay home. Given the developing situation with coronavirus, mental health is an important challenge in our society today. In this paper, we discuss the investigation of social media postings to detect signals relevant to depression. To this end, we utilize topic modeling features and a collection of psycholinguistic and mental well-being attributes to develop statistical models to characterize and facilitate representation of the more subtle aspects of depression. Furthermore, we predict whether signals relevant to depression are likely to grow significantly as time moves forward. Our best classifier yields F1 scores as high as 0.8 and surpasses the utilized baseline by a considerable margin of 0.173. In closing, we propose several future research avenues.
http://arxiv.org/abs/2103.00597v1
"2021-02-28T19:30:20Z"
cs.SI
2,021
Exploring the social influence of Kaggle virtual community on the M5 competition
Xixi Li, Yun Bai, Yanfei Kang
One of the most significant differences of M5 over previous forecasting competitions is that it was held on Kaggle, an online platform of data scientists and machine learning practitioners. Kaggle provides a gathering place, or virtual community, for web users who are interested in the M5 competition. Users can share code, models, features, loss functions, etc. through online notebooks and discussion forums. This paper aims to study the social influence of virtual community on user behaviors in the M5 competition. We first research the content of the M5 virtual community by topic modeling and trend analysis. Further, we perform social media analysis to identify the potential relationship network of the virtual community. We study the roles and characteristics of some key participants that promote the diffusion of information within the M5 virtual community. Overall, this study provides in-depth insights into the mechanism of the virtual community's influence on the participants and has potential implications for future online competitions.
http://arxiv.org/abs/2103.00501v3
"2021-02-28T13:15:50Z"
cs.SI, cs.LG
2,021
Topic Modelling Meets Deep Neural Networks: A Survey
He Zhao, Dinh Phung, Viet Huynh, Yuan Jin, Lan Du, Wray Buntine
Topic modelling has been a successful technique for text analysis for almost twenty years. When topic modelling met deep neural networks, there emerged a new and increasingly popular research area, neural topic models, with over a hundred models developed and a wide range of applications in neural language understanding such as text generation, summarisation and language models. There is a need to summarise research developments and discuss open problems and future directions. In this paper, we provide a focused yet comprehensive overview of neural topic models for interested researchers in the AI community, so as to facilitate them to navigate and innovate in this fast-growing research area. To the best of our knowledge, ours is the first review focusing on this specific topic.
http://arxiv.org/abs/2103.00498v1
"2021-02-28T12:59:28Z"
cs.LG, cs.CL, cs.IR
2,021
Visualizing Music Genres using a Topic Model
Swaroop Panda, V. Namboodiri, S. T. Roy
Music genres serve as important meta-data in the field of music information retrieval and have been widely used for music classification and analysis tasks. Visualizing these music genres can thus be helpful for music exploration, archival and recommendation. Probabilistic topic models have been very successful in modelling text documents. In this work, we visualize music genres using a probabilistic topic model. Unlike text documents, audio is continuous and needs to be sliced into smaller segments. We use simple MFCC features of these segments as musical words. We apply the topic model on the corpus and subsequently use the genre annotations of the data to interpret and visualize the latent space.
http://arxiv.org/abs/2103.00127v1
"2021-02-27T04:46:36Z"
cs.HC
2,021
Deep NMF Topic Modeling
JianYu Wang, Xiao-Lei Zhang
Nonnegative matrix factorization (NMF) based topic modeling methods rely little on model or data assumptions. However, they are usually formulated as difficult optimization problems, which may suffer from bad local minima and high computational complexity. In this paper, we propose a deep NMF (DNMF) topic modeling framework to alleviate these problems. It first applies an unsupervised deep learning method to learn latent hierarchical structures of documents, under the assumption that if we can learn a good representation of documents by, e.g., a deep model, then the topic word discovery problem can be boosted. Then, it takes the output of the deep model to constrain a topic-document distribution for the discovery of the discriminant topic words, which not only improves the efficacy but also reduces the computational complexity over conventional unsupervised NMF methods. We constrain the topic-document distribution in three ways, which take advantage of the three major sub-categories of NMF: basic NMF, structured NMF, and constrained NMF, respectively. To overcome the weaknesses of deep neural networks in unsupervised topic modeling, we adopt a non-neural-network deep model, the multilayer bootstrap network. To our knowledge, this is the first time that a deep NMF model has been used for unsupervised topic modeling. We have compared the proposed method with a number of representative references covering major branches of topic modeling on a variety of real-world text corpora. Experimental results illustrate the effectiveness of the proposed method under various evaluation metrics.
http://arxiv.org/abs/2102.12998v1
"2021-02-24T14:40:22Z"
cs.IR
2,021
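For reference, the basic (single-layer) NMF topic model that the deep variant above builds on, sketched with scikit-learn; the paper's multilayer-bootstrap DNMF itself is not available off the shelf, so this shows only the underlying decomposition:

```python
# Baseline NMF topic modeling: factor a TF-IDF matrix X ≈ W·H, where rows of
# H are topics over words and rows of W are document-topic weights.
from sklearn.datasets import fetch_20newsgroups
from sklearn.decomposition import NMF
from sklearn.feature_extraction.text import TfidfVectorizer

docs = fetch_20newsgroups(remove=("headers", "footers", "quotes")).data[:500]
vec = TfidfVectorizer(max_features=2000, stop_words="english")
X = vec.fit_transform(docs)

nmf = NMF(n_components=10, init="nndsvd", random_state=0, max_iter=400)
W = nmf.fit_transform(X)                     # document-topic weights
terms = vec.get_feature_names_out()
for k, row in enumerate(nmf.components_):    # topic-word weights
    top = row.argsort()[-8:][::-1]
    print(f"topic {k}:", [terms[i] for i in top])
```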
Investigating Local and Global Information for Automated Audio Captioning with Transfer Learning
Xuenan Xu, Heinrich Dinkel, Mengyue Wu, Zeyu Xie, Kai Yu
Automated audio captioning (AAC) aims at generating summarizing descriptions for audio clips. Multitudinous concepts are described in an audio caption, ranging from local information such as sound events to global information like acoustic scenery. Currently, the mainstream paradigm for AAC is the end-to-end encoder-decoder architecture, expecting the encoder to learn all levels of concepts embedded in the audio automatically. This paper first proposes a topic model for audio descriptions, comprehensively analyzing the hierarchical audio topics that are commonly covered. We then explore a transfer learning scheme to access local and global information. Two source tasks are identified to respectively represent local and global information, namely Audio Tagging (AT) and Acoustic Scene Classification (ASC). Experiments are conducted on the AAC benchmark datasets Clotho and AudioCaps, yielding a vast increase in all eight metrics with topic transfer learning. Further, it is discovered that local information and abstract representation learning are more crucial to AAC than global information and temporal relationship learning.
http://arxiv.org/abs/2102.11457v1
"2021-02-23T02:09:49Z"
cs.SD, eess.AS
2,021
Using Transformer based Ensemble Learning to classify Scientific Articles
Sohom Ghosh, Ankush Chopra
Reviewers often fail to appreciate the novel ideas of a researcher and provide generic feedback. Thus, proper assignment of reviewers based on their area of expertise is necessary. Moreover, reading each and every paper end-to-end in order to assign it to a reviewer is a tedious task. In this paper, we describe a system which our team FideLIPI submitted in the shared task of SDPRA-2021 [14]. It comprises four independent sub-systems capable of classifying abstracts of scientific literature into one of the given seven classes. The first one is a RoBERTa [10] based model built over these abstracts. Adding topic model / Latent Dirichlet Allocation (LDA) [2] based features to the first model yields the second sub-system. The third one is a sentence-level RoBERTa [10] model. The fourth one is a Logistic Regression model built using Term Frequency Inverse Document Frequency (TF-IDF) features. We ensemble the predictions of these four sub-systems using majority voting to develop the final system, which gives an F1 score of 0.93 on the test and validation sets. This outperforms the existing State Of The Art (SOTA) model SciBERT [1] in terms of F1 score on the validation set. Our codebase is available at https://github.com/SDPRA-2021/shared-task/tree/main/FideLIPI
http://arxiv.org/abs/2102.09991v2
"2021-02-19T15:42:26Z"
cs.CL
2,021
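The final ensembling step above is plain majority voting over the sub-systems' predicted class labels; a minimal numpy sketch (the per-model predictions are invented):

```python
# Majority-vote ensembling of per-document class predictions from several
# sub-systems; ties resolve to the smallest label via np.bincount/argmax.
import numpy as np

def majority_vote(predictions):
    # predictions: (n_models, n_documents) array of integer class labels
    return np.array([np.bincount(col).argmax() for col in predictions.T])

preds = np.array([
    [2, 0, 5, 1],   # e.g. RoBERTa on abstracts
    [2, 0, 4, 1],   # e.g. RoBERTa + LDA features
    [2, 3, 5, 1],   # e.g. sentence-level RoBERTa
    [0, 0, 5, 6],   # e.g. TF-IDF + logistic regression
])
print(majority_vote(preds))  # -> [2 0 5 1]
```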
JST-RR Model: Joint Modeling of Ratings and Reviews in Sentiment-Topic Prediction
Qiao Liang, Shyam Ranganathan, Kaibo Wang, Xinwei Deng
Analysis of online reviews has attracted great attention with broad applications. Oftentimes, the textual reviews are coupled with numerical ratings in the data. In this work, we propose a probabilistic model to accommodate both textual reviews and overall ratings, with consideration of their intrinsic connection, for a joint sentiment-topic prediction. The key of the proposed method is to develop a unified generative model where the topic modeling is constructed based on review texts and the sentiment prediction is obtained by combining review texts and overall ratings. The inference of model parameters is obtained by an efficient Gibbs sampling procedure. The proposed method can enhance the prediction accuracy of review data and achieve effective detection of interpretable topics and sentiments. The merits of the proposed method are elaborated through a case study on Amazon datasets and simulation studies.
http://arxiv.org/abs/2102.11048v1
"2021-02-18T15:47:34Z"
cs.CL, stat.ME
2,021
Generating Diversified Comments via Reader-Aware Topic Modeling and Saliency Detection
Wei Wang, Piji Li, Hai-Tao Zheng
Automatic comment generation is a special and challenging task to verify the model ability on news content comprehension and language generation. Comments not only convey salient and interesting information in news articles, but also imply various and different reader characteristics which we treat as the essential clues for diversity. However, most of the comment generation approaches only focus on saliency information extraction, while the reader-aware factors implied by comments are neglected. To address this issue, we propose a unified reader-aware topic modeling and saliency information detection framework to enhance the quality of generated comments. For reader-aware topic modeling, we design a variational generative clustering algorithm for latent semantic learning and topic mining from reader comments. For saliency information detection, we introduce Bernoulli distribution estimating on news content to select saliency information. The obtained topic representations as well as the selected saliency information are incorporated into the decoder to generate diversified and informative comments. Experimental results on three datasets show that our framework outperforms existing baseline methods in terms of both automatic metrics and human evaluation. The potential ethical issues are also discussed in detail.
http://arxiv.org/abs/2102.06856v1
"2021-02-13T03:50:31Z"
cs.CL
2,021
Pulse of the Pandemic: Iterative Topic Filtering for Clinical Information Extraction from Social Media
Julia Wu, Venkatesh Sivaraman, Dheekshita Kumar, Juan M. Banda, David Sontag
The rapid evolution of the COVID-19 pandemic has underscored the need to quickly disseminate the latest clinical knowledge during a public-health emergency. One surprisingly effective platform for healthcare professionals (HCPs) to share knowledge and experiences from the front lines has been social media (for example, the "#medtwitter" community on Twitter). However, identifying clinically-relevant content in social media without manual labeling is a challenge because of the sheer volume of irrelevant data. We present an unsupervised, iterative approach to mine clinically relevant information from social media data, which begins by heuristically filtering for HCP-authored texts and incorporates topic modeling and concept extraction with MetaMap. This approach identifies granular topics and tweets with high clinical relevance from a set of about 52 million COVID-19-related tweets from January to mid-June 2020. We also show that because the technique does not require manual labeling, it can be used to identify emerging topics on a week-to-week basis. Our method can aid in future public-health emergencies by facilitating knowledge transfer among healthcare workers in a rapidly-changing information environment, and by providing an efficient and unsupervised way of highlighting potential areas for clinical research.
http://arxiv.org/abs/2102.06836v2
"2021-02-13T01:01:04Z"
cs.SI, cs.CY
2,021
Real-time tracking of COVID-19 and coronavirus research updates through text mining
Yutong Jin, Jie Li, Xinyu Wang, Peiyao Li, Jinjiang Guo, Junfeng Wu, Dawei Leng, Lurong Pan
The novel coronavirus (SARS-CoV-2), which causes COVID-19, is responsible for an ongoing pandemic. There are ongoing studies, with up to hundreds of publications uploaded to databases daily. We explore the use of artificial intelligence and natural language processing to efficiently sort through these publications. We demonstrate that clinical trial information, preclinical studies, and a general topic model can be used as text mining data intelligence tools for scientists all over the world to use as a resource for their own research. To evaluate our method, several metrics are used to measure the information extraction and clustering results. In addition, we demonstrate that our workflow not only has a use case for COVID-19, but for other disease areas as well. Overall, our system aims to allow scientists to research coronavirus more efficiently. Our automatically updating modules are available on our information portal at https://ghddi-ailab.github.io/Targeting2019-nCoV/ for public viewing.
http://arxiv.org/abs/2102.07640v1
"2021-02-09T04:09:42Z"
cs.IR
2,021
Concentrated Document Topic Model
Hao Lei, Ying Chen
We propose a Concentrated Document Topic Model (CDTM) for unsupervised text classification, which is able to produce a concentrated and sparse document topic distribution. In particular, an exponential entropy penalty is imposed on the document topic distribution. Documents that have diverse topic distributions are penalized more, while those having concentrated topics are penalized less. We apply the model to the benchmark NIPS dataset and observe more coherent topics and more concentrated and sparse document-topic distributions than Latent Dirichlet Allocation (LDA).
http://arxiv.org/abs/2102.04449v1
"2021-02-06T07:12:05Z"
stat.ML, cs.IR, cs.LG
2,021
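To make the penalty concrete, here is a small numpy sketch of an exponential-entropy penalty on document-topic distributions; the weight `lam` is an assumed hyperparameter, and this illustrates the idea rather than reproducing the authors' exact objective.

```python
import numpy as np

def entropy_penalty(theta, lam=1.0, eps=1e-12):
    """Exponential-entropy penalty on document-topic distributions (rows
    of theta): diverse, high-entropy documents incur larger penalties."""
    H = -np.sum(theta * np.log(theta + eps), axis=-1)  # Shannon entropy per doc
    return lam * np.exp(H)

theta = np.array([[0.94, 0.02, 0.02, 0.02],   # concentrated document
                  [0.25, 0.25, 0.25, 0.25]])  # diverse document
print(entropy_penalty(theta))  # the diverse row receives the larger penalty
```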
Exclusive Topic Modeling
Hao Lei, Ying Chen
We propose an Exclusive Topic Modeling (ETM) approach for unsupervised text classification, which is able to 1) identify field-specific keywords even though they appear less frequently, and 2) deliver well-structured topics with exclusive words. In particular, a weighted Lasso penalty is imposed to automatically reduce the dominance of frequently appearing yet less relevant words, and a pairwise Kullback-Leibler divergence penalty is used to enforce topic separation. Simulation studies demonstrate that the ETM detects field-specific keywords where LDA fails. When applied to the benchmark NIPS dataset, the topic coherence score improves on average by 22% and 10% for the model with the weighted Lasso penalty and the pairwise Kullback-Leibler divergence penalty, respectively.
http://arxiv.org/abs/2102.03525v1
"2021-02-06T07:03:15Z"
stat.ML, cs.IR, cs.LG
2,021
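The topic-separation term can likewise be sketched directly. The snippet below computes the sum of pairwise Kullback-Leibler divergences between topic-word distributions; it illustrates the separation penalty's ingredients, not the full ETM objective with its weighted Lasso term.

```python
import numpy as np

def pairwise_kl(phi, eps=1e-12):
    """Sum of pairwise KL divergences between topic-word distributions
    (rows of phi); a larger value means better-separated topics."""
    total = 0.0
    for i in range(phi.shape[0]):
        for j in range(phi.shape[0]):
            if i != j:
                total += np.sum(phi[i] * np.log((phi[i] + eps) / (phi[j] + eps)))
    return total
```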
How Pandemic Spread in News: Text Analysis Using Topic Model
Minghao Wang, Paolo Mengoni
Research about COVID-19 has increased sharply, in biology as well as in other fields. This study conducts a text analysis using an LDA topic model. We first scraped a total of 1,127 articles and 5,563 comments on SCMP covering COVID-19 from January 20 to May 19, then trained the LDA model and tuned its parameters using the C_v coherence score as the model evaluation method. With the optimal model, we analyze the dominant topics, representative documents for each topic, and the inconsistency between articles and comments. Finally, three possible improvements are discussed.
http://arxiv.org/abs/2102.04205v2
"2021-02-05T08:33:45Z"
cs.IR, cs.AI
2,021
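The tuning procedure (train LDA for several topic counts, score each model by C_v coherence, keep the best) is easy to reproduce with gensim. A minimal sketch, with the topic range and pass count as assumptions:

```python
from gensim.corpora import Dictionary
from gensim.models import CoherenceModel, LdaModel

def best_lda_by_cv(texts, topic_range=range(5, 31, 5)):
    """Train LDA over a grid of topic counts (texts: list of token lists)
    and return the model with the highest C_v coherence."""
    dictionary = Dictionary(texts)
    corpus = [dictionary.doc2bow(t) for t in texts]
    best_model, best_cv = None, float("-inf")
    for k in topic_range:
        lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=k,
                       random_state=0, passes=10)
        cv = CoherenceModel(model=lda, texts=texts, dictionary=dictionary,
                            coherence="c_v").get_coherence()
        if cv > best_cv:
            best_model, best_cv = lda, cv
    return best_model, best_cv
```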
Modeling Financial Products and their Supply Chains
Margret Bjarnadottir, Louiqa Raschid
The objective of this paper is to explore how financial big data and machine learning methods can be applied to model and understand financial products. We focus on residential mortgage backed securities, resMBS, which were at the heart of the 2008 US financial crisis. These securities are contained within a prospectus and have a complex waterfall payoff structure. Multiple financial institutions form a supply chain to create prospectuses. To model this supply chain, we use unsupervised probabilistic methods, particularly dynamic topic models (DTM), to extract a set of features (topics) reflecting community formation and temporal evolution along the chain. We then provide insight into the performance of the resMBS securities and the impact of the supply chain through a series of increasingly comprehensive models. First, models at the security level directly identify salient features of resMBS securities that impact their performance. We then extend the model to include prospectus level features and demonstrate that the composition of the prospectus is significant. Our model also shows that communities along the supply chain that are associated with the generation of the prospectuses and securities have an impact on performance. We are the first to show that toxic communities that are closely linked to financial institutions that played a key role in the subprime crisis can increase the risk of failure of resMBS securities.
http://arxiv.org/abs/2102.02329v2
"2021-02-03T23:20:21Z"
cs.LG
2,021
Focusing Knowledge-based Graph Argument Mining via Topic Modeling
Patrick Abels, Zahra Ahmadi, Sophie Burkhardt, Benjamin Schiller, Iryna Gurevych, Stefan Kramer
Decision-making usually takes five steps: identifying the problem, collecting data, extracting evidence, identifying pro and con arguments, and making decisions. Focusing on extracting evidence, this paper presents a hybrid model that combines latent Dirichlet allocation and word embeddings to obtain external knowledge from structured and unstructured data. We study the task of sentence-level argument mining, as arguments mostly require some degree of world knowledge to be identified and understood. Given a topic and a sentence, the goal is to classify whether a sentence represents an argument in regard to the topic. We use a topic model to extract topic- and sentence-specific evidence from the structured knowledge base Wikidata, building a graph based on the cosine similarity between the entity word vectors of Wikidata and the vector of the given sentence. Also, we build a second graph based on topic-specific articles found via Google to tackle the general incompleteness of structured knowledge bases. Combining these graphs, we obtain a graph-based model which, as our evaluation shows, successfully capitalizes on both structured and unstructured data.
http://arxiv.org/abs/2102.02086v1
"2021-02-03T14:39:58Z"
cs.IR, cs.AI, cs.LG
2,021
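The first graph construction described above (linking a sentence to entities by embedding similarity) can be sketched as follows; the entity vectors, threshold, and node naming are illustrative assumptions.

```python
import numpy as np
import networkx as nx

def build_evidence_graph(sentence_vec, entity_vecs, threshold=0.5):
    """Connect a sentence node to entity nodes whose word vectors exceed
    a cosine-similarity threshold; entity_vecs maps entity name -> vector."""
    g = nx.Graph()
    g.add_node("sentence")
    for name, vec in entity_vecs.items():
        sim = float(np.dot(sentence_vec, vec) /
                    (np.linalg.norm(sentence_vec) * np.linalg.norm(vec) + 1e-12))
        if sim > threshold:
            g.add_edge("sentence", name, weight=sim)
    return g
```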
Machine Translation, Sentiment Analysis, Text Similarity, Topic Modelling, and Tweets: Understanding Social Media Usage Among Police and Gendarmerie Organizations
Emre Cihan Ates, Erkan Bostanci, Mehmet Serdar Guzel
It is well known that social media has revolutionized communication. Nowadays, citizens, companies, and public institutions actively use social media in order to express themselves better to the population they address. This active use is also carried out by gendarmerie and police organizations to communicate with the public with the purpose of improving social relations. However, the posts by the gendarmerie and police organizations at times did not attract much attention from their target audience, and it was discovered that there was not enough research in the literature on this issue. In this study, we aimed to investigate the use of social media by the gendarmerie and police organizations operating in Turkey (Jandarma - Polis), Italy (Carabinieri - Polizia), France (Gendarmerie - Police) and Spain (Guardia Civil - Polic\'ia), and the extent to which they can be effective on their followers, by comparatively examining their activity on Twitter. According to the obtained results, Jandarma (Turkey) has the highest power of influence in the Twitter sample, and the findings are presented comparatively in the study.
http://arxiv.org/abs/2101.12717v1
"2021-01-29T18:26:24Z"
cs.CY
2,021
Out-of-Town Recommendation with Travel Intention Modeling
Haoran Xin, Xinjiang Lu, Tong Xu, Hao Liu, Jingjing Gu, Dejing Dou, Hui Xiong
Out-of-town recommendation is designed for those users who leave their home-town areas and visit the areas they have never been to before. It is challenging to recommend Point-of-Interests (POIs) for out-of-town users since the out-of-town check-in behavior is determined by not only the user's home-town preference but also the user's travel intention. Besides, the user's travel intentions are complex and dynamic, which leads to big difficulties in understanding such intentions precisely. In this paper, we propose a TRAvel-INtention-aware Out-of-town Recommendation framework, named TRAINOR. The proposed TRAINOR framework distinguishes itself from existing out-of-town recommenders in three aspects. First, graph neural networks are explored to represent users' home-town check-in preference and geographical constraints in out-of-town check-in behaviors. Second, a user-specific travel intention is formulated as an aggregation combining home-town preference and generic travel intention together, where the generic travel intention is regarded as a mixture of inherent intentions that can be learned by Neural Topic Model (NTM). Third, a non-linear mapping function, as well as a matrix factorization method, are employed to transfer users' home-town preference and estimate out-of-town POI's representation, respectively. Extensive experiments on real-world data sets validate the effectiveness of the TRAINOR framework. Moreover, the learned travel intention can deliver meaningful explanations for understanding a user's travel purposes.
http://arxiv.org/abs/2101.12555v2
"2021-01-29T13:14:29Z"
cs.IR, cs.LG
2,021
CML-COVID: A Large-Scale COVID-19 Twitter Dataset with Latent Topics, Sentiment and Location Information
Hassan Dashtian, Dhiraj Murthy
As a platform, Twitter has been a significant public space for discussion related to the COVID-19 pandemic. Public social media platforms such as Twitter represent important sites of engagement regarding the pandemic and these data can be used by research teams for social, health, and other research. Understanding public opinion about COVID-19 and how information diffuses in social media is important for governments and research institutions. Twitter is a ubiquitous public platform and, as such, has tremendous utility for understanding public perceptions, behavior, and attitudes related to COVID-19. In this research, we present CML-COVID, a COVID-19 Twitter data set of 19,298,967 tweets from 5,977,653 unique individuals, and summarize some of the attributes of these data. These tweets were collected between March 2020 and July 2020 using the query terms coronavirus, covid and mask related to COVID-19. We use topic modeling, sentiment analysis, and descriptive statistics to describe the tweets related to COVID-19 we collected and the geographical location of tweets, where available. We provide information on how to access our tweet dataset (archived using twarc).
http://arxiv.org/abs/2101.12202v1
"2021-01-28T18:59:10Z"
cs.SI, cs.CY, cs.HC, J.4; H.3.5; K.4
2,021
The Power of Language: Understanding Sentiment Towards the Climate Emergency using Twitter Data
Arman Sarjou
Understanding how attitudes towards the Climate Emergency vary can hold the key to driving policy changes for effective action to mitigate climate related risk. The Oil and Gas industry accounts for a significant proportion of global emissions, so it could be speculated that there is a relationship between Crude Oil Futures and sentiment towards the Climate Emergency. Using Latent Dirichlet Allocation for Topic Modelling on a bespoke Twitter dataset, this study shows that it is possible to split the conversation surrounding the Climate Emergency into 3 distinct topics. Forecasting Crude Oil Futures using Seasonal AutoRegressive Integrated Moving Average modelling gives promising results, with a root mean squared error of 0.196 and 0.209 on the training and testing data respectively. Analysis of variation in attitudes towards the Climate Emergency yields inconclusive results, which could be improved using spatio-temporal analysis methods such as density-based clustering (DBSCAN).
http://arxiv.org/abs/2101.10376v1
"2021-01-25T19:51:10Z"
cs.CL
2,021
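The forecasting component can be sketched with statsmodels' SARIMAX; the (seasonal) orders below are illustrative placeholders rather than the configuration tuned in the study.

```python
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

def sarima_rmse(train, test, order=(1, 1, 1), seasonal_order=(1, 0, 1, 12)):
    """Fit a seasonal ARIMA on training prices and report RMSE on the
    held-out window, mirroring the evaluation described above."""
    fitted = SARIMAX(train, order=order, seasonal_order=seasonal_order).fit(disp=False)
    forecast = fitted.forecast(steps=len(test))
    return float(np.sqrt(np.mean((np.asarray(test) - np.asarray(forecast)) ** 2)))
```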
Adversarial Learning of Poisson Factorisation Model for Gauging Brand Sentiment in User Reviews
Runcong Zhao, Lin Gui, Gabriele Pergola, Yulan He
In this paper, we propose the Brand-Topic Model (BTM) which aims to detect brand-associated polarity-bearing topics from product reviews. Different from existing models for sentiment-topic extraction which assume topics are grouped under discrete sentiment categories such as `positive', `negative' and `neutral', BTM is able to automatically infer real-valued brand-associated sentiment scores and generate fine-grained sentiment-topics in which we can observe continuous changes of words under a certain topic (e.g., `shaver' or `cream') while its associated sentiment gradually varies from negative to positive. BTM is built on the Poisson factorisation model with the incorporation of adversarial learning. It has been evaluated on a dataset constructed from Amazon reviews. Experimental results show that BTM outperforms a number of competitive baselines in brand ranking, achieving a better balance of topic coherence and uniqueness, and extracting better-separated polarity-bearing topics.
http://arxiv.org/abs/2101.10150v1
"2021-01-25T14:58:17Z"
cs.LG
2,021
Analysis and tuning of hierarchical topic models based on Renyi entropy approach
Sergei Koltcov, Vera Ignatenko, Maxim Terpilovskii, Paolo Rosso
Hierarchical topic modeling is a potentially powerful instrument for determining the topical structure of text collections that allows constructing a topical hierarchy representing levels of topical abstraction. However, tuning the parameters of hierarchical models, including the number of topics on each hierarchical level, remains a challenging task and an open issue. In this paper, we propose a Renyi entropy-based approach as a partial solution to the above problem. First, we propose a Renyi entropy-based metric of quality for hierarchical models. Second, we propose a practical concept of hierarchical topic model tuning tested on datasets with human mark-up. In the numerical experiments, we consider three different hierarchical models, namely, the hierarchical latent Dirichlet allocation (hLDA) model, the hierarchical Pachinko allocation model (hPAM), and hierarchical additive regularization of topic models (hARTM). We demonstrate that the hLDA model possesses a significant level of instability and, moreover, the derived numbers of topics are far from the true numbers for labeled datasets. For the hPAM model, the Renyi entropy approach allows us to determine only one level of the data structure. For the hARTM model, the proposed approach allows us to estimate the number of topics for two hierarchical levels.
http://arxiv.org/abs/2101.07598v1
"2021-01-19T12:54:47Z"
stat.ML, cs.LG
2,021
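For reference, the underlying quantity is the Renyi entropy of order q; the paper's metric is built on top of such entropies, and the snippet below shows only the generic formula, not the authors' full quality measure.

```python
import numpy as np

def renyi_entropy(p, q=2.0, eps=1e-12):
    """Renyi entropy H_q(p) = log(sum_i p_i^q) / (1 - q) of a probability
    vector p; recovers Shannon entropy in the limit q -> 1."""
    p = np.asarray(p, dtype=float) + eps
    return float(np.log(np.sum(p ** q)) / (1.0 - q))
```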
User Ex Machina : Simulation as a Design Probe in Human-in-the-Loop Text Analytics
Anamaria Crisan, Michael Correll
Topic models are widely used analysis techniques for clustering documents and surfacing thematic elements of text corpora. These models remain challenging to optimize and often require a "human-in-the-loop" approach where domain experts use their knowledge to steer and adjust. However, the fragility, incompleteness, and opacity of these models mean that even minor changes could induce large and potentially undesirable changes in the resulting model. In this paper we conduct a simulation-based analysis of human-centered interactions with topic models, with the objective of measuring the sensitivity of topic models to common classes of user actions. We find that user interactions have impacts that differ in magnitude but often negatively affect the quality of the resulting modelling in a way that can be difficult for the user to evaluate. We suggest incorporating sensitivity and "multiverse" analyses into topic model interfaces to surface and overcome these deficiencies.
http://arxiv.org/abs/2101.02244v1
"2021-01-06T19:44:11Z"
cs.HC, cs.CL, cs.LG, 68U15, H.5.0
2,021
A Multilayer Correlated Topic Model
Ye Tian
We propose a novel multilayer correlated topic model (MCTM) to analyze how the main ideas are inherited and varied between a document and its different segments, which helps in understanding an article's structure. A variational expectation-maximization (EM) algorithm is derived to estimate the posterior and parameters in MCTM. We introduce two potential applications of MCTM, including paragraph-level document analysis and market basket data analysis. The effectiveness of MCTM in understanding document structure is verified by its strong predictive performance on held-out documents and intuitive visualization. We also show that MCTM can successfully capture customers' popular shopping patterns in the market basket analysis.
http://arxiv.org/abs/2101.02028v1
"2021-01-02T21:50:36Z"
cs.IR, cs.LG, stat.CO, stat.ME, stat.ML
2,021
Panarchy: ripples of a boundary concept
Juan Rocha, Linda Luvuno, Jesse Rieb, Erin Crockett, Katja Malmborg, Michael Schoon, Garry Peterson
How do social-ecological systems change over time? In 2002 Holling and colleagues proposed the concept of Panarchy, which presented social-ecological systems as an interacting set of adaptive cycles, each of which is produced by the dynamic tensions between novelty and efficiency at multiple scales. Initially introduced as a conceptual framework and set of metaphors, panarchy has gained the attention of scholars across many disciplines and its ideas continue to inspire further conceptual developments. Almost twenty years after this concept was introduced, we review how it has been used, tested, extended and revised. We do this by combining qualitative methods and machine learning. Document analysis was used to code panarchy features that are commonly used in the scientific literature (N = 42), a qualitative analysis that was complemented with topic modeling of 2177 documents. We find that the adaptive cycle is the feature of panarchy that has attracted the most attention. Challenges remain in empirically grounding the metaphor, but recent theoretical and empirical work offers some avenues for future research.
http://arxiv.org/abs/2012.14312v1
"2020-12-28T15:47:45Z"
cs.CL
2,020
Facebook Ad Engagement in the Russian Active Measures Campaign of 2016
Mirela Silva, Luiz Giovanini, Juliana Fernandes, Daniela Oliveira, Catia S. Silva
This paper examines 3,517 Facebook ads created by Russia's Internet Research Agency (IRA) between June 2015 and August 2017 in its active measures disinformation campaign targeting the 2016 U.S. general election. We aimed to unearth the relationship between ad engagement (as measured by ad clicks) and 41 features related to ads' metadata, sociolinguistic structures, and sentiment. Our analysis was three-fold: (i) understand the relationship between engagement and features via correlation analysis; (ii) find the most relevant feature subsets to predict engagement via feature selection; and (iii) find the semantic topics that best characterize the dataset via topic modeling. We found that ad expenditure, text size, ad lifetime, and sentiment were the top features predicting users' engagement to the ads. Additionally, positive sentiment ads were more engaging than negative ads, and sociolinguistic features (e.g., use of religion-relevant words) were identified as highly important in the makeup of an engaging ad. Linear SVM and Logistic Regression classifiers achieved the highest mean F-scores (93.6% for both models), determining that the optimal feature subset contains 12 and 6 features, respectively. Finally, we corroborate the findings of related works that the IRA specifically targeted Americans on divisive ad topics (e.g., LGBT rights, African American reparations).
http://arxiv.org/abs/2012.11690v2
"2020-12-21T21:30:58Z"
cs.CY, cs.LG, cs.SI
2,020
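The feature-selection-plus-classification step can be sketched with scikit-learn; recursive feature elimination here is a stand-in for whichever selection method the authors used, and the subset sizes 12 and 6 simply echo the optima reported above.

```python
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

def f1_with_subset(X, y, estimator, n_features):
    """Select a fixed-size feature subset via RFE, then report mean F1
    over 5-fold cross-validation (X: numpy feature matrix)."""
    rfe = RFE(estimator, n_features_to_select=n_features).fit(X, y)
    return cross_val_score(estimator, X[:, rfe.support_], y,
                           scoring="f1", cv=5).mean()

# e.g. f1_with_subset(X, y, LinearSVC(max_iter=5000), 12)
#      f1_with_subset(X, y, LogisticRegression(max_iter=5000), 6)
```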
Fake news agenda in the era of COVID-19: Identifying trends through fact-checking content
Wilson Ceron, Mathias-Felipe de-Lima-Santos, Marcos G. Quiles
The rise of social media has ignited an unprecedented circulation of false information in our society. It is even more evident in times of crisis, such as the COVID-19 pandemic. Fact-checking efforts have expanded greatly and have been touted as among the most promising solutions to fake news, especially in times like these. Several studies have reported the development of fact-checking organizations in Western societies, although little attention has been given to the Global South. Here, to fill this gap, we introduce a novel Markov-inspired computational method for identifying topics in tweets. In contrast to other topic modeling approaches, our method clusters topics and their current evolution in a predefined time window. We collected data from the Twitter accounts of two Brazilian fact-checking outlets and present the topics debunked by these initiatives in fortnights throughout the pandemic. By comparing these organizations, we could identify similarities and differences in what was shared by them. Our method provides an important technique for clustering topics in a wide range of scenarios, including an infodemic -- a period of overabundance of the same information. In particular, the data clearly revealed a complex intertwining between politics and the health crisis during this period. We conclude by proposing a generic model which, in our opinion, is suitable for topic modeling, and an agenda for future research.
http://arxiv.org/abs/2012.11004v1
"2020-12-20T19:35:25Z"
cs.SI, cs.CY
2,020
Technical Progress Analysis Using a Dynamic Topic Model for Technical Terms to Revise Patent Classification Codes
Mana Iwata, Yoshiro Matsuda, Yoshimasa Utsumi, Yoshitoshi Tanaka, Kazuhide Nakata
Japanese patents are assigned a patent classification code, FI (File Index), that is unique to Japan. FI is a subdivision of the IPC, an international patent classification code, that is related to Japanese technology. FIs are revised to keep up with technological developments; these revisions have established more than 30,000 new FIs since 2006. However, the revisions require considerable time and effort, and they are not automated and thus inefficient. Therefore, using machine learning to assist in the revision of patent classification codes (FI) should improve both accuracy and efficiency. This study analyzes patent documents from this new perspective of assisting in the revision of patent classification codes with machine learning. To analyze time-series changes in patents, we used the dynamic topic model (DTM), which is an extension of latent Dirichlet allocation (LDA). Also, unlike English, the Japanese language requires morphological analysis. Patents contain many technical words that are not used in everyday life, so morphological analysis using a common dictionary is not sufficient; we therefore used a technique for extracting technical terms from text. After extracting technical terms, we applied them to the DTM. In this study, we determined the technological progress of the lighting class F21 over 14 years and compared it with the actual revisions of patent classification codes. In other words, we extracted technical terms from Japanese patents and applied the DTM to determine the progress of Japanese technology. Then, we analyzed the results from the new perspective of revising patent classification codes with machine learning. As a result, we found that technical terms whose topics were on the rise corresponded to new technologies.
http://arxiv.org/abs/2012.10120v1
"2020-12-18T09:24:01Z"
cs.CL
2,020
Checking Fact Worthiness using Sentence Embeddings
Sidharth Singla
Checking and confirming factual information in texts and speeches is vital for determining the veracity and correctness of factual statements. This work was previously done by journalists and other manual means, but it is a time-consuming task. With advancements in Information Retrieval and NLP, research on automating fact-checking is gaining attention. CLEF-2018 and CLEF-2019 organised tasks related to fact-checking and invited participants. This project focuses on the CLEF-2019 Task-1 Check-Worthiness and performs experiments using the latest Sentence-BERT pre-trained embeddings, topic modeling, and sentiment scores. Evaluation metrics such as MAP, Mean Reciprocal Rank, Mean R-Precision, and Mean Precision@N show the improvement in results achieved by these techniques.
http://arxiv.org/abs/2012.09263v1
"2020-12-16T21:00:24Z"
cs.IR
2,020
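A bare-bones version of the embedding-based ranking can be put together with the sentence-transformers library; the model name and the logistic-regression scorer are illustrative stand-ins for the project's actual configuration.

```python
from sentence_transformers import SentenceTransformer
from sklearn.linear_model import LogisticRegression

def rank_check_worthiness(train_sents, train_labels, test_sents):
    """Encode sentences with a pretrained Sentence-BERT model, train a
    simple scorer, and rank test sentences by check-worthiness."""
    encoder = SentenceTransformer("all-MiniLM-L6-v2")
    clf = LogisticRegression(max_iter=1000)
    clf.fit(encoder.encode(train_sents), train_labels)
    scores = clf.predict_proba(encoder.encode(test_sents))[:, 1]
    return sorted(zip(test_sents, scores), key=lambda x: -x[1])
```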
Exploring Thematic Coherence in Fake News
Martins Samuel Dogo, Deepak P, Anna Jurek-Loughrey
The spread of fake news remains a serious global issue; understanding and curtailing it is paramount. One way of differentiating between deceptive and truthful stories is by analyzing their coherence. This study explores the use of topic models to analyze the coherence of cross-domain news shared online. Experimental results on seven cross-domain datasets demonstrate that fake news shows a greater thematic deviation between its opening sentences and its remainder.
http://arxiv.org/abs/2012.09118v2
"2020-12-16T18:01:04Z"
cs.CL
2,020
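One way to quantify such thematic deviation is to compare the topic distributions of an article's opening and remainder. The sketch below assumes a fitted scikit-learn LDA and vectorizer, and uses Jensen-Shannon distance as a stand-in for whatever deviation measure the paper adopts.

```python
from scipy.spatial.distance import jensenshannon

def thematic_deviation(lda, vectorizer, opening, remainder):
    """Jensen-Shannon distance between the topic distributions of an
    article's opening sentences and its remainder (both strings)."""
    X = vectorizer.transform([opening, remainder])
    theta = lda.transform(X)          # rows: document-topic distributions
    return float(jensenshannon(theta[0], theta[1]))
```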
Efficient Clustering from Distributions over Topics
Carlos Badenes-Olmedo, Jose-Luis Redondo García, Oscar Corcho
There are many scenarios where we may want to find pairs of textually similar documents in a large corpus (e.g. a researcher doing literature review, or an R&D project manager analyzing project proposals). Programmatically discovering those connections can help experts achieve those goals, but brute-force pairwise comparisons are not computationally adequate when the size of the document corpus is too large. Some algorithms in the literature divide the search space into regions containing potentially similar documents, which are later processed separately from the rest in order to reduce the number of pairs compared. However, this kind of unsupervised method still incurs high temporal costs. In this paper, we present an approach that relies on the results of a topic modeling algorithm over the documents in a collection as a means to identify smaller subsets of documents where the similarity function can then be computed. This approach has proved to obtain promising results when identifying similar documents in the domain of scientific publications. We have compared our approach against state-of-the-art clustering techniques and with different configurations for the topic modeling algorithm. Results suggest that our approach outperforms (> 0.5) the other analyzed techniques in terms of efficiency.
http://arxiv.org/abs/2012.08206v1
"2020-12-15T10:52:19Z"
cs.CL, cs.AI
2,020
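The core idea of using topic distributions to shrink the search space admits a very short sketch: bucket documents by dominant topic and only compare pairs within a bucket. The assignment rule here (argmax topic) is an illustrative simplification.

```python
from collections import defaultdict
from itertools import combinations

def candidate_pairs(doc_topic):
    """Yield document index pairs that share a dominant topic, avoiding
    brute-force all-pairs comparison (doc_topic: docs x topics matrix)."""
    buckets = defaultdict(list)
    for idx, dist in enumerate(doc_topic):
        buckets[int(dist.argmax())].append(idx)
    for members in buckets.values():
        yield from combinations(members, 2)
```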
Scalable Cross-lingual Document Similarity through Language-specific Concept Hierarchies
Carlos Badenes-Olmedo, Jose-Luis Redondo García, Oscar Corcho
With the ongoing growth in the number of digital articles in a wider set of languages, we need annotation methods that enable browsing multi-lingual corpora. Multilingual probabilistic topic models have recently emerged as a group of semi-supervised machine learning models that can be used to perform thematic explorations on collections of texts in multiple languages. However, these approaches require theme-aligned training data to create a language-independent space. This constraint limits the range of scenarios to which this technique can offer solutions and makes it difficult to scale up to situations where a huge collection of multi-lingual documents is required during the training phase. This paper presents an unsupervised document similarity algorithm that does not require parallel or comparable corpora, or any other type of translation resource. The algorithm annotates topics automatically created from documents in a single language with cross-lingual labels and describes documents by hierarchies of multi-lingual concepts from independently-trained models. Experiments performed on the English, Spanish and French editions of the JRC-Acquis corpora reveal promising results on classifying and sorting documents by similar content.
http://arxiv.org/abs/2101.03026v1
"2020-12-15T10:42:40Z"
cs.CL, cs.AI, cs.IR
2,020
Discovering Airline-Specific Business Intelligence from Online Passenger Reviews: An Unsupervised Text Analytics Approach
Sharan Srinivas, Surya Ramachandiran
To understand the important dimensions of service quality from the passenger's perspective and tailor service offerings for competitive advantage, airlines can capitalize on the abundantly available online customer reviews (OCR). The objective of this paper is to discover company- and competitor-specific intelligence from OCR using an unsupervised text analytics approach. First, the key aspects (or topics) discussed in the OCR are extracted using three topic models - probabilistic latent semantic analysis (pLSA) and two variants of Latent Dirichlet allocation (LDA-VI and LDA-GS). Subsequently, we propose an ensemble-assisted topic model (EA-TM), which integrates the individual topic models, to classify each review sentence to the most representative aspect. Likewise, to determine the sentiment corresponding to a review sentence, an ensemble sentiment analyzer (E-SA), which combines the predictions of three opinion mining methods (AFINN, SentiStrength, and VADER), is developed. An aspect-based opinion summary (AOS), which provides a snapshot of passenger-perceived strengths and weaknesses of an airline, is established by consolidating the sentiments associated with each aspect. Furthermore, a bi-gram analysis of the labeled OCR is employed to perform root cause analysis within each identified aspect. A case study involving 99,147 airline reviews of a US-based target carrier and four of its competitors is used to validate the proposed approach. The results indicate that a cost- and time-effective performance summary of an airline and its competitors can be obtained from OCR. Finally, besides providing theoretical and managerial implications based on our results, we also provide implications for post-pandemic preparedness in the airline industry considering the unprecedented impact of coronavirus disease 2019 (COVID-19) and predictions on similar pandemics in the future.
http://arxiv.org/abs/2012.08000v1
"2020-12-14T23:09:10Z"
cs.IR, cs.AI, cs.LG
2,020
Topic-Oriented Spoken Dialogue Summarization for Customer Service with Saliency-Aware Topic Modeling
Yicheng Zou, Lujun Zhao, Yangyang Kang, Jun Lin, Minlong Peng, Zhuoren Jiang, Changlong Sun, Qi Zhang, Xuanjing Huang, Xiaozhong Liu
In a customer service system, dialogue summarization can boost service efficiency by automatically creating summaries for long spoken dialogues in which customers and agents try to address issues about specific topics. In this work, we focus on topic-oriented dialogue summarization, which generates highly abstractive summaries that preserve the main ideas from dialogues. In spoken dialogues, abundant dialogue noise and common semantics could obscure the underlying informative content, making the general topic modeling approaches difficult to apply. In addition, for customer service, role-specific information matters and is an indispensable part of a summary. To effectively perform topic modeling on dialogues and capture multi-role information, in this work we propose a novel topic-augmented two-stage dialogue summarizer (TDS) jointly with a saliency-aware neural topic model (SATM) for topic-oriented summarization of customer service dialogues. Comprehensive studies on a real-world Chinese customer service dataset demonstrated the superiority of our method against several strong baselines.
http://arxiv.org/abs/2012.07311v2
"2020-12-14T07:50:25Z"
cs.CL
2,020
A Topic Coverage Approach to Evaluation of Topic Models
Damir Korenčić, Strahil Ristov, Jelena Repar, Jan Šnajder
Topic models are widely used unsupervised models capable of learning topics - weighted lists of words and documents - from large collections of text documents. When topic models are used for discovery of topics in text collections, a question that arises naturally is how well the model-induced topics correspond to topics of interest to the analyst. In this paper we revisit and extend a so far neglected approach to topic model evaluation based on measuring topic coverage - computationally matching model topics with a set of reference topics that models are expected to uncover. The approach is well suited for analyzing models' performance in topic discovery and for large-scale analysis of both topic models and measures of model quality. We propose new measures of coverage and evaluate, in a series of experiments, different types of topic models on two distinct text domains for which interest for topic discovery exists. The experiments include evaluation of model quality, analysis of coverage of distinct topic categories, and the analysis of the relationship between coverage and other methods of topic model evaluation. The paper contributes a new supervised measure of coverage, and the first unsupervised measure of coverage. The supervised measure achieves topic matching accuracy close to human agreement. The unsupervised measure correlates highly with the supervised one (Spearman's $\rho \geq 0.95$). Other contributions include insights into both topic models and different methods of model evaluation, and the datasets and code for facilitating future research on topic coverage.
http://arxiv.org/abs/2012.06274v3
"2020-12-11T12:08:27Z"
cs.IR, cs.CL, cs.LG, H.3.3; I.5.4; I.2.7
2,020
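A toy version of an unsupervised coverage measure can be written in a few lines; the matching criterion below (thresholded cosine similarity between topic-word vectors) is a stand-in for the paper's proposed measures.

```python
import numpy as np

def coverage(model_topics, reference_topics, threshold=0.3):
    """Fraction of reference topics matched by at least one model topic,
    with rows of both matrices being topic-word weight vectors."""
    M = np.asarray(model_topics, dtype=float)
    R = np.asarray(reference_topics, dtype=float)
    M /= np.linalg.norm(M, axis=1, keepdims=True)
    R /= np.linalg.norm(R, axis=1, keepdims=True)
    sims = R @ M.T                      # reference x model similarities
    return float(np.mean(sims.max(axis=1) > threshold))
```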
A Sentiment Analysis Approach to the Prediction of Market Volatility
Justina Deveikyte, Helyette Geman, Carlo Piccari, Alessandro Provetti
Prediction and quantification of future volatility and returns play an important role in financial modelling, both in portfolio optimization and risk management. Natural language processing today allows processing news and social media comments to detect signals of investors' confidence. We have explored the relationship between sentiment extracted from financial news and tweets and FTSE100 movements. We investigated the strength of the correlation between sentiment measures on a given day and market volatility and returns observed the next day. The findings suggest that there is evidence of correlation between sentiment and stock market movements: the sentiment captured from news headlines could be used as a signal to predict market returns; the same does not apply to volatility. Also, in a surprising finding, for the sentiment found in Twitter comments we obtained a correlation coefficient of -0.7, and a p-value below 0.05, which indicates a strong negative correlation between positive sentiment captured from the tweets on a given day and the volatility observed the next day. We developed an accurate classifier for the prediction of market volatility in response to the arrival of new information by deploying topic modelling, based on Latent Dirichlet Allocation, to extract feature vectors from a collection of tweets and financial news. The obtained features were used as additional input to the classifier. Thanks to the combination of sentiment and topic modelling, our classifier achieved a directional prediction accuracy for volatility of 63%.
http://arxiv.org/abs/2012.05906v1
"2020-12-10T01:15:48Z"
q-fin.ST, cs.AI, cs.CL, I.2.7; I.5.4; J.1
2,020
EvaLDA: Efficient Evasion Attacks Towards Latent Dirichlet Allocation
Qi Zhou, Haipeng Chen, Yitao Zheng, Zhen Wang
As one of the most powerful topic models, Latent Dirichlet Allocation (LDA) has been used in a vast range of tasks, including document understanding, information retrieval and peer-reviewer assignment. Despite its tremendous popularity, the security of LDA has rarely been studied. This poses severe risks to security-critical tasks such as sentiment analysis and peer-reviewer assignment that are based on LDA. In this paper, we are interested in knowing whether LDA models are vulnerable to adversarial perturbations of benign document examples during inference time. We formalize the evasion attack to LDA models as an optimization problem and prove it to be NP-hard. We then propose a novel and efficient algorithm, EvaLDA to solve it. We show the effectiveness of EvaLDA via extensive empirical evaluations. For instance, in the NIPS dataset, EvaLDA can averagely promote the rank of a target topic from 10 to around 7 by only replacing 1% of the words with similar words in a victim document. Our work provides significant insights into the power and limitations of evasion attacks to LDA models.
http://arxiv.org/abs/2012.04864v2
"2020-12-09T04:57:20Z"
cs.LG, cs.CR, cs.IR
2,020
TAN-NTM: Topic Attention Networks for Neural Topic Modeling
Madhur Panwar, Shashank Shailabh, Milan Aggarwal, Balaji Krishnamurthy
Topic models have been widely used to learn text representations and gain insight into document corpora. To perform topic discovery, most existing neural models either take document bag-of-words (BoW) or sequences of tokens as input, followed by variational inference and BoW reconstruction to learn the topic-word distribution. However, leveraging the topic-word distribution for learning better features during document encoding has not been explored much. To this end, we develop a framework, TAN-NTM, which processes a document as a sequence of tokens through an LSTM whose contextual outputs are attended in a topic-aware manner. We propose a novel attention mechanism which factors in the topic-word distribution to enable the model to attend on relevant words that convey topic related cues. The output of the topic attention module is then used to carry out variational inference. We perform extensive ablations and experiments, resulting in a ~9-15 percentage-point improvement over the scores of existing SOTA topic models in NPMI coherence on several benchmark datasets - 20Newsgroups, Yelp Review Polarity and AGNews. Further, we show that our method learns better latent document-topic features compared to existing topic models through improvement on two downstream tasks: document classification and topic guided keyphrase generation.
http://arxiv.org/abs/2012.01524v2
"2020-12-02T20:58:04Z"
cs.CL
2,020
A Framework for Authorial Clustering of Shorter Texts in Latent Semantic Spaces
Rafi Trad, Myra Spiliopoulou
Authorial clustering involves the grouping of documents written by the same author or team of authors without any prior positive examples of an author's writing style or thematic preferences. For authorial clustering on shorter texts (paragraph-length texts that are typically shorter than conventional documents), the document representation is particularly important: very high-dimensional feature spaces lead to data sparsity and suffer from serious consequences like the curse of dimensionality, while feature selection may lead to information loss. We propose a high-level framework which utilizes a compact data representation in a latent feature space derived with non-parametric topic modeling. Authorial clusters are identified thereafter in two scenarios: (a) fully unsupervised and (b) semi-supervised, where a small number of shorter texts are known to belong to the same author (must-link constraints) or not (cannot-link constraints). We report on experiments with 120 collections in three languages and two genres and show that the topic-based latent feature space provides a promising level of performance while reducing the dimensionality by a factor of 1500 compared to state-of-the-art approaches. We also demonstrate that, while prior knowledge of the precise number of authors (i.e. authorial clusters) does not contribute much additional quality, a little knowledge of constraints on authorial cluster membership leads to clear performance improvements on this difficult task. Thorough experimentation with standard metrics indicates that there still remains ample room for improvement for authorial clustering, especially with shorter texts.
http://arxiv.org/abs/2011.15038v1
"2020-11-30T17:39:44Z"
cs.CL, cs.AI, cs.LG
2,020
MITAO: a tool for enabling scholars in the Humanities to use Topic Modelling in their studies
Ivan Heibi, Silvio Peroni, Luca Pareschi, Paolo Ferri
Automatic text analysis methods, such as Topic Modelling, are gaining much attention in the Humanities. However, scholars need extensive coding skills to use such methods appropriately, and the need for this technical expertise prevents the broad adoption of these methods in Humanities research. In this paper, to help scholars in the Humanities use Topic Modelling with no or limited coding skills, we introduce MITAO, a web-based tool that allows the definition of a visual workflow which embeds various automatic text analysis operations, and allows one to store and share both the workflow and the results of its execution with other researchers, enabling the reproducibility of the analysis. We present an example application of Topic Modelling with MITAO using a collection of English abstracts of the articles published in "Umanistica Digitale". The results returned by MITAO are shown with dynamic web-based visualizations, which allowed us to gain preliminary insights into the evolution of the topics treated over time in the articles published in "Umanistica Digitale". All the results, along with the defined workflows, are published and accessible for further studies.
http://arxiv.org/abs/2011.13886v1
"2020-11-27T18:20:10Z"
cs.DL
2,020
Gender bias in magazines oriented to men and women: a computational approach
Diego Kozlowski, Gabriela Lozano, Carla M. Felcher, Fernando Gonzalez, Edgar Altszyler
Cultural products are a source to acquire individual values and behaviours. Therefore, the differences in the content of the magazines aimed specifically at women or men are a means to create and reproduce gender stereotypes. In this study, we compare the content of a women-oriented magazine with that of a men-oriented one, both produced by the same editorial group, over a decade (2008-2018). With Topic Modelling techniques we identify the main themes discussed in the magazines and quantify how much the presence of these topics differs between magazines over time. Then, we performed a word-frequency analysis to validate this methodology and extend the analysis to other subjects that did not emerge automatically. Our results show that the frequency of appearance of the topics Family, Business and Women as sex objects, present an initial bias that tends to disappear over time. Conversely, in Fashion and Science topics, the initial differences between both magazines are maintained. Besides, we show that in 2012, the content associated with horoscope increased in the women-oriented magazine, generating a new gap that remained open over time. Also, we show a strong increase in the use of words associated with feminism since 2015 and specifically the word abortion in 2018. Overall, these computational tools allowed us to analyse more than 24,000 articles. Up to our knowledge, this is the first study to compare magazines in such a large dataset, a task that would have been prohibitive using manual content analysis methodologies.
http://arxiv.org/abs/2011.12096v1
"2020-11-24T14:02:49Z"
cs.CL, cs.CY
2,020
LaHAR: Latent Human Activity Recognition using LDA
Zeyd Boukhers, Danniene Wete, Steffen Staab
Processing sequential multi-sensor data is becoming important in many tasks due to the dramatic increase in the availability of sensors that can acquire sequential data over time. Human Activity Recognition (HAR) is one of the fields actively benefiting from this availability. Unlike most approaches that address HAR by considering predefined activity classes, this paper proposes a novel approach to discover the latent HAR patterns in sequential data. To this end, we employ Latent Dirichlet Allocation (LDA), originally a topic modelling approach used in text analysis. To make the data suitable for LDA, we extract so-called "sensory words" from the sequential data. We carried out experiments on a challenging HAR dataset, demonstrating that LDA is capable of uncovering underlying structures in sequential data, which provide a human-understandable representation of the data. The extrinsic evaluations reveal that LDA is capable of accurately clustering HAR data sequences compared to the labelled activities.
http://arxiv.org/abs/2011.11151v1
"2020-11-23T00:33:01Z"
cs.LG, cs.AI
2,020
Topic modelling discourse dynamics in historical newspapers
Jani Marjanen, Elaine Zosa, Simon Hengchen, Lidia Pivovarova, Mikko Tolonen
This paper addresses methodological issues in diachronic data analysis for historical research. We apply two families of topic models (LDA and DTM) on a relatively large set of historical newspapers, with the aim of capturing and understanding discourse dynamics. Our case study focuses on newspapers and periodicals published in Finland between 1854 and 1917, but our method can easily be transposed to any diachronic data. Our main contributions are a) a combined sampling, training and inference procedure for applying topic models to huge and imbalanced diachronic text collections; b) a discussion on the differences between two topic models for this type of data; c) quantifying topic prominence for a period and thus a generalization of document-wise topic assignment to a discourse level; and d) a discussion of the role of humanistic interpretation with regard to analysing discourse dynamics through topic models.
http://arxiv.org/abs/2011.10428v1
"2020-11-20T14:51:07Z"
cs.CL
2,020
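Contribution (c), generalizing document-wise topic assignment to a discourse level, reduces to aggregating per-document topic weights by period. A minimal pandas sketch, with yearly aggregation as an assumed granularity:

```python
import pandas as pd

def topic_prominence(doc_topic, years):
    """Average document-topic weights within each year to obtain a
    discourse-level prominence series (rows: years, columns: topics)."""
    df = pd.DataFrame(doc_topic)
    df["year"] = years
    return df.groupby("year").mean()
```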
User Questions from Tweets on COVID-19: An Exploratory Study
Tiago de Melo
Social media platforms, such as Twitter, provide a suitable avenue for users (people or patients) concerned with health questions to discuss and share information with each other. In December 2019, a few coronavirus disease cases were first reported in China. Soon after, the World Health Organization (WHO) declared a state of emergency due to the rapid spread of the virus in other parts of the world. In this work, we used automated extraction of COVID-19 discussion from Twitter and a natural language processing (NLP) method based on topic modeling to discover the main questions related to COVID-19 from tweets. Moreover, we created a Named Entity Recognition (NER) model to identify the main entities of four different categories: disease, drug, person, and organization. Our findings can help policy makers and health care organizations understand people's concerns about COVID-19 and address them appropriately.
http://arxiv.org/abs/2012.05836v1
"2020-11-20T12:29:55Z"
cs.SI
2,020
Biomedical Named Entity Recognition at Scale
Veysel Kocaman, David Talby
Named entity recognition (NER) is a widely applicable natural language processing task and building block of question answering, topic modeling, information retrieval, etc. In the medical domain, NER plays a crucial role by extracting meaningful chunks from clinical notes and reports, which are then fed to downstream tasks like assertion status detection, entity resolution, relation extraction, and de-identification. Reimplementing a Bi-LSTM-CNN-Char deep learning architecture on top of Apache Spark, we present a single trainable NER model that obtains new state-of-the-art results on seven public biomedical benchmarks without using heavy contextual embeddings like BERT. This includes improving BC4CHEMD to 93.72% (4.1% gain), Species800 to 80.91% (4.6% gain), and JNLPBA to 81.29% (5.2% gain). In addition, this model is freely available within a production-grade code base as part of the open-source Spark NLP library; can scale up for training and inference in any Spark cluster; has GPU support and libraries for popular programming languages such as Python, R, Scala and Java; and can be extended to support other human languages with no code changes.
http://arxiv.org/abs/2011.06315v1
"2020-11-12T11:10:17Z"
cs.CL, cs.AI, cs.LG
2,020
A Critical Correspondence on Humpty Dumpty's Funding for European Journalism
Jukka Ruohonen
This short critical correspondence discusses the Digital News Innovation (DNI) fund orchestrated by Humpty Dumpty -- a.k.a. Google -- for helping European journalism to innovate and renew itself. Based on topic modeling and critical discourse analysis, the results indicate that the innovative projects mostly mimic the old business model of Humpty Dumpty. With these results and the accompanying critical discussion, this correspondence contributes to the ongoing battle between platforms and media.
http://arxiv.org/abs/2011.00751v3
"2020-11-02T05:16:23Z"
cs.CY
2,020
Leveraging Natural Language Processing to Mine Issues on Twitter During the COVID-19 Pandemic
Ankita Agarwal, Preetham Salehundam, Swati Padhee, William L. Romine, Tanvi Banerjee
The recent global outbreak of the coronavirus disease (COVID-19) has spread to all corners of the globe. The international travel ban, panic buying, and the need for self-quarantine are among the many social challenges brought about in this new era. Twitter platforms have been used in various public health studies to identify public opinion about an event at the local and global scale. To understand the public concerns and responses to the pandemic, a system is needed that can leverage machine learning techniques to filter out irrelevant tweets and identify the important topics of discussion on social media platforms like Twitter. In this study, we constructed a system to identify the relevant tweets related to the COVID-19 pandemic from January 1st, 2020 to April 30th, 2020, and explored topic modeling to identify the most discussed topics and themes during this period in our data set. Additionally, we analyzed the temporal changes in the topics with respect to the events that occurred during the pandemic. We found that eight topics were sufficient to identify the themes in our corpus. These topics depicted a temporal trend. The dominant topics vary over time and align with the events related to the COVID-19 pandemic.
http://arxiv.org/abs/2011.00377v2
"2020-10-31T22:26:26Z"
cs.IR, cs.LG, cs.SI
2,020
Face Off: Polarized Public Opinions on Personal Face Mask Usage during the COVID-19 Pandemic
Neil Yeung, Jonathan Lai, Jiebo Luo
In spite of a growing body of scientific evidence on the effectiveness of individual face mask usage for reducing transmission rates, individual face mask usage has become a highly polarized topic within the United States. A series of policy shifts by various governmental bodies have been speculated to have contributed to the polarization of face masks. A typical method to investigate the effects of these policy shifts is to use surveys. However, survey-based approaches have multiple limitations: biased responses, limited sample size, badly crafted questions may skew responses and inhibit insight, and responses may prove quickly irrelevant as opinions change in response to a dynamic topic. We propose a novel approach to 1) accurately gauge public sentiment towards face masks in the United States during COVID-19 using a multi-modal demographic inference framework with topic modeling and 2) determine whether face mask policy shifts contributed to polarization towards face masks using offline change point analysis on Twitter data. First, we infer several key demographics of individual Twitter users such as their age, gender, and whether they are a college student using a multi-modal demographic prediction framework and analyze the average sentiment for each respective demographic. Next, we conduct topic analysis using latent Dirichlet allocation (LDA). Finally, we conduct offline change point discovery on our sentiment time series data using the Pruned Exact Linear Time (PELT) search algorithm. Experimental results on a large corpus of Twitter data reveal multiple insights regarding demographic sentiment towards face masks that agree with existing surveys. Furthermore, we find two key policy-shift events contributed to statistically significant changes in sentiment for both Republicans and Democrats.
http://arxiv.org/abs/2011.00336v2
"2020-10-31T18:52:41Z"
cs.CY, cs.SI
2,020
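The change-point step is available off the shelf in the ruptures library, which implements PELT. A minimal sketch on a synthetic series; the penalty value and cost model are illustrative, and in the study the input would be a per-demographic daily sentiment series.

```python
import numpy as np
import ruptures as rpt

rng = np.random.default_rng(0)
signal = np.concatenate([rng.normal(0, 1, 100), rng.normal(3, 1, 100)])

algo = rpt.Pelt(model="rbf").fit(signal.reshape(-1, 1))
change_points = algo.predict(pen=10)   # indices where the series shifts
print(change_points)
```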
Graph-based Topic Extraction from Vector Embeddings of Text Documents: Application to a Corpus of News Articles
M. Tarik Altuncu, Sophia N. Yaliraki, Mauricio Barahona
Production of news content is growing at an astonishing rate. To help manage and monitor the sheer amount of text, there is an increasing need to develop efficient methods that can provide insights into emerging content areas, and stratify unstructured corpora of text into `topics' that stem intrinsically from content similarity. Here we present an unsupervised framework that brings together powerful vector embeddings from natural language processing with tools from multiscale graph partitioning that can reveal natural partitions at different resolutions without making a priori assumptions about the number of clusters in the corpus. We show the advantages of graph-based clustering through end-to-end comparisons with other popular clustering and topic modelling methods, and also evaluate different text vector embeddings, from classic Bag-of-Words to Doc2Vec to the recent transformer-based model BERT. This comparative work is showcased through an analysis of a corpus of US news coverage during the presidential election year of 2016.
http://arxiv.org/abs/2010.15067v1
"2020-10-28T16:20:05Z"
cs.CL, cs.AI, cs.LG
2,020
TopicModel4J: A Java Package for Topic Models
Yang Qian, Yuanchun Jiang, Yidong Chai, Yezheng Liu, Jiansha Sun
Topic models provide a flexible and principled framework for exploring hidden structure in high-dimensional co-occurrence data and are commonly used in natural language processing (NLP) of text. In this paper, we design and implement a Java package, TopicModel4J, which contains 13 representative algorithms for fitting topic models. TopicModel4J, in the Java programming environment, provides an easy-to-use interface for data analysts to run the algorithms, and allows easy input and output of data. In addition, the package provides a few unstructured text preprocessing techniques, such as splitting textual data into words, lowercasing the words, performing lemmatization and removing useless characters, URLs and stop words.
http://arxiv.org/abs/2010.14707v1
"2020-10-28T02:33:41Z"
cs.CL, cs.IR
2,020
Analyzing Societal Impact of COVID-19: A Study During the Early Days of the Pandemic
Swaroop Gowdra Shanthakumar, Anand Seetharam, Arti Ramesh
In this paper, we collect and study Twitter communications to understand the societal impact of COVID-19 in the United States during the early days of the pandemic. With infections soaring rapidly, users took to Twitter asking people to self isolate and quarantine themselves. Users also demanded closure of schools, bars, and restaurants as well as lockdown of cities and states. We methodically collect tweets by identifying and tracking trending COVID-related hashtags. We first manually group the hashtags into six main categories, namely, 1) General COVID, 2) Quarantine, 3) Panic Buying, 4) School Closures, 5) Lockdowns, and 6) Frustration and Hope, and study the temporal evolution of tweets in these hashtags. We conduct a linguistic analysis of words common to all hashtag groups and specific to each hashtag group and identify the chief concerns of people as the pandemic gripped the nation (e.g., exploring bidets as an alternative to toilet paper). We conduct sentiment analysis and our investigation reveals that people reacted positively to school closures and negatively to the lack of availability of essential goods due to panic buying. We adopt a state-of-the-art semantic role labeling approach to identify the action words and then leverage an LSTM-based dependency parsing model to analyze the context of action words (e.g., the verb deal is accompanied by nouns such as anxiety, stress, and crisis). Finally, we develop a scalable seeded topic modeling approach to automatically categorize and isolate tweets into hashtag groups and experimentally validate that our topic model provides a grouping similar to our manual grouping. Our study presents a systematic way to construct an aggregated picture of people's response to the pandemic and lays the groundwork for future fine-grained linguistic and behavioral analysis.
http://arxiv.org/abs/2010.15674v1
"2020-10-27T19:30:43Z"
cs.SI
2,020
Robust Document Representations using Latent Topics and Metadata
Natraj Raman, Armineh Nourbakhsh, Sameena Shah, Manuela Veloso
Task specific fine-tuning of a pre-trained neural language model using a custom softmax output layer is the de facto approach of late when dealing with document classification problems. This technique is not adequate when labeled examples are not available at training time and when the metadata artifacts in a document must be exploited. We address these challenges by generating document representations that capture both text and metadata artifacts in a task agnostic manner. Instead of traditional auto-regressive or auto-encoding based training, our novel self-supervised approach learns a soft-partition of the input space when generating text embeddings. Specifically, we employ a pre-learned topic model distribution as surrogate labels and construct a loss function based on KL divergence. Our solution also incorporates metadata explicitly rather than just augmenting them with text. The generated document embeddings exhibit compositional characteristics and are directly used by downstream classification tasks to create decision boundaries from a small number of labeled examples, thereby eschewing complicated recognition methods. We demonstrate through extensive evaluation that our proposed cross-model fusion solution outperforms several competitive baselines on multiple datasets.
http://arxiv.org/abs/2010.12681v1
"2020-10-23T21:52:38Z"
cs.CL, cs.AI, cs.LG, cs.NE, I.2.7; I.7.0
2,020
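A minimal sketch of the surrogate-label idea in the abstract above, assuming a pre-learned LDA posterior is available per document. The encoder architecture, dimensions, and data here are hypothetical stand-ins, not the paper's model:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, K = 2000, 50  # hypothetical vocabulary size and LDA topic count

encoder = nn.Sequential(nn.Linear(VOCAB, 256), nn.ReLU(), nn.Linear(256, K))

def surrogate_loss(bow, lda_theta):
    """KL divergence between the predicted soft partition and the
    pre-learned LDA document-topic distribution used as surrogate label."""
    log_q = F.log_softmax(encoder(bow), dim=-1)
    return F.kl_div(log_q, lda_theta, reduction="batchmean")

bow = torch.rand(8, VOCAB)                     # stand-in bag-of-words batch
lda_theta = torch.rand(8, K).softmax(dim=-1)   # stand-in LDA posteriors
loss = surrogate_loss(bow, lda_theta)
loss.backward()
```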
Topic Modeling with Contextualized Word Representation Clusters
Laure Thompson, David Mimno
Clustering token-level contextualized word representations produces output that shares many similarities with topic models for English text collections. Unlike clusterings of vocabulary-level word embeddings, the resulting models more naturally capture polysemy and can be used as a way of organizing documents. We evaluate token clusterings trained from several different output layers of popular contextualized language models. We find that BERT and GPT-2 produce high quality clusterings, but RoBERTa does not. These cluster models are simple, reliable, and can perform as well as, if not better than, LDA topic models, maintaining high topic quality even when the number of topics is large relative to the size of the local collection.
http://arxiv.org/abs/2010.12626v1
"2020-10-23T19:16:59Z"
cs.CL
2,020
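The clustering recipe above is easy to prototype. The sketch below, on a toy two-document corpus, clusters BERT token vectors from one middle layer (the layer choice is an assumption) with k-means, so that occurrences of "bank" can land in different clusters:

```python
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.cluster import KMeans

docs = ["the bank raised interest rates", "the river bank was muddy"]  # toy corpus

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

reps, words = [], []
with torch.no_grad():
    for d in docs:
        enc = tok(d, return_tensors="pt")
        hidden = model(**enc, output_hidden_states=True).hidden_states[8]  # a middle layer
        for i in range(1, enc.input_ids.shape[1] - 1):  # skip [CLS] and [SEP]
            reps.append(hidden[0, i].numpy())
            words.append(tok.convert_ids_to_tokens(enc.input_ids[0, i].item()))

# each cluster plays the role of a topic over token occurrences
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(reps)
for c in range(2):
    print(c, [w for w, l in zip(words, km.labels_) if l == c])
```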
Helping users discover perspectives: Enhancing opinion mining with joint topic models
Tim Draws, Jody Liu, Nava Tintarev
Support or opposition concerning a debated claim such as "abortion should be legal" can have different underlying reasons, which we call perspectives. This paper explores how opinion mining can be enhanced with joint topic modeling to identify distinct perspectives within a topic, providing an informative overview from unstructured text. We evaluate four joint topic models (TAM, JST, VODUM, and LAM) in a user study assessing human understandability of the extracted perspectives. Based on the results, we conclude that joint topic models such as TAM can discover perspectives that align with human judgments. Moreover, our results suggest that users are not influenced by their pre-existing stance on the topic of abortion when interpreting the output of topic models.
http://arxiv.org/abs/2010.12505v2
"2020-10-23T16:13:06Z"
cs.CL
2,020
A Discrete Variational Recurrent Topic Model without the Reparametrization Trick
Mehdi Rezaee, Francis Ferraro
We show how to learn a neural topic model with discrete random variables, one that explicitly models each word's assigned topic, using neural variational inference that does not rely on stochastic backpropagation to handle the discrete variables. The model we utilize combines the expressive power of neural methods for representing sequences of text with the topic model's ability to capture global, thematic coherence. Using neural variational inference, we show improved perplexity and document understanding across multiple corpora. We examine the effect of prior parameters on both the model and variational parameters and demonstrate how our approach can compete with and surpass a popular topic model implementation on an automatic measure of topic quality.
http://arxiv.org/abs/2010.12055v1
"2020-10-22T20:53:44Z"
cs.LG
2,020
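A toy illustration of the key point above: the discrete topic assignment is marginalized exactly, so the gradient needs no sampling. The dimensions and the fixed topic-word matrix are hypothetical:

```python
import torch

K, V = 20, 5000
beta = torch.randn(K, V).softmax(dim=-1)     # fixed topic-word distributions
logits = torch.zeros(K, requires_grad=True)  # variational params for one word's topic
q_z = logits.softmax(dim=-1)                 # q(z): posterior over the discrete topic

word_id = 42
# The expected log-likelihood is a plain sum over all K topics, so gradients
# flow without sampling z and without any reparametrization trick.
expected_log_lik = (q_z * beta[:, word_id].log()).sum()
(-expected_log_lik).backward()               # standard backprop suffices
```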
A Disentangled Adversarial Neural Topic Model for Separating Opinions from Plots in User Reviews
Gabriele Pergola, Lin Gui, Yulan He
The flexibility of the inference process in Variational Autoencoders (VAEs) has recently led to revising traditional probabilistic topic models, giving rise to Neural Topic Models (NTMs). Although these approaches have achieved significant results, surprisingly little work has been done on how to disentangle the latent topics. Existing topic models, when applied to reviews, may extract topics associated with writers' subjective opinions mixed with those related to factual descriptions such as plot summaries in movie and book reviews. It is thus desirable to automatically separate opinion topics from plot/neutral ones, enabling better interpretability. In this paper, we propose a neural topic model combined with adversarial training to disentangle opinion topics from plot and neutral ones. We conduct an extensive experimental assessment introducing a new collection of movie and book reviews paired with their plots, namely the MOBO dataset, showing an improved coherence and variety of topics, a consistent disentanglement rate, and sentiment classification performance superior to other supervised topic models.
http://arxiv.org/abs/2010.11384v2
"2020-10-22T02:15:13Z"
cs.CL
2,020
On a Guided Nonnegative Matrix Factorization
Joshua Vendrow, Jamie Haddock, Elizaveta Rebrova, Deanna Needell
Fully unsupervised topic models have found considerable success in document clustering and classification. However, these models often suffer from the tendency to learn less-than-meaningful or even redundant topics when the data is biased towards a set of features. For this reason, we propose an approach based upon the nonnegative matrix factorization (NMF) model, termed Guided NMF, that incorporates user-designed seed word supervision. Our experimental results demonstrate the promise of this model and illustrate that it is competitive with other methods of this ilk with very little supervision information.
http://arxiv.org/abs/2010.11365v2
"2020-10-22T01:06:17Z"
cs.LG
2,020
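The paper's Guided NMF incorporates seed words through a penalty term in the NMF objective; as a rough stand-in, one can seed the factorization's initialization instead, which the hedged sketch below does with sklearn (the corpus and seed words are toy examples, and seeded initialization is a simpler mechanism than the paper's):

```python
import numpy as np
from sklearn.decomposition import NMF
from sklearn.feature_extraction.text import CountVectorizer

docs = ["stock market shares fall", "election vote campaign",
        "market prices rise", "vote ballot election"]
vec = CountVectorizer()
X = vec.fit_transform(docs).toarray().astype(float)
vocab = list(vec.get_feature_names_out())

k = 2
seeds = {0: ["market"], 1: ["election"]}   # user-designed seed words (hypothetical)

rng = np.random.default_rng(0)
W0 = rng.random((len(docs), k)) + 1e-6     # document-topic initialization
H0 = rng.random((k, X.shape[1])) + 1e-6    # topic-word initialization
for topic, words in seeds.items():
    for w in words:
        H0[topic, vocab.index(w)] += 5.0   # bias each topic toward its seed words

nmf = NMF(n_components=k, init="custom", max_iter=500)
W = nmf.fit_transform(X, W=W0, H=H0)  # W: doc-topic weights; nmf.components_: topic-word
```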
Topic-Guided Abstractive Text Summarization: a Joint Learning Approach
Chujie Zheng, Kunpeng Zhang, Harry Jiannan Wang, Ling Fan, Zhe Wang
We introduce a new approach for abstractive text summarization, Topic-Guided Abstractive Summarization, which calibrates long-range dependencies from topic-level features with globally salient content. The idea is to incorporate neural topic modeling with a Transformer-based sequence-to-sequence (seq2seq) model in a joint learning framework. This design can learn and preserve the global semantics of the document, which can provide additional contextual guidance for capturing important ideas of the document, thereby enhancing summary generation. We conduct extensive experiments on two datasets and the results show that our proposed model outperforms many extractive and abstractive systems in terms of both ROUGE measurements and human evaluation. Our code is available at: https://github.com/chz816/tas.
http://arxiv.org/abs/2010.10323v2
"2020-10-20T14:45:25Z"
cs.CL
2,020
Auto-Encoding Variational Bayes for Inferring Topics and Visualization
Dang Pham, Tuan M. V. Le
Visualization and topic modeling are widely used approaches for text analysis. Traditional visualization methods find low-dimensional representations of documents in the visualization space (typically 2D or 3D) that can be displayed using a scatterplot. In contrast, topic modeling aims to discover topics from text, but for visualization, one needs to perform a post-hoc embedding using dimensionality reduction methods. Recent approaches propose using a generative model to jointly find topics and visualization, allowing the semantics to be infused in the visualization space for a meaningful interpretation. A major challenge that prevents these methods from being used practically is the scalability of their inference algorithms. We present, to the best of our knowledge, the first fast Auto-Encoding Variational Bayes-based inference method for jointly inferring topics and visualization. Since our method is black-box, it can handle model changes efficiently with little mathematical rederivation effort. We demonstrate the efficiency and effectiveness of our method on real-world large datasets and compare it with existing baselines.
http://arxiv.org/abs/2010.09233v2
"2020-10-19T05:57:11Z"
cs.CL
2,020
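A compact auto-encoding variational Bayes topic model in the spirit described above (a ProdLDA-style sketch; the paper additionally infers visualization coordinates jointly, which this toy version omits):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopicVAE(nn.Module):
    """Minimal AEVB topic model: a bag-of-words is encoded to a Gaussian
    latent, reparametrized, softmaxed into topic proportions, and decoded
    back over the vocabulary."""
    def __init__(self, vocab, k=10, hidden=100):
        super().__init__()
        self.enc = nn.Linear(vocab, hidden)
        self.mu = nn.Linear(hidden, k)
        self.logvar = nn.Linear(hidden, k)
        self.dec = nn.Linear(k, vocab)   # rows act like topic-word weights

    def forward(self, x):
        h = F.softplus(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparametrization
        theta = z.softmax(dim=-1)                             # doc-topic proportions
        log_px = F.log_softmax(self.dec(theta), dim=-1)
        recon = -(x * log_px).sum(-1).mean()                  # reconstruction term
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(-1).mean()
        return recon + kl

loss = TopicVAE(vocab=2000)(torch.rand(8, 2000))  # stand-in bag-of-words batch
loss.backward()
```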
From Talk to Action with Accountability: Monitoring the Public Discussion of Policy Makers with Deep Neural Networks and Topic Modelling
Vili Hätönen, Fiona Melzer
Decades of research on climate have provided a consensus that human activity has changed the climate and that we are currently heading into a climate crisis. While public discussion and research efforts on climate change mitigation have increased, potential solutions need not only to be discussed but also effectively deployed. For preventing mismanagement and holding policy makers accountable, transparency and the degree of information about government processes have been shown to be crucial. However, the current quantity of information about climate change discussions and the range of sources make it increasingly difficult for the public and civil society to maintain the overview needed to hold politicians accountable. In response, we propose a multi-source topic aggregation system (MuSTAS) which processes policy makers' speech and rhetoric from several publicly available sources into an easily digestible topic summary. MuSTAS uses a novel multi-source hybrid latent Dirichlet allocation to model topics from a variety of documents. This topic digest will serve the general public and civil society in assessing where, how, and when politicians talk about climate and climate policies, enabling them to hold politicians accountable for their actions, or lack thereof, to mitigate climate change.
http://arxiv.org/abs/2010.08346v3
"2020-10-16T12:21:01Z"
cs.CL, cs.LG, I.2.7; K.4.1
2,020
Semi-supervised NMF Models for Topic Modeling in Learning Tasks
Jamie Haddock, Lara Kassab, Sixian Li, Alona Kryshchenko, Rachel Grotheer, Elena Sizikova, Chuntian Wang, Thomas Merkh, R. W. M. A. Madushani, Miju Ahn, Deanna Needell, Kathryn Leonard
We propose several new models for semi-supervised nonnegative matrix factorization (SSNMF) and provide motivation for SSNMF models as maximum likelihood estimators given specific distributions of uncertainty. We present multiplicative-update training methods for each new model, and demonstrate the application of these models to classification, although they are flexible enough to apply to other supervised learning tasks. We illustrate the promise of these models and training methods on both synthetic and real data, and achieve high classification accuracy on the 20 Newsgroups dataset.
http://arxiv.org/abs/2010.07956v1
"2020-10-15T18:03:46Z"
cs.LG, math.OC
2,020
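One simple Frobenius-norm SSNMF variant of the kind motivated above can be trained with multiplicative updates in a few lines. This is a sketch assuming fully observed labels; the paper's models also handle uncertainty in the data and labels:

```python
import numpy as np

def ssnmf(X, Y, k, lam=1.0, iters=200, eps=1e-9, seed=0):
    """Semi-supervised NMF: jointly factor data X ~ A @ S and labels Y ~ B @ S
    via multiplicative updates for ||X - A S||_F^2 + lam * ||Y - B S||_F^2."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    c = Y.shape[0]
    A, B, S = rng.random((m, k)), rng.random((c, k)), rng.random((k, n))
    for _ in range(iters):
        A *= (X @ S.T) / (A @ S @ S.T + eps)
        B *= (Y @ S.T) / (B @ S @ S.T + eps)
        S *= (A.T @ X + lam * B.T @ Y) / (A.T @ A @ S + lam * B.T @ B @ S + eps)
    return A, B, S

rng = np.random.default_rng(1)
X = rng.random((30, 100))                 # 30 features x 100 documents (toy data)
Y = np.eye(3)[rng.integers(0, 3, 100)].T  # 3 one-hot classes x 100 documents
A, B, S = ssnmf(X, Y, k=5)
pred = (B @ S).argmax(axis=0)             # classify via the label factor
```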
Understanding the Hoarding Behaviors during the COVID-19 Pandemic using Large Scale Social Media Data
Xupin Zhang, Hanjia Lyu, Jiebo Luo
The COVID-19 pandemic has affected people's lives around the world on an unprecedented scale. We investigate hoarding behaviors in response to the pandemic using large-scale social media data. First, we collect hoarding-related tweets shortly after the outbreak of the coronavirus. Next, we analyze the hoarding and anti-hoarding patterns of over 42,000 unique Twitter users in the United States from March 1 to April 30, 2020, and dissect the hoarding-related tweets by age, gender, and geographic location. We find the percentage of females in both hoarding and anti-hoarding groups is higher than that of the general Twitter user population. Furthermore, using topic modeling, we investigate the opinions expressed towards the hoarding behavior by categorizing these topics according to demographic and geographic groups. We also calculate the anxiety scores for the hoarding and anti-hoarding related tweets using a lexical approach. By comparing their anxiety scores with the baseline Twitter anxiety score, we reveal further insights. The LIWC anxiety mean for the hoarding-related tweets is significantly higher than the baseline Twitter anxiety mean. Interestingly, beer has the highest calculated anxiety score compared to other hoarded items mentioned in the tweets.
http://arxiv.org/abs/2010.07845v2
"2020-10-15T16:02:25Z"
cs.SI, cs.CY
2,020
On Cross-Dataset Generalization in Automatic Detection of Online Abuse
Isar Nejadgholi, Svetlana Kiritchenko
NLP research has attained high performance in abusive language detection as a supervised classification task. While in research settings training and test datasets are usually obtained from similar data samples, in practice systems are often applied to data that differ from the training set in topic and class distributions. Moreover, the ambiguity in class definitions inherent in this task aggravates the discrepancies between source and target datasets. We explore the topic bias and the task formulation bias in cross-dataset generalization. We show that the benign examples in the Wikipedia Detox dataset are biased towards platform-specific topics. We identify these examples using unsupervised topic modeling and manual inspection of topics' keywords. Removing these topics increases cross-dataset generalization without reducing in-domain classification performance. For robust dataset design, we suggest applying inexpensive unsupervised methods to inspect the collected data and downsize the non-generalizable content before manually annotating for class labels.
http://arxiv.org/abs/2010.07414v3
"2020-10-14T21:47:03Z"
cs.CL, cs.AI
2,020
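The inspect-then-downsize workflow above is cheap to reproduce. A toy sketch follows; the corpus, topic count, and the flagged-topic choice are all hypothetical, with the flagging itself done by manual inspection of the printed keywords:

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

benign_docs = ["please cite the article source", "edit war on the template page",
               "thanks for the helpful reply", "good point, I agree with you"]

vec = CountVectorizer()
X = vec.fit_transform(benign_docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

vocab = vec.get_feature_names_out()
for t, comp in enumerate(lda.components_):
    print(t, [vocab[i] for i in comp.argsort()[-5:]])  # keywords for manual inspection

flagged = {1}  # topics judged platform-specific after inspection (hypothetical)
doc_topics = lda.transform(X).argmax(axis=1)
kept = [d for d, t in zip(benign_docs, doc_topics) if t not in flagged]
```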
Choosing News Topics to Explain Stock Market Returns
Paul Glasserman, Kriste Krstovski, Paul Laliberte, Harry Mamaysky
We analyze methods for selecting topics in news articles to explain stock returns. We find, through empirical and theoretical results, that supervised Latent Dirichlet Allocation (sLDA) implemented through Gibbs sampling in a stochastic EM algorithm will often overfit returns to the detriment of the topic model. We obtain better out-of-sample performance through a random search of plain LDA models. A branching procedure that reinforces effective topic assignments often performs best. We test methods on an archive of over 90,000 news articles about S&P 500 firms.
http://arxiv.org/abs/2010.07289v1
"2020-10-14T01:38:35Z"
q-fin.ST, cs.LG, stat.ML
2,020
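A hedged sketch of the random-search-over-plain-LDA baseline the paper favors, with synthetic count data standing in for the news archive and held-out perplexity standing in for the paper's out-of-sample return criterion:

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

rng = np.random.default_rng(0)
X_train = rng.poisson(0.1, size=(200, 500))  # stand-in doc-term count matrices
X_val = rng.poisson(0.1, size=(50, 500))

best = (np.inf, None)
for _ in range(10):  # random search over plain-LDA hyperparameters
    params = dict(n_components=int(rng.integers(5, 50)),
                  doc_topic_prior=float(rng.uniform(0.01, 1.0)),
                  topic_word_prior=float(rng.uniform(0.01, 1.0)))
    lda = LatentDirichletAllocation(random_state=0, **params).fit(X_train)
    score = lda.perplexity(X_val)  # held-out score as the selection criterion
    if score < best[0]:
        best = (score, params)
print(best)
```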
Weakly-Supervised Aspect-Based Sentiment Analysis via Joint Aspect-Sentiment Topic Embedding
Jiaxin Huang, Yu Meng, Fang Guo, Heng Ji, Jiawei Han
Aspect-based sentiment analysis of review texts is of great value for understanding user feedback in a fine-grained manner. It generally involves two sub-tasks: (i) extracting aspects from each review, and (ii) classifying aspect-based reviews by sentiment polarity. In this paper, we propose a weakly-supervised approach for aspect-based sentiment analysis, which uses only a few keywords describing each aspect/sentiment without any labeled examples. Existing methods are either designed only for one of the sub-tasks, neglecting the benefit of coupling both, or are based on topic models that may contain overlapping concepts. We propose to first learn <sentiment, aspect> joint topic embeddings in the word embedding space by imposing regularizations to encourage topic distinctiveness, and then use neural models to generalize the word-level discriminative information by pre-training the classifiers with embedding-based predictions and self-training them on unlabeled data. Our comprehensive performance analysis shows that our method generates quality joint topics and outperforms the baselines significantly (7.4% and 5.1% F1-score gain on average for aspect and sentiment classification respectively) on benchmark datasets. Our code and data are available at https://github.com/teapot123/JASen.
http://arxiv.org/abs/2010.06705v1
"2020-10-13T21:33:24Z"
cs.CL, cs.IR, cs.LG
2,020
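The seed-word core of the approach can be illustrated with a nearest-embedding classifier. In the sketch below, random vectors stand in for pretrained embeddings and the seed words are invented; the paper's full method additionally regularizes the joint topic embeddings and self-trains neural classifiers:

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = ["pizza", "tasty", "waiter", "rude", "price", "cheap"]
emb = {w: rng.standard_normal(50) for w in vocab}  # stand-in for pretrained embeddings

seeds = {"food": ["pizza", "tasty"], "service": ["waiter", "rude"]}  # aspect seed words
aspect_vec = {a: np.mean([emb[w] for w in ws], axis=0) for a, ws in seeds.items()}

def cos(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def predict_aspect(review_words):
    """Assign the aspect whose averaged seed embedding is closest."""
    v = np.mean([emb[w] for w in review_words if w in emb], axis=0)
    return max(aspect_vec, key=lambda a: cos(v, aspect_vec[a]))

print(predict_aspect(["pizza", "cheap"]))
```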
Enhancing Extractive Text Summarization with Topic-Aware Graph Neural Networks
Peng Cui, Le Hu, Yuanchao Liu
Text summarization aims to compress a textual document into a short summary while keeping salient information. Extractive approaches are widely used in text summarization because of their fluency and efficiency. However, most existing extractive models hardly capture inter-sentence relationships, particularly in long documents. They also often ignore the effect of topical information on capturing important content. To address these issues, this paper proposes a graph neural network (GNN)-based extractive summarization model, enabling it to capture inter-sentence relationships efficiently via a graph-structured document representation. Moreover, our model integrates a joint neural topic model (NTM) to discover latent topics, which can provide document-level features for sentence selection. The experimental results demonstrate that our model not only achieves state-of-the-art results on the CNN/DM and NYT datasets but also considerably outperforms existing approaches on scientific paper datasets consisting of much longer documents, indicating better robustness across document genres and lengths. Further discussion shows that topical information can help the model preselect salient content from an entire document, which explains its effectiveness in long document summarization.
http://arxiv.org/abs/2010.06253v1
"2020-10-13T09:30:04Z"
cs.CL, cs.AI
2,020
Paying down metadata debt: learning the representation of concepts using topic models
Jiahao Chen, Manuela Veloso
We introduce a data management problem called metadata debt, to identify the mapping between data concepts and their logical representations. We describe how this mapping can be learned using semisupervised topic models based on low-rank matrix factorizations that account for missing and noisy labels, coupled with sparsity penalties to improve localization and interpretability. We introduce a gauge transformation approach that allows us to construct explicit associations between topics and concept labels, and thus assign meaning to topics. We also show how to use this topic model for semisupervised learning tasks like extrapolating from known labels, evaluating possible errors in existing labels, and predicting missing features. We show results from this topic model in predicting subject tags on over 25,000 datasets from Kaggle.com, demonstrating the ability to learn semantically meaningful features.
http://arxiv.org/abs/2010.04836v1
"2020-10-09T22:42:38Z"
cs.LG, cs.AI, cs.DL, cs.NA, math.NA, 65F55, 68T50, 68T01, 15A83, I.2.6; J.1
2,020
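A minimal masked low-rank factorization in the spirit of the semisupervised model above, assuming a binary dataset-by-tag matrix with mostly missing labels; the paper's model also includes sparsity penalties and a gauge transformation, omitted here:

```python
import numpy as np

rng = np.random.default_rng(0)
Y = rng.integers(0, 2, size=(100, 20)).astype(float)  # dataset-by-tag labels (toy)
M = rng.random(Y.shape) < 0.3                         # observation mask: 30% labeled

k, lam, lr = 5, 0.1, 0.01
U = rng.random((100, k))
V = rng.random((20, k))
for _ in range(500):            # gradient steps on observed entries only
    R = M * (U @ V.T - Y)       # residual, zeroed where labels are missing
    U -= lr * (R @ V + lam * U)
    V -= lr * (R.T @ U + lam * V)

Y_hat = U @ V.T                 # predicts missing tags; large observed residuals
                                # flag possibly erroneous existing labels
```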
Artificial Intelligence (AI) in Action: Addressing the COVID-19 Pandemic with Natural Language Processing (NLP)
Qingyu Chen, Robert Leaman, Alexis Allot, Ling Luo, Chih-Hsuan Wei, Shankai Yan, Zhiyong Lu
The COVID-19 pandemic has had a significant impact on society, both because of the serious health effects of COVID-19 and because of public health measures implemented to slow its spread. Many of these difficulties are fundamentally information needs; attempts to address these needs have caused an information overload for both researchers and the public. Natural language processing (NLP), the branch of artificial intelligence that interprets human language, can be applied to address many of the information needs made urgent by the COVID-19 pandemic. This review surveys approximately 150 NLP studies and more than 50 systems and datasets addressing the COVID-19 pandemic. We detail work on four core NLP tasks: information retrieval, named entity recognition, literature-based discovery, and question answering. We also describe work that directly addresses aspects of the pandemic through four additional tasks: topic modeling, sentiment and emotion analysis, caseload forecasting, and misinformation detection. We conclude by discussing observable trends and remaining challenges.
http://arxiv.org/abs/2010.16413v3
"2020-10-09T22:10:43Z"
cs.CL, cs.IR, cs.LG
2,020
Young Adult Unemployment Through the Lens of Social Media: Italy as a case study
Alessandra Urbinati, Kyriaki Kalimeri, Andrea Bonanomi, Alessandro Rosina, Ciro Cattuto, Daniela Paolotti
Youth unemployment rates remain at alarming levels in many countries, including Italy. Direct consequences include poverty, social exclusion, and criminal behaviour, as well as a lasting negative impact on future employability and wages. In this study, we employ survey data together with social media data, in particular likes on Facebook Pages, to analyse the personality, moral values, and cultural elements of the young unemployed population in Italy. Our findings show small but significant differences in personality and moral values, with unemployed males being less agreeable and unemployed females more open to new experiences. At the same time, the unemployed hold a more collectivist point of view, placing greater value on the in-group loyalty, authority, and purity foundations. Interestingly, topic modelling analysis did not reveal major differences in the interests and cultural elements of the unemployed. Usage patterns did emerge, though: the employed seem to use Facebook to connect with local activities, while the unemployed use it mostly for entertainment and as a source of news, making them susceptible to mis/disinformation. We believe these findings can help policymakers gain a deeper understanding of this population and design initiatives that improve both the hard and soft skills of this fragile group.
http://arxiv.org/abs/2010.04496v2
"2020-10-09T10:56:04Z"
cs.CY
2,020
Latent Dirichlet Allocation Model Training with Differential Privacy
Fangyuan Zhao, Xuebin Ren, Shusen Yang, Qing Han, Peng Zhao, Xinyu Yang
Latent Dirichlet Allocation (LDA) is a popular topic modeling technique for hidden semantic discovery of text data and serves as a fundamental tool for text analysis in various applications. However, the LDA model as well as the training process of LDA may expose the text information in the training data, thus bringing significant privacy concerns. To address the privacy issue in LDA, we systematically investigate the privacy protection of the mainstream LDA training algorithm based on Collapsed Gibbs Sampling (CGS) and propose several differentially private LDA algorithms for typical training scenarios. In particular, we present the first theoretical analysis on the inherent differential privacy guarantee of CGS-based LDA training and further propose a centralized privacy-preserving algorithm (HDP-LDA) that can prevent data inference from the intermediate statistics in the CGS training. Also, we propose a locally private LDA training algorithm (LP-LDA) on crowdsourced data to provide local differential privacy for individual data contributors. Furthermore, we extend LP-LDA to an online version as OLP-LDA to achieve LDA training on locally private mini-batches in a streaming setting. Extensive analysis and experiment results validate both the effectiveness and efficiency of our proposed privacy-preserving LDA training algorithms.
http://arxiv.org/abs/2010.04391v1
"2020-10-09T06:58:40Z"
cs.LG, cs.CR
2,020
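As a point of contrast with the paper's algorithms, the most generic way to privatize a trained LDA model is output perturbation of the topic-word counts with the Laplace mechanism. The sketch below assumes unit sensitivity (changing one token changes one count by one), which is a simplification and not the paper's HDP-LDA analysis:

```python
import numpy as np

def dp_release_topic_word(counts, epsilon, seed=0):
    """Laplace-mechanism release of CGS topic-word counts (a generic
    output-perturbation baseline; sensitivity=1 is an assumed bound)."""
    rng = np.random.default_rng(seed)
    noisy = counts + rng.laplace(scale=1.0 / epsilon, size=counts.shape)
    noisy = np.clip(noisy, 1e-12, None)              # keep counts nonnegative
    return noisy / noisy.sum(axis=1, keepdims=True)  # renormalize to distributions

toy_counts = np.random.default_rng(1).integers(0, 50, (20, 500))  # K=20, V=500
phi = dp_release_topic_word(toy_counts, epsilon=1.0)
```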
Studying the UK Job Market During the COVID-19 Crisis with Online Job Ads
Rudy Arthur
The COVID-19 global pandemic and the lockdown policies enacted to mitigate it have had profound effects on the labour market. Understanding these effects requires us to obtain and analyse data in as close to real time as possible, especially as rules change rapidly and local lockdowns are enacted. In this work we study the UK labour market by analysing data from the online job board Reed.co.uk. Using topic modelling and geo-inference methods we are able to break down the data by sector and geography. We also study how the salary, contract type and mode of work have changed since the COVID-19 crisis hit the UK in March. Overall, vacancies were down by 60 to 70% in the first weeks of lockdown. By the end of the year numbers had recovered somewhat, but the total job ad deficit is measured to be over 40%. Broken down by sector, vacancies for hospitality and graduate jobs are greatly reduced, while there were more care work and nursing vacancies during lockdown. Differences by geography are less significant than between sectors, though there is some indication that local lockdowns stall recovery and less badly hit areas may have experienced a smaller reduction in vacancies. There are also small but significant changes in the salary distribution and number of full time and permanent jobs. In addition to these results, this work presents an open methodology that enables a rapid and detailed survey of the job market in these unsettled conditions and we describe a web application, jobtrender.com, that allows others to query this data set.
http://arxiv.org/abs/2010.03629v2
"2020-10-07T19:56:16Z"
cs.CY, cs.SI
2,020