Dataset Viewer

id (string, lengths 10–10) | title (string, lengths 14–156) | abstract (string, lengths 279–1.83k) | full_text (sequence) | qas (sequence) | figures_and_tables (sequence)
---|---|---|---|---|---|
1811.00942
|
Progress and Tradeoffs in Neural Language Models
|
In recent years, we have witnessed a dramatic shift towards techniques driven by neural networks for a variety of NLP tasks. Undoubtedly, neural language models (NLMs) have reduced perplexity by impressive amounts. This progress, however, comes at a substantial cost in performance, in terms of inference latency and energy consumption, which is particularly of concern in deployments on mobile devices. This paper, which examines the quality-performance tradeoff of various language modeling techniques, represents to our knowledge the first to make this observation. We compare state-of-the-art NLMs with "classic" Kneser-Ney (KN) LMs in terms of energy usage, latency, perplexity, and prediction accuracy using two standard benchmarks. On a Raspberry Pi, we find that orders of increase in latency and energy usage correspond to less change in perplexity, while the difference is much less pronounced on a desktop.
|
{
"section_name": [
"Introduction",
"Background and Related Work",
"Experimental Setup",
"Hyperparameters and Training",
"Infrastructure",
"Results and Discussion",
"Conclusion"
],
"paragraphs": [
[
"Deep learning has unquestionably advanced the state of the art in many natural language processing tasks, from syntactic dependency parsing BIBREF0 to named-entity recognition BIBREF1 to machine translation BIBREF2 . The same certainly applies to language modeling, where recent advances in neural language models (NLMs) have led to dramatically better approaches as measured using standard metrics such as perplexity BIBREF3 , BIBREF4 .",
"Specifically focused on language modeling, this paper examines an issue that to our knowledge has not been explored: advances in neural language models have come at a significant cost in terms of increased computational complexity. Computing the probability of a token sequence using non-neural techniques requires a number of phrase lookups and perhaps a few arithmetic operations, whereas model inference with NLMs require large matrix multiplications consuming perhaps millions of floating point operations (FLOPs). These performance tradeoffs are worth discussing.",
"In truth, language models exist in a quality–performance tradeoff space. As model quality increases (e.g., lower perplexity), performance as measured in terms of energy consumption, query latency, etc. tends to decrease. For applications primarily running in the cloud—say, machine translation—practitioners often solely optimize for the lowest perplexity. This is because such applications are embarrassingly parallel and hence trivial to scale in a data center environment.",
"There are, however, applications of NLMs that require less one-sided optimizations. On mobile devices such as smartphones and tablets, for example, NLMs may be integrated into software keyboards for next-word prediction, allowing much faster text entry. Popular Android apps that enthusiastically tout this technology include SwiftKey and Swype. The greater computational costs of NLMs lead to higher energy usage in model inference, translating into shorter battery life.",
"In this paper, we examine the quality–performance tradeoff in the shift from non-neural to neural language models. In particular, we compare Kneser–Ney smoothing, widely accepted as the state of the art prior to NLMs, to the best NLMs today. The decrease in perplexity on standard datasets has been well documented BIBREF3 , but to our knowledge no one has examined the performances tradeoffs. With deployment on a mobile device in mind, we evaluate energy usage and inference latency on a Raspberry Pi (which shares the same ARM architecture as nearly all smartphones today). We find that a 2.5 $\\times $ reduction in perplexity on PTB comes at a staggering cost in terms of performance: inference with NLMs takes 49 $\\times $ longer and requires 32 $\\times $ more energy. Furthermore, we find that impressive reductions in perplexity translate into at best modest improvements in next-word prediction, which is arguable a better metric for evaluating software keyboards on a smartphone. The contribution of this paper is the first known elucidation of this quality–performance tradeoff. Note that we refrain from prescriptive recommendations: whether or not a tradeoff is worthwhile depends on the application. Nevertheless, NLP engineers should arguably keep these tradeoffs in mind when selecting a particular operating point."
],
[
" BIBREF3 evaluate recent neural language models; however, their focus is not on the computational footprint of each model, but rather the perplexity. To further reduce perplexity, many neural language model extensions exist, such as continuous cache pointer BIBREF5 and mixture of softmaxes BIBREF6 . Since our focus is on comparing “core” neural and non-neural approaches, we disregard these extra optimizations techniques in all of our models.",
"Other work focus on designing lightweight models for resource-efficient inference on mobile devices. BIBREF7 explore LSTMs BIBREF8 with binary weights for language modeling; BIBREF9 examine shallow feedforward neural networks for natural language processing.",
"AWD-LSTM. BIBREF4 show that a simple three-layer LSTM, with proper regularization and optimization techniques, can achieve state of the art on various language modeling datasets, surpassing more complex models. Specifically, BIBREF4 apply randomized backpropagation through time, variational dropout, activation regularization, embedding dropout, and temporal activation regularization. A novel scheduler for optimization, non-monotonically triggered ASGD (NT-ASGD) is also introduced. BIBREF4 name their three-layer LSTM model trained with such tricks, “AWD-LSTM.”",
"Quasi-Recurrent Neural Networks. Quasi-recurrent neural networks (QRNNs; BIBREF10 ) achieve current state of the art in word-level language modeling BIBREF11 . A quasi-recurrent layer comprises two separate parts: a convolution layer with three weights, and a recurrent pooling layer. Given an input $\\mathbf {X} \\in \\mathbb {R}^{k \\times n}$ , the convolution layer is $\n\\mathbf {Z} = \\tanh (\\mathbf {W}_z \\cdot \\mathbf {X})\\\\\n\\mathbf {F} = \\sigma (\\mathbf {W}_f \\cdot \\mathbf {X})\\\\\n\\mathbf {O} = \\sigma (\\mathbf {W}_o \\cdot \\mathbf {X})\n$ ",
"where $\\sigma $ denotes the sigmoid function, $\\cdot $ represents masked convolution across time, and $\\mathbf {W}_{\\lbrace z, f, o\\rbrace } \\in \\mathbb {R}^{m \\times k \\times r}$ are convolution weights with $k$ input channels, $m$ output channels, and a window size of $r$ . In the recurrent pooling layer, the convolution outputs are combined sequentially: $\n\\mathbf {c}_t &= \\mathbf {f}_t \\odot \\mathbf {c}_{t-1} + (1 -\n\\mathbf {f}_t) \\odot \\mathbf {z}_t\\\\\n\\mathbf {h}_t &= \\mathbf {o}_t \\odot \\mathbf {c}_t\n$ ",
"Multiple QRNN layers can be stacked for deeper hierarchical representation, with the output $\\mathbf {h}_{1:t}$ being fed as the input into the subsequent layer: In language modeling, a four-layer QRNN is a standard architecture BIBREF11 .",
"Perplexity–Recall Scale. Word-level perplexity does not have a strictly monotonic relationship with recall-at- $k$ , the fraction of top $k$ predictions that contain the correct word. A given R@ $k$ imposes a weak minimum perplexity constraint—there are many free parameters that allow for large variability in the perplexity given a certain R@ $k$ . Consider the corpus, “choo choo train,” with an associated unigram model $P(\\text{``choo''}) = 0.1$ , $P(\\text{``train''}) = 0.9$ , resulting in an R@1 of $1/3$ and perplexity of $4.8$ . Clearly, R@1 $ =1/3$ for all $P(\\text{``choo''}) \\le 0.5$ ; thus, perplexity can drop as low as 2 without affecting recall."
],
[
"We conducted our experiments on Penn Treebank (PTB; BIBREF12 ) and WikiText-103 (WT103; BIBREF13 ). Preprocessed by BIBREF14 , PTB contains 887K tokens for training, 70K for validation, and 78K for test, with a vocabulary size of 10,000. On the other hand, WT103 comprises 103 million tokens for training, 217K for validation, and 245K for test, spanning a vocabulary of 267K unique tokens.",
"For the neural language model, we used a four-layer QRNN BIBREF10 , which achieves state-of-the-art results on a variety of datasets, such as WT103 BIBREF11 and PTB. To compare against more common LSTM architectures, we also evaluated AWD-LSTM BIBREF4 on PTB. For the non-neural approach, we used a standard five-gram model with modified Kneser-Ney smoothing BIBREF15 , as explored in BIBREF16 on PTB. We denote the QRNN models for PTB and WT103 as ptb-qrnn and wt103-qrnn, respectively.",
"For each model, we examined word-level perplexity, R@3 in next-word prediction, latency (ms/q), and energy usage (mJ/q). To explore the perplexity–recall relationship, we collected individual perplexity and recall statistics for each sentence in the test set."
],
[
"The QRNN models followed the exact training procedure and architecture delineated in the official codebase from BIBREF11 . For ptb-qrnn, we trained the model for 550 epochs using NT-ASGD BIBREF4 , then finetuned for 300 epochs using ASGD BIBREF17 , all with a learning rate of 30 throughout. For wt103-qrnn, we followed BIBREF11 and trained the QRNN for 14 epochs, using the Adam optimizer with a learning rate of $10^{-3}$ . We also applied regularization techniques from BIBREF4 ; all the specific hyperparameters are the same as those in the repository. Our model architecture consists of 400-dimensional tied embedding weights BIBREF18 and four QRNN layers, with 1550 hidden units per layer on PTB and 2500 per layer on WT103. Both QRNN models have window sizes of $r=2$ for the first layer and $r=1$ for the rest.",
"For the KN-5 model, we trained an off-the-shelf five-gram model using the popular SRILM toolkit BIBREF19 . We did not specify any special hyperparameters."
],
[
"We trained the QRNNs with PyTorch (0.4.0; commit 1807bac) on a Titan V GPU. To evaluate the models under a resource-constrained environment, we deployed them on a Raspberry Pi 3 (Model B) running Raspbian Stretch (4.9.41-v7+). The Raspberry Pi (RPi) is not only a standard platform, but also a close surrogate to mobile phones, using the same Cortex-A7 in many phones. We then transferred the trained models to the RPi, using the same frameworks for evaluation. We plugged the RPi into a Watts Up Pro meter, a power meter that can be read programatically over USB at a frequency of 1 Hz. For the QRNNs, we used the first 350 words of the test set, and averaged the ms/query and mJ/query. For KN-5, we used the entire test set for evaluation, since the latency was much lower. To adjust for the base power load, we subtracted idle power draw from energy usage.",
"For a different perspective, we further evaluated all the models under a desktop environment, using an i7-4790k CPU and Titan V GPU. Because the base power load for powering a desktop is much higher than running neural language models, we collected only latency statistics. We used the entire test set, since the QRNN runs quickly.",
"In addition to energy and latency, another consideration for the NLP developer selecting an operating point is the cost of underlying hardware. For our setup, the RPi costs $35 USD, the CPU costs $350 USD, and the GPU costs $3000 USD."
],
[
"To demonstrate the effectiveness of the QRNN models, we present the results of past and current state-of-the-art neural language models in Table 1 ; we report the Skip- and AWD-LSTM results as seen in the original papers, while we report our QRNN results. Skip LSTM denotes the four-layer Skip LSTM in BIBREF3 . BIBREF20 focus on Hebbian softmax, a model extension technique—Rae-LSTM refers to their base LSTM model without any extensions. In our results, KN-5 refers to the traditional five-gram model with modified Kneser-Ney smoothing, and AWD is shorthand for AWD-LSTM.",
"Perplexity–recall scale. In Figure 1 , using KN-5 as the model, we plot the log perplexity (cross entropy) and R@3 error ( $1 - \\text{R@3}$ ) for every sentence in PTB and WT103. The horizontal clusters arise from multiple perplexity points representing the same R@3 value, as explained in Section \"Infrastructure\" . We also observe that the perplexity–recall scale is non-linear—instead, log perplexity appears to have a moderate linear relationship with R@3 error on PTB ( $r=0.85$ ), and an even stronger relationship on WT103 ( $r=0.94$ ). This is partially explained by WT103 having much longer sentences, and thus less noisy statistics.",
"From Figure 1 , we find that QRNN models yield strongly linear log perplexity–recall plots as well, where $r=0.88$ and $r=0.93$ for PTB and WT103, respectively. Note that, due to the improved model quality over KN-5, the point clouds are shifted downward compared to Figure 1 . We conclude that log perplexity, or cross entropy, provides a more human-understandable indicator of R@3 than perplexity does. Overall, these findings agree with those from BIBREF21 , which explores the log perplexity–word error rate scale in language modeling for speech recognition.",
"Quality–performance tradeoff. In Table 2 , from left to right, we report perplexity results on the validation and test sets, R@3 on test, and finally per-query latency and energy usage. On the RPi, KN-5 is both fast and power-efficient to run, using only about 7 ms/query and 6 mJ/query for PTB (Table 2 , row 1), and 264 ms/q and 229 mJ/q on WT103 (row 5). Taking 220 ms/query and consuming 300 mJ/query, AWD-LSTM and ptb-qrnn are still viable for mobile phones: The modern smartphone holds upwards of 10,000 joules BIBREF22 , and the latency is within usability standards BIBREF23 . Nevertheless, the models are still 49 $\\times $ slower and 32 $\\times $ more power-hungry than KN-5. The wt103-qrnn model is completely unusable on phones, taking over 1.2 seconds per next-word prediction. Neural models achieve perplexity drops of 60–80% and R@3 increases of 22–34%, but these improvements come at a much higher cost in latency and energy usage.",
"In Table 2 (last two columns), the desktop yields very different results: the neural models on PTB (rows 2–3) are 9 $\\times $ slower than KN-5, but the absolute latency is only 8 ms/q, which is still much faster than what humans perceive as instantaneous BIBREF23 . If a high-end commodity GPU is available, then the models are only twice as slow as KN-5 is. From row 5, even better results are noted with wt103-qrnn: On the CPU, the QRNN is only 60% slower than KN-5 is, while the model is faster by 11 $\\times $ on a GPU. These results suggest that, if only latency is considered under a commodity desktop environment, the QRNN model is humanly indistinguishable from the KN-5 model, even without using GPU acceleration."
],
[
"In the present work, we describe and examine the tradeoff space between quality and performance for the task of language modeling. Specifically, we explore the quality–performance tradeoffs between KN-5, a non-neural approach, and AWD-LSTM and QRNN, two neural language models. We find that with decreased perplexity comes vastly increased computational requirements: In one of the NLMs, a perplexity reduction by 2.5 $\\times $ results in a 49 $\\times $ rise in latency and 32 $\\times $ increase in energy usage, when compared to KN-5."
]
]
} |
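The full_text above defines the quasi-recurrent layer by its two parts: a masked convolution producing Z, F, and O, followed by recurrent pooling. Below is a minimal NumPy sketch written directly from those equations; the function name, the left-zero-padding choice, and the toy shapes are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def qrnn_layer(X, Wz, Wf, Wo):
    """One quasi-recurrent layer: masked convolution over time, then recurrent pooling."""
    k, n = X.shape                      # k input channels, n time steps
    m, _, r = Wz.shape                  # m output channels, window size r
    # masked convolution: the output at time t only sees inputs t-r+1 .. t (left-padded with zeros)
    Xp = np.concatenate([np.zeros((k, r - 1)), X], axis=1)
    windows = np.stack([Xp[:, t:t + r] for t in range(n)], axis=0)   # (n, k, r)
    Z = np.tanh(np.einsum("mkr,nkr->mn", Wz, windows))
    F = sigmoid(np.einsum("mkr,nkr->mn", Wf, windows))
    O = sigmoid(np.einsum("mkr,nkr->mn", Wo, windows))
    # recurrent pooling: c_t = f_t * c_{t-1} + (1 - f_t) * z_t,  h_t = o_t * c_t
    c = np.zeros(m)
    H = np.zeros((m, n))
    for t in range(n):
        c = F[:, t] * c + (1.0 - F[:, t]) * Z[:, t]
        H[:, t] = O[:, t] * c
    return H                            # hidden states h_{1:n}, fed as input to the next layer

# toy shapes only: 4 input channels, 8 time steps, 5 output channels, window r=2
rng = np.random.default_rng(0)
H = qrnn_layer(rng.normal(size=(4, 8)),
               rng.normal(size=(5, 4, 2)),
               rng.normal(size=(5, 4, 2)),
               rng.normal(size=(5, 4, 2)))
print(H.shape)   # (5, 8)
```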
{
"question": [
"What aspects have been compared between various language models?",
"what classic language models are mentioned in the paper?",
"What is a commonly used evaluation metric for language models?"
],
"question_id": [
"dd155f01f6f4a14f9d25afc97504aefdc6d29c13",
"a9d530d68fb45b52d9bad9da2cd139db5a4b2f7c",
"e07df8f613dbd567a35318cd6f6f4cb959f5c82d"
],
"nlp_background": [
"two",
"two",
"two"
],
"topic_background": [
"familiar",
"familiar",
"familiar"
],
"paper_read": [
"no",
"no",
"no"
],
"search_query": [
"",
"",
""
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "Quality measures using perplexity and recall, and performance measured using latency and energy usage. ",
"evidence": [
"For each model, we examined word-level perplexity, R@3 in next-word prediction, latency (ms/q), and energy usage (mJ/q). To explore the perplexity–recall relationship, we collected individual perplexity and recall statistics for each sentence in the test set."
],
"highlighted_evidence": [
"For each model, we examined word-level perplexity, R@3 in next-word prediction, latency (ms/q), and energy usage (mJ/q). To explore the perplexity–recall relationship, we collected individual perplexity and recall statistics for each sentence in the test set."
]
}
],
"annotation_id": [
"c17796e0bd3bfcc64d5a8e844d23d8d39274af6b"
],
"worker_id": [
"1ba1b5b562aef9cd264cace5b7bdd46a7c065c0a"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"Kneser–Ney smoothing"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"In this paper, we examine the quality–performance tradeoff in the shift from non-neural to neural language models. In particular, we compare Kneser–Ney smoothing, widely accepted as the state of the art prior to NLMs, to the best NLMs today. The decrease in perplexity on standard datasets has been well documented BIBREF3 , but to our knowledge no one has examined the performances tradeoffs. With deployment on a mobile device in mind, we evaluate energy usage and inference latency on a Raspberry Pi (which shares the same ARM architecture as nearly all smartphones today). We find that a 2.5 $\\times $ reduction in perplexity on PTB comes at a staggering cost in terms of performance: inference with NLMs takes 49 $\\times $ longer and requires 32 $\\times $ more energy. Furthermore, we find that impressive reductions in perplexity translate into at best modest improvements in next-word prediction, which is arguable a better metric for evaluating software keyboards on a smartphone. The contribution of this paper is the first known elucidation of this quality–performance tradeoff. Note that we refrain from prescriptive recommendations: whether or not a tradeoff is worthwhile depends on the application. Nevertheless, NLP engineers should arguably keep these tradeoffs in mind when selecting a particular operating point."
],
"highlighted_evidence": [
"Kneser–Ney smoothing",
"In particular, we compare Kneser–Ney smoothing, widely accepted as the state of the art prior to NLMs, to the best NLMs today."
]
}
],
"annotation_id": [
"715840b32a89c33e0a1de1ab913664eb9694bd34"
],
"worker_id": [
"1ba1b5b562aef9cd264cace5b7bdd46a7c065c0a"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"perplexity"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"Deep learning has unquestionably advanced the state of the art in many natural language processing tasks, from syntactic dependency parsing BIBREF0 to named-entity recognition BIBREF1 to machine translation BIBREF2 . The same certainly applies to language modeling, where recent advances in neural language models (NLMs) have led to dramatically better approaches as measured using standard metrics such as perplexity BIBREF3 , BIBREF4 ."
],
"highlighted_evidence": [
"Deep learning has unquestionably advanced the state of the art in many natural language processing tasks, from syntactic dependency parsing BIBREF0 to named-entity recognition BIBREF1 to machine translation BIBREF2 . The same certainly applies to language modeling, where recent advances in neural language models (NLMs) have led to dramatically better approaches as measured using standard metrics such as perplexity BIBREF3 , BIBREF4 ."
]
},
{
"unanswerable": false,
"extractive_spans": [
"perplexity"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"Deep learning has unquestionably advanced the state of the art in many natural language processing tasks, from syntactic dependency parsing BIBREF0 to named-entity recognition BIBREF1 to machine translation BIBREF2 . The same certainly applies to language modeling, where recent advances in neural language models (NLMs) have led to dramatically better approaches as measured using standard metrics such as perplexity BIBREF3 , BIBREF4 ."
],
"highlighted_evidence": [
"recent advances in neural language models (NLMs) have led to dramatically better approaches as measured using standard metrics such as perplexity BIBREF3 , BIBREF4 ."
]
}
],
"annotation_id": [
"062dcccfdfb5af1c6ee886885703f9437d91a9dc",
"1cc952fc047d0bb1a961c3ce65bada2e983150d1"
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"1ba1b5b562aef9cd264cace5b7bdd46a7c065c0a"
]
}
]
} |
{
"caption": [
"Table 1: Comparison of neural language models on Penn Treebank and WikiText-103.",
"Figure 1: Log perplexity–recall error with KN-5.",
"Figure 2: Log perplexity–recall error with QRNN.",
"Table 2: Language modeling results on performance and model quality."
],
"file": [
"3-Table1-1.png",
"4-Figure1-1.png",
"4-Figure2-1.png",
"4-Table2-1.png"
]
} |
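The row above reports word-level perplexity and R@3 collected per sentence (see its Experimental Setup and Perplexity–Recall Scale paragraphs). Below is a minimal sketch of how such per-sentence metrics could be computed, assuming a hypothetical model interface that yields a probability distribution over candidate next words at each position; none of the names below come from the paper.

```python
import math

def sentence_metrics(probs, targets, k=3):
    """Per-sentence log perplexity (cross entropy) and recall@k.

    probs:   list of dicts mapping candidate next word -> probability, one per position
    targets: list of gold next words, same length as probs
    """
    assert len(probs) == len(targets)
    log_prob_sum, hits = 0.0, 0
    for dist, gold in zip(probs, targets):
        p = dist.get(gold, 1e-12)                         # floor to avoid log(0) on OOV words
        log_prob_sum += math.log(p)
        top_k = sorted(dist, key=dist.get, reverse=True)[:k]
        hits += gold in top_k
    n = len(targets)
    cross_entropy = -log_prob_sum / n                     # log perplexity
    return {
        "perplexity": math.exp(cross_entropy),
        "log_perplexity": cross_entropy,
        f"recall_at_{k}": hits / n,
    }

# toy check against the "choo choo train" unigram example in the Background section
dist = {"choo": 0.1, "train": 0.9}
print(sentence_metrics([dist, dist, dist], ["choo", "choo", "train"], k=1))
# -> perplexity ~= 4.8 and recall_at_1 = 1/3, matching the numbers quoted in the record
```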
1907.05664
|
Saliency Maps Generation for Automatic Text Summarization
|
Saliency map generation techniques are at the forefront of explainable AI literature for a broad range of machine learning applications. Our goal is to question the limits of these approaches on more complex tasks. In this paper we apply Layer-Wise Relevance Propagation (LRP) to a sequence-to-sequence attention model trained on a text summarization dataset. We obtain unexpected saliency maps and discuss the rightfulness of these "explanations". We argue that we need a quantitative way of testing the counterfactual case to judge the truthfulness of the saliency maps. We suggest a protocol to check the validity of the importance attributed to the input and show that the saliency maps obtained sometimes capture the real use of the input features by the network, and sometimes do not. We use this example to discuss how careful we need to be when accepting them as explanation.
|
{
"section_name": [
"Introduction",
"The Task and the Model",
"Dataset and Training Task",
"The Model",
"Obtained Summaries",
"Layer-Wise Relevance Propagation",
"Mathematical Description",
"Generation of the Saliency Maps",
"Experimental results",
"First Observations",
"Validating the Attributions",
"Conclusion"
],
"paragraphs": [
[
"Ever since the LIME algorithm BIBREF0 , \"explanation\" techniques focusing on finding the importance of input features in regard of a specific prediction have soared and we now have many ways of finding saliency maps (also called heat-maps because of the way we like to visualize them). We are interested in this paper by the use of such a technique in an extreme task that highlights questions about the validity and evaluation of the approach. We would like to first set the vocabulary we will use. We agree that saliency maps are not explanations in themselves and that they are more similar to attribution, which is only one part of the human explanation process BIBREF1 . We will prefer to call this importance mapping of the input an attribution rather than an explanation. We will talk about the importance of the input relevance score in regard to the model's computation and not make allusion to any human understanding of the model as a result.",
"There exist multiple ways to generate saliency maps over the input for non-linear classifiers BIBREF2 , BIBREF3 , BIBREF4 . We refer the reader to BIBREF5 for a survey of explainable AI in general. We use in this paper Layer-Wise Relevance Propagation (LRP) BIBREF2 which aims at redistributing the value of the classifying function on the input to obtain the importance attribution. It was first created to “explain\" the classification of neural networks on image recognition tasks. It was later successfully applied to text using convolutional neural networks (CNN) BIBREF6 and then Long-Short Term Memory (LSTM) networks for sentiment analysis BIBREF7 .",
"Our goal in this paper is to test the limits of the use of such a technique for more complex tasks, where the notion of input importance might not be as simple as in topic classification or sentiment analysis. We changed from a classification task to a generative task and chose a more complex one than text translation (in which we can easily find a word to word correspondence/importance between input and output). We chose text summarization. We consider abstractive and informative text summarization, meaning that we write a summary “in our own words\" and retain the important information of the original text. We refer the reader to BIBREF8 for more details on the task and the different variants that exist. Since the success of deep sequence-to-sequence models for text translation BIBREF9 , the same approaches have been applied to text summarization tasks BIBREF10 , BIBREF11 , BIBREF12 which use architectures on which we can apply LRP.",
"We obtain one saliency map for each word in the generated summaries, supposed to represent the use of the input features for each element of the output sequence. We observe that all the saliency maps for a text are nearly identical and decorrelated with the attention distribution. We propose a way to check their validity by creating what could be seen as a counterfactual experiment from a synthesis of the saliency maps, using the same technique as in Arras et al. Arras2017. We show that in some but not all cases they help identify the important input features and that we need to rigorously check importance attributions before trusting them, regardless of whether or not the mapping “makes sense\" to us. We finally argue that in the process of identifying the important input features, verifying the saliency maps is as important as the generation step, if not more."
],
[
"We present in this section the baseline model from See et al. See2017 trained on the CNN/Daily Mail dataset. We reproduce the results from See et al. See2017 to then apply LRP on it."
],
[
"The CNN/Daily mail dataset BIBREF12 is a text summarization dataset adapted from the Deepmind question-answering dataset BIBREF13 . It contains around three hundred thousand news articles coupled with summaries of about three sentences. These summaries are in fact “highlights\" of the articles provided by the media themselves. Articles have an average length of 780 words and the summaries of 50 words. We had 287 000 training pairs and 11 500 test pairs. Similarly to See et al. See2017, we limit during training and prediction the input text to 400 words and generate summaries of 200 words. We pad the shorter texts using an UNKNOWN token and truncate the longer texts. We embed the texts and summaries using a vocabulary of size 50 000, thus recreating the same parameters as See et al. See2017."
],
[
"The baseline model is a deep sequence-to-sequence encoder/decoder model with attention. The encoder is a bidirectional Long-Short Term Memory(LSTM) cell BIBREF14 and the decoder a single LSTM cell with attention mechanism. The attention mechanism is computed as in BIBREF9 and we use a greedy search for decoding. We train end-to-end including the words embeddings. The embedding size used is of 128 and the hidden state size of the LSTM cells is of 254."
],
[
"We train the 21 350 992 parameters of the network for about 60 epochs until we achieve results that are qualitatively equivalent to the results of See et al. See2017. We obtain summaries that are broadly relevant to the text but do not match the target summaries very well. We observe the same problems such as wrong reproduction of factual details, replacing rare words with more common alternatives or repeating non-sense after the third sentence. We can see in Figure 1 an example of summary obtained compared to the target one.",
"The “summaries\" we generate are far from being valid summaries of the information in the texts but are sufficient to look at the attribution that LRP will give us. They pick up the general subject of the original text."
],
[
"We present in this section the Layer-Wise Relevance Propagation (LRP) BIBREF2 technique that we used to attribute importance to the input features, together with how we adapted it to our model and how we generated the saliency maps. LRP redistributes the output of the model from the output layer to the input by transmitting information backwards through the layers. We call this propagated backwards importance the relevance. LRP has the particularity to attribute negative and positive relevance: a positive relevance is supposed to represent evidence that led to the classifier's result while negative relevance represents evidence that participated negatively in the prediction."
],
[
"We initialize the relevance of the output layer to the value of the predicted class before softmax and we then describe locally the propagation backwards of the relevance from layer to layer. For normal neural network layers we use the form of LRP with epsilon stabilizer BIBREF2 . We write down $R_{i\\leftarrow j}^{(l, l+1)}$ the relevance received by the neuron $i$ of layer $l$ from the neuron $j$ of layer $l+1$ : ",
"$$\\begin{split}\n\nR_{i\\leftarrow j}^{(l, l+1)} &= \\dfrac{w_{i\\rightarrow j}^{l,l+1}\\textbf {z}^l_i + \\dfrac{\\epsilon \\textrm { sign}(\\textbf {z}^{l+1}_j) + \\textbf {b}^{l+1}_j}{D_l}}{\\textbf {z}^{l+1}_j + \\epsilon * \\textrm { sign}(\\textbf {z}^{l+1}_j)} * R_j^{l+1} \\\\\n\\end{split}$$ (Eq. 7) ",
"where $w_{i\\rightarrow j}^{l,l+1}$ is the network's weight parameter set during training, $\\textbf {b}^{l+1}_j$ is the bias for neuron $j$ of layer $l+1$ , $\\textbf {z}^{l}_i$ is the activation of neuron $i$ on layer $l$ , $\\epsilon $ is the stabilizing term set to 0.00001 and $D_l$ is the dimension of the $l$ -th layer.",
"The relevance of a neuron is then computed as the sum of the relevance he received from the above layer(s).",
"For LSTM cells we use the method from Arras et al.Arras2017 to solve the problem posed by the element-wise multiplications of vectors. Arras et al. noted that when such computation happened inside an LSTM cell, it always involved a “gate\" vector and another vector containing information. The gate vector containing only value between 0 and 1 is essentially filtering the second vector to allow the passing of “relevant\" information. Considering this, when we propagate relevance through an element-wise multiplication operation, we give all the upper-layer's relevance to the “information\" vector and none to the “gate\" vector."
],
[
"We use the same method to transmit relevance through the attention mechanism back to the encoder because Bahdanau's attention BIBREF9 uses element-wise multiplications as well. We depict in Figure 2 the transmission end-to-end from the output layer to the input through the decoder, attention mechanism and then the bidirectional encoder. We then sum up the relevance on the word embedding to get the token's relevance as Arras et al. Arras2017.",
"The way we generate saliency maps differs a bit from the usual context in which LRP is used as we essentially don't have one classification, but 200 (one for each word in the summary). We generate a relevance attribution for the 50 first words of the generated summary as after this point they often repeat themselves.",
"This means that for each text we obtain 50 different saliency maps, each one supposed to represent the relevance of the input for a specific generated word in the summary."
],
[
"In this section, we present our results from extracting attributions from the sequence-to-sequence model trained for abstractive text summarization. We first have to discuss the difference between the 50 different saliency maps we obtain and then we propose a protocol to validate the mappings."
],
[
"The first observation that is made is that for one text, the 50 saliency maps are almost identical. Indeed each mapping highlights mainly the same input words with only slight variations of importance. We can see in Figure 3 an example of two nearly identical attributions for two distant and unrelated words of the summary. The saliency map generated using LRP is also uncorrelated with the attention distribution that participated in the generation of the output word. The attention distribution changes drastically between the words in the generated summary while not impacting significantly the attribution over the input text. We deleted in an experiment the relevance propagated through the attention mechanism to the encoder and didn't observe much changes in the saliency map.",
"It can be seen as evidence that using the attention distribution as an “explanation\" of the prediction can be misleading. It is not the only information received by the decoder and the importance it “allocates\" to this attention state might be very low. What seems to happen in this application is that most of the information used is transmitted from the encoder to the decoder and the attention mechanism at each decoding step just changes marginally how it is used. Quantifying the difference between attention distribution and saliency map across multiple tasks is a possible future work.",
"The second observation we can make is that the saliency map doesn't seem to highlight the right things in the input for the summary it generates. The saliency maps on Figure 3 correspond to the summary from Figure 1 , and we don't see the word “video\" highlighted in the input text, which seems to be important for the output.",
"This allows us to question how good the saliency maps are in the sense that we question how well they actually represent the network's use of the input features. We will call that truthfulness of the attribution in regard to the computation, meaning that an attribution is truthful in regard to the computation if it actually highlights the important input features that the network attended to during prediction. We proceed to measure the truthfulness of the attributions by validating them quantitatively."
],
[
"We propose to validate the saliency maps in a similar way as Arras et al. Arras2017 by incrementally deleting “important\" words from the input text and observe the change in the resulting generated summaries.",
"We first define what “important\" (and “unimportant\") input words mean across the 50 saliency maps per texts. Relevance transmitted by LRP being positive or negative, we average the absolute value of the relevance across the saliency maps to obtain one ranking of the most “relevant\" words. The idea is that input words with negative relevance have an impact on the resulting generated word, even if it is not participating positively, while a word with a relevance close to zero should not be important at all. We did however also try with different methods, like averaging the raw relevance or averaging a scaled absolute value where negative relevance is scaled down by a constant factor. The absolute value average seemed to deliver the best results.",
"We delete incrementally the important words (words with the highest average) in the input and compared it to the control experiment that consists of deleting the least important word and compare the degradation of the resulting summaries. We obtain mitigated results: for some texts, we observe a quick degradation when deleting important words which are not observed when deleting unimportant words (see Figure 4 ), but for other test examples we don't observe a significant difference between the two settings (see Figure 5 ).",
"One might argue that the second summary in Figure 5 is better than the first one as it makes better sentences but as the model generates inaccurate summaries, we do not wish to make such a statement.",
"This however allows us to say that the attribution generated for the text at the origin of the summaries in Figure 4 are truthful in regard to the network's computation and we may use it for further studies of the example, whereas for the text at the origin of Figure 5 we shouldn't draw any further conclusions from the attribution generated.",
"One interesting point is that one saliency map didn't look “better\" than the other, meaning that there is no apparent way of determining their truthfulness in regard of the computation without doing a quantitative validation. This brings us to believe that even in simpler tasks, the saliency maps might make sense to us (for example highlighting the animal in an image classification task), without actually representing what the network really attended too, or in what way.",
"We defined without saying it the counterfactual case in our experiment: “Would the important words in the input be deleted, we would have a different summary\". Such counterfactuals are however more difficult to define for image classification for example, where it could be applying a mask over an image, or just filtering a colour or a pattern. We believe that defining a counterfactual and testing it allows us to measure and evaluate the truthfulness of the attributions and thus weight how much we can trust them."
],
[
"In this work, we have implemented and applied LRP to a sequence-to-sequence model trained on a more complex task than usual: text summarization. We used previous work to solve the difficulties posed by LRP in LSTM cells and adapted the same technique for Bahdanau et al. Bahdanau2014 attention mechanism.",
"We observed a peculiar behaviour of the saliency maps for the words in the output summary: they are almost all identical and seem uncorrelated with the attention distribution. We then proceeded to validate our attributions by averaging the absolute value of the relevance across the saliency maps. We obtain a ranking of the word from the most important to the least important and proceeded to delete one or another.",
"We showed that in some cases the saliency maps are truthful to the network's computation, meaning that they do highlight the input features that the network focused on. But we also showed that in some cases the saliency maps seem to not capture the important input features. This brought us to discuss the fact that these attributions are not sufficient by themselves, and that we need to define the counter-factual case and test it to measure how truthful the saliency maps are.",
"Future work would look into the saliency maps generated by applying LRP to pointer-generator networks and compare to our current results as well as mathematically justifying the average that we did when validating our saliency maps. Some additional work is also needed on the validation of the saliency maps with counterfactual tests. The exploitation and evaluation of saliency map are a very important step and should not be overlooked."
]
]
} |
{
"question": [
"Which baselines did they compare?",
"How many attention layers are there in their model?",
"Is the explanation from saliency map correct?"
],
"question_id": [
"6e2ad9ad88cceabb6977222f5e090ece36aa84ea",
"aacb0b97aed6fc6a8b471b8c2e5c4ddb60988bf5",
"710c1f8d4c137c8dad9972f5ceacdbf8004db208"
],
"nlp_background": [
"five",
"five",
"five"
],
"topic_background": [
"familiar",
"familiar",
"familiar"
],
"paper_read": [
"no",
"no",
"no"
],
"search_query": [
"saliency",
"saliency",
"saliency"
],
"question_writer": [
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66",
"50d8b4a941c26b89482c94ab324b5a274f9ced66"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"The baseline model is a deep sequence-to-sequence encoder/decoder model with attention. The encoder is a bidirectional Long-Short Term Memory(LSTM) cell BIBREF14 and the decoder a single LSTM cell with attention mechanism. The attention mechanism is computed as in BIBREF9 and we use a greedy search for decoding. We train end-to-end including the words embeddings. The embedding size used is of 128 and the hidden state size of the LSTM cells is of 254."
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"We present in this section the baseline model from See et al. See2017 trained on the CNN/Daily Mail dataset. We reproduce the results from See et al. See2017 to then apply LRP on it.",
"The baseline model is a deep sequence-to-sequence encoder/decoder model with attention. The encoder is a bidirectional Long-Short Term Memory(LSTM) cell BIBREF14 and the decoder a single LSTM cell with attention mechanism. The attention mechanism is computed as in BIBREF9 and we use a greedy search for decoding. We train end-to-end including the words embeddings. The embedding size used is of 128 and the hidden state size of the LSTM cells is of 254."
],
"highlighted_evidence": [
"We present in this section the baseline model from See et al. See2017 trained on the CNN/Daily Mail dataset.",
"The baseline model is a deep sequence-to-sequence encoder/decoder model with attention. The encoder is a bidirectional Long-Short Term Memory(LSTM) cell BIBREF14 and the decoder a single LSTM cell with attention mechanism. The attention mechanism is computed as in BIBREF9 and we use a greedy search for decoding. We train end-to-end including the words embeddings. The embedding size used is of 128 and the hidden state size of the LSTM cells is of 254."
]
},
{
"unanswerable": false,
"extractive_spans": [
"The baseline model is a deep sequence-to-sequence encoder/decoder model with attention. The encoder is a bidirectional Long-Short Term Memory(LSTM) cell BIBREF14 and the decoder a single LSTM cell with attention mechanism. The attention mechanism is computed as in BIBREF9 and we use a greedy search for decoding. We train end-to-end including the words embeddings. The embedding size used is of 128 and the hidden state size of the LSTM cells is of 254."
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"The baseline model is a deep sequence-to-sequence encoder/decoder model with attention. The encoder is a bidirectional Long-Short Term Memory(LSTM) cell BIBREF14 and the decoder a single LSTM cell with attention mechanism. The attention mechanism is computed as in BIBREF9 and we use a greedy search for decoding. We train end-to-end including the words embeddings. The embedding size used is of 128 and the hidden state size of the LSTM cells is of 254."
],
"highlighted_evidence": [
"The baseline model is a deep sequence-to-sequence encoder/decoder model with attention. The encoder is a bidirectional Long-Short Term Memory(LSTM) cell BIBREF14 and the decoder a single LSTM cell with attention mechanism. The attention mechanism is computed as in BIBREF9 and we use a greedy search for decoding. We train end-to-end including the words embeddings. The embedding size used is of 128 and the hidden state size of the LSTM cells is of 254."
]
}
],
"annotation_id": [
"0850b7c0555801d057062480de6bb88adb81cae3",
"93216bca45711b73083372495d9a2667736fbac9"
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"101dbdd2108b3e676061cb693826f0959b47891b"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "one",
"evidence": [
"The baseline model is a deep sequence-to-sequence encoder/decoder model with attention. The encoder is a bidirectional Long-Short Term Memory(LSTM) cell BIBREF14 and the decoder a single LSTM cell with attention mechanism. The attention mechanism is computed as in BIBREF9 and we use a greedy search for decoding. We train end-to-end including the words embeddings. The embedding size used is of 128 and the hidden state size of the LSTM cells is of 254."
],
"highlighted_evidence": [
"The encoder is a bidirectional Long-Short Term Memory(LSTM) cell BIBREF14 and the decoder a single LSTM cell with attention mechanism. "
]
}
],
"annotation_id": [
"e0ca6b95c1c051723007955ce6804bd29f325379"
],
"worker_id": [
"101dbdd2108b3e676061cb693826f0959b47891b"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": false,
"free_form_answer": "",
"evidence": [
"We showed that in some cases the saliency maps are truthful to the network's computation, meaning that they do highlight the input features that the network focused on. But we also showed that in some cases the saliency maps seem to not capture the important input features. This brought us to discuss the fact that these attributions are not sufficient by themselves, and that we need to define the counter-factual case and test it to measure how truthful the saliency maps are.",
"The second observation we can make is that the saliency map doesn't seem to highlight the right things in the input for the summary it generates. The saliency maps on Figure 3 correspond to the summary from Figure 1 , and we don't see the word “video\" highlighted in the input text, which seems to be important for the output."
],
"highlighted_evidence": [
"But we also showed that in some cases the saliency maps seem to not capture the important input features. ",
"The second observation we can make is that the saliency map doesn't seem to highlight the right things in the input for the summary it generates"
]
}
],
"annotation_id": [
"79e54a7b9ba9cde5813c3434e64a02d722f13b23"
],
"worker_id": [
"101dbdd2108b3e676061cb693826f0959b47891b"
]
}
]
} |
{
"caption": [
"Figure 2: Representation of the propagation of the relevance from the output to the input. It passes through the decoder and attention mechanism for each previous decoding time-step, then is passed onto the encoder which takes into account the relevance transiting in both direction due to the bidirectional nature of the encoding LSTM cell.",
"Figure 3: Left : Saliency map over the truncated input text for the second generated word “the”. Right : Saliency map over the truncated input text for the 25th generated word “investigation”. We see that the difference between the mappings is marginal.",
"Figure 4: Summary from Figure 1 generated after deleting important and unimportant words from the input text. We observe a significant difference in summary degradation between the two experiments, where the decoder just repeats the UNKNOWN token over and over."
],
"file": [
"3-Figure2-1.png",
"3-Figure3-1.png",
"4-Figure4-1.png"
]
} |
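The row above states the LRP epsilon rule (Eq. 7) used to redistribute relevance through ordinary layers. Below is a minimal NumPy sketch of that rule for a single dense layer z_upper = z_lower @ W + b, with D_l taken as the dimension of layer l as in the record; the names and the sign(0) = +1 convention are assumptions for illustration, not the authors' code.

```python
import numpy as np

def lrp_epsilon_dense(z_lower, weights, bias, relevance_upper, eps=1e-5):
    """LRP epsilon rule for one dense layer, following Eq. 7 of the record above.

    z_lower:         activations of layer l, shape (d_l,)
    weights:         W[i, j] from neuron i of layer l to neuron j of layer l+1, shape (d_l, d_l1)
    bias:            bias of layer l+1, shape (d_l1,)
    relevance_upper: relevance of layer l+1, shape (d_l1,)
    Returns the relevance of layer l (sum over upper neurons j of R_{i<-j}).
    """
    d_l = z_lower.shape[0]
    z_upper = z_lower @ weights + bias                       # pre-activations of layer l+1
    sign = np.where(z_upper >= 0, 1.0, -1.0)                 # sign(0) taken as +1 (assumption)
    # numerator per (i, j): w_ij * z_i + (eps * sign(z_j) + b_j) / D_l
    numer = weights * z_lower[:, None] + (eps * sign + bias)[None, :] / d_l
    denom = z_upper + eps * sign                             # stabilized denominator
    return (numer / denom * relevance_upper[None, :]).sum(axis=1)

# toy check: relevance of a random upper layer redistributed onto four input neurons
rng = np.random.default_rng(0)
print(lrp_epsilon_dense(rng.normal(size=4), rng.normal(size=(4, 3)),
                        rng.normal(size=3), rng.normal(size=3)))
```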
1810.04528
|
Is there Gender bias and stereotype in Portuguese Word Embeddings?
|
In this work, we propose an analysis of the presence of gender bias associated with professions in Portuguese word embeddings. The objective of this work is to study gender implications related to stereotyped professions for women and men in the context of the Portuguese language.
|
{
"section_name": [
"Introduction",
"Related Work",
"Portuguese Embedding",
"Proposed Approach",
"Experiments",
"Final Remarks"
],
"paragraphs": [
[
"Recently, the transformative potential of machine learning (ML) has propelled ML into the forefront of mainstream media. In Brazil, the use of such technique has been widely diffused gaining more space. Thus, it is used to search for patterns, regularities or even concepts expressed in data sets BIBREF0 , and can be applied as a form of aid in several areas of everyday life.",
"Among the different definitions, ML can be seen as the ability to improve performance in accomplishing a task through the experience BIBREF1 . Thus, BIBREF2 presents this as a method of inferences of functions or hypotheses capable of solving a problem algorithmically from data representing instances of the problem. This is an important way to solve different types of problems that permeate computer science and other areas.",
"One of the main uses of ML is in text processing, where the analysis of the content the entry point for various learning algorithms. However, the use of this content can represent the insertion of different types of bias in training and may vary with the context worked. This work aims to analyze and remove gender stereotypes from word embedding in Portuguese, analogous to what was done in BIBREF3 for the English language. Hence, we propose to employ a public word2vec model pre-trained to analyze gender bias in the Portuguese language, quantifying biases present in the model so that it is possible to reduce the spreading of sexism of such models. There is also a stage of bias reducing over the results obtained in the model, where it is sought to analyze the effects of the application of gender distinction reduction techniques.",
"This paper is organized as follows: Section SECREF2 discusses related works. Section SECREF3 presents the Portuguese word2vec embeddings model used in this paper and Section SECREF4 proposes our method. Section SECREF5 presents experimental results, whose purpose is to verify results of a de-bias algorithm application in Portuguese embeddings word2vec model and a short discussion about it. Section SECREF6 brings our concluding remarks."
],
[
"There is a wide range of techniques that provide interesting results in the context of ML algorithms geared to the classification of data without discrimination; these techniques range from the pre-processing of data BIBREF4 to the use of bias removal techniques BIBREF5 in fact. Approaches linked to the data pre-processing step usually consist of methods based on improving the quality of the dataset after which the usual classification tools can be used to train a classifier. So, it starts from a baseline already stipulated by the execution of itself. On the other side of the spectrum, there are Unsupervised and semi-supervised learning techniques, that are attractive because they do not imply the cost of corpus annotation BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 .",
"The bias reduction is studied as a way to reduce discrimination through classification through different approaches BIBREF10 BIBREF11 . In BIBREF12 the authors propose to specify, implement, and evaluate the “fairness-aware\" ML interface called themis-ml. In this interface, the main idea is to pick up a data set from a modified dataset. Themis-ml implements two methods for training fairness-aware models. The tool relies on two methods to make agnostic model type predictions: Reject Option Classification and Discrimination-Aware Ensemble Classification, these procedures being used to post-process predictions in a way that reduces potentially discriminatory predictions. According to the authors, it is possible to perceive the potential use of the method as a means of reducing bias in the use of ML algorithms.",
"In BIBREF3 , the authors propose a method to hardly reduce bias in English word embeddings collected from Google News. Using word2vec, they performed a geometric analysis of gender direction of the bias contained in the data. Using this property with the generation of gender-neutral analogies, a methodology was provided for modifying an embedding to remove gender stereotypes. Some metrics were defined to quantify both direct and indirect gender biases in embeddings and to develop algorithms to reduce bias in some embedding. Hence, the authors show that embeddings can be used in applications without amplifying gender bias."
],
[
"In BIBREF13 , the quality of the representation of words through vectors in several models is discussed. According to the authors, the ability to train high-quality models using simplified architectures is useful in models composed of predictive methods that try to predict neighboring words with one or more context words, such as Word2Vec. Word embeddings have been used to provide meaningful representations for words in an efficient way.",
"In BIBREF14 , several word embedding models trained in a large Portuguese corpus are evaluated. Within the Word2Vec model, two training strategies were used. In the first, namely Skip-Gram, the model is given the word and attempts to predict its neighboring words. The second, Continuous Bag-of-Words (CBOW), the model is given the sequence of words without the middle one and attempts to predict this omitted word. The latter was chosen for application in the present proposal.",
"The authors of BIBREF14 claim to have collected a large corpus from several sources to obtain a multi-genre corpus representative of the Portuguese language. Hence, it comprehensively covers different expressions of the language, making it possible to analyze gender bias and stereotype in Portuguese word embeddings. The dataset used was tokenized and normalized by the authors to reduce the corpus vocabulary size, under the premise that vocabulary reduction provides more representative vectors."
],
[
"Some linguists point out that the female gender is, in Portuguese, a particularization of the masculine. In this way the only gender mark is the feminine, the others being considered without gender (including names considered masculine). In BIBREF15 the gender representation in Portuguese is associated with a set of phenomena, not only from a linguistic perspective but also from a socio-cultural perspective. Since most of the termination of words (e.g., advogada and advogado) are used to indicate to whom the expression refers, stereotypes can be explained through communication. This implies the presence of biases when dealing with terms such as those referring to professions.",
"Figure FIGREF1 illustrates the approach proposed in this work. First, using a list of professions relating the identification of female and male who perform it as a parameter, we evaluate the accuracy of similarity generated by the embeddings. Then, getting the biased results, we apply the De-bias algorithm BIBREF3 aiming to reduce sexist analogies previous generated. Thus, all the results are analyzed by comparing the accuracies.",
"Using the word2vec model available in a public repository BIBREF14 , the proposal involves the analysis of the most similar analogies generated before and after the application of the BIBREF3 . The work is focused on the analysis of gender bias associated with professions in word embeddings. So therefore into the evaluation of the accuracy of the associations generated, aiming at achieving results as good as possible without prejudicing the evaluation metrics.",
"Algorithm SECREF4 describes the method performed during the evaluation of the gender bias presence. In this method we try to evaluate the accuracy of the analogies generated through the model, that is, to verify the cases of association matching generated between the words.",
"[!htb] Model Evaluation [1]",
"w2v_evaluate INLINEFORM0 open_model( INLINEFORM1 ) count = 0 INLINEFORM2 in INLINEFORM3 read list of tuples x = model.most_similar(positive=[`ela', male], negative=[`ele'])",
"x = female count += 1 accuracy = count/size(profession_pairs) return accuracy"
],
[
"The purpose of this section is to perform different analysis concerning bias in word2vec models with Portuguese embeddings. The Continuous Bag-of-Words model used was provided by BIBREF14 (described in Section SECREF3 ). For these experiments, we use a model containing 934966 words of dimension 300 per vector representation. To realize the experiments, a list containing fifty professions labels for female and male was used as the parameter of similarity comparison.",
"Using the python library gensim, we evaluate the extreme analogies generated when comparing vectors like: INLINEFORM0 , where INLINEFORM1 represents the item from professions list and INLINEFORM2 the expected association. The most similarity function finds the top-N most similar entities, computing cosine similarity between a simple mean of the projection weight vectors of the given docs. Figure FIGREF4 presents the most extreme analogies results obtained from the model using these comparisons.",
"Applying the Algorithm SECREF4 , we check the accuracy obtained with the similarity function before and after the application of the de-bias method. Table TABREF3 presents the corresponding results. In cases like the analogy of `garçonete' to `stripper' (Figure FIGREF4 , line 8), it is possible to observe that the relationship stipulated between terms with sexual connotation and females is closer than between females and professions. While in the male model, even in cases of non-compliance, the closest analogy remains in the professional environment.",
"Using a confidence factor of 99%, when comparing the correctness levels of the model with and without the reduction of bias, the prediction of the model with bias is significantly better. Different authors BIBREF16 BIBREF17 show that the removal of bias in models produces a negative impact on the quality of the model. On the other hand, it is observed that even with a better hit rate the correctness rate in the prediction of related terms is still low."
],
[
"This paper presents an analysis of the presence of gender bias in Portuguese word embeddings. Even though it is a work in progress, the proposal showed promising results in analyzing predicting models.",
"A possible extension of the work involves deepening the analysis of the results obtained, seeking to achieve higher accuracy rates and fairer models to be used in machine learning techniques. Thus, these studies can involve tests with different methods of pre-processing the data to the use of different models, as well as other factors that may influence the results generated. This deepening is necessary since the model's accuracy is not high.",
"To conclude, we believe that the presence of gender bias and stereotypes in the Portuguese language is found in different spheres of language, and it is important to study ways of mitigating different types of discrimination. As such, it can be easily applied to analyze racists bias into the language, such as different types of preconceptions."
]
]
} |
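The Model Evaluation algorithm above maps directly onto gensim's analogy API. Below is a minimal sketch of that evaluation loop, assuming the embeddings are available in word2vec text format and that profession_pairs is the list of fifty (male, female) profession tuples; the file name and the toy pairs are placeholders, not the authors' exact setup.

```python
# Sketch of the analogy-accuracy evaluation described in the Model Evaluation
# algorithm above. The embedding path and the toy pairs are illustrative; the
# original work uses the public Portuguese CBOW embeddings from BIBREF14 and a
# list of fifty profession pairs.
from gensim.models import KeyedVectors

def w2v_evaluate(model_path, profession_pairs):
    """profession_pairs: list of (male_form, female_form) tuples."""
    model = KeyedVectors.load_word2vec_format(model_path)  # plain-text w2v format assumed
    count = 0
    for male, female in profession_pairs:
        # analogy: `ele' is to <male profession> as `ela' is to ?
        predicted, _score = model.most_similar(positive=["ela", male],
                                               negative=["ele"], topn=1)[0]
        if predicted == female:
            count += 1
    return count / len(profession_pairs)

if __name__ == "__main__":
    toy_pairs = [("advogado", "advogada"), ("enfermeiro", "enfermeira")]
    print(w2v_evaluate("cbow_s300.txt", toy_pairs))  # placeholder path
```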
{
"question": [
"Does this paper target European or Brazilian Portuguese?",
"What were the word embeddings trained on?",
"Which word embeddings are analysed?"
],
"question_id": [
"519db0922376ce1e87fcdedaa626d665d9f3e8ce",
"99a10823623f78dbff9ccecb210f187105a196e9",
"09f0dce416a1e40cc6a24a8b42a802747d2c9363"
],
"nlp_background": [
"five",
"five",
"five"
],
"topic_background": [
"familiar",
"familiar",
"familiar"
],
"paper_read": [
"no",
"no",
"no"
],
"search_query": [
"bias",
"bias",
"bias"
],
"question_writer": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
],
"answers": [
{
"answer": [
{
"unanswerable": true,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "",
"evidence": [],
"highlighted_evidence": []
},
{
"unanswerable": true,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "",
"evidence": [],
"highlighted_evidence": []
}
],
"annotation_id": [
"0a93ba2daf6764079c983e70ca8609d6d1d8fa5c",
"c6686e4e6090f985be4cc72a08ca2d4948b355bb"
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8",
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"large Portuguese corpus"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"In BIBREF14 , several word embedding models trained in a large Portuguese corpus are evaluated. Within the Word2Vec model, two training strategies were used. In the first, namely Skip-Gram, the model is given the word and attempts to predict its neighboring words. The second, Continuous Bag-of-Words (CBOW), the model is given the sequence of words without the middle one and attempts to predict this omitted word. The latter was chosen for application in the present proposal.",
"Using the word2vec model available in a public repository BIBREF14 , the proposal involves the analysis of the most similar analogies generated before and after the application of the BIBREF3 . The work is focused on the analysis of gender bias associated with professions in word embeddings. So therefore into the evaluation of the accuracy of the associations generated, aiming at achieving results as good as possible without prejudicing the evaluation metrics."
],
"highlighted_evidence": [
"In BIBREF14 , several word embedding models trained in a large Portuguese corpus are evaluated. ",
"Using the word2vec model available in a public repository BIBREF14 , the proposal involves the analysis of the most similar analogies generated before and after the application of the BIBREF3 . "
]
}
],
"annotation_id": [
"e0cd186397ec9543e48d25f5944fc9318542f1d5"
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"Continuous Bag-of-Words (CBOW)"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"In BIBREF14 , several word embedding models trained in a large Portuguese corpus are evaluated. Within the Word2Vec model, two training strategies were used. In the first, namely Skip-Gram, the model is given the word and attempts to predict its neighboring words. The second, Continuous Bag-of-Words (CBOW), the model is given the sequence of words without the middle one and attempts to predict this omitted word. The latter was chosen for application in the present proposal."
],
"highlighted_evidence": [
"The second, Continuous Bag-of-Words (CBOW), the model is given the sequence of words without the middle one and attempts to predict this omitted word. "
]
}
],
"annotation_id": [
"8b5278bfc35cf0a1b43ceb3418c2c5d20f213a31"
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
}
]
} |
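The de-bias step referenced above follows the hard de-biasing method of BIBREF3. As a rough illustration only, the sketch below shows the core neutralize operation on a single gender direction; the original method estimates the direction with PCA over several definitional pairs and adds an equalization step, so this is not the authors' implementation.

```python
# Minimal sketch of the "neutralize" step at the core of the de-bias method
# (BIBREF3) applied in the experiments above. The Portuguese definitional pairs
# and the single averaged direction are simplifications of the original method.
import numpy as np

def gender_direction(model, pairs=(("ele", "ela"), ("homem", "mulher"))):
    # average difference between a few female/male definitional word vectors
    diffs = [model[f] - model[m] for m, f in pairs]
    g = np.mean(diffs, axis=0)
    return g / np.linalg.norm(g)

def neutralize(vector, g):
    # remove the component of the word vector along the gender direction
    v = vector - np.dot(vector, g) * g
    return v / np.linalg.norm(v)

# e.g., debiased = neutralize(model["advogado"], gender_direction(model))
```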
{
"caption": [
"Fig. 1. Proposal",
"Fig. 2. Extreme Analogies"
],
"file": [
"3-Figure1-1.png",
"5-Figure2-1.png"
]
} |
2003.12218
|
Comprehensive Named Entity Recognition on CORD-19 with Distant or Weak Supervision
|
We created this CORD-19-NER dataset with comprehensive named entity recognition (NER) on the COVID-19 Open Research Dataset Challenge (CORD-19) corpus (2020-03-13). This CORD-19-NER dataset covers 74 fine-grained named entity types. It is automatically generated by combining the annotation results from four sources: (1) pre-trained NER model on 18 general entity types from Spacy, (2) pre-trained NER model on 18 biomedical entity types from SciSpacy, (3) knowledge base (KB)-guided NER model on 127 biomedical entity types with our distantly-supervised NER method, and (4) seed-guided NER model on 8 new entity types (specifically related to the COVID-19 studies) with our weakly-supervised NER method. We hope this dataset can help the text mining community build downstream applications. We also hope this dataset can bring insights for the COVID-19 studies, both on the biomedical side and on the social side.
|
{
"section_name": [
"Introduction",
"CORD-19-NER Dataset ::: Corpus",
"CORD-19-NER Dataset ::: NER Methods",
"Results ::: NER Annotation Results",
"Results ::: Top-Frequent Entity Summarization",
"Conclusion",
"Acknowledgment"
],
"paragraphs": [
[
"Coronavirus disease 2019 (COVID-19) is an infectious disease caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). The disease was first identified in 2019 in Wuhan, Central China, and has since spread globally, resulting in the 2019–2020 coronavirus pandemic. On March 16th, 2020, researchers and leaders from the Allen Institute for AI, Chan Zuckerberg Initiative (CZI), Georgetown University’s Center for Security and Emerging Technology (CSET), Microsoft, and the National Library of Medicine (NLM) at the National Institutes of Health released the COVID-19 Open Research Dataset (CORD-19) of scholarly literature about COVID-19, SARS-CoV-2, and the coronavirus group.",
"Named entity recognition (NER) is a fundamental step in text mining system development to facilitate the COVID-19 studies. There is critical need for NER methods that can quickly adapt to all the COVID-19 related new types without much human effort for training data annotation. We created this CORD-19-NER dataset with comprehensive named entity annotation on the CORD-19 corpus (2020-03-13). This dataset covers 75 fine-grained named entity types. CORD-19-NER is automatically generated by combining the annotation results from four sources. In the following sections, we introduce the details of CORD-19-NER dataset construction. We also show some NER annotation results in this dataset."
],
[
"The corpus is generated from the 29,500 documents in the CORD-19 corpus (2020-03-13). We first merge all the meta-data (all_sources_metadata_2020-03-13.csv) with their corresponding full-text papers. Then we create a tokenized corpus (CORD-19-corpus.json) for further NER annotations.",
"Corpus Tokenization. The raw corpus is a combination of the “title\", “abstract\" and “full-text\" from the CORD-19 corpus. We first conduct automatic phrase mining on the raw corpus using AutoPhrase BIBREF0. Then we do the second round of tokenization with Spacy on the phrase-replaced corpus. We have observed that keeping the AutoPhrase results will significantly improve the distantly- and weakly-supervised NER performance.",
"Key Items. The tokenized corpus includes the following items:",
"doc_id: the line number (0-29499) in “all_sources_metadata_2020-03-13.csv\" in the CORD-19 corpus (2020-03-13).",
"sents: [sent_id, sent_tokens], tokenized sentences and words as described above.",
"source: CZI (1236 records), PMC (27337), bioRxiv (566) and medRxiv (361).",
"doi: populated for all BioRxiv/MedRxiv paper records and most of the other records (26357 non null).",
"pmcid: populated for all PMC paper records (27337 non null).",
"pubmed_id: populated for some of the records.",
"Other keys: publish_time, authors and journal.",
"The tokenized corpus (CORD-19-corpus.json) with the file schema and detailed descriptions can be found in our CORD-19-NER dataset."
],
[
"CORD-19-NER annotation is a combination from four sources with different NER methods:",
"Pre-trained NER on 18 general entity types from Spacy using the model “en_core_web_sm\".",
"Pre-trained NER on 18 biomedical entity types from SciSpacy using the model “en_ner_bionlp13cg_md\".",
"Knowledge base (KB)-guided NER on 127 biomedical entity types with our distantly-supervised NER methods BIBREF1, BIBREF2. We do not require any human annotated training data for the NER model training. Instead, We rely on UMLS as the input KB for distant supervision.",
"Seed-guided NER on 9 new entity types (specifically related to the COVID-19 studies) with our weakly-supervised NER method. We only require several (10-20) human-input seed entities for each new type. Then we expand the seed entity sets with CatE BIBREF3 and apply our distant NER method for the new entity type recognition.",
"The 9 new entity types with examples of their input seed are as follows:",
"Coronavirus: COVID-19, SARS, MERS, etc.",
"Viral Protein: Hemagglutinin, GP120, etc.",
"Livestock: cattle, sheep, pig, etc.",
"Wildlife: bats, wild animals, wild birds, etc",
"Evolution: genetic drift, natural selection, mutation rate, etc",
"Physical Science: atomic charge, Amber force fields, Van der Waals interactions, etc.",
"Substrate: blood, sputum, urine, etc.",
"Material: copper, stainless steel, plastic, etc.",
"Immune Response: adaptive immune response, cell mediated immunity, innate immunity, etc.",
"We merged all the entity types from the four sources and reorganized them into one entity type hierarchy. Specifically, we align all the types from SciSpacy to UMLS. We also merge some fine-grained UMLS entity types to their more coarse-grained types based on the corpus count. Then we get a final entity type hierarchy with 75 fine-grained entity types used in our annotations. The entity type hierarchy (CORD-19-types.xlsx) can be found in our CORD-19-NER dataset.",
"Then we conduct named entity annotation with the four NER methods on the 75 fine-grained entity types. After we get the NER annotation results with the four different methods, we merge the results into one file. The conflicts are resolved by giving priority to different entity types annotated by different methods according to their annotation quality. The final entity annotation results (CORD-19-ner.json) with the file schema and detailed descriptions can be found in our CORD-19-NER dataset."
],
[
"In Figure FIGREF28, we show some examples of the annotation results in CORD-19-NER. We can see that our distantly- or weakly supervised methods achieve high quality recognizing the new entity types, requiring only several seed examples as the input. For example, we recognized “SARS-CoV-2\" as the “CORONAVIRUS\" type, “bat\" and “pangolins\" as the “WILDLIFE\" type and “Van der Waals forces\" as the “PHYSICAL_SCIENCE\" type. This NER annotation results help downstream text mining tasks in discovering the origin and the physical nature of the virus. Our NER methods are domain-independent that can be applied to corpus in different domains. In addition, we show another example of NER annotation on New York Times with our system in Figure FIGREF29.",
"In Figure FIGREF30, we show the comparison of our annotation results with existing NER/BioNER systems. In Figure FIGREF30, we can see that only our method can identify “SARS-CoV-2\" as a coronavirus. In Figure FIGREF30, we can see that our method can identify many more entities such as “pylogenetic\" as a evolution term and “bat\" as a wildlife. In Figure FIGREF30, we can also see that our method can identify many more entities such as “racism\" as a social behavior. In summary, our distantly- and weakly-supervised NER methods are reliable for high-quality entity recognition without requiring human effort for training data annotation."
],
[
"In Table TABREF34, we show some examples of the most frequent entities in the annotated corpus. Specifically, we show the entity types including both our new types and some UMLS types that have not been manually annotated before. We find our annotated entities very informative for the COVID-19 studies. For example, the most frequent entities for the type “SIGN_OR_SYMPTOM behavior\" includes “cough\" and “respiratory symptoms\" that are the most common symptoms for COVID-19 . The most frequent entities for the type “INDIVIDUAL_BEHAVIOR\" includes “hand hygiene\", “disclosures\" and “absenteeism\", which indicates that people focus more on hand cleaning for the COVID-19 issue. Also, the most frequent entities for the type “MACHINE_ACTIVITY\" includes “machine learning\", “data processing\" and “automation\", which indicates that people focus more on the automated methods that can process massive data for the COVID-19 studies. This type also includes “telecommunication\" as the top results, which is quite reasonable under the current COVID-19 situation. More examples can be found in our dataset."
],
[
"In the future, we will further improve the CORD-19-NER dataset quality. We will also build text mining systems based on the CORD-19-NER dataset with richer functionalities. We hope this dataset can help the text mining community build downstream applications. We also hope this dataset can bring insights for the COVID-19 studies, both on the biomedical side and on the social side."
],
[
"Research was sponsored in part by US DARPA KAIROS Program No. FA8750-19-2-1004 and SocialSim Program No. W911NF-17-C-0099, National Science Foundation IIS 16-18481, IIS 17-04532, and IIS-17-41317, and DTRA HDTRA11810026. Any opinions, findings, and conclusions or recommendations expressed herein are those of the authors and should not be interpreted as necessarily representing the views, either expressed or implied, of DARPA or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for government purposes notwithstanding any copyright annotation hereon. The views and conclusions contained in this paper are those of the authors and should not be interpreted as representing any funding agencies."
]
]
} |
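The CORD-19-NER annotations are produced by four NER sources whose outputs are merged, with conflicts resolved by per-type priorities. The sketch below covers only the two publicly available pre-trained components named in the paper (Spacy's en_core_web_sm and SciSpacy's en_ner_bionlp13cg_md) and uses an illustrative overlap-resolution rule; the distantly- and weakly-supervised components and the actual priority table are not reproduced here.

```python
# Sketch of merging NER annotations from two of the four sources described
# above. Loading the SciSpacy model requires the scispacy package and the
# model to be installed; the priority order below is illustrative.
import spacy

nlp_general = spacy.load("en_core_web_sm")    # 18 general entity types
nlp_bio = spacy.load("en_ner_bionlp13cg_md")  # 18 biomedical entity types (SciSpacy)

PRIORITY = {"bio": 0, "general": 1}  # biomedical source wins ties

def annotate(text):
    spans = []
    for source, nlp in (("general", nlp_general), ("bio", nlp_bio)):
        for ent in nlp(text).ents:
            spans.append((ent.start_char, ent.end_char, ent.label_, source))
    # Greedily keep non-overlapping spans, preferring earlier starts and,
    # when two spans start at the same offset, the higher-priority source.
    spans.sort(key=lambda s: (s[0], PRIORITY[s[3]]))
    merged, last_end = [], -1
    for start, end, label, source in spans:
        if start >= last_end:
            merged.append({"start": start, "end": end, "type": label, "source": source})
            last_end = end
    return merged

print(annotate("SARS-CoV-2 infections were first reported in Wuhan."))
```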
{
"question": [
"Did they experiment with the dataset?",
"What is the size of this dataset?",
"Do they list all the named entity types present?"
],
"question_id": [
"ce6201435cc1196ad72b742db92abd709e0f9e8d",
"928828544e38fe26c53d81d1b9c70a9fb1cc3feb",
"4f243056e63a74d1349488983dc1238228ca76a7"
],
"nlp_background": [
"",
"",
""
],
"topic_background": [
"",
"",
""
],
"paper_read": [
"",
"",
""
],
"search_query": [
"",
"",
""
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": true,
"free_form_answer": "",
"evidence": [
"In Figure FIGREF28, we show some examples of the annotation results in CORD-19-NER. We can see that our distantly- or weakly supervised methods achieve high quality recognizing the new entity types, requiring only several seed examples as the input. For example, we recognized “SARS-CoV-2\" as the “CORONAVIRUS\" type, “bat\" and “pangolins\" as the “WILDLIFE\" type and “Van der Waals forces\" as the “PHYSICAL_SCIENCE\" type. This NER annotation results help downstream text mining tasks in discovering the origin and the physical nature of the virus. Our NER methods are domain-independent that can be applied to corpus in different domains. In addition, we show another example of NER annotation on New York Times with our system in Figure FIGREF29.",
"In Figure FIGREF30, we show the comparison of our annotation results with existing NER/BioNER systems. In Figure FIGREF30, we can see that only our method can identify “SARS-CoV-2\" as a coronavirus. In Figure FIGREF30, we can see that our method can identify many more entities such as “pylogenetic\" as a evolution term and “bat\" as a wildlife. In Figure FIGREF30, we can also see that our method can identify many more entities such as “racism\" as a social behavior. In summary, our distantly- and weakly-supervised NER methods are reliable for high-quality entity recognition without requiring human effort for training data annotation."
],
"highlighted_evidence": [
"In Figure FIGREF28, we show some examples of the annotation results in CORD-19-NER. We can see that our distantly- or weakly supervised methods achieve high quality recognizing the new entity types, requiring only several seed examples as the input. For example, we recognized “SARS-CoV-2\" as the “CORONAVIRUS\" type, “bat\" and “pangolins\" as the “WILDLIFE\" type and “Van der Waals forces\" as the “PHYSICAL_SCIENCE\" type. This NER annotation results help downstream text mining tasks in discovering the origin and the physical nature of the virus. Our NER methods are domain-independent that can be applied to corpus in different domains. In addition, we show another example of NER annotation on New York Times with our system in Figure FIGREF29.\n\nIn Figure FIGREF30, we show the comparison of our annotation results with existing NER/BioNER systems. In Figure FIGREF30, we can see that only our method can identify “SARS-CoV-2\" as a coronavirus. In Figure FIGREF30, we can see that our method can identify many more entities such as “pylogenetic\" as a evolution term and “bat\" as a wildlife. In Figure FIGREF30, we can also see that our method can identify many more entities such as “racism\" as a social behavior. In summary, our distantly- and weakly-supervised NER methods are reliable for high-quality entity recognition without requiring human effort for training data annotation."
]
}
],
"annotation_id": [
"2d5e1221e7cd30341b51ddb988b8659b48b7ac2b"
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"29,500 documents"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"Named entity recognition (NER) is a fundamental step in text mining system development to facilitate the COVID-19 studies. There is critical need for NER methods that can quickly adapt to all the COVID-19 related new types without much human effort for training data annotation. We created this CORD-19-NER dataset with comprehensive named entity annotation on the CORD-19 corpus (2020-03-13). This dataset covers 75 fine-grained named entity types. CORD-19-NER is automatically generated by combining the annotation results from four sources. In the following sections, we introduce the details of CORD-19-NER dataset construction. We also show some NER annotation results in this dataset.",
"The corpus is generated from the 29,500 documents in the CORD-19 corpus (2020-03-13). We first merge all the meta-data (all_sources_metadata_2020-03-13.csv) with their corresponding full-text papers. Then we create a tokenized corpus (CORD-19-corpus.json) for further NER annotations."
],
"highlighted_evidence": [
"We created this CORD-19-NER dataset with comprehensive named entity annotation on the CORD-19 corpus (2020-03-13). ",
"The corpus is generated from the 29,500 documents in the CORD-19 corpus (2020-03-13). "
]
},
{
"unanswerable": false,
"extractive_spans": [
"29,500 documents in the CORD-19 corpus (2020-03-13)"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"The corpus is generated from the 29,500 documents in the CORD-19 corpus (2020-03-13). We first merge all the meta-data (all_sources_metadata_2020-03-13.csv) with their corresponding full-text papers. Then we create a tokenized corpus (CORD-19-corpus.json) for further NER annotations."
],
"highlighted_evidence": [
"The corpus is generated from the 29,500 documents in the CORD-19 corpus (2020-03-13). We first merge all the meta-data (all_sources_metadata_2020-03-13.csv) with their corresponding full-text papers. Then we create a tokenized corpus (CORD-19-corpus.json) for further NER annotations.\n\n"
]
}
],
"annotation_id": [
"1466a1bd3601c1b1cdedab1edb1bca2334809e3d",
"cd982553050caaa6fd8dabefe8b9697b05f5cf94"
],
"worker_id": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": false,
"free_form_answer": "",
"evidence": [
"FLOAT SELECTED: Table 2: Examples of the most frequent entities annotated in CORD-NER."
],
"highlighted_evidence": [
"FLOAT SELECTED: Table 2: Examples of the most frequent entities annotated in CORD-NER."
]
}
],
"annotation_id": [
"bd64f676b7b1d47ad86c5c897acfe759c2259269"
],
"worker_id": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
}
]
} |
{
"caption": [
"Table 1: Performance comparison on three major biomedical entity types in COVID-19 corpus.",
"Figure 1: Examples of the annotation results with CORD-NER system.",
"Figure 2: Annotation result comparison with other NER methods.",
"Table 2: Examples of the most frequent entities annotated in CORD-NER."
],
"file": [
"2-Table1-1.png",
"3-Figure1-1.png",
"4-Figure2-1.png",
"5-Table2-1.png"
]
} |
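Table 2 above summarizes the most frequent entities per type. A summary of that kind can be produced with a simple counting pass over the merged annotations, as sketched below; the loader that yields (entity text, entity type) pairs is assumed, since the released CORD-19-ner.json schema is documented with the dataset rather than here.

```python
# Sketch of the per-type frequency summary behind Table 2. The input is
# assumed to be an iterable of (entity_text, entity_type) pairs produced by
# whatever reads the released annotation file.
from collections import Counter, defaultdict

def top_entities(annotations, k=5):
    by_type = defaultdict(Counter)
    for text, etype in annotations:
        by_type[etype][text.lower()] += 1
    return {etype: counts.most_common(k) for etype, counts in by_type.items()}

toy = [("cough", "SIGN_OR_SYMPTOM"), ("hand hygiene", "INDIVIDUAL_BEHAVIOR"),
       ("cough", "SIGN_OR_SYMPTOM"), ("machine learning", "MACHINE_ACTIVITY")]
print(top_entities(toy))
```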
1904.09678
|
UniSent: Universal Adaptable Sentiment Lexica for 1000+ Languages
|
In this paper, we introduce UniSent, universal sentiment lexica for 1000 languages created using an English sentiment lexicon and a massively parallel corpus in the Bible domain. To the best of our knowledge, UniSent is the largest sentiment resource to date in terms of the number of covered languages, including many low-resource languages. To create UniSent, we propose Adapted Sentiment Pivot, a novel method that combines annotation projection, vocabulary expansion, and unsupervised domain adaptation. We evaluate the quality of UniSent for Macedonian, Czech, German, Spanish, and French and show that its quality is comparable to manually or semi-manually created sentiment resources. With the publication of this paper, we release the UniSent lexica as well as the code for the Adapted Sentiment Pivot method.
|
{
"section_name": [
"Introduction",
"Method",
"Experimental Setup",
"Results",
"Conclusion"
],
"paragraphs": [
[
"Sentiment classification is an important task which requires either word level or document level sentiment annotations. Such resources are available for at most 136 languages BIBREF0 , preventing accurate sentiment classification in a low resource setup. Recent research efforts on cross-lingual transfer learning enable to train models in high resource languages and transfer this information into other, low resource languages using minimal bilingual supervision BIBREF1 , BIBREF2 , BIBREF3 . Besides that, little effort has been spent on the creation of sentiment lexica for low resource languages (e.g., BIBREF0 , BIBREF4 , BIBREF5 ). We create and release Unisent, the first massively cross-lingual sentiment lexicon in more than 1000 languages. An extensive evaluation across several languages shows that the quality of Unisent is close to manually created resources. Our method is inspired by BIBREF6 with a novel combination of vocabulary expansion and domain adaptation using embedding spaces. Similar to our work, BIBREF7 also use massively parallel corpora to project POS tags and dependency relations across languages. However, their approach is based on assignment of the most probable label according to the alignment model from the source to the target language and does not include any vocabulary expansion or domain adaptation and do not use the embedding graphs."
],
[
"Our method, Adapted Sentiment Pivot requires a sentiment lexicon in one language (e.g. English) as well as a massively parallel corpus. Following steps are performed on this input."
],
[
"Our goal is to evaluate the quality of UniSent against several manually created sentiment lexica in different domains to ensure its quality for the low resource languages. We do this in several steps.",
"As the gold standard sentiment lexica, we chose manually created lexicon in Czech BIBREF11 , German BIBREF12 , French BIBREF13 , Macedonian BIBREF14 , and Spanish BIBREF15 . These lexica contain general domain words (as opposed to Twitter or Bible). As gold standard for twitter domain we use emoticon dataset and perform emoticon sentiment prediction BIBREF16 , BIBREF17 .",
"We use the (manually created) English sentiment lexicon (WKWSCI) in BIBREF18 as a resource to be projected over languages. For the projection step (Section SECREF1 ) we use the massively parallel Bible corpus in BIBREF8 . We then propagate the projected sentiment polarities to all words in the Wikipedia corpus. We chose Wikipedia here because its domain is closest to the manually annotated sentiment lexica we use to evaluate UniSent. In the adaptation step, we compute the shift between the vocabularies in the Bible and Wikipedia corpora. To show that our adaptation method also works well on domains like Twitter, we propose a second evaluation in which we use Adapted Sentiment Pivot to predict the sentiment of emoticons in Twitter.",
"To create our test sets, we first split UniSent and our gold standard lexica as illustrated in Figure FIGREF11 . We then form our training and test sets as follows:",
"(i) UniSent-Lexicon: we use words in UniSent for the sentiment learning in the target domain; for this purpose, we use words INLINEFORM0 .",
"(ii) Baseline-Lexicon: we use words in the gold standard lexicon for the sentiment learning in the target domain; for this purpose we use words INLINEFORM0 .",
"(iii) Evaluation-Lexicon: we randomly exclude a set of words the baseline-lexicon INLINEFORM0 . In selection of the sampling size we make sure that INLINEFORM1 and INLINEFORM2 would contain a comparable number of words.",
""
],
[
"In Table TABREF13 we compare the quality of UniSent with the Baseline-Lexicon as well as with the gold standard lexicon for general domain data. The results show that (i) UniSent clearly outperforms the baseline for all languages (ii) the quality of UniSent is close to manually annotated data (iii) the domain adaptation method brings small improvements for morphologically poor languages. The modest gains could be because our drift weighting method (Section SECREF3 ) mainly models a sense shift between words which is not always equivalent to a polarity shift.",
"In Table TABREF14 we compare the quality of UniSent with the gold standard emoticon lexicon in the Twitter domain. The results show that (i) UniSent clearly outperforms the baseline and (ii) our domain adaptation technique brings small improvements for French and Spanish."
],
[
"Using our novel Adapted Sentiment Pivot method, we created UniSent, a sentiment lexicon covering over 1000 (including many low-resource) languages in several domains. The only necessary resources to create UniSent are a sentiment lexicon in any language and a massively parallel corpus that can be small and domain specific. Our evaluation showed that the quality of UniSent is closed to manually annotated resources.",
" "
]
]
} |
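The experimental setup above trains on UniSent words and tests on held-out gold-lexicon words, with each word represented in the target-domain embedding space. The sketch below shows one plausible realization of that protocol using a logistic-regression classifier and macro-F1; the classifier choice and data handling are assumptions, not the authors' exact pipeline.

```python
# Sketch of the lexicon-based evaluation protocol: words from UniSent serve as
# training examples, held-out gold-lexicon words as test examples, and each
# word is represented by its target-domain (e.g., Wikipedia) embedding.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, f1_score

def evaluate(embeddings, train_lexicon, test_lexicon):
    """embeddings: dict word -> vector; lexica: dict word -> polarity (0/1)."""
    def to_xy(lexicon):
        words = [w for w in lexicon if w in embeddings]
        return np.stack([embeddings[w] for w in words]), np.array([lexicon[w] for w in words])

    X_tr, y_tr = to_xy(train_lexicon)
    X_te, y_te = to_xy(test_lexicon)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    pred = clf.predict(X_te)
    return accuracy_score(y_te, pred), f1_score(y_te, pred, average="macro")
```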
{
"question": [
"how is quality measured?",
"how many languages exactly is the sentiment lexica for?",
"what sentiment sources do they compare with?"
],
"question_id": [
"8f87215f4709ee1eb9ddcc7900c6c054c970160b",
"b04098f7507efdffcbabd600391ef32318da28b3",
"8fc14714eb83817341ada708b9a0b6b4c6ab5023"
],
"nlp_background": [
"",
"",
""
],
"topic_background": [
"",
"",
""
],
"paper_read": [
"",
"",
""
],
"search_query": [
"",
"",
""
],
"question_writer": [
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "Accuracy and the macro-F1 (averaged F1 over positive and negative classes) are used as a measure of quality.",
"evidence": [
"FLOAT SELECTED: Table 1: Comparison of manually created lexicon performance with UniSent in Czech, German, French, Macedonians, and Spanish. We report accuracy and the macro-F1 (averaged F1 over positive and negative classes). The baseline is constantly considering the majority label. The last two columns indicate the performance of UniSent after drift weighting."
],
"highlighted_evidence": [
"FLOAT SELECTED: Table 1: Comparison of manually created lexicon performance with UniSent in Czech, German, French, Macedonians, and Spanish. We report accuracy and the macro-F1 (averaged F1 over positive and negative classes). The baseline is constantly considering the majority label. The last two columns indicate the performance of UniSent after drift weighting."
]
},
{
"unanswerable": true,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "",
"evidence": [],
"highlighted_evidence": []
}
],
"annotation_id": [
"97009bed24107de806232d7cf069f51053d7ba5e",
"e38ed05ec140abd97006a8fa7af9a7b4930247df"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"answer": [
{
"unanswerable": true,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "",
"evidence": [],
"highlighted_evidence": []
}
],
"annotation_id": [
"d1204f71bd3c78a11b133016f54de78e8eaecf6e"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"manually created lexicon in Czech BIBREF11 , German BIBREF12 , French BIBREF13 , Macedonian BIBREF14 , and Spanish BIBREF15"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"As the gold standard sentiment lexica, we chose manually created lexicon in Czech BIBREF11 , German BIBREF12 , French BIBREF13 , Macedonian BIBREF14 , and Spanish BIBREF15 . These lexica contain general domain words (as opposed to Twitter or Bible). As gold standard for twitter domain we use emoticon dataset and perform emoticon sentiment prediction BIBREF16 , BIBREF17 ."
],
"highlighted_evidence": [
"As the gold standard sentiment lexica, we chose manually created lexicon in Czech BIBREF11 , German BIBREF12 , French BIBREF13 , Macedonian BIBREF14 , and Spanish BIBREF15 ."
]
}
],
"annotation_id": [
"17db53c0c6f13fe1d43eee276a9554677f007eef"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
]
} |
{
"caption": [
"Figure 1: Neighbors of word ’sensual’ in Spanish, in bible embedding graph (a) and twitter embedding graph (b). Our unsupervised drift weighting method found this word in Spanish to be the most changing word from bible context to the twitter context. Looking more closely at the neighbors, the word sensual in the biblical context has been associated with a negative sentiment of sins. However, in the twitter domain, it has a positive sentiment. This example shows how our unsupervised method can improve the quality of sentiment lexicon.",
"Figure 2: Data split used in the experimental setup of UniSent evaluation: Set (C) is the intersection of the target embedding space words (Wikipedia/Emoticon) and the UniSent lexicon as well as the manually created lexicon. Set (A) is the intersection of the target embedding space words and the UniSent lexicon, excluding set (C). Set (B) is the intersection of the target embedding space words and the manually created lexicon, excluding set (C).",
"Table 1: Comparison of manually created lexicon performance with UniSent in Czech, German, French, Macedonians, and Spanish. We report accuracy and the macro-F1 (averaged F1 over positive and negative classes). The baseline is constantly considering the majority label. The last two columns indicate the performance of UniSent after drift weighting.",
"Table 2: Comparison of domain adapted and vanilla UniSent for Emoticon sentiment prediction using monlingual twitter embeddings in German, Italian, French, and Spanish."
],
"file": [
"3-Figure1-1.png",
"3-Figure2-1.png",
"4-Table1-1.png",
"4-Table2-1.png"
]
} |
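Figure 1 motivates the drift weighting step by contrasting a word's neighbors in the Bible and Twitter embedding graphs. The snippet below sketches one simple way to turn that intuition into a score, via the overlap of nearest-neighbor sets in the two spaces; it only approximates the idea and is not necessarily the paper's exact drift-weighting formula.

```python
# Neighbor-overlap drift score in the spirit of Figure 1: a word whose nearest
# neighbors differ strongly between the source-domain (Bible) and target-domain
# (Twitter) embedding graphs receives a high drift value. kv_source and
# kv_target are gensim KeyedVectors for the two domains.
def drift_score(word, kv_source, kv_target, k=10):
    if word not in kv_source or word not in kv_target:
        return None
    src = {w for w, _ in kv_source.most_similar(word, topn=k)}
    tgt = {w for w, _ in kv_target.most_similar(word, topn=k)}
    overlap = len(src & tgt) / k
    return 1.0 - overlap  # 0 = stable meaning, 1 = maximal drift
```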
1910.03042
|
Gunrock: A Social Bot for Complex and Engaging Long Conversations
|
Gunrock is the winner of the 2018 Amazon Alexa Prize, as evaluated by coherence and engagement from both real users and Amazon-selected expert conversationalists. We focus on understanding complex sentences and having in-depth conversations in open domains. In this paper, we introduce some innovative system designs and related validation analysis. Overall, we found that users produce longer sentences to Gunrock, which are directly related to users' engagement (e.g., ratings, number of turns). Additionally, users' backstory queries about Gunrock are positively correlated to user satisfaction. Finally, we found dialog flows that interleave facts and personal opinions and stories lead to better user satisfaction.
|
{
"section_name": [
"Introduction",
"System Architecture",
"System Architecture ::: Automatic Speech Recognition",
"System Architecture ::: Natural Language Understanding",
"System Architecture ::: Dialog Manager",
"System Architecture ::: Knowledge Databases",
"System Architecture ::: Natural Language Generation",
"System Architecture ::: Text To Speech",
"Analysis",
"Analysis ::: Response Depth: Mean Word Count",
"Analysis ::: Gunrock's Backstory and Persona",
"Analysis ::: Interleaving Personal and Factual Information: Animal Module",
"Conclusion",
"Acknowledgments"
],
"paragraphs": [
[
"Amazon Alexa Prize BIBREF0 provides a platform to collect real human-machine conversation data and evaluate performance on speech-based social conversational systems. Our system, Gunrock BIBREF1 addresses several limitations of prior chatbots BIBREF2, BIBREF3, BIBREF4 including inconsistency and difficulty in complex sentence understanding (e.g., long utterances) and provides several contributions: First, Gunrock's multi-step language understanding modules enable the system to provide more useful information to the dialog manager, including a novel dialog act scheme. Additionally, the natural language understanding (NLU) module can handle more complex sentences, including those with coreference. Second, Gunrock interleaves actions to elicit users' opinions and provide responses to create an in-depth, engaging conversation; while a related strategy to interleave task- and non-task functions in chatbots has been proposed BIBREF5, no chatbots to our knowledge have employed a fact/opinion interleaving strategy. Finally, we use an extensive persona database to provide coherent profile information, a critical challenge in building social chatbots BIBREF3. Compared to previous systems BIBREF4, Gunrock generates more balanced conversations between human and machine by encouraging and understanding more human inputs (see Table TABREF2 for an example)."
],
[
"Figure FIGREF3 provides an overview of Gunrock's architecture. We extend the Amazon Conversational Bot Toolkit (CoBot) BIBREF6 which is a flexible event-driven framework. CoBot provides ASR results and natural language processing pipelines through the Alexa Skills Kit (ASK) BIBREF7. Gunrock corrects ASR according to the context (asr) and creates a natural language understanding (NLU) (nlu) module where multiple components analyze the user utterances. A dialog manager (DM) (dm) uses features from NLU to select topic dialog modules and defines an individual dialog flow. Each dialog module leverages several knowledge bases (knowledge). Then a natural language generation (NLG) (nlg) module generates a corresponding response. Finally, we markup the synthesized responses and return to the users through text to speech (TTS) (tts). While we provide an overview of the system in the following sections, for detailed system implementation details, please see the technical report BIBREF1."
],
[
"Gunrock receives ASR results with the raw text and timestep information for each word in the sequence (without case information and punctuation). Keywords, especially named entities such as movie names, are prone to generate ASR errors without contextual information, but are essential for NLU and NLG. Therefore, Gunrock uses domain knowledge to correct these errors by comparing noun phrases to a knowledge base (e.g. a list of the most popular movies names) based on their phonetic information. We extract the primary and secondary code using The Double Metaphone Search Algorithm BIBREF8 for noun phrases (extracted by noun trunks) and the selected knowledge base, and suggest a potential fix by code matching. An example can be seen in User_3 and Gunrock_3 in Table TABREF2."
],
[
"Gunrock is designed to engage users in deeper conversation; accordingly, a user utterance can consist of multiple units with complete semantic meanings. We first split the corrected raw ASR text into sentences by inserting break tokens. An example is shown in User_3 in Table TABREF2. Meanwhile, we mask named entities before segmentation so that a named entity will not be segmented into multiple parts and an utterance with a complete meaning is maintained (e.g.,“i like the movie a star is born\"). We also leverage timestep information to filter out false positive corrections. After segmentation, our coreference implementation leverages entity knowledge (such as person versus event) and replaces nouns with their actual reference by entity ranking. We implement coreference resolution on entities both within segments in a single turn as well as across multiple turns. For instance, “him\" in the last segment in User_5 is replaced with “bradley cooper\" in Table TABREF2. Next, we use a constituency parser to generate noun phrases from each modified segment. Within the sequence pipeline to generate complete segments, Gunrock detects (1) topic, (2) named entities, and (3) sentiment using ASK in parallel. The NLU module uses knowledge graphs including Google Knowledge Graph to call for a detailed description of each noun phrase for understanding.",
"In order to extract the intent for each segment, we designed MIDAS, a human-machine dialog act scheme with 23 tags and implemented a multi-label dialog act classification model using contextual information BIBREF9. Next, the NLU components analyzed on each segment in a user utterance are sent to the DM and NLG module for state tracking and generation, respectively."
],
[
"We implemented a hierarchical dialog manager, consisting of a high level and low level DMs. The former leverages NLU outputs for each segment and selects the most important segment for the system as the central element using heuristics. For example, “i just finished reading harry potter,\" triggers Sub-DM: Books. Utilizing the central element and features extracted from NLU, input utterances are mapped onto 11 possible topic dialog modules (e.g., movies, books, animals, etc.), including a backup module, retrieval.",
"Low level dialog management is handled by the separate topic dialog modules, which use modular finite state transducers to execute various dialog segments processed by the NLU. Using topic-specific modules enables deeper conversations that maintain the context. We design dialog flows in each of the finite state machines, as well. Dialog flow is determined by rule-based transitions between a specified fixed set of dialog states. To ensure that our states and transitions are effective, we leverage large scale user data to find high probability responses and high priority responses to handle in different contexts. Meanwhile, dialog flow is customized to each user by tracking user attributes as dialog context. In addition, each dialog flow is adaptive to user responses to show acknowledgement and understanding (e.g., talking about pet ownership in the animal module). Based on the user responses, many dialog flow variations exist to provide a fresh experience each time. This reduces the feeling of dialogs being scripted and repetitive. Our dialog flows additionally interleave facts, opinions, experiences, and questions to make the conversation flexible and interesting.",
"In the meantime, we consider feedback signals such as “continue\" and “stop\" from the current topic dialog module, indicating whether it is able to respond to the following request in the dialog flow, in order to select the best response module. Additionally, in all modules we allow mixed-initiative interactions; users can trigger a new dialog module when they want to switch topics while in any state. For example, users can start a new conversation about movies from any other topic module."
],
[
"All topic dialog modules query knowledge bases to provide information to the user. To respond to general factual questions, Gunrock queries the EVI factual database , as well as other up-to-date scraped information appropriate for the submodule, such as news and current showing movies in a specific location from databases including IMDB. One contribution of Gunrock is the extensive Gunrock Persona Backstory database, consisting of over 1,000 responses to possible questions for Gunrock as well as reasoning for her responses for roughly 250 questions (see Table 2). We designed the system responses to elicit a consistent personality within and across modules, modeled as a female individual who is positive, outgoing, and is interested in science and technology."
],
[
"In order to avoid repetitive and non-specific responses commonly seen in dialog systems BIBREF10, Gunrock uses a template manager to select from a handcrafted response templates based on the dialog state. One dialog state can map to multiple response templates with similar semantic or functional content but differing surface forms. Among these response templates for the same dialog state, one is randomly selected without repetition to provide variety unless all have been exhausted. When a response template is selected, any slots are substituted with actual contents, including queried information for news and specific data for weather. For example, to ground a movie name due to ASR errors or multiple versions, one template is “Are you talking about {movie_title} released in {release_year} starring {actor_name} as {actor_role}?\". Module-specific templates were generated for each topic (e.g., animals), but some of the templates are generalizable across different modules (e.g., “What’s your favorite [movie $|$ book $|$ place to visit]?\")",
"In many cases, response templates corresponding to different dialog acts are dynamically composed to give the final response. For example, an appropriate acknowledgement for the user’s response can be combined with a predetermined follow-up question."
],
[
"After NLG, we adjust the TTS of the system to improve the expressiveness of the voice to convey that the system is an engaged and active participant in the conversation. We use a rule-based system to systematically add interjections, specifically Alexa Speechcons, and fillers to approximate human-like cognitive-emotional expression BIBREF11. For more on the framework and analysis of the TTS modifications, see BIBREF12."
],
[
"From January 5, 2019 to March 5, 2019, we collected conversational data for Gunrock. During this time, no other code updates occurred. We analyzed conversations for Gunrock with at least 3 user turns to avoid conversations triggered by accident. Overall, this resulted in a total of 34,432 user conversations. Together, these users gave Gunrock an average rating of 3.65 (median: 4.0), which was elicited at the end of the conversation (“On a scale from 1 to 5 stars, how do you feel about talking to this socialbot again?\"). Users engaged with Gunrock for an average of 20.92 overall turns (median 13.0), with an average of 6.98 words per utterance, and had an average conversation time of 7.33 minutes (median: 2.87 min.). We conducted three principal analyses: users' response depth (wordcount), backstory queries (backstorypersona), and interleaving of personal and factual responses (pets)."
],
[
"Two unique features of Gunrock are its ability to dissect longer, complex sentences, and its methods to encourage users to be active conversationalists, elaborating on their responses. In prior work, even if users are able to drive the conversation, often bots use simple yes/no questions to control the conversational flow to improve understanding; as a result, users are more passive interlocutors in the conversation. We aimed to improve user engagement by designing the conversation to have more open-ended opinion/personal questions, and show that the system can understand the users' complex utterances (See nlu for details on NLU). Accordingly, we ask if users' speech behavior will reflect Gunrock's technical capability and conversational strategy, producing longer sentences.",
"We assessed the degree of conversational depth by measuring users' mean word count. Prior work has found that an increase in word count has been linked to improved user engagement (e.g., in a social dialog system BIBREF13). For each user conversation, we extracted the overall rating, the number of turns of the interaction, and the user's per-utterance word count (averaged across all utterances). We modeled the relationship between word count and the two metrics of user engagement (overall rating, mean number of turns) in separate linear regressions.",
"Results showed that users who, on average, produced utterances with more words gave significantly higher ratings ($\\beta $=0.01, SE=0.002, t=4.79, p$<$0.001)(see Figure 2) and engaged with Gunrock for significantly greater number of turns ($\\beta $=1.85, SE=0.05, t=35.58, p$<$0.001) (see Figure 2). These results can be interpreted as evidence for Gunrock's ability to handle complex sentences, where users are not constrained to simple responses to be understood and feel engaged in the conversation – and evidence that individuals are more satisfied with the conversation when they take a more active role, rather than the system dominating the dialog. On the other hand, another interpretation is that users who are more talkative may enjoy talking to the bot in general, and thus give higher ratings in tandem with higher average word counts."
],
[
"We assessed the user's interest in Gunrock by tagging instances where the user triggered Gunrock's backstory (e.g., “What's your favorite color?\"). For users with at least one backstory question, we modeled overall (log) Rating with a linear regression by the (log) `Number of Backstory Questions Asked' (log transformed due to the variables' nonlinear relationship). We hypothesized that users who show greater curiosity about Gunrock will display higher overall ratings for the conversation based on her responses. Overall, the number of times users queried Gunrock's backstory was strongly related to the rating they gave at the end of the interaction (log:$\\beta $=0.10, SE=0.002, t=58.4, p$<$0.001)(see Figure 3). This suggests that maintaining a consistent personality — and having enough responses to questions the users are interested in — may improve user satisfaction."
],
[
"Gunrock includes a specific topic module on animals, which includes a factual component where the system provides animal facts, as well as a more personalized component about pets. Our system is designed to engage users about animals in a more casual conversational style BIBREF14, eliciting follow-up questions if the user indicates they have a pet; if we are able to extract the pet's name, we refer to it in the conversation (e.g., “Oliver is a great name for a cat!\", “How long have you had Oliver?\"). In cases where the user does not indicate that they have a pet, the system solely provides animal facts. Therefore, the animal module can serve as a test of our interleaving strategy: we hypothesized that combining facts and personal questions — in this case about the user's pet — would lead to greater user satisfaction overall.",
"We extracted conversations where Gunrock asked the user if they had ever had a pet and categorized responses as “Yes\", “No\", or “NA\" (if users did not respond with an affirmative or negative response). We modeled user rating with a linear regression model, with predictor of “Has Pet' (2 levels: Yes, No). We found that users who talked to Gunrock about their pet showed significantly higher overall ratings of the conversation ($\\beta $=0.15, SE=0.06, t=2.53, p$=$0.016) (see Figure 4). One interpretation is that interleaving factual information with more in-depth questions about their pet result in improved user experience. Yet, another interpretation is that pet owners may be more friendly and amenable to a socialbot; for example, prior research has linked differences in personality to pet ownership BIBREF15."
],
[
"Gunrock is a social chatbot that focuses on having long and engaging speech-based conversations with thousands of real users. Accordingly, our architecture employs specific modules to handle longer and complex utterances and encourages users to be more active in a conversation. Analysis shows that users' speech behavior reflects these capabilities. Longer sentences and more questions about Gunrocks's backstory positively correlate with user experience. Additionally, we find evidence for interleaved dialog flow, where combining factual information with personal opinions and stories improve user satisfaction. Overall, this work has practical applications, in applying these design principles to other social chatbots, as well as theoretical implications, in terms of the nature of human-computer interaction (cf. 'Computers are Social Actors' BIBREF16). Our results suggest that users are engaging with Gunrock in similar ways to other humans: in chitchat about general topics (e.g., animals, movies, etc.), taking interest in Gunrock's backstory and persona, and even producing more information about themselves in return."
],
[
"We would like to acknowledge the help from Amazon in terms of financial and technical support."
]
]
} |
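Gunrock's ASR module corrects noun phrases by matching their Double Metaphone codes against a domain knowledge base. The sketch below illustrates that matching step with the open-source metaphone package and a toy movie-title knowledge base; the package choice, the matching rule, and the titles are assumptions rather than Gunrock's production implementation.

```python
# Sketch of the phonetic ASR correction described in the ASR section: noun
# phrases from the ASR output are compared to a domain knowledge base (e.g.,
# popular movie titles) by their Double Metaphone codes, and a fix is suggested
# when the codes match. Matching rule and knowledge base are illustrative.
from metaphone import doublemetaphone

MOVIE_KB = ["a star is born", "black panther", "bohemian rhapsody"]
KB_CODES = {title: doublemetaphone(title) for title in MOVIE_KB}

def correct_phrase(asr_phrase):
    primary, secondary = doublemetaphone(asr_phrase)
    for title, (kb_primary, kb_secondary) in KB_CODES.items():
        # suggest the knowledge-base entry when primary or secondary codes match
        if primary == kb_primary or (secondary and secondary == kb_secondary):
            return title
    return asr_phrase  # no confident correction found

print(correct_phrase("a star is borne"))
```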
{
"question": [
"What is the sample size of people used to measure user satisfaction?",
"What are all the metrics to measure user engagement?",
"What the system designs introduced?",
"Do they specify the model they use for Gunrock?",
"Do they gather explicit user satisfaction data on Gunrock?",
"How do they correlate user backstory queries to user satisfaction?"
],
"question_id": [
"44c7c1fbac80eaea736622913d65fe6453d72828",
"3e0c9469821cb01a75e1818f2acb668d071fcf40",
"a725246bac4625e6fe99ea236a96ccb21b5f30c6",
"516626825e51ca1e8a3e0ac896c538c9d8a747c8",
"77af93200138f46bb178c02f710944a01ed86481",
"71538776757a32eee930d297f6667cd0ec2e9231"
],
"nlp_background": [
"infinity",
"infinity",
"infinity",
"infinity",
"infinity",
"infinity"
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar",
"unfamiliar"
],
"paper_read": [
"no",
"no",
"no",
"no",
"no",
"no"
],
"search_query": [
"",
"",
"",
"",
"",
""
],
"question_writer": [
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7",
"2cfd959e433f290bb50b55722370f0d22fe090b7"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"34,432 user conversations"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"From January 5, 2019 to March 5, 2019, we collected conversational data for Gunrock. During this time, no other code updates occurred. We analyzed conversations for Gunrock with at least 3 user turns to avoid conversations triggered by accident. Overall, this resulted in a total of 34,432 user conversations. Together, these users gave Gunrock an average rating of 3.65 (median: 4.0), which was elicited at the end of the conversation (“On a scale from 1 to 5 stars, how do you feel about talking to this socialbot again?\"). Users engaged with Gunrock for an average of 20.92 overall turns (median 13.0), with an average of 6.98 words per utterance, and had an average conversation time of 7.33 minutes (median: 2.87 min.). We conducted three principal analyses: users' response depth (wordcount), backstory queries (backstorypersona), and interleaving of personal and factual responses (pets)."
],
"highlighted_evidence": [
" Overall, this resulted in a total of 34,432 user conversations. Together, these users gave Gunrock an average rating of 3.65 (median: 4.0), which was elicited at the end of the conversation (“On a scale from 1 to 5 stars, how do you feel about talking to this socialbot again?\")."
]
},
{
"unanswerable": false,
"extractive_spans": [
"34,432 "
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"Amazon Alexa Prize BIBREF0 provides a platform to collect real human-machine conversation data and evaluate performance on speech-based social conversational systems. Our system, Gunrock BIBREF1 addresses several limitations of prior chatbots BIBREF2, BIBREF3, BIBREF4 including inconsistency and difficulty in complex sentence understanding (e.g., long utterances) and provides several contributions: First, Gunrock's multi-step language understanding modules enable the system to provide more useful information to the dialog manager, including a novel dialog act scheme. Additionally, the natural language understanding (NLU) module can handle more complex sentences, including those with coreference. Second, Gunrock interleaves actions to elicit users' opinions and provide responses to create an in-depth, engaging conversation; while a related strategy to interleave task- and non-task functions in chatbots has been proposed BIBREF5, no chatbots to our knowledge have employed a fact/opinion interleaving strategy. Finally, we use an extensive persona database to provide coherent profile information, a critical challenge in building social chatbots BIBREF3. Compared to previous systems BIBREF4, Gunrock generates more balanced conversations between human and machine by encouraging and understanding more human inputs (see Table TABREF2 for an example).",
"From January 5, 2019 to March 5, 2019, we collected conversational data for Gunrock. During this time, no other code updates occurred. We analyzed conversations for Gunrock with at least 3 user turns to avoid conversations triggered by accident. Overall, this resulted in a total of 34,432 user conversations. Together, these users gave Gunrock an average rating of 3.65 (median: 4.0), which was elicited at the end of the conversation (“On a scale from 1 to 5 stars, how do you feel about talking to this socialbot again?\"). Users engaged with Gunrock for an average of 20.92 overall turns (median 13.0), with an average of 6.98 words per utterance, and had an average conversation time of 7.33 minutes (median: 2.87 min.). We conducted three principal analyses: users' response depth (wordcount), backstory queries (backstorypersona), and interleaving of personal and factual responses (pets)."
],
"highlighted_evidence": [
"Amazon Alexa Prize BIBREF0 provides a platform to collect real human-machine conversation data and evaluate performance on speech-based social conversational systems. Our system, Gunrock BIBREF1 addresses several limitations of prior chatbots BIBREF2, BIBREF3, BIBREF4 including inconsistency and difficulty in complex sentence understanding (e.g., long utterances) and provides several contributions: First, Gunrock's multi-step language understanding modules enable the system to provide more useful information to the dialog manager, including a novel dialog act scheme. ",
"From January 5, 2019 to March 5, 2019, we collected conversational data for Gunrock.",
"We analyzed conversations for Gunrock with at least 3 user turns to avoid conversations triggered by accident. Overall, this resulted in a total of 34,432 user conversations."
]
}
],
"annotation_id": [
"a7ea8bc335b1a8d974c2b6a518d4efb4b9905549",
"b9f1ba799b2d213f5d7ce0b1e03adcac6ad30772"
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"overall rating",
"mean number of turns"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"We assessed the degree of conversational depth by measuring users' mean word count. Prior work has found that an increase in word count has been linked to improved user engagement (e.g., in a social dialog system BIBREF13). For each user conversation, we extracted the overall rating, the number of turns of the interaction, and the user's per-utterance word count (averaged across all utterances). We modeled the relationship between word count and the two metrics of user engagement (overall rating, mean number of turns) in separate linear regressions."
],
"highlighted_evidence": [
" We modeled the relationship between word count and the two metrics of user engagement (overall rating, mean number of turns) in separate linear regressions."
]
},
{
"unanswerable": false,
"extractive_spans": [
"overall rating",
"mean number of turns"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"We assessed the degree of conversational depth by measuring users' mean word count. Prior work has found that an increase in word count has been linked to improved user engagement (e.g., in a social dialog system BIBREF13). For each user conversation, we extracted the overall rating, the number of turns of the interaction, and the user's per-utterance word count (averaged across all utterances). We modeled the relationship between word count and the two metrics of user engagement (overall rating, mean number of turns) in separate linear regressions."
],
"highlighted_evidence": [
"We assessed the degree of conversational depth by measuring users' mean word count. Prior work has found that an increase in word count has been linked to improved user engagement (e.g., in a social dialog system BIBREF13). For each user conversation, we extracted the overall rating, the number of turns of the interaction, and the user's per-utterance word count (averaged across all utterances). We modeled the relationship between word count and the two metrics of user engagement (overall rating, mean number of turns) in separate linear regressions."
]
}
],
"annotation_id": [
"430a57dc6dc6a57617791e25e886c1b8d5ad6c35",
"ea5628650f48b7c9dac7c9255f29313a794748e0"
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"Amazon Conversational Bot Toolkit",
"natural language understanding (NLU) (nlu) module",
"dialog manager",
"knowledge bases",
"natural language generation (NLG) (nlg) module",
"text to speech (TTS) (tts)"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"Figure FIGREF3 provides an overview of Gunrock's architecture. We extend the Amazon Conversational Bot Toolkit (CoBot) BIBREF6 which is a flexible event-driven framework. CoBot provides ASR results and natural language processing pipelines through the Alexa Skills Kit (ASK) BIBREF7. Gunrock corrects ASR according to the context (asr) and creates a natural language understanding (NLU) (nlu) module where multiple components analyze the user utterances. A dialog manager (DM) (dm) uses features from NLU to select topic dialog modules and defines an individual dialog flow. Each dialog module leverages several knowledge bases (knowledge). Then a natural language generation (NLG) (nlg) module generates a corresponding response. Finally, we markup the synthesized responses and return to the users through text to speech (TTS) (tts). While we provide an overview of the system in the following sections, for detailed system implementation details, please see the technical report BIBREF1."
],
"highlighted_evidence": [
"We extend the Amazon Conversational Bot Toolkit (CoBot) BIBREF6 which is a flexible event-driven framework. CoBot provides ASR results and natural language processing pipelines through the Alexa Skills Kit (ASK) BIBREF7. Gunrock corrects ASR according to the context (asr) and creates a natural language understanding (NLU) (nlu) module where multiple components analyze the user utterances. A dialog manager (DM) (dm) uses features from NLU to select topic dialog modules and defines an individual dialog flow. Each dialog module leverages several knowledge bases (knowledge). Then a natural language generation (NLG) (nlg) module generates a corresponding response. Finally, we markup the synthesized responses and return to the users through text to speech (TTS) (tts)."
]
}
],
"annotation_id": [
"7196fa2dc147c614e3dce0521e0ec664d2962f6f"
],
"worker_id": [
"71f73551e7aabf873649e8fe97aefc54e6dd14f8"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": false,
"free_form_answer": "",
"evidence": [
"Figure FIGREF3 provides an overview of Gunrock's architecture. We extend the Amazon Conversational Bot Toolkit (CoBot) BIBREF6 which is a flexible event-driven framework. CoBot provides ASR results and natural language processing pipelines through the Alexa Skills Kit (ASK) BIBREF7. Gunrock corrects ASR according to the context (asr) and creates a natural language understanding (NLU) (nlu) module where multiple components analyze the user utterances. A dialog manager (DM) (dm) uses features from NLU to select topic dialog modules and defines an individual dialog flow. Each dialog module leverages several knowledge bases (knowledge). Then a natural language generation (NLG) (nlg) module generates a corresponding response. Finally, we markup the synthesized responses and return to the users through text to speech (TTS) (tts). While we provide an overview of the system in the following sections, for detailed system implementation details, please see the technical report BIBREF1."
],
"highlighted_evidence": [
"We extend the Amazon Conversational Bot Toolkit (CoBot) BIBREF6 which is a flexible event-driven framework. CoBot provides ASR results and natural language processing pipelines through the Alexa Skills Kit (ASK) BIBREF7. Gunrock corrects ASR according to the context (asr) and creates a natural language understanding (NLU) (nlu) module where multiple components analyze the user utterances. A dialog manager (DM) (dm) uses features from NLU to select topic dialog modules and defines an individual dialog flow. Each dialog module leverages several knowledge bases (knowledge). Then a natural language generation (NLG) (nlg) module generates a corresponding response.",
"While we provide an overview of the system in the following sections, for detailed system implementation details, please see the technical report BIBREF1."
]
}
],
"annotation_id": [
"88ef01edfa9b349e03b234f049663bd35c911e3b"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": true,
"free_form_answer": "",
"evidence": [
"From January 5, 2019 to March 5, 2019, we collected conversational data for Gunrock. During this time, no other code updates occurred. We analyzed conversations for Gunrock with at least 3 user turns to avoid conversations triggered by accident. Overall, this resulted in a total of 34,432 user conversations. Together, these users gave Gunrock an average rating of 3.65 (median: 4.0), which was elicited at the end of the conversation (“On a scale from 1 to 5 stars, how do you feel about talking to this socialbot again?\"). Users engaged with Gunrock for an average of 20.92 overall turns (median 13.0), with an average of 6.98 words per utterance, and had an average conversation time of 7.33 minutes (median: 2.87 min.). We conducted three principal analyses: users' response depth (wordcount), backstory queries (backstorypersona), and interleaving of personal and factual responses (pets)."
],
"highlighted_evidence": [
"Together, these users gave Gunrock an average rating of 3.65 (median: 4.0), which was elicited at the end of the conversation (“On a scale from 1 to 5 stars, how do you feel about talking to this socialbot again?\")."
]
}
],
"annotation_id": [
"20c1065f9d96bb413f4d24665d0d30692ad2ded6"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"modeled the relationship between word count and the two metrics of user engagement (overall rating, mean number of turns) in separate linear regressions"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"We assessed the degree of conversational depth by measuring users' mean word count. Prior work has found that an increase in word count has been linked to improved user engagement (e.g., in a social dialog system BIBREF13). For each user conversation, we extracted the overall rating, the number of turns of the interaction, and the user's per-utterance word count (averaged across all utterances). We modeled the relationship between word count and the two metrics of user engagement (overall rating, mean number of turns) in separate linear regressions.",
"Results showed that users who, on average, produced utterances with more words gave significantly higher ratings ($\\beta $=0.01, SE=0.002, t=4.79, p$<$0.001)(see Figure 2) and engaged with Gunrock for significantly greater number of turns ($\\beta $=1.85, SE=0.05, t=35.58, p$<$0.001) (see Figure 2). These results can be interpreted as evidence for Gunrock's ability to handle complex sentences, where users are not constrained to simple responses to be understood and feel engaged in the conversation – and evidence that individuals are more satisfied with the conversation when they take a more active role, rather than the system dominating the dialog. On the other hand, another interpretation is that users who are more talkative may enjoy talking to the bot in general, and thus give higher ratings in tandem with higher average word counts."
],
"highlighted_evidence": [
"We modeled the relationship between word count and the two metrics of user engagement (overall rating, mean number of turns) in separate linear regressions.\n\nResults showed that users who, on average, produced utterances with more words gave significantly higher ratings ($\\beta $=0.01, SE=0.002, t=4.79, p$<$0.001)(see Figure 2) and engaged with Gunrock for significantly greater number of turns ($\\beta $=1.85, SE=0.05, t=35.58, p$<$0.001) (see Figure 2). These results can be interpreted as evidence for Gunrock's ability to handle complex sentences, where users are not constrained to simple responses to be understood and feel engaged in the conversation – and evidence that individuals are more satisfied with the conversation when they take a more active role, rather than the system dominating the dialog. On the other hand, another interpretation is that users who are more talkative may enjoy talking to the bot in general, and thus give higher ratings in tandem with higher average word counts."
]
}
],
"annotation_id": [
"9766d4b4b1500c83da733bd582476733ecd100ce"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
]
} |
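The regression analysis quoted above (per-utterance word count predicting overall rating and number of turns) amounts to two simple ordinary-least-squares fits. A minimal sketch follows; the toy data frame and its column names (`mean_word_count`, `rating`, `num_turns`) are assumptions, not the actual Gunrock conversation logs.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical conversation-level records; the real Gunrock log schema is not shown.
df = pd.DataFrame({
    "mean_word_count": [3.2, 5.1, 7.8, 4.4, 9.0, 6.3],
    "rating":          [2.0, 3.0, 5.0, 4.0, 5.0, 4.0],
    "num_turns":       [8,   12,  25,  10,  30,  18],
})

X = sm.add_constant(df["mean_word_count"])  # intercept + word-count predictor

# Two separate simple linear regressions, one per engagement metric,
# mirroring the analysis described above (beta, SE, t, p appear in .summary()).
rating_model = sm.OLS(df["rating"], X).fit()
turns_model = sm.OLS(df["num_turns"], X).fit()

print(rating_model.summary())
print(turns_model.summary())
```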
{
"caption": [
"Figure 1: Gunrock system architecture",
"Figure 2: Mean user rating by mean number of words. Error bars show standard error.",
"Figure 3: Mean user rating based on number of queries to Gunrock’s backstory. Error bars show standard error.",
"Figure 4: Mean user rating based ’Has Pet’. Error bars show standard error."
],
"file": [
"3-Figure1-1.png",
"4-Figure2-1.png",
"5-Figure3-1.png",
"5-Figure4-1.png"
]
} |
2002.06644
|
Towards Detection of Subjective Bias using Contextualized Word Embeddings
|
Subjective bias detection is critical for applications like propaganda detection, content recommendation, sentiment analysis, and bias neutralization. This bias is introduced in natural language via inflammatory words and phrases, casting doubt over facts, and presupposing the truth. In this work, we perform comprehensive experiments for detecting subjective bias using BERT-based models on the Wiki Neutrality Corpus(WNC). The dataset consists of $360k$ labeled instances, from Wikipedia edits that remove various instances of the bias. We further propose BERT-based ensembles that outperform state-of-the-art methods like $BERT_{large}$ by a margin of $5.6$ F1 score.
|
{
"section_name": [
"Introduction",
"Baselines and Approach",
"Baselines and Approach ::: Baselines",
"Baselines and Approach ::: Proposed Approaches",
"Experiments ::: Dataset and Experimental Settings",
"Experiments ::: Experimental Results",
"Conclusion"
],
"paragraphs": [
[
"In natural language, subjectivity refers to the aspects of communication used to express opinions, evaluations, and speculationsBIBREF0, often influenced by one's emotional state and viewpoints. Writers and editors of texts like news and textbooks try to avoid the use of biased language, yet subjective bias is pervasive in these texts. More than $56\\%$ of Americans believe that news sources do not report the news objectively , thus implying the prevalence of the bias. Therefore, when presenting factual information, it becomes necessary to differentiate subjective language from objective language.",
"There has been considerable work on capturing subjectivity using text-classification models ranging from linguistic-feature-based modelsBIBREF1 to finetuned pre-trained word embeddings like BERTBIBREF2. The detection of bias-inducing words in a Wikipedia statement was explored in BIBREF1. The authors propose the \"Neutral Point of View\" (NPOV) corpus made using Wikipedia revision history, containing Wikipedia edits that are specifically designed to remove subjective bias. They use logistic regression with linguistic features, including factive verbs, hedges, and subjective intensifiers to detect bias-inducing words. In BIBREF2, the authors extend this work by mitigating subjective bias after detecting bias-inducing words using a BERT-based model. However, they primarily focused on detecting and mitigating subjective bias for single-word edits. We extend their work by incorporating multi-word edits by detecting bias at the sentence level. We further use their version of the NPOV corpus called Wiki Neutrality Corpus(WNC) for this work.",
"The task of detecting sentences containing subjective bias rather than individual words inducing the bias has been explored in BIBREF3. However, they conduct majority of their experiments in controlled settings, limiting the type of articles from which the revisions were extracted. Their attempt to test their models in a general setting is dwarfed by the fact that they used revisions from a single Wikipedia article resulting in just 100 instances to evaluate their proposed models robustly. Consequently, we perform our experiments in the complete WNC corpus, which consists of $423,823$ revisions in Wikipedia marked by its editors over a period of 15 years, to simulate a more general setting for the bias.",
"In this work, we investigate the application of BERT-based models for the task of subjective language detection. We explore various BERT-based models, including BERT, RoBERTa, ALBERT, with their base and large specifications along with their native classifiers. We propose an ensemble model exploiting predictions from these models using multiple ensembling techniques. We show that our model outperforms the baselines by a margin of $5.6$ of F1 score and $5.95\\%$ of Accuracy."
],
[
"In this section, we outline baseline models like $BERT_{large}$. We further propose three approaches: optimized BERT-based models, distilled pretrained models, and the use of ensemble methods for the task of subjectivity detection."
],
[
"FastTextBIBREF4: It uses bag of words and bag of n-grams as features for text classification, capturing partial information about the local word order efficiently.",
"BiLSTM: Unlike feedforward neural networks, recurrent neural networks like BiLSTMs use memory based on history information to learn long-distance features and then predict the output. We use a two-layer BiLSTM architecture with GloVe word embeddings as a strong RNN baseline.",
"BERT BIBREF5: It is a contextualized word representation model that uses bidirectional transformers, pretrained on a large $3.3B$ word corpus. We use the $BERT_{large}$ model finetuned on the training dataset."
],
[
"Optimized BERT-based models: We use BERT-based models optimized as in BIBREF6 and BIBREF7, pretrained on a dataset as large as twelve times as compared to $BERT_{large}$, with bigger batches, and longer sequences. ALBERT, introduced in BIBREF7, uses factorized embedding parameterization and cross-layer parameter sharing for parameter reduction. These optimizations have led both the models to outperform $BERT_{large}$ in various benchmarking tests, like GLUE for text classification and SQuAD for Question Answering.",
"Distilled BERT-based models: Secondly, we propose to use distilled BERT-based models, as introduced in BIBREF8. They are smaller general-purpose language representation model, pre-trained by leveraging distillation knowledge. This results in significantly smaller and faster models with performance comparable to their undistilled versions. We finetune these pretrained distilled models on the training corpus to efficiently detect subjectivity.",
"BERT-based ensemble models: Lastly, we use the weighted-average ensembling technique to exploit the predictions made by different variations of the above models. Ensembling methodology entails engendering a predictive model by utilizing predictions from multiple models in order to improve Accuracy and F1, decrease variance, and bias. We experiment with variations of $RoBERTa_{large}$, $ALBERT_{xxlarge.v2}$, $DistilRoBERTa$ and $BERT$ and outline selected combinations in tab:experimental-results."
],
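The weighted-average ensembling described in the paragraph above can be sketched as follows. The model weights and the toy probability arrays are placeholders; the paper does not report the exact weighting scheme.

```python
import numpy as np

def weighted_average_ensemble(prob_list, weights):
    """Combine per-model class probabilities by a weighted average.

    prob_list: list of (n_samples, n_classes) arrays, one per finetuned model
               (e.g. RoBERTa-large, ALBERT, DistilRoBERTa, BERT).
    weights:   one non-negative weight per model (normalized below).
    """
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    stacked = np.stack(prob_list, axis=0)           # (n_models, n, c)
    avg = np.tensordot(weights, stacked, axes=1)    # (n, c)
    return avg.argmax(axis=1), avg

# Toy example: two models scoring three sentences (neutral vs. subjective).
p_roberta = np.array([[0.2, 0.8], [0.6, 0.4], [0.9, 0.1]])
p_albert = np.array([[0.3, 0.7], [0.4, 0.6], [0.8, 0.2]])
labels, probs = weighted_average_ensemble([p_roberta, p_albert], [0.6, 0.4])
print(labels, probs)
```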
[
"We perform our experiments on the WNC dataset open-sourced by the authors of BIBREF2. It consists of aligned pre and post neutralized sentences made by Wikipedia editors under the neutral point of view. It contains $180k$ biased sentences, and their neutral counterparts crawled from $423,823$ Wikipedia revisions between 2004 and 2019. We randomly shuffled these sentences and split this dataset into two parts in a $90:10$ Train-Test split and perform the evaluation on the held-out test dataset.",
"For all BERT-based models, we use a learning rate of $2*10^{-5}$, a maximum sequence length of 50, and a weight decay of $0.01$ while finetuning the model. We use FastText's recently open-sourced automatic hyperparameter optimization functionality while training the model. For the BiLSTM baseline, we use a dropout of $0.05$ along with a recurrent dropout of $0.2$ in two 64 unit sized stacked BiLSTMs, using softmax activation layer as the final dense layer."
],
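The BiLSTM baseline configuration stated above (two stacked 64-unit BiLSTMs, dropout 0.05, recurrent dropout 0.2, softmax output layer) maps onto a short Keras model. This is only a sketch: the vocabulary size, embedding dimension, and sequence length of 50 are assumptions, and the GloVe weight initialization is omitted.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

VOCAB_SIZE = 20000  # assumption; not reported in the paper
EMBED_DIM = 100     # assumed GloVe dimensionality; pretrained weights omitted

model = models.Sequential([
    tf.keras.Input(shape=(50,)),  # assumed maximum sequence length
    layers.Embedding(VOCAB_SIZE, EMBED_DIM),
    layers.Bidirectional(layers.LSTM(64, return_sequences=True,
                                     dropout=0.05, recurrent_dropout=0.2)),
    layers.Bidirectional(layers.LSTM(64, dropout=0.05, recurrent_dropout=0.2)),
    layers.Dense(2, activation="softmax"),  # subjective vs. neutral
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```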
[
"tab:experimental-results shows the performance of different models on the WNC corpus evaluated on the following four metrics: Precision, Recall, F1, and Accuracy. Our proposed methodology, the use of finetuned optimized BERT based models, and BERT-based ensemble models outperform the baselines for all the metrics.",
"Among the optimized BERT based models, $RoBERTa_{large}$ outperforms all other non-ensemble models and the baselines for all metrics. It further achieves a maximum recall of $0.681$ for all the proposed models. We note that DistillRoBERTa, a distilled model, performs competitively, achieving $69.69\\%$ accuracy, and $0.672$ F1 score. This observation shows that distilled pretrained models can replace their undistilled counterparts in a low-computing environment.",
"We further observe that ensemble models perform better than optimized BERT-based models and distilled pretrained models. Our proposed ensemble comprising of $RoBERTa_{large}$, $ALBERT_{xxlarge.v2}$, $DistilRoBERTa$ and $BERT$ outperforms all the proposed models obtaining $0.704$ F1 score, $0.733$ precision, and $71.61\\%$ Accuracy."
],
[
"In this paper, we investigated BERT-based architectures for sentence level subjective bias detection. We perform our experiments on a general Wikipedia corpus consisting of more than $360k$ pre and post subjective bias neutralized sentences. We found our proposed architectures to outperform the existing baselines significantly. BERT-based ensemble consisting of RoBERTa, ALBERT, DistillRoBERTa, and BERT led to the highest F1 and Accuracy. In the future, we would like to explore document-level detection of subjective bias, multi-word mitigation of the bias, applications of detecting the bias in recommendation systems."
]
]
} |
{
"question": [
"Do the authors report only on English?",
"What is the baseline for the experiments?",
"Which experiments are perfomed?"
],
"question_id": [
"830de0bd007c4135302138ffa8f4843e4915e440",
"680dc3e56d1dc4af46512284b9996a1056f89ded",
"bd5379047c2cf090bea838c67b6ed44773bcd56f"
],
"nlp_background": [
"five",
"five",
"five"
],
"topic_background": [
"familiar",
"familiar",
"familiar"
],
"paper_read": [
"no",
"no",
"no"
],
"search_query": [
"bias",
"bias",
"bias"
],
"question_writer": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
],
"answers": [
{
"answer": [
{
"unanswerable": true,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "",
"evidence": [],
"highlighted_evidence": []
}
],
"annotation_id": [
"dfc487e35ee5131bc5054463ace009e6bd8fc671"
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"FastText",
"BiLSTM",
"BERT"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"FastTextBIBREF4: It uses bag of words and bag of n-grams as features for text classification, capturing partial information about the local word order efficiently.",
"BiLSTM: Unlike feedforward neural networks, recurrent neural networks like BiLSTMs use memory based on history information to learn long-distance features and then predict the output. We use a two-layer BiLSTM architecture with GloVe word embeddings as a strong RNN baseline.",
"BERT BIBREF5: It is a contextualized word representation model that uses bidirectional transformers, pretrained on a large $3.3B$ word corpus. We use the $BERT_{large}$ model finetuned on the training dataset."
],
"highlighted_evidence": [
"FastTextBIBREF4: It uses bag of words and bag of n-grams as features for text classification, capturing partial information about the local word order efficiently.",
"BiLSTM: Unlike feedforward neural networks, recurrent neural networks like BiLSTMs use memory based on history information to learn long-distance features and then predict the output. We use a two-layer BiLSTM architecture with GloVe word embeddings as a strong RNN baseline.",
"BERT BIBREF5: It is a contextualized word representation model that uses bidirectional transformers, pretrained on a large $3.3B$ word corpus. We use the $BERT_{large}$ model finetuned on the training dataset."
]
},
{
"unanswerable": false,
"extractive_spans": [
"FastText",
"BERT ",
"two-layer BiLSTM architecture with GloVe word embeddings"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"Baselines and Approach",
"In this section, we outline baseline models like $BERT_{large}$. We further propose three approaches: optimized BERT-based models, distilled pretrained models, and the use of ensemble methods for the task of subjectivity detection.",
"Baselines and Approach ::: Baselines",
"FastTextBIBREF4: It uses bag of words and bag of n-grams as features for text classification, capturing partial information about the local word order efficiently.",
"BiLSTM: Unlike feedforward neural networks, recurrent neural networks like BiLSTMs use memory based on history information to learn long-distance features and then predict the output. We use a two-layer BiLSTM architecture with GloVe word embeddings as a strong RNN baseline.",
"BERT BIBREF5: It is a contextualized word representation model that uses bidirectional transformers, pretrained on a large $3.3B$ word corpus. We use the $BERT_{large}$ model finetuned on the training dataset.",
"FLOAT SELECTED: Table 1: Experimental Results for the Subjectivity Detection Task"
],
"highlighted_evidence": [
"Baselines and Approach\nIn this section, we outline baseline models like $BERT_{large}$. We further propose three approaches: optimized BERT-based models, distilled pretrained models, and the use of ensemble methods for the task of subjectivity detection.\n\n",
"Baselines and Approach ::: Baselines\nFastTextBIBREF4: It uses bag of words and bag of n-grams as features for text classification, capturing partial information about the local word order efficiently.\n\nBiLSTM: Unlike feedforward neural networks, recurrent neural networks like BiLSTMs use memory based on history information to learn long-distance features and then predict the output. We use a two-layer BiLSTM architecture with GloVe word embeddings as a strong RNN baseline.\n\nBERT BIBREF5: It is a contextualized word representation model that uses bidirectional transformers, pretrained on a large $3.3B$ word corpus. We use the $BERT_{large}$ model finetuned on the training dataset.",
"FLOAT SELECTED: Table 1: Experimental Results for the Subjectivity Detection Task"
]
}
],
"annotation_id": [
"23c76dd5ac11dd015f81868f3a8e1bafdf3d424c",
"2c63f673e8658e64600cc492bc7d6a48b56c2119"
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "They used BERT-based models to detect subjective language in the WNC corpus",
"evidence": [
"In natural language, subjectivity refers to the aspects of communication used to express opinions, evaluations, and speculationsBIBREF0, often influenced by one's emotional state and viewpoints. Writers and editors of texts like news and textbooks try to avoid the use of biased language, yet subjective bias is pervasive in these texts. More than $56\\%$ of Americans believe that news sources do not report the news objectively , thus implying the prevalence of the bias. Therefore, when presenting factual information, it becomes necessary to differentiate subjective language from objective language.",
"In this work, we investigate the application of BERT-based models for the task of subjective language detection. We explore various BERT-based models, including BERT, RoBERTa, ALBERT, with their base and large specifications along with their native classifiers. We propose an ensemble model exploiting predictions from these models using multiple ensembling techniques. We show that our model outperforms the baselines by a margin of $5.6$ of F1 score and $5.95\\%$ of Accuracy.",
"Experiments ::: Dataset and Experimental Settings",
"We perform our experiments on the WNC dataset open-sourced by the authors of BIBREF2. It consists of aligned pre and post neutralized sentences made by Wikipedia editors under the neutral point of view. It contains $180k$ biased sentences, and their neutral counterparts crawled from $423,823$ Wikipedia revisions between 2004 and 2019. We randomly shuffled these sentences and split this dataset into two parts in a $90:10$ Train-Test split and perform the evaluation on the held-out test dataset."
],
"highlighted_evidence": [
"In natural language, subjectivity refers to the aspects of communication used to express opinions, evaluations, and speculationsBIBREF0, often influenced by one's emotional state and viewpoints.",
"In this work, we investigate the application of BERT-based models for the task of subjective language detection.",
"Experiments ::: Dataset and Experimental Settings\nWe perform our experiments on the WNC dataset open-sourced by the authors of BIBREF2. It consists of aligned pre and post neutralized sentences made by Wikipedia editors under the neutral point of view. It contains $180k$ biased sentences, and their neutral counterparts crawled from $423,823$ Wikipedia revisions between 2004 and 2019"
]
}
],
"annotation_id": [
"293dcdfb800de157c1c4be7641cd05512cc26fb2"
],
"worker_id": [
"5053f146237e8fc8859ed3984b5d3f02f39266b7"
]
}
]
} |
{
"caption": [
"Table 1: Experimental Results for the Subjectivity Detection Task"
],
"file": [
"2-Table1-1.png"
]
} |
1909.11189
|
Diachronic Topics in New High German Poetry
|
Statistical topic models are increasingly and popularly used by Digital Humanities scholars to perform distant reading tasks on literary data. It allows us to estimate what people talk about. Especially Latent Dirichlet Allocation (LDA) has shown its usefulness, as it is unsupervised, robust, easy to use, scalable, and it offers interpretable results. In a preliminary study, we apply LDA to a corpus of New High German poetry (textgrid, with 51k poems, 8m token), and use the distribution of topics over documents for a classification of poems into time periods and for authorship attribution.
|
{
"section_name": [
"Corpus",
"Experiments",
"Experiments ::: Topic Trends",
"Experiments ::: Classification of Time Periods and Authorship",
"Experiments ::: Conclusion & Future Work"
],
"paragraphs": [
[
"The Digital Library in the TextGrid Repository represents an extensive collection of German texts in digital form BIBREF3. It was mined from http://zeno.org and covers a time period from the mid 16th century up to the first decades of the 20th century. It contains many important texts that can be considered as part of the literary canon, even though it is far from complete (e.g. it contains only half of Rilke’s work). We find that around 51k texts are annotated with the label ’verse’ (TGRID-V), not distinguishing between ’lyric verse’ and ’epic verse’. However, the average length of these texts is around 150 token, dismissing most epic verse tales. Also, the poems are distributed over 229 authors, where the average author contributed 240 poems (median 131 poems). A drawback of TGRID-V is the circumstance that it contains a noticeable amount of French, Dutch and Latin (over 400 texts). To constrain our dataset to German, we filter foreign language material with a stopword list, as training a dedicated language identification classifier is far beyond the scope of this work."
],
[
"We approach diachronic variation of poetry from two perspectives. First, as distant reading task to visualize the development of clearly interpretable topics over time. Second, as a downstream task, i.e. supervised machine learning task to determine the year (the time-slot) of publication for a given poem. We infer topic distributions over documents as features and pit them against a simple style baseline.",
"We use the implementation of LDA as it is provided in genism BIBREF4. LDA assumes that a particular document contains a mixture of few salient topics, where words are semantically related. We transform our documents (of wordforms) to a bag of words representation, filter stopwords (function words), and set the desired number of topics=100 and train for 50 epochs to attain a reasonable distinctness of topics. We choose 100 topics (rather than a lower number that might be more straightforward to interpret) as we want to later use these topics as features for downstream tasks. We find that wordforms (instead of lemma) are more useful for poetry topic models, as these capture style features (rhyme), orthographic variations ('hertz' instead of 'herz'), and generally offer more interpretable results."
],
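The LDA setup described above (bag-of-words over wordforms, stopwords removed, 100 topics, 50 passes) can be sketched with gensim; the three toy token lists below merely stand in for the 51k TextGrid poems.

```python
from gensim.corpora import Dictionary
from gensim.models import LdaModel

# Placeholder corpus: a list of token lists (wordforms, stopwords removed).
poems = [
    ["tugend", "kunst", "ruhm", "geist"],
    ["mund", "grund", "rund", "lied"],
    ["blume", "garten", "lenz", "lied"],
]

dictionary = Dictionary(poems)
corpus = [dictionary.doc2bow(doc) for doc in poems]

# 100 topics, 50 passes as stated above (a corpus this small would not
# support that many topics in practice).
lda = LdaModel(corpus=corpus, id2word=dictionary,
               num_topics=100, passes=50, random_state=0)

# Per-document topic distribution, later used as classification features.
doc_topics = lda.get_document_topics(corpus[0], minimum_probability=0.0)
print(sorted(doc_topics, key=lambda t: -t[1])[:5])
```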
[
"We retrieve the most important (likely) words for all 100 topics and interpret these (sorted) word lists as aggregated topics, e.g. topic 27 (figure 2) contains: Tugend (virtue), Kunst (art), Ruhm (fame), Geist (spirit), Verstand (mind) and Lob (praise). This topic as a whole describes the concept of ’artistic virtue’.",
"In certain clusters (topics) we find poetic residuals, such that rhyme words often cluster together (as they stand in proximity), e.g. topic 52 with: Mund (mouth), Grund (cause, ground), rund (round).",
"To discover trends of topics over time, we bin our documents into time slots of 25 years width each. See figure 1 for a plot of the number of documents per bin. The chosen binning slots offer enough documents per slot for our experiments. To visualize trends of singular topics over time, we aggregate all documents d in slot s and add the probabilities of topic t given d and divide by the number of all d in s. This gives us the average probability of a topic per timeslot. We then plot the trajectories for each single topic. See figures 2–6 for a selection of interpretable topic trends. Please note that the scaling on the y-axis differ for each topic, as some topics are more pronounced in the whole dataset overall.",
"Some topic plots are already very revealing. The topic ‘artistic virtue’ (figure 2, left) shows a sharp peak around 1700—1750, outlining the period of Enlightenment. Several topics indicate Romanticism, such as ‘flowers’ (figure 2, right), ‘song’ (figure 3, left) or ‘dust, ghosts, depths’ (not shown). The period of 'Vormärz' or 'Young Germany' is quite clear with the topic ‘German Nation’ (figure 3, right). It is however hardly distinguishable from romantic topics.",
"We find that the topics 'Beautiful Girls' (figure 4, left) and 'Life & Death' (figure 4, right) are always quite present over time, while 'Girls' is more prounounced in Romanticism, and 'Death' in Barock.",
"We find that the topic 'Fire' (figure 5, left) is a fairly modern concept, that steadily rises into modernity, possibly because of the trope 'love is fire'. Next to it, the topic 'Family' (figure 5, right) shows wild fluctuation over time.",
"Finally, figure 6 shows topics that are most informative for the downstream classification task: Topic 11 'World, Power, Time' (left) is very clearly a Barock topic, ending at 1750, while topic 19 'Heaven, Depth, Silence' is a topic that rises from Romanticism into Modernity."
],
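The trend computation described above, i.e. the average probability of each topic over all documents in a 25-year slot, can be sketched as follows; the random document-topic matrix and year vector are placeholders for the inferred LDA distributions and the poems' publication years.

```python
import numpy as np
from collections import defaultdict

doc_topics = np.random.dirichlet(np.ones(100), size=500)  # placeholder (n_docs, n_topics)
years = np.random.randint(1575, 1926, size=500)           # placeholder publication years

slots = (years - 1575) // 25  # 25-year bins

totals = defaultdict(lambda: np.zeros(doc_topics.shape[1]))
counts = defaultdict(int)
for probs, s in zip(doc_topics, slots):
    totals[s] += probs
    counts[s] += 1

# Average topic probability per slot -> one trajectory per topic.
trajectories = {s: totals[s] / counts[s] for s in sorted(totals)}
topic = 27  # e.g. the 'artistic virtue' topic
print([(1575 + 25 * s, round(traj[topic], 4)) for s, traj in trajectories.items()])
```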
[
"To test whether topic models can be used for dating poetry or attributing authorship, we perform supervised classification experiments with Random Forest Ensemble classifiers. We find that we obtain better results by training and testing on stanzas instead of full poems, as we have more data available. Also, we use 50 year slots (instead of 25) to ease the task.",
"For each document we determine a class label for a time slot. The slot 1575–1624 receives the label 0, the slot 1625–1674 the label 1, etc.. In total, we have 7 classes (time slots).",
"As a baseline, we implement rather straightforward style features, such as line length, poem length (in token, syllables, lines), cadence (number of syllables of last word in line), soundscape (ratio of closed to open syllables, see BIBREF5), and a proxy for metre, the number of syllables of the first word in the line.",
"We split the data randomly 70:30 training:testing, where a 50:50 shows (5 points) worse performance. We then train Random Forest Ensemble classifiers and perform a grid search over their parameters to determine the best classifier. Please note that our class sizes are quite imbalanced.",
"The Style baseline achieves an Accuracy of 83%, LDA features 89% and a combination of the two gets 90%. However, training on full poems reduces this to 42—52%.",
"The most informative features (by information gain) are: Topic11 (.067), Topic 37 (.055), Syllables Per Line (.046), Length of poem in syllables (.031), Topic19 (.029), Topic98 (.025), Topic27 ('virtue') (.023), and Soundscape (.023).",
"For authorship attribution, we also use a 70:30 random train:test split and use the author name as class label. We only choose the most frequent 180 authors. We find that training on stanzas gives us 71% Accuracy, but when trained on full poems, we only get 13% Accuracy. It should be further investigated is this is only because of a surplus of data."
],
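The classification experiment above (Random Forest Ensemble classifiers with a grid search over their parameters and a 70:30 train-test split) can be sketched with scikit-learn; the feature matrix, labels, and parameter grid shown here are illustrative placeholders.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

# Placeholder features: 100 topic probabilities plus a few style features per
# stanza; labels are the seven 50-year time slots (0..6).
X = np.random.rand(2000, 106)
y = np.random.randint(0, 7, size=2000)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.30, random_state=0)  # 70:30 split, as above

grid = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [100, 300], "max_depth": [None, 20]},
    cv=3, n_jobs=-1)
grid.fit(X_train, y_train)

print("best params:", grid.best_params_)
print("test accuracy:", grid.best_estimator_.score(X_test, y_test))
```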
[
"We have shown the viability of Latent Dirichlet Allocation for a visualization of topic trends (the evolution of what people talk about in poetry). While most topics are easily interpretable and show a clear trend, others are quite noisy. For an exploratory experiment, the classification into time slots and for authors attribution is very promising, however far from perfect. It should be investigated whether using stanzas instead of whole poems only improves results because of more available data. Also, it needs to be determined if better topic models can deliver a better baseline for diachronic change in poetry, and if better style features will outperform semantics. Finally, only selecting clear trending and peaking topics (through co-variance) might further improve the results."
]
]
} |
{
"question": [
"What is the algorithm used for the classification tasks?",
"Is the outcome of the LDA analysis evaluated in any way?",
"What is the corpus used in the study?"
],
"question_id": [
"bfa3776c30cb30e0088e185a5908e5172df79236",
"a2a66726a5dca53af58aafd8494c4de833a06f14",
"ee87608419e4807b9b566681631a8cd72197a71a"
],
"nlp_background": [
"two",
"two",
"two"
],
"topic_background": [
"familiar",
"familiar",
"familiar"
],
"paper_read": [
"no",
"no",
"no"
],
"search_query": [
"German",
"German",
"German"
],
"question_writer": [
"486a870694ba60f1a1e7e4ec13e328164cd4b43c",
"486a870694ba60f1a1e7e4ec13e328164cd4b43c",
"486a870694ba60f1a1e7e4ec13e328164cd4b43c"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"Random Forest Ensemble classifiers"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"To test whether topic models can be used for dating poetry or attributing authorship, we perform supervised classification experiments with Random Forest Ensemble classifiers. We find that we obtain better results by training and testing on stanzas instead of full poems, as we have more data available. Also, we use 50 year slots (instead of 25) to ease the task."
],
"highlighted_evidence": [
"To test whether topic models can be used for dating poetry or attributing authorship, we perform supervised classification experiments with Random Forest Ensemble classifiers. "
]
}
],
"annotation_id": [
"b19621401c5d97df4f64375d16bc639aa58c460e"
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [],
"yes_no": true,
"free_form_answer": "",
"evidence": [
"The Style baseline achieves an Accuracy of 83%, LDA features 89% and a combination of the two gets 90%. However, training on full poems reduces this to 42—52%."
],
"highlighted_evidence": [
"The Style baseline achieves an Accuracy of 83%, LDA features 89% and a combination of the two gets 90%. However, training on full poems reduces this to 42—52%."
]
}
],
"annotation_id": [
"764826094a9ccf5268e8eddab5591eb190c1ed63"
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"TextGrid Repository"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"The Digital Library in the TextGrid Repository represents an extensive collection of German texts in digital form BIBREF3. It was mined from http://zeno.org and covers a time period from the mid 16th century up to the first decades of the 20th century. It contains many important texts that can be considered as part of the literary canon, even though it is far from complete (e.g. it contains only half of Rilke’s work). We find that around 51k texts are annotated with the label ’verse’ (TGRID-V), not distinguishing between ’lyric verse’ and ’epic verse’. However, the average length of these texts is around 150 token, dismissing most epic verse tales. Also, the poems are distributed over 229 authors, where the average author contributed 240 poems (median 131 poems). A drawback of TGRID-V is the circumstance that it contains a noticeable amount of French, Dutch and Latin (over 400 texts). To constrain our dataset to German, we filter foreign language material with a stopword list, as training a dedicated language identification classifier is far beyond the scope of this work."
],
"highlighted_evidence": [
"The Digital Library in the TextGrid Repository represents an extensive collection of German texts in digital form BIBREF3."
]
},
{
"unanswerable": false,
"extractive_spans": [
"The Digital Library in the TextGrid Repository"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"The Digital Library in the TextGrid Repository represents an extensive collection of German texts in digital form BIBREF3. It was mined from http://zeno.org and covers a time period from the mid 16th century up to the first decades of the 20th century. It contains many important texts that can be considered as part of the literary canon, even though it is far from complete (e.g. it contains only half of Rilke’s work). We find that around 51k texts are annotated with the label ’verse’ (TGRID-V), not distinguishing between ’lyric verse’ and ’epic verse’. However, the average length of these texts is around 150 token, dismissing most epic verse tales. Also, the poems are distributed over 229 authors, where the average author contributed 240 poems (median 131 poems). A drawback of TGRID-V is the circumstance that it contains a noticeable amount of French, Dutch and Latin (over 400 texts). To constrain our dataset to German, we filter foreign language material with a stopword list, as training a dedicated language identification classifier is far beyond the scope of this work."
],
"highlighted_evidence": [
"The Digital Library in the TextGrid Repository represents an extensive collection of German texts in digital form BIBREF3. It was mined from http://zeno.org and covers a time period from the mid 16th century up to the first decades of the 20th century. It contains many important texts that can be considered as part of the literary canon, even though it is far from complete (e.g. it contains only half of Rilke’s work)."
]
}
],
"annotation_id": [
"2f9121fabcdac24875d9a6d5e5aa2c12232105a3",
"82c7475166ef613bc8d8ae561ed1fc9eead8820c"
],
"worker_id": [
"34c35a1877e453ecaebcf625df3ef788e1953cc4",
"c1fbdd7a261021041f75fbe00a55b4c386ebbbb4"
]
}
]
} |
{
"caption": [
"Fig. 1: 25 year Time Slices of Textgrid Poetry (1575–1925)",
"Fig. 2: left: Topic 27 ’Virtue, Arts’ (Period: Enlightenment), right: Topic 55 ’Flowers, Spring, Garden’ (Period: Early Romanticism)",
"Fig. 3: left: Topic 63 ’Song’ (Period: Romanticism), right: Topic 33 ’German Nation’ (Period: Vormärz, Young Germany))",
"Fig. 4: left: Topic 28 ’Beautiful Girls’ (Period: Omnipresent, Romanticism), right: Topic 77 ’Life & Death’ (Period: Omnipresent, Barock",
"Fig. 5: left: Topic 60 ’Fire’ (Period: Modernity), right: Topic 42 ’Family’ (no period, fluctuating)",
"Fig. 6: Most informative topics for classification; left: Topic 11 ’World, Power, Lust, Time’ (Period: Barock), right: Topic 19 ’Heaven, Depth, Silence’ (Period: Romanticism, Modernity)"
],
"file": [
"2-Figure1-1.png",
"3-Figure2-1.png",
"3-Figure3-1.png",
"4-Figure4-1.png",
"4-Figure5-1.png",
"4-Figure6-1.png"
]
} |
2003.08553
|
QnAMaker: Data to Bot in 2 Minutes
|
Having a bot for seamless conversations is a much-desired feature that products and services today seek for their websites and mobile apps. These bots help reduce traffic received by human support significantly by handling frequent and directly answerable known questions. Many such services have huge reference documents such as FAQ pages, which makes it hard for users to browse through this data. A conversation layer over such raw data can lower traffic to human support by a great margin. We demonstrate QnAMaker, a service that creates a conversational layer over semi-structured data such as FAQ pages, product manuals, and support documents. QnAMaker is the popular choice for Extraction and Question-Answering as a service and is used by over 15,000 bots in production. It is also used by search interfaces and not just bots.
|
{
"section_name": [
"Introduction",
"System description ::: Architecture",
"System description ::: Bot Development Process",
"System description ::: Extraction",
"System description ::: Retrieval And Ranking",
"System description ::: Retrieval And Ranking ::: Pre-Processing",
"System description ::: Retrieval And Ranking ::: Features",
"System description ::: Retrieval And Ranking ::: Contextual Features",
"System description ::: Retrieval And Ranking ::: Modeling and Training",
"System description ::: Persona Based Chit-Chat",
"System description ::: Active Learning",
"Evaluation and Insights",
"Demonstration",
"Future Work"
],
"paragraphs": [
[
"QnAMaker aims to simplify the process of bot creation by extracting Question-Answer (QA) pairs from data given by users into a Knowledge Base (KB) and providing a conversational layer over it. KB here refers to one instance of azure search index, where the extracted QA are stored. Whenever a developer creates a KB using QnAMaker, they automatically get all NLP capabilities required to answer user's queries. There are other systems such as Google's Dialogflow, IBM's Watson Discovery which tries to solve this problem. QnAMaker provides unique features for the ease of development such as the ability to add a persona-based chit-chat layer on top of the bot. Additionally, bot developers get automatic feedback from the system based on end-user traffic and interaction which helps them in enriching the KB; we call this feature active-learning. Our system also allows user to add Multi-Turn structure to KB using hierarchical extraction and contextual ranking. QnAMaker today supports over 35 languages, and is the only system among its competitors to follow a Server-Client architecture; all the KB data rests only in the client's subscription, giving users total control over their data. QnAMaker is part of Microsoft Cognitive Service and currently runs using the Microsoft Azure Stack."
],
[
"As shown in Figure FIGREF4, humans can have two different kinds of roles in the system: Bot-Developers who want to create a bot using the data they have, and End-Users who will chat with the bot(s) created by bot-developers. The components involved in the process are:",
"QnAMaker Portal: This is the Graphical User Interface (GUI) for using QnAMaker. This website is designed to ease the use of management APIs. It also provides a test pane.",
"QnaMaker Management APIs: This is used for the extraction of Question-Answer (QA) pairs from semi-structured content. It then passes these QA pairs to the web app to create the Knowledge Base Index.",
"Azure Search Index: Stores the KB with questions and answers as indexable columns, thus acting as a retrieval layer.",
"QnaMaker WebApp: Acts as a layer between the Bot, Management APIs, and Azure Search Index. WebApp does ranking on top of retrieved results. WebApp also handles feedback management for active learning.",
"Bot: Calls the WebApp with the User's query to get results."
],
[
"Creating a bot is a 3-step process for a bot developer:",
"Create a QnaMaker Resource in Azure: This creates a WebApp with binaries required to run QnAMaker. It also creates an Azure Search Service for populating the index with any given knowledge base, extracted from user data",
"Use Management APIs to Create/Update/Delete your KB: The Create API automatically extracts the QA pairs and sends the Content to WebApp, which indexes it in Azure Search Index. Developers can also add persona-based chat content and synonyms while creating and updating their KBs.",
"Bot Creation: Create a bot using any framework and call the WebApp hosted in Azure to get your queries answered. There are Bot-Framework templates provided for the same."
],
[
"The Extraction component is responsible for understanding a given document and extracting potential QA pairs. These QA pairs are in turn used to create a KB to be consumed later on by the QnAMaker WebApp to answer user queries. First, the basic blocks from given documents such as text, lines are extracted. Then the layout of the document such as columns, tables, lists, paragraphs, etc is extracted. This is done using Recursive X-Y cut BIBREF0. Following Layout Understanding, each element is tagged as headers, footers, table of content, index, watermark, table, image, table caption, image caption, heading, heading level, and answers. Agglomerative clustering BIBREF1 is used to identify heading and hierarchy to form an intent tree. Leaf nodes from the hierarchy are considered as QA pairs. In the end, the intent tree is further augmented with entities using CRF-based sequence labeling. Intents that are repeated in and across documents are further augmented with their parent intent, adding more context to resolve potential ambiguity."
],
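The heading-hierarchy step above relies on agglomerative clustering; a heavily simplified sketch is given below. The layout features (font size, indentation) and the example headings are hypothetical, since the features used by QnAMaker's extractor are not documented here.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering

headings = ["Returns", "How do I return an item?",
            "Refunds", "When will I get my refund?"]
# Hypothetical layout features per heading: (font size, indentation level).
features = np.array([[18.0, 0.0], [12.0, 1.0], [18.0, 0.0], [12.0, 1.0]])

# Headings in the same cluster are treated as the same level of the intent tree.
levels = AgglomerativeClustering(n_clusters=2).fit_predict(features)
for heading, level in zip(headings, levels):
    print(level, heading)
```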
[
"QnAMaker uses Azure Search Index as it's retrieval layer, followed by re-ranking on top of retrieved results (Figure FIGREF21). Azure Search is based on inverted indexing and TF-IDF scores. Azure Search provides fuzzy matching based on edit-distance, thus making retrieval robust to spelling mistakes. It also incorporates lemmatization and normalization. These indexes can scale up to millions of documents, lowering the burden on QnAMaker WebApp which gets less than 100 results to re-rank.",
"Different customers may use QnAMaker for different scenarios such as banking task completion, answering FAQs on company policies, or fun and engagement. The number of QAs, length of questions and answers, number of alternate questions per QA can vary significantly across different types of content. Thus, the ranker model needs to use features that are generic enough to be relevant across all use cases."
],
[
"The pre-processing layer uses components such as Language Detection, Lemmatization, Speller, and Word Breaker to normalize user queries. It also removes junk characters and stop-words from the user's query."
],
[
"Going into granular features and the exact empirical formulas used is out of the scope of this paper. The broad level features used while ranking are:",
"WordNet: There are various features generated using WordNet BIBREF2 matching with questions and answers. This takes care of word-level semantics. For instance, if there is information about “price of furniture\" in a KB and the end-user asks about “price of table\", the user will likely get a relevant answer. The scores of these WordNet features are calculated as a function of:",
"Distance of 2 words in the WordNet graph",
"Distance of Lowest Common Hypernym from the root",
"Knowledge-Base word importance (Local IDFs)",
"Global word importance (Global IDFs)",
"This is the most important feature in our model as it has the highest relative feature gain.",
"CDSSM: Convolutional Deep Structured Semantic Models BIBREF3 are used for sentence-level semantic matching. This is a dual encoder model that converts text strings (sentences, queries, predicates, entity mentions, etc) into their vector representations. These models are trained using millions of Bing Query Title Click-Through data. Using the source-model for vectorizing user query and target-model for vectorizing answer, we compute the cosine similarity between these two vectors, giving the relevance of answer corresponding to the query.",
"TF-IDF: Though sentence-to-vector models are trained on huge datasets, they fail to effectively disambiguate KB specific data. This is where a standard TF-IDF BIBREF4 featurizer with local and global IDFs helps."
],
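Two of the feature families above can be illustrated with standard libraries: TF-IDF cosine similarity between a user query and candidate KB questions, and WordNet-based word-level signals (path distance, depth of the lowest common hypernym). This is an illustrative sketch, not QnAMaker's internal feature formulas, and it assumes the NLTK WordNet data has been downloaded.

```python
from nltk.corpus import wordnet as wn  # requires: nltk.download("wordnet")
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# TF-IDF similarity between the query and each candidate KB question.
kb_questions = ["what is the price of the furniture",
                "how do I return a damaged item"]
query = "price of table"
mat = TfidfVectorizer().fit_transform(kb_questions + [query])
q_vec, cand_vecs = mat[len(kb_questions)], mat[:len(kb_questions)]
print("tf-idf scores:", cosine_similarity(q_vec, cand_vecs))

# Word-level WordNet signals of the kind listed above: graph distance between
# two words and the depth of their lowest common hypernym.
s1, s2 = wn.synsets("table")[0], wn.synsets("furniture")[0]
print("path similarity:", s1.path_similarity(s2))
print("LCH depth:", s1.lowest_common_hypernyms(s2)[0].min_depth())
```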
[
"We extend the features for contextual ranking by modifying the candidate QAs and user query in these ways:",
"$Query_{modified}$ = Query + Previous Answer; For instance, if user query is “yes\" and the previous answer is “do you want to know about XYZ\", the current query becomes “do you want to know about XYZ yes\".",
"Candidate QnA pairs are appended with its parent Questions and Answers; no contextual information is used from the user's query. For instance, if a candidate QnA has a question “benefits\" and its parent question was “know about XYZ\", the candidate QA's question is changed to “know about XYZ benefits\".",
"The features mentioned in Section SECREF20 are calculated for the above combinations also. These features carry contextual information."
],
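The contextual modifications above (the query concatenated with the previous answer, and the candidate question/answer prefixed with its parent) can be written as a small helper. The dictionary keys used for the candidate are assumptions about the KB schema.

```python
def contextualize(query, previous_answer, candidate):
    """Return the modified query and candidate text used for feature computation.

    `candidate` is a dict with hypothetical keys: question, answer,
    parent_question, parent_answer.
    """
    modified_query = f"{previous_answer} {query}".strip() if previous_answer else query
    question, answer = candidate["question"], candidate["answer"]
    if candidate.get("parent_question"):
        question = f"{candidate['parent_question']} {question}"
    if candidate.get("parent_answer"):
        answer = f"{candidate['parent_answer']} {answer}"
    return modified_query, question, answer

# Mirrors the example above: "yes" becomes "do you want to know about XYZ yes",
# and the candidate question "benefits" becomes "know about XYZ benefits".
print(contextualize("yes", "do you want to know about XYZ",
                    {"question": "benefits", "answer": "XYZ offers ...",
                     "parent_question": "know about XYZ"}))
```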
[
"We use gradient-boosted decision trees as our ranking model to combine all the features. Early stopping BIBREF5 based on Generality-to-Progress ratio is used to decide the number of step trees and Tolerant Pruning BIBREF6 helps prevent overfitting. We follow incremental training if there is small changes in features or training data so that the score distribution is not changed drastically."
],
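A hedged stand-in for the ranking model above: scikit-learn's gradient-boosted trees with built-in early stopping. The generality-to-progress criterion and tolerant pruning are not public, so `n_iter_no_change` is used here only as an approximation, and the feature matrix is a placeholder.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Placeholder features: one row per (query, candidate QnA) pair holding the
# WordNet / CDSSM / TF-IDF features; label 1 if the candidate is relevant.
X = np.random.rand(5000, 12)
y = np.random.randint(0, 2, size=5000)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.2, random_state=0)

model = GradientBoostingClassifier(
    n_estimators=500, validation_fraction=0.2,
    n_iter_no_change=10, random_state=0)  # early stopping as an approximation
model.fit(X_tr, y_tr)

# Candidates for a query are then ranked by predicted relevance probability.
scores = model.predict_proba(X_val)[:, 1]
print("highest-scoring validation pair:", scores.argmax())
```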
[
"We add support for bot-developers to directly enable handling chit-chat queries like “hi\", “thank you\", “what's up\" in their QnAMaker bots. In addition to chit-chat, we also give bot developers the flexibility to ground responses for such queries in a specific personality: professional, witty, friendly, caring, or enthusiastic. For example, the “Humorous\" personality can be used for a casual bot, whereas a “Professional\" personality is more suited in case of banking FAQs or task-completion bots. There is a list of 100+ predefined intents BIBREF7. There is a curated list of queries for each of these intents, along with a separate query understanding layer for ranking these intents. The arbitration between chit-chat answers and user's knowledge base answers is handled by using a chat-domain classifier BIBREF8."
],
[
"The majority of the KBs are created using existing FAQ pages or manuals but to improve the quality it requires effort from the developers. Active learning generates suggestions based on end-user feedback as well as ranker's implicit signals. For instance, if for a query, CDSSM feature was confident that one QnA should be ranked higher whereas wordnet feature thought other QnA should be ranked higher, active learning system will try to disambiguate it by showing this as a suggestion to the bot developer. To avoid showing similar suggestions to developers, DB-Scan clustering is done which optimizes the number of suggestions shown."
],
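The suggestion de-duplication above uses DB-Scan clustering; a minimal sketch over TF-IDF vectors is shown below. The example suggestions, the cosine metric, and the `eps` value are assumptions, since the representation QnAMaker clusters on is not specified.

```python
from sklearn.cluster import DBSCAN
from sklearn.feature_extraction.text import TfidfVectorizer

# Cluster near-duplicate active-learning suggestions so that only one
# representative per cluster is surfaced to the bot developer.
suggestions = ["how do i reset my password",
               "how can i reset the password",
               "what is the refund policy"]
X = TfidfVectorizer().fit_transform(suggestions)
labels = DBSCAN(eps=0.8, min_samples=1, metric="cosine").fit_predict(X)
print(dict(zip(suggestions, labels)))
```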
[
"QnAMaker is not domain-specific and can be used for any type of data. To support this claim, we measure our system's performance for datasets across various domains. The evaluations are done by managed judges who understands the knowledge base and then judge user queries relevance to the QA pairs (binary labels). Each query-QA pair is judged by two judges. We filter out data for which judges do not agree on the label. Chit-chat in itself can be considered as a domain. Thus, we evaluate performance on given KB both with and without chit-chat data (last two rows in Table TABREF19), as well as performance on just chit-chat data (2nd row in Table TABREF19). Hybrid of deep learning(CDSSM) and machine learning features give our ranking model low computation cost, high explainability and significant F1/AUC score. Based on QnAMaker usage, we observed these trends:",
"Around 27% of the knowledge bases created use pre-built persona-based chitchat, out of which, $\\sim $4% of the knowledge bases are created for chit-chat alone. The highest used personality is Professional which is used in 9% knowledge bases.",
"Around $\\sim $25% developers have enabled active learning suggestions. The acceptance to reject ratio for active learning suggestions is 0.31.",
"25.5% of the knowledge bases use one URL as a source while creation. $\\sim $41% of the knowledge bases created use different sources like multiple URLs. 15.19% of the knowledge bases use both URL and editorial content as sources. Rest use just editorial content."
],
[
"We demonstrate QnAMaker: a service to add a conversational layer over semi-structured user data. In addition to query-answering, we support novel features like personality-grounded chit-chat, active learning based on user-interaction feedback (Figure FIGREF40), and hierarchical extraction for multi-turn conversations (Figure FIGREF41). The goal of the demonstration will be to show how easy it is to create an intelligent bot using QnAMaker. All the demonstrations will be done on the production website Demo Video can be seen here."
],
[
"The system currently doesn't highlight the answer span and does not generate answers taking the KB as grounding. We will be soon supporting Answer Span BIBREF9 and KB-grounded response generation BIBREF10 in QnAMaker. We are also working on user-defined personas for chit-chat (automatically learned from user-documents). We aim to enhance our extraction to be able to work for any unstructured document as well as images. We are also experimenting on improving our ranking system by using semantic vector-based search as our retrieval and transformer-based models for re-ranking."
]
]
} |
{
"question": [
"What experiments do the authors present to validate their system?",
"How does the conversation layer work?",
"What components is the QnAMaker composed of?"
],
"question_id": [
"fd0ef5a7b6f62d07776bf672579a99c67e61a568",
"071bcb4b054215054f17db64bfd21f17fd9e1a80",
"f399d5a8dbeec777a858f81dc4dd33a83ba341a2"
],
"nlp_background": [
"five",
"five",
"five"
],
"topic_background": [
"",
"",
""
],
"paper_read": [
"",
"",
""
],
"search_query": [
"",
"",
""
],
"question_writer": [
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37",
"e8b24c3133e0bec0a6465e1f13acd3a5ed816b37"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
" we measure our system's performance for datasets across various domains",
"evaluations are done by managed judges who understands the knowledge base and then judge user queries relevance to the QA pairs"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"QnAMaker is not domain-specific and can be used for any type of data. To support this claim, we measure our system's performance for datasets across various domains. The evaluations are done by managed judges who understands the knowledge base and then judge user queries relevance to the QA pairs (binary labels). Each query-QA pair is judged by two judges. We filter out data for which judges do not agree on the label. Chit-chat in itself can be considered as a domain. Thus, we evaluate performance on given KB both with and without chit-chat data (last two rows in Table TABREF19), as well as performance on just chit-chat data (2nd row in Table TABREF19). Hybrid of deep learning(CDSSM) and machine learning features give our ranking model low computation cost, high explainability and significant F1/AUC score. Based on QnAMaker usage, we observed these trends:"
],
"highlighted_evidence": [
" To support this claim, we measure our system's performance for datasets across various domains. The evaluations are done by managed judges who understands the knowledge base and then judge user queries relevance to the QA pairs (binary labels). Each query-QA pair is judged by two judges. We filter out data for which judges do not agree on the label. Chit-chat in itself can be considered as a domain. Thus, we evaluate performance on given KB both with and without chit-chat data (last two rows in Table TABREF19), as well as performance on just chit-chat data (2nd row in Table TABREF19)."
]
}
],
"annotation_id": [
"c6aac397b3bf27942363d5b4be00bf094654d366"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": true,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "",
"evidence": [],
"highlighted_evidence": []
}
],
"annotation_id": [
"3c069b65ef0117a5d5c4ee9ac49ab6709cfbe124"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"QnAMaker Portal",
"QnaMaker Management APIs",
"Azure Search Index",
"QnaMaker WebApp",
"Bot"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"System description ::: Architecture",
"As shown in Figure FIGREF4, humans can have two different kinds of roles in the system: Bot-Developers who want to create a bot using the data they have, and End-Users who will chat with the bot(s) created by bot-developers. The components involved in the process are:",
"QnAMaker Portal: This is the Graphical User Interface (GUI) for using QnAMaker. This website is designed to ease the use of management APIs. It also provides a test pane.",
"QnaMaker Management APIs: This is used for the extraction of Question-Answer (QA) pairs from semi-structured content. It then passes these QA pairs to the web app to create the Knowledge Base Index.",
"Azure Search Index: Stores the KB with questions and answers as indexable columns, thus acting as a retrieval layer.",
"QnaMaker WebApp: Acts as a layer between the Bot, Management APIs, and Azure Search Index. WebApp does ranking on top of retrieved results. WebApp also handles feedback management for active learning.",
"Bot: Calls the WebApp with the User's query to get results."
],
"highlighted_evidence": [
"System description ::: Architecture",
"The components involved in the process are:",
"QnAMaker Portal: This is the Graphical User Interface (GUI) for using QnAMaker. ",
"QnaMaker Management APIs: This is used for the extraction of Question-Answer (QA) pairs from semi-structured content. ",
"Azure Search Index: Stores the KB with questions and answers as indexable columns, thus acting as a retrieval layer.",
"QnaMaker WebApp: Acts as a layer between the Bot, Management APIs, and Azure Search Index. ",
"Bot: Calls the WebApp with the User's query to get results."
]
},
{
"unanswerable": false,
"extractive_spans": [
"QnAMaker Portal",
"QnaMaker Management APIs",
"Azure Search Index",
"QnaMaker WebApp",
"Bot"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"As shown in Figure FIGREF4, humans can have two different kinds of roles in the system: Bot-Developers who want to create a bot using the data they have, and End-Users who will chat with the bot(s) created by bot-developers. The components involved in the process are:",
"QnAMaker Portal: This is the Graphical User Interface (GUI) for using QnAMaker. This website is designed to ease the use of management APIs. It also provides a test pane.",
"QnaMaker Management APIs: This is used for the extraction of Question-Answer (QA) pairs from semi-structured content. It then passes these QA pairs to the web app to create the Knowledge Base Index.",
"Azure Search Index: Stores the KB with questions and answers as indexable columns, thus acting as a retrieval layer.",
"QnaMaker WebApp: Acts as a layer between the Bot, Management APIs, and Azure Search Index. WebApp does ranking on top of retrieved results. WebApp also handles feedback management for active learning.",
"Bot: Calls the WebApp with the User's query to get results."
],
"highlighted_evidence": [
"The components involved in the process are:\n\nQnAMaker Portal: This is the Graphical User Interface (GUI) for using QnAMaker. This website is designed to ease the use of management APIs. It also provides a test pane.\n\nQnaMaker Management APIs: This is used for the extraction of Question-Answer (QA) pairs from semi-structured content. It then passes these QA pairs to the web app to create the Knowledge Base Index.\n\nAzure Search Index: Stores the KB with questions and answers as indexable columns, thus acting as a retrieval layer.\n\nQnaMaker WebApp: Acts as a layer between the Bot, Management APIs, and Azure Search Index. WebApp does ranking on top of retrieved results. WebApp also handles feedback management for active learning.\n\nBot: Calls the WebApp with the User's query to get results."
]
}
],
"annotation_id": [
"443426bf61950f89af016a359cbdb0f5f3680d81",
"cc3663b4c97c95bfda1e9a6d64172abea619da01"
],
"worker_id": [
"c1018a31c3272ce74964a3280069f62f314a1a58",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
]
} |
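The architecture answer in the record above describes a retrieve-then-rank flow: the Bot forwards the user query to the QnaMaker WebApp, the Azure Search Index returns candidate QA pairs, and a hybrid ranker (CDSSM plus hand-crafted machine-learning features) orders them. The sketch below only illustrates that flow; every class, function, and score in it is a hypothetical stand-in, not the actual QnAMaker or Azure Search API.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class QAPair:
    question: str
    answer: str

def retrieve(index: List[QAPair], query: str, top_k: int = 10) -> List[QAPair]:
    """Stand-in for the Azure Search Index retrieval layer: return the top_k
    stored QA pairs for the query (keyword overlap instead of a real index)."""
    q_tokens = set(query.lower().split())
    return sorted(index,
                  key=lambda qa: len(q_tokens & set(qa.question.lower().split())),
                  reverse=True)[:top_k]

def rank(candidates: List[QAPair], query: str) -> List[QAPair]:
    """Stand-in for the WebApp ranking layer: combine a 'deep' semantic score
    with simple lexical features, mirroring the hybrid CDSSM + ML-feature
    ranker described in the record (both scores are placeholders here)."""
    q_tokens = set(query.lower().split())
    def score(qa: QAPair) -> float:
        semantic = 0.0  # placeholder for a CDSSM-style similarity score
        lexical = len(q_tokens & set(qa.question.lower().split()))
        return 0.7 * semantic + 0.3 * lexical
    return sorted(candidates, key=score, reverse=True)

def answer_query(index: List[QAPair], query: str) -> Optional[str]:
    """Bot -> WebApp -> Search Index -> ranker, returning the best answer."""
    ranked = rank(retrieve(index, query), query)
    return ranked[0].answer if ranked else None

# Toy usage with a two-entry knowledge base.
kb = [QAPair("How do I reset my password?", "Use the account settings page."),
      QAPair("What are the support hours?", "9am-5pm on weekdays.")]
print(answer_query(kb, "reset password"))
```

In the actual service the retrieval step is an Azure Search query and the semantic score comes from a trained CDSSM model; both are replaced here by keyword overlap so the sketch stays self-contained and runnable.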
{
"caption": [
"Figure 1: Interactions between various components of QnaMaker, along with their scopes: server-side and client-side",
"Table 1: Retrieval And Ranking Measurements",
"Figure 2: QnAMaker Runtime Pipeline",
"Figure 3: Active Learning Suggestions",
"Figure 4: Multi-Turn Knowledge Base"
],
"file": [
"2-Figure1-1.png",
"3-Table1-1.png",
"3-Figure2-1.png",
"4-Figure3-1.png",
"4-Figure4-1.png"
]
} |
1909.09491
|
A simple discriminative training method for machine translation with large-scale features
|
Margin infused relaxed algorithms (MIRAs) dominate model tuning in statistical machine translation in the case of large-scale features, but they are also known for their implementation complexity. We introduce a new method, which regards an N-best list as a permutation and minimizes the Plackett-Luce loss of ground-truth permutations. Experiments with large-scale features demonstrate that the new method is more robust than MERT; although it is only on par with MIRAs, it has a comparative advantage: it is easier to implement.
|
{
"section_name": [
"Introduction",
"Plackett-Luce Model",
"Plackett-Luce Model in Statistical Machine Translation",
"Plackett-Luce Model in Statistical Machine Translation ::: N-best Hypotheses Resample",
"Evaluation",
"Evaluation ::: Plackett-Luce Model for SMT Tuning",
"Evaluation ::: Plackett-Luce Model for SMT Reranking"
],
"paragraphs": [
[
"Since Och BIBREF0 proposed minimum error rate training (MERT) to exactly optimize objective evaluation measures, MERT has become a standard model tuning technique in statistical machine translation (SMT). Though MERT performs better by improving its searching algorithm BIBREF1, BIBREF2, BIBREF3, BIBREF4, it does not work reasonably when there are lots of features. As a result, margin infused relaxed algorithms (MIRA) dominate in this case BIBREF6, BIBREF7, BIBREF8, BIBREF9, BIBREF10.",
"In SMT, MIRAs consider margin losses related to sentence-level BLEUs. However, since the BLEU is not decomposable into each sentence, these MIRA algorithms use some heuristics to compute the exact losses, e.g., pseudo-document BIBREF8, and document-level loss BIBREF9.",
"Recently, another successful work in large-scale feature tuning include force decoding basedBIBREF11, classification based BIBREF12.",
"We aim to provide a simpler tuning method for large-scale features than MIRAs. Out motivation derives from an observation on MERT. As MERT considers the quality of only top1 hypothesis set, there might have more-than-one set of parameters, which have similar top1 performances in tuning, but have very different topN hypotheses. Empirically, we expect an ideal model to benefit the total N-best list. That is, better hypotheses should be assigned with higher ranks, and this might decrease the error risk of top1 result on unseen data.",
"PlackettBIBREF13 offered an easy-to-understand theory of modeling a permutation. An N-best list is assumedly generated by sampling without replacement. The $i$th hypothesis to sample relies on those ranked after it, instead of on the whole list. This model also supports a partial permutation which accounts for top $k$ positions in a list, regardless of the remaining. When taking $k$ as 1, this model reduces to a standard conditional probabilistic training, whose dual problem is actual the maximum entropy based BIBREF14. Although Och BIBREF0 substituted direct error optimization for a maximum entropy based training, probabilistic models correlate with BLEU well when features are rich enough. The similar claim also appears in BIBREF15. This also make the new method be applicable in large-scale features."
],
[
"Plackett-Luce was firstly proposed to predict ranks of horses in gambling BIBREF13. Let $\\mathbf {r}=(r_{1},r_{2}\\ldots r_{N})$ be $N$ horses with a probability distribution $\\mathcal {P}$ on their abilities to win a game, and a rank $\\mathbf {\\pi }=(\\pi (1),\\pi (2)\\ldots \\pi (|\\mathbf {\\pi }|))$ of horses can be understood as a generative procedure, where $\\pi (j)$ denotes the index of the horse in the $j$th position.",
"In the 1st position, there are $N$ horses as candidates, each of which $r_{j}$ has a probability $p(r_{j})$ to be selected. Regarding the rank $\\pi $, the probability of generating the champion is $p(r_{\\pi (1)})$. Then the horse $r_{\\pi (1)}$ is removed from the candidate pool.",
"In the 2nd position, there are only $N-1$ horses, and their probabilities to be selected become $p(r_{j})/Z_{2}$, where $Z_{2}=1-p(r_{\\pi (1)})$ is the normalization. Then the runner-up in the rank $\\pi $, the $\\pi (2)$th horse, is chosen at the probability $p(r_{\\pi (2)})/Z_{2}$. We use a consistent terminology $Z_{1}$ in selecting the champion, though $Z_{1}$ equals 1 trivially.",
"This procedure iterates to the last rank in $\\pi $. The key idea for the Plackett-Luce model is the choice in the $i$th position in a rank $\\mathbf {\\pi }$ only depends on the candidates not chosen at previous stages. The probability of generating a rank $\\pi $ is given as follows",
"where $Z_{j}=1-\\sum _{t=1}^{j-1}p(r_{\\pi (t)})$.",
"We offer a toy example (Table TABREF3) to demonstrate this procedure.",
"Theorem 1 The permutation probabilities $p(\\mathbf {\\pi })$ form a probability distribution over a set of permutations $\\Omega _{\\pi }$. For example, for each $\\mathbf {\\pi }\\in \\Omega _{\\pi }$, we have $p(\\mathbf {\\pi })>0$, and $\\sum _{\\pi \\in \\Omega _{\\pi }}p(\\mathbf {\\pi })=1$.",
"We have to note that, $\\Omega _{\\pi }$ is not necessarily required to be completely ranked permutations in theory and in practice, since gamblers might be interested in only the champion and runner-up, and thus $|\\mathbf {\\pi }|\\le N$. In experiments, we would examine the effects on different length of permutations, systems being termed $PL(|\\pi |)$.",
"Theorem 2 Given any two permutations $\\mathbf {\\pi }$ and $\\mathbf {\\pi }\\prime $, and they are different only in two positions $p$ and $q$, $p<q$, with $\\pi (p)=\\mathbf {\\pi }\\prime (q)$ and $\\pi (q)=\\mathbf {\\pi }\\prime (p)$. If $p(\\pi (p))>p(\\pi (q))$, then $p(\\pi )>p(\\pi \\prime )$.",
"In other words, exchanging two positions in a permutation where the horse more likely to win is not ranked before the other would lead to an increase of the permutation probability.",
"This suggests the ground-truth permutation, ranked decreasingly by their probabilities, owns the maximum permutation probability on a given distribution. In SMT, we are motivated to optimize parameters to maximize the likelihood of ground-truth permutation of an N-best hypotheses.",
"Due to the limitation of space, see BIBREF13, BIBREF16 for the proofs of the theorems."
],
[
"In SMT, let $\\mathbf {f}=(f_{1},f_{2}\\ldots )$ denote source sentences, and $\\mathbf {e}=(\\lbrace e_{1,1},\\ldots \\rbrace ,\\lbrace e_{2,1},\\ldots \\rbrace \\ldots )$ denote target hypotheses. A set of features are defined on both source and target side. We refer to $h(e_{i,*})$ as a feature vector of a hypothesis from the $i$th source sentence, and its score from a ranking function is defined as the inner product $h(e_{i,*})^{T}w$ of the weight vector $w$ and the feature vector.",
"We first follow the popular exponential style to define a parameterized probability distribution over a list of hypotheses.",
"The ground-truth permutation of an $n$best list is simply obtained after ranking by their sentence-level BLEUs. Here we only concentrate on their relative ranks which are straightforward to compute in practice, e.g. add 1 smoothing. Let $\\pi _{i}^{*}$ be the ground-truth permutation of hypotheses from the $i$th source sentences, and our optimization objective is maximizing the log-likelihood of the ground-truth permutations and penalized using a zero-mean and unit-variance Gaussian prior. This results in the following objective and gradient:",
"where $Z_{i,j}$ is defined as the $Z_{j}$ in Formula (1) of the $i$th source sentence.",
"The log-likelihood function is smooth, differentiable, and concave with the weight vector $w$, and its local maximal solution is also a global maximum. Iteratively selecting one parameter in $\\alpha $ for tuning in a line search style (or MERT style) could also converge into the global global maximum BIBREF17. In practice, we use more fast limited-memory BFGS (L-BFGS) algorithm BIBREF18."
],
[
"The log-likelihood of a Plackett-Luce model is not a strict upper bound of the BLEU score, however, it correlates with BLEU well in the case of rich features. The concept of “rich” is actually qualitative, and obscure to define in different applications. We empirically provide a formula to measure the richness in the scenario of machine translation.",
"The greater, the richer. In practice, we find a rough threshold of r is 5.",
"In engineering, the size of an N-best list with unique hypotheses is usually less than several thousands. This suggests that, if features are up to thousands or more, the Plackett-Luce model is quite suitable here. Otherwise, we could reduce the size of N-best lists by sampling to make $r$ beyond the threshold.",
"Their may be other efficient sampling methods, and here we adopt a simple one. If we want to $m$ samples from a list of hypotheses $\\mathbf {e}$, first, the $\\frac{m}{3}$ best hypotheses and the $\\frac{m}{3}$ worst hypotheses are taken by their sentence-level BLEUs. Second, we sample the remaining hypotheses on distribution $p(e_{i})\\propto \\exp (h(e_{i})^{T}w)$, where $\\mathbf {w}$ is an initial weight from last iteration."
],
[
"We compare our method with MERT and MIRA in two tasks, iterative training, and N-best list rerank. We do not list PRO BIBREF12 as our baseline, as Cherry et al.BIBREF10 have compared PRO with MIRA and MERT massively.",
"In the first task, we align the FBIS data (about 230K sentence pairs) with GIZA++, and train a 4-gram language model on the Xinhua portion of Gigaword corpus. A hierarchical phrase-based (HPB) model (Chiang, 2007) is tuned on NIST MT 2002, and tested on MT 2004 and 2005. All features are eight basic ones BIBREF20 and extra 220 group features. We design such feature templates to group grammars by the length of source side and target side, (feat-type,a$\\le $src-side$\\le $b,c$\\le $tgt-side$\\le $d), where the feat-type denotes any of the relative frequency, reversed relative frequency, lexical probability and reversed lexical probability, and [a, b], [c, d] enumerate all possible subranges of [1, 10], as the maximum length on both sides of a hierarchical grammar is limited to 10. There are 4 $\\times $ 55 extra group features.",
"In the second task, we rerank an N-best list from a HPB system with 7491 features from a third party. The system uses six million parallel sentence pairs available to the DARPA BOLT Chinese-English task. This system includes 51 dense features (translation probabilities, provenance features, etc.) and up to 7440 sparse features (mostly lexical and fertility-based). The language model is a 6-gram model trained on a 10 billion words, including the English side of our parallel corpora plus other corpora such as Gigaword (LDC2011T07) and Google News. For the tuning and test sets, we use 1275 and 1239 sentences respectively from the LDC2010E30 corpus."
],
[
"We conduct a full training of machine translation models. By default, a decoder is invoked for at most 40 times, and each time it outputs 200 hypotheses to be combined with those from previous iterations and sent into tuning algorithms.",
"In getting the ground-truth permutations, there are many ties with the same sentence-level BLEU, and we just take one randomly. In this section, all systems have only around two hundred features, hence in Plackett-Luce based training, we sample 30 hypotheses in an accumulative $n$best list in each round of training.",
"All results are shown in Table TABREF10, we can see that all PL($k$) systems does not perform well as MERT or MIRA in the development data, this maybe due to that PL($k$) systems do not optimize BLEU and the features here are relatively not enough compared to the size of N-best lists (empirical Formula DISPLAY_FORM9). However, PL($k$) systems are better than MERT in testing. PL($k$) systems consider the quality of hypotheses from the 2th to the $k$th, which is guessed to act the role of the margin like SVM in classification . Interestingly, MIRA wins first in training, and still performs quite well in testing.",
"The PL(1) system is equivalent to a max-entropy based algorithm BIBREF14 whose dual problem is actually maximizing the conditional probability of one oracle hypothesis. When we increase the $k$, the performances improve at first. After reaching a maximum around $k=5$, they decrease slowly. We explain this phenomenon as this, when features are rich enough, higher BLEU scores could be easily fitted, then longer ground-truth permutations include more useful information."
],
[
"After being de-duplicated, the N-best list has an average size of around 300, and with 7491 features. Refer to Formula DISPLAY_FORM9, this is ideal to use the Plackett-Luce model. Results are shown in Figure FIGREF12. We observe some interesting phenomena.",
"First, the Plackett-Luce models boost the training BLEU very greatly, even up to 2.5 points higher than MIRA. This verifies our assumption, richer features benefit BLEU, though they are optimized towards a different objective.",
"Second, the over-fitting problem of the Plackett-Luce models PL($k$) is alleviated with moderately large $k$. In PL(1), the over-fitting is quite obvious, the portion in which the curve overpasses MIRA is the smallest compared to other $k$, and its convergent performance is below the baseline. When $k$ is not smaller than 5, the curves are almost above the MIRA line. After 500 L-BFGS iterations, their performances are no less than the baseline, though only by a small margin.",
"This experiment displays, in large-scale features, the Plackett-Luce model correlates with BLEU score very well, and alleviates overfitting in some degree."
]
]
} |
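The full text above defines the Plackett-Luce permutation probability p(π) = Π_j p(r_{π(j)}) / Z_j with Z_j = 1 − Σ_{t<j} p(r_{π(t)}), and applies it to N-best lists whose hypothesis probabilities follow the exponential form p(e_i) ∝ exp(h(e_i)^T w). The Python sketch below illustrates only those two computations; the array shapes, feature values, and variable names are illustrative assumptions, not the authors' implementation, and the outer training loop (summing over source sentences, adding the Gaussian prior, running L-BFGS) is omitted.

```python
import numpy as np

def hypothesis_probs(features, w):
    """Exponential-form distribution over an N-best list:
    p(e_i) proportional to exp(h(e_i)^T w)."""
    scores = features @ w                  # shape (N,)
    scores -= scores.max()                 # numerical stability
    p = np.exp(scores)
    return p / p.sum()

def plackett_luce_log_prob(p, pi):
    """Log-probability of a (possibly partial) permutation pi:
    sum_j [ log p[pi[j]] - log Z_j ],  Z_j = 1 - sum_{t<j} p[pi[t]]."""
    log_prob, z = 0.0, 1.0                 # Z_1 = 1 trivially
    for idx in pi:
        log_prob += np.log(p[idx]) - np.log(z)
        z -= p[idx]                        # remove the chosen candidate
    return log_prob

# Toy usage: 4 hypotheses, 3 made-up features, a length-2 ground-truth permutation.
rng = np.random.default_rng(0)
features = rng.normal(size=(4, 3))         # hypothetical feature vectors h(e_i)
w = np.array([0.5, -0.2, 1.0])             # hypothetical weight vector
p = hypothesis_probs(features, w)
ground_truth = [2, 0]                      # indices ordered by sentence-level BLEU
print(plackett_luce_log_prob(p, ground_truth))
```

Taking a permutation of length 1 recovers the conditional-probability (maximum-entropy) special case mentioned in the paper's introduction.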
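The resampling heuristic in the "N-best Hypotheses Resample" section (keep the m/3 best and m/3 worst hypotheses by sentence-level BLEU, then draw the rest in proportion to exp(h(e)^T w)) could look roughly like the sketch below; the function signature and the assumption that the list is much larger than m are mine, not the paper's.

```python
import numpy as np

def resample_nbest(features, bleu, w, m, rng=None):
    """Pick m hypothesis indices from an N-best list: the m//3 best and m//3
    worst by sentence-level BLEU, plus the remainder sampled without
    replacement with probability proportional to exp(h(e)^T w).
    Assumes the list size is comfortably larger than m."""
    rng = rng or np.random.default_rng()
    order = np.argsort(bleu)                        # ascending by BLEU
    chosen = set(order[: m // 3]) | set(order[-(m // 3):])

    remaining = [i for i in range(len(bleu)) if i not in chosen]
    scores = features[remaining] @ w
    probs = np.exp(scores - scores.max())           # softmax over model scores
    probs /= probs.sum()
    extra = rng.choice(remaining, size=m - len(chosen), replace=False, p=probs)
    return sorted(chosen | set(int(i) for i in extra))
```

In the paper this resampling is applied only when the feature-richness ratio r falls below the rough threshold of 5.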
{
"question": [
"How they measure robustness in experiments?",
"Is new method inferior in terms of robustness to MIRAs in experiments?",
"What experiments with large-scale features are performed?"
],
"question_id": [
"d28260b5565d9246831e8dbe594d4f6211b60237",
"8670989ca39214eda6c1d1d272457a3f3a92818b",
"923b12c0a50b0ee22237929559fad0903a098b7b"
],
"nlp_background": [
"zero",
"zero",
"zero"
],
"topic_background": [
"unfamiliar",
"unfamiliar",
"unfamiliar"
],
"paper_read": [
"no",
"no",
"no"
],
"search_query": [
"",
"",
""
],
"question_writer": [
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c",
"258ee4069f740c400c0049a2580945a1cc7f044c"
],
"answers": [
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"We empirically provide a formula to measure the richness in the scenario of machine translation."
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"The log-likelihood of a Plackett-Luce model is not a strict upper bound of the BLEU score, however, it correlates with BLEU well in the case of rich features. The concept of “rich” is actually qualitative, and obscure to define in different applications. We empirically provide a formula to measure the richness in the scenario of machine translation.",
"The greater, the richer. In practice, we find a rough threshold of r is 5."
],
"highlighted_evidence": [
"The log-likelihood of a Plackett-Luce model is not a strict upper bound of the BLEU score, however, it correlates with BLEU well in the case of rich features. The concept of “rich” is actually qualitative, and obscure to define in different applications. We empirically provide a formula to measure the richness in the scenario of machine translation.",
"The greater, the richer. In practice, we find a rough threshold of r is 5"
]
},
{
"unanswerable": false,
"extractive_spans": [
"boost the training BLEU very greatly",
"the over-fitting problem of the Plackett-Luce models PL($k$) is alleviated with moderately large $k$"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"First, the Plackett-Luce models boost the training BLEU very greatly, even up to 2.5 points higher than MIRA. This verifies our assumption, richer features benefit BLEU, though they are optimized towards a different objective.",
"Second, the over-fitting problem of the Plackett-Luce models PL($k$) is alleviated with moderately large $k$. In PL(1), the over-fitting is quite obvious, the portion in which the curve overpasses MIRA is the smallest compared to other $k$, and its convergent performance is below the baseline. When $k$ is not smaller than 5, the curves are almost above the MIRA line. After 500 L-BFGS iterations, their performances are no less than the baseline, though only by a small margin."
],
"highlighted_evidence": [
"First, the Plackett-Luce models boost the training BLEU very greatly, even up to 2.5 points higher than MIRA. This verifies our assumption, richer features benefit BLEU, though they are optimized towards a different objective.\n\nSecond, the over-fitting problem of the Plackett-Luce models PL($k$) is alleviated with moderately large $k$. In PL(1), the over-fitting is quite obvious, the portion in which the curve overpasses MIRA is the smallest compared to other $k$, and its convergent performance is below the baseline. When $k$ is not smaller than 5, the curves are almost above the MIRA line."
]
}
],
"annotation_id": [
"8408c034789c854514cebd1a01819cafc3ffee55",
"9b2644f3909be4ec61d48c8644297775e139f448"
],
"worker_id": [
"a0b403873302db7cada39008f04d01155ef68f4f",
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": true,
"extractive_spans": [],
"yes_no": null,
"free_form_answer": "",
"evidence": [],
"highlighted_evidence": []
}
],
"annotation_id": [
"c60226e79eec043a0ddb74ae86e428bf6037b38d"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
},
{
"answer": [
{
"unanswerable": false,
"extractive_spans": [
"Plackett-Luce Model for SMT Reranking"
],
"yes_no": null,
"free_form_answer": "",
"evidence": [
"Evaluation ::: Plackett-Luce Model for SMT Reranking",
"After being de-duplicated, the N-best list has an average size of around 300, and with 7491 features. Refer to Formula DISPLAY_FORM9, this is ideal to use the Plackett-Luce model. Results are shown in Figure FIGREF12. We observe some interesting phenomena.",
"This experiment displays, in large-scale features, the Plackett-Luce model correlates with BLEU score very well, and alleviates overfitting in some degree."
],
"highlighted_evidence": [
" Plackett-Luce Model for SMT Reranking\nAfter being de-duplicated, the N-best list has an average size of around 300, and with 7491 features.",
"This experiment displays, in large-scale features, the Plackett-Luce model correlates with BLEU score very well, and alleviates overfitting in some degree."
]
}
],
"annotation_id": [
"3e2e4494d3cb470aa9c8301507e6f8db5dcf44ab"
],
"worker_id": [
"258ee4069f740c400c0049a2580945a1cc7f044c"
]
}
]
} |
{
"caption": [
"Table 2: PL(k): Plackett-Luce model optimizing the ground-truth permutation with length k. The significant symbols (+ at 0.05 level) are compared with MERT. The bold font numbers signifies better results compared to M(1) system.",
"Figure 1: PL(k) with 500 L-BFGS iterations, k=1,3,5,7,9,12,15 compared with MIRA in reranking."
],
"file": [
"4-Table2-1.png",
"5-Figure1-1.png"
]
} |