id (string, 7-12 chars) | sentence1 (string, 6-1.27k chars) | sentence2 (string, 6-926 chars) | label (4 classes) |
---|---|---|---|
train_100800 | For the two English-Swedish equivalent sentences: Cologne is located on both sides of the Rhine River and Köln ligger på båda sidorna av floden Rhen, Wikifier, on the English version, identifies Cologne and Rhine river as named entities and links them respectively to the en.wikipedia.org/wiki/Cologne and en.wikipedia.org/wiki/Rhine pages, while NEDforia, on the Swedish text, produces a ranked list of entity candidates for the words Köln and Rhen shown in Table 4. | this differs from the predicate sense, open.02, having the meaning of something beginning in a certain state, such as a stock opening at a certain price. | neutral |
train_100801 | It combines layout, discourse and terminological analyses to bridge the gap between the document layout and lexical resources. | table 4: Results for terms recognition vector-based strategies present interesting precisions, which seems to confirm a correlation between the lexical cohesion of terms and their likelihood of being involved in a relation. | neutral |
train_100802 | Some attempts have been made for improving these results (Oh et al., 2009;Yamada et al., 2009). | since it is expected that terms involved in the relation share the same lexical field, we also consider the cosine similarity between the term vectors. | neutral |
train_100803 | In the following, we explain it. | after applying λs.s(λx.x) to the semantic representation s i−1 , the semantic transition function t i is applied. | neutral |
train_100804 | This latter limitation is particularly critical since it leads to significant degradation in performance when the trained system is applied to texts from new domains. | roleF1 For the Brown (OOD) dataset, the role subsystem F1 improves significantly with POS and dependency parse information (+2.72%) while the sense subsystem benefits less (0.96%). | neutral |
train_100805 | An intermediate vector, δ, is calculated, which will contain the unnormalized log probability that any path through the trellis will pass through a particular state k for the particular word t. The δ vectors have a dimension of N, the number of tags, and they are re-used for the gradient calculation during back-propagation. | recent results using multilayer neural networks and pre-trained word embeddings have demonstrated high performance using a much smaller number of minimalist features. | neutral |
train_100806 | It is straightforward to see that J T (I) is an antimonotone function. | we then evaluate the EFMP approach to confirm that it provides much higher performance figures and is on par with the state of the art. | neutral |
train_100807 | Due to the tight coupling of multiple resolvers, a direct comparison with systems focusing on discourse deixis is hard. | negated parent/candidate We consider a verb token to be negated if it has a child connected with a negation label. | neutral |
train_100808 | Due to the tight coupling of multiple resolvers, a direct comparison with systems focusing on discourse deixis is hard. | current coreference resolution systems ignore this phenomenon. | neutral |
train_100809 | The degree to which two words are near-synonyms is proportional to the degree to which one can substitute for another in a given context (Inkpen and Hirst, 2006). | despite the lower overall emotionality of Hypothesis 2 All Pairs, our hypothesis that metaphorical instances are more emotional than the literal ones still holds. | neutral |
train_100810 | Previous research on metaphor annotation identified metaphorical uses of words in text, thus analysing data for only one sense at a time. | focus Word 1: attack Sentence 1: I attacked the problem as soon as I was up. | neutral |
train_100811 | But it is unable to distinguish between events and propositions and between predicate and sentence modifiers, and the formal interpretation of quantification in HLF can lead to contradiction (Schubert, 2015). | in this paper we limit ourselves to discussion of the "forward" version of gloss-derived axioms. | neutral |
train_100812 | SRL systems (Täckström et al., 2015;Roth and Woodsend, 2014) have benefited NLP applications, and many approaches have been proposed to transfer semantic roles from English to other languages without further reliance on manual annotation (Kozhevnikov and Titov, 2013;Padó and Lapata, 2009). | our goal is to distinguish implicit roles from other translation shifts that cause poor alignment in SRL projection. | neutral |
train_100813 | Accordingly, a transduction grammar is able to generate, translate or accept a transduction or a set of bisentences. | the English sentence can be divided into three major parts: "the Japanese islands", "run northeast to southwest" and "in the northwest part of the pacific ocean.". | neutral |
train_100814 | OWL concepts and a set of SWRL rules then derive what the text implies about (the author's view of) reality and what the reader might make of it. | an important trait of their approach lies in the fact that these annotations are always relative to sources mentioned in the text, typically subjects or objects of source-introducing predicates, for instance, "said the minister". | neutral |
train_100815 | Our work has further contributions. | for SE-WSI-CRP, we first learn word embeddings with word2vec 1 , then use them as pre-trained vectors to learn sense embeddings. | neutral |
train_100816 | In a rigorous evaluation, showed that neural word embeddings such as skipgram have an edge over traditional count-based models. | for instance, the word segmentation has 1105 positive dimensions in the vanilla configuration, but only 577 in the latter. | neutral |
train_100817 | They also define a normalized modified purity and normalized inverse purity for evaluation, explained below. | the verb senses are generated through a DPMM, and we do not have a gold-label assignment of VerbNet classes to each sense. | neutral |
train_100818 | For example, while an expected surge in unemployment is not a surge in unemployment, a policy that deals with an expected surge deals with a surge. | (3b) UNKNOWN Thrun predicted that leaps in artificial intelligence would lead to driverless cars on the roads by 2030. | neutral |
train_100819 | In our model, alignment between Q and R and the prior for the author of R with respect to marker class m correspond to feature vectors a and b, respectively, with a first feature indicating marker presence and a second feature accounting for marker frequency. | in the current setting, if w 2 = 0, we obtain the original measure where only the presence of a marker is recorded, without taking into account frequency and hence post length. | neutral |
train_100820 | 4 Both LDA-based models (Misra et al., 2009;Riedl and Biemann, 2012) and GRAPHSEG rely on corpus-derived word representations. | the GRAPHSEG algorithm significantly outperforms the topictiling method (p < 0.05, Student's t-test). | neutral |
train_100821 | All of these features are expected to be useful in stance classification as well. | observe that the word embedding features are beneficial for four out of five targets. | neutral |
train_100822 | This approach was therefore specific to the domain of discussion threads. | we then perform a study of the employability of LSTM and CNN for this kind of text classification task. | neutral |
train_100823 | This schema is composed of all types from diverse available ontologies. | the test collection is composed of a knowledge base of more than 350,000 entity categories obtained from the complete set of Wikipedia 2014 categories, but removing those containing non-ASCII characters. | neutral |
train_100824 | The Yago project (Suchanek et al., 2007) shares unary properties associating hundreds of thousands of descriptive categories manually created by the Wikipedia community to DBpedia entities (Auer et al., 2007). | the problem resides in the fact that places and demonyms have a high relatedness with common nouns. | neutral |
train_100825 | To measure the efficacy of vector arithmetic in a manner controlled for variances in prior vector similarity, we propose a baseline, defined for each analogy as the best ranking between the word most similar to w 2 and the word most similar to w 3 : For the above example, as dog is the most similar word to dogs, there is no improvement to be made upon baseline. | categories are grouped into a "semantic" and a "syntactic" subset, and results are often reported averaged over each rather than by category. | neutral |
train_100826 | During preprocessing we handled removal of punctuation. | • We demonstrate improvements to an integrated path-based and distributional model, showing that our morphology-aware neural network model, AntNET, performs better than state-of-the-art methods for antonym detection. | neutral |
train_100827 | For example, from the paraphrase pair (caught/not escape), we can derive the antonym pair (caught/escape) by just removing the negating word 'not'. | automated methods have been proposed to determine for a given term-pair (x, y), whether x and y are antonyms of each other, based on their occurrences in a large corpus. | neutral |
train_100828 | For algorithms based on sentence order, it is not clear whether the problem lies in the lack of wider collections of texts in some languages, or rather on the maximum amount of information about polarity that is learnt through a sentence-level distributional hypothesis. | we decoded the unsupervised representations multiple times through different initialisation of the MLP weights, hence we report both the mean value and its standard deviation. | neutral |
train_100829 | Context-aware models outperform context-agnostic models by up to 3 points 5 . | this yields three d-dimensional vectors for w l ( w l,max , w l,min , w l,mean ), and three for w r ( w r,max , w r,min , w r,mean ). | neutral |
train_100830 | We require a dataset to study hypernymy detection in context to satisfy the following desiderata: (1) the dataset should make it possible to assess the sensitivity of context-aware models to contexts that signal different word senses, and (2) the dataset should help quantify the extent to which models detect the asymmetric direction of hypernymy, rather than symmetric semantic similarity. | we propose to detect hypernymy between word meanings described by specific contexts. | neutral |
train_100831 | Our D-TopWords model will give more weights to the probability whether a sequence can be a word or not, and the TopWords model will more reliable on the external information. | the proposed model can extract these two types of words jointly. | neutral |
train_100832 | Recent efforts have, however, proposed semi-automated methods for resolving these inconsistencies (Chan et al., 2017). | the value of ρ was set to 50, and the LFN had a single hidden layer containing 1000 neurons with the tanh activation function. | neutral |
train_100833 | (2014b) our aim is to create dense input features to allow neural network architectures, as well as other machine learning algorithms, to be trained on them. | the second link is from token 6-7 in the gold standard which matches the model's prediction. | neutral |
train_100834 | Existing affect datasets are predominantly categorical. | • Some tweets have words clearly indicative of the focus emotion and some tweets do not. | neutral |
train_100835 | Within the NLP community, Best-Worst Scaling (BWS) has thus far been used only to annotate words: for example, for creating datasets for relational similarity (Jurgens et al., 2012), word-sense disambiguation (Jurgens, 2013), word-sentiment intensity (Kiritchenko et al., 2014), and phrase sentiment composition (Kiritchenko and Mohammad, 2016). | a promising avenue of future work is to test theories of emotion composition: e.g, whether optimism is indeed a combination of joy and anticipation, whether awe if fear and surprise, and so on, as some have suggested (Plutchik, 1980). | neutral |
train_100836 | Best results for each column are shown in bold: highest score by a feature set, highest score using a single lexicon, and highest score using feature set combinations. | most work on automatic emotion detection has focused on categorical classification (presence of anger, joy, sadness, etc.). | neutral |
train_100837 | In this work we demonstrate how online Deep Active Learning can be integrated with standard neural network based dialogue systems to enhance their open-domain conversational skills. | the weight λ associated with this metric is tuned to aggressively promote diversity between the first tokens of each of the K generated sequences, thereby avoiding similar beams like I don't know and I really don't know. | neutral |
train_100838 | The first predicts all negative attachments, which yields an accuracy of 85.8% on the test set (with F1 of 0). | then, for each example e, we compute the Ad-dCos lexical substitutability between w p and the target word in context C e . | neutral |
train_100839 | Target-Only gets very high scores on FrameNet dataset. | we use cross-validation on the training set and we observe the model performs better when we update the word vectors which is different from the preceding experimental setup. | neutral |
train_100840 | All dependents related to the target, their POS tags, dependency relations, lemmas, NE tags and the target itself will be extracted as features. | the number of training instances per target is limited. | neutral |
train_100841 | Frame semantics currently used in NLP have a rich history in linguistic literature. | the combined 20 properties of Reisinger et al. | neutral |
train_100842 | Also, its sharing of training data among relations should lead to more reliable learning for infrequent relations. | in this paper, we assess to what extent relational attributes of entities are easily accessible from word embedding space. | neutral |
train_100843 | In addition, all subjects were native speakers of Standard American English, while half the participants in the CXD corpus were native speakers of Mandarin Chinese. | we compared the cosine similarity of the turn and the question to the cosine similarity of any previous identified matches. | neutral |
train_100844 | (2) so that the predicate matrix does not learn a direct transformation from an argument vector to a phrase vector, but rather a difference between these vectors: This justifies the addition of predicate vector in (1). | we investigate the application of PLF to Croatian language, a Slavic language with relatively free word order. | neutral |
train_100845 | Table 2 shows that our model outperforms the baseline Word2Vec Skip-gram model (in fifth row from bottom). | the models are trained using Stochastic Gradient Descent (SGD). | neutral |
train_100846 | argument roles, in our model. | our system first performs clustering of verb arguments to identify their possible semantic roles and then computes the level of association between a given argument role and the verb, thus deriving the structure of the semantic frame in which the verb participates. | neutral |
train_100847 | Titov and Khoddam (2015) proposed a reconstruction-error minimization approach using a log-linear model to predict roles given syntactic and lexical features and a probabilistic tensor factorization model to identify argument fillers based on the role predictions and the predicate. | these works point out the coverage limitations of the hand-constructed FrameNet database, suggesting that a data-driven frame acquisition method is needed to enable the integration of frame semantics into real-world NLP applications. | neutral |
train_100848 | 8 To ensure the quality of workers, we applied a qualification test and required a 99% approval rate for at least 1,000 prior tasks. | finally, we rank the predicate pairs according to the number of instances in which they were aligned ( §3.4). | neutral |
train_100849 | We extract propositions from news tweets using PropS , which simplifies dependency trees by conveniently marking a wide range of predicates (e.g, verbal, adjectival, nonlexical) and positioning them as direct heads of their corresponding arguments. | to demonstrate the added value of our resource, we show that even in its current size, it already contains accurate predicate pairs which are absent from the existing resources. | neutral |
train_100850 | At the current stage of the resource there is a long tail of paraphrases that appeared few times. | we run PropS over dependency trees predicted by spaCy 6 and extract predicate types (as in Table 1) composed of verbal predicates, datives, prepositions, and auxiliaries. | neutral |
train_100851 | (2017) evaluated an unstructured QA system against semantic parsing benchmarks. | note that TF-IDF is by far the most impactful feature, leading to a large drop of 12 points in performance. | neutral |
train_100852 | Secondly, we introduce a semantic composition weight that is used to model the reading times of metonymic sentences reported in previous experimental studies and to predict the covert event in a binary classification task (Section 3). | we totally extracted 1,043,766 events that include at least one of the words of the evaluation datasets. | neutral |
train_100853 | Competition between different adposition construals accounts for many of the alternations that are near-paraphrases, but potentially involve slightly different nuances of meaning (e.g., "talk to someone" vs. "talk with someone"; "angry at someone" vs. "angry with someone"). | scenes of emotion and perception (Dirven, 1997;Osmond, 1997;Radden, 1998) provide a compelling case for the construal analysis. | neutral |
train_100854 | Every token was initially labeled by at least two independent annotators, and differences were adjudicated by experts. | the semantics of the phrase also includes EXPE-RIENCER-thus, it seems inappropriate to force a choice between EXPERIENCER and POSSESSOR for this token. | neutral |
train_100855 | We gratefully acknowledge the Alexander von Humboldt Foundation for an Anneliese-Maier prize and grant to Pelletier, and the Deutsche Forschungsgemeinschaft (KI-759/5) grant to Kiss, for their support of the work reported here and our other reports. | there are six of these tests in all, four syntactic and two semantic, chosen for their relevance to various of the issues that are salient in the studies of +MASS and +COUNt terms. | neutral |
train_100856 | b. invention: the process of generating some new type of thing vs. the actual kind of thing that has been generated. | without a detailed account of the wide range of senses of the component nouns, it will be impossible to give the desired group of semantic rules. | neutral |
train_100857 | COMPLETE requires that each of the nodes ofē i in R(p) have been identified, these nodes match up with the corresponding external nodes of the subgraph J, and that the subgraphs I and J are edge-disjoint. | (strings) by restricting the form of the productions in the grammars. | neutral |
train_100858 | We count instantiations of COMPLETE for an upper bound on complexity (McAllester, 2002), using similar logic to (Chiang et al., 2013). | given a graph g, a path in g from a node v to a node v is a sequence The length of this path is k. A path is terminal if every edge in the path has a terminal label. | neutral |
train_100859 | Then, to introduce distractors, we sample a random nominal from a unigram noise distribution. | according to his model, the predicate embeddings are distributed as: where W is a K × d matrix of weights and σ V is a hyperparameter. | neutral |
train_100860 | In practise, the likelihood component is optimized using negative sampling with EM for the latent slots. | while this step is not strictly required, we found that it leads to generally better results than random initialization given the relatively small size of our predicate-argument training corpus. | neutral |
train_100861 | Since this is a data driven approach, we identify non-core roles as well, if they occur with predicates often enough in the data. | for example, we have relational indicators such as "L-on", "R-before", "L-because", "R-None", etc. | neutral |
train_100862 | In order to circumvent the manual creation of patterns, nu-merous approaches have been investigated to derive patterns automatically. | for our experiments on this corpus, we divided the full set of patterns in the corpus (around 500 per relation) into two equally-sized portions, one for creating training data for the RTE engine, and one for evaluating pattern selection methods. | neutral |
train_100863 | For the marriage relation, the results not only show that our RTE engine adaptations yielded a much higher recall (with almost no loss in precision) than the original implementation (thanks to an increased number of relevant alignments), but also that pattern selection can in fact benefit from the graph structure: Entailment graphs created using MultiAlignAdapted achieved much better performance than a selection based on pair-wise entailment computation using the same RTE model. | it cannot express that the relation expressed by pattern P3 entails the relation expressed by pattern P1, but not vice versa. | neutral |
train_100864 | Tree-based LSTM models have been shown to often perform better than purely sequential bi-LSTMs (Tai et al., 2015;Miwa and Bansal, 2016), but depend on parsed input. | adding genre information does not help, but adding attention within the local clause yields an improvement of 2.44 percentage points (pp). | neutral |
train_100865 | For writing samples dataset, syntactic features are found to be the most successful in classification whereas for the less structured Twitter dataset, a combination of features performed the best. | negative sentiment could also be prominent in other psychiatric diseases such as post-traumatic stress disorder (PTSD) , as such, by itself, it may not be a discriminatory feature for schizophrenia patients. | neutral |
train_100866 | We thank Dr. Michael Roth for conducting the semantic role labelling evaluation using features from our model to see whether it is beneficial. | we design a multi-task model (NNRF-MT) which shares the event participant embedding for the event context and tackles role-filler prediction and semantic role classification simultaneously. | neutral |
train_100867 | The thematic fit judgements from the tasks discussed in section 6 only contain ratings of the fit between the predicate and one role-filler. | accuracy p-value NNRF-MT 89.1 -RoFa-MT 94.8 < 0.0001 ResRoFa-MT 94.7 < 0.0001 Table 1: Semantic role classification results for the three multi-task architectures. | neutral |
train_100868 | Because we use random parameter initialisation, to observe its effect to our evaluations, we train 10 instances of each model and report average performance (we do not use these 10 models as an ensemble method such as labelling by majority voting). | other event participants contained in a clause can affect human expectations of the upcoming role-fillers. | neutral |
train_100869 | Alternatively to applying nonparametric operators on word embeddings to create sentence embeddings, recurrent neural networks can learn the optimal weight matrix that can produce an accurate sentence embedding when repeatedly applied to the constituent word embeddings. | our experimental results show that the proposed NWS scores outperform baseline methods, previously proposed word salience scores and sentence embedding methods on a range of benchmark datasets selected from past SemEval STS tasks. | neutral |
train_100870 | We use SGNS, CBOW and GloVe word embedding learning algorithms to learn 300 dimensional word embeddings from the Toronto books corpus. | likewise, for a randomly selected sentence S j / ∈ {S i−1 , S i , S i+1 }, the expect similarity between S j and S i would be low. | neutral |
train_100871 | Using t and h(s i , s j ) above, we compute the cross-entropy error E(t, (S i , S j )) for an instance (t, (S i , S j )) as follows: Next, we backpropagate the error gradients via the network to compute the updates as follows: Here, we drop the arguments of the error and simply write it as E to simplify the notation. | in principle, we can learn NWS scores using any pre-trained set of word embeddings. | neutral |
train_100872 | The two sub-sections below present the results from the analysis for gender bias and race bias, respectively. | the bias can originate from any or several of these parts. | neutral |
train_100873 | The Equity Evaluation Corpus and the proposed methodology to examine bias are not meant to be comprehensive. | 8 Each point (L, M, G) on the plot corresponds to the difference in scores predicted by the system on one sentence pair. | neutral |
train_100874 | (2018) continued the work in this direction recently and applied convolutional neural networks to cross-lingual EL. | table 3 lists the main selected hyperparameters for the VCG model 6 and we also report the results for each model's best configuration on the development set in table 2. | neutral |
train_100875 | We extract all entities that are mentioned in the question from the SPARQL query. | the parameter sharing between mention encoding and entity label encoding ensures that the representation of a mention is similar to the entity label. | neutral |
train_100876 | We extract gold entities for each question from the SPARQL query and map them to Wikidata. | such approaches can not handle entities of types other than those that are supplied by the named entity recognizer. | neutral |
train_100877 | (2014), we expect concrete target words to have significantly less diverse context dimensions than abstract target words, as the former should co-occur within a restricted set of context words. | more recently, embodied theories of cognition have suggested that word meanings are grounded in the sensory-motor system (Barsalou and Wiemer-Hastings, 2005;Glenberg and Kaschak, 2002;Hill et al., 2014;Pecher et al., 2011). | neutral |
train_100878 | In this work, we provide a detailed characterisation of the distributional nature of abstract and concrete words across 16,620 English nouns, verbs and adjectives. | we expect concrete words to occur in a limited set of different contexts because there is only a restricted amount of words that have a high probability to fit a specific context. | neutral |
train_100879 | For future work, we would like to evaluate the performance of EmoWordNet on larger datasets and we would like to improve the accuracy of the recognition model. | while sentiment is usually represented by three labels namely positive, negative or neutral, several representation models exist for emotions such as Ekman representation (Ekman, 1992) (happiness, sadness, fear, anger, surprise and disgust) or Plutchik model (Plutchik, 1994) that includes trust and anticipation in addition to Ekman's six emotions. | neutral |
train_100880 | Mohammad and Turney (2013) presented challenges that researchers face for developing emotion lexicons and devised an annotation strategy to create a good quality and inexpensive emo-tion lexicon, EmoLex, by utilizing crowdsourcing. | the second approach is cheap and results in large scale emotion lexicons but with lower accuracy compared to manually developed emotion lexicons in terms of accurately representing the emotion of the term. | neutral |
train_100881 | So it is not surprising that cross-language extensions of such models (crosslanguage word embeddings) rapidly gained popularity in the NLP community (Vulić and Moens, 2013), proving their effectiveness in certain crosslanguage NLP tasks (Upadhyay et al., 2016). | the task is to predict the similarity score for a word a in language A and a word b in language B. | neutral |
train_100882 | Similarly, we observed that increasing the number of interaction types M does not guarantee consistent performance gain. | for each Implementation-wise, we vertically stack G for M times to constructG ∈ R M d×3 , and use each row g i as input to Eq. | neutral |
train_100883 | We are grateful to Tagyoung Chung and Valeria de Paiva for helpful comments on presentation, though they should not be held responsible for the views expressed. | but the need to assert a negative rarely arises: better to wait until the corresponding incompatible positive is known, or as a last resort make up a positive fact that is incompatible with negative (e.g. | neutral |
train_100884 | This is not directly comparable to existing work because it is a unique setup; but we note that this is likely a difficult task because of the large output space. | in our preliminary work, we experimented with CNN and RNNbased architectures but their performance was inferior to the model described here both in terms of accuracy and speed. | neutral |
train_100885 | Problems Our end-to-end system is sound in the sense that it polarizes the correctly input semantic representations. | we are able to take simple sentences all the way through. | neutral |
train_100886 | The ability of the system to successfully cluster instances varies from one gold frame category to another one. | for input strings in each cluster E i , we instantiate a new G i and perform parameter fitting and splitting independently of other E i s. The corresponding probabilistic G i is initialized by assigning random parameters to its rules and then smoothing them ( § 3.1.3) by the fitted parameters for G. We apply the aforementioned process several times, until the number of independently generated clusters is at least twice as large as |T v |. | neutral |
train_100887 | Then with (1, 2)=before and (2, 3)=vague, we can see that (1, 3) cannot be uniquely determined, but it is restricted to be selected from {bef ore, vague} rather than the entire label set. | 3 Joint Learning on F and P In this work, we study two learning paradigms that make use of both F and P. In the first, we simply treat those edges that are annotated in P as edges in F so that the learning process can be performed on top of the union of F and P. This is the most straightforward approach to using F and P jointly and it is interesting to see if it already helps. | neutral |
train_100888 | Edges are presented one-byone and the annotator has to choose a label for it (note that there is a vague label in case the TempRel is not clear or does not exist). | in practice, every annotation comes at a cost, either time or the expenses paid to annotators, and as more edges are annotated, the marginal "benefit" of one edge is going down (an extreme case is that an edge is of no value if it can be inferred from existing edges). | neutral |
train_100889 | 3 While incorporating transitivity constraints in inference is widely used, Ning et al. | since annotators are not required to label all the edges in these datasets, it is less labor intensive to collect P than to collect F. existing TempRel extraction methods only work on one type of datasets (i.e., either F or P), without taking advantage of both. | neutral |
train_100890 | (1), as was done by (Bramsen et al., 2006;Chambers and Jurafsky, 2008;Denis and Muller, 2011;Do et al., 2012;Ning et al., 2017). | p does not contain any vague training examples, so System 3 would only predict specific TempRels, leading to a low precision. | neutral |
train_100891 | In this work, we adopt a corpus linguistics approach in a similar way to Antoniak and Mimno (2018), Hamilton et al. | the POS also has a high impact on the model trained. | neutral |
train_100892 | by observing a few nearest neighbors). | concerning these two last features (polysemy and entropy) experiments confirm that distributional semantics has more difficulty in representing the meaning of words that appear in a variety of contexts. | neutral |
train_100893 | For ACL and PLOS we also noticed that words belonging to the transdisciplinary scientific lexicon remained stable (conjunctive adverbs such as nevertheless, moreover, furthermore and scientific processes such as hypothethize, reason, describe). | we are aware that this measure assess only a part of what has changed from one model to the other based on the number of neighbors observed. | neutral |
train_100894 | This setup is more realistic than the random and lexical split setups, in which the classifiers can benefit from memorizing verbatim words (random) or regions in the vector space (lexical split) that fit a specific slot of each relation. | this setup is more realistic than the random and lexical split setups, in which the classifiers can benefit from memorizing verbatim words (random) or regions in the vector space (lexical split) that fit a specific slot of each relation. | neutral |
train_100895 | 2016where we apply an long short term memory (LSTM) (Hochreiter and Schmidhuber, 1997) layer over the characters of a word and then concatenate the output with a word embedding to create a word representation that combines both character-level and word-level information. | hand-crafting these lexicons is time-consuming and expensive and so these resources are often either unavailable or sparse for many domains and languages. | neutral |
train_100896 | The score p(l|w) of the words shown for SNLI deviate strongly, regardless of the label. | we find that this baseline can perform above the majority-class prior across most of the ten examined datasets. | neutral |
train_100897 | We collected annotations for 4586 contextcontinuation pairs, collecting the following three criteria for each pair: • Overall quality (O): a subjective judgment by the annotator of the quality of the continuation, i.e., roughly how much the annotator thinks the continuation adds to the story. | we develop simple systems based on recurrent neural networks and similarity-based retrieval and train them on the ROC story dataset . | neutral |
train_100898 | We attempted to replace the PMI features by similar features based on word embedding similarity, following the argument that skip-gram embeddings with negative sampling form an approximate factorization of a PMI score matrix (Levy and Goldberg, 2014). | we use the initial data release of 45,502 stories. | neutral |
train_100899 | Our model can be applied to any new terms in any domain, given some context of the term usage and their domainagnostic definitions. | this is expected to generate more indicative features than a similar work (Shwartz et al., 2016), which simply concatenated distributional models with path-based models; • HYPERDEF employs definitions to provide richer information for the terms. | neutral |
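
If this preview corresponds to a dataset hosted on the Hugging Face Hub, the rows above can be loaded and inspected with the `datasets` library. The snippet below is a minimal sketch under that assumption; the repository identifier `your-org/acl-nli-neutral-pairs` is a placeholder, not the dataset's actual name.

```python
from collections import Counter

from datasets import load_dataset

# Placeholder repository id -- substitute the actual Hub name of this corpus.
DATASET_NAME = "your-org/acl-nli-neutral-pairs"

# Load the train split; each example carries the four columns shown in the
# preview: id (string), sentence1 (string), sentence2 (string), label (one of 4 classes).
dataset = load_dataset(DATASET_NAME, split="train")

# Inspect the schema and a single row.
print(dataset.features)        # column names and feature types
print(dataset[0]["id"])        # e.g. "train_100800"
print(dataset[0]["label"])     # e.g. "neutral"

# Label distribution over the split (the preview happens to show only "neutral" rows).
print(Counter(dataset["label"]))
```
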