id: string, length 7 to 12
sentence1: string, length 6 to 1.27k
sentence2: string, length 6 to 926
label: string class, 4 values
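The rows that follow use this four-field schema (id, sentence1, sentence2, label). As a minimal sketch of how a dataset with this schema could be loaded and inspected, assuming it is published as a Hugging Face dataset, the snippet below uses a hypothetical repository path; substitute the actual repository name or local data files.

```python
# Minimal sketch: load and inspect a dataset with the schema above
# (id, sentence1, sentence2, label). "user/contrastive-pairs" is a
# placeholder path, not the actual repository name.
from collections import Counter

from datasets import load_dataset

dataset = load_dataset("user/contrastive-pairs", split="train")

# Each example pairs two sentences with a 4-way relation label
# (e.g., "contrasting" in the rows shown below).
example = dataset[0]
print(example["id"], example["label"])
print("sentence1:", example["sentence1"])
print("sentence2:", example["sentence2"])

# Label distribution over the 4 classes. This assumes "label" is stored as a
# plain string column, as the "string class, 4 values" header suggests; if it
# were a ClassLabel feature, the counts would be over integer ids instead.
print(Counter(dataset["label"]))
```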
train_300
Note that in the absence of labeled training data, two of the random variables, the English word and the Spanish word, are observed, while the sense variable is not.
we can derive the possible values for our sense labels from WordNet, which gives us the possible senses for each English word.
contrasting
train_301
For the Sense Model, these probabilities form the initial priors over the senses, while all English (and Spanish) words belonging to a sense are initially assumed to be equally likely.
initialization of the Concept Model using the same knowledge is trickier.
contrasting
train_302
This may be counter-intuitive since Concept Model involves an extra concept variable.
the dissociation of Spanish and English senses can significantly reduce the parameter space.
contrasting
train_303
Second, Hownet and Rocling sometimes use adjectives or non-words as category glosses, such as animate and LandVehicle etc., which have no WordNet nominal hypernyms at all.
those adjectives or non-words usually have straightforward meanings and can be easily reassigned to an appropriate WordNet category.
contrasting
train_304
Correlation between the automatic tagging precision and STE is expected to be high if SALAAM has good quality translations and good quality alignments.
this correlation is low.
contrasting
train_305
We observe cases with a low PerpDiff, such as holiday, yet the PR is low.
items such as art have a relatively high PerpDiff.
contrasting
train_306
Accordingly, in this case, STE coupled with the noisy tagging could have resulted in the low PR.
for circuit, the STE value for its respective condition is high for the manually annotated data.
contrasting
train_307
It seems that the f-structure-based approach is more abstract (99 LDD path types against approximately 9,000 tree-fragment types in (Johnson, 2002)) and fine-grained in its use of lexical information (subcat frames).
to Johnson's approach, our LDD resolution algorithm is not biased.
contrasting
train_308
Earlier HPSG work (Tateisi et al., 1998) is based on independently constructed hand-crafted XTAG resources.
we acquire our resources from treebanks and achieve substantially wider coverage.
contrasting
train_309
Knowledge of these hidden relationships is in turn essential to semantic interpretation of the kind practiced in the semantic parsing (Gildea and Jurafsky, 2002) and QA (Pasca and Harabagiu, 2001) literatures.
work in statistical parsing has for the most part put these needs aside, being content to recover surface context-free (CF) phrase structure trees.
contrasting
train_310
2 NEGRA's original annotation is as dependency trees with phrasal nodes, crossing branches, and no empty elements.
the distribution includes a context-free version produced algorithmically by recursively remapping discontinuous parts of nodes upward into higher phrases and marking their sites of origin.
contrasting
train_311
Such a low value depends on the fact that different arguments tend to appear in different structures.
if two structures differ only for a few nodes (especially terminals or near terminal nodes) the similarity remains quite high.
contrasting
train_312
Finally, from the above observations follows that the PAF representation may be used with PAK to classify arguments.
sCF lacks important information; thus, alone it may be used only to classify verbs into syntactic categories.
contrasting
train_313
In fact, in PropBank arguments are annotated consistently with syntactic alternations (see the Annotation guidelines for Prop-Bank at www.cis.upenn.edu/∼ace).
frameNet roles represent the final semantic product and they are assigned according to semantic considerations rather than syntactic aspects.
contrasting
train_314
Although consistently with prior research we find that the combined feature sets usually outperform the speech-only feature sets, the combined feature sets frequently perform worse than the lexical-only feature sets.
we will see in Section 6 that combining knowledge sources does improve prediction performance in human-human dialogues.
contrasting
train_315
Moreover, it is no longer the case that every feature set performs as well as or better than their baselines (Table 9: %Corr., NPN Consensus, MAJ=48.35%); within the "-id" sets, NnN "sp" and EnE "lex" perform significantly worse than their baselines.
again we see that the "+id" sets do consistently better than the "-id" sets and moreover always outperform the baselines.
contrasting
train_316
Every feature set performs significantly better than their baselines.
unlike the computerhuman data, we don't see the "+id" sets performing better than the "-id" sets; rather, both sets perform about the same.
contrasting
train_317
attempt to derive relative subcategorization frequency for individual predicates".
the system of (Briscoe and Carroll, 1997) distinguishes 163 verbal subcategorisation classes by means of a statistical shallow parser, a classifier of subcategorisation classes, and a priori estimates of the probability that any verb will be a member of those classes.
contrasting
train_318
(2004) found that the majority of the recorded transitions in the configuration of Centering used in this study are NOCBs.
we also explained in section 2.3 that what really matters when trying to determine whether a text might have been generated only paying attention to Centering constraints is the extent to which it would be possible to 'improve' upon the ordering chosen in that text, given the information that the text structuring algorithm had to convey.
contrasting
train_319
Subsequent work on referring expression generation has expanded the logical framework to allow reference by negation (the dog that is not black) and references to multiple entities (the brown or black dogs) (van Deemter, 2002), explored different search algorithms for finding the minimal description (e.g., Horacek (2003)) and offered different representation frameworks like graph theory (Krahmer et al., 2003) as alternatives to AVMs.
all these approaches are based on very similar formalisations of the problem, and all make the following assumptions: 1.
contrasting
train_320
For attributes, we defined DQ = CQ − SQ.
as the linguistic realisation of a relation is a phrase and not a word, we would like to normalise the discriminating power of a relation with the length of its linguistic realisation.
contrasting
train_321
We define the parsability of a word R(w) as the ratio of the number of times the word occurs in a sentence with a successful parse (C(w|OK)) and the total number of sentences that this word occurs in (C(w)): Thus, if a word only occurs in sentences that cannot be parsed successfully, the parsability of that word is 0.
if a word only occurs in sentences with a successful parse, its parsability is 1.
contrasting
train_322
: word break, named entity recognition and morphological analysis) in a unified approach.
it is difficult to develop a training corpus where new words are annotated because "we usually do not know what we don't know".
contrasting
train_323
Whenever we deploy the segmenter for any application, we need to customize the output of the segmenter according to an application-specific standard, which is not always explicitly defined.
it is often implicitly defined in a given amount of application data (called adaptation data) from which the specific standard can be partially learned.
contrasting
train_324
A number of studies are related to the work we presented, most specifically work on parallel-text based "information projection" for parsing (Hwa et al., 2002), but also grammar induction work based on constituent/distituent information (Klein and Manning, 2002) and (language-internal) alignmentbased learning (van Zaanen, 2000).
to our knowledge the specific way of bringing these aspects together is new.
contrasting
train_325
In those experiments, the model above was trained on over 30M words of raw newswire, using EM in an entirely unsupervised fashion, and at great computational cost.
as shown in figure 3, the resulting parser predicted dependencies at below chance level (measured by choosing a random dependency structure).
contrasting
train_326
Clark (2001) and Klein and Manning (2002) show that this approach can be successfully used for discovering syntactic constituents as well.
as one might expect, it is easier to cluster word sequences (or word class sequences) than to tell how to put them together into trees.
contrasting
train_327
DA happily obtained a 10% reduction in tag error rate on training data, and an 11% reduction on test data.
it did not manage to improve likelihood over EM.
contrasting
train_328
It turned out that the implementation was fast, but many operations wasted a lot of memory as their resulting transducer had been fully expanded in memory.
we plan to also make this initial version publically available.
contrasting
train_329
The disadvantage of using pipes is that automata must be serialized and get fully expanded by the next executable in chain.
an advantage of multiple executables is that memory does not get fragmented through the interaction of different algorithms.
contrasting
train_330
If one equates optimum parameter estimation with finding the global maximum for the likelihood of the training data, then this result would seem to show no improvement is possible.
in virtually every application of statistical techniques in natural-language processing, maximizing the likelihood of the training data causes overfitting, resulting in lower task performance than some other estimates for the model parameters.
contrasting
train_331
In comparison to parallel corpora, i.e. corpora which are mutual translations, comparable corpora have not received much attention from the research community, and very few methods have been proposed to extract bilingual lexicons from such corpora.
except for those found in translation services or in a few international organisations, which, by essence, produce parallel documentations, most existing multilingual corpora are not parallel, but comparable.
contrasting
train_332
Such a combination could involve Fisher kernels with different latent classes in a first step, and a final combination of the different methods.
the results we obtained so far suggest that the rank of the candidates is an important feature.
contrasting
train_333
A lot of effort has been spent on constructing translation lexicons from domain-specific corpora in an automatic way (Melamed, 2000;Smadja et al., 1996;Kupiec, 1993).
such methods encounter two fundamental problems: translation of regional variations and the lack of up-to-date and high-lexical-coverage corpus source, which are worthy of further investigation.
contrasting
train_334
Retrieving only 100 snippets might not provide balanced or sufficient data for translation extraction.
the inclusion rates for the three regions were close in the top-5.
contrasting
train_335
In general, the translations in the two types were adapted to the use in different regions.
10% (15/147) and 8% (12/143) of the translations in types organization and others, respectively, had regional variations, because most of the terms in type others were general terms such as "bank" and "movies" and in type organization many local companies in Taiwan had no translation variations in mainland China and HK.
contrasting
train_336
In PCFGs and PPDAs, probabilities are assigned to rules or transitions, respectively.
these probabilities cannot be chosen entirely arbitrarily.
contrasting
train_337
It is not difficult to see that there exists a bijection f from complete computations of A to complete derivations of G , and that we have for each complete computation c. Thus (G , p G ) is consistent.
note that (G , p G ) is not proper.
contrasting
train_338
As discussed in the introduction, the properness condition reduces the number of parameters of a PPDT.
our results show that if the PPDT has the CPP and the SPP then the properness assumption is not restrictive, i.e., by lifting properness we do not gain new distributions with respect to those induced by the underlying PCFG.
contrasting
train_339
For a total of 454 questions (91%), a simple reciprocal constraint could be generated.
for 61 of those, the reciprocal question was sufficiently non-specific that the sought reciprocal answer was unlikely to be found in a reasonably-sized hit-list.
contrasting
train_340
Altogether, we see that some of the very problems we aimed to skirt are still present and need to be addressed.
we have shown that even disregarding these issues, QDC was able to provide substantial improvement in accuracy.
contrasting
train_341
These parameters can be easily determined in PDT, as each incoming case is classified into each class with a probability.
the incoming cases in NBC are grouped into one class which is assigned the highest score.
contrasting
train_342
Our informativeness-based measure is similar to these works.
these works just follow a single criterion.
contrasting
train_343
Moreover, the representativeness measure we use is relatively general and easy to adapt to other tasks, in which the example selected is a sequence of words, such as text chunking, POS tagging, etc.
(Brinker 2003) first incorporate diversity in active learning for text classification.
contrasting
train_344
Their work is similar to our local consideration in Section 2.3.2.
he didn't further explore how to avoid selecting outliers to a batch.
contrasting
train_345
As we have shown earlier, BLEU-2 cannot differentiate S2 from S3 ("the gunman kill police").
s2 has a ROUGE-L score of 3/4 = 0.75 and s3 has a ROUGE-L score of 2/4 = 0.5, with β = 1.
contrasting
train_346
The Pearson's ρ correlation values in the Stem set of the Fluency Table indicate that BLEU12 has the highest correlation (0.93) with fluency.
it is statistically indistinguishable with 95% confidence from all other metrics shown in the Case set of the Fluency Table except for WER and GTM10.
contrasting
train_347
It can be seen from the table that there is a strong positive correlation between the baseline BLEU scores and human scores for Fluency: r(2)=0.9807, p <0.05.
the correlation with Adequacy is much weaker and is not statistically significant: r(2)= 0.5918, p >0.05.
contrasting
train_348
Future work will apply our approach to evaluating MT into languages other than English, extending the experiment to a larger number of MT systems built on different architectures and to larger corpora.
the results of the experiment may also have implications for MT development: significance weights may be used to rank the relative "importance" of translation equivalents.
contrasting
train_349
Almost all sense disambiguation methods are heavily dependent on manually compiled lexical resources.
these lexical resources often miss domain-specific word senses, and many new words are not included at all.
contrasting
train_350
The naïve Bayes model was found to be the most accurate classifier in a comparative study using a subset of Senseval-2 English lexical sample data by Yarowsky and Florian (2002).
the maximum entropy (Jaynes, 1978) was found to yield higher accuracy than naïve Bayes in a subsequent comparison by Klein and Manning (2002), who used a different subset of either Senseval-1 or Senseval-2 English lexical sample data.
contrasting
train_351
The label and string position method is useful if one sees the task as inserting empty nodes into a string, and thus is quite useful for evaluating systems that detect empty categories without parse trees, as in Dienes and Dubey (2003a).
if the task is to insert empty nodes into a tree, then the method leads both to false positives and to false negatives.
contrasting
train_352
If a system places the trace in position 2 instead, the string position method will count it as an error, since 1 and 2 have different string positions.
it is not at all clear what it means to say that one of those two positions is correct and the other not, since there is no semantic, grammatical, or textual indicator of its exact position.
contrasting
train_353
The items of Translator CT have a -dimensional label vector, as usual.
their d-span vectors are only -dimensional, because it is not necessary to constrain absolute word positions in the output dimensions.
contrasting
train_354
The move to MCSG is motivated by our desire to more perspicuously account for certain syntactic phenomena that cannot be easily captured by context-free grammars, such as clitic climbing, extraposition, and other types of longdistance movement (Becker et al., 1991).
mCSG still observes some restrictions that make the set of languages it generates less expensive to analyze than the languages generated by (properly) context-sensitive formalisms.
contrasting
train_355
A CFG can always be binarized into another CFG: two adjacent nonterminals are replaced with a single nonterminal that yields them.
it can be impossible to binarize a -GMTG into an equivalent -GMTG.
contrasting
train_356
Until recently, HMMs were the predominant formalism to model label sequences.
they have two major shortcomings.
contrasting
train_357
In general, labelers followed the basic conventions of EToBI for coding (Taylor, 2000).
the Tilt coding scheme was simplified.
contrasting
train_358
So far, all of our predictors are ones easily computed from a string of text.
we have included a few variables that affect the likelihood of a word being accented that require some acoustic data.
contrasting
train_359
Given the distinct natures of corpora used, it is difficult to compare these results with earlier models.
in experiment 1 (HMM: POS, Unigram vs CRF: POS, Unigram) we have shown that a CRF model achieves a better performance than an HMM model using the same features.
contrasting
train_360
However, in experiment 1 (HMM: POS, Unigram vs CRF: POS, Unigram) we have shown that a CRF model achieves a better performance than an HMM model using the same features.
the real strength of CRFs comes from their ability to incorporate different sources of information efficiently, as is demonstrated in our experiments.
contrasting
train_361
We did not test directly the probabilistic measures (or collocation measures) that have been used before for this task, namely information content (IC) (Pan and McKeown, 1999) and mutual information (Pan and Hirschberg, 2001).
the measures we have used encompass similar information.
contrasting
train_362
The disjunctive discourse marker or is also NON-VERIDICAL, because it does not imply that both of its arguments are true.
and does imply this, and so has the feature veridicality=VERIDICAL.
contrasting
train_363
For Spanish CallHome, (Ries, 1999) reports 76.2% accuracy with a hybrid approach that couples Neural Networks and ngram backoff modeling; the former uses prosodic features and POS tags, and interestingly works best with unigram backoff modeling, i.e., without taking into account the DA history -see our discussion of the ineffectiveness of the DA history below.
(Ries, 1999) does not mention [...]. (Footnote 4: The baselines for CallHome37 and CallHome10 are the same because in both, statement is the most frequent DA.)
contrasting
train_364
We choose this task because the original intention of this shared task was to test the effectiveness of semi-supervised learning methods.
it turned out that none of the top performing systems used unlabeled data.
contrasting
train_365
Conditional Random Fields (CRFs) have been applied with considerable success to a number of natural language processing tasks.
these tasks have mostly involved very small label sets.
contrasting
train_366
CRFs are usually estimated using gradient-based methods such as limited memory variable metric (LMVM).
even with these efficient methods, training can be slow.
contrasting
train_367
Evaluating the first term of the derivative is quite simple.
the sum over all possible labellings in the second term poses more difficulties.
contrasting
train_368
When using a very short code, the error-correcting CRF will not adequately model the decision boundaries between all classes.
using a long code will lead to a higher degree of dependency between pairs of classifiers, where both model similar concepts.
contrasting
train_369
Using such a large number of weak learners is costly, in this case taking roughly ten times longer to train than the multiclass CRF.
much shorter codes can also achieve similar results.
contrasting
train_370
Specificity ordering is a necessary step for building a noun hierarchy.
this approach clearly cannot build a hierarchy alone.
contrasting
train_371
The interaction between the head verb and the preposition determines whether the noun is an indirect object of a ditransitive verb or alternatively the head of a PP that is modifying the verb.
sEXTANT always attaches the PP to the previous phrase.
contrasting
train_372
This technique is similar to Hearst and Schütze (1993) and Widdows (2003).
sometimes the unknown noun does not appear in our 2 billion word corpus, or at least does not appear frequently enough to provide sufficient contextual information to extract reliable synonyms.
contrasting
train_373
We have also experimented with filtering out highly polysemous nouns by eliminating words with two, three or more synonyms.
such a filter turned out to make little difference.
contrasting
train_374
We overcome this by using the synonyms of the last word in the multi-word term.
there are 174 multi-word terms (23%) in the WORDNET 1.7.1 test set which we could probably tag more accurately with synonyms for the whole multi-word term.
contrasting
train_375
Our WSD system uses heuristics to attempt to detect predicate arguments from parsed sentences.
recognition of predicate argument structures is not straightforward, because a natural language will have several different syntactic realizations of the same predicate argument relations.
contrasting
train_376
Table 5 (accuracy of the system on WordNet sense-tagging of 20 Senseval-2 verbs with more than one frameset, with and without the gold-standard frameset tag): orig 0.564; orig*fset 0.587; orig+pb 0.597; (orig+pb)*fset 0.628.
partitioning the instances using the automatically generated frameset tags has no significant effect on the system's performance; the information provided by the automatically assigned coarse-grained sense tag is already encoded in the features used for fine-grained sense-tagging.
contrasting
train_377
Regarding the role of NL interfaces for ITSs, only very recently have the first few results become available, to show that first of all, students do learn when interacting in NL with an ITS (Litman et al., 2004;Graesser et al., 2005).
there are very few studies like ours, that evaluate specific features of the NL interaction, e.g.
contrasting
train_378
In the ideal situation, the generator would have produced texts with the perfect regression scores and they would be identical to the stylistic scores, so the graph in the figure 9 would be like a gridshape one as in figure 6.
we have already seen in figure 7, that this is not the case for the relation between the target coordinates and the regression scores.
contrasting
train_379
The disadvantage of this is that variation is not grounded in some 'intuitive' notion of style: the interpretation of the stylistic dimensions is subjective and tentative.
as no comprehensive computationally realisable theory of style yet exists, we believe that this approach has considerable promise for practical, empirically-based stylistic control.
contrasting
train_380
IDL-expressions are a convenient way to compactly represent finite languages.
iDLexpressions do not directly allow formulations of algorithms to process them.
contrasting
train_381
The results shown in Figure 7 suggest that we could further improve parsing performance by increasing the model size.
both the memory size and the training time are more than linear in the model size, and the training time for the largest models was about 15 hours for the models created using CENTER-PARENT, CENTER-HEAD, and LEFT and about 20 hours for the model created using RIGHT.
contrasting
train_382
The above method allows for the tractable estimation of log-linear models on exponentially-many HPSG parse trees.
despite the development of methods to improve HPSG parsing efficiency (Oepen et al., 2002a), the exhaustive parsing of all sentences in a treebank is still expensive.
contrasting
train_383
Actually, in our previous study (Miyao et al., 2003), we successfully developed a probabilistic model including features on nonlocal predicate-argument dependencies.
since we could not observe significant improvements by incorporating nonlocal features, this paper investigates only the features described above.
contrasting
train_384
Table 4 revealed that our simple method of filtering caused a fatal bias in training data when a preliminary distribution was used only for filtering.
the model combined with a preliminary model achieved sufficient accuracy.
contrasting
train_385
In contrast, RULE, SYM, and LE features did not affect the accuracy.
if each of them was removed together with another feature, the accuracy decreased drastically.
contrasting
train_386
Eisner and Satta (1999) give a cubic algorithm for lexicalized phrase structures.
it only works for a limited class of languages in which tree spines are regular.
contrasting
train_387
However, they are fundamentally limited by their approximate search algorithm.
our system searches the entire space of dependency trees and most likely benefits greatly from this.
contrasting
train_388
As shown in Table 3, the proportion of sentences containing some non-projective dependency ranges from about 15% in DDT to almost 25% in PDT.
the overall percentage of non-projective arcs is less than 2% in PDT and less than 1% in DDT.
contrasting
train_389
It is likely that the more complex cases, where path information could make a difference, are beyond the reach of the parser in most cases.
if we consider precision, recall and Fmeasure on non-projective dependencies only, as shown in Table 6, some differences begin to emerge.
contrasting
train_390
The most informative scheme, Head+Path, gives the highest scores, although with respect to Head the difference is not statistically significant, while the least informative scheme, Path -with almost the same performance on treebank transformation -is significantly lower (p < 0.01).
given that all schemes have similar parsing accuracy overall, this means that the Path scheme is the least likely to introduce errors on projective arcs.
contrasting
train_391
They define the feature recall of w with respect to v as the weighted proportion of features of v that also appear in the vector of w. Then, they suggest that a hypernym would have a higher feature recall for its hyponyms (specifications), than vice versa.
their results in predicting the hyponymy-hyperonymy direction (71% precision) are comparable to the naïve baseline (70% precision) that simply assumes that general words are more frequent than specific ones.
contrasting
train_392
According to Hypothesis I we expect that a pair (w, v) that satisfies entailment will also preserve feature inclusion.
by Hypothesis II if all the features of w are included by v then we expect that w entails v. We observed that Hypothesis I is better attested by our data than the second hypothesis.
contrasting
train_393
At the word level we observed 14% invalid pairs of the first case and 30% of the second case.
our manual analysis shows, that over 90% of the first case pairs were due to a different sense of one of the entailing word, e.g.
contrasting
train_394
Precision was significantly improved, filtering out 60% of the incorrect pairs.
the relative recall (considering RFF recall as 100%) was only reduced by 13%, consequently leading to a better relative F1, when considering the RFF-top-40 output as 100% recall (Table 5).
contrasting
train_395
Clearly, OVA makes no explicit use of pairwise label or item relationships.
it can perform well if each class exhibits sufficiently distinct language; see Section 4 for more discussion.
contrasting
train_396
One possible interpretation is that the relevant structure of the problem is already captured by linear regression (and perhaps a different kernel for regression would have improved its three-class performance).
according to additional experiments we ran in the four-class situation, the test-set-optimal parameter settings for metric labeling would have produced significant improvements, indicating there may be greater potential for our framework.
contrasting
train_397
If we had a large collection of sensetagged text, then we could extract disambiguated feature vectors by collecting co-occurrence features for each word sense.
since there is little sense-tagged text available, the feature vectors for a random WordNet concept would be very sparse.
contrasting
train_398
Clearly, when the system returns at most one or two attachments, the recall on the polysemous nodes is lower than on the Full set.
it is interesting to note that recall on the polysemous nodes equals the recall on the Full set after K=3.
contrasting
train_399
We do not assume that local coherence is sufficient to uniquely determine the best ordering -other constraints clearly play a role here.
we expect that the accuracy of a coherence model is reflected in its performance in the ordering task.
contrasting