id          stringlengths    7–12
sentence1   stringlengths    6–1.27k
sentence2   stringlengths    6–926
label       stringclasses    4 values
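Each record below occupies four lines of this dump: id, sentence1, sentence2, label. As a minimal sketch (not part of the original dump), the Python snippet below rebuilds the first record with the Hugging Face datasets library so the schema above can be inspected programmatically; the column names and types come from the header, the label is kept as a plain string because only "neutral" appears in this excerpt, and no published dataset identifier is assumed.

from datasets import Dataset, Features, Value

# Schema taken from the header above (4 label classes exist overall; only
# "neutral" is visible in this excerpt, so label is modeled as a plain string).
features = Features({
    "id": Value("string"),
    "sentence1": Value("string"),
    "sentence2": Value("string"),
    "label": Value("string"),
})

# First record of this dump (sentence2 shortened for brevity).
rows = {
    "id": ["train_101200"],
    "sentence1": ['Each of these graphs is "smooth" because it incrementally adds new languages as n or m increases.'],
    "sentence2": ["the embedded distances are reasonably faithful to the symmetrized dissimilarities ..."],
    "label": ["neutral"],
}

ds = Dataset.from_dict(rows, features=features)
print(ds)               # one row with columns id, sentence1, sentence2, label
print(ds[0]["label"])   # neutral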
train_101200
Each of these graphs is "smooth" because it incrementally adds new languages as n or m increases.
the embedded distances are reasonably faithful to the symmetrized dissimilarities: metric MDS achieves a low value of 0.20 on its "stress" objective, and we find that Kendall's tau = 0.76, meaning that if one pair of languages is displayed as farther apart than another, then in over 7/8 of cases, that pair is in fact more dissimilar.
neutral
train_101201
These "unearthly" languages are intended to be at least sim-ilar to possible human languages.
we could train a system to extract features from the surface of a language that are predictive of its deeper structure.
neutral
train_101202
): the morphology is correct, but the numeric value is wrong.
but not all ambiguities can be handled by normal form constraints.
neutral
train_101203
The second was a TensorFlow-based RNN with an attention mechanism (Mnih et al., 2014), using an overall architecture similar to that used in a system for end-to-end speech recognition (Chan et al., 2016).
this approach can be implemented with a backtracking recursive descent parser enforcing the aforementioned normal form constraints and propagating the top-down arithmetic constraints.
neutral
train_101204
Above, we focused solely on cardinal numbers, and specifically their citation forms.
while the former approach is certainly more appealing given current trends in NLP, only the latter is feasible for low-resource languages, which most need an automated approach to text normalization.
neutral
train_101205
The COMSENSE approach models the decision as interactions between high-level categories of entities, actions and utterances.
most interestingly, it does so with a larger margin when tested over the out-of-domain dataset, demonstrating that it is more resistant to overfitting compared to other models.
neutral
train_101206
Similarly, we associated with the quote topic PROFANITY a list of profanity words.
the high level categories assigned to the NRG vertices are not observed, and as a result we view it as a weakly supervised learning problem, where the category assignments correspond to latent variable assignments.
neutral
train_101207
Our time-aware method was able to assign higher weights to the new taxonomic relations than the older relations due to their recent timestamps, even though the frequencies of these new relations are lower than those of the older relations.
secondly, if we normalize the score, the information on the number of evidence sentences, which is important for the LSP method to recognize true taxonomic relationships, will be lost.
neutral
train_101208
This effect is likely due to the network graph becoming increasingly dense with more nodes (Figure 1, bottom right).
although it is not surprising that the network graph visualization better communicates common phrases in the corpus as edges are drawn between these phrases, this suggests other approaches to drawing edges.
neutral
train_101209
This was because averaging z-transformed values (and back transforming) tends to yield less biased estimates of the population value than averaging the raw coefficients (Silver and Dunlap, 1987).
the image-based model is built using a deep convolutional neural network approach, similar in nature to those recently used to study neural representations of visual stimuli (see Kriegeskorte (2015), although note this is the first application to study word elicited neural activation known to the authors).
neutral
train_101210
Also, they do not include selectional preference or common-sense knowledge effects in their analysis.
in order to choose which inference rules can be applied to yield "soap", we can inspect Figure 4.
neutral
train_101211
The guessing experiment provides a basis to estimate human expectation of already mentioned DRs (the number of clicks on the respective NPs in text).
this feature captures the distance l_t(d) between the position t and the last occurrence of the candidate DR d. As a distance measure, we use the number of sentences from the last mention and exponentiate this number to make the dependence more extreme; only very recent DRs will receive a noticeable weight: exp(−l_t(d)).
neutral
train_101212
(2008) used the concept of spine for TAG (Schabes, 1992;Vijay-Shanker and Joshi, 1988), which is similar to our constituent hierarchy.
the weights of contribution β_{ijk} are computed using the attention mechanism (Bahdanau et al., 2015).
neutral
train_101213
(2011) run a baseline parser for a few future steps, and use the output actions to guide the current action.
Wang and Xue (2014) do joint POS tagging and parsing.
neutral
train_101214
Each constituent hierarchy is generated bottom-up recurrently.
it ends a constituent hierarchy, with a constituent S on the top level, dominating a VP (starting from the word "like"), and then an NP (starting from the noun phrase "this book"), and then a PP (starting from the word "in"), and finally an NP (starting from the word "the").
neutral
train_101215
Galley and Manning (2010) describe a decoding algorithm for phrase-based systems where phrases can have discontinuities in both the source and target languages.
for any H_j, define τ(σ(H_j), ∆) to be the equivalence class resulting from adding the edges in ∆ to the data structure σ(H_j).
neutral
train_101216
In this setup, the distribution over next symbols given a context u is drawn hierarchically from a Dirichlet process whose base measure is another Dirichlet process associated with context tl(u), and so on, with all draws ultimately backing off into some unconditioned distribution over all possible next symbols.
for all but one lexicon, we find that the autosegmental models do not significantly outperform the N-gram models on artificial data.
neutral
train_101217
And-node C is a class node bundling together the oral features of vowel segments.
we have presented a model where forms are built out of parts by structure-building operations.
neutral
train_101218
For example, "2 + Context Gate (both)" in Row 3 denotes integrating "Context Gate (both)" into the baseline in Row 2 (i.e., GroundHog (vanilla)).
over GroundHog-Coverage (GRU): We finally tested on a stronger baseline, which employs a coverage mechanism to indicate whether or not a source word has already been translated (Tu et al., 2016).
neutral
train_101219
In neural machine translation (NMT), generation of a target word depends on both source and target contexts.
they employ a gating scalar to scale only the source context.
neutral
train_101220
After identifying drug, gene and mutation mentions in the text, co-occurring triples with known interactions were chosen as positive examples.
to overcome these challenges, we explore a general relation extraction framework based on graph LSTMs.
neutral
train_101221
(2015) focuses specifically on causal language -language used to appeal to psychological notions of cause and effect.
to generalize across these, we crafted a set of scripts for Tsurgeon (Levy and Andrew, 2006), a tree-transformation utility built on Tregex.
neutral
train_101222
Accordingly, each syntactic pattern inherently encodes the positions in the tree of the cause and effect heads.
two of the reasons for this are familiar issues in NLP.
neutral
train_101223
We say that a system predicts "rightward" according to whether p(→ | r, L) > 0.5.
we ask: given that u_i and u_j are nearby tag tokens in the test corpus u, are they likely to be linked?
neutral
train_101224
Figure 5c shows more complex interactions between syntax and sentiment for deciding the head word.
Theis and Bethge (2015) proposed Spatial LSTMs to handle image data.
neutral
train_101225
We consider three baseline methods, namely left branching (L), right branching (R) and averaging (A).
this parameter set is not very large compared to the modern memory capacity, even for a computer with 16GB RAM.
neutral
train_101226
(2013b), Con-Tree and DepTree denote constituency Tree LSTMs and dependency Tree LSTMs, respectively.
x_t is used to estimate the input (i_t) and output gates. (In this paper, we work on binary trees only, which is a common form for CKY and shift-reduce parsing.)
neutral
train_101227
While there is prior research in topic modeling that provides topic-specific indices when modeling the link structure, these do not extend to individual indices, and most previous citation-based indices are defined for each individual but without considering topics.
figure 4: Word-level prediction result.
neutral
train_101228
Once the topic variables are learned using the vanilla LDA, authority and citation variables are then inferred consecutively.
several variants of topic modeling consider the relationship between topics and citations in academic corpora.
neutral
train_101229
π_{i←j,a} ∈ {0, 1} is an element of π_{i←j} which measures the influence of author a on the citation from j to i and sums up to one (Σ_{a∈A_i} π_{i←j,a} = 1) over all authors of publication i.
our model combines the merits of both topic-specific and individual-specific indices to provide topical authority information for academic researchers.
neutral
train_101230
Using this approximation, the computation can be carried out with time complexity O(S_E K^2), where D^−_a is the number of rows with negative links in Ψ_a.
if topic distributions of paper i and j are similar and if η values of the cited paper's authors are high, the citation formation probability increases.
neutral
train_101231
We consider debate as a process of argument exchange.
empirical evidence has also led to an increasing awareness of the dangers of style and rhetoric in biasing participants towards the most skillful, charismatic, or numerous speakers (Noelle-Neumann, 1974;Sunstein, 1999).
neutral
train_101232
We have two sets of AMIE features: typed and untyped.
this allows us to distinguish different contextual uses of a verb without introducing a proliferation of verb sense symbols.
neutral
train_101233
For other domains, like medicine or law, it might require domain experts to encode this knowledge.
in our domain, we use a manually constructed inventory of 45 types (animal, artifact, body part, measuring instrument, etc.).
neutral
train_101234
We summarize these features in Table 1, where ⊕ denotes the binary operator which defines features as a combination of word forms at different (not necessarily contiguous) positions of a sentence.
we also illustrate the advantageous generalization property of our model as it retained 89.8% of its original average POS tagging accuracy when trained on only 1.2% of the total accessible training sentences.
neutral
train_101235
Similar to our POS tagging experiments, using polyglot SC vectors tend to perform best for NER as well.
we then use the previously described set of features in a linear chain CRF (Lafferty et al., 2001) using CRFsuite (Okazaki, 2007) with its default settings for hyperparameters, i.e., the coefficients of 1.0 and 0.001 for ℓ1 and ℓ2 regularization, respectively.
neutral
train_101236
for j := 1 to minibatch size: for t := 0 to T−1: Intervene: evaluate each action at s_t for ā_t ∈ A(s_t); Improve: train with dataset aggregation; Finalize: pick the best policy over all iterations encountered when running the new policy π_{i+1} (or when parsing test sentences).
in fact, a user does not even have to quantify their global preferences by writing down such a function.
neutral
train_101237
First, the number of fully projected trees in some languages is so low that the density-driven approach cannot start with a good initialization to fill in partial dependencies.
we use the more efficient algorithm from Stratos et al.
neutral
train_101238
Twitter's "following" relation is a relatively low-cost form of social engagement, and it is less public than retweeting or mentioning another user.
for each basis model k, each author a has an instance weight α_{a,k}.
neutral
train_101239
But personalization requires annotated data for each individual user-something that may be possible in interactive settings such as information retrieval, but is not typically feasible in natural language processing.
we use the Twitter API to crawl the friends of the SemEval users (individuals that they follow) and the most recent 3,200 tweets in their timelines.
neutral
train_101240
This allows us to embed vector spaces of multiple languages into a single vector space, exploiting information from high-resource languages to improve the word representations of lower-resource ones.
their performance has not surpassed that of fine-tuning methods.
neutral
train_101241
These include: 1) the ATTRACT-REPEL source code; 2) bilingual word vector collections combining English with 51 other languages; 3) Hebrew and Croatian intrinsic evaluation datasets; and 4) Italian and German Dialogue State Tracking datasets collected for this work.
PPDB relies on large, high-quality parallel corpora such as Europarl (Koehn, 2005).
neutral
train_101242
In these dialogues, the speaker is trying to identify their (privately assigned) target color for the listener.
this prediction results in the blends L_a and L_e preferring the true target, allowing the speaker's perspective to override the listener's.
neutral
train_101243
However, if the target were the cyan, the speaker would have many good options.
the base listener interprets this as referring to the cyan with 91% probability, perhaps due to the extreme saturation of the cyan maximally activating certain parts of the neural network.
neutral
train_101244
Understanding model behavior We find that much of the gain in model performance comes from the first two rounds of training.
our proposed approach attempts to combine the advantages of both approaches, by defining an objective that incorporates both levels of linguistic properties over the entire forest representation, and adopting an alternating training regime for optimization.
neutral
train_101245
A commonly used log-linear formulation enables these models to consider a rich set of features ranging from orthographic patterns to semantic relatedness.
data To obtain gold information about morphological clusters, we use CELEX (Baayen et al., 1993).
neutral
train_101246
Minimized number of roots relative to vocabulary size: Similarly, the number of roots, and consequently the number of morphological families, is markedly smaller than the size of the vocabulary.
these models generally bypass global constraints (Narasimhan et al., 2015) or require performing inference over very large spaces (Poon et al., 2009).
neutral
train_101247
Now let us consider how to derive our ILP formulation using the notations above.
they have barely trained word vectors, contributing to the low recall value.
neutral
train_101248
(2016) report that characterlevel decoding is only 14% slower than subwordlevel decoding.
(1) Training/decoding latency: For the decoder, although the sequence to be generated is much longer, each character-level softmax operation costs considerably less compared to a word- or subword-level softmax.
neutral
train_101249
This gives a source BPE vocabulary of size 20k−24k for each language.
a character-level model has a much better chance recovering the original word or sentence.
neutral
train_101250
The minibatch size was fixed at 100, and the learning rate was controlled by RMSprop.
similar to QANTA, the method adopts the LR classifier with the derived representations as features.
neutral
train_101251
Table 5: Accuracies of the proposed method and the state-of-the-art methods for the factoid QA task.
this simple method enables us to easily train the distributed representations of words and entities simultaneously.
neutral
train_101252
Figure 1: Syntactic trees of the sentence "The little boy likes red tomatoes.".
such binarization requires a set of language-specific rules, which hampers adaptation of parsing to other languages.
neutral
train_101253
We find that the bottom-up parser and the top-down parser have similar results under the greedy setting, and the in-order parser outperforms both of them.
the top-down and the bottom-up parsers incorrectly recognize the "This has both made investors uneasy" as a complete sentence.
neutral
train_101254
Our focus here is on replicating an advantage of normalized features found in cognitive models of categorization.
if the cognitive model were extended to infant discrimination tasks, a similar strategy could be used to evaluate theories of human representation learning with respect to children's discrimination abilities throughout the learning process.
neutral
train_101255
Speaker normalization has previously been found to improve phonetic categorization (Cole et al., 2010;Nearey, 1978), increase a phonetic categorization model's match with human behavior (Apfelbaum and McMurray, 2015;McMurray and Jongman, 2011), and improve the performance of speech recognizers (Povey and Saon, 2006;Wegmann et al., 1996).
the following section describes the method we use for evaluating these features against human discrimination data.
neutral
train_101256
The acoustic values are assumed to lie in a continuous feature space, such as MFCCs.
MFCCs have the advantage that they are computed deterministically from the speech signal, and thus are not subject to the error inherent in automatic formant tracking.
neutral
train_101257
If there is a path from p to q, q is reachable from p. Item Types: As shown in Figure 1, our items start and end on words, fully covering the spaces in between.
such an edge must exist.
neutral
train_101258
We estimate monolingual phrase embeddings via the element-wise addition of the word embeddings composing the phrase.
in phrase-based statistical machine translation (SMT), translation models are estimated over a large amount of parallel data.
neutral
train_101259
1), we evaluated whether our induced phrase table improves the translation of in-domain texts over the vanilla SMT system which used only one phrase table trained from general-domain parallel data.
as science domain monolingual data for the Science translation task, we used the English side of the ASPEC parallel corpus (Nakazawa et al., 2016).
neutral
train_101260
(2013a) with the only exception that we deal with not only words but also phrases.
for each pair of domain and translation direction, we have four word embedding spaces: those with 300 or 800 dimensions for source and target languages.
neutral
train_101261
The main contribution of Zhao et al.
these methods heavily rely on document-level information (Zhao and Vogel, 2002;Utiyama and Isahara, 2003;Fung and Cheung, 2004;Munteanu and Marcu, 2005) to reduce their search space by scoring only sentence pairs extracted from each pair of documents.
neutral
train_101262
We apply our method on Chinese-English, because the annotated alignment type training data is provided in this language pair.
in alignment models, a hidden variable a = {a_1, a_2, …
neutral
train_101263
Similarly, some of the major gains in F 1 for the disease types, and major losses in F 1 for the disaster types, do not come from the most or least frequent document labels.
this framework naturally generalizes to hierarchical and semisupervised extensions with no additional modeling assumptions.
neutral
train_101264
Unlike the gains in topic overlap and coherence, the F 1 score increases do not simply correlate with which document labels appeared most frequently.
these generalizations come at the cost of increasingly elaborate and unwieldy generative assumptions.
neutral
train_101265
Episodes are ordered by increasing model precision.
we set the learning rate to 0.0001.
neutral
train_101266
(2015) used sentence vectors obtained by a pre-trained hierarchical CNN (Denil et al., 2014) as features under an unweighted average MIL objective.
the burger and fries were good.
neutral
train_101267
We describe such an architecture in more detail as we use it as a point of comparison with our own model.
the acquisition of sentence-or phrase-level sentiment labels remains a laborious and expensive endeavor despite its relevance to various opinion mining applications, e.g., detecting or summarizing consumer opinions in online product reviews.
neutral
train_101268
This dataset enforces that all words are composed of exactly two morphemes.
morphological segmentation is a structured prediction task that seeks to break a word up into its constituent morphemes.
neutral
train_101269
(2017) proposed a neural architecture for jointly mapping instructions and visual observations (pixels) to actions in the environment.
their work stops short of exploring generalization over map layouts, which our model is designed to handle.
neutral
train_101270
Yelp reviews were obtained from the 2013 Yelp Dataset Challenge.
we used the Chu-Liu-Edmonds algorithm. (Table 5: Descriptive statistics for dependency trees produced by our model and the Stanford parser on the SNLI test set.)
neutral
train_101271
The proposed approach returns multiple relevant temporal expressions only for a small fraction of events.
the choice of the activation function is a hyperparameter and was optimized on a development set.
neutral
train_101272
For multi-token expressions, we only use the first token.
using the relaxed metric, we see an accuracy of 94.9% for the begin point and 80.2% for the end point.
neutral
train_101273
SGNS randomly initializes all the embeddings before training begins, and it relies on negative samples created by randomly selecting word and context pairs (Mikolov et al., 2013;Levy et al., 2015).
we predict that multiple runs of SGNS on the same corpus will not produce the same results.
neutral
train_101274
Happily, this variability can be quantified by averaging results over multiple bootstrap samples.
sampling in PPMI is inspired by a similar method in the word2vec implementation of SGNS (Levy et al., 2015).
neutral
train_101275
Indeed, community style matching is shown to be correlated to comment popularity in Reddit (Tran and Ostendorf, 2016).
words in comments and replies associated with overpredicted (controversial) cases are related to controversial topics (sexual, regulate, liberals), named political parties, and mentions of downvoting or indication that the comment has been edited with the word "Edit."
neutral
train_101276
Time structure is represented only among a set of siblings: p(t) is the sibling predecessor in time, and s(t) is the sibling successor.
in several prior studies, authors constrained the problem to reduce the effect of those factors (Lakkaraju et al., 2013;Tan et al., 2014;Jaech et al., 2015).
neutral
train_101277
Likewise, the embeddings enable predicting correctly the relative sizes of unseen objects.
we split the data into 10 parts and employ 90% of the data for training and 10% for testing.
neutral
train_101278
Formally, the FUTURE layer is a recurrent neural network (the first gray layer in Figure 2), and its state at time step t is computed by a recurrence in which F is the activation function for the FUTURE layer.
we attribute the former case to the lack of untranslated future modeling, and the latter one to the overloaded use of the decoder state where the language modeling of the decoder leads to the fluent but wrong predictions.
neutral
train_101279
Table 4 lists the alignment performances of our proposed model.
the proposed approach almost addresses the errors in these cases.
neutral
train_101280
The supervised baseline results are lower than the French-English case, which is partly due to the low coverage of WordNet translations for Arabic (see Table 5).
we will refer to this model as the subword skip-gram.
neutral
train_101281
For each of the datasets described above, we generated 100-dimensional word embeddings using the subword skip-gram model (Bojanowski et al., 2017).
such alignments are not available for all languages and dialects, and while a small dictionary might be feasible to acquire, discovering word mappings with no prior knowledge whatsoever is valuable.
neutral
train_101282
This starting tensor has a precision of about 80% and acts as a valuable resource for challenging tasks such as question answering.
advances in the latter set of methods have led to several embedding-based methods that are highly successful at KB completion for named entities (Nickel et al., 2011;Riedel et al., 2013;Dong et al., 2014;Trouillon et al., 2016;Nickel et al., 2016a).
neutral
train_101283
For each relation r, the relation schema imposes a type constraint on the entities that may appear as its source or target.
first, we use taxonomy guided uncertainty sampling to propose a list L to potentially query.
neutral
train_101284
(5) and (6) can be used to infer uncertain triples: if every sibling of ẽ has relationship r with an entity e, we can infer for "free" that this is the case for ẽ as well.
TransE is omitted due to its very low precision here, around 10%.
neutral
train_101285
(2017) proposed a neural model for fine-grained entity typing and for robustly using type information to improve relation extraction, but this is targeted for Freebase style named entities.
when siblings disagree in this respect, there is more uncertainty about (ẽ, r, e) (according to (5) and (6)), making this triple a good candidate to query.
neutral
train_101286
2013, the fork and join decision, and the preterminal, top and bottom category label sub-models described in Section 3 can now be defined in terms of these side- and depth-specific grammars G_{s,d} and depth-specific left-corner expectations E^+_d.
for CCL and UPPARSE, the NP agg f1 scores are not reported because they do not produce labeled constituents.
neutral
train_101287
When there is no join, the top category is defined in terms of a depth-specific sub-model. Decisions about the bottom categories b_t^{1..D} (which correspond to right children in tree structures) also depend on the outcome of the fork and join variables, but are defined in terms of a side- and depth-specific sub-model in every case. In a sequence model inducer like Shain et al.
the competing systems include UPPARSE (Ponvert et al., 2011), CCL (Seginer, 2007a), BMMM+DMV with undirected dependency features (Christodoulopoulos et al., 2012), and UHHMM (Shain et al., 2016).
neutral
train_101288
Table 5 shows the PARSEVAL scores of all systems.
(2016), these depth-specific models are assumed to be independent of each other and fit with a Gibbs sampler, backward sampling hidden variable sequences from forward distributions using this compiled transition model M (Carter and Kohn, 1996), then counting individual sub-model outcomes from sampled hidden variable sequences, then resampling each sub-model using these counts with Dirichlet priors over a, b, and p models and Beta priors over f and j models, then re-compiling these resampled models into a new M. Note that with K category labels this model contains DK^2 + 3DK^3 separate parameters for preterminal categories and top and bottom categories of derivation fragments at every depth level, each of which can be independently learned by the Gibbs sampler.
neutral
train_101289
In addition, these result populations can be statistically compared for significance, allowing for stronger claims on improvement.
the correlation coefficients across devices presented in Table 9 lead us to suspect that there can be substantial changes in effectiveness when switching the backend from CPU to GPU and vice versa.
neutral
train_101290
(2017) and on SNLI.
(2017) present a model which explicitly computes O(N 2 ) possible tree nodes for N words, and uses a soft gating strategy to approximately select valid combinations of these nodes that form a tree.
neutral
train_101291
In Sections 3 to 5, we describe the three main stages of our approach respectively.
their method could have a relatively large variance in the document sentiment classification performance because of the domain mismatch (e.g., F 1 = 0.874 for the "Tangled" tweets and F 1 = 0.647 for the "Obama" tweets), whereas our approach would perform quite consistently over different domains.
neutral
train_101292
In the second dataset, MEDHOP, the goal is to establish drug-drug interactions based on scientific findings about drugs and proteins and their interactions, found across multiple MEDLINE abstracts.
overall, although both neural RC models clearly outperform the other baselines, they still have large room for improvement compared to human performance at 74% / 85% for WIKIHOP.
neutral
train_101293
When answering 100 questions, the annotator knew the answer prior to reading the documents for 9%, and produced the correct answer after reading the document sets for 74% of the cases.
here the first document states that the drug Leuprolide causes GnRH receptor-induced synaptic potentiations, which can be blocked by the protein Progonadoliberin-1.
neutral
train_101294
As possible end points, we consider any other drug, apart from drug1 and those interacting with drug1 other than drug2.
for a given query-answer pair, the item entity is chosen as the starting point for the graph traversal.
neutral
train_101295
Finally, we identified the selection of relevant document sets as the most promising direction for future research.
For FastQA we use the implementation provided by the authors, also with pre-trained GloVe embeddings, no character embeddings, no maximum support length, hidden size 50, and batch size 64 for 50 epochs.
neutral
train_101296
(2017a) are used, with pretrained GloVe (Pennington et al., 2014) embeddings.
note that WIKIHOP inherits the train, development, and test set splits from WIKIREADING, i.e., the full dataset creation, filtering, and sub-sampling pipeline is executed on each set individually.
neutral
train_101297
Masking is consistent within one sample, but generally different for the same expression across samples.
for the open-domain setting of WIKIHOP, a reduction of the answer vocabulary to 100 random single-token mask expressions clearly helps the model in selecting a candidate span, compared to the multi-token candidate expressions in the unmasked setting.
neutral
train_101298
Following the same general methodology, we next construct a second dataset for the domain of molecular biology -a field that has been undergoing exponential growth in the number of publications (Cohen and Hunter, 2004).
the question tokens are helpful to detect relevant documents, but exploiting only this information compares poorly to the other baselines.
neutral
train_101299
We can utilize this relatedness by sharing the vocabulary across all related languages.
previous work has looked at the most natural setup -training on a single language pair.
neutral