Columns: ID (large_string, lengths 10–61); year (int64, range shown as 1.96k–2.03k); title (large_string, lengths 4–560); abstract (large_string, lengths 0–12.8k)
nederhof-1997-regular
1997
Regular Approximations of CFLs: A Grammatical View
We show that for each context-free grammar a new grammar can be constructed that generates a regular language. This construction differs from existing methods of approximation in that use of a pushdown automaton is avoided. This allows better insight into how the generated language is affected. The new method is also more attractive from a computational viewpoint.
samuelsson-1997-left
1997
A Left-to-right Tagger for Word Graphs
An algorithm is presented for tagging input word graphs and producing output tag graphs that are to be subjected to further syntactic processing. It is based on an extension of the basic HMM equations for tagging an input word string that allows it to handle word-graph input, where each arc has been assigned a probability. The scenario is that of some word-graph source, e.g., an acoustic speech recognizer, producing the arcs of a word graph, and the tagger will in turn produce output arcs, labelled with tags and assigned probabilities. The processing is done entirely left-to-right, and the output tag graph is constructed using a minimum of lookahead, facilitating real-time processing.
schmid-1997-parsing
1997
Parsing by Successive Approximation
It is proposed to parse feature structure-based grammars in several steps. Each step aims to eliminate as many invalid analyses as possible, as efficiently as possible. To this end the set of feature constraints is divided into three subsets, a set of context-free constraints, a set of filtering constraints and a set of structure-building constraints, which are solved in that order. The best processing strategy differs for each subset: context-free constraints are solved efficiently with one of the well-known algorithms for context-free parsing. Filtering constraints can be solved using unification algorithms for non-disjunctive feature structures, whereas structure-building constraints require special techniques to represent feature structures with embedded disjunctions efficiently. A compilation method and an efficient processing strategy for filtering constraints are presented.
srinivas-1997-performance
1997
Performance Evaluation of Supertagging for Partial Parsing
In previous work we introduced the idea of supertagging as a means of improving the efficiency of a lexicalized grammar parser. In this paper, we present supertagging in conjunction with a lightweight dependency analyzer as a robust and efficient partial parser. The present work is significant for two reasons. First, we have vastly improved our results: 92% accuracy for supertag disambiguation using lexical information, a larger training corpus and smoothing techniques. Second, we show how supertagging can be used for partial parsing and provide detailed evaluation results for detecting noun chunks, verb chunks, preposition phrase attachment and a variety of other linguistic constructions. Using the supertag representation, we achieve a recall rate of 93.0% and a precision rate of 91.8% for noun chunking, improving on the best known result for noun chunking.
tendeau-1997-earley
1997
An Earley Algorithm for Generic Attribute Augmented Grammars and Applications
We describe an extension of Earley's algorithm which computes the decoration of a shared forest in a generic domain. Attribute computations are defined by a morphism from leftmost derivations to the generic domain, which leaves the computations independent from (even if guided by) the parsing strategy. The approach is illustrated by the example of a definite clause grammar, seen as a CF-grammar decorated by attributes.
visser-1997-case
1997
A Case Study in Optimizing Parsing Schemata by Disambiguation Filters
Disambiguation methods for context-free grammars enable concise specification of programming languages by ambiguous grammars. A disambiguation filter is a function that selects a subset from a set of parse trees (the possible parse trees for an ambiguous sentence). The framework of filters provides a declarative description of disambiguation methods independent of parsing. Although filters can be implemented straightforwardly as functions that prune the parse forest produced by some generalized parser, this can be too inefficient for practical applications. In this paper the optimization of parsing schemata, a framework for high-level description of parsing algorithms, by disambiguation filters is considered in order to find efficient parsing algorithms for declaratively specified disambiguation methods. As a case study, the optimization of the parsing schema of Earley's parsing algorithm by two filters is investigated. The main result is a technique for generation of efficient LR-like parsers for ambiguous grammars disambiguated by means of priorities.
yoon-etal-1997-new
1997
New Parsing Method using Global Association Table
This paper presents a new parsing method using statistical information extracted from corpora, especially for Korean. Structural ambiguities occur in deciding the dependency relation between words in Korean. In figuring out the correct dependency, lexical associations play an important role in resolving the ambiguities. Our parser uses statistical cooccurrence data to compute the lexical associations. In addition, it is shown that sentences can be parsed deterministically by global management of the associations. In this paper, the global association table (GAT) is defined and the association between words is recorded in the GAT. The system is a hybrid semi-deterministic parser and is controlled not by condition-action rules but by the association value between phrases. Whenever the expectation of the parser fails, it chooses the alternatives using a chart to avoid backtracking.
ciortuz-1997-constraint
1997
Constraint-driven Concurrent Parsing Applied to Romanian VP
We show that LP constraints (together with language specific constraints) could be interpreted as meta-rules in (an extended) head-corner parsing algorithm using weakened ID rule schemata from the theory of HPSG [Pollard and Sag, 1994].
derksen-etal-1997-robustness
1997
Robustness and Efficiency in AGFL
lie-etal-1997-language
1997
Language Analysis in SCHISMA
lyon-dickerson-1997-reducing
1997
Reducing the Complexity of Parsing by a Method of Decomposition
platek-etal-1997-formal
1997
Formal Tools for Separating Syntactically Correct and Incorrect Structures
In this paper we introduce a class of formal grammars with special measures capable of describing typical syntactic inconsistencies in free word order languages. By means of these measures it is possible to characterize more precisely the problems connected with the task of building a robust parser or a grammar checker of Czech.
la-serna-etal-1997-parsers
1997
Parsers Optimization for Wide-coverage Unification-based Grammars using the Restriction Technique
This article describes the methodology we have followed in order to improve the efficiency of a parsing algorithm for wide-coverage unification-based grammars. The technique used is the restriction technique (Shieber 85), which has been recognized as an important operation for obtaining efficient parsers for unification-based grammars. The main objective of the research is to determine how to choose appropriate restrictors when using the restriction technique. We have developed a statistical model for selecting restrictors. Several experiments have been done in order to characterise those restrictors.
nn-1995-proceedings
1995
Proceedings of the Fourth International Workshop on Parsing Technologies
aarts-1995-acyclic
1995
Acyclic Context-sensitive Grammars
A grammar formalism is introduced that generates parse trees with crossing branches. The uniform recognition problem is NP-complete, but for any fixed grammar the recognition problem is polynomial.
op-den-akker-etal-1995-parsing
1995
Parsing in Dialogue Systems Using Typed Feature Structures
The analysis of natural language in the context of keyboard-driven dialogue systems is the central issue addressed in this paper. A module that corrects typing errors and performs domain-specific morphological analysis is developed. A parser for typed unification grammars is designed and implemented in C++; for the description of the lexicon and the grammar a specialised specification language is developed. It is argued that typed unification grammars and especially the newly developed specification language are convenient formalisms for describing natural language use in dialogue systems. Research on these issues is carried out in the context of the SCHISMA project, a research project in linguistic engineering; participants in SCHISMA are KPN Research and the University of Twente.
amtrup-1995-parallel
1995
Parallel Parsing: Different Distribution Schemata for Charts
asveld-1995-fuzzy
1995
A Fuzzy Approach to Erroneous Inputs in Context-Free Language Recognition
Using fuzzy context-free grammars one can easily describe a finite number of ways to derive incorrect strings together with their degree of correctness. However, in general there is an infinite number of ways to perform a certain task wrongly. In this paper we introduce a generalization of fuzzy context-free grammars, the so-called fuzzy context-free $K$-grammars, to model the situation of making a finite choice out of an infinity of possible grammatical errors during each context-free derivation step. Under minor assumptions on the parameter $K$ this model happens to be a very general framework to describe correctly as well as erroneously derived sentences by a single generating mechanism. Our first result characterizes the generating capacity of these fuzzy context-free $K$-grammars. As consequences we obtain: (i) bounds on modeling grammatical errors within the framework of fuzzy context-free grammars, and (ii) the fact that the family of languages generated by fuzzy context-free $K$-grammars shares closure properties very similar to those of the family of ordinary context-free languages. The second part of the paper is devoted to a few algorithms to recognize fuzzy context-free languages: viz. a variant of a functional version of Cocke-Younger-Kasami's algorithm and some recursive descent algorithms. These algorithms turn out to be robust in some very elementary sense and they can easily be extended to corresponding parsing algorithms.
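A minimal sketch of the general idea, not the paper's $K$-grammar construction: a max-min fuzzy CYK recognizer in which each rule carries a membership degree, the degree of a derivation is the minimum over its rules, and the degree of a string is the maximum over derivations. The toy grammar, its degrees, and the max-min semantics are assumptions made for illustration only.

```python
# Max-min fuzzy CYK recognizer for a CNF grammar whose rules carry membership
# degrees in [0, 1].  Illustrative only; the toy grammar below is invented.
from collections import defaultdict

# Rules: (lhs, rhs) -> degree.  rhs is a 1-tuple (terminal) or a 2-tuple (nonterminals).
RULES = {
    ("S", ("NP", "VP")): 1.0,
    ("NP", ("det", "noun")): 1.0,
    ("NP", ("noun", "det")): 0.4,   # a deliberately "erroneous" variant with a low degree
    ("VP", ("verb", "NP")): 1.0,
    ("det", ("the",)): 1.0,
    ("noun", ("dog",)): 1.0,
    ("noun", ("cat",)): 1.0,
    ("verb", ("sees",)): 1.0,
}

def fuzzy_cyk(words, rules, start="S"):
    n = len(words)
    chart = [[defaultdict(float) for _ in range(n + 1)] for _ in range(n + 1)]
    for i, w in enumerate(words):                       # terminal rules
        for (lhs, rhs), deg in rules.items():
            if rhs == (w,):
                chart[i][i + 1][lhs] = max(chart[i][i + 1][lhs], deg)
    for span in range(2, n + 1):                        # binary rules, bottom-up
        for i in range(n - span + 1):
            k = i + span
            for j in range(i + 1, k):
                for (lhs, rhs), deg in rules.items():
                    if len(rhs) != 2:
                        continue
                    b, c = rhs
                    d = min(deg, chart[i][j][b], chart[j][k][c])
                    if d > chart[i][k][lhs]:
                        chart[i][k][lhs] = d
    return chart[0][n][start]

print(fuzzy_cyk("the dog sees the cat".split(), RULES))   # 1.0 (fully grammatical)
print(fuzzy_cyk("dog the sees the cat".split(), RULES))   # 0.4 (recognized, but degraded)
```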
becker-rambow-1995-parsing
1995
Parsing Non-Immediate Dominance Relations
We present a new technique for parsing grammar formalisms that express non-immediate dominance relations by 'dominance links'. Dominance links have been introduced in various formalisms such as extensions to CFG and TAG in order to capture long-distance dependencies in free-word order languages (Becker et al., 1991; Rambow, 1994). We show how the addition of 'link counters' to standard parsing algorithms such as CKY- and Earley-based methods for TAG results in a polynomial time complexity algorithm for parsing lexicalized V-TAG, a multi-component version of TAGs defined in (Rambow, 1994). A variant of this method has previously been applied to context-free grammar based formalisms such as UVG-DL.
boullier-1995-yet
1995
Yet Another $O(n^6)$ Recognition Algorithm for Mildly Context-Sensitive Languages
Vijay-Shanker and Weir have shown in [17] that Tree Adjoining Grammars and Combinatory Categorial Grammars can be transformed into equivalent Linear Indexed Grammars (LIGs) which can be recognized in $O(n^6)$ time using a Cocke-Kasami-Younger style algorithm. This paper exhibits another recognition algorithm for LIGs, with the same upper-bound complexity, but whose average case behaves much better. This algorithm works in two steps: first a general context-free parsing algorithm (using the underlying context-free grammar) builds a shared parse forest, and second, the LIG properties are checked on this forest. This check is based upon the composition of simple relations and does not require any computation of symbol stacks.
briscoe-carroll-1995-developing
1995
Developing and Evaluating a Probabilistic LR Parser of Part-of-Speech and Punctuation Labels
We describe an approach to robust domain-independent syntactic parsing of unrestricted naturally-occurring (English) input. The technique involves parsing sequences of part-of-speech and punctuation labels using a unification-based grammar coupled with a probabilistic LR parser. We describe the coverage of several corpora using this grammar and report the results of a parsing experiment using probabilities derived from bracketed training data. We report the first substantial experiments to assess the contribution of punctuation to deriving an accurate syntactic analysis, by parsing identical texts both with and without naturally-occurring punctuation marks.
carpenter-qu-1995-abstract
1995
An Abstract Machine for Attribute-Value Logics
A direct abstract machine implementation of the core attribute-value logic operations is shown to decrease the number of operations and conserve the amount of storage required when compared to interpreters or indirect compilers. In this paper, we describe the fundamental data structures and compilation techniques that we have employed to develop a unification and constraint-resolution engine capable of performance rivaling that of directly compiled Prolog terms while greatly exceeding Prolog in flexibility, expressiveness and modularity. In this paper, we will discuss the core architecture of our machine. We begin with a survey of the data structures supporting the small set of attribute-value logic instructions. These instructions manipulate feature structures by means of features, equality, and typing, and manipulate the program state by search and sequencing operations. We further show how these core operations can be integrated with a broad range of standard parsing techniques. Feature structures improve upon Prolog terms by allowing data to be organized by feature rather than by position. This encourages modular program development through the use of sparse structural descriptions which can be logically conjoined into larger units and directly executed. Standard linguistic representations, even of relatively simple local syntactic and semantic structures, typically run to hundreds of substructures. The type discipline we impose organizes information in an object-oriented manner by the multiple inheritance of classes and their associated features and type value constraints. In practice, this allows the construction of large-scale grammars in a relatively short period of time. At run-time, eager copying and structure-sharing is replaced with lazy, incremental, and localized branch and write operations. In order to allow for applications with parallel search, incremental backtracking can be localized to disjunctive choice points within the description of a single structure, thus supporting the kind of conditional mutual consistency checks used in modern grammatical theories such as HPSG, GB, and LFG. Further attention is paid to the byte-coding of instructions and their efficient indexing and subsequent retrieval, all of which is keyed on type information.
chen-lee-1995-chunking
1995
A Chunking-and-Raising Partial Parser
Parsing is often seen as a combinatorial problem. This is due not to the properties of natural languages, but to the parsing strategies used. This paper investigates a Constrained Grammar extracted from a Treebank and applies it in a non-combinatorial partial parser. This parser is a simpler version of a chunking-and-raising parser. The chunking and raising actions can be done in linear time. The short-term goal of this research is to help the development of a partially bracketed corpus, i.e., a simpler version of a treebank. The long-term goal is to provide high level linguistic constraints for many natural language applications.
diagne-etal-1995-distributed
1995
Distributed Parsing With HPSG Grammars
fischer-etal-1995-chart
1995
Chart-based Incremental Semantics Construction with Anaphora Resolution Using $\lambda$-DRT
gerdemann-1995-term
1995
Term Encoding of Typed Feature Structures
gonzalo-solias-1995-generic
1995
Generic Rules and Non-Constituent Coordination
We present a metagrammatical formalism, generic rules, to give a default interpretation to grammar rules. Our formalism introduces a process of dynamic binding interfacing the level of pure grammatical knowledge representation and the parsing level. We present an approach to non-constituent coordination within categorial grammars, and reformulate it as a generic rule. This reformulation is context-free parsable and drastically reduces the search space associated with the parsing task for such phenomena.
grinberg-etal-1995-robust
1995
A Robust Parsing Algorithm for Link Grammars
In this paper we present a robust parsing algorithm based on the link grammar formalism for parsing natural languages. Our algorithm is a natural extension of the original dynamic programming recognition algorithm which recursively counts the number of linkages between two words in the input sentence. The modified algorithm uses the notion of a null link in order to allow a connection between any pair of adjacent words, regardless of their dictionary definitions. The algorithm proceeds by making three dynamic programming passes. In the first pass, the input is parsed using the original algorithm which enforces the constraints on links to ensure grammaticality. In the second pass, the total cost of each substring of words is computed, where cost is determined by the number of null links necessary to parse the substring. The final pass counts the total number of parses with minimal cost. All of the original pruning techniques have natural counterparts in the robust algorithm. When used together with memoization, these techniques enable the algorithm to run efficiently with cubic worst-case complexity. We have implemented these ideas and tested them by parsing the Switchboard corpus of conversational English. This corpus comprises approximately three million words of text, corresponding to more than 150 hours of transcribed speech collected from telephone conversations restricted to 70 different topics. Although only a small fraction of the sentences in this corpus are "grammatical" by standard criteria, the robust link grammar parser is able to extract relevant structure for a large portion of the sentences. We present the results of our experiments using this system, including the analyses of selected and random sentences from the corpus. We placed a version of the robust parser on the World Wide Web for experimentation. It can be reached at http://www.cs.cmu.edu/afs/cs.cmu.edu/project/link/www/robust.html. In this version there are some limitations such as the maximum length of a sentence in words and the maximum amount of memory the parser can use.
holan-etal-1995-implementation
1995
An Implementation of Syntactic Analysis of Czech
This paper describes current results achieved during the work on parsing of a free-word-order natural language (Czech). It contains the theoretical base for a new class of grammars - CFG extended for dependencies and non-projectivities - and also the description of the implementation of a parser and grammar-checker. The paper also describes some typical problems of parsing of free-word-order languages and their solutions (or discussion of those problems), which are still the subject of investigation. The implementation described here currently serves as a testing tool for the development of a large scale grammar of Czech. Some of the quantitative data from the processing of test sentences are also included.
kurohashi-1995-analyzing
1995
Analyzing Coordinate Structures Including Punctuation in English
We present a method of identifying coordinate structure scopes and determining usages of commas in sentences at the same time. All possible interpretations concerning comma usages and coordinate structure scopes are ranked by taking advantage of parallelism between conjoined phrases/clauses/sentences and calculating their similarity scores. We evaluated this method through experiments on held-out test sentences and obtained promising results: both the success ratio of interpreting commas and that of detecting CS scopes were about 80%.
ciravegna-lavelli-1995-parsing
1995
On Parsing Control for Efficient Text Analysis
lombardo-lesmo-1995-practical
1995
A Practical Dependency Parser
The working assumption is that cognitive modeling of NLP and engineering solutions to free text parsing can converge to optimal parsing. The claim of the paper is that the methodology to achieve such a result is to develop a concrete environment with a flexible parser, that allows the testing of various psycholinguistic strategies on real texts. In this paper we outline a flexible parser based on a dependency grammar.
luz-filho-sturt-1995-labelled
1995
A Labelled Analytic Theorem Proving Environment for Categorial Grammar
We present a system for the investigation of computational properties of categorial grammar parsing based on a labelled analytic tableaux theorem prover. This proof method allows us to take a modular approach, in which the basic grammar can be kept constant, while a range of categorial calculi can be captured by assigning different properties to the labelling algebra. The theorem proving strategy is particularly well suited to the treatment of categorial grammar, because it allows us to distribute the computational cost between the algorithm which deals with the grammatical types and the algebraic checker which constrains the derivation.
morawietz-1995-unification
1995
A Unification-Based ID/LP Parsing Schema
In contemporary natural language formalisms like HPSG (Pollard and Sag 1994) the ID/LP format is used to separate the information on dominance from the one on linear precedence thereby allowing significant generalizations on word order. In this paper, we define unification ID/LP grammars. But as mentioned in Seiffert (1991) there are problems concerning the locality of the information determining LP acceptability during parsing. Since one is dealing with partially specified data, the information that is relevant to decide whether the local tree under construction is LP acceptable might be instantiated further during processing. In this paper we propose a modification of the Earley/Shieber algorithm on direct parsing of ID/LP grammars. We extend the items involved to include the relevant underspecified information using it in the completion steps to ensure the acceptability of the resulting structure. Following Sikkel (1993) we define it not as an algorithm, but as a parsing schema to allow the most abstract representation.
mori-nagao-1995-parsing
1995
Parsing Without Grammar
We describe and experimentally evaluate a method to parse a tagged corpus without a grammar, modeling a natural language as a context-free language. This method is based on the following three hypotheses. 1) Part-of-speech sequences on the right-hand side of a rewriting rule are less constrained as to what part-of-speech precedes and follows them than non-constituent sequences. 2) Part-of-speech sequences directly derived from the same non-terminal symbol have similar environments. 3) The most suitable set of rewriting rules makes the greatest reduction of the corpus size. Based on these hypotheses, the system finds a set of constituent-like part-of-speech sequences and replaces them with a new symbol. The repetition of these processes brings us a set of rewriting rules, a grammar, and the bracketed corpus.
nasr-1995-formalism
1995
A Formalism and a Parser for Lexicalised Dependency Grammars
oflazer-1995-error
1995
Error-tolerant Finite State Recognition
Error-tolerant recognition enables the recognition of strings that deviate slightly from any string in the regular set recognized by the underlying finite state recognizer. In the context of natural language processing, it has applications in error-tolerant morphological analysis and spelling correction. After a description of the concepts and algorithms involved, we give examples from these two applications: In morphological analysis, error-tolerant recognition allows misspelled input word forms to be corrected and morphologically analyzed concurrently. The algorithm can be applied to the morphological analysis of any language whose morphology is fully captured by a single (and possibly very large) finite state transducer, regardless of the word formation processes (such as agglutination or productive compounding) and morphographemic phenomena involved. We present an application to error-tolerant analysis of the agglutinative morphology of Turkish words. In spelling correction, error-tolerant recognition can be used to enumerate correct candidate forms from a given misspelled string within a certain edit distance. It can be applied to any language whose morphology is fully described by a finite state transducer, or with a word list comprising all inflected forms. With very large word lists of root and inflected forms (some containing well over 200,000 forms), all candidate solutions are generated within 10 to 45 milliseconds (with edit distance 1) on a SparcStation 10/41. For spelling correction in Turkish, error-tolerant recognition operating with a (circular) recognizer of Turkish words (with about 29,000 states and 119,000 transitions) can generate all candidate words in less than 20 milliseconds (with edit distance 1). Spelling correction using a recognizer constructed from a large German word list that simulates compounding also indicates that the approach is applicable in such cases.
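A simplified sketch of error-tolerant lookup: the paper operates on finite-state transducers, but the same bounded-edit-distance search can be illustrated over a plain word-list trie. The toy lexicon and helper names here are invented for the sketch.

```python
# Error-tolerant recognition over a word list, simplified to a trie instead of the
# full finite-state transducer: return all lexicon entries within a given edit
# distance of the input string.
def build_trie(words):
    root = {}
    for w in words:
        node = root
        for ch in w:
            node = node.setdefault(ch, {})
        node["$"] = True                       # end-of-word marker
    return root

def error_tolerant_lookup(trie, word, max_dist):
    results = []
    n = len(word)
    first_row = list(range(n + 1))             # edit distances against the empty prefix

    def recurse(node, prefix, prev_row):
        if "$" in node and prev_row[n] <= max_dist:
            results.append((prefix, prev_row[n]))
        if min(prev_row) > max_dist:           # cut-off: no completion can get back under the bound
            return
        for ch, child in node.items():
            if ch == "$":
                continue
            row = [prev_row[0] + 1]
            for j in range(1, n + 1):
                cost = 0 if word[j - 1] == ch else 1
                row.append(min(row[j - 1] + 1,          # insertion
                               prev_row[j] + 1,         # deletion
                               prev_row[j - 1] + cost)) # substitution / match
            recurse(child, prefix + ch, row)

    recurse(trie, "", first_row)
    return results

lexicon = build_trie(["recognition", "recognizer", "cognition", "region"])
print(error_tolerant_lookup(lexicon, "recogniton", 1))   # [('recognition', 1)]
```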
samuelsson-1995-novel
1995
A Novel Framework for Reductionistic Statistical Parsing
A reductionistic statistical framework for part-of-speech tagging and surface syntactic parsing is presented that has the same expressive power as the highly successful Constraint Grammar approach, see [Karlsson et al. 1995]. The structure of the Constraint Grammar rules allows them to be viewed as conditional probabilities that can be used to update the lexical tag probabilities, after which low-probability tags are repeatedly removed. Experiments using strictly conventional information sources on the Susanne and Teleman corpora indicate that the system performs as well as a traditional HMM-based part-of-speech tagger, yielding state-of-the-art results. The scheme also enables using the same information sources as the Constraint Grammar approach, and the hope is that it can improve on the performance of both statistical taggers and surface-syntactic analyzers.
sekine-grishman-1995-corpus
1995
A Corpus-based Probabilistic Grammar with Only Two Non-terminals
The availability of large, syntactically-bracketed corpora such as the Penn Tree Bank affords us the opportunity to automatically build or train broad-coverage grammars, and in particular to train probabilistic grammars. A number of recent parsing experiments have also indicated that grammars whose production probabilities are dependent on the context can be more effective than context-free grammars in selecting a correct parse. To make maximal use of context, we have automatically constructed, from the Penn Tree Bank version 2, a grammar in which the symbols S and NP are the only real nonterminals, and the other non-terminals or grammatical nodes are in effect embedded into the right-hand-sides of the S and NP rules. For example, one of the rules extracted from the tree bank would be S -> NP VBX JJ CC VBX NP [1] (where NP is a non-terminal and the other symbols are terminals - part-of-speech tags of the Tree Bank). The most common structure in the Tree Bank associated with this expansion is (S NP (VP (VP VBX (ADJ JJ) CC (VP VBX NP)))) [2]. So if our parser uses rule [1] in parsing a sentence, it will generate structure [2] for the corresponding part of the sentence. Using 94% of the Penn Tree Bank for training, we extracted 32,296 distinct rules (23,386 for S, and 8,910 for NP). We also built a smaller version of the grammar based on higher frequency patterns for use as a back-up when the larger grammar is unable to produce a parse due to memory limitation. We applied this parser to 1,989 Wall Street Journal sentences (separate from the training set and with no limit on sentence length). Of the parsed sentences (1,899), the percentage of no-crossing sentences is 33.9%, and Parseval recall and precision are 73.43% and 72.61%.
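A hedged sketch of the rule-flattening step the abstract illustrates with rule [1]: only S and NP are kept as nonterminals, and every other node is expanded down to its part-of-speech tags. The nested-tuple tree encoding and the example tree are assumptions made for illustration, not the paper's data structures.

```python
# Flatten a treebank tree into the two-nonterminal style described above: only
# S and NP survive as nonterminals, everything else is expanded to POS tags.
KEEP = {"S", "NP"}

def flatten(tree):
    """tree = (label, child, ...) where a preterminal is (POS, word). Returns the RHS."""
    label, children = tree[0], tree[1:]
    if len(children) == 1 and isinstance(children[0], str):
        return [label]                       # preterminal: emit the POS tag
    rhs = []
    for child in children:
        if child[0] in KEEP:
            rhs.append(child[0])             # kept nonterminal: stop expanding
        else:
            rhs.extend(flatten(child))       # everything else is flattened through
    return rhs

def extract_rules(tree, rules=None):
    rules = [] if rules is None else rules
    label, children = tree[0], tree[1:]
    if len(children) == 1 and isinstance(children[0], str):
        return rules
    if label in KEEP:
        rules.append((label, tuple(flatten(tree))))
    for child in children:
        extract_rules(child, rules)
    return rules

# Invented example tree, shaped to reproduce the S -> NP VBX JJ CC VBX NP pattern.
t = ("S",
     ("NP", ("DT", "the"), ("NN", "price")),
     ("VP", ("VBX", "was"),
            ("ADJP", ("JJ", "low"), ("CC", "and"),
                     ("VP", ("VBX", "fell"), ("NP", ("NN", "gains"))))))
print(extract_rules(t))
# [('S', ('NP', 'VBX', 'JJ', 'CC', 'VBX', 'NP')), ('NP', ('DT', 'NN')), ('NP', ('NN',))]
```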
srinivas-etal-1995-heuristics
1995
Heuristics and Parse Ranking
There are currently two philosophies for building grammars and parsers - statistically induced grammars and wide-coverage grammars. One way to combine the strengths of both approaches is to have a wide-coverage grammar with a heuristic component which is domain independent but whose contribution is tuned to particular domains. In this paper, we discuss a three-stage approach to disambiguation in the context of a lexicalized grammar, using a variety of domain independent heuristic techniques. We present a training algorithm which uses hand-bracketed treebank parses to set the weights of these heuristics. We compare the performance of our grammar against the performance of the IBM statistical grammar, using both untrained and trained weights for the heuristics.
tendeau-1995-stochastic
1995
Stochastic Parse-Tree Recognition by a Pushdown Automaton
We present the stochastic generalization of what is usually called correctness theorems: we guarantee that the probabilities computed operationally by the parsing algorithms are the same as those defined denotationally on the trees and forests defined by the grammar. The main idea of the paper is to precisely relate the parsing strategy with a parse-tree exploration strategy: a computational path of a parsing algorithm simply performs an exploration of a parse-tree for the input portion already parsed. This approach is applied in particular to Earley and Left-Corner parsing algorithms. Probability computations follow parsing operations: looping problems (in rule prediction and subtree recognition) are solved by introducing probability variables (which may not be immediately evaluated). Convergence is ensured by the syntactic construction that leads to stochastic equation systems, which are solved as soon as possible. Our algorithms accept any (probabilistic) CF grammar. No restrictions are made, such as prescribing normal form or proscribing empty rules or cyclic grammars.
torisawa-tsujii-1995-hpsg
1995
An HPSG-based Parser for Automatic Knowledge Acquisition
vijay-shanker-etal-1995-parsing
1995
Parsing D-Tree Grammars
wauschkuhn-1995-influence
1995
The Influence of Tagging on the Results of Partial Parsing in German Corpora
weng-stolcke-1995-partitioning
1995
Partitioning Grammars and Composing Parsers
wintner-francez-1995-parsing
1995
Parsing with Typed Feature Structures
In this paper we provide a method for parsing with respect to grammars expressed in a general TFS-based formalism, a restriction of ALE ([2]). Our motivation being the design of an abstract (WAM-like) machine for the formalism ([14]), we consider parsing as a computational process and use it as an operational semantics to guide the design of the control structures for the abstract machine. We emphasize the notion of abstract typed feature structures (AFSs) that encode the essential information of TFSs and define unification over AFSs rather than over TFSs. We then introduce an explicit construct of multi-rooted feature structures (MRSs) that naturally extend TFSs and use them to represent phrasal signs as well as grammar rules. We also employ abstractions of MRSs and give the mathematical foundations needed for manipulating them. We then present a simple bottom-up chart parser as a model for computation: grammars written in the TFS-based formalism are executed by the parser. Finally, we show that the parser is correct.
mckelvie-thompson-1994-tei
1994
TEI-Conformant Structural Markup of a Trilingual Parallel Corpus in the ECI Multilingual Corpus 1
In this paper we provide an overview of the ACL European Corpus Initiative (ECI) Multilingual Corpus 1 (ECI/MC1). In particular, we look at one particular subcorpus in the ECI/MC1, the trilingual corpus of International Labour Organisation reports, and discuss the problems involved in TEI-compliant structural markup and preliminary alignment of this large corpus. We discuss gross structural alignment down to the level of text paragraphs. We see this as a necessary first step in corpus preparation before detailed (possibly automatic) alignment of text is possible. We try to generalise our experience with this corpus to illustrate the process of preliminary markup of large corpora which in their raw state can be in an arbitrary format (e.g. printers' tapes, proprietary word-processor formats); noisy (not fully parallel, with structure obscured by spelling mistakes); full of poorly documented formatting instructions; and whose structure is present but anything but explicit. We illustrate these points by reference to other parallel subcorpora of ECI/MC1. We attempt to define some guidelines for the development of corpus annotation toolkits which would aid this kind of structural preparation of large corpora.
yarowsky-1994-comparison
1994
A Comparison of Corpus-based Techniques for Restoring Accents in Spanish and French Text
This paper will explore and compare three corpus-based techniques for lexical ambiguity resolution, focusing on the problem of restoring missing accents to Spanish and French text. Many of the ambiguities created by missing accents are differences in part of speech: hence one of the methods considered is an N-gram tagger using Viterbi decoding, such as is found in stochastic part-of-speech taggers. A second technique, Bayesian classification, has been successfully applied to word-sense disambiguation and is well suited for some of the semantic ambiguities which arise from missing accents. The third approach, based on decision lists, combines the strengths of the two other methods, incorporating both local syntactic patterns and more distant collocational evidence, and outperforms them both. The problem of accent restoration is particularly well suited for demonstrating and testing the capabilities of the given algorithms because it requires the resolution of both semantic and syntactic ambiguity, and offers an objective ground truth for automatic evaluation. It is also a practical problem with immediate application.
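A toy decision-list classifier in the spirit of the third method the abstract describes: contextual features are collected for each candidate form, ranked by a smoothed log-likelihood ratio, and applied first-match-wins. The feature templates, the smoothing constant, and the miniature Spanish example are illustrative assumptions, not the paper's setup.

```python
# Minimal decision-list classifier: collect contextual features, rank them by a
# smoothed log-likelihood ratio, classify by the first matching feature.
import math
from collections import defaultdict

def features(context):
    left, right = context                    # context = (left_words, right_words)
    feats = set()
    if left:  feats.add(("prev", left[-1]))
    if right: feats.add(("next", right[0]))
    for w in left + right:
        feats.add(("window", w))
    return feats

def train_decision_list(examples, alpha=0.1):
    # examples: list of (context, label) with exactly two distinct labels
    labels = sorted({lab for _, lab in examples})
    counts = defaultdict(lambda: defaultdict(float))
    for context, lab in examples:
        for f in features(context):
            counts[f][lab] += 1
    rules = []
    for f, c in counts.items():
        llr = math.log((c[labels[0]] + alpha) / (c[labels[1]] + alpha))
        rules.append((abs(llr), f, labels[0] if llr > 0 else labels[1]))
    rules.sort(reverse=True)                 # strongest evidence first
    return rules, labels

def classify(rules, default, context):
    feats = features(context)
    for _, f, label in rules:
        if f in feats:
            return label
    return default

# Invented mini-corpus for Spanish "si" (if) vs "sí" (yes)
data = [
    ((["dijo", "que"], ["quería", "venir"]), "sí"),
    ((["respondió", "que"], [",", "claro"]), "sí"),
    ((["no", "sé"], ["vendrá", "hoy"]), "si"),
    ((["pregunta"], ["puede", "ayudar"]), "si"),
]
rules, labels = train_decision_list(data)
print(classify(rules, labels[0], (["dijo", "que"], ["vendrá"])))   # -> sí
```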
uramoto-1994-extracting
1994
Extracting a Disambiguated Thesaurus from Parallel Dictionary Definitions
This paper describes a method for extracting disambiguated (bilingual) is-a relationships from parallel (English and Japanese) dictionary definitions by using word-level alignment. Definitions have a specific pattern, namely, a "genus term and differentia" structure; therefore, bilingual genus terms can be extracted by using bilingual pattern matching. For the alignment of words in the genus terms, a dynamic programming framework for sentence-level alignment proposed by Gale et al. [6] is used.
kita-etal-1994-application
1994
Application of Corpora in Second Language Learning: The Problem of Collocational Knowledge Acquisition
While corpus-based studies are now becoming a new methodology in natural language processing, second language learning offers one interesting potential application. In this paper, we are primarily concerned with the acquisition of collocational knowledge from corpora for use in language learning. First we discuss the importance of collocational knowledge in second language learning, and then take up two measures, mutual information and cost criteria, for automatically identifying or extracting collocations from corpora. Comparative experiments are made between the two measures using both Japanese and English corpora. In our experiments, the cost criteria measure proved more effective in extracting interesting collocations such as fundamental idiomatic expressions and phrases.
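A small sketch of the first of the two measures compared, pointwise mutual information for adjacent word pairs (the cost-criteria measure is not reproduced here); the toy text and the minimum-count threshold are assumptions.

```python
# Pointwise mutual information for adjacent word pairs as a collocation score.
import math
from collections import Counter

def collocations_by_pmi(tokens, min_count=2):
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    n = len(tokens)
    scored = []
    for (w1, w2), c in bigrams.items():
        if c < min_count:
            continue
        pmi = math.log2((c / (n - 1)) / ((unigrams[w1] / n) * (unigrams[w2] / n)))
        scored.append((pmi, w1, w2, c))
    return sorted(scored, reverse=True)

text = ("in spite of the rain we went out in spite of everything "
        "the rain kept falling in spite of the forecast").split()
for pmi, w1, w2, c in collocations_by_pmi(text):
    print(f"{w1} {w2}\tcount={c}\tPMI={pmi:.2f}")
```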
grishman-1994-iterative
1994
Iterative Alignment of Syntactic Structures for a Bilingual Corpus
Alignment of parallel bilingual corpora at the level of syntactic structure holds the promise of being able to discover detailed bilingual structural correspondences automatically. This paper describes a procedure for the alignment of regularized syntactic structures, proceeding bottom-up through the trees. It makes use of information about possible lexical correspondences, from a bilingual dictionary, to generate initial candidate alignments. We consider in particular how much dictionary coverage is needed for the alignment process, and how the alignment can be iteratively improved by having an initial alignment generate additional lexical correspondences for the dictionary, and then using this augmented dictionary for subsequent alignment passes.
fung-wu-1994-statistical
1994
Statistical Augmentation of a Chinese Machine-Readable Dictionary
We describe a method of using statistically-collected Chinese character groups from a corpus to augment a Chinese dictionary. The method is particularly useful for extracting domain-specific and regional words not readily available in machine-readable dictionaries. Output was evaluated both using human evaluators and against a previously available dictionary. We also evaluated performance improvement in automatic Chinese tokenization. Results show that our method outputs legitimate words, acronymic constructions, idioms, names and titles, as well as technical compounds, many of which were lacking from the original dictionary.
fuji-croft-1994-comparing
1994
Comparing the Retrieval Performance of English and Japanese Text Databases
The retrieval effectiveness for English and Japanese full-text databases is studied using the INQUERY retrieval system. Two series of experiments - short queries and longer TIPSTER queries - were examined. For short queries, Japanese generally performed more effectively than English. For longer queries, relative effectiveness showed little correlation among various query strategies. This result suggests that the best Japanese query processing strategy may be quite different from the English one.
merkel-etal-1994-phrase
1994
A Phrase-Retrieval System based on Recurrence
The paper describes a simple but useful phrase-retrieval system that primarily is intended as a support tool for computer-aided translation. Given no other input than a text (and a word list used for filtering purposes), the system retrieves recurrent sentences and phrases of the text and their positions. In addition the system provides information on internal and external recurrence rates.
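A rough sketch of recurrence-based phrase retrieval: report word n-grams that occur more than once, together with their positions. The paper's filtering word list and sentence-level handling are omitted; the length and count parameters are invented.

```python
# Find recurrent word n-grams in a text and report their positions.
from collections import defaultdict

def recurrent_phrases(tokens, min_len=2, max_len=6, min_count=2):
    hits = {}
    for n in range(min_len, max_len + 1):
        index = defaultdict(list)
        for i in range(len(tokens) - n + 1):
            index[tuple(tokens[i:i + n])].append(i)
        for phrase, positions in index.items():
            if len(positions) >= min_count:
                hits[phrase] = positions
    return hits

text = ("click the start button to open the menu then click the start button "
        "again to close the menu").split()
for phrase, pos in sorted(recurrent_phrases(text).items(), key=lambda kv: -len(kv[0])):
    print(" ".join(phrase), "->", pos)
```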
sekine-1994-automatic
1994
Automatic Sublanguage Identification for a New Text
A number of theoretical studies have been devoted to the notion of sublanguage, which mainly concerns linguistic phenomena restricted by the domain or context. Furthermore, there are some successful NLP systems which have explicitly or implicitly addressed sublanguage restrictions (e.g. TAUM-METEO, ATR). This suggests the following two objectives for future NLP research: 1) automatic linguistic knowledge acquisition for sublanguage, and 2) automatic definition of sublanguage and identification of it for a new text. The two issues become realistic owing to the appearance of large corpora. Despite the recent bloom of research on the first objective, there is little on the second objective. If this objective is achieved, NLP systems will be able to optimize to the sublanguage before processing the text, and this will be a significant help in automatic processing. A preliminary experiment aiming at the second objective is addressed in this paper. It is conducted on about 3 MB of Wall Street Journal corpus. We made up article clusters (sublanguages) based on word appearance, and the closest article cluster among the set of clusters is chosen for each test article. The comparison between the new articles and the clusters shows the success of the sublanguage identification and also the promising ability of the method. Also the result of an experiment using the first two sentences in the articles indicates the feasibility of applying this method to speech recognition or other systems which cannot access the whole article prior to processing.
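A simplified stand-in for the cluster-selection step the abstract describes: represent the new article and each article cluster as word-count vectors and pick the cluster with the highest cosine similarity. The clusters below are invented placeholders; the paper's actual clustering criteria are not reproduced.

```python
# Pick the closest article cluster for a new text by cosine similarity over
# word-count vectors (the clustering itself is assumed to have been done already).
import math
from collections import Counter

def cosine(a, b):
    num = sum(a[w] * b[w] for w in a if w in b)
    den = (math.sqrt(sum(v * v for v in a.values())) *
           math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

clusters = {  # invented cluster centroids, stored as word counts
    "earnings":   Counter("net income rose quarter share earnings profit".split()),
    "markets":    Counter("stocks bonds traders fell index points market".split()),
    "technology": Counter("computer software chips systems network data".split()),
}

def closest_cluster(article_text, clusters):
    vec = Counter(article_text.lower().split())
    return max(clusters, key=lambda name: cosine(vec, clusters[name]))

print(closest_cluster("The company said net income rose 12% in the quarter", clusters))
```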
umemura-1994-string
1994
String Comparison based on Substring Equations
This paper describes a practical method to compute whether two strings are equivalent under certain equations. This method uses a procedure called Critical-Pair/Completion that generates rewriting rules from equations. Unlike other Critical-Pair/Completion procedures, the procedure described here always stops for all equations because it treats strings of bounded length. This paper also explains the importance of the string equivalence problem if international data handling is required.
santos-1994-bilingual
1994
Bilingual Alignment and Tense
In this paper, I describe one annotation of tense transfer in parallel English and Portuguese texts. Even though the primary aim of the study is to compare the tense and aspect systems of the two languages, it also raises some questions as far as bilingual alignment in general is concerned. First, I present a detailed list of clausal mismatches, which shows that intra-sentential alignment is not an easy task. Subsequently, I present a detailed quantitative description of the translation pairs found and discuss some possible conclusions for the translation of tense. Finally, I discuss some theoretical problems related to translation.
van-der-eijk-1994-comparative
1994
Comparative Discourse Analysis of Parallel Texts
A quantitative representation of discourse structure can be computed by measuring lexical cohesion relations among adjacent blocks of text. These representations have been proposed to deal with sub-topic text segmentation. In a parallel corpus, similar representations can be derived for versions of a text in various languages. These can be used for parallel segmentation and as an alternative measure of text-translation similarity.
lewis-1994-machine
1994
Machine translation -- ten years on: an overview of the conference
The International Conference, Machine Translation - Ten Years On, took place at Cranfield University, 12-14 November 1994. The occasion was the tenth anniversary of the previous international conference on Machine Translation (MT) held at Cranfield. The 1994 conference was organised by Cranfield University in conjunction with the Natural Language Translation Specialist Group of the British Computer Society. Apart from detailed descriptions of prototype systems, the conference provided overviews of general developments in the field of MT. Considerable research is taking place into speech recognition and dialogue systems, and into incorporating features of spoken language and discourse into computer representations of natural language. At the same time, more sophisticated techniques for the statistical analysis of text corpora are emerging that may fundamentally alter the direction of MT research. It is clear that knowledge-based systems representing conceptual information for particular subject domains independently of specific languages are seen as a practical way forward for MT. Another promising direction is the emergence of interactive systems that can be used by non-translators working within a distributed processing environment. Moving away from research and development, the conference afforded practical insights into a number of operational systems. These ranged from large, established systems such as SYSTRAN, to smaller interactive programs for a PC. The evaluation and commercial performance of MT systems remains a key issue, alongside the wider question of who actually uses MT.
wilks-1994-notes
1994
Some notes on the state of the art: Where are we now in MT: what works and what doesn't?
The paper examines briefly the impact of the "statistical turn" in machine translation (MT) R&D in the last decade, and particularly the way in which it has made large scale language resources (lexicons, text corpora etc.) more important than ever before and reinforced the role of evaluation in the development of the field. But resources mean, almost by definition, co-operation between groups and, in the case of MT, specifically co-operation between language groups and states. The paper then considers what alternatives there are now for MT R&D. One is to continue with interlingual methods of translation, even though those are not normally thought of as close to statistical methods. The reason is that statistical methods, taken alone, have almost certainly reached a ceiling in terms of the proportion of sentences and linguistic phenomena they can translate successfully. Interlingual methods remain popular within large electronics companies in Japan, and in a large US Government funded project (PANGLOSS). The question then discussed is what role there can be for interlinguas and interlingual methods in co-operation in MT across linguistic and national boundaries. The paper then turns to evaluation and asks whether, across national and continental boundaries, it can become a co-operative or a "hegemonic" enterprise. Finally the paper turns to resources themselves and asks why co-operation on resources is proving so hard, even though there are bright spots of real co-operation.
mcenery-etal-1994-use
1994
The use of approximate string matching techniques in the alignment of sentences in parallel corpora
Parallel corpora such as the Canadian Hansard corpus and the International Telecommunications Union (ITU) corpus each provide the same text in two or more languages, and have been aptly described as the "Rosetta Stone" of modern corpus linguistics [1]. Their use within MT is burgeoning, permeating all levels of the discipline, and even being used as the basis of full-blown statistically based MT systems. This paper will concern itself with the task of automatic bilingual lexicon construction, which is one of the major goals of the CRATER project ("Corpus Resources and Terminology Extraction", funded under the MLAP initiative of the CEC, grant number MLAP-93/20). The approach to bilingual lexicon alignment taken here entails the alignment of corpora, and then a detailed search through the corpus for lexical cognates. Consequently the paper will begin with a brief discussion of the alignment procedures used on the project to date, and move to a discussion of various similarity metrics used to evaluate lexical similarity.
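The abstract does not spell out which similarity metrics the project uses, so the sketch below shows one common approximate-string-matching choice for cognate detection, Dice's coefficient over character bigrams; treat it as an illustration of the kind of metric involved, not the project's actual metric.

```python
# Dice's coefficient over character bigrams as a cognate-similarity score.
def char_bigrams(word):
    w = word.lower()
    return {w[i:i + 2] for i in range(len(w) - 1)}

def dice(w1, w2):
    a, b = char_bigrams(w1), char_bigrams(w2)
    if not a or not b:
        return 0.0
    return 2 * len(a & b) / (len(a) + len(b))

pairs = [("telecommunication", "télécommunication"),
         ("frequency", "fréquence"),
         ("network", "réseau")]
for en, fr in pairs:
    print(f"{en} / {fr}: {dice(en, fr):.2f}")
```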
al-hafez-etal-1994-semantic
1994
A semantic knowledge-based computational dictionary
The Computational Dictionary, described in this paper, is structured on a knowledge base. The semantic features of each word, in a relevant grammatical category, can be determined through a hierarchical tree structure. Semantic knowledge of verbs is represented using predicate calculus definitions. This allows each expression, e.g. sentence or command, to be tested in order to determine whether it is meaningful and, if meaningful, what its meaning is or indeed whether it is ambiguous.
mitkov-haller-1994-machine
1994
Machine translation, ten years on: Discourse has yet to make a breakthrough
Progress in Machine Translation (MT) during the last ten years has been observed at different levels, but discourse has yet to make a breakthrough. MT research and development has concentrated so far mostly on sentence translation (discourse analysis being a very complicated task) and the successful operation of most of the working MT systems does not usually go beyond the sentence level. To start with, the paper will refer to the MT research and development in the last ten years at the IAI in Saarbrücken. Next, the MT discourse issues will be discussed both from the point of view of source language analysis and target text generation, and on the basis of the preliminary results of an ongoing "discourse-oriented MT" project. Probably the most important aspect in successfully analysing multisentential source texts is the capacity to establish the anaphoric references to preceding discourse entities. The paper will discuss the problem of anaphora resolution from the perspective of MT. A new integrated model for anaphora resolution, developed for the needs of MT, will also be outlined. As already mentioned, most machine translation systems perform translation sentence by sentence. But even in the case of paragraph translation, the discourse structure of the target text tends to be identical to that of the source text. However, the sublanguage discourse structures may differ across the different languages, and thus a translated text which assumes the same discourse structure as the source text may sound unnatural and perhaps disguise the true intent of the writer. Finally, the paper will outline a new approach for generating discourse structures appropriate to the target sublanguage and will discuss some of the complicated problems encountered.
sokolova-1994-stylus
1994
STYLUS - the MT product line for Russian: the current state
The current state of the machine translation system STYLUS is described. The system can produce smooth and accurate translation for more than 80% of source text in the domain chosen. The modular structure of the dictionaries gives the possibility of customising the system to personal needs. The grammar employed is based on an ATN-like formalism.
hahn-angelova-1994-providing
1994
Providing factual information in MAT
Most translations are needed for technical documents in specific domains and often the domain knowledge available to the translator is crucial for the efficiency and quality of the translation task. Our project aims at the investigation of a MAT paradigm where the human user is supported by linguistic as well as by subject information ([vHa90], [vHAn92]). The basic hypotheses of the approach are: - domain knowledge is not encoded in the lexicon entries, i.e. we clearly distinguish between the language layer and the conceptual layer; - the representation of domain knowledge is language independent and replaces most of the semantic entries in a traditional semantic lexicon of MT/MAT systems; - the user accesses domain information by highlighting a sequence in the source text and specifying the type of query; - factual explanations to the user should be simple and transparent although the underlying formalisms for knowledge representation and processing might be very complex; - as a language for knowledge representation, conceptual graphs (CGs) of Sowa [Sow84] were chosen. In providing connections between the terms (lexical entries) and the knowledge base, our approach will be compared to terminological knowledge bases (TKBs), which are hybrid systems between concept-oriented term banks and knowledge bases. This paper presents: - a contrastive view of knowledge based techniques in MAT, - mechanisms for mapping the "ordinary" linguistic lexicon and the terminological lexicon of two languages onto one knowledge base, - methods to access the domain knowledge in a flexible way without allowing completely free linguistic dialogues, - techniques to present the result of queries to the translator in restricted natural language, and - use of domain knowledge to solve specific translation difficulties.
moghrabi-1994-parametering
1994
On parametering the choice of words in text generation and its usefulness in machine translation
This paper describes briefly the overall architecture of a machine translation system between French and Arabic in the sub-world of cooking recipes. It continues to describe in more detail the design of the generation component and how this design allows a variety of outputs all expressing the same conceptual meaning. This system is of the family of knowledge-based interlingua translation systems as it emphasises the importance of the meaning of the text being processed and articulates all its available knowledge-bases in order to achieve one major goal: flexible meaningful wording. We agree with S. Nirenburg that "the ability and the right to subdivide sentences or to combine them together in the Target language are powerful tools in the hands of human translators." These are some tools that we want our MT systems to be able to use. The way the system is modularised allowed us to experiment with the generation of: sentence to sentence translations, text to text translations, more concise as opposed to more generalised wording, and varying word orders. The modules are declarative and loosely coupled. This strategy allowed us to experiment with regenerating back the French text. As a matter of fact, the generation component of this MT system is multilingual and capable of accommodating Arabic and French; two languages belonging to two different origins, namely Indo-European and Semitic. This system, being functional in the domain of cooking recipes, allowed us to concentrate on the lexical semantics of its vocabulary and on the modularisation of its linguistic knowledge, whether it is morphological, syntactic or stylistic, as opposed to its pragmatic knowledge. Now that we have tested the design on different languages, we are studying its feasibility in new domains where texts are mainly constituted of verbal phrases, such as in gardening and chemistry laboratory manuals.
boitet-1994-dialogue
1994
Dialogue-Based MT and self-explaining documents as an alternative to MAHT and MT of controlled languages
We argue that, in many situations, Dialogue-Based MT is likely to offer better solutions to translation needs than machine aids to translators or batch MT, even if controlled languages are used. Objections to DBMT have led us to introduce the new concept of "self-explaining document", which might be used in monolingual as well as in multilingual contexts, and deeply change our way of understanding important or difficult written material.
wasyliw-clarke-1994-natural
1994
Natural language analysis and machine translation in pilot-ATC communication
A significant factor in air accidents is "pilot error". Included in this category are errors in natural language communication between the pilot and air traffic control (ATC); errors possibly compounded by the use of English as a standard language for such communication. We concentrate on the likelihood of misunderstanding created by ambiguities in these messages. Often only a few seconds exist between the receipt of an ambiguous message and the subsequent incorrect action (potentially) leading to a fatal accident. We consider the feasibility of filtering each spoken message through an "intelligent computer interface", testing for ambiguities and only transmitting those messages which are clear and unambiguous. Unclear, ambiguous messages should be "authenticated" before transmission. The procedures for computer analysis would require not only sensitive speech recognition equipment but also complex software performing sophisticated linguistic analysis at the phonetic, syntactic, semantic and pragmatic levels. Analysis must also take place in "real time" so that both pilot and controller can receive warning that ambiguities exist in the last communication and corrective action taken in the short time available. Consideration is also given to extending the system from the monolingual to multilingual level allowing pilot and controller each to think and speak in his own native tongue. The sophisticated language analysis is being extended to allow for appropriate disambiguated, bilingual machine translation.
bernhard-1994-machine
1994
Machine translation: ten years on: Where are the users?
Early attempts to process natural language by mechanical means or machines date back to the thirties of this century. The first machine translation applications are known from the fifties. In view of the long history of machine translation, it is rather strange that even in the mid-nineties this technology is used quite rarely in the daily work of translators. Based on eight years' experience as a user of machine translation (starting with LOGOS and changing to METAL), I will discuss the reasons why translators are still reluctant to use machine translation for their everyday work.
vella-vella-1994-implications
1,994
The implications of machine translation
In this paper we take a broad look at the likely implications of future developments in machine translation. In order to do this effectively we consider firstly what constitutes machine translation in all its various forms. We then take a number of scenarios which differ in the extent to which machine translation is successful.
bunt-1993-proceedings
1,993
Proceedings of the Third International Workshop on Parsing Technologies (IWPT `93)
bod-1993-monte
1,993
Monte Carlo Parsing
In stochastic language processing, we are often interested in the most probable parse of an input string. Since there can be exponentially many parses, comparing all of them is not efficient. The Viterbi algorithm (Viterbi, 1967; Fujisaki et al., 1989) provides a tool to calculate in cubic time the most probable derivation of a string generated by a stochastic context free grammar. However, in stochastic language models that allow a parse tree to be generated by different derivations {--} like Data Oriented Parsing (DOP) or Stochastic Lexicalized Tree-Adjoining Grammar (SLTAG) {--} the most probable derivation does not necessarily produce the most probable parse. In such cases, a Viterbi-style optimisation does not seem feasible to calculate the most probable parse. In the present article we show that by incorporating Monte Carlo techniques into a polynomial time parsing algorithm, the maximum probability parse can be estimated as accurately as desired in polynomial time. Monte Carlo parsing is not only relevant to DOP or SLTAG, but also provides for stochastic CFGs an interesting alternative to Viterbi. Unlike the current versions of Viterbi style optimisation (Fujisaki et al., 1989; Jelinek et al., 1990; Wright et al., 1991), Monte Carlo parsing is not restricted to CFGs in Chomsky Normal Form. For stochastic grammars that are parsable in cubic time, the time complexity of estimating the most probable parse with Monte Carlo turns out to be $O(n^2\varepsilon^{-2})$, where $n$ is the length of the input string and $\varepsilon$ the estimation error. In this paper we will treat Monte Carlo parsing first of all in the context of the DOP model, since it is especially here that the number of derivations generating a single tree becomes dramatically large. Finally, a Monte Carlo Chart parser is used to test the DOP model on a set of hand-parsed strings from the Air Travel Information System (ATIS) spoken language corpus. Preliminary experiments indicate 96{\%} test set parsing accuracy.
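As an illustration of the core sampling idea in this abstract (not Bod's actual DOP implementation), the following minimal Python sketch tallies the parse trees produced by randomly sampled derivations; `sample_derivation` is a hypothetical sampler that draws one derivation of the sentence according to the stochastic grammar and returns the tree that derivation yields.

    from collections import Counter

    def estimate_most_probable_parse(sample_derivation, sentence, n_samples=1000):
        # Several derivations can yield the same tree (as in DOP or SLTAG),
        # so tallying sampled trees approximates each tree's share of the
        # probability mass; the most frequently sampled tree estimates the
        # most probable parse.
        counts = Counter()
        for _ in range(n_samples):
            tree = sample_derivation(sentence)   # one random derivation -> its tree
            counts[tree] += 1
        best_tree, freq = counts.most_common(1)[0]
        return best_tree, freq / n_samples       # tree and its estimated probability

The estimation error shrinks as the number of samples grows, requiring on the order of $\varepsilon^{-2}$ samples for error $\varepsilon$, in line with the complexity bound quoted above.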
brill-1993-transformation
1,993
Transformation-Based Error-Driven Parsing
In this paper we describe a new technique for parsing free text: a transformational grammar is automatically learned that is capable of accurately parsing text into binary-branching syntactic trees. The algorithm works by beginning in a very naive state of knowledge about phrase structure. By repeatedly comparing the results of bracketing in the current state to proper bracketing provided in the training corpus, the system learns a set of simple structural transformations that can be applied to reduce the number of errors. After describing the algorithm, we present results and compare these results to other recent results in automatic grammar induction.
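A minimal sketch of the greedy error-driven loop described above; the helpers `candidate_transformations`, `apply_transformation` and `bracketing_errors` are hypothetical stand-ins for the paper's structural transformations and its comparison against the proper bracketing in the training corpus.

    def learn_transformations(corpus, candidate_transformations,
                              apply_transformation, bracketing_errors,
                              max_rules=50):
        # `corpus` holds each sentence's current (initially naive) bracketing
        # together with the gold bracketing from the training corpus.
        learned = []
        for _ in range(max_rules):
            # Score every candidate by how many bracketing errors remain
            # after applying it to the current state of the corpus.
            best = min(candidate_transformations,
                       key=lambda t: bracketing_errors(apply_transformation(t, corpus)))
            improved = apply_transformation(best, corpus)
            if bracketing_errors(improved) >= bracketing_errors(corpus):
                break                      # no transformation reduces the error count
            learned.append(best)
            corpus = improved              # commit the best transformation and iterate
        return learned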
bunt-van-der-sloat-1993-parsing
1,993
Parsing as Dynamic Interpretation
In this paper we consider the merging of the language of feature structures with a formal logical language, and how the semantic definition of the resulting language can be used in parsing. For the logical language we use the language EL, defined and implemented earlier for computational semantic purposes. To this language we add the basic constructions and operations of feature structures. The extended language we refer to as {\textquoteleft}Generalized EL', or {\textquoteleft}GEL'. The semantics of EL, and that of its extension GEL, is defined model-theoretically: for each construction of the language, a recursive rule describes how its value can be computed from the values of its constituents. Since GEL talks not only about semantic objects and their relations but also about syntactic concepts, GEL models are nonstandard in containing both kinds of entities. Whereas phrase-structure rules are traditionally viewed procedurally, as recipes for building phrases, and a rule in the parsing-as-deduction approach is viewed declaratively, as a proposition which is true when the conditions for building the phrase are satisfied, a rule in GEL is best viewed as a proposition in Dynamic Semantics: it can be evaluated recursively, and evaluates not to true or false, but to the minimal change in the model needed to make the proposition true. The viability of this idea has been demonstrated by a proof-of-concept implementation for DPSG chart parsing and an emulation of HPSG parsing in the STUF environment.
carpenter-1993-compiling
1,993
Compiling Typed Attribute-Value Logic Grammars
The unification-based approach to processing attribute-value logic grammars, similar to Prolog \textit{interpretation}, has become the standard. We propose an alternative, embodied in the Attribute-Logic Engine (ALE) (Carpenter 1993) , based on the Warren Abstract Machine (WAM) approach to compiling Prolog (A{\"i}t-Kaci 1991). Phrase structure grammars with procedural attachments, similar to Definite Clause Grammars (DCG) (Pereira {---} Warren 1980), are specified using a typed version of Rounds-Kasper logic (Carpenter 1992). We argue for the benefits of a strong and total version of typing in terms of both clarity and efficiency. Finally, we discuss the compilation of grammars into a few efficient low-level instructions for the basic feature structure operations.
costagliola-1993-pictorial
1,993
(Pictorial) LR Parsing from an Arbitrary Starting Point
In pictorial LR parsing it is always difficult to establish from which point of a picture the parsing process has to start. This paper introduces an algorithm that allows any element of the input to be considered as the starting one and, at the same time, assures that the parsing process is not compromised. The algorithm is first described on string grammars seen as a subclass of pictorial grammars and then adapted to the two-dimensional case. The extensions to generalized LR parsing and pictorial generalized LR parsing are immediate.
ellis-etal-1993-new
1,993
A New Transformation into Deterministically Parsable Form for Natural Language Grammars
Marcus demonstrated that it was possible to construct a deterministic grammar/interpreter for a subset of natural language [Marcus, 1980]. Although his work with PARSIFAL pioneered the field of deterministic natural language parsing, his method has several drawbacks: {\textbullet} The rules and actions in the grammar / interpreter are so embedded that it is difficult to distinguish between them. {\textbullet} The grammar / interpreter is very difficult to construct (the small grammar shown in [Marcus, 1980] took about four months to construct). {\textbullet} The grammar is very difficult to maintain, as a small change may have several side effects. This paper outlines a set of structure transformations for converting a non-deterministic grammar into deterministic form. The original grammar is written in a context free form; this is then transformed to resolve ambiguities.
garman-etal-1993-principle
1,993
A Principle-based Parser for Foreign Language Training in German and Arabic
In this paper we discuss the design and implementation of a parser for German and Arabic, which is currently being used in a tutoring system for foreign language training. Computer-aided language tutoring is a good application for testing the robustness and flexibility of a parsing system, since the input is usually ungrammatical in some way. Efficiency is also a concern, as tutoring applications typically run on personal computers, with the parser sharing memory with other components of the system. Our system is principle-based, which ensures a compact representation, and improves portability, needed in order to extend the initial design from German to Arabic and (eventually) Spanish. Currently, the parser diagnoses agreement errors, case errors, selection errors, and some word order errors. The parser can handle simple and complex declaratives and questions, topicalisations, verb movement, relative clauses {---} broad enough coverage to be useful in the design of real exercises and dialogues.
van-der-hoeven-1993-algorithm
1,993
An Algorithm for the Construction of Dependency Trees
A casting system is a dictionary which contains information about words, and relations that can exist between words in sentences. A casting system allows the construction of dependency trees for sentences. They are trees which have words in roles at their nodes, and arcs which correspond to dependency relations. The trees are related to dependency trees in classical dependency syntax, but they are not the same. Formally, casting systems define a family of languages which is a proper subset of the contextfree languages. It is richer than the family of regular languages however. The interest in casting systems arose from an experiment in which it was investigated whether a dictionary of words and word-relations created by a group of experts on the basis of the analysis of a corpus of titles of scientific publications, would suffice to automatically produce reasonable but maybe superficial syntactical analyses of such titles. The results of the experiment were encouraging, but not clear enough to draw firm conclusions. A technical question which arose during the experiment, concerns the choice of a proper algorithm to construct the forest of dependency trees for a given sentence. It turns out that Earley`s well-known algorithm for the parsing of contextfree languages can be adapted to construct dependency trees on the basis of a casting system. The adaptation is of cubic complexity. In fact one can show that contextfree grammars and dictionaries of words and word-relations like casting systems, both belong to a more general family of systems, which associate trees with sequences of tokens. Earley`s algorithm cannot just be adapted to work for casting systems, but it can be generalized to work for the entire large family.
hozumi-etal-1993-integration
1,993
Integration of Morphological and Syntactic Analysis Based on LR Parsing Algorithm
Morphological analysis of Japanese is very different from that of English, because no spaces are placed between words. The analysis includes segmentation of words. However, ambiguities in segmentation are not always resolved with morphological information alone. This paper proposes a method to integrate morphological and syntactic analysis based on the LR parsing algorithm. An LR table derived from grammar rules is modified on the basis of connectabilities between two adjacent words. The modified LR table reflects both the morphological and the syntactic constraints. Using this LR table, efficient morphological and syntactic analysis becomes possible.
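To make the table-modification idea concrete, here is a small hypothetical sketch: shift actions whose adjacent word categories violate the connectability constraints are simply removed from the LR table. The table layout and the `connectable` relation are inventions for illustration, not the paper's actual data structures.

    def filter_shift_actions(lr_actions, connectable):
        # `lr_actions` maps (state, lookahead_category) to a list of actions;
        # a shift action is modelled here as ('shift', next_state, prev_category).
        # `connectable` is a set of (prev_category, next_category) pairs encoding
        # which word categories may stand adjacent in Japanese.
        filtered = {}
        for (state, lookahead), actions in lr_actions.items():
            kept = [a for a in actions
                    if a[0] != 'shift' or (a[2], lookahead) in connectable]
            if kept:
                filtered[(state, lookahead)] = kept
        return filtered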
kurohashi-nagao-1993-structural
1,993
Structural Disambiguation in Japanese by Evaluating Case Structures based on Examples in a Case Frame Dictionary
A case structure expression is one of the most important forms to represent the \textit{meaning} of a sentence. Case structure analysis is usually performed by consulting \textit{case frame information} in verb dictionaries and by selecting a \textit{proper case frame} for an input sentence. However, this analysis is very difficult because of \textit{word sense ambiguity} and \textit{structural ambiguity}. A conventional method for solving these problems is to use the method of \textit{selectional restriction}, but this method has a drawback in the semantic marker (SM) system {--} the trade-off between descriptive power and construction cost. This paper describes a method of case structure analysis of Japanese sentences which overcomes the drawback in the SM system, concentrating on the structural disambiguation. This method selects a proper case frame for an input by the similarity measure between the input and typical example sentences of each case frame. When there are two or more possible readings for an input because of structural ambiguity, the best reading will be selected by evaluating case structures in each possible reading by the similarity measure with typical example sentences of case frames.
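The selection step can be pictured with a small sketch (my own simplification, not the paper's algorithm): each candidate case frame is scored by how similar the input's case fillers are to the frame's example fillers, and the best-scoring frame wins. `word_similarity` stands in for a thesaurus- or example-based similarity measure.

    def select_case_frame(input_fillers, case_frames, word_similarity):
        # `input_fillers` maps a case marker (e.g. 'ga', 'wo') to the input noun;
        # each frame in `case_frames` maps the same markers to lists of example
        # nouns from the case frame dictionary; `word_similarity` returns a
        # score in [0, 1] for a pair of words.
        def score(frame):
            total, n = 0.0, 0
            for case, noun in input_fillers.items():
                examples = frame.get(case, [])
                if examples:
                    total += max(word_similarity(noun, ex) for ex in examples)
                    n += 1
            return total / n if n else 0.0

        return max(case_frames, key=score)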
lavie-tomita-1993-glr
1,993
GLR* -- An Efficient Noise-skipping Parsing Algorithm For Context Free Grammars
This paper describes GLR*, a parser that can parse any input sentence by ignoring unrecognizable parts of the sentence. In case the standard parsing procedure fails to parse an input sentence, the parser nondeterministically skips some word(s) in the sentence, and returns the parse with fewest skipped words. Therefore, the parser will return some parse(s) with any input sentence, unless no part of the sentence can be recognized at all. The problem can be defined in the following way: Given a context-free grammar $G$ and a sentence $S$, find and parse $S'$ {--} the largest subset of words of $S$, such that $S' \in L(G)$. The algorithm described in this paper is a modification of the Generalized LR (Tomita) parsing algorithm [Tomita, 1986] . The parser accommodates the skipping of words by allowing shift operations to be performed from inactive state nodes of the Graph Structured Stack. A heuristic similar to beam search makes the algorithm computationally tractable. There have been several other approaches to the problem of robust parsing, most of which are special purpose algorithms [Carbonell and Hayes, 1984] , [Ward, 1991] and others. Because our approach is a modification to a standard context-free parsing algorithm, all the techniques and grammars developed for the standard parser can be applied as they are. Also, in case the input sentence is by itself grammatical, our parser behaves exactly as the standard GLR parser. The modified parser, GLR*, has been implemented and integrated with the latest version of the Generalized LR Parser/Compiler [Tomita et al , 1988], [Tomita, 1990]. We discuss an application of the GLR* parser to spontaneous speech understanding and present some preliminary tests on the utility of the GLR* parser in such settings.
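The problem statement quoted above can be pinned down with a brute-force sketch (purely illustrative; the actual GLR* parser achieves this efficiently by allowing shifts from inactive nodes of the graph-structured stack). `recognize` is a placeholder for any context-free recognizer for $G$.

    from itertools import combinations

    def fewest_skips_parse(words, recognize):
        # Return the largest subsequence of `words` accepted by the grammar,
        # i.e. the reading with the fewest skipped words.  Subsets are tried
        # in order of increasing number of skips, so exhaustive search is
        # exponential; it only serves to define the objective GLR* solves.
        n = len(words)
        for skipped in range(n + 1):               # try 0 skips first, then 1, ...
            for removed in combinations(range(n), skipped):
                kept = [w for i, w in enumerate(words) if i not in removed]
                if recognize(kept):
                    return kept, skipped
        return [], n                               # nothing recognizable at all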
leermakers-1993-use
1,993
The Use of Bunch Notation in Parsing Theory
Much of mathematics, and therefore much of computer science, is built on the notion of sets. In this paper it is argued that in parsing theory it is sometimes convenient to replace sets by a related notion, \textit{bunches}. The replacement is not so much a matter of principle, but helps to create a more concise theory. Advantages of the bunch concept are illustrated by using it in descriptions of a formal semantics for context-free grammars and of functional parsing algorithms.
lutz-1993-chart
1,993
Chart Parsing of Attributed Structure-Sharing Flowgraphs with Tie-Point Relationships
Many applications make use of diagrams to represent complex objects. In such applications it is often necessary to recognise how some diagram has been pieced together from other diagrams. Examples are electrical circuit analysis, and program understanding in the plan calculus (Rich, 1981). In these applications the recognition process can be formalised as flowgraph parsing, where a flowgraph is a special case of a plex (Feder 1971) . Nodes in a flowgraph are connected to each other via intermediate points known as tie-points. Lutz (1986, 1989) generalised chart parsing of context-free string languages (Thompson {--} Ritchie, 1984) to context-free flowgraph languages, enabling bottom-up and top-down recognition of flowgraphs. However, there are various features of the plan calculus that complicate this - in particular attributes, structure sharing, and relationships between tie-points. This paper will present a chart parsing algorithm for analysing graphs with all these features, suitable for both program understanding and digital circuit analysis. For a fixed grammar, this algorithm runs in time polynomial in the number of tie-points in the input graph.
mcdonald-1993-interplay
1,993
The Interplay of Syntactic and Semantic Node Labels in Partial Parsing
Our natural language comprehension system, {\textquotedblleft}Sparser{\textquotedblright}, uses a semantic grammar in conjunction with a domain model that defines the categories and already-known individuals that can be expected in the sublanguages we are studying, the most significant of which to date has been articles from the Wall Street Journal`s {\textquotedblleft}Who`s News{\textquotedblright} column. In this paper we describe the systematic use of default syntactic rules in this grammar: an alternative set of labels on constituents that are used to capture generalities in the semantic interpretation of constructions like the verbal auxiliaries or many adverbials. Syntactic rules form the basis of a set of schemas in a Tree Adjoining Grammar that are used as templates from which to create the primary, semantically labeled rules of the grammar as part of defining the categories in the domain models. This design permits the semantic grammar to be developed on a linguistically principled basis since all the rules must conform to syntactically sound patterns.
nederhof-sarbo-1993-increasing
1,993
Increasing the Applicability of LR Parsing
In this paper we describe a phenomenon present in some context-free grammars, called \textit{hidden left recursion}. We show that ordinary LR parsing according to hidden left-recursive grammars is not possible and we indicate a range of solutions to this problem. One of these solutions is a new parsing technique, which is a variant of traditional LR parsing. This new parsing technique can be used both with and without lookahead and the nondeterminism can be realized using backtracking or using a graph-structured stack.
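The abstract does not give an example grammar; a tiny grammar of the kind usually used to illustrate the phenomenon (my own illustration, in Python notation) is sketched below.

    # B is nullable, so in S -> B S 'c' the nonterminal S can reach itself in
    # leftmost position through an empty prefix: S is hidden left-recursive
    # even though it is not the first symbol of its own production.  As the
    # abstract notes, ordinary LR parsing according to such a grammar is not
    # possible without first exposing or eliminating the hidden recursion.
    GRAMMAR = {
        'S': [['B', 'S', 'c'], ['a']],
        'B': [['b'], []],        # [] marks the empty production B -> epsilon
    }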
odonnell-1993-reducing
1,993
Reducing Complexity in A Systemic Parser
Parsing with a large systemic grammar brings one face-to-face with the problem of unification with disjunctive descriptions. This paper outlines some techniques which we employed in a systemic parser to reduce the average-case complexity of such unification.
luttighuis-sikkel-1993-generalized
1,993
Generalized LR parsing and attribute evaluation
This paper presents a thorough discussion of generalized \textit{LR} parsing with simultaneous attribute evaluation. Nondeterministic parsers and combined parser/evaluators are presented for the \textit{LL}(0) , \textit{LR}(0) , and \textit{SKLR}(0) strategies. \textit{SKLR}(0) parsing occurs as an intermediate strategy between the first two. Particularly in the context of simultaneous attribute evaluation, generalized \textit{SKLR}(0) parsing is a sensible alternative for generalized \textit{LR}(0) parsing.
raaijmakers-1993-proof
1,993
A Proof-Theoretic Reconstruction of HPSG
A reinterpretation of Head-Driven Phrase Structure Grammar (HPSG) in a proof-theoretic context is presented. This approach yields a \textit{decision procedure} which can be used to establish whether certain strings are generated by a given HPSG grammar. It is possible to view HPSG as a fragment of linear logic (Girard, 1987), subject to partiality and side conditions on inference rules. This relates HPSG to several categorial logics (Morrill, 1990) . Specifically, HPSG signs are mapped onto quantified formulae, which can be interpreted as \textit{second-order} types given the Curry-Howard isomorphism. The logic behind type inference will, aside from the usual quantifier introduction and elimination rules, consist of a partial logic for the undirected implication connective. It will be shown how this logical perspective can be turned into a parsing perspective. The enterprise takes the standard HPSG of Pollard {--} Sag (1987) as a starting point, since this version of HPSG is well-documented and has been around long enough to have displayed both merits and shortcomings; the approach is directly applicable to more recent versions of HPSG, however. In order to make the proof-theoretic recasting smooth, standard HPSG is reformulated in a binary format.
schabes-waters-1993-stochastic
1,993
Stochastic Lexicalized Context-Free Grammar
Stochastic lexicalized context-free grammar (SLCFG) is an attractive compromise between the parsing efficiency of stochastic context-free grammar (SCFG) and the lexical sensitivity of stochastic lexicalized tree-adjoining grammar (SLTAG) . SLCFG is a restricted form of SLTAG that can only generate context-free languages and can be parsed in cubic time. However, SLCFG retains the lexical sensitivity of SLTAG and is therefore a much better basis for capturing distributional information about words than SCFG.
sikkel-op-den-akker-1993-predictive
1,993
Predictive Head-Corner Chart Parsing
Head-Corner (HC) parsing came up in computational linguistics a few years ago, motivated by linguistic arguments. The idea is a heuristic rather than a fail-safe principle, hence it is indeed relevant to consider the worst-case behaviour of the HC parser. We define a novel predictive head-corner chart parser of cubic time complexity. We start with a left-corner (LC) chart parser, which is easier to understand. Subsequently, the LC chart parser is generalized to an HC chart parser. It is briefly sketched how the parser can be enhanced with feature structures.
sleator-temperley-1993-parsing
1,993
Parsing English with a Link Grammar
We define a new formal grammatical system called a \textit{link grammar}. A sequence of words is in the language of a link grammar if there is a way to draw \textit{links} between words in such a way that (1) the local requirements of each word are satisfied, (2) the links do not cross, and (3) the words form a connected graph. We have encoded English grammar into such a system, and written a program (based on new algorithms) for efficiently parsing with a link grammar. The formalism is lexical and makes no explicit use of constituents and categories. The breadth of English phenomena that our system handles is quite large. A number of sophisticated and new techniques were used to allow efficient parsing of this very complex grammar. Our program is written in C, and the entire system may be obtained via anonymous ftp. Several other researchers have begun to use link grammars in their own research.
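Conditions (2) and (3) above are easy to picture with a small checker for a candidate linkage (a toy sketch, not the paper's parsing algorithm); condition (1), each word's own linking requirements, depends on the dictionary and is left out here.

    def valid_linkage(num_words, links):
        # `links` is a set of (i, j) pairs with i < j over word positions
        # 0 .. num_words-1.
        # (2) planarity: no two links may cross.
        for (i, j) in links:
            for (k, l) in links:
                if i < k < j < l:
                    return False
        # (3) connectivity: the words must form a single connected graph.
        seen, stack = {0}, [0]
        while stack:
            w = stack.pop()
            for (i, j) in links:
                nxt = j if i == w else (i if j == w else None)
                if nxt is not None and nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
        return len(seen) == num_words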
strzalkowski-scheyen-1993-evaluation
1,993
Evaluation of TTP Parser: A Preliminary Report
TTP (Tagged Text Parser) is a fast and robust natural language parser specifically designed to process vast quantities of unrestricted text. TTP can analyze written text at the speed of approximately 0.3 sec/sentence, or 73 words per second. An important novel feature of TTP parser is that it is equipped with a skip-and-fit recovery mechanism that allows for fast closing of more difficult sub-constituents after a preset amount of time has elapsed without producing a parse. Although a complete analysis is attempted for each sentence, the parser may occasionally ignore fragments of input to resume {\textquotedblleft}normal{\textquotedblright} processing after skipping a few words. These fragments are later analyzed separately and attached as incomplete constituents to the main parse tree. TTP has recently been evaluated against several leading parsers. While no formal numbers were released (a formal evaluation is planned later this year), TTP has performed surprisingly well. The main argument of this paper is that TTP can provide a substantial gain in parsing speed giving up relatively little in terms of the quality of output it produces. This property allows TTP to be used effectively in parsing large volumes of text.
ushioda-etal-1993-frequency
1,993
Frequency Estimation of Verb Subcategorization Frames Based on Syntactic and Multidimensional Statistical Analysis
We describe a mechanism for automatically estimating frequencies of verb subcategorization frames in a large corpus. A tagged corpus is first partially parsed to identify noun phrases and then a regular grammar is used to estimate the appropriate subcategorization frame for each verb token in the corpus. In an experiment involving the identification of six fixed subcategorization frames, our current system showed more than 80{\%} accuracy. In addition, a new statistical method enables the system to learn patterns of errors based on a set of training samples and substantially improves the accuracy of the frequency estimation.
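A toy version of the pattern-matching step (the tag names and frame inventory are invented here; the paper's actual regular grammar and its multidimensional statistical error correction are not reproduced): after noun phrases have been chunked, the tag sequence following each verb token is matched against a few regular patterns and the winning frame is tallied.

    import re
    from collections import Counter

    # Toy patterns over a chunked tag sequence following a verb; more
    # specific frames are listed before more general ones.
    FRAME_PATTERNS = [
        ('NP-NP',   re.compile(r'^NP NP')),
        ('NP-PP',   re.compile(r'^NP PREP NP')),
        ('NP',      re.compile(r'^NP')),
        ('COMP',    re.compile(r'^COMP')),
        ('INTRANS', re.compile(r'^(?!NP|PREP|COMP)')),
    ]

    def count_frames(verb_contexts):
        # `verb_contexts` yields (verb_lemma, following_tags) pairs, where
        # `following_tags` is the chunked tag string after the verb token.
        counts = Counter()
        for verb, tags in verb_contexts:
            for frame, pattern in FRAME_PATTERNS:
                if pattern.match(tags):
                    counts[(verb, frame)] += 1
                    break                  # first matching frame wins
        return counts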
weng-1993-handling
1,993
Handling Syntactic Extra-Grammaticality
This paper reviews and summarizes six different types of extra-grammatical phenomena and their corresponding recovery principles at the syntactic level, and describes some techniques used to deal with four of them completely within an Extended GLR parser (EGLR). Partial solutions to the remaining two by the EGLR parser are also discussed. The EGLR parser has been implemented.
wittenburg-1993-adventures
1,993
Adventures in Multi-dimensional Parsing: Cycles and Disorders
Among the proposals for multidimensional grammars is a family of constraint-based grammatical frameworks, including Relational Grammars. In Relational languages, expressions are formally defined as a set of relations whose tuples are taken from an indexed set of symbols. Both bottom-up parsing and Earley-style parsing algorithms have previously been proposed for different classes of Relational languages. The Relational language class for Earley style parsing in Wittenburg (1992a) requires that each relation be a partial order. However, in some real-world domains, the relations do not naturally conform to these restrictions. In this paper I discuss motivations and methods for predictive, Earley-style parsing of multidimensional languages when the relations involved do not necessarily yield an ordering, e.g., when the relations are symmetric and/or nontransitive. The solution involves guaranteeing that a single initial start position for parsing can be associated with any member of the input set. The domains in which these issues are discussed involve incremental parsing in interfaces and off-line verification of multidimensional data.
weerasinghe-fawcett-1993-probabilistic
1,993
Probabilistic Incremental Parsing in Systemic Functional Grammar
In this paper we suggest that a key feature to look for in a successful parser is its ability to lend itself naturally to semantic interpretation. We therefore argue in favour of a parser based on a semantically oriented model of grammar, demonstrating some of the benefits that such a model offers to the parsing process. In particular we adopt a systemic functional syntax as the basis for implementing a chart based probabilistic incremental parser for a non-trivial subset of English.
steffens-1993-introduction
1,993
Introduction
ide-veronis-1993-knowledge
1,993
Knowledge extraction from machine-readable dictionaries: an evaluation
Machine-readable versions of everyday dictionaries have been seen as a likely source of information for use in natural language processing because they contain an enormous amount of lexical and semantic knowledge. However, after 15 years of research, the results appear to be disappointing. No comprehensive evaluation of machine-readable dictionaries (MRDs) as a knowledge source has been made to date, although this is necessary to determine what, if anything, can be gained from MRD research. To this end, this paper will first consider the postulates upon which MRD research has been based over the past fifteen years, discuss the validity of these postulates, and evaluate the results of this work. We will then propose possible future directions and applications that may exploit these years of effort, in the light of current directions in not only NLP research, but also fields such as lexicography and electronic publishing.
storrer-schwall-1993-description
1,993
Description and acquisition of multiword lexemes
This paper deals with multiword lexemes (MWLs), focussing on two types of verbal MWLs: verbal idioms and support verb constructions. We discuss the characteristic properties of MWLs, namely non-standard compositionality, restricted substitutability of components, and restricted morpho-syntactic flexibility, and we show how these properties may cause serious problems during the analysis, generation, and transfer steps of machine translation systems. In order to cope with these problems, MT lexicons need to provide detailed descriptions of MWL properties. We list the types of information which we consider the necessary minimum for a successful processing of MWLs, and report on some feasibility studies aimed at the automatic extraction of German verbal multiword lexemes from text corpora and machine-readable dictionaries.