ID | year | title | abstract |
---|---|---|---|
ahmad-1993-pragmatics | 1993 | Pragmatics of specialist terms: the acquisition and representation of terminology | The compilation of specialist terminology requires an understanding of how specialists coin and use terms of their specialisms. We show how an exploitation of the pragmatic features of specialist terms will help in the semi-automatic extraction of terms and in the organisation of terms in terminology data banks. |
procter-1993-cambridge | 1993 | The Cambridge language survey | The Cambridge Language Survey is a research project whose activities centre around the use of an Integrated Language Database, whereby a computerised dictionary is used for intelligent cross-reference during corpus analysis - searching for example for all the inflections of a verb rather than just the base form. Types of grammatical coding and semantic categorisation appropriate to such a computerised dictionary are discussed, as are software tools for parsing, finding collocations, and performing sense-tagging. The weighted evaluation of semantic, grammatical, and collocational information to discriminate between word senses is described in some detail. Mention is made of several branches of research including the development of parallel corpora, semantic interpretation by sense-tagging, and the use of a Learner Corpus for the analysis of errors made by non-native speakers. Sense-tagging is identified as an under-exploited approach to language analysis and one for which great opportunities for product development exist. |
daelemans-1993-memory | 1993 | Memory-based lexical acquisition and processing | Current approaches to computational lexicology in language technology are knowledge-based (competence-oriented) and try to abstract away from specific formalisms, domains, and applications. This results in severe complexity, acquisition and reusability bottlenecks. As an alternative, we propose a particular performance-oriented approach to Natural Language Processing based on automatic memory-based learning of linguistic (lexical) tasks. The consequences of the approach for computational lexicology are discussed, and the application of the approach to a number of lexical acquisition and disambiguation tasks in phonology, morphology and syntax is described. |
krieger-1993-typed | 1993 | Typed feature formalisms as a common basis for linguistic specification | Typed feature formalisms (TFF) play an increasingly important role in NLP and, in particular, in MT. Many of these systems are inspired by Pollard and Sag's work on Head-Driven Phrase Structure Grammar (HPSG), which has shown that a great deal of syntax and semantics can be neatly encoded within TFF. However, syntax and semantics are not the only areas in which TFF can be beneficially employed. In this paper, I will show that TFF can also be used as a means to model finite automata (FA) and to perform certain types of logical inferencing. In particular, I will (i) describe how FA can be defined and processed within TFF and (ii) propose a conservative extension to HPSG, which allows for a restricted form of semantic processing within TFF, so that the construction of syntax and semantics can be intertwined with the simplification of the logical form of an utterance. The approach which I propose provides a uniform, HPSG-oriented framework for different levels of linguistic processing, including allomorphy and morphotactics, syntax, semantics, and logical form simplification. |
calzolari-1993-european | 1993 | European efforts towards standardizing language resources | This paper aims at providing a broad overview of the situation in Europe during the past few years, regarding efforts and concerted actions towards the standardization of large language resources, with particular emphasis on what is taking place in the fields of Computational Lexicons and Text Corpora. Attention will be focused on the plans, work in progress, and a few preliminary results of the LRE project EAGLES (Expert Advisory Group on Language Engineering Standards). |
group-koch-1993-machine | 1993 | Machine translation and terminology database - uneasy bedfellows? | The software company SAP translates its documentation into more than 12 languages. To support the translation department, SAPterm is used as a traditional terminology database for all languages, and the machine translation system METAL for German-to-English translation. The maintenance of the two terminology databases in parallel, SAPterm and the METAL lexicons, requires a comparison of the entries in order to ensure terminological consistency. However, due to the differences in the structure of the entries in SAPterm and METAL, an automatic comparison has not yet been implemented. The search for a solution has led to the consideration of using another existing SAP tool, called Proposal Pool. |
bachut-etal-1993-generic | 1993 | A generic lexical model | Linguistic engineering presupposes lexical resources. For translation, it is highly desirable that a Machine Translation engine and human translators should have access to the same dictionary information. The present paper describes a multilingual dictionary model, which integrates information for use by both humans and a variety of NLP systems. The model is used as a reference in the design of commercial translation products. |
blaser-1993-translexis | 1993 | TransLexis: an integrated environment for lexicon and terminology management | The IBM lexicon and terminology management system TransLexis provides an integrated solution for developing and maintaining lexical and terminological data for use by humans and computer programs. In this paper, the conceptual schema of TransLexis, its user interface, and its import and export facilities will be described. TransLexis takes up several ideas emerging from the reuse discussion. In particular, it strives for a largely theory-neutral representation of multilingual lexical and terminological data, it includes export facilities to derive lexicons for different applications, and it includes programs to import lexical and terminological data from existing sources. |
karkaletsis-etal-1993-use | 1993 | The use of terminological knowledge bases in software localisation | This paper describes the work that was undertaken in the Glossasoft project in the area of terminology management. Some of the drawbacks of existing terminology management systems are outlined and an alternative approach to maintaining terminological data is proposed. The approach which we advocate relies on knowledge-based representation techniques. These are used to model conceptual knowledge about the terms included in the database, general knowledge about the subject domain, application-specific knowledge, and - of course - language-specific terminological knowledge. We consider the multifunctionality of the proposed architecture to be one of its major advantages. To illustrate this, we outline how the knowledge representation scheme, which we suggest, could be drawn upon in message generation and machine-assisted translation. |
mayer-1993-navigation | 1993 | Navigation through terminological databases | Translating technical texts may cause many problems concerning terminology, even for the professional technical translator. For this reason, tools such as terminological databases or termbanks have been introduced to support the user in finding the most suitable translation. Termbanks are a type of machine-readable dictionary and contain extensive information on technical terms. But a termbank offers more possibilities than providing users with the electronic version of a printed dictionary. This paper describes a multilingual termbank, which was developed within the ESPRIT project Translator's Workbench. The termbank allows the user to create, maintain, and retrieve specialised vocabulary. In addition, it offers the user the possibility to look up definitions, foreign language equivalents, and background knowledge. In this paper, an introduction to the database underlying the termbank and the user interface is given with the emphasis lying on those functions which initiate the user into a new subject by allowing him or her to navigate through a terminology field. It will be shown how, by clustering the term explanation texts and by linking them to a type of semantic network, such functions can be implemented. |
caroli-1993-types | 1993 | Types of lexical co-occurrences: descriptive parameters | In this article, I will discuss different types of lexical co-occurrences and examine the requirements for representing them in a reusable lexical resource. I will focus the discussion on the delimitation of a limited set of descriptive parameters rather than on an exhaustive classification of idioms or multiword units. Descriptive parameters will be derived from a detailed discussion of the problem of how to determine adequate translations for such units. Criteria for determining translation equivalences between multiword units of two languages will be: the syntactic and the semantic structure as well as functional, pragmatic, and stylistic properties. |
ostler-1993-perception | 1993 | Perception vocabulary in five languages - towards an analysis using frame elements | This essay introduces the first linguistic task of the DELIS project: to undertake a corpus-based examination of the syntactic and semantic properties of perception vocabulary in five languages, English, Danish, Dutch, French and Italian. The theoretical background is Fillmore's Frame Semantics. The paper reviews some of the variety of facts to be accounted for, particularly in the specialization of sense associated with some collocations, and the pervasive phenomenon of Intensionality. Through this review, we aim to focus our understanding of cross-linguistic variation in this one domain, both by noting specific differences in word-sense correlation, and by exhibiting a general means of representation. |
heid-1993-relating | 1993 | Relating parallel monolingual lexicon fragments for translation purposes | In this paper, we introduce the methodology for the construction of dictionary fragments under development in DELIS. The approach advocated is corpus-based, computationally supported, and aimed at the construction of parallel monolingual dictionary fragments which can be linked to form translation dictionaries without many problems. The parallelism of the monolingual fragments is achieved through the use of a shared inventory of descriptive devices, one common representation formalism (typed feature structures) for linguistic information from all levels, as well as a working methodology inspired by onomasiology: treating all elements of a given lexical semantic field consistently with common descriptive devices at the same time. It is claimed that such monolingual dictionaries are particularly easy to relate in a machine translation application. The principles of such a combination of dictionary fragments are illustrated with examples from an experimental HPSG-based interlingua-oriented machine translation prototype. |
schutz-etal-1991-architecture | 1991 | An Architecture Sketch of Eurotra-II | This paper outlines a new architecture for an NLP/MT development environment for the EUROTRA project, which will be fully operational in the 1993-94 time frame. The proposed architecture provides a powerful and flexible platform for extensions and enhancements to the existing EUROTRA translation philosophy and the linguistic work done so far, thus allowing the reusability of existing grammatical and lexical resources, while ensuring the suitability of EUROTRA methods and tools for other NLP/MT system developers and researchers. |
rimon-etal-1991-advances | 1991 | Advances in Machine Translation Research in IBM | IBM is engaged in advanced research and development projects on various aspects of machine translation, between several language pairs. The activities reported on here are all parts of a rather large-scale, international effort, following Michael McCord's LMT approach. The paper focuses on seven selected topics: recent enhancements made in the Slot Grammar formalism and the specific analysis components; specification of a semantic type hierarchy and its use for verb sense disambiguation; incorporation of statistical techniques in the translation process; anaphora resolution; linkage of target morphology modules; methods for the construction of large MT lexicons; and interactive disambiguation. |
farwell-wilks-1991-ultra | 1991 | ULTRA: A Multi-lingual Machine Translator | ULTRA (Universal Language TRAnslator) is a multilingual, interlingual machine translation system currently under development at the Computing Research Laboratory at New Mexico State University. It translates between five languages (Chinese, English, German, Japanese, Spanish) with vocabularies in each language based on approximately 10,000 word senses. The major design criteria are that the system be robust and general purpose with simple-to-use utilities for customization to suit the needs of particular users. This paper describes the central characteristics of the system: the intermediate representation, the language components, semantic and pragmatic processes, and supporting lexical entry tools. |
barnett-etal-1991-capturing | 1991 | Capturing Language-Specific Semantic Distinctions in Interlingua-based MT | We describe an interlingua-based approach to machine translation, in which a DRS representation of the source text is used as the interlingua representation. A target DRS is then created and used to construct the target text. We describe several advantages of this level of representation. We also argue that problems of translation mismatch and divergence should properly be viewed not as translation problems per se but rather as generation problems, although the source text can be used to guide the target generator. The system we have built relies exclusively on monolingual linguistic descriptions that are also, for the most part, bi-directional. |
chen-etal-1991-archtran | 1991 | ArchTran: A Corpus-based Statistics-oriented English-Chinese Machine Translation System | The ArchTran English-Chinese Machine Translation System is among the first commercialized English-Chinese machine translation systems in the world. A prototype system was released in 1989 and currently serves as the kernel of a value-added network-based translation service. The main design features of the ArchTran system are the adoption of a mixed (bottom-up parsing with top-down filtering) parsing strategy, a scored parsing mechanism, and the corpus-based, statistics-oriented paradigm for linguistic knowledge acquisition. Under this framework, research directions are toward designing systematic and automatic methods for acquiring language model parameters, and toward using preference measure with uniform probabilistic score function for ambiguity resolution. In this paper, the underlying probabilistic models of the ArchTran designing philosophy will be presented. |
schneider-1991-metal | 1991 | The METAL System. Status 1991 | The METAL system which originally evolved from a cooperation between the University of Texas and Siemens became a product in 1988. METAL is implemented on multi-user workstations with a LISP server in the background. It is integrated into the office environment and permits automatic deformatting and reformatting of documents. METAL is characterized by recursive grammars, best paths parsing and a modular lexicon structure. Recent changes in system design have focussed both on internal structure and on user interface. Experiences with productive use have proven METAL's cost-effectiveness but have also shown the need for increased cooperation between developers and end-users. |
bouillon-boeseleldt-1991-applying | 1991 | Applying an Experimental MT System to a Realistic Problem | This presentation outlines the implementation of a machine translation system for avalanche warning bulletins in natural language, using a unification-based formalism developed at ISSCO, which will be introduced on the same occasion. Concrete examples taken from this project exemplify a modern approach to machine translation: a rich representation of the semantic content of a sentence, the use of a single grammar for parsing and generating as well as generation and transfer based exclusively on the semantic representation of a sentence. Simultaneously, the limits of bidirectional transfer are being tested. |
trujillo-plowman-1991-automation | 1991 | Automation of Bilingual Lexicon Compilation | This paper shows that there are a number of common concepts which are used to define a class of nouns in standard, monolingual English and Spanish dictionaries. An experiment is described to show how a small set of such concepts was derived semi-automatically by automatically analysing the definitions in each language and then matching equivalent definitions manually. Also, some of the benefits of constructing such sets are described, together with the problems encountered while carrying out the experiment. |
mitamura-etal-1991-efficient | 1991 | An Efficient Interlingua Translation System for Multi-lingual Document Production | Knowledge-based interlingual machine translation systems produce semantically accurate translations, but typically require massive knowledge acquisition. This paper describes KANT, a system that reduces this requirement to produce practical, scalable, and accurate KBMT applications. First, the set of requirements is discussed, then the full KANT architecture is illustrated, and finally results from a fully implemented prototype are presented. |
ozeki-nishihara-1991-mt | 1991 | MT Application For The Translation Agency | |
okumura-etal-1991-multi | 1991 | Multi-lingual Sentence Generation from the PIVOT interlingua | (We wrote this report in Japanese and translated it by NEC's machine translation system PIVOT/JE.) IBS (International Business Service) is the company which does the documentation service which contains translation business. We introduced a machine translation system into translation business in earnest last year. The introduction of a machine translation system changed the form of our translation work. The translation work was divided into some steps and the person who isn't experienced became able to take it of the work of each of translation steps. As a result, a total translation cost reduced. In this paper, first, we report on the usage of our machine translation system. Next, we report on translation quality and the translation cost with a machine translation system. Lastly, we report on the merit which was gotten by introducing machine translation. |
hirakawa-etal-1991-ej | 1991 | EJ/JE Machine Translation System ASTRANSAC --- Extensions toward Personalization | The demand for personal use of a translation system seems to be increasing in accordance with the improvement in MT quality. A recent portable and powerful engineering workstation, such as AS1000 (SPARC LT), enables us to develop a personal-use oriented MT system. This paper describes the outline of ASTRANSAC (an English-Japanese/Japanese-English bi-directional MT system) and the extensions related to the personalization of ASTRANSAC, which have been newly made since the MT Summit II. |
kugler-etal-1991-translators | 1991 | The Translator's Workbench: An Environment for Multi-Lingual Text Processing and Translation | The Translator's Workbench provides the user with a set of computer-based tools for speeding up the translation process and facilitating multilingual text processing and technical writing. The tools include dictionaries, spelling, grammar, punctuation and style checkers, text processing utilities, remote access to a fully automatic machine translation system and to terminological data bases, an on-line termbank, and a translation memory in an integrated framework covering several European languages. |
jin-1991-translation | 1991 | Translation Accuracy and Translation Efficiency | ULTRA (Universal Language Translator) is a multi-lingual bidirectional translation system between English, Spanish, German, Japanese and Chinese. It employs an interlingual structure to translate among these five languages. An interlingual representation is used as a deep structure through which any pair of these languages can be translated in either direction. This paper describes some techniques used in the Chinese system to solve problems in word ordering, language equivalency, Chinese verb constituent and prepositional phrase attachment. By means of these techniques translation quality has been significantly improved. Heuristic search, which results in translation efficiency, is also discussed. |
kitano-etal-1991-toward | 1991 | Toward High Performance Machine Translation: Preliminary Results from Massively Parallel Memory-Based Translation on SNAP | This paper describes a memory-based machine translation system developed for the Semantic Network Array Processor (SNAP). The goal of our work is to develop a scalable and high-performance memory-based machine translation system which utilizes the high degree of parallelism provided by the SNAP machine. We have implemented an experimental machine translation system DMSNAP as a central part of a real-time speech-to-speech dialogue translation system. It is a SNAP version of the $\Phi$DMDIALOG speech-to-speech translation system. Memory-based natural language processing and a syntactic constraint network model have been incorporated using parallel marker-passing, which is directly supported at the hardware level. Experimental results demonstrate that the parsing of a sentence is done in the order of milliseconds. |
ikehara-etal-1991-toward | 1991 | Toward an MT System without Pre-Editing: Effects of a New Method in ALT-J/E | Recently, several types of Japanese to English MT (machine translation) systems have been developed, but prior to using such systems, they have required a pre-editing process of re-writing the original text into Japanese that could be easily translated. For communication of translated information requiring speed in dissemination, application of these systems would necessarily pose problems. To overcome such problems, a Multi-Level Translation Method based on Constructive Process Theory had been proposed. In this paper, the benefits of this method in ALT-J/E will be described. In comparison with the conventional elementary composition method, the Multi-Level Translation Method, emphasizing the importance of the meaning contained in expression structures, has been ascertained to be capable of conducting translation according to meaning and context processing with comparative ease. We are now hopeful of realizing machine translation omitting the process of pre-editing. |
jappinen-etal-1991-kielikone | 1991 | KIELIKONE Machine Translation Workstation | All human languages are open and complex communication systems. No machine translation system will ever be able to automatically translate all possible sentences from one language to another in high quality. One way to combat complexity and openness of language translation is to decompose the task into well-defined sequential subtasks and solve each using declarative, modular rules. This paper describes such an MT system. A language-independent MT Machine has been designed for the transformation of linguistic trees in a general fashion. A full MT system is composed of a sequence of instances of that machine. Finnish-English implementation is discussed. |
jain-etal-1991-connectionist | 1991 | Connectionist and Symbolic Processing in Speech-to-Speech Translation: The JANUS System | We present JANUS, a speech-to-speech translation system that utilizes diverse processing strategies including connectionist learning, traditional AI knowledge representation approaches, dynamic programming, and stochastic techniques. JANUS translates continuously spoken English utterances into Japanese and German speech utterances. The overall system performance on a corpus of conference registration conversations is 87%. Two versions of JANUS are compared: one using an LR parser (JANUS-LR) and one using a neural-network based parser (JANUS-NN). Performance results are mixed, with JANUS-LR deriving benefit from a tighter language model and JANUS-NN benefiting from greater flexibility. |
tomita-etal-1991-proceedings | 1991 | Proceedings of the Second International Workshop on Parsing Technologies (IWPT '91) | February 13-15, 1991 |
hasida-tsuda-1991-parsing | 1991 | Parsing without Parser | In the domain of artificial intelligence, the pattern of information flow varies drastically from one context to another. To capture this diversity of information flow, a natural-language processing (NLP) system should consist of modules of constraints and one general constraint solver to process all of them; there should be no specialized procedure module such as a parser and a generator. This paper presents how to implement such a constraint-based approach to NLP. Dependency Propagation (DP) is a constraint solver which transforms the program (=constraint) represented in terms of logic programs. Constraint Unification (CU) is a unification method incorporating DP. cu-Prolog is an extended Prolog which employs CU instead of the standard unification. cu-Prolog can treat some lexical and grammatical knowledge as constraints on the structure of grammatical categories, enabling a very straightforward implementation of a parser using constraint-based grammars. By extending DP, one can deal efficiently with phrase structures in terms of constraints. Computation on category structures and phrase structures are naturally integrated in an extended DP. The computation strategies to do all this are totally attributed to a very abstract, task-independent principle: prefer computation using denser information. Efficient parsing is hence possible without any parser. |
dasigi-1991-parsing | 1991 | Parsing = Parsimonious Covering? | Many researchers believe that certain aspects of natural language processing, such as word sense disambiguation and plan recognition in stories, constitute abductive inferences. We have been working with a specific model of abduction, called *parsimonious covering*, applied in diagnostic problem solving, word sense disambiguation and logical form generation in some restricted settings. Diagnostic parsimonious covering has been extended into a dual-route model to account for syntactic and semantic aspects of natural language. The two routes of covering are integrated by defining "open class" linguistic concepts, aiding each other. The diagnostic model has dealt with sets, while the extended version, where syntactic considerations dictate word order, deals with sequences of linguistic concepts. Here we briefly describe the original model and the extended version, and briefly characterize the notions of covering and different criteria of parsimony. Finally we examine the question of whether parsimonious covering can serve as a general framework for parsing. |
schabes-1991-valid | 1991 | The Valid Prefix Property and Left to Right Parsing of Tree-Adjoining Grammar | The valid prefix property (VPP), the capability of a left to right parser to detect errors as soon as possible, often goes unnoticed in parsing CFGs. Earley's parser for CFGs (Earley, 1968; Earley, 1970) maintains the valid prefix property and obtains an $O(n^3)$-time worst case complexity, as good as parsers that do not maintain it, such as the CKY parser (Younger, 1967; Kasami, 1965). Contrary to CFGs, maintaining the valid prefix property for TAGs is costly. In 1988, Schabes and Joshi proposed an Earley-type parser for TAGs. It maintains the valid prefix property at the expense of its worst case complexity ($O(n^9)$-time). To our knowledge, it is the only known polynomial time parser for TAGs that maintains the valid prefix property. In this paper, we explain why the valid prefix property is expensive to maintain for TAGs and we introduce a predictive left to right parser for TAGs that does not maintain the valid prefix property but that achieves an $O(n^6)$-time worst case behavior, $O(n^4)$-time for unambiguous grammars and linear time for a large class of grammars. |
futrelle-etal-1991-preprocessing | 1991 | Preprocessing and lexicon design for parsing technical text | Technical documents with complex structures and orthography present special difficulties for current parsing technology. These include technical notation such as subscripts, superscripts and numeric and algebraic expressions as well as Greek letters, italics, small capitals, brackets and punctuation marks. Structural elements such as references to figures, tables and bibliographic items also cause problems. We first hand-code documents in Standard Generalized Markup Language (SGML) to specify the document's logical structure (paragraphs, sentences, etc.) and capture significant orthography. Next, a regular expression analyzer produced by LEX is used to tokenize the SGML text. Then a token-based phrasal lexicon is used to identify the longest token sequences in the input that represent single lexical items. This lookup is efficient because limits on lookahead are precomputed for every item. After this, the Alvey Tools parser with specialized subgrammars is used to discover items such as floating-point numbers. The product of these preprocessing stages is a text that is acceptable to a full natural language parser. This work is directed towards automating the building of knowledge bases from research articles in the field of bacterial chemotaxis, but the techniques should be of wide applicability. |
shilling-1991-incremental | 1991 | Incremental LL(1) Parsing in Language-Based Editors | This paper introduces an efficient incremental LL(1) parsing algorithm for use in language-based editors that use the structure recognition approach. It features very fine-grained analysis and a unique approach to parse control and error recovery. It also presents incomplete LL(1) grammars as a way of dealing with the complexity of full language grammars and as a mechanism for providing structured editor support for task languages that are only partially structured. The semantics of incomplete grammars are presented and it is shown how incomplete LL(1) grammars can be transformed into complete LL(1) grammars. The algorithms presented have been implemented in the fred language-based editor. |
apollonskaya-etal-1991-linguistic | 1991 | Linguistic Information in the Databases as a Basis for Linguistic Parsing Algorithms | The focus of this paper is the investigation of linguistic database design in conjunction with parsing algorithms. The structure of the linguistic database in natural language processing systems, the structure of lexicon items, and the structure and volume of linguistic information in the automatic dictionary form the basis for organizing linguistic parsing. |
delmonte-bianchi-1991-binding | 1991 | Binding Pronominals with an LFG Parser | This paper describes an implemented algorithm for handling pronominal reference and anaphoric control within an LFG framework. At first there is a brief description of the grammar implemented in Prolog using XGs (extraposition grammars) introduced by Pereira (1981; 1983). Then the algorithm mapping binding equations is discussed at length. In particular the algorithm makes use of f-command together with the obviation principle, rather than c-command which is shown to be insufficient to explain the facts of binding of both English and Italian. Previous work (Ingria, 1989; Hobbs, 1978) was based on English and the classes of pronominals to account for were two: personal and possessive pronouns and anaphors - reflexives and reciprocals. In Italian, and in other languages of the world, the classes are many more. We dealt with four: a. pronouns - personal and independent pronouns, epithets, possessive pronouns; b. clitic pronouns and Morphologically Unexpressed PRO/pros; c. long distance anaphors; d. short distance anaphors. Binding of anaphors and coreference of pronouns is extensively shown to depend on structural properties of f-structures, on thematic roles and grammatical functions associated with the antecedents or controller, on definiteness of NPs and mood of clausal f-structures. The algorithm uses feature matrices to tell pronominal classes apart and scores to determine the ranking of candidates for antecedenthood, as well as for restricting the behaviour of proforms and anaphors. |
vosse-kempen-1991-hybrid | 1991 | A Hybrid Model of Human Sentence Processing: Parsing Right-Branching, Center-Embedded and Cross-Serial Dependencies | A new cognitive architecture for the syntactic aspects of human sentence processing (called Unification Space) is tested against experimental data from human subjects. The data, originally collected by Bach, Brown and Marslen-Wilson (1986), concern the comprehensibility of verb dependency constructions in Dutch and German: right-branching, center-embedded, and cross-serial dependencies of one to four levels deep. A satisfactory fit is obtained between comprehensibility data and parsability scores in the model. |
habert-1991-using | 1991 | Using Inheritance in Object-Oriented Programming to Combine Syntactic Rules and Lexical Idiosyncrasies | In parsing idioms and frozen expressions in French, one needs to combine general syntactic rules and idiosyncratic constraints. The inheritance structure provided by Object-Oriented Programming languages, and more specifically the combination of methods present in CLOS, Common Lisp Object System, appears as an elegant and efficient approach to deal with such a complex interaction. |
charles-1991-lr | 1991 | An LR(k) Error Diagnosis and Recovery Method | In this paper, a new practical, efficient and language-independent syntactic error recovery method for LR(k) parsers is presented. This method is similar to and builds upon the three-level approach of Burke-Fisher. However, it is more time- and space-efficient and fully automatic. |
wright-etal-1991-adaptive | 1991 | Adaptive Probabilistic Generalized LR Parsing | Various issues in the implementation of generalized LR parsing with probability are discussed. A method for preventing the generation of infinite numbers of states is described and the space requirements of the parsing tables are assessed for a substantial natural-language grammar. Because of a high degree of ambiguity in the grammar, there are many multiple entries and the tables are rather large. A new method for grammar adaptation is introduced which may help to reduce this problem. A probabilistic version of the Tomita parse forest is also described. |
maxwell-1991-phonological | 1991 | Phonological Analysis and Opaque Rule Orders | General morphological/phonological analysis using ordered phonological rules has appeared to be computationally expensive, because ambiguities in feature values arising when phonological rules are "un-applied" multiply with additional rules. But in fact those ambiguities can be largely ignored until lexical lookup, since the underlying values of altered features are needed only in the case of rare opaque rule orderings, and not always then. |
sikkel-nijholt-1991-efficient | 1991 | An Efficient Connectionist Context-Free Parser | A connectionist network is defined that parses a grammar in Chomsky Normal Form in logarithmic time, based on a modification of Rytter's recognition algorithm. A similar parsing network can be defined for an arbitrary context-free grammar. Such networks can be integrated into a connectionist parsing environment for interactive distributed processing of syntactic, semantic and pragmatic information. |
de-vreught-honig-1991-slow | 1991 | Slow and Fast Parallel Recognition | In the first part of this paper a slow parallel recognizer is described for general CFGs. The recognizer runs in $\Theta(n^3/p(n))$ time with $p(n) = O(n^2)$ processors. It generalizes the items of the Earley algorithm to double dotted items, which are more suited to parallel parsing. In the second part a fast parallel recognizer is given for general CFGs. The recognizer runs in $O(\log n)$ time using $O(n^6)$ processors. It is a generalisation of the Gibbons and Rytter algorithm for grammars in CNF. |
kita-etal-1991-processing | 1991 | Processing Unknown Words in Continuous Speech Recognition | Current continuous speech recognition systems essentially ignore unknown words. Systems are designed to recognize words in the lexicon. However, for using speech recognition systems in real applications of spoken-language processing, it is very important to process unknown words. This paper proposes a continuous speech recognition method which accepts any utterance that might include unknown words. In this method, words not in the lexicon are transcribed as phone sequences, while words in the lexicon are recognized correctly. The HMM-LR speech recognition system, which is an integration of Hidden Markov Models and generalized LR parsing, is used as the baseline system, and enhanced with the trigram model of syllables to take into account the stochastic characteristics of a language. Preliminary results indicate that our approach is very promising. |
carpenter-etal-1991-specification | 1991 | The Specification and Implementation of Constraint-Based Unification Grammars | Our aim is to motivate and provide a specification for a unification-based natural language processing system where grammars are expressed in terms of principles which constrain linguistic representations. Using typed feature structures with multiple inheritance for our linguistic representations and definite attribute-value logic clauses to express constraints, we will develop the bare essentials required for an implementation of a parser and generator for the Head-driven Phrase Structure Grammar (HPSG) formalism of Pollard and Sag (1987). |
ng-tomita-1991-probabilistic | 1991 | Probabilistic LR Parsing for General Context-Free Grammars | To combine the advantages of probabilistic grammars and generalized LR parsing, an algorithm for constructing a probabilistic LR parser given a probabilistic context-free grammar is needed. In this paper, implementation issues in adapting Tomita's generalized LR parser with graph-structured stack to perform probabilistic parsing are discussed. Wright and Wrigley (1989) have proposed a probabilistic LR-table construction algorithm for non-left-recursive context-free grammars. To account for left recursions, a method for computing item probabilities using the generation of systems of linear equations is presented. The notion of deferred probabilities is proposed as a means for dealing with similar item sets with differing probability assignments. |
tomabechi-1991-quasi | 1991 | Quasi-Destructive Graph Unification | Graph unification is the most expensive part of unification-based grammar parsing. It often takes over 90% of the total parsing time of a sentence. We focus on two speed-up elements in the design of unification algorithms: 1) elimination of excessive copying by only copying successful unifications, and 2) finding unification failures as soon as possible. We have developed a scheme to attain these two criteria without expensive overhead by temporarily modifying graphs during unification to eliminate copying. The temporary modification is invalidated in constant time and, therefore, unification can continue looking for a failure without the overhead associated with copying. After a successful unification, because the nodes are temporarily prepared for copying, a fast copy can be performed without overhead for handling reentrancy, loops and variables. We found that parsing relatively long sentences (requiring about 500 unifications during a parse) using our algorithm is 100 to 200 percent faster than parsing the same sentences using Wroblewski's algorithm. |
kitano-1991-unification | 1991 | Unification Algorithms for Massively Parallel Computers | This paper describes unification algorithms for fine-grained massively parallel computers. The algorithms are based on a parallel marker-passing scheme. The marker-passing scheme in our algorithms carries only bit-vectors, address pointers and values. Because of their simplicity, our algorithms can be implemented on various architectures of massively parallel machines without losing the inherent benefits of parallel computation. Also, we describe two augmentations of unification algorithms such as multiple unification and fuzzy unification. Experimental results indicate that our algorithm attains more than 500 unifications per second (for DAGs of average depth of 4) and has a linear time-complexity. This leads to possible implementations of massively parallel natural language parsing with full linguistic analysis. |
kwon-yoon-1991-unification | 1991 | Unification-Based Dependency Parsing of Governor-Final Languages | This paper describes a unification-based dependency parsing method for governor-final languages. Our method can parse not only projective sentences but also non-projective sentences. Feature structures in the tradition of the unification-based formalism are used for writing dependency relations. We use structure sharing and local ambiguity packing to save storage. |
magerman-marcus-1991-pearl | 1991 | Pearl: A Probabilistic Chart Parser | This paper describes a natural language parsing algorithm for unrestricted text which uses a probability-based scoring function to select the "best" parse of a sentence. The parser, Pearl, is a time-asynchronous bottom-up chart parser with Earley-type top-down prediction which pursues the highest-scoring theory in the chart, where the score of a theory represents the extent to which the context of the sentence predicts that interpretation. This parser differs from previous attempts at stochastic parsers in that it uses a richer form of conditional probabilities based on context to predict likelihood. Pearl also provides a framework for incorporating the results of previous work in part-of-speech assignment, unknown word models, and other probabilistic models of linguistic features into one parsing tool, interleaving these techniques instead of using the traditional pipeline architecture. In preliminary tests, Pearl has been successful at resolving part-of-speech and word (in speech processing) ambiguity, determining categories for unknown words, and selecting correct parses first using a very loosely fitting covering grammar. |
herz-rimon-1991-local | 1991 | Local Syntactic Constraints | A method to reduce ambiguity at the level of word tagging, on the basis of local syntactic constraints, is described. Such "short context" constraints are easy to process and can remove most of the ambiguity at that level, which is otherwise a source of great difficulty for parsers and other applications in certain natural languages. The use of local constraints is also very effective for quick invalidation of a large set of ill-formed inputs. While in some approaches local constraints are defined manually or discovered by processing of large corpora, we extract them directly from a grammar (typically context free) of the given language. We focus on deterministic constraints, but later extend the method for a probabilistic language model. |
corazza-etal-1991-stochastic | 1991 | Stochastic Context-Free Grammars for Island-Driven Probabilistic Parsing | In automatic speech recognition the use of language models improves performance. Stochastic language models fit rather well the uncertainty created by the acoustic pattern matching. These models are used to score *theories* corresponding to partial interpretations of sentences. Algorithms have been developed to compute probabilities for theories that grow in a strictly left-to-right fashion. In this paper we consider new relations to compute probabilities of partial interpretations of sentences. We introduce theories containing a gap corresponding to an uninterpreted signal segment. Algorithms can be easily obtained from these relations. Computational complexity of these algorithms is also derived. |
rekers-koorn-1991-substring | 1991 | Substring Parsing for Arbitrary Context-Free Grammars | A substring recognizer for a language $L$ determines whether a string $s$ is a substring of a sentence in $L$, i.e., *substring-recognize(s)* succeeds if and only if $\exists v,w: vsw \in L$. The algorithm for substring recognition presented here accepts general context-free grammars and uses the same parse tables as the parsing algorithm from which it was derived. Substring recognition is useful for *non-correcting* syntax error recovery and for incremental parsing. By extending the substring *recognizer* with the ability to generate trees for the possible contextual completions of the substring, we obtain a substring *parser*, which can be used in a syntax-directed editor to complete fragments of sentences. |
wittenburg-1991-parsing | 1991 | Parsing with Relational Unification Grammars | In this paper we present a unification-based grammar formalism and parsing algorithm for the purposes of defining and processing non-concatenative languages. In order to encompass languages that are characterized by relations beyond simple string concatenation, we introduce relational constraints into a linguistically-based unification grammar formalism and extend bottom-up chart parsing methods. This work is currently being applied in the interpretation of hand-sketched mathematical expressions and structured flowcharts on notebook computers and interactive worksurfaces. |
costagliola-chang-1991-parsing | 1991 | Parsing 2-D Languages with Positional Grammars | In this paper we will present a way to parse two-dimensional languages using LR parsing tables. To do this we describe two-dimensional (positional) grammars as a generalization of the context-free string grammars. The main idea behind this is to allow a traditional LR parser to choose the next symbol to parse from a two-dimensional space. Cases of ambiguity are analyzed and some ways to avoid them are presented. Finally, we construct a parser for the two-dimensional arithmetic expression language and implement it by using the tool Yacc. |
kasper-1989-unification | 1989 | Unification and Classification: An Experiment in Information-Based Parsing | When dealing with a phenomenon as vast and complex as natural language, an experimental approach is often the best way to discover new computational methods and determine their usefulness. The experimental process includes designing and selecting new experiments, carrying out the experiments, and evaluating the experiments. Most conference presentations are about finished experiments, completed theoretical results, or the evaluation of systems already in use. In this workshop setting, I would like to depart from this tendency to discuss some experiments that we are beginning to perform, and the reasons for investigating a particular approach to parsing. This approach builds on recent work in unification-based parsing and classification-based knowledge representation, developing an architecture that brings together the capabilities of these related frameworks. |
gerdemann-1989-using | 1989 | Using Restriction to Optimize Unification Parsing |
maxwell-iii-kaplan-1989-overview | 1989 | An Overview of Disjunctive Constraint Satisfaction |
lang-1989-uniform | 1989 | A Uniform Formal Framework for Parsing |
satta-stock-1989-head | 1989 | Head-Driven Bidirectional Parsing: A Tabular Method |
kay-1989-head | 1989 | Head-Driven Parsing |
gibson-1989-parsing | 1989 | Parsing with Principles: Predicting a Phrasal Node Before Its Head Appears |
fong-berwick-1989-computational | 1989 | The Computational Implementation of Principle-Based Parsers | This paper addresses the issue of how to organize linguistic principles for efficient processing. Based on the general characterization of principles in terms of purely computational properties, the effects of principle-ordering on parser performance are investigated. A novel parser that exploits the possible variation in principle-ordering to dynamically re-order principles is described. Heuristics for minimizing the amount of unnecessary work performed during the parsing process are also discussed. |
fujisaki-etal-1989-probabilistic | 1989 | Probabilistic Parsing Method for Sentence Disambiguation |
su-etal-1989-sequential | 1989 | A Sequential Truncation Parsing Algorithm Based on the Score Function | In a natural language processing system, a large amount of ambiguity and a large branching factor are hindering factors in obtaining the desired analysis for a given sentence in a short time. In this paper, we propose a sequential truncation parsing algorithm to reduce the searching space and thus lower the parsing time. The algorithm is based on a score function which takes advantage of the probabilistic characteristics of syntactic information in the sentences. A preliminary test on this algorithm was conducted with a special version of our machine translation system, the ARCHTRAN, and an encouraging result was observed. |
wright-wrigley-1989-probabilistic | 1989 | Probabilistic LR Parsing for Speech Recognition | An LR parser for probabilistic context-free grammars is described. Each of the standard versions of parser generator (SLR, canonical and LALR) may be applied. A graph-structured stack permits action conflicts and allows the parser to be used with uncertain input, typical of speech recognition applications. The sentence uncertainty is measured using entropy and is significantly lower for the grammar than for a first-order Markov model. |
huber-1989-parsing | 1989 | Parsing Speech for Structure and Prominence |
kita-etal-1989-parsing | 1989 | Parsing Continuous Speech by HMM-LR Method | This paper describes a speech parsing method called HMM-LR. In HMM-LR, an LR parsing table is used to predict phones in speech input, and the system drives an HMM-based speech recognizer directly without any intervening structures such as a phone lattice. Very accurate, efficient speech parsing is achieved through the integrated processes of speech recognition and language analysis. The HMM-LR method is applied to large-vocabulary speaker-dependent Japanese phrase recognition. The recognition rate is 87.1% for the top candidates and 97.7% for the five best candidates. |
kogure-1989-parsing | 1989 | Parsing Japanese Spoken Sentences Based on HPSG | An analysis method for Japanese spoken sentences based on HPSG has been developed. Any analysis module for the interpreting telephony task requires the following capabilities: (i) the module must be able to treat spoken-style sentences; and, (ii) the module must be able to take, as its input, lattice-like structures which include both correct and incorrect constituent candidates of a speech recognition module. To satisfy these requirements, an analysis method has been developed, which consists of a grammar designed for treating spoken-style Japanese sentences and a parser designed for taking as its input speech recognition output lattices. The analysis module based on this method is used as part of the NADINE (Natural Dialogue Interpretation Expert) system and the SL-TRANS (Spoken Language Translation) system. |
van-zuijlen-1989-probabilistic | 1989 | Probabilistic Methods in Dependency Grammar Parsing | Authentic text as found in corpora cannot be described completely by a formal system, such as a set of grammar rules. As robust parsing is a prerequisite for any practical natural language processing system, there is certainly a need for techniques that go beyond merely formal approaches. Various possibilities, such as the use of simulated annealing, have been proposed recently and we have looked at their suitability for the parse process of the DLT machine translation system, which will use a large structured bilingual corpus as its main linguistic knowledge source. Our findings are that parsing is not the type of task that should be tackled solely through simulated annealing or similar stochastic optimization techniques but that a controlled application of probabilistic methods is essential for the performance of a corpus-based parser. On the basis of our explorative research we have planned a number of small-scale implementations in the near future. |
wall-wittenburg-1989-predictive | 1989 | Predictive Normal Forms for Composition in Categorial Grammars | Extensions to Categorial Grammars proposed to account for nonconstituent conjunction and long-distance dependencies introduce the problem of equivalent derivations, an issue we have characterized as spurious ambiguity from the parsing perspective. In Wittenburg (1987) a proposal was made for compiling Categorial Grammars into predictive forms in order to solve the spurious ambiguity problem. This paper investigates formal properties of grammars that use predictive versions of function composition. Among our results are (1) that grammars with predictive composition are in general equivalent to the originals if and only if a restriction on predictive rules is applied, (2) that modulo this restriction, the predictive grammars have indeed eliminated the problem of spurious ambiguity, and (3) that the issue of equivalence is decidable, i.e., for any particular grammar, whether one needs to apply the restriction or not to ensure equivalence is a decidable question. |
steedman-1989-parsing | 1989 | Parsing Spoken Language Using Combinatory Grammars |
vijay-shanker-weir-1989-recognition | 1989 | Recognition of Combinatory Categorial Grammars and Linear Indexed Grammars |
nozohoor-farshi-1989-handling | 1989 | Handling of Ill-Designed Grammars in Tomita's Parsing Algorithm | In this paper, we show that some non-cyclic context-free grammars with $\varepsilon$-rules cannot be handled by Tomita's algorithm properly. We describe a modified version of the algorithm which remedies the problem. |
kipps-1989-analysis | 1989 | Analysis of Tomita's Algorithm for General Context-Free Parsing | A variation on Tomita's algorithm is analyzed in regards to its time and space complexity. It is shown to have a general time bound of $O(n^{\rho+1})$, where $n$ is the length of the input string and $\rho$ is the length of the longest production. A modified algorithm is presented in which the time bound is reduced to $O(n^3)$. The space complexity of Tomita's algorithm is shown to be proportional to $n^2$ in the worst case and is changed by at most a constant factor with the modification. Empirical results are used to illustrate the trade-off between time and space on a simple example. A discussion of two subclasses of context-free grammars that can be recognized in $O(n^2)$ and $O(n)$ is also included. |
johnson-1989-computational | 1989 | The Computational Complexity of Tomita's Algorithm | |
seneff-1989-probabilistic | 1989 | Probabilistic Parsing for Spoken Language Applications | A new natural language system, TINA, has been developed for applications involving spoken language tasks, which integrates key ideas from context-free grammars, Augmented Transition Networks (ATNs) [6], and Lexical Functional Grammars (LFGs) [1]. The parser uses a best-first strategy, with probability assignments on all arcs obtained automatically from a set of example sentences. An initial context-free grammar, derived from the example sentences, is first converted to a probabilistic network structure. Control includes both top-down and bottom-up cycles, and key parameters are passed among nodes to deal with long-distance movement, agreement, and semantic constraints. The probabilities provide a natural mechanism for exploring more common grammatical constructions first. One novel feature of TINA is that it provides an automatic sentence generation capability, which has been very effective for identifying overgeneration problems. A fully integrated spoken language system using this parser is under development. |
mcclelland-1989-connectionist | 1,989 | Connectionist Models of Language | |
jain-waibel-1989-connectionist | 1,989 | A Connectionist Parser Aimed at Spoken Language | We describe a connectionist model which learns to parse single sentences from sequential word input. A parse in the connectionist network contains information about role assignment, prepositional attachment, relative clause structure, and subordinate clause structure. The trained network displays several interesting types of behavior. These include predictive ability, tolerance to certain corruptions of input word sequences, and some generalization capability. We report on experiments in which a small number of sentence types have been successfully learned by a network. Work is in progress on a larger database. Application of this type of connectionist model to the area of spoken language processing is discussed. |
kitano-etal-1989-massively | 1,989 | Massively Parallel Parsing in $\Phi$DmDialog: Integrated Architecture for Parsing Speech Inputs | This paper describes the parsing scheme in the $\Phi$DmDialog speech-to-speech dialog translation system, with special emphasis on the integration of speech and natural language processing. We propose an integrated architecture for parsing speech inputs based on a parallel marker-passing scheme and attaining dynamic participation of knowledge from the phonological level to the discourse level. At the phonological level, we employ a stochastic model using a transition matrix and a confusion matrix and markers which carry a probability measure. At the higher levels of syntactic/semantic and discourse processing, we integrate a case-based and a constraint-based scheme in a consistent manner so that a priori probabilities and constraints, which reflect linguistic and discourse factors, are provided to the phonological level of processing. A probability/cost-based scheme in our model enables ambiguity resolution at various levels using one uniform principle. |
nijholt-1989-parallel | 1,989 | Parallel Parsing Strategies in Natural Language Processing | We present a concise survey of approaches to the context-free parsing problem of natural languages in parallel environments. The discussion includes parsing schemes which use more than one traditional parser, schemes where separate processes are assigned to the {\textquoteleft}non-deterministic' choices during parsing, schemes where the number of processes depends on the length of the sentence being parsed, and schemes where the number of processes depends on the grammar size rather than on the input length. In addition we discuss a connectionist approach to the parsing problem. |
hausser-1989-complexity | 1,989 | Complexity and Decidability in Left-Associative Grammar | |
shann-1989-selection | 1,989 | The selection of a parsing strategy for an on-line machine translation system in a sublanguage domain. A new practical comparison | |
black-1989-finite | 1,989 | Finite State Machines from Feature Grammars | This paper describes the conversion of a set of feature grammar rules into a deterministic finite state machine that accepts the same language (or at least a well-defined related language). First the reasoning behind why this is an interesting thing to do within the Edinburgh speech recogniser project, is discussed. Then details about the compilation algorithm are given. Finally, there is some discussion of the advantages and disadvantages of this method of implementing feature based grammar formalisms. |
yamai-etal-1989-effective | 1,989 | An Effective Enumeration Algorithm of Parses for Ambiguous CFL | An efficient algorithm that enumerates parses of ambiguous context-free languages is described, and its time and space complexities are discussed. When context-free parsers are used for natural language parsing, pattern recognition, and so forth, there may be a great number of parses for a sentence. One common strategy for efficient enumeration of parses is to assign an appropriate weight to each production, and to enumerate parses in the order of the total weight of all applied productions. However, the existing algorithms taking this strategy can be applied only to the problems of limited areas such as regular languages; in the other areas only inefficient exhaustive searches are known. In this paper, we first introduce a hierarchical graph suitable for enumeration. Using this graph, enumeration of parses in the order of acceptability is equivalent to finding paths of this graph in the order of length. Then, we present an efficient enumeration algorithm with this graph, which can be applied to arbitrary context-free grammars. For enumeration of $k$ parses in the order of the total weight of all applied productions, the time and space complexities of our algorithm are $O(n^3 + kn^2)$ and $O(n^3 + kn)$, respectively. |
weber-1989-morphological | 1,989 | A Morphological Parser for Linguistic Exploration | |
adriaens-1989-parallel | 1,989 | The Parallel Expert Parser: A Meaning-Oriented, Lexically-Guided, Parallel-Interactive Model of Natural Language Understanding | The Parallel Expert Parser (PEP) is a natural language analysis model belonging to the interactive model paradigm that stresses the parallel interaction of relatively small distributed knowledge components to arrive at the meaning of a fragment of text. It borrows the idea of words as basic dynamic entities triggering a set of interactive processes from the Word Expert Parser (Small 1980), but tries to improve on the clarity of interactive processes and on the organization of lexically-distributed knowledge. As of now, it is especially the procedural aspects that have received attention: instead of having wild-running uncontrollable interactions, PEP restricts the interactions to explicit communications on a structured blackboard; the communication protocols are a compromise between maximum parallelism and controllability. At the same time, it is no longer just words that trigger processes; words create larger units (constituents), which are in turn interacting entities on a higher level. Lexical experts contribute their associated knowledge, create higher-level experts, and die away. The linguists define the levels to be considered, and write expert processes in a language that tries to hide the procedural aspects of the parallel-interactive model from them. Problems include the possibility of deadlock situations when processes wait infinitely for each other, the way to efficiently pursue different alternatives (as of now, the system just uses don`t-care determinism), and testing whether the protocols allow linguists to fully express their needs. PEP has been implemented in Flat Concurrent Prolog, using the Logix programming environment. Current research is oriented more towards the problem of distributed knowledge representation. Abstractions and generalizations across lexical experts could be made using principles from object-oriented programming (introducing generic, prototypical experts; cp. Hahn 1987). Thoughts also go in the direction of an integration of the coarse-grained parallelism with knowledge representation in a fine-grained parallel (connectionist) way. |
thompson-1989-chart | 1,989 | Chart Parsing for Loosely Coupled Parallel Systems | |
tanaka-numazaki-1989-parallel | 1,989 | Parallel Generalized LR Parsing based on Logic Programming | A generalized LR parsing algorithm, which has been developed by Tomita [Tomita 86], can treat a context-free grammar. His algorithm makes use of a breadth-first strategy when a conflict occurs in an LR parsing table. It is well known that the breadth-first strategy is suitable for parallel processing. This paper presents an algorithm of a parallel parsing system (PLR) based on generalized LR parsing. PLR is implemented in GHC [Ueda 85], a concurrent logic programming language developed by the Japanese 5th generation computer project. The features of PLR are as follows: Each entry of an LR parsing table is regarded as a process which handles shift and reduce operations. If a process discovers a conflict in an LR parsing table, it activates subprocesses which conduct shift and reduce operations. These subprocesses run in parallel and simulate the breadth-first strategy. There is no need for subprocesses to synchronize during parsing. Stack information is sent to each subprocess from its parent process. A simple experiment for parsing a sentence revealed that PLR runs faster than PAX [Matsumoto 87][Matsumoto 89], which has been known as the best parallel parser. |
schabes-joshi-1989-relevance | 1,989 | The Relevance of Lexicalization to Parsing | In this paper, we investigate the processing of the so-called {\textquoteleft}lexicalized' grammar. In {\textquoteleft}lexicalized' grammars (Schabes, Abeille and Joshi, 1988), each elementary structure is systematically associated with a lexical {\textquoteleft}head'. These structures specify extended domains of locality (as compared to CFGs) over which constraints can be stated. The {\textquoteleft}grammar' consists of a lexicon where each lexical item is associated with a finite number of structures for which that item is the {\textquoteleft}head'. There are no separate grammar rules. There are, of course, {\textquoteleft}rules' which tell us how these structures are combined. A general two-pass parsing strategy for {\textquoteleft}lexicalized' grammars follows naturally. In the first stage, the parser selects a set of elementary structures associated with the lexical items in the input sentence, and in the second stage the sentence is parsed with respect to this set. We evaluate this strategy with respect to two characteristics. First, the amount of filtering on the entire grammar is evaluated: once the first pass is performed, the parser uses only a subset of the grammar. Second, we evaluate the use of non-local information: the structures selected during the first pass encode the morphological value (and therefore the position in the string) of their {\textquoteleft}head'; this enables the parser to use non-local information to guide its search. We take Lexicalized Tree Adjoining Grammars as an instance of lexicalized grammar. We illustrate the organization of the grammar. Then we show how a general Earley-type TAG parser (Schabes and Joshi, 1988) can take advantage of lexicalization. Empirical data show that the filtering of the grammar and the non-local information provided by the two-pass strategy improve the performance of the parser. We explain how constraints over the elementary structures expressed by unification equations can be parsed by a simple extension of the Earley-type TAG parser. Lexicalization guarantees termination of the algorithm without special devices such as restrictors. |
marino-1989-framework | 1,989 | A Framework for the Development of Natural Language Grammars | This paper describes a parsing system used in a framework for the development of Natural Language grammars. It is an interactive environment suitable for writing robust NL applications generally. Its heart is the SAIL parsing algorithm that uses a Phrase-Structure Grammar with extensive augmentations. Furthermore, some particular parsing tools are embedded in the system, and provide a powerful environment for developing grammars, even of large coverage. |
malone-felshin-1989-efficient | 1,989 | An Efficient Method for Parsing Erroneous Input | In a natural language processing system designed for language learners, it is necessary to accept both well-formed and ill-formed input. This paper describes a method of maintaining parsing efficiency for well-formed sentences while still accepting a wide range of ill-formed input. |
yoon-kim-1989-analysis | 1,989 | Analysis Techniques for Korean Sentences Based on Lexical Functional Grammar | Unification-based grammars seem to be adequate for the analysis of agglutinative languages such as Korean. In this paper, the merits of Lexical Functional Grammar are analyzed and the structure of the Korean syntactic analyzer is described. A verbal complex category is used for the analysis of several linguistic phenomena, and a new attribute, UNKNOWN, is defined for the analysis of grammatical relations. |
matsumoto-etal-1989-learning | 1,989 | Learning Cooccurrences by using a Parser | This paper describes two methods for the acquisition and utilization of lexical cooccurrence relationships. Under these methods, cooccurrence relationships are obtained from two kinds of inputs: example sentences and the corresponding correct syntactic structures. The first of the two methods treats as a cooccurrence relationship a set of governors, each element of which is bound to an element of the set of sister nodes in the syntactic structure under consideration. In the second method, a cooccurrence relationship name and affiliated attribute names are manually given in the description of augmented rewriting rules. Both methods determine the correctness of a cooccurrence by using the correct syntactic structure mentioned above. Experiments were made with both methods to find out whether the cooccurrence relationships thus obtained are useful for correct analysis. |
church-etal-1989-parsing-word | 1,989 | Parsing, Word Associations and Typical Predicate-Argument Relations | There are a number of collocational constraints in natural languages that ought to play a more important role in natural language parsers. Thus, for example, it is hard for most parsers to take advantage of the fact that wine is typically drunk, produced, and sold, but (probably) not pruned. So too, it is hard for a parser to know which verbs go with which prepositions (e.g., set up) and which nouns fit together to form compound noun phrases (e.g., computer programmer). This paper will attempt to show that many of these types of concerns can be addressed with syntactic methods (symbol pushing), and need not require explicit semantic interpretation. We have found that it is possible to identify many of these interesting co-occurrence relations by computing simple summary statistics over millions of words of text. This paper will summarize a number of experiments carried out by various subsets of the authors over the last few years. The term collocation will be used quite broadly to include constraints on SVO (subject verb object) triples, phrasal verbs, compound noun phrases, and psycholinguistic notions of word association (e.g., doctor/nurse). |
simpkins-hancox-1989-efficient | 1,989 | An Efficient, Primarily Bottom-Up Parser for Unification Grammars | The search for efficient parsing strategies has a long history, dating back to at least the Cocke/Younger/Kasami parser of the early sixties. The publication of the Earley parser in 1970 has had a significant influence on context-free (CF) parsing for natural language processing, evidenced by the interest in the variety of chart parsers implemented since then. The development of unification grammars (with their complex feature structures) has put new life into the discussion of efficient parsing strategies, and there has been some debate on the use of essentially bottom-up or top-down strategies, the efficacy of top-down filtering and so on. The approach to parsing described here is suitable for complex-category, unification-based grammars. The concentration here is on a unification grammar which has a context-free backbone, Lexical-Functional Grammar (LFG). The parser is designed primarily for simplicity, efficiency and practical application. The parser outlined here results in a high-level, but still efficient, language system without requiring the grammar/lexicon writer to understand its implementation details. The parsing algorithm operates in a systematic bottom-up (BU) fashion, thus taking earliest advantage of LFG`s concentration of information in the lexicon and also making use of unrestricted feature structures to realize LFG`s Top-Down (TD) predictive potential. While LFG can make special use of its CF backbone, the algorithm employed is not restricted to grammars having a CF backbone and is equally suited to complex-feature-based formalisms. Additionally, the algorithm described (which is a systematic left-to-right (left-corner) parsing algorithm) allows us to take full advantage of both BU and TD aspects of a unification-based grammar without incurring prohibitive overheads such as feature-structure comparison or subsumption checking. The use of TD prediction, which in the Earley algorithm is allowed to hypothesize new parse paths, is here restricted to confirming initial parses produced BU, and specializing these according to future (feature) expectations. |
slator-wilks-1989-premo | 1,989 | PREMO: Parsing by Conspicuous Lexical Consumption | PREMO is a knowledge-based Preference Semantics parser with access to a large, lexical semantic knowledge base and organized along the lines of an operating system. The state of every partial parse is captured in a structure called a language object, and the control structure of the preference machine is a priority queue of these language objects. The language object at the front of the queue has the highest score as computed by a preference metric that weighs grammatical predictions, semantic type matching, and pragmatic coherence. The highest priority language object is the intermediate reading that is currently most preferred (the others are still {\textquotedblleft}alive,{\textquotedblright} but not actively pursued); in this way the preference machine avoids combinatorial explosion by following a {\textquotedblleft}best-first{\textquotedblright} strategy for parsing. The system has clear extensions into parallel processing. |