Schema (field name: type, min to max):
title: stringlengths 4 to 246
id: stringlengths 32 to 39
arxiv_url: stringlengths 32 to 39
pdf_url: stringlengths 32 to 39
published_date: stringlengths 10 to 10
updated_date: stringlengths 10 to 10
authors: sequencelengths 1 to 535
affiliations: sequencelengths 1 to 535
summary: stringlengths 23 to 3.54k
comment: stringlengths 0 to 762
journal_ref: stringlengths 0 to 545
doi: stringlengths 0 to 151
primary_category: stringclasses, 156 values
categories: sequencelengths 1 to 11
Strategic polymorphism requires just two combinators!
http://arxiv.org/abs/cs/0212048v1
http://arxiv.org/abs/cs/0212048v1
http://arxiv.org/pdf/cs/0212048v1
2002-12-19
2002-12-19
[ "Ralf Laemmel", "Joost Visser" ]
[ "", "" ]
In previous work, we introduced the notion of functional strategies: first-class generic functions that can traverse terms of any type while mixing uniform and type-specific behaviour. Functional strategies transpose the notion of term rewriting strategies (with coverage of traversal) to the functional programming paradigm. Meanwhile, a number of Haskell-based models and combinator suites have been proposed to support generic programming with functional strategies. In the present paper, we provide a compact and mature reconstruction of functional strategies. We capture strategic polymorphism by just two primitive combinators, without commitment to a specific functional language. We analyse the design space for implementational models of functional strategies. For completeness, we also provide an operational reference model for implementing functional strategies (in Haskell). We demonstrate the generality of our approach by reconstructing representative fragments of the Strafunski library for functional strategies.
A preliminary version of this paper was presented at IFL 2002, and included in the informal preproceedings of the workshop
cs.PL
[ "cs.PL", "D.1.1; D.3.3; I.1.3" ]
Ownership Confinement Ensures Representation Independence for Object-Oriented Programs
http://arxiv.org/abs/cs/0212003v1
http://arxiv.org/abs/cs/0212003v1
http://arxiv.org/pdf/cs/0212003v1
2002-12-04
2002-12-04
[ "Anindya Banerjee", "David A. Naumann" ]
[ "", "" ]
Dedicated to the memory of Edsger W. Dijkstra. Representation independence, or relational parametricity, formally characterizes the encapsulation provided by language constructs for data abstraction and justifies reasoning by simulation. Representation independence has been shown for a variety of languages and constructs, but not for shared references to mutable state; indeed, it fails in general for such languages. This paper formulates representation independence for classes, in an imperative, object-oriented language with pointers, subclassing and dynamic dispatch, class-oriented visibility control, recursive types and methods, and a simple form of module. An instance of a class is considered to implement an abstraction using private fields and so-called representation objects. Encapsulation of representation objects is expressed by a restriction, called confinement, on aliasing. Representation independence is proved for programs satisfying the confinement condition. A static analysis is given for confinement that accepts common designs such as the observer and factory patterns. The formalization takes into account not only the usual interface between a client and a class that provides an abstraction but also the interface (often called ``protected'') between the class and its subclasses.
88 pages, 13 figures
cs.PL
[ "cs.PL", "D.3.3; F.3.1" ]
Monadic Style Control Constructs for Inference Systems
http://arxiv.org/abs/cs/0211035v1
http://arxiv.org/abs/cs/0211035v1
http://arxiv.org/pdf/cs/0211035v1
2002-11-25
2002-11-25
[ "Jean-Marie Chauvet" ]
[ "" ]
Recent advances in programming language study and design have established a standard way of grounding the representation of computational systems in category theory. These formal results have led to a better understanding of issues of control and side-effects in functional and imperative languages. Another benefit is a better way of modelling computational effects in logical frameworks. With this analogy in mind, we embark on an investigation of inference systems based on considering inference behaviour as a form of computation. We delineate a categorical formalisation of control constructs in inference systems. This representation emphasises the parallel between the modular articulation of the categorical building blocks (triples) used to account for the inference architecture and the modular composition of cognitive processes.
25 pages
cs.AI
[ "cs.AI", "cs.PL", "68Q55" ]
Schedulers for Rule-based Constraint Programming
http://arxiv.org/abs/cs/0211019v1
http://arxiv.org/abs/cs/0211019v1
http://arxiv.org/pdf/cs/0211019v1
2002-11-15
2002-11-15
[ "Krzysztof R. Apt", "Sebastian Brand" ]
[ "", "" ]
We study here schedulers for a class of rules that naturally arise in the context of rule-based constraint programming. We systematically derive a scheduler for them from a generic iteration algorithm of Apt [2000]. We apply this study to so-called membership rules of Apt and Monfroy [2001]. This leads to an implementation that yields for these rules a considerably better performance than their execution as standard CHR rules.
8 pages. To appear in Proc. ACM Symposium on Applied Computing (SAC) 2003
cs.DS
[ "cs.DS", "cs.PL", "I.2.2; I.2.3; D.3.3; D.3.4" ]
The DLV System for Knowledge Representation and Reasoning
http://arxiv.org/abs/cs/0211004v3
http://arxiv.org/abs/cs/0211004v3
http://arxiv.org/pdf/cs/0211004v3
2002-11-04
2003-09-10
[ "Nicola Leone", "Gerald Pfeifer", "Wolfgang Faber", "Thomas Eiter", "Georg Gottlob", "Simona Perri", "Francesco Scarcello" ]
[ "", "", "", "", "", "", "" ]
This paper presents the DLV system, which is widely considered the state-of-the-art implementation of disjunctive logic programming, and addresses several aspects. As for problem solving, we provide a formal definition of its kernel language, function-free disjunctive logic programs (also known as disjunctive datalog), extended by weak constraints, which are a powerful tool to express optimization problems. We then illustrate the usage of DLV as a tool for knowledge representation and reasoning, describing a new declarative programming methodology which allows one to encode complex problems (up to $\Delta^P_3$-complete problems) in a declarative fashion. On the foundational side, we provide a detailed analysis of the computational complexity of the language of DLV, and by deriving new complexity results we chart a complete picture of the complexity of this language and important fragments thereof. Furthermore, we illustrate the general architecture of the DLV system which has been influenced by these results. As for applications, we overview application front-ends which have been developed on top of DLV to solve specific knowledge representation tasks, and we briefly describe the main international projects investigating the potential of the system for industrial exploitation. Finally, we report about thorough experimentation and benchmarking, which has been carried out to assess the efficiency of the system. The experimental results confirm the solidity of DLV and highlight its potential for emerging application areas like knowledge management and information integration.
56 pages, 9 figures, 6 tables
ACM Transactions on Computational Logic 7(3):499-562, 2006
10.1145/1149114.1149117
cs.AI
[ "cs.AI", "cs.LO", "cs.PL", "I.2.3; I.2.4; D.3.1" ]
The Weaves Reconfigurable Programming Framework
http://arxiv.org/abs/cs/0210031v1
http://arxiv.org/abs/cs/0210031v1
http://arxiv.org/pdf/cs/0210031v1
2002-10-31
2002-10-31
[ "Srinidhi Varadarajan" ]
[ "" ]
This research proposes a language-independent intra-process framework for object-based composition of unmodified code modules. Intuitively, the two major programming models, threads and processes, can be considered as extremes along a sharing axis. Multiple threads through a process share all global state, whereas instances of a process (or independent processes) share no global state. Weaves provide the generalized framework that allows arbitrary (selective) sharing of state between multiple control flows through a process. The Weaves framework supports multiple independent components in a single process, with flexible state sharing and scheduling, all of which is achieved without requiring any modification to existing code bases. Furthermore, the framework allows dynamic instantiation of code modules and control flows through them. In effect, weaves create intra-process modules (similar to objects in OOP) from code written in any language. The Weaves paradigm allows objects to be arbitrarily shared; it is a true superset of both processes and threads, with code sharing and fast context switching time similar to threads. Weaves does not require any special support from either the language or application code; practically any code can be weaved. Weaves also includes support for fast automatic checkpointing and recovery with no application support. This paper presents the elements of the Weaves framework and results from our implementation that works by reverse-analyzing source-code independent ELF object files. The current implementation has been validated over Sweep3D, a benchmark for 3D discrete ordinates neutron transport [Koch et al., 1992], and a user-level port of the TCP/IP protocol stack from the Linux 2.4 kernel family.
To be submitted to ACM TOCS
cs.PL
[ "cs.PL", "cs.OS", "D.2.11; D.2.12; D.1.3; D.3.2; D.3.4" ]
Proving correctness of Timed Concurrent Constraint Programs
http://arxiv.org/abs/cs/0208042v1
http://arxiv.org/abs/cs/0208042v1
http://arxiv.org/pdf/cs/0208042v1
2002-08-28
2002-08-28
[ "F. S. de Boer", "M. Gabbrielli", "M. C. Meo" ]
[ "", "", "" ]
A temporal logic is presented for reasoning about the correctness of timed concurrent constraint programs. The logic is based on modalities which allow one to specify what a process produces as a reaction to what its environment inputs. These modalities provide an assumption/commitment style of specification which allows a sound and complete compositional axiomatization of the reactive behavior of timed concurrent constraint programs.
cs.LO
[ "cs.LO", "cs.PL", "F.3.1; D.3.1; D.3.2" ]
Logic programming in the context of multiparadigm programming: the Oz experience
http://arxiv.org/abs/cs/0208029v1
http://arxiv.org/abs/cs/0208029v1
http://arxiv.org/pdf/cs/0208029v1
2002-08-20
2002-08-20
[ "Peter Van Roy", "Per Brand", "Denys Duchier", "Seif Haridi", "Martin Henz", "Christian Schulte" ]
[ "", "", "", "", "", "" ]
Oz is a multiparadigm language that supports logic programming as one of its major paradigms. A multiparadigm language is designed to support different programming paradigms (logic, functional, constraint, object-oriented, sequential, concurrent, etc.) with equal ease. This article has two goals: to give a tutorial of logic programming in Oz and to show how logic programming fits naturally into the wider context of multiparadigm programming. Our experience shows that there are two classes of problems, which we call algorithmic and search problems, for which logic programming can help formulate practical solutions. Algorithmic problems have known efficient algorithms. Search problems do not have known efficient algorithms but can be solved with search. The Oz support for logic programming targets these two problem classes specifically, using the concepts needed for each. This is in contrast to the Prolog approach, which targets both classes with one set of concepts, which results in less than optimal support for each class. To explain the essential difference between algorithmic and search programs, we define the Oz execution model. This model subsumes both concurrent logic programming (committed-choice-style) and search-based logic programming (Prolog-style). Instead of Horn clause syntax, Oz has a simple, fully compositional, higher-order syntax that accommodates the abilities of the language. We conclude with lessons learned from this work, a brief history of Oz, and many entry points into the Oz literature.
48 pages, to appear in the journal "Theory and Practice of Logic Programming"
cs.PL
[ "cs.PL", "D.1.6; D.3.2; D.3.3; F.3.3" ]
Offline Specialisation in Prolog Using a Hand-Written Compiler Generator
http://arxiv.org/abs/cs/0208009v1
http://arxiv.org/abs/cs/0208009v1
http://arxiv.org/pdf/cs/0208009v1
2002-08-07
2002-08-07
[ "Michael Leuschel", "Jesper Joergensen", "Wim Vanhoof", "Maurice Bruynooghe" ]
[ "", "", "", "" ]
The so-called ``cogen approach'' to program specialisation, writing a compiler generator instead of a specialiser, has been used with considerable success in partial evaluation of both functional and imperative languages. This paper demonstrates that the cogen approach is also applicable to the specialisation of logic programs (also called partial deduction) and leads to effective specialisers. Moreover, using good binding-time annotations, the speed-ups of the specialised programs are comparable to the speed-ups obtained with online specialisers. The paper first develops a generic approach to offline partial deduction and then a specific offline partial deduction method, leading to the offline system LIX for pure logic programs. While this is a usable specialiser by itself, it is used to develop the cogen system LOGEN. Given a program, a specification of what inputs will be static, and an annotation specifying which calls should be unfolded, LOGEN generates a specialised specialiser for the program at hand. Running this specialiser with particular values for the static inputs results in the specialised program. While this requires two steps instead of one, the efficiency of the specialisation process is improved in situations where the same program is specialised multiple times. The paper also presents and evaluates an automatic binding-time analysis that is able to derive the annotations. While the derived annotations are still suboptimal compared to hand-crafted ones, they enable non-expert users to use the LOGEN system in a fully automated way. Finally, LOGEN is extended so as to directly support a large part of Prolog's declarative and non-declarative features and so as to be able to perform so-called mixline specialisations.
52 pages, to appear in the journal "Theory and Practice of Logic Programming"
cs.PL
[ "cs.PL", "cs.AI", "D.1.6; D.1.2; I.2.2; F.4.1; I.2.3" ]
Soft Concurrent Constraint Programming
http://arxiv.org/abs/cs/0208008v1
http://arxiv.org/abs/cs/0208008v1
http://arxiv.org/pdf/cs/0208008v1
2002-08-06
2002-08-06
[ "S. Bistarelli", "U. Montanari", "F. Rossi" ]
[ "", "", "" ]
Soft constraints extend classical constraints to represent multiple consistency levels, and thus provide a way to express preferences, fuzziness, and uncertainty. While there are many soft constraint solving formalisms, even distributed ones, so far there seems to be no concurrent programming framework where soft constraints can be handled. In this paper we show how the classical concurrent constraint (cc) programming framework can work with soft constraints, and we also propose an extension of cc languages which can use soft constraints to prune and direct the search for a solution. We believe that this new programming paradigm, called soft cc (scc), can also be very useful in many web-related scenarios. In fact, the language level allows web agents to express their interaction and negotiation protocols, and also to post their requests in terms of preferences, and the underlying soft constraint solver can find an agreement among the agents even if their requests are incompatible.
25 pages, 4 figures, submitted to the ACM Transactions on Computational Logic (TOCL), zipped files
ACM Trans. Comput. Log. 7(3): 563-589 (2006)
10.1145/1149114.1149118
cs.PL
[ "cs.PL", "cs.AI", "D.1.3; D.3.1; D.3.2; D.3.3; F.3.2" ]
Defining Rough Sets by Extended Logic Programs
http://arxiv.org/abs/cs/0207089v1
http://arxiv.org/abs/cs/0207089v1
http://arxiv.org/pdf/cs/0207089v1
2002-07-25
2002-07-25
[ "Jan Małuszyński", "Aida Vitória" ]
[ "", "" ]
We show how definite extended logic programs can be used for defining and reasoning with rough sets. Moreover, a rough-set-specific query language is presented and an answering algorithm is outlined. Thus, we not only show a possible application of a paraconsistent logic to the field of rough sets, but also establish a link between rough set theory and logic programming, making possible the transfer of expertise between the two fields.
10 pages. Originally published in proc. PCL 2002, a FLoC workshop; eds. Hendrik Decker, Dina Goldin, Jorgen Villadsen, Toshiharu Waragai (http://floc02.diku.dk/PCL/)
cs.LO
[ "cs.LO", "cs.PL", "F.4.1; I.2.3; I.2.4; D.1.6" ]
Introducing Dynamic Behavior in Amalgamated Knowledge Bases
http://arxiv.org/abs/cs/0207076v1
http://arxiv.org/abs/cs/0207076v1
http://arxiv.org/pdf/cs/0207076v1
2002-07-22
2002-07-22
[ "Elisa Bertino", "Barbara Catania", "Paolo Perlasca" ]
[ "", "", "" ]
The problem of integrating knowledge from multiple and heterogeneous sources is a fundamental issue in current information systems. In order to cope with this problem, the concept of mediator has been introduced as a software component providing intermediate services, linking data resources and application programs, and making transparent the heterogeneity of the underlying systems. In designing a mediator architecture, we believe that an important aspect is the definition of a formal framework by which one is able to model integration according to a declarative style. To this purpose, the use of a logical approach seems very promising. Another important aspect is the ability to model both static integration aspects, concerning query execution, and dynamic ones, concerning data updates and their propagation among the various data sources. Unfortunately, as far as we know, no formal proposals for logically modeling mediator architectures from both a static and a dynamic point of view have yet been developed. In this paper, we extend the framework for amalgamated knowledge bases, presented by Subrahmanian, to deal with dynamic aspects. The language we propose is based on the Active U-Datalog language, and extends it with annotated logic and amalgamation concepts. We model the sources of information and the mediator (also called supervisor) as Active U-Datalog deductive databases, thus modeling queries, transactions, and active rules, interpreted according to the PARK semantics. By using active rules, the system can efficiently perform update propagation among different databases. The result is a logical environment, integrating active and deductive rules, to perform queries and update propagation in a heterogeneous mediated framework.
Other Keywords: Deductive databases; Heterogeneous databases; Active rules; Updates
cs.PL
[ "cs.PL", "cs.DB", "cs.LO", "D.1.6 Logic Programming; H.2.5 Heterogeneous Databases; F.4.1 Mathematical Logic, Logic and constraint programming" ]
A continuation semantics of interrogatives that accounts for Baker's ambiguity
http://arxiv.org/abs/cs/0207070v2
http://arxiv.org/abs/cs/0207070v2
http://arxiv.org/pdf/cs/0207070v2
2002-07-18
2003-07-25
[ "Chung-chieh Shan" ]
[ "" ]
Wh-phrases in English can appear both raised and in-situ. However, only in-situ wh-phrases can take semantic scope beyond the immediately enclosing clause. I present a denotational semantics of interrogatives that naturally accounts for these two properties. It neither invokes movement or economy, nor posits lexical ambiguity between raised and in-situ occurrences of the same wh-phrase. My analysis is based on the concept of continuations. It uses a novel type system for higher-order continuations to handle wide-scope wh-phrases while remaining strictly compositional. This treatment sheds light on the combinatorics of interrogatives as well as other kinds of so-called A'-movement.
20 pages; typo fixed
Proceedings of SALT XII: Semantics and Linguistic Theory, ed. Brendan Jackson, 246-265 (2002)
cs.CL
[ "cs.CL", "cs.PL", "I.2.7" ]
Agent Programming with Declarative Goals
http://arxiv.org/abs/cs/0207008v1
http://arxiv.org/abs/cs/0207008v1
http://arxiv.org/pdf/cs/0207008v1
2002-07-03
2002-07-03
[ "F. S. de Boer", "K. V. Hindriks", "W. van der Hoek", "J. -J. Ch. Meyer" ]
[ "", "", "", "" ]
A long-standing problem in agent research has been to close the gap between agent logics and agent programming frameworks. The main reason for this gap is that agent programming frameworks have not incorporated the concept of a `declarative goal'. Instead, such frameworks have focused mainly on plans or `goals-to-do' rather than on the end goals to be realised, also called `goals-to-be'. In this paper, a new programming language called GOAL is introduced which incorporates such declarative goals. The notion of a `commitment strategy' - one of the main theoretical insights due to agent logics, which explains the relation between beliefs and goals - is used to construct a computational semantics for GOAL. Finally, a proof theory for proving properties of GOAL agents is introduced. Thus, we offer a complete theory of agent programming, in the sense that our theory provides both a programming framework and a programming logic for such agents. An example program is proven correct using this programming logic.
cs.AI
[ "cs.AI", "cs.PL", "F.3.1; F.3.2; I.2.5; I.2.4" ]
Three-Tiered Specification of Micro-Architectures
http://arxiv.org/abs/cs/0205052v1
http://arxiv.org/abs/cs/0205052v1
http://arxiv.org/pdf/cs/0205052v1
2002-05-19
2002-05-19
[ "Vasu Alagar", "Ralf Laemmel" ]
[ "", "" ]
A three-tiered specification approach is developed to formally specify collections of collaborating objects, say micro-architectures. (i) The structural properties to be maintained in the collaboration are specified in the lowest tier. (ii) The behaviour of the object methods in the classes is specified in the middle tier. (iii) The interaction of the objects in the micro-architecture is specified in the third tier. The specification approach is based on Larch and accompanying notations and tools. The approach enables the unambiguous and complete specification of reusable collections of collaborating objects. The layered, formal approach is compared to other approaches including the mainstream UML approach.
cs.SE
[ "cs.SE", "cs.PL", "D.2.4; D.2.10; D.2.11; D.2.13" ]
Monads for natural language semantics
http://arxiv.org/abs/cs/0205026v1
http://arxiv.org/abs/cs/0205026v1
http://arxiv.org/pdf/cs/0205026v1
2002-05-17
2002-05-17
[ "Chung-chieh Shan" ]
[ "" ]
Accounts of semantic phenomena often involve extending types of meanings and revising composition rules at the same time. The concept of monads allows many such accounts -- for intensionality, variable binding, quantification and focus -- to be stated uniformly and compositionally.
14 pages
Proceedings of the 2001 European Summer School in Logic, Language and Information student session, ed. Kristina Striegnitz, 285-298
cs.CL
[ "cs.CL", "cs.PL", "I.2.7; D.3.1; F.3.2" ]
Typed Generic Traversal With Term Rewriting Strategies
http://arxiv.org/abs/cs/0205018v2
http://arxiv.org/abs/cs/0205018v2
http://arxiv.org/pdf/cs/0205018v2
2002-05-14
2002-07-28
[ "Ralf Laemmel" ]
[ "" ]
A typed model of strategic term rewriting is developed. The key innovation is that generic traversal is covered. To this end, we define a typed rewriting calculus S'_{gamma}. The calculus employs a many-sorted type system extended by designated generic strategy types gamma. We consider two generic strategy types, namely the types of type-preserving and type-unifying strategies. S'_{gamma} offers traversal combinators to construct traversals or schemes thereof from many-sorted and generic strategies. The traversal combinators model different forms of one-step traversal, that is, they process the immediate subterms of a given term without anticipating any scheme of recursion into terms. To inhabit generic types, we need to add a fundamental combinator to lift a many-sorted strategy $s$ to a generic type gamma. This step is called strategy extension. The semantics of the corresponding combinator states that s is only applied if the type of the term at hand fits, otherwise the extended strategy fails. This approach dictates that the semantics of strategy application must be type-dependent to a certain extent. Typed strategic term rewriting with coverage of generic term traversal is a simple but expressive model of generic programming. It has applications in program transformation and program analysis.
85 pages, submitted for publication to the Journal of Logic and Algebraic Programming
cs.PL
[ "cs.PL", "D.1.1; D.1.2; D.3.1; D.3.3; F.4.2; I.1.3; I.2.2" ]
A Dynamic Approach to Characterizing Termination of General Logic Programs
http://arxiv.org/abs/cs/0204031v1
http://arxiv.org/abs/cs/0204031v1
http://arxiv.org/pdf/cs/0204031v1
2002-04-12
2002-04-12
[ "Yi-Dong Shen", "Jia-Huai You", "Li-Yan Yuan", "Samuel S. P. Shen", "Qiang Yang" ]
[ "", "", "", "", "" ]
We present a new characterization of termination of general logic programs. Most existing termination analysis approaches rely on some static information about the structure of the source code of a logic program, such as modes/types, norms/level mappings, models/interargument relations, and the like. We propose a dynamic approach which employs some key dynamic features of an infinite (generalized) SLDNF-derivation, such as repetition of selected subgoals and recursive increase in term size. We also introduce a new formulation of SLDNF-trees, called generalized SLDNF-trees. Generalized SLDNF-trees deal with negative subgoals in the same way as Prolog and exist for any general logic program.
To appear in ACM TOCL
ACM Transactions on Computational Logic 4(4):417-430, 2003
cs.LO
[ "cs.LO", "cs.PL", "D.1.6; D.1.2; F.4.1" ]
A Framework for Datatype Transformation
http://arxiv.org/abs/cs/0204018v3
http://arxiv.org/abs/cs/0204018v3
http://arxiv.org/pdf/cs/0204018v3
2002-04-09
2003-02-24
[ "Jan Kort", "Ralf Laemmel" ]
[ "", "" ]
We study one dimension in program evolution, namely the evolution of the datatype declarations in a program. To this end, a suite of basic transformation operators is designed. We cover structure-preserving refactorings, but also structure-extending and -reducing adaptations. Both the object programs that are subject to datatype transformations, and the meta programs that encode datatype transformations are functional programs.
Minor revision; now accepted at LDTA 2003
cs.PL
[ "cs.PL", "D.1.1; D.2.3; D.2.6; D.2.7; D.3.4" ]
Making Abstract Domains Condensing
http://arxiv.org/abs/cs/0204016v1
http://arxiv.org/abs/cs/0204016v1
http://arxiv.org/pdf/cs/0204016v1
2002-04-09
2002-04-09
[ "R. Giacobazzi", "F. Ranzato", "F. Scozzari" ]
[ "", "", "" ]
In this paper we show that reversible analysis of logic languages by abstract interpretation can be performed without loss of precision by systematically refining abstract domains. The idea is to include semantic structures into abstract domains in such a way that the refined abstract domain becomes rich enough to allow approximate bottom-up and top-down semantics to agree. These domains are known as condensing abstract domains. In essence, an abstract domain is condensing if goal-driven and goal-independent analyses agree, namely no loss of precision is introduced by approximating queries in a goal-independent analysis. We prove that condensation is an abstract domain property and that the problem of making an abstract domain condensing boils down to the problem of making the domain complete with respect to unification. In a general abstract interpretation setting we show that when concrete domains and operations give rise to quantales, i.e. models of propositional linear logic, objects in a complete refined abstract domain can be explicitly characterized by linear logic-based formulations. This is the case for abstract domains for logic program analysis approximating computed answer substitutions, where unification plays the role of multiplicative conjunction in a quantale of idempotent substitutions. Condensing abstract domains can therefore be systematically derived by minimally extending any (generally non-condensing) domain by a simple domain refinement operator.
20 pages
cs.PL
[ "cs.PL", "cs.LO", "D.3.1; D.3.2; F.3.2" ]
Design Patterns for Functional Strategic Programming
http://arxiv.org/abs/cs/0204015v1
http://arxiv.org/abs/cs/0204015v1
http://arxiv.org/pdf/cs/0204015v1
2002-04-09
2002-04-09
[ "Ralf Laemmel", "Joost Visser" ]
[ "", "" ]
In previous work, we introduced the fundamentals and a supporting combinator library for \emph{strategic programming}. This is an idiom for generic programming based on the notion of a \emph{functional strategy}: a first-class generic function that can not only be applied to terms of any type, but also allows generic traversal into subterms and can be customized with type-specific behaviour. This paper seeks to provide practicing functional programmers with pragmatic guidance in crafting their own strategic programs. We present the fundamentals and the support from a user's perspective, and we initiate a catalogue of \emph{strategy design patterns}. These design patterns aim at consolidating strategic programming expertise in accessible form.
cs.PL
[ "cs.PL", "D.1.1; D.2.3; D.2.10" ]
The Sketch of a Polymorphic Symphony
http://arxiv.org/abs/cs/0204013v2
http://arxiv.org/abs/cs/0204013v2
http://arxiv.org/pdf/cs/0204013v2
2002-04-08
2002-11-01
[ "Ralf Laemmel" ]
[ "" ]
In previous work, we have introduced functional strategies, that is, first-class generic functions that can traverse into terms of any type while mixing uniform and type-specific behaviour. In the present paper, we give a detailed description of one particular Haskell-based model of functional strategies. This model is characterised as follows. Firstly, we employ first-class polymorphism as a form of second-order polymorphism as for the mere types of functional strategies. Secondly, we use an encoding scheme of run-time type case for mixing uniform and type-specific behaviour. Thirdly, we base all traversal on a fundamental combinator for folding over constructor applications. Using this model, we capture common strategic traversal schemes in a highly parameterised style. We study two original forms of parameterisation. Firstly, we design parameters for the specific control-flow, data-flow and traversal characteristics of more concrete traversal schemes. Secondly, we use overloading to postpone commitment to a specific type scheme of traversal. The resulting portfolio of traversal schemes can be regarded as a challenging benchmark for setups for typed generic programming. The way we develop the model and the suite of traversal schemes, it becomes clear that parameterised + typed strategic programming is best viewed as a potent combination of certain bits of parametric, intensional, polytypic, and ad-hoc polymorphism.
cs.PL
[ "cs.PL", "D.1.1; D.3.3; I.1.3" ]
Three Optimisations for Sharing
http://arxiv.org/abs/cs/0203022v1
http://arxiv.org/abs/cs/0203022v1
http://arxiv.org/pdf/cs/0203022v1
2002-03-18
2002-03-18
[ "Jacob M. Howe", "Andy King" ]
[ "", "" ]
In order to improve precision and efficiency, sharing analysis should track both freeness and linearity. The abstract unification algorithms for these combined domains are suboptimal, hence there is scope for improving precision. This paper proposes three optimisations for tracing sharing in combination with freeness and linearity. A novel connection between equations and sharing abstractions is used to establish the correctness of these optimisations even in the presence of rational trees. A method for pruning intermediate sharing abstractions to improve efficiency is also proposed. The optimisations are lightweight, and therefore some, if not all, of them will be of interest to the implementor.
To appear in Theory and Practice of Logic Programming
cs.PL
[ "cs.PL", "D.1.6; F.3.2" ]
Composing Programs in a Rewriting Logic for Declarative Programming
http://arxiv.org/abs/cs/0203006v1
http://arxiv.org/abs/cs/0203006v1
http://arxiv.org/pdf/cs/0203006v1
2002-03-04
2002-03-04
[ "Juan M. Molina", "Ernesto Pimentel" ]
[ "", "" ]
Constructor-Based Conditional Rewriting Logic is a general framework for integrating first-order functional and logic programming which gives an algebraic semantics for non-deterministic functional-logic programs. In the context of this formalism, we introduce a simple notion of program module as an open program which can be extended together with several mechanisms to combine them. These mechanisms are based on a reduced set of operations. However, the high expressiveness of these operations enables us to model typical constructs for program modularization like hiding, export/import, genericity/instantiation, and inheritance in a simple way. We also deal with the semantic aspects of the proposal by introducing an immediate consequence operator, and studying several alternative semantics for a program module, based on this operator, in the line of logic programming: the operator itself, its least fixpoint (the least model of the module), the set of its pre-fixpoints (term models of the module), and some other variations in order to find a compositional and fully abstract semantics w.r.t. the set of operations and a natural notion of observability.
47 pages. A shorter version (33 pages) will appear in the Journal of Theory and Practice of Logic Programming
cs.LO
[ "cs.LO", "cs.PL", "D.3.2;D.3.3;F.3.2;F.3.3;F.4.1" ]
Towards Generic Refactoring
http://arxiv.org/abs/cs/0203001v1
http://arxiv.org/abs/cs/0203001v1
http://arxiv.org/pdf/cs/0203001v1
2002-03-01
2002-03-01
[ "Ralf Laemmel" ]
[ "" ]
We study program refactoring while considering the language or even the programming paradigm as a parameter. We use typed functional programs, namely Haskell programs, as the specification medium for a corresponding refactoring framework. In order to detach ourselves from language syntax, our specifications adhere to the following style. (I) As for primitive algorithms for program analysis and transformation, we employ generic function combinators supporting generic traversal and polymorphic functions refined by ad-hoc cases. (II) As for the language abstractions involved in refactorings, we design a dedicated multi-parameter class. This class can be instantiated for abstractions as present in various languages, e.g., Java, Prolog or Haskell.
cs.PL
[ "cs.PL", "D.1.1; D.1.2; D.2.1; D.2.3; D.2.13; D.3.1; I.1.1; I.1.2; I.1.3" ]
Logic program specialisation through partial deduction: Control issues
http://arxiv.org/abs/cs/0202012v1
http://arxiv.org/abs/cs/0202012v1
http://arxiv.org/pdf/cs/0202012v1
2002-02-12
2002-02-12
[ "Michael Leuschel", "Maurice Bruynooghe" ]
[ "", "" ]
Program specialisation aims at improving the overall performance of programs by performing source-to-source transformations. A common approach within functional and logic programming, known respectively as partial evaluation and partial deduction, is to exploit partial knowledge about the input. It is achieved through a well-automated application of parts of the Burstall-Darlington unfold/fold transformation framework. The main challenge in developing such systems is to design automatic control that ensures correctness, efficiency, and termination. This survey and tutorial presents the main developments in controlling partial deduction over the past 10 years and analyses their respective merits and shortcomings. It ends with an assessment of current achievements and sketches some remaining research challenges.
To appear in Theory and Practice of Logic Programming
cs.PL
[ "cs.PL", "cs.AI", "D.1.6; D.1.2; I.2.2; F.4.1; I.2.3" ]
Using parametric set constraints for locating errors in CLP programs
http://arxiv.org/abs/cs/0202010v1
http://arxiv.org/abs/cs/0202010v1
http://arxiv.org/pdf/cs/0202010v1
2002-02-11
2002-02-11
[ "W. Drabent", "J. Maluszynski", "P. Pietrzak" ]
[ "", "", "" ]
This paper introduces a framework of parametric descriptive directional types for constraint logic programming (CLP). It proposes a method for locating type errors in CLP programs and presents a prototype debugging tool. The main technique used is checking correctness of programs w.r.t. type specifications. The approach is based on a generalization of known methods for proving correctness of logic programs to the case of parametric specifications. Set-constraint techniques are used for formulating and checking verification conditions for (parametric) polymorphic type specifications. The specifications are expressed in a parametric extension of the formalism of term grammars. The soundness of the method is proved and the prototype debugging tool supporting the proposed approach is illustrated on examples. The paper is a substantial extension of the previous work by the same authors concerning monomorphic directional types.
64 pages, To appear in Theory and Practice of Logic Programming
Theory and Practice of Logic Programming, Vol 2(4&5), 2002, pp 549-611.
cs.PL
[ "cs.PL", "D.1.6; D.2.5; F.3.1" ]
The Witness Properties and the Semantics of the Prolog Cut
http://arxiv.org/abs/cs/0201029v1
http://arxiv.org/abs/cs/0201029v1
http://arxiv.org/pdf/cs/0201029v1
2002-01-31
2002-01-31
[ "James H. Andrews" ]
[ "" ]
The semantics of the Prolog ``cut'' construct is explored in the context of some desirable properties of logic programming systems, referred to as the witness properties. The witness properties concern the operational consistency of responses to queries. A generalization of Prolog with negation as failure and cut is described, and shown not to have the witness properties. A restriction of the system is then described, which preserves the choice and first-solution behaviour of cut but allows the system to have the witness properties. The notion of cut in the restricted system is more restricted than the Prolog hard cut, but retains the useful first-solution behaviour of hard cut, not retained by other proposed cuts such as the ``soft cut''. It is argued that the restricted system achieves a good compromise between the power and utility of the Prolog cut and the need for internal consistency in logic programming systems. The restricted system is given an abstract semantics, which depends on the witness properties; this semantics suggests that the restricted system has a deeper connection to logic than simply permitting some computations which are logical. Parts of this paper appeared previously in a different form in the Proceedings of the 1995 International Logic Programming Symposium.
60 pages, 15 figures. Accepted for publication in Theory and Practice of Logic Programming
cs.PL
[ "cs.PL", "D.3.1; D.3.3" ]
Quantum Computers and Quantum Computer Languages: Quantum Assembly Language and Quantum C Language
http://arxiv.org/abs/quant-ph/0201082v1
http://arxiv.org/abs/quant-ph/0201082v1
http://arxiv.org/pdf/quant-ph/0201082v1
2002-01-18
2002-01-18
[ "Stephen Blaha" ]
[ "" ]
We show a representation of Quantum Computers defines Quantum Turing Machines with associated Quantum Grammars. We then create examples of Quantum Grammars. Lastly we develop an algebraic approach to high level Quantum Languages using Quantum Assembly language and Quantum C language as examples.
32 pages
quant-ph
[ "quant-ph", "cs.PL" ]
Efficient Groundness Analysis in Prolog
http://arxiv.org/abs/cs/0201012v1
http://arxiv.org/abs/cs/0201012v1
http://arxiv.org/pdf/cs/0201012v1
2002-01-16
2002-01-16
[ "Jacob M. Howe", "Andy King" ]
[ "", "" ]
Boolean functions can be used to express the groundness of, and trace grounding dependencies between, program variables in (constraint) logic programs. In this paper, a variety of issues pertaining to the efficient Prolog implementation of groundness analysis are investigated, focusing on the domain of definite Boolean functions, Def. The systematic design of the representation of an abstract domain is discussed in relation to its impact on the algorithmic complexity of the domain operations; the most frequently called operations should be the most lightweight. This methodology is applied to Def, resulting in a new representation, together with new algorithms for its domain operations utilising previously unexploited properties of Def -- for instance, quadratic-time entailment checking. The iteration strategy driving the analysis is also discussed and a simple, but very effective, optimisation of induced magic is described. The analysis can be implemented straightforwardly in Prolog and the use of a non-ground representation results in an efficient, scalable tool which does not require widening to be invoked, even on the largest benchmarks. An extensive experimental evaluation is given.
31 pages. To appear in Theory and Practice of Logic Programming
cs.PL
[ "cs.PL", "D.1.6; F.3.2" ]
A Backward Analysis for Constraint Logic Programs
http://arxiv.org/abs/cs/0201011v1
http://arxiv.org/abs/cs/0201011v1
http://arxiv.org/pdf/cs/0201011v1
2002-01-16
2002-01-16
[ "Andy King", "Lunjin Lu" ]
[ "", "" ]
One recurring problem in program development is that of understanding how to re-use code developed by a third party. In the context of (constraint) logic programming, part of this problem reduces to figuring out how to query a program. If the logic program does not come with any documentation, then the programmer is forced to either experiment with queries in an ad hoc fashion or trace the control-flow of the program (backward) to infer the modes in which a predicate must be called so as to avoid an instantiation error. This paper presents an abstract interpretation scheme that automates the latter technique. The analysis presented in this paper can infer moding properties which if satisfied by the initial query, come with the guarantee that the program and query can never generate any moding or instantiation errors. Other applications of the analysis are discussed. The paper explains how abstract domains with certain computational properties (they condense) can be used to trace control-flow backward (right-to-left) to infer useful properties of initial queries. A correctness argument is presented and an implementation is reported.
32 pages
cs.PL
[ "cs.PL", "cs.SE", "D.1.6; F.3.2" ]
A Quantum Computer Foundation for the Standard Model and SuperString Theories
http://arxiv.org/abs/hep-th/0201092v1
http://arxiv.org/abs/hep-th/0201092v1
http://arxiv.org/pdf/hep-th/0201092v1
2002-01-14
2002-01-14
[ "Stephen Blaha" ]
[ "" ]
We show the Standard Model and SuperString Theories can be naturally based on a Quantum Computer foundation. The Standard Model of elementary particles can be viewed as defining a Quantum Computer Grammar and language. A Quantum Computer in a certain limit naturally forms a Superspace upon which Supersymmetry rotations can be defined - a Continuum Quantum Computer. Quantum high-level computer languages such as Quantum C and Quantum Assembly language are also discussed. In these new linguistic representations, particles become literally symbols or letters, and particle interactions become grammar rules. This view is NOT the same as the often-expressed view that Mathematics is the language of Physics. Some new developments relating to Quantum Computers and Quantum Turing Machines are also described.
78 pages, PDF
hep-th
[ "hep-th", "cs.PL", "quant-ph" ]
HyperPro An integrated documentation environment for CLP
http://arxiv.org/abs/cs/0111046v1
http://arxiv.org/abs/cs/0111046v1
http://arxiv.org/pdf/cs/0111046v1
2001-11-19
2001-11-19
[ "AbdelAli Ed-Dbali", "Pierre Deransart", "Mariza A. S. Bigonha", "Jose de Siqueira", "Roberto da S. Bigonha" ]
[ "", "", "", "", "" ]
The purpose of this paper is to present some functionalities of the HyperPro System. HyperPro is a hypertext tool which allows one to develop Constraint Logic Programming (CLP) programs together with their documentation. The text editing part is not new and is based on the free software Thot. A HyperPro program is a Thot document written in a report style. The tool is designed for CLP but it can be adapted to other programming paradigms as well. Thot offers navigation and editing facilities and synchronized static document views. HyperPro has new functionalities such as document exportation, dynamic views (projections), indexes and version management. Projection is a mechanism for extracting and exporting relevant pieces of program code or documentation according to specific criteria. Indexes are useful to find the references and occurrences of a relation in a document, i.e., where its predicate definition is found and where the relation is used in other programs or document versions, and to translate hypertext links into paper references. It still lacks importation facilities.
In A. Kusalik (ed), Proceedings of the Eleventh International Workshop on Logic Programming Environments (WLPE'01), December 1, 2001, Paphos, Cyprus. cs.PL/0111042
cs.PL
[ "cs.PL", "cs.SE", "D.1.6; D.2.6 (possibly also D.2.5; F.4.1; I.2.3)" ]
An Environment for the Exploration of Non Monotonic Logic Programs
http://arxiv.org/abs/cs/0111049v1
http://arxiv.org/abs/cs/0111049v1
http://arxiv.org/pdf/cs/0111049v1
2001-11-19
2001-11-19
[ "Luis F. Castro", "David S. Warren" ]
[ "", "" ]
Stable Model Semantics and Well Founded Semantics have been shown to be very useful in several applications of non-monotonic reasoning. However, Stable Models presents a high computational complexity, whereas Well Founded Semantics is easy to compute and provides an approximation of Stable Models. Efficient engines exist for both semantics of logic programs. This work presents a computational integration of two of such systems, namely XSB and SMODELS. The resulting system is called XNMR, and provides an interactive system for the exploration of both semantics. Aspects such as modularity can be exploited in order to ease debugging of large knowledge bases with the usual Prolog debugging techniques and an interactive environment. Besides, the use of a full Prolog system as a front-end to a Stable Models engine augments the language usually accepted by such systems.
In A. Kusalik (ed), Proceedings of the Eleventh International Workshop on Logic Programming Environments (WLPE'01), December 1, 2001, Paphos, Cyprus. cs.PL/0111042
cs.PL
[ "cs.PL", "cs.LO", "D.1.6; D.2.6" ]
Prototyping CLP(FD) Tracers: a Trace Model and an Experimental Validation Environment
http://arxiv.org/abs/cs/0111043v1
http://arxiv.org/abs/cs/0111043v1
http://arxiv.org/pdf/cs/0111043v1
2001-11-16
2001-11-16
[ "Ludovic Langevine", "Pierre Deransart", "Mireille Ducasse", "Erwan Jahier" ]
[ "", "", "", "" ]
Developing and maintaining CLP programs requires visualization and explanation tools. However, existing tools are built in an ad hoc way. Therefore porting tools from one platform to another is very difficult. We have shown in previous work that, from a fine-grained execution trace, a number of interesting views about logic program executions could be generated by trace analysis. In this article, we propose a trace model for constraint solving by narrowing. This trace model is the first one proposed for CLP(FD) and does not pretend to be the ultimate one. We also propose an instrumented meta-interpreter in order to experiment with the model. Furthermore, we show that the proposed trace model contains the necessary information to build known and useful execution views. This work sets the basis for generic execution analysis of CLP(FD) programs.
In A. Kusalik (ed), Proceedings of the Eleventh International Workshop on Logic Programming Environments (WLPE'01), December 1, 2001, Paphos, Cyprus. cs.PL/0111042
cs.PL
[ "cs.PL", "cs.SE", "D.1.6; D.2.6; D.2.5; F.4.1" ]
Proceedings of the Eleventh Workshop on Logic Programming Environments (WLPE'01)
http://arxiv.org/abs/cs/0111042v2
http://arxiv.org/abs/cs/0111042v2
http://arxiv.org/pdf/cs/0111042v2
2001-11-16
2001-11-22
[ "Anthony Kusalik" ]
[ "" ]
The Eleventh Workshop on Logic Programming Environments (WLPE'01) was one in a series of international workshops in the topic area. It was held on December 1, 2001 in Paphos, Cyprus as a post-conference workshop at ICLP 2001. Eight refereed papers were presented at the workshop. A majority of the papers involved, in some way, constraint logic programming and tools for software development. Other topic areas addressed include execution visualization, instructional aids (for learning users), software maintenance (including debugging), and provisions for new paradigms.
8 refereed papers; Anthony Kusalik, editor; 11WLPE, WLPE 2001
cs.PL
[ "cs.PL", "cs.SE", "D.1.6; D.2.5; D.2.6; F.4.1; I.2.3" ]
Combining Propagation Information and Search Tree Visualization using ILOG OPL Studio
http://arxiv.org/abs/cs/0111040v2
http://arxiv.org/abs/cs/0111040v2
http://arxiv.org/pdf/cs/0111040v2
2001-11-15
2001-11-16
[ "Christiane Bracchi", "Christophe Gefflot", "Frederic Paulin" ]
[ "", "", "" ]
In this paper we give an overview of the current state of the graphical features provided by ILOG OPL Studio for debugging and performance tuning of OPL programs or external ILOG Solver based applications. This paper focuses on combining propagation and search information using the Search Tree view and the Propagation Spy. A new synthetic view is presented: the Christmas Tree, which combines the Search Tree view with statistics on the efficiency of the domain reduction and on the number of the propagation events triggered.
In A. Kusalik (ed), proceedings of the Eleventh International Workshop on Logic Programming Environments (WLPE'01), December 1, 2001, Paphos, Cyprus, cs.PL/0111042
cs.PL
[ "cs.PL", "cs.SE", "D.1.6; D.2.6; D.2.5; F.4.1" ]
On the Design of a Tool for Supporting the Construction of Logic Programs
http://arxiv.org/abs/cs/0111041v3
http://arxiv.org/abs/cs/0111041v3
http://arxiv.org/pdf/cs/0111041v3
2001-11-15
2001-11-27
[ "Gustavo A. Ospina", "Baudouin Le Charlier" ]
[ "", "" ]
Environments for the systematic construction of logic programs are needed in academia as well as in industry. Such environments should support well-defined construction methods and should be able to be extended and to interact with other programming tools like debuggers and compilers. We present a variant of the Deville methodology for logic program development, and the design of a tool for supporting the methodology. Our aim is to facilitate the learning of logic programming and to lay the basis for more sophisticated tools for program development.
In A. Kusalik (ed), Proceedings of the Eleventh Workshop on Logic Programming Environments (WLPE'01), December 1, 2001, Paphos, Cyprus. cs.PL/0111042
cs.PL
[ "cs.PL", "cs.SE", "D.1.6;D.2.6" ]
An Integrated Development Environment for Declarative Multi-Paradigm Programming
http://arxiv.org/abs/cs/0111039v2
http://arxiv.org/abs/cs/0111039v2
http://arxiv.org/pdf/cs/0111039v2
2001-11-14
2001-11-19
[ "Michael Hanus", "Johannes Koj" ]
[ "", "" ]
In this paper we present CIDER (Curry Integrated Development EnviRonment), an analysis and programming environment for the declarative multi-paradigm language Curry. CIDER is a graphical environment to support the development of Curry programs by providing integrated tools for the analysis and visualization of programs. CIDER is completely implemented in Curry using libraries for GUI programming (based on Tcl/Tk) and meta-programming. An important aspect of our environment is the possible adaptation of the development environment to other declarative source languages (e.g., Prolog or Haskell) and the extensibility w.r.t. new analysis methods. To support the latter feature, the lazy evaluation strategy of the underlying implementation language Curry becomes quite useful.
In A. Kusalik (ed), proceedings of the Eleventh International Workshop on Logic Programming Environments (WLPE'01), December 1, 2001, Paphos, Cyprus. cs.PL/0111042
cs.PL
[ "cs.PL", "cs.SE", "D.1.1; D.1.3; D.1.6; D.2.6; D.2.5; D.3.4" ]
User-friendly explanations for constraint programming
http://arxiv.org/abs/cs/0111037v2
http://arxiv.org/abs/cs/0111037v2
http://arxiv.org/pdf/cs/0111037v2
2001-11-14
2001-11-16
[ "Narendra Jussien", "Samir Ouis" ]
[ "", "" ]
In this paper, we introduce a set of tools for providing user-friendly explanations in an explanation-based constraint programming system. The idea is to represent the constraints of a problem as a hierarchy (a tree). Users are then represented as a set of understandable nodes in that tree (a cut). Classical explanations (sets of system constraints) just need to be projected on that representation in order to be understandable by any user. We present here the main interests of this idea.
In A. Kusalik (ed), proceedings of the Eleventh International Workshop on Logic Programming Environments (WLPE'01), December 1, 2001, Paphos, Cyprus. cs.PL/0111042
cs.PL
[ "cs.PL", "cs.SE", "D.2.6;D.3.3; F.4.1;D.2.5" ]
Practical Aspects for a Working Compile Time Garbage Collection System for Mercury
http://arxiv.org/abs/cs/0110037v1
http://arxiv.org/abs/cs/0110037v1
http://arxiv.org/pdf/cs/0110037v1
2001-10-17
2001-10-17
[ "Nancy Mazur", "Peter Ross", "Gerda Janssens", "Maurice Bruynooghe" ]
[ "", "", "", "" ]
Compile-time garbage collection (CTGC) is still a very uncommon feature within compilers. In previous work we have developed a compile-time structure-reuse system for Mercury, a logic programming language. This system indicates which data structures can safely be reused at run-time. As preliminary experiments were promising, we have continued this work and now have a working and well-performing, near-to-ship CTGC system built into the Melbourne Mercury Compiler (MMC). In this paper we present the multiple design decisions leading to this system, we report the results of using CTGC for a set of benchmarks, including a real-world program, and finally we discuss further possible improvements. Benchmarks show substantial memory savings and a noticeable reduction in execution time.
15 pages. A version of this paper will appear in Proceedings of the Seventeenth International Conference on Logic Programming (ICLP2001)
cs.PL
[ "cs.PL", "D.3.4;I.2.3" ]
On termination of meta-programs
http://arxiv.org/abs/cs/0110035v3
http://arxiv.org/abs/cs/0110035v3
http://arxiv.org/pdf/cs/0110035v3
2001-10-17
2003-12-24
[ "Alexander Serebrenik", "Danny De Schreye" ]
[ "", "" ]
The term {\em meta-programming} refers to the ability to write programs that have other programs as data and exploit their semantics. The aim of this paper is to present a methodology that allows us to perform a correct termination analysis for a broad class of practical meta-interpreters, including ones using negation and performing different tasks during the execution. It is based on combining the power of general orderings, used in proving termination of term-rewriting systems and programs, and on the well-known acceptability condition, used in proving termination of logic programs. The methodology establishes a relationship between the ordering needed to prove termination of the interpreted program and the ordering needed to prove termination of the meta-interpreter together with this interpreted program. If such a relationship is established, termination of one of those implies termination of the other one, i.e., the meta-interpreter preserves termination. Among the meta-interpreters that are analysed correctly are a proof-tree constructing meta-interpreter, different kinds of tracers and reasoners. To appear without appendix in Theory and Practice of Logic Programming.
To appear in Theory and Practice of Logic Programming (TPLP)
cs.PL
[ "cs.PL", "cs.LO", "D.1.6; D.2.4" ]
Inference of termination conditions for numerical loops in Prolog
http://arxiv.org/abs/cs/0110034v2
http://arxiv.org/abs/cs/0110034v2
http://arxiv.org/pdf/cs/0110034v2
2001-10-17
2003-07-10
[ "Alexander Serebrenik", "Danny De Schreye" ]
[ "", "" ]
We present a new approach to termination analysis of numerical computations in logic programs. Traditional approaches fail to analyse them due to non well-foundedness of the integers. We present a technique that allows overcoming these difficulties. Our approach is based on transforming a program in a way that allows integrating and extending techniques originally developed for analysis of numerical computations in the framework of query-mapping pairs with the well-known framework of acceptability. Such an integration not only contributes to the understanding of termination behaviour of numerical computations, but also allows us to perform a correct analysis of such computations automatically, by extending previous work on a constraint-based approach to termination. Finally, we discuss possible extensions of the technique, including incorporating general term orderings.
To appear in Theory and Practice of Logic Programming
cs.PL
[ "cs.PL", "cs.LO", "D.1.6; D.2.4" ]
Mixed-Initiative Interaction = Mixed Computation
http://arxiv.org/abs/cs/0110022v1
http://arxiv.org/abs/cs/0110022v1
http://arxiv.org/pdf/cs/0110022v1
2001-10-09
2001-10-09
[ "Naren Ramakrishnan", "Robert Capra", "Manuel A. Perez-Quinones" ]
[ "", "", "" ]
We show that partial evaluation can be usefully viewed as a programming model for realizing mixed-initiative functionality in interactive applications. Mixed-initiative interaction between two participants is one where the parties can take turns at any time to change and steer the flow of interaction. We concentrate on the facet of mixed-initiative referred to as `unsolicited reporting' and demonstrate how out-of-turn interactions by users can be modeled by `jumping ahead' to nested dialogs (via partial evaluation). Our approach permits the view of dialog management systems in terms of their native support for staging and simplifying interactions; we characterize three different voice-based interaction technologies using this viewpoint. In particular, we show that the built-in form interpretation algorithm (FIA) in the VoiceXML dialog management architecture is actually a (well disguised) combination of an interpreter and a partial evaluator.
cs.PL
[ "cs.PL", "cs.HC", "F.3.2; H.5.2" ]
Proceedings of the 6th Annual Workshop of the ERCIM Working Group on Constraints
http://arxiv.org/abs/cs/0110012v1
http://arxiv.org/abs/cs/0110012v1
http://arxiv.org/pdf/cs/0110012v1
2001-10-03
2001-10-03
[ "Krzysztof R. Apt", "Roman Bartak", "Eric Monfroy", "Francesca Rossi", "Sebastian Brand" ]
[ "", "", "", "", "" ]
Homepage of the workshop proceedings, with links to all individually archived papers
2 invited talks, 17 papers
cs.PL
[ "cs.PL", "D.3.3" ]
Variable and Value Ordering When Solving Balanced Academic Curriculum Problems
http://arxiv.org/abs/cs/0110007v1
http://arxiv.org/abs/cs/0110007v1
http://arxiv.org/pdf/cs/0110007v1
2001-10-02
2001-10-02
[ "Carlos Castro", "Sebastian Manzano" ]
[ "", "" ]
In this paper we present the use of Constraint Programming for solving balanced academic curriculum problems. We discuss the important role that heuristics play when solving a problem using a constraint-based approach. We also show how constraint solving techniques make it possible to solve, very efficiently, combinatorial optimization problems that are too hard for integer programming techniques.
12 pages, 4 figures
Proceedings of 6th Workshop of the ERCIM WG on Constraints (Prague, June 2001)
cs.PL
[ "cs.PL", "D.3" ]
Higher-Order Pattern Complement and the Strict Lambda-Calculus
http://arxiv.org/abs/cs/0109072v1
http://arxiv.org/abs/cs/0109072v1
http://arxiv.org/pdf/cs/0109072v1
2001-09-24
2001-09-24
[ "Alberto Momigliano", "Frank Pfenning" ]
[ "", "" ]
We address the problem of complementing higher-order patterns without repetitions of existential variables. Differently from the first-order case, the complement of a pattern cannot, in general, be described by a pattern, or even by a finite set of patterns. We therefore generalize the simply-typed lambda-calculus to include an internal notion of strict function so that we can directly express that a term must depend on a given variable. We show that, in this more expressive calculus, finite sets of patterns without repeated variables are closed under complement and intersection. Our principal application is the transformational approach to negation in higher-order logic programs.
37 pages
ACM Trans. Comput. Log. 4(4): 493-529 (2003)
10.1145/937555.937559
cs.LO
[ "cs.LO", "cs.PL", "D.3.3;D.1.6;F.4.1" ]
CLP Approaches to 2D Angle Placements
http://arxiv.org/abs/cs/0109066v1
http://arxiv.org/abs/cs/0109066v1
http://arxiv.org/pdf/cs/0109066v1
2001-09-24
2001-09-24
[ "Tomasz Szczygiel" ]
[ "" ]
The paper presents two CLP approaches to 2D angle placements, implemented in CHIP v.5.3. The first is based on the classical (rectangular) cumulative global constraint, the second on the new trapezoidal cumulative global constraint. Both approaches are applied to a specific problem, which is presented.
Presented at the 6th Annual Workshop of the ERCIM Working Group on Constraints, 2001
cs.PL
[ "cs.PL", "D.3.3" ]
Branching: the Essence of Constraint Solving
http://arxiv.org/abs/cs/0109060v1
http://arxiv.org/abs/cs/0109060v1
http://arxiv.org/pdf/cs/0109060v1
2001-09-24
2001-09-24
[ "Antonio J. Fernandez", "Patricia M. Hill" ]
[ "", "" ]
This paper focuses on the branching process for solving any constraint satisfaction problem (CSP). A parametrised schema is proposed that (with suitable instantiations of the parameters) can solve CSPs on both finite and infinite domains. The paper presents a formal specification of the schema and a statement of a number of interesting properties that, subject to certain conditions, are satisfied by any instances of the schema. It is also shown that the operational procedures of many constraint systems (including cooperative systems) satisfy these conditions. Moreover, the schema is also used to solve the same CSP in different ways by means of different instantiations of its parameters.
18 pages, 2 figures, Proceedings ERCIM Workshop on Constraints (Prague, June 2001)
cs.PL
[ "cs.PL", "D.3.3; D.3.2" ]
CLP versus LS on Log-based Reconciliation Problems
http://arxiv.org/abs/cs/0109033v1
http://arxiv.org/abs/cs/0109033v1
http://arxiv.org/pdf/cs/0109033v1
2001-09-18
2001-09-18
[ "Francois Fages" ]
[ "" ]
Nomadic applications create replicas of shared objects that evolve independently while they are disconnected. When reconnecting, the system has to reconcile the divergent replicas. In the log-based approach to reconciliation, such as in the IceCube system, the input is a common initial state and logs of actions that were performed on each replica. The output is a consistent global schedule that maximises the number of accepted actions. The reconciler merges the logs according to the schedule, and replays the operations in the merged log against the initial state, yielding a reconciled common final state. In this paper, we show the NP-completeness of the log-based reconciliation problem and present two programs for solving it. Firstly, a constraint logic program (CLP) that uses integer constraints for expressing precedence constraints, boolean constraints for expressing dependencies between actions, and some heuristics for guiding the search. Secondly, a stochastic local search method with Tabu heuristic (LS), that computes solutions in an incremental fashion but does not prove optimality. One difficulty in the LS modeling lies in the handling of both boolean variables and integer variables, and in the handling of the objective function, which differs from a max-CSP problem. Preliminary evaluation results indicate better performance for the CLP program which, on somewhat realistic benchmarks, finds nearly optimal solutions for up to a thousand actions and proves optimality for up to a hundred actions.
Article presented at the 6th ERCIM workshop of the Constraint Group, Prague, Czech Republic, June 2001
cs.PL
[ "cs.PL", "D.1.6; D.3.2" ]
Dynamic Global Constraints: A First View
http://arxiv.org/abs/cs/0109025v1
http://arxiv.org/abs/cs/0109025v1
http://arxiv.org/pdf/cs/0109025v1
2001-09-18
2001-09-18
[ "Roman Bartak" ]
[ "" ]
Global constraints have proved themselves to be an efficient tool for modelling and solving large-scale real-life combinatorial problems. They encapsulate a set of binary constraints and, using global reasoning about this set, they filter the domains of involved variables better than arc consistency among the set of binary constraints. Moreover, global constraints exploit semantic information to achieve more efficient filtering than generalised consistency algorithms for n-ary constraints. Continued expansion of constraint programming (CP) to various application areas brings new challenges for the design of global constraints. In particular, application of CP to advanced planning and scheduling (APS) requires dynamic additions of new variables and constraints during the process of constraint satisfaction and, thus, it would be helpful if the global constraints could adopt new variables. In the paper, we give a motivation for such dynamic global constraints and we describe a dynamic version of the well-known alldifferent constraint.
11 pages. Proceedings ERCIM WG on Constraints (Prague, June 2001)
cs.PL
[ "cs.PL", "cs.AI", "D.3.2; D.3.3; D.1.6" ]
Verification of Timed Automata Using Rewrite Rules and Strategies
http://arxiv.org/abs/cs/0109024v1
http://arxiv.org/abs/cs/0109024v1
http://arxiv.org/pdf/cs/0109024v1
2001-09-17
2001-09-17
[ "Emmanuel Beffara", "Olivier Bournez", "Hassen Kacem", "Claude Kirchner" ]
[ "", "", "", "" ]
ELAN is a powerful language and environment for specifying and prototyping deduction systems in a language based on rewrite rules controlled by strategies. Timed automata are a class of continuous real-time models of reactive systems for which efficient model-checking algorithms have been devised. In this paper, we show that these algorithms can very easily be prototyped in the ELAN system. This paper argues through this example that rewriting-based systems relying on rules and strategies are a good framework to prototype, study and test rather efficiently symbolic model-checking algorithms, i.e. algorithms which involve a combination of graph exploration rules, deduction rules, constraint solving techniques and decision procedures.
cs.PL
[ "cs.PL", "I.2.3" ]
Interactive Timetabling
http://arxiv.org/abs/cs/0109022v1
http://arxiv.org/abs/cs/0109022v1
http://arxiv.org/pdf/cs/0109022v1
2001-09-17
2001-09-17
[ "Tomas Muller", "Roman Bartak" ]
[ "", "" ]
Timetabling is a typical application of constraint programming whose task is to allocate activities to slots in available resources, respecting various constraints like precedence and capacity. In this paper we present a basic concept, a constraint model, and the solving algorithms for interactive timetabling. Interactive timetabling combines automated timetabling (the machine allocates the activities) with user interaction (the user can interfere with the process of timetabling). Because the user can see how the timetabling proceeds and can intervene in this process, we believe that such an approach is more convenient than fully automated timetabling, which behaves like a black box. The contribution of this paper is twofold: we present a generic model to describe timetabling (and scheduling in general) problems and we propose an interactive algorithm for solving such problems.
12 pages. Proceedings ERCIM WG on Constraints (Prague, June 2001)
cs.PL
[ "cs.PL", "cs.AI", "D.3.2; D.3.3; F2.2" ]
Assigning Satisfaction Values to Constraints: An Algorithm to Solve Dynamic Meta-Constraints
http://arxiv.org/abs/cs/0109014v1
http://arxiv.org/abs/cs/0109014v1
http://arxiv.org/pdf/cs/0109014v1
2001-09-13
2001-09-13
[ "Janet van der Linden" ]
[ "" ]
The model of Dynamic Meta-Constraints has special activity constraints which can activate other constraints. It also has meta-constraints which range over other constraints. An algorithm is presented in which constraints can be assigned one of five different satisfaction values, which leads to the assignment of domain values to the variables in the CSP. An outline of the model and the algorithm is presented, followed by some initial results for two problems: a simple classic CSP and the Car Configuration Problem. The algorithm is shown to perform few backtracks per solution, but to have overheads in the form of historical records required for the implementation of state.
11 pages. Proceedings ERCIM WG on Constraints (Prague, June 2001)
cs.PL
[ "cs.PL", "cs.AI", "D.3.2; D3.3" ]
On the generalized dining philosophers problem
http://arxiv.org/abs/cs/0109003v1
http://arxiv.org/abs/cs/0109003v1
http://arxiv.org/pdf/cs/0109003v1
2001-09-03
2001-09-03
[ "Oltea Mihaela Herescu", "Catuscia Palamidessi" ]
[ "", "" ]
We consider a generalization of the dining philosophers problem to arbitrary connection topologies. We focus on symmetric, fully distributed systems, and we address the problem of guaranteeing progress and lockout-freedom, even in the presence of adversary schedulers, by using randomized algorithms. We show that the well-known algorithms of Lehmann and Rabin do not work in the generalized case, and we propose an alternative algorithm based on the idea of letting the philosophers assign a random priority to their adjacent forks.
Proc. of the 20th ACM Symposium on Principles of Distributed Computing (PODC), pages 81-89, ACM, 2001
cs.PL
[ "cs.PL", "D.4.1;C.2.4" ]
Probabilistic asynchronous pi-calculus
http://arxiv.org/abs/cs/0109002v1
http://arxiv.org/abs/cs/0109002v1
http://arxiv.org/pdf/cs/0109002v1
2001-09-03
2001-09-03
[ "Oltea Mihaela Herescu", "Catuscia Palamidessi" ]
[ "", "" ]
We propose an extension of the asynchronous pi-calculus with a notion of random choice. We define an operational semantics which distinguishes between probabilistic choice, made internally by the process, and nondeterministic choice, made externally by an adversary scheduler. This distinction will allow us to reason about the probabilistic correctness of algorithms under certain schedulers. We show that in this language we can solve the electoral problem, which was proved not possible in the asynchronous pi-calculus. Finally, we show an implementation of the probabilistic asynchronous pi-calculus in a Java-like language.
Report version (longer and more complete than the FoSSaCs 2000 version)
Jerzy Tiuryn, editor, Proceedings of FOSSACS 2000 (Part of ETAPS 2000), volume 1784 of Lecture Notes in Computer Science, pages 146--160. Springer-Verlag, 2000
cs.PL
[ "cs.PL", "D.1.3;D.3.2;D.3.3" ]
The Partial Evaluation Approach to Information Personalization
http://arxiv.org/abs/cs/0108003v1
http://arxiv.org/abs/cs/0108003v1
http://arxiv.org/pdf/cs/0108003v1
2001-08-07
2001-08-07
[ "Naren Ramakrishnan", "Saverio Perugini" ]
[ "", "" ]
Information personalization refers to the automatic adjustment of information content, structure, and presentation tailored to an individual user. By reducing information overload and customizing information access, personalization systems have emerged as an important segment of the Internet economy. This paper presents a systematic modeling methodology - PIPE (`Personalization is Partial Evaluation') - for personalization. Personalization systems are designed and implemented in PIPE by modeling an information-seeking interaction in a programmatic representation. The representation supports the description of information-seeking activities as partial information and their subsequent realization by partial evaluation, a technique for specializing programs. We describe the modeling methodology at a conceptual level and outline representational choices. We present two application case studies that use PIPE for personalizing web sites and describe how PIPE suggests a novel evaluation criterion for information system designs. Finally, we mention several fundamental implications of adopting the PIPE model for personalization and when it is (and is not) applicable.
Comprehensive overview of the PIPE model for personalization
cs.IR
[ "cs.IR", "cs.PL", "D.3.4; H.4.2; H5.2; H5.4" ]
An interactive semantics of logic programming
http://arxiv.org/abs/cs/0107022v1
http://arxiv.org/abs/cs/0107022v1
http://arxiv.org/pdf/cs/0107022v1
2001-07-17
2001-07-17
[ "Roberto Bruni", "Ugo Montanari", "Francesca Rossi" ]
[ "", "", "" ]
We apply to logic programming some recently emerging ideas from the field of reduction-based communicating systems, with the aim of giving evidence of the hidden interactions and the coordination mechanisms that rule the operational machinery of such a programming paradigm. The semantic framework we have chosen for presenting our results is tile logic, which has the advantage of allowing a uniform treatment of goals and observations and of applying abstract categorical tools for proving the results. As main contributions, we mention the finitary presentation of abstract unification, and a concurrent and coordinated abstract semantics consistent with the most common semantics of logic programming. Moreover, the compositionality of the tile semantics is guaranteed by standard results, as it reduces to check that the tile systems associated to logic programs enjoy the tile decomposition property. An extension of the approach for handling constraint systems is also discussed.
42 pages, 24 figures, 3 tables, to appear in the CUP journal of Theory and Practice of Logic Programming
cs.LO
[ "cs.LO", "cs.PL", "D.1.6; D.3.2; D.3.3; F.3.2" ]
Transformations of CCP programs
http://arxiv.org/abs/cs/0107014v1
http://arxiv.org/abs/cs/0107014v1
http://arxiv.org/pdf/cs/0107014v1
2001-07-10
2001-07-10
[ "Sandro Etalle", "Maurizio Gabbrielli", "Maria Chiara Meo" ]
[ "", "", "" ]
We introduce a transformation system for concurrent constraint programming (CCP). We define suitable applicability conditions for the transformations which guarantee that the input/output CCP semantics is preserved also when distinguishing deadlocked computations from successful ones and when considering intermediate results of (possibly) non-terminating computations. The system allows us to optimize CCP programs while preserving their intended meaning: In addition to the usual benefits that one has for sequential declarative languages, the transformation of concurrent programs can also lead to the elimination of communication channels and of synchronization points, to the transformation of non-deterministic computations into deterministic ones, and to the crucial saving of computational space. Furthermore, since the transformation system preserves the deadlock behavior of programs, it can be used for proving deadlock freeness of a given program wrt a class of queries. To this aim it is sometimes sufficient to apply our transformations and to specialize the resulting program wrt the given queries in such a way that the obtained program is trivially deadlock free.
To appear in ACM TOPLAS
cs.PL
[ "cs.PL", "cs.AI", "cs.LO", "I.2.2; D.1.3; D.3.2" ]
The Logic Programming Paradigm and Prolog
http://arxiv.org/abs/cs/0107013v2
http://arxiv.org/abs/cs/0107013v2
http://arxiv.org/pdf/cs/0107013v2
2001-07-10
2001-07-12
[ "Krzysztof R. Apt" ]
[ "" ]
This is a tutorial on logic programming and Prolog appropriate for a course on programming languages for students familiar with imperative programming.
37 pages; unpublished
cs.PL
[ "cs.PL", "cs.AI", "D.1.6; D.3.2" ]
CHR as grammar formalism. A first report
http://arxiv.org/abs/cs/0106059v1
http://arxiv.org/abs/cs/0106059v1
http://arxiv.org/pdf/cs/0106059v1
2001-06-29
2001-06-29
[ "Henning Christiansen" ]
[ "" ]
Grammars written as Constraint Handling Rules (CHR) can be executed as efficient and robust bottom-up parsers that provide a straightforward, non-backtracking treatment of ambiguity. Abduction with integrity constraints as well as other dynamic hypothesis generation techniques fit naturally into such grammars and are exemplified for anaphora resolution, coordination and text interpretation.
12 pages. Presented at ERCIM Workshop on Constraints, Prague, Czech Republic, June 18-20, 2001
Proc. of ERCIM Workshop on Constraints, Prague, Czech Republic, June 18-20, 2001
cs.PL
[ "cs.PL", "cs.CL", "I.2.7;D.3.2;F.4.1;F.4.2" ]
Inference of termination conditions for numerical loops
http://arxiv.org/abs/cs/0106053v1
http://arxiv.org/abs/cs/0106053v1
http://arxiv.org/pdf/cs/0106053v1
2001-06-26
2001-06-26
[ "Alexander Serebrenik", "Danny De Schreye" ]
[ "", "" ]
We present a new approach to termination analysis of numerical computations in logic programs. Traditional approaches fail to analyse them due to the non well-foundedness of the integers. We present a technique that allows us to overcome these difficulties. Our approach is based on transforming a program in a way that allows integrating and extending techniques originally developed for analysis of numerical computations in the framework of query-mapping pairs with the well-known framework of acceptability. Such an integration not only contributes to the understanding of the termination behaviour of numerical computations, but also allows us to perform a correct analysis of such computations automatically, thus extending previous work on a constraints-based approach to termination. In the last section of the paper we discuss possible extensions of the technique, including incorporating general term orderings.
Presented at WST2001
cs.PL
[ "cs.PL", "cs.LO", "D.1.6; D.2.4" ]
Acceptability with general orderings
http://arxiv.org/abs/cs/0106052v1
http://arxiv.org/abs/cs/0106052v1
http://arxiv.org/pdf/cs/0106052v1
2001-06-26
2001-06-26
[ "Danny De Schreye", "Alexander Serebrenik" ]
[ "", "" ]
We present a new approach to termination analysis of logic programs. The essence of the approach is that we make use of general orderings (instead of level mappings), as is done in transformational approaches to logic program termination analysis, but we apply these orderings directly to the logic program and not to the term-rewrite system obtained through some transformation. We define some variants of acceptability, based on general orderings, and show how they are equivalent to LD-termination. We develop a demand-driven, constraint-based approach to verify these acceptability variants. The advantage of the approach over standard acceptability is that in some cases, where complex level mappings are needed, fairly simple orderings may be easily generated. The advantage over transformational approaches is that it avoids the transformation step altogether. Keywords: termination analysis, acceptability, orderings.
To appear in "Computational Logic: From Logic Programming into the Future"
cs.PL
[ "cs.PL", "cs.LO", "D.1.6; D.2.4" ]
Classes of Terminating Logic Programs
http://arxiv.org/abs/cs/0106050v2
http://arxiv.org/abs/cs/0106050v2
http://arxiv.org/pdf/cs/0106050v2
2001-06-25
2002-07-22
[ "Dino Pedreschi", "Salvatore Ruggieri", "Jan-Georg Smaus" ]
[ "", "", "" ]
Termination of logic programs depends critically on the selection rule, i.e. the rule that determines which atom is selected in each resolution step. In this article, we classify programs (and queries) according to the selection rules for which they terminate. This is a survey and unified view on different approaches in the literature. For each class, we present a sufficient, for most classes even necessary, criterion for determining that a program is in that class. We study six classes: a program strongly terminates if it terminates for all selection rules; a program input terminates if it terminates for selection rules which only select atoms that are sufficiently instantiated in their input positions, so that these arguments do not get instantiated any further by the unification; a program local delay terminates if it terminates for local selection rules which only select atoms that are bounded w.r.t. an appropriate level mapping; a program left-terminates if it terminates for the usual left-to-right selection rule; a program exists-terminates if there exists a selection rule for which it terminates; finally, a program has bounded nondeterminism if it only has finitely many refutations. We propose a semantics-preserving transformation from programs with bounded nondeterminism into strongly terminating programs. Moreover, by unifying different formalisms and making appropriate assumptions, we are able to establish a formal hierarchy between the different classes.
50 pages. The following mistake was corrected: In figure 5, the first clause for insert was insert([],X,[X])
Theory and Practice of Logic Programming, 2(3), 369-418, 2002
cs.LO
[ "cs.LO", "cs.PL", "D.1.6; D.2.4; F.3.1" ]
Event Driven Computations for Relational Query Language
http://arxiv.org/abs/cs/0106026v1
http://arxiv.org/abs/cs/0106026v1
http://arxiv.org/pdf/cs/0106026v1
2001-06-12
2001-06-12
[ "Larissa Ismailova", "Konstantin Zinchenko", "Lioubouv Bourmistrova" ]
[ "", "", "" ]
This paper deals with an extended model of computations that uses parameterized families of entities for data objects; it gives a preliminary outline of this problem. Some topics are selected, briefly analyzed, and arranged to cover the general problem. The authors intend to discuss the particular topics, their interconnection, and their computational meaning as a panel proposal, so this paper is not yet to be evaluated as a closed journal paper. To save space, all technical and implementation features are left for a future paper. A data object is a schematic entity modelled by a partial function. The notion of type is extended by variable domains, which depend on events and types. A variable domain is built from potential and schematic individuals and generates the valid families of types depending on a sequence of events. Each valid type consists of the actual individuals that are actual relative to the event or script. When a type depends on a script, a corresponding view for data objects is attached; otherwise a snapshot is generated. The type thus determined gives an upper range for typed variables, so the local ranges are event-driven, resulting in families of actual individuals. The expressive power of the query language is extended using extensional and intentional relations.
Proceedings of the 1-st International Workshop on Computer Science and Information Technologies CSIT'99, Moscow, Russia, 1999. Vol.1, pp. 43--52
cs.LO
[ "cs.LO", "cs.DB", "cs.PL", "D.3.1; F.1; F.4.1" ]
Objects and their computational framework
http://arxiv.org/abs/cs/0106024v1
http://arxiv.org/abs/cs/0106024v1
http://arxiv.org/pdf/cs/0106024v1
2001-06-11
2001-06-11
[ "Viacheslav Wolfengagen" ]
[ "" ]
Most object notions are embedded into a logical domain, especially when dealing with database theory. Thus, their properties within a computational domain have not yet been studied properly. The main topic of this paper is to analyze different concepts of the distinct computational primitive frames in order to extract useful object properties and their possible advantages. Some important metaoperators are used to unify the approaches and to establish possible correspondences between them.
Proceedings of the 3-rd International Workshop on Advances in Databases and Information Systems, ADBIS'96, Moscow, September 10 --13, 1996, Vol. 1, pp. 66--74
cs.LO
[ "cs.LO", "cs.DB", "cs.PL", "D.3.1; F.1; F.4.1; D.1.1; H.2.1" ]
Object-oriented tools for advanced applications
http://arxiv.org/abs/cs/0106023v1
http://arxiv.org/abs/cs/0106023v1
http://arxiv.org/pdf/cs/0106023v1
2001-06-11
2001-06-11
[ "Larissa Ismailova", "Konstantin Zinchenko" ]
[ "", "" ]
This paper contains a brief discussion of the Application Development Environment (ADE), which is used to build database applications involving a graphical user interface (GUI). ADE computing separates the database access from the user interface. A variety of applications may be generated that communicate with different and distinct desktop databases. The advanced techniques allow retrieval and calling of remote or stored procedures.
Proceedings of the 3-rd International Workshop on Advances in Databases and Information Systems, ADBIS'96, Moscow, September 10 --13, 1996, Vol. 2, pp. 27--31
cs.LO
[ "cs.LO", "cs.DB", "cs.PL", "D.3.1; F.1; F.4.1" ]
Object-oriented solutions
http://arxiv.org/abs/cs/0106021v1
http://arxiv.org/abs/cs/0106021v1
http://arxiv.org/pdf/cs/0106021v1
2001-06-11
2001-06-11
[ "Viacheslav Wolfengagen" ]
[ "" ]
This paper briefly outlines the motivations, the mathematical ideas in use, pre-formalization and assumptions, the object-as-functor construction, `soft' types and concept constructions, a case study for concepts based on variable domains, the extraction of a computational background, and examples of evaluations.
Proceedings of the 2-nd International Workshop on Advances in Databases and Information Systems, ADBIS'95, Moscow, June 27 --30, 1995, Vol. 1, pp. 235--246
cs.LO
[ "cs.LO", "cs.DB", "cs.PL", "D.3.1; F.1; F.4.1; D.1.1; H.2.1" ]
Building the access pointers to a computation environment
http://arxiv.org/abs/cs/0106018v1
http://arxiv.org/abs/cs/0106018v1
http://arxiv.org/pdf/cs/0106018v1
2001-06-10
2001-06-10
[ "Viacheslav Wolfengagen" ]
[ "" ]
A common object technique equipped with the categorical and computational styles is briefly outlined. An object is evaluated by embedding it in a host computational environment, which is a domain-ranged structure. An embedded object is accessed by pointers generated within the host system. To ease extracting the result of the evaluation, a pre-embedded object is generated. It is observed as a decomposition into a substitutional part and an access-function part, which are generated during the object evaluation.
Proceedings of the 1st East-European Symposium on Advances in Databases and Information Systems, ADBIS'97, St.-Petersburg, September 2--5, 1997, Russia. Vol. 1, pp. 117--122
cs.LO
[ "cs.LO", "cs.PL", "D.3.1; F.1; F.4.1" ]
An object evaluator to generate flexible applications
http://arxiv.org/abs/cs/0106017v1
http://arxiv.org/abs/cs/0106017v1
http://arxiv.org/pdf/cs/0106017v1
2001-06-10
2001-06-10
[ "Larissa Ismailova", "Konstantin Zinchenko" ]
[ "", "" ]
This paper contains a brief discussion of an object evaluator which is based on principles of evaluations in a category. The main tool system, referred to as the Application Development Environment (ADE), is used to build database applications involving a graphical user interface (GUI). The separation of database access and the user interface is achieved by distinguishing potential and actual objects. A variety of applications may be generated that communicate with different and distinct desktop databases. The commutative-diagram technique allows retrieval and calling of delayed procedures.
Proceedings of the 1-st East-European Symposium on Advances in Databases and Information Systems, ADBIS'97, St.-Petersburg, September 2--5, 1997, Vol. 1, pp. 141--148
cs.LO
[ "cs.LO", "cs.PL", "H.1; H.2" ]
File mapping Rule-based DBMS and Natural Language Processing
http://arxiv.org/abs/cs/0106016v1
http://arxiv.org/abs/cs/0106016v1
http://arxiv.org/pdf/cs/0106016v1
2001-06-10
2001-06-10
[ "Vjacheslav M. Novikov" ]
[ "" ]
This paper describes a system for the storage, extraction, and processing of information structured similarly to natural language. For recursive inference the system uses rules having the same representation as the data. The information storage environment is provided by the File Mapping (SHM) mechanism of the operating system. The paper states the main principles of constructing the dynamic data structure and the language for recording the inference rules, considers the features of the available implementation, and describes the application realizing semantic information retrieval in natural language.
17 pages, 3 figures
cs.CL
[ "cs.CL", "cs.AI", "cs.DB", "cs.IR", "cs.LG", "cs.PL", "D.3.2; H.2.4" ]
The Set of Equations to Evaluate Objects
http://arxiv.org/abs/cs/0106013v1
http://arxiv.org/abs/cs/0106013v1
http://arxiv.org/pdf/cs/0106013v1
2001-06-08
2001-06-08
[ "Larissa Ismailova" ]
[ "" ]
The notion of an equational shell is studied to involve the objects and their environment. Appropriate methods are studied as valid embeddings of refined objects. The refinement process determines the linkages between the variety of possible representations giving rise to variants of computations. The case study is equipped with the adjusted equational systems that validate the initial applicative framework.
5 pages
Proceedings of the 3-rd International Workshop on Computer Science and Information Technologies CSIT'2001, Ufa, Yangantau, Russia
cs.LO
[ "cs.LO", "cs.PL", "cs.SC", "D.3.1; D.3.2" ]
Computing Functional and Relational Box Consistency by Structured Propagation in Atomic Constraint Systems
http://arxiv.org/abs/cs/0106008v1
http://arxiv.org/abs/cs/0106008v1
http://arxiv.org/pdf/cs/0106008v1
2001-06-07
2001-06-07
[ "M. H. van Emden" ]
[ "" ]
Box consistency has been observed to yield exponentially better performance than chaotic constraint propagation in the interval constraint system obtained by decomposing the original expression into primitive constraints. The claim was made that the improvement is due to avoiding decomposition. In this paper we argue that the improvement is due to replacing chaotic iteration by a more structured alternative. To this end we distinguish the existing notion of box consistency from relational box consistency. We show that from a computational point of view it is important to maintain the functional structure in constraint systems that are associated with a system of equations. So far, it has only been considered computationally important that constraint propagation be fair. With the additional structure of functional constraint systems, one can define and implement computationally effective, structured, truncated constraint propagations. The existing algorithm for box consistency is one such. Our results suggest that there are others worth investigating.
Presented at the Sixth Annual Workshop of the ERCIM Working Group on Constraints. 12 pages
cs.PL
[ "cs.PL", "cs.AI", "D.3.2; D.3.3; F.4.1" ]
Soft Scheduling
http://arxiv.org/abs/cs/0106004v1
http://arxiv.org/abs/cs/0106004v1
http://arxiv.org/pdf/cs/0106004v1
2001-06-02
2001-06-02
[ "Hana Rudova" ]
[ "" ]
Classical notions of disjunctive and cumulative scheduling are studied from the point of view of soft constraint satisfaction. Soft disjunctive scheduling is introduced as an instance of soft CSP, and the preferences included in this problem are applied to generate a lower bound based on an existing discrete capacity resource. Timetabling problems at Purdue University and at the Faculty of Informatics of Masaryk University, considering individual course requirements of students, demonstrate practical problems which are solved via the proposed methods. Implementation of a general preference constraint solver is discussed and first computational results for the timetabling problem are presented.
10 pages; accepted to the Sixth Annual Workshop of the ERCIM Working Group on Constraints
cs.AI
[ "cs.AI", "cs.PL", "I.2.8; F.4.1; I.6.5" ]
Constraint Propagation in Presence of Arrays
http://arxiv.org/abs/cs/0105024v1
http://arxiv.org/abs/cs/0105024v1
http://arxiv.org/pdf/cs/0105024v1
2001-05-14
2001-05-14
[ "Sebastian Brand" ]
[ "" ]
We describe the use of array expressions as constraints, which represents a natural generalisation of the "element" constraint. Constraint propagation for array constraints is studied theoretically, and for a set of domain reduction rules the local consistency they enforce, arc consistency, is proved. An efficient algorithm is described that encapsulates the rule set and so inherits from the rules the capability to enforce arc consistency.
10 pages. Accepted at the 6th Annual Workshop of the ERCIM Working Group on Constraints, 2001
cs.PL
[ "cs.PL", "cs.DS", "D.3.3; E.1" ]
A Logical Framework for Convergent Infinite Computations
http://arxiv.org/abs/cs/0105020v3
http://arxiv.org/abs/cs/0105020v3
http://arxiv.org/pdf/cs/0105020v3
2001-05-10
2002-02-07
[ "Wei Li", "Shilong Ma", "Yuefei Sui", "Ke Xu" ]
[ "", "", "", "" ]
Classical computations cannot capture the essence of infinite computations very well. This paper will focus on a class of infinite computations called convergent infinite computations. A logic for convergent infinite computations is proposed by extending first-order theories using Cauchy sequences; it has stronger expressive power than first-order logic. A class of fixed points characterizing the logical properties of the limits can be represented by means of infinite-length terms defined by Cauchy sequences. We will show that the limit of a sequence of first-order theories can be defined in terms of distance, similar to the $\epsilon$-$N$ style definition of limits in real analysis. On the basis of infinitary terms, a computation model for convergent infinite computations is proposed. Finally, the interpretations of logic programs are extended by introducing real Herbrand models of logic programs, and a sufficient condition is given for computing a real Herbrand model of Horn logic programs using convergent infinite computation.
17 pages. Welcome any comments to [email protected]
cs.LO
[ "cs.LO", "cs.PL", "F.4.1, D.1.6" ]
The alldifferent Constraint: A Survey
http://arxiv.org/abs/cs/0105015v1
http://arxiv.org/abs/cs/0105015v1
http://arxiv.org/pdf/cs/0105015v1
2001-05-08
2001-05-08
[ "W. J. van Hoeve" ]
[ "" ]
The constraint of difference has been known to the constraint programming community since Lauriere introduced Alice in 1978. Since then, several solving strategies have been designed for this constraint. In this paper we give both a practical overview and an abstract comparison of these different strategies.
12 pages, 3 figures, paper accepted at the 6th Annual workshop of the ERCIM Working Group on Constraints
cs.PL
[ "cs.PL", "cs.AI", "D.3.3" ]
Component Programming and Interoperability in Constraint Solver Design
http://arxiv.org/abs/cs/0105011v1
http://arxiv.org/abs/cs/0105011v1
http://arxiv.org/pdf/cs/0105011v1
2001-05-07
2001-05-07
[ "Frederic Goualard" ]
[ "" ]
Prolog was once the main host for implementing constraint solvers. It seems that it is no longer so. To be useful, constraint solvers have to be integrable into industrial applications written in imperative or object-oriented languages; to be efficient, they have to interact with other solvers. To meet these requirements, many solvers are now implemented in the form of extensible object-oriented libraries. Following Pfister and Szyperski, we argue that ``objects are not enough,'' and we propose to design solvers as component-oriented libraries. We illustrate our approach by the description of the architecture of a prototype, and we assess its strong points and weaknesses.
11 pages, 1 figure, paper accepted at the 6th Annual workshop of the ERCIM Working Group on Constraints
cs.PL
[ "cs.PL", "D.3.3; D.2" ]
Reverse Engineering from Assembler to Formal Specifications via Program Transformations
http://arxiv.org/abs/cs/0105006v1
http://arxiv.org/abs/cs/0105006v1
http://arxiv.org/pdf/cs/0105006v1
2001-05-04
2001-05-04
[ "M. P. Ward" ]
[ "" ]
The FermaT transformation system, based on research carried out over the last sixteen years at Durham University, De Montfort University and Software Migrations Ltd., is an industrial-strength formal transformation engine with many applications in program comprehension and language migration. This paper is a case study which uses automated plus manually-directed transformations and abstractions to convert an IBM 370 Assembler code program into a very high-level abstract specification.
10 pages
7th Working Conference on Reverse Engineering 2000, 23--25 Nov 2000, Brisbane, Queensland, Australia. IEEE Computer Society
10.1109/WCRE.2000.891448
cs.SE
[ "cs.SE", "cs.PL", "D.2.7;D.3.2" ]
Chain Programs for Writing Deterministic Metainterpreters
http://arxiv.org/abs/cs/0104003v1
http://arxiv.org/abs/cs/0104003v1
http://arxiv.org/pdf/cs/0104003v1
2001-04-02
2001-04-02
[ "David A. Rosenblueth" ]
[ "" ]
Many metainterpreters found in the logic programming literature are nondeterministic in the sense that the selection of program clauses is not determined. Examples are the familiar "demo" and "vanilla" metainterpreters. For some applications this nondeterminism is convenient. In some cases, however, a deterministic metainterpreter, having an explicit selection of clauses, is needed. Such cases include (1) conversion of OR parallelism into AND parallelism for "committed-choice" processors, (2) logic-based, imperative-language implementation of search strategies, and (3) simulation of bounded-resource reasoning. Deterministic metainterpreters are difficult to write because the programmer must be concerned about the set of unifiers of the children of a node in the derivation tree. We argue that it is both possible and advantageous to write these metainterpreters by reasoning in terms of object programs converted into a syntactically restricted form that we call "chain" form, where we can forget about unification, except for unit clauses. We give two transformations converting logic programs into chain form, one for "moded" programs (implicit in two existing exhaustive-traversal methods for committed-choice execution), and one for arbitrary definite programs. As illustrations of our approach we show examples of the three applications mentioned above.
30 pages. To appear in the journal "Theory and Practice of Logic Programming"
cs.LO
[ "cs.LO", "cs.PL", "D.1.6; F.3.2; D.3.2; D.3.4" ]
Toward an architecture for quantum programming
http://arxiv.org/abs/cs/0103009v3
http://arxiv.org/abs/cs/0103009v3
http://arxiv.org/pdf/cs/0103009v3
2001-03-08
2003-03-27
[ "S. Bettelli", "L. Serafini", "T. Calarco" ]
[ "", "", "" ]
It is becoming increasingly clear that, if a useful device for quantum computation will ever be built, it will be embodied by a classical computing machine with control over a truly quantum subsystem, this apparatus performing a mixture of classical and quantum computation. This paper investigates a possible approach to the problem of programming such machines: a template high level quantum language is presented which complements a generic general purpose classical language with a set of quantum primitives. The underlying scheme involves a run-time environment which calculates the byte-code for the quantum operations and pipes it to a quantum device controller or to a simulator. This language can compactly express existing quantum algorithms and reduce them to sequences of elementary operations; it also easily lends itself to automatic, hardware independent, circuit simplification. A publicly available preliminary implementation of the proposed ideas has been realized using the C++ language.
23 pages, 5 figures, A4paper. Final version accepted by EJPD ("swap" replaced by "invert" for Qops). Preliminary implementation available at: http://sra.itc.it/people/serafini/quantum-computing/qlang.html
Eur. Phys. J. D, Vol. 25, No. 2, pp. 181-200 (2003)
10.1140/epjd/e2003-00242-2
cs.PL
[ "cs.PL", "quant-ph", "D.3.1" ]
The Limits of Horn Logic Programs
http://arxiv.org/abs/cs/0103008v3
http://arxiv.org/abs/cs/0103008v3
http://arxiv.org/pdf/cs/0103008v3
2001-03-08
2002-02-07
[ "Shilong Ma", "Yuefei Sui", "Ke Xu" ]
[ "", "", "" ]
Given a sequence $\{\Pi_n\}$ of Horn logic programs, the limit $\Pi$ of $\{\Pi_n\}$ is the set of the clauses such that every clause in $\Pi$ belongs to almost every $\Pi_n$ and every clause in infinitely many $\Pi_n$'s belongs to $\Pi$ also. The limit program $\Pi$ is still Horn but may be infinite. In this paper, we consider if the least Herbrand model of the limit of a given Horn logic program sequence $\{\Pi_n\}$ equals the limit of the least Herbrand models of each logic program $\Pi_n$. It is proved that this property is not true in general but holds if Horn logic programs satisfy an assumption which can be syntactically checked and be satisfied by a class of Horn logic programs. Thus, under this assumption we can approach the least Herbrand model of the limit $\Pi$ by the sequence of the least Herbrand models of each finite program $\Pi_n$. We also prove that if a finite Horn logic program satisfies this assumption, then the least Herbrand model of this program is recursive. Finally, by use of the concept of stability from dynamical systems, we prove that this assumption is exactly a sufficient condition to guarantee the stability of fixed points for Horn logic programs.
11 pages, added new results. Welcome any comments to [email protected]
In P. J. Stuckey (Ed.): Proc. of 18th ICLP (short paper), LNCS 2401, p. 467, Denmark, 2002.
cs.LO
[ "cs.LO", "cs.PL", "D.1.6; F.3.2" ]
Soundness, Idempotence and Commutativity of Set-Sharing
http://arxiv.org/abs/cs/0102030v1
http://arxiv.org/abs/cs/0102030v1
http://arxiv.org/pdf/cs/0102030v1
2001-02-27
2001-02-27
[ "Patricia M. Hill", "Roberto Bagnara", "Enea Zaffanella" ]
[ "", "", "" ]
It is important that practical data-flow analyzers are backed by reliably proven theoretical results. Abstract interpretation provides a sound mathematical framework and necessary generic properties for an abstract domain to be well-defined and sound with respect to the concrete semantics. In logic programming, the abstract domain Sharing is a standard choice for sharing analysis for both practical work and further theoretical study. In spite of this, we found that there were no satisfactory proofs for the key properties of commutativity and idempotence that are essential for Sharing to be well-defined and that published statements of the soundness of Sharing assume the occurs-check. This paper provides a generalization of the abstraction function for Sharing that can be applied to any language, with or without the occurs-check. Results for soundness, idempotence and commutativity for abstract unification using this abstraction function are proven.
48 pages
cs.PL
[ "cs.PL", "F.3.2" ]
An Effective Fixpoint Semantics for Linear Logic Programs
http://arxiv.org/abs/cs/0102025v2
http://arxiv.org/abs/cs/0102025v2
http://arxiv.org/pdf/cs/0102025v2
2001-02-23
2001-03-23
[ "Marco Bozzano", "Giorgio Delzanno", "Maurizio Martelli" ]
[ "", "", "" ]
In this paper we investigate the theoretical foundation of a new bottom-up semantics for linear logic programs, and more precisely for the fragment of LinLog that consists of the language LO enriched with the constant 1. We use constraints to symbolically and finitely represent possibly infinite collections of provable goals. We define a fixpoint semantics based on a new operator in the style of Tp working over constraints. An application of the fixpoint operator can be computed algorithmically. As sufficient conditions for termination, we show that the fixpoint computation is guaranteed to converge for propositional LO. To our knowledge, this is the first attempt to define an effective fixpoint semantics for linear logic programs. As an application of our framework, we also present a formal investigation of the relations between LO and Disjunctive Logic Programming. Using an approach based on abstract interpretation, we show that DLP fixpoint semantics can be viewed as an abstraction of our semantics for LO. We prove that the resulting abstraction is correct and complete for an interesting class of LO programs encoding Petri Nets.
39 pages, 5 figures. To appear in Theory and Practice of Logic Programming
cs.PL
[ "cs.PL", "D.3.1;F.3.1;F.3.2" ]
Decomposing Non-Redundant Sharing by Complementation
http://arxiv.org/abs/cs/0101025v1
http://arxiv.org/abs/cs/0101025v1
http://arxiv.org/pdf/cs/0101025v1
2001-01-23
2001-01-23
[ "Enea Zaffanella", "Patricia M. Hill", "Roberto Bagnara" ]
[ "", "", "" ]
Complementation, the inverse of the reduced product operation, is a technique for systematically finding minimal decompositions of abstract domains. Filé and Ranzato advanced the state of the art by introducing a simple method for computing a complement. As an application, they considered the extraction by complementation of the pair-sharing domain PS from the Jacobs and Langen's set-sharing domain SH. However, since the result of this operation was still SH, they concluded that PS was too abstract for this. Here, we show that the source of this result lies not with PS but with SH and, more precisely, with the redundant information contained in SH with respect to ground-dependencies and pair-sharing. In fact, a proper decomposition is obtained if the non-redundant version of SH, PSD, is substituted for SH. To establish the results for PSD, we define a general schema for subdomains of SH that includes PSD and Def as special cases. This sheds new light on the structure of PSD and exposes a natural though unexpected connection between Def and PSD. Moreover, we substantiate the claim that complementation alone is not sufficient to obtain truly minimal decompositions of domains. The right solution to this problem is to first remove redundancies by computing the quotient of the domain with respect to the observable behavior, and only then decompose it by complementation.
To appear on Theory and Practice of Logic Programming. 30 pages, 4 figures
cs.PL
[ "cs.PL", "F.3.2" ]
Properties of Input-Consuming Derivations
http://arxiv.org/abs/cs/0101023v1
http://arxiv.org/abs/cs/0101023v1
http://arxiv.org/pdf/cs/0101023v1
2001-01-23
2001-01-23
[ "Annalisa Bossi", "Sandro Etalle", "Sabina Rossi" ]
[ "", "", "" ]
We study the properties of input-consuming derivations of moded logic programs. Input-consuming derivations can be used to model the behavior of logic programs using dynamic scheduling and employing constructs such as delay declarations. We consider the class of nicely-moded programs and queries. We show that for these programs a weak version of the well-known switching lemma holds also for input-consuming derivations. Furthermore, we show that, under suitable conditions, there exists an algebraic characterization of termination of input-consuming derivations.
33 pages
cs.PL
[ "cs.PL", "cs.LO", "D.1.6;D.3.1;F.3.2" ]
Semantics and Termination of Simply-Moded Logic Programs with Dynamic Scheduling
http://arxiv.org/abs/cs/0101022v1
http://arxiv.org/abs/cs/0101022v1
http://arxiv.org/pdf/cs/0101022v1
2001-01-23
2001-01-23
[ "Annalisa Bossi", "Sandro Etalle", "Sabina Rossi", "Jan-Georg Smaus" ]
[ "", "", "", "" ]
In logic programming, dynamic scheduling refers to a situation where the selection of the atom in each resolution (computation) step is determined at runtime, as opposed to a fixed selection rule such as the left-to-right one of Prolog. This has applications e.g. in parallel programming. A mechanism to control dynamic scheduling is provided in existing languages in the form of delay declarations. Input-consuming derivations were introduced to describe dynamic scheduling while abstracting from the technical details. In this paper, we first formalise the relationship between delay declarations and input-consuming derivations, showing in many cases a one-to-one correspondence. Then, we define a model-theoretic semantics for input-consuming derivations of simply-moded programs. Finally, for this class of programs, we provide a necessary and sufficient criterion for termination.
25 pages, long version of paper with same title at ESOP 2001
cs.LO
[ "cs.LO", "cs.PL", "D.1.3; D.1.6; F.3.2" ]
Generation of and Debugging with Logical Pre and Postconditions
http://arxiv.org/abs/cs/0101009v1
http://arxiv.org/abs/cs/0101009v1
http://arxiv.org/pdf/cs/0101009v1
2001-01-12
2001-01-12
[ "Angel Herranz-Nieva", "Juan Jose Moreno Navarro" ]
[ "", "" ]
This paper shows the debugging facilities provided by the SLAM system. The SLAM system includes i) a specification language that integrates algebraic specifications and model-based specifications using the object oriented model. Class operations are defined by using rules each of them with logical pre and postconditions but with a functional flavour. ii) A development environment that, among other features, is able to generate readable code in a high level object oriented language. iii) The generated code includes (part of) the pre and postconditions as assertions, that can be automatically checked in the debug mode execution of programs. We focus on this last aspect. The SLAM language is expressive enough to describe many useful properties and these properties are translated into a Prolog program that is linked (via an adequate interface) with the user program. The debugging execution of the program interacts with the Prolog engine which is responsible for checking properties.
In M. Ducasse (ed), proceedings of the Fourth International Workshop on Automated Debugging (AADEBUG 2000), August 2000, Munich. cs.SE/0010035
cs.PL
[ "cs.PL", "cs.SE", "D.2.5, F.3.1" ]
A Knowledge-based Automated Debugger in Learning System
http://arxiv.org/abs/cs/0101008v1
http://arxiv.org/abs/cs/0101008v1
http://arxiv.org/pdf/cs/0101008v1
2001-01-12
2001-01-12
[ "Abdullah Mohd Zin", "Syed Ahmad Aljunid", "Zarina Shukur", "Mohd Jan Nordin" ]
[ "", "", "", "" ]
Currently, programming instructors continually face the problem of helping to debug students' programs. Although there currently exist a number of debuggers and debugging tools in various platforms, most of these projects or products are crafted through the needs of software maintenance, and not through the perspective of teaching of programming. Moreover, most debuggers are too general, meant for experts as well as not user-friendly. We propose a new knowledge-based automated debugger to be used as a user-friendly tool by the students to self-debug their own programs. Stereotyped code (cliches) and bug cliches will be stored as a library of plans in the knowledge-base. Recognition of correct code or bugs is based on pattern matching and constraint satisfaction. Given a syntax error-free program and its specification, this debugger called Adil (Automated Debugger in Learning system) will be able to locate, pinpoint and explain logical errors of programs. If there are no errors, it will be able to explain the meaning of the program. Adil is based on the design of the Conceiver, an automated program understanding system developed at Universiti Kebangsaan Malaysia.
In M. Ducasse (ed), proceedings of the Fourth International Workshop on Automated Debugging (AADEBUG 2000), August 2000, Munich. cs.SE/0010035
cs.SE
[ "cs.SE", "cs.PL", "D.2.5" ]
Assertion checker for the C programming language based on computations over event traces
http://arxiv.org/abs/cs/0101007v1
http://arxiv.org/abs/cs/0101007v1
http://arxiv.org/pdf/cs/0101007v1
2001-01-12
2001-01-12
[ "Mikhail Auguston" ]
[ "" ]
This paper suggests an approach to the development of software testing and debugging automation tools based on precise program behavior models. The program behavior model is defined as a set of events (event trace) with two basic binary relations over events -- precedence and inclusion, and represents the temporal relationship between actions. A language for the computations over event traces is developed that provides a basis for assertion checking, debugging queries, execution profiles, and performance measurements. The approach is nondestructive, since assertion texts are separated from the target program source code and can be maintained independently. Assertions can capture the dynamic properties of a particular target program and can formalize the general knowledge of typical bugs and debugging strategies. An event grammar provides a sound basis for assertion language implementation via target program automatic instrumentation. An implementation architecture and preliminary experiments with a prototype assertion checker for the C programming language are discussed.
Proceedings of AADEBUG 2000 Fourth International Workshop on Automated Debugging Munich, Germany, 28-30 August 2000
cs.SE
[ "cs.SE", "cs.PL", "D.2.5" ]
Slicing Event Traces of Large Software Systems
http://arxiv.org/abs/cs/0101005v1
http://arxiv.org/abs/cs/0101005v1
http://arxiv.org/pdf/cs/0101005v1
2001-01-11
2001-01-11
[ "Raymond Smith", "Bogdan Korel" ]
[ "", "" ]
Debugging of large software systems consisting of many processes accessing shared resources is a very difficult task. Many commercial systems record essential events during system execution for post-mortem analysis. However, the event traces of large and long-running systems can be quite voluminous. Analysis of such event traces to identify sources of incorrect behavior can be very tedious, error-prone, and inefficient. In this paper, we propose a novel technique of slicing event traces as a means of reducing the number of events for analysis. This technique identifies events that may have influenced observed incorrect system behavior. In order to recognize influencing events several types of dependencies between events are identified. These dependencies are determined automatically from an event trace. In order to improve the precision of slicing we propose to use additional dependencies, referred to as cause-effect dependencies, which can further reduce the size of sliced event traces. Our initial experience has shown that this slicing technique can significantly reduce the size of event traces for analysis.
In M. Ducasse (ed), proceedings of the Fourth International Workshop on Automated Debugging (AADEBUG 2000), August 2000, Munich. cs.SE/0010035
cs.SE
[ "cs.SE", "cs.PL", "D.2.5" ]
Automated Debugging In Java Using OCL And JDI
http://arxiv.org/abs/cs/0101002v1
http://arxiv.org/abs/cs/0101002v1
http://arxiv.org/pdf/cs/0101002v1
2001-01-03
2001-01-03
[ "David J. Murray", "Dale E. Parson" ]
[ "", "" ]
Correctness constraints provide a foundation for automated debugging within object-oriented systems. This paper discusses a new approach to incorporating correctness constraints into Java development environments. Our approach uses the Object Constraint Language ("OCL") as a specification language and the Java Debug Interface ("JDI") as a verification API. OCL provides a standard language for expressing object-oriented constraints that can integrate with Unified Modeling Language ("UML") software models. JDI provides a standard Java API capable of supporting type-safe and side effect free runtime constraint evaluation. The resulting correctness constraint mechanism: (1) entails no programming language modifications; (2) requires neither access nor changes to existing source code; and (3) works with standard off-the-shelf Java virtual machines ("VMs"). A prototype correctness constraint auditor is presented to demonstrate the utility of this mechanism for purposes of automated debugging.
In M. Ducasse (ed), proceedings of the Fourth International Workshop on Automated Debugging (AADEBUG 2000), August 2000, Munich. See cs.SE/0010035
cs.SE
[ "cs.SE", "cs.PL", "D.2.5" ]
A brief overview of the MAD debugging activities
http://arxiv.org/abs/cs/0012012v1
http://arxiv.org/abs/cs/0012012v1
http://arxiv.org/pdf/cs/0012012v1
2000-12-16
2000-12-16
[ "Dieter Kranzlmueller", "Christian Schaubschlaeger", "Jens Volkert" ]
[ "", "", "" ]
Debugging parallel and distributed programs is a difficult activity due to the multiplicity of sequential bugs, the existence of malign effects like race conditions and deadlocks, and the huge amounts of data that have to be processed. These problems are addressed by the Monitoring And Debugging environment MAD, which offers debugging functionality based on a graphical representation of a program's execution. The target applications of MAD are parallel programs applying the standard Message-Passing Interface MPI, which is used extensively in the high-performance computing domain. The highlights of MAD are interactive inspection mechanisms including visualization of distributed arrays, the possibility to graphically place breakpoints, a mechanism for monitor overhead removal, and the evaluation of racing messages occurring due to nondeterminism in the code.
In M. Ducasse (ed), proceedings of the Fourth International Workshop on Automated Debugging (AADEBUG 2000), August 2000, Munich. cs.SE/0010035 (6 pages, 2 figures)
cs.SE
[ "cs.SE", "cs.PL", "D.2.5" ]
A General Framework for Automatic Termination Analysis of Logic Programs
http://arxiv.org/abs/cs/0012008v1
http://arxiv.org/abs/cs/0012008v1
http://arxiv.org/pdf/cs/0012008v1
2000-12-13
2000-12-13
[ "Nachum Dershowitz", "Naomi Lindenstrauss", "Yehoshua Sagiv", "Alexander Serebrenik" ]
[ "", "", "", "" ]
This paper describes a general framework for automatic termination analysis of logic programs, where we understand by ``termination'' the finiteness of the LD-tree constructed for the program and a given query. A general property of mappings from a certain subset of the branches of an infinite LD-tree into a finite set is proved. From this result several termination theorems are derived, by using different finite sets. The first two are formulated for the predicate dependency and atom dependency graphs. Then a general result for the case of the query-mapping pairs relevant to a program is proved (cf. \cite{Sagiv,Lindenstrauss:Sagiv}). The correctness of the {\em TermiLog} system described in \cite{Lindenstrauss:Sagiv:Serebrenik} follows from it. In this system it is not possible to prove termination for programs involving arithmetic predicates, since the usual order for the integers is not well-founded. A new method, which can be easily incorporated in {\em TermiLog} or similar systems, is presented, which makes it possible to prove termination for programs involving arithmetic predicates. It is based on combining a finite abstraction of the integers with the technique of the query-mapping pairs, and is essentially capable of dividing a termination proof into several cases, such that a simple termination function suffices for each case. Finally several possible extensions are outlined.
Applicable Algebra in Engineering, Communication and Computing, vol. 12, no. 1/2, pp. 117-156, 2001
cs.PL
[ "cs.PL", "D.1.6" ]
Kima - an Automated Error Correction System for Concurrent Logic Programs
http://arxiv.org/abs/cs/0012007v3
http://arxiv.org/abs/cs/0012007v3
http://arxiv.org/pdf/cs/0012007v3
2000-12-13
2001-01-05
[ "Yasuhiro Ajiro", "Kazunori Ueda" ]
[ "", "" ]
We have implemented Kima, an automated error correction system for concurrent logic programs. Kima corrects near-misses such as wrong variable occurrences in the absence of explicit declarations of program properties. Strong moding/typing and constraint-based analysis are turning to play fundamental roles in debugging concurrent logic programs as well as in establishing the consistency of communication protocols and data types. Mode/type analysis of Moded Flat GHC is a constraint satisfaction problem with many simple mode/type constraints, and can be solved efficiently. We proposed a simple and efficient technique which, given a non-well-moded/typed program, diagnoses the ``reasons'' of inconsistency by finding minimal inconsistent subsets of mode/type constraints. Since each constraint keeps track of the symbol occurrence in the program, a minimal subset also tells possible sources of program errors. Kima realizes automated correction by replacing symbol occurrences around the possible sources and recalculating modes and types of the rewritten programs systematically. As long as bugs are near-misses, Kima proposes a rather small number of alternatives that include an intended program.
In M. Ducasse (ed), proceedings of the Fourth International Workshop on Automated Debugging (AADEBUG 2000), August 2000, Munich. cs.SE/0010035
cs.SE
[ "cs.SE", "cs.PL", "D.2.5" ]
Support for Debugging Automatically Parallelized Programs
http://arxiv.org/abs/cs/0012006v1
http://arxiv.org/abs/cs/0012006v1
http://arxiv.org/pdf/cs/0012006v1
2000-12-11
2000-12-11
[ "Robert Hood", "Gabriele Jost" ]
[ "", "" ]
We describe a system that simplifies the process of debugging programs produced by computer-aided parallelization tools. The system uses relative debugging techniques to compare serial and parallel executions in order to show where the computations begin to differ. If the original serial code is correct, errors due to parallelization will be isolated by the comparison. One of the primary goals of the system is to minimize the effort required of the user. To that end, the debugging system uses information produced by the parallelization tool to drive the comparison process. In particular, the debugging system relies on the parallelization tool to provide information about where variables may have been modified and how arrays are distributed across multiple processes. User effort is also reduced through the use of dynamic instrumentation. This allows us to modify the program execution without changing the way the user builds the executable. The use of dynamic instrumentation also permits us to compare the executions in a fine-grained fashion and only involve the debugger when a difference has been detected. This reduces the overhead of executing instrumentation.
In M. Ducasse (ed), proceedings of the Fourth International Workshop on Automated Debugging (AADEBUG 2000), August 2000, Munich. cs.SE/0010035
cs.SE
[ "cs.SE", "cs.PL", "D.2.5" ]
Value Withdrawal Explanation in CSP
http://arxiv.org/abs/cs/0012005v1
http://arxiv.org/abs/cs/0012005v1
http://arxiv.org/pdf/cs/0012005v1
2000-12-11
2000-12-11
[ "Gerard Ferrand", "Willy Lesaint", "Alexandre Tessier" ]
[ "", "", "" ]
This work is devoted to constraint solving motivated by the debugging of constraint logic programs a la GNU-Prolog. The paper focuses only on the constraints. In this framework, constraint solving amounts to domain reduction. A computation is formalized by a chaotic iteration. The computed result is described as a closure. This model is well suited to the design of debugging notions and tools, for example failure explanations or error diagnosis. In this paper we detail an application of the model to an explanation of a value withdrawal in a domain. Some other works have already shown the interest of such a notion of explanation not only for failure analysis.
In M. Ducasse (ed), proceedings of the Fourth International Workshop on Automated Debugging (AADEBUG 2000), August 2000, Munich. cs.SE/0010035
cs.SE
[ "cs.SE", "cs.PL", "D.2.5" ]
Rewriting Calculus: Foundations and Applications
http://arxiv.org/abs/cs/0011043v1
http://arxiv.org/abs/cs/0011043v1
http://arxiv.org/pdf/cs/0011043v1
2000-11-28
2000-11-28
[ "Horatiu Cirstea" ]
[ "" ]
This thesis is devoted to the study of a calculus that describes the application of conditional rewriting rules and the obtained results at the same level of representation. We introduce the rewriting calculus, also called the rho-calculus, which generalizes first order term rewriting and the lambda-calculus, and makes it possible to represent non-determinism. In our approach the abstraction operator as well as the application operator are objects of the calculus. The result of a reduction in the rewriting calculus is either an empty set representing the application failure, or a singleton representing a deterministic result, or a set having several elements representing a non-deterministic choice of results. In this thesis we concentrate on the properties of the rewriting calculus where a syntactic matching is used in order to bind the variables to their current values. We define evaluation strategies ensuring the confluence of the calculus and we show that these strategies become trivial for restrictions of the general rewriting calculus to simpler calculi like the lambda-calculus. The rewriting calculus is not terminating in the untyped case but strong normalization is obtained for the simply typed calculus. In the rewriting calculus extended with an operator allowing to test the application failure we define terms representing innermost and outermost normalizations with respect to a set of rewriting rules. By using these terms, we obtain a natural and concise description of conditional rewriting. Finally, starting from the representation of the conditional rewriting rules, we show how the rewriting calculus can be used to give a semantics to ELAN, a language based on the application of rewriting rules controlled by strategies.
PhD Thesis in French
cs.SC
[ "cs.SC", "cs.LO", "cs.PL", "I.1; D.1; D.3; F.4.0; F.4.1" ]
Automatic Termination Analysis of Programs Containing Arithmetic Predicates
http://arxiv.org/abs/cs/0011036v1
http://arxiv.org/abs/cs/0011036v1
http://arxiv.org/pdf/cs/0011036v1
2000-11-23
2000-11-23
[ "Nachum Dershowitz", "Naomi Lindenstrauss", "Yehoshua Sagiv", "Alexander Serebrenik" ]
[ "", "", "", "" ]
For logic programs with arithmetic predicates, showing termination is not easy, since the usual order for the integers is not well-founded. A new method, easily incorporated in the TermiLog system for automatic termination analysis, is presented for showing termination in this case. The method consists of the following steps: First, a finite abstract domain for representing the range of integers is deduced automatically. Based on this abstraction, abstract interpretation is applied to the program. The result is a finite number of atoms abstracting answers to queries which are used to extend the technique of query-mapping pairs. For each query-mapping pair that is potentially non-terminating, a bounded (integer-valued) termination function is guessed. If traversing the pair decreases the value of the termination function, then termination is established. Simple functions often suffice for each query-mapping pair, and that gives our approach an edge over the classical approach of using a single termination function for all loops, which must inevitably be more complicated and harder to guess automatically. It is worth noting that the termination of McCarthy's 91 function can be shown automatically using our method. In summary, the proposed approach is based on combining a finite abstraction of the integers with the technique of the query-mapping pairs, and is essentially capable of dividing a termination proof into several cases, such that a simple termination function suffices for each case. Consequently, the whole process of proving termination can be done automatically in the framework of TermiLog and similar systems.
Appeared also in Electronic Notes in Computer Science vol. 30
cs.PL
[ "cs.PL", "D.1.6; D.2.4" ]
Extended Abstract - Model-Based Debugging of Java Programs
http://arxiv.org/abs/cs/0011027v1
http://arxiv.org/abs/cs/0011027v1
http://arxiv.org/pdf/cs/0011027v1
2000-11-20
2000-11-20
[ "Cristinel Mateis", "Markus Stumptner", "Dominik Wieland", "Franz Wotawa" ]
[ "", "", "", "" ]
Model-based reasoning is a central concept in current research into intelligent diagnostic systems. It is based on the assumption that sources of incorrect behavior in technical devices can be located and identified via the existence of a model describing the basic properties of components of a certain application domain. When actual data concerning the misbehavior of a system composed from such components is available, a domain-independent diagnosis engine can be used to infer which parts of the system contribute to the observed behavior. This paper describes the application of the model-based approach to the debugging of Java programs written in a subset of Java. We show how a simple dependency model can be derived from a program, demonstrate the use of the model for debugging and reducing the required user interactions, give a comparison of the functional dependency model with program slicing, and finally discuss some current research issues.
In M. Ducasse (ed), proceedings of the Fourth International Workshop on Automated Debugging (AADEBUG 2000), August 2000, Munich. cs.SE/0010035
cs.SE
[ "cs.SE", "cs.PL", "D.2.5" ]