Columns: abstract (string, 5–10.1k chars); authors (string, 9–1.96k chars); title (string, 5–367 chars); __index_level_0__ (int64, 1–1,000k)
This letter considers a single-hop wireless multicast network. We first introduce a new two-level queuing system consisting of a main queue and a virtual queue, where each packet in the virtual queue is associated with a user index set. Then, we propose a network-coding-based packet scheduling method to maximize the system input rate under the queue stability constraint. Our analytical and simulation results demonstrate the effectiveness of the proposed solution.
['Nadieh Moghadam', 'Hongxiang Li']
Queue Stability Analysis in Network Coded Wireless Multicast Network
722,001
Closed left-r.e. sets
['Sanjay Jain', 'Frank Stephan', 'Jason Teutsch']
Closed left-r.e. sets
691,263
Recently, convex solutions to low-rank matrix factorization problems have received increasing attention in machine learning. However, in many applications the data can display other structures beyond simply being low-rank. For example, images and videos present complex spatio-temporal structures, which are largely ignored by current low-rank methods. In this paper we explore a matrix factorization technique suitable for large datasets that captures additional structure in the factors by using a projective tensor norm, which includes classical image regularizers such as total variation and the nuclear norm as particular cases. Although the resulting optimization problem is not convex, we show that under certain conditions on the factors, any local minimizer for the factors yields a global minimizer for their product. Examples in biomedical video segmentation and hyperspectral compressed recovery show the advantages of our approach on high-dimensional datasets.
['Benjamin D. Haeffele', 'Eric D. Young', 'René Vidal']
Structured Low-Rank Matrix Factorization: Optimality, Algorithm, and Applications to Image Processing
84,975
Purpose – The aim of the paper is to explore the requirements of knowledge‐based management in the regional development network of the Tampere region in Finland. Design/methodology/approach – The requirements of knowledge‐based management are analysed on the basis of the perceptions of the regional developers interviewed (n=13) and by using the extended SECI model and intellectual capital framework as analytical tools. Findings – Different regions benefit from different knowledge‐based management activities according to their performance in the cycle of the extended SECI model and the intellectual capital available among the regional developers. When the knowledge‐based management requirements of a regional development network are identified by applying these two frameworks, more appropriate investments (e.g. for ICT infrastructure) and development activities can be made. Research limitations/implications – The perspectives of knowledge creation (i.e. the extended SECI model) and intellectual capital provide...
['Henna Salonius', 'Jonna Käpylä']
Exploring the requirements of regional knowledge‐based management
214,787
Oblivious Transfer via McEliece's PKC and Permuted Kernels.
['Kazukuni Kobara', 'Kirill Morozov', 'Raphael Overbeck']
Oblivious Transfer via McEliece's PKC and Permuted Kernels.
773,853
We present a new proof-theoretic approach to bounding the complexity of the decision problem for propositional modal logics. We formalize logics in a uniform way as sequent systems and then restrict the structural rules for particular systems. This, combined with an analysis of the accessibility relation of the corresponding Kripke structures, yields decision procedures with bounded space requirements. As examples we give O(n log n) space procedures for the modal logics K and T.
['David A. Basin', 'Seán Matthews', 'Luca Viganò']
A New Method for Bounding the Complexity of Modal Logics
297,542
SAML Privacy-Enhancing Profile.
['Moritz Horsch', 'Max Tuengerthal', 'Tobias Wich']
SAML Privacy-Enhancing Profile.
783,111
We consider the numerical approximations of a two-phase hydrodynamics coupled phase-field model that incorporates variable densities, viscosities and moving contact line boundary conditions. The model is a nonlinear, coupled system that consists of the incompressible Navier–Stokes equations with the generalized Navier boundary condition, and the Cahn–Hilliard equations with moving contact line boundary conditions. By some subtle explicit–implicit treatments of the nonlinear terms, we develop two efficient, unconditionally energy stable numerical schemes: a linear decoupled energy stable scheme for the system with static contact line condition, and a nonlinear energy stable scheme for the system with dynamic contact line condition. An efficient spectral-Galerkin spatial discretization is implemented to verify the accuracy and efficiency of the proposed schemes. Various numerical results show that the proposed schemes are efficient and accurate.
['Haijun Yu', 'Xiaofeng Yang']
Numerical Approximations for a Phase-Field Moving Contact Line Model with Variable Densities and Viscosities
992,712
Large holes are unavoidably generated in depth image based rendering (DIBR) using a single color image and its associated depth map. Such holes are mainly caused by disocclusion, which occurs around the sharp depth discontinuities in the depth map. We propose a divide-and-conquer hole-filling method which refines the background depth pixels around the sharp depth discontinuities to address the disocclusion problem. First, the disocclusion region is detected according to the degree of depth discontinuity, and the target area is marked as a binary mask. Then, the depth pixels located in the target area are modified by a linear interpolation process, whose pixel values decrease from the foreground depth value to the background depth value. Finally, in order to remove the isolated depth pixels, median filtering is adopted to refine the depth map. In this way, disocclusion regions in the synthesized view are divided into several small holes after DIBR, and are easily filled by image inpainting. Experimental results demonstrate that the proposed method can effectively improve the quality of the synthesized view both subjectively and objectively.
['Jianjun Lei', 'Cuicui Zhang', 'Min Wu', 'Lei You', 'Kefeng Fan', 'Chunping Hou']
A divide-and-conquer hole-filling method for handling disocclusion in single-view rendering
690,601
Elderly people can suffer some degree of decline in their cognitive capacities, usually including different symptoms (decreased problem solving capacity, decreased ability to reason and to maintain focus, forgetfulness, etc.). Cognitive stimulation has been shown to decrease the rate of intellectual decay and potentially reverse age-related cognitive decline. Serious games provide new training opportunities to improve the decrease in selected social, sensory-motor, cognitive and emotional functions of elderly people. This paper details the objectives of the VIRTRA-EL web platform, which has been designed to evaluate and train cognitive skills to elderly users by means of serious games using the personal computer or tablet devices. Additionally, we present a serious game based on interactive 3D environments, which has been designed with the aim of helping to train memory, attention, planning and reasoning.
['María José Rodríguez-Fórtiz', 'Carlos Rodríguez-Domínguez', 'Pedro Cano', 'J. Revelles', 'María Luisa Rodríguez-Almendros', 'María Visitación Hurtado-Torres', 'Sandra Rute-Pérez']
Serious games for the cognitive stimulation of elderly people
903,610
This article extends turbo demodulation to the zero-padded OFDM (ZP-OFDM) system by accounting for the noise color and the correlation of symbol estimates introduced by equalization. Resorting to realistic simulations, we show that turbo demodulation used with set-partitioning labeling can significantly outperform noniterative decoding with Gray labeling. We also show that it increases the performance gap between ZP-OFDM and OFDM with cyclic prefix (CP-OFDM) relative to noniterative decoding, because it amplifies the performance gain due to the guaranteed symbol recovery of ZP-OFDM.
['B. Muquet', 'M. de Courville', 'Pierre Duhamel', 'G.B. Giannakis', 'P. Magniez']
Turbo demodulation of zero-padded OFDM transmissions
115,230
In response to the high cost and high risk associated with traditional de novo drug discovery, investigation of potential additional uses for existing drugs, also known as drug repositioning, has attracted increasing attention from both the pharmaceutical industry and the research community. In this paper, we propose a unified computational framework, called DDR, to predict novel drug-disease associations. DDR formulates the task of hypothesis generation for drug repositioning as a constrained nonlinear optimization problem. It utilizes multiple drug similarity networks, multiple disease similarity networks, and known drug-disease associations to explore potential new associations among drugs and diseases with no known links. A large-scale study was conducted using 799 drugs against 719 diseases. Experimental results demonstrated the effectiveness of the approach. In addition, DDR ranked drug and disease information sources based on their contributions to the prediction, thus paving the way for prioritizing multiple data sources and building more reliable drug repositioning models. Particularly, some of our novel predictions of drug-disease associations were supported by clinical trials databases, showing that DDR could serve as a useful tool in drug discovery to efficiently identify potential novel uses for existing drugs.
['Ping Zhang', 'Fei Wang', 'Jianying Hu']
Towards drug repositioning: a unified computational framework for integrating multiple aspects of drug similarity and disease similarity.
775,687
Cooperative relay communication in a fading channel environment under the orthogonal amplify-and-forward (OAF), nonorthogonal and orthogonal selection decode-and-forward (NSDF and OSDF) protocols is considered here. The diversity-multiplexing gain tradeoff (DMT) of the three protocols is determined and DMT-optimal distributed space-time (ST) code constructions are provided. The codes constructed are sphere decodable and in some instances incur minimum possible delay.
['Petros Elia', 'K. Vinodh', 'M. Anand', 'Pramod Kumar']
D-MG Tradeoff and Optimal Codes for a Class of AF and DF Cooperative Communication Protocols
503,294
This paper proposes a method for selecting the singular spectrum analysis components via the empirical mode decomposition approach for extracting the useful information for noninvasive blood glucose estimation systems. To perform the grouping, the total number of groups of singular spectrum analysis components is set equal to the total number of intrinsic mode functions. First, each normalized singular spectrum analysis component is compared to each normalized intrinsic mode function. Second, each singular spectrum analysis component is assigned to the group corresponding to the intrinsic mode function having the highest correlation coefficient. Third, all the singular spectrum analysis components belonging to the same group are summed together. This technique is applied to extract the useful information for noninvasive blood glucose estimation systems. In particular, the measured signal is decomposed into a number of components via both the singular spectrum analysis approach and the empirical mode decomposition approach. After applying our proposed grouping method to the singular spectrum analysis components, the obtained components enjoy the advantages of both the singular spectrum analysis approach and the empirical mode decomposition approach. Computer numerical simulations are performed on practical measurements. The results show that more robust information can be found in the obtained singular spectrum analysis components.
['Peiru Lin', 'Weixi Li', 'Tuhong Zheng', 'Wing-Kuen Ling', 'Chi-Kong Li']
Selection of singular spectrum analysis components via empirical mode decomposition for extracting information for noninvasive blood glucose estimation system
993,409
Many academic disciplines - including information systems, computer science, and operations management - face scheduling problems as important decision making tasks. Since many scheduling problems are NP-hard in the strong sense, there is a need for developing solution heuristics. For scheduling problems with setup times on unrelated parallel machines, there is limited research on solution methods and to the best of our knowledge, parallel computer architectures have not yet been taken advantage of. We address this gap by proposing and implementing a new solution heuristic and by testing different parallelization strategies. In our computational experiments, we show that our heuristic calculates near-optimal solutions even for large instances and that computing time can be reduced substantially by our parallelization approach.
['Gerhard Rauchecker', 'Guido Schryen']
High-Performance Computing for Scheduling Decision Support: A Parallel Depth-First Search Heuristic
741,210
Directional SUSAN image boundary detection of breast thermogram
['Elham Mahmoudzadeh', 'Maryam Zekri', 'Mohammad Montazeri', 'Saeid Sadri', 'Sima Taghizadeh Dabbagh']
Directional SUSAN image boundary detection of breast thermogram
705,951
Optimized power allocation among the transmitters in MIMO radar networks has a potential to conserve a significant amount of energy without loss of target-estimation performance. In this paper, we develop power allocation strategies for joint target position and velocity estimation in a distributed MIMO radar network. Transmitting power optimization algorithms are proposed for predetermined parameter estimation accuracy requirements in terms of the equivalent Fisher information matrix (EFIM). These algorithms aim to minimize the power budget while guaranteeing the estimation accuracy with or without the presence of parameter uncertainties. Numerical results show that the proposed algorithms can achieve appreciable energy savings.
['Ning Zhang', 'Hu Song', 'Jun Tang', 'Yuan Shen']
Power optimization for joint target position and velocity estimation in MIMO radar networks
728,367
This paper describes an approach designed to create a mapping between corresponding activities from two business processes that is geared towards handling noisy similarity values for the labels describing these activities. This is achieved by formulating an optimization problem – maximize the behavioral similarity of the processes as a whole – whose target value depends on the mapping. Thereby, the mapping is created not only with respect to label similarities but also with respect to the overall control flow structure, which avoids some mistakes resulting from erroneous label similarities. A preliminary evaluation demonstrates the improvement.
['Jörg Becker', 'Dominic Breuker', 'Patrick Delfmann', 'Hanns-Alexander Dietrich', 'Matthias Steinhorst']
Identifying Business Process Activity Mappings by Optimizing Behavioral Similarity
355,980
Forensic failure analysis of automotive electronics deals in most cases with failures within the guarantee period. Frequently, specific operational conditions, even if present only for a short moment, combine with specific electronic sensitivities – to ESD discharge, switching spikes, humidity ingress, vibration, or undefined grounding circuitry. The paper lists impressive, partly curious examples of related failure anamnesis and analysis and tries to draw some important conclusions with respect to prevention and failure anamnesis/failure analysis methodologies.
['Peter Jacob']
Early life field failures in modern automotive electronics – An overview; root causes and precautions
888,940
Service delivery platforms in telecommunication environments aim to host multiple services and to provide context-awareness and personalization features. This calls for an appropriate management of user profiles containing general user information as well as situation-dependent user preferences for the contextual personalization of these services. We present a user profile selection approach that decides on the selection of matching situation-dependent user preferences concerning the user's current situation. The presented approach takes advantage of ontology reasoning and is compared to other approaches. The comparison shows that our ontology reasoning based approach allows for more expressiveness concerning the specification of situation-dependent user preferences and hence leads to an added value for platform users.
['Michael Sutterer', 'Olaf Droegehorn', 'Klaus David']
User Profile Selection by Means of Ontology Reasoning
7,146
A ranking and selection (R&S) procedure allowing comparisons between systems to be made based on any distributional property of interest would be useful. This paper presents initial work toward the development of such a procedure. Previously published work gives a method for using bootstrapping to develop fixed-width confidence intervals with a specified coverage probability around a property of interest. Empirical evidence is provided in support of the use of this approach for building fixed-width confidence intervals around both means and quantiles. Additionally, the use of fixed-width confidence intervals for bootstrapped R&S is demonstrated. For two systems, R&S is performed by building a confidence interval around the difference between the two systems. Simultaneous fixed-width confidence intervals are used for R&S on more than two systems, and the approach is demonstrated for three systems. The technique is shown to be effective for R&S based on both quantiles and means.
['Jennifer M. Bekki', 'Barry L. Nelson', 'John W. Fowler']
Bootstrapping-based fixed-width confidence intervals for ranking and selection
485,383
Association rule mining is one of the important research topics in knowledge discovery and data mining. A recent promising direction in association rule mining is to mine closed itemsets. Based on the Galois closed operators, a mathematical relationship between the fixed point and the closed itemset in association rule mining is discussed and several properties are obtained. Mining all frequent closed itemsets is equivalent to building the fixed point lattice and mining all of its points that satisfy the support constraints. A new method for visualization of association rules based on the generalized association rule base is also proposed.
['Tianrui Li', 'Da Ruan', 'Tianmin Huang', 'Yang Xu']
On a mathematical relationship between the fixed point and the closed itemset in association rule mining
366,494
The mesh structure of ad hoc networks provides the possibility of establishing two disjoint paths from a sender to a receiver. Transmission of video over such networks is challenging due to their unpredictability and the difficulty of securing reliable channels. Layered Coding (LC) and Multiple Description Coding (MDC) are two different techniques which can benefit from path diversity for robust video communication and can also adapt to the preferences of users and networks. This paper presents an approach to provide error-resilient video transmission over a variety of network conditions and application needs using combined LC and MDC schemes. In the proposed method, two descriptions of each layer are generated in the FMO format of the H.264/AVC standard. Unlike the conventional approaches, in our work macroblocks of each layer are divided between two paths. Hence, under bursty error conditions the errors are smoothly spread across all layers. For better protection and more network compatibility, the base layer is data partitioned and its important part (DP_A) is repeated in both paths. Simulation results show that transmitting the duplicated DP_A and the non-corresponding descriptions of the two layers together on disjoint paths can improve the error concealment of the decoder and consequently enhance video quality by up to 2 dB.
['Shahram Ghahremani', 'Mohammad Ghanbari']
Error resilient video transmission in ad hoc networks using layered and multiple description coding
713,697
Increasing resolutions push the throughput requirements of video codecs and complicate the challenges encountered during their cost-efficient implementations. We propose an FPGA implementation of a high-performance MPEG-4 simple profile video decoder, capable of parsing multiple bitstreams from different encoder sources. Its video pipeline architecture exploits the inherent functional parallelism and enables multi-stream support at a limited FPGA resource cost compared to a single stream version. The design is scalable with a number of added compile-time parameters - including maximum frame size and number of input bitstreams - which can be set by the user to suit his application.
['Paul R. Schumacher', 'Kristof Denolf', 'A. Chilira-RUs', 'R. Turney', 'Nicola J. Fedele', 'K. Vissers', 'J. Bormans']
A scalable, multi-stream MPEG-4 video decoder for conferencing and surveillance applications
129,602
The importance of collaborations across geographical, institutional and/or disciplinary boundaries has been widely recognized in research communities, yet there exists a range of obstacles to such collaborations. This study is concerned with understanding the potential of academic social networking services (ASNS) as a medium or platform for cross-disciplinary or multi-disciplinary collaborations. Many ASNS sites allow scholars to form online groups as well as to build up their professional network individually. In this study, we look at the patterns of user participation in online groups in an ASNS site, Mendeley, with an emphasis on assessing the degree to which people from different disciplinary backgrounds gather in these groups. The results show that while there exists a need for better means to facilitate group formation and growth, the groups in Mendeley exhibit a great deal of diversity in their member composition in terms of disciplines. Overall, the findings of this study support the argument that online social networking, especially ASNS, may foster multi-disciplinary collaborations by providing a platform for researchers from diverse backgrounds to find one another and cooperate on issues of common interest.
['Jung Sun Oh', 'Wei Jeng']
Groups in Academic Social Networking Services--An Exploration of Their Potential as a Platform for Multi-disciplinary Collaboration
933,146
Virtual Fly Brain - Using OWL to Support the Mapping and Genetic Dissection of the Drosophila Brain.
['David Osumi-Sutherland', 'Marta Costa', 'Robert Court', "Cahir J. O'Kane"]
Virtual Fly Brain - Using OWL to Support the Mapping and Genetic Dissection of the Drosophila Brain.
787,423
Purpose – This paper aims to provide robust evidence of the “IT organizational assimilation capacity” mediating role and to propose a complementary model. Design/methodology/approach – Based on the theoretical proposition that IT business value is generated by the deployment of IT and complementary organizational resources, a research model was developed and two hypotheses were proposed. These are tested with a survey of 466 top managers in Italian companies. The 466 questionnaires were analyzed in two steps. In the first step, a series of construct validations using factor analysis was performed in order to validate the scales. In the second step, a series of analyses using linear regression was performed between the two independent variables and the dependent variable to validate the mediator function of the IT organizational assimilation capacity. Findings – Data suggest that most firms have not merged information system (IS) integration with the right complementary organizational resources. The findings also ...
['Vincenzo Morabito', 'Marinos Themistocleous', 'Alan Serrano']
A survey on integrated IS and competitive advantage
277,817
In this paper we present a system, called GeoARCO, which enables presentation of virtual museum exhibitions in a geographical context. The system is partially based on the results of the European project ARCO — Augmented Representation of Cultural Objects, which has developed technology for museums enabling them to create and manage virtual museum exhibitions for use in interactive kiosk displays and on the Web. GeoARCO uses the Google Earth platform to enable presentation of digital artefacts as well as complete cultural heritage exhibitions on top of the 3D globe model. Users can browse and search available exhibitions, display current location of objects as well as historical data about the objects, such as the place where the objects were made or discovered. A user can also display detailed 3D models of artefacts, reconstructed sites or entire virtual exhibitions. The system cooperates with multiple ARCO databases run by different museums.
['Stawniak M', 'Krzysztof Walczak']
Geographical presentation of virtual museum exhibitions
328,217
This paper discusses the image processing techniques needed to localise the iris outline in images of the eye produced by a novel, portable eye tracker. The tracker allows gaze direction to be measured by supplying simultaneous views of a subject's eye and of the world from a head-mounted camera. Finding gaze direction relative to the head requires accurate and robust measurement of the iris outline under a wide range of lighting conditions, in the presence of highlights, and when the iris is in an extreme position. We describe a reliable method to solve this problem, which achieves a gaze direction accuracy of under 2 degrees.
['Hilary Tunley', 'David S. Young']
Iris localisation for a head-mounted eye tracker
97,565
Cognitive radio (CR) is an emerging technology in wireless access, aiming at vastly improving the way radio spectrum is utilized. Its basic idea is that a secondary user (unlicensed user) can be permitted to use licensed spectrum, provided that it does not interfere with any primary users (licensed users). CR technology enables the development of an intelligent and adaptive wireless communication system that is essentially aware of the radio frequency environment. In this paper, we propose a novel cognitive MAC protocol for cognitive radio networks under the property-rights model, in which secondary users are divided into several non-overlapping groups, and each group uses the proposed auction algorithm to bid for leasing the required channels from the auctioneer appointed by the primary users. To the best of our knowledge, this protocol is the first to maximize the utilization of spectrum resources while achieving revenue maximization for primary users. Simulations indicate that our proposed MAC protocol maximizes spectrum utilization and guarantees the fairness and dynamics of channel allocation among groups.
['Hua Song', 'Xiaola Lin']
A Leasing Oriented MAC Protocol for High Spectrum Usage in Cognitive Radio Networks
207,587
Behavior-based systems have been successfully used in control and robotics applications. In traditional behavior-based systems, only a single behavior controls the agent in any time step. However, this behavior arbitration is not appropriate for many complex tasks. In this paper, we propose a Hierarchical Soft Behavior-based Architecture that uses the concept of soft suppression to coordinate flexibly between behaviors. In our method, we use reinforcement learning to find an appropriate amount of suppression for each behavior in the architecture, in addition to learning the internal mechanism of each behavior. Several experiments are provided to show the effectiveness of our method in the mobile robot navigation task.
['Mohammad G. Azar', 'Majid Nili Ahmadabadi', 'Amir Massoud Farahmand', 'Babak Nadjar Araabi']
Learning to Coordinate Behaviors in Soft Behavior-Based Systems Using Reinforcement Learning
154,405
Cooperative transmission has been shown to be able to greatly improve the system performance by exploring the broadcasting nature of wireless channels and cooperation among users. While most existing works concentrate on improving the peer-to-peer link quality, we focus, in this paper, on resource allocation among users such that the system performance can be improved. In this work, two important questions are answered: who should help whom among the distributively located users; and how the users should cooperate to improve the performance. To quantify the questions, a power management problem is formulated over a multiuser OFDM network to minimize the overall system transmit power under the constraint of each user's desired transmission rate. Then, we develop an algorithm to find solutions for a two user case. From the simulation results, the proposed scheme achieves up to 50% overall power saving for the two-user system.
['Zhu Han', 'T. Himsoon', 'W.P. Siriwongpairat', 'K.J.R. Liu']
Energy-efficient cooperative transmission over multiuser OFDM networks: who helps whom and how to cooperate
360,955
In this paper, the nonparametric identification of nonlinear systems with binary-valued output observations is considered. The kernel-based stochastic approximation algorithm with expanding truncations (SAAWET) is proposed to recursively estimate the value of a nonlinear function representing the system at any fixed point. All estimates are proved to converge to the true values with probability one. A numerical example, which shows that the simulation results are consistent with the theoretical analysis, is given. Compared with the existing works on the identification of dynamic systems with binary-valued output observations, here we do not assume the complete knowledge of the system noise and the system itself is non-parameterized. On the other hand, we assume that we can adaptively design the threshold of the binary sensor to achieve a sufficient richness of information in the output observations.
['Wenxiao Zhao', 'Han-Fu Chen', 'Roberto Tempo', 'Fabrizio Dabbene']
Recursive identification of nonparametric nonlinear systems with binary-valued output observations
651,043
Designing non-classical non-transition-metal hydrogen complexes: Theoretical prediction of Si2F3(μ2-H2)
['Ioannis S. K. Kerkines', 'Cleanthes A. Nicolaides']
Designing non-classical non-transition-metal hydrogen complexes: Theoretical prediction of Si2F3(μ2-H2)
942,656
Current manufacturing systems have a very structured production model, especially when high precision is required, as in semiconductor device manufacturing. In addition, rapid changes in production and market requirements may occur, hence great flexibility is also essential. These goals often impose the periodic re-engineering of existing models in order to remove their limitations and add new capabilities, significantly reducing the time-to-market of new products. Updating a model generally also requires the improvement of existing applications, both re-writing software components and adding new features. We consider the current model used inside STMicroelectronics facilities to define the production flow. We first describe the model and its limitations, then we introduce an enhanced, object-oriented model with inheritance and hierarchy support. We also consider which improvements are required to existing applications in order to support such a model, introducing an enhanced environment.
['Vincenza Carchiolo', "Sebastiano D'Ambra", 'Alessandro Longheu', 'Michele Malgeri']
Re-engineering the STMicroelectronics manufacturing model
136,488
In this paper, we study the problem of joint carrier frequency offset (CFO) and channel estimation for a two-way relay network (TWRN) that comprises two source terminals and one relay node. We build up the signal model, from which we identify the CFO and the channels at the two source terminals. As the very first attempt to discuss joint CFO and channel estimation for TWRN, we consider a relay node that purely amplifies and forwards, which is also known as a repeater. The new model is different from the traditional ones in that the unknown CFO is combined with only part of the channel parameters. We then propose two joint estimation methods, i.e., the approximate maximum-likelihood (ML) method and the nulling-based method. The Cramer–Rao bounds (CRB) of both methods are derived in closed form. Simulations are then provided to corroborate the proposed studies.
['Gongpu Wang', 'Feifei Gao', 'Chintha Tellambura']
Joint Frequency Offset and Channel Estimation Methods for Two-Way Relay Networks
521,610
While OWL-S advertisements provide a rich (ontological and behavioural) description of Web services, there are no tools that support formal analyses of OWL-S services. In this paper we present a translator from OWL-S descriptions to Petri nets which makes such analyses possible thanks to the many tools available for Petri nets.
['Antonio Brogi', 'Sara Corfini', 'Stefano Iardella']
From OWL-S Descriptions to Petri Nets
556,777
A seller wants a buyer to choose a good whose value is the seller's private information. The buyer's memory is limited, and she decides whether to remember the good conditional on a signal about the value. The seller then decides whether to send a costless message that can remind the buyer of the good. Since the reminder could convey the seller's private information in equilibrium, whether to send a reminder is a non-trivial question. It is shown that costless messages can be informative in equilibrium in spite of the strong conflict of interest between the players. In any informative equilibrium, silence conveys positive information about the value, whereas the reminder conveys negative information.
['Toru Suzuki']
Reminder game: Indirectness in persuasion
904,827
Bluetooth has become a ubiquitous technology present in almost every electronic device. A question often asked by manufacturers and end users is whether it can serve purposes other than those for which it was designed; in particular, whether it can offer improved multimedia traffic support with the fewest possible modifications to the implementation of the protocol (both hardware and software). In this paper we analyze the impact on performance of the lowest levels of the Bluetooth architecture through a relevant parameter known as the polling time, t_poll. We propose a novel algorithm (EMAAA Tpoll) that dynamically adapts the value of the t_poll parameter during transmission. Our results provide definite criteria for optimizing the transmission of multimedia traffic over piconets at the lowest layers of the Bluetooth architecture. Specifically, we show that our algorithm can yield a significant energy saving (of 10%-20%) with respect to the Bluetooth specification.
['David Contreras', 'Mario Castro']
Adaptive Polling Enhances Quality and Energy Saving for Multimedia over Bluetooth
119,018
Diffraction shaders
['Jos Stam']
Diffraction shaders
682,190
The analysis of plates can be achieved using the quadratic MITC plate or MITC shell elements. The plate elements have a strong mathematical basis and have been shown to be optimal in their convergence behavior, theoretically and numerically. The shell elements have not (yet) been analyzed mathematically in depth for their rates of convergence, with the plate/shell thickness varying, but have been shown numerically to perform well. Since the shell elements are general and can be used for linear and nonlinear analyses of plates and shells, it is important to identify the differences in the performance of these elements when compared to the plate elements. We briefly review the quadratic quadrilateral and triangular MITC plate and shell elements and study their performances in linear plate analyses.
['Phill-Seung Lee', 'Klaus-Jürgen Bathe']
The quadratic MITC plate and MITC shell elements in plate bending
74,740
A method is presented for the modelling and animation of generalized cylinders with variable radius offset space curves. The boundary surface of a generalized cylinder is constructed: either as a translational sweep of cross-sectional curves along the skeleton curve, or as a rotational sweep of profile curves around the skeleton curve. The cross-sectional curves are computed as the variable radius offset curves of a circle in the normal plane, and the profile curves are computed as the variable radius offset space curves of the skeleton curve. The offset curves are approximated by spline curves, and the boundary surface of a generalized cylinder is approximated by the tensor product surface patches of the offset spline curves.
['Myung-Soo Kim', 'Eun-Joo Park', 'Hwan‐Yong Lee']
Modelling and animation of generalized cylinders with variable radius offset space curves
422,881
Collecting natural data at regular, fine scales is an onerous and often costly procedure. However, there is a basic need for fine scale data when applying inductive methods such as neural networks or genetic algorithms for the development of ecological models. This paper will address the issues involved in interpolating data for use in machine learning methods by considering how to determine if a downscaling of the data is valid. The approach is based on a multi-scale estimate of errors. The resulting function has similar properties to a time series variogram; however, the comparison at different scales is based on the variance introduced by rescaling from the original sequence. This approach has a number of properties, including the ability to detect frequencies in the data below the current sampling rate, an estimate of the probable average error introduced when a sampled variable is downscaled and a method for visualising the sequences of a time series that are most susceptible to error due to sampling. The described approach is ideal for supporting the ongoing sampling of ecological data and as a tool for assessing the impact of using interpolated data for building inductive models of ecological response.
['Peter A. Whigham']
Visualising and assessing the probable error from downscaling ecological time series data
486,305
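The multi-scale error estimate described above can be sketched roughly as follows. This is an illustrative reconstruction, not the author's actual procedure: for each scale k, keep every k-th sample, linearly interpolate back to the original resolution, and record the variance of the reconstruction error; the resulting curve over k plays the variogram-like role the abstract describes, flagging sampling rates at which downscaling would introduce substantial error.

```python
def downscale_error(series, max_scale):
    """Return {scale k: error variance introduced by sampling every k-th point}."""
    errors = {}
    n = len(series)
    for k in range(2, max_scale + 1):
        kept = list(range(0, n, k))
        recon = [0.0] * n
        # Linear interpolation between consecutive retained samples.
        for a, b in zip(kept, kept[1:]):
            for t in range(a, b + 1):
                w = (t - a) / (b - a)
                recon[t] = (1 - w) * series[a] + w * series[b]
        last = kept[-1]
        for t in range(last, n):      # hold the last retained value to the end
            recon[t] = series[last]
        resid = [series[t] - recon[t] for t in range(n)]
        mean = sum(resid) / n
        errors[k] = sum((r - mean) ** 2 for r in resid) / n
    return errors
```

A slowly varying series yields near-zero error at coarse scales, while a series with frequency content above the coarser sampling rate shows a sharp rise, which is the diagnostic the paper exploits for guiding ongoing ecological sampling.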
On the Equivalence of Degraded Gaussian MIMO Broadcast Channels
['Lennart Gerdes', 'Maximilian Riemensberger', 'Wolfgang Utschick']
On the Equivalence of Degraded Gaussian MIMO Broadcast Channels
602,298
The web application MAGMA provides a simple and intuitive interface to identify differentially expressed genes from two-channel microarray data. While the underlying algorithms are not superior to those of similar web applications, MAGMA is particularly user friendly and can be used without prior training. The user interface guides the novice user through the most typical microarray analysis workflow consisting of data upload, annotation, normalization and statistical analysis. It automatically generates R-scripts that document MAGMA's entire data processing steps, thereby allowing the user to regenerate all results in his local R installation. The implementation of MAGMA follows the model-view-controller design pattern that strictly separates the R-based statistical data processing, the web-representation and the application logic. This modular design makes the application flexible and easily extendible by experts in one of the fields: statistical microarray analysis, web design or software development. State-of-the-art Java Server Faces technology was used to generate the web interface and to perform user input processing. MAGMA's object-oriented modular framework makes it easily extendible and applicable to other fields and demonstrates that modern Java technology is also suitable for rather small and concise academic projects. MAGMA is freely available at www.magma-fgcz.uzh.ch.
['Hubert Rehrauer', 'Stefan Zoller', 'Ralph Schlapbach']
MAGMA: analysis of two-channel microarrays made easy
254,758
Volatile markets require that associated manufacturing systems cope with a variety of products and frequent changes in capabilities. The required paradigm is that of Reconfigurable Manufacturing Systems, which are designed for gracefully assimilating changes in functionality. This paper proposes a middleware infrastructure that facilitates the process of integrating and (re)configuring manufacturing systems. The middleware follows a layered approach, in which an information infrastructure is implemented over an interaction infrastructure. The interaction infrastructure encapsulates the interaction primitives required to manage distribution in a loosely coupled way. A service-oriented approach is used for discovering deployed elements. A message-oriented approach is used to asynchronously propagate invocations and events. The information infrastructure encapsulates functionality to facilitate the invocation, using an upper ontology for Semantic Web Services in conjunction with domain ontologies in order to...
['Ivan M. Delamer', 'Jose L. Martinez Lastra', 'María de los Ángeles Cavia Soto']
An event-based service-oriented infrastructure for reconfigurable manufacturing systems
223,854
The RKLT is a lossless approximation to the KLT, and has been recently employed for progressive lossy-to-lossless coding of hyperspectral images. Both yield very good coding performance results, but at a high computational price. In this paper we investigate two RKLT clustering approaches to lessen the computational complexity problem: a normal clustering approach, which still yields good performance; and a multi-level clustering approach, which has almost no quality penalty as compared to the original RKLT. Analysis of rate-distortion evolution and of lossless compression ratio is provided. The proposed approaches supply additional benefits, such as spectral scalability, and a decrease of the side information needed to invert the transform. Furthermore, since with a clustering approach the SERM factorization coefficients are bounded to a finite range, the proposed methods allow coding of large three-dimensional images within JPEG2000.
['Ian Blanes', 'Joan Serra-Sagrista']
Clustered Reversible-KLT for Progressive Lossy-to-Lossless 3d Image Coding
158,881
To improve the performance of a DVB-S2 communication channel, it is necessary to remove the non-linear interference induced by the power amplifier. Due to the presence of filters in the channel, this non-linear interference also has memory. One of the most widely used techniques to cancel the interference in this case is order-p compensation, which can be mathematically described as a recursive algorithm. The non-linear interference is expected to be smaller after each step of the algorithm. However, this is not always the case, and it is necessary to define upper bounds on the system to ensure convergence of the algorithm. Upper bounds have already been defined for general non-linear systems, but they are very inaccurate for the case of a DVB-S2 communication channel. In this paper, we present two new methods to refine these upper bounds.
['Thibault Deleu', 'Mathieu Dervin', 'Jean-Michel Dricot', 'Philippe De Doncker', 'François Horlin']
Finite Order Compensation of Non Linearities in DVB-S2 Communications
150,871
A distributed medium access control (MAC) algorithm for uplink OFDMA networks under the IEEE 802.16 framework is proposed and analyzed in this work. We present a simple yet efficient algorithm to enhance the system throughput by integrating opportunistic medium access and collision resolution through random subchannel backoff. Consequently, the resulting algorithm is called the opportunistic access with random subchannel backoff (OARSB) scheme. OARSB not only achieves distributed coordination among users but also reduces the amount of information exchange between the base station and users. The throughput and delay performance analysis of OARSB is conducted using a Markov chain model. The superior performance of OARSB over an existing scheme is demonstrated by analysis as well as computer simulation.
['Yu-Jung Chang', 'Feng-Tsun Chien', 'C.-C. Jay Kuo']
Opportunistic Access with Random Subchannel Backoff (OARSB) for OFDMA Uplink
102,925
Shah, Rashmi and Ramchandran recently considered a model for Private Information Retrieval (PIR) where a user wishes to retrieve one of several $R$-bit messages from a set of $n$ non-colluding servers. Their security model is information-theoretic. Their paper is the first to consider a model for PIR in which the database is not necessarily replicated, so allowing distributed storage techniques to be used. Shah et al. show that at least $R+1$ bits must be downloaded from servers, and describe a scheme with linear total storage (in $R$) that downloads between $2R$ and $3R$ bits. For any positive $\epsilon$, we provide a construction with the same storage property that requires at most $(1+\epsilon)R$ bits to be downloaded; moreover, one variant of our scheme only requires each server to store a bounded number of bits (in the sense of being bounded by a function that is independent of $R$). We also provide variants of a scheme of Shah et al. which download exactly $R+1$ bits and have quadratic total storage. Finally, we simplify and generalise a lower bound due to Shah et al. on the download complexity of a PIR scheme. In a natural model, we show that an $n$-server PIR scheme requires at least $nR/(n-1)$ download bits in many cases, and provide a scheme that meets this bound.
['Simon R. Blackburn', 'Tuvi Etzion', 'Maura B. Paterson']
PIR schemes with small download complexity and low storage requirements
897,380
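For intuition about information-theoretic PIR, the classic two-server replicated XOR scheme (a well-known textbook construction, not the Shah et al. scheme or the new construction described in the abstract) can be sketched as follows: each server receives a uniformly random-looking subset of record indices, so neither server alone learns which index the user wants, yet XORing the two answers recovers exactly record i.

```python
import secrets

def pir_query(n_records, i):
    """Classic 2-server XOR PIR query: each server sees a random subset."""
    s1 = {j for j in range(n_records) if secrets.randbits(1)}
    s2 = s1 ^ {i}          # symmetric difference flips membership of index i
    return s1, s2

def pir_answer(db, subset):
    """A server XORs together the requested records (each an int bitstring)."""
    acc = 0
    for j in subset:
        acc ^= db[j]
    return acc

def pir_reconstruct(a1, a2):
    # Records present in both subsets cancel; only db[i] survives.
    return a1 ^ a2

db = [0b1010, 0b0111, 0b1100, 0b0001]
s1, s2 = pir_query(len(db), 2)
value = pir_reconstruct(pir_answer(db, s1), pir_answer(db, s2))
# value equals db[2]; each server alone sees a uniformly random subset.
```

Note the download cost here is 2R bits (one R-bit answer per server), which is exactly the regime the abstract's constructions improve upon.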
Natural resource management in the United States has experienced dramatic change since landmark legislation in the 1960s and 1970s ultimately brought about high-visibility policy decisions on the public lands of the Pacific Northwest in the 1990s. The socio-political trajectory of that change has moved from institutionally imposed, agency-based decisions toward greater public involvement, increasingly calling upon new technologies to analyse data and communicate scientific findings. An investigation of the use of GIS technology in public involvement in the Coastal Landscape Analysis and Modeling Study in western Oregon finds that use of this technology plays a potentially transformative role that can encourage further movement along this social change-based trajectory but can also constrain it. Use of the technology can constrain change by increasing awareness of uncertainty and by supporting the development of privileged knowledge as held by GIS map-makers, typically scientists. It can encourage...
['Sally L. Duncan', 'Denise Lach']
GIS Technology in Natural Resource Management: Process as a Tool of Change
267,395
Multi-robot formations are an important advance in recent robotic developments, as they allow a group of robots to merge their capacities and perform surveys in a more convenient way. With the aim of keeping the costs and acoustic communications to a minimum, cooperative navigation of multiple underwater vehicles is usually performed at the control level. In order to maintain the desired formation, individual robots just react to simple control directives extracted from range measurements or ultra-short baseline (USBL) systems. Thus, the robots are unaware of their global positioning, which presents a problem for the further processing of the collected data. The aim of this paper is two-fold. First, we present a global alignment method to correct the dead reckoning trajectories of multiple vehicles to resemble the paths followed during the mission using the acoustic messages passed between vehicles. Second, we focus on the optical mapping application of these types of formations and extend the optimization framework to allow for multi-vehicle geo-referenced optical 3D mapping using monocular cameras. The inclusion of optical constraints is not performed using the common bundle adjustment techniques, but in a form improving the computational efficiency of the resulting optimization problem and presenting a generic process to fuse optical reconstructions with navigation data. We show the performance of the proposed method on real datasets collected within the Morph EU-FP7 project.
['Ricard Campos', 'Nuno Gracias', 'Pere Ridao']
Underwater Multi-Vehicle Trajectory Alignment and Mapping Using Acoustic and Optical Constraints
690,098
Rabies continues to claim human lives despite various efforts to control its transmission cycle between humans and domestic dogs. New developments in ICT provide increased possibilities for community involvement in rabies surveillance. The main objective of this study was to investigate approaches and practices to improve the communication of rabies surveillance information at different levels. Specifically, a study was carried out to establish the significance of applying a human sensor web system. A human sensor web has the potential to strengthen the rabies surveillance system and serves as an applied research tool for investigating strategic spatially targeted control activities, identifying areas most at risk, and enabling early detection of rabies incursions. A web and mobile phone based rabies surveillance system was developed and piloted as a support tool for the detection, surveillance and control of rabies. Wide application of the developed system will pave the way for effective and efficient country-wide sharing of rabies surveillance information.
['Maulilio J. Kipanyula', 'Anna M. Geofrey', 'Kadeghe G. Fue', 'M. R. S. Mlozi', 'Siza D. Tumbo', 'Ruth Haug', 'Camilius Sanga']
Web and Mobile Phone Based Rabies Surveillance System for Humans and Animals in Kilosa District, Tanzania
740,072
Ancient Japanese used to believe that Gods lived not only in nature but also in artifacts. They thought Gods lived in every pair of chopsticks, every dish, and every piece of furniture surrounding them. Legend says ancient people used to talk to the Gods inside such commodities and everyday tools as if those objects had souls of their own. Furniture and houses in particular were thought to be breathing (breath is called ki in Japanese). Those people thought they breathed together with their furniture and houses, that they interacted with those artifacts, and that they interacted with each other by communicating through and sharing those artifacts, without words.
['Shinya Matsuyama', 'Masaki Yagisawa', 'Kenji Inokuchi', 'Hiroaki Kawamura', 'Shozo Kuze', 'Hisakazu Nabeshima', 'Makoto Hirahara', 'Koji Yamashita', 'Tomohiro Wakatsuki', 'Mitsuaki Watanabe', 'Ichi Kanaya']
fuwapica suite
668,128
In recent years, discussion of the provision of government services has paid particular attention to notions of customer choice and improved service delivery. However, there appears to be a marked shift in the relationship between the citizen and the state, moving from government being responsive to the needs of citizens to viewing citizens explicitly as customers. This paper argues that this change is being accelerated by government use of techniques like benchmarking, which have been widely used in the private sector. To illustrate this point, the paper focuses on the adoption of website benchmarking techniques by the public sector. The paper argues that the essence of these benchmarking technologies, a process comprised of both finding and producing truth, is fundamentally based on the act of classifying, and draws on Martin Heidegger's etymological enquiry to reinterpret classification as a dynamic movement towards order that both creates and obfuscates truth. In so doing, it demonstrates how Heidegger's seminal ideas can be adapted for critical social research by showing that technology is more than an instrument, as it has epistemic implications for what counts as truth. This stance is used as the basis for understanding empirical work reporting on a UK government website benchmarking project. Our analysis identifies the means involved in producing the classifications inherent in such benchmarking projects and relates these to the more general move that is recasting the relationship between the citizen and the state, and increasingly blurring the boundaries between the state and the private sector. Recent developments in other attempts by the UK government to use private-sector technologies and approaches indicate ways in which this move might be challenged.
['Benjamin Mosse', 'Edgar A. Whitley']
Critically classifying: UK e‐government website benchmarking and the recasting of the citizen as customer
359,592
Assessment of Retinal Vascular Changes Through Arteriolar-to-Venular Ratio Calculation
['Behdad Dashtbozorg', 'Ana Maria Mendonça', 'Aurélio C. Campilho']
Assessment of Retinal Vascular Changes Through Arteriolar-to-Venular Ratio Calculation
657,261
In this paper, we provide a complete study on the training based channel estimation for relay networks that employ the decode-and-forward (DF) scheme. Since multiple relay nodes are geographically distributed over the service region, channel estimation is different from the traditional way in that each relay has its own individual power constraint. We consider the maximum likelihood (ML) channel estimation and derive closed form solutions for the optimal training as well as for the optimal power allocation. It is seen that the optimal power allocation follows a multi-level waterfilling structure.
['Feifei Gao', 'Tao Cui', 'Arumugam Nallanathan']
Maximum likelihood channel estimation in decode-and-forward relay networks
535,081
Sense Disambiguation: From Natural Language Words to Mathematical Terms
['Minh-Quoc Nghiem', 'Giovanni Yoko Kristianto', 'Goran Topic', 'Akiko Aizawa']
Sense Disambiguation: From Natural Language Words to Mathematical Terms
612,536
We propose a novel global pose estimation method to detect body parts of articulated objects in images based on non-tree graph models. There are two kinds of edges defined in the body part relation graph: Strong (tree) edges corresponding to the body plan that can enforce any type of constraint, and weak (non-tree) edges that express exclusion constraints arising from inter-part occlusion and symmetry conditions. We express optimal part localization as a multiple shortest path problem in a set of correlated trellises constructed from the graph model. Strong model edges generate the trellises, while weak model edges prohibit implausible poses by generating exclusion constraints among trellis nodes and edges. The optimization may be expressed as an integer linear program and solved using a novel two-stage relaxation scheme. Experiments show that the proposed method has a high chance of obtaining the globally optimal pose at low computational cost.
['Hao Jiang', 'David R. Martin']
Global pose estimation using non-tree models
69,342
In this paper, we consider rendering color videos using a non-photo-realistic art form technique commonly called stippling. Stippling is the art of rendering images using point sets, possibly with various attributes like sizes, elementary shapes, and colors. Producing nice stippling is attractive not only for the sake of image depiction but also because it yields a compact vectorial format for storing the semantic information of media. In order to create stippled videos, our method improves over the naive scheme by considering dynamic point creation and deletion according to the current scene's semantic complexity. Furthermore, we explain how to produce high quality stippled "videos" (e.g., fully dynamic spatio-temporal point sets) for media containing various fading effects. We report on the practical performance of our implementation, and present several stippled video results rendered on-the-fly using our viewer, which allows spatio-temporal dynamic rescaling (e.g., vectorially upscaling the frame rate).
['Thomas Houit', 'Frank Nielsen']
Video stippling
715,013
The planar target is one of the fundamental accessories of a terrestrial laser scanner and plays an important role in multi-view point cloud registration and in calibration of the scanner, in which determining the center position of the planar target is a key step. At present, there are few studies on locating planar targets. Based on the fuzzy c-means clustering algorithm, which we first introduce, we propose a new method to locate the center of a planar target. The algorithm is divided into three steps: first, use fuzzy clustering to partition the planar target data into different types; then extract the type of interest and use a robust plane-fitting algorithm to delete outliers; and finally obtain the target center coordinates. Experimental results show the correctness and robustness of the method.
['Yunlan Guan', 'Xiaojun Cheng', 'Shijian Zhou', 'Liting Zhang']
Robust Location Algorithm for Planar Target Based on Fuzzy Clustering
253,121
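The fuzzy c-means step at the heart of the method above can be sketched roughly as follows. This is an illustrative reconstruction under stated assumptions, not the paper's implementation: the farthest-first initialisation, the fuzzifier m = 2, and the fixed iteration count are all choices made here for the sketch.

```python
import math

def fuzzy_c_means(points, c, m=2.0, iters=50):
    """Minimal fuzzy c-means sketch for small point sets (illustrative only)."""
    # Deterministic farthest-first initialisation of the c cluster centers.
    centers = [points[0]]
    while len(centers) < c:
        centers.append(max(points,
                           key=lambda p: min(math.dist(p, q) for q in centers)))
    for _ in range(iters):
        # Membership update: u[k][j] is the degree of point k in cluster j.
        u = []
        for p in points:
            d = [math.dist(p, ctr) or 1e-12 for ctr in centers]
            u.append([1.0 / sum((d[j] / d[l]) ** (2.0 / (m - 1.0))
                                for l in range(c))
                      for j in range(c)])
        # Center update: membership-weighted mean of all points.
        new_centers = []
        for j in range(c):
            w = [u[k][j] ** m for k in range(len(points))]
            tot = sum(w)
            new_centers.append(tuple(
                sum(wk * p[dim] for wk, p in zip(w, points)) / tot
                for dim in range(len(points[0]))))
        centers = new_centers
    return centers, u
```

In the paper's pipeline, the cluster containing the target face would then be fed to robust plane fitting before the center coordinates are extracted.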
Data mining involves the discovery of potentially qualified content from a large collection of heterogeneous data sources. Two decades on, it remains an area of active research interest, and has become a flexible platform for mining engineers to analyse and visualize the hidden relationships among data sources. Association rules have a strong place in representing those relationships by framing suitable rules; their two key parameters, support and confidence, drive the framing of such rules. Frequent itemset mining is also termed frequent pattern mining: when a combination of items recurs, we call it a pattern. The ultimate goal is to design rules over such frequent patterns effectively, i.e., in terms of time complexity and space complexity. The number of evolutionary algorithms pursuing this goal is increasing day by day. Bio-inspired algorithms hold a strong place in machine learning, mining, evolutionary computing and so on. The ant colony algorithm is one such algorithm, designed on the behaviour of biological ants. It is adopted for its parallel search and dynamic memory allocation, and works faster than the basic Apriori, AIS, and FP-Growth algorithms. Its two major parameters are the pheromone updating rule and the transition probability. The basic ant colony algorithm is improved by modifying the pheromone updating rule so as to reduce multiple scans over the data store and the number of candidate sets. The proposed approach was tested using MATLAB along with the WEKA toolkit. The experimental results prove that the stigmergic communication of the improved ant colony algorithm helps in mining frequent items faster and more effectively than the existing algorithms stated above.
['Suriya Sundaramoorthy', 'S. P. Shantharajah']
An improved ant colony algorithm for effective mining of frequent items
660,723
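The support and confidence parameters mentioned above can be sketched as follows. This is a naive illustrative enumeration for intuition only, not the improved ant colony algorithm (which avoids exactly this kind of exhaustive multiple-scan candidate generation); the function names and the toy market-basket data are assumptions of the sketch.

```python
from itertools import combinations

def support(transactions, itemset):
    """Fraction of transactions containing every item of `itemset`."""
    hits = sum(1 for t in transactions if itemset <= t)
    return hits / len(transactions)

def confidence(transactions, lhs, rhs):
    """Estimate of P(rhs | lhs): support of the union over support of the antecedent."""
    return support(transactions, lhs | rhs) / support(transactions, lhs)

def frequent_itemsets(transactions, min_support, max_size=3):
    """Brute-force frequent-itemset enumeration (no Apriori pruning)."""
    items = sorted(set().union(*transactions))
    frequent = []
    for k in range(1, max_size + 1):
        for combo in combinations(items, k):
            s = frozenset(combo)
            if support(transactions, s) >= min_support:
                frequent.append(s)
    return frequent
```

A rule lhs -> rhs is then kept when both support(lhs | rhs) and confidence(lhs, rhs) clear user-chosen thresholds; the pheromone-guided search described in the abstract replaces the exhaustive loop above.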
Background: In the clinical context, samples assayed by microarray are often classified by cell line or tumour type, and it is of interest to discover a set of genes that can be used as class predictors. The leukemia dataset of Golub et al. [1] and the NCI60 dataset of Ross et al. [2] present multiclass classification problems where three tumour types and nine cell lines respectively must be identified. We apply an evolutionary algorithm to identify the near-optimal set of predictive genes that classify the data. We also examine the initial gene selection step whereby the most informative genes are selected from the genes assayed.
['Thanyaluk Jirapech-Umpai', 'J. Stuart Aitken']
Feature selection and classification for microarray data analysis: Evolutionary methods for identifying predictive genes
213,006
This paper describes a participatory design process employed to invent an interface for 3D selection of neural pathways estimated from MRI imaging of human brains. Existing pathway selection interfaces are frustratingly difficult to use, since they require the 3D placement of regions-of-interest within the brain data using only a mouse and keyboard. The proposed system addresses these usability problems by providing an interface that is potentially more intuitive and powerful: converting 2D mouse gestures into 3D path selections. The contributions of this work are twofold: 1) we introduce a participatory design process in which users invent and test their own gestural selection interfaces using a Wizard of Oz prototype, and 2) this process has helped to yield the design of an interface for 3D pathway selection, a problem that is known to be difficult. Aspects of both the design process and the interface may generalize to other interface design problems.
['David Akers']
Wizard of Oz for participatory design: inventing a gestural interface for 3D selection of neural pathway estimates
397,991
A protocol for dynamic load balancing in distributed systems on the CSMA/CD local area network is presented. Using the protocol, the workload is evenly distributed throughout the system when load-balancing activity is triggered, and effective load distribution is accomplished through transmission of load-balancing messages in a collision-free manner. Analytical and simulation results are presented to show the efficiency of the protocol. >
['Junguk L. Kim', 'Jyh-Charn Liu', 'Ying Hao']
An all-sharing load balancing protocol in distributed systems on the CSMA/CD local area network
112,988
The authors had the task of evaluating an innovative style of exhibition in the Royal Museums of Scotland. Believing in the principle (established by the Gothenburg Group) that learning must be studied from the viewpoint of the learner, the authors sought a practicable technique of formative evaluation which would give illuminative insights into experience. Retaining the tools of questionnaire and interview to provide baseline data on reported experience and opinion, they applied Kagan's technique of Interpersonal Process Recall (structured video recall) to gain direct insight into the experience of visitors to the exhibition. This account explains how the technique suited this purpose, and describes the modifications to classic IPR found necessary for this context.
['Judith Jenkins George', 'Anne Stevenson']
Structured video recall: a museum application
315,066
Software "self-healing" is an approach to detect improper operations of software applications, transactions and business processes, and then to initiate corrective action without disrupting users. The software engineering literature contains many studies on software error detection and error correction. In this paper, we introduce a "container based self-healing" framework and provide an outline on how the framework can help in evolving a self-healing system for a complex distributed system.
['Rajesh Kumar Ravi', 'Vinaya Sathyanarayana']
Container based framework for self-healing software system
510,293
Typical scientific applications require vast amounts of processing power coupled with significant I/O capacity. Highly parallel computer systems can provide processing power at low cost, but have historically lacked I/O capacity. By evaluating the performance and scalability of the Intel iPSC/860 Concurrent File System and the Connection Machine DataVault, one can get an idea of the current state of parallel I/O performance. The performance tests show that both systems are able to achieve 70% of peak I/O throughput. >
['John Krystynak', 'Bill Nitzberg']
Performance characteristics of the iPSC/860 and CM-2 I/O systems
497,014
Free floating autonomous underwater manipulation is still an open research topic; an important challenge is offered by free floating manipulation, where the vehicle maintains relevant velocities during manipulation tasks. This paper focuses on the modelling and the control of an Autonomous Underwater Vehicle for Intervention (I-AUV). To this aim, an accurate model of the I-AUV has been implemented, including the interaction with the fluid. Then, a control architecture for the whole system is proposed, with particular attention on a suitable grasp planning strategy. Finally, a free floating manipulation task has been simulated to analyse in detail the performances of the I-AUV control system.
['Roberto Conti', 'Francesco Fanelli', 'Enrico Meli', 'Alessandro Ridolfi', 'Riccardo Costanzi']
A free floating manipulation strategy for Autonomous Underwater Vehicles
911,328
Given the nature of high volume streaming environments, not all tuples can be processed within the required response time. In such instances, it is crucial to dedicate resources to producing the most important results. We will demonstrate the Proactive Promotion Engine (PP) which employs a new preferential resource allocation methodology for priority processing of stream tuples. Our key contributions include: 1) our promotion continuous query language allows the specification of priorities within a query, 2) our promotion query algebra supports proactive promotion query processing, 3) our promotion query optimization locates an optimized PP query plan, and 4) our adaptive promotion control adapts online which subset of tuples are given priority online within a single physical query plan. Our “Portland Home Arrest” demonstration facilitates the capture of in-flight criminals using data generated by the Virginia Tech Network Dynamics and Simulation Science Laboratory via simulation-based modeling techniques.
['Karen Works', 'Elke A. Rundensteiner']
The Proactive Promotion Engine
134,542
Brachial artery flow-mediated vasodilation is increasingly used as a measure of endothelial function. High resolution ultrasound provides a noninvasive method to observe this flow-mediated vasodilation by monitoring the diameter of the artery over time following a transient flow stimulus. Since hundreds of ultrasound images are required to continuously monitor brachial diameter for the 2-3 min during which the vasodilator response occurs, an automated diameter estimation is desirable. However, vascular ultrasound images suffer from structural noise caused by the constructive and destructive interference of the backscattered signals, and the true boundaries of interest that define the diameter are frequently obscured by the multiple-layer structure of the vessel wall. These problems make automated diameter estimation strategies based on the detection of the vessel wall boundary difficult. The authors obtain a robust automated measurement of the vasodilator response by automatically locating the artery using a variable window method, which gives both the lumen center and width. The vessel wall boundary is detected by a global constraint deformable model, which is insensitive to the structural noise in the boundary area. The ambiguity between the desired boundary and other undesired boundaries is resolved by a spatiotemporal strategy. The authors' method provides excellent reproducibility both for interreader and intrareader analyses of percent change in diameter, and has been successfully used in analyzing over 4000 brachial flow-mediated vasodilation scans from several medical centers in the United States.
['Liexiang Fan', 'Peter Santago', 'Huai Jiang', 'David M. Herrington']
Ultrasound measurement of brachial flow-mediated vasodilator response
184,264
As recent programming languages provide improved conciseness and flexibility of syntax, the development of embedded or internal Domain-Specific Languages has increased. The field of Modeling and Simulation has had a long history of innovation in programming languages (e.g. Simula-67, GPSS). Much effort has gone into the development of Simulation Programming Languages. The ScalaTion project is working to develop an embedded or internal Domain-Specific Language for Modeling and Simulation which could streamline language innovation in this domain. One of its goals is to make the code concise, readable, and in a form familiar to experts in the domain. In some cases the code looks very similar to textbook formulas. To enhance readability by domain experts, a version of ScalaTion is provided that heavily utilizes Unicode. This paper discusses the development of the ScalaTion DSL and the underlying features of Scala that make this possible. It then provides an overview of ScalaTion highlighting some uses of Unicode. Statistical analysis capabilities needed for Modeling and Simulation are presented in some detail. The notation developed is clear and concise which should lead to improved usability and extendibility.
['Michael E. Cotterell', 'John A. Miller', 'Tom Horton']
Unicode in Domain-Specific Programming Languages for Modeling & Simulation ScalaTion as a Case Study
531,906
In this paper, we describe an event where 33 pre-service elementary school teachers planned and facilitated a School Maker Faire as part of their elementary science teaching methods course. We focus on one group of four pre-service teachers who facilitated a balloon rocket station and examine the decisions they made when facilitating children's interactions at the stations and how these decisions led to constraining or creating opportunities for children to engage in engineering design.
["Sean O'Brien", 'Alexandria K. Hansen', 'Danielle Boyd Harlow']
Educating Teachers for the Maker Movement: Pre-service Teachers' Experiences Facilitating Maker Activities
997,681
Empirical results were obtained for the blind source separation of more sources than mixtures using a previously proposed framework for learning overcomplete representations. This technique assumes a linear mixing model with additive noise and involves two steps: (1) learning an overcomplete representation for the observed data and (2) inferring sources given a sparse prior on the coefficients. We demonstrate that three speech signals can be separated with good fidelity given only two mixtures of the three signals. Similar results were obtained with mixtures of two speech signals and one music signal.
['Te-Won Lee', 'Michael S. Lewicki', 'Mark A. Girolami', 'Terrence J. Sejnowski']
Blind source separation of more sources than mixtures using overcomplete representations
483,767
The concept of local design rules is introduced. These are integrated circuit (IC) layout rules that define the optimum feature size and spacing in relation to the surrounding geometry and are used to increase the yield of ICs. The impact of these rules on the performance and reliability of ICs is discussed. Algorithms that enable the automatic application of track displacement, track width, and contact size local design rules to IC layout are presented. Simulation results are provided for some layout examples.
['Gerard A. Allan', 'Anthony Walton', 'R. J. Holwill']
A yield improvement technique for IC layout using local design rules
150,531
The paper presents an application of Influence Nets (INs) in the field of financial informatics. Influence Nets have primarily been used in war games to model effects based operations but, as shown in this paper, they can prove to be equally useful in other domains requiring decision making under uncertain situations. The primary advantage of INs lies in their ability to acquire knowledge from subject matter experts in problem domains that rely heavily on experts’ opinion. A sample case study from the fields of economics and finance is presented in this paper. The case study models the choices faced by a developing country trying to recover its economy, which is going through a difficult phase due to the global financial crisis, internal law and order situation and political instability.
['Sajjad Haider', 'Shafqat Ms', 'Shabih Haider']
Using Influence Nets in Financial Informatics: A Case Study of Pakistan
507,407
In the contiguous variant of the Scheduling with Interval Conflicts problem, there is a universe $\mathcal{U}$ consisting of elements being consecutive positive integers. An input is a sequence of conflicts in the form of intervals of length at most σ. For each conflict, an algorithm has to choose at most one surviving element, with the ultimate goal of maximizing the number of elements that survived all conflicts. We present an O(log σ / log log σ)-competitive randomized algorithm for this problem, beating the known lower bound of Ω(log σ) that holds for deterministic algorithms.
['Marcin Bienkowski', 'Artur Kraska', 'Paweł Schmidt']
A Randomized Algorithm for Online Scheduling with Interval Conflicts
679,825
Several recent proposals for sharing congestion information across concurrent flows between end-systems overlook an important problem: two or more flows sharing congestion state may in fact not share the same bottleneck. In this paper, we categorize the origins of this false sharing into two distinct cases: (i) networks with QoS enhancements such as differentiated services, where a flow classifier segregates flows into different queues, and (ii) networks with path diversity where different flows to the same destination address are routed differently. We evaluate the impact of false sharing on flow performance and investigate how false sharing can be detected by a sender. We discuss how a sender must respond upon detecting false sharing. Our results show that persistent overload can be avoided with window-based congestion control even for extreme false sharing, but higher bandwidth flows run at a slower rate. We find that delay and reordering statistics can be used to develop robust detectors of false sharing and are superior to those based on loss patterns. We also find that it is markedly easier to detect and react to false sharing than it is to start by isolating flows and merge their congestion state afterward.
['Aditya Akella', 'Srinivasan Seshan', 'Hari Balakrishnan']
The impact of false sharing on shared congestion management
101,950
Supporting Practices in Professional Communities Using Mobile Cloud Services.
['Dejan Kovachev', 'Ralf Klamma']
Supporting Practices in Professional Communities Using Mobile Cloud Services.
750,532
A general holistic framework, also called a process—named “Lean Product Development Flow (LPDF)”—for organizing the engineering work of Product Development (PD), has been proposed as a contribution to the emerging field of Lean Systems Engineering. The framework is based on Lean Principles, with emphasis on PD value-pulling workflow pulsed by takt periods. The value is defined as (1) mission assurance/product quality (the traditional goals of Systems Engineering) and (2) reduced program cost and schedule achieved by a radical reduction of waste. LPDF is recommended for smaller design programs based on a high degree of legacy knowledge, with technologies mature enough so that the program feasibility is not in question. LPDF may involve limited-scope research, provided that it can be identified early in the program, and carried out separate from the main workflow. The paper is focused on aerospace and defense programs, which are presently burdened with as much as 60–90% of waste, but the process is also applicable to commercial programs. LPDF can be applied to the entire PD, to one or more milestones, and to a multilevel program. LPDF requires both detailed preparations and disciplined execution. The preparations include detailed Value Stream Mapping, separation of research from the main workflow, parsing of the Value Stream map into Takt Periods, architecting the LPDF team using dynamic allocation of resources, and team training. LPDF execution is organized as a flow through a series of short and equal work Takt Periods, each followed by an Integrative Event for structured, comprehensive coordination. Strategic and flexible tactical mitigations of uncertainties must be applied during the flow. LPDF also requires excellent leadership of a Chief Engineer, modeled after Toyota and Honda, who is a dedicated program “owner,” an expert systems designer, a strong leader focused on the program and product integrity, and skilled in consensus-building. The Chief Engineer is responsible for the entire program, with Assistant Chiefs assisting in selected technical areas, and a Project Manager assisting with program administration. An industrial pilot program is currently being undertaken to validate the method. © 2004 Wiley Periodicals, Inc. Syst Eng 7: 352–376, 2004
['Bohdan W. Oppenheim']
Lean product development flow
398,752
This paper introduces a new strategy for setting the regularization parameter when solving large-scale discrete ill-posed linear problems by means of the Arnoldi-Tikhonov method. This new rule is essentially based on the discrepancy principle, although no initial knowledge of the norm of the error that affects the right-hand side is assumed; an increasingly more accurate approximation of this quantity is recovered during the Arnoldi algorithm. Some theoretical estimates are derived in order to motivate our approach. Many numerical experiments performed on classical test problems as well as image deblurring problems are presented.
['Silvia Gazzola', 'Paolo Novati', 'Maria Rosaria Russo']
Embedded techniques for choosing the parameter in tikhonov regularization
126,472
Background: We previously presented a group theoretical model that describes psychiatric patient states or clinical data in a graded vector-like format based on modulo groups. Meanwhile, the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5, the current version), is frequently used for diagnosis in daily psychiatric treatments and biological research. The diagnostic criteria of DSM-5 contain simple binominal items relating to the presence or absence of specific symptoms. In spite of its simple form, the practical structure of the DSM-5 system is not sufficiently systemized for data to be treated in a more rationally sophisticated way. To view the disease states in terms of symmetry in the manner of abstract algebra is considered important for the future systematization of clinical medicine.
['Jitsuki Sawamura', 'Shigeru Morishita', 'Jun Ishigooka']
Symmetrical treatment of "Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition", for major depressive disorders.
638,316
We present a secure Internet of Things (IoT) architecture for Smart Cities. The large-scale deployment of IoT technologies within a city promises to make city operations efficient while improving quality of life for city inhabitants. Mission-critical Smart City data, captured from and carried over IoT networks, must be secured to prevent cyber attacks that might cripple city functions, steal personal data and inflict catastrophic harm. We present an architecture containing four basic IoT architectural blocks for secure Smart Cities: Black Network, Trusted SDN Controller, Unified Registry and Key Management System. Together, these basic IoT-centric blocks enable a secure Smart City that mitigates cyber attacks beginning at the IoT nodes themselves.
['Shaibal Chakrabarty', 'Daniel W. Engels']
A secure IoT architecture for Smart Cities
706,275
Multi-agent system (MAS) is a blooming research area, which exhibits a new paradigm for the design, modeling and implementation of complex systems. A significant amount of effort has been made in establishing standards for agent communication and MAS platforms. However, communication is not the only difficulty faced by agent researchers. Research is also directed towards the formal aspects of agents and declarative approaches to model agents. This paper explores the bonding between high-level reasoning engines and low-level agent platforms in the practical setting of using three formal agent reasoning implementations together with an existing agent platform, OPAL, that supports the FIPA standards. We focus our discussion in this paper on our approach to provide declarative agent programming support in connection with the OPAL platform, and show how declarative goals can be used to glue the internal micro agents together to form the hierarchical architecture of the platform.
['Mengqiu Wang', 'Mariusz Nowostawski', 'Martin K. Purvis']
Declarative Agent Programming Support for a FIPA-Compliant Agent Platform
869,607
Applications using Air-Ground communications are expected to grow in the future. Low altitude phases of these wireless links are considered severe channels, as they experience huge delay and Doppler spreads; however, they are not yet accurately characterized in the literature. This paper presents an analytic three dimensional Air-Ground Doppler-delay spread spectrum model for dense scattering environments. The analysis is done by revisiting previous work in the literature. Also, a terrain-based Doppler-delay spectrum simulator is proposed, where the results are verified using a real Digital Elevation Model (DEM).
['Mostafa Ibrahim', 'Huseyin Arslan']
Air-Ground Doppler-delay spread spectrum for dense scattering environments
589,308
This paper studies the issue of defining the fitness function for ranking-based selection. Two families of parametric nonlinear functions are considered, for reaching different selection pressures, controlled by the function parameter. Both the static versions and some dynamic varying versions of such functions are considered. The usual linear fitness function is shown to be systematically outperformed by several instances of nonlinear fitness. After a multiobjective analysis, it seems to be possible to recommend the usage of a specific static nonlinear fitness function.
['Vandenberg Lira Silva', 'A.R. da Cruz', 'Eduardo G. Carrano', 'Frederico G. Guimarães', 'Ricardo H. C. Takahashi']
On Nonlinear Fitness Functions for Ranking-Based Selection
263,418
In this paper, we present EX-COFALE, an extension to an existing open-source, web-based adaptive e-learning system, namely COFALE. Although COFALE offers facilities for adaptive content presentation, adaptive use of pedagogical devices and adaptive communication, it lacks facilities for adaptive student assessment. EX-COFALE remedies this deficiency of COFALE by allowing for automated test creation and assessment based on the students' knowledge information. To this end, COFALE has been modified to allow for representation of associations between test questions and learning concepts. Also, assessment is made at two levels, the concept and the goal level, taking into account the difficulty level of the questions. To technically achieve the above, expert systems technology is used.
['Ioannis Hatzilygeroudis', 'Constantinos Koutsojannis', 'Nikolaos Papachristou']
Adding adaptive assessment capabilities to an e-learning system
7,837
Accuracy has been used traditionally to evaluate the performance of classifiers. However, it is well known that accuracy is not able to capture all the different factors that characterize the performance of a multiclass classifier. In this manuscript, accuracy is studied and analyzed as a weighted average of the classification rate of each class. This perspective allows us to propose the dispersion of the classification rate of each class as its complementary measure. In this sense, a graphical performance metric, which is defined in a two dimensional space composed by accuracy and dispersion, is proposed to evaluate the performance of classifiers. We show that the combined values of accuracy and dispersion must fall within a clearly bounded two dimensional region, different for each problem. The nature of this region depends only on the a priori probability of each class, and not on the classifier used. Thus, the performance of multiclassifiers is represented in a two dimensional space where the models can be compared in a more fair manner, providing greater awareness of the strategies that are more accurate when trying to improve the performance of a classifier. Furthermore, we experimentally analyze the behavior of seven different performance metrics based on the computation of the confusion matrix values in several scenarios, identifying clusters and relationships between measures. As shown in the experimentation, the graphical metric proposed is especially suitable in challenging datasets that are highly imbalanced and have a high number of classes. The approach proposed is a novel point of view to address the evaluation of multiclassifiers and it is an alternative to other evaluation measures used in machine learning.
['Mariano Carbonero-Ruz', 'Francisco J. Martínez-Estudillo', 'Francisco Fernández-Navarro', 'David Becerra-Alonso', 'Alfonso C. Martínez-Estudillo']
A two dimensional accuracy-based measure for classification performance
955,457
We consider point-to-point multiple antenna communication systems in which multiple data streams are transmitted simultaneously. We consider systems which use Tomlinson-Harashima (TH) precoding to pre-subtract the interference among these data streams at the transmitter. In a conventional Tomlinson-Harashima precoding system, transmitter feedback and receiver feedforward processing matrices are used for interference pre-subtraction and channel spatial equalization. In addition to these matrices, we consider a transmitter precoding matrix that generalizes the permutation matrix used for ordering the precoded symbols in existing designs. This extra degree of freedom offers the potential for improved performance. In particular, under a mild signal to noise ratio (SNR) constraint, we find an optimum zero-forcing precoding matrix that minimizes the average symbol error rate (SER) of the data streams subject to a transmitter power constraint. We also show that the proposed design is optimal from an average bit error rate (BER) perspective. Simulation studies show significant improvement over conventional zero-forcing Tomlinson-Harashima precoders.
['Michael Botros Shenouda', 'Timothy N. Davidson']
Minimum SER Zero-Forcing Transmitter Design for MIMO Channels with Interference Pre-Subtraction
489,847
A reputation management system can promote trust in transactions in an online consumer-to-consumer (C2C) market. We model a C2C market by employing an agent-based approach. To discuss the characteristics of goods traded on the market, we define temptation and contribution indexes based on the payoff matrix of a game. According to the results of a simulation conducted with the model, we find that a positive reputation management system can promote cooperative behavior in online C2C markets. Moreover, we also find that such a system is especially effective for an online C2C market where expensive physical goods are traded, whereas a negative reputation management system is effective for an online C2C market where information goods are traded.
['Hitoshi Yamamoto', 'Kazunari Ishida', 'Toshizumi Ohta']
Temptation and contribution in c2c transactions: implications for designing reputation management systems
601,397
Software effort estimation (SEE) is a core activity in all software processes and development lifecycles. A range of increasingly complex methods has been considered in the past 30 years for the prediction of effort, often with mixed and contradictory results. The comparative assessment of effort prediction methods has therefore become a common approach when considering how best to predict effort over a range of project types. Unfortunately, these assessments use a variety of sampling methods and error measurements, making comparison with other work difficult. This article proposes an automatically transformed linear model (ATLM) as a suitable baseline model for comparison against SEE methods. ATLM is simple yet performs well over a range of different project types. In addition, ATLM may be used with mixed numeric and categorical data and requires no parameter tuning. It is also deterministic, meaning that results obtained are amenable to replication. These and other arguments for using ATLM as a baseline model are presented, and a reference implementation described and made available. We suggest that ATLM should be used as a baseline of effort prediction quality for all future model comparisons in SEE.
['Peter A. Whigham', 'Caitlin A. Owen', 'Stephen G. MacDonell']
A Baseline Model for Software Effort Estimation
576,244
The Heterogeneous Feature Code (HFC), a coding scheme based on both human and machine selected local features, is proposed for expression recognition. The HFC consists of two component codes, the Human Observable Code (HOC) and Boost Feature Code (BFC). The HOC is developed to capture the local deformation patches observable to humans when the face is showing an expression. Different expressions appear with a specific set of such patches with different deformation patterns at different locations, which are considered in the configuration of the HOC codewords. The BFC is built upon the mutually connected Haar-like features selected by a set of Adaboost classifiers followed by a multi-class SVM classifier. Unlike the HOC features, the BFC features can hardly be selected by human eyes. The HFC is probably the first code that combines human selected features and machine selected features, and proven effective for expression recognition. Performance evaluation on the Cohn-Kanade extension (CK+) database and the Japanese Female Facial Expression (JAFFE) shows that the HFC outperforms either HOC or BFC component code alone, and is competitive to the state-of-the-art.
['Gee-Sern Hsu', 'Shang-Min Yeh']
Heterogeneous feature code for expression recognition
15,777
The Institute of Computer Science of the Federal University of Rio Grande do Sul is participating in a project to develop a computer system infrastructure for supporting the Brazilian health ministry’s SUS (Sistema Unificado de Saude) program. In order to construct this heterogeneous system, we have to start from existing information sources (possibly "legacy") and their services and gradually integrate them with new software and hardware architectures, guided by requirements evolution and technical innovations. This requires to combine distributed query processing with a service-based middleware framework as defined, e.g., by CORBA. This paper describes the conceptual and implementational aspects of the SUS-specific solution to this challenge.
['J. M. V. de Castilho', 'R. P. da Rocha', 'Theo Härder', 'Joachim Thomas']
Global database views in a federation of autonomous databases
510,376
Computational techniques for topic classification can support qualitative research by automatically applying labels in preparation for qualitative analyses. This paper presents an evaluation of supervised learning techniques applied to one such use case, namely, that of labeling emotions, instructions and information in suicide notes. We train a collection of one-versus-all binary support vector machine classifiers, using cost-sensitive learning to deal with class imbalance. The features investigated range from a simple bag-of-words and n-grams over stems, to information drawn from syntactic dependency analysis and WordNet synonym sets. The experimental results are complemented by an analysis of systematic errors in both the output of our system and the gold-standard annotations. Category: Smart and intelligent computing
['Jonathon Read', 'Erik Velldal', 'Lilja Øvrelid']
Topic Classification for Suicidology
127,684
A Survey of Mathematical Methods for the Construction of Geometric Tolerance Zones
['T. M. Kethara Pasupathy', 'Edward P. Morse', 'Robert G. Wilhelm']
A Survey of Mathematical Methods for the Construction of Geometric Tolerance Zones
50,630
Hot data identification can be applied to a variety of fields. Particularly in flash memory, it has a critical impact on its performance (due to a garbage collection) as well as its life span (due to a wear leveling). Although the hot data identification is an issue of paramount importance in flash memory, little investigation has been made. Moreover, all existing schemes focus almost exclusively on a frequency viewpoint. However, recency also must be considered equally with the frequency for effective hot data identification. In this paper, we propose a novel hot data identification scheme adopting multiple bloom filters to efficiently capture finer-grained recency as well as frequency. In addition to this scheme, we propose a Window-based Direct Address Counting (WDAC) algorithm to approximate an ideal hot data identification as our baseline. Unlike the existing baseline algorithm that cannot appropriately capture recency information due to its exponential batch decay, our WDAC algorithm, using a sliding window concept, can capture very fine-grained recency information. Our experimental evaluation with diverse realistic workloads including real SSD traces demonstrates that our multiple bloom filter-based scheme outperforms the state-of-the-art scheme. In particular, ours not only consumes 50% less memory and requires less computational overhead up to 58%, but also improves its performance up to 65%.
['Dongchul Park', 'David Hung-Chang Du']
Hot data identification for flash-based storage systems using multiple bloom filters
122,080
Formalizing Lattice-Theoretical Aspects of Rough and Fuzzy Sets
['Adam Grabowski', 'Takashi Mitsuishi']
Formalizing Lattice-Theoretical Aspects of Rough and Fuzzy Sets
642,427
This paper presents a research-through-design study into interactive systems for a primary school setting to support teachers' everyday tasks. We developed an open-ended interactive system called FireFlies, which is intended to be interacted with in the periphery of the teacher's attention and thereby become an integral part of everyday routines. FireFlies uses light-objects and audio as a (background) information display. Furthermore, teachers can manipulate the light and audio through physical interaction. A working prototype of FireFlies was deployed in four different classrooms for six weeks. Qualitative results reveal that all teachers found a relevant way of working with FireFlies, which they applied every day of the evaluation. After the study had ended and the systems were removed from the schools, the teachers kept reaching for the devices and mentioned they missed FireFlies, which shows that it had become part of their everyday routine.
['Saskia Bakker', 'Elise van den Hoven', 'Berry Eggen']
FireFlies: physical peripheral interaction design for the everyday routine of primary school teachers
1,433
Every physical sensor is pre-defined to act on purpose. Likewise, a virtual sensor is destined to act on application needs, perhaps doing more with a single sensor device. This paper explores the conceptual form of a fundamental virtual sensor and also provides experimental results so that it can be deployed as a cloud based service.
['Atrayee Gupta', 'Nandini Mukherjee']
Poster: Virtual Sensor: The Purpose and Applications
832,784
The acquisition of syntactic categories is a crucial step in the process of acquiring syntax. At this stage, before a full grammar is available, only surface cues are available to the learner. Previous computational models have demonstrated that local contexts are informative for syntactic categorization. However, local contexts are affected by sentence-level structure. In this paper, we add sentence type as an observed feature to a model of syntactic category acquisition, based on experimental evidence showing that pre-syntactic children are able to distinguish sentence type using prosody and other cues. The model, a Bayesian Hidden Markov Model, allows for adding sentence type in a few different ways; we find that sentence type can aid syntactic category acquisition if it is used to characterize the differences in word order between sentence types. In these models, knowledge of sentence type permits similar gains to those found by extending the local context.
['Stella Frank', 'Sharon Goldwater', 'Frank Keller']
Adding sentence types to a model of syntactic category acquisition
348,140