abstract | authors | title | __index_level_0__ |
---|---|---|---|
Compositional coordination models and languages serve as a means to formally specify and implement component and service connectors. They support large-scale distributed applications by allowing construction of complex component connectors out of simpler ones. In this paper, we extend the design model for the channel-based coordination language Reo by introducing designs for timed connectors. Design is a key concept in Unifying Theories of Programming (UTP), used to describe the contract between programmer and client. The model developed in this paper properly specifies the properties of timed channels and timed component connectors. An implementation of the design model in JTom is provided. | ['Sun Meng'] | Connectors as Designs: The Time Dimension | 500,967 |
Fast solver for Toeplitz bidiagonal systems of linear equations. | ['Przemyslaw Stpiczynski'] | Fast solver for Toeplitz bidiagonal systems of linear equations. | 807,089 |
Grammatical Error Detection and Correction using a Single Maximum Entropy Model | ['Peilu Wang', 'Zhongye Jia', 'Hai Zhao'] | Grammatical Error Detection and Correction using a Single Maximum Entropy Model | 611,009 |
This paper describes a method to design a predistorter (PD) for a GaN-FET power amplifier (PA) using nonlinear parameters extracted from measured IMD, which exhibits asymmetrical peaks peculiar to a memory effect with a second-order lag. Computationally efficient equations have been reported by C. Rey et al. for the memory effect with a first-order lag; their equations are extended here to be applicable to the memory effect with the second-order lag. The extension provides a recursive algorithm for the cancellation signals of the PD, in which each update uses signals at only two sampling points. The algorithm is equivalent in computational efficiency to a memory depth of two. The numbers of multiplications and additions required to update all the cancellation signals are counted, and it is confirmed that the algorithm reduces computational cost to less than half that of the memory polynomials in recent papers. A computer simulation has clarified that the PD improves the adjacent channel leakage power ratio (ACLR) of OFDM signals with several hundred subcarriers, corresponding to 4G mobile radio communications. It has been confirmed that a fifth-order PD is effective up to power levels close to 1 dB compression. The improvement in error vector magnitude (EVM) provided by the PD is also simulated for OFDM signals whose subcarrier channels are modulated by 16 QAM. | ['Yasuyuki Oishi', 'Shigekazu Kimura', 'Eisuke Fukuda', 'Takeshi Takano', 'Daisuke Takago', 'Yoshimasa Daido', 'Kiyomichi Araki'] | Design of Predistorter with Efficient Updating Algorithm of Power Amplifier with Memory Effect | 443,014 |
Tall buildings are ubiquitous in major cities and house the homes and workplaces of many individuals. However, relatively few studies have been carried out to study the dynamic characteristics of tall buildings based on field measurements. In this paper, the dynamic behavior of the Green Building, a unique 21-story tall structure located on the campus of the Massachusetts Institute of Technology (MIT, Cambridge, MA, USA), was characterized and modeled as a simplified lumped-mass beam model (SLMM), using data from a network of accelerometers. The accelerometer network was used to record structural responses due to ambient vibrations, blast loading, and the October 16th 2012 earthquake near Hollis Center (ME, USA). Spectral and signal coherence analysis of the collected data was used to identify natural frequencies, modes, foundation rocking behavior, and structural asymmetries. A relation between foundation rocking and structural natural frequencies was also found. Natural frequencies and structural acceleration from the field measurements were compared with those predicted by the SLMM which was updated by inverse solving based on advanced multiobjective optimization methods using the measured structural responses and found to have good agreement. | ['Young-Jin Cha', 'Peter Trocha', 'Oral Buyukozturk'] | Field Measurement-Based System Identification and Dynamic Response Prediction of a Unique MIT Building | 826,791 |
Field data gathering techniques such as Contextual Inquiry enable a design team to gather the detailed data they need. These techniques produce enormous amounts of information on how the customers of a system work. This creates a new problem---how to represent all this detail in a coherent, comprehensible form, which can be a suitable basis for design. An affinity diagram effectively shows the scope of the customer problem, but is less effective at capturing and coherently representing the details of how people work. Design teams need a way to organize this detail so they can use it in their own development process. In this tutorial we present our latest methods for representing detailed information about work practice and using these representations to drive system design. These methods have been adopted over the last few years by major product development and information systems organizations. We show how to represent the work of individual users in models, how to generalize these to describe a whole market or department, and how to use these to drive innovative design. We present the process by which we build and use the models and practice key steps. We show how these methods fit into the overall design process, and summarize Contextual Design, which gathers field data and uses it to drive design through a well-defined series of steps. The tutorial is appropriate for those who have used field techniques, especially Contextual Inquiry, and would like to put more structure on the process of using field data. We use shopping as our example of work practice throughout this tutorial, since shopping is simple and understood by everyone. We encourage participants to go grocery shopping shortly before the tutorial, and bring any shopping list they may have used, their store receipt, and a drawing of the store layout and their movement through it. | ['Karen Holtzblatt', 'Hugh Beyer'] | Contextual design: using customer work models to drive systems design | 127,376 |
The challenge of equation-based analog synthesis comes from its dual nature: functions producing good least-square fits to SPICE-generated data are non-convex, hence not amenable to efficient optimization. In this paper, we leverage recent progress on Semidefinite Programming (SDP) relaxations of polynomial (non-convex) optimization. Using a general polynomial allows for much more accurate fitting of SPICE data compared to the more restricted functional forms. Recent SDP techniques for convex relaxations of polynomial optimizations are powerful but alone still insufficient: even for small problems, the resulting relaxations are prohibitively high dimensional. We harness these new polynomial tools and realize their promise by introducing a novel regression technique that fits non-convex polynomials with a special sparsity structure. We show that the coupled sparse fitting and optimization (CSFO) flow that we introduce allows us to find accurate high-order polynomials while keeping the resulting optimization tractable. Using established circuits for optimization experiments, we demonstrate that by handling higher-order polynomials we reduce fitting error to 3.6% from 10%, on average. This translates into a dramatic increase in the rate of constraint satisfaction: for a 1% violation threshold, the success rate is increased from 0% to 78%. | ['Ye Wang', 'Michael Orshansky', 'Constantine Caramanis'] | Enabling Efficient Analog Synthesis by Coupling Sparse Regression and Polynomial Optimization | 380,227 |
Many communication algorithms in parallel systems can be efficiently solved by obtaining edge disjoint Hamiltonian cycles in the interconnection topology of the network. The Eisenstein-Jacobi (EJ) network generated by α = a + bρ, where ρ = (1 + i√3)/2, is a degree six symmetric interconnection network. The hexagonal network is a special case of the EJ network that can be obtained with α = a + (a + 1)ρ. Generating three edge disjoint Hamiltonian cycles in the EJ network with generator α = a + bρ for gcd(a, b) = 1 has been shown before. However, this problem has not been solved when gcd(a, b) = d > 1. In this paper, some results on this problem are given. Applications of Hamiltonian cycles are given in the introduction. A rectangular representation is constructed to help find the solution, since it gives a clear visualization of the network. The first two edge disjoint Hamiltonian cycles are constructed based on the rectangular representation. The third Hamiltonian cycle is divided into two cases, depending on whether the norm is odd or even, and then constructed. | ['Zaid Hussain', 'Bella Bose', 'Abdullah Al-Dhelaan'] | Edge disjoint Hamiltonian cycles in Eisenstein-Jacobi networks | 581,301 |
FingerSynth: Wearable Transducers for Exploring the Environment and Playing Music Everywhere. | ['Gershon Dublon', 'Joseph A. Paradiso'] | FingerSynth: Wearable Transducers for Exploring the Environment and Playing Music Everywhere. | 801,309 |
Peer-to-peer swarming is one of the de facto solutions for distributed content dissemination in today's Internet. By leveraging resources provided by clients, swarming systems reduce the load on and costs to publishers. However, there is a limit to how much cost savings can be gained from swarming; for example, for unpopular content peers will always depend on the publisher in order to complete their downloads. In this paper, we investigate such a dependence of peers on a publisher. For this purpose, we propose a new metric, namely swarm self-sustainability. A swarm is referred to as self-sustaining if all its blocks are collectively held by peers; the self-sustainability of a swarm is the fraction of time in which the swarm is self-sustaining. We pose the following question: how does the self-sustainability of a swarm vary as a function of content popularity, the service capacity of the users, and the size of the file? We present a model to answer the posed question. We then propose efficient solution methods to compute self-sustainability. The accuracy of our estimates is validated against simulations. Finally, we also provide closed-form expressions for the fraction of time that a given number of blocks is collectively held by peers. | ['Daniel Sadoc Menasché', 'Antonio Augusto de Aragão Rocha', 'Edmundo de Souza e Silva', 'Rosa Maria Meri Leão', 'Donald F. Towsley', 'Arun Venkataramani'] | Estimating self-sustainability in peer-to-peer swarming systems | 58,522 |
Block-fading is a popular channel model that approximates the behavior of different wireless communication systems. In this paper, a union bound on the error probability of binary-coded systems over block-fading channels is proposed. The bound is based on uniform interleaving of the coded sequence prior to transmission over the channel. The distribution of error bits over the fading blocks is computed. For a specific distribution pattern, the pairwise error probability is derived. Block-fading channels modeled as Rician and Nakagami distributions are studied. We consider coherent receivers with perfect and imperfect channel side information (SI) as well as noncoherent receivers employing square-law combining. Throughout the paper, imperfect SI is obtained using pilot-aided estimation. A lower bound on the performance of iterative receivers that perform joint decoding and channel estimation is obtained assuming the receiver knows the correct data and uses them as pilots. From this, the tradeoff between channel diversity and channel estimation is investigated and the optimal channel memory is approximated analytically. Furthermore, the optimal energy allocation for pilot signals is found for different channel memory lengths. | ['Salam A. Zummo', 'Ping Cheng Yeh', 'Wayne E. Stark'] | A union bound on the error probability of binary codes over block-fading channels | 116,161 |
With the rapid progress of wireless technology, mobile users can retrieve multiple real-time data items with portable devices from mobile service centers. Providing deadline guarantees for queries over mobile environments is a challenging problem due to real-time data arrival rates and time-varying data contents. In this paper, we propose a prediction-based scheme for periodic continuous queries over wireless multi-channels. Effectively disseminating various materials in mobile environments is an important issue. This paper highlights important problems that keep mobile real-time systems from achieving the performance they could for periodic continuous queries. While current systems aim to foster significant improvements in access latency, this paper argues that most systems still deliver online material without performance concerns. Mobile real-time infrastructure has become a topic for research. A performance-driven model for mobile real-time delivery is proposed in this paper. A novel methodology for deploying periodic continuous queries based on a prediction mechanism in mobile environments is presented. We focus on describing the dynamic processing in terms of performance, rather than the details of its implementation. | ['Ding-Jung Chiang', 'Ching-Sheng Wang', 'Chien-Liang Chen', 'Wen-Jay Lo'] | Scheduling analysis of broadcasting real-time data based on prediction scheme over wireless multi-channels | 217,873 |
We propose an efficient heuristic algorithm to rearrange multicast trees proactively in delay constrained dynamic membership multicast networks. The objective is to construct low cost multicast tree with controlled number of disrupted members within a very short rearrangement time when a member joins and leaves the tree. In addition, a heuristic algorithm to obtain near-optimal solution to the problem is introduced as the benchmark. A new performance index to gauge the efficiency of the proposed heuristics more accurately is also introduced. Better performance is obtained when compared with existing methods using simulations. | ['Keen-Mun Yong', 'Gee-Swee Poo', 'Tee-Hiang Cheng'] | Proactive Rearrangement in Delay Constrained Multicast with Dynamic Membership Support | 129,271 |
Speeding up R-LWE post-quantum key exchange. | ['Shay Gueron', 'Fabian Schlieker'] | Speeding up R-LWE post-quantum key exchange. | 989,882 |
A model based on strikingly different philosophical assumptions from those currently popular is proposed for the design of online subject catalog access. Three design principles are presented and discussed: uncertainty (subject indexing is indeterminate and probabilistic beyond a certain point), variety (by Ashby’s law of requisite variety, variety of searcher query must equal variety of document indexing), and complexity (the search process, particularly during the entry and orientation phases, is subtler and more complex, on several grounds, than current models assume). Design features presented are an access phase, including entry and orientation, a hunting phase, and a selection phase. An end-user thesaurus and a front-end system mind are presented as examples of online catalog system components to improve searcher success during entry and orientation. The proposed model is “wrapped around” existing Library of Congress subject-heading indexing in such a way as to enhance access greatly without requiring reindexing. It is argued that both for cost reasons and in principle this is a superior approach to other design philosophies. | ['Marcia J. Bates'] | Subject access in online catalogs: A design model | 337,466 |
This paper develops a sensor fault detection scheme for rail vehicle passive suspension systems, using a fault detection observer, in the presence of uncertain track regularity and vehicle noises which are modeled as external disturbances and stochastic process signals. To design the fault detection observer, the suspension system states are augmented with the disturbances treated as new states, leading to an augmented and singular system with stochastic noises. Using system output measurements, the observer is designed to generate the residual signal needed for fault detection. Existence conditions for observer design are analyzed and illustrated. In terms of the residual signal, both a fault detection threshold and a fault detectability condition are obtained, forming a systematic detection algorithm. Simulation results on a realistic vehicle system model are presented to illustrate the observer behavior and fault detection performance. | ['Zehui Mao', 'Yanhao Zhan', 'Gang Tao', 'Bin Jiang', 'Xing-Gang Yan'] | Sensor fault detection for rail vehicle suspension systems with disturbances and stochastic noises | 942,614 |
A central question in algorithmic game theory is to measure the inefficiency (ratio of costs) of Nash equilibria (NE) with respect to socially optimal solutions. The two established metrics used for this purpose are price of anarchy (POA) and price of stability (POS), which respectively provide upper and lower bounds on this ratio. A deficiency of these metrics, however, is that they are purely existential and shed no light on which of the equilibrium states are reachable in an actual game, i.e., via natural game dynamics. This is particularly striking if these metrics differ significantly, such as in network design games where the exponential gap between the best and worst NE states originally prompted the notion of POS in game theory (Anshelevich et al., FOCS 2002). In this paper, we make progress toward bridging this gap by studying network design games under natural game dynamics. First we show that in a completely decentralized setting, where agents arrive, depart, and make improving moves in an arbitrary order, the inefficiency of NE attained can be polynomially large. This implies that the game designer must have some control over the interleaving of these events in order to force the game to attain efficient NE. We complement our negative result by showing that if the game designer is allowed to execute a sequence of improving moves to create an equilibrium state after every batch of agent arrivals or departures, then the resulting equilibrium states attained by the game are exponentially more efficient, i.e., the ratio of costs compared to the optimum is only logarithmic. Overall, our two results establish that in network games, the efficiency of equilibrium states is dictated by whether agents are allowed to join or leave the game in arbitrary states, an observation that might be useful in analyzing the dynamics of other classes of games with divergent POS and POA bounds. | ['Shuchi Chawla', 'Joseph', 'Naor', 'Debmalya Panigrahi', 'Mohit Singh', 'Seeun Umboh'] | Timing Matters: Online Dynamics in Broadcast Games | 940,484 |
Efficient information searching and retrieval methods are needed to navigate the ever increasing volumes of digital information. Traditional lexical information retrieval methods can be inefficient and often return inaccurate results. To overcome problems such as polysemy and synonymy, concept-based retrieval methods have been developed. One such method is Latent Semantic Indexing (LSI), a vector-space model, which uses the singular value decomposition (SVD) of a term-by-document matrix to represent terms and documents in k-dimensional space. As with other vector-space models, LSI is an attempt to exploit the underlying semantic structure of word usage in documents. During the query matching phase of LSI, a user's query is first projected into the term-document space, and then compared to all terms and documents represented in the vector space. Using some similarity measure, the nearest (most relevant) terms and documents are identified and returned to the user. The current LSI query matching method requires that the similarity measure be computed between the query and every term and document in the vector space. In this paper, the kd-tree searching algorithm is used within a recent LSI implementation to reduce the time and computational complexity of query matching. The kd-tree data structure stores the term and document vectors in such a way that only those terms and documents that are most likely to qualify as nearest neighbors to the query will be examined and retrieved. | ['M. K. Hughey', 'Michael W. Berry'] | Improved Query Matching Using kd-Trees: A Latent Semantic Indexing Enhancement | 318,983 |
When a piece of software is loaded on an untrusted machine it can be analyzed by an attacker who could discover any secret information hidden in the code. Software protection by continuously updating the components deployed in an untrusted environment forces a malicious user to restart her or his analyses, thus reducing the time window in which the attack is feasible. In this setting, both the attacker and the defender need to know how to direct their(necessarily limited) efforts. In this paper, we analyze the problem from a game theoretical perspective in order to devise a rational strategy to decide when and which orthogonal updates have to be scheduled in order to minimize the security risks of tampering. We formalize the problem of protecting a set of software modules and we cast it as a game. Since the update strategy is observable by the attacker, we show that the Leader-Follower equilibrium is the proper solution concept for such a game and we describe the basic method to compute it. | ['Nicola Basilico', 'Andrea Lanzi', 'Mattia Monga'] | A Security Game Model for Remote Software Protection | 969,872 |
Comprehending and modifying software is at the heart of many software engineering tasks, and this explains the growing interest that software reverse engineering has gained in the last 20 years. Broadly speaking, reverse engineering is the process of analyzing a subject system to create representations of the system at a higher level of abstraction. This paper briefly presents an overview of the field of reverse engineering, reviews main achievements and areas of application, and highlights key open research issues for the future. | ['Gerardo Canfora', 'M. Di Penta'] | New Frontiers of Reverse Engineering | 452,532 |
Tyre Footprint Reconstruction in the Vehicle Axle Weight-in-Motion Measurement by Fibre-optic Sensors | ['Alexander Grakovski', 'Alexey Pilipovecs', 'Igor Kabashkin', 'Elmars Petersons'] | Tyre Footprint Reconstruction in the Vehicle Axle Weight-in-Motion Measurement by Fibre-optic Sensors | 777,867 |
In this paper, we present a new method for a locally adaptive region detector called Bilateral kernel-based Region Detector (BIRD). This work is to detect stable regions from images by consecutively computing a multiscale decomposition based on the bilateral kernel. The BIRD regards a region as covariant if it exhibits predictability in its photometric distance over spatial distance. Distinctiveness and robustness across scales are achieved by selecting the extremely stable regions through sequential scales. Our method is simple and easy to implement. Experimental results show that our method outperforms competing affine region detection methods in efficiency on region detection. | ['Woon Cho', 'Sung-Yeol Kim', 'Andreas F. Koschan', 'Mongi A. Abidi'] | Bilateral kernel-based Region Detector | 5,070 |
Wireless mesh networks (WMN) have emerged as an economical means for delivering last-mile Internet access. Multicast is a fundamental service in WMNs because it efficiently distributes data among a group of nodes. Multicast algorithms in WMNs are designed to maximize system throughput and minimize delay. Previous work has unrealistically assumed that the underlying WMN is link-homogeneous. We consider one important form of link heterogeneity: different link loss ratios, or equivalently different ETX. We model different link loss ratios by defining a new graph theory problem, HW-SCDS, on an edge-weighted directed graph, where the edge weights model ETX, the reciprocal of link loss ratios. We minimize transmissions in a multicast by computing a minimum HW-SCDS in the edge-weighted graph. We prove HW-SCDS is NP-hard and devise a greedy algorithm for it. Simulations show that our algorithm significantly outperforms the current best WMN multicast algorithm by both increasing throughput and reducing delay. | ['Guokai Zeng', 'Bo Wang 0001', 'Matt W. Mutka', 'Li Xiao', 'Eric Torng'] | Efficient multicast for link-heterogeneous wireless mesh networks | 287,726 |
A cross-layer SDN control plane for optical multicast-featured datacenters | ['Yiting Xia', 'T. S. Eugene Ng'] | A cross-layer SDN control plane for optical multicast-featured datacenters | 162,708 |
Legal Issues Associated with Data Management in European Clouds. | ['Attila Kertesz', 'Szilvia Varadi'] | Legal Issues Associated with Data Management in European Clouds. | 757,805 |
Repository managers increasingly use toolkits such as DSpace to manage submission of and access to resources. However, DSpace does not support the highly desirable distributed replication functionality provided by LOCKSS. This paper describes an experiment to seamlessly interconnect DSpace and LOCKSS in a generalisable manner. An experimental prototype confirms that this is indeed possible, and that the interoperation can be efficient within the constraints of the systems. | ['Mushashu Lumpa', 'Ngoni Munyaradzi', 'Hussein Suleman'] | Interconnecting DSpace and LOCKSS | 304,033 |
The size of nuclei in histological preparations from excised breast tumors is predictive of patient outcome (large nuclei indicate poor outcome). Pathologists take into account nuclear size when performing breast cancer grading. In addition, the mean nuclear area (MNA) has been shown to have independent prognostic value. The straightforward approach to measuring nuclear size is by performing nuclei segmentation. We hypothesize that given an image of a tumor region with known nuclei locations, the area of the individual nuclei and region statistics such as the MNA can be reliably computed directly from the image data by employing a machine learning model, without the intermediate step of nuclei segmentation. Towards this goal, we train a deep convolutional neural network model that is applied locally at each nucleus location, and can reliably measure the area of the individual nuclei and the MNA. Furthermore, we show how such an approach can be extended to perform combined nuclei detection and measurement, which is reminiscent of granulometry. | ['Mitko Veta', 'Paul J. van Diest', 'Josien P. W. Pluim'] | Cutting out the middleman: measuring nuclear area in histopathology slides without segmentation | 824,995 |
Complete Theories with Countably many Rigid Nonisomorphic Models | ['Jerome Malitz'] | Complete Theories with Countably many Rigid Nonisomorphic Models | 54,805 |
Today, the performance and size of micro-gyrometers are mainly limited by their associated electronics. Indeed, detection electronics noise and drift induce, respectively, reduced resolution and stability, whereas low-drift associated electronics has not been studied. In order to increase gyrometer performance, the development of a specific detection integrated circuit is presented, and special care is taken over low-drift and low-noise design. | ['R. Levy', 'Antoine Dupret', 'H. Mathias', 'Jean Guerard'] | A low drift, low noise detection IC applied to MEMS gyros | 531,821 |
Considers the problem of estimating the parameters of a stable (stationary), scalar ARMA (p,q) signal model driven by an i.i.d. non-Gaussian sequence. The driving noise sequence is not observed. The signal is allowed to be nonminimum phase and/or noncausal (i.e., poles and zeros may lie both inside as well as outside the unit circle). The author addresses the problem of parameter identifiability given the higher order cumulant spectrum of the signal on a finite set of polyspectral frequencies. The sufficient set of polyspectral frequencies required to achieve parameter identifiability is the least "rigid" to date. | ['Jitendra K. Tugnait'] | On parameter identifiability of ARMA models of non-Gaussian signals via cumulant spectrum matching | 471,834 |
Endoscopic ultrasonography (EUS) is limited by variability in the examiner's subjective interpretation when differentiating between normal tissue, leiomyoma of the esophagus, and early esophageal carcinoma. By using information otherwise discarded by conventional EUS systems, quantitative spectral analysis of the raw pixels (picture elements) underlying an EUS image enables lesions to be characterized more objectively. In this paper, we propose to represent texture features of early esophageal carcinoma in EUS images as a graph by expressing pixels as nodes and similarity between the gray-level or local features of the EUS image as edges. Then, similarity measurements such as a high-order graph matching kernel can be constructed so as to provide an objective quantification of the properties of the texture features of early esophageal carcinoma in EUS images, in terms of the topology and connectivity of the analyzed graphs. Because such properties are directly related to the structure of early esophageal carcinoma lesions in EUS images, they can be used as features for characterizing and classifying early esophageal carcinoma. Finally, we use a refined SVM model based on the new high-order graph matching kernel, resulting in an optimal prediction of the types of esophageal lesions. A 10-fold cross validation strategy is employed to evaluate the classification performance. After multiple computer runs of the new kernel SVM model, the overall accuracy for the diagnosis between normal tissue, leiomyoma of the esophagus and early esophageal carcinoma was 93 %. Moreover, for the diagnosis of early esophageal carcinoma, the average accuracy, sensitivity, specificity, positive predictive value, and negative predictive value were 89.4 %, 94 %, 95 %, 89 %, and 97 % respectively. The areas under all three ROC curves were close to 1. | ['Zhihong Zhang', 'Lu Bai', 'Peng Ren', 'Edwin R. Hancock'] | High-order graph matching kernel for early carcinoma EUS image classification | 649,001 |
We propose a novel approach to human action recognition, with motion capture data (MoCap), based on grouping sub-body parts. By representing configurations of actions as manifolds, joint positions are mapped on a subspace via principal geodesic analysis. The reduced space is still highly informative and allows for classification based on a non-parametric Bayesian approach, generating behaviors for each sub-body part. Having partitioned the set of joints, poses relative to a sub-body part are exchangeable, given a specified prior and can elicit, in principle, infinite behaviors. The generation of these behaviors is specified by a Dirichlet process mixture. We show with several experiments that the recognition gives very promising results, outperforming methods requiring temporal alignment. | ['Fabrizio Natola', 'Valsamis Ntouskos', 'Marta Sanzari', 'Fiora Pirri'] | Bayesian Non-parametric Inference for Manifold Based MoCap Representation | 596,700 |
Acceleration Based Particle Swarm Optimization for Graph Coloring Problem. | ['Jitendra Agrawal', 'Shikha Agrawal'] | Acceleration Based Particle Swarm Optimization for Graph Coloring Problem. | 735,990 |
A closed-loop method combining N-out-of-M antenna selection and space-time block coding from the selected antennas is proposed and studied in this paper. Performance is evaluated for WCDMA parameters in both frequency nonselective Rayleigh fading channels and in frequency selective (Vehicular A) channels. Operation in intra- and inter-cell dominant interference scenarios is considered. Numerical results show that the additional antenna selection procedure brings a supplementary uncoded BER performance improvement in the range of approximately 2-3.5 dB for flat-fading channels and 0.8-1.5 dB for frequency selective channels (@BER=12%). A suboptimum selection scheme requiring only a single bit for antenna selection is also presented. In addition, the impact on performance of errors in the feedback channel is investigated in this paper. | ['Marcos D. Katz', 'Esa Tapani Tiirola', 'Juha Ylitalo'] | Combining space-time block coding with diversity antenna selection for improved downlink performance | 420,349 |
Data shuffling is one of the fundamental building blocks for distributed learning algorithms, that increases the statistical gain for each step of the learning process. In each iteration, different shuffled data points are assigned by a central node to a distributed set of workers to perform local computation, which leads to communication bottlenecks. The focus of this paper is on formalizing and understanding the fundamental information-theoretic tradeoff between storage (per worker) and the worst-case communication overhead for the data shuffling problem. We completely characterize the information theoretic tradeoff for K = 2, and K = 3 workers, for any value of storage capacity, and show that increasing the storage across workers can reduce the communication overhead by leveraging coding. We propose a novel and systematic data delivery and storage update strategy for each data shuffle iteration, which preserves the structural properties of the storage across the workers, and aids in minimizing the communication overhead in subsequent data shuffling iterations. | ['Mohamed Adel Attia', 'Ravi Tandon'] | Information Theoretic Limits of Data Shuffling for Distributed Learning | 892,848 |
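The storage-communication tradeoff above can be illustrated with the classic two-worker coded-shuffling round: when each worker already stores the block the other one needs, a single coded broadcast replaces two unicast transmissions. The sketch below is an illustrative toy (the block sizes and the simple XOR code are our own simplification, not the paper's general delivery and storage-update scheme):

```python
import numpy as np

# Toy coded-shuffling round with K = 2 workers.
# Worker 1 stores block A and must receive B; worker 2 stores B and must
# receive A. Instead of two unicast transmissions, the master broadcasts
# the single coded block A ^ B; each worker XORs out the block it stores.
rng = np.random.default_rng(1)
A = rng.integers(0, 256, size=8, dtype=np.uint8)
B = rng.integers(0, 256, size=8, dtype=np.uint8)

coded = A ^ B                 # one broadcast instead of two unicasts
B_at_worker1 = coded ^ A      # worker 1 cancels A, recovering B
A_at_worker2 = coded ^ B      # worker 2 cancels B, recovering A
```

Each worker cancels its stored block from the broadcast, so the master sends one block instead of two: a 2x communication saving enabled purely by the storage at the workers.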
A 55–70GHz two-stage tunable polyphase filter with feedback control for quadrature generation with <2° and <0.32dB phase/amplitude imbalance in 28nm CMOS process | ['Tong Zhang', 'Mazhareddin Taghivand', 'Jacques C. Rudell'] | A 55–70GHz two-stage tunable polyphase filter with feedback control for quadrature generation with <2° and <0.32dB phase/amplitude imbalance in 28nm CMOS process | 678,506 |
The multiple access control (MAC) problem in a wireless network has intrigued researchers for years. An effective MAC protocol is very much desired because efficient allocation of channel bandwidth is imperative in accommodating a large user population with satisfactory quality of service. MAC protocols for integrated data and voice services in a cellular wireless network are even more intricate to design due to the dynamic user population size and traffic demands. Considerable research efforts expended in tackling the problem have resulted in a myriad of MAC protocols. While each protocol is individually shown to be effective by the respective designers, it is unclear how these different protocols compare against each other on a unified basis. In this paper, we quantitatively compare six recently proposed TDMA-based MAC protocols for integrated wireless data and voice services. We first propose a taxonomy of TDMA-based protocols, from which we carefully select six protocols, namely CHARISMA, D-TDMA/VR, D-TDMA/FR, DRMA, RAMA, and RMAV, such that they are devised based on rather orthogonal design philosophies. The objective of our comparison is to highlight the merits and demerits of different protocol designs. | ['Yu-Kwong Kwok', 'Vincent K. N. Lau'] | A quantitative comparison of multiple access control protocols for integrated voice and data services in a cellular wireless network | 398,304 |
We propose a parallel and modular architecture well suited to 802.16e WiMax LDPC code decoding. The proposed design is fully compliant with all the code classes defined by the WiMax standard. It has been validated through an implementation on a Xilinx Virtex5 FPGA component. A four- or six-module FPGA design yields a throughput ranging from 10 to 30 Mbit/s with 20 iterations at a clock frequency of 160 MHz, which largely satisfies the throughput requirements of mobile WiMax communication. | ['François Charot', 'Christophe Wolinski', 'Nicolas Fau', 'François Hamon'] | A Parallel and Modular Architecture for 802.16e LDPC Codes | 9,237
Phrase-based statistical machine translation (PBSMT) decoders translate source sentences one phrase at a time using strong independence assumptions over the source phrases. Translation table scores are typically independent of context, language model scores depend on a few words surrounding the target phrase and distortion models do not influence directly the choice of target phrases. In this work, we propose to condition the selection of each target word on the whole source sentence using a multilayer perceptron (MLP). Our interest in MLP lies in their hidden layer which encodes source sentences in a representation that is not directly tied to the notion of word. We evaluated our approach on an English to French translation task. Our MLP model was able to improve BLEU scores over a standard PBSMT system. | ['Alexandre Patry', 'Philippe Langlais'] | Going Beyond Word Cooccurrences in Global Lexical Selection for Statistical Machine Translation using a Multilayer Perceptron | 482,204 |
Mapping a Set of Reals Onto the Reals | ['Arnold W. Miller'] | Mapping a Set of Reals Onto the Reals | 280,746 |
A social choice function may or may not satisfy a desirable property depending on its domain of definition. For the same reason, different conditions may be equivalent for functions defined on some domains, while different in other cases. Understanding the role of domains is therefore a crucial issue in mechanism design. We illustrate this point by analyzing the role of different conditions that are always related, but not always equivalent to strategy-proofness. We define two very natural conditions that are necessary for strategy-proofness: monotonicity and reshuffling invariance. We remark that they are not always sufficient. Then, we identify a domain condition, called intertwinedness, that ensures the equivalence between our two conditions and that of strategy-proofness. We prove that some important domains are intertwined: those of single-peaked preferences, both with public and private goods, and also those arising in simple models of house allocation. We prove that other necessary conditions for strategy-proofness also become equivalent to ours when applied to functions defined on intertwined domains, even if they are not equivalent in general. We also study the relationship between our domain restrictions and others that appear in the literature, proving that we are indeed introducing a novel proposal. | ['Salvador Barberà', 'Dolors Berga', 'Bernardo Gomez Moreno'] | Two Necessary Conditions for Strategy-Proofness: on What Domains are they also Sufficient? | 601,700 |
Neoclassical models of strategic behavior have yielded many insights into competitive behavior, despite the fact that they often rely on a number of assumptions---including instantaneous market clearing and perfect foresight---that have been called into question by a broad range of research. Researchers generally argue that these assumptions are “good enough” to predict an industry's probable equilibria, and that disequilibrium adjustments and bounded rationality have limited competitive implications. Here we focus on the case of strategy in the presence of increasing returns to highlight how relaxing these two assumptions can lead to outcomes quite different from those predicted by standard neoclassical models. Prior research suggests that in the presence of increasing returns, tight appropriability, and accommodating rivals, in some circumstances early entrants can achieve sustained competitive advantage by pursuing “get big fast” (GBF) strategies: Rapidly expanding capacity and cutting prices to gain market share advantage and exploit positive feedbacks faster than their rivals. Using a simulation of the duopoly case we show that when the industry moves slowly compared to capacity adjustment delays, boundedly rational firms find their way to the equilibria predicted by conventional models. However, when market dynamics are rapid relative to capacity adjustment, forecasting errors lead to excess capacity---overwhelming the advantage conferred by increasing returns. Our results highlight the risks of ignoring the role of disequilibrium dynamics and bounded rationality in shaping competitive outcomes, and demonstrate how both can be incorporated into strategic analysis to form a dynamic, behavioral game theory amenable to rigorous analysis. | ['John D. Sterman', 'Rebecca Henderson', 'Eric D. Beinhocker', 'Lee I. Newman'] | Getting Big Too Fast: Strategic Dynamics with Increasing Returns and Bounded Rationality | 241,097 |
Evolutionary Testing (ET) has been shown to be very successful for testing real world applications [10]. The original ET approach focuses on searching for a high coverage of the test object by generating separate inputs for single function calls. We have identified a large set of real world applications for which this approach does not perform well, because only sequential calls of the tested function can reach a high structural coverage (white box test) or can check functional behavior (black box tests). Especially, control software which is responsible for controlling and constraining a system cannot be tested successfully with ET. Such software is characterized by storing internal data during a sequence of calls. In this paper we present the Evolutionary Sequence Testing approach for white box and black box tests. For automatic sequence testing, a fitness function for the application of ET is introduced, which allows the optimization of input sequences that reach a high coverage of the software under test. We also present a new compact description for the generation of real-world input sequences for functional testing, and a set of objective functions to evaluate the test output of systems under test. These approaches are currently used for the structural and safety testing of car control systems. | ['André Baresel', 'Hartmut Pohlheim', 'Sadegh Sadeghipour'] | Structural and functional sequence test of dynamic and state-based software with evolutionary algorithms | 23,905
Scene change detection between multitemporal image scenes can be used to interpret the variation of regional land use, and has significant potential in the application of urban development monitoring at the semantic level. The traditional methods directly comparing the independent semantic classes neglect the temporal correlation, and thus suffer from accumulated classification errors. In this paper, we propose a novel scene change detection method via kernel slow feature analysis (KSFA) and postclassification fusion, which integrates independent scene classification with scene change detection to accurately determine scene changes and identify the “from-to” transition type. After representation with the bag-of-visual-words model, KSFA is proposed to extract the nonlinear temporally invariant features, to better measure the change probability between corresponding multitemporal image scenes. Two postclassification fusion methods, which are based on Bayesian theory and predefined rules, respectively, are then employed to identify the optimal coupled class combinations of multitemporal scene pairs. Furthermore, in addition to identifying semantic changes, the proposed method can also improve the performance of scene classification, since the unchanged scenes are more likely to belong to the same class. Two experiments with high-resolution remote sensing image scene data sets confirm that the proposed method can increase the accuracy of scene change detection, scene transition identification, and scene classification. | ['Chen Wu', 'Liangpei Zhang', 'Bo Du'] | Kernel Slow Feature Analysis for Scene Change Detection | 991,479 |
The performance of global Internet communication is significantly influenced by the reliability and the stability of Internet routing systems, especially the border gateway protocol (BGP), the de facto standard for inter-domain routing. In this paper, we investigate the reliability of BGP sessions and of internal BGP (IBGP) networks in the presence of unreliable physical and routing layers. The reliability analysis of IBGP networks is difficult, because IBGP sessions may be correlated to each other through shared underlying physical links, and TCP enables IBGP sessions to tolerate a certain level of network failures. In this paper, we first investigate the failure probability of IBGP sessions and its relation to BGP timers and TCP retransmission behaviors. The result of this investigation is a simple modification of TCP that increases the robustness of IBGP sessions significantly. Second, we present a novel reliability model to measure the resilience of whole IBGP networks. This model is of great importance for studying the loss of function of IBGP operations, and it also provides the theoretical basis for IBGP network optimization in terms of reliability. | ['Li Xiao', 'Klara Nahrstedt'] | Reliability models and evaluation of internal BGP networks | 478,229
This paper is concerned with speech enhancement using Phase-Error based Filters (PEF) and Excitation Source (ES) information in car environments. For this purpose, we first use ES information to determine the time delay from speech signals obtained by two microphones for sound source localization. Then, phase-error based filtering is performed using prior knowledge of the time delay obtained from the ES information and the phases of the signals recorded by the microphones. The experimental results showed the effectiveness of the presented method for speech enhancement. | ['Keun-Chang Kwak', 'Myung-Won Lee'] | Speech Enhancement Using ES Information and Phase-Error Based Filters | 942,580
In a series of papers (2011-2013), N. Ma and P. Ishwar considered a range of distributed source coding problems that arise in the context of interactive computation of functions, characterizing the region of achievable communication rates. We consider the problems of interactive computation of functions by two terminals and interactive computation in a collocated network, showing that the rate regions for both these problems can be achieved using several rounds of polar-coded transmissions. | ['Talha Cihad Gulcu', 'Alexander Barg'] | Interactive function computation via polar coding | 827,126
Register of vietnamese tones in continuous speech. | ['Do Dat Tran', 'Eric Castelli'] | Register of vietnamese tones in continuous speech. | 790,043 |
Cognitive Radio Networks (CRNs) provide a solution for the spectrum scarcity problem facing the wireless communications community. To be able to utilize CRNs in practical applications, a certain level of quality-of-service (QoS) should be guaranteed to the secondary users (SUs) in such networks. In this paper, we propose a packet scheduling scheme that orders the SUs' transmissions according to the packet dropping rates and the number of packets queued waiting for transmission. A medium access control (MAC) protocol, based on the mentioned scheduling scheme, is proposed for a centralized CRN. In addition, the scheduling scheme is adapted for a distributed CRN, by introducing a feature that allows SUs to organize access to the available spectrum without the need for a central unit. Extensive simulation results are presented to evaluate the proposed protocols, in comparison with other MAC protocols designed for CRNs. The results demonstrate the effectiveness of our proposed protocols to guarantee the required QoS for voice packet transmission, while maintaining fairness among SUs. | ['Khaled Ben Ali', 'Weihua Zhuang'] | Link-Layer Resource Allocation for Voice Users in Cognitive Radio Networks | 90,771 |
This note presents a design technique for the delay-based controller called proportional integral retarded (PIR), which solves the regulation problem of a general class of stable second-order LTI systems. Using spectral analysis, the technique yields a tuning strategy for the PIR by placing a triple real dominant root for the closed-loop system. This result ultimately guarantees a desired exponential decay rate $\sigma_{d}$ while achieving the PIR tuning as an explicit function of $\sigma_{d}$ and system parameters. | ['Adrián Ramírez', 'Sabine Mondié', 'Ruben Garrido', 'Rifat Sipahi'] | Design of Proportional-Integral-Retarded (PIR) Controllers for Second-Order LTI Systems | 763,836 |
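To make the PIR structure concrete, here is a minimal Euler-integration sketch of a stable second-order plant under a control law u(t) = kp*e(t) + ki*integral(e) + kr*e(t - h), i.e., a proportional-integral term plus a retarded (delayed) proportional term. The gains, delay, and plant parameters below are illustrative choices of ours, not the triple-root tuning rule derived in the paper:

```python
import numpy as np

def simulate_pir(kp, ki, kr, h, wn=1.0, zeta=0.5, ref=1.0, dt=1e-3, T=30.0):
    """Euler simulation of x'' + 2*zeta*wn*x' + wn^2*x = u with
    u(t) = kp*e(t) + ki*integral(e) + kr*e(t - h), where e = ref - x."""
    n = int(T / dt)
    d = int(h / dt)                 # delay expressed in samples
    x = v = integ = 0.0
    e_hist = np.zeros(n)            # buffer of past errors for the delay
    xs = np.zeros(n)
    for k in range(n):
        e = ref - x
        e_hist[k] = e
        integ += e * dt
        e_delayed = e_hist[k - d] if k >= d else 0.0
        u = kp * e + ki * integ + kr * e_delayed
        a = u - 2 * zeta * wn * v - wn**2 * x   # plant acceleration
        v += a * dt
        x += v * dt
        xs[k] = x
    return xs
```

With moderate gains (e.g., kp=2, ki=1, kr=0.3, h=0.2) the integral action drives the steady-state error to zero despite the delayed term; the paper's contribution is choosing these gains explicitly so the closed loop has a triple real dominant root with decay rate sigma_d.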
Stationary [e.g., forward-backward method (FBM)] and nonstationary [e.g., conjugate gradient squared, quasi-minimal residual, and biconjugate gradient stabilized (Bi-CGSTAB)] iterative techniques are applied to the solution of electromagnetic wave scattering from dielectric random rough surfaces with arbitrary complex dielectric constants. The convergence issues as well as the efficiency and accuracy of all the approaches considered in this paper are investigated by comparing obtained scattering (in the form of normalized radar cross section) and surface field values with the numerically exact solution, computed by employing the conventional method of moments. It has been observed that, similar to the perfectly and imperfectly conducting rough surface cases, the stationary iterative FBM converges faster when applied to geometries yielding best-conditioned systems, but exhibits convergence difficulties for general geometries due to its inherent limitations. However, nonstationary techniques are, in general, more robust when applied to arbitrarily general dielectric random rough surfaces, which yield more ill-conditioned systems. Therefore, they might prove to be more suitable for general scattering problems. Besides, as opposed to the perfectly and imperfectly conducting rough surface cases, the Bi-CGSTAB method and FBM show two interesting behaviors for dielectric rough surface profiles: 1) FBM generally converges for reentrant surfaces when the vertical polarization is considered and 2) the Bi-CGSTAB method has a peculiar convergence problem for horizontal polarization. Unlike the other nonstationary iterative techniques used in this paper, where a Jacobi preconditioner is used, convergent results are obtained by using a block-diagonal preconditioner. | ['Kenan İnan', 'Vakur B. Erturk'] | Application of Iterative Techniques for Electromagnetic Scattering From Dielectric Random and Reentrant Rough Surfaces | 413,814
This paper describes the design and development of a KINECT-based interactive platform aimed at the physical rehabilitation and cognitive training of minors in situations of illness. The platform, called TANGO:H, is highly configurable and customizable thanks to its exercise editor, TANGO:H Designer. The platform thus allows the adaptation of exercises and activities according to the specific characteristics of each user and user group. | ['Carina S. González', 'Pedro A. Toledo', 'Alberto Mora', 'Yeray Barrios'] | Gamified Platform for Physical and Cognitive Rehabilitation | 848,067
Quelques aspects de la sémantique et des équivalences de requêtes dans le langage SOLf. | ['Patrick Bosc', 'Olivier Pivert'] | Quelques aspects de la sémantique et des équivalences de requêtes dans le langage SOLf. | 760,192 |
Diffusion tensor imaging (DT-MRI) is very sensitive to corrupting noise due to the non linear relationship between the diffusion-weighted image intensities (DW-MRI) and the resulting diffusion tensor. Denoising is a crucial step to increase the quality of the estimated tensor field. This enhanced quality allows for a better quantification and a better image interpretation. The methods proposed in this paper are based on the Non-Local (NL) means algorithm. This approach uses the natural redundancy of information in images to remove the noise. We introduce three variations of the NL-means algorithms adapted to DW-MRI and to DT-MRI. Experiments were carried out on a set of 12 diffusion-weighted images (DW-MRI) of the same subject. The results show that the intensity based NL-means approaches give better results in the context of DT-MRI than other classical denoising methods, such as Gaussian Smoothing, Anisotropic Diffusion and Total Variation. | ['Nicolas Wiest-Daesslé', 'Sylvain Prima', 'Pierrick Coupé', 'Sean Patrick Morrissey', 'Christian Barillot'] | Non-local means variants for denoising of diffusion-weighted and diffusion tensor MRI | 306,296 |
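For readers unfamiliar with the underlying algorithm, here is a minimal NumPy sketch of plain 2D non-local means, the principle the paper's DW-/DT-MRI variants build on. The patch size, search window, and filtering parameter h are illustrative defaults, and this is not the authors' MRI-specific implementation:

```python
import numpy as np

def nl_means(img, patch=3, search=7, h=0.4):
    """Basic 2D non-local means: each pixel is replaced by a weighted
    average of nearby pixels, with weights based on patch similarity."""
    pad = patch // 2
    padded = np.pad(img, pad, mode='reflect')
    rows, cols = img.shape
    out = np.zeros_like(img)
    sr = search // 2
    for i in range(rows):
        for j in range(cols):
            ref = padded[i:i + patch, j:j + patch]   # patch around (i, j)
            w_sum, acc = 0.0, 0.0
            for di in range(max(0, i - sr), min(rows, i + sr + 1)):
                for dj in range(max(0, j - sr), min(cols, j + sr + 1)):
                    cand = padded[di:di + patch, dj:dj + patch]
                    d2 = np.mean((ref - cand) ** 2)  # patch dissimilarity
                    w = np.exp(-d2 / (h * h))
                    w_sum += w
                    acc += w * img[di, dj]
            out[i, j] = acc / w_sum
    return out
```

Pixels whose neighborhoods resemble the reference patch receive large weights, so redundant structures are averaged while distinct ones are preserved; h must be matched to the noise level, and the paper's variants additionally adapt the scheme to diffusion-weighted intensities and tensors.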
Seamless switching of H.265/HEVC-coded dash representations with open GOP prediction structure | ['Ye Yan', 'Miska M. Hannuksela', 'Houqiang Li'] | Seamless switching of H.265/HEVC-coded dash representations with open GOP prediction structure | 685,672 |
Discriminative Metric Learning on Extended Grassmann Manifold for Classification of Brain Signals | ['Yoshikazu Washizawa'] | Discriminative Metric Learning on Extended Grassmann Manifold for Classification of Brain Signals | 705,241 |
Seeing who sees: Contrastive access helps children reason about other minds Kathie Pham, Elizabeth Bonawitz, & Alison Gopnik {kathiepham, liz_b, gopnik}@berkely.edu University of California, Department of Psychology, 3210 Tolman Hall Berkeley, CA 94720 USA Abstract Does contrastive access help preschoolers succeed on traditional false-belief tasks? Three- and four-year-olds were presented with a modified version of the change-of-location story in which two characters are the focus of interest. In the contrastive access condition preschoolers observe that one character leaves the room while the other stays and witnesses the moving event; in the non-contrastive condition both characters leave the room and fail to observe the moving event. Despite having to track two different characters and their different knowledge states about the location of the toy, preschoolers were more likely to succeed on the task when the characters had contrasting access to the moving event. This result supports a previously unexplored qualitative prediction of the Goodman et al. (2006) computational model of the false-belief task and also provides tentative support for the theory theory view of the false-belief transition. Keywords: Cognitive development; theory of mind; False-belief task; Contrastive learning. Theory theory of mind The ability to reason about other people’s mental states, such as their beliefs and desires, their fears and aspirations, is often referred to as theory of mind. Having a theory of mind allows us to construct others as mental beings: entities much grander than their physical attributes or their observable actions. One result of this understanding is that as adults, we are able to not only consider our own beliefs, but the beliefs of countless others—diverging beliefs about a single reality, beliefs that may be mistaken. Decades of research have suggested that three-year-olds tend to struggle with false-belief reasoning in a very specific way.
Studies have shown that three-year-olds misinterpret minds systematically—when an agent’s beliefs and reality diverge, they predict actions of that agent to be consistent with the reality, rather than the false belief (Wimmer & Perner, 1983; Perner et al., 1987). One classic example that tests a child’s false-belief understanding is the change-of-location task (Wimmer & Perner, 1983). A child is read a story about a character, e.g., Sally, who stores her toy and then leaves the room. While she is away, a mischievous character moves the toy. Sally then returns to look for her toy and the child is asked, “Where will Sally first go look for her toy?” Three-year-olds often say that Sally will look where the toy actually is, consistent with the true state of the world, rather than the location consistent with the agent’s false belief. In contrast, older four-year-olds more often correctly answer that Sally will look in the place that the toy was initially left, successfully considering an agent’s beliefs (e.g. Baron-Cohen et al., 1985; Perner, Leekam, & Wimmer, 1987; Wimmer & Perner, 1983). Despite decades of research replicating this finding, there is much debate about how and when knowledge about other’s mental states develops, and in particular when children develop an understanding of false belief. Some studies suggest that children go through a conceptual change around ages three to five—from systematically failing false-belief tasks to performing above chance (Wellman, Cross, & Watson, 2001). However, there have been compelling arguments for earlier developing theory of mind competence suggesting that as early as 10 to 15 months infants already have an awareness that actors act on the basis of their beliefs and false beliefs (e.g., see Baillargeon, Scott, and He, 2010 for a review). It is not yet clear how to best interpret these infant “false-belief” findings nor how to reconcile or integrate them with the preschool ones.
Regardless, something definite and important is happening in children’s theory-of-mind understandings in the preschool years, beyond earlier developments in infancy. There are likely to be contrasts between implicit predictive and explicit causal-explanatory knowledge. Furthermore, differences in false-belief understanding as measured in the preschool years predict several key childhood competences, such as how and how much children talk about people in everyday conversation, their engagement in pretense, their social interactional skills and consequently their interactions with and popularity with peers (Astington & Jenkins 1995; Lalonde & Chandler 1995; Watson et al. 1999). Furthermore, variability in preschool performance on theory of mind tasks overlaps with but is distinctively different from executive function and IQ (e.g., Carlson & Moses 2001). These findings are important for confirming theory of mind’s significance and relevance during the preschool years as indexed by preschool theory of mind tasks (especially as researched thus far for false-belief tasks). Though it is unclear what factors support success on looking-time measures in young infants, the research that will be presented here assumes a theory-like competence that, in particular, supports explanation (e.g. Gopnik & Wellman, 1992; Wellman & Liu, 2007). We take the idea that theory of mind is analogous to scientific theories, resulting in children’s distinctive patterns of predictions and interpretations of evidence, which is often referred to as the theory theory account of theory of mind development (e.g. Gopnik, 1993; Gopnik & Wellman, 1992; Perner, 1991). What a theory-like understanding of mind permits is conceptual change—theory revision in the face of new | ['Kathie Pham', 'Elizabeth Bonawitz', 'Alison Gopnik'] | Seeing who sees: Contrastive access helps children reason about other minds | 768,510 |
Resolution in optical coherence tomography is often degraded due to sidelobes of the point response. Frequently, the spectrum of the low-coherence source is unable to be changed to reduce sidelobes. We present a method that derives a space-invariant linear post processing digital filter that reduces the sidelobes in the reconstructed image while minimizing the increase in image noise. This method is demonstrated on the image of rat mammary tissue. | ['Daniel L. Marks', 'Paul Scott Carney', 'Stephen A. Boppart'] | A method for dynamically suppressing sidelobes in optical coherence tomography | 190,004 |
A file of fixed-length records in auxiliary storage using a key-to-address transformation to assign records to addresses is considered. The file is assumed to be in steady state, that is, the rates of additions to and of deletions from the file are equal. The loading factors that minimize file maintenance costs in terms of storage space and additional accesses are computed for different bucket sizes and different operational conditions. | ['J. A. van der Pool'] | Optimum storage allocation for a file in steady state | 341,405
The Focal Stack Transform integrates a 4D lightfield over a set of appropriately chosen 2D planes. The result of such integration is an image focused on a determined depth in 3D space. The set of such images is the Focal Stack of the lightfield. This paper studies the existence of an inverse for this transform. Such inverse could be used to obtain a 4D lightfield from a set of images focused on several depths of the scene. In this paper, we show that this inversion cannot be obtained for a general lightfield and introduce a subset of lightfields where this inversion can be computed exactly. We examine the numerical properties of such inversion process for general lightfields and examine several regularization approaches to stabilize the transform. Experimental results are provided for focal stacks obtained from several plenoptic cameras. From a practical point of view, results show how this inversion procedure can be used to recover, compress, and denoise the original 4D lightfield. | ['Fernando Pérez', 'Alejandro Pérez', 'Manuel Rodríguez', 'Eduardo Magdaleno'] | Lightfield Recovery from Its Focal Stack | 799,310 |
This paper presents the architecture, policy schema, and policy specifications necessary to accomplish effective management of the application level active networking (ALAN) environment. Using ALAN, developers can engineer applications through the network by utilising platforms (active servers) on which 3rd party software (Proxylets) can be dynamically loaded and run. Redirection of packets destined for active processing at the servers is performed by active routers. Management of such large, dynamic systems presents challenges to centralised approaches. Management based on policies locally interpreted in the context of local state is gaining acceptance as an alternative. The IST project ANDROID uses a flexible generic specification for policies, represented in XML, allowing a wide range of policies to be expressed and processed in a common framework. Policies given here focus on management of routers for VPN scenarios, the resource and security management of active servers running the Proxylets, and management of the information distribution mechanism. Preliminary results were demonstrated during a trial that included a scenario involving inter-site connectivity and active server resource and security management. | ['Ognjen Prnjat', 'L. Liabotis', 'Temitope Olukemi', 'Lionel Sacks', 'Mike Fisher', 'Paul McKee', 'Ken Carlberg', 'G. Martinez'] | Policy-based management for ALAN-enabled networks | 178,277
Document understanding techniques such as document clustering and multi-document summarization have been receiving much attention in recent years. Current document clustering methods usually represent documents as a term-document matrix and perform clustering algorithms on it. Although these clustering methods can group the documents satisfactorily, it is still hard for people to capture the meanings of the documents since there is no satisfactory interpretation for each document cluster. In this paper, we propose a new language model to simultaneously cluster and summarize the documents. By utilizing the mutual influence of the document clustering and summarization, our method makes (1) a better document clustering method with more meaningful interpretation and (2) a better document summarization method taking the document context information into consideration. | ['Dingding Wang', 'Shenghuo Zhu', 'Tao Li', 'Yun Chi', 'Yihong Gong'] | Integrating clustering and multi-document summarization to improve document understanding | 181,219 |
DEVS is a popular formalism for modelling complex dynamic systems using a discrete-event abstraction. At this abstraction level, a timed sequence of pertinent "events" input to a system (or internal, in the case of timeouts) causes instantaneous changes to the state of the system. Between events, the state does not change, resulting in a piecewise constant state trajectory. The main advantages of DEVS are its rigorous formal definition and its support for modular composition. This chapter introduces the Classic DEVS formalism in a bottom-up fashion, using a simple traffic light example. The syntax and operational semantics of Atomic (i.e., non-hierarchical) models are introduced first. The semantics of Coupled (hierarchical) models is then given by translation into Atomic DEVS models. As this formal "flattening" is not efficient, a modular abstract simulator which operates directly on the coupled model is also presented. This is the common basis for subsequent efficient implementations. We then turn to actual applications of DEVS modelling and simulation, as seen in performance analysis for queueing systems. Finally, we present some of the shortcomings of the Classic DEVS formalism and show solutions to them in the form of variants of the original formalism. | ['Yentl Van Tendeloo', 'Hans Vangheluwe'] | An Introduction to Classic DEVS | 997,365
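As a taste of the formalism this chapter develops, here is a minimal sketch of an autonomous atomic DEVS model and its simulator loop, loosely modeled on the chapter's traffic-light example. The state names, timings, and the simplified loop without external events or outputs are our own illustrative choices, not the chapter's definitions:

```python
# Minimal autonomous Atomic DEVS sketch: a state set, a time-advance
# function ta(s), and an internal transition function delta_int(s).
class TrafficLight:
    TA = {"RED": 60.0, "GREEN": 50.0, "YELLOW": 10.0}      # ta(s)
    NEXT = {"RED": "GREEN", "GREEN": "YELLOW", "YELLOW": "RED"}

    def __init__(self):
        self.state = "RED"

    def time_advance(self):
        return self.TA[self.state]

    def int_transition(self):
        self.state = self.NEXT[self.state]

def simulate(model, until):
    """Abstract-simulator loop for the purely autonomous case:
    repeatedly fire the internal transition at t + ta(state)."""
    t, trace = 0.0, []
    while t + model.time_advance() <= until:
        t += model.time_advance()
        model.int_transition()
        trace.append((t, model.state))
    return trace
```

A full Classic DEVS atomic model additionally has input/output ports, an external transition function, and an output function; the loop above corresponds to the autonomous behaviour driven solely by the time-advance function.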
This paper presents an approach that combines geometry processing with motion planning to enable a robot to efficiently navigate in unstructured environments. The proposed approach relies on a novel oversegmentation method to produce a decomposition of the free space into a set of connected regions. This provides a general and simplified planning layer with navigational routes along which sampling-based motion planning expands a tree of collision-free and dynamically feasible motions to reach the goal. Experiments using robot models with nonlinear dynamics operating in complex environments show significant speedups over related work. | ['Evis Plaku', 'Erion Plaku', 'Patricio D. Simari'] | Direct Path Superfacets: An Intermediate Representation for Motion Planning | 918,484 |
A differential detection scheme for transmit diversity was proposed by Tarokh [V. Tarokh et al., IEEE J. Selec. Areas Commun., Vol. 18, No. 7, pp. 1169-1174, July, 2000], which can achieve full diversity order without the requirement to estimate the channel state at the receiver. This paper investigates the potential of using multiple receive antennas for differential space time coded MPSK signals over correlated Nakagami fading channels. We also investigate the effect of the carrier frequency offset (CFO) and channel correlation on its performance and present some results on its maximal tolerable frequency offsets for different MPSK signals. The results have shown that the differential encoding transmit diversity is very robust to the CFO and channel correlation. | ['Guoping Fan', 'Pingyi Fan', 'Zhigang Cao'] | Performance of the combining received differential encoding transmit diversity with imperfect carrier recovery over correlated Nakagami fading channels | 445,756 |
A Computationally Efficient Algorithm for Fusing Multispectral and Hyperspectral Images | ['Raúl Guerra', 'Sebastián López', 'Roberto Sarmiento'] | A Computationally Efficient Algorithm for Fusing Multispectral and Hyperspectral Images | 816,805 |
This paper presents an efficient VLSI architecture for bit-parallel systolic multiplication over the dual base for trinomials and pentanomials in GF(2^m), for effective use in RS decoders. This architecture supports pipelining. Here, an irreducible trinomial of the form p(x) = x^m + x^n + 1 and a pentanomial of the form p(x) = x^m + x^(k+2) + x^(k+1) + x^k + 1 generate the fields in GF(2^m). For ECC algorithms, NIST recommends five reduction polynomials, which are either trinomials or pentanomials. Since the systolic multiplier has the features of regularity, modularity and unidirectional data flow, this structure is well suited to VLSI implementations. For the trinomial, the systolic structure of the proposed bit-parallel dual-basis multipliers requires only m^2 two-input AND gates and at most (m^2 - 1) two-input EXOR gates. For the pentanomial, it requires only m^2 two-input AND gates and (m^2 + 3m - 3) two-input EXOR gates. The proposed multipliers have a clock cycle latency of m. The length of the largest delay path and the area of this architecture are smaller than those of the bit-parallel systolic multiplication architectures reported earlier. This architecture can also operate over both the dual base and the polynomial base. | ['Hafizur Rahaman', 'Jimson Mathew', 'Abusaleh M. Jabir', 'Dhiraj K. Pradhan'] | VLSI architecture for bit parallel systolic multipliers for special class of GF(2 m ) using dual bases | 676,309 |
Based on the Antibody Clonal Selection Theory of immunology, we put forward a novel clonal selection algorithm for multiuser detection in code-division multiple-access systems. By using the clonal selection operator, the new algorithm can simultaneously carry out the global search and the local search in many directions around the same individual, rather than in only one direction. After discussing the main characteristics of the new algorithm, especially its convergence and complexity, the performance of the proposed receiver, named CAMUD, is evaluated via computer simulations and compared to that of other suboptimal schemes as well as to that of the Optimal Multiuser Detector (OMD) and the conventional detector in CDMA systems over multi-path channels. When compared with the OMD scheme, the CAMUD is capable of reducing the computational complexity significantly. On the other hand, when compared with the standard genetic algorithm and an improved genetic algorithm, theoretical analysis and Monte Carlo simulations show that the CAMUD, at the same complexity, has the best performance in eliminating MAI and in "near-far" resistance. The simulations also show that the CAMUD performs quite well even when the number of active users and the length of the transmitted packet are considerably large. | ['Maoguo Gong', 'Ling Wang', 'Licheng Jiao', 'Haifeng Du'] | An artificial immune system algorithm for CDMA multiuser detection over multi-path channels | 172,170 |
In this paper, we describe the design and development of a newly developed 3-DOF inchworm mechanism with 6 contact points on a surface to improve positioning repeatability. The mechanism consists of a pair of Y-shaped electromagnets and six piezoelectric actuators, and moves like an inchworm. We derive a 3-DOF simple harmonic vibration model to obtain the input signals for arbitrary 3-DOF motions. In several experiments, we confirm that the mechanism has better positioning repeatability than previous mechanisms with 4 contact points, especially when the mechanism carries a payload. The design details and basic performance are described to contribute to flexible precise positioning technology. | ['Ohmi Fuchiwaki', 'Manabu Yatsurugi', 'Suguru Omura', 'Kazushi Arafuka'] | Development of a 3-DOF inchworm mechanism organized by a pair of Y-shaped electromagnets and 6 piezoelectric actuators-design, principle, and experiments of translational motions- | 337,059 |
This paper presents a cost-effective and high-performance dual-thread VLIW processor model. The dual-thread VLIW processor model is a low-cost subset of the Weld architecture paradigm. It supports one main thread and one speculative thread running simultaneously in a VLIW processor with a register file and a fetch unit per thread along with memory disambiguation hardware for speculative load and store operations. This paper analyzes the performance impact of the dual-thread VLIW processor, which includes analysis of migrating disambiguation hardware for speculative load operations to the compiler and of the sensitivity of the model to the variation of branch misprediction, second-level cache miss penalties, and register file copy time. Up to 34 percent improvement in performance can be attained using the dual-thread VLIW processor when compared to a single-threaded VLIW processor model. | ['Emre Ozer', 'Thomas M. Conte'] | High-performance and low-cost dual-thread VLIW processor using Weld architecture paradigm | 514,543 |
This paper presents a spoken term detection method, based on automatic speech recognition and phonetic representation. The proposed method combines textual search in word transcripts obtained with a large vocabulary continuous speech recognizer system and phonetic search in the phonetization of these transcripts, to accurately locate the occurrences of a list of keywords in a broadcast corpus. Textual information from the transcripts and an efficient rescoring scheme are used to improve the performance of the phonetic search. Our experiments show that the proposed method outperforms the baseline textual and phonetic searches by its ability to separate correct detections from false alarms. | ['Corentin Dubois', 'Delphine Charlet'] | Using textual information from LVCSR transcripts for phonetic-based spoken term detection | 334,614 |
Aimed at uniform knowledge representation across STEP and SGML in the virtual organization, XOEM+OWL is put forward as a semantic model for the uniform representation of product knowledge over multiple heterogeneous sources of product information. The corresponding mapping between the STEP Schema Graph and the OWL Schema Graph is then built as Cos(sc, oc), so that we can obtain the semantic pattern matching degree for the semantic representation of the product information. Finally, an example is presented. | ['Chengfeng Jian', 'Meiyu Zhang', 'Cunju Lu'] | A Uniform Product Knowledge Representation Semantic Model | 182,856 |
We study the relationship between the locality of node channel state knowledge and the fraction of sum-capacity achievable in partially connected K-user interference channels. In the process we 1) settle a previous conjecture regarding 3-user interference channels, 2) establish the exact number of hops of knowledge necessary and sufficient to guarantee achievability of the capacity given full channel state knowledge, and 3) present an upper bound on the normalized capacity of K-user interference networks with L hops of knowledge and arbitrary topology. | ['David T. H. Kao'] | How local can a node's view be and still guarantee sum-capacity in interference networks? | 930,857 |
There are many situations in which information from a Wireless Sensor Network (WSN) must be processed to provide a meaningful summary to an external agency in the minimum amount of time, all within the constraints of the processing power and bandwidth available within the network. Our interest is in supporting emergency response for indoors incidents. At present, there are only two choices about where computation might occur within a sensor network: (i) on individual sensor nodes, with the advantage of achieving substantial data reduction, decreasing the cost of transmission, and avoiding congestion; (ii) outside the sensor network, with the sensors simply supplying sinks or more powerful nodes with the data needed for the calculation. The latter approach does not require powerful nodes but necessitates a higher bandwidth network. If applications reach a level of sophistication at which they cannot be executed on a single node, then it would seem that the only option is to have processing performed centrally. The distributed systems community has proposed another solution to limited computing power on a single node: the distribution of complex applications within grids formed by high-end processors. However, since these devices are usually linked through high-speed connections, they do not experience bandwidth restrictions or congestion that are inevitable in any WSN due to the broadcast radio medium. So far, the existing approaches in this direction are hybrid, because they use clusters of nodes that rely on more powerful clusterheads to execute their computation. They tend to focus exclusively on the load availability of nodes during the distribution process, ignoring real communication issues because of the simulated environments in which they are mainly tested. 
The key contribution of our work is the introduction of a novel approach, which relies on distributing computation among a homogeneous grid of nodes, in an effort to port the Grid Computing paradigm into WSNs. Moreover, we demonstrate through practical experimentation that there are significant benefits to be gained by considering local network conditions in addition to load information during distribution. We present results from our implementation of two different algorithms on real Tmote Sky sensor testbeds running the Contiki OS [3]. The first is a novel algorithm, while the second is an adaptation of an already existing distribution algorithm [2]; both are modified to take into account real bandwidth requirements. | ['Elisa Rondini', 'Stephen Hailes'] | Distributed computation in wireless ad hoc grids with bandwidth control | 287,975 |
The problem of learning nonlinear multiple input single output (MISO) systems is considered. The usually applied procedure for the identification of these systems is analysed and the shortcomings of the commonly used structures are described. Based on that, a novel approach for the estimation of local model networks or Takagi-Sugeno fuzzy systems is presented, which incorporates recent results on the regularized identification of linear finite impulse response (FIR) models for the rule consequents. Under the assumption that the impulse response of the local model is a realization of a Gaussian process, two properties of impulse responses can be taken into account: exponential decay and smoothness. This approach is extended to the identification of nonlinear multiple input single output systems using the LOLIMOT construction algorithm and incorporating the regularized approach for the local model identification. The results are demonstrated on a test example and compared to a local model network with local ARX models and unregularized FIR models. The comparison reveals the advantages of the novel method. | ['Tobias Münker', 'Oliver Nelles'] | Local model network with regularized MISO finite impulse response models | 936,760 |
Increasing scale poses a challenging problem for visualizing large attributed networks. Hierarchical aggregation is a promising solution. Existing methods mainly focus on the topological structure but ignore vertex properties; moreover, the inherent hierarchy restricts the network navigation process. This paper proposes a user-specified visualization method with a content-based clustering algorithm to explore large attributed networks. The content-based algorithm is able to locate major structures and cluster the network based on structural and attribute similarities. A novel visualization system is then introduced that allows navigation of large networks at any level of detail. The user-specified interaction strategy enables the user to manipulate cluster metrics and build the hierarchy based on their interests. A case study demonstrates that the proposed method is effective for extracting global knowledge about the network as well as locating critical nodes and major structures. | ['Xiaolei Du', 'Yingmei Wei', 'Hao Ma', 'Lingda Wu'] | Interactive Visual Analysis on Large Attributed Networks | 945,612 |
Parallel discrete event simulation (PDES) techniques have not yet made a substantial impact on the network simulation community because of the need to recast the simulation models using a new set of tools. To address this problem, we present a case study in transparently parallelizing a widely used network simulator, called ns. The use of this parallel ns does not require the modeler to learn any new tools or complex PDES techniques. The paper describes our approach and design choices to build the parallel ns and presents preliminary performance results, which are very encouraging. | ['Kevin G. Jones', 'Samir R. Das'] | Parallel execution of a sequential network simulator | 74,714 |
The wireless mesh network (WMN) is an economical solution to support ubiquitous broadband services. This paper investigates the tradeoffs among quality-of-service (QoS), capacity, and coverage in a scalable multichannel ring-based WMN. We suggest a simple frequency planning in the proposed ring-based WMN to improve the capacity with QoS support, and to make the system more scalable in terms of coverage. We develop a physical (PHY)/medium access control (MAC) cross-layer analytical model to evaluate the delay, jitter, and throughput of the proposed WMN, by taking account of the carrier sense multiple-access (CSMA) MAC protocol, and the impact of hop distance on transmission rate in the physical layer. Furthermore, the mixed-integer nonlinear programming optimization approach is applied to determine the optimal number of rings and the associated ring widths, aiming at maximizing the capacity and coverage of a mesh cell subject to the delay requirement | ['Jane-Hwa Huang', 'Li-Chun Wang', 'Chung-Ju Chang'] | Capacity and QoS for a Scalable Ring-Based Wireless Mesh Network | 314,469 |
Parallele Implementierung einer funktionalen Programmiersprache auf einem Transputer-Mehrprozessor-System | ['Herbert Kuchen', 'Rita Loogen'] | Parallele Implementierung einer funktionalen Programmiersprache auf einem Transputer-Mehrprozessor-System | 511,175 |
The principles of cybernetics have been applied in many fields. With currently renewed and fast-growing interest, it is time to address the common needs of various applications as they relate to programming language design. The errands of cybernetics applications include extending the application domain, subdividing the problem, building reliability features, dealing with parallel and concurrent computation, handling error states, and creating precision requirements. This paper studies the programming language features that can accomplish these errands in commonly existing programming languages for cybernetics applications. | ['Trong Wu'] | Programming Language Design in Cybernetics Applications | 257,379 |
Analogical Processes in Language Learning. | ['Bozena Pajak', 'Micah B. Goldwater', 'Dedre Gentner', 'Adele E. Goldberg', 'Ruxue Shao'] | Analogical Processes in Language Learning. | 762,289 |
The SGCEdb (http://sgce.cbse.uab.edu) database/interface serves the primary purpose of reporting progress of the Structural Genomics of Caenorhabditis elegans project at the University of Alabama at Birmingham. It stores and analyzes results of experiments ranging from solubility screening arrays to individual protein purification and structure solution. External databases and algorithms are referenced and evaluated for target selection in the human, C.elegans and Pneumocystis carinii genomes. The flexible and reusable design permits tracking of standard and custom experiment types in a scientist-defined sequence. The database coordinates efforts between collaborators and is adaptable to a wide range of biological applications. | ['David Johnson', 'Jun Tsao', 'Ming Luo', 'Mike Carson'] | SGCEdb: a flexible database and web interface integrating experimental results and analysis for structural genomics focusing on Caenorhabditis elegans | 440,348 |
Face recognition techniques are used to quickly screen a huge number of persons without being intrusive in open environments, or to substitute ID cards in companies or research institutes. There are several reasons why systems implementing these techniques must be reliable. This paper presents the design of a reliable face recognition system implemented on a Field Programmable Gate Array (FPGA). The proposed implementation uses the concepts of multiprocessor architecture, parallel software and dynamic reconfiguration to satisfy the requirements of a reliable system. The target multiprocessor architecture is extended to support the dynamic reconfiguration of the processing unit, providing resilience to processor faults. The experimental results show that, due to the multiprocessor architecture, the parallel face recognition algorithm can achieve a speedup of 63% with respect to the sequential version. Results regarding the overhead of maintaining a reliable architecture are also shown. | ['Antonino Tumeo', 'Francesco Regazzoni', 'Gianluca Palermo', 'Fabrizio Ferrandi', 'Donatella Sciuto'] | A reconfigurable multiprocessor architecture for a reliable face recognition implementation | 466,532 |
This paper describes the discrete Fourier transform (DFT) interpolation algorithm for arbitrary windows and its application and performance for optimal noncosine Kaiser-Bessel and Dolph-Chebyshev windows. The interpolation algorithm is based on the polynomial approximation of the window's spectrum that is computed numerically. Two- and three-point (2p and 3p) interpolations are considered. Systematic errors and noise sensitivity are analyzed for the chosen Kaiser-Bessel and Dolph-Chebyshev windows and compared with Rife-Vincent class I windows. | ['Krzysztof Duda'] | DFT Interpolation Algorithm for Kaiser–Bessel and Dolph–Chebyshev Windows | 102,122 |
The rate-distortion dimension (RDD) of an analog stationary process is studied as a measure of complexity that captures the amount of information contained in the process. It is shown that the RDD of a process, defined as two times the asymptotic ratio of its rate-distortion function R(D) to log 1/D as the distortion D approaches zero, is equal to its information dimension (ID). This generalizes an earlier result by Kawabata and Dembo and provides an operational approach to evaluate the ID of a process, which previously was shown to be closely related to the effective dimension of the underlying process and also to the fundamental limits of compressed sensing. The relation between RDD and ID is illustrated for a piecewise constant process. | ['Farideh Ebrahim Rezagah', 'Shirin Jalali', 'Elza Erkip', 'H. Vincent Poor'] | Rate-distortion dimension of stochastic processes | 879,837 |
The significant progress of the Microprocessors and Microcontrollers course for computer science students | ['Sasko Ristov', 'Nevena Ackovska', 'Vesna Kirandziska', 'Darko Martinovikj'] | The significant progress of the Microprocessors and Microcontrollers course for computer science students | 305,893 |
We consider a one-dimensional system of particles with strong zero-range interactions. This system can be mapped onto a spin chain of the Heisenberg type with exchange coefficients that depend on the external trap. In this paper, we present an algorithm that can be used to compute these exchange coefficients. We introduce an open source code CONAN (Coefficients of One-dimensional N-Atom Networks) which is based on this algorithm. CONAN works with arbitrary external potentials and we have tested its reliability for system sizes up to around 35 particles. As illustrative examples, we consider a harmonic trap and a box trap with a superimposed asymmetric tilted potential. For these examples, the computation time typically scales with the number of particles as O(N^(3.5±0.4)). Computation times are around 10 s for N = 10 particles and less than 10 min for N = 20 particles.

Program summary

Program title: CONAN

Program Files doi: http://dx.doi.org/10.17632/tw87vdy68b.1

Licensing provisions: GNU General Public License 3 (GPL)

Programming language: C

Nature of problem: A system of N atoms (fermions or bosons) with a strong zero-range interaction confined in a one-dimensional potential V(x) can be described using a spin chain Hamiltonian of the Heisenberg type. This effective spin chain Hamiltonian is defined through N−1 exchange coefficients (also called geometric coefficients, α_k). The exchange coefficients depend only on the integer N and the function V(x), but each coefficient is formally given as an (N−1)-dimensional integral. Given a number of particles N and a confining potential V(x), we wish to compute the exchange coefficients, but carrying out the (N−1)-dimensional integral numerically is not, to say the least, a method that scales well with the system size.

Solution method: We wish to compute the exchange coefficients for a given system, but to do this we need to express them in a way that is more well-suited for a numerical implementation, i.e. in a way that does not involve an (N−1)-dimensional integral. In the submitted manuscript, we derive such an expression for the exchange coefficients. Our program, CONAN, is the numerical implementation of this formula for the exchange coefficients. Thus, CONAN takes as physical inputs the system size N and a smooth potential V(x), and returns the corresponding N−1 exchange coefficients appearing in the spin chain Hamiltonian. | ['N. J. S. Loft', 'Line Burholt Kristensen', 'A. E. Thomsen', 'A. G. Volosniev', 'N. T. Zinner'] | CONAN -- the cruncher of local exchange coefficients for strongly interacting confined systems in one dimension | 669,800 |
Justification Logic offers a new approach to a theory of knowledge, belief, and evidence, which possesses the potential to have significant impact on applications. The celebrated account of knowledge as justified true belief , which is attributed to Plato, has long been a focus of epistemic studies (cf. [10,15,18,26,30,32] and many others). | ['Sergei N. Artëmov'] | Justification Logic | 713,394 |
This paper proposes a granular ranking algorithm for mining market values, giving the framework of the algorithm and its concrete steps. The core of the new algorithm is the construction of the granular ranking function r_G(x), which guides the ranking of instances in the testing dataset. The ranked result is highly readable. The new algorithm further improves computational efficiency relative to existing algorithms, e.g. the market value function. The experimental results show that the accuracy of the granular ranking algorithm approaches that of the market value function. An incremental granular ranking algorithm is also discussed in the paper. | ['Xiaofeng Wang', 'Zhen Cao'] | A Granular Ranking Algorithm for Mining Market Values | 150,370 |
Today, the user interface is an important criterion in determining the performance and convenience of software. Along with the growth of smartphones, techniques to develop and express user interfaces have diversified, and more advanced user interfaces are being developed for tablets and PCs as well as for human-machine interfaces (HMIs) for industrial devices. However, such interface technology under development is mostly centered on smartphone platforms, and a common platform applicable to HMIs for PC-based industrial devices has not yet been formed. In this paper, we build a user interface transformation system for efficient HMIs by applying an existing image transformation technology, the shader, to an effective user interface transformation algorithm that is applicable to industrial equipment. | ['Cheol–Gon Moon', 'Shin–Hyeong Choi'] | Development of smart user interface platform of industrial equipment using Shader effects and filters | 26,785 |
Rangelands in Australia cover approximately 80 percent of the continent and include a diverse group of relatively undisturbed ecosystems such as tropical savannas, woodlands, shrublands and grasslands. It is important to monitor and understand change in the rangelands so that effective actions can be taken to maintain ecological, economic and social values in Australia. Efficient use of feed resources in the livestock industries of Australia is a major factor in determining farm profitability and sustainability. With limited information, many producers forego potential production because of ineffective management of their feed resources. Further, poor management can also lead to environmental degradation. Therefore, CSIRO has invested in investigating, developing and validating new methodologies for the integration of remote sensing data with in-situ field measurements, in order to map the dynamics of aboveground plant biomass in forests, crops, grasslands and rangelands of Australia. Pasture biomass mapping is a main component of this project, given that rangelands cover the majority of Australia. Due to their good sensitivity to the diverse rangelands in tropical and subtropical regions, multi-band SAR data for pasture mapping are investigated in this paper. | ['Zheng-Shu Zhou', 'Peter Caccetta', 'Neil C. Sims', 'Alex Held'] | Multiband SAR data for rangeland pasture monitoring | 932,394 |
PRIVATIVE NEGATION IN THE PORT ROYAL LOGIC | ['John N. Martin'] | PRIVATIVE NEGATION IN THE PORT ROYAL LOGIC | 892,241 |
An Original Simulation Model to Improve the Order Picking Performance: Case Study of an Automated Warehouse | ['Francisco Figueira de Faria', 'Vasco Reis'] | An Original Simulation Model to Improve the Order Picking Performance: Case Study of an Automated Warehouse | 627,044 |
This paper proposes an analytical model for the blocking performance of adaptive routing over WDM networks with sparse wavelength conversion. Two key components of the model are 1) separating routes into segments and calculating the blocking performance of each segment, and 2) calculating the overflow traffic to wavelength-convertible nodes and then obtaining the blocking probability of each node. The blocking probability of the whole network is then calculated as the combination of the blocking probabilities of segments and nodes. Based on the model, an adaptive routing algorithm is proposed, which is able to finish routing and wavelength assignment in one single step. Numerical results show that among the parameters considered, the number of wavelengths turns out to be the primary factor for the blocking performance. | ['Aijun Ding', 'Sun-Teck Tan', 'Gee-Swee Poo'] | Blocking performance analysis on adaptive routing over WDM networks with sparse wavelength conversion | 6,270 |
In this paper we improve the approximation ratio for the problem of scheduling packets on line networks with bounded buffers with the aim of maximizing the throughput. Each node in the network has a local buffer of bounded size B, and each edge (or link) can transmit a limited number c of packets in every time unit. The input to the problem consists of a set of packet requests, each defined by a source node, a destination node, and a release time. We denote by n the size of the network. A solution for this problem is a schedule that delivers (some of the) packets to their destinations without violating the capacity constraints of the network (buffers or edges). Our goal is to design an efficient algorithm that computes a schedule that maximizes the number of packets that arrive to their respective destinations.

We give a randomized approximation algorithm with constant approximation ratio for the case where the buffer-size to link-capacity ratio, B/c, does not depend on the input size. This improves over the previously best result of O(log^* n) [Racke and Rosen SPAA 2009]. Our improvement is based on a new combinatorial lemma that we prove, stating, roughly speaking, that if packets are allowed to stay put in buffers only a limited number of time steps, 2d, where d is the longest source-destination distance, then the optimal solution is decreased by only a constant factor. This claim was not previously known in the integral (unsplitable, zero-one) case, and may find additional applications for routing and scheduling algorithms.

While we are not able to give the same improvement for the related problem when packets have hard deadlines, our algorithm does support "soft deadlines". That is, if packets have deadlines, we achieve a constant approximation ratio when the produced solution is allowed to miss deadlines by at most log n time units.
| ['Guy Even', 'Moti Medina', 'Adi Rosén'] | A Constant Approximation Algorithm for Scheduling Packets on Line Networks | 642,751 |
We are in the regime of Internet-of-Things (IoT), — a regime characterized by billions of smart, connected computing devices coordinating to provide large-scale, highly personalized applications. Two overriding themes in this regime are energy consumption and security enforcement, which are both critical to the sustainability and proliferation of the IoT ecosystem. However, energy and security requirements are often at odds. This paper discusses several challenges in developing trustworthy IoT devices that comprehend the energy-security trade-offs. We also outline some emergent approaches to address this conflict. | ['Sandip Ray', 'Tamzidul Hoque', 'Abhishek Basak', 'Swarup Bhunia'] | The power play: Security-energy trade-offs in the IoT regime | 944,083 |
Creating Industrial-Like SAT Instances by Clustering and Reconstruction - (Poster Presentation). | ['Sebastian Burg', 'Stephan Kottler', 'Michael Kaufmann'] | Creating Industrial-Like SAT Instances by Clustering and Reconstruction - (Poster Presentation). | 774,856 |
Time-driven simulation methods in traditional CPU architectures perform well and precisely when simulating small-scale spiking neural networks. Nevertheless, they still have drawbacks when simulating large-scale systems. Conversely, event-driven simulation methods in CPUs and time-driven simulation methods in graphic processing units (GPUs) can outperform CPU time-driven methods under certain conditions. With this performance improvement in mind, we have developed an event-and-time-driven spiking neural network simulator suitable for a hybrid CPU–GPU platform. Our neural simulator is able to efficiently simulate bio-inspired spiking neural networks consisting of different neural models, which can be distributed heterogeneously in both small layers and large layers or subsystems. For the sake of efficiency, the low-activity parts of the neural network can be simulated in CPU using event-driven methods while the high-activity subsystems can be simulated in either CPU (a few neurons) or GPU (thousands or millions of neurons) using time-driven methods. In this brief, we have undertaken a comparative study of these different simulation methods. For benchmarking the different simulation methods and platforms, we have used a cerebellar-inspired neural-network model consisting of a very dense granular layer and a Purkinje layer with a smaller number of cells (according to biological ratios). Thus, this cerebellar-like network includes a dense diverging neural layer (increasing the dimensionality of its internal representation and sparse coding) and a converging neural layer (integration) similar to many other biologically inspired and also artificial neural networks. | ['Francisco Naveros', 'Niceto R. Luque', 'Jesús Alberto Garrido', 'Richard R. Carrillo', 'Mancia Anguita', 'Eduardo Ros'] | A Spiking Neural Simulator Integrating Event-Driven and Time-Driven Computation Schemes Using Parallel CPU-GPU Co-Processing: A Case Study | 137,759 |
User evaluations have gained increasing importance in visualization research over the past years, as in many cases these evaluations are the only way to support the claims made by visualization researchers. Unfortunately, recent literature reviews show that, in comparison to algorithmic performance evaluations, the number of user evaluations is still very low. Reasons for this are the amount of time required to conduct such studies, together with the difficulties involved in participant recruitment and result reporting. While it has been shown that the quality of evaluation results and the simplified participant recruitment of crowdsourcing platforms make this technology a viable alternative to lab experiments when evaluating visualizations, the time for conducting and reporting such evaluations is still very high. In this paper, we propose a software system which integrates the conduct, analysis, and reporting of crowdsourced user evaluations directly into the scientific visualization development process. With the proposed system, researchers can conduct and analyze quantitative evaluations on a large scale through an evaluation-centric user interface with only a few mouse clicks. Thus, it becomes possible to perform iterative evaluations during algorithm design, which potentially leads to better results compared to the time-consuming user evaluations traditionally conducted at the end of the design process. Furthermore, the system is built around a centralized database, which supports easy reuse of old evaluation designs and the reproduction of old evaluations with new or additional stimuli, both of which are driving challenges in scientific visualization research. | ['Rickard Englund', 'Sathish Kottravel', 'Timo Ropinski'] | A crowdsourcing system for integrated and reproducible evaluation in scientific visualization | 725,279
The wide use of satellite-based instruments provides measurements in climatology on a global scale, which often have a nonstationary covariance structure. The issue of modeling a spatial random field on the sphere that is stationary across longitudes is addressed with a kernel convolution approach. The observed random field is generated by convolving a latent uncorrelated random field with a class of Matern-type kernel functions. By allowing the parameters in the kernel functions to vary with location, it is possible to generate a flexible class of covariance functions and capture nonstationary properties. Since the corresponding covariance functions generally do not have a closed form, numerical evaluation is necessary and a pre-computed table is used to speed up the computation. For regular grid data on the sphere, the circulant block property of the covariance matrix enables us to use the Fast Fourier Transform (FFT) to compute its determinant and inverse efficiently. The proposed approach is applied to the Total Ozone Mapping Spectrometer (TOMS) data for illustration. | ['Yang Li', 'Zhengyuan Zhu'] | Modeling nonstationary covariance function with convolution on sphere | 861,985
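The FFT shortcut for regular grids rests on the fact that the eigenvalues of a circulant matrix are the DFT of its first column, so the determinant and inverse come in O(n log n) instead of O(n^3). The sketch below illustrates this for the simplest, purely circulant case (a single ring of longitudes with a stationary covariance); it is our own illustration, not the authors' code, and the exponential covariance used is an assumed stand-in for their Matern-type kernels. The full block-circulant (2-D grid) case applies the same idea blockwise.

```python
import numpy as np

def circulant_logdet_and_solve(c, y):
    """For a symmetric circulant covariance matrix C with first column c,
    compute log|C| and C^{-1} y via the FFT in O(n log n).
    Eigenvalues of a circulant matrix are the DFT of its first column,
    and multiplication by C is circular convolution with c."""
    lam = np.fft.fft(c).real          # real eigenvalues for symmetric c
    logdet = np.sum(np.log(lam))      # log-determinant = sum of log-eigenvalues
    x = np.fft.ifft(np.fft.fft(y) / lam).real  # deconvolution solves C x = y
    return logdet, x
```

With longitudes equally spaced on a ring, a stationary covariance depends only on the circular lag, which is exactly what makes C circulant; the same computation done densely would require a Cholesky factorization of the full matrix.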