Dataset schema: abstract (string, 8 to 9.19k chars); authors (string, 9 to 1.96k chars); title (string, 8 to 367 chars); __index_level_0__ (int64, 13 to 1,000k).
Retrospective: Monsoon: an explicit token-store architecture
['George M. Papadopoulos', 'David E. Culler']
Retrospective: Monsoon: an explicit token-store architecture
59,989
In an unordered code no codeword is contained in any other codeword. Unordered codes are All Unidirectional Error Detecting (AUED) codes. In the binary case, it is well known that among all systematic codes with k information bits, Berger codes are optimal unordered codes with r = ⌈log₂(k+1)⌉ check bits. This paper gives some new theory on variable length unordered codes and introduces a new class of systematic unordered codes with variable length check symbols. The average redundancy of these new codes is r ≈ (1/2)log₂(πek/2) = (1/2)log₂(k) + 1.047, where k ∈ ℕ is the number of information bits. It is also shown that such codes are optimal in the class of systematic unordered codes with fixed length information symbols and variable length check symbols. The generalization to the non-binary case is also given.
['Laura Pezza', 'Luca G. Tallini', 'Bella Bose']
On systematic variable length unordered codes
506,710
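As a companion to the abstract above: a minimal Python sketch (ours, not the paper's) of the fixed-length Berger baseline it cites, where the check symbol is the count of zeros in the information word written in r = ⌈log₂(k+1)⌉ bits, plus an exhaustive check that the resulting code is unordered for k = 4.

```python
import math

def berger_encode(info_bits: str) -> str:
    """Append a Berger check symbol: the number of 0s in the
    information word, written in r = ceil(log2(k+1)) bits."""
    k = len(info_bits)
    r = math.ceil(math.log2(k + 1))
    zeros = info_bits.count("0")
    return info_bits + format(zeros, f"0{r}b")

def contained(a: str, b: str) -> bool:
    """True if codeword a is contained in b: a has 1s only where b does."""
    return all(x <= y for x, y in zip(a, b))

# Exhaustive check that the code is unordered for k = 4.
words = [format(i, "04b") for i in range(16)]
codes = [berger_encode(w) for w in words]
assert not any(contained(c1, c2)
               for c1 in codes for c2 in codes if c1 != c2)
print(berger_encode("1010"))  # '1010' + '010' (two zeros, r = 3)
```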
Hypersonic - Model Analysis as a Service.
['Vlad Acretoaie', 'Harald Störrle']
Hypersonic - Model Analysis as a Service.
772,898
We investigate the problem of learning a classification task on data represented in terms of their pairwise proximities. This representation does not refer to an explicit feature representation of the data items and is thus more general than the standard approach of using Euclidean feature vectors, from which pairwise proximities can always be calculated. Our first approach is based on a combined linear embedding and classification procedure resulting in an extension of the Optimal Hyperplane algorithm to pseudo-Euclidean data. As an alternative we present another approach based on a linear threshold model in the proximity values themselves, which is optimized using Structural Risk Minimization. We show that prior knowledge about the problem can be incorporated by the choice of distance measures and examine different metrics w.r.t. their generalization. Finally, the algorithms are successfully applied to protein structure data and to data from the cat's cerebral cortex. They show better performance than K-nearest-neighbor classification.
['Thore Graepel', 'Ralf Herbrich', 'Peter Bollmann-Sdorra', 'Klaus Obermayer']
Classification on Pairwise Proximity Data
2,464
For an H.264/AVC decoder system, the motion compensation (MC) bandwidth comes from two parts: the reference data loading bandwidth and the equivalent bandwidth from DRAM access overhead latency. In this paper, a bandwidth-efficient cache-based MC architecture is proposed. It exploits both intra-MB and inter-MB data reuse and reduces MC bandwidth by up to 46% compared to the conventional scheme. To reduce the equivalent bandwidth from DRAM access overhead latency, a DRAM-friendly data mapping and access control scheme are proposed. Together they reduce the equivalent DRAM access overhead bandwidth by 89.8% on average. The average MC burst length is improved to 9.59 words/burst. The total bandwidth reduction ranges from 32% to 71% compared to previous works.
['Tzu-Der Chuang', 'Lo-Mei Chang', 'Tsai-Wei Chiu', 'Yi-Hau Chen', 'Liang-Gee Chen']
Bandwidth-efficient cache-based motion compensation architecture with DRAM-friendly data access control
446,457
In this paper we relax the assumption of network connectivity within the sensor network and introduce mobile communication relays to the network. This addition converts the homogeneous sensor network to a heterogeneous one. Based on the communication geometry of both sensing and communication relay agents we derive communication constraints within the network that guarantee network connectivity. We then define a heterogeneous proximity graph that encodes the communication links that exist within the heterogeneous network. By specifying particular edge weights in the proximity graph, we provide a technique for biasing particular connections within the heterogeneous sensor network. Through a minimal spanning tree approach, we show how to minimize communication links within the network which allows for larger feasible motion sets of the sensing agents that guarantee the network remains connected. We also provide an algorithm that allows for adding communication links to the minimal spanning tree of the heterogeneous proximity graph to create a biconnected graph that is robust to a single node failure. We then combine a prioritized search algorithm and the communication constraints to provide a decentralized prioritized sensing control algorithm for a heterogeneous sensor network that maintains network connectivity.
['R. Andres Cortez', 'Rafael Fierro', 'John E. Wood', 'Ronald Lumia']
Heterogeneous sensor network for prioritized sensing
445,127
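A minimal sketch of the minimal-spanning-tree step described in the abstract above, under assumptions of ours (the agent names, positions, range, and bias weights are illustrative, not the paper's): build a weighted heterogeneous proximity graph over sensing agents (S*) and relay agents (R*), bias relay-relay links, and keep only the MST of communication links.

```python
import networkx as nx

positions = {"S1": (0, 0), "S2": (4, 0), "R1": (2, 1), "R2": (2, -1)}
comm_range = 3.0

def dist(a, b):
    (x1, y1), (x2, y2) = positions[a], positions[b]
    return ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5

G = nx.Graph()
for a in positions:
    for b in positions:
        if a < b and dist(a, b) <= comm_range:
            # Bias: make relay-relay links cheaper so the MST
            # prefers routing through the mobile relays.
            bias = 0.5 if a.startswith("R") and b.startswith("R") else 1.0
            G.add_edge(a, b, weight=bias * dist(a, b))

mst = nx.minimum_spanning_tree(G, weight="weight")
print(sorted(mst.edges()))  # the minimal link set keeping the network connected
```

The paper then adds links back to this MST to obtain a biconnected graph robust to single node failure; that step is omitted here.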
Mobile and embedded systems increasingly process sensitive data, ranging from personal information including health records or financial transactions to parameters of technical systems such as car engines. Cryptographic circuits are employed to protect these data from unauthorized access and manipulation. Fault-based attacks are a relatively new threat to system integrity. They circumvent the protection by inducing faults into the hardware implementation of cryptographic functions, thus affecting encryption and/or decryption in a controlled way. By doing so, the attacker obtains supplementary information that she can utilize during cryptanalysis to derive protected data, such as secret keys. In recent years, a large number of fault-based attacks, and countermeasures to protect cryptographic circuits against them, have been developed. However, isolated techniques for each individual attack are no longer sufficient, and a generic protective strategy is lacking.
['Ilia Polian', 'Martin Kreuzer']
Fault-based attacks on cryptographic hardware
133,776
On Bar (1, j)-Visibility Graphs - (Extended Abstract).
['Franz J. Brandenburg', 'Niklas Heinsohn', 'Michael Kaufmann', 'Daniel Neuwirth']
On Bar (1, j)-Visibility Graphs - (Extended Abstract).
737,218
This paper presents the design and performance of SPIFFI, a scalable high-performance parallel file system intended for use by extremely I/O intensive applications including "Grand Challenge" scientific applications and multimedia systems. This paper contains experimental results from a SPIFFI prototype on a 64 node/64 disk Intel Paragon. The results show that SPIFFI provides high performance and linear scaleup on real hardware. The paper also explains how shared file pointers (i.e., file pointers that are shared by multiple processes) can simplify the design of a parallel application. By sequentializing I/O accesses and by providing dynamic I/O load balancing, a shared file pointer may even improve an application's performance. This paper also presents the predictions of a SPIFFI simulator that we validated using the prototype. The simulator results show that SPIFFI continues to provide high performance even when it is scaled to configurations with as many as 128 disks or 256 compute nodes.
['Craig S. Freedman', 'Josef Burger', 'David J. DeWitt']
SPIFFI-a scalable parallel file system for the Intel Paragon
202,237
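A hedged sketch of the shared-file-pointer idea from the abstract above (not SPIFFI's actual API): multiple processes atomically advance a single shared offset, so each writer claims a disjoint file region and I/O work is balanced dynamically.

```python
import multiprocessing as mp
import os

CHUNK = 4096

def worker(shared_off, lock, path, wid, n_chunks):
    # Each process claims the next file region by atomically
    # advancing the shared file pointer (a fetch-and-add analogue).
    with open(path, "r+b") as f:
        for _ in range(n_chunks):
            with lock:
                off = shared_off.value
                shared_off.value += CHUNK
            f.seek(off)
            f.write(bytes([wid]) * CHUNK)

if __name__ == "__main__":
    path = "shared.dat"
    with open(path, "wb") as f:
        f.truncate(8 * CHUNK)        # preallocate 8 chunks
    off = mp.Value("q", 0)           # 64-bit shared offset
    lock = mp.Lock()
    ps = [mp.Process(target=worker, args=(off, lock, path, wid, 4))
          for wid in (1, 2)]
    for p in ps:
        p.start()
    for p in ps:
        p.join()
    print(os.path.getsize(path), off.value)  # 32768 32768
```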
Traditional RF-based indoor positioning approaches use only the Radio Signal Strength Indicator (RSSI) to locate the target object. But RSSI suffers significantly from the multi-path phenomenon and other environmental factors. Hence, the localization accuracy drops dramatically in a large tracking field. To solve this problem, this paper introduces one more resource, the dynamic of RSSI, which is the variance of signal strength caused by the target object and is more robust to environment changes. By combining these two resources, we are able to greatly improve the accuracy and scalability of current RF-based approaches. We call this hybrid approach COCKTAIL. It employs both the technologies of active RFID and Wireless Sensor Networks (WSNs). Sensors use the dynamic of RSSI to figure out a cluster of reference tags as candidates. The final target location is estimated by using the RSSI relationships between the target tag and the candidate reference tags. Experiments show that COCKTAIL achieves a remarkably high localization accuracy of 0.45 m, significantly outperforming most pure RF-based localization approaches.
['Dian Zhang', 'Yanyan Yang', 'Dachao Cheng', 'Siyuan Liu', 'Lionel M. Ni']
COCKTAIL: An RF-Based Hybrid Approach for Indoor Localization
87,689
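A minimal numpy sketch of the "dynamic of RSSI" signal used above, under our own illustrative parameters (window length, noise levels): a sliding-window variance per link flags the reference tags whose links are being disturbed by the target.

```python
import numpy as np

def rssi_dynamics(rssi: np.ndarray, win: int = 10) -> np.ndarray:
    """Sliding-window variance of RSSI per link (the 'dynamic of
    RSSI'); rssi has shape (n_links, n_samples)."""
    n_links, n = rssi.shape
    out = np.empty((n_links, n - win + 1))
    for i in range(n - win + 1):
        out[:, i] = rssi[:, i:i + win].var(axis=1)
    return out

rng = np.random.default_rng(0)
quiet = rng.normal(-60, 0.5, size=(1, 100))      # undisturbed link
disturbed = quiet.copy()
disturbed[0, 40:60] += rng.normal(0, 4.0, 20)    # target walks through
dyn = rssi_dynamics(np.vstack([quiet, disturbed]))
# Links whose variance spikes become candidate reference tags.
print(dyn.max(axis=1))   # the disturbed link shows a much larger peak
```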
A highly efficient SOC test compression scheme which uses sequential linear decompressors local to each core is proposed. Test data is stored on the tester in compressed form and brought over the TAM to the core before being decompressed. Very high encoding efficiency is achieved by providing the ability to share free variables across test cubes being compressed at the same time as well as in subsequent time steps. The idea of retaining unused non-pivot free variables when decompressing one test cube to help for encoding subsequent test cubes that was introduced in [Muthyala 12] is applied here in the context of SOC testing. It is shown that in this application, a first-in first-out (FIFO) buffer is not required. The ability to retain excess free variables rather than wasting them when the decompressor is reset avoids the need for high precision in matching the number of free variables used for encoding with the number of care bits. This allows greater flexibility in test scheduling to reduce test time, tester storage, and control complexity as indicated by the experimental results.
['Sreenivaas S. Muthyala', 'Nur A. Touba']
SOC test compression scheme using sequential linear decompressors with retained free variables
128,158
The ability to determine what activity of daily living a person performs is of interest in many application domains. It is possible to determine the physical and cognitive capabilities of the elderly by inferring what activities they perform in their houses. Our primary aim was to establish a proof of concept that a wireless sensor system can monitor and record physical activity and that these data can be modeled to predict activities of daily living. The secondary aim was to determine the optimal placement of the sensor boxes for detecting activities in a room. A wireless sensor system was set up in a laboratory kitchen. The ten healthy participants were requested to make tea following a defined sequence of tasks. Data were collected from the eight wireless sensor boxes placed in specific places in the test kitchen and analyzed to detect the sequences of tasks performed by the participants. These task sequences were used to train and test a Markov model. Data analysis focused on the reliability of the system and the integrity of the collected data. The sequences of tasks were successfully recognized for all subjects, and the averaged task-sequence patterns were highly correlated between subjects. Analysis of the collected data indicates that sensors placed in different locations are capable of recognizing activities, with the movement detection sensor contributing the most to the detection of tasks. The central top of the room, with no obstruction of view, was considered the best location to record data for activity detection. Wireless sensor systems show much promise as easily deployable tools for monitoring and recognizing activities of daily living.
['Prabitha Urwyler', 'Reto Stucki', 'René M. Müri', 'Urs Peter Mosimann', 'Tobias Nef']
Passive wireless sensor systems can recognize activities of daily living.
670,641
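A small sketch of the Markov-model step described above, with hypothetical task labels (the study's actual task set may differ): train first-order transition probabilities from observed sequences, then score a new sequence by its log-likelihood.

```python
from collections import defaultdict
import math

# Hypothetical tea-making task labels, not the study's exact set.
sequences = [
    ["kettle", "cupboard", "teabag", "pour", "milk"],
    ["kettle", "teabag", "cupboard", "pour", "milk"],
    ["kettle", "cupboard", "teabag", "pour", "milk"],
]

# Train: first-order transition probabilities with add-one smoothing.
counts = defaultdict(lambda: defaultdict(int))
for seq in sequences:
    for a, b in zip(seq, seq[1:]):
        counts[a][b] += 1
states = sorted({t for s in sequences for t in s})
P = {a: {b: (counts[a][b] + 1) / (sum(counts[a].values()) + len(states))
         for b in states} for a in states}

def log_likelihood(seq):
    return sum(math.log(P[a][b]) for a, b in zip(seq, seq[1:]))

# A typical task sequence scores higher than a scrambled one.
print(log_likelihood(sequences[0]),
      log_likelihood(["milk", "pour", "kettle", "teabag", "cupboard"]))
```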
A Multi-agent Selection of Multiple Composite Web Services Driven by QoS.
['Fatma Siala', 'Khaled Ghedira']
A Multi-agent Selection of Multiple Composite Web Services Driven by QoS.
784,229
An algorithm for unifying the techniques of gate sizing and clock skew optimization for acyclic pipelines is presented in this paper. In the design of circuits under very tight timing specifications, the area overhead of gate sizing can be considerable. The procedure described herein utilizes the idea of cycle borrowing using clock skew optimization to relax the stringency of the timing specification on the critical stages of the pipeline. The theoretical basis for the procedure is developed, a new algorithm for timing analysis of acyclic pipeline circuits with deliberate skew is presented, and a sensitivity-based optimizer is used to solve the sizing+skew problem. Our experimental results verify that the procedure of cycle borrowing using sizing+skew results in a better overall area-delay tradeoff as compared to using sizing alone.
['Harsha Sathyamurthy', 'Sachin S. Sapatnekar', 'John P. Fishburn']
Speeding up pipelined circuits through a combination of gate sizing and clock skew optimization
331,493
Attempts have been made to formally verify software transactional memories (STMs), but these are limited in the scale of systems they can handle and generally verify only a model of the system, and not the actual system. We present an alternative approach to checking the correctness of an STM implementation: verifying the execution runs of an STM using a checker that runs in parallel with the transactional memory system. The correctness criterion that is the subject of verification is the serializability of transactions. While checking transaction serializability is NP-complete, practically useful subclasses such as interchange-serializability (DSR) are efficiently computable. Checking DSR reduces to checking for cycles in a transaction ordering graph which captures the access order of objects shared between transaction instances. Doing this concurrently with the main transaction execution requires minimizing the overhead of capturing object accesses, and managing the size of the graph. We discuss techniques for minimizing the overhead of access logging, which includes time-stamping, and present techniques for on-the-fly graph compaction that greatly reduce the graph size that needs to be maintained, to be no larger than the number of threads. We have implemented concurrent serializability checking in the Rochester Software Transactional Memory (RSTM) system and present our practical experiences with it. Results for RSTM, STAMP and synthetic benchmarks are given. The overhead of concurrent checking is a strong function of the transaction length. For long transactions it is negligible, so the use of the proposed method for continuous runtime checking is acceptable. For very short transactions it can be significant; in this case the proposed method is applicable to debugging.
['Arnab Sinha', 'Sharad Malik']
Runtime checking of serializability in software transactional memory
448,816
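A minimal sketch of the reduction named in the abstract above (not RSTM's implementation): build the transaction ordering graph from a log of conflicting accesses and check it for cycles; a cycle means the recorded run has no serial equivalent.

```python
from collections import defaultdict

def ordering_graph(log):
    """log: list of (txn_id, obj, 'r'|'w') in global access order.
    Edge T1 -> T2 when T1 accessed an object before T2 and at least
    one of the two accesses is a write (a conflict)."""
    edges = defaultdict(set)
    seen = defaultdict(list)          # obj -> [(txn, mode), ...]
    for txn, obj, mode in log:
        for prev_txn, prev_mode in seen[obj]:
            if prev_txn != txn and "w" in (mode, prev_mode):
                edges[prev_txn].add(txn)
        seen[obj].append((txn, mode))
    return edges

def has_cycle(edges):
    WHITE, GRAY, BLACK = 0, 1, 2
    color = defaultdict(int)
    def dfs(u):
        color[u] = GRAY
        for v in edges[u]:
            if color[v] == GRAY or (color[v] == WHITE and dfs(v)):
                return True
        color[u] = BLACK
        return False
    return any(color[u] == WHITE and dfs(u) for u in list(edges))

# T1 and T2 each read what the other later overwrites: not serializable.
log = [(1, "x", "r"), (2, "y", "r"), (2, "x", "w"), (1, "y", "w")]
print(has_cycle(ordering_graph(log)))  # True
```

The paper's compaction techniques keep this graph no larger than the number of threads; the sketch above omits that step.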
A user-centric method for fast, interactive, robust, and high-quality shadow removal is presented. Our algorithm can perform detection and removal in a range of difficult cases, such as highly textured and colored shadows. To perform detection, an on-the-fly learning approach is adopted, guided by two rough user inputs for the pixels of the shadow and the lit area. After detection, shadow removal is performed by registering the penumbra to a normalized frame, which allows efficient estimation of nonuniform shadow illumination changes, resulting in accurate and robust removal. Another major contribution of this work is the first validated and multiscene category ground truth for shadow removal algorithms. This data set, containing 186 images, eliminates inconsistencies between shadow and shadow-free images and provides a range of different shadow types such as soft, textured, colored, and broken shadow. Using this data, the most thorough comparison of state-of-the-art shadow removal methods to date is performed, showing our proposed algorithm to outperform the state of the art across several measures and shadow categories. To complement our data set, an online shadow removal benchmark website is also presented to encourage future open comparisons in this challenging field of research.
['Han Gong', 'Darren Cosker']
Interactive Removal and Ground Truth for Difficult Shadow Scenes
863,178
Many classification problems must be performed in a timely or time constrained manner. For this reason, the generation of control schemes which are capable of responding in real-time are fundamental to many applications. For our problem, that of ship classification, tactical scenarios often dictate the response time required from a system. In this paper, we discuss efficient ways to prioritize and gather evidence within belief networks. We also suggest ways in which we can structure our large problem into a series of small ones. This both pre-defines much of our control strategy into the system structure and also localizes our run-time control issues into much smaller networks. The overall control strategy thus includes the combination of both of these methods. By combining them correctly we can reduce the amount of dynamic computation required during run-time and thus improve the responsiveness of the system.
['Scott A. Musman', 'LiWu Chang', 'Lashon B. Booker']
APPLICATION OF A REAL-TIME CONTROL STRATEGY FOR BAYESIAN BELIEF NETWORKS TO SHIP CLASSIFICATION PROBLEM SOLVING
43,584
The accuracy of a Facial Expression Recognition (FER) system is completely reliant on the extraction of informative features. In this work, a new feature extraction method is proposed that has the capability to extract the most prominent features from the human face. The proposed technique has been tested and validated in order to achieve the best accuracy for FER systems. Some regions of the face contribute much more than others to the achievable accuracy. Therefore, in this work, the human face is divided into a number of regions and in each region the movement of pixels is traced. For this purpose, a member of the wavelet family named the symlet wavelet is used, and each facial frame is decomposed up to 2 levels. In each decomposition level, the distances between the pixels are found by using the distance formula, and in this way some of the informative coefficients are extracted and the feature vector is created. Moreover, the dimension of the feature space is reduced by employing a well-known statistical technique, Linear Discriminant Analysis (LDA). Finally, a Hidden Markov Model (HMM) is exploited for training and testing the system in order to label the expressions. The proposed FER system has been tested and validated on the Cohn-Kanade dataset. The resulting recognition accuracy of 94% illustrates the success of employing the proposed technique for FER.
['Muhammad Hameed Siddiqi', 'Sungyoung Lee']
Human Facial Expression Recognition Using Wavelet Transform and Hidden Markov Model
599,007
Least-squares error (LSE) or mean-squared error (MSE) optimization criteria lead to adaptive filters that are highly sensitive to impulsive noise. The sensitivity to noise bursts increases with the convergence speed of the adaptation algorithm and limits the performance of signal processing algorithms, especially when fast convergence is required, as, for example, in adaptive beamforming for speech and audio signal acquisition or acoustic echo cancellation. In these applications, noise bursts are frequently due to undetected double-talk. In this paper, we present impulsive-noise-robust multichannel frequency-domain adaptive filters (MC-FDAFs) based on outlier-robust M-estimation using a Newton algorithm and a discrete Newton algorithm, which are especially designed for frequency bin-wise adaptation control. Bin-wise adaptation and control in the frequency domain enables the application of the outlier-robust MC-FDAFs to a generalized sidelobe canceler (GSC) using an adaptive blocking matrix for speech and audio signal acquisition. It is shown that the improved robustness leads to faster convergence and to higher interference suppression relative to nonrobust adaptation algorithms, especially during periods of strong interference.
['Wolfgang Herbordt', 'Herbert Buchner', 'Satoshi Nakamura', 'Walter Kellermann']
Multichannel Bin-Wise Robust Frequency-Domain Adaptive Filtering and Its Application to Adaptive Beamforming
535,508
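A time-domain sketch of the outlier-robust idea above, using our own stand-in (a Huber-style clipped error inside NLMS rather than the paper's frequency-domain Newton M-estimators): clipping the error limits the damage a single noise burst can do to the adaptation.

```python
import numpy as np

def robust_nlms(x, d, L=16, mu=0.5, k=3.0, eps=1e-8):
    """NLMS with a Huber-style clipped error (an outlier-robust
    M-estimation stand-in). x: input, d: desired signal."""
    w = np.zeros(L)
    scale = 1.0                       # running robust scale of the error
    for n in range(L - 1, len(x)):
        u = x[n - L + 1:n + 1][::-1]  # newest sample first
        e = d[n] - w @ u
        scale = 0.99 * scale + 0.01 * abs(e)
        psi = np.clip(e, -k * scale, k * scale)   # Huber influence function
        w += mu * psi * u / (u @ u + eps)
    return w

rng = np.random.default_rng(1)
x = rng.normal(size=4000)
h = rng.normal(size=16)               # unknown echo path
d = np.convolve(x, h)[:4000] + 0.01 * rng.normal(size=4000)
d[2000] += 50.0                       # impulsive burst (undetected double-talk)
w = robust_nlms(x, d)
print(np.linalg.norm(w - h))          # stays close to h despite the burst
```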
The paper proposes a publicly verifiable threshold decryption scheme without a trusted center, motivated by the many practical settings in which no trusted center exists. The scheme requires each decryption member to submit a commitment during the decryption process, so that the member's identity is publicly verifiable. Moreover, the scheme has several advantages: the shadows are protected from disclosure, and members can be deleted dynamically.
['Xin Lv', 'Congming Wang']
Public Verifiable Threshold Decryption Scheme without Trusted Center
83,713
A brief review of document image retrieval methods: Recent advances
['Fahimeh Alaei', 'Alireza Alaei', 'Michael Myer Blumenstein', 'Umapada Pal']
A brief review of document image retrieval methods: Recent advances
945,844
The paper presents a novel error detection and correction methodology for corrupted coefficients caused by the transmission over a noisy channel of images coded using orthogonal transforms. The method is based on the orthogonal property of image transforms, such as the discrete cosine transform. A few reference pixel intensities of each image block are replaced by a predetermined intensity level prior to transmission. This allows the receiver to identify and correct the error pattern generated by the corruption of DCT coefficients. It is possible to correct t corrupted DCT coefficients in an image block by altering the value of 2t+1 pixel intensity levels to a predetermined value in each image block. After recovering the corrupted DCT coefficients, if necessary, and reconstructing the image, the original intensity level of each reference pixel is estimated by averaging the intensity levels of the adjacent pixels. The resulting image is indistinguishable from the original image when examined by a human. The algorithm does not require any channel overhead. An illustrative example is presented to demonstrate the performance of our algorithm.
['Mohamed Bingabr', 'Pramod K. Varshney']
A novel error correction method without overhead for corrupted JPEG images
167,587
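A toy numpy sketch in the spirit of the method above, under our own simplifications (brute-force pattern matching instead of the paper's algebraic decoding): with t = 1 and 2t+1 = 3 reference pixels fixed to a predetermined level before the orthogonal DCT, a corrupted coefficient shows up as a deviation at those pixels whose pattern identifies the coefficient and its error.

```python
import numpy as np
from scipy.fft import dctn, idctn

N, LEVEL = 8, 128.0
rng = np.random.default_rng(2)
block = rng.integers(0, 256, (N, N)).astype(float)
refs = [(0, 0), (3, 4), (7, 7)]           # 2t+1 = 3 reference pixels, t = 1
for r in refs:
    block[r] = LEVEL                       # predetermined level before the DCT

C = dctn(block, norm="ortho")              # transmitted coefficients
C_rx = C.copy()
C_rx[2, 5] += 300.0                        # one coefficient corrupted in transit

# Receiver: the reference-pixel error equals delta * basis(2,5); find
# the single coefficient whose basis pattern explains the residual.
rx = idctn(C_rx, norm="ortho")
resid = np.array([rx[r] - LEVEL for r in refs])
if np.abs(resid).max() > 1e-6:             # corruption detected
    best = None
    for u in range(N):
        for v in range(N):
            e = np.zeros((N, N)); e[u, v] = 1.0
            pat = np.array([idctn(e, norm="ortho")[r] for r in refs])
            if np.abs(pat).min() > 1e-12:  # pattern must touch all refs
                deltas = resid / pat
                if np.allclose(deltas, deltas[0], atol=1e-6):
                    best = (u, v, deltas[0])
    u, v, delta = best
    C_rx[u, v] -= delta                    # corrected
print(np.allclose(idctn(C_rx, norm="ortho"), block))  # True
```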
Ontology learning tries to find ontological relations, by an automatic process. Similarity relationships are one of non-taxonomic relations which may be included in ontology. Our idea is that in presence of taxonomic relations we are able to extract more useful non-taxonomic similarity relations. In this paper we investigate the specifications of an implemented system for extracting these relations by means of new context extraction method which uses taxonomic relations
['Alireza Vazifedoost', 'Farhad Oroumchian', 'Maseud Rahgozar']
Finding Similarity Relations in Presence of Taxonomic Relations in Ontology Learning Systems
21,797
Dynamic memory management can make up to 60% of total program execution time. Object-oriented languages such as C++ can use 20 times more memory than procedural languages like C. Bad memory management causes severe waste of memory, with programs consuming several times what they actually need; it can also degrade performance. Many widely used allocators waste memory and/or CPU time. Since computer memory is an expensive and limited resource, its efficient utilization is necessary. There cannot exist a memory allocator that delivers the best performance and least memory consumption for all programs, and therefore easily tunable allocators are required. General-purpose allocators that come with operating systems give less than optimal performance or memory consumption. An allocator with a few tunable parameters can be tailored to a program's needs for optimal performance and memory consumption. Our tunable hybrid allocator design shows 11-54% better performance and nearly equal memory consumption when compared to the well-known Doug Lea allocator in seven benchmark programs.
['Yusuf Hasan', 'J. Morris Chang']
A tunable hybrid memory allocator
267,151
A multi-wire error correction scheme, which combines Hamming product codes with type-II hybrid ARQ, is proposed for reliable and energy efficient SoC links. Also, a hard decision iterative decoding method, which can achieve the maximum error correction capability of Hamming product codes, is proposed. Simulation results show an improvement of up to four orders of magnitude in residual flit-error rate for multi-wire errors. For a given system reliability requirement, the proposed error control scheme can achieve 35% energy improvement over other error correction codes.
['Bo Fu', 'Paul Ampadu']
A multi-wire error correction scheme for reliable and energy efficient SOC links using Hamming product codes
423,372
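A minimal sketch of the building block named above, a Hamming(7,4) product code (our own toy construction, not the paper's ARQ scheme): encode the rows of a 4x4 flit, then the columns, and decode iteratively; a same-cycle two-wire error that the row code alone miscorrects is cleaned up by the column pass.

```python
import numpy as np

P = np.array([[1, 1, 0], [1, 0, 1], [0, 1, 1], [1, 1, 1]])
G = np.hstack([np.eye(4, dtype=int), P])       # systematic Hamming(7,4)
H = np.hstack([P.T, np.eye(3, dtype=int)])     # parity-check matrix

def ham_encode(d):                              # d: (..., 4) bits
    return d @ G % 2

def ham_correct(r):                             # single-error syndrome decode
    s = H @ r % 2
    if s.any():
        pos = next(j for j in range(7) if np.array_equal(H[:, j], s))
        r = r.copy(); r[pos] ^= 1
    return r

# Product code: encode the 4 rows of a 4x4 flit, then the 7 columns.
data = np.random.default_rng(3).integers(0, 2, (4, 4))
rows = ham_encode(data)                         # 4 x 7
code = ham_encode(rows.T).T                     # 7 x 7 product codeword

rx = code.copy()
rx[0, [1, 2]] ^= 1         # two adjacent wires hit in the same cycle:
                           # the row decoder alone would miscorrect this
rx = np.array([ham_correct(r) for r in rx])     # row pass (miscorrects row 0)
rx = np.array([ham_correct(c) for c in rx.T]).T  # column pass cleans it up
print(np.array_equal(rx, code))                 # True
```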
Combining Dynamic Reward Shaping and Action Shaping for Coordinating Multi-agent Learning.
['Xiangbin Zhu', 'Chongjie Zhang', 'Victor R. Lesser']
Combining Dynamic Reward Shaping and Action Shaping for Coordinating Multi-agent Learning.
774,323
In this brief, a quasi-sliding mode (QSM)-based repetitive learning control (RLC) method is proposed for tackling multi-input multi-output nonlinear continuous-time systems with matching perturbations. The proposed RLC method is able to perform rejection of periodic exogenous disturbances as well as tracking of periodic reference trajectories. It ensures robust system stability when the system is subject to nonperiodic uncertainties and disturbances. An application to a robotic manipulator is used to illustrate the performance of the proposed QSM-based RLC method. A comparative study with the conventional variable structure control (VSC) technique is also included.
['Xiaodong Li', 'Tommy W. S. Chow', 'John K. L. Ho', 'Hong-Zhou Tan']
Repetitive Learning Control of Nonlinear Continuous-Time Systems Using Quasi-Sliding Mode
46,578
Because of the lack of random access memory in the optical domain, optical buffering implemented by fiber delay lines is currently the main approach. Typical optical buffering architectures fall into two categories, feed-forward and feedback buffering, and both have advantages and disadvantages. In this paper, we propose an effective hybrid buffering architecture based on output-buffered feed-forward and feedback shared buffering. The proposed architecture employs feed-forward output buffers to concatenate packets into a variable-length frame, and employs a feedback shared buffer with a large storage frame to handle frames behind the feed-forward output buffers. This scheme uses the frame concept to reduce control complexity and increase buffer depth. Our simulation results show that the proposed architecture improves switch performance and outperforms an existing hybrid buffering architecture, the partially shared buffering architecture, in terms of packet loss probability.
['Guan-Hong Jhou', 'Woei Lin']
A Frame-Based Architecture with Shared Buffers for Slotted Optical Packet Switching
28,220
In this paper we consider the penalty finite element method for the stationary incompressible magnetohydrodynamics (MHD) problem with a penalty parameter. Stability and convergence of numerical solutions are established. Furthermore, two-level penalty methods are also developed for the MHD problem. Our methods consist of solving a nonlinear MHD problem by the usual penalty method on a coarse mesh with mesh size H, and then solving a linearized MHD problem, based on the Stokes, Newton and Oseen iterations respectively, by the penalty method on a fine mesh with mesh size h (h ≪ H). Stability and error estimates of the numerical solutions in the two-level penalty methods are presented. Finally, some numerical tests are provided to demonstrate the effectiveness of the developed algorithms.
['Tong Zhang', 'ZhenZhen Tao']
Two level penalty finite element methods for the stationary incompressible magnetohydrodynamics problem
570,409
Deep Recurrent Neural Network architectures, though remarkably capable at modeling sequences, lack an intuitive high-level spatio-temporal structure, while many problems in computer vision inherently have an underlying high-level structure and can benefit from it. Spatio-temporal graphs are a popular tool for imposing such high-level intuitions in the formulation of real world problems. In this paper, we propose an approach for combining the power of high-level spatio-temporal graphs and the sequence learning success of Recurrent Neural Networks (RNNs). We develop a scalable method for casting an arbitrary spatio-temporal graph as a rich RNN mixture that is feedforward, fully differentiable, and jointly trainable. The proposed method is generic and principled as it can be used for transforming any spatio-temporal graph through a certain set of well defined steps. The evaluation of the proposed approach on a diverse set of problems, ranging from modeling human motion to object interactions, shows improvement over the state of the art by a large margin. We expect this method to empower new approaches to problem formulation through high-level spatio-temporal graphs and Recurrent Neural Networks.
['Ashesh Jain', 'Amir Roshan Zamir', 'Silvio Savarese', 'Ashutosh Saxena']
Structural-RNN: Deep Learning on Spatio-Temporal Graphs
550,599
In this paper, we consider a decode-forward relay system with a source, a relay, and a destination, where two-layer superposition codes are used at the source and the relay. An equivalent squared minimum distance (ESMD) that determines the error performance is derived by using an upper bound on the pairwise error probability. Without deriving error probabilities, the error performance level for each of the superimposed symbols can be shown in a straightforward manner by the ESMD. An optimal superposition-coded relay scheme and a suboptimal switched-power superposition coding scheme are proposed by improving the ESMD. Closed-form power allocation that maximizes the ESMD for the switched scheme is derived for 2-ary pulse amplitude modulation (PAM). An $M$-ary PAM generalization for the switched-power superposition-coded relay scheme is also presented. Simulation results show that significant signal-to-noise ratio gains are achieved by the optimal and switched-power superposition coding strategies for 2-ary and 4-ary PAM over the Rayleigh fading channel.
['Xianglan Jin', 'Hyoung-Nam Kim']
Switched-Power Two-Layer Superposition Coding in Cooperative Decode-Forward Relay Systems
686,479
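A hedged numeric illustration of the two-layer superposition idea above (not the paper's ESMD derivation, and the power splits α are our own): superpose two 2-PAM layers and compute the squared minimum distance of the composite constellation, the kind of quantity the ESMD refines for the fading relay setting.

```python
import numpy as np
from itertools import product

def composite_constellation(alpha, P=1.0):
    """Two-layer superposed 2-PAM: x = sqrt(alpha*P)*b1 + sqrt((1-alpha)*P)*b2."""
    a1, a2 = np.sqrt(alpha * P), np.sqrt((1 - alpha) * P)
    return {bits: a1 * (2 * bits[0] - 1) + a2 * (2 * bits[1] - 1)
            for bits in product((0, 1), repeat=2)}

def min_distance(points):
    vals = list(points.values())
    return min(abs(u - v) for i, u in enumerate(vals) for v in vals[i + 1:])

# The squared minimum distance drives the pairwise error probability;
# an even power split collapses two composite points onto each other,
# while alpha = 0.8 yields a uniformly spaced 4-point constellation.
for alpha in (0.5, 0.6, 0.8, 0.9):
    print(alpha, round(min_distance(composite_constellation(alpha)) ** 2, 4))
```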
The emergence of the computational Grid and the potential for seamless aggregation, integration and interactions has made it possible to conceive a new generation of realistic, scientific and engineering simulations of complex physical phenomena. The inherently heterogeneous and dynamic nature of these applications and the Grid presents significant runtime management challenges. In this paper we extend the PRAGMA framework to enable self-adapting, self-optimizing runtime management of dynamically adaptive applications. Specifically, we present the design, prototype implementation and initial evaluation of policies and mechanisms that enable PRAGMA to autonomically manage, adapt and optimize structured adaptive mesh refinement (SAMR) applications based on current system and application state and predictive models for system behavior and application performance. We use the 3-D adaptive Richtmyer-Meshkov compressible fluid dynamics application and Beowulf clusters at Rutgers University, University of Arizona, and NERSC to develop our performance models, and to define and evaluate our adaptation policies. In our prototype, the predictive performance models capture computational and communication loads and, along with current system state, adjust processor capacities at runtime to enable the application to adapt and optimize its performance.
['Hao Zhu', 'Manish Parashar', 'Jingmei Yang', 'Yeliang Zhang', 'S. Rao', 'Salim Hariri']
Self-adapting, self-optimizing runtime management of Grid applications using PRAGMA
468,041
In this paper, a general framework for the multiple-input multiple-output (MIMO) transceiver design for 5G multiple access (5GMA), including both non-orthogonal multiple access (NOMA) and sparse code multiple access (SCMA), is developed to enhance the system throughput of next-generation communication systems. By applying generalized singular value decomposition (GSVD), MIMO channels can be decomposed into multiple single-input single-output (SISO) channels, to which the concepts of NOMA and SCMA can ideally be applied. GSVD-based precoding is proposed for both uplink and downlink 5GMA transmissions, and simulation results are provided to demonstrate the performance of the proposed schemes.
['Zheng Ma', 'Zhiguo Ding', 'Pingzhi Fan', 'Siyang Tang']
A General Framework for MIMO Uplink and Downlink Transmissions in 5G Multiple Access
828,211
Body Sensor Networks (BSNs) have emerged as a revolutionary technology in many application domains in health-care, fitness, smart cities, and many other compelling Internet of Things (IoT) applications. Most commercially available systems assume that a single device monitors a plethora of user information. In reality, BSN technology is transitioning to multi-device synchronous measurement environments; fusion of the data from multiple, potentially heterogeneous, sensor sources is therefore becoming a fundamental yet non-trivial task that directly impacts application performance. Nevertheless, only recently have researchers started developing technical solutions for effective fusion of BSN data. To the best of our knowledge, the community currently lacks a comprehensive review of the state-of-the-art techniques on multi-sensor fusion in the area of BSNs. This survey discusses clear motivations and advantages of multi-sensor data fusion and particularly focuses on physical activity recognition, aiming at providing a systematic categorization and common comparison framework of the literature by identifying distinctive properties and parameters affecting data fusion design choices at different levels (data, feature, and decision). The survey also covers data fusion in the domains of emotion recognition and general health, and introduces relevant directions and challenges of future research on multi-sensor fusion in the BSN domain.
['Raffaele Gravina', 'Parastoo Alinia', 'Hassan Ghasemzadeh', 'Giancarlo Fortino']
Multi-sensor fusion in body sensor networks: State-of-the-art and research challenges
885,457
Kinematics Analysis of a Novel 5-DOF Hybrid Manipulator
['Wanjin Guo', 'Ruifeng Li', 'Chuqing Cao', 'Yunfeng Gao']
Kinematics Analysis of a Novel 5-DOF Hybrid Manipulator
710,714
Scientists of many countries in which English is not the primary language routinely use a variety of manuscript preparation, correction or editing services, a practice that is openly endorsed by many journals and scientific institutions. These services vary tremendously in their scope; at one end there is simple proof-reading, and at the other extreme there is in-depth and extensive peer-reviewing, proposal preparation, statistical analyses, re-writing and co-writing. In this paper, the various types of service are reviewed, along with authorship guidelines, and the question is raised of whether the high-end services surpass most guidelines’ criteria for authorship. Three other factors are considered. First, the ease of collaboration possible in the internet era allows multiple iterations between the author(s) and the “editing service”, so essentially, papers can be co-written. Second, “editing services” often offer subject-specific experts who comment not only on the language, but interpret and improve scientific content. Third, the trend towards heavily multi-authored papers implies that the threshold necessary to earn authorship is declining. The inevitable conclusion is that at some point the contributions by “editing services” should be deemed sufficient to warrant authorship. Trying to enforce any guidelines would likely be futile, but nevertheless, it might be time to revisit the ethics of using some of the high-end “editing services”. In an increasingly international job market, awareness of this problem might prove increasingly important in authorship disputes, the allocation of research grants, and hiring decisions.
['George A. Lozano']
Ethics of Using Language Editing Services in An Era of Digital Communication and Heavily Multi-Authored Papers
513,492
The exploration of emerging data exchange technologies and the design of image-based language learning (IBLL) applications are presented in this paper. To integrate mobile devices into the learning process, generic interfaces have been created for portable personal spaces (PoPS), providing mobile access to multimedia documents based on XML technologies. IBLL involves image processing, recognition and retrieval, so several algorithms have been proposed for learning assistant applications used on mobile devices. Furthermore, for multimedia data exchange in wireless environments, the compression of visual information based on wavelet transforms and several thresholding techniques is supported. The proposed approaches can suggest ways of studying and organising resources which provide long-term guidance on developing skills and support experiential learning. They have been tested to select the best ones, with the highest processing speed and recognition grade, for the interpretation of Japanese kanji or Mayan glyphs on mobile devices with limited resources and restricted networking capabilities.
['Oleg Starostenko', 'Vicente Alarcon-Aquino', 'Humberto Lobato-Morales', 'Oleg Sergiyenko']
Computational approaches to support image-based language learning within mobile environment
242,153
We give a recursion-theoretic characterization of the complexity classes NC^k for k ≥ 1. In the spirit of implicit computational complexity, it uses no explicit bounds in the recursion and no separation of variables is needed. It is based on three recursion schemes: one corresponds to time (time iteration), one to space allocation (explicit structural recursion), and one to internal computations (mutual in-place recursion). This is, to our knowledge, the first exact characterization of NC^k by a function algebra over infinite domains in implicit complexity.
['Guillaume Bonfante', 'Reinhard Kahle', 'Jean-Yves Marion', 'Isabel Oitavem']
Recursion Schemata for NCk
378,182
The aims of this paper are to analyze an inversion phenomenon theoretically and to discuss the appropriateness of combinations of a crossover operator and a selection model. In a previous study, the author designed a crossover operator that worked well on various kinds of objective functions. One feature of these objective functions is that "the optimum exists much nearer to one boundary than to the other". On such objective functions, with the recommended selection model, the proposed crossover operator set with an appropriate parameter has shown the fastest convergence speed. However, with another selection model, its convergence speed has been the slowest. To understand this inversion phenomenon, a theoretical analysis quantified the selection pressures of the selection models and estimated the expected positions of the center of gravity of the population. The theoretical results corresponded to empirical verifications and successfully explained the phenomenon. Finally, a guideline for designing RCGAs was obtained.
['Hiroshi Someya']
Theoretical analysis on an inversion phenomenon of convergence velocity in a real-coded GA
220,627
Independent component analysis extracts independent signals from their linear mixtures without assuming prior knowledge of the mixing coefficients. As is well known, a number of factors are likely to affect separation results in practical applications, such as the number of active sources, the distribution of the source signals, and noise. The purpose of this paper is to develop a general framework for blind separation from a practical point of view, with special emphasis on activation function adaptation. First, we propose the exponential generative model for probability density functions. A method of constructing an exponential generative model from the activation functions is discussed. Then, a learning algorithm is derived to update the parameters in the exponential generative model. The learning algorithm for activation function adaptation is consistent with the one for training the demixing model. Stability analysis of the learning algorithm for the activation function is also discussed. Both theoretical analysis and simulations show that the proposed approach is universally convergent regardless of the distributions of the sources. Finally, computer simulations are given to demonstrate the effectiveness and validity of the approach.
['Liqing Zhang', 'Andrzej Cichocki', 'Shun-ichi Amari']
Self-adaptive blind source separation based on activation functions adaptation
84,030
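A minimal sketch of the demixing update family the paper above builds on, with our own fixed tanh score function (the paper's contribution is adapting the activation function via the exponential generative model, which this sketch deliberately does not do): the natural-gradient rule W ← W + η(I − φ(y)yᵀ)W on two super-Gaussian sources.

```python
import numpy as np

rng = np.random.default_rng(4)
T = 20000
S = rng.laplace(size=(2, T))             # two super-Gaussian sources
A = np.array([[1.0, 0.6], [0.4, 1.0]])   # unknown mixing matrix
X = A @ S

W = np.eye(2)
eta, batch = 0.02, 200
phi = np.tanh                             # fixed score; the paper adapts this
for epoch in range(10):
    for i in range(0, T, batch):
        Y = W @ X[:, i:i + batch]
        # Natural-gradient rule: W <- W + eta * (I - phi(y) y^T) W
        W += eta * (np.eye(2) - phi(Y) @ Y.T / batch) @ W

# Up to permutation and scaling, W @ A should be close to diagonal:
# each row of the normalized product is dominated by a single entry.
P = W @ A
print(np.round(P / np.abs(P).max(axis=1, keepdims=True), 2))
```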
In this letter, we investigate the problem of CFO estimation in OFDM systems when the timing offset and channel length are not exactly known. Instead of explicitly estimating the timing offset and channel length, we employ a multi-model approach, where the timing offset and channel length can take multiple values with certain probabilities. The effect of the multiple models is directly incorporated into the CFO estimator. Results show that the proposed estimator outperforms both the estimator that selects only the most probable model and the method that takes the maximal model.
['Kun Cai', 'Xiao Li', 'Jian Du', 'Yik-Chung Wu', 'Feifei Gao']
CFO estimation in OFDM systems under timing and channel length uncertainties with model averaging
9,895
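A heavily hedged illustration of the model-averaging step contrasted in the abstract above (the numbers and weights are hypothetical, and the letter's estimator incorporates the models inside the CFO estimator itself rather than averaging afterwards): averaging per-model CFO estimates by model probability versus keeping only the most probable model.

```python
import numpy as np

# Hypothetical per-model CFO estimates and model probabilities; in the
# letter these would come from candidate (timing offset, channel length)
# hypotheses and their probabilities.
models = [
    {"timing": 0, "chan_len": 4, "prob": 0.50, "cfo_hat": 0.102},
    {"timing": 1, "chan_len": 4, "prob": 0.30, "cfo_hat": 0.096},
    {"timing": 0, "chan_len": 6, "prob": 0.20, "cfo_hat": 0.110},
]

probs = np.array([m["prob"] for m in models])
est = np.array([m["cfo_hat"] for m in models])

cfo_select = est[probs.argmax()]   # "most probable model" baseline
cfo_avg = probs @ est              # model averaging over all hypotheses
print(f"select-best: {cfo_select:.4f}  averaged: {cfo_avg:.4f}")
```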
Using Virtual Characters to Study Human Social Cognition
['Antonia F. de C. Hamilton', 'Xueni Pan', 'Paul Alexander George Forbes', 'Joanna Hale']
Using Virtual Characters to Study Human Social Cognition
908,857
The paper analyzes the dynamic behaviors of the three-order Cellular Neural Network (3-order CNN). It shows that the 3-order CNN is symmetric with regard to the origin; the system is dissipative and its asymptotic motion settles onto an attractor. The system has diverse chaotic attractors with different parameters, so the 3-order CNN can be applied in secure communications owing to its chaotic behaviors. The paper combines the 3-order CNN with DES to propose a scheme for image secure communication. The results of the security analysis indicate that this scheme not only has a large key space but is also very sensitive to the initial conditions of the 3-order CNN and the key of DES.
['Fei Xiang', 'Huijuan Xiao', 'Shuisheng Qiu', 'Cheng-Liang Deng']
Dynamical Behavior of Three-Order Cellular Neural Network with Application in Image Secure Communication
495,699
The representation of information granules is the key issue in the discovery of knowledge in information tables. In this paper, the concept of ordered granular labeled structures is introduced. Multi-scale ordered granular labeled structures and multi-scale-decision granular ordered labeled structures are defined in this work. Multi-scale ordered information systems, in which there exist hierarchical scale ordered structures of attribute values measured at different levels of granulation, are also proposed. Finally, representations of information granules at different scales of ordered granulation in multi-scale ordered information systems and multi-scale ordered decision tables are explored.
['Wei-Zhi Wu', 'Shen-Ming Gu', 'Xia Wang']
Information granules in multi-scale ordered information systems
564,957
Force Compensating Trajectories for Redundant Robots: Experimental Results.
['Daniela Vassileva', 'George Boiadjiev', 'Haruhisa Kawasaki', 'Tetsuya Mouri']
Force Compensating Trajectories for Redundant Robots: Experimental Results.
988,472
Quantum Communication Attacks on Classical Cryptographic Protocols - (Invited Talk).
['Ivan Damgård']
Quantum Communication Attacks on Classical Cryptographic Protocols - (Invited Talk).
685,675
This paper describes the results of a long-term empirical investigation into object-oriented framework reuse. The aim is to identify the major problems that occur during framework reuse and the impact of current documentation techniques on these problems. Four major reuse problems are identified: understanding the functionality of framework components; understanding the interactions between framework components; understanding the mapping from the problem domain to the framework implementation; understanding the architectural assumptions in the framework design. Two forms of documentation are identified as having the potential to address these problems, namely pattern languages and micro-architecture descriptions. An in-depth, qualitative analysis suggests that, although pattern languages do provide useful support in terms of introducing framework concepts, this can be bypassed by developers using their previous knowledge, occasionally to the detriment of the final solution. Micro-architecture documentation appears to provide support for simple interaction and functionality queries, but it is not able to address large scale interaction problems involving multiple classes within the framework. The paper concludes that, although a combination of pattern language and micro-architecture documentation is useful for framework reuse, the forms of these documentation types used in this study require further enhancement to become effective. The paper also serves as an example to encourage others to perform evaluation of framework understanding and documentation.
['Douglas Samuel Kirk', 'Marc Roper', 'Murray Wood']
Identifying and addressing problems in object-oriented framework reuse
354,621
Topic Maps offer a powerful foundation for knowledge representation and the implementation of knowledge management applications. Using ontologies to model knowledge structures, they offer concepts to link these knowledge structures with unstructured data stored in files, external documents etc. This paper presents the architecture and prototypical implementation of a Topic Map application infrastructure (called "Topic Grid" in the following) allowing transparent access to different Topic Maps distributed in a network. To a client of the Topic Grid, it appears as if access to a single virtual Topic Map is provided. The Topic Grid architecture is designed as a multi-protocol layered model in order to enhance its reusability.
['Axel Korthaus', 'Stefan Henke', 'Markus Aleksy', 'Martin Schader']
A Distributed Topic Map Architecture for Enterprise Knowledge Management
356,182
In this work, we propose and evaluate an active learning algorithm in context of CPSGrader, an automatic grading and feedback generation tool for laboratory-based courses in the area of cyber-physical systems. CPSGrader detects the presence of certain classes of mistakes using test benches that are generated in part via machine learning from solutions that have the fault and those that do not (positive and negative examples). We develop a clustering-based active learning technique that selects from a large database of unlabeled solutions, a small number of reference solutions for the expert to label that will be used as training data. The goal is to achieve better accuracy of fault identification with fewer reference solutions as compared to random selection. We demonstrate the effectiveness of our algorithm using data obtained from an on-campus laboratory-based course at UC Berkeley.
['Garvit Juniwal', 'Sakshi Jain', 'Alexandre Donzé', 'Sanjit A. Seshia']
Clustering-Based Active Learning for CPSGrader
115,034
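A small sketch of the clustering-based selection idea described above (our own minimal variant; the paper's feature extraction and selection criteria are more elaborate): cluster the unlabeled solutions' feature vectors with k-means and ask the expert to label only the solution nearest each centroid.

```python
import numpy as np
from sklearn.cluster import KMeans

def select_for_labeling(X, n_labels, seed=0):
    """Pick n_labels representative rows of X: cluster with k-means
    and return the index of the point nearest each centroid."""
    km = KMeans(n_clusters=n_labels, n_init=10, random_state=seed).fit(X)
    picks = [int(np.argmin(np.linalg.norm(X - c, axis=1)))
             for c in km.cluster_centers_]
    return sorted(set(picks))

# Hypothetical feature vectors extracted from student solutions' traces.
rng = np.random.default_rng(5)
X = np.vstack([rng.normal(loc, 0.3, size=(50, 4))
               for loc in (0.0, 2.0, 4.0)])   # three behavior modes
print(select_for_labeling(X, 3))   # one representative solution per mode
```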
Agents in a social-technological network can be thought of as strategically interacting with each other by continually observing their own local or hyperlocal information and communicating suitable signals to the receivers who can take appropriate actions. Such interactions have been modeled as information-asymmetric signaling games and studied in our earlier work to understand the role of deception, which often results in general loss of cybersecurity. While there have been attempts to model and check such a body of agents for various global properties and hyperproperties, it has become clear that various theoretical obstacles against this approach are unsurmountable. We instead advocate an approach to dynamically check various liveness and safety hyperproperties with the help of recommenders and verifiers; we focus on empirical studies of the resulting signaling games to understand their equilibria and stability. Agents in such a proposed system may mutate, publish, and recommend strategies and verify properties, for instance, by using statistical inference, machine learning, and model checking with models derived from the past behavior of the system. For the sake of concreteness, we focus on a well-studied problem of detecting a malicious code family using statistical learning on trace features and show how such a machine learner — in this study a classifier for Zeus/Zbot — can be rendered as a property, and then be deployed on endpoint devices with trace monitors. The results of this paper, in combination with our earlier work, indicate the feasibility and way forward for a recommendation-verification system to achieve a novel defense mechanism in a social-technological network in the era of ubiquitous computing.
['William Casey', 'Evan Wright', 'Jose Andre Morales', 'Michael Y. Appel', 'Jeff Gennari', 'Bud Mishra']
Agent-based trace learning in a recommendation-verification system for cybersecurity
918,179
We prove the strong continuity of spectral multiplier operators associated with dilations of certain functions on the general Hardy space $H^1_L$ introduced by Hofmann, Lu, Mitrea, Mitrea, and Yan. Our results include the heat and Poisson semigroups as well as the group of imaginary powers.
['Jacek Dziubański', 'Błażej Wróbel']
Strong continuity on Hardy spaces
638,885
In this paper, we define volumetric depth confidence and propose a method to denoise this data by performing adaptive wavelet thresholding using three dimensional (3D) wavelet transforms. The depth information is relevant for emerging interactive multimedia applications such as 3D TV and free-viewpoint television (FTV). These emerging applications require high quality virtual view rendering to enable viewers to move freely in a dynamic real world scene. Depth information of a real world scene from different viewpoints is used to render an arbitrary number of novel views. Usually, depth estimates of 3D object points from different viewpoints are inconsistent. This inconsistency of depth estimates affects the quality of view rendering negatively. Based on the superposition principle, we define a volumetric depth confidence description of the underlying geometry of natural 3D scenes by using these inconsistent depth estimates from different viewpoints. Our method denoises this noisy volumetric description, and with this, we enhance the quality of view rendering by up to 0.45 dB when compared to rendering with conventional MPEG depth maps.
['Srinivas Parthasarathy', 'Akul Chopra', 'Émilie Baudin', 'Pravin Kumar Rana', 'Markus Flierl']
Denoising of volumetric depth confidence for view rendering
362,842
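A hedged PyWavelets sketch of the core denoising step above, under our own simplifications (a single global soft threshold, whereas the paper's thresholding is adaptive): 3-D wavelet decomposition of a confidence volume, soft-thresholding of the detail subbands, and reconstruction.

```python
import numpy as np
import pywt

def denoise_volume(vol, wavelet="db2", level=2, thresh=0.5):
    """3-D wavelet soft-thresholding of a volumetric confidence map.
    The paper adapts the threshold; here it is one global value."""
    coeffs = pywt.wavedecn(vol, wavelet, level=level)
    out = [coeffs[0]]                        # keep the approximation band
    for detail in coeffs[1:]:                # dicts of detail subbands
        out.append({k: pywt.threshold(v, thresh, mode="soft")
                    for k, v in detail.items()})
    return pywt.waverecn(out, wavelet)

rng = np.random.default_rng(6)
clean = np.zeros((16, 16, 16)); clean[4:12, 4:12, 4:12] = 1.0
noisy = clean + 0.3 * rng.normal(size=clean.shape)
den = denoise_volume(noisy)[:16, :16, :16]   # guard against padding
print(np.abs(noisy - clean).mean(), np.abs(den - clean).mean())
```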
This paper presents a heuristic method to solve the combined resource selection and binding problems for the high-level synthesis of multiple-precision specifications. Traditionally, the number of functional (and storage) units in a datapath is determined by the maximum number of operations scheduled in the same cycle, with their respective widths depending on the number of bits of the widest operations. When these wider operations are not scheduled in such a "busy" cycle, this way of acting can waste considerable area. To overcome this problem, we propose selecting the set of resources taking into account the only truly relevant aspect: the maximum number of bits calculated and stored simultaneously in a cycle. The implementation obtained is a multiple-precision datapath, where the number and widths of the resources are independent of the specification operations and data objects.
['María Molina', 'José M. Mendías', 'Román Hermida']
Multiple-Precision Circuits Allocation Independent of Data-Objects Length
53,153
Research on physical human-robot interaction has been attracting attention recently, focusing on robot embodiment. The work reported here proposes the Active Touch Communication Robot (AcToR), a robot modeled on the hearing dog. A hearing dog is a dog trained to assist people who are deaf or hard of hearing by alerting its handler to important sounds. AcToR uses the sense of touch to notify a human of its intention to transfer information. For example, when AcToR detects that a cell phone in another location has received a call, AcToR moves to the user's location and makes contact with the user's body to notify the user of the incoming call. The AcToR robot is based on the Roomba® and uses the Roomba's bumper and contact sensors to detect contact. This paper reports the results of psychological experiments using the AcToR robot that indicate the feasibility of using touch to transfer information from a robot to a person.
['Michihiko Furuhashi', 'Tsuyoshi Nakamura', 'Masayoshi Kanoh', 'Koji Yamada']
Touch-based information transfer from a robot modeled on the hearing dog
552,664
Load balancing (LB) is crucial in the field of cloud computing. LB is to find the optimum allocation of services onto a set of machines so that machine usage can be maximised. This paper proposes a new method for LB: simulated annealing (SA) enhanced by grammatical evolution (GE). SA is a well-known stochastic optimisation algorithm that has good performance on a range of problems including load balancing. However, the success of SA often relies on a key parameter known as the cooling schedule and on the type of neighbourhood structure utilised. Both the parameter and the structure of SA are problem specific. They need to be manually adjusted to fit the problem in hand. In addition, different stages of the search process may have different optimal parameter values. To address these issues, a grammatical evolution approach is introduced to adaptively evolve the cooling schedule parameter and the neighbourhood structures. The proposed method can adjust the SA parameter and structure based on the landscape of the current search state, so high quality solutions can be found more quickly. The effectiveness of the proposed GE method is demonstrated on the Google machine reassignment problem, a typical LB problem, proposed for the ROADEF/EURO 2012 challenge. Experimental results show that our GE-enhanced SA is highly competitive compared to state-of-the-art algorithms.
['Nasser R. Sabar', 'Andy Song']
Grammatical Evolution Enhancing Simulated Annealing for the Load Balancing Problem in Cloud Computing
844,408
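A minimal sketch of the SA core being tuned above (our own fixed geometric cooling and single-move neighbourhood; these are exactly the ingredients the paper's GE layer would evolve instead): reassign one process at a time and accept worse moves with probability exp(-Δ/T).

```python
import math, random

def makespan(assign, loads, n_machines):
    use = [0.0] * n_machines
    for p, m in enumerate(assign):
        use[m] += loads[p]
    return max(use)                  # objective: the most loaded machine

def sa_balance(loads, n_machines, T=10.0, cooling=0.995, steps=20000, seed=7):
    rnd = random.Random(seed)
    assign = [rnd.randrange(n_machines) for _ in loads]
    cost = makespan(assign, loads, n_machines)
    for _ in range(steps):
        p = rnd.randrange(len(loads))                          # neighbourhood
        old, assign[p] = assign[p], rnd.randrange(n_machines)  # move: one process
        new_cost = makespan(assign, loads, n_machines)
        if new_cost <= cost or rnd.random() < math.exp((cost - new_cost) / T):
            cost = new_cost
        else:
            assign[p] = old           # reject the move
        T *= cooling                  # fixed schedule; the GE layer adapts this
    return assign, cost

rnd0 = random.Random(0)
loads = [rnd0.uniform(1, 10) for _ in range(60)]
_, cost = sa_balance(loads, n_machines=8)
print(round(cost, 2), round(sum(loads) / 8, 2))  # result vs. the lower bound
```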
Most ITS applications dedicated to vehicular networks rely on periodic messages sent in the vicinity of the vehicles. To ensure road safety, the inter-message delay is subject to strong constraints. The current standard proposes to adapt the inter-message delay according to the vehicle dynamics. Nevertheless, when the density of vehicles is large, short delays may lead to collisions and losses, leading to poor neighborhood knowledge accuracy. In this paper, we propose an adaptive strategy, named AND for Adaptive Neighbor Discovery, that takes the networking conditions into account when updating the inter-message delay. The aim is to detect neighbors in time while preserving network resources. We show that our cooperative approach achieves very good results in neighbor discovery while consuming fewer messages.
['Hermes Pimenta de Moraes', 'Bertrand Ducourthial']
Adaptive inter-messages delay in vehicular networks
953,656
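A toy sketch in the spirit of the adaptation idea above (not AND's actual update rule; the bounds, thresholds, and multiplicative factors here are our own): lengthen the beacon delay when the channel is crowded with neighbors and shorten it when neighbor tables risk going stale.

```python
D_MIN, D_MAX = 0.1, 1.0      # seconds; a typical 1-10 Hz beaconing range

def next_delay(current, n_neighbors, target_density=20):
    """Multiplicative increase when the channel is crowded,
    gentle decrease when it is underused."""
    if n_neighbors > target_density:
        current *= 1.5       # back off: many vehicles share the channel
    else:
        current *= 0.9       # speed up: keep neighbor knowledge fresh
    return min(max(current, D_MIN), D_MAX)

delay = 0.5
for n in (5, 5, 30, 40, 40, 10, 5):   # observed neighbor counts over time
    delay = next_delay(delay, n)
    print(f"neighbors={n:2d} -> delay={delay:.2f}s")
```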
We outline cryptographic key computation from biometric data based on error-tolerant transformation of continuous-valued face eigenprojections to zero-error bitstrings suitable for cryptographic applicability. Biohashing is based on iterated inner products between pseudorandom and user-specific eigenprojections, each of which extracts a single bit from the face data. This discretisation is highly tolerant of data capture offsets, with same-user face data resulting in highly correlated bitstrings. The resultant user identification in terms of a small bitstring set is then securely reduced to a single cryptographic key via Shamir secret-sharing. Generation of the pseudorandom eigenprojection sequence can be securely parameterised via incorporation of physical tokens. Tokenised biohashing is rigorously protective of the face data, with security comparable to cryptographic hashing of token and knowledge key-factors. Our methodology has several major advantages over conventional biometric analysis, i.e., elimination of false accepts (FA) without unacceptable compromise in terms of more probable false rejects (FR), straightforward key management, and cryptographically rigorous commitment of biometric data in conjunction with verification thereof.
['Alwyn Goh', 'David Chek Ling Ngo']
Computation of Cryptographic Keys from Face Biometrics
26,692
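A small sketch of the tokenised inner-product discretisation described by Goh and Ngo: a token-seeded pseudorandom basis is projected against the (eigenprojected) face features, and each projection is thresholded to one bit. Thresholding at zero and the Gaussian basis are illustrative assumptions; the Shamir secret-sharing reduction to a single key is omitted.

```python
import numpy as np

def biohash(features, token_seed, n_bits=64):
    """Tokenised discretisation in the spirit of the paper: iterated
    inner products between the user's (eigenprojected) feature vector
    and a token-seeded pseudorandom basis, each thresholded to one bit.
    Thresholding at zero is an illustrative choice."""
    rng = np.random.default_rng(token_seed)
    basis = rng.standard_normal((n_bits, features.size))
    return (basis @ features > 0).astype(np.uint8)

rng = np.random.default_rng(1)
face = rng.standard_normal(128)                  # stand-in eigenprojection
noisy = face + 0.05 * rng.standard_normal(128)   # same user, capture offset
other = rng.standard_normal(128)                 # different user

b0, b1, b2 = (biohash(v, token_seed=42) for v in (face, noisy, other))
print("same-user Hamming distance:", int(np.sum(b0 != b1)))   # small
print("impostor Hamming distance:", int(np.sum(b0 != b2)))    # ~n_bits/2
```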
This paper describes the behavior observed in a class of cellular automata that we define as "dissipative", i.e., cellular automata for which the external environment can somehow inject "energy" to dynamically influence the evolution of the automata. In this class of cellular automata, we have observed that stable macro-level global structures emerge from local interactions among cells. Since dissipative cellular automata (DCA) exhibit characteristics strongly resembling those of open distributed systems, we expect that similar macro-level behaviors are likely to emerge in real-world systems of the same nature and need to be studied, controlled, and possibly fruitfully exploited. A preliminary set of experiments exploring two ways of indirectly controlling the behavior of DCA is reported and discussed with respect to the possibility of applying similar indirect control to open distributed systems.
['Marco Mamei', 'Andrea Roli', 'Franco Zambonelli']
Dissipative cellular automata as minimalist distributed systems: a study on emergent behaviors
468,143
Agency is the sense that I am the cause or author of a movement. Babies develop this feeling early by perceiving the contingency between afferent (sensor) and efferent (motor) information. A comparator model is hypothesized to be associated with many brain regions to monitor and simulate the concordance between self-produced actions and their consequences. In this paper, we propose that the biological mechanism of spike timing-dependent plasticity, which synchronizes the neural dynamics almost everywhere in the central nervous system, constitutes the perfect algorithm to detect contingency in sensorimotor networks. The coherence or dissonance in the sensorimotor information flow then imparts the agency level. In a head-neck-eyes robot, we replicate three developmental experiments illustrating how particular perceptual experiences can modulate the overall level of agency inside the system: (1) by adding a delay between proprioceptive and visual feedback information, (2) by facing a mirror, and (3) by facing a person. We show that the system learns to discriminate animated objects (self-image and other persons) from other types of stimuli. This suggests a basic stage of representing the self in relation to others from low-level sensorimotor processes. We then discuss the relevance of our findings to neurobiological evidence and developmental psychology observations for developmental robots.
['Alexandre Pitti', 'Hiroki Mori', 'Shingo Kouzuma', 'Yasuo Kuniyoshi']
Contingency Perception and Agency Measure in Visuo-Motor Spiking Neural Networks
225,180
CAPTCHAs represent an important pillar of the web security domain. Yet, current CAPTCHAs do not fully meet web security requirements. Many existing CAPTCHAs can be broken using automated attacks based on image processing and machine learning techniques. Moreover, most existing CAPTCHAs are completely vulnerable to human-solver relay attacks, whereby CAPTCHA challenges are simply outsourced to a remote human solver. In this paper, we introduce a new class of CAPTCHAs that can not only resist automated attacks but can also make relay attacks hard and detectable. These CAPTCHAs are carefully built on the notions of dynamic cognitive games (DCG) and emerging images (EI), both present in the literature. While existing CAPTCHAs based on the DCG notion alone (e.g., an object-matching game embedded in a clear background) are prone to automated attacks, and those based on the EI notion alone (e.g., moving text embedded in emerging images) are prone to relay attacks, we show that a careful amalgamation of the two notions can resist both forms of attack. Specifically, we formalize, design and implement a concrete instantiation of EI-DCG CAPTCHAs, and demonstrate its security with respect to image processing and object tracking techniques as well as its resistance to, and detectability of, relay attacks.
['Song Gao', 'Manar Mohamed', 'Nitesh Saxena', 'Chengcui Zhang']
Emerging Image Game CAPTCHAs for Resisting Automated and Human-Solver Relay Attacks
570,196
Recently, a reversible garbage-free 2^k ± 1 constant-multiplier circuit was presented by Axelsen and Thomsen. This was the first construction of a garbage-free, reversible circuit for multiplication by non-trivial constants. At the time, the strength, that is, the range of constants obtainable by cascading these circuits, was unknown. In this paper, we show that there exist infinitely many constants we cannot multiply by using cascades of 2^k ± 1 multipliers; in fact, there exist infinitely many primes we cannot multiply by. Using these results, we further provide an algorithm for determining whether one can multiply by a given constant using a cascade of 2^k ± 1 multipliers, and for generating the minimal cascade of 2^k ± 1 multipliers for an obtainable constant, giving a complete characterization of the problem. A table of minimal cascades for multiplying by small constants is provided for convenience.
['Eva Rotenberg', 'James Cranch', 'Michael Kirkedal Thomsen', 'Holger Bock Axelsen']
Strength of the reversible, garbage-free 2^k ± 1 multiplier
595,354
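To make the decision problem concrete, here is a brute-force sketch that searches for a factorisation of a constant into 2^k ± 1 factors. The paper gives an exact number-theoretic characterisation and a minimal-cascade algorithm; this naive recursion only illustrates the question in spirit.

```python
def factors_2k_pm_1(limit):
    """All constants of the form 2**k +/- 1 (k >= 1) up to limit, excluding 1."""
    out, k = set(), 1
    while (1 << k) - 1 <= limit:
        for c in ((1 << k) - 1, (1 << k) + 1):
            if 1 < c <= limit:
                out.add(c)
        k += 1
    return sorted(out)

def min_cascade(n):
    """Shortest factorisation of n into 2**k +/- 1 factors, or None if n is
    unobtainable. Exponential-time recursion, meant only to illustrate the
    decision problem the paper characterises exactly."""
    if n == 1:
        return []
    best = None
    for c in factors_2k_pm_1(n):
        if n % c == 0:
            rest = min_cascade(n // c)
            if rest is not None and (best is None or len(rest) + 1 < len(best)):
                best = [c] + rest
    return best

print(min_cascade(45))   # [3, 15]: obtainable (15 = 2**4 - 1 is itself a factor)
print(min_cascade(11))   # None: 11 is not a product of 2**k +/- 1 factors
```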
In wavelength routed optical networks, wavelength converters can potentially reduce the requirement on the number of wavelengths. The problem of placing a minimum number of wavelength converters in a WDM network so that any routing can be satisfied using no more wavelengths than if there were wavelength converters at every node was raised by Wilfong and Winkler (1998) as the minimum sufficient set problem. This problem is NP-complete in general WDM networks. Wan et al. (1999) showed that the problem is tractable if every edge in the network is bi-directed and the skeleton of the network is a tree of rings. We show that the minimum sufficient set problem is tractable in any directed graph with a general tree of rings skeleton.
['Guangting Chen', 'Guojun Li', 'Guoliang Xue']
Optimal placement of wavelength converters in WDM optical networks with a general tree of rings topology
342,009
A wide variety of priors have been proposed for nonparametric Bayesian estimation of conditional distributions, and there is a clear need for theorems providing conditions on the prior for large support, as well as posterior consistency. Estimation of an uncountable collection of conditional distributions across different regions of the predictor space is a challenging problem, which differs in some important ways from density and mean regression estimation problems. Defining various topologies on the space of conditional distributions, we provide sufficient conditions for posterior consistency focusing on a broad class of priors formulated as predictor-dependent mixtures of Gaussian kernels. This theory is illustrated by showing that the conditions are satisfied for a class of generalized stick-breaking process mixtures in which the stick-breaking lengths are monotone, differentiable functions of a continuous stochastic process. We also provide a set of sufficient conditions for the case where stick-breaking lengths are predictor independent, such as those arising from a fixed Dirichlet process prior.
['Debdeep Pati', 'David B. Dunson', 'Surya T. Tokdar']
Posterior consistency in conditional distribution estimation
362,931
Taking into account chemical control and biological control for pest management at different fixed moments, as well as mutual interference of the predator, we establish a one-predator two-prey system with impulsive effects and mutual interference. By using techniques of impulsive perturbations, Floquet theory and the comparison theorem, we investigate the existence and global asymptotic stability of the prey-eradication periodic solution. We also derive sufficient conditions for the permanence of the system by using comparison methods involving multiple Lyapunov functions. Our results improve upon previously obtained results. Numerical simulations are then given to show the complex behaviors of this system. Finally, we analyze the biological meaning of these results and give some suggestions for feasible control strategies.
['Zhen Wang', 'Yuanfu Shao', 'Xianjia Fang', 'Xiangmin Ma']
The dynamic behaviors of one-predator two-prey system with mutual interference and impulsive control
826,870
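A hedged numerical illustration of an impulsively controlled one-predator two-prey system: a generic Lotka-Volterra-style stand-in with Hassell-Varley-type mutual interference (exponent 0.5), periodically sprayed prey (chemical control) and released predators (biological control). The functional forms and all parameter values are invented for the example and are not the paper's model.

```python
def simulate_impulsive(T=100.0, dt=0.001, period=5.0, spray=0.4, release=0.5):
    """Euler integration of a one-predator two-prey stand-in with periodic
    impulses: at each pulse, both prey are reduced by the fraction `spray`
    and `release` predators are added. All constants are illustrative."""
    x1, x2, y = 1.0, 0.8, 0.5
    t, next_pulse = 0.0, period
    while t < T:
        # mutual interference of the predator modelled as y**0.5 (m = 0.5)
        dx1 = x1 * (1.0 - x1 - 0.2 * x2) - 0.6 * x1 * y**0.5
        dx2 = x2 * (0.9 - x2 - 0.1 * x1) - 0.4 * x2 * y**0.5
        dy = -0.3 * y + (0.3 * x1 + 0.2 * x2) * y**0.5
        x1, x2, y = x1 + dt * dx1, x2 + dt * dx2, y + dt * dy
        t += dt
        if t >= next_pulse:            # impulsive control at fixed moments
            x1, x2 = (1 - spray) * x1, (1 - spray) * x2
            y += release
            next_pulse += period
    return x1, x2, y

print(simulate_impulsive())
```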
In the present electrical engineering curriculum, power distribution engineering is one of the core courses offered to students specializing in power systems. Since distribution systems are mostly radial, topics related to the radial distribution system (RDS) are emphasized in this course. In an RDS, network reconfiguration is frequently performed to optimize system performance. Before the reconfiguration operation is physically performed, it is necessary to have a computer-aided performance assessment of the reconfigured RDS from load flow analysis. For students' education, it is necessary that they familiarize themselves with computer simulation of RDS reconfiguration followed by load flow analysis. The purpose of this paper is to clearly and illustratively explain the various underlying sub-processes involved in the simulation experiment of RDS reconfiguration. Pseudo-codes in the form of flowcharts and complete C-language program codes for RDS reconfiguration are also included.
['K. Prasad', 'N. C. Sahoo']
A simplified approach for computer‐aided education of network reconfiguration in radial distribution systems
196,805
One of the problems in Bayesian inference is prior selection. Methods for selecting a prior can be categorized into two main groups: informative and non-informative. Here, we consider an informative method called filters, random fields and minimax entropy (FRAME). Despite its theoretical interest, that method introduces a huge computational burden, which makes it unsuitable for real-time applications. The main critical point of the method is its parameter-estimation part, which plays a major role in its very low speed. In this paper, we introduce a fast parameter-estimation method to speed up the FRAME approach. Although the kernel of our approach is the Gibbs sampler, which is intrinsically slow, the proposed method achieves acceptable speed.
['Rouhollah Dianat', 'Shohreh Kasaei', 'Majid Khabbazian']
A fast method for prior probability selection based on maximum entropy principle and Gibbs sampler
405,178
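For context, this is the generic Gibbs-sampler kernel the abstract refers to, shown on a toy bivariate normal where both full conditionals are exact. The paper's fast parameter-estimation scheme for FRAME is not reproduced here; the toy target and sample sizes are assumptions.

```python
import math
import random

def gibbs_bivariate_normal(rho, n_samples=5000, burn_in=500):
    """Plain Gibbs sampler for a standard bivariate normal with
    correlation rho, drawing each coordinate from its exact conditional.
    This is the (slow) generic kernel; the paper's contribution is a
    faster parameter-estimation scheme built around such a sampler."""
    x, y, out = 0.0, 0.0, []
    sd = math.sqrt(1 - rho * rho)
    for i in range(n_samples + burn_in):
        x = random.gauss(rho * y, sd)   # x | y ~ N(rho*y, 1 - rho^2)
        y = random.gauss(rho * x, sd)   # y | x ~ N(rho*x, 1 - rho^2)
        if i >= burn_in:
            out.append((x, y))
    return out

samples = gibbs_bivariate_normal(rho=0.8)
mean_xy = sum(a * b for a, b in samples) / len(samples)
print("empirical E[xy], should approach rho:", round(mean_xy, 2))
```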
In this paper, we present an object proposal generation method that applies energy optimization to superpixel merging algorithms in a multi-scale framework, generating possible object locations in an image. As images in object detection datasets are highly diverse, we adopt two different energy functions at multiple scales. Our method thus enjoys the strength of global search, which is strong in locating salient objects by considering the whole image at each merge iteration, as well as the strength of local search, which is more likely to recall non-salient instances. What's more, unlike most superpixel merging algorithms that are based on diversified segmentation results, our approach takes advantage of robust edge detection and segments each image only once, which greatly reduces the number of proposals. Experiments on the PASCAL VOC 2007 test set show that the proposed method outperforms most previous superpixel-merging-based methods and competes with state-of-the-art proposal generators.
['Congchao Wang', 'Jufeng Yang', 'Kai Wang', 'Shang-Hong Lai']
Multi-scale energy optimization for object proposal generation
761,320
Cardiac Fibers Estimation from Arbitrarily Spaced Diffusion Weighted MRI
['Andreas Nagler', 'Cristóbal Bertoglio', 'Christian T. Stoeck', 'Sebastian Kozerke', 'Wolfgang A. Wall']
Cardiac Fibers Estimation from Arbitrarily Spaced Diffusion Weighted MRI
633,906
The World Wide Web Consortium has convened a working group to design a query language for Extensible Markup Language (XML) data sources. This new query language, called XQuery, is still evolving and has been described in a series of drafts published by the working group. XQuery is a functional language comprised of several kinds of expressions that can be nested and composed with full generality. It is based on the type system of XML Schema and is designed to be compatible with other XML-related standards. This paper explains the need for an XML query language, provides a tutorial overview of XQuery, and includes several examples of its use.
['Don Chamberlin']
XQuery: An XML query language
166,914
Dependable embedded software system design is demanding because designers have to understand and handle multiple, interdependent, pervasive dependability concerns such as fault tolerance, timeliness, performance and security. Because these concerns tend to crosscut the application architecture, understanding and changing their descriptions can be difficult. Separating these concerns at the architectural level allows designers to locate them, understand them and thus preserve the required properties when making changes, in order to keep the architecture consistent. That separation of concerns leads to better understanding, reuse, analysis and evolution of these concerns during design. The Architecture Analysis and Design Language (AADL) is a standard architecture description language in use by a number of organizations around the world to design and analyze embedded software architectures and generate application code. In this paper we explain how aspect oriented modeling (AOM) techniques and AADL can be used to model dependability aspects of a component architecture separately from other aspects. The AOM architectural model used to illustrate the approach in this paper consists of a component primary view describing the base architecture and a component template aspect model describing a fault tolerance concern that provides error detection and recovery services.
['Lydia Michotte', 'Thomas Vergnaud', 'Peter H. Feiler', 'Robert B. France']
Aspect Oriented Modeling of Component Architectures Using AADL
128,260
This paper introduces a large neighbourhood search heuristic for an airline recovery problem combining fleet assignment, aircraft routing and passenger assignment. Given an initial schedule, a list of disruptions, and a recovery period, the problem consists in constructing aircraft routes and passenger itineraries for the recovery period that allow the resumption of regular operations and minimize operating costs and impacts on passengers. The heuristic alternates between construction, repair and improvement phases, which iteratively destroy and repair parts of the solution. The aim of the first two phases is to produce an initial solution that satisfies a set of operational and functional constraints. The third phase then attempts to identify an improved solution by considering large schedule changes while retaining feasibility. The whole process is iterated by including some randomness in the construction phase so as to diversify the search. This work was initiated in the context of the 2009 ROADEF Challenge, a competition organized jointly by the French Operational Research and Decision Analysis Society and the Spanish firm Amadeus S.A.S., in which our team won the first prize.
['Serge Bisaillon', 'Jean-François Cordeau', 'Gilbert Laporte', 'Federico Pasin']
A large neighbourhood search heuristic for the aircraft and passenger recovery problem
421,249
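A skeleton of the destroy-and-repair loop behind large neighbourhood search, applied to a toy assignment instance. The aircraft-and-passenger-specific construction, feasibility constraints and improvement phases of the paper are abstracted into the `destroy` and `repair` callables, and all instance data are invented.

```python
import random

def large_neighbourhood_search(solution, cost, destroy, repair,
                               iters=1000, destroy_frac=0.2):
    """Destroy part of the incumbent, repair it to feasibility, keep the
    result if it improves; randomness in destruction diversifies the search."""
    best, fbest = solution, cost(solution)
    for _ in range(iters):
        candidate = repair(destroy(best, destroy_frac))
        fc = cost(candidate)
        if fc < fbest:
            best, fbest = candidate, fc
    return best, fbest

# Toy use: assign 30 jobs to 4 machines, minimising the makespan.
jobs = [random.randint(1, 10) for _ in range(30)]
cost = lambda a: max(sum(jobs[j] for j, m in enumerate(a) if m == k)
                     for k in range(4))

def destroy(a, frac):
    a = list(a)
    for j in random.sample(range(len(a)), int(frac * len(a))):
        a[j] = None                      # remove some assignments
    return a

def repair(a):
    # Greedy repair: place each removed job on the least-loaded machine.
    load = [sum(jobs[j] for j, m in enumerate(a) if m == k) for k in range(4)]
    for j, m in enumerate(a):
        if m is None:
            k = load.index(min(load))
            a[j], load[k] = k, load[k] + jobs[j]
    return a

print(large_neighbourhood_search([random.randrange(4) for _ in jobs],
                                 cost, destroy, repair)[1])
```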
Widely used in data-driven computer animation, motion capture data exhibits its complexity both spatially and temporally. The indexing and retrieval of motion data is a hard task that is not totally solved. In this paper, we present an efficient motion data indexing and retrieval method based on a self-organizing map and the Smith-Waterman string similarity metric. Existing motion clips are first used to train a self-organizing map and then indexed by the nodes of the map to get the motion strings. The Smith-Waterman algorithm, a local similarity measure for string comparison, is used in clustering the motion strings. Then the motion motif of each cluster is extracted for the retrieval of example-based queries. As an unsupervised learning approach, our method can cluster motion clips automatically without needing to know their motion types. Experiment results on a dataset of various kinds of motion show that the proposed method not only clusters the motion data accurately but also retrieves appropriate motion data efficiently.
['Shuangyuan Wu', 'Shihong Xia', 'Zhaoqi Wang', 'Chunpeng Li']
Efficient motion data indexing and retrieval with local similarity measure of motion strings
415,823
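A minimal Smith-Waterman local-alignment scorer of the kind the paper applies to "motion strings" (sequences of self-organizing-map node indices). The scoring parameters and the toy strings are illustrative.

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
    """Local alignment score between two motion strings; higher means a
    longer/cleaner shared local run of SOM node indices."""
    H = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    best = 0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            H[i][j] = max(0,
                          H[i - 1][j - 1] + s,   # (mis)match
                          H[i - 1][j] + gap,     # gap in b
                          H[i][j - 1] + gap)     # gap in a
            best = max(best, H[i][j])
    return best

# Motion clips indexed by SOM nodes; similar walks share a long local run.
walk1 = [4, 4, 7, 7, 9, 9, 4, 4, 7]
walk2 = [1, 4, 4, 7, 7, 9, 9, 4, 2]
jump  = [3, 3, 8, 8, 8, 5, 5, 3, 3]
print(smith_waterman(walk1, walk2), smith_waterman(walk1, jump))
```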
For multiple coupled RLC nets, we formulate the min-area simultaneous shield insertion and net ordering (SINO/NB-n) problem to satisfy a given noise bound. We develop an efficient and conservative model to compute the peak noise, and apply the noise model in a simulated-annealing (SA) based algorithm for the SINO/NB-n problem. Extensive and accurate experiments show that the SA-based algorithm is efficient and always achieves solutions satisfying the given noise bound. It uses up to 71% and 30% fewer shields when compared to a greedy shield-insertion algorithm and a separated shield insertion and net ordering algorithm, respectively. To the best of our knowledge, this is the first work that presents an in-depth study of the min-area SINO problem under an explicit noise constraint.
['Kevin M. Lepak', 'Irwan Luwandi', 'Lei He']
Simultaneous shield insertion and net ordering under explicit RLC noise constraint
380,094
For a pair of positive parameters $D,\chi$, a partition ${\cal P}$ of the vertex set $V$ of an $n$-vertex graph $G = (V,E)$ into disjoint clusters of diameter at most $D$ each is called a $(D,\chi)$ network decomposition if the supergraph ${\cal G}({\cal P})$, obtained by contracting each of the clusters of ${\cal P}$, can be properly $\chi$-colored. The decomposition ${\cal P}$ is said to be strong (resp., weak) if each of the clusters has strong (resp., weak) diameter at most $D$, i.e., if for every cluster $C \in {\cal P}$ and every two vertices $u,v \in C$, the distance between them in the induced graph $G(C)$ of $C$ (resp., in $G$) is at most $D$. Network decomposition is a powerful construct, very useful in distributed computing and beyond. It was shown by Awerbuch et al. [AGLP89] and Panconesi and Srinivasan [PS92] that strong $(2^{O(\sqrt{\log n})},2^{O(\sqrt{\log n})})$ network decompositions can be computed in $2^{O(\sqrt{\log n})}$ distributed time. Linial and Saks [LS93] devised an ingenious randomized algorithm that constructs weak $(O(\log n),O(\log n))$ network decompositions in $O(\log^2 n)$ time. It was however open till now whether strong network decompositions with both parameters $2^{o(\sqrt{\log n})}$ can be constructed in distributed $2^{o(\sqrt{\log n})}$ time. In this paper we answer this long-standing open question in the affirmative, and show that strong $(O(\log n),O(\log n))$ network decompositions can be computed in $O(\log^2 n)$ time. We also present a tradeoff between the parameters of our network decomposition. Our work is inspired by and relies on the "shifted shortest path approach" due to Blelloch et al. [BGKMPT11] and Miller et al. [MPX13]. These authors developed this approach for PRAM algorithms for padded partitions. We adapt their approach to network decompositions in the distributed model of computation.
['Michael Elkin', 'Ofer Neiman']
Distributed Strong Diameter Network Decomposition
638,785
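A sequential, unweighted-graph sketch of the "shifted shortest path" clustering of Miller et al. [MPX13] that the abstract builds on: each vertex draws an exponential head start, and every vertex joins the cluster of the source that reaches it first. This is a single-machine illustration of the idea, not the paper's distributed algorithm; the rate `beta` and the example graph are assumptions.

```python
import heapq
import math
import random

def shifted_clustering(adj, beta=0.5, seed=0):
    """Each vertex v draws delta_v ~ Exp(beta); vertex u joins the cluster
    of the v minimising dist(v, u) - delta_v. Implemented as one Dijkstra-style
    sweep where source v starts at virtual time max(delta) - delta_v."""
    rng = random.Random(seed)
    shift = {v: rng.expovariate(beta) for v in adj}
    m = max(shift.values())
    dist = {v: m - shift[v] for v in adj}    # per-source head start
    centre = {}
    heap = [(dist[v], v, v) for v in adj]    # (arrival time, vertex, centre)
    heapq.heapify(heap)
    while heap:
        d, v, c = heapq.heappop(heap)
        if v in centre:
            continue                          # already claimed earlier
        centre[v] = c
        for w in adj[v]:
            if d + 1 < dist.get(w, math.inf):
                dist[w] = d + 1
                heapq.heappush(heap, (d + 1, w, c))
    return centre

# 10-cycle example: clusters of expected diameter O(log n / beta).
adj = {i: [(i - 1) % 10, (i + 1) % 10] for i in range(10)}
print(shifted_clustering(adj))
```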
This paper describes some key concepts developed and used in the design of a spoken-query based information retrieval system developed at the Mitsubishi Electric Research Labs (MERL). Innovations in the system include automatic inclusion of signature terms of documents in the recognizer's vocabulary, the use of uncertainty vectors to represent spoken queries, and a method of indexing that accommodates the usage of uncertainty vectors. This paper describes these techniques and includes experimental results that demonstrate their effectiveness.
['Peter Wolf', 'Bhiksha Raj']
The MERL SpokenQuery information retrieval system: a system for retrieving pertinent documents from a spoken query
70,258
A software-defined radio receiver is designed from a low-power ADC perspective, exploiting the programmability of a windowed integration sampler and clock-programmable discrete-time analog filters. To cover the major frequency bands in use today, a wideband RF front-end, including the low-noise amplifier and a wide tuning-range synthesizer spanning 800 MHz-6 GHz, is designed. The entire receiver circuit is implemented in 90 nm CMOS technology. The programmability of the receiver is tested for the GSM and 802.11g standards.
['Rahim Bagheri', 'Ahmad Mirzaei', 'Saeed Chehrazi', 'Asad A. Abidi']
Architecture and Clock Programmable Baseband of an 800 MHz-6 GHz Software-Defined Wireless Receiver
100,482
Limited main memory size is the primary bottleneck for consolidating virtual machines (VMs) on hosting servers. Memory deduplication scanners reduce the memory footprint of VMs by eliminating redundancy. Our approach extends main-memory deduplication scanners through Cross Layer I/O-based Hints (XLH) to find and exploit sharing opportunities earlier without raising the deduplication overhead. Prior work on memory scanners has shown great opportunity for memory deduplication. In our analyses, we have confirmed these results; however, we have found memory scanners to work well only for deduplicating fairly static memory pages. Current scanners need a considerable amount of time to detect new sharing opportunities (e.g., 5 min) and therefore do not exploit the full sharing potential. XLH's early detection of sharing opportunities saves more memory by deduplicating otherwise-missed short-lived pages and by increasing the time long-lived duplicates remain shared. Compared to I/O-agnostic scanners such as KSM, our benchmarks show that XLH can merge equal pages stemming from the virtual disk image minutes earlier and is capable of saving up to four times as much memory; e.g., XLH saves 290 MiB vs. 75 MiB of main memory for two VMs with 512 MiB of assigned memory each.
['Konrad Miller', 'Fabian Franz', 'Marc Rittinghaus', 'Marius Hillenbrand', 'Frank Bellosa']
XLH: more effective memory deduplication scanners through cross-layer hints
6,945
Spatiotemporal planning involves making choices at multiple locations in space over some planning horizon to maximize utility and satisfy various constraints. In forest ecosystem management, the problem is to choose actions for thousands of locations each year, including harvesting, treating trees for fire or pests, or doing nothing. The utility models could place value on the sale of lumber, ecosystem sustainability or employment levels, and incorporate legal and logistical constraints on actions, such as avoiding large contiguous areas of clearcutting. Simulators developed by forestry researchers provide detailed dynamics but are generally inaccessible black boxes. We model spatiotemporal planning as a factored Markov decision process and present a policy gradient planning algorithm to optimize a stochastic spatial policy using simulated dynamics. It is common in environmental and resource planning for actions at different locations to be spatially interrelated; this makes representation and planning challenging. We define a global spatial policy in terms of interacting local policies defining distributions over actions at each location, conditioned on actions at nearby locations. Markov chain Monte Carlo simulation is used to sample landscape policies and estimate their gradients. Evaluation is carried out on a forestry planning problem with 1,880 locations using a variety of value models and constraints.
['Mark Crowley']
Using Equilibrium Policy Gradients for Spatiotemporal Planning in Forest Ecosystem Management
182,383
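A self-contained REINFORCE-style sketch of the policy-gradient idea the abstract describes, with one independent Bernoulli action per location. The paper's interacting local policies and MCMC sampling of landscape policies are deliberately simplified away, and the toy simulator is invented.

```python
import math
import random

def reinforce(simulate, n_params, episodes=2000, lr=0.05):
    """Score-function policy gradient: sample actions from a stochastic
    policy, observe the simulator's return R, and step the parameters
    along R * d log pi / d theta. The paper instead samples interacting
    landscape policies with MCMC; independence keeps this sketch short."""
    theta = [0.0] * n_params
    for _ in range(episodes):
        probs = [1 / (1 + math.exp(-t)) for t in theta]
        acts = [1 if random.random() < p else 0 for p in probs]
        R = simulate(acts)
        for i, (a, p) in enumerate(zip(acts, probs)):
            theta[i] += lr * R * (a - p)   # d log pi / d theta_i for Bernoulli
    return theta

# Toy landscape: reward harvesting cells 0-4, penalise harvesting cells 5-9.
simulate = lambda acts: sum(acts[:5]) - sum(acts[5:])
theta = reinforce(simulate, n_params=10)
print([round(1 / (1 + math.exp(-t)), 2) for t in theta])
```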
Consideration of the similarity between direct and indirect speech act understanding gives rise to the notion that taxonomies of speech acts may not be helpful in modelling language understanding. A computer model that treats representations of direct and indirect speech acts similarly and successfully has been implemented without any such taxonomy and without an explicit representation of the difference between direct and indirect speech acts.
['Jeremy Ellman']
An indirect approach to types of speech acts
271,942
Ligand-based pharmacophore modelling, docking and density functional theory approaches were employed to reveal lead candidates for kinesin-like protein 1. Highlights: a ligand-based pharmacophore model was developed for kinesin-like protein 1; the validity and predictability of the model were established by several standard methods; the validated model was used as a query in virtual screening to retrieve potent candidates; the molecular interactions and binding modes of the hits were studied by molecular docking; and the electronic properties and charge transfer of the lead candidates were studied by DFT. Kinesin-like protein (KIF11) is a molecular motor protein that is essential in mitosis. Removal of KIF11 prevents centrosome migration and causes cell arrest in mitosis. KIF11 defects are linked to microcephaly, lymphedema and mental retardation. The human KIF11 protein has been actively studied for its role in mitosis and its potential as a therapeutic target for cancer treatment. Pharmacophore modeling, molecular docking and density functional theory approaches were employed to reveal the structural, chemical and electronic features essential for the development of small-molecule inhibitors of KIF11. We developed chemical-feature-based pharmacophore models using Discovery Studio v2.5 (DS). The best hypothesis (Hypo1), consisting of four chemical features (two hydrogen-bond acceptors, one hydrophobic and one ring aromatic), exhibited a high correlation coefficient of 0.9521, a cost difference of 70.63 and a low RMS value of 0.9475. Hypo1 was cross-validated by the Cat-Scramble method, a test set and a decoy set to prove its robustness, statistical significance and predictability, respectively. The well-validated Hypo1 was used as a 3D query in virtual screening. The hits obtained from the virtual screening were subjected to rigorous drug-likeness filters such as Lipinski's rule of five and ADMET properties. Finally, six hit compounds were identified based on their molecular interactions and electronic properties. Our final lead compounds could serve as a powerful basis for the discovery of potent KIF11 inhibitors.
['Subramanian Karunagaran', 'Subramaniyan Subhashchandrabose', 'Keun Woo Lee', 'Chandrasekaran Meganathan']
Investigation on the isoform selectivity of novel kinesin-like protein 1 (KIF11) inhibitor using chemical feature based pharmacophore, molecular docking, and quantum mechanical studies
640,296
Prior to deployment, network designers often use simulators to pre-evaluate the performance of a designed network with artificial network traffic. The traditional way of separating network design from real applications not only results in over-designed network configurations, wasting money and energy, but also misses the real network demands of applications, degrading system performance. In this paper, we provide a method to model the network traffic of currently popular big data platforms, which can markedly improve the match between network design and applications. The new method extracts communication behavior from popular big data applications and replays that behavior instead of packet traces. Experiments show that the traffic generated by the model closely matches the real traffic and that the model easily scales to thousands of nodes.
['Zhen Xie', 'Zheng Cao', 'Zhan Wang', 'Dawei Zang', 'En Shao', 'Ninghui Sun']
Modeling Traffic of Big Data Platform for Large Scale Datacenter Networks
988,421
We present an integrated approach for flexible playout adaptation for high-quality audio transmission over impaired network connections. The key concept of our framework is a continuous measurement of the transmission delay, the delay variation, and packet loss. Based on these measurements, the adaptive playout control employs audio time stretching, using audio concealment and frame dropping techniques, to meet the low-delay requirements. In the literature, playout adaptation techniques have mainly been considered for voice over IP, using silence periods between talkspurts, or for high-quality audio transmission over dedicated network links. To the best of our knowledge, our playout algorithm is the first to achieve low-delay, high-quality audio streaming over impaired network connections for both music and speech. We used a significant number of network traces to estimate the variation of network quality on DSL, WLAN, UMTS and GPRS links and to tune the parameters of our playout adaptation technique. Experimental results clearly indicate that our system provides very high accuracy for the desired accepted late-loss rate and achieves fast playout adaptation, even for rapidly changing network conditions.
['Jochen Issing', 'Stefan Reuschl', 'Falko Dressler', 'Nikolaus Färber']
Flexible playout adaptation for low delay AAC RTP communication
184,792
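An illustrative playout-deadline rule in the spirit of the abstract: choose the smallest buffering delay such that the measured late-loss rate stays below an accepted target. The percentile rule and the jitter model are assumptions; the paper's time-stretching and concealment machinery is not shown.

```python
import random

def playout_delay(arrival_jitters, accepted_late_loss=0.01):
    """Pick the smallest extra buffering delay such that at most
    `accepted_late_loss` of packets (by measured arrival jitter) would
    miss their playout deadline. In the full system such a target is
    reached gradually via time stretching and frame dropping."""
    ordered = sorted(arrival_jitters)
    idx = min(int((1 - accepted_late_loss) * len(ordered)), len(ordered) - 1)
    return ordered[idx]

jitters = [abs(random.gauss(0, 10)) for _ in range(1000)]  # measured jitter, ms
print(round(playout_delay(jitters), 1), "ms of buffering")
```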
Spammer detection on social networks is a challenging problem. Rigid anti-spam rules have resulted in the emergence of "smart" spammers, who resemble legitimate users and are difficult to identify. In this paper, we present a novel spammer classification approach based on latent Dirichlet allocation (LDA), a topic model. Our approach extracts both local and global information about topic distribution patterns, which captures the essence of spamming. Tested on one benchmark dataset and one self-collected dataset, our proposed method outperforms other state-of-the-art methods in terms of averaged F1-score.
['Linqing Liu', 'Yao Lu', 'Ye Luo', 'Renxian Zhang', 'Laurent Itti', 'Jianwei Lu']
Detecting "Smart" Spammers On Social Network: A Topic Model Approach
720,632
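A hedged sketch of the kind of pipeline the abstract suggests, using scikit-learn (an assumption; the paper does not name an implementation): per-account documents are mapped to LDA topic distributions, which then feed a classifier. The corpus, labels and hyperparameters are toy values, and the paper's global topic statistics are omitted.

```python
# pip install scikit-learn
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.linear_model import LogisticRegression

# Toy corpora: one document = all posts of one account. Labels: 1 = spammer.
docs = [
    "win free money click link win free prize click",
    "free followers click here free gift card click",
    "watching the game tonight with friends great match",
    "lovely weather today going for a run in the park",
]
labels = [1, 1, 0, 0]

# Per-account topic distributions as features (the "local" pattern;
# the paper also adds global topic statistics, omitted in this sketch).
X_counts = CountVectorizer().fit_transform(docs)
topics = LatentDirichletAllocation(n_components=2, random_state=0)
X_topics = topics.fit_transform(X_counts)

clf = LogisticRegression().fit(X_topics, labels)
print(clf.predict(X_topics))
```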
(Aim) Sensorineural hearing loss (SNHL) is correlated with many neurodegenerative diseases. More and more computer-vision-based methods are now being used to detect it automatically. (Materials) We had 49 subjects in total, scanned by 3.0 T MRI (Siemens Medical Solutions, Erlangen, Germany). The subjects comprised 14 patients with right-sided hearing loss (RHL), 15 patients with left-sided hearing loss (LHL), and 20 healthy controls (HC). (Method) We treat this as a three-class classification problem: RHL, LHL, and HC. Wavelet entropy (WE) was extracted from the magnetic resonance images of each subject and then submitted to a directed acyclic graph support vector machine (DAG-SVM). (Results) The results of 10 repetitions of 10-fold cross-validation show that 3-level decomposition yields an overall accuracy of 95.10% for this three-class classification problem, higher than a feedforward neural network, a decision tree, and a naive Bayesian classifier. (Conclusions) This computer-aided diagnosis system is promising. We hope this study will attract more computer vision methods for detecting hearing loss.
['Shuihua Wang', 'Ming Yang', 'Sidan Du', 'Jiquan Yang', 'Bin Liu', 'Juan Manuel Górriz', 'Javier Ramírez 0001', 'Ti-Fei Yuan', 'Yudong Zhang']
Wavelet Entropy and Directed Acyclic Graph Support Vector Machine for Detection of Patients with Unilateral Hearing Loss in MRI Scanning
910,628
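One common definition of wavelet entropy, shown with PyWavelets on a stand-in image: the Shannon entropy of the relative energies of the subbands of a 3-level 2D decomposition. Whether this matches the paper's exact variant is an assumption.

```python
# pip install PyWavelets numpy
import numpy as np
import pywt

def wavelet_entropy(image, wavelet="db1", level=3):
    """Shannon entropy of the relative wavelet energies of a `level`-deep
    2D decomposition -- one standard definition of wavelet entropy."""
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    energies = [np.sum(coeffs[0] ** 2)]               # approximation band
    for cH, cV, cD in coeffs[1:]:                     # detail bands per level
        energies.append(np.sum(cH**2) + np.sum(cV**2) + np.sum(cD**2))
    p = np.array(energies) / np.sum(energies)
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

rng = np.random.default_rng(0)
slice_ = rng.random((64, 64))     # stand-in for one MRI slice
print(wavelet_entropy(slice_))
```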
X and Y are random variables. Person P_x knows X, person P_y knows Y, and both know the underlying probability distribution of the random pair (X, Y). Using a predetermined protocol, they exchange messages over a binary, error-free channel in order for P_y to learn X. P_x may or may not learn Y. C_m is the number of information bits that must be transmitted (by both persons) in the worst case if only m messages are allowed; C_∞ is the corresponding number of bits when there is no restriction on the number of messages exchanged. We consider three aspects of this problem. C_4: it is known that one-message communication may require exponentially more bits than the minimum possible (for some random pairs, C_1 = 2^(C_∞ − 1)), yet just two messages suffice to reduce communication to almost the minimum; still, for every positive ε and every c there are random pairs for which C_2 ≥ (2 − ε)C_∞ ≥ c, and asymptotically this is the largest possible discrepancy. Amortized complexity: the amortized complexity of (X, Y) is the limit, as k grows, of the number of bits required in the worst case for k independent repetitions of (X, Y), normalized by k. We show that the four-message amortized complexity of all random pairs is exactly log μ. Hence, when a random pair is repeated many times, no bits can be saved if P_x knows Y in advance.
['Moni Naor', 'Alon Orlitsky']
Three results on interactive communication
194,888
This paper addresses the joint coordinated scheduling and power control problem in cloud-enabled networks. Consider the downlink of a cloud-radio access network (CRAN), where the cloud is only responsible for the scheduling policy, power control, and synchronization of the transmit frames across the single-antenna base stations (BS). The transmit frame consists of several time/frequency blocks, called power-zones (PZs). The paper considers the problem of scheduling users to PZs and determining their power levels (PLs) by maximizing the weighted sum-rate under the practical constraints that each user cannot be served by more than one base station, but can be served by one or more power-zones within each base-station frame. The paper solves the problem using a graph-theoretical approach by introducing the joint scheduling and power control graph, formed by several clusters, each of which consists of a set of vertices representing the possible associations of users, BSs, and PLs for one specific PZ. The problem is then formulated as a maximum-weight clique problem, in which the weight of each vertex is the sum of the benefits of the individual associations belonging to that vertex. Simulation results suggest that the proposed cross-layer scheme provides appreciable performance improvement compared to schemes from the recent literature.
['Ahmed Douik', 'Hayssam Dahrouj', 'Tareq Y. Al-Naffouri', 'Mohamed-Slim Alouini']
Coordinated Scheduling and Power Control in Cloud-Radio Access Networks
704,032
This paper focuses on a subtask of natural language generation (NLG), voice selection, which decides whether a clause is realised in the active or passive voice according to its contextual information. Automatic voice selection is essential for realising more sophisticated MT and summarisation systems, because it impacts the readability of generated texts. However, to the best of our knowledge, the NLG community has been less concerned with explicit voice selection. In this paper, we propose an automatic voice selection model based on various linguistic information, ranging from lexical to discourse information. Our empirical evaluation using a manually annotated corpus in Japanese demonstrates that the proposed model achieved 0.758 in F-score, outperforming the two baseline models.
['Ryu Iida', 'Takenobu Tokunaga']
Automatic Voice Selection in Japanese based on Various Linguistic Information
172,495
The paper is devoted to the problem of synthesizing proportional-integral-derivative (PID) controllers for a given single-input single-output plant so that the closed-loop system is robustly stabilized and the desired performance specifications are satisfied despite plant uncertainty. First, the problem of robust performance design is converted into simultaneous stabilization of a complex polynomial family. An extension of the results on PID stabilization is then used to devise a linear programming design procedure for determining all admissible PID gain settings. In particular, it is shown that for a fixed proportional gain, the set of admissible integral and derivative gains is a union of convex sets.
['Ming-Tzu Ho', 'Chia-Yi Lin']
PID controller design for robust performance
416,903
Motion capture (mocap) has become popular and is widely used in various applications. Keyframing is an important tool for selecting the important frames of a motion sequence to represent the overall motion and to regenerate the original motion by interpolation from the keyframes. In this paper, we propose a new keyframe selection method that analyses the motion activity of mocap data. The motion change incurred when frames are dropped is compared with the original frames to evaluate the importance of the dropped frames. A threshold value is used to determine the significance of the dropped frames: more significant frames are kept as keyframes, while less significant frames are skipped and can be reconstructed with cubic spline interpolation. Simulation results show that the proposed method produces overall good visual quality for all types of motion capture because of the motion activity analysis. A comparison with the curve simplification method shows an improvement of up to 70% in terms of the mean square error metric.
['Ming-Hwa Kim', 'Lap-Pui Chau', 'Wan-Chi Siu']
Motion capture keyframing by motion change manipulation
157,481
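A greedy sketch of the described selection rule: drop frames as long as interpolating across them stays within a threshold of the original motion, and keep a keyframe where it does not. Linear interpolation is used here for brevity where the paper reconstructs with cubic splines; the threshold and data are illustrative.

```python
import numpy as np

def select_keyframes(motion, threshold):
    """Walk through the frames; whenever reconstructing the dropped frames
    between the last keyframe and the candidate frame deviates from the
    true frames by more than `threshold`, keep the previous frame."""
    keys, last = [0], 0
    for i in range(2, len(motion)):
        if i - last < 2:
            continue  # no interior frames to reconstruct yet
        t = np.linspace(0, 1, i - last + 1)[1:-1, None]
        interp = (1 - t) * motion[last] + t * motion[i]   # dropped frames
        err = np.max(np.abs(interp - motion[last + 1:i]))
        if err > threshold:        # dropping frames up to i is too lossy
            keys.append(i - 1)
            last = i - 1
    keys.append(len(motion) - 1)
    return keys

# Random-walk stand-in for a 100-frame, 3-DOF motion channel.
frames = np.cumsum(np.random.default_rng(0).normal(size=(100, 3)), axis=0)
print(select_keyframes(frames, threshold=0.5))
```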
We often take for granted that we have immediate access to our perception and experience of and through our bodies. But inward listening is a demanding activity and thus not easy to learn to perform or design for. With the Sarka mat we want to support the ability to direct attention by providing sound feedback linked to the weight distribution and motion intensity of different parts of the body, and to provide an exemplar for how such design may be conducted. The process of Sarka's creation is informed by Somaesthetic Appreciation Design. We discuss how a sonic feedback signal can influence listeners, followed by how we, in this design, worked to navigate the complex design space presented to us. We detail the design process involved, and the very particular set of limitations which this interactive sonification presented.
['Ilias Bergstrom', 'Martin Jonsson']
Sarka: Sonification and Somaesthetic Appreciation Design
827,992
Service conflict is an essential issue in smart home automation; it makes service execution inefficient and inconvenient. In this paper, we propose a conflict detection and avoidance scheme based on scenes with urgency degrees. First, we classify services into five categories. Users can customize their specific scenes using different categories of services, each of which can be set with its own urgency degree. Second, we propose a formal service model, ETA, which includes environment variables, triggers and actuators to describe each service. In the design phase of a scene, we use the ETA model to detect conflicts among services with the same urgency degree. In the execution phase of a scene, service conflicts are effectively avoided by letting the service with the higher urgency degree execute.
['Xukai Wang', 'Yan Lindsay Sun', 'Hong Luo']
Service Conflict Detection and Avoidance Based on Scene with Urgency Degree
697,794
Discusses and compares three prominent contenders for 3G wireless communication. These are: OFDM (orthogonal frequency division multiplexing), CDMA (code division multiple access) and MC-CDMA (multi-carrier CDMA) - a hybrid of the first two. OFDM has gained increasing acceptance as an alternative to single-carrier modulation for wireless systems. Potential exists for very high bit rates and for reaching the channel capacity even over frequency-selective fading channels. CDMA is the technique being most seriously considered for 3G wireless systems. It uses PN (pseudonoise) sequences to spread the signal spectrum to a wide band, thereby achieving greater robustness to deep fades than a narrowband signal, and the capability for multi-user access. The third technique, MC-CDMA, uses a combination of OFDM and CDMA, and has certain advantages that are pointed out. Finally, this paper touches upon an application, namely wireless video.
['J. H. Dholakia', 'Vijay K. Jain']
Technologies for 3G wireless communications
30,640
The increasing demand for broadband access leads operators to upgrade existing access infrastructures or build new access networks. Broadband access networks require high investments (especially passive infrastructure such as trenches/ducts and base station towers/masts), and before making any decision it is important to analyze all solutions. The selection of the best solution requires understanding the technical possibilities and limitations of the different access technologies, as well as understanding the costs of building and operating the networks. This study analyzes the effect of asymmetric retail and wholesale prices on operators' NPV, profit, consumer surplus, welfare, the retail market, the wholesale market, and so on. To that end, we propose a techno-economic model complemented by a game-theoretic model. This tool identifies all the essential costs of building (and operating) access networks, and performs a detailed analysis and comparison of the different solutions in various scenarios. Communities, operators/service providers, and regulators can use this tool to compare different technological solutions, forecast deployment costs, and compare different scenarios, helping them make deployment (or regulatory) decisions. The game-theoretic analyses give a better understanding of the competition and its effect on the business-case scenarios' economic results.
['João Paulo Pereira', 'Pedro Ferreira']
Game Theoretic Modeling of NGANs: Impact of Retail and Wholesale Services Price Variation
209,757
Recent research has shown that including context in a recommender system may improve its performance. Context-based recommendation approaches are classified as pre-filtering, post-filtering and contextual modeling. Moreover, in real e-commerce applications, collecting ratings may be quite difficult. It is possible to use purchase frequencies instead of ratings, but little research has been done on this. The research contribution of this work lies in studying when and how including context with a pre-filtering approach improves the performance of a recommender system using transactional data. To this aim, we studied the interaction between homogeneity and sparsity in several experimental settings. The experiments were done on two databases coming from two actual e-commerce applications.
['Michele Gorgoglione', 'Umberto Panniello']
Including Context in a Transactional Recommender System Using a Pre-filtering Approach: Two Real E-commerce Applications
496,418
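A toy illustration of contextual pre-filtering on transactional data: filter the transactions down to the target context, use purchase sets (frequencies reduced to presence for brevity) instead of ratings, and recommend by co-purchase similarity. The similarity measure and the data are invented; the paper's actual collaborative-filtering model may differ.

```python
from collections import defaultdict

def prefilter_recommend(transactions, user, context, top_n=3):
    """Contextual pre-filtering: keep only transactions recorded in the
    target context, then recommend what similar users bought there."""
    in_ctx = [t for t in transactions if t["context"] == context]
    bought = defaultdict(set)
    for t in in_ctx:
        bought[t["user"]].add(t["item"])
    scores = defaultdict(int)
    for other, items in bought.items():
        if other == user:
            continue
        overlap = len(items & bought[user])        # co-purchase similarity
        for item in items - bought[user]:
            scores[item] += overlap
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

transactions = [
    {"user": "u1", "item": "tent", "context": "summer"},
    {"user": "u1", "item": "stove", "context": "summer"},
    {"user": "u2", "item": "tent", "context": "summer"},
    {"user": "u2", "item": "lantern", "context": "summer"},
    {"user": "u2", "item": "skis", "context": "winter"},
]
print(prefilter_recommend(transactions, "u1", "summer"))  # ['lantern']
```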
The arguable contribution of the odd-even constraint, as an interleaver design criterion, to the performance of turbo trellis-coded modulation is revisited. The question that arises from the literature, and which remains unaddressed, is whether the constraint reduces interleaver gain. In this work, we answer this question in the negative. Moreover, we empirically show and analytically explain that the constraint possesses a spectral-thinning property.
['Konstantinos S. Arkoudogiannis', 'Christos E. Dimakis', 'Konstantinos V. Koutsouvelis']
Turbo trellis-coded modulation: A weight spectrum view at the odd-even constraint
912,649
For the semantics of probabilistic features in programming, two main approaches are used for building models. One is the Giry monad of Borel probability measures over metric spaces, and the other is Jones' probabilistic powerdomain monad [6] over dcpos (directed complete partial orders). This paper places itself in the latter, domain-theoretical tradition. The probabilistic powerdomain monad is well understood over continuous domains; in this case the algebras of the monad can be described by an equational theory [6,9,5]. The aim of this work is to obtain similar results for the (extended) probabilistic powerdomain monad over stably compact spaces. We mainly want to determine the algebras of this powerdomain monad and the algebra homomorphisms.
['Ben Cohen', 'Martín Hötzel Escardó', 'Klaus Keimel']
The extended probabilistic powerdomain monad over stably compact spaces
894,721
Deep Unordered Composition Rivals Syntactic Methods for Text Classification
['Mohit Iyyer', 'Varun Manjunatha', 'Jordan L. Boyd-Graber', 'Hal Daumé']
Deep Unordered Composition Rivals Syntactic Methods for Text Classification
612,604
Peer-to-peer systems are driving a major paradigm shift in the era of genuinely distributed computing. Gnutella is a good example of a peer-to-peer success story: a rather simple piece of software enables Internet users to freely exchange files, such as MP3 music files. But it also exposes some of the limitations of current P2P information systems with respect to their ability to manage data efficiently. In this paper we introduce P-Grid, a scalable access structure that is specifically designed for peer-to-peer information systems. P-Grids are constructed and maintained by randomized algorithms strictly based on local interactions, provide reliable data access even with unreliable peers, and scale gracefully in both storage and communication cost.
['Karl Aberer']
P-Grid: A Self-Organizing Access Structure for P2P Information Systems
255,874
Conveyor-based transportation systems are used in today's manufacturing environments. The main concept we introduce in this paper is using mobile platforms, instead of conveyors, to move parts, tools and fixtures between workstations in a manufacturing environment. A conveyor-based production system is a fixed transportation system, hard-mounted and not reconfigurable; any change to the conveyor path is costly, time-consuming and difficult. Mobile platforms allow for a more flexible manufacturing environment, as rerouting the transportation paths to accommodate changes to the production flow and workstations only requires a change in the software. This concept paper has two objectives: 1) describe some of the existing mobile platforms used in industry today, and 2) introduce the concepts and a simulation of the ultra-flexible production system.
['Remus Boca', 'Thomas A. Fuhlbrigge', 'Harald Staab', 'George Zhang', 'Sang Choi', 'Carlos Martinez', 'William Eakins', 'Gregory F. Rossano', 'Srinivas Nidamarthi']
Ultra-flexible production systems for automated factories
946,134
The proximal gradient method and its variants form one of the most attractive families of first-order algorithms for minimizing the sum of two convex functions, one of which is nonsmooth. However, it requires the differentiable part of the objective to have a Lipschitz continuous gradient, thus precluding its use in many applications. In this paper we introduce a framework which allows one to circumvent the intricate question of Lipschitz continuity of gradients by using an elegant and easy-to-check convexity condition which captures the geometry of the constraints. This condition translates into a new descent lemma, which in turn leads to a natural derivation of the proximal-gradient scheme with Bregman distances. We then identify a new notion of asymmetry measure for Bregman distances, which is central in determining the relevant step-size. These novelties allow us to prove a global sublinear rate of convergence and, as a by-product, global pointwise convergence. This provides a new path to a broad spectrum of problems ar...
['Heinz H. Bauschke', 'Jérôme Bolte', 'Marc Teboulle']
A Descent Lemma Beyond Lipschitz Gradient Continuity: First-Order Methods Revisited and Applications
937,170
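A hedged sketch of a Bregman proximal-gradient step specialised to the Boltzmann-Shannon entropy kernel, for which the update takes the closed exponentiated-gradient form. The kernel choice, the relative-smoothness constant L and the toy least-squares instance are illustrative assumptions, not the paper's general setting.

```python
import numpy as np

def bregman_gradient_step(x, grad_f, L):
    """One Bregman proximal-gradient step with the entropy kernel
    h(x) = sum x_i log x_i on the positive orthant. No Lipschitz gradient
    is needed; the relevant condition is that L*h - f be convex on the
    region of interest. With this kernel the step is the closed-form
    exponentiated-gradient / mirror-descent update."""
    return x * np.exp(-grad_f(x) / L)

# Toy instance: minimise f(x) = 0.5 * ||Ax - b||^2 over x > 0.
rng = np.random.default_rng(0)
A = rng.random((5, 3))
b = A @ np.array([0.2, 0.5, 0.1])
grad_f = lambda x: A.T @ (A @ x - b)

x = np.ones(3)
for _ in range(2000):
    x = bregman_gradient_step(x, grad_f, L=20.0)
print(np.round(x, 3))   # approaches the positive solution [0.2, 0.5, 0.1]
```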