Columns: abstract (string) · authors (string) · title (string) · __index_level_0__ (int64)
We propose an efficient and accurate hierarchical ICIA fitting method for 3D Morphable Models (3DMMs). The conventional ICIA fitting method for 3DMMs requires a long computation time because the 3D face model contains a large number of vertices and the Hessian matrix must be recomputed from the visible vertices at every iteration. For efficient fitting, we use hierarchical fitting based on a set of multi-resolution 3D face models and a Gaussian image pyramid. For more accurate fitting, we use a two-stage parameter update that first updates only the rigid and texture parameters and then updates all parameters after the initial convergence. We present several experimental results showing that the proposed method outperforms previous works.
['Bong-Nam Kang', 'Daijin Kim', 'Hyeran Byun']
An efficient and accurate hierarchical ICIA fitting method for 3D Morphable Models
924,870
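As background for the hierarchical fitting described above, here is a minimal sketch of a Gaussian image pyramid in Python (assuming numpy and scipy; the depth and smoothing sigma are illustrative choices, not values from the paper):

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def gaussian_pyramid(image, levels=3, sigma=1.0):
        """Coarse-to-fine pyramid: smooth, then subsample by 2 at each level."""
        pyramid = [np.asarray(image, dtype=float)]
        for _ in range(levels - 1):
            smoothed = gaussian_filter(pyramid[-1], sigma)
            pyramid.append(smoothed[::2, ::2])  # halve the resolution
        return pyramid  # pyramid[0] is the finest level, pyramid[-1] the coarsest

Fitting would then start on the coarsest level with the lowest-resolution face model and propagate the converged parameters down to finer levels.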
The Bayesian parameter estimation problem using a single-bit dithered quantizer is considered. This problem arises, e.g., for channel estimation under low-precision analog-to-digital conversion (ADC) at the receiver. Based on the Bayesian Cramér-Rao lower bound (CRLB), bounds on the mean squared error are derived that hold for all dither strategies with strictly causal adaptive processing of the quantizer output sequence. In particular, any estimator using the binary quantizer output sequence is asymptotically (in the sequence length) at least 10 log10(π/2) ≈ 1.96 dB worse than the minimum mean squared error estimator using continuous observations, for any dither strategy. Moreover, dither strategies are designed that are shown by simulation to closely approach the derived lower bounds.
['Georg Zeitler', 'Gerhard Kramer', 'Andrew C. Singer']
Bayesian Parameter Estimation Using Single-Bit Dithered Quantization
323,599
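The 1.96 dB figure quoted above is plain arithmetic and can be checked directly:

    import math

    penalty_db = 10 * math.log10(math.pi / 2)
    print(penalty_db)  # ~1.9612 dB: asymptotic gap between 1-bit dithered
                       # quantization and continuous observations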
The development of a new general radix-b division algorithm, based on Svoboda-Tung division and suitable for VLSI implementation, is presented. The new algorithm overcomes the drawbacks of the Svoboda-Tung technique that have prevented its VLSI implementation. First, the proposed algorithm is valid for any radix b ≥ 2; second, it avoids the compensation that overflow in the iteration would otherwise require, by rewriting the two most significant digits of the remainder. An analysis of the algorithm shows that a known radix-2 algorithm and two recently published radix-4 division algorithms are particular cases of this general radix-b algorithm. Finally, since the new algorithm is valid only for a reduced range of the IEEE normalised divisor, a pre-scaling technique, based on multiplying both operands by a stepwise approximation of the reciprocal of the divisor, is also presented.
['Luis A. Montalvo', 'Alain Guyot']
Svoboda-Tung division with no compensation
278,945
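For context, radix-b digit-recurrence division of the kind discussed above (the family that includes Svoboda-Tung) is built on the standard recurrence below; this is background notation, not the paper's exact formulation:

    w^{(j+1)} = b \, w^{(j)} - q_{j+1} \, d,        Q = \sum_{j=1}^{n} q_j \, b^{-j}

where d is the (pre-scaled) divisor, b ≥ 2 is the radix, w^{(j)} is the partial remainder, and the quotient digit q_{j+1} is selected at each step so that the partial remainder stays bounded.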
Distributed network utility maximization (NUM) has received an increasing intensity of interest over the past few years. Distributed solutions (e.g., the primal-dual gradient method) have been intensively investigated under fading channels. As such distributed solutions involve iterative updating and explicit message passing, it is unrealistic to assume that the wireless channel remains unchanged during the iterations. Unfortunately, the behavior of those distributed solutions under time-varying channels is in general unknown. In this paper, we shall investigate the convergence behavior and tracking errors of the iterative primal-dual scaled gradient algorithm (PDSGA) with dynamic scaling matrices (DSC) for solving distributive NUM problems under time-varying fading channels. We shall also study a specific application example, namely the multicommodity flow control and multicarrier power allocation problem in multihop ad hoc networks. Our analysis shows that the PDSGA converges to a limit region rather than a single point under finite state Markov chain (FSMC) fading channels. We also show that the order of growth of the tracking errors is given by O(T̅/N̅), where T̅ and N̅ are the update interval and the average sojourn time of the FSMC, respectively. Based on this analysis, we derive a low-complexity distributive adaptation algorithm for determining the adaptive scaling matrices, which can be implemented distributively at each transmitter. The numerical results show the superior performance of the proposed dynamic scaling matrix algorithm over several baseline schemes, such as the regular primal-dual gradient algorithm.
['Junting Chen', 'Vincent Kin Nang Lau', 'Yong Cheng']
Distributive Network Utility Maximization Over Time-Varying Fading Channels
20,654
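For reference, a minimal Python sketch of the regular primal-dual gradient baseline mentioned above, on a toy static-channel NUM problem (maximize the sum of log-utilities subject to Rx ≤ c; the routing matrix, capacities and step size are illustrative):

    import numpy as np

    R = np.array([[1., 1., 0.],   # link-by-flow routing matrix
                  [0., 1., 1.]])
    c = np.array([1.0, 1.5])      # link capacities
    x = np.ones(3)                # primal variables: source rates
    lam = np.zeros(2)             # dual variables: link prices
    step = 0.01

    for _ in range(5000):
        # primal ascent on the Lagrangian: utility gradient minus routed prices
        x = np.clip(x + step * (1.0 / x - R.T @ lam), 1e-6, None)
        # dual ascent: raise the price of any over-capacity link
        lam = np.clip(lam + step * (R @ x - c), 0.0, None)

    print(x, lam)  # approaches the NUM optimum for this static toy problem

Under the FSMC fading channels studied in the paper, R and c would vary with time, which is precisely what makes the iterates converge to a limit region rather than a single point.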
The paper describes an ambient intelligence system aimed at enhancing the experience of people moving inside the related physical environment. The latter is endowed with a set of sensors that perceive the presence of humans (or other physical entities such as dogs, bicycles, cars); the sensors interact with a set of actuators that choose their actions in an attempt to improve the overall experience of these users. We describe the example of an adaptive illumination facility to illustrate the problem and the proposed solution.
['Stefania Bandini', 'Andrea Bonomi', 'Giuseppe Vizzari', 'Vito Acconci', 'Nathan DeGraaf', 'Jono Podborseck', 'James Clar']
A CA-Based Self-Organized Illumination Facility
25,818
In this work, the symmetry group and similarity reductions of the two-dimensional generalized Benney system are investigated by means of the geometric approach of an invariance group, which is equivalent to the classical Lie symmetry method. Firstly, the vector field associated with the Lie group of transformations is obtained. Then the point transformations are proposed, which keep the solutions of the generalized Benney system invariant. Finally, the symmetry reductions and explicit exact solutions of the generalized Benney system are derived by solving the corresponding symmetry equations.
['Deng‐Shan Wang', 'Yanbin Yin']
Symmetry analysis and reductions of the two-dimensional generalized Benney system via geometric approach
657,766
This paper explores the approaches to implementing intelligent behaviors by biological organisms, silicon automata, and computing systems. Autonomous computing is introduced as the latest and most advanced computing technique, built upon routine, algorithmic, and adaptive systems. The theory and philosophy behind autonomous computing is cognitive informatics; in other words, autonomous computing systems are applications of cognitive informatics. A layered reference model of the brain (LRMB) and its cognitive mechanisms and processes are described in this talk, which form the foundation for designing and implementing autonomous computing systems. Real-time process algebra (RTPA) is introduced to formally and rigorously describe autonomous computing systems and cognitive behaviors. It is believed that applications of cognitive informatics and autonomous computing will result in the development of new-generation computing architectures and information processing systems.
['Yingxu Wang']
On autonomous computing and cognitive processes
50,640
The Computational Grid is an appealing high-performance computational platform. A key problem in implementing a Computational Grid environment is how to effectively use the various resources in the system, such as compute cycles, memory, communication networks, and data repositories. A benefit-function resource mapping heuristic for Computational Grid environments is presented to map a set of independent tasks (a meta-task) to resources. The algorithm considers the influence of the input data repositories' locations and the QoS of tasks on the mapping result. This semi-dynamic algorithm is well suited to the dynamic adaptability and domain autonomy of the grid, and the benefit-function heuristic it adopts can assure the QoS of tasks more effectively.
['Qing Ding', 'Guoliang Chen']
A benefit function mapping heuristic for a class of meta-tasks in grid environments
97,649
Business Analysis in the OLAP Context
['Emiel Caron', 'Hennie Daniels']
Business Analysis in the OLAP Context
162,619
CMOS systems have become more susceptible to permanent and transient faults due to technology scaling. Triple Modular Redundancy is a widely used technique for fault tolerance. The weak point of this technique is the majority voter. Many studies propose different implementations of voter circuits to tolerate transient faults, but they usually do not evaluate permanent faults. This work investigates the robustness of different majority voters under stuck-on and stuck-open faults. The results show a difference of up to 5X in the capacity to tolerate permanent faults.
['Eduardo Liebl', 'Cristina Meinhardt', 'Paulo F. Butzen']
Reliability analysis of majority voters under permanent faults
812,553
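For reference, the logic function a 2-out-of-3 TMR majority voter computes (logic level only; the paper's analysis concerns transistor-level stuck-on/stuck-open behavior, which this sketch does not model):

    def majority(a: int, b: int, c: int) -> int:
        """Bitwise 2-out-of-3 vote over three redundant module outputs."""
        return (a & b) | (b & c) | (a & c)

    assert majority(1, 1, 0) == 1  # a single faulty module is outvoted
    assert majority(0, 1, 0) == 0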
Exploring the Role of Twitter in Promoting Women's Health in the Arab World: Lessons Learned.
['Salwa Bahkali', 'Ahmad Almaiman', 'Nahla Altassan', 'Sarah Almaiman', 'Mowafa S. Househ', 'Khaled Al-Surimi']
Exploring the Role of Twitter in Promoting Women's Health in the Arab World: Lessons Learned.
817,632
A Wireless Sensor Network (WSN) is prone to network connectivity failure. Since the nodes are power-constrained and mostly deployed in remote locations, the network can suffer connectivity failures at any time. Therefore, it is important to find the geographic location of the network discontinuity. The disconnected region is usually known as a network cut, and locating it is essential for repairing the network and restoring smooth operation. In this paper, we propose an algorithm to detect a network cut in a WSN. We validate our method by experimental studies and find that it can locate the network discontinuity and determine the shape of the disconnected region.
['S. M. Ferdous', 'Md. Mustafizur Rahman', 'Mahmuda Naznin']
Finding network connectivity failure in a Wireless Sensor Network
725,207
Visible light communication (VLC) is considered to be one of the promising technologies for future wireless systems and has attracted increasing research interest in recent years. Optical orthogonal frequency division multiplexing has been proposed for VLC systems to eliminate multipath interference while also facilitating frequency-domain equalization. In comparison with conventional radio frequency (RF) based wireless communications, there has been limited consideration of channel estimation (CE) for VLC, where the indoor optical wireless channel model differs from the traditional RF case. In this paper, we present a new CE algorithm for indoor downlink VLC systems, referred to as the adaptive statistical Bayesian minimum mean square error CE (AS-BMMSE-CE). Furthermore, a so-called variable statistic window (VSW) mechanism is designed to exploit past channel information within a window of adaptively optimized size, such that the CE performance can be significantly improved. Detailed theoretical analysis is provided and verified by extensive numerical results, demonstrating the superior performance of the proposed AS-BMMSE-CE scheme.
['Xianyu Chen', 'Ming Jiang']
Adaptive Statistical Bayesian MMSE Channel Estimation for Visible Light Communication
945,525
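As background for the Bayesian MMSE family used above, a minimal linear-MMSE channel estimate for a generic pilot model y = Xh + n (a sketch only; it omits the adaptive statistics and variable statistic window that define AS-BMMSE-CE):

    import numpy as np

    def lmmse_channel_estimate(y, X, R_h, noise_var):
        """h_hat = R_h X^H (X R_h X^H + sigma^2 I)^{-1} y for zero-mean h."""
        n = X.shape[0]
        C = X @ R_h @ X.conj().T + noise_var * np.eye(n)
        return R_h @ X.conj().T @ np.linalg.solve(C, y)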
Applications of perceptual image quality assessment (IQA) in image and video processing, such as image acquisition, image compression, image restoration, and multimedia communication, have led to the development of many IQA metrics. In this paper, a reliable full-reference IQA model is proposed that utilizes gradient similarity (GS), chromaticity similarity (CS), and deviation pooling (DP). Considering the shortcomings of the commonly used GS in modeling the human visual system (HVS), a new GS is proposed through a fusion technique that is more likely to follow the HVS. We propose an efficient and effective formulation to calculate the joint similarity map of two chromatic channels for the purpose of measuring color changes. In comparison with a commonly used formulation in the literature, the proposed CS map is shown to be more efficient and to provide comparable or better quality predictions. Motivated by a recent work that utilizes the standard DP, a general formulation of the DP is presented in this paper and used to compute a final score from the proposed GS and CS maps. This formulation of DP benefits from Minkowski pooling and a proposed power pooling as well. The experimental results on six datasets of natural images, a synthetic dataset, and a digitally retouched dataset show that the proposed index provides comparable or better quality predictions than the most recent and competing state-of-the-art IQA metrics in the literature; it is reliable and has low complexity. The MATLAB source code of the proposed metric is available at https://dl.dropboxusercontent.com/u/74505502/MDSI.m .
['Hossein Ziaei Nafchi', 'Atena Shahkolaei', 'Rachid Hedjam', 'Mohamed Cheriet']
Mean Deviation Similarity Index: Efficient and Reliable Full-Reference Image Quality Evaluator
875,425
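A minimal numpy sketch of two of the generic ingredients named above, a gradient-similarity map and deviation pooling (the constant, the fusion step and the chromaticity treatment of the actual MDSI metric are not reproduced here):

    import numpy as np

    def gradient_similarity(ref, dist, c=170.0):
        gy1, gx1 = np.gradient(ref.astype(float))
        gy2, gx2 = np.gradient(dist.astype(float))
        g1, g2 = np.hypot(gx1, gy1), np.hypot(gx2, gy2)
        return (2 * g1 * g2 + c) / (g1 ** 2 + g2 ** 2 + c)  # ~1 where gradients agree

    def deviation_pooling(sim_map):
        # score by how much the similarity map deviates from its own mean;
        # larger deviation corresponds to more visible distortion
        return np.mean(np.abs(sim_map - sim_map.mean()))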
Studies the buffer-size setting problem for distributed database systems. The main goal is to minimize physical I/O while achieving better buffer utilization at the same time. As opposed to traditional buffer management strategies, where a limited knowledge of user access patterns is analyzed and used, our buffer allocation mechanism extracts knowledge from historical reference streams and then determines the optimal buffer space based on the discovered knowledge. Simulation experiments show that the proposed method can achieve an optimal buffer allocation solution for distributed database systems.
['Hoi Yuen Leung', 'Ling Feng', 'Qing Li']
Analysis of distributed database access histories for buffer allocation
43,227
The problem of recovering a signal from its power spectrum, called phase retrieval, arises in many scientific fields. One of many examples is ultrashort laser pulse characterization, in which the electromagnetic field oscillates at ~10^15 Hz and phase information cannot be measured directly due to limitations of the electronic sensors. Phase retrieval is ill-posed in most cases, as there are many different signals with the same Fourier transform magnitude. To overcome this fundamental ill-posedness, several measurement techniques are used in practice. One of the most popular methods for complete characterization of ultrashort laser pulses is frequency-resolved optical gating (FROG). In FROG, the acquired data are the power spectrum of the product of the unknown pulse with its delayed replica. Therefore, the measured signal is a quartic function of the unknown pulse. A generalized version of FROG, where the delayed replica is replaced by a second unknown pulse, is called blind FROG. In this case, the measured signal is quadratic with respect to both pulses. In this letter, we introduce and formulate FROG-type techniques. We then show that almost all band-limited signals are determined uniquely, up to trivial ambiguities, by blind FROG measurements (and thus also by FROG), if in addition we have access to the signal's power spectrum.
['Tamir Bendory', 'Pavel Sidorenko', 'Yonina C. Eldar']
On the Uniqueness of FROG Methods
982,278
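The FROG measurement described above can be written as I(ω, τ) = |∫ x(t) x(t − τ) e^{−iωt} dt|²; a minimal discrete sketch in Python (circular delays, unscaled axes):

    import numpy as np

    def frog_trace(x):
        """FROG trace: power spectrum of x(t) * x(t - tau) for every delay tau."""
        n = len(x)
        trace = np.empty((n, n))
        for k in range(n):                    # k indexes the delay tau
            product = x * np.roll(x, k)       # x(t) * x(t - tau), circular shift
            trace[k] = np.abs(np.fft.fft(product)) ** 2
        return trace

Blind FROG replaces np.roll(x, k) with a delayed second unknown pulse.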
Most of the computational study of election problems has assumed that each voter's preferences are, or should be extended to, a total order. However, in practice voters may have preferences with ties. We study the complexity of manipulative actions on elections where voters can have ties, extending the definitions of the election systems when necessary to handle voters with ties. We show that for natural election systems, allowing ties can both increase and decrease the complexity of manipulation and bribery, and we state a general result on the effect of voters with ties on the complexity of control.
['Zack Fitzsimmons', 'Edith Hemaspaandra']
Complexity of Manipulative Actions When Voting with Ties
595,915
Fast Downward is a classical planning system based on heuristic search. It can deal with general deterministic planning problems encoded in the propositional fragment of PDDL2.2, including advanced features like ADL conditions and effects and derived predicates (axioms). Like other well-known planners such as HSP and FF, Fast Downward is a progression planner, searching the space of world states of a planning task in the forward direction. However, unlike other PDDL planning systems, Fast Downward does not use the propositional PDDL representation of a planning task directly. Instead, the input is first translated into an alternative representation called multivalued planning tasks, which makes many of the implicit constraints of a propositional planning task explicit. Exploiting this alternative representation, Fast Downward uses hierarchical decompositions of planning tasks for computing its heuristic function, called the causal graph heuristic, which is very different from traditional HSP-like heuristics based on ignoring negative interactions of operators.

In this article, we give a full account of Fast Downward's approach to solving multivalued planning tasks. We extend our earlier discussion of the causal graph heuristic to tasks involving axioms and conditional effects and present some novel techniques for search control that are used within Fast Downward's best-first search algorithm: preferred operators transfer the idea of helpful actions from local search to global best-first search, deferred evaluation of heuristic functions mitigates the negative effect of large branching factors on search performance, and multiheuristic best-first search combines several heuristic evaluation functions within a single search algorithm in an orthogonal way. We also describe efficient data structures for fast state expansion (successor generators and axiom evaluators) and present a new non-heuristic search algorithm called focused iterative-broadening search, which utilizes the information encoded in causal graphs in a novel way.

Fast Downward has proven remarkably successful: It won the "classical" (i.e., propositional, non-optimising) track of the 4th International Planning Competition at ICAPS 2004, following in the footsteps of planners such as FF and LPG. Our experiments show that it also performs very well on the benchmarks of the earlier planning competitions and provide some insights about the usefulness of the new search enhancements.
['Malte Helmert']
The fast downward planning system
436,466
Semantic Constraints in a Medical Information System
['Carole Goble', 'Andrzej J. Glowinski']
Semantic Constraints in a Medical Information System
500,947
Simulation is used to evaluate parking space availability for a current layout and for future design options at Miami University. By using simulation, an alternative design that increased the average number of parked cars and decreased the number of balked cars was derived. This paper describes the models developed and provides details on the analysis.
['John M. Harris', 'Yasser Dessouky']
A simulation approach for analyzing parking space availability at a major university
166,691
Current infrastructures for developing big-data applications are able to process (via big-data analytics) huge amounts of data, using clusters of machines that collaborate to perform parallel computations. However, current infrastructures were not designed to meet the requirements of time-critical applications; they are focused on general-purpose rather than time-critical applications. Addressing this issue from the perspective of the real-time systems community, this paper considers time-critical big-data. It deals with the definition of a time-critical big-data system from the point of view of requirements, analyzing the specific characteristics of some popular big-data applications. This analysis is complemented by the challenges stemming from the infrastructures that support the applications, proposing an architecture and offering initial performance patterns that connect application costs with infrastructure performance.
['Pablo Basanta-Val', 'Neil C. Audsley', 'Andy J. Wellings', 'Ian Gray', 'Norberto Fernandez-Garcia']
Architecting Time-Critical Big-Data Systems
920,991
Lost in Semantics? Ballooning the Web of Data.
['Florian Stegmaier', 'Kai Schlegel', 'Michael Granitzer']
Lost in Semantics? Ballooning the Web of Data.
780,915
Detecting the Data Group Most Prone to a Specific Disguise Value
['Wen-Yang Lin', 'Wen-Yu Feng']
Detecting the Data Group Most Prone to a Specific Disguise Value
181,459
The Parasitic Humanoid (PH) is a wearable robot for modeling nonverbal human behavior. This anthropomorphic robot senses the behavior of the wearer and has the internal models to learn the process of human sensory motor integration, thereafter it begins to predict the next behavior of the wearer using the learned models. When the reliability of the prediction is sufficient, the PH outputs the errors from the actual behavior as a request for motion to the wearer. Through symbiotic interaction, the internal model and the process of human sensory motor integration approximate each other asymptotically.
['T. Maeda', 'Hideyuki Ando']
Wearable robotics as a behavioral interface - the study of the Parasitic Humanoid
514,503
Representing bioinformatics datatypes using the OntoDT ontology.
['Panče Panov', 'Larisa N. Soldatova', 'Saso Dzeroski']
Representing bioinformatics datatypes using the OntoDT ontology.
739,062
Service matching is a key research area in Web service applications. Most existing matching methods achieve low precision, so they are not satisfactory for automatic service discovery in applications such as dynamic and automatic service substitution. To solve this problem, this paper proposes a high-precision matching method based on a formal semantic description of services. In our method, we use the input, the output and the internal logic process to clarify the function of the service. A Domain Conception Ontology is used to describe the input and output, and SOFL and a Domain Process Ontology are introduced to describe the internal logic process. Based on these descriptions, Domain Ontology matching, IO matching and Process matching are conducted successively to produce a high-precision matching result. We define the matching rules and give a case study to illustrate the method.
['Jian Liang', 'Fenglin Bu', 'Hongming Cai']
High-Precision Service Matching Based on Formal Semantic Description
230,367
Modeling Security Requirements in Service Based Business Processes
['Sameh Hbaieb Turki', 'Farah Bellaaj', 'Anis Charfi', 'Rafik Bouaziz']
Modeling Security Requirements in Service Based Business Processes
263,834
Motivation: Multiple comparison adjustment is a significant and challenging statistical issue in large-scale biological studies. In previous studies, dependence among genes is largely ignored. However, such dependence may be strong for some genomic-scale studies such as genetical genomics [also called expression quantitative trait loci (eQTL) mapping], in which thousands of genes are treated as quantitative traits and mapped to different genetic markers. Besides the dependence among markers, the dependence among the expression levels of genes can also have a significant impact on data analysis and interpretation.

Results: In this article, we propose to consider both the mean and the variance of the false discovery number for multiple comparison adjustment to handle dependence among hypotheses. This is achieved by developing a variance estimator for the false discovery number, and using the upper bound of the false discovery proportion (uFDP) for false discovery control. More importantly, we introduce a weighted version of uFDP (wuFDP) control to improve the statistical power of eQTL identification. In addition, the wuFDP approach can better control false positives than the false discovery rate (FDR) and uFDP approaches when markers are in linkage disequilibrium. The relative performance of uFDP control and wuFDP control is illustrated through simulation studies and real data analysis.

Contact: liang.chen@usc.edu; hongyu.zhao@yale.edu

Supplementary information: Supplementary figures, tables and appendices are available at Bioinformatics online.
['Liang Chen', 'Tiejun Tong', 'Hongyu Zhao']
Considering dependence among genes and markers for false discovery control in eQTL mapping
445,207
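For orientation, the standard Benjamini-Hochberg FDR procedure that the uFDP/wuFDP approaches are compared against (a textbook sketch; it ignores the dependence among hypotheses that the paper addresses):

    import numpy as np

    def benjamini_hochberg(pvals, alpha=0.05):
        """Boolean mask of rejected hypotheses at FDR level alpha."""
        p = np.asarray(pvals)
        m = len(p)
        order = np.argsort(p)
        below = p[order] <= alpha * np.arange(1, m + 1) / m
        k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
        reject = np.zeros(m, dtype=bool)
        reject[order[:k]] = True   # reject the k smallest p-values
        return reject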
Prediction error identification requires that data be informative with respect to the chosen model structure. Whereas sufficient conditions for informative experiments have been available for a long time, there were surprisingly no results of a necessary and sufficient nature. With the recent surge of interest in optimal experiment design, it is of interest to know the minimal richness required of the externally applied signal to make the experiment informative. We provide necessary and sufficient conditions on the degree of richness of the applied signal to generate an informative experiment, both in open loop and in closed loop. In a closed-loop setup, where identification can be achieved with no external excitation if the controller is of sufficient degree, our results provide a precisely quantifiable trade-off between controller degree and required degree of external excitation.
['Michel Gevers', 'Alexandre Sanfelice Bazanella', 'Ljubisa Miskovic']
Informative data: How to get just sufficiently rich?
374,766
Parameterization of 3D meshes plays an important role in computer graphics applications such as geometry compression, texture mapping and morphing. For closed two-manifold genus-0 meshes, the natural choice of parameterization domain is the sphere, and the basic challenges are avoiding foldovers and keeping distortion low. To solve the spherical parameterization problem, we present the spherical-domain hybrid stretch metric (SHSM). The concept of global area is introduced into SHSM to further reduce the area distortion of parameterized triangles. We demonstrate that our method is both fast and avoids high distortion, realizing parameterization from spatial genus-0 meshes onto the unit sphere.
['Haishan Tian', 'Yuanjun He', 'Yong Wu']
A new approach of progressive spherical parameterization
223,923
Publishing Greek Census Data as Linked Open Data.
['Irene Petrou', 'George Papastefanatos']
Publishing Greek Census Data as Linked Open Data.
773,350
The aperture problem is one of the omnipresent issues in computer vision. Its local character constrains point matching to highly textured areas, so that points in gradient-oriented regions (such as straight lines) cannot be reliably matched. We propose a new method to overcome this problem by devising a global matching strategy under the factorization framework. We solve the n-frame correspondence problem in this context by assuming the rigidity of the scene. To this end, a geometric constraint is used that selects the matching solution resulting in a rank-4 observation matrix. The rank of the observation matrix is a function of the matching solutions associated with each image, and as such a simultaneous solution for all frames has to be found. An optimization procedure is used in this paper in order to find the solution.
['Ricardo Kaempf de Oliveira', 'João Paulo Costeira', 'João Manuel Freitas Xavier']
Contour point tracking by enforcement of rigidity constraints
423,305
Multi-Label Informed Feature Selection.
['Ling Jian', 'Jundong Li', 'Kai Shu', 'Huan Liu']
Multi-Label Informed Feature Selection.
985,400
Today's software is getting more and more complex and harder to understand. Models help to organize knowledge and emphasize the structure of a software system at a higher abstraction level. While the usage of model-driven techniques is widely adopted during software construction, it is still an open research topic whether models can also be used to make runtime phenomena more comprehensible. It is not obvious which models are suitable for manual analysis and which model elements can be related to what type of runtime events. This paper proposes a collection of runtime event types that can be reused for various systems and meta-models. Based on these event types, information can be derived that helps human observers to assess the current system state. Our approach is applied in a case study and evaluated regarding generalisability and completeness by relating it to two different meta-models.
['Michael Szvetits', 'Uwe Zdun']
Reusable event types for models at runtime to support the examination of runtime phenomena
556,364
“The Fruits of Intellectual Labor”: International Student Views of Intellectual Property
['Ilka Datig', 'Beth Russell']
“The Fruits of Intellectual Labor”: International Student Views of Intellectual Property
438,709
Recent advances in computer vision have significantly reduced the difficulty of object classification and recognition. Robust feature detector and descriptor algorithms are particularly useful, forming the basis for many recognition and classification applications. These algorithms have been used in divergent bag-of-words and structural matching approaches. This work demonstrates a recognition application, based upon the SURF feature descriptor algorithm, which fuses bag-of-words and structural verification techniques. The resulting system is applied to the domain of car recognition and achieves accurate (> 90%) and real-time performance when searching databases containing thousands of images.
['Daniel Marcus Jang', 'Matthew Turk']
Car-Rec: A real time car recognition system
55,469
The purpose of this paper is to introduce a symbolic layout technique for MOS integrated circuits. We will give a description of symbolic layout, talk about its potential and briefly describe the symbolic layout system we have developed at AMI.
['Dave Gibson', 'Scott Nance']
SLIC - symbolic layout of integrated circuits
142,946
CPS Specifier – A Specification Tool for Safety-Critical Cyber-Physical Systems
['Jonas Westman', 'Mattias Nyberg', 'Oscar Thydén']
CPS Specifier – A Specification Tool for Safety-Critical Cyber-Physical Systems
986,868
This paper studies the problem of output agreement in networks of nonlinear dynamical systems under time-varying disturbances. Necessary and sufficient conditions for output agreement are derived for the class of incrementally passive systems. Following this, it is shown that the optimal distribution problem in dynamic inventory systems with time-varying supply and demand can be cast as a special version of the output agreement problem. We show in particular that the time-varying optimal distribution problem can be solved by applying an internal model controller to the dual variables of a certain convex network optimization problem.
['Mathias Burger', 'Claudio De Persis']
Internal models for nonlinear output agreement and optimal flow control
74,589
SHACAL-2 is a 256-bit block cipher with up to 512 bits of key length, based on the hash function SHA-2. It was recommended as one of the NESSIE project selections. As far as the number of attacked rounds is concerned, the best cryptanalytic result obtained on SHACAL-2 so far is a related-key rectangle attack on 42-round SHACAL-2 [13]. In this paper we present a related-key rectangle attack on 43 rounds out of the 64 rounds of SHACAL-2, which requires 2^240.38 chosen plaintexts and has time complexity of 2^480.4 43-round SHACAL-2 encryptions. We also identify and fix some flaws in the previous attack on SHACAL-2.
['Gaoli Wang']
Related-key rectangle attack on 43-round SHACAL-2
366,218
This paper presents a design space exploration of a selective load value prediction scheme suitable for energy-aware Simultaneous Multi-Threaded (SMT) architectures. A load value predictor is an architectural enhancement which speculates over the results of a micro-processor load instruction to speed up the execution of the following instructions. The proposed architectural enhancement differs from a classic predictor due to an improved selection scheme that allows the predictor to be activated only when a miss occurs in the first level of cache. We analyze the effectiveness of the selective predictor in terms of overall energy reduction and performance improvement. To this end, we show how the proposed predictor can produce benefits (in terms of overall cost) when the cache size of the SMT architecture is reduced, and we compare it with a classic non-selective load value prediction scheme. The experimental results have been gathered with a state-of-the-art SMT simulator running the SPEC2000 benchmark suite, both in SMT and non-SMT mode.
['Arpad Gellert', 'Gianluca Palermo', 'Vittorio Zaccaria', 'Adrian Florea', 'Lucian N. Vintan', 'Cristina Silvano']
Energy-performance design space exploration in SMT architectures exploiting selective load value predictions
392,421
Since the first generation (1G) of mobile technology, mobile wireless communication systems have continued to evolve, bringing into the network architecture new interfaces and protocols, as well as unified services, high data transmission capacity, and packet-based transmission (4G). This evolution has also introduced new vulnerabilities and threats, which can be used to launch attacks on different network components, such as the access network and the core network. These drawbacks are a major concern for the security and the performance of mobile networks, since various types of attacks can take down the whole network and cause a denial of service, or perform malicious activities. In this survey, we review the main security issues in the access and core networks (vulnerabilities and threats) and provide a classification and categorization of attacks on mobile networks. In addition, we analyze major attacks on 4G mobile networks and corresponding countermeasures and current mitigation solutions, discuss the limits of current solutions, and highlight open research areas.
['Silvere Mavoungou', 'Georges Kaddoum', 'Mostafa M. I. Taha', 'Georges Matar']
Survey on Threats and Attacks on Mobile Networks
877,671
Diabetic retinopathy affects the vision of a significant fraction of the population worldwide. Retinal fundus images are used to detect the condition before vision loss develops to enable medical interventions. Optic disc detection is an essential step for the automatic detection of the disease. Several techniques have been introduced in the literature to detect the optic disc with different performance characteristics such as speed, accuracy and consistency. For optic disc detection, a nature-inspired algorithm called swarm intelligence has been shown to have clear superiority in terms of speed and accuracy compared to traditional detection algorithms. We therefore further investigated and compared several swarm intelligence techniques. Our study focused on five popular swarm intelligence algorithms: artificial bee colony, particle swarm optimization, bat algorithm, cuckoo search and firefly algorithm. This work also featured a novel pre-processing scheme that enhances the detection accuracy of the swarm techniques by making the optic disc region the highest grayscale value in the image. The pre-processing involves multiple stages of background subtraction, median filtering and mean filtering and is named Background Subtraction-based Optic Disc Detection (BSODD). The best result was obtained by combining our pre-processing technique, firefly algorithm and the parameters used for the algorithm. The obtained accuracy was superior to the other tested algorithms and published results in the literature. The accuracy of the firefly algorithm was 100%, 100%, 98.82% and 95% when using the DRIVE, DiaRetDB1, DMED and STARE databases, respectively.
['Sa’ed Abed', 'Suood Abdulaziz Al-Roomi', 'Mohammad H. Alshayeji']
Effective optic disc detection method based on swarm intelligence techniques and novel pre-processing steps
884,978
This paper presents a thorough analysis of the energy consumption of a software HEVC decoder. The evaluation utilizes a framework developed herein specifically to estimate the energy consumption at all levels of the cache hierarchy. Our framework is based on analytical models combined with memory profiling tools. Energy analyses of several cache hierarchies executing HEVC decoding with different input bit streams were carried out. Our results point to the most suitable cache parameters for each video resolution. The energy was estimated for a 32nm CMOS technology. Our study includes different tradeoffs between energy efficiency and capacity, associativity, and main memory bandwidth. Our detailed analysis shows that the more capable the cache configuration, the more efficient the energy consumption. The main memory bandwidth evaluation shows that the energy consumption increases with the main memory bandwidth requirement. Full HD video resolutions require up to 90 times higher bandwidth and 57 times more energy than class D resolutions.
['Eduarda Monteiro', 'Mateus Grellert', 'Sergio Bampi', 'Bruno Zatt']
Energy-aware cache assessment of HEVC decoding
879,353
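A minimal form of the analytical cache-energy model such a framework rests on (a sketch; the per-access and miss energies are placeholders that would come from a 32nm technology characterization, not values from the paper):

    def cache_energy_joules(accesses, misses, e_access_nj, e_miss_nj,
                            leakage_w, runtime_s):
        """Dynamic energy of accesses and misses plus static leakage."""
        dynamic_nj = accesses * e_access_nj + misses * e_miss_nj
        return dynamic_nj * 1e-9 + leakage_w * runtime_s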
In recent years, the repeat-pass GBSAR (ground-based synthetic aperture radar) system has demonstrated its capacity to acquire deformation measurements. Nevertheless, a variety of applications require measuring deformation with a precision of up to 0.1 mm, which cannot be reached by the traditional PS (permanent scatterer) algorithm in most cases. One of the main reasons is the phase error caused by the rail determination error, because the precision of rail determination may degrade during long working hours, and the traditional PS algorithm cannot compensate for the resulting phase error. To solve these problems, we modify the conventional PS algorithm. Firstly, we deduce the transformation relationship between the rail determination error and its corresponding interferometric phase error. Then, the phase errors caused by the atmosphere and by the rail determination error are jointly compensated. Experimental data obtained in Fangshan District in Beijing (China) were used to test and verify the performance of the new algorithm. A comparison between the results processed by the new algorithm and those processed by the traditional algorithm demonstrates its ability to obtain high-precision deformation measurements.
['Cheng Hu', 'Mao Zhu', 'Tao Zeng', 'Weiming Tian', 'Cong Mao']
High-precision deformation monitoring algorithm for GBSAR system: rail determination phase error compensation
618,385
Accurate identification of linear B-cell epitopes plays an important role in peptide vaccine design, immunodiagnosis, and antibody production. Although several prediction methods have been reported, unsatisfactory accuracy has limited their broad use in linear B-cell epitope prediction. Therefore, developing a reliable model with a significant improvement in prediction accuracy is highly desirable.
['Weike Shen', 'Yuan Cao', 'Lei Cha', 'Xufei Zhang', 'Xiaomin Ying', 'Wei Zhang', 'Kun Ge', 'Wuju Li', 'Li Zhong']
Predicting linear B-cell epitopes using amino acid anchoring pair composition
379,854
Black box optimization for automatic speech recognition
['Shinji Watanabe', 'Jonathan Le Roux']
Black box optimization for automatic speech recognition
241,882
We present an approach that uses evolutionary learning of behavior to improve the testing of commercial computer games. After identifying unwanted results or behavior of the game, we propose to develop measures of how near a sequence of game states comes to the unwanted behavior and to use these measures within the fitness function of a GA working on action sequences. This makes it possible to find action sequences that produce the unwanted behavior, if they exist. Our experimental evaluation of the method with the FIFA-99 game, with scoring a goal as the unwanted behavior, shows that the method is able to find such action sequences, allowing for easy reproduction of critical situations and improvements to the tested game.
['Ben Chan', 'Jörg Denzinger', 'Darryl Gates', 'Kevin Loose', 'John W. Buchanan']
Evolutionary behavior testing of commercial computer games
376,774
Intelligent tutoring systems are quite difficult and time intensive to develop. In this paper, we describe a method and set of software tools that ease the process of cognitive task analysis and tutor development by allowing the author to demonstrate, instead of programming, the behavior of an intelligent tutor. We focus on the subset of our tools that allow authors to create Pseudo Tutors that exhibit the behavior of intelligent tutors without requiring AI programming. Authors build user interfaces by direct manipulation and then use a Behavior Recorder tool to demonstrate alternative correct and incorrect actions. The resulting behavior graph is annotated with instructional messages and knowledge labels. We present some preliminary evidence of the effectiveness of this approach, both in terms of reduced development time and learning outcome. Pseudo Tutors have now been built for economics, analytic logic, mathematics, and language learning. Our data supports an estimate of about 25:1 ratio of development time to instruction time for Pseudo Tutors, which compares favorably to the 200:1 estimate for Intelligent Tutors, though we acknowledge and discuss limitations of such estimates.
['Kenneth R. Koedinger', 'Vincent Aleven', 'Neil T. Heffernan', 'Bruce M. McLaren', 'Matthew Hockenberry']
Opening the door to non-programmers: Authoring Intelligent tutor behavior by demonstration
896,051
Wireless sensor networks (WSNs) are a promising technology for several industrial and quotidian applications. IPv6 is the most consensual solution to connect such networks to the Internet, and 6LoWPAN is the adaptation layer to run IPv6 over WSNs. Self-organization and self-configuration are key characteristics of WSN because they minimize the network configuration efforts and simultaneously increase the network robustness but they can also be exploited to perform security attacks. This paper proposes a network admission control solution for 6LoWPAN WSN that prevents unauthorized nodes from using the network to communicate either with the legitimate nodes and with the Internet, reducing in this way the security attacks that can be performed. The proposed solution includes node presence detection and authentication, administrative node authorization, and data filtering to discard frames from/to unauthorized nodes. It uses the standard 6LoWPAN neighbor discovery and RPL protocols, minimizing the number of additional required control messages. It includes cryptographic mechanisms, based on the AES symmetric key algorithm, to guarantee node authenticity and integrity, source authenticity, and data freshness of data frames. This paper also presents the design and deployment of a laboratory testbed validating the proposed network admission control solution.
['Luís M. L. Oliveira', 'Joel J. P. C. Rodrigues', 'Amaro de Sousa', 'Victor M. Denisov']
Network Admission Control Solution for 6LoWPAN Networks Based on Symmetric Key Mechanisms
884,180
A hierarchical union-of-subspaces model is proposed for performing semi-supervised human activity summarization in large streams of video data. The union of low-dimensional subspaces model is used to learn meaningful action attributes from a collection of high-dimensional video sequences of human activities. An approach called hierarchical sparse subspace clustering (HSSC) is developed to learn this model from the data in an unsupervised manner by capturing the variations or movements of each action in different subspaces, which allow the human actions to be represented as sequences of transitions from one subspace to another. These transition sequences can be used for human action recognition. The action attributes can also be represented at multiple resolutions using the subspaces at different levels of the hierarchical structure. By visualizing and labeling these action attributes, the hierarchical model can be used to semantically summarize long video sequences of human actions at different scales. The effectiveness of the proposed model is demonstrated through experiments on three real-world human action datasets for action recognition and semantic summarization of the actions using different resolutions of the action attributes.
['Tong Wu', 'Prudhvi Gurram', 'Raghuveer M. Rao', 'Waheed U. Bajwa']
Hierarchical Union-of-Subspaces Model for Human Activity Summarization
610,292
The vulnerability of critical infrastructures has increased with the widespread use of information technologies. Although individual hackers whose major aims are self-satisfaction and financial gain are already on the stage, nation-sponsored attacks have been considered important threats since the Stuxnet attack. An important intention of targeted attacks by nation states may be the degradation of their enemies' cyber-physical systems. After these developments, cyber-attacks have become an agenda item for academics, practitioners and policy makers. In this study, we employed the Monte-Carlo reliability analysis technique to quantify the impact of cyber-attacks on industrial control systems used in power generation. The economic value of cyber-attacks can help decision makers decide whether a cyber-security investment is feasible. The results show that cyber-attacks may have a significant impact on the reliability of power generation systems.
['Unal Tatar', 'Hayretdin Bahşi', 'Adrian V. Gheorghe']
Impact assessment of cyber attacks: A quantification study on power generation systems
868,786
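A minimal Monte-Carlo reliability estimate of the kind employed above, for a toy series system whose component failure probabilities rise under attack (all numbers and the series structure are illustrative):

    import random

    def mc_availability(fail_probs, trials=100_000, seed=0):
        """Fraction of trials in which every component survives (series system)."""
        rng = random.Random(seed)
        up = sum(all(rng.random() >= p for p in fail_probs) for _ in range(trials))
        return up / trials

    baseline = mc_availability([0.01, 0.02, 0.01])
    attacked = mc_availability([0.01, 0.10, 0.01])  # attack raises one failure rate
    print(baseline, attacked)  # the gap quantifies the attack's reliability impact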
The topological reconfiguration of an asynchronous transfer mode (ATM) network embedded into a backbone facility network that uses digital cross-connect systems (DCSs) is addressed. The ATM topology consists of links (express pipes) obtained from the backbone facility via circuit switching through DCSs. The problem is formulated as a network optimization problem where performance is optimized, subject to capacity constraints posed by the underlying facility trunks. The variables in this problem are the selection of the express pipes, the routing of traffic on such pipes, and the allocation of bandwidth to pipes. Dynamic reconfiguration schemes where the embedded topology is periodically adjusted to track fluctuations in traffic requirements are discussed. Such reconfigurations are shown to reduce congestion substantially.
['José Augusto Suruagy Monteiro', 'Mario Gerla']
Topological reconfiguration of ATM networks
379,955
Research highlights: minimum dissimilarity on a network; CS identifies promising regions of the search space; clusters are explored with local search heuristics; computational results demonstrate the efficacy of CS. The Capacitated Centered Clustering Problem (CCCP) consists of defining a set of p groups with minimum dissimilarity on a network with n points. Demand values are associated with each point and each group has a demand capacity. The problem is well known to be NP-hard and has many practical applications. In this paper, the hybrid method Clustering Search (CS) is implemented to solve the CCCP. This method identifies promising regions of the search space by generating solutions with a metaheuristic, such as a Genetic Algorithm, and grouping them into clusters that are then explored further with local search heuristics. Computational results on instances available in the literature are presented to demonstrate the efficacy of CS.
['Antonio Augusto Chaves', 'Luiz Antonio Nogueira Lorena']
Hybrid evolutionary algorithm for the Capacitated Centered Clustering Problem
209,480
We propose a generalized Gaussian process model (GGPM), which is a unifying framework that encompasses many existing Gaussian process (GP) models, such as GP regression, classification, and counting. In the GGPM framework, the observation likelihood of the GP model is itself parameterized using the exponential family distribution. By deriving approximate inference algorithms for the generalized GP model, we are able to easily apply the same algorithm to all other GP models. Novel GP models are created by changing the parameterization of the likelihood function, which greatly simplifies their creation for task-specific output domains. We also derive a closed-form efficient Taylor approximation for inference on the model, and draw interesting connections with other model-specific closed-form approximations. Finally, using the GGPM, we create several new GP models and show their efficacy in building task-specific GP models for computer vision.
['Antoni B. Chan', 'Daxiang Dong']
Generalized Gaussian process models
196,998
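The exponential-family parameterization referred to above has the standard form

    p(y | θ, φ) = exp( (yθ − b(θ)) / a(φ) + c(y, φ) ),

so that, in the usual correspondences (the paper's own notation may differ), b(θ) = θ²/2 recovers the Gaussian likelihood of GP regression, b(θ) = log(1 + e^θ) a Bernoulli classification model, and b(θ) = e^θ a Poisson counting model.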
Selecting a menu item in a cascading pull-down menu is a frequent but time-consuming and complex GUI task. This paper describes an approach aimed to support the user during selection in cascading pull-down menus when using an indirect pointing device. By enhancing such a cascading pull-down menu with "force fields", the cursor is attracted toward a certain direction, e.g. toward the right-hand side within a menu item, which opens up a sub-menu, making the cursor steering task easier and faster. The experiment described here shows that the force fields can decrease selection times, on average by 18%, when a mouse, a track point, or a touch pad is used as input device. The results also suggest that selection times in cascading pull-down menus can be modeled using a combination of Fitts' law and the steering law. The proposed model proved to hold for all three devices, in both standard and enhanced cascading pull-down menus, with correlations better than r^2 = 0.90.
['David Ahlström']
Modeling and improving selection in cascading pull-down menus using Fitts' law, the steering law and force fields
286,453
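The combined model above rests on two standard laws: Fitts' law T = a + b·log2(D/W + 1) for acquiring a menu item at distance D of width W, and the steering law T = a + b·(A/W) for traversing a straight tunnel of length A and width W. A small illustration in Python (the coefficients are placeholders, not the fitted values from the experiment):

    import math

    def fitts_time(d, w, a=0.1, b=0.15):
        return a + b * math.log2(d / w + 1)   # pointing at a target

    def steering_time(length, width, a=0.05, b=0.2):
        return a + b * (length / width)       # steering along a straight tunnel

    # e.g. acquire a parent item, then steer rightward into its sub-menu
    print(fitts_time(d=120, w=20) + steering_time(length=80, width=20))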
Diffusion Tensor Imaging (DTI) and fiber tracking provide unique insight into the 3D structure of fibrous tissues in the brain. However, the output of fiber tracking contains a significant amount of uncertainty accumulated in the various steps of the processing pipeline. Existing DTI visualization methods do not present these uncertainties to the end-user. This creates a false impression of precision and accuracy that can have serious consequences in applications that rely heavily on risk assessment and decision-making, such as neurosurgery. On the other hand, adding uncertainty to an already complex visualization can easily lead to information overload and visual clutter. In this work, we propose Illustrative Confidence Intervals to reduce the complexity of the visualization and present only those aspects of uncertainty that are of interest to the user. We look specifically at the uncertainty in fiber shape due to noise and modeling errors. To demonstrate the flexibility of our framework, we compute this uncertainty in two different ways, based on (1) fiber distance and (2) the probability of a fiber connection between two brain regions. We provide the user with interactive tools to define multiple confidence intervals, specify visual styles and explore the uncertainty with a Focus+Context approach. Finally, we have conducted a user evaluation with three neurosurgeons to evaluate the added value of our visualization.
['Ralph Brecheisen', 'Bram Platel', 'B. M. Haar Romeny', 'Anna Vilanova']
Illustrative uncertainty visualization of DTI fiber pathways
363,131
Review of: Analysis of Boolean Functions by Ryan O'Donnell
['Daniel Apon']
Review of: Analysis of Boolean Functions by Ryan O'Donnell
953,319
We consider the problem of inference on one of the two parameters of a probability distribution when we have some prior information on a nuisance parameter. When a prior probability distribution on this nuisance parameter is given, the marginal distribution is the classical tool to account for it. If the prior distribution is not given, but we have partial knowledge such as a fixed number of moments, we can use the maximum entropy principle to assign a prior law and thus go back to the previous case. In this work, we consider the case where we only know the median of the prior and propose a new tool for this case. This new inference tool looks like a marginal distribution. It is obtained by first remarking that the marginal distribution can be considered as the mean value of the original distribution with respect to the prior probability law of the nuisance parameter, and then, by using the median in place of the mean.
['Adel Mohammadpour', 'Ali Mohammad-Djafari']
Inference with the Median of a Prior
131,616
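The construction above can be stated as a worked formula. With θ the parameter of interest and λ the nuisance parameter with prior π, the classical marginal is the prior mean of the likelihood,

    m(x | θ) = E_π[ p(x | θ, λ) ] = ∫ p(x | θ, λ) π(λ) dλ,

and the proposed tool replaces that mean by the median, m̃(x | θ) = med_π[ p(x | θ, λ) ]. Since the median, unlike the mean, commutes with monotone transformations, m̃ reduces to p(x | θ, med(λ)) whenever the likelihood is monotone in λ, which is why only the median of the prior is needed.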
Proof of Knowledge on Monotone Predicates and its Application to Attribute-Based Identifications and Signatures.
['Hiroaki Anada', 'Seiko Arita', 'Kouichi Sakurai']
Proof of Knowledge on Monotone Predicates and its Application to Attribute-Based Identifications and Signatures.
983,680
Recently many statistical learning techniques have been applied to the prediction of financial variables. The aim of this paper is to conduct a comprehensive study of the application of statistical learning techniques to predict the trend of the return of the high-frequency Korea composite stock price index (KOSPI) 200 index data, using the information from the one-minute time series of the spot index, the futures index, and the foreign exchange rate. Through experiments, it is observed that the spot index change is better predictable with high-frequency time series data, and that the futures index information significantly improves the prediction accuracy of the return trends of the spot index for high-frequency index data, while the exchange rate information does not. Also, a dimension reduction process before training helps to increase the accuracy, dramatically so for some classifiers. In addition, when a virtual trading strategy is applied with the trained classifiers, noticeably better profits can be achieved than with a simple buy-and-hold strategy.
['Youngdoo Son', 'Dong-jin Noh', 'Jaewook Lee']
Forecasting trends of high-frequency KOSPI200 index data using learning classifiers
263,929
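A minimal sketch of the pipeline shape described above, dimension reduction before training a trend classifier, using scikit-learn (feature construction from the spot/futures/FX minute series is elided; the component count and classifier are illustrative):

    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LogisticRegression

    # X: lagged one-minute returns of spot, futures and FX series
    # y: next-interval up/down trend of the spot index return
    clf = make_pipeline(StandardScaler(),
                        PCA(n_components=10),
                        LogisticRegression(max_iter=1000))
    # clf.fit(X_train, y_train); clf.score(X_test, y_test)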
Power density is a growing problem in high-performance processors in which small, high-activity resources overheat. Two categories of techniques, temporal and spatial, can address power density in a processor. Temporal solutions slow computation and heating either through frequency and voltage scaling or through stopping computation long enough to allow the processor to cool; both degrade performance. Spatial solutions reduce heat by moving computation from a hot resource to an alternate resource (e.g., a spare ALU) to allow cooling. Spatial solutions are appealing because they have negligible impact on performance, but they require availability of spatial slack in the form of spare or underutilized resource copies. Previous work focusing on spatial slack within a pipeline has proposed adding extra resource copies to the pipeline, which adds substantial complexity because the resources that overheat, issue logic, register files, and ALUs, are the resources in some of the tightest critical paths in the pipeline. Previous work has not considered exploiting the spatial slack already existing within pipeline resource copies. Utilization can be quite asymmetric across resource copies, leaving some copies substantially cooler than others. We observe that asymmetric utilization within copies of three key back-end resources, the issue queue, register files, and ALUs, creates spatial slack opportunities. By balancing asymmetry in their utilization, we can reduce power density. Scheduling policies for these resources were designed for maximum simplicity before power density was a concern; our challenge is to address asymmetric heating while keeping the pipeline simple. Balancing asymmetric utilization reduces the need for other performance-degrading temporal power-density techniques. While our techniques do not obviate temporal techniques in high-resource-utilization applications, we greatly reduce their use, improving overall performance.
['Michael D. Powell', 'Ethan Schuchman', 'T. N. Vijaykumar']
Balancing Resource Utilization to Mitigate Power Density in Processor Pipelines
670
Highlights: test-on-source multilingual speech understanding; construction of graphs of words from multiple translations; semantic decoding of graphs of words using statistical models; unsupervised portability for Spoken Language Understanding. In this paper, we present an approach to multilingual Spoken Language Understanding based on a process of generalization of multiple translations, followed by a specific methodology to perform a semantic parsing of these combined translations. A statistical semantic model, which is learned from a segmented and labeled corpus, is used to represent the semantics of the task in a language. Our goal is to allow the users to interact with the system using languages other than the one used to train the semantic models, avoiding the cost of segmenting and labeling a training corpus for each language. In order to reduce the effect of translation errors and to increase the coverage, we propose an algorithm to generate graphs of words from different translations. We also propose an algorithm to parse graphs of words with the statistical semantic model. The experimental results confirm the good behavior of this approach using French and English as input languages in a spoken language understanding task that was developed for Spanish.
['Marcos Calvo', 'Lluís-Felip Hurtado', 'Fernando García 0001', 'Emilio Sanchis', 'Encarna Segarra']
Multilingual Spoken Language Understanding using graphs and multiple translations
646,246
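A minimal sketch of the graph-of-words idea from the abstract above: each translation contributes a path, and paths are merged on shared words, so the resulting graph also covers recombinations of the input translations. The merging rule here is an illustrative simplification, not the authors' algorithm.

```python
# Illustrative graph-of-words construction: merge several translations of the
# same utterance into one directed graph by unifying identical words, so the
# graph generalizes over the individual translations.
from collections import defaultdict

def word_graph(translations):
    edges = defaultdict(set)
    for sentence in translations:
        words = ["<s>"] + sentence.lower().split() + ["</s>"]
        for a, b in zip(words, words[1:]):
            edges[a].add(b)
    return edges

translations = [
    "i want a ticket to madrid",
    "i would like a ticket to madrid",
    "i want one ticket for madrid",
]
for node, succs in word_graph(translations).items():
    print(f"{node} -> {sorted(succs)}")
```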
In this paper, we present a new, generic approach for Simultaneous Localization and Mapping (SLAM). First of all, we propose an abstraction of the underlying sensor data using Normal Distribution Transform (NDT) maps that makes our approach independent of the sensor used and the dimension of the generated maps. We present several modifications of the original NDT mapping to handle free-space measurements explicitly. We additionally describe a method to detect and handle dynamic objects such as moving persons. This enables the usage of the proposed approach in highly dynamic environments. In the second part of this paper we describe our graph-based SLAM approach that is designed for lifelong usage. Therefore, the memory and computational complexity are limited by pruning the pose graph in an appropriate way. Highlights: We present a new mapping approach that combines normal distribution transform (NDT) and occupancy mapping. The mapping approach is fully generic and suitable for 2D and 3D mapping with different sensors. We describe a method for detecting and handling dynamic objects to allow mapping in highly dynamic environments. Based on the mapping algorithm, a graph-based SLAM algorithm is described. The presented SLAM approach allows lifelong mapping and localization in real-world applications.
['Erik Einhorn', 'Horst-Michael Gross']
Generic NDT mapping in dynamic environments and its application for lifelong SLAM
331,540
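For readers unfamiliar with the NDT representation the abstract above builds on, a minimal sketch follows: each grid cell summarizes the points falling into it by a mean and a covariance. This is only the basic representation; the paper's free-space handling and dynamic-object detection are not reproduced.

```python
# Minimal sketch of a Normal Distributions Transform map: every grid cell
# holds the mean and covariance of the points inside it, giving a compact,
# sensor-independent abstraction of the raw scan. Illustrative only.
import numpy as np
from collections import defaultdict

def ndt_map(points, cell_size=1.0):
    """points: (N, D) array of 2D or 3D points -> {cell index: (mean, covariance)}."""
    buckets = defaultdict(list)
    for p in points:
        buckets[tuple(np.floor(p / cell_size).astype(int))].append(p)
    ndt = {}
    for cell, pts in buckets.items():
        pts = np.asarray(pts)
        if len(pts) >= 3:               # need a few points for a stable covariance
            ndt[cell] = (pts.mean(axis=0), np.cov(pts.T))
    return ndt

# 2D example: 200 noisy points along a wall segment
rng = np.random.default_rng(0)
wall = np.column_stack([np.linspace(0, 5, 200), 2 + 0.05 * rng.normal(size=200)])
for cell, (mu, cov) in sorted(ndt_map(wall).items()):
    print(cell, "mean:", mu.round(2))
```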
Self-tuning PI Controllers via Fuzzy Cognitive Maps
['Engin Yesil', 'M. Furkan Dodurka', 'Ahmet Sakalli', 'Cihan Ozturk', 'Cagri Guzay']
Self-tuning PI Controllers via Fuzzy Cognitive Maps
580,718
Control architectures, such as the LAAS architecture, CLARATY and HARPIC, have been developed to provide autonomy to robots. To achieve a robot's task, these control architectures plan sequences of sensorimotor behaviors. Currently carried out by roboticians, the design of sensorimotor behaviors is a truly complex task that can require many hours of hard work and intensive computation. In this paper, we propose a Constraint Programming-based framework to support roboticians during the design of sensorimotor behaviors. A constraint network acquisition platform and a CSP-based planner are used to automatically design sensorimotor behaviors. Moreover, our architecture exploits the propagation properties of the acquired CSPs to supervise the execution of a given sensorimotor behavior. Experimental results are presented to validate our approach.
['Mathias Paulin', 'Christian Bessiere', 'Jean Sallantin']
Automatic Design of Robot Behaviors through Constraint Network Acquisition
531,989
A Simple and Transparent Approach to Language-Technology Work with Text Data: Examples from the e-Identity Project
['Fritz Kliche', 'Ulrich Heid']
A Simple and Transparent Approach to Language-Technology Work with Text Data: Examples from the e-Identity Project
755,023
An approximation technique is presented to evaluate the dependability of FDDI networks. This technique, based on the most likely paths to failure concept, is simple and practical. It may be applied easily to evaluate and compare the dependability of different FDDI network configurations. The effects of various network parameters on the network availability are examined. We conclude that, in order to guarantee high availability, an FDDI network backbone should be interconnected using dual-attachment concentrators. Furthermore, dual-homing configurations are required for high-availability paths between end stations and the backbone. For less stringent availability requirements, single-attachment concentrator trees with single attachment stations may suffice. We also discuss how the technique may be extended easily to more general heterogeneous networks including Token Ring and Ethernet.
['Marc Willebeek-LeMair', 'Perwez Shahabuddin']
Approximating dependability measures of computer networks: an FDDI case study
474,807
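As a back-of-the-envelope illustration of why the abstract above recommends dual-attachment and dual-homing configurations, the following sketch computes steady-state availability for series and parallel component structures; the MTBF/MTTR figures are illustrative assumptions, not values from the paper.

```python
# Availability of series/parallel structures: a toy model of single attachment
# versus dual homing. All numbers are made up for the example.
def avail(mtbf, mttr):                 # steady-state availability
    return mtbf / (mtbf + mttr)

def series(*parts):                    # every component must be up
    p = 1.0
    for a in parts:
        p *= a
    return p

def parallel(*paths):                  # at least one redundant path must be up
    q = 1.0
    for a in paths:
        q *= 1.0 - a
    return 1.0 - q

station = avail(mtbf=50_000, mttr=4)   # hours
link = avail(mtbf=100_000, mttr=2)

single_attached = series(station, link)             # one path to the backbone
dual_homed = series(station, parallel(link, link))  # redundant paths to the backbone
print(f"single attachment: {single_attached:.8f}")
print(f"dual homing:       {dual_homed:.8f}")
```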
This paper reports a generic analysis of null-boundary and periodic-boundary 3-neighborhood multiple attractor cellular automata (MACA), presenting a comparative study of their use in classification. Cellular automata (CA) are nowadays an essential tool for researchers in the areas of pattern recognition, pattern generation, testing, fault diagnosis, and so on. So, general knowledge of CA is, to some extent, a must for researchers in these areas. A CA may be linear or non-linear in behavior. A linear/additive CA employs XOR/XNOR logic, while a non-linear CA employs AND/OR/NOT logic. This paper presents a graph analysis along with the state transition behavior of CA cells. A rule vector graph (RVG) is generated from the rule vector (RV) of a CA. Linear-time algorithms are reported for the generation of the RVG. MACA provide an implicit memory to store the patterns. The search operation to identify the class of a pattern out of several classes boils down to running a CA for one time step. This demands storage of the RV and seed values. MACA are based on the sound theoretical foundation of CA technology. This paper concentrates on MACA since they are responsible for classifying the various types of patterns.
['Anirban Kundu', 'Alok Ranjan Pal', 'Tanay Sarkar', 'Moutan Banerjee', 'Sutirtha Kr. Guha', 'Debajyoti Mukhopadhyay']
Comparative study on Null Boundary and Periodic Boundary 3-neighborhood Multiple Attractor Cellular Automata for classification
33,126
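To make the null-boundary/periodic-boundary distinction from the abstract concrete, here is a minimal sketch of one step of a linear (XOR-based) 3-neighborhood CA under both boundary conditions; rule 90 (next state = left XOR right) is used purely as an example of an additive rule.

```python
# One step of a linear 3-neighborhood CA. Under null boundary, cells beyond
# the ends are fixed at 0; under periodic boundary, the ends wrap around.
def step(cells, periodic):
    n = len(cells)
    out = []
    for i in range(n):
        left = cells[(i - 1) % n] if periodic else (cells[i - 1] if i > 0 else 0)
        right = cells[(i + 1) % n] if periodic else (cells[i + 1] if i < n - 1 else 0)
        out.append(left ^ right)          # rule 90: additive, XOR logic
    return out

state = [0, 0, 0, 1, 0, 0, 0, 1]
for name, periodic in (("null boundary", False), ("periodic boundary", True)):
    s = state
    for _ in range(3):
        s = step(s, periodic)
    print(f"{name}: {s}")
```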
Kuala Lumpur city is located at the confluence of two rivers and is a flood-prone area. With rapid development and uncontrolled town planning, the city has experienced several major flash flood incidents that have caused tremendous damage to the country. This research describes a study made to model and simulate the flash flood incident that struck Kuala Lumpur on 10 June 2007 using 3D computer graphics and fluid simulation techniques. The aim is to examine the stability and effectiveness of this approach as a solution tool for environmental hazard studies. A particle-based method was used to model the fluid objects using MAYA software. Light detection and ranging (LIDAR) data and remote sensing imagery were used to model the study area. The main contribution of this study is the introduction of this approach to enhance realistic visualization for environmental studies, thus enabling better planning and countermeasures to prevent the disaster.
['Jasrul Nizam Ghazali', 'Amirrudin Kamsin']
A Real Time Simulation of Flood Hazard
500,388
Within the Aurora2 experimental framework, the aim of this study is to determine what the relative contributions of spectral shape and energy features are to the mismatch observed between clean training and noisy test data. In addition to measurements on the baseline Aurora2 system, recognition performance was also evaluated after the application of time domain noise reduction (TDNR) and histogram normalisation (HN) in the cepstral domain. The results indicate that, for the Aurora2 digit recognition task, TDNR, HN, as well as a combination of the two techniques achieve higher recognition rates by reducing the mismatch in the energy part of acoustic feature space. The corresponding mismatch reduction in the spectral shape features yields hardly any gain in recognition performance.
['F. de Wet', 'J.M. de Veth', 'Bert Cranen', 'L.W.J. Boves']
The impact of spectral and energy mismatch on the Aurora2 digit recognition task
519,678
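A minimal sketch of histogram normalisation in the spirit of the abstract above: each feature dimension is mapped through its empirical CDF onto a reference (here standard normal) distribution. The gamma-distributed input is an illustrative stand-in for a noisy energy feature, not the Aurora2 data.

```python
# Histogram normalisation via rank/CDF matching: noisy samples are mapped to
# a standard normal reference distribution, removing distribution mismatch.
import numpy as np
from scipy.stats import norm

def histogram_normalise(x):
    """Map samples x onto a standard normal via their empirical CDF."""
    ranks = np.argsort(np.argsort(x))
    u = (ranks + 0.5) / len(x)           # empirical CDF values in (0, 1)
    return norm.ppf(u)

# Skewed, shifted stand-in for a noisy energy feature
noisy_c0 = 5 + 2 * np.random.default_rng(0).gamma(2.0, size=1000)
normalised = histogram_normalise(noisy_c0)
print(normalised.mean().round(3), normalised.std().round(3))   # ~0, ~1
```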
Pervasive information and communication technologies and large-scale complex systems are strongly influencing today's networked society. Understanding the behaviour and impact of such distributed, often emergent systems on society is of vital importance. This paper proposes a new approach to better understand the complexity of large-scale participatory systems in the context of smart grids. Multi-agent based distributed simulation of realistic multi-actor scenarios, incorporating real-time dynamic data and the active participation of actors, is the means to this purpose. The Symphony experiment platform, developed to study complex emergent behaviours and to facilitate the analysis of system dynamics and actor interactions, is the enabler.
['Zulkuf Genc', 'Michel A. Oey', 'Hendrik van Antwerpen', 'Frances M. T. Brazier']
Dynamic Data-Driven Experiments in the Smart Grid Domain with a Multi-agent Platform
822,855
It is usually hard to achieve high-quality join results with machines alone. We adopt crowdsourcing to improve the quality of joins. Depending on the number of generated pairs, the overall cost of hiring workers to do the verification can be expensive. We propose a hybrid approach to generate pairs by leveraging attributes, which combines category, sorting and clustering techniques, called CSCER. We also propose an adaptive attribute-selection strategy to efficiently generate pairs based on attributes. Experiments on a real crowdsourcing platform using real datasets indicate that our approaches reduce the overall cost compared to existing methods and achieve high-quality join results.
['Jianhong Feng', 'Jianhua Feng', 'Huiqi Hu']
Leveraging Attributes and Crowdsourcing for Join
643,186
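A hedged sketch of attribute-based pair generation in the spirit of CSCER as described above: block records by a categorical attribute, sort within each block, and pair only records within a small window, reducing the number of pairs sent to the crowd. The function and field names are assumptions, not the paper's API.

```python
# Candidate pair generation with category pruning plus sorted-neighbourhood
# windowing. Only the surviving pairs would be verified by crowd workers.
from collections import defaultdict

def candidate_pairs(records, category_key, sort_key, window=2):
    blocks = defaultdict(list)
    for r in records:
        blocks[category_key(r)].append(r)            # category pruning
    pairs = []
    for block in blocks.values():
        block.sort(key=sort_key)                     # sorting within a block
        for i, r in enumerate(block):
            for s in block[i + 1 : i + 1 + window]:  # windowed neighbours only
                pairs.append((r, s))
    return pairs

products = [
    {"cat": "phone", "name": "iPhone 6"},
    {"cat": "phone", "name": "iPhone 6s"},
    {"cat": "phone", "name": "Galaxy S6"},
    {"cat": "laptop", "name": "ThinkPad X1"},
]
for a, b in candidate_pairs(products, lambda r: r["cat"], lambda r: r["name"]):
    print(a["name"], "<->", b["name"])   # pairs to verify via the crowd
```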
Feature-oriented programming (FOP) targets the encapsulation of software building blocks as features which better match the specification of requirements. As a result, programmers find it easier to design and compose different variations of their systems. Change-based FOP (CFOP) proposes to specify features as sets of first-class change objects which can add, modify or delete building blocks to or from a software system. First, we show how CFOP supports the modularization of crosscutting functionality. Afterwards, we expose a weakness of CFOP which is a consequence of features holding extensional sets of changes. We elaborate on a solution to that weakness based on intensional changes: descriptions that can evaluate to an extension of changes.
['Peter Ebraert', "Theo D'Hondt", 'Tim Molderez', 'Dirk Janssens']
Intensional changes: modularizing crosscutting features
181,634
Reconfigurable optical add/drop multiplexers (ROADMs) manufactured with different designs and technologies are soon going to be available in the market. Unfortunately, the latest available ROADM architectures are either suffering from high insertion losses or high manufacturing costs that prevent their rapid deployment in the network. In this paper, we propose a low-loss hybrid architecture for a ROADM subsystem that combines the best features of the latest available ROADM designs. A metro network testbed has been designed and simulated in order to compare the performance of different ROADM modules. The obtained results indicate that our proposed hybrid-ROADM module performs better than the latest available ROADM subsystems and would reduce the overall network operating costs.
['C.A. Al Sayeed', 'Alex Vukovic', 'Oliver W. W. Yang', 'Heng Hua']
OPN04-5: Hybrid Low Loss Architecture for Reconfigurable Optical Add/Drop Multiplexer
217,408
Reliability is one of the key issues for the application of silicon carbide (SiC) diodes in high-power conversion systems. For instance, in high voltage direct current (HVDC) converters, the devices can be subjected to high voltage transients which drive them into avalanche. This paper presents an experimental evaluation of SiC diodes subjected to avalanche, and shows that the energy dissipation in the device can increase quickly and will not be uniformly distributed across the surface of the device. It has been observed that failure occurs at a fairly low energy level (< 0.3 J/cm²), on the edge of the die, where the electric field intensity is the greatest. The failure results in the collapse of the voltage across the diode (short-circuit failure mode). If a large current is maintained through the diode after its failure, the damage site is enlarged, masking the initial failure spot and eventually resulting in the destruction of the device and an open circuit.
['Ilyas Dchar', 'Cyril Buttay', 'Hervé Morel']
Avalanche robustness of SiC Schottky diode
887,128
A Review of: “Dangerous Enthusiasms: E-Government, Computer Failure, and Information System Development”
['Bryan Pfaffenberger']
A Review of: “Dangerous Enthusiasms: E-Government, Computer Failure, and Information System Development”
141,803
Mice with printing ability that master every labyrinth are considered with respect to the number of states and print symbols needed. A mouse with four states and three print symbols is given, which is used for a simplified construction of a mouse with only one print symbol. Furthermore, a labyrinth-mastering mouse with only one state and four print symbols is given.
['Horst Müller']
Improvements on printing mice in labyrinths
683,824
The well-known beyond-3G scenario depicts a diverse wireless networking world, a "network of wireless networks" accommodating a variety of radio technologies and mobile service requirements in a seamless manner. Achieving this vision raises significant research challenges in view of system coexistence, system scale, interoperability, and the design of evaluation tools that provide a framework for cost-effective assessment of optimisation algorithms within a heterogeneous system environment. The UNITE (virtual distributed testbed for optimisation and coexistence of heterogeneous systems) platform aims to address these research challenges by implementing an efficient, accurate and scalable virtual distributed testbed to support cross-system and cross-layer optimisation of heterogeneous systems in a unified manner. Through the use of the virtual testbed, cross-layer and cross-system interactions between next-generation wireless systems and protocols can be investigated without neglecting important real-system details.
['Jonathan Rodriguez', 'Atílio Gameiro', 'Christos Politis', 'George Kormentzas', 'Nicolas Ibrahim']
Virtual distributed testbed for optimisation and coexistence of heterogeneous systems
472,871
Editorial: Semantic Media Adaptation and Personalization
['Marios C. Angelides', 'Phivos Mylonas', 'Manolis Wallace']
Editorial: Semantic Media Adaptation and Personalization
231,444
A real-time optical camera communication (OCC) system was designed and implemented. The system jointly utilizes 4 intensity levels for each color and the 3 colors of each RGB LED of a 16×16 LED array, resulting in a 64-ary color-intensity modulation (CIM) carrying 6-bit coded information in each symbol. The LED array is refreshed at 82.5 Hz, and an FPGA-controlled commercial CMOS image sensor from a mobile phone operates at a frame rate of 330 fps, whose outputs are fed into a PC for real-time signal demodulation. Excluding 60 border LEDs and 4 spatial synchronization LEDs, the remaining 192 data-carrying LEDs are able to transmit data at an effective rate of about 95 kbps. A bit error rate below 10^-3 over a communication distance of up to 1.2 m is achieved without any external optical lens.
['Peng Tian', 'Wei Huang', 'Zhengyuan Xu']
Design and experimental demonstration of a real-time 95kbps optical camera communication system
893,267
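The rates quoted in the abstract above can be checked directly: 4 intensity levels on each of 3 colors give 4^3 = 64 symbols, i.e. 6 bits per LED per frame, and 192 data LEDs refreshed at 82.5 Hz then carry about 95 kbps.

```python
# Worked check of the throughput figures quoted in the abstract.
from math import log2

levels, colours = 4, 3
bits_per_led = log2(levels ** colours)        # log2(64) = 6.0 bits per LED per frame
data_leds, refresh_hz = 192, 82.5
rate_bps = data_leds * bits_per_led * refresh_hz
print(f"{rate_bps:.0f} bit/s ~= {rate_bps / 1000:.1f} kbps")   # 95040 ~= 95.0 kbps
```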
The importance of annotations, as a by-product of the reading activity, cannot be overstated. Annotations help users in the process of analyzing, re-reading, and recalling detailed facts such as prior analyses and relations to other works. As electronic reading becomes pervasive, digital annotations will become part of the essential records of the reading activity. But creating and rendering annotations on a 3D book and other objects in a 3D workspace is non-trivial. In this paper, we present our exploration of how to use 3D graphics techniques to create realistic annotations at acceptable frame rates. We discuss the pros and cons of several techniques and detail our hybrid solution.
['Lichan Hong', 'Ed Huai-hsin Chi', 'Stuart K. Card']
Annotating 3D electronic books
387,625
This paper describes the in-field operation of two interacting autonomous marine vehicles to demonstrate the suitability of interval programming (IvP), a novel mathematical model for multiple-objective optimization. Broadly speaking, IvP coordinates competing control needs such as primary task execution, which depends on a sufficient position estimate, and vehicle maneuvers that will improve that position estimate. In this work, vehicles cooperate to improve their position estimates using a sequence of vehicle-to-vehicle range estimates from acoustic modems. Coordinating primary task execution and sensor quality maintenance is a ubiquitous problem, especially in underwater marine vehicles. This work represents the first use of multiobjective optimization in a behavior-based architecture to address this problem.
['Michael R. Benjamin', 'M. Grund', 'Paul Newman']
Multi-objective optimization of sensor quality with efficient marine vehicle task execution
275,208
This paper describes an online handwritten Japanese character string recognition system based on conditional random fields, which integrates the information of character recognition, linguistic context and geometric context in a principled framework, and can effectively overcome the variable length of candidate segmentation. For geometric context, we employ both unary and binary feature functions, as well as the ones relevant and irrelevant to character classes. Experimental results show that the CRF based method outperforms the method with normalized path evaluation criterion, and the geometric context benefits the performance significantly.
['Xiang-Dong Zhou', 'Cheng-Lin Liu', 'Masaki Nakagawa']
Online Handwritten Japanese Character String Recognition Using Conditional Random Fields
292,374
Post-processing is an important part of a Quantum Key Distribution (QKD) system. This paper discusses the motivation for designing multi-threaded QKD post-processing software, its architecture, its communication protocol specification, and synchronization issues. It is shown that efficiency is gained from multi-thread programming. Future work to improve the software is also discussed.
['Xiaxiang Lin', 'Xiang Peng', 'Hao Yan', 'Wei Jiang', 'Tian Liu', 'Hong Guo']
An Implementation of Post-Processing Software in Quantum Key Distribution
157,300
Usability defects that escape testing can have a negative impact on the success of software. It is quite common for projects to have a tight timeline, and for these projects it is crucial to ensure that effective processes are in place. One way to ensure project success is to improve the manual processes of usability inspection via automation. An automated usability tool enables the evaluator to reduce manual effort and focus on capturing more defects in a shorter period of time, thus improving the effectiveness of the usability inspection and minimizing defect escapees. Many usability testing and inspection methods exist; the scope of this paper is the automation of Heuristic Evaluation (HE) procedures. The Usability Management System (UMS) was developed to automate as many manual steps as possible throughout the software development life cycle (SDLC). It is important for the various teams within the organization to understand the benefits of automation. The results show that with the help of automation more usability defects can be detected. Hence, enhancing the effectiveness of usability evaluation with an automated Heuristic Evaluation system is feasible.
['Ashok Sivaji', 'Shi-Tzuaan Soo', 'Mohamed Redzuan Abdullah']
Enhancing the Effectiveness of Usability Evaluation by Automated Heuristic Evaluation System
327,646
Nowadays the design of complex systems requires the cooperation of several teams belonging to different cultures and using different languages. New design and verification methods are therefore needed to handle multilanguage approaches. This paper presents an approach for the communication interface synthesis of multilanguage specifications. In the proposed approach, communication synthesis transforms a system composed of subsystems, described in different languages and communicating via different communication primitives through communication channels, into a set of interconnected processors that communicate via signals and share communication controls. An example illustrates the usefulness of this approach for the design of an adaptive speed control system described in SDL and Matlab.
['Fabiano Hessel', 'P. Coste', 'P. LeMarrec', 'Nacer-Eddine Zergainoh', 'Jean-Marc Daveau', 'Ahmed Amine Jerraya']
Communication interface synthesis for multilanguage specifications
135,343
Summary: GENOME proposes a rapid coalescent-based approach to simulate whole genome data. In addition to features of standard coalescent simulators, the program allows for recombination rates to vary along the genome and for flexible population histories. Within small regions, we have evaluated samples simulated by GENOME to verify that GENOME provides the expected LD patterns and frequency spectra. The program can be used to study the sampling properties of any statistic for a whole genome study. Availability: The program and C++ source code are available online at http://www.sph.umich.edu/csg/liang/genome/ Contact: lianglim@umich.edu Supplementary information: Supplementary data are available at Bioinformatics online.
['Liming Liang', 'Sebastian Zöllner', 'Gonçalo R. Abecasis']
GENOME: a rapid coalescent-based whole genome simulator
544,884
This article describes a method for document/speech alignment based on explicit verbal references to documents and parts of documents, in the context of multimodal meetings. The article focuses on the two main stages of dialogue processing for alignment: the detection of the expressions referring to documents in transcribed speech, and the recognition of the documents and document elements that they refer to. The detailed evaluation of the implemented modules, first separately and then in a pipeline, shows that results are well above baseline values. The integration of this method with other techniques for document/speech alignment is finally discussed.
['Andrei Popescu-Belis', 'Denis Lalanne']
Detection and resolution of references to meeting documents
863,528
In this paper we report on a study of implicit feedback models for unobtrusively tracking the information needs of searchers. Such models use relevance information gathered from searcher interaction and can be a potential substitute for explicit relevance feedback. We introduce a variety of implicit feedback models designed to enhance an Information Retrieval (IR) system's representation of searchers' information needs. To benchmark their performance we use a simulation-centric evaluation methodology that measures how well each model learns relevance and improves search effectiveness. The results show that a heuristic-based binary voting model and one based on Jeffrey's rule of conditioning [5] outperform the other models under investigation.
['Ryen W. White', 'Joemon M. Jose', 'C. J. van Rijsbergen', 'Ian Ruthven']
A Simulated Study of Implicit Feedback Models
284,451
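For reference, the model based on Jeffrey's rule of conditioning mentioned above updates a probability when the evidence itself is uncertain: P'(A) = sum_i P(A|E_i) * P'(E_i). A worked sketch with made-up relevance numbers:

```python
# Jeffrey's rule of conditioning: revise P(A) when the probabilities over an
# evidence partition {E_i} shift, without assuming the evidence is certain.
def jeffrey_update(p_a_given_e, p_e_new):
    assert abs(sum(p_e_new) - 1.0) < 1e-9      # new evidence probabilities must sum to 1
    return sum(pa * pe for pa, pe in zip(p_a_given_e, p_e_new))

# A = "document is relevant"; E_1 / E_2 = "user did / did not dwell on it".
p_rel_given_e = [0.7, 0.2]                     # illustrative conditionals
print(jeffrey_update(p_rel_given_e, [0.5, 0.5]))   # prior evidence mix -> 0.45
print(jeffrey_update(p_rel_given_e, [0.9, 0.1]))   # after observed dwell -> 0.65
```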
Wireless Network-on-Chip (WNoC) architectures have emerged as a promising interconnection infrastructure to address the performance limitations of traditional wire-based multihop NoCs. Nevertheless, WNoC systems encounter high failure rates due to problems pertaining to the integration and manufacturing of wireless interconnects in nano-domain technology. As a result, permanent failures may lead to the formation of arbitrarily shaped faulty regions in the interconnection network, which can break down the whole system. This issue has not been investigated in previous studies on WNoC architectures. Our solution advocates the adoption of communication structures with both nodes and links on disjoint paths. On the other hand, the costs imposed by WNoC design must remain reasonable. Hence, a novel approach to designing an optimized fault-tolerant hybrid hierarchical WNoC architecture, enhancing performance as well as minimizing system costs, is proposed. The experimental results indicate that the robustness of the proposed design is significantly enhanced in comparison with its fault-tolerant wire-based counterparts in the presence of various faulty regions under both synthetic and application-specific traffic patterns.
['Abbas Dehghani', 'Kamal Jamshidi']
A Novel Approach to Optimize Fault-Tolerant Hybrid Wireless Network-on-Chip Architectures
691,927
Semantic Web Solutions in the Automotive Industry
['Tania Tudorache', 'Luna Alani']
Semantic Web Solutions in the Automotive Industry
941,406
We introduce a new segmentation method based on second-order energies. Compared to related work it has a significantly lower computational complexity of O(N log N). The increased efficiency is achieved by integrating curvature approximation into a new bidirectional search scheme. Some heuristics are applied in the algorithm at the cost of exact energy minimisation. Our novel pseudo-elastica core algorithm is then incorporated into a user-guided segmentation scheme which generalizes classic first-order path-based schemes to second-order energies while maintaining the same low complexity. Our results suggest that, compared to first-order approaches, it achieves similar or better results and usually requires considerably less user input. As opposed to a recently introduced efficient second-order scheme, both closed contours and open contours with fixed endpoints can be computed with our technique.
['Matthias Krueger', 'Patrice Delmas', "Georgy L. Gimel'farb"]
Efficient image segmentation using weighted Pseudo-Elastica
220,651
Motivated by the problem of efficiently collecting data from wireless sensor networks via a mobile sink, we present an accelerated random walk on random geometric graphs (RGGs). Random walks in wireless sensor networks can serve as fully local, lightweight strategies for sink motion that significantly reduce energy dissipation but introduce higher latency in the data collection process. In most cases, random walks are studied on graphs such as G(n,p) and the grid. Instead, we here choose the RGG model, which abstracts more accurately the spatial proximity in a wireless sensor network. We first evaluate an adaptive walk (the random walk with inertia) on the RGG model; its performance proved to be poor and led us to define and experimentally evaluate a novel random walk that we call the γ-stretched random walk. Its basic idea is to favour visiting distant neighbours of the current node, towards reducing node overlap and accelerating the cover time. We also define a new performance metric called proximity cover time which, along with other metrics such as visit overlap statistics and proximity variation, we use to evaluate the performance properties and features of the various walks.
['Constantinos Marios Angelopoulos', 'Sotiris E. Nikoletseas', 'Dimitra Patroumpa', 'Christoforos Raptopoulos']
Efficient collection of sensor data via a new accelerated random walk
378,296
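A hedged reading of the γ-stretched random walk described in the abstract above: at each step the walker chooses a neighbour with probability proportional to its distance raised to the power γ, so larger γ favours distant neighbours and reduces revisits. The details below are an illustrative interpretation, not the paper's exact definition.

```python
# Distance-biased neighbour selection: weight each neighbour by distance**gamma.
# gamma = 0 recovers the simple (uniform) random walk.
import math, random

def gamma_stretched_step(pos, neighbours, gamma=2.0, rng=random):
    weights = [math.dist(pos, q) ** gamma for q in neighbours]
    return rng.choices(neighbours, weights=weights, k=1)[0]

random.seed(0)
pos = (0.0, 0.0)
nbrs = [(0.1, 0.0), (0.5, 0.2), (0.9, 0.1)]   # neighbours within the RGG radius
counts = {q: 0 for q in nbrs}
for _ in range(10_000):
    counts[gamma_stretched_step(pos, nbrs)] += 1
print(counts)                                  # the farthest neighbour dominates
```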
We present a mathematical analysis of transformations used in the fast calculation of the inverse square root for single-precision floating-point numbers. Optimal values of the so-called magic constants are derived in a systematic way, minimizing either absolute or relative errors at subsequent stages of the discussed algorithm.
['Leonid Moroz', 'Cezary J. Walczyk', 'Andriy Hrynchyshyn', 'Vijay Holimath', 'Jan L. Cieśliński']
Fast calculation of inverse square root with the use of magic constant - analytical approach
690,810
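For context, the magic-constant scheme analysed in the paper has the classic single-precision form below (emulated here with Python's struct); 0x5F3759DF is the well-known historical constant, while the paper derives optimal variants that are not reproduced here.

```python
# Magic-constant inverse square root: reinterpret the float's bits as an
# integer, shift and subtract from the constant, then refine with one
# Newton-Raphson step.
import struct

def fast_inv_sqrt(x):
    i = struct.unpack('<I', struct.pack('<f', x))[0]   # float bits as unsigned int
    i = 0x5F3759DF - (i >> 1)                          # magic-constant initial guess
    y = struct.unpack('<f', struct.pack('<I', i))[0]
    return y * (1.5 - 0.5 * x * y * y)                 # one Newton-Raphson step

for x in (0.25, 1.0, 2.0, 100.0):
    print(x, fast_inv_sqrt(x), x ** -0.5)              # close agreement after one step
```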
Visual query languages represent an evolution, in terms of understandability and adaptability, with respect to traditional textual languages. We present an iconic query system that enables the interaction of a novice user with a relational database. Our goal is to help a novice user learn and comprehend the relational data model and a textual query language such as SQL through the use of the iconic metaphor. In this sense our approach is different from most of the visual query systems proposed in the literature, which present the user with a higher-level query language, hiding the underlying data model. We also present results from an experiment conducted with first-year students to evaluate the effectiveness of our approach.
['Lerina Aversano', 'Gerardo Canfora', 'A. De Lucia', 'Silvio Stefanucci']
Understanding SQL through iconic interfaces
339,160
Several different uncertain inference systems (UISs) have been developed for representing uncertainty in rule-based expert systems. Some of these, such as Mycin's Certainty Factors, Prospector, and Bayes' Networks, were designed as approximations to probability, and others, such as Fuzzy Set Theory and Dempster-Shafer Belief Functions, were not. How different are these UISs in practice, and does it matter which you use? When combining and propagating uncertain information, each UIS must, at least by implication, make certain assumptions about correlations not explicitly specified. The maximum entropy principle, with minimum cross-entropy updating, provides a way of making assumptions about the missing specification that minimizes the additional information assumed, and thus offers a standard against which the other UISs can be compared. We describe a framework for the experimental comparison of the performance of different UISs, and provide some illustrative results.
['Ben P. Wise', 'Max Henrion']
A Framework for Comparing Uncertain Inference Systems to Probability
547,217
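As a small illustration of how the combination rules being compared above can diverge, the following sketch contrasts Mycin-style certainty-factor combination with a simple Bayesian update under an independence assumption; the numbers are made up for the example.

```python
# Two of the evidence-combination rules compared in the paper, side by side.
def cf_combine(cf1, cf2):
    """Mycin parallel combination for two positive certainty factors."""
    return cf1 + cf2 * (1 - cf1)

def bayes_combine(prior, lr1, lr2):
    """Posterior from a prior and two independent likelihood ratios (odds form)."""
    odds = prior / (1 - prior) * lr1 * lr2
    return odds / (1 + odds)

print(cf_combine(0.6, 0.5))           # 0.8
print(bayes_combine(0.3, 3.0, 2.0))   # ~0.72
```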
Reliable and accurate geolocation is essential for airborne and land-based remote sensing applications. The detection, discrimination, and remediation of unexploded ordnance (UXO) and other munitions and explosives of concern (MEC) using the currently available detection and geolocation technologies often yield unsatisfactory results, failing to detect all MEC present at a site or to discriminate between MEC and nonhazardous items. Thus, the goal of this paper is to design and demonstrate a high-accuracy geolocation methodology that will address centimeter-level relative accuracy requirements of a man-portable electromagnetic (EM) sensor system in open and impeded environments. The proposed system design is based on the tight quadruple integration of the Global Positioning System (GPS), the inertial measurement unit (IMU) system, the terrestrial radio-frequency (RF) system pseudolite (PL), and terrestrial laser scanning (TLS) to support high-accuracy geolocation for a noncontact EM mapping system in GPS-challenged environments. The key novel component of the proposed multisensor system is the integration of TLS that can provide centimeter-level positioning accuracy in a local frame and thus enables a GPS/IMU/PL-based navigation system to achieve both high absolute and relative positioning accuracy in GPS-impeded environments. This paper presents the concept design of the quadruple integration system, the algorithmic approach to data integration with a special emphasis on TLS integration with GPS/IMU/PL, and the performance assessment based on real data, where centimeter-level relative geolocation accuracy is demonstrated during the GPS signal blockage.
['Dorota A. Grejner-Brzezinska', 'Charles K. Toth', 'Hongxing Sun', 'Xiankun Wang', 'Chris Rizos']
A Robust Solution to High-Accuracy Geolocation: Quadruple Integration of GPS, IMU, Pseudolite, and Terrestrial Laser Scanning
155,686
Cryptographic protocols are usually specified in an informal language, with crucial elements of the protocol left implicit. We suggest that this is one reason that such protocols are difficult to analyse, and are subject to subtle and nonintuitive attacks. We present an approach for formalising and analysing cryptographic protocols in the situation calculus, in which all aspects of a protocol must be explicitly specified. We provide a declarative specification of underlying assumptions and capabilities, such that a protocol is translated into a sequence of actions to be executed by the principals, and a successful attack is an executable plan by an intruder that compromises the goal of the protocol. Our prototype verification software takes a protocol specification, translates it into a high-level situation calculus (Golog) program, and outputs any attacks that can be found. We describe the structure and operation of our prototype software, and discuss performance issues.
['Aaron Hunter', 'James P. Delgrande', 'Ryan McBride']
Protocol Verification in a Theory of Action
11,107
Synthesis is the question of how to construct a correct system from a specification. In recent years, synthesis has made major steps from a theorist's dream towards a practical design tool. While synthesis from a language like LTL has very high complexity, it can be quite practical when we are willing to compromise on the specification formalism. Similarly, we can take a pragmatic approach to synthesizing small distributed systems, a problem that is in general undecidable.
['Roderick Bloem']
Reactive synthesis
876,844