Columns: abstract (string, 8 to 9.19k chars) · authors (string, 9 to 1.96k chars) · title (string, 8 to 367 chars) · __index_level_0__ (int64, 13 to 1,000k)
Due to the secondary code and AltBOC modulation, primary code acquisition of the Galileo E5 signal can be complicated and requires additional hardware resources and algorithmic complexity in a receiver. In this paper, we propose a fast primary code acquisition technique for the Galileo E5 signal that reduces both hardware and algorithmic complexity while achieving similar or better performance in receiver operating characteristic (ROC) and mean acquisition time (MAT). The proposed technique employs a sub-Nyquist sampling scheme followed by a sample compression scheme to cause complete aliasing between the spectra of the E5a and E5b signals, minimizing the signal bandwidth and reducing the number of code phase hypotheses to search. We demonstrate with numerous Monte Carlo simulations that the MAT of the proposed technique is about half that of conventional AltBOC acquisition techniques.
['Wei Wang', 'Binhee Kim', 'Seung-Hyun Kong']
Sub-Nyquist Sampling Based Low Complexity Fast AltBOC Acquisition
832,837
Sonnet was designed as a visual language for implementing real-time processes. Early design and development of behavioral components has largely focused on the domain of music programming. However, Sonnet's architecture is well-suited to expressing many kinds of real-time activities. In particular, Sonnet is easily extended with new kinds of data types and behavioral components. We have developed a collection of visual output components for Sonnet, referred to collectively as Sonnet+Imager. Its design embodies aesthetically grounded representations of color, form and rhythm, as well as dynamics for each. Moreover, its value is enhanced by a flexible, modular architecture that treats these graphic entities and operations as first-class objects.
['Fred Collopy', 'Robert M. Fuhrer', 'David H. Jameson']
Visual music in a visual programming language
83,294
A new approach to testing component-based applications is presented, which exploits the practice in component-based systems of generating stub/skeleton modules and using these stubs/skeletons to construct a global perspective of end-to-end causality of inter-component communication. This global causality is captured regardless of reentrancy, callbacks, thread and process boundaries, and unsynchronized clocks. The captured logs created from the interception points are used to construct a system-wide component interaction model that can expose the inter-component dependencies usually hidden in static analysis of application code. These discovered dependencies are used to create a test boundary for applying a component test harness for that component and the set of dependent components. Similarly, the discovered dependencies can be applied to pruning the available test cases to identify those cases that are best suited to exposing defects when one or more components are changed. A particular advantage of the approach has been the ability to isolate the sequence of events that led up to a crash or a deadlock condition and view the entire system behavior (not just a particular thread's perspective or a linear log of intercepted messages).
['Jun Li', 'Keith Moore']
Exploiting global causality in testing of distributed and component-based applications
332,137
Sliding mode control is a nonlinear control technique which is robust against some classes of uncertainties and disturbances. However, this control produces chattering, which can cause instability due to unmodeled dynamics and can also damage actuators or the plant. There are essentially two ways to counter the chattering phenomenon. One is to use higher-order sliding mode; the other is to add a boundary layer around the switching surface and use continuous control inside the boundary. The problem with the first method is that the derivative of a certain state variable is not available for measurement, and therefore methods have to be used to observe that variable. In the second method, it is important that trajectories inside the boundary layer do not leave the boundary after entering it. Control laws producing chattering-free sliding mode using a boundary layer have been proposed, and the existence of solutions to the system using these control laws is presented.
['Pushkin Kachroo']
Existence of solutions to a class of nonlinear convergent chattering-free sliding mode control systems
439,581
Characterizing user access methods in heterogeneous cloud radio access networks (H-CRANs) is critical for performance optimization. Different from the user access in cloud radio access networks, the inter-tier interference from the macro base station has a great impact on user access in H-CRANs. In this paper, after considering the inter-tier interference, the ergodic rates of downlink H-CRANs for two proposed user access methods, namely distance based and cluster based, are analyzed. The corresponding mathematical expressions of ergodic rates have been derived. In particular, the closed-form expression for the upper bound of ergodic rate is proposed. Simulation results corroborate the accuracy of the derived expressions for these two methods. Furthermore, the cluster based user access method outperforms the distance based user access method when the intensity of remote radio heads is sufficiently high.
['Lingfeng Yang', 'Mugen Peng', 'Shi Yan', 'Shengli Zhang', 'Changqing Yang', 'Yong Wu']
Ergodic Rate Analysis for User Access in Downlink Heterogeneous Cloud Radio Access Networks
652,036
FPGA packing and placement without routability consideration can lead to unroutable results for high-utilization designs. Conventional FPGA packing and placement approaches are shown to have severe difficulty yielding good routability. In this paper, we propose an FPGA packing and placement engine called UTPlaceF that simultaneously optimizes wirelength and routability. A novel physical- and congestion-aware packing algorithm and several congestion-aware detailed placement techniques are proposed. Compared with the top 3 winners of the ISPD'16 FPGA placement contest, UTPlaceF achieves 3.3%, 7.7% and 28.3% better routed wirelength, respectively, with similar or shorter runtime.
['Wuxi Li', 'Shounak Dhar', 'David Z. Pan']
UTPlaceF: a routability-driven FPGA placer with physical and congestion aware packing
916,727
In recent years the necessity for handling different aspects of the system separately has introduced the need to represent SA (software architectures) from different viewpoints. In particular, behavioral views are recognized to be one of the most attractive features in the SA description, and in practical contexts, state diagrams and scenarios are the most widely used tools to model this view. Although very expressive, this approach has two drawbacks: system specification incompleteness and view consistency. Our work can be put in this context with the aim of managing incompleteness and checking view conformance: we propose the use of state diagrams and scenario models for representing system dynamics at the architectural level; they can be incomplete and we want to prove that they describe, from different viewpoints, the same system behavior. To reach this goal, we use the SPIN model checker and we implement a tool to manage the translation of architectural models in Promela and LTL.
['Paola Inverardi', 'Henry Muccini', 'Patrizio Pelliccione']
Automated check of architectural models consistency using SPIN
137,368
This paper focuses on data-intensive workflows and addresses the problem of scheduling workflow ensembles under cost and deadline constraints in Infrastructure as a Service (IaaS) clouds. Previous research in this area ignores file transfers between workflow tasks, which, as we show, often have a large impact on workflow ensemble execution. In this paper we propose and implement a simulation model for handling file transfers between tasks, featuring the ability to dynamically calculate bandwidth and supporting a configurable number of replicas, thus allowing us to simulate various levels of congestion. The resulting model is capable of representing a wide range of storage systems available on clouds: from in-memory caches (such as memcached), to distributed file systems (such as NFS servers) and cloud storage (such as Amazon S3 or Google Cloud Storage). We observe that file transfers may have a significant impact on ensemble execution; for some applications up to 90% of the execution time is spent on file transfers. Next, we propose and evaluate a novel scheduling algorithm that minimizes the number of transfers by taking advantage of data caching and file locality. We find that for data-intensive applications it performs better than other scheduling algorithms. Additionally, we modify the original scheduling algorithms to effectively operate in environments where file transfers take non-zero time.
['Piotr Bryk', 'Maciej Malawski', 'Gideon Juve', 'Ewa Deelman']
Storage-aware Algorithms for Scheduling of Workflow Ensembles in Clouds
622,174
A novel and efficient algorithm is presented in this paper to deal with DNS of turbulent reacting flows under the low-Mach-number assumption, with detailed chemistry and quasi-spectral accuracy. The temporal integration of the equations relies on an operator-splitting strategy, where chemical reactions are solved implicitly with a stiff solver and the convection-diffusion operators are solved with a Runge-Kutta-Chebyshev method. The spatial discretisation is performed with high-order compact schemes, and an FFT-based constant-coefficient spectral solver is employed to solve a variable-coefficient Poisson equation. The numerical implementation takes advantage of the 2DECOMP&FFT libraries developed in [1], which are based on a pencil decomposition of the domain and are proven to be computationally very efficient. An enhanced pressure-correction method is proposed to speed up the achievement of machine-precision accuracy. It is demonstrated that second-order accuracy is reached in time, while the spatial accuracy ranges from fourth order to sixth order depending on the set of imposed boundary conditions. The software developed to implement the present algorithm is called HOLOMAC, and its numerical efficiency opens the way to DNS of reacting flows for understanding complex turbulent and chemical phenomena in flames.
['E. Motheau', 'John Abraham']
A high-order numerical algorithm for DNS of low-Mach-number reactive flows with detailed chemistry and quasi-spectral accuracy
647,475
The smartphone is becoming an ideal platform for continuous and transparent sensing thanks to its many built-in sensors. Activity recognition on smartphones remains a challenge due to resource constraints such as battery lifetime and computational workload. Keeping in view the demand for low-energy activity recognition on mobile devices, we propose an energy-efficient method to recognize user activities based on a single low-resolution tri-axial accelerometer in smartphones. This paper presents a hierarchical recognition scheme with variable step size, which reduces the cost of time-consuming frequency-domain features for low energy consumption and adjusts the size of the sliding window to improve recognition accuracy. Experimental results demonstrate the effectiveness of the proposed algorithm, with more than 85% recognition accuracy for 11 activities and 3.2 hours of extended battery life for mobile phones.
['Yunji Liang', 'Xingshe Zhou', 'Zhiwen Yu', 'Bin Guo', 'Yue Yang']
Energy efficient activity recognition based on low resolution accelerometer in smart phones
570,087
We consider the AES encryption/decryption algorithm and propose a memory based hardware design to support it. The proposed implementation is mapped on the Xilinx Virtex II Pro technology. Both the byte substitution and the polynomial multiplication of the AES algorithm are implemented in a single dual port on-chip memory block (BRAM). Two AES encryption/decryption cores have been designed and implemented on a prototyping XC2VP20-7 FPGA: a completely unrolled loop structure capable of achieving a throughput above 34 Gbits/s, with an implementation cost of 3513 slices and 80 BRAMs; and a fully folded structure, requiring only 515 slices and 12 BRAMs, capable of a throughput above 2 Gbits/s. To evaluate the proposed AES design, it has been embedded in a polymorphic processor organization, as a reconfigurable co-processor. Comparisons to state-of-the-art AES cores indicate that the proposed unfolded core outperforms the most recent works by 34% in throughput and requires 68% less reconfigurable area. Experimental results of both folded and unfolded AES cores suggest over 560% improvement in the throughput/slice metric when compared to the recent AES related art.
['Ricardo Chaves', 'Georgi Kuzmanov', 'Stamatis Vassiliadis', 'Leonel Sousa']
Reconfigurable memory based AES co-processor
434,408
One unfortunate consequence of the success story of wireless sensor networks (WSNs) in separate research communities is an ever-growing gap between theory and practice. Even though there is an increasing number of algorithmic methods for WSNs, the vast majority has never been tried in practice; conversely, many practical challenges are still awaiting efficient algorithmic solutions. The main cause for this discrepancy is the fact that programming sensor nodes still happens at a very technical level. We remedy the situation by introducing Wiselib, our algorithm library that allows for simple implementations of algorithms on a large variety of hardware and software platforms. This is achieved by employing advanced C++ techniques such as templates and inline functions, allowing us to write generic code that is resolved and bound at compile time, resulting in virtually no memory or computation overhead at run time. The Wiselib runs on different host operating systems, such as Contiki, iSense OS, and ScatterWeb. Furthermore, it runs on virtual nodes simulated by Shawn. For any algorithm, the Wiselib provides data structures that suit the specific properties of the target platform. Algorithm code does not contain any platform-specific specializations, allowing a single implementation to run natively on heterogeneous networks. In this paper, we describe the building blocks of the Wiselib and analyze the overhead. We demonstrate the effectiveness of our approach by showing how routing algorithms can be implemented. We also report on results from experiments with real sensor-node hardware.
['Tobias Baumgartner', 'Ioannis Chatzigiannakis', 'Sándor P. Fekete', 'Christos Koninis', 'Alexander Kröller', 'Apostolos Pyrgelis']
Wiselib: a generic algorithm library for heterogeneous sensor networks
390,171
We consider the application of coded modulation to systems with multiple transmit and multiple receive antennas. We concentrate on fast fading channels with channel state information available at the receiver only. Regarding multiple-antenna signaling as multi-dimensional modulation we propose to separate coding and modulation.
['Lutz Lampe', 'Robert F. H. Fischer', 'R. Schober']
Multilevel coding for multiple-antenna transmission
463,515
The methodology used for prototyping an H.263 video coder is explained in this paper. The coder is based on an architecture we call MVIP-2, which consists of a set of specialized processors for the main tasks (transforms, quantizers, motion estimation and motion compensation) and a RISC processor for the scheduling tasks. The design has been written in synthesizable Verilog and fully tested with hardware-software co-simulation using standard video sequences. All modules except the RISC have been synthesized and fitted onto an EP20K400BC652 FPGA from Altera. At present we are testing the prototype in real time using a commercial board with the RISC and the FPGA, a pattern generator and a data acquisition system to generate the input sequences and read back the reconstructed ones, as well as a logic analyzer. The methodological aspects presented in this paper can be applied to other designs.
['M.J. Garrido', 'C. Sanz', 'Marcos Jimenez', 'Juan M. Meneses']
A flexible H.263 video coder prototype based on FPGA
224,875
Advances in adaptive filtering theory and applications to acoustic and speech signal processing
['Markus Rupp', 'Walter Kellermann', 'Abdelhak M. Zoubir', 'Gerhard Schmidt']
Advances in adaptive filtering theory and applications to acoustic and speech signal processing
745,847
It is known that the Alamouti code is the only complex orthogonal design (COD) which achieves capacity, and that only for the case of two transmit and one receive antennas. M.O. Damen et al. (see IEEE Trans. Inform. Theory, vol.48, no.3, p.753-60, 2002) gave a design for 2 transmit antennas, which achieves capacity for any number of receive antennas, calling it an information lossless STBC. We construct capacity achieving designs using cyclic division algebras for an arbitrary number of transmit and receive antennas. For the STBCs obtained using these designs, we present simulation results for those numbers of transmit and receive antennas for which Damen et al. also gave results, and show that our STBCs perform better than theirs.
['V. Shashidhar', 'B.S. Rajan', 'B.A. Sethuraman']
STBCs using capacity achieving designs from cyclic division algebras
104,946
In this paper, we propose an efficient message passing architecture for permutation matrices based LDPC code decoders. The min-sum algorithm is reformulated to facilitate significant reduction of routing complexity and memory usage. For a (2048, 1723) (6, 32) LDPC code with 4-bit quantization, 54% of outgoing wires per variable node unit and 90% of outgoing wires per check node unit can be saved. To further reduce hardware complexity, an optimized nonuniform quantization scheme using only 3 bits to represent each message has been investigated. The simulation result shows that it has only 0.25 dB performance loss from the floating-point SPA.
['Zhiqiang Cui', 'Zhongfeng Wang']
Efficient Message Passing Architecture for High Throughput LDPC Decoder
535,342
A long-standing assumption in the cognitive aging literature is that performance on working memory (WM) tasks involving serial recall is relatively unaffected by aging, whereas tasks that require the rearrangement of items prior to recall are more age-sensitive. Previous neuroimaging studies of WM have found age-related increases in neural activity in frontoparietal brain regions during simple maintenance tasks, but few have examined whether there are age-related differences that are specific to rearranging WM items. In the current study, older and younger adults' brain activity was monitored using functional magnetic resonance imaging (fMRI) as they performed WM tasks involving either maintenance or manipulation (letter–number sequencing). The paradigm was developed so that performance was equivalent across age groups in both tasks, and the manipulation condition was not more difficult than the maintenance condition. In younger adults, manipulation-related increases in activation occurred within a very focal set of regions within the canonical brain WM network, including left posterior prefrontal cortex and bilateral inferior parietal cortex. In contrast, older adults showed a much wider extent of manipulation-related activation within this WM network, with significantly increased activity relative to younger adults found within bilateral PFC. The results suggest that activation and age-differences in lateral PFC engagement during WM manipulation conditions may reflect strategy use and controlled processing demands rather than reflect the act of manipulation per se.
['Lisa Emery', 'Timothy J. Heaven', 'Jessica Paxton', 'Todd S. Braver']
Age-related changes in neural activity during performance matched working memory manipulation
109,809
The great and constantly growing number of scientific journal editions demands strict selection of journals when planning their purchase. The method of journal selection on the basis of information service system data is described in this article. The primary value of a journal is defined as the amount of information retrieved for readers concerning the articles published in a given journal. This parameter and the costs of subscription are the basis for journal ranking and for determining the number of copies to be bought. The method has been verified using data from the SDI system in use at Wroclaw Technical University. Comparing the achieved results with those of a simultaneously conducted questionnaire investigation, a considerable degree of accordance was found between the results of journal acquisition planning based on the described method and the demands of journal users.
['Czesław Daniłowicz', 'H Szarski']
Selection of scientific journals based on the data obtained from an information service system
157,463
Status data identification in ubiquitous computing is studied as a method of providing optimum service to the user. The study is most active in the healthcare field, which is considered an essential factor in human life. Past studies have utilized analysis of information taken from sensors to obtain the corresponding status information of the patient. However, applying the information obtained from such data had its limitations due to the lack of research on methods of controlling and distributing such status information. This study suggests a solution for managing status information through a tag matching method in order to effectively provide status information in a uHealthcare environment. The suggested model shows a slight delay in data processing when information is produced, but an improved ability during information search by utilizing tagged information.
['Jae-gu Song', 'Gil-Cheol Park', 'Minseong Ju', 'Seoksoo Kim']
Designing Tag Based Status Identification Control System to Provide Information From uHealthcare Environment
104,167
The seekers
['M. O. Thirunarayanan']
The seekers
713,727
The existence and the uniqueness of Augustin center and the associated Erven-Harremoes bound are established for arbitrary channels with convex constraint sets and finite Augustin capacity. Concepts of Augustin-Legendre capacity, center and radius are introduced and their equality to the corresponding Renyi-Gallager concepts is established. Sphere packing bounds with polynomial prefactors are derived for codes on two families of channels: (possibly non-stationary) memoryless channels with multiple additive cost constraints and stationary memoryless channels with convex constraints on the empirical distribution of the input codewords.
['Barış Nakiboğlu']
The Augustin Center and The Sphere Packing Bound For Memoryless Channels
999,242
We report on an emerging application focused on the design of resilient long reach passive optical networks using combinatorial optimisation techniques. The objective of the application is to determine the optimal position and capacity of a set of metro nodes. We specifically consider dual parented networks whereby each customer must be associated with two metro nodes. An important property of such a placement is resilience to single node failure. Therefore excess capacity should be provided at each metro node in order to ensure that customers can be redistributed amongst the metro sites. Our application, as well as finding optimal node placements, can compute the minimum level of excess capacity on all metro nodes. In this paper we present three alternative approaches to optimal metro node placement. We present a detailed analysis of the impact of different placement approaches on the distribution of excess capacity throughout the network. We show that preferential distributions occur in practice, based on a case study in Ireland. Finally we show that load and excess capacity provision are independent of each other.
['Deepak Mehta', "Barry O'Sullivan", 'Luis Quesada', 'Marco Ruffini', 'David B. Payne', 'Linda Doyle']
Designing Resilient Long-Reach Passive Optical Networks
545,475
Fast and robust acoustic system identification is still a research topic of interest, because of the typically time-variant nature of acoustic systems and the natural performance limitation of electroacoustic measurement equipment. In this paper, we propose NLMS-type adaptive identification with perfect-sweep excitation. The perfect-sweep is derived from the more general class of perfect sequences and, thus, it inherits periodicity and especially the desired decorrelation property known from perfect sequences. Moreover, the perfect-sweep shows the desirable characteristics of swept sine signals regarding the immunity against non-linear loudspeaker distortions. On this basis, we first demonstrate the fast tracking ability of the perfect-sweep NLMS algorithm via computer generated simulation of a time-variant acoustic system. Then, the robustness of the perfect-sweep NLMS algorithm against non-linear characteristics of real measurements in a time-invariant case is presented. By finally addressing the measurement of quasi-continuous head-related impulse responses, we face the combined challenge of time-variant and possibly non-linear distorted acoustic system identification in a real application scenario and we can demonstrate the superiority of the perfect-sweep NLMS algorithm.
['Christiane Antweiler', 'Aulis Telle', 'Peter Vary', 'Gerald Enzner']
Perfect-sweep NLMS for time-variant acoustic system identification
172,652
Heating, cooling and ventilation account for 35% of energy usage in the United States. Currently, most modern buildings still condition rooms assuming maximum occupancy rather than actual usage. As a result, rooms are often over-conditioned needlessly. Thus, in order to achieve efficient conditioning, we require knowledge of occupancy. This article shows how real time occupancy data from a wireless sensor network can be used to create occupancy models, which in turn can be integrated into a building conditioning system for usage-based demand control conditioning strategies. Using strategies based on sensor network occupancy model predictions, we show that it is possible to achieve 42% annual energy savings while still maintaining American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) comfort standards.
['Varick L. Erickson', 'Miguel Á. Carreira-Perpiñán', 'Alberto E. Cerpa']
Occupancy Modeling and Prediction for Building Energy Management
220,454
To address the frequently occurring situation where data is inexact or imprecise, a number of extensions to the classical notion of a functional dependency (FD) integrity constraint have been proposed in recent years. One of these extensions is the notion of a differential dependency (DD), introduced in the recent article "Differential Dependencies: Reasoning and Discovery" by Song and Chen in the March 2011 edition of this journal. A DD generalises the notion of an FD by requiring only that the values of the attribute from the RHS of the DD satisfy a distance constraint whenever the values of attributes from the LHS of the DD satisfy a distance constraint. In contrast, an FD requires that the values from the attributes in the RHS of an FD be equal whenever the values of the attributes from the LHS of the FD are equal. The article "Differential Dependencies: Reasoning and Discovery" investigated a number of aspects of DDs, the most important of which, since they form the basis for the other topics investigated, were the consistency problem (determining whether there exists a relation instance that satisfies a set of DDs) and the implication problem (determining whether a set of DDs logically implies another DD). Concerning these problems, a number of results were claimed in "Differential Dependencies: Reasoning and Discovery". In this article we conduct a detailed analysis of the correctness of these results. The outcomes of our analysis are that, for almost every claimed result, we show there are either fundamental errors in the proof or the result is false. For some of the claimed results we are able to provide corrected proofs, but for other results their correctness remains open.
['Millist W. Vincent', 'Jixue Liu', 'Hong-Cheu Liu', 'Sebastian Link']
Technical Correspondence: “Differential Dependencies: Reasoning and Discovery” Revisited
601,542
Pointcut fragility is a well-documented problem in Aspect-Oriented Programming, changes to the base-code can lead to join points incorrectly falling in or out of the scope of pointcuts. Deciding which pointcuts have broken due to base-code changes is a daunting venture, especially in large and complex systems. We present an automated approach that recommends pointcuts that are likely to require modification due to a particular base-code change, as well as ones that do not. Our hypothesis is that join points selected by a pointcut exhibit common structural characteristics. Patterns describing such commonality are used to recommend pointcuts that have potentially broken to the developer. The approach is implemented as an extension to the popular Mylyn Eclipse IDE plug-in, which maintains focused contexts of entities relevant to the task at hand using a Degree of Interest (DOI) model.
['Raffi Khatchadourian', 'Awais Rashid', 'Hidehiko Masuhara', 'Takuya Watanabe']
Detecting Broken Pointcuts Using Structural Commonality and Degree of Interest (N)
609,567
Meeting Inelastic Demand in Systems With Storage and Renewable Sources
['Soon-Geol Kwon', 'Yunjian Xu', 'Natarajan Gautam']
Meeting Inelastic Demand in Systems With Storage and Renewable Sources
722,589
In this paper, we study several combinatorial optimization problems which combine the classic open shop or job shop scheduling problem and the shortest path problem. Our goal is to select a subset of jobs that constitutes a feasible solution of the shortest path problem, and then execute the selected jobs on the shop machines to minimize the makespan, i.e., the last completion time of all the jobs. We prove that these problems are NP-hard even if there are two machines. If the number of machines is an input, we show that it is unlikely to find approximation algorithms with performance ratios better than 2 unless P = NP. We present an intuitive approximation algorithm when the number of machines is an input, and an improved approximation algorithm when the number of machines is fixed. In addition, we propose a polynomial time approximation scheme for the open shop case when the number of machines is fixed.
['Kameng Nip', 'Zhenbo Wang', 'Wenxun Xing']
A study on several combination problems of classic shop scheduling and shortest path
584,047
We approach the design of ubiquitous computing systems in the urban environment as integral to urban design. To understand the city as a system encompassing physical and digital forms and their relationships with people's behaviours, we are developing, applying and refining methods of observing, recording, modelling and analysing the city, physically, digitally and socially. We draw on established methods used in the space syntax approach to urban design. Here we describe how we have combined scanning for discoverable Bluetooth devices with two such methods, gatecounts and static snapshots. We report our experiences in developing, field testing and refining these augmented methods. We present initial findings on the Bluetooth landscape in a city in terms of patterns of Bluetooth presence and Bluetooth naming practices.
["Eamonn O'Neill", 'Vassilis Kostakos', 'Tim Kindberg', 'Ava Fatah gen. Schiek', 'Alan Penn', 'Danae Stanton Fraser', 'Timothy Jones']
Instrumenting the city: developing methods for observing and understanding the digital cityscape
494,396
Decision-theoretic rough set model can derive several probabilistic rough set models by providing proper cost functions. Learning cost functions from data automatically is the key to improving the applicability of decision-theoretic rough set model. Many region-related attribute reductions are not appropriate for probabilistic rough set models as the monotonic property of regions does not always hold. In this paper, we propose an optimization representation of decision-theoretic rough set model. An optimization problem is proposed by considering the minimization of the decision cost. Two significant inferences can be drawn from the solution of the optimization problem. Firstly, cost functions and thresholds used in decision-theoretic rough set model can be learned from the given data automatically. An adaptive learning algorithm and a genetic algorithm are designed. Secondly, a minimum cost attribute reduction can be defined. The attribute reduction is interpreted as finding the minimal attribute set to make the decision cost minimum. A heuristic approach and a particle swarm optimization approach are also proposed. The optimization representation can bring some new insights into the research on decision-theoretic rough set model.
['Xiuyi Jia', 'Zhenmin Tang', 'Wenhe Liao', 'Lin Shang']
On an optimization representation of decision-theoretic rough set model
272,075
Toward realizing a novel drug delivery system, a simulation system is considered important for its future analysis. In this paper, a novel, miniature, and energy-efficient bio-mimetic propulsion concept is proposed. An antibody binding technique is developed and used to attach bacteria to the liposome surface to enhance liposome mobility. Bacteria and liposome are strongly combined through the antibody. Consequently, the effect of the antibody when bacteria attach to the liposome is studied experimentally. The stochastic nature of bacterial propulsion of the liposome is investigated experimentally and analytically. It is shown that the antibody plays an important role in attaching bacteria to the liposome, and that the liposome with bacteria moved over a broader range compared to the liposome without bacteria. Statistical calculations match well with the experimental data.
['Seiichi Ikeda', 'Zhenhai Zhang', 'Masaru Kojima', 'Masahiro Nakajima', 'Toshio Fukuda']
Evaluation of attachment and motion of bacteria-driven liposome based on antibody binding technique
469,091
A cylindrical hat-loaded method is employed to make monopoles behave as dual band resonators. The two frequency bands are obtained by perturbing higher propagation modes through modification of the hat length. The longer the cylindrical hat is, the lower the resonant frequencies obtained. The hat behaves as an inductive load, also increasing the electric length of the monopole. As a result, a compact resonant structure is obtained, working at two different frequencies without modifying the omnidirectional radiation pattern in both bands.
['H. Jardon-Aguilar', 'Jose Alfredo Tirado-Mendez', 'Ruben Flores-Leal', 'Edgar Alejandro Andrade-Gonzalez']
Novel Dual Band Resonant Cylindrical Hat-Covered Monopole for Personal Communications
12,464
SIFT is a widely-used algorithm that extracts features from images; using it to extract information from hundreds of terabytes of aerial and satellite photographs requires parallelization in order to be feasible. We explore accelerating an existing serial SIFT implementation with OpenMP parallelization and GPU execution.
['Seth Warn', 'Wesley Emeneker', 'Jackson Cothren', 'Amy W. Apon']
Accelerating SIFT on parallel architectures
19,871
Power loss from an uninterruptible power supply can account for 15 percent of a datacenter's energy. A rack-level power model that relates IT workload and its power dissipation allows optimized workload placement that can save a datacenter roughly $1.4 million in annual energy costs.
['Quan Zhang', 'Weisong Shi']
Energy-Efficient Workload Placement in Enterprise Datacenters
666,221
This paper introduces a new methodology for the design and analysis of digital watermarking systems which, from an information-theoretic point of view, incorporates robustness and fragility. The proposed methodology is developed by focusing on the probability of error versus watermark-to-noise ratio curve, describing the technique's performance, and on a scenario for coded techniques which takes into account not only the coding gain but also the robustness or fragility of the system. This new concept requires that the design of coded digital watermarking systems be revisited to also include the robustness and fragility requirements. Turbo codes, which appropriately meet these requirements, can be used straightforwardly to construct robust watermarking systems. Fragile systems can also be constructed by introducing the idea of a polarization scheme. This new idea has allowed the implementation of hybrid techniques achieving fragility and robustness with a single watermark embedding. Moreover, we present (turbo) coded techniques which can also be used in a semi-fragile mode.
['Marcos de Castro Pacitti', 'Weiler Alves Finamore']
Digital watermarking robustness and fragility characteristics: new modelling and coding influence
512,847
In this paper, we will study the stability issues of the linear Takagi-Sugeno (T-S) free fuzzy systems. Based on matrix norm, we propose a new sufficient condition for the linear T-S free fuzzy system to be globally asymptotically stable. We then study the stability analysis in the case of systems with consequent parameter uncertainty. Based on Mayer's convergent theorem, we propose a sufficient condition, which is easily implemented, for the systems with consequent parameter uncertainty to be globally asymptotically stable.
['Chin-Tzong Pang', 'Sy-Ming Guu']
Sufficient conditions for the stability of linear Takagi-Sugeno free fuzzy systems
82,151
We study the use of asset-backed money in a neoclassical growth model with illiquid capital. A mechanism is delegated control of productive capital and issues claims against the revenue it earns. These claims constitute a form of asset-backed money. The mechanism determines (i) the number of claims outstanding, (ii) the dividends paid to claim holders, and (iii) the structure of redemption fees. We find that for capital-rich economies, the first-best allocation can be implemented and price stability is optimal. However, for sufficiently capital-poor economies, achieving the first-best allocation requires a strictly positive rate of inflation. In general, the minimum inflation necessary to implement the first-best allocation is above the Friedman rule and varies with capital wealth.
['David Andolfatto', 'Aleksander Berentsen', 'Christopher J. Waller']
Monetary policy with asset-backed money
654,446
Generalizations of the Periodicity Theorem of Fine and Wilf
['Raffaele Giancarlo', 'Filippo Mignosi']
Generalizations of the Periodicity Theorem of Fine and Wilf
37,912
Tasks in hard real-time systems are required to meet preset deadlines, even in the presence of transient faults, and hence the analysis of worst-case finish time (WCFT) must consider the extra time incurred by re-executing tasks that were faulty. Existing solutions can only estimate WCFT and usually result in significant under- or over-estimation. In this work, we conclude that a sufficient and necessary condition of a task set experiencing its WCFT is that its critical task incurs all expected transient faults. A method is presented to identify the critical task and WCFT in O(|V| + |E|), where |V| and |E| are the number of tasks and dependencies between tasks, respectively. This method finds its application in testing the feasibility of directed acyclic graph (DAG) based task sets scheduled in a wide variety of fault-prone multi-processor systems, where the processors could be either homogeneous or heterogeneous, DVS-capable or DVS-incapable, etc. The common practices, which require the same time complexity as the proposed critical-task method, could either underestimate the worst case by up to 25%, or overestimate by 13%. Based on the proposed critical-task method, a simulated-annealing scheduling algorithm is developed to find the energy efficient fault-tolerant schedule for a given DAG task set. Experimental results show that the proposed critical-task method wins over a common practice by up to 40% in terms of energy saving.
['Xiao Tong Cui', 'Kai Jie Wu', 'Tong Quan Wei', 'Edwin Hsing Mean Sha']
Worst-Case Finish Time Analysis for DAG-Based Applications in the Presence of Transient Faults
703,328
Pulsed lasers can evoke neural activity from motor as well as sensory neurons in vivo. Lasers allow more selective spatial resolution of stimulation than conventional electrical stimulation. To date, few studies have examined pulsed, mid-infrared laser stimulation of nerves, and very little of the available optical parameter space has been studied. In this study, a pulsed diode laser with wavelength between 1.844 and 1.873 μm was used to elicit compound action potentials (CAPs) from the auditory system of the gerbil. We found that pulse durations as short as 35 μs elicit a CAP from the cochlea. In addition, repetition rates up to 13 Hz can continually stimulate cochlear spiral ganglion cells for extended periods of time. Varying the wavelength and, therefore, the optical penetration depth allowed different populations of neurons to be stimulated. The technology of optical stimulation could significantly improve cochlear implants, which are hampered by a lack of spatial selectivity.
['Agnella D. Izzo', 'Joseph T. Walsh', 'E.D. Jansen', 'Mark Bendett', 'Jim Webb', 'Heather Ralph', 'Claus Peter Richter']
Optical Parameter Variability in Laser Nerve Stimulation: A Study of Pulse Duration, Repetition Rate, and Wavelength
384,607
The Erdős–Gallai Theorem states that for k ≥ 2, every graph of average degree more than k − 2 contains a k-vertex path. This result is a consequence of a stronger result of Kopylov: if k is odd, k = 2t + 1 ≥ 5, n ≥ (5t − 3)/2, and G is an n-vertex 2-connected graph with at least h(n, k, t) := (k − t choose 2) + t(n − k + t) edges, then G contains a cycle of length at least k unless G = H_{n,k,t} := K_n − E(K_{n−t}). In this paper we prove a stability version of the Erdős–Gallai Theorem: we show that for all n ≥ 3t > 3 and k ∈ {2t + 1, 2t + 2}, every n-vertex 2-connected graph G with e(G) > h(n, k, t − 1) either contains a cycle of length at least k or contains a set of t vertices whose removal gives a star forest. In particular, if k = 2t + 1 ≠ 7, we show G ⊆ H_{n,k,t}. The lower bound e(G) > h(n, k, t − 1) in these results is tight and is smaller than Kopylov's bound h(n, k, t) by a term of n − t − O(1).
['Zoltán Füredi', 'Alexandr V. Kostochka', 'Jacques Verstraëte']
Stability in the Erdős–Gallai Theorems on cycles and paths
828,026
In this paper, a new failure mode of punch-through type insulated gate bipolar transistors (PT-IGBTs) at short circuit condition under high voltage operation is presented. After the IGBTs turn on, this failure mode is characterized as an abrupt destruction mode that takes place several microseconds later under higher collector voltage short circuit conditions. The destruction mechanism of the IGBTs has been investigated using 2-D and 3-D device simulations. It is found that the hole-current caused by dynamic avalanche generation at the peripheral region of the device concentrates at a certain point, such as the emitter contact edge of the active cells. Subsequently the IGBTs are destroyed. This hole-current aggregation depends on the geometrical structure of the active cells close to the device peripherals, where a parasitic PMOS is formed. The generated hole-current path varies with the gate voltage, and this effect results in degradation of the short circuit capabilities depending on the gate voltage. This model has been evaluated experimentally by fabricating punch-through type IGBTs with a rated current of 200 A, and the IGBTs have shown sufficient short circuit capabilities. These devices have successfully been adopted as power devices of inverters for a new hybrid vehicle.
['Masayasu Ishiko', 'Koji Hotta', 'Sachiko Kawaji', 'Takahide Sugiyama', 'Tomoyuki Shouji', 'T. Fukami', 'Kimimori Hamada']
Investigation of IGBT turn-on failure under high applied voltage operation
730,088
Soft errors have become one of the major areas of attention with device scaling and large-scale integration. Many variants of the superscalar architecture have been proposed, focusing on program re-execution, thread re-execution, and instruction re-execution. In this paper we propose a fault-tolerant micro-architecture of a pipelined RISC processor. The proposed architecture, Floating Resources Extended Pipeline (FREP), re-executes instructions using extended pipeline stages. The instructions are re-executed by a hybrid architecture with a suitable combination of space and time redundancy.
['Viney Kumar', 'Rahul Raj Choudhary', 'Virendra Singh']
FREP: A soft error resilient pipelined RISC architecture
82,603
We present a novel analytical framework to investigate the performances of different sleeping strategies in a wireless sensor network where a solar cell is used to charge the battery in a sensor node. While the energy generation process (i.e., solar radiation) in a solar cell is modeled by a stochastic process (i.e., a Markov chain), a linear battery model with relaxation effect is used for the battery capacity recovery process. Average queue length, packet dropping and packet blocking probabilities and packet delay distribution at each node are the major performance metrics. Developed based on a multi-dimensional discrete-time Markov chain, the presented model can be used to analyze the performances of different sleep and wakeup strategies at each node (e.g., strategies based on available battery capacity, channel state, solar radiation condition and queue length, and hybrid of these conditions). The numerical results obtained from the analytical model are validated by extensive simulations. The presented model would be useful for designing and optimizing sleeping strategies in a solar powered sensor network under energy and QoS constraints.
['Dusit Niyato', 'Ekram Hossain', 'Afshin Fallahi']
Analysis of Different Sleep and Wakeup Strategies in Solar Powered Wireless Sensor Networks
291,216
Humans have represented data in many forms for thousands of years, yet the main sensory channel we use to perceive these representations today still remains largely exclusive to sight. Recent developments, such as advances in digital fabrication, microcontrollers, actuated tangibles, and shape-changing interfaces offer new opportunities to encode data in physical forms and have stimulated the emergence of 'Data Physicalization' as a research area. The aim of this workshop is (1) to create an awareness of the potential of Data Physicalization by providing an overview of state-of-the-art research, practice, and tools and (2) to build a community around this emerging field and start to discuss a shared research agenda. This workshop therefore addresses both experienced researchers and practitioners as well as those who are new to the field but interested in applying Data Physicalization to their own (research) practice. The workshop will provide opportunities for participants to explore Data Physicalization hands-on, by creating their own prototypes. These practical explorations will lead into reflective discussions on the role tangibles and embodiment play in Data Physicalization and the future research challenges for this area.
['Trevor Hogan', 'Eva Hornecker', 'Simon Stusak', 'Yvonne Jansen', 'Jason Alexander', 'Andrew Van Moere', 'Uta Hinrichs', 'Kieran Nolan']
Tangible Data, explorations in data physicalization
635,719
A technique for word timing recovery in a direct detection optical pulse position modulation (PPM) communication system is described. It tracks on back-to-back pulse pairs in the received random PPM data sequences with the use of a phase locked loop. The experimental system consisted of an AlGaAs laser diode transmitter (λ = 833 nm) and a silicon avalanche photodiode photodetector, and it used Q = 4 PPM signaling at a source data rate of 25 Mb/s. The mathematical model developed to characterize system performance is shown to be in good agreement with the experimental measurements. Use of this recovered PPM word clock, along with a slot clock recovery system described previously, caused no measurable penalty in receiver sensitivity when compared to a receiver which used common transmitter/receiver clocks. The completely self-synchronized receiver was capable of acquiring and maintaining both slot and word synchronization for input optical signal levels as low as 20 average detected photons per information bit. The receiver achieved a bit error probability of 10^−6 at less than 60 average detected photons per information bit.
['Xiaoli Sun', 'Frederic M. Davidson']
Word timing recovery in direct detection optical PPM communication systems with avalanche photodiodes using a phase lock loop
58,928
High-dose chemotherapy has long been advocated as a means of controlling drug resistance in infectious diseases but recent empirical studies have begun to challenge this view. We develop a very general framework for modeling and understanding resistance emergence based on principles from evolutionary biology. We use this framework to show how high-dose chemotherapy engenders opposing evolutionary processes involving the mutational input of resistant strains and their release from ecological competition. Whether such therapy provides the best approach for controlling resistance therefore depends on the relative strengths of these processes. These opposing processes typically lead to a unimodal relationship between drug pressure and resistance emergence. As a result, the optimal drug dose lies at either end of the therapeutic window of clinically acceptable concentrations. We illustrate our findings with a simple model that shows how a seemingly minor change in parameter values can alter the outcome from one where high-dose chemotherapy is optimal to one where using the smallest clinically effective dose is best. A review of the available empirical evidence provides broad support for these general conclusions. Our analysis opens up treatment options not currently considered as resistance management strategies, and it also simplifies the experiments required to determine the drug doses which best retard resistance emergence in patients.
['Troy Day', 'Andrew F. Read']
Does High-Dose Antimicrobial Chemotherapy Prevent the Evolution of Resistance?
624,793
RFID has huge potential in today's social and business developments. RFID-based identification is an example of an emerging technology which requires authentication. Security and privacy are important issues in the design of practical RFID protocols. In this paper, we focus on RFID authentication protocols. RFID mutual authentication is used to ensure that only an authorized RFID reader can access the data of an RFID tag, while the RFID tag is assured that it releases data only to an authenticated RFID reader. We propose two mutual authentication protocols for RFID tags: a server-based authentication protocol and a serverless authentication protocol. Both protocols enable the RFID reader and tag to carry out authentication based on their synchronized secret information. In the first protocol, based on a server, the synchronized secret information is monitored by a component of the database server. In the second protocol, without a server, mutual authentication does not need to rely on a back-end database. It enables RFID tags to be anonymous to the RFID reader so that privacy can be preserved.
['Song Han', 'Tharam S. Dillon', 'Vidyasagar Potdar', 'Elizabeth Chang']
RFID mutual authentication protocols for tags and readers with and without a server
792,358
We examine the quality of social choice mechanisms using a utilitarian view, in which all of the agents have costs for each of the possible alternatives. While these underlying costs determine what the optimal alternative is, they may be unknown to the social choice mechanism; instead the mechanism must decide on a good alternative based only on the ordinal preferences of the agents which are induced by the underlying costs. Due to its limited information, such a social choice mechanism cannot simply select the alternative that minimizes the total social cost (or minimizes some other objective function). Thus, we seek to bound the distortion: the worst-case ratio between the social cost of the alternative selected and the optimal alternative. Distortion measures how good a mechanism is at approximating the alternative with minimum social cost, while using only ordinal preference information. The underlying costs can be arbitrary, implicit, and unknown; our only assumption is that the agent costs form a metric space, which is a natural assumption in many settings. We quantify the distortion of many well-known social choice mechanisms. We show that for both total social cost and median agent cost, many positional scoring rules have large distortion, while on the other hand Copeland and similar mechanisms perform optimally or near-optimally, always obtaining a distortion of at most 5. We also give lower bounds on the distortion that could be obtained by any deterministic social choice mechanism, and extend our results on median agent cost to more general objective functions.
['Elliot Anshelevich', 'Onkar Bhardwaj', 'John Postl']
Approximating optimal social choice under metric preferences
752,137
Linear dispersion (LD) codes are a good candidate for high-data-rate multiple-input multiple-output (MIMO) signaling. Traditionally LD codes were designed by maximizing the average mutual information, which cannot guarantee good error performance. This paper presents a new design scheme for LD codes that directly minimizes the block error rate (BLER) in MIMO channels with arbitrary fading statistics and various detection algorithms. For MIMO systems employing LD codes, the error rate does not admit an explicit form. Therefore, we cannot use deterministic optimization methods to design the minimum-error-rate LD codes. In this paper, we propose a simulation-based optimization methodology for the design of LD codes through stochastic approximation and simulation-based gradient estimation. The gradient estimation is done using the score function method originally developed in the discrete-event-system community. The proposed method can be applied to design the minimum-error-rate LD codes for a variety of detector structures including the maximum-likelihood (ML) detector and several suboptimal detectors. It can also design optimal codes under arbitrary fading channel statistics; in particular, it can take into account the knowledge of spatial fading correlation at the transmitter and receiver ends. Simulation results show that codes generated by the proposed new design paradigm generally outperform the codes designed based on algebraic number theory.
['Xiaodong Wang', 'Vikram Krishnamurthy', 'Jibing Wang']
Stochastic gradient algorithms for design of minimum error-rate linear dispersion codes in MIMO wireless systems
220,654
The star graph is an attractive alternative to the hypercube graph. It possesses many nice topological properties. Edge fault tolerance is an important issue for a network since the edges in the network may fail sometimes. In this paper, we show that the n-dimensional star graph is (n−3)-edge fault tolerant Hamiltonian laceable, (n−3)-edge fault tolerant strongly Hamiltonian laceable, and (n−4)-edge fault tolerant hyper Hamiltonian laceable. All these results are optimal in a sense described in this paper.
['Tseng-Kuei Li', 'Jiali Tan']
Hamiltonian laceability on edge fault star graph
199,389
Orchestration of Crosshaul slices from federated administrative domains
['Luis M. Contreras', 'Carlos Jesús Bernardos', 'Antonio de la Oliva', 'Xavier Costa-Perez', 'Riccardo Guerzoni']
Orchestration of Crosshaul slices from federated administrative domains
876,727
A novel thresholding algorithm is presented in this paper to improve image segmentation performance at a low computational cost. The proposed algorithm uses a normalized graph-cut measure as thresholding principle to distinguish an object from the background. The weight matrices used in evaluating the graph cuts are based on the gray levels of the image, rather than the commonly used image pixels. For most images, the number of gray levels is much smaller than the number of pixels. Therefore, the proposed algorithm requires much smaller storage space and lower computational complexity than other image segmentation algorithms based on graph cuts. This fact makes the proposed algorithm attractive in various real-time vision applications such as automatic target recognition. Several examples are presented, assessing the superior performance of the proposed thresholding algorithm compared with the existing ones. Numerical results also show that the normalized-cut measure is a better thresholding principle compared with other graph-cut measures, such as average-cut and average-association ones.
['Wenbing Tao', 'Hai Jin', 'Yimin Zhang', 'Liman Liu', 'Desheng Wang']
Image Thresholding Using Graph Cuts
320,812
The Scrum software development framework was designed for the hyperproductive state where productivity increases by 5-10 times over waterfall teams and many co-located teams have achieved this effect. In 2006, Xebia (The Netherlands) started localized projects with half Dutch and half Indian team members. After establishing a localized velocity of five times their waterfall competitors on the same project, they moved the Indian members of the team to India and showed stable velocity with fully distributed teams. The ability to achieve hyperproductivity with distributed, outsourced teams was shown to be a repeatable process and a fully distributed model is now the recommended standard when organizations have disciplined Scrum teams with full implementation of XP engineering practices inside the Scrum. Previous studies used overlapping time zones to ease communication and create a single distributed team. The goal of this report is to go one step further and show the same results with team members separated by the 12.5 hour time difference between India and San Francisco. If Scrum works without overlapping time zones then applying it to the mainstream offshoring practice in North America will be possible. In 2008, Xebia India started engagements with partners like TBD.com, a social networking site in San Francisco. TBD has an existing core team of developers doing Scrum with an established local velocity. Adding Xebia India developers to the San Francisco team with a Fully Distributed Scrum model achieved linear scalability with a globally distributed outsourced team.
['Jeff Sutherland', 'Guido Schoonheim', 'N. Kumar', 'V. Pandey', 'Sharma Vishal']
Fully Distributed Scrum: Linear Scalability of Production between San Francisco and India
210,391
Reality is the most realistic representation. We introduce a material display called ZoeMatrope that can reproduce a variety of materials with high resolution, high dynamic range, and high light field fidelity by using real objects and characteristics of the human vision system. ZoeMatrope can also create augmented materials such as a mixture of wood and clear glass, a material with an alpha channel, and a material that looks red when illuminated with a light source A but blue when illuminated with a light source B. In this paper, we give an outline of the ZoeMatrope system and show the results for various materials.
['Leo Miyashita', 'Kota Ishihara', 'Yoshihiro Watanabe', 'Masatoshi Ishikawa']
ZoeMatrope for realistic and augmented materials
947,159
This paper presents the development of an Adaptive Algebraic Multiscale Solver for Compressible flow (C-AMS) in heterogeneous porous media. Similar to the recently developed AMS for incompressible (linear) flows (Wang et al., 2014), C-AMS operates by defining primal and dual-coarse blocks on top of the fine-scale grid. These coarse grids facilitate the construction of a conservative (finite volume) coarse-scale system and the computation of local basis functions, respectively. However, unlike the incompressible (elliptic) case, the choice of equations to solve for basis functions in compressible problems is not trivial. Therefore, several basis function formulations (incompressible and compressible, with and without accumulation) are considered in order to construct an efficient multiscale prolongation operator. As for the restriction operator, C-AMS allows for both multiscale finite volume (MSFV) and finite element (MSFE) methods. Finally, in order to resolve high-frequency errors, fine-scale (pre- and post-) smoother stages are employed. In order to reduce computational expense, the C-AMS operators (prolongation, restriction, and smoothers) are updated adaptively. In addition to this, the linear system in the Newton-Raphson loop is infrequently updated. Systematic numerical experiments are performed to determine the effect of the various options, outlined above, on the C-AMS convergence behaviour. An efficient C-AMS strategy for heterogeneous 3D compressible problems is developed based on overall CPU times. Finally, C-AMS is compared against an industrial-grade Algebraic MultiGrid (AMG) solver. Results of this comparison illustrate that the C-AMS is quite efficient as a nonlinear solver, even when iterated to machine accuracy.
['M. Tene', 'Yixuan Wang', 'Hadi Hajibeygi']
Adaptive algebraic multiscale solver for compressible flow in heterogeneous porous media
211,554
Concept of a Focus-Tunable ECF Microlens and Fabrication of a Large Model Prototype
['Joon-wan Kim', 'Takashi Yoshimoto', 'Shinichi Yokota', 'Kazuya Edamura']
Concept of a Focus-Tunable ECF Microlens and Fabrication of a Large Model Prototype
982,911
The past decades have witnessed a rapid surge in new sensing and monitoring devices for well-being and healthcare. One key representative in this field is body sensor networks (BSNs). However, with advances in sensing technologies and embedded systems, wireless communication has gradually become one of the dominant energy-consuming sectors in BSN applications. Recently, compressed sensing (CS) has attracted increasing attention in solving this problem due to its enabled sub-Nyquist sampling rate. In this paper, we investigate the quantization effect in CS architecture and argue that the quantization configuration is a critical factor in the energy efficiency of the entire CS architecture. To this end, we present a novel configurable quantized compressed sensing (QCS) architecture, in which the sampling rate and quantization are jointly explored for better energy efficiency. Furthermore, to combat the computational complexity of the configuration procedure, we propose a rapid configuration algorithm, called RapQCS. According to experiments involving several categories of real biosignals, the proposed configurable QCS architecture can achieve a more than 66% better performance-energy tradeoff than the fixed QCS architecture. Moreover, our proposed RapQCS algorithm can achieve over 150× speedup on average, while decreasing the reconstructed signal fidelity by only 2.32%.
['Aosen Wang', 'Feng Lin', 'Zhanpeng Jin', 'Wenyao Xu']
A Configurable Energy-Efficient Compressed Sensing Architecture With Its Application on Body Sensor Networks
650,827
In this paper we describe our implementations of multi-user stereo systems based on shuttered LCD-projectors and polarization. The combination of these separation techniques allows the presentation of more than one stereoscopic view on a single projection screen. We built two shutter configurations and designed a combined LC-shutter/polarization setup. Our first test setup was a combination of mechanical shutters for the projectors with liquid crystal (LC) shutters for the users’ eyes. The second configuration used LC-shutters only. Based on these configurations we have successfully implemented shuttering of four projectors to support two users with individual perspectively correct stereoscopic views. To improve brightness conditions and to increase the number of simultaneous users, we have designed a combined LC-shutter/polarization filter based projection system, which shows the most promising properties for real world applications.
['Bernd Fröhlich', 'Jan Hochstrate', 'Jörg Hoffmann', 'K. Kluger', 'Roland Blach', 'Matthias Bues', 'Oliver Stefani']
Implementing Multi-Viewer Stereo Displays
24,578
Human beings recognize similarity in scene perception based on their available high-level knowledge about low-level visual features, which is gradually accumulated throughout their entire lives. When there is not enough knowledge, they tend to rely on low-level visual content. Inspired by this observation, we propose a new framework of relevance feedback for content-free image retrieval to tackle the problem of sample sparseness. The framework is composed of two components: short-term feedback and long-term feedback. The former refers to an operation of query conversion and/or refinement during a retrieval session by incorporating a content-aware module, while the latter consists of incrementally updating the system model using the retrieval results accumulated since the last system update. 10,000 images from 200 categories of the COREL image collection were employed to evaluate the performance of the framework, using the criterion of averaged precision as a function of the number of relevance feedback iterations needed. Experimental results demonstrated a human-like behavior of the proposed framework: while long-term updates help the system accumulate more knowledge, the content-aware short-term relevance feedback further boosts its performance when the amount of knowledge is limited.
['Rui Zhang', 'Ling Guan']
A new framework of relevance feedback for content-free image retrieval
153,594
Multi-antenna transmission and reception (known as MIMO) is widely touted as the key technology for enabling wireless broadband services, whose widespread success will require 10 times higher spectral efficiency than current cellular systems, at 10 times lower cost per bit. Spectrally efficient, inexpensive cellular systems are by definition densely populated and interference-limited. But spatial multiplexing MIMO systems, whose principal merit is a supposed dramatic increase in spectral efficiency, lose much of their effectiveness in high levels of interference. This article overviews several approaches to handling interference in multicell MIMO systems. The discussion is applicable to any multi-antenna cellular network, including 802.16e/WiMAX, 3GPP (HSDPA and 3GPP LTE), and 3GPP2 (1xEVDO). We argue that many of the traditional interference management techniques have limited usefulness (or are even counterproductive) when viewed in concert with MIMO. The problem of interference in MIMO systems is too large in scope to be handled with a single technique: in practice a combination of complementary countermeasures will be needed. We overview emerging system-level interference-reducing strategies based on cooperation, which will be important for overcoming interference in future spatial multiplexing cellular systems.
['Jeffrey G. Andrews', 'Wan Choi', 'Robert W. Heath']
Overcoming interference in spatial multiplexing MIMO cellular networks
307,469
Non-Negative Kernel Sparse Coding for Image Classification
['Yungang Zhang', 'Tianwei Xu', 'Jieming Ma']
Non-Negative Kernel Sparse Coding for Image Classification
756,917
Analysis of hospital bed management and patient mobility using open data sources.
['Fabrizio Pecoraro', 'Daniela Luzi', 'Fabrizio Clemente']
Analysis of hospital bed management and patient mobility using open data sources.
742,226
Dynamics and Resistance to Neighborhood Perturbations of Discrete- and Continuous-Time Cellular Automata.
['David E. Hiebeler']
Dynamics and Resistance to Neighborhood Perturbations of Discrete- and Continuous-Time Cellular Automata.
745,545
Wireless sensor networks (WSNs) are resource constrained. Energy is one of the most important resources in such networks. Therefore, optimal use of energy is necessary. In this paper, we present a novel energy-efficient routing protocol for WSNs. The protocol is reliable in terms of data delivery at the base station (BS). We consider mobility in sensor nodes and in the BS. The proposed protocol is hierarchical and cluster based. Each cluster consists of one cluster head (CH) node, two deputy CH nodes, and some ordinary sensor nodes. The reclustering time and energy requirements have been minimized by introducing the concept of CH panel. At the initial stage of the protocol, the BS selects a set of probable CH nodes and forms the CH panel. Considering the reliability aspect of the protocol, it puts best effort to ensure a specified throughput level at the BS. Depending on the topology of the network, the data transmission from the CH node to the BS is carried out either directly or in multihop fashion. Moreover, alternate paths are used for data transmission between a CH node and the BS. Rigorous simulation results depict the energy efficiency, throughput, and prolonged lifetime of the nodes under the influence of the proposed protocol. Future scope of this work is outlined.
['Hiren Kumar Deva Sarma', 'Rajib Mall', 'Avijit Kar']
E2R2: Energy-Efficient and Reliable Routing for Mobile Wireless Sensor Networks
700,675
Discovering Geographic Regions in the City Using Social Multimedia and Open Data
['Stevan Rudinac', 'Jan Zahálka', 'Marcel Worring']
Discovering Geographic Regions in the City Using Social Multimedia and Open Data
961,147
By using a Gaussian approximation, the optimal number of simultaneous transmissions that maximizes system throughput in multi-code CDMA networks is derived as a function of system parameters, including the processing gain, the packet length, and the correctable bit number. Based on this optimal number of simultaneous transmissions and the queue length of each user, a dynamic rate assignment scheme is proposed to support data users with different rate requirements while improving the system resource utilization efficiency. A numerical example verifies the efficiency of the proposed scheme.
['Tao Shu', 'Zhisheng Niu']
A dynamic rate assignment scheme for data traffic in cellular multi-code CDMA networks
395,930
The 3-flow conjecture of Tutte is that every bridgeless graph without a 3-edge cut has a nowhere-zero 3-flow. We show that it suffices to prove this conjecture for 5-edge-connected graphs.
['Martin Kochol']
An Equivalent Version of the 3-Flow Conjecture
28,678
We consider a statistical methodology for the study of the strong stability of the M/G/1 queueing system after disrupting the arrival flow. More precisely, we use nonparametric density estimation with boundary correction techniques and the statistical Student test to approximate the G/G/1 system by the M/G/1 one, when the general arrival law G in the G/G/1 system is unknown. By elaborating an appropriate algorithm, we carry out simulation studies to quantify the proximity error between the corresponding arrival distributions of the quoted systems, the approximation error on their stationary distributions, and confidence intervals for the difference between their corresponding characteristics.
['Aicha Bareche', 'Djamil Aïssani']
Statistical Methodology for Approximating G/G/1 Queues by the Strong Stability Technique
626,867
We present a framework for considering the problem of compressing large collections of similar sequences. In this framework, an unknown individual sequence is modified several times independently to obtain the collection of sequences to be compressed. For certain collections generated by context-dependent bit flips of the individual sequence's bits, and for those generated by simple edit operations on the individual sequence, we derive universal compression algorithms that compress the collection of sequences almost as well as an optimal compressor that has knowledge of the underlying individual sequence and the modifying processes.
['Krishnamurthy Viswanathan', 'Ram Swaminathan']
Framework and algorithms for collaborative compression
521,844
The purpose of this article is to contribute to theorizing the process dimension of informing systems. The conceptualization draws on a framework called the Informing View of Organization and, particularly, on its segment of informing process. The article discusses the concept of informing process and its background in the process view of organization and explores the relationship between informing process and informing system on several examples. The main conclusion is that an informing system facilitates an initial positioning of a research (or practical) problem in systems terms, while the infoprocess stance enables dynamic and analytical view of connected activities that lead to business completion, modern technologies, and performance measurement. Parallels with complex systems theory are also demonstrated. Directions for further research are outlined.
['Bob Travica']
Think Process, Think in Time: Advancing Study of Informing Systems
190,789
Free viewpoint video (FVV) offers a compelling interactive experience by allowing users to switch to any viewing angle at any time. An FVV is composed of a large number of camera-captured anchor views, with virtual views (not captured by any camera) rendered from their nearby anchors using techniques such as depth-image-based rendering (DIBR). We consider a group of wireless users who may interact with an FVV by independently switching views. We study a novel live FVV streaming network where each user pulls a subset of anchors from the server via a primary channel. To enhance anchor availability at each user, a user generates network-coded (NC) packets using some of its anchors and broadcasts them to its direct neighbors via a secondary channel. Given limited primary and secondary channel bandwidths at the devices, we seek to maximize the received video quality (i.e., minimize distortion) by jointly optimizing the set of anchors each device pulls and the anchor combination used to generate NC packets. To the best of our knowledge, this is among the first works addressing such a joint optimization problem for wireless live FVV streaming with NC-based collaboration. We first formulate the problem and show that it is NP-hard. We then propose a scalable and effective algorithm called PAFV (Peer-Assisted Freeview Video). In PAFV, each node collaboratively and distributedly decides on the anchors to pull and NC packets to share so as to minimize video distortion in its neighborhood. Extensive simulation studies show that PAFV outperforms other algorithms, achieving substantially lower video distortion (often by more than 20–50%) with significantly less redundancy (by as much as 70%). Our Android-based video experiment further confirms the effectiveness of PAFV over comparison schemes.
['Bo Zhang', 'Zhi Liu', 'Gary Shueng Han Chan', 'Gene Cheung']
Collaborative Wireless Freeview Video Streaming With Network Coding
645,546
This paper presents an algorithm and an implementation of orthogonal perfect correlation sequences for acoustical system identification using psychoacoustical masking effects. To this end, the common NLMS algorithm has been modified to incorporate hidden orthogonal Ipatov and Huffman sequences for fast system identification. Using this method, the speed and accuracy of the identification of the loudspeaker-room-microphone system are increased, and the overall performance of echo and noise cancellation is improved.
['Mike Peters']
Psychoacoustical excitation of the (N)LMS algorithm for acoustical system identification
839,286
In power systems the occurrence probability of operating points close to network limits may be increased as a result of high wind penetration. Consequences of such scenarios include inefficient exploitation of both wind and economic resources. A well chosen allocation of wind capacity not only is in line with the trend of renewables integration in power systems but also allows for limiting the occurrence probability of unsafe operating points that may require costly remedies. In this work, a voltage stability constrained optimal power flow (VSC-OPF) framework is presented for transmission system planning and applied to wind capacity allocation. This framework captures multiple wind and demand scenarios within the OPF. The pattern of wind capacity allocation is studied in order to assess its impact on voltage stability and the total wind capacity allocation. The results emphasize the effect of the capacity allocation pattern on improvement of voltage stability.
['Mostafa Bakhtvar', 'Andy J. Keane']
Optimal allocation of wind generation subject to voltage stability constraints
920,613
Integration of leading edge computer science research into the undergraduate curriculum is widely recognized as an important aspect of preparing students to enter the twenty-first century technology workforce. We have integrated research in Grid computing into a mixed undergraduate/graduate course in performance measurement offered at Portland State University. We describe our experiences in two offerings of the new course, Measuring Computer Performance. Student mastery of the additional research material was good, and the related project work successfully taught a variety of research and group work skills. We briefly summarize plans for a future revised offering of the course.
['Karen L. Karavanic']
Incorporating Grid computing concepts into a course in performance measurement
381,427
The dependency of amino acid chemical shifts on the φ and ψ torsion angles is studied independently using a five-residue fragment of ubiquitin and an ONIOM(DFT:HF) approach. The variation of the absolute deviation of ¹³Cα chemical shifts with the φ dihedral angle depends specifically on the secondary structure of the protein, not on amino acid type or fragment sequence. This dependency is observed neither for ¹³Cβ and ¹Hα chemical shifts nor for the variation of the absolute deviation of ¹³Cα chemical shifts with the ψ dihedral angle. The ¹³Cα absolute deviation chemical shift (ADCC) plots are found to be a suitable and simple tool to predict the secondary structure of a protein, requiring no highly accurate calculations, prior knowledge of the protein structure, or structural refinement. Comparison of the full-DFT and ONIOM(DFT:HF) approaches illustrates that the trend of the ¹³Cα ADCC plots is independent of the computational method but not of the basis set valence shell type.
['Hoora Shaghaghi', 'Hossein Pasha Ebrahimi', 'Fariba Fathi', 'Niloufar Bahrami Panah', 'Mehdi Jalali‐Heravi', 'Mohsen Tafazzoli']
A simple graphical approach to predict local residue conformation using NMR chemical shifts and density functional theory
657,807
Most decision tree classifiers are designed to classify the data with categorical or Boolean class labels. Unfortunately, many practical classification problems concern data with class labels that are naturally organized as a hierarchical structure, such as test scores. In the hierarchy, the ranges in the upper levels are less specific but easier to predict, while the ranges in the lower levels are more specific but harder to predict. To build a decision tree from this kind of data, we must consider how to classify data so that the class label can be as specific as possible while also ensuring the highest possible accuracy of the prediction. To the best of our knowledge, no previous research has considered the induction of decision trees from data with hierarchical class labels. This paper proposes a novel classification algorithm for learning decision tree classifiers from data with hierarchical class labels. Empirical results show that the proposed method is efficient and effective in both prediction accuracy and prediction specificity.
['Yen-Liang Chen', 'Hsiao-Wei Hu', 'Kwei Tang']
Constructing a decision tree from data with hierarchical class labels
468,280
Presently, more than 7 million Haitians have no access to power or basic energy-related services. The available generation capacity of Haiti reaches 212 MW, which is insufficient to meet the estimated peak demand of more than 500 MW in the whole country. This deficit severely impacts essential facilities such as health care centers. The IEEE Student Branch PES Chapter at Georgia Tech established a project to design and implement a microgrid to supply power to a recently established health center in the mountains of Thoman, Haiti. Several combinations of power generating units were evaluated on an economic basis, including: a standalone diesel generator (DG), photovoltaic (PV) panels with batteries, and PV panels with batteries and a DG. Key parameters including power rating, daily energy production, maximum annual capacity shortage, etc., were also incorporated into the economic evaluation. This paper outlines the preliminary microgrid design steps, assessment of topology alternatives, site visit, detailed design and the fundraising process. Only commercial off-the-shelf parts were considered for device selection. To verify the preliminary design, a site visit was conducted in February 2015. Installation and commissioning is expected to take place later this year.
['Szilard Liptak', 'Ashley Stone', 'Felipe A. Larrain']
Power supply of a rural off-grid health center — A case study
558,893
According to Shannon's classical information theory [19], information is measured by the reduction of uncertainty, and the latter is measured by entropy. This theory is concerned with the transmission of symbols from a finite alphabet. The uncertainty concerns the question of which symbol is sent, and the information is given by a probabilistic model of the transmission channel and the symbol observed at the output of the channel. This leads to a statistical communication theory, which is still the main subject of communication theory today.
['Jürg Kohlas', 'Cesar Schneuwly']
Information Algebra
820,451
In this paper, the p-ranks and characteristic polynomials of cyclic difference sets are derived by expanding the trace expressions of their characteristic sequences. The 3-ranks and characteristic polynomials of Helleseth-Kumar-Martinsen (HKM) and Lin difference sets are obtained, and the characteristic polynomial of the Singer difference set is calculated.
['Jong-Seon No', 'Dong-Joon Shin', 'Tor Helleseth']
On the p-ranks and characteristic polynomials of cyclic difference sets
102,278
The proposed use of asynchronous transfer mode (ATM) in B-ISDN necessitates fast packet switches (FPS) capable of providing adequate quality of service (QoS). The knockout switch is an FPS with low delay and cell loss at the switch level. However, the performance perceived by a specific input is not simple to determine, since the often computed cell loss probability (CLP) is a time averaged value obtained with respect to the switch. An analysis of the distribution of consecutive cell losses seen at a tagged port would be more useful. In this paper, we examine this distribution under a wide range of traffic burstiness for the knockout switch using Markov processes generated automatically from a stochastic activity network (SAN) representation. The results provide useful information on the performance at a specific input, as well as illustrate the usefulness of SANs in modeling and analyzing telecommunication switch designs.
['Latha Kant', 'William H. Sanders']
Loss process analysis of the knockout switch using stochastic activity networks
320,797
Massive parallelization has led to a dramatic increase in available computational power. However, data transfer speeds have failed to keep pace and are the major limiting factor in the development of exascale computing. New algorithms must be developed which minimize the transfer of data. Patch dynamics is a computational macroscale modeling scheme which provides a coarse macroscale solution of a problem defined on a fine microscale by dividing the domain into many nonoverlapping, coupled patches. Patch dynamics is readily adaptable to massive parallelization as each processor can evaluate the dynamics on one, or a few, patches. However, patch coupling conditions interpolate across the unevaluated parts of the domain between patches, and are typically reevaluated at every microscale time step, thus requiring almost continuous data transfer. We propose a modified patch dynamics scheme which minimizes data transfer by only reevaluating the patch coupling conditions at “mesoscale” time scales which are sign...
['J. E. Bunder', 'A. J. Roberts', 'Ioannis G. Kevrekidis']
Accuracy of Patch Dynamics with Mesoscale Temporal Coupling for Efficient Massively Parallel Simulations
830,009
Since the introduction of cost-based query optimization, the performance-critical role of interesting orders has been recognized. Some algebraic operators change interesting orders (e.g. sort and select), while others exploit interesting orders (e.g. merge join). The two operations performed by any query optimizer during plan generation are 1) computing the resulting order given an input order and an algebraic operator and 2) determining the compatibility between a given input order and the required order a given algebraic operator can beneficially exploit. Since these two operations are called millions of times during plan generation, they are highly performance-critical. The third crucial parameter is the space requirement for annotating every plan node with its output order. Lately, a powerful framework for reasoning about orders has been developed, which is based on functional dependencies. Within this framework, the current state-of-the-art algorithms for implementing the above operations both have a lower bound time requirement Ω(n), where n is the number of functional dependencies involved. Further, the lower bound for the space requirement for every plan node is Ω(n). We improve these bounds by new algorithms with upper time bounds O(1). That is, our algorithms for both operations work in constant time during plan generation, after a one-time preparation step. Further, the upper bound for the space requirement for plan nodes is O(1) for our approach. Besides, our algorithm reduces the search space by detecting and ignoring irrelevant orderings. Experimental results with a full-fledged query optimizer show that our approach significantly reduces the total time needed for plan generation. As a corollary of our experiments, it follows that the time spent for order processing is a nonnegligible part of plan generation.
['Thomas Neumann', 'Guido Moerkotte']
An efficient framework for order optimization
275,190
The proposal that cortical activity in the visual cortex is optimized for sparse neural activity is one of the most established ideas in computational neuroscience. However, direct experimental evidence for optimal sparse coding remains inconclusive, mostly due to the lack of reference values on which to judge the measured sparseness. Here we analyze neural responses to natural movies in the primary visual cortex of ferrets at different stages of development and of rats while awake and under different levels of anesthesia. In contrast with prediction from a sparse coding model, our data shows that population and lifetime sparseness decrease with visual experience, and increase from the awake to anesthetized state. These results suggest that the representation in the primary visual cortex is not actively optimized to maximize sparseness.
['Pietro Berkes', 'Benjamin L. White', 'József Fiser']
No evidence for active sparsification in the visual cortex
446,778
The question whether target selection in visual search can be effectively controlled by simultaneous attentional templates for multiple features is still under dispute. We investigated whether multiple-color attentional guidance is possible when target colors remain constant and can thus be represented in long-term memory but not when they change frequently and have to be held in working memory. Participants searched for one, two, or three possible target colors that were specified by cue displays at the start of each trial. In constant-color blocks, the same colors remained task-relevant throughout. In variable-color blocks, target colors changed between trials. The contralateral delay activity (CDA) to cue displays increased in amplitude as a function of color memory load in variable-color blocks, which indicates that cued target colors were held in working memory. In constant-color blocks, the CDA was much smaller, suggesting that color representations were primarily stored in long-term memory. N2pc co...
['Anna Grubert', 'Nancy B. Carlisle', 'Martin Eimer']
The Control of Single-color and Multiple-color Visual Search by Attentional Templates in Working Memory and in Long-term Memory
860,441
In this paper, a performance comparison of evolutionary algorithms (EAs) such as the real coded genetic algorithm (RGA), modified particle swarm optimization (MPSO), covariance matrix adaptation evolution strategy (CMAES), and differential evolution (DE) on optimal multivariable PID controller design is considered. A decoupled multivariable PI and PID controller structure for the binary distillation column plant described by Wood and Berry, having 2 inputs and 2 outputs, is taken. EA simulations are carried out with minimization of IAE as the objective, using two types of stopping criteria, namely, a maximum number of function evaluations (Fevalmax) and Fevalmax along with a tolerance on the PID parameters and IAE. To compare the performances of the various EAs, statistical measures like the best, mean, and standard deviation of results and average computation time over 20 independent trials are considered. Results obtained by the various EAs are compared with previously reported results using BLT and a GA with a multi-crossover approach. The results clearly indicate the better performance of the CMAES- and MPSO-designed PI/PID controllers on the multivariable system. Simulations also reveal that all four algorithms considered are suitable for off-line tuning of PID controllers. However, only the CMAES and MPSO algorithms are suitable for on-line tuning of PID controllers due to their better consistency and minimum computation time.
['M. Willjuice Iruthayarajan', 'S. Baskar']
Evolutionary algorithms based design of multivariable PID controller
481,213
3GPP LWIP Release 13 technology and its prestandard version Wi-Fi Boost have recently emerged as an efficient LTE and Wi-Fi integration at the IP layer, allowing uplink on LTE and downlink on Wi-Fi. This solves all the contention problems of Wi-Fi and allows an optimum usage of the unlicensed band for downlink. In this paper, we present a new feature of Wi-Fi Boost, its radio link management, which allows to steer the downlink traffic between both LTE and Wi-Fi upon congestion detection in an intelligent manner. This customised congestion detection algorithm is based on IP probing, and can work with any Wi-Fi access point. Simulation results in a typical enterprise scenario show that LWIP R13 and Wi-Fi Boost can enhance network performance up to 5x and 6x over LTE-only networks, and 4x and 5x over Wi-Fi-only networks, respectively, and that the proposed radio link management can further improve Wi-Fi Boost performance over LWIP R13 by up to 19%. Based on the promising results, this paper suggests to enhance LWIP R13 user feedback in future LTE releases.
['David Lopez-Perez', 'Jonathan Ling', 'Bong Ho Kim', 'Vasudevan Subramanian', 'Satish Kanugovi', 'Ming Ding']
LWIP and Wi-Fi Boost Link Management
856,928
This paper proposes a symmetric-key-based PHRMS solution for the cloud satisfying the following security and privacy properties: (1) forward data secrecy, i.e., a user (for example, a doctor) with old keys cannot access any newly added data; (2) data unlinkability, i.e., no unauthorized user can link an outsourced PHR record to its owner; and (3) write integrity protection, i.e., no unauthorized user can modify the outsourced PHR data, including their actual writers (for example, a doctor or a laboratory), even if they collude with the cloud service provider.
['Naveen Kumar', 'Anish Mathuria', 'Manik Lal Das']
Achieving Forward Secrecy and Unlinkability in Cloud-Based Personal Health Record System
565,251
While several previous works have considered the problem of resource (including subcarrier and power) allocation in multicell orthogonal frequency division multiple access networks, only a few contributions have explicitly taken into account the elastic nature of data applications. In this work, each user is associated with a minimum and maximum resource block requirement and the resource allocation problem consists of maximizing the overall throughput such that these requirements are met. We propose a hybrid method that partitions this problem into a centralized and a distributed algorithm that balances between maximizing the overall throughput and being feasible in real systems. By means of simulations we compare four resource allocation strategies that represent various degrees of multi-cell coordination and taking advantage of multi-user diversity. We find that a feasible dynamic coordination combined with intra-cell multi-user diversity provides large throughput gains compared with non-coordinated schemes or schemes that would limit the degree of freedom of multi-user diversity.
['Chrysostomos Koutsimanis', 'Gabor Fodor']
A Dynamic Resource Allocation Scheme for Guaranteed Bit Rate Services in OFDMA Networks
286,874
The write amplification problem deteriorates as the block size of modern flash-memory chips keeps increasing. Without the careful garbage collection, significant live-page copying might even worsen the reliability problem, that is already severe to 3D flash memory. In this work, we propose a sub-block erase design to not only alleviate the write amplification problem by reducing live-page copying but also improve the system reliability with a software isolation strategy. In particular, sub-blocks are carefully allocated to satisfy write requests so as to reduce disturbance by using free or invalid sub-blocks as isolation layers among sub-blocks, without additional hardware cost and capacity loss. A series of experiments were conducted to evaluate the capability of the proposed design. The results show that the proposed design is very effective in improving the system performance by reducing garbage collection overheads and in improving the device reliability/lifetime.
['Hsin-Yu Chang', 'Chien-Chung Ho', 'Yuan-Hao Chang', 'Yu-Ming Chang', 'Tei-Wei Kuo']
How to enable software isolation and boost system performance with sub-block erase over 3D flash memory
905,362
This paper proposes the use of reference models to detect emotional prominence in the energy and F0 contours. The proposed framework aims to model the intrinsic variability of these prosodic features. We present a novel approach based on Functional Data Analysis (FDA) to build reference models using a family of energy and F0 contours, which are implemented with lexicon-independent models. The neutral models are represented by bases of functions and the testing energy and F0 contours are characterized by their projections onto the corresponding bases. The proposed system can lead to accuracies as high as 80.4% in binary emotion classification in the EMODB corpus, which is 17.6% higher than the one achieved by a benchmark classifier trained with sentence level prosodic features. The approach is also evaluated with the SEMAINE corpus, showing that it can be effectively used in real applications.
['Juan Pablo Arias', 'Carlos Busso', 'Néstor Becerra Yoma']
Energy and F0 contour modeling with Functional Data Analysis for Emotional Speech Detection
779,304
Column-stores have gained popularity as a promising physical design alternative. The main overhead of query processing in column-stores is on-the-fly tuple reconstruction for multi-attribute queries. Typical column-stores, such as C-Store and MonetDB, use projections to support tuple reconstruction, but how to select attributes for a projection remains an open problem. This paper presents an adaptive approach to solve this problem. Our approach exploits an adaptive algorithm to cluster attributes for each projection. We show that our approach is well suited to clustering attributes for projections in column-stores and enables projections to adapt dynamically to users' query habits, and is therefore an effective method for selecting attributes for projections.
['Xiangwu Ding', 'Jiajin Le']
Adaptive projection in Column-stores
261,393
This paper presents a comparative study of the performance of the bidirectional ring and the unidirectional ring multiprocessor, with emphasis on the effect of system parameters, specifically the message length and the relative processor speed. The choice of these parameters may not be optimum due to performance-cost tradeoffs in practice. Our study shows that the use of a bidirectional ring is more effective in such suboptimum system configurations and can improve processor utilization by up to 35%.
['Hitoshi Oi', 'Nagarajan Ranganathan']
Effect of message length and processor speed on the performance of the bidirectional ring-based multiprocessor
445,263
This paper reports a texture separation algorithm to solve the problem of unsupervised boundary localization in textured images. The proposed algorithm is mainly characterized by the extraction of textural density gradients through a nonlinear multiple scale-space analysis of the image. Texture boundaries are extracted by segmenting the images resulting from a multiscale fuzzy gradient operation applied to detail images. The segmentation stage consists of a parallel hierarchical clustering algorithm aimed at the minimization of a global cost functional that takes into account region homogeneity and segmentation quality. Experiments and comparisons on Brodatz textures are reported.
['Alfredo Petrosino', 'Michele Ceccarelli']
Unsupervised texture discrimination based on rough fuzzy sets and parallel hierarchical clustering
326,103
We show how Gabidulin codes can be list decoded by using a parametrization approach. For this we consider a certain module in the ring of linearized polynomials and find a minimal basis for this module using the Euclidean algorithm with respect to composition of polynomials. For a given received word, our decoding algorithm computes a list of all codewords that are closest to the received word with respect to the rank metric.
['Margreta Kuijper', 'Anna-Lena Trautmann']
List-Decoding Gabidulin Codes via Interpolation and the Euclidean Algorithm
601,201
Facial scanning has become ubiquitous in digital media, but so far most efforts have focused on reconstructing the skin. Eye reconstruction, on the other hand, has received only little attention, and the current state-of-the-art method is cumbersome for the actor, time-consuming, and requires carefully set up and calibrated hardware. These constraints currently make eye capture impractical for general use. We present the first approach for high-quality lightweight eye capture, which leverages a database of pre-captured eyes to guide the reconstruction of new eyes from much less constrained inputs, such as traditional single-shot face scanners or even a single photo from the internet. This is accomplished with a new parametric model of the eye built from the database, and a novel image-based model-fitting algorithm. Our method provides both automatic reconstructions of real eyes and artistic control over the parameters to generate user-specific eyes.
['Pascal Bérard', 'Derek Bradley', 'Markus H. Gross', 'Thabo Beeler']
Lightweight eye capture using a parametric model
835,539
The design description for an integrated circuit may be described in terms of three domains, namely: (1) the behavioral domain, (2) the structural domain, and (3) the physical domain. These domains may be hierarchically divided into several levels of abstraction. Classically, these levels of abstraction are (1) the architectural or functional level, (2) the register-transfer level, (3) the logic level, and (4) the circuit level. Some of the design problems associated with VLSI circuit design are area, speed, reliability, and power consumption. With the development of portable devices, power consumption has become a dominant design consideration in modern VLSI design. In each of these domains there are a number of design challenges for reducing power. For instance, at the behavioral level, the freedom to choose multiple voltages and frequencies to minimize power while meeting given hard timing constraints is an active field of research. Past research has shown that the higher the level of abstraction, the better the ability to address the problems associated with the design. Therefore, this work proposes an algorithm that allocates both voltage and frequency simultaneously to the operations of the directed flow graph to optimize power under given timing constraints. The resources required for multiple voltage-frequency scheduling are derived using the classical force-directed scheduling algorithm. This algorithm has been implemented and tested on high-level synthesis benchmarks for both non-pipelined and pipelined instances.
['Venkatesan Muthukumar', 'Bharath Radhakrishnan', 'Henry Selvaraj']
Multiple voltage and frequency scheduling for power minimization
79,256
Software process improvement and measurement are closely linked: measures are the only way to prove improvements in a process. Despite this link, and the interest in process improvement, measurement is not widely applied in industrial software production. This paper describes a method designed to guide the definition, implementation, and operation of measurement processes. The method, which builds upon Fenton's measurement framework and GQM, starts from the observation that measuring a software process is in its turn a process within the software process. The three basic ideas of the method derive from this assumption: (1) the measurement process should reuse and suitably adapt the same phases of the software process (requirements definition, design, implementation, etc.); (2) a descriptive process model should be the essential starting point of a measurement process; (3) many concepts and tools deriving from the object-oriented approach can be effectively used in the measurement process. An experimental application in an industrial process has shown that building the process model was the hardest part of the measurement process, and that it improved the quality of measurement by reducing misunderstandings. Object-oriented concepts and tools make it possible to automate certain tasks (for instance, the definition of the schema of the measurement database) and to improve robustness against changes in the measurement process.
['Maurizio Morisio']
Measurement processes are software, too
435,199
Visual Tracking by Local Superpixel Matching with Markov Random Field
['Heng Fan', 'Jinhai Xiang', 'Zhongmin Chen']
Visual Tracking by Local Superpixel Matching with Markov Random Field
940,649