abstract | authors | title | __index_level_0__
---|---|---|---|
Quality perception of 3-D images is one of the most important factors for accelerating advances in 3-D imaging fields. Despite active research in recent years on understanding the quality perception of 3-D images, binocular quality perception of asymmetric distortions in stereoscopic images is not thoroughly understood. In this paper, we explore the relationship between the perceptual quality of stereoscopic images and visual information, and introduce a model for binocular quality perception. Based on this binocular quality perception model, a no-reference quality metric for stereoscopic images is proposed. The proposed metric is a top-down method that models the binocular quality perception of the human visual system in the context of blurriness and blockiness. Perceptual blurriness and blockiness scores of the left and right images are computed using local blurriness, blockiness, and visual saliency information and then combined into an overall quality index using the binocular quality perception model. Experiments on image and video databases show that the proposed metric correlates consistently with subjective quality scores. The results also show that the proposed metric outperforms existing full-reference methods even though it is a no-reference approach. | ['Seungchul Ryu', 'Kwanghoon Sohn'] | No-Reference Quality Assessment for Stereoscopic Images Based on Binocular Quality Perception | 245,039 |
Given the multi-source, heterogeneous structure and massive data of WebGIS, as well as the complexity of the system, this paper proposes a design model based on the Web and Multi-Agent technology. Moreover, by analyzing the characteristics of each agent in the design model, the paper further investigates the communication problem among the agents. It also proposes a communication model and presents the analysis and design of a communication adapter. | ['Guangru Li', 'Jingfeng Hu', 'Xian Wu'] | Communication Adapter Design of Multi-Agent in Webgis | 426,183 |
Process capability modeling has become a tool for systematizing and codifying knowledge of process-oriented activities. Enterprise SPICE defines a domain-independent integrated model for enterprise-wide assessment and continuous improvement. This paper presents the complete export process capability assessment model containing the export body of knowledge expressed in terms of SPICE-conformant processes, their outcomes, and base practices. The purpose of the developed model is to enable the use of the domain-independent Enterprise SPICE model for export process improvement. | ['Jérémy Besson', 'Antanas Mitasiunas', 'Saulius Ragaisis'] | Export Process Capability Assessment Model | 833,526 |
In this paper, we propose a low-complexity gradient-based approach for enabling the Tone Reservation (TR) technique to reduce the peak-to-average power ratio (PAPR) of orthogonal frequency division multiplexing (OFDM) signals. The performance of the proposed algorithm is evaluated for different pilot locations in the frequency domain, and also in combination with the discrete Fourier transform (DFT) spreading technique proposed in [Heung-Gyoon Ryu, et al., 2007], in order to further reduce the PAPR. Simulation results show that the new technique achieves significant PAPR reductions, which are further enhanced when it is combined with DFT spreading. The simulation results also show that the performance of the technique depends on the pilot positions. In addition, we investigate the case where the reserved tones are constrained to be equal to the average power mask of the data tones, via a simple projection rule in the frequency domain, both for the TR scheme and for the combined scheme. Simulation results show that the contiguous pilot arrangement provides better PAPR reduction performance in both cases when the peak-cancellation signal is constrained in the frequency domain. | ['Sundarampillai Janaaththanan', 'Christos Kasparis', 'Barry G. Evans'] | A Gradient Based Algorithm for PAPR Reduction of OFDM using Tone Reservation Technique | 507,108 |
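To make the tone-reservation iteration concrete, here is a minimal Python sketch of a clip-and-project gradient step that updates only the reserved tones. It is not necessarily the authors' exact algorithm: the clip level (twice the RMS amplitude, about 6 dB), the step size, the iteration count, and the equispaced reserved-tone layout are all illustrative assumptions.

```python
import numpy as np

def papr_db(x):
    """Peak-to-average power ratio of a complex baseband signal, in dB."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

def tr_gradient_papr(X, reserved, iters=20, mu=1.0):
    """Gradient-style tone reservation: estimate the clipping noise in the
    time domain and subtract its spectrum on the reserved tones only."""
    N = len(X)
    C = np.zeros(N, dtype=complex)                    # peak-cancellation spectrum
    x0 = np.fft.ifft(X)
    target = 2.0 * np.sqrt(np.mean(np.abs(x0) ** 2))  # assumed ~6 dB clip level
    for _ in range(iters):
        x = np.fft.ifft(X + C)
        over = np.abs(x) > target
        clip = np.zeros(N, dtype=complex)
        clip[over] = x[over] * (1.0 - target / np.abs(x[over]))  # clipping noise
        G = np.fft.fft(clip)
        C[reserved] -= mu * G[reserved]               # correct reserved tones only
    return X + C

# toy run: QPSK on all but 8 equispaced reserved tones (layout is an assumption)
N = 256
rng = np.random.default_rng(0)
reserved = np.arange(0, N, 32)
X = (rng.choice([1, -1], N) + 1j * rng.choice([1, -1], N)) / np.sqrt(2)
X[reserved] = 0
print(papr_db(np.fft.ifft(X)), papr_db(np.fft.ifft(tr_gradient_papr(X, reserved))))
```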
In the uplink of a fiber-based wireless system, the multipath dispersion introduced by the wireless link and the nonlinear distortion caused by the radio-over-fiber (RoF) link significantly degrade system performance. Channel equalization is challenging because these impairments are generally unknown to the receiver. A novel technique for estimating both impairments using ternary pseudorandom signals is proposed. The key idea is to exploit proper harmonic suppression through signal design, such that errors inflicted by the nonlinear distortion can be eliminated or reduced. Capitalizing on this, a simple yet effective correlation approach is applied to accurately estimate the channel impulse response. Subsequently, least squares polynomial fitting is invoked to identify the nonlinearity of the RoF link, making use of the linear estimates obtained earlier. Design guidelines that lead to good performance are proposed. Results reveal that the proposed technique improves estimation accuracy by at least a factor of two over an existing technique utilizing binary signals in the various scenarios investigated, even though the identification period of the proposed technique is almost 1000 times shorter. Because the proposed technique enables a simpler transmitter structure and requires shorter identification periods and lower transmission power, it is appealing for use in broadband fiber-based wireless systems. | ['Yin Hoe Ng', 'Ai Hui Tan', 'Teong Chee Chuah'] | Channel Identification of Concatenated Fiber-Wireless Uplink Using Ternary Signals | 322,725 |
The paper presents a feasible architecture model for technical building systems (TBS), particularly suitable for Nearly Zero Energy Buildings (NZEBs). NZEBs are buildings whose energy consumption is optimized by means of solutions that drastically reduce both electric and thermal demand, while the residual required energy is provided by local renewable generation. The suggested model aggregates users around an electric node in a common microgrid in order to reach the threshold value of electric power and to obtain a more virtuous and flexible cumulative load profile. The building (or a group of buildings) represents the natural limit of the aggregation of the electric systems, as in heating systems. The present proposal is a fully electric smart microgrid with heating and domestic hot water generated by a centralized electric heat pump system. The renewable energy is provided by a photovoltaic field. The authors propose controlling the whole electric demand of the building by exploiting its thermal inertia as energy storage, forcing both local and central set points of the heating and air conditioning systems and exploiting the time-shifting opportunities of smart appliances. A case study is presented. | ['L. Martirano', 'Emanuele Habib', 'Giuseppe Parise', 'Giacomo Greco', 'Matteo Manganelli', 'Ferdinando Massarella', 'Luigi Parise'] | Smart micro grids for Nearly Zero Energy Buildings | 931,289 |
Super-deformed (SD) is a specific artistic style in Japanese manga and anime that exaggerates characters with the goal of appearing cute and funny. SD-style characters are widely used and can be seen in many anime, CG movies, and games. However, creating an SD model often requires professional skills and considerable time and effort. In this paper, we present a novel technique to generate an SD-style counterpart of a normal 3D character model. Our approach uses an optimization guided by a number of constraints that capture the properties of the SD style. Users can also customize the results by specifying a small set of parameters related to the body proportions and the emphasis of the signature characteristics. With our technique, even a novice user can generate visually pleasing SD models in seconds. © 2012 Wiley Periodicals, Inc. | ['Liang-Tsen Shen', 'Sheng-Jie Luo', 'Chun-Kai Huang', 'Bing-Yu Chen'] | SD Models: Super-Deformed Character Models | 535,455 |
In this study, we first consider singular integrals as generalized functions in two dimensions and then solve the non-homogeneous wave equation with a convolution term by using generalized functions as boundary conditions. | ['Adem Kilicman', 'Hassan Eltayeb'] | A note on defining singular integral as distribution and partial differential equations with convolution term | 342,201 |
The tactual scanning of five naturalistic textures was recorded with an apparatus that is capable of measuring the tangential interaction force with a high degree of temporal and spatial resolution. The resulting signal showed that the transformation from the geometry of a surface to the force of traction and, hence, to the skin deformation experienced by a finger is a highly nonlinear process. Participants were asked to identify simulated textures reproduced by stimulating their fingers with rapid, imposed lateral skin displacements as a function of net position. They performed the identification task with a high degree of success, yet not perfectly. The fact that the experimental conditions eliminated many aspects of the interaction, including low-frequency finger deformation, distributed information, as well as normal skin movements, shows that the nervous system is able to rely on only two cues: amplitude and spectral information. The examination of the “spatial spectrograms” of the imposed lateral skin displacement revealed that texture could be represented spatially, despite being sensed through time, and that these spectrograms were distinctively organized into what could be called “spatial formants.” This finding led us to speculate that the mechanical properties of the finger enable spatial information to be used for perceptual purposes in humans with no distributed sensing, which is a principle that could be applied to robots. | ['Michael Wiertlewski', 'José Lozada', 'Vincent Hayward'] | The Spatial Spectrum of Tangential Skin Displacement Can Encode Tactual Texture | 545,946 |
A novel color interpolation algorithm for the color filter array (CFA) in digital still cameras (DSCs) is presented. The paper introduces pre-estimation of the minimum square error to address color interpolation for the CFA. In order to estimate the missing pixels in the Bayer CFA pattern, the weights of adjacent color pattern pairs are decided by matrix computation. We adopt the color model (K_R, K_B) used in many color interpolation algorithms for CFA. The proposed algorithm achieves better performance, as shown in the experimental results. Compared with previous methods, the proposed color interpolation algorithm can provide a high quality image in DSCs. | ['Jhing-Fa Wang', 'Chien-Shun Wang', 'Han-Jen Hsu'] | A novel color interpolation algorithm by pre-estimating minimum square error | 307,674 |
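As a point of reference for what any CFA interpolation must accomplish, the sketch below implements plain bilinear demosaicing of an RGGB Bayer mosaic via normalized convolution. This is the common baseline, not the paper's pre-estimated minimum-square-error method or its (K_R, K_B) color-difference model, and the RGGB layout is an assumption.

```python
import numpy as np

def conv3x3(a, k):
    """Same-size 3x3 convolution with zero padding (kernel is symmetric)."""
    p = np.pad(a, 1)
    return sum(k[i, j] * p[i:i + a.shape[0], j:j + a.shape[1]]
               for i in range(3) for j in range(3))

def bilinear_demosaic(raw):
    """Baseline bilinear interpolation of an RGGB Bayer mosaic:
       row 0: R G R G ...   row 1: G B G B ...   (assumed layout)."""
    H, W = raw.shape
    masks = np.zeros((3, H, W), bool)
    masks[0, 0::2, 0::2] = True                      # R sites
    masks[1, 0::2, 1::2] = True                      # G sites (even rows)
    masks[1, 1::2, 0::2] = True                      # G sites (odd rows)
    masks[2, 1::2, 1::2] = True                      # B sites

    k = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], float) / 4.0
    out = np.zeros((H, W, 3))
    for c in range(3):
        num = conv3x3(np.where(masks[c], raw, 0.0), k)
        den = conv3x3(masks[c].astype(float), k)
        out[..., c] = num / np.maximum(den, 1e-9)    # normalized convolution
    return out
```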
Nowadays, new in-memory-based Enterprise Resource Planning solutions are available to companies. One of the main purposes of these solutions is the availability of real-time analytics. In a previous paper [1], we discussed how Oracle In-Memory Database solutions can support faster on-line analytical processing queries, and we measured and discussed the limitations of the Oracle in-memory extension related to query complexity and table size, which showed a downturn in the in-memory performance gain as the number of tables to be joined rises. [3] In the present paper, the question is what this in-memory performance gain looks like for on-line transaction processing functionality. In our examination, we move toward comparing this performance issue across different types of in-memory databases. | ['Patrik Szpisjak', 'Levente Radai'] | Performance issues of In-Memory Databases in OLTP systems | 851,034 |
Addition of a backwards-compatible enhanced data mode (E-VSB) to the ATSC (Advanced Television Systems Committee) standard 8-VSB transmissions requires a way to reliably indicate to new enhanced receivers the mix of E-VSB and 8-VSB data. This mix is indicated by "map" data using bits in the reserved area of the 8-VSB data field sync. This must be done in a manner that allows for changing the mix "on the fly" without loss of E-VSB/8-VSB data. The reception of the map data must be significantly more robust than the lowest code rate E-VSB data. A method is described that offers reliable signaling without latency for channel-change in the receiver. | ['Mark Fimoff', 'Thomas P. Horwitz', 'Wayne E. Bretl'] | E-VSB map signaling | 415,396 |
Fluid Documents incorporate additional information into a page by adjusting typography using interactive animation. One application is to support hypertext browsing by providing glosses for link anchors. This paper describes an observational study of the impact of Fluid Documents on reading and browsing. The study involved six conditions that differ along several dimensions, including the degree of typographic adjustment and the distance glosses are placed from anchors. Six subjects read and answered questions about two hypertext corpora while being monitored by an eyetracker. The eyetracking data revealed no substantial differences in eye behavior between conditions. Gloss placement was significant: subjects required less time to use nearby glosses. Finally, the reaction to the conditions was highly varied, with several conditions receiving both a best and a worst rating on the subjective questionnaires. These results suggest implications for the design of dynamic reading environments. | ['Polle T. Zellweger', 'Susan Harkness Regli', 'Jock D. Mackinlay', 'Bay-Wei Chang'] | The impact of fluid documents on reading and browsing: an observational study | 8,720 |
Conference Key Management (CKM) is one of the primary issues in Secure Dynamic Conferencing (SDC). In this paper, we propose a novel CKM scheme for SDC based on the secret sharing principle and the novel concept of a randomised access polynomial. Our scheme is simple, efficient, scalable, practical, and dynamic, and it outperforms existing CKM schemes in overall comparison. Furthermore, if t or fewer users collude, the new scheme is unconditionally secure and able to defend against their collusion. The storage (O(1) at the user end), computation, and communication efficiency of the new scheme makes it well suited for networks with low-power devices. | ['Xukai Zou', 'Yogesh Karandikar'] | A novel Conference Key Management solution for Secure Dynamic Conferencing | 260,502 |
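To give a feel for how a key can be hidden in an access polynomial, the sketch below implements the plain (non-randomised) construction: the broadcaster publishes A(x) = Π(x − s_i) + K over a prime field, so evaluating A at any authorized secret share s_i makes the product vanish and returns K. This shows only the underlying idea; the paper's randomised variant adds blinding for collusion resistance, and the field size and share generation here are arbitrary choices.

```python
import random

P = (1 << 127) - 1   # field modulus; a convenient Mersenne prime (arbitrary choice)

def polymul(a, b):
    """Multiply two polynomials (coefficient lists, highest degree first) mod P."""
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] = (out[i + j] + x * y) % P
    return out

def broadcast_poly(shares, group_key):
    """Coefficients of A(x) = prod_i (x - s_i) + K  (mod P)."""
    coeffs = [1]
    for s in shares:
        coeffs = polymul(coeffs, [1, -s % P])
    coeffs[-1] = (coeffs[-1] + group_key) % P      # add K to the constant term
    return coeffs

def evaluate(coeffs, x):
    """Horner's rule evaluation mod P."""
    acc = 0
    for c in coeffs:
        acc = (acc * x + c) % P
    return acc

shares = [random.randrange(1, P) for _ in range(5)]
K = random.randrange(P)
A = broadcast_poly(shares, K)
assert all(evaluate(A, s) == K for s in shares)    # every member recovers K
```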
Traditional text classification algorithms are based on a basic assumption: the training and test data should follow the same distribution. However, this identical-distribution assumption is often violated in real applications. Because the distribution of test data from the target domain and the distribution of training data from the auxiliary domain differ, we call this classification problem cross-domain classification. Although most of the training data are drawn from the auxiliary domain, we can still obtain a few training data drawn from the target domain. To solve the cross-domain classification problem in this situation, we propose a two-stage algorithm based on semi-supervised classification. We first utilize labeled data in the target domain to filter the support vectors of the auxiliary domain, then use the filtered data and the labeled data from the target domain to construct a classifier for the target domain. The experimental evaluation on real-world text classification problems demonstrates encouraging results and validates our approach. | ['Yi Zhen', 'Chunping Li'] | Cross-Domain Knowledge Transfer Using Semi-supervised Classification | 289,198 |
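One plausible instantiation of the two-stage pipeline, expressed with scikit-learn, is sketched below: stage one trains an SVM on the auxiliary domain and keeps only the support vectors that agree with a small SVM trained on the labeled target data; stage two retrains on the filtered support vectors plus the target data. The agreement-based filtering rule, the linear kernel, and C are our assumptions, not details given in the abstract.

```python
import numpy as np
from sklearn.svm import SVC

def cross_domain_train(X_aux, y_aux, X_tgt, y_tgt, C=1.0):
    # Stage 1: fit on the auxiliary domain; its support vectors summarize it.
    aux = SVC(kernel="linear", C=C).fit(X_aux, y_aux)
    sv_X, sv_y = X_aux[aux.support_], y_aux[aux.support_]

    # Filter: keep only support vectors a target-trained probe agrees with.
    probe = SVC(kernel="linear", C=C).fit(X_tgt, y_tgt)
    keep = probe.predict(sv_X) == sv_y

    # Stage 2: retrain on filtered auxiliary data plus labeled target data.
    X = np.vstack([sv_X[keep], X_tgt])
    y = np.concatenate([sv_y[keep], y_tgt])
    return SVC(kernel="linear", C=C).fit(X, y)
```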
Robust motion recovery in tracking multiple targets using image features is affected by difficulties in obtaining good correspondences over long sequences. Difficulties are introduced by occlusions, scale changes, as well as the disappearance of features with the rotation of targets. In this work, we describe an adaptive geometric template-based method for robust motion recovery from features. A geometric template consists of nodes containing salient features (e.g., corner features). The spatial configuration of the features is modeled using a spanning tree. This paper makes the following two contributions: (i) an adaptive geometric template to model the varying number of features on a target, and (ii) an iterative data association method for the features based on the uncertainties in the estimated template structure in conjunction with its individual features. We present experimental results for tracking multiple targets over long outdoor image sequences with multiple persistent occlusions. A comparison of the results of the data association method with a standard Mahalanobis distance gating applied to individual features is also presented. | ['Harini Veeraraghavan', 'Paul R. Schrater', 'Nikolaos Papanikolopoulos'] | Adaptive geometric templates for feature matching | 318,070 |
KoDEgen: A Knowledge Driven Engineering Code Generating Tool | ['Reuven Yagel', 'Anton Litovka', 'Iaakov Exman'] | KoDEgen: A Knowledge Driven Engineering Code Generating Tool | 600,960 |
Personalized health management services based on personal health record (PHR). | ['Eun-Young Jung', 'Dong-Kyun Park', 'Hyung Wook Kang', 'Yong Su Lim'] | Personalized health management services based on personal health record (PHR). | 750,625 |
A routing algorithm can significantly impact network performance. Routing in a network containing heterogeneous nodes differs from routing in a network with homogeneous nodes. If the routing algorithm is designed to fit less powerful nodes, the resources of more powerful nodes are wasted and network performance can be degraded. If the routing algorithm is developed to suit more powerful nodes, less powerful nodes may not have sufficient resources to run the algorithm and the network may break down. Routing algorithms developed for homogeneous networks do not work well for heterogeneous networks. The IETF designed the IPv6 Routing Protocol for Low-Power and Lossy Networks (RPL) by taking into account resource heterogeneity and defined four modes of operation. However, RPL only allows one mode of operation for all routers in a network. This paper proposes a resource-aware adaptive mode RPL (RAM-RPL) to achieve adaptive modes of operation in heterogeneous wireless machine-to-machine (M2M) networks. RAM-RPL not only allows routers to have mixed modes of operation in a network but also allows routers to adaptively adjust their modes of operation during network operation. Acting parent and acting root techniques are introduced to realize adaptive mode of operation and route compression. RAM-RPL exploits resource heterogeneity and shifts routing workload from less powerful nodes to more powerful nodes. Simulation results show that RAM-RPL can improve the data packet delivery rate by 26% and reduce control message overhead by 53% while maintaining similar packet latency. | ['Jianlin Guo', 'Philip V. Orlik', 'Kieran Parsons', 'Koichi Ishibashi', 'Daisuke Takita'] | Resource Aware Routing Protocol in Heterogeneous Wireless Machine-to-Machine Networks | 653,777 |
This paper proposes a fast-transient over-sampled adaptive switching DC-DC converter. The converter employs a delta-sigma (ΔΣ) modulator to perform noise shaping at a variable and well-regulated power output. As a result, the noise tones are much reduced compared to conventional PWM converters. An observation-based line and load regulation circuit is proposed for fast response to variations at both the power source and the output load. A half-clock double-sampled technique is also presented to improve the speed and resolution of the ΔΣ modulator. With a TSMC 0.35 μm N-well CMOS process, the converter exhibits a ripple voltage of less than 8 mV with an on-chip filtering capacitor of 10 nF and a maximum load power of 450 mW. The regulated output voltage ranges from 0.35 V to 1.7 V. Compared with its PWM counterpart, the converter's noise peak is reduced by 35 dB at the nominal switching frequency of 9.2 MHz. The design provides an effective solution for a large number of noise-sensitive applications. | ['Minkyu Song', 'Dongsheng Ma'] | A fast-transient over-sampled delta-sigma adaptive DC-DC converter for power-efficient noise-sensitive devices | 165,386 |
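The noise-shaping behavior that distinguishes a ΔΣ-controlled converter from a fixed-frequency PWM one can be seen in a plain first-order discrete-time loop: an error integrator followed by a 1-bit quantizer whose output is fed back. This toy model leaves out the paper's double-sampling and observation-based regulation entirely.

```python
import numpy as np

def delta_sigma_1bit(u):
    """First-order ΔΣ modulator. The ±1 bitstream's running average tracks
    the input while quantization noise is pushed to high frequencies."""
    acc, y_prev = 0.0, 0.0
    out = np.empty(len(u))
    for n, x in enumerate(u):
        acc += x - y_prev                 # integrate input minus feedback
        out[n] = 1.0 if acc >= 0 else -1.0
        y_prev = out[n]
    return out

u = np.full(20000, 0.3)                   # constant duty command in [-1, 1]
print(delta_sigma_1bit(u).mean())         # ≈ 0.3: the average follows the input
```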
Preference trees, or P-trees for short, offer an intuitive and often concise way of representing preferences over combinatorial domains. In this paper, we propose an alternative definition of P-trees, and formally introduce their compact representation that exploits occurrences of identical subtrees. We show that P-trees generalize lexicographic preference trees and are strictly more expressive. We relate P-trees to answer-set optimization programs and possibilistic logic theories. Finally, we study reasoning with P-trees and establish computational complexity results for the key reasoning tasks of comparing outcomes with respect to orders defined by P-trees, and of finding optimal outcomes. | ['Xudong Liu', 'Miroslaw Truszczynski'] | Reasoning with Preference Trees over Combinatorial Domains | 597,998 |
The verification of digital designs, i.e., hardware or embedded hardware/software systems, is an important task in the design process. Often more than 70% of the development time is spent locating and correcting errors in the design. Therefore, many techniques have been proposed to support the debugging process. Recently, simulation and test methods have been complemented by formal methods such as equivalence checking and property checking. However, their industrial applicability is currently restricted to small or medium sized designs or to a specific phase in the design cycle. In this paper, we present a method for verifying temporal properties of systems described in an executable description language. Our method allows the user to specify properties of the system in finite linear-time temporal logic. These properties are translated into a special kind of finite state machine, which is then efficiently checked on-the-fly during each simulation run. Properties may be placed anywhere in the system description, and violations are immediately indicated to the designer. | ['Jürgen Ruf', 'Dirk W. Hoffmann', 'Thomas Kropf', 'Wolfgang Rosenstiel'] | Simulation-guided property checking based on a multi-valued AR-automata | 343,515 |
The study of multiple-input-multiple-output (MIMO) systems over unshielded-twisted-pair (UTP) cables relies on or assumes models of the cable's transmission and crosstalk parameters. Several cable models have been proposed; however, there has been a lack of wide-scale line surveys to verify the applicability of these models. This paper presents the results of wideband-crosstalk and transmission-parameter measurements on UTP cables conducted at the laboratories of BTExact. The measurement results are then used to verify Joffe's MIMO channel model, which is one of the two available but mathematically equivalent models. It is shown that crosstalk estimates from measurements taken on very short cable lengths fail to account for the effects from the pair and bundle twisting. Model parameters measured on longer cable pieces yield more realistic frequency-dependent model predictions; these are, however, distorted at high frequencies due to resonance effects. In general, the model tends to overestimate far-end crosstalk and underestimate near-end crosstalk. | ['Nedko Nedev', 'Stephen McLaughlin', 'John W. Cook'] | Wideband Unshielded-Twisted-Pair (UTP) Cable Measurements and Modeling for Multiple-Input–Multiple-Output (MIMO) Systems | 241,086 |
A new approach for classification problems, called the proximal bilateral-weighted fuzzy support vector machine, is proposed, wherein each input example is treated as belonging to both the positive and negative classes with different fuzzy memberships. The assumption that every input example belongs to both classes is well justified in real-world applications. For example, in credit risk assessment a customer cannot always be assumed to be absolutely good or bad, as he may at times default on or repay his debt, and he may therefore be treated as belonging to both classes. Our formulation leads to solving a system of linear equations whose size equals the number of input examples. Computational results of the proposed method on publicly available datasets, including two credit risk analysis datasets, compared to those of the standard, proximal, and bilateral-weighted fuzzy support vector machine methods, clearly demonstrate its efficiency and usefulness. | ['S. Balasundaram', 'M. Tanveer'] | On proximal bilateral-weighted fuzzy support vector machine classifiers | 49,864 |
Theoretical analysis of musical noise in nonlinear noise reduction based on higher-order statistics | ['Yu Takahashi', 'Ryoichi Miyazaki', 'Hiroshi Saruwatari', 'Kazunobu Kondo'] | Theoretical analysis of musical noise in nonlinear noise reduction based on higher-order statistics | 908,603 |
The 8th International Workshop on Software Quality and Maintainability (SQM) was co-located with the CSMR-WCRE 2014 Software Evolution Week in Antwerp, Belgium in February 2014. SQM focuses on the boundaries between theory and practice of software quality. This special issue of ECEASST contains 7 papers that have been selected for inclusion in the postproceedings, after a rigorous peer review process and taking into account the feedback received from the reviewers and from the workshop participants. | ['Lodewijk Bergmans', 'Tom Mens', 'Steven Raemaekers'] | Preface of SQM 2014 Proceedings - 8th International Workshop on Software Quality and Maintainability | 662,447 |
As the use of wearable haptic devices with vibrating alert features becomes commonplace, an understanding of the perceptual categorization of vibrotactile frequencies has become important. This understanding can be substantially enhanced by unveiling how neural activity represents vibrotactile frequency information. Using functional magnetic resonance imaging (fMRI), this study investigated categorical clustering patterns of the frequency-dependent neural activity evoked by vibrotactile stimuli with gradually changing frequencies from 20 to 200 Hz. First, a searchlight multi-voxel pattern analysis (MVPA) was used to find brain regions exhibiting neural activities associated with frequency information. We found that the contralateral postcentral gyrus (S1) and the supramarginal gyrus (SMG) carried frequency-dependent information. Next, we applied multidimensional scaling (MDS) to find low-dimensional neural representations of different frequencies obtained from the multi-voxel activity patterns within these regions. The clustering analysis on the MDS results showed that neural activity patterns of 20-100 Hz and 120-200 Hz were divided into two distinct groups. Interestingly, this neural grouping conformed to the perceptual frequency categories found in previous behavioral studies. Our findings therefore suggest that neural activity patterns in the somatosensory cortical regions may provide a neural basis for the perceptual categorization of vibrotactile frequency. | ['Junsuk Kim', 'Yoon Gi Chung', 'Soon-Cheol Chung', 'H. H. Bülthoff', 'Sung-Phil Kim'] | Neural Categorization of Vibrotactile Frequency in Flutter and Vibration Stimulations: An fMRI Study | 858,365 |
We propose a novel nonparametric approach for semantic segmentation using high-order semantic relations. Conventional context models mainly focus on learning pairwise relationships between objects. Pairwise relations, however, are not enough to represent high-level contextual knowledge within images. In this paper, we propose semantic relation transfer, a method to transfer high-order semantic relations of objects from annotated images to unlabeled images, analogous to label transfer techniques where label information is transferred. We first define semantic tensors representing high-order relations of objects. The semantic relation transfer problem is then formulated as semi-supervised learning using a quadratic objective function of the semantic tensors. By exploiting the low-rank property of the semantic tensors and employing Kronecker sum similarity, an efficient approximation algorithm is developed. Based on the predicted high-order semantic relations, we reason about semantic segmentation and evaluate the performance on several challenging datasets. | ['Heesoo Myeong', 'Kyoung Mu Lee'] | Tensor-Based High-Order Semantic Relation Transfer for Semantic Scene Segmentation | 496,460 |
This note studies the existence of solutions to MIMO linear systems under decentralized relay feedback containing hysteresis. A necessary and sufficient condition is presented to guarantee the extended solutions at the so-called intersecting instant. | ['Chong Lin', 'Qing-Guo Wang', 'Tong Heng Lee'] | Existence of Solutions to MIMO Relay Feedback Systems | 256,309 |
To improve the efficiency of big data feature learning, this paper proposes a privacy-preserving deep computation model that offloads the expensive operations to the cloud. Privacy concerns are evident because applications in the smart city generate a large amount of private data, such as sensitive government data or proprietary enterprise information. To protect the private data, the proposed model uses the BGV encryption scheme to encrypt the private data and employs cloud servers to efficiently perform the high-order back-propagation algorithm on the encrypted data for deep computation model training. Furthermore, the proposed scheme approximates the sigmoid function as a polynomial function to support the secure computation of the activation function under BGV encryption. In our scheme, only the encryption and decryption operations are performed by the client, while all the computation tasks are performed on the cloud. Experimental results show that our scheme improves training efficiency by approximately 2.5 times compared to the conventional deep computation model, without disclosing the private data, using a cloud of ten nodes. More importantly, our scheme is highly scalable by employing more cloud servers, which makes it particularly suitable for big data. | ['Qingchen Zhang', 'Laurence T. Yang', 'Zhikui Chen'] | Privacy Preserving Deep Computation Model on Cloud for Big Data Feature Learning | 698,693 |
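The sigmoid-to-polynomial step can be reproduced with an ordinary least-squares fit, since a polynomial needs only the additions and multiplications that a leveled homomorphic scheme such as BGV can evaluate on ciphertexts. The degree (7) and the fitting interval ([-8, 8]) below are our choices for illustration, not values taken from the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

xs = np.linspace(-8, 8, 2001)
coeffs = np.polyfit(xs, sigmoid(xs), deg=7)   # least-squares polynomial surrogate
poly = np.poly1d(coeffs)

err = np.max(np.abs(poly(xs) - sigmoid(xs)))
print(f"max abs error on [-8, 8]: {err:.4f}")
```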
Many binary halftoning algorithms tend to render extreme tones (i.e., very light or very dark tones) with objectionable dot distributions. To alleviate this artifact, we introduce a halftone postprocessing algorithm called the Springs algorithm. The objective of Springs is to rearrange minority pixels in affected regions for a smoother, more attractive rendition. In this paper, we describe the Springs algorithm, and we show results which demonstrate its effectiveness. The heart of this algorithm is a simple dot-rearrangement heuristic which results in a more isotropic dot distribution. The approach is to treat any well-isolated dot as if it were connected to neighboring dots by springs, and to move it to a location where the energy in the springs is a minimum. Applied to the whole image, this could degrade halftone appearance. However, Springs only moves dots in selected regions of the image. Pixels that are not minority pixels are not moved at all. Moreover, dot rearrangement is disabled on and around detected edges, since it could otherwise render those edges soft and diffuse. © 2000 SPIE and IS&T. | ['Clayton Brian Atkins', 'Jan P. Allebach', 'Charles A. Bouman'] | Halftone postprocessing for improved rendition of highlights and shadows | 275,118 |
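One step of the Springs heuristic has a tidy closed form worth noting: with identical zero-rest-length springs, the energy E(x) = Σ‖x − n_i‖² is minimized at the centroid of the neighboring dots, because the gradient 2Σ(x − n_i) vanishes there. The sketch below shows only that single relocation step under the zero-rest-length assumption; the paper's region selection and edge masking are omitted.

```python
import numpy as np

def spring_rest_position(neighbors):
    """Minimizer of sum_i ||x - n_i||^2: the centroid of the neighbor dots."""
    return np.mean(np.asarray(neighbors, float), axis=0)

print(spring_rest_position([(0, 0), (4, 0), (2, 3)]))   # -> [2. 1.]
```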
Error Correction Code decoding algorithms for consumer products such as Internet of Things (IoT) devices are usually implemented as dedicated hardware circuits. As processors become increasingly powerful and energy efficient, there is now a strong desire to perform this processing in software to reduce production costs and time to market. The recently introduced family of Successive Cancellation decoders for Polar codes has been shown in several research works to efficiently leverage the ubiquitous SIMD units in modern CPUs, while offering strong potential for a wide range of optimizations. The P-EDGE environment introduced in this paper combines a specialized skeleton generator with a library of building-block routines to provide a generic, extensible Polar code exploration workbench. It enables ECC code designers to easily experiment with combinations of existing and new optimizations, while delivering performance close to state-of-the-art decoders. | ['Adrien Cassagne', 'Bertrand Le Gal', 'Camille Leroux', 'Olivier Aumage', 'Denis Barthou'] | An Efficient, Portable and Generic Library for Successive Cancellation Decoding of Polar Codes | 574,615 |
Cultural stereotypes about women's "fit" and ability in technical fields, like computing, are alive and well. These cultural beliefs can make their way into women's personal belief systems. When this happens, women's self-conceptions in computing suffer, namely, self-efficacy, sense of belonging, and identification with computing. The current research examines whether collaborative learning methods (pair programming; supplemental instruction) can erase the negative relationship between women's endorsement of negative gender stereotypes and their computing self-concept. Longitudinal survey data from 48 women computing majors indicated that participation in collaborative learning activities nullified the negative impact of gender stereotype endorsement on women's self-efficacy, sense of belonging, and identification with computing. These findings showcase the benefits of existing pedagogical strategies in computing on increasing the likelihood that women will persist in a computing career path. | ['Jane G. Stout', 'Burçin Tamer'] | Collaborative Learning Eliminates the Negative Impact of Gender Stereotypes on Women's Self-Concept (Abstract Only) | 640,510 |
Background: The value of health information technology (IT) ultimately depends on end users accepting and appropriately using it for patient care. This study examined pediatric intensive care unit nurses’ perceptions, acceptance, and use of a novel health IT, the Large Customizable Interactive Monitor. | ['Richard J. Holden', 'Onur Asan', 'Erica M. Wozniak', 'Kathryn E. Flynn', 'Matthew C. Scanlon'] | Nurses’ perceptions, acceptance, and use of a novel in-room pediatric ICU technology: testing an expanded technology acceptance model | 938,584 |
Implicit emotion tagging is a central theme in the area of affective computing. To this end, several physiological signals acquired from subjects can be employed, for example, electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) from the brain, electrocardiography (ECG) from cardiac activity, and other peripheral physiological signals such as galvanic skin resistance, electromyogram (EMG), and blood volume pressure. The brain is regarded as the place where emotional activities are evoked; determining affective states by observing brain activity directly is therefore of great interest. Several published works use EEG signals to identify affective states in different respects with various stimuli, e.g., images, music, and videos. In this paper, we propose to adopt EEG connectivity between electrodes to identify subjects' affective levels in both the valence and arousal dimensions during video stimulus presentation. Three categories of connectivity are adopted, in the magnitude and phase domains. One openly accessible affective database, DEAP, is used as a benchmark. We show that with the proposed connectivity-based representation, the accuracy of affective-level identification is higher than that of the same tasks in existing works based on the same database. | ['Mo Chen', 'Junwei Han', 'Lei Guo', 'Jiahui Wang', 'Ioannis Patras'] | Identifying valence and arousal levels via connectivity between EEG channels | 566,416 |
NC machining of nonzero-genus triangular mesh surfaces is encountered more widely than before in the manufacturing field. At present, due to the complexity of the geometric computation related to tool path generation, only the iso-planar path pattern is adopted in real machining of such surfaces. To significantly improve 5-axis machining of nonzero-genus mesh surfaces, it is necessary to develop a more efficient and robust tool path generation method. In this paper, a new method of generating spiral or contour-parallel tool paths is proposed, inspired by the cylindrical helix and circle, which form sets of parallel lines on the rectangular region obtained by unwrapping the cylinder. Following this idea, an effective data structure and algorithm are first designed to transform a nonzero-genus surface into a genus-0 surface, so that the conformal map method can be used to build a bidirectional mapping between the genus-0 surface and the rectangular region. In this rectangular region, spiral or contour-parallel tool path generation reduces to simple straight-path planning. Accordingly, the formula for calculating the parameter increment of the guide line is derived using a difference scheme on the mesh surface, and an accuracy improvement method based on edge curve interpolation is proposed for determining the cutter contact (CC) points. These guarantee that the generated tool path meets the machining requirements. To further improve the kinematic and dynamic performance of the 5-axis machine tool, a method for optimizing tool orientation is also preliminarily investigated. Finally, experiments demonstrate that the proposed method nicely generates spiral or contour-parallel tool paths on nonzero-genus mesh surfaces and guarantees smooth changes of tool orientation. Highlights: A new method of generating spiral or contour-parallel tool paths on nonzero-genus mesh surfaces is proposed. Analytical formulas for computing CC points and the parameter increment for the path interval are derived. A simple and efficient method of optimizing tool orientation is also preliminarily investigated. | ['Yuwen Sun', 'Jinting Xu', 'Chunning Jin', 'Dongming Guo'] | Smooth tool path generation for 5-axis machining of triangular mesh surface with nonzero genus | 826,463 |
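The geometric intuition behind the method, that straight lines on the unwrapped rectangle pull back to helices or circles on the cylinder, fits in a few lines. The sketch below generates spiral cutter-contact points on an actual cylinder rather than on a conformally flattened mesh; the radius, pitch, and sampling density are arbitrary placeholders.

```python
import numpy as np

def spiral_on_cylinder(radius=10.0, pitch=1.5, turns=8, pts_per_turn=100):
    """Map the straight line v = pitch * u / (2*pi*radius) on the unwrapped
    rectangle back onto the cylinder: the image is a constant-stepover helix."""
    t = np.linspace(0.0, 2.0 * np.pi * turns, turns * pts_per_turn)
    x = radius * np.cos(t)
    y = radius * np.sin(t)
    z = pitch * t / (2.0 * np.pi)        # advance one pitch per revolution
    return np.column_stack([x, y, z])

path = spiral_on_cylinder()
print(path.shape, path[0], path[-1])
```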
The assessment of travel time reliability for segments and routes is a rapidly advancing frontier. The increasing availability of probe data is making it possible to monitor reliability in real time based on individual vehicle data, as opposed to ex post facto based on averages. This paper examines metrics that can be used to monitor reliability based on probe data. The merits of traditional metrics like the planning time index, buffer index, and travel time index are compared with newer ideas like complete cumulative distribution functions and mean/variance combinations. The question is: what is the quality of information about real-time reliability provided by these various options? This paper compares these metrics in the context of probe-based observations of travel times and rates. A new idea for a pairwise metric is also introduced: the root mean square travel rate τ_rms in conjunction with the standard deviation σ_τ. These two measures in combination seem to provide a picture of reliability that is nearly as complete as the underlying Cumulative Density Function (CDF) and better than the simpler metrics. These ideas are examined in the context of probe data from I-5 in Sacramento, CA. | ['Isaac K Isukapati', 'George F List'] | Using travel time reliability measures with individual vehicle data | 965,404 |
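For reference, the conventional indices discussed above can be computed directly from per-vehicle samples. Definitions vary slightly across agencies (for instance, which percentile defines the planning time), and the paper works with travel rates (time per unit distance) while this sketch uses travel times for simplicity.

```python
import numpy as np

def reliability_metrics(tt, free_flow):
    """Common travel-time reliability indices from per-vehicle samples.
    tt: observed travel times; free_flow: free-flow travel time (same units)."""
    tt = np.asarray(tt, float)
    mean, p95 = tt.mean(), np.percentile(tt, 95)
    return {
        "travel_time_index": mean / free_flow,
        "planning_time_index": p95 / free_flow,
        "buffer_index": (p95 - mean) / mean,
        "rms": np.sqrt(np.mean(tt ** 2)),      # cf. the paper's tau_rms
        "std_dev": tt.std(ddof=1),             # cf. the paper's sigma_tau
    }

print(reliability_metrics([10, 11, 12, 15, 25], free_flow=9.0))
```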
The Vaccine and Related Biologic Products Advisory Committee meets at least once a year to decide the composition of seasonal influenza vaccine in the United States. Past evidence suggests that the committee could use a more systematic approach to incorporate observed information and to quantify the risks associated with different options. There are two key trade-offs involved in this decision. First, if the Committee decides to retain the current vaccine composition instead of updating to a new one, there is lower uncertainty in production yields, but the current vaccine could be less effective if a new virus strain spreads. Second, if the Committee decides early with less information, then manufacturers have more production time, but the reduced information increases the risk of choosing a wrong strain. We derive an optimal dynamic policy for this decision. Because of the greater uncertainty in production yields of new vaccines, the optimal thresholds are neither symmetric between retaining and updating the composition nor monotonic over time. We apply our model to past decisions using parameter values estimated from a historical case. Our analysis shows that the dynamic optimal policy can significantly improve social welfare. | ['Soo-Haeng Cho'] | The Optimal Composition of Influenza Vaccines Subject to Random Production Yields | 75,460 |
This paper presents an optimal methodology for scheduling/mapping fully deterministic digital signal processing algorithms onto any generic very long instruction word (VLIW) digital signal processor (DSP). VLIW DSPs can be broadly classified as heterogeneous or homogeneous depending on their architecture. The methodology is equally efficient on heterogeneous and homogeneous VLIW DSPs. An equivalent model of the algorithm and the DSP is generated using mixed integer programming (MIP). A framework is developed to generate the MIP models. The framework also incorporates a MIP solver to solve the generated MIP model. The framework further helps in defining the architecture of the VLIW and then generating an exact model of the processor. After solving the MIP, it gives an optimal schedule/mapping of the algorithm onto the DSP. The framework also encompasses a code generator that uses the mapping information to generate assembly code for the VLIW processor. | ['Muhammad Sohail Sadiq', 'Shoab A. Khan'] | Optimal Mapping of DSP Algorithms on Commercially Available Off-The-Shelf (COTS) VLIW DSPs | 267,280 |
In this paper, a framework for testing microprocessor prototypes is presented. A RISC microprocessor is designed by students using the VHDL language and adapted to be implemented on an FPGA device. The correct behaviour of the designed microprocessor is checked by executing test programs written and compiled by the students for this microprocessor. Using a Web client, users send test programs and a file with the design to a remote laboratory, where it is loaded onto a real FPGA device. A set of tools for debugging the remote execution of the tests has been developed, using a graphical interface similar to other debugging tools. Groups of selected students of a computer architecture course have participated in this experience. The positive feedback received from the students suggests incorporating this remote laboratory experience into the next regular course. | ['Javier Pastor', 'Ivan Gonzalez', 'Jorge López', 'Francisco J. Gomez-Arribas', 'J. Martinez'] | A remote laboratory for debugging FPGA-based microprocessor prototypes | 378,800 |
Unsupervised segmentation of images with low depth of field (DOF) is highly useful in various applications. This paper describes a novel multiresolution image segmentation algorithm for low DOF images. The algorithm is designed to separate a sharply focused object-of-interest from other foreground or background objects. The algorithm is fully automatic in that all parameters are image independent. A multi-scale approach based on high-frequency wavelet coefficients and their statistics is used to perform context-dependent classification of individual blocks of the image. Unlike other edge-based approaches, our algorithm does not rely on the process of connecting object boundaries. The algorithm has achieved high accuracy when tested on more than 100 low DOF images, many with inhomogeneous foreground or background distractions. Compared with the state-of-the-art algorithms, this new algorithm provides better accuracy at higher speed. | ['James Ze Wang', 'Jia Li', 'Robert M. Gray', 'Gio Wiederhold'] | Unsupervised multiresolution segmentation for images with low depth of field | 203,685 |
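The first ingredient, per-block high-frequency energy, can be sketched with a one-level Haar transform written directly in NumPy: sharply focused blocks retain far more detail-band (LH/HL/HH) energy than defocused ones. The block size is an assumption, image dimensions are assumed even, and the paper's multiresolution, context-dependent classification is omitted.

```python
import numpy as np

def haar_highfreq_energy(img, block=16):
    """Per-block energy of the LH/HL/HH subbands of a 1-level Haar DWT."""
    img = np.asarray(img, float)
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    lh = (a - b + c - d) / 2.0                # horizontal detail
    hl = (a + b - c - d) / 2.0                # vertical detail
    hh = (a - b - c + d) / 2.0                # diagonal detail
    energy = lh ** 2 + hl ** 2 + hh ** 2      # per 2x2 cell, half resolution
    h = block // 2                            # subbands are half-size
    H, W = energy.shape
    H, W = H - H % h, W - W % h               # crop to a multiple of the block
    return energy[:H, :W].reshape(H // h, h, W // h, h).sum(axis=(1, 3))
```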
Recent research has shown the Linked Data cloud to be a potentially ideal basis for improving user experience when interacting with Web content across different applications and domains. Using the explicit knowledge of datasets, however, is neither sufficient nor straightforward. Dataset knowledge is often not uniformly organized, thus it is generally unknown how to query for it. To deal with these issues, we propose a dataset analysis approach based on knowledge patterns, and show how the recognition of patterns can support querying datasets even if their vocabularies are previously unknown. Finally, we discuss results from experimenting on three multimedia-related datasets. | ['Valentina Presutti', 'Lora Aroyo', 'Alessandro Adamou', 'Balthasar A. C. Schopman', 'Aldo Gangemi', 'Guus Schreiber'] | Extracting core knowledge from linked data | 662,876 |
A combination of STDP, Hebbian learning and synaptic scaling deriving from a theoretical learning principle | ['Mathieu Galtier', 'Gilles Wainrib'] | A combination of STDP, Hebbian learning and synaptic scaling deriving from a theoretical learning principle | 749,538 |
Objectives: To determine the impact of tethered personal health record (PHR) use on patient engagement and intermediate health outcomes among patients with coronary artery disease (CAD). Methods: Adult CAD patients (N = 200) were enrolled in this prospective, quasi-experimental observational study. Each patient received a PHR account and training on its use. PHRs were populated with information from patient electronic medical records, hosted by a Health Information Exchange. Intermediate health outcomes including blood pressure, body mass index, and hemoglobin A1c (HbA1c) were evaluated through electronic medical record review or laboratory tests. Trends in patient activation measure® (PAM) were determined through three surveys conducted at baseline, 6 and 12 months. Frequency of PHR use data was collected and used to classify participants into groups for analysis: Low, Active, and Super users. Results: There was no statistically significant improvement in patient engagement as measured by PAM scores during the study period. HbA1c levels improved significantly in the Active and Super user groups at 6 months; however, no other health outcome measures improved significantly. Higher PAM scores were associated with lower body mass index and lower HbA1c, but there was no association between changes in PAM scores and changes in health outcomes. Use of the PHR health diary increased significantly following PHR education offered at the 6-month study visit and an elective group refresher course. Conclusions: The study findings show that PHR use had minimal impact on intermediate health outcomes and no significant impact on patient engagement among CAD patients. | ['Tammy Toscos', 'Carly Daley', 'Lisa Heral', 'Riddhi Doshi', 'Yu-Chieh Chen', 'George J. Eckert', 'Robert Plant', 'Michael J. Mirro'] | Impact of electronic personal health record use on engagement and intermediate health outcomes among cardiac patients: a quasi-experimental study. | 689,602 |
This paper deals with mean square state estimation over sensor networks with a fixed topology. Attention is focused on designing local stationary state estimators with a general structure while accounting for the network communication topology. Two estimator design approaches are proposed. One is based on the observability Gramian, and the other on the controllability Gramian. The computation of the estimator state-space matrices is recast as off-line convex optimization problems and requires the system asymptotic stability and global knowledge of the network topology. Convergence of the estimation error variance is ensured at each network node and a guaranteed performance in the mean square sense is achieved. The proposed approaches are also extended for designing robust filters to handle polytopic-type parameter uncertainty. | ['Carlos E. de Souza', 'Daniel Ferreira Coutinho', 'Michel Kinnaert'] | Mean square state estimation for sensor networks | 848,284 |
This paper studies low-dimensional visualisation methods for data visualisation under uncertainty of the input data. It focuses on NeuroScale, a feed-forward neural network algorithm, and attempts to make the algorithm able to accommodate that uncertainty. The standard model is shown not to work well under high levels of noise within the data and needs to be modified. The modifications of the model are verified using synthetic data to show their ability to accommodate the noise. | ['Mingamanas Sivaraksa', 'David Lowe'] | Probabilistic NeuroScale for Uncertainty Visualisation | 180,808 |
The design and implementation of wireless systems has been impeded by the lack of an evaluation framework that can provide an accurate understanding of middleware and application performance in the context of their interactions with system hardware and software, network architecture and configuration, and wireless channel effects. In this paper we present a novel evaluation paradigm wherein the applications, middleware, or sub-networks can be evaluated in-situ, in other words, as operational software that interfaces with the operating system and other applications, thus offering a fidelity equivalent to physical deployment. The physical environment in which such systems operate is modeled using high-fidelity simulations. This approach combines the fidelity of physical test beds with the benefits of scalability, repeatability of input parameters, and comprehensive parameter-space evaluation, which are the known limitations of a physical test bed. The framework design is extensible in that it allows configuring the desired components of a system with different modalities to suit a particular evaluation criterion. The implementation also addresses the key challenges in the interaction of the framework sub-components: seamless interfaces, time synchronization, and preserving causality constraints. The benefits and applicability of the framework to diverse wireless contexts are demonstrated by means of various case studies in diverse wireless networks. In one case study, we show that a design exhibiting a 4X improvement in network metrics may actually degrade the application metric by 50%. | ['Maneesh Varshney', 'Zhiguo Xu', 'Shrinivas Mohan', 'Yi Yang', 'Defeng Xu', 'Rajive L. Bagrodia'] | WHYNET: a framework for in-situ evaluation of heterogeneous mobile wireless systems | 377,146 |
The current Semantic Web ontology language has been designed to be both expressive, for specifying complex concepts, and decidable, for automated reasoning. In recent years, the Semantic Web Rule Language has been proposed to add more expressiveness to the family of ontology languages. However, the inclusion of rules has created new challenges of not only verifying the consistency of an ontology, but also checking a set of rules itself for anomalies. Currently, automated tool support for reasoning about ontologies with rules is relatively limited compared to that for standard ontology reasoning. This paper addresses these challenges by defining notions of rule anomalies and proposing a method of discovering such anomalies by using the constraint logic programming technique and state-of-the-art Semantic Web reasoners. | ['Yuzhang Feng', 'Yang Liu', 'Yuan-Fang Li', 'Daqing Zhang'] | Discovering Anomalies in Semantic Web Rules | 394,128 |
As most applications in Wireless Sensor Networks (WSN) are location sensitive, in this paper we explore the problem of location-aided multicast for WSN. We present four strategies to construct the GeoMulticast routing tree, namely SARF, SAM, CoFAM, and MSAM. In particular, we discuss CoFAM in detail and give the algorithm for setting up a multicast tree in a cone-based forwarding area. This algorithm is distributed and energy-efficient. Extensive simulations have been conducted to evaluate the performance of the proposed routing schemes. Simulation results show that our schemes require fewer messages to be transmitted when constructing a multicast tree. | ['Wentao Zhang', 'Xiaohua Jia', 'Chuanhe Huang'] | Distributed energy-efficient geographic multicast for Wireless Sensor Networks | 124,874 |
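The basic geometric test behind a cone-based forwarding area, namely whether a neighbor lies inside the cone with apex at the source and axis toward the destination, is easy to state in code; the half-angle below is an arbitrary placeholder, not CoFAM's actual parameter.

```python
import math

def in_forwarding_cone(src, dst, node, half_angle_deg=30.0):
    """True if `node` (2-D point) is inside the cone from `src` toward `dst`."""
    ax, ay = dst[0] - src[0], dst[1] - src[1]
    vx, vy = node[0] - src[0], node[1] - src[1]
    na, nv = math.hypot(ax, ay), math.hypot(vx, vy)
    if na == 0.0 or nv == 0.0:
        return nv == 0.0                 # degenerate: node at the apex
    cos_ang = (ax * vx + ay * vy) / (na * nv)
    ang = math.degrees(math.acos(max(-1.0, min(1.0, cos_ang))))
    return ang <= half_angle_deg

print(in_forwarding_cone((0, 0), (10, 0), (5, 2)))   # True  (~21.8 degrees)
print(in_forwarding_cone((0, 0), (10, 0), (5, 6)))   # False (~50.2 degrees)
```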
Recently, testing techniques based on dynamic exploration, which try to automatically exercise every possible user interface element, have been extensively used to facilitate fully testing web applications. Most such testing tools are, however, not effective in reaching dynamic pages induced by form interactions, due to their emphasis on handling client-side scripting. In this paper, we present a combinatorial strategy to achieve full form testing and to build an automated test model. We propose an algorithm called pairwise testing with constraints (PTC) to implement the strategy. Our PTC algorithm uses pairwise coverage and handles the issues of semantic constraints and illegal values. We have implemented a prototype tool, ComjaxTest, and conducted an empirical study on five web applications. Experimental results indicate that our PTC algorithm generates fewer form test cases while achieving higher coverage of dynamic pages than the general pairwise testing algorithm. Additionally, ComjaxTest generates a relatively complete test model and thus detects more faults in a reasonable amount of time, as compared with other existing tools based on dynamic exploration. | ['Xiaofang Qi', 'Ziyuan Wang', 'Jun-Qiang Mao', 'Peng Wang'] | Automated Testing of Web Applications Using Combinatorial Strategies | 974,478 |
An Empirical Study of the Suitability of Class Decomposition for Linear Models: When Does It Work Well? | ['Francisco Ocegueda-Hernandez', 'Ricardo Vilalta'] | An Empirical Study of the Suitability of Class Decomposition for Linear Models: When Does It Work Well? | 788,432 |
Large-scale High Performance Computing and Communication (HPCC) applications (e.g., Video-on-Demand and HPDC) require storage and processing capabilities that are beyond those of existing single computer systems. Current advances in networking technology (e.g., ATM) have made high-performance network computing an attractive computing environment for such applications. However, using only a high-speed network is not sufficient to achieve a high-performance distributed computing environment unless certain hardware and software problems are resolved. These problems include the limited communication bandwidth available to the application, the high overhead associated with context switching, redundant data copying during protocol processing, and the lack of support for overlapping computation and communication at the application level. In this paper, we propose a multithreaded message passing system for parallel/distributed processing that we refer to as the NYNET Communication System (NCS). NCS, developed for NYNET (an ATM wide area network testbed), is built on top of an ATM application programmer interface (API). The multithreaded environment allows applications to overlap computation and communication and provides a modular approach to efficiently support HPDC applications with different quality of service (QOS) requirements. | ['Rajesh Yadav', 'Rajashekar Reddy', 'Salim Hariri'] | A multithreaded message passing environment for ATM LAN/WAN | 515,533 |
This paper presents a new approach to the synthesis of parallel multiplier circuits with the objective of minimizing leakage power consumption under a circuit timing constraint. Our leakage power optimization is based on the use of dual-threshold voltage (V_t) technology. Experiments on a set of benchmark designs show that the approach is quite effective. | ['Keoncheol Shin', 'Taewhan Kim'] | Leakage power minimization for the synthesis of parallel multiplier circuits | 104,294 |
Many hardware designs, especially those for signal and image processing, involve structured data access such as queues, stacks and stripes. This work presents parametric descriptions as abstractions for such structured data access, and explains how these abstractions can be supported either as FPGA libraries targeting existing reconfigurable hardware devices, or as dedicated logic implementations forming autonomous memory blocks (AMBs). Scalable architectures combining the address generation logic in AMBs together to provide larger storage with parallel data access, are also examined. The effectiveness of this approach is illustrated with size and performance estimates for our FPGA libraries and dedicated logic implementations of AMBs. It is shown that for two-dimensional filtering, the dedicated AMBs can be 7 times smaller and 5 times faster than the FPGA libraries performing the same function. | ['Wim J.C. Melis', 'Peter Y. K. Cheung', 'Wayne Luk'] | Scalable structured data access by combining autonomous memory blocks | 476,304 |
Chances, Affordances, Niche Construction | ['Lorenzo Magnani'] | Chances, Affordances, Niche Construction | 72,379 |
In this paper, a blind synchronization algorithm for estimating the symbol timing offset and the carrier frequency offset (CFO) in OFDM systems has been presented. The synchronization parameters are estimated by utilizing the correlation attribute of the samples of the received signal. The Cramer-Rao lower bound (CRLB) of the CFO estimation has been derived over the entire delay spread of the fading channel. Simulation results are provided here to illustrate the performance of the estimator. | ['Manish Kumar', 'Sudhan Majhi'] | Blind synchronization of OFDM system and CRLB derivation of CFO over fading channels | 728,811 |
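The abstract above does not give the exact correlation statistic; as a hedged stand-in, the sketch below uses the standard cyclic-prefix correlation estimator for the fractional CFO, with symbol timing assumed already acquired (e.g., from the peak of the same correlation). All parameter names are illustrative.

import numpy as np

def blind_cfo_estimate(rx, n_fft, n_cp, n_symbols):
    # Correlate each cyclic prefix with the samples one FFT-length later:
    # r[n + n_fft] = r[n] * exp(j*2*pi*eps) under a fractional CFO of eps
    # subcarrier spacings, so the phase of the summed correlation reveals eps.
    corr = 0.0 + 0.0j
    sym_len = n_fft + n_cp
    for m in range(n_symbols):
        start = m * sym_len
        cp = rx[start:start + n_cp]
        tail = rx[start + n_fft:start + n_fft + n_cp]
        corr += np.sum(np.conj(cp) * tail)
    return np.angle(corr) / (2 * np.pi)   # estimate of eps, valid for |eps| < 0.5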
Here, we report, for what we believe to be the first time, on the modification of a low cost sensor, designed for the smartphone camera market, to develop an ultraviolet (UV) camera system. This was achieved via adaptation of Raspberry Pi cameras, which are based on back-illuminated complementary metal-oxide semiconductor (CMOS) sensors, and we demonstrated the utility of these devices for applications at wavelengths as low as 310 nm, by remotely sensing power station smokestack emissions in this spectral region. Given the very low cost of these units, ≈ USD 25, they are suitable for widespread proliferation in a variety of UV imaging applications, e.g., in atmospheric science, volcanology, forensics and surface smoothness measurements. | ['Thomas C. Wilkes', 'A. J. S. McGonigle', 'Tom D Pering', 'Angus J. Taggart', 'Benjamin S. White', 'Robert G. Bryant', 'Jon R. Willmott'] | Ultraviolet Imaging with Low Cost Smartphone Sensors: Development and Application of a Raspberry Pi-Based UV Camera | 901,137 |
This paper addresses the problem of recognizing fragmented characters in printed documents of poor printing quality, which often causes characters to break up. To enhance the recognition accuracy of such characters, most existing approaches attempt to improve the quality of character images by means of some mending techniques. We propose an alternative approach that adopts a bagging-predictor method to build classifiers, using only intact characters as training samples. The resultant classifiers can classify both intact and fragmented characters with a high degree of accuracy. Applying this approach to characters in archived Chinese newspapers, we extract two types of features from character images and form bagging predictors, each of which takes a subset of features as input. As a result, we are able to achieve drastic improvements in the recognition of fragmented characters. | ['Chien-Hsing Chou', 'Chien-Yang Guo', 'Fu Chang'] | Recognition of Fragmented Characters Using Multiple Feature-Subset Classifiers | 497,684 |
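A minimal sketch of the bagging-predictor idea described above, with one ensemble per feature subset and majority voting. Decision trees as base learners and integer-coded labels are assumptions for illustration; the paper's base classifiers and two specific feature types are not reproduced here.

import numpy as np
from sklearn.tree import DecisionTreeClassifier

def train_bagging_subsets(X, y, feature_subsets, n_bags=10, seed=0):
    # One bagging ensemble per feature subset; trained on intact characters only.
    rng = np.random.default_rng(seed)
    ensembles = []
    for subset in feature_subsets:
        bags = []
        for _ in range(n_bags):
            idx = rng.integers(0, len(X), len(X))     # bootstrap resample
            bags.append(DecisionTreeClassifier().fit(X[idx][:, subset], y[idx]))
        ensembles.append((subset, bags))
    return ensembles

def classify(ensembles, x, n_classes):
    # Majority vote across all predictors; labels assumed coded 0..n_classes-1.
    votes = np.zeros(n_classes)
    for subset, bags in ensembles:
        for clf in bags:
            votes[int(clf.predict(x[subset].reshape(1, -1))[0])] += 1
    return int(np.argmax(votes))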
Cooperative networking as a means of creating spatial diversity is used in order to mitigate the adverse effect of fading in a wireless channel and increase reliability of communications. We investigate signal-to-noise ratio (SNR) gain in wireless cooperative networks. We show that the differential SNR gain in the high data rate regime, which we refer to as SNR gain exponent ζ_∞, is independent of the relaying strategy and only depends on the number of transmission phases used for communication. Furthermore, a straight-line upper and lower bound is derived based on geometric considerations. It is shown that the approximation error of the upper bound with respect to the exact SNR gain tends to zero for R → ∞. For the lower bound, the approximation error tends asymptotically to a constant factor δ for R → ∞. Both bounds are the best possible straight-line bounds with respect to absolute error. | ['Tobias Renk', 'Friedrich K. Jondral'] | Upper and lower bound on signal-to-noise ratio gains for cooperative relay networks | 24,420
In this paper, a new method, applying annealing robust Walsh function networks, is proposed to discretize the continuous-time controller in computer-controlled systems. That is, the annealing robust Walsh function networks are used as nonlinear approximators to approximate the smooth controller with digital neural networks. Hence, the proposed controller is a new smooth controller that can replace the original controller and is independent of the sampling time under the sampling theorem. In addition, input-output stability is established for the discretized continuous-time controller with the annealing robust Walsh function networks. Consequently, the proposed annealing robust Walsh function networks controller can not only discretize continuous-time controllers, but can also tolerate a wider range of sampling time uncertainty. | ['Shun-Feng Su', 'Jin-Tsong Jeng', 'Tsu-Tian Lee'] | Discretizing Continuous-time Controllers via Annealing Robust Walsh Function Networks | 143,310
We set out to use machine learning techniques to analyse ECG data to improve risk evaluation of cardiovascular disease in a very large cohort study of the Chinese population. We performed this investigation by (i) detecting “abnormality” using 3 one-class classification methods, and (ii) predicting probabilities of “normality”, arrhythmia, ischemia, and hypertrophy using a multiclass approach. For one-class classification, we considered 5 possible definitions for “normality” and used 10 automatically-extracted ECG features along with 4 blood pressure features. The one-class approach was able to identify abnormality with area-under-curve (AUC) 0.83, and with 75.6% accuracy. For four-class classification, we used 86 features in total, with 72 additional features extracted from the ECG. Accuracy for this four-class classifier reached 75.1%. The methods demonstrated proof-of-principle that cardiac abnormality can be detected using machine learning in a large cohort study. | ['Yanting Shen', 'Yang Yang Yang Yang', 'S Parish', 'Zhengming Chen', 'Robert Clarke', 'David A. Clifton'] | Risk prediction for cardiovascular disease using ECG data in the China kadoorie biobank | 908,709 |
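The study above compares three one-class classification methods; as a hedged example of one plausible choice, the sketch below fits a one-class SVM to feature vectors of "normal" records. The feature names and the nu value are assumptions, not the study's configuration.

import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import OneClassSVM

def fit_abnormality_detector(X_normal, nu=0.05):
    # Learn the support of "normal" feature vectors (e.g., ECG plus blood
    # pressure features); anything outside that support is flagged as abnormal.
    model = make_pipeline(StandardScaler(), OneClassSVM(kernel="rbf", nu=nu))
    model.fit(X_normal)
    return model

# model.predict(X_test) returns +1 for normal-looking records and -1 otherwise;
# model.decision_function(X_test) gives a continuous score usable for an AUC.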
An algorithm for computing the PDF of order statistics drawn from discrete parent populations is presented, along with an implementation of the algorithm in a computer algebra system. Several examples and applications, including exact bootstrapping analysis, illustrate the utility of this algorithm. Bootstrapping procedures require that B bootstrap samples be generated in order to perform statistical inference concerning a data set. Although the requirements for the magnitude of B are typically modest, a practitioner would prefer to avoid the resampling error introduced by choosing a finite B, if possible. The part of the order-statistic algorithm for sampling with replacement from a finite sample can be used to perform exact bootstrapping analysis in certain applications, eliminating the need for replication in the analysis of a data set. | ['Diane L. Evans', 'Lawrence M. Leemis', 'John H. Drew'] | The Distribution of Order Statistics for Discrete Random Variables with Applications to Bootstrapping | 429,884 |
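A simplified version of the computation described above: the CDF of the r-th order statistic of a discrete parent is P(X_(r) <= x) = sum_{k=r..n} C(n,k) F(x)^k (1-F(x))^(n-k), and differencing it over the support gives the PMF. The sorted-support assumption and the tiny exact-bootstrap example are illustrative, not the authors' full algorithm.

from math import comb

def order_stat_pmf(support, pmf, n, r):
    # PMF of the r-th order statistic of an i.i.d. sample of size n, obtained
    # by differencing the order-statistic CDF over the (ascending) support.
    def g(F):
        return sum(comb(n, k) * F**k * (1 - F)**(n - k) for k in range(r, n + 1))
    out, F_prev = {}, 0.0
    for x, p in zip(support, pmf):
        F_curr = F_prev + p
        out[x] = g(F_curr) - g(F_prev)
        F_prev = F_curr
    return out

# Exact bootstrap of the median of a sample of size 5: resampling with
# replacement draws uniformly from the observed values (ties merge naturally).
data = [1.2, 3.4, 3.4, 5.0, 8.1]
vals = sorted(set(data))
print(order_stat_pmf(vals, [data.count(v) / len(data) for v in vals], n=5, r=3))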
Opportunistic routing is considered one of the most promising techniques to effectively limit performance degradation in wireless mesh networks caused by unpredictable channel variations and high loss rates. This paradigm defers the selection of the next hop until after packet reception, to take advantage of any opportunity provided by broadcast transmissions. Most of the existing opportunistic approaches base the forwarder selection on end-to-end principles. However, in multi-hop wireless environments the cost of a path is not uniformly distributed over space, nor constant over time, hence even two equal-cost paths might present significantly different link quality distributions from each other. This encourages the use of localized context to implement a more accurate selection of the possible forwarders after each packet transmission. Hence, in this paper the authors propose RELADO, an adaptive opportunistic routing protocol able to efficiently combine end-to-end with local information to ensure transmission resilience across the network. With this flexibility, RELADO is able to reduce packet loss by ensuring the best trade-off between throughput maximization and packet progress. An extensive set of ns2 simulations confirms the potential of RELADO to improve network performance when compared to both legacy unicast and opportunistic routing protocols. | ['Raffaele Bruno', 'Marco Conti', 'Maddalena Nurchis'] | RELADO: RELiable and ADaptive Opportunistic Routing Protocol for Wireless Mesh Networks | 172,449
Super-resolution localization microscopy relies on sparse activation of photo-switchable probes. Such activation, however, introduces limited temporal resolution. High-density imaging overcomes this limitation by allowing several neighboring probes to be activated simultaneously. In this work, we propose an algorithm that incorporates a continuous-domain sparsity prior into the high-density localization problem. We use a Taylor approximation of the PSF, and rely on a fast proximal gradient optimization procedure. Unlike currently available methods that use discrete-domain sparsity priors, our approach does not restrict the estimated locations to a pre-defined sampling grid. Experimental results of simulated and real data demonstrate significant improvement over these methods in terms of accuracy, molecular identification and computational complexity. | ['Junhong Min', 'Cedric Vonesch', 'Nicolas Olivier', 'Hagai Kirshner', 'Suliana Manley', 'Jong Chul Ye', 'Michael Unser'] | Continuous localization using sparsity constraints for high-density super-resolution microscopy | 232,124 |
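The paper above uses a fast proximal-gradient procedure with a Taylor-approximated PSF; as a hedged stand-in, here is plain ISTA (the basic proximal-gradient method) on a generic linear sparse-recovery model. The operator A and the regularization weight are assumptions for illustration.

import numpy as np

def ista(A, y, lam, n_iter=200):
    # Proximal gradient (ISTA) for min_x 0.5*||A @ x - y||^2 + lam*||x||_1.
    # A: linear map from molecule weights on a fine grid to camera pixels.
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - y) / L      # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x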
This special issue deals with three areas. Learning design is the practice of devising effective learning experiences aimed at achieving defined educational objectives in a given context. Teacher inquiry is an approach to professional development and capacity building in education in which teachers study their own and their peers’ practice. Learning analytics use data about learners and their contexts to understand and optimise learning and the environments in which it takes place. Typically, these three – design, inquiry and analytics – are seen as separate areas of practice and research. In this issue, we show that the three can work together to form a virtuous circle. Within this circle, learning analytics offers a powerful set of tools for teacher inquiry, feeding back into improved learning design. Learning design provides a semantic structure for analytics, whereas teacher inquiry defines meaningful questions to analyse. | ['Yishay Mor', 'Rebecca Ferguson', 'Barbara Wasson'] | Editorial: Learning design, teacher inquiry into student learning and learning analytics: A call for action | 503,848 |
It is important for students to solve problems with specific requirements in programming teaching. Our teaching system is a Moodle-based interactive teaching platform for C programming. Its online judging system can grade students' code automatically. It plays an extremely important role in programming language teaching. This paper is devoted to optimizing and improving the system. We first analyze five problems in the system based on feedback from teachers and students: 1) logical errors in students' programs cannot be located; 2) cheating that directly outputs answers cannot be detected; 3) evaluation results lack statistics and visualization; 4) the code submission procedure is complicated; 5) the feedback on incorrect answers is not detailed. In order to solve these problems, we employ a fault localization algorithm, revise the evaluation logic, introduce third-party visualization plug-ins and refactor the system, respectively. Detailed solutions are also given. After optimizing and improving the system, the user experience is significantly improved. It is convenient for students to find and correct errors in their programs. Also, it is easier for teachers to acquire valuable feedback and track students' learning progress. | ['Xiaohong Su', 'Jing Qiu', 'Tiantian Wang', 'Lingling Zhao'] | Optimization and improvements of a Moodle-Based online learning system for C programming | 949,979
This work deals with Markov decision processes (MDPs) with expected total rewards, discrete state spaces, and compact action sets. Within this framework, a question on the existence of optimal stationary policies, formulated by Puterman (1994, p. 326), is considered. The paper concerns the possibility of obtaining an affirmative answer when additional assumptions are imposed on the decision model. Three conditions ensuring the existence of average optimal stationary policies in finite-state MDPs are analyzed, and it is shown that only the so-called structural continuity condition is a natural sufficient assumption under which the existence of total-reward optimal stationary policies can be guaranteed. In particular, this existence result holds for unichain MDPs with finite state space, but an example is provided to show that this general conclusion does not have an extension to the denumerable state space case. | ['Rolando Cavazos-Cadena', 'Eugene A. Feinberg', 'Raúl Montes-de-Oca'] | A Note on the Existence of Optimal Policies in Total Reward Dynamic Programs with Compact Action Sets | 391,039
Two algorithms for use in force controlled robot applications have been developed for grinding. The 'gradient prediction method' is presented as an improvement to contour following in the force control mode. In this method, the gradient of the workpiece is predicted, and the force errors caused thereby are corrected. The 'progressive stiffness method' is also presented for grinding using a compliance control mode. In this method, the spring constant is automatically increased according to the grinding remaining, to keep the contact force nearly constant and to obtain an accurate profile. Both algorithms are experimentally tested. | ['Kunio Kashiwagi', 'Kozo Ono', 'Eiki Izumi', 'Tohru Kurenuma', 'Kazuyoshi Yamada'] | Force controlled robot for grinding | 487,090
We show that for any class of bipartite graphs which is closed under edge deletion and where the number of perfect matchings can be counted in NC, there is a deterministic NC algorithm for finding a perfect matching. In particular, a perfect matching can be found in NC for planar bipartite graphs and K3,3-free bipartite graphs via this approach. A crucial ingredient is part of an interior-point algorithm due to Goldberg, Plotkin, Shmoys and Tardos. An easy observation allows this approach to handle regular bipartite graphs as well. We show, by a careful analysis of the polynomial time algorithm due to Galluccio and Loebl, that the number of perfect matchings in a graph of small (O(log n)) genus can be counted in NC. So perfect matchings in small genus bipartite graphs can also be found via this approach. We then present a different algorithm for finding a perfect matching in a planar bipartite graph. This algorithm is substantially different from the algorithm described above, and also from the algorithm of Miller and Naor, which predates the approach of Goldberg et al. and tackles the same problem. Our new algorithm extends to small genus bipartite graphs, but not to K3,3-free bipartite graphs. We next show that a non-trivial extension of this algorithm allows us to compute a vertex of the fractional perfect matching polytope (such a vertex is either a perfect matching or a half-integral matching) in NC, provided the graph is planar or small genus but not necessarily bipartite, and has a perfect matching to begin with. This extension rekindles the hope for an NC-algorithm to find a perfect matching in a non-bipartite planar graph. | ['Raghav Kulkarni', 'Meena Mahajan', 'Kasturi R. Varadarajan'] | Some perfect matchings and perfect half-integral matchings in NC | 228,225
More is not always better: balancing sense distributions for all-words Word Sense Disambiguation. | ['Marten Postma', 'Rubén Izquierdo-Beviá', 'Piek Vossen'] | More is not always better: balancing sense distributions for all-words Word Sense Disambiguation. | 982,188 |
The source-location privacy threat is one of the critical issues in Wireless Sensor Networks (WSNs). Adversaries may trace along the sensor traffic to hunt targets around source nodes. Previous works proposed dummy messages and network coding to eliminate time and content correlations. However, these proposed schemes may result in an explosion of polluted and dummy messages, opening up vulnerability to active attackers. In this work, we propose the pollution-avoiding source location privacy preserving scheme PA-SLP. A probabilistic key predistribution is proposed to predistribute keys in nodes. It enables intermediate nodes to verify signatures and filter out dummy or polluted messages with a certain probability. In PA-SLP, a triple-type homomorphic signature algorithm is developed to detect and classify three message types with only one pair of asymmetric keys. PA-SLP is able to adjust key distribution parameters to balance network performance and privacy. Security analysis and simulation results demonstrate PA-SLP can resist traffic analysis and filter out polluted and dummy messages effectively. | ['Xuan Zha', 'Kangfeng Zheng', 'Dongmei Zhang'] | Anti-Pollution Source Location Privacy Preserving Scheme in Wireless Sensor Networks | 931,073
This paper develops a new approach to compiling C programs for multiple address space, multi-processor DSPs. It integrates a novel data transformation technique that exposes the processor location of partitioned data into a parallelization strategy. When this is combined with a new address resolution mechanism, it generates efficient programs that run on multiple address spaces without using message passing. This approach is applied to the UTDSP benchmark suite and evaluated on a four processor TigerSHARC board, where it is shown to outperform existing approaches and gives an average speedup of 3.25 on the parallel benchmarks. | ['Björn Franke', "Michael F. P. O'Boyle"] | Compiler parallelization of C programs for multi-core DSPs with multiple address spaces | 115,965 |
This paper presents a model called object-oriented attribute grammar (OOAG) that can be used to construct a toolset for software maintenance. The kernel of OOAG consists of two inter-related parts: a model-view-shape (MVS) application framework and AG++, an object-oriented extension to traditional AGs. By combining compositional and generative techniques seamlessly, OOAG preserves the advantages introduced by both the OO and AG models, such as rapid prototyping, reusability, extensibility, and incrementality. So far, a toolset prototype consisting of a number of programming and maintenance tools was implemented using OOAG in the Windows environment. The editors developed can be used to construct programs by specifying the associated flow information in explicit (visual) or implicit (textual) ways, while the (incremental) maintenance tools, such as DU/UD tools and a program slicer, can help analyze incomplete program fragments to locate and inform the user of useful information. | ['Chung-Hua Hu', 'Ji-Tzay Yang', 'Feng-Jian Wang', 'William C. Chu'] | Constructing a toolset for software maintenance with OOAG | 525,181
Mobile app developers often wish to make their apps available on a wide variety of platforms, e.g., Android, iOS, and Windows devices. Each of these platforms uses a different programming environment, each with its own language and APIs for app development. Small app development teams lack the resources and the expertise to build and maintain separate code bases of the app customized for each platform. As a result, we are beginning to see a number of cross-platform mobile app development frameworks. These frameworks allow the app developers to specify the business logic of the app once, using the language and APIs of a home platform (e.g., Windows Phone), and automatically produce versions of the app for multiple target platforms (e.g., iOS and Android). In this paper, we focus on the problem of testing cross-platform app development frameworks. Such frameworks are challenging to develop because they must correctly translate the home platform API to the (possibly disparate) target platform API while providing the same behavior. We develop a differential testing methodology to identify inconsistencies in the way that these frameworks handle the APIs of the home and target platforms. We have built a prototype testing tool, called X-Checker, and have applied it to test Xamarin, a popular framework that allows Windows Phone apps to be cross-compiled into native Android (and iOS) apps. To date, X-Checker has found 47 bugs in Xamarin, corresponding to inconsistencies in the way that Xamarin translates between the semantics of the Windows Phone and the Android APIs. We have reported these bugs to the Xamarin developers, who have already committed patches for twelve of them. | ['Nader Boushehrinejadmoradi', 'Vinod Ganapathy', 'Santosh Nagarakatte', 'Liviu Iftode'] | Testing Cross-Platform Mobile App Development Frameworks (T) | 607,688 |
The path to exascale computational capabilities in high-performance computing (HPC) systems is challenged by the inadequacy of present software technologies to adapt to the rapid evolution of architectures of supercomputing systems. The constraints of power have driven system designs to include increasingly heterogeneous architectures and diverse memory technologies and interfaces. Future systems are also expected to experience an increased rate of errors, such that the applications will no longer be able to assume correct behavior of the underlying machine. To enable the scientific community to succeed in scaling their applications, and to harness the capabilities of exascale systems, we need software strategies that provide mechanisms for explicit management of resilience to errors in the system, in addition to locality of reference in the complex memory hierarchies of future HPC systems. In prior work, we introduced the concept of explicitly reliable memory regions, called havens. Memory management using havens supports reliability management through a region-based approach to memory allocations. Havens enable the creation of robust memory regions, whose resilient behavior is guaranteed by software-based protection schemes. In this paper, we propose language support for havens through type annotations that make the structure of a program's havens more explicit and convenient for HPC programmers to use. We describe how the extended haven-based memory management model is implemented, and demonstrate the use of the language-based annotations to affect the resiliency of a conjugate gradient solver application. | ['Saurabh Hukerikar', 'Christian Engelmann'] | Language Support for Reliable Memory Regions | 991,221
Unknown and uncertain disturbances significantly affect the servo performance of servo track writers. Therefore, a robust fast seek controller based on a state space disturbance observer is required. We present the design and application of a robust fast seek controller for servo track writers. Unlike some existing methods that use an explicit disturbance model or adjust sensitivity using a filter, the proposed method is shown to effectively compensate for disturbances even while track seeking, which is not possible when using an integrator or a frequency domain disturbance observer. Experimental results demonstrate the utility of the proposed robust fast seek controller, which uses a state space disturbance observer, for servo track writers. The proposed servo track writer control scheme seeks tracks quickly in the presence of unknown and uncertain disturbances. | ['Seung-Hi Lee', 'Hyun Jae Kang', 'Chung Choo Chung'] | Robust Fast Seek Control of a Servo Track Writer Using a State Space Disturbance Observer | 226,768
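A minimal sketch of a discrete-time state space disturbance observer of the general kind described above. The augmented random-walk disturbance model, the shapes, and the observer gain are assumptions for illustration, not the paper's design.

import numpy as np

def dob_step(A, B, C, L, xhat, dhat, u, y):
    # One update of a discrete-time disturbance observer: the plant state is
    # augmented with an input disturbance d modeled as a random walk, and a
    # single observer gain vector L corrects both estimates from the output error.
    # Shapes: A (n,n), B (n,), C (n,), L (n+1,); u, y, dhat scalars.
    x_pred = A @ xhat + B * (u + dhat)     # disturbance enters with the input
    err = y - C @ x_pred                   # output innovation
    z = np.concatenate([x_pred, [dhat]]) + L * err
    return z[:-1], z[-1]                   # updated xhat and dhat

# A seek controller can then command u = u_nominal - dhat to cancel the
# estimated disturbance while tracking the seek trajectory.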
In hybrid simulation research, we investigate a new approach to building software virtual networks (SVNs) that are indistinguishable from their equivalent real live networks (LNs). We define the concept of a 'Network's Interactive Turing Test' based on the similar concept used in artificial intelligence. Our goal is to actualize the interactive and indistinguishable real-virtual interface pair (RVIP) for large-scale computer network simulations. With RVIP's support, a single SVN is indistinguishable from its equivalent LN. In the entire hybrid system, multiple LNs and multiple SVNs are connected using many RVIPs in an arbitrary topology and in real time. To actualize RVIP, the following necessary conditions must be satisfied: (i) the performance of the underlying simulation platform must be faster than real time; (ii) all changes incurred by introducing any SVN into an LN scenario are put on the simulation's side; to interact with an SVN, RVIP requires that no change is made on any live node; (iii) an SVN does not exchange simulation events with LNs, that is, only standard IP protocol interactions between SVN and LN are allowed; (iv) any LN can be dynamically plugged into the hybrid scenario in real time, just as if it were plugged into an equivalent purely live network. Compared with existing hybrid simulation efforts on NS-3, QualNet's EXata and OPNET's system-in-the-loop, we use the actual RVIP implementation to show that RVIP is a better candidate to pass the Network's Interactive Turing Test, owing to two advantages: (i) an interactive network tester can easily distinguish the existing hybrid networks from LNs by using a live topology that cannot be simulated, for example, by including the entire live Internet, but RVIP is not vulnerable to such tests; RVIP can support hybrid scenarios with multiple SVNs and multiple LNs connected by an arbitrary network topology, with LNs going on and off at any time; (ii) performance-wise, our studies show that RVIP provides more efficient support in terms of common metrics such as a larger throughput limit and smaller extra latency; thus, the simulated SVNs are more indistinguishable from their live counterparts. | ['Jiejun Kong', 'Tingzhen Li', 'Dapeng Oliver Wu'] | RVIP: an indistinguishable approach to scalable network simulation at real time | 419,544
We consider the problem of using image queries to retrieve videos from a database. Our focus is on large-scale applications, where it is infeasible to index each database video frame independently. Our main contribution is a framework based on Bloom filters, which can be used to index long video segments, enabling efficient image-to-video comparisons. Using this framework, we investigate several retrieval architectures, by considering different types of aggregation and different functions to encode visual information -- these play a crucial role in achieving high performance. Extensive experiments show that the proposed technique improves mean average precision by 24% on a public dataset, while being 4X faster, compared to the previous state-of-the-art. | ['Andre F. de Araújo', 'J. Chaves', 'Haricharan Lakshman', 'Roland Angst', 'Bernd Girod'] | Large-Scale Query-by-Image Video Retrieval Using Bloom Filters | 716,530 |
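A minimal Bloom filter of the kind the framework above builds per video segment. The use of quantized frame descriptors (e.g., visual words) and the fraction-of-hits scoring rule are assumptions for illustration; the paper's encoding functions and aggregation are richer.

import hashlib

class BloomFilter:
    # One filter summarizes a whole video segment, so a query image needs one
    # membership test per segment rather than one comparison per frame.
    def __init__(self, m=1 << 20, k=4):
        self.m, self.k, self.bits = m, k, bytearray(m // 8)

    def _positions(self, item):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item):
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(item))

def score(segment_filter, query_descriptors):
    # Rank segments by the fraction of the query's descriptors they contain.
    return sum(d in segment_filter for d in query_descriptors) / len(query_descriptors)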
Interval Arithmetic and Self Similarity Based Subthreshold Leakage Optimization in RTL Datapaths | ['Shilpa Pendyala', 'Srinivas Katkoori'] | Interval Arithmetic and Self Similarity Based Subthreshold Leakage Optimization in RTL Datapaths | 588,353 |
We propose a traffic safety metric called the safety marginal value (SMV) to be applied to discrete-time and continuous-space vehicular traffic networks. Every vehicle uses a set of vehicle states containing the position, velocity, and lane index of all vehicles on a roadway to determine the SMV, while also controlling its velocity for the next time step. The anterior SMV is defined as the minimum value from a set of the continuous levels of collision risk with the leading vehicles predicted by the collision avoidance (CA) margin time of the designated vehicle and is bounded by two non-negative integers. The higher the anterior SMV, the lower the likelihood of a rear-end accident occurring. This simple and rigorous traffic safety metric will be useful in reducing vehicle-to-vehicle crashes and could thus relieve traffic congestion caused by accidents. Moreover, the anterior SMV can be used as a safety criterion to validate car- following models under various environmental variables or as a key parameter of an objective function to maximize safety levels on roadways. | ['Seokheon Cho', 'Ramesh R. Rao'] | Safety Marginal Value as a Traffic Safety Metric for the Trailing Vehicle | 826,794 |
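A toy rendering of the anterior SMV idea described above. The constant-velocity margin-time model, the horizon, and the linear binning into integer levels are all assumptions for illustration, not the paper's definition.

def anterior_smv(ego, leaders, horizon=10.0, levels=10):
    # Take the smallest collision-avoidance margin time against any leading
    # vehicle and bin it into an integer level in [0, levels]; higher is safer.
    # Vehicles are (position, velocity) pairs in the same lane.
    x0, v0 = ego
    min_margin = horizon
    for x, v in leaders:
        closing = v0 - v
        if closing > 0:                        # ego is gaining on this leader
            margin = max((x - x0) / closing, 0.0)
            min_margin = min(min_margin, margin)
    return round(levels * min_margin / horizon)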
Formal specification and analysis of an ISO communications protocol | ['Jon Rowson'] | Formal specification and analysis of an ISO communications protocol | 550,792 |
Dimensionality reduction by feature projection is widely used in pattern recognition, information retrieval, and statistics. When there are some outputs available (e.g., regression values or classification results), it is often beneficial to consider supervised projection, which is based not only on the inputs, but also on the target values. While this applies to a single-output setting, we are more interested in applications with multiple outputs, where several tasks need to be learned simultaneously. In this paper, we introduce a novel projection approach called multi-output regularized feature projection (MORP), which preserves the information of input features and, meanwhile, captures the correlations between inputs/outputs and (if applicable) between multiple outputs. This is done by introducing a latent variable model on the joint input-output space and minimizing the reconstruction errors for both inputs and outputs. It turns out that the mappings can be found by solving a generalized eigenvalue problem and readily extend to nonlinear mappings. Prediction accuracy can be greatly improved by using the new features since the structure of the outputs is exploited. We validate our approach in two applications. In the first setting, we predict users' preferences for a set of paintings. The second is concerned with image and text categorization where each image (or document) may belong to multiple categories. The proposed algorithm produces very encouraging results in both settings. | ['Shipeng Yu', 'Kai Yu', 'Volker Tresp', 'Hans-Peter Kriegel'] | Multi-Output Regularized Feature Projection | 93,130
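A sketch in the spirit of the supervised projection above: directions are found as generalized eigenvectors of a blend of input variance and input-output covariance. The specific blended objective (beta) and the regularization are assumptions, not the paper's exact formulation.

import numpy as np
from scipy.linalg import eigh

def supervised_projection(X, Y, k, beta=0.5, reg=1e-6):
    # Top-k directions w solving the generalized eigenproblem M w = lam B w.
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    Cxx = Xc.T @ Xc
    Cxy = Xc.T @ Yc
    M = (1 - beta) * Cxx + beta * (Cxy @ Cxy.T)     # structure to preserve
    B = Cxx + reg * np.eye(X.shape[1])              # normalization, kept positive definite
    w, V = eigh(M, B)
    return V[:, np.argsort(w)[::-1][:k]]            # projection matrix (d x k)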
Evidence-Based Practice (EBP) represents a decision-making process centered on justifying decisions using relevant information contained in scientific research evidence found on the Internet. Context is a type of knowledge that supports identifying what is or is not relevant in a given situation. Therefore, the integration of evidence and context is still an open issue. Moreover, EBP procedures do not provide mechanisms to retain strategic knowledge from individual solutions, which could facilitate the learning of decision makers while preserving the evidence used. On the other hand, Case-Based Reasoning (CBR) uses the history of similar cases and provides mechanisms to retain problem-solving. This paper proposes the integration of the CBR model with EBP procedures and Context to support decision making. Our approach includes a conceptual framework extended to support the development of applications that combine cases, evidence and context, preserving the characteristics of usability and portability across domains. An implementation in the area of crime prevention illustrates the usage of our proposal. | ['Expedito Carlos Lopes', 'Vaninha Vieira', 'Ana Carolina Salgado', 'Ulrich Schiel'] | Using Cases, Evidences and Context to Support Decision Making | 150,032
Online social networks are increasingly being used as places where communities gather to exchange information, form opinions, and collaborate in response to events. An aspect of this information exchange is how to determine whether a source of social information can be trusted or not. The data mining literature addresses this problem. However, it usually employs social balance theories, by looking at small structures in complex networks known as triangles. This has proven effective in some cases, but it underperforms in the absence of context information about the relation and in more complex interactive structures. In this paper we address the problem of creating a framework for trust inference, able to infer the trust/distrust relationships in those relational environments that cannot be described by using the classical social balance theory. We do so by decomposing a trust network into its ego network components and mining the trust relationships on this ego network set, extending a well known graph mining algorithm. We test our framework on three public datasets describing trust relationships in the real world (from the social media Epinions, Slashdot and Wikipedia) and comparing our results with the trust inference state of the art, showing better performances where the social balance theory fails. | ['Giacomo Bachi', 'Michele Coscia', 'Anna Monreale', 'Fosca Giannotti'] | Classifying Trust/Distrust Relationships in Online Social Networks | 926,872
Given a set of linear equations Mx=b, we say that a set of integers S is (M,b)-free if it contains no solution to this system of equations. Motivated by questions related to testing linear-invariant properties of boolean functions, as well as recent investigations in additive number theory, the following conjecture was raised (implicitly) by Green and by Bhattacharyya, Chen, Sudan and Xie: we say that a set of integers S ⊆ [n] is ε-far from being (M,b)-free if one needs to remove at least εn elements from S in order to make it (M,b)-free. The above conjecture states that for any system of homogeneous linear equations Mx=0 and for any ε > 0 there is a constant time algorithm that can distinguish with high probability between sets of integers that are (M,0)-free and sets that are ε-far from being (M,0)-free. Or in other words, that for any M there is an efficient testing algorithm for the property of being (M,0)-free. In this paper we confirm the above conjecture by showing that such a testing algorithm exists even for non-homogeneous linear equations. As opposed to most results on testing boolean functions, which rely on algebraic and analytic arguments, our proof relies on results from extremal hypergraph theory, such as the recent removal lemmas of Gowers, Rödl et al. and Austin and Tao. | ['Asaf Shapira'] | Green's conjecture and testing linear-invariant properties | 115,354
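For intuition, here is the canonical one-sided tester specialized to the single homogeneous equation x + y = z. The sampling scheme is a rough illustration; the trial counts that the removal-lemma analysis actually requires are astronomically larger.

import random

def test_sum_free(S, n, trials=100_000):
    # Sample solutions of x + y = z inside [n] and accept iff none lies in S.
    # If S is eps-far from free it contains many solutions, so some sample hits
    # one with high probability; if S is free we always (correctly) accept.
    Sset = set(S)
    for _ in range(trials):
        x = random.randint(1, n - 1)
        y = random.randint(1, n - x)
        if x in Sset and y in Sset and (x + y) in Sset:
            return False                 # found a solution: S is not (M,0)-free
    return True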
Cloudifying Mobile Network Management: Performance Tests of Event Distribution and Rule Processing | ['Sumit Dawar', 'Sven van der Meer', 'John Keeney', 'Enda Fallon', 'T. Bennet'] | Cloudifying Mobile Network Management: Performance Tests of Event Distribution and Rule Processing | 836,109 |
This article proposes a novel iterative algorithm based on Low Density Parity Check (LDPC) codes for compression of correlated sources at rates approaching the Slepian-Wolf bound. The setup considered in the article looks at the problem of compressing one source at a rate determined based on the knowledge of the mean source correlation at the encoder, and employing the other correlated source as side information at the decoder which decompresses the first source based on the estimates of the actual correlation. We demonstrate that depending on the extent of the actual source correlation estimated through an iterative paradigm, significant compression can be obtained relative to the case the decoder does not use the implicit knowledge of the existence of correlation. | ['Fred Daneshgaran', 'Massimiliano Laddomada', 'Marina Mondin'] | LDPC-Based Iterative Algorithm for Compression of Correlated Sources at Rates Approaching the Slepian-Wolf Bound | 465,016 |
We believe that machine learning can be used to help diabetics and care providers manage diabetes by predicting the effect that behaviors have on blood glucose. This when coupled with telemedicine could help care providers provide better individualized therapy more frequently. Currently, diabetics might get 15 minutes of interaction with a health expert during a checkup, and in that amount of time the physician must quickly evaluate the patient's health to offer therapy advice. The Intelligent Diabetes Assistant (IDA) addresses this problem by remotely collecting data, instantaneously sharing that data with a physician, and automatically processing the data to reveal important patterns. The system makes data collection more efficient for the patient, and it will make data analysis more efficient for the care team. We have conducted a two week longitudinal study tracking the lifestyle, nutrition, and blood glucose readings of 10 diabetics using IDA. | ['David L. Duke', 'Charles E. Thorpe', 'Mazahir Mahmoud', 'Mahmoud Zirie'] | Intelligent Diabetes Assistant: Using machine learning to help manage diabetes | 229,991 |
We present a novel approach for jointly disambiguating and clustering known and unknown concepts and entities with Markov Logic. Concept and entity disambiguation is the task of identifying the correct concept or entity in a knowledge base for a single- or multi-word noun (mention) given its context. Concept and entity clustering is the task of clustering mentions so that all mentions in one cluster refer to the same concept or entity. The proposed model (1) is global, i.e. a group of mentions in a text is disambiguated in one single step combining various global and local features, and (2) performs disambiguation, unknown concept and entity detection and clustering jointly. The disambiguation is performed with respect to Wikipedia. The model is trained once on Wikipedia articles and then applied to and evaluated on different data sets originating from news papers, audio transcripts and internet sources. | ['Angela Fahrni', 'Michael Strube'] | Jointly Disambiguating and Clustering Concepts and Entities with Markov Logic | 612,005 |
Emerging lightweight cloud technologies, such as Docker containers, are gaining wide traction in IT because they allow users to deploy applications in any environment faster and more efficiently than using virtual machines. However, current Docker-based container deployment solutions are aimed at managing containers in a single site, which limits their capabilities. As more users look to adopt Docker containers in dynamic, heterogeneous environments, the ability to deploy and effectively manage containers across multiple clouds and data centers becomes of utmost importance. In this paper, we propose a prototype framework, called C-Ports, that enables the deployment and management of Docker containers across multiple hybrid clouds and traditional clusters while taking into consideration user and resource provider objectives and constraints. The framework leverages a constraint-programming model for resource selection and uses CometCloud to allocate/deallocate resources as well as to deploy containers on top of these resources. Our prototype has been effectively used to deploy and manage containers in a dynamic federation composed of five clouds and two clusters. | ['Moustafa AbdelBaky', 'Javier Diaz-Montes', 'Manish Parashar', 'Merve Unuvar', 'Malgorzata Steinder'] | Docker Containers across Multiple Clouds and Data Centers | 682,618
Every segmentation algorithm has parameters that need to be adjusted in order to achieve good results. Evolving fuzzy systems for adjustment of segmentation parameters have been proposed recently (Evolving fuzzy image segmentation -- EFIS [1]). However, similar to any other algorithm, EFIS too suffers from a few limitations when used in practice. As a major drawback, EFIS depends on detection of the object of interest for feature calculation, a task that is highly application-dependent. In this paper, a new version of EFIS is proposed to overcome these limitations. The new EFIS, called self-configuring EFIS (SC-EFIS), uses available training data to auto-configure the parameters that are fixed in EFIS. In addition, the proposed SC-EFIS relies on a feature selection process that does not require the detection of a region of interest (ROI). | ['Ahmed A. Othman', 'Hamid R. Tizhoosh', 'Farzad Khalvati'] | Self-Configuring and Evolving Fuzzy Image Thresholding | 631,216
Adjusting the level of supply chain integration is a key instrument for managers to improve supply chain performance. Tightly integrated supply chains (hereafter simply integrated supply chains) are typified by intensified cooperation between organizations and by the existence of a so-called business bus, the supply-chain-wide IT backbone for business processes and transactions. It is generally believed that this type of supply chain can be highly efficient, leading to a relatively high performance. The level of integration is closely related to that of networkability, which refers to the ability of an organization to become and stay a partner in an existing supply chain. In the literature, a higher level of networkability is implicitly regarded as desirable, to improve the performance of a supply chain. To clarify the relationship between networkability, supply chain integration and supply chain performance, we have assessed these notions in an SME-based supply chain in the high-tech manufacturing industry. We found some preliminary evidence that supply chain performance can be comparatively high, without high levels of networkability at the level of IT (e.g., a business bus), as long as it is compensated for by networkability of processes, products, people and/or organization. | ['Martin Smits', 'W. van den Heuvel', 'W. Huisman'] | The Tacit Liaison between Networkability and Supply Chain Performance | 428,394
Transformations for early reply and forward message passing mechanisms | ['Ronald A. Olsson', 'Aaron W. Keen', 'Todd Williamson'] | Transformations for early reply and forward message passing mechanisms | 236,144 |
Standing up after falling is an essential ability for humanoid robots in order to resume their tasks without help from humans. Although many humanoid robots, especially small-size humanoid robots, have their own stand-up motions, there has not been a generalized method to automatically learn flexible stand-up motions for humanoid robots which can be applied to various fallen positions. In this research, we propose a method for learning stand-up motions for humanoid robots using Q-learning making use of their bilateral symmetry. We implemented this method on DarwIn-OP humanoid robots and learned an optimal policy in simulation. We compared the resulting stand-up motion with manually designed stand-up motions and with stand-up motions learned without considering bilateral symmetry. Both in simulation and on the real robot, the new stand-up motion was successful in most trials while other motions took longer or were not as robust. | ['Heejin Jeong', 'Daniel D. Lee'] | Efficient learning of stand-up motion for humanoid robots with bilateral symmetry | 967,742 |
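A minimal sketch of tabular Q-learning that exploits bilateral symmetry by also updating the mirror image of every experienced transition, as the abstract above describes. The environment interface, the mirror maps, and the assumption that rewards are symmetry-invariant are all illustrative, not the authors' implementation.

import numpy as np
import random

def q_learn_symmetric(env, mirror_s, mirror_a, n_states, n_actions,
                      episodes=5000, alpha=0.1, gamma=0.95, eps=0.1):
    # Every transition (s, a, s') also trains its left-right mirror, halving
    # the effective state-action space. Assumes env exposes reset() and
    # step(a) -> (next_state, reward, done).
    Q = np.zeros((n_states, n_actions))
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            a = random.randrange(n_actions) if random.random() < eps \
                else int(np.argmax(Q[s]))
            s2, r, done = env.step(a)
            for ss, aa, ss2 in ((s, a, s2),
                                (mirror_s(s), mirror_a(a), mirror_s(s2))):
                target = r if done else r + gamma * np.max(Q[ss2])
                Q[ss, aa] += alpha * (target - Q[ss, aa])
            s = s2
    return Q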
In this paper, we present the OFDMA-based macro/femtocell wireless system as a geometric model, based on which we propose a resource allocation (RA) scheme that mitigates cross-tier and inter-femto interference. To mitigate the cross-tier interference, the neighboring area where the macro mobile station (MMS) is regarded as neighboring MMS (NMMS) is decided according to the average signal-to-interference-ratio (ASIR) threshold; then the total spectrum resource is partitioned by the femtocell and its NMMS. Subsequently, RA for each femto-cell is implemented as a strategic non-cooperative game by which the inter-femto interference is mitigated. Simulation results show that better performance can be achieved in terms of macrocell throughput, femtocell throughput on each subcarrier, MMS and femto mobile station (FMS) CDF of signal-to-interference-plus-noise ratio (SINR), with negligible femtocell throughput sacrifice. | ['Shiying Han', 'Boon-Hee Soong', 'Quang Duy La'] | Interference mitigation in resource allocation for OFDMA-based macro/femtocell two-tier wireless networks | 179,351 |
Weight functions with a parameter are introduced into an iteration process to increase the order of the convergence and enhance the behavior of the iteration process. The parameter can be chosen to restrict extraneous fixed points to the imaginary axis and provide the best basin of attraction. The process is demonstrated on several examples. | ['Changbum Chun', 'Beny Neta', 'Jeremy E. Kozdon', 'Melvin Scott'] | Choosing weight functions in iterative methods for simple roots | 425,137 |
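As a representative example of the parameterized weight functions described above, here is King's fourth-order family for simple roots; the paper's specific weight functions may differ, and the parameter value in the example is only illustrative.

def king_step(f, fp, x, beta):
    # One step of King's family: a Newton predictor followed by a corrector
    # whose weight (f(x) + beta*f(y)) / (f(x) + (beta - 2)*f(y)) carries the
    # free parameter that reshapes the basins of attraction.
    fx = f(x)
    y = x - fx / fp(x)                     # Newton predictor
    fy = f(y)
    w = (fx + beta * fy) / (fx + (beta - 2.0) * fy)
    return y - w * fy / fp(x)

# Example: cube root of 2; beta = 0 recovers Ostrowski's method.
f, fp, x = (lambda t: t**3 - 2), (lambda t: 3 * t**2), 1.0
for _ in range(4):
    x = king_step(f, fp, x, beta=0.0)
print(x)   # ~ 1.2599 = 2**(1/3)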
We investigate terrestrial water storage (TWS) changes over the Sichuan Basin and the related impacts of water variations in the adjacent basins from GRACE (Gravity Recovery and Climate Experiment), in situ river level, and precipitation data. Although GRACE shows water increased over the Sichuan Basin from January 2003 to February 2015, two heavy droughts in 2006 and 2011 have resulted in significant water deficits. Correlations of 0.74 and 0.56 were found between TWS and mean river level/precipitation within the Sichuan Basin, respectively, indicating that the Sichuan Basin TWS is influenced by both of the local rainfall and water recharge from the adjacent rivers. Moreover, water sources from the neighboring basins showed different impacts on water deficits observed by GRACE during the two severe droughts in the region. This provides valuable information for regional water management in response to serious dry conditions. Additionally, the Sichuan Basin TWS is shown to be influenced more by the Indian Ocean Dipole (IOD) than the El Nino-Southern Oscillation (ENSO), especially for the January 2003–July 2012 period with a correlation of −0.66. However, a strong positive correlation of 0.84 was found between TWS and ENSO after August 2012, which is a puzzle that needs further investigation. This study shows that the combination of other hydrological variables can provide beneficial applications of GRACE in inter-basin areas. | ['Chaolong Yao', 'Zhicai Luo', 'Haihong Wang', 'Qiong Li', 'Hao Zhou'] | GRACE-Derived Terrestrial Water Storage Changes in the Inter-Basin Region and Its Possible Influencing Factors: A Case Study of the Sichuan Basin, China | 745,904 |
This paper describes a progressive image transmission (PIT) scheme using a variable block size coding technique in conjunction with a variety of quantization schemes in the transform domain. The proposed scheme uses a region growing technique to partition the images so that regions of different sizes can be addressed using a small amount of side information. This segmentation divides the image into five different regions that vary in size based on the details within the image. High detail blocks are classified into four different categories based on the energy distribution followed by vector quantization (VQ), and low-detail blocks are encoded with scalar quantization (SQ). Progressive refinement is achieved by proper masking of the information in the transform domain. Simulation results show that the reconstructed images preserve fine and pleasant qualities based on both subjective and mean square error criteria. Also, the receiver reconstructs more details in each stage so that the observer can recognize the image quickly. | ['Young Huh', 'Krit Panusopone', 'K. R. Rao'] | Variable block size coding of images with hybrid quantization | 268,568 |
Neuroimaging evidence increasingly supports the hypothesis that the same neural structures subserve the execution, imagination, and observation of actions. We used repetitive transcranial magnetic stimulation (rTMS) to investigate the specific roles of cerebellum and dorsolateral prefrontal cortex (DLPFC) in observational learning of a visuomotor task. Subjects observed an actor detecting a hidden sequence in a matrix and then performed the task detecting either the previously observed sequence or a new one. rTMS applied over the cerebellum before the observational training interfered with performance of the new sequence, whereas rTMS applied over the DLPFC interfered with performance of the previously observed one. When rTMS applied over cerebellar or prefrontal site was delivered after the observational training, no influence was observed on the execution of the task. These results furnish new insights on the neural circuitry involved in the single component of observational learning and allow us to hypothesize that cerebellum and DLPFC interact in planning actions, the former by permitting the acquisition of procedural competencies and the latter by providing flexibility among already acquired solutions. | ['Sara Torriero', 'Massimiliano Oliveri', 'Giacomo Koch', 'Carlo Caltagirone', 'Laura Petrosini'] | The What and How of Observational Learning | 522,045 |
Learning Management Systems (LMS) have acquired importance in schools and business training. The increase in the use of mobile devices has led many users to attempt to access LMS from devices for which these systems and their contents were not designed. Moreover, mobile devices vary widely in their interaction features, from the type of display, keyboard and sensors to the operating system, and there are no standards for these. A natural task for mobile device users would be to access any LMS in a simple, effective and efficient way from any mobile device, without developers having to write native applications for each mobile platform and each version. An appropriate solution to these problems is to apply a middleware-based architecture. This paper presents an analysis of the functionality of LMS, focused on student-role users, across different mobile devices. | ['Daniel Vazquez Sanchez', 'Erika Hernández Rubio', 'Elena Fabiola Ruiz Ledesma', 'Amilcar Meneses Viveros'] | Student role functionalities towards Learning Management Systems as open platforms through mobile devices | 260,183