abstract | authors | title | __index_level_0__ |
---|---|---|---|
Keywords: JPEG XT; High Dynamic Range imaging; image compression; image quality assessment; subjective evaluations; objective metrics. Reference: EPFL-ARTICLE-214365, doi:10.1109/MSP.2015.2506199. Record created on 2015-12-03, modified on 2016-08-09. | ['Alessandro Artusi', 'Rafal Mantiuk', 'Thomas Richter', 'Pavel Korshunov', 'Philippe Hanhart', 'Touradj Ebrahimi', 'Massimiliano Agostinelli'] | JPEG XT: A Compression Standard for HDR and WCG Images [Standards in a Nutshell] | 672,336 |
Mobile Edge Computing enables the deployment of services, applications, content storage and processing in close proximity to mobile end users. This highly distributed computing environment can be used to provide ultra-low latency, precise positional awareness and agile applications, which could significantly improve user experience. In order to achieve this, it is necessary to consider next-generation paradigms such as Information-Centric Networking and Cloud Computing, integrated with the upcoming 5th Generation networking access. A cohesive end-to-end architecture is proposed, fully exploiting Information-Centric Networking together with the Mobile Follow-Me Cloud approach, for enhancing the migration of content-caches located at the edge of cloudified mobile networks. The chosen content-relocation algorithm attains content-availability improvements of up to 500% when a mobile user performs a request, compared against other existing solutions. The performed evaluation considers a realistic core-network, with functional and non-functional measurements, including the deployment of the entire system, computation and allocation/migration of resources. The achieved results reveal that the proposed architecture is beneficial not only from the users’ perspective but also from the providers’ point of view, as providers may be able to optimize their resources and reach significant bandwidth savings. | ['André Sérgio Nobre Gomes', 'Bruno Sousa', 'David Palma', 'Vitor Fonseca', 'Zhongliang Zhao', 'Edmundo Monteiro', 'Torsten Braun', 'Paulo Simões', 'Luis Cordeiro'] | Edge caching with mobility prediction in virtualized LTE mobile networks | 834,584 |
Little research has been done to explore the status of business genres in China. The present study explores the evolution of the genre of resume writing in China using a grounded theory approach. This study examines the rhetorical patterns and persuasive strategies employed in resume writing in different periods in China and reveals how these changes are related to historical, social, and economic contexts in China, especially from 1979 to 2010, as well as the impact of global contexts on Chinese resume writing. This study characterizes resume writing in China and relates these features to cultural motives and cultural contexts. It concludes that current resume writing practice in China shows a glocal trend. | ['Xiaoli Li'] | A Genre in the Making—A Grounded Theory Explanation of the Cultural Factors in Current Resume Writing in China | 175,487 |
Review: "Insight Into Game Theory: An Alternative Mathematical Experience" by Ein-Ya Cura and Michael Maschler. | ['Milan Mares'] | Review: "Insight Into Game Theory: An Alternative Mathematical Experience" by Ein-Ya Cura and Michael Maschler. | 766,221 |
Life Support | ['Annette Weintraub'] | Life Support | 676,326 |
BioBayesNet is a new web application that allows the easy modeling and classification of biological data using Bayesian networks. To learn Bayesian networks the user can either upload a set of annotated FASTA sequences or a set of pre-computed feature vectors. In the case of FASTA sequences, the server is able to generate a wide range of sequence and structural features from the sequences. These features are used to learn Bayesian networks. An automatic feature selection procedure assists in selecting discriminative features, providing a (locally) optimal set of features. The output includes several quality measures of the overall network and individual features as well as a graphical representation of the network structure, which allows the user to explore dependencies between features. Finally, the learned Bayesian network or another uploaded network can be used to classify new data. BioBayesNet facilitates the use of Bayesian networks in biological sequence analysis and is flexible enough to support modeling and classification applications in various scientific fields. The BioBayesNet server is available at http://biwww3.informatik.uni-freiburg.de:8080/BioBayesNet/. | ['Swetlana Nikolajewa', 'Rainer Pudimat', 'Michael Hiller', 'Matthias Platzer', 'Rolf Backofen'] | BioBayesNet: a web server for feature extraction and Bayesian network modeling of biological sequence data | 361,000 |
In this study, the authors tackle the problem of carrier aggregation (CA) in downlink of long-term evolution advanced (LTE-A) femtocell networks. They propose a novel approach in a new perspective: namely, user navigation, to improve the CA performance of an LTE-A system in the indoor environment. The proposed indoor user navigation (IUN) algorithm exploits the a priori knowledge of radio interferences between femtocells to build a geometric quality-of-service (QoS) map, which can be utilised to navigate users toward the locations suitable for performing CA to satisfy the QoS requirements. The simulations demonstrate the effectiveness of the proposed IUN algorithm to improve the CA performance in terms of the aggregate throughput for the LTE-A femtocell networks. | ['Chiapin Wang', 'Shih Hau Fang', 'Wen Hsing Kuo', 'Hsiao Chun Wu'] | Indoor user navigation for CA in LTE-advanced | 906,182 |
An adaptive in-loop deblocking filter (DF) is standardized in H.264/AVC to reduce blocking artifacts and improve compression efficiency. This paper proposes a low-power DF architecture with a hybrid and intelligent edge-skip filtering order. We further adopt a four-stage pipeline to boost the speed of the DF process, and the proposed Horizontal Edge Skip Processing Architecture (HESPA) offers an edge-skip-aware mechanism for filtering the horizontal edges that not only reduces power consumption but also reduces the filtering process down to 100 clock cycles per macroblock (MB). In addition, the architecture utilizes the buffers efficiently to store the temporary data without affecting the standard-defined data dependency, using a reasonable strategy of edge filtering order to enhance the reusability of the intermediate data. The system throughput can then be improved and the power consumption can also be reduced. Simulation results show that more than 34% of logic power measured in FPGA can be saved when the proposed HESPA is enabled. Furthermore, the proposed architecture is implemented on a 0.18 μm standard cell library, consuming 19.8K gates at a clock frequency of 200 MHz, which compares competitively with other state-of-the-art works in terms of hardware cost. | ['Hua-chang Chung', 'Zong-Yi Chen', 'Pao-Chi Chang'] | Low power architecture design and hardware implementations of deblocking filter in H.264/AVC | 265,075 |
We present smooth interpretation, a method to systematically approximate numerical imperative programs by smooth mathematical functions. This approximation facilitates the use of numerical search techniques like gradient descent for program analysis and synthesis. The method extends to programs the notion of Gaussian smoothing, a popular signal-processing technique that filters out noise and discontinuities from a signal by taking its convolution with a Gaussian function. In our setting, Gaussian smoothing executes a program according to a probabilistic semantics; the execution of program P on an input x after Gaussian smoothing can be summarized as follows: (1) Apply a Gaussian perturbation to x -- the perturbed input is a random variable following a normal distribution with mean x. (2) Compute and return the expected output of P on this perturbed input. Computing the expectation explicitly would require the execution of P on all possible inputs, but smooth interpretation bypasses this requirement by using a form of symbolic execution to approximate the effect of Gaussian smoothing on P. The result is an efficient but approximate implementation of Gaussian smoothing of programs. Smooth interpretation has the effect of attenuating features of a program that impede numerical searches of its input space -- for example, discontinuities resulting from conditional branches are replaced by continuous transitions. We apply smooth interpretation to the problem of synthesizing values of numerical control parameters in embedded control applications. This problem is naturally formulated as one of numerical optimization: the goal is to find parameter values that minimize the error between the resulting program and a programmer-provided behavioral specification. Solving this problem by directly applying numerical optimization techniques is often impractical due to the discontinuities in the error function. By eliminating these discontinuities, smooth interpretation makes it possible to search the parameter space efficiently by means of simple gradient descent. Our experiments demonstrate the value of this strategy in synthesizing parameters for several challenging programs, including models of an automated gear shift and a PID controller. | ['Swarat Chaudhuri', 'Armando Solar-Lezama'] | Smooth interpretation | 678,157 |
We introduce new techniques for extracting, analyzing, and visualizing textual contents from instructional videos of low production quality. Using automatic speech recognition, approximate transcripts (≈75% word error rate) are obtained from the originally highly compressed videos of university courses, each comprising between 10 and 30 lectures. Text material in the form of books or papers that accompany the course are then used to filter meaningful phrases from the seemingly incoherent transcripts. The resulting index into the transcripts is tied together and visualized in 3 experimental graphs that help in understanding the overall course structure and provide a tool for localizing certain topics for indexing. We specifically discuss a transcript index map, which graphically lays out key phrases for a course, a textbook chapter to transcript match, and finally a lecture transcript similarity graph, which clusters semantically similar lectures. We test our methods and tools on 7 full courses with 230 hours of video and 273 transcripts. We are able to extract up to 98 unique key terms for a given transcript and up to 347 unique key terms for an entire course. The accuracy of the Textbook Chapter to Transcript Match exceeds 70% on average. The methods used can be applied to genres of video in which there are recurrent thematic words (news, sports, meetings, etc.). | ['Alexander Haubold'] | Analysis and visualization of index words from audio transcripts of instructional videos | 187,519 |
With the rapid development of E-commerce, the demand for forward and reverse logistics for E-commerce enterprises is becoming increasingly urgent. This paper proposes a joint design model of a multi-period reverse logistics network with the consideration of carbon emissions for E-commerce enterprises. A case study of E-commerce enterprises is used to verify the feasibility of the model. The result indicates that this joint design model of a multi-period reverse logistics network accords better with reality, the total operation cost of the multi-period model is substantially reduced, and carbon emissions are greatly reduced. Furthermore, this research provides additional reference for reducing carbon emissions as well as for the design of regional reverse logistics networks for E-business enterprises, and even for other kinds of enterprises. | ['Xinxin Liu', 'Jianquan Guo', 'Chengji Liang'] | Joint Design Model of Multi-period Reverse Logistics Network with the Consideration of Carbon Emissions for E-Commerce Enterprises | 689,991 |
A Delta Oriented Approach to the Evolution and Reconciliation of Enterprise Software Products Lines | ['Gleydson Lima', 'Jadson Santos', 'Uirá Kulesza', 'Daniel Alencar', 'Sergio Vianna Fialho'] | A Delta Oriented Approach to the Evolution and Reconciliation of Enterprise Software Products Lines | 736,354 |
A practical system approach for time-multiplexing cellular neural network (CNN) implementations suitable for processing large and complex images using small CNN arrays is presented. For real-size applications, due to hardware limitations, it is impossible to have a one-to-one mapping between the CNN hardware cells and all the pixels in the image involved. This paper presents a practical solution by processing the input image, block by block, with the number of pixels in a block being the same as the number of CNN cells in the array. Furthermore, unlike other implementations in which the output is observed at the hard-limiting block, the very large scale integrated (VLSI) architecture hereby described monitors the outputs from the state node. While previous implementations are mostly suitable for black and white applications because of the thresholded outputs, our approach is especially suitable for applications in color (gray) image processing due to the analog nature of the state node. Experimental complementary metal-oxide-semiconductor (CMOS) chip results, in good agreement with theoretical results, are presented. | ['Lei Wang', 'J.P. de Gyvez', 'E. Sanchez-Sinencio'] | Time multiplexed color image processing based on a CNN with cell-state outputs | 158,735 |
In this paper, we present a new upper bound for the bit error probability (BEP) of the so-called Space Time Block Coded Spatial Modulation (STBC-SM) system introduced by E. Basar et al. in [1] over a quasi-static Rayleigh fading channel. Based on Verdu's theorem [2] and the concept of the spatial constellation (SC) codewords and the maximum likelihood (ML) decoder in [3], the upper bound is obtained by eliminating a number of redundant pairwise error probabilities (PEPs). Our approach leads to a new upper bound, which is tighter than the union bound. Consequently, it allows us to evaluate the bit-error performance of STBC-SM systems more accurately, particularly when the signal-to-noise power ratio (SNR) is sufficiently high. | ['Van-Thien Luong', 'Minh-Tuan Le', 'Hong-Anh Mai', 'Xuan-Nam Tran', 'Vu-Duc Ngo'] | New upper bound for space-time block coded spatial modulation | 564,266 |
Software component technology on the one hand supports the cost-effective development of specialized applications. On the other hand, however, it introduces special security problems. Some major problems can be solved by the automated run-time enforcement of security policies. Each component is controlled by a wrapper which monitors the component's behavior and checks its compliance with the security behavior constraints of the component's employment contract. Since control functions and wrappers can cause substantial overhead, we introduce trust-adapted control functions, where the intensity of monitoring and behavior checks depends on the level of trust that the component, its hosting environment, and its vendor currently have in the eyes of the application administration. We report on wrappers and a trust information service, outline the embedding security model and architecture, and describe a Java Bean based experimental implementation. | ['Peter Herrmann', 'Heiko Krumm'] | Trust-adapted enforcement of security policies in distributed component-structured applications | 527,212 |
Multi-Resolution State Roadmap Method for Trajectory Planning | ['Yuichi Tazaki', 'Jingyu Xiang', 'Tatsuya Suzuki', 'Blaine Levedahl'] | Multi-Resolution State Roadmap Method for Trajectory Planning | 721,927 |
A empirical study on the status of software localization in open source projects | ['Zeyad Alshaikh', 'Shaikh Mostafa', 'Xiaoyin Wang', 'Sen He'] | A empirical study on the status of software localization in open source projects | 668,497 |
Current business conditions have given rise to distributed teams that are mostly collocated except for one remote member. These "hub-and-satellite" teams face the challenge of the satellite colleague being out-of-sight and out-of-mind. We developed a telepresence device, called an Embodied Social Proxy (ESP), which represents the satellite coworker 24x7. Beyond using ESPs in our own group, we deployed an ESP in four product teams within our company for six weeks. We studied how ESP was used through ethnographic observations, surveys, and usage log data. ESP not only increased the satellite worker's ability to fully participate in meetings, it also increased the hub's attention and affinity towards the satellite. The continuous physical presence of ESP in each team improved the interpersonal social connections between hub and satellite colleagues. | ['Gina Venolia', 'John C. Tang', 'Ruy Cervantes', 'Sara A. Bly', 'George G. Robertson', 'Bongshin Lee', 'Kori Inkpen'] | Embodied social proxy: mediating interpersonal connection in hub-and-satellite teams | 532,321 |
The concept of quasiperiodicity is a generalization of the notion of periodicity where in contrast to periodicity the quasiperiods of a quasiperiodic string may overlap. A lot of research has been concentrated around algorithms for the computation of quasiperiodicities in strings while not much is known about bounds on their maximum number of occurrences in words. We study the overlapping factors of a word as a means to provide more insight into quasiperiodic structures of words. We propose a linear time algorithm for the identification of all overlapping factors of a word, we investigate the appearance of overlapping factors in Fibonacci words and we provide some bounds on the maximum number of distinct overlapping factors in a word. | ['Manolis Christodoulakis', 'Michalis Christou', 'Maxime Crochemore', 'Costas S. Iliopoulos'] | Overlapping factors in words | 567,510 |
In this paper, considering multiple interference regions simultaneously, an optimal antenna deployment problem for distributed Multi-Input Multi-Output (MIMO) radar is investigated. The optimal antenna deployment problem is solved by proposing an antenna deployment method based on Multi-Objective Particle Swarm Optimization (MOPSO). Firstly, we construct a multi-objective optimization problem for MIMO radar antenna deployment by choosing the interference power densities of different regions as objective functions. Then, to obtain the optimal deployment result without wasting time and computational resources, an iteration convergence criterion based on interval distance is proposed. The iteration convergence criterion can be used to stop the MOPSO optimization process efficiently when the optimal antenna deployment algorithm reaches the desired convergence level. Finally, numerical results are provided to verify the validity of the proposed algorithm. | ['Tianxian Zhang', 'Jiadong Liang', 'Yichuan Yang', 'Guolong Cui', 'Lingjiang Kong', 'Xiaobo Yang'] | Antenna Deployment Method for MIMO Radar under the Situation of Multiple Interference Regions. | 998,097 |
In this paper the visual reasoning that is part of visual thinking capabilities of the shape understanding system (SUS) is investigated. This research is a continuation of the authors' previous work focused on investigating understanding capabilities of the intelligent systems based on the shape understanding system. SUS is an example of the visual understanding system, where sensory information is transformed into the multilevel representation in the concept formation process that is part of the visual thinking capabilities. The visual reasoning involves transformation of the description of the object when passing consequent stages of the reasoning process and the reasoning and processing of the data are mutually dependent. | ['Zbigniew Les', 'Magdalena Les'] | SHAPE UNDERSTANDING SYSTEM: THE VISUAL REASONING PROCESS | 46,305 |
Complexity of cerebral blood flow velocity and arterial blood pressure in subarachnoid hemorrhage using time-frequency analysis. | ['Michal M. Placek', 'Paweł Wachel', 'Marek Czosnyka', 'Martin Soehle', 'Peter Smielewski', 'Magdalena Kasprowicz'] | Complexity of cerebral blood flow velocity and arterial blood pressure in subarachnoid hemorrhage using time-frequency analysis. | 679,823 |
In this work, we investigate a new objective measurement for assessing the video playback quality for services delivered in networks that use TCP as a transport layer protocol. We define the new metric as pause intensity to characterize the quality of playback in terms of its continuity since, in the case of TCP, data packets are protected from losses but not from delays. Using packet traces generated from real TCP connections in a lossy environment, we are able to simulate the playback of a video and monitor buffer behaviors in order to calculate pause intensity values. We also run subjective tests to verify the effectiveness of the metric introduced and show that the results of pause intensity and the subjective scores made over the same real video clips are closely correlated. | ['Timothy Porter', 'Xiao-Hong Peng'] | An Objective Approach to Measuring Video Playback Quality in Lossy Networks using TCP | 366,080 |
Cooperative diversity can be applied to energy-constrained wireless sensor networks to significantly reduce node energy consumption. However, cooperation partners must be carefully selected and coordinated to practically exploit this energy saving potential. In this paper we investigate partner choice for energy efficient cooperation in a wireless sensor network. We formulate novel and computationally efficient partner choice heuristics for sensor nodes based on either global or local knowledge of average path loss values in the network. We present extensive simulation results of cooperation in a wireless sensor network to show that the proposed heuristics achieve near-optimally energy efficient partner selection. Our results also demonstrate that large network-wide energy savings are achieved as a result of cooperative communication. Therefore, our simple partner choice heuristics form the basis of an effective distributed cooperation protocol for improving the energy efficiency of a wireless sensor network. Very importantly from the point of view of practical implementation, we show that our partner choice heuristic based on local information is the most effective cooperation strategy for resource-constrained wireless sensor networks, as it yields superior energy conservation results while enabling fully distributed and scalable cooperation. | ['Ljiljana Simic', 'Stevan M. Berber', 'Kevin W. Sowerby'] | Distributed Partner Choice for Energy Efficient Cooperation in a Wireless Sensor Network | 440,514 |
Beyond IC Postulates: Classification Criteria for Merging Operators. | ['Adrian Haret', 'Andreas Pfandler', 'Stefan Woltran'] | Beyond IC Postulates: Classification Criteria for Merging Operators. | 980,154 |
When using alpha-design for plant variety testing under space restrictions, ex post design modifications must be implemented to prevent variety self-proximity on plots and, consequently, to prevent damage-induced loss of experimental information. This is done ad hoc for each experiment; such unsystematic modification, however, commonly fails to resolve all existing proximities and may introduce secondary undesired proximities. In this paper, a procedure is developed for the universal construction of modified alpha-designs that cover all existing proximity constraints while keeping the efficiency level of the original design. Using extensive real data simulation, we validate the procedure and confirm high damage robustness of the modified designs. The procedure has been implemented as a Matlab function and is available as an on-line supplement to the paper. The function enables designing damage-robust experiments automatically using only standard computer equipment. | ['Jitka Janová', 'David Hampel'] | alfaDRA: A program for automatic elimination of variety self-proximities in alpha-design | 617,602 |
We study the problem of maintaining knowledge of the locations of $n$ entities that are moving, each with some, possibly different, upper bound on their speed. We assume a setting where we can query the current location of any one entity, but this query takes a unit of time, during which we cannot query any other entities. In this model, we can never know the exact locations of all entities at any one time. Instead, we wish to minimize uncertainty concerning the locations of all entities at some target time that is $t$ units in the future. We measure uncertainty by the ply of the potential locations: the maximum over all points $x$ of the number of entities that could potentially be at $x$. Since the ply could be large for every query strategy, we analyze the performance of our query strategy in a competitive framework: we consider the worst-case ratio of the ply achieved by our strategy to the intrinsic ply (the smallest ply achievable by any strategy, even one that knows in advance the full trajectories o... | ['William S. Evans', 'David G. Kirkpatrick', 'Maarten Löffler', 'Frank Staals'] | Minimizing Co-location Potential of Moving Entities | 909,759 |
Edited MRS allows the detection of low-concentration metabolites, whose signals are not resolved in the MR spectrum. Tailored acquisitions can be designed to detect, for example, the inhibitory neurotransmitter γ-aminobutyric acid (GABA), or the reduction-oxidation (redox) compound glutathione (GSH), and single-voxel edited experiments are generally acquired at a rate of one metabolite per experiment. We demonstrate that simultaneous detection of the overlapping signals of GABA and GSH is possible using Hadamard Encoding and Reconstruction of Mega-Edited Spectroscopy (HERMES). HERMES applies orthogonal editing encoding (following a Hadamard scheme), such that GSH- and GABA-edited difference spectra can be reconstructed from a single multiplexed experiment. At a TE of 80 ms, 20-ms editing pulses are applied at 4.56 ppm (on GSH), 1.9 ppm (on GABA), both offsets (using a dual-lobe cosine-modulated pulse) or neither. Hadamard combinations of the four sub-experiments yield GABA and GSH difference spectra. It is shown that HERMES gives excellent separation of the edited GABA and GSH signals in phantoms, and the resulting edited lineshapes agree well with separate Mescher-Garwood Point-resolved Spectroscopy (MEGA-PRESS) acquisitions. In vivo, the quality and signal-to-noise ratio (SNR) of HERMES spectra are similar to those of sequentially acquired MEGA-PRESS spectra, with the benefit of saving half the acquisition time. | ['Muhammad G. Saleh', 'Georg Oeltzschner', 'Kimberly L. Chan', 'Nicolaas A.J. Puts', 'Mark Mikkelsen', 'Michael Schär', 'Ashley D. Harris', 'Richard A.E. Edden'] | Simultaneous edited MRS of GABA and glutathione | 874,359 |
The foundations of the technique of cognitive analysis of the security of socio-economic objects (enterprise, city, region, state, etc.) and the elaboration of secure strategies for their development, worked out by the Institute of Control Sciences of the Russian Academy of Sciences, are considered herein. Socio-economic object (SEO) security is understood here as an SEO state that provides its purposeful development under transformation of the external and internal environment. The technique is based on a cognitive approach to modeling and includes the following stages: cognitive structuring; construction of a cognitive model of the SEO; structure and goal analysis of the model; scenario modeling of a situation development; interpretation of results; cognitive monitoring. | ['Dmitry I. Makarenko', 'Z. Avdeeva', 'V. Maximov'] | Cognitive approach to control of socio-economic systems security | 447,828 |
Quasi-equal Clock Reduction: Eliminating Assumptions on Networks | ['Christian Herrera', 'Bernd Westphal'] | Quasi-equal Clock Reduction: Eliminating Assumptions on Networks | 627,891 |
Hydra accepts an equation written in terms of operations on matrices and automatically produces highly efficient code to solve these equations. Processing of the equation starts by tiling the matrices. This transforms the equation into either a single new equation containing terms involving tiles or into multiple equations some of which can be solved in parallel with each other. Hydra continues transforming the equations using tiling and seeking terms that Hydra knows how to compute or equations it knows how to solve. The end result is that by transforming the equations Hydra can produce multiple solvers with different locality behavior and/or different parallel execution profiles. Next, Hydra applies empirical search over this space of possible solvers to identify the most efficient version. In this way, Hydra enables the automatic production of efficient solvers requiring very little or no coding at all and delivering performance approximating that of the highly tuned library routines such as Intel's MKL. | ['Alexandre Duchateau', 'David A. Padua', 'Denis Barthou'] | Hydra: Automatic algorithm exploration from linear algebra equations | 272,683 |
High-throughput experimental technologies gradually shift the paradigm of biological research from hypothesis-validation toward hypothesis-generation science. Translating diverse types of large-scale experimental data into testable hypotheses, however, remains a daunting task. We previously demonstrated that heterogeneous genomics data can be integrated into a single genome-scale gene network with high prediction power for ribonucleic acid interference (RNAi) phenotypes in Caenorhabditis elegans, a popular metazoan model in the study of developmental biology, neurobiology and genetics. Here, we present WormNet version 3 (v3), which is a new network-assisted hypothesis-generating server for C. elegans. WormNet v3 includes major updates to the base gene network, which substantially improved predictions of RNAi phenotypes. The server generates various gene network-based hypotheses using three complementary network methods: (i) a phenotype-centric approach to ‘find new members for a pathway’; (ii) a gene-centric approach to ‘infer functions from network neighbors’ and (iii) a context-centric approach to ‘find context-associated hub genes’, which is a new method to identify key genes that mediate physiology within a specific context. For example, we demonstrated that the context-centric approach can be used to identify potential molecular targets of toxic chemicals. WormNet v3 is freely accessible at http://www.inetbio.org/wormnet. | ['Ara Cho', 'Junha Shin', 'Sohyun Hwang', 'Chanyoung Kim', 'Hongseok Shim', 'Hyo-Jin Kim', 'Hanhae Kim', 'Insuk Lee'] | WormNet v3: a network-assisted hypothesis-generating server for Caenorhabditis elegans | 218,171 |
In today's turbulent e-commerce environment, online companies need a long-term client relationship strategy to keep customers satisfied. It is believed that satisfied customers not only continue to use the product or service but also help to recruit more customers through word of mouth. Similarly, unsatisfied customers discontinue the product or service usage and discourage others from using the product or service. Therefore, understanding the determinants of customer satisfaction and post-adoption behaviors is an important issue. The present study proposes and tests a model that integrates cognition, emotion, satisfaction, and three post-purchase behaviors. The results show that negative affective experience mediates the effect of confirmation and directly predicts complaint intention in addition to satisfaction. Furthermore, perceived usefulness and confirmation predict the level of satisfaction with e-service, and the latter in turn predicts continuance intention, recommendation, and complaint intention. | ['Sophea Chea', 'Margaret Meiling Luo'] | Cognition, Emotion, Satisfaction, and Post-Adoption Behaviors of E-Service Customers | 132,994 |
When people observe and interact with physical spaces, they are able to associate functionality to regions in the environment. Our goal is to automate dense functional understanding of large spaces by leveraging sparse activity demonstrations recorded from an ego-centric viewpoint. The method we describe enables functionality estimation in large scenes where people have behaved, as well as novel scenes where no behaviors are observed. Our method learns and predicts "Action Maps", which encode the ability for a user to perform activities at various locations. With the usage of an egocentric camera to observe human activities, our method scales with the size of the scene without the need for mounting multiple static surveillance cameras and is well-suited to the task of observing activities up-close. We demonstrate that by capturing appearance-based attributes of the environment and associating these attributes with activity demonstrations, our proposed mathematical framework allows for the prediction of Action Maps in new environments. Additionally, we offer a preliminary glance of the applicability of Action Maps by demonstrating a proof-of-concept application in which they are used in concert with activity detections to perform localization. | ['Nicholas Rhinehart', 'Kris M. Kitani'] | Learning Action Maps of Large Environments via First-Person Vision | 725,324
The classification of surface reflectance functions as diffuse, specular, and glossy was introduced by Heckbert more than two decades ago. Many rendering algorithms depend on such a classification, as different kinds of light transport are handled by specialized methods; for example, caustics require a specular bounce or refraction. As surface reflectance models become richer and more descriptive, including those based on measured data, it has not been possible to keep such a characterization simple. Each surface reflectance model is mostly handled separately, or alternatively, the rendering algorithm restricts itself to the use of some subset of reflectance models. We provide a general characterization for arbitrary surface reflectance representations by means of statistical tools. We demonstrate by rendered images using Matusik's BRDF data sets for two environment maps and two 3D objects (sphere and Utah teapot) that there is even a visible perceptual correspondence to the proposed surface reflectance characterization, when we use monochromatic surface reflectance and the albedo is normalized for rendering images to equalize perceived brightness. The proposed characterization is intended to be used to optimize rendering algorithms. | ['Vlastimil Havran', 'Mateu Sbert'] | Surface reflectance characterization by statistical tools | 589,326
Discusses issues involving texting while driving, focusing on legal and privacy concerns. The majority of states have laws intended to combat distracted driving, but some laws cause concerns for privacy advocates. In January 2016, Vermont representative Martin LaLonde introduced a bill that would allow officers to search a driver’s phone during routine traffic stops to see if it was being used. The same purported technology used by companies like Cellebrite to unlock the terrorist’s iPhone 5C without Apple’s help for the FBI’s investigation would be deployed to law enforcement on the roads. The technology available would essentially create a warrantless investigation that any police officer could carry out on site. Given previous federal court decisions, it is highly unlikely that the bill would pass Constitutional muster if enacted. The Fourth Amendment prohibits unreasonable search and seizure of “persons, houses, papers and effects.” Sure enough, in October 2013, the U.S. Supreme Court ruled that authorities cannot search cellphones or smartphones without a warrant. | ['Scott Schober'] | Technology Versus Privacy Issues in Preventing Distracted Driver Accidents [Point of View] | 709,322 |
We investigate the open-loop stability of a planar biped robot performing a periodic motion of forward somersaults with alternating single-leg contacts. The robot has a trunk and two actuated telescopic legs with point feet which are coupled to the trunk by actuated hinges. There is compliance and damping in the hip and in the legs. The concept of open-loop control implies that all actuators of the system receive predetermined inputs that are never altered by any feedback interference. Only with the right choice of model parameters and actuator inputs is it possible to create such self-stabilizing motions exploiting the natural stability properties of the system. These unknowns have been determined using special-purpose stability-optimization methods. The resulting motion is not only stable, but also a more efficient form of forward motion than running for the investigated robot. | ['Katja D. Mombaur', 'Hans Georg Bock', 'Johannes P. Schlöder', 'Richard W. Longman'] | Self-stabilizing somersaults | 82,735 |
In the study presented in this paper, two haptic and visual prototypes for learning about geometrical concepts in group work in primary school have been designed and evaluated. The aim was for the prototypes to support collaborative learning between sighted and visually impaired pupils. The first prototype was a 3D environment that supported learning of spatial geometry. The second prototype was a flattened 3D environment that supported learning to distinguish between angles. The two prototypes were evaluated in four schools with small groups of pupils - two sighted and one visually impaired. The results showed that the support for the visually impaired user was good and that cooperation and learning are satisfactorily supported. However, a number of interesting problems were also discovered that need to be investigated further. A promising result was that the power of the touch-based haptic interface for supporting visually impaired people was made clear | ['E.-L. Salinas', 'Jonas Moll', 'Kerstin Severinson-Eklundh'] | Group Work About Geometrical Concepts Among Blind and Sighted Pupils Using Haptic Interfaces | 5,126
Capturing recurring concepts using discrete Fourier transform | ['Sripirakas Sakthithasan', 'Russel Pears'] | Capturing recurring concepts using discrete Fourier transform | 635,182 |
When can we reason about the neutrality of a network based on external observations? We prove conditions under which it is possible to (a) detect neutrality violations and (b) localize them to specific links, based on external observations. Our insight is that, when we make external observations from different vantage points, these will most likely be inconsistent with each other if the network is not neutral. Where existing tomographic techniques try to form solvable systems of equations to infer network properties, we try to form unsolvable systems that reveal neutrality violations. We present an algorithm that relies on this idea to identify sets of non-neutral links based on external observations, and we show, through network emulation, that it achieves good accuracy for a variety of network conditions. | ['Zhiyong Zhang', 'Ovidiu Sebastian Mara', 'Katerina J. Argyraki'] | Network neutrality inference | 85,717
This paper provides a quantitative analysis of the longitudinal dynamic stability of a vertically flying insect-mimicking flapping wing system (FWS). In order to define the parameters in the equation of motion, computational fluid dynamics (CFD) by ANSYS-Fluent was used. The aerodynamic forces and moment were computed when the FWS was installed vertically and then inclined -15 and +15 degrees for flight speeds of 0, 0.2 and 0.4 m/s. Through the eigenvalue and eigenvector analysis of the system matrix, we could make a formal description of the dynamic stability of the FWS. Three modes of motion were identified: one stable oscillatory mode, one unstable divergence mode, and one stable subsidence mode. Due to the divergence mode, the FWS eventually becomes unstable. However, the FWS could stay stable in vertical flight during the first 0.5 second. | ['Loan Thi Kim Au', 'Vu Hoang Phan', 'Agus Budiyono', 'Hoon Cheol Park'] | Dynamic stability in vertically flying insect-mimicking flapping wing system | 327,954
Motion detection surveillance technology came about as a relief for the generally time-consuming reviewing process that a normal video surveillance system offers. It has gained a lot of interests over the past few years. In this paper, we propose a motion detection surveillance system, consisting of its Graphic User Interface (GUI) and its method for motion detection, through the study and evaluation of currently available products and methods. The proposed system is efficient and convenient for both office and home uses. | ['Li Fang', 'Zhang Meng', 'Claire Chen', 'Qian Hui'] | Smart Motion Detection Surveillance System | 85,020 |
In this paper, we develop a new tolerance-based Branch and Bound algorithm for solving NP-hard problems. In particular, we consider the asymmetric traveling salesman problem (ATSP), an NP-hard problem with large practical relevance. The main algorithmic contribution is our lower bounding strategy that uses the expected costs of including arcs in the solution to the assignment problem relaxation of the ATSP, the so-called lower tolerance values. The computation of the lower bound requires the calculation of a large set of lower tolerances. We apply and adapt a finding from [23] that makes it possible to compute all lower tolerance values efficiently. Computational results show that our Branch and Bound algorithm exhibits very good performance in comparison with state-of-the-art algorithms, in particular for difficult clustered ATSP instances. | ['Remco Germs', 'Boris Goldengorin', 'Marcel Turkensteen'] | Lower tolerance-based Branch and Bound algorithms for the ATSP | 163,521
Online Searching: A Guide to Finding Quality Information Efficiently and Effectively | ['Alireza Isfandyari-Moghaddam'] | Online Searching: A Guide to Finding Quality Information Efficiently and Effectively | 843,092 |
Most conventional IPv4-based route lookup algorithms are no longer suitable for IPv6 packet forwarding due to the significantly longer 128-bit address. However, because standard IPv6 route databases are lacking, it is hard to build benchmarks for developing and evaluating new-generation IPv6-based algorithms. In this paper, based on studies of initial IPv6 prefix distributions and the associated RFC documents, we develop a scalable IPv6 prefix generator, called V6Gene, for benchmarking IPv6-based route lookup algorithms. According to the RFCs and other associated standards, V6Gene generates IPv6 route prefixes from the initially assigned LIR (local Internet registries) prefixes collected from the real world, simulating the process of future IPv6 address block allocation from LIRs to their subscribers. V6Gene is fully flexible for generating all kinds of route databases with different characteristics. It is simple to implement and can be easily integrated with other IPv6 benchmark tools/systems. | ['Kai Zheng', 'Bin Liu'] | V6Gene: a scalable IPv6 prefix generator for route lookup algorithm benchmark | 168,346
Nonparametric Bayesian models are often based on the assumption that the objects being modeled are exchangeable. While appropriate in some applications (e.g., bag-of-words models for documents), exchangeability is sometimes assumed simply for computational reasons; non-exchangeable models might be a better choice for applications based on subject matter. Drawing on ideas from graphical models and phylogenetics, we describe a non-exchangeable prior for a class of nonparametric latent feature models that is nearly as efficient computationally as its exchangeable counterpart. Our model is applicable to the general setting in which the dependencies between objects can be expressed using a tree, where edge lengths indicate the strength of relationships. We demonstrate an application to modeling probabilistic choice. | ['Kurt Miller', 'Thomas L. Griffiths', 'Michael I. Jordan'] | The Phylogenetic Indian Buffet Process: A Non-Exchangeable Nonparametric Prior for Latent Features | 146,092 |
Towards Hyper Activity Books for Children. Connecting Activity Books and Montessori-like Educational Materials | ['Raffaele Di Fuccio', 'Michela Ponticorvo', 'Andrea Di Ferdinando', 'Orazio Miglino'] | Towards Hyper Activity Books for Children. Connecting Activity Books and Montessori-like Educational Materials | 646,307 |
Key/Value Store (KVS) is a fundamental service used widely in modern data centers to associate keys with data values. KVS systems, such as Redis, Memcached, and DynamoDB, have traditionally been implemented in software and run on clusters of microprocessor-based servers. In this work an alternate approach is taken that performs KVS with gateware in Field Programmable Gate Array (FPGA) logic. We leverage an efficient, open-standard, binary message format to transfer keys and values over Ethernet. Results of three different implementations of this KVS were compared -- software running on a Linux server with network data sent over UDP/IP sockets, kernel bypass using Intel's Data Plane Development Kit (DPDK), and pure FPGA logic implemented in gateware. We characterize the three implementations in terms of throughput, latency, and power. | ['John W. Lockwood'] | Scalable Key/Value Search in Datacenters | 208,581
In this article, we present a novel evolutionary algorithm for approximating the efficient set of a multiobjective optimization problem (MOP) with continuous variables. The algorithm is based on populations of variable size and exploits new rules for selecting alternatives generated by mutation and recombination. A special feature of the algorithm is that it solves at the same time the original problem and a dual problem such that solutions converge towards the efficient border from two "sides", the feasible set and a subset of the infeasible set. Together with additional assumptions on the considered MOP and further specifications on the algorithm, theoretical results on the approximation quality and the convergence of both subpopulations, the feasible and the infeasible one, are derived. | ['Thomas Hanne'] | A primal-dual multiobjective evolutionary algorithm for approximating the efficient set | 108,426 |
We propose a novel two-layer neural network to answer a point query in R^n which is partitioned into polyhedral regions; such a task solves among others nearest neighbor clustering. As in previous approaches to the problem, our design is based on the use of Voronoi diagrams. However, our approach results in substantial reduction of the number of neurons, completely eliminating the second layer, at the price of requiring only two additional clock steps. In addition, the design process is also simplified while retaining the main advantage of the approach, namely its ability to furnish precise values for the number of neurons and the connection weights necessitating neither trial and error type iterations nor ad hoc parameters. | ['Camillo Gentile', 'Mario Sznaier'] | An improved Voronoi-diagram-based neural net for pattern classification | 257,429
Implementing production of XML documents is a rarely discussed topic in academic literature even though it is an important issue in many contemporary organizations. This paper describes findings from three case organizations where different kinds of XML documents were implemented. Our findings suggest that the implementation is a domain-specific task related to various kinds of organizational activities from document authoring to business processes. As expected, the amount and complexity of document types as well as the number of people and organizations involved affect the challenges in the implementation process. Hiding the XML format from the software users and training the end users are important means to reduce the user resistance against structured document authoring and novel tools. | ['Reija Nurmeksela', 'Eliisa Jauhiainen', 'Airi Salminen', 'Anne Honkaranta'] | XML document implementation: Experiences from three cases | 399,615 |
Implementing High-Level Identification Specifications | ['Arnd Poetzsch-Heffter'] | Implementing High-Level Identification Specifications | 480,689 |
Bimodal Logics for Extensions of Arithmetical Theories | ['Lev D. Beklemishev'] | Bimodal Logics for Extensions of Arithmetical Theories | 481,740 |
In the case of cellular video streaming over wireless channels, burst frame losses may be unavoidable. Considering the unequal importance of different frames in a group-of-pictures (GOP) and the burst-error characteristics of wireless channels, this paper proposes a channel-aware frame dropping scheme so as to shift burst losses into relatively unimportant frames in the same GOP. By using selective retransmission at the radio link layer, a base station can adaptively assign the unequal transmission attempts to different video frames. Simulation results show that the proposed scheme can be aware of the variation of wireless channel conditions, and thus significantly improve error resilience of cellular video streaming. | ['Hao Liu', 'Wenjun Zhang', 'Songyu Yu', 'Xiaokang Yang'] | Channel-Aware Frame Dropping for Cellular Video Streaming | 45,606 |
Einsatz von Vorlesungsaufzeichnungen im regulären Universitätsbetrieb. | ['Robert Mertens', 'Andreas Knaden', 'Anja Krüger', 'Oliver Vornberger'] | Einsatz von Vorlesungsaufzeichnungen im regulären Universitätsbetrieb. | 675,116 |
City-scale human mobility analysis is an important problem in pervasive computing. In this paper, with qualitative and quantitative analysis, we establish and confirm the relationship between the get-on/off characteristics of taxi passengers and the social function of city regions. We find that get-on/off amount in a region can depict the social activity dynamics in that area, i.e. the temporal variation of get-on/off amount can characterize the social function of a region. The experimental results on a large-scale real-world taxi dataset suggest that three typical regional categories can be recognized even using a very simple classification method. | ['Guande Qi', 'Xiaolong Li', 'Shijian Li', 'Gang Pan', 'Zonghui Wang', 'Daqing Zhang'] | Measuring social functions of city regions from large-scale taxi behaviors | 211,123 |
The performance of orthogonal frequency division multiplexing ultra wideband (UWB) radio signals distribution in long-reach passive optical networks (LR-PONs) using conventional chirp-less Mach Zehnder (MZ) and linearized (L) Y-fed directional couplers electro-optic modulators (EOMs) is compared. Particularly, the optimum modulation index and the corresponding minimum optical signal-to-noise ratio (OSNR) required to achieve a bit error probability of 10 -12 are evaluated through numerical simulation for systems operating with single and three UWB sub-bands and different standard single-mode fiber (SSMF) distances indicated for LR-PONs. Both modulators are characterized experimentally and theoretical model parameters are adjusted to correctly describe the power and chirp characteristics of the electro-optic conversion in the simulation process. It is shown that the optimum modulation index and the tolerance to modulation index variations, considering either single- or multi-band UWB operation, is approximately two times higher for systems employing L-EOM than MZ-EOM. Additionally, with MZ-EOM, the required OSNR may increase considerably when the fiber length increases due to the power fading induced by fiber chromatic dispersion, achieving a penalty of almost 3 dB for systems with 100 km of SSMF and single UWB sub-band operation. Instead, with L-EOM, the required OSNR penalty (when compared with the back-to-back case) is lower for fiber lengths up to 100 km than it is for systems employing the MZ-EOM due to the combined effect of the chirp generated by the modulator and the chromatic dispersion. | ['Tiago M. F. Alves', 'Maria Morant', 'Adolfo V. T. Cartaxo', 'Roberto Llorente'] | Performance Comparison of OFDM-UWB Radio Signals Distribution in Long-Reach PONs Using Mach-Zehnder and Linearized Modulators | 14,888
We propose a dynamical model-based approach for tracking the shape and deformation of highly deforming objects from time-varying imagery. Previous works have assumed that the object deformation is smooth, which is realistic for the tracking problem, but most have restricted the deformation to belong to a finite-dimensional group, such as affine motions, or to finitely-parameterized models. This, however, limits the accuracy of the tracking scheme. We exploit the smoothness assumption implicit in previous work, but we lift the restriction to finite-dimensional motions/deformations. To do so, we derive analytical tools to define a dynamical model on the (infinite-dimensional) space of curves. To demonstrate the application of these ideas to object tracking, we construct a simple dynamical model on shapes, which is a first-order approximation to any dynamical system. We then derive an associated nonlinear filter that estimates and predicts the shape and deformation of a object from image measurements. | ['Ganesh Sundaramoorthi', 'Andrea C. Mennucci', 'Stefano Soatto', 'Anthony J. Yezzi'] | Tracking deforming objects by filtering and prediction in the space of curves | 321,986 |
Users' confidential data in transit on the WWW are protected by the HTTP's authentication scheme or the SSL protocol. However, the former has several weak points in terms of security, while the latter has a few problems preventing its wide deployment. To alleviate the problems, we propose a scheme for user-initiated server authentication and two schemes for protecting against the Cross-Site-Scripting (XSS) and Cross-Site Reference Forgery (XSRF) attacks. Server authentication fails when phishing, pharming, and MITM attacks are deployed, leading to the detection of those attacks. The protection schemes can thwart MITM, as well as XSS and XSRF. We integrate our schemes into the HTTP and extend the browser so that the user can start server authentication when a loaded web page has a form for submitting data and the user notifies the browser that his/her submitted data are confidential. The browser invokes the protection schemes when the page has no submission form, since XSS and XSRF are deployed without the user's awareness, i.e., without the submission form. | ['Masaru Takesue'] | An HTTP Extension for Secure Transfer of Confidential Data | 141,748
A depth image-based error concealment algorithm for 3-D video transmission is proposed, which utilizes the strong correlations between 2-D video and its corresponding depth map. We first investigate the internal characteristics of the macroblock in the depth map, and then take advantage of these characteristics to recover accurately the lost motion vector for the corrupted blocks, with the joint consideration of the neighbor information and the corresponding depth. Experimental results show that the proposed method provides significant improvements in terms of both objective and subjective evaluations. | ['Yunqiang Liu', 'Jin Wang', 'Huanhuan Zhang'] | Depth Image-Based Temporal Error Concealment for 3-D Video Transmission | 441,032
We address the problem of 3D model based vehicle tracking from monocular videos of calibrated traffic scenes. A 3D wire-frame model is set up as prior information and an efficient fitness evaluation method based on image gradients is introduced to estimate the fitness score between the projection of vehicle model and image data, which is then combined into a particle filter based framework for robust vehicle tracking. Numerous experiments are conducted and experimental results demonstrate the effectiveness of our approach for accurate vehicle tracking and robustness to noise and occlusions. | ['Zhaoxiang Zhang', 'Kaiqi Huang', 'Tieniu Tan', 'Yunhong Wang'] | 3D Model Based Vehicle Tracking Using Gradient Based Fitness Evaluation under Particle Filter Framework | 109,868 |
In Service Oriented systems, complex applications can be composed from a variety of functionally equivalent Web services which may differ for quality parameters. Under this scenario, applications are defined as high level business processes and service composition can be implemented dynamically by identifying the best set of services available at run time. In this paper, we model the service composition problem as a mixed integer linear problem where local constraints, i.e., constraints for component Web services, and global constraints, i.e., constraints for the whole application, can be specified. Our approach proposes the formulation of the optimization problem as a global optimization, not optimizing separately each possible execution path as in other approaches. Experimental results demonstrate the effectiveness of our approach. | ['Danilo Ardagna', 'Barbara Pernici'] | Global and local QoS guarantee in web service selection | 851,488 |
Typically, we have several tasks at hand, some of which are in interrupted state while others are being carried out. Most of the time, such interruptions are not disruptive to task performance. Based on the theory of Long-Term Working Memory (LTWM; Ericsson, K.A., Kintsch, W., 1995. Long-term working memory. Psychological Review, 102, 211-245), we posit that unless there are enough mental skills and resources to encode task representations to retrieval structures in long-term memory, the resulting memory traces will not enable reinstating the information, which can lead to memory losses. However, once encoded to LTWM, they are virtually safeguarded. Implications of the theory were tested in a series of experiments in which the reading of an expository text was interrupted by a 30-s interactive task, after which the reading was continued. The results convey the remarkably robust nature of skilled memory-when LTWM encoding speed is fast enough for the task-processing imposed by the interface, interruptions have no effect on memory, regardless of their pacing, intensity, or difficulty. In the final experiment where presentation time in the main task was notably speeded up to match the limits of encoding speed, interruptions did hamper memory. Based on the results and the theory, we argue that auditory rehearsal or time-based retrieval cues were not utilized in surviving interruptions and that they are in general weaker strategies for surviving interruptions in complex cognitive tasks. We conclude the paper by suggesting three ways to support interruption tolerance by the means of task and interface design: (1) actively facilitating the development of memory skills, (2) matching encoding speed to task processing demands, and (3) supporting encoding-retrieval symmetry. | ['Antti Oulasvirta', 'Pertti Saariluoma'] | Surviving task interruptions: Investigating the implications of long-term working memory theory | 197,455 |
This paper describes a novel approach to multi-document summarization, which explicitly addresses the problem of detecting, and retaining for the summary, multiple themes in document collections. We place equal emphasis on the processes of theme identification and theme presentation. For the former, we apply Iterative Residual Rescaling (IRR); for the latter, we argue for graphical display elements. IRR is an algorithm designed to account for correlations between words and to construct multi-dimensional topical space indicative of relationships among linguistic objects (documents, phrases, and sentences). Summaries are composed of objects with certain properties, derived by exploiting the many-to-many relationships in such a space. Given their inherent complexity, our multi-faceted summaries benefit from a visualization environment. We discuss some essential features of such an environment. | ['Rie Kubota Ando', 'Branimir Boguraev', 'Roy J. Byrd', 'Mary S. Neff'] | Visualization-enabled multi-document summarization by Iterative Residual Rescaling | 402,321 |
The paper describes the task of performing efficient decision-theoretic troubleshooting of electromechanical devices. In general, this task is NP-complete, but under fairly strict assumptions, a greedy approach will yield an optimal sequence of actions, as discussed in the paper. This set of assumptions is weaker than the set proposed by Heckerman et al. (1995). However, the printing system domain, which motivated the research and which is described in detail in the paper, does not meet the requirements for the greedy approach, and a heuristic method is used. The method takes value of identification of the fault into account and it also performs a partial two-step look-ahead analysis. We compare the results of the heuristic method with optimal sequences of actions, and find only minor differences between the two. | ['Finn Verner Jensen', 'Uffe Bro Kjærulff', 'Brian Kristiansen', 'Helge Langseth', 'Claus Skaanning', 'Jirí Vomlel', 'Marta Vomlelová'] | The SACSO methodology for troubleshooting complex systems | 37,053 |
Grounding lexical choice in Bayesian inference. | ['Kyle Albarado', 'Michael L. Kalish'] | Grounding lexical choice in Bayesian inference. | 991,168 |
Numerical Experiments with a Primal-Dual Algorithm for Solving Quadratic Problems | ['Derkaoui Orkia', 'Lehireche Ahmed'] | Numerical Experiments with a Primal-Dual Algorithm for Solving Quadratic Problems | 721,543 |
Local scattering in the vicinity of the receiver or the transmitter leads to the formation of a large number of multipath components along different spatial angles. A condition of angular distribution, which is valid for only a uniform linear array, is proposed in this paper to justify whether the spatial fading correlation (SFC) remains simple as a Bessel function. If an angular distribution satisfies the condition, a class of angular distributions is revealed and results in simplifying the analysis of the SFC. To demonstrate its practical use, we apply the condition to several angular distributions that are considered in previous works. It is found that cosine and von Mises distributions follow the condition, whereas uniform, Gaussian, and Laplacian distributions do not satisfy the condition, and then, one needs to calculate the sinusoidal coefficients in the SFC computation. | ['Bamrung Tau Sieskul', 'Claus Kupferschmidt', 'Thomas Kaiser'] | Spatial Fading Correlation for Local Scattering: A Condition of Angular Distribution | 395,532 |
A statistically robust and biologically-based approach for analysis of microarray data is described that integrates independent biological knowledge and data with a global F-test for finding genes of interest that minimizes the need for replicates when used for hypothesis generation. First, each microarray is normalized to its noise level around zero. The microarray dataset is then globally adjusted by robust linear regression. Second, genes of interest that capture significant responses to experimental conditions are selected by finding those that express significantly higher variance than those expressing only technical variability. Clustering expression data and identifying expression-independent properties of genes of interest including upstream transcriptional regulatory elements (TREs), ontologies and networks or pathways organizes the data into a biologically meaningful system. We demonstrate that when the number of genes of interest is inconveniently large, identifying a subset of "beacon genes" representing the largest changes will identify pathways or networks altered by biological manipulation. The entire dataset is then used to complete the picture outlined by the "beacon genes." This allows construction of a structured model of a system that can generate biologically testable hypotheses. We illustrate this approach by comparing cells cultured on plastic or an extracellular matrix which organizes a dataset of over 2,000 genes of interest from a genome wide scan of transcription. The resulting model was confirmed by comparing the predicted pattern of TREs with experimental determination of active transcription factors. | ['Mikhail G Dozmorov', 'Kimberly D. Kyker', 'Paul J. Hauser', 'Ricardo Saban', 'David D. Buethe', 'Igor Dozmorov', 'Michael Centola', 'Daniel J. Culkin', 'Robert E. Hurst'] | From microarray to biology: an integrated experimental, statistical and in silico analysis of how the extracellular matrix modulates the phenotype of cancer cells | 471,948
Defect prediction is a powerful tool that greatly helps focus quality assurance efforts during development. When fault data from a particular context are available, there are different ways of using such fault predictions in practice. Companies like Google, Bell Labs and Cisco make use of fault prediction, whereas its use within the automotive industry has not yet gained much traction, although modern cars require a huge amount of software to operate. In this paper, we want to contribute to the adoption of fault prediction techniques for automotive software projects. Hereby we rely on a publicly available data set comprising fault data from three automotive software projects. When learning a fault prediction model from the data of one particular project, we achieve a remarkably high and nearly perfect prediction performance for the same project. However, when applying a cross-project prediction we obtain rather poor results. These results are rather surprising, given that the underlying projects are as similar as two distinct projects can possibly be within a certain application context. Therefore we investigate the reasons behind this observation through correlation and factor analysis techniques. We further report the obtained findings and discuss the consequences for future applications of Cross-Project Fault Prediction (CPFP) in the domain of automotive software. | ['Harald Altinger', 'Steffen Herbold', 'Jens Grabowski', 'Franz Wotawa'] | Novel Insights on Cross Project Fault Prediction Applied to Automotive Software | 670,333
To encourage data sharing in the life sciences, supporting tools need to minimize effort and maximize incentives. We have created infrastructure that makes it easy to create portals that support dataset sharing and simplified publishing of the datasets as high quality linked data. We report here on our infrastructure and its use in the creation of a melanoma dataset portal. This portal is based on the Comprehensive Knowledge Archive Network (CKAN) and Prizms, an infrastructure to acquire, integrate, and publish data using Linked Data principles. In addition, we introduce an extension to CKAN that makes it easy for others to cite datasets from within both publications and subsequently-derived datasets using the emerging nanopublication and World Wide Web Consortium provenance standards. | ['James P. McCusker', 'Timothy Lebo', 'Michael Krauthammer', 'Deborah L. McGuinness'] | Next Generation Cancer Data Discovery, Access, and Integration Using Prizms and Nanopublications | 561,124
In this paper, we compare the inference capabilities of three different types of fuzzy cognitive maps (FCMs). A fuzzy cognitive map is a recurrent artificial neural network that creates models as collections of concepts/neurons and the various causal relations that exist between these concepts/neurons. In the paper, a variety of industry/engineering FCM applications is presented. The three different types of FCMs that we study and compare are the binary, the trivalent and the sigmoid FCM, each of them using the corresponding transfer function for their neurons/concepts. Predictions are made by dynamically viewing the consequences of the various imposed scenarios. The prediction-making capabilities are examined and presented. Conclusions are drawn concerning the use of the three types of FCMs for making predictions. Guidance is given in order for FCM users to choose the most suitable type of FCM, according to (a) the nature of the problem, (b) the required representation capabilities of the problem and (c) the level of inference required by the case. | ['Athanasios K. Tsadiras'] | Comparing the inference capabilities of binary, trivalent and sigmoid fuzzy cognitive maps | 307,834
In this paper, we propose a new technique for semi-automatic syntactic annotation of Arabic corpora. We describe a tool that takes a morpho-syntactic tagged corpus as an input and provides its syntactic annotation as output according to the ArabTAG formalism. We say it is 'intelligent' because this tool automatically learns and improves during elementary annotation (supertagging). It applies a supervised classification method that combines three classifiers (Naive Bayes, K-Nearest Neighbours, Decision tree). In order to evaluate the ability of this tool to acquire information from human intervention, we present an experimental protocol for a small Treebank of 5000 words. | ['Chiraz Ben Othmane Zribi', 'Fériel Ben Fraj', 'Mohamed Ben Ahmed'] | An intelligent tool for syntactic annotation of Arabic corpora | 146,175 |
Generating English plural determiners from semantic representations: a neural network learning approach | ['Gabriele Scheler'] | Generating English plural determiners from semantic representations: a neural network learning approach | 283,999 |
Fully adaptive SVD-based noise removal for robust speech recognition | ['Kris Hermus', 'Ioannis Dologlou', 'Patrick Wambacq', 'Dirk Van Compernolle'] | Fully adaptive SVD-based noise removal for robust speech recognition | 731,787 |
Wavelet packets and local trigonometric bases provide an efficient framework and fast algorithms to obtain a "best basis" or "best representation" of deterministic signals. Applying these deterministic techniques to stochastic processes may, however, lead to variable results. We revisit this problem and introduce a prior model on the underlying signal in noise and account for the contaminating noise model as well. We thus develop a Bayesian-based approach to the best basis problem, while preserving the classical tree search efficiency. | ['Jean-Christophe Pesquet', 'Hamid Krim', 'David Leporini', 'E. Hamman'] | Bayesian approach to best basis selection | 152,523 |
This paper focuses on the estimation of the intrinsic camera parameters and the trajectory of the camera from an image sequence. Intrinsic camera calibration and pose estimation are the prerequisites for many applications involving navigation tasks, scene reconstruction, and merging of virtual and real environments. Proposed and evaluated is a technical solution to decrease the sensitivity of self-calibration by placing easily identifiable targets of known shape in the environment. The relative position of the targets need not be known a priori. Assuming an appropriate ratio of size to distance these targets resolve known ambiguities. Constraints on the target placement and the cameras' motions are explored. The algorithm is extensively tested in a variety of real-world scenarios. | ['Jeffrey Mendelsohn', 'Konstantinos Daniilidis'] | Constrained self-calibration | 272,561 |
Seizure prediction using polynomial SVM classification | ['Zisheng Zhang', 'Keshab K. Parhi'] | Seizure prediction using polynomial SVM classification | 665,474 |
In this paper, we propose and investigate a simple, low-complexity and deterministic cooperative protocol that exploits Network Coding (NC) within a wireless network. The scenario under investigation is a Long Term Evolution Advanced (LTE-A) downlink communication system network, where a source of data sends k streams of packets to a Pico relay which forwards them to two Femto relays via a fiber optic link and to the destination via lossy wireless channels. The Femto relays forward the received k packets to the destination (downlink scenario) through two lossy wireless channels after applying NC over them. The proposed scenario applies NC in a deterministic way over the data link layer, specifically over the MAC sub-layer, taking advantage of Hybrid Automatic Repeat Request (HARQ) and Coordinated Multi-Point (CoMP) applications. Simulation results showed a good improvement when NC is applied, in terms of packet error probability and transmission data rate over the MAC layer. Moreover, they showed how significantly the ARQ performance is improved when cooperative NC is implemented. | ['Hani Attar', 'Lina Stankovic', 'Mohamed Alhihi', 'Ahmed Ameen'] | Deterministic network coding over Long Term Evaluation Advance communication system | 532,562
The transition from traditional circuit-switched phone systems to modern packet-based Internet telephony networks demands tools to support Voice over Internet Protocol (VoIP) development. In this paper, we introduce the XinuPhone, an integrated hardware/software approach for educating users about VoIP technology on a real-time embedded platform. We propose modular course topics for design-oriented, hands-on laboratory exercises: filter design, timing, serial communications, interrupts and resource budgeting, network transmission, and system benchmarking. Our open-source software platform encourages development and testing of new CODECs alongside existing standards, unlike similar commercial solutions. Furthermore, the supporting hardware features inexpensive, readily available components designed specifically for educational and research users on a limited budget. The XinuPhone is especially good for experimenting with design trade-offs as well as interactions between real-time software and hardware components. | ['Kyle Persohn', 'Dennis Brylow'] | Interactive Real-Time Embedded Systems Education Infused with Applied Internet Telephony | 444,109 |
Rough Non-deterministic Information Analysis (RNIA) is a rough set-based data analysis framework for Non-deterministic Information Systems (NISs). RNIA-related algorithms and software tools developed so far for rule generation provide good characteristics of NISs and can be successfully applied to decision making based on non-deterministic data. This article presents a general overview of Decision Making in RNIA including both theoretical and algorithmic aspects of the theory. We mainly focused on the following aspects of RNIA: (1) a question-answering functionality that enables decision makers to analyze data gathered in NISs, (2) an automatic decision rule generation with stability factor. | ['Hitomi Okuma', 'Michinori Nakata', 'Dominik Slezak', 'Hiroshi Sakai'] | An overview of decision making in Rough Non-deterministic Information Analysis | 476,741 |
In wireless sensor networks, the limited energy and cache space of nodes around the base station, as well as the instability of multi-hop transmission, seriously interfere with the performance of traditional data collection protocols. To address this problem, a data collection mechanism using a mobile base station is proposed. Firstly, a new clustering algorithm, Time High-Overflow-Based Dominating (THD), is proposed, in which the sensor network is divided into several clusters based on the sample rate and cache of the nodes. Secondly, a temporary caching mechanism (TCM) is proposed to further resolve the buffer overflow problem. Finally, the Dominating-Based Minimum Weighted Sum (DMWS) protocol is proposed in order to design the optimal path of the mobile base station. The simulation results show that, compared with traditional data collection schemes, the method shortens the moving distance of the mobile base station, extends the network life cycle, reduces the energy consumption, and shortens the data delay. | ['Jianpeng Du', 'Hui Wang', 'Yiming Wu', 'Fukun Jiang', 'Haiping Huang'] | A data collection approach based on mobile sinks for heterogeneous sensor networks | 951,663
Wireless sensor networks provide an opportunity to enhance the current equipment diagnosis systems in the process industry, which have been based so far on wired networks. In this paper, we use our experience in the Anshan Iron and Steel Factory, China, as an example to present the issues from the real field of process industry, and our solutions. The challenges are three fold: First, very high reliability is required; second, energy consumption is constrained; and third, the environment is very challenging and constrained. To address these issues, it is necessary to put systematic efforts on network topology and node placement, network protocols, embedded software, and hardware. In this paper, we propose two technologies i.e. design for reliability and energy efficiency (DRE), and design for reconfiguration (DRC). Using these techniques we developed Anshan, a wireless sensor network for monitoring the temperature of rollers in a continuously annealing line and detecting equipment failures. Project Anshan includes 406 sensor nodes and has been running for four months continuously. | ['Yadong Wan', 'Lei Li', 'Jie He', 'Xiaotong Zhang', 'Qin Wang'] | Anshan: Wireless Sensor Networks for Equipment Fault Diagnosis in the Process Industry | 185,124 |
We propose a new approach for assigning audio data in large missing audio parts (from 1 to 16 seconds). Inspired by image inpainting approaches, the proposed method uses the repetitive aspect of music pieces on musical features to recover missing segments via an exemplar-based reconstruction. Tonal features combined with a string matching technique allow locating repeated segments accurately. The evaluation consists in performing listening tests of randomly reconstructed audio excerpts on both musician and nonmusician subjects, and experiments highlight good results in assigning musically relevant parts. The contribution of this paper is twofold: bringing musical features to solve a signal processing problem in the case of large missing audio parts, and successfully applying exemplar-based techniques on musical signals while keeping a musical consistency on audio pieces. | ['Benjamin Martin', 'Pierre Hanna', 'Ta Vinh Thong', 'Myriam Desainte-Catherine', 'Pascal Ferraro'] | Exemplar-based Assignment of Large Missing Audio Parts using String Matching on Tonal Features. | 637,520
This paper describes a system, WOLFIE (WOrd Learning From Interpreted Examples), that acquires a semantic lexicon from a corpus of sentences paired with semantic representations. The lexicon learned consists of words paired with meaning representations. WOLFIE is part of an integrated system that learns to parse novel sentences into semantic representations, such as logical database queries. Experimental results are presented demonstrating WOLFIE's ability to learn useful lexicons for a database interface in four different natural languages. The lexicons learned by WOLFIE are compared to those acquired by a similar system developed by Siskind (1996). | ['Cynthia A. Thompson', 'Raymond J. Mooney'] | Automatic construction of semantic lexicons for learning natural language interfaces | 1,263 |
As microprocessors become increasingly complex, the techniques used to analyze and predict their behavior must become increasingly rigorous. We apply wavelet analysis techniques to the problem of dI/dt estimation and control in modern microprocessors. While prior work has considered Bayesian phase analysis, Markov analysis, and other techniques to characterize hardware and software behavior, we know of no prior work using wavelets for characterizing computer systems. The dI/dt problem has been increasingly vexing in recent years, because of aggressive drops in supply voltage and increasingly large relative fluctuations in CPU current dissipation. Because the dI/dt problem has natural frequency dependence (it is worst in the mid-frequency range of roughly 50-200 MHz) it is natural to apply frequency-oriented techniques like wavelets to understand it. Our work proposes (i) an offline wavelet-based estimation technique that can accurately predict a benchmark's likelihood of causing voltage emergencies, and (ii) an online wavelet-based control technique that uses key wavelet coefficients to predict and avert impending voltage emergencies. The offline estimation technique works with roughly 0.94% error. The online control technique reduces false positives in dI/dt prediction, allowing voltage control to occur with less than 2.5% performance overhead on the SPEC benchmark suite. | ['Russ Joseph', 'Zhigang Hu', 'Margaret Martonosi'] | Wavelet analysis for microprocessor design: experiences with wavelet-based dI/dt characterization | 922,606
A calculation algorithm for hepatorenal contrast in real ultrasonic images is proposed for research on time-series changes in patient condition with aging. It provides automatic calculation of the kidney pelvis position based on fuzzy inference, which detects the kidney and liver regions used to calculate hepatorenal contrast. Experimental calculation results for 150 ultrasonic images taken during real treatment at Kochi Medical School hospital show that the accuracy of kidney pelvis detection is 93% and that the correlation coefficient of hepatorenal contrast with normal gamma-GT is 0.82. The proposed algorithm is being considered for use in the analysis of condition change at the Center of Medical Information Science, Kochi Medical School. | ['Yutaka Hatakeyama', 'Hiromi Kataoka', 'Noriaki Nakajima', 'Teruaki Watabe', 'Yoshiyasu Okuhara'] | Calculation algorithm of hepatorenal contrast in ultrasonic images based on fuzzy inference | 26,879
Balancing Fidelity and Performance in Virtual Walkthrough | ['Kian-Lee Tan', 'Yixin Ruan', 'Jason Chionh', 'Zhiyong Huang', 'Lidan Shou'] | Balancing Fidelity and Performance in Virtual Walkthrough | 276,830 |
A generalized fuzzy c-means (FCM) clustering is proposed by modifying the standard FCM objective function and introducing some simplifications. FCM clustering results in very fuzzy partitions for data points that are far from all cluster centroids. This property distinguishes FCM from Gaussian mixture models or entropy based clustering. The generalized FCM clustering aims at aggregating standard FCM and entropy based FCM so that the generalized algorithm is furnished with the two distinctive properties for data points that are far from all centroids and for those that are close to any centroid. k-Harmonic means clustering is reviewed from the viewpoint of FCM clustering. Graphical comparisons of the four classification functions are presented. | ['Hidetomo Ichihashi', 'Katsuhiro Honda', 'Akira Notsu', 'Takao Hattori'] | Aggregation of Standard and Entropy Based Fuzzy c-Means Clustering by a Modified Objective Function | 457,827
Wide computer registers offer opportunities to exploit parallel processing. Instead of using hardware assists to partition a register into independent non-interacting fields, the multiple data elements can borrow and carry from elements to the left, and yet be accurately separated. Algorithms can be designed so that they execute within the allocated precision. Their floating point or irrational constants (e.g., cosines) are converted into integer numerators with floating point denominators. The denominators are then merged into scaling terms. To control the dynamic range and thus require fewer bits of precision per element, shift rights can be used. The effect of the average truncation errors is analyzed and a technique shown to minimize this average error. | ['Joan L. Mitchell', 'Arianne T. Hinds'] | Enhanced parallel processing in wide registers | 481,772
Multimodal grammars provide an expressive formalism for multimodal integration and understanding. However, hand-crafted multimodal grammars can be brittle with respect to unexpected, erroneous, or disfluent inputs. In previous work, we have shown how the robustness of stochastic language models can be combined with the expressiveness of multimodal grammars by adding a finite-state edit machine to the multimodal language processing cascade. In this paper, we present an approach where the edits are trained from data using a noisy channel model paradigm. We evaluate this model and compare its performance against hand-crafted edit machines from our previous work in the context of a multimodal conversational system (MATCH). | ['Michael Johnston', 'Srinivas Bangalore'] | Learning Edit Machines for Robust Multimodal Understanding | 187,835 |
In this paper, the robust control of robot manipulators with consideration of motor dynamics is studied. A robust control scheme is designed based on a third-order dynamic model of robot manipulators that incorporates motor dynamics. The motor torque ripple and parameter uncertainty of the armature circuit are taken into account. The well known robust saturation control technique is applied to the third-order dynamic model to design a control law that guarantees the uniform ultimate boundedness of the closed-loop tracking errors. Simulations are conducted to evaluate the proposed control method, and the results have confirmed its effectiveness. | ['Guangjun Liu', 'Andrew A. Goldenberg'] | Robust control of robot manipulators incorporating motor dynamics | 100,766 |
In this letter, a coherence-based technique for atmospheric artifact removal in ground-based (GB) zero-baseline synthetic aperture radar (SAR) acquisitions is proposed. For this purpose, polarimetric measurements acquired using the GB-SAR sensor developed at the Universitat Politecnica de Catalunya are employed. The heterogeneous environment of Collserola Park in the outskirts of Barcelona, Spain, was selected as the test area. Data sets were acquired at X-band during one week in June 2005. The effects of the atmosphere variations between successive zero-baseline SAR polarimetric acquisitions are treated here in detail. The need to compensate for the resulting phase-difference errors when retrieving interferometric information is put forward. A compensation technique is then proposed and evaluated using the control points placed inside the observed scene. | ['Luca Pipia', 'Xavier Fabregas', 'Albert Aguasca', 'Carlos Lopez-Martinez'] | Atmospheric Artifact Compensation in Ground-Based DInSAR Applications | 316,795 |
Hypermap Specification and Certified Linked Implementation Using Orbits | ['Jean-François Dufourd'] | Hypermap Specification and Certified Linked Implementation Using Orbits | 598,982 |
To each number β > 1 correspond abelian groups in Rd, of the form Λβ = Σi=1d Zβei, which obey βΛβ ⊂ Λβ. The set Zβ of beta-integers is a countable set of numbers: it is precisely the set of real numbers which are polynomial in β when they are written in "basis β", and Zβ = Z when β ∈ N. We prove here a list of arithmetic properties of Zβ: addition, multiplication, relation with integers, when β is a quadratic Pisot-Vijayaraghavan unit (quasicrystallographic inflation factors are particular examples). We also consider the case of a cubic Pisot-Vijayaraghavan unit associated with the seven-fold cyclotomic ring. At the end, we show how the point sets Λβ are vertices of d-dimensional tilings. | ['Christiane Frougny', 'Jean-Pierre Gazeau', 'Rudolf Krejcar'] | Additive and multiplicative properties of point sets based on beta-integers | 115,175 |
Based on a bijective mapping between two mixed integer sets, we introduce a new perspective on developing cuts for the mixed integer polyhedral conic (MIPC) set by establishing a one-to-one correspondence between the cuts for this set and those for a related mixed integer knapsack (MIK) set. The face/facet-defining properties of the corresponding cuts are identical for their respective sets. We further show that the cut generation approach for the MIPC set resulting from this new perspective always produces cuts that dominate those generated based on any of the two individual MIK constraints corresponding to the MIPC constraint. Our computational results show this dominance can be quite significant. As a special case of this new perspective, the conic MIR inequality of Atamturk and Narayanan for the MIPC set and its properties can be directly derived from the MIR inequality for the MIK set and its properties. We also generalize these cuts to the n-step conic MIR inequalities, which are directly derived form the n-step MIR inequalities for the MIK set. | ['Sujeevraja Sanjeevi', 'Sina Masihabadi (1985-2011)', 'Kiavash Kianfar'] | Using cuts for mixed integer knapsack sets to generate cuts for mixed integer polyhedral conic sets | 576,786 |
Curbing Resource Consumption Using Team-Based Feedback - - Paper Printing in a Longitudinal Case Study -. | ['Souleiman Hasan', 'Richard Medland', 'Marcus Foth', 'Edward Curry'] | Curbing Resource Consumption Using Team-Based Feedback - - Paper Printing in a Longitudinal Case Study -. | 739,395 |
We present a new scan-BIST approach for determining failing vectors for fault diagnosis. This approach is based on the application of overlapping intervals of test vectors to the circuit under test. Two MISRs (multiple-input signature registers) are used in an interleaved fashion to generate intermediate signatures, thereby obviating the need for multiple test sessions. The knowledge of failing and non-failing intervals is used to obtain a set S of candidate failing vectors that includes all the actual (true) failing vectors. We present analytical results to determine an appropriate interval length and the degree of overlap, an upper bound on the size of S, and a lower bound on the number of true failing vectors; the latter depends only on the knowledge of failing and non-failing intervals. Finally, we describe two pruning procedures that allow us to reduce the size of S, while retaining most true failing vectors in S. We present experimental results for the ISCAS 89 benchmark circuits to demonstrate the effectiveness of the proposed scan-BIST diagnosis approach. | ['Chunsheng Liu', 'Krishnendu Chakrabarty', 'Michael Goessel'] | An Interval-Based Diagnosis Scheme for Identifying Failing Vectors in a Scan-BIST Environment | 460,453 |
New Complex Product Introduction by Means of Product Configuration. | ['Martin Bonev', 'Manuel Korell', 'Lars Hvam'] | New Complex Product Introduction by Means of Product Configuration. | 746,029 |
In response to the productivity challenge of the U.S. DARPA HPCS initiative, we have developed a methodology that provides an extremely simple and pain-free interface through which scientists can collect rich performance data from selected parts of an execution, digest the data at a very high level, and plan for improvements. This process can be easily repeated, each time refining the selection of parts of the application and revising the granularity of data collected, until complete insight is gained about bottlenecks. A distinct feature of our approach is that the framework is independent of the features being examined. Recognizing that the features to be examined change with systems/applications and also with the depth at which an aspect is being examined, our framework provides an easy interface to continually add new features for examination. Furthermore, many different features can be collected simultaneously and examined in a non-interfering manner. Finally, all this is accomplished without changing the source code in any manner. We believe that this is an ideal platform for building knowledge-based repositories for automatic performance tuning, which is the subject of our future study. In this paper, we describe our productivity centered framework for application performance tuning. It comprises three features: a unique source code and binary instrumentation feature, a versatile user-interface that brings all the sophisticated capabilities of the binary instrumentation to the user at a higher level of abstraction, and the functionality to collect different dimensions of performance data. The results of execution are all in terms of source level names and at no point does the scientist need to worry about low-level details of instrumentation. We believe that it is this ability, of deciphering performance impacts at source level, that leads to high productivity of scientists to understand, direct and tune the behavior of the computing system. | ['Simone Sbaraglia', 'Hui-Fang Wen', 'Seetharami Seelam', 'I-Hsin Chung', 'Guojing Cong', 'Kattamuri Ekanadham', 'David J. Klepacki'] | A productivity centered application performance tuning framework | 49,615