Columns: abstract (string, lengths 8–9.19k); authors (string, lengths 9–1.96k); title (string, lengths 8–367); __index_level_0__ (int64, 13–1,000k)
Visio-spatial road boundary detection for unmarked urban and rural roads
['Tobias Kuhnl', 'Jannik Fritsch']
Visio-spatial road boundary detection for unmarked urban and rural roads
211,106
Relay links are expected to play a critical role in the design of wireless networks. This paper investigates the energy efficiency of relay communications in the low-power regime under two different scenarios: when the relay has unlimited power supply and when it has limited power supply. A system with a source node, a destination node, and a single relay operating in the time division duplex (TDD) mode was considered. Analysis and simulations are used to compare the energy required for transmitting one information bit in three different relay schemes: amplify and forward (AnF), decode and forward (DnF), and block Markov coding (BMC). Relative merits of these relay schemes in comparison with direct transmissions (direct Tx) are discussed. The optimal allocation of power and transmission time between source and relay is also studied.
['Yingwei Yao', 'Xiaodong Cai', 'Georgios B. Giannakis']
On energy efficiency and optimum resource allocation of relay transmissions in the low-power regime
208,005
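The energy comparison in the abstract above can be illustrated with a deliberately simple model. The sketch below is not the paper's analysis (which covers AnF, DnF, and BMC with optimal power and time allocation); it only shows, under an assumed d^alpha path-loss law with invented parameters, why a half-way decode-and-forward relay can lower the transmit energy per bit relative to direct transmission.

```python
# Toy energy-per-bit comparison: direct transmission vs. an idealized
# decode-and-forward (DnF) relay placed halfway between source and destination.
# NOT the paper's analysis -- just a path-loss illustration of why relaying
# can reduce transmit energy in the low-power regime.
import numpy as np

def energy_per_bit(distance, alpha=3.5, e0=1.0):
    """Transmit energy per bit over `distance` under a d^alpha path-loss
    model, normalized so that unit distance costs `e0` (assumed values)."""
    return e0 * distance ** alpha

d = 1.0                          # source-destination distance (normalized)
direct = energy_per_bit(d)
# DnF over two half-length hops: each hop is cheaper by (1/2)^alpha,
# but the bit is transmitted twice (source->relay, relay->destination).
dnf = 2 * energy_per_bit(d / 2)

print(f"direct Tx : {direct:.3f}")
print(f"DnF relay : {dnf:.3f}  (gain x{direct / dnf:.2f})")
```

With alpha = 3.5 the two half-length hops cost roughly 5.7 times less transmit energy than the direct link; receiver-side energy consumption and the coding details of the three relay schemes, which the paper does analyze, are ignored here.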
Documents in rich text corpora often contain multiple facets of information. For example, an article from a medical document collection might consist of multifaceted information about symptoms, treatments, causes, diagnoses, prognoses, and preventions. Thus, documents in the collection may have different relations across each of these various facets. Topic analysis and exploration for such multi-relational corpora is a challenging visual analytic task. This paper presents Solar Map, a multifaceted visual analytic technique for visually exploring topics in multi-relational data. Solar Map simultaneously visualizes the topic distribution of the underlying entities from one facet together with keyword distributions that convey the semantic definition of each cluster along a secondary facet. Solar Map combines several visual techniques including 1) topic contour clusters and interactive multifaceted keyword topic rings, 2) a global layout optimization algorithm that aligns each topic cluster with its corresponding keywords, and 3) an optimal temporal network segmentation and layout method that renders temporal evolution of clusters. Finally, the paper concludes with two case studies and a quantitative user evaluation that show the power of the Solar Map technique.
['Nan Cao', 'David Gotz', 'Jimeng Sun', 'Yu Ru Lin', 'Huamin Qu']
SolarMap: Multifaceted Visual Analytics for Topic Exploration
256,239
This paper addresses three major problems of closed task Chinese word segmentation (CWS): word overlap, tagging sentences interspersed with non-Chinese words, and long named entity (NE) identification. For the first, we use additional bigram features to approximate trigram and tetragram features. For the second, we first apply K-means clustering to identify non-Chinese characters. Then, we employ a two-tagger architecture: one for Chinese text and the other for non-Chinese text. Finally, we post-process our CWS output using automatically generated templates. Our results show that additional bigrams can effectively identify more unknown words. Secondly, using our two-tagger method, segmentation performance on sentences containing non-Chinese words is significantly improved when non-Chinese characters are sparse in the training corpus. Lastly, identification of long NEs and long words is also enhanced by template-based post-processing. Using corpora in the closed task of SIGHAN CWS, our best system achieves F-scores of 0.956, 0.947, and 0.965 on the AS, HK, and MSR corpora respectively, compared to the best contest scores of 0.952, 0.943, and 0.964 in SIGHAN Bakeoff 2005. In AS, this performance is comparable to the best result (F=0.956) in the open task.
['Richard Tzong-Han Tsai', 'Hong-Jie Dai', 'Hsieh-Chuan Hung', 'Cheng-Lung Sung', 'Min-Yuh Day', 'Wen-Lian Hsu']
Chinese Word Segmentation with Minimal Linguistic Knowledge: An Improved Conditional Random Fields Coupled with Character Clustering and Automatically Discovered Template Matching
18,311
A new class of embedded devices is emerging that has a mixture of traditional firmware (written in C/C++) with an embedded virtual machine (e.g., Java). For these devices, the main part of the application is usually written in C/C++ for efficiency and extensible features can be added on the virtual machine (even after product shipment). These late bound features need access to the C/C++ code and may in fact replace or extend functionality that was originally deployed in ROM. This paper describes the JeCOM bridge that dramatically simplifies development and deployment of such add-on features for the embedded devices and allows the features to be added without requiring the firmware to be reburned or reflashed. After being dynamically loaded onto the device's Java virtual machine, the JeCOM bridge facilitates transparent bidirectional communication between the Java application and the underlying firmware. Our bridging approach focuses on embedded applications development and deployment, and makes several significant advances over traditional Java Native Interface or other fixed stub/skeleton COM/CORBA/RMI approaches. In particular, we address object discovery, object lifecycle management, and memory management for parameter passing. While the paper focuses on the specific elements and experiences with an HP proprietary infrastructure, the techniques developed are applicable to a wide range of mixed language and mixed distributed object-based systems.
['Jun Li', 'Keith Moore']
Enabling Rapid Feature Deployment on Embedded Platforms with JeCOM Bridge
22,392
Online social media represent a fundamental shift in how information is being produced, transferred and consumed. User generated content in the form of blog posts, comments, and tweets establishes a connection between the producers and the consumers of information. Tracking the pulse of the social media outlets enables companies to gain feedback and insight in how to improve and market products better. For consumers, the abundance of information and opinions from diverse sources helps them tap into the wisdom of crowds, to aid in making more informed decisions. The present tutorial investigates techniques for social media modeling, analytics and optimization. First we present methods for collecting large scale social media data and then discuss techniques for coping with and correcting for the effects arising from missing and incomplete data. We proceed by discussing methods for extracting and tracking information as it spreads among the users. Then we examine methods for extracting temporal patterns by which information popularity grows and fades over time. We show how to quantify and maximize the influence of media outlets on the popularity and attention given to a particular piece of content, and how to build predictive models of information diffusion and adoption. As the information often spreads through implicit social and information networks we present methods for inferring networks of influence and diffusion. Last, we discuss methods for tracking the flow of sentiment through networks and the emergence of polarization.
['Jure Leskovec']
Social media analytics: tracking, modeling and predicting the flow of information through networks
18,972
A Supporting System for Human Creativity: Computer Aided Divergent Thinking Process by Provision of Associative Pieces of Information
['Kazushi Nishimoto', 'Kenji Mochizuki', 'Tsutomu Miyasato', 'Fumio Kishino']
A Supporting System for Human Creativity: Computer Aided Divergent Thinking Process by Provision of Associative Pieces of Information
211,533
Discusses analysis and synthesis techniques for robust pole placement in linear matrix inequality (LMI) regions, a class of convex regions of the complex plane that embraces most practically useful stability regions. The focus is on linear systems with static uncertainty on the state matrix. For this class of uncertain systems, the notion of quadratic stability and the related robustness analysis tests are generalized to arbitrary LMI regions. The resulting tests for robust pole clustering are all numerically tractable because they involve solving linear matrix inequalities (LMIs) and cover both unstructured and parameter uncertainty. These analysis results are then applied to the synthesis of dynamic output-feedback controllers that robustly assign the closed-loop poles in a prescribed LMI region. With some conservatism, this problem is again tractable via LMI optimization. In addition, robust pole placement can be combined with other control objectives, such as $H_2$ or $H_\infty$ performance, to capture realistic sets of design specifications. Physically motivated examples demonstrate the effectiveness of this robust pole clustering technique.
['Mahmoud Chilali', 'Pascal Gahinet', 'Pierre Apkarian']
Robust pole placement in LMI regions
534,495
This letter presents a framework of composite kernel machines for enhanced classification of hyperspectral images. This novel method exploits the properties of Mercer's kernels to construct a family of composite kernels that easily combine spatial and spectral information. This framework of composite kernels demonstrates: 1) enhanced classification accuracy as compared to traditional approaches that take into account the spectral information only; 2) flexibility to balance between the spatial and spectral information in the classifier; and 3) computational efficiency. In addition, the proposed family of kernel classifiers opens a wide field for future developments in which spatial and spectral information can be easily integrated.
['Gustavo Camps-Valls', 'Luis Gomez-Chova', 'Jordi Muñoz-Marí', 'Joan Vila-Francés', 'Javier Calpe-Maravilla']
Composite kernels for hyperspectral image classification
465,580
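The weighted-summation member of the composite-kernel family described in the abstract above is easy to sketch. Below is a minimal, hedged illustration: two RBF kernels, one on spectral features and one on spatial features, combined as K = mu*K_spectral + (1-mu)*K_spatial and passed to an SVM as a precomputed Gram matrix. The feature dimensions, gamma values, and random data are placeholder assumptions, not the paper's hyperspectral setup.

```python
# Weighted-summation composite kernel: a convex combination of two Mercer
# (RBF) kernels remains a valid Mercer kernel, fed to an SVM as a
# precomputed Gram matrix.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 200
X_spectral = rng.normal(size=(n, 30))   # per-pixel spectral bands (stand-in)
X_spatial = rng.normal(size=(n, 5))     # per-pixel spatial features (stand-in)
y = rng.integers(0, 2, size=n)          # binary labels for illustration

mu = 0.6  # balance between spectral and spatial information
K = mu * rbf_kernel(X_spectral, gamma=0.1) + (1 - mu) * rbf_kernel(X_spatial, gamma=0.5)

clf = SVC(kernel="precomputed").fit(K, y)
print("training accuracy:", clf.score(K, y))
```

The single parameter mu realizes the abstract's point 2): it tunes the classifier between purely spectral (mu = 1) and purely spatial (mu = 0) behavior.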
The evaluation of changes in Intervertebral Discs (IVDs) with 3D Magnetic Resonance (MR) Imaging (MRI) can be of interest for many clinical applications. This paper presents the evaluation of both IVD localization and IVD segmentation methods submitted to the Automatic 3D MRI IVD Localization and Segmentation challenge, held at the 2015 International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI2015) with an on-site competition. With the construction of a manually annotated reference data set composed of 25 3D T2-weighted MR images acquired from two different studies and the establishment of a standard validation framework, quantitative evaluation was performed to compare the results of methods submitted to the challenge. Experimental results show that overall the best localization method achieves a mean localization distance of 0.8 mm and the best segmentation method achieves a mean Dice of 91.8%, a mean average absolute distance of 1.1 mm and a mean Hausdorff distance of 4.3 mm, respectively. The strengths and drawbacks of each method are discussed, which provides insights into the performance of different IVD localization and segmentation methods.
['Guoyan Zheng', 'Chengwen Chu', 'Daniel L. Belavý', 'Bulat Ibragimov', 'Robert Korez', 'Tomaž Vrtovec', 'Hugo Hutt', 'Richard M. Everson', 'Judith R. Meakin', 'Isabel López Andrade', 'Ben Glocker', 'Hao Chen', 'Qi Dou', 'Pheng-Ann Heng', 'Chunliang Wang', 'Daniel Forsberg', 'Ales Neubert', 'Jurgen Fripp', 'Martin Urschler', 'Darko Štern', 'Maria Wimmer', 'Alexey A. Novikov', 'Hui Cheng', 'Gabriele Armbrecht', 'Dieter Felsenberg', 'Shuo Li']
Evaluation and comparison of 3D intervertebral disc localization and segmentation methods for 3D T2 MR data: A grand challenge.
875,714
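The headline metrics of the challenge above, Dice overlap and Hausdorff distance, are straightforward to compute. The sketch below uses toy 2-D binary masks in place of 3-D MR segmentations and, for brevity, applies the Hausdorff distance to the full mask point sets rather than extracted surfaces.

```python
# Dice coefficient between binary segmentation masks and the symmetric
# Hausdorff distance between their point sets (toy 2-D stand-ins for the
# challenge's 3-D volumes).
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(a, b):
    """Dice coefficient of two boolean masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def hausdorff(pts_a, pts_b):
    """Symmetric Hausdorff distance between two point sets (rows = points)."""
    return max(directed_hausdorff(pts_a, pts_b)[0],
               directed_hausdorff(pts_b, pts_a)[0])

seg = np.zeros((64, 64), bool); seg[20:40, 20:40] = True   # prediction
ref = np.zeros((64, 64), bool); ref[22:42, 18:38] = True   # reference

print("Dice     :", round(dice(seg, ref), 3))
print("Hausdorff:", hausdorff(np.argwhere(seg).astype(float),
                              np.argwhere(ref).astype(float)))
```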
Advances in Distributed Branch and Bound.
['Lars Otten', 'Rina Dechter']
Advances in Distributed Branch and Bound.
767,141
The development in positioning technology has enabled us to collect a huge amount of movement data from moving objects, such as humans, animals, and vehicles. The data embed rich information about the relationships among moving objects and have applications in many fields, e.g., in ecological study and human behavioral study. Previously, we have proposed a system MoveMine that integrates several state-of-the-art movement mining methods. However, it does not include recent methods on relationship pattern mining. Thus, we propose to extend MoveMine to MoveMine 2.0 by adding substantial new methods in mining dynamic relationship patterns. Newly added methods focus on two types of pairwise relationship patterns: (i) attraction/avoidance relationship, and (ii) following pattern. A user-friendly interface is designed to support interactive exploration of the result and provides flexibility in tuning parameters. MoveMine 2.0 is tested on multiple types of real datasets to ensure its practical use. Our system provides useful tools for domain experts to gain insights on real datasets. Meanwhile, it will promote further research in relationship mining from moving objects.
['Fei Wu', 'Tobias Kin Hou Lei', 'Zhenhui Li', 'Jiawei Han']
MoveMine 2.0: mining object relationships from movement data
668,383
An Interval Type-2 Fuzzy Multiple Echelon Supply Chain Model.
['Simon Miller', 'Robert John']
An Interval Type-2 Fuzzy Multiple Echelon Supply Chain Model.
776,395
One of the key reasons for the failure of project estimation techniques is the selection of inappropriate estimation models. Further, noisy data pose a challenge to building accurate estimation models. Therefore, software cost estimation (SCE) is a challenging problem that has attracted many researchers over the past few decades. In recent times, the use of computational intelligence methodologies for software cost estimation has gained prominence. This paper reviews some of the commonly used computational intelligence (CI) techniques, analyzes their application in software cost estimation, and outlines the emerging trends in this area.
['Tirimula Rao Benala', 'Satchidananda Dehuri', 'Rajib Mall']
Computational intelligence in software cost estimation: an emerging paradigm
495,743
A novel reversible design for double edge triggered flip-flops and new designs of reversible sequential circuits
['Mariam Zomorodi Moghadam', 'Keivan Navi', 'Mahmood Kalemati']
A novel reversible design for double edge triggered flip-flops and new designs of reversible sequential circuits
797,450
Two-player win-lose games have a simple directed graph representation. Exploiting this, we develop graph theoretic techniques for finding Nash equilibria in such games. In particular, we give a polynomial time algorithm for finding a Nash equilibrium in a two-player win-lose game whose graph representation is planar.
['Louigi Addario-Berry', 'Neil Olver', 'Adrian Vetta']
A Polynomial Time Algorithm for finding Nash Equilibria in Planar Win-Lose Games
448,535
The history of articulatory synthesis at Haskins Laboratories.
['Philip Rubin', 'Gordon Ramsay', 'Mark Tiede']
The history of articulatory synthesis at Haskins Laboratories.
791,980
This paper provides an overview of the INEX Linked Data Track, which went into its second iteration in 2013.
['Sairam Gurajada', 'Jaap Kamps', 'Arunav Mishra', 'Ralf Schenkel', 'Martin Theobald', 'Qiuyue Wang']
Overview of the INEX 2013 Linked Data Track
39,871
Background: The popularity of open source software development in the last decade has brought about an increased interest from the industry on how to use open source components, participate in the open source community, build business models around this type of software development, and learn more about open source development methodologies. Aim: The aim of this study is to review research carried out on usage of open source components and development methodologies by the industry, as well as companies’ participation in the open source community. Method: Systematic review through searches in library databases and manual identification of articles from the open source conference. Results: 19 articles were identified. Conclusions: The articles could be divided into four categories: open source as part of component based software engineering, business models with open source in commercial organization, company participation in open source development communities, and usage of open source processes within a company.
['Martin Höst', 'Alma Orucevic-Alagic']
A systematic review of research on open source software in commercial software product development
814,891
A Physical Analysis of an Accident Scenario.
['Alexander Flach', 'Klaus David']
A Physical Analysis of an Accident Scenario.
767,402
Background. Interpretation of microarray data remains challenging because biological meaning should be extracted from enormous numeric matrices and be presented explicitly. Moreover, huge public repositories of microarray datasets are ready to be exploited for comparative analysis. This study aimed to provide a platform where the essential implication of a microarray experiment could be visually expressed and various microarray datasets could be intuitively compared. Results. On the semantic space, gene sets from the Molecular Signature Database (MSigDB) were plotted as landmarks and their relative distances were calculated by Lin’s semantic similarity measure. By formal concept analysis, a microarray dataset was transformed into a concept lattice with gene clusters as objects and Gene Ontology terms as attributes. Concepts of a lattice were located on the semantic space reflecting semantic distance from landmarks and edges between concepts were drawn; consequently, a specific geographic pattern could be observed from a microarray dataset. We termed a distinctive geography shared by microarray datasets of the same category a “semantic signature.” Conclusions. “Semantic space,” a map of biological entities, could serve as a universal platform for comparative microarray analysis. When microarray data were displayed on the semantic space as concept lattices, “semantic signature,” the characteristic geography of a microarray experiment, could be discovered.
['Jihun Kim', 'Keewon Kim', 'Ju Han Kim']
Semantic Signature: Comparative Interpretation of Gene Expression on a Semantic Space
724,218
Exploratory Data Analysis of Software Repositories via GPU Processing
['Jose Ricardo Silva Junior', 'Esteban Clua', 'Leonardo Murta', 'Anita Sarma']
Exploratory Data Analysis of Software Repositories via GPU Processing
618,804
Many modern satellite and terrestrial point-to-point communications systems use circular polarization (CP) wave propagation in order to maximize the polarization efficiency component of the link budget. Therefore, in an undergraduate electromagnetics syllabus, an introduction to the topic of circular polarization is necessary to promote an understanding of the propagation aspects of modern communications system design. Students new to the antennas and propagation discipline often have difficulty in grasping the concept of CP; therefore, in this paper, the essential aspects of this topic are reinforced by a tutorial description of CP in terms of wave propagation, antenna properties, and measurement techniques. A simple laboratory-based project is described that requires the design, fabrication, and measurement of a crossed dipole antenna. The measured input impedance and radiation patterns are correlated with theory to highlight the conditions necessary to support CP wave propagation. By combining basic electromagnetic concepts with a series of simple intuitive laboratory experiments, the students can more easily visualize, and hence understand, CP wave propagation and its use in communications systems design.
['B.Y. Toh', 'Robert Cahill', 'Vincent Fusco']
Understanding and measuring circular polarization
42,440
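A small numerical experiment can reinforce the tutorial's central point: two orthogonal field components of equal amplitude and a 90° phase offset trace a circle (axial ratio 1), while amplitude imbalance or phase error yields elliptical or linear polarization. The sketch below traces the polarization ellipse directly; all amplitudes and phases are illustrative values, not measurements from the paper's crossed dipole.

```python
# Axial ratio of the polarization ellipse traced by two orthogonal field
# components: Ex = ex_amp*cos(wt), Ey = ey_amp*cos(wt + phase).
import numpy as np

def axial_ratio(ex_amp, ey_amp, phase_deg):
    """Major/minor axis ratio of the traced polarization ellipse."""
    wt = np.linspace(0, 2 * np.pi, 3600)
    ex = ex_amp * np.cos(wt)
    ey = ey_amp * np.cos(wt + np.radians(phase_deg))
    r = np.hypot(ex, ey)          # instantaneous field magnitude
    return r.max() / r.min()

print("equal amplitude, 90 deg :", round(axial_ratio(1.0, 1.0, 90), 3))  # ~1 -> CP
print("unequal amplitude       :", round(axial_ratio(1.0, 0.5, 90), 3))  # elliptical
print("in phase                :", axial_ratio(1.0, 1.0, 0))   # huge -> nearly linear
```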
It is widely recognized that differential decode-and-forward (DDF) cooperative transmission schemes are capable of achieving a cooperative diversity gain, while circumventing potentially excessively complex and yet inaccurate channel estimation, especially in mobile environments. In this letter, we find the optimum transmit-interval duration for the source and relay in the context of TDMA-based DDF-aided half-duplex systems for the sake of maximizing the achievable network throughput. We also demonstrate, from a pure capacity perspective, in what scenarios the introduction of cooperation improves the achievable throughput.
['Li Wang', 'Lajos Hanzo']
Optimum time resource allocation for TDMA-based differential decode-and-forward cooperative systems: a capacity perspective
197,683
Semi Supervised Adaptive Framework for Classifying Evolving Data Stream
['Ahsanul Haque', 'Latifur Khan', 'Michael Baron']
Semi Supervised Adaptive Framework for Classifying Evolving Data Stream
667,854
FeedbackBypass: A New Approach to Interactive Similarity Query Processing
['Ilaria Bartolini', 'Paolo Ciaccia', 'Florian Waas']
FeedbackBypass: A New Approach to Interactive Similarity Query Processing
973,622
This paper proposes a novel approach for the parallel execution of tiled iteration spaces onto a cluster of SMP PC nodes. Each SMP node has multiple CPUs and a single memory mapped PCI-SCI network interface card. We apply a hyperplane-based grouping transformation to the tiled space, so as to group together independent neighboring tiles and assign them to the same SMP node. In this way, intranode (intragroup) communication is annihilated. Groups are atomically executed inside each node. Nodes exchange data between successive group computations. We schedule groups much more efficiently by exploiting the inherent overlapping between communication and computation phases among successive atomic group executions. The applied non-blocking schedule resembles a pipelined datapath where group computation phases are overlapped with communication ones, instead of being interleaved with them. Our experimental results illustrate that the proposed method outperforms previous approaches involving blocking communication or conventional grouping schemes.
['Maria Athanasaki', 'Aristidis Sotiropoulos', 'Georgios Tsoukalas', 'Nectarios Koziris']
A pipelined execution of tiled nested loops on SMPs with computation and communication overlapping
341,831
Audio-Games (AGs) are electronic games that mainly (or exclusively) implement auditory means to express the game's plot, mechanics and content. AG players need to concentrate on perceiving sound, in order to understand and play the game. Recent developments in the field of edutainment suggest that AGs can be implemented not only as entertaining systems, but also as valid tools for research and education on any curriculum related (but not limited) to acoustics and sound studies. This paper will first discuss existing approaches to organizing the sonic content of an AG. Then it will propose a methodology for designing an AG for educational purposes. Finally, a case-study consisting of the development of two educational AGs related to the issue of noise will be demonstrated, on which the proposed methodology is applied.
['Emmanouel Rovithis', 'Andreas Floros', 'Andreas Mniestris', 'Nikolas Grigoriou']
Audio games as educational tools: Design principles and examples
926,903
The present paper generalises results by Tadaki [12] and Calude et al. [1] on oscillation-free partially random infinite strings. Moreover, it shows that oscillation-free partial Chaitin randomness can be separated from oscillation-free partial strong Martin-Löf randomness by $\Pi_{1}^{0}$-definable sets of infinite strings.
['Ludwig Staiger']
On oscillation-free Chaitin h-random sequences
585,126
So far, some models have been established to calculate the complex effective permittivity of an aqueous electrolyte solution. Almost all of the models based on the fitting parameters in Debye's equation were used to calculate the complex effective permittivity of a few aqueous electrolyte solutions, such as saline water and seawater. In this paper, we propose a new empirical formula to calculate the complex effective permittivity of a mixed aqueous electrolyte solution based on the measurement of complex impact factors for fundamental ions. The calculated complex effective permittivities of six mixed aqueous electrolyte solutions were compared with the measured results at 915 and 2450 MHz. The complex permittivity of saline water obtained by this model was compared with the published data as well. Good agreement can be observed.
['Xiaoqing Yang', 'Kama Huang']
The empirical formula for calculating the complex effective permittivity of an aqueous electrolyte solution at microwave frequency
4,919
The objective of this paper is to evaluate the effects of changing different physics schemes on the accuracy of weather simulations at different locations inside Egypt. The model sensitivity to physics options was tested in the four seasons and the results were compared to observations at different locations. Different physics packages were used based on different planetary boundary layer (PBL) and radiation schemes. The question to be answered in this study is: which scheme is best for a certain location and/or weather regime? The paper presents the details of model configurations, the results of the carried out simulations, and the behavior of the model with different physics options and initializations at different locations. The best physics options for each location, and how to get better solutions for areas with complex land-use characteristics, were identified. This is beneficial in determining how to choose an optimal set-up for a forecasting system, especially in Egypt.
['H. S. Badr', 'Hamdy A. Kandil', 'Basman Elhadidi', 'Atef O. Sherif']
Evaluating the physics options of regional weather models for areas with complex land-use characteristics
38,066
We propose a corner detector algorithm that yields results on both mono-spectral and multispectral images. To validate the method, we compare its mono-spectral version to the Harris detector, which is the most frequently used in the literature. This study shows that the proposed method generally gives more efficient results. However, bad localisations appear for very blurred images (as for most corner detectors). Therefore, we have implemented a sub-pixel detector able to find the exact corner position.
['Catherine Achard', 'Erwan Bigorgne', 'Jean Devars']
A sub-pixel and multispectral corner detector
520,589
Current Geographic Information Systems (GISs) do not adequately allow users to query spatial databases by means of qualitative terms like left, near, or above. Hence, we propose a matching framework that enables users to formulate configurations in a spatial query in an intuitive and qualitative manner. Spatial queries are translated into the formal query language Structured Query Language (SQL) which is used to query and retrieve results from spatial databases. In order to demonstrate the applicability of our approach we developed the Bremen Tourists Advisor with the matching framework as prominent component. Finally, we conduct experiments in the BTA context which exhibit the efficiency of our framework.
['Rami Al-Salman', 'Frank Dylla', 'Paolo Fogliaroni']
Matching geo-spatial information by qualitative spatial relations
268,131
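The abstract above describes translating qualitative terms such as "near" into SQL over a spatial database. The sketch below shows the general shape such a translation could take, using an invented relation-to-template table, a hypothetical places schema, and standard PostGIS predicates (ST_DWithin, ST_Centroid, ST_Y); the paper's actual matching framework and qualitative calculus are considerably richer.

```python
# Hypothetical mapping from qualitative spatial relations to parameterized
# SQL/PostGIS predicates. Table names, the "near" threshold, and the
# relation templates are invented for illustration.
NEAR_RADIUS_M = 250  # assumed threshold for "near" (hypothetical)

TEMPLATES = {
    "near": "ST_DWithin(a.geom, b.geom, {radius})",
    "north_of": "ST_Y(ST_Centroid(a.geom)) > ST_Y(ST_Centroid(b.geom))",
}

def qualitative_to_sql(relation, subject_type, reference_name):
    """Build a SQL query for 'find <subject_type> <relation> <reference>'."""
    predicate = TEMPLATES[relation].format(radius=NEAR_RADIUS_M)
    return (
        "SELECT a.name FROM places a, places b "
        f"WHERE a.type = '{subject_type}' AND b.name = '{reference_name}' "
        f"AND {predicate};"
    )

# e.g. "restaurants near the train station"
print(qualitative_to_sql("near", "restaurant", "train station"))
```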
To detect the integrity of a damaged pipeline and accurately evaluate its damaged condition, a CCD image data guided digital automatic ultrasonic testing system was designed to realize the dynamic measurement of a pipeline damaged surface, and extract the characteristic parameters of the damaged pipeline. Based on the three-dimensional discrete point set which was gathered to describe the integrity of the damaged pipeline, we establish a damaged pipeline entity model which is closer to the actual condition, and analyze and simulate mechanical properties under the actual load by using the numerical simulation software (ABAQUS), to get the most dangerous region and the trend of stress and displacement. This method provides a more reliable reference for integral safety analysis and evaluation of damaged pipelines.
['Hua Bai', 'Bing Chen']
Study on the integrity detection and evaluation method of damaged pipeline
102,408
Hashing techniques have been widely adopted for cross-modal retrieval due to their low storage cost and fast query speed. Most existing cross-modal hashing methods aim to map heterogeneous data into the common low-dimensional Hamming space and then threshold to obtain binary codes by relaxing the discrete constraint. However, this independent relaxation step also brings quantization errors, resulting in poor retrieval performances. Other cross-modal hashing methods try to directly optimize the challenging objective function with discrete binary constraints. Inspired by [1], we propose a novel supervised cross-modal hashing method called Discrete Cross-Modal Hashing (DCMH) to learn the discrete binary codes without relaxing them. DCMH is formulated through reconstructing the semantic similarity matrix and learning binary codes as ideal features for classification. Furthermore, DCMH alternately updates binary codes of each modality, and iteratively learns the discrete hashing codes bit by bit efficiently, which is quite promising for large-scale datasets. Extensive empirical results on three real-world datasets show that DCMH outperforms the baseline approaches significantly.
['Dekui Ma', 'Jian Liang', 'Xiangwei Kong', 'Ran He', 'Ying Li']
Discrete Cross-Modal Hashing for Efficient Multimedia Retrieval
994,638
An omnidirectional image sensor COPIS (conic projection image sensor) is proposed for guiding navigation of a mobile robot. It features passive sensing of the omnidirectional environment in real-time using a conic mirror. Because the conic mirror is used, its image is under conic projection; where the azimuth of each point in the scene appears in the image as its direction from the image center. The authors describe COPIS and its application to guide the navigation of a mobile robot. The COPIS system acquires an omnidirectional view around the robot in real-time by using a conic mirror. Under the assumption of constant motion of the robot, locations of objects around the robot can be estimated by detecting their azimuth changes in the omnidirectional image. Using this method, the robot generates an environmental map of an indoor scene while it is moving in the environment. A method to avoid collision against objects by detecting their azimuth changes is presented. >
['Yasushi Yagi', 'Masahiko Yachida']
Real-time generation of environmental map and obstacle avoidance using omnidirectional image sensor with conic mirror
244,824
In inter-picture coding, a block-based frequency transform is usually carried out on the predicted errors for each inter-block to remove the spatial correlation among them. However, it cannot always do well since the predicted errors in some inter-blocks have marginal or diagonal correlation. A good solution is to omit transform operations for the predicted errors of those inter-blocks with low correlation before the quantization operation. The same phenomenon also can be observed in fine grain scalability (FGS) layer coding. In this paper, an adaptive prediction error coding method in the spatial and frequency domains with lower complexity is considered for FGS layer coding. A transform operation is only needed when there are non-zero reconstructed coefficients in the spatially co-located block in the base layer. The experimental results show that compared with FGS coding in JSVM, higher coding efficiency can be achieved with lower computational complexity at the decoder, since the inverse transform is no longer needed for those predicted errors coded in the spatial domain at the encoder.
['Li Zhang', 'Xiangyang Ji', 'Wen Gao', 'Debin Zhao']
Adaptive Spatial and Transform Domain FGS Coding
111,931
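The decision rule described above, transforming the FGS-layer prediction error only when the co-located base-layer block has non-zero reconstructed coefficients, can be sketched in a few lines. This is a hedged illustration, not the JSVM implementation; the 4x4 block size and the plain DCT are assumptions.

```python
# Sketch of adaptive spatial/frequency-domain coding of one FGS-layer block:
# transform only when the base layer suggests the residual is correlated.
import numpy as np
from scipy.fft import dctn

def code_fgs_block(pred_error, base_layer_coeffs):
    """Return (domain, data) for one block of FGS prediction error."""
    if np.any(base_layer_coeffs != 0):
        # correlated residual expected -> frequency-domain coding
        return "frequency", dctn(pred_error, norm="ortho")
    # low-correlation residual -> skip the transform; the decoder then
    # also skips the inverse transform, which is the complexity saving
    return "spatial", pred_error

block = np.random.default_rng(1).integers(-8, 8, size=(4, 4)).astype(float)
print(code_fgs_block(block, base_layer_coeffs=np.zeros((4, 4)))[0])  # spatial
print(code_fgs_block(block, base_layer_coeffs=np.eye(4))[0])         # frequency
```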
In this paper we establish some properties of fuzzy quasi-pseudo-metric spaces. An important result is that any partial ordering can be defined by a fuzzy quasi-metric, which can be applied both in theoretical computer science and in information theory, where it is usual to work with sequences of objects of increasing information. We also obtain decomposition theorems of a fuzzy quasi-pseudo-metric into a right continuous and ascending family of quasi-pseudo-metrics. We develop a topological foundation for complexity analysis of algorithms and programs, and based on our results a fuzzy complexity space can be considered. Also, we build fertile ground to study some types of fuzzy quasi-pseudo-metrics on the domain of words, which play an important role in denotational semantics, and on the poset BX of all closed formal balls on a metric space.
['Sorin Nădăban', 'Ioan Dzitac']
Some Properties and Applications of Fuzzy Quasi-Pseudo-Metric Spaces
703,169
In this paper, we propose a novel geometric model fitting method, called Mode-Seeking on Hypergraphs (MSH), to deal with multi-structure data even in the presence of severe outliers. The proposed method formulates geometric model fitting as a mode seeking problem on a hypergraph in which vertices represent model hypotheses and hyperedges denote data points. MSH intuitively detects model instances by a simple and effective mode seeking algorithm. In addition to the mode seeking algorithm, MSH includes a similarity measure between vertices on the hypergraph and a "weight-aware sampling" technique. The proposed method not only alleviates sensitivity to the data distribution, but also is scalable to large scale problems. Experimental results further demonstrate that the proposed method has significant superiority over the state-of-the-art fitting methods on both synthetic data and real images.
['Hanzi Wang', 'Guobao Xiao', 'Yan Yan', 'David Suter']
Mode-Seeking on Hypergraphs for Robust Geometric Model Fitting
575,322
The development and maintenance of large and complex ontologies are often time-consuming and error-prone. Thus, automated ontology learning and evolution have attracted intensive research interest. In data-centric applications where ontologies are designed from the data or automatically learnt from it, when new data instances are added that contradict the ontology, it is often desirable to incrementally revise the ontology according to the added data. In description logics, this problem can be intuitively formulated as the operation of TBox contraction, i.e., the rational elimination of certain axioms from the logical consequences of a TBox w.r.t. an ABox. In this paper we introduce a model-theoretic approach to such a contraction problem by using an alternative semantic characterisation of DL-Lite TBoxes. We show that entailment checking (without necessarily first computing the contraction result) is in coNP, which does not shift the corresponding complexity in propositional logic, and the problem is tractable when the size of the new data is bounded.
['Zhe Wang', 'Kewen Wang', 'Zhiqiang Zhuang', 'Guilin Qi']
Instance-driven ontology evolution in DL-lite
741,952
This paper describes the results of an experiment applying the strategy method to analyze the behavior of subjects in an 8-player common pool resource (CPR) game. The CPR game consists of a constituent game played for 20 periods. The CPR game has a unique optimum and a unique subgame perfect equilibrium; the latter involves overinvestment in the appropriation from the CPR. Sixteen students, all experienced in game theory, were recruited to play the CPR game over the course of 6 weeks. In the first phase of the experiment, they played the CPR game on-line 3 times. In the second phase of the experiment, the tournament phase, they designed strategies which were then played against each other. At the aggregate level, subgame perfect equilibrium organizes the data fairly well. At the individual level, however, fewer than 5% of subjects play in accordance with the game equilibrium prediction.
['Claudia Keser', 'Roy Gardner']
Strategic Behavior of Experienced Subjects in a Common Pool Resource Game
597,265
Editorial IEEE Transactions on Cognitive and Developmental Systems
['Yaochu Jin']
Editorial IEEE Transactions on Cognitive and Developmental Systems
685,012
As industry moves towards many-core chips, networks-on-chip (NoCs) are emerging as the scalable fabric for interconnecting the cores. With power now the first-order design constraint, early-stage estimation of NoC power has become crucially important. ORION [29] was amongst the first NoC power models released, and has since been fairly widely used for early-stage power estimation of NoCs. However, when validated against recent NoC prototypes -- the Intel 80-core Teraflops chip and the Intel Scalable Communications Core (SCC) chip -- we saw significant deviation that can lead to erroneous NoC design choices. This prompted our development of ORION 2.0, an extensive enhancement of the original ORION models which includes completely new subcomponent power models, area models, as well as improved and updated technology models. Validation against the two Intel chips confirms a substantial improvement in accuracy over the original ORION. A case study with these power models plugged within the COSI-OCC NoC design space exploration tool [23] confirms the need for, and value of, accurate early-stage NoC power estimation. To ensure the longevity of ORION 2.0, we will be releasing it wrapped within a semi-automated flow that automatically updates its models as new technology files become available.
['Andrew B. Kahng', 'Bin Li', 'Li-Shiuan Peh', 'Kambiz Samadi']
ORION 2.0: a fast and accurate NoC power and area model for early-stage design space exploration
483,696
Critical Mass: How One Thing Leads to Another by Philip Ball.
['Bruce Edmonds']
Critical Mass: How One Thing Leads to Another by Philip Ball.
733,174
We consider the system of robots with limited visibility, where each robot can see only the robots within the unit visibility range (a.k.a. the unit distance range). In this model, we focus on the inherent cost we have to pay for connectivity preservation in the conservative way (i.e., in any execution, no edge of the visibility graph is deleted). We present a bad configuration with a visibility graph of diameter D for which any conservative algorithm requires $\Omega(D^2)$ rounds to make all robots movable. This result implies that we inherently need edge-deletion mechanisms to solve many connectivity-preserving problems (as considered in AOSY99, FPSW05, SDY09) within $o(D^2)$ rounds.
['Daichi Kaino', 'Taisuke Izumi']
On the Worst-Case Initial Configuration for Conservative Connectivity Preservation
926,821
A problem related to the deployment of femtocells is how to manage users' access to radio resources. On one hand, all resources of the femtocell can be reserved for users belonging to a closed subscriber group (CSG), which is a set of users defined by a femtocell subscriber. This approach, known as closed access, however, increases interference to users not included in the CSG, as those users do not have permission to access this femtocell. In contrast, resources can be shared by all users with no priority in an open access mode. In this case, the femtocell subscriber shares radio as well as backhaul resources with all other users. Thus, the throughput and quality of service of the subscriber and the CSG users can deteriorate. To satisfy both the CSG as well as non-CSG users, a hybrid access is seen as a compromise. In this paper, we propose a new approach for sharing radio resources among all users. As in common cases, the CSG users have a priority for usage of a part of the resources while the rest of the resources is shared by all users proportionally to their requirements. As the simulation results show, the proposed resource sharing scheme significantly improves the throughput of the CSG users and their satisfaction with granted bitrates. At the same time, the throughput and satisfaction of the non-CSG users is still guaranteed at roughly the same level as if conventional sharing schemes are applied.
['Zdenek Becvar', 'Jan Plachy']
Radio Resource Sharing Among Users in Hybrid Access Femtocells
597,585
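The hybrid sharing described above can be sketched as a two-stage allocation: a reserved fraction split among CSG users, then the remainder split among all users proportionally to their requirements. The 50% reservation, the demands, and the topology-free setting below are illustrative assumptions, not the paper's scheme in full.

```python
# Two-stage hybrid resource sharing: CSG priority on a reserved part,
# proportional sharing of the rest among all users.
import numpy as np

def hybrid_share(demands, is_csg, csg_fraction=0.5, total=1.0):
    """Allocate `total` resources: CSG users first split the reserved part
    proportionally to demand, then everyone shares the remainder."""
    demands = np.asarray(demands, float)
    is_csg = np.asarray(is_csg, bool)
    alloc = np.zeros_like(demands)
    reserved = total * csg_fraction
    if is_csg.any():
        alloc[is_csg] += reserved * demands[is_csg] / demands[is_csg].sum()
    else:
        reserved = 0.0  # nothing is held back if no CSG user is present
    alloc += (total - reserved) * demands / demands.sum()
    return alloc

# two CSG users and two visiting (non-CSG) users with equal demands:
# CSG users end up with 0.375 each, visitors with 0.125 each
print(hybrid_share([1, 1, 1, 1], [True, True, False, False]))
```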
We review commercially available software-defined radio platforms and classify them with respect to their ability to enable rapid prototyping of next-generation wireless systems. In particular, we first discuss the research challenges imposed by the latest software-defined radio enabling technologies including both analog and digital processing hardware. Then we present the state-of-the-art commercial software-defined radio platforms, describe their software and hardware capabilities, and classify them based on their ability to enable rapid prototyping and advance experimental research in wireless networking. Finally, we present three experimental testbed scenarios (wireless terrestrial, aerial, and underwater) and argue that the development of a system design abstraction could significantly improve the efficiency of the prototyping and testbed implementation process.
['George Sklivanitis', 'Adam Gannon', 'Stella N. Batalama', 'Dimitris A. Pados']
Addressing next-generation wireless challenges with commercial software-defined radio platforms
600,412
Statistical model training technique for speech synthesis based on speaker class.
['Yusuke Ijima', 'Noboru Miyazaki', 'Hideyuki Mizuno']
Statistical model training technique for speech synthesis based on speaker class.
978,326
In voice coding applications where there is no constraint on the encoding delay, such as store and forward message systems or voice storage, segment coding techniques can be used to achieve a reduction in data rate without compromising the level of distortion. For low data rate linear predictive coding schemes, increasing the encoding delay allows one to exploit any long term temporal stationarities on an interframe basis, thus reducing the transmission bandwidth or storage needs of the speech signal. Transform coding has previously been applied in low data rate speech coding to exploit both the interframe and the intraframe correlation. This paper investigates the potential for optimising the transform for segmented parametric representation of speech.
['Damith J. Mudugamuwa', 'Alan B. Bradley']
Optimal transform for segmented parametric speech coding
136,744
In this paper, a novel approach for simplifying the design, the prototyping and the assembly of a wrist mechanism with 2 DOFs for anthropomorphic tendon-driven robotic hands is presented. This novel design concept allows a relevant reduction of both the number of parts and their manufacturing complexity, guaranteeing at the same time the decoupling of the fingers and the wrist motion by means of a particular choice of tendon routing. The simplification of the mechanism is achieved with the partial drawback of introducing additional friction forces along the tendons, which are however compensated by the control and do not significantly affect the overall behavior of the hand. The proposed wrist design has been adopted in the development of the UB-Hand IV.
['Umberto Scarcia', 'Claudio Melchiorri', 'Gianluca Palli']
Towards simplicity: On the design of a 2-DOFs wrist mechanism for tendon-driven robotic hands
651,254
Defined as an attentive process in the context of visual sequences, dynamic visual attention refers to the selection of the most informative parts of a video sequence. This paper investigates the contribution of motion in dynamic visual attention, and specifically compares computer models designed with the motion component expressed either as the speed magnitude or as the speed vector. Several computer models, including static features (color, intensity and orientation) and motion features (magnitude and vector), are considered. Qualitative and quantitative evaluations are performed by comparing the computer model output with human saliency maps obtained experimentally from eye movement recordings. The model suitability is evaluated in various situations (synthetic and real sequences, acquired with fixed and moving camera perspective), showing the advantages and inconveniences of each method as well as its preferred domain of application.
['Alexandre Bur', 'Pascal Wurtz', 'René M. Müri', 'Heinz Hügli']
Dynamic visual attention: motion direction versus motion magnitude
83,263
The Canny edge detector is a very popular and effective edge feature detector that is used as a pre-processing step in many computer vision algorithms. It is a multi-step detector which performs smoothing and filtering, non-maxima suppression, followed by a connected-component analysis stage to detect “true” edges, while suppressing “false” non-edge filter responses. While there have been previous (partial) implementations of the Canny and other edge detectors on GPUs, they have been focussed on the old style GPGPU computing with programming using graphical application layers. Using the more programmer friendly CUDA framework, we are able to implement the entire Canny algorithm. Details are presented along with a comparison with CPU implementations. We also integrate our detector into MATLAB, a popular interactive simulation package often used by researchers. The source code will be made available as open source.
['Yuancheng Luo', 'Ramani Duraiswami']
Canny edge detection on NVIDIA CUDA
544,388
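Two of the Canny stages mentioned above, gradient filtering and non-maxima suppression, are sketched below as plain CPU reference code; each pixel's work is independent, which is what makes a one-thread-per-pixel CUDA mapping natural. This is a simplified illustration, not the paper's implementation: it omits the hysteresis/connected-component stage and uses a fixed four-direction quantization.

```python
# CPU reference sketch of Canny's gradient and non-maxima suppression stages.
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def gradient_and_nms(img, sigma=1.4):
    smooth = gaussian_filter(img.astype(float), sigma)
    gx, gy = sobel(smooth, axis=1), sobel(smooth, axis=0)
    mag = np.hypot(gx, gy)
    ang = (np.rad2deg(np.arctan2(gy, gx)) + 180.0) % 180.0  # fold to [0, 180)
    out = np.zeros_like(mag)
    # neighbor offsets (dy, dx) along the quantized gradient direction
    offs = {0: (0, 1), 45: (1, 1), 90: (1, 0), 135: (1, -1)}
    for y in range(1, mag.shape[0] - 1):
        for x in range(1, mag.shape[1] - 1):
            # nearest quantized direction, accounting for 0/180 wrap-around
            d = min(offs, key=lambda a: min(abs(ang[y, x] - a),
                                            180 - abs(ang[y, x] - a)))
            dy, dx = offs[d]
            if mag[y, x] >= mag[y + dy, x + dx] and mag[y, x] >= mag[y - dy, x - dx]:
                out[y, x] = mag[y, x]   # local maximum along gradient direction
    return out

edges = gradient_and_nms(np.random.default_rng(0).random((64, 64)))
print("surviving candidate edges:", int((edges > 0).sum()))
```

The Python loops above are exactly the part a CUDA port replaces with a per-pixel kernel; the data-dependent connected-component stage is the harder part to parallelize, which is why the paper's full-pipeline GPU implementation is notable.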
Linsker has reported the development of centre-surround receptive fields and oriented receptive fields in simulations of a Hebb-type equation in a linear network. The dynamics of the learning rule are analysed in terms of the eigenvectors of the covariance matrix of cell activities. Analytic and computational results for Linsker's covariance matrices, and some general theorems, lead to an explanation of the emergence of centre-surround and certain oriented structures.
['David J. C. MacKay', 'Kenneth D. Miller']
Analysis of Linsker's Simulations of Hebbian Rules
537,103
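The analysis technique described above can be demonstrated numerically: under a linear Hebbian rule the averaged weight dynamics follow dw/dt ∝ Cw, so a normalized discretization converges to the principal eigenvector of the covariance matrix C. The random C below is a stand-in for Linsker's layered-network covariance matrices.

```python
# Hebbian weight growth on a linear unit aligns with the principal
# eigenvector of the input covariance matrix (a power-iteration-like effect).
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(8, 8))
C = A @ A.T / 8                      # symmetric PSD stand-in "covariance"

w = rng.normal(size=8)
for _ in range(200):                 # discretized Hebbian growth + normalization
    w = w + 0.1 * C @ w
    w = w / np.linalg.norm(w)

eigvals, eigvecs = np.linalg.eigh(C)
principal = eigvecs[:, -1]           # eigenvector of the largest eigenvalue
print("alignment |<w, e_max>| =", abs(w @ principal))   # -> 1.0
```

Which eigenvector dominates for Linsker's specific covariances, and hence whether centre-surround or oriented structure emerges, is exactly what the paper's analysis works out.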
The verifiability principle of meaning holds that information is meaningful only if there is a procedure that can verify whether it is true or false. In this paper, we explore this principle of philosophy in the realm of program comprehension. We introduce the notion of concept programs, which are independent, executable, and thus verifiable units of program knowledge. Concept programs are well-suited for the comprehension and explanation of the central mechanisms that underlie a complex program. We use an industrial case study to motivate the importance of concept programs for the comprehension of the innermost complexity of industrial programs.
['Reinhard Schauer', 'Rudolf K. Keller']
A case for concept programs
116,713
How do you tell a blackbird from a crow? There has been great progress toward automatic methods for visual recognition, including fine-grained visual categorization in which the classes to be distinguished are very similar. In a task such as bird species recognition, automatic recognition systems can now exceed the performance of non-experts - most people are challenged to name a couple dozen bird species, let alone identify them. This leads us to the question, "Can a recognition system show humans what to look for when identifying classes (in this case birds)?" In the context of fine-grained visual categorization, we show that we can automatically determine which classes are most visually similar, discover what visual features distinguish very similar classes, and illustrate the key features in a way meaningful to humans. Running these methods on a dataset of bird images, we can generate a visual field guide to birds which includes a tree of similarity that displays the similarity relations between all species, pages for each species showing the most similar other species, and pages for each pair of similar species illustrating their differences.
['Thomas Berg', 'Peter N. Belhumeur']
How Do You Tell a Blackbird from a Crow
129,244
In the past years, the advent of multi-core machines has led to the need for adapting current simulation solutions to modern hardware architectures. In this poster, we present a solution to exploit multi-core shared-memory capacities in Yades, a parallel tool for running socio-demography dynamic simulations. We propose to abandon the single-threaded programming approach used in Yades by employing ROOT-Sim, a library which allows applying discrete event simulation to parallel environments by exploiting shared-memory capabilities. As a result of this new approach, our results show the improvement in Yades' performance and scalability.
['Vanessa Büsing-Meneses', 'Cristina Montañola-Sales', 'Josep Casanovas-Garcia', 'Alessandro Pellegrini']
Analysis and optimization of a demographic simulator for parallel environments
660,914
Google Versus Death; To Be, Or Not to Be?
['Newton Lee']
Google Versus Death; To Be, Or Not to Be?
917,464
Imaging modalities that use a mechanically rotated endoscopic probe to scan a tubular volume, such as an artery, often suffer from image degradation due to nonuniform rotation distortion (NURD). In this paper, we present a new method to align individual lines in a sequence of images. It is based on dynamic time warping, finding a continuous path through a cost matrix that measures the similarity between regions of two frames being aligned. The path represents the angular mismatch corresponding to the NURD. The prime advantage of this novel approach compared to earlier work is the line-to-line continuity, which accurately captures slow intraframe variations in rotational velocity of the probe. The algorithm is optimized using data from a clinically available intravascular optical coherence tomography (OCT) instrument in a realistic vessel phantom. Its efficacy is demonstrated on an in vivo recording, and compared with conventional global rotation block matching. Intravascular OCT is a particularly challenging modality for motion correction because, in clinical situations, the image is generally undersampled, and correlation between the speckle in different lines or frames is absent. The algorithm can be adapted to ingest data frame-by-frame, and can be implemented to work in real time.
['G. van Soest', 'J.G. Bosch', 'A.F.W. van der Steen']
Azimuthal Registration of Image Sequences Affected by Nonuniform Rotation Distortion
358,918
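The alignment core of the method above is dynamic time warping: a continuous, monotone minimum-cost path through a matrix comparing the lines of two frames, whose offsets read out the angular (NURD) mismatch. The sketch below runs textbook DTW on toy 1-D line signatures; the paper's cost measure, continuity constraints, and real OCT data are not reproduced.

```python
# Textbook dynamic time warping with backtracking of the optimal path.
import numpy as np

def dtw_path(a, b):
    """Return accumulated cost matrix and optimal warping path of (i, j) pairs."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    path, i, j = [], n, m          # backtrack from the corner
    while (i, j) != (0, 0):
        path.append((i - 1, j - 1))
        step = np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]])
        i, j = (i - 1, j - 1) if step == 0 else (i - 1, j) if step == 1 else (i, j - 1)
    return D[1:, 1:], path[::-1]

t = np.linspace(0, 2 * np.pi, 100)
frame_a = np.sin(t)                          # reference frame line signature
frame_b = np.sin(t + 0.3 * np.sin(t))        # same scene with rotation jitter
_, path = dtw_path(frame_a, frame_b)
print("mean line offset along path:", np.mean([i - j for i, j in path]))
```

The per-line offsets i - j along the path are the analogue of the angular mismatch the paper corrects; its line-to-line continuity constraint is what keeps that offset varying smoothly within a frame.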
Some Examinations of Intrinsic Methods for Summary Evaluation Based on the Text Summarization Challenge (TSC).
['Hidetsugu Nanba', 'Manabu Okumura']
Some Examinations of Intrinsic Methods for Summary Evaluation Based on the Text Summarization Challenge (TSC).
808,131
This paper describes a formative evaluation of an integrated multilingual, multimedia information system, a series of user studies designed to guide system development. The system includes automatic speech recognition for English, Chinese, and Arabic, automatic translation from Chinese and Arabic into English, and query-based and profile-based search options. The study design emphasizes repeated evaluation with the same (increasingly experienced) participants, exploration of alternative task designs, rich qualitative and quantitative data collection, and rapid analysis to provide the timely feedback needed to support iterative and responsive development. Results indicate that users presented with materials in a language that they do not know can generate remarkably useful work products, but that integration of transcription, translation, search and profile management poses challenges that would be less evident were each technology to be evaluated in isolation.
['Pengyi Zhang', 'Lynne Plettenberg', 'Judith L. Klavans', 'Douglas W. Oard', 'Dagobert Soergel']
Task-based interaction with an integrated multilingual, multimedia information system: a formative evaluation
21,858
Recent advances in guaranteed stable generalized predictive control algorithms are applied to vertical positioning of plasma in the COMPASS-D tokamak, a test device used to study instabilities and control techniques necessary for future fusion power plants. The tokamak's current P+D controller cannot stabilize the plasma vertical position without internal sensors, which will not be available in larger devices. A controller which is derived with the stable predictive control algorithms described in this paper stabilizes the plasma with only external sensors and also solves noise and robustness problems. It compares favorably to the one produced with standard $H_\infty$ design techniques.
['J.R. Gossner', 'P. Vyas', 'B. Kouvaritakis', 'A.W. Morris']
Application of cautious stable predictive control to vertical positioning in COMPASS-D tokamak
241,170
Autonomous systems such as unmanned vehicles are beginning to operate within society. All participants in society are required to follow specific regulations and laws. An autonomous system cannot be an exception. Inevitably an autonomous system will find itself in a situation in which it needs to not only choose to obey a rule or not, but also make a complex ethical decision. However, there exists no obvious way to implement the human understanding of ethical behaviour in computers. Even if we enable autonomous systems to distinguish between more and less ethical alternatives, how can we be sure that they would choose right? We consider autonomous systems with a hybrid architecture in which the highest level of reasoning is executed by a rational (BDI) agent. For such a system, formal verification has been used successfully to prove that specific rules of behaviour are observed when making decisions. We propose a theoretical framework for ethical plan selection that can be formally verified. We implement a rational agent that incorporates a given ethical policy in its plan selection and show that we can formally verify that the agent chooses to execute, to the best of its beliefs, the most ethical available plan.
['Louise A. Dennis', 'Michael Fisher', 'Marija Slavkovik', 'Matt Webster']
Formal verification of ethical choices in autonomous systems
589,835
Learning algorithms aim for accuracy of classification but this depends on a choice of heuristic metric to measure performance and also on the proper consideration and addressing of the important requirements of the classification task. This paper introduces a framework, MVGen, to implement different training heuristics capable of inducing the training algorithm that can provide the desired results while negating detrimental aspects of a training set imbalance. Our experiments indicate that successful classifiers can indeed be built to specialize on the minority class within an imbalanced data set.
['Romesh Ranawana', 'Vasile Palade', 'Daniel Howard']
Genetic Algorithm Approach to Construction of Specialized Multi-Classifier Systems: Application to DNA Analysis
499,962
Media processing system-on-chips (SoCs) mainly consist of audio encoding/decoding (e.g. AC-3, MP3), video encoding/decoding (e.g. H263, MPEG-2) and video pixel processing functions (e.g. de-interlacing, noise reduction). Video pixel processing functions have very high computational demands, as they require a large amount of computations on large amount of data (note that the data are pixels of completely decoded pictures). In this paper, we focus on video pixel processing functions. Usually, these functions are implemented in dedicated hardware. However, flexibility (by means of programmability or reconfigurability) is needed to introduce the latest innovative algorithms, to allow differentiation of products, and to allow bug fixing after fabricating chips. It is impossible to fulfill the computational requirements of these functions by current programmable media processors. To achieve efficient implementations for flexible solutions, we will study, in this paper, the application characteristics of some representative video pixel processing functions. The characteristics considered are granularity of operations, amount and kind of data accesses and degree of parallelism present in these functions. We observe that from computational granularity point of view many functions can be expressed in terms of kernels e.g. Median3 (i.e. median of three values), finite impulse response (FIR) filters, table lookups (LUT) etc. that are coarser grain than ALU, Mult, MAC, etc. Regarding the kind of data accesses, we categorize these functions as regular, regular with some data rearrangement and irregular data access patterns. Furthermore, the degree of parallelism present in these functions is expressed in terms of data level parallelism (DLP) and instruction/operation level parallelism (ILP). We show with an example that these properties can be exploited to make specialized programmable processors.
['Om Prakash Gangwal', 'Johan Janssen', 'Selliah Rathnam', 'Erwin B. Bellers', 'Marc Duranton']
Understanding video pixel processing applications for flexible implementations
283,339
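To make the coarse-grain kernels named in the abstract above concrete, here is a minimal NumPy sketch of two of them, Median3 and a 3-tap FIR over a scan line; it is illustrative only and not drawn from the paper.

```python
import numpy as np

def median3(a, b, c):
    """Median of three values -- a typical coarse-grain de-interlacing kernel.
    Works elementwise on whole scan lines (high data-level parallelism)."""
    return np.maximum(np.minimum(a, b), np.minimum(np.maximum(a, b), c))

def fir3(line, taps=(0.25, 0.5, 0.25)):
    """3-tap FIR filter over one scan line -- a regular data access pattern."""
    return np.convolve(line, taps, mode="same")

line_above = np.array([10, 10, 200, 10], dtype=float)
line_below = np.array([12, 12, 12, 12], dtype=float)
line_cur   = np.array([11, 90, 11, 11], dtype=float)
print(median3(line_above, line_below, line_cur))  # per-pixel median of 3 lines
print(fir3(line_cur))                             # smoothed scan line
```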
We introduce a new robust algorithm, insensitive to impulsive noise (IN), for the distributed estimation problem over adaptive networks. Motivated by the fact that each node can access multiple spatial data, we propose to discard IN-contaminated data. Under the assumption that IN is successfully detected, we propose a cost function that considers only the uncontaminated data. The derived algorithm is an adapt-then-combine (ATC) diffusion LMS algorithm whose weighting coefficients vary depending on IN detection, which leads both to insensitivity to IN and to good estimation performance. A method to detect IN is also presented. Simulation results show that the proposed algorithm achieves good estimation performance in environments subject to IN and outperforms conventional robust algorithms.
['Do-Chang Ahn', 'Jae-Woo Lee', 'Seung-Jun Shin', 'Woo-Jin Song']
A new robust variable weighting coefficients diffusion LMS algorithm
864,968
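A rough sketch of the scheme the abstract above describes: each node adapts with LMS only when its instantaneous error does not look impulsive (its data is otherwise discarded with weight 0), then combines its neighbors' intermediate estimates. The gating rule, step size, and combiner here are illustrative assumptions, not the paper's exact detector or coefficients.

```python
import numpy as np

def atc_diffusion_lms(regressors, measurements, A, mu=0.05, kappa=4.0):
    """Adapt-then-combine diffusion LMS with an impulsive-noise gate (sketch).

    regressors:   (T, N, M) array, regressor u_{k,i} per time i and node k.
    measurements: (T, N) array, scalar measurement d_k(i) per time and node.
    A:            (N, N) combination matrix, columns summing to 1 (assumed).
    """
    T, N, M = regressors.shape
    W = np.zeros((N, M))
    err_pow = np.ones(N)                    # running error power per node
    for i in range(T):
        Psi = W.copy()
        for k in range(N):
            u = regressors[i, k]
            e = measurements[i, k] - u @ W[k]
            # variable weighting: discard samples whose error is outsized,
            # treating them as impulsive-noise contaminated
            if e * e <= kappa ** 2 * err_pow[k]:
                Psi[k] = W[k] + mu * e * u
                err_pow[k] = 0.95 * err_pow[k] + 0.05 * e * e
        W = A.T @ Psi                       # combine neighbor estimates
    return W
```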
This paper considers optical orthogonal codes (OOCs) with variable chip rates. Previous studies on OOCs have addressed uniform and non-uniform code length/weight distributions, which provide different QoS and bit-error properties in optical CDMA networks. However, designing with a common chip rate requires that all users in a network have fast optical transmitters and receivers able to detect short pulses, even when some low-end users are included. To solve this, we propose OOCs with variable chip rates and show some code-search results. A theoretical bound and the bit-error-rate performance are also obtained and discussed.
['Tetsuo Tsujioka', 'Hiromi Yamamoto']
Performance Analysis of Optical Orthogonal Codes with Variable Chip Rates
152,558
An efficient and practical genetic algorithm tool was developed and applied successfully to the Burnable Poisons (BPs) placement optimization problem in the reference Three Mile Island-1 (TMI-1) core. The core BP optimization problem means developing a BP loading map for a given core loading configuration that minimizes the total Gadolinium (Gd) amount in the core without violating any design constraints. The number of UO2/Gd2O3 pins and the Gd2O3 concentrations for each fresh fuel location in the core are the decision variables, and the total amount of Gd in the core is in the objective function. The main objective is to develop the BP loading pattern that minimizes the total Gd in the core together with the residual binding at End-of-Cycle (EOC), while keeping the maximum peak pin power and the Soluble Boron Concentration (SOB) at the Beginning of Cycle (BOC) both below their limit values during core depletion. The innovation of this study was to search all feasible U/Gd fuel assembly designs, with a variable number of U/Gd pins and variable Gd2O3 concentration, over the whole decision space. The use of different fitness functions guides the solution towards the desired (good-solution) region of the solution space, which accelerates the GA solution. In summary, this study develops a practical and efficient GA tool and applies it to designing BP patterns for a given core loading.
['Serkan Yilmaz', 'Kostadin Ivanov', 'Samuel H. Levine']
Application of genetic algorithm to optimize burnable poison placement in pressurized water reactors
101,579
Recently, extensive analytic research into packet scheduling in crossbar switches has yielded interesting throughput-maximizing algorithms. Surprisingly, however, the quality of service (QoS) performance associated with these algorithms has only been approximated through simulation. We present here certain randomized algorithms with analytic QoS. These are simple to implement and possess closed-form expressions for various performance measures. By fine-tuning particular parameters of these algorithms, one can vary the QoS associated with the individual ports as desired. This allows cost and utility optimization, a feature which was not feasible under previously studied packet scheduling algorithms.
['Kevin Ross', 'Nicholas Bambos']
Optimizing quality of service in packet switch scheduling
188,234
A whole-brain, multiband spin-echo (SE) echo planar imaging (EPI) sequence employing high spatial (1.5 mm isotropic) and temporal (TR of 2 s) resolution was implemented at 7 T. Its overall performance (tSNR, sensitivity and CNR) was assessed and compared to a geometrically matched gradient-echo (GE) EPI multiband sequence (TR of 1.4 s) using a color-word Stroop task. PINS RF pulses were used for refocusing to reduce RF amplitude requirements and SAR; summed and phase-optimized standard pulses were used for excitation, enabling a transverse or oblique slice orientation. Distortions were minimized with the use of parallel imaging in the phase-encoding direction and a post-acquisition distortion correction. In general, GE-EPI shows higher efficiency and higher CNR in most brain areas, except in some parts of the visual cortex and superior frontal pole, at both the group and individual-subject levels. Gradient-echo EPI was able to detect robust activation near air/tissue interfaces such as the orbito-frontal and subcortical regions, due to reduced intra-voxel dephasing from the thin slices and high in-plane resolution used.
['Rasim Boyacioğlu', 'J. Schulz', 'N. Müller', 'Peter J. Koopmans', 'Markus Barth', 'David G. Norris']
Whole brain, high resolution multiband spin-echo EPI fMRI at 7 T: a comparison with gradient-echo EPI using a color-word Stroop task.
530,484
We describe generalized running-key ciphers and apply them to the analysis of two of Shannon's methods. In particular, we derive estimates of the cipher equivocation and of the probability of correct deciphering without the key.
['Boris Ryabko']
Information-theoretical analysis of two Shannon's ciphers
726,524
A finite-element scheme based on a coupled arbitrary Lagrangian-Eulerian and Lagrangian approach is developed for the computation of interface flows with soluble surfactants. The numerical scheme is designed to solve the time-dependent Navier-Stokes equations and an evolution equation for the surfactant concentration in the bulk phase, and simultaneously, an evolution equation for the surfactant concentration on the interface. Second-order isoparametric finite elements on moving meshes and second-order isoparametric surface finite elements are used to solve these equations. The interface-resolved moving meshes allow the accurate incorporation of surface forces, Marangoni forces and jumps in the material parameters. The lower-dimensional finite-element meshes for solving the surface evolution equation are part of the interface-resolved moving meshes. The numerical scheme is validated for problems with known analytical solutions. A number of computations to study the influence of the surfactants in 3D-axisymmetric rising bubbles have been performed. The proposed scheme shows excellent conservation of fluid mass and of the total mass of the surfactant.
['Sashikumaar Ganesan', 'Lutz Tobiska']
Arbitrary Lagrangian-Eulerian finite-element method for computation of two-phase flows with soluble surfactants
360,549
A typical data-driven visualization of electroencephalography (EEG) coherence is a graph layout, with vertices representing electrodes and edges representing significant coherences between electrode signals. A drawback of this layout is its visual clutter for multichannel EEG. To reduce clutter, we define a functional unit (FU) as a data-driven region of interest (ROI). An FU is a spatially connected set of electrodes recording pairwise significantly coherent signals, represented in the coherence graph by a spatially connected clique. Earlier, we presented two methods to detect FUs: a maximal clique-based (MCB) method (time complexity O(3^(n/3)), with n being the number of vertices) and a more efficient watershed-based (WB) method (time complexity O(n^2 log n)). To reduce the potential oversegmentation of the WB method, we introduce an improved WB (IWB) method (time complexity O(n^2 log n)). The IWB method merges basins representing FUs during the segmentation if they are spatially connected and if their union is a clique. The WB and IWB methods are both up to a factor of 100,000 faster than the MCB method for a typical multichannel setting with 128 EEG channels, thus making interactive visualization of multichannel EEG coherence possible. Results show that considering the MCB method as the gold standard, the difference between IWB and MCB FU maps is smaller than between WB and MCB FU maps. We also introduce two novel group maps for data-driven group analysis as extensions of the IWB method. First, the group mean coherence map preserves dominant features from a collection of individual FU maps. Second, the group FU size map visualizes the average FU size per electrode across a collection of individual FU maps. Finally, we employ an extensive case study to evaluate the IWB FU map and the two new group maps for data-driven group analysis. Results, in accordance with conventional findings, indicate differences in EEG coherence between younger and older adults. However, they also suggest that an initial selection of hypothesis-driven ROIs could be extended with additional data-driven ROIs.
['M. ten Caat', 'Natasha Maurits', 'J.B.T.M. Roerdink']
Data-Driven Visualization and Group Analysis of Multichannel EEG Coherence with Functional Units
462,067
A novel automatic Vickers hardness measuring method is proposed, based on an algorithm called the Hough fuzzy vertices detection algorithm (HFVDA). In order to overcome the unavoidable effects of surface contamination or specimen texture on vertex detection, HFVDA transforms all the candidate pixels on the indentation edge lines into Hough space. Within Hough space, a weighted fuzzy c-means algorithm, together with local maximum detection, is used to find the transformed indentation edge lines. It will be shown that HFVDA is able to find the indentation vertices and calculate the hardness number with high accuracy for either specular-polished or rough-polished specimens.
['Leehter Yao', 'Chih-Heng Fang']
An automatic hardness measuring method using Hough transform and fuzzy c-means algorithm
445,350
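The clustering step in HFVDA can be pictured as a weighted fuzzy c-means over Hough-space votes: cluster the (rho, theta) points contributed by candidate edge pixels into four groups, one per indentation edge line. The Python sketch below shows plain weighted FCM only; the paper's specific vote weighting and local-maximum detection are not reproduced, and all names are illustrative.

```python
import numpy as np

def weighted_fcm(X, w, k=4, m=2.0, iters=100, seed=0):
    """Weighted fuzzy c-means (sketch): X is an (n, 2) array of Hough-space
    points (rho, theta), w their vote weights, k the number of lines sought."""
    rng = np.random.default_rng(seed)
    C = X[rng.choice(len(X), k, replace=False)]        # initial centers
    for _ in range(iters):
        d = np.linalg.norm(X[:, None] - C[None], axis=2) + 1e-12
        U = d ** (-2.0 / (m - 1.0))
        U /= U.sum(axis=1, keepdims=True)              # fuzzy memberships
        Wm = w[:, None] * U ** m                       # weighted, fuzzified
        C = (Wm.T @ X) / Wm.sum(axis=0)[:, None]       # recompute centers
    return C, U
```

The four returned centers then correspond to line parameters of the four indentation edges; intersecting those lines yields the indentation vertices.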
In the Web 2.0 era, people not only read web content but also create, upload, view, share and evaluate all content on the web. This leads us to introduce a new type of social network based on user activity and content metadata. We observe that the quality of related content can be determined using this new social network. Based on this observation, we introduce a user evaluation algorithm for user-generated video sharing websites. First, we build a social network of users from video content and related social activities such as subscribing, uploading, or marking favorites. We then use a modified PageRank algorithm to compute user reputation from the social network. We re-calculate the content scores using user reputations and compare the results with a standard BM25 result. We apply the proposed approach to YouTube and demonstrate that user reputation is closely related to the number of subscriptions and the number of uploaded contents. Furthermore, experiments show that the new ranking based on user reputation is better than the standard BM25 approach.
['Yo-Sub Han', 'Laehyun Kim', 'Jeong-Won Cha']
Computing User Reputation in a Social Network of Web 2.0
601,405
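The reputation computation above is PageRank-style propagation over a user graph built from social activities. Below is a minimal sketch under simple assumptions (uniform teleportation, endorsement edges from subscribe/favorite actions); the paper's modified PageRank may differ in its exact weighting.

```python
import numpy as np

def user_reputation(adj, d=0.85, iters=100):
    """PageRank-style user reputation on an endorsement graph (sketch).
    adj[i, j] = 1 if user i endorses user j (e.g., subscribes to j)."""
    n = adj.shape[0]
    out = adj.sum(axis=1, keepdims=True)
    # Row-stochastic transitions; users endorsing nobody spread uniformly.
    P = np.where(out > 0, adj / np.maximum(out, 1), 1.0 / n)
    r = np.full(n, 1.0 / n)
    for _ in range(iters):
        r = (1 - d) / n + d * (P.T @ r)    # reputation flows along endorsements
    return r

adj = np.array([[0, 1, 1],
                [0, 0, 1],
                [0, 0, 0]])
print(user_reputation(adj))  # user 2, endorsed by both others, ranks highest
```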
The main characteristics of ad hoc networks are the lack of predefined infrastructure and the dynamic topology. These characteristics present new security vulnerabilities for this emerging networking paradigm. Usually, security in ad hoc networks is handled through authentication and encryption. This can be considered a first line of defense; however, it remains inefficient against other kinds of attacks such as malicious packet dropping. The purpose of this work is to provide a mechanism for detecting malicious incorrect-packet-forwarding attacks. To this end, a trust model extending routing protocols and based on the reputation concept is developed. Our model provides two main functionalities: monitoring the behavior of neighboring nodes in the network and computing their reputations based on the information provided by the monitoring. This paper also discusses how reputation information is gathered, stored and exchanged between the nodes, and computed according to the different scenarios. Our mechanism is validated with simulations showing its feasibility, performance and benefits.
['Yacine Rebahi', 'Vicente Mujica-V', 'Dorgham Sisalem']
A reputation-based trust mechanism for ad hoc networks
45,296
Math word problem solving in an online tutoring system was compared for high school students who were native speakers of English (English primary) and their peers who were learning English (English learners). Word problems were written in English, the language of instruction. Data records for word problems that had been solved by students in both language groups were located and compared. Results indicated that the English learners were less likely to answer correctly, had more incorrect answer attempts, and took longer per problem on average than English primary students. When word problems were matched for math operation, students in both language groups performed worse on problems with more challenging text. There were no differences for the two language groups with regard to self-reported math motivation, plans to attend college, or off-task 'gaming' behaviour, suggesting that the lower performance of the English learners could not be attributed to lower effort.
['Carole R. Beal', 'Federico Cirett Galán']
Math word problem solving by English learners and English primary students in an intelligent tutoring system
675,909
Intrusion prevention systems (IPSs) have long been proposed as a defense against attacks that propagate too fast for any manual response to be useful. In an important class of IPSs, the host-based IPSs, honeypots are used to collect information about attacks. The collected information is then analyzed to generate countermeasures against the observed attack. Unfortunately, these IPSs can be rendered useless by techniques that allow the honeypots in a network to be identified ([1, 9]). In particular, attacks can be designed to avoid targeting the identified honeypots. As a result, the IPSs will have no information about the attacks, and thus no countermeasure will ever be generated. The use of honeypots also creates other practical issues which limit the usefulness/feasibility of many host-based IPSs. We propose to solve these problems by duplicating the detection and analysis capability on every protected system, i.e., turning every host into a honeypot. In this paper, we first lay out the necessary features of any scheme for such large-scale collaboration in intrusion prevention, then we present a framework called collaborative intrusion prevention (CIP) for realizing our idea of turning every host into a honeypot.
['Simon P. Chung', 'Aloysius K. Mok']
Collaborative Intrusion Prevention
201,672
In this paper, we present a novel acoustic sensing technique that recognizes two convenient input actions: hand gestures and on-body touch. We achieve this by observing the frequency spectrum of the wave propagated in the body, around the periphery of the wrist. Our approach can recognize hand gestures and on-body touch concurrently in real time and is expected to yield rich input variations by combining them. We conducted a user study that showed classification accuracies of 97%, 96%, and 97% for hand gestures, touches on the forearm, and touches on the back of the hand, respectively.
['Tomohiro Yokota', 'Tomoko Hashida']
Hand Gesture and On-body Touch Recognition by Active Acoustic Sensing throughout the Human Body
911,877
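A recognition pipeline for such active acoustic sensing could plausibly work on log-magnitude spectra of the received wave, fed to a standard classifier. The sketch below is a generic illustration of that pipeline, not the authors' implementation; the frame length, the stand-in data, and the SVM choice are assumptions.

```python
import numpy as np
from sklearn.svm import SVC

def spectrum_feature(frame, n_fft=1024):
    """Log-magnitude spectrum of one received acoustic frame (sketch)."""
    return np.log1p(np.abs(np.fft.rfft(frame, n=n_fft)))

# Hypothetical training data: one frame per labeled gesture/touch example.
rng = np.random.default_rng(0)
frames = rng.standard_normal((40, 1024))          # stand-in sensor frames
labels = np.repeat(["fist", "open", "touch_arm", "touch_hand"], 10)

X = np.array([spectrum_feature(f) for f in frames])
clf = SVC(kernel="rbf").fit(X, labels)            # spectrum -> input action
print(clf.predict(X[:2]))
```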
We have been developing the Robotic Communication Terminals (RCTs), which are integrated into a mobility support system to assist elderly or disabled people who suffer from impaired mobility. The RCT system consists of three types of terminals and one server: an environment-embedded terminal, a user-carried mobile terminal, a user-carrying mobile terminal, and a barrier-free map server. The RCT is an integrated system that can be used to cope with various problems of mobility, and provide suitable support to a wide variety of users. This paper provides an in-depth description of the user-carrying mobile terminal. The system itself is a kind of intelligent wheeled vehicle. It can recognize the surrounding 3D environment through infrared sensors, sonar sensors, and a stereo vision system with three cameras, and avoid hazards semi-autonomously. It also can provide adequate navigation by communicating with the geographic information system (GIS) server and detect vehicles appearing from the blind side by communicating with environment-embedded terminals in the real-world.
['Kentaro Kayama', 'Ikuko Eguchi Yairi', 'Seiji Igi']
Semi-autonomous outdoor mobility support system for elderly and disabled people
379,544
Ciprofloxacin, a fluoroquinolone antibiotic, is widely used for the treatment of bacterial infection in humans due to its broad antibacterial spectrum. An excessive use or overdose of ciprofloxacin on the other hand can cause several adverse effects not only to humans but also to microorganisms. Unabsorbed ciprofloxacin in the body is mostly excreted through urine and finally goes to the environment, providing a drug resistance pressure on bacteria. Hence a simple and efficient detection method of ciprofloxacin is necessary, which, for example, can be used to analyze ciprofloxacin content in urine. Although ciprofloxacin itself shows inherent fluorescence, direct fluorescent detection of ciprofloxacin in raw urine sample is difficult due to autofluorescence of urine by other components. Herein we report that a Tb(III) complex of DO3A (1,4,7,10-tetraazacyclododecane-1,4,7-triacetic acid) can be efficiently sensitized by ciprofloxacin to emit luminescence separately from the urine autofluorescence wavelength region. Tb-DO3A shows excellent sensitivity with a detection limit of three parts per billion in aqueous buffer solution. Further, Tb-DO3A is used to detect ciprofloxacin with high sensitivity and selectivity in a raw urine sample without any purification or separation procedures in the concentrations ranging from 1 µg·mL⁻¹ to 50 µg·mL⁻¹. The direct measurement of ciprofloxacin excreted in urine may be used to control overdose of the drug.
['Subhankar Singha', 'Kyo Han Ahn']
Detection of Ciprofloxacin in Urine through Sensitized Lanthanide Luminescence
955,492
Developing countries suffer from a lack of fixed and reliable infrastructure, resulting in poor Internet connectivity, particularly in rural areas. In this paper we address the challenge of providing free education to millions of illiterate children living in remote hamlets of poverty-stricken villages in the north of Iraq. Education is a fundamental human right, so various organizations cooperate with governments to deliver education to everyone in the world and work toward literacy for all. We hope that the proposed work will help children in different places of the world obtain an education, especially where communication services are unavailable, whether due to environmental factors such as poor connectivity caused by fading or attenuation, or due to difficult topographical terrain such as hilly areas. To overcome these obstacles, a DTN-based education system is proposed that consumes fewer resources but delivers high throughput. This mobile, efficient education system takes a new approach based on delay-tolerant networking, which is capable of operating in different environments. Our proposed Mobile Education System adopts a new protocol, the Licklider Transmission Protocol convergence layer (LTPCL), which operates with the DTN approach to solve education problems previously tackled with MANETs. Where the Internet cannot operate in heterogeneous environments, Delay Tolerant Networking (DTN) is a promising network architecture: with the Internet, if a disconnection occurs between the source and the destination, the in-between nodes cannot store packets and forward them after reconnection, whereas DTN nodes have memory buffers and thus provide a store-carry-forward network.
['Rahul Johari', 'Dhari Ali Mahmood']
MeNDARIN: Mobile Education Network Using DTN Approach in North IRAQ
708,511
Agent-based models are often described as bottom-up because macro-level phenomena emerge from the micro-level interactions of agents. These macro-level phenomena include fixed points, cycles, dynamic patterns, and long transients. In this paper, I explore the link between micro-level characteristics (learning rules, diversity, network structure, and externalities) and the macro-level patterns they produce. I focus on why we need agent-level modeling, on how these models produce emergent phenomena, and on how agent-based models help us understand outcomes of social systems in a way that differs from the analytic, equilibrium approach.
['Scott E. Page']
Review: aggregation in agent-based models of economies
592,823
Social ties have been hypothesized to help people to gain support in achieving collaborative goals. We test this hypothesis in a study of organizational crowdfunding (or "crowdfunding behind the firewall"). 201 projects were proposed for peer-crowdfunding in a large international corporation. The crowdfunding website allowed people to join a project as Co-Proposers. We analyzed the funding success of 114 projects as a function of the number of (Co-)Proposers. Projects that had more co-proposers were more likely to reach their funding targets. Using data from an organizational social-networking service, we show how employees' social ties were associated with these success patterns. Our results have implications for theories of collaboration in social networks, and the design of crowdfunding websites.
['Michael J. Muller', 'Mary Keough', 'John Wafer', 'Werner Geyer', 'Alberto Alvarez Saez', 'David Leip', 'Cara Viktorov']
Social Ties in Organizational Crowdfunding: Benefits of Team-Authored Proposals
657,864
Land use and landscape pattern changes were analyzed in Nanjing during 1988-2007, based on RS, GIS and Fragstats software. The city was divided into six regions to explore the detailed variations of land use and landscape pattern within the regions. Landscape patterns were investigated at both the landscape level and the class level, by way of various landscape metrics. The results revealed that urban area increased dramatically while farmland declined, though to differing extents. Generally, landscape patterns tended to become less fragmented yet more complex in most regions at the landscape level. At the patch class level, landscape changes across the six regions were not entirely consistent.
['Zhihui Wang', 'Qiu Yin']
Land use and landscape pattern changes in Nanjing during 1988–2007
319,119
Template attacks are widely accepted to be the most powerful side-channel attacks from an information-theoretic point of view. For template attacks to be practical, one needs to choose some special samples as the interesting points in actual power traces. Up to now, many different approaches have been introduced for choosing interesting points for template attacks. However, it was unknown whether the previous approaches to choosing interesting points lead to the best classification performance of template attacks. In this work, we give a negative answer to this important question by introducing a practical new approach whose basic principle is completely different from all the previous approaches. Our new approach chooses as interesting points those whose distribution of samples approximates a normal distribution. Evaluation results show that template attacks based on the interesting points chosen by our new approach achieve clearly better classification performance than template attacks based on the interesting points chosen by the previous approaches. Therefore, our new approach to choosing interesting points should be used in practice to better understand the practical threats of template attacks.
['Guangjun Fan', 'Yongbin Zhou', 'Hailong Zhang', 'Dengguo Feng']
How to Choose Interesting Points for Template Attacks More Effectively
784,431
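The selection criterion described above (keep the time samples whose value distribution looks most normal) can be sketched with a per-sample normality test; the use of the D'Agostino-Pearson test and the top-k selection are assumptions of this illustration, not necessarily the paper's exact statistic.

```python
import numpy as np
from scipy import stats

def choose_interesting_points(traces, n_points=20):
    """Pick the time samples whose distribution is closest to normal (sketch).
    traces: (n_traces, n_samples) array of measured power traces."""
    # D'Agostino-Pearson normality test per sample point; a larger p-value
    # means the samples at that point look more like a normal distribution.
    _, pvals = stats.normaltest(traces, axis=0)
    return np.argsort(pvals)[::-1][:n_points]   # indices, most normal first
```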
Several studies in network traffic characterization have concluded that network traffic is self-similar and therefore not readily amenable to statistical multiplexing in a distributed computing system. This paper examines the effects of the TCP protocol stack on network traffic via an experimental study of different implementations of TCP. We show that even when aggregate application traffic smooths out as more applications' traffic is multiplexed, TCP introduces burstiness into the aggregate traffic load, reducing network performance when statistical multiplexing is used within the network gateways.
['Wu-chun Feng', 'Peerapol Tinnakornsrisuphap', 'I. Philip']
On the burstiness of the TCP congestion-control mechanism in a distributed computing system
84,616
Driving in an urban environment is hectic and often adventurous. Getting accurate routing instructions, finding parking spots, and receiving customized information that helps individual drivers reach their destinations will significantly reduce the stress of driving, save fuel, and reduce unnecessary delays and pollution levels. In this paper we present a system that combines smart navigation with intelligent parking assist and driver diagnostics to considerably improve driving comfort, safety and mobility in an urban environment. The smart navigation employs an online traffic simulator which provides traffic predictions and improves the accuracy of existing navigation systems that rely on limited traffic data. The intelligent parking assist system predicts the availability of parking at the start of the journey, and these predictions are updated as the destination is approached. The system uses machine learning to understand the habits and preferences of the individual driver so that the preferred parking availability information is presented to the driver. The driver diagnostics part learns the driving characteristics of the driver (i.e. whether aggressive, semi-aggressive or passive, reaction times, following distances, etc.) and provides this information to the smart navigation and parking assist system for better estimation of travel times. In addition, it can be used to support collision warnings and other driver-assist devices. The proposed system has been successfully demonstrated using an AUDI vehicle in the areas of Los Angeles and San Francisco.
['Petros A. Ioannou', 'Yihang Zhang']
Intelligent driver assist system for urban driving
896,007
A data model is presented for the systematic representation of image content. The basic building block of the data model consists of facts, which may be modified and linked together in different ways to express the subject matter of an image. From the data model, a canonical image description may be automatically built, which can capture a rich content semantics. The canonical description may be used to construct a content-based index using a relational database structure. The database structure uses four tables which can be indexed and searched rapidly to provide fast image identification and retrieval. It is expected that the proposed approach may work in conjunction with picture keys, which are a pictorial summary of the underlying images, to provide a flexible scheme for building a powerful query model for the efficient retrieval of images by content for a variety of image database applications.
['Clement H. C. Leung', 'Zhi-Jie Zheng']
Image data modeling for efficient content indexing
93,435
In this paper, we propose a time-switched space-time coded orthogonal frequency division multiplexing (TSST-OFDM) scheme for time-varying channels. With the proposed TSST-OFDM scheme, the base station needs only two power amplifiers for four transmit antennas. For a given space-time code, we show that the signal-to-inter-antenna-interference ratio is a function of the Doppler frequency in time-varying channels. The proposed scheme not only obtains time-switched diversity and lower decoding complexity, but is also less sensitive to time-varying channels; hence it exhibits better performance than the quasi-orthogonal space-time coded scheme.
['Jiyu Jin', 'Guiyue Jin', 'Zhisen Wang']
Time-switched space-time coded OFDM over time-varying channels
926,914
We propose a systematic method for creating constellations of unitary space-time signals for multiple-antenna communication links. Unitary space-time signals, which are orthonormal in time across the antennas, have been shown to be well-tailored to a Rayleigh fading channel where neither the transmitter nor the receiver knows the fading coefficients. The signals can achieve low probability of error by exploiting multiple-antenna diversity. Because the fading coefficients are not known, the criterion for creating and evaluating the constellation is nonstandard and differs markedly from the familiar maximum-Euclidean-distance norm. Our construction begins with the first signal in the constellation-an oblong complex-valued matrix whose columns are orthonormal-and systematically produces the remaining signals by successively rotating this signal in a high-dimensional complex space. This construction easily produces large constellations of high-dimensional signals. We demonstrate its efficacy through examples involving one, two, and three transmitter antennas.
['Bertrand M. Hochwald', 'Thomas L. Marzetta', 'Thomas Richardson', 'Wim Sweldens', 'Rüdiger L. Urbanke']
Systematic design of unitary space-time constellations
529,079
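The systematic construction sketched in the abstract above (start from one signal with orthonormal columns and generate the rest by successively rotating it in a high-dimensional complex space) can be illustrated in a few lines. A common concrete form of such a rotation is a diagonal phase matrix applied repeatedly; the random initial signal and the choice of rotation frequencies u below are illustrative assumptions, since in a real design u would be optimized for low pairwise signal correlation.

```python
import numpy as np

def unitary_constellation(T, M, L, u, seed=0):
    """Generate L unitary space-time signals Phi_l = Theta^l @ Phi_0 (sketch).
    T: block length, M: transmit antennas, u: length-T integer frequencies."""
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((T, M)) + 1j * rng.standard_normal((T, M))
    Phi0, _ = np.linalg.qr(A)                 # T x M, orthonormal columns
    phases = 2j * np.pi * np.asarray(u) / L   # per-time-slot rotation angles
    # Theta^l is diagonal, so applying it is an elementwise phase rotation.
    return [np.exp(l * phases)[:, None] * Phi0 for l in range(L)]

signals = unitary_constellation(T=8, M=2, L=16, u=[1, 3, 5, 7, 9, 11, 13, 15])
print(np.allclose(signals[5].conj().T @ signals[5], np.eye(2)))  # True
```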
The paper deals with "resource leveling optimization problems", a class of problems that are often met in modern project management. Problems of this kind refer to the optimal handling of available resources in a candidate project and have emerged as the result of the ever-increasing needs of project managers in facing project complexity, controlling related budgeting and finances, and managing the construction production line. For effective resource leveling optimization, evolutionary intelligent methodologies are proposed. Traditional approaches, such as exhaustive or greedy search methodologies, often fail to provide near-optimum solutions in a short amount of time, whereas the proposed intelligent approaches manage to quickly reach high-quality near-optimal solutions. In this paper, a new genetic algorithm is proposed for the investigation of the start times of the non-critical activities of a project, in order to optimally allocate its resources. Experiments with small and medium size benchmark problems taken from publicly available project data resources produce highly accurate resource profiles. The proposed methodology proves capable of coping with larger project management problems, where conventional techniques like complete enumeration are impossible, obtaining near-optimal solutions.
['Christos Kyriklidis', 'Georgios Dounias']
Application of Evolutionary Algorithms in Project Management
633,534
Two important characteristics encountered in many real-world scheduling problems are heterogeneous processors and a certain degree of uncertainty about the processing times of jobs. In this paper we address both, and study for the first time a scheduling problem that combines the classical unrelated machine scheduling model with stochastic processing times of jobs. By means of a novel time-indexed linear programming relaxation, we show how to compute in polynomial time a scheduling policy with a provable performance guarantee for the stochastic version of the unrelated parallel machine scheduling problem with the weighted sum of completion times objective. Our performance guarantee depends on the squared coefficient of variation of the processing times, and we show that this dependence is tight. The currently best-known bounds for deterministic scheduling problems are contained as special cases.
['Martin Skutella', 'Maxim Sviridenko', 'Marc Uetz']
Unrelated Machine Scheduling with Stochastic Processing Times
647,828
Investigating the factors that drive requirements change is an important prerequisite for understanding the nature of requirements volatility. This increased understanding will improve the process of requirements change management. We mainly focus on change analysis to identify and characterize the causes of requirements volatility. We apply a causal analysis method to change request data to develop a taxonomy of change. This taxonomy allows us to identify and trace the problems, reasons and sources of changes. Adopting an industrial case study approach, we find that the main causes of requirements volatility were changes in customer needs (or market demands), developers' increased understanding of the products, and changes in organizational policy. We also examined the extent of requirements volatility during the development process and discovered that the rate of volatility was high at the time of requirements specification completion and while functional specification reviews were conducted.
['Nurie Nurmuliani', 'Didar Zowghi', 'S. Powell']
Analysis of requirements volatility during software development life cycle
479,992
The Web is becoming a universal information dissemination medium, due to a number of factors including its support for content dynamicity. A growing number of Web information providers post near real-time updates in domains such as auctions, stock markets, bulletin boards, news, weather, roadway conditions, sports scores, etc. External parties often wish to capture this information for a wide variety of purposes ranging from online data mining to automated synthesis of information from multiple sources. There has been a great deal of work on the design of systems that can process streams of data from Web sources, but little attention has been paid to how to produce these data streams, given that Web pages generally require "pull-based" access. In this paper we introduce a new general-purpose algorithm for monitoring Web information sources, effectively converting pull-based sources into push-based ones. Our algorithm can be used in conjunction with continuous query systems that assume information is fed into the query engine in a push-based fashion. Ideally, a Web monitoring algorithm for this purpose should achieve two objectives: (1) timeliness and (2) completeness of information captured. However, we demonstrate both analytically and empirically using real-world data that these objectives are fundamentally at odds. When resources available for Web monitoring are limited, and the number of sources to monitor is large, it may be necessary to sacrifice some timeliness to achieve better completeness, or vice versa. To take this fact into account, our algorithm is highly parameterized and targets an application-specified balance between timeliness and completeness. In this paper we formalize the problem of optimizing for a flexible combination of timeliness and completeness, and prove that our parameterized algorithm is a 2-approximation in all cases, and in certain cases is optimal.
['Sandeep Pandey', 'Kedar Dhamdhere', 'Christopher Olston']
WIC: a general-purpose algorithm for monitoring web information sources
76,712
Enriching the Contents of Enterprises’ Wiki Systems with Web Information
['Li Zhao', 'Yexin Wang', 'Congrui Huang', 'Yan Zhang']
Enriching the Contents of Enterprises’ Wiki Systems with Web Information
270,887
We present a general approximation method for the mathematical analysis of spatially localized steady-state solutions in nonlinear neural field models. These models comprise several layers of excitatory and inhibitory cells. Coupling kernels between and inside layers are assumed to be gaussian shaped. In response to spatially localized (i.e., tuned) inputs, such networks typically reveal stationary localized activity profiles in the different layers. Qualitative properties of these solutions, like response amplitudes and tuning widths, are approximated for a whole class of nonlinear rate functions that obey a power law above some threshold and that are zero below. A special case of these functions is the semilinear function, which is commonly used in neural field models. The method is then applied to models for orientation tuning in cortical simple cells: first, to the one-layer model with "difference of gaussians" connectivity kernel developed by Carandini and Ringach (1997) as an abstraction of the biologically detailed simulations of Somers, Nelson, and Sur (1995); second, to a two-field model comprising excitatory and inhibitory cells in two separate layers. Under certain conditions, both models have the same steady states. Comparing simulations of the field models and results derived from the approximation method, we find that the approximation well predicts the tuning behavior of the full model. Moreover, explicit formulas for approximate amplitudes and tuning widths in response to changing input strength are given and checked numerically. Comparing the network behavior for different nonlinearities, we find that the only rate function (from the class of functions under study) that leads to constant tuning widths and a linear increase of firing rates in response to increasing input is the semilinear function. For other nonlinearities, the qualitative network response depends on whether the model neurons operate in a convex (e.g., x^2) or concave (e.g., sqrt(x)) regime of their rate function. In the first case, tuning gradually changes from input driven at low input strength (broad tuning strongly depending on the input and roughly linear amplitudes in response to input strength) to recurrently driven at moderate input strength (sharp tuning, supralinear increase of amplitudes in response to input strength). For concave rate functions, the network reveals stable hysteresis between a state at low firing rates and a tuned state at high rates. This means that the network can "memorize" tuning properties of a previously shown stimulus. Sigmoid rate functions can combine both effects. In contrast to the Carandini-Ringach model, the two-field model further reveals oscillations with typical frequencies in the beta and gamma range, when the excitatory and inhibitory connections are relatively strong. This suggests a rhythmic modulation of tuning properties during cortical oscillations.
['Thomas Wennekers']
Orientation Tuning Properties of Simple Cells in Area V1 Derived from an Approximate Analysis of Nonlinear Neural Field Models
540,682
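For reference, the rate-function class analyzed above can be written compactly; a minimal sketch (notation assumed: threshold theta, exponent p):

```python
import numpy as np

def rate(x, theta=0.0, p=1.0):
    """Power-law rate function: zero below the threshold theta and
    (x - theta)**p above it; p = 1 gives the semilinear function."""
    return np.maximum(x - theta, 0.0) ** p

x = np.linspace(-1, 2, 7)
print(rate(x, theta=0.0, p=2.0))   # convex regime (x^2-like)
print(rate(x, theta=0.0, p=0.5))   # concave regime (sqrt-like)
```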
Sometimes it is difficult to obtain maximum likelihood estimates (MLE) directly from available data in the presence of uncertainty. In this paper we develop an approach to dealing with uncertainty by introducing equivalent quantities of unknown variables from which the quantities of interest can be estimated directly. Following expectation-maximization (EM) theory, we treat the unknown variables as random variables and define their equivalent quantities as their mathematical expectations. To illustrate the concept of equivalence and its applications, we solve a specific MLE problem: estimating component reliability from uncertain system life data. Two types of uncertainty for components are taken into account: status masking and left-censored lifetimes. We define and obtain equivalent failures and equivalent test times for components under different uncertain conditions. An EM algorithm based on the derived equivalent quantities is formulated to estimate component reliability from uncertain system life data. Compared to other methods, our approach based on equivalent quantities can handle more complex uncertainty. In addition, the convergence of the iterative estimation processes has been mathematically proved.
['Zhibin Tan']
An equivalence based approach to dealing with uncertainty in maximum likelihood estimation
313,275
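To make the "equivalent quantity" idea concrete, here is a minimal EM sketch for a deliberately simple special case (exponential component lifetimes with left-censored observations only); the distribution choice is an assumption of this illustration, and the paper's system-level masking model is not reproduced. The E-step replaces each left-censored lifetime by its equivalent test time E[T | T < c], and the M-step is the ordinary exponential MLE.

```python
import numpy as np

def em_exponential(exact, censor_limits, iters=100):
    """EM with equivalent test times for exponential lifetimes (sketch).
    exact: fully observed failure times; censor_limits: thresholds c for
    left-censored units (failure known to have occurred before c)."""
    exact = np.asarray(exact, dtype=float)
    c = np.asarray(censor_limits, dtype=float)
    lam = 1.0 / np.mean(np.concatenate([exact, c]))        # crude start
    for _ in range(iters):
        # E-step: equivalent test time E[T | T < c] for an Exp(lam) lifetime
        eq_time = 1.0 / lam - c * np.exp(-lam * c) / (1.0 - np.exp(-lam * c))
        # M-step: exponential MLE with censored lifetimes replaced by their
        # expectations (number of failures / total equivalent test time)
        lam = (len(exact) + len(c)) / (exact.sum() + eq_time.sum())
    return lam

print(em_exponential(exact=[1.2, 0.7, 2.5], censor_limits=[0.5, 1.0]))
```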
Amdahl's law is a fundamental tool for understanding the evolution of performance as a function of parallelism. Following a recent trend in the timing and power analysis of general-purpose many-core chips using this law, we carry out an analysis aimed at many-core SoCs integrating processors that share the same core instruction set but each potentially have additional extensions. For SoCs targeting well-defined classes of applications, higher performance can be achieved by adding application-specific extensions, either through the addition of instructions to the core instruction set or through coprocessors, leading to architectures with functionally asymmetric processors. This kind of architecture is becoming technically viable and is advocated by several groups, but the theoretical study of its properties is yet to be performed: this is precisely our goal in this paper. We use Amdahl's law to prove the performance advantage of using extensions for many-core SoCs and show that a many-core architecture based on functionally asymmetric processors can achieve the same performance as a symmetric one but at a lower cost.
['Hao Shen', 'Frédéric Pétrot']
Using Amdahl's law for performance analysis of many-core SoC architectures based on functionally asymmetric processors
253,220
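As a back-of-the-envelope companion to the analysis above, an Amdahl-style speedup model with an extension acceleration factor might look like the sketch below; the parameter names and the simple multiplicative model are assumptions of this illustration, not the paper's exact cost model.

```python
def speedup(f, n, s_ext=1.0):
    """Amdahl-style speedup: a fraction f of the work runs in parallel on
    n cores, and application-specific extensions speed that part up by
    a factor s_ext; the serial remainder (1 - f) is unaccelerated."""
    return 1.0 / ((1.0 - f) + f / (n * s_ext))

print(speedup(0.95, 64))         # plain symmetric many-core
print(speedup(0.95, 64, 2.0))    # cores with 2x ISA-extension acceleration
```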
The recently increased amount of information stored in XML format has led to the development and wide deployment of so-called native XML database management systems (XML DBMS). In parallel, (object-)relational DBMS remain well-known, proven, and widely used for persistent storage of data. There are many research and industrial areas, including virtual enterprises, Web portals, digital libraries, data management systems, etc., where applications need to manage both (object-)relational DBMS and XML DBMS to retrieve information from these kinds of data sources. This has created a need for integrated access to (object-)relational and XML data sources. The focus of our investigation in this context comprises the design and development of an integration middleware between the application and the data sources, allowing unified access to the entire information for SQL- and XML-based applications. In this paper, we propose a query processing technique supporting integrated access to (object-)relational and XML data sources via both query languages, SQL and XQuery. The local data sources under integration can be queried from the corresponding unified global views, an SQL view (for SQL-based applications) as well as an XML view (for XML-based applications), both offering access to the entire integrated information.
['Iryna Kozlova', 'Norbert Ritter', 'Olga Reimer']
Towards Integrated Query Processing for Object-Relational and XML Data Sources
460,824
This paper considers the problem of low complexity implementation of high-performance semidefinite relaxation (SDR) MIMO detection methods. Currently, most SDR MIMO detectors are implemented using interior-point methods. Although such implementations have worst-case polynomial complexity (approximately cubic in the problem size), they can be quite computationally costly in practice. Here we depart from the interior-point method framework and investigate the use of other low per-iteration-complexity techniques for SDR MIMO detection. Specifically, we employ the row-by-row (RBR) method, which is a particular version of block coordinate descent, to solve the semidefinite programs that arise in the SDR MIMO context with an emphasis on the QPSK scenario. In each iteration of the RBR method, only matrix-vector multiplications are needed, and hence it can be implemented in a very efficient manner. Our simulation results show that the RBR method can indeed offer a significant speedup in runtime, while providing bit error rate performance on par with the interior-point methods.
['Hoi-To Wai', 'Wing-Kin Ma', 'Anthony Man-Cho So']
Cheap semidefinite relaxation MIMO detection using row-by-row block coordinate descent
119,739
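The RBR iteration above admits a compact sketch for the MAXCUT-like SDP min <C, X> s.t. X PSD, diag(X) = 1, which is the form arising in QPSK SDR detection. Fixing all rows/columns but one, the row subproblem has a closed-form solution y = -sqrt((1 - sigma)/(c^T B c)) * B c, needing only matrix-vector products; the regularization parameter sigma and the fixed sweep count below are simplifications of this illustration, not the paper's exact settings.

```python
import numpy as np

def rbr_sdp(C, sweeps=50, sigma=1e-3):
    """Row-by-row block coordinate descent for min <C,X>, X PSD, diag(X)=1."""
    n = C.shape[0]
    X = np.eye(n)
    for _ in range(sweeps):
        for i in range(n):
            idx = np.r_[0:i, i + 1:n]
            B = X[np.ix_(idx, idx)]          # fixed principal submatrix
            c = C[idx, i]
            Bc = B @ c                       # only matrix-vector products
            gamma = c @ Bc
            if gamma > 1e-12:
                y = -np.sqrt((1.0 - sigma) / gamma) * Bc
            else:
                y = np.zeros(n - 1)          # no descent direction for this row
            X[idx, i] = y                    # write back the updated row/column
            X[i, idx] = y
    return X  # symbols can then be recovered, e.g., by randomized rounding
```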