abstract: string (lengths 5 to 11.1k)
authors: string (lengths 9 to 1.96k)
title: string (lengths 4 to 367)
__index_level_0__: int64 (0 to 1,000k)
Iris recognition is a highly robust biometric for the identification of humans. Recognizing the iris requires determining its exact location. Contemporary localization approaches, although accurate, often require long computation times. This paper presents an iris-location method that is both accurate and fast. The approach detects circular boundaries through gradient analysis at points of interest along successive arcs. The quantified majority operator QMA-OWA [20] is used to obtain a representative value for each successive arc, and the circular boundary of the iris in an image region is identified as the arc with the greatest representative value. A fast algorithm for identifying circular boundaries is thus obtained from an aggregation process guided by the linguistic quantifier "many". The experimentation was carried out on the CASIA-IrisV3 image database.
['Yuniol Alvarez-Betancourt', 'Miguel García-Silvente']
A fast Iris location based on aggregating gradient approximation using QMA-OWA operator
419,445
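An illustrative sketch of the aggregation idea in the abstract above: quantifier-guided OWA weights score each candidate arc's gradient values, and the best-scoring arc is taken as the iris boundary. Here Yager's weight construction with Q(r) = r^2 stands in for the quantifier "many"; the paper's QMA-OWA operator [20] may build its weights differently.

```python
import numpy as np

def quantifier_weights(n, Q=lambda r: r**2):
    """OWA weights induced by a RIM quantifier: w_i = Q(i/n) - Q((i-1)/n)."""
    i = np.arange(1, n + 1)
    return Q(i / n) - Q((i - 1) / n)

def owa(values, weights):
    """Ordered weighted average: weights applied to values sorted descending."""
    return float(np.sort(values)[::-1] @ weights)

def best_arc(arcs_gradients):
    """Pick the candidate arc whose aggregated gradient magnitude is largest.

    arcs_gradients: list of 1-D arrays, one per candidate arc, holding the
    gradient magnitudes sampled at the arc's points of interest.
    """
    scores = [owa(g, quantifier_weights(len(g))) for g in arcs_gradients]
    return int(np.argmax(scores)), max(scores)

# Toy example: the second "arc" has consistently strong gradients.
arcs = [np.array([0.1, 0.9, 0.2, 0.1]), np.array([0.8, 0.7, 0.9, 0.8])]
print(best_arc(arcs))  # -> (1, ...)
```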
Exploiting Semantics to Predict Potential Novel Links from Dense Subgraphs.
['Alejandro Flores', 'Maria-Esther Vidal', 'Guillermo Palma']
Exploiting Semantics to Predict Potential Novel Links from Dense Subgraphs.
798,843
Optimal base station density in ultra-densification heterogeneous network
['Jianyuan Feng', 'Zhiyong Feng', 'Zhiqing Wei', 'Wei Li', 'Sumit Roy']
Optimal base station density in ultra-densification heterogeneous network
662,815
Guest Editorial: Bio*Medical informatics and genomic medicine: Research and training
['Peter Tarczy-Hornoch', 'Mia K. Markey', 'John A. Smith', 'Tadaaki Hiruki']
Guest Editorial: Bio*Medical informatics and genomic medicine: Research and training
229,747
Reliable prediction of system status is a highly demanded functionality of smart energy systems, enabling users or human operators to react quickly to potential future system changes. By adopting the multi-timescale nexting method, we develop a human-in-the-loop energy control architecture capable of providing short-term predictive information about the specific smart energy system. The developed architecture requires neither a system model nor additional acquisition of (sensor) data beyond the existing system configuration. Our first experiments demonstrate the performance of the proposed control architecture in an electrical heating system simulation. In the second experiment, we verify the effectiveness of our developed architecture by simulating a heating system in a thermal model of a building, employing natural temperature data from EnergyPlus.
['Johannes Feldmaier', 'Dominik Meyer', 'Hao Shen', 'Klaus Diepold']
Monitoring and Prediction in Smart Energy Systems via Multi-timescale Nexting
864,463
Corporations have long been looking for knowledge sources that can provide structured descriptions of data and focus on meaning and shared understanding; structures that can accommodate open-world assumptions and are flexible enough to incorporate and recognize more than one name for an entity; a source whose major purpose is to facilitate human communication and interoperability. Clearly, databases fail to provide these features, and ontologies have emerged as an alternative choice, but corporations working on the same domain tend to build different ontologies. The problem occurs when they want to share their data and knowledge, so we need tools to merge ontologies into one; this task is termed ontology matching. This is an emerging area, and we still have a long way to go toward an ideal matcher that produces good results. In this paper we present a framework for matching ontologies using graphs.
['Iti Mathur', 'Nisheeth Joshi', 'Hemant Darbari', 'Ajai Kumar']
Shiva: A Framework for Graph Based Ontology Matching
51,931
In this paper, we investigate the use of audio and visual rather than only audio features for the task of speech separation in acoustically noisy environments. The success of existing independent component analysis (ICA) systems for the separation of a large variety of signals, including speech, is often limited by the ability of this technique to handle noise. In this paper, we introduce a Bayesian model for the mixing process that describes both the bimodality and the time dependency of speech sources. Our experimental results show that the online demixing process presented here outperforms both the ICA and the audio-only Bayesian model at all levels of noise.
['Shyamsundar Rajaram', 'Ara V. Nefian', 'Thomas S. Huang']
Bayesian separation of audio-visual speech sources
71,183
Reconstructing the evolutionary tree for a set of n species based on pairwise distances between the species is a fundamental problem in bioinformatics. Neighbor joining is a popular distance-based tree reconstruction method. It always proposes fully resolved binary trees despite missing evidence in the underlying distance data. Distance-based methods based on the theory of Buneman trees and refined Buneman trees avoid this problem by only proposing evolutionary trees whose edges satisfy a number of constraints. These trees might not be fully resolved but there is strong combinatorial evidence for each proposed edge. The currently best algorithm for computing the refined Buneman tree from a given distance measure has a running time of O(n^5) and a space consumption of O(n^4). In this paper, we present an algorithm with running time O(n^3) and space consumption O(n^2).
['Gerth Stølting Brodal', 'Rolf Fagerberg', 'Christian N. S. Pedersen', 'S. Srinivasa Rao']
Computing Refined Buneman Trees in Cubic Time
295,746
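For context, the Buneman-type methods above rest on quartet conditions over the distance matrix. Below is a brute-force sketch of the classical four-point condition for a tree metric, not the paper's O(n^3) algorithm; function names are illustrative.

```python
from itertools import combinations

def four_point_ok(d, i, j, k, l, eps=1e-9):
    """Four-point condition on the quartet {i, j, k, l}: of the three
    pairings of pairwise-distance sums, the two largest must be equal."""
    s = sorted([d[i][j] + d[k][l], d[i][k] + d[j][l], d[i][l] + d[j][k]])
    return s[2] - s[1] <= eps

def is_tree_metric(d, eps=1e-9):
    """Brute-force O(n^4) check that a distance matrix is additive."""
    n = len(d)
    return all(four_point_ok(d, *q, eps) for q in combinations(range(n), 4))

# Distances realized by a path a-b-c-d with unit-length edges: additive.
d = [[0, 1, 2, 3], [1, 0, 1, 2], [2, 1, 0, 1], [3, 2, 1, 0]]
print(is_tree_metric(d))  # True
```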
In Passive POMDPs actions do not affect the world state, but still incur costs. When the agent is bounded by information-processing constraints, it can only keep an approximation of the belief. We present a variational principle for the problem of maintaining the information which is most useful for minimizing the cost, and introduce an efficient and simple algorithm for finding an optimum.
['Roy Fox', 'Naftali Tishby']
Bounded Planning in Passive POMDPs
229,540
With the increasing use of mark-up languages, a new scenario has arisen in the IR field; this new scenario is focused on structured documents, and has been defined as structured IR. The classic IR models have been extended in order to be applied to this new scenario. Generally, these adaptations have been carried out by weighting the fields that form the document structure and assuming statistical independence between fields. This assumption forces an estimation of the different boosts applied to every field. In this paper a new ranking function for structured IR is proposed. This new function is based on fuzzy logic, and its main aim is to model, through heuristics and expert knowledge, the relations between fields.
['Joaquín Pérez-Iglesias', 'Víctor Fresno', 'José R. Pérez-Agüera']
Funciones de Ranking basadas en Lógica Borrosa para IR estructurada
569,057
Recovery based design (RBD) is a promising approach for the design of energy-efficient circuits under variations. RBD instruments circuits with mechanisms to identify and correct timing violations, thereby allowing reduced guard bands or design margins. In addition, RBD enables aggressive voltage overscaling to a point where timing errors occur even under nominal conditions. A major barrier to the widespread adoption of RBD is that traditional design practices and synthesis tools result in circuits with so-called "path walls", leading to an explosion in the number of timing errors beyond a certain critical operating voltage. To alleviate this effect, previous techniques focused on combinational circuit optimizations such as sizing, use of dual-Vth cells, re-structuring, etc. to favorably reshape the path delay distribution. However, these techniques are limited by the inherent sequential structure of the circuit, which defines the boundaries of the combinational logic. In this work, we explore a completely different approach to synthesize circuits for RBD. We propose the use of retiming, a well-known and powerful sequential optimization technique, to redefine the boundaries of combinational logic, thereby creating new opportunities for RBD that cannot be explored by previous techniques. We make the key observation that, in retiming circuits with RBD (unlike classical retiming), it is acceptable for a few paths in the circuit to exceed the clock period. Using this insight, we propose a synthesis methodology, Relax-and-Retime, wherein the original circuit is relaxed by ignoring timing constraints on selected paths that are bottlenecks to retiming. When classical minimum period retiming is employed on this relaxed circuit, the path wall is shifted to a lower delay, thus allowing additional voltage overscaling. The Relax-and-Retime methodology judiciously selects bottleneck paths by trading off recovery overheads caused by timing errors due to these paths with the opportunities for retiming. We utilize the proposed methodology to synthesize a wide range of benchmarks including arithmetic circuits, ISCAS89 benchmarks and modules from the UltraSPARC T1 processor. Our results demonstrate 9-25% (average of 15.3%) improvement in overall energy compared to a well-optimized baseline with RBD.
['Shankar Ganesh Ramasubramanian', 'Swagath Venkataramani', 'Adithya Parandhaman', 'Anand Raghunathan']
Relax-and-retime: a methodology for energy-efficient recovery based design
501,033
In this work, we consider pseudocodewords of (relaxed) linear programming (LP) decoding of 3-dimensional turbo codes (3D-TCs), recently introduced by Berrou et al. Here, we consider binary 3D-TCs, while the original work of Berrou et al. considered double-binary codes. We present a relaxed LP decoder for 3D-TCs, which is an adaptation of the relaxed LP decoder for conventional turbo codes proposed by Feldman in his thesis. The vertices of this relaxed polytope are the pseudocodewords. We show that the support set of any pseudocodeword is a stopping set of iterative decoding of 3D-TCs using maximum a posteriori constituent decoders on the binary erasure channel. Furthermore, we present a numerical study of small block length 3D-TCs, which shows that typically the minimum pseudoweight (on the additive white Gaussian noise (AWGN) channel) is smaller than both the minimum distance and the stopping distance. In particular, we performed an exhaustive search over all interleaver pairs in the 3D-TC (with input block length K = 128) based on quadratic permutation polynomials over integer rings with a quadratic inverse. The search shows that the best minimum AWGN pseudoweight is strictly smaller than the best minimum/stopping distance.
['Eirik Rosnes', 'Michael Helmling', 'Alexandre Graell i Amat']
Pseudocodewords of linear programming decoding of 3-dimensional turbo codes
26,442
We present an unsupervised method to estimate the camera orientation angle on monocular video scenes in the H.264 compressed domain. The method is based on the presence of moving objects in the scene. We start by estimating the global camera motion based on the motion vectors present in the stream, detect and track moving objects, and estimate their relative distance to the camera by analyzing the temporal evolution of the objects' dimensions. The evolution of the motion-compensated vertical positions of key points within moving objects is used to infer the extrinsic orientation angle of the camera.
['Christian Kas', 'Henri Nicolas']
Rough compressed domain camera pose estimation through object motion
500,161
Scalable evolutionary computation has become an intensively studied research topic in recent years. The issue of scalability is predominant in any field of algorithmic design, but it became particularly relevant for the design of competent genetic algorithms once the scalability problems of simple genetic algorithms were understood. Here we present some of the work that has aided in getting a clear insight into the scalability problems of simple genetic algorithms. In particular, we discuss the important issue of building block mixing. We show how the need for mixing places a boundary in the GA parameter space that, together with the boundary from the schema theorem, delimits the region where the GA converges reliably to the optimum in problems of bounded difficulty. This region shrinks rapidly with increasing problem size unless the building blocks are tightly linked in the problem coding structure. In addition, we look at how straightforward extensions of the simple genetic algorithm, namely elitism, niching, and restricted mating, do not significantly improve its scalability.
['Dirk Thierens']
Scalability problems of simple genetic algorithms
285,095
On the (Im)Plausibility of Constant-Round Public-Coin Straight-Line-Simulatable Zero-Knowledge Proofs.
['Yi Deng', 'Juan A. Garay', 'San Ling', 'Huaxiong Wang', 'Moti Yung']
On the (Im)Plausibility of Constant-Round Public-Coin Straight-Line-Simulatable Zero-Knowledge Proofs.
753,560
The Importance of Proper Diversity Management in Evolutionary Algorithms for Combinatorial Optimization
['Carlos Segura', 'Arturo Hernández Aguirre', 'Sergio Ivvan Valdez Peña', 'Salvador Botello Rionda']
The Importance of Proper Diversity Management in Evolutionary Algorithms for Combinatorial Optimization
875,888
The commercial deployment of 5G networks requires heterogeneous multi-tier, multiple radio access technologies (RATs) to support vehicle-to-infrastructure (V2I) communication with diversified services. Vehicles may need to cross a number of heterogeneous networks of various sizes before reaching their destination. Due to high-speed travel, vehicles may quickly move in and out of network coverage areas while performing handover. Fast and efficient selection of an appropriate underlying network is critical for seamless handover performance. In this paper we propose a novel network selection mechanism for improved handover performance in V2I communication over heterogeneous wireless networks. The idea is for vehicles to self-evaluate a candidate list of access points (APs) located in the vehicle's direction of movement and select the best underlying candidate network based on key criteria such as the distance between the target candidate and the trajectory of the vehicle's movement as well as the vehicle's mobility information. A fuzzy-logic inference system is used to decide whether a target candidate is suitable for handover. Experimental results show that for a vehicle moving at 30 km/h, an AP of 100 m radius should be located less than 30 m from the road, while this distance is limited to 15 m when the vehicle speed is 60 km/h.
['Emmanuel Ndashimye', 'Nurul I. Sarkar', 'Sayan Kumar Ray']
A Novel Network Selection Mechanism for Vehicle-to-Infrastructure Communication
850,573
In theory, articles can attract readers on the social reference sharing site Mendeley before they can attract citations, so Mendeley altmetrics could provide early indications of article impact. This article investigates the influence of time on the number of Mendeley readers of an article through a theoretical discussion and an investigation into the relationship between counts of readers of, and citations to, four general library and information science (LIS) journals. For this discipline, it takes about 7 years for articles to attract as many Scopus citations as Mendeley readers, and after this the Spearman correlation between readers and citers is stable at about 0.6 for all years. This suggests that Mendeley readership counts may be useful impact indicators for both newer and older articles. The lack of dates for individual Mendeley article readers and an unknown bias toward more recent articles mean, however, that readership data should be normalized individually by year before making any comparisons between articles published in different years.
['Nabeil Maflahi', 'Mike Thelwall']
When are readership counts as useful as citation counts? Scopus versus Mendeley for LIS journals
397,127
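A small sketch of the per-year normalization the abstract above recommends, using scipy's Spearman rank correlation; the record field names are hypothetical.

```python
from scipy.stats import spearmanr

def reader_citation_correlation(articles):
    """Spearman correlation between Mendeley readers and Scopus citations,
    computed per publication year so the recency bias of readership counts
    does not distort cross-year comparisons."""
    by_year = {}
    for a in articles:  # each a: dict with 'year', 'readers', 'citations'
        by_year.setdefault(a['year'], []).append((a['readers'], a['citations']))
    out = {}
    for year, pairs in by_year.items():
        if len(pairs) >= 3:  # need a few points for a meaningful rank correlation
            readers, cites = zip(*pairs)
            rho, p = spearmanr(readers, cites)
            out[year] = (rho, p)
    return out

articles = [
    {'year': 2010, 'readers': 12, 'citations': 15},
    {'year': 2010, 'readers': 30, 'citations': 40},
    {'year': 2010, 'readers': 5, 'citations': 2},
    {'year': 2010, 'readers': 21, 'citations': 18},
]
print(reader_citation_correlation(articles))
```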
We have recently designed/implemented a method for debugging XPath queries which produces a set of alternative XPath expressions with higher chances for retrieving answers from XML files. In this paper we focus on the scalability of our debugger for dealing with massive XML documents by making use of the new command FILTER which is intended to prematurely disregard those computations leading to non significant solutions (i.e., with a poor “chance degree” according to the user's preferences). The key point is the natural capability for performing “dynamic thresholding” enjoyed by the fuzzy logic language used for implementing the tool, which somehow connects with the so-called «top-k answering problem» very well-known in the fuzzy logic and soft computing arenas.
['Jesús Manuel Almendros-Jiménez', 'Alejandro Luna', 'Ginés Moreno']
Thresholded debugging of XPath queries
548,172
Lexicographic preferences on a set of attributes provide a cognitively plausible structure for modeling the behavior of human decision makers. Therefore, the induction of corresponding models from revealed preferences or observed decisions constitutes an interesting problem from a machine learning point of view. In this paper, we introduce a learning algorithm for inducing generalized lexicographic preference models from a given set of training data, which consists of pairwise comparisons between objects. Our approach generalizes simple lexicographic orders in the sense of allowing the model to consider several attributes simultaneously (instead of looking at them one by one), thereby significantly increasing the expressiveness of the model class. In order to evaluate our method, we present a case study of a highly complex real-world problem, namely the choice of the recognition method for actuarial gains and losses from occupational pension schemes. Using a unique sample of European companies, this problem is well suited for demonstrating the effectiveness of our lexicographic ranker. Furthermore, we conduct a series of experiments on benchmark data from the machine learning domain.
['Michael Bräuning', 'Eyke Hüllermeier', 'Tobias Keller', 'Martin Glaum']
Lexicographic preferences for predictive modeling of human decision making: A new machine learning method with an application in accounting
878,925
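A minimal sketch of plain lexicographic comparison and a naive greedy induction of an attribute-importance order from pairwise preferences. The paper's method is more general (it can consider several attributes simultaneously), so this shows only the baseline it extends; all names are illustrative.

```python
def lex_compare(x, y, attr_order):
    """Strict lexicographic order over attributes: the first attribute
    (in order of importance) where the objects differ decides."""
    for a in attr_order:
        if x[a] != y[a]:
            return 1 if x[a] > y[a] else -1
    return 0

def greedy_attribute_order(pairs, attrs):
    """Greedily build an importance order: repeatedly pick the attribute
    that decides the most remaining training comparisons (x preferred to y)."""
    order, remaining = [], list(pairs)
    candidates = list(attrs)
    while candidates and remaining:
        best = max(candidates, key=lambda a: sum(x[a] > y[a] for x, y in remaining))
        order.append(best)
        candidates.remove(best)
        remaining = [(x, y) for x, y in remaining if x[best] == y[best]]
    return order + candidates

# Each pair (x, y) means x was preferred over y.
pairs = [({'p': 2, 'q': 1}, {'p': 1, 'q': 3}), ({'p': 1, 'q': 5}, {'p': 1, 'q': 2})]
print(greedy_attribute_order(pairs, ['p', 'q']))  # -> ['p', 'q']
```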
Transmission control protocol (TCP) is the most widely used transport protocol on the Internet today. Over the years, especially recently, due to requirements of high bandwidth transmission, various approaches have been proposed to improve TCP performance. The Linux 2.6 kernel is now preemptible. It can be interrupted mid-task, making the system more responsive and interactive. However, we have noticed that Linux kernel preemption can interact badly with the performance of the networking subsystem. In this paper, we investigate the performance bottleneck in Linux TCP. We systematically describe the trip of a TCP packet from its ingress into a Linux network end system to its final delivery to the application; we study the performance bottleneck in Linux TCP through mathematical modelling and practical experiments; finally, we propose and test one possible solution to resolve this performance bottleneck in Linux TCP.
['Wenji Wu', 'Matt Crawford']
Potential performance bottleneck in Linux TCP
218,705
The aim of this paper is to present our attempt to create a development platform for complex robotic applications. Such a system needs appropriate tools for handling real-time aspects, distributed architectures, portability across heterogeneous hardware, and code re-usability. We show in this paper that, by constraining the shape of an application, specifying its scheduling and building a separate representation of the hardware, it is possible to realize a system where all those aspects are integrated. The major properties of our system are a strong multithreading architecture, the possibility to handle design patterns, and a powerful model of hardware platforms using a hypergraph.
['Olivier Stasse', 'Yasuo Kuniyoshi']
PredN: achieving efficiency and code re-usability in a programming system for complex robotic applications
2,373
Identifying Close Friends on the Internet.
['Randy Baden', 'Neil Spring', 'Bobby Bhattacharjee']
Identifying Close Friends on the Internet.
797,470
Handwriting recognition in historical documents is vital for making scanned manuscript images amenable to searching and browsing in digital libraries. A valuable source of information is given by the basic character shapes that vary greatly for different manuscripts. Typically, character prototype images are extracted manually for bootstrapping a recognition system. This process, however, is time-consuming and the resulting prototypes may not cover all writing styles. In this paper, we propose an automatic character prototype selection method based on a forced alignment using Hidden Markov Models (HMM) and graph matching. Besides the predominant character shape given by the median or center graph, structurally different additional prototypes are retrieved with spanning and k-centers prototype selection. On the historical Parzival data set, it is demonstrated that the proposed automatic selection outperforms a manual selection for handwriting recognition with graph similarity features.
['Andreas Fischer', 'Horst Bunke']
Character prototype selection for handwriting recognition in historical documents
243,222
One of the most challenging objectives of mobile data management is ubiquitous, any time, anywhere access. This objective is very difficult to meet due to several network and mobile device limitations. Optimistic data replication is a generally agreed upon approach to alleviating the difficulty of data access in the adverse mobile environment. However, the two currently most popular models, the Client/Server and Peer-to-Peer models, do not adequately meet the ubiquity objectives. In our view, mobile data management should adequately support access to any data source, from any mobile device. It should also eliminate user involvement by automating data selection, hoarding, and synchronization, regardless of the mobile device chosen by the user. In this paper, we present UbiData: an application-transparent, double-middleware architecture that addresses these challenges. UbiData supports access and update to data from heterogeneous sources (e.g. files belonging to different file systems). It provides for the automatic and device-independent selection, hoarding, and synchronization of data. We present the UbiData architecture and system components, and evaluate the effectiveness of UbiData's automatic data selection and hoarding mechanisms.
['Jinsuo Zhang', 'Abdelsalam Helal', 'Joachim Hammer']
UbiData: ubiquitous mobile file service
529,948
Graph-based semi-supervised learning (SSL) algorithms have been successfully used to extract class-instance pairs from large unstructured and structured text collections. However, a careful comparison of different graph-based SSL algorithms on that task has been lacking. We compare three graph-based SSL algorithms for class-instance acquisition on a variety of graphs constructed from different domains. We find that the recently proposed MAD algorithm is the most effective. We also show that class-instance extraction can be significantly improved by adding semantic information in the form of instance-attribute edges derived from an independently developed knowledge base. All of our code and data will be made publicly available to encourage reproducible research in this area.
['Partha Pratim Talukdar', 'Fernando Pereira']
Experiments in Graph-Based Semi-Supervised Learning Methods for Class-Instance Acquisition
204,338
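For orientation, a minimal sketch of plain graph label propagation, a simpler relative of the MAD algorithm compared in the abstract above (MAD adds per-node abandonment probabilities and regularization not shown here).

```python
import numpy as np

def label_propagation(W, Y_seed, n_iter=50, clamp=True):
    """Plain graph label propagation: each node repeatedly averages its
    neighbors' label distributions.

    W: (n, n) symmetric nonnegative edge-weight matrix.
    Y_seed: (n, k) one-hot rows for seed nodes, zero rows for unlabeled.
    """
    seeds = Y_seed.sum(axis=1) > 0
    D_inv = 1.0 / np.maximum(W.sum(axis=1), 1e-12)
    Y = Y_seed.astype(float).copy()
    for _ in range(n_iter):
        Y = D_inv[:, None] * (W @ Y)   # average over neighbors
        if clamp:
            Y[seeds] = Y_seed[seeds]   # keep seed labels fixed
    return Y.argmax(axis=1)

# Chain of 4 nodes; node 0 seeded with class 0, node 3 with class 1.
W = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], float)
Y = np.zeros((4, 2)); Y[0, 0] = 1; Y[3, 1] = 1
print(label_propagation(W, Y))  # -> [0 0 1 1]
```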
Many distributed applications require a group of destinations to be coordinated with a single source. Multicasting is a communication paradigm to implement these distributed applications. However, in multicasting, if at least one of the members in the group cannot satisfy the service requirement of the application, the multicast request is said to be blocked. By contrast, in manycasting, destinations can join or leave the group depending on whether they satisfy the service requirement or not. This dynamic membership-based destination group decreases request blocking. We study the behavior of manycasting over optical burst-switched (OBS) networks based on multiple quality of service (QoS) constraints. These multiple constraints can be in the form of physical-layer impairments, transmission delay, and reliability of the link. Each application requires its own QoS threshold attributes. Destinations qualify only if they satisfy the required QoS constraints set up by the application. We have developed a mathematical model based on lattice algebra for this multiconstraint problem. Due to multiple constraints, burst blocking could be high. We propose two algorithms to minimize request blocking for the multiconstrained manycast (MCM) problem. Using extensive simulation results, we have calculated the average request blocking for the proposed algorithms. Our simulation results show that the MCM-shortest path tree (MCM-SPT) algorithm performs better than MCM-dynamic membership (MCM-DM) for delay-constrained services and real-time service, whereas data services can be better provisioned using the MCM-DM algorithm.
['Balagangadhar G. Bathula', 'Vinod M. Vokkarane']
QoS-based manycasting over optical burst-switched (OBS) networks
32,716
Strong and Uniform Equivalence of Nonmonotonic Theories - An Algebraic Approach.
['Miroslaw Truszczynski']
Strong and Uniform Equivalence of Nonmonotonic Theories - An Algebraic Approach.
802,174
With increasing penetration of solar and wind energy into the total energy supply mix, the pressing need for accurate energy forecasting has become well-recognized. Here we report the development of a machine-learning-based model blending approach for statistically combining multiple meteorological models to improve the accuracy of solar/wind power forecasts. Importantly, we demonstrate that in addition to the parameters to be predicted (such as solar irradiance and power), including additional atmospheric state parameters, which collectively define weather situations, as machine learning input provides further enhanced accuracy for the blended result. Functional analysis of variance shows that the error of an individual model has substantial dependence on the weather situation. The machine-learning approach effectively reduces such situation-dependent error and thus produces more accurate results than conventional multi-model ensemble approaches based on simplistic equally or unequally weighted model averaging. Validation results over an extended period of time show over 30% improvement in solar irradiance/power forecast accuracy compared to forecasts based on the best individual model.
['Siyuan Lu', 'Youngdeok Hwang', 'Ildar Khabibrakhmanov', 'Fernando J. Marianno', 'Xiaoyan Shao', 'Jie Zhang', 'Bri-Mathias Hodge', 'Hendrik F. Hamann']
Machine learning based multi-physical-model blending for enhancing renewable energy forecast - improvement via situation dependent error correction
244,925
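A toy sketch of the blending idea above: fit weights over the individual model forecasts plus atmospheric-state features, here with ordinary least squares as a stand-in for the paper's machine-learning blender; all data below is synthetic.

```python
import numpy as np

def fit_blend(model_forecasts, weather, actual):
    """Least-squares blend of several model forecasts, with extra atmospheric
    state features so the learned weights can correct situation-dependent error.

    model_forecasts: (T, m) forecasts from m models; weather: (T, f) features.
    """
    X = np.hstack([model_forecasts, weather, np.ones((len(actual), 1))])
    coef, *_ = np.linalg.lstsq(X, actual, rcond=None)
    return coef

def predict_blend(coef, model_forecasts, weather):
    X = np.hstack([model_forecasts, weather, np.ones((len(model_forecasts), 1))])
    return X @ coef

rng = np.random.default_rng(0)
truth = rng.uniform(0, 1000, 200)                       # irradiance, W/m^2
m1 = truth + rng.normal(0, 80, 200)                     # unbiased, noisy model
m2 = truth + rng.normal(50, 40, 200)                    # biased, precise model
cloud = rng.uniform(0, 1, 200)                          # a weather feature
coef = fit_blend(np.c_[m1, m2], cloud[:, None], truth)
print(predict_blend(coef, np.c_[m1, m2], cloud[:, None])[:3], truth[:3])
```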
We propose a method for extracting logical hierarchical structure of HTML documents. Because mark-up structure in HTML documents does not necessarily coincide with logical hierarchical structure, it is not trivial how to extract logical structure of HTML documents. Human readers, however, easily understand their logical structure. The key information used by them is headings in the documents. Human readers exploit the following properties of headings: (1) headings appear at the beginning of the corresponding blocks, (2) headings are given prominent visual styles, (3) headings of the same level share the same visual style, and (4) headings of higher levels are given more prominent visual styles. Our method also exploits these properties for extracting hierarchical headings and their associated blocks. Our experiment shows that our method outperforms existing methods. In addition, our method extracts not only hierarchical blocks but also their associated headings.
['Tomohiro Manabe', 'Keishi Tajima']
Extracting logical hierarchical structure of HTML documents based on headings
764,446
A wireless transmitter learns of a packet loss and infers collision only after completing the entire transmission. If the transmitter could detect the collision early [such as with carrier sense multiple access with collision detection (CSMA/CD) in wired networks], it could immediately abort its transmission, freeing the channel for useful communication. There are two main hurdles to realize CSMA/CD in wireless networks. First, a wireless transmitter cannot simultaneously transmit and listen for a collision. Second, any channel activity around the transmitter may not be an indicator of collision at the receiver. This paper attempts to approximate CSMA/CD in wireless networks with a novel scheme called CSMA/CN (collision notification). Under CSMA/CN, the receiver uses PHY-layer information to detect a collision and immediately notifies the transmitter. The collision notification consists of a unique signature, sent on the same channel as the data. The transmitter employs a listener antenna and performs signature correlation to discern this notification. Once discerned, the transmitter immediately aborts the transmission. We show that the notification signature can be reliably detected at the listener antenna, even in the presence of a strong self-interference from the transmit antenna. A prototype testbed of 10 USRP/GNU Radios demonstrates the feasibility and effectiveness of CSMA/CN.
['Souvik Sen', 'Romit Roy Choudhury', 'Srihari Nelakuditi']
CSMA/CN: carrier sense multiple access with collision notification
98,496
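A simplified sketch of the listener-side detection described above: normalized sliding correlation of a known pseudo-noise signature against the received samples. The signal model and threshold are illustrative, not the paper's PHY design.

```python
import numpy as np

def detect_signature(received, signature, threshold=0.55):
    """Slide the known notification signature over the received samples and
    report whether the normalized correlation ever exceeds the threshold.
    (In CSMA/CN this runs at the transmitter's listener antenna, which also
    hears self-interference from its own ongoing transmission.)"""
    sig = signature - signature.mean()
    sig /= np.linalg.norm(sig) + 1e-12
    n = len(signature)
    best = 0.0
    for i in range(len(received) - n + 1):
        win = received[i:i + n] - received[i:i + n].mean()
        best = max(best, float(win @ sig) / (np.linalg.norm(win) + 1e-12))
    return best >= threshold, best

rng = np.random.default_rng(1)
sig = rng.choice([-1.0, 1.0], 64)   # pseudo-noise notification signature
rx = rng.normal(0, 1.0, 1000)       # comparable-power interference at the listener
rx[400:464] += sig                  # the collision notification arrives
print(detect_signature(rx, sig))    # -> (True, ~0.7)
```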
Rapid advances in wireless networking technologies have made it possible to construct a Mobile Ad hoc Network (MANET) which can be applied in infrastructureless situations. However, due to their inherent characteristics, MANETs are vulnerable to various kinds of attacks which aim at disrupting their routing operations. To develop a strong security scheme to protect against these attacks, it is necessary to understand the possible forms of attacks that may be launched. Recently, researchers have proposed and investigated several possible attacks against MANETs. However, there are still unanticipated or sophisticated attacks that have not been well studied. In this paper, we present a collusion attack model against the Optimized Link State Routing (OLSR) protocol, which is one of the four standard routing protocols for MANETs. After analyzing the attack in detail and demonstrating its feasibility through simulations, we present a technique to detect the attack by utilizing information from two-hop neighbors.
['Bounpadith Kannhavong', 'Hidehisa Nakayama', 'Nei Kato', 'Yoshiaki Nemoto', 'Abbas Jamalipour']
A Collusion Attack Against OLSR-based Mobile Ad Hoc Networks
169,965
It is expected that the number of wireless devices will grow rapidly over the next few years due to the growing proliferation of the Internet-of-Things (IoT). In order to improve the energy efficiency of information transfer between small devices, we review state-of-the-art research in simultaneous wireless energy and information transfer, especially for relay-based IoT systems. In particular, we analyze simultaneous information-and-energy transfer from the source node, and the design of time-switching and power-splitting operation modes, as well as the associated optimization algorithms. We also investigate the potential of crowd energy harvesting from transmission nodes that belong to multiple radio networks. The combination of source and crowd energy harvesting can greatly reduce the use of battery power and increase the availability and reliability for relaying. We provide insight into the fundamental limits of crowd energy harvesting reliability based on a case study using real city data. Furthermore, we examine the optimization of transmissions in crowd harvesting, especially with the use of node collaboration while guaranteeing Quality-of-Service (QoS).
['Weisi Guo', 'Shengtian Zhou', 'Yunfei Chen', 'Siyi Wang', 'Xiaoli Chu', 'Zhisheng Niu']
Simultaneous Information and Energy Flow for IoT Relay Systems with Crowd Harvesting
819,222
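A minimal sketch of the textbook power-splitting SWIPT model underlying the time-switching/power-splitting trade-off discussed above; all parameter values are illustrative, not the paper's specific optimization.

```python
import numpy as np

def power_splitting(P_tx, h_gain, rho, eta=0.6, noise=1e-9, T=1.0, bw=1.0):
    """Power-splitting SWIPT at a relay: a fraction rho of the received power
    feeds the energy harvester, the rest feeds the information decoder.

    Returns (harvested energy in joules, achievable rate)."""
    P_rx = P_tx * h_gain                      # received signal power
    energy = eta * rho * P_rx * T             # harvester with efficiency eta
    rate = bw * np.log2(1.0 + (1.0 - rho) * P_rx / noise)
    return energy, rate

# Sweep the splitting ratio to see the energy/rate trade-off.
for rho in (0.1, 0.5, 0.9):
    e, r = power_splitting(P_tx=1.0, h_gain=1e-6, rho=rho)
    print(f"rho={rho}: energy={e:.2e} J, rate={r:.2f} bit/s/Hz")
```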
Traditionally, confidentiality and integrity have been two desirable design goals that have been difficult to combine. Zero-Knowledge Proofs of Knowledge (ZKPK) offer a rigorous set of cryptographic mechanisms to balance these concerns. However, published uses of ZKPK have been difficult for regular developers to integrate into their code and, on top of that, have not been demonstrated to scale as required by most realistic applications. This paper presents ZØ (pronounced "zee-not"), a compiler that translates applications written in C# into code that automatically produces scalable zero-knowledge proofs of knowledge, while automatically splitting applications into distributed multi-tier code. ZØ builds detailed cost models and uses two existing zero-knowledge back-ends with varying performance characteristics to select the most efficient translation. Our case studies have been directly inspired by existing sophisticated widely-deployed commercial products that require both privacy and integrity. The performance delivered by ZØ is as much as 40× faster across six complex applications. We find that when applications are scaled to real-world settings, existing zero-knowledge compilers often produce code that fails to run or even compile in a reasonable amount of time. In these cases, ZØ is the only solution we know of that is able to provide an application that works at scale.
['Matthew Fredrikson', 'Benjamin Livshits']
ZØ: an optimizing distributing zero-knowledge compiler
583,824
The aim of this article is to explore the effect of the joint procurement model adopted during the English National Programme for Information Technology (NPfIT) on the customisation, design and usability of a hospital ePrescribing system. Drawing on qualitative data collected at two case study sites deploying an ePrescribing system jointly procured within one of the NPfIT’s geographical clusters, we explain how procurement decisions, difficult relationships with the supplier and strict contractual arrangements contributed to usability issues and difficulties in the customisation process. While some limited change requests made by users were taken up by the developers, these were seen by users as insufficient to meet local clinical needs and practices. A joint procurement approach, such as the NPfIT, thus limited the opportunity and scope of the changes to the ePrescribing system, which impinged not only on the perceived success of the implementation but also on the system’s usability.
['Lisa Lee', 'Robin Williams', 'Aziz Sheikh']
How does joint procurement affect the design, customisation and usability of a hospital ePrescribing system?
812,152
Applications depend on persistent storage to recover state after system crashes. But the POSIX file system interfaces do not define the possible outcomes of a crash. As a result, it is difficult for application writers to correctly understand the ordering of and dependencies between file system operations, which can lead to corrupt application state and, in the worst case, catastrophic data loss. This paper presents crash-consistency models, analogous to memory consistency models, which describe the behavior of a file system across crashes. Crash-consistency models include both litmus tests, which demonstrate allowed and forbidden behaviors, and axiomatic and operational specifications. We present a formal framework for developing crash-consistency models, and a toolkit, called Ferrite, for validating those models against real file system implementations. We develop a crash-consistency model for ext4, and use Ferrite to demonstrate unintuitive crash behaviors of the ext4 implementation. To demonstrate the utility of crash-consistency models to application writers, we use our models to prototype proof-of-concept verification and synthesis tools, as well as new library interfaces for crash-safe applications.
['James Bornholt', 'Antoine Kaufmann', 'Jialin Li', 'Arvind Krishnamurthy', 'Emina Torlak', 'Xi Wang']
Specifying and Checking File System Crash-Consistency Models
705,885
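A litmus-style sketch of the kind of crash-ordering question such models answer: the classic write-to-temp-then-rename update, whose post-crash outcomes depend on fsync placement and on the file system's crash-consistency model. This is an illustration in Python, not Ferrite's formal notation; the file name is hypothetical.

```python
import os

def atomic_replace(path, data, durable=True):
    """Replace a file's contents via a temp file plus rename. Whether a crash
    can expose the rename *before* the data blocks reach disk is exactly the
    kind of unintuitive ext4 behavior a crash-consistency model pins down."""
    tmp = path + '.tmp'
    fd = os.open(tmp, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    try:
        os.write(fd, data)
        if durable:
            os.fsync(fd)   # without this, a crash may leave an empty file
    finally:
        os.close(fd)
    os.rename(tmp, path)   # atomic in the POSIX namespace, but durability of
                           # the rename itself still needs a directory fsync

atomic_replace('config.txt', b'new settings\n')
```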
The aim of this study was to assess the application of strategic project management (SPM) in Nigerian public research organisations. A case study approach involving four R&D organisations in Nigeria was used. A total of 213 questionnaires were retrieved and analysed using quantitative research software, SPSS version 21. The results revealed that 95 per cent of respondents acknowledged that projects executed by public research organisations were planned, but that conventional project management practices were used instead of strategic project management (SPM) principles. In addition, it was found that project management practices were inadequately implemented, which adversely affected organisational performance. As established in this study, the concept of strategy is changing, and to address the factors that affect research and development project implementation, senior project practitioners need to pay more attention to strategic, operational and project risks.
['Charity Udodirim Ugonna', 'Edward Ochieng']
Strategic Project Management in Nigerian Public Research Organisations: The Gap in Practice
716,901
Motivation: Recent improvements in high-throughput Mass Spectrometry (MS) technology have expedited genome-wide discovery of protein–protein interactions by providing a capability of detecting protein complexes in a physiological setting. Computational inference of protein interaction networks and protein complexes from MS data is challenging. Advances are required in developing robust and seamlessly integrated procedures for assessment of protein–protein interaction affinities, mathematical representation of protein interaction networks, discovery of protein complexes and evaluation of their biological relevance. Results: A multi-step but easy-to-follow framework for identifying protein complexes from MS pull-down data is introduced. It assesses interaction affinity between two proteins based on similarity of their co-purification patterns derived from MS data. It constructs a protein interaction network by adopting a knowledge-guided threshold selection method. Based on the network, it identifies protein complexes and infers their core components using a graph-theoretical approach. It deploys a statistical evaluation procedure to assess biological relevance of each found complex. On Saccharomyces cerevisiae pull-down data, the framework outperformed other more complicated schemes by at least 10% in F1-measure and identified 610 protein complexes with high functional homogeneity based on the enrichment in Gene Ontology (GO) annotation. Manual examination of the complexes brought forward hypotheses on causes of false identifications. Namely, co-purification of different protein complexes as mediated by a common non-protein molecule, such as DNA, might be a source of false positives, while protein identification bias in pull-down technology, such as the hydrophilic bias, could result in false negatives. Contact: [email protected] Supplementary information: Supplementary data are available at Bioinformatics online.
['Bing Zhang', 'Byung-Hoon Park', 'Tatiana V. Karpinets', 'Nagiza F. Samatova']
From pull-down data to protein interaction networks and complexes with biological relevance
330,711
Semantic Web Infrastructure
['Jiří Dokulil', 'Jaroslav Tykal', 'Jakub Yaghob', 'Filip Zavoral']
Semantic Web Infrastructure
925,544
Deep cover HCI: the ethics of covert research
['Julie Rico Williamson', 'Daniel Sundén']
Deep cover HCI: the ethics of covert research
721,729
Interest in gamification is growing steadily. But as the underlying mechanisms of gamification are not well understood yet, a closer examination of a gamified activity's meaning and individual game design elements may provide more insights. We examine the effects of points (a basic element of gamification) and meaningful framing (acknowledging participants' contribution to a scientific cause) on intrinsic motivation and performance in an online image annotation task. Based on these findings, we discuss implications and opportunities for future research on gamification.
['Elisa D. Mekler', 'Florian Brühlmann', 'Klaus Opwis', 'Alexandre N. Tuch']
Disassembling gamification: the effects of points and meaning on user motivation and performance
146,688
Experiential Solving: Towards a Unified Autonomous Search Constraint Solving Approach
['Broderick Crawford', 'Ricardo Soto', 'Kathleen Crawford', 'Franklin Johnson', 'Claudio León de la Barra', 'Sergio Galdames']
Experiential Solving: Towards a Unified Autonomous Search Constraint Solving Approach
652,674
Intensional Combination of Rankings for OCF-Networks.
['Gabriele Kern-Isberner', 'Christian Eichhorn']
Intensional Combination of Rankings for OCF-Networks.
766,428
The estimation of query model is an important task in language modeling (LM) approaches to information retrieval (IR). The ideal estimation is expected to be not only effective in terms of high mean retrieval performance over all queries, but also stable in terms of low variance of retrieval performance across different queries. In practice, however, improving effectiveness can sacrifice stability, and vice versa. In this paper, we propose to study this tradeoff from a new perspective, i.e., the bias-variance tradeoff, which is a fundamental theory in statistics. We formulate the notion of bias-variance regarding retrieval performance and estimation quality of query models. We then investigate several estimated query models, by analyzing when and why the bias-variance tradeoff will occur, and how the bias and variance can be reduced simultaneously. A series of experiments on four TREC collections have been conducted to systematically evaluate our bias-variance analysis. Our approach and results will potentially form an analysis framework and a novel evaluation strategy for query language modeling.
['Peng Zhang', 'Dawei Song', 'Jun Wang', 'Yuexian Hou']
Bias-variance analysis in estimating true query model for information retrieval
439,416
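For reference, the classical decomposition the abstract above builds on, in notation of our choosing (θ̂_q for an estimated query model, θ_q for the true one):

```latex
% Expected squared estimation error of a query model estimate:
\mathbb{E}\big[(\hat{\theta}_q - \theta_q)^2\big]
  = \underbrace{\big(\mathbb{E}[\hat{\theta}_q] - \theta_q\big)^2}_{\text{bias}^2}
  + \underbrace{\mathbb{E}\big[(\hat{\theta}_q - \mathbb{E}[\hat{\theta}_q])^2\big]}_{\text{variance}}
```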
Modeling and simulation are flexible and effective methods for designing and evaluating FC-SANs. We study the impact of link failure on the performance of an FC network with a core/edge topology using SANSim, a simulation tool for storage area networks. Simulation results show that the maximum network throughput is reduced to 50% in the worst case with one network link failure, even in a fully redundant FC network design. The results also show that the network performance is sensitive to the location of the failure.
['Chao-Yang Wang', 'Feng Zhou', 'Yaolong Zhu', 'Chong Tow Chong', 'Bo Hou', 'Wei-Ya Xi']
Simulation and analysis of FC network
143,210
This paper considers the problem of economical optimization of the power production in a power plant capable of utilizing three different fuel systems. The considered fuel systems are coal, gas, and oil; each has certain advantages and disadvantages, e.g., gas is easier to control than coal but is more expensive. A profit function is stated and an analysis of the optimal fuel configuration is performed based on the Hamiltonian from the maximum principle. The analysis leads to the introduction of a performability measure which, when its value is above a confidence threshold, indicates that a change of fuel system usage is beneficial. That is, the performability measure determines when an increase of performance is possible.
['Martin Nygaard Kragelund', 'John-Josef Leth', 'Rafal Wisniewski']
Performability measure for a power plant
35,269
The current level of theoretical, methodological, and pragmatic knowledge related to a multi-method modeling and simulation (M&S) approach is limited as there are no clearly identified theoretical principles that guide the use of multi-method M&S approach. Theoretical advances are vital to enhance methodological developments, which in turn empower scientists to address a broader range of scientific inquiries and improve research quality. In order to develop theoretical principles of multi-method M&S approach, the theory of falsification is used in an M&S context to provide a meta-theoretical basis for analysis. Moreover, triangulation and commensurability are characterized and investigated as additional relevant concepts. This paper proposes four theoretical principles for justification of the use of a multi-method M&S approach, which will be analyzed and used to implement methodological guidelines in a subsequent work. A final discussion offers initial implications of the proposed theoretical view.
['Mariusz Balaban', 'Patrick T. Hester', 'S. Diallo']
Towards a theory of multi-method m&s approach: part III
506,776
Spectrum scarcity together with high capacity demands make the use of millimeter wave (mmWave) frequencies an interesting alternative for next generation, i.e., fifth generation (5G), networks. Although mmWave is expected to play a key role for both the access network and the backhaul (BH), its initial use in the BH network seems more straightforward. This stems from the fact that, in the BH case, its deployment is less challenging due to the fixed locations of BH transceivers. Still, provided that mmWave spectrum consists of several subbands, each one with different characteristics and thus different deployment constraints (e.g., channel bandwidth, maximum transmission power), a comparison is required in order to gain a better insight into the potential of each solution. To that end, in this paper, the main mmWave candidate frequency bands are compared in terms of range, throughput and energy consumption. In our results, the bandwidth availability, the maximum transmission power as well as the antenna gains of each BH technology are taken into account, as defined by the Federal Communications Commission. The results are also compared with current industry-oriented state-of-the-art transceiver characteristics in order to gain further insights into the maximum achievable gains of each subband.
['Agapi Mesodiakaki', 'Andreas Kassler', 'Enrica Zola', 'Mattias Ferndahl', 'Tao Cai']
Energy efficient line-of-sight millimeter wave small cell backhaul: 60, 70, 80 or 140 GHz?
850,361
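A back-of-the-envelope sketch of the kind of band comparison described above, combining free-space path loss with Shannon capacity. The transmit powers, gains, and bandwidths below are illustrative, not the FCC limits the paper uses.

```python
import math

def fspl_db(d_km, f_ghz):
    """Free-space path loss (dB) at distance d_km and carrier f_ghz."""
    return 20 * math.log10(d_km) + 20 * math.log10(f_ghz) + 92.45

def backhaul_rate_bps(f_ghz, bw_hz, p_tx_dbm, g_tx_dbi, g_rx_dbi, d_km,
                      noise_figure_db=7.0):
    """Shannon-capacity estimate for a line-of-sight mmWave backhaul link."""
    noise_dbm = -174 + 10 * math.log10(bw_hz) + noise_figure_db  # thermal noise
    rx_dbm = p_tx_dbm + g_tx_dbi + g_rx_dbi - fspl_db(d_km, f_ghz)
    snr = 10 ** ((rx_dbm - noise_dbm) / 10)
    return bw_hz * math.log2(1 + snr)

# Wider bandwidth at higher bands versus higher path loss (illustrative numbers).
for f, bw in ((60, 2e9), (80, 2e9), (140, 4e9)):
    print(f"{f} GHz: {backhaul_rate_bps(f, bw, 30, 38, 38, 0.5) / 1e9:.1f} Gb/s")
```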
Designing a network with optimal deployment cost and maximum reliability is a hard problem, especially when all-terminal reliability is required. To find an acceptable solution efficiently, Genetic Algorithms (GAs) have been widely applied to this problem; in these GAs, the reliability values can be calculated in their objective functions. In 2002, an extended network reliability model was proposed which considers the connection importance level between each pair of nodes. This paper proposes an approximation algorithm based on Monte Carlo simulation for the new network reliability model. This approximation algorithm can be integrated into GAs to solve the optimal-cost reliable network design problem under the extended model.
['Shiang-Ming Huang', 'Quincy Wu', 'Shi-Chun Tsai']
A Monte Carlo Method for Estimating the Extended All-Terminal Reliability
91,043
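A minimal Monte Carlo sketch for classic all-terminal reliability; the paper's extended model additionally weights the importance of node pairs, which is not modeled here.

```python
import random

def connected(n, edges_up):
    """Union-find check that all n nodes form one component."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    for u, v in edges_up:
        parent[find(u)] = find(v)
    return len({find(v) for v in range(n)}) == 1

def all_terminal_reliability(n, edges, trials=20000, seed=7):
    """Monte Carlo estimate: each edge (u, v, p) is independently up with
    probability p; reliability is the fraction of samples left connected."""
    rng = random.Random(seed)
    ok = 0
    for _ in range(trials):
        up = [(u, v) for u, v, p in edges if rng.random() < p]
        ok += connected(n, up)
    return ok / trials

# Triangle with 0.9-reliable links: exact value is 3*0.9^2*0.1 + 0.9^3 = 0.972.
edges = [(0, 1, 0.9), (1, 2, 0.9), (0, 2, 0.9)]
print(all_terminal_reliability(3, edges))
```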
Competition between groups often involves prizes that have both a public and a private component. The exact nature of the prize not only affects the strategic choice of the sharing rules determining its allocation but also gives rise to an interesting phenomenon not observed when the prize is either purely public or purely private. Indeed, we show that in the two-groups contest, for most degrees of privateness of the prize, the large group uses its sharing rule as a mean to exclude the small group from the competition, a situation called monopolization. Conversely, there is a degree of relative privateness above which the small group, besides being active, even outperforms the large group in terms of winning probabilities, giving rise to the celebrated group size paradox.
['Pau Balart', 'Sabine Flamand', 'Orestis Troumpounis']
Strategic choice of sharing rules in collective contests
642,937
One may expect the Internet to evolve from being information centric to knowledge centric. This paper introduces the concept of a Knowledge Society Operating System (KSOS) that allows users to form knowledge societies in which members can search, create, manipulate and connect geographically distributed knowledge resources (including data, documents, tools, people, devices, etc.) based on semantics (“meaning”, “intention”) in order to solve problems of mutual interest. Built on top of the current Internet infrastructure, a KSOS can take advantage of existing resources to enable the use of applications or services through a web browser. This paper discusses some crucial aspects of a KSOS.
['Ke Hao', 'Phillip C.-Y. Sheu', 'Hiroshi Yamaguchi', 'Jeffery J.P. Tsai']
KSOS — An Operating System for Knowledge Societies
700,527
Automation of the Simple Test for Evaluating Hand Function Using Leap Motion Controller
['Kouki Nagamune', 'Yosuke Uozumi', 'Yoshitada Sakai']
Automation of the Simple Test for Evaluating Hand Function Using Leap Motion Controller
860,954
We analyze the notion of "local names" in SPKI/SDSI. By interpreting local names as distributed groups, we develop a simple logic program for SPKI/SDSI's linked local-name scheme and prove that it is equivalent to the name-resolution procedure in SDSI 1.1 and the 4-tuple-reduction mechanism in SPKI/SDSI 2.0. This logic program is itself a logic for understanding SDSI's linked local-name scheme and has several advantages over previous logics. We then enhance our logic program to handle authorization certificates, threshold subjects, and certificate discovery. This enhanced program serves both as a logical characterization and an implementation of SPKI/SDSI 2.0's certificate reduction and discovery. We discuss the way SPKI/SDSI uses the threshold subjects and names for the purpose of authorization and show that, when used in a certain restricted way, local names can be interpreted as distributed roles.
['Ninghui Li']
Local names in SPKI/SDSI
127,523
This paper addresses a new information-theoretic approach to the minimization of polynomial expressions for Multiple Valued Logic (MVL) functions. Its focus is to determine the so-called pseudo Reed-Muller and pseudo Kronecker expressions of MVL functions. A key point of our approach is the use of information-theoretic measures for the efficient design of Decision Trees (DTs) to represent MVL functions. We utilize free pseudo Reed-Muller GF(4) (PSDRMGF) DTs and free pseudo Kronecker GF(4) (PSDKGF) DTs. Furthermore, we show that the suggested approach allows the minimization process to be managed in a simple way for most known forms of logic function representation. In most cases, our program, Info-MV, produces substantially better results than known heuristic minimization strategies.
['Svetlana N. Yanushkevich', 'Denis V. Popel', 'Vlad Shmerko', 'V. Cheushev', 'Radomir S. Stankovic']
Information theoretic approach to minimization of polynomial expressions over GF(4)
83,584
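A generic sketch of the information-theoretic driver behind such DT design: pick the input variable that minimizes the conditional entropy of the 4-valued function. The actual Info-MV expansions over GF(4) (Reed-Muller, Kronecker) are not reproduced here; names are illustrative.

```python
import math
from collections import Counter

def entropy(values):
    """Shannon entropy (bits) of a sequence of symbols."""
    n = len(values)
    return -sum(c / n * math.log2(c / n) for c in Counter(values).values())

def best_variable(rows, f):
    """Choose the variable minimizing H(f | x_i), i.e. the most informative
    variable to expand next in the decision tree.

    rows: list of input tuples over {0, 1, 2, 3}; f: list of GF(4) outputs."""
    def cond_entropy(i):
        groups = {}
        for row, y in zip(rows, f):
            groups.setdefault(row[i], []).append(y)
        return sum(len(g) / len(f) * entropy(g) for g in groups.values())
    return min(range(len(rows[0])), key=cond_entropy)

# Toy 2-variable 4-valued function f(x0, x1) = x0: x0 is fully informative.
rows = [(a, b) for a in range(4) for b in range(4)]
f = [a for a, b in rows]
print(best_variable(rows, f))  # -> 0
```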
This paper presents two load board designs for hierarchical calibration of largely populated ATE. A compound dot technique and a phase detector are used on both boards to provide automatic and low-cost calibration of ATE with or without a single reference clock. Two different relay tree structures are implemented on the two boards with advanced board design techniques for group offset calibration. Various error sources have been identified and analyzed on both boards based on SPICE simulations and real measurements. TDR measurement compares the two approaches and shows that the two load boards give a maximum group timing skew of 37 ps, which can be calibrated out by the calibration software.
['Fengming Zhang', 'Warren Necoechea', 'P. Reiter', 'Yong-Bin Kim', 'Fabrizio Lombardi']
Load Board Designs Using Compound Dot Technique and Phase Detector for Hierarchical ATE Calibrations
323,681
As television moves beyond digital broadcast modes of distribution towards online modes of delivery, this paper considers the opportunities and challenges for people with disabilities. With accessibility relying on a complex mix of regulation, legislation and industry innovation, the paper questions whether predictions of improved accessibility are an automatic outcome of new television technologies. The paper asks ‘where to next?’ for disability and the Internet through an emphasis on the importance of television in an accessible new media environment. The paper draws on government policies, the activist intervention of a number of people with disabilities as documented online, and primary research into Australian television audiences with disabilities that took place in 2013 and 2014.
['Katie Ellis', 'Mike Kent']
Accessible television: The new frontier in disability media studies brings together industry innovation, government legislation and online activism
497,160
A new region growing algorithm has been proposed for computing Euclidean distance maps in a time comparable to widely used chamfer distance transform. We show how this algorithm can be extended to more complex tasks such as the computation of distance maps on anisotropic grids and the generation of a new type of Euclidean skeletons.
['Olivier Cuisenaire', 'Benoît Macq']
Applications of the region growing Euclidean distance transform: anisotropy and skeletons
102,563
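A simplified sketch of the region-growing idea: grow outward from all background sites with a priority queue, propagating the coordinates of each pixel's nearest site rather than chamfer increments. This Dijkstra-style version can differ from exact Euclidean maps in rare configurations, so it is an approximation for illustration, not the paper's algorithm.

```python
import heapq
import math
import numpy as np

def euclidean_distance_map(mask):
    """Distance map over a binary mask: for each pixel, the (approximate)
    Euclidean distance to the nearest background (0) pixel."""
    h, w = mask.shape
    heap = []
    for y in range(h):
        for x in range(w):
            if mask[y, x] == 0:                    # background pixel: a site
                heapq.heappush(heap, (0.0, (y, x), (y, x)))
    dist = np.full((h, w), np.inf)
    site = {}                                      # pixel -> its nearest site
    while heap:
        d, (y, x), s = heapq.heappop(heap)
        if d >= dist[y, x]:
            continue                               # already settled closer
        dist[y, x], site[(y, x)] = d, s
        for dy in (-1, 0, 1):                      # grow into 8-neighborhood
            for dx in (-1, 0, 1):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w:
                    nd = math.hypot(ny - s[0], nx - s[1])
                    if nd < dist[ny, nx]:
                        heapq.heappush(heap, (nd, (ny, nx), s))
    return dist

mask = np.ones((5, 5), int); mask[2, 2] = 0        # single background pixel
print(euclidean_distance_map(mask).round(2))
```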
CyclingMusic & CyclingMelody: A System for Enriching Scenery Experience in Cycling by Real-Time Synaesthetic Sonification of Passing Landscape.
['Masaki Matsubara', 'Satoshi Kuribayashi', 'Haruka Nukariya', 'Yasuaki Kakehi']
CyclingMusic & CyclingMelody: A System for Enriching Scenery Experience in Cycling by Real-Time Synaesthetic Sonification of Passing Landscape.
745,496
Bucketization is an anonymization technique for publishing sensitive data. The idea is to group records into small buckets to obscure the record-level association between sensitive information and identifying information. Compared to the traditional generalization technique, bucketization does not require a taxonomy of attribute values, and so is applicable to more data sets. A drawback of previous bucketization schemes is the uniform privacy setting and uniform bucket size, which often results in a non-achievable privacy goal or excessive information loss if sensitive values have variable sensitivity. In this work, we present a flexible bucketization scheme to address these issues. In the flexible scheme, each sensitive value can have its own privacy setting and buckets of different sizes can be formed. The challenge is to determine proper bucket sizes and group sensitive values into buckets so that the privacy setting of each sensitive value can be satisfied and overall information loss is minimized. We define the bucket setting problem to formalize this requirement. We present two efficient solutions to this problem. The first solution is optimal under the assumption that two different bucket sizes are allowed, and the second solution is heuristic without this assumption. We experimentally evaluate the effectiveness of this generalized bucketization scheme.
['Ke Wang', 'Peng Wang', 'Ada Wai-Chee Fu', 'Raymond Chi-Wing Wong']
Generalized bucketization scheme for flexible privacy settings
639,560
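A naive greedy sketch of the per-value privacy setting described above, where privacy[s] = 1/k is read as "a record with sensitive value s needs a bucket holding at least k distinct sensitive values". The paper's optimal and heuristic bucket-setting algorithms are more involved; this only illustrates the flexible setting.

```python
def bucketize(records, privacy):
    """Greedy flexible bucketization: serve the strictest values first,
    forming each bucket with all-distinct sensitive values."""
    pool = sorted(records, key=lambda r: privacy[r['sensitive']])
    buckets = []
    while pool:
        need = round(1 / privacy[pool[0]['sensitive']])  # required bucket size
        bucket, used = [], set()
        for r in list(pool):
            if r['sensitive'] not in used:
                bucket.append(r); used.add(r['sensitive']); pool.remove(r)
            if len(bucket) == need:
                break
        if len(bucket) < need:        # cannot satisfy the setting: suppress rest
            break
        buckets.append(bucket)
    return buckets, pool              # pool holds suppressed leftovers

records = [{'id': i, 'sensitive': s}
           for i, s in enumerate(['flu', 'flu', 'hiv', 'cold', 'flu', 'cold'])]
privacy = {'hiv': 1 / 3, 'flu': 1 / 2, 'cold': 1 / 2}   # hiv is more sensitive
print(bucketize(records, privacy))
```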
The information and communications technology (ICT) led revolution of the last two decades has transformed a large number of traditional businesses. The impact has been more significant in industries dominated by information goods such as music, software, and newspapers. The higher education sector is information-centric and its digitization is inevitable. A new generation of Internet-based educational business models has emerged, and they have already started evolving to make electronic learning (eLearning) as effective and efficient as electronic commerce (eCommerce) has become in retailing. Drawing on information goods theory in economics, the online retailers and marketplaces literature in information systems, and contemporary research on eLearning, this paper classifies and analyses the emerging educational business models into online education marketplaces (OEM), online education providers (OEP), and online education services (OES), and provides a roadmap for the transformation of traditional universities.
['Bhavik K. Pathak']
Emerging online educational models and the transformation of traditional universities
718,754
We propose cost reference particle filter (CRPF) and extended game-theory-based H∞ filter approaches to the problem of estimating frequency-selective and slowly varying nonlinear channels with unknown noise statistics. The proposed approaches share an advantageous feature: noise information is not required in their application. The simulation results confirm that both approaches are effective, and that CRPF is more robust against highly nonlinear and drastically varying channels.
['Jaechan Lim', 'Daehyoung Hong']
Frequency-selective and nonlinear channel estimation with unknown noise statistics
231,292
Today's 4G mobile systems are evolving to provide IP connectivity for diverse applications and services at up to 1 Gbps. They are designed to optimize network performance, improve cost efficiency and facilitate the uptake of mass-market IP-based services. Nevertheless, the growing demand and the diverse patterns of mobile traffic place an increasing strain on cellular networks. To cater to the large volumes of traffic delivered by new services and applications, the future 5G network will provide the fundamental infrastructure as billions of new devices with less predictable traffic patterns join the network. The 5G technology is presently in its early research stages, so studies are currently underway exploring different architectural paths to address its key drivers. SDN techniques have been seen as promising enablers for this vision of carrier networks and will likely play a crucial role in the design of 5G wireless networks. A critical understanding of this emerging paradigm is necessary to address the multiple challenges of the future SDN-enabled 5G technology. To address this requirement, we survey the emerging trends and prospects, followed by an in-depth discussion of the major challenges in this area.
['Akram Hakiri', 'Pascal Berthou']
Leveraging SDN for The 5G Networks: Trends, Prospects and Challenges
581,979
Harmonic volumetric mapping aims to establish a smooth bijective correspondence between two solid shapes with the same topology. In this paper, we develop an automatic meshless method for creating such a mapping between two given objects. With the shell surface mapping as the boundary condition, we first solve a linear system constructed by a boundary method called the method of fundamental solutions, and then represent the mapping using a set of points with different weights in the vicinity of the shell of the given model. Our algorithm is a true meshless method (without the need for any specific meshing structure within the solid interior), and the behavior of the interior region is directly determined by the boundary, which can improve computational efficiency and robustness significantly. Therefore, our algorithm can be applied to massive volume data sets with various geometric primitives and topological types. We demonstrate the utility and efficacy of our algorithm in information transfer, shape registration, deformation sequence analysis, tetrahedral remeshing, and solid texture synthesis.
['Xin Li', 'Xiaohu Guo', 'Hongyu Wang', 'Ying He', 'Xianfeng Gu', 'Hong Qin']
Meshless Harmonic Volumetric Mapping Using Fundamental Solution Methods
354,027
The standard clock (SC) method is an efficient approach for discrete-event simulation. Its basic ideas are quite different from those of traditional approaches. SC has neither an event list nor event lifetimes; its applicability is limited, however, to exponential distributions and a class of nonexponential distributions. In this paper we provide an efficient approach to general distributions. Shifted exponential and hyperexponential distributions are used as second-order approximations to simulation input distributions. Numerical testing demonstrates that they serve as good approximations and preserve the advantages of SC. In addition, an nth-order method is presented that provides arbitrarily good approximations. The idea of event insertion extends SC use to further applications and improves simulation efficiency on SIMD machines.
['Chun-Hung Chen', 'Yu-Chi Ho']
An approximation approach of the standard clock method for general discrete-event simulation
454,497
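The second-order (two-moment) approximation mentioned in the abstract above can be illustrated by fitting a shifted exponential: X = d + Exp(lam) has mean d + 1/lam and variance 1/lam^2, so matching a sample mean mu and standard deviation sigma gives lam = 1/sigma and d = mu - sigma. A minimal sketch, with illustrative names and a uniform target distribution chosen only for the demo:

```python
import numpy as np

def fit_shifted_exponential(samples):
    """Two-moment fit of a shifted exponential to arbitrary samples.

    Requires mu >= sigma for a non-negative shift, a limitation of
    this simple sketch (the hyperexponential case is not shown).
    """
    mu, sigma = np.mean(samples), np.std(samples)
    lam = 1.0 / sigma
    d = mu - sigma
    return d, lam

def sample_shifted_exponential(d, lam, size, rng):
    return d + rng.exponential(1.0 / lam, size)

rng = np.random.default_rng(0)
target = rng.uniform(2.0, 6.0, 10_000)          # a non-exponential lifetime
d, lam = fit_shifted_exponential(target)
approx = sample_shifted_exponential(d, lam, 10_000, rng)
print(f"target mean/std: {target.mean():.2f}/{target.std():.2f}")
print(f"approx mean/std: {approx.mean():.2f}/{approx.std():.2f}")
```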
A theoretical framework for grounding language is introduced that provides a computational path from sensing and motor action to words and speech acts. The approach combines concepts from semiotics and schema theory to develop a holistic approach to linguistic meaning. Schemas serve as structured beliefs that are grounded in an agent's physical environment through a causal-predictive cycle of action and perception. Words and basic speech acts are interpreted in terms of grounded schemas. The framework reflects lessons learned from implementations of several language processing robots. It provides a basis for the analysis and design of situated, multimodal communication systems that straddle symbolic and non-symbolic realms.
['Deb Roy']
Semiotic schemas: a framework for grounding language in action and perception
169,358
Background: A standardized imaging proposal evaluating implanted left atrial appendage (LAA) occlusion devices by cardiac computed tomography angiography (cCTA) has never been investigated.
['Michael Behnes', 'Ibrahim Akin', 'Benjamin Sartorius', 'Christian Fastner', 'Ibrahim El-Battrawy', 'Martin Borggrefe', 'Holger Haubenreisser', 'Mathias Meyer', 'Stefan O. Schoenberg', 'Thomas Henzler']
LAA Occluder View for post-implantation Evaluation (LOVE): standardized imaging proposal evaluating implanted left atrial appendage occlusion devices by cardiac computed tomography.
693,899
Visualization of volumetric datasets is common in many fields and has been an active area of research in the past two decades. In spite of developments in volume visualization techniques, interacting with large datasets still demands research efforts due to perceptual and performance issues. The support of graphics hardware for texture-based visualization allows efficient implementation of rendering techniques that can be combined with interactive sculpting tools to enable interactive inspection of 3D datasets. In this paper we report the development of three 3D interactive tools, eraser, digger and clipper, which specify regions within the volume to be discarded from rendering. Sculpting is accomplished by running special fragment programs that discard fragments based on geometric predicates. The interaction techniques we propose were implemented using the virtual hand metaphor. The tools were evaluated by comparing the use of a 3D mouse against a conventional wheel mouse for guiding volume and tool manipulation. Two-handed input was tested with both types of mouse, and the results obtained indicate a preference for a combination of the 2D and 3D mouse.
['Rafael Huff', 'Carlos A. Dietrich', 'Luciana Porcher Nedel', 'Carla Maria Dal Sasso Freitas', 'João Luiz Dihl Comba', 'Sílvia Delgado Olabarriaga']
Erasing, digging and clipping in volumetric datasets with one or two hands
66,036
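A CPU-side sketch of the geometric predicates behind the eraser and clipper tools; in the paper the same tests run in GPU fragment programs to discard fragments during rendering, whereas here they simply mask a numpy volume. Function names, shapes, and parameters are illustrative assumptions.

```python
import numpy as np

def eraser_mask(shape, center, radius):
    """Spherical 'eraser' predicate: True where voxels are discarded."""
    z, y, x = np.indices(shape)
    return ((z - center[0]) ** 2 + (y - center[1]) ** 2
            + (x - center[2]) ** 2) <= radius ** 2

def clipper_mask(shape, normal, offset):
    """Half-space 'clipper' predicate: discard voxels beyond a plane."""
    coords = np.stack(np.indices(shape), axis=-1).astype(float)
    return coords @ np.asarray(normal, dtype=float) > offset

volume = np.random.default_rng(1).random((64, 64, 64))
discard = eraser_mask(volume.shape, center=(32, 32, 32), radius=10) \
          | clipper_mask(volume.shape, normal=(0.0, 0.0, 1.0), offset=60.0)
sculpted = np.where(discard, 0.0, volume)   # zeroed voxels render as empty
print(f"discarded {discard.sum()} of {volume.size} voxels")
```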
We consider a communication system that transmits a sequence of binary vector symbols over a vector intersymbol interference (ISI) channel subject to additive white Gaussian noise. Conventionally, the maximum likelihood (ML) sequence is computed using the Viterbi algorithm (VA), whose complexity scales exponentially in both the symbol vector length and the number of ISI channel taps. We show that, as the signal-to-noise ratio (SNR) goes to infinity, the ML sequence can be obtained with an asymptotic complexity scaling linearly in the number of channel taps and quadratically in the symbol vector length.
['Jie Luo']
Fast Maximum Likelihood Sequence Detection Over Vector Intersymbol Interference Channels
157,751
We consider distance labeling schemes for trees: given a tree with n nodes, label the nodes with binary strings such that, given the labels of any two nodes, one can determine, by looking only at the labels, the distance in the tree between the two nodes. A lower bound by Gavoille et al. [Gavoille et al., J. Alg., 2004] and an upper bound by Peleg [Peleg, J. Graph Theory, 2000] establish that labels must use Theta(log^2(n)) bits. Gavoille et al. [Gavoille et al., ESA, 2001] show that for very small approximate stretch, labels use Theta(log(n) log(log(n))) bits. Several other papers investigate various variants such as, for example, small distances in trees [Alstrup et al., SODA, 2003]. We improve the known upper and lower bounds of exact distance labeling by showing that 1/4*log^2(n) bits are needed and that 1/2*log^2(n) bits are sufficient. We also give (1 + epsilon)-stretch labeling schemes using Theta(log(n)) bits for constant epsilon > 0. (1 + epsilon)-stretch labeling schemes with polylogarithmic label size have previously been established for doubling dimension graphs by Talwar [Talwar, STOC, 2004]. In addition, we present matching upper and lower bounds for distance labeling for caterpillars, showing that labels must have size 2*log(n) - Theta(log(log(n))). For simple paths with k nodes and edge weights in [1,n], we show that labels must have size (k - 1)/k*log(n) + Theta(log(k)).
['Stephen Alstrup', 'Inge Li Gørtz', 'Esben Bistrup Halvorsen', 'Ely Porat']
Distance labeling schemes for trees
622,134
Higher compression efficiency in HEVC encoders comes with increased computational complexity, making real-time encoding of high-resolution videos a challenging task. This challenge can be addressed in software, yet hardware solutions are more appealing due to their superior performance and low power consumption. This paper presents an FPGA-based hardware implementation of an all-intra HEVC encoder that can encode 8-bit-per-sample, 1920×1080, 30 frames-per-second raw video in real time even at low operating frequencies. A major obstacle to real-time encoding in available architectures is the dependency created by reference generation. Moreover, each coding unit (CU) has to be processed in multiple configurations to determine the most efficient split and prediction mode representation, based on the bit stream generated. We propose a new three-stage architecture to reduce these dependencies and increase parallelism. The feedback needed from binarization for CU split and prediction direction decisions is avoided by a Hadamard-based early decision method. The feedback-constrained coefficient and reconstruction derivation module exploits several optimization techniques. All modules can operate at 200 MHz and the encoder can achieve real-time encoding with a minimum operating frequency of 140 MHz. The design consumes 83K LUTs, 28K registers, and 34 DSPs when implemented on a Xilinx Zynq ZC706.
['Sachille Atapattu', 'N. M. V. K. Liyanage', 'Nisal Menuka', 'Ishantha Perera', 'Ajith Pasqual']
Real time all intra HEVC HD encoder on FPGA
952,403
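The Hadamard-based early decision can be illustrated with a sum-of-absolute-transformed-differences (SATD) cost, which ranks candidate predictions cheaply without waiting for feedback from full binarization. A minimal 4x4 sketch; the normalization constant and the toy predictions are conventions assumed for the demo, not taken from the paper.

```python
import numpy as np

# 4x4 Hadamard (Walsh) matrix used for SATD-style cost estimation.
H4 = np.array([[1,  1,  1,  1],
               [1, -1,  1, -1],
               [1,  1, -1, -1],
               [1, -1, -1,  1]])

def satd_4x4(original, prediction):
    """Sum of absolute transformed differences for one 4x4 block."""
    residual = original.astype(int) - prediction.astype(int)
    return np.abs(H4 @ residual @ H4.T).sum() // 2   # common normalization

rng = np.random.default_rng(7)
block = rng.integers(0, 256, (4, 4))
flat_pred = np.full((4, 4), int(block.mean()))       # DC-like prediction
noisy_pred = rng.integers(0, 256, (4, 4))            # unrelated prediction
# The better prediction mode yields the lower Hadamard-domain cost.
print(satd_4x4(block, flat_pred), "<", satd_4x4(block, noisy_pred))
```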
The successful development of neural prostheses requires an understanding of the neurobiological bases of cognitive processes, i.e., how the collective activity of populations of neurons results in a higher-level process not predictable based on knowledge of the individual neurons and/or synapses alone. We have been studying and applying novel methods for representing nonlinear transformations of multiple spike train inputs (multiple time series of pulse train inputs) produced by synaptic and field interactions among multiple subclasses of neurons arrayed in multiple layers of incompletely connected units.
['Theodore W. Berger', 'Dong Song', 'Vasilis Z. Marmarelis']
Multi-Input, Multi-Output Nonlinear Dynamic Modeling to Identify Biologically-Based Transformations as the “Cognitive Processes” Represented by the Ensemble Coding of Neuron Populations
783,552
A method is presented to recover 3D scene structure and camera motion from multiple images without the need for correspondence information. The problem is framed as finding the maximum likelihood structure and motion given only the 2D measurements, integrating over all possible assignments of 3D features to 2D measurements. This goal is achieved by means of an algorithm which iteratively refines a probability distribution over the set of all correspondence assignments. At each iteration a new structure from motion problem is solved, using as input a set of 'virtual measurements' derived from this probability distribution. The distribution needed can be efficiently obtained by Markov Chain Monte Carlo sampling. The approach is cast within the framework of Expectation-Maximization, which guarantees convergence to a local maximizer of the likelihood. The algorithm works well in practice, as will be demonstrated using results on several real image sequences.
['Frank Dellaert', 'Steven M. Seitz', 'Charles E. Thorpe', 'Sebastian Thrun']
Structure from motion without correspondence
402,815
This paper is devoted to the analysis of the angular resolution limit (ARL), an important performance measure in the directions-of-arrival estimation theory. The main fruit of our endeavor takes the form of an explicit, analytical expression of this resolution limit, w.r.t. the angular parameters of interest between two closely spaced point sources in the far-field region. As by-products, closed-form expressions of the Cramer-Rao bound have been derived. Finally, with the aid of numerical tools, we confirm the validity of our derivation and provide a detailed discussion on several enlightening properties of the ARL revealed by our expression, with an emphasis on the impact of the signal correlation.
['Xin Zhang', 'Mohammed Nabil El Korso', 'Marius Pesavento']
Angular resolution limit for deterministic correlated sources
149,967
In this paper, we formulate margin call stock loans in finite maturity as American down-and-out calls with rebate and time-dependent strike. The option problem is solved semi-analytically based on the approach in Zhu (2006). An explicit equation for optimal exit price and a pricing formula for loan value are obtained in Laplace space. Final results are obtained by numerical inversion. Examples are provided to show the dependency of the optimal exit price and margin call stock loan value on various parameters.
['Xiaoping Lu', 'Endah R.M. Putri']
Finite maturity margin call stock loans
555,740
We propose an efficient identity-based mutual authentication scheme for GSM. In the proposed scheme, the mobile station (MS) and visitor location register (VLR) authenticate each other and establish a session key to communicate securely for every communication. Our scheme requires less bandwidth and storage. The scheme does not involve certificate management and is resilient to replay and man-in-the-middle attacks. We also compare the performance of our scheme with that of existing schemes.
['K.P. Kumar', 'G. Shailaja', 'Anandan Kavitha', 'A. Saxena']
Mutual Authentication and Key Agreement for GSM
453,623
This correspondence compares OOK and low-order PPM signaling formats in terms of bit error probabilities versus required signal counts per bit. The results show that QPPM requires 3 dB less signal than OOK, while BPPM requires the same or slightly more than OOK for the same performance. Optimum APD gain values are from 200 to 400. When using QPPM, k_eff = 0.006, and optimum gain, 60 signal counts/bit are required at 500 Mbits/s for a 10^-6 bit error probability.
['J. B. Abshire']
Performance of OOK and Low-Order PPM Modulations in Optical Communications When Using APD-Based Receivers
378,173
Value prediction exploits localities in value streams. Previous research focused on exploiting two types of value locality, computational and context-based, in the local value history, which is the value sequence produced by the same instruction that is being predicted. Besides the local value history, value locality also exists in the global value history, which is the value sequence produced by all dynamic instructions according to their execution order. In this paper, a new type of value locality, the computational locality in the global value history, is studied. A novel prediction scheme, called the gDiff predictor, is designed to exploit the most common special case of this computational model, stride-based computation, in the global value history. Such a scheme provides a general framework for exploiting global stride locality in any value stream. Experiments show that there exists a very strong stride type of locality in global value sequences. Ideally, the gDiff predictor can achieve 73% prediction accuracy for all value-producing instructions without any hybrid scheme, much higher than local stride and local context prediction schemes. However, the capability of realistically exploiting locality in the global value history is greatly challenged by the value delay issue, i.e., the correlated value may not be available when the prediction is being made. We study the value delay issue in an out-of-order (OOO) execution pipeline model and propose a new hybrid scheme to maximize the exploitation of the global stride locality. This new hybrid scheme shows 91% prediction accuracy and 64% coverage for all value-producing instructions. We also show that the global stride locality detected by gDiff in load address streams provides strong capabilities in predicting load addresses (63% coverage and 86% accuracy) and in predicting addresses of missing loads (33% coverage and 53% accuracy).
['Huiyang Zhou', 'Jill Flanagan', 'Thomas M. Conte']
Detecting global stride locality in value streams
153,225
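A toy software model of global-stride prediction in the spirit of gDiff: for each static instruction it votes for (distance, stride) pairs over the recent global value history and predicts from the best-supported pair. This is a teaching sketch that ignores the value-delay issue central to the paper; the class name, table sizes, and confidence threshold are assumptions.

```python
from collections import deque, defaultdict

class GDiffToy:
    def __init__(self, depth=8):
        self.history = deque(maxlen=depth)                   # global value history
        self.votes = defaultdict(lambda: defaultdict(int))   # pc -> (d, stride) -> count

    def predict(self, pc):
        """Predict using the most frequently confirmed (distance, stride) pair."""
        if not self.votes[pc] or not self.history:
            return None
        (d, stride), count = max(self.votes[pc].items(), key=lambda kv: kv[1])
        if count >= 2 and len(self.history) >= d:
            return self.history[-d] + stride
        return None

    def update(self, pc, value):
        # Vote for every (distance, stride) pair consistent with this outcome.
        for d in range(1, len(self.history) + 1):
            self.votes[pc][(d, value - self.history[-d])] += 1
        self.history.append(value)

pred = GDiffToy()
hits = 0
for i in range(20):
    pred.update(pc=0x10, value=1000 + 8 * i)   # strided load addresses
    pred.update(pc=0x20, value=i * i)          # unrelated interleaved instruction
    if pred.predict(pc=0x10) == 1000 + 8 * (i + 1):
        hits += 1
print(hits, "correct predictions out of 20")
```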
In distributed heterogeneous grid environments, the protocols used to exchange bits are crucial. As researchers work hard to discover the best new protocol for the grid, application developers struggle with ways to use these new protocols. A stable, consistent, and intuitive framework is needed to aid in the implementation and use of these protocols. While the application must not be burdened with the protocol details, some of them may need to be exposed to take advantage of potential optimizations. In this paper we examine how the Globus XIO API provides this framework. We explore the performance implications of using this abstraction layer and the benefits gained in application as well as protocol development.
['William E. Allcock', 'John Bresnahan', 'K. Kettimuthu', 'J.M. Link']
The globus extensible input/output system (XIO): a protocol independent IO system for the grid
64,398
This thesis explores the use of a recurrent neural network model for a novel story generation task. In this task, the model analyzes an ongoing story and generates a sentence that continues the story.
['Melissa Roemmele']
Writing stories with help from recurrent neural networks
814,422
Pixel fusion is used to build a classification method at the pixel level. It needs to take into account the most accurate information possible and to take advantage of statistical learning from previous measurements acquired by sensors. Classical probabilistic fusion methods perform poorly when the prior learning is not representative of the real measurements provided by the sensors. The Dempster-Shafer theory is then introduced to face this disadvantage by integrating further information, namely the context of the sensor acquisitions. In this paper, we propose a formalism for modeling sensor reliability in context that leads to two methods of integration: the first integrates this further information into the fusion rule as degrees of trust, and the second models the sensor reliability directly as a mass function. These two methods are compared in the case where the sensor reliability depends on an atmospheric disturbance: water vapor.
['Sophie Fabre', 'Xavier Briottet', 'Alain Appriou']
Impact of contextual information integration on pixel fusion
416,745
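The two integration routes the abstract describes can be sketched with Dempster's rule plus classical discounting, where a reliability factor alpha moves mass toward total ignorance before fusion. The frame, sensors, and numbers below are illustrative assumptions.

```python
from itertools import product

def discount(mass, alpha, frame):
    """Shafer discounting: scale masses by reliability alpha and move
    the remainder to total ignorance (the full frame)."""
    out = {A: alpha * m for A, m in mass.items()}
    out[frame] = out.get(frame, 0.0) + (1.0 - alpha)
    return out

def dempster_combine(m1, m2):
    """Dempster's rule of combination; focal elements are frozensets."""
    combined, conflict = {}, 0.0
    for (A, a), (B, b) in product(m1.items(), m2.items()):
        inter = A & B
        if inter:
            combined[inter] = combined.get(inter, 0.0) + a * b
        else:
            conflict += a * b
    return {A: m / (1.0 - conflict) for A, m in combined.items()}

# Two sensors classifying a pixel; frame of discernment {water, land}.
frame = frozenset({'water', 'land'})
m_optical = {frozenset({'water'}): 0.8, frame: 0.2}
m_radar = {frozenset({'land'}): 0.6, frame: 0.4}
# Water vapour degrades the optical sensor: discount it before fusing.
m_optical = discount(m_optical, alpha=0.5, frame=frame)
print(dempster_combine(m_optical, m_radar))
```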
Domain-oriented mobility management schemes like hierarchical mobile IP (HMIP) reduce the overwhelming binding-update signaling of standard mobile IP. The mobile anchor point (MAP), which substitutes for the home agent (HA) in each domain of the network, hides users' mobility from the outer domain. But there are still many deficiencies associated with the structure itself. The standard HMIPv6 structure depends on a single MAP for all users of the domain without considering their mobility patterns. Consequently, the system has to fall back to the mobile IP structure if the MAP functions improperly. Moreover, the vague definition of the domain boundary and the neglect of users' traffic behavior are among the most frequently raised problems. This paper proposes a scheme built on top of the hierarchical architecture of HMIP. It tries to find an appropriate anchor point for each individual mobile user with regard to its mobility pattern and its long-term requested services. Simulation results show a considerable improvement in terms of location update cost and packet delivery delay in comparison with HMIP and two well-known schemes from the literature.
['M. Mousavi', 'Alejandro Quintero']
Selection Mechanism in Hierarchical Mobile IP
237,211
We address the problem of estimating the position and velocity of a radio transmitter moving with constant (unknown) velocity from packet-arrival timestamps collected by a set of anchor nodes at fixed known positions. The considered system is completely asynchronous: no assumption is made about node clock synchronization or about the timing of transmitted packets. A distinguishing feature of the proposed model is that it relies exclusively on reception timestamps, with no need to measure or control transmission times. Because of that, transmitters do not need to cooperate in the tracing process, enabling the opportunistic exploitation of packets that were generated for communication (not localization) purposes. We consider a batch-processing approach, where all the measurements collected within a given observation window are jointly processed. Different generalized least squares formulations are provided for the problem at hand and their equivalence is proved.
['Fabio Ricciato', 'Savio Sciancalepore', 'Gennaro Boggia']
Tracing a Linearly Moving Node From Asynchronous Time-of-Arrival Measurements
823,875
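A minimal nonlinear least-squares sketch of the batch estimation problem, using scipy rather than the paper's generalized least squares formulations, and assuming mutually synchronized anchor clocks (a simplification of the fully asynchronous setting). Unknowns are the initial position, the velocity, and one emission time per packet; all names and numbers are illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

C = 3e8  # propagation speed in m/s

def residuals(theta, anchors, stamps):
    """theta = [x0, y0, vx, vy, tau_1..tau_m]; stamps[i, j] is the reception
    timestamp at anchor i for packet j.  Residuals are scaled to meters
    for better numerical conditioning."""
    p0, v, taus = theta[:2], theta[2:4], theta[4:]
    res = []
    for j, tau in enumerate(taus):
        pos = p0 + v * tau                  # emitter position at emission time
        for a in anchors:
            res.append(C * (stamps[anchors.tolist().index(a.tolist()) if False else 0, j] - tau) - np.linalg.norm(a - pos))  # placeholder, replaced below
    return res

def residuals(theta, anchors, stamps):      # clean version used below
    p0, v, taus = theta[:2], theta[2:4], theta[4:]
    res = []
    for j, tau in enumerate(taus):
        pos = p0 + v * tau
        for i, a in enumerate(anchors):
            res.append(C * (stamps[i, j] - tau) - np.linalg.norm(a - pos))
    return res

rng = np.random.default_rng(3)
anchors = np.array([[0.0, 0.0], [500.0, 0.0], [0.0, 500.0], [500.0, 500.0]])
p0_true, v_true = np.array([120.0, 80.0]), np.array([10.0, -5.0])
taus_true = np.arange(0.0, 5.0, 0.5)        # unknown emission times, ten packets
stamps = np.array([[tau + np.linalg.norm(a - (p0_true + v_true * tau)) / C
                    + rng.normal(0.0, 1e-9) for tau in taus_true] for a in anchors])

# Initialize emission times from the earliest reception of each packet.
theta0 = np.concatenate([[250.0, 250.0, 0.0, 0.0], stamps.min(axis=0)])
fit = least_squares(residuals, theta0, args=(anchors, stamps))
print("position:", fit.x[:2].round(1), "velocity:", fit.x[2:4].round(1))
```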
Does prior knowledge reveal cognitive and metacognitive processes during learning with a hypermedia-learning system based on eye-tracking data?
['Michelle Taub', 'Jesse J. Farnsworth', 'Roger Azevedo']
Does prior knowledge reveal cognitive and metacognitive processes during learning with a hypermedia-learning system based on eye-tracking data?
769,230
Traffic classification is a core problem underlying efficient implementation of network services. In this work we draw from our experience in classifier design for commercial systems to address this problem in SDN and OpenFlow. We identify methods from other fields of computer science and show research directions that can be applied for efficient design of packet classifiers. Proposed abstractions and design patterns can significantly reduce requirements on network elements and enable deployment of functionality that would be infeasible in a traditional way.
['Kirill Kogan', 'Sergey I. Nikolenko', 'William Culhane', 'Patrick Eugster', 'Eddie Ruan']
Towards efficient implementation of packet classifiers in SDN/OpenFlow
410,344
Interval-valued intuitionistic fuzzy sets have received great attention from researchers because they can comprehensively depict the characteristics of things. In the past few years, some scholars have investigated the calculus of intuitionistic fuzzy information, but as yet there is no research on integrals in the interval-valued intuitionistic fuzzy setting. To fill this gap, in this paper we focus on investigating the integrals of simplified interval-valued intuitionistic fuzzy functions (SIVIFFs) and give their application in group decision making. We first develop the indefinite and definite integrals of SIVIFFs and study their characteristics in detail. Then we establish the relationship between these two classes of integrals by giving two Newton-Leibniz formulas for SIVIFFs. Finally, a practical example concerning the park siting problem is given to illustrate the application of simplified interval-valued intuitionistic fuzzy integrals.
['Peijia Ren', 'Zeshui Xu', 'Hua Zhao', 'Jiuping Xu']
Simplified interval-valued intuitionistic fuzzy integrals and their use in park siting
583,403
Localization is an essential issue in wireless sensor networks. Introducing mobility into the localization process yields improvements in several respects. Static path planning is one of a number of mobility models used for localization in wireless sensor networks. Most static path planning models depend on trilateration or triangulation between unknown nodes and anchors in a direct-connection fashion for successful node localization; however, such methods are insufficient in cases of mobility discontinuity. Considering scenarios where the mobile anchor has limited movement, in this paper we propose using the DV-Hop technique to increase the localization ratio in static path planning models in wireless sensor networks.
['Abdullah Alomari', 'Nauman Aslam', 'William J. Phillips', 'Frank Comeau']
Using the DV-Hop technique to increase the localization ratio in static path planning models in wireless sensor networks
893,357
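For reference, a compact sketch of the classic DV-Hop technique the abstract builds on: BFS hop counts from each anchor, a per-anchor average hop size calibrated on inter-anchor distances, and linearized least-squares multilateration. It assumes a connected network and is not the paper's path-planning contribution.

```python
import numpy as np
from collections import deque

def hop_counts(adj, source, n):
    """BFS hop counts from `source` over adjacency lists."""
    hops = [None] * n
    hops[source] = 0
    q = deque([source])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if hops[w] is None:
                hops[w] = hops[u] + 1
                q.append(w)
    return hops

def dv_hop(positions, anchors, radio_range):
    n = len(positions)
    adj = [[j for j in range(n) if j != i
            and np.linalg.norm(positions[i] - positions[j]) <= radio_range]
           for i in range(n)]
    hops = {a: hop_counts(adj, a, n) for a in anchors}
    # Average hop size per anchor, calibrated on known anchor positions.
    hop_size = {a: sum(np.linalg.norm(positions[a] - positions[b])
                       for b in anchors if b != a)
                   / sum(hops[a][b] for b in anchors if b != a)
                for a in anchors}
    estimates = {}
    for i in range(n):
        if i in anchors or any(hops[a][i] is None for a in anchors):
            continue
        d = {a: hop_size[a] * hops[a][i] for a in anchors}
        # Linearize by subtracting the last anchor's range equation.
        ref = anchors[-1]
        A = np.array([2 * (positions[ref] - positions[a]) for a in anchors[:-1]])
        b = np.array([d[a] ** 2 - d[ref] ** 2
                      - positions[a] @ positions[a]
                      + positions[ref] @ positions[ref] for a in anchors[:-1]])
        estimates[i], *_ = np.linalg.lstsq(A, b, rcond=None)
    return estimates

rng = np.random.default_rng(5)
pos = rng.uniform(0, 100, (40, 2))
est = dv_hop(pos, anchors=[0, 1, 2, 3], radio_range=25.0)
errs = [np.linalg.norm(est[i] - pos[i]) for i in est]
print(f"mean error: {np.mean(errs):.1f} m over {len(errs)} nodes")
```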
In this paper we propose a novel scheme for locating text regions in an image. The method is based on multiresolution wavelet analysis. We use matched wavelets to capture the textural characteristics of image regions. A clustering-based approach is proposed for estimating globally matched wavelets (GMWs) for a given collection of images. Using these GMWs, we generate feature vectors for segmentation and identification of text regions in an image. Our method, unlike most other methods, does not require any a priori information about the font, font size, script, geometric transformation, distortion, or background texture. We have tested our method on various categories of images such as license plates, posters, hand-written documents, and document images. The results show the proposed method to be a robust, versatile, and effective tool for text extraction from images.
['Sunil Kumar', 'Nitin Khanna', 'Santanu Chaudhury', 'Shiv Dutt Joshi']
Locating text in images using matched wavelets
405,007
The detection of consistent feature points in an image is fundamental for various kinds of computer vision techniques, such as stereo matching, object recognition, target tracking and optical flow computation. This paper presents an event-based approach to the detection of corner points, which benefits from the high temporal resolution, compressed visual information and low latency provided by an asynchronous neuromorphic event-based camera. The proposed method adapts the commonly used Harris corner detector to the event-based data, in which frames are replaced by a stream of asynchronous events produced in response to local light changes at μs temporal resolution. Responding only to changes in its field of view, an event-based camera naturally enhances edges in the scene, simplifying the detection of corner features. We characterised and tested the method on both a controlled pattern and a real scenario, using the dynamic vision sensor (DVS) on the neuromorphic iCub robot. The method detects corners with a typical error distribution within 2 pixels. The error is constant for different motion velocities and directions, indicating a consistent detection across the scene and over time. We achieve a detection rate proportional to speed, higher than frame-based technique for a significant amount of motion in the scene, while also reducing the computational cost.
['Valentina Vasco', 'Arren Glover', 'Chiara Bartolozzi']
Fast event-based Harris corner detection exploiting the advantages of event-driven cameras
960,189
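A minimal sketch of the adapted Harris test: events within a recent time window form a binary local patch (a crude surface of active events), and the standard structure-tensor response separates corners from straight edges. Window length, patch size, threshold, and the synthetic L-shaped edge are illustrative assumptions.

```python
import numpy as np

def harris_response(patch, k=0.04):
    """Harris corner response on a small binary event patch."""
    iy, ix = np.gradient(patch.astype(float))
    # Structure tensor, summed over the patch (box window).
    sxx, syy, sxy = (ix * ix).sum(), (iy * iy).sum(), (ix * iy).sum()
    det = sxx * syy - sxy ** 2
    trace = sxx + syy
    return det - k * trace ** 2

def is_corner(events, x, y, t, window=50_000, half=3, threshold=1.0):
    """Evaluate the event at (x, y, t): build the local binary surface
    from events within `window` microseconds and threshold the response."""
    patch = np.zeros((2 * half + 1, 2 * half + 1))
    for ex, ey, et in events:
        if t - et <= window and abs(ex - x) <= half and abs(ey - y) <= half:
            patch[ey - y + half, ex - x + half] = 1.0
    return harris_response(patch) > threshold

# Synthetic L-shaped contour: events along two perpendicular lines.
events = [(x, 10, 1_000 * x) for x in range(5, 15)] + \
         [(10, y, 1_000 * (15 + y)) for y in range(5, 15)]
print(is_corner(events, 10, 10, t=30_000))   # corner of the L -> True
print(is_corner(events, 6, 10, t=30_000))    # straight edge   -> False
```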
Software development is a complex undertaking that continues to present software project teams with numerous challenges. Software project teams are adopting extreme programming (XP) practices in order to overcome the challenges of software development in an increasingly dynamic environment. The ability to coordinate software developers' efforts is critical in such conditions. Expertise coordination has been identified as an important emergent process through which software project teams manage non-routine challenges in software development. However, the extent to which XP enables software project teams to coordinate expertise is unknown. Drawing on the agile development and expertise coordination literatures, we examine the role of collective ownership and coding standards as processes and practices that govern coordination in software project teams. We examine the relationship between collective ownership, coding standards, expertise coordination, and software project technical quality in a field study of 56 software project teams comprising 509 software developers. We found that collective ownership and coding standards play a role in improving software project technical quality. We also found that collective ownership and coding standards moderated the relationship between expertise coordination and software project technical quality, with collective ownership attenuating the relationship and coding standards strengthening the relationship. Theoretical and practical implications of the findings are discussed.
['Likoebe M. Maruping', 'Xiaojun Zhang', 'Viswanath Venkatesh']
Role of collective ownership and coding standards in coordinating expertise in software project teams
259,900
The growing sizes of volumetric data sets pose a great challenge for interactive visualization. In this paper, we present a feature-preserving data reduction and focus+context visualization method based on transfer function driven, continuous voxel repositioning and resampling techniques. Rendering reduced data can enhance interactivity. Focus+context visualization can show details of selected features in context on display devices with limited resolution. Our method utilizes the input transfer function to assign importance values to regularly partitioned regions of the volume data. According to user interaction, it can then magnify regions corresponding to the features of interest while compressing the rest by deforming the 3D mesh. The level of data reduction achieved is significant enough to improve overall efficiency. By using continuous deformation, our method avoids the need to smooth the transition between low and high-resolution regions as often required by multiresolution methods. Furthermore, it is particularly attractive for focus+context visualization of multiple features. We demonstrate the effectiveness and efficiency of our method with several volume data sets from medical applications and scientific simulations.
['Yu-Shuen Wang', 'Chaoli Wang', 'Tong-Yee Lee', 'Kwan-Liu Ma']
Feature-Preserving Volume Data Reduction and Focus+Context Visualization
317,212
Delay-tolerant networks (DTNs) are characterized by a possible absence of end-to-end communication routes at any instant. In most cases, however, a form of connectivity can be established over time and space. This particularity leads us to consider the relevance of a given route not only in terms of hops (topological length), but also in terms of time (temporal length). The problem of measuring temporal distances between individuals in a social network was recently addressed, based on a posteriori analysis of interaction traces. This paper focuses on the distributed version of this problem, asking whether every node in a network can know precisely and in real time how out-of-date it is with respect to every other. Answering affirmatively is simple when contacts between the nodes are punctual, using the temporal adaptation of vector clocks provided in (Kossinets et al., 2008). It becomes more difficult when contacts have a duration and can overlap in time with each other. We demonstrate that the problem remains solvable with arbitrarily long contacts and non-instantaneous (though invariant and known) propagation delays on edges. This is done constructively by extending the temporal adaptation of vector clocks to non-punctual causality. The second part of the paper discusses how the knowledge of temporal lags could be used as a building block to solve more concrete problems, such as the construction of foremost broadcast trees or network backbones in periodically-varying DTNs.
['Arnaud Casteigts', 'Paola Flocchini', 'Bernard Mans', 'Nicola Santoro']
Measuring Temporal Lags in Delay-Tolerant Networks
517,463
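Related to the foremost broadcast structures mentioned at the end of the abstract, here is a centralized earliest-arrival sketch over interval contacts, assuming bidirectional contacts and instantaneous propagation (the paper also handles invariant non-zero delays, and does so in a distributed fashion).

```python
import heapq
from collections import defaultdict

def foremost_arrival(contacts, source, t0=0.0):
    """Earliest-arrival (foremost) times from `source` in a DTN whose
    contacts are intervals (u, v, start, end).  Dijkstra-style: arrival
    times can only improve when nodes are popped in time order."""
    adj = defaultdict(list)
    for u, v, s, e in contacts:
        adj[u].append((v, s, e))
        adj[v].append((u, s, e))
    arrival = {source: t0}
    heap = [(t0, source)]
    while heap:
        t, u = heapq.heappop(heap)
        if t > arrival.get(u, float('inf')):
            continue
        for v, s, e in adj[u]:
            if t <= e:                    # contact still (or later) open
                cross = max(t, s)         # wait for the contact to start
                if cross < arrival.get(v, float('inf')):
                    arrival[v] = cross
                    heapq.heappush(heap, (cross, v))
    return arrival

contacts = [('a', 'b', 1, 3), ('b', 'c', 2, 4), ('a', 'c', 6, 9)]
print(foremost_arrival(contacts, 'a'))
# temporal lag a -> c is 2 (via b at time 2), not 6 (direct contact)
```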
Leveraging Stratification in Twitter Sampling
['Vikas Joshi', 'Deepak S. Padmanabhan', 'L. V. Subramaniam']
Leveraging Stratification in Twitter Sampling
993,521
Determining in advance all objects that a robot will interact with in an open environment is very challenging, if not impossible. This makes it difficult to develop models that allow the robot to perceive and recognize objects, to interact with them, and to predict how these objects will react to interactions with other objects or with the robot. Developmental robotics proposes to make robots learn such models by themselves through a dedicated exploration step. This raises a chicken-and-egg problem: the robot needs to learn about objects to discover how to interact with them and, to this end, it needs to interact with them. In this work, we propose Novelty-driven Evolutionary Babbling (NovEB), an approach that bootstraps this process and acquires knowledge about objects in the surrounding environment without requiring a priori knowledge about the environment, including the objects, or about the means to interact with them. Our approach consists of using an evolutionary algorithm driven by a novelty criterion defined on the raw sensorimotor flow: behaviours, described by a trajectory of the robot's end effector, are generated with the goal of maximizing the novelty of raw perceptions. The approach is tested on a simulated PR2 robot and is compared to random motor babbling.
['Carlos Rizo Maestre', 'Antoine Cully', 'Christophe Gonzales', 'Stéphane Doncieux']
Bootstrapping interactions with objects from raw sensorimotor data: A novelty search based approach
564,503
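A sketch of the core novelty-driven loop: candidates are scored by mean distance to their k nearest neighbours in an archive of behaviour descriptors, the most novel enter the archive, and the rest are mutated. The toy behaviour mapping, population sizes, and mutation scale are assumptions, not the authors' exact NovEB setup.

```python
import numpy as np

rng = np.random.default_rng(2)

def behaviour(genome):
    """Behaviour descriptor: final 2-D position of a toy end effector
    driven by the genome (a stand-in for the raw sensorimotor flow)."""
    angles = np.cumsum(genome)
    return np.array([np.cos(angles).sum(), np.sin(angles).sum()])

def novelty(desc, archive, k=5):
    """Novelty = mean distance to the k nearest descriptors in the archive."""
    if not archive:
        return np.inf
    dists = np.sort([np.linalg.norm(desc - a) for a in archive])
    return dists[:k].mean()

population = [rng.normal(0, 0.3, 6) for _ in range(20)]
archive = []
for generation in range(30):
    scored = sorted(population, key=lambda g: novelty(behaviour(g), archive),
                    reverse=True)
    archive.extend(behaviour(g) for g in scored[:3])   # most novel enter archive
    parents = scored[:10]
    population = [p + rng.normal(0, 0.1, 6) for p in parents for _ in range(2)]
print(f"archive spans x in [{min(a[0] for a in archive):.1f}, "
      f"{max(a[0] for a in archive):.1f}]")
```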
The problem of tracking time-varying parameters in cellular radio systems is studied, with the focus on estimation based only on the signals that are readily available. Previous work has demonstrated very good performance, but relied on analog measurements that are not available. Most of the information is lost due to quantization and sampling at a rate that might be as low as 2 Hz (in the GSM case). For that reason, a maximum likelihood estimator has been designed and exemplified in the case of GSM. Simulations indicate good performance both when most parameters vary slowly and when they are subject to fast variations, as in realistic cases. Since most computations take place in the base stations, the estimator is ready for implementation in a second-generation wireless system. No update of the software in the mobile stations is needed.
['Jonas Blom', 'Fredrik Gunnarsson', 'Fredrik Gustafsson']
Estimation in cellular radio systems
529,550
Annoying shaky motion is one of the significant problems in home videos, since hand shake is an unavoidable effect when capturing with a hand-held camcorder. Video stabilization is an important technique for solving this problem, but the stabilized videos resulting from some current methods usually have decreased resolution and are still not so stable. In this paper, we propose a robust and practical method of full-frame video stabilization that considers the user's capturing intention in order to remove not only the high-frequency shaky motions but also the low-frequency unexpected movements. To infer the user's capturing intention, we first consider the regions of interest in the video to estimate which regions or objects the user wants to capture, and then use a polyline to estimate a new stable camcorder motion path while avoiding having the user's regions or objects of interest cut out. Then, we fill the dynamic and static missing areas caused by frame alignment from other frames to keep the same resolution and quality as the original video. Furthermore, we smooth the discontinuous regions using a three-dimensional Poisson-based method. After these automatic operations, a full-frame stabilized video is achieved and the important regions and objects are preserved.
['Bing-Yu Chen', 'Ken-Yi Lee', 'Wei-Ting Huang', 'Jong-Shan Lin']
Capturing Intention-based Full-Frame Video Stabilization
30,838
Semi-supervised lexical acquisition for wide-coverage parsing
['Emily Thomforde']
Semi-supervised lexical acquisition for wide-coverage parsing
515,576
Wireless ad hoc networks are gaining popularity as these networks are self-organizing, without requiring fixed infrastructure such as servers or access points. Nodes in wireless ad hoc networks are typically low-power devices, and in some large-scale ad hoc networks, such as wireless sensor networks (WSNs), there might be tens of thousands of low-power, energy-constrained nodes in the network. In order to secure group communication for a wireless ad hoc network, the low-power nature of the nodes and the network size have to be taken into consideration. In this paper, we propose an energy-efficient and scalable group key agreement (GKA) scheme for wireless ad hoc networks, which uses a generalized circular hierarchical (C-H) group model, where the network is partitioned into subgroups at h different layers and each subgroup is arranged in a circle. Next, we describe the computational and communication energy analysis of a typical node found in ad hoc networks and provide some formulas that can be used to calculate the energy consumption costs for protocols implemented using different microprocessors and radio transceiver modules. A complexity analysis and an energy consumption cost analysis conclude that our proposed scheme is the most energy-efficient and scalable GKA scheme compared to three other GKA protocols.
['Joseph Chee Ming Teo', 'Chik How Tan']
Energy-efficient and scalable group key agreement for large ad hoc networks
175,919
The explosive growth of Web 2.0, which was characterized by the creation of online social networks, has reignited the study of factors that could help us understand the growth and dynamism of these networks. Various generative network models have been proposed, including the Barabasi-Albert and Watts-Strogatz models. In this study, we revisit the problem from a perspective that seeks to compare results obtained from these generative models with those from real networks. To this end, we consider the dating network Skout Inc. An analysis is performed on the topological characteristics of the network that could explain the creation of new network links. Afterwards, the results are contrasted with those obtained from the Barabasi-Albert and Watts-Strogatz generative models. We conclude that a key factor that could explain the creation of links originates in its cluster structure, where link recommendations are more precise in Watts-Strogatz segmented networks than in Barabasi-Albert hierarchical networks. This result reinforces the need to establish more and better network segmentation algorithms that are capable of clustering large networks precisely and efficiently.
['Marcelo Mendoza', 'Matías Estrada']
Revisiting Link Prediction: Evolving Models and Real Data Findings
920,477
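The cluster-structure argument can be checked directly with networkx: a Watts-Strogatz graph keeps high clustering while a Barabasi-Albert graph does not, which is what gives neighbourhood-based recommenders more signal in WS-like networks. The parameters below are illustrative.

```python
from itertools import islice
import networkx as nx

n = 2000
ba = nx.barabasi_albert_graph(n, m=3, seed=42)         # hierarchical, hub-driven
ws = nx.watts_strogatz_graph(n, k=6, p=0.05, seed=42)  # segmented, clustered

print(f"BA average clustering: {nx.average_clustering(ba):.3f}")
print(f"WS average clustering: {nx.average_clustering(ws):.3f}")

# Jaccard coefficient (a common-neighbour recommender) over candidate
# non-edges illustrates where neighbourhood-based scores carry signal.
for name, g in [("BA", ba), ("WS", ws)]:
    pairs = list(islice(nx.non_edges(g), 5000))
    scores = [s for _, _, s in nx.jaccard_coefficient(g, pairs)]
    positive = sum(1 for s in scores if s > 0)
    print(f"{name}: {positive}/{len(pairs)} candidate pairs share neighbours")
```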
This paper describes a proposed modeling and design environment for teaching the concepts of performance modeling of hardware/software systems to senior computer engineering undergraduate students. This environment is being developed to support senior capstone design projects in computer engineering. Portions of this environment are currently being beta tested and educational material, including lecture slides and laboratory exercises, based on the use of the environment are being developed.
['Robert H. Klenke', 'James H. Aylor']
A proposed modeling environment to teach performance modeling and hardware/software codesign to senior undergraduates
209,958
Microblogging services have become increasingly popular for people to exchange their feelings and opinions. Extracting and analyzing the sentiments in microblogs have drawn extensive attention from both academic researchers and commercial companies. The previous literature usually focused on classifying microblogs into positive or negative categories. However, people's sentiments are much more complex, and multiple fine-grained emotions may coexist in just one short microblog text. In this paper, we regard emotion analysis as a multi-label learning problem and propose a novel calibrated-label-ranking-based framework for detecting multiple fine-grained emotions in Chinese microblogs. We combine a learning-based method and a lexicon-based method to build unified emotion classifiers, which alleviates the sparsity of the training microblog dataset. Experimental results using the NLPCC 2014 evaluation dataset show that our proposed algorithm achieved the best performance and significantly outperforms other participants' methods.
['Mingqiang Wang', 'Mengting Liu', 'Shi Feng', 'Daling Wang', 'Yifei Zhang']
A Novel Calibrated Label Ranking Based Method for Multiple Emotions Detection in Chinese Microblogs
267,586