fe13e79621be1fea2f6f4f37417155fb7079b05a
A family of switched-capacitor resonant circuits using only two transistors is presented. The circuit operates under zero-current switching and, therefore, the switching loss is zero. It also offers a wide choice of voltage conversions including fractional as well as multiple and inverted voltage conversion ratios.
90fcb6bd123a88bc6be5ea233351f0e12d517f98
ac7023994da7768224e76d35c6178db36062182c
050b64c2343ef3c7f0c60285e4429e9bb8175dff
Data is increasingly affecting the automotive industry, from vehicle development, to manufacturing and service processes, to online services centered around the connected vehicle. Connected, mobile and Internet of Things devices and machines generate immense amounts of sensor data. The ability to process and analyze this data to extract insights and knowledge that enable intelligent services, new ways to understand business problems, improvements of processes and decisions, is a critical capability. Hadoop is a scalable platform for compute and storage and has emerged as the de facto standard for Big Data processing at Internet companies and in the scientific community. However, there is a lack of understanding of how and for what use cases these new Hadoop capabilities can be efficiently used to augment automotive applications and systems. This paper surveys use cases and applications for deploying Hadoop in the automotive industry. Over the years, a rich ecosystem has emerged around Hadoop, comprising tools for parallel, in-memory and stream processing (most notably MapReduce and Spark), SQL and NoSQL engines (Hive, HBase), and machine learning (Mahout, MLlib). It is critical to develop an understanding of automotive applications and their characteristics and requirements for data discovery, integration, exploration and analytics. We then map these requirements to a confined technical architecture consisting of core Hadoop services and libraries for data ingest, processing and analytics. The objective of this paper is to address questions such as: What applications and datasets are suitable for Hadoop? How can a diverse set of frameworks and tools be managed on a multi-tenant Hadoop cluster? How do these tools integrate with existing relational data management systems? How can enterprise security requirements be addressed? What are the performance characteristics of these tools for real-world automotive applications? To address the last question, we utilize a standard benchmark (TPCx-HS) and two application benchmarks (SQL and machine learning) that operate on a dataset of multiple terabytes and billions of rows.
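For illustration, a minimal PySpark sketch of the kind of cluster-side aggregation such an architecture enables; the HDFS path and column names are hypothetical, not taken from the paper.

```python
# Hypothetical sketch: aggregating vehicle sensor readings with Spark,
# one of the Hadoop-ecosystem tools the survey covers.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("vehicle-sensor-agg").getOrCreate()

# Billions of rows of sensor data, e.g. ingested into HDFS as Parquet
readings = spark.read.parquet("hdfs:///data/vehicle_sensors")

# Per-vehicle daily aggregates, computed in parallel across the cluster
daily = (readings
         .groupBy("vehicle_id", F.to_date("timestamp").alias("day"))
         .agg(F.avg("engine_temp").alias("avg_engine_temp"),
              F.max("speed").alias("max_speed")))

daily.write.mode("overwrite").parquet("hdfs:///data/vehicle_daily_agg")
```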
b8a0cfa55b3393de4cc600d115cf6adb49bfa4ee
The increasing use of social networks and online sites where people can express their opinions has created a growing interest in Opinion Mining. One of the main tasks of Opinion Mining is to determine whether an opinion is positive or negative. Therefore, the role of the feelings expressed on the web has become crucial, mainly due to the concern of businesses and government to automatically identify the semantic orientation of the views of customers or citizens. This is also a concern in the area of health, for identifying psychological disorders. This research focuses on the development of a web application called SWePT (Web Service for Polarity detection in Spanish Texts), which implements the Sequential Minimal Optimization (SMO) algorithm, extracting its features from an affective lexicon in Mexican Spanish. For this purpose, a corpus and an affective lexicon in Mexican Spanish were created. The experiments using three categories (positive, neutral, negative) and five categories (very positive, positive, neutral, negative, and very negative) allow us to demonstrate the effectiveness of the presented method. SWePT has also been implemented in the Emotion-bracelet interface, which shows the opinion of a user graphically.
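As a rough sketch of the polarity-classification step: scikit-learn's SVC (which uses libsvm's SMO algorithm) trained on lexicon-derived features. The toy Spanish lexicon and example texts are illustrative stand-ins for the paper's Mexican Spanish resources.

```python
# Minimal sketch of an SMO-trained SVM over affective-lexicon features.
import numpy as np
from sklearn.svm import SVC  # libsvm's SMO algorithm underneath

lexicon = {"excelente": 2, "bueno": 1, "neutral": 0, "malo": -1, "pesimo": -2}

def features(text):
    scores = [lexicon[w] for w in text.lower().split() if w in lexicon]
    # counts of positive/negative lexicon hits plus mean polarity
    return [sum(s > 0 for s in scores), sum(s < 0 for s in scores),
            np.mean(scores) if scores else 0.0]

texts = ["excelente servicio muy bueno", "servicio malo muy pesimo", "servicio neutral"]
labels = [1, -1, 0]  # positive / negative / neutral

clf = SVC(kernel="linear").fit([features(t) for t in texts], labels)
print(clf.predict([features("bueno excelente")]))  # -> [1]
```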
f176b7177228c1a18793cf922455545d408a65ae
Multi-converter power electronic systems exist in land, sea, air, and space vehicles. In these systems, load converters exhibit constant power load (CPL) behavior for the feeder converters and tend to destabilize the system. In this paper, the implementation of novel active-damping techniques on dc/dc converters has been shown. Moreover, the proposed active-damping method is used to overcome the negative impedance instability problem caused by the CPLs. The effectiveness of the new proposed approach has been verified by PSpice simulations and experimental results.
5bf4644c104ac6778a0aa07418321b14e0010e81
The interaction between drivers and their cars will change significantly with the introduction of autonomous vehicles. The driver's role will shift towards a supervisory control of their autonomous vehicle. The eventual relief from the driving task enables a complete new area of research and practice in human-computer interaction and interaction design. In this one-day workshop, participants will explore the opportunities the design space of autonomous driving will bring to HCI researchers and designers. On the day before the workshop, participants are invited to visit (together with the workshop organizers) Google Partnerplex and Stanford University. At Google, participants will have the opportunity to explore Google's autonomous car simulator and might have the chance to experience one of the Google Cars (if available). At Stanford, participants are invited to ride in a Wizard-of-Oz autonomous vehicle. Based on this first-hand experience, we will discuss design approaches and prototype interaction systems during the next day's workshop. The outcome of this workshop will be a set of concepts, interaction sketches, and low-fidelity paper prototypes that address constraints and potentials of driving in an autonomous car.
8e79e46513e83bad37a029d1c49fca4a1c204738
We introduce a neural semantic parser which is interpretable and scalable. Our model converts natural language utterances to intermediate, domain-general natural language representations in the form of predicate-argument structures, which are induced with a transition system and subsequently mapped to target domains. The semantic parser is trained end-to-end using annotated logical forms or their denotations. We achieve the state of the art on SPADES and GRAPHQUESTIONS and obtain competitive results on GEOQUERY and WEBQUESTIONS. The induced predicate-argument structures shed light on the types of representations useful for semantic parsing and how these are different from linguistically motivated ones.
a1d326e7710cb9a1464ef52ca557a20ea5aa7e91
In this work, we present a 4-band dual-polarized antenna designed for 8-channel applications. Based on LTCC technology, the antenna is an aperture-coupled patch with a backed cavity. Each antenna element of a designated band contains two channels through two orthogonally polarized ports. By combining four dual-polarized antenna elements at different frequencies, an 8-channel antenna for 60 GHz applications can be achieved. The array antenna contains 8 feeding ports, which correspond to 8 independent channels. The isolation between each pair of ports reaches 20 dB in most of the frequency band.
cbd92fac853bfb56fc1c3752574dc0831d8bc181
We present a framework for information retrieval that combines document models and query models using a probabilistic ranking function based on Bayesian decision theory. The framework suggests an operational retrieval model that extends recent developments in the language modeling approach to information retrieval. A language model for each document is estimated, as well as a language model for each query, and the retrieval problem is cast in terms of risk minimization. The query language model can be exploited to model user preferences, the context of a query, synonymy and word senses. While recent work has incorporated word translation models for this purpose, we introduce a new method using Markov chains defined on a set of documents to estimate the query models. The Markov chain method has connections to algorithms from link analysis and social networks. The new approach is evaluated on TREC collections and compared to the basic language modeling approach and vector space models together with query expansion using Rocchio. Significant improvements are obtained over standard query expansion methods for strong baseline TF-IDF systems, with the greatest improvements attained for short queries on Web data.
61d234dd4f7b733e5acf2550badcf1e9333b6de1
In urban environments, moving obstacles detection and free space determination are key issues for driving assistance systems and autonomous vehicles. When using lidar sensors scanning in front of the vehicle, uncertainty arises from ignorance and errors. Ignorance is due to the perception of new areas and errors come from imprecise pose estimation and noisy measurements. Complexity is also increased when the lidar provides multi-echo and multi-layer information. This paper presents an occupancy grid framework that has been designed to manage these different sources of uncertainty. A way to address this problem is to use grids projected onto the road surface in global and local frames. The global one generates the mapping and the local one is used to deal with moving objects. A credibilist approach is used to model the sensor information and to do a global fusion with the world-fixed map. Outdoor experimental results carried out with a precise positioning system show that such a perception strategy significantly increases the performance compared to a standard approach.
3bad518b0f56e72efadc4791a2bd65aaeaf47ec1
The aim of the research is to identify and evaluate the main problems experienced in the ERP post-implementation stage of multinational, privately-owned Egyptian and governmental organizations in Egypt. Data gathering was achieved by means of a set of interviews and an online questionnaire administered to 50 companies implementing ERP in Egypt. The paper presents a descriptive analysis of the difficulties and problems encountered by organizations in Egypt following ERP implementation and how these have contributed to unsuccessful implementation overall.
749546a58a1d46335de785c41a3eae977e84a0df
The objective of machine learning is to identify a model that yields good generalization performance. This involves repeatedly selecting a hypothesis class, searching the hypothesis class by minimizing a given objective function over the model's parameter space, and evaluating the generalization performance of the resulting model. This search can be computationally intensive as training data continuously arrives, or as one needs to tune hyperparameters in the hypothesis class and the objective function. In this paper, we present a framework for exact incremental learning and adaptation of support vector machine (SVM) classifiers. The approach is general and allows one to learn and unlearn individual or multiple examples, adapt the current SVM to changes in regularization and kernel parameters, and evaluate generalization performance through exact leave-one-out error estimation. I. INTRODUCTION. SVM techniques for classification and regression provide powerful tools for learning models that generalize well even in sparse, high dimensional settings. Their success can be attributed to Vapnik's seminal work in statistical learning theory [15], which provided key insights into the factors affecting generalization performance. SVM learning can be viewed as a practical implementation of Vapnik's structural risk minimization induction principle, which involves searching over hypothesis classes of varying capacity to find the model with the best generalization performance. SVM classifiers of the form $f(x) = w \cdot \Phi(x) + b$ are learned from the data $\{(x_i, y_i) \in \mathbb{R}^m \times \{-1, 1\},\ i = 1, \ldots, N\}$ by minimizing $\min_{w,b,\xi} \frac{1}{2}\|w\|^2 + C \sum_{i=1}^{N} \xi_i$.
6df617304e9f1185694f11ca5cae5c27e868809b
Wireless microsensor networks have been identified as one of the most important technologies for the 21st century. This paper traces the history of research in sensor networks over the past three decades, including two important programs of the Defense Advanced Research Projects Agency (DARPA) spanning this period: the Distributed Sensor Networks (DSN) and the Sensor Information Technology (SensIT) programs. Technology trends that impact the development of sensor networks are reviewed, and new applications such as infrastructure security, habitat monitoring, and traffic control are presented. Technical challenges in sensor network development include network discovery, control and routing, collaborative signal and information processing, tasking and querying, and security. The paper concludes by presenting some recent research results in sensor network algorithms, including localized algorithms and directed diffusion, distributed tracking in wireless ad hoc networks, and distributed classification using local agents.
3b655db109beaae48b238045cf9618418e349f36
Fitting data by a bounded complexity linear model is equivalent to low-rank approximation of a matrix constructed from the data. The data matrix being Hankel structured is equivalent to the existence of a linear time-invariant system that fits the data and the rank constraint is related to a bound on the model complexity. In the special case of fitting by a static model, the data matrix and its low-rank approximation are unstructured. We outline applications in system theory (approximate realization, model reduction, output error, and errors-in-variables identification), signal processing (harmonic retrieval, sum-of-damped exponentials, and finite impulse response modeling), and computer algebra (approximate common divisor). Algorithms based on heuristics and local optimization methods are presented. Generalizations of the low-rank approximation problem result from different approximation criteria (e.g., weighted norm) and constraints on the data matrix (e.g., nonnegativity). Related problems are rank minimization and structured pseudospectra.
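A minimal numerical illustration of the unstructured case: build a Hankel matrix from a time series and compute its rank-r approximation by truncated SVD. (The structured variants discussed in the paper additionally require the approximant to remain Hankel, which plain truncated SVD does not enforce.)

```python
# Hankel matrix of a time series and its rank-2 approximation via SVD.
import numpy as np
from scipy.linalg import hankel

w = np.sin(0.3 * np.arange(20))          # data: a sampled sinusoid
H = hankel(w[:10], w[9:])                # 10 x 11 Hankel matrix of the data

U, s, Vt = np.linalg.svd(H, full_matrices=False)
r = 2                                    # complexity bound: rank <= 2
H_r = U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]

print(np.linalg.norm(H - H_r))           # ~0: a rank-2 system explains a sinusoid
```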
ee1140f49c2f1ce32d0ed9404078c724429cc487
This paper gives the design of a highly compact comparator at a Ku-band frequency and presents analysis results of the comparator for fabrication inaccuracies. First, an unconventional magic-T using a nonstandard waveguide is designed at 15.50 GHz. To reduce the volume occupied by the magic-T, its E-arm (or difference port) is kept parallel to the plane of the two inputs of the magic-T instead of perpendicular to them, as is done in a conventional magic-T. The sum and difference ports of the above folded magic-T are then matched using inductive windows at 15.50 GHz. Keeping the required locations of the outputs of the comparator in mind, four of these matched folded magic-Ts are suitably interconnected to design a highly compact comparator. The effects of fabrication errors in the waveguide and matching element dimensions on the centre frequency, magnitude and phase response of the comparator are also analyzed and presented.
ecbcccd71b3c7e0cca8ecf0997e9775019b51488
We develop an individual behavioral model that integrates the role of top management and organizational culture into the theory of planned behavior in an attempt to better understand how top management can influence security compliance behavior of employees. Using survey data and structural equation modeling, we test hypotheses on the relationships among top management participation, organizational culture, and key determinants of employee compliance with information security policies. We find that top management participation in information security initiatives has significant direct and indirect influences on employees’ attitudes towards, subjective norm of, and perceived behavioral control over compliance with information security policies. We also find that top management participation strongly influences organizational culture, which in turn impacts employees’ attitudes towards and perceived behavioral control over compliance with information security policies. Furthermore, we find that the effects of top management participation and organizational culture on employee behavioral intentions are fully mediated by employee cognitive beliefs about compliance with information security policies. Our findings extend the information security research literature by showing how top management can play a proactive role in shaping employee compliance behavior in addition to the deterrence-oriented remedies advocated in the extant literature. Our findings also refine the theories about the role of organizational culture in shaping employee compliance behavior. Significant theoretical and practical implications of the findings are discussed.
1ac7018b0935cdb5bf52b34d738b110e2ef0416a
Most online reviews consist of plain-text feedback together with a single numeric score. However, understanding the multiple `aspects' that contribute to users' ratings may help us to better understand their individual preferences. For example, a user's impression of an audio book presumably depends on aspects such as the story and the narrator, and knowing their opinions on these aspects may help us to recommend better products. In this paper, we build models for rating systems in which such dimensions are explicit, in the sense that users leave separate ratings for each aspect of a product. By introducing new corpora consisting of five million reviews, rated with between three and six aspects, we evaluate our models on three prediction tasks: First, we uncover which parts of a review discuss which of the rated aspects. Second, we summarize reviews by finding the sentences that best explain a user's rating. Finally, since aspect ratings are optional in many of the datasets we consider, we recover ratings that are missing from a user's evaluation. Our model matches state-of-the-art approaches on existing small-scale datasets, while scaling to the real-world datasets we introduce. Moreover, our model is able to `disentangle' content and sentiment words: we automatically learn content words that are indicative of a particular aspect as well as the aspect-specific sentiment words that are indicative of a particular rating.
be0b922ec9625a5908032bde6ae47fa6c4216a38
Neural networks have achieved state-of-the-art performance on several structured-output prediction tasks, trained in a fully supervised fashion. However, annotated examples in structured domains are often costly to obtain, which thus limits the applications of neural networks. In this work, we propose Maximum Margin Reward Networks, a neural network-based framework that aims to learn from both explicit (full structures) and implicit supervision signals (delayed feedback on the correctness of the predicted structure). On named entity recognition and semantic parsing, our model outperforms previous systems on the benchmark datasets, CoNLL-2003 and WebQuestionsSP.
01a29e319e2afa2d29cab62ef1f492a953e8ca70
This paper describes a personalized k-anonymity model for protecting location privacy against various privacy threats through location information sharing. Our model has two unique features. First, we provide a unified privacy personalization framework to support location k-anonymity for a wide range of users with context-sensitive personalized privacy requirements. This framework enables each mobile node to specify the minimum level of anonymity it desires as well as the maximum temporal and spatial resolutions it is willing to tolerate when requesting k-anonymity-preserving location-based services (LBSs). Second, we devise an efficient message perturbation engine which is run by the location protection broker on a trusted server and performs location anonymization on mobile users' LBS request messages, such as identity removal and spatio-temporal cloaking of location information. We develop a suite of scalable and yet efficient spatio-temporal cloaking algorithms, called CliqueCloak algorithms, to provide high quality personalized location k-anonymity, aiming at avoiding or reducing known location privacy threats before forwarding requests to LBS provider(s). The effectiveness of our CliqueCloak algorithms is studied under various conditions using realistic location data synthetically generated using real road maps and traffic volume data.
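A simplified sketch of the spatial-cloaking idea (not the CliqueCloak algorithms themselves): grow a box around the requester until at least k users fall inside, then report the box in place of the exact location. The user positions and parameters are toy stand-ins.

```python
# Simplified spatial cloaking: expand a bounding box until it covers k users.
import numpy as np

def cloak(position, others, k, step=0.005, max_half=0.25):
    """Return a bounding box around `position` containing >= k users."""
    half = step
    while half <= max_half:
        lo, hi = position - half, position + half
        inside = np.all((others >= lo) & (others <= hi), axis=1).sum()
        if inside + 1 >= k:              # +1 counts the requester itself
            return lo, hi
        half += step
    return None                          # desired anonymity level not attainable

rng = np.random.default_rng(0)
users = rng.uniform(0, 1, size=(500, 2))  # other mobile nodes (toy data)
print(cloak(np.array([0.5, 0.5]), users, k=5))
```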
0d8f17d8d1d05d6405be964648e7fc622c776c5d
Mobile contexts of use vary a lot, and may even be continuously changing during use. The context is much more than location, but its other elements are still difficult to identify or measure. Location information is becoming an integral part of different mobile devices. Current mobile services can be enhanced with location-aware features, thus providing the user with a smooth transition towards context-aware services. Potential application fields can be found in areas such as travel information, shopping, entertainment, event information and different mobile professions. This paper studies location-aware mobile services from the user's point of view. The paper draws conclusions about key issues related to user needs, based on user interviews, laboratory and field evaluations with users, and expert evaluations of location-aware services. The user needs are presented under five main themes: topical and comprehensive contents, smooth user interaction, personal and user-generated contents, seamless service entities and privacy issues.
e9a9d7f2a1226b463fb18f2215553dfd01aa38e7
We study the sign language recognition problem which is to translate the meaning of signs from visual input such as videos. It is well-known that many problems in the field of computer vision require a huge amount of data to train deep neural network models. We introduce the KETI sign language dataset which consists of 10,480 videos of high resolution and quality. Since different sign languages are used in different countries, the KETI sign language dataset can be the starting point for further research on Korean sign language recognition. Using the sign language dataset, we develop a sign language recognition system by utilizing the human keypoints extracted from the face, hands, and body parts. The extracted human keypoint vector is standardized by the mean and standard deviation of the keypoints and used as input to a recurrent neural network (RNN). We show that our sign recognition system is robust even when the size of the training data is not sufficient. Our system shows 89.5% classification accuracy for 100 sentences that can be used in emergency situations.
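A minimal sketch of the described pipeline: standardize the keypoint features and classify the sequence with an RNN. The feature and frame dimensions here are assumptions for illustration, not the paper's exact configuration.

```python
# Standardized keypoint sequence -> GRU -> sentence class.
import torch
import torch.nn as nn

T, D, n_classes = 60, 274, 100            # frames, keypoint features, sentences
seq = torch.randn(1, T, D)                # stand-in for extracted keypoints

# Standardize with the mean/std of the keypoints, as the abstract describes
seq = (seq - seq.mean()) / (seq.std() + 1e-8)

class SignRNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.rnn = nn.GRU(D, 128, batch_first=True)
        self.out = nn.Linear(128, n_classes)
    def forward(self, x):
        _, h = self.rnn(x)                # final hidden state summarizes the video
        return self.out(h[-1])

logits = SignRNN()(seq)
print(logits.argmax(dim=-1))              # predicted sentence id
```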
7c1cdcbdd30163f3d7fd9789e42c4a37eb2f7f04
In web search, users' queries are formulated using only a few terms, and term-matching retrieval functions can fail to retrieve relevant documents. Given a user query, the technique of query expansion (QE) consists of selecting related terms that can enhance the likelihood of retrieving relevant documents. Selecting such expansion terms is challenging and requires a computational framework capable of encoding complex semantic relationships. In this paper, we propose a novel method for learning, in a supervised way, semantic representations for words and phrases. By embedding queries and documents in special matrices, our model disposes of an increased representational power with respect to existing approaches adopting a vector representation. We show that our model produces high-quality query expansion terms. Our expansion increases IR measures beyond expansions from current word-embedding models and well-established traditional QE methods.
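For contrast, a sketch of the vector-based expansion style that the paper's matrix model improves upon: rank vocabulary terms by cosine similarity to the averaged query embedding. The embeddings and vocabulary here are toy stand-ins.

```python
# Baseline embedding-based query expansion via cosine similarity.
import numpy as np

rng = np.random.default_rng(1)
vocab = ["car", "automobile", "vehicle", "banana", "engine"]
E = rng.normal(size=(len(vocab), 50))     # stand-in for trained embeddings
E /= np.linalg.norm(E, axis=1, keepdims=True)

def expand(query_terms, n=2):
    q = E[[vocab.index(t) for t in query_terms]].mean(axis=0)
    sims = E @ q / np.linalg.norm(q)      # cosine similarity to the query vector
    ranked = [vocab[i] for i in np.argsort(-sims) if vocab[i] not in query_terms]
    return ranked[:n]

print(expand(["car"]))                    # candidate expansion terms
```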
8d4d06159413e1bb65ef218b4c78664d84a9b3c3
The Android operating system for mobile phones, which is still relatively new, is rapidly gaining market share, with dozens of smartphones and tablets either released or set to be released. In this paper, we present the first methodology and toolset for acquisition and deep analysis of volatile physical memory from Android devices. The paper discusses some of the challenges in performing Android memory acquisition, discusses our new kernel module for dumping memory, named dmd, and specifically addresses the difficulties in developing device-independent acquisition tools. Our acquisition tool supports dumping memory to either the SD card on the phone or over the network. We also present analysis of kernel structures using newly developed Volatility functionality. The results of this work illustrate the potential that deep memory analysis offers to digital forensics investigators.
8db37013b0b3315badaa7190d4c3af9ec56ab278
Android remains the dominant OS in the smartphone market even though the iOS share of the market increased during the iPhone 6 release period. As various types of Android smartphones are being launched in the market, forensic studies are being conducted to test data acquisition and analysis. However, since the introduction of new Android security technologies, it has become more difficult to acquire data using existing forensic methods. In order to address this problem, we propose a new acquisition method based on analyzing the firmware update protocols of Android smartphones. A physical acquisition of Android smartphones can be achieved using the flash memory read command by reverse engineering the firmware update protocol in the bootloader. Our experimental results demonstrate that the proposed method is superior to existing forensic methods in terms of the integrity guarantee, acquisition speed, and physical dump with screen-locked smartphones (USB debugging disabled).
34f3955cb11db849789f7fbc78eb3cb347dd573d
4ef973984a8ea481edf74e0d2074e19d0389e76b
A computer vision system has been implemented that can recognize threedimensional objects from unknown viewpoints in single gray-scale images. Unlike most other approaches, the recognition is accomplished without any attempt to reconstruct depth information bottom-up from the visual input. Instead, three other mechanisms are used that can bridge the gap between the two-dimensional image and knowledge of three-dimensional objects. First, a process of perceptual organization is used to form groupings and structures in the image that are likely to be invariant over a wide range of viewpoints. Second, a probabilistic ranking method is used to reduce the size of the search space during model based matching. Finally, a process of spatial correspondence brings the projections of three-dimensional models into direct correspondence with the image by solving for unknown viewpoint and model parameters. A high level of robustness in the presence of occlusion and missing data can be achieved through full application of a viewpoint consistency constraint. It is argued that similar mechanisms and constraints form the basis for recognition in human vision. This paper has been published in Artificial Intelligence, 31, 3 (March 1987), pp. 355–395.
37de340b2a26a94a0e1db02a155cacb33c10c746
A flexible Vivaldi antenna with a −6 dB bandwidth from 150 MHz to 2000 MHz is introduced. The antenna is fabricated on a 60 × 60 cm² silicone substrate. In this paper, we present the design, realization and performance of this wideband and directive antenna. The proposed structure is lightweight, easy to realize and does not require any matching network. The targeted application is the radio-localization of signal source emissions from a helium gas balloon. Six antennas are integrated on the bottom side of the balloon, and the information is recovered with a cable that also serves to stabilize the balloon.
407cf7a598d69c7802d16ada79d25e3c59275c9b
Since a large scale Wireless Sensor Network (WSN) is to be completely integrated into the Internet as a core part of the Internet of Things (IoT) or Cyber Physical System (CPS), it is necessary to consider various security challenges that come with IoT/CPS, such as the detection of malicious attacks. Sensors or sensor-embedded things may establish direct communication with each other using the 6LoWPAN protocol. A trust and reputation model is recognized as an important approach to defend large distributed sensor networks in IoT/CPS against malicious node attacks, since trust establishment mechanisms can stimulate collaboration among distributed computing and communication entities, facilitate the detection of untrustworthy entities, and assist the decision-making process of various protocols. In this paper, based on an in-depth understanding of the trust establishment process and a quantitative comparison among trust establishment methods, we present a trust and reputation model TRM-IoT to enforce the cooperation between things in a network of IoT/CPS based on their behaviors. The accuracy, robustness and lightness of the proposed model are validated through a wide set of simulations.
e11f9ca6e574c779bdf0a868c368e5b1567a1517
Learning to learn has emerged as an important direction for achieving artificial intelligence. Two of the primary barriers to its adoption are an inability to scale to larger problems and a limited ability to generalize to new tasks. We introduce a learned gradient descent optimizer that generalizes well to new tasks, and which has significantly reduced memory and computation overhead. We achieve this by introducing a novel hierarchical RNN architecture, with minimal per-parameter overhead, augmented with additional architectural features that mirror the known structure of optimization tasks. We also develop a meta-training ensemble of small, diverse optimization tasks capturing common properties of loss landscapes. The optimizer learns to outperform RMSProp/ADAM on problems in this corpus. More importantly, it performs comparably or better when applied to small convolutional neural networks, despite seeing no neural networks in its meta-training set. Finally, it generalizes to train Inception V3 and ResNet V2 architectures on the ImageNet dataset for thousands of steps, optimization problems that are of a vastly different scale than those it was trained on. We release an open source implementation of the meta-training algorithm.
0458cec30079a53a2b7726a14f5dd826b9b39bfd
As robots begin to collaborate with humans in everyday workspaces, they will need to understand the functions of tools and their parts. To cut an apple or hammer a nail, robots need to not just know the tool's name, but they must localize the tool's parts and identify their functions. Intuitively, the geometry of a part is closely related to its possible functions, or its affordances. Therefore, we propose two approaches for learning affordances from local shape and geometry primitives: 1) superpixel based hierarchical matching pursuit (S-HMP); and 2) structured random forests (SRF). Moreover, since a part can be used in many ways, we introduce a large RGB-Depth dataset where tool parts are labeled with multiple affordances and their relative rankings. With ranked affordances, we evaluate the proposed methods on 3 cluttered scenes and over 105 kitchen, workshop and garden tools, using ranked correlation and a weighted F-measure score [26]. Experimental results over sequences containing clutter, occlusions, and viewpoint changes show that the approaches return precise predictions that could be used by a robot. S-HMP achieves high accuracy but at a significant computational cost, while SRF provides slightly less accurate predictions but in real-time. Finally, we validate the effectiveness of our approaches on the Cornell Grasping Dataset [25] for detecting graspable regions, and achieve state-of-the-art performance.
e0398ab99daa5236720cd1d91e5b150985aac4f3
We are developing a dietary assessment system that records daily food intake through the use of food images taken at a meal. The food images are then analyzed to extract the nutrient content in the food. In this paper, we describe the image analysis tools to determine the regions where a particular food is located (image segmentation), identify the food type (feature classification) and estimate the weight of the food item (weight estimation). An image segmentation and classification system is proposed to improve the food segmentation and identification accuracy. We then estimate the weight of food to extract the nutrient content from a single image using a shape template for foods with regular shapes and area-based weight estimation for foods with irregular shapes.
ab2a41722ee1f2b26575080238ba25f7173a6ae2
A 2.24-W power-amplifier (PA) module at 35 GHz is presented using a broadband spatial power-combining system. The combiner can accommodate more monolithic microwave integrated-circuit (MMIC) PAs with a staggered placement structure in the limited microstrip space of a Ka-band waveguide structure with good return losses, and heat can be dissipated into the aluminum carrier quickly. This combiner is based on a slotline-to-microstrip transition structure, which also serves as a four-way power combiner. The proposed 2×2 combining structure, combined by vertical stacking inside the waveguide, was analyzed and optimized by finite-element-method (FEM) simulations and experiments.
fa9c7e3c6d55175de25bea79ba66ef91607f3920
High power solid-state power amplifiers require a high efficiency power dividing/combining structure to keep the power loss as low as possible. The heat sinking capability of the divider/combiner also limits its maximum output power in a continuous wave (CW) configuration. In this paper, we introduce a novel 8-way Ku-band power divider/combiner system, which demonstrates the advantages of low loss, broad bandwidth and good heat sinking capability simultaneously. As its sub-components, low loss probes for the waveguide-to-microstrip transition and low loss broadband 1-to-2 power combiners are designed and fabricated. The measured back-to-back insertion loss of the whole 8-way power combiner is lower than 0.5 dB in the whole Ku band, and the corresponding combining efficiency is as high as 94.5%. The simulated thermal resistance of the system is as low as 0.21 °C/W, indicating the proposed power combiner is able to produce 50 W of CW output power with commercially available Monolithic Microwave Integrated Circuits (MMICs).
c5695d4104e245ad54d3fe8e4ad33e65970c2d6a
In this paper, a system for measuring impedance based on the AD5933 circuit is presented. The impedance measuring range is between 9 Ω and 18 MΩ over a 1 kHz to 100 kHz frequency range. Possibilities for expanding this measurement range are also presented in the paper. The system calibration is automatic, and the relative error of the impedance modulus measurement is within ±2%. The main parameters of the measured impedance are shown locally on an OLED display but can also be stored on an SD memory card for further processing. The system is portable, modular and adaptable to a large number of applications.
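A sketch of how impedance is typically recovered from the AD5933's raw output, following the gain-factor calibration convention from the device's datasheet; the register readings below are made-up illustrative values.

```python
# AD5933-style impedance recovery: the part returns a real/imaginary pair,
# and a gain factor calibrated against a known resistor converts the
# magnitude of that pair to an impedance value.
import math

def magnitude(real, imag):
    return math.hypot(real, imag)

# Calibration with a known impedance, e.g. a 100 kOhm resistor
Z_CAL = 100e3
gain_factor = (1.0 / Z_CAL) / magnitude(real=-1522, imag=1870)

def impedance(real, imag):
    return 1.0 / (gain_factor * magnitude(real, imag))

print(f"{impedance(real=-1320, imag=1401):.0f} Ohm")
```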
1df5051913989b441e7df2ddc00aa8c3ab5960d0
Mobile devices are among the most disruptive technologies of the last years, gaining ever more diffusion and success in the daily life of a wide range of people categories. Unfortunately, while the number of mobile devices implicated in crime activities is relevant and growing, the capability to perform the forensic analysis of such devices is limited both by technological and methodological problems. In this paper, we focus on Anti-Forensic techniques applied to mobile devices, presenting some fully automated instances of such techniques for Android devices. Furthermore, we tested the effectiveness of such techniques against both the cursory examination of the device and some acquisition tools.
93083f4225ea62b3733a76fc64f9991ed5fd6878
We present the results of our participation in the VarDial 4 shared task on discriminating closely related languages. Our submission includes simple traditional models using linear support vector machines (SVMs) and a neural network (NN). The main idea was to leverage language group information. We did so with a two-layer approach in the traditional model and a multi-task objective in the neural network. Our results confirm earlier findings: simple traditional models outperform neural networks consistently for this task, at least given the number of systems we could examine in the available time. Our two-layer linear SVM ranked 2nd in the shared task.
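A minimal scikit-learn sketch of the two-layer idea: one linear SVM over character n-grams predicts the language group, then a per-group SVM predicts the language within that group. The four toy sentences stand in for the shared-task corpus.

```python
# Two-layer language identification: group first, then language within group.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

texts = ["isto e um teste", "isso e um teste", "esto es una prueba", "esto es un ensayo"]
langs = ["pt-PT", "pt-BR", "es-ES", "es-AR"]
groups = ["pt", "pt", "es", "es"]

def svm():
    return make_pipeline(TfidfVectorizer(analyzer="char", ngram_range=(1, 4)), LinearSVC())

group_clf = svm().fit(texts, groups)
lang_clf = {g: svm().fit([t for t, gg in zip(texts, groups) if gg == g],
                         [l for l, gg in zip(langs, groups) if gg == g])
            for g in set(groups)}

def predict(text):
    g = group_clf.predict([text])[0]       # layer 1: language group
    return lang_clf[g].predict([text])[0]  # layer 2: language within the group

print(predict("isto e um teste"))
```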
4fa0d9c4c3d17458085ee255b7a4b7c325d59e32
The DBpedia community project extracts structured, multilingual knowledge from Wikipedia and makes it freely available on the Web using Semantic Web and Linked Data technologies. The project extracts knowledge from 111 different language editions of Wikipedia. The largest DBpedia knowledge base which is extracted from the English edition of Wikipedia consists of over 400 million facts that describe 3.7 million things. The DBpedia knowledge bases that are extracted from the other 110 Wikipedia editions together consist of 1.46 billion facts and describe 10 million additional things. The DBpedia project maps Wikipedia infoboxes from 27 different language editions to a single shared ontology consisting of 320 classes and 1,650 properties. The mappings are created via a world-wide crowd-sourcing effort and enable knowledge from the different Wikipedia editions to be combined. The project publishes releases of all DBpedia knowledge bases for download and provides SPARQL query access to 14 out of the 111 language editions via a global network of local DBpedia chapters. In addition to the regular releases, the project maintains a live knowledge base which is updated whenever a page in Wikipedia changes. DBpedia sets 27 million RDF links pointing into over 30 external data sources and thus enables data from these sources to be used together with DBpedia data. Several hundred data sets on the Web publish RDF links pointing to DBpedia themselves and make DBpedia one of the central interlinking hubs in the Linked Open Data (LOD) cloud. In this system report, we give an overview of the DBpedia community project, including its architecture, technical implementation, maintenance, internationalisation, usage statistics and applications.
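A small example of the SPARQL access path described above, using the SPARQLWrapper package against the public DBpedia endpoint (endpoint availability and result contents may vary over time).

```python
# Query the public DBpedia SPARQL endpoint for the most populous cities.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("https://dbpedia.org/sparql")
sparql.setQuery("""
    SELECT ?city ?population WHERE {
        ?city a dbo:City ;
              dbo:populationTotal ?population .
    }
    ORDER BY DESC(?population)
    LIMIT 5
""")
sparql.setReturnFormat(JSON)

for row in sparql.query().convert()["results"]["bindings"]:
    print(row["city"]["value"], row["population"]["value"])
```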
1c2dbbc5268eff6c78f581b8fc7c649d40b60538
The Semantic Web has recently seen a rise of large knowledge bases (such as DBpedia) that are freely accessible via SPARQL endpoints. The structured representation of the contained information opens up new possibilities in the way it can be accessed and queried. In this paper, we present an approach that extracts a graph covering relationships between two objects of interest. We show an interactive visualization of this graph that supports the systematic analysis of the found relationships by providing highlighting, previewing, and filtering features.
4b9a9fb54b3451e4212e298053f81f0cd49d70a2
The evolution of smartphones together with increasing computational power has empowered developers to create innovative context-aware applications for recognizing user-related social and cognitive activities in any situation and at any location. The existence and awareness of the context provide the capability of being conscious of physical environments or situations around mobile device users. This allows network services to respond proactively and intelligently based on such awareness. The key idea behind context-aware applications is to encourage users to collect, analyze, and share local sensory knowledge for the purpose of large-scale community use by creating a smart network. The desired network is capable of making autonomous logical decisions to actuate environmental objects and also assist individuals. However, many open challenges remain, which arise mostly because the middleware services provided in mobile devices have limited resources in terms of power, memory, and bandwidth. Thus, it becomes critically important to study how these drawbacks can be elaborated and resolved and, at the same time, to better understand the opportunities for the research community to contribute to context-awareness. To this end, this paper surveys the literature over the period 1991-2014, from the emerging concepts to applications of context-awareness in mobile platforms, providing up-to-date research and future research directions. Moreover, it points out the challenges faced in this regard and enlightens them by proposing possible solutions.
4c65005c8822c3117bd3c3746e3a9b9e17386328
27208c88f07a1ffe97760c12be08fad3ab68fee2
Deep networks have been successfully applied to unsupervised feature learning for single modalities (e.g., text, images or audio). In this work, we propose a novel application of deep networks to learn features over multiple modalities. We present a series of tasks for multimodal learning and show how to train a deep network that learns features to address these tasks. In particular, we demonstrate cross modality feature learning, where better features for one modality (e.g., video) can be learned if multiple modalities (e.g., audio and video) are present at feature learning time. Furthermore, we show how to learn a shared representation between modalities and evaluate it on a unique task, where the classifier is trained with audio-only data but tested with video-only data and vice-versa. We validate our methods on the CUAVE and AVLetters datasets with an audio-visual speech classification task, demonstrating superior visual speech classification on AVLetters and effective multimodal fusion.
21c9dd68b908825e2830b206659ae6dd5c5bfc02
We introduce Embed to Control (E2C), a method for model learning and control of non-linear dynamical systems from raw pixel images. E2C consists of a deep generative model, belonging to the family of variational autoencoders, that learns to generate image trajectories from a latent space in which the dynamics is constrained to be locally linear. Our model is derived directly from an optimal control formulation in latent space, supports long-term prediction of image sequences and exhibits strong performance on a variety of complex control problems.
39b7007e6f3dd0744833f292f07ed77973503bfd
Hierarchical reinforcement learning (HRL) is a promising approach to extend traditional reinforcement learning (RL) methods to solve more complex tasks. Yet, the majority of current HRL methods require careful task-specific design and on-policy training, making them difficult to apply in real-world scenarios. In this paper, we study how we can develop HRL algorithms that are general, in that they do not make onerous additional assumptions beyond standard RL algorithms, and efficient, in the sense that they can be used with modest numbers of interaction samples, making them suitable for real-world problems such as robotic control. For generality, we develop a scheme where lower-level controllers are supervised with goals that are learned and proposed automatically by the higher-level controllers. To address efficiency, we propose to use off-policy experience for both higher- and lower-level training. This poses a considerable challenge, since changes to the lower-level behaviors change the action space for the higher-level policy, and we introduce an off-policy correction to remedy this challenge. This allows us to take advantage of recent advances in off-policy model-free RL to learn both higher- and lower-level policies using substantially fewer environment interactions than on-policy algorithms. We term the resulting HRL agent HIRO and find that it is generally applicable and highly sample-efficient. Our experiments show that HIRO can be used to learn highly complex behaviors for simulated robots, such as pushing objects and utilizing them to reach target locations, learning from only a few million samples, equivalent to a few days of real-time interaction. In comparisons with a number of prior HRL methods, we find that our approach substantially outperforms previous state-of-the-art techniques.
5b44f587c4c7611d04e304fd7fa37648338d0cbf
Data-efficient reinforcement learning (RL) in continuous state-action spaces using very high-dimensional observations remains a key challenge in developing fully autonomous systems. We consider a particularly important instance of this challenge, the pixels-to-torques problem, where an RL agent learns a closed-loop control policy (“torques”) from pixel information only. We introduce a data-efficient, model-based reinforcement learning algorithm that learns such a closed-loop policy directly from pixel information. The key ingredient is a deep dynamical model for learning a low-dimensional feature embedding of images jointly with a predictive model in this low-dimensional feature space. Joint learning is crucial for long-term predictions, which lie at the core of the adaptive nonlinear model predictive control strategy that we use for closed-loop control. Compared to state-of-the-art RL methods for continuous states and actions, our approach learns quickly, scales to high-dimensional state spaces, is lightweight and an important step toward fully autonomous end-to-end learning from pixels to torques.
d06ae5effef2922e7ee24a4b0f8274486f0a6523
Using a self-administered questionnaire, 149 respondents rated service elements associated with a recently visited store or restaurant on scales that differed only in the number of response categories (ranging from 2 to 11) and on a 101-point scale presented in a different format. On several indices of reliability, validity, and discriminating power, the two-point, three-point, and four-point scales performed relatively poorly, and indices were significantly higher for scales with more response categories, up to about 7. Internal consistency did not differ significantly between scales, but test-retest reliability tended to decrease for scales with more than 10 response categories. Respondent preferences were highest for the 10-point scale, closely followed by the seven-point and nine-point scales. Implications for research and practice are discussed.
bc93ff646e6f863d885e609db430716d7590338f
Nowadays, GPS-based car navigation systems mainly use speech and aerial views of simplified road maps to guide drivers to their destination. However, drivers often experience difficulties in linking the simple 2D aerial map with the visual impression that they get from the real environment, which is inherently ground-level based. Therefore, supplying realistically textured 3D city models at ground-level proves very useful for pre-visualizing an upcoming traffic situation. Because this pre-visualization can be rendered from the expected future viewpoints of the driver, the latter will more easily understand the required maneuver. 3D city models can be reconstructed from the imagery recorded by surveying vehicles. The vastness of image material gathered by these vehicles, however, puts extreme demands on vision algorithms to ensure their practical usability. Algorithms need to be as fast as possible and should result in compact, memory efficient 3D city models for future ease of distribution and visualization. For the considered application, these are not contradictory demands. Simplified geometry assumptions can speed up vision algorithms while automatically guaranteeing compact geometry models. We present a novel city modeling framework which builds upon this philosophy to create 3D content at high speed which could allow for pre-visualization of any conceivable traffic situation by car navigation modules.
6dc245637d1d7335f50dbab0ee9d8463e7b35a49
Indexing and classification tools for Content Based Visual Information Retrieval (CBVIR) have been penetrating the universe of medical image analysis. They have been recently investigated for Alzheimer’s disease (AD) diagnosis. This is a normal “knowledge diffusion” process, when methodologies developed for multimedia mining penetrate a new application area. The latter brings its own specificities requiring an adjustment of methodologies on the basis of domain knowledge. In this paper, we develop an automatic classification framework for AD recognition in structural Magnetic Resonance Images (MRI). The main contribution of this work consists in considering visual features from the most involved region in AD (hippocampal area) and in using a late fusion to increase precision results. Our approach has been first evaluated on the baseline MR images of 218 subjects from the Alzheimer’s Disease Neuroimaging Initiative (ADNI) database and then tested on a 3T weighted contrast MRI obtained from a subsample of a large French epidemiological study: “Bordeaux dataset”. The experimental results show that our classification of patients with AD versus NC (Normal Control) subjects achieves accuracies of 87% and 85% for the ADNI subset and the “Bordeaux dataset” respectively. For the most challenging group of subjects with Mild Cognitive Impairment (MCI), we reach accuracies of 78.22% and 72.23% for MCI versus NC and MCI versus AD respectively on ADNI. The late fusion scheme improves classification results by 9% on average for these three categories. Results demonstrate very promising classification performance and simplicity compared to the state-of-the-art volumetric AD diagnosis methods.
7d4c85662ca70abb26e37b2fc40a045fd0369f70
DC microgrids are popular due to the integration of renewable energy sources such as solar photovoltaics and fuel cells. Owing to the low output voltage of these dc power generators, highly efficient high-gain dc–dc converters are needed to connect them to the dc microgrid. In this paper, a nonisolated high-gain dc–dc converter is proposed without using a voltage multiplier cell and/or hybrid switched-capacitor technique. The proposed topology utilizes two nonisolated inductors that are connected in series/parallel during the discharging/charging mode. The operation of the switches with two different duty ratios is the main advantage of the converter, achieving high voltage gain without an extreme duty ratio. The steady-state analysis of the proposed converter using two different duty ratios is discussed in detail. In addition, a 100 W, 20/200 V prototype circuit of the high-gain dc–dc converter is developed, and the performance is validated using experimental results.
43afc11883fb147ac37b4dc40bf6e7fa5fccf341
We propose hashing to facilitate efficient kernels. This generalizes previous work using sampling and we show a principled way to compute the kernel matrix for data streams and sparse feature spaces. Moreover, we give deviation bounds from the exact kernel matrix. This has applications to estimation on strings and graphs.
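A minimal sketch of the underlying trick: hash features into a fixed-size signed vector and approximate kernel (inner-product) values in the hashed space. Python's built-in hash stands in for a proper hash function here.

```python
# Feature hashing: approximate bag-of-words inner products in hashed space.
import numpy as np

def hashed_vector(tokens, n_bins=2**10):
    """Feature hashing with a sign hash to keep the estimate unbiased."""
    v = np.zeros(n_bins)
    for tok in tokens:
        h = hash(tok)                      # stand-in for a proper hash function
        v[h % n_bins] += 1 if (h >> 1) % 2 == 0 else -1
    return v

a = hashed_vector("the quick brown fox".split())
b = hashed_vector("the quick red fox".split())
print(a @ b)   # approximates the exact bag-of-words inner product (here 3)
```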
bfcf14ae04a9a326f9263dcdd30e475334a96d39
An approach to the construction of classifiers from imbalanced datasets is described. A dataset is imbalanced if the classification categories are not approximately equally represented. Often real-world data sets are predominately composed of “normal” examples with only a small percentage of “abnormal” or “interesting” examples. It is also the case that the cost of misclassifying an abnormal (interesting) example as a normal example is often much higher than the cost of the reverse error. Under-sampling of the majority (normal) class has been proposed as a good means of increasing the sensitivity of a classifier to the minority class. This paper shows that a combination of our method of over-sampling the minority (abnormal) class and under-sampling the majority (normal) class can achieve better classifier performance (in ROC space) than only under-sampling the majority class. This paper also shows that a combination of our method of over-sampling the minority class and under-sampling the majority class can achieve better classifier performance (in ROC space) than varying the loss ratios in Ripper or class priors in Naive Bayes. Our method of over-sampling the minority class involves creating synthetic minority class examples. Experiments are performed using C4.5, Ripper and a Naive Bayes classifier. The method is evaluated using the area under the Receiver Operating Characteristic curve (AUC) and the ROC convex hull strategy.
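The generation step at the core of the over-sampling method can be sketched in a few lines: a synthetic example is a random point on the segment between a minority sample and one of its k nearest minority neighbors.

```python
# Synthetic minority over-sampling: interpolate toward a nearby minority point.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def smote_sample(X_min, k=5, rng=np.random.default_rng(0)):
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X_min)   # +1: self is nearest
    i = rng.integers(len(X_min))
    neighbors = nn.kneighbors([X_min[i]], return_distance=False)[0][1:]
    j = rng.choice(neighbors)
    gap = rng.uniform()                                   # position on the segment
    return X_min[i] + gap * (X_min[j] - X_min[i])

X_minority = np.random.default_rng(1).normal(size=(20, 2))
print(smote_sample(X_minority))
```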
d920943892caa0bc9f300cb9e3b7f3ab250f78c9
Big Data applications have been emerging over the last years, and researchers from many disciplines are aware of the high advantages related to knowledge extraction from this type of problem. However, traditional learning approaches cannot be directly applied due to scalability issues. To overcome this issue, the MapReduce framework has arisen as a “de facto” solution. Basically, it carries out a “divide-and-conquer” distributed procedure in a fault-tolerant way to adapt to commodity hardware. Being still a recent discipline, little research has been conducted on imbalanced classification for Big Data. The reasons behind this are mainly the difficulties in adapting standard techniques to the MapReduce programming style. Additionally, inner problems of imbalanced data, namely lack of data and small disjuncts, are accentuated during the data partitioning to fit the MapReduce programming style. This paper is designed under three main pillars. First, to present the first outcomes for imbalanced classification in Big Data problems, introducing the current research state of this area. Second, to analyze the behavior of standard pre-processing techniques in this particular framework. Finally, taking into account the experimental results obtained throughout this work, we will carry out a discussion on the challenges and future directions for the topic.
cd1481e9cc0c86bcf3a44672f887522a95a174e8
In the new era of “smart rail mobility”, infrastructure, trains, and travelers will be interconnected to achieve optimized mobility, higher safety, and lower costs. In order to realize seamless high-data-rate wireless connectivity, up to dozens of GHz of bandwidth is required, and this motivates the exploration of the underutilized millimeter wave (mmWave) as well as the largely unexplored Terahertz (THz) bands. In order to realize smart rail mobility at mmWave and THz bands, it is critical to gain a thorough understanding of the wireless channels. In this paper, according to the state of the art in research on railway wireless channels, we identify the main technical challenges and the corresponding opportunities concerning the reference scenario modules, an accurate and efficient simulation platform, beamforming strategies, and handover design.
6d1e97df31e9a4b0255243d86608c4b7f725133b
We present a new algorithm for task and motion planning (TMP) and discuss the requirements and abstractions necessary to obtain robust solutions for TMP in general. Our Iteratively Deepened Task and Motion Planning (IDTMP) method is probabilistically-complete and offers improved performance and generality compared to a similar, state-of-the-art, probabilistically-complete planner. The key idea of IDTMP is to leverage incremental constraint solving to efficiently add and remove constraints on motion feasibility at the task level. We validate IDTMP on a physical manipulator and evaluate scalability on scenarios with many objects and long plans, showing order-of-magnitude gains compared to the benchmark planner and a four-times self-comparison speedup from our extensions. Finally, in addition to describing a new method for TMP and its implementation on a physical robot, we also put forward requirements and abstractions for the development of similar planners in the future.
04975368149e407c2105b76a7523e027661bd4f0
The goal of encryption is to ensure confidentiality of data in communication and storage processes. Recently, its use in constrained devices has led to the consideration of additional features, such as the ability to delegate computations to untrusted computers. For this purpose, we would like to give the untrusted computer only an encrypted version of the data to process. The computer will perform the computation on this encrypted data, hence without knowing anything of its real value. Finally, it will send back the result, and we will decrypt it. For coherence, the decrypted result has to be equal to the intended computed value if performed on the original data. For this reason, the encryption scheme has to present a particular structure. Rivest et al. proposed in 1978 to solve this issue through homomorphic encryption [1]. Unfortunately, Brickell and Yacobi pointed out in [2] some security flaws in the first proposals of Rivest et al. Since this first attempt, a lot of articles have proposed solutions dedicated to numerous application contexts: secret sharing schemes, threshold schemes (see, e.g., [3]), zero-knowledge proofs (see, e.g., [4]), oblivious transfer (see, e.g., [5]), commitment schemes (see, e.g., [3]), anonymity, privacy, electronic voting, electronic auctions, lottery protocols (see, e.g., [6]), protection of mobile agents (see, e.g., [7]), multiparty computation (see, e.g., [3]), mix-nets (see, e.g., [8, 9]), watermarking or fingerprinting protocols (see, e.g., [10–14]), and so forth. The goal of this article is to provide nonspecialists with a survey of homomorphic encryption techniques. Section 2 recalls some basic concepts of cryptography and presents homomorphic encryption; it is particularly aimed at noncryptographers, providing guidelines about the main characteristics of encryption primitives: algorithms, performance, security. Section 3 provides a survey of homomorphic encryption schemes published so far, and analyses their characteristics. Most schemes we describe are based on mathematical notions the reader may not be familiar with. In the cases where these notions can easily be introduced, we present them briefly. The reader may refer to [15] for more information concerning those we could not introduce properly, or algorithmic problems related to their computation. Before going deeper into the subject, let us introduce some notation. The integer ℓ(x) denotes the number of bits constituting the binary expansion of x. As usual, Zn will denote the set of integers modulo n, and Z∗n the set of its invertible elements.
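A classic concrete example of the homomorphic property discussed here, relying only on textbook facts: unpadded RSA is multiplicatively homomorphic, so multiplying ciphertexts multiplies the underlying plaintexts (tiny, insecure parameters, for illustration only).

```python
# Textbook (unpadded) RSA: E(m1) * E(m2) mod n = E(m1 * m2).
p, q, e = 61, 53, 17
n = p * q                                  # 3233
d = pow(e, -1, (p - 1) * (q - 1))          # private exponent (Python 3.8+)

encrypt = lambda m: pow(m, e, n)
decrypt = lambda c: pow(c, d, n)

m1, m2 = 7, 9
c = (encrypt(m1) * encrypt(m2)) % n        # multiply ciphertexts only...
assert decrypt(c) == (m1 * m2) % n         # ...yet the plaintexts multiplied
print(decrypt(c))                          # -> 63
```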
1c1c40927787c40ffe0db9629ede6828ecf09e65
We evaluate the possibility of using a finline orthomode transducer (OMT) at millimeter wavelengths. A finline OMT has low loss, low cross-polarization, and good return loss over a full waveguide band. We propose a novel finline OMT structure for millimeter wavelengths and present results at X-band.
88323e38f676a31ed613dad604829808ff96f714
A novel broadband electromagnetic band-gap (EBG) structure is presented, using a multi-period mushroom-like structure in which units with different patch sizes are cascaded. The direct transmission method is used to determine the band-gap of the EBG structure. The effects of the unit number and patch size on the mushroom-like EBG structure are investigated. Two kinds of units with different patch sizes are cascaded to enlarge the band-gap of the EBG structure, which reaches a relative bandwidth of almost 87.1%. The simulation results show that this band-gap almost covers the stop-bands produced by the two uniform configurations with different patch sizes.
03b18dcde7ba5bb0e87b2bdb68ab7af951daf162
Neural machine translation, a recently proposed approach to machine translation based purely on neural networks, has shown promising results compared to existing approaches such as phrase-based statistical machine translation. Despite its recent success, neural machine translation is limited in handling larger vocabularies, as training complexity as well as decoding complexity increase proportionally to the number of target words. In this paper, we propose a method based on importance sampling that allows us to use a very large target vocabulary without increasing training complexity. We show that decoding can be efficiently done even with the model having a very large target vocabulary by selecting only a small subset of the whole target vocabulary. The models trained by the proposed approach are empirically found to match, and in some cases outperform, the baseline models with a small vocabulary as well as the LSTM-based neural machine translation models. Furthermore, when we use an ensemble of a few models with very large target vocabularies, we achieve performance comparable to the state of the art (measured by BLEU) on both the English→German and English→French translation tasks of WMT’14.
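A minimal NumPy sketch of the underlying idea: compute the training loss over the target word plus a small random sample of the vocabulary instead of the full softmax. This simplification uses a uniform proposal and drops the importance weights that the paper uses to correct for it; all sizes and names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab_size, hidden = 50000, 128
W = rng.normal(scale=0.01, size=(vocab_size, hidden))  # output word embeddings

def sampled_softmax_loss(h, target, num_sampled=256):
    # Score only the target word plus a random negative sample instead of
    # normalizing over all 50k words, so the cost per update is O(num_sampled).
    negatives = rng.choice(vocab_size, size=num_sampled, replace=False)
    negatives = negatives[negatives != target]
    cand = np.concatenate(([target], negatives))
    logits = W[cand] @ h
    logits -= logits.max()                       # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum())
    return -log_probs[0]                         # target sits at index 0

h = rng.normal(size=hidden)                      # stand-in decoder hidden state
print(sampled_softmax_loss(h, target=42))
```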
f7b48b0028a9887f85fe857b62441f391560ef6d
A new design of a two-dimensional cylindrical Luneberg lens is introduced based on TE10 mode propagation between parallel plates, with special focus on ease of manufacturing. The parallel plates are partially filled with a low-cost polymer material (Rexolite, εr = 2.54) to match Luneberg's law. A planar linear tapered slot antenna (LTSA) is inserted into the air region between the parallel plates at the edge of the Luneberg lens as a feed antenna, with fine positioning to the focal point of the Luneberg lens to optimize the antenna system performance. A combined ray-optics/diffraction method is used to obtain the radiation pattern of the system, and results are compared with predictions of a time-domain numerical solver. Measurements done on a 10-cm Luneberg lens designed for operation at 30 GHz agree very well with predictions. For this prototype, 3-dB E- and H-plane beamwidths of 6.6° and 54°, respectively, were obtained, and the sidelobe level in the E-plane was -17.7 dB. Although the parallel-plate configuration should lead to a narrowband design due to the dispersion characteristics of the TE10 mode, the measurement results demonstrate broadband characteristics, with radiation efficiencies varying between 43% and 72% over the tested frequency band of 26.5-37 GHz. The designed cylindrical Luneberg lens can be used to launch multiple beams by implementing an arc array of planar LTSA elements at the periphery of the lens, and can be easily extended to higher mm-wave frequencies.
2881b79ff142496c27d9558361e48f105208dec4
Action research is an established research method in use in the social and medical sciences since the mid-twentieth century, and has increased in importance for information systems toward the end of the 1990s. Its particular philosophic context is couched in strongly post-positivist assumptions such as idiographic and interpretive research ideals. Action research has developed a history within information systems that can be explicitly linked to early work by Lewin and the Tavistock Institute. Action research varies in form, and responds to particular problem domains. The most typical form is a participatory method based on a five-step model, which is exemplified by published IS research.
ee42bceb15d28ce0c7fcd3e37d9a564dfbb3ab90
443362dc552b36c33138c415408d307213ddfa36
6fb37cbc83bd6cd1d732f07288939a5061400e91
In this paper, we apply a bidirectional Long Short-Term Memory with a Conditional Random Field to the task of disfluency detection. Long-range dependencies are one of the core problems for disfluency detection. Our model handles long-range dependencies by using both the Long Short-Term Memory and hand-crafted discrete features. Experiments show that utilizing the hand-crafted discrete features significantly improves the model’s performance, achieving a state-of-the-art score of 87.1% on the Switchboard corpus.
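A minimal PyTorch sketch of the sequence model, with the CRF decoding layer and the hand-crafted discrete features omitted; layer sizes are illustrative.

```python
import torch
import torch.nn as nn

class BiLSTMTagger(nn.Module):
    """Minimal BiLSTM tagger for disfluency detection (CRF layer omitted).

    A sketch of the sequence model only; the paper additionally feeds
    hand-crafted discrete features and decodes with a CRF.
    """
    def __init__(self, vocab_size, embed_dim=100, hidden_dim=128, num_tags=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True,
                            bidirectional=True)
        self.out = nn.Linear(2 * hidden_dim, num_tags)  # fluent / disfluent

    def forward(self, token_ids):                # (batch, seq_len)
        h, _ = self.lstm(self.embed(token_ids))  # (batch, seq_len, 2*hidden)
        return self.out(h)                       # per-token tag scores

model = BiLSTMTagger(vocab_size=10000)
scores = model(torch.randint(0, 10000, (1, 12)))  # one 12-token utterance
print(scores.shape)  # torch.Size([1, 12, 2])
```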
fb17e9cab49665863f360d5f9e61e6048a7e1b28
Raw depth images captured by consumer depth cameras suffer from noise and missing values. Despite the success of CNN-based image processing on color image restoration, similar approaches for depth enhancement have not yet been widely addressed because of the lack of raw-clean pairwise datasets. In this paper, we propose a pairwise depth image dataset generation method using dense 3D surface reconstruction with a filtering method to remove low-quality pairs. We also present a multi-scale Laplacian-pyramid-based neural network and structure-preserving loss functions to progressively reduce the noise and holes from coarse to fine scales. Experimental results show that our network trained with our pairwise dataset can enhance the input depth images to become comparable with the 3D reconstructions obtained from depth streams, and can accelerate the convergence of dense 3D reconstruction results.
40de599b11b1553649354991cdf849048cb05f00
Both cost-sensitive classification and online learning have been extensively studied in the data mining and machine learning communities, respectively. However, very limited study addresses an important intersecting problem, that is, "Cost-Sensitive Online Classification". In this paper, we formally study this problem, and propose a new framework for Cost-Sensitive Online Classification by directly optimizing cost-sensitive measures using online gradient descent techniques. Specifically, we propose two novel cost-sensitive online classification algorithms, which are designed to directly optimize two well-known cost-sensitive measures: (i) maximization of the weighted sum of sensitivity and specificity, and (ii) minimization of the weighted misclassification cost. We analyze the theoretical bounds of the cost-sensitive measures made by the proposed algorithms, and extensively examine their empirical performance on a variety of cost-sensitive online classification tasks. Finally, we demonstrate the application of the proposed technique for solving several online anomaly detection tasks, showing that the proposed technique could be a highly efficient and effective tool to tackle cost-sensitive online classification tasks in various application domains.
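A small NumPy sketch of objective (ii) under simple assumptions: online gradient descent on a cost-weighted hinge loss with a fixed learning rate, which is not the paper's exact algorithm or step-size scheme.

```python
import numpy as np

def cs_online_train(stream, dim, c_pos=5.0, c_neg=1.0, lr=0.1):
    """Online gradient descent on a cost-weighted hinge loss.

    A sketch of minimizing weighted misclassification cost: false negatives
    cost c_pos, false positives cost c_neg. Illustrative, not the paper's
    exact update rule.
    """
    w = np.zeros(dim)
    for x, y in stream:              # y in {-1, +1}, arrives one at a time
        cost = c_pos if y > 0 else c_neg
        if y * w.dot(x) < 1.0:       # hinge loss is active -> update
            w += lr * cost * y * x
    return w

rng = np.random.default_rng(0)
stream = [(rng.normal(size=5), rng.choice([-1, 1], p=[0.9, 0.1]))
          for _ in range(1000)]      # imbalanced synthetic stream
w = cs_online_train(iter(stream), dim=5)
```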
2b7c330e7b3fbe96ea6f5342eae17d90095026cc
1bd1b7344044e8cc068a77b439fca011120c4bc3
The increasing use of deep neural networks for safety-critical applications, such as autonomous driving and flight control, raises concerns about their safety and reliability. Formal verification can address these concerns by guaranteeing that a deep learning system operates as intended, but the state-of-the-art is limited to small systems. In this work-in-progress report we give an overview of our work on mitigating this difficulty, by pursuing two complementary directions: devising scalable verification techniques, and identifying design choices that result in deep learning systems that are more amenable to verification.
6abe5eda71c3947013c59bbae700402813a1bc7f
Recently NoSQL databases and their related technologies are developing rapidly and are widely applied in many scenarios with their BASE (Basic Availability, Soft state, Eventual consistency) features. At present, there are more than 225 kinds of NoSQL databases. However, the overwhelming amount and constantly updated versions of databases make it challenging for people to compare their performance and choose an appropriate one. This paper is trying to evaluate the performance of five NoSQL clusters (Redis, MongoDB, Couchbase, Cassandra, HBase) by using a measurement tool – YCSB (Yahoo! Cloud Serving Benchmark), explain the experimental results by analyzing each database's data model and mechanism, and provide advice to NoSQL developers and users.
39424070108220c600f67fa2dbd25f779a9fdb7a
Generating an article automatically with a computer program is a challenging task in artificial intelligence and natural language processing. In this paper, we target essay generation, which takes as input a topic word and generates an organized article under the theme of that topic. We follow the idea of text planning (Reiter and Dale, 1997) and develop an essay generation framework. The framework consists of three components: topic understanding, sentence extraction, and sentence reordering. For each component, we study several statistical algorithms and empirically compare them in terms of qualitative or quantitative analysis. Although we run experiments on a Chinese corpus, the method is language independent and can be easily adapted to other languages. We lay out the remaining challenges and suggest avenues for future research.
e25221b4c472c4337383341f6b2c9375e86709af
e9c9da57bbf9a968489cb90ec7252319bcab42fb
Training convolutional networks (CNNs) that fit on a single GPU with minibatch stochastic gradient descent has become effective in practice. However, there is still no effective method for training large networks that do not fit in the memory of a few GPU cards, or for parallelizing CNN training. In this work we show that a simple hard mixture of experts model can be efficiently trained to good effect on large scale hashtag (multilabel) prediction tasks. Mixture of experts models are not new [7, 3], but in the past, researchers have had to devise sophisticated methods to deal with data fragmentation. We show empirically that modern weakly supervised data sets are large enough to support naive partitioning schemes where each data point is assigned to a single expert. Because the experts are independent, training them in parallel is easy, and evaluation is cheap for the size of the model. Furthermore, we show that we can use a single decoding layer for all the experts, allowing a unified feature embedding space. We demonstrate that it is feasible (and in fact relatively painless) to train far larger models than could be practically trained with standard CNN architectures, and that the extra capacity can be well used on current datasets.
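A schematic of this hard partitioning in scikit-learn terms: cluster the inputs, train one independent expert per cluster (trivially parallelizable), and route each test point to a single expert. The linear experts and random data below are illustrative stand-ins for the paper's CNN experts and hashtag data, and the shared decoding layer is omitted.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X, y = rng.normal(size=(2000, 16)), rng.integers(0, 4, size=2000)

# Hard partition: every point is assigned to exactly one expert, so the
# experts can be trained independently (and hence in parallel).
k = 4
router = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
experts = []
for i in range(k):
    mask = router.labels_ == i
    experts.append(LogisticRegression(max_iter=1000).fit(X[mask], y[mask]))

# Inference: route each input to its single expert.
x = X[:5]
assign = router.predict(x)
preds = np.array([experts[a].predict(xi[None])[0] for a, xi in zip(assign, x)])
```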
0ec33f27de8350470935ec5bf9d198eceaf63904
We present Local Naive Bayes Nearest Neighbor, an improvement to the NBNN image classification algorithm that increases classification accuracy and improves its ability to scale to large numbers of object classes. The key observation is that only the classes represented in the local neighborhood of a descriptor contribute significantly and reliably to their posterior probability estimates. Instead of maintaining a separate search structure for each class's training descriptors, we merge all of the reference data together into one search structure, allowing quick identification of a descriptor's local neighborhood. We show an increase in classification accuracy when we ignore adjustments to the more distant classes and show that the run time grows with the log of the number of classes rather than linearly in the number of classes as did the original. Local NBNN gives a 100 times speed-up over the original NBNN on the Caltech 256 dataset. We also provide the first head-to-head comparison of NBNN against spatial pyramid methods using a common set of input features. We show that local NBNN outperforms all previous NBNN based methods and the original spatial pyramid model. However, we find that local NBNN, while competitive with, does not beat state-of-the-art spatial pyramid methods that use local soft assignment and max-pooling.
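A minimal sketch of the Local NBNN scoring loop under these assumptions: a single merged nearest-neighbor index, per-descriptor updates only for classes present in the k-neighborhood, and the (k+1)-th neighbor supplying a shared background distance. The random vectors stand in for local descriptors such as SIFT.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def local_nbnn(query_descs, train_descs, train_labels, num_classes, k=10):
    """Classify an image (a bag of descriptors) with Local NBNN.

    One merged index over all training descriptors; each query descriptor
    adjusts only the classes found in its k-neighborhood, relative to the
    (k+1)-th neighbor's distance acting as the background distance.
    """
    index = NearestNeighbors(n_neighbors=k + 1).fit(train_descs)
    totals = np.zeros(num_classes)
    dists, idxs = index.kneighbors(query_descs)
    for d, i in zip(dists, idxs):
        dist_b = d[k]                           # background distance
        local_labels = train_labels[i[:k]]
        for c in np.unique(local_labels):
            dist_c = d[:k][local_labels == c].min()
            totals[c] += dist_c**2 - dist_b**2
    return int(np.argmin(totals))

rng = np.random.default_rng(0)
train = rng.normal(size=(500, 32))
labels = rng.integers(0, 5, size=500)
query = rng.normal(size=(50, 32))               # descriptors of one image
print(local_nbnn(query, train, labels, num_classes=5))
```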
68603a9372f4e9194ab09c4e585e3150b4025e97
Female Pattern Hair Loss or female androgenetic alopecia is the main cause of hair loss in adult women and has a major impact on patients' quality of life. It evolves from the progressive miniaturization of follicles that lead to a subsequent decrease of the hair density, leading to a non-scarring diffuse alopecia, with characteristic clinical, dermoscopic and histological patterns. In spite of the high frequency of the disease and the relevance of its psychological impact, its pathogenesis is not yet fully understood, being influenced by genetic, hormonal and environmental factors. In addition, response to treatment is variable. In this article, authors discuss the main clinical, epidemiological and pathophysiological aspects of female pattern hair loss.
3c398007c04eb12c0b7417f5d135919a300a470d
In recent years we have seen a tremendous growth in the volume of text documents available on the Internet, digital libraries, news sources, and company-wide intranets. Automatic text categorization, which is the task of assigning text documents to pre-specified classes (topics or themes) of documents, is an important task that can help both in organizing as well as in finding information on these huge resources. Text categorization presents unique challenges due to the large number of attributes present in the data set, the large number of training samples, and attribute dependencies. In this paper we focus on a simple linear-time centroid-based document classification algorithm that, despite its simplicity and robust performance, has not been extensively studied and analyzed. Our extensive experiments show that this centroid-based classifier consistently and substantially outperforms other algorithms such as Naive Bayesian, k-nearest-neighbors, and C4.5, on a wide range of datasets. Our analysis shows that the similarity measure used by the centroid-based scheme allows it to classify a new document based on how closely its behavior matches the behavior of the documents belonging to different classes, as measured by the average similarity between the documents. This matching allows it to dynamically adjust for classes with different densities. Furthermore, our analysis shows that the similarity measure of the centroid-based scheme accounts for dependencies between the terms in the different classes. We believe that this feature is the reason why it consistently outperforms other classifiers that cannot take these dependencies into account.
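A minimal sketch of such a centroid-based classifier, assuming unit-length tf-idf vectors and cosine similarity; the toy corpus and class names are illustrative.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

# Represent documents as unit-length tf-idf vectors, average the training
# vectors of each class into a centroid, and assign a new document to the
# class whose centroid has the highest cosine similarity (a dot product,
# since all vectors are normalized).
docs = ["the match ended in a draw", "the striker scored twice",
        "parliament passed the budget", "the senate debated the bill"]
labels = np.array([0, 0, 1, 1])               # 0 = sports, 1 = politics

vec = TfidfVectorizer()
X = vec.fit_transform(docs).toarray()
X /= np.linalg.norm(X, axis=1, keepdims=True)

centroids = np.stack([X[labels == c].mean(axis=0) for c in (0, 1)])

def classify(text):
    x = vec.transform([text]).toarray()[0]
    x /= max(np.linalg.norm(x), 1e-12)        # guard against all-OOV input
    return int(np.argmax(centroids @ x))

print(classify("the goalkeeper saved the match"))  # -> 0 (sports)
```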
7f13e66231c96f34f8de2b091e5b5dafb5db5327
Neural machine translation (NMT) models are able to partially learn syntactic information from sequential lexical information. Still, some complex syntactic phenomena such as prepositional phrase attachment are poorly modeled. This work aims to answer two questions: 1) Does explicitly modeling source or target language syntax help NMT? 2) Is tight integration of words and syntax better than multitask training? We introduce syntactic information in the form of CCG supertags either in the source, as an extra feature in the embedding, or in the target, by interleaving the target supertags with the word sequence. Our results on WMT data show that explicitly modeling syntax improves machine translation quality for English↔German, a high-resource pair, and for English↔Romanian, a low-resource pair, and also improves several syntactic phenomena including prepositional phrase attachment. Furthermore, a tight coupling of words and syntax improves translation quality more than multitask training.
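One plausible rendering of the target-side interleaving, with each word preceded by its CCG supertag; the example sentence and tags below are illustrative, not gold annotations.

```python
def interleave_supertags(words, supertags):
    """Sketch of a target-side representation: each word in the output
    sequence is preceded by its CCG supertag (one plausible interleaving)."""
    assert len(words) == len(supertags)
    out = []
    for tag, word in zip(supertags, words):
        out.extend([tag, word])
    return out

words = ["We", "saw", "the", "man", "with", "the", "telescope"]
tags  = ["NP", "(S\\NP)/NP", "NP/N", "N", "(NP\\NP)/NP", "NP/N", "N"]
print(" ".join(interleave_supertags(words, tags)))
```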
f218e9988e30b0dea133b8fcda7033b6f1172af9
Distinguishing between natural images (NIs) and computer-generated (CG) images with the naked eye is difficult. In this paper, we propose an effective method based on a convolutional neural network (CNN) for this fundamental image forensic problem. Having observed the rather limited performance of training existing CNNs from scratch or fine-tuning pre-trained networks, we design and implement a new and appropriate network with two cascaded convolutional layers at the bottom of a CNN. Our network can be easily adjusted to accommodate different sizes of input image patches while maintaining a fixed depth, a stable structure of CNN, and a good forensic performance. Considering the complexity of training CNNs and the specific requirement of image forensics, we introduce the so-called local-to-global strategy in our proposed network. Our CNN derives a forensic decision on local patches, and a global decision on a full-sized image can be easily obtained via simple majority voting. This strategy can also be used to improve the performance of existing methods that are based on hand-crafted features. Experimental results show that our method outperforms existing methods, especially in a challenging forensic scenario with NIs and CG images of heterogeneous origins. Our method also has good robustness against typical post-processing operations, such as resizing and JPEG compression. Unlike previous attempts to use CNNs for image forensics, we try to understand what our CNN has learned about the differences between NIs and CG images with the aid of adequate and advanced visualization tools.
7c38c9ff0108e774cdfe2a90ced1c89812e7f498
The development of radar signal processing algorithms for target tracking and higher-level automotive applications is mainly done based on real radar data. A data basis has to be acquired during costly and time-consuming test runs. For a comparably simple application like adaptive cruise control (ACC), the variety of significant traffic situations can be sufficiently covered by test runs. But for more advanced applications like intersection assistance, the effort for the acquisition of a representative set of radar data becomes unbearable. In this paper, we propose a way of simulating radar target lists in a realistic but computationally undemanding way, which allows the amount of real radar data needed to be significantly reduced.
bca4e05a45f310ceb327d67278858343e8df7089
1717dee0e8785d963e0333a0bb945757444bb651
Using validated carving techniques, we show that popular operating systems (e.g. Windows, Linux, and OSX) frequently have residual IP packets, Ethernet frames, and associated data structures present in system memory from long-terminated network traffic. Such information is useful for many forensic purposes including establishment of prior connection activity and services used; identification of other systems present on the system’s LAN or WLAN; geolocation of the host computer system; and cross-drive analysis. We show that network structures can also be recovered from memory that is persisted onto a mass storage medium during the course of system swapping or hibernation. We present our network carving techniques, algorithms and tools, and validate these against both purpose-built memory images and readily available forensic corpora. These techniques are valuable to both forensics tasks, particularly in analyzing mobile devices, and to cyber-security objectives such as malware analysis.
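A simplified sketch of one carving step under these assumptions: slide over a raw memory image and keep offsets whose bytes parse as an IPv4 header with a valid header checksum. Production carvers validate far more structure, and the input filename here is hypothetical.

```python
import struct

def ipv4_checksum(header: bytes) -> int:
    """One's-complement sum over 16-bit words; a valid header sums to zero."""
    if len(header) % 2:
        header += b"\x00"
    s = sum(struct.unpack("!%dH" % (len(header) // 2), header))
    while s >> 16:
        s = (s & 0xFFFF) + (s >> 16)
    return (~s) & 0xFFFF

def carve_ipv4(image: bytes):
    """Report offsets whose bytes parse as a plausible IPv4 header.

    A sketch of structure carving: version nibble and header length are
    checked, then the header checksum is verified. Real tools add more
    sanity checks (total length, addresses, adjacent Ethernet frames).
    """
    hits = []
    for off in range(len(image) - 20):
        b0 = image[off]
        if b0 >> 4 != 4:                 # IP version field must be 4
            continue
        ihl = (b0 & 0x0F) * 4            # header length in bytes (>= 20)
        if ihl < 20 or off + ihl > len(image):
            continue
        if ipv4_checksum(image[off:off + ihl]) == 0:
            hits.append(off)
    return hits

with open("memory.dump", "rb") as f:     # hypothetical raw memory image
    print(carve_ipv4(f.read())[:10])
```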
62a7cfab468ef3bbd763db8f80745bd93d2be7dd
Android, the fastest growing mobile operating system, released in November 2007, boasts a staggering 1.4 billion active users. Android users are susceptible to malicious applications that can hack into their personal data due to the lack of careful monitoring of their in-device security. There have been numerous works on devising malware detection methods. However, none of the earlier works is conclusive enough for direct application, and they lack experimental validation. In this paper, we have investigated the natures and identities of malicious applications and devised two novel detection approaches: a network-based approach and a system-call-based approach. To evaluate our proposed approaches, we performed experiments on a subset of 1260 malware samples, acquired from the Android Malware Genome Project, a malware database created by Y. Zhou et al. [1], and 227 non-malware (benign) applications. Results show that our system-call-based approach is able to detect malware with an accuracy of 87%, which is quite significant in the general malware detection context. Our proposed detection approaches, along with the experimental results, will provide security professionals with more precise and quantitative approaches in their investigations of mobile malware on Android systems.
e2d76fc1efbbf94a624dde792ca911e6687a4fd4
With over 50 billion downloads and more than 1.3 million apps in Google’s official market, Android has continued to gain popularity amongst smartphone users worldwide. At the same time there has been a rise in malware targeting the platform, with more recent strains employing highly sophisticated detection avoidance techniques. As traditional signature-based methods become less potent in detecting unknown malware, alternatives are needed for timely zero-day discovery. Thus this paper proposes an approach that utilizes ensemble learning for Android malware detection. It combines the advantages of static analysis with the efficiency and performance of ensemble machine learning to improve Android malware detection accuracy. The machine learning models are built using a large repository of malware samples and benign apps from a leading antivirus vendor. Experimental results and the analysis presented show that the proposed method, which uses a large feature space to leverage the power of ensemble learning, is capable of 97.3% to 99% detection accuracy with very low false positive rates. Keywords: mobile security; Android; malware detection; ensemble learning; static analysis; machine learning; data mining; random forest
08d32340e0e6aa50952860b90dfba2fe4764a85a
The sharp increase in the number of smartphones on the market, with the Android platform poised to become a market leader, makes the need for malware analysis on this platform an urgent issue. In this paper we capitalize on earlier approaches for dynamic analysis of application behavior as a means for detecting malware on the Android platform. The detector is embedded in an overall framework for the collection of traces from an unlimited number of real users based on crowdsourcing. Our framework has been demonstrated by analyzing the data collected in the central server using two types of data sets: those from artificial malware created for test purposes, and those from real malware found in the wild. The method is shown to be an effective means of isolating malware and alerting users to a downloaded malware. This shows the potential for avoiding the spreading of a detected malware to a larger community.
12ef153d9c7ccc374d56acf34b59fb2eaec6f755
The popularity and adoption of smart phones has greatly stimulated the spread of mobile malware, especially on popular platforms such as Android. In light of their rapid growth, there is a pressing need to develop effective solutions. However, our defense capability is largely constrained by the limited understanding of this emerging mobile malware and the lack of timely access to related samples. In this paper, we focus on the Android platform and aim to systematize or characterize existing Android malware. Particularly, with more than one year of effort, we have managed to collect more than 1,200 malware samples that cover the majority of existing Android malware families, ranging from their debut in August 2010 to recent ones in October 2011. In addition, we systematically characterize them from various aspects, including their installation methods, activation mechanisms, as well as the nature of carried malicious payloads. The characterization and a subsequent evolution-based study of representative families reveal that they are evolving rapidly to circumvent the detection of existing mobile anti-virus software. Based on the evaluation with four representative mobile security software products, our experiments show that the best case detects 79.6% of them while the worst case detects only 20.2% in our dataset. These results clearly call for the need to better develop next-generation anti-mobile-malware solutions.
8e0b8e87161dd4001d31832d5d9864fd31e8eccd
This paper presents a technique for increasing the bandwidth of a rectangular patch antenna from 0.88 GHz (7.76 - 8.64 GHz) to 6.75 GHz (3.49 - 10.24 GHz). The technique uses an inset-fed patch antenna with a modified ground plane to achieve the widest bandwidth. We propose three types of rectangular patch antenna: a simple rectangular patch fed by a microstrip line, an inset-fed rectangular patch, and an inset-fed rectangular patch with a modified ground plane. The final simulation results show that the lower band edge moves down from 7.76 GHz to 3.49 GHz and the upper band edge shifts up from 8.64 GHz to 10.24 GHz, making this a suitable option for increasing the bandwidth of a rectangular patch antenna for wideband operation. Details of the bandwidth enhancement of the microstrip patch antenna are described, and simulation results for the obtained wideband performance are presented using IE3D Zeland software.
3cd0b6a48b14f86ed261240f30113a41bacd2255
Context is a key issue in interaction between human and computer, describing the surrounding facts that add meaning. In published mobile computing research, the parameter location is most often used to approximate context and to implement context-aware applications. We propose that ultra-mobile computing, characterized by devices that are operational and operated while on the move (e.g. PDAs, mobile phones, wearable computers), can significantly benefit from a wider notion of context. To structure the field we introduce a working model for context, discuss mechanisms to acquire context beyond location, and the application of context-awareness in ultra-mobile computing. We investigate the utility of sensors for context-awareness and present two prototypical implementations: a light-sensitive display and an orientation-aware PDA interface. The concept is then extended to a model for sensor fusion to enable more sophisticated context recognition. Based on an implementation of the model, an experiment is described and the feasibility of the approach is demonstrated. Further, we explore fusion of sensors for acquisition of information on more sophisticated contexts. 1 Introduction. Context is "that which surrounds, and gives meaning to something else". Various areas of computer science have been investigating this concept over the last 40 years, to relate information processing and communication to aspects of the situations in which such processing occurs. Most notably, context is a key concept in Natural Language Processing and more generally in Human-Computer Interaction. For instance, state-of-the-art graphical user interfaces use context to adapt menus to contexts such as user preference and dialogue status. A new domain in which context currently receives growing attention is mobile computing. While a first wave of mobile computing was based on portable general-purpose computers and primarily focussed on location transparency, a second wave is now based on ultra-mobile devices and an interest in relating these to their surrounding situation of usage. Ultra-mobile devices are a new class of small mobile computer, defined as computing devices that are operational and operated while on the move, and characterized by a shift from general-purpose computing to task-specific support. Ultra-mobile devices comprise for instance Personal Digital Assistants (PDAs), mobile phones, and wearable computers. A primary concern of context-awareness in mobile computing is awareness of the physical environment surrounding a user and their ultra-mobile device. In recent work, this concern has been addressed by implementation of location-awareness, for instance based on global positioning, or the use of beacons. Location …
62edb6639dc857ad0f33e5d8ef97af89be7a3bc7
A novel system for the location of people in an office environment is described. Members of staff wear badges that transmit signals providing information about their location to a centralized location service, through a network of sensors. The paper also examines alternative location techniques, system design issues and applications, particularly relating to telephone call routing. Location systems raise concerns about the privacy of an individual and these issues are also addressed.
a332fa84fb865fac25e9c7cf0c18933303a858d0
Significant progress has been made in recent years in the development of microwave tomographic imaging systems for medical applications. In order to design an appropriate microwave imaging system for industrial applications, and to interpret the images produced, the materials under imaging need to be characterised. In this paper, we describe the use of open-ended coaxial probes for the measurement of dielectric properties of liquids at frequencies between 400 MHz and 20 GHz. The results obtained using the Misra-Blackham model for a number of liquids, including water of different salinity, are compared with those published in the literature, showing good agreement. For saline water, in particular, the frequency of the minimum loss depends on the salinity. It may change from 1.5 GHz for the inclusion of 0.2% NaCl to 7 GHz for the inclusion of 3.5% NaCl. The real part of the permittivity may also change by approximately 50% from 400 MHz to 20 GHz.
c02fd0b0ad018556de5f9cddcccdf813c8fbb0f8
High-resolution satellite imagery has been increasingly used in remote sensing classification problems. One of the main factors is the availability of this kind of data. Despite the high availability, very little effort has been placed on the zebra crossing classification problem. In this letter, crowdsourcing systems are exploited in order to enable the automatic acquisition and annotation of a large-scale satellite imagery database for crosswalk-related tasks. Then, this data set is used to train deep-learning-based models in order to accurately classify satellite images that do or do not contain zebra crossings. A novel data set with more than 240000 images from 3 continents, 9 countries, and more than 20 cities was used in the experiments. The experimental results showed that freely available crowdsourcing data can be used to accurately (97.11%) train robust models to perform crosswalk classification on a global scale.
c14dff27746b49bea3c5f68621261f266a766461
f32d9a72d51f6db6ec26f0209be73dd3c400b42e
A 10-point plan toward fashioning a proposal to ban some---if not all---lethal autonomous weapons.
bbbd015155bbe5098aad6b49a548e9f3570e49ec
This paper introduces a novel Gabor-Fisher classifier (GFC) for face recognition. The GFC method, which is robust to changes in illumination and facial expression, applies the enhanced Fisher linear discriminant model (EFM) to an augmented Gabor feature vector derived from the Gabor wavelet representation of face images. The novelty of this paper comes from 1) the derivation of an augmented Gabor feature vector, whose dimensionality is further reduced using the EFM by considering both data compression and recognition (generalization) performance; 2) the development of a Gabor-Fisher classifier for multi-class problems; and 3) extensive performance evaluation studies. In particular, we performed comparative studies of different similarity measures applied to various classifiers. We also performed comparative experimental studies of various face recognition schemes, including our novel GFC method, the Gabor wavelet method, the eigenfaces method, the Fisherfaces method, the EFM method, the combination of Gabor and the eigenfaces method, and the combination of Gabor and the Fisherfaces method. The feasibility of the new GFC method has been successfully tested on face recognition using 600 FERET frontal face images corresponding to 200 subjects, which were acquired under variable illumination and facial expressions. The novel GFC method achieves 100% accuracy on face recognition using only 62 features.
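A rough sketch of the feature pipeline, assuming OpenCV's Gabor kernels and with scikit-learn's Fisher linear discriminant standing in for the paper's enhanced Fisher model (EFM); filter parameters, image sizes, and the random stand-in images are all illustrative.

```python
import numpy as np
import cv2
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def gabor_features(img, scales=(4, 8, 16), n_orient=8):
    """Sketch of an augmented Gabor feature vector: filter the image with a
    small bank of Gabor kernels, downsample the response magnitudes, and
    concatenate them. Parameter choices are illustrative, not the paper's."""
    feats = []
    for lam in scales:
        for k in range(n_orient):
            theta = k * np.pi / n_orient
            kern = cv2.getGaborKernel((31, 31), sigma=lam / 2.0, theta=theta,
                                      lambd=lam, gamma=0.5)
            resp = cv2.filter2D(img.astype(np.float32), -1, kern)
            feats.append(np.abs(resp)[::4, ::4].ravel())  # 4x downsampling
    return np.concatenate(feats)

# Fisher's linear discriminant for dimensionality reduction + classification,
# trained here on random stand-ins for registered face images.
rng = np.random.default_rng(0)
X = np.stack([gabor_features(rng.integers(0, 255, (64, 64)))
              for _ in range(20)])
y = np.repeat(np.arange(10), 2)          # 10 subjects, 2 images each
clf = LinearDiscriminantAnalysis().fit(X, y)
```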
0160ec003ae238a98676b6412b49d4b760f63544
We present an unsupervised technique for visual learning, which is based on density estimation in high-dimensional spaces using an eigenspace decomposition. Two types of density estimates are derived for modeling the training data: a multivariate Gaussian (for unimodal distributions) and a Mixture-of-Gaussians model (for multimodal distributions). These probability densities are then used to formulate a maximum-likelihood estimation framework for visual search and target detection for automatic object recognition and coding. Our learning technique is applied to the probabilistic visual modeling, detection, recognition, and coding of human faces and nonrigid objects, such as hands.
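A condensed sketch of the unimodal case: fit a principal subspace and score a vector by its in-subspace Mahalanobis distance plus the residual distance from feature space, a simplified form of the paper's two-component likelihood. Data, dimensions, and the residual weight are illustrative.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
train = rng.normal(size=(500, 256))          # stand-in for training patches

pca = PCA(n_components=20).fit(train)        # eigenspace decomposition

def log_likelihood_score(x, rho=1.0):
    """Gaussian log-likelihood (up to a constant) split into the principal
    subspace term and the out-of-subspace reconstruction residual."""
    y = pca.transform(x[None])[0]            # principal components
    mahalanobis = np.sum(y**2 / pca.explained_variance_)
    recon = pca.inverse_transform(y[None])[0]
    residual = np.sum((x - recon) ** 2)      # distance from feature space
    return -0.5 * (mahalanobis + residual / rho)

print(log_likelihood_score(train[0]))        # higher = more "target-like"
```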
ac2c955a61002b674bd104b91f89087271fc3b8e
A multilevel boost power factor correction (PFC) rectifier is presented in this paper, controlled by a cascaded controller and a multicarrier pulse-width modulation technique. The presented topology has fewer active semiconductor switches compared to similar ones, reducing the number of required gate drives and thus shrinking the manufactured enclosure significantly. A simple controller has been implemented on the studied converter to generate a constant voltage at the output while generating a five-level voltage waveform at the input, without connecting the load to the neutral point of the dc bus capacitors. The multicarrier pulse-width modulation technique has been used to produce switching pulses from the control signal at a fixed switching frequency. The harmonics of the multilevel voltage waveform, which directly affect the harmonic content of the input current and the size of the required filters, have been analyzed comprehensively. Full experimental results confirm the good dynamic performance of the proposed five-level PFC boost rectifier in delivering power from the ac grid to dc loads while correcting the power factor at the ac side as well as reducing the current harmonics remarkably.
c2fafa93bd9b91ede867d4979bc747334d989040
Minutiae, as the essential features of fingerprints, play a significant role in fingerprint recognition systems. Most existing minutiae extraction methods are based on a series of hand-defined preprocessing steps such as binarization, thinning, and enhancement. However, these preprocessing steps require strong prior knowledge and are always lossy operations, which leads to dropped or falsely extracted minutiae. In this paper, a novel minutiae extraction approach based on deep convolutional neural networks is proposed, which directly extracts minutiae from raw fingerprint images without any preprocessing, taking advantage of the strong representative capacity of deep convolutional neural networks. Minutiae can be effectively extracted due to the well-designed architectures. Furthermore, accuracy is guaranteed in that a comprehensive estimate is made to eliminate spurious minutiae. Moreover, a number of implementation techniques are employed both to avoid overfitting and to improve robustness. This approach performs well because it not only makes full use of the information in fingerprint images but also learns minutiae patterns from large amounts of data. Comparisons are made with previous works and a widely applied commercial fingerprint identification system. Results show that our approach performs better both in accuracy and robustness.
36b0ba31eb7489772616ea9d5bd789483d494e93
New regulations impose more stringent limits on current harmonics injected by power converters that are achieved with pulsewidth-modulated (PWM) rectifiers. In addition, several applications demand the capability of power regeneration to the power supply. This work presents the state of the art in the field of regenerative rectifiers with reduced input harmonics and improved power factor. Regenerative rectifiers are able to deliver energy back from the dc side to the ac power supply. Topologies for single- and three-phase power supplies are considered with their corresponding control strategies. Special attention is given to the application of voltage- and current-source PWM rectifiers in different processes with a power range from a few kilowatts up to several megawatts. This paper shows that PWM regenerative rectifiers are a highly developed and mature technology with a wide industrial acceptance.
7b8031213276b23060fbd17d1d7182835fc2e0c3
This paper describes an integrated frequency multiplier, implemented as a Gilbert-cell-based frequency doubler in a 130 nm SiGe BiCMOS technology. The circuit demonstrates a 3 dB bandwidth of 97–134 GHz with a peak output power of 1 dBm for 1 dBm input power. The fundamental suppression, measured at the single-ended output, is better than 21 dBc, while the frequency doubler consumes 69 mW from a 3.3 V supply. The doubler is preceded by a differential amplifier functioning as an active balun to generate a differential signal for the Gilbert cell.
2495ebdcb6da8d8c2e82cf57fcaab0ec003d571d
Given a large dataset of images, we seek to automatically determine the visually similar object and scene classes together with their image segmentation. To achieve this we combine two ideas: (i) that a set of segmented objects can be partitioned into visual object classes using topic discovery models from statistical text analysis; and (ii) that visual object classes can be used to assess the accuracy of a segmentation. To tie these ideas together we compute multiple segmentations of each image and then: (i) learn the object classes; and (ii) choose the correct segmentations. We demonstrate that such an algorithm succeeds in automatically discovering many familiar objects in a variety of image datasets, including those from Caltech, MSRC and LabelMe.
6d4e3616d0b27957c4107ae877dc0dd4504b69ab
In this paper, we present an approach for learning a visual representation from the raw spatiotemporal signals in videos. Our representation is learned without supervision from semantic labels. We formulate our method as an unsupervised sequential verification task, i.e., we determine whether a sequence of frames from a video is in the correct temporal order. With this simple task and no semantic labels, we learn a powerful visual representation using a Convolutional Neural Network (CNN). The representation contains complementary information to that learned from supervised image datasets like ImageNet. Qualitative results show that our method captures information that is temporally varying, such as human pose. When used as pre-training for action recognition, our method gives significant gains over learning without external data on benchmark datasets like UCF101 and HMDB51. To demonstrate its sensitivity to human pose, we show results for pose estimation on the FLIC and MPII datasets that are competitive, or better than approaches using significantly more supervision. Our method can be combined with supervised representations to provide an additional boost in accuracy.
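A small sketch of how training tuples for the verification task could be assembled, using frame indices as stand-ins for frames; the sampling gap and the particular shuffle are illustrative choices, and the paper additionally biases sampling toward high-motion windows.

```python
import random

def make_order_tuples(num_frames, gap=5, seed=0):
    """Sample one temporally ordered frame triple (positive) and a shuffled
    version of it (negative) for the order-verification task."""
    rng = random.Random(seed)
    a = rng.randrange(0, num_frames - 2 * gap)
    triple = (a, a + gap, a + 2 * gap)            # correct temporal order
    shuffled = (triple[1], triple[0], triple[2])  # e.g. swap first two frames
    return (triple, 1), (shuffled, 0)             # (frame tuple, in-order label)

pos, neg = make_order_tuples(num_frames=120)
print(pos, neg)   # these tuples index frames fed to a triple-stream CNN
```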
f226ec13e016943102eb7ebedab7cf3e9bef69b2
f7ec4269303b4f5a4b4964a278a149a69f2a5910
Accurate and early diagnosis of Alzheimer’s disease (AD) plays an important role in patient care and the development of future treatments. Structural and functional neuroimages, such as magnetic resonance images (MRI) and positron emission tomography (PET), provide powerful imaging modalities to help understand the anatomical and functional neural changes related to AD. In recent years, machine learning methods have been widely studied for the analysis of multi-modality neuroimages for quantitative evaluation and computer-aided diagnosis (CAD) of AD. Most existing methods extract hand-crafted imaging features after image preprocessing such as registration and segmentation, and then train a classifier to distinguish AD subjects from other groups. This paper proposes to construct cascaded convolutional neural networks (CNNs) to learn the multi-level and multimodal features of MRI and PET brain images for AD classification. First, multiple deep 3D-CNNs are constructed on different local image patches to transform the local brain image into more compact high-level features. Then, an upper high-level 2D-CNN followed by a softmax layer is cascaded to ensemble the high-level features learned from the multiple modalities and generate the latent multimodal correlation features of the corresponding image patches for the classification task. Finally, these learned features are combined by a fully connected layer followed by a softmax layer for AD classification. The proposed method can automatically learn generic multi-level and multimodal features from multiple imaging modalities for classification, and these features are robust to scale and rotation variations to some extent. No image segmentation or rigid registration is required in preprocessing the brain images. Our method is evaluated on the baseline MRI and PET images of 397 subjects, including 93 AD patients, 204 with mild cognitive impairment (MCI, 76 pMCI + 128 sMCI), and 100 normal controls (NC) from the Alzheimer’s Disease Neuroimaging Initiative (ADNI) database. Experimental results show that the proposed method achieves an accuracy of 93.26% for classification of AD vs. NC and 82.95% for classification of pMCI vs. NC, demonstrating promising classification performance.