abstract | authors | title | __index_level_0__
---|---|---|---|
Given a graph G = (V,E) with weights on its edges and a set of specified nodes S ⊆ V, the Steiner 2-edge connected subgraph problem is to find a minimum weight 2-edge connected subgraph of G, spanning S. This problem has applications to the design of reliable communication and transportation networks. In this paper we give a complete linear description of the dominant of the associated polytope in a class of graphs called perfectly Steiner 2-edge connected graphs, which contains series-parallel graphs. We also discuss related polyhedra. | ['Mourad Baïou'] | On the dominant of the Steiner 2-edge connected subgraph polytope | 535,432 |
The rising accessibility and popularity of gambling products have increased interest in the effects of gambling. Nonetheless, research on gambling measures is scarce. This paper presents the application of data mining techniques, on 46,514 gambling sessions, to distinguish types of gambling and identify potential instances of problem gambling in electronic gaming machines (EGMs). Gambling sessions included measures of gambling involvement, out-of-pocket expense, winnings and cost of gambling. In this first exploratory study, sessions were clustered into four clusters, as a stability test determined four clusters to be the highest-quality and most stable solution within our clustering criteria. Based on the gambling behavior expressed within these sessions, our k-means cluster analysis indicated that sessions were classified as potential non-problem gambling sessions, potential low risk gambling sessions, potential moderate risk gambling sessions, and potential problem gambling sessions. While the complexity of EGM data prevents researchers from recognizing the incidence of problem gambling in a specific individual, our methods suggest that the lack of player identification does not prevent one from identifying the incidence of problem gambling behavior. | ['Maria Gabriella Mosquera', 'Vlado Keelj'] | Identifying Behavioral Characteristics in EGM Gambling Data Using Session Clustering | 648,645 |
We consider the problem of simultaneous bitrate allocation for multiple video streams. Current methods for multiplexing video streams often rely on identifying the relative complexity of the video streams to improve the combined overall quality. In such methods, not all the videos benefit from the multiplexing process. Typically, the quality of high motion videos is improved at the expense of a reduction in the quality of low motion videos. In our approach, we use a competitive equilibrium allocation of bitrate to improve the quality of all the video streams by finding trades between videos across time. A central controller collects rate-distortion information from each video user and makes a joint bitrate allocation decision. Each user encodes and transmits his video at the allocated bitrate through a shared channel. The proposed method uses information about not only the differing complexity of the video streams at every moment but also the differing complexity of each stream over time. Using the competitive equilibrium bitrate allocation approach for multiple video streams, simulation results show that all the video streams perform better or at least as well as with individual encoding. The results of this research will be useful both for ad hoc networks that employ a cluster head model and for cellular architectures. | ['Mayank Tiwari', 'Theodore Groves', 'Pamela C. Cosman'] | Competitive Equilibrium Bitrate Allocation for Multiple Video Streams | 190,463 |
A major problem with graph visualization libraries and packages is the lack of interactivity and 3D visualization. This makes understanding and analyzing complex graphs and topologies difficult. Existing packages and tools which do provide similar functionality are difficult to use, install, integrate and have many dependencies. This paper discusses NetworkViz.jl, a Julia package which addresses the issues of existing graph visualization platforms while ensuring simplicity, efficiency, a diverse set of features and easy integration with other packages. This package supports two- and three-dimensional visualizations and uses a force-directed graph drawing approach to generate aesthetically pleasing and easy-to-use graphs. The library was built entirely in Julia due to its good documentation, large open source community and in order to fully utilize the inherent advantages provided by the language. As graph visualizations are important for analyzing complex networks, testing routing algorithms, as teaching aids, etc., we believe that NetworkViz.jl will be of integral use in the fields of research and education. | ['Chirag Jamadagni', 'Abhijith Anilkumar', 'Kevin Thomas Mathew', 'Manjunath Mulimani', 'Shashidhar G. Koolagudi'] | Dynamic 3D graph visualizations in julia | 981,576 |
Alvaro Ocampo traversed many landscapes to arrive at his current space in the digital art landscape. Eventually, the artist made his way to the digital world, where he is no longer subjected to the tyranny of the one-off. He believes digital art is the new version of traditional etching in the way that it eliminates the idea of the one original piece of art. | ['Gary Singh'] | Landscapes of the Digital Baroque | 725,104 |
Error-Resilient Multicast for Multi-View 3D Videos in IEEE 802.11 Networks | ['Chi-Heng Lin', 'De-Nian Yang', 'Ji-Tang Lee', 'Wanjiun Liao'] | Error-Resilient Multicast for Multi-View 3D Videos in IEEE 802.11 Networks | 638,950 |
This paper investigates the temporal cluster hypothesis: in search tasks where time plays an important role, do relevant documents tend to cluster together in time? We explore this question in the context of tweet search and temporal feedback: starting with an initial set of results from a baseline retrieval model, we estimate the temporal density of relevant documents, which is then used for result reranking. Our contributions lie in a method to characterize this temporal density function using kernel density estimation, with and without human relevance judgments, and an approach to integrating this information into a standard retrieval model. Experiments on TREC datasets confirm that our temporal feedback formulation improves search effectiveness, thus providing support for our hypothesis. Our approach outperforms both a standard baseline and previous temporal retrieval models. Temporal feedback improves over standard lexical feedback (with and without human judgments), illustrating that temporal relevance signals exist independently of document content. | ['Miles Efron', 'Jimmy J. Lin', 'Jiyin He', 'Arjen P. de Vries'] | Temporal feedback for tweet search with non-parametric density estimation | 282,037 |
Our ability to record increasingly larger and more complex sets of data is accompanied by a decline in our capacity to interpret and understand these data in the fullest sense. Multivariate analysis partially assists us in our quest by reducing the dimensionality in optimal ways, but our view is stuck in two dimensions because of the planar nature of the graphical medium, be it the printed page or the computer screen. | ['Michael Greenacre'] | Dynamic graphics for research and teaching, with applications in the life sciences | 185,682 |
To measure the volatility spillover effects between the gold market and the stock market, a VAR-DCC-BVGARCH model is utilized to analyze the relationship between the two. The bivariate GARCH (BVGARCH) model, employed to simultaneously capture the conditional volatilities of both assets, is combined with the dynamic conditional correlation (DCC) model, used to estimate their time-varying conditional correlation. Empirical results show that the volatility spillover effects between gold and stocks persist in the long run. In particular, the spillover effects from gold prices to stock prices are more pronounced. Furthermore, a time-varying correlation between the two assets is also found, which becomes more significant as the volatility of gold prices increases. These results can help investors better manage the risks and returns of a portfolio including both assets. | ['Xunfa Lu', 'Jiawei Wang', 'Kin Keung Lai'] | Volatility Spillover Effects between Gold and Stocks Based on VAR-DCC-BVGARCH Model | 88,035 |
With the development of wireless communication technologies, users are no longer satisfied with only a single service at a time; they are willing to enjoy multiple services simultaneously. Therefore, scheduling multiple services per user becomes an important issue in resource management. In this paper, the multiple-service scheduling problem is first formulated as an integrated optimization problem based on a utility function in homogeneous service systems. Due to its NP-hard nature, a set of low-complexity sub-optimal algorithms is proposed and used to schedule resources for multiple services per user at the downlink of Orthogonal Frequency Division Multiplexing (OFDM) systems. The proposed algorithms effectively and efficiently distribute assigned resources among multiple services for one user. Moreover, the utility of our algorithms is further extended from homogeneous service systems to heterogeneous service systems, and full exploitation of multi-user diversity gain is achieved while guaranteeing quality of service (QoS). The simulation results show that the proposed algorithm outperforms the traditional algorithm in terms of system best-effort service throughput and fairness. | ['Ying Wang', 'Zixiong Chen', 'Cong Shi', 'Ping Zhang'] | Utility Based Scheduling Algorithm for Multiple Services per User in OFDM Systems | 489,753 |
We study convexity properties of graphs. In this paper we present a linear-time algorithm for the geodetic number in tree-cographs. Settling a 10-year-old conjecture, we prove that the Steiner number is at least the geodetic number in AT-free graphs. Computing a maximal and proper monophonic set in AT-free graphs is NP-complete. We present polynomial algorithms for the monophonic number in permutation graphs and the geodetic number in $P_4$-sparse graphs. | ['Wing-Kai Hon', 'Ton Kloks', 'Hsiang Hsuan Liu', 'Hung-Lung Wang', 'Yue-Li Wang'] | Convexities in Some Special Graph Classes ---New Results in AT-free Graphs and Beyond | 549,726 |
High-level synthesis promises a significant shortening of the FPGA design cycle when compared with design entry using register transfer level (RTL) languages. Recent evaluations report that C-to-RTL flows can produce results with a quality close to hand-crafted designs [1]. Algorithms which use dynamic, pointer-based data structures, which are common in software, remain difficult to implement well. In this paper, we describe a comparative case study using Xilinx Vivado HLS as an exemplary state-of-the-art high-level synthesis tool. Our test cases are two alternative algorithms for the same compute-intensive machine learning technique (clustering) with significantly different computational properties. We compare a data-flow centric implementation to a recursive tree traversal implementation which incorporates complex data-dependent control flow and makes use of pointer-linked data structures and dynamic memory allocation. The outcome of this case study is twofold: We confirm similar performance between the hand-written and automatically generated RTL designs for the first test case. The second case reveals a degradation in latency by a factor greater than 30× if the source code is not altered prior to high-level synthesis. We identify the reasons for this shortcoming and present code transformations that narrow the performance gap to a factor of four. We generalise our source-to-source transformations whose automation motivates research directions to improve high-level synthesis of dynamic data structures in the future. | ['Felix Winterstein', 'Samuel Bayliss', 'George A. Constantinides'] | High-level synthesis of dynamic data structures: A case study using Vivado HLS | 361,944 |
Background: Three-dimensional (3D) reconstruction in electron tomography (ET) has emerged as a leading technique to elucidate the molecular structures of complex biological specimens. Blob-based iterative methods are advantageous reconstruction methods for 3D reconstruction in ET, but demand huge computational costs. Multiple graphics processing units (multi-GPUs) offer an affordable platform to meet these demands. However, a synchronous communication scheme between multi-GPUs leads to idle GPU time, and the weighted matrix involved in iterative methods cannot be loaded into GPUs, especially for large images, due to the limited available memory of GPUs. | ['Xiaohua Wan', 'Zhang F', 'Q.P. Chu', 'Zhiyong Liu'] | High-performance blob-based iterative three-dimensional reconstruction in electron tomography using multi-GPUs | 483,885 |
A Linear Code and its Application into Secret Sharing. | ['Juan Carlos Ku-Cauich', 'Guillermo Morales-Luna'] | A Linear Code and its Application into Secret Sharing. | 740,939 |
The error floor performance of finite-length irregular low-density parity-check (LDPC) codes can be very poor if code degree distributions are chosen to optimize the threshold performance. In this paper we show that by constraining the optimization process, a balance between threshold and error floor performance can be obtained. The resulting degree distributions give the best threshold performance subject to some minimum requirement on the error floor. | ['Sarah J. Johnson', 'Steven R. Weller'] | Constraining LDPC degree distributions for improved error floor performance | 362,633 |
We propose a shape-based, hierarchical part-template matching approach to simultaneous human detection and segmentation combining local part-based and global shape-template-based schemes. The approach relies on the key idea of matching a part-template tree to images hierarchically to detect humans and estimate their poses. For learning a generic human detector, a pose-adaptive feature computation scheme is developed based on a tree matching approach. Instead of traditional concatenation-style image location-based feature encoding, we extract features adaptively in the context of human poses and train a kernel-SVM classifier to separate human/nonhuman patterns. Specifically, the features are collected in the local context of poses by tracing around the estimated shape boundaries. We also introduce an approach to multiple occluded human detection and segmentation based on an iterative occlusion compensation scheme. The output of our learned generic human detector can be used as an initial set of human hypotheses for the iterative optimization. We evaluate our approaches on three public pedestrian data sets (INRIA, MIT-CBCL, and USC-B) and two crowded sequences from Caviar Benchmark and Munich Airport data sets. | ['Zhe Lin', 'Larry S. Davis'] | Shape-Based Human Detection and Segmentation via Hierarchical Part-Template Matching | 210,853 |
We introduce three generalizations of homotopy equivalence in digital images, to allow us to express whether a finite and an infinite digital image are similar with respect to homotopy. We show that these three generalizations are not equivalent to ordinary homotopy equivalence, and give several examples. We show that, like homotopy equivalence, our three generalizations imply isomorphism of fundamental groups, and are preserved under wedges and Cartesian products. | ['Laurence A. Boxer', 'P. Christopher Staecker'] | Homotopy relations for digital images | 642,987 |
Recently, multi-task feature learning has become a widely applied approach for visual tracking, since it benefits from features shared across tasks. However, selecting features appropriately from multiple tasks is still a challenging problem due to the complex variation of the appearance of moving objects, which influences not only the features of a single task but also the relationships between the features of multiple tasks. To address this problem, this paper presents a novel sparse learning model for selecting multi-task features adaptively. Compared to existing multi-task models, the proposed model is capable of both calibrating the loss function according to the noise level of a task to keep its specific features, and identifying the relevant and irrelevant (outlier) tasks simultaneously by decomposing the regularized matrix into two specified structures. The proposed model allows us to preserve specific features of individual tasks via calibration and to exploit sparse patterns over the relevant tasks via identification. Empirical evaluations demonstrate that the proposed method has better performance than a number of state-of-the-art trackers on available public image sequences. | ['Pengguang Chen', 'Xingming Zhang', 'Aihua Mao', 'Jianbin Xiong'] | Visual tracking via adaptive multi-task feature learning with calibration and identification | 900,022 |
Time and frequency modulated arrays have numerous application areas including radar, navigation, and communications. Specifically, a time modulated array can create a beampattern with low sidelobes via connecting and disconnecting the antenna elements from the feed network, while the frequency modulated frequency diverse array produces a range-dependent pattern. In this paper, we aim to introduce these advanced arrays to the signal processing community so that more investigations in terms of theory, methods, and applications, can be facilitated. The research progress of time/frequency modulated array studies is reviewed and the most recent advances are discussed. Moreover, potential applications in radar and communications are presented, along with their technical challenges, especially in signal processing aspects. | ['Wen-Qin Wang', 'Hing Cheung So', 'Alfonso Farina'] | An Overview on Time/Frequency Modulated Array Processing | 934,526 |
Toward the emergence of a taxonomy for adaptive personalization through a local multi-agent approach. | ['Sylvain Videau', 'Valérie Camps', 'Pierre Glize'] | Vers l'émergence d'une taxonomie pour la personnalisation adaptative par une approche multi-agent locale. | 793,686 |
The emergence of IoT systems introduced new kinds of challenges for the designers of such large-scale, highly distributed systems. The sheer number of participating devices raises a crucial question: how can they be coordinated? Engineers often opt for using a simulator to evaluate new approaches or scenarios in various environments. This raises a second crucial question: how can such a large system be simulated efficiently? Existing simulators (even IoT-focused ones) are often limited to particular scenarios and not capable of evaluating coordination approaches. In this paper we propose a chemical coordination model and a new extension to the DISSECT-CF cloud simulator. We expect that their combination, on the one hand, ensures distributed adaptive coordination and, on the other hand, allows the separation of simulation problems into manageable sizes; these enable the analysis of large-scale IoT systems with decentralized coordination approaches. | ['Gabor Kecskemeti', 'Zsolt Németh'] | Foundations for Simulating IoT Control Mechanisms with a Chemical Analogy | 976,920 |
We investigate XML query processing in a portable/handheld client device with limited memory in a ubiquitous computing environment. Because of memory limitation in the client, the source XML data, possibly of large volume, is fragmented in the server and streamed in fragments over which query processing is done in the client. The state-of-the-art techniques employ the hole-filler model in fragmenting XML data and processing queries over the XML fragment stream. In this paper, we propose a new technique where an XML labeling scheme is employed instead of the hole-filler model. Through preliminary experiments, we show that our technique outperforms the state-of-the-art techniques both in memory usage and in query processing time. | ['Sangwook Lee', 'Jin Kim', 'Hyunchul Kang'] | XFlab: a technique of query processing over XML fragment stream | 219,647 |
Hyperelliptic curve cryptography with genus larger than one has not been seriously considered for cryptographic purposes because many existing implementations are significantly slower than elliptic curve versions with the same level of security. In this paper, the first ever complete hardware implementation of a hyperelliptic curve coprocessor is described. This coprocessor is designed for genus two curves over F_{2^{113}}. Additionally, a modification to the Extended Euclidean Algorithm is presented for the GCD calculation required by Cantor's algorithm. On average, this new method computes the GCD in one-fourth the time required by the Extended Euclidean Algorithm. | ['Nigel Boston', 'T. Clancy', 'Yihsiang Liow', 'J. Webster'] | Genus Two Hyperelliptic Curve Coprocessor | 385,419 |
The anxiety inducing paradigms such as the threat-of-shock paradigm have provided ample data on the emotional processing of predictable and unpredictable threat, but little is known about the processing of aversive, threat-irrelevant stimuli in these paradigms. We investigated how the predictability of threat influences the neural visual processing of threat-irrelevant fearful and neutral faces. Thirty-two healthy individuals participated in an NPU-threat test, consisting of a safe or neutral condition (N) and a predictable (P) as well as an unpredictable (U) threat condition, using audio-visual threat stimuli. In all NPU-conditions, we registered participants' brain responses to threat-irrelevant faces via magnetoencephalography. The data showed that increasing unpredictability of threat evoked increasing emotion regulation during face processing predominantly in dorsolateral prefrontal cortex regions during an early to mid-latency time interval. Importantly, we obtained only main effects but no significant interaction of facial expression and conditions of different threat predictability, neither in behavioral nor in neural data. Healthy individuals with average trait anxiety are thus able to maintain adaptive stimulus evaluation processes under predictable and unpredictable threat conditions. | ['Isabelle A.G. Klinkenberg', 'Maimu Alissa Rehbein', 'Christian Steinberg', 'Anna Luisa Klahn', 'Peter Zwanzger', 'Pienie Zwitserlood', 'Markus Junghöfer'] | Healthy individuals maintain adaptive stimulus evaluation under predictable and unpredictable threat. | 770,956 |
With the development of grid techniques and the growing complexity of grid applications, reasoning about the temporal properties of grid applications to ensure their reliability is becoming more and more critical. In this work, two decomposition approaches are proposed to improve the performance of temporal reasoning for complex grid applications. The proposed approaches are implemented in our GridPiAnalyzer for an equipment grid. Results show that our approach can reduce both CPU time and memory cost compared to using a traditional formal verification algorithm alone, due to the exponential reduction of the system state space. | ['Ke Xu', 'Yuexuan Wang', 'Cheng Wu'] | Aspect Oriented Region Analysis for Efficient Grid Application Reasoning | 155,923 |
Multilevel secure (MLS) DBMSs are subject to a number of security-related architectural and functional factors that affect performance. These factors include, among others, the distribution of data among security levels, the session levels at which queries are run, and how the database is physically partitioned into files. In this paper, we present a benchmark methodology, a test database design, and a query suite designed to quantify this impact upon query processing. We introduce three metrics (uniformity, scale-up and speed-up) that characterize DBMS performance with varying data distributions. Finally, we provide comparisons and analysis of the results of a number of actual benchmarking experiments using DBMSs representative of the two major MLS DBMS architectures (trusted-subject and TCB-subset). | ['Vinti Doshi', 'William R. Herndon', 'Sushil Jajodia', 'Catherine D. McCollum'] | Benchmarking multilevel secure database systems using the MITRE benchmark | 524,955 |
Software Security Analysis and Assessment for the Web-Based Applications. | ['Yong Wang', 'William M. Lively', 'Dick B. Simmons'] | Software Security Analysis and Assessment for the Web-Based Applications. | 735,176 |
Mining Visual Phrases for Visual Robot Localization | ['Kanji Tanaka', 'Yuuto Chokushi', 'Masatoshi Ando'] | Mining Visual Phrases for Visual Robot Localization | 640,491 |
To engineer reliable real-time systems, it is desirable to detect timing anomalies early in the development process. However, there is little work addressing the problem of accurately predicting timing properties of real-time systems before implementations are developed. This paper describes an approach to the specification and schedulability analysis of real-time systems based on the timed process algebra ACSR-VP, which is an extension of algebra of communicating shared resources (ACSR) with value-passing communication and dynamic priorities. Combined with the existing features of ACSR for representing time, synchronization and resource requirements, ACSR-VP is capable of specifying a variety of real-time systems with different scheduling disciplines in a modular fashion. Moreover, we can perform schedulability analysis on real-time systems specified in ACSR-VP automatically by checking for a certain bisimulation relation. | ['Jin Young Choi', 'Insup Lee', 'Hong Liang Xie'] | The specification and schedulability analysis of real-time systems using ACSR | 412,415 |
Generating 3D Spatial Descriptions from Stereo Vision Using SIFT Keypoint Clouds | ['Marjorie Skubic', 'Samuel Blisard', 'Robert H. Luke', 'Erik E. Stone', 'Derek T. Anderson', 'James M. Keller'] | Generating 3D Spatial Descriptions from Stereo Vision Using SIFT Keypoint Clouds | 995,461 |
The isogeometric analysis associated with a novel quasi-3D shear deformation theory is proposed to investigate size-dependent behaviours of functionally graded microplates. The modified couple stress theory with only one material length scale parameter is employed to effectively capture the size-dependent effects within the microplates. Meanwhile, the quasi-3D theory which is constructed from a novel seventh-order shear deformation refined plate theory with four unknowns is able to consider both shear deformations and thickness stretching effect without requiring shear correction factors. The NURBS-based isogeometric analysis is integrated to exactly describe the geometry and approximately calculate the unknown fields with higher-order derivative and continuity requirements. The proposed approach is successfully applied to study the static bending, free vibration and buckling responses of rectangular and circular functionally graded microplates with various types of boundary conditions in which some benchmark numerical examples are presented. A number of investigations are also conducted to illustrate the effects of the material length scale, material index, and aspect ratios on the responses of the microplates. | ['Hoang X. Nguyen', 'Tuan N. Nguyen', 'Magd M. Abdel-Wahab', 'Stephane Pierre Alain Bordas', 'H. Nguyen-Xuan', 'Thuc P. Vo'] | Isogeometric analysis for functionally graded microplates based on modified couple stress theory | 714,983 |
An obstacle avoidance method of action support 7-DOF manipulators is proposed in this paper. The manipulators are controlled with impedance control to follow user's motions. 7-DOF manipulators are able to avoid obstacles without changing the orbit of the end-effector because they have kinematic redundancy. A joint rate vector is used to change angular velocity of an arbitrary joint with kinematic redundancy. The priority of avoidance is introduced into the proposed method, so that avoidance motions precede follow motions when obstacles are close to the manipulators. The usefulness of the proposed method is demonstrated through obstacle avoidance simulations and experiments. | ['Masafumi Hamaguchi', 'Takao Taniguchi'] | An Obstacle Avoidance Method for Action Support 7-DOF Manipulators Using Impedance Control | 181,278 |
In this paper, we present data downloaded from Maven, one of the most popular component repositories. The data includes the binaries of 186,392 components, along with source code for 161,025. We identify and organize these components into groups where each group contains all the versions of a library. In order to assess the quality of these components, we make available reports generated by the FindBugs tool on 64,574 components. The information is also made available in the form of a database which stores the total number, type, and priority of bug patterns found in each component, along with its defect density. We also describe how this dataset can be useful in software engineering research. | ['Vaibhav Saini', 'Hitesh Sajnani', 'Joel Ossher', 'Cristina Videira Lopes'] | A dataset for maven artifacts and bug patterns found in them | 523,465 |
Facing a strongly competitive market, companies today tend toward new methods of production, switching from a logic of «projected planning» to a logic of "just in time". In this context, the system that controls production has to be modular, flexible and reactive. Classical hierarchical approaches no longer make it possible to take into account the complexity of such a system. That is why we propose an approach with reactive, distributed, and emergent properties to control the production system, based on multi-agent system principles. After introducing the context and the reasoning behind this work, we describe the different parts of our multi-agent model. Lastly, we illustrate this approach on a practical example of a production cell. | ['A.J.N. van Breemen'] | Integrating agents in software applications | 195,863 |
Participatory Design has developed methods that empower people with impairments to actively take part in the design process. Many designed artifacts for this target group likewise aim to empower their users in daily life. In this workshop, we share and relate best practices of both empowering methods and empowering designs. Participants are therefore invited to bring along cases of designing for and with people with sensory, cognitive or social impairments. Our workshop consists of three parts: (1) foregrounding empowering elements in PD methods using method stories, containing the backstory of a method put into practice; (2) reflecting on technological artifacts, exploring the empowering qualities of person-artifact-context interaction; (3) constructing a critical synopsis of the various relationships between empowering products and methods. | ['Jelle van Dijk', 'Niels Hendriks', 'Christopher Frauenberger', 'Fenne Verhoeven', 'Karin Slegers', 'Eva Brandt', 'Rita Maldonado Branco'] | Empowering people with impairments: how participatory methods can inform the design of empowering artifacts | 863,732 |
For fault-tolerant wireless ad-hoc networks, detection and notification of failure of intermediate wireless nodes are critical. Until now, various failure detection and notification methods for recovery, such as timeout and watchdog, have been proposed for stop failure and for Byzantine failure restricted to data message transmissions. In order to avoid desperate cases in which correctly working intermediate wireless nodes are inappropriately removed from the wireless multihop transmission routes due to malicious failure notification, this paper proposes a novel cooperative watchdog method. Here, not only the previous-hop intermediate wireless nodes, as in the conventional watchdog method, but also another neighbor wireless node observe transmissions of both data messages and control messages such as failure notification to detect failure of an intermediate wireless node. This paper also proposes an ad-hoc routing protocol to determine both a sequence of intermediate wireless nodes and additional observing wireless nodes for the cooperative watchdog. Simulation experiments show that the route detection ratio of the proposed routing protocol is a little lower than that of the well-known ad-hoc routing protocol AODV; however, performance of data message transmissions is never reduced since no additional control messages are usually transmitted in the proposed cooperative watchdog method. | ['Norihiro Sota', 'Hiroaki Higaki'] | Cooperative Watchdog for Malicious Failure Notification in Wireless Ad-Hoc Networks | 957,836 |
This paper presents a complete system capable of synthesizing a large number of pixels that are missing due to occlusion or damage in an uncalibrated input video. These missing pixels may correspond to the static background or cyclic motions of the captured scene. Our system employs user-assisted video layer segmentation, while the main processing in video repair is fully automatic. The input video is first decomposed into the color and illumination videos. The necessary temporal consistency is maintained by tensor voting in the spatio-temporal domain. Missing colors and illumination of the background are synthesized by applying image repairing. Finally, the occluded motions are inferred by spatio-temporal alignment of collected samples at multiple scales. We experimented on our system with some difficult examples with variable illumination, where the capturing camera can be stationary or in motion. | ['Jiaya Jia', 'Yu-Wing Tai', 'Tai-Pang Wu', 'Chi-Keung Tang'] | Video repairing under variable illumination using cyclic motions | 437,608 |
Artificially synthesized short interfering RNAs (siRNAs) are widely used in functional genomics to knock down specific target genes. One ongoing challenge is to guarantee that the siRNA does not elicit off-target effects. Initial reports suggested that siRNAs were highly sequence-specific; however, subsequent data indicate that this is not necessarily the case. It is still uncertain what level of similarity and other rules are required for an off-target effect to be observed, and scoring schemes have not been developed to look beyond simple measures such as the number of mismatches or the number of consecutive matching bases present. We created design rules for predicting the likelihood of a non-specific effect and present a web server that allows the user to check the specificity of a given siRNA in a flexible manner using a combination of methods. The server finds potential off-target matches in the corresponding RefSeq database and ranks them according to a scoring system based on experimental studies of specificity. Availability: The server is available at http://informatics-eskitis.griffith.edu.au/SpecificityServer. Contact: [email protected] Supplementary information: Supplementary analysis and figures are available at Bioinformatics online. | ['Alistair Morgan Chalk', 'Erik L. L. Sonnhammer'] | siRNA specificity searching incorporating mismatch tolerance data | 327,499
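The core operation behind the off-target search described above is a mismatch-tolerant scan of a siRNA guide sequence against candidate transcripts. As a minimal illustration only (this is not the server's actual scoring scheme, which ranks hits using experimentally derived rules), a sketch in Python:

```python
def near_matches(guide, transcript, max_mismatch):
    """Slide the siRNA guide along a transcript and report every
    alignment whose Hamming distance is within the mismatch tolerance.
    Returns a list of (offset, mismatch_count) pairs."""
    hits = []
    g = len(guide)
    for i in range(len(transcript) - g + 1):
        mism = sum(1 for a, b in zip(guide, transcript[i:i + g]) if a != b)
        if mism <= max_mismatch:
            hits.append((i, mism))
    return hits
```

A real scanner would run this over a whole RefSeq database and feed the hits into a scoring model rather than reporting raw mismatch counts.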
The performance of any word recognizer depends on the lexicon presented. Usually, large lexicons or lexicons containing similar entries pose difficulty for recognizers. However, the literature lacks any quantitative methodology of capturing the precise dependence between word recognizers and lexicons. This paper presents a performance model that views word recognition as a function of character recognition and statistically "discovers" the relation between a word recognizer and the lexicon. It uses model parameters that capture a recognizer's ability of distinguishing characters (of the alphabet) and its sensitivity to lexicon size. These parameters are determined by a multiple regression model which is derived from the performance model. Such a model is very useful in comparing word recognizers by predicting their performance based on the lexicon presented. We demonstrate the performance model with extensive experiments on five different word recognizers, thousands of images, and tens of lexicons. The results show that the model is a good fit not only on the training data but also in predicting the recognizers' performance on testing data. | ['Hanhong Xue', 'Venu Govindaraju'] | On the dependence of handwritten word recognizers on lexicons | 198,432 |
Given a wireless network where each link undergoes small-scale (Rayleigh) fading, we consider the problem of routing a message from a source node to a target node while minimizing energy or power expenditure given a fixed time budget, or vice versa. Given instantaneous channel state information, we develop tight hyperbolic bounds on the quantities of interest and solve the related optimizations in closed form or via lightweight computations. If only average channel state information is available, probabilistic performance measures must be introduced. We therefore develop another set of bounds that supports resource-optimal routing with a guaranteed success probability. Our results rest on novel formulations and solution methods for hyperbolic convex programs and, more generally, nonlinear multicriterion combinatorial optimization. | ['Matthew Brand', 'Andreas F. Molisch'] | Delay-Energy Tradeoffs in Wireless Ad-Hoc Networks with Partial Channel State Information | 164,609
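The optimization posed above — minimize energy subject to a delay budget — can be illustrated on a toy topology by exhaustive path search. The paper's actual contribution is closed-form hyperbolic bounds that avoid such enumeration; this sketch (with a hypothetical adjacency format `adj[u] = {v: (energy, delay)}`) only shows the problem being solved:

```python
def min_energy_route(adj, src, dst, delay_budget):
    """Enumerate simple paths (feasible only for small topologies)
    and return (energy, path) for the minimum-energy route whose
    total delay fits within the budget."""
    best = (float('inf'), None)

    def dfs(u, path, energy, delay):
        nonlocal best
        # prune paths that already violate the budget or cost more
        if delay > delay_budget or energy >= best[0]:
            return
        if u == dst:
            best = (energy, list(path))
            return
        for v, (e, d) in adj[u].items():
            if v not in path:  # keep paths simple (no revisits)
                path.append(v)
                dfs(v, path, energy + e, delay + d)
                path.pop()

    dfs(src, [src], 0, 0)
    return best
```

Tightening the budget forces the solver off the low-energy route and onto the low-delay one, which is exactly the tradeoff curve the paper's bounds characterize analytically.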
Consider the 'basic LUL factorization' of matrices as a generalization of the LU factorization and the UL factorization. Using this LUL factorization, we propose an 'improved iterative method' such that the spectral radius of its iteration matrix is equal to zero, so this method converges in at most n iterations. Our main concern is the necessary and sufficient conditions under which the improved iteration matrix is equal to the iteration matrix of the improved SOR method with orderings. For tridiagonal matrices and upper Hessenberg matrices, this method becomes the improved SOR method with orderings, and we give n selections of the multiple relaxation parameters such that the spectral radii of the corresponding improved SOR matrices are 0. We extend these results to a class of $n \times n$ matrices. We also consider the basic LUL factorization and improved iterative method corresponding to permutation matrices. | ['Yoshiaki Muroya', 'Emiko Ishiwata'] | Basic LUL factorization and improved iterative method with orderings | 150,263
Purpose – The purpose of this paper is to present a proposal for process improvement at the Department of Social Responsibility of a Colombian process-based organization, called CAJASAN. The department has four main processes: Foninez (children fund), Fosfec (unemployment fund), Project Management and International Cooperation and Network Management and Alliances. The objective of this paper is to suggest an improvement in these processes through BPM application. Design/methodology/approach – The authors followed the BPM method proposed by Dumas et al. (2013) for process improvement composed by process identification; process discovery; process analysis; process redesign; process implementation and process monitoring and controlling. The authors modeled the processes by using the software Bizagi®. Findings – The actual processes work in an independent way and with no communication. Moreover, the department experiences short-term problems solutions and process inefficiency. It was possible to suggest chang... | ['Carolina Resende Haddad', 'Diego Hernando Florez Ayala', 'Mauricio Uriona Maldonado', 'Fernando Antonio Forcellini', 'Álvaro Guillermo Rojas Lezana'] | Process improvement for professionalizing non-profit organizations: BPM approach | 723,268 |
Given a graph G = (V, E) and a set of terminal vertices T we say that a superset S of T is T-connecting if S induces a connected graph, and S is minimal if no strict subset of S is T-connecting. In this paper we prove that there are at most $\binom{|V \setminus T|}{|T|-2} \cdot 3^{|V \setminus T|/3}$ minimal T-connecting sets. | ['Jan Arne Telle', 'Yngve Villanger'] | Connecting Terminals and 2-Disjoint Connected Subgraphs | 421,875
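For intuition about the objects being counted, minimal T-connecting sets can be enumerated by brute force on small graphs: a minimal T-connecting set is an inclusion-minimal superset of T that induces a connected subgraph. A purely illustrative (exponential-time) sketch in Python:

```python
from itertools import combinations

def induces_connected(adj, S):
    """DFS over the subgraph induced by vertex set S."""
    S = set(S)
    if not S:
        return False
    start = next(iter(S))
    seen, stack = {start}, [start]
    while stack:
        v = stack.pop()
        for w in adj[v]:
            if w in S and w not in seen:
                seen.add(w)
                stack.append(w)
    return seen == S

def minimal_connecting_sets(adj, T):
    """Enumerate all minimal T-connecting sets by trying supersets of T
    in order of increasing size; any connecting set that contains a
    previously found (hence smaller) connecting set is non-minimal."""
    V, T = set(adj), set(T)
    rest = sorted(V - T)
    found = []
    for k in range(len(rest) + 1):
        for extra in combinations(rest, k):
            S = T | set(extra)
            if any(m < S for m in found):  # strict superset: skip
                continue
            if induces_connected(adj, S):
                found.append(frozenset(S))
    return found
```

On a 4-cycle with opposite terminals, for example, the two "halves" of the cycle are exactly the two minimal T-connecting sets, matching the combinatorial bound's flavor of counting over $V \setminus T$.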
This paper discusses the design and motion experiments of the newly constructed jumping & rolling inspector Leg-in-rotor-II. The features of Leg-in-rotor-II are as follows: (i) the driving method for 3-D jumping & rolling with reduced degrees of freedom on the separated drive; (ii) the introduction of the passively stored leg; (iii) the introduction of the wheel with light anisotropic elasticity of high changing ratio; (iv) the pneumatic jumping control method and its energy saving; (v) the sensing method to estimate the desired jumping start point; (vi) the structure for sideways tumble prevention and shock buffering. Finally, motion experiments of Leg-in-rotor-II rolling, jumping and landing on debris are shown and the validity of the introduced design methods is verified. | ['Hideyuki Tsukagoshi', 'Yotaro Mori', 'Masashi Sasaki', 'Takahiro Tanaka', 'Ato Kitagawa'] | Leg-in-rotor-II: a jumping inspector with high traverse-ability on debris | 71,742
In Big Data era, applications are generating orders of magnitude more data in both volume and quantity. While many systems emerge to address such data explosion, the fact that these data’s descriptors, i.e., metadata, are also “big” is often overlooked. The conventional approach to address the big metadata issue is to disperse metadata into multiple machines. However, it is extremely difficult to preserve both load-balance and data-locality in this approach. To this end, in this work we propose hierarchical indirection layers for indexing the underlying distributed metadata. By doing this, data locality is achieved efficiently by the indirection while load-balance is preserved. Three key challenges exist in this approach, however: first, how to achieve high resilience; second, how to ensure flexible granularity; third, how to restrain performance overhead. To address above challenges, we design Dindex, a distributed indexing service for metadata. Dindex incorporates a hierarchy of coarse-grained aggregation and horizontal key-coalition. Theoretical analysis shows that the overhead of building Dindex is compensated by only two or three queries. Dindex has been implemented by a lightweight distributed key-value store and integrated to a fully-fledged distributed filesystem. Experiments demonstrated that Dindex accelerated metadata queries by up to 60 percent with a negligible overhead. | ['Dongfang Zhao', 'Kan Qiao', 'Zhou Zhou', 'Tonglin Li', 'Zhihan Lu', 'Xiaohua Xu'] | Toward Efficient and Flexible Metadata Indexing of Big Data Systems | 967,241 |
Our research explores how humans can understand and develop viewing behaviors with mutual paralleled first person view sharing in which a person can see others' first person video perspectives as well as their own perspective in realtime. We developed a paralleled first person view sharing system which consists of multiple video see-through head mounted displays and an embedded eye tracking system. With this system, four persons can see four shared first person videos of each other. We then conducted workshop based research with two activities, drawing pictures and playing a simple chasing game with our view sharing system. Our results show that 1) people can complement each other's memory and decisions and 2) people can develop their viewing behaviors to understand their own physical embodiment and spatial relationship with others in complex situations. Our findings about patterns of viewing behavior and design implications will contribute to building design experience in paralleled view sharing applications. | ['Shunichi Kasahara', 'Mitsuhito Ando', 'Kiyoshi Suganuma', 'Jun Rekimoto'] | Parallel Eyes: Exploring Human Capability and Behaviors with Paralleled First Person View Sharing | 736,607 |
The resource leveling problem (RLP) involves the determination of a project baseline schedule that specifies the planned activity starting times while satisfying both the precedence constraints and the project deadline constraint under the objective of minimizing the variation in the resource utilization. However, uncertainty is inevitable during project execution. The baseline schedule generated by the deterministic RLP model tends to fail to achieve the desired objective when durations are uncertain. We study the robust resource leveling problem in which the activity durations are stochastic and the objective is to obtain a robust baseline schedule that minimizes the expected positive deviation of both resource utilizations and activity starting times. We present a genetic algorithm for the robust RLP. In order to demonstrate the effectiveness of our genetic algorithm, we conduct extensive computational experiments on a large number of randomly generated test instances and investigate the impact of different factors (the marginal cost of resource usage deviations, the marginal cost of activity starting time deviations, the activity duration variability, the due date, the order strength, the resource factor and the resource constrainedness). | ['Hongbo Li', 'Erik Demeulemeester'] | A genetic algorithm for the robust resource leveling problem | 592,979 |
This paper addresses the compressive sensing with Multiple Measurement Vectors (MMV) problem where the correlation amongst the different sparse vectors (channels) are used to improve the reconstruction performance. We propose the use of Convolutional Deep Stacking Networks (CDSN), where the correlations amongst the channels are captured by a moving window containing the "residuals" of different sparse vectors. We develop a greedy algorithm that exploits the structure captured by the CDSN to reconstruct the sparse vectors. Using a natural image dataset, we compare the performance of the proposed algorithm with two types of reconstruction algorithms: Simultaneous Orthogonal Matching Pursuit (SOMP) which is a greedy solver and the model-based Bayesian approaches that also exploit correlation among channels. We show experimentally that our proposed method outperforms these popular methods and is almost as fast as the greedy methods. | ['Hamid Palangi', 'Rabab K. Ward', 'Li Deng'] | Exploiting correlations among channels in distributed compressive sensing with convolutional deep stacking networks | 772,262 |
Coded source compression, also known as source compression with helpers, has been a major variant of distributed source compression, but has hitherto received little attention in the quantum regime. This letter treats and solves the corresponding quantum coded source compression through an observation that connects coded source compression with channel simulation. First, we consider classical source coding with quantum side information, where the quantum side information is observed by a helper and sent to the decoder via a classical channel. We derive a single-letter characterization of the achievable rate region for this problem. The direct coding theorem of our result is proved via the measurement compression theory of Winter, a quantum-to-classical channel simulation. Our result reveals that a helper’s scheme which separately conducts a measurement and a compression is suboptimal, and measurement compression seems necessary to achieve the optimal rate region. We then study coded source compression in the fully quantum regime, where two different scenarios are considered depending on the types of communication channels between the legitimate source and the receiver. We further allow entanglement assistance from the quantum helper in both scenarios. We characterize the involved quantum resources and derive single-letter expressions of the achievable rate region. The direct coding proofs are based on well-known quantum protocols, the quantum state merging protocol, and the fully quantum Slepian–Wolf protocol, together with the quantum reverse Shannon theorem. | ['Min-Hsiu Hsieh', 'Shun Watanabe'] | Channel Simulation and Coded Source Compression | 641,259 |
The latest digital subscriber line (DSL) technology, VDSL2, used for broadband access over twisted-pairs, promises up to 100 Mbit/s for both transmission directions on short loops. Since these systems are designed to operate in a far-end crosstalk (FEXT) limited environment, there is a severe performance degradation when deployed in distributed network scenarios. With power back-off (PBO) the network operators attempt to protect modems deployed on long loops by reducing the transmit power of the short ones. However, currently very little guidance has been given to operators on how to set and optimize the parameters for PBO. In this paper we explore one promising method, the cable bundle unique PBO (CUPBO), which optimizes these parameters according to the actual situation in the cable with regard to noise and network topology. Using real VDSL systems and cables we show that the CUPBO algorithm achieves a significant increase in performance compared to the case when one naively takes the PBO values given in the VDSL standard. | ['Driton Statovci', 'Tomas Nordström'] | Performance Evaluation of the Cable Bundle Unique Power Back-Off Algorithm | 24,801
We present a simple method for identifying the kinematic parameters of a robot with parallel joints. An error function is defined, and the kinematic parameters that nullify this function are obtained by an iterative least-squares method based on singular value decomposition. | ['Doh-Hyun Kim', 'K. H. Cook', 'Jun-Ho Oh'] | Identification and Compensation of a Robot Kinematic Parameter for Positioning Accuracy Improvement | 175,257
The goal of this paper is to analyze, using homogenization techniques, the effective behavior of a coupled system of reaction–diffusion equations, arising in the modeling of some biochemical processes contributing to carcinogenesis in living cells. We shall focus here on the carcinogenic effects produced in the human cells by Benzo-[a]-pyrene molecules. | ['Claudia Timofte'] | Multiscale analysis of a carcinogenesis model | 826,651 |
A remarkable result in [4] shows that in spite of its being less expressive than CCS w.r.t. weak bisimilarity, CCS! (a CCS variant where infinite behavior is specified by using replication rather than recursion) is Turing powerful. This is done by encoding Random Access Machines (RAM) in CCS!. The encoding is said to be non-faithful because it may move from a state which can lead to termination into a divergent one which does not correspond to any configuration of the encoded RAM; i.e., the encoding is not termination preserving. In this paper we study the existence of faithful encodings into CCS! of models of computability strictly less expressive than Turing Machines, namely grammars of Types 1 (Context Sensitive Languages), 2 (Context Free Languages) and 3 (Regular Languages) in the Chomsky Hierarchy. We provide faithful encodings of Type 3 grammars. We show that it is impossible to provide a faithful encoding of Type 2 grammars and that termination-preserving CCS! processes can generate languages which are not Type 2. We finally show that the languages generated by termination-preserving CCS! processes are Type 1. | ['Jesús Aranda', 'Cinzia Di Giusto', 'Mogens Nielsen', 'Frank D. Valencia'] | CCS with replication in the Chomsky hierarchy: the expressive power of divergence | 210,536
Bridging vaccine ontology and NCIt vaccine domain for cancer vaccine data integration and analysis | ['Yongqun He', 'Guoqian Jiang'] | Bridging vaccine ontology and NCIt vaccine domain for cancer vaccine data integration and analysis | 804,290 |
This paper addresses the robust image transmission over powerline communication (PLC) channel in the presence of impulse noise. Under this framework, an adaptive noise clipping-based hybrid progressive median filter (ANC-HPMF), which is a combination of hybrid progressive median filter, noise clipping technique, image compression algorithm and coded OFDM modulation is designed to ensure image transmission over the PLC channel. For this purpose, image compression and turbo codes algorithms are inserted before image transmission in order to reduce the size of the transmitted data, and, therefore, save a significant amount of the PLC channel for forward error correction. The adaptive noise clipping method using neighboring coefficients is designed at the receiver side as a first stage. It is based on an improved estimation of noise threshold from the standard deviation of the noise and the peak value of the received noisy image. To enhance the performance of the proposed system, a new form of median filter is applied to the received image as a second stage of impulse noise reduction. By combining the noise clipping and the new median filtering, the proposed technique showed high robustness for the reduction of impulse noise even under high impulse level conditions while maintaining good visual quality of the images by preserving the edges. The performances of the proposed technique were compared with other well-known methods dedicated for impulse noise reduction, and showed much superior performance against the impulse noise generated over the PLC channel. | ['Yassine Himeur', 'Abdelkrim Boukabou'] | Robust image transmission over powerline channel with impulse noise | 712,624 |
The paper in question [G. Arechavaleta, J. P. Laumond, H. Hicheur, and A. Berthoz, “An optimality principle governing human walking,” IEEE Trans. Robot., vol. 24, no. 1, pp. 5-14, Feb. 2008] suggested that human-walking paths minimize variation in curvature and hence can be approximated by the solution to an optimal control problem. This conclusion was reached by analysis of experimental data based on the maximum principle. We correct two errors in this analysis and consider their consequences. | ['Timothy Bretl', 'Gustavo Arechavaleta', 'Abdullah Akce', 'Jean-Paul Laumond'] | Comments on "An Optimality Principle Governing Human Walking" | 486,016
Boundary objects in clinical simulation and design of eHealth | ['Sanne Jensen', 'André Kushniruk'] | Boundary objects in clinical simulation and design of eHealth | 704,591 |
For NTCIR Workshop 5 UC Berkeley participated in the bilingual task of the CLIR track. Our focus was on Chinese topic searches against the Japanese News document collection, and on Japanese topic search against the Chinese News Document Collection. Extending our work of NTCIR 4 workshop, we performed search experiments to segment and use Chinese search topics directly as if they were Japanese topics and vice versa. We also utilized a commercial Machine Translation (MT) between the two languages, with English as a pivot language. The best performance of Chinese topic search for Japanese documents was achieved using a hybrid approach which combined MT pivot translation with direct use of Chinese topic expressions. | ['Fredric C. Gey'] | How Similar are Chinese and Japanese for Cross-Language Information Retrieval? | 490,887 |
This paper describes the effect on classification performance of applying a low-pass filter to images of a few distinct image types in image analysis using the Subspace classifier method. Feature extraction was first examined based on three kinds of intensity images, and the feature vector and Subspace dimension were determined for classification. Afterwards, images of a few distinct image types were analyzed for classification performance, and filtered images were also analyzed. The accuracies obtained for filtered images were compared with the accuracy without filtering. Our results showed that the features of the true-color channel were suitable for classification, and that applying a filter to images of a few distinct image types influenced classification accuracy. | ['Nobuo Matsuda', 'Fumiaki Tajima', 'Hideaki Sato'] | Filtering Effects for Image Data Types in Image Analysis Using Subspace Classifier | 957,186
Software architectures have emerged as a promising approach for managing, analyzing, building, integrating, reusing, and improving the quality of software systems. Specifically, early design decisions can be improved by the analysis of architectural models for different properties. This paper addresses the problem of estimating the reliability of data-flow architectures before the construction of the system. The proposed model uses an operational profile of the system and a set of component test profiles. A test profile is a set of test cases extended with information about the software intra-component execution. The analysis of the system is performed by composing the test points along the virtual execution among the components. This strategy overcomes the determination of intermediate operational profiles. In addition, metrics to select the best match in the execution trace and to evaluate the selection error in such kind of match are described. | ['Gerardo Padilla', 'Tong Gao', 'I-Ling Yen', 'Farokh B. Bastani', 'Carlos Montes de Oca'] | An Early Reliability Assessment Model for Data-Flow Software Architectures | 535,535 |
Coupled and k-Sided Placements: Generalizing Generalized Assignment | ['Madhukar R. Korupolu', 'Adam Meyerson', 'Rajmohan Rajaraman', 'Brian Tagiku'] | Coupled and k-Sided Placements: Generalizing Generalized Assignment | 691 |
The energy consumption of each node in the sensor network can be effectively balanced by using mobile sinks for data gathering, thus avoiding the energy hole problem. This paper proposes a virtual grid margin optimization and energy balancing (VGMEB) protocol for mobile sinks to balance the energy consumption in wireless sensor networks. VGMEB achieves high energy efficiency by designing a virtual grid margin method and determining a novel evaluation model for cluster head selection. In addition, an approach from multiple attribute decision making based on relative entropy is integrated for determining the weight value of each metric. The experimental results show that VGMEB outperforms comparable protocols and can efficiently mitigate the energy hole problem and prolong the network lifetime. | ['Chengpei Tang', 'Nian Yang'] | Virtual grid margin optimization and energy balancing scheme for mobile sinks in wireless sensor networks | 812,772
In computer graphics, numerous geometry processing applications reduce to the solution of a Poisson equation. When considering geometries with symmetry, a natural question to consider is whether and how the symmetry can be leveraged to derive an efficient solver for the underlying system of linear equations. In this work we provide a simple representation-theoretic analysis that demonstrates how symmetries of the geometry translate into block diagonalization of the linear operators and we show how this results in efficient linear solvers for surfaces of revolution with and without angular boundaries. | ['Misha Kazhdan'] | Fast and exact (Poisson) solvers on symmetric geometries | 318,291 |
In the foundations and principles of complex process science and engineering, a major problem is the lack of a formal specification language to treat the dynamics of modeling complex processes with their simulations, emulations and enactments. This paper defines a formal specification language that allows integrating complex thinking with software engineering principles to guide the characterization of minimum requirements for designing technology within living complex processes. There is a lack of research in the literature on model process structural complexity. Such a model complex process can be directed toward acquiring good maintainability attributes according to the principles of complex process science and engineering. In this work a Value Based Business Process Management Network Model (VBPMN) is developed to acquire, directly from the target complex process codes, the knowledge hidden among and within composite and elementary complex processes. | ['Fuad Gattaz Sobrinho', 'Cristiane Chaves Gattaz', 'Oscar Ivan Palma Pacheco'] | A VALUE BASED BUSINESS PROCESS MANAGEMENT NETWORK MODEL | 52,507
Spatial and temporal localities used in keeping references in cache are limited by the behavior of applications. Many applications lack these localities and have a high frequency of accesses, which results in a degradation of system performance under the conventional cache design. The proposed method tracks the most frequently used references by dynamically monitoring the accesses to main memory and allows them to remain in the cache while rejecting less frequently used accesses. This reduces the eviction rate and thereby improves overall performance. | ['Nagi N. Mekhiel'] | Cache Filter Method Based on DRAM Access Frequency to Improve System Performance | 555,550
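The admission idea described — track access frequency and admit only blocks referenced often enough, so infrequent accesses cannot evict hot lines — can be sketched in software. This is an illustrative LRU-based model, not the paper's hardware design; the admission `threshold` parameter is an assumption:

```python
from collections import Counter, OrderedDict

class FrequencyFilteredCache:
    """LRU cache that only admits a block after it has been requested
    at least `threshold` times, so one-shot accesses never displace
    frequently used lines."""

    def __init__(self, capacity, threshold=2):
        self.capacity = capacity
        self.threshold = threshold
        self.freq = Counter()          # per-block access counts
        self.cache = OrderedDict()     # insertion order = LRU order

    def access(self, block):
        """Record an access; return True on a cache hit."""
        self.freq[block] += 1
        if block in self.cache:
            self.cache.move_to_end(block)      # refresh LRU position
            return True
        if self.freq[block] >= self.threshold:  # admit hot blocks only
            self.cache[block] = True
            if len(self.cache) > self.capacity:
                self.cache.popitem(last=False)  # evict least-recent
        return False
```

A block seen only once is filtered out and never enters the cache, so a burst of cold references leaves the working set of hot blocks untouched.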
Centroid-Means-Embedding: An Approach to Infusing Word Embeddings into Features for Text Classification | ['Mohammad Golam Sohrab', 'Makoto Miwa', 'Yutaka Sasaki'] | Centroid-Means-Embedding: An Approach to Infusing Word Embeddings into Features for Text Classification | 669,682 |
Non-orthogonal multiple access (NOMA) is a promising candidate for 5G networks. NOMA achieves superior spectral efficiency over conventional orthogonal multiple access (OMA), as in NOMA multiple users can use the same time and frequency resources. Multiple-input multiple-output (MIMO) is a promising technique that can enhance system performance. In this paper we present a multiple-antenna-based NOMA scheme, known as spatially modulated NOMA. In the proposed scheme, different users are multiplexed in the power domain while cell edge users are multiplexed in the spatial domain. The information for cell edge users is conveyed using antenna indices, thereby reducing the number of decoding steps at NOMA users; cell edge interference is reduced as no power is allocated to cell edge users, and hence the scheme is more energy efficient compared to conventional NOMA. Simulation results show that the proposed scheme achieves superior spectral efficiency compared to conventional NOMA. | ['Mohammad Irfan', 'Bo Sun Kim', 'Soo Young Shin'] | A spectral efficient spatially modulated non-orthogonal multiple access for 5G | 928,080
A fundamental objective of human–computer interaction research is to make systems more usable, more useful, and to provide users with experiences fitting their specific background knowledge and objectives. The challenge in an information-rich world is not only to make information available to people at any time, at any place, and in any form, but specifically to say the “right” thing at the “right” time in the “right” way. Designers of collaborative human–computer systems face the formidable task of writing software for millions of users (at design time) while making it work as if it were designed for each individual user (only known at use time). User modeling research has attempted to address these issues. In this article, I will first review the objectives, progress, and unfulfilled hopes that have occurred over the last ten years, and illustrate them with some interesting computational environments and their underlying conceptual frameworks. A special emphasis is given to high-functionality applications and the impact of user modeling to make them more usable, useful, and learnable. Finally, an assessment of the current state of the art followed by some future challenges is given. | ['Gerhard Fischer'] | User Modeling in Human–Computer Interaction | 150,113 |
An integrative CT simulation technique is presented that creates realistic CT images of virtual fecal-tagged material that was added to given clinical DICOM CT images. The energy spectrum of the CT X-ray source, the energy-dependent attenuation, and the scattering properties of the soft tissue and tagging material were incorporated in the generation technique for the DICOM image-based virtual sinograms, followed by CT reconstruction reflecting the vendor-specific filtering kernels. Dark band artifacts were generated by appropriate combining of beam-hardening and -scattering effects into the generation procedure for the virtual sinograms. We used a set of simple numerical phantoms to assess the basic behavior of artifact production. A reference set of CTC images with and without tagging material and artifacts was used for evaluation of the realism of the simulated results. The level of realism was evaluated in terms of the artifact strength and patterns around the added tagging material, compared to real tagging images. The results showed that our CT simulation technique provides sufficient realism for virtual fecal-tagged images that reflect a chain of physical and numerical processes, including beam hardening, scattering, and vendor-specific kernel filtered backprojection. The technique presented has the potential to be used as a tool for investigating the effect of tagging materials on image quality and to gauge how well the electronic cleansing technique performs. | ['Zepa Yang', 'Hyeong-min Jin', 'Jong Hyo Kim'] | Application of CT simulation technique for virtual fecal tagging in CTC | 345,870 |
In this article, we report on design insights found during the evaluation of an innovative IT-artifact to support financial service encounters. Relating to previous work in this field, we carefully designed the artifact to omit any visualization and enforcement of rigid process structures, as those had turned out to be harmful. Our main design element was a mind-map-like content hierarchy to capture the client's situation. Surprisingly, we noticed that both clients and advisors talked about every information item visible on the screen just for the sake of completeness. They also followed a sequential process apparently inferred from the content hierarchy. We call this phenomenon "coercing into completeness". This phenomenon negatively influences the conversation between client and advisor inducing shorter discussion units and sudden, incomprehensible topic shifts. This article contributes an exploration of this phenomenon and its effects on the collaborative setting. | ['Mehmet Kilic', 'Peter Heinrich', 'Gerhard Schwabe'] | Coercing into Completeness in Financial Advisory Service Encounters | 232,698 |
"Garbage in, garbage out" is a well-known phrase in computer analysis, and one that comes to mind when mining Web data to draw conclusions about Web users. The challenge is that data analysts wish to infer patterns of client-side behavior from server-side data. However, because only a fraction of the user's actions ever reach the Web server, analysts must rely on incomplete data. In this paper, we propose a client-side monitoring system that is unobtrusive and supports flexible data collection. Moreover, the proposed framework encompasses client-side applications beyond the Web browser. Expanding monitoring beyond the browser to incorporate standard office productivity tools enables analysts to derive a much richer and more accurate picture of user behavior on the Web. | ['Kurt D. Fenstermacher', 'Mark Ginsburg'] | Mining client-side activity for personalization | 230,247 |
Opportunistic networks are a kind of challenged network, characterized by the absence of a persistent path between the source and the destination. DTN (Delay/Disruption Tolerant Networking) was proposed to solve the problems in challenged networks, and many routing mechanisms have been presented, but most of them focus on only one routing objective, such as delay, packet loss rate, or energy, in their design. In this paper, we present a multi-objective routing decision-making model to better satisfy the needs of different users and applications. Firstly, we present a descriptive model of the opportunistic network. Secondly, we give the partition of routing knowledge and routing objectives in the opportunistic network. Lastly, we present a multi-objective routing decision-making activities model and an automata model. | ['Meng Chen', 'Haiquan Wang'] | A multi-objective routing decision-making model for opportunistic network | 270,891 |
With an expected increase in inland navigation transport demand, the management of navigation networks requires the design of new optimal approaches for managing the water resource. This resource is necessary for accommodating navigation. In this paper, two dynamic optimization methods are designed with the aim of improving the management of the water resource at minimal operating cost. They aim at determining the optimal water allocation planning over a future time horizon. They are based on a proposed weighted directed flow graph, composed of dynamic capacities on each node and dynamic constraints on each edge. The proposed optimization approaches are tested and compared on an inland navigation network composed of three reaches. | ['Eric Duviella', 'Houda Nouasse', 'Arnaud Doniec', 'Karine Chuquet'] | Dynamic optimization approaches for resource allocation planning in inland navigation networks | 945,711 |
Wireless access is expected to be one of the key access technologies for providing IP services to the user, end-to-end seamlessly. The wireless network, as the "last hop" of the wireline IP network, has its own unique set of complex characteristics. To improve the behavior of the wireless link, which is susceptible to frequent error bursts (due to fading, shadowing, etc.), various low-layer (physical/link layer) techniques have to be used to map the service-associated network-level QoS parameters, such as delay, jitter, BER, and throughput, so as to meet end-to-end IP performance. The motivation of this paper is two-fold: (i) to present some of the unique characteristics of the radio link and show what kind of flexibility in resource management and mapping techniques is required to guarantee QoS over the wireless link, and (ii) to propose a framework for a wireless QoS agent. The wireless QoS agent, in a nutshell, will be responsible for mapping multimedia IP QoS requirements to radio-link-specific requirements. The wireless QoS agent will interwork with the IP QoS manager framework within the IETF, such as diff-serv in core networks. | ['Sanjoy K. Sen', 'A. Arunachalam', 'Kalyan Basu', 'M. Wernik'] | A QoS management framework for 3G wireless networks | 77,797 |
In this article, a mathematical model is developed to formulate optimal ordering policies for the retailer when demand is practically constant and partially dependent on the stock, and the supplier offers progressive credit periods to settle the account. The notion of progressive credit periods is as follows: if the retailer settles the outstanding amount by M, the supplier does not charge any interest. If the retailer pays after M but before the second period N offered by the supplier, then the supplier charges the retailer interest on the unpaid balance at the rate Ic1. If the retailer settles the account after N, then he will have to pay interest at the rate Ic2 on the unpaid balance (Ic2 > Ic1). Cost minimization is taken as the objective function. An algorithm is given to find the optimal ordering policy. A numerical illustration is given to study the effect of various parameters on the ordering policy and the total cost of the inventory system. | ['Hardik N. Soni', 'Nita H. Shah'] | Ordering policy for stock-dependent demand rate under progressive payment scheme | 272,528 |
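The progressive credit scheme in the abstract above lends itself to a small worked example. The sketch below computes the interest charged on an unpaid balance as a function of the settlement time: interest-free until M, rate Ic1 over (M, N], rate Ic2 thereafter. The function name and the numeric values are illustrative assumptions, not part of the paper's model, which optimizes the full inventory cost:

```python
def interest_charged(balance, t_pay, M, N, ic1, ic2):
    """Interest owed on an unpaid balance settled at time t_pay under a
    progressive credit scheme: free until M, charged at rate ic1 over
    (M, N], and at rate ic2 beyond N (toy illustration only)."""
    if t_pay <= M:
        return 0.0                      # settled within the free period
    if t_pay <= N:
        return balance * ic1 * (t_pay - M)
    return balance * (ic1 * (N - M) + ic2 * (t_pay - N))
```

For instance, with M = 0.25, N = 0.5 (in years) and rates Ic1 = 0.1, Ic2 = 0.2, settling a balance of 1000 at t = 0.75 accumulates interest over both charged sub-periods.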
This article addresses the parameter convergence problem in the identification of nonlinear dynamic systems using fuzzy models. We first establish persistent excitation conditions and then propose several detailed algorithms to generate input signals that guarantee the convergence of the parameter estimates in the fuzzy system models to the true values in the identification of second-order nonlinear moving-average and auto-regressive-moving-average systems. A numerical example is given to illustrate the ideas and results. | ['Feng Wan', 'Li-Xin Wang', 'He-Yun Zhu', 'Youxian Sun'] | Generating persistently exciting inputs for nonlinear dynamic system identification using fuzzy models | 261,638 |
This paper proposes a double sampling phase detector (DSPD) for charge-pump phase-locked loop (PLL) design. The DSPD can double the PLL loop bandwidth to obtain a fast settling time while shifting the reference spur to a higher frequency to suppress it. Verilog-AMS charge-pump PLL timing models with the DSPD and with a conventional phase detector (PD) are developed to verify the fast settling time and low reference spur. Comparing the DSPD architecture to the conventional PD architecture, the settling time can be reduced by 50% at 30 ppm frequency accuracy and the reference spur can be suppressed by 5.9 dB. | ['Guo-Jue Huang', 'Che-Sheng Chen', 'Wen-Shen Wuen', 'Kuei-Ann Wen'] | A fast settling and low reference spur PLL with double sampling phase detector | 512,550 |
In this paper a computer network is defined to be a set of autonomous, independent computer systems, interconnected so as to permit interactive resource sharing between any pair of systems. An overview of the need for a computer network, the requirements of a computer communication system, a description of the properties of the communication system chosen, and the potential uses of such a network are described in this paper. | ['Lawrence G. Roberts', 'Barry D. Wessler'] | Computer network development to achieve resource sharing | 137,144 |
Forecasting in geophysical time series is a challenging problem with numerous applications. The presence of correlation (i.e., spatial correlation across several sites and temporal correlation within each site) poses difficulties with respect to traditional modeling, computation, and statistical theory. This paper presents a cluster-centric forecasting methodology that yields a characterization of correlation in geophysical time series through a spatio-temporal clustering step. The clustering phase is designed for partitioning time series of numeric data routinely sampled at specific space locations. A forecasting model is then computed by resorting to multivariate time series analysis, in order to predict the future values of a time series by utilizing not only its own historical values, but also information from the other time series in its cluster. Experimental results highlight the importance of dealing with both temporal and spatial correlation and validate the proposed cluster-centric strategy in the computation of a multivariate time series forecasting model. | ['Sonja Pravilovic', 'Massimo Bilancia', 'Annalisa Appice', 'Donato Malerba'] | Using multiple time series analysis for geosensor data forecasting | 928,469 |
Soft errors in hardware can affect the reliability of a computer system. To estimate system reliability, it is important to know the effects of soft errors on it. This paper explores the effects of soft errors on computer system reliability. We propose a new approach to measuring system reliability under soft errors. In our approach, the reliability of hardware components is considered first. Then, system reliability, which reflects the ability to perform the required function, is considered. We equate system reliability with software reliability, based on the mechanism by which soft errors affect system reliability. We build a software reliability model under soft-error conditions. In our software model, we analyze the state of the software in combination with the state of the hardware. For program errors that result from soft errors, we give an analysis of error masking. The real errors that could lead to software failure are distinguished. Finally, our experiments illustrate our analyses and validate our approach. | ['Lei Xiong', 'Qingping Tan', 'Jianjun Xu'] | Effects of Soft Error to System Reliability | 398,592 |
This brief presents the analysis of zeros present in boost dc–dc converters that are operating in continuous inductor current mode. It proposes the utilization of the concept of multiple forward pathways from input to output to analyze the origin of the resulting zeros and determine their locations. This brief provides insight into the various zeros present in the boost converter through a state diagram approach. | ['Vikas V. Paduvalli', 'Robert J. Taylor', 'Poras T. Balsara'] | Analysis of Zeros in a Boost DC–DC Converter: State Diagram Approach | 825,893 |
In this paper, a doubly iterative receiver for orthogonal frequency-division multiplexing (OFDM) systems is designed to mitigate impulsive interference based on pulse blanking. Although being a very efficient countermeasure, pulse blanking introduces intercarrier interference in OFDM systems leading to system performance degradation. The doubly iterative receiver is designed to overcome this drawback. The inner loop combines iterative demodulation and decoding to cancel intercarrier interference. To make soft information available to the outer loop, the equivalent noise power is derived to employ soft demodulation. The outer loop improves the performance of pulse blanking by performing a hypothesis test, which is used to decide if peaks in the received signal are due to impulsive interference or the large peak-to-average-power ratio common to OFDM signals. This allows to optimize the blanking threshold through maximizing the signal-to-interference-and-noise ratio. To evaluate performance and complexity of the proposed receiver, extrinsic information transfer (EXIT) chart analysis is carried out, where EXIT functions for pulse blanker, demodulator, and decoder are derived. An EXIT chart-based trellis search approach is introduced to calculate the loop schedule achieving target bit-error-rate performance with minimized complexity. Numerical results show a considerable bit-error-rate performance improvement as well as a remarkable complexity reduction. | ['Qiaoyu Li', 'Jun Zhang', 'Ulrich Epple'] | Design and EXIT Chart Analysis of a Doubly Iterative Receiver for Mitigating Impulsive Interference in OFDM Systems | 698,420 |
In this paper, we show the equivalence between complementary code keying (CCK) codewords and cosets of the first-order Reed-Muller (RM) code with three variables. The CCK codewords are Golay sequences, which have a peak-to-average power ratio (PAPR) of at most two and can correct one error. We propose a CCK-orthogonal frequency division multiplexing (OFDM) modem to reduce PAPR. We also present performance improvement techniques that increase the number of variables to four in order to correct three errors and reduce PAPR by at least 9 dB in this system. Although two Fast Hadamard Transform (FHT) blocks of size 8 × 64 are required at the receiver, we reduce the complexity by using FHT blocks of size 8 × 64 and 2 × 4 without deteriorating the performance. We generalize our results to show that we may increase the number of variables of the RM code to enhance the error-correcting and PAPR-reduction capabilities without increasing the receiver's complexity. | ['Won-Jeong Jeong', 'Hyuncheol Park', 'Hyuckjae Lee', 'Sunghyun Hwang'] | Performance improvement techniques for CCK-OFDM WLAN modem | 453,595 |
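The PAPR bound of at most two mentioned in the abstract above follows from CCK codewords being Golay sequences: for a Golay complementary pair, the out-of-phase aperiodic autocorrelations cancel. The sketch below builds a standard length-8 pair by the usual concatenation recursion and checks this property; it is background illustration, not code or notation from the paper:

```python
def golay_pair(m):
    """Golay complementary pair of length 2**m built by the standard
    concatenation recursion (a, b) -> (a + b, a + (-b))."""
    a, b = [1], [1]
    for _ in range(m):
        a, b = a + b, a + [-x for x in b]
    return a, b

def acorr(s, k):
    """Aperiodic autocorrelation of sequence s at shift k."""
    return sum(s[i] * s[i + k] for i in range(len(s) - k))
```

Complementarity (the autocorrelation sums vanish at every nonzero shift, and equal 2N at shift zero) is exactly the property that caps the PAPR of the corresponding multicarrier signal at two.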
Although deep-brain stimulation (DBS) can be used to improve some of the severe symptoms of Parkinson's disease (e.g., bradykinesia, rigidity, and tremors), the mechanisms by which the symptoms are eliminated are not well understood. Moreover, DBS does not prevent the neurodegeneration that leads to dementia or death. In order to fully investigate DBS and to optimize its use, a comprehensive long-term stimulation study in an animal model is needed. However, since the brain region that must be stimulated, known as the subthalamic nucleus (STN), is extremely small (500 μm × 500 μm × 1 mm) and deep within the rat brain (10 mm), the stimulating probe must have geometric and mechanical properties that allow accurate positioning in the brain, while minimizing tissue damage. We have designed, fabricated, and tested a novel micromachined probe that is able to accurately stimulate the STN. The probe is designed to minimize damage to the surrounding tissue. The probe shank is coated with gold and the electrode interconnects are insulated with silicon nitride for biocompatibility. The probe has four platinum electrodes to provide a variety of spatially distributed stimuli, and is formed in a novel 3-D plating process that results in a microwire-like geometry (i.e., smoothly tapering diameter) with a correspondingly stable shank. | ['Paulo S. Motta', 'Jack W. Judy'] | Multielectrode microprobes for deep-brain stimulation fabricated with a customizable 3-D electroplating process | 252,939 |
In this paper we present a high-throughput VLSI architecture design for context-based adaptive binary arithmetic decoding (CABAD) in MPEG-4 AVC/H.264. To speed up the inherent sequential operations in CABAD, we break down the processing bottleneck by proposing a look-ahead codeword parsing technique on segmented context tables with cache registers, which reduces the cycle count by up to 53% on average. Based on a 0.18 μm CMOS technology, the proposed design outperforms the existing design by reducing hardware cost by 40% while achieving about 1.6 times the data throughput. | ['Yao-Chang Yang', 'Chien-Chang Lin', 'Hsui-Cheng Chang', 'Ching-Lung Su', 'Jiun-In Guo'] | A High Throughput VLSI Architecture Design for H.264 Context-Based Adaptive Binary Arithmetic Decoding with Look Ahead Parsing | 132,715 |
Industrial Analytics to Discover Knowledge from Instrumented Networked Machines. | ['Aldo Dagnino', 'David Cox'] | Industrial Analytics to Discover Knowledge from Instrumented Networked Machines. | 799,635 |
An encryption algorithm of JPEG2000 streams for supporting ciphertext-based transcoding | ['Yong Fu', 'Xiaowei Yi', 'Hengtai Ma'] | An encryption algorithm of JPEG2000 streams for supporting ciphertext-based transcoding | 698,523 |
In this paper, we study the problem of random field estimation with wireless sensor networks. We consider two encoding strategies, namely, Quantize-and-Estimate (Q&E) and Compress-and-Estimate (C&E), in two network scenarios: delay-constrained networks and delay-tolerant (DT) networks, where the time horizon is enlarged to a number of consecutive timeslots. For both scenarios and encoding strategies, we extensively analyze the distortion in the reconstructed random field. In DT scenarios, we find closed-form expressions of the optimal number of samples to be encoded in each timeslot (Q&E and C&E cases). Besides, we identify buffer stability conditions and a number of interesting distortion versus buffer occupancy tradeoffs. Latency issues in the reconstruction of the random field are addressed, as well. Computer simulation and numerical results are given in terms of distortion versus number of sensor nodes or SNR, latency versus network size, or buffer occupancy. | ['Javier Matamoros', 'Carles Anton-Haro'] | Random field estimation with delay-constrained and delay-tolerant wireless sensor networks | 173,590 |
Clustering is a basic operation in image processing and computer vision, and it plays an important role in unsupervised pattern recognition and image segmentation. While there are many methods for clustering, single-link hierarchical clustering is one of the most popular techniques. In this paper, with the advantages of both optical transmission and electronic computation, we design efficient parallel hierarchical clustering algorithms on arrays with reconfigurable optical buses (AROB). We first design three efficient basic operations, which include the matrix multiplication of two N×N matrices, finding the minimum spanning tree of a graph with N vertices, and identifying the connected component containing a specified vertex. Based on these three data operations, an O(log N) time parallel hierarchical clustering algorithm is proposed using N³ processors. Furthermore, if the connectivity of the AROB with four-port connection is allowed, two constant-time clustering algorithms can also be derived using N⁴ and N³ processors, respectively. These results improve on previously known algorithms developed on various parallel computational models. | ['Chin-Hsiung Wu', 'Shi-Jinn Horng', 'Horng-Ren Tsai'] | Efficient Parallel Algorithms for Hierarchical Clustering on Arrays with Reconfigurable Optical Buses | 143,004 |
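The single-link clustering that the parallel algorithms above accelerate has a compact sequential formulation via the minimum spanning tree: build the MST of the weighted graph, then delete its k−1 heaviest edges; the resulting components are the k single-link clusters. A minimal sequential Python sketch of that equivalence (illustrative background only; the paper's contribution is the parallel AROB formulation, not this sequential version):

```python
def single_link_clusters(n, edges, k):
    """Single-link clustering of n points: build an MST (Kruskal),
    then drop its k-1 heaviest edges; the components are the clusters.
    edges is a list of (weight, u, v) tuples."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    mst = []
    for w, u, v in sorted(edges):           # Kruskal: lightest edges first
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            mst.append((w, u, v))

    # Keep only the n-k lightest MST edges; removing the k-1 heaviest
    # splits the tree into k single-link clusters.
    parent = list(range(n))
    for w, u, v in sorted(mst)[: n - k]:
        parent[find(u)] = find(v)

    groups = {}
    for i in range(n):
        groups.setdefault(find(i), set()).add(i)
    return sorted(map(sorted, groups.values()))
```

For example, two tight pairs joined by an expensive bridge edge separate into two clusters as soon as k = 2 forces the bridge out of the tree.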
MapReduce for Big Data Analysis: Benefits, Limitations and Extensions | ['Yang Song', 'Hongzhi Wang', 'Jianzhong Li', 'Hong Gao'] | MapReduce for Big Data Analysis: Benefits, Limitations and Extensions | 847,198 |
Given a binary object (2D or 3D), its Betti numbers characterize the number of holes in each dimension. They are obtained algebraically, and even though they are perfectly defined, there is no unique way to display these holes. We propose two geometric measures for the holes, which are uniquely defined and try to compensate for the loss of geometric information during the homology computation: the thickness and the breadth. They are obtained by filtering the information of the persistent homology computation of a filtration defined through the signed distance transform of the binary object. | ['Aldo Gonzalez-Lorenzo', 'Alexandra Bac', 'Jean-Luc Mari', 'Pedro Real'] | Two Measures for the Homology Groups of Binary Volumes | 833,621 |
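In 2D, the Betti numbers that the abstract above starts from can be computed by plain component counting: b0 is the number of foreground components and b1 the number of bounded background components (the holes). The toy sketch below uses the standard 4-connectivity for foreground and 8-connectivity for background; it illustrates only the input quantities of such methods, not the persistent-homology measures proposed in the paper:

```python
from collections import deque

def betti_2d(img):
    """b0, b1 of a 2D binary image (list of 0/1 rows): b0 = foreground
    components (4-connectivity), b1 = holes, i.e. bounded background
    components (8-connectivity). The image is padded with background so
    the unbounded outer region is always a single component."""
    w = len(img[0])
    img = [[0] * (w + 2)] + [[0] + list(r) + [0] for r in img] + [[0] * (w + 2)]
    h, w = len(img), w + 2

    def components(value, neigh):
        seen, count = set(), 0
        for sy in range(h):
            for sx in range(w):
                if img[sy][sx] == value and (sy, sx) not in seen:
                    count += 1
                    queue = deque([(sy, sx)])   # BFS flood fill
                    seen.add((sy, sx))
                    while queue:
                        y, x = queue.popleft()
                        for dy, dx in neigh:
                            ny, nx = y + dy, x + dx
                            if (0 <= ny < h and 0 <= nx < w
                                    and img[ny][nx] == value
                                    and (ny, nx) not in seen):
                                seen.add((ny, nx))
                                queue.append((ny, nx))
        return count

    four = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    eight = four + [(-1, -1), (-1, 1), (1, -1), (1, 1)]
    return components(1, four), components(0, eight) - 1  # drop outer region
```

A filled ring, for instance, has one component and one hole: (b0, b1) = (1, 1).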
This paper proposes a flexible low-density parity-check (LDPC) decoder which leverages graphic processor units (GPU) to provide high decoding throughput. LDPC codes are widely adopted by the new emerging standards for wireless communication systems and storage applications due to their near-capacity error correcting performance. To achieve high decoding throughput on GPU, we leverage the parallelism embedded in the check-node computation and variable-node computation and propose a parallel strategy of partitioning the decoding jobs among multi-processors in GPU. In addition, we propose a scalable multi-codeword decoding scheme to fully utilize the computation resources of GPU. Furthermore, we developed a novel adaptive performance-tuning method to make our decoder implementation more flexible and scalable. The experimental results show that our LDPC decoder is scalable and flexible, and the adaptive performance-tuning method can deliver the peak performance based on the GPU architecture. | ['Guohui Wang', 'Michael Wu', 'Yang Sun', 'Joseph R. Cavallaro'] | GPU accelerated scalable parallel decoding of LDPC codes | 923,453 |
Leakage power has become the dominant contributor to total power consumption as technology scales down to the nano-region. Moreover, due to the exponential relationship between leakage power and temperature, a positive feedback loop can cause a thermal-runaway hazard. This poses a significant barrier for 3D integration of multi-cache-core processors, which have high I/O bandwidth but also high leakage-power density and a long heat-removal path. Nano-Electro-Mechanical Switches (NEMS) are among the most promising emerging devices to solve the thermal-runaway problem due to their zero leakage current and infinite sub-threshold slope. To properly control the thermal-runaway hazard in many-core systems, this paper studies hybrid CMOS-NEMS designs of thermal buffers and power gating to reduce leakage power and thermal runaway at the thermal-time-constant scale. Experimental results show that our proposed NEMS-based thermal management can effectively prevent thermal runaway in a 3D multi-cache-core processor. | ['Xiwei Huang', 'Hao Yu', 'Wei Zhang'] | NEMS based thermal management for 3D many-core system | 106,586 |
Adaptable Learning and Learning Analytics: A Case Study in a Programming Course | ['Hallvard Trætteberg', 'Anna Mavroudi', 'Michail N. Giannakos', 'John Krogstie'] | Adaptable Learning and Learning Analytics: A Case Study in a Programming Course | 878,604 |
Технология семантического структурирования контента научных электронных библиотек (A Technology for Semantic Structurization of Scientific Digital Library Content). | ['Sergey Parinov', 'Mikhail R. Kogalovsky'] | Технология семантического структурирования контента научных электронных библиотек (A Technology for Semantic Structurization of Scientific Digital Library Content). | 789,291 |
In this paper, we report about a systematic mapping study in software requirements prioritization with a specific focus on empirical studies.The results show that the interest from the research community is clustered around the more recent years. The majority of the studies are about the validation of research or solution proposals. We report the prevalence of studies on techniques and methodologies while there is a scarce interest in the strict evaluation of tools that could be beneficial to industry. In most of the empirical studies we found a bottom-up approach, centering on the techniques and on accuracy as the dependent variable, as well as on functional requirements as the main research focus. Based on the results, we provide recommendations for future research directions. | ['Massimiliano Pergher', 'Bruno Rossi'] | Requirements prioritization in software engineering: A systematic mapping study | 294,436 |
As the fundamental information for describing the dynamic state of vehicles, vehicle position is a significant element for cooperative vehicle infrastructure systems. Global navigation satellite systems (GNSS) are considered an effective approach for realising accurate and reliable vehicle positioning. To overcome the GNSS-challenged environments in urban areas, a low-cost cooperative vehicle positioning solution is proposed in this paper by exploring the capability of dedicated short range communication (DSRC) and dead reckoning (DR). The mechanism of DSRC-based cooperative positioning using the carrier frequency offset (CFO) observation is analysed. The Bayesian filtering scheme is employed for realising the information fusion of DSRC and DR, which leads to a DSRC/DR integration-based solution that compensates for GNSS and achieves the desirable accuracy and availability. Results from a cooperative simulation platform show an encouraging performance for the proposed solution. Under GNSS-challenged conditions, accuracy and service availability are assured for supporting cooperative vehicular applications. | ['Jiang Liu', 'Bai-gen Cai', 'Yinghong Wen', 'Jian Wang'] | Integrating DSRC and dead-reckoning for cooperative vehicle positioning under GNSS-challenged vehicular environments | 646,998 |
There are numerous privacy challenges specific to healthcare, ranging from patient expectations for confidentiality to sensors designed to collect health-related data that falls outside the bounds of traditional medical practice. All of these challenges make healthcare a unique environment when it comes to privacy. Learn what ubicomp researchers and practitioners can do to improve the state of privacy in a ubiquitous healthcare environment. | ['Kelly Caine'] | Privacy Is Healthy | 924,779 |
Some major features and attributes of the probabilistic design-for-reliability (PDfR) approach in aerospace electronics are indicated and discussed. The general concepts are illustrated by practical examples. The incentives for using PDfR methods and techniques are addressed, as well as the importance to consider the physics of failure and, when possible and appropriate, the most likely application(s) of the product of interest. | ['E. Suhir'] | When adequate and predictable reliability is imperative | 224,324 |
Semantic annotation has been used to combine varied information sources - gathered as unobtrusively as possible - and produce enhanced tools for working with digital resources. In this paper we describe trials carried out using a location tracking system and Semantic Web annotation technologies to analyse activities in a simulated ward environment. The motivation for semantic annotation of the space will be outlined along with the practicalities of the location based tracking system. The integration of location, annotations and video information will be discussed together with the technologies and approaches applicability to use in a real ward environment. | ['Mark J. Weal', 'Danius T. Michaelides', 'Kevin R. Page', 'David De Roure', 'Mary Gobbi', 'Eloise Monger', 'Fernando D. Martinez'] | Location based semantic annotation for ward analysis | 165,958 |