Columns:
- Id: string (length 1-6)
- PostTypeId: string (6 distinct values)
- AcceptedAnswerId: string (length 2-6)
- ParentId: string (length 1-6)
- Score: string (length 1-3)
- ViewCount: string (length 1-6)
- Body: string (length 0-32.5k)
- Title: string (length 15-150)
- ContentLicense: string (2 distinct values)
- FavoriteCount: string (2 distinct values)
- CreationDate: string (length 23)
- LastActivityDate: string (length 23)
- LastEditDate: string (length 23)
- LastEditorUserId: string (length 1-6)
- OwnerUserId: string (length 1-6)
- Tags: list
3708
2
null
421
1
null
You could try mixing Spark Streaming ([https://spark.apache.org/docs/1.1.0/streaming-programming-guide.html](https://spark.apache.org/docs/1.1.0/streaming-programming-guide.html)) with the Spark ML Library ([https://spark.apache.org/docs/1.1.0/mllib-guide.html](https://spark.apache.org/docs/1.1.0/mllib-guide.html)). Spark Streaming lets you process live data streams, and MLlib is Spark's machine learning library, so combining the two may get you somewhere. This is a very interesting subject that I am working on myself. It might also be worth creating a Google Community around it ([https://plus.google.com/communities](https://plus.google.com/communities)).
null
CC BY-SA 3.0
null
2014-12-16T08:51:30.987
2014-12-16T08:51:30.987
null
null
5165
null
3711
1
3717
null
14
23253
Naive Bayes apparently handles missing data differently, depending on whether it occurs in training or in testing/classification instances. When classifying instances, the attribute with the missing value is simply not included in the probability calculation ([reference](http://www.inf.ed.ac.uk/teaching/courses/iaml/slides/naive-2x2.pdf)). In training, "the instance [with the missing data] is not included in the frequency count for the attribute value-class combination" ([reference](http://www.csee.wvu.edu/%7Etimm/cs591o/old/BasicMethods.html)). Does that mean that particular training record simply isn't included in the training phase, or does it mean something else?
How does the naive Bayes classifier handle missing data in training?
CC BY-SA 4.0
null
2014-12-16T13:07:55.063
2022-12-03T12:19:36.527
2020-08-17T15:55:05.267
98307
6451
[ "machine-learning", "data-mining", "classification", "naive-bayes-classifier" ]
3713
2
null
3702
4
null
See a similar answer [here](https://stats.stackexchange.com/questions/105979/is-knn-a-discriminative-learning-algorithm). To clarify, k-nearest neighbors is a discriminative classifier. The difference between a generative and a discriminative classifier is that the former models the joint probability, whereas the latter models the conditional probability (the posterior), starting from the prior. In the case of nearest neighbors, the conditional probability of a class given a data point is modeled. To do this, one starts with the prior probability of the classes.
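To make the "conditional probability" view concrete, here is a minimal Python sketch (scikit-learn and the toy data are my own choices, not part of the answer): the predicted class probabilities for a query point are simply the class frequencies among its k nearest neighbours.

```
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Toy two-class data set (hypothetical numbers, for illustration only).
rng = np.random.RandomState(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

knn = KNeighborsClassifier(n_neighbors=5).fit(X, y)

query = np.array([[1.0, 1.0]])
# predict_proba returns the fraction of the 5 neighbours in each class,
# i.e. an estimate of the conditional probability P(class | x).
print(knn.predict_proba(query))
print(knn.predict(query))
```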
null
CC BY-SA 3.0
null
2014-12-17T02:58:06.057
2014-12-17T09:34:11.103
2017-04-13T12:44:20.183
-1
847
null
3715
2
null
1190
2
null
This seems to be a standard regression problem in which there are two goals:

- Obtain a predictive model that can be used for prediction.
- Identify which variables are the most important ones.

For both of the above, use an ensemble model. Consider both a random forest and a gradient boosted machine (sketched below). Both of these models will use the independent variables to predict the hospital time. Additionally, through variable importances, you can see which variables are the most important ones and have the most impact on the prediction.
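A minimal scikit-learn sketch of this two-goal workflow (the library, the synthetic data, and the hyperparameters are my own assumptions, not part of the answer): fit both ensembles, compare hold-out error, and read off the variable importances.

```
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

# Stand-in for the hospital-stay data: X are the independent variables,
# y is the length of stay (purely synthetic here).
X, y = make_regression(n_samples=1000, n_features=10, noise=10.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for model in (RandomForestRegressor(n_estimators=300, random_state=0),
              GradientBoostingRegressor(random_state=0)):
    model.fit(X_tr, y_tr)
    err = mean_absolute_error(y_te, model.predict(X_te))
    print(type(model).__name__, "MAE:", round(err, 2))
    # Higher values = variables with more impact on the prediction.
    print("importances:", np.round(model.feature_importances_, 3))
```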
null
CC BY-SA 3.0
null
2014-12-17T09:07:11.320
2014-12-17T09:07:11.320
null
null
847
null
3716
2
null
1243
1
null
In the way that you've defined or set up the problem, i.e.

```
sales = alpha*quality + beta*position + epsilon
```

we can easily quantify `beta`, given that your model is correct. You just need to run it through linear regression and it will give you the coefficient for `beta`.* If you would like to model click-through rates instead, you would have to train a classifier, i.e. fit a logistic model of the form

```
clicks ~ alpha*quality + beta*position + epsilon
```

*I believe you would have to restrict the training set to results where all impressions were obtained on the first page, otherwise your model will not hold (I would guess that `beta` is going to be strongly dependent on the page).
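As a sketch of what "run it through linear regression" and "fit a logistic model" could look like in practice (Python/statsmodels is my own choice here, and the simulated data are invented purely for illustration):

```
import numpy as np
import statsmodels.api as sm

rng = np.random.RandomState(0)
n = 500
quality = rng.uniform(0, 1, n)
position = rng.randint(1, 11, n).astype(float)

# Simulated sales: the "true" beta used to generate the data is -2.0.
sales = 5.0 * quality - 2.0 * position + rng.normal(0, 1, n)
X = sm.add_constant(np.column_stack([quality, position]))
print(sm.OLS(sales, X).fit().params)        # [intercept, alpha, beta]

# Simulated 0/1 clicks for the click-through-rate version.
logits = 2.0 * quality - 0.5 * position
clicks = (rng.uniform(size=n) < 1 / (1 + np.exp(-logits))).astype(int)
print(sm.Logit(clicks, X).fit(disp=0).params)
```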
null
CC BY-SA 3.0
null
2014-12-17T09:32:53.703
2014-12-17T09:32:53.703
null
null
847
null
3717
2
null
3711
18
null
In general, you have a choice when handling missing values when training a naive Bayes classifier. You can choose to either

- omit records with any missing values, or
- omit only the missing attributes.

I'll use the example linked to above to demonstrate these two approaches. Suppose we add one more training record to that example.

```
Outlook   Temperature  Humidity  Windy  Play
-------   -----------  --------  -----  ----
rainy     cool         normal    TRUE   no
rainy     mild         high      TRUE   no
sunny     hot          high      FALSE  no
sunny     hot          high      TRUE   no
sunny     mild         high      FALSE  no
overcast  cool         normal    TRUE   yes
overcast  hot          high      FALSE  yes
overcast  hot          normal    FALSE  yes
overcast  mild         high      TRUE   yes
rainy     cool         normal    FALSE  yes
rainy     mild         high      FALSE  yes
rainy     mild         normal    FALSE  yes
sunny     cool         normal    FALSE  yes
sunny     mild         normal    TRUE   yes
NA        hot          normal    FALSE  yes
```

- If we decide to omit the last record due to the missing outlook value, we would have the exact same trained model as discussed in the link.
- We could also choose to use all of the information available from this record and simply omit the attribute Outlook from it. This would yield the following updated counts and frequencies.

```
Outlook              Temperature           Humidity
          Yes  No              Yes  No               Yes  No
Sunny       2   3    Hot         3   2     High        3   4
Overcast    4   0    Mild        4   2     Normal      7   1
Rainy       3   2    Cool        3   1
Sunny     2/9 3/5    Hot      3/10 2/5     High     3/10 4/5
Overcast  4/9 0/5    Mild     4/10 2/5     Normal   7/10 1/5
Rainy     3/9 2/5    Cool     3/10 1/5

Windy                Play
          Yes  No              Yes   No
False       7   2               10    5
True        3   3
False    7/10 2/5            10/15 5/15
True     3/10 3/5
```

Notice there are 15 observations for each attribute except `Outlook`, which has only 14. This is because that value was unavailable for the last record. All further development would continue as discussed in the linked article. For example, the naiveBayes implementation in the R package `e1071` has the option `na.action`, which can be set to `na.omit` or `na.pass`.
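A small pandas sketch of the "omit only the missing attribute" option (pandas is my own choice here, not something the answer prescribes): cross-tabulation drops rows whose value for that particular attribute is missing, so the record still contributes to the counts of every other attribute.

```
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "Outlook":  ["rainy", "sunny", "overcast", np.nan],   # last record: missing outlook
    "Humidity": ["normal", "high", "high", "normal"],
    "Play":     ["no", "no", "yes", "yes"],
})

# The row with the missing Outlook is silently excluded from this table only...
print(pd.crosstab(df["Outlook"], df["Play"]))
# ...but it is still counted for every attribute that is present.
print(pd.crosstab(df["Humidity"], df["Play"]))
```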
null
CC BY-SA 4.0
null
2014-12-17T18:15:42.133
2022-12-03T12:19:36.527
2022-12-03T12:19:36.527
4724
4724
null
3718
1
3723
null
9
1020
I'm looking to graph and interactively explore live/continuously measured data. There are quite a few options out there, with plot.ly being the most user-friendly. Plot.ly has a fantastic and easy-to-use UI (easily scalable, pannable, zoomable/fit-to-screen), but it cannot handle the large sets of data I'm collecting. Does anyone know of any alternatives? I have MATLAB, but I don't have enough licenses to run this and do development at the same time. I know that LabVIEW would be a great option, but it is currently cost-prohibitive. Thanks in advance!
Interactive Graphing while logging data
CC BY-SA 3.0
null
2014-12-17T21:17:13.340
2016-07-15T23:19:12.977
2014-12-18T02:39:23.860
2452
6469
[ "dataset", "visualization" ]
3719
1
null
null
13
6809
I am working on a data science project on social relationship mining and need to store the data in a graph database. Initially I chose Neo4j as the database, but it seems Neo4j doesn't scale well. The alternatives I found are Titan and OrientDB. I have gone through [this](http://db-engines.com/en/system/Neo4j%3BOrientDB%3BTitan) comparison of these three databases, but I would like to get more details. Could someone help me choose the best one? Mainly I would like to compare the performance, scaling, online documentation/tutorials, Python library support, query language complexity and graph algorithm support of these databases. Also, are there any other good database options?
Neo4j vs OrientDB vs Titan
CC BY-SA 3.0
null
2014-12-18T04:36:06.107
2015-06-26T07:32:27.147
2015-06-02T17:00:53.513
5091
5091
[ "data-mining", "graphs", "databases", "social-network-analysis" ]
3720
2
null
3718
1
null
Visualizing large datasets is a long-standing problem. One of the issues is understanding how we can show over a million points on a screen that has only about a million pixels. Having said that, here are a few tools that can handle big data:

- Tableau: you could use their free desktop tool.
- Tabplot: built on top of ggplot2 in R to handle larger datasets.
- See this review for 5 other products that can help you do your job.
null
CC BY-SA 3.0
null
2014-12-18T04:41:33.493
2014-12-18T04:41:33.493
null
null
847
null
3721
1
3722
null
3
1838
With increasingly sophisticated methods that work on large-scale datasets, financial applications are obvious. I am aware of machine learning being employed in financial services to detect and flag fraudulent activity, but I have a weaker understanding of how it helps to predict the price of a stock the next day, or how many shares of a particular company to buy. Do hedge funds still employ portfolio optimization techniques straight out of the mathematical finance literature, or have they started to use machine learning to hedge their bets? More importantly, what features are used by these hedge funds, and what is a representative problem setup?
Machine Learning for hedging/ portfolio optimization?
CC BY-SA 3.0
null
2014-12-18T04:48:49.820
2014-12-18T08:25:09.517
null
null
847
[ "machine-learning", "feature-selection", "optimization" ]
3722
2
null
3721
4
null
That is a rather broad question, and there is a ton of literature about quantitative analysis and stock market prediction using machine learning. The most classical example of predicting the stock market is employing neural networks; you can use whatever features you think might be relevant for your prediction, for example the unemployment rate, the oil price, the gold price, interest rates, and the time series itself, i.e. the volatility, the change over the last 2, 3, 7, ... days, etc. A more classical approach is input-output analysis in econometrics, or autoregression analysis, but all of it can be modeled using neural networks or any other function approximator/regression in a very natural way. As said, there are tons of other possibilities to model the market, to name a few: ant colony optimization (ACO), classical regression analysis, genetic algorithms, decision trees, reinforcement learning, etc.; you name it, almost everything has probably been applied to the stock market prediction problem. There are different fund manager types on the markets. There are still the quants, who do quantitative analysis using classical financial maths and maths borrowed from physics to describe market movements. There are still the most conservative ones, who do a long-term, fundamental analysis of the corporation, that is, looking at how the corporation earns money and where it spends money. There are the tactical analysts, who just look for immediate signals to buy/sell a stock in the short term. And there are the quantitative guys who employ machine learning amongst other methods.
null
CC BY-SA 3.0
null
2014-12-18T08:10:00.630
2014-12-18T08:25:09.517
2014-12-18T08:25:09.517
3132
3132
null
3723
2
null
3718
7
null
For this answer, I have assumed that you prefer open source solutions for big data visualization. This assumption is based on the budgetary details in your question. However, there is one exclusion to this: below I will add a reference to one commercial product which I believe might be beneficial in your case (provided that you could afford it). I also assume that browser-based solutions are acceptable (I would even prefer them, unless you have specific contradictory requirements).

Naturally, the first candidate solution to your problem I would consider is the D3.js JavaScript library: [http://d3js.org](http://d3js.org). However, despite its flexibility and other benefits, I think this solution is too low-level. Therefore, I would recommend taking a look at the following open source projects for big data visualization, which are powerful and flexible enough, but operate at a higher level of abstraction (some of them are built on the D3.js foundation and are sometimes referred to as the D3.js [visualization stack](http://schoolofdata.org/2013/08/12/climbing-the-d3-js-visualisation-stack)).

- Bokeh: Python-based interactive visualization library, which supports big data and streaming data: http://bokeh.pydata.org
- Flot: JavaScript-based interactive visualization library, focused on jQuery: http://www.flotcharts.org
- NodeBox: unique rapid data visualization system (not browser-based, but multi-language and multi-platform), based on generative design and visual functional programming: https://www.nodebox.net
- Processing: complete software development system with its own programming language, libraries, plug-ins, etc., oriented to visual content: https://www.processing.org (allows executing Processing programs in a browser via http://processingjs.org)
- Crossfilter: JavaScript-based interactive visualization library for big data by Square (very fast visualization of large multivariate data sets): http://square.github.io/crossfilter
- bigvis: an R package for big data exploratory analysis (not a visualization library per se, but useful for processing large data sets /aggregating, smoothing/ prior to visualization, using various R graphics options): https://github.com/hadley/bigvis
- prefuse: Java-based interactive visualization library: http://prefuse.org
- Lumify: big data integration, analysis and visualization platform (interesting feature: supports the Semantic Web): http://lumify.io

Separately, I'd like to mention two open source big data analysis and visualization projects focused on graph/network data (with some support for streaming data of that type): [Cytoscape](http://www.cytoscape.org) and [Gephi](https://gephi.github.io). If you are interested in some other, more specific (maps support, etc.) or commercial (basic free tiers) projects and products, please see this compilation, which I thoroughly curated and analyzed to come up with the main list above: [http://blog.profitbricks.com/39-data-visualization-tools-for-big-data](http://blog.profitbricks.com/39-data-visualization-tools-for-big-data).

Finally, as I promised in the beginning, Zoomdata is the commercial product I thought you might want to take a look at: [http://www.zoomdata.com](http://www.zoomdata.com). The reason I made an exclusion for it from my open source compilation is its built-in support for big data platforms. In particular, Zoomdata provides data connectors for Cloudera Impala, Amazon Redshift, MongoDB, Spark and Hadoop, plus search engines, major database engines and streaming data. Disclaimer: I have no affiliation with Zoomdata whatsoever; I was just impressed by their range of connectivity options (which might cost you dearly, but that's another aspect of this topic's analysis).
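For a quick taste of the browser-based route, here is a minimal Bokeh sketch (my own toy example; the fake data, file name and parameter choices are assumptions, and for truly live feeds you would look into Bokeh's server mode rather than a static HTML export):

```
import numpy as np
from bokeh.plotting import figure, output_file, save

# Fake "meter readings": a large series of the kind plot.ly struggles with.
t = np.arange(500_000)
y = np.sin(t / 5000.0) + np.random.RandomState(0).normal(0, 0.1, t.size)

p = figure(title="Continuously measured data (static snapshot)",
           tools="pan,wheel_zoom,box_zoom,reset")
p.line(t, y, line_width=1)

output_file("readings.html")   # open in a browser; pan/zoom stay interactive
save(p)
```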
null
CC BY-SA 3.0
null
2014-12-18T13:30:54.210
2014-12-18T13:30:54.210
null
null
2452
null
3724
2
null
2668
5
null
[ScalaNLP](http://www.scalanlp.org/) is a suite of machine learning and numerical computing libraries with support for common natural language processing tasks. [Here](https://www.datasciencecentral.com/profiles/blogs/top-15-scala-libraries-for-data-science-in-2018-1) is a more recently updated list of Scala libraries for data science.
null
CC BY-SA 4.0
null
2014-12-18T18:07:18.170
2021-03-11T20:14:30.817
2021-03-11T20:14:30.817
29169
6478
null
3725
2
null
3700
1
null
Recall the definition he makes for the graph Laplacian earlier, $L = D -W $. Now consider the map in the RHS parentheses which I'll call $L^*$, $$ L^*f(x_i) := f(x_i) - \alpha \sum_{x_j, ||x_i-x_j||<\epsilon}e^{-\frac{||x_i-x_j||^2}{4t}}f(x_j).$$ The suggested weight matrix definition is natural because it lets us write $$ L^* := I - D^{-1}W. $$ Here's [a reference](http://www.cs.yale.edu/homes/singer/publications/laplacian_ACHA.pdf) to a related paper with some easy to read exposition. Hope this helps!
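For readers who want to see the operator on actual points, here is a short NumPy sketch (my own construction from the definitions above; including the self-weight term in the sums is an assumption on my part) that builds the heat-kernel weights $W$ on an $\epsilon$-neighbourhood, the degree matrix $D$, and $L^* = I - D^{-1}W$:

```
import numpy as np

def laplacian_operator(X, eps=1.0, t=0.25):
    """Return I - D^{-1} W for the truncated heat-kernel weights used above."""
    n = len(X)
    # Pairwise squared distances.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (4.0 * t))
    W[np.sqrt(d2) >= eps] = 0.0            # epsilon-neighbourhood truncation
    D = W.sum(axis=1)                      # degrees
    return np.eye(n) - W / D[:, None]      # I - D^{-1} W

X = np.random.RandomState(0).rand(6, 2)
L_star = laplacian_operator(X)
print(np.allclose(L_star.sum(axis=1), 0))  # rows sum to 0: constants lie in the kernel
```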
null
CC BY-SA 3.0
null
2014-12-19T01:20:39.907
2014-12-19T01:20:39.907
null
null
4724
null
3726
2
null
2673
1
null
This is the fundamental challenge of all data modeling. We don't just want to memorize the link between a given input and a given output (otherwise you wouldn't be modeling data, you'd be memorizing 1:1 connections with a dict / hash / relational database table / etc.). We want to capture the underlying pattern in the data from looking only at the training data.

Let's expand a little on your gravity example. You have your 10 training samples showing the start and ending position of an object dropped. For consistency, let's say the object was dropped the moment its location was initially recorded, and the ending location was recorded at some precise time interval later (but before the object hit the ground). Let's also say the model (a neural network in this case) managed to precisely learn the expected change in location, since it just comes down to subtraction along one axis. You can show it another 10, 100, 1000 examples that all leverage the connection found, and your model will keep performing well.

Why not keep going to 10k, 100k, or even more samples? Theoretically, if you managed to isolate the connection and run the experiment the same way each time, your model would always work. But realistically, something is going to eventually change in the system. You hire a new lab assistant who tends to press the 'record location' button well after having dropped the object (giving the object more initial velocity, which you won't notice having only recorded location). Maybe you lost your initial ball and had to use something else which is lighter and catches the wind more (so it falls more slowly). The longer you run the experiment, the more small changes will creep into your system. Eventually these changes will alter the connection enough to make your initial model wrong.

When modeling data, we want to capture the underlying patterns and acknowledge that the model only matters as long as those underlying patterns stay relevant. It's not really about the number of samples; it's about the connections / the model itself. The number of samples just happens to be one of the better proxies we have: the more samples you use, the more confident you can be that there is some underlying pattern. 'Statistical validity' is one stab at solving this, though its validity is still up for question in the era of big data. There is [plenty of work](https://ieeexplore.ieee.org/document/5726612) trying to solve how to gain confidence in good generalization in neural networks specifically, but it's still very much an open question.

For a different example, if you're looking at user behavior, you'll see differences between day and night; weekdays and weekends; summer and winter; the year of a person's life; the culture a person grew up in... Even if you prove you found a pattern in your initial sample, the system will eventually change, and it's up to luck whether the connection(s) you found are part of the system that changed or part of the system that didn't.
null
CC BY-SA 4.0
null
2014-12-19T01:42:36.780
2022-04-22T13:15:54.290
2022-04-22T13:15:54.290
25180
6391
null
3728
1
3729
null
1
255
I was wondering if anyone knew which piece of software is being used in this video? It is an image recognition system that makes the training process very simple. [http://www.ted.com/talks/jeremy_howard_the_wonderful_and_terrifying_implications_of_computers_that_can_learn#t-775098](http://www.ted.com/talks/jeremy_howard_the_wonderful_and_terrifying_implications_of_computers_that_can_learn#t-775098) The example is with car images, though the video should start at the right spot.
What software is being used in this image recognition system?
CC BY-SA 3.0
null
2014-12-19T11:42:04.547
2014-12-19T16:02:06.330
null
null
5175
[ "classification" ]
3729
2
null
3728
1
null
I'm pretty sure that the software you're referring to is some kind of internal research software developed by Enlitic ([http://www.enlitic.com](http://www.enlitic.com)), where Jeremy Howard works as founder and CEO. By "internal research software" I mean either proof-of-concept software or prototype software.
null
CC BY-SA 3.0
null
2014-12-19T16:02:06.330
2014-12-19T16:02:06.330
null
null
2452
null
3730
1
null
null
9
2753
I am training random forest models in R using `randomForest()` with 1000 trees and data frames with about 20 predictors and 600K rows. On my laptop everything works fine, but when I move to Amazon EC2 to run the same thing, I get the error:

```
Error: cannot allocate vector of size 5.4 Gb
Execution halted
```

I'm using the `c3.4xlarge` instance type, so it's pretty beefy. Does anyone know a workaround to get this to run on this instance? I would love to understand the memory nuances that cause this problem only on the EC2 instance and not on my laptop (OS X 10.9.5; Processor 2.7 GHz Intel Core i7; Memory 16 GB 1600 MHz DDR3). Thanks.
R random forest on Amazon ec2 Error: cannot allocate vector of size 5.4 Gb
CC BY-SA 4.0
null
2014-12-19T16:02:48.693
2019-06-07T17:27:34.900
2019-06-07T17:27:34.900
29169
null
[ "r", "random-forest" ]
3731
2
null
3730
5
null
Here's some advice (use at your own risk!):

- make sure that your R environment on EC2 is identical to the one on your laptop;
- make sure you're using 64-bit images for your virtual machine instance;
- try to create/enable swap space: https://stackoverflow.com/a/22247782/2872891
- see this discussion: https://stackoverflow.com/q/5171593/2872891
- see this discussion: https://stackoverflow.com/q/12322959/2872891
- see this discussion: https://stackoverflow.com/q/1358003/2872891

If the above-mentioned simpler measures don't help, or you want to achieve more scalability and/or performance, including the ability to parallelize the process on a single machine or across multiple machines, consider using the `bigrf` R package: [http://cran.r-project.org/web/packages/bigrf](http://cran.r-project.org/web/packages/bigrf). Also see this discussion: [https://stackoverflow.com/q/1358003/2872891](https://stackoverflow.com/q/1358003/2872891).
null
CC BY-SA 3.0
null
2014-12-19T16:33:44.423
2014-12-19T16:33:44.423
2017-05-23T12:38:53.150
-1
2452
null
3732
2
null
3730
2
null
In addition to the other ideas: reduce your data until you figure out what you can run on the Amazon instance. If it can't do 100k rows then something is very wrong; if it only fails at 590k rows then it's marginal. The c3.4xlarge instance has 30 GB of RAM, so yes, that should be enough.
null
CC BY-SA 3.0
null
2014-12-19T23:24:55.017
2014-12-19T23:24:55.017
null
null
471
null
3733
1
3736
null
5
600
I have a function (not available in closed form) that takes in a few parameters (about 20) and returns a real value. A few of these parameters are discrete while others are continuous. Some of these parameters can only be chosen from a finite set of values. Since I don't have the function in closed form, I cannot use any gradient-based methods. However, the discrete nature of, and the box constraints on, a few of those parameters restrict even the number of derivative-free optimization techniques at my disposal. I am wondering what options I have in terms of optimization methods.
Which Optimization method to use?
CC BY-SA 3.0
null
2014-12-20T03:37:21.820
2017-03-26T19:26:39.027
null
null
847
[ "optimization" ]
3734
1
null
null
2
164
My apologies in advance, as I am new to this. I have searched the internet and tried various processes, and nothing seems to work or address this situation. I have a dataset of 30,000 transactions and 500,000 items; the average transaction contains about 50 items. The dataset is sparse, so the support threshold must be set quite low. Furthermore, the rules become more valuable the larger the number of items in the rule. I have tried running this in arules and the tests fail after exceeding 64 GB of RAM (the limit of the machine). I have tried reducing items and transactions to smaller subsets, but I still hit this memory limit. Ultimately, I am looking for ways to cluster large groups of similar accounts by their selection of items, and to generate the confidence and lift of various next items selected from those clusters. My question: are there alternative, more efficient ways to do this, or other approaches to consider? Thank you.
Approaches to high dimension pattern matching problem
CC BY-SA 3.0
null
2014-12-20T07:03:57.837
2015-04-19T14:49:57.930
null
null
6506
[ "r" ]
3735
2
null
3734
1
null
According to the following discussion on StackOverflow, a situation like the one you've described can occur when one of the variables in a data set is of an unexpected type (for example, a `factor` instead of a `character`): [https://stackoverflow.com/q/7246412/2872891](https://stackoverflow.com/q/7246412/2872891). Also, consider using the package `bigmemory`, recommended in the accepted answer, or similar packages for big data analysis. For the latter, please see the section "Large memory and out-of-memory data" in the CRAN Task View [High-Performance and Parallel Computing with R](http://cran.r-project.org/web/views/HighPerformanceComputing.html). Finally, an additional note: there is an ecosystem of R packages built around the `arules` package, which includes supporting packages for algorithms (`arulesNBMiner`), applications (`arulesSequences`, `arulesClassify`) and visualization (`arulesViz`). You are likely aware of [that](http://www.jmlr.org/papers/volume12/hahsler11a/hahsler11a.pdf), but I have decided to include this good-to-know fact just in case and for the sake of completeness.
null
CC BY-SA 3.0
null
2014-12-20T08:06:13.077
2014-12-20T08:06:13.077
2017-05-23T12:38:53.587
-1
2452
null
3736
2
null
3733
4
null
[Bayesian optimization](http://en.wikipedia.org/wiki/Bayesian_optimization) is a principled way of sequentially finding the extremum of black-box functions. What's more, there are numerous software packages that make it easy, such as [BayesOpt](https://bitbucket.org/rmcantin/bayesopt) and [MOE](http://engineeringblog.yelp.com/2014/07/introducing-moe-metric-optimization-engine-a-new-open-source-machine-learning-service-for-optimal-ex.html). Another flexible Bayesian framework that you can use for optimization is [Gaussian processes](http://en.wikipedia.org/wiki/Gaussian_process); see [Global Optimisation with Gaussian Processes](http://ml.dcs.shef.ac.uk/gpss/gpss13/talks/Sheffield-GPSS2013-Osborne.pdf).
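As a concrete sketch of how this can look for a mixed discrete/continuous, box-constrained black box, here is a short example using scikit-optimize (a package the answer does not mention, so treat the library choice and the toy objective as my assumptions):

```
from skopt import gp_minimize
from skopt.space import Categorical, Integer, Real

# Hypothetical black-box objective: replace with your own expensive function.
def objective(params):
    x, n, mode = params
    return (x - 0.3) ** 2 + abs(n - 4) + (0.0 if mode == "fast" else 0.5)

space = [
    Real(0.0, 1.0, name="x"),                    # continuous, box-constrained
    Integer(1, 10, name="n"),                    # discrete
    Categorical(["fast", "slow"], name="mode"),  # finite set of values
]

result = gp_minimize(objective, space, n_calls=30, random_state=0)
print(result.x, result.fun)   # best parameters found and their objective value
```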
null
CC BY-SA 3.0
null
2014-12-21T00:40:12.337
2017-03-26T19:26:39.027
2017-03-26T19:26:39.027
29575
381
null
3737
2
null
2510
1
null
This is a tricky question, because it's easy and difficult at the same time. Easy, because there are a lot of resources that can potentially help you make a decision on the topic. Difficult, because the situation is very different for each particular person (not to mention that their interests might change at any time), which makes it extremely difficult for other people to give you good advice and for you to make the right decision.

As for data science career options, you can certainly consider a degree path (MS or MS + PhD), but you need to be aware of other options. For a comprehensive resource dedicated to data science and related degree programs (both traditional and online), please visit this page: [http://www.kdnuggets.com/education/index.html](http://www.kdnuggets.com/education/index.html). A comprehensive review of all these offerings is IMHO an enormous task and far beyond an answer here on Stack Exchange, or even a lengthy blog post. However, nowadays one is not limited to traditional educational options, and I think it's important to be aware of the alternatives. One of them is certifications (linked at the above-mentioned page, but, in my opinion, the only certification worth considering is the [Certified Analytics Professional](https://www.informs.org/Certification-Continuing-Ed/Analytics-Certification), a solid and vendor-neutral certification from the reputable INFORMS). Another option is the recently booming data science intensive educational offerings, from short-term (and often too commercial, to put it lightly) bootcamps to more solid offerings, including free but competitive ones, such as the [Insight Data Science Fellows Program](http://insightdatascience.com), where one needs a PhD to apply, or its sister program, the [Insight Data Engineering Fellows Program](http://www.insightdataengineering.com), which doesn't have such a requirement. Finally, there is yet another option: self-study. It partially intersects with the certificate option if one uses massive open online courses (MOOCs) (a review of which deserves a separate comprehensive post), but there are open curricula that might suit one better, such as the Open Source Data Science Masters curriculum linked in my [earlier relevant answer](https://datascience.stackexchange.com/a/742/2452).

P.S. While your question focuses on data science, I think it may be wise to at least consider another career path, given your math background. I'm talking about the operations research field, which is not that far from data science (and even somewhat intersects with it). While similar, data science is IMHO more statistics-focused, whereas operations research is more math-focused, at least that's how I see it. Despite all the popularity and "gold rush" of data science, an operations research career is a solid one, just not as hot. Of course, if you're excited about things like artificial intelligence, machine learning and, especially, deep learning, a data science career is the way to go. Whatever you choose, the good thing is that with your math background it will be easy to change focus, should you decide to. Hope this helps. Good luck!
null
CC BY-SA 3.0
null
2014-12-21T11:43:35.100
2014-12-25T10:41:26.943
2017-04-13T12:50:41.230
-1
2452
null
3738
1
null
null
27
11805
### Background

I'm working on a time series data set of energy meter readings. The length of the series varies by meter: for some I have several years, for others only a few months, etc. Many display significant seasonality, and often multiple layers: within the day, week, or year.

One of the things I've been working on is clustering of these time series. My work is academic for the moment, and while I'm doing other analysis of the data as well, I have a specific goal to carry out some clustering. I did some initial work where I calculated various features (percentage used on weekends vs. weekdays, percentage used in different time blocks, etc.). I then moved on to using Dynamic Time Warping (DTW) to obtain the distance between different series and clustering based on the distance values, and I've found several papers related to this.

### Question

Will the seasonality in a specific series changing cause my clustering to be incorrect? And if so, how do I deal with it? My concern is that the distances obtained by DTW could be misleading in cases where the pattern in a time series has changed. This could lead to incorrect clustering. In case the above is unclear, consider these examples:

### Example 1

A meter has low readings from midnight until 8AM, the readings then increase sharply for the next hour and stay high from 9AM until 5PM, then decrease sharply over the next hour and stay low from 6PM until midnight. The meter continues this pattern consistently every day for several months, but then changes to a pattern where readings simply stay at a consistent level throughout the day.

### Example 2

A meter shows approximately the same amount of energy being consumed each month. After several years, it changes to a pattern where energy usage is higher during the summer months before returning to the usual amount.

### Possible Directions

- I've wondered whether I can continue to compare whole time series, but split them and consider them as separate series if the pattern changes considerably. However, to do this I'd need to be able to detect such changes. Also, I just don't know if this is a suitable way of working with the data.
- I've also considered splitting the data and considering it as many separate time series. For instance, I could consider every day/meter combination as a separate series. However, I'd then need to do the same if I wanted to consider the weekly/monthly/yearly patterns. I think this would work, but it's potentially quite onerous and I'd hate to go down this path if there's a better way that I'm missing.

### Further Notes

These are things that have come up in comments, or things I've thought of due to comments, which might be relevant. I'm putting them here so people don't have to read through everything to get the relevant information.

- I'm working in Python, but have rpy for those places where R is more suitable. I'm not necessarily looking for a Python answer though; if someone has a practical answer of what should be done, I'm happy to figure out implementation details myself.
- I have a lot of working "rough draft" code: I've done some DTW runs, I've done a couple of different types of clustering, etc. I think I largely understand the direction I'm taking, and what I'm really looking for is related to how I process my data before finding distances, running clustering, etc. Given this, I suspect the answer would be the same whether the distances between series are calculated via DTW or a simpler Euclidean Distance (ED).
- I have found these papers especially informative on time series and DTW, and they may be helpful if some background is needed to the topic area: http://www.cs.ucr.edu/~eamonn/selected_publications.htm
How to deal with time series which change in seasonality or other patterns?
CC BY-SA 3.0
null
2014-12-22T03:30:45.673
2020-09-03T13:08:20.857
2020-06-16T11:08:43.077
-1
5246
[ "data-mining", "clustering", "time-series", "beginner" ]
3739
2
null
3738
4
null
If you just want to mine for seasonal patterns, then look into [autocorrelation](http://en.wikipedia.org/wiki/Autocorrelation). If you're looking for a model that can learn seasonal patterns and make forecasts from them, then [Holt-Winters](https://stat.ethz.ch/R-manual/R-patched/library/stats/html/HoltWinters.html) is a good start, and [ARIMA](http://en.wikipedia.org/wiki/Autoregressive_integrated_moving_average) would be a good follow-up. [Here](http://a-little-book-of-r-for-time-series.readthedocs.org/en/latest/)[[pdf]](https://media.readthedocs.org/pdf/a-little-book-of-r-for-time-series/latest/a-little-book-of-r-for-time-series.pdf) is the tutorial that got me off the ground.
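A brief Python sketch of both suggestions (the library choice, statsmodels, and the synthetic monthly series are mine, not the answer's): autocorrelation to surface the seasonal lag, then Holt-Winters for a seasonal forecast.

```
import numpy as np
from statsmodels.tsa.holtwinters import ExponentialSmoothing
from statsmodels.tsa.stattools import acf

# Synthetic 4 years of monthly readings with yearly seasonality.
rng = np.random.RandomState(0)
t = np.arange(48)
series = 100 + 10 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 2, 48)

# A spike near lag 12 hints at yearly seasonality.
print(np.round(acf(series, nlags=13), 2))

fit = ExponentialSmoothing(series, trend="add", seasonal="add",
                           seasonal_periods=12).fit()
print(np.round(fit.forecast(12), 1))   # forecast for the next year
```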
null
CC BY-SA 3.0
null
2014-12-22T05:14:16.130
2014-12-22T05:14:16.130
null
null
6391
null
3740
1
null
null
2
3222
I am performing document (text) classification on the category of websites, using the website content (tokenized, stemmed and lowercased). My problem is that I have an over-represented category which has vastly more data points than any other (roughly 70%, or ~4000, of my data points are of this one category, while about 20 other categories make up the remaining 30%, some of which have fewer than 50 data points).

My first question: What could I do to improve the accuracy of my classifier given the sparse data for some of the labels? Should I simply discard a certain proportion of the data points in the over-represented category? Should I use something other than Gaussian Naive Bayes with tf-idf?

My second question: After I perform the classification, I save the tf-idf vector as well as the classifier to disk. However, when I re-run the classification on the same data, I sometimes get different results from what I initially got (for example, if previously a data point was classified as "Entertainment", it might receive "News" now). Is this indicative of an error in my implementation, or expected?
Improving Naive Bayes accuracy for text classification
CC BY-SA 3.0
null
2014-12-22T09:31:31.747
2016-01-27T06:39:35.897
2016-01-27T06:39:35.897
8820
5199
[ "machine-learning", "classification", "accuracy", "naive-bayes-classifier" ]
3741
1
null
null
2
186
I'm trying to figure out a strange phenomenon when I use matrix factorization (the Netflix Prize solution) for a rating matrix: $R = P^T * Q + B_u + B_i$, with ratings ranging from 1 to 10.

I evaluate the model by the absolute mean average error for each label in the test set. In the table below, the first column is the original (rounded) score; the second column is the error when I don't transform the data before training; the third is the error when I divide all ratings by 2 before training and, at prediction time, reconstruct the matrix and multiply by 2 to get back to the original scale.

As you can see, for grades 3-4 (most samples are labeled 3-4) the rescaled model is more precise, while in the high score range (like 9 and 10, just 2% of the whole training set) it is worse.

```
+----------------------+--------------------+--------------------+
| rounded_origin_score | abs_mean_avg_error | abs_mean_avg_error |
|                      | (no rescaling)     | (ratings / 2)      |
+----------------------+--------------------+--------------------+
|                  1.0 | 2.185225396100167  | 2.559125413626183  |
|                  2.0 | 1.4072212825108161 | 1.5290497332538155 |
|                  3.0 | 0.7606073396581479 | 0.6285151230269825 |
|                  4.0 | 0.7823491986435621 | 0.6419077576969795 |
|                  5.0 | 1.2734369551159568 | 1.256590210555053  |
|                  6.0 | 1.9546560495715863 | 2.0461809588933835 |
|                  7.0 | 2.707229888048017  | 2.8866856489147494 |
|                  8.0 | 3.5084244741417137 | 3.7212155956153796 |
|                  9.0 | 4.357185793060213  | 4.590550124054919  |
|                 10.0 | 5.180752400467891  | 5.468600926567884  |
+----------------------+--------------------+--------------------+
```

I've re-trained the model several times and got the same result, so I don't think it is an effect of randomness.
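For anyone who wants to reproduce the experiment, below is a minimal NumPy sketch of a biased matrix-factorization model trained with SGD (my own bare-bones implementation, not the code used in the question; I also add a global mean term, which is not in the formula above, and the synthetic ratings are random): train once on the raw 1-10 ratings and once on ratings divided by 2, multiplying predictions back by 2, then compare.

```
import numpy as np

def train_mf(R, k=5, lr=0.01, reg=0.05, epochs=50, seed=0):
    """R: array of (user, item, rating) rows. Returns a predict(u, i) function."""
    rng = np.random.RandomState(seed)
    n_u, n_i = int(R[:, 0].max()) + 1, int(R[:, 1].max()) + 1
    P = 0.1 * rng.randn(n_u, k)
    Q = 0.1 * rng.randn(n_i, k)
    bu, bi, mu = np.zeros(n_u), np.zeros(n_i), R[:, 2].mean()  # mu: global mean (extra term)
    for _ in range(epochs):
        for u, i, r in R[rng.permutation(len(R))]:
            u, i = int(u), int(i)
            e = r - (mu + bu[u] + bi[i] + P[u] @ Q[i])
            bu[u] += lr * (e - reg * bu[u])
            bi[i] += lr * (e - reg * bi[i])
            P[u], Q[i] = (P[u] + lr * (e * Q[i] - reg * P[u]),
                          Q[i] + lr * (e * P[u] - reg * Q[i]))
    return lambda u, i: mu + bu[int(u)] + bi[int(i)] + P[int(u)] @ Q[int(i)]

# Tiny synthetic data set: 100 users, 50 items, integer ratings 1..10.
rng = np.random.RandomState(1)
R = np.array([(u, i, rng.randint(1, 11)) for u in range(100)
              for i in rng.choice(50, 10, replace=False)], dtype=float)

predict_raw = train_mf(R)            # trained on the raw 1..10 scale
R_half = R.copy()
R_half[:, 2] /= 2.0
predict_half = train_mf(R_half)      # trained on ratings / 2
u, i = R[0, 0], R[0, 1]
print(predict_raw(u, i), 2 * predict_half(u, i))   # compare on the original scale
```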
Stochastic gradient descent in matrix factorization, sensitive to label's scale?
CC BY-SA 3.0
null
2014-12-22T13:04:38.577
2015-05-23T23:24:59.583
2014-12-22T15:41:29.910
1131
1048
[ "gradient-descent" ]
3742
1
3753
null
0
547
I've been working in SAS for a few years but as my time as a student with a no-cost-to-me license comes to an end, I want to learn R. Is it possible to transpose a data set so that all the observations for a single ID are on the same line? (I have 2-8 observations per unique individual but they are currently arranged vertically rather than horizontally.) In SAS, I had been using PROC SQL and PROC TRANSPOSE depending on my analysis aims.

Example:

```
ID    date        timeframe  fruit_amt  veg_amt
4352  05/23/2013  before     0.25       0.75
5002  05/24/2014  after      0.06       0.25
4352  04/16/2014  after      0          0
4352  05/23/2013  after      0.06       0.25
5002  05/24/2014  before     0.75       0.25
```

Desired:

```
ID    B_fr05/23/2013  B_veg05/23/2013  A_fr05/23/2013  A_veg05/23/2013  B_fr05/24/2014  B_veg05/24/2014  (etc)
4352  0.25            0.75             0.06            0.25             .               .
5002  .               .                .               .                0.75            0.25
```
Data transposition code in R
CC BY-SA 3.0
null
2014-12-22T14:06:45.610
2014-12-23T20:36:06.047
2014-12-22T16:23:45.540
6491
6491
[ "data-mining", "r", "dataset", "beginner" ]
3743
1
3746
null
2
654
I recently read a lot about the n-armed bandit problem and its solution with various algorithms, for example for web-scale content optimization. Some discussions referred to 'contextual bandits', but I couldn't find a clear definition of what the word 'contextual' means here. Does anyone know what is meant by that, in contrast to 'usual' bandits?
What does 'contextual' mean in 'contextual bandits'?
CC BY-SA 3.0
null
2014-12-22T15:52:12.133
2014-12-22T18:44:18.983
null
null
3132
[ "machine-learning" ]
3745
1
null
null
1
1239
I was looking to learn about Bayesian theory in decision trees and how it avoids overfitting, but I couldn't find any tutorials for someone just starting out. Do you know of any resources to learn about it?
Bayesian Decision Tree
CC BY-SA 3.0
null
2014-12-22T18:33:35.363
2020-05-16T17:36:09.177
null
null
6523
[ "machine-learning" ]
3746
2
null
3743
2
null
A contextual bandit algorithm not only adapts to the user-click feedback as the algorithm progresses, it also utilizes pre-existing information about the user's (and similar users') browsing patterns to select which content to display. So, rather than starting with no prediction (cold start) of what the user will click, as in a traditional bandit and also traditional A/B testing, it takes other data into account (warm start) to help predict which content to display during the bandit test. See: [http://www.research.rutgers.edu/~lihong/pub/Li10Contextual.pdf](http://www.research.rutgers.edu/~lihong/pub/Li10Contextual.pdf)
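To make this more concrete, below is a compact NumPy sketch of LinUCB, the disjoint linear contextual bandit from the linked Li et al. paper (my own simplified rendering with made-up context vectors and a simulated click model; it is illustrative, not the paper's reference code):

```
import numpy as np

n_arms, d, alpha = 3, 4, 1.0             # arms (articles), context size, exploration
A = [np.eye(d) for _ in range(n_arms)]   # per-arm ridge-regression matrices
b = [np.zeros(d) for _ in range(n_arms)]
rng = np.random.RandomState(0)
true_theta = rng.rand(n_arms, d)         # hidden click preferences (simulation only)

for t in range(2000):
    x = rng.rand(d)                      # user/context features for this round
    # Upper-confidence-bound score for each arm.
    scores = []
    for a in range(n_arms):
        A_inv = np.linalg.inv(A[a])
        theta = A_inv @ b[a]
        scores.append(theta @ x + alpha * np.sqrt(x @ A_inv @ x))
    arm = int(np.argmax(scores))
    reward = float(rng.rand() < true_theta[arm] @ x / d)   # simulated click
    A[arm] += np.outer(x, x)             # update only the arm that was shown
    b[arm] += reward * x

print(np.round([np.linalg.solve(A[a], b[a]) for a in range(n_arms)], 2))
```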
null
CC BY-SA 3.0
null
2014-12-22T18:44:18.983
2014-12-22T18:44:18.983
null
null
3466
null
3747
2
null
3740
3
null
Regarding your first question... Do you anticipate the majority category to be similarly over-represented in real-world data as it is in your training data? If so, perhaps you could perform two-step classification: - Train a binary classifier (on all your training data) to predict membership (yes/no) in the majority class. - Train a multi-class classifier (on the rest of the training data) to predict membership in the remaining minority classes.
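A short scikit-learn sketch of that two-step idea (the library, the logistic-regression choice, and the synthetic stand-in for your tf-idf features are my own assumptions):

```
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the tf-idf features: class 0 is the 70% majority class.
X, y = make_classification(n_samples=5000, n_features=20, n_informative=10,
                           n_classes=5, weights=[0.7, 0.1, 0.08, 0.07, 0.05],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Step 1: binary "majority class vs. rest" model, trained on everything.
step1 = LogisticRegression(max_iter=1000).fit(X_tr, y_tr == 0)

# Step 2: multi-class model trained only on the minority-class examples.
mask = y_tr != 0
step2 = LogisticRegression(max_iter=1000).fit(X_tr[mask], y_tr[mask])

# Prediction: if step 1 says "majority", stop; otherwise ask step 2.
is_majority = step1.predict(X_te)
pred = np.where(is_majority, 0, step2.predict(X_te))
print("accuracy:", (pred == y_te).mean())
```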
null
CC BY-SA 3.0
null
2014-12-22T19:12:53.943
2014-12-22T19:12:53.943
null
null
819
null
3748
2
null
2651
9
null
There's also Richard Socher's recent PhD dissertation on intersection of NLP and deep learning: [Recursive Deep Learning for Natural Language Processing and Computer Vision](http://nlp.stanford.edu/~socherr/thesis.pdf)
null
CC BY-SA 3.0
null
2014-12-22T19:14:26.313
2014-12-22T19:14:26.313
null
null
819
null
3749
2
null
3742
0
null
Try the `arrange(Data.frame.name, ID)` function from the `dplyr` package.
null
CC BY-SA 3.0
null
2014-12-22T23:55:11.037
2014-12-22T23:55:11.037
null
null
5224
null
3751
1
null
null
5
331
I would like to pick up the topic of deep learning. Should I begin with the broader topic of AI before working my way into deep learning?
Route to picking up Deep learning
CC BY-SA 3.0
null
2014-12-23T07:15:34.500
2015-05-24T10:11:15.927
2015-05-24T10:11:15.927
1367
6536
[ "beginner", "deep-learning" ]
3752
2
null
3745
1
null
Regarding being new to decision trees and wanting to get off the ground, I wrote [a tutorial on decision trees](http://thegrimmscientist.com/2014/10/23/tutorial-decision-trees/) that will help.

Regarding methods to avoid overfitting: the game for any model is to limit its complexity to what is reasonable given the data you have. Complexity in decision trees is manifested as adding new decision boundaries, so any limit on complexity is a limit on the decision boundaries the tree can draw. Two common ways to do this are to place constraints on when a new decision can be created (a minimum amount of data in a leaf, a significant increase in information, etc.) or, more simply, to limit the max depth of the tree.
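In scikit-learn terms, those two knobs look like this (an illustration of the constraints, with synthetic data; it is not taken from the tutorial itself):

```
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Unconstrained tree: grows until leaves are pure (prone to overfitting).
deep = DecisionTreeClassifier(random_state=0)
# Constrained tree: limit depth and require a minimum amount of data per leaf.
pruned = DecisionTreeClassifier(max_depth=4, min_samples_leaf=20, random_state=0)

for name, model in [("unconstrained", deep), ("constrained", pruned)]:
    print(name, cross_val_score(model, X, y, cv=5).mean().round(3))
```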
null
CC BY-SA 3.0
null
2014-12-23T17:10:10.777
2014-12-23T17:10:10.777
null
null
6391
null
3753
2
null
3742
0
null
You can use the `reshape2` package for this task. First, transform the data to the long format with `melt`:

```
library(reshape2)
dat_m <- melt(dat, measure.vars = c("fruit_amt", "veg_amt"))
```

where `dat` is the name of your data frame. Second, cast to the wide format:

```
dcast(dat_m, ID ~ timeframe + variable + date)
```

The result:

```
    ID after_fruit_amt_04/16/2014 after_fruit_amt_05/23/2013 after_fruit_amt_05/24/2014 after_veg_amt_04/16/2014
1 4352                          0                       0.06                         NA                        0
2 5002                         NA                         NA                       0.06                       NA
  after_veg_amt_05/23/2013 after_veg_amt_05/24/2014 before_fruit_amt_05/23/2013 before_fruit_amt_05/24/2014
1                     0.25                       NA                        0.25                          NA
2                       NA                     0.25                          NA                        0.75
  before_veg_amt_05/23/2013 before_veg_amt_05/24/2014
1                      0.75                        NA
2                        NA                      0.25
```
null
CC BY-SA 3.0
null
2014-12-23T18:11:24.117
2014-12-23T20:36:06.047
2014-12-23T20:36:06.047
106
106
null
3754
1
null
null
1
215
I am working on a project with two data sets: a time vs. speed data set (let's call it traffic), and a time vs. weather data set (called weather). I am looking to find a correlation between these two sets using Pig. However, the traffic data set has the time field `D/M/Y hr:min:sec`, and the weather data set has the time field `D/M/Y`. Because of this, I would like to average the speed per day and put it into a single D/M/Y value inside the traffic file. I then plan to use:

```
data = JOIN speed BY day, JOIN weather BY day with 'merge'
```

I will then find the correlation using (I am borrowing this code from elsewhere):

```
set = LOAD 'data.txt' AS (speed:double, weather:double)
rel = GROUP set ALL
cor = FOREACH rel GENERATE COR(set.speed, set.weather)
dump cor;
```

This is my first experience with Pig (I've never even used SQL), so I would like to know a few things:

1. How can I merge the rows of my traffic file (i.e. average D/M/Y hr:min:sec into D/M/Y)?
2. Is there a better way to find a correlation between the fields of different datasets?
3. Are the JOIN BY and COR() functions used appropriately in my code above?
Hadoop/Pig Aggregate Data
CC BY-SA 3.0
null
2014-12-23T19:46:57.267
2014-12-23T19:46:57.267
null
null
2614
[ "apache-hadoop", "beginner", "correlation", "apache-pig" ]
3757
2
null
116
3
null
Apart from the fancier methods, you could try the Bayes formula

$$P(i \mid p_1 \ldots p_n) = \frac{P(p_1 \ldots p_n \mid i)\, P(i)}{\sum_k P(p_1 \ldots p_n \mid k)\, P(k)}$$

where $P(i \mid p_1 \ldots p_n)$ is the probability that a user belongs to age group $i$ given that they liked $p_1, \ldots, p_n$; $P(i)$ is the probability that a user belongs to age group $i$; and $P(p_1 \ldots p_n \mid i)$ is the probability that a user liked $p_1, \ldots, p_n$ given that they belong to age group $i$.

- You already have the estimates for $P(i)$ from your data: this is just the proportion of users in age group $i$.
- To estimate $P(p_1 \ldots p_n \mid i)$, for each age group $i$ estimate the probability (frequency) $p_{ij}$ of liking page $j$. To have $p_{ij}$ non-zero for all $j$, you can mix in the frequency for the whole population with a small weight.
- Then $\log P(p_1 \ldots p_n \mid i) = \sum_{j \in \{p_1, \ldots, p_n\}} \log p_{ij}$, the sum over all pages that the new user likes. This formula is approximately true assuming that a user likes the pages in his age group independently.
- Theoretically, you should also add $\log(1 - p_{ij})$ for all pages $j$ that he hasn't liked, but in practice you should find that the sum of $\log(1 - p_{ij})$ is irrelevantly small, so you won't need too much memory.

If you or someone else has tried this, please comment about the result.
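A small NumPy/pandas sketch of exactly this computation (the like matrix, the age-group labels and the smoothing weight are invented purely for illustration):

```
import numpy as np
import pandas as pd

# Rows = users, columns = pages, entries = 1 if the user liked the page.
likes = pd.DataFrame(np.random.RandomState(0).rand(200, 30) < 0.2).astype(int)
age_group = pd.Series(np.random.RandomState(1).choice(["13-18", "19-29", "30+"], 200))

prior = age_group.value_counts(normalize=True)   # P(i)
overall = likes.mean()                           # population like frequency per page
w = 0.1                                          # smoothing weight
# p_ij: like frequency per age group, mixed with the population frequency.
p = likes.groupby(age_group).mean().mul(1 - w).add(w * overall, axis=1)

def predict(liked_pages):
    """Posterior over age groups for a new user who liked `liked_pages`."""
    log_post = np.log(prior) + np.log(p[liked_pages]).sum(axis=1)
    post = np.exp(log_post - log_post.max())     # stabilise before normalising
    return post / post.sum()

print(predict([0, 3, 7]))
```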
null
CC BY-SA 4.0
null
2014-12-24T13:46:47.367
2021-02-09T04:31:08.427
2021-02-09T04:31:08.427
29169
6550
null
3758
2
null
3741
2
null
The larger your target scores, the larger the latent variables should be (well, it's not only the magnitude that matters, but also the variance, but it still applies to your case). There's no problem with larger coefficients of the latent vectors unless you use regularization (and you likely do). With regularization, your optimal solution will tend towards smaller values, and it will sometimes prefer to sacrifice some accuracy for a lower regularization penalty.

Gradient descent doesn't suffer from a problem of large coefficients (unless you run into some sort of numerical issues): if the learning rate is tuned properly (there is lots of material on it; google it), it should arrive at equivalent parameters. Otherwise nobody guarantees you convergence :-)

The common rule of thumb when doing regression (and your instance of matrix factorization is a kind of regression) is to standardize your data: make it have zero mean and unit variance.
null
CC BY-SA 3.0
null
2014-12-24T20:02:42.213
2014-12-24T20:02:42.213
null
null
811
null
3759
1
null
null
1
56
I am new to data science/ machine learning world. I know that in Statistics we assume that a certain event/ process has some particular distribution and the samples of that random process are part of some sampling distribution. The findings from the data could then be generalized by using confidence intervals and significance levels. How do we generalize our findings once we "learn" the patterns in the data set? What is the alternative to confidence levels here?
How are findings learnt from a data set generalized, compared to statistics?
CC BY-SA 3.0
null
2014-12-25T05:24:55.590
2014-12-29T09:21:26.780
null
null
4933
[ "machine-learning", "statistics" ]
3760
1
null
null
1
506
I'm trying to determine the best number of hidden neurons for my MATLAB neural network. I was thinking of adopting the following strategy:

- Loop over a range of hidden-neuron counts, e.g. 1 to 40.
- For each NN with a fixed number of hidden neurons, perform a certain number of trainings (e.g. 40), limiting the number of epochs for time reasons (I'm doing this because the network seems to be hard to train; the MSE after some epochs is very high).
- Store the MSE obtained with all the nets with different numbers of hidden neurons.
- Perform the previous procedure more than once, e.g. 4 times, to account for the initial random weights, and take the average of the MSEs.
- Select and perform the "real" training on a NN with the number of hidden neurons that minimizes the previously calculated MSE.

The MSE I'm referring to is the validation MSE: my samples are split into training, testing and validation (70%, 15% and 15% respectively) to avoid overfitting.

Other information related to my problem:

- fitting problem
- 9 input neurons
- 2 output neurons
- 1630 samples

Could this strategy work? Is there any better criterion to adopt? Thank you.

Edit: Test done. So do the results suggest I should adopt 12 neurons (low validation MSE and a number of neurons lower than 2 * numberOfInputNeurons)? But 18 could also be good...

![enter image description here](https://i.stack.imgur.com/qxOSV.png)
Neural Network Hidden Neuron Selection Strategy
CC BY-SA 3.0
null
2014-12-25T10:13:16.273
2022-04-27T22:30:55.150
2014-12-26T20:37:37.720
6559
6559
[ "machine-learning", "neural-network" ]
3761
2
null
3760
2
null
A rule-of-thumb approach is:

- Start with a number of hidden neurons equal to (or a little higher than) the number of features. In your case that would be 9; my suggestion is to start with 9 * 2 = 18 to cover a wider range of possibilities.
- Be sure your test and validation sets are selected "fairly": a random selection, varying the seed a number of times to test different configurations, would be OK.

In general, a number of hidden neurons equal to the number of features will tend to make each hidden neuron try to learn the special thing that each feature is adding, so you could say it is "learning each feature" separately. Although this sounds good, it may tend to overfit. Since your number of inputs and your dataset size are small, it's OK to start with a hidden layer size of double that (18) and work your way down. When the training error and test error stabilize with a difference lower than some threshold, you may have found a better-generalizing model.

Neural networks are very good at finding local optima by exploring a solution deeply from a starting point. However, the starting point is also very important. If you are not getting good generalization, you might try to find good initial starting points with hybrid neural network methods. A common one, for example, is using genetic algorithms to find an initial combination of weights and then starting the neural network from that point, so that your search space is better covered (in case your problem actually needs that).

As with every problem in machine learning, it is very important to clean your data before feeding it to the NN. Try to be very thorough, in order to avoid making the NN learn things you already know. For example, if you know how two features are correlated, improve the input data by making this correlation explicit, so less workload is given to the NN (which might otherwise actually get you in trouble).
null
CC BY-SA 3.0
null
2014-12-25T15:40:43.283
2014-12-25T15:40:43.283
null
null
5143
null
3762
2
null
3760
0
null
## Top level

The rule is to choose the simplest network that can perform satisfactorily. See [this publication](https://ieeexplore.ieee.org/document/5726612) and its [PDF](https://web.archive.org/web/20181025152731/https://www.pdx.edu/sites/www.pdx.edu.sysc/files/EvaluatingANNs.pdf).

## The methodology

So run your proposed test (training many networks at each number of hidden nodes) and plot the results. At the minimum number of nodes, you'll see the worst performance. As you increase the number of nodes, you'll see an increase in performance (reduction of error). At some point N, the performance seems to hit an upper limit, and increasing the number of nodes beyond this stops giving significant performance gains. Further increases may even start to hurt performance a little (as training gets more difficult). That point N is the number of nodes you want.

## How it worked for me

The first time I used this methodology, it produced a beautiful, almost-sigmoid-looking curve with a very clear number of nodes needed to achieve good results. I hope this works as well for you as it did for me.
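A sketch of that sweep with scikit-learn's MLPRegressor (the library, the synthetic stand-in for the 9-input / 2-output problem, and the exact ranges are my own choices; the original poster is working in MATLAB):

```
import numpy as np
from sklearn.datasets import make_regression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

# Stand-in for the 9-input / 2-output fitting problem with 1630 samples.
X, y = make_regression(n_samples=1630, n_features=9, n_targets=2,
                       noise=5.0, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

for hidden in range(2, 41, 4):
    errs = []
    for seed in range(4):                      # average over random initialisations
        net = MLPRegressor(hidden_layer_sizes=(hidden,), max_iter=500,
                           random_state=seed)
        net.fit(X_tr, y_tr)
        errs.append(mean_squared_error(y_val, net.predict(X_val)))
    print(hidden, round(float(np.mean(errs)), 1))   # plot these to find the plateau
```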
null
CC BY-SA 4.0
null
2014-12-25T18:10:13.447
2022-04-27T22:30:55.150
2022-04-27T22:30:55.150
25180
6391
null
3763
1
null
null
4
581
In [this](http://www.wired.com/2014/01/how-to-hack-okcupid/all/) article, Chris McKinlay says he used AdaBoost to choose the proper "importances" of questions he answered on okcupid. If you haven't read and don't want to read the article, or are unfamiliar with okcupid and the question system, here's the data and problem he had: The goal is to "match" as highly as possible with as many users as possible, each of whom may have answered an arbitrary number of questions. These questions may have between 2 and 4 answers each, and for the sake of simplicity, let's pretend that the formula for a match% $\ M $ between you and another user is given by $\ M = Q_a/Q_c $ Where $\ Q_c $ is the number of questions you and the other user have in common, and $\ Q_a $ is the number of questions you both answered with the same value. The real formula is slightly more complex, but the approach would be the same regarding "picking" a correct answer (he actually used boosting to find the ideal "importance" to place on a given question, rather than the right answer). In any case, the point is you want to pick a certain value for each question, such that you maximize your match% with as many users as possible - something you might quantify by the sum of $\ M $ over all users. Now I've watched the MIT course on AI up to and including the lecture on boosting, but I don't understand how you would apply it to a problem like this. Honestly I don't even know where to begin with choosing rules for the weak learners. I don't have any "rules" about what values to choose for each question (if the user is under 5'5, choose A, etc) - I'm just trying to fit the data I have. Is this not the way boosting is supposed to be used? Is there likely some other optimization left out of how he figured this out?
How to apply AdaBoost to more "complex" (non-binary) classifications/data fitting?
CC BY-SA 3.0
null
2014-12-26T06:53:29.670
2015-05-18T00:09:04.393
null
null
6568
[ "machine-learning", "bigdata", "optimization" ]
3764
2
null
3738
20
null
After reading your question, I became curious about the topic of time series clustering and dynamic time warping (DTW). So, I have performed a limited search and came up with basic understanding (for me) and the following set of IMHO relevant references (for you). I hope that you'll find this useful, but keep in mind that I have intentionally skipped research papers, as I was more interested in practical aspects of the topic.

Resources:

- UCR Time Series Classification/Clustering: main page, software page and corresponding paper
- Time Series Classification and Clustering with Python: a blog post and an ipython notebook
- Capital Bikeshare: Time Series Clustering: another blog post
- Dynamic Time Warping using rpy and Python: another blog post
- Mining Time-series with Trillions of Points: Dynamic Time Warping at Scale: another blog post
- Time Series Analysis and Mining in R (to add R to the mix): yet another blog post
- And, finally, two tools implementing/supporting DTW, to top it off: R package and Python module
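To complement the reading list, here is a self-contained sketch of the basic pipeline (my own toy implementation, not taken from any of the resources above): a plain dynamic-programming DTW distance, a pairwise distance matrix, and hierarchical clustering on top of it.

```
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

def dtw(a, b):
    """Classic O(len(a)*len(b)) dynamic time warping distance."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Toy hourly meter profiles: two "office hours" shapes and two "flat" shapes.
t = np.arange(24)
series = [
    1 + 4 * ((t >= 9) & (t <= 17)),
    1 + 4 * ((t >= 8) & (t <= 16)),        # same shape, shifted by an hour
    np.full(24, 3.0),
    np.full(24, 3.2),
]

n = len(series)
dist = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        dist[i, j] = dist[j, i] = dtw(series[i], series[j])

labels = fcluster(linkage(squareform(dist), method="average"),
                  t=2, criterion="maxclust")
print(labels)   # the two office-hour meters should share a cluster
```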
null
CC BY-SA 4.0
null
2014-12-26T10:01:46.500
2020-09-03T13:08:20.857
2020-09-03T13:08:20.857
86627
2452
null
3766
1
null
null
2
1232
If I execute the following code I have no problem:

```
require(foreign)
require(nnet)
require(ggplot2)
require(reshape2)

ml <- read.dta("https://stats.idre.ucla.edu/stat/data/hsbdemo.dta")
ml$prog2 <- relevel(ml$prog, ref = "academic")
test <- multinom(prog2 ~ ses + write, data = ml)
predict(test, newdata = dses, "probs")
```

but if I try:

```
require(caret)
ml <- read.dta("http://www.ats.ucla.edu/stat/data/hsbdemo.dta")
ml$prog2 <- relevel(ml$prog, ref = "academic")
test <- train(prog2 ~ ses + write, method = "multinom", data = ml)
predict(test$finalModel, newdata = dses, "probs")
```

it returns `Error in eval(expr, envir, enclos) : object 'sesmiddle' not found`, why?
predict with Multinomial Logistic Regression
CC BY-SA 4.0
null
2014-12-26T16:25:27.373
2018-09-08T22:06:18.120
2018-09-08T22:06:18.120
24968
6572
[ "machine-learning", "r", "logistic-regression", "predictive-modeling" ]
3767
2
null
3766
2
null
That's not the error I get, and I'm thinking that you left out some code. I get:

```
predict(test$finalModel, newdata = dses, "probs")
Error in as.data.frame(newdata) : object 'dses' not found
```

I don't know why I see this so much, but you should avoid using the `finalModel` object for prediction. `train` is doing some things that the model from `multinom` isn't aware of (such as expanding dummy variables). Try using

```
predict(test, newdata = dses, "probs")
```

and never use `predict(test$finalModel)` again... Max
null
CC BY-SA 3.0
null
2014-12-26T17:26:02.033
2014-12-26T17:26:02.033
null
null
810
null
3770
1
3801
null
12
12599
Google Trends returns weekly data so I have to find a way to merge them with my daily/monthly data. What I have done so far is to break each series into daily data, for example, from:

```
2013-03-03 - 2013-03-09     37
```

to:

```
2013-03-03  37
2013-03-04  37
2013-03-05  37
2013-03-06  37
2013-03-07  37
2013-03-08  37
2013-03-09  37
```

But this is adding a lot of complexity to my problem. I was trying to predict Google searches from the last 6 months of values, i.e. 6 values in monthly data. Daily data would imply working on 180 past values. (I have 10 years of data, so 120 points in monthly data / 500+ in weekly data / 3500+ in daily data.) The other approach would be to "merge" daily data into weekly/monthly data. But some questions arise from this process. Some data can be aggregated because their sum represents something. Rainfall, for example: the amount of rain in a given week will be the sum of the amounts for each day composing the week. In my case I am dealing with prices, financial rates and other things. For the prices it is common in my field to take the volume exchanged into account, so the weekly data would be a weighted average. For financial rates it is a bit more complex, as some formulas are involved to build weekly rates from daily rates. For the other things I don't know the underlying properties. I think those properties are important to avoid meaningless indicators (an average of financial rates would be nonsense, for example). So three questions: For known and unknown properties, how should I proceed to go from daily to weekly/monthly data ? I feel like breaking weekly/monthly data into daily data like I've done is somewhat wrong, because I am introducing quantities that have no sense in real life. So, almost the same question: For known and unknown properties, how should I proceed to go from weekly/monthly to daily data ? Last but not least: when given two time series with different time steps, what is better: Using the Lowest or the biggest time step ? I think this is a compromise between the number of data points and the complexity of the model, but I can't see any strong argument to choose between those options. Edit: if you know a tool (in R, Python, or even Excel) to do it easily, it would be very appreciated.
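For reference, the "break each week into identical daily values" step described above, and the reverse aggregation, can both be expressed with pandas resampling; the series values, dates and choice of aggregation function below are made up for illustration and the right aggregation still depends on the variable, as discussed.

```python
import pandas as pd

# Hypothetical weekly Google Trends values, indexed by the first day of each week.
weekly = pd.Series(
    [37, 42, 40],
    index=pd.to_datetime(["2013-03-03", "2013-03-10", "2013-03-17"]),
    name="trend",
)

# Upsample to daily frequency, repeating each weekly value over the following
# days (note the final value is not extended without an explicit end date).
daily = weekly.resample("D").ffill()

# Downsampling (daily -> weekly/monthly) is an aggregation whose function
# depends on the variable: sum for rainfall, (weighted) mean for prices, etc.
weekly_again = daily.resample("W").mean()
monthly = daily.resample("M").mean()
print(daily.head(10))
```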
How to merge monthly, daily and weekly data?
CC BY-SA 3.0
null
2014-12-28T11:29:14.240
2015-01-06T01:06:26.827
2014-12-28T11:50:12.840
303
303
[ "time-series" ]
3771
1
4872
null
1
179
I am trying to predict client behaviour from market rates. The value of the products depends on the current rate, but this is not enough. The behaviour of the clients also depends on their awareness, which depends on the evolution of rates. I've added this to the model by using the past 6 months of rates as features in a polynomial regression. In fact, media coverage of rates mostly depends on rate variations, and I wanted to add that to my model. The idea would be to add a derivative/variation of the rate as a feature. But I anticipate a problem: for example, with only two months, my variation will be of the form $x_n - x_{n-1}$, which is a simple linear combination of the current and past rates. So for a degree-1 polynomial regression I will have: $$ x_{n+1} = a \cdot x_{n} + b \cdot x_{n-1} + c \cdot (x_{n} - x_{n-1})$$ instead of: $$ x_{n+1} = a_0 \cdot x_{n} + b_0 \cdot x_{n-1}$$ which is strictly equivalent, with $a + c = a_0$ and $b - c = b_0$. A higher polynomial degree gives a more or less equivalent result. I am thinking about a way to include derivative information, but it seems not possible. So I am wondering whether all the information is already included in my curve. Is this a general principle: all information is somehow directly contained in the data, and modifications of features only result in a higher-order objective function?
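The redundancy argued above can be checked numerically: once the lagged levels are in the design matrix, a first difference adds no new column. A tiny sketch with arbitrary simulated numbers:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=100)             # a made-up rate series

# Design matrix with the current level, the previous level, and their difference.
X = np.column_stack([x[1:], x[:-1], x[1:] - x[:-1]])

# The third column is an exact linear combination of the first two, so the
# matrix has rank 2, not 3: a linear model (or a full polynomial expansion of
# these same features) gains nothing from the extra "variation" column.
print(np.linalg.matrix_rank(X))      # -> 2
```

A genuinely new signal would have to be a non-linear function of the past values (for instance an absolute change or a rolling volatility) or come from outside the series (for instance an explicit media-coverage variable), though whether that helps is an empirical question.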
Time series: variations as a feature
CC BY-SA 3.0
null
2014-12-28T14:42:52.403
2015-01-14T01:08:04.997
2015-01-09T15:28:36.703
303
303
[ "time-series", "feature-selection", "predictive-modeling", "optimization" ]
3772
1
3774
null
5
1348
I am trying to predict a time series from another one. My approach is based on a moving window: I predict the output value of the series from the following features: the previous value of the target series and the 6 past values of the source series. Is it useful to add the previous value of the target series? I feel like I don't use all the information contained in the curve to predict future values. But I don't see how it would be possible to use all previous data to predict a value (for a start, the number of features would keep growing through time...). What are the caveats of a 6-month time-window approach? Is there any paper about different methods of feature selection for time series?
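For concreteness, the moving-window setup described above (6 past values of the source series plus the previous value of the target) can be built as below; the series here are simulated placeholders, and the helper name is made up.

```python
import numpy as np

def make_window_features(source, target, n_lags=6):
    """Build (X, y) for a sliding-window regression: each row holds the
    previous target value and the last n_lags source values, and the
    label is the current target value."""
    X, y = [], []
    for t in range(n_lags, len(target)):
        row = [target[t - 1]] + list(source[t - n_lags:t])
        X.append(row)
        y.append(target[t])
    return np.array(X), np.array(y)

rng = np.random.default_rng(1)
source = rng.normal(size=48)                      # e.g. 4 years of monthly rates
target = 0.5 * source + rng.normal(scale=0.1, size=48)

X, y = make_window_features(source, target)
print(X.shape, y.shape)                           # (42, 7) (42,)
```

Any regression model can then be fitted on (X, y); the window length controls how much history each prediction sees, which is exactly where the seasonality caveat discussed in the answers comes in.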
Time series prediction
CC BY-SA 3.0
null
2014-12-28T14:58:45.320
2014-12-29T06:53:32.200
null
null
303
[ "time-series", "feature-selection" ]
3773
2
null
3772
3
null
I'm fairly new to this myself, but have spent a lot of time recently learning about time series and hope that I can help fellow learners. If I had the reputation to comment I'd ask you a few things first, but I can't. I'll happily do further work and edit this response if you respond or make edits to your question. With those caveats out of the way:

# Is it useful to use the previous value as a feature?

One of the first things I would say is that the correct aspects to be looking at in your data very much depend on the nature of the data, as well as what you're trying to do with it:

- It sounds like you have monthly values, but it's not clear how far into the future you're wanting to predict, or how much historic data you have access to.
- We also don't know what these two series represent, or why one time series is being used to predict the other - and without that, I don't think anyone will be able to tell you whether the previous value of the series to be predicted is valuable information or not.

# Any caveats to using a 6 month time window?

One obvious caveat to only using the last 6 months is that if there's any seasonality over the year-long period then you're going to miss it.

- If you're not sure: if you have multiple years of information, try plotting the series you want to predict over multiple years. You may well be able to see whether the series generally increases or decreases at certain times of year. If you can share this plot here, it might help people answer your questions in more depth.

As far as caveats about this time-window approach go, I'm not too clear from your post what algorithm you're using to predict values. More information on that would be helpful; it's possible that rather than questioning what features to select, you should be questioning what methodology to use for forecasting.

# Helpful further reading?

Once you've provided more information I'll be happy to tackle your last question on suitable reading if I'm able to. For now, I will say that there's a lot of information available, but a lot of it is academic in nature. Quite frequently papers aren't easy to digest, seem to contradict one another, or are only relevant to specific situations. This is a rapidly growing and changing field, so it's sometimes difficult to find a clear best practice or consensus opinion. That said, it might be worth looking at some of the free, online courses available to see if any would help you understand the area you're interested in:

- Coursera "data science" related courses
null
CC BY-SA 3.0
null
2014-12-28T20:10:11.300
2014-12-28T20:10:11.300
null
null
5246
null
3774
2
null
3772
4
null
Let me give you a few simple approaches in time series analysis. The first approach consists in using previous values of your time series $Y_{t}$, as in $Y_{t} = \phi_{1}Y_{t-1} + ... + \phi_{n}Y_{t-n}$. In case you don't know, these models are called autoregressive (AR) models. This answers your first question: of course it is useful to include the previous value of your time series. There is a whole set of models based on that idea. The second approach is taking a window and extracting some features to describe the time series at each point in time. Then you use a conventional machine learning technique, as typically done; predicting future values can be cast as a regression (or classification) problem on those features. This technique has the advantage of dramatically reducing the number of features, although you usually lose characteristics associated with time. This addresses your second concern. Another model that could be helpful in your case is the vector autoregressive model (VAR) (using [Wikipedia](http://en.wikipedia.org/wiki/Vector_autoregression)'s notation):

$$\left( \begin{array}{ccc} y_{1,t} \\ y_{2,t} \end{array}\right) = \left( \begin{array}{ccc}c_{1} \\ c_{2}\end{array}\right) + \left( \begin{array}{ccc}A_{1,1} & A_{1,2} \\ A_{2,1} & A_{2,2}\end{array}\right)\left( \begin{array}{ccc} y_{1,t-1} \\ y_{2,t-1} \end{array}\right) + \left( \begin{array}{ccc} e_{1,t} \\ e_{2,t} \end{array}\right)$$

Here you can see that $y_{1,t}$ has a contribution from its previous value $y_{1,t-1}$ but also includes the value of the other series $y_{2,t-1}$ in a linear combination. As usual, the purpose is to find the elements of $A_{i,j}$ that minimize some measure of error between observed values and estimated values. A general suggestion: the first thing you need to do is to test the autocorrelation of your first series in order to confirm that an autoregressive approach is suitable, and then test the cross-correlation between both series to support the idea that using the second series to improve your predictions is appropriate.
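As a rough illustration of the last suggestion, and of the fact that an AR model can be fitted by plain least squares without any dedicated package, here is a small numpy sketch on simulated data (the AR(1) coefficient 0.8 is arbitrary); in practice you would use proper time series tooling and check stationarity first.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulate an AR(1) process: y_t = 0.8 * y_{t-1} + noise.
n = 500
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.8 * y[t - 1] + rng.normal()

# 1) Check the lag-1 autocorrelation to justify an autoregressive model.
lag1_corr = np.corrcoef(y[1:], y[:-1])[0, 1]
print("lag-1 autocorrelation:", round(lag1_corr, 3))

# 2) Fit an AR(p) model by least squares: regress y_t on its p past values.
p = 2
Y = y[p:]
X = np.column_stack([y[p - k: n - k] for k in range(1, p + 1)])
phi, *_ = np.linalg.lstsq(X, Y, rcond=None)
print("estimated AR coefficients:", np.round(phi, 3))   # close to [0.8, 0.0]
```

The same `np.corrcoef` call on one series against lagged values of the other series gives the cross-correlation check mentioned above.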
null
CC BY-SA 3.0
null
2014-12-29T06:53:32.200
2014-12-29T06:53:32.200
null
null
4621
null
3775
2
null
2507
2
null
This does not seem to be a Data Science problem. However, there are very nice tools to do exactly that; check out logstash, Flume and Fluentd. Actually, if you want to be able to filter in a fast and "smart" way, check out Kibana from the guys at ElasticSearch ([http://www.elasticsearch.org/overview/kibana](http://www.elasticsearch.org/overview/kibana)). Those tools are enough to solve your problem in a very efficient way.
null
CC BY-SA 3.0
null
2014-12-29T08:38:44.673
2014-12-29T08:38:44.673
null
null
5143
null
3776
2
null
3759
1
null
Bias-variance decomposition is one way, and the VC dimension/bound is another... Both of these are metrics you can use to get a feeling for how confident you should be that your training results will generalize out-of-sample. The VC dimension focuses on the outcome of this particular run of the learning algorithm, while bias-variance focuses on the expected outcome of the algorithm itself. Pick your poison - I sincerely hope that this was helpful.
null
CC BY-SA 3.0
null
2014-12-29T09:03:08.853
2014-12-29T09:08:32.210
2014-12-29T09:08:32.210
6597
6597
null
3777
2
null
3759
0
null
Essentially, machine learning uses non-parametric methods. The assumption is that you have enough data and (computation) time. You identify the best model by cross-validation (rather than, e.g., assessing the significance of coefficients), and estimate the prediction error by using a test set. Confidence intervals can also be generated by bootstrapping.
null
CC BY-SA 3.0
null
2014-12-29T09:21:26.780
2014-12-29T09:21:26.780
null
null
1256
null
3779
1
null
null
2
227
Let's assume I'm building a content recommendation engine for online content. I have web log data which I can import into a graph, containing a user ID, the page they viewed, and a timestamp for when they viewed it. Essentially, it's a history of which pages each user has viewed. I've used Neo4J and Cypher to write a simple traversal algorithm. For each page (node) I want to build recommendations for, I find which pages are most popular amongst other users who have also visited this page. That seems to give decent results. But, I'd like to explore alternatives to see which method gives the most relevant recommendations. In addition to my simple traversal, I'm curious if there are graph-level properties I can utilize to build another set of recommendations with this data set. I've looked at [SNAP](http://snap.stanford.edu/), it has a good library for algorithms like Page Rank, Clauset-Newman-Moore community detection, Girvan-Newman community detection, betweenness centrality, K core, and so on. There are many algorithms to choose from. Which algorithms have you had success with? Which would you consider trying?
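As a point of comparison for the graph-level algorithms listed above, the co-visitation traversal described in the question can also be expressed outside the graph database in a few lines; the log structure and page names below are made up.

```python
from collections import Counter, defaultdict

# Hypothetical web log: (user_id, page) pairs.
views = [
    ("u1", "a"), ("u1", "b"), ("u1", "c"),
    ("u2", "a"), ("u2", "c"),
    ("u3", "b"), ("u3", "c"), ("u3", "d"),
]

pages_by_user = defaultdict(set)
for user, page in views:
    pages_by_user[user].add(page)

def recommend(page, k=3):
    """Pages most popular among users who also visited `page`."""
    counts = Counter()
    for visited in pages_by_user.values():
        if page in visited:
            counts.update(visited - {page})
    return counts.most_common(k)

print(recommend("a"))   # -> [('c', 2), ('b', 1)]
```

Graph-level properties such as PageRank, betweenness centrality or community membership would add structural signals (for example, recommending pages from the same behavioural community) that a pure co-visit count misses; which of them actually improves relevance is something only an offline or A/B evaluation can tell.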
Which Graph Properties are Useful for Predictive Analytics?
CC BY-SA 3.0
null
2014-12-29T17:27:59.763
2015-05-11T21:06:39.957
null
null
3466
[ "recommender-system", "predictive-modeling", "graphs" ]
3780
1
null
null
1
48
Recently, I studied a paper called "What Does Your Chair Know About Your Stress Level?". It can be downloaded from the link below. > http://www.barnrich.ch/wiki/data/media/pub/2010_what_does_your_chair_know_about_your_stress_level.pdf On page 210, Fig. 5 (picture 5) has the caption: > Spectra of the norm of the CoP vector during the experiment (stages 3–7) for the same subject used in Fig. 4. What did they do to transform Fig. 4 into Fig. 5? And what do the x-axis and y-axis mean in Fig. 5? Fig4 ![Fig4](https://i.stack.imgur.com/Iq7qc.png) Fig5 ![Fig5](https://i.stack.imgur.com/7HPxu.png)
How to transform one graph to a spectrum?
CC BY-SA 3.0
null
2014-12-30T06:43:24.340
2014-12-30T06:43:24.340
null
null
6602
[ "machine-learning" ]
3781
1
null
null
4
5115
I have a dataset of xyz coordinates with a date component in a pandas dataframe ex:

- date1: $[x_1,y_1,z_1]$,
- date2: $[x_2,y_2,z_2]$,
- date3: $[x_3,y_3,z_3]$, ..

I would like to classify a sample of object positions over the period of a week (using indexes to re-map the classification label back to the date), like this:

- Week 1: $[x_1,y_1,z_1], [x_2, y_2, z_2], [x_3,y_3,z_3], [x_4,y_4,z_4], [x_5,y_5,z_5], [x_6,y_6,z_6], [x_7,y_7,z_7]$,
- Week 2: $[x_8,y_8,z_8],[x_9,y_9,z_9],[x_{10},y_{10},z_{10}],[x_{11},y_{11},z_{11}],[x_{12},y_{12},z_{12}],[x_{13},y_{13},z_{13}],[x_{14},y_{14},z_{14}]$,

When I try to run KMeans it returns

```
k_means = KMeans(n_clusters=cclasses)
k_means.fit(process_set.hpc)
date_classes = k_means.labels_

ValueError: Found array with dim 3. Expected <= 2
```

Questions:

- Do I have to run it through Principal Component Analysis (PCA) first? if so, how do I maintain date mapping to the classification created?
- Are there any other methods I could use?
- Am I doing everything completely backwards and should consider a different approach, any thoughts?

Thanks!
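For what it's worth, the `dim 3` error suggests the weeks are being passed as an array of shape `(n_weeks, 7, 3)`, while scikit-learn estimators expect a 2-D `(n_samples, n_features)` matrix. One common workaround (not necessarily the best representation of the trajectories) is to flatten each week into a 21-dimensional row; the data below is random and the variable names are placeholders.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical positions: 10 weeks, 7 days each, xyz per day.
rng = np.random.default_rng(0)
weeks = rng.normal(size=(10, 7, 3))

# Flatten each week into a single 21-dimensional feature vector.
X = weeks.reshape(len(weeks), -1)          # shape (10, 21)

k_means = KMeans(n_clusters=3, n_init=10, random_state=0)
labels = k_means.fit_predict(X)            # one label per week
print(labels)
```

PCA is optional here: it would only reduce the 21 columns further, and the week-to-label (and hence date) mapping is preserved in any case because the row order never changes.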
Scikit Learn: KMeans Clustering 3D data over a time period (dimensionality reduction?)
CC BY-SA 3.0
null
2014-12-30T10:33:08.240
2015-04-15T15:10:56.650
2015-01-15T13:41:08.957
1367
6606
[ "python", "scikit-learn", "k-means", "dimensionality-reduction" ]
3783
1
null
null
0
161
Imagine modeling the "input(plaintext) - output(ciphertext)" pairs of an encryption algorithm as a data science problem. Very informally, the strength of an encryption scheme is measured by the randomness (unpredictability, or entropy) of the output. This is counter-intuitive to classic regression problems, which are frequently used for prediction. Say, informally, the strength of an encryption scheme is determined by the number of such input-output pairs needed beyond which it becomes predictable. How do we model this as data science problem? Given all pairs of two different encryption schemes, can we determine which is stronger just by using the input-output pairs of both the schemes? Is there any other way apart from regression to solve this problem?
How to model this "un predicatability" problem?
CC BY-SA 3.0
null
2014-12-30T12:55:06.247
2015-10-21T18:49:41.357
2015-10-21T02:42:36.147
13413
4695
[ "machine-learning", "regression" ]
3784
1
null
null
2
58
While comparing two different algorithms for feature selection I stumbled upon the following question: For a given dataset with a discrete class variable we want to train a naive Bayes classifier. We decide to conduct feature selection during preprocessing, using naive Bayes in a wrapper approach. Does this method of feature selection consider the size of the used feature subsets? ![Naive Bayes Classification](https://i.stack.imgur.com/2Vh2Q.png) When considering how NB classifies a given instance, the size of the feature subset being used for classification only influences the number of factors in the product of the conditional probabilities, but that does not make a difference, or does it? It'd be great if someone could offer a solid explanation, since for me it's more of a gut feeling at the moment.
Does a NB wrapper consider feature subset size?
CC BY-SA 3.0
null
2014-12-30T14:23:58.323
2015-04-30T01:37:30.127
null
null
6608
[ "classification", "feature-selection" ]
3785
2
null
3740
3
null
The correct term for what you're describing here is 'class imbalance' or 'class imbalance problem'. It'd be great if you could include that in the title to make it more meaningful. Concerning your first question: Have you plotted a confusion matrix of the resulting classifications? Is the reason why the accuracy is not satisfying really that instances are wrongly classified as the most common class? Given your context of application it seems that you could use a certain degree of oversampling. To what extent this can be applied should depend on the frequency of each underrepresented class. If there is a reasonably high variation in the value distributions on the underrepresented class instances then one could argue that this should be considered when applying the oversampling. Also, the approach Charlie suggested in his answer could be considered, given that the instances of the underrepresented classes would form a dataset that is suitable for classification. Edit: I haven't used naive Bayes for text classification yet, so I'm not too sure what your attributes look like exactly. Do you just use the frequency of the terms that scored best with tfidf? More generally, do you have discrete or continuously valued attributes? If the latter is the case you should consider using some kind of discretisation. Regarding your second question: Are you splitting the dataset in any way? Normally, given that the classifier has been trained on the same data, the outcome for the same instance should also be the same.
null
CC BY-SA 3.0
null
2014-12-30T19:45:57.283
2014-12-30T19:56:36.333
2014-12-30T19:56:36.333
6608
6608
null
3786
1
null
null
1
720
I have an Excel file containing a lot of columns. I want to create a graph database in Neo4J with each of these columns as a separate node, so that I can establish relationships between them and play around with Cypher queries to get to know my data quite well. I did write a Cypher query to make each column a node, but when I run the relationship query it shows me the name of one node and gives a random number for all the related nodes, despite changing the captions in the properties section. Sample Create statement:

```
LOAD CSV WITH HEADERS FROM "File Location" AS row
CREATE (:NodeName {DATE_OCCURED: row.Date});
```

Query to visualize the relationship between nodes:

```
MATCH (a)-[:`REL NAME`]->(b) RETURN a,b LIMIT 25;
```

This gives me the values of node "a" and random numbers for all the "b" nodes. I don't know where I am going wrong. I thought it had something to do with the way the data has been set up in my file, so I created a file for each column of my main file and wrote the create statements to get the nodes. I still don't get the actual values of the related nodes. ![Sample output for the relationship query](https://i.stack.imgur.com/xxKVi.png)
How to create separate nodes for each column in a spreadsheet using Neo4J?
CC BY-SA 3.0
null
2014-12-30T21:45:13.760
2014-12-31T14:48:00.310
2014-12-31T14:48:00.310
5043
5043
[ "neo4j", "csv" ]
3787
2
null
3784
1
null
# Short Version

The difference is in the trade-off between the complexity of the model and how hard it will be to train. The more states your variable can take, the more data you'll need to train it. It sounds like you're reducing the feature space through some front-end processing.

# Example

### Context:

Say you're using NBC to model whether you go outside given the high temperature that day. Let's say temperature is given to you as an int in Celsius which just happened to range between 0 and 40 degrees. Over one year of running the experiment - recording the temperature and whether you went outside - you'll have 365 data points.

### Internal Representation:

The internal structure the NBC uses will be a matrix with 82 values (41 values for your one input variable * 2 because you have two states in your output class). That means each temperature value will have an average of `365/41 = 8.9ish` samples. In practice, you'll probably see a few bins with lots more samples than this and many bins with 0 samples.

### A Pitfall

Say you saw 8 cases at a temperature of 5 C (all of which you stayed inside) and nothing at a temp of 3 or 4 C. If, after the model is built, you 'ask it' what class a temp of 4 C should be in, it won't know. Intuitively, we'd say "stay inside", but the model won't. One way to fix this is to bin the temperature values 0,1,2,... into larger groups (i.e., class 0 for temp 0-3, class 1 for temp 4-7, etc). The extreme case of this would be two temperature states "high" and "low". The actual cut-off should depend more on the data observed, but one such scheme is converting temperatures within `0-20` to "low" and `21-40` to "high".

### Discussion

This front-end processing seems to be what you're talking about as the "wrapper" around the NBC. Our "high" and "low" binning scheme results in a much smaller model (2 input states and 2 output classes gives you a `2*2 = 4` cell table). This will be way easier to process and will take way less data to confidently train, at the expense of expressiveness. One example of a drawback of such a scheme is: Say the actual pattern is that you love spring and fall weather and only go outside when it's not too hot or not too cold. So your outside adventures are evenly split across the upper part of the "low" class and the lower part of the "high" class, giving you a really useless model. If this were the case, a 3-class bin scheme of "high", "medium", and "low" would be best.
null
CC BY-SA 3.0
null
2014-12-31T00:09:35.943
2014-12-31T00:09:35.943
null
null
6391
null
3788
1
null
null
2
550
What's the best / easiest way to create a graph from address data? For example, if I have 100 houses from all across a city is there any easy way to determine the shortest distance between two houses and all that good stuff? Would this require changing the data into coordinates and using GIS software or can I get away with using Python or R?
Creating a Graph from Address Data
CC BY-SA 3.0
null
2014-12-31T02:50:32.863
2016-01-28T12:41:51.127
null
null
6611
[ "python", "graphs" ]
3789
5
null
null
0
null
null
CC BY-SA 3.0
null
2014-12-31T15:34:17.397
2014-12-31T15:34:17.397
2014-12-31T15:34:17.397
-1
-1
null
3790
4
null
null
0
null
Statistics is a scientific approach to inductive inference and prediction based on probabilistic models of the data. By extension, it covers the design of experiments and surveys to gather data for this purpose.
null
CC BY-SA 3.0
null
2014-12-31T15:34:17.397
2017-06-18T08:05:52.913
2017-06-18T08:05:52.913
13526
6622
null
3791
2
null
3788
0
null
If you have latitude/longitude coordinate data, there should be no problem accomplishing this using great circle calculations which could indeed be accomplished in Python, R, or essentially any other language. Here is an article on several methods and calculations for this: [Calculate distance, bearing and more between Latitude/Longitude points](http://www.movable-type.co.uk/scripts/latlong.html) The main issue with street address data alone is of course the lack of contextual information regarding the physical layout of the streets. Given a sufficiently complete scale map of the relevant area, you could also calculate a distance. That said, the haversine formula discussed in the article above would have a greater accuracy unless the scale map mentioned was also plotted as the surface of a sphere.
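For illustration, the haversine great-circle distance mentioned above fits in a few lines of Python; the coordinates in the example call are arbitrary and would in practice come from geocoding the street addresses first.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points in kilometres."""
    r = 6371.0                              # mean Earth radius in km
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlmb = radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlmb / 2) ** 2
    return 2 * r * asin(sqrt(a))

# Two example coordinate pairs (e.g. two geocoded houses).
print(haversine_km(41.3151, -72.9162, 41.3083, -72.9279))
```

For street-network rather than straight-line distances you would still need a routing service or a road graph, as the other answer to this question illustrates.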
null
CC BY-SA 3.0
null
2014-12-31T16:18:21.237
2014-12-31T16:18:21.237
null
null
2932
null
3792
1
null
null
2
141
I'm wondering if e-commerce companies where products are offered by users, such as EBay, are using Object Recognition to ensure that an uploaded image corresponds to an specific type of object (clothing, shoes, glasses, etc) either to classify automatically or more importantly to filter undesired images (such as non related or even illegal types). If so, which algorithms and/or open platforms could be use for doing so? From what I've looked it seems that HOG+Exemplar SVM might be one of the most accurate methods developed so far ([http://www.cs.cmu.edu/~efros/exemplarsvm-iccv11.pdf](http://www.cs.cmu.edu/~efros/exemplarsvm-iccv11.pdf)), even having couple of public repo's with Matlab implementations ([https://github.com/quantombone/exemplarsvm](https://github.com/quantombone/exemplarsvm)), but I'm still wondering if this is being used in industry.
Object Recognition for classification, is it being used in industry?
CC BY-SA 3.0
null
2014-12-31T17:57:48.243
2015-03-03T12:21:48.467
null
null
5143
[ "classification", "object-recognition", "hog", "computer-vision" ]
3793
2
null
3788
1
null
Here's a simple solution using the R package `ggmap`.

```
start <- '95 clark st, new haven, ct'
end <- 'maison mathis, new haven, ct'
legs <- route(start, end, alternatives = TRUE)
```

Will find routes from `start` to `end`.

```
      m    km     miles seconds   minutes       hours  startLon startLat    endLon   endLat leg route
1    60 0.060 0.0372840       7 0.1166667 0.001944444 -72.91617 41.31511 -72.91678 41.31541   1     A
2   718 0.718 0.4461652      95 1.5833333 0.026388889 -72.91678 41.31541 -72.92100 41.30979   2     A
3   436 0.436 0.2709304      64 1.0666667 0.017777778 -72.92100 41.30979 -72.92555 41.31171   3     A
4   431 0.431 0.2678234      68 1.1333333 0.018888889 -72.92555 41.31171 -72.92792 41.30829   4     A
5    60 0.060 0.0372840       7 0.1166667 0.001944444 -72.91617 41.31511 -72.91678 41.31541   1     B
6  1276 1.276 0.7929064     179 2.9833333 0.049722222 -72.91678 41.31541 -72.92430 41.30543   2     B
7   421 0.421 0.2616094      62 1.0333333 0.017222222 -72.92430 41.30543 -72.92869 41.30729   3     B
8   129 0.129 0.0801606      28 0.4666667 0.007777778 -72.92869 41.30729 -72.92792 41.30829   4     B
9    60 0.060 0.0372840       7 0.1166667 0.001944444 -72.91617 41.31511 -72.91678 41.31541   1     C
10  421 0.421 0.2616094      58 0.9666667 0.016111111 -72.91678 41.31541 -72.91924 41.31211   2     C
11  522 0.522 0.3243708     101 1.6833333 0.028055556 -72.91924 41.31211 -72.92502 41.31382   3     C
12  240 0.240 0.1491360      33 0.5500000 0.009166667 -72.92502 41.31382 -72.92555 41.31171   4     C
13  431 0.431 0.2678234      68 1.1333333 0.018888889 -72.92555 41.31171 -72.92792 41.30829   5     C
```

And then we can find the length of time estimated to walk there with something like this.

```
tapply(legs$minutes, legs$route, max)
```

Hope this helps!
null
CC BY-SA 3.0
null
2014-12-31T21:47:37.490
2014-12-31T21:47:37.490
null
null
4724
null
3795
1
null
null
0
543
This refers to a system described in a book by Nick Lawrence titled "[Correlithm Object Technology](http://rads.stackoverflow.com/amzn/click/0975276107)". The author coined the term "correlithm" as a combination of "correlation" and "algorithm". The Correlithm Objects ("COs") described in the book are, in the author's view, "primary data tokens" in biological neural systems and are central to "all high-level data representation, storage, and manipulation". In addition to this the author describes the COs as "important mathematical objects of the statistics of bounded, high-dimensional spaces". Considering these descriptions it seems like these COs could be useful for AI. What I was curious about is whether they are actually used anywhere in the industry, and if so, in what kind of situations?
Are Correlithm Objects used for anything in the industry?
CC BY-SA 3.0
0
2015-01-02T05:36:54.783
2018-10-16T20:25:49.890
null
null
6625
[ "machine-learning", "algorithms", "correlation" ]
3796
2
null
3792
1
null
As far as I am aware, object recognition is not extensively used in industry yet. Google's image search, for instance, is based on exploiting the text on the web pages rather than the images themselves. I have seen several start-up companies that market prototypes based on object recognition, but these products are either prototypes or not widely used. Regarding state-of-the-art techniques, you should take a look at recent literature on deep learning and deep neural networks applied to object recognition, along with more specific techniques like convolutional networks.
null
CC BY-SA 3.0
null
2015-01-02T10:58:38.187
2015-01-02T10:58:38.187
null
null
2576
null
3797
1
null
null
7
3361
I am reading up on the lambda architecture. It makes sense: we have queue-based data ingestion, we have an in-memory store for data which is very new, and we have HDFS for old data. So we have our entire data set in our system. Very good. But the architecture diagram shows that the merge layer is able to query both the batch layer and the speed layer in one shot. How do you do that? Your batch layer is probably a MapReduce job or a Hive query. The speed layer query is probably a Scala program which executes on Spark. Now how will you merge these? Is there any guidance?
Lambda Architecture - How to implement the Merge Layer / Query Layer
CC BY-SA 3.0
null
2015-01-02T20:03:59.950
2015-02-03T19:53:47.233
null
null
6644
[ "bigdata", "apache-hadoop" ]
3798
2
null
2346
5
null
Try exploring the rich field of "Anomaly Detection in Time Series". Control charts and CUSUMs (or cumulative sum control charts) might help you. Simple [Bullet Graphs](http://en.wikipedia.org/wiki/Bullet_graph) might be all you need. Based on historical data and domain knowledge, define normal variance. Then make it clear to stakeholders when the current value is outside of predefined ranges. Stephen Few is an expert in business dashboards. Any of his books will help you. If you are open to R, try [Shiny](http://shiny.rstudio.com/) to create simple interactive web applications (It is very straightforward). There is also an open source package for [anomaly detection](https://github.com/twitter/AnomalyDetection), both local and global. Create quick prototypes and get feedback!
null
CC BY-SA 3.0
null
2015-01-03T01:07:15.080
2015-01-08T03:12:03.630
2015-01-08T03:12:03.630
1330
1330
null
3799
2
null
1174
4
null
I have shared a number of resources on time series classification and clustering in one of my recent answers here on Data Science StackExchange: [https://datascience.stackexchange.com/a/3764/2452](https://datascience.stackexchange.com/a/3764/2452). Hopefully, you will find them relevant to this question and useful.
null
CC BY-SA 3.0
null
2015-01-03T01:17:47.957
2015-01-03T01:17:47.957
2017-04-13T12:50:41.230
-1
2452
null
3800
2
null
3770
2
null
This won't be a very satisfying answer, but here's my take... > For known and unknown properties, how should I proceed to go from daily to weekly/monthly data ? For known and unknown properties, how should I proceed to go from weekly/monthly to daily data ? Same answer for both: you can't do this for unknown properties, and for known properties it will depend on how the values were computed. As you alluded to: > (an average of fiancial rates would be a non-sense for example) There is no single transformation that will be appropriate in all cases, whether the properties/values are known or unknown. Even with known properties, you'll likely need a unique transformation for each type: mean, median, mode, min, max, boolean, etc. > when given two time series with different time steps, what is better: Using the Lowest or the biggest time step ? Whenever possible, try to preserve the full granularity of the smallest possible step. Assuming you know how to transform the values, you can always roll-up the steps (e.g., day to month, month to year)... but you won't necessarily be able to reconstruct smaller steps from larger ones following a lossy conversion.
null
CC BY-SA 3.0
null
2015-01-03T03:15:08.960
2015-01-03T03:15:08.960
2020-06-16T11:08:43.077
-1
819
null
3801
2
null
3770
9
null
> when given two time series with different time steps, what is better: Using the Lowest or the biggest time step ? For your timeseries analysis you should do both: get to the highest granularity possible with the daily dataset, and also repeat the analysis with the monthly dataset. With the monthly dataset you have 120 data points, which is sufficient to get a timeseries model even with seasonality in your data. > For known and unknown properties, how should I proceed to go from daily to weekly/monthly data ? To obtain say weekly or monthly data from daily data, you can use smoothing functions. For financial data, you can use moving average or exponential smoothing, but if those do not work for your data, then you can use the spline smoothing function "smooth.spline" in R: [https://stat.ethz.ch/R-manual/R-patched/library/stats/html/smooth.spline.html](https://stat.ethz.ch/R-manual/R-patched/library/stats/html/smooth.spline.html) The model returned will have less noise than the original daily dataset, and you can get values for the desired time points. Finally, these data points can be used in your timeseries analysis. > For known and unknown properties, how should I proceed to go from weekly/monthly to daily data ? To obtain daily data when you have monthly or weekly data, you can use interpolation. First, you should find an equation to describe the data. In order to do this you should plot the data (e.g. price over time). When factors are known to you, this equation should be influenced by those factors. When factors are unknown, you can use a best fit equation. The simplest would be a linear function or piecewise linear function, but for financial data this won't work well. In that case, you should consider piecewise cubic spline interpolation. This link goes into more detail on possible interpolation functions: [http://people.math.gatech.edu/~meyer/MA6635/chap2.pdf](http://people.math.gatech.edu/~meyer/MA6635/chap2.pdf). In R, there is a method for doing interpolation of timeseries data. Here you would create a vector with say weekly values and NAs in the gaps for the daily values, and then use the "interpNA" function to get the interpolated values for the NAs. However, this function uses the "approx" function to get the interpolated values, which applies either a linear or constant interpolation. To perform cubic spline interpolation in R, you should use the "splinefun" function instead. Something to be aware of is that timeseries models typically do some sort of averaging to forecast future values whether you are looking at exponential smoothing or Auto-Regressive Integrated Moving Average (ARIMA) methods amongst others. So a timeseries model to forecast daily values may not be the best choice, but the weekly or monthly models may be better.
null
CC BY-SA 3.0
null
2015-01-03T07:58:10.663
2015-01-03T07:58:10.663
null
null
6648
null
3802
2
null
3770
5
null
I'm not an expert in this area, but I believe that your question is concerned with time series aggregation and disaggregation. If that is the case, here are some hopefully relevant resources, which might be helpful in solving your problem (first five items are main, but representative, and last two are supplementary): - Temporal Aggregation and Economic Time Series - Temporal Disaggregation of Time Series (IMHO, an excellent overview paper) - CRAN Task View: Time Series Analysis (R-focused) - Introduction to R's Time Series Facilities - Working with Financial Time Series Data in R - Notes on chapters contents for the book "Time Series Analysis and Forecasting" - Discussion on Cross Validated on daily to monthly data conversion (Python-focused)
null
CC BY-SA 3.0
null
2015-01-03T08:48:51.070
2015-01-03T09:00:28.947
2017-04-13T12:44:20.183
-1
2452
null
3804
1
null
null
2
3484
For an upcoming project, I'm mining textual posts from an online forum, using Scrapy. What is the best way to store this text data? I'm thinking of simply exporting it into a JSON file, but is there a better format? Or does it not matter?
Format for storing textual data
CC BY-SA 3.0
null
2015-01-03T22:38:39.097
2017-07-18T08:40:39.433
2015-01-08T23:25:32.430
1367
6660
[ "text-mining", "crawling" ]
3806
2
null
2346
3
null
The optimal solution very much depends on multiple factors, including your (current and future) business and IT processes, stakeholders' needs and preferences, and, in general, business and IT architectures. So, IMHO it's difficult to answer this wide and not well-defined question. Having said that, I hope that you will find the following earlier answers of mine here on Data Science StackExchange relevant and useful. - On data analysis workflow (introductory): https://datascience.stackexchange.com/a/1006/2452 - On data analysis workflow (more detailed): https://datascience.stackexchange.com/a/759/2452 - On dashboard visualization: https://datascience.stackexchange.com/a/907/2452 - On big data visualization: https://datascience.stackexchange.com/a/3723/2452
null
CC BY-SA 3.0
null
2015-01-04T05:22:48.670
2015-01-04T05:22:48.670
2017-04-13T12:50:41.230
-1
2452
null
3807
1
null
null
1
233
I'm doing my academic project. I have a base paper for reference: the IEEE paper "Effective and Efficient Clustering Methods for Correlated Probabilistic Graphs". I wish to do this in the R tool. In this paper two algorithms are implemented; I would like to implement the PEEDR algorithm from the paper. How can I give the input for that algorithm? Please suggest the packages in the R tool. The paper can be found here: [http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=6570474](http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=6570474)
How can I generate the probabilistic graph for my dataset?
CC BY-SA 3.0
null
2015-01-04T10:59:42.973
2015-05-09T01:45:07.583
2015-04-09T01:03:45.010
8953
6664
[ "r", "graphs" ]
3808
1
null
null
1
124
I'm trying to develop my neural network with both early stopping and Bayesian regularization (MATLAB implementation; the LM algorithm is used for both). Since with Bayesian regularization I do not have a validation set, how can I compare the generalization capability of the networks obtained with the two methodologies? Thanks
Compare Neural Network generalization results
CC BY-SA 3.0
null
2015-01-04T11:22:28.817
2015-01-05T22:33:02.010
null
null
6559
[ "machine-learning", "neural-network" ]
3810
2
null
3807
2
null
Søren Højsgaard has many quality resources for graphical models in R. He has a tutorial ["Graphical Models with R"](http://people.math.aau.dk/~sorenh/misc/2012-Oslo-GMwR/GMwR-notes.pdf) and a [list of CRAN packages](http://cran.r-project.org/web/views/gR.html). Additionally, [mclust](http://cran.r-project.org/web/packages/mclust/index.html) is one of the best clustering packages in R.
null
CC BY-SA 3.0
null
2015-01-04T19:29:48.930
2015-01-04T19:29:48.930
null
null
1330
null
3811
1
3818
null
1
96
I'm building a neural network to analyze a business' sales. I'm normalizing all input values to the range `{0,1}`. I'm struggling with the day-of-the-week column. Business days are identified by a number ranging `{1-5}` (1=Monday). Normalizing these values to the range `{0,1}` is straightforward, but results in a major bias in the final output. The reason is that the full range of normalized values for the business day column is explored with every week's worth of data, whereas the other price-related columns explore their full range of normalized values infrequently. The business day column ends up being the largest contributor to the final output. How can I normalize it to make its contribution more in tune with the rest of the inputs?
normalize identification values properly
CC BY-SA 3.0
null
2015-01-04T21:52:38.977
2015-01-05T23:09:25.800
null
null
6669
[ "machine-learning", "neural-network" ]
3812
2
null
2578
0
null
Don't forget about MLlib (the hot kid on the block), cloudera-ml, cloudera oryx, cloudera oryx2, and a myriad of other efforts to make old algorithms like k-means solve nonexistent problems on wannabe big data...

# There is no one size fits all.

Thus, there is no answer to your question, in particular no concise answer.
null
CC BY-SA 3.0
null
2015-01-04T22:29:14.980
2015-01-04T22:29:14.980
null
null
924
null
3813
2
null
3804
1
null
In general, use a storage method that allows you to quickly query it. If your collection is huge, you might need something Lucene-based, like ElasticSearch. If you are a SQL crack and your favorite DB supports it, a full-text index might do the trick. For small sizes like the 5000 documents, even Linux' LocateDb+grep or OSX' spotlight could be enough. The important point is to be able to quickly verify assumptions about the content of your data - how many documents contain X and Y, does any document contain W but not V, etc. This will be useful at both the whole set level as well as for analyzing your topic clusters. Finally, a few GNU tools or SQL mastery can also help you profile your document sets more efficiently (n-gram counts/ranks, collocations, concordances, etc) Edit: that means, for the above reasons and given your collection size, good old plain text (in a file system or a database) might be more efficient than any "fancy" format.
null
CC BY-SA 3.0
null
2015-01-04T23:57:20.150
2015-01-04T23:57:20.150
null
null
6672
null
3814
1
null
null
1
83
It is often pointed out that sample is an overloaded term in statistics and the sciences being supported by statistics. In my field (geological sciences) as in most other sciences, the process of collecting meaningful data is critical and discussions about the traps and pitfalls in that process talk about sampling. Not far down the road from that, particularly when lab results are back, conversations involving statisticians, data scientists, geomathematicians, GIS analysts and even normal geologists are likely to attempt to include multiple meanings of sample in the same sentence! Q: Have any data scientists (or statisticians) found practical ways to communicate these different meanings? One way is to always add soil, rock, statistical and so on before sample. But I was curious if there are any other approaches to effective communication that are in use.
Communicating clearly about "samples"
CC BY-SA 3.0
null
2015-01-05T18:34:55.920
2015-04-05T23:59:08.727
null
null
6646
[ "statistics", "sampling" ]
3815
2
null
3814
1
null
"Sample" as a noun usually refers to a single data point. "Sample" as a verb is the act of extracting a subset of data points from some larger body (reality or a larger dataset). The only way to be less ambiguous is use more specific words than "data" or "sample". ## Example: Say you collect 1MM data points from four different sensors in the field giving you four sets of 250k data points. Say this data is too big for some demo of a model you're testing or an analysis you're running, so you pick 100k data points evenly split across the four sensors (giving four sets of 25k data points). In this example, you're sampling twice. First, to gather your 1MM data points sampling from reality. Second, you sample again to decrease the size of your data set to something more manageable. 'Data' or 'sample' could refer to reality, the 1MM dataset, the 100k dataset, or any of the sensor-specific subsets. To make it less ambiguous, establish a unique name as soon as possible for each possible definition you'll be working with. ('reality' for the set of all possibly observed samples. 'the complete dataset', something derived from the source of the dataset, or even X for the full 1MM dataset. 'our trial dataset', or even Y for the small 100k dataset. What you actually do comes down to context and what's appropriate for your intended audience, but the general answer is to use more specific words.
null
CC BY-SA 3.0
null
2015-01-05T22:16:06.093
2015-01-05T22:16:06.093
null
null
6391
null
3816
2
null
3808
2
null
As Neil said in the comments, split out a [test set](http://en.wikipedia.org/wiki/Test_set) (data you don't use in training either model) and see how each trained model performs on the test set.
null
CC BY-SA 3.0
null
2015-01-05T22:33:02.010
2015-01-05T22:33:02.010
null
null
6391
null
3817
2
null
3770
2
null
> For known and unknown properties, how should I proceed to go from daily to weekly/monthly data ?

Aggregation. For example, you have the number of times people searched for 'widgets' every day. Add up the daily totals for a month to get monthly totals. I would need to see more specifics about the actual data collected at each granularity to give you a more complete version.

> For known and unknown properties, how should I proceed to go from weekly/monthly to daily data ?

You can't. In physics, a comparable idea is the [Nyquist frequency](http://en.wikipedia.org/wiki/Nyquist_frequency). The general idea is that you can't add more information than what you already have present in your data without bringing in more data. Given only the day someone ran a query, how can you tell what time of day that query was run? You may be able to make some inferences, but the only way to answer the question is to directly or indirectly bring in more information to the system. There are things you can do to make informed guesses at the daily state of monthly variables (as gchaks mentioned, interpolation), but your data is still fundamentally monthly data stretched to look daily.

> When given two time series with different time steps, what is better: Using the Lowest or the biggest time step ?

That totally depends on what you're trying to answer. The smaller granularity will be more sensitive to noise and other anomalies. The larger granularity will be able to answer questions more confidently, but lose some of its usefulness. For example, if you're trying to see when people start looking up venues for weekend plans, to know when to launch marketing campaigns for a new night club, you'll want to be looking at daily data, if not smaller. If you're looking at the general trending of night clubs to figure out who you want to invest in, then monthly would probably be better.
null
CC BY-SA 3.0
null
2015-01-05T22:49:27.293
2015-01-05T22:49:27.293
null
null
6391
null
3818
2
null
3811
1
null
It is possible that the other variables you're feeding into the NN are simply bad at predicting sales. Sales prediction is a notoriously hard problem. Specifically addressing the mapping of a multi-state categorical variable to the NN's {0,1} input range: another idea is to change that one 5-state variable into five boolean variables. Rather than {0,0.25,0.5,0.75,1.0} on your one variable, make each of the five boolean variables represent a single day and make [1,0,0,0,0] equal Monday, [0,1,0,0,0] equal Tuesday, etc. I've personally had more success, both with training good networks and with introspecting the network itself, when spreading out the states of classes like that (a small sketch of this encoding follows below). Other hacks you can try:

* Take out the 'day' column altogether and see if any of the other variables get used.
* Plot the distribution of spend as a function of day. Even if nothing else comes of this current model, it sounds like you've found one interesting insight already.
* Consider also trying different models.
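A minimal sketch of that one-hot (1-of-n) encoding, assuming the day column holds integers 1-5 and using made-up sales figures:

```python
import pandas as pd

df = pd.DataFrame({"day": [1, 2, 3, 4, 5, 1, 2],      # 1 = Monday ... 5 = Friday
                   "sales": [10.0, 12.5, 9.0, 11.0, 15.0, 10.5, 12.0]})

# Replace the single 1-5 column with five 0/1 indicator columns.
day_dummies = pd.get_dummies(df["day"], prefix="day").astype(int)
X = pd.concat([day_dummies, df[["sales"]]], axis=1)
print(X.head())
```

Each indicator then contributes at most a 0/1 step to the network input, instead of one column sweeping its whole normalized range every single week.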
null
CC BY-SA 3.0
null
2015-01-05T23:09:25.800
2015-01-05T23:09:25.800
null
null
6391
null
3819
2
null
3770
2
null
Being able to aggregate data based on date segments is a piece of cake using Tableau software. You would simply plug your data into the tool, and then you can drag and drop both the metric and date dimensions onto a report body space. Tableau will instantaneously whip up aggregate visualizations and/or detail data tables, on the fly. You can group/sum by year, quarter, month, week, day, hour, etc. (standard, built in, out of the box functionality offered by the tool) Also, if you wanted to incorporate additional data later on (which I assume is what you meant by 'unknown properties'), you can import another data set and easily append it onto the first one, as long as it also offers dates. I would recommend checking out the free version, which I believe is called Tableau Public.
null
CC BY-SA 3.0
null
2015-01-06T01:00:54.457
2015-01-06T01:06:26.827
2015-01-06T01:06:26.827
6682
6682
null
3820
2
null
1007
0
null
You don't need a tool and I don't recommend you use one. Convert the HTML to well-formed XML (XHTML) - I recommend TagSoup. Once you've done that, the data is just another XML feed and you can write an XSLT transformation (or XQuery) to access and pull out the data you want in the format you want. That might mean learning XSLT/XQuery if you don't already know it, but you will be learning skills that (unlike scraping tools) have multiple rather than just one useful application.
null
CC BY-SA 3.0
null
2015-01-06T01:26:41.587
2015-01-06T01:26:41.587
null
null
498
null
3821
1
3823
null
7
1711
I am interested in finding a statistic that tracks the unpredictability of a time series. For simplicity's sake, assume that each value in the time series is either 1 or 0. So, for example, the following two time series are entirely predictable:

TS1: 1 1 1 1 1 1 1 1
TS2: 0 1 0 1 0 1 0 1 0 1 0 1

However, the following time series is not that predictable:

TS3: 1 1 0 1 0 0 1 0 0 0 0 0 1 1 0 1 1 1

I am looking for a statistic that, given a time series, would return a number between 0 and 1, with 0 indicating that the series is completely predictable and 1 indicating the series is completely unpredictable. I looked at some entropy measures like Kolmogorov complexity and Shannon entropy, but neither seems to fit my requirement. In Kolmogorov complexity, the statistic's value changes depending on the length of the time series (as in "1 0 1 0 1" and "1 0 1 0" have different complexities, so it's not possible to compare the predictability of two time series with differing numbers of observations). In Shannon entropy, the order of observations didn't seem to matter. Any pointers on what would be a good statistic for my requirement?
Finding unpredictability or uncertainty in a time series
CC BY-SA 3.0
null
2015-01-06T04:34:09.967
2015-01-06T07:40:12.223
null
null
6685
[ "time-series" ]
3823
2
null
3821
7
null
Since you have looked at Kolmogorov complexity and Shannon entropy measures, I would like to suggest some other hopefully relevant options. First of all, you could take a look at the so-called [approximate entropy $ApEn$](http://en.wikipedia.org/wiki/Approximate_entropy). Other potential statistics include block entropy, T-complexity (T-entropy) as well as Tsallis entropy: [http://members.noa.gr/anastasi/papers/B29.pdf](http://members.noa.gr/anastasi/papers/B29.pdf) In addition to the above-mentioned potential measures, I would like to suggest having a look at the available statistics in the Bayesian inference-based model of stochastic volatility in time series, implemented in the `R` package `stochvol`: [http://cran.r-project.org/web/packages/stochvol](http://cran.r-project.org/web/packages/stochvol) (see the detailed [vignette](http://cran.r-project.org/web/packages/stochvol/vignettes/article.pdf)). Such statistics of uncertainty include the overall level of volatility $\mu$, persistence $\phi$ and volatility of volatility $\sigma$: [http://simpsonm.public.iastate.edu/BlogPosts/btcvol/KastnerFruwhirthSchnatterASISstochvol.pdf](http://simpsonm.public.iastate.edu/BlogPosts/btcvol/KastnerFruwhirthSchnatterASISstochvol.pdf). A comprehensive example of using the stochastic volatility model approach and the `stochvol` package can be found in the excellent blog post ["Exactly how volatile is bitcoin?"](http://www.themattsimpson.com/2014/02/22/exactly-how-volatile-is-bitcoin) by Matt Simpson.
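A minimal, unoptimised sketch of approximate entropy following Pincus's definition, with embedding dimension m and tolerance r (both chosen arbitrarily here); it is only meant to give a feel for the statistic on the example series from the question.

```python
import numpy as np

def approximate_entropy(x, m=2, r=0.2):
    """Approximate entropy ApEn(m, r) of a 1-D numeric sequence."""
    x = np.asarray(x, dtype=float)

    def phi(m):
        n = len(x) - m + 1
        patterns = np.array([x[i:i + m] for i in range(n)])
        # For each template pattern, the fraction of patterns within tolerance r
        # (self-matches included, so the log is always defined).
        c = np.array([
            np.mean(np.max(np.abs(patterns - p), axis=1) <= r)
            for p in patterns
        ])
        return np.mean(np.log(c))

    return phi(m) - phi(m + 1)

ts1 = [1, 1, 1, 1, 1, 1, 1, 1]                    # constant -> ApEn near 0
ts2 = [0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1]        # alternating -> ApEn near 0
ts3 = [1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 1]
print([round(approximate_entropy(t), 3) for t in (ts1, ts2, ts3)])
```

Note that ApEn is not automatically scaled to [0, 1]; for binary series, dividing by log(2) (the entropy rate of a fair coin) is one possible normalisation if you need a bounded score.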
null
CC BY-SA 3.0
null
2015-01-06T07:40:12.223
2015-01-06T07:40:12.223
null
null
2452
null
3825
1
null
null
1
78
I have a data set, in Excel format, with account names, reported symptoms, a determined root cause and a date in month-year format for each row. I am trying to implement a Mahout-like system with the purpose of determining the symptoms an account is likely to report, by doing a user-based-similarity kind of thing. Technically, I am just hoping to tweak the recommendation system into a deterministic system to spot the probable symptoms an account might report. Instead of ratings, I can get the frequency of symptoms by account. Is it possible to use a programming language or any other software to build such a system? Here is an example:

Account : X Symptoms : AB, AD, AB, AB
Account : Y Symptoms : AE, AE, AB, AB, EA

For the sake of this example, let's assume that all the dates are this month.

O/P: Account : X Symptom: AE

Here both of them have reported AB 2 or more times. I could fix such a number as a threshold to look for probable symptoms.
Mimic a Mahout like system
CC BY-SA 3.0
null
2015-01-06T14:08:35.533
2015-11-20T01:08:37.500
null
null
5043
[ "recommender-system", "similarity", "apache-mahout" ]
3826
1
3827
null
2
211
I'm trying to do a correlation analysis between inputs and outputs, inspecting the data in order to understand which input variables to include. What could be a threshold on the correlation value for considering a variable eligible to be an input for my neural network?
Correlation threshold for Neural Network features selection
CC BY-SA 3.0
null
2015-01-06T20:43:35.480
2015-01-07T01:26:17.560
null
null
6559
[ "machine-learning" ]
3827
2
null
3826
2
null
Given the non-linearity of neural networks, I believe correlation analysis isn't a good way to estimate the importance of variables. For example, imagine that you have 2 input variables - `x1` and `x2` - and the following conditions hold:

- cor(x2, y) = 1 if x1 = 1
- cor(x2, y) = 0 otherwise
- x1 = 1 in 10% of cases

That is, `x2` is a very good predictor for `y`, but only given that `x1 = 1`, which is the case in only 10% of the data. Taking into account the correlations of `x1` and `x2` separately won't expose this dependency, and you will most likely drop both variables. There are other ways to perform feature selection, however. The simplest one is to train your model with all possible sets of variables and check for the best subset. This is pretty inefficient with many variables, though, so many ways to improve it exist. For a good introduction to best subset selection see chapter 6.1 of [Introduction to Statistical Learning](http://www-bcf.usc.edu/~gareth/ISL/).
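The point can be illustrated with a quick simulation along the lines of the example above (sample size and numbers are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10000

x1 = (rng.random(n) < 0.10).astype(float)      # x1 = 1 in ~10% of cases
x2 = rng.normal(size=n)
noise = rng.normal(size=n)

# y follows x2 only when x1 == 1; otherwise it is pure noise.
y = np.where(x1 == 1, x2, noise)

print("overall cor(x2, y): ", round(np.corrcoef(x2, y)[0, 1], 3))            # ~0.1
print("cor(x2, y | x1 = 1):", round(np.corrcoef(x2[x1 == 1], y[x1 == 1])[0, 1], 3))  # ~1.0
print("cor(x1, y):         ", round(np.corrcoef(x1, y)[0, 1], 3))            # ~0.0
```

A univariate correlation threshold of, say, 0.3 would discard both variables here, even though together they explain y perfectly whenever x1 = 1.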
null
CC BY-SA 3.0
null
2015-01-07T01:26:17.560
2015-01-07T01:26:17.560
null
null
1279
null