Columns (one value per line for each post record): Id, PostTypeId, AcceptedAnswerId, ParentId, Score, ViewCount, Body, Title, ContentLicense, FavoriteCount, CreationDate, LastActivityDate, LastEditDate, LastEditorUserId, OwnerUserId, Tags
567
2
null
564
5
null
I think you're looking for a [similarity matrix](http://cgi.mtc.sri.com/Cluster-Lab/) (see bottom of the page). If you don't have data on similarity between certain pairs, you can always leave them as grey or white. Also, this will only work for data sets small enough to actually make out what's going on; I'd say 25 rows/columns maximum. In a similarity matrix, the x and y coordinates correspond to the two things you're comparing, while a [colormap](http://en.wikipedia.org/wiki/False_color) represents the similarity magnitude. EDIT: One thing you could do to replace the colormap is to insert, say, circles of different sizes according to the similarity metric. Or you could insert the numbers themselves, again varying the size of the number as its magnitude varies. Size usually works best in business visualizations.
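A minimal base-R sketch of both ideas (colormap and circle sizes); the 5x5 similarity matrix `S` below is a made-up example, not real data.
```
# made-up symmetric similarity matrix for illustration
set.seed(1)
S <- matrix(runif(25), 5, 5); S <- (S + t(S)) / 2; diag(S) <- 1
rownames(S) <- colnames(S) <- paste0("item", 1:5)

# colormap version
image(1:5, 1:5, S, axes = FALSE, xlab = "", ylab = "")
axis(1, at = 1:5, labels = colnames(S)); axis(2, at = 1:5, labels = rownames(S))

# circle-size version
grid <- expand.grid(x = 1:5, y = 1:5)
symbols(grid$x, grid$y, circles = as.vector(S) / 2,
        inches = FALSE, xlab = "", ylab = "", axes = FALSE)
```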
null
CC BY-SA 3.0
null
2014-06-24T20:28:33.693
2014-06-24T20:37:02.377
2014-06-24T20:37:02.377
1163
1163
null
568
2
null
447
3
null
I suggest the use of Hidden Markov Models, with two possible states: (1) high levels and (0) low levels. This technique might be helpful for decoding your signal. You would probably need a specific HMM for each encoding. If noise is an issue, an FIR filter with a Blackman-Harris window function would allow you to isolate the frequency you're concerned with.
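A minimal sketch of a two-state HMM with the R `HMM` package; all probabilities and the discretized observation sequence below are illustrative assumptions, not values estimated from a real signal.
```
library(HMM)

hmm <- initHMM(States  = c("low", "high"),
               Symbols = c("L", "H"),   # discretized signal levels
               startProbs    = c(0.5, 0.5),
               transProbs    = matrix(c(0.9, 0.1,
                                        0.1, 0.9), nrow = 2, byrow = TRUE),
               emissionProbs = matrix(c(0.8, 0.2,
                                        0.2, 0.8), nrow = 2, byrow = TRUE))

obs <- c("L", "L", "H", "H", "H", "L")  # toy observation sequence
viterbi(hmm, obs)                       # most likely hidden state sequence
```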
null
CC BY-SA 3.0
null
2014-06-24T20:34:18.473
2014-06-25T06:58:43.670
2014-06-25T06:58:43.670
890
1155
null
569
2
null
559
1
null
Is it a bird? Is it a cat? We have black-and-white, cat-sized magpies here, so that would fail. The first thing would be to exclude all areas that are green; cats are seldom green. Then compare the rest to a reference image to remove static things like stones and stairs. Detecting objects of a minimum size should be possible, but for classification the resolution is too low. It could also be your neighbor testing his new remote-controlled drone. With two cameras you could do a 3D mapping of the objects and eliminate flying objects.
null
CC BY-SA 3.0
null
2014-06-24T20:35:44.713
2014-06-24T20:48:37.837
2014-06-24T20:48:37.837
1164
1164
null
570
2
null
564
2
null
Personally, I think Netflix got it right. Break it down into a confidence rating from 1-5 and show your recommendations based on the number of yellow stars. It doesn't have to be stars, but those icon based graphs are very easy to interpret and get the point across clearly.
null
CC BY-SA 3.0
null
2014-06-25T00:13:50.803
2014-06-25T00:13:50.803
null
null
434
null
571
1
572
null
5
744
I'm trying to rank some percentages. I have numerators and denominators for each ratio. To give a concrete example, consider the ratio `total graduates / total students` in a school. But the issue is that `total students` varies over a wide range (1,000-20,000). Smaller schools seem to have a higher percentage of students graduating, but I want to standardize the ratio and not let the size of the school affect the ranking. Is there a way to do it?
Standardize numbers for ranking ratios
CC BY-SA 3.0
null
2014-06-25T02:33:22.673
2014-06-25T03:59:46.827
2014-06-25T03:59:46.827
84
1174
[ "statistics" ]
572
2
null
571
5
null
This is relatively simple to do mathematically. First, fit a regression line to the scatter plot of "total graduates" (y) vs. "total students" (x). You will probably see a downward-sloping line if your assertion is correct (smaller schools graduate a higher %). You can identify the slope and y-intercept for this line to convert it into an equation y = mx + b, and then do a little algebra to convert the equation into normalized form: y / x = m + b / x. Then, for each ratio in your data, subtract this right-hand side: normalized ratio = (total grads / total students) - (m + b / total students). If the result is positive, the ratio is above normal for that school size (i.e. above the regression line); if it is negative, it is below the regression line. If you want all positive numbers, you can add a positive constant to move all results above zero. --- This is how to do it mathematically, but I suggest that you consider whether it is wise, from a data analysis point of view, to normalize by school size. This depends on the purpose of your analysis and specifically how this ratio is being analyzed in relation to other data.
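A minimal sketch of that adjustment in R, assuming a hypothetical data frame `schools` with columns `grads` and `students` (the names are illustrative).
```
fit <- lm(grads ~ students, data = schools)
b <- coef(fit)["(Intercept)"]
m <- coef(fit)["students"]

# positive = above the regression line for that school size, negative = below
schools$norm_ratio <- schools$grads / schools$students -
                      (m + b / schools$students)

schools[order(-schools$norm_ratio), ]   # ranking, best-adjusted ratio first
```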
null
CC BY-SA 3.0
null
2014-06-25T03:11:02.457
2014-06-25T03:11:02.457
null
null
609
null
573
2
null
559
2
null
OpenCV's [background subtraction](http://docs.opencv.org/trunk/doc/tutorials/video/background_subtraction/background_subtraction.html) will find objects moving about your garden. After that you could use a classifier or shape analysis to differentiate between cats, people, trees, etc.
null
CC BY-SA 3.0
null
2014-06-25T03:30:19.787
2014-06-25T03:30:19.787
null
null
1176
null
574
2
null
116
2
null
This is a very interesting problem. I faced a similar one when analyzing the pictures users upload to a social network. My approach was the following: - Rather than associating data with exact ages (15 y.o., 27 y.o., ...), I established different age groups: less than 18, from 18 to 30, and greater than 30 (this was due to the specific problem we were facing, but you can choose whatever intervals you want). This division helps a lot to solve the problem. - Afterwards, I created a hierarchical clustering (divisive or agglomerative). Then I chose those branches where I had users with known ages (or age groups) and extended that age group to the whole branch. This approach is semi-supervised learning, and I recommend it when only some of your data is labeled. Please notice that on a social network, people usually lie about their age (just for fun, or sometimes because they want to camouflage themselves on the social net).
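A minimal sketch of that idea in R, assuming a hypothetical numeric feature matrix `X` (one row per user) and a character vector `age_group` that is NA for unlabeled users; the number of branches is an arbitrary choice.
```
hc  <- hclust(dist(scale(X)), method = "ward.D2")
grp <- cutree(hc, k = 20)

predicted <- age_group
for (g in unique(grp)) {
  members <- which(grp == g)
  known   <- na.omit(age_group[members])
  if (length(known) > 0) {
    # propagate the most frequent known label within the branch
    fill <- names(sort(table(known), decreasing = TRUE))[1]
    idx  <- members[is.na(predicted[members])]
    predicted[idx] <- fill
  }
}
```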
null
CC BY-SA 3.0
null
2014-06-25T07:22:33.170
2014-06-25T07:22:33.170
null
null
1155
null
575
2
null
376
3
null
This is a very good question and a common situation. In my opinion there are three different factors that must be controlled: - Data: There already exist different benchmarks for evaluating algorithms and architectures. These data must be publicly available so that everybody can contrast their approaches. - Architecture: My suggestion is to test everything on the cloud, so that everyone can contrast their results and there is no doubt that the same machines and software are used. - Algorithms: If you have developed a distributed algorithm, it would be great to compare your algorithm on specific data. In this case, the algorithms themselves need not be public. So, answering your question: if you want to compare different experiments and state to what extent your distributed algorithm outperforms others, you should try to replicate as accurately as possible the same environment (data and architecture) where the experiments were carried out. If this is not possible, my suggestion is that you test your algorithm with public data and a cloud architecture, so that you become a reference, as you are facilitating the comparison of future algorithms.
null
CC BY-SA 3.0
null
2014-06-25T09:19:25.560
2014-06-25T09:19:25.560
null
null
1155
null
576
2
null
196
1
null
You should try the arules package in R. It allows you to create not only association rules but also to specify the length of each rule and the importance of each rule, and you can filter them, which is what you are looking for (try the rhs() function of this package).
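A minimal sketch with arules; the transactions file and the item name "beer" are hypothetical placeholders.
```
library(arules)

trans <- read.transactions("transactions.csv", format = "basket", sep = ",")
rules <- apriori(trans, parameter = list(supp = 0.01, conf = 0.5,
                                         minlen = 2, maxlen = 4))

# keep only rules whose right-hand side is the item of interest
beer_rules <- subset(rules, subset = rhs %in% "beer")
inspect(sort(beer_rules, by = "lift"))
```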
null
CC BY-SA 3.0
null
2014-06-25T10:15:31.477
2014-06-25T10:15:31.477
null
null
1155
null
577
2
null
376
2
null
Though it's easier said than done, it's better to treat the parts of the environment that change as variables and describe/estimate your algorithm's performance based on these variables. And hopefully others will do the same. Of related interest: [Experiments as Research Validation -- Have We Gone too Far?](http://infolab.stanford.edu/~ullman/pub/experiments.pdf).
null
CC BY-SA 3.0
null
2014-06-25T10:59:56.737
2014-06-25T10:59:56.737
null
null
743
null
580
2
null
265
5
null
I have read a bit about this and had [this blog](http://fellgernon.tumblr.com/post/46117939292/predicting-who-will-win-a-nfl-match-at-half-time#.UtehM7TWtQg) in mind. It deals with predicting the winner of an NFL match once half time is over. The prediction is 80% accurate with a simple GLM model. I do not know whether that is suitable for soccer.
null
CC BY-SA 4.0
null
2014-06-25T14:30:10.597
2020-08-20T18:25:42.540
2020-08-20T18:25:42.540
98307
1155
null
581
2
null
22
32
null
In my opinion, there are solutions for dealing with categorical data in clustering. R provides a distance designed for categorical (and mixed) data. This distance is called [Gower](http://www.rdocumentation.org/packages/StatMatch/versions/1.2.0/topics/gower.dist) and it works pretty well.
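A minimal sketch of clustering on the Gower distance; `df` is a hypothetical data frame mixing factor and numeric columns, and cluster::daisy is used here as an alternative to the StatMatch::gower.dist function linked above.
```
library(cluster)

d  <- daisy(df, metric = "gower")     # dissimilarity matrix (0-1 range)
hc <- hclust(d, method = "average")
clusters <- cutree(hc, k = 4)         # k chosen arbitrarily for illustration
```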
null
CC BY-SA 4.0
null
2014-06-25T14:36:29.163
2020-08-07T14:12:05.687
2020-08-07T14:12:05.687
98307
1155
null
582
2
null
116
4
null
Some research by [D. Nguyen et al.](http://www.dongnguyen.nl/publications.html) tries to predict a Twitter user's age based on their tweets. Maybe you will find it useful. They use logistic and linear regression.
null
CC BY-SA 3.0
null
2014-06-25T14:43:53.680
2014-06-25T14:43:53.680
null
null
743
null
583
2
null
518
4
null
The original MacQueen k-means publication (the first to use the name "k-means") is an online algorithm. > MacQueen, J. B. (1967). "Some Methods for Classification and Analysis of Multivariate Observations". Proceedings of 5th Berkeley Symposium on Mathematical Statistics and Probability 1. University of California Press. pp. 281–297 After assigning each point, the mean is incrementally updated. As far as I can tell, it was also meant to be a single pass over the data only, although it can be trivially repeated multiple times to reassign points until convergence. MacQueen's version usually takes fewer iterations than Lloyd's to converge if your data is shuffled. On ordered data, it can have problems. On the downside, it requires more computation for each object, so each iteration takes slightly longer. When you implement a parallel version of k-means, make sure to study the update formulas in MacQueen's publication. They're useful.
null
CC BY-SA 3.0
null
2014-06-25T15:42:19.287
2014-06-25T15:42:19.287
null
null
924
null
584
2
null
458
7
null
The original MacQueen k-means publication (the first to use the name "k-means") is an online algorithm. > MacQueen, J. B. (1967). "Some Methods for Classification and Analysis of Multivariate Observations". Proceedings of 5th Berkeley Symposium on Mathematical Statistics and Probability 1. University of California Press. pp. 281–297 After assigning each point, the mean is incrementally updated using a simple weighted-average formula (if the mean had n observations before, the old mean is weighted with n and the new observation with 1). As far as I can tell, it was also meant to be a single pass over the data only, although it can be trivially repeated multiple times to reassign points until convergence. MacQueen's version usually takes fewer iterations than Lloyd's to converge if your data is shuffled (because it updates the mean faster!). On ordered data, it can have problems. On the downside, it requires more computation for each object, so each iteration takes slightly longer (additional math operations, obviously).
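A minimal base-R sketch of MacQueen's incremental update, assuming a hypothetical numeric matrix `X` and an initial k x ncol(X) matrix `centers` (e.g. the first k points), each starting with a count of 1.
```
counts <- rep(1, nrow(centers))

for (i in seq_len(nrow(X))) {
  x <- X[i, ]
  j <- which.min(colSums((t(centers) - x)^2))   # nearest center
  counts[j]    <- counts[j] + 1
  # weighted average: old mean weighted by its count, the new point by 1
  centers[j, ] <- centers[j, ] + (x - centers[j, ]) / counts[j]
}
```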
null
CC BY-SA 3.0
null
2014-06-25T15:46:21.267
2014-06-25T15:46:21.267
null
null
924
null
585
2
null
103
6
null
DBSCAN (see also: Generalized DBSCAN) does not require a distance. All it needs is a binary decision. Commonly one would use "distance < epsilon", but nothing says you cannot use "similarity > epsilon" instead. The triangle inequality etc. is not required. Affinity propagation, as the name says, uses similarities. Hierarchical clustering, except for maybe Ward linkage, does not make any assumption. In many implementations you can just use negative distances when you have similarities, and it will work just fine, because all that is needed is min, max, and <. Kernel k-means could work IF your similarity is a good kernel function. Think of it as computing k-means in a different vector space, where Euclidean distance corresponds to your similarity function. But then you need to know k. PAM (k-medoids) should work. Assign each object to the most similar medoid, then choose the object with the highest average similarity as the new medoid... no triangle inequality needed. ... and probably many, many more. There are literally hundreds of clustering algorithms. Most should work IMHO. Very few seem to actually require metric properties. K-means has probably the strongest requirements: it minimizes variance (not distance, or similarity), and you must be able to compute means.
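A minimal sketch of the hierarchical-clustering route in R, assuming a hypothetical symmetric similarity matrix `S` (larger = more similar): any monotone conversion to a dissimilarity is enough for average linkage.
```
d  <- as.dist(max(S) - S)             # similarity -> dissimilarity
hc <- hclust(d, method = "average")
plot(hc)
cutree(hc, k = 3)                     # k chosen arbitrarily for illustration
```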
null
CC BY-SA 3.0
null
2014-06-25T15:53:30.723
2014-06-25T15:53:30.723
null
null
924
null
586
1
910
null
7
2398
I'm new to the world of text mining and have been reading up on annotators at places like the [UIMA website](http://uima.apache.org/). I'm encountering many new terms like named entity recognition, tokenizer, lemmatizer, gazetteer, etc. Coming from a layman background, this is all very confusing, so can anyone tell me, or link to resources that can explain, what the main categories of annotators are and what they do?
What are the main types of NLP annotators?
CC BY-SA 3.0
null
2014-06-25T17:37:23.380
2015-10-12T07:20:26.220
null
null
1192
[ "nlp", "text-mining" ]
587
1
null
null
14
29355
There are plenty of sources which provide historical stock data, but they only provide the OHLC fields along with volume and adjusted close. A couple of sources I found provide market cap data sets, but they're restricted to US stocks. Yahoo Finance provides this data online, but there's no option to download it (or none I am aware of). - Where can I download this data for stocks belonging to various top stock exchanges across countries by using their ticker names? - Is there some way to download it via Yahoo Finance or Google Finance? I need data for the last decade or so, and hence need some script or API which would do this.
Where can I download historical market capitalization and daily turnover data for stocks?
CC BY-SA 3.0
null
2014-06-25T18:06:14.293
2021-04-04T12:43:17.083
2015-07-22T12:34:31.783
97
793
[ "dataset" ]
588
2
null
587
7
null
Quant SE is a better place for questions related to getting financial data: - What data sources are available online - Market capitalization data | Quantitative Finance
null
CC BY-SA 4.0
null
2014-06-25T18:12:04.010
2021-03-30T08:44:47.947
2021-03-30T08:44:47.947
85045
97
null
591
2
null
468
3
null
If you are totally unfamiliar with ordinal regression, I would try to read the Tabachnick / Fidell ([http://www.pearsonhighered.com/educator/product/Using-Multivariate-Statistics-6E/0205849571.page](http://www.pearsonhighered.com/educator/product/Using-Multivariate-Statistics-6E/0205849571.page)) chapter on the topic first - while not written for R, the book is very good at conveying the general logic and the "do's" and "do nots". As a question: what are your response categories exactly? If they are some sort of scale, like "good - bad", it would be OK to use a linear regression (market research does it all the time...), but if the items are more disjunct, an ordinal regression might be better. I dimly remember that some books about structural equation modelling mentioned that linear regression was superior to probit for good scales - but I cannot recall the book at the moment, sorry! The most serious problem might be the number of dummy variables - a couple of hundred dummy variables will make the analysis slow, hard to interpret and probably unstable - are there enough cases for each dummy / dummy combination?
null
CC BY-SA 3.0
null
2014-06-26T06:51:12.243
2014-06-26T06:51:12.243
null
null
791
null
592
2
null
554
4
null
I usually take a two-step approach: - compute univariate (variable-by-variable) summary statistics such as mean, range, variance, number of missing values, cardinality, etc. for each variable and look for oddities (e.g. a range that is not plausible given the meaning of the variable). Plot histograms for those odd variables. - split the data into manageable subsets (choose a meaningful variable and split the data according to it, e.g. all positive examples and all negative ones) and explore them visually (e.g. with ggobi). In particular, use tools like brushing and scatter plots to understand how variables are linked together. And when you start building models, make sure to plot the residuals, looking for extreme errors that might be due to an outlier, or look at the confusion matrix and make sure it is balanced. Use k-fold cross-validation to optimize your models and look at the variance of the training error for each fold; if one fold performs much worse than the others, it may contain outliers.
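A minimal sketch of the univariate first pass in R, for a hypothetical data frame `df` (the column name in the histogram call is illustrative).
```
summary(df)                                   # mean, quartiles, NAs per column
sapply(df, function(x) length(unique(x)))     # cardinality of each variable
sapply(df, function(x) sum(is.na(x)))         # missing values per variable

hist(df$suspicious_var, breaks = 50)          # inspect a variable with an odd range
```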
null
CC BY-SA 3.0
null
2014-06-26T07:41:52.403
2014-06-26T07:41:52.403
null
null
172
null
593
2
null
536
3
null
As many have answered, it is always best to avoid anything done manually as it is less reproducible/documentable. Your point about the overhead of writing a script vs. opening and fixing the file is valid, though. Best practice is often to - keep an untouched version of the data - build a working copy of the data with errors fixed - have a way to re-create a working copy from the original data The last point can be done with a script. Then make sure to be as specific as needed to modify only the data you want to modify, and to write the script in such a way that adding a fix by modifying the script is as easy as modifying the data directly. If your data lie in files, you can also use diffs/patches to store the original data along with the patches needed to produce the working data. To generate them, duplicate your working copy, perform the change, extract the diff/patch, save it, and delete the previous working copy.
null
CC BY-SA 3.0
null
2014-06-26T08:01:40.763
2014-06-26T12:39:40.443
2014-06-26T12:39:40.443
172
172
null
594
1
null
null
5
655
While doing a Google image search, the page displays some automatically derived categories for the images of the topic being searched for. I'm interested in learning how this works, and how it chooses and creates categories. Unfortunately, I couldn't find much about it at all. Is anyone able to shed some light on algorithms they may be using to do this, and on what basis these categories are created? For example, if I search for "animals" I get the categories: > "cute", "baby", "wild", "farm", "zoo", "clipart". If I go into "wild", I then have subcategories: > "forest", "baby", "africa", "clipart", "rainforest", "domestic".
How does Google categorize results from its image search?
CC BY-SA 3.0
null
2014-06-26T12:11:51.253
2016-10-11T12:03:51.227
2014-06-26T16:19:49.043
84
1206
[ "machine-learning", "classification", "google", "search" ]
595
1
611
null
2
2175
I'm new to machine learning, but I have an interesting problem. I have a large sample of people and the sites they visited. Some people have indicated their gender, age, and other parameters. Now I want to restore these parameters for each user. Which way should I look? Which algorithm is suitable for solving this problem? I'm familiar with neural networks (supervised learning), but it seems they don't fit.
How to use neural networks with large and variable number of inputs?
CC BY-SA 3.0
null
2014-06-26T12:25:55.663
2014-06-27T19:18:11.433
2014-06-26T16:25:31.680
84
1207
[ "machine-learning", "data-mining", "algorithms", "neural-network" ]
596
1
5759
null
10
923
There are several classic datasets for machine learning classification/regression tasks. The most popular are: - Iris Flower Data Set; - Titanic Data Set; - Motor Trend Cars; - etc. But does anyone know similar datasets for network analysis / graph theory? More concretely, I'm looking for gold-standard datasets for comparing/evaluating/learning: - centrality measures; - network clustering algorithms. I don't need a huge list of publicly available networks/graphs, but a couple of actual must-know datasets. EDIT: It's quite difficult to provide exact features for a "gold standard dataset", but here are some thoughts. I think a real classic dataset should satisfy these criteria: - Multiple references in articles and textbooks; - Inclusion in well-known network analysis software packages; - Sufficient time of existence; - Usage in a number of courses on graph analysis. Concerning my field of interest, I also need labeled classes for vertices and/or precomputed (or predefined) "authority scores" (i.e. centrality estimates). After asking this question I continued searching, and here are some suitable examples: - Zachary's Karate Club: introduced in 1977, cited more than 1.5k times (according to Google Scholar), vertices have the attribute Faction (which can be used for clustering). - Erdos Collaboration Network: unfortunately, I haven't found this network in the form of a data file, but it's rather famous, and if someone enriches the network with mathematicians' specialisation data, it could also be used for testing clustering algorithms.
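A minimal sketch showing how Zachary's karate club (bundled with igraph as a named graph) can be used to exercise both centrality measures and community detection.
```
library(igraph)

g <- make_graph("Zachary")                     # the 34-node karate club network

sort(betweenness(g), decreasing = TRUE)[1:5]   # most central vertices
sort(degree(g), decreasing = TRUE)[1:5]

comm <- cluster_fast_greedy(g)                 # one of several community methods
membership(comm)
```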
Network analysis classic datasets
CC BY-SA 3.0
null
2014-06-26T13:32:18.050
2015-05-11T21:08:34.500
2014-06-27T05:22:25.863
941
941
[ "dataset", "graphs" ]
597
2
null
594
3
null
I don't work at Google, but I think it is some sort of recommendation system based on the words that millions of users have searched before. For example, people who search for "animals" often also search for "wild animals". It is like many online stores that recommend you buy something in addition to the product you are looking for, based on the previous purchases of other users. There are many approaches to building such a recommendation system using machine learning; no one knows for sure what Google uses.
null
CC BY-SA 3.0
null
2014-06-26T13:45:16.573
2014-06-26T13:45:16.573
null
null
478
null
598
2
null
595
2
null
There are many possibilities for filling empty gaps in data. - Most repeated value: fill the gaps with the most common value. - Create a distribution: make the histogram and draw values according to that distribution. - Create a new label: since you do not have information, do not assume any value and create another label/category to indicate that the value is missing. - Create a classifier: model the relation between the variable with empty gaps and the rest of the data with a simple classifier, and use it to fill in the gaps. There are many others, but these are the most common strategies. My suggestion is not to fill at all, and to keep unknown what is unknown.
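A minimal sketch of the first and third strategies in R, for a hypothetical factor column `df$gender` containing NAs.
```
# 1. Most repeated value (mode imputation)
mode_val <- names(sort(table(df$gender), decreasing = TRUE))[1]
df$gender_mode <- ifelse(is.na(df$gender), mode_val, as.character(df$gender))

# 2. New label: treat "unknown" as its own category
df$gender_label <- as.character(df$gender)
df$gender_label[is.na(df$gender_label)] <- "unknown"
```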
null
CC BY-SA 3.0
null
2014-06-26T14:17:44.077
2014-06-26T14:17:44.077
null
null
1155
null
599
2
null
596
1
null
The only thing I know about is benchmark data for Graph Databases, such as Neo4j. You may find links similar to this one: [http://istc-bigdata.org/index.php/benchmarking-graph-databases/](http://istc-bigdata.org/index.php/benchmarking-graph-databases/) where you can find data to test network analysis and graph theory. Furthermore, you could play with the API of Twitter/Facebook to collect your own data. This is also a suggestion in case you do not find the data you are looking for.
null
CC BY-SA 3.0
null
2014-06-26T14:21:10.513
2014-06-26T14:21:10.513
null
null
1155
null
600
1
null
null
11
2430
I am working on a political campaign where dozens of volunteers will be conducting door-knocking promotions over the next few weeks. Given a list with names, addresses, and long/lat coordinates, what algorithms can be used to create an optimized walk list?
How do you create an optimized walk list given longitude and latitude coordinates?
CC BY-SA 3.0
null
2014-06-26T14:39:06.410
2014-07-07T14:41:59.523
2014-06-26T18:15:57.907
434
1208
[ "algorithms" ]
602
2
null
594
3
null
I thought of expanding a bit on the answer by Stanpol. While a recommendation system is one approach to suggesting related queries, a more standard information-retrieval approach is the query expansion technique. Generally speaking, query expansion involves selecting additional terms from the top-ranked documents retrieved in response to an initial query. Terms are typically selected by a combination of a term scoring function such as tf-idf and a co-occurrence-based measure. For example, in response to the query term "animal", a term selection function may choose the term "zoo", because - "zoo" may be a dominating term (high tf-idf) in the top (say 10) documents retrieved in response to the query "animal" - "zoo" may co-occur frequently (in close proximity) with the original query term "animal" in these documents
null
CC BY-SA 3.0
null
2014-06-26T16:19:46.480
2014-06-26T23:01:36.187
2014-06-26T23:01:36.187
984
984
null
603
2
null
600
4
null
People see something closely related to the [Travelling Salesman Problem](http://en.wikipedia.org/wiki/Travelling_salesman_problem) and think that it can't be solved. A good deal of work has been done on this topic and not all of it indicates that a solution is not available. Depending on the parameters and the desired solution, you may be able to find something that will work. You may want to give a look at the [OpenOpt](http://openopt.org/TSP) python library. Another resource to look at would be the [TSP Solver and Generator](http://tspsg.info/). If you are using R, there is a [TSP package available](http://tsp.r-forge.r-project.org/). Actually implementing a solution to your problem is a little too much to cover here, but this should provide a good starting point. Within these packages and at the documentation within the links that I provided for you, you will find that there are a fairly wide variety of algorithmic strategies available. You have a small geographic region and a small set of "salespeople", so the computational power needed to calculate a strategy within a reasonable time frame should be available on your desktop. In practical terms, you don't need to find the absolutely most optimal strategy. You just need a very good one. Pick a TSP package that looks the least overwhelming and give it a go.
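A minimal sketch with the R TSP package mentioned above; `coords` is a hypothetical data frame of one volunteer's addresses with `lon` and `lat` columns (plain Euclidean distance on coordinates is only a rough proxy for walking distance).
```
library(TSP)

d    <- dist(coords[, c("lon", "lat")])
tsp  <- TSP(d)
tour <- solve_TSP(tsp, method = "nearest_insertion")

coords[as.integer(tour), ]   # addresses in visiting order
```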
null
CC BY-SA 3.0
null
2014-06-26T18:01:06.443
2014-06-26T18:10:42.907
2014-06-26T18:10:42.907
434
434
null
604
2
null
594
2
null
Google isn't going to give away their proprietary work, but we can speculate. Here's what I can gather from my limited usage: - The recommendations do not seem to be user, geography, or history specific. - There is never an empty recommendation (one that returns no results) - There is not always a recommendation (some searches just return images) - The recommendations are not always the same (consecutive searches sometimes return different recommendations) - Result ordering shifts regularly (search for a specific image and it won't always be in the same place) - Very popular searches seem to be pre-calculated and more static than unpopular searches. - Recommendations are not always one additional word, and recommendations do not always include the base query. It seems to me that they do this based on the history of the general end user population, they rotate recommendations when there are many popular ones, and they do some additional processing to determine that the result set is of a reasonable size. I would postulate that it works as follows: - Use consecutive search strings from users (short-to-long-tail searches) as training data for a machine-learning algorithm. - Run searches that occur > N amount of times a week against that recommendation algorithm. - Validate and clean results. - Push them out to the general population in rotation/A-B testing. - Track click-throughs. - Refine results over time.
null
CC BY-SA 3.0
null
2014-06-26T19:16:31.470
2014-06-26T19:16:31.470
null
null
434
null
605
1
null
null
6
3013
I'm building a recommender system and using SVD as one of the preprocessing techniques. However, I want to normalize all my preprocessed data between 0 and 1 because all of my similarity measures (cosine, pearson, euclidean) depend on that assumption. After I take the SVD (A = USV^T), is there a standard way to normalize the matrix 'A' between 0 and 1? Thanks! Edit: I want all of my similarity measurements to give results between 0 and 1 and my normalized euclidean distance in particular fails if the input matrix does not have values between 0 and 1.
How to normalize results of Singular Value Decomposition (SVD) between 0 and 1?
CC BY-SA 3.0
null
2014-06-26T19:23:46.043
2014-07-01T18:48:13.757
2014-06-27T00:22:33.810
838
838
[ "machine-learning", "statistics", "recommender-system", "feature-selection" ]
606
2
null
554
3
null
Folks here have stated great steps, but I think there is great information at the following link: [what I do when I get a new data set as told through tweets](http://simplystatistics.org/2014/06/13/what-i-do-when-i-get-a-new-data-set-as-told-through-tweets/). It sums up the steps folks tweeted in answer to @hmason's great question "Data people: What is the very first thing you do when you get your hands on a new data set?" Hope it will be useful.
null
CC BY-SA 3.0
null
2014-06-26T20:49:38.470
2014-06-26T20:49:38.470
null
null
1220
null
607
1
629
null
4
1459
I am brand new to the field of data science, want to break into it, and there are so many tools out there. These VMs have a lot of software on them, but I haven't been able to find any side-by-side comparison. Here's a start from my research, but if someone could tell me that one is objectively more feature-rich, has a larger community of support, and is more useful for getting started, that would help greatly: datasciencetoolKIT.org -> the VM is on Vagrant Cloud (4 GB) and seems to be more "hip", with R, IPython Notebook, and other useful command-line tools (html->txt, json->xml, etc.). There is a book being released in August with more detail. datasciencetoolBOX.org -> the VM is a Vagrant box (24 GB) downloadable from their website. There seem to be more features here, and more literature.
Do you need a virtual machine for data science?
CC BY-SA 4.0
null
2014-06-26T22:17:52.570
2019-05-07T04:23:55.110
2019-05-07T04:23:55.110
1330
1223
[ "tools", "software-recommendation" ]
608
1
612
null
9
623
I have just learned about regularisation as an approach to control over-fitting, and I would like to incorporate the idea into a simple implementation of backpropagation and [Multilayer perceptron](http://en.wikipedia.org/wiki/Multilayer_perceptron) (MLP) that I put together. Currently to avoid over-fitting, I cross-validate and keep the network with best score so far on the validation set. This works OK, but adding regularisation would benefit me in that correct choice of the regularisation algorithm and parameter would make my network converge on a non-overfit model more systematically. The formula I have for the update term (from Coursera ML course) is stated as a batch update e.g. for each weight, after summing all the applicable deltas for the whole training set from error propagation, an adjustment of `lambda * current_weight` is added as well before the combined delta is subtracted at the end of the batch, where `lambda` is the regularisation parameter. My implementation of backpropagation uses per-item weight updates. I am concerned that I cannot just copy the batch approach, although it looks OK intuitively to me. Does a smaller regularisation term per item work just as well? For instance `lambda * current_weight / N` where N is size of training set - at first glance this looks reasonable. I could not find anything on the subject though, and I wonder if that is because regularisation does not work as well with a per-item update, or even goes under a different name or altered formula.
Any differences in regularisation in MLP between batch and individual updates?
CC BY-SA 3.0
null
2014-06-26T22:58:32.380
2014-06-27T12:28:25.467
2014-06-27T09:52:53.277
836
836
[ "neural-network" ]
609
2
null
410
26
null
Below is a very good note (page 12) on the learning rate in neural nets (backpropagation) by Andrew Ng; you will find details relating to the learning rate there. [http://web.stanford.edu/class/cs294a/sparseAutoencoder_2011new.pdf](http://web.stanford.edu/class/cs294a/sparseAutoencoder_2011new.pdf) For your 4th point: you're right that normally one has to choose a "balanced" learning rate that should neither overshoot nor converge too slowly. One can plot the descent of the cost function against the learning rate to diagnose/fine-tune it. In practice, Andrew normally uses the L-BFGS algorithm (mentioned on page 12) to get a "good enough" learning rate.
null
CC BY-SA 3.0
null
2014-06-26T23:46:22.487
2014-06-27T20:17:32.853
2014-06-27T20:17:32.853
322
1224
null
611
2
null
595
0
null
I had almost the same problem: 'restoring' age, gender, and location for social network users. But I used users' ego networks, not visited-site statistics. And I faced two almost independent tasks: - 'Restoring' or 'predicting' the data. You can use a bunch of different techniques to complete this task, but my vote is for the simplest ones (KISS, yes). E.g., in my case, for age prediction, the mean of ego-network users' ages gave satisfactory results (for about 70% of users the error was less than +/- 3 years, which in my case was enough). It's just an idea, but you could try a weighted average for age prediction, defining the weight as a similarity measure between the visited-site sets of the current user and other users. - Evaluating prediction quality. The algorithm from task 1 will produce a prediction in almost all cases. And the second task is to determine whether the prediction is reliable. E.g., in the case of ego networks and age prediction: can we trust the prediction if a user has only one 'friend' in his ego network? This task is more about machine learning: it's a binary classification problem. You need to compose a feature set and form training and test samples from your data with both right and wrong predictions. Creating an appropriate classifier will help you filter out unpredictable users. But you need to determine what your feature set is. I used a number of network metrics, and summary statistics on the distribution of the feature of interest among the ego network. This approach wouldn't fill all the gaps, but only the predictable ones.
null
CC BY-SA 3.0
null
2014-06-27T07:06:14.297
2014-06-27T07:06:14.297
null
null
941
null
612
2
null
608
2
null
Regularization is relevant in per-item learning as well. I would suggest starting with a basic validation approach for finding lambda, whether you are doing batch or per-item learning. This is the easiest and safest approach. Try a number of different values manually, e.g. 0.001, 0.003, 0.01, 0.03, 0.1, etc., and see how your validation set behaves. Later on you may automate this process by introducing a linear or local search method. As a side note, I believe the value of lambda should be considered in relation to the updates of the parameter vector, rather than the training set size. For batch training you have one parameter update per dataset pass, while for online learning you have one update per sample (regardless of the training set size). I recently stumbled upon this [Crossvalidated Question](https://stats.stackexchange.com/questions/64224/regularization-and-feature-scaling-in-online-learning), which seems quite similar to yours. There is a link to a paper about [a new SGD algorithm](http://leon.bottou.org/publications/pdf/jmlr-2009.pdf), with some relevant content. It might be useful to take a look (especially pages 1742-1743).
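A minimal sketch of that manual search, assuming hypothetical helper functions train_mlp(train, lambda) and valid_error(model, valid) that wrap your own network code.
```
lambdas <- c(0.001, 0.003, 0.01, 0.03, 0.1, 0.3)

errors <- sapply(lambdas, function(l) {
  model <- train_mlp(train, lambda = l)   # your training routine
  valid_error(model, valid)               # error on the held-out set
})

best_lambda <- lambdas[which.min(errors)]
plot(lambdas, errors, log = "x", type = "b")
```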
null
CC BY-SA 3.0
null
2014-06-27T07:19:11.420
2014-06-27T09:54:54.977
2017-04-13T12:44:20.183
-1
1085
null
613
1
619
null
2
859
I understand Hadoop MapReduce and its features, but I am confused about R MapReduce. One difference I have read about is that R works in memory (it can use a lot of RAM), so R is integrated with Hadoop to perform parallel processing. ## My doubt is: - R can do all the stats, math, and data-science-related stuff, but why R MapReduce? - Is there any new task I can achieve by using R MapReduce instead of Hadoop MapReduce? If yes, please specify. - We can achieve the task by using R with Hadoop (directly), but what is the importance of MapReduce in R and how is it different from normal MapReduce?
Difference Between Hadoop Mapreduce(Java) and RHadoop mapreduce
CC BY-SA 3.0
0
2014-06-27T12:03:53.357
2014-06-27T19:39:00.247
2014-06-27T19:39:00.247
322
1235
[ "machine-learning", "r", "apache-hadoop", "map-reduce" ]
614
2
null
608
2
null
To complement what insys said: regularization is used when computing the backpropagation update for all the weights in your MLP. Instead of computing the gradient with respect to all the inputs of the training set (`batch`), you only use some or one item(s) (`stochastic or semi-stochastic`). You will end up limiting the update to one item instead of all of them, which is also correct. Also, if I remember correctly, Andrew Ng used `L2-regularization`. The `/N` in `lambda * current_weight / N` is not mandatory; it just helps rescale the term. However, if you choose not to use it, you will (in most cases) have to select another value for `lambda`. You can also use the [grid search algorithm](http://en.wikipedia.org/wiki/Hyperparameter_optimization) to choose the best value for `lambda` (the hyperparameter => the one you have to choose).
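A minimal sketch of a single per-item update with an L2 penalty; `w`, `grad`, `lr`, `lambda`, and `N` (weight vector, gradient for one training item, learning rate, regularization strength, training-set size) are hypothetical objects from your own implementation.
```
# per-item update with the rescaled penalty
w <- w - lr * (grad + (lambda / N) * w)

# equivalent form without the /N rescaling; lambda must then be re-tuned
# w <- w - lr * (grad + lambda * w)
```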
null
CC BY-SA 3.0
null
2014-06-27T12:28:25.467
2014-06-27T12:28:25.467
null
null
968
null
615
2
null
468
3
null
One standard reference written from a social science perspective is [J Scott Long's Limited Dependent Variables](http://rads.stackoverflow.com/amzn/click/0803973748) book. It goes much deeper than, say, the Tabachnick book suggested in [another answer](https://datascience.stackexchange.com/a/591/1237): Tabachnick is a cookbook at best, with little to no explanation of the "why", and it seems like you would benefit from figuring this out in more detail, which can be found in Long's book. Ordinal regression should be covered in most introductory econometrics courses (Wooldridge's [Cross-Section and Panel Data](http://rads.stackoverflow.com/amzn/click/0262232588) is a great graduate-level book), as well as quantitative social science courses (sociology, psychology), although I would imagine that the latter will loop back to Long's book. Given that your number of variables is wa-a-ay lower than the sample size, the R package you should be looking at is probably [ordinal](http://cran.r-project.org/web/packages/ordinal/) rather than `glmnetcr`. [Another answer](https://datascience.stackexchange.com/a/469/1237) mentioned that you can find this functionality in the more mainstream `MASS` package.
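A minimal sketch of an ordinal (cumulative link) model with the `ordinal` package; the data frame `dat`, the ordered response `rating`, and the predictors are hypothetical placeholders.
```
library(ordinal)

dat$rating <- factor(dat$rating, ordered = TRUE)
fit <- clm(rating ~ age + group + price, data = dat)
summary(fit)

# MASS::polr is the more mainstream alternative mentioned above:
# fit2 <- MASS::polr(rating ~ age + group + price, data = dat, Hess = TRUE)
```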
null
CC BY-SA 3.0
null
2014-06-27T13:22:22.577
2014-06-27T13:22:22.577
2017-04-13T12:50:41.230
-1
1237
null
616
1
null
null
7
1366
I have a matrix that is populated with discrete elements, and I need to cluster them (using R) into intact groups. So, for example, take this matrix: ``` [A B B C A] [A A B A A] [A B B C C] [A A A A A] ``` There would be two separate clusters for A, two separate clusters for C, and one cluster for B. The output I'm looking for would ideally assign a unique ID to each cluster, something like this: ``` [1 2 2 3 4] [1 1 2 4 4] [1 2 2 5 5] [1 1 1 1 1] ``` Right now I have written code that does this recursively by iteratively checking the nearest neighbors, but it quickly overflows when the matrix gets large (i.e., 100x100). Is there a built-in function in R that can do this? I looked into raster and image processing, but no luck. I'm convinced it must be out there.
Identifying “clusters” or “groups” in a matrix
CC BY-SA 3.0
null
2014-06-27T15:58:19.340
2014-06-28T14:34:17.487
2014-06-27T17:06:52.027
322
1240
[ "r", "clustering" ]
617
2
null
616
2
null
What do you think the distance measure is in your case? I assume there are three dimensions here: - RowN (row number) - ColN (column number) - Value (value: A, B or C) That means the data you get from the `4x5` matrix looks like: ``` Sample1 -> (1, 1, A) Sample2 -> (1, 2, B) ... Sample5 -> (1, 5, A) Sample6 -> (2, 1, A) ... Sample15 -> (3, 5, C) ... Sample20 -> (4, 5, A) ``` Is `value` scaled? In other words, is `A < B < C`? If yes, then - you can replace {A, B, C} with {0, 1, 2} (or maybe {10, 11, 12}, if you want this difference to be less important than the RowN and ColN attributes) - normalize your data - use, for example, the K-means clustering algorithm (http://stat.ethz.ch/R-manual/R-patched/library/stats/html/kmeans.html) from the stats R package In that case the distance between two samples will be: ``` Sqrt( (RowN1-RowN2)^2 + (ColN1-ColN2)^2 + (Value1-Value2)^2 ) ``` If `value` is not scaled (a regular categorical variable), use some [modifications of K-means that work with categorical data](https://datascience.stackexchange.com/questions/22/k-means-clustering-for-mixed-numeric-and-categorical-data). So in the case of a 100x100 matrix you have 10000 observations and three variables, which is a pretty trivial sample size.
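A minimal sketch of that encoding plus k-means, assuming a hypothetical character matrix `M` with entries "A", "B", "C" and that the values are ordered; the number of clusters is an arbitrary choice.
```
cells <- expand.grid(RowN = seq_len(nrow(M)), ColN = seq_len(ncol(M)))
cells$Value <- match(as.vector(M), c("A", "B", "C")) - 1   # A=0, B=1, C=2

X  <- scale(cells)                 # normalize the three dimensions
km <- kmeans(X, centers = 5)

clusters <- matrix(km$cluster, nrow = nrow(M))   # cluster IDs in matrix layout
```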
null
CC BY-SA 3.0
null
2014-06-27T16:40:17.393
2014-06-27T16:49:26.077
2017-04-13T12:50:41.230
-1
97
null
618
2
null
595
1
null
Although adesantos has already given a good answer, I would like to add a little background information. The name of the problem you are looking at is "imputation". As adesantos already said, one of the possibilities is to fit a distribution. For example, you could fit a multivariate Gaussian to the data: you estimate the mean and the covariances only from the values you know. You can then use standard MVG results to impute the missing data linearly. This is probably the simplest probabilistic method of imputation, and it is already quite involved. If you are into neural networks, a recently proposed method that can do this is deep latent Gaussian models by Rezende et al. However, understanding the method requires a lot of neural net knowledge and quite some knowledge of variational Bayes and Markov chains. Another method, which I have heard works well, is to train a generative stochastic network (Bengio et al). This is done by training a denoising autoencoder on the data you have (neglecting missing values in the reconstruction loss). Say you have a reconstruction function f and an input x. Then you reconstruct it via x' = f(x). You then reset the values of x' to those you know from x (i.e. from the reconstruction you only keep the values that were missing before). If you do this many times, you are guaranteed to sample from the distribution given the values you know. But in either case, these methods require quite some knowledge of statistics and neural nets.
null
CC BY-SA 3.0
null
2014-06-27T19:18:11.433
2014-06-27T19:18:11.433
null
null
1193
null
619
2
null
613
2
null
[rhadoop](https://github.com/RevolutionAnalytics/RHadoop) (the part you are interested in is now called [rmr2](https://github.com/RevolutionAnalytics/rmr2)) is simply a client API for MapReduce written in R. You invoke MapReduce using R package APIs, and send an R function to the workers, where it is executed by an R interpreter locally. But it is otherwise exactly the same MapReduce. You can call anything you like in R this way, but no R functions are themselves parallelized to use MapReduce in this way. The point is simply that you can invoke M/R from R. I don't think it somehow lets you do anything more magical than that.
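A minimal sketch of what that looks like, closely following rmr2's introductory example (squaring a vector of integers); it assumes a working Hadoop plus rmr2 installation.
```
library(rmr2)

small_ints <- to.dfs(1:1000)                 # write the data to HDFS

result <- mapreduce(
  input = small_ints,
  map   = function(k, v) keyval(v, v^2)      # this R function runs on the workers
)

out <- from.dfs(result)
head(values(out))
```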
null
CC BY-SA 3.0
null
2014-06-27T19:37:44.847
2014-06-27T19:37:44.847
null
null
21
null
620
2
null
454
12
null
Yes, it's problematic. If you oversample the minority, you risk overfitting. If you undersample the majority, you risk missing aspects of the majority class. Stratified sampling, btw, is equivalent to assigning non-uniform misclassification costs. Alternatives: (1) Independently sampling several subsets from the majority class and making multiple classifiers by combining each subset with all the minority class data, as suggested in the answer from @Debasis and described in this [EasyEnsemble paper](http://cse.seu.edu.cn/people/xyliu/publication/tsmcb09.pdf), (2) [SMOTE (Synthetic Minority Oversampling Technique)](http://arxiv.org/pdf/1106.1813.pdf) or [SMOTEBoost, (combining SMOTE with boosting)](http://www3.nd.edu/~nchawla/papers/ECML03.pdf) to create synthetic instances of the minority class by interpolating between nearest neighbors in the feature space. SMOTE is implemented in R in [the DMwR package](http://cran.r-project.org/web/packages/DMwR/index.html).
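A minimal sketch of SMOTE via the DMwR package mentioned above; `train` is a hypothetical data frame whose factor column `Class` holds the rare target, and the over/under percentages are illustrative.
```
library(DMwR)

set.seed(1)
balanced <- SMOTE(Class ~ ., data = train,
                  perc.over = 200,     # generate 200% synthetic minority cases
                  perc.under = 150)    # keep 150% as many majority cases

table(train$Class)
table(balanced$Class)
```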
null
CC BY-SA 3.0
null
2014-06-27T21:45:23.237
2014-06-27T21:45:23.237
null
null
953
null
621
2
null
455
5
null
There is a bunch of datasets made free by UC Irvine [to play with here](https://archive.ics.uci.edu/ml/datasets.html). Among those datasets, [there are a few dozen textual datasets](https://archive.ics.uci.edu/ml/datasets.html?format=&task=&att=&area=&numAtt=&numIns=&type=text&sort=nameUp&view=table) that might help you guys with your task. Those are kind of generic datasets, so depending on your purpose they should not be used as the only data to train your models, or else your model -- while it might work -- will not produce quality results.
null
CC BY-SA 3.0
null
2014-06-28T01:07:27.263
2014-06-28T01:07:27.263
null
null
553
null
622
2
null
616
1
null
I'm not sure if your question classifies as a clustering problem. In clustering you are trying to discover clusters of similar examples using unlabelled data. Here, it seems you wish to enumerate existing "clusters" of nearby nodes. To be honest, I have no idea of such a function in R. But, as far as the algorithm is concerned, I believe what you are looking for is [Connected-Component Labeling](http://en.wikipedia.org/wiki/Connected-component_labeling). Kind of a bucket fill, for matrices. The Wikipedia article is linked above. One of the algorithms presented there, termed the single-pass algorithm, is as follows (MATLAB-style pseudocode):
```
One-Pass(Image)
  [M, N] = size(Image);
  Connected = zeros(M, N);
  Mark = Value;
  Difference = Increment;
  Offsets = [-1; M; 1; -M];
  Index = [];
  No_of_Objects = 0;

  for i: 1:M
    for j: 1:N
      if (Image(i, j) == 1)
        No_of_Objects = No_of_Objects + 1;
        Index = [((j-1)*M + i)];
        Connected(Index) = Mark;
        while ~isempty(Index)
          Image(Index) = 0;
          Neighbors = bsxfun(@plus, Index, Offsets');
          Neighbors = unique(Neighbors(:));
          Index = Neighbors(find(Image(Neighbors)));
          Connected(Index) = Mark;
        end
        Mark = Mark + Difference;
      end
    end
  end
```
I guess it'd be easy to roll your own using the above.
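Since the question asks for R, here is a minimal non-recursive sketch of the same idea using igraph: link 4-neighbour cells that hold the same value and take connected components; `M` is a hypothetical character matrix like the one in the question.
```
library(igraph)

nr <- nrow(M); nc <- ncol(M)
id <- function(r, c) (c - 1) * nr + r     # cell -> vertex index (column-major)

edges <- c()
for (rr in seq_len(nr)) {
  for (cc in seq_len(nc)) {
    if (rr < nr && M[rr, cc] == M[rr + 1, cc]) edges <- c(edges, id(rr, cc), id(rr + 1, cc))
    if (cc < nc && M[rr, cc] == M[rr, cc + 1]) edges <- c(edges, id(rr, cc), id(rr, cc + 1))
  }
}

g <- make_graph(edges, n = nr * nc, directed = FALSE)
labels <- matrix(components(g)$membership, nrow = nr)  # unique ID per connected group
```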
null
CC BY-SA 3.0
null
2014-06-28T14:34:17.487
2014-06-28T14:34:17.487
null
null
1085
null
623
1
null
null
13
349
I'm currently using several different classifiers on various entities extracted from text, and using precision/recall as a summary of how well each separate classifier performs across a given dataset. I'm wondering if there's a meaningful way of comparing the performance of these classifiers in a similar way, but which also takes into account the total numbers of each entity in the test data that's being classified? Currently, I'm using precision/recall as a measure of performance, so might have something like:
```
                     Precision   Recall
Person classifier       65%       40%
Company classifier      98%       90%
Cheese classifier       10%       50%
Egg classifier         100%      100%
```
However, the dataset I'm running these on might contain 100k people, 5k companies, 500 cheeses, and 1 egg. So is there a summary statistic I can add to the above table which also takes into account the total number of each item? Or is there some way of measuring the fact that e.g. 100% prec/rec on the Egg classifier might not be meaningful with only 1 data item? Let's say we had hundreds of such classifiers, I guess I'm looking for a good way to answer questions like "Which classifiers are underperforming? Which classifiers lack sufficient test data to tell whether they're underperforming?".
Measuring performance of different classifiers with different sample sizes
CC BY-SA 3.0
null
2014-06-28T14:57:18.177
2020-08-06T11:03:56.237
2017-06-05T19:30:18.057
31513
474
[ "classification", "performance" ]
624
2
null
607
2
null
In most cases a practicing data scientist creates his own working environment on a personal computer by installing preferred software packages. Normally this is sufficient and an efficient use of computing resources, because to run a virtual machine (VM) on your main machine you have to allocate a significant portion of RAM to it. The software will run noticeably slower on both the main and the virtual machine unless there is a lot of RAM. Due to this impact on speed it is not common to use VMs as the main working environment, but they are a good solution in several cases when an additional working environment is needed. VMs should be considered when: - There is a need to easily replicate a number of identical computing environments when teaching a course or doing a presentation at a conference. - There is a need to save and recreate an exact environment for an experiment or a calculation. - There is a need to run a different OS or to test a solution on a tool that runs on a different OS. - One wants to try out a bundle of software tools before installing them on the main machine. E.g. there is an opportunity to install an instance of Hadoop (CDH) on a VM during the Intro to Hadoop course on Udacity. - VMs are sometimes used for fast deployment in the cloud, like AWS EC2, Rackspace, etc. The VMs mentioned in the original question are made as easily installable data science software bundles. There are more than these two. This [blog post](http://jeroenjanssens.com/2013/12/07/lean-mean-data-science-machine.html) by Jeroen Janssens gives a comparison of at least four: - Data Science Toolbox - Mining the Social Web - Data Science Toolkit - Data Science Box
null
CC BY-SA 3.0
null
2014-06-29T07:06:00.430
2014-06-29T07:20:51.053
2014-06-29T07:20:51.053
454
454
null
625
2
null
596
4
null
Maybe you can check here - [http://snap.stanford.edu/data/](http://snap.stanford.edu/data/) For each data set you will also see references to the works where they have been used.
null
CC BY-SA 3.0
null
2014-06-29T12:48:08.840
2014-06-29T12:48:08.840
null
null
816
null
626
2
null
623
2
null
In my opinion, it is difficult to compare performance when there is such a big difference in sample size. At this link (please check it out [here in Wikipedia](http://en.wikipedia.org/wiki/Effect_size)), you may see different strategies. The one I suggest is related to the variance. For instance, consider the performance of the egg classifier (100%, on a single example) and the person classifier (65%, on 100k examples). The smallest error step you can measure with the former classifier is 100%, whereas the smallest non-zero error you can measure with the latter is 1/100000. So one way to compare classifiers is to keep in mind this [Rule of Three](http://en.wikipedia.org/wiki/Rule_of_three_(statistics)), where you can compare the performance and its variability. Another possibility is the F-measure, which is a combination of precision and recall and is somewhat independent of the effect size.
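A minimal sketch of the Rule of Three mentioned above: if a classifier makes zero errors on n test items, an approximate 95% upper bound on its true error rate is 3/n, which makes the egg classifier's perfect score essentially uninformative.
```
rule_of_three <- function(n) 3 / n

rule_of_three(1)        # egg classifier: bound of 3, i.e. no real information
rule_of_three(100000)   # at the person classifier's scale: 3e-05
```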
null
CC BY-SA 4.0
null
2014-06-29T21:11:47.727
2020-08-06T11:03:56.237
2020-08-06T11:03:56.237
98307
1155
null
627
2
null
61
17
null
Roughly speaking, over-fitting typically occurs when the ratio ![enter image description here](https://i.stack.imgur.com/AVVpB.png) is too high. Think of over-fitting as a situation where your model learns the training data by heart instead of learning the big picture, which prevents it from generalizing to the test data: this happens when the model is too complex with respect to the size of the training data, that is to say when the size of the training data is too small in comparison with the model complexity. Examples: - if your data is in two dimensions, you have 10000 points in the training set and the model is a line, you are likely to under-fit. - if your data is in two dimensions, you have 10 points in the training set and the model is a 100-degree polynomial, you are likely to over-fit. ![enter image description here](https://i.stack.imgur.com/HH23h.png) From a theoretical standpoint, the amount of data you need to properly train your model is a crucial yet far-from-answered question in machine learning. One approach to answering this question is the [VC dimension](https://math.stackexchange.com/a/656167/24265). Another is the [bias-variance tradeoff](https://www.quora.com/Machine-Learning/What-is-an-intuitive-explanation-for-bias-variance-tradeoff/answer/Franck-Dernoncourt?share=1). From an empirical standpoint, people typically plot the training error and the test error on the same plot and make sure that they don't reduce the training error at the expense of the test error: ![enter image description here](https://i.stack.imgur.com/I7LiT.png) I would advise watching [Coursera's Machine Learning course](https://www.coursera.org/course/ml), section "10: Advice for applying Machine Learning". (PS: please go [here](https://datascience.meta.stackexchange.com/q/6/843) to ask for TeX support on this SE.)
null
CC BY-SA 3.0
null
2014-06-29T22:44:02.560
2014-06-29T22:44:02.560
2017-04-13T12:19:36.717
-1
843
null
628
2
null
361
51
null
A model underfits when it is too simple with regards to the data it is trying to model. One way to detect such a situation is to use the [bias–variance approach](http://en.wikipedia.org/wiki/Bias%E2%80%93variance_dilemma), which can be represented like this: ![enter image description here](https://i.stack.imgur.com/t0zit.png) Your model is underfitted when you have a high bias. --- To know whether you have too high a bias or too high a variance, you view the phenomenon in terms of training and test errors: High bias: This learning curve shows high error on both the training and test sets, so the algorithm is suffering from high bias: ![enter image description here](https://i.stack.imgur.com/KFjM4.png) High variance: This learning curve shows a large gap between training and test set errors, so the algorithm is suffering from high variance. ![enter image description here](https://i.stack.imgur.com/Ypj9y.png) If an algorithm is suffering from high variance: - more data will probably help - otherwise, reduce the model complexity If an algorithm is suffering from high bias: - increase the model complexity I would advise watching [Coursera's Machine Learning course](https://www.coursera.org/course/ml), section "10: Advice for applying Machine Learning", from which I took the above graphs.
null
CC BY-SA 3.0
null
2014-06-29T23:14:39.513
2014-07-02T16:55:27.367
2014-07-02T16:55:27.367
843
843
null
629
2
null
607
3
null
# Do you need a VM? You need to keep in mind that a virtual machine is a software emulation of your own or another machine hardware configuration that can run an operating systems. In most basic terms, it acts as a layer interfacing between the virtual OS, and your own OS which then communicates with the lower level hardware to provide support to the virtual OS. What this means for you is: # Cons ### Hardware Support A drawback of virtual machine technology is that it supports only the hardware that both the virtual machine hypervisor and the guest operating system support. Even if the guest operating system supports the physical hardware, it sees only the virtual hardware presented by the virtual machine. The second aspect of virtual machine hardware support is the hardware presented to the guest operating system. No matter the hardware in the host, the hardware presented to the guest environment is usually the same (with the exception of the CPU, which shows through). For example, VMware GSX Server presents an AMD PCnet32 Fast Ethernet card or an optimized VMware-proprietary network card, depending on which you choose. The network card in the host machine does not matter. VMware GSX Server performs the translation between the guest environment's network card and the host environment's network card. This is great for standardization, but it also means that host hardware that VMware does not understand will not be present in the guest environment. ### Performance Penalty Virtual machine technology imposes a performance penalty from running an additional layer above the physical hardware but beneath the guest operating system. The performance penalty varies based on the virtualization software used and the guest software being run. This is significant. # Pros ### Isolation > One of the key reasons to employ virtualization is to isolate applications from each other. Running everything on one machine would be great if it all worked, but many times it results in undesirable interactions or even outright conflicts. The cause often is software problems or business requirements, such as the need for isolated security. Virtual machines allow you to isolate each application (or group of applications) in its own sandbox environment. The virtual machines can run on the same physical machine (simplifying IT hardware management), yet appear as independent machines to the software you are running. For all intents and purposes—except performance, the virtual machines are independent machines. If one virtual machine goes down due to application or operating system error, the others continue running, providing services your business needs to function smoothly. ### Standardization > Another key benefit virtual machines provide is standardization. The hardware that is presented to the guest operating system is uniform for the most part, usually with the CPU being the only component that is "pass-through" in the sense that the guest sees what is on the host. A standardized hardware platform reduces support costs and increases the share of IT resources that you can devote to accomplishing goals that give your business a competitive advantage. The host machines can be different (as indeed they often are when hardware is acquired at different times), but the virtual machines will appear to be the same across all of them. ### Ease of Testing > Virtual machines let you test scenarios easily. Most virtual machine software today provides snapshot and rollback capabilities. 
This means you can stop a virtual machine, create a snapshot, perform more operations in the virtual machine, and then roll back again and again until you have finished your testing. This is very handy for software development, but it is also useful for system administration. Admins can snapshot a system and install some software or make some configuration changes that they suspect may destabilize the system. If the software installs or changes work, then the admin can commit the updates. If the updates damage or destroy the system, the admin can roll them back. Virtual machines also facilitate scenario testing by enabling virtual networks. In VMware Workstation, for example, you can set up multiple virtual machines on a virtual network with configurable parameters, such as packet loss from congestion and latency. You can thus test timing-sensitive or load-sensitive applications to see how they perform under the stress of a simulated heavy workload. ### Mobility > Virtual machines are easy to move between physical machines. Most of the virtual machine software on the market today stores a whole disk in the guest environment as a single file in the host environment. Snapshot and rollback capabilities are implemented by storing the change in state in a separate file in the host information. Having a single file represent an entire guest environment disk promotes the mobility of virtual machines. Transferring the virtual machine to another physical machine is as easy as moving the virtual disk file and some configuration files to the other physical machine. Deploying another copy of a virtual machine is the same as transferring a virtual machine, except that instead of moving the files, you copy them. # Which VM should I use if I am starting out? The Data Science Box or the Data Science Toolbox are your best bets if you are just getting into data science. They have the basic software that you will need, with the primary difference being the virtual environment in which each of these can run. The DSB can run on AWS, while the DST can run on VirtualBox (which is the most common tool used for VMs). # Sources - http://www.devx.com/vmspecialreport/Article/30383 - http://jeroenjanssens.com/2013/12/07/lean-mean-data-science-machine.html
null
CC BY-SA 3.0
null
2014-06-30T01:51:11.027
2014-06-30T01:51:11.027
2020-06-16T11:08:43.077
-1
62
null
630
2
null
600
1
null
Here's a crazy idea: talk to the volunteers who know the neighborhoods and who have done door-to-door work before. Get their advice and ideas. They will probably have insights that no algorithm will produce, and those modifications will be valuable to any computer-generated route list. One example: Avoiding crossing heavily traveled streets with slow lights or no lights. Another example: pairs of volunteers working on opposite sides of the same street will feel safer than a volunteer working that street alone.
null
CC BY-SA 3.0
null
2014-06-30T04:00:35.077
2014-06-30T04:00:35.077
null
null
609
null
634
1
635
null
10
156
I'm working on a fraud detection system. In this field, new frauds appear regularly, so new features have to be added to the model on an ongoing basis. I wonder what the best way to handle this is (from the development process perspective)? Just adding a new feature to the feature vector and re-training the classifier seems like a naive approach, because too much time would be spent re-learning the old features. I'm thinking along the lines of training a classifier for each feature (or a couple of related features), and then combining the results of those classifiers with an overall classifier. Are there any drawbacks to this approach? How can I choose an algorithm for the overall classifier?
Handling a regularly increasing feature set
CC BY-SA 3.0
null
2014-06-30T09:43:01.940
2014-07-11T14:27:01.603
2014-07-09T00:19:42.423
322
1271
[ "machine-learning", "bigdata" ]
635
2
null
634
4
null
In an ideal world, you retain all of your historical data and do indeed run a new model with the new feature extracted retroactively from the historical data. I'd argue that the computing resources spent on this are well spent -- is it really a problem? Yes, it's a widely accepted technique to build an ensemble of classifiers and combine their results. You can build a new model in parallel just on the new features and average in its prediction. This should add value, but you will never capture interactions between the new and old features this way, since they will never appear together in a classifier.
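To make the parallel-model idea concrete, here is a minimal sketch (assuming scikit-learn and classifiers that expose `predict_proba`; how you split the columns into the old and new feature blocks is up to you):

```
# Sketch: keep the existing model on the old features and train a second
# model only on the newly added features, then average their probabilities.
# Note: as mentioned above, this cannot capture old/new feature interactions.
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_block_models(X_old, X_new, y):
    clf_old = LogisticRegression(max_iter=1000).fit(X_old, y)
    clf_new = LogisticRegression(max_iter=1000).fit(X_new, y)
    return clf_old, clf_new

def predict_fraud_score(clf_old, clf_new, X_old, X_new):
    p_old = clf_old.predict_proba(X_old)[:, 1]
    p_new = clf_new.predict_proba(X_new)[:, 1]
    return (p_old + p_new) / 2.0
```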
null
CC BY-SA 3.0
null
2014-06-30T10:47:06.453
2014-06-30T10:47:06.453
null
null
21
null
636
1
null
null
3
1122
I am new to machine learning. I have the task of predicting click probability given user information like city, state, OS version, OS family, device, browser family, browser version, etc. I have been advised to try logistic regression (logit), since that seems to be what MS and Google are using. I have some questions regarding logistic regression: (1) Clicks vs. non-clicks form a very unbalanced class distribution, and the simple GLM predictions do not look good. How can I make the data work better with the GLM? (2) All the variables I have are categorical, and things like device and city can take very many values. Also, the frequency of occurrence of some devices or some cities can be very low. How can I deal with this distribution of categorical variables? (3) One of the variables we get is device ID. This is a nearly unique feature that can be translated to a user's identity. How can I make use of it in logit, or should it be used in a completely different model based on user identity?
Data preparation and machine learning algorithm for click prediction
CC BY-SA 3.0
null
2014-06-30T12:05:38.597
2014-07-09T00:19:12.683
2014-07-09T00:19:12.683
322
1273
[ "machine-learning", "bigdata", "data-mining", "dataset", "data-cleaning" ]
637
2
null
634
2
null
Here's an idea that just popped out of the blue – what if you make use of [Random Subspace Sampling](http://en.wikipedia.org/wiki/Random_subspace_method) (as in fact Sean Owen already suggested) to train a bunch of new classifiers every time a new feature appears (using a random feature subset that includes the new set of features)? You could train those models on a subset of samples as well to save some training time. This way you can have new classifiers possibly taking on both new and old features, while at the same time keeping your old classifiers. You might even, perhaps using a cross-validation technique to measure each classifier's performance, be able to kill off the worst-performing ones after a while, to avoid a bloated model.
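If you happen to be in Python, one convenient (approximate) way to get random feature subspaces is bagging with `max_features` below 1.0; this is just a sketch, not necessarily how you would wire it into an evolving feature set:

```
# Sketch: an ensemble where every base tree sees only a random subset of
# features (and samples), so new features get mixed with old ones.
from sklearn.ensemble import BaggingClassifier

subspace_ensemble = BaggingClassifier(
    n_estimators=50,    # number of classifiers in the ensemble
    max_features=0.5,   # each base estimator gets a random 50% of the features
    max_samples=0.7,    # and a random 70% of the samples
    random_state=0,
)
# subspace_ensemble.fit(X, y)
# subspace_ensemble.predict_proba(X_new_batch)
```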
null
CC BY-SA 3.0
null
2014-06-30T13:56:10.753
2014-06-30T13:56:10.753
null
null
1085
null
640
1
642
null
5
224
I'm currently working on a project that would benefit from personalized predictions. Given an input document, a set of output documents, and a history of user behavior, I'd like to predict which of the output documents are clicked. In short, I'm wondering what the typical approach to this kind of personalization problem is. Are models trained per user, or does a single global model take in summary statistics of past user behavior to help inform that decision? Per user models won't be accurate until the user has been active for a while, while most global models have to take in a fixed length feature vector (meaning we more or less have to compress a stream of past events into a smaller number of summary statistics).
Large Scale Personalization - Per User vs Global Models
CC BY-SA 3.0
null
2014-06-30T20:51:58.640
2014-06-30T23:10:53.397
null
null
684
[ "classification" ]
641
1
null
null
21
9681
I'm currently searching for labeled datasets to train a model to extract named entities from informal text (something similar to tweets). Because capitalization and grammar are often lacking in the documents in my dataset, I'm looking for out of domain data that's a bit more "informal" than the news article and journal entries that many of today's state of the art named entity recognition systems are trained on. Any recommendations? So far I've only been able to locate 50k tokens from twitter published [here](https://github.com/aritter/twitter_nlp/blob/master/data/annotated/ner.txt).
Dataset for Named Entity Recognition on Informal Text
CC BY-SA 4.0
null
2014-06-30T21:02:05.053
2021-01-11T06:31:25.200
2019-06-08T03:31:44.327
29169
684
[ "dataset", "nlp" ]
642
2
null
640
4
null
The answer to this question is going to vary pretty wildly depending on the size and nature of your data. At a high level, you could think of it as a special case of multilevel models; you have the option of estimating a model with complete pooling (i.e., a universal model that doesn't distinguish between users), models with no pooling (a separate model for each user), and partially pooled models (a mixture of the two). You should really read Andrew Gelman on this topic if you're interested. You can also think of this as a learning-to-rank problem that either tries to produce point-wise estimates using a single function or instead tries to optimize on some list-wise loss function (e.g., NDCG). As with most machine learning problems, it all depends on what kind of data you have, the quality of it, the sparseness of it, and what kinds of features you are able to extract from it. If you have reason to believe that each and every user is going to be pretty unique in their behavior, you might want to build a per-user model, but that's going to be unwieldy fast -- and what do you do when you are faced with a new user?
null
CC BY-SA 3.0
null
2014-06-30T23:10:53.397
2014-06-30T23:10:53.397
null
null
159
null
643
2
null
511
5
null
I guess you say that you want to use 3-fold cross-validation because you know something about your data (that using k=10 would cause overfitting? I'm curious about your reasoning). I am not sure that you know this; if not, then you can simply use a larger k. If you still think that you cannot use standard k-fold cross-validation, then you could modify the algorithm a bit: say you split the data into 30 folds and each time use 20 for training and 10 for evaluation (and then shift up one fold and use the first and the last 9 as evaluation and the rest as training). This means that you're able to use all your data. When I use k-fold cross-validation I usually run the process multiple times with a different randomisation to make sure that I have sufficient data; if you don't, you will see different performances depending on the randomisation. In such cases I would suggest sampling. The trick then is to do it often enough.
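A sketch of that 30-fold sliding scheme (plain NumPy; the random fold assignment is just one way to do it):

```
# Sketch: 30 folds, 20 consecutive folds for training and 10 for evaluation,
# shifting the evaluation window by one fold each round (with wrap-around).
import numpy as np

def sliding_folds(n_samples, n_folds=30, n_eval=10, seed=0):
    rng = np.random.RandomState(seed)
    fold_of = rng.permutation(n_samples) % n_folds  # fold id for each sample
    for start in range(n_folds):
        eval_ids = [(start + i) % n_folds for i in range(n_eval)]
        eval_mask = np.isin(fold_of, eval_ids)
        yield np.where(~eval_mask)[0], np.where(eval_mask)[0]

# for train_idx, eval_idx in sliding_folds(n_samples):
#     fit on train_idx, score on eval_idx
```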
null
CC BY-SA 3.0
null
2014-07-01T08:04:30.127
2014-07-01T08:04:30.127
null
null
127
null
644
1
649
null
10
3739
For a recommendation system I'm using cosine similarity to compute similarities between items. However, for items with small amounts of data I'd like to bin them under a general "average" category (in the general not mathematical sense). To accomplish this I'm currently trying to create a synthetic observation to represent that middle of the road point. So for example if these were my observations (rows are observations, cols are features): ``` [[0, 0, 0, 1, 1, 1, 0, 1, 0], [1, 0, 1, 0, 0, 0, 1, 0, 0], [1, 1, 1, 1, 0, 1, 0, 1, 1], [0, 0, 1, 0, 0, 1, 0, 1, 0]] ``` A strategy where I'd simply take the actual average of all features across observations would generate a synthetic datapoint such as follows, which I'd then append to the matrix before doing the similarity calculation. ``` [ 0.5 , 0.25, 0.75, 0.5 , 0.25, 0.75, 0.25, 0.75, 0.25] ``` While this might work well with certain similarity metrics (e.g. L1 distance) I'm sure there are much better ways for cosine similarity. Though, at the moment, I'm having trouble reasoning my way through angles between lines in high dimensional space. Any ideas?
Create most "average" cosine similarity observation
CC BY-SA 3.0
null
2014-07-01T13:44:23.290
2023-04-13T05:11:50.153
2014-07-01T15:33:10.697
754
754
[ "recommender-system", "similarity" ]
645
2
null
623
5
null
You need to look at the confidence interval of the statistic. This helps measure how much uncertainty there is in the statistic, which is largely a function of sample size.
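As a rough illustration, a simple normal-approximation interval for a proportion such as precision (the counts below are made up):

```
# Sketch: 95% confidence interval for a proportion, e.g. precision estimated
# from k correct positives out of n predicted positives.
import math

def proportion_ci(k, n, z=1.96):
    p = k / n
    half_width = z * math.sqrt(p * (1 - p) / n)
    return max(0.0, p - half_width), min(1.0, p + half_width)

print(proportion_ci(45, 50))  # 0.90 with a fairly tight interval
print(proportion_ci(9, 10))   # same 0.90, but a much wider interval
```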
null
CC BY-SA 3.0
null
2014-07-01T17:01:12.263
2014-07-01T17:01:12.263
null
null
178
null
646
1
null
null
2
30
I'm trying to build a nonparametric density function for a fairly large dataset that can be evaluated efficiently, and can be updated efficiently when new points are added. There will only ever be a maximum of 4 independent variables, but we can start off with 2. Let's use a Gaussian kernel. Let the result be a probability density function, i.e. its volume will be 1. In each evaluation, we can omit all points for which the evaluation point is outside a certain ellipsoid corresponding to the minimum Gaussian value we care about. We can change this threshold for accuracy or performance, and the maximum number of points inside the threshold will depend on the chosen covariance matrix of the kernel. Then, we can evaluate the distribution approximately using the subset of points. If we use a fixed kernel, then we can use the eigenvalues and eigenvectors we get from the covariance matrix to transform each point so that the threshold ellipsoid is a fixed circle. We can then shove all the transformed points into a spatial index, and efficiently find all points within the required radius of the evaluation point. However, we would like the kernel to be variable for two reasons: (1) to fit the data better, and (2) because adding or modifying points would require the fixed kernel to be updated, which would mean that the entire data set would need to be reindexed. With a variable kernel, we could make new/updated points only affect the closest points. Specifically, is there a spatial index that can efficiently find ellipses surrounding a given point from a set of around 10 million ellipses of different shapes and sizes? More generally though, does my approach look sound? I am open to answers like "give up and precalculate a grid of results". Answers much appreciated!
Spatial index for variable kernel nonparametric density
CC BY-SA 3.0
null
2014-07-01T17:03:31.907
2014-07-01T17:03:31.907
null
null
1283
[ "data-indexing-techniques" ]
648
2
null
605
3
null
Singular Value Decomposition is a linear-algebraic technique, and as a result the notion of normalization is hard to define for it. In principle, you can do this normalization by dividing each element A(i,j) of the matrix by the sum (or max) of the elements in that particular (ith) row, i.e. $A(i,j) = A(i,j) / \sum_{k=1}^{n} A(i,k)$. However, a more elegant way to achieve this would be to apply a probabilistic dimensionality reduction technique such as [PLSA](http://en.wikipedia.org/wiki/Probabilistic_latent_semantic_analysis) or [LDA](http://en.wikipedia.org/wiki/Latent_Dirichlet_allocation). These dimensionality reduction techniques ensure that you always end up with probability values strictly between 0 and 1.
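For the simple row normalization, a small NumPy illustration (assuming a non-negative matrix such as term counts):

```
# Sketch: divide every element by the sum of its row so each row sums to 1.
import numpy as np

A = np.array([[3.0, 1.0, 0.0],
              [2.0, 2.0, 4.0]])
A_norm = A / A.sum(axis=1, keepdims=True)
print(A_norm)  # rows now sum to 1
```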
null
CC BY-SA 3.0
null
2014-07-01T18:48:13.757
2014-07-01T18:48:13.757
null
null
984
null
649
2
null
644
12
null
You are doing the correct thing. Technically, this averaging leads to computing the [centroid](http://en.wikipedia.org/wiki/Centroid) in the Euclidean space of a set of N points. The centroid works pretty well with cosine similarities (cosine of the angles between normalized vectors), e.g. [the Rocchio algorithm](http://en.wikipedia.org/wiki/Rocchio_algorithm).
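To make it concrete, here is a small NumPy sketch using the matrix from the question (the new item to compare against is made up):

```
# Sketch: centroid of the observations and its cosine similarity to a new item.
import numpy as np

X = np.array([[0, 0, 0, 1, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 0, 1, 0, 0],
              [1, 1, 1, 1, 0, 1, 0, 1, 1],
              [0, 0, 1, 0, 0, 1, 0, 1, 0]], dtype=float)
centroid = X.mean(axis=0)

def cosine(u, v):
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

new_item = np.array([0, 1, 0, 1, 0, 1, 0, 1, 0], dtype=float)
print(cosine(centroid, new_item))
```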
null
CC BY-SA 3.0
null
2014-07-01T23:11:35.627
2014-07-01T23:11:35.627
null
null
984
null
650
2
null
644
2
null
As @Debasis writes, what you are doing is correct. Do not worry about the relative values as the cosine similarity focusses on the angle, not the length of the vector. Note that the averages that you compute actually are the proportion of observations that have that feature in your subset. If the subset is sufficiently large, you can interpret them as probabilities.
null
CC BY-SA 3.0
null
2014-07-02T08:40:09.773
2014-07-02T08:40:09.773
null
null
172
null
651
2
null
623
2
null
The number of data points in the class is sometimes referred to as the `support` of the classifier. It tells you how much you can trust your result, much like a p-value allows you to trust or distrust some test. One approach you can use is to compute several classifier performance measures, not only precision and recall but also true positive rate, false positive rate, specificity, sensitivity, positive likelihood, negative likelihood, etc., and see whether they are consistent with one another. If one of the measures maxes out (100%) and the others do not, it is often, in my experience, an indication that something went wrong (e.g. poor support, trivial classifier, biased classifier, etc.). See [this](http://www.damienfrancois.be/blog/files/modelperfcheatsheet.pdf) for a list of classifier performance measures.
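For illustration, a few of these measures computed from the same (made-up) predictions with scikit-learn:

```
# Sketch: compute several complementary measures so inconsistencies stand out.
from sklearn.metrics import confusion_matrix, precision_score, recall_score

y_true = [1, 1, 1, 0, 0, 0, 0, 0, 1, 0]
y_pred = [1, 1, 0, 0, 0, 0, 1, 0, 1, 0]

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("precision  ", precision_score(y_true, y_pred))
print("recall/TPR ", recall_score(y_true, y_pred))
print("specificity", tn / (tn + fp))   # true negative rate
print("FPR        ", fp / (fp + tn))
print("support(+) ", tp + fn)          # number of positive examples
```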
null
CC BY-SA 3.0
null
2014-07-02T08:50:55.980
2014-07-02T08:50:55.980
null
null
172
null
652
2
null
361
5
null
Simply put, one common approach is to start with a simple model, which will most probably underfit at first, and then increase the complexity of the model until early signs of overfitting are seen using a resampling technique such as cross-validation, the bootstrap, etc. You increase the complexity either by adding parameters (number of hidden neurons for artificial neural networks, number of trees in a random forest) or by relaxing the regularization term (often named lambda, or C for support vector machines) in your model.
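A minimal sketch of that complexity sweep (here tree depth stands in for whatever complexity knob your model exposes, and the data are synthetic):

```
# Sketch: increase model complexity and watch the cross-validated score;
# stop once it plateaus or starts to drop (a sign of overfitting).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
for depth in [1, 2, 4, 8, 16, None]:
    model = DecisionTreeClassifier(max_depth=depth, random_state=0)
    print(depth, round(np.mean(cross_val_score(model, X, y, cv=5)), 3))
```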
null
CC BY-SA 3.0
null
2014-07-02T08:58:25.413
2014-07-02T08:58:25.413
null
null
172
null
653
1
654
null
4
314
I am trying to find out which classification methods that do not use a training phase are available. The scenario is gene-expression-based classification, in which you have a matrix of gene expression of m genes (features) and n samples (observations). A signature for each class is also provided (that is, a list of the features to consider in order to decide which class a sample belongs to). An application (non-training) is the [Nearest Template Prediction](http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0015543) method. In this case the cosine distance is computed between each sample and each signature (on the common set of features). Then each sample is assigned to the nearest class (the class whose comparison with the sample results in the smallest distance). No already classified samples are needed in this case. A different application (training) is the [kNN](http://en.wikipedia.org/wiki/K-nearest_neighbors_algorithm) method, in which we have a set of already labeled samples. Then, each new sample is labeled depending on how the k nearest samples are labeled. Are there any other non-training methods? Thanks
Which non-training classification methods are available?
CC BY-SA 3.0
null
2014-07-02T13:40:27.000
2015-04-12T16:08:13.467
null
null
133
[ "classification" ]
654
2
null
653
7
null
What you are asking about is [Instance-Based Learning](http://en.wikipedia.org/wiki/Instance-based_learning). k-Nearest Neighbors (kNN) appears to be the most popular of these methods and is applicable to a wide variety of problem domains. Another general type of instance-based learning is [Analogical Modeling](http://en.wikipedia.org/wiki/Analogical_modeling), which uses instances as exemplars for comparison with new data. You referred to kNN as an application that uses training but that is not correct (the Wikipedia entry you linked is somewhat misleading in that regard). Yes, there are "training examples" (labeled instances) but the classifier doesn't learn/train from these data. Rather, they are only used whenever you actually want to classify a new instance, which is why it is considered a "lazy" learner. Note that the Nearest Template Prediction method you mention effectively is a form of kNN with `k=1` and cosine distance as the distance measure.
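To make that last point concrete, here is a sketch of the nearest-template idea expressed as 1-NN with a cosine metric (the signature values and labels are hypothetical; scikit-learn's brute-force kNN accepts a cosine metric):

```
# Sketch: treat the class signatures as the only "training" instances and
# classify each sample by its nearest signature under cosine distance.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

templates = np.array([[1.0, 0.2, 0.0],   # signature of class A
                      [0.1, 0.9, 0.8]])  # signature of class B
labels = ["A", "B"]

ntp = KNeighborsClassifier(n_neighbors=1, metric="cosine", algorithm="brute")
ntp.fit(templates, labels)
print(ntp.predict([[0.9, 0.3, 0.1]]))    # -> ['A']
```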
null
CC BY-SA 3.0
null
2014-07-02T14:14:22.593
2014-07-02T14:46:55.010
2014-07-02T14:46:55.010
964
964
null
655
1
null
null
10
2252
I would like to use ANN to automate trading currencies, preferably USD/EUR or USD/GBP. I know this is hard and may not be straightforward. I have already read some papers and done some experiments but without much luck. I would like to get advice from EXPERTS to make this work. Here is what I did so far: - I got tick-by-tick data for the month of July 2013. It has bid/ask/bid volume/ask volume. - Extracted all ticks for the time frame 12PM to 14PM for all days. - From this data, created a data set where each entry consists of n bid values in sequence. - Used that data to train an ANN with n-1 inputs, where the output is the forecasted nth bid value. - The ANN had n-1 input neurons, (n-1)*2 + 1 hidden and 1 output neuron. Input layer had linear TF, hidden had log TF and output had linear TF. - Trained the network with backpropagation, with n=125 first and then n=10. For both values of n, the MSE did not drop below 0.5 and stayed at that value during full training. Assuming that this could be due to the time series being totally random, I used R's pacf function to find the partial autocorrelation of the data set. This gave non-zero values for lags 2 and 3 only. Question 1: What does this mean exactly? Then I used the Hurst exponent to evaluate the randomness. In R, hurst(values) showed values above 0.9. Question 2: It is supposed to be nearly random. Should it have a value closer to 0.5? I repeated the training of the ANN with n=3. The ANN was trained and was able to obtain a pretty low value for MSE. However, the calculated output from this ANN does not differ much from the (n-1)th bid value. It looks like the ANN just takes the last bid as the next bid! I tried different network structures (all multilayer perceptrons), different training parameters, etc., but the results are the same. Question 3: How can I improve the accuracy? Are there any training methods other than backpropagation?
Foreign exchange market forecasting with neural networks
CC BY-SA 3.0
null
2014-07-02T14:40:55.413
2014-07-09T00:37:51.167
2014-07-02T22:18:40.597
322
1300
[ "neural-network" ]
656
2
null
636
2
null
Some suggestions - Remove items appearing too infrequently in the data. That will reduce the dimensionality by several orders of magnitude. If a feature occurs less than say 10 times, it's likely that it's not adding any predictive value, and it may lead to overfitting due to low frequency - Try a Linear SVM instead. They handle large dimensional data very well in terms of not overfitting. They also often have the option to assign relative weights to different classes, which may help address your unbalanced problem above. The sklearn svm (which simply wraps some other packages such as libsvm) has this option. - Don't use the ID column. Producing a model per user will most probably lead to overfitting. Instead, feed in attributes that describe the user that allows the model to generalize over similar users. You could try fitting a separate model per user, but you need a lot of data per user to do this well. - It sounds like you really need to try some feature selection here, to reduce the dimensionality of the problem. But try 1 and 2 first, as they may give you good results sooner (although the end solution may still work better with some good feature selection). Sklearn again has a number of options for feature selection.
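As an illustration of suggestion 2, a minimal class-weighted linear SVM sketch (the data here are a synthetic, imbalanced stand-in for the real click logs):

```
# Sketch: linear SVM with class weights to counter the click/non-click imbalance.
from sklearn.datasets import make_classification
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=2000, n_features=50,
                           weights=[0.95, 0.05], random_state=0)  # ~5% "clicks"
clf = LinearSVC(C=1.0, class_weight="balanced")  # reweights the rare class
clf.fit(X, y)
print((clf.predict(X) == 1).sum(), "predicted clicks")
```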
null
CC BY-SA 3.0
null
2014-07-02T15:52:44.470
2014-07-02T15:52:44.470
null
null
1301
null
657
1
665
null
4
835
Suppose I want to use CART as a classification tree (I want a categorical response). I have the training set, and I split it using the observation labels. Now, to build the decision tree (classification tree), how are the features selected to decide which label applies to the testing observations? Supposing we are working on a gene expression matrix, in which each element is a real number, is that done using the features that are more distant between classes?
How are the features selected for a decision tree in CART?
CC BY-SA 3.0
null
2014-07-02T16:41:47.467
2014-07-03T17:52:00.963
null
null
133
[ "classification" ]
658
1
null
null
10
180
I would like to use non-atomic data as features for a prediction. Suppose I have a table with these features: ``` - Column 1: Categorical - House - Column 2: Numerical - 23.22 - Column 3: A Vector - [ 12, 22, 32 ] - Column 4: A Tree - [ [ 2323, 2323 ],[2323, 2323] , [ Boolean, Categorical ] ] - Column 5: A List [ 122, Boolean ] ``` I would like to predict/classify, for instance, Column 2. I am making something to automatically respond to questions, any type of question, like "Where was Foo Born?" ... I first make a query to a search engine, then I get some text data as a result, then I do all the parsing stuff (tagging, stemming, parsing, splitting ...). My first approach was to make a table, each row with a line of text and a lot of features, like "First Word", "Tag of First Word", "Chunks", etc. But with this approach I am missing the relationships between the sentences. I would like to know if there is an algorithm that looks inside the tree structures (or vectors), works out the relations, and extracts whatever is relevant for predicting/classifying. I'd rather know about a library that does that than an algorithm that I have to implement.
Prediction with non-atomic features
CC BY-SA 3.0
null
2014-07-02T20:54:53.767
2014-07-16T15:35:31.410
2014-07-16T15:35:31.410
1367
1306
[ "machine-learning" ]
659
2
null
600
6
null
As Steve Kallestad has said, this is a TSP problem, and there are wonderful free solvers to find approximate solutions. It may be too much work for what you are looking for, but you may try to use one of those solvers in combination with the Google Maps API to find real walking distances between your coordinates: [https://developers.google.com/maps/documentation/directions/#DirectionsRequests](https://developers.google.com/maps/documentation/directions/#DirectionsRequests) (I have never used this API, so I don't know how easy or effective it would be.)
null
CC BY-SA 3.0
null
2014-07-03T08:00:10.823
2014-07-03T08:00:10.823
null
null
1289
null
660
1
null
null
7
1325
I'm new to RHadoop and also to RMR... I have a requirement to write a MapReduce job in R MapReduce. I have tried writing one, but while executing it I get an error when trying to read the file from HDFS. ## Error: ``` Error in mr(map = map, reduce = reduce, combine = combine, vectorized.reduce, : hadoop streaming failed with error code 1 ``` ## Code: ``` Sys.setenv(HADOOP_HOME="/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hadoop") Sys.setenv(HADOOP_CMD="/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/bin/hadoop") Sys.setenv(HADOOP_STREAMING="/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/lib/hadoop-0.20-mapreduce/contrib/streaming/hadoop-streaming-2.0.0-mr1-cdh4.7.0.jar") library(rmr2) library(rhdfs) hdfs.init() day_file = hdfs.file("/hdfs/bikes_LR/day.csv","r") day_read = hdfs.read(day_file) c = rawToChar(day_read) XtX = values(from.dfs( mapreduce( input = "/hdfs/bikes_LR/day.csv", map= function(.,Xi){ yi =c[Xi[,1],] Xi = Xi[,-1] keyval(1,list(t(Xi)%*%Xi)) }, reduce = function(k,v ) { vals =as.numeric(v) keyval(k,sum(vals)) } , combine = TRUE)))[[1]] XtY = values(from.dfs( mapreduce( input = "/hdfs/bikes_LR/day.csv", map= function(.,Xi){ yi =c[Xi[,1],] Xi = Xi[,-1] keyval(1,list(t(Xi)%*%yi)) }, reduce = TRUE , combine = TRUE)))[[1]] solve(XtX,XtY) Input: ------------ instant,dteday,season,yr,mnth,holiday,weekday,workingday,weathersit,temp,atemp,hum,windspeed,casual,registered,cnt 1,2011-01-01,1,0,1,0,6,0,2,0.344167,0.363625,0.805833,0.160446,331,654,985 2,2011-01-02,1,0,1,0,0,0,2,0.363478,0.353739,0.696087,0.248539,131,670,801 3,2011-01-03,1,0,1,0,1,1,1,0.196364,0.189405,0.437273,0.248309,120,1229,1349 4,2011-01-04,1,0,1,0,2,1,1,0.2,0.212122,0.590435,0.160296,108,1454,1562 5,2011-01-05,1,0,1,0,3,1,1,0.226957,0.22927,0.436957,0.1869,82,1518,1600 6,2011-01-06,1,0,1,0,4,1,1,0.204348,0.233209,0.518261,0.0895652,88,1518,1606 7,2011-01-07,1,0,1,0,5,1,2,0.196522,0.208839,0.498696,0.168726,148,1362,1510 8,2011-01-08,1,0,1,0,6,0,2,0.165,0.162254,0.535833,0.266804,68,891,959 9,2011-01-09,1,0,1,0,0,0,1,0.138333,0.116175,0.434167,0.36195,54,768,822 10,2011-01-10,1,0,1,0,1,1,1,0.150833,0.150888,0.482917,0.223267,41,1280,1321 Please suggest any mistakes. ```
Linear Regression in R Mapreduce(RHadoop)
CC BY-SA 3.0
null
2014-07-03T10:49:50.993
2014-12-31T13:08:04.937
null
null
1235
[ "machine-learning", "r", "apache-hadoop", "map-reduce" ]
661
1
null
null
2
2449
I downloaded and installed the CDH 5 package successfully on a single Linux node in pseudo-distributed mode on my CentOS 6.5 machine. Starting Hadoop and verifying it is working properly as in [this link](http://www.cloudera.com/content/cloudera-content/cloudera-docs/CDH5/latest/CDH5-Quick-Start/cdh5qs_mrv1_pseudo.html), I successfully finished the following steps: Step 1: Format the NameNode. Step 2: Start HDFS Step 3: Create the /tmp Directory Step 4: Create the MapReduce system directories: Step 5: Verify the HDFS File Structure Step 6: Start MapReduce But with the following command in Step 7 I get the error below. Step 7: Create User Directories $ sudo -u hdfs hadoop fs -mkdir -p /user/hadoopuser mkdir: '/user/hadoopuser': No such file or directory (where hadoopuser is my Linux login username) If I create the directory manually as /user/hadoopuser in the filesystem, it is not accepted. How can I get Step 7 to succeed? Please provide a solution so I can proceed with the remaining installation.
Cannot make user directory on a new CDH5 installation (Hadoop)
CC BY-SA 3.0
null
2014-07-03T14:18:14.387
2015-07-22T06:22:54.263
2014-07-03T17:45:59.843
434
1314
[ "bigdata", "apache-hadoop" ]
662
1
null
null
31
29579
Note that I am doing everything in R. The problem goes as follows: Basically, I have a list of resumes (CVs). Some candidates will have prior work experience and some won't. The goal here is: based on the text of their CVs, I want to classify them into different job sectors. I am particularly interested in those cases in which a candidate does not have any experience / is a student, and I want to make a prediction to classify which job sector this candidate will most likely belong to after graduation. Question 1: I know machine learning algorithms. However, I have never done NLP before. I came across Latent Dirichlet Allocation on the internet. However, I am not sure if this is the best approach to tackle my problem. My original idea: make this a supervised learning problem. Suppose we already have a large amount of labelled data, meaning that we have correctly labelled the job sectors for a list of candidates. We train the model using ML algorithms (e.g. nearest neighbor...) and feed in the unlabelled data, which are candidates that have no work experience / are students, and try to predict which job sector they will belong to. Update Question 2: Would it be a good idea to create a text file by extracting everything in a resume and printing these data out to the text file, so that each resume is associated with a text file that contains unstructured strings, and then apply text mining techniques to the text files to make the data become structured, or even to create a frequency matrix of the terms used out of the text files? For example, the text file may look something like this: `I deployed ML algorithm in this project and... Skills: Java, Python, c++ ...` This is what I meant by 'unstructured', i.e. collapsing everything into a single-line string. Is this approach wrong? Please correct me if you think my approach is wrong. Question 3: The tricky part is: how to identify and extract the keywords? Using the `tm` package in R? What algorithm is the `tm` package based on? Should I use NLP algorithms? If yes, what algorithms should I look at? Please point me to some good resources to look at as well. Any ideas would be great.
What algorithms should I use to perform job classification based on resume data?
CC BY-SA 3.0
null
2014-07-03T16:11:22.637
2019-04-02T02:47:17.803
2014-07-09T00:19:01.640
1352
1315
[ "machine-learning", "classification", "nlp", "text-mining" ]
663
2
null
662
14
null
Check out [this](http://www.rdatamining.com/examples/text-mining) link. Here, they will take you through loading unstructured text to creating a wordcloud. You can adapt this strategy and instead of creating a wordcloud, you can create a frequency matrix of terms used. The idea is to take the unstructured text and structure it somehow. You change everything to lowercase (or uppercase), remove stop words, and find frequent terms for each job function, via Document Term Matrices. You also have the option of stemming the words. If you stem words you will be able to detect different forms of words as the same word. For example, 'programmed' and 'programming' could be stemmed to 'program'. You can possibly add the occurrence of these frequent terms as a weighted feature in your ML model training. You can also adapt this to frequent phrases, finding common groups of 2-3 words for each job function. Example: 1) Load libraries and build the example data ``` library(tm) library(SnowballC) doc1 = "I am highly skilled in Java Programming. I have spent 5 years developing bug-tracking systems and creating data managing system applications in C." job1 = "Software Engineer" doc2 = "Tested new software releases for major program enhancements. Designed and executed test procedures and worked with relational databases. I helped organize and lead meetings and work independently and in a group setting." job2 = "Quality Assurance" doc3 = "Developed large and complex web applications for client service center. Lead projects for upcoming releases and interact with consumers. Perform database design and debugging of current releases." job3 = "Software Engineer" jobInfo = data.frame("text" = c(doc1,doc2,doc3), "job" = c(job1,job2,job3)) ``` 2) Now we do some text structuring. I am positive there are quicker/shorter ways to do the following. ``` # Convert to lowercase jobInfo$text = sapply(jobInfo$text,tolower) # Remove Punctuation jobInfo$text = sapply(jobInfo$text,function(x) gsub("[[:punct:]]"," ",x)) # Remove extra white space jobInfo$text = sapply(jobInfo$text,function(x) gsub("[ ]+"," ",x)) # Remove stop words jobInfo$text = sapply(jobInfo$text, function(x){ paste(setdiff(strsplit(x," ")[[1]],stopwords()),collapse=" ") }) # Stem words (Also try without stemming?) jobInfo$text = sapply(jobInfo$text, function(x) { paste(setdiff(wordStem(strsplit(x," ")[[1]]),""),collapse=" ") }) ``` 3) Make a corpus source and document term matrix. ``` # Create Corpus Source jobCorpus = Corpus(VectorSource(jobInfo$text)) # Create Document Term Matrix jobDTM = DocumentTermMatrix(jobCorpus) # Create Term Frequency Matrix jobFreq = as.matrix(jobDTM) ``` Now we have the frequency matrix, jobFreq, that is a (3 by x) matrix, 3 entries and X number of words. Where you go from here is up to you. You can keep only specific (more common) words and use them as features in your model. Another way is to keep it simple and have a percentage of words used in each job description, say "java" would have 80% occurrence in 'software engineer' and only 50% occurrence in 'quality assurance'. Now it's time to go look up why 'assurance' has 1 'r' and 'occurrence' has 2 'r's.
null
CC BY-SA 4.0
null
2014-07-03T17:06:13.143
2019-04-02T02:47:17.803
2019-04-02T02:47:17.803
29575
375
null
664
1
null
null
11
680
The R base function `glm()` uses Fisher Scoring for MLE, while `glmnet` appears to use the coordinate descent method to solve the same equation. Coordinate descent is more time-efficient than Fisher Scoring, as Fisher Scoring calculates the second-order derivative matrix in addition to some other matrix operations, which makes it expensive to perform, while coordinate descent can do the same task in O(np) time. Why would the R base function use Fisher Scoring? Does this method have an advantage over other optimization methods? How do coordinate descent and Fisher Scoring compare? I am relatively new to this field, so any help or resource would be helpful.
Fisher Scoring v/s Coordinate Descent for MLE in R
CC BY-SA 3.0
null
2014-07-03T17:11:01.770
2016-02-23T17:34:08.210
2015-10-26T08:28:38.520
13413
1319
[ "machine-learning", "r", "algorithms", "optimization" ]
665
2
null
657
4
null
At each split point, CART will choose the feature which "best" splits the observations. What qualifies as best varies, but generally the split is done so that the subsequent nodes are more homogenous/pure with respect to the target. There are different ways of measuring homogeneity, for example Gini, Entropy, Chi-square. If you are using software, it may allow you to choose the measure of homogenity that the tree algorithm will use. Distance is not a factor with trees - what matters is whether the value is greater than or less than the split point, not the distance from the split point.
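To make the homogeneity idea concrete, a tiny sketch of the Gini impurity of a candidate split (toy labels):

```
# Sketch: Gini impurity before vs. after a split; CART prefers the split
# whose weighted child impurity is lowest.
def gini(labels):
    n = len(labels)
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

left, right = ["A", "A", "A", "B"], ["B", "B", "B", "A"]
n = len(left) + len(right)
weighted = (len(left) / n) * gini(left) + (len(right) / n) * gini(right)
print(gini(left + right), "->", weighted)   # 0.5 -> 0.375 for this toy split
```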
null
CC BY-SA 3.0
null
2014-07-03T17:52:00.963
2014-07-03T17:52:00.963
null
null
1111
null
666
2
null
587
3
null
As far as gathering data goes, you can check out Quandl (there's a tutorial on using it with R on [DataCamp](https://www.datacamp.com/courses/how-to-work-with-quandl-in-r) if you're interested). In addition, Aswath Damodaran's [site](http://people.stern.nyu.edu/adamodar/New_Home_Page/data.html) contains a lot of helpful datasets. Though they aren't updated that frequently, they may still be useful, especially as a benchmark for comparing your own output (from the scripts you will inevitably need to write to calculate the necessary metrics). And, again, Quant SE is probably a better place to be looking...
null
CC BY-SA 3.0
null
2014-07-03T18:36:36.050
2014-07-03T18:36:36.050
null
null
786
null
667
2
null
662
7
null
This is a tricky problem. There are many ways to handle it. I guess, resumes can be treated as semi-structured documents. Sometimes, it's beneficial to have some minimal structure in the documents. I believe, in resumes you would see some tabular data. You might want to treat these as attribute value pairs. For example, you would get a list of terms for the attribute "Skill set". The key idea is to manually configure a list of key phrases such as "skill", "education", "publication" etc. The next step is to extract terms which pertain to these key phrases either by exploiting the structure in some way (such as tables) or by utilizing the proximity of terms around these key phrases, e.g. the fact that the word "Java" is in close proximity to the term "skill" might indicate that the person is skilled in Java. After you extract these information, the next step could be to build up a feature vector for each of these key phrases. You can then represent a document as a vector with different fields (one each for a key phrase). For example, consider the following two resumes represented with two fields, namely project and education. Doc1: {project: (java, 3) (c, 4)}, {education: (computer, 2), (physics, 1)} Doc2: {project: (java, 3) (python, 2)}, {education: (maths, 3), (computer, 2)} In the above example, I show a term with the frequency. Of course, while extracting the terms you need to stem and remove stop-words. It is clear from the examples that the person whose resume is Doc1 is more skilled in C than that of D2. Implementation wise, it's very easy to represent documents as field vectors in Lucene. Now, the next step is to retrieve a ranked list of resumes given a job specification. In fact, that's fairly straight forward if you represent queries (job specs) as field vectors as well. You just need to retrieve a ranked list of candidates (resumes) using Lucene from a collection of indexed resumes.
null
CC BY-SA 3.0
null
2014-07-03T20:47:20.147
2014-07-03T20:47:20.147
null
null
984
null
668
2
null
265
5
null
I've done some research in this area. I've found first order Markov chains work well for predicting within game scoring dynamics across a variety of sports. You can read in more detail [here](http://www.epjdatascience.com/content/3/1/4).
null
CC BY-SA 4.0
null
2014-07-03T20:51:59.510
2020-08-20T18:25:18.637
2020-08-20T18:25:18.637
98307
1329
null
669
2
null
655
6
null
The results you're seeing aren't a byproduct of your training procedure, but rather a sign that `neural nets` are not a great choice for this task. `Neural nets` are effectively a means of creating a high-order non-linear function by composing a number of simpler functions. This is often a really good thing, because it allows neural nets to fit very complex patterns. However, in a stock exchange any complex pattern, once traded upon, will quickly decay. Detecting a complicated pattern will generally not generate useful results, because such complex patterns typically only hold in the short term. Additionally, depending on the metric you choose, there are a number of ways of performing well that actually won't pay off in investing (such as just predicting the last value, as in your example). In addition, the stock market is startlingly chaotic, which can result in a `neural net` overfitting. This means that the patterns it learns will generalize poorly -- for example, seeing a stock decrease over a day and uniformly deciding that the stock will always decrease, just because that is what was seen over a relatively short term. Instead, techniques like `ridge` and `robust regression`, which will identify more general, less complex patterns, do better. The winner of a similar Kaggle competition used `robust regression` for this very reason. You are likely to see better results if you switch to a shallow learning model that will find functions of a lower polynomial order, over the deep, complex functions of a neural net.
null
CC BY-SA 3.0
null
2014-07-03T23:00:42.350
2014-07-03T23:00:42.350
null
null
548
null
670
1
672
null
3
112
I was building a model that predicts user churn for a website, where I have data on all users, both past and present. I can build a model that only uses those users that have left, but then I'm leaving 2/3 of the total user population unused. Is there a good way to incorporate data from these users into a model from a conceptual standpoint?
Dealing with events that have not yet happened when building a model
CC BY-SA 3.0
null
2014-07-04T02:31:42.080
2016-08-11T09:33:39.037
2014-07-04T02:38:12.523
1334
1334
[ "data-mining" ]
671
1
729
null
8
206
I have a linearly increasing time series dataset from a sensor, with values ranging between 50 and 150. I've implemented a [Simple Linear Regression](http://en.wikipedia.org/wiki/Simple_linear_regression) algorithm to fit a regression line on such data, and I'm predicting the date when the series will reach 120. All works fine when the series moves upwards. But there are cases in which the sensor reaches around 110 or 115 and is reset; in such cases the values start over again at, say, 50 or 60. This is where I start facing issues with the regression line, as it starts moving downwards and predicting an old date. I think I should be considering only the subset of data since the last reset. However, I'm trying to understand if there are any algorithms available that consider this case. I'm new to data science and would appreciate any pointers on how to move forward. Edit: nfmcclure's suggestions applied. Before applying the suggestions: ![enter image description here](https://i.stack.imgur.com/ZsyyQ.png) Below is a snapshot of what I've got after splitting the dataset where the reset occurs, and the slopes of the two sets: ![enter image description here](https://i.stack.imgur.com/OEQCw.png) Finding the mean of the two slopes and drawing the line from the mean: ![enter image description here](https://i.stack.imgur.com/i2qv5.png) Is this OK?
Linearly increasing data with manual reset
CC BY-SA 3.0
null
2014-07-04T05:12:44.707
2019-01-01T19:50:02.190
2014-08-27T07:15:52.587
870
870
[ "machine-learning", "statistics", "time-series" ]
672
2
null
670
5
null
This setting is common in reliability, health care, and mortality. The statistical analysis method is called [Survival Analysis](http://en.wikipedia.org/wiki/Survival_analysis). All users are coded according to their start date (or week or month). You use the empirical data to estimate the survival function, which is the probability that the time of defection is later than some specified time t. Your baseline model will estimate survival function for all users. Then you can do more sophisticated modeling to estimate what factors or behaviors might predict defection (churn), given your baseline survival function. Basically, any model that is predictive will yield a survival probability that is significantly lower than the baseline. --- There's another approach which involves attempting to identify precursor events patterns or user behavior pattern that foreshadow defection. Any given event/behavior pattern might occur for users that defect, or for users that stay. For this analysis, you may need to censor your data to only include users that have been members for some minimum period of time. The minimum time period can be estimated using your estimate of survival function, or even simple histogram analysis of the distribution of membership period for users who have defected.
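If you work in Python, one way to get the baseline estimate is the Kaplan-Meier estimator; this sketch assumes the third-party lifelines package and made-up membership durations (in months), with 1 meaning the user actually churned and 0 meaning they are still active (censored):

```
# Sketch: baseline survival curve for churn.
from lifelines import KaplanMeierFitter

durations = [2, 5, 7, 7, 9, 12, 12, 14, 20, 24]
observed  = [1, 1, 1, 0, 1,  0,  1,  0,  1,  0]

kmf = KaplanMeierFitter()
kmf.fit(durations, event_observed=observed)
print(kmf.survival_function_)   # P(user is still a member at time t)
```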
null
CC BY-SA 3.0
null
2014-07-04T06:46:19.180
2014-07-04T06:46:19.180
null
null
609
null
675
2
null
662
11
null
Just extract keywords and train a classifier on them. That's all, really. Most of the text in CVs is not actually related to skills. E.g., consider the sentence "I'm experienced and highly efficient in Java". Here only 1 out of 7 words is a skill name; the rest is just noise that's going to drag your classification accuracy down. Most CVs are not really structured. Or they are structured too freely. Or they use unusual names for sections. Or file formats that don't preserve structure when translated to text. I have experience extracting dates, times, names, addresses and even people's intents from unstructured text, but not a skill (or university or anything) list, not even close. So just tokenize (and possibly [stem](https://stackoverflow.com/questions/16069406/text-mining-with-the-tm-package-word-stemming)) your CVs, select only words from a predefined list (you can use LinkedIn or something similar to grab this list), create a feature vector and try out a couple of classifiers (say, SVM and Naive Bayes). (Note: I used a similar approach to classify LinkedIn profiles into more than 50 classes with accuracy > 90%, so I'm pretty sure even a naive implementation will work well.)
null
CC BY-SA 3.0
null
2014-07-04T22:46:32.130
2014-07-05T08:31:38.717
2017-05-23T12:38:53.587
-1
1279
null
676
2
null
565
7
null
In general, regression models (any of them) can behave in an arbitrary way beyond the domain spanned by the training samples. In particular, they are free to assume linearity of the modeled function, so if you, for instance, train a regression model with the points: ``` X Y 10 0 20 1 30 2 ``` it is reasonable to build the model `f(x) = x/10-1`, which for `x<10` returns negative values. The same applies "in between" your data points; it is always possible that, due to the assumed family of functions (the ones that can be modeled by the particular method), you will get values "outside of your training samples". You can think about this in another way - "what is so special about negative values?" Why do you find the existence of negative values weird (if not provided in the training set) while you don't get alarmed by the existence of, let's say... the value 2131.23? Unless developed in such a way, no model will treat negative values "differently" than positive ones. They are just a natural element of the real values, which can be attained like any other value.
null
CC BY-SA 3.0
null
2014-07-05T14:48:10.570
2014-07-05T14:48:10.570
null
null
1355
null
677
1
5834
null
7
4752
I'm trying to use the sklearn_pandas module to extend the work I do in pandas and dip a toe into machine learning but I'm struggling with an error I don't really understand how to fix. I was working through the following dataset on [Kaggle](https://www.kaggle.com/c/data-science-london-scikit-learn/data). It's essentially an unheadered table (1000 rows, 40 features) with floating point values. ``` import pandas as pd from sklearn import neighbors from sklearn_pandas import DataFrameMapper, cross_val_score path_train ="../kaggle/scikitlearn/train.csv" path_labels ="../kaggle/scikitlearn/trainLabels.csv" path_test = "../kaggle/scikitlearn/test.csv" train = pd.read_csv(path_train, header=None) labels = pd.read_csv(path_labels, header=None) test = pd.read_csv(path_test, header=None) mapper_train = DataFrameMapper([(list(train.columns),neighbors.KNeighborsClassifier(n_neighbors=3))]) mapper_train ``` Output: ``` DataFrameMapper(features=[([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39], KNeighborsClassifier(algorithm='auto', leaf_size=30, metric='minkowski', n_neighbors=3, p=2, weights='uniform'))]) ``` So far so good. But then I try the fit ``` mapper_train.fit_transform(train, labels) ``` Output: ``` --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-6-e3897d6db1b5> in <module>() ----> 1 mapper_train.fit_transform(train, labels) //anaconda/lib/python2.7/site-packages/sklearn/base.pyc in fit_transform(self, X, y, **fit_params) 409 else: 410 # fit method of arity 2 (supervised transformation) --> 411 return self.fit(X, y, **fit_params).transform(X) 412 413 //anaconda/lib/python2.7/site-packages/sklearn_pandas/__init__.pyc in fit(self, X, y) 116 for columns, transformer in self.features: 117 if transformer is not None: --> 118 transformer.fit(self._get_col_subset(X, columns)) 119 return self 120 TypeError: fit() takes exactly 3 arguments (2 given)` ``` What am I doing wrong? While the data in this case is all the same, I'm planning to work up a workflow for mixtures of categorical, nominal and floating point features, and sklearn_pandas seemed to be a logical fit.
Struggling to integrate sklearn and pandas in simple Kaggle task
CC BY-SA 3.0
null
2014-07-05T15:01:43.940
2015-05-27T03:16:56.403
null
null
974
[ "python", "pandas", "scikit-learn" ]
678
1
689
null
39
10907
When I say "document", I have in mind web pages like Wikipedia articles and news stories. I prefer answers giving either vanilla lexical distance metrics or state-of-the-art semantic distance metrics, with stronger preference for the latter.
What are some standard ways of computing the distance between documents?
CC BY-SA 3.0
null
2014-07-05T16:10:21.580
2020-08-06T13:01:07.800
2015-11-06T09:00:24.573
13727
1097
[ "machine-learning", "data-mining", "nlp", "text-mining", "similarity" ]
679
1
699
null
8
199
I asked a similar question about the distance between "documents" (Wikipedia articles, news stories, etc.). I made this a separate question because search queries are considerably smaller than documents and considerably noisier. I hence don't know (and doubt) whether the same distance metrics would be used here. Either vanilla lexical distance metrics or state-of-the-art semantic distance metrics are preferred, with a stronger preference for the latter.
What are some standard ways of computing the distance between individual search queries?
CC BY-SA 3.0
null
2014-07-05T16:20:17.963
2014-08-07T07:07:03.713
null
null
1097
[ "machine-learning", "nlp", "search" ]
680
2
null
678
6
null
There's a number of semantic distance measures, each with its pros and cons. Here are just a few of them: - cosine distance, the (normalized) inner product between document feature vectors; - LSA, another vector-based model, but one utilizing SVD for de-noising the original term-document matrix; - WordNet-based measures, human verified, though hardly extensible. Start with the simplest approach and then move further based on the issues in your specific case.
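As a starting point, the first option is only a few lines with scikit-learn (the example documents are made up):

```
# Sketch: cosine similarity between documents represented as TF-IDF vectors.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = ["The cat sat on the mat.",
        "A cat was sitting on a mat.",
        "Stock prices fell sharply on Monday."]
tfidf = TfidfVectorizer(stop_words="english").fit_transform(docs)
print(cosine_similarity(tfidf))  # the first two documents score highest
```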
null
CC BY-SA 3.0
null
2014-07-06T00:01:41.327
2014-07-06T00:01:41.327
null
null
1279
null