Id: string (1–6 chars)
PostTypeId: string (6 classes)
AcceptedAnswerId: string (2–6 chars)
ParentId: string (1–6 chars)
Score: string (1–3 chars)
ViewCount: string (1–6 chars)
Body: string (0–32.5k chars)
Title: string (15–150 chars)
ContentLicense: string (2 classes)
FavoriteCount: string (2 classes)
CreationDate: string (23 chars)
LastActivityDate: string (23 chars)
LastEditDate: string (23 chars)
LastEditorUserId: string (1–6 chars)
OwnerUserId: string (1–6 chars)
Tags: list
2357
2
null
2348
2
null
A confusion matrix is a cross-tabulation of your predicted values against the true observed values, and (test) accuracy is the empirical rate of correct predictions. So in this case you'll need to:

- Predict the 'play' attribute for your test set. (Currently you don't have a method to predict your second test case, so for the sake of argument let's assume your model would predict "yes" for the sunny example.)
- Keep track of your predictions in the following way, which is referred to as a confusion matrix. The top labels are the predicted values:

                   Predicted
             +-----+-----+----+
             ¦     ¦ yes ¦ no ¦
    Observed ¦ yes ¦  1  ¦  1 ¦
             ¦ no  ¦  0  ¦  0 ¦
             +-----+-----+----+

  Here the first 1 is from your first test case and the second 1 is from the misclassified second test case.
- Calculate accuracy: Accuracy = (# correct predictions) / (# total predictions) = 1/2 = 0.50.
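To make the bookkeeping concrete, here is a minimal Python sketch (labels and predictions are hypothetical, matching the two test cases above) that tallies a confusion matrix and computes accuracy:

```
from collections import Counter

# Hypothetical observed labels and predictions for the two test cases above
observed  = ["yes", "yes"]   # true 'play' values
predicted = ["yes", "no"]    # model output; the second case is misclassified

# Tally (observed, predicted) pairs -- this is the confusion matrix
confusion = Counter(zip(observed, predicted))
print(confusion)  # Counter({('yes', 'yes'): 1, ('yes', 'no'): 1})

# Accuracy = correct predictions / total predictions
accuracy = sum(o == p for o, p in zip(observed, predicted)) / len(observed)
print(accuracy)   # 0.5
```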
null
CC BY-SA 3.0
null
2014-10-27T16:13:21.233
2014-10-27T16:23:02.663
2014-10-27T16:23:02.663
4724
4724
null
2358
2
null
2348
0
null
This is how test objects are classified: "In classification, let R be the set of generated rules and T the training data. The basic idea of the proposed method is to choose a set of high confidence rules in R to cover T. In classifying a test object, the first rule in the set of rules that matches the test object condition classifies it. This process ensures that only the highest ranked rules classify test objects." Suppose one test case is (overcast, cool, normal, TRUE). Look through the rules from top to bottom and see if any of their conditions are matched. The first rule, for example, tests the outlook feature. The value doesn't match, so the rule isn't matched. Move on to the next rule, and so on. In this case, rule 2 matches the test case and the classification for the play variable is "yes". The second test case is misclassified. Thanks
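As a sketch of the quoted procedure (the rule conditions and attribute names here are hypothetical, loosely following the weather example), first-match classification can be written as:

```
# Each rule: (condition dict, predicted class); rules are assumed pre-sorted by rank
rules = [
    ({"outlook": "sunny"}, "no"),                     # rule 1 (hypothetical)
    ({"temperature": "cool", "windy": True}, "yes"),  # rule 2 (hypothetical)
]

def classify(obj, rules, default=None):
    """Return the class of the first (highest-ranked) rule whose condition matches."""
    for condition, label in rules:
        if all(obj.get(k) == v for k, v in condition.items()):
            return label
    return default  # no rule matched

test = {"outlook": "overcast", "temperature": "cool", "humidity": "normal", "windy": True}
print(classify(test, rules))  # "yes" -- rule 1 fails on outlook, rule 2 matches
```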
null
CC BY-SA 3.0
null
2014-10-27T17:39:18.997
2014-10-27T17:39:18.997
null
null
3503
null
2359
1
null
null
3
98
Our system allows an admin to manage a database of university courses. These courses have multiple fields, like the department, a title, and a description. I am adding the ability to add learning objectives to a course. To simplify the problem, let's say that learning objectives are just tags. Courses can have more than one learning objective associated with them. So a course like CHEM 101 might have "chemistry", "technology", "science", and several others. Assuming I can reduce a course to a set of features, (using keywords/stemming/nlp, I suppose?), what kind of problem is this and what algorithm would you suggest? It seems very similar to a classification problem, but I want to provide a sorted list of suggestions with the most relevant at the top.
Ranked tag recommendation for university courses
CC BY-SA 3.0
null
2014-10-27T22:17:31.497
2014-10-29T08:12:37.850
2014-10-29T08:12:37.850
381
4821
[ "machine-learning", "classification" ]
2360
2
null
1240
5
null
For item-ratings type of data with the restriction that an item's rating should be between 1 and 10 after transformation, I would suggest using a simple re-scaling, such that the item's transformed rating $x_t$ is given by: $$x_t = 9\left(\frac{x_i - x_{min}}{x_{max} - x_{min}}\right) + 1$$ where $x_{min}$ and $x_{max}$ are the minimum and maximum possible rating in the specific scale for the item, and $x_i$ is the item rating. In the case of the above scaling, the transformation applied is independent of the data. However, in the normalization, the transformation applied is dependent on the data (through mean and standard deviation), and might change as more data becomes available. Section 4.3 on page 30 of [this document](http://www.dai-labor.de/fileadmin/files/publications/DiplomaThesisStephanSpiegel.pdf) shows other ways of normalizing in which your restriction (transforming to the same absolute scale) might not be preserved.
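A minimal sketch of this rescaling in Python (the function name is mine):

```
def rescale(x, x_min, x_max):
    """Map a rating from [x_min, x_max] onto [1, 10], per the formula above."""
    return 9 * (x - x_min) / (x_max - x_min) + 1

print(rescale(3, 1, 5))     # 5.5 -- a 3 on a 1-5 scale
print(rescale(80, 0, 100))  # 8.2 -- an 80 on a 0-100 scale
```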
null
CC BY-SA 3.0
null
2014-10-27T23:24:46.673
2014-12-13T21:46:55.380
2014-12-13T21:46:55.380
84
847
null
2361
2
null
2359
1
null
This will most likely turn out to be multiple binary classification problems. Instead of building just one classifier, you would have to build as many classifiers as there are tags. Each classifier's task would be to predict whether a certain tag applies or not. A tag-specific probability threshold can then be learned by tuning over the precision-recall curve, where course-wise precision and recall can be calculated. The percentile of the predicted probability for each tag can be used to obtain an ordered (sorted) list of tag suggestions. This way of converting a multi-label problem into multiple binary classification problems is known as binary relevance. While it is very efficient, it assumes that the labels themselves are independent. However, picking the probability threshold for each tag in order to optimize the course-wise precision-recall curve can introduce some modeling of dependence between the tags.
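A minimal scikit-learn sketch of this binary-relevance setup (the feature matrix, tag names, and data are all placeholders):

```
import numpy as np
from sklearn.linear_model import LogisticRegression

# Placeholder data: X is a (courses x features) matrix, Y[:, j] says whether tag j applies
rng = np.random.default_rng(0)
X = rng.random((100, 20))
Y = rng.integers(0, 2, size=(100, 3))  # 3 tags
tags = ["chemistry", "technology", "science"]

# One binary classifier per tag (binary relevance)
models = [LogisticRegression().fit(X, Y[:, j]) for j in range(len(tags))]

# Score a new course and return tags sorted by predicted probability
x_new = rng.random((1, 20))
scores = [(tag, m.predict_proba(x_new)[0, 1]) for tag, m in zip(tags, models)]
print(sorted(scores, key=lambda s: s[1], reverse=True))
```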
null
CC BY-SA 3.0
null
2014-10-27T23:56:11.853
2014-10-27T23:56:11.853
null
null
847
null
2362
1
null
null
5
3332
I'm training a NN with 8 features and 8000 training examples and a single output (0, 1), using the scipy.optimize CG algorithm, and the results are somewhat inconsistent. The goal is to get the NN to be as 'precise' as possible (recall doesn't really matter too much), so I've set the threshold for the y value quite high (0.75). Most of the time it gets a precision of around 80%; however, sometimes it fails (using exactly the same parameters, lambda, etc.) to generate any outputs above the 0.75 threshold, meaning the precision equals 0. I've successfully trained NNs before without these unusual results (albeit the goal was a somewhat more conventional multi-class classifier with many more features). I'm wondering whether training NNs with fewer features increases the chances of getting stuck at a local optimum, or whether getting stuck at a local optimum has a more significant impact on NNs with fewer features. Any thoughts on what's going on!?
Neural Networks getting stuck at local optima
CC BY-SA 3.0
null
2014-10-28T06:00:49.643
2014-12-28T06:14:08.787
null
null
4824
[ "machine-learning", "neural-network" ]
2363
2
null
2298
3
null
The difference was, it appears, due to the different implementation of Random Forests in `R` and `Cascading Pattern` (as well as `openscoring`, which I tried later) with respect to ties in the tree voting - i.e. when an even number of trees is built (say, 500) and exactly half classify an application as `Good` and the other half as `Bad`, the handling of those situations differs. Solved it by growing an odd (501) number of trees.
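In scikit-learn the same fix is simply choosing an odd forest size, e.g. (a sketch with placeholder data; note that scikit-learn averages class probabilities rather than counting hard votes, so exact 50/50 ties are rarer there to begin with):

```
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=200, random_state=0)  # placeholder data

# An odd number of trees guarantees a hard majority vote can never tie 250-250
clf = RandomForestClassifier(n_estimators=501, random_state=0).fit(X, y)
print(clf.predict(X[:5]))
```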
null
CC BY-SA 3.0
null
2014-10-28T06:27:40.090
2014-10-28T06:27:40.090
null
null
1127
null
2364
2
null
985
0
null
Using unsupervised learning to reduce the dimensionality and then using supervised learning to obtain an accurate predictive model is commonly done. See for example [Bhat and Zaelit, 2012](http://faculty.ucmerced.edu/hbhat/BhatZaelit2012.pdf), where they first use PCA to reduce the dimension of a problem from 87 to 35, and then use L1 regression to obtain the best predictive model. This method beats non-linear tree-based models built on the entire dataset and also on its subsets. If your goal is to create a more accurate classification of data into clusters, then a commonly used technique is to use supervised learning as a method to accurately pick the number of clusters; see [Pan et al, 2013](http://jmlr.csail.mit.edu/papers/volume14/pan13a/pan13a.pdf) for a recent example. The basic approach here is to choose the number of clusters such that a supervised multi-class method can learn these clusters and predict them with the highest out-of-sample accuracy. This is one way to convince yourself that the clusters are both meaningful and predictable. Another approach, if your goal is to classify documents as being from US/USA/Australia or, for that matter, discussing soccer/American football/Australian football, could be to solve three binary classification problems that independently predict whether the document talks about soccer, American football, or Australian football. Combining the results from these three classifiers (known as binary relevance), you would also have the ability to tag a document as soccer, American football, or any combination of the above three tags.
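For the first point (PCA followed by an L1 model), a minimal scikit-learn sketch, with dimensions mirroring the 87-to-35 reduction cited above and placeholder data:

```
from sklearn.datasets import make_regression
from sklearn.decomposition import PCA
from sklearn.linear_model import Lasso
from sklearn.pipeline import make_pipeline

X, y = make_regression(n_samples=500, n_features=87, noise=1.0, random_state=0)

# Unsupervised step (PCA to 35 dims) feeding a supervised L1 model
model = make_pipeline(PCA(n_components=35), Lasso(alpha=0.1))
model.fit(X, y)
print(model.score(X, y))  # in-sample R^2 of the reduced-then-regressed model
```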
null
CC BY-SA 3.0
null
2014-10-28T07:00:01.593
2014-10-28T07:00:01.593
null
null
847
null
2365
2
null
2362
5
null
OK, all sorted - a bit embarrassing, but I forgot to normalise the data!
null
CC BY-SA 3.0
null
2014-10-28T07:14:06.873
2014-10-28T07:14:06.873
null
null
4824
null
2366
2
null
2353
1
null
You are altering weights in the wrong direction for the negative cases. The line

```
w = w + eta*Xtr(j,:)';
```

should be

```
w = w + eta*Xtr(j,:)'*ytr(j);
```

With that change I got 12% error.
null
CC BY-SA 3.0
null
2014-10-28T17:07:32.723
2014-10-28T17:07:32.723
null
null
836
null
2367
2
null
492
6
null
Instead of "recursive neural nets with back propagation" you might consider the approach used by Frantzi, et. al. at National Centre for Text Mining (NaCTeM) at University of Manchester for Termine (see: [this](http://www.nactem.ac.uk/index.php) and [this](http://personalpages.manchester.ac.uk/staff/sophia.ananiadou/IJODL2000.pdf)) Instead of deep neural nets, they "combine linguistic and statistical information".
null
CC BY-SA 4.0
null
2014-10-29T00:52:06.097
2020-08-05T08:41:02.810
2020-08-05T08:41:02.810
98307
609
null
2368
1
2370
null
63
66443
What are the common/best practices for handling time data in machine learning applications? For example, if a data set contains a column with an event timestamp, such as "2014-05-05", how can you extract useful features from this column, if any? Thanks in advance!
Machine learning - features engineering from date/time data
CC BY-SA 3.0
null
2014-10-29T05:25:55.603
2022-10-14T09:40:15.590
null
null
88
[ "machine-learning", "time-series", "feature-selection" ]
2369
2
null
2368
7
null
In several cases, data and events inside a time series are seasonal. In such cases, the month and the year of the event matter a lot. Hence, in such scenarios you can use binary variables to represent whether the event occurred in a given month/year or not (see the sketch below). Hope this answers your question. If not, kindly be a little more specific about what exactly you are trying to achieve.
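A minimal pandas sketch of such binary month indicators (the column name is hypothetical):

```
import pandas as pd

df = pd.DataFrame({"event_date": pd.to_datetime(["2014-05-05", "2014-10-27", "2014-12-01"])})

# One binary indicator column per month of the event
month_dummies = pd.get_dummies(df["event_date"].dt.month, prefix="month")
print(month_dummies)
```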
null
CC BY-SA 4.0
null
2014-10-29T07:52:41.060
2022-10-14T09:40:15.590
2022-10-14T09:40:15.590
25750
4839
null
2370
2
null
2368
57
null
I would start by graphing the time variable vs other variables and looking for trends.

## For example

![enter image description here](https://i.stack.imgur.com/QGYUC.png)

In this case there is a periodic weekly trend and a long term upwards trend. So you would want to encode two time variables:

- day_of_week
- absolute_time

## In general

There are several common time frames that trends occur over:

- absolute_time
- day_of_year
- day_of_week
- month_of_year
- hour_of_day
- minute_of_hour

Look for trends in all of these.

## Weird trends

Look for weird trends too. For example you may see rare but persistent time based trends:

- is_easter
- is_superbowl
- is_national_emergency
- etc.

These often require that you cross reference your data against some external source that maps events to time.

## Why graph?

There are two reasons that I think graphing is so important.

- Weird trends: While the general trends can be automated pretty easily (just add them every time), weird trends will often require a human eye and knowledge of the world to find. This is one reason that graphing is so important.
- Data errors: All too often data has serious errors in it. For example, you may find that the dates were encoded in two formats and only one of them has been correctly loaded into your program. There are a myriad of such problems and they are surprisingly common. This is the other reason I think graphing is important, not just for time series, but for any data.
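For the general time frames listed above, a sketch of extracting them with pandas (the timestamp column is hypothetical):

```
import pandas as pd

df = pd.DataFrame({"ts": pd.to_datetime(["2014-10-29 13:54", "2014-11-05 22:02"])})

# Seconds since epoch, plus the periodic encodings discussed above
df["absolute_time"]  = (df["ts"] - pd.Timestamp("1970-01-01")) // pd.Timedelta("1s")
df["day_of_year"]    = df["ts"].dt.dayofyear
df["day_of_week"]    = df["ts"].dt.dayofweek
df["month_of_year"]  = df["ts"].dt.month
df["hour_of_day"]    = df["ts"].dt.hour
df["minute_of_hour"] = df["ts"].dt.minute
print(df)
```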
null
CC BY-SA 3.0
null
2014-10-29T13:54:23.793
2014-11-05T22:02:14.857
2014-11-05T22:02:14.857
4808
4808
null
2371
2
null
2355
3
null
I have found a solution and will post it here, because somebody who works with GraphLab may have the same question. We can look at the example here: [Six degrees of Kevin Bacon](http://graphlab.com/learn/gallery/notebooks/graph_analytics_movies.html)

At the beginning of the program execution you need to run this command:

```
graphlab.canvas.set_target('ipynb')
```

Exactly this is the key to the whole problem (at least for me :-). It is important to know which parameter of the `set_target()` command you want to use. There are two options for the argument of this command: `'ipynb'` (which renders directly in the IPython Notebook, like in the example) or `'browser'` (which opens a new window with the plots).

On my machine (64-bit, Ubuntu) I cannot use `'ipynb'`; only `'browser'` gives me the plot back. I don't think it is necessary to change https to http, but you can do it anyway. We also have another machine (32-bit, Ubuntu) and it works with `'ipynb'`, but not with `'browser'` (without changing https to http).
null
CC BY-SA 3.0
null
2014-10-29T13:58:55.653
2014-10-30T07:57:03.453
2014-10-30T07:57:03.453
3281
3281
null
2372
2
null
2298
-1
null
Decision Trees are unstable learners and very sensitive to changes in the input parameters.
null
CC BY-SA 3.0
null
2014-10-29T14:37:00.910
2014-10-29T14:37:00.910
null
null
4847
null
2373
1
2376
null
4
312
Most online tutorials like to use a simple example to introduce machine learning: classifying unknown text as spam or not spam. They say that this is a binary-class problem. But why is this a binary-class problem? I think it is a one-class problem! I would only need positive samples from my inbox to learn what is not spam. If I take a bunch of non-spam texts as positive samples and a bunch of spam mails as negative samples, then of course it's possible to train a binary classifier and make predictions on unlabeled data, but where is the difference from the one-class approach? There I would just define a training set of all non-spam examples and train some one-class classifier. What do you think?
Detecting Spam using Machine Learning
CC BY-SA 3.0
null
2014-10-29T21:57:15.603
2014-10-30T14:48:34.907
null
null
4717
[ "machine-learning" ]
2374
2
null
2313
0
null
Your training data needs to be one set of data with samples of all the categories, because you are trying (I think) to create a model that will be fed such data. Have you given any thought to what model(s) you might use? I'm asking because pure classification models will achieve a better fit if the amount of data in each class is fairly uniform in the training data. However, regression models need the data type proportions to match the expected input.
null
CC BY-SA 3.0
null
2014-10-29T22:10:53.570
2014-10-29T22:10:53.570
null
null
1241
null
2375
2
null
2373
2
null
The problem arises if you want to classify a new example as either spam or not spam. A one-class method will only give you some score of how well a new instance fits to this class, but how do you turn this into a binary prediction without knowing how big the score would be for the other class? If you look at the Naive Bayes classifier it essentially trains a "one-class" model for each class and arrives at a prediction by choosing the class with the highest score. But this requires you to have training examples for all classes.
null
CC BY-SA 3.0
null
2014-10-30T07:31:03.903
2014-10-30T07:31:03.903
null
null
4852
null
2376
2
null
2373
6
null
Strictly speaking, "one class classification" does not make sense as an idea. If there is only one possible state of a predicted value, then there is no prediction problem. The answer is always the single class. Concretely, if you only have spam examples, you would always achieve 100% accuracy by classifying all email as spam. This is clearly wrong, and the only way to know how it is wrong is to know where the classification is wrong -- where emails are not in the spam class. So-called [one-class classification](http://en.wikipedia.org/wiki/One-class_classification) techniques are really anomaly detection approaches. They have an implicit assumption that things unlike the examples are not part of the single class, but this is just an assumption about data being probably not within the class. There's a binary classification problem lurking in there. What is wrong with a binary classifier?
null
CC BY-SA 3.0
null
2014-10-30T14:48:34.907
2014-10-30T14:48:34.907
null
null
21
null
2377
2
null
1135
1
null
Basically your machine-learning problem is: given [day-of-week, weather, departure-time, route], predict arrival time, then obtain travel time. Once your model is solid, you can then predict your travel time for each potential departure time and route, and choose the lowest. You don't really need to factor in the pit stops, if you think they're a function of the rest, or a random variable altogether. If you want a tool that can do the machine-learning aspect 'out of the box' you should try Weka. You'll have to encode the data in the specific format it expects, but other than that you won't have to do any coding (i.e. you won't need to code any of the actual machine-learning algorithms). I would discretize your departure time to make sure you have enough data. Weka will let you try out the different algorithms, see which one is best. Note that it's a regression problem as opposed to a classification problem, so only a subset of the algorithms apply. Once you've got the Weka part figured out, you can also call it programmatically, which will allow you to code the interface you want, and potentially include the automatic weather retrieval.
null
CC BY-SA 3.0
null
2014-10-30T14:59:13.243
2014-10-30T14:59:13.243
null
null
4760
null
2378
1
null
null
8
981
I'm doing some data analysis in a Statistical Pattern Recognition course using PRML. We analyzed a lot of matrix properties, like eigenvalues, column independence, positive semi-definite matrices, etc. When we are doing, for example, linear regression, we need to calculate some of those properties and fit them into the equation. So my question is about the intuition behind these matrix properties and their implications in the ML/DM literature. If anyone could answer, can you teach me the importance of eigenvalues, positive semi-definite matrices, and column independence for ML/DM, and possibly other matrix properties you think are important in studying a dataset, and why? I'd really appreciate it if someone could answer this question.
Matrix properties and machine learning/data mining
CC BY-SA 3.0
null
2014-10-30T18:22:18.907
2014-10-31T04:44:23.100
null
null
4859
[ "machine-learning", "data-mining", "dataset" ]
2379
2
null
2378
2
null
A few things where the knowledge of Linear Algebra might be helpful in the context of Machine Learning:

- Dimensionality Reduction: There are lots of problems where PCA (a special case of an SVD) followed by a simple Machine Learning method applied on the reduced dataset produces much better results than a non-parametric model on the full (non-reduced) dataset. For an example see Bhat and Zaelit, 2012, where PCA followed by linear regression performs better than more involved non-parametric models. It also suggests reasons why dimensionality reduction performs better in these cases.
- Visualizing data: Higher dimensional data is complicated to visualize and one often needs to be able to reduce the dimension of the dataset to view it. This comes in very handy when one has to "view" the results of clustering on a higher dimensional dataset.
- Numerical Accuracy: Eigenvalues are generally handy in order to understand condition numbers of matrices and hence be able to figure out if results of linear regression or other methods that require solving Ax=b would be numerically accurate. Positive definiteness of a matrix might also be able to guarantee bounds on numerical accuracy.
- Recommendations: Some methods like collaborative filtering use matrix factorization (SVD) tuned smartly to solve recommendation problems.
- Regularization: Regularization is commonly used to reduce over-fitting in machine learning problems. Most of these regularization techniques like Lasso, Tikhonov etc. have both optimization and Linear Algebra at their heart.
null
CC BY-SA 3.0
null
2014-10-31T01:38:57.483
2014-10-31T01:38:57.483
null
null
847
null
2380
2
null
2378
3
null
The importance of a concept in mathematics depends on the circumstances of its application. Sometimes, its importance relies on the fact that it allows you to carry on with what you are doing. For example, you usually need column independence (independent variables between predictors) because multiple regression will behave badly with highly correlated variables. Even worse, when some of your columns (or rows) are dependent, your matrix is not invertible. Why? Because matrix inversion $A^{-1}$ involves the determinant $1/|A|$, which is 0 when columns or rows are linearly dependent.

Eigenvalues are a common occurrence in calculations related to maximization/minimization in machine learning. Let's say you are interested in principal component analysis. A very important idea there is dimensionality reduction (you have a dataset with many variables and want to reduce the number of variables without losing too much explanatory power). One solution is to project your data onto a lower dimensional space (e.g. taking your data with 50 variables and reducing them to 5 variables). It turns out that a good projection to use is one that retains as much variation as possible, and maximization of this variation results in the eigenvalue equation $S u = \lambda u$.

In other cases, you explicitly include the eigenvalue equation of some quantity of interest because in doing so, you're changing the coordinate system in which you represent the variables. Take the case of a (multivariate) Gaussian distribution, in which the argument of the exponent is given by $\Delta = (x-\mu)^T \Sigma^{-1} (x-\mu)$. If you consider the eigenvalue equation of $\Sigma$, the exponent can be written as $\Delta = y_1^2/\lambda_1 + y_2^2/\lambda_2$ (in two dimensions). This is the equation of an ellipse only if $\lambda_1$ and $\lambda_2$ are positive. Therefore, you obtain the following graphical interpretation (Bishop, PRML, p. 81):

![enter image description here](https://i.stack.imgur.com/pNrLb.png)

Positive semi-definite matrices are used as a matter of convenience. They are well-behaved and well-understood. For instance, their eigenvalues are non-negative, and if you remember the previous paragraph, the argument $\Delta$ required positive eigenvalues.

By now, you can see why some concepts are very popular: you need them for your calculations, or they need each other. I can recommend a couple of books:

- Linear Algebra: A Modern Introduction by David Poole
- Understanding Complex Datasets: Data Mining with Matrix Decompositions by David Skillicorn

The second recommendation is more specialized and requires a decent understanding of the basics, but it is of great help to understand matrix decompositions.
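To make the ellipse picture concrete, here is a small numpy sketch (the covariance values are made up) that recovers the axis directions and lengths from the eigendecomposition of the covariance matrix:

```
import numpy as np

sigma = np.array([[2.0, 0.8],
                  [0.8, 1.0]])  # a made-up covariance matrix

# Eigenvectors give the ellipse's axis directions; eigenvalues lambda_i set
# the axis lengths (contour: y1^2/lambda_1 + y2^2/lambda_2 = const)
eigvals, eigvecs = np.linalg.eigh(sigma)
print(eigvals)           # non-negative, since sigma is positive semi-definite
print(np.sqrt(eigvals))  # standard deviations along the principal axes
```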
null
CC BY-SA 3.0
null
2014-10-31T04:44:23.100
2014-10-31T04:44:23.100
null
null
4621
null
2381
1
null
null
3
112
I'm examining the activity of customers over the years, who have about one event per year. This results in many short time series, for which I found the following distribution (hit/miss over 4 years, sorted by probability in the data):

```
0000 : 0.31834
0001 : 0.17582
0010 : 0.13605
0100 : 0.13554
1000 : 0.12886
0011 : 0.01717
1100 : 0.01650
0110 : 0.01578
0101 : 0.01220
1010 : 0.01117
1001 : 0.00883
0111 : 0.00571
1110 : 0.00565
1111 : 0.00496
1101 : 0.00384
1011 : 0.00351
```

Apparently a purely uncorrelated binomial model won't do, but one can observe that if both the number of 1's and the number of 11's coincide, then the probabilities are approximately equal (apart from a small recency effect for 0001). Can you see a way to approach such data to deduce a probabilistic model? Basically, one where I have only a few probability parameters which roughly explain this distribution?
How to find a model for a short discrete time-series?
CC BY-SA 3.0
null
2014-10-31T08:50:23.160
2014-10-31T10:02:13.603
2014-10-31T10:02:13.603
723
723
[ "time-series" ]
2383
2
null
2343
3
null
- Gather app reviews for a single app and try to find the most important features that you can mine for the app developer.
- Try predicting the next word in a sentence based on the previous words; e.g. SwiftKey does this.
- Get question data from Stack Overflow and try to predict the tags for every post on Stack Overflow.
null
CC BY-SA 3.0
null
2014-10-31T13:05:49.263
2014-10-31T13:05:49.263
null
null
3211
null
2384
1
2393
null
4
2235
I'm planning to write a classification program that is able to classify unknown text into around 10 different categories, and if none of them fits, it would be nice to know that. It is also possible that more than one category is right. My predefined categories are:

```
c1 = "politics"
c2 = "biology"
c3 = "food"
...
```

I'm thinking about the right approach for representing my training data and about what kind of classification is the right one. The first challenge is finding the right features. If I only have text (250 words each), what method would you recommend to find the right features? My first approach is to remove all stop words and use the POS tagger ([Stanford NLP POS-Tagger](http://nlp.stanford.edu/software/tagger.shtml)) to find nouns, adjectives, etc. I count them and use all frequently appearing words as features. E.g. for politics, I have around 2,000 text entities. With the mentioned POS tagger I found:

```
law: 841
capitalism: 412
president: 397
democracy: 1007
executive: 112
...
```

Would it be right to use only that as features? The training set would then look like:

```
Training set for politics:
feature law numeric
feature capitalism numeric
feature president numeric
feature democracy numeric
feature executive numeric
class politics,all_others

sample data:
politics,5,7,1,9,3
politics,14,4,6,7,9
politics,9,9,9,4,2,1
politics,5,8,0,7,6
...
all_others,0,2,4,1,0
all_others,0,0,1,1,1
all_others,7,4,0,0,0
...
```

Would this be a right approach for binary classification? Or how would I define my sets? Or is multi-class classification the right approach? Then it would look like:

```
Training set for politics:
feature law numeric
feature capitalism numeric
feature president numeric
feature democracy numeric
feature executive numeric
feature genetics numeric
feature muscle numeric
feature blood numeric
feature burger numeric
feature salad numeric
feature cooking numeric
class politics,biology,food

sample data:
politics,5,7,1,9,3,0,0,2,1,0,1
politics,14,4,6,7,9,0,0,0,0,0,1
politics,9,9,9,4,2,1,1,1,1,0,3
politics,5,8,0,7,6,2,2,0,1,0,1
...
biology,0,2,4,1,0,4,19,5,0,2,2
biology,0,0,1,1,1,12,9,9,2,1,1
biology,7,4,0,0,0,10,10,3,0,0,7
...
```

What would you say?
Text-Classification-Problem, what is the right approach?
CC BY-SA 3.0
null
2014-10-31T17:55:25.723
2020-08-18T16:28:32.957
null
null
4717
[ "machine-learning" ]
2385
2
null
2343
3
null
[Kaggle](http://www.kaggle.com/competitions) has a bunch of good practice datasets and basic tutorials.
null
CC BY-SA 3.0
null
2014-10-31T22:40:24.513
2014-10-31T22:40:24.513
null
null
192
null
2386
2
null
2384
-1
null
Sebastian Raschka's great article on the Bayesian approach to text classification should be very helpful for your task. I also highly recommend his excellent [blog](http://sebastianraschka.com/articles.html) on data science topics as an additional general reference. You may also check this [educational report](http://www.datasciencecentral.com/profiles/blogs/my-data-science-apprenticeship-project) on text classification. It might provide you with some additional ideas.
null
CC BY-SA 4.0
null
2014-11-01T10:07:09.843
2020-08-18T16:28:32.957
2020-08-18T16:28:32.957
98307
2452
null
2387
2
null
2343
3
null
Check my answer on the related question here: [https://datascience.stackexchange.com/a/843/2452](https://datascience.stackexchange.com/a/843/2452). However, given your situation, you might need to work on something simpler. For this, it might be a good idea to review the recommended literature in the open source data science master's curriculum: [http://datasciencemasters.org](http://datasciencemasters.org). Many of the sources referenced there are free and easily available online. I'm pretty sure that many of these sources contain examples of simple projects that you could (re)implement, extend, or otherwise use for your task.
null
CC BY-SA 3.0
null
2014-11-01T10:16:55.557
2014-11-01T10:16:55.557
2017-04-13T12:50:41.230
-1
2452
null
2391
1
null
null
4
943
I am a newbie to data science with a typical problem. I have a data set with metric1, metric2, and metric3. All these metrics are interdependent. I want to detect anomalies in metric3. Currently, I am using NuPIC from numenta.org for my analysis and it doesn't seem to be effective. Is there any ML library which can detect anomalies across multiple parameters?
Anomaly detection in multiple parameters
CC BY-SA 3.0
null
2014-11-02T07:20:32.603
2015-03-17T20:13:00.767
2014-11-03T08:27:42.083
21
4887
[ "machine-learning", "bigdata", "statistics", "anomaly-detection" ]
2392
1
null
null
2
132
I tried to use the OMP algorithm available in scikit-learn. My net data size, which includes both the target signal and the dictionary, is ~1 GB. However, when I ran the code, it exited with a memory error. The machine has 16 GB RAM, so I don't think this should have happened. I added some logging to locate where the error occurred and found that the data got loaded completely into numpy arrays; it was the algorithm itself that caused the error. Can someone help me with this or suggest a more memory-efficient algorithm for feature selection? Or is subsampling the data my only option? Are there some good deterministic subsampling techniques?

EDIT: Relevant code piece:

```
from sklearn.linear_model import OrthogonalMatchingPursuit

n = 8
y = mydata[:, 0]                          # target signal
X = mydata[:, [1, 2, 3, 4, 5, 6, 7, 8]]  # dictionary columns

print "here"
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=5, copy_X=False, normalize=True)
omp.fit(X, y)
coef = omp.coef_
print omp.coef_

idx_r, = coef.nonzero()
for i in idx_r:
    print coef[i], vars[i], "\n"
```

The error I get:

```
File "/usr/local/lib/python2.7/dist-packages/sklearn/base.py", line 324, in score
    return r2_score(y, self.predict(X), sample_weight=sample_weight)
File "/usr/local/lib/python2.7/dist-packages/sklearn/metrics/metrics.py", line 2332, in r2_score
    numerator = (weight * (y_true - y_pred) ** 2).sum(dtype=np.float64)
MemoryError
```
scikit-learn OMP mem error
CC BY-SA 3.0
null
2014-11-02T11:36:28.143
2023-05-22T08:05:20.607
2015-11-13T11:07:48.370
21
4889
[ "python", "bigdata", "feature-selection", "scikit-learn", "scalability" ]
2393
2
null
2384
3
null
I think perhaps the first thing to decide that will help clarify some of your other questions is whether you want to perform binary classification or multi-class classification. If you're interested in classifying each instance in your dataset into more than one class, then this brings up a set of new concerns regarding setting up your data set, the experiments you want to run, and how you plan to evaluate your classifier(s). My hunch is that you could formulate your task as a binary one where you train and test one classifier for each class you want to predict, and simply set up the data matrix so that there are two classes to predict - (1) the one you're interested in classifying and (2) everything else.

In that case, instead of your training set looking like this (where each row is a document, columns 1-3 contain features for that document, and the class column is the class to be predicted):

```
1        2        3        class
feature1 feature2 feature3 politics
feature1 feature2 feature3 law
feature1 feature2 feature3 president
feature1 feature2 feature3 politics
```

it would look like the following in the case where you're interested in detecting the politics class against everything else:

```
1        2        3        class
feature1 feature2 feature3 politics
feature1 feature2 feature3 non-politics
feature1 feature2 feature3 non-politics
feature1 feature2 feature3 politics
```

You would need to do this process for each class you're interested in predicting, then train and test one classifier per class, and evaluate each classifier according to your chosen metrics (usually accuracy, precision, or recall, or some variation thereof).

As far as choosing features, this requires quite a bit of thinking. Features can be highly dependent on the type of text you're trying to classify, so be sure to explore your dataset and get a sense for how people are writing in each domain. Qualitative investigation isn't enough to decide once and for all what are good features, but it is a good way to get ideas. Also, look into [TF-IDF](http://en.wikipedia.org/wiki/Tf%E2%80%93idf) weighting of terms instead of just using their frequency within each instance of your dataset. This will help you pick up on (a) terms that are prevalent within a document (and possibly a target class) and (b) terms that distinguish a given document from other documents. I hope this helps a little.
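For the TF-IDF suggestion, a minimal scikit-learn sketch (the documents are placeholders):

```
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "the president signed the law",          # placeholder "politics" text
    "cells divide and genes are expressed",  # placeholder "biology" text
]

vec = TfidfVectorizer(stop_words="english")
X = vec.fit_transform(docs)  # rows = documents, columns = TF-IDF weights
print(vec.get_feature_names_out())
print(X.toarray().round(2))
```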
null
CC BY-SA 3.0
null
2014-11-03T03:22:30.393
2014-11-03T03:22:30.393
null
null
4897
null
2394
1
2397
null
0
189
In most data acquisition settings it is useful to tag your data with time and location. If I write the data to a csv file, what are the best formats I can use for these two variables if I want to create a heatmap on Google Maps?
Best format for recording time stamp and GPS
CC BY-SA 3.0
null
2014-11-03T12:41:51.737
2016-04-22T17:04:08.310
2016-04-22T17:04:08.310
16284
3106
[ "data-formats", "geospatial" ]
2395
1
null
null
15
5737
I have a continuous variable, sampled over a period of a year at irregular intervals. Some days have more than one observation per hour, while other periods have nothing for days. This makes it particularly difficult to detect patterns in the time series, because some months (for instance October) are highly sampled, while others are not.

![enter image description here](https://i.stack.imgur.com/7MEXt.png)

My question is: what would be the best approach to model this time series?

- I believe most time series analysis techniques (like ARMA) need a fixed frequency. I could aggregate the data in order to have a constant sample, or choose a subset of the data that is very detailed. With both options I would be missing some information from the original dataset that could unveil distinct patterns.
- Instead of decomposing the series into cycles, I could feed the model the entire dataset and expect it to pick up the patterns. For instance, I transformed the hour, weekday and month into categorical variables and tried a multiple regression, with good results ($R^2 = 0.71$).

I have the idea that machine learning techniques such as ANNs can also pick up these patterns from uneven time series, but I was wondering if anybody has tried that, and could provide me some advice on the best way of representing time patterns in a neural network.
Modelling Unevenly Spaced Time Series
CC BY-SA 3.0
null
2014-11-03T16:51:47.467
2020-07-11T01:35:27.383
null
null
3159
[ "neural-network", "time-series", "regression" ]
2397
2
null
2394
1
null
As Spacedman put it, "best" is pretty subjective. However, as we have found, a good format for time is [Unix time](http://en.wikipedia.org/wiki/Unix_time) (aka POSIX time, aka Epoch time). Most databases support it and it is still pretty human readable. For location, we like [decimal degrees](http://en.wikipedia.org/wiki/Decimal_degrees) as it is easy to read and store, and it is compatible with the Google Maps API. It's also easy to convert to other formats if needed.
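A small Python illustration of writing a record in these two formats (the values and file name are made up):

```
import csv
from datetime import datetime, timezone

ts = datetime(2014, 11, 3, 22, 10, tzinfo=timezone.utc).timestamp()  # Unix time
lat, lon = 40.446195, -79.948862  # decimal degrees (made-up location)

with open("points.csv", "w", newline="") as f:
    csv.writer(f).writerow([int(ts), lat, lon])  # 1415052600,40.446195,-79.948862
```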
null
CC BY-SA 3.0
null
2014-11-03T22:10:13.263
2014-11-03T22:10:13.263
null
null
4907
null
2398
1
2401
null
3
431
I'd like to apply some of the more complex supervised machine learning techniques in Python - deep learning, generalized additive models, proper implementation of regularization, other cool stuff I don't even know about, etc. Any recommendations on how I could find expert ML folks who would like to collaborate on projects?
Python Machine Learning Experts
CC BY-SA 3.0
null
2014-11-04T00:58:00.137
2014-11-04T13:41:52.240
null
null
4910
[ "machine-learning", "python" ]
2399
2
null
1091
1
null
Check out [LightSide](http://lightsidelabs.com/what/research/) for a GUI introduction to text mining in general, and more specifically for generating features quickly. It was developed (and I believe is still being developed) by researchers at CMU and it's how I got hooked on text mining. There is quite a bit of functionality right out of the box, and you can quickly import CSV data into the application, extract features, and start running experiments. It also makes use of suites of algorithms from several other prominent, well-regarded open source toolkits like Weka and LibLinear so you know you're using something credible under the hood. That being said, both of these last mentioned toolkits are definitely worth checking out for added functionality, even if they have a bit of a steeper learning curve. Hope that helps.
null
CC BY-SA 3.0
null
2014-11-04T04:23:37.107
2014-11-04T04:23:37.107
null
null
4897
null
2401
2
null
2398
3
null
You could try some competitions from [kaggle](http://kaggle.com). Data Science courses from Coursera, edX, etc also provide forums for discussion. Linkedin or freelance sites could be other possibilities.
null
CC BY-SA 3.0
null
2014-11-04T13:41:52.240
2014-11-04T13:41:52.240
null
null
3051
null
2403
1
null
null
21
11439
I had a conversation with someone recently and mentioned my interest in data analysis and how I intended to learn the necessary skills and tools. They suggested to me that while it is great to learn the tools and build the skills, there is little point in doing so unless I have specialized knowledge in a specific field. They basically summed it up by saying I'd just be like a builder with a pile of tools who could build a few wooden boxes, and maybe build better things (cabins, cupboards etc.), but without knowledge in a specific field I'd never be a builder people would come to for a specific product. Has anyone encountered this, or do you have any input on what to make of it? It would seem that, if this were true, one would have to learn the data science aspects of things and then learn a new field just to become specialized.
Data science without knowledge of a specific topic, is it worth pursuing as a career?
CC BY-SA 3.0
null
2014-11-05T09:24:58.003
2021-05-28T09:43:34.023
null
null
4836
[ "education", "beginner", "career" ]
2405
2
null
2269
1
null
Try out AirXcell : [AirXcell calculation software](http://airxcell.com). See documentation [Use AirXCell as an r Console](http://www.airxcell.com/doc/howTo/useConsole.html)
null
CC BY-SA 3.0
null
2014-11-05T15:16:36.867
2014-11-05T15:16:36.867
null
null
4929
null
2406
2
null
2403
51
null
Drew Conway published the [Data Science Venn Diagram](http://drewconway.com/zia/2013/3/26/the-data-science-venn-diagram), with which I heartily agree:

![Data Science Venn Diagram](https://i.stack.imgur.com/7W0VM.png)

On the one hand, you should really read his post. On the other hand, I can offer my own experience: my subject matter expertise (which I like better as a term than "Substantive Expertise", because you should really also have "Substantive Expertise" in math/stats and hacking) is in the retail business, my math/stats are forecasting and inferential statistics, and my hacking skills lie in R.

From this vantage point, I can talk to and understand retailers, and someone who doesn't have at least a passing knowledge of this field will have to face a steep learning curve in a project with retailers. As a side gig, I do statistics in psychology, and it's exactly the same there. And even with quite some knowledge of the hacking/math/stats part of the diagram, I would have a hard time getting up to speed in, say, credit scoring or some other new subject field.

Once you have a certain amount of math/stats and hacking skills, it is much better to acquire a grounding in one or more subjects than to add yet another programming language to your hacking skills, or yet another machine learning algorithm to your math/stats portfolio. After all, once you have a solid math/stats/hacking grounding, you could if need be learn such new tools from the web or from textbooks in a relatively short time period. But the subject matter expertise, on the other hand, you will likely not be able to learn from scratch if you start from zero. And clients will rather work with some data scientist A who understands their specific field than with another data scientist B who first needs to learn the basics - even if B is better in math/stats/hacking.

Of course, all this will also mean that you will never become an expert in any of the three fields. But that's fine, because you are a data scientist, not a programmer or a statistician or a subject matter expert. There will always be people in the three separate circles who you can learn from. Which is part of what I like about data science.

---

EDIT: A little while and a few thoughts later, I'd like to update this post with a new version of the diagram. I still think that Hacking Skills, Math & Statistics Knowledge and Substantive Expertise (shortened to "Programming", "Statistics" and "Business" for legibility) are important... but I think that the role of Communication is important, too. All the insights you derive by leveraging your hacking, stats and business expertise won't make a bit of a difference unless you can communicate them to people who may not have that unique blend of knowledge. You may need to explain your statistical insights to a business manager who needs to be convinced to spend money or change processes. Or to a programmer who doesn't think statistically. [Caulcutt (2021, Significance)](https://doi.org/10.1111/1740-9713.01531) is a short article that says much the same, but gives more detail than I do.

So here is the new data science Venn diagram, which also includes communication as one indispensable ingredient. I have labeled the areas in ways that should guarantee maximum flaming, while being easy to remember. Comment away.

[](https://i.stack.imgur.com/aiQeT.png)

R code:

```
draw.ellipse <- function(center,angle,semimajor,semiminor,radius,h,s,v,...) {
  shape <- rbind(c(cos(angle),-sin(angle)),c(sin(angle),cos(angle))) %*% diag(c(semimajor,semiminor))
  tt <- seq(0,2*pi,length.out=1000)
  foo <- matrix(center,nrow=2,ncol=length(tt),byrow=FALSE) + shape%*%(radius*rbind(cos(tt),sin(tt)))
  polygon(foo[1,],foo[2,],col=hsv(h,s,v,alpha=0.5),border="black",...)
}

name <- function(x,y,label,cex=1.2,...) text(x,y,label,cex=cex,...)

png("Venn.png",width=600,height=600)
opar <- par(mai=c(0,0,0,0),lwd=3,font=2)
plot(c(0,100),c(0,90),type="n",bty="n",xaxt="n",yaxt="n",xlab="",ylab="")
draw.ellipse(center=c(30,30),angle=0.75*pi,semimajor=2,semiminor=1,radius=20,h=60/360,s=.068,v=.976)
draw.ellipse(center=c(70,30),angle=0.25*pi,semimajor=2,semiminor=1,radius=20,h=83/360,s=.482,v=.894)
draw.ellipse(center=c(48,40),angle=0.7*pi,semimajor=2,semiminor=1,radius=20,h=174/360,s=.397,v=.8)
draw.ellipse(center=c(52,40),angle=0.3*pi,semimajor=2,semiminor=1,radius=20,h=200/360,s=.774,v=.745)
name(50,90,"The Data Scientist Venn Diagram",pos=1,cex=2)
name(8,62,"Communi-\ncation",cex=1.5,pos=3)
name(30,78,"Statistics",cex=1.5)
name(70,78,"Programming",cex=1.5)
name(92,62,"Business",cex=1.5,pos=3)
name(10,45,"Hot\nAir")
name(90,45,"The\nAccountant")
name(33,65,"The\nData\nNerd")
name(67,65,"The\nHacker")
name(27,50,"The\nStats\nProf")
name(73,50,"The\nIT\nGuy")
name(50,55,"R\nCore\nTeam")
name(38,38,"The\nGood\nConsultant")
name(62,38,"Drew\nConway's\nData\nScientist")
name(50,24,"The\nperfect\nData\nScientist!")
name(31,18,"Comp\nSci\nProf")
name(69,18,"The\nNumber\nCruncher")
name(42,11,"Head\nof IT")
name(58,11,"Ana-\nlyst")
name(50,5,"The\nSalesperson")
par(opar)
dev.off()
```
null
CC BY-SA 4.0
null
2014-11-05T17:03:40.513
2021-05-28T09:43:34.023
2021-05-28T09:43:34.023
2853
2853
null
2407
2
null
1091
1
null
Another library that I use that has not been mentioned yet is gensim. Its [Dictionary](http://radimrehurek.com/gensim/corpora/dictionary.html) module allows you to convert a list of words into a list of (id,count) pairs. It also has an `allow_update` variable that updates the dictionary in the case that new words are encountered at runtime. It also has built-in support for TF-IDF, LSI, and LDA, among other models.
null
CC BY-SA 3.0
null
2014-11-05T17:07:02.563
2014-11-05T17:07:02.563
null
null
1097
null
2410
2
null
2395
4
null
ARIMA, Exponential Smoothing and others indeed require evenly spaced sample points. As you write, you could bucketize your data (say into days), but as you also write, you would lose information. In addition, you may end up with missing values, so you would need to impute, since ARIMA is not very good at handling missing values. One alternative, as you again write, is to feed time dummies into a regression framework. I personally do not really like categorical dummies, because this implies a sharp cutoff between neighboring categories. This is usually not very natural. So I would rather look at periodic splines with different periodicities. This approach has the advantage of dealing with your uneven sampling and also with missing values. Be very careful about interpreting $R^2$. In-sample fit is notoriously misleading as a measure of out-of-sample forecast accuracy ([see here](https://www.otexts.org/fpp/2/5)). I would argue that this disconnect between in-sample fit and out-of-sample forecast accuracy also means that there is no connection between in-sample fit and how well a model "understood" the data, even if your interest lies not in forecasting, but only in modeling per se. My philosophy is that if you can't forecast a time series well, you haven't understood it in any meaningful sense. Finally, don't overdo the modeling. Just from eyeballing your data, it is obvious that something happened in June, on one day in August and in September/October. I suggest you first find out what this something was and include this in your model, e.g., as explanatory variables (which you can include in ARIMAX if you want to). What happened there is obviously not seasonality.
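Not the splines themselves, but a minimal numpy sketch of the same periodic-regression idea using sine/cosine (Fourier) terms, a close cousin of periodic splines that likewise handles uneven sampling (all data here is simulated):

```
import numpy as np

# t: observation times in days (irregularly spaced), y: simulated observations
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 365, 200))
y = 10 + 3 * np.sin(2 * np.pi * t / 7) + rng.normal(0, 1, t.size)

# Periodic basis: weekly and yearly sine/cosine pairs plus a linear trend
Xb = np.column_stack([
    np.ones_like(t), t,
    np.sin(2 * np.pi * t / 7),   np.cos(2 * np.pi * t / 7),
    np.sin(2 * np.pi * t / 365), np.cos(2 * np.pi * t / 365),
])
coef, *_ = np.linalg.lstsq(Xb, y, rcond=None)
print(coef)  # the weekly sine coefficient should come out near 3
```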
null
CC BY-SA 3.0
null
2014-11-05T17:22:00.213
2014-11-05T17:22:00.213
null
null
2853
null
2411
2
null
2403
2
null
Sure, you can. Companies are clamoring for data scientists. Be careful though that they all interpret the term differently. Depending on the company you might find yourself asked to do anything from statistics to writing production code. Either one is a full-time job in itself and you have to be prepared for both, so asking for deep specialized knowledge on top of that's not reasonable, in my opinion, and the companies I've talked to stressed the other two areas (esp. the programming). However, I found that it helps to be familiar with the types of problems that you might face. Depending on the sector, that could be anomaly detection, recommendation/personalization, prediction, record linkage, etc. These are things you can learn as examples at the same times as maths and programming.
null
CC BY-SA 3.0
null
2014-11-05T17:50:39.573
2014-11-05T17:50:39.573
null
null
381
null
2412
1
null
null
2
236
# My Background

I am a graduate student in Civil Engineering. For the analysis of road traffic data (vehicle trajectories as time series) I work with big data sets, mostly about a million data points or more. I started using the R language when MS Excel could not open the big data files. Using basic statistics knowledge and R code I developed a few algorithms to identify certain patterns in the data, which worked for many applications. But I still lack serious programming skills in R. Now I am familiar with basic inferential statistics and R packages (plyr, dplyr, ggplot2, etc.).

Recently I came to know that Machine Learning algorithms also help in identifying patterns in the data through supervised/unsupervised learning, and their application might improve the accuracy of predicting certain 'behaviors' of drivers using the traffic data.

# Question

Having basic knowledge of statistics and R, I want to learn about data science/machine learning as a beginner. I know that some concepts in statistics and ML overlap, and that might bridge the gap in my learning of ML. Keeping my background in mind, what resources (books/online courses) would you recommend to start learning data science and apply it in my field?
Learning resources for Data Science for applications in road traffic data?
CC BY-SA 3.0
null
2014-11-05T18:57:55.037
2017-05-19T16:13:40.213
2017-05-19T16:13:40.213
21
4933
[ "self-study" ]
2413
2
null
2391
2
null
One way to use both metric1 and metric2 to find anomalies in metric3 is residual analysis. In your case, this requires creating a predictive model with metric1 and metric2 as the predictors and metric3 as the response variable. Then, calculate the residuals for metric3 as its true value minus its predicted value. Now you can report all members of the lowest decile [or any other percentile] as one kind of anomaly and all members of the highest decile [or any other percentile] as another kind of anomaly.
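A minimal sketch of this residual-analysis recipe (the data is simulated):

```
import numpy as np
from sklearn.linear_model import LinearRegression

# Simulated data: metric3 depends on metric1 and metric2 plus noise
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 2))                                 # metric1, metric2
y = 2 * X[:, 0] - X[:, 1] + rng.normal(scale=0.5, size=1000)   # metric3

residuals = y - LinearRegression().fit(X, y).predict(X)

# Flag the extreme deciles of the residual distribution as anomalies
lo, hi = np.percentile(residuals, [10, 90])
anomalies = np.where((residuals < lo) | (residuals > hi))[0]
print(len(anomalies))  # ~200 of 1000 points flagged
```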
null
CC BY-SA 3.0
null
2014-11-05T21:22:09.193
2014-11-05T21:22:09.193
null
null
847
null
2414
2
null
2412
1
null
The best way to learn data science is through problem solving. I suggest you head over to Kaggle and work through the for-knowledge problems. To get a good start on Machine Learning problems, acquaint yourself with the tree package in R. This will help you understand how decision trees work and, building upon that, how random forests, gradient boosting machines and other sophisticated tree-based algorithms work. Then there are SVMs and deep learning models. To get an understanding of unsupervised learning problems, learn k-means and employ it for clustering.

Other general concepts/ideas to understand are:

- cross-validation
- overfitting, regularization
- bias-variance trade-off
- dimensionality reduction/variable selection
- generalization error
- ensemble learning

For books, the most common recommendation to anyone who is familiar with statistics and wants to get into Machine Learning is ["The Elements of Statistical Learning"](http://statweb.stanford.edu/~tibs/ElemStatLearn/) by Hastie, Tibshirani, and Friedman.
null
CC BY-SA 3.0
null
2014-11-05T21:55:43.587
2014-11-05T21:55:43.587
null
null
847
null
2416
1
null
null
6
785
I would like to ask your opinion on how to choose a similarity measure. I have a set of vectors of length N, each element of which can contain either 0 or 1. The vectors are actually ordered sequences, so the position of each element is important. Suppose I have three vectors of length 10: x1, x2, x3. x1 has three 1s at positions 6, 7, 8 (indexes start from 1). Both x2 and x3 have an additional 1, but x2 has it at position 9 while x3 has it at position 1. I am looking for a metric according to which x1 is more similar to x2 than to x3, in that the additional 1 is closer to the "bulk" of ones. I guess this is a relatively common problem, but I am confused about the best way to approach it. Many thanks in advance!
Similarity measure for ordered binary vectors
CC BY-SA 3.0
null
2014-11-06T07:38:33.490
2014-11-07T09:04:17.603
null
null
1004
[ "similarity" ]
2417
2
null
2416
3
null
If speed isn't a great concern you could use a [KDE](http://en.wikipedia.org/wiki/Kernel_density_estimation) with a high bandwidth to pick up the similarity between neighboring elements, then an appropriate metric like the [K-L divergence](http://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence). Similarity and divergence are complementary, of course, so as a final step you would have to relate them; e.g., `sim(A, B) = exp[- KLD(A, B)]`
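A rough sketch of this idea, using a Gaussian smoothing of the binary vectors as a stand-in for a full KDE:

```
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.stats import entropy  # entropy(p, q) is the K-L divergence

def smooth(v, bandwidth=1.5):
    d = gaussian_filter1d(np.asarray(v, float), sigma=bandwidth)
    return d / d.sum()  # normalize to a probability distribution

x1 = [0, 0, 0, 0, 0, 1, 1, 1, 0, 0]
x2 = [0, 0, 0, 0, 0, 1, 1, 1, 1, 0]
x3 = [1, 0, 0, 0, 0, 1, 1, 1, 0, 0]

def sim(a, b):
    return np.exp(-entropy(smooth(a), smooth(b)))  # sim = exp(-KLD)

print(sim(x1, x2), sim(x1, x3))  # x1 should come out closer to x2 than to x3
```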
null
CC BY-SA 3.0
null
2014-11-06T08:52:22.617
2014-11-06T08:52:22.617
null
null
381
null
2418
2
null
2303
0
null
The [R Programming Wikibook](http://en.wikibooks.org/wiki/R_Programming) is a nice collaborative handbook for R.
null
CC BY-SA 3.0
null
2014-11-06T13:56:02.497
2014-11-06T13:56:02.497
null
null
187
null
2419
2
null
2416
4
null
One thing you could do is fuzzify your vectors: replace each 1 by (for example) 0.4 in its position, 0.2 in the neighbouring positions, and 0.1 in the second position over. Then add up what's in each position. With these fuzzified vectors, you can apply a similarity metric either based on a distance or one like cosine similarity. Your example would produce (showing only the first decimal of each position):

    0000011100 -> 0001378731
    0000011110 -> 0001378873
    1000011100 -> 4211378731

cos(x1, x2) = 0.9613, cos(x1, x3) = 0.9469
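A sketch reproducing this fuzzification with a convolution:

```
import numpy as np

kernel = np.array([0.1, 0.2, 0.4, 0.2, 0.1])  # the fuzzifying weights above

def fuzzify(v):
    # Spread each 1 over its neighbourhood, as described above
    return np.convolve(v, kernel, mode="same")

def cos_sim(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

x1 = fuzzify(np.array([0, 0, 0, 0, 0, 1, 1, 1, 0, 0], float))
x2 = fuzzify(np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 0], float))
x3 = fuzzify(np.array([1, 0, 0, 0, 0, 1, 1, 1, 0, 0], float))

print(cos_sim(x1, x2), cos_sim(x1, x3))  # x2 comes out more similar to x1 than x3
```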
null
CC BY-SA 3.0
null
2014-11-06T15:19:35.017
2014-11-06T15:19:35.017
null
null
4760
null
2420
2
null
1216
0
null
Keep in mind that "big data" is an increasingly trendy thing for a company to say they're involved in. Higher ups might read an article about it in HBR, and say to themselves, "I've got to get me some of that" (not that they're necessarily wrong). What this means for you is that the advanced analytics isn't as necessary for that company as just getting something up and running might be. Luckily for you, most of the components said companies might need are free. Moreover, I believe both Hortonworks and Cloudera have free "sandbox" virtual machines, which you can run on your PC, to play around with and get your bearings. Advanced analytics on big data platforms are valuable, to be sure, but many companies need to learn to crawl before they can run.
null
CC BY-SA 3.0
null
2014-11-06T17:56:06.257
2014-11-06T17:56:06.257
null
null
4943
null
2421
2
null
1216
0
null
This is a really strange question in my opinion. Why would you move in a new direction if you are not sure that you love this new direction, or at least find it very interesting? If you do love Big Data, why do you care about the intelligent PhD creatures that are already in the field? The same number of PhD creatures is in every area of IT. Please have a quick read of this very nice article [http://www.forbes.com/sites/louisefron/2013/09/13/why-you-cant-find-a-job-you-love/](http://www.forbes.com/sites/louisefron/2013/09/13/why-you-cant-find-a-job-you-love/) and then ask yourself whether you love Big Data enough and are ready to add your grain of sand to the mountain of knowledge.
null
CC BY-SA 3.0
null
2014-11-06T21:36:51.897
2014-11-06T21:36:51.897
null
null
4944
null
2422
1
2423
null
3
2431
I'm just starting to work on a relatively large dataset after the ML course on Coursera, trying to work on [https://archive.ics.uci.edu/ml/datasets/YearPredictionMSD](https://archive.ics.uci.edu/ml/datasets/YearPredictionMSD). I got an accuracy of 5.2 on the training and test sets with linear regression using gradient descent in Octave. I tried adding all possible quadratic features (515345 instances and 4275 features), but the code just won't stop executing on my HP Pavilion g6 2320tx with 4 GB RAM running Ubuntu 14.04. Is this beyond the data size capacity of Octave?
Clarification about Octave data size limit
CC BY-SA 3.0
null
2014-11-07T04:13:44.583
2014-11-07T08:28:07.227
2014-11-07T08:28:07.227
847
4947
[ "octave" ]
2423
2
null
2422
2
null
You have about 4 GB of RAM on your machine and Octave is an in-memory application. If you want to work with 515345 instances and 4275 features, assuming double precision (i.e. 8 bytes per value), you would need 515345 × 4275 × 8 bytes ≈ 17.6 GB of memory. Even if you were using 4 bytes for each data point, you would require at least 9 GB for the computation to go through. So the issue in this case might not be an Octave memory restriction, but simply your available RAM. See [here](https://stackoverflow.com/questions/565806/is-anyone-using-64-bit-build-of-octave) for further details on Octave's memory usage.
null
CC BY-SA 3.0
null
2014-11-07T05:53:25.600
2014-11-07T06:04:59.140
2017-05-23T12:38:53.587
-1
847
null
2424
2
null
2303
0
null
I would recommend these materials:

- Python
  - Python for Data Analysis - a book which nicely covers the Pandas workflow with IPython.
  - Hands-On Machine Learning with Scikit-Learn and TensorFlow - a slightly more advanced book about using Scikit-Learn and TensorFlow in data science projects.
- R
  - Coursera Data Science Specialisation - nine free, time-consuming one-month courses which introduce R step by step from the beginning up to machine learning.
  - Data Camp courses - at least 4 free courses covering topics from data analysis.
null
CC BY-SA 3.0
null
2014-11-07T08:09:40.260
2017-07-10T06:48:41.050
2017-07-10T06:48:41.050
82
82
null
2425
2
null
2416
1
null
Another possibility is the [Earth Mover's Distance](http://en.wikipedia.org/wiki/Earth_mover's_distance). I applied it in computer vision, but I think it can be adapted to your specific problem.
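For one-dimensional distributions, SciPy ships an implementation of the 1-D Earth Mover's Distance as `scipy.stats.wasserstein_distance`; here is a minimal sketch with made-up sample arrays:

```python
from scipy.stats import wasserstein_distance

# Two hypothetical samples whose empirical distributions we compare.
u = [0.0, 1.0, 3.0]
v = [5.0, 6.0, 8.0]

# The minimal "work" needed to transform one distribution into the other.
print(wasserstein_distance(u, v))  # 5.0
```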
null
CC BY-SA 3.0
null
2014-11-07T09:04:17.603
2014-11-07T09:04:17.603
null
null
2576
null
2426
1
null
null
3
88
I have a data set that is pivoted into the following format: [key] [id] [0] [1] [5] [10] [15] [60] [120] [180] ... [365]. So key could be [Products] [1000] [15,000] [4000]... etc., where Products is the category of item being reviewed and key is the identifier for the product; the remaining fields (0, 1, ..., 180, ..., 365) are individual daily samples identifying how many units of product "x" were logged as either sold, in stock, etc. What I need to do is perform some kind of analysis on an entire slew of products and their inventory levels; i.e., on each import of data I need to make sure the incoming data is accurate, or predictably accurate, and that some human did not typo a stock level. The problem is that using a simple average or a rolling average can introduce significant variance, and smoothing out the average renders my analysis less reliable. Ideally this analysis would trigger an alarm that someone would have to investigate. Is there a better and more accurate way of performing this analysis? Thanks!
Validity of data
CC BY-SA 3.0
null
2014-11-07T21:36:00.080
2014-12-08T20:57:17.583
null
null
4956
[ "machine-learning", "statistics" ]
2427
1
null
null
7
299
The meaning of multi-class classification rules

```
Example: I have two classification rules (Refund is a predictor and Cheat is a binary response):

(Refund, No) → (Cheat, No)   Support = 0.4, Confidence = 0.57
(Refund, No) → (Cheat, Yes)  Support = 0.3, Confidence = 0.43
```

=> the multi-class classification rule:

```
(Refund, No) → (Cheat, No) v (Cheat, Yes)
```

When predicting the class for test data, (Cheat, No) will be selected with priority, so why do we need (Cheat, Yes) in the multi-class classification rule here?
The meaning of multi-class classification rules
CC BY-SA 3.0
null
2014-11-08T12:22:17.920
2014-12-12T20:00:23.620
2014-11-08T22:52:49.717
847
3503
[ "classification" ]
2428
2
null
2427
3
null
```
(Cheat, No) will be selected (cheat will be classified as No) for the rule (Refund, No).
```

For a binary response variable like cheat, all the information can be inferred from just the first rule:

```
(Refund, No) → (Cheat, No)   Support = 0.4, Confidence = 0.57
```

The other rule is redundant. However, in the case of a multi-class response variable, we would like to have all the rules written out so we know exactly how likely the rule is to imply each of the different classes of the response variable. To keep things consistent, this is also done when the response variable is binary.
null
CC BY-SA 3.0
null
2014-11-08T18:33:37.063
2014-11-08T18:33:37.063
null
null
847
null
2429
2
null
2426
1
null
The first thing you should do is identify how large an error your analysis can tolerate. That will make your job much easier, because you won't have to find everything. A standard way of identifying "suspicious" data is Benford's Law, which predicts the distribution of the first digit of each number. It can also be generalized to other digits. [http://en.wikipedia.org/wiki/Benford's_law](http://en.wikipedia.org/wiki/Benford%27s_law) As for finding outliers, I'd probably use boxplots, particularly because you can achieve high data density with them, reducing the time it takes to manually skim them. One thing that might also be useful is to compare the ratio of one variable to another - in my company we use this method all the time.
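As an illustration, here is a minimal Python sketch of a first-digit Benford check (the inventory values are invented; a real screen would apply a formal test, such as chi-squared, to the observed counts):

```python
import math
from collections import Counter

# Hypothetical daily inventory counts to screen.
values = [1023, 1187, 2934, 310, 922, 1764, 1101, 480, 2650, 1999]

# Observed first-digit frequencies.
first_digits = [int(str(abs(v))[0]) for v in values if v != 0]
observed = Counter(first_digits)

# Benford's Law predicts P(d) = log10(1 + 1/d) for d = 1..9.
for d in range(1, 10):
    expected = math.log10(1 + 1 / d)
    print(d, observed[d] / len(first_digits), round(expected, 3))
```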
null
CC BY-SA 3.0
null
2014-11-08T20:29:37.377
2014-11-08T20:29:37.377
null
null
1241
null
2431
2
null
1216
1
null
In my experience, having a PhD does not necessarily mean being good in the environment of a data science company. I work as a data scientist, and I'm just an engineer, but I've known some university professors who collaborate with my company, and sometimes I've had to tell them that their point of view wasn't right: even though their ideas and reasoning were sound, they were not applicable to the company's activities. We had to modify some data models to make them useful for the company; the results then lost their value, and we had to seek new models. What I mean is that data science is a multidisciplinary area, so it needs many different kinds of people working together. I think your skills could be very useful in a data science team - you only have to find where you fit ;)
null
CC BY-SA 3.0
null
2014-11-09T10:36:51.290
2014-11-09T10:36:51.290
null
null
4966
null
2432
1
null
null
5
540
By "large", I mean in the range of 100m to 10b rows. I'm currently using both Hadoop MapReduce and Amazon RedShift. MapReduce has been a little disappointing here. Redshift works very well if the data is distributed well for the given query. Are there other technologies that I should be looking at here? If so, what are the trade offs?
What technologies are fastest at performing joins on large datasets?
CC BY-SA 3.0
null
2014-11-09T14:11:18.350
2015-06-14T08:56:13.250
null
null
4967
[ "bigdata", "performance", "map-reduce", "aws" ]
2433
1
null
null
2
129
I was going through an IEEE research paper which used Fuzzy ARTMAP to predict the price of electricity given some highly correlated data. As per my basic understanding of Fuzzy ARTMAP, it is a classification algorithm, so how can it predict continuous data? The text from the research paper is: > In the architecture of the FA network, the preprocessing stages take the input vector and contribute to produce complement coding, which avoids category proliferation, i.e., the creation of a relatively large number of categories to represent the training data. A sequence of input vectors (price and demand) and their respective target vectors are introduced to the FA network in order to classify the input pattern correctly. The classified input patterns are then grouped into labels using membership functions. I was using MATLAB to implement this. Is there a library in MATLAB to approach this problem?
FUZZY ARTMAP for continuous data
CC BY-SA 4.0
null
2014-11-10T00:10:26.947
2021-03-12T06:57:13.800
2021-03-12T06:57:13.800
85045
4973
[ "classification", "regression", "matlab", "fuzzy-classification" ]
2434
2
null
2432
4
null
More important than the technology is the type of join you are using. For instance, if the join keys are sorted, you can use sort-merge joins and tune join orders to get better performance. That being said, you can use in-memory solutions for the fastest joins if the size of your intermediate results will not blow up your cluster memory. Look at Spark SQL or MemSQL, for instance.
null
CC BY-SA 3.0
null
2014-11-10T02:15:27.147
2014-11-10T02:15:27.147
null
null
4974
null
2435
2
null
974
4
null
If you are working on collaborative filtering, you should pose the problem as a low-rank matrix approximation, wherein the users and items are co-embedded into the same low-dimensionality space. Similarity search will be much simpler then. I recommend using LSH, as you suggested. Another fruitful avenue for dimensionality reduction not yet mentioned is [random projection](http://en.wikipedia.org/wiki/Random_projection).
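As a hedged illustration of random projection with scikit-learn (the input matrix here is random noise, standing in for high-dimensional user/item vectors):

```python
import numpy as np
from sklearn.random_projection import GaussianRandomProjection

# Hypothetical high-dimensional vectors: 1000 rows, 10000 dimensions.
X = np.random.rand(1000, 10000)

# Project down to 100 dimensions; pairwise distances are approximately
# preserved (Johnson-Lindenstrauss lemma).
transformer = GaussianRandomProjection(n_components=100)
X_small = transformer.fit_transform(X)
print(X_small.shape)  # (1000, 100)
```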
null
CC BY-SA 3.0
null
2014-11-10T09:40:26.753
2014-11-10T09:40:26.753
null
null
381
null
2436
1
null
null
4
10252
I need to collect several large datasets (thousands of samples, dozens of features) for regression with only categorical inputs. I have already looked for such datasets in the UCI repository, but I did not find any suitable ones. Does anybody know of such a dataset, or of any additional dataset repositories on the Internet?
Large categorical dataset for regression
CC BY-SA 3.0
null
2014-11-10T09:45:13.063
2015-01-12T07:58:39.780
null
null
2576
[ "dataset", "categorical-data" ]
2437
1
null
null
3
304
In this paper ([here](http://ronan.collobert.com/pub/matos/2008_nlp_icml.pdf)) they propose a "unified architecture for NLP" using deep neural networks with multitask learning. My problem is understanding the layered architecture in figure 1, see below: ![unified deep learning architecture for NLP](https://i.stack.imgur.com/RoU0P.png) - Is someone able to give me a concrete, reproducible example of how this architecture processes 3 sentences through its layers? - What are the outputs after each layer? - Why did they choose each layer?
How does the supposed "Unified Architecture for NLP" from Collobert and Weston 2008 really works?
CC BY-SA 4.0
null
2014-11-10T11:13:07.987
2021-03-12T05:35:13.430
2021-03-12T05:35:13.430
83275
3615
[ "nlp", "neural-network" ]
2438
2
null
2269
3
null
You can easily have an RStudio server installed on Digital Ocean using the [analogsea](https://github.com/sckott/analogsea) package.
null
CC BY-SA 4.0
null
2014-11-10T11:16:07.537
2021-02-09T01:32:56.940
2021-02-09T01:32:56.940
29169
4978
null
2439
1
2462
null
5
178
I have a [database](https://archive.ics.uci.edu/ml/datasets/Molecular+Biology+%28Splice-junction+Gene+Sequences%29) of 3190 instances of DNA, each consisting of 60 sequential nucleotide positions, classified according to 3 types: EI, IE, Other. I want to build a supervised classifier. My present approach is to compute a 2nd-order Markov transition matrix for each instance and feed the resulting data to a neural network. How best to approach this classification problem, given that the sequence of the data should be relevant? Is there a better approach than the one I came up with?
Classification of DNA Sequences
CC BY-SA 3.0
null
2014-11-10T16:20:22.647
2014-11-14T11:36:06.290
2014-11-14T11:36:06.290
136
136
[ "sequence" ]
2441
2
null
2384
0
null
You should probably start with a very basic approach: a bag-of-words representation (a vector as long as your vocabulary, with 1 if the word is found in the text and 0 if it's not), and a simple classifier like Naive Bayes (a sketch is given below). This works surprisingly well for finding topics (a little less well for sentiment classification). For preprocessing you would probably want to do stop-word removal and stemming (in order to reduce the vocabulary) rather than POS tagging. The problem with this basic approach is that you get an n-class classifier, with no "this fits multiple categories" or "this fits 0 categories" answers. If you want to include that aspect, the best option is to design n 2-class classifiers, one for each of your classes, where each classifier decides whether the text fits the class or not. But I would try out-of-the-box Naive Bayes first, just to see how it works. You can use Weka: it's free, open source, and can be integrated with Java. You can also do the preprocessing (stemming) with the Python NLTK.
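A minimal scikit-learn sketch of this baseline (the training texts and labels are invented; a real corpus would be far larger):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Toy corpus with two topics.
texts = ["the team won the match", "parliament passed the new bill"]
labels = ["sports", "politics"]

# Binary bag-of-words: 1 if a vocabulary word occurs in the text, else 0.
vectorizer = CountVectorizer(stop_words="english", binary=True)
X = vectorizer.fit_transform(texts)

clf = MultinomialNB()
clf.fit(X, labels)

print(clf.predict(vectorizer.transform(["who won the match"])))
```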
null
CC BY-SA 3.0
null
2014-11-10T18:46:42.993
2014-11-10T18:46:42.993
null
null
4760
null
2442
2
null
2436
0
null
All you need are data sets with enough records and enough features for your purposes. You can simply convert any continuous variables into categorical ones by grouping (binning) them. Some sources for large sets can be found by searching for "large free data sets". If you are dead set on lots of categorical data, try insurance data (given that I'm an actuary, I should have thought of that earlier). Those tend to be laden with categorical variables, as I well know from first-hand experience.
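As a sketch of that grouping step using pandas (the values and bin edges are arbitrary choices for illustration):

```python
import pandas as pd

# A hypothetical continuous variable.
ages = pd.Series([23, 37, 45, 61, 18, 52])

# Bin into labeled categories; the cut points are up to you.
age_group = pd.cut(ages, bins=[0, 30, 50, 100],
                   labels=["young", "middle", "senior"])
print(age_group)
```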
null
CC BY-SA 3.0
null
2014-11-10T19:27:03.397
2014-11-12T11:45:02.060
2014-11-12T11:45:02.060
1241
1241
null
2443
2
null
1216
1
null
Maybe this will be a little off-topic, but I'd highly recommend that you go through this MOOC: [https://www.coursera.org/course/statistics](https://www.coursera.org/course/statistics). It is a very good and clear introduction to statistics, and it gives you the basic principles of a core field in data science. I hope it will be a good starting point for a friendship between you and statistics.
null
CC BY-SA 3.0
null
2014-11-11T08:34:22.210
2014-11-11T08:34:22.210
null
null
4842
null
2445
1
null
null
3
103
I need to analyse a dataset about mobile phone usage (#calls, #sms, #internetConnections) per cell and hour on different days.

```
[date] [CDR/Position] [#calls] [#sms] [#internetConnections]
```

My purpose is to detect similarities in the data (e.g., Monday and Tuesday are similar... or Monday night is different...). After this, I'd like to find the reason they are similar/dissimilar. What can I apply?
Analyzing mobile usage. What kind of approach should I apply?
CC BY-SA 3.0
null
2014-11-11T15:03:51.680
2014-11-12T09:42:00.080
2014-11-12T09:42:00.080
989
989
[ "data-mining", "clustering" ]
2446
1
null
null
3
448
I have a Biosemi BDF file of EEG data which contains 32 channels. I've opened it using BioSig and everything works great: the first list is the channels, and inside each list there is the EEG data. But if I open it using MNE, the first list is the EEG data, and the second list (inside the list of EEG data) contains two lists of EEG data. This is how I open the data using MNE:

```
raw_file = read_raw_edf("E:\eegDATA\\256\s02_reduced.bdf", preload=True, verbose=True)
```

Am I missing something here?
Opening Biosemi BDF data with MNE and BioSig in Python
CC BY-SA 3.0
null
2014-11-11T18:34:40.213
2016-01-12T12:07:15.503
2016-01-12T12:07:15.503
11097
273
[ "python" ]
2447
1
2452
null
3
147
I'm trying to create a logistic regression model in JPMML, then write the PMML to a file. The problem I'm having is that I can't find any way to create a custom tag, such as "shortForm" and "longForm" in the following example:

```
<MapValues outputColumn="longForm">
  <FieldColumnPair field="gender" column="shortForm"/>
  <InlineTable>
    <row><shortForm>m</shortForm><longForm>male</longForm></row>
    <row><shortForm>f</shortForm><longForm>female</longForm></row>
  </InlineTable>
</MapValues>
```

Here's what I have so far:

```
MapValues mv = new MapValues("output")
    .withFieldColumnPairs(
        new FieldColumnPair( new FieldName("gender"), "shortForm" )
    ).withInlineTable(
        new InlineTable().withRows(
            new Row().with???( new ??? )
    )));
```

In short, I am asking for an API call I can use to instantiate the "shortForm" element in the example and attach it to the "row" object. I've been all through the API, the examples, and Google/SO, and can't find a thing. Thanks for your help!
How can I create a custom tag in JPMML?
CC BY-SA 3.0
null
2014-11-11T19:22:00.073
2014-11-13T04:44:06.957
2014-11-13T04:44:06.957
21
4999
[ "logistic-regression", "java" ]
2448
1
null
null
4
39
We have a Ruby on Rails platform (with a PostgreSQL db) for people to upload various products to trade. Of course, many of the products listed are the same, while they are described differently by the consumer (through spelling, case, etc.) - lots of duplicates. For the purposes of analytics and a better UX, we're aiming to create an evolving "master product list", or "whitelist" if you will, so that users select from an existing list of products when uploading, OR request to add a new one. We also plan to enrich each product entry with additional information from the web, tied to the "master product". Here are some methods we're proposing to solve this problem: A) Take all the "items" listed on the website (~90,000) and de-dupe as much as possible by running SELECT DISTINCT queries (while maintaining a key map back to the original data by generating an array of item keys from each distinct listing in a GROUP BY). THEN A1) Run this data through Mechanical Turk, asking each Turk user to list the data in a uniform format. OR A2) Run each product entry through the Amazon Products API and ask the user to identify a match. OR A3) A better method?
Method to create master product database to validate entries, and enrich data set
CC BY-SA 3.0
null
2014-11-12T01:31:01.377
2014-11-12T01:31:01.377
null
null
5004
[ "data-cleaning", "sql" ]
2449
2
null
2445
1
null
There are two straightforward (vanilla) approaches that don't require any fancy featurization: Clustering: Run a clustering algorithm. Something like k-means should work well with this kind of dataset. While doing this, I would not feed the day_of_week information into the clustering algorithm. I would suggest running k-means (after normalizing each of the columns). Choose a small number of clusters that is easy to investigate (or use the number of clusters that maximizes the BIC). Then investigate the clusters to understand membership by day_of_week in each of them (a code sketch of this route is given below). Multi-class classification: Treat the day_of_week as the response that you would like to predict. Build a decision tree of fixed depth to predict the day_of_week given the columns. By examining this tree, you can easily tell which decisions led to a set of leaves being labeled Sunday vs. the set of decisions that led to leaves being labeled Monday. These decisions will also help you understand the similarities between different days.
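Here is a minimal sketch of the clustering route with scikit-learn (the usage rows are invented; in practice each row would be one cell-hour from your dataset):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical rows: [n_calls, n_sms, n_internet_connections].
X = np.array([[120, 40, 300],
              [115, 38, 290],
              [ 10,  2,  50],
              [ 12,  3,  55],
              [ 60, 20, 150],
              [ 58, 22, 145]])

# Normalize each column, then run k-means.
X_scaled = StandardScaler().fit_transform(X)
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X_scaled)

# Afterwards, cross-tabulate these labels against day_of_week.
print(kmeans.labels_)
```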
null
CC BY-SA 3.0
null
2014-11-12T03:12:11.550
2014-11-12T06:23:54.947
2014-11-12T06:23:54.947
847
847
null
2450
2
null
2436
0
null
Try the [1998 KDD Cup dataset](https://archive.ics.uci.edu/ml/datasets/KDD+Cup+1998+Data). It's a regression problem with categorical and integer predictors. For your task, you could either treat the integer predictors as categorical or ignore them completely.
null
CC BY-SA 3.0
null
2014-11-12T06:14:15.090
2014-11-12T06:14:15.090
null
null
847
null
2451
1
null
null
3
211
For problems where the data represents online fraud or insurance (where each row represents a transaction), it is typical for the response variable to denote the value of fraud committed in dollars. Such a response value might have less than 5% non-zero values denoting fraudulent transactions. I have two questions regarding such a dataset: - What algorithms can we use to ensure that the model not only predicts the fraudulent transactions accurately, but also predicts the value of fraud associated with these. - Assuming that we can quantify the cost involved in each false positive (tagging a non-fraudulent transaction as fraudulent) and cost incurred due to a false negative (tagging a fraudulent transaction as non-fraudulent), how can we optimize the model to maximize savings (or minimize losses)?
Modeling when the response variable has too many 0's and few continuous values?
CC BY-SA 3.0
null
2014-11-12T09:21:19.490
2014-11-18T23:44:58.257
2014-11-12T16:39:52.363
847
847
[ "classification", "regression", "predictive-modeling", "class-imbalance" ]
2452
2
null
2447
1
null
You can/should use a generic Java Architecture for XML Binding (JAXB) approach. Simply put, call `Row#withContent(Object...)` with instances of `org.w3c.dom.Element` that represent the desired XML content. For example:

```
Document document = documentBuilder.newDocument();

Element shortForm = document.createElement("shortForm");
shortForm.setTextContent("m");

Element longForm = document.createElement("longForm");
longForm.setTextContent("male");

row = row.withContent(shortForm, longForm);
```
null
CC BY-SA 3.0
null
2014-11-12T11:08:44.973
2014-11-12T11:08:44.973
null
null
5012
null
2453
2
null
2303
0
null
If you prefer quick hands-on/interactive tutorials, below are my suggestions: - Python: [Codecademy](http://www.codecademy.com/) and Google's Python Class - R: Code School's ['Try R'](http://tryr.codeschool.com/) and DataCamp (suggested above)
null
CC BY-SA 3.0
null
2014-11-12T14:51:42.213
2014-11-12T14:51:42.213
null
null
782
null
2454
1
5596
null
6
1441
We're currently using Redshift as our data warehouse, which we're very happy with. However, we now have a requirement to do machine learning against the data in our warehouse. Given the volume of data involved, ideally I'd want to run the computation in the same location as the data rather than shifting the data around, but this doesn't seem possible with Redshift. I've looked at MADlib, but this is not an option as Redshift does not support UDFs (which MADlib requires). I'm currently looking at shifting the data over to EMR and processing it with the Apache Spark machine learning library (or maybe H20, or Mahout, or whatever). So my questions are: - is there a better way? - if not, how should I make the data accessible to Spark? The options I've identified so far include: use Sqoop to load it into HDFS, use DBInputFormat, do a Redshift export to S3 and have Spark grab it from there. What are the pros/cons for these different approaches (and any others) when using Spark? Note that this is off-line batch learning, but we'd like to be able to do this as quickly as possible so that we can iterate experiments quickly.
Processing data stored in Redshift
CC BY-SA 3.0
null
2014-11-12T17:27:57.850
2015-09-26T09:52:25.717
null
null
5019
[ "apache-hadoop", "aws" ]
2455
2
null
2268
2
null
NLP is very vast and varied. Here are a few basic tools in NLP (a small NLTK sketch of the first few follows below): - Sentence splitting: Identifying sentence boundaries in text - Tokenization: Splitting a sentence into individual words - Lemmatization: Converting a word to its root form. E.g. says, said, saying will all map to the root form - say - Stemmer: It is similar to a lemmatizer, but it stems a word rather than getting to the root form. E.g. laughed, laughing will stem to laugh. However, said, saying will map to sa - which is not particularly enlightening in terms of what "sa" means - POS tagger: Tags a word with its part of speech - whether it is a noun, verb, preposition, etc. - Parser: Links words with POS tags to other words with POS tags. E.g. John ate an apple. Here John and apple are nouns linked by the verb - eat. John is the subject of the verb, and apple is the object of the verb. If you are looking for the state of the art for these tools, check out [StanfordCoreNLP](http://nlp.stanford.edu/software/corenlp.shtml), which has most of these tools and a trained model to identify the above from a document. There is also an online demo to check out StanfordCoreNLP before downloading and using it with your application. NLP has several subfields. Here are a few of them: - Machine Translation: Automatic translation from one language to another - Information Retrieval: Something like a search engine that retrieves relevant information from a large set of documents based on a search query - Information Extraction: Extract concepts and keywords - such as names of people, locations, times, synonyms, etc. - Deep learning has lately become a new subfield of NLP, where a system tries to understand a document the way a human understands it.
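As a small NLTK sketch of the first few tools (hedged: it assumes the relevant NLTK data packages have already been downloaded, and exact resource names can vary by version):

```python
import nltk
from nltk.stem import PorterStemmer, WordNetLemmatizer

# One-time downloads, roughly:
# nltk.download("punkt"); nltk.download("averaged_perceptron_tagger")
# nltk.download("wordnet")

sentence = "John was saying that he laughed."

tokens = nltk.word_tokenize(sentence)   # tokenization
tags = nltk.pos_tag(tokens)             # POS tagging
stems = [PorterStemmer().stem(t) for t in tokens]
lemmas = [WordNetLemmatizer().lemmatize(t, pos="v") for t in tokens]

print(tags)
print(stems)   # e.g. "saying" -> "say", "laughed" -> "laugh"
print(lemmas)  # verbs reduced to their root form
```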
null
CC BY-SA 3.0
null
2014-11-12T18:22:07.253
2014-11-12T18:22:07.253
null
null
5021
null
2456
2
null
566
1
null
You have a great idea going, and it might work for your specific project. However, there are a few considerations you should take into account: - In your first sentence, Obama is incorrectly classified as an organization instead of a person. This is because the training model used by NLTK probably does not have enough data to recognize Obama as a PERSON. One way forward would be to update this model by training a new model on a lot of labeled training data. Generating labeled training data is one of the most expensive tasks in NLP, because of all the man-hours it takes to tag sentences with the correct part of speech as well as semantic roles. - In sentence 2, there are 2 concepts: "former Vice President" and "Dick Cheney". You can use co-reference resolution to identify the relation between the 2 NNPs. Both NNPs refer to the same entity, and the same entity can be referenced as "former vice president" as well as "Dick Cheney". Co-reference is often used to identify the named entity that pronouns refer to, e.g., "Dick Cheney is the former vice president of the USA. He is a Republican." Here the pronoun "he" refers to "Dick Cheney", and it should be identified by a co-reference resolution tool.
null
CC BY-SA 3.0
null
2014-11-12T18:41:33.053
2014-11-12T18:41:33.053
null
null
5021
null
2457
2
null
1079
2
null
TF-IDF will give you a measure of how relevant a document is to your query. However, to evaluate your IR system you need to use metrics such as precision, recall, and F-score (a small worked example follows below). Precision: Out of all the documents that your system retrieves, which ones are relevant? This measures how much noise there is in the output of your IR system. Recall: Out of all the documents that are relevant, which ones did your system retrieve? This measures how much coverage your IR system has. It is possible to get 100% recall all the time by simply retrieving ALL documents from the collection for any query; however, the precision in this case will be very low. Conversely, it is possible to get very high precision by hand-tuning an IR system to produce very accurate results; however, it would have very poor recall, since there will not be coverage over all the documents. So we measure the F-score, which is the harmonic mean of precision and recall. Check out [Chapter 8 of the Stanford IR book](http://nlp.stanford.edu/IR-book/pdf/08eval.pdf). If you are looking for datasets, here are a few relevant ones: - http://zitnik.si/mediawiki/index.php?title=Datasets - http://www.daviddlewis.com/resources/testcollections/ - http://boston.lti.cs.cmu.edu/callan/Data/
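A minimal Python sketch of these metrics for a single query (the document sets are invented for illustration):

```python
# Ground-truth relevant documents vs. what the IR system returned.
relevant = {"d1", "d2", "d3", "d4"}
retrieved = {"d1", "d2", "d5"}

tp = len(relevant & retrieved)           # correctly retrieved docs
precision = tp / len(retrieved)          # 2/3
recall = tp / len(relevant)              # 2/4
f_score = 2 * precision * recall / (precision + recall)

print(round(precision, 3), round(recall, 3), round(f_score, 3))
# 0.667 0.5 0.571
```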
null
CC BY-SA 3.0
null
2014-11-12T19:00:05.013
2014-11-12T19:00:05.013
null
null
5021
null
2458
2
null
2303
1
null
There are really so many good resources now. If you want to stay away from textbooks, both O'Reilly Media and Packt Publishing offer much lighter but effective reading on a lot of great topics; these books take a much more applied, practical approach. As far as learning the languages goes, Coursera, Udacity, Codecademy, and Code School have great tutorials. I would recommend taking a look at the following: [Coursera AI and Stats Courses](https://www.coursera.org/courses?orderby=upcoming&lngs=en&cats=stats,cs-ai) [Udacity Data Science courses](https://www.udacity.com/courses#!/data-science)
null
CC BY-SA 4.0
null
2014-11-12T19:31:27.170
2020-08-02T12:49:46.867
2020-08-02T12:49:46.867
98307
5022
null
2459
2
null
2303
0
null
Also, [Green Tea Press](http://www.greenteapress.com/index.html) offers free books on related topics, such as an intro to Python and using Python for probability and stats.
null
CC BY-SA 4.0
null
2014-11-12T19:35:01.550
2020-08-02T12:49:39.407
2020-08-02T12:49:39.407
98307
5022
null
2462
2
null
2439
3
null
One way would be to create 20 features, each feature representing a codon (the 60 nucleotide positions give 20 non-overlapping codons of 3 nucleotides each). In this way, you would have a dataset with 3190 instances and 20 categorical features. There is no need to treat the sequence as a Markov chain. Once the dataset has been featurized as suggested above, any supervised classifier should work well; I would suggest a gradient boosting machine, as it might be better suited to handling categorical features (a sketch is given below).
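Here is a hedged sketch of this featurization plus a gradient boosting machine in scikit-learn (the sequences below are synthetic placeholders, not real splice-junction data):

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

# Hypothetical 60-nucleotide instances with their class labels.
sequences = ["ACGT" * 15, "TTGA" * 15, "GGCA" * 15]
labels = ["EI", "IE", "Other"]

# Split each 60-position sequence into 20 non-overlapping codons.
codons = pd.DataFrame(
    [[seq[i:i + 3] for i in range(0, 60, 3)] for seq in sequences],
    columns=[f"codon_{j}" for j in range(20)],
)

# One-hot encode the categorical codon features, then fit a GBM.
X = pd.get_dummies(codons)
clf = GradientBoostingClassifier().fit(X, labels)
print(clf.predict(X[:1]))
```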
null
CC BY-SA 3.0
null
2014-11-13T07:17:18.147
2014-11-13T07:17:18.147
null
null
847
null
2463
1
2490
null
2
515
I have a data set that keeps track of who referred someone to a program, and it includes the geo coordinates of both parties for each record. What would be the best way to visualize this kind of data set? The visualization should be able to use the geo coordinates to place these entities on a map to form clusters, or to superimpose them on a real map. I am interested in an algorithm and/or library that will help me do this, preferably written in Java, Python, Scala, or NodeJS. The record count can be as big as a thousand or hundreds of thousands.
Geospatial Social Network Analysis Visualization
CC BY-SA 3.0
null
2014-11-13T08:58:25.847
2017-04-24T17:38:16.470
2017-04-24T17:38:16.470
31513
5027
[ "visualization", "social-network-analysis", "scalability", "library" ]
2464
1
null
null
9
14398
I want to use Latent Dirichlet Allocation for a project, and I am using Python with the gensim library. After finding the topics, I would like to cluster the documents using an algorithm such as k-means (ideally I would like a good one for overlapping clusters, so any recommendation is welcome). I managed to get the topics, but they are in the form: 0.041*Minister + 0.041*Key + 0.041*moments + 0.041*controversial + 0.041*Prime. In order to apply a clustering algorithm (and correct me if I'm wrong), I believe I should find a way to represent each word as a number, using either tf-idf or word2vec. Do you have any ideas of how I could "strip" the textual information from, e.g., a list in order to do so, and then put it back to make the appropriate multiplication? For instance, the way I see it, if the word Minister has a tfidf weight of 0.042, and so on for every other word within the same topic, I should be able to compute something like: 0.041*0.42 + ... + 0.041*tfidf(Prime), and get a result that will later be used to cluster the results. Thank you for your time.
Clustering of documents using the topics derived from Latent Dirichlet Allocation
CC BY-SA 3.0
null
2014-11-13T09:19:22.797
2020-12-05T00:04:44.510
null
null
5029
[ "python", "clustering", "lda" ]
2466
1
null
null
3
141
I have measurements of 4 devices at two different points in time. A measurement basically consists of an array of ones and zeros corresponding to the bit value at the corresponding location:

```
whos measurement1_dev1_time1
  Name                       Size      Bytes   Class     Attributes
  measurement1_dev1_time1    4096x8    32768   logical
```

I assume that, for a specific device, the changes between times 1 and 2 of the measurements are unique. However, since I am dealing with 32768 bits at different locations, it is quite hard to visualize whether there is some kind of dependency. As every bit at location `x` can be regarded as one dimension of an observation, I thought of using PCA to reduce the number of dimensions. Thus, for each of the devices:

- I randomly sample n measurements at points t1 and t2 separately.
- I prepare an array as input for pca() with m*n columns (m < 32768; it's a subset of all the observed bits, as the original data might be too big for PCA) and 4 rows (one row for each device).
- On this array A I calculate the PCA: `[coeff score latent] = pca(zscore(A))`
- Then I try to visualize it using biplot: `biplot(coeff(:,1:2), 'score', score(:,1:2))`

However, this gives me really strange results. Maybe PCA is not the right approach for this problem? I also modified the input data to do the PCA not on the logical bit array itself; instead, I created a vector which holds the indices where there is a '1' in the original measurement array. This also produces strange results. As I am completely new to PCA, I want to ask whether you see a flaw in the process, or whether PCA is just not the right approach for my goal and I should look for other dimension reduction or clustering approaches.
Dimension reduction for logical arrays
CC BY-SA 3.0
null
2014-11-14T08:53:21.510
2017-06-27T20:42:42.587
null
null
5038
[ "dimensionality-reduction" ]
2468
2
null
2263
3
null
### Storing user profiles

If you just want to store all user profiles, just save them into a normal RDBMS. Assuming one user profile takes 10 KB of storage, you need only ~9.5 GB for every million users, which is pretty little and gives you all the advantages of mature relational databases. It makes sense to use HBase only when you have really many users (say, > 1B) or when the data is very sparse (most columns are empty). But don't expect it to be as convenient as good old SQL databases. In advertising, and especially in [real-time bidding](http://en.wikipedia.org/wiki/Real-time_bidding), very fast retrieval of user profiles is needed. [Aerospike](http://www.aerospike.com/) is becoming more and more popular for this task.

### Analysing data slices

A common use of business logs is to analyse specific slices of data, e.g., the number of users from France that visited sites from the "game" category on November 1-14, 2014. The standard way to manage such data efficiently is to organize it into [data cubes](http://en.wikipedia.org/wiki/Data_cube). You won't get individual records (e.g., users), but you'll get aggregated statistics really fast. Such cubes may have many different dimensions, but in 99% of cases they have a date field that they are partitioned by. This makes great sense, because almost every query includes a time period to pull data from. As for software, Vertica is great for such aggregations. A cheaper* solution from the Hadoop world is Impala, which is also great. (* - if you count only the license price)

### Machine learning

It really depends on the concrete task and the ML toolkit in use. For real-time bidding you would want blazing-fast access to user profile vectors and would probably prefer Aerospike. For online learning, Spark Streaming may be used as a data source, with no storage used at all. For offline machine learning there's the excellent [MLlib](https://spark.apache.org/docs/latest/mllib-guide.html) from the same Spark project, which works with a variety of sources.
null
CC BY-SA 3.0
null
2014-11-14T15:15:20.617
2014-11-14T15:15:20.617
null
null
1279
null
2469
1
2476
null
2
342
I did a small survey and got this data:

```
|             | Yes | No | Dont_Know |
|-------------|-----|----|-----------|
| Employee    | 60  | 5  | 5         |
| Workers     | 17  | 0  | 1         |
| Businessmen | 71  | 5  | 10        |
| Jobless     | 4   | 30 | 0         |
```

R code:

```
dt <- data.frame(workers = c("Employee", "Workers", "Businessmen", "Jobless"),
                 yes = c(60,17,71,4),
                 no = c(5,0,5,30),
                 dont_know = c(5,1,10,0)
                 )
```

- What kind of test must I do if I want to show that jobless people more often choose the No answer?
- Is the difference between the Jobless and Businessmen answers significant?
- And what about the other groups?
- What other information can I get from such data, or what kinds of questions can I ask of it?
survey data analysis (discrete data)
CC BY-SA 3.0
null
2014-11-14T15:20:44.463
2014-11-15T01:26:16.583
2014-11-14T15:48:17.037
2961
3377
[ "r" ]
2470
1
null
null
2
188
I have two classes (A, B) that I would like to classify using an SVM. Say that I have a class C and a function f. Can I do this:

```
A' = f(A,C) = |A-C|
B' = f(B,C) = |B-C|
```

and then perform the classification on A' and B' instead? In the context of my problem, A and B are classes whose elements are vectors. The function f measures the Mahalanobis distance of each vector with respect to the distribution imposed by C.
Non-linear transformation of the input dataset for support vector machines
CC BY-SA 4.0
null
2014-11-14T16:39:29.663
2021-03-11T18:24:57.270
2021-03-11T18:24:57.270
29169
5042
[ "svm" ]
2471
2
null
1154
2
null
I would definitely use a graph (though this clearly depends on the final application; maybe you could add more information). For the nodes, you should consider as nodes not only tanks but also points where pipelines change name or bifurcate. For instance, following your example:

```
    e1        e2         e3         e4         e5        e7
+---+----------.----------+----------.----------+-----------
    |          |          |          |          |
[R tank 1] [S tank 1] [S tank 2]     |e6    [S tank 3]
                                     |
                                [S tank 4]
```

Now adding the nodes:

```
    e1   n2    e2    n4    e3   n6    e4   n8    e5   n9    e7
n1  +-----+----------.----------+----------.----------+----------+
          |          |          |          |          |
          n3         n5         n7         |e6        n11
                                           |
                                           n10
```

Lastly, you need some kind of mapping. Some of the nodes will map to tanks:

```
[R tank 1]  n3
[S tank 1]  n5
[S tank 2]  n7
[S tank 4]  n10
[S tank 3]  n11
```

And the pipelines will be represented by paths in the graph (a code sketch of this encoding is given below):

```
Pipeline 1    e1 e2
Pipeline 2    e3 e4
Pipeline 3    e5 e7
Pipeline 4    e6
```
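A minimal sketch of this encoding with networkx (hedged: the edge endpoints follow one reading of the ASCII diagram, and the terminal node n12 for e7 is a hypothetical placeholder):

```python
import networkx as nx

g = nx.Graph()

# Named pipeline segments between junction nodes.
for u, v, name in [("n1", "n2", "e1"), ("n2", "n4", "e2"),
                   ("n4", "n6", "e3"), ("n6", "n8", "e4"),
                   ("n8", "n9", "e5"), ("n8", "n10", "e6"),
                   ("n9", "n12", "e7")]:
    g.add_edge(u, v, name=name)

# Unnamed junction-to-tank connections (the vertical bars above).
g.add_edges_from([("n2", "n3"), ("n4", "n5"), ("n6", "n7"), ("n9", "n11")])

# Mapping of nodes to tanks and pipelines to edge-name paths.
tanks = {"n3": "R tank 1", "n5": "S tank 1", "n7": "S tank 2",
         "n10": "S tank 4", "n11": "S tank 3"}
pipelines = {"Pipeline 1": ["e1", "e2"], "Pipeline 2": ["e3", "e4"],
             "Pipeline 3": ["e5", "e7"], "Pipeline 4": ["e6"]}

# Example query: is R tank 1 physically connected to S tank 4?
print(nx.has_path(g, "n3", "n10"))  # True
```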
null
CC BY-SA 3.0
null
2014-11-14T17:11:13.060
2014-11-14T17:11:13.060
null
null
5041
null
2472
2
null
1189
8
null
You are just predicting whether Play = Yes or Play = No. The confusion matrix would look like this:

```
                 Predicted
              +------+------+
              | Yes  |  No  |
   +----------+------+------+
 A |          |      |      |
 c |   Yes    |  TP  |  FN  |
 t |          |      |      |
 u +----------+------+------+
 a |          |      |      |
 l |   No     |  FP  |  TN  |
   |          |      |      |
   +----------+------+------+

 TP: True positives
 FP: False positives
 FN: False negatives
 TN: True negatives
```

(Note that an actual Yes predicted as No is a false negative, and an actual No predicted as Yes is a false positive.) The accuracy can then be calculated as (TP + TN)/(TP + FP + TN + FN).
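A minimal Python sketch of the accuracy computation (the four counts are invented for illustration):

```python
# Toy counts for the matrix above.
tp, fn, fp, tn = 6, 1, 2, 5

accuracy = (tp + tn) / (tp + fp + fn + tn)
print(accuracy)  # 11/14 ≈ 0.786
```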
null
CC BY-SA 3.0
null
2014-11-14T17:32:26.050
2014-11-15T14:27:22.397
2014-11-15T14:27:22.397
847
5041
null
2473
1
2479
null
2
1462
I am dealing with a lot of categorical data right now, and I would like to use an appropriate data mining method in any tool [preferably R] to find the effect of each parameter [the categorical parameters] on my target variable. To give a brief notion of the data I am dealing with: my target variable denotes the product type [say, disposables and non-disposables], and I have parameters like root cause, symptom, customer name, product name, etc. As my target can be considered a binary value, I tried to find the combination of values leading to the desired categories using Apriori, but I have more than 2 categories in that attribute, and I want to use all of them and find the effect of the mentioned parameters on each category. I really wanted to try SVM and use hyperplanes to separate the content and get an n-dimensional view, but I do not have enough knowledge to validate the technique and the functions I am using for the analysis. Currently I have about 9000 records, each representing a complaint from a user. There are a lot of columns available in the dataset, which is what I am trying to use to determine the target variable [myFormula <- Target~.]. I tried with just 4 categorical columns too, without getting a proper result. Can just the categorical variables be used to develop an SVM model and get a visualization with n hyperplanes? Is there an appropriate data mining technique for dealing with just categorical data?
Choosing the right data mining method to find the effect of each parameter on the target
CC BY-SA 3.0
null
2014-11-14T19:03:35.603
2015-10-21T01:46:30.197
null
null
5043
[ "data-mining", "classification", "r", "svm", "categorical-data" ]
2474
1
2483
null
2
144
I think this is something that experienced programmers do all the time, but given my limited programming experience, please bear with me. I have an Excel file in which particular cell entries read: [[{"from": "4", "response": true, "value": 20}, {"from": "8", "response": true, "value": 20}, {"from": "9", "response": true, "value": 20}, {"from": "3", "response": true, "value": 20}], [{"from": "14", "response": false, "value": 20}, {"from": "15", "response": true, "value": 20}, {"from": "17", "response": false, "value": 20}, {"from": "13", "response": true, "value": 20}]] Now, for each such entry I want to take the information in each of the curly brackets and make a row of data out of it. Each such row would have 3 columns. For example, the row formed from the first entry within curly brackets should have the entries "4", "true", and "20" respectively. The part I posted should give me 6 such rows, and for n such repetitions I should end up with a matrix of 6n rows and 4 columns (an identifier, plus the 3 columns mentioned). What would be the most efficient way to do this? By "doing this" I mean learning the trick and then implementing it. I have access to quite a few software packages (Excel, Stata, Matlab, R) in my laboratory, so that should not be an issue.
Parsing data from a string
CC BY-SA 3.0
null
2014-11-14T22:48:10.173
2014-11-27T03:27:46.193
null
null
5044
[ "parsing" ]