| Column | Type / stats |
| --- | --- |
| Id | string, lengths 1-6 |
| PostTypeId | string, 6 classes |
| AcceptedAnswerId | string, lengths 2-6 |
| ParentId | string, lengths 1-6 |
| Score | string, lengths 1-3 |
| ViewCount | string, lengths 1-6 |
| Body | string, lengths 0-32.5k |
| Title | string, lengths 15-150 |
| ContentLicense | string, 2 classes |
| FavoriteCount | string, 2 classes |
| CreationDate | string, length 23 |
| LastActivityDate | string, length 23 |
| LastEditDate | string, length 23 |
| LastEditorUserId | string, lengths 1-6 |
| OwnerUserId | string, lengths 1-6 |
| Tags | list |
682
2
null
265
6
null
Michael Mauboussin, in his book "The Success Equation," looks at differentiating luck from skill in various endeavors, including sports. He actually ranks sports by the amount of luck that contributes to performance in the different sports (p. 23), and about 2/3 of performance in football is attributable to skill. By contrast, I used Mauboussin's technique to analyze performance in Formula 1 racing, and found that 60% is attributable to skill (less than I was expecting). That said, this kind of analysis would imply that a sufficiently detailed and well-crafted feature set would allow ML algorithms to predict the performance of NFL teams, perhaps even at the play level, with the caveat that significant variance will still exist because of the influence of luck in the game.
null
CC BY-SA 3.0
null
2014-07-06T14:01:57.467
2014-07-06T14:01:57.467
null
null
1360
null
683
1
null
null
11
2801
I have 2 datasets, one with positive instances of what I would like to detect, and one with unlabeled instances. What methods can I use? As an example, suppose we want to detect spam email based on a few structured email characteristics. We have one dataset of 10000 spam emails, and one dataset of 100000 emails for which we don't know whether they are spam or not. How can we tackle this problem (without manually labeling any of the unlabeled data)? What can we do if we have additional information about the proportion of spam in the unlabeled data (i.e. what if we estimate that between 20-40% of the 100000 unlabeled emails are spam)?
Build a binary classifier with only positive and unlabeled data
CC BY-SA 3.0
null
2014-07-07T09:34:36.950
2015-12-07T10:50:13.513
2015-12-07T10:50:13.513
11097
906
[ "classification", "semi-supervised-learning" ]
684
2
null
677
7
null
I have never used `sklearn_pandas`, but from reading their source code, it looks like this is a bug on their side. If you look at [the function that is throwing the exception](https://github.com/paulgb/sklearn-pandas/blob/master/sklearn_pandas/__init__.py), you can see that they are discarding the `y` argument (it does not even survive until the docstring), while the inner `fit` function expects one more argument, which is probably `y`:

```
def fit(self, X, y=None):
    '''
    Fit a transformation from the pipeline

    X       the data to fit
    '''
    for columns, transformer in self.features:
        if transformer is not None:
            transformer.fit(self._get_col_subset(X, columns))
    return self
```

I would recommend that you open an issue in [their bug tracker](https://github.com/paulgb/sklearn-pandas/issues).

UPDATE: You can test this if you run your code from IPython. To summarize, if you use the `%pdb on` magic right before you run the problematic call, the exception is captured by the Python debugger, so you can play around a bit and see that calling the `fit` function with the label values `y[0]` does work -- see the last line with the `ipdb>` prompt. (The CSV files are downloaded from Kaggle, except for the largest one, which is just a part of the real file.)

```
In [1]: import pandas as pd

In [2]: from sklearn import neighbors

In [3]: from sklearn_pandas import DataFrameMapper, cross_val_score

In [4]: path_train = "train.csv"

In [5]: path_labels = "trainLabels.csv"

In [6]: path_test = "test.csv"

In [7]: train = pd.read_csv(path_train, header=None)

In [8]: labels = pd.read_csv(path_labels, header=None)

In [9]: test = pd.read_csv(path_test, header=None)

In [10]: mapper_train = DataFrameMapper([(list(train.columns), neighbors.KNeighborsClassifier(n_neighbors=3))])

In [13]: %pdb on

In [14]: mapper_train.fit_transform(train, labels)
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-14-e3897d6db1b5> in <module>()
----> 1 mapper_train.fit_transform(train, labels)

/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/sklearn/base.pyc in fit_transform(self, X, y, **fit_params)
    409         else:
    410             # fit method of arity 2 (supervised transformation)
--> 411             return self.fit(X, y, **fit_params).transform(X)
    412
    413

/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/sklearn_pandas/__init__.pyc in fit(self, X, y)
    116         for columns, transformer in self.features:
    117             if transformer is not None:
--> 118                 transformer.fit(self._get_col_subset(X, columns))
    119         return self
    120

TypeError: fit() takes exactly 3 arguments (2 given)
> /opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/sklearn_pandas/__init__.py(118)fit()
    117             if transformer is not None:
--> 118                 transformer.fit(self._get_col_subset(X, columns))
    119         return self

ipdb> l
    113
    114         X       the data to fit
    115         '''
    116         for columns, transformer in self.features:
    117             if transformer is not None:
--> 118                 transformer.fit(self._get_col_subset(X, columns))
    119         return self
    120
    121
    122     def transform(self, X):
    123         '''

ipdb> transformer.fit(self._get_col_subset(X, columns), y[0])
KNeighborsClassifier(algorithm='auto', leaf_size=30, metric='minkowski',
           n_neighbors=3, p=2, weights='uniform')
```
null
CC BY-SA 3.0
null
2014-07-07T10:25:55.323
2014-07-07T13:42:31.827
2014-07-07T13:42:31.827
1367
1367
null
685
1
null
null
5
4518
I am very new to machine learning and in my first project have stumbled across a lot of issues which I really want to get through. I'm using logistic regression with R's `glmnet` package and alpha = 0 for ridge regression. I'm using ridge regression because lasso deleted all my variables and gave a very low area under the curve (0.52), but with ridge there isn't much of a difference either (0.61). My dependent variable/output is the probability of a click, based on whether there is a click or not in historical data. The independent variables are state, city, device, user age, user gender, IP carrier, keyword, mobile manufacturer, ad template, browser version, browser family, OS version and OS family. Of these, for prediction I'm using state, device, user age, user gender, IP carrier, browser version, browser family, OS version and OS family; I am not using keyword or template since we want to reject a user request before deep diving into our system and selecting a keyword or template. I am not using city because there are too many of them, or mobile manufacturer because there are too few. Is that okay, or should I be using the rejected variables? To start, I create a sparse matrix from my variables, which are mapped against the column of clicks that has yes or no values. After training the model, I save the coefficients and intercept. These are used for new incoming requests using the formula for logistic regression:

> p = 1 / (1 + e^-(a + k_1*x_1 + ... + k_n*x_n)), where `a` is the intercept, `k_i` is the `i`th coefficient and `x_i` is the `i`th variable value.

Is my approach correct so far? A simple GLM in R (that is, regression without regularization, right?) gave me 0.56 AUC. With regularization I get 0.61, but there is no distinct threshold where we could say that above 0.xx it's mostly ones and below it most zeros are covered; actually, the max probability that a click didn't happen is almost always greater than the max probability that a click happened. So basically, what should I do? I have read that stochastic gradient descent is an effective technique in logistic regression, so how do I implement stochastic gradient descent in R? If it's not straightforward, is there a way to implement this system in Python? Is SGD applied after generating a regularized logistic regression model, or is it a different process altogether? Also, there is an algorithm called follow-the-regularized-leader (FTRL) that is used in click-through rate prediction. Is there a sample code and use of FTRL that I could go through?
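If you go the Python route, a minimal sketch of SGD-trained, regularized logistic regression with scikit-learn could look like the following (the data here is synthetic, and the loss is called `"log"` in older scikit-learn versions and `"log_loss"` in newer ones):

```
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for an encoded click dataset
X, y = make_classification(n_samples=20000, n_features=200, n_informative=20,
                           random_state=0)
X = StandardScaler().fit_transform(X)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# loss="log_loss" gives logistic regression; penalty/alpha control the
# regularization, and the weights are fitted by stochastic gradient descent.
clf = SGDClassifier(loss="log_loss", penalty="l2", alpha=1e-4, random_state=0)
clf.fit(X_tr, y_tr)

probs = clf.predict_proba(X_te)[:, 1]     # predicted click probabilities
print("AUC:", roc_auc_score(y_te, probs))
```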
Stochastic gradient descent in logistic regression
CC BY-SA 3.0
null
2014-07-07T11:43:48.430
2021-05-15T21:43:02.720
2014-07-07T20:02:08.777
322
1273
[ "machine-learning", "data-mining", "r", "logistic-regression", "gradient-descent" ]
686
2
null
678
3
null
State of the art appears to be "paragraph vectors," introduced in a recent [paper](http://cs.stanford.edu/%7Equocle/paragraph_vector.pdf). Cosine/Euclidean distance between paragraph vectors would likely work better than any other approach. This probably isn't feasible yet due to the lack of open source implementations. The next best thing is cosine distance between LSA vectors or cosine distance between raw BOW vectors. Sometimes it works better to choose different weighting schemes, like TF-IDF.
null
CC BY-SA 4.0
null
2014-07-07T13:23:28.220
2020-08-06T13:01:07.800
2020-08-06T13:01:07.800
98307
574
null
687
2
null
683
7
null
My suggestion would be to attempt to build some kind of clustering on your unlabeled data that somewhat approximates a labelled dataset. The rationale is more or less as follows:

- You have some feature vector for representing your documents
- Based on that feature vector, you can come up with a number of different clusterings, with either fuzzy, rough, or class-based clustering methods
- Knowing what a positive example looks like, you can quickly evaluate the overall similarity of a cluster to your positive cluster
- Knowing that there should really only be two clusters, you can adjust the hyperparameters of your clustering method so that the above two metrics are closer and closer to satisfaction
- With the two clusters, you have what is likely a close approximation of a labelled dataset, which you can then use as a silver-standard corpus of sorts to actually train your model

Hope that makes sense. If you're specifically looking for clustering algorithms, a few that I personally enjoy and that might be good in this scenario are [FLAME](http://en.wikipedia.org/wiki/FLAME_clustering) and [t-SNE](http://en.wikipedia.org/wiki/T-distributed_stochastic_neighbor_embedding). Alternatively, looking at the spectacular [gensim](http://radimrehurek.com/gensim/) library in Python will get you a long way toward the clustering you're looking for. Hope that helps and makes sense; leave a comment if you've got any questions.
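A rough sketch of this idea in Python (purely illustrative: the feature matrices are synthetic, and k-means with two clusters stands in for whichever clustering method you choose):

```
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.RandomState(0)
X_pos = rng.normal(loc=2.0, size=(1000, 20))                   # known positives
X_unlab = np.vstack([rng.normal(loc=2.0, size=(3000, 20)),
                     rng.normal(loc=-2.0, size=(7000, 20))])   # unlabeled mix

km = KMeans(n_clusters=2, random_state=0).fit(X_unlab)

# Call the cluster whose centre is closest to the mean positive the "positive" one
pos_centroid = X_pos.mean(axis=0)
dists = np.linalg.norm(km.cluster_centers_ - pos_centroid, axis=1)
positive_cluster = int(np.argmin(dists))

# Silver-standard labels for the unlabeled pool
silver_labels = (km.labels_ == positive_cluster).astype(int)
print("Fraction labelled positive:", silver_labels.mean())
```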
null
CC BY-SA 3.0
null
2014-07-07T14:15:51.117
2014-07-07T14:15:51.117
null
null
548
null
688
2
null
600
2
null
As @SpacedMan has noted [in a comment](https://datascience.stackexchange.com/questions/600/how-do-you-create-an-optimized-walk-list-given-longitude-and-latitude-coordinate#comment1670_600), the street layout will have a massive influence on the optimization of the walk list. You have included only "latitude and longitude" in your question's title; but solving that problem does not lead to a "walk list", but to an "as-the-crow-flies list". Looking at your street layout as a graph, with edge weights describing distances, and trying to find the shortest traversal between all required addresses, will lead you to think of your problem as a "[Shortest path problem](http://en.wikipedia.org/wiki/Shortest_path_problem)". [Dijkstra's algorithm](http://en.wikipedia.org/wiki/Dijkstra%27s_algorithm) is the best-known solution (there are others); in its naive implementation it converges in O(n²), which may be acceptable if your lists of addresses are moderate in size. Otherwise, look for optimized versions in the above links. As for libraries and resources to start tackling the problem, since you do not specify languages or platforms, let me point to the [compilation of routing solvers in the OpenStreetMap wiki](http://wiki.openstreetmap.org/wiki/Routing#Developers) and in general [their frameworks and libraries page](http://wiki.openstreetmap.org/wiki/Frameworks).
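For illustration, a compact Python sketch of Dijkstra's algorithm over a toy street graph (the nodes and distances are made up):

```
import heapq

def dijkstra(graph, source):
    """Return the shortest distance from `source` to every reachable node."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

streets = {                               # adjacency: node -> [(neighbour, metres)]
    "A": [("B", 80), ("C", 120)],
    "B": [("A", 80), ("C", 60), ("D", 150)],
    "C": [("A", 120), ("B", 60), ("D", 70)],
    "D": [("B", 150), ("C", 70)],
}
print(dijkstra(streets, "A"))   # {'A': 0.0, 'B': 80.0, 'C': 120.0, 'D': 190.0}
```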
null
CC BY-SA 3.0
null
2014-07-07T14:41:59.523
2014-07-07T14:41:59.523
2017-04-13T12:50:41.230
-1
1367
null
689
2
null
678
52
null
There are a number of different ways of going about this, depending on exactly how much semantic information you want to retain and how easy your documents are to tokenize (HTML documents would probably be pretty difficult to tokenize, but you could conceivably do something with tags and context). Some of them have been mentioned by ffriend, and the paragraph vectors mentioned by user1133029 are a really solid option, but I just figured I would go into some more depth about the pluses and minuses of different approaches.

- Cosine distance - Tried and true, cosine distance is probably the most common distance metric used generically across multiple domains. With that said, there's very little information in cosine distance that can actually be mapped back to anything semantic, which seems non-ideal for this situation.
- Levenshtein distance - Also known as edit distance, this is usually just used on the individual token level (words, bigrams, etc...). In general I wouldn't recommend this metric, as it not only discards any semantic information, but also tends to treat very different word alterations very similarly. Still, it is an extremely common metric for this kind of thing.
- LSA - Is part of a large arsenal of techniques for evaluating document similarity called topic modeling. LSA has gone out of fashion pretty recently, and in my experience it's not quite the strongest topic modeling approach, but it is relatively straightforward to implement and has a few open source implementations.
- LDA - Is also a technique used for topic modeling, but it's different from LSA in that it actually learns internal representations that tend to be more smooth and intuitive. In general, the results you get from LDA are better for modeling document similarity than LSA, but not quite as good for learning how to discriminate strongly between topics.
- Pachinko allocation - Is a really neat extension on top of LDA. In general, this is just a significantly improved version of LDA, with the only downside being that it takes a bit longer to train and open-source implementations are a little harder to come by.
- word2vec - Google has been working on a series of techniques for intelligently reducing words and documents to more reasonable vectors than the sparse vectors yielded by techniques such as count vectorizers and TF-IDF. word2vec is great because it has a number of open source implementations. Once you have the vectors, any other similarity metric (like cosine distance) can be used on top of them with significantly more efficacy.
- doc2vec - Also known as paragraph vectors, this is the latest and greatest in a series of papers by Google, looking into dense vector representations of documents. The gensim library in Python has an implementation of word2vec that is straightforward enough that it can pretty reasonably be leveraged to build doc2vec, but make sure to keep the license in mind if you want to go down this route.

Hope that helps; let me know if you've got any questions.
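For concreteness, here are tiny Python illustrations of the first two metrics in the list, written out directly rather than with any particular library:

```
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two term-count / tf-idf vectors."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def levenshtein(s, t):
    """Classic dynamic-programming edit distance between two token sequences."""
    prev = list(range(len(t) + 1))
    for i, cs in enumerate(s, start=1):
        curr = [i]
        for j, ct in enumerate(t, start=1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (cs != ct)))  # substitution
        prev = curr
    return prev[-1]

print(cosine_similarity([1, 0, 2, 3], [1, 1, 2, 0]))  # ~0.55
print(levenshtein("kitten", "sitting"))               # 3
```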
null
CC BY-SA 3.0
null
2014-07-07T15:36:40.960
2014-07-07T15:36:40.960
null
null
548
null
690
2
null
683
2
null
Train 2 generative models, one for each dataset (spam only, spam plus ham). Each will give you the probability that a data point is drawn from the same probability distribution as its training data. Assign emails as spam or ham based on which model gives the higher probability of the document arising from the training data used to train it. Example generative models are RBMs and autoencoders (in that case, assign based on which model has the lowest reconstruction error). There are likely also some Bayesian generative models that will assign a probability to a data point based on some training data. The best option, though, would be to take the time to curate a second dataset containing only ham. That will give you higher classification accuracy. Assuming a lower proportion of spam to ham emails, that should not be too hard. You can even use Mechanical Turk if you lack the time or resources (or interns/graduate students or other cheap labor).
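A hedged sketch of the two-model idea in Python, with kernel density estimates standing in for RBMs/autoencoders and synthetic feature matrices:

```
import numpy as np
from sklearn.neighbors import KernelDensity

rng = np.random.RandomState(0)
X_spam = rng.normal(1.0, 1.0, size=(500, 5))      # known-spam features
X_mixed = rng.normal(0.0, 1.5, size=(2000, 5))    # spam + ham features
X_new = rng.normal(0.5, 1.5, size=(10, 5))        # emails to classify

kde_spam = KernelDensity(bandwidth=0.8).fit(X_spam)
kde_mixed = KernelDensity(bandwidth=0.8).fit(X_mixed)

# score_samples returns the log-density under each model; pick the higher one
log_p_spam = kde_spam.score_samples(X_new)
log_p_mixed = kde_mixed.score_samples(X_new)
labels = np.where(log_p_spam > log_p_mixed, "spam-like", "mixed-like")
print(labels)
```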
null
CC BY-SA 3.0
null
2014-07-07T18:06:24.583
2014-07-07T18:06:24.583
null
null
1301
null
691
2
null
678
6
null
Empirically I have found LSA vastly superior to LDA every time, and on every dataset I have tried it on. I have talked to other people who have said the same thing. It's also been used to win a number of the SemEval competitions for measuring semantic similarity between documents, often in combination with a WordNet-based measure, so I wouldn't say it's going out of fashion, or that it is definitely inferior to LDA, which in my experience is better for topic modelling and not semantic similarity, contrary to what some responders have stated. If you use gensim (a Python library), it has LSA, LDA and word2vec, so you can easily compare the three. doc2vec is a cool idea, but it does not scale very well and you will likely have to implement it yourself, as I am unaware of any open source implementations. It does not scale well because for each document a new and separate model has to be built using SGD, a slow machine learning algorithm. But it will probably give you the most accurate results. LSA and LDA also don't scale well (word2vec does, however); LDA scales worse in general. Gensim's implementations are very fast, however, as it uses iterative SVD. One other note: if you use word2vec, you will still have to determine a way to compose document vectors from the word vectors, as it gives you a different vector per word. The simplest way to do this is to normalize each vector and take the mean over all word vectors in the document, or to take a mean weighted by the idf of each word. So it's not as simple as 'use word2vec'; you will need to do something further to compute document similarity. I would personally go with LSA, as I have seen it work well empirically, and gensim's library scales very well. However, there's no free lunch, so preferably try each method and see which works better for your data.
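A small Python sketch of the composition step described above (the word vectors and idf weights here are stand-ins, not real word2vec output):

```
import numpy as np

word_vectors = {                              # stand-in for word2vec output
    "cat": np.array([0.2, 0.9, 0.1]),
    "sat": np.array([0.5, 0.1, 0.4]),
    "mat": np.array([0.3, 0.8, 0.2]),
}
idf = {"cat": 1.2, "sat": 0.3, "mat": 1.5}    # stand-in idf weights

def doc_vector(tokens, word_vectors, idf):
    vecs, weights = [], []
    for tok in tokens:
        if tok in word_vectors:
            v = word_vectors[tok]
            vecs.append(v / np.linalg.norm(v))   # normalise each word vector
            weights.append(idf.get(tok, 1.0))    # weight it by idf
    return np.average(vecs, axis=0, weights=weights)

print(doc_vector(["cat", "sat", "mat"], word_vectors, idf))
```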
null
CC BY-SA 3.0
null
2014-07-07T18:20:36.090
2014-07-07T18:20:36.090
null
null
1301
null
692
2
null
679
4
null
The cosine similarity metric does a good (if not perfect) job of controlling for document length, so comparing the similarity of 2 documents or 2 queries using the cosine metric and tf-idf weights for the words should work well in either case. I would also recommend doing LSA first on the tf-idf weights, and then computing the cosine distances/similarities. If you are trying to build a search engine, I would recommend using a free open source search engine like Solr or Elasticsearch, or just the raw Lucene libraries, as they do most of the work for you and have good built-in methods for handling the query-to-document similarity problem.
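A minimal scikit-learn sketch of the tf-idf → LSA → cosine pipeline suggested above, on a toy corpus:

```
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "the cat sat on the mat",
    "a cat and a dog played outside",
    "stock prices fell sharply on monday",
]
query = ["cat on a mat"]

tfidf = TfidfVectorizer()
X = tfidf.fit_transform(docs)

lsa = TruncatedSVD(n_components=2, random_state=0)   # LSA = SVD of the tf-idf matrix
X_lsa = lsa.fit_transform(X)
q_lsa = lsa.transform(tfidf.transform(query))

print(cosine_similarity(q_lsa, X_lsa))   # query-to-document similarities
```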
null
CC BY-SA 3.0
null
2014-07-07T18:28:50.420
2014-07-07T18:28:50.420
null
null
1301
null
693
2
null
662
7
null
I work for an online jobs site and we build solutions to recommend jobs based on resumes. Our approach takes a person's job title (or desired job title, if a student and known), along with skills we extract from their resume, and their location (which is very important to most people), and finds matches with jobs based on that. In terms of document classification, I would take a similar approach. I would recommend computing a tf-idf matrix for each resume as a standard bag-of-words model, extracting just the person's job title and skills (for which you will need to define a list of skills to look for), and feeding that into an ML algorithm. I would recommend trying kNN and an SVM; the latter works very well with high-dimensional text data. Linear SVMs tend to do better than non-linear ones (e.g. using RBF kernels). If you have that outputting reasonable results, I would then play with extracting features using a natural language parser/chunker, and also some custom-built phrases matched by regexes.
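A bare-bones Python version of the suggested bag-of-words + linear SVM setup (the resume snippets and category labels are invented):

```
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

resumes = [
    "java developer spring hibernate rest apis",
    "registered nurse icu patient care",
    "python machine learning pandas scikit-learn",
    "nurse practitioner clinical care emergency",
]
labels = ["software", "nursing", "software", "nursing"]

# tf-idf bag of words feeding a linear SVM
model = make_pipeline(TfidfVectorizer(), LinearSVC())
model.fit(resumes, labels)

print(model.predict(["senior python engineer, REST services"]))
```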
null
CC BY-SA 3.0
null
2014-07-07T18:36:23.430
2014-07-07T18:36:23.430
null
null
1301
null
694
1
695
null
150
125671
I'm using neural networks to solve different machine learning problems. I'm using Python and [pybrain](http://pybrain.org/), but this library is almost discontinued. Are there other good alternatives in Python?
Best python library for neural networks
CC BY-SA 3.0
null
2014-07-07T19:17:04.973
2018-12-11T23:39:52.667
2017-05-29T17:43:26.890
8432
989
[ "machine-learning", "python", "neural-network" ]
695
2
null
694
132
null
UPDATE: the landscape has changed quite a bit since I answered this question in July '14, and some new players have entered the space. In particular, I would recommend checking out:

- TensorFlow
- Blocks
- Lasagne
- Keras
- Deepy
- Nolearn
- NeuPy

They each have their strengths and weaknesses, so give them all a go and see which best suits your use case. Although I would have recommended using PyLearn2 a year ago, the community is no longer active so I would recommend looking elsewhere. My original answer is included below but is largely irrelevant at this point.

---

[PyLearn2](http://deeplearning.net/software/pylearn2/) is generally considered the library of choice for neural networks and deep learning in Python. It's designed for easy scientific experimentation rather than ease of use, so the learning curve is rather steep, but if you take your time and follow the tutorials I think you'll be happy with the functionality it provides. Everything from standard Multilayer Perceptrons to Restricted Boltzmann Machines to Convolutional Nets to Autoencoders is provided. There's great GPU support and everything is built on top of Theano, so performance is typically quite good. The source for PyLearn2 is available [on github](https://github.com/lisa-lab/pylearn2). Be aware that PyLearn2 has the opposite problem of PyBrain at the moment -- rather than being abandoned, PyLearn2 is under active development and is subject to frequent changes.
null
CC BY-SA 3.0
null
2014-07-07T19:55:51.057
2016-12-12T14:48:39.890
2016-12-12T14:48:39.890
9634
684
null
697
1
714
null
1
1762
I'm trying to run some analyses on some big datasets (e.g. 400k rows vs. 400 columns) with R (e.g. using neural networks and recommendation systems). But it's taking too long to process the data (with huge matrices, e.g. 400k rows vs. 400k columns). What are some free/cheap ways to improve R's performance? I'm open to package or web service suggestions (other options are welcome too).
Running huge datasets with R
CC BY-SA 3.0
null
2014-07-07T21:26:36.830
2017-11-15T03:54:08.703
null
null
199
[ "bigdata", "r", "optimization", "processing" ]
698
2
null
265
5
null
They can't predict, but they can tell you the most likely result. There's a study about this kind of approach from Etienne - [Predicting Who Will Win the World Cup with Wolfram Language](http://blog.wolfram.com/2014/06/20/predicting-who-will-win-the-world-cup-with-wolfram-language/). This is a very detailed study, so you can check all the methodology used to get the predictions. Interestingly enough, 11 out of 15 matches were predicted correctly!

> As one might expect, Brazil is the favorite, with a probability to win of 42.5%. This striking result is due to the fact that Brazil has both the highest Elo ranking and plays at home. (Let's go Brazil!)
null
CC BY-SA 3.0
null
2014-07-08T02:08:04.560
2014-07-08T02:08:04.560
null
null
1379
null
699
2
null
679
4
null
In my experience, only some classes of queries can be classified on lexical features (due to the ambiguity of natural language). Instead, you can try to use boolean search results (sites or segments of sites, not documents, without ranking) as features for classification (instead of words). This approach works well for classes where there is a lot of lexical ambiguity in a query but many good sites relevant to the query exist (e.g. movies, music, commercial queries and so on). Also, for offline classification you can do LSI on the query-site matrix. See the "Introduction to Information Retrieval" book for details.
null
CC BY-SA 3.0
null
2014-07-08T06:40:36.923
2014-07-08T06:40:36.923
null
null
1384
null
700
1
900
null
7
2560
I have a set of datapoints from the unit interval (i.e. a 1-dimensional dataset with numerical values). I receive some additional datapoints online, and moreover the value of some datapoints might change dynamically. I'm looking for an ideal clustering algorithm which can handle these issues efficiently. I know [sequential k-means clustering](https://www.cs.princeton.edu/courses/archive/fall08/cos436/Duda/C/sk_means.htm) copes with the addition of new instances, and I suppose with minor modification it can work with dynamic instance values (i.e. first taking the modified instance from the respective cluster, then updating the mean of the cluster and finally giving the modified instance as an input to the algorithm, just as with the addition of an unseen instance). My concern with using the k-means algorithm is the requirement of supplying the number of clusters as an input. I know that it beats other clustering algorithms (GAs, MSTs, hierarchical methods etc.) in time & space complexity. Honestly I'm not sure, but maybe I can get away with using one of the aforementioned algorithms. Even though my datasets are relatively large, the existence of a single dimension makes me wonder. More specifically, a typical test case of mine would contain about 10K-200K 1-dimensional datapoints. I would like to complete the clustering preferably in under a second. The dynamic changes in the value points are assumed to be smooth, i.e. relatively small. Thus being able to use existing solutions (i.e. being able to continue clustering on the existing one when a value is changed or a new one is added) is highly preferred. So all in all:

> Can you think of an algorithm which will provide a sweet spot between computational efficiency and the accuracy of clusters wrt. the problem defined above? Are there some nice heuristics for the k-means algorithm to automatically compute the value of K beforehand?
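For reference, a small Python sketch of the sequential (online) k-means update mentioned above for 1-dimensional points; this only covers the addition step, not the dynamic-value handling:

```
import numpy as np

class SequentialKMeans:
    def __init__(self, k, seed=0):
        rng = np.random.RandomState(seed)
        self.centers = rng.rand(k)        # initial centres in the unit interval
        self.counts = np.zeros(k)

    def add(self, x):
        """Assign a new point to its nearest centre and update that centre's mean."""
        i = int(np.argmin(np.abs(self.centers - x)))
        self.counts[i] += 1
        self.centers[i] += (x - self.centers[i]) / self.counts[i]
        return i                          # cluster assignment of the new point

skm = SequentialKMeans(k=3)
stream = np.random.RandomState(1).rand(10000)
assignments = [skm.add(x) for x in stream]
print(skm.centers)
```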
Efficient dynamic clustering
CC BY-SA 3.0
null
2014-07-08T07:29:34.167
2018-05-21T20:48:53.453
2020-06-16T11:08:43.077
-1
1386
[ "machine-learning", "algorithms", "clustering", "k-means", "hierarchical-data-format" ]
701
1
702
null
4
1246
I have generated a dataset of pairwise distances as follows:

```
id_1 id_2 dist_12
id_2 id_3 dist_23
```

I want to cluster this data so as to identify the pattern. I have been looking at spectral clustering and DBSCAN, but I haven't been able to come to a conclusion and am unsure how to make use of the existing implementations of these algorithms. I have been looking at Python and Java implementations so far. Could anyone point me to a tutorial or demo on how to make use of these clustering algorithms to handle the situation at hand?
Clustering pair-wise distance dataset
CC BY-SA 3.0
null
2014-07-08T07:37:57.123
2016-03-02T15:50:44.287
2016-03-02T15:50:44.287
13727
645
[ "data-mining", "clustering", "dbscan" ]
702
2
null
701
2
null
In the scikit-learn implementations of spectral clustering and DBSCAN you do not need to precompute the distances; you should input the sample coordinates for all `id_1` ... `id_n`. Here is a simplification of the [documented example comparison of clustering algorithms](http://scikit-learn.org/stable/auto_examples/cluster/plot_cluster_comparison.html):

```
import numpy as np

from sklearn import cluster
from sklearn.preprocessing import StandardScaler

## Prepare the data
X = np.random.rand(1500, 2)
# When reading from a file of the form: `id_n coord_x coord_y`
# you will need this call instead:
# X = np.loadtxt('coords.csv', usecols=(1, 2))
X = StandardScaler().fit_transform(X)

## Instantiate the algorithms
spectral = cluster.SpectralClustering(n_clusters=2,
                                      eigen_solver='arpack',
                                      affinity="nearest_neighbors")
dbscan = cluster.DBSCAN(eps=.2)

## Use the algorithms
spectral_labels = spectral.fit_predict(X)
dbscan_labels = dbscan.fit_predict(X)
```
null
CC BY-SA 3.0
null
2014-07-08T09:18:17.990
2014-07-08T09:18:17.990
null
null
1367
null
703
2
null
694
23
null
Pylearn2 relies on Theano and, as mentioned in the other answer, the library is quite complicated to use until you get the hang of it. In the meantime I would suggest using [Theanets](https://github.com/lmjohns3/theano-nets/). It is also built on top of Theano, but is much easier to work with. It may be true that it doesn't have all the features of Pylearn2, but for basic work it's sufficient. Also it's open source, so you can add custom networks on the fly, if you dare. :)

EDIT: Dec 2015. Recently I have started using [Keras](http://keras.io/). It is a bit lower level than Theanets, but much more powerful. For basic tests, Theanets is appropriate. But if you want to do research in the field of ANNs, Keras is much more flexible. Plus, Keras can use TensorFlow as a backend.
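For a flavour of Keras, here is a minimal sketch of a small classifier on random data (API names as in recent Keras versions; argument names have shifted a bit between releases):

```
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

# Toy binary classification data
X = np.random.rand(1000, 20)
y = (X.sum(axis=1) > 10).astype(int)

model = Sequential()
model.add(Dense(32, input_dim=20, activation="relu"))
model.add(Dense(1, activation="sigmoid"))
model.compile(loss="binary_crossentropy", optimizer="adam", metrics=["accuracy"])

model.fit(X, y, epochs=5, batch_size=32, verbose=0)
print(model.evaluate(X, y, verbose=0))    # [loss, accuracy]
```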
null
CC BY-SA 3.0
null
2014-07-08T10:36:44.220
2015-12-10T09:35:30.443
2015-12-10T09:35:30.443
1390
1390
null
704
1
null
null
11
397
In my university, we have an HPC computing cluster. I use the cluster to train classifiers and so on. So, usually, to send a job to the cluster (e.g. a Python scikit-learn script), I need to write a Bash script that contains (among other things) a command like `qsub script.py`. However, I find this process very frustrating. Usually what happens is that I write the Python script on my laptop, then I log in to the server and update the SVN repository, so I get the same Python script there. Then I write or edit that Bash script so I can run it. As you see, this is really frustrating since, for every little update to the Python script, I need many steps to have it executed on the computing cluster. Of course the task gets even more complicated when I have to put the data on the server and use the datasets' paths on the server. I'm sure many people here are using computing clusters for their data science tasks. I just want to know how you manage sending jobs to the clusters?
Working with HPC clusters
CC BY-SA 3.0
null
2014-07-08T13:45:07.583
2014-07-14T13:06:05.523
2014-07-09T00:25:47.190
84
728
[ "bigdata", "data-mining" ]
705
2
null
685
4
null
Stochastic gradient descent is a method of setting the parameters of the regressor; since the objective for logistic regression is convex (it has a single optimum), this won't be an issue and SGD is generally only needed to improve convergence speed with masses of training data. What your numbers suggest to me is that your features are not adequate to separate the classes. Consider adding extra features if you can think of any that are useful. You might also consider interaction and quadratic features built from your original feature space.
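A quick scikit-learn sketch of the "interactions and quadratic features" suggestion, on synthetic data:

```
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.RandomState(0)
X = rng.rand(500, 4)
y = (X[:, 0] * X[:, 1] + X[:, 2] ** 2 > 0.5).astype(int)   # nonlinear target

# degree=2 adds all pairwise interaction terms and squared terms
model = make_pipeline(PolynomialFeatures(degree=2, include_bias=False),
                      LogisticRegression(max_iter=1000))
model.fit(X, y)
print("Training accuracy:", model.score(X, y))
```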
null
CC BY-SA 3.0
null
2014-07-08T14:32:55.827
2014-07-08T14:32:55.827
null
null
1399
null
707
2
null
655
5
null
Try a recurrent neural network, a model well suited to time series data. RNNs are notoriously difficult to train, but seem to perform well when trained properly: [http://cs229.stanford.edu/proj2012/BernalFokPidaparthi-FinancialMarketTimeSeriesPredictionwithRecurrentNeural.pdf](http://cs229.stanford.edu/proj2012/BernalFokPidaparthi-FinancialMarketTimeSeriesPredictionwithRecurrentNeural.pdf)
null
CC BY-SA 3.0
null
2014-07-09T00:37:51.167
2014-07-09T00:37:51.167
null
null
684
null
708
2
null
704
4
null
There are many solutions to ease the burden of copying files from a local machine to the computing nodes in a cluster. A simple approach is to use an interface that allows multi-access to the machines in the cluster, like [clusterssh](http://sourceforge.net/projects/clusterssh/) (cssh). It allows you to type commands to multiple machines at once via a set of terminal screens (each one an SSH connection to a different machine in the cluster). Since your cluster seems to have `qsub` set up, your problem may rather be related to replicating the data across the machines (other than simply running a command on each node). So, to address this point, you may either write an `scp` script to copy things to and from each node in the cluster (which is surely better addressed with SVN), or you may set up an NFS. This would allow for simple and transparent access to the data, and also reduce the need for replicating unnecessary data. For example, you could access a node, copy the data to such a place, and simply use the data remotely, via network communication. I'm not acquainted with how to set up an NFS, but you may already have access to one (in case your home folder is the same across the machines you access). Then, the scripts and data could be sent to a single place and later accessed from the others. This is akin to the SVN approach, except it's more transparent/straightforward.
null
CC BY-SA 3.0
null
2014-07-09T00:44:28.207
2014-07-09T00:44:28.207
null
null
84
null
709
2
null
697
2
null
Since you mention you are building a recommendation system, I believe you have a sparse matrix which you are working on. Check [sparseMatrix](http://stat.ethz.ch/R-manual/R-devel/library/Matrix/html/sparseMatrix.html) from the Matrix package. This should help you store your large matrix in memory and train your model.
null
CC BY-SA 3.0
null
2014-07-09T07:08:34.740
2014-07-09T07:08:34.740
null
null
1131
null
710
1
720
null
1
893
I'm looking for commercial text summarization tools (APIs, Libraries,...) which are able to perform any of the following tasks: - Extractive Multi-Document Summarization (Generic or query-based) - Extractive Single-Document Summarization (Generic or query-based) - Generative Single-Document Summarization (Generic or query-based) - Generative Multi-Document Summarization (Generic or query-based)
Commercial Text Summarization Tools
CC BY-SA 3.0
null
2014-07-09T11:05:40.813
2017-03-11T20:01:01.750
2014-08-08T00:27:05.340
21
979
[ "text-mining" ]
711
1
712
null
71
10739
This question is in response to a comment I saw on another question. The comment was regarding the Machine Learning course syllabus on Coursera, and was along the lines of "SVMs are not used so much nowadays". I have only just finished the relevant lectures myself, and my understanding of SVMs is that they are a robust and efficient learning algorithm for classification, and that when using a kernel, they have a "niche" covering a number of features of perhaps 10 to 1000 and a number of training samples of perhaps 100 to 10,000. The limit on training samples is because the core algorithm revolves around optimising results generated from a square matrix with dimensions based on the number of training samples, not the number of original features. So does the comment I saw refer to some real change since the course was made, and if so, what is that change: a new algorithm that covers the SVM's "sweet spot" just as well, or better CPUs meaning the SVM's computational advantages are not worth as much? Or is it perhaps the opinion or personal experience of the commenter?

I tried a search for e.g. "are support vector machines out of fashion" and found nothing to imply they were being dropped in favour of anything else. And Wikipedia has this: [http://en.wikipedia.org/wiki/Support_vector_machine#Issues](http://en.wikipedia.org/wiki/Support_vector_machine#Issues) . . . the main sticking point appears to be the difficulty of interpreting the model. Which makes SVM fine for a black-box prediction engine, but not so good for generating insights. I don't see that as a major issue, just another minor thing to take into account when picking the right tool for the job (along with the nature of the training data and learning task, etc.).
Are Support Vector Machines still considered "state of the art" in their niche?
CC BY-SA 3.0
null
2014-07-09T12:22:22.400
2020-08-17T07:11:34.777
2020-08-17T07:11:34.777
836
836
[ "data-mining", "svm", "state-of-the-art" ]
712
2
null
711
56
null
SVM is a powerful classifier. It has some nice advantages (which I guess were responsible for its popularity). These are:

- Efficiency: Only the support vectors play a role in determining the classification boundary. All other points from the training set needn't be stored in memory.
- The so-called power of kernels: With appropriate kernels you can transform the feature space into a higher dimension so that it becomes linearly separable. The notion of kernels works with arbitrary objects on which you can define some notion of similarity with the help of inner products... and hence SVMs can classify arbitrary objects such as trees, graphs etc.

There are some significant disadvantages as well.

- Parameter sensitivity: The performance is highly sensitive to the choice of the regularization parameter C, which allows some variance in the model.
- Extra parameter for the Gaussian kernel: The radius of the Gaussian kernel can have a significant impact on classifier accuracy. Typically a grid search has to be conducted to find optimal parameters. LibSVM has support for grid search.

SVMs generally belong to the class of "sparse kernel machines". The sparse vectors in the case of SVM are the support vectors, which are chosen by the maximum-margin criterion. Other sparse vector machines, such as the relevance vector machine (RVM), perform better than SVM. The following figure shows a comparative performance of the two. In the figure, the x-axis shows one-dimensional data from two classes y={0,1}. The mixture model is defined as P(x|y=0)=Unif(0,1) and P(x|y=1)=Unif(.5,1.5) (Unif denotes the uniform distribution). 1000 points were sampled from this mixture, and an SVM and an RVM were used to estimate the posterior. The problem with the SVM is that the predicted values are far off from the true log odds.

![RVM vs. SVM](https://i.stack.imgur.com/zNYbt.png)

A very effective classifier, which is very popular nowadays, is the Random Forest. The main advantages are:

- Only one parameter to tune (i.e. the number of trees in the forest)
- Not utterly parameter-sensitive
- Can easily be extended to multiple classes
- Is based on probabilistic principles (maximizing mutual information gain with the help of decision trees)
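For the grid search mentioned above, a short scikit-learn sketch over C and the Gaussian-kernel width gamma on a small built-in dataset:

```
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

param_grid = {
    "C": [0.1, 1, 10, 100],        # regularization strength
    "gamma": [0.001, 0.01, 0.1, 1] # RBF kernel width
}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5)
search.fit(X, y)

print(search.best_params_, search.best_score_)
```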
null
CC BY-SA 3.0
null
2014-07-09T13:07:13.303
2014-07-09T20:23:12.380
2014-07-09T20:23:12.380
984
984
null
713
1
715
null
3
1341
I have installed the Cloudera CDH5 QuickStart VM on VM Player. When I log in through Hue, on the first page I get the following error: "Potential misconfiguration detected. Fix and restart Hue." ![Potential misconfiguration detected. Fix and restart Hue](https://i.stack.imgur.com/vnq5P.png) How can I solve this issue?
Cloudera QuickStart VM Error
CC BY-SA 3.0
null
2014-07-09T17:51:40.583
2016-04-04T01:08:43.203
2016-04-04T01:08:43.203
1352
1314
[ "apache-hadoop" ]
714
2
null
697
6
null
Your question is not very specific, so I'll try to give you some generic solutions. There are a couple of things you can do here:

- Check sparseMatrix from the Matrix package, as mentioned by @Sidhha.
- Try running your model in parallel using packages like snowfall or parallel. Check this list of packages on CRAN which can help you run your model in multicore parallel mode.
- You can also try the data.table package. It is quite phenomenal in speed.

Good reads:

- 11 Tips on How to Handle Big Data in R (and 1 Bad Pun)
- Why R is slow & how to improve its Performance?
null
CC BY-SA 3.0
null
2014-07-10T08:38:43.353
2014-07-10T08:38:43.353
null
null
2433
null
715
2
null
713
2
null
Go into the other link from the home page - to Cloudera Manager. From there, you'll see Hue can be restarted, but there is probably an alert that needs to be resolved in there first. If I remember right, there's some initial configuration that needs to be done on the QuickStart VM that's spelled out as soon as you log into the manager application.
null
CC BY-SA 3.0
null
2014-07-10T09:16:39.937
2014-07-10T09:16:39.937
null
null
434
null
716
1
718
null
22
6961
I know that there is no clear answer to this question, but let's suppose that I have a huge neural network, with a lot of data, and I want to add a new feature as input. The "best" way would be to test the network with the new feature and see the results, but is there a method to test whether the feature is UNLIKELY to be helpful? Something like [correlation measures](http://www3.nd.edu/%7Emclark19/learn/CorrelationComparison.pdf) etc.?
How to choose the features for a neural network?
CC BY-SA 4.0
null
2014-07-10T10:07:13.523
2021-03-11T19:26:46.897
2020-08-05T11:06:38.627
98307
989
[ "machine-learning", "neural-network", "feature-selection", "feature-extraction" ]
717
1
723
null
6
101
I'm using an experimental design to test the robustness of different classification methods, and now I'm searching for the correct name for such a design. I'm creating different subsets of the full dataset by cutting away some samples. Each subset is created independently of the others. Then, I run each classification method on every subset. Finally, I estimate the accuracy of each method as the fraction of classifications on the subsets that agree with the classification on the full dataset. For example:

```
Classification-full     1 2 3 2 1 1 2
Classification-subset1  1 2 2 3 1
Classification-subset2  2 3 1 1 2
...
Accuracy                1 1 1 1 0.5 1 1
```

Is there a correct name for this methodology? I thought it might fall under [bootstrapping](http://en.wikipedia.org/wiki/Bootstrapping_(statistics)) but I'm not sure about this.
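A small Python sketch of the agreement computation described above (the labels are made up; samples are tracked by id so that subsets of different sizes can be compared):

```
import numpy as np

full = {0: 1, 1: 2, 2: 3, 3: 2, 4: 1, 5: 1, 6: 2}   # sample id -> class on full data
subsets = [
    {0: 1, 1: 2, 3: 2, 4: 3, 5: 1},                 # predictions on subset 1
    {1: 2, 2: 3, 4: 1, 5: 1, 6: 2},                 # predictions on subset 2
]

accuracy = {}
for sample, label in full.items():
    # Fraction of subsets containing this sample that agree with the full-data label
    votes = [s[sample] == label for s in subsets if sample in s]
    accuracy[sample] = np.mean(votes) if votes else np.nan

print(accuracy)
```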
How to define a custom resampling methodology
CC BY-SA 3.0
null
2014-07-10T11:55:49.637
2014-07-11T11:32:13.550
2014-07-10T18:04:59.523
84
133
[ "classification", "definitions", "accuracy", "sampling" ]
718
2
null
716
20
null
A very strong correlation between the new feature and an existing feature is a fairly good sign that the new feature provides little new information. A low correlation between the new feature and existing features is likely preferable. A strong linear correlation between the new feature and the predicted variable is a good sign that the new feature will be valuable, but the absence of a high correlation is not necessarily a sign of a poor feature, because neural networks are not restricted to linear combinations of variables. If the new feature was manually constructed from a combination of existing features, consider leaving it out. The beauty of neural networks is that little feature engineering and preprocessing is required -- features are instead learned by intermediate layers. Whenever possible, prefer learning features to engineering them.
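A short Python sketch of these correlation checks on synthetic data:

```
import numpy as np

rng = np.random.RandomState(0)
X = rng.rand(1000, 5)                                  # existing features
target = X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=1000)
new_feature = 0.9 * X[:, 0] + rng.normal(scale=0.05, size=1000)   # candidate

# Correlation with each existing feature (high values = little new information)
corr_with_existing = [np.corrcoef(new_feature, X[:, j])[0, 1]
                      for j in range(X.shape[1])]
# Correlation with the target (high value is promising; a low one is not conclusive)
corr_with_target = np.corrcoef(new_feature, target)[0, 1]

print(np.round(corr_with_existing, 2), round(corr_with_target, 2))
```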
null
CC BY-SA 3.0
null
2014-07-10T15:43:53.177
2014-07-10T19:18:05.697
2014-07-10T19:18:05.697
684
684
null
719
2
null
653
4
null
nsl - I'm a beginner at machine learning, so forgive the layman's description here, but it sounds like you might be able to use topic modelling, like latent Dirichlet allocation (LDA). It's an algorithm widely used to classify documents according to what topics they are about, based on the words found and the relative frequencies of those words in the overall corpus. I bring it up mainly because in LDA it's not necessary to define the topics in advance. Since the help pages on LDA are mostly written for text analysis, the analogy I would use, in order to apply it to your question, is:

- Treat each gene expression, or feature, as a 'word' (sometimes called a token in typical LDA text-classification applications)
- Treat each sample as a document (i.e. it contains an assortment of words, or gene expressions)
- Treat the signatures as pre-existing topics

If I'm not mistaken, LDA should give weighted probabilities for each topic, as to how strongly it is present in each document.
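A hedged gensim sketch of the analogy above, where each sample becomes a "document" whose "words" are hypothetical gene tokens:

```
from gensim import corpora, models

samples = [
    ["geneA", "geneB", "geneB", "geneC"],   # tokens for sample 1
    ["geneC", "geneD", "geneD", "geneE"],   # tokens for sample 2
    ["geneA", "geneB", "geneC", "geneC"],   # tokens for sample 3
]

dictionary = corpora.Dictionary(samples)
corpus = [dictionary.doc2bow(tokens) for tokens in samples]

lda = models.LdaModel(corpus, num_topics=2, id2word=dictionary, passes=10)

# Weighted topic ("signature") probabilities per sample
for bow in corpus:
    print(lda.get_document_topics(bow))
```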
null
CC BY-SA 3.0
null
2014-07-10T22:42:13.720
2014-07-10T22:42:13.720
null
null
2443
null
720
2
null
710
2
null
There are a couple of open source options I know of - LibOTS - [http://libots.sourceforge.net/](http://libots.sourceforge.net/) DocSum - [http://docsum.sourceforge.net/docsum/web/about.php](http://docsum.sourceforge.net/docsum/web/about.php) A couple of commercial solutions - Intellix Summarizer Pro - [http://summarizer.intellexer.com/order_summarizer_pro.php](http://summarizer.intellexer.com/order_summarizer_pro.php) Copernic Summarizer - [http://www.copernic.com/en/products/summarizer/](http://www.copernic.com/en/products/summarizer/) And this one is a web service - TextTeaser - [http://www.textteaser.com/](http://www.textteaser.com/) I'm sure there are plenty of others out there. I have used Copernic a good deal and it's pretty good, but I was hoping it could be automated easily, which it can't - at least it couldn't when I used it.
null
CC BY-SA 3.0
null
2014-07-10T23:38:58.153
2014-07-10T23:38:58.153
null
null
434
null
721
2
null
658
7
null
When it comes to dealing with many disparate kinds of data, especially when the relationships between them are unclear, I would strongly recommend a technique based on [decision trees](http://en.wikipedia.org/wiki/Decision_tree_learning). The most popular ones today, to the best of my knowledge, are [random forest](http://en.wikipedia.org/wiki/Random_forest) and [extremely randomized trees](http://www.montefiore.ulg.ac.be/~ernst/uploads/news/id63/extremely-randomized-trees.pdf). Both have implementations in [sklearn](http://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html#sklearn.ensemble.RandomForestClassifier.fit_transform), and they are pretty straightforward to use. At a very high level, the reason that a `decision tree`-based approach is advantageous for multiple disparate kinds of data is that decision trees are largely independent of the specific data they are dealing with, just so long as they are capable of understanding your representation. You'll still have to fit your data into a feature vector of some kind, but based on your example that seems like a pretty straightforward task, and if you're willing to go a little deeper with your implementation you could certainly come up with a custom tree-splitting rule without actually having to change anything in the underlying algorithm. The [original paper](http://www.cs.nyu.edu/~roweis/csc2515-2006/readings/quinlan.pdf) is a pretty decent place to start if you want to give that a shot. If you want pseudo-structural data from your text data though, I might suggest looking into `doc2vec`, recently developed by Google. I don't think there are any good open-source implementations now, but it's a pretty straightforward improvement on the `word2vec` algorithm, which has implementations in at least `C` and `python`. Hope that helps! Let me know if you've got any more questions.
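As a concrete starting point, a short scikit-learn sketch with disparate attributes packed into one numeric feature vector (the encoding and the data are invented):

```
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier, RandomForestClassifier

rng = np.random.RandomState(0)
# columns: [age, is_android, n_sessions, bag-of-words count 1, count 2]
X = np.column_stack([
    rng.randint(18, 70, 500),
    rng.randint(0, 2, 500),
    rng.poisson(5, 500),
    rng.poisson(2, 500),
    rng.poisson(1, 500),
]).astype(float)
y = rng.randint(0, 2, 500)

for clf in (RandomForestClassifier(n_estimators=100, random_state=0),
            ExtraTreesClassifier(n_estimators=100, random_state=0)):
    clf.fit(X, y)
    print(type(clf).__name__, clf.score(X, y))
```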
null
CC BY-SA 3.0
null
2014-07-11T01:28:24.957
2014-07-11T01:28:24.957
null
null
548
null
722
2
null
52
23
null
R contains some standard functions for data manipulation, which can be used for data cleaning, in its base package (`gsub`, `transform`, etc.), as well as in various third-party packages, such as stringr, reshape/reshape2, and plyr/dplyr. Examples and best practices of usage for these packages and their functions are described in the following paper: [http://vita.had.co.nz/papers/tidy-data.pdf](http://vita.had.co.nz/papers/tidy-data.pdf). Additionally, R offers some packages specifically focused on data cleaning and transformation: - editrules (http://cran.r-project.org/web/packages/editrules/index.html) - deducorrect (http://cran.r-project.org/web/packages/deducorrect/index.html) - StatMatch (http://cran.r-project.org/web/packages/StatMatch/index.html) - MatchIt (http://cran.r-project.org/web/packages/MatchIt/index.html) - DataCombine (http://cran.r-project.org/web/packages/DataCombine) - data.table (http://cran.r-project.org/web/packages/data.table) A comprehensive and coherent approach to data cleaning in R, including examples and use of editrules and deducorrect packages, as well as a description of workflow (framework) of data cleaning in R, is presented in the following paper, which I highly recommend: [http://cran.r-project.org/doc/contrib/de_Jonge+van_der_Loo-Introduction_to_data_cleaning_with_R.pdf](http://cran.r-project.org/doc/contrib/de_Jonge+van_der_Loo-Introduction_to_data_cleaning_with_R.pdf).
null
CC BY-SA 3.0
null
2014-07-11T09:49:32.933
2014-08-02T08:16:08.640
2014-08-02T08:16:08.640
2452
2452
null
723
2
null
717
2
null
Random subsampling seems appropriate; bootstrapping is a bit more generic, but also correct. Here are some references and synonyms: [http://www.frank-dieterle.com/phd/2_4_3.html](http://www.frank-dieterle.com/phd/2_4_3.html)
null
CC BY-SA 3.0
null
2014-07-11T11:32:13.550
2014-07-11T11:32:13.550
null
null
127
null
724
2
null
704
4
null
Your approach of using a source version repository is a good one, and it actually allows you to also work on the cluster and then copy everything back. If you find yourself making minor edits to your Python script on your laptop and then updating your SVN directory on the cluster, why not work directly on the cluster frontend, make all needed minor edits, and then, at the end of the day, commit everything there and update on your laptop? All you need is to get familiar with the environment there (OS, editor, etc.) or install your own environment (I usually install in my home directory the latest versions of [Vim](http://www.vim.org/), [Tmux](http://tmux.sourceforge.net/), etc. with the proper dotfiles, so I feel at home there). Also, you can version your data, and even your intermediate results if size permits. My repositories often comprise code, data (original and cleaned versions), documentation, and paper sources for publishing (LaTeX). Finally, you can script your job submission to avoid modifying scripts manually. `qsub` accepts a script from stdin and also accepts all `#$` comments as command-line arguments.
null
CC BY-SA 3.0
null
2014-07-11T14:13:30.403
2014-07-11T14:13:30.403
null
null
172
null
725
2
null
634
0
null
What you describe falls in the category of [concept drift](http://en.wikipedia.org/wiki/Concept_drift) in machine learning. You might find interesting and actionable ideas in this [summary paper](http://arxiv.org/pdf/1010.4784.pdf) and you'll find a taxonomy of the possible approaches in [these slides](http://www.cs.waikato.ac.nz/~abifet/PAKDD2011/PAKDD11Tutorial_Handling_Concept_Drift.pdf).
null
CC BY-SA 3.0
null
2014-07-11T14:27:01.603
2014-07-11T14:27:01.603
null
null
172
null
726
1
null
null
6
374
I am trying to understand a neuroscience article:

- Friston, Karl J., et al. "Action and behavior: a free-energy formulation." Biological Cybernetics 102.3 (2010): 227-260. (DOI 10.1007/s00422-010-0364-z)

In this article, Friston gives three equations that are, as I understand him, equivalent or inter-convertible and refer to both physical and Shannon entropy. They appear on page 231 of the article as equation (5):

> The resulting expression for free-energy can be expressed in three ways (with the use of the Bayes rules and simple rearrangements): • Energy minus entropy • Divergence plus surprise • Complexity minus accuracy

Mathematically, these correspond to:

The things I am struggling with at this point are:

- the meaning of the || in the 2nd and 3rd versions of the equations;
- and the negative logs.

Any help in understanding how these equations are actually what Friston claims them to be would be greatly appreciated. For example, in the 1st equation, in what sense is the first term energy, etc.?
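A standard rendering of these three decompositions, in notation that may differ from Friston's equation (5) (here $q$ is the recognition density over hidden causes $\vartheta$ and $s$ the sensory data), is:

$$
\begin{aligned}
F &= \mathbb{E}_{q(\vartheta)}\big[-\ln p(s,\vartheta)\big] \;-\; \mathbb{H}\big[q(\vartheta)\big] &&\text{(energy minus entropy)}\\
F &= D_{\mathrm{KL}}\big[q(\vartheta)\,\|\,p(\vartheta\mid s)\big] \;-\; \ln p(s) &&\text{(divergence plus surprise)}\\
F &= D_{\mathrm{KL}}\big[q(\vartheta)\,\|\,p(\vartheta)\big] \;-\; \mathbb{E}_{q(\vartheta)}\big[\ln p(s\mid\vartheta)\big] &&\text{(complexity minus accuracy)}
\end{aligned}
$$

In this rendering, the $\|$ marks a Kullback-Leibler divergence between the recognition density and the indicated distribution, and $-\ln p(s)$ is the (Shannon) surprise of the sensory data.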
Trying to understand free-energy equations in a Karl Friston neuroscience article
CC BY-SA 3.0
null
2014-07-11T21:09:58.873
2014-07-16T16:18:18.573
2014-07-16T16:18:18.573
84
2458
[ "neural-network" ]
727
2
null
671
3
null
Your problem is that the resets aren't part of your linear model. You either have to cut your data into separate fragments at the resets, so that no reset occurs within any fragment, and fit a linear model to each fragment; or you can build a more complicated model that allows for resets. In that case, either the times at which the resets occur have to be put into the model manually, or the reset times have to be free parameters in the model that are determined by fitting the model to the data.
null
CC BY-SA 3.0
null
2014-07-11T21:35:22.677
2014-07-11T21:35:22.677
null
null
2459
null
728
2
null
536
4
null
In addition to the excellent previous answers, I'd like to recommend two papers on data cleaning. They are not specific to manual data cleaning, but, considering the benefits and advice (which I completely agree with) of expressing even manual data transformations in code, these resources can be just as valuable. Also, despite the fact that the following papers are somewhat R-focused, I believe that the general ideas and workflows for data cleaning can be easily extracted and are equally applicable to non-R environments as well. The first paper presents the concept of tidy data, as well as examples and best practices for the use of standard and specific R packages in data cleaning: [http://vita.had.co.nz/papers/tidy-data.pdf](http://vita.had.co.nz/papers/tidy-data.pdf). A comprehensive and coherent approach to data cleaning in R, including examples, as well as a description of a workflow (framework) for data cleaning in R, is presented in the following paper, which I highly recommend: [http://cran.r-project.org/doc/contrib/de_Jonge+van_der_Loo-Introduction_to_data_cleaning_with_R.pdf](http://cran.r-project.org/doc/contrib/de_Jonge+van_der_Loo-Introduction_to_data_cleaning_with_R.pdf).
null
CC BY-SA 3.0
null
2014-07-11T22:03:31.950
2014-07-11T22:03:31.950
null
null
2452
null
729
2
null
671
5
null
I thought this was an interesting problem, so I wrote a sample data set and a linear slope estimator in R. I hope it helps you with your problem. I'm going to make some assumptions, the biggest being that you want to estimate a constant slope given by some segments in your data. Another assumption, used to separate the blocks of linear data, is that the natural 'reset' can be found by comparing consecutive differences and finding ones that are X standard deviations below the mean. (I chose 4 sd's, but this can be changed.)

Here is a plot of the data, and the code for generating it is at the bottom.

![Sample Data](https://i.stack.imgur.com/2dC1w.png)

For starters, we find the breaks, fit each set of y-values and record the slopes.

```
# Find the differences between adjacent points
diffs = y_data[-1] - y_data[-length(y_data)]

# Find the break points (here I use 4 s.d.'s)
break_points = c(0,which(diffs < (mean(diffs) - 4*sd(diffs))),length(y_data))

# Create the lists of y-values
y_lists = sapply(1:(length(break_points)-1),function(x){
    y_data[(break_points[x]+1):(break_points[x+1])]
})

# Create the lists of x-values
x_lists = lapply(y_lists,function(x) 1:length(x))

# Find all the slopes for the lists of points
slopes = unlist(lapply(1:length(y_lists), function(x)
    lm(y_lists[[x]] ~ x_lists[[x]])$coefficients[2]))
```

Here are the slopes: (3.309110, 4.419178, 3.292029, 4.531126, 3.675178, 4.294389)

And we can just take the mean to find the expected slope (3.920168).

---

Edit: Predicting when the series reaches 120

I realized I didn't finish predicting when the series reaches 120. If we estimate the slope to be m and we see a reset at time t to a value x (x < 120), we can predict how much longer it would take to reach 120 with some simple algebra.

![enter image description here](https://i.stack.imgur.com/DixZv.gif)

Here, t is the time it would take to reach 120 after a reset, x is what it resets to, and m is the estimated slope. I'm not going to even touch the subject of units here, but it's good practice to work them out and make sure everything makes sense.

---

Edit: Creating the sample data

The sample data will consist of 100 points, random noise with a slope of 4 (hopefully we will estimate this). When the y-values reach a cutoff, they reset to 50. The cutoff is randomly chosen between 115 and 120 for each reset. Here is the R code to create the data set.

```
# Create Sample Data
set.seed(1001)
x_data = 1:100                        # x-data
y_data = rep(0,length(x_data))        # Initialize y-data
y_data[1] = 50
reset_level = sample(115:120,1)       # Select initial cutoff
for (i in x_data[-1]){                # Loop through rest of x-data
  if(y_data[i-1]>reset_level){        # check if y-value is above cutoff
    y_data[i] = 50                    # Reset if it is and
    reset_level = sample(115:120,1)   # rechoose cutoff
  } else {
    y_data[i] = y_data[i-1] + 4 + (10*runif(1)-5)  # Or just increment y with random noise
  }
}

plot(x_data,y_data)                   # Plot data
```
null
CC BY-SA 4.0
null
2014-07-11T23:08:16.267
2019-01-01T19:50:02.190
2019-01-01T19:50:02.190
61
375
null
730
1
1065
null
13
1888
As far as I know the development of algorithms to solve the Frequent Pattern Mining (FPM) problem, the road of improvements has some main checkpoints. Firstly, the [Apriori](http://en.wikipedia.org/wiki/Apriori_algorithm) algorithm was proposed in 1993 by [Agrawal et al.](http://dl.acm.org/citation.cfm?id=170072), along with the formalization of the problem. The algorithm was able to strip off some sets from the `2^n - 1` sets (powerset) by using a lattice to maintain the data. A drawback of the approach was the need to re-read the database to compute the frequency of each expanded set. Later, in 1997, [Zaki et al.](http://www.computer.org/csdl/trans/tk/2000/03/k0372-abs.html) proposed the algorithm [Eclat](http://en.wikibooks.org/wiki/Data_Mining_Algorithms_In_R/Frequent_Pattern_Mining/The_Eclat_Algorithm), which inserted the resulting frequency of each set inside the lattice. This was done by adding, at each node of the lattice, the set of transaction-ids that contained the items from the root to the referred node. The main contribution is that one does not have to re-read the entire dataset to know the frequency of each set, but the memory required to keep such a data structure built may exceed the size of the dataset itself. In 2000, [Han et al.](http://dl.acm.org/citation.cfm?doid=335191.335372) proposed an algorithm named [FPGrowth](http://en.wikibooks.org/wiki/Data_Mining_Algorithms_In_R/Frequent_Pattern_Mining/The_FP-Growth_Algorithm), along with a prefix-tree data structure named FPTree. The algorithm was able to provide significant data compression, while also guaranteeing that only frequent itemsets would be yielded (without candidate itemset generation). This was done mainly by sorting the items of each transaction in decreasing order of frequency, so that the most frequent items are the ones with the least repetitions in the tree data structure. Since the frequency only descends while traversing the tree in depth, the algorithm is able to strip off non-frequent itemsets.

Edit: As far as I know, this may be considered a state-of-the-art algorithm, but I'd like to know about other proposed solutions. What other algorithms for FPM are considered "state-of-the-art"? What is the intuition/main contribution of such algorithms? Is the FPGrowth algorithm still considered "state of the art" in frequent pattern mining? If not, what algorithm(s) may extract frequent itemsets from large datasets more efficiently?
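To make the level-wise candidate-generation idea concrete, here is a tiny Apriori-style sketch in Python on toy transactions (not an optimized implementation):

```
from itertools import combinations

transactions = [
    {"bread", "milk"},
    {"bread", "diapers", "beer", "eggs"},
    {"milk", "diapers", "beer", "cola"},
    {"bread", "milk", "diapers", "beer"},
    {"bread", "milk", "diapers", "cola"},
]
min_support = 3   # absolute support threshold

def frequent_itemsets(transactions, min_support):
    items = {item for t in transactions for item in t}
    frequent, k, current = {}, 1, [frozenset([i]) for i in items]
    while current:
        # Count each candidate by re-scanning the transactions (Apriori's cost)
        counts = {c: sum(c <= t for t in transactions) for c in current}
        level = {c: n for c, n in counts.items() if n >= min_support}
        frequent.update(level)
        # Generate (k+1)-candidates and prune those with an infrequent k-subset
        keys = list(level)
        current = {a | b for a, b in combinations(keys, 2) if len(a | b) == k + 1}
        current = [c for c in current
                   if all(frozenset(s) in level for s in combinations(c, k))]
        k += 1
    return frequent

for itemset, support in sorted(frequent_itemsets(transactions, min_support).items(),
                               key=lambda kv: (-kv[1], sorted(kv[0]))):
    print(set(itemset), support)
```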
Is FPGrowth still considered "state of the art" in frequent pattern mining?
CC BY-SA 3.0
null
2014-07-12T17:25:52.907
2014-08-30T18:36:07.490
2014-07-13T03:05:46.660
84
84
[ "bigdata", "data-mining", "efficiency", "state-of-the-art" ]
731
1
732
null
59
26154
When I started with artificial neural networks (NN) I thought I'd have to fight overfitting as the main problem. But in practice I can't even get my NN to pass the 20% error rate barrier. I can't even beat my score with random forest! I'm seeking some very general or not so general advice on what one should do to make a NN start capturing trends in data. For implementing the NN I use Theano Stacked Auto Encoder with [the code from the tutorial](https://github.com/lisa-lab/DeepLearningTutorials/blob/master/code/SdA.py) that works great (less than 5% error rate) for classifying the MNIST dataset. It is a multilayer perceptron with a softmax layer on top, with each hidden layer pre-trained as an autoencoder (fully described in the [tutorial](http://deeplearning.net/tutorial/deeplearning.pdf), chapter 8). There are ~50 input features and ~10 output classes. The NN has sigmoid neurons and all data are normalized to [0,1]. I tried lots of different configurations: number of hidden layers and neurons in them (100->100->100, 60->60->60, 60->30->15, etc.), different learning and pre-train rates, etc. And the best thing I can get is a 20% error rate on the validation set and a 40% error rate on the test set. On the other hand, when I try to use Random Forest (from scikit-learn) I easily get a 12% error rate on the validation set and 25%(!) on the test set. How can it be that my deep NN with pre-training behaves so badly? What should I try?
How to fight underfitting in a deep neural net
CC BY-SA 3.0
null
2014-07-13T09:04:39.703
2017-05-01T07:46:22.633
2017-05-01T07:46:22.633
8820
2471
[ "neural-network", "deep-learning" ]
732
2
null
731
37
null
The problem with deep networks is that they have lots of hyperparameters to tune and a very small solution space. Thus, finding good ones is more of an art than an engineering task. I would start with a working example from the tutorial and play around with its parameters to see how the results change - this gives a good intuition (though not a formal explanation) about the dependencies between parameters and results (both final and intermediate). I also found the following papers very useful: - Visually Debugging Restricted Boltzmann Machine Training with a 3D Example - A Practical Guide to Training Restricted Boltzmann Machines They both describe RBMs, but contain some insights on deep networks in general. For example, one of the key points is that networks need to be debugged layer-wise - if a previous layer doesn't provide a good representation of the features, further layers have almost no chance of fixing it.
null
CC BY-SA 3.0
null
2014-07-13T09:58:16.387
2014-07-13T16:46:23.883
2014-07-13T16:46:23.883
1279
1279
null
733
2
null
704
5
null
Ask your grid administrator to add your local machine as a "submit host", and install SGE (which we assume you are using, you don't actually say) so then you can `qsub` from your machine. OR.... Use emacs, then you can edit on your HPC via emacs's "tramp" ssh-connection facilities, and keep a shell open in another emacs window. You don't say what editor/operating system you like to use. You can even configure emacs to save a file in two places, so you could save to your local machine for running tests and to the HPC file system simultaneously for big jobs.
null
CC BY-SA 3.0
null
2014-07-13T12:24:36.430
2014-07-13T12:24:36.430
null
null
471
null
734
2
null
726
3
null
I'm not qualified to understand almost all of that paper, but I might be able to give some intuitions from information theory that help you parse the paper. `||` denotes the [Kullback-Leibler divergence](http://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence). It measures the information gain between two distributions. I suppose you could say it indicates the information in the real distribution of the data that a model fails to capture. When you see "negative log" think ["entropy"](http://en.wikipedia.org/wiki/Information_entropy). In the first equation, think of it as "-ln(...) - -ln(...)". This may help you think of it as a difference of entropies. Likewise in the second, read it as "D(...) + -ln(...)". This may help you think of it as "plus an entropy". If you look at the divergence definition, you'll see it is defined as an expectation of the log of the ratio of the PDFs. This may help connect it to logs and negative logs. Look at the definition that writes it as cross-entropy minus entropy. Then this is all a question of differences of entropies, which may be clearer.
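To make the "cross-entropy minus entropy" reading concrete, here is a small numerical sketch (my own illustration with made-up distributions, not taken from the paper):

```
import numpy as np

p = np.array([0.5, 0.3, 0.2])   # "real" distribution
q = np.array([0.4, 0.4, 0.2])   # model distribution

# KL divergence D(p || q): expectation of the log-ratio under p.
kl = np.sum(p * np.log(p / q))

# Equivalent view: cross-entropy H(p, q) minus entropy H(p).
cross_entropy = -np.sum(p * np.log(q))
entropy = -np.sum(p * np.log(p))

print(kl, cross_entropy - entropy)   # the two values agree
```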
null
CC BY-SA 3.0
null
2014-07-14T12:08:13.373
2014-07-14T12:08:13.373
null
null
21
null
735
2
null
704
3
null
From your question's wording I assume that you have a local machine and a remote machine where you update two files — a Python script and a Bash script. Both files are under SVN control, and both machines have access to the same SVN server. I am sorry I do not have any advice specific to your grid system, but let me list some general points I have found important for any deployment. Keep production changes limited to configuration changes. You write that you have to "use the datasets' path on the server"; this sounds to me like you have the paths hardcoded into your Python script. This is not a good idea, precisely because you will need to change those paths on every other machine you move the script to. If you commit those changes back to SVN, then on your local machine you will have the remote paths, and on and on ... (What if there are not only paths, but also passwords? You should not have production passwords in an SVN server.) So, keep paths and other setup information in a `.ini` file and use [ConfigParser](https://docs.python.org/2/library/configparser.html) to read it, or use a `.json` file and use the [json](https://docs.python.org/2/library/json.html) module. Keep one copy of the file locally and one remotely, both under the same path, both without SVN control, and just keep the path to that configuration file in the Python script (or get it from the command line if you can't keep both configurations under the same path). Keep configuration as small as possible. Any configuration is a "moving part" of your application, and any system is more robust the fewer moving parts it has. A good indicator of something that belongs in configuration is exactly that you have to edit it every time you move the code; things that have not needed editing can remain as constants in the code. Automate your deployment. You can do it via a Bash script on your local machine; note that you can [run any command on a remote machine](http://malcontentcomics.com/systemsboy/2006/07/send-remote-commands-via-ssh.html) through `ssh`. For instance: ``` svn export yourprojectpath /tmp/yourproject tar czf /tmp/yourproject.tgz -C /tmp yourproject scp /tmp/yourproject.tgz youruser@remotemachine:~/dev ## Remote commands are in the right hand side, between '' ssh youruser@remotemachine 'tar xzf ~/dev/yourproject.tgz -C ~/dev' ssh youruser@remotemachine 'qsub ~/dev/yourproject/script.py' ``` For this to work, you need of course to have a [passwordless login](http://www.linuxproblem.org/art_9.html), based on public/private keys, set up between your local and the remote machine. If you need more than this, you can think of using Python's [Fabric](http://www.fabfile.org) or the higher-level [cuisine](https://github.com/sebastien/cuisine).
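As an illustration of the configuration-file point, here is a minimal sketch (the file name, section and option names are made up; on Python 2 the module is called ConfigParser, on Python 3 configparser):

```
# settings.ini (kept out of SVN, one copy per machine):
#   [paths]
#   dataset_dir = /data/myproject/datasets
#   output_dir  = /data/myproject/output

try:
    from configparser import ConfigParser   # Python 3
except ImportError:
    from ConfigParser import ConfigParser   # Python 2

config = ConfigParser()
config.read("settings.ini")

dataset_dir = config.get("paths", "dataset_dir")
output_dir = config.get("paths", "output_dir")
```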
null
CC BY-SA 3.0
null
2014-07-14T13:06:05.523
2014-07-14T13:06:05.523
null
null
1367
null
736
1
740
null
10
3533
I have a dataset which contains ~100,000 samples of 50 classes. I have been using SVM with an RBF kernel to train and predict new data. The problem, though, is that the dataset is skewed across the classes. For example, Class 1 - 30 (~3% each), Class 31 - 45 (~0.6% each), Class 46 - 50 (~0.2% each) I see that the model tends to very rarely predict the classes which occur less frequently in the training set, even though the test set has the same class distribution as the training set. I am aware that there are techniques such as 'undersampling' where the majority class is scaled down to match the minority class. However, is this applicable here where there are so many different classes? Are there other methods to help handle this case?
Skewed multi-class data
CC BY-SA 3.0
null
2014-07-14T13:53:28.437
2015-10-01T20:31:46.680
null
null
802
[ "classification", "svm" ]
737
2
null
736
1
null
I have faced this problem many times while using SVM with an RBF kernel. Using a linear kernel instead of the RBF kernel solved my problem, but I was dealing with a smaller number of classes. The results were less skewed and more accurate with the linear kernel. Hope this solves your problem. Edit: When I wrote the original answer I was naive enough not to consider weighting the classes, as one of the other answers correctly suggests. Also, while using the RBF kernel it's important to make sure that the penalty parameter (the 'C' value in sklearn's svm module) is not left at a value that is too generic. I find that the default value of C=1 is too generic most of the time and I typically end up with a value of C=10000. Hope this helps others who get skewed results with svm(rbf) despite having a good distribution of classes in the data.
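A rough scikit-learn sketch of the comparison described above (the C value is illustrative, not a tuned number; X_train, y_train, X_val, y_val stand for your own splits):

```
from sklearn.svm import SVC

candidates = {
    "linear": SVC(kernel="linear", C=1.0),
    "rbf, large C": SVC(kernel="rbf", C=10000),
}

for name, clf in candidates.items():
    clf.fit(X_train, y_train)
    print(name, clf.score(X_val, y_val))   # compare per-kernel accuracy
```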
null
CC BY-SA 3.0
null
2014-07-14T15:58:42.517
2015-10-01T20:31:46.680
2015-10-01T20:31:46.680
2485
2485
null
739
1
742
null
24
6446
I am an MSc student at the University of Edinburgh, specializing in machine learning and natural language processing. I had some practical courses focused on data mining, and others dealing with machine learning, Bayesian statistics and graphical models. My background is a BSc in Computer Science. I did some software engineering and I learnt the basic concepts, such as design patterns, but I have never been involved in a large software development project. However, I had a data mining project in my MSc. My question is, if I want to go for a career as a Data Scientist, should I apply for a graduate data scientist position first, or should I get a position as a graduate software engineer first, maybe something related to data science, such as big data infrastructure or machine learning software development? My concern is that I might need good software engineering skills for data science, and I am not sure if these can be obtained by working as a graduate data scientist directly. Moreover, at the moment I like Data Mining, but what if I want to change my career to software engineering in the future? It might be difficult if I have specialised so much in data science. I have not been employed yet, so my knowledge is still limited. Any clarification or advice is welcome, as I am about to finish my MSc and I want to start applying for graduate positions in early October.
Starting my career as Data Scientist, is Software Engineering experience required?
CC BY-SA 3.0
null
2014-07-14T19:02:01.670
2014-07-18T19:29:05.073
2014-07-18T19:29:05.073
553
2489
[ "education", "definitions", "career" ]
740
2
null
736
5
null
I would suggest you use libsvm, which already has adjustable class weights implemented. Rather than replicating the training samples, one modifies the C parameter for different classes in the SVM optimization. For example, if your data has 2 classes, and the first class is only 10% of the data, you would choose class weights of 10 and 1 for classes 1 and 2 respectively. Therefore, margin violations of the first class would cost 10 times more than margin violations for the second class, and per-class accuracies would be more balanced.
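A short sketch of the same idea through scikit-learn's libsvm wrapper (the 10-vs-1 weights mirror the example above; with 50 classes you could instead derive weights from inverse class frequencies or use the built-in balanced option):

```
from sklearn.svm import SVC

# Explicit per-class weights, as in the 2-class example above.
clf = SVC(kernel="rbf", class_weight={1: 10, 2: 1})
clf.fit(X_train, y_train)               # X_train, y_train are your own data

# Or let the library set weights inversely proportional to class frequencies.
clf_balanced = SVC(kernel="rbf", class_weight="balanced")
```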
null
CC BY-SA 3.0
null
2014-07-14T19:06:44.090
2014-07-14T19:06:44.090
null
null
1350
null
741
2
null
739
6
null
Absolutely. Keep your software skills sharp. You can do this in an academic program if you simply implement by yourself all the algorithms you learn about. Good selection of courses, btw. Consider getting an internship too.
null
CC BY-SA 3.0
null
2014-07-15T04:39:11.670
2014-07-15T04:39:11.670
null
null
381
null
742
2
null
739
30
null
1) I think that there's no need to question whether your background is adequate for a career in data science. A CS degree is IMHO more than enough for a data scientist from a software engineering point of view. Having said that, theoretical knowledge is not very helpful without matching practical experience, so I would definitely try to enrich my experience through participating in additional school projects, internships or open source projects (maybe ones focused on data science / machine learning / artificial intelligence). 2) I believe your concern about focusing on data science too early is unfounded, as long as you keep practicing software engineering, either as part of your data science job, or additionally in your spare time. 3) I find the following definition of a data scientist rather accurate and hope it will be helpful in your future career success: > A data scientist is someone who is better at statistics than any software engineer and better at software engineering than any statistician. P.S. Today's enormous number of resources on data science topics is mind-blowing, but this open source curriculum for learning data science might fill some gaps between your BSc/MSc curricula and the reality of a data science career (or, at least, provide some direction for further research and maybe answer some of your concerns): [http://datasciencemasters.org](http://datasciencemasters.org), or on GitHub: [https://github.com/datasciencemasters/go](https://github.com/datasciencemasters/go).
null
CC BY-SA 3.0
null
2014-07-15T06:19:31.820
2014-07-15T06:19:31.820
null
null
2452
null
743
2
null
739
9
null
From the job ads I have seen, the answer depends: there are jobs which are more technical in nature (designing big data projects, doing some analysis) and the exact opposite (you do the analysis; storage etc. is someone else's job). So I would say that SOME software design skills are extremely useful, but you don't need the ability to build a huge program in C# / Java or whatever. Why I like having some SW skills is simply that your code probably looks way better than code from someone who never programmed for the sake of programming. Most of the time, the latter code is very hard to understand / debug for outsiders. Also, sometimes your analysis needs to be integrated into a bigger program, and an understanding of that program's needs certainly helps.
null
CC BY-SA 3.0
null
2014-07-15T09:30:02.183
2014-07-15T09:30:02.183
null
null
791
null
744
1
773
null
80
62979
It looks like the cosine similarity of two features is just their dot product scaled by the product of their magnitudes. When does cosine similarity make a better distance metric than the dot product? I.e. do the dot product and cosine similarity have different strengths or weaknesses in different situations?
Cosine similarity versus dot product as distance metrics
CC BY-SA 3.0
null
2014-07-15T21:30:11.600
2020-09-04T15:43:15.887
null
null
2507
[ "classification" ]
745
1
null
null
3
2033
I'm trying to use ARMA/ARIMA with the [statsmodel Python package](http://statsmodels.sourceforge.net/devel/tsa.html#descriptive-statistics-and-tests), in order to predict the gas consumption. I tried with [a dataset](https://github.com/denadai2/Gas-consumption-outliers/blob/master/exportWeb.csv) of this format: ![with this format](https://i.imgur.com/ZUvBlUP.png) Using only the gas column. ``` from pandas.tseries.offsets import * arma_mod20 = sm.tsa.ARMA(januaryFeb[['gas [m3]']], (5,3)).fit() predict_sunspots = arma_mod20.predict('2012-01-13', '2012-01-14', dynamic=True) ax = januaryFeb.ix['2012-01-13 00:00:00':'2012-01-15 22:00:00']['gas [m3]'].plot(figsize=(12,8)) ax = predict_sunspots.plot(ax=ax, style='r--', label='Dynamic Prediction'); ax.legend(); ``` ![result](https://i.imgur.com/oCPonu7.png) Why is the prediction so bad?
ARMA/ARIMA on energy forecasts timeseries: strange prediction
CC BY-SA 3.0
null
2014-07-16T00:06:02.160
2014-07-16T18:26:54.233
null
null
989
[ "machine-learning", "python" ]
746
2
null
745
5
null
I'm not an expert on time series, but I have a general piece of advice: may I suggest you try other packages (and various parameters) to see if there are any differences in the results. Also, unless you have to use `Python`, I'd recommend taking a look at `R`'s extensive ecosystem for time series analysis: see [http://www.statmethods.net/advstats/timeseries.html](http://www.statmethods.net/advstats/timeseries.html) and [http://cran.r-project.org/web/views/TimeSeries.html](http://cran.r-project.org/web/views/TimeSeries.html). In particular, you may want to check the standard `stats` package (including the functions `arima()` and `arima0()`), as well as some other packages: `FitARMA` ([http://cran.r-project.org/web/packages/FitARMA](http://cran.r-project.org/web/packages/FitARMA)), `forecast` ([http://cran.r-project.org/web/packages/forecast](http://cran.r-project.org/web/packages/forecast)) and the education-focused `fArma` (cran.r-project.org/web/packages/fArma), to mention just a few. I hope this is helpful.
null
CC BY-SA 3.0
null
2014-07-16T03:32:16.270
2014-07-16T03:32:16.270
null
null
2452
null
747
2
null
52
5
null
About automatic cleaning: You really cannot clean data automatically, because the number of errors and the definition of an error are often dependent on the data. E.g.: Your column "Income" might contain negative values, which are an error - you have to do something about those cases. On the other hand, a column "monthly savings" could reasonably contain negative values. Such errors are highly domain dependent - so to find them, you must have domain knowledge, something at which humans excel and automated processes not so much. Where you can and should automate is repeated projects. E.g. a report which has to be produced monthly. If you spot errors, you should put in place an automated process which can spot these kinds of errors in subsequent months, freeing your time.
null
CC BY-SA 3.0
null
2014-07-16T06:45:36.740
2014-07-16T06:45:36.740
null
null
791
null
748
1
null
null
7
105
I asked a data science question regarding how to decide on the best variation of a split test on the Statistics section of StackExchange. I hope I will have better luck here. The question is basically, "Why is mean revenue per user the best metric to make your decision on in a split test?" The original question is here: [https://stats.stackexchange.com/questions/107599/better-estimator-of-expected-sum-than-mean](https://stats.stackexchange.com/questions/107599/better-estimator-of-expected-sum-than-mean) Since it was not well received/understood I simplified the problem to a discrete set of purchases and phrased it as a classical probability problem. That question is here: [https://stats.stackexchange.com/questions/107848/drawing-numbered-balls-from-an-urn](https://stats.stackexchange.com/questions/107848/drawing-numbered-balls-from-an-urn) The mean may be the best metric for such a decision, but I am not convinced. We often have a lot of prior information, so a Bayesian method would likely improve our estimates. I realize that this is a difficult question, but Data Scientists are doing such split tests every day.
Why use mean revenue in a split test?
CC BY-SA 3.0
null
2014-07-16T07:47:48.603
2014-07-27T03:58:22.907
2017-04-13T12:44:20.183
-1
2511
[ "research", "cross-validation" ]
749
1
null
null
37
19573
I am learning about matrix factorization for recommender systems and I keep seeing the term `latent features` occurring quite frequently, but I am unable to understand what it means. I know what a feature is, but I don't understand the idea of latent features. Could you please explain it? Or at least point me to a paper/place where I can read about it?
Meaning of latent features?
CC BY-SA 3.0
null
2014-07-16T09:24:51.780
2020-08-16T10:15:48.737
2015-10-18T12:37:43.577
843
728
[ "machine-learning", "data-mining", "recommender-system" ]
750
1
769
null
22
12101
I am using OpenCV letter_recog.cpp example to experiment on random trees and other classifiers. This example has implementations of six classifiers - random trees, boosting, MLP, kNN, naive Bayes and SVM. UCI letter recognition dataset with 20000 instances and 16 features is used, which I split in half for training and testing. I have experience with SVM so I quickly set its recognition error to 3.3%. After some experimentation what I got was: UCI letter recognition: - RTrees - 5.3% - Boost - 13% - MLP - 7.9% - kNN(k=3) - 6.5% - Bayes - 11.5% - SVM - 3.3% Parameters used: - RTrees - max_num_of_trees_in_the_forrest=200, max_depth=20, min_sample_count=1 - Boost - boost_type=REAL, weak_count=200, weight_trim_rate=0.95, max_depth=7 - MLP - method=BACKPROP, param=0.001, max_iter=300 (default values - too slow to experiment) - kNN(k=3) - k=3 - Bayes - none - SVM - RBF kernel, C=10, gamma=0.01 After that I used same parameters and tested on Digits and MNIST datasets by extracting gradient features first (vector size 200 elements): Digits: - RTrees - 5.1% - Boost - 23.4% - MLP - 4.3% - kNN(k=3) - 7.3% - Bayes - 17.7% - SVM - 4.2% MNIST: - RTrees - 1.4% - Boost - out of memory - MLP - 1.0% - kNN(k=3) - 1.2% - Bayes - 34.33% - SVM - 0.6% I am new to all classifiers except SVM and kNN, for these two I can say the results seem fine. What about others? I expected more from random trees, on MNIST kNN gives better accuracy, any ideas how to get it higher? Boost and Bayes give very low accuracy. In the end I'd like to use these classifiers to make a multiple classifier system. Any advice?
How to increase accuracy of classifiers?
CC BY-SA 3.0
null
2014-07-16T09:49:15.933
2020-08-06T09:38:11.873
2014-07-16T15:09:44.907
1387
1387
[ "machine-learning", "classification", "svm", "accuracy", "random-forest" ]
751
2
null
744
12
null
You are right, cosine similarity has a lot in common with the dot product of vectors. Indeed, it is a dot product, scaled by magnitude. Because of the scaling it is bounded between -1 and 1 (and between 0 and 1 when the features are non-negative). CS is preferable because it takes into account the variability of the data and the features' relative frequencies. On the other hand, the plain dot product is a little bit "cheaper" (in terms of complexity and implementation).
null
CC BY-SA 3.0
null
2014-07-16T10:42:55.793
2014-07-16T10:42:55.793
null
null
941
null
752
2
null
749
3
null
It seems to me that latent features is a term used to describe criteria for classifying entities by their structure, in other words, by features (traits) they contain, instead of classes they belong to. Meaning of the word "latent" here is most likely similar to its meaning in social sciences, where very popular term [latent variable](http://en.wikipedia.org/wiki/Latent_variable) means unobservable variable (concept). Section "Introduction" in [this paper](http://papers.nips.cc/paper/3846-nonparametric-latent-feature-models-for-link-prediction.pdf) provides a good explanation of latent features' meaning and use in modeling of social sciences phenomena.
null
CC BY-SA 4.0
null
2014-07-16T11:07:56.467
2020-08-16T10:15:48.737
2020-08-16T10:15:48.737
98307
2452
null
753
2
null
224
2
null
I could try to explain it with words, but these slides explain it very well with pictures. Hope it helps. [http://www.inf.ed.ac.uk/teaching/courses/mt/lectures/phrase-model.pdf](http://www.inf.ed.ac.uk/teaching/courses/mt/lectures/phrase-model.pdf) Note that these slides correspond to chapter 5 of "Statistical Machine Translation" by Philipp Koehn, which is highly recommended if you are working on machine translation, and is easy to read.
null
CC BY-SA 3.0
null
2014-07-16T13:05:41.123
2014-07-16T13:05:41.123
null
null
2489
null
754
2
null
730
2
null
Most of the recent Frequent Pattern approaches that I've seen in the literature are based on optimizing FPGrowth. I have to admit, I haven't seen many developments within the literature in FPM in many years. [This wikibook](http://en.wikibooks.org/wiki/Data_Mining_Algorithms_In_R/Frequent_Pattern_Mining/The_FP-Growth_Algorithm) highlights many of the variants on FPGrowth that are out there.
null
CC BY-SA 3.0
null
2014-07-16T14:22:38.677
2014-07-16T14:22:38.677
null
null
2513
null
755
2
null
750
5
null
I expected more from random trees: - With random forests, typically for N features, sqrt(N) features are used for each decision tree construction. Since in your case N=20, you could try setting the number of sub-features used to construct each decision tree (note that this is a different parameter from max_depth) to about 5. - Instead of decision trees, linear models have been proposed and evaluated as base estimators in random forests, in particular multinomial logistic regression and naive Bayes. This might improve your accuracy. On MNIST kNN gives better accuracy, any ideas how to get it higher? - Try with a higher value of K (say 5 or 7). A higher value of K would give you more supportive evidence about the class label of a point. - You could run PCA or Fisher's Linear Discriminant Analysis before running k-nearest neighbour. By this you could potentially get rid of correlated features while computing distances between the points, and hence your k neighbours would be more robust. - Try different K values for different points based on the variance in the distances between the K neighbours.
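A possible scikit-learn sketch of the "PCA before k-nearest neighbour" suggestion above (the number of components and k are illustrative, not tuned values):

```
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

knn_pca = make_pipeline(PCA(n_components=50),
                        KNeighborsClassifier(n_neighbors=5))
knn_pca.fit(X_train, y_train)            # your own training split
print(knn_pca.score(X_test, y_test))     # accuracy on the held-out split
```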
null
CC BY-SA 3.0
null
2014-07-16T17:34:24.880
2014-07-16T17:34:24.880
null
null
984
null
756
2
null
749
34
null
At the expense of over-simplification, latent features are 'hidden' features, to distinguish them from observed features. Latent features are computed from observed features using matrix factorization. An example would be text document analysis. 'Words' extracted from the documents are features. If you factorize the data of words you can find 'topics', where a 'topic' is a group of words with semantic relevance. Low-rank matrix factorization maps several rows (observed features) to a smaller set of rows (latent features). To elaborate, a document could have observed features (words) like [sail-boat, schooner, yacht, steamer, cruiser] which would 'factorize' to latent features (topics) like 'ship' and 'boat'. [sail-boat, schooner, yacht, steamer, cruiser, ...] -> [ship, boat] The underlying idea is that latent features are semantically relevant 'aggregates' of observed features. When you have large-scale, high-dimensional, and noisy observed features, it makes sense to build your classifier on latent features. This is of course a simplified description to elucidate the concept. You can read the details on Latent Dirichlet Allocation (LDA) or probabilistic Latent Semantic Analysis (pLSA) models for an accurate description.
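A toy sketch of the factorization idea (using NMF from scikit-learn on a made-up document-by-word count matrix; any low-rank factorization would illustrate the same point):

```
import numpy as np
from sklearn.decomposition import NMF

# Rows = documents, columns = observed word features
# (e.g. counts of "sail-boat", "schooner", "yacht", ...).
rng = np.random.RandomState(0)
X = rng.poisson(1.0, size=(20, 8)).astype(float)

nmf = NMF(n_components=2, init="random", random_state=0)
W = nmf.fit_transform(X)   # documents expressed in 2 latent features ("topics")
H = nmf.components_        # how strongly each word loads on each latent feature
```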
null
CC BY-SA 3.0
null
2014-07-16T18:15:42.343
2014-07-16T18:15:42.343
null
null
2515
null
757
2
null
745
3
null
Gas usage has a daily cycle but there are also secondary weekly and annual cycles that the ARIMA may not be able to capture. There is a very noticeable difference between the weekday and Saturday data. Try creating a subset of the data for each day of the week or splitting the data into weekday and weekend and applying the model. If you can obtain temperature data for the same period check if there is a correlation between the temperature and gas usage. As @Aleksandr Blekh said R does have good packages for ARIMA models
null
CC BY-SA 3.0
null
2014-07-16T18:26:54.233
2014-07-16T18:26:54.233
null
null
325
null
758
1
759
null
63
10063
I am working on a data science project using Python. The project has several stages. Each stage consists of taking a data set, using Python scripts, auxiliary data, configuration and parameters, and creating another data set. I store the code in git, so that part is covered. I would like to hear about: - Tools for data version control. - Tools for reproducing stages and experiments. - Protocol and suggested directory structure for such a project. - Automated build/run tools.
Tools and protocol for reproducible data science using Python
CC BY-SA 3.0
null
2014-07-16T20:09:08.640
2018-10-18T03:48:09.637
2015-08-21T16:37:14.087
4647
895
[ "python", "tools", "version-control" ]
759
2
null
758
51
null
The topic of reproducible research (RR) is very popular today and, consequently, is huge, but I hope that my answer will be comprehensive enough as an answer and will provide enough information for further research, should you decide to do so. While Python-specific tools for RR certainly exist out there, I think it makes more sense to focus on more universal tools (you never know for sure what programming languages and computing environments you will be working with in the future). Having said that, let's take a look what tools are available per your list. 1) Tools for data version control. Unless you plan to work with (very) big data, I guess, it would make sense to use the same `git`, which you use for source code version control. The infrastructure is already there. Even if your files are binary and big, this advice might be helpful: [https://stackoverflow.com/questions/540535/managing-large-binary-files-with-git](https://stackoverflow.com/questions/540535/managing-large-binary-files-with-git). 2) Tools for managing RR workflows and experiments. Here's a list of most popular tools in this category, to the best of my knowledge (in the descending order of popularity): - Taverna Workflow Management System (http://www.taverna.org.uk) - very solid, if a little too complex, set of tools. The major tool is a Java-based desktop software. However, it is compatible with online workflow repository portal myExperiment (http://www.myexperiment.org), where user can store and share their RR workflows. Web-based RR portal, fully compatible with Taverna is called Taverna Online, but it is being developed and maintained by totally different organization in Russia (referred there to as OnlineHPC: http://onlinehpc.com). - The Kepler Project (https://kepler-project.org) - VisTrails (http://vistrails.org) - Madagascar (http://www.reproducibility.org) EXAMPLE. Here's an interesting article on scientific workflows with an example of the real workflow design and data analysis, based on using Kepler and myExperiment projects: [http://f1000research.com/articles/3-110/v1](http://f1000research.com/articles/3-110/v1). There are many RR tools that implement literate programming paradigm, exemplified by `LaTeX` software family. Tools that help in report generation and presentation is also a large category, where `Sweave` and `knitr` are probably the most well-known ones. `Sweave` is a tool, focused on R, but it can be integrated with Python-based projects, albeit with some additional effort ([https://stackoverflow.com/questions/2161152/sweave-for-python](https://stackoverflow.com/questions/2161152/sweave-for-python)). I think that `knitr` might be a better option, as it's modern, has extensive support by popular tools (such as `RStudio`) and is language-neutral ([http://yihui.name/knitr/demo/engines](http://yihui.name/knitr/demo/engines)). 3) Protocol and suggested directory structure. If I understood correctly what you implied by using term protocol (workflow), generally I think that standard RR data analysis workflow consists of the following sequential phases: data collection => data preparation (cleaning, transformation, merging, sampling) => data analysis => presentation of results (generating reports and/or presentations). Nevertheless, every workflow is project-specific and, thus, some specific tasks might require adding additional steps. 
For a sample directory structure, you may take a look at the documentation for the R package `ProjectTemplate` ([http://projecttemplate.net](http://projecttemplate.net)), which is an attempt to automate data analysis workflows and projects: ![enter image description here](https://i.stack.imgur.com/0B2vo.png) 4) Automated build/run tools. Since my answer is focused on universal (language-neutral) RR tools, the most popular tool is `make`. Read the following article for some reasons to use `make` as the preferred RR workflow automation tool: [http://bost.ocks.org/mike/make](http://bost.ocks.org/mike/make). Certainly, there are other similar tools, which either improve some aspects of `make`, or add some additional features. For example: `ant` (officially, Apache Ant: [http://ant.apache.org](http://ant.apache.org)), `Maven` ("next generation `ant`": [http://maven.apache.org](http://maven.apache.org)), `rake` ([https://github.com/ruby/rake](https://github.com/ruby/rake)), `Makepp` ([http://makepp.sourceforge.net](http://makepp.sourceforge.net)). For a comprehensive list of such tools, see Wikipedia: [http://en.wikipedia.org/wiki/List_of_build_automation_software](http://en.wikipedia.org/wiki/List_of_build_automation_software).
null
CC BY-SA 3.0
null
2014-07-17T06:02:04.813
2014-07-17T11:42:25.753
2017-05-23T12:38:53.587
-1
2452
null
760
1
null
null
2
420
I am given a time series data vector (ordered by months and years), which contains only `0`s and `1`s. A `1` represents that the person changed his job in a particular month. Questions: What model can I use to model how frequently this person changes his job? In addition, this model should be able to predict the probability of this person changing his job in the next 6 months. A Poisson process? (I have studied Poisson processes before, however I have no idea when and how to apply them.) Are there any assumptions the data needs to meet before applying a Poisson process? I would love to gather more information on how to model something like this. Thanks
Given time series data, how to model the frequency of someone changes his job?
CC BY-SA 3.0
null
2014-07-17T09:26:11.833
2018-10-16T20:20:17.820
null
null
1315
[ "data-mining", "time-series" ]
761
1
764
null
66
100737
What is the right approach and clustering algorithm for geolocation clustering? I'm using the following code to cluster geolocation coordinates: ``` import numpy as np import matplotlib.pyplot as plt from scipy.cluster.vq import kmeans2, whiten coordinates= np.array([ [lat, long], [lat, long], ... [lat, long] ]) x, y = kmeans2(whiten(coordinates), 3, iter = 20) plt.scatter(coordinates[:,0], coordinates[:,1], c=y); plt.show() ``` Is it right to use K-means for geolocation clustering, as it uses Euclidean distance, and not [Haversine formula](https://en.wikipedia.org/wiki/Haversine_formula) as a distance function?
Clustering geo location coordinates (lat,long pairs)
CC BY-SA 3.0
null
2014-07-17T09:50:41.437
2023-01-12T18:50:07.947
2017-05-12T08:39:00.347
31513
2533
[ "machine-learning", "python", "clustering", "k-means", "geospatial" ]
762
1
766
null
11
5818
t-SNE, as in [1], works by progressively reducing the Kullback-Leibler (KL) divergence, until a certain condition is met. The creators of t-SNE suggests to use KL divergence as a performance criterion for the visualizations: > you can compare the Kullback-Leibler divergences that t-SNE reports. It is perfectly fine to run t-SNE ten times, and select the solution with the lowest KL divergence [2] I tried two implementations of t-SNE: - python: sklearn.manifold.TSNE(). - R: tsne, from library(tsne). Both these implementations, when verbosity is set, print the error (Kullback-Leibler divergence) for each iteration. However, they don't allow the user to get this information, which looks a bit strange to me. For example, the code: ``` import numpy as np from sklearn.manifold import TSNE X = np.array([[0, 0, 0], [0, 1, 1], [1, 0, 1], [1, 1, 1]]) model = TSNE(n_components=2, verbose=2, n_iter=200) t = model.fit_transform(X) ``` produces: ``` [t-SNE] Computing pairwise distances... [t-SNE] Computed conditional probabilities for sample 4 / 4 [t-SNE] Mean sigma: 1125899906842624.000000 [t-SNE] Iteration 10: error = 6.7213750, gradient norm = 0.0012028 [t-SNE] Iteration 20: error = 6.7192064, gradient norm = 0.0012062 [t-SNE] Iteration 30: error = 6.7178683, gradient norm = 0.0012114 ... [t-SNE] Error after 200 iterations: 0.270186 ``` Now, as far as I understand, 0.270186 should be the KL divergence. However I cannot get this information, neither from model nor from t (which is a simple `numpy.ndarray`). To solve this problem I could: - Calculate KL divergence by my self, - Do something nasty in python for capturing and parsing TSNE() function's output [3]. However: - would be quite stupid to re-calculate KL divergence, when TSNE() has already computed it, - would be a bit unusual in terms of code. Do you have any other suggestion? Is there a standard way to get this information using this library? I mentioned I tried R's tsne library, but I'd prefer the answers to focus on the python sklearn implementation. --- References [1] [http://nbviewer.ipython.org/urls/gist.githubusercontent.com/AlexanderFabisch/1a0c648de22eff4a2a3e/raw/59d5bc5ed8f8bfd9ff1f7faa749d1b095aa97d5a/t-SNE.ipynb](http://nbviewer.ipython.org/urls/gist.githubusercontent.com/AlexanderFabisch/1a0c648de22eff4a2a3e/raw/59d5bc5ed8f8bfd9ff1f7faa749d1b095aa97d5a/t-SNE.ipynb) [2] [http://homepage.tudelft.nl/19j49/t-SNE.html](http://homepage.tudelft.nl/19j49/t-SNE.html) [3] [https://stackoverflow.com/questions/16571150/how-to-capture-stdout-output-from-a-python-function-call](https://stackoverflow.com/questions/16571150/how-to-capture-stdout-output-from-a-python-function-call)
t-SNE Python implementation: Kullback-Leibler divergence
CC BY-SA 4.0
null
2014-07-17T10:04:29.797
2020-08-02T14:02:51.077
2020-08-02T14:02:51.077
98307
131
[ "machine-learning", "python" ]
763
2
null
760
2
null
A simple and perhaps somewhat naive approach would be to assume that a person changes jobs at a constant rate and that previous job changes have no influence on future ones. Under these assumptions you could model the job changes as a Poisson process and estimate the rate parameter using MLE ([http://en.wikipedia.org/wiki/Poisson_process](http://en.wikipedia.org/wiki/Poisson_process) and [http://en.wikipedia.org/wiki/Poisson_distribution](http://en.wikipedia.org/wiki/Poisson_distribution)). Of course one should explore how well these assumptions hold in the data. To do this, you could study whether or not job changes are independent of one another by computing the correlation between events at various lags ([http://en.wikipedia.org/wiki/Correlation](http://en.wikipedia.org/wiki/Correlation)). You could also plot the distribution of time between job change events. If the process is Poisson-like then you should observe little to no correlation between events at any number of lags and the distribution of time between job change events should be exponentially distributed ([http://en.wikipedia.org/wiki/Exponential_distribution](http://en.wikipedia.org/wiki/Exponential_distribution)).
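A small sketch of those checks in Python (the 0/1 vector here is a toy example standing in for your data):

```
import numpy as np

changes = np.array([0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0])   # toy monthly data

# MLE of the Poisson rate: number of events per unit time (per month here).
rate = changes.sum() / float(len(changes))

# Under the constant-rate assumption, probability of at least one
# job change within the next 6 months.
p_change_6m = 1 - np.exp(-rate * 6)

# Times between events, to eyeball whether they look roughly exponential.
event_times = np.flatnonzero(changes)
inter_event = np.diff(event_times)
print(rate, p_change_6m, inter_event)
```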
null
CC BY-SA 3.0
null
2014-07-17T10:26:45.547
2014-07-17T10:26:45.547
null
null
1329
null
764
2
null
761
11
null
K-means should be right in this case. Since k-means tries to group based solely on euclidean distance between objects you will get back clusters of locations that are close to each other. To find the optimal number of clusters you can try making an 'elbow' plot of the within group sum of square distance. [This](http://nbviewer.ipython.org/github/nborwankar/LearnDataScience/blob/master/notebooks/D3.%20K-Means%20Clustering%20Analysis.ipynb) may be helpful
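A possible sketch of such an elbow plot with scikit-learn (its inertia_ attribute is the within-cluster sum of squared distances; coordinates is the array from the question):

```
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans

ks = range(1, 11)
inertias = [KMeans(n_clusters=k, n_init=10, random_state=0)
            .fit(coordinates).inertia_ for k in ks]

plt.plot(list(ks), inertias, marker="o")
plt.xlabel("number of clusters k")
plt.ylabel("within-cluster sum of squares")
plt.show()
```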
null
CC BY-SA 4.0
null
2014-07-17T12:34:11.397
2020-08-02T14:02:48.237
2020-08-02T14:02:48.237
98307
802
null
766
2
null
762
4
null
The TSNE source in scikit-learn is in pure Python. The `fit_transform()` method is actually calling a private `_fit()` function which then calls a private `_tsne()` function. That `_tsne()` function has a local variable `error` which is printed out at the end of the fit. Seems like you could pretty easily change one or two lines of source code to have that value returned to `fit_transform()`.
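As a side note, newer scikit-learn releases appear to expose this value directly as a kl_divergence_ attribute on the fitted estimator, so it may be worth probing for it before patching the source; the check below simply yields None if the installed version does not provide it:

```
# `model` is the fitted TSNE instance from the question.
kl = getattr(model, "kl_divergence_", None)
print(kl)
```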
null
CC BY-SA 3.0
null
2014-07-17T14:07:12.643
2014-07-17T14:07:12.643
null
null
159
null
767
2
null
758
20
null
The best reproducibility tool is to make a log of your actions, something like this: ``` experiment/input ; expected ; observation/output ; current hypothesis and if supported or rejected exp1 ; expected1 ; obs1 ; some fancy hypothesis, supported ``` This can be written down on a paper, but, if your experiments fit in a computational framework, you can use computational tools to partly or completely automate that logging process (particularly by helping you track the input datasets which can be huge, and the output figures). A great reproducibility tool for Python with a low learning curve is of course [IPython/Jupyter Notebook](http://ipython.org/) (don't forget the [%logon and %logstart](https://damontallen.github.io/IPython-quick-ref-sheets/) magics). Tip: to make sure your notebook is reproducible, restart the kernel and try to run all cells from top to bottom (button Run All Cells): if it works, then save everything in an archive file ("freezing"), else, notably if you need to run cells in a non linear and non sequential and non obvious fashion to avoid errors, you need to rework a bit. Another great tool that is very recent (2015) is [recipy](https://github.com/recipy/recipy), which is very like sumatra (see below), but made specifically for Python. I don't know if it works with Jupyter Notebooks, but I know the author frequently uses them so I guess that if it's not currently supported, it will be in the future. [Git](https://en.wikipedia.org/wiki/Git_(software)) is also awesome, and it's not tied to Python. It will help you not only to keep a history of all your experiments, code, datasets, figures, etc. but also provide you with tools to maintain ([git pickaxe](http://www.philandstuff.com/2014/02/09/git-pickaxe.html)), collaborate ([blame](http://jfire.io/blog/2012/03/07/code-archaeology-with-git/)) and debug ([git](http://blogs.perl.org/users/ovid/2014/07/making-git-bisect-more-useful.html)-[bisect](https://lwn.net/Articles/317154/)) using a scientific method of debugging (called [delta debugging](https://en.wikipedia.org/wiki/Delta_Debugging)). [Here's a story](https://matthew-brett.github.io/pydagogue/curious_git.html) of a fictional researcher trying to make his own experiments logging system, until it ends up being a facsimile of Git. Another general tool working with any language (with a Python API on [pypi](https://pypi.python.org/pypi/Sumatra)) is [Sumatra](http://neuralensemble.org/sumatra/), which is specifically designed to help you do replicable research (replicable aims to produce the same results given the exact same code and softwares, whereas reproducibility aims to produce the same results given any medium, which is a lot harder and time consuming and not automatable). Here is how Sumatra works: for each experiment that you conduct through Sumatra, this software will act like a "save game state" often found in videogames. More precisely, it will will save: - all the parameters you provided; - the exact sourcecode state of your whole experimental application and config files; - the output/plots/results and also any file produced by your experimental application. It will then construct a database with the timestamp and other metadatas for each of your experiments, that you can later crawl using the webGUI. 
Since Sumatra saved the full state of your application for a specific experiment at one specific point in time, you can restore the code that produced a specific result at any moment you want, thus you have replicable research at a low cost (except for storage if you work on huge datasets, but you can configure exceptions if you don't want to save everything everytime). Another awesome tool is [GNOME's Zeitgeist](https://en.wikipedia.org/wiki/Zeitgeist_(free_software)) (previously coded in Python but now ported to Vala), an all-compassing action journaling system, which records everything you do and it can use machine learning to summarize for a time period you want the relationship between items based on similarity and usage patterns, eg answering questions like "What was most relevant to me, while I was working on project X, for a month last year?". Interestingly, [Zim Desktop Wiki](http://zim-wiki.org/), a note-taking app similar to Evernote, has a plugin to work with Zeitgeist. In the end, you can use either Git or Sumatra or any other software you want, they will provide you with about the same replicability power, but Sumatra is specifically tailored for scientific research so it provides a few fancy tools like a web GUI to crawl your results, while Git is more tailored towards code maintenance (but it has debugging tools like git-bisect so if your experiments involve codes, it may actually be better). Or of course you can use both! /EDIT: [dsign](https://datascience.stackexchange.com/a/775) touched a very important point here: the replicability of your setup is as important as the replicability of your application. In other words, you should at least provide a full list of the libraries and compilers you used along with their exact versions and the details of your platform. Personally, in scientific computing with Python, I have found that packaging an application along with the libraries is just too painful, thus I now just use an all-in-one scientific python package such as [Anaconda](https://store.continuum.io/cshop/anaconda/) (with the great package manager [conda](http://www.continuum.io/blog/conda)), and just advise users to use the same package. Another solution could be to provide a script to automatically generate a [virtualenv](http://docs.python-guide.org/en/latest/dev/virtualenvs/), or to package everything using the commercial [Docker application as cited by dsign](https://datascience.stackexchange.com/a/775) or the opensource [Vagrant](http://www.vagrantup.com/) (with for example [pylearn2-in-a-box](http://deeplearning.net/software/pylearn2/#other-methods) which use Vagrant to produce an easily redistributable virtual environment package). Finally, to really ensure that you have a fully working environment everytime you need, you can make a virtual machine (see VirtualBox), and you can even save the state of the machine (snapshot) with your experiment ready to run inside. Then you can just share this virtual machine with everything included so that anyone can replicate your experiment with your exact setup. This is probably the best way to replicate a software based experiment. Containers might be a more lightweight alternative, but they do not include the whole environment, so that the replication fidelity will be less robust. 
/EDIT2: Here's a [great video](https://www.udacity.com/course/viewer#!/c-cs259/l-48648797/m-48680672) summarizing (for debugging, but this can also be applied to research) what is fundamental for doing reproducible research: logging your experiments and each of the other steps of the scientific method, a sort of "explicit experimenting".
null
CC BY-SA 4.0
null
2014-07-17T14:28:38.207
2018-10-18T03:48:09.637
2018-10-18T03:48:09.637
2544
2544
null
768
2
null
758
25
null
Since I started doing research in academia I have been constantly looking for a satisfactory workflow. I think that I have finally found something I am happy with: 1) Put everything under version control, e.g., Git: For hobby research projects I use GitHub, for research at work I use the private GitLab server that is provided by our university. I also keep my datasets there. 2) I do most of my analyses along with the documentation in IPython notebooks. It is very organized (for me) to have the code, the plots, and the discussion/conclusions all in one document. If I am running larger scripts, I usually put them into separate .py script files, but I still execute them from the IPython notebook via the %run magic to add information about the purpose, outcome, and other parameters. I have written a small cell-magic extension for IPython and IPython notebooks, called "watermark", that I use to conveniently create time stamps and keep track of the different package versions I used and also Git hashes. For example ``` %watermark 29/06/2014 01:19:10 CPython 3.4.1 IPython 2.1.0 compiler : GCC 4.2.1 (Apple Inc. build 5577) system : Darwin release : 13.2.0 machine : x86_64 processor : i386 CPU cores : 2 interpreter: 64bit ``` ``` %watermark -d -t 29/06/2014 01:19:11 ``` ``` %watermark -v -m -p numpy,scipy CPython 3.4.1 IPython 2.1.0 numpy 1.8.1 scipy 0.14.0 compiler : GCC 4.2.1 (Apple Inc. build 5577) system : Darwin release : 13.2.0 machine : x86_64 processor : i386 CPU cores : 2 interpreter: 64bit ``` For more info, see the [documentation here](http://nbviewer.ipython.org/github/rasbt/python_reference/blob/master/ipython_magic/watermark.ipynb).
null
CC BY-SA 3.0
null
2014-07-17T15:19:32.237
2014-07-17T15:19:32.237
null
null
null
null
769
2
null
750
11
null
Dimensionality Reduction Another important procedure is to compare the error rates on training and test dataset to see if you are overfitting (due to the "curse of dimensionality"). E.g., if your error rate on the test dataset is much larger than the error on the training data set, this would be one indicator. In this case, you could try dimensionality reduction techniques, such as PCA or LDA. If you are interested, I have written about PCA, LDA and some other techniques [here](http://sebastianraschka.com/index.html#machine_learning) and in my GitHub repo [here](https://github.com/rasbt/pattern_classification). Cross validation Also you may want to take a look at cross-validation techniques in order to evaluate the performance of your classifiers in a more objective manner
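A brief sketch of cross-validation with scikit-learn (in older versions the import lives in sklearn.cross_validation rather than sklearn.model_selection; the SVM parameters are the ones quoted in the question):

```
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

clf = SVC(kernel="rbf", C=10, gamma=0.01)
scores = cross_val_score(clf, X, y, cv=5)     # X, y are your own data
print(scores.mean(), scores.std())
```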
null
CC BY-SA 4.0
null
2014-07-17T15:28:25.940
2020-08-06T09:38:11.873
2020-08-06T09:38:11.873
98307
null
null
770
2
null
736
2
null
I am not an expert in using SVMs, but usually, if you are using a machine learning library like Python's `scikit-learn` or R's `libsvm`, there is a `class_weight` parameter, or `class.weights`, respectively. Or if you'd use a Bayes classifier, you would take this "skew" into account via the "prior (class) probabilities" P(ωj).
null
CC BY-SA 3.0
null
2014-07-17T15:35:17.487
2014-07-17T15:35:17.487
null
null
null
null
771
2
null
694
6
null
From what I heard, Pylearn2 might currently be the library of choice for most people. This reminds me of a blog post from a few months ago that lists all the different machine learning libraries with a short explanation: [https://www.cbinsights.com/blog/python-tools-machine-learning](https://www.cbinsights.com/blog/python-tools-machine-learning) The section you might be interested in here would be "Deep Learning". About Pylearn2, he writes > PyLearn2 There is another library built on top of Theano, called PyLearn2 which brings modularity and configurability to Theano where you could create your neural network through different configuration files so that it would be easier to experiment different parameters. Arguably, it provides more modularity by separating the parameters and properties of neural network to the configuration file.
null
CC BY-SA 3.0
null
2014-07-17T18:25:44.683
2014-07-17T18:25:44.683
null
null
null
null
773
2
null
744
78
null
Think geometrically. Cosine similarity only cares about angle difference, while dot product cares about angle and magnitude. If you normalize your data to have the same magnitude, the two are indistinguishable. Sometimes it is desirable to ignore the magnitude, hence cosine similarity is nice, but if magnitude plays a role, dot product would be better as a similarity measure. Note that neither of them is a "distance metric".
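A tiny numerical illustration of that geometric point:

```
import numpy as np

a = np.array([1.0, 2.0])
b = 10 * a                      # same direction, ten times the magnitude

def cosine(u, v):
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

print(np.dot(a, a), np.dot(a, b))   # 5.0 vs 50.0 -- dot product sees magnitude
print(cosine(a, a), cosine(a, b))   # both 1.0    -- cosine only sees the angle
```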
null
CC BY-SA 3.0
null
2014-07-17T20:02:11.227
2014-07-17T20:02:11.227
null
null
154
null
774
2
null
376
2
null
The following general answer is my uneducated guess, so take it with grain of salt. Hopefully, it makes sense. I think that the best way to describe or analyze experiments (as any other systems, in general) is to build their statistical (multivariate) models and evaluate them. Depending on whether environments for your set of experiments are represented by the same model or different, I see the following approaches: 1) Single model approach. Define experiments' statistical model for all environments (dependent and independent variables, data types, assumptions, constraints). Analyze it (most likely, using regression analysis). Compare results across variables, which determine (influence) different environments. 2) Multiple models approach. The same steps as previous case, but compare results across models, corresponding to different environments.
null
CC BY-SA 3.0
null
2014-07-18T04:50:58.287
2014-07-18T04:50:58.287
null
null
2452
null
775
2
null
758
14
null
Be sure to check out [docker](https://www.docker.com/)! And in general, all the other good things that software engineering has created over the decades for ensuring isolation and reproducibility. I would like to stress that it is not enough to have merely reproducible workflows, but also workflows that are easy to reproduce. Let me show what I mean. Suppose that your project uses Python, a database X and Scipy. Most surely you will be using a specific library to connect to your database from Python, and Scipy will in turn be using some sparse algebraic routines. This is by all means a very simple setup, but not entirely simple to set up, pun intended. If somebody wants to execute your scripts, she will have to install all the dependencies. Or worse, she might already have incompatible versions of them installed. Fixing those things takes time. It will also take you time if at some moment you need to move your computations to a cluster, to a different cluster, or to some cloud servers. Here is where I find docker useful. Docker is a way to formalize and compile recipes for binary environments. You can write the following in a dockerfile (I'm using here plain English instead of the Dockerfile syntax): - Start with a basic binary environment, like Ubuntu's - Install libsparse-dev - (Pip) Install numpy and scipy - Install X - Install libX-dev - (Pip) Install python-X - Install IPython-Notebook - Copy my python scripts/notebooks to my binary environment, these datafiles, and these configurations to do other miscellaneous things. To ensure reproducibility, copy them from a named url instead of a local file. - Maybe run IPython-Notebook. Some of the lines will be installing things in Python using pip, since pip can do a very clean job of selecting specific package versions. Check it out too! And that's it. If after you create your Dockerfile it can be built, then it can be built anywhere, by anybody (provided they also have access to your project-specific files, e.g. because you put them in a public url referenced from the Dockerfile). What is best, you can upload the resulting environment (called an "image") to a public or private server (called a "registry") for other people to use. So, when you publish your workflow, you have both a fully reproducible recipe in the form of a Dockerfile, and an easy way for you or other people to reproduce what you do: ``` docker run dockerregistery.thewheezylab.org/nowyouwillbelieveme ``` Or if they want to poke around in your scripts and so forth: ``` docker run -i -t dockerregistery.thewheezylab.org/nowyouwillbelieveme /bin/bash ```
null
CC BY-SA 3.0
null
2014-07-18T07:43:56.823
2014-07-18T07:43:56.823
null
null
2575
null
776
2
null
41
2
null
I am far from an expert, but my understanding of the subject tells me that R (superb in statistics) and e.g. Python (superb in several of those things where R is lacking) complements each other quite well (as pointed out by previous posts).
null
CC BY-SA 3.0
null
2014-07-18T15:24:38.007
2014-07-18T15:24:38.007
null
null
2583
null
777
1
782
null
2
153
New to the Data Science forum, and first poster here! This may be kind of a specific question (hopefully not too much so), but one I'd imagine others might be interested in. I'm looking for a way to basically query GitHub with something like this: ``` Give me a collection of all of the public repositories that have more than 10 stars, at least two forks, and more than three committers. ``` The result could take any viable form: a JSON data dump, a URL to the web page, etc. It more than likely will consist of information from 10,000 repos or something large. Is this sort of thing possible using the API or some other pre-built way, or am I going to have to build out my own custom solution where I try to scrape every page? If so, how feasible is this and how might I approach it?
Getting GitHub repository information by different criteria
CC BY-SA 3.0
null
2014-07-18T22:29:05.017
2014-07-19T03:42:34.433
null
null
2599
[ "bigdata", "data-mining", "python", "dataset" ]
778
1
781
null
14
17018
I read in this post [Is the R language suitable for Big Data](https://datascience.stackexchange.com/questions/41/is-the-r-language-suitable-for-big-data) that big data constitutes `5TB`, and while it does a good job of providing information about the feasibility of working with this type of data in `R` it provides very little information about `Python`. I was wondering if `Python` can work with this much data as well.
Is Python suitable for big data
CC BY-SA 3.0
null
2014-07-18T22:34:48.080
2022-08-18T09:59:32.430
null
null
890
[ "bigdata", "python" ]
779
2
null
778
2
null
I believe the language itself has little to do with performance capabilities when it comes to large data. What matters is:

- How large the data actually is
- What processing you are going to perform on it
- What hardware you are going to use
- Which specific libraries you plan to use

Anyway, Python is well adopted in data science communities.
null
CC BY-SA 3.0
null
2014-07-18T22:59:39.100
2014-07-18T22:59:39.100
null
null
2600
null
780
2
null
41
17
null
Some good answers here. I would like to join the discussion by adding the following three notes:

- The question's emphasis on the volume of data while referring to Big Data is certainly understandable and valid, especially considering the problem of data volume growth outpacing the exponential growth of technological capacities per Moore's Law (http://en.wikipedia.org/wiki/Moore%27s_law).
- Having said that, it is important to remember the other aspects of the big data concept. Based on Gartner's definition (emphasis mine - AB): "Big data is high volume, high velocity, and/or high variety information assets that require new forms of processing to enable enhanced decision making, insight discovery and process optimization." (usually referred to as the "3Vs model"). I mention this because it forces data scientists and other analysts to look for and use R packages that focus on aspects of big data other than volume (enabled by the richness of the enormous R ecosystem).
- While existing answers mention some R packages related to big data, for more comprehensive coverage I'd recommend referring to the CRAN Task View "High-Performance and Parallel Computing with R" (http://cran.r-project.org/web/views/HighPerformanceComputing.html), in particular the sections "Parallel computing: Hadoop" and "Large memory and out-of-memory data".
null
CC BY-SA 4.0
null
2014-07-19T02:19:46.530
2019-02-23T11:34:41.513
2019-02-23T11:34:41.513
23546
2452
null
781
2
null
778
18
null
To clarify, I feel like the original question referenced by the OP probably isn't best suited for an SO-type format, but I will certainly represent `python` in this particular case.

Let me just start by saying that regardless of your data size, `python` shouldn't be your limiting factor. In fact, there are just a couple of main issues that you're going to run into when dealing with large datasets:

- Reading data into memory - This is by far the most common issue faced in the world of big data. Basically, you can't read in more data than you have memory (RAM) for. The best way to fix this is by making atomic operations on your data instead of trying to read everything in at once (see the sketch after this list).
- Storing data - This is actually just another form of the earlier issue; by the time you get up to about 1TB, you have to start looking elsewhere for storage. AWS S3 is the most common resource, and python has the fantastic boto library to facilitate dealing with large pieces of data.
- Network latency - Moving data around between different services is going to be your bottleneck. There's not a huge amount you can do to fix this, other than trying to pick co-located resources and plugging into the wall.
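A minimal sketch of the "atomic operations" idea from the first point, assuming a hypothetical `huge.csv` with a numeric `value` column — pandas' `chunksize` lets you stream the file instead of loading it whole (the same pattern combines naturally with boto/boto3 for pulling the file down from S3 first):

```python
# Stream a large CSV in chunks and accumulate only a small summary,
# so memory usage stays bounded regardless of the file size.
# "huge.csv" and the "value" column are placeholder names.
import pandas as pd

total = 0.0
row_count = 0

for chunk in pd.read_csv("huge.csv", chunksize=10**6):
    total += chunk["value"].sum()
    row_count += len(chunk)

print("mean of 'value':", total / row_count)
```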
null
CC BY-SA 3.0
null
2014-07-19T03:29:02.647
2014-07-19T03:29:02.647
null
null
548
null
782
2
null
777
1
null
My limited understanding, based on briefly browsing the GitHub API documentation, is that currently there is NO single API request that supports all your listed criteria at once. However, I think that you could use the following sequence in order to achieve the goal from your example (at least, I would use this approach):

1) Request information on all public repositories (the API returns summary representations only): [https://developer.github.com/v3/repos/#list-all-public-repositories](https://developer.github.com/v3/repos/#list-all-public-repositories);

2) Loop through the list of all public repositories retrieved in step 1, requesting each individual resource, and save the results as a new (detailed) list (this returns detailed representations, in other words, all attributes): [https://developer.github.com/v3/repos/#get](https://developer.github.com/v3/repos/#get);

3) Loop through the detailed list of all repositories, filtering the corresponding fields by your criteria. For your example request, you'd be interested in the following attributes of the parent object: stargazers_count, forks_count. In order to filter the repositories by number of committers, you could use a separate API: [https://developer.github.com/v3/repos/#list-contributors](https://developer.github.com/v3/repos/#list-contributors).

Updates or comments from people more familiar with the GitHub API are welcome!
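A rough sketch of those three steps in Python with the `requests` library. The token is a placeholder you must replace, the loop only walks a handful of pages for illustration, and contributor counts are used as a stand-in for "committers"; a real run has to respect GitHub's rate limits and paginate much further:

```python
# Sketch: list public repos, fetch detailed representations, filter by the
# example criteria (>10 stars, >=2 forks, >3 contributors).
import requests

API = "https://api.github.com/"
HEADERS = {"Authorization": "token YOUR_TOKEN_HERE"}  # placeholder token

def qualifies(repo_url):
    """Fetch the detailed representation and apply the example criteria."""
    details = requests.get(repo_url, headers=HEADERS).json()
    if details.get("stargazers_count", 0) <= 10:
        return False
    if details.get("forks_count", 0) < 2:
        return False
    contributors = requests.get(repo_url + "/contributors", headers=HEADERS).json()
    return len(contributors) > 3

matches = []
since = 0
for _ in range(10):  # only a few pages here; a full crawl needs many more
    resp = requests.get(API + "repositories",
                        params={"since": since}, headers=HEADERS)
    resp.raise_for_status()
    page = resp.json()
    if not page:
        break
    for summary in page:
        if qualifies(summary["url"]):
            matches.append(summary["full_name"])
    since = page[-1]["id"]

print(matches)
```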
null
CC BY-SA 3.0
null
2014-07-19T03:42:34.433
2014-07-19T03:42:34.433
null
null
2452
null
783
1
799
null
11
1231
I want to plot the bytes from a disk image in order to understand a pattern in them. This is mainly an academic task, since I'm almost sure this pattern was created by a disk testing program, but I'd like to reverse-engineer it anyway.

I already know that the pattern is aligned, with a periodicity of 256 characters. I can envision two ways of visualizing this information: either a 16x16 plane viewed through time (3 dimensions), where each pixel's color is the ASCII code for the character, or a 256-pixel line for each period (2 dimensions).

This is a snapshot of the pattern (you can see more than one), seen through `xxd` (32x16):

![Pattern to analyze](https://i.stack.imgur.com/zOFSK.gif)

Either way, I am trying to find a way of visualizing this information. This probably isn't hard for anyone into signal analysis, but I can't seem to find a way using open-source software. I'd like to avoid Matlab or Mathematica, and I'd prefer an answer in R since I have been learning it recently, but nonetheless, any language is welcome.

---

Update, 2014-07-25: given Emre's answer below, this is what the first 30MB of the pattern looks like when aligned at 512 instead of 256 (this alignment looks better):

![Graphical pattern](https://i.stack.imgur.com/4tDIA.png)

Any further ideas are welcome!
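Since any language is welcome, here is a minimal sketch of the 2-D "one line per period" view in Python/matplotlib; the file name and the 30 MB read limit are placeholders, and the period can be switched between 256 and 512:

```python
# Read raw bytes from the image, cut them into fixed-size periods,
# and show one period per row as a grayscale heatmap.
import numpy as np
import matplotlib.pyplot as plt

period = 256                                        # or 512
data = np.fromfile("disk.img", dtype=np.uint8, count=30 * 2**20)
data = data[: len(data) // period * period]         # keep whole periods only

plt.imshow(data.reshape(-1, period), cmap="gray",
           aspect="auto", interpolation="nearest")
plt.xlabel("byte offset within period")
plt.ylabel("period index")
plt.show()
```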
Data visualization for pattern analysis (language-independent, but R preferred)
CC BY-SA 3.0
null
2014-07-19T05:27:22.773
2014-07-25T12:59:00.850
2014-07-25T03:26:52.427
2604
2604
[ "r", "visualization" ]
784
1
821
null
6
1057
I have a time series with hourly gas consumption. I want to use [ARMA](http://en.wikipedia.org/wiki/Autoregressive%E2%80%93moving-average_model)/[ARIMA](http://en.wikipedia.org/wiki/Autoregressive_integrated_moving_average) to forecast the consumption in the next hour, based on the previous ones. Why should I analyze/find the seasonality (e.g. with [Seasonal and Trend decomposition using Loess](https://www.otexts.org/fpp/6/5) (STL))? ![enter image description here](https://i.stack.imgur.com/hYyH8.png)
Why should I care about seasonal data when I forecast?
CC BY-SA 3.0
null
2014-07-19T18:31:36.573
2015-07-20T21:51:46.383
2014-07-20T19:34:08.117
84
989
[ "machine-learning", "time-series" ]
785
2
null
783
1
null
I know almost nothing about signal analysis, but a 2-dimensional visualization could easily be done using R. In particular, you will need the `reshape2` and `ggplot2` packages. Assuming your data is wide (e.g. of size [n x 256]), first you need to transform it to [long](http://www.cookbook-r.com/Manipulating_data/Converting_data_between_wide_and_long_format/) format using the `melt()` function from the `reshape2` package. Then use the [geom_tile](http://docs.ggplot2.org/current/geom_tile.html) geometry from `ggplot2`. Here is a nice [recipe](http://www.r-bloggers.com/simplest-possible-heatmap-with-ggplot2/) with a [gist](https://gist.github.com/dsparks/3710171).
null
CC BY-SA 3.0
null
2014-07-19T18:47:38.787
2014-07-19T18:47:38.787
null
null
941
null
786
1
789
null
10
1255
I am trying to find stock data to practice with; is there a good resource for this? I found [this](ftp://emi.nasdaq.com/ITCH/), but it only has the current year. I already have a way of parsing the protocol, but would like to have some more data to compare with. It doesn't have to be in the same format, as long as it has price, trade, and date statistics.
NASDAQ Trade Data
CC BY-SA 4.0
null
2014-07-19T20:46:52.740
2020-08-16T18:02:33.567
2020-08-16T18:02:33.567
98307
2567
[ "data-mining", "dataset" ]