Id | PostTypeId | AcceptedAnswerId | ParentId | Score | ViewCount | Body | Title | ContentLicense | FavoriteCount | CreationDate | LastActivityDate | LastEditDate | LastEditorUserId | OwnerUserId | Tags |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1118
|
2
| null |
1110
|
5
| null |
There are several approaches. You can start from the second one.
Equal-width (distance) partitioning:
- It divides the range into N intervals of equal size: uniform grid
- if A and B are the lowest and highest values of the attribute, the width of intervals will be: W = (B-A)/N.
- The most straightforward
- Outliers may dominate presentation
- Skewed data is not handled well.
Equal-depth (frequency) partitioning:
- It divides the range into
N intervals, each containing approximately
same number of samples
- Good data scaling
- Managing categorical attributes can be tricky.
Other Methods
- Rank: The rank of a number is its size relative to the other values of a numerical variable. First, we sort the list of values, then we assign the position of a value as its rank. Equal values receive the same rank, but the presence of duplicate values affects the ranks of subsequent values (e.g., 1, 2, 3, 3, 5). Rank is a solid binning method with one major drawback: a value can have different ranks in different lists.
- Quantiles (median, quartiles, percentiles, ...): Quantiles are also very useful binning methods, but like rank, a value can fall in a different quantile if the list of values changes.
- Math functions: For example, logarithmic binning is an effective method for numerical variables with highly skewed distributions (e.g., income).
Entropy-based Binning
[Entropy based method](http://www.saedsayad.com/supervised_binning.htm) uses a split approach. The entropy (or the information content) is calculated based on the class label. Intuitively, it finds the best split so that the bins are as pure as possible, that is, the majority of the values in a bin have the same class label. Formally, it is characterized by finding the split with the maximal information gain.
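As a minimal sketch of the first three ideas (equal-width, equal-depth and logarithmic binning), assuming pandas and a synthetic, income-like variable:
```python
# Equal-width vs. equal-depth vs. logarithmic binning on a skewed variable.
# The data is synthetic; only the binning calls matter here.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
income = pd.Series(rng.lognormal(mean=10, sigma=1, size=1000))   # highly skewed

equal_width = pd.cut(income, bins=4)        # W = (B - A) / N; outliers stretch the bins
equal_depth = pd.qcut(income, q=4)          # ~same number of samples per bin (quartiles)
log_bins = pd.cut(np.log(income), bins=4)   # log binning handles the skew much better

print(equal_width.value_counts().sort_index())
print(equal_depth.value_counts().sort_index())
print(log_bins.value_counts().sort_index())
```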
| null |
CC BY-SA 3.0
| null |
2014-09-15T13:03:44.083
|
2014-09-15T13:03:44.083
| null | null |
97
| null |
1119
|
1
| null | null |
5
|
3779
|
[RFM](http://en.wikipedia.org/wiki/RFM_(customer_value)) is a ranking model in which all customers are ranked according to their purchase Recency, Frequency and Monetary value. This indicator is widely used by the marketing departments of various organizations to segment customers into groups according to customer value.
The question is the following: are there any substantial models based on (or related to) RFM scoring which have solid predictive power?
Update:
- predicting which customer will most likely spend more
- who is going to upgrade/renew a subscription, ask for a refund, etc.
Update2:
- I understand that this is a simple problem with three independent variables and one classifier. My guess and experience say that these three factors alone do not predict future customer value, but they can be used together with other data or as an additional input into some model.
- Please share which methodologies worked for you personally and are likely to have high predictive ability. What kind of data did you use together with the RFM indicators, and what worked well?
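For concreteness, a minimal sketch of how the R, F and M scores are typically derived before any modeling; the column names and the 3-level score bins here are illustrative assumptions only:
```python
# Hypothetical transactions table -> per-customer RFM scores (1-3, higher = better).
import pandas as pd

tx = pd.DataFrame({
    "customer_id": [1, 1, 2, 3, 3, 3],
    "date": pd.to_datetime(["2014-01-05", "2014-08-01", "2014-03-10",
                            "2014-07-01", "2014-07-15", "2014-09-01"]),
    "amount": [20.0, 35.0, 10.0, 50.0, 5.0, 80.0],
})
now = pd.Timestamp("2014-09-15")

rfm = tx.groupby("customer_id").agg(
    recency=("date", lambda d: (now - d.max()).days),
    frequency=("date", "count"),
    monetary=("amount", "sum"),
)
rfm["r_score"] = pd.qcut(rfm["recency"], 3, labels=[3, 2, 1]).astype(int)   # recent = high score
rfm["f_score"] = pd.qcut(rfm["frequency"].rank(method="first"), 3, labels=[1, 2, 3]).astype(int)
rfm["m_score"] = pd.qcut(rfm["monetary"], 3, labels=[1, 2, 3]).astype(int)
print(rfm)   # these three scores become the model features discussed above
```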
|
Predictive modeling based on RFM scoring indicators
|
CC BY-SA 3.0
| null |
2014-09-15T13:14:40.797
|
2017-09-21T14:42:59.557
|
2014-09-23T09:11:21.413
|
97
|
97
|
[
"marketing",
"predictive-modeling"
] |
1120
|
2
| null |
1108
|
3
| null |
This may provide some answer: [https://cran.r-project.org/web/packages/caret/vignettes/caret.html](https://cran.r-project.org/web/packages/caret/vignettes/caret.html)
You may also check out Max Kuhn's "Applied Predictive Modeling" book. He talks about the caret package at length in this book, including the kappa statistics and how to use it. This may be of some help to you.
| null |
CC BY-SA 4.0
| null |
2014-09-15T18:14:16.180
|
2020-04-01T21:11:58.403
|
2020-04-01T21:11:58.403
|
41411
|
3314
| null |
1121
|
2
| null |
1108
|
9
| null |
The Kappa is Cohen's Kappa score for inter-rater agreement. It's a commonly used metric for evaluating the performance of machine learning algorithms and human annotators, particularly when dealing with text/linguistics.
What it does is compare the level of agreement between the output of the (human or algorithmic) annotator and the ground-truth labels to the level of agreement that would occur through random chance. There's a very good overview of how to calculate Kappa and use it to evaluate a classifier in this stats.stackexchange.com answer [here](https://stats.stackexchange.com/questions/82162/kappa-statistic-in-plain-english), and a more in-depth explanation of Kappa and how to interpret it in [this paper](http://www1.cs.columbia.edu/~julia/courses/CS6998/Interrater_agreement.Kappa_statistic.pdf), entitled "Understanding Interobserver Agreement: The Kappa Statistic" by Viera & Garrett (2005).
The benefit of using Kappa, particularly on an unbalanced data set like yours, is that with a 90%-10% imbalance between the classes you can achieve 90% accuracy simply by labeling every data point with the more commonly occurring class; the Kappa statistic describes how well the classifier performs above that baseline level of performance.
Kappa ranges from -1 to 1, with 0 indicating no agreement between the raters, 1 indicating a perfect agreement, and negative numbers indicating systematic disagreement. While interpretation is somewhat arbitrary (and very task-dependent), Landis & Koch (1977) defined the following interpretation system which can work as a general rule of thumb:
```
Kappa Agreement
< 0 Less than chance agreement
0.01–0.20 Slight agreement
0.21–0.40 Fair agreement
0.41–0.60 Moderate agreement
0.61–0.80 Substantial agreement
0.81–0.99 Almost perfect agreement
```
This would indicate that your algorithm is performing moderately well. Accuracy SD and Kappa SD are the respective standard deviations of the Accuracy and Kappa scores. I hope this is helpful!
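A minimal sketch of the point about the baseline, using scikit-learn's `cohen_kappa_score` on a made-up 90/10 problem:
```python
from sklearn.metrics import accuracy_score, cohen_kappa_score

y_true = [0] * 90 + [1] * 10                         # 90%-10% class imbalance
y_majority = [0] * 100                               # always predict the majority class
y_better = [0] * 85 + [1] * 5 + [1] * 7 + [0] * 3    # makes some genuine positive calls

print(accuracy_score(y_true, y_majority), cohen_kappa_score(y_true, y_majority))  # 0.90, 0.0
print(accuracy_score(y_true, y_better), cohen_kappa_score(y_true, y_better))      # 0.92, ~0.59
```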
| null |
CC BY-SA 3.0
| null |
2014-09-15T20:29:16.620
|
2014-09-15T20:29:16.620
|
2017-04-13T12:44:20.183
|
-1
|
2969
| null |
1122
|
2
| null |
1119
|
3
| null |
A Google search leads to many relevant resources that answer your question:
- USING RFM DATA TO OPTIMIZE DIRECT MARKETING CAMPAIGNS: A LINEAR PROGRAMMING APPROACH http://www.thefreelibrary.com/Using+RFM+data+to+optimize+direct+marketing+campaigns%3A+a+linear...-a0272246211
- Data Mining using RFM Analysis http://cdn.intechweb.org/pdfs/13162.pdf
- Libby on Recency, Frequency, Monetary Value (book) http://www.amazon.com/Recency-Frequency-Monetary-Century-Library/dp/1882222067
From a data science point of view, there is nothing very special or unique about this problem. You have three independent variables and one dependent variable. Regression, clustering, and classification methods can be applied.
| null |
CC BY-SA 3.0
| null |
2014-09-15T22:43:37.583
|
2014-09-15T22:43:37.583
| null | null |
609
| null |
1123
|
1
|
1126
| null |
3
|
4861
|
Suppose I am interested in classifying a set of instances composed by different content types, e.g.:
- a piece of text
- an image
as `relevant` or `non-relevant` for a specific class `C`.
In my classification process I perform the following steps:
- Given a sample, I subdivide it into text and image
- A first SVM binary classifier (SVM-text), trained only on text, classifies the text as relevant/non-relevant for the class C
- A second SVM binary classifier (SVM-image), trained only on images, classifies the image as relevant/non-relevant for the class C
Both `SVM-text` and `SVM-image` produce an estimate of the probability of the analyzed content (text or image) of being relevant for the class `C`. Given this, I am able to state whether the text is relevant for `C` and the image is relevant for `C`.
However, these estimates are valid for segments of the original sample (either the text or the image), while it is not clear how to obtain a general opinion on the whole original sample (text+image). How can I conveniently combine the opinions of the two classifiers so as to obtain a classification for the whole original sample?
|
Combine multiple classifiers to build a multi-modal classifier
|
CC BY-SA 3.0
| null |
2014-09-16T08:01:35.997
|
2014-09-16T12:32:10.130
| null | null |
3321
|
[
"classification",
"svm"
] |
1124
|
1
|
1138
| null |
9
|
2536
|
I am working on a text classification problem using Random Forest as classifiers, and a bag-of-words approach.
I am using the basic implementation of Random Forests (the one present in scikit), which creates a binary condition on a single variable at each split. Given this, is there a difference between using simple tf (term frequency) features, where each word has an associated weight that represents the number of occurrences in the document, and tf-idf (term frequency * inverse document frequency) features, where the term frequency is also multiplied by a value that represents the ratio between the total number of documents and the number of documents containing the word?
In my opinion, there should not be any difference between these two approaches, because the only difference is a scaling factor on each feature, but since the split is done at the level of single features this should not make a difference.
Am I right in my reasoning?
|
Difference between tf-idf and tf with Random Forests
|
CC BY-SA 3.0
| null |
2014-09-16T08:14:06.307
|
2014-09-18T13:14:37.940
| null | null |
3054
|
[
"classification",
"text-mining",
"random-forest"
] |
1125
|
1
| null | null |
5
|
226
|
I am interested in the field of named entity disambiguation and want to learn more about it. I have heard that there are contests organised by various associations on these kinds of research topics. These contests are very helpful as they give practical experience in these fields.
I found one such contest organised by Microsoft Research [here](http://web-ngram.research.microsoft.com/erd2014/), though the dates have already passed. Can anyone point me to any other such contests? Also, is there a site which catalogues these contests so that one can just go there and learn about all upcoming contests?
Thanks in advance.
|
Named entity disambiguation contests
|
CC BY-SA 3.0
| null |
2014-09-16T11:41:48.140
|
2017-05-19T16:12:17.180
|
2017-05-19T16:12:17.180
|
21
|
3325
|
[
"nlp"
] |
1126
|
2
| null |
1123
|
2
| null |
Basically, you can do one of two things:
- Combine features from both classifiers. I.e., instead of SVM-text and SVM-image you may train a single SVM that uses both textual and visual features.
- Use ensemble learning. If you already have probabilities from separate classifiers, you can simply use them as weights and compute a weighted average. For more sophisticated cases there are Bayesian combiners (each classifier has its prior), boosting algorithms (e.g. see AdaBoost) and others.
Note that ensembles were initially created for combining different learners, not different sets of features. In this latter case, ensembles have an advantage mostly when different kinds of features just can't be combined into a single vector efficiently. But in general, combining features is simpler and more straightforward.
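A minimal sketch of the weighted-average combiner from the second bullet; the probabilities and weights below are made up, and the weights could for instance be proportional to each classifier's validation accuracy:
```python
import numpy as np

p_text  = np.array([0.9, 0.2, 0.6])    # P(relevant | text)  from SVM-text, one value per sample
p_image = np.array([0.7, 0.4, 0.3])    # P(relevant | image) from SVM-image

w_text, w_image = 0.6, 0.4             # illustrative weights, e.g. from validation accuracy
p_combined = w_text * p_text + w_image * p_image
labels = (p_combined >= 0.5).astype(int)
print(p_combined, labels)
```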
| null |
CC BY-SA 3.0
| null |
2014-09-16T12:32:10.130
|
2014-09-16T12:32:10.130
| null | null |
1279
| null |
1127
|
2
| null |
310
|
6
| null |
One class learning
I wouldn't be too quick to throw out one-class classification methods (option 2) - the key is to model the positive (minority) class with the one-class model.
There has been research demonstrating cases where one-class classification outperformed other approaches, such as sampling, for highly imbalanced data, as is often seen with protein classification tasks.
I couldn't find the research I recalled, but I did find some other comparisons showing that one-class classifiers (typically modeling the minority class) achieved performance as good as or better than binary classification, typically with "negatives" sampled from the large set of proteins not known to be positive.
Additionally, this approach gives the advantage of much improved run-time, since you only need to train the classifier on the smaller, positive set. A couple of papers:
["Prediction of protein-protein interactions using one-class classification methods and integrating diverse biological data"](http://journal.imbio.de/articles/pdf/jib-77.pdf)
["A One-Class Classification Approach for Protein Sequences and Structures"](http://www.inf.u-szeged.hu/~busarobi/PDFs/BaBuKe09.pdf)
At the very least I would try some one-class methods and compare the performance using validation with your binary/multi-class classification approaches. There are also open source implementations for many of these so it shouldn't be too costly to try them out, for example LibSVM has a one-class SVM implementation. Additionally, it might prove valuable for use in an ensemble with binary classifiers, since there may be more disagreement in their predictions.
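As a minimal sketch of what such a one-class baseline looks like with scikit-learn's `OneClassSVM` (synthetic data, fit on the positive class only):
```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(42)
X_pos = rng.normal(loc=0.0, scale=1.0, size=(200, 10))    # known positives only
X_near = rng.normal(loc=0.0, scale=1.0, size=(5, 10))     # plausible positives
X_far = rng.normal(loc=5.0, scale=1.0, size=(5, 10))      # unlikely to be positive

clf = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05).fit(X_pos)
print(clf.predict(X_near))   # +1 = looks like the positive class
print(clf.predict(X_far))    # mostly -1 (outliers)
```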
Higher level representation embedding / clustering
Along the lines of what you were thinking with (1) and the other post suggesting PCA, approaches like clustering, sparse coding, or even topic modeling - treating each protein as a document string and different protein families as different topics - could yield a representation that might make classifying the proteins straightforward.
I.e., you could identify which group/cluster a protein belongs to or classify the cluster memberships / embedded representations.
E.g., embedding approaches such as sparse coding can yield representations that reveal which cluster a protein belongs to - so that some sets of features are only active (non-zero) for proteins in the same cluster - which can make classifying them much easier.
Additionally class labels or known cluster membership can be incorporated in the embedding process for most methods.
Ensemble
Ensembles of multiple classifiers tend to work best - especially when the classifiers are very diverse and can achieve comparable performance individually.
There are at least two ways to use ensembles for this problem.
- You can build an ensemble of binary classifiers by sampling multiple different same-size negative sets and training a classifier on each.
- You can build an ensemble from different approaches, such as binary classifiers with different negative samples, combined with a one-class classification approach, combined with classification models trained on the embedded data.
| null |
CC BY-SA 3.0
| null |
2014-09-16T14:29:41.520
|
2014-09-16T14:29:41.520
| null | null |
3326
| null |
1128
|
1
| null | null |
6
|
545
|
I am looking for information on (formal) algebraic systems that can be used to transform time-series - in either a practical or academic context.
I hope that there exists (at least one) small, expressive, set of operators - ranging over (finite) time-series. I want to compare and contrast different systems with respect to algebraic completeness, and brevity of representation, of common time-series transformations in various domains.
I realise this question is broad - but hope it is not too vague for datascience.stackexchange. I welcome any pointers to relevant literature for specific scenarios, or the general subject.
Edit... (Attempt to better explain what I meant by an algebraic system...)
I was thinking about "abstract algebras" as discussed in Wikipedia:
[http://en.wikipedia.org/wiki/Algebra#Abstract_algebra](http://en.wikipedia.org/wiki/Algebra#Abstract_algebra)
[http://en.wikipedia.org/wiki/Abstract_algebra#Basic_concepts](http://en.wikipedia.org/wiki/Abstract_algebra#Basic_concepts)
Boolean algebras are (very simple) algebras that range over Boolean values. A simple example of such an algebra would consist of the values True and False and the operations AND, OR and NOT. One might argue this algebra is 'complete' as, from these two constants (free variables) and three basic operations, arbitrary Boolean functions can be constructed/described.
I am interested to discover algebras where the values are (time-domain) time series. I'd like it to be possible to construct "arbitrary" functions that map time series to time series from a few operations which, individually, map time series to time series. I am open to liberal interpretations of "arbitrary". I would be especially interested in examples of these algebras where the operations consist of 'higher-order functions' - where such operations have been developed for a specific domain.
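As a toy illustration of the kind of closed operator set I have in mind (purely illustrative Python, not one of the formal systems I'm asking about), with composition as the higher-order operation:
```python
# A tiny set of operators that each map a finite time series to a time series,
# plus function composition as the higher-order "glue".
from functools import reduce

def shift(k):            # delay by k steps, padding with the first value
    return lambda ts: ts[:1] * k + ts[:-k] if k else ts

def scale(a):            # pointwise scaling
    return lambda ts: [a * x for x in ts]

def diff():              # first difference (one element shorter)
    return lambda ts: [b - a for a, b in zip(ts, ts[1:])]

def compose(*ops):       # higher-order operator: apply ops left to right
    return lambda ts: reduce(lambda acc, op: op(acc), ops, ts)

ts = [1.0, 2.0, 4.0, 7.0, 11.0]
detrend_and_lag = compose(diff(), scale(2.0), shift(1))
print(detrend_and_lag(ts))
```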
|
Looking for algebras designed to transform time series
|
CC BY-SA 3.0
| null |
2014-09-16T15:47:17.430
|
2015-08-09T10:44:16.053
|
2014-09-17T13:34:51.830
|
3328
|
3328
|
[
"machine-learning",
"algorithms",
"time-series"
] |
1129
|
2
| null |
403
|
2
| null |
It's probably due to the [effect coding](http://www.ats.ucla.edu/stat/sas/seminars/sas_logistic/logistic1.htm) of categorical predictors. E.g., the regression coefficient for CPR = 7 is not zero but minus the sum of the regression coefficients for the other 6 levels. I guess EM should have an option to switch it to reference coding, in which case your way of computing the predicted probability should work.
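A minimal numeric sketch of that point, with made-up coefficients: under effect (sum) coding the coefficients of a categorical predictor sum to zero, so the level that appears to be "missing" is implied by the others.
```python
import numpy as np

reported = np.array([0.40, -0.10, 0.25, -0.30, 0.10, 0.50])   # made-up coefficients, CPR levels 1..6
cpr7 = -reported.sum()                                         # implied coefficient for CPR = 7
print(cpr7)                                                    # -0.85
```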
| null |
CC BY-SA 3.0
| null |
2014-09-16T16:37:33.357
|
2014-09-16T16:37:33.357
| null | null |
3315
| null |
1131
|
2
| null |
1128
|
0
| null |
The most direct and obvious transformation is from time domain to [frequency domain](http://en.wikipedia.org/wiki/Frequency_domain). Possible methods include [Fourier transform](http://en.wikipedia.org/wiki/Fourier_transform) and [wavelet transform](http://en.wikipedia.org/wiki/Wavelet_transform). After the transform the signal is represented by a function of frequency-domain elements which can be operated on using ordinary algebra.
It's also possible to model a time series as a trajectory of a dynamical system in a state space (see: [http://onlinelibrary.wiley.com/doi/10.1111/j.1467-9892.1980.tb00300.x/abstract](http://onlinelibrary.wiley.com/doi/10.1111/j.1467-9892.1980.tb00300.x/abstract) and [http://www3.stat.sinica.edu.tw/statistica/oldpdf/A2n16.pdf](http://www3.stat.sinica.edu.tw/statistica/oldpdf/A2n16.pdf)). Dynamical systems can be modeled symbolically at a coarse-grained level (see: [http://en.wikipedia.org/wiki/Symbolic_dynamics](http://en.wikipedia.org/wiki/Symbolic_dynamics) and [http://www.math.washington.edu/SymbolicDynamics/](http://www.math.washington.edu/SymbolicDynamics/)). Symbolic dynamics draws on linear algebra.
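A minimal sketch of the frequency-domain route with numpy's FFT: transform, operate algebraically on the coefficients (here a crude low-pass filter), and transform back. The signal is synthetic.
```python
import numpy as np

t = np.linspace(0.0, 1.0, 256, endpoint=False)
x = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)   # 5 Hz + 40 Hz components

X = np.fft.rfft(x)                      # frequency-domain representation
freqs = np.fft.rfftfreq(x.size, d=t[1] - t[0])
X[freqs > 10] = 0                       # ordinary algebra on the coefficients: low-pass filter
x_filtered = np.fft.irfft(X, n=x.size)  # back to the time domain, 40 Hz component removed
print(np.allclose(x_filtered, np.sin(2 * np.pi * 5 * t), atol=0.1))
```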
| null |
CC BY-SA 3.0
| null |
2014-09-17T00:44:19.967
|
2014-09-17T00:44:19.967
| null | null |
609
| null |
1132
|
1
| null | null |
1
|
1625
|
I am a research scholar in data mining. I'm interested in a C# implementation of the K-Means clustering algorithm for mixed numeric and categorical data.
|
K-Means clustering for mixed numeric and categorical data implementation in C#
|
CC BY-SA 3.0
| null |
2014-09-17T04:49:39.770
|
2014-11-18T12:55:21.787
|
2014-09-19T12:15:42.527
|
97
|
3336
|
[
"clustering",
"k-means",
"beginner",
"library"
] |
1133
|
2
| null |
1132
|
1
| null |
The solution is described here: [K-Means clustering for mixed numeric and categorical data](https://datascience.stackexchange.com/questions/22/k-means-clustering-for-mixed-numeric-and-categorical-data)
C# implementation can be found in ALGLIB library, which I strongly recommend: [http://www.alglib.net/translator/man/manual.csharp.html#gs_packages](http://www.alglib.net/translator/man/manual.csharp.html#gs_packages)
| null |
CC BY-SA 3.0
| null |
2014-09-17T10:16:07.100
|
2014-09-17T10:16:07.100
|
2017-04-13T12:50:41.230
|
-1
|
97
| null |
1135
|
1
| null | null |
6
|
346
|
I have a rather large commute every day - it ranges between about an hour and about an hour and half of driving.
I have been tracking my driving times, and want to continue to do so. I am capturing the date, my time of departure, my time of arrival, the route I took (there are two or three possible ones), weather conditions (wet/dry and clear/hazy/foggy), and whether I stopped (and if so, for what reason - fuel/toilet break/food break, and for how long) for every journey to and from work.
I would like to create a system to analyse this data and suggest an optimal departure time (for the next journey) based on day of the week, weather conditions, and whether I need to stop.
Anecdotally, I can see that Tuesday mornings are worse than other mornings, the earlier I leave the more likely I am to take a toilet break or a food break, and obviously that the journey takes longer on rainy or foggy days than on clear and dry days - but I would like the system to empirically tell me that!
I assume this is a machine-learning and statistical analysis problem.
However, I have absolutely no knowledge of machine-learning, or statistical methods.
What statistical methods should I use to do this kind of analysis to the point where the data will lead to suggestions like "tomorrow is Tuesday and it is going to rain, so you must leave home between 7.50 and 8.00, and take route XYZ, to get the optimal driving time. Oh and chances are you will need a toilet break - and I have factored that in"? (assume that I manually enter tomorrow’s weather forecast - I’ll look into integrating with a weather service later)
Note that this is life-hacking for me, trying to optimise the hell out of a tedious process, and it is very personal - specific to me and my habits, specific to this route, and specific to the morning/evening commute times. Google Maps with Traffic, TomTom with IQ, and Waze do very well in the more open-ended situations of ad-hoc driving-time prediction. Even Apple is happy to tell me on my iPhone notification screen how long it will take me to get home if I leave right now.
Also note, it appears to me that traffic is not a consideration - that is to say, I do not think I need to know the actual traffic conditions - traffic is a function of day of the week and weather. For example, there are more people on the roads on Monday and Tuesday mornings, and people drive more slowly, and more people are in cars (opting to drive instead of cycle or take public transport) when it rains.
To what extent can I let the data do all the talking? I have a somewhat ambiguous hidden agenda which may not be apparent from the data;
- I should be at work at 9.30 (i.e., 9.15 +/- 15 minutes) every day, but the occasional 10am arrival is OK
- I want to leave home as late as possible, and yet arrive at work as early as possible
- I want to leave work as early as possible, and yet have done at least 8 hours’ work
- it is OK for me to, say, leave half an hour early on one day but stay late on another to compensate
I think I can come up with a procedural formula that can encompass all of these rules, but my gut feeling is that statistical analysis can make it a lot smarter.
Apart from the methods of analysis, the technology stack is not an issue. Java is my language of choice - I am quite familiar with programming in it, and in creating web applications.
Assuming that it is possible, are there Java libraries that can provide the requisite methods?
What limitations are there? I want to keep capturing more and more data every day, making the data set bigger, hopefully, making the prediction more accurate.
What other ways are there to do it? Can I push this data into, say, Wolfram Programming Cloud, or maybe something Google provides to get the desired results?
|
Statistical Commute Analysis in Java
|
CC BY-SA 3.0
| null |
2014-09-17T13:13:59.147
|
2014-10-30T14:59:13.243
| null | null |
3343
|
[
"machine-learning",
"statistics"
] |
1136
|
2
| null |
1135
|
2
| null |
You do want to model the traffic, at least over a work day, otherwise it wouldn't matter what time you traveled! Absent any data, I'd assume there isn't much variance over the working week, but that's one thing the data will quickly confirm or refute. If it is varying, you can use a different model for each day.
You have two variables; the departure times from home and work, respectively. Let's call them t_h and t_w. Let's call the commute time T_c(t), where t is the time of day. You can estimate this function from the data, so I'll assume it is given.
You want to maximize c t_h - (1-c) t_w subject to the constraints t_h + T_c(t_h) < 9.5 and t_w > t_h + T_c(t_h) + 8
where c is a constant you can set to adjust the relative importance of leaving home late versus leaving work early. You should be able to solve this numerical optimization problem with Mathematica, MATLAB, or something similar. I would not recommend Java; it's not meant for this. The only tricky part is estimating T_c. You know that it's a non-negative function, so you could use the standard trick of estimating its logarithm (say, with kernels) and exponentiating. For implementation with Mathematica see [Smoothing Data, Filling Missing Data, and Nonparametric Fitting](http://reference.wolfram.com/language/ref/InterpolatingFunction.html) and [Constrained Optimization](http://reference.wolfram.com/language/tutorial/ConstrainedOptimizationOverview.html).
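A minimal sketch of that optimization with scipy; the commute-time function T_c below is a made-up stand-in for one estimated from your data, and times are in decimal hours:
```python
import numpy as np
from scipy.optimize import minimize

def T_c(t):                      # commute duration (hours) as a function of departure time
    return 1.0 + 0.4 * np.exp(-((t - 8.5) ** 2) / 0.5)   # worst around 8:30 (illustrative)

c = 0.5                          # relative importance of leaving home late vs. leaving work early

def objective(v):                # scipy minimizes, so negate c*t_h - (1-c)*t_w
    t_h, t_w = v
    return -(c * t_h - (1 - c) * t_w)

constraints = [
    {"type": "ineq", "fun": lambda v: 9.5 - (v[0] + T_c(v[0]))},         # at work by 9:30
    {"type": "ineq", "fun": lambda v: v[1] - (v[0] + T_c(v[0]) + 8.0)},  # at least 8 hours at work
]
res = minimize(objective, x0=[7.5, 17.5], method="SLSQP", constraints=constraints)
print(res.x)    # suggested departure times from home and from work
```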
| null |
CC BY-SA 3.0
| null |
2014-09-18T01:39:30.063
|
2014-09-19T20:17:35.690
|
2014-09-19T20:17:35.690
|
381
|
381
| null |
1137
|
1
| null | null |
7
|
3345
|
I have time series data from mobile sensors for different motions such as walking, pushups, dumbellifts, rowing and so on. All these motions have different length of time series. For classifying them using [Dynamic Time Warping (DTW)](http://en.wikipedia.org/wiki/Dynamic_time_warping), how do I choose an appropriate window size that will give good results?
|
Choosing a window size for DTW
|
CC BY-SA 3.0
| null |
2014-09-18T06:22:13.940
|
2015-07-18T09:41:58.270
|
2014-09-19T12:22:10.130
|
84
|
2475
|
[
"classification",
"time-series",
"parameter"
] |
1138
|
2
| null |
1124
|
8
| null |
Decision trees (and hence Random Forests) are insensitive to monotone transformations of input features.
Since multiplying by the same factor is a monotone transformation, I'd assume that for Random Forests there indeed is no difference.
However, you may eventually consider using other classifiers that do not have this property, so it may still make sense to keep the full TF * IDF weighting.
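A quick way to convince yourself is to run both representations through the same forest; a minimal sketch on a tiny made-up corpus follows. Note that `TfidfTransformer`'s default per-document L2 normalization is not a per-feature rescaling, so `norm=None` is used here to match the pure TF * IDF weighting discussed above.
```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.model_selection import cross_val_score

docs = ["cheap pills buy now", "buy cheap watches now", "meeting agenda attached",
        "project meeting notes", "cheap offer buy today", "agenda for the team meeting"]
labels = [1, 1, 0, 0, 1, 0]

counts = CountVectorizer().fit_transform(docs)             # raw term frequencies
tfidf = TfidfTransformer(norm=None).fit_transform(counts)  # tf * idf, a per-feature rescaling

rf = RandomForestClassifier(n_estimators=100, random_state=0)
print(cross_val_score(rf, counts, labels, cv=3).mean())
print(cross_val_score(rf, tfidf, labels, cv=3).mean())     # same splits, same scores
```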
| null |
CC BY-SA 3.0
| null |
2014-09-18T13:14:37.940
|
2014-09-18T13:14:37.940
| null | null |
816
| null |
1139
|
2
| null |
1125
|
1
| null |
Some of the GREC shared task challenges included a named entity recognition & coreference resolution component (i.e., disambiguation), but I don't think they've run GREC since 2010...?
[https://sites.google.com/site/genchalrepository/reg-in-context/grec-ner](https://sites.google.com/site/genchalrepository/reg-in-context/grec-ner)
| null |
CC BY-SA 3.0
| null |
2014-09-18T13:51:58.473
|
2014-09-18T13:51:58.473
| null | null |
819
| null |
1140
|
2
| null |
1106
|
4
| null |
- pre-process your documents (some of the steps may be skipped)
tokenize
remove stop words
stem or lemmatize
do normalization (e.g. U.S.A. -> USA, météo -> meteo, etc) and orthographic correction
perform phonetic normalization (e.g. with Soundex)
find equivalence classes (using thesauri, e.g. WordNet)
- use a Vector Space model to represent documents (you may use TF, aforementioned TF-IDF or other models)
- do the same with the query: preprocess and represent it in the vector space
- find the most similar documents by computing the vector similarity (e.g. using the cosine similarity)
That's an outline of the [Information Retrieval](http://en.wikipedia.org/wiki/Information_retrieval) process.
[Introduction to Information Retrieval](http://nlp.stanford.edu/IR-book/) by Christopher D. Manning, Prabhakar Raghavan and Hinrich Schütze is a very good book to get started in IR.
---
Or just use [Apache Solr](http://lucene.apache.org/solr/) to get everything you need out of the box (or [Apache Lucene](http://lucene.apache.org/core/), that is used by Solr, to build your own application)
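A minimal sketch of steps 2-4 with scikit-learn (the documents and the query are made up for illustration):
```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = ["the weather in paris is sunny",
        "machine learning for text retrieval",
        "tf idf weighting in information retrieval"]
query = ["text retrieval with tf idf"]

vectorizer = TfidfVectorizer(stop_words="english")   # tokenization + stop-word removal
doc_vectors = vectorizer.fit_transform(docs)
query_vector = vectorizer.transform(query)           # represent the query in the same vector space

scores = cosine_similarity(query_vector, doc_vectors).ravel()
print(sorted(zip(scores, docs), reverse=True))       # most similar documents first
```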
| null |
CC BY-SA 3.0
| null |
2014-09-18T19:56:25.827
|
2014-09-18T19:56:25.827
| null | null |
816
| null |
1141
|
1
| null | null |
8
|
609
|
I am looking to build my career in the area of decision science or predictive modeling. I am aware that this is somewhat opinion-based, but I would like some suggestions from experts that I can use to build my career on the correct path. What tools should I know - R, SAS, or any others? What things should I know to work in data science, machine learning, or predictive modeling? I am having trouble identifying the steps that I should follow. Please suggest some steps to follow.
|
Some suggestion for career in data science or predictive modeling
|
CC BY-SA 3.0
| null |
2014-09-19T02:34:04.093
|
2014-10-19T13:24:36.110
|
2014-09-19T12:19:25.697
|
97
|
3057
|
[
"education",
"beginner",
"career"
] |
1142
|
2
| null |
1141
|
5
| null |
First of all, I should say your question is probably off-topic and will be closed soon.
Discussed at this SE site
Anyway, I can point you to similar questions already discussed at this SE site:
- Statistics + Computer Science = Data Science?
- Starting my career as Data Scientist, is Software Engineering experience required?
Cross Validated SE
A set of relevant questions at Cross Validated Stack Exchange:
- https://stats.stackexchange.com/questions/tagged/careers
Data scientist map
This is a good infographic of the data science knowledge you might need to start a career ([Link to image](http://pennlio.files.wordpress.com/2013/09/roadtodatascientist1.png)):

Careers SE
Also, a simple "data scientist" query on the Careers SE site
[http://careers.stackoverflow.com/jobs?searchTerm=data+scientist&location=](http://careers.stackoverflow.com/jobs?searchTerm=data+scientist&location=) will lead you to the following knowledge "tags":
- R
- bigdata
- data-visualization
- hadoop
- mapreduce
- scala
- python
- matlab
etc.
| null |
CC BY-SA 3.0
| null |
2014-09-19T07:15:13.803
|
2014-09-19T09:04:24.007
|
2017-04-13T12:50:41.230
|
-1
|
97
| null |
1143
|
1
| null | null |
14
|
759
|
Recently I was introduced to the field of data science (it's been approximately 6 months), and I started the journey with the Machine Learning course by Andrew Ng; after that I started working on the Data Science Specialization by JHU.
On the practical application front, I have been working on building a predictive model that would predict attrition. So far I have used glm, bayesglm, and rf in an effort to learn and apply these methods, but I find a lot of gaps in my understanding of these algorithms.
My basic dilemma is:
Whether I should focus more on learning the intricacies of a few algorithms or should I use the approach of knowing a lot of them as and when and as much as required?
Please guide me in the right direction, maybe by suggesting books or articles or anything that you think would help.
I would be grateful if you would reply with an idea of how to guide someone who has just started his career in the field of data science and wants to be a person who solves practical issues for the business world.
I will read as many of the resources (books, articles) suggested in this post as possible and provide personal feedback on the pros and cons of each, so as to make this a helpful post for people who come across a similar question in the future; I think it would be great if the people suggesting these books could do the same.
|
Studying machine learning algorithms: depth of understanding vs. number of algorithms
|
CC BY-SA 3.0
| null |
2014-09-19T09:08:55.180
|
2021-01-15T00:09:31.857
|
2017-05-24T14:31:40.483
|
8878
|
3360
|
[
"machine-learning"
] |
1144
|
2
| null |
1143
|
5
| null |
Arguably someone calling themself a data scientist ought to know more about the intricacies of the algorithms they use—e.g. what affects the convergence rate of the Fisher scoring algorithm in GLM—than a common or garden statistician—who might be content just to know that the maximum-likelihood solution will be found (perhaps after they make a cup of coffee). In any case understanding the general concepts of statistics & machine learning is important in addition to familiarity with the methods you do use—the theory behind them, the assumptions they make, what diagnostic checks you should perform, how to interpret the results. Avoid being [this parody](https://stats.stackexchange.com/questions/104500/skills-hard-to-find-in-machine-learners/104507#104507).
You'd probably enjoy reading [Hastie et al. (2009), The Elements of Statistical Learning](http://statweb.stanford.edu/%7Etibs/ElemStatLearn/).
| null |
CC BY-SA 4.0
| null |
2014-09-19T11:38:36.150
|
2021-01-15T00:09:31.857
|
2021-01-15T00:09:31.857
|
3361
|
3361
| null |
1145
|
2
| null |
1143
|
10
| null |
I would recommend limiting yourself to a few tried and trusted algorithms. I would not recommend The Elements of Statistical Learning (as a first book); it is too theoretical, aimed at graduate students, with exercises asking how to prove X or Y... I think [ISL](http://www-bcf.usc.edu/~gareth/ISL/) is more appropriate, with more practical advice (in any case, both books are free as PDF downloads).
Besides statistics, I would make sure you are comfortable with experimental design/AB tests, and with Business Intelligence/Visualisation.
| null |
CC BY-SA 3.0
| null |
2014-09-19T12:39:23.103
|
2014-09-19T12:39:23.103
| null | null |
1256
| null |
1146
|
2
| null |
1113
|
2
| null |
Since it is a general methodological question, let's assume we have only one text-based variable - the total number of words in a sentence. First of all, it's worth visualizing your data. I will pretend I have the following data:

Here we see a slight dependency between age and the number of words in responses. We may assume that young people (approx. between 12 and 25) tend to use 1-4 words, while people aged 25-35 try to give longer answers. But how do we split these points? I would do it something like this:

In a 2D plot it looks pretty straightforward, and this is how it works most of the time in practice. However, you asked for splitting the data by a single variable - age. That is, something like this:

Is it a good split? I don't know. In fact, it depends on your actual needs and interpretation of the "cut points". That's why I asked about the concrete task. Anyway, this interpretation is up to you.
In practice, you will have many more text-based variables. E.g., you can use every word as a feature (don't forget to [stem or lemmatize](http://nlp.stanford.edu/IR-book/html/htmledition/stemming-and-lemmatization-1.html) it first) with values from zero to the number of occurrences in the response. Visualizing high-dimensional data is not an easy task, so you need a way to discover groups of data without plotting them. [Clustering](http://en.wikipedia.org/wiki/Cluster_analysis) is a general approach for this. Though clustering algorithms may work with data of arbitrary dimensionality, we still have only 2D to plot, so let's come back to our example.
With algorithm like [k-means](http://en.wikipedia.org/wiki/K-means_clustering) you can obtain 2 groups like this:

Two dots - red and blue - show the cluster centres calculated by k-means. You can use the coordinates of these points to split your data by any subset of axes, even if you have 10k dimensions. But again, the most important question here is: what linguistic features will provide a reasonable grouping of ages?
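A minimal sketch of that k-means step, on synthetic (age, words-per-response) pairs shaped roughly like the plots above:
```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
young = np.column_stack([rng.uniform(12, 25, 50), rng.integers(1, 5, 50)])
older = np.column_stack([rng.uniform(25, 35, 50), rng.integers(4, 12, 50)])
X = np.vstack([young, older])                     # columns: age, words per response

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(km.cluster_centers_)                        # the two "dots" mentioned above
age_cut = km.cluster_centers_[:, 0].mean()        # one way to turn the centres into an age split
print(age_cut)
```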
| null |
CC BY-SA 3.0
| null |
2014-09-19T22:51:43.883
|
2014-09-19T22:51:43.883
| null | null |
1279
| null |
1147
|
1
|
1175
| null |
2
|
413
|
One nice aspect discussed of the procedure that Vowpal Wabbit uses for SGD updates ([pdf](http://lowrank.net/nikos/pubs/liw.pdf)) is its so-called importance-weight invariance, described in the linked paper as:
"Among these updates we mainly focus on a novel set of updates that satisfies an additional invariance property: for all importance weights of h, the update is equivalent to two updates with importance weight h/2. We call these updates importance invariant."
What does this mean and why is it useful?
|
Invariance Property of Vowpal Wabbit Updates - Explanation
|
CC BY-SA 3.0
| null |
2014-09-20T02:22:07.510
|
2014-10-01T18:15:01.893
|
2014-10-01T18:15:01.893
|
1138
|
1138
|
[
"machine-learning",
"gradient-descent"
] |
1148
|
2
| null |
1137
|
1
| null |
If you have enough data, use cross validation.
If you don't have a lot of data, use cross validation on a similar dataset and transfer the window size (the UCR archive has a bunch of similar datasets).
Don't forget that the best warping window size depends on the amount of training data. As you get more data, you can use a smaller warping window; see fig. 6 of [http://www.cs.ucr.edu/~eamonn/DTW_myths.pdf](http://www.cs.ucr.edu/~eamonn/DTW_myths.pdf)
eamonn
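A minimal sketch of what that selection looks like in practice, assuming a plain numpy implementation of DTW with a Sakoe-Chiba band of width w and a simple validation split to pick w:
```python
import numpy as np

def dtw(a, b, w):
    """Squared-distance DTW between 1-D arrays a and b, constrained to a band of width w."""
    n, m = len(a), len(b)
    w = max(w, abs(n - m))                      # band must at least cover the length difference
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(max(1, i - w), min(m, i + w) + 1):
            cost = (a[i - 1] - b[j - 1]) ** 2
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def one_nn_accuracy(train, train_y, val, val_y, w):
    correct = 0
    for x, y in zip(val, val_y):
        dists = [dtw(x, t, w) for t in train]
        correct += train_y[int(np.argmin(dists))] == y
    return correct / len(val)

# train/val are lists of 1-D arrays (possibly different lengths) with motion labels;
# pick the window that maximizes held-out accuracy, e.g.:
# best_w = max(range(0, 21), key=lambda w: one_nn_accuracy(train, train_y, val, val_y, w))
```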
| null |
CC BY-SA 3.0
| null |
2014-09-21T06:01:41.097
|
2014-09-21T06:01:41.097
| null | null |
3376
| null |
1149
|
1
| null | null |
2
|
1194
|
I am working on a text classification problem on tweets. At the moment I was only considering the content of the tweets as a source of information, and I was using a simple bag of words approach using term frequencies as features, using Random Forests (this is something I cannot change).
Now my idea is to try to incorporate the information present in the URLs used in tweets. Not all the tweets have URLs, and if I decide to use the same term frequency representation also for URLs I will have a huge number of features coming only from URLs. For this reason, I suppose that having a single set of features containing both the tweet term frequencies and the URL term frequencies could be bad. Besides, I'll have to fill in some impossible values (like -1) for the URL features of tweets that do not have URLs, and I will probably worsen the classification for these tweets, as I will have a huge number of uninformative features.
Do you have any suggestions regarding this issue?
|
Text Classification with mixed features in Random Forests
|
CC BY-SA 3.0
| null |
2014-09-22T15:48:32.697
|
2014-10-23T14:11:10.700
| null | null |
3054
|
[
"text-mining",
"random-forest"
] |
1150
|
2
| null |
1105
|
3
| null |
This has been asked, and answered, on CrossValidated. See [https://stats.stackexchange.com/questions/11109/how-to-deal-with-perfect-separation-in-logistic-regression](https://stats.stackexchange.com/questions/11109/how-to-deal-with-perfect-separation-in-logistic-regression) and [https://stats.stackexchange.com/questions/45803/logistic-regression-in-r-resulted-in-hauck-donner-phenomenon-now-what](https://stats.stackexchange.com/questions/45803/logistic-regression-in-r-resulted-in-hauck-donner-phenomenon-now-what) for two related answers (and there are other related questions that you can explore there). That logistic regression can blow up is a known effect in computational statistics.
Also, in situations where exp(-30) gives you roughly the relative accuracy of the double type, you would want to be extremely careful with accumulation of the round-off errors, as 1+exp(-30)=1: summing the likelihood contributions may run into numerical problems, especially when you start computing numerical derivatives and gradients. For a brief introduction into the issue in application to the typical problems in statistical computing and the specific problems it tends to encounter, see [http://www.stata.com/meeting/nordic-and-baltic14/abstracts/materials/dk14_gould.pdf](http://www.stata.com/meeting/nordic-and-baltic14/abstracts/materials/dk14_gould.pdf).
| null |
CC BY-SA 3.0
| null |
2014-09-23T02:20:24.670
|
2014-09-23T02:20:24.670
|
2017-04-13T12:44:20.183
|
-1
|
1237
| null |
1151
|
1
| null | null |
3
|
1996
|
I'm a Java developer and I want to pursue career in Data Science and machine learning.
Please advise me where and how to begin? What subjects and mathematical/statistical skills are required and so on?
|
Prerequisites for Data Science
|
CC BY-SA 3.0
| null |
2014-09-23T03:34:35.750
|
2016-06-18T07:14:29.727
|
2016-06-18T07:14:29.727
|
2723
|
3390
|
[
"machine-learning",
"bigdata"
] |
1152
|
2
| null |
1149
|
1
| null |
Are you using raw term frequencies, or TF-IDF?
Perhaps you could simply combine the terms in the tweet with the terms in the URL-linked pages (if any) into a single bag of words, calculate TF-IDF, and normalize to avoid bias towards longer documents (i.e., those tweets containing URL links).
>
if I decide to use the same term frequency representation also for URLs I will have a huge number of features only from URLs
I don't understand what you mean here. Aren't your features the terms in your bag of words? So the number of features will be the size of your vocabulary, which I imagine won't change much whether you include URLs or not.
>
Besides I'll have to fill some impossible values (like -1) for the URL features for tweets that do not have URLs, and I will probably worsen the classification for this tweets, as I will have a huge number of uninformative features.
I don't understand this either. Term-document matrices are virtually always a sparse matrix, since most of the terms in your vocabulary won't appear in most of your documents. So, the vast majority of values in your TDM will be 0. I don't know where you're getting -1 from.
| null |
CC BY-SA 3.0
| null |
2014-09-23T13:51:24.650
|
2014-09-23T13:51:24.650
| null | null |
819
| null |
1153
|
2
| null |
1106
|
1
| null |
I remember a long time ago playing with Elastic Search (the website is very different now from what I remember). There is some stuff about dealing with human language [here](http://www.elasticsearch.org/guide/en/elasticsearch/guide/current/languages.html).
Be warned that Elasticsearch is like a big bazooka for your problem. If your problem is very simple, maybe you want to build something from scratch. There are some docs on the web about it.
| null |
CC BY-SA 4.0
| null |
2014-09-23T20:39:29.810
|
2020-08-17T01:00:18.380
|
2020-08-17T01:00:18.380
|
98307
|
3400
| null |
1154
|
1
|
2471
| null |
3
|
108
|
I have a problem and I'm having trouble representing it - first I thought I should use graph theory (nodes and edges) and now I'm not sure.
My data consists of some tank names and their volumes; those tanks are connected by pipelines, for which I have the names and lengths.
```
------(pipeline 1)------.-----(pipeline 2)------.----(pipeline 3)---
| | | | |
[R tank 1] [S tank 1] [S tank 2] (pipeline 4) [S tank 3]
|
[S tank 4]
R tank is sink (receiver) and S tank is source (sender)
```
The problem is that the pipeline name changes don't occur where there is a tank - the names change for historical reasons, size, or connections...
So if I want to graphically show that S tank 2 is connected to pipeline 2 at point X, and that pipeline 2 connects to the next pipeline and the content goes to R tank 1, how should I do this? (I think the point X may not be relevant, but if I had some way to get the distance travelled that would be great.)
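A sketch of the graph-theory idea I had in mind, using networkx; the junction names and lengths are made up and only approximate the diagram above:
```python
# Tanks and pipeline junctions are nodes; pipeline segments are edges carrying a
# name and a length, so connectivity and travelled distance fall out of graph queries.
import networkx as nx

G = nx.Graph()
G.add_edge("R tank 1", "junction A", pipeline="pipeline 1", length=120)
G.add_edge("junction A", "S tank 1", pipeline="pipeline 1", length=40)
G.add_edge("junction A", "junction B", pipeline="pipeline 2", length=200)
G.add_edge("junction B", "S tank 2", pipeline="pipeline 2", length=30)
G.add_edge("junction B", "S tank 4", pipeline="pipeline 4", length=80)
G.add_edge("junction B", "S tank 3", pipeline="pipeline 3", length=150)

path = nx.shortest_path(G, "S tank 2", "R tank 1", weight="length")
dist = nx.shortest_path_length(G, "S tank 2", "R tank 1", weight="length")
print(path, dist)   # which nodes the content traverses and how far it travels
```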
|
Trouble representing a problem
|
CC BY-SA 3.0
| null |
2014-09-23T20:58:01.027
|
2015-01-13T17:31:51.833
| null | null |
3400
|
[
"dataset",
"recommender-system",
"graphs"
] |
1155
|
1
| null | null |
3
|
80
|
I couldn't quite think of how best to title this, so recommendations are welcome. Same goes for the tags (I don't have the reputation to use the tags that I thought were appropriate). The question is this:
"Suppose you have N pairs of observations, (x,y), and you have a model with some unknown parameters, B, that estimates the relationship between x and y, F(x,B) -> y. Now suppose you determine B using the method of least-squares (and, implicitly, that all the assumptions of least-squares are satisfied). The parameters, B, are themselves random variables, each with its own variance. Is there any way to estimate the reduction (or increase) in the variance of B that would result from applying the same method of least-squares to N+1 pairs of observations?"
The question is asked in the context of experimentation. If each data point costs $X, an affirmative answer to the question would go a long way in determining whether or not to continue testing.
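For reference, here is the quantity I am asking about in the ordinary least-squares case, where Var(B) is proportional to (X^T X)^(-1); the sketch below uses illustrative numbers only and assumes the error variance is unchanged by the extra observation:
```python
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
X = np.column_stack([np.ones_like(x), x])        # model y = b0 + b1 * x

def param_var(X):
    return np.diag(np.linalg.inv(X.T @ X))       # proportional to Var(b0), Var(b1)

x_new = 5.0                                      # candidate (N+1)-th design point
X_plus = np.vstack([X, [1.0, x_new]])
print(param_var(X))       # variance factors with N points
print(param_var(X_plus))  # with N+1 points: smaller, more so for points far from the mean of x
```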
|
Estimating Variance Reduction Resultant from Additional Data
|
CC BY-SA 3.0
| null |
2014-09-24T16:25:48.803
|
2015-05-24T20:42:45.670
|
2014-11-25T18:27:15.063
|
84
|
3411
|
[
"statistics",
"algorithms",
"experiments"
] |
1159
|
1
|
1169
| null |
36
|
32602
|
I have a large set of data (about 8GB). I would like to use machine learning to analyze it. So, I think that I should use SVD then PCA to reduce the data dimensionality for efficiency. However, MATLAB and Octave cannot load such a large dataset.
What tools can I use to do SVD with such a large amount of data?
|
How to do SVD and PCA with big data?
|
CC BY-SA 4.0
| null |
2014-09-25T08:40:59.467
|
2019-06-09T17:14:32.920
|
2019-06-09T17:14:32.920
|
29169
|
3167
|
[
"bigdata",
"data-mining",
"dimensionality-reduction"
] |
1160
|
2
| null |
1113
|
1
| null |
If I understand you correctly, I would try a few featurization methods to transform the text column to a numeric value. Then you can proceed with analysis as usual. There is a great book on NLP called [Taming Text](http://tamingtext.com/) that would give numerous ways to think about your text variables.
| null |
CC BY-SA 3.0
| null |
2014-09-25T15:02:45.533
|
2014-09-25T15:02:45.533
| null | null |
3430
| null |
1161
|
2
| null |
1151
|
2
| null |
I would say that trying out online resources such as the [Explore Data Science course](https://exploredatascience.boozallen.com) and Andrew Ng's ML course on Coursera (mentioned in the notes above) is great. However, nothing beats true data that you understand inside and out. After picking up some skills from Explore Data Science, I propose you collect your fuel usage data continuously. Collect the basics: city, state, gallons, total cost, and your vehicle's mileage. When you forget to enter it, you have to learn how to work with missing data. As you feel more comfortable, enter Kaggle contests. The data science community is growing so quickly that the online resources are vast and will help you identify any areas where a textbook is what you need. Good luck!! Have fun!
| null |
CC BY-SA 3.0
| null |
2014-09-25T15:20:18.660
|
2014-09-25T15:20:18.660
| null | null |
3430
| null |
1162
|
2
| null |
823
|
2
| null |
I think ethics in Data Science is important. There is a fundamental difference in using user data to better their experience and show relevant ads and using user data to trick people into clicking on ads for the sake of monetary profit. Personally I like ads that give me relevant information like deals on things I would buy anyway. However, showing me weight loss ads because I got dumped is creepy and unethical. As my friend Peter always says, "don't be creepy with data".
| null |
CC BY-SA 3.0
| null |
2014-09-25T15:29:00.877
|
2014-09-25T15:29:00.877
| null | null |
3430
| null |
1163
|
2
| null |
1159
|
7
| null |
Don't bother.
First rule of programming - which also applies to data science: get everything working on a small test problem.
So take a random sample of your data of, say, 100,000 rows. Try different algorithms etc. Once you have got everything working to your satisfaction, you can try larger (and larger) data sets and see how the test error reduces as you add more data.
Furthermore, you do not want to apply SVD to only 8 columns: you apply it when you have a lot of columns.
| null |
CC BY-SA 3.0
| null |
2014-09-25T17:33:09.617
|
2014-09-25T17:33:09.617
| null | null |
1256
| null |
1164
|
2
| null |
1159
|
1
| null |
Although you can probably find some tools that will let you do it on a single machine, you're getting into the range where it makes sense to consider "big data" tools like Spark, especially if you think your data set might grow. Spark has a component called MLlib which supports PCA and SVD. [The documentation has examples](http://spark.apache.org/docs/latest/mllib-dimensionality-reduction.html).
| null |
CC BY-SA 3.0
| null |
2014-09-25T18:48:12.303
|
2014-09-25T18:48:12.303
| null | null |
381
| null |
1165
|
1
|
5252
| null |
6
|
7356
|
I'm going to start a Computer Science PhD this year, and for that I need a research topic. I am interested in predictive analytics in the context of Big Data. I am interested in the area of education (MOOCs, online courses...). In that field, what are the unexplored areas that could help me choose a strong topic? Thanks.
|
Looking for a strong Phd Topic in Predictive Analytics in the context of Big Data
|
CC BY-SA 3.0
| null |
2014-09-25T20:18:46.880
|
2020-08-17T20:32:32.897
|
2014-09-27T16:56:14.523
|
3433
|
3433
|
[
"machine-learning",
"bigdata",
"data-mining",
"statistics",
"predictive-modeling"
] |
1166
|
1
| null | null |
2
|
3035
|
MATLAB is a great tool for some mathematical experiments, neural networks, image processing...
I would like to know if there is a comparably comprehensive and strong tool for data manipulation and NLP tasks, such as tokenization, POS tagging, parsing, training, and testing.
I am new to NLP and I need a tool which lets me experiment, get familiar with the field, and progress.
|
A tool like Matlab for NLP?
|
CC BY-SA 3.0
| null |
2014-09-26T06:56:26.913
|
2017-12-14T12:37:39.437
|
2014-09-26T14:18:10.033
|
3436
|
3436
|
[
"data-mining",
"nlp"
] |
1167
|
2
| null |
946
|
1
| null |
The correct model is the binomial; both the Poisson and the normal are just approximations. The binomial pmf is defined on the integers between zero and the number of trials. The Poisson is defined on the integers between 0 and infinity, and the normal on all real numbers between +/- infinity.
In other words: for a Poisson there is a (possibly small) but non-zero probability of having more clicks than impressions. For a Gaussian you can even have negative clicks. Of course, the particular parameters determine how big an impact this has... it is probably worth plotting the respective pdfs.
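A minimal sketch of that comparison with scipy (the impression count and click-through rate are made up):
```python
import numpy as np
from scipy import stats

n, p = 10, 0.3                        # 10 impressions, 30% click-through rate
k = np.arange(0, n + 1)

binom_pmf = stats.binom.pmf(k, n, p)
poisson_pmf = stats.poisson.pmf(k, mu=n * p)
normal_pdf = stats.norm.pdf(k, loc=n * p, scale=np.sqrt(n * p * (1 - p)))
print(np.column_stack([k, binom_pmf, poisson_pmf, normal_pdf])[:5])

print(stats.poisson.sf(n, mu=n * p))  # P(more clicks than impressions): small but non-zero
print(stats.norm.cdf(0, loc=n * p, scale=np.sqrt(n * p * (1 - p))))  # P(clicks < 0): non-zero
```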
| null |
CC BY-SA 3.0
| null |
2014-09-26T07:47:45.600
|
2014-09-27T08:14:16.690
|
2014-09-27T08:14:16.690
|
1256
|
1256
| null |
1168
|
2
| null |
1166
|
0
| null |
Please take a look at this paper:
Survey on Various Natural Language Processing Toolkits
In this work several NLP tools are introduced. Examples are:
- Carabao Language Toolkit
- GATE
- Stanford NLP toolkit
- NLTK
Just google the title of the paper and you will find the .pdf file.
| null |
CC BY-SA 3.0
| null |
2014-09-26T08:51:46.727
|
2014-09-26T08:51:46.727
| null | null |
979
| null |
1169
|
2
| null |
1159
|
46
| null |
First of all, dimensionality reduction is used when you have many covariated dimensions and want to reduce the problem size by rotating data points into a new orthogonal basis and taking only the axes with the largest variance. With 8 variables (columns) your space is already low-dimensional, and reducing the number of variables further is unlikely to solve technical issues with memory size, but may affect dataset quality a lot. In your concrete case it's more promising to take a look at [online learning](http://en.wikipedia.org/wiki/Online_machine_learning) methods. Roughly speaking, instead of working with the whole dataset, these methods take a small part of it (often referred to as "mini-batches") at a time and build a model incrementally. (I personally like to interpret the word "online" as a reference to some infinitely long source of data from the Internet, like a Twitter feed, where you just can't load the whole dataset at once.)
But what if you really want to apply a dimensionality reduction technique like PCA to a dataset that doesn't fit into memory? Normally the dataset is represented as a data matrix X of size n x m, where n is the number of observations (rows) and m is the number of variables (columns). Typically, problems with memory come from only one of these two numbers.
## Too many observations (n >> m)
When you have too many observations but the number of variables is small to moderate, you can build the covariance matrix incrementally. Indeed, typical PCA consists of constructing a covariance matrix of size m x m and applying singular value decomposition to it. With m=1000 variables of type float64, the covariance matrix has size 1000*1000*8 ~ 8Mb, which easily fits into memory and may be used with SVD. So you only need to build the covariance matrix without loading the entire dataset into memory - a [pretty tractable task](http://rebcabin.github.io/blog/2013/01/22/covariance-matrices/).
Alternatively, you can select a small representative sample from your dataset and approximate the covariance matrix. This matrix will have all the same properties as normal, just a little bit less accurate.
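A minimal sketch of that incremental construction (the batch generator below is just a stand-in for reading your file chunk by chunk):
```python
import numpy as np

m = 100                                    # number of variables (columns)
xtx = np.zeros((m, m))
col_sum = np.zeros(m)
n_total = 0

def stream_batches():                      # stand-in for reading the dataset in chunks
    rng = np.random.default_rng(0)
    for _ in range(50):
        yield rng.normal(size=(10_000, m))

for batch in stream_batches():
    xtx += batch.T @ batch
    col_sum += batch.sum(axis=0)
    n_total += batch.shape[0]

mean = col_sum / n_total
cov = xtx / n_total - np.outer(mean, mean)          # covariance without loading all rows at once
eigvals, eigvecs = np.linalg.eigh(cov)
components = eigvecs[:, ::-1][:, :10]               # top-10 principal axes
```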
## Too many variables (n << m)
On another hand, sometimes, when you have too many variables, the covariance matrix itself will not fit into memory. E.g. if you work with 640x480 images, every observation has 640*480=307200 variables, which results in a 703Gb covariance matrix! That's definitely not what you would like to keep in memory of your computer, or even in memory of your cluster. So we need to reduce dimensions without building a covariance matrix at all.
My favourite method for doing it is [Random Projection](http://web.stanford.edu/~hastie/Papers/Ping/KDD06_rp.pdf). In short, if you have dataset X of size n x m, you can multiply it by some sparse random matrix R of size m x k (with k << m) and obtain new matrix X' of a much smaller size n x k with approximately the same properties as the original one. Why does it work? Well, you should know that PCA aims to find set of orthogonal axes (principal components) and project your data onto first k of them. It turns out that sparse random vectors are nearly orthogonal and thus may also be used as a new basis.
And, of course, you don't have to multiply the whole dataset X by R - you can translate every observation x into the new basis separately or in mini-batches.
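A minimal sketch with scikit-learn's `SparseRandomProjection` (the dimensions are scaled down from the image example so it runs quickly, and the data is random):
```python
import numpy as np
from sklearn.random_projection import SparseRandomProjection

rng = np.random.default_rng(0)
batch = rng.normal(size=(500, 10_000))                 # one mini-batch of high-dimensional rows
rp = SparseRandomProjection(n_components=300, random_state=0).fit(batch)

reduced = rp.transform(batch)                          # shape (500, 300)
reduced_next = rp.transform(rng.normal(size=(500, 10_000)))  # later batches reuse the same R
```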
There's also somewhat similar algorithm called Random SVD. I don't have any real experience with it, but you can find example code with explanations [here](https://stats.stackexchange.com/a/11934/3305).
---
As a bottom line, here's a short check list for dimensionality reduction of big datasets:
- If you have not that many dimensions (variables), simply use online learning algorithms.
- If there are many observations, but a moderate number of variables (covariance matrix fits into memory), construct the matrix incrementally and use normal SVD.
- If number of variables is too high, use incremental algorithms.
| null |
CC BY-SA 3.0
| null |
2014-09-26T11:39:07.313
|
2017-02-14T21:24:09.753
|
2017-04-13T12:44:20.183
|
-1
|
1279
| null |
1171
|
2
| null |
1166
|
0
| null |
There's a NLP toolbox for MATLAB called [MatlabNLP](https://github.com/faridani/MatlabNLP). It includes modules for tokenization, preprocessing (stop word removal, text cleaning, stemming), and learning algorithms (linear regression, decision trees, support vector machines and a Naïve Bayes). A module for POS tagging is coming soon.
| null |
CC BY-SA 3.0
| null |
2014-09-26T16:15:05.580
|
2014-09-26T16:15:05.580
| null | null |
3412
| null |
1172
|
1
| null | null |
2
|
52
|
Are there any general open-source programs or libraries (e.g., a Python library) for analyzing user search behavior? By "search behavior", I mean a user's interaction with a search engine, such as querying, clicking relevant results, and spending time on those results. I'd like something with the following properties - it doesn't have to be all of them, but the more the merrier:
- Models individual user behavior (aggregate and time-based)
- Models group user behavior
- Simulates individual user behavior, given a model
- Is easily extensible (to accept data input formats, user models, document models, etc., that end-users define)
Links are a plus!
|
General programs/libraries for studying user search behavior?
|
CC BY-SA 3.0
| null |
2014-09-27T13:14:16.710
|
2020-02-07T21:35:08.583
| null | null |
1097
|
[
"machine-learning",
"data-mining",
"search",
"library"
] |
1173
|
1
|
1182
| null |
1
|
73
|
Not sure if this is Math, Stats or Data Science, but I figured I would post it here to get the site used.
As a programmer, when you have a system/component implemented, you might want to allow some performance monitoring. For example to query how often a function call was used, how long it took and so on. So typically you care about count, means/percentile, max/min and similiar statistics. This could be measurements since startup, but also a rolling average or window.
I wonder if there is a good data structure which can be updated efficiently concurrently which can be used as the source for most of those queries. For example having a ringbuffer of rollup-metrics (count, sum, min, max) over increasing periods of time and a background aggregate process triggered regularly.
The focus here (for me) is on in-memory data structures with limited memory consumption. (For other things I would use a RRD type of library).
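To make the idea concrete, here is a very rough sketch of the structure I have in mind (my own illustration, not an existing library; it ignores concurrency and stale-bucket eviction):

```python
import math
from dataclasses import dataclass

@dataclass
class Rollup:
    count: int = 0
    total: float = 0.0
    lo: float = math.inf
    hi: float = -math.inf

    def add(self, value: float) -> None:
        self.count += 1
        self.total += value
        self.lo = min(self.lo, value)
        self.hi = max(self.hi, value)

    def merge(self, other: "Rollup") -> None:
        self.count += other.count
        self.total += other.total
        self.lo = min(self.lo, other.lo)
        self.hi = max(self.hi, other.hi)

# ring buffer of per-second buckets covering the last 60 seconds
buckets = [Rollup() for _ in range(60)]

def record(timestamp: float, value: float) -> None:
    buckets[int(timestamp) % 60].add(value)      # real code would also reset stale buckets

def window_stats() -> Rollup:
    merged = Rollup()
    for bucket in buckets:
        merged.merge(bucket)
    return merged                                 # count, sum, min, max over the window
```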
|
Versatile data structure for combined statistics
|
CC BY-SA 3.0
| null |
2014-09-27T21:58:42.477
|
2014-09-30T14:29:06.307
| null | null |
3445
|
[
"dataset",
"statistics",
"algorithms"
] |
1174
|
1
| null | null |
4
|
12692
|
I have post already the question few months ago about my project that I'm starting to work on. This post can be see here:
[Human activity recognition using smartphone data set problem](https://datascience.stackexchange.com/questions/211/human-activity-recognition-using-smartphone-data-set-problem)
Now, I know this is based around multivariate time series analysis and the tasks are to classify and cluster the data. I have gathered some materials (e-books, tutorials etc.) on this but still can't see a more detailed picture of how I should even start. Here's a tutorial that looks like it might be helpful, but the thing is my data looks different and I'm not really sure if this can be applied to my work.
[http://little-book-of-r-for-multivariate-analysis.readthedocs.org/en/latest/src/multivariateanalysis.html#scatterplots-of-the-principal-components](http://little-book-of-r-for-multivariate-analysis.readthedocs.org/en/latest/src/multivariateanalysis.html#scatterplots-of-the-principal-components)
So basically, my questions are:
How can I start on some very basic analysis? How do I read the data so that it has any meaning for me?
Any tips and advice will be much appreciated!
Note: I'm just a beginner in data science.
|
How to classify and cluster this time series data
|
CC BY-SA 3.0
| null |
2014-09-28T12:51:43.823
|
2021-02-12T12:52:09.447
|
2017-04-13T12:50:41.230
|
-1
|
295
|
[
"data-mining",
"classification",
"dataset",
"clustering",
"time-series"
] |
1175
|
2
| null |
1147
|
2
| null |
Often different data samples have different weightings (e.g., the cost of a misclassification error for one group of data is higher than for other classes).
Most error metrics are of the form $\sum_i e_i$ where $e_i$ is the loss (e.g., squared error) on data point $i$. Therefore weightings of the form $\sum_i w_i e_i$ are equivalent to duplicating the data $w_i$ times (e.g., for integer $w_i$).
One simple case is if you have repeated data - rather than keeping all the duplicated data points, you just "weight" your one repeated sample by the number of instances.
Now whilst this is easy to do in a batch setting, it is hard in Vowpal Wabbit's online big-data setting: given that you have a large data set, you do not just want to present the data n times to deal with the weighting (because it increases your computational load). Similarly, just multiplying the gradient vector by the weighting - which is correct in batch gradient descent - will cause big problems for stochastic/online gradient descent: essentially you shoot off in one direction (think of large integer weights), then you shoot off in the other - causing significant instability. SGD essentially relies on all the errors being of roughly the same order (so that the learning rate can be set appropriately). So what they propose is to ensure that the update for training sample $x_i$ with weight n is equivalent to presenting training sample $x_i$ n times consecutively.
The idea is that presenting it consecutively reduces the problem because the error gradient (for that single example $x_i$) shrinks with each consecutive presentation and update (as you get closer and closer to the minimum for that specific example). In other words, the consecutive updates provide a kind of feedback control.
To me it sounds like you would still have instabilities (you get to zero error on $x_i$, then you get to zero error on $x_{i+1}$, ...). The learning rate will need to be adjusted to take into account the size of the weights.
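For intuition, here is a toy sketch of my own (a 1-D least-squares example, not Vowpal Wabbit's actual code) comparing one weighted gradient step with n consecutive unweighted presentations:

```python
# toy loss for a single example (x, y): e = 0.5 * (w*x - y)^2, optimum at w = y/x = 2
x, y = 1.0, 2.0
lr, n = 0.7, 10                          # learning rate and importance weight

# (a) multiply the gradient by the weight in a single step
w_a = 0.0
w_a -= lr * n * (w_a * x - y) * x        # w_a = 14.0, shoots far past the optimum

# (b) present the example n times consecutively (the proposed scheme)
w_b = 0.0
for _ in range(n):
    w_b -= lr * (w_b * x - y) * x        # the gradient shrinks at every step

print(w_a, w_b)                          # 14.0 vs. roughly 2.0
```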
| null |
CC BY-SA 3.0
| null |
2014-09-29T00:20:24.093
|
2014-09-29T00:20:24.093
| null | null |
1256
| null |
1176
|
2
| null |
1159
|
7
| null |
PCA is usually implemented by computing SVD on the covariance matrix.
Computing the covariance matrix is an embarrassingly parallel task, so it scales linearly with the number of records, and is trivial to distribute on multiple machines!
Just do one pass over your data to compute the means. Then a second pass to compute the covariance matrix. This can be done with map-reduce easily - essentially it's the same as computing the means again. The sum terms in the covariance are trivial to parallelize! You may only need to pay attention to numerics when summing a lot of values of similar magnitude.
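As a rough sketch of the two-pass scheme (my own illustration, not any particular library's API; it assumes the data sits in a hypothetical `.npy` file of shape (n, d)):

```python
import numpy as np

def blocks(path, rows=10_000):
    """Yield the dataset in row blocks; only one block is in memory at a time."""
    data = np.load(path, mmap_mode="r")
    for i in range(0, data.shape[0], rows):
        yield np.asarray(data[i:i + rows], dtype=np.float64)

def streaming_pca(path, k):
    # pass 1: means
    n, total = 0, 0.0
    for block in blocks(path):
        total = total + block.sum(axis=0)
        n += block.shape[0]
    mean = total / n

    # pass 2: accumulate the d x d covariance matrix block by block
    cov = np.zeros((mean.size, mean.size))
    for block in blocks(path):
        centered = block - mean
        cov += centered.T @ centered
    cov /= n - 1

    # eigendecomposition of the small d x d matrix yields the principal components
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1][:k]
    return eigvecs[:, order], eigvals[order]
```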
Things get different when you have a huge number of variables. But on an 8 GB system, you should be able to run PCA on up to 20,000 dimensions in memory with the BLAS libraries. But then you may run into the problem that PCA isn't all that reliable anymore, because it has too many degrees of freedom. In other words: it overfits easily. I've seen the recommendation of having at least 10*d*d records (or was it d^3). So for 10,000 dimensions, you should have at least a billion records (of 10,000 dimensions... that is a lot!) for the result to be statistically reliable.
| null |
CC BY-SA 3.0
| null |
2014-09-29T00:28:26.413
|
2014-09-29T00:28:26.413
| null | null |
924
| null |
1177
|
1
| null | null |
6
|
325
|
I'm trying to implement item based collaborative filtering. Do any distance calculations allow for weighting of certain ranges of values within each vector?
For example, I would like to be able to say values 10..22 within each vector are more significant than values within the range 0..10.
I've been experimenting with Pearson, Tanimoto and Euclidean algorithms, but they all seem to assume equal weighting for each value within the vector.
Am I approaching this problem in the right way, and if not, how do others deal with this problem?
|
Distance calculation/vector range significance
|
CC BY-SA 3.0
| null |
2014-09-29T14:52:48.130
|
2015-11-11T16:50:46.357
| null | null |
3459
|
[
"machine-learning",
"classification",
"algorithms",
"beginner"
] |
1178
|
2
| null |
1042
|
2
| null |
The kinds of tools you will use will vary based on the problem you are trying to solve. Social media data is rich and therefore many questions can be asked - and many tools can be used.
However, there is a general pattern you might keep in mind. Typically, you will have to use the platform's API to gather data. You will then have to normalize and store the data in a data warehouse. Finally, you will access and analyze the data with the tools you desire. Keeping the end goal in mind, you will have to strategically choose the best technologies for the job you are doing.
For example, let's say you wanted to study the relationships between users on a social network - a question like "Who are the mutual friends of the most popular individuals in this social network?" In this case, you would gather data using the social media platform's API, normalize it into a CSV, import it into a Neo4j database, and then use Cypher to make queries.
| null |
CC BY-SA 3.0
| null |
2014-09-29T17:46:58.617
|
2014-09-29T17:46:58.617
| null | null |
3466
| null |
1179
|
2
| null |
1177
|
1
| null |
I created a scoring system ("Thomas Scoring System") to deal with this problem. If you treat "distance" as a similarity score, this system should work for you. [http://exploringpossibilityspace.blogspot.com/2014/02/thomas-scoring-system.html](http://exploringpossibilityspace.blogspot.com/2014/02/thomas-scoring-system.html)
>
Here's Thomas Scoring System (TSS) in a nutshell: Treat each metric and their values as evidence that weigh for or against specific values in the performance index. This is an inference process and not an arithmetic calculation, as in the Usual Method. The output of TSS is an estimation of the Weight of Evidence for all index values. The Weight of Evidence can be expressed as a probability distribution, where all the evidence weights sum to one.
To use your example, each range is a condition: A) 10 <= x <= 22; and B) x < 10. If A is true, then it weighs in favor of certain similarity (distance) scores. If B is true, then it weighs in favor of other similarity (distance) scores, but would have less weight compared to other evidence.
| null |
CC BY-SA 3.0
| null |
2014-09-29T20:31:49.387
|
2014-09-29T20:31:49.387
| null | null |
609
| null |
1180
|
2
| null |
1177
|
1
| null |
You mention distance metrics, but Pearson and Tanimoto are similarity measures, not distances. For Euclidean distance, simply scale the dimensions in question by some factor. For example, doubling a dimension's values makes its contribution to the distance larger and so makes the feature more important to the distance metric.
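A minimal sketch of that scaling trick (my own example, assuming you can express the importance as a per-dimension weight):

```python
import numpy as np

def weighted_euclidean(a, b, weights):
    """Euclidean distance with each dimension scaled by its weight."""
    a, b, weights = map(np.asarray, (a, b, weights))
    return float(np.sqrt(np.sum(weights * (a - b) ** 2)))

# the last two dimensions count twice as much as the first one
print(weighted_euclidean([1.0, 12.0, 20.0], [2.0, 10.0, 15.0], weights=[1.0, 2.0, 2.0]))
```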
On a related note you may wish to look at the Mahalanobis distance which kind of accounts for covariance or 'redundancy' across dimensions.
| null |
CC BY-SA 3.0
| null |
2014-09-29T22:45:38.623
|
2014-09-29T22:45:38.623
| null | null |
21
| null |
1181
|
1
| null | null |
-1
|
541
|
I want to do text clustering based on semantic similarity, using EMD (earth mover's distance).
Is there a better way than using LDA to detect topics in text, one that would provide better results?
I'm going to compute my EMD over the discovered topics.
Thanks
|
document clustering by semantic similarity based EMD(earth mover distance)
|
CC BY-SA 3.0
| null |
2014-09-30T09:43:24.153
|
2018-12-04T09:17:15.800
| null | null |
3473
|
[
"text-mining",
"clustering"
] |
1182
|
2
| null |
1173
|
1
| null |
It sounds like you would like the [Boost Accumulators](http://www.boost.org/doc/libs/1_56_0/doc/html/accumulators.html) library:
>
Boost.Accumulators is both a library for incremental statistical
computation as well as an extensible framework for incremental
calculation in general. The library deals primarily with the concept
of an accumulator, which is a primitive computational entity that
accepts data one sample at a time and maintains some internal state.
These accumulators may offload some of their computations on other
accumulators, on which they depend. Accumulators are grouped within an
accumulator set. Boost.Accumulators resolves the inter-dependencies
between accumulators in a set and ensures that accumulators are
processed in the proper order.
| null |
CC BY-SA 3.0
| null |
2014-09-30T14:29:06.307
|
2014-09-30T14:29:06.307
| null | null |
515
| null |
1183
|
1
| null | null |
2
|
49
|
I'm interested in discovering some kind of dis-associations between the periods of a time series based on its data, e.g., find some (unknown number of) periods where the data is not similar to the data from another period.
Also I would like to compare the same data but over 2 years (something like DTW?).
I get my data in Excel as a two-column list:
```
c1=date (one per each day of the year), c2=Data To Analyze
```
So, what algorithms could I use and in what software?
Update/Later edit:
I'm looking for dates as cut-off points from which the DataToAnalyze could be part of another cluster of consecutive dates. For example:
```
2014-1-1 --> 2014-3-10
```
are part of Cluster_1 based on DataToAnalyze. And:
```
2014-3-11 --> 2014-5-2
```
are part of Cluster_2 based on DataToAnalyze, and so on. So, clusters of consecutive dates should be automatically determined based on some algorithms, which is what I'm looking for. Which ones (or which software) would be applicable to this problem?
|
Discovering dis-associations between periods of time-series
|
CC BY-SA 3.0
| null |
2014-09-30T19:53:20.163
|
2014-10-02T13:36:58.920
|
2014-10-02T13:36:58.920
|
84
|
3482
|
[
"clustering",
"time-series"
] |
1184
|
2
| null |
1159
|
0
| null |
I would recommend Python. If you lazily evaluate the file, you will have a minuscule memory footprint, and NumPy/SciPy give you access to all of the tools Octave/MATLAB would.
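As a rough illustration of what lazy evaluation buys you (my own sketch, assuming a plain-text file with one number per line):

```python
import math

def values(path):
    """Generator that yields one parsed value at a time instead of loading the file."""
    with open(path) as f:
        for line in f:
            yield float(line)

# running statistics without ever holding the full dataset in memory
count, total, total_sq = 0, 0.0, 0.0
for v in values("huge_file.txt"):        # hypothetical file name
    count += 1
    total += v
    total_sq += v * v

mean = total / count
std = math.sqrt(total_sq / count - mean ** 2)
print(mean, std)
```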
| null |
CC BY-SA 3.0
| null |
2014-09-30T20:12:32.870
|
2014-09-30T20:12:32.870
| null | null |
890
| null |
1185
|
1
| null | null |
2
|
68
|
I'm trying to implement GD for the standard task of NN training :) The best papers for practitioners I've found so far are:
1) "Efficient BackProp" by Yann LeCun et al.
2) "Stochastic Gradient Descent Tricks" by Leon Bottou
Are there some other must read papers on this topic?
Thank you!
|
What are some best papers on gradient descent for NN implementation?
|
CC BY-SA 3.0
| null |
2014-10-01T10:32:32.960
|
2015-03-09T00:24:48.157
| null | null |
2471
|
[
"neural-network",
"gradient-descent"
] |
1186
|
1
| null | null |
11
|
310
|
I am planning to use scikit-learn's linear support vector machine (SVM) classifier for text classification on a corpus consisting of 1 million labeled documents. What I am planning to do is, when a user enters some keyword, the classifier will first classify it into a category, and then a subsequent information retrieval query will happen within the documents of that category. I have a few questions:
- How do I confirm that classification will not take much time? I don't want users to have to spend time waiting for a classification to finish in order to get better results.
- Is using Python's scikit library for websites/web applications suitable for this?
- Does anyone know how amazon or flipkart perform classification on user queries, or do they use a completely different logic?
|
How much time do scikit classifiers take to classify?
|
CC BY-SA 3.0
| null |
2014-10-01T13:26:52.037
|
2016-01-21T11:29:32.980
|
2015-10-24T00:20:30.280
|
13413
|
3498
|
[
"machine-learning",
"classification",
"python",
"scikit-learn"
] |
1188
|
2
| null |
1174
|
2
| null |
>
How I can start on some very basic analysis?
Take your labeled data and compute histograms of the values for each of the sets. Plot these and visually see if there's any differences. Also compute the mean and variance of each of the different labeled sets and see if there are differences.
If it's timeseries data, take small (overlapping) windows of time and compute various metrics - min, max, variance, mean, for instance - and use that as input to a classifier.
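A quick sketch of both suggestions (my own illustration, assuming the recordings live in a pandas DataFrame with a `label` column and a single `accel` signal column):

```python
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("activity_data.csv")    # hypothetical file with columns: label, accel

# per-label summary statistics
print(df.groupby("label")["accel"].agg(["mean", "var", "min", "max"]))

# per-label histograms, overlaid for visual comparison
for label, group in df.groupby("label"):
    plt.hist(group["accel"], bins=50, alpha=0.5, label=str(label))
plt.legend()
plt.show()

# overlapping windows (128 samples, 50% overlap) turned into classifier features
window, step = 128, 64
rows = []
for start in range(0, len(df) - window, step):
    chunk = df["accel"].iloc[start:start + window]
    rows.append({"mean": chunk.mean(), "var": chunk.var(),
                 "min": chunk.min(), "max": chunk.max()})
features = pd.DataFrame(rows)
```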
| null |
CC BY-SA 3.0
| null |
2014-10-01T14:44:35.467
|
2014-10-01T14:44:35.467
| null | null |
403
| null |
1189
|
1
|
2472
| null |
3
|
592
|
Below is the dataset where the response variable is play with two labels (yes, and no):
```
No. outlook temperature humidity windy play
1 sunny hot high FALSE no
2 sunny hot high TRUE no
3 overcast hot high FALSE yes
4 rainy mild high FALSE yes
5 rainy cool normal FALSE yes
6 rainy cool normal TRUE no
7 overcast cool normal TRUE yes
8 sunny mild high FALSE no
9 sunny cool normal FALSE yes
10 rainy mild normal FALSE yes
11 sunny mild normal TRUE yes
12 overcast mild high TRUE yes
13 overcast hot normal FALSE yes
14 rainy mild high TRUE no
```
Here are the decisions with their respective classifications:
```
1: (outlook,overcast) -> (play,yes)
[Support=0.29 , Confidence=1.00 , Correctly Classify= 3, 7, 12, 13]
2: (humidity,normal), (windy,FALSE) -> (play,yes)
[Support=0.29 , Confidence=1.00 , Correctly Classify= 5, 9, 10]
3: (outlook,sunny), (humidity,high) -> (play,no)
[Support=0.21 , Confidence=1.00 , Correctly Classify= 1, 2, 8]
4: (outlook,rainy), (windy,FALSE) -> (play,yes)
[Support=0.21 , Confidence=1.00 , Correctly Classify= 4]
5: (outlook,sunny), (humidity,normal) -> (play,yes)
[Support=0.14 , Confidence=1.00 , Correctly Classify= 11]
6: (outlook,rainy), (windy,TRUE) -> (play,no)
[Support=0.14 , Confidence=1.00 , Correctly Classify= 6, 14]
```
|
How to define confusion matrix for classification?
|
CC BY-SA 4.0
| null |
2014-10-01T18:11:26.220
|
2018-12-27T14:56:31.230
|
2018-12-27T14:56:31.230
|
134
|
3503
|
[
"confusion-matrix"
] |
1190
|
1
| null | null |
4
|
1527
|
Statement of problem: An ambulance is at the hospital dropping off a patient. The goal of the paramedic is to get released from the hospital as soon as possible. I am curious, what are the factors in how long an ambulance off loads a patient at the hospital? Can I predict how long an offload will take given certain variables. And how confident can I be in this model? The Dependent Variable is HospitalTime, it is a ratio type of data and is measured in seconds. The Independent Variables are:
- Hospital, a nominal type of data recoded into integers, 1 would stand
for Lee Memorial.
- Ambulance, a nominal type of data recoded into integers, 9 would
stand for ambulance #9
- PatientPriority is an ordinal type of data recoded into integers. A 1
is a high priority, 2 is a medium priority and 3 is low acuity.
- MonthOfCall is an interval type of data recoded into integers. A 6
would be June and 12 is December. A 12 (December) is not twice as
much as a 6 (June) in this case.
- HourOfCall is an interval type of data recoded into integers. Once
again, an offload happening at 10:00 pm is not more than something
happening at 10:00 am.
- Officer1 and Officer2 are nominal data and are integers representing
an EMT and a paramedic.
My question is this: Given this type of data and my goal to predict the off loading time at the hospital, what kind of regression model should I look into?
I have looked at my statistics books from university days and they are all using ratio data. My data is mixed with nominal, ordinal, interval and ratio.
I have as much data as you could ask for. I have at least 100,000 observations.
Can you please push me in the right direction? What kind of model should I use with this type of data?
Shown below are observations to give you a tiny peek at my data:
```
IncidentID,HospitalTime,Hospital,Ambulance,PatientPriority,MonthOfCall,HourOfCall,Officer1,Officer2
757620,1849,7,11,2,10,10,234,771,chr(10) 802611,2625,7,11,3,1,18,234,777,chr(10)
765597,1149,7,12,3,11,2,234,777,chr(10) 770926,1785,7,12,3,11,15,234,777,chr(10)
771689,3557,7,12,2,11,14,234,777,chr(10) 822758,1073,7,20,3,3,13,777,307,chr(10)
767249,2570,7,22,2,11,11,560,778,chr(10) 767326,1998,7,22,1,11,18,560,777,chr(10)
785903,1660,7,22,3,12,12,234,777,chr(10) 787644,2852,7,22,3,12,17,234,777,chr(10)
760294,1327,7,23,2,10,14,498,735,chr(10) 994677,3653,7,32,2,2,15,181,159,chr(10)
994677,3653,7,32,2,2,15,181,159,chr(10) 788471,2053,5,9,2,1,3,498,777,chr(10)
788471,2053,5,9,2,1,3,498,777,chr(10) 759983,1342,5,11,2,10,8,474,777,chr(10)
791243,1635,5,11,2,1,18,234,777,chr(10) 800796,1381,5,11,3,1,11,234,777,chr(10)
```
P.S. This question is cross-posted in Stack-Overflow under the same title and author.
|
Type of regression with nominal, ordinal, interval and ratio data
|
CC BY-SA 3.0
| null |
2014-10-01T19:51:21.220
|
2015-04-16T11:14:29.710
|
2014-10-14T08:24:14.660
|
21
|
3504
|
[
"regression"
] |
1191
|
1
| null | null |
4
|
6596
|
The dataset that I am experimenting with is in the form of a table with columns userid and itemid. If there is a row for a given user and a given item, that means the user accessed the item (like in an online store). I am trying to cluster similar items based on this data. If a pair of items is accessed together often, then the items are similar.
Because this is a case of high dimensionality (# of users and items will be in the 10,000's) I think I am justified in trying to use SVD as a pre-clustering step and then do some classical clustering. When I tried doing this I got poor clustering results when compared with simple hierarchical clustering. Items that weren't very similar were being bucketed together in one dimension, while there were available dimensions that weren't used. The results weren't completely random, but they were definitely worse than the output from the hierarchical clustering. I attempted the SVD step with Mahout and Octave and the results were similar. For the hierarchical clustering I used the Jaccard measure.
At this point I am starting to doubt the notion of SVD as a way to reduce dimensionality. Do you think that SVD cannot be used effectively in this case (and why?) or do you think that I made some mistake along the way?
|
Using SVD for clustering
|
CC BY-SA 3.0
| null |
2014-10-02T03:30:37.930
|
2014-10-02T12:36:46.010
|
2014-10-02T12:36:46.010
|
84
|
3506
|
[
"clustering"
] |
1192
|
1
| null | null |
5
|
134
|
I have several datasets with thousands of variables. These different datasets have different variables for the same thing. Is there a way to automatically/semi-automatically check compatible variables and make them consistent?
If there is such thing, that would save me months of tedious work. The data is stored in SPSS format.
|
Amalgamating multiple datasets with different variables coding
|
CC BY-SA 3.0
| null |
2014-10-02T06:18:11.397
|
2017-11-19T21:58:07.593
|
2014-10-02T13:25:16.317
|
84
|
3507
|
[
"dataset",
"data-cleaning"
] |
1193
|
2
| null |
1186
|
3
| null |
The only reliable way to see how long it takes is to code it up and give it a shot. Training will take more time; after that you can save your model (pickle it) to use later.
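A rough sketch of how you might measure it (my own example with a made-up placeholder corpus, not a benchmark):

```python
import pickle
import time

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

docs = ["great phone fast shipping"] * 500 + ["buy cheap pills now"] * 500   # placeholder
labels = [0] * 500 + [1] * 500

vec = TfidfVectorizer()
clf = LinearSVC().fit(vec.fit_transform(docs), labels)     # the slow part: training

with open("model.pkl", "wb") as f:                         # save once, reload at query time
    pickle.dump((vec, clf), f)

start = time.time()
clf.predict(vec.transform(["cheap phone"]))
print("prediction took %.4f seconds" % (time.time() - start))
```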
| null |
CC BY-SA 3.0
| null |
2014-10-02T08:17:03.253
|
2014-10-02T08:17:03.253
| null | null |
3457
| null |
1194
|
2
| null |
1191
|
2
| null |
We are using Singular Value Decomposition in the much same manner as you, except rather than clustering similar items, we are using a reduced-rank matrix to power a recommendation engine based on a term-document matrix in Latent Semantic Indexing.
From your brief description, your approach seems sound enough. However, I highly recommend reading [Berry, Dumais & O'Brien's Using Linear Algebra for Intelligent Information Retrieval](http://lsirwww.epfl.ch/courses/dis/2003ws/papers/ut-cs-94-270.pdf).
Key to using SVD is selecting an acceptable rank-k approximation to the original sparse matrix. You should carry out some exploratory analysis to see how much variance can be explained, using the singular values in the diagonal matrix Sigma. This question was brought up in [this question on CrossValidated](https://stats.stackexchange.com/questions/49646/for-a-random-matrix-shouldnt-a-svd-explain-nothing-at-all-what-am-i-doing-wro).
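For instance, a quick way to run that exploratory check in Python (a sketch with scikit-learn; the random matrix below is just a stand-in for your real sparse term-document or user-item matrix):

```python
import numpy as np
from scipy.sparse import random as sparse_random
from sklearn.decomposition import TruncatedSVD

X = sparse_random(10000, 1000, density=0.005, format="csr", random_state=0)

svd = TruncatedSVD(n_components=300, random_state=0)
svd.fit(X)

# cumulative variance explained as k grows; pick k where the curve flattens out
cumvar = np.cumsum(svd.explained_variance_ratio_)
for k in (50, 100, 200, 300):
    print(k, round(float(cumvar[k - 1]), 3))
```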
A lot of the papers I've read suggest a rank k of anywhere from 200 to 300 singular values. In a proof-of-concept implementation, we had an original sparse matrix of about 10000 rows (unique terms) by about 1000 columns (unique documents), and we were capturing just under 85% of the variance with only 300 singular values.
However, that really hinges upon the nature of your data, so your mileage may vary.
| null |
CC BY-SA 3.0
| null |
2014-10-02T08:58:18.340
|
2014-10-02T08:58:18.340
|
2017-04-13T12:44:20.183
|
-1
|
24
| null |
1195
|
1
|
1206
| null |
2
|
151
|
I'm coding a program that tests several classifiers over the weather.arff database. I found the rules below, and I want to classify test objects.
I do not understand how the classification works; it is described as follows:
"In classification, let R be the set of generated rules and T the training data. The basic idea of the proposed method is to choose a set of high confidence rules in R to cover T. In classifying a test object, the first rule in the set of rules that matches the test object condition classifies it. This process ensures that only the highest ranked rules classify test objects. "
How to classify test objects?
```
No. outlook temperature humidity windy play
1 sunny hot high FALSE no
2 sunny hot high TRUE no
3 overcast hot high FALSE yes
4 rainy mild high FALSE yes
5 rainy cool normal FALSE yes
6 rainy cool normal TRUE no
7 overcast cool normal TRUE yes
8 sunny mild high FALSE no
9 sunny cool normal FALSE yes
10 rainy mild normal FALSE yes
11 sunny mild normal TRUE yes
12 overcast mild high TRUE yes
13 overcast hot normal FALSE yes
14 rainy mild high TRUE no
```
Rule found:
```
1: (outlook,overcast) -> (play,yes)
[Support=0.29 , Confidence=1.00 , Correctly Classify= 3, 7, 12, 13]
2: (humidity,normal), (windy,FALSE) -> (play,yes)
[Support=0.29 , Confidence=1.00 , Correctly Classify= 5, 9, 10]
3: (outlook,sunny), (humidity,high) -> (play,no)
[Support=0.21 , Confidence=1.00 , Correctly Classify= 1, 2, 8]
4: (outlook,rainy), (windy,FALSE) -> (play,yes)
[Support=0.21 , Confidence=1.00 , Correctly Classify= 4]
5: (outlook,sunny), (humidity,normal) -> (play,yes)
[Support=0.14 , Confidence=1.00 , Correctly Classify= 11]
6: (outlook,rainy), (windy,TRUE) -> (play,no)
[Support=0.14 , Confidence=1.00 , Correctly Classify= 6, 14]
```
Thanks,
Dung
|
How to classify test objects with this ruleset in order of priority?
|
CC BY-SA 3.0
| null |
2014-10-02T13:06:22.433
|
2016-12-13T09:18:45.437
|
2016-12-13T09:18:45.437
|
8501
|
3503
|
[
"classification",
"association-rules"
] |
1196
|
1
| null | null |
2
|
188
|
Many discussions of missing data in supervised (and unsupervised) learning deal with various methods of imputation, like mean values or EM. But in some cases the data will be missing as a necessary consequence of the data generation process.
For instance, let's say I'm trying to predict students' grades, and one of the inputs I want to analyze is the average grades of the student's siblings. If a particular student is an only child, then that value will be missing, not because we failed to collect the data, but because logically there is no data to collect. This is distinct from cases where the student has siblings, but we can't find their grades.
Other examples abound: say we're in college admissions and we want to include students' AP exam results, but not all students took AP exams. Or we're looking at social network data, but not all subjects have facebook and/or twitter accounts.
These data are missing, but they're certainly not missing at random. And many algorithms, such as all supervised learning packages in scikit-learn, simply demand that there be no missing values at all in the data set.
How have people dealt with this in the past, and what off-the-shelf solutions are there? For instance, I believe the gradient boosting algorithm in R uses trees with three possible branches: left, right, and missing. Any other alternatives out there?
|
Supervised Learning with Necessarily Missing Data
|
CC BY-SA 3.0
| null |
2014-10-02T17:32:42.003
|
2014-10-03T11:22:55.273
| null | null |
3510
|
[
"machine-learning"
] |
1197
|
1
| null | null |
11
|
2060
|
I've made a Naive Bayes classifier that uses the bag-of-words technique to classify spam posts on a message board. It works, but I think I could get much better results if my models considered the word orderings and phrases. (ex: 'girls' and 'live' may not trigger a high spam score, even though 'live girls' is most likely junk). How can I build a model that takes word ordering into account?
I've considered storing n-grams (check-out-these, out-these-live, these-live-girls), but this seems to radically increase the size of the dictionary I keep score in and causes inconsistency as phrases with very similar wording but different order will slip through.
I'm not tied to Bayesian classification, but I'd like something that someone without a strong background in statistics could grok and implement.
|
How can I classify text considering word order, instead of just using a bag-of-words approach?
|
CC BY-SA 3.0
| null |
2014-10-02T23:15:27.050
|
2017-11-13T14:20:37.103
|
2017-11-13T14:20:37.103
|
29575
|
3513
|
[
"classification"
] |
1198
|
2
| null |
155
|
12
| null |
As you mentioned, the API is the hard part, not the data. [Quandl](http://quandl.com) seems to solve this problem by providing over 10 million publicly available data sets under one easy, RESTful API. If programming isn't your strong suit, there is a free tool to make loading data into Excel very easy. Additionally, if you do enjoy programming, there are several native libraries in [R, Python, Java and more](https://www.quandl.com/help/libraries).
| null |
CC BY-SA 3.0
| null |
2014-10-03T00:25:48.490
|
2014-10-03T00:25:48.490
| null | null |
3514
| null |
1199
|
2
| null |
1196
|
1
| null |
I solved a similar problem not so long ago. Let X be a numerical variable that has missing values. First, we assign the value 0 to those instances in which the value is missing. Then, we add a new categorical variable to our model, X_missing, whose domain is {True, False}. We obtain a data model with mixed numerical/categorical variables. You can then apply gradient descent to train a regression model from these variables.
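A minimal sketch of that encoding with pandas (my own illustration; the column name is made up):

```python
import pandas as pd

df = pd.DataFrame({"sibling_gpa": [3.2, None, 2.8, None]})    # hypothetical feature

df["sibling_gpa_missing"] = df["sibling_gpa"].isna()           # new {True, False} variable
df["sibling_gpa"] = df["sibling_gpa"].fillna(0.0)              # zero-fill the numeric gap
print(df)
```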
| null |
CC BY-SA 3.0
| null |
2014-10-03T11:22:55.273
|
2014-10-03T11:22:55.273
| null | null |
2576
| null |
1200
|
2
| null |
1190
|
2
| null |
I think I'd try to model the rate at which the ambulances are released instead of the time it takes. That would let me use Poisson regression, which is the canonical type of GLM for rates (in R's glm, set family = "poisson"). However, in order to use Poisson regression, the data needs to have its variance equal to its mean, or at least close to it.
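Since the answer points at R's glm, here is a hedged sketch of the equivalent call in Python with statsmodels (assuming the data shown in the question has been loaded into a DataFrame; treating the outcome as a count is purely for illustration):

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("offload_times.csv")    # hypothetical export of the data in the question

model = smf.glm(
    "HospitalTime ~ C(Hospital) + C(Ambulance) + C(PatientPriority)"
    " + C(MonthOfCall) + C(HourOfCall)",
    data=df,
    family=sm.families.Poisson(),
).fit()
print(model.summary())
```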
| null |
CC BY-SA 3.0
| null |
2014-10-03T11:39:11.240
|
2014-10-03T11:39:11.240
| null | null |
1241
| null |
1202
|
2
| null |
1197
|
5
| null |
Try out some generative models like HMM.
Just check the following link:
[https://stats.stackexchange.com/questions/91290/how-do-i-train-hmms-for-classification](https://stats.stackexchange.com/questions/91290/how-do-i-train-hmms-for-classification)
| null |
CC BY-SA 3.0
| null |
2014-10-03T16:07:30.327
|
2014-10-03T16:07:30.327
|
2017-04-13T12:44:20.183
|
-1
|
11
| null |
1203
|
2
| null |
1197
|
7
| null |
There is a very simple hack to incorporate word order in an existing bag-of-words model implementation. Treat some of the phrases, such as the frequently occurring bi-grams (e.g. New York) as a unit, i.e. a single word instead of treating them as separate entities. This will ensure that "New York" is different from "York New". You could also define higher order word shingles such as for n=3,4 etc.
You could use the Lucene [ShingleFilter](http://lucene.apache.org/core/4_6_0/analyzers-common/org/apache/lucene/analysis/shingle/ShingleFilter.html) to decompose your document text into shingles as a pre-processing step and then apply the classifier on this decomposed text.
```
import java.io.*;
import org.apache.lucene.analysis.core.*;
import org.apache.lucene.analysis.*;
import org.apache.lucene.analysis.shingle.ShingleFilter;
import org.apache.lucene.analysis.standard.*;
import org.apache.lucene.util.*;
import org.apache.lucene.analysis.util.*;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.analysis.charfilter.*;
import org.apache.lucene.analysis.core.WhitespaceTokenizer;
class TestAnalyzer extends Analyzer {
TestAnalyzer() {
super();
}
protected TokenStreamComponents createComponents( String fieldName, Reader reader ) {
String token;
TokenStream result = null;
Tokenizer source = new WhitespaceTokenizer( Version.LUCENE_CURRENT, reader );
result = new ShingleFilter(source, 2, 2);
return new TokenStreamComponents( source, result );
}
}
public class LuceneTest {
public static void main(String[] args) throws Exception {
TestAnalyzer analyzer = new TestAnalyzer();
try {
TokenStream stream = analyzer.tokenStream("field", new StringReader("This is a sample sentence."));
CharTermAttribute termAtt = stream.addAttribute(CharTermAttribute.class);
stream.reset();
// print all tokens until stream is exhausted
while (stream.incrementToken()) {
System.out.println(termAtt.toString());
}
stream.end();
stream.close();
}
catch (Exception ex) {
ex.printStackTrace();
}
}
}
```
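If you are working in Python rather than Java/Lucene, the same shingling idea can be sketched with scikit-learn's `CountVectorizer` (my own example, not part of the answer above):

```python
from sklearn.feature_extraction.text import CountVectorizer

# unigrams plus bigrams, so "live girls" becomes a feature distinct from "live" and "girls"
vec = CountVectorizer(ngram_range=(1, 2))
vec.fit(["check out these live girls", "the girls live in new york"])
print(sorted(vec.vocabulary_))
```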
| null |
CC BY-SA 3.0
| null |
2014-10-03T18:05:13.163
|
2014-10-03T18:05:13.163
| null | null |
984
| null |
1204
|
1
| null | null |
-1
|
1050
|
I would like to do some data mining and NLP experiments to do some research
I have decided to use NLTK or related tools and software
Which environment or operating system do you suggest for my purpose? I mean doing research on NLP
Windows or Linux?
I am a user of Windows but I thought if Linux has better shell and related software for NLP tasks then I switch to Linux
What is your experience and your preferred OS?
As NLTK is in Python I thought Python is a good language for my purpose, do you suggest Python too?
|
NLP lab, linux or windows and which programming languages?
|
CC BY-SA 3.0
| null |
2014-10-03T20:12:24.427
|
2020-06-12T19:43:25.510
|
2020-06-12T19:43:25.510
|
1330
|
3436
|
[
"nlp",
"tools",
"linux",
"windows"
] |
1205
|
1
|
1212
| null |
1
|
940
|
Next week I'm going to begin prototyping a recommendation engine for work. I've implemented/completed the Netflix Challenge in Java before (for college) but have no real idea what to use for a production/enterprise level recommendation engine. Taking into consideration everything from a standalone programming language to things like Apache Mahout and Neo4j, does anyone have any advice on how to proceed?
|
Recommended Language/Framework for Building a New Recommendation Engine
|
CC BY-SA 3.0
| null |
2014-10-04T07:50:11.633
|
2014-10-09T10:50:39.197
| null | null |
3526
|
[
"recommender-system"
] |
1206
|
2
| null |
1195
|
1
| null |
Suppose your test object is `(sunny, hot, normal, TRUE)`. Look through the rules top to bottom and see if any of the conditions are matched. The first rule for example tests the `outlook` feature. The value doesn't match, so the rule isn't matched. Move on to the next rule. And so on. In this case, rule 5 matches the test case and the classification for the p lay variable is "yes".
More generally, for any test case, look at the values its features take and find the first rule that those values satisfy. The implication of that rule will be its classification.
| null |
CC BY-SA 3.0
| null |
2014-10-04T15:52:44.370
|
2014-10-04T15:52:44.370
| null | null |
51
| null |
1207
|
2
| null |
1197
|
3
| null |
There are a bunch of techniques. You have already mentioned n-grams; then there are word combinations and others. But the main problem (at least from your point of view) is that as the features become more complex (like n-grams) the feature count increases dramatically. This is manageable. Basically, before classification you must score your features and then threshold at a certain score; this way the features (or in your case n-grams) that are scored below a certain level are omitted and the feature count becomes manageable.
As for the scoring, there are numerous ways (which to select depends on your application) to score the features. You can begin with "bi-normal separation", "chi-square", "information gain", etc.
I don't know if this answer helps you, but if you are interested I can elaborate...
I forgot: in word combination you put a window of size m on the text and extract each combination of n words; of course n has to be at most m.
| null |
CC BY-SA 3.0
| null |
2014-10-04T19:37:51.523
|
2014-10-04T19:37:51.523
| null | null |
3530
| null |
1208
|
1
|
4895
| null |
3
|
73
|
Okay, here is the background:
I am doing text mining, and my basic flow is like this:
extract feature (n-gram), reduce feature count, score (tf-idf) and classify. for my own sake i am doing comparison between SVM and neural network classifiers. here is the weird part (or am i wrong and this is reasonable?), if i use 2gram the classifiers' result (accuracy/precision) is different and the SVM is the better one; but when i use 3-gram the results are exactly the same. what causes this? is there any explanation? is it the case of very separable classes?
|
What circumstances causes two different classifiers to classify data exactly like one another
|
CC BY-SA 3.0
| null |
2014-10-04T19:49:53.543
|
2015-01-17T04:42:47.530
| null | null |
3530
|
[
"text-mining",
"neural-network",
"svm"
] |
1209
|
2
| null |
1151
|
1
| null |
Coursera should be a good start; I have seen machine learning on their course list, if I'm not mistaken. Once you get the hang of it, a classic machine learning text like "Pattern Recognition and Machine Learning" by Bishop or other texts can familiarize you with different classes of algorithms.
If you want to go in depth, mathematics is a must, but if you just want to get familiar with the algorithms and use them, there's absolutely no need for it.
| null |
CC BY-SA 3.0
| null |
2014-10-04T19:55:31.480
|
2014-10-04T19:55:31.480
| null | null |
3530
| null |
1210
|
2
| null |
1059
|
0
| null |
I have seen many papers that are basically what you just described. There is nothing wrong with what you are doing, but there are severe limitations in how this can predict the market.
Let me give you an example: suppose that you have some data and you begin to predict. With each set of data, you predict the next data point, and then you feed this data point back to the system as input, and do this on and on... Most of the time the system will just continue the last trend and the time series won't break. This is not prediction, this is line continuation. Only when the system sees the break in real data will the prediction break, and this is the lag that you are talking about (if I understand your question right).
The first thing that you can do to enhance this is to extract some market indicators from the price. This would really help.
| null |
CC BY-SA 3.0
| null |
2014-10-04T20:09:17.547
|
2014-10-04T20:09:17.547
| null | null |
3530
| null |
1212
|
2
| null |
1205
|
0
| null |
If you merely want to scale up a simple collaborative filter (low rank matrix factorization), I'd suggest looking at [graphlab](http://docs.graphlab.org/collaborative_filtering.html). Another graph-based (or should I say [Giraph](http://giraph.apache.org/)?) solution is [Okapi](http://grafos.ml/okapi.html#collaborative). Spark's [MLLib](http://spark.apache.org/docs/latest/mllib-collaborative-filtering.html) is another option ([details](https://databricks.com/blog/2014/07/23/scalable-collaborative-filtering-with-spark-mllib.html)), and it also supports implicit feedback out of the box. Mahout's behind the curve today; I wouldn't bother with it until it is [migrated to Spark](https://www.mapr.com/blog/mahout-spark-what%E2%80%99s-new-recommenders).
If you want to do something that the libraries don't do, say with regularization, you'll have to roll your own solution in your general purpose programming language of choice. It's not hard to get a prototype running, but you might run into scale problems in production; that's why I recommended solutions that scale easily.
There are also black box recommender systems from commercial vendors but I have no experience with those.
| null |
CC BY-SA 3.0
| null |
2014-10-05T05:01:03.437
|
2014-10-07T07:45:19.727
|
2014-10-07T07:45:19.727
|
381
|
381
| null |
1213
|
1
| null | null |
4
|
4644
|
I'm looking for a supervised learning algorithm that can take 2d data for input and output. As an example of something similar to my data, consider a black image with some sparse white dots. Blur that image using a full range of grayscale. Then create a machine that can take the blurred image as input and produce the original sharp image as output.
I could make some sample 1D data by taking a region/radius around the original sharp point, but I don't know the exact radius. It would be significant data duplication and a lot of guessing.
Any good algorithm suggestions for this problem? Thanks for your time.
|
machine learning algorithms for 2d data?
|
CC BY-SA 3.0
| null |
2014-10-05T05:12:13.633
|
2017-10-26T15:12:58.433
|
2014-11-14T08:36:25.320
|
847
|
3539
|
[
"machine-learning",
"dimensionality-reduction"
] |
1214
|
1
|
2262
| null |
11
|
430
|
I am a hands on researcher and I like testing out viable solutions, so I tend to run a lot of experiments. For example, if I am calculating a similarity score between documents, I might want to try out many measures. In fact, for each measure I might need to make several runs to test the effect of some parameters.
So far, I've been tracking the runs' inputs and their results by writing out the results into files with as much info about the inputs. The problem is that retrieving a specific result sometimes becomes a challenge, even if I try to add the input info to the filename. I tried using a spreadsheet with links to results but this isn't making a huge difference.
What tools/process do you use for the book keeping of your experiments?
|
Book keeping of experiment runs and results
|
CC BY-SA 3.0
| null |
2014-10-05T06:25:27.303
|
2014-10-12T19:52:55.430
| null | null |
3540
|
[
"tools",
"experiments"
] |
1216
|
1
|
1220
| null |
9
|
5955
|
I am a 35 year old IT professional who is purely technical. I am good at programming, learning new technologies, understanding them and implementing them. I did not like mathematics at school, so I didn't score well in mathematics. I am very much interested in pursuing a career in Big Data analytics. I am more interested in analytics than in Big Data technologies (Hadoop etc.), though I do not dislike them. However, when I look around on the internet, I see that people who are good at analytics (data scientists) are mainly mathematics graduates who have done their PhDs and sound like intelligent creatures, who are far far ahead of me. I get scared sometimes to think whether my decision is correct, because learning advanced statistics on your own is very tough and requires a lot of hard work and time investment.
I would like to know whether my decision is correct, or should I leave this piece of work to only intellectuals who have spend their life in studying in prestigious colleges and earned their degrees and PHDs.
|
Career switch to Big Data Analytics
|
CC BY-SA 3.0
| null |
2014-10-06T12:00:55.303
|
2019-07-16T12:35:08.443
|
2015-01-10T11:10:53.493
|
97
|
3550
|
[
"career"
] |
1220
|
2
| null |
1216
|
4
| null |
Due to high demand, it is possible to start a career in data science without a formal degree. My experience is that having a degree is often a 'requirement' in job descriptions, but if the employer is desperate enough, then that won't matter. In general, it's harder to get into large corporations with formalized job application processes than smaller companies without them. "Knowing people" can get you a long way, in either case.
Regardless of your education, no matter how high demand is, you must have the skills to do the job.
You are correct in noting that advanced statistics and other mathematics are very hard to learn independently. It is a matter of how badly you want to make the career change. While some people do have 'natural talent' in mathematics, everybody does have to do the work to learn. Some may learn more quickly, but everybody has to take the time to learn.
What it comes down to is your ability to show potential employers that you have a genuine interest in the field, and that you will be able to learn quickly on the job. The more knowledge you have, the more projects you can share in a portfolio, and the more work experience under your belt, the higher level jobs that will be available to you. You may have to start in an entry level position first.
I could suggest ways to study mathematics independently, but that isn't part of your question. For now, just know that it's hard, but possible if you are determined to make a career change. Strike while the iron is hot (while demand is high).
| null |
CC BY-SA 3.0
| null |
2014-10-06T19:38:00.207
|
2014-10-06T19:38:00.207
| null | null |
3466
| null |
1221
|
2
| null |
1204
|
2
| null |
Python is easy to use and manage in Linux. The Python package manager PIP and the Python environment manager Virtualenv are both fully supported in Linux. See: [http://docs.python-guide.org/en/latest/starting/install/linux/](http://docs.python-guide.org/en/latest/starting/install/linux/)
The PyCharm IDE from JetBrains is also available for free in Linux (community edition), and it runs well. This is a great tool for a Python developer.
In addition, if you need to integrate other tools, they will most likely be best supported in Linux. Need a MapReduce in your app? Hadoop will run in Linux. Need to store data in MySQL? Linux. Most data science tools are open source, and most open source technologies are supported best in Linux.
There is also no cost for running Linux.
If you need to keep Windows for your primary work computer (like your laptop), then you can install Linux on a server and use Putty to SSH into the Linux machine. You would then perform your work on the Linux machine, though you would still be running Windows locally.
| null |
CC BY-SA 3.0
| null |
2014-10-06T19:54:57.197
|
2014-10-06T19:54:57.197
| null | null |
3466
| null |
1222
|
2
| null |
1216
|
1
| null |
You should look more into the infrastructure side of things if you don't like maths. The lower you go in the software stack, the further away you get from maths (of the data science sort). In other words, you could build the foundation that others will use to create the tools that will serve analysts. Think of companies like Cloudera, MapR, Databricks, etc. Skills that will come in handy are distributed systems and database design. You are not going to become a data scientist without maths; that's a ridiculous notion!
| null |
CC BY-SA 3.0
| null |
2014-10-06T23:38:17.523
|
2014-10-06T23:38:17.523
| null | null |
381
| null |
1223
|
1
|
1224
| null |
4
|
1877
|
I'm curious if anyone else has run into this. I have a data set with about 350k samples, each with 4k sparse features. The sparse fill rate is about 0.5%. The data is stored in a `scipy.sparse.csr.csr_matrix` object, with `dtype='numpy.float64'`.
I'm using this as an input to sklearn's Logistic Regression classifier. The [documentation](http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) indicates that sparse CSR matrices are acceptable inputs to this classifier. However, when I train the classifier, I get extremely bad memory performance; the memory usage of my process explodes from ~150 MB to fill all the available memory and then everything grinds to a halt as memory swapping to disk takes over.
Does anyone know why this classifier might expand the sparse matrix to a dense matrix? I'm using the default parameters for the classifier at the moment, within an updated Anaconda distribution. Thanks!
```
scipy.__version__ = '0.14.0'
sklearn.__version__ = '0.15.2'
```
|
Scikit Learn Logistic Regression Memory Leak
|
CC BY-SA 3.0
| null |
2014-10-07T17:27:22.063
|
2014-10-07T18:09:19.433
| null | null |
3568
|
[
"efficiency",
"performance",
"scikit-learn"
] |
1224
|
2
| null |
1223
|
4
| null |
Ok, this ended up being an RTFM situation, although in this case it was RTF error message.
While running this, I kept getting the following error:
```
DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel().
```
I assumed that, since this had to do with the target vector, and since it was a warning only, that it would just silently change my target vector to 1-D.
However, when I explicitly converted my target vector to 1-D, my memory problems went away. Apparently having the target vector in an incorrect form caused it to convert my input vectors into dense vectors from sparse vectors.
Lesson learned: follow the recommendations when sklearn 'suggests' you do something.
| null |
CC BY-SA 3.0
| null |
2014-10-07T18:09:19.433
|
2014-10-07T18:09:19.433
| null | null |
3568
| null |
1225
|
1
|
1235
| null |
2
|
411
|
I have product purchase count data which looks likes this:
```
user item1 item2
a 2 4
b 1 3
c 5 6
... ... ...
```
These data are imported into `python` using `numpy.genfromtxt`. Now I want to process it to get the correlation between `item1` purchase amount and `item2` purchase amount -- basically for each value `x` of `item1` I want to find all the users who bought `item1` in `x` quantity then average the `item2` over the same users. What is the best way to do this? I can do this by using `for` loops but I thought there might be something more efficient than that. Thanks!
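For what it's worth, a sketch of one vectorized way to do this with pandas instead of explicit loops (my own example; the file name is hypothetical):

```python
import pandas as pd

# whitespace-delimited file with the columns shown above: user item1 item2
df = pd.read_csv("purchases.csv", sep=r"\s+")

# for each distinct item1 quantity x, average item2 over the users who bought x of item1
print(df.groupby("item1")["item2"].mean())

# plain correlation between the two purchase counts
print(df["item1"].corr(df["item2"]))
```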
|
data processing, correlation calculation
|
CC BY-SA 3.0
| null |
2014-10-08T10:42:41.833
|
2014-10-09T10:58:02.557
| null | null |
3580
|
[
"python",
"correlation"
] |
1226
|
2
| null |
1204
|
0
| null |
If you want to do NLP, stay away from Windows (no offense). Python's NLTK stands out in terms of resources and community support for NLP.
Moreover, in Linux you can always leverage the benefits of awk, sed and grep, which are pretty useful tools when it comes to pre-processing. And there are no worries about funny characters getting introduced when you work with files.
| null |
CC BY-SA 3.0
| null |
2014-10-08T13:37:27.357
|
2014-10-08T13:37:27.357
| null | null |
3232
| null |
1227
|
1
| null | null |
8
|
164
|
I'm trying to figure out a good (and fast) solution to the following problem:
I have two models I'm working with, let's call them players and teams. A player can be on multiple teams and a team can have multiple players. I'm working on creating a UI element on a form that allows a user to select multiple teams (checkboxes). As the user is selecting (or deselecting) teams, I'd like to display the teams grouped by the players.
So for examples:
- If the selected teams have no players that intersect, each team would have its own section.
- If the user selects two teams and they have the same players, there would be one section containing the names of the two teams and all the players.
- If TEAM_A has players [1, 2, 4, 5] and TEAM_B has players [1, 3, 5, 6], there would be the following sections: SECTION_X = [TEAM_A, TEAM_B, 1, 5], SECTION_Y = [TEAM_A, 2, 4], SECTION_Z = [TEAM_B, 3, 6]
I hope that's clear. Essentially, I want to find the teams that players have in common and group by that. I was thinking maybe there is a way to do this by navigating a bipartite graph? Not exactly sure how though and I might be overthinking it. I was hoping to do this by creating some type of data structure on the server and using it on the client. I would love to hear your suggestions and I appreciate any help you can give!
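One way I was thinking about it, as a rough sketch (my own illustration, not a definitive design): key each player by the frozenset of selected teams they belong to, so every distinct key becomes a section:

```python
from collections import defaultdict

# selected teams -> their player ids (hypothetical data matching the example above)
teams = {"TEAM_A": {1, 2, 4, 5}, "TEAM_B": {1, 3, 5, 6}}

sections = defaultdict(list)
for player in set().union(*teams.values()):
    membership = frozenset(t for t, members in teams.items() if player in members)
    sections[membership].append(player)

for team_combo, players in sections.items():
    print(sorted(team_combo), sorted(players))
# ['TEAM_A', 'TEAM_B'] [1, 5]; ['TEAM_A'] [2, 4]; ['TEAM_B'] [3, 6]
```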
|
Union grouping in bipartite graphs?
|
CC BY-SA 3.0
| null |
2014-10-08T16:17:11.663
|
2015-02-25T11:18:26.313
| null | null |
3583
|
[
"graphs"
] |
1229
|
1
|
1231
| null |
7
|
8654
|
Lets say I have a database of users who rate different products on a scale of 1-5. Our recommendation engine recommends products to users based on the preferences of other users who are highly similar. My first approach to finding similar users was to use Cosine Similarity, and just treat user ratings as vector components. The main problem with this approach is that it just measures vector angles and doesn't take rating scale or magnitude into consideration.
My question is this:
Are there any drawbacks to just using the percentage difference between the vector components of two vectors as a measure of similarity? What disadvantages, if any, would I encounter if I used that method, instead of Cosine Similarity or Euclidean Distance?
For Example, why not just do this:
```
n = 5 stars
a = (1,4,4)
b = (2,3,4)
similarity(a,b) = 1 - ( (|1-2|/5) + (|4-3|/5) + (|4-4|/5) ) / 3 = .86667
```
Instead of Cosine Similarity :
```
a = (1,4,4)
b = (2,3,4)
CosSimilarity(a,b) =
(1*2)+(4*3)+(4*4) / sqrt( (1^2)+(4^2)+(4^2) ) * sqrt( (2^2)+(3^2)+(4^2) ) = .9697
```
|
Cosine Similarity for Ratings Recommendations? Why use it?
|
CC BY-SA 3.0
| null |
2014-10-09T01:41:11.797
|
2014-10-10T00:03:00.447
| null | null |
3587
|
[
"machine-learning",
"recommender-system"
] |
1230
|
2
| null |
1229
|
5
| null |
For ratings, I think you would need to use [Spearman's rank correlation](http://mathworld.wolfram.com/SpearmanRankCorrelationCoefficient.html) for your similarity metric.
Cosine similarity is often used when comparing documents, and perhaps would not be a good fit for rank variables. Euclidean distance is fine for lower dimensions, but comparison of rank variables normally call for Spearman.
Here's a [question on CrossValidated regarding Spearman (vs Pearson)](https://stats.stackexchange.com/questions/8071/how-to-choose-between-pearson-and-spearman-correlation), which might shed more light for you.
| null |
CC BY-SA 3.0
| null |
2014-10-09T01:57:41.010
|
2014-10-09T01:57:41.010
|
2017-04-13T12:44:20.183
|
-1
|
24
| null |
1231
|
2
| null |
1229
|
5
| null |
Rating bias and scale can easily be accounted for by standardization. The point of using Euclidean similarity metrics in vector space co-embeddings is that it reduces the recommendation problem to one of finding the nearest neighbors, which can be done efficiently both exactly and approximately. What you don't want to do in real-life settings is to have to compare every item/user pair and sort them according to some expensive metric. That just doesn't scale.
One trick is to use an approximation to cull the herd to a manageable size of tentative recommendations, then to run your expensive ranking on top of that.
edit: Microsoft Research is presenting a paper that covers this very topic at RecSys right now: [Speeding Up the Xbox Recommender System Using a Euclidean Transformation for Inner-Product Spaces](http://www.ulrichpaquet.com/Papers/SpeedUp.pdf)
| null |
CC BY-SA 3.0
| null |
2014-10-09T02:24:07.640
|
2014-10-10T00:03:00.447
|
2014-10-10T00:03:00.447
|
381
|
381
| null |