Id | PostTypeId | AcceptedAnswerId | ParentId | Score | ViewCount | Body | Title | ContentLicense | FavoriteCount | CreationDate | LastActivityDate | LastEditDate | LastEditorUserId | OwnerUserId | Tags |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
893
|
1
| null | null |
131
|
371604
|
I am building a regression model and I need to calculate the below to check for correlations
- Correlation between 2 Multi level categorical variables
- Correlation between a Multi level categorical variable and
continuous variable
- VIF(variance inflation factor) for a Multi
level categorical variables
I believe it's wrong to use the Pearson correlation coefficient for the above scenarios because Pearson only works for two continuous variables.
Please answer the following questions:
- Which correlation coefficient works best for the above cases?
- VIF calculation only works for continuous data, so what is the
alternative?
- What are the assumptions I need to check before I use the correlation coefficient you suggest?
- How to implement them in SAS & R?
|
How to get the correlation between two categorical variables, and between a categorical and a continuous variable?
|
CC BY-SA 3.0
| null |
2014-08-03T13:07:24.143
|
2017-09-27T14:52:39.337
|
2014-08-07T23:38:37.457
|
97
|
1151
|
[
"r",
"statistics",
"correlation"
] |
895
|
1
|
901
| null |
4
|
186
|
I assume that each person on Facebook is represented as a node (of a Graph) in Facebook, and the relationship/friendship between each person (node) is represented as an edge between the involved nodes.
Given that there are millions of people on Facebook, how is the Graph stored?
|
Facebook's Huge Database
|
CC BY-SA 3.0
| null |
2014-08-03T16:09:35.750
|
2014-08-04T15:59:12.573
| null | null |
2756
|
[
"graphs"
] |
896
|
1
|
932
| null |
4
|
1975
|
I am trying to implement the Brown Clustering Algorithm.
Paper details: "Class-Based n-gram Models of Natural Language" by Brown et al.
The algorithm is supposed to run in `O(|V|k^2)` where `|V|` is the size of the vocabulary and k is the number of clusters. I am unable to implement it this efficiently. In fact, the best I can manage is `O(|V|k^3)`, which is too slow. My current implementation for the main part of the algorithm is as follows:
```
for w = number of clusters + 1 to |V|
{
word = next most frequent word in the corpus
assign word to a new cluster
initialize MaxQuality to 0
initialize ArgMax vector to (0,0)
for i = 0 to number of clusters - 1
{
for j = i to number of clusters
{
Quality = Mutual Information if we merge cluster i and cluster j
if Quality > MaxQuality
{
MaxQuality = Quality
ArgMax = (i,j)
}
}
}
}
```
I compute quality as follows:
```
1. Before entering the second loop compute the pre-merge quality i.e. quality before doing any merges.
2. Every time a cluster-pair merge step is considered:
i. assign quality := pre-merge quality
ii. quality = quality - any terms in the mutual information equation that contain cluster i or cluster j (pre-merge)
iii. quality = quality + any terms in the mutual information equation that contain (cluster i U cluster j) (post-merge)
```
In my implementation, the first loop has approximately |V| iterations, and the second and third loops approximately k iterations each. Computing the quality at each step requires approximately a further k iterations. In total it runs in `O(|V|k^3)` time.
How do you get it to run in `O(|V|k^2)`?
|
How to implement Brown Clustering Algorithm in O(|V|k^2)
|
CC BY-SA 3.0
| null |
2014-08-03T16:38:38.853
|
2017-10-04T12:03:07.733
|
2017-10-04T12:03:07.733
|
11097
|
2817
|
[
"nlp",
"efficiency",
"clustering"
] |
897
|
2
| null |
895
|
3
| null |
Having worked with Facebook data a bit (harvested from Facebook users) we stored it just as a pair of values: USER_ID, FRIEND_USER_ID.
But I guess your question is a bit deeper? You can store it in different ways, depending on your research question. One interesting option is triads, for example - [http://mypersonality.org/wiki/doku.php?id=list_of_variables_available#triads](http://mypersonality.org/wiki/doku.php?id=list_of_variables_available#triads)
| null |
CC BY-SA 3.0
| null |
2014-08-03T20:21:08.150
|
2014-08-03T20:21:08.150
| null | null |
587
| null |
898
|
2
| null |
893
|
125
| null |
## Two Categorical Variables
Checking if two categorical variables are independent can be done with the Chi-Squared test of independence.
This is a typical [Chi-Square test](https://en.wikipedia.org/wiki/Chi-square_test): if we assume that the two variables are independent, then the expected values of the contingency table are determined by the marginal totals (row total × column total / grand total). We then check how far the actual values are from these expected values.
There is also [Cramér's V](http://en.wikipedia.org/wiki/Cram%C3%A9r%27s_V), a measure of association that follows from this test.
### Example
Suppose we have two variables
- gender: male and female
- city: Blois and Tours
We observed the following data:

Are gender and city independent? Let's perform a Chi-Squared test. Null hypothesis: they are independent; the alternative hypothesis is that they are related in some way.
Under the null hypothesis of independence, the expected counts are given by the marginal totals. So our expected values are the following

So we run the chi-squared test and the resulting p-value here can be seen as a measure of correlation between these two variables.
To compute Cramér's V we first find the normalizing factor chi-squared-max, which is typically the size of the sample, divide the chi-squared statistic by it and take the square root.

### R
```
tbl = matrix(data=c(55, 45, 20, 30), nrow=2, ncol=2, byrow=T)
dimnames(tbl) = list(City=c('B', 'T'), Gender=c('M', 'F'))
chi2 = chisq.test(tbl, correct=F)
c(chi2$statistic, chi2$p.value)
```
Here the p-value is 0.08 - quite small, but still not enough to reject the hypothesis of independence. So we can say that the "correlation" here is 0.08
We also compute V:
```
sqrt(chi2$statistic / sum(tbl))
```
And get 0.14 (the smaller v, the lower the correlation)
Consider another dataset
```
Gender
City M F
B 51 49
T 24 26
```
For this, it would give the following
```
tbl = matrix(data=c(51, 49, 24, 26), nrow=2, ncol=2, byrow=T)
dimnames(tbl) = list(City=c('B', 'T'), Gender=c('M', 'F'))
chi2 = chisq.test(tbl, correct=F)
c(chi2$statistic, chi2$p.value)
sqrt(chi2$statistic / sum(tbl))
```
The p-value is 0.72 which is far closer to 1, and v is 0.03 - very close to 0
## Categorical vs Numerical Variables
For this type we typically perform a [One-way ANOVA test](http://en.wikipedia.org/wiki/F_test#One-way_ANOVA_example): we calculate the within-group variance and the between-group variance and then compare them.
### Example
We want to study the relationship between absorbed fat from donuts vs the type of fat used to produce donuts (example is taken from [here](http://statland.org/R/R/R1way.htm))

Is there any dependence between the variables?
For that we conduct an ANOVA test and see that the p-value is just 0.007 - small enough to reject the null hypothesis of equal group means, so there is a relationship between these variables.
### R
```
t1 = c(164, 172, 168, 177, 156, 195)
t2 = c(178, 191, 197, 182, 185, 177)
t3 = c(175, 193, 178, 171, 163, 176)
t4 = c(155, 166, 149, 164, 170, 168)
val = c(t1, t2, t3, t4)
fac = gl(n=4, k=6, labels=c('type1', 'type2', 'type3', 'type4'))
aov1 = aov(val ~ fac)
summary(aov1)
```
Output is
```
Df Sum Sq Mean Sq F value Pr(>F)
fac 3 1636 545.5 5.406 0.00688 **
Residuals 20 2018 100.9
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
```
So we can take the p-value as the measure of correlation here as well.
## References
- https://en.wikipedia.org/wiki/Chi-square_test
- http://mlwiki.org/index.php/Chi-square_Test_of_Independence
- http://courses.statistics.com/software/R/R1way.htm
- http://mlwiki.org/index.php/One-Way_ANOVA_F-Test
- http://mlwiki.org/index.php/Cramer%27s_Coefficient
| null |
CC BY-SA 3.0
| null |
2014-08-04T09:42:03.590
|
2017-08-06T19:44:38.687
|
2017-08-06T19:44:38.687
|
37611
|
816
| null |
899
|
2
| null |
895
|
1
| null |
When I worked with social network data, we stored the "friendship" relation in a database in the table `Friends(friend_a, friend_b, ...)` with a B-Tree index on `(friend_a, friend_b)`, plus some partitioning.
In our case it was a little bit different since the graph was directed, so it wasn't really a "friendship" but rather a "following/follower" relationship. But for friendship I would just store two edges: both `(friend_a, friend_b)` and `(friend_b, friend_a)`.
We used MySQL to store the data, if it matters, but I guess it shouldn't.
| null |
CC BY-SA 3.0
| null |
2014-08-04T09:48:09.660
|
2014-08-04T09:48:09.660
| null | null |
816
| null |
900
|
2
| null |
700
|
4
| null |
I think hierarchical clustering would be more time efficient in your case (with a single dimension).
Depending on your task, you may implement something like this:
Having N datapoints di with their 1-dimension value xi:
- Sort datapoints based on their xi value.
- Calculate distances between adjacent datapoints (N-1 distances). Each distance must be assigned a pair of original datapoints (di, dj).
- Sort distances in descending order to generate list of datapoint pairs (di, dj), starting from the closest one.
- Iteratively unite datapoints (di, dj) into clusters, starting from beginning of the list (the closest pair). (Depending on current state of di and dj, uniting them means: (a) creating new cluster for two unclustered datapoints, (b) adding a datapoint to existing cluster and (c) uniting two clusters.)
- Stop uniting, if the distance is over some threshold.
- Create singleton clusters for datapoints which did not get into clusters.
This algorithm implements [single linkage](http://en.wikipedia.org/wiki/Single-linkage_clustering) clustering. It can be tuned easily to implement average linkage. [Complete linkage](http://en.wikipedia.org/wiki/Complete_linkage_clustering) will be less efficient, but the simpler linkages may well give good results depending on your data and task.
I believe that for 200K datapoints it should take under a second, if you use proper data structures for the above operations.
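For illustration, a compact sketch of this idea in R for one-dimensional data (the threshold value is arbitrary): sorting, taking gaps between neighbours, and cutting wherever the gap exceeds the threshold is equivalent to single linkage with a distance cut-off.
```
set.seed(1)
x <- runif(200000)          # made-up one-dimensional data
ord <- order(x)
gaps <- diff(x[ord])        # distances between adjacent sorted points
threshold <- 1e-6           # arbitrary merge threshold

# a new cluster starts wherever the gap exceeds the threshold
cluster <- integer(length(x))
cluster[ord] <- cumsum(c(1, gaps > threshold))
length(unique(cluster))     # number of clusters found
```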
| null |
CC BY-SA 3.0
| null |
2014-08-04T15:48:29.757
|
2014-08-04T15:48:29.757
| null | null |
2574
| null |
901
|
2
| null |
895
|
6
| null |
Strange as it sounds, graphs and graph databases are typically implemented as [linked lists](http://docs.oracle.com/javase/7/docs/api/java/util/LinkedList.html). As alluded to [here](http://docs.neo4j.org/chunked/stable/cypher-cookbook-newsfeed.html), even the most popular/performant graph database out there (neo4j) is secretly using something akin to a doubly-linked list.
Representing a graph this way has a number of significant benefits, but also a few drawbacks. Firstly, representing a graph this way means that you can do edge-based insertions in near-constant time. Secondly, this means that traversing the graph can happen extremely quickly, if we're only looking to either step up or down a linked list.
The biggest drawback of this though comes from something sometimes called The Justin Bieber Effect, where nodes with a large number of connections tend to be extremely slow to evaluate. Imagine having to traverse a million semi-redundant links every time someone linked to Justin Bieber.
I know that the awesome folks over at Neo4j are working on the second problem, but I'm not sure how they're going about it, or how much success they've had.
| null |
CC BY-SA 3.0
| null |
2014-08-04T15:59:12.573
|
2014-08-04T15:59:12.573
| null | null |
548
| null |
902
|
1
|
908
| null |
4
|
1903
|
Are there any general rules that one can use to infer what can be learned/generalized from a particular data set? Suppose the dataset was taken from a sample of people. Can these rules be stated as functions of the sample or total population?
I understand the above may be vague, so a case scenario: users participate in a search task, where the data are their queries, clicked results, and the HTML content (text only) of those results. Each of these is tagged with its user and timestamp. A user may generate a few pages - for a simple fact-finding task - or hundreds of pages - for a longer-term search task, like a class report.
Edit: In addition to generalizing about a population, given a sample, I'm interested in generalizing about an individual's overall search behavior, given a time slice. Theory and paper references are a plus!
|
When is there enough data for generalization?
|
CC BY-SA 3.0
| null |
2014-08-04T19:10:57.187
|
2014-08-07T00:20:05.817
|
2014-08-04T19:23:09.483
|
1097
|
1097
|
[
"machine-learning",
"data-mining",
"statistics",
"search"
] |
903
|
1
|
909
| null |
15
|
4778
|
I have a set of results from an A/B test (one control group, one feature group) which do not fit a Normal Distribution.
In fact the distribution resembles more closely the Landau Distribution.
I believe the independent t-test requires that the samples be at least approximately normally distributed, which discourages me from using the t-test as a valid method of significance testing.
But my question is:
At what point can one say that the t-test is not a good method of significance testing?
Or put another way, how can one qualify how reliable the p-values of a t-test are, given only the data set?
|
Analyzing A/B test results which are not normally distributed, using independent t-test
|
CC BY-SA 3.0
| null |
2014-08-04T22:27:10.837
|
2016-01-06T11:48:25.473
|
2016-01-06T11:48:25.473
|
11097
|
2830
|
[
"dataset",
"statistics",
"ab-test"
] |
904
|
1
| null | null |
21
|
12977
|
I need to generate periodic (daily, monthly) web analytics dashboard reports. They will be static and don't require interaction, so imagine a PDF file as the target output. The reports will mix tables and charts (mainly sparkline and bullet graphs created with ggplot2). Think Stephen Few/Perceptual Edge style dashboards, such as: 
but applied to web analytics.
Any suggestions on what packages to use for creating these dashboard reports?
My first intuition is to use R markdown and knitr, but perhaps you've found a better solution. I can't seem to find rich examples of dashboards generated from R.
|
What do you use to generate a dashboard in R?
|
CC BY-SA 3.0
| null |
2014-08-04T19:21:45.067
|
2021-03-11T19:06:05.303
|
2014-08-07T23:37:26.890
|
1156
| null |
[
"r",
"visualization"
] |
905
|
2
| null |
904
|
9
| null |
[Shiny](http://shiny.rstudio.com) is a framework for generating HTML-based apps that execute R code dynamically. Shiny apps can stand alone or be built into Markdown documents with `knitr`, and Shiny development is fully integrated into RStudio. There's even a free service called [shinyapps.io](https://www.shinyapps.io) for hosting Shiny apps, the `shiny` package has functions for deploying Shiny apps directly from R, and RStudio has a GUI interface for calling those functions. There's plenty more info in the Tutorial section of the site.
Since it essentially "compiles" the whole thing to JavaScript and HTML, you can use CSS to freely change the formatting and layout, although Shiny has decent wrapper functionality for this. But it just so happens that their default color scheme is similar to the one in the screenshot you posted.
edit: I just realized you don't need them to be dynamic. Shiny still makes very nice-looking webpages out of the box, with lots of options for rearranging elements. There's also functionality for downloading plots, so you can generate your dashboard every month by just updating your data files in the app, and then saving the resulting image to PDF.
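For reference, here is a minimal, self-contained Shiny app sketch; the title and the plotted numbers are placeholder values, not real analytics data:
```
library(shiny)

# a minimal static-looking dashboard page with a single plot
ui <- fluidPage(
  titlePanel("Monthly web analytics"),
  plotOutput("traffic")
)

server <- function(input, output) {
  output$traffic <- renderPlot({
    # placeholder data standing in for real visit counts
    plot(1:30, cumsum(rnorm(30, mean = 100, sd = 10)),
         type = "l", xlab = "day", ylab = "visits")
  })
}

shinyApp(ui = ui, server = server)
```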
| null |
CC BY-SA 3.0
| null |
2014-08-04T19:28:38.173
|
2014-08-04T19:34:39.323
| null | null |
1156
| null |
906
|
2
| null |
812
|
2
| null |
[PajekXXL](http://mrvar.fdv.uni-lj.si/pajek/PajekXXL.htm) is designed to handle enormous networks. But Pajek is also kind of a bizarre program with an unintuitive interface.
| null |
CC BY-SA 3.0
| null |
2014-08-04T23:37:27.877
|
2014-08-04T23:37:27.877
| null | null |
1156
| null |
907
|
2
| null |
904
|
16
| null |
I think that `Shiny` is overkill in this situation and doesn't match your requirement that the dashboard reports be static. I guess that your use of the term "dashboard" is a bit confusing, as some people might consider that it puts more emphasis on interactivity (real-time dashboards) rather than on information layout, which is my understanding (confirmed by the "static" requirement).
My recommendation to you is to use R Markdown and knitr, especially since these packages have a much lower learning curve than Shiny. Moreover, I have recently run across an R package which, in my view, ideally suits your requirement of embedding small charts/plots in a report, as presented in your picture above. This package generates static or dynamic graphical tables and is called sparkTable ([http://cran.r-project.org/web/packages/sparkTable](http://cran.r-project.org/web/packages/sparkTable)). Its vignette is available here (there is no link to it on the package's home page): [http://publik.tuwien.ac.at/files/PubDat_228663.pdf](http://publik.tuwien.ac.at/files/PubDat_228663.pdf). Should you ever need some interactivity, `sparkTable` provides some via its simple interface to `Shiny`.
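For the rendering step itself, a minimal sketch (assuming the `rmarkdown` package is installed; `dashboard.Rmd` is just a placeholder file name):
```
library(rmarkdown)

# knit and render the dashboard report to a static PDF
render("dashboard.Rmd",
       output_format = "pdf_document",
       output_file   = paste0("dashboard-", Sys.Date(), ".pdf"))
```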
| null |
CC BY-SA 3.0
| null |
2014-08-05T07:17:34.750
|
2014-08-05T07:17:34.750
| null | null |
2452
| null |
908
|
2
| null |
902
|
6
| null |
It is my understanding that random sampling is a mandatory condition for making any generalization statements. IMHO, other parameters, such as sample size, just affect the probability level (confidence) of the generalization. Furthermore, to clarify @ffriend's comment, I believe that you have to calculate the needed sample size based on desired values of the confidence interval, effect size, statistical power and number of predictors (this is based on Cohen's work - see the References section at the following link). For multiple regression, you can use the following calculator: [http://www.danielsoper.com/statcalc3/calc.aspx?id=1](http://www.danielsoper.com/statcalc3/calc.aspx?id=1).
More information on how to select, calculate and interpret effect sizes can be found in the following nice and comprehensive paper, which is freely available: [http://jpepsy.oxfordjournals.org/content/34/9/917.full](http://jpepsy.oxfordjournals.org/content/34/9/917.full).
If you're using `R` (and even, if you don't), you may find the following Web page on confidence intervals and R interesting and useful: [http://osc.centerforopenscience.org/static/CIs_in_r.html](http://osc.centerforopenscience.org/static/CIs_in_r.html).
Finally, the following comprehensive guide to survey sampling can be helpful, even if you're not using survey research designs. In my opinion, it contains a wealth of useful information on sampling methods, sampling size determination (including calculator) and much more: [http://home.ubalt.edu/ntsbarsh/stat-data/Surveys.htm](http://home.ubalt.edu/ntsbarsh/stat-data/Surveys.htm).
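To make the sample-size calculation concrete, here is a minimal sketch assuming the `pwr` package; the number of predictors (5) and the medium effect size (f2 = 0.15, per Cohen) are made-up illustration values:
```
library(pwr)

# a priori sample size for multiple regression with 5 predictors,
# medium effect size f2 = 0.15, alpha = 0.05, power = 0.80
res <- pwr.f2.test(u = 5, f2 = 0.15, sig.level = 0.05, power = 0.80)
res

# total required n is approximately u + v + 1
ceiling(res$u + res$v + 1)
```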
| null |
CC BY-SA 3.0
| null |
2014-08-05T08:09:17.240
|
2014-08-05T08:09:17.240
| null | null |
2452
| null |
909
|
2
| null |
903
|
9
| null |
The distribution of your data doesn't need to be normal, it's the [Sampling Distribution](https://en.wikipedia.org/wiki/Sampling_distribution) that has to be nearly normal. If your sample size is big enough, then the sampling distribution of means from the Landau Distribution should be nearly normal, due to the [Central Limit Theorem](https://en.wikipedia.org/wiki/Central_limit_theorem).
So it means you should be able to safely use t-test with your data.
### Example
Let's consider this example: suppose we have a population with [Lognormal distribution](https://en.wikipedia.org/wiki/Log-normal_distribution) with mu=0 and sd=0.5 (it looks a bit similar to Landau)

So we sample 30 observations from this distribution 1000 times, each time calculating the mean of the sample
And this is what we get

Looks quite normal, doesn't it? If we increase the sample size, it's even more apparent

### R code
```
x = seq(0, 4, 0.05)
y = dlnorm(x, mean=0, sd=0.5)
plot(x, y, type='l', bty='n')
n = 30
m = 1000
set.seed(0)
samp = rep(NA, m)
for (i in 1:m) {
samp[i] = mean(rlnorm(n, mean=0, sd=0.5))
}
hist(samp, col='orange', probability=T, breaks=25, main='sample size = 30')
x = seq(0.5, 1.5, 0.01)
lines(x, dnorm(x, mean=mean(samp), sd=sd(samp)))
n = 300
samp = rep(NA, m)
for (i in 1:m) {
samp[i] = mean(rlnorm(n, mean=0, sd=0.5))
}
hist(samp, col='orange', probability=T, breaks=25, main='sample size = 300')
x = seq(1, 1.25, 0.005)
lines(x, dnorm(x, mean=mean(samp), sd=sd(samp)))
```
| null |
CC BY-SA 3.0
| null |
2014-08-05T08:12:15.647
|
2014-08-05T08:12:15.647
| null | null |
816
| null |
910
|
2
| null |
586
|
13
| null |
Here are the basic Natural Language Processing capabilities (or annotators) that are usually necessary to extract language units from textual data for the sake of search and other applications:
[Sentence breaker](http://en.wikipedia.org/wiki/Sentence_boundary_disambiguation) - to split text (usually, text paragraphs) to sentences. Even in English it can be hard for some cases like "Mr. and Mrs. Brown stay in room no. 20."
[Tokenizer](http://en.wikipedia.org/wiki/Tokenization) - to split text or sentences to words or word-level units, including punctuation. This task is not trivial for languages with no spaces and no stable understanding of word boundaries (e.g. Chinese, Japanese)
[Part-of-speech Tagger](http://en.wikipedia.org/wiki/POS_tagger) - to guess the part of speech of each word in the context of the sentence; usually each word is assigned a so-called POS-tag from a tagset developed in advance to serve your final task (for example, parsing).
[Lemmatizer](http://en.wikipedia.org/wiki/Lemmatization) - to convert a given word into its canonical form ([lemma](http://en.wikipedia.org/wiki/Lemma_(morphology))). Usually you need to know the word's POS-tag. For example, word "heating" as gerund must be converted to "heat", but as noun it must be left unchanged.
[Parser](http://en.wikipedia.org/wiki/Parser) - to perform syntactic analysis of the sentence and build a syntactic tree or graph. There're two main ways to represent syntactic structure of sentence: via [constituency or dependency](http://en.wikipedia.org/wiki/Dependency_grammar#Dependency_vs._constituency).
[Summarizer](http://en.wikipedia.org/wiki/Automatic_summarization) - to generate a short summary of the text by selecting a set of the most informative sentences of the document, representing its main idea. However, this can be done in a more intelligent manner than just selecting sentences from the existing ones.
[Named Entity Recognition](http://en.wikipedia.org/wiki/Named-entity_recognition) - to extract so-called named entities from the text. Named entities are chunks of words from the text which refer to an entity of a certain type. The types may include: geographic locations (countries, cities, rivers, ...), person names, organization names etc. Before going into the NER task you must understand what you want to get and, possibly, predefine a taxonomy of named entity types to resolve.
[Coreference Resolution](http://en.wikipedia.org/wiki/Coreference_resolution) - to group named entities (or, depending on your task, any other text units) into clusters corresponding to a single real object/meaning. For example, "B. Gates", "William Gates", "Founder of Microsoft" etc. in one text may mean the same person, referenced by using different expressions.
There are many other interesting NLP applications/annotators (see the [NLP tasks category](http://en.wikipedia.org/wiki/Category:Tasks_of_natural_language_processing)): sentiment analysis, machine translation, etc. There are many books on this; the classical one is "Speech and Language Processing" by Daniel Jurafsky and James H. Martin, but it can be too detailed for you.
| null |
CC BY-SA 3.0
| null |
2014-08-05T09:07:42.393
|
2015-10-12T07:20:26.220
|
2015-10-12T07:20:26.220
|
12363
|
2574
| null |
911
|
2
| null |
903
|
1
| null |
Basically an independent t-test, or 2-sample t-test, is used to check if the averages of the two samples are significantly different. Or, to put it in other words, if there is a significant difference between the means of the two samples.
Now, the means of those 2 samples are two statistics which, according to the CLT, have a normal distribution, provided there are enough samples. Note that the CLT works regardless of the distribution from which the mean statistic is built.
Normally one could use a z-test, but if the variances are estimated from the sample (because they are unknown), some additional uncertainty is introduced, which is incorporated in the t distribution. That's why the 2-sample t-test applies here.
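As a minimal illustration (with made-up skewed samples standing in for your control and feature groups):
```
set.seed(1)
# two made-up right-skewed samples, e.g. control vs. feature group
control <- rlnorm(200, meanlog = 0.00, sdlog = 0.5)
feature <- rlnorm(200, meanlog = 0.05, sdlog = 0.5)

# Welch two-sample t-test: compares the two sample means,
# relying on the CLT for the distribution of those means
t.test(control, feature)
```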
| null |
CC BY-SA 3.0
| null |
2014-08-05T10:15:33.713
|
2014-08-05T10:15:33.713
| null | null |
108
| null |
912
|
2
| null |
744
|
7
| null |
There is an excellent comparison of the common inner-product-based similarity metrics [here](http://brenocon.com/blog/2012/03/cosine-similarity-pearson-correlation-and-ols-coefficients/).
In particular, Cosine Similarity is normalized to lie within $[-1,1]$, unlike the dot product which can be any real number. But, as everyone else is saying, that will require ignoring the magnitude of the vectors. Personally, I think that's a good thing. I think of magnitude as an internal (within-vector) structure, and angle between vectors as external (between vector) structure. They are different things and (in my opinion) are often best analyzed separately. I can't imagine a situation where I would rather compute inner products than compute cosine similarities and just compare the magnitudes afterward.
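A tiny sketch of the difference, with made-up vectors:
```
a <- c(1, 2, 3)
b <- c(2, 4, 6)   # same direction as a, twice the magnitude

dot    <- sum(a * b)                                # unbounded inner product
cosine <- dot / (sqrt(sum(a^2)) * sqrt(sum(b^2)))   # always in [-1, 1]

dot     # 28: depends on the magnitudes
cosine  # 1: identical direction, magnitudes ignored
```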
| null |
CC BY-SA 4.0
| null |
2014-08-05T12:29:05.300
|
2020-09-04T15:43:15.887
|
2020-09-04T15:43:15.887
|
1156
|
1156
| null |
913
|
2
| null |
902
|
4
| null |
There are two rules for generalizability:
- The sample must be representative. In expectation, at least, the distribution of features in your sample must match the distribution of features in the population. When you are fitting a model with a response variable, this includes features that you do not observe, but that affect any response variables in your model. Since it is, in many cases, impossible to know what you do not observe, random sampling is used.
The idea with randomization is that a random sample, up to sampling error, must accurately reflect the distribution of all features in the population, observed and otherwise. This is why randomization is the "gold standard," but if sample control is available by some other technique, or it is defensible to argue that there are no omitted features, then it isn't always necessary.
- Your sample must be large enough that the effect of sampling error on the feature distribution is relatively small. This is, again, to ensure representativeness. But deciding who to sample is different from deciding how many people to sample.
Since it sounds like you're fitting a model, there's the additional consideration that certain important combinations of features could be relatively rare in the population. This is not an issue for generalizability, but it bears heavily on your considerations for sample size. For instance, I'm working on a project now with (non-big) data that was originally collected to understand the experiences of minorities in college. As such, it was critically important to ensure that statistical power was high specifically in the minority subpopulation. For this reason, blacks and Latinos were deliberately oversampled. However, the proportion by which they were oversampled was also recorded. These are used to compute survey weights. These can be used to re-weight the sample so as to reflect the estimated population proportions, in the event that a representative sample is required.
An additional consideration arises if your model is hierarchical. A canonical use for a hierarchical model is one of children's behavior in schools. Children are "grouped" by school and share school-level traits. Therefore a representative sample of schools is required, and within each school a representative sample of children is required. This leads to stratified sampling. This and some other sampling designs are reviewed in surprising depth on [Wikipedia](http://en.wikipedia.org/wiki/Sampling_(statistics)#Sampling_methods).
| null |
CC BY-SA 3.0
| null |
2014-08-05T12:58:16.000
|
2014-08-05T12:58:16.000
|
2020-06-16T11:08:43.077
|
-1
|
1156
| null |
915
|
1
| null | null |
10
|
346
|
Is there a known general table of statistical techniques that explain how they scale with sample size and dimension? For example, a friend of mine told me the other day that the computation time of simply quick-sorting one dimensional data of size n goes as n*log(n).
So, for example, if we regress y against X where X is a d-dimensional variable, does it go as O(n^2*d)? How does it scale if I want to find the solution via exact Gauss-Markov solution vs numerical least squares with Newton method? Or simply getting the solution vs using significance tests?
I guess I more want a good source of answers (like a paper that summarizes the scaling of various statistical techniques) than a good answer here. Like, say, a list that includes the scaling of multiple regression, logistic regression, PCA, cox proportional hazard regression, K-means clustering, etc.
|
How do various statistical techniques (regression, PCA, etc) scale with sample size and dimension?
|
CC BY-SA 3.0
| null |
2014-08-05T18:36:12.753
|
2014-08-14T17:28:40.453
|
2014-08-05T18:46:46.157
|
2841
|
2841
|
[
"bigdata",
"statistics",
"efficiency",
"scalability"
] |
916
|
2
| null |
915
|
6
| null |
Most of the efficient (and non-trivial) statistical algorithms are iterative in nature, so the worst-case analysis `O()` is irrelevant, as the worst case is 'it fails to converge'.
Nevertheless, when you have a lot of data, even linear algorithms (`O(n)`) can be slow, and you then need to focus on the constant 'hidden' behind the notation. For instance, computing the variance of a single variate is naively done by scanning the data twice (once to compute an estimate of the mean, and then once to estimate the variance). But it also can be done in [one pass](http://en.wikipedia.org/wiki/Algorithms_for_calculating_variance#Online_algorithm).
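As an illustration, here is a sketch of that one-pass (online) variance computation, following Welford's algorithm from the linked Wikipedia page:
```
# one-pass (online) sample variance: a single scan, no second pass for the mean
online_variance <- function(x) {
  n <- 0; mu <- 0; M2 <- 0
  for (xi in x) {
    n     <- n + 1
    delta <- xi - mu
    mu    <- mu + delta / n
    M2    <- M2 + delta * (xi - mu)   # uses the updated mean
  }
  M2 / (n - 1)
}

x <- rnorm(1e5)
all.equal(online_variance(x), var(x))  # TRUE
```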
For iterative algorithms, what matters more is the convergence rate and the number of parameters as a function of the data dimensionality, an element that greatly influences convergence. Many models/algorithms grow a number of parameters that is exponential in the number of variables (e.g. splines), while others grow linearly (e.g. support vector machines, random forests, ...).
| null |
CC BY-SA 3.0
| null |
2014-08-05T20:24:09.200
|
2014-08-05T20:24:09.200
| null | null |
172
| null |
917
|
1
|
918
| null |
12
|
2873
|
I am attempting to solve a set of equations which has 40 independent variables (x1, ..., x40) and one dependent variable (y). The total number of equations (number of rows) is ~300, and I want to solve for the set of 40 coefficients that minimizes the total sum-of-square error between y and the predicted value.
My problem is that the matrix is very sparse and I do not know the best way to solve the system of equations with sparse data. An example of the dataset is shown below:
```
y x1 x2 x3 x4 x5 x6 ... x40
87169 14 0 1 0 0 2 ... 0
46449 0 0 4 0 1 4 ... 12
846449 0 0 0 0 0 3 ... 0
....
```
I am currently using a Genetic Algorithm to solve this and the results are coming out
with roughly a factor of two difference between observed and expected.
Can anyone suggest different methods or techniques which are capable of solving a set of equations with sparse data?
|
Solving a system of equations with sparse data
|
CC BY-SA 3.0
| null |
2014-08-05T20:45:01.383
|
2016-10-18T15:44:16.657
|
2016-10-18T15:44:16.657
|
20343
|
802
|
[
"machine-learning",
"regression",
"algorithms",
"genetic"
] |
918
|
2
| null |
917
|
11
| null |
If I understand you correctly, this is the case of multiple linear regression with sparse data (sparse regression). Assuming that, I hope you will find the following resources useful.
1) NCSU lecture slides on sparse regression with overview of algorithms, notes, formulas, graphics and references to literature: [http://www.stat.ncsu.edu/people/zhou/courses/st810/notes/lect23sparse.pdf](http://www.stat.ncsu.edu/people/zhou/courses/st810/notes/lect23sparse.pdf)
2) `R` ecosystem offers many packages, useful for sparse regression analysis, including:
- Matrix (http://cran.r-project.org/web/packages/Matrix)
- SparseM (http://cran.r-project.org/web/packages/SparseM)
- MatrixModels (http://cran.r-project.org/web/packages/MatrixModels)
- glmnet (http://cran.r-project.org/web/packages/glmnet)
- flare (http://cran.r-project.org/web/packages/flare)
3) A blog post with an example of sparse regression solution, based on `SparseM`: [http://aleph-nought.blogspot.com/2012/03/multiple-linear-regression-with-sparse.html](http://aleph-nought.blogspot.com/2012/03/multiple-linear-regression-with-sparse.html)
4) A blog post on using sparse matrices in R, which includes a primer on using `glmnet`: [http://www.johnmyleswhite.com/notebook/2011/10/31/using-sparse-matrices-in-r](http://www.johnmyleswhite.com/notebook/2011/10/31/using-sparse-matrices-in-r)
5) More examples and some discussion on the topic can be found on StackOverflow: [https://stackoverflow.com/questions/3169371/large-scale-regression-in-r-with-a-sparse-feature-matrix](https://stackoverflow.com/questions/3169371/large-scale-regression-in-r-with-a-sparse-feature-matrix)
UPDATE (based on your comment):
If you're trying to solve an LP problem with constraints, you may find this theoretical paper useful: [http://web.stanford.edu/group/SOL/papers/gmsw84.pdf](http://web.stanford.edu/group/SOL/papers/gmsw84.pdf).
Also, check R package limSolve: [http://cran.r-project.org/web/packages/limSolve](http://cran.r-project.org/web/packages/limSolve). And, in general, check packages in CRAN Task View "Optimization and Mathematical Programming": [http://cran.r-project.org/web/views/Optimization.html](http://cran.r-project.org/web/views/Optimization.html).
Finally, check the book "Using R for Numerical Analysis in Science and Engineering" (by Victor A. Bloomfield). It has a section on solving systems of equations, represented by sparse matrices (section 5.7, pages 99-104), which includes examples, based on some of the above-mentioned packages: [http://books.google.com/books?id=9ph_AwAAQBAJ&pg=PA99&lpg=PA99&dq=r+limsolve+sparse+matrix&source=bl&ots=PHDE8nXljQ&sig=sPi4n5Wk0M02ywkubq7R7KD_b04&hl=en&sa=X&ei=FZjiU-ioIcjmsATGkYDAAg&ved=0CDUQ6AEwAw#v=onepage&q=r%20limsolve%20sparse%20matrix&f=false](http://books.google.com/books?id=9ph_AwAAQBAJ&pg=PA99&lpg=PA99&dq=r+limsolve+sparse+matrix&source=bl&ots=PHDE8nXljQ&sig=sPi4n5Wk0M02ywkubq7R7KD_b04&hl=en&sa=X&ei=FZjiU-ioIcjmsATGkYDAAg&ved=0CDUQ6AEwAw#v=onepage&q=r%20limsolve%20sparse%20matrix&f=false).
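To make option 2 concrete, here is a small sketch using `Matrix` and `glmnet`; the data below are simulated stand-ins for your 300 x 40 system, not your actual data:
```
library(Matrix)
library(glmnet)

set.seed(42)
n <- 300; p <- 40
# simulated sparse design matrix (mostly zeros) and response
X <- Matrix(rbinom(n * p, size = 5, prob = 0.05), nrow = n, sparse = TRUE)
beta <- runif(p, -2, 2)
y <- as.numeric(X %*% beta) + rnorm(n)

# cross-validated lasso (alpha = 1); use alpha = 0 for ridge
cvfit <- cv.glmnet(X, y, alpha = 1)
coef(cvfit, s = "lambda.min")
```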
| null |
CC BY-SA 3.0
| null |
2014-08-05T22:34:04.550
|
2014-08-06T21:32:39.427
|
2017-05-23T12:38:53.587
|
-1
|
2452
| null |
919
|
1
|
939
| null |
9
|
2286
|
Data set looks like:
- 25000 observations
- up to 15 predictors of different types: numeric, multi-class categorical, binary
- target variable is binary
Which cross-validation method is typical for this type of problem?
By default I'm using K-Fold. How many folds is enough in this case? (One of the models I use is random forest, which is time consuming...)
|
Which cross-validation type best suits a binary classification problem
|
CC BY-SA 3.0
| null |
2014-08-06T08:41:44.967
|
2017-02-25T19:47:29.907
| null | null |
97
|
[
"classification",
"cross-validation"
] |
920
|
1
|
933
| null |
1
|
276
|
There is a text summarization project called SUMMARIST. Apparently it is able to perform abstractive text summarization. I want to give it a try but unfortunately the demo links on the website do not work. Does anybody have any information regarding this? How can I test this tool?
[http://www.isi.edu/natural-language/projects/SUMMARIST.html](http://www.isi.edu/natural-language/projects/SUMMARIST.html)
Regards,
PasMod
|
SUMMARIST: Automated Text Summarization
|
CC BY-SA 3.0
| null |
2014-08-06T09:02:01.033
|
2018-09-07T03:25:01.003
| null | null |
979
|
[
"text-mining"
] |
921
|
1
|
924
| null |
3
|
269
|
I am fitting a model in R.
- use createFolds method to create several k folds from the data set
- loop through the folds, repeating the following on each iteration:
train the model on k-1 folds
predict the outcomes for the i-th fold
calculate prediction accuracy
- average the accuracy
Does R have a function that makes folds itself, repeats model tuning/predictions and gives the average accuracy back?
|
Avoid iterations while calculating average model accuracy
|
CC BY-SA 3.0
| null |
2014-08-06T09:03:20.857
|
2014-08-06T12:16:22.850
| null | null |
97
|
[
"r",
"accuracy",
"cross-validation",
"sampling",
"beginner"
] |
922
|
1
|
925
| null |
3
|
60
|
I have a set of documents and I want to classify them as true or false.
My question is: do I have to take all the words in the documents and classify them depending on the words they share, or can I take only some words that I am interested in and compare those against the documents? Which approach is more efficient for classifying documents and can work with an SVM?
|
Can I classify a set of documents using a classification method with a limited number of concepts?
|
CC BY-SA 3.0
| null |
2014-08-06T09:08:08.113
|
2014-08-06T13:15:51.460
| null | null |
2850
|
[
"machine-learning",
"classification",
"text-mining"
] |
923
|
2
| null |
919
|
3
| null |
I think in your case a 10-fold CV will be O.K.
I think it is more important to randomize the cross-validation process than to select the ideal value for k.
So repeat the CV process several times with random splits and compute the variance of your classification results to determine whether the results are reliable or not.
| null |
CC BY-SA 3.0
| null |
2014-08-06T09:08:12.117
|
2014-08-06T09:08:12.117
| null | null |
979
| null |
924
|
2
| null |
921
|
4
| null |
Yes, you can do all this using the Caret ([http://caret.r-forge.r-project.org/training.html](http://caret.r-forge.r-project.org/training.html)) package in R. For example,
```
fitControl <- trainControl(## 10-fold CV
                           method = "repeatedcv",
                           number = 10,
                           ## repeated ten times
                           repeats = 10)

gbmFit1 <- train(Class ~ ., data = training,
                 method = "gbm",
                 trControl = fitControl,
                 ## This last option is actually one
                 ## for gbm() that passes through
                 verbose = FALSE)
gbmFit1
```
which will give the output
```
Stochastic Gradient Boosting
157 samples
60 predictors
2 classes: 'M', 'R'
No pre-processing
Resampling: Cross-Validated (10 fold, repeated 10 times)
Summary of sample sizes: 142, 142, 140, 142, 142, 141, ...
Resampling results across tuning parameters:
interaction.depth n.trees Accuracy Kappa Accuracy SD Kappa SD
1 50 0.8 0.5 0.1 0.2
1 100 0.8 0.6 0.1 0.2
1 200 0.8 0.6 0.09 0.2
2 50 0.8 0.6 0.1 0.2
2 100 0.8 0.6 0.09 0.2
2 200 0.8 0.6 0.1 0.2
3 50 0.8 0.6 0.09 0.2
3 100 0.8 0.6 0.09 0.2
3 200 0.8 0.6 0.08 0.2
Tuning parameter 'shrinkage' was held constant at a value of 0.1
Accuracy was used to select the optimal model using the largest value.
The final values used for the model were n.trees = 150, interaction.depth = 3
and shrinkage = 0.1.
```
Caret offers many other options as well so should be able to suit your needs.
| null |
CC BY-SA 3.0
| null |
2014-08-06T12:16:22.850
|
2014-08-06T12:16:22.850
| null | null |
802
| null |
925
|
2
| null |
922
|
1
| null |
Both methods work. However, if you retain all words in documents you would essentially be working with high dimensional vectors (each term representing one dimension). Consequently, a classifier, e.g. SVM, would take more time to converge.
It is thus a standard practice to reduce the term-space dimensionality by pre-processing steps such as stop-word removal, stemming, Principal Component Analysis (PCA) etc.
One approach could be to analyze the document corpora by a topic modelling technique such as LDA and then retaining only those words which are representative of the topics, i.e. those which have high membership values in a single topic class.
Another approach (inspired by information retrieval) could be to retain the top K tf-idf terms from each document.
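As a rough base-R sketch of that last idea (keeping the top-K tf-idf terms per document; the toy documents below are made up):
```
docs <- c("the printer is broken again",
          "printer toner ordered for the office",
          "the meeting is moved to friday")

# tokenize and build a term-frequency matrix (documents x terms)
tokens <- strsplit(tolower(docs), "\\s+")
vocab  <- sort(unique(unlist(tokens)))
tf <- t(sapply(tokens, function(tok) table(factor(tok, levels = vocab))))

# inverse document frequency and tf-idf weights
idf   <- log(length(docs) / colSums(tf > 0))
tfidf <- sweep(tf, 2, idf, "*")

# keep the top-K terms of each document
K <- 3
apply(tfidf, 1, function(row) names(sort(row, decreasing = TRUE))[1:K])
```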
| null |
CC BY-SA 3.0
| null |
2014-08-06T12:27:31.277
|
2014-08-06T13:15:51.460
|
2014-08-06T13:15:51.460
|
984
|
984
| null |
926
|
2
| null |
917
|
7
| null |
[Aleksandr's answer](https://datascience.stackexchange.com/a/918/2853) is completely correct.
However, the way the question is posed implies that this is a straightforward ordinary least squares regression question: minimizing the sum of squared residuals between a dependent variable and a linear combination of predictors.
Now, while there may be many zeros in your design matrix, your system as such is not overly large: 300 observations on 40 predictors is no more than medium-sized. You can run such a regression using R without any special efforts for sparse data. Just use the `lm()` command (for "linear model"). Use `?lm` to see the help page. And note that `lm` will by default silently add a constant column of ones to your design matrix (the intercept) - include a `-1` on the right hand side of your formula to suppress this. Overall, assuming all your data (and nothing else) is in a `data.frame` called `foo`, you can do this:
```
model <- lm(y~.-1,data=foo)
```
And then you can look at parameter estimates etc. like this:
```
summary(model)
residuals(model)
```
If your system is much larger, say on the order of 10,000 observations and hundreds of predictors, looking at specialized sparse solvers as per [Aleksandr's answer](https://datascience.stackexchange.com/a/918/2853) may start to make sense.
Finally, in your comment to [Aleksandr's answer](https://datascience.stackexchange.com/a/918/2853), you mention constraints on your equation. If that is actually your key issue, there are ways to calculate constrained least squares in R. I personally like `pcls()` in the `mgcv` package. Perhaps you want to edit your question to include the type of constraints (box constraints, nonnegativity constraints, integrality constraints, linear constraints, ...) you face?
| null |
CC BY-SA 3.0
| null |
2014-08-06T13:42:54.083
|
2014-08-06T13:42:54.083
|
2017-04-13T12:50:41.230
|
-1
|
2853
| null |
927
|
1
|
929
| null |
2
|
2862
|
I'm working on a dataset with lots of NA values, using sklearn and `pandas.DataFrame`. I implemented different imputation strategies for different columns of the DataFrame based on the column names. For example, for predictor `'var1'` I impute NAs with 0's and for `'var2'` with the mean.
When I try to cross-validate my model using `train_test_split`, it returns an `nparray` which does not have column names. How can I impute missing values in this nparray?
P.S. I do not impute missing values in the original data set before splitting on purpose so I keep test and validation sets separately.
|
how to impute missing values on numpy array created by train_test_split from pandas.DataFrame?
|
CC BY-SA 4.0
| null |
2014-08-06T15:07:07.457
|
2020-07-31T14:25:42.320
|
2020-07-31T14:25:42.320
|
98307
|
2854
|
[
"pandas",
"cross-validation",
"scikit-learn"
] |
928
|
2
| null |
919
|
0
| null |
K-Fold should do just fine for a binary classification problem. Depending on the time it takes to train your model and predict the outcome, I would use 10-20 folds.
However, sometimes a single fold takes several minutes; in this case I use 3-5 folds, but not fewer than 3. Hope it helps.
| null |
CC BY-SA 3.0
| null |
2014-08-06T15:10:59.600
|
2014-08-06T15:10:59.600
| null | null |
2854
| null |
929
|
2
| null |
927
|
1
| null |
You can just cast the `np.array`s returned by `train_test_split` back into `pandas.DataFrame`s so you can carry out your same strategy. This is very close to what I do when dealing with pandas and scikit. For example,
```
X_train, X_test, y_train, y_test = train_test_split(X, y)  # X: feature DataFrame, y: target
# cast the arrays back into DataFrames, restoring the original column names
X_train = pd.DataFrame(X_train, columns=X.columns)
X_test = pd.DataFrame(X_test, columns=X.columns)
```
| null |
CC BY-SA 4.0
| null |
2014-08-06T17:07:17.520
|
2020-07-31T14:03:12.133
|
2020-07-31T14:03:12.133
|
86339
|
802
| null |
930
|
2
| null |
927
|
0
| null |
From the link you mentioned in the comment, the train and test sets should be in the form of a dataframe if you followed the first explanation.
In that case, you could do something like this:
```
df[variable] = df[variable].fillna(df[variable].median())
```
You have options on what to fill the N/A values with, check out the link.
[http://pandas.pydata.org/pandas-docs/stable/missing_data.html](http://pandas.pydata.org/pandas-docs/stable/missing_data.html)
If you followed the second explanation, using sklearn's cross-validation, you could implement mike1886's suggestion of transforming the arrays into dataframes and then use the fillna option.
| null |
CC BY-SA 3.0
| null |
2014-08-06T18:17:29.700
|
2014-08-06T18:17:29.700
| null | null |
2838
| null |
931
|
2
| null |
902
|
2
| null |
To answer a simpler, but related question, namely 'How well can my model generalize on the data that I have?' the method of learning curves might be applicable. [This](https://www.youtube.com/watch?v=g4XluwGYPaA) is a lecture given by Andrew Ng about them.
The basic idea is to plot test set error and training set error vs. the complexity of the model you are using (this can be somewhat complicated). If the model is powerful enough to fully 'understand' your data, at some point the complexity of the model will be high enough that performance on the training set will be close to perfect. However, the variance of a complex model will likely cause the test set error to increase at some point.
This analysis tells you two main things, I think. The first is an upper limit on performance. It's pretty unlikely that you'll do better on data that you haven't seen than on your training data. The other thing it tells you is whether or not getting more data might help. If you can demonstrate that you fully understand your training data by driving training error to zero it might be possible, through the inclusion of more data, to drive your test error further down by getting a more complete sample and then training a powerful model on that.
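If it helps, here is a hedged sketch of such a curve in R, plotting train/test error against training set size, with simulated data and logistic regression standing in for whatever model you actually use:
```
set.seed(7)
n <- 2000
X <- data.frame(x1 = rnorm(n), x2 = rnorm(n))
X$y <- rbinom(n, 1, plogis(X$x1 - X$x2))

train_idx <- sample(n, 1500)
train <- X[train_idx, ]; test <- X[-train_idx, ]

# fit on growing subsets of the training data, track both error rates
sizes <- seq(100, 1500, by = 100)
err <- t(sapply(sizes, function(m) {
  fit <- glm(y ~ x1 + x2, family = binomial, data = train[1:m, ])
  tr <- mean((predict(fit, train[1:m, ], type = "response") > 0.5) != train$y[1:m])
  te <- mean((predict(fit, test, type = "response") > 0.5) != test$y)
  c(train = tr, test = te)
}))

matplot(sizes, err, type = "l", lty = 1, col = c("blue", "red"),
        xlab = "training set size", ylab = "classification error")
legend("topright", legend = c("train", "test"), lty = 1, col = c("blue", "red"))
```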
| null |
CC BY-SA 3.0
| null |
2014-08-06T23:37:15.293
|
2014-08-07T00:20:05.817
|
2014-08-07T00:20:05.817
|
2724
|
2724
| null |
932
|
2
| null |
896
|
4
| null |
I have managed to resolve this. There is an excellent and thorough explanation of the optimization steps in the following thesis: [Semi-Supervised Learning for Natural Language by Percy Liang](http://cs.stanford.edu/~pliang/papers/meng-thesis.pdf).
My mistake was trying to update the quality for all potential cluster pairs. Instead, you should initialize a table with the quality change of each possible merge. Use this table to find the best merge, and then update the relevant terms that make up the table entries.
| null |
CC BY-SA 3.0
| null |
2014-08-07T03:15:09.933
|
2014-08-07T03:15:09.933
| null | null |
2817
| null |
933
|
2
| null |
920
|
1
| null |
It dates back to 1998, so it has most likely been abandoned, or "acquired" by Microsoft, as the creator currently works there and has done so since publishing that research.
See [https://www.microsoft.com/en-us/research/wp-content/uploads/2016/07/ists97.pdf](https://www.microsoft.com/en-us/research/wp-content/uploads/2016/07/ists97.pdf)
and [http://research.microsoft.com/en-us/people/cyl](http://research.microsoft.com/en-us/people/cyl) for the author. Maybe you could try to contact him.
| null |
CC BY-SA 4.0
| null |
2014-08-07T09:33:50.817
|
2018-09-07T03:25:01.003
|
2018-09-07T03:25:01.003
|
29575
|
2861
| null |
934
|
1
| null | null |
4
|
331
|
I have train and test sets of chronological data consisting of 305,000 and 70,000 instances, respectively. There are 15 features in each instance and only 2 possible class values (NEW, OLD). The problem is that there are only 725 OLD instances in the train set and 95 in the test set.
The only algorithm that manages to handle the imbalance for me is NaiveBayes in Weka (0.02 precision for the OLD class); others (trees) classify every instance as NEW.
What is the best approach to handle the imbalance and the appropriate algorithm in such a case?
Thank you in advance.
|
Handling huge dataset imbalance (2 class values) and appropriate ML algorithm
|
CC BY-SA 3.0
| null |
2014-08-07T10:45:38.557
|
2015-12-20T06:02:38.193
| null | null |
2533
|
[
"machine-learning",
"dataset"
] |
935
|
2
| null |
934
|
5
| null |
I'm not allowed to comment, but I have more of a suggestion than an answer: you could try to implement an over-sampling technique like SMOTE:
[http://scholar.google.com/scholar?q=oversampling+minority+classes](http://scholar.google.com/scholar?q=oversampling+minority+classes)
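Not SMOTE itself, but a minimal sketch of the simplest variant of the idea (random over-sampling of the minority class) on a made-up class column:
```
set.seed(123)
# made-up imbalanced training frame with a 'class' column (NEW / OLD)
train <- data.frame(x = rnorm(1000),
                    class = factor(c(rep("NEW", 980), rep("OLD", 20))))

minority <- train[train$class == "OLD", ]
majority <- train[train$class == "NEW", ]

# re-sample the minority class (with replacement) up to the majority size
over_idx <- sample(nrow(minority), nrow(majority), replace = TRUE)
balanced <- rbind(majority, minority[over_idx, ])
table(balanced$class)
```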
| null |
CC BY-SA 3.0
| null |
2014-08-07T12:46:25.123
|
2014-08-07T12:46:25.123
| null | null |
2863
| null |
936
|
1
| null | null |
-1
|
157
|
I am attempting to compile code using Knitr in R.
My code below is returning the following error, and causes errors in the rest of the document.
```
miss<-sample$sensor_glucose[!is.na(sample$sensor_glucose)]
# Error: "## Warning: is.na() applied to non-(list or vector) of type 'NULL'"
str(miss)
# int [1:103] 213 113 46 268 186 196 187 153 43 175 ...
```
Does anyone know how to remedy this problem?
Thanks in advance!
|
R error using Knitr
|
CC BY-SA 3.0
| null |
2014-08-07T15:12:48.617
|
2014-08-08T12:27:15.360
|
2014-08-08T12:27:15.360
|
97
|
2792
|
[
"r",
"error-handling"
] |
937
|
1
|
998
| null |
59
|
121439
|
I am working on a problem with too many features and training my models takes way too long. I implemented a forward selection algorithm to choose features.
However, I was wondering does scikit-learn have a forward selection/stepwise regression algorithm?
|
Does scikit-learn have a forward selection/stepwise regression algorithm?
|
CC BY-SA 4.0
| null |
2014-08-07T15:33:43.793
|
2021-08-15T21:08:31.770
|
2021-08-15T02:07:30.567
|
29169
|
2854
|
[
"feature-selection",
"scikit-learn"
] |
938
|
2
| null |
934
|
3
| null |
You can apply a clustering algorithm to the instances in the majority class and train a classifier with the centroids/medoids offered by the cluster algorithm. This is subsampling the majority class, the converse of oversampling the minority class.
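A minimal sketch of that idea with k-means on made-up data (the cluster count and feature names are arbitrary):
```
set.seed(1)
# made-up imbalanced data: 5000 majority rows, 100 minority rows, 2 numeric features
majority <- data.frame(x1 = rnorm(5000), x2 = rnorm(5000), class = "NEW")
minority <- data.frame(x1 = rnorm(100, 2), x2 = rnorm(100, 2), class = "OLD")

# replace the majority class by the centroids of, say, 100 k-means clusters
km <- kmeans(majority[, c("x1", "x2")], centers = 100)
centroids <- data.frame(km$centers, class = "NEW")

# train the classifier on the reduced, balanced set
balanced <- rbind(centroids, minority)
table(balanced$class)
```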
| null |
CC BY-SA 3.0
| null |
2014-08-07T17:49:05.640
|
2014-08-07T17:49:05.640
| null | null |
172
| null |
939
|
2
| null |
919
|
5
| null |
You will have best results if you care to build the folds so that each variable (and most importantly the target variable) is approximately identically distributed in each fold. This is called, when applied to the target variable, stratified k-fold. One approach is to cluster the inputs and make sure each fold contains the same number of instances from each cluster proportional to their size.
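For example, with `caret`, whose `createFolds` samples within the levels of a factor outcome, so each fold keeps roughly the same class proportions (the target below is made up):
```
library(caret)

set.seed(42)
# made-up imbalanced binary target
y <- factor(sample(c("yes", "no"), 25000, replace = TRUE, prob = c(0.3, 0.7)))

folds <- createFolds(y, k = 10)
# proportion of "yes" in each fold should be close to 0.3
sapply(folds, function(idx) mean(y[idx] == "yes"))
```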
| null |
CC BY-SA 3.0
| null |
2014-08-07T17:53:06.970
|
2014-08-07T17:53:06.970
| null | null |
172
| null |
940
|
1
| null | null |
4
|
1748
|
We are storing the information about our users showing interest in our items. Based on this information, we would like to create a simple recommendation engine that will take the items I1, I2, I3 etc of the current user, search for all other users that had shown interest in those items, and then output the items I4, I5, I6 etc of the other users, sorted by their decreasing popularity. So, basically, the standard "other buyers were also interested in..." functionality.
I'm asking myself what kind of a database is suitable for a realtime recommendation engine like this. My current idea is to build a trie of item IDs, then sort the item IDs of the current user (as the order of items is irrelevant) and to go down the trie; the children of the last trie node will build the needed output.
The problem is that we have 2 million items, so according to our estimate the trie will have at least 1E12 nodes, and we would probably need a distributed, sharded database to store it. Before we reinvent the wheel: are there any ready-to-use databases or, generally, non-cloud solutions for recommendation engines out there?
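To make the intended lookup concrete, here is a toy R sketch of the "other users were also interested in" logic on made-up data; the question about which database can do this at scale still stands:
```
# toy user-item interest pairs (made-up IDs)
interests <- data.frame(user = c(1, 1, 2, 2, 2, 3, 3, 3),
                        item = c("I1", "I2", "I1", "I2", "I4", "I2", "I5", "I4"))
current_items <- c("I1", "I2")   # items of the current user

# users who showed interest in any of the current user's items
peers <- unique(interests$user[interests$item %in% current_items])

# their other items, ranked by decreasing popularity
others <- interests$item[interests$user %in% peers &
                         !(interests$item %in% current_items)]
sort(table(others), decreasing = TRUE)
```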
|
Database for a trie, or other appropriate structure for recommendation engine
|
CC BY-SA 3.0
| null |
2014-08-07T22:30:52.913
|
2014-08-11T16:05:02.197
| null | null |
2873
|
[
"bigdata",
"recommender-system",
"databases"
] |
941
|
2
| null |
936
|
2
| null |
I agree with @ssdecontrol that a minimal reproducible example would be the most helpful. However, looking at your code (pay attention to the sequence `Error: ... Warning: ...`), I believe that the issue you are experiencing is due to an inappropriate setting of R's global `warn` option. It appears that your current setting is likely `2`, which refers to converting warnings to errors, whereas, you, most likely want the setting `1`, which is to treat warnings as such, without converting them to errors. If that is the case, you just need to set the option appropriately:
```
options(warn=1) # print warnings as they occur
options(warn=2) # treat warnings as errors
```
Note for moderators/administrators: This question seems not to be a data science question, but purely an R question. Therefore, I think it should be moved to StackOverflow, where it belongs.
| null |
CC BY-SA 3.0
| null |
2014-08-08T00:15:05.527
|
2014-08-08T03:03:11.517
|
2014-08-08T03:03:11.517
|
2452
|
2452
| null |
942
|
2
| null |
915
|
5
| null |
You mentioned regression and PCA in the title, and there is a definite answer for each of those.
The asymptotic complexity of linear regression reduces to O(P^2 * N) if N > P, where P is the number of features and N is the number of observations. More detail in [Computational complexity of least square regression operation](https://math.stackexchange.com/a/84503/117452).
Vanilla PCA is O(P^2 * N + P ^ 3), as in [Fastest PCA algorithm for high-dimensional data](https://scicomp.stackexchange.com/q/3220). However fast algorithms exist for very large matrices, explained in that answer and [Best PCA Algorithm For Huge Number of Features?](https://stats.stackexchange.com/q/2806/36229).
However I don't think anyone's compiled a single lit review or reference or book on the subject. Might not be a bad project for my free time...
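If you want to see the scaling empirically rather than from a reference, a quick sketch timing `lm()` for a growing N at fixed P:
```
p <- 40
timings <- sapply(c(1e3, 1e4, 1e5), function(n) {
  X <- matrix(rnorm(n * p), nrow = n)
  y <- as.numeric(X %*% runif(p)) + rnorm(n)
  system.time(lm(y ~ X))["elapsed"]   # roughly O(P^2 * N) for fixed P
})
timings
```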
| null |
CC BY-SA 3.0
| null |
2014-08-08T00:25:04.210
|
2014-08-08T00:25:04.210
|
2017-04-13T12:53:55.957
|
-1
|
1156
| null |
944
|
2
| null |
934
|
2
| null |
In addition to undersampling the majority class (i.e. taking only a few NEW), you may consider oversampling the minority class (in essence, duplicating your OLDs, but there are other smarter ways to do that)
Note that oversampling may lead to overfitting, so pay special attention to testing your classifiers
Check also this answer on CV:
- https://stats.stackexchange.com/a/108325/49130
| null |
CC BY-SA 3.0
| null |
2014-08-08T07:12:52.680
|
2014-08-08T07:12:52.680
|
2017-04-13T12:44:20.183
|
-1
|
816
| null |
945
|
1
|
947
| null |
5
|
2037
|
We have a classification algorithm to categorize Java exceptions in Production.
This algorithm is based on hierarchical, human-defined rules, so when a bunch of text forming an exception comes up, it determines what kind of exception it is (development, availability, configuration, etc.) and the responsible component (the innermost component responsible for the exception). In Java an exception can have several causing exceptions, and the whole chain must be analyzed.
For example, given the following example exception:
```
com.myapp.CustomException: Error printing ...
... (stack)
Caused by: com.foo.webservice.RemoteException: Unable to communicate ...
... (stack)
Caused by: com.acme.PrintException: PrintServer002: Timeout ....
... (stack)
```
First of all, our algorithm splits the whole stack into three isolated exceptions. Afterwards it starts analyzing these exceptions, starting from the innermost one. In this case, it determines that this exception (the second "Caused by") is of type `Availability` and that the responsible component is a "print server". This is because there is a rule matching the word `Timeout` that is associated with the `Availability` type. There is also a rule that matches `com.acme.PrintException` and determines that the responsible component is a print server. As all the information needed is determined using only the innermost exception, the outer exceptions are ignored, but this is not always the case.
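To illustrate, a toy sketch of this kind of keyword rule matching (the rules below are invented examples, not our real rule set):
```
# toy keyword rules of the kind described above
type_rules <- list(Availability = "Timeout|Unable to communicate",
                   Development  = "NullPointerException")
component_rules <- list("print server" = "com\\.acme\\.PrintException",
                        "web service"  = "com\\.foo\\.webservice")

classify <- function(exception_text, rules) {
  hits <- names(rules)[sapply(rules, grepl, x = exception_text)]
  if (length(hits) > 0) hits[1] else NA
}

inner <- "com.acme.PrintException: PrintServer002: Timeout ..."
classify(inner, type_rules)       # "Availability"
classify(inner, component_rules)  # "print server"
```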
As you can see, this kind of approach is very complex (and chaotic), as a human has to create new rules as new exceptions appear. Besides, the new rules have to be compatible with the current ones, because a new rule for classifying a new exception must not change the classification of any of the already classified exceptions.
We are thinking about using Machine Learning to automate this process. Obviously, I am not asking for a solution here as I know the complexity but I'd really appreciate some advice to achieve our goal.
|
Classifying Java exceptions
|
CC BY-SA 3.0
| null |
2014-08-08T10:01:25.400
|
2022-07-08T14:46:48.740
| null | null |
2878
|
[
"machine-learning",
"classification",
"algorithms"
] |
946
|
1
| null | null |
2
|
4101
|
Cross posting this from Cross Validated:
I've seen this question asked before, but I have yet to come across a definitive source answering the specific questions:
- What's the most appropriate statistical test to apply to a small A/B test?
- What's the R code and interpretation to analyze a small A/B test?
I'm running a small test to figure out which ads perform better. I have the following results:
Position 1:
`variation,impressions,clicks
row-1,753,26
row-3,767,7
`
Position 2:
`variation,impressions,clicks
row-1,753,16
row-3,767,13
`
Position 3:
`variation,impressions,clicks
row-1,753,2
row-3,767,7
`
I think it's safe to say these numbers are small and likely to be not normally distributed. Also, it's click data so there's a binary outcome of clicked or not and the trials are independent.
Appropriate test
In analyzing each position for significance, I think comparison with a binomial or Poisson distribution makes the most sense.
According to [the OpenIntro Stats](http://www.openintro.org/stat/textbook.php?stat_book=os) (and other sources) book, a variable follows a Poisson distribution "... if the event being considered is rare, the population is large, and the events occur independently of each other."
The same source classifies a binomial variable approximately the same way adding that the probability of success is the same and the number of trials is fixed.
I appreciate this is not an either/or decision and analysis can be done using both distributions.
Given A/B (split) testing is a science that has been practiced for several years, I imagine that there is a canonical test. However, looking around the internet, I mostly come across analysis that uses the standard normal distribution. That just seems wrong :)
Is there a canonical test to use for A/B tests with small #'s of clicks?
Interpretation and R code
I've used the following R code to test significance for each position:
Position 1:
```
binom.test(7, 767, p=(26/753))
Exact binomial test
data: 7 and 767
number of successes = 7, number of trials = 767, p-value = 1.077e-05
alternative hypothesis: true probability of success is not equal to 0.03452855
95 percent confidence interval:
0.003676962 0.018713125
sample estimates:
probability of success
0.009126467
```
I interpret this result to mean: the probability of success in the test group is indeed different from that in the control group, with a 95% confidence interval for the success probability of 0.368% to 1.87%.
```
ppois(((26-1)/753), lambda=(7/767), lower.tail = F)
[1] 0.009084947
```
I interpret this result to mean: Given a Poisson distribution with a click rate of 7 per 767 trials, there is a 0.9% chance of having a click rate of 26 or more per 753 trials in the same distribution. Contextualized in the ad example,
there is a .1% chance that the control ad actually performs the same as the test ad.
Is the above interpretation correct? Does the test and interpretation change with the different positions (i.e. are the results of the Poisson test more appropriate for Position 3 given the small numbers)?
|
Analysis of Split (A/B) tests using Poisson and/or Binomial Distribution
|
CC BY-SA 3.0
| null |
2014-08-08T13:44:52.803
|
2016-01-06T11:48:04.607
|
2016-01-06T11:48:04.607
|
11097
|
2883
|
[
"r",
"ab-test"
] |
947
|
2
| null |
945
|
8
| null |
First of all, some basics of classification (and in general any supervised ML tasks), just to make sure we have same set of concepts in mind.
Any supervised ML algorithm consists of at least 2 components:
- Dataset to train and test on.
- Algorithm(s) to handle these data.
The training dataset consists of a set of pairs `(x, y)`, where `x` is a vector of features and `y` is the predicted variable. The predicted variable is just what you want to know, i.e. in your case it is the exception type. Features are more tricky. You cannot just throw raw text into an algorithm; you need to extract meaningful parts of it and organize them as feature vectors first. You've already mentioned a couple of useful features - the exception class name (e.g. `com.acme.PrintException`) and contained words ("Timeout"). All you need is to translate your raw exceptions (and human-categorized exception types) into a suitable dataset, e.g.:
```
ex_class contains_timeout ... | ex_type
-----------------------------------------------------------
[com.acme.PrintException, 1 , ...] | Availability
[java.lang.Exception , 0 , ...] | Network
...
```
This representation is already much better for ML algorithms. But which one to take?
Taking into account the nature of the task and your current approach, the natural choice is to use decision trees. This class of algorithms will compute optimal decision criteria for all your exception types and print out the resulting tree. This is especially useful because you will be able to manually inspect how a decision is made and see how much it corresponds to your manually-crafted rules.
There is, however, a possibility that some exceptions with exactly the same features will belong to different exception types. In this case a probabilistic approach may work well. Despite its name, the Naive Bayes classifier works pretty well in most cases. There's one issue with NB and our dataset representation, though: the dataset contains categorical variables, and Naive Bayes can work with numerical attributes only*. The standard way to overcome this problem is to use [dummy variables](http://en.wikipedia.org/wiki/Dummy_variable_%28statistics%29). In short, dummy variables are binary variables that simply indicate whether a specific category is present or not. For example, the single variable `ex_class` with values `{com.acme.PrintException, java.lang.Exception, ...}` may be split into several variables `ex_class_printexception`, `ex_class_exception`, etc. with values `{0, 1}`:
```
ex_class_printexception ex_class_exception contains_timeout | ex_type
-----------------------------------------------------------------------
[1, , 0 , 1 ] | Availability
[0, , 1 , 0 ] | Network
```
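Purely as an illustration of the dummy-variable encoding feeding a decision tree, here is a minimal scikit-learn sketch (the two training examples and labels are hypothetical; a recent scikit-learn is assumed):
```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical training data: one dict of raw features per exception.
X_raw = [
    {"ex_class": "com.acme.PrintException", "contains_timeout": 1},
    {"ex_class": "java.lang.Exception", "contains_timeout": 0},
]
y = ["Availability", "Network"]

# DictVectorizer one-hot encodes the categorical ex_class column,
# i.e. it builds the dummy variables for us.
vec = DictVectorizer(sparse=False)
X = vec.fit_transform(X_raw)

clf = DecisionTreeClassifier().fit(X, y)

# Print the learned rules to compare them with the hand-written ones.
print(export_text(clf, feature_names=list(vec.get_feature_names_out())))
```
The printed tree can then be compared directly with the hand-written rules.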
One last algorithm to try is the Support Vector Machine (SVM). It neither provides a helpful visualisation nor is it probabilistic, but it often gives superior results.
---
* - in fact, neither Bayes' theorem nor Naive Bayes itself states anything about variable types, but most software packages that come to mind rely on numerical features.
| null |
CC BY-SA 3.0
| null |
2014-08-08T13:53:36.847
|
2014-08-08T13:53:36.847
| null | null |
1279
| null |
948
|
1
| null | null |
1
|
744
|
I have found a number of libraries and tools for data science in Scala, I would like to know about which one has more adoption and which one is gaining adoption at a faster pace and to what extent this is the case. Basically, which one should I bet for (if any at this point).
Some of the tools I've found are (in no particular order):
- Scalding
- Breeze
- Spark
- Saddle
- H2O
- Spire
- Mahout
- Hadoop
- MongoDB
If I need to be more specific to make the question answerable: I'm not particularly interested in clusters and Big Data at this moment, but I'm interested in sizable data (up to 100 GB) for information integration and predictive analytics.
|
Any clear winner for Data Science in Scala?
|
CC BY-SA 3.0
| null |
2014-08-08T16:49:13.850
|
2014-08-09T21:25:58.250
|
2014-08-08T17:00:50.270
|
1281
|
1281
|
[
"tools"
] |
949
|
1
| null | null |
2
|
831
|
I am facing a bizarre issue while using the `Apache Pig` rank utility. I am executing the following code:
```
email_id_ranked = rank email_id;
store email_id_ranked into '/tmp/';
```
So, basically I am trying to get the following result
```
1,email1
2,email2
3,email3
...
```
The issue is that sometimes Pig stores the above result, but sometimes it stores only the emails without the rank. Also, when I print the data on screen using the `dump` function, Pig returns both columns. I don't know where the issue is. Kindly advise.
Please let me know if you need any more information. Thanks in advance.
Pig version: Apache Pig version 0.11.0-cdh4.6.0
|
Pig Rank function not generating rank in output
|
CC BY-SA 3.0
| null |
2014-08-08T17:32:48.377
|
2014-08-08T17:32:48.377
| null | null |
2433
|
[
"bigdata",
"apache-hadoop",
"apache-pig"
] |
950
|
2
| null |
946
|
0
| null |
The approximation $\text{Binomial}(k, n, p) \approx \text{Poisson}(k, s)$ (where $s = np$) can be shown under the assumptions:
1) $n \gg k$ (so that $n!/(n-k)! \approx n^k$),
2) $p \ll 1$ (so that $(1-p)^{n-k} \approx (1-p)^n$).
It's up to you whether those are sufficiently satisfied. If the exact calculation can be done quickly, in my opinion, it's nice to stay with that.
Also, since if the probability of the row 3 sample differs from that of the row 1 sample it would almost certainly be on the lower side, it would probably be best for you to use
`binom.test(7, 767, p=(26/753), alternative='less')`
the final option indicating that the alternative to your null hypothesis is that the probability is less than 26/753, rather than simply not equal to it. Of course, that's simply the sum of Binomial probabilities from 0 to 7 (you can check this yourself), the interpretation being that this is the probability of having gotten at most 7 clicks from random chance, if the probability truly was 26/753.
Keep in mind the interpretation of that last sentence. These kinds of tests are generally used when we know what the inherent probability is that we're comparing to (e.g. to see if the set of coin flips has a probability significantly different from 1/2 which is what we expect from a fair coin). In this case, we don't know what the probability is that we're comparing to, we're just making the very crude guess that the 26/753 outcome of row 1 reflects the true probability. It's better than a regular Normal t-test in this case, but don't put too much stock in it unless you have a much higher sample size for row 1.
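For what it's worth, the same one-sided exact calculation can be reproduced outside R; a minimal Python/scipy sketch (assuming scipy is available) would be:
```python
from scipy import stats

# P(at most 7 clicks in 767 impressions) if the true click probability
# were 26/753 -- i.e. the sum of Binomial(k; 767, 26/753) for k = 0..7.
p_null = 26 / 753
p_value = stats.binom.cdf(7, 767, p_null)
print(p_value)
```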
| null |
CC BY-SA 3.0
| null |
2014-08-08T19:23:33.637
|
2014-08-08T19:23:33.637
| null | null |
2841
| null |
951
|
2
| null |
948
|
5
| null |
Not sure anybody has worked with all these tools, so I'm going to share my experience with some of them and let others share their experience with the others.
MongoDB addresses problems that involve heterogeneous and nested objects, while data mining mostly works with simple tabular data. MongoDB is neither fast with this type of data, nor does it provide any advanced tools for analysis (correct me if you know of any). So I can think of very few applications for Mongo in data mining.
Hadoop is a large ecosystem, containing dozens of different tools. I will assume that you mean core Hadoop features - HDFS and MapReduce. HDFS provides flexible way to store large amounts of data, while MapReduce gives basis for processing them. It has its clear advantages for processing multi-terabyte datasets, but it also has significant drawbacks. In particular, because of intensive disk IO during MapReduce tasks (that slows down computations a lot) it is terrible for interactive development, iterative algorithms and working with not-that-big datasets. For more details see my [earlier answer](https://datascience.stackexchange.com/a/863/1279).
Many algorithms in Hadoop require multiple MapReduce jobs with complicated data flow. This is where Scalding shines. Scalding (and the underlying Java [Cascading](http://www.cascading.org/)) provides a much simpler API, but at the moment uses the same MapReduce as its runtime and thus has all the same issues.
Spark addresses exactly these issues. It drops Hadoop's MapReduce and offers completely new computational framework based on distributed in-memory collections and delayed evaluations. Its API is somewhat similar to Scalding's with all MR complexity removed, so it's really easy to get started with it. Spark is also the first in this list that comes with data mining library - [MLlib](https://spark.apache.org/docs/latest/mllib-guide.html).
But Spark doesn't reinvent things like basic linear algebra. For this purpose it uses Breeze. In my opinion, Breeze is far from scientific packages like SciPy, Octave or Julia in quality, but it is still good enough for most practical use cases.
---
Mahout relies on Hadoop's MapReduce and thus is terrible for iterative algorithms. Spire and Saddle look cute and probably have their niche, but seem to be much less well-known than Breeze. I couldn't find much information about H2O, so it doesn't look like a big player here (comments from people who used it are welcome).
---
Some quick summary. Spark seems to be the most simple, flexible and fast-growing project for large-scale data processing. It facilitates a number of new projects (e.g. Shark or Spark SQL) and penetrates into existing (including [Cascading](http://www.concurrentinc.com/2014/05/cascading-now-supports-tez-spark-and-storm-up-next/) and [Mahout](https://mahout.apache.org/users/sparkbindings/home.html)). Spark knows how to utilize HDFS API, and thus scales to terabytes of data easily. And for data mining downshifters who don't want to bother with a cluster setup there's always pure Breeze.
| null |
CC BY-SA 3.0
| null |
2014-08-08T20:46:58.493
|
2014-08-09T21:25:58.250
|
2017-04-13T12:50:41.230
|
-1
|
1279
| null |
954
|
1
| null | null |
4
|
669
|
I need to do coreference resolution for German texts and I plan to use OpenNLP to perform this task.
As far as I know OpenNLP coreference resolution does not support the German language.
Which components/data do I need to adapt the code such that it is possible to perform coreference resolution for German texts?
|
OpenNLP Coreference Resolution (German)
|
CC BY-SA 3.0
| null |
2014-08-11T07:59:22.780
|
2015-04-04T18:33:40.263
| null | null |
979
|
[
"nlp",
"text-mining"
] |
955
|
1
|
988
| null |
8
|
827
|
Does anyone know a library for performing coreference resolution on German texts?
As far as I know, OpenNLP and Stanford NLP are not able to perform coreference resolution for German Texts.
The only tool that I know is [CorZu](http://www.cl.uzh.ch/static/news.php?om=view&nid=163) which is a python library.
|
Coreference Resolution for German Texts
|
CC BY-SA 4.0
| null |
2014-08-11T12:25:47.700
|
2019-06-07T17:10:38.600
|
2019-06-07T17:10:38.600
|
29169
|
979
|
[
"machine-learning",
"nlp"
] |
956
|
2
| null |
904
|
1
| null |
I had to create a web-based dashboard. My main charting tool was d3.js, but I needed to use ggplot2 to generate a few charts. Through d3.js's ggplot2 extension, I could create the same charts. If your charts can be generated through an existing extension, then the web has better alternatives. Later you can export them to PDF for distribution.
| null |
CC BY-SA 3.0
| null |
2014-08-11T12:48:52.320
|
2014-08-11T12:48:52.320
| null | null |
2910
| null |
957
|
1
| null | null |
2
|
545
|
I know that ARIMA can't detect multiple seasonality, but it is possible to [use fourier functions to add a second seasonality](http://robjhyndman.com/hyndsight/dailydata/).
I need to forecast gas consumption composed of daily, weekly (weekdays vs. weekend) and yearly seasonality. Does it make sense to apply the [STL decomposition by LOESS](http://stat.ethz.ch/R-manual/R-devel/library/stats/html/stl.html) three times? The reason I ask is that I applied the Fourier method and got bad results, but I don't know if that is only because I applied it wrongly.
I'm interested in the theoretical explanation, but here you find also the code:
ARIMA + 2 STL:
```
library(forecast)  # assumed: provides seasadj(), Arima(), fourier()
library(xts)       # assumed: provides xts(), coredata(), .indexwday()

b <- ts(drop(coredata(dat.ts)), deltat=1/12/30/24, start=1)
fit <- stl(b, s.window="periodic")
b <- seasadj(fit)
dat.ts <- xts(b, index(dat.ts))
# The weekdays are extracted
dat.weekdays <- dat.ts[.indexwday(dat.ts) %in% 1:5]
dat.weekdaysTS <- ts(drop(coredata(dat.weekdays)), frequency=24, start=1)
fit <- stl(dat.weekdaysTS, s.window="periodic")
dat.weekdaysTS <- seasadj(fit)
arima <- Arima(dat.weekdaysTS, order=c(3,0,5))
```
With fourier:
```
dat.weekdays <- dat.ts[.indexwday(dat.ts) %in% 1:5]
dat.weekdaysTS <- ts(drop(coredata(dat.weekdays)), frequency=24, start=1)
z <- fourier(ts(dat.weekdaysTS, frequency=365.25), K=5)
arima <- Arima(dat.weekdaysTS, order=c(3,0,5),xreg=z)
```
|
Multiple seasonality with ARIMA?
|
CC BY-SA 3.0
| null |
2014-08-11T15:19:02.047
|
2022-04-09T15:57:32.523
|
2022-04-09T15:57:32.523
|
1330
|
989
|
[
"time-series",
"arima"
] |
958
|
2
| null |
940
|
1
| null |
Have a look at [Apache Mahout](https://mahout.apache.org/). The latest version also features user- and item-based recommenders.
| null |
CC BY-SA 3.0
| null |
2014-08-11T16:05:02.197
|
2014-08-11T16:05:02.197
| null | null |
2912
| null |
959
|
2
| null |
866
|
7
| null |
While I am not a data scientist, I am an epidemiologist working in a clinical setting. Your research question did not specify a time period (ie odds of developing CKD in 1 year, 10 years, lifetime?).
Generally, I would go through a number of steps before even thinking about modeling (univariate analysis, bivariate analysis, collinearity checks, etc.). However, the most commonly used method for trying to predict a binary event (using continuous OR binary variables) is logistic regression. If you wanted to look at CKD as a lab value (urine albumin, eGFR) you would use linear regression (continuous outcome).
While the methods used should be informed by your data and questions, clinicians are used to seeing odds ratios and risk ratios, as these are the most commonly reported measures of association in medical journals such as NEJM and JAMA.
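As a rough, purely illustrative sketch of that logistic-regression route (statsmodels assumed; the simulated age/eGFR predictors and CKD outcome below are made up):
```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Made-up example data: two continuous predictors and a binary CKD outcome.
n = 500
age = rng.normal(60, 10, n)
egfr = rng.normal(75, 15, n)
logit = -8 + 0.08 * age - 0.03 * egfr
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(np.column_stack([age, egfr]))
result = sm.Logit(y, X).fit(disp=0)

print(result.summary())        # coefficients, p-values, confidence intervals
print(np.exp(result.params))   # odds ratios, the scale clinicians expect
```
Exponentiating the coefficients gives the odds ratios that clinicians are used to seeing.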
If you are working on this problem from a human health perspective (as opposed to business intelligence), Steyerberg's [Clinical Prediction Models](http://www.springer.com/medicine/internal/book/978-0-387-77243-1) is an excellent resource.
| null |
CC BY-SA 3.0
| null |
2014-08-11T18:33:37.023
|
2014-08-11T18:33:37.023
| null | null |
2913
| null |
960
|
2
| null |
919
|
2
| null |
I have to agree that k-fold should do "just" fine. However, there is a nice article about the "Bootstrap .632+" method (basically a smoothed cross-validation) that is supposed to be superior (although they did the comparisons on non-binary data, as far as I can tell).
Maybe you want to check out this article: [http://www.jstor.org/stable/2965703](http://www.jstor.org/stable/2965703)
| null |
CC BY-SA 3.0
| null |
2014-08-11T19:21:50.137
|
2014-08-11T19:21:50.137
| null | null | null | null |
961
|
1
| null | null |
8
|
959
|

These are 4 different weight matrices that I got after training a restricted Boltzmann machine (RBM) with ~4k visible units and only 96 hidden units/weight vectors. As you can see, the weights are extremely similar - even black pixels on the face are reproduced. The other 92 vectors are very similar too, though none of the weights are exactly the same.
I can overcome this by increasing the number of weight vectors to 512 or more. But I have encountered this problem several times before with different RBM types (binary, Gaussian, even convolutional), different numbers of hidden units (including pretty large ones), different hyper-parameters, etc.
My question is: what is the most likely reason for the weights to get very similar values? Do they all just get stuck in some local minimum? Or is it a sign of overfitting?
I currently use a kind of Gaussian-Bernoulli RBM; the code can be found [here](https://github.com/faithlessfriend/Milk.jl/blob/master/src/rbm.jl).
UPD. My dataset is based on [CK+](http://www.pitt.edu/~emotion/ck-spread.htm), which contains > 10k images of 327 individuals. However, I do pretty heavy preprocessing. First, I keep only the pixels inside the outer contour of the face. Second, I transform each face (using piecewise affine warping) to the same grid (e.g. eyebrows, nose, lips etc. are in the same (x,y) position on all images). After preprocessing, images look like this:
 
When training RBM, I take only non-zero pixels, so outer black region is ignored.
|
Why does a restricted Boltzmann machine (RBM) tend to learn very similar weights?
|
CC BY-SA 3.0
| null |
2014-08-11T21:13:36.230
|
2019-05-10T05:02:55.607
|
2016-07-17T04:55:26.173
|
9420
|
1279
|
[
"rbm"
] |
962
|
1
|
966
| null |
30
|
23334
|
I know the difference between clustering and classification in machine learning, but I don't understand the difference between text classification and topic modeling for documents. Can I use topic modeling over documents to identify a topic? Can I use classification methods to classify the text inside these documents?
|
What is difference between text classification and topic models?
|
CC BY-SA 4.0
| null |
2014-08-12T03:50:52.303
|
2021-02-09T01:33:51.537
|
2021-02-09T01:33:51.537
|
29169
|
2916
|
[
"classification",
"text-mining",
"topic-model"
] |
963
|
1
| null | null |
14
|
5416
|
For experimenting, we'd like to use the [Emoji](https://en.wikipedia.org/wiki/Emoji) embedded in many Tweets as ground truth/training data for simple quantitative sentiment analysis. Tweets are usually too unstructured for NLP to work well.
Anyway, there are 722 Emoji in Unicode 6.0, and probably another 250 will be added in Unicode 7.0.
Is there a database (like e.g. SentiWordNet) that contains sentiment annotations for them?
(Note that SentiWordNet does allow for ambiguous meanings, too. Consider e.g. [funny](http://sentiwordnet.isti.cnr.it/search.php?q=funny), which is not just positive: "this tastes funny" is probably not positive... same will hold for `;-)` for example. But I don't think this is harder for Emoji than it is for regular words...)
Also, if you have experience with using them for sentiment analysis, I'd be interested to hear.
|
Sentiment data for Emoji
|
CC BY-SA 3.0
| null |
2014-08-12T07:57:03.283
|
2020-08-06T11:03:50.720
|
2014-08-12T16:06:24.710
|
2920
|
2920
|
[
"machine-learning",
"classification",
"parsing"
] |
964
|
2
| null |
962
|
4
| null |
Topic models are usually unsupervised. There are "supervised topic models", too; but even then they try to model topics within classes.
E.g. you may have a class "football", but there may be topics inside this class that relate to particular matches or teams.
The challenge with topics is that they change over time; consider the matches example above. Such topics may emerge, and disappear again.
| null |
CC BY-SA 3.0
| null |
2014-08-12T07:59:13.060
|
2014-08-12T07:59:13.060
| null | null |
2920
| null |
965
|
1
| null | null |
1
|
74
|
I'm looking for a product that allows us to take in a collection of datastreams, and then after some event, will find any data that changes or correlates with that event. (For example, having a headache, and identifying that I drank too much beer last night and didn't drink enough water)
|
Tool for finding correlations between data after some event
|
CC BY-SA 3.0
| null |
2014-08-12T08:54:34.903
|
2014-08-12T08:54:34.903
| null | null |
2923
|
[
"correlation"
] |
966
|
2
| null |
962
|
35
| null |
Text Classification
I give you a bunch of documents, each of which has a label attached. I ask you to learn why you think the contents of the documents have been given these labels based on their words. Then I give you new documents and ask what you think the label for each one should be. The labels have meaning to me, not to you necessarily.
Topic Modeling
I give you a bunch of documents, without labels. I ask you to explain why the documents have the words they do by identifying some topics that each is "about". You tell me the topics, by telling me how much of each is in each document, and I decide what the topics "mean" if anything.
You'd have to clarify what you mean by "identify one topic" or "classify the text".
| null |
CC BY-SA 3.0
| null |
2014-08-12T09:52:01.153
|
2014-08-12T09:52:01.153
| null | null |
21
| null |
967
|
2
| null |
963
|
4
| null |
A total of 972 emoji is not really so many that you couldn't label them manually, but I doubt they will work well as ground truth. Sources like Twitter are full of irony, sarcasm and other tricky settings where emotional symbols (such as emoji or emoticons) mean something different from their normal interpretation. For example, someone may write "xxx cheated their clients, and now they are cheated themselves! ha ha ha! :D". This is definitely a negative comment, but the author is glad to see company xxx in trouble and thus adds a positive emoticon. These cases are not that frequent, but they are definitely not suitable for ground truth.
A much more common approach is to use emoticons as a seed for collecting an actual data set. For example, in [this paper](http://www.saifmohammad.com/WebDocs/NRC-Sentiment-JAIR-2014.pdf) the authors use emoticons and emotional hashtags to build a lexicon of words useful for further classification.
| null |
CC BY-SA 3.0
| null |
2014-08-12T12:40:44.907
|
2014-08-12T12:40:44.907
| null | null |
1279
| null |
968
|
1
| null | null |
4
|
973
|
I am using the `stepAIC` function in R to do a bi-directional (forward and backward) stepwise regression. I do not understand what each return value from the function means.
The output is:
```
Df Sum of Sq RSS AIC
<none> 350.71 -5406.0
- aaa 1 0.283 350.99 -5405.9
- bbb 1 0.339 351.05 -5405.4
- ccc 1 0.982 351.69 -5400.5
- ddd 1 0.989 351.70 -5400.5
```
Question Are the values listed under `Df`, `Sum of Sq`, `RSS`, and `AIC` the values for a model where only one variable would be considered as the independent variable (i.e. y ~
aaa, y ~ bbb, etc.)?
|
Understanding output stepAIC
|
CC BY-SA 3.0
| null |
2014-08-12T17:11:45.447
|
2022-05-31T16:47:57.803
|
2014-08-12T18:36:08.880
|
2928
|
2928
|
[
"data-mining",
"r",
"feature-selection"
] |
969
|
2
| null |
968
|
1
| null |
After consulting with someone, I found out that the `<none>` row corresponds to a model that includes all the variables; in other words, none of the variables were removed. So consider the line in the output for the variable aaa. The listed RSS and AIC are the values for a model that would include all variables but aaa, and we see an increase in the RSS and AIC. The other listed results can be considered in the same fashion. The best model is then the one where none of the variables are removed, since this has the smallest AIC.
| null |
CC BY-SA 4.0
| null |
2014-08-13T14:18:58.917
|
2022-05-31T16:47:57.803
|
2022-05-31T16:47:57.803
|
25180
|
2928
| null |
972
|
1
| null | null |
1
|
43
|
When finding frequent subgraphs in a single large graph, the subgraph isomorphism (test) is not considered because it's not anti-monotone. How and why is subgraph isomorphism not anti-monotone?
|
Subgraph isomorphism and Anti-monotone property
|
CC BY-SA 3.0
| null |
2014-08-13T20:30:18.143
|
2014-08-13T20:30:18.143
| null | null |
2948
|
[
"bigdata",
"data-mining",
"dataset",
"graphs",
"similarity"
] |
973
|
1
| null | null |
3
|
234
|
I'd like to find the weight vector for input-space features in a [structured SVM](http://en.wikipedia.org/wiki/Structured_support_vector_machine). The idea is to identify the most important set of input-space features (based on the magnitude of their corresponding weights). I know that in a binary SVM the weight vector can be written as a [linear combination of examples](http://pyml.sourceforge.net/doc/howto.pdf), and the magnitude of those weights represents how effective they were for the prediction problem at hand. But how do you compute the same for an SSVM?
|
feature weights in structured support vector machine
|
CC BY-SA 3.0
|
0
|
2014-08-13T20:53:15.443
|
2014-08-14T14:42:08.143
|
2014-08-14T14:42:08.143
|
2949
|
2949
|
[
"svm"
] |
974
|
1
| null | null |
23
|
8136
|
I have a big sparse matrix of users and items they like (in the order of 1M users and 100K items, with a very low level of sparsity). I'm exploring ways in which I could perform kNN search on it. Given the size of my dataset and some initial tests I performed, my assumption is that the method I will use will need to be either parallel or distributed. So I'm considering two classes of possible solutions: one that is either available (or implementable in a reasonably easy way) on a single multicore machine, the other on a Spark cluster, i.e. as a MapReduce program. Here are three broad ideas that I considered:
- Assuming a cosine similarity metric, perform the full multiplication of the normalized matrix by its transpose (implemented as a sum of outer products)
- Using locality-sensitive hashing (LSH)
- Reducing first the dimensionality of the problem with a PCA
I'd appreciate any thoughts or advices about possible other ways in which I could tackle this problem.
|
Nearest neighbors search for very high dimensional data
|
CC BY-SA 3.0
| null |
2014-08-14T00:50:51.103
|
2017-07-10T12:46:30.723
| null | null |
1242
|
[
"machine-learning",
"distributed",
"map-reduce",
"dimensionality-reduction"
] |
975
|
2
| null |
974
|
17
| null |
I hope that the following resources might get you additional ideas toward solving the problem:
1) Research paper "Efficient K-Nearest Neighbor Join Algorithms for High Dimensional Sparse Data": [http://arxiv.org/abs/1011.2807](http://arxiv.org/abs/1011.2807)
2) Class project paper "Recommendation System Based on Collaborative Filtering" (Stanford University): [http://cs229.stanford.edu/proj2008/Wen-RecommendationSystemBasedOnCollaborativeFiltering.pdf](http://cs229.stanford.edu/proj2008/Wen-RecommendationSystemBasedOnCollaborativeFiltering.pdf)
3) Project for the Netflix Prize Competition (k-NN-based): [http://cs.carleton.edu/cs_comps/0910/netflixprize/final_results/knn/index.html](http://cs.carleton.edu/cs_comps/0910/netflixprize/final_results/knn/index.html)
4) Research paper "Hubs in Space: Popular Nearest Neighbors in High-Dimensional Data" on the curse of dimensionality phenomenon and its relation to machine learning, in general, and k-NN algorithm, in particular: [http://jmlr.org/papers/volume11/radovanovic10a/radovanovic10a.pdf](http://jmlr.org/papers/volume11/radovanovic10a/radovanovic10a.pdf)
5) Software for sparse k-NN classification (free, but appears not to be open source - might clarify with authors): [http://www.autonlab.org/autonweb/10408.html](http://www.autonlab.org/autonweb/10408.html)
6) Several discussion threads on StackOverflow:
- https://stackoverflow.com/questions/20333092/knn-with-big-sparse-matrices-in-python
- https://stackoverflow.com/questions/18164348/efficient-nearest-neighbour-search-for-sparse-matrices
- https://stackoverflow.com/questions/21085990/scipy-sparse-distance-matrix-scikit-or-scipy
- https://stackoverflow.com/questions/10472681/handling-incomplete-data-data-sparsity-in-knn
- https://stackoverflow.com/questions/5560218/computing-sparse-pairwise-distance-matrix-in-r (unlike all previous discussions, which refer to Python, this one refers to R ecosystem)
7) Pay attention to GraphLab, an open source parallel framework for machine learning ([http://select.cs.cmu.edu/code/graphlab](http://select.cs.cmu.edu/code/graphlab)), which supports parallel clustering via `MapReduce` model: [http://select.cs.cmu.edu/code/graphlab/clustering.html](http://select.cs.cmu.edu/code/graphlab/clustering.html)
You might also check my answer here on Data Science StackExchange on sparse regression for links to relevant `R` packages and `CRAN Task View` pages: [https://datascience.stackexchange.com/a/918/2452](https://datascience.stackexchange.com/a/918/2452).
| null |
CC BY-SA 3.0
| null |
2014-08-14T04:28:36.933
|
2014-08-14T11:52:35.513
|
2017-05-23T12:38:53.587
|
-1
|
2452
| null |
976
|
2
| null |
915
|
5
| null |
I gave a very limited partial answer for the confirmatory factor analysis package that I developed for Stata in this [Stata Journal article](http://www.stata-journal.com/article.html?article=st0169), based on timing the actual simulations. Confirmatory factor analysis was implemented as a maximum likelihood estimation technique, and I could see very easily how the computation time grew with each dimension (sample size `n`, number of variables `p`, number of factors `k`). As it is heavily dependent on how Stata thinks about the data (optimized to compute across columns/observations rather than rows), I found performance to be `O(n^{0.68} (k+p)^{2.4})`, where 2.4 is the fastest matrix inversion asymptotics (and there's a hell of a lot of that in the iterative maximization of confirmatory factor analysis). I did not give a reference for the latter, but I think I got this from [Wikipedia](http://en.wikipedia.org/wiki/Computational_complexity_of_mathematical_operations#Matrix_algebra).
Note that there is also a matrix inversion step in OLS. However, for reasons of numerical accuracy, no one would really brute-force invert the `X'X` matrix; one would rather use sweep operators and identify the dangerously collinear variables to deal with precision issues. If you add up $10^8$ numbers that were originally in [double precision](http://en.wikipedia.org/wiki/Double-precision_floating-point_format), you will likely end up with a number that only has single precision. Numerical computing issues may become a forgotten corner of big data calculations as you start optimizing for speed.
| null |
CC BY-SA 3.0
| null |
2014-08-14T17:28:40.453
|
2014-08-14T17:28:40.453
| null | null |
1237
| null |
977
|
1
| null | null |
11
|
309
|
Let me show you an example of a hypothetical online clustering application:

At time n points 1,2,3,4 are allocated to the blue cluster A and points b,5,6,7 are allocated to the red cluster B.
At time n+1 a new point a is introduced which is assigned to the blue cluster A but also causes the point b to be assigned to the blue cluster A as well.
In the end points 1,2,3,4,a,b belong to A and points 5,6,7 to B. To me this seems reasonable.
What seems simple at first glance is actually a bit tricky - to maintain identifiers across time steps. Let me try to make this point clear with a more borderline example:

The green point will cause two blue and two red points to be merged into one cluster, which I arbitrarily decided to color blue - mind you, this is already my human heuristic thinking at work!
A computer to make this decision will have to use rules. For example when points are merged into a cluster then the identity of the cluster is determined by the majority. In this case we would face a draw - both blue and red might be valid choices for the new (here blue colored) cluster.
Imagine a fifth red point close to the green one. Then the majority would be red (3 red vs 2 blue) so red would be a good choice for the new cluster - but this would contradict the even clearer choice of red for the rightmost cluster as those have been red and probably should stay that way.
I find it fishy to think about this. At the end of the day I guess there are no perfect rules for this - rather heuristics optimizing some stability criteria.
This finally leads to my questions:
- Does this "problem" have a name that it can be referred to?
- Are there "standard" solutions to this and ...
- ... is there maybe even an R package for that?
---
[Reasonable Inheritance of Cluster Identities in Repetitive Clustering](http://www.joyofdata.de/blog/reasonable-inheritance-of-cluster-identities-in-repetitive-clustering/)
|
Solutions for Continuous Online Cluster Identification?
|
CC BY-SA 3.0
| null |
2014-08-14T19:09:29.523
|
2016-09-29T10:26:23.220
|
2014-08-16T10:49:06.163
|
725
|
725
|
[
"machine-learning",
"clustering"
] |
978
|
2
| null |
962
|
14
| null |
>
But I don't know what is difference between text classification and topic models in documents
`Text Classification` is a form of supervised learning, hence the set of possible classes are known/defined in advance, and won't change.
`Topic Modeling` is a form of unsupervised learning (akin to clustering), so the set of possible topics is unknown a priori. The topics are defined as part of generating the topic models. With a non-deterministic algorithm like LDA, you'll get different topics each time you run the algorithm.
`Text classification` often involves mutually-exclusive classes -- think of these as buckets.
But it doesn't have to: given the right kind of labeled input data, you can set up a series of non-mutually-exclusive binary classifiers.
`Topic modeling` is generally not mutually-exclusive: the same document can have its probability distribution spread across many topics.
In addition, there are also hierarchical topic modeling methods.
>
Also can I use topic model for the documents to identify one topic later on can I use the classification to classify the text inside this documents ?
If you're asking whether you can take all of the documents assigned to one topic by a topic modeling algorithm and then apply a classifier to that collection, then yes, you certainly can do that.
I'm not sure it makes much sense, though: at a minimum, you'd need to pick a threshold for the topic probability distribution above which you'll include documents in your collection (typically 0.05-0.1).
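To make that thresholding idea concrete, here is a minimal sketch using scikit-learn's LDA rather than MALLET (toy documents; the 0.1 threshold is just a placeholder):
```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "the match ended in a draw",            # toy documents
    "the team signed a new striker",
    "interest rates rose again today",
]

X = CountVectorizer().fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topic = lda.fit_transform(X)            # rows sum to 1: P(topic | document)

threshold, topic_id = 0.1, 0
subset = [d for d, dist in zip(docs, doc_topic) if dist[topic_id] > threshold]
# 'subset' is the collection you could then hand to a supervised classifier.
```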
Can you elaborate on your use case?
By the way, there's a great tutorial on topic modeling using the MALLET library for Java available here: [Getting Started with Topic Modeling and MALLET](http://programminghistorian.org/lessons/topic-modeling-and-mallet)
| null |
CC BY-SA 4.0
| null |
2014-08-14T21:38:57.630
|
2018-05-04T13:11:47.637
|
2018-05-04T13:11:47.637
|
29575
|
819
| null |
979
|
1
|
980
| null |
17
|
12372
|
I have a problem of clustering a huge amount of sentences into groups by their meanings, i.e. I have lots of sentences and want to group them by their meaning.
What algorithms are suggested to do this? I don't know the number of clusters in advance (and as more data comes in, the clusters can change as well). What features are normally used to represent each sentence?
I'm currently trying the simplest features: just the list of words, with the distance between sentences defined as:
$|(A \cup B) \setminus (A \cap B)| \;/\; |A \cup B|$
(A and B are the corresponding sets of words in sentence A and sentence B)
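In Python, that distance is simply (a minimal sketch):
```python
def sentence_distance(a, b):
    """Symmetric difference over union of the two sentences' word sets."""
    A, B = set(a.lower().split()), set(b.lower().split())
    union = A | B
    if not union:
        return 0.0
    return len(union - (A & B)) / len(union)

print(sentence_distance("the cat sat", "the cat ran"))  # 0.5
```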
Does it make sense at all?
I'm trying to apply [Mean-Shift](http://scikit-learn.org/stable/auto_examples/cluster/plot_mean_shift.html#example-cluster-plot-mean-shift-py) algorithm from scikit library to this distance, as it does not require number of clusters in advance.
If anyone can advise better methods/approaches for this problem, it will be very much appreciated, as I'm still new to the topic.
|
Algorithms for text clustering
|
CC BY-SA 4.0
| null |
2014-08-15T13:10:20.937
|
2021-06-20T19:21:33.163
|
2021-03-13T23:09:28.130
|
29169
|
2958
|
[
"clustering",
"text-mining",
"algorithms",
"scikit-learn"
] |
980
|
2
| null |
979
|
13
| null |
Check the [Stanford NLP Group](http://www-nlp.stanford.edu/software)'s open source software, in particular, [Stanford Classifier](http://www-nlp.stanford.edu/software/classifier.shtml). The software is written in `Java`, which will likely delight you, but also has bindings for some other languages. Note, the licensing - if you plan to use their code in commercial products, you have to acquire commercial license.
Another interesting set of open source libraries, IMHO suitable for this task and much more, is [parallel framework for machine learning GraphLab](http://select.cs.cmu.edu/code/graphlab), which includes [clustering library](http://select.cs.cmu.edu/code/graphlab/clustering.html), implementing various clustering algorithms. It is especially suitable for very large volume of data (like you have), as it implements `MapReduce` model and, thus, supports multicore and multiprocessor parallel processing.
You most likely are aware of the following, but I will mention it just in case. [Natural Language Toolkit (NLTK)](http://www.nltk.org) for `Python` contains modules for clustering/classifying/categorizing text. Check the relevant chapter in the [NLTK Book](http://www.nltk.org/book/ch06.html).
UPDATE:
Speaking of algorithms, it seems that you've tried most of the ones from `scikit-learn`, such as those illustrated in [this](http://scikit-learn.org/stable/auto_examples/applications/topics_extraction_with_nmf.html) topic extraction example. However, you may find other libraries useful, which implement a wide variety of clustering algorithms, including Non-Negative Matrix Factorization (NMF). One such library is [Python Matrix Factorization (PyMF)](https://code.google.com/p/pymf) ([source code](https://github.com/nils-werner/pymf)). Another, even more interesting, library, also Python-based, is [NIMFA](http://nimfa.biolab.si), which implements various NMF algorithms. Here's a [research paper](http://jmlr.org/papers/volume13/zitnik12a/zitnik12a.pdf) describing `NIMFA`. [Here's](http://nimfa.biolab.si/nimfa.examples.documents.html) an example from its documentation, which presents a solution for the very similar text-processing problem of topic clustering.
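For a quick start, here is a minimal NMF-based topic/clustering sketch with scikit-learn (toy corpus, 2 topics assumed, recent scikit-learn assumed); NIMFA or PyMF could be substituted in the same role:
```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF

# Toy corpus; in practice this would be your full sentence/document collection.
docs = [
    "the team won the football match",
    "the striker scored in the match",
    "the bank raised interest rates",
    "rates and inflation worry the bank",
]

tfidf = TfidfVectorizer(stop_words="english")
X = tfidf.fit_transform(docs)

nmf = NMF(n_components=2, random_state=0)   # number of topics assumed here
W = nmf.fit_transform(X)                    # document-topic weights
H = nmf.components_                         # topic-term weights

terms = tfidf.get_feature_names_out()
for k, topic in enumerate(H):
    top_terms = [terms[i] for i in topic.argsort()[::-1][:3]]
    print(f"topic {k}: {top_terms}")
```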
| null |
CC BY-SA 4.0
| null |
2014-08-15T14:00:14.927
|
2021-06-20T19:21:33.163
|
2021-06-20T19:21:33.163
|
25180
|
2452
| null |
981
|
2
| null |
802
|
3
| null |
Basically, for this task you can efficiently use any SQL database with good support for B+tree-based indexes (MySQL will suit your needs just fine).
Create 3 tables:
- Documents table, columns: id/document
- N-grams table: n_gram_id/n_gram
- Mapping between n-grams and documents: document_id/n_gram_id
Create indexes on the N-grams table's n_gram string and the Mapping table's n_gram_id; primary keys will be indexed by default as well.
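A minimal sketch of that schema (SQLite used here purely for illustration; the MySQL DDL would be essentially the same):
```python
import sqlite3

# In-memory database just to show the schema and indexes described above.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE documents (
    id       INTEGER PRIMARY KEY,
    document TEXT
);
CREATE TABLE ngrams (
    n_gram_id INTEGER PRIMARY KEY,
    n_gram    TEXT
);
CREATE TABLE mapping (
    document_id INTEGER,
    n_gram_id   INTEGER
);
-- B-tree indexes that make the lookups below fast
CREATE INDEX idx_ngram   ON ngrams(n_gram);
CREATE INDEX idx_mapping ON mapping(n_gram_id);
""")
```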
Your operations will be efficient:
- Insertion of a document: just extract all n-grams and insert them into the Documents table, the N-grams table and the Mapping table
- Lookup of an n-gram will be quick with the support of the index
- Querying for all n-grams that contain a sub-n-gram: in 2 steps - first query, using the index, all n-grams from the 2nd table which contain the sub-n-gram. Then retrieve all corresponding documents for each of these n-grams.
You don't even need to use joins to achieve all these operations, so the indexes will help a lot. Also, if the data will not fit on one machine, you can implement a sharding scheme, like storing n-grams starting with a-n on one server and o-z on another, or some other suitable scheme.
Also, you can use MongoDB, but I'm not sure how exactly you would need to implement the indexing scheme. With MongoDB you get the sharding scheme for free, as it is already built in.
| null |
CC BY-SA 3.0
| null |
2014-08-15T20:22:11.750
|
2014-08-15T20:22:11.750
| null | null |
2958
| null |
982
|
2
| null |
802
|
9
| null |
See [Lucene NGramTokenizer](https://lucene.apache.org/core/4_4_0/analyzers-common/org/apache/lucene/analysis/ngram/NGramTokenizer.html)
Are you sure you can't just use lucene or similar indexing techniques?
Inverted indexes will store the n-gram only once, then just the document ids that contain the ngram; they don't store this as highly redundant raw text.
As for finding ngrams that contain your query sub-n-gram, I would build an index on the observed ngrams, e.g. using a second lucene index, or [any other substring index](https://en.wikipedia.org/wiki/Substring_index) such as a trie or suffix tree. If your data is dynamic, probably lucene is a reasonable choice, using phrase queries to find your n-grams.
| null |
CC BY-SA 3.0
| null |
2014-08-16T10:25:19.687
|
2014-08-16T10:25:19.687
| null | null |
924
| null |
985
|
1
| null | null |
5
|
9583
|
I have a question about classifying documents using supervised learning and unsupervised learning.
For example: I have a bunch of documents talking about football. As we know, football has a different meaning in the UK, the USA, and Australia. Therefore, it is difficult to classify these documents into three different categories (which are soccer, American football and Australian football).
My approach is to use cosine similarity between terms, based on unsupervised learning. After clustering, we are able to create a number of clusters based on cosine similarity, where each cluster will contain documents with similar terms. After we create the clusters, we can use a semantic feature to label these clusters, relying on a supervised model like SVM to make accurate categorizations.
My goal is to create more accurate categorizations, because when I test a new document, I would like to know whether this document can be related to these categories or not.
|
Can I use unsupervised learning followed by supervised learning?
|
CC BY-SA 4.0
| null |
2014-08-16T11:05:43.353
|
2019-06-08T00:13:10.650
|
2019-06-07T23:31:10.447
|
29169
|
2916
|
[
"machine-learning",
"classification",
"clustering",
"dimensionality-reduction"
] |
986
|
2
| null |
946
|
2
| null |
OK, so here's your data.
```
dd <- data.frame(position=rep(1:3, each=2),
variation=rep(c(1,3), 3),
impressions=rep(c(753, 767), 3),
clicks=c(26,7,16,13,2,7))
```
which is
```
position variation impressions clicks
1 1 1 753 26
2 1 3 767 7
3 2 1 753 16
4 2 3 767 13
5 3 1 753 2
6 3 3 767 7
```
The two model assumptions you're thinking about are Binomial
```
mod.bin <- glm(cbind(clicks, impressions-clicks) ~ variation + position,
family=binomial, data=dd)
```
where the dependent variable is constructed to have the count of the event of interest in the first column, and the Poisson
```
md.pois <- glm(clicks ~ variation + position + offset(log(impressions)),
family=poisson, data=dd)
```
where the `log(impressions)` offset is necessary whenever the number of trials differs across observations. This means coefficients are interpretable in terms of change in rate not change in count, which is what you want.
The first model generalises the `binom.test` to a setting with covariates, which is what you have. That gets you a more direct answer to your question, and better (if not perfect) measurement of the relevant uncertainty.
Notes
Both models assume no interaction between variation and position ('independent effects'). This may or may not be reasonable. You'd want more replications to investigate that properly. Swap the `+` for a `*` to do so.
In this data `summary` confirms that the two models give rather similar results, so concerns about Poisson vs Binomial don't seem to matter much.
In the wild, count data is usually overdispersed, that is: more variable than you'd expect from a Poisson with a constant rate or a Binomial with constant click probability, often due to unmodeled determinants of click rate / probability. If that's the case then prediction intervals from these models will be too narrow.
| null |
CC BY-SA 3.0
| null |
2014-08-17T14:17:41.307
|
2014-08-17T15:04:13.880
|
2014-08-17T15:04:13.880
|
2978
|
2978
| null |
987
|
1
| null | null |
26
|
17362
|
The problem I am tackling is categorizing short texts into multiple classes. My current approach is to use tf-idf weighted term frequencies and learn a simple linear classifier (logistic regression). This works reasonably well (around 90% macro F-1 on the test set, nearly 100% on the training set). A big problem is unseen words/n-grams.
I am trying to improve the classifier by adding other features, e.g. a fixed-sized vector computed using distributional similarities (as computed by word2vec) or other categorical features of the examples. My idea was to just add the features to the sparse input features from the bag of words. However, this results in worse performance on the test and training set. The additional features by themselves give about 80% F-1 on the test set, so they aren't garbage. Scaling the features didn't help either. My current thinking is that these kinds of features don't mix well with the (sparse) bag-of-words features.
So the question is: assuming the additional features provide additional information, what is the best way to incorporate them? Could training separate classifiers and combining them in some kind of ensemble work (this would probably have the drawback that no interaction between the features of the different classifiers could be captured)? Are there other more complex models I should consider?
|
Text categorization: combining different kind of features
|
CC BY-SA 3.0
| null |
2014-08-17T17:29:44.123
|
2016-01-06T04:20:10.207
| null | null |
2979
|
[
"machine-learning",
"classification",
"feature-selection",
"logistic-regression",
"information-retrieval"
] |
988
|
2
| null |
955
|
2
| null |
Here is a couple of tools that may be worth a look:
- Bart, an open source tool that have been used for several languages, including German. Available from the website
- Sucre is a tool developed at the University of Stuttgart. I don't know if it's available easily. You can see this paper about it.
| null |
CC BY-SA 3.0
| null |
2014-08-18T08:49:11.933
|
2014-08-18T08:49:11.933
| null | null |
2985
| null |
989
|
1
| null | null |
114
|
142299
|
I am trying to run SVR using scikit-learn (python) on a training dataset that has 595605 rows and 5 columns (features) while the test dataset has 397070 rows. The data has been pre-processed and regularized.
I am able to successfully run the test examples, but on executing using my dataset and letting it run for over an hour, I could still not see any output or termination of the program. I tried executing using a different IDE and even from the terminal, but that does not seem to be the issue.
I also tried changing the 'C' parameter value from 1 to 1e3.
I am facing similar issues with all SVM implementations using scikit.
Am I not waiting long enough for it to complete?
How much time should this execution take?
From my experience, it should not require more than a few minutes.
Here is my system configuration:
Ubuntu 14.04, 8GB RAM, lots of free memory, 4th gen i7 processor
|
SVM using scikit learn runs endlessly and never completes execution
|
CC BY-SA 4.0
| null |
2014-08-18T10:46:57.360
|
2021-08-11T10:26:27.617
|
2021-02-07T20:22:14.793
|
29169
|
793
|
[
"python",
"svm",
"scikit-learn"
] |
990
|
2
| null |
989
|
2
| null |
This makes sense. IIUC, the speed of execution of support vector operations is bound by the number of samples, not dimensionality. In other words, it is capped by CPU time and not RAM. I'm not sure exactly how much time this should take, but I'm running some benchmarks to find out.
| null |
CC BY-SA 3.0
| null |
2014-08-18T11:33:32.617
|
2014-08-18T11:33:32.617
| null | null |
2987
| null |
991
|
2
| null |
987
|
10
| null |
Linear models simply add up their features multiplied by the corresponding weights. If, for example, you have 1000 sparse features, only 3 or 4 of which are active in each instance (and the others are zeros), and 20 dense features that are all non-zero, then it's pretty likely that the dense features will make most of the impact while the sparse features add only a little value. You can check this by looking at the feature weights for a few instances and how they influence the resulting sum.
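A quick way to do that check (a toy scikit-learn sketch; the simulated data is only there to show the mechanics):
```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy setup: 50 mostly-zero "sparse" features plus 5 "dense" features.
X_sparse = (rng.random((200, 50)) < 0.05).astype(float)
X_dense = rng.normal(size=(200, 5))
X = np.hstack([X_sparse, X_dense])
y = (X_dense[:, 0] + 0.5 * X_sparse[:, 0] > 0).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)

# Per-feature contribution to the decision score for a single instance:
x = X[0]
contributions = model.coef_.ravel() * x
top = np.argsort(np.abs(contributions))[::-1][:5]
print(top, contributions[top])   # which features dominate the sum?
```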
One way to fix it is to move away from an additive model. Here are a couple of candidate models.
SVM is based on separating hyperplanes. Though a hyperplane is a linear model itself, SVM doesn't sum up its parameters, but instead tries to split the feature space in an optimal way. Given the number of features, I'd say that a linear SVM should work fine, while more complicated kernels may tend to overfit the data.
Despite its name, Naive Bayes is a pretty powerful statistical model that has shown good results for text classification. It's also flexible enough to capture the imbalance in frequency of sparse and dense features, so you should definitely give it a try.
Finally, random forests may work as a good ensemble method in this case. Randomization will ensure that different kinds of features (sparse/dense) will be used as primary decision nodes in different trees. RF/decision trees are also good for inspecting the features themselves, so it's worth noting their structure anyway.
Note that all of these methods have their drawbacks, which may turn them into garbage in your case. Combining sparse and dense features isn't a really well-studied task, so let us know which of these approaches works best for your case.
| null |
CC BY-SA 3.0
| null |
2014-08-18T13:35:28.000
|
2014-08-18T13:35:28.000
| null | null |
1279
| null |
992
|
1
|
999
| null |
10
|
492
|
I've been analyzing a data set of ~400k records and 9 variables. The dependent variable is binary. I've fitted a logistic regression, a regression tree, a random forest, and a gradient boosted tree. All of them give virtually identical goodness-of-fit numbers when I validate them on another data set.
Why is this so? I'm guessing that it's because my observations to variable ratio is so high. If this is correct, at what observation to variable ratio will different models start to give different results?
|
Why might several types of models give almost identical results?
|
CC BY-SA 3.0
| null |
2014-08-18T14:56:13.800
|
2014-08-20T11:06:54.447
|
2014-08-20T11:06:54.447
|
97
|
1241
|
[
"data-mining",
"classification",
"binary"
] |
993
|
2
| null |
992
|
0
| null |
>
I'm guessing that it's because my observations to variable ratio is so high.
I think this explanation makes perfect sense.
>
If this is correct, at what observation to variable ratio will different models start to give different results?
This will probably depend very much on your specific data (for instance, even whether your nine variables are continuous, factors, ordinal or binary), as well as any tuning decisions you made while fitting your model.
But you can play around with the observation-to-variable ratio - not by increasing the number of variables, but by decreasing the number of observations. Randomly draw 100 observations, fit models and see whether different models yield different results. (I guess they will.) Do this multiple times with different samples drawn from your total number of observations. Then look at subsamples of 1,000 observations... 10,000 observations... and so forth.
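A hedged sketch of that experiment with scikit-learn (synthetic stand-in data; swap in your own X and y):
```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for your data: swap in your own X, y (400k rows, 9 variables).
X, y = make_classification(n_samples=20_000, n_features=9, random_state=0)

models = {
    "logistic": LogisticRegression(max_iter=1000),
    "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "gbm": GradientBoostingClassifier(random_state=0),
}

rng = np.random.default_rng(0)
for n in (100, 1_000, 10_000):
    idx = rng.choice(len(y), size=n, replace=False)
    for name, model in models.items():
        auc = cross_val_score(model, X[idx], y[idx], cv=5, scoring="roc_auc").mean()
        print(n, name, round(auc, 3))
```
Repeating the draws with different seeds gives a feel for how much the gap between models fluctuates at each sample size.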
| null |
CC BY-SA 3.0
| null |
2014-08-18T16:28:02.147
|
2014-08-18T16:28:02.147
| null | null |
2853
| null |
994
|
2
| null |
992
|
4
| null |
It's worth also looking at the training errors.
Basically I disagree with your analysis. If logistic regression etc. are all giving the same results, it would suggest that the 'best model' is a very simple one (that all models can fit equally well - e.g. basically linear).
So then the question might be: why is the best model a simple model?
It might suggest that your variables are not very predictive. It's of course hard to analyse without knowing the data.
| null |
CC BY-SA 3.0
| null |
2014-08-18T17:05:19.957
|
2014-08-18T17:05:19.957
| null | null |
1256
| null |
995
|
2
| null |
992
|
1
| null |
As @seanv507 suggested, the similar performance may simply be due to the data being best separated by a linear model. But in general, the statement that it is because the "observations to variable ratio is so high" is incorrect. Even as your ratio of sample size to number of variables goes to infinity, you should not expect different models to perform nearly identically, unless they all provide the same predictive bias.
| null |
CC BY-SA 3.0
| null |
2014-08-18T17:25:27.007
|
2014-08-18T17:25:27.007
| null | null |
964
| null |
996
|
2
| null |
989
|
98
| null |
Kernelized SVMs require the computation of a distance function between each point in the dataset, which is the dominating cost of $\mathcal{O}(n_\text{features} \times n_\text{observations}^2)$. The storage of the distances is a burden on memory, so they're recomputed on the fly. Thankfully, only the points nearest the decision boundary are needed most of the time. Frequently computed distances are stored in a cache. If the cache is getting thrashed then the running time blows up to $\mathcal{O}(n_\text{features} \times n_\text{observations}^3)$.
You can increase this cache by invoking SVR as
```
model = SVR(cache_size=7000)
```
In general, increasing the cache alone is not going to solve the problem. But all is not lost. You can subsample the data and use the rest as a validation set, or you can pick a different model. Above the 200,000-observation range, it's wise to choose linear learners.
A kernel SVM can be approximated by approximating the kernel matrix and feeding it to a linear SVM. This allows you to trade off between accuracy and performance in linear time.
A popular means of achieving this is to use 100 or so cluster centers found by kmeans/kmeans++ as the basis of your kernel function. The new derived features are then fed into a linear model. This works very well in practice. Tools like [sofia-ml](https://code.google.com/p/sofia-ml/) and [vowpal wabbit](https://github.com/JohnLangford/vowpal_wabbit/wiki) are how Google, Yahoo and Microsoft do this. Input/output becomes the dominating cost for simple linear learners.
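One way to sketch this "approximate the kernel, then go linear" idea in scikit-learn is the Nystroem transformer, used here in place of the explicit k-means centers but playing the same role (synthetic stand-in data):
```python
from sklearn.datasets import make_regression
from sklearn.kernel_approximation import Nystroem
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVR

# Synthetic stand-in for the 595k x 5 dataset; replace with your own X, y.
X, y = make_regression(n_samples=50_000, n_features=5, noise=0.1, random_state=0)

# Approximate an RBF kernel with ~100 components, then fit a linear SVR on
# the transformed features -- roughly linear-time instead of quadratic.
model = make_pipeline(
    StandardScaler(),
    Nystroem(kernel="rbf", n_components=100, random_state=0),
    LinearSVR(max_iter=10_000),
)
model.fit(X, y)
```
RBFSampler (random Fourier features) from the same module is a drop-in alternative.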
Given an abundance of data, nonparametric models perform roughly the same for most problems. The exceptions are structured inputs, like text, images, time series, and audio.
## Further reading
- How to implement this.
- How to train an ngram neural network with dropout that scales linearly
- Kernel Approximations
- A formal paper on using kmeans to approximate kernel machines
| null |
CC BY-SA 3.0
| null |
2014-08-19T00:56:40.890
|
2016-01-14T15:20:08.840
|
2016-01-14T15:20:08.840
|
8820
|
2997
| null |
997
|
1
| null | null |
7
|
5071
|
Where can I find a free spatio-temporal dataset for download so that I can play with it in R?
|
Where can I find a free spatio-temporal dataset for download?
|
CC BY-SA 3.0
| null |
2014-08-19T03:41:24.207
|
2019-07-12T12:49:27.840
|
2016-03-29T10:55:12.397
|
1352
|
2972
|
[
"dataset",
"open-source",
"freebase"
] |
998
|
2
| null |
937
|
34
| null |
No, scikit-learn does not seem to have a forward selection algorithm. However, it does provide recursive feature elimination, which is a greedy feature elimination algorithm similar to sequential backward selection. See the [documentation here](http://scikit-learn.org/stable/modules/generated/sklearn.feature_selection.RFE.html)
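As a quick illustration, a minimal RFE usage sketch (the synthetic dataset, the estimator, and the number of features to keep are arbitrary choices for the example):

```
# Minimal example of recursive feature elimination in scikit-learn.
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, n_features=20, n_informative=5,
                           random_state=0)

# Repeatedly fit the estimator and drop the weakest features until 5 remain.
selector = RFE(LogisticRegression(max_iter=1000), n_features_to_select=5)
selector.fit(X, y)

print("selected feature indices:",
      [i for i, kept in enumerate(selector.support_) if kept])
print("feature ranking:", selector.ranking_)
```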
| null |
CC BY-SA 4.0
| null |
2014-08-19T13:33:32.453
|
2021-08-15T21:08:31.770
|
2021-08-15T21:08:31.770
|
29169
|
2969
| null |
999
|
2
| null |
992
|
7
| null |
This result means that, whatever method you use, you are able to get reasonably close to the optimal decision rule (aka the [Bayes rule](http://en.wikipedia.org/wiki/Admissible_decision_rule#Bayes_rules_and_generalized_Bayes_rules)). The underlying reasons have been explained in Hastie, Tibshirani and Friedman's ["Elements of Statistical Learning"](http://statweb.stanford.edu/~tibs/ElemStatLearn/). They demonstrate how the different methods perform by comparing Figs. 2.1, 2.2, 2.3, 5.11 (in my first edition, in the section on multidimensional splines), 12.2, 12.3 (support vector machines), and probably some others. If you have not read that book, you need to drop everything RIGHT NOW and read it. (I mean, it isn't worth losing your job, but it is worth missing a homework or two if you are a student.)
I don't think the observations-to-variables ratio is the explanation. In light of the rationale offered above, it is the relatively simple form of the boundary separating your classes in the multidimensional space that all of the methods you tried have been able to identify.
| null |
CC BY-SA 3.0
| null |
2014-08-19T14:13:07.147
|
2014-08-19T14:13:07.147
| null | null |
1237
| null |
1000
|
2
| null |
997
|
3
| null |
You can get some documented, publicly available EEG data from the HeadIT database at UCSD.
[http://headit-beta.ucsd.edu/studies](http://headit-beta.ucsd.edu/studies)
The data itself appears to be in Biosemi Data Format (.bdf) files, described here: [http://www.biosemi.com/faq/file_format.htm](http://www.biosemi.com/faq/file_format.htm)
Biosemi provides links to several open-source methods to access and import .bdf files on their website, including several functions for importing into Matlab, as well as into Python (BioSig) and C/C++ libraries:
[http://www.biosemi.com/download.htm](http://www.biosemi.com/download.htm)
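As one possible starting point in Python - using MNE-Python rather than the BioSig toolbox linked above, so this is an alternative route, not the tools the links describe - a minimal sketch of loading a .bdf file (the file path is a placeholder):

```
# Hypothetical sketch: read a Biosemi .bdf recording with MNE-Python
# (an alternative to the BioSig importers linked above).
import mne

raw = mne.io.read_raw_bdf("recording.bdf", preload=True)  # placeholder path
print(raw.info)                 # channel names, sampling rate, etc.
data, times = raw[:, :1000]     # first 1000 samples of every channel
```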
Just as a forewarning, EEG data can be a bit of a bear to work with, due to its inherently low signal-to-noise ratio.
| null |
CC BY-SA 3.0
| null |
2014-08-19T15:21:57.833
|
2014-08-19T15:21:57.833
| null | null |
2969
| null |
1001
|
2
| null |
997
|
5
| null |
The first thing that comes to mind is personal workout data from running or biking apps.
Otherwise, there is NYC's taxi trip data. A quick Google search brought me this: [http://www.andresmh.com/nyctaxitrips/](http://www.andresmh.com/nyctaxitrips/). Variables include time and location for both pickups and dropoffs.
Another dataset comes from Chicago's bike-sharing service. It can be found here: [https://www.divvybikes.com/datachallenge](https://www.divvybikes.com/datachallenge).
| null |
CC BY-SA 3.0
| null |
2014-08-19T15:47:29.413
|
2014-08-19T15:47:29.413
| null | null |
525
| null |