Id | PostTypeId | AcceptedAnswerId | ParentId | Score | ViewCount | Body | Title | ContentLicense | FavoriteCount | CreationDate | LastActivityDate | LastEditDate | LastEditorUserId | OwnerUserId | Tags |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
4945
|
2
| null |
4943
|
20
| null |
The regularization parameter (lambda) controls how much importance is given to misclassifications. SVMs pose a quadratic optimization problem that seeks to maximize the margin between the two classes while minimizing the number of misclassifications. However, for non-separable problems, the misclassification constraint must be relaxed in order to find a solution at all, and this is done by introducing the mentioned "regularization".
So, intuitively, as lambda grows larger, fewer wrongly classified examples are allowed (or they pay a higher price in the loss function). As lambda tends to infinity the solution tends to the hard margin (no misclassification allowed); as lambda tends to 0 (without reaching 0), more misclassifications are allowed.
There is definitely a trade-off between these two, and normally smaller lambdas, but not too small, generalize well. Below are three examples for linear (binary) SVM classification.



For non-linear-kernel SVMs the idea is similar.
Given this, higher values of lambda carry a higher possibility of overfitting, while lower values of lambda carry a higher possibility of underfitting.
The images below show the behavior for an RBF kernel, keeping the sigma parameter fixed at 1 and trying lambda = 0.01 and lambda = 10.


You could say the first figure, where lambda is lower, is more "relaxed" than the second, where the data is fitted more precisely.
(Slides from Prof. Oriol Pujol. Universitat de Barcelona)
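To make the trade-off concrete, here is a minimal sketch (not from the slides; it assumes scikit-learn, where the misclassification penalty is exposed as `C` and plays the role of the lambda above - the dataset and values are purely illustrative):
```
from sklearn.datasets import make_blobs
from sklearn.svm import SVC

# Toy two-class problem with some overlap between the classes.
X, y = make_blobs(n_samples=200, centers=2, cluster_std=2.5, random_state=0)

for C in (0.01, 1.0, 100.0):
    clf = SVC(kernel="linear", C=C).fit(X, y)
    # Larger C -> fewer misclassifications tolerated -> closer to the hard margin.
    print(C, clf.n_support_, clf.score(X, y))
```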
| null |
CC BY-SA 4.0
| null |
2015-01-26T01:37:05.640
|
2021-01-10T16:58:56.070
|
2021-01-10T16:58:56.070
|
90523
|
5143
| null |
4946
|
2
| null |
4942
|
13
| null |
This is very broad question, which I think it's impossible to cover comprehensively in a single answer. Therefore, I think that it would be more beneficial to provide some pointers to relevant answers and/or resources. This is exactly what I will do by providing the following information and thoughts of mine.
First of all, I should mention the excellent and comprehensive [tutorial on dimensionality reduction](http://research.microsoft.com/en-us/um/people/cburges/papers/fnt_dimensionreduction.pdf) by Burges (2009) from Microsoft Research. He touches on high-dimensional aspects of data frequently throughout the monograph. This work, referring to dimensionality reduction as dimension reduction, presents a theoretical introduction into the problem, suggests a taxonomy of dimensionality reduction methods, consisting of projective methods and manifold modeling methods, as well as provides an overview of multiple methods in each category.
The "projective pursuit" methods reviewed include independent component analysis (ICA), principal component analysis (PCA) and its variations, such as kernel PCA and probabilistic PCA, canonical correlation analysis (CCA) and its kernel CCA variation, linear discriminant analysis (LDA), kernel dimension reduction (KDR) and some others. The manifold methods reviewed include multidimensional scaling (MDS) and its landmark MDS variation, Isomap, Locally Linear Embedding and graphical methods, such as Laplacian eigenmaps and spectral clustering. I'm listing the most of the reviewed methods here in case, if the original publication is inaccessible for you, either online (link above), or offline (References).
There is a caveat for the term "comprehensive" that I've applied to the above-mentioned work. While it is indeed rather comprehensive, this is relative, as some of the approaches to dimensionality reduction are not discussed in the monograph, in particular, the ones, focused on unobservable (latent) variables. Some of them are mentioned, though, with references to another source - a book on dimensionality reduction.
Now, I will briefly cover several narrower aspects of the topic in question by referring to my relevant or related answers. In regard to nearest neighbors (NN)-type approaches to high-dimensional data, please see my answers [here](https://datascience.stackexchange.com/a/975/2452) (I especially recommend to check the paper #4 in my list). One of the effects of the curse of dimensionality is that high-dimensional data is frequently sparse. Considering this fact, I believe that my relevant answers [here](https://datascience.stackexchange.com/a/918/2452) and [here](https://stats.stackexchange.com/a/130665/31372) on regression and PCA for sparse and high-dimensional data might be helpful.
References
Burges, C. J. C. (2010). Dimension reduction: A guided tour. Foundations and Trends® in Machine Learning, 2(4), 275-365. doi:10.1561/2200000002
| null |
CC BY-SA 3.0
| null |
2015-01-26T08:00:14.630
|
2017-02-14T16:31:51.417
|
2017-04-13T12:50:41.230
|
-1
|
2452
| null |
4947
|
2
| null |
4944
|
4
| null |
A common strategy for dealing with imbalance is to penalize more heavily the misclassifications that favor the more frequent class.
In a binary classification problem you could weight each misclassification by 1/n, where n is the number of examples of the opposite class.
See the following from Prof. Jordi Vitriá

This is the loss function for structured output SVM.
The problem you mention is common in object recognition and object classification in images, where many more background images are used than images containing the object. An even stronger case is exemplar SVMs, where just a single image of the object is used.
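As a hedged illustration of that kind of penalty (not from the cited slides; it assumes scikit-learn, and the toy data is made up), `class_weight` applies exactly this per-class weighting:
```
import numpy as np
from sklearn.svm import SVC

# Toy imbalanced data: 95 negatives, 5 positives (illustrative only).
rng = np.random.RandomState(0)
X = np.vstack([rng.normal(0, 1, (95, 2)), rng.normal(2, 1, (5, 2))])
y = np.array([0] * 95 + [1] * 5)

# 'balanced' sets each class weight roughly proportional to 1 / class frequency,
# so misclassifying the rare class costs more.
clf = SVC(kernel="linear", class_weight="balanced").fit(X, y)
# Explicit weights work too, e.g. penalize errors on class 1 ten times harder:
clf_manual = SVC(kernel="linear", class_weight={0: 1.0, 1: 10.0}).fit(X, y)
```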
| null |
CC BY-SA 3.0
| null |
2015-01-26T09:39:41.527
|
2015-01-26T09:52:10.093
|
2015-01-26T09:52:10.093
|
5143
|
5143
| null |
4948
|
2
| null |
4944
|
2
| null |
Some good answers have already been posted at this site:
- Quick guide into training highly imbalanced data sets
And on Stats SE:
- https://stats.stackexchange.com/questions/81111/classification-problem-using-imbalanced-dataset
- https://stats.stackexchange.com/questions/16050/how-to-handle-data-imbalance-in-classification
- https://stats.stackexchange.com/questions/60180/testing-classification-on-oversampled-imbalance-data?rq=1
| null |
CC BY-SA 3.0
| null |
2015-01-26T14:45:43.053
|
2015-01-26T14:45:43.053
|
2017-04-13T12:50:41.230
|
-1
|
97
| null |
4949
|
1
|
4954
| null |
4
|
210
|
I hope this is a question appropriate for SO.
The article in question: [http://www.nytimes.com/2015/01/25/opinion/sunday/seth-stephens-davidowitz-searching-for-sex.html](http://www.nytimes.com/2015/01/25/opinion/sunday/seth-stephens-davidowitz-searching-for-sex.html)
As far as I can tell, the only publicly available data from Google Search is through their Trends API. The help page states that
>
The numbers on the graph reflect how many searches have been done for a particular term, relative to the total number of searches done on Google over time. They don't represent absolute search volume numbers, because the data is normalized and presented on a scale from 0-100.
However in the article, the author reports (absolute) "average monthly searches". The source is stated as:
>
All monthly search numbers are approximate and derived from anonymous and aggregate web activity.
Source: analysis of Google data by (author)
So, how did he get this "anonymous and aggregate web activity"?
|
Where did the author of this NY Times op-ed get his Google Search data?
|
CC BY-SA 3.0
| null |
2015-01-26T15:45:51.317
|
2015-03-24T15:07:35.520
|
2020-06-16T11:08:43.077
|
-1
|
7961
|
[
"dataset",
"search",
"google"
] |
4950
|
1
| null | null |
5
|
2546
|
I am looking for a method to parse semi-structured textual data, i.e. data poorly formatted but usually having a visual structure of a matrix which may vary a lot in content and number of items in it, which may have headers or not, which may be interpreted sometimes column-wise or row-wise, and so on.
I have read about the WHISK information extraction paper : [https://homes.cs.washington.edu/~soderlan/soderland_ml99.pdf](https://homes.cs.washington.edu/~soderlan/soderland_ml99.pdf)
but unfortunately, it is not very detailed and I have not been able to find a real system implementing it, or even snippets of code.
Does anybody have an idea where I can find such help, or can you suggest an alternative approach which may be suited to my problem?
Thank you in advance for your reply!
|
semi-structured text parsing using machine learning
|
CC BY-SA 3.0
| null |
2015-01-26T17:45:02.203
|
2016-04-02T15:51:32.827
| null | null |
7966
|
[
"text-mining",
"information-retrieval",
"parsing"
] |
4951
|
1
|
4993
| null |
1
|
576
|
I came across an SVM predictive model where the author used the probabilistic distribution value of the target variable as a feature in the feature set. For example:
The author built a model for each gesture of each player to guess which gesture would be played next. Calculated over 1000 games played, the distribution might look like (20%, 10%, 70%). These numbers were then used as feature variables to predict the target variable in cross-fold validation.
Is that legitimate? That seems like cheating. I would think you would have to exclude the target variables from your test set when calculating features in order to not "cheat".
|
Can distribution values of a target variable be used as features in cross-validation?
|
CC BY-SA 3.0
| null |
2015-01-26T18:20:58.203
|
2015-01-30T13:52:46.097
|
2015-01-30T12:42:45.187
|
3430
|
3430
|
[
"accuracy",
"methods"
] |
4952
|
2
| null |
4951
|
2
| null |
There is nothing necessarily wrong with this. If you have no better information, then using past performance (i.e., prior probabilities) can work pretty well, particularly when your classes are very unevenly distributed.
Example methods using class priors are [Gaussian Maximum Likelihood](http://en.wikipedia.org/wiki/Maximum_likelihood) classification and [Naïve Bayes](http://en.wikipedia.org/wiki/Naive_Bayes_classifier).
[UPDATE]
Since you've added additional details to the question...
Suppose you are doing 10-fold cross-validation (holding out 10% of the data for validating each of the 10 subsets). If you use the entire data set to establish the priors (including the 10% of validation data), then yes, it is "cheating" since each of the 10 subset models uses information from the corresponding validation set (i.e., it is not truly a blind test). However, if the priors are recomputed for each fold using only the 90% of data used for that fold, then it is a "fair" validation.
An example of the effect of this "cheating" is if you have a single, extreme outlier in your data. Normally, with k-fold cross-validation, there would be one fold where the outlier is in the validation data and not the training data. When applying the corresponding classifier to the outlier during validation, it would likely perform poorly. However, if the training data for that fold included global statistics (from the entire data set), then the outlier would influence the statistics (priors) for that fold, potentially resulting in artificially favorable performance.
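A minimal sketch of the "fair" procedure described above (illustrative only - toy data, and the model-fitting step is left as a placeholder): any statistic used as a feature, here the class priors, is recomputed inside each fold from that fold's training portion.
```
import numpy as np
from sklearn.model_selection import KFold

# Toy labels; in practice y would be your class history (illustrative only).
y = np.array([0, 0, 1, 2, 1, 0, 2, 2, 1, 0])
X = np.arange(len(y)).reshape(-1, 1)  # placeholder features

for train_idx, val_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    # "Fair": priors estimated from the training fold only...
    priors = np.bincount(y[train_idx], minlength=3) / len(train_idx)
    # ...and only then appended as features to both folds.
    X_train = np.hstack([X[train_idx], np.tile(priors, (len(train_idx), 1))])
    X_val = np.hstack([X[val_idx], np.tile(priors, (len(val_idx), 1))])
    # fit and evaluate a model on X_train / X_val here
```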
| null |
CC BY-SA 3.0
| null |
2015-01-26T19:24:51.783
|
2015-01-30T13:52:46.097
|
2015-01-30T13:52:46.097
|
964
|
964
| null |
4953
|
2
| null |
4944
|
0
| null |
I would also suggest you try the idea of 'anomaly detection' using a Gaussian distribution. In some cases it works really well - especially if you have VERY skewed classes (say, among a million examples only 10-20 are '1' (in class) and all the rest are '0'). You can look it up in this video by Prof. Andrew Ng.
[http://www.youtube.com/watch?v=h5iVXB9mczo](http://www.youtube.com/watch?v=h5iVXB9mczo)
Or in text:
[http://www.holehouse.org/mlclass/15_Anomaly_Detection.html](http://www.holehouse.org/mlclass/15_Anomaly_Detection.html)
Notice that this is not framed as a classification problem, and it does not use a classification algorithm.
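For illustration, a rough sketch of the idea (not the course's code; the data, the naive per-feature Gaussian and the threshold are all assumptions): fit a Gaussian to the 'normal' examples only, then flag anything whose density falls below a threshold epsilon.
```
import numpy as np
from scipy.stats import norm

# Fit a per-feature Gaussian to "normal" examples only (illustrative data).
rng = np.random.RandomState(0)
X_normal = rng.normal(loc=0.0, scale=1.0, size=(1000, 3))
mu, sigma = X_normal.mean(axis=0), X_normal.std(axis=0)

def density(x):
    # Naive independence assumption: product of per-feature densities.
    return np.prod(norm.pdf(x, loc=mu, scale=sigma), axis=-1)

epsilon = 1e-4           # threshold; in practice chosen on a labelled validation set
x_new = np.array([4.5, -3.8, 5.1])
print("anomaly" if density(x_new) < epsilon else "normal")
```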
| null |
CC BY-SA 3.0
| null |
2015-01-26T21:09:16.850
|
2015-01-26T21:09:16.850
| null | null |
7969
| null |
4954
|
2
| null |
4949
|
3
| null |
Google AdWords. That has absolute search volumes.
| null |
CC BY-SA 3.0
| null |
2015-01-27T01:37:13.737
|
2015-01-27T01:37:13.737
| null | null |
7972
| null |
4955
|
1
| null | null |
8
|
1730
|
My company provides managed services to a lot of its clients. Our customers typically use the following monitoring tools to monitor their servers/webapps:
- OpsView
- Nagios
- Pingdom
- Custom shell scripts
Whenever any issue is found, an alert mail comes to our Ops team so that they act upon rectifying the issue.
As we manage thousands of servers, our Ops teams' inbox is flooded with email alerts all the time. Even a single issue which has a cascading effect, can trigger 20-30 emails.
Now, what I want to do is to implement a system which will be able to extract important features out of an alert email - like server IP address, type of problem, severity of problem etc. and also classify the emails into proper category, like `CPU-Load-Customer1-Server2, MySQL-Replication-Customer2-DBServer3` etc. We will then have a pre-defined set of debugging steps for each category, in order to help the Ops team to rectify the problem faster. Also, the feature extractor will provide input data to the team for a problem.
So far I have been able to train a NaiveBayesClassifier with supervised learning techniques, i.e. labeled training data (cluster data), and I am able to classify new, unseen emails into their proper cluster/category. As the emails are based on certain templates, the accuracy of the classifier is very high. But we also get alert emails from custom scripts, which may not follow the templates. So, instead of doing supervised learning, I want to try out unsupervised learning for the same task. I am looking into KMeans clustering, but again the problem is that we won't know the number of clusters beforehand. So, which algorithm would be best for this use case? Right now I am using Python's TextBlob library for classification.
Also, for feature extraction out of an alert email, I am looking into the NLTK ([http://www.nltk.org/book/ch07.html](http://www.nltk.org/book/ch07.html)) library. I tried it out, but it seems to work well on proper English paragraphs/texts; however, for alert emails it extracted a lot of unnecessary features. Is there already an existing solution for this? If not, what will be the best way to implement it? Which library, which algorithm?
PS: I am not a Data Scientist.
Sample emails:
```
PROBLEM: CRITICAL - Customer1_PROD - Customer1_PROD_SLAVE_DB_01 - CPU Load Avg Service: CPU Load Avg Host: Customer1_PROD_SLAVE_DB_01 Alias: Customer1_PROD_SLAVE_DB_01 Address: 10.10.0.100 Host Group Hierarchy: Opsview > Customer1 - BIG C > Customer1_PROD State: CRITICAL Date & Time: Sat Oct 4 07:02:06 UTC 2014 Additional Information: CRITICAL - load average: 41.46, 40.69, 37.91
RECOVERY: OK - Customer1_PROD - Customer1_PROD_SLAVE_DB_01 - CPU Load Avg Service: CPU Load Avg Host: Customer1_PROD_SLAVE_DB_01 Alias: Customer1_PROD_SLAVE_DB_01 Address: 10.1.1.100 Host Group Hierarchy: Opsview > Customer1 - BIG C > Customer1_PROD State: OK Date & Time: Sat Oct 4 07:52:05 UTC 2014 Additional Information: OK - load average: 0.36, 0.23, 4.83
PROBLEM: CRITICAL - Customer1_PROD - Customer1_PROD_SLAVE_DB_01 - CPU Load Avg Service: CPU Load Avg Host: Customer1_PROD_SLAVE_DB_01 Alias: Customer1_PROD_SLAVE_DB_01 Address: 10.100.10.10 Host Group Hierarchy: Opsview > Customer1 - BIG C > Customer1_PROD State: CRITICAL Date & Time: Sat Oct 4 09:29:05 UTC 2014 Additional Information: CRITICAL - load average: 29.59, 26.50, 18.49
```
Classifier code (CSV format: email, <disk/cpu/memory/mysql>):
```
from textblob import TextBlob
from textblob.classifiers import NaiveBayesClassifier
import csv

train = []
# Each file holds rows of (alert email text, label); read them all into one training list.
# (Done in a loop here, as the original comment suggested.)
for fname in ('cpu.txt', 'memory.txt', 'disk.txt', 'mysql.txt'):
    with open(fname, 'r') as csvfile:
        reader = csv.reader(csvfile, delimiter=',', quotechar='"')
        for row in reader:
            tup = unicode(row[0], "ISO-8859-1"), row[1]  # Python 2: decode the email text
            train.append(tup)

cl = NaiveBayesClassifier(train)
cl.classify(email)  # `email` is the new alert message to categorize
```
Feature extractor code taken from: [https://gist.github.com/shlomibabluki/5539628](https://gist.github.com/shlomibabluki/5539628)
Please let me know if any more information is required here.
Thanks in advance.
|
How to extract features and classify alert emails coming from monitoring tools into proper category?
|
CC BY-SA 3.0
| null |
2015-01-27T10:31:10.233
|
2015-06-28T08:31:28.390
|
2015-06-28T08:31:28.390
|
10337
|
7979
|
[
"machine-learning",
"classification",
"clustering",
"feature-extraction"
] |
4956
|
2
| null |
155
|
12
| null |
To add to a possibly never ending list:
as mentioned by cyndd, there is [Wikidata](http://www.wikidata.org/wiki/Wikidata:Main_Page),
and for curated structured knowledge, [Wolfram Alpha](http://www.wolframalpha.com/).
| null |
CC BY-SA 3.0
| null |
2015-01-27T11:15:04.250
|
2015-01-27T11:15:04.250
| null | null |
7980
| null |
4957
|
1
|
4966
| null |
5
|
19439
|
Do you know of any machine learning add-ins that I could use within Excel? For example I would like to be able to select a range of data and use that for training purposes and then use another sheet for getting the results of different learning algorithms.
|
Machine learning toolkit for Excel
|
CC BY-SA 3.0
| null |
2015-01-27T15:13:09.157
|
2020-02-19T03:55:45.710
| null | null |
7982
|
[
"machine-learning",
"neural-network"
] |
4958
|
2
| null |
4875
|
0
| null |
I have recently been working on a similar analysis. I wrote some functions to test all possible combinations between variables; however, they are specific to my own data set, which is certainly different from yours.
This is a fairly small job, so I cannot point to any package dealing with such tests. You have already worked out some combinations; just keep going toward an ideal function - it can probably be done in a couple of days.
I add a link here, which partially answers your question and code is included: [https://stats.stackexchange.com/questions/4040/r-compute-correlation-by-group](https://stats.stackexchange.com/questions/4040/r-compute-correlation-by-group)
| null |
CC BY-SA 3.0
| null |
2015-01-27T22:01:26.237
|
2015-01-28T03:16:42.000
|
2017-04-13T12:44:20.183
|
-1
|
7989
| null |
4959
|
2
| null |
4875
|
4
| null |
The idea you have in mind is called "feature selection" or "attribute selection". The fact that you have a categorical dependent variable and continuous independent variables is mostly irrelevant because you're expected to use an algorithm or statistical method that is suitable for your requirements.
As for feature selection methods, there are several options:
- Find the subset of features that achieves better performance (usually in cross validation)
- Find the subset of features that correlates highly with the target variable and weakly with each other (although other criteria can be used)
- Use an algorithm that includes a built-in feature selection mechanism (e.g. decision trees, hierarchical bayesian methods)
Furthermore, there are several methods aimed at obtaining a good compromise between a thorough search and reasonable execution time (e.g. best-first search, steepest-ascent search, etc.)
This [question](https://stats.stackexchange.com/questions/56092/feature-selection-packages-in-r) in particular provides very good suggestions for R packages.
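The link above covers R packages; as a rough Python/scikit-learn sketch of the second and third ideas (a filter-style selection and an estimator with built-in selection), something like this could serve as a starting point - the dataset and parameters are arbitrary:
```
from sklearn.datasets import load_iris
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.ensemble import RandomForestClassifier

# Continuous features, categorical target (as in the question's setting).
X, y = load_iris(return_X_y=True)

# Filter approach: keep the k features most associated with the target.
X_reduced = SelectKBest(f_classif, k=2).fit_transform(X, y)

# Embedded approach: tree ensembles expose importances as a built-in selection signal.
importances = RandomForestClassifier(random_state=0).fit(X, y).feature_importances_
print(X_reduced.shape, importances)
```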
| null |
CC BY-SA 3.0
| null |
2015-01-28T03:55:58.367
|
2015-01-28T03:55:58.367
|
2017-04-13T12:44:20.183
|
-1
|
4621
| null |
4960
|
2
| null |
4957
|
1
| null |
[Weka](http://www.cs.waikato.ac.nz/ml/weka/) can import CSV files, and allows you to choose which columns and rows you want to use in your analysis. It's not an "add-in" for Excel per se, but it might work for you.
| null |
CC BY-SA 3.0
| null |
2015-01-28T04:34:30.777
|
2015-01-28T04:34:30.777
| null | null |
7961
| null |
4961
|
2
| null |
4957
|
3
| null |
First of all, let me tell you that Excel shouldn't be used for machine learning or any data analysis complicated enough that you wouldn't be comfortable doing it on paper. Why? Here is a list of resources to tell you why:
- You shouldn’t use a spreadsheet for important work (I mean it)
- Destroy Your Data Using Excel With This One Weird Trick!
- Using Excel for Statistical Data Analysis - Caveats
- Problems with Excel
- Spreadsheet Addiction
Now, if you really really want to do heavy calculations without exporting your data, I suggest using [xlwings](http://xlwings.org/). Basically, this allows two-way communication between Excel and Python. Watch the video in the homepage for a quick introduction. In this way, you would be able to use numpy, pandas and scikit-learn (or other machine learning library that you may prefer) without exporting your data first.
| null |
CC BY-SA 3.0
| null |
2015-01-28T04:42:19.780
|
2015-01-28T17:00:58.453
|
2015-01-28T17:00:58.453
|
4621
|
4621
| null |
4962
|
2
| null |
4957
|
2
| null |
Nobody does serious machine learning in Excel; that's not what it's for. Fortunately, you can directly import Excel files into better platforms like python. In particular, there's a great package called `pandas`, which makes work very pleasant. [Here's a demo](https://www.youtube.com/watch?v=_JZFSFR6Yeo).
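For example (a hedged sketch - the file name, sheet name and column names are placeholders for your own workbook):
```
import pandas as pd
from sklearn.linear_model import LogisticRegression

# "data.xlsx", "training" and "label" are hypothetical names for your own workbook.
df = pd.read_excel("data.xlsx", sheet_name="training")
X, y = df.drop(columns=["label"]), df["label"]

# Any scikit-learn estimator can then consume the DataFrame directly.
model = LogisticRegression(max_iter=1000).fit(X, y)
print(model.score(X, y))
```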
| null |
CC BY-SA 3.0
| null |
2015-01-28T05:53:28.083
|
2015-01-28T05:53:28.083
| null | null |
381
| null |
4963
|
1
|
4964
| null |
2
|
962
|
I found that Apache-Spark very powerful in Big-Data processing. but I want to know about Dryad (Microsoft) benefits. Is there any advantage for this framework than Spark?
Why we must use Dryad instead of Spark?
|
What is advantage of using Dryad instead of Spark?
|
CC BY-SA 3.0
| null |
2015-01-28T05:57:56.090
|
2016-04-11T05:08:41.997
|
2016-04-11T05:08:41.997
|
11097
|
7977
|
[
"bigdata"
] |
4964
|
2
| null |
4963
|
1
| null |
Dryad is an academic project, whereas Spark is widely deployed in production, and now has a company behind it for support. Just focus on Spark.
| null |
CC BY-SA 3.0
| null |
2015-01-28T06:41:28.650
|
2015-01-28T06:41:28.650
| null | null |
381
| null |
4965
|
2
| null |
4875
|
2
| null |
I would suggest considering latent variable modeling (LVM) or the similar structural equation modeling (SEM) as an approach to this problem. This approach is based on recognizing and analyzing latent variables - constructs (factors) that are measured not directly, but through sets of measured variables (indicators). Note that the closely related term latent feature is frequently used within the machine learning domain. It seems to me that latent variables resemble what you call "combinations/subsets/segments of the IVs".
By hypothesizing - usually based on theory or domain knowledge - the latent structure of factors, LVM or SEM is able to confirm or reject those hypotheses. This is done by using a combination of exploratory factor analysis (EFA) and confirmatory factor analysis (CFA) (see my relevant answer). While EFA is frequently performed independently (and maybe that's enough for your purposes), doing it along with CFA represents a large part of the LVM/SEM methodology, which is usually completed by performing path analysis, which is concerned with relationships between latent variables.
The `R` ecosystem offers a variety of packages for performing LVM/SEM in its entirety or for performing EFA, CFA and path analysis. The most popular ones for EFA are `psych`, `GPArotation` and `Hmisc`. The most popular packages for CFA, path analysis and LVM are `sem` (the first R package for SEM), `lavaan`, `OpenMx`, `semPLS`, `plspm`. Various supplementary SEM-focused packages are [also available](http://pairach.com/2011/08/13/r-packages-for-structural-equation-model).
| null |
CC BY-SA 3.0
| null |
2015-01-28T09:24:14.097
|
2015-01-28T09:24:14.097
| null | null |
2452
| null |
4966
|
2
| null |
4957
|
11
| null |
As far as I know, currently there are not that many projects and products that allow you to perform serious machine learning (ML) work from within Excel.
However, the situation seems to be changing rapidly due to Microsoft's active efforts in popularizing its ML cloud platform Azure ML (along with ML Studio). The [recent acquisition](http://blogs.microsoft.com/blog/2015/01/23/microsoft-acquire-revolution-analytics-help-customers-find-big-data-value-advanced-statistical-analysis) of the R-focused company Revolution Analytics by Microsoft (which appears to me to be, to a large extent, an acqui-hire) is an example of the company's aggressive data science market strategy.
In regard to ML toolkits for Excel, as confirmation that we should expect most Excel-enabled ML projects and products to be Azure ML-focused, consider the following two projects (the latter is open source):
- Excel DataScope (Microsoft Research): https://www.microsoft.com/en-us/research/video/excel-datascope-overview/
- Azure ML Excel Add-In (seems to be Microsoft sponsored): https://azuremlexcel.codeplex.com
| null |
CC BY-SA 4.0
| null |
2015-01-28T10:33:38.880
|
2018-09-07T03:24:23.073
|
2018-09-07T03:24:23.073
|
29575
|
2452
| null |
4967
|
1
| null | null |
5
|
991
|
Let's say I'm trying to predict a person's electricity consumption, using the time of day as a predictor (hours 00-23), and further assume I have a hefty but finite amount of historical measurements.
Now, I'm trying to set up a linear model akin to
$power.used = \alpha* hr.of.day + \beta * temperature$
Problem: using $hr.of.day$ as a numerical value is a very bad idea for many reasons. The fact that 23 and 0 are actually quite close values is one problem, which can be solved with a simple transformation [1]. The fact that electrical consumption is often bi-modal is another problem which isn't solved by a simple transformation.
A possible solution that works rather well is to treat the time of day as a categorical variable. That does the trick, but it suffers from a significant drawback in that there's no information sharing between neighbouring hours.
So what I'm asking is this: does anyone know of a "soft" version of categorical values? I'm suggesting something quite loosely defined: Ideally I would have some parameter alpha that reduces the regression to numerical regression where $\alpha = 1$ and reduces to categorical regression where $\alpha = 0$, and behaves "in between" if it's some other number.
Right now the only answer I can think of is to alter the weights in the regression in such a way that they tend towards zero the further away the quasi-categorical value is from the desired value. Surely there are other approaches?
[1]
introduce the hour variable as two new variables: $\cos(2\pi \cdot time.of.day/24)$ and $\sin(2\pi \cdot time.of.day/24)$
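For reference, a small sketch of the transformation in [1] (the 2*pi factor is what makes hour 23 land next to hour 0 on the circle):
```
import numpy as np

hours = np.arange(24)
# Map the hour onto the unit circle so 23:00 and 00:00 end up adjacent.
hr_sin = np.sin(2 * np.pi * hours / 24)
hr_cos = np.cos(2 * np.pi * hours / 24)
```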
|
Quasi-categorical variables - any ideas?
|
CC BY-SA 4.0
| null |
2015-01-28T13:42:15.430
|
2019-04-24T21:18:29.377
|
2019-04-24T21:18:29.377
|
71218
|
7999
|
[
"time-series",
"regression",
"categorical-data"
] |
4968
|
1
|
4976
| null |
3
|
3509
|
Both the Apache Spark and Apache Flink projects claim pretty similar capabilities.
What is the difference between these projects? Is there any advantage to either Spark or Flink?
Thanks
|
What are the differences between Apache Spark and Apache Flink?
|
CC BY-SA 3.0
| null |
2015-01-28T17:40:32.450
|
2018-09-27T05:00:10.680
| null | null |
7977
|
[
"bigdata"
] |
4969
|
2
| null |
305
|
5
| null |
The current size limit for Amazon Redshift is 128 nodes or 2 PB of compressed data. That might be circa 6 PB uncompressed, though mileage varies with compression. You can always let us know if you need more: anurag@aws (I run Amazon Redshift and Amazon EMR).
| null |
CC BY-SA 3.0
| null |
2015-01-28T18:42:06.763
|
2015-01-28T18:42:06.763
| null | null |
8003
| null |
4971
|
1
| null | null |
4
|
460
|
Twitter is a popular source of data for many applications, especially involving sentiment analysis and the like. I have some things I'm interested in doing with Twitter data, but here's the issue: To get all Tweets, you have to get special permission from Twitter (which, as I understand it, is never granted) or pay big bucks to Gnip or the like.
OTOH, [Twitter's API documentation](https://dev.twitter.com/streaming/firehose) says:
Few applications require this level of access. Creative use of a combination of other resources and various access levels can satisfy nearly every application use case.
Using the filter api with keyword tracking seems like something that would be a big part of this, but you obviously can't enumerate every keyword. Using a User stream on many User accounts that follow a lot of people might be an option as well, and I'm not sure if it makes sense to think about using the search API in addition.
So here's the question "What combination of other resources and access levels is the best way to get the maximum amount of data from Twitter"?
|
How to access maximum volume of tweets using Twitter Streaming API, without firehose access?
|
CC BY-SA 3.0
| null |
2015-01-28T19:24:07.543
|
2015-09-15T04:38:47.217
|
2015-07-31T08:33:37.290
|
21
|
6554
|
[
"software-development"
] |
4972
|
2
| null |
4967
|
6
| null |
I would suggest using the idea of so-called 'fuzzy clustering', where you put each hour-of-the-day value into several clusters at the same time. Details here: [http://home.deib.polimi.it/matteucc/Clustering/tutorial_html/cmeans.html](http://home.deib.polimi.it/matteucc/Clustering/tutorial_html/cmeans.html)
The idea is trivial:
You decide how many clusters you want to have - for example, 4 (so you divide your day hours into 4 categories). Instead of computing just 1 number (which defines cluster membership) for each hour of the day, you compute 4 numbers which represent the degree of membership to each of the 4 clusters. So, for example, if your 4 clusters contain the periods 12 AM-6 AM, 6 AM-12 PM, 12 PM-6 PM and 6 PM-12 AM, then you would replace, say, the 4 AM hour in the original data with a vector of 4 numbers: the first one is the biggest, the second is smaller, the third is the smallest, etc.
Then you could use these 4 numbers in your model to fit a regression line.
Of course, if you want, you could use 24 clusters; in that case each hour of the day would have a high 'relation' with nearby hours and almost 0 with distant hours.
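A rough sketch of what that could look like in code (not from the linked tutorial - the membership functions, cluster centres and width are made-up choices): replace each hour with its degrees of membership in a few overlapping clusters, using a wrap-around distance so 23:00 stays close to 00:00.
```
import numpy as np

centers = np.array([3, 9, 15, 21])        # illustrative cluster centres (hours)

def fuzzy_hour_features(hour, width=4.0):
    # Circular distance so 23:00 and 00:00 are treated as neighbours.
    d = np.abs(hour - centers)
    d = np.minimum(d, 24 - d)
    m = np.exp(-(d / width) ** 2)          # Gaussian-style membership degrees
    return m / m.sum()                     # normalise so the degrees sum to 1

print(fuzzy_hour_features(4))   # mostly in the 3h cluster, a little in the 9h one
```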
| null |
CC BY-SA 3.0
| null |
2015-01-28T21:56:23.980
|
2015-01-28T22:02:09.333
|
2015-01-28T22:02:09.333
|
7969
|
7969
| null |
4973
|
2
| null |
155
|
16
| null |
Did you know about the PUMA Benchmarks and dataset downloads?
[https://sites.google.com/site/farazahmad/pumadatasets](https://sites.google.com/site/farazahmad/pumadatasets)
It does include the following:
- TeraSort
- Wikipedia
- Self-Join
- Adjacency-List
- Movies-database
- Ranked-Inverted-Index
| null |
CC BY-SA 3.0
| null |
2015-01-28T22:27:18.587
|
2015-01-28T22:27:18.587
| null | null |
2699
| null |
4974
|
1
| null | null |
6
|
3083
|
I have an array of edges and weights:
```
[['a', 'b', 4],
['a', 'c', 3],
['c', 'a', 2],
...]
```
I have about 100,000 edges, and the weights are between 1 and 700, most around 100.
I am thinking of using the Markov Cluster Algorithm, but wanted to reach out to see if this is the best one to use. What about Affinity Propagation? In either case, what is the workflow? Is there typically a way to measure how well clustered the results are - an equivalent of a silhouette score? Is there a way to visualize the clusters?
|
Partitioning Weighted Undirected Graph
|
CC BY-SA 3.0
| null |
2015-01-28T23:19:37.187
|
2018-01-15T12:17:37.263
| null | null |
8009
|
[
"clustering",
"graphs"
] |
4975
|
2
| null |
4974
|
3
| null |
Even a simple Internet search reveals numerous papers on graph clustering approaches and algorithms. [This paper](http://dollar.biz.uiowa.edu/~street/graphClustering.pdf) is most likely the best starting point, as it presents a rather comprehensive overview of the topic in terms of the problem as well as approaches, methods and algorithms for solutions. The rest you can find easily via online search. In regard to graph clustering visualization, I recommend you to check [my relevant answer](https://datascience.stackexchange.com/a/814/2452) - I'm pretty sure that the tools I reference there are able to visualize graph clusters as well.
| null |
CC BY-SA 3.0
| null |
2015-01-29T00:57:39.603
|
2015-01-29T00:57:39.603
|
2017-04-13T12:50:41.230
|
-1
|
2452
| null |
4976
|
2
| null |
4968
|
3
| null |
Flink is the Apache renaming of the [Stratosphere project from several universities in Berlin](http://stratosphere.eu/). It doesn't have the same industrial foothold and momentum that the Spark project has, but it seems nice, and more mature than, say, Dryad. I'd say it's worth investigating, at least for personal or academic use, but for industrial deployment I'd still prefer Spark, which at this point is battle tested. For a more technical discussion, see [this Quora post by committers on both projects](https://www.quora.com/Are-Spark-and-Stratosphere-competitors-Do-they-cover-the-same-set-of-use-cases).
| null |
CC BY-SA 3.0
| null |
2015-01-29T04:19:16.007
|
2015-01-29T04:19:16.007
| null | null |
381
| null |
4977
|
1
|
4982
| null |
3
|
3623
|
I found that Apache Storm, Apache Spark, Apache Flink and TIBCO StreamBase are some powerful frameworks for stream processing, but I don't know which of them has the best performance and capabilities.
I know Apache Spark and Apache Flink are two general-purpose frameworks that support stream processing, while Apache Storm and TIBCO StreamBase were built specifically for stream processing. Is there any considerable advantage among these frameworks?
Thanks
|
What is the best Big-Data framework for stream processing?
|
CC BY-SA 3.0
| null |
2015-01-29T07:32:19.207
|
2018-11-15T01:45:56.017
| null | null |
7977
|
[
"bigdata"
] |
4978
|
1
|
4981
| null |
0
|
1066
|
When considering a Support Vector Machine, it can take in multiple inputs. Can each of these inputs be a vector?
What I am trying to say is: can the input be a 2-dimensional vector?
|
Can 2 dimensional input be applied to SVM?
|
CC BY-SA 3.0
| null |
2015-01-29T09:04:56.190
|
2015-01-29T12:42:05.757
| null | null |
8013
|
[
"machine-learning",
"svm"
] |
4979
|
1
| null | null |
0
|
72
|
For an academic project I need to simulate how the traffic fluxes of a city area (input/output with respect to a monitored area, measured in number of cars) evolve in response to an event (i.e. the opening of a restricted traffic area to decongest the traffic).
I have some simulated sensors that provide the data. I was thinking of using a combination of a fuzzy system (to assign a membership function to each type of data, e.g. PM10 value and CO2 value) and a Markov process: I would need to modify the probability of decrementing the number of cars in the monitored area (simulating that a car is leaving the congested area towards the newly opened area) based on decisions made by means of the fuzzy system.
So my questions are:
- Is this a good way to interpret the problem, or are there better ideas that I have not taken into account yet?
- How can I implement such a combination of a Markov chain and a fuzzy system in MATLAB?
Thanks
|
Matlab simulation through FIS and Markov Process
|
CC BY-SA 3.0
| null |
2015-01-29T11:35:07.657
|
2015-05-05T17:43:39.597
|
2015-02-04T15:46:20.233
|
97
|
6559
|
[
"markov-process",
"matlab",
"simulation"
] |
4980
|
1
| null | null |
12
|
5850
|
I have been working with NLTK for a while using Python. The problem I am facing is that there is no help available on training the NER in NLTK with my custom data. They have used MaxEnt and trained it on the ACE corpus. I have searched the web a lot, but I could not find any way to train NLTK's NER.
Can anyone provide me with a link/article/blog post describing the training dataset format used for training NLTK's NER, so I can prepare my datasets in that format, and anything that can help me train NLTK's NER on my own data?
This is a question that is widely searched and rarely answered; it might be helpful for someone working with NER in the future.
|
Help regarding NER in NLTK
|
CC BY-SA 3.0
| null |
2015-01-29T12:13:01.677
|
2017-11-27T13:00:07.393
|
2015-01-30T04:30:58.490
|
8016
|
8016
|
[
"machine-learning",
"python",
"nlp"
] |
4981
|
2
| null |
4978
|
1
| null |
If I understand your question correctly: yes, an SVM can take multiple inputs. My suggestion for handling a vector as a feature would be to expand it out. For example,
```
x0 = (1,2)             x0 = 1
x1 = .4      ----->    x1 = 2
x2 = 0                 x2 = .4
                       x3 = 0
```
If this does not capture all of the characteristics of the vector that are important, then you may want to add other features (like magnitude of the vector) as well.
| null |
CC BY-SA 3.0
| null |
2015-01-29T12:42:05.757
|
2015-01-29T12:42:05.757
| null | null |
3430
| null |
4982
|
2
| null |
4977
|
2
| null |
It really depends on what you are looking to do. I love Apache Spark, but Storm has some history. I am sure as the streaming capability in Spark is built out that it will become a competitive solution. However, until Spark has some heavy hitting users (for streaming) there will remain unknown bugs.
You can also consider the community. Spark has a great community. I am not sure the level of the Storm community as I am usually the one receiving the data not handling the ingest. I can say we have used Storm on projects and I have been impressed with the real-time analysis and volumes of streaming data.
| null |
CC BY-SA 3.0
| null |
2015-01-29T12:48:41.620
|
2015-01-29T12:48:41.620
| null | null |
3430
| null |
4983
|
2
| null |
4951
|
0
| null |
I agree that there is nothing wrong with using these types of features. I have used them for inter-arrival times, for example, in modeling work. I have noticed, however, that many of these kinds of features have "interesting" covariance relationships with each other, so you have to be really careful about using multiple distribution features in a model.
| null |
CC BY-SA 3.0
| null |
2015-01-29T12:51:45.280
|
2015-01-29T12:51:45.280
| null | null |
8005
| null |
4984
|
2
| null |
41
|
7
| null |
R is great for a lot of analysis. As mentioned above, there are newer adaptations for big data like MapR, RHadoop, and scalable versions of RStudio.
However, if your concern is libraries, keep your eye on Spark. Spark was created for big data and is MUCH faster than Hadoop alone. It has vastly growing machine learning, SQL, streaming, and graph libraries. This allows much if not all of the analysis to be done within the framework (with multiple language APIs; I prefer Scala) without having to shuffle between languages/tools.
| null |
CC BY-SA 3.0
| null |
2015-01-29T12:58:28.257
|
2015-01-29T12:58:28.257
| null | null |
3430
| null |
4985
|
1
|
5011
| null |
3
|
1197
|
I am currently working on a multi-class classification problem with a large training set. However, it has some specific characteristics, which induced me to experiment with it, resulting in a few versions of the training set (as a result of re-sampling, removing observations, etc.).
I want to perform pre-processing of the data, that is, to scale, center and impute values (not much imputation, though). This is the point where I've started to get confused.
I've been taught that you should always pre-process the test set in the same way you've pre-processed the training set, that is (for scaling and centering) to measure the mean and standard deviation on the training set and apply those values to the test set. This seems reasonable to me.
But what should you do when you have a shrunk/resampled training set? Should one focus on the characteristics of the data that is actually feeding the model (which is what the 'train' function in R's caret package would suggest, as you can pass the pre-processing object to it directly) and apply these to the test set, or should one capture the real characteristics of the data (from the whole untouched training set) and apply those? If the second option is better, maybe it would be worth capturing the characteristics of the data by merging the training and test data together just for the pre-processing step, to get estimates that are as accurate as possible (I've actually never heard of anyone doing that, though)?
I know I can simply test some of the approaches specified here, and I surely will, but are there any suggestions based on theory or your intuition/experience on how to tackle this problem?
I also have one additional and optional question. Does it make sense to center but NOT scale the data (or the other way around) in any case? Can anyone present any example where that approach would be reasonable?
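For concreteness, this is the rule I was taught, sketched here in scikit-learn terms (placeholder data; the last comment relates to the optional centering question):
```
import numpy as np
from sklearn.preprocessing import StandardScaler

rng = np.random.RandomState(0)
X_train, X_test = rng.rand(100, 5), rng.rand(20, 5)   # placeholder data

scaler = StandardScaler()
# Mean and standard deviation are estimated from the training set only...
X_train_scaled = scaler.fit_transform(X_train)
# ...and then re-used, unchanged, on the test set.
X_test_scaled = scaler.transform(X_test)

# Centering without scaling would be StandardScaler(with_std=False).
```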
Thank you very much in advance.
|
Pre-processing (center, scale, impute) among training sets (different forms) and the test set - what is a good approach?
|
CC BY-SA 3.0
| null |
2015-01-29T13:54:24.940
|
2015-05-02T03:55:45.303
| null | null |
8017
|
[
"machine-learning",
"data-mining",
"dataset",
"processing",
"feature-scaling"
] |
4986
|
1
| null | null |
2
|
638
|
I have a weekly dataset and I have to normalize this data.
Data is something like this :
```
1. week 50
2. week 51
3. week 50
4. week 54
5. week 150
6. week 155
7. week ...
```
The important thing is that the difference between week 3 and week 4 (50 vs. 54) is not the same as between week 5 and week 6, and there is also a huge difference between week 4 and week 5.
My question is: how can I handle all of these things?
Can the standard normalization functions (for example scikit-learn's) do it for me, and should I normalize this data to 0-1 or -1 to 1?
[Sklearn normalization page](http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.normalize.html)
NOTE: I am working with Python and generally the scikit-learn library.
Any help is appreciated.
|
Normalize weekly data - Python
|
CC BY-SA 3.0
| null |
2015-01-29T17:21:07.070
|
2015-04-29T19:25:35.863
| null | null |
16218
|
[
"python",
"scikit-learn"
] |
4987
|
2
| null |
4986
|
1
| null |
I would standardize to unit variance across all the weeks, that is, divide by their standard deviation. Scikit-learn can do this for you using [scale](http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.scale.html).
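A hedged sketch of what that looks like (using the toy numbers from the question, reshaped into a single feature column):
```
import numpy as np
from sklearn.preprocessing import scale

weekly = np.array([50, 51, 50, 54, 150, 155], dtype=float).reshape(-1, 1)
# Subtract the mean and divide by the standard deviation of all weeks.
print(scale(weekly).ravel())
```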
| null |
CC BY-SA 3.0
| null |
2015-01-29T17:28:51.800
|
2015-01-29T17:28:51.800
| null | null |
3430
| null |
4988
|
2
| null |
4876
|
3
| null |
This looks like a well-structured dataset. You can read more about database design in [this section of wikipedia](http://en.wikipedia.org/wiki/Database_design#Normalization). Your data are well structured so querying is easy. As Jake C says, you'll want to transform it for specific tasks. Packages like dplyr and reshape2 are excellent for this. You could also consider writing your data to a specific database. This is particularly useful if your dataset is so large that R runs out of RAM. I've written an example with SQLite here: [https://scottishsnow.wordpress.com/2014/08/14/writing-to-a-database-r-and-sqlite/](https://scottishsnow.wordpress.com/2014/08/14/writing-to-a-database-r-and-sqlite/)
| null |
CC BY-SA 3.0
| null |
2015-01-29T20:36:25.567
|
2015-01-29T20:36:25.567
| null | null |
8021
| null |
4989
|
2
| null |
41
|
4
| null |
As other answers have noted, R can be used along with Hadoop and other distributed computing platforms to scale it up to the "Big Data" level. However, if you're not wedded to R specifically, but are willing to use an "R-like" environment, [Incanter](http://www.incanter.org) is a project that might work well for you, as it is native to the JVM (based on Clojure) and doesn't have the "impedance mismatch" between itself and Hadoop that R has. That is to say, from Incanter, you can invoke Java-native Hadoop / HDFS APIs without needing to go through a JNI bridge or anything.
| null |
CC BY-SA 3.0
| null |
2015-01-29T21:03:27.097
|
2015-01-29T21:03:27.097
| null | null |
6554
| null |
4990
|
2
| null |
4951
|
0
| null |
As bogatron and Paul already said, there is nothing wrong with using the prediction from one classifier as a feature in another classifier. Actually, so-called "Cascading classifiers" work that way. From [Wikipedia](http://en.wikipedia.org/wiki/Cascading_classifiers):
>
Cascading is a particular case of ensemble learning based on the concatenation of several classifiers, using all information collected from the output from a given classifier as additional information for the next classifier in the cascade.
This can be helpful not only to inform subsequent classifiers with new features but also as an optimization measure. In the Viola-Jones object detection framework, a set of weak classifiers is used sequentially in order to reduce the amount of computation in the object recognition task. If one of the weak classifiers fails to recognize an object of interest, the remaining classifiers don't need to be computed.
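A hedged sketch of the general pattern in scikit-learn (stacking-style code for illustration, not the Viola-Jones cascade itself; data and estimators are arbitrary): out-of-fold predictions from a first classifier are appended as a feature for a second one.
```
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_predict

X, y = make_classification(n_samples=300, n_features=10, random_state=0)

# Out-of-fold probabilities from the first classifier avoid leaking the labels.
first_stage = cross_val_predict(GaussianNB(), X, y, cv=5, method="predict_proba")[:, 1]

# The second classifier sees the original features plus the first stage's output.
X_augmented = np.column_stack([X, first_stage])
second = LogisticRegression(max_iter=1000).fit(X_augmented, y)
```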
| null |
CC BY-SA 3.0
| null |
2015-01-29T21:31:50.280
|
2015-01-29T21:31:50.280
| null | null |
4621
| null |
4991
|
2
| null |
4980
|
3
| null |
Is this article good enough?
[http://www.succeed-project.eu/wiki/index.php/NLTK#Input_format_for_training](http://www.succeed-project.eu/wiki/index.php/NLTK#Input_format_for_training)
There is an explanation of what the corpus should look like.
Your data needs to be in IOB format (word tag chunktag) to make it work.
Eric NNP B-PERSON
is VB O
the AT B-NP
CEO NN I-NP
of IN O
Google NNP B-ORGANIZATION
| null |
CC BY-SA 3.0
| null |
2015-01-30T10:44:58.467
|
2015-01-30T11:57:37.427
|
2015-01-30T11:57:37.427
|
2750
|
2750
| null |
4992
|
1
|
5060
| null |
11
|
402
|
I'm trying to build a cosine locality sensitive hash so I can find candidate similar pairs of items without having to compare every possible pair. I have it basically working, but most of the pairs in my data seem to have cosine similarity in the -0.2 to +0.2 range so I'm trying to dice it quite finely and pick things with cosine similarity 0.1 and above.
I've been reading Mining Massive Datasets chapter 3. This talks about increasing the accuracy of candidate pair selection by Amplifying a Locality-Sensitive Family. I think I just about understand the mathematical explanation, but I'm struggling to see how I implement this practically.
What I have so far is as follows
- I have say 1000 movies each with ratings from some selection of 1M users. Each movie is represented by a sparse vector of user scores (row number = user ID, value = user's score)
- I build N random vectors. The vector length matches the length of the movie vectors (i.e. the number of users). The vector values are +1 or -1. I actually encode these vectors as binary to save space, with +1 mapped to 1 and -1 mapped to 0
- I build sketch vectors for each movie by taking the dot product of the movie and each of the N random vectors (or rather, if I create a matrix R by laying the N random vectors horizontally and layering them on top of each other then the sketch for movie m is R*m), then taking the sign of each element in the resulting vector, so I end with a sketch vector for each movie of +1s and -1s, which again I encode as binary. Each vector is length N bits.
- Next I look for similar sketches by doing the following
  - I split the sketch vector into b bands of r bits.
  - Each band of r bits is a number. I combine that number with the band number and add the movie to a hash bucket under that number. Each movie can be added to more than one bucket.
  - I then look in each bucket. Any movies that are in the same bucket are candidate pairs.
Comparing this to 3.6.3 of mmds, my AND step is when I look at bands of r bits - a pair of movies pass the AND step if the r bits have the same value. My OR step happens in the buckets: movies are candidate pairs if they are both in any of the buckets.
The book suggests I can "amplify" my results by adding more AND and OR steps, but I'm at a loss for how to do this practically as the explanation of the construction process for further layers is in terms of checking pairwise equality rather than coming up with bucket numbers.
Can anyone help me understand how to do this?
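To make the setup concrete, here is a minimal sketch of the pipeline I described (all sizes are illustrative, dense data stands in for the sparse rating vectors, and the amplification step I'm asking about is not included):
```
import numpy as np
from collections import defaultdict

rng = np.random.RandomState(0)
n_users, n_movies, n_bits = 1000, 50, 64      # illustrative sizes
b, r = 16, 4                                  # bands x rows per band, b * r == n_bits

movies = rng.randn(n_movies, n_users)         # dense stand-in for the sparse ratings
planes = rng.choice([-1, 1], size=(n_bits, n_users))

# Sketch: sign of the dot product with each random +/-1 vector.
sketches = (movies @ planes.T) >= 0           # shape (n_movies, n_bits), booleans

buckets = defaultdict(set)
for m in range(n_movies):
    for band in range(b):
        key = (band, tuple(sketches[m, band * r:(band + 1) * r]))
        buckets[key].add(m)                   # AND within a band, OR across bands

candidates = {frozenset((i, j)) for bucket in buckets.values()
              for i in bucket for j in bucket if i < j}
print(len(candidates), "candidate pairs")
```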
|
Amplifying a Locality Sensitive Hash
|
CC BY-SA 3.0
| null |
2015-01-30T11:08:37.280
|
2016-09-12T03:51:16.957
| null | null |
8030
|
[
"machine-learning"
] |
4993
|
2
| null |
4951
|
0
| null |
After speaking with some experienced statisticians, this is what I got.
>
As for technical issues regarding the paper, I'd be worried about data leakage or using future information in the current model. This can also occur in cross validation. You should make sure each model trains only on past data, and predicts on future data. I wasn't sure exactly how they conducted CV, but it definitely matters. It's also non-trivial to prevent all sources of leakage. They do claim unseen examples but it's not explicit exactly what code they wrote here. I'm not saying they are leaking for sure, but I'm saying it could happen.
| null |
CC BY-SA 3.0
| null |
2015-01-30T12:11:41.293
|
2015-01-30T12:11:41.293
| null | null |
3430
| null |
4994
|
2
| null |
4879
|
0
| null |
I would call a mapping between N dimensional input and N dimensional output a regression problem.
If you add more constraints about the relation between the input and output it might be called different names: linear filtering, nonlinear filtering, etc...
Some examples of common techniques for that would be neural networks, regression trees, and regularised regressions.
| null |
CC BY-SA 3.0
| null |
2015-01-30T16:33:23.683
|
2015-01-30T16:33:23.683
| null | null |
7999
| null |
4995
|
1
|
5005
| null |
4
|
4084
|
We can access HDFS file system and YARN scheduler In the Apache-Hadoop. But Spark has a higher level of coding. Is it possible to access HDFS and YARN in Apache-Spark too?
Thanks
|
Can we access HDFS file system and YARN scheduler in Apache Spark?
|
CC BY-SA 3.0
| null |
2015-01-30T18:55:46.173
|
2015-01-31T12:29:35.733
| null | null |
7977
|
[
"bigdata",
"apache-hadoop"
] |
4996
|
2
| null |
4977
|
3
| null |
Apache Storm and Apache Spark are more popular than the other ones, there are already many discussions on Quora([Storm vs Spark](http://www.quora.com/What-is-the-difference-between-Apache-Storm-and-Apache-Spark), [Use cases for comparison](http://www.quora.com/Are-there-any-use-cases-for-a-comparison-between-Storm-and-Spark-Streaming)).
Personally, I think Spark is a better choice.
| null |
CC BY-SA 3.0
| null |
2015-01-30T20:15:09.003
|
2015-01-30T20:15:09.003
| null | null |
2522
| null |
4997
|
2
| null |
155
|
16
| null |
## Data Sets
- Academic Torrents
- Quora
- hadoopilluminated.com
- data.gov
- Quandl
- freebase.com
- usgovxml.com
- enigma.com
- datahub.io
- aws.amazon.com/datasets
- databib.org
- datacite.org
- quandl.com
- figshare.com
- GeoLite Legacy Downloadable Databases
- Quora's Big Datasets Answer
- Public Big Data Sets
- Houston Data Portal
- Kaggle Data Sources
- A Deep Catalog of Human Genetic Variation
- A community-curated database of well-known people, places, and things
- Google Public Data
- World Bank Data
- NYC Taxi data
- Open Data Philly Connecting people with data for Philadelphia
- Network Repository An interactive data repository with over 600+ networks in 20+ collections; from large-scale social networks, web graphs, biological networks, communication and technological networks, etc.
- A list of useful sources A blog post includes many data set databases
[Data Sets](https://github.com/okulbilisim/awesome-datascience#data-sets) From [awesome-datascience](https://github.com/okulbilisim/awesome-datascience)
| null |
CC BY-SA 3.0
| null |
2015-01-30T20:26:42.293
|
2017-09-14T19:16:18.913
|
2017-09-14T19:16:18.913
|
-1
|
2522
| null |
4998
|
1
| null | null |
2
|
186
|
I found that Apache Spark has a pretty simple interface and is easy to use, but I want to know about other interfaces.
Can anyone give me a ranking of Big Data frameworks based on the simplicity of their interfaces? It would also be useful to point out the simplest and most complex interfaces based on your experience.
This question is definitely about frameworks with the same tasks - for example, a choice between Flink and Spark, just in your opinion. A detailed comparison would be lengthy and is not my purpose; just a selection or ranking based on your opinions is sufficient.
Thanks
|
Which Big-Data Frameworks have most simple interfaces?
|
CC BY-SA 3.0
| null |
2015-01-30T21:04:30.173
|
2015-02-06T01:08:32.770
|
2015-02-01T07:13:30.427
|
7977
|
7977
|
[
"bigdata"
] |
4999
|
2
| null |
4879
|
0
| null |
N-dimensional input to n-dimensional output is too general a description. You could think of it as a regression problem where you predict a multi-dimensional output.
But it could also be the case that you are solving a multiclass classification problem:
input: n features
output: a vector which defines class membership - either 0s and 1s, or real values which define the degree of membership to each class
Or you could think of it as a multilabel classification problem:
input: n features
output: a vector of 0s and 1s which defines which labels are associated with the input.
So in general, a multi-dimensional output by itself does not tell you what kind of task you are dealing with.
You could try 2 approaches to solve a task which involves multi-dimensional output:
1) One-vs-rest or one-vs-one strategies (or their variations), where for each 'part' (dimension) of the output you train a separate classifier or regressor.
2) A neural network with multiple output neurons. I would suggest trying it after trying #1; neural networks are complicated, computationally expensive and maybe somewhat clumsy - so far, I haven't been able to construct a neural network that outperforms other models on the specific tasks I have tried to solve. But of course, this is my personal opinion about NNs; in your case they may really shine.
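As a hedged sketch of approach #1 (one estimator per output dimension), assuming scikit-learn and an arbitrary toy problem:
```
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.multioutput import MultiOutputRegressor

# Toy problem: 10 input features, 3 output dimensions (illustrative only).
X, y = make_regression(n_samples=200, n_features=10, n_targets=3, random_state=0)

# One independent Ridge regressor is fitted per output dimension.
model = MultiOutputRegressor(Ridge()).fit(X, y)
print(model.predict(X[:2]).shape)   # (2, 3)
```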
| null |
CC BY-SA 3.0
| null |
2015-01-30T21:14:25.563
|
2015-01-30T21:14:25.563
| null | null |
7969
| null |
5000
|
1
|
5019
| null |
9
|
8621
|
Maybe this is a bit of a general question. I am trying to solve various regression tasks, and I try various algorithms for them - for example, multivariate linear regression or an SVR. I know that the output can't be negative and I never have negative output values in my training set, though I could have 0s in it (for example, I predict the 'number of cars on the road' - it can't be negative but can be 0). Rather often I face the problem that I am able to train a relatively good model (maybe fit a good regression line to my data) and I have a relatively small average squared error on the training set, but when I run my regression algorithm against new data I sometimes get a negative output. Obviously, I can't accept a negative output since it is not a valid value. The question is: what is the proper way of dealing with such output? Should I treat a negative output as 0? Is there any general advice for such cases?
|
Proper way of fighting negative outputs of a regression algorithms where output must be positive all the way
|
CC BY-SA 3.0
| null |
2015-01-30T21:30:18.077
|
2015-02-02T20:20:08.503
| null | null |
7969
|
[
"machine-learning",
"regression"
] |
5001
|
1
|
6756
| null |
4
|
278
|
I'm going to start my degree thesis and I want to do a fault detector system using machine learning techniques. I need datasets for my thesis but I don't know where I can get that data. I'm looking for historical operation/maintenance/fault datasets of any kind of machine in the oil & gas industry (drills, steam injectors etc) or electrical companies (transformators, generators etc).
|
Data available from industry operations
|
CC BY-SA 3.0
| null |
2015-01-30T23:39:04.687
|
2017-06-30T14:45:48.930
|
2017-06-30T14:45:48.930
|
31513
|
8037
|
[
"dataset",
"open-source",
"freebase"
] |
5002
|
2
| null |
4998
|
5
| null |
I think that it is impossible to answer this question comprehensively, at least for the following reasons:
- big data frameworks have different goals and target different knowledge domains, so the comparison simply doesn't make much sense;
- most big data frameworks (and other programming frameworks, for that matter) have multiple interfaces, and frequently those sets of interfaces are significantly different (the intersection is small), so there is a risk of comparing apples and oranges;
- trying to compare anything (in this case, interfaces), using simplicity as a criterion, involves a significant amount of subjectivity - what one person perceives as very simple, another person might find quite complex;
- the variety and the number of big data frameworks is mind-boggling (for example, see https://github.com/onurakpolat/awesome-bigdata); the same applies to a related topic of machine learning frameworks (for example, see https://github.com/josephmisiti/awesome-machine-learning);
- a corollary of the points above: a comprehensive comparison (considering all the above-mentioned issues) would go far beyond the scope of a single answer on this site, in volume and effort - it would be more like a long research paper, a book chapter or even a book.
| null |
CC BY-SA 3.0
| null |
2015-01-30T23:44:42.127
|
2015-02-06T01:08:32.770
|
2015-02-06T01:08:32.770
|
2452
|
2452
| null |
5004
|
2
| null |
5000
|
7
| null |
A standard trick is to estimate the logarithm of the desired quantity, then take its exponential, which is always positive. The drawback is that the error is optimized for the log, which treats differences in order of magnitude as equal. Another option is to do your regression as usual, then project onto the feasible set (use the positive part of the output; $\max(0, \cdot)$).
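A minimal sketch of both tricks (scikit-learn and the placeholder names `X`/`y` are my own assumptions - the answer does not prescribe a library):
```python
import numpy as np
from sklearn.linear_model import LinearRegression

X = np.random.rand(200, 3)
y = np.random.rand(200) * 10 + 0.1    # strictly positive target for the log trick

# Trick 1: regress the log of the target, then exponentiate (always positive).
# If the target can be exactly 0, use log1p/expm1 or a small shift instead.
log_model = LinearRegression().fit(X, np.log(y))
pred_exp = np.exp(log_model.predict(X))

# Trick 2: regress as usual, then project onto the feasible set with max(0, .)
raw_model = LinearRegression().fit(X, y)
pred_clip = np.maximum(0.0, raw_model.predict(X))
```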
| null |
CC BY-SA 3.0
| null |
2015-01-31T01:18:10.307
|
2015-02-01T19:45:23.910
|
2015-02-01T19:45:23.910
|
381
|
381
| null |
5005
|
2
| null |
4995
|
4
| null |
Yes.
There are examples on spark official document: [https://spark.apache.org/examples.html](https://spark.apache.org/examples.html)
Just put your HDFS file URI in your input file path as below (Scala syntax).
```
val file = spark.textFile("hdfs://train_data")
```
| null |
CC BY-SA 3.0
| null |
2015-01-31T02:22:36.000
|
2015-01-31T10:49:29.950
|
2015-01-31T10:49:29.950
|
1279
|
1003
| null |
5006
|
2
| null |
4995
|
6
| null |
HDFS
Spark was built as an alternative to MapReduce and thus supports most of its functionality. In particular, it means that "Spark can create distributed datasets from any storage source supported by Hadoop, including your local file system, HDFS, Cassandra, HBase, Amazon S3, etc."[1](http://spark.apache.org/docs/1.2.0/programming-guide.html#external-datasets). For most common data sources (like HDFS or S3) Spark automatically recognizes schema, e.g.:
```
val sc = new SparkContext(...)
val localRDD = sc.textFile("file://...")
val hdfsRDD = sc.textFile("hdfs://...")
val s3RDD = sc.textFile("s3://...")
```
For more complicated cases you may need to work with lower-level functions like `newAPIHadoopFile`:
```
val hBaseRDD = sc.newAPIHadoopRDD(conf, classOf[TableInputFormat],
classOf[org.apache.hadoop.hbase.io.ImmutableBytesWritable],
classOf[org.apache.hadoop.hbase.client.Result])
val customRDD = sc.newAPIHadoopRDD(conf, classOf[MyCustomInputFormat],
classOf[MyCustomKeyClass],
classOf[MyCustomValueClass])
```
But the general rule is that if a data source is available for MapReduce, it can easily be reused in Spark.
YARN
Currently Spark supports 3 cluster managers / modes:
- Standalone
- Mesos
- YARN
Standalone mode uses Spark's own master server and works for Spark only, while YARN and Mesos modes aim to share same set of system resources between several frameworks (e.g. Spark, MapReduce, Impala, etc.). Comparison of YARN and Mesos may be found [here](http://www.quora.com/How-does-YARN-compare-to-Mesos), and detailed description of Spark on YARN [here](http://blog.cloudera.com/blog/2014/05/apache-spark-resource-management-and-yarn-app-models/).
And, in best traditions of Spark, you can switch between different modes simply by changing [master URL](https://spark.apache.org/docs/1.2.0/submitting-applications.html).
| null |
CC BY-SA 3.0
| null |
2015-01-31T12:29:35.733
|
2015-01-31T12:29:35.733
| null | null |
1279
| null |
5007
|
1
| null | null |
0
|
1344
|
I am wondering if there is a way to run 2 executions in 1 step in Hive.
For example:
```
SELECT * FROM TABLE1
SELECT * FROM TABLE2
;
```
Do this in one window, so that I do not have to open 2 Hive windows to execute each line separately.
Can it be done on HUE?
|
How to run 2 executions in 1 step in Hive?
|
CC BY-SA 3.0
| null |
2015-01-31T13:28:45.590
|
2015-02-02T17:00:51.307
|
2015-02-01T18:37:12.500
|
5224
|
5224
|
[
"hive"
] |
5008
|
2
| null |
5001
|
3
| null |
A huge list of open data sets is listed here:
- Publicly Available Datasets
Including Amazon, KDnuggets, Stanford, Twitter, Freebase, Google Public and more.
| null |
CC BY-SA 3.0
| null |
2015-01-31T14:27:15.657
|
2015-01-31T14:27:15.657
|
2017-04-13T12:50:41.230
|
-1
|
97
| null |
5009
|
2
| null |
5007
|
2
| null |
You can use the HiveCLI tool to run HiveQL with a given SQL file.
>
$HIVE_HOME/bin/hive -f /home/my/hive-script.sql
Please see the official documentation: [https://cwiki.apache.org/confluence/display/Hive/LanguageManual+Cli](https://cwiki.apache.org/confluence/display/Hive/LanguageManual+Cli)
What you need to do is:
- Put your HiveQL statements in a file, as below
SELECT * FROM TABLE1;
SELECT * FROM TABLE2;
- Run HiveCLI with the above file
| null |
CC BY-SA 3.0
| null |
2015-01-31T16:55:53.350
|
2015-01-31T16:55:53.350
| null | null |
1003
| null |
5010
|
2
| null |
5007
|
1
| null |
You can separate each query with a semi colon (;)
```
select column1 from table1;
select column2 from table2;
```
These queries can be executed from the command line, either inline or from a file. Use of the Hive CLI is not recommended; you should use Beeline to execute queries through HiveServer2 so that any underlying security controls are honored.
You may invoke Beeline with the command:
```
beeline
```
| null |
CC BY-SA 3.0
| null |
2015-01-31T17:31:09.967
|
2015-01-31T17:31:09.967
| null | null |
7809
| null |
5011
|
2
| null |
4985
|
2
| null |
I thought about it this way: the training and test sets are both a sample of the unknown population. We assume that the training set is representative of the population we're studying. That is, whatever transformations we make to the training set are what we would make to the overall population. In addition, whatever subset of the training data we use, we assume that this subset represents the training set, which represents the population.
So in response to your first question, it's fine to use that shrunken/resampled training set as long as you feel it's still representative of the population. That's assuming your untouched training set captures the "real characteristics" in the first place :)
As for your second question, don't merge the training and testing sets. The testing set is there to act as future unknown observations. If you build these into the model then you won't know whether the model is wrong or not, because you have used up the data you were going to test it with.
| null |
CC BY-SA 3.0
| null |
2015-02-01T01:34:13.597
|
2015-02-01T01:34:13.597
| null | null |
525
| null |
5013
|
1
|
5016
| null |
0
|
2152
|
Our main use case is object detection in 3D lidar point clouds, i.e. the data is not in RGB-D format. We are planning to use a CNN for this purpose using Theano. Hardware limitations are CPU: 32 GB RAM, Intel 4th Gen Core i7 (47XX), and GPU: Nvidia Quadro K1100M with 2 GB. Kindly help me with a recommendation for the architecture.
I am thinking along the lines of 27,000 input neurons on the basis of a 30x30x30 voxel grid, but I can't tell in advance if this is a good option.
Additional Note: Dataset has 4500 points on average per view per point cloud
|
Machine learning for Point Clouds Lidar data
|
CC BY-SA 3.0
| null |
2015-02-01T09:35:16.113
|
2017-05-19T16:12:46.690
|
2017-05-19T16:12:46.690
|
21
|
8051
|
[
"machine-learning",
"dataset"
] |
5014
|
1
|
5015
| null |
3
|
351
|
Recently I read about path ranking algorithm in a paper (source: [Knowledge Vault: A Web-Scale Approach to Probabilistic Knowledge Fusion](https://www.cs.cmu.edu/~nlao/publication/2014.kdd.pdf)).
In this paper was a table (Table 3) with facts and I tried to understand how they were calculated.
F1 (harmonic mean of precision and recall) = 0.04
P (precision) = 0.03
R (recall) = 0.33
W (weight given to this feature by logistic regression)
I found a formula for F1 via Google which is
$F1 = 2 * \frac{precision * recall}{precision + recall}$
The problem is that I get the result of 0.055 with this formula, but not the expected result of 0.04.
Can someone help me to get this part?
Also, does someone know how 'W' can be calculated?
Thanks.
|
How to compute F1 score?
|
CC BY-SA 3.0
| null |
2015-02-02T14:53:51.810
|
2018-06-21T18:24:00.583
|
2015-02-02T14:59:43.687
|
8063
|
8063
|
[
"machine-learning"
] |
5015
|
2
| null |
5014
|
2
| null |
First you need to learn about logistic regression; it is an algorithm that will assign weights to different features given some training data. Read the wiki intro - it is quite helpful; basically the betas there are the same as the Ws in the paper.
The formula you have is correct, and those values do seem off. It also depends on the number of significant figures: perhaps they are making their calculations with more precision than the values they are reporting.
But honestly, you can't understand much of the paper unless you understand LR.
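For example, a quick check of the formula in Python (the 0.021 below is a hypothetical unrounded precision, used only to illustrate how rounding could explain the gap):
```python
def f1(precision, recall):
    return 2 * precision * recall / (precision + recall)

print(f1(0.03, 0.33))    # ~0.055, computed from the rounded table values
print(f1(0.021, 0.33))   # ~0.04, with a hypothetical unrounded precision
```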
| null |
CC BY-SA 3.0
| null |
2015-02-02T15:55:49.363
|
2015-02-02T15:55:49.363
| null | null |
8065
| null |
5016
|
2
| null |
5013
|
1
| null |
First, CNNs are great for image recognition, where you usually take subsampled windows of about 80 by 80 pixels; 27,000 input neurons is too large and it will take you forever to train a CNN on that.
Furthermore, why did you choose a CNN? Why don't you try some more down-to-earth algorithms first, like SVMs or logistic regression?
4,500 data points and 27,000 features seems unrealistic to me, and very prone to overfitting.
Check this first.
[http://scikit-learn.org/stable/tutorial/machine_learning_map/](http://scikit-learn.org/stable/tutorial/machine_learning_map/)
| null |
CC BY-SA 3.0
| null |
2015-02-02T15:59:40.643
|
2015-02-02T15:59:40.643
| null | null |
8065
| null |
5017
|
2
| null |
4979
|
1
| null |
I don't really get why you would mix fuzziness and probabilities. HMMs can already give you probabilities without the need to add fuzzy systems into the mix.
I would just do a random walk with probabilities of transitions defined by the state of the lights.
| null |
CC BY-SA 3.0
| null |
2015-02-02T16:08:50.580
|
2015-02-02T16:08:50.580
| null | null |
8065
| null |
5018
|
2
| null |
5007
|
2
| null |
Yes, you can execute multiple HQL's using Hue as long as each individual HQL is separated by a semi colon (;)
| null |
CC BY-SA 3.0
| null |
2015-02-02T17:00:51.307
|
2015-02-02T17:00:51.307
| null | null |
7809
| null |
5019
|
2
| null |
5000
|
7
| null |
The problem is your model choice, as you seem to recognize. In the case of linear regression, there is no restriction on your outputs. Often this is fine when predictions need to be non-negative so long as they are far enough away from zero. However, since many of your training examples are zero-valued, this isn't the case.
If your data is non-negative and discrete (as is the case with the number of cars on the road), you could model it using a generalized linear model (GLM) with a log link function. This is known as Poisson regression and is helpful for modeling discrete non-negative counts such as the problem you described. The Poisson distribution is parameterized by a single value $\lambda$, which describes both the expected value and the variance of the distribution.
This results in an approach similar to the one described by Emre in that you are attempting to fit a linear model to the log of your observations.
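A minimal sketch of Poisson regression with statsmodels (my choice of library; the random `X` and `y` below only stand in for your features and non-negative counts):
```python
import numpy as np
import statsmodels.api as sm

X = np.random.rand(200, 3)
y = np.random.poisson(lam=3.0, size=200)       # non-negative integer counts

# A GLM with a Poisson family uses the log link by default, so predictions are >= 0.
model = sm.GLM(y, sm.add_constant(X), family=sm.families.Poisson())
results = model.fit()
preds = results.predict(sm.add_constant(X))
```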
| null |
CC BY-SA 3.0
| null |
2015-02-02T20:20:08.503
|
2015-02-02T20:20:08.503
| null | null |
182
| null |
5020
|
2
| null |
4955
|
3
| null |
>
I want to try out unsupervised learning for the same. I am looking into KMeans clustering. But again the problem is, we won't know the number of clusters beforehand. So, which algorithm will be best for this use case?
When you don't know the number of clusters beforehand, it is still possible to do unsupervised learning using a [Dirichlet process](http://en.wikipedia.org/wiki/Dirichlet_process) to sample parameters associated to clusters/groups and then cluster your tokens according to those parameters. The general idea is to use a Dirichlet distribution to generate probabilities over words for each cluster and a Dirichlet process uses these probabilities to assign a cluster to each word in your vocabulary. If you want to share clusters between emails, then you use Hierarchical Dirichlet Processes. [Here](http://blog.echen.me/2012/03/20/infinite-mixture-models-with-nonparametric-bayes-and-the-dirichlet-process/) you can find a nice blog post of how this works.
The most popular library for clustering is [gensim](http://radimrehurek.com/gensim/tut2.html), but notice their warning regarding the Hierarchical Dirichlet Process implementation:
>
gensim uses a fast, online implementation based on [3]. The HDP model is a new addition to gensim, and still rough around its academic edges – use with care.
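A minimal sketch of gensim's HDP model (the toy documents below are placeholders; in practice you would feed in your tokenized emails):
```python
from gensim.corpora import Dictionary
from gensim.models import HdpModel

docs = [["free", "offer", "money"],
        ["meeting", "schedule", "project"],
        ["offer", "project", "deadline"]]

dictionary = Dictionary(docs)
corpus = [dictionary.doc2bow(doc) for doc in docs]

# The number of topics/clusters is inferred from the data rather than fixed in advance.
hdp = HdpModel(corpus, id2word=dictionary)
print(hdp.print_topics(num_topics=5, num_words=3))
```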
As for feature extraction, your question doesn't say exactly what kind of unnecessary features you are getting, but if that's the case, you need to filter your tokens before or after processing them with NLTK. In general, you can't expect excellent results for very specific applications.
| null |
CC BY-SA 3.0
| null |
2015-02-02T20:58:47.970
|
2015-02-02T20:58:47.970
| null | null |
4621
| null |
5021
|
1
| null | null |
4
|
1122
|
I've recently become interested in possibly of developing some sort of method for ranking athletes of sports such as American football and determining which players are better than others in terms of specific statistics.
My thoughts are that there are two ways to go about doing this. The first would be some sort of mathematical formula which would take in the statistics of a given player and provide some sort of standardized score which could be compared with other players to determine which is better.
My other idea would be to have some machine learning algorithm go through historical data and determine the patterns which indicate how well a certain combination of statistics would perform in the following week of play by using the patterns it recognizes over time.
I'm not sure which approach would be more effective and so I'm hoping that someone has an idea or any advice as to which would be best to look into. Thanks!
|
Rank players of any given sport
|
CC BY-SA 3.0
| null |
2015-02-03T04:23:37.463
|
2015-02-03T23:58:36.380
|
2015-02-03T23:58:36.380
|
97
|
8074
|
[
"predictive-modeling",
"scoring",
"ranking",
"sports"
] |
5022
|
2
| null |
5021
|
2
| null |
Prediction
If the main goal is predicting anything - say, the player's statistics in the next game, or the game result - then I would not recommend doing any scoring. A better way to go is using the pure statistics data as input to the model. Any scoring/ranking is information loss.
Ranking
If the goal is ranking itself, then you still need some target variable to predict, as you may want to check the real predictive value of those ranks. That could be, again, player stats in the next game or the game result itself.
---
References
[Sport scores prediction](https://datascience.stackexchange.com/questions/265/can-machine-learning-algorithms-predict-sports-scores-or-plays/269#269) and [RFM scoring](https://datascience.stackexchange.com/questions/1119/predictive-modeling-based-on-rfm-scoring-indicators) are probably the next directions for you to look at.
| null |
CC BY-SA 3.0
| null |
2015-02-03T08:09:44.953
|
2015-02-03T08:09:44.953
|
2017-04-13T12:50:41.230
|
-1
|
97
| null |
5023
|
1
|
5032
| null |
4
|
8697
|
With respect to [ROC](http://en.wikipedia.org/wiki/Receiver_operating_characteristic) can anyone please tell me what the phrase "discrimination threshold of binary classifier system" means? I know what a binary classifier is.
|
What is a discrimination threshold of binary classifier?
|
CC BY-SA 3.0
| null |
2015-02-03T08:52:42.133
|
2021-02-10T17:49:46.840
|
2021-02-10T17:49:46.840
|
85045
|
8013
|
[
"classification",
"graphs",
"classifier",
"roc"
] |
5024
|
1
| null | null |
6
|
9747
|
I am learning about Data Science and I love the healthcare part. That's why I have started a blog, and my third entry is about using Genetic Algorithms for solving NP problems.
This post is [https://datasciencecgp.wordpress.com/2015/01/31/the-amazing-genetic-algorithms/](https://datasciencecgp.wordpress.com/2015/01/31/the-amazing-genetic-algorithms/)
I have some experience with the GA package, solving problems like the TSP, but do you know of a more powerful R package?
Thanks so much!
|
Is the GA R package the best Genetic Algorithm package?
|
CC BY-SA 3.0
|
0
|
2015-02-03T10:14:53.543
|
2015-09-15T15:34:15.223
|
2015-02-04T16:48:43.630
|
8076
|
8076
|
[
"r",
"algorithms",
"genetic"
] |
5025
|
2
| null |
5024
|
5
| null |
For such questions, I like to go to the [Task Views](http://cran.r-project.org/web/views/) on CRAN, since the packages noted there are, to a degree, pre-vetted by the R community. I'd trust those a tiny bit more than just googling myself.
The [Machine Learning Task View at CRAN](http://cran.r-project.org/web/views/MachineLearning.html) says:
>
Packages rgp and rgenoud offer optimization routines based on genetic algorithms. The package Rmalschains implements memetic algorithms with local search chains, which are a special type of evolutionary algorithms, combining a steady state genetic algorithm with local search for real-valued parameter optimization.
| null |
CC BY-SA 3.0
| null |
2015-02-03T11:37:06.760
|
2015-02-03T11:37:06.760
| null | null |
2853
| null |
5026
|
2
| null |
5023
|
4
| null |
Classifiers often return probabilities of belonging to a class. For example in logistic regression the predicted values are the predicted probability of belonging to the non-reference class or $\text{Pr}(Y = 1)$. The discrimination threshold is just the cutoff imposed on the predicted probabilities for assigning observations to each class.
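For example, a minimal sketch of imposing such a cutoff on predicted probabilities (scikit-learn and the 0.5 value are my own example choices):
```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=200, random_state=0)
clf = LogisticRegression().fit(X, y)

probs = clf.predict_proba(X)[:, 1]        # Pr(Y = 1) for each observation
threshold = 0.5                           # the discrimination threshold
preds = (probs >= threshold).astype(int)  # class assignment depends on the cutoff
```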
| null |
CC BY-SA 3.0
| null |
2015-02-03T16:00:05.673
|
2015-02-03T16:00:05.673
| null | null |
5103
| null |
5027
|
2
| null |
4977
|
3
| null |
Amazon Kinesis might be another choice for stream processing, if you don't want to set up the clusters by yourself.
| null |
CC BY-SA 3.0
| null |
2015-02-03T19:01:59.827
|
2015-02-03T19:01:59.827
| null | null |
8084
| null |
5028
|
2
| null |
3797
|
3
| null |
From what I understand of the objectives of the lambda architecture your point:
>
Your batch layer is probably a map reduce job or a HIVE query.
Is not what was intended. The batch layer is not meant to be directly queried against, but rather feeds a serving layer, possibly a simple key-value store, for low latency queries.

Check out [http://lambda-architecture.net/](http://lambda-architecture.net/) for a more full explanation.
| null |
CC BY-SA 3.0
| null |
2015-02-03T19:53:47.233
|
2015-02-03T19:53:47.233
| null | null |
8085
| null |
5029
|
2
| null |
155
|
9
| null |
One other data source I didn't see listed is [The GDELT Project](http://gdeltproject.org/). From the site:
>
GDELT Project monitors the world's broadcast, print, and web news from nearly every corner of every country in over 100 languages and identifies the people, locations, organizations, counts, themes, sources, and events driving our global society every second of every day, creating a free open platform for computing on the entire world.
| null |
CC BY-SA 3.0
| null |
2015-02-03T20:05:59.917
|
2015-02-03T20:05:59.917
| null | null |
8085
| null |
5031
|
2
| null |
3699
|
3
| null |
If you have an imbalanced dataset you usually want to make it balanced to begin with, since the imbalance will artificially affect your scores.
Now, you want to be measuring precision and recall, since those capture the biases of an imbalanced dataset a bit better.
L1 or L2 won't perform particularly better on a balanced or an unbalanced dataset; what you want to do is use elastic nets (which are a combination of the two) and cross-validate over the coefficients of each regularizer.
Also, doing a plain grid search is very odd; you are better off using cross-validation to see which parameters work better.
They even have [ElasticNetCV](http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.ElasticNetCV.html#sklearn.linear_model.ElasticNetCV), which does that part for you.
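A minimal sketch of ElasticNetCV (the random `X`/`y` below only stand in for your data):
```python
import numpy as np
from sklearn.linear_model import ElasticNetCV

X = np.random.rand(200, 10)
y = np.random.rand(200)

# Cross-validates over both the l1/l2 mixing ratio and the regularization strength.
model = ElasticNetCV(l1_ratio=[0.1, 0.5, 0.9, 1.0], cv=5).fit(X, y)
print(model.l1_ratio_, model.alpha_)
```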
| null |
CC BY-SA 4.0
| null |
2015-02-03T22:51:57.083
|
2020-08-13T11:46:33.743
|
2020-08-13T11:46:33.743
|
98307
|
8065
| null |
5032
|
2
| null |
5023
|
4
| null |
Just to add a bit.
As mentioned before, if you have a (probabilistic) classifier your output is a probability (a number between 0 and 1); ideally you would say that everything larger than 0.5 is part of one class and anything less than 0.5 is the other class.
But if you are classifying cancer cases, you are deeply concerned with false negatives (telling someone he does not have cancer when he does), while a false positive (telling someone he does have cancer when he doesn't) is not as critical (although being told you have cancer could be psychologically very costly). So you might artificially move that threshold from 0.5 to higher or lower values, to change the sensitivity of the model in general.
By doing this, you can generate the ROC plot for different thresholds.
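A minimal sketch of sweeping the threshold to build the ROC curve (scikit-learn; the labels and scores below are toy placeholders):
```python
from sklearn.metrics import roc_curve

y_true = [0, 0, 1, 1]
y_scores = [0.1, 0.4, 0.35, 0.8]

# Each threshold yields one (false positive rate, true positive rate) point on the ROC plot.
fpr, tpr, thresholds = roc_curve(y_true, y_scores)
print(fpr, tpr, thresholds)
```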
| null |
CC BY-SA 3.0
| null |
2015-02-03T22:56:04.747
|
2016-08-22T17:48:36.400
|
2016-08-22T17:48:36.400
|
-1
|
8065
| null |
5033
|
1
| null | null |
-1
|
214
|
I'm trying to create a regression-based prediction (like a booking website): predict the number of clicks for each hotel.
I have to generate a .csv file containing two columns:
- hotel_id
- predicted_number_of_clicks for all hotel_ids.
My first question is: should I use the `hotel_id` as a feature in the predictive model? I think that I must drop it... right?
My second question is: how can I write only these 2 columns to the CSV file if I drop `hotel_id` from the model features?
|
Format of CSV file
|
CC BY-SA 4.0
| null |
2015-02-03T23:49:30.013
|
2018-12-16T16:42:17.090
|
2018-12-16T16:42:17.090
|
29575
|
8088
|
[
"regression",
"predictive-modeling"
] |
5034
|
2
| null |
5033
|
0
| null |
The hotel_id should not be a feature.
Let's see if I understand you correctly.
At testing time you give your model a whole set of feature values for a particular hotel you are interested in. This hotel has an id, which is known to you.
Your model should be able to take both an id and a set of feature values as input, so it can print both to an output file.
Does that answer your question?
| null |
CC BY-SA 3.0
| null |
2015-02-04T00:16:15.323
|
2015-02-04T00:16:15.323
| null | null |
8075
| null |
5037
|
2
| null |
5001
|
0
| null |
I've found an interesting project with tons of data available. It's a real-data benchmark executed on an industrial valve. This is the website:
[Industrial Actuator Real Data Benchmark Study.](http://diag.mchtr.pw.edu.pl/damadics/)
| null |
CC BY-SA 3.0
| null |
2015-02-04T02:54:52.773
|
2015-02-04T02:54:52.773
| null | null |
8037
| null |
5038
|
1
|
5071
| null |
0
|
579
|
I'm trying to use a particular cost function (based on doubling rate of wealth) for a classification problem, and the solution works well in MATLAB. See [https://github.com/acmyers/compareCostFXs](https://github.com/acmyers/compareCostFXs)
When I try to do this in Python 2.7.6 I don't get any errors, but it only returns zeros for the theta values.
Here is the cost function and optimization method I've used in Python:
```
import numpy as np
import scipy.optimize
# Note: sigmoid(...), initial_theta, X_array and y_array are defined elsewhere in the original script.

def costFunctionDRW(theta, X, y):
# Initialize useful values
m = len(y)
# Marginal probability of acceptance
marg_pA = sum(y)/m
# Marginal probability of rejection
marg_pR = 1 - marg_pA
# =============================================================
pred = sigmoid(np.dot(X,theta))
final_wealth_individual = (pred/marg_pA)*y + ((1-pred)/marg_pR)*(1-y)
final_wealth = np.prod(final_wealth_individual)
final_wealth = -final_wealth
return final_wealth
result = scipy.optimize.fmin(costFunctionDRW, x0=initial_theta, \
args=(X_array, y_array), maxiter=1000, disp=False, full_output=True )
```
Any advice would be much appreciated!
|
minimization with a negative cost function: works in MATLAB, not in Python
|
CC BY-SA 3.0
| null |
2015-02-04T04:51:28.063
|
2015-02-06T09:41:54.033
| null | null |
985
|
[
"classification",
"python"
] |
5039
|
2
| null |
5038
|
0
| null |
If your number of cases is large, you may be running into a problem of numerical underflow in the line
>
final_wealth = np.prod(final_wealth_individual)
If each value of `final_wealth_individual` is between 0 and 1, multiplying them all together can lead to a result that is too small to represent as a floating point number, resulting in a value of 0.
To address this issue, take the log of `final_wealth_individual` and add the values together instead of multiplying. Note that you should still negate this sum before returning it, so that `fmin` keeps maximizing the wealth.
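A minimal sketch of that fix, reusing the variable names from the question (the `sigmoid` below is the standard logistic function, which I assume matches the one in the original script):
```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def costFunctionDRW(theta, X, y):
    m = len(y)
    marg_pA = np.sum(y) / float(m)     # marginal probability of acceptance
    marg_pR = 1 - marg_pA              # marginal probability of rejection
    pred = sigmoid(np.dot(X, theta))
    final_wealth_individual = (pred / marg_pA) * y + ((1 - pred) / marg_pR) * (1 - y)
    # Sum of logs instead of a product of raw values: avoids numerical underflow.
    log_wealth = np.sum(np.log(final_wealth_individual))
    return -log_wealth                 # negate so fmin maximizes the wealth
```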
| null |
CC BY-SA 3.0
| null |
2015-02-04T05:33:59.907
|
2015-02-04T05:33:59.907
| null | null |
182
| null |
5040
|
1
|
5043
| null |
1
|
1340
|
I'm using a set of features, say $X_1, X_2, ..., X_m$, to predict a target value $Y$, which is a continuous value from zero to one.
At first, I tried to use a linear regression model to do the prediction, but it did not perform well. The root-mean-squared error is about 0.35, which is quite high for predicting a value from 0 to 1.
Then I tried different models, e.g., decision-tree-based regression, random-forest-based regression, gradient-boosted tree regression, etc. However, none of these models performs well either (RMSE $\approx$ 0.35; there is no significant difference from linear regression).
I understand there are many possible reasons for this problem, such as feature selection or choice of model, but maybe, more fundamentally, the quality of the data set is not good.
My question is: how can I examine whether it is caused by bad data quality?
BTW, regarding the size of the data set: there are more than 10K data points, each of which is associated with 105 features.
I have also tried to investigate the importance of each feature using decision-tree-based regression; it turns out that only one feature (which, to my knowledge of this problem, should not be the most outstanding one) has an importance of 0.2, while the rest have an importance of less than 0.1.
|
How to determine whether a bad performance is caused by data quality?
|
CC BY-SA 3.0
| null |
2015-02-04T06:10:34.893
|
2015-02-04T07:37:14.677
|
2015-02-04T07:37:14.677
|
7867
|
7867
|
[
"machine-learning"
] |
5041
|
2
| null |
5040
|
1
| null |
How many features do you have?
It is quite unlikely that ALL the features are bad, so you could regress with a different number of features.
For example, do one pass with all the features, then take one out (say $X_m$) so you have $m-1$ features. Keep doing this so you can take out uninformative features.
Also, I would recommend calculating p-values to see whether your regressors are significant/informative.
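A minimal sketch of checking p-values with statsmodels (my choice of library; the random data below only stands in for your features and target):
```python
import numpy as np
import statsmodels.api as sm

X = np.random.rand(200, 5)
y = 2.0 * X[:, 0] + 0.1 * np.random.randn(200)   # only the first feature is informative here

results = sm.OLS(y, sm.add_constant(X)).fit()
print(results.pvalues)   # large p-values flag candidate features to drop
```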
| null |
CC BY-SA 3.0
| null |
2015-02-04T06:15:52.873
|
2015-02-04T06:15:52.873
| null | null |
8065
| null |
5042
|
2
| null |
5033
|
1
| null |
Trying to classify using hotel ID is the same as trying to determine if a student is going to perform well on a test based on their last name. You should get additional things, like number of rooms, amenities, location, staff, etc. Those are informative features that you can use in a classifier
| null |
CC BY-SA 3.0
| null |
2015-02-04T06:18:04.207
|
2015-02-04T06:18:04.207
| null | null |
8065
| null |
5043
|
2
| null |
5040
|
3
| null |
First, it sounds like your model selection is a problem here. Your outputs are binary-valued, not continuous. Specifically, you may have a classification problem on your hands rather than a traditional regression problem. My first recommendation would be to try a simple classification approach such as logistic regression or linear discriminant analysis.
Regarding your suspicions of bad data, what would bad data look like in this situation? Do you have reason to suspect that your $X$ values are noisy or that your $y$ values are mislabeled? It is also possible that there is not a strong relationship between any of your features and your targets. Since your targets are binary, you should look at histograms of each of your features to get a rough sense of the class conditional distributions, i.e. $p(X_1|y=1)$ vs $p(X_1|y=0)$. In general though, you will need to be more specific about what "bad data" means to you.
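For example, a minimal sketch of eyeballing those class-conditional distributions with matplotlib (the random data only stands in for your features and 0/1 targets):
```python
import numpy as np
import matplotlib.pyplot as plt

X = np.random.rand(500, 3)
y = (np.random.rand(500) > 0.5).astype(int)

# Overlay the histogram of one feature for each class to see whether it separates the targets.
plt.hist(X[y == 0, 0], bins=30, alpha=0.5, label="p(X1 | y=0)")
plt.hist(X[y == 1, 0], bins=30, alpha=0.5, label="p(X1 | y=1)")
plt.legend()
plt.show()
```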
| null |
CC BY-SA 3.0
| null |
2015-02-04T06:22:23.377
|
2015-02-04T06:22:23.377
| null | null |
182
| null |
5044
|
2
| null |
5033
|
1
| null |
When training the model you don't need to use the hotel id. The model needs to learn from examples. It only needs feature values and number of clicks, so it can learn the relationship between these.
Once you've trained your model, you can use it for unseen examples.
These would be hotels for which you have both an id and a set of feature values.
Your model should take the id and the feature values as input, but it should only use the feature values for the prediction. The id should just be kept on the side, so you can print it together with the prediction to the output CSV file.
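A minimal sketch of this workflow with pandas and scikit-learn (the file and column names below are hypothetical - adapt them to your data):
```python
import pandas as pd
from sklearn.linear_model import LinearRegression

train = pd.read_csv("train.csv")   # hypothetical: hotel_id, feature columns, clicks
test = pd.read_csv("test.csv")     # hypothetical: hotel_id, feature columns

feature_cols = [c for c in train.columns if c not in ("hotel_id", "clicks")]

# hotel_id is kept aside and never used as a feature.
model = LinearRegression().fit(train[feature_cols], train["clicks"])

out = pd.DataFrame({
    "hotel_id": test["hotel_id"],
    "predicted_number_of_clicks": model.predict(test[feature_cols]),
})
out.to_csv("predictions.csv", index=False)
```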
I hope this helps!
| null |
CC BY-SA 3.0
| null |
2015-02-04T09:28:16.910
|
2015-02-04T09:28:16.910
| null | null |
8075
| null |
5045
|
2
| null |
5033
|
1
| null |
Supervised learning should try to 'understand' what makes a hotel have more clicks than another. As a consequence, learning tries to determine which characteristics of the given hotels make them attractive or not. So it relies on some kind of similarity, because it is assumed that similar hotels behave in a similar way.
Now if you restrict the similarity to identity, then you learn nothing new because hotels are unique. In fact such a learner exists and is called a rote learner; it consists of a one-to-one mapping from inputs to outputs (also called memoisation). This is what happens if you add hotel_id to the features. However, I think you hope to use the model to predict the number of clicks for new hotels (which have a different hotel_id than any in the training set).
On the other hand, in order to use hotel_id when storing predictions you only have to save a copy of the original data set. At learning time you have a training data set from which you remove hotel_id, and use that for learning.
At prediction time you make a copy of the data set for later use. From the original data set remove hotel_id, use that for prediction and get the results. Now the predicted results have the same order of instances as the copied data set. This is certainly the case in Python (scikit-learn), Java (Weka) and R; in fact I am not aware of a system which does not preserve positions.
Now, using positions from the copy of the original and the predictions, you can associate each hotel_id with each prediction with no problem.
| null |
CC BY-SA 3.0
| null |
2015-02-04T11:22:53.063
|
2015-02-04T11:22:53.063
| null | null |
108
| null |
5046
|
2
| null |
2513
|
3
| null |
Your understanding is correct. The point is that equation (8)
$$y_i(<\textbf{w}, \phi_i> + b) - 1 = 0$$
is not exactly a single equation, but a system of equations, one for each index $i$ of the support vectors (those for which $0<\alpha_i<C$).
The point is that you cannot compute $b$ during the optimization of the dual problem since it does not matter for the optimization; you have to go back and compute $b$ from the other equations you have (one possible way is (8)).
Vapnik's suggestion is to use not just one of those equations, but two of them: specifically, one support vector for a negative observation and one for a positive observation - in other words, two support vectors which have opposite signs of $y_i$.
Let's name $A$ the index of one support vector and $B$ the index of a support vector on the opposite side; basically you select only two of the equations from the system (8), evaluate both of them and take the mean.
From:
$$y_A(<\textbf{w},\phi_A>+b)=1$$
$$y_B(<\textbf{w}, \phi_B>+b)=1$$
We get:
$$b_A=\frac{1}{y_A}-<\textbf{w},\phi_A>$$
$$b_B=\frac{1}{y_B}-<\textbf{w},\phi_B>$$
where $b_A$ and $b_B$ are two estimates; the mean is
$$b = (b_A+b_B)/2 = -\frac{1}{2}(<\textbf{w},\phi_A>+<\textbf{w},\phi_B>)=-\frac{1}{2}\sum_{i=1}^{n}y_i\alpha_i(<\phi(x_i),\phi(x_A)>+<\phi(x_i),\phi(x_B)>)$$
| null |
CC BY-SA 3.0
| null |
2015-02-04T12:51:31.943
|
2015-02-04T12:51:31.943
| null | null |
108
| null |
5047
|
1
| null | null |
2
|
1048
|
I am working on a research project that deals with American military casualties during WWII. Specifically, I am attempting to construct a count of casualties for each service at the county level. There are two sources of data here, each presenting their own challenges.
1. Army and Air Force data. The National Archives hosts lists of Army and Air Force servicemen killed in action by state and county. There are .gif images of the report available online. [Here](http://media.nara.gov/media/images/29/19/29-1891a.gif) is a sample for several counties in Texas.
I DO NOT need to recover the names or any other information. I simply need to count the number of names (each on its own line, and listed in groups of five) under each County. There are hundreds of these images (50 states - 30-100 for each state).
I have been unable to find an OCR program that can tackle this problem adequately. How would you suggest I approach this challenge? (I have some programming expertise in Python and Java, but would prefer to use any off-the-shelf solutions that may exist).
2. Navy and Marine Corps data. This data is organized differently. Each state has long lists of casualties with the address of their next of kin. [Here](http://media.nara.gov/media/images/27/31/27-3023a.gif) is a sample for Texas again. For these images, I need to BOTH count the number of dead and recover their hometown, which is typically the last word in each entry. I can then match these hometowns to counties and merge with database 1.
Again, the usual OCR programs have proved inadequate. Any help on this (admittedly more difficult) problem would be very much appreciated.
Thank you in advance experts!
|
OCR / Text Recognition and Recovery Problem
|
CC BY-SA 3.0
| null |
2015-02-04T22:03:48.507
|
2015-02-13T05:57:36.840
| null | null |
8102
|
[
"dataset",
"text-mining",
"data-cleaning",
"processing"
] |
5048
|
2
| null |
5047
|
0
| null |
In what way have the usual OCR programs proved inadequate?
Do you have some example output that you find you can't work with?
I can see how the columns complicate things.
I'd say for data set 1:
OCR the images, then read the files line by line and match on, for instance, a sequence of at least five digits. So you get a count of 0, 1, 2 or 3 per row (see the sketch below).
You may miss a couple due to the OCR accidentally recognizing a number as a letter for instance, but I expect this to work reasonably well.
How precise do you have to be?
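A rough sketch of that counting step (hypothetical: it assumes the OCR output is saved as plain-text files and that each name line contains a number of at least five digits, as suggested above):
```python
import glob
import re

serial = re.compile(r"\d{5,}")   # a run of five or more digits per name

total = 0
for path in glob.glob("ocr_output/*.txt"):   # hypothetical directory of OCR'd pages
    with open(path) as f:
        for line in f:
            total += len(serial.findall(line))

print(total)
```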
Data set 2 seems more difficult.
Maybe counting can be done by matching on capitalized sequences followed by a comma.
Placename... very tricky. Once again, do you have some OCR output we can look at?
| null |
CC BY-SA 3.0
| null |
2015-02-04T23:58:53.323
|
2015-02-04T23:58:53.323
| null | null |
8075
| null |
5049
|
2
| null |
155
|
15
| null |
I'm surprised one has not mentioned this, as it seems fairly obvious: [http://www.kaggle.com](http://www.kaggle.com) consistently has new and very interesting datasets. Information is considered an asset, so often companies don't want to release that data (plus privacy concerns). Kaggle gives you data and they hope you solve business problems with it in exchange.
| null |
CC BY-SA 3.0
| null |
2015-02-05T00:49:11.713
|
2015-02-28T13:36:02.240
|
2015-02-28T13:36:02.240
|
84
|
9768
| null |
5050
|
1
| null | null |
3
|
942
|
I'm doing some work trying to extract commonly occurring words from a set of human-classified documents, and I have a couple of questions for anyone who might know something about NLP or statistical analysis of text.
We have a set of documents, and users have classified them as either good or bad. What I'd like to do is figure out which words are common to the good documents, but not necessarily the other ones.
I could, for example, use the (frequency within good documents / total frequency) which would essentially normalize the effect of a word being generally common. This, unfortunately, gives very high precedence to words that occur in only a few good documents & not at all in the other documents. I could add some kind of minimum threshold for # of occurrences in good docs before evaluating the total frequency, but it seems kind of hacky.
Does anyone know what the best practice equation or model to use in this case is? I've done a lot of searching and found a lot of references to TF-IDF but that seems more applicable for assessing the value of a term on a single document against the whole set of docs. Here I'm dealing with a set of docs that is a subset of the larger collection.
In other words, I'd like to identify which words are uniquely or more important to the class of good documents.
|
Word Frequency Analysis of Document Sets
|
CC BY-SA 3.0
| null |
2015-02-05T01:29:00.997
|
2015-02-06T00:47:49.187
| null | null |
8105
|
[
"data-mining",
"classification",
"statistics"
] |