Id | PostTypeId | AcceptedAnswerId | ParentId | Score | ViewCount | Body | Title | ContentLicense | FavoriteCount | CreationDate | LastActivityDate | LastEditDate | LastEditorUserId | OwnerUserId | Tags |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1232
|
2
| null |
369
|
3
| null |
nDCG is a ranking metric and RMSE is not. In the context of recommender systems, you would use a ranking metric when your ratings are implicit (e.g., item skipped vs. item consumed) rather than explicit (the user provides an actual number, a la Netflix).
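To make the distinction concrete, here is a minimal nDCG computation (an illustrative sketch only, with 1 = consumed, 0 = skipped):

```python
import numpy as np

def dcg(relevances):
    """Discounted cumulative gain for a ranked list of relevance scores."""
    relevances = np.asarray(relevances, dtype=float)
    discounts = np.log2(np.arange(2, relevances.size + 2))  # log2(rank+1)
    return float(np.sum(relevances / discounts))

def ndcg(ranked_relevances):
    """nDCG: DCG of the ranking divided by the DCG of the ideal ranking."""
    ideal = sorted(ranked_relevances, reverse=True)
    best = dcg(ideal)
    return dcg(ranked_relevances) / best if best > 0 else 0.0

# A recommender ranked three items; 1 = consumed, 0 = skipped.
print(ndcg([0, 1, 1]))  # imperfect order -> below 1.0
print(ndcg([1, 1, 0]))  # ideal order -> exactly 1.0
```

Note that swapping two items changes nDCG but leaves RMSE on the ratings untouched, which is exactly why a ranking metric is the right tool here.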
| null |
CC BY-SA 3.0
| null |
2014-10-09T02:35:24.533
|
2014-10-09T02:35:24.533
| null | null |
381
| null |
1234
|
2
| null |
1205
|
1
| null |
I am developing a recommendation engine for Stack Overflow (personal project). Check it out at [http://recommender.im](http://recommender.im).
It is still a work in progress, but the website is already quite functional. I am publishing most of the code I used there as Python notebooks.
Basically I used:
- Frontend: angularJS
- Website backend: flask + scikit-learn
- machine learning and data preparation: python, pandas, scikit-learn
I really like python for data science as the community and libraries are really good.
| null |
CC BY-SA 3.0
| null |
2014-10-09T10:50:39.197
|
2014-10-09T10:50:39.197
| null | null |
3596
| null |
1235
|
2
| null |
1225
|
2
| null |
Pandas is the best thing since sliced bread (for data science, at least).
An example:
```
In [21]: import pandas as pd
In [22]: df = pd.read_csv('yourexample.csv')
In [23]: df
Out[23]:
user item1 item2
0 a 2 4
1 b 1 3
2 c 5 6
In [24]: df.columns
Out[24]: Index([u'user ', u'item1 ', u'item2'], dtype='object')
In [25]: df.corr()
Out[25]:
item1 item2
item1 1.000000 0.995871
item2 0.995871 1.000000
In [26]: df.cov()
Out[26]:
item1 item2
item1 4.333333 3.166667
item2 3.166667 2.333333
```
Bingo!
| null |
CC BY-SA 3.0
| null |
2014-10-09T10:58:02.557
|
2014-10-09T10:58:02.557
| null | null |
3596
| null |
1236
|
1
|
1249
| null |
1
|
48
|
How can I get the polysemes of a word in WordNet or any other API? I am looking for any API usable from Java. Any idea is appreciated.
|
How to get the polysemes of a word in WordNet or any other API?
|
CC BY-SA 3.0
| null |
2014-10-09T12:26:01.643
|
2014-10-10T16:54:45.047
| null | null |
3598
|
[
"nlp"
] |
1240
|
1
|
2360
| null |
5
|
12346
|
I know there is the standard approach of subtracting the mean and dividing by the standard deviation to standardize your data, but I'm interested to know whether there are methods better suited to this kind of discrete data. Consider the following case.
I have 5 items that have been ranked by customers. The first 2 items were ranked on a 1-10 scale; the others on 1-100 and 1-5 scales. Is there a method better suited to transforming everything to a 1-10 scale?
If the data has a central tendency, the standard approach would work fine, but what about when you have more of a halo effect, or a more exponential distribution?
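For comparison, the plain linear (min-max) mapping onto a 1-10 scale can serve as a baseline to judge other methods against; a quick sketch:

```python
import numpy as np

def rescale(x, old_min, old_max, new_min=1.0, new_max=10.0):
    """Linearly map ratings from [old_min, old_max] onto [new_min, new_max]."""
    x = np.asarray(x, dtype=float)
    return new_min + (x - old_min) * (new_max - new_min) / (old_max - old_min)

# Ratings collected on three different scales, all mapped onto 1-10.
print(rescale([1, 5, 10], 1, 10))     # endpoints map to 1 and 10
print(rescale([1, 50, 100], 1, 100))  # 1-100 scale -> 1-10
print(rescale([1, 3, 5], 1, 5))       # 1-5 scale -> 1-10
```

This preserves the shape of each distribution, which is precisely why it does nothing about halo effects; a rank-based transform (e.g. mapping to percentiles) would be one way to address that.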
|
Methods for standardizing / normalizing different rank scales
|
CC BY-SA 3.0
| null |
2014-10-10T00:59:46.703
|
2015-06-12T10:57:34.290
|
2014-12-13T21:47:23.723
|
84
|
3430
|
[
"statistics"
] |
1241
|
1
| null | null |
4
|
224
|
I do movement building work for Effective Altruism ([http://en.m.wikipedia.org/wiki/Effective_altruism](http://en.m.wikipedia.org/wiki/Effective_altruism)), and would like to level up our growth strategy. It occurred to me that a social network visualization tool which allowed us to strategically find and recruit new influencers/donors would be mega useful. I'd love to find something (preferably free), similar to InMaps, which would allow us to:
- Combine all of our social media connections into a single map
- Easily see who the superconnectors are
- Weight each person by their degree of social influence (perhaps some function of things like Klout score * amount of social media connections * number of Google mentions, etc)
Does such a thing exist? If not, is anyone interested in pro bono work for an amazing cause? =)
Disclaimer: I am a data science noob, so preferably the solution would be one with a nice GUI and minimal involvement of R or Python.
|
Collaborative Social Network Visualization
|
CC BY-SA 3.0
| null |
2014-10-10T01:12:19.163
|
2016-10-16T06:26:57.983
|
2014-10-11T09:45:11.417
|
21
|
3605
|
[
"social-network-analysis"
] |
1242
|
2
| null |
1241
|
3
| null |
I think [Gephi](https://gephi.github.io/), an open-source visualization tool, would help you a lot. Actually, as far as I know, InMaps and its community detection algorithm are the same as Gephi's.
| null |
CC BY-SA 3.0
| null |
2014-10-10T03:33:45.353
|
2014-10-10T03:33:45.353
| null | null |
1048
| null |
1243
|
1
| null | null |
6
|
191
|
Suppose, for example, that the first search result on a page of Google search results is swapped with the second result. How much would this change the click-through probabilities of the two results? How much would its click-through probability drop if the fifth search result was swapped with the sixth?
Can we say something, with some level of assurance, about how expected click-through probabilities change if we do these types of pairwise swaps within pages of search results?
What we seek is a measure of the contribution to click-through rates made specifically by position bias.
Similarly, how would position in the ranking affect sales on Amazon or another online shopping website? Suppose we split sales into two parts, product quality and a ranking effect:
```
sales = alpha*quality + beta*position + epsilon
```
How can we quantify the beta?
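If quality were observable, beta could in principle be estimated by ordinary least squares. A sketch on synthetic data (every number below is invented purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Hypothetical synthetic data standing in for real sales logs.
quality = rng.normal(size=n)
position = rng.integers(1, 11, size=n).astype(float)
alpha_true, beta_true = 2.0, -0.5
sales = alpha_true * quality + beta_true * position + rng.normal(scale=0.1, size=n)

# Ordinary least squares: stack [quality, position], solve for [alpha, beta].
X = np.column_stack([quality, position])
coef, *_ = np.linalg.lstsq(X, sales, rcond=None)
alpha_hat, beta_hat = coef
print(alpha_hat, beta_hat)  # close to 2.0 and -0.5
```

In real data, position is usually assigned partly *because of* quality, so a plain OLS fit like this would be biased; that endogeneity (untangled e.g. by randomized swap experiments) is the hard part of the question.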
|
Can we quantify how position within search results is related to click-through probability?
|
CC BY-SA 3.0
| null |
2014-10-10T03:45:09.343
|
2014-12-17T09:32:53.703
| null | null |
1048
|
[
"recommender-system",
"search",
"information-retrieval",
"regression"
] |
1244
|
1
|
1255
| null |
12
|
5203
|
When ML algorithms, e.g. Vowpal Wabbit or some of the factorization machines winning click-through rate competitions ([Kaggle](https://www.kaggle.com/c/criteo-display-ad-challenge/forums/t/10555/3-idiots-solution/55862#post55862)), mention that features are 'hashed', what does that actually mean for the model? Let's say there is a variable that represents the ID of an internet ad, which takes on values such as '236BG231'. Then I understand that this feature is hashed to a random integer. But, my question is:
- Is the integer now used in the model, as an integer (numeric) OR
- is the hashed value actually still treated like a categorical variable and one-hot-encoded? Thus the hashing trick is just to save space somehow with large data?
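A minimal sketch of the mechanics, using `zlib.crc32` as a stand-in for whatever hash function a given library actually uses:

```python
import zlib

def hash_feature(value, n_buckets=2**20):
    """Map a categorical value to a column index in a sparse indicator vector."""
    return zlib.crc32(value.encode("utf-8")) % n_buckets

# The hashed integer is a *column index*, not a numeric input:
# the model sees a sparse one-hot row with a 1 at that index.
idx = hash_feature("236BG231")
row = {idx: 1.0}  # sparse representation of the one-hot row

print(idx)                               # some bucket in [0, 2**20)
print(hash_feature("236BG231") == idx)   # deterministic: same value, same bucket
```

So the second interpretation is the right one: the value is still treated as an indicator, and hashing just fixes the dimensionality up front without storing a vocabulary.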
|
Hashing Trick - what actually happens
|
CC BY-SA 3.0
| null |
2014-10-10T03:48:54.660
|
2014-10-11T19:48:20.583
| null | null |
1138
|
[
"machine-learning",
"predictive-modeling",
"kaggle"
] |
1246
|
1
|
2515
| null |
10
|
8798
|
Let's assume that I want to train a stochastic gradient descent regression algorithm using a dataset that has N samples. Since the size of the dataset is fixed, I will reuse the data T times. At each iteration or "epoch", I use each training sample exactly once, after randomly reordering the whole training set.
My implementation is based on Python and NumPy, so using vector operations can remarkably decrease computation time. Coming up with a vectorized implementation of batch gradient descent is quite straightforward. However, in the case of stochastic gradient descent, I cannot figure out how to avoid the outer loop that iterates through all the samples at each epoch.
Does anybody know any vectorized implementation of stochastic gradient descent?
EDIT: I've been asked why I would like to use online gradient descent if the size of my dataset is fixed.
From [1], one can see that online gradient descent converges more slowly than batch gradient descent to the minimum of the empirical cost. However, it converges faster to the minimum of the expected cost, which measures generalization performance. I'd like to test the impact of these theoretical results on my particular problem by means of cross-validation. Without a vectorized implementation, my online gradient descent code is much slower than the batch gradient descent one, which remarkably increases the time it takes for the cross-validation process to complete.
EDIT: I include here the pseudocode of my on-line gradient descent implementation, as requested by ffriend. I am solving a regression problem.
```
Method: on-line gradient descent (regression)
Input: X (nxp matrix; each line contains a training sample, represented as a length-p vector), Y (length-n vector; output of the training samples)
Output: A (length-p+1 vector of coefficients)
Initialize coefficients (assign value 0 to all coefficients)
Calculate outputs F
prev_error = inf
error = sum((F-Y)^2)/n
it = 0
while abs(error - prev_error)>ERROR_THRESHOLD and it<=MAX_ITERATIONS:
Randomly shuffle training samples
for each training sample i:
Compute error for training sample i
Update coefficients based on the error above
prev_error = error
Calculate outputs F
error = sum((F-Y)^2)/n
it = it + 1
```
[1] "Large Scale Online Learning", L. Bottou, Y. LeCun, NIPS 2003.
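For what it's worth, a mini-batch compromise lets you vectorize most of the work while keeping the stochastic flavor of the pseudocode above. This is only a sketch (the error-threshold stopping rule is replaced by a fixed epoch count):

```python
import numpy as np

def sgd_regression(X, Y, lr=0.01, batch_size=32, epochs=100, seed=0):
    """Mini-batch SGD for linear regression: the per-sample inner loop is
    replaced by vectorized updates over small slices of the shuffled data."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    Xb = np.hstack([np.ones((n, 1)), X])   # prepend an intercept column
    A = np.zeros(p + 1)                    # coefficients, all start at 0
    for _ in range(epochs):
        order = rng.permutation(n)         # random reshuffle each epoch
        for start in range(0, n, batch_size):
            idx = order[start:start + batch_size]
            err = Xb[idx] @ A - Y[idx]               # residuals for the batch
            A -= lr * (Xb[idx].T @ err) / idx.size   # vectorized gradient step
    return A

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 2))
Y = 1.0 + 3.0 * X[:, 0] - 2.0 * X[:, 1]    # noiseless linear target
A = sgd_regression(X, Y)
print(A)  # approximately [1, 3, -2]
```

With `batch_size=1` this degenerates to pure online SGD; larger batches trade a little stochasticity for vectorized inner updates.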
|
Stochastic gradient descent based on vector operations?
|
CC BY-SA 3.0
| null |
2014-10-10T13:34:11.543
|
2014-11-21T11:50:47.717
|
2014-11-21T10:02:39.520
|
2576
|
2576
|
[
"python",
"gradient-descent",
"regression"
] |
1247
|
2
| null |
155
|
15
| null |
I am new to this forum. Chiming in late on this question. I have been maintaining (I am a co-founder of) a catalog of publicly available data portals. There are over 1,000 now listed, covering portals at international, federal, state, municipal and academic levels across the globe.
[http://www.opengeocode.org/opendata/](http://www.opengeocode.org/opendata/)
| null |
CC BY-SA 3.0
| null |
2014-10-10T14:35:11.323
|
2014-10-10T14:35:11.323
| null | null |
3609
| null |
1248
|
2
| null |
536
|
1
| null |
I am new to this forum. Data cleansing of address data is an area I work in. I agree with the other posters that you should not modify the original data, but add fields for corrected values. I developed a technique in our systems (opengeocode.org) we call 'reduced to common form'. In this method, addresses and geographic names are analyzed for reduction into an unambiguous short form, which is then used for record matching (vs. the original values). For example, the method I use for matching US postal addresses is based on the US Post Office's published method for matching addresses.
For geographic names, the method will reduce to short gazetteer form in Romanized script.
The link below is an article I wrote a couple of years ago that explains how the street address reduction works:
[http://www.nwstartups.com/api/doc/middleware.php#streetR](http://www.nwstartups.com/api/doc/middleware.php#streetR)
| null |
CC BY-SA 3.0
| null |
2014-10-10T14:53:22.500
|
2014-10-10T14:53:22.500
| null | null |
3609
| null |
1249
|
2
| null |
1236
|
2
| null |
There are several third-party Java APIs for WordNet listed here: [http://wordnet.princeton.edu/wordnet/related-projects/#Java](http://wordnet.princeton.edu/wordnet/related-projects/#Java)
In the past, I've used JWNL the most: [http://sourceforge.net/projects/jwordnet/](http://sourceforge.net/projects/jwordnet/)
The documentation for JWNL isn't great, but it should provide the functionality you need.
| null |
CC BY-SA 3.0
| null |
2014-10-10T16:54:45.047
|
2014-10-10T16:54:45.047
| null | null |
819
| null |
1250
|
2
| null |
1214
|
5
| null |
I recently ran into a similar problem: How to manage extracting a variety of features from a large dataset, without knowing up front what all of them would be. (Even calculating mean values repeatedly would be computationally expensive.) Further, how would I manage predictions based on different feature sets? Meaning, if I added a new feature, how would I know which models to train on new features? It could quickly snowball into a huge mess.
My current solution is to track it all in a local NoSQL database (MongoDB). For example, I might have a collection `features`, each entry of which has a name, a description of how the feature was calculated, the python file that ran the extraction, etc.
Likewise, a collection `models` includes models run on the data. Each entry might have a name, a list of features that were used to train the model, its eventual parameters, predicted values on a held-out test set, metrics for how the model performed, etc.
From my vantage point, this has a number of benefits:
- By saving predictions, I can use them later in ensemble predictions.
- Because I keep track of which features were used, I know which ones need retraining as I extract more features.
- By saving model descriptions, I ensure that I always know what I've tried. I never have to wonder, "Have I tried LASSO with regularization parameters set by grid-search CV?" I can always look it up, and see how successful it was.
From your question, it sounds like you could adapt this approach to your problem's workflow. Install Mongo or another database of choice, and then save each experimental run, its inputs, its results, and anything else you might wish to track over the course of the project. This should be much easier to query than a spreadsheet, at the least.
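As an illustration of this approach, here is roughly what documents in those two collections might look like. The field names below are hypothetical, and pymongo calls are replaced by plain dicts to keep the sketch self-contained:

```python
# Plain-dict stand-ins for MongoDB documents (pymongo itself omitted);
# all field names here are hypothetical, chosen for illustration.
features = [
    {"name": "mean_rating", "description": "per-user mean of ratings",
     "script": "extract_mean_rating.py"},
    {"name": "n_visits", "description": "visit count in last 30 days",
     "script": "extract_n_visits.py"},
]

models = [
    {"name": "lasso_cv", "features": ["mean_rating"],
     "params": {"alpha": 0.1}, "test_rmse": 0.92},
]

# "Which models need retraining now that n_visits exists?"
stale = [m["name"] for m in models if "n_visits" not in m["features"]]
print(stale)  # ['lasso_cv']
```

The same question expressed against Mongo would be a simple `find` with a `$nin`-style filter, which is the "easier to query than a spreadsheet" point above.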
| null |
CC BY-SA 3.0
| null |
2014-10-10T18:10:14.507
|
2014-10-10T18:10:14.507
| null | null |
1154
| null |
1253
|
1
| null | null |
28
|
2224
|
I hope you can help me, as I have some questions on this topic. I'm new to the field of deep learning, and while I did some tutorials, I can't relate the concepts to each other or distinguish them from one another.
|
Why are NLP and Machine Learning communities interested in deep learning?
|
CC BY-SA 3.0
| null |
2014-10-11T10:24:01.393
|
2020-08-05T08:48:18.683
|
2015-11-11T15:45:57.500
|
13727
|
3615
|
[
"machine-learning",
"data-mining",
"neural-network",
"nlp",
"deep-learning"
] |
1255
|
2
| null |
1244
|
8
| null |
Your second bullet is where the value of feature hashing lies. Hashing with one-hot encoding into sparse data saves space. Depending on the hash algorithm you can have varying degrees of collisions, which acts as a kind of dimensionality reduction.
Also, in the specific case of Kaggle feature hashing and one hot encoding help with feature expansion/engineering by taking all possible tuples (usually just second order but sometimes third) of features that are then hashed with collisions that explicitly create interactions that are often predictive whereas the individual features are not.
In most cases this technique combined with feature selection and elastic net regularization in LR acts very similar to a one hidden layer NN so it performs quite well in competitions.
| null |
CC BY-SA 3.0
| null |
2014-10-11T19:48:20.583
|
2014-10-11T19:48:20.583
| null | null |
92
| null |
2255
|
1
| null | null |
4
|
3804
|
I need some help with a single layered perceptron with multiple classes.
What I need to do is classify a dataset with three different classes; so far I have only learnt how to do it with two classes, so I don't really have a good clue how to do it with three.
The dataset has three different classes: Iris-setosa, Iris-versicolor and Iris-virginica.
The url with the dataset and the information is in : [http://ftp.ics.uci.edu/pub/machine-learning-databases/iris/iris.data](http://ftp.ics.uci.edu/pub/machine-learning-databases/iris/iris.data).
I really appreciate any help anyone can give to me.
Thanks a lot!
|
Single Layer Perceptron with three classes
|
CC BY-SA 3.0
| null |
2014-10-11T23:26:53.197
|
2014-10-13T13:35:09.443
| null | null |
4618
|
[
"dataset"
] |
2256
|
2
| null |
1241
|
2
| null |
I don't think there is a way to build your graph from raw data without using at least basic programming skills. I'm not aware of a drag-and-drop interface for importing and displaying data. Graphs are just a bit too complex. Imagine trying to find the profit of selling a product if all you had was CSVs of receipts dropped into Excel. You'd need labels of the columns, some basic calculations, and so on before you had anything intelligible. Graphs are similar in this regard.
Thankfully, there are open source solutions; with some elbow grease and a few days of work, you can probably get a nice visualization.
Cypher queries are relatively simple to write. Using Neo4j and Cypher, you can create a basic visualization of your graph, which is displayed using D3.js
GraphAlchemist recently open-sourced their project Alchemy.js which specializes in graph visualization. [https://github.com/GraphAlchemist/Alchemy](https://github.com/GraphAlchemist/Alchemy)
| null |
CC BY-SA 3.0
| null |
2014-10-12T02:53:51.650
|
2014-10-12T02:53:51.650
| null | null |
3466
| null |
2257
|
1
| null | null |
6
|
166
|
I have some rather complicated data about online movie sales. For each data entry, I have a key which is a combination of five keys (territory, day, etc.), and for each key I have the sales over a period of time, plus other information, like the movie's box office and genre.
For each day, there is a delay of around ten hours before the data is loaded into the database; I try to fill that gap with some data extrapolation.
For each movie we sell, sales decay after the movie's release, i.e. each movie usually follows some sales decay pattern.
For a recent day, I pulled some data and found a decay pattern (figures omitted).
And for that day, the sales for each key can range from around $150,000 to $0 (figure omitted); the 15000 in that figure means there are around 15000 keys for each day.
Found [this article](http://homepage.stat.uiowa.edu/%7Ekcowles/s166_2009/Project_Lee&Pyo.pdf).
I am trying to predict the sales amount for each key, i.e. for each movie/territory/day combination, how much money we get from selling online. I tried an ARIMA time series model, but I have some concerns about it: as seen from the pics, there is a seasonal component and a decay component for each movie, so the sales prediction cannot always be flat. There may be a bump after a decline, perhaps on a weekend. How can I capture the seasonality and the decay trend?
I am not sure whether the model from that article can be applied here, and if so, how.
Thanks a lot in advance.
|
How to do this complicated data extrapolation, prediction modeling?
|
CC BY-SA 4.0
| null |
2014-10-12T05:27:17.687
|
2020-08-18T04:19:29.863
|
2020-08-18T04:19:29.863
|
98307
|
4619
|
[
"data-mining",
"time-series"
] |
2258
|
1
|
2261
| null |
7
|
724
|
First of all I know the question may be not suitable for the website but I'd really appreciate it if you just gave me some pointers.
I'm a 16-year-old programmer with experience in many different programming languages. A while ago I started a Coursera course titled Introduction to Machine Learning, and since then I've been very motivated to learn about AI. I started reading about neural networks, and I made a working perceptron using Java, which was really fun. But when I started to do something a little more challenging (building digit recognition software), I found out that I have to learn a lot of math. I love math, but the schools here don't teach us much of it. I happen to know someone who is a math teacher: do you think learning math (specifically calculus) is necessary for me to learn AI, or should I wait until I learn that stuff at school?
Also, what other things would be helpful on my path to learning AI and machine learning? Do other techniques (like SVM) also require strong math?
Sorry if my question is long, I'd really appreciate if you could share with me any experience you have had with learning AI.
|
Where to start on neural networks
|
CC BY-SA 3.0
| null |
2014-10-12T11:23:26.493
|
2019-11-26T08:41:17.383
| null | null |
4620
|
[
"machine-learning",
"neural-network",
"svm"
] |
2259
|
2
| null |
2255
|
2
| null |
Well, when it comes to AI I am an absolute beginner, but here is my answer to your question based on my understanding:
A perceptron has only one activation function, so it can return only the values true and false (in most cases true = 1 and false = 0). Because of that, I don't think you will be able to accomplish your goal using only one perceptron, but you can absolutely do it using multiple perceptrons, which is essentially a neural network. Of course, training the network will be a lot harder than calculating the weight changes as you do for a single perceptron: you are going to have to use a training algorithm such as backpropagation with a sigmoid activation function.
I hope my answer was helpful.
| null |
CC BY-SA 3.0
| null |
2014-10-12T11:49:04.480
|
2014-10-12T11:49:04.480
| null | null |
4620
| null |
2260
|
2
| null |
2255
|
5
| null |
The standard way to do this is called 'one versus all': you train three perceptrons. The first perceptron's target is "is it class a?", the second's is "is it class b?", and the third's is "is it class c?". You train each perceptron separately, and then take the max over the three perceptrons' outputs to decide the class.
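A minimal sketch of this scheme on toy data (the data and class names below are made up for illustration):

```python
import numpy as np

def train_perceptron(X, y, epochs=20):
    """Train one binary perceptron; y must be +1 / -1."""
    Xb = np.hstack([np.ones((len(X), 1)), X])   # bias column
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        for xi, yi in zip(Xb, y):
            if yi * (xi @ w) <= 0:              # misclassified -> update
                w += yi * xi
    return w

def one_vs_all(X, labels):
    """One perceptron per class, each trained as 'is it class k?'."""
    classes = sorted(set(labels))
    W = np.array([train_perceptron(X, np.where(np.array(labels) == k, 1, -1))
                  for k in classes])
    return classes, W

def predict(X, classes, W):
    Xb = np.hstack([np.ones((len(X), 1)), X])
    return [classes[i] for i in np.argmax(Xb @ W.T, axis=1)]  # max over the 3 scores

# Tiny separable toy data with three classes.
X = np.array([[0.0, 0.0], [0.2, 0.1], [5.0, 5.0],
              [5.1, 4.9], [0.0, 5.0], [0.2, 5.1]])
labels = ["a", "a", "b", "b", "c", "c"]
classes, W = one_vs_all(X, labels)
print(predict(X, classes, W))  # recovers the training labels
```

For the Iris data the same code applies unchanged with 4 input columns and the three species as labels.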
| null |
CC BY-SA 3.0
| null |
2014-10-12T16:35:21.260
|
2014-10-12T16:35:21.260
| null | null |
1256
| null |
2261
|
2
| null |
2258
|
5
| null |
No, you should go ahead and learn the maths on your own. You will "only" need to learn calculus, statistics, and linear algebra (like the rest of machine learning). The theory of neural networks is pretty primitive at this point -- it is more of an art than a science -- so I think you can understand it if you try. That said, there are a lot of tricks that you need practical experience to learn. There are lots of complicated extensions, but you can worry about them once you get that far.
Once you can understand the Coursera classes on ML and neural networks (Hinton's), I suggest getting some practice. You might like [this](http://karpathy.github.io/neuralnets/) introduction.
| null |
CC BY-SA 3.0
| null |
2014-10-12T19:36:33.027
|
2014-10-13T05:27:54.170
|
2014-10-13T05:27:54.170
|
381
|
381
| null |
2262
|
2
| null |
1214
|
5
| null |
You might want to look at [http://deeplearning.net/software/jobman/intro.html](http://deeplearning.net/software/jobman/intro.html)
It was designed for deep learning (I guess), but it is application agnostic. It is effectively an API version of Sean Easter's approach.
| null |
CC BY-SA 3.0
| null |
2014-10-12T19:52:55.430
|
2014-10-12T19:52:55.430
| null | null |
1256
| null |
2263
|
1
| null | null |
1
|
149
|
I was wondering if someone could point me to suitable database formats for building up a user database.
Basically, I am collecting logs of impression data, and I want to compile a user database: which sites a user visits, country/gender, and other categorisations, with the aim of
a) doing searches: give me all users visiting games sites from France;
b) machine learning: e.g. clustering users by the sites they visit.
So I am interested in storing info about hundreds of millions of users, with indexes on user, sites and geo-location, and the idea is that this data would be continually updated (e.g. a nightly update of the user database with new sites visited, etc.).
What are suitable database systems? Can someone suggest suitable reading material?
I was imagining HBase might be suitable...
|
"Hadoop" formats for user database: online advertising
|
CC BY-SA 3.0
| null |
2014-10-13T11:23:23.483
|
2014-11-14T16:00:29.030
|
2014-11-14T16:00:29.030
|
1256
|
1256
|
[
"hbase"
] |
2264
|
2
| null |
2255
|
2
| null |
Perceptrons, strictly speaking, are binary classifiers.
To make a multi-class classifier, you should switch to a standard feed-forward neural net with a [softmax](http://en.wikipedia.org/wiki/Softmax_function) output layer. Without any hidden layers this is equivalent to multinomial logistic regression.
You can also do the one-vs-all trick as @seanv507 suggests. This often works well in practice but there's no strong basis for it in theory, and the true multi-class version is easier conceptually and practically in this case.
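A minimal sketch of the softmax output layer: with no hidden layer the class scores are just `W x + b`, and softmax turns them into probabilities (the weights below are made up for illustration):

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# One row of weights per class (3 classes, 2 inputs); no hidden layer,
# so this is exactly multinomial logistic regression.
W = np.array([[ 1.0,  0.0],
              [ 0.0,  1.0],
              [-1.0, -1.0]])
b = np.zeros(3)
x = np.array([2.0, -1.0])

p = softmax(W @ x + b)
print(p, p.sum())         # class probabilities summing to 1
print(int(np.argmax(p)))  # predicted class: 0
```

Training then minimizes cross-entropy over all classes jointly, rather than three independent binary losses as in one-vs-all.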
| null |
CC BY-SA 3.0
| null |
2014-10-13T13:35:09.443
|
2014-10-13T13:35:09.443
| null | null |
1399
| null |
2265
|
2
| null |
1246
|
1
| null |
Check out the partial_fit method of [scikit's SGD classifier](http://scikit-learn.org/0.15/modules/generated/sklearn.linear_model.SGDClassifier.html#sklearn.linear_model.SGDClassifier). You have control over what you call it with: you can do "true" online learning by passing one instance at a time, or you can batch instances into mini-batches if all your data are available in an array. If they are, you can slice the array to provide the mini-batches.
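A sketch of that mini-batch pattern (synthetic data, default hyperparameters; with a batch size of 1 this becomes true online learning):

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

# Synthetic data, invented for illustration: the label follows a linear rule.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

clf = SGDClassifier(random_state=0)
classes = np.unique(y)          # must be supplied on the first partial_fit call
batch = 100
for start in range(0, len(X), batch):   # slice the array into mini-batches
    clf.partial_fit(X[start:start + batch], y[start:start + batch],
                    classes=classes)

print(clf.score(X, y))  # trained incrementally, one mini-batch at a time
```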
| null |
CC BY-SA 3.0
| null |
2014-10-13T13:50:07.080
|
2014-10-13T13:50:07.080
| null | null |
1399
| null |
2266
|
2
| null |
1243
|
1
| null |
It's just logistic regression. Get a bunch of data about presentations of search results, along with whether an item was clicked on. An instance is a search result item, with possible features being rank, "quality" (not sure what you mean by this), etc. Then what you're asking about is a question of inference on the parameter related to rank.
| null |
CC BY-SA 3.0
| null |
2014-10-13T13:53:36.027
|
2014-10-13T13:53:36.027
| null | null |
1399
| null |
2267
|
2
| null |
1243
|
2
| null |
you might want to look at this paper
[Predicting Clicks: Estimating the Click-Through Rate for New Ads](http://research.microsoft.com/pubs/68148/predictingclicks.pdf)
>
Whenever an ad is displayed on the search results page, it has
some chance of being viewed by the user. The farther down the
page an ad is displayed, the less likely it is to be viewed. As a
simplification, we consider the probability that an ad is clicked on
to be dependent on two factors: a) the probability that it is viewed,
and b) the probability that it is clicked on, given that it is viewed:
| null |
CC BY-SA 3.0
| null |
2014-10-13T14:46:54.403
|
2014-10-13T14:46:54.403
| null | null |
1256
| null |
2268
|
1
| null | null |
1
|
3449
|
I am new to Natural Language Processing. I think NLP is a challenging field; the syntactic and semantic ambiguities can cause a lot of problems. For example, I think these ambiguities make machine translation a hard task.
Therefore there are probably many approaches and methods that have been applied to this field. But what are the latest and most promising approaches and methods in the field of NLP?
Are these techniques highly dependent on the target language?
|
What is the state of the art in the field of NLP?
|
CC BY-SA 3.0
| null |
2014-10-13T19:02:24.670
|
2014-11-12T18:22:07.253
|
2014-10-14T00:27:48.050
|
24
|
3436
|
[
"nlp"
] |
2269
|
1
| null | null |
29
|
27831
|
I am looking for an online console for the language R: I write the code, and the server executes it and provides me with the output.
Similar to the website Datacamp.
|
Any Online R console?
|
CC BY-SA 3.0
| null |
2014-10-13T21:13:48.447
|
2021-02-09T01:33:22.723
| null | null |
4637
|
[
"r",
"statistics"
] |
2270
|
2
| null |
2269
|
5
| null |
Yes. I believe [this](http://www.compileonline.com/execute_r_online.php) is what you are looking for.
| null |
CC BY-SA 4.0
| null |
2014-10-13T22:07:51.180
|
2021-02-09T01:32:29.007
|
2021-02-09T01:32:29.007
|
29169
|
3466
| null |
2271
|
2
| null |
2268
|
2
| null |
You are probably aware that deep learning is all the rage these days, and it has touched NLP too. There is a tutorial on it from a recent conference: [Deep Learning for Natural Language Processing (without Magic)](http://nlp.stanford.edu/courses/NAACL2013/) by Richard Socher and Christopher Manning, who are from Stanford.
| null |
CC BY-SA 3.0
| null |
2014-10-14T01:19:37.067
|
2014-10-14T01:19:37.067
| null | null |
381
| null |
2272
|
2
| null |
2269
|
28
| null |
- R On Cloud provides a browser-embedded R console.
- Jupyter.org evolved from the IPython Project (the language-agnostic parts of IPython); it supports Python 3, Julia, R, Haskell, Ruby, etc.
| null |
CC BY-SA 3.0
| null |
2014-10-14T07:02:47.500
|
2018-02-01T18:11:43.553
|
2018-02-01T18:11:43.553
|
45572
|
2961
| null |
2273
|
1
|
2274
| null |
5
|
1083
|
I have a variable whose value I would like to predict, and I would like to use only one variable as predictor. For instance, predict traffic density based on weather.
Initially, I thought about using [Self-Organizing Maps](http://en.wikipedia.org/wiki/Self-organizing_map) (SOM), which perform unsupervised clustering + regression. However, since SOM has an important dimensionality reduction component, I see it as more appropriate for a large number of variables.
Does it make sense to use it with a single predictor variable? Maybe there are more adequate techniques for this simple case: I used "data mining" instead of "machine learning" in the title of my question because I think maybe a linear regression could do the job...
|
What is the best Data Mining algorithm for prediction based on a single variable?
|
CC BY-SA 3.0
| null |
2014-10-14T08:50:53.907
|
2014-10-18T13:25:13.107
|
2014-10-18T13:25:13.107
|
84
|
3159
|
[
"clustering",
"regression",
"correlation"
] |
2274
|
2
| null |
2273
|
5
| null |
A common rule in machine learning is to try simple things first. For predicting continuous variables there's nothing more basic than simple linear regression. "Simple" in the name means that there's only one predictor variable used (+ intercept, of course):
```
y = b0 + x*b1
```
where `b0` is an intercept and `b1` is a slope. For example, you may want to predict lemonade consumption in a park based on temperature:
```
cons = b0 + temp * b1
```
Temperature is a well-defined continuous variable. But if we talk about something more abstract like "weather", then it's harder to understand how we measure and encode it. It's ok if we say that the weather takes values `{terrible, bad, normal, good, excellent}` and assign those values numbers from -2 to +2 (implying that "excellent" weather is twice as good as "good"). But what if the weather is given by words like `{shiny, rainy, cool, ...}`? We can't give an order to these values. We call such variables categorical. Since there's no natural order between the different categories, we can't encode them as a single numerical variable (and linear regression expects numbers only), but we can use so-called dummy encoding: instead of a single variable `weather` we use 3 variables - `[weather_shiny, weather_rainy, weather_cool]` - only one of which can take value 1, while the others must take value 0. In fact, we will have to drop one variable because of [collinearity](http://en.wikipedia.org/wiki/Multicollinearity). So the model for predicting traffic from weather may look like this:
```
traffic = b0 + weather_shiny * b1 + weather_rainy * b2 # weather_cool dropped
```
where either `b1` or `b2` is 1, or both are 0.
Note that you can also encounter a non-linear dependency between the predictor and predicted variables (you can easily check it by plotting `(x,y)` pairs). The simplest way to deal with it without abandoning the linear model is to use polynomial features - simply add polynomials of your feature as new features. E.g. for the temperature example (for dummy variables it doesn't make sense, since `1^n` and `0^n` are still 1 and 0 for any `n`):
```
traffic = b0 + temp * b1 + temp^2 * b2 [+ temp^3 * b3 + ...]
```
| null |
CC BY-SA 3.0
| null |
2014-10-14T13:10:39.773
|
2014-10-14T13:10:39.773
| null | null |
1279
| null |
2275
|
1
| null | null |
3
|
44
|
[Automated Time Series Forecasting for Biosurveillance](http://papers.ssrn.com/sol3/papers.cfm?abstract_id=923635)
In the above paper, on page 4, there are two models: a non-adaptive regression model and an adaptive regression model. The non-adaptive regression model's parameter estimation method is least squares. What is the parameter estimation method for the adaptive regression model? Is there any package in R that can do parameter estimation for this kind of adaptive regression model? If I add more predictors to the adaptive regression model, can R still solve it, and how?
|
What are the parameter estimation methods for the two methods in this paper?
|
CC BY-SA 3.0
| null |
2014-10-14T15:26:01.460
|
2017-05-25T09:19:38.273
|
2017-05-25T09:19:38.273
|
8878
|
4619
|
[
"r",
"statistics"
] |
2276
|
2
| null |
2263
|
0
| null |
The kind of data you store and analyze is very much dependent upon the kind of data you can gather. So, without knowing what your 'impression data' looks like, it is very hard to suggest how to normalize and store it.
Furthermore, the way you store data also depends on how you wish to analyze it. For example, if you want to perform basic analytics like page view counts or how many pages a user visits per session (SQL), data needs to be stored differently than if you want to build recommendations based on traffic patterns (a graph database).
Please edit your question to include more detail. Apologies that I cannot simply leave a comment.
| null |
CC BY-SA 3.0
| null |
2014-10-14T17:47:02.663
|
2014-10-14T17:47:02.663
| null | null |
3466
| null |
2277
|
2
| null |
2273
|
1
| null |
I am more of an expert on data ETL and combining/aggregating than on the formulas themselves. I work frequently with weather data, so I'd like to give some suggestions on using weather data in analysis.
- Two types of data are reported in US/Canada:
A. Measurements
B. Weather Type
As for weather types (sunny, rainy, severe thunderstorm): they are either already reflected in the measurements (e.g., sunny, rainy) and thus redundant, or they are inclement weather conditions that are not necessarily reflected in the measurements.
For inclement weather types, I would have separate formulae.
For measurements, there are 7 standard daily measurements for weather station reporting in North America:
- Temp Min/Max
- Precipitation
- Average Wind Speed
- Average Cloudiness (percentage)
- Total Sunlight (minutes)
- Snowfall
- Snow Depth
Not all stations report all 7 daily measurements; some report only temperature and precipitation. So you may want one formula for temperature/precipitation and an expanded formula when all seven measurements are available.
The two links below are NOAA/NWS weather terms used in their datasets:
This document is the vocabulary for the annual summaries:
[http://www1.ncdc.noaa.gov/pub/data/cdo/documentation/ANNUAL_documentation.pdf](http://www1.ncdc.noaa.gov/pub/data/cdo/documentation/ANNUAL_documentation.pdf)
This document is the vocabulary for the daily summaries
[http://www1.ncdc.noaa.gov/pub/data/cdo/documentation/GHCND_documentation.pdf](http://www1.ncdc.noaa.gov/pub/data/cdo/documentation/GHCND_documentation.pdf)
| null |
CC BY-SA 3.0
| null |
2014-10-14T18:24:36.880
|
2014-10-14T18:24:36.880
| null | null |
3609
| null |
2278
|
1
| null | null |
2
|
75
|
How would I do parameter estimation and prediction for the adaptive regression model using R, as in the 4th page of the paper linked below?
[http://papers.ssrn.com/sol3/papers.cfm?abstract_id=923635](http://papers.ssrn.com/sol3/papers.cfm?abstract_id=923635)
Could anyone clarify this for me?
If you know adaptive regression models very well, please share some useful links, or describe the model / parameter estimation / prediction; that would be very helpful.
Thank you so much!
|
Adaptive regression parameter estimation in R
|
CC BY-SA 3.0
| null |
2014-10-15T02:22:09.983
|
2014-11-20T15:28:14.573
|
2014-10-21T14:07:51.717
|
24
|
4619
|
[
"r",
"statistics",
"regression"
] |
2279
|
2
| null |
2278
|
1
| null |
Here are some notes for the R package 'earth' for adaptive regression. This may be useful.
[http://cran.r-project.org/web/packages/earth/vignettes/earth-notes.pdf](http://cran.r-project.org/web/packages/earth/vignettes/earth-notes.pdf)
| null |
CC BY-SA 3.0
| null |
2014-10-15T03:08:59.263
|
2014-10-15T03:08:59.263
| null | null |
4619
| null |
2280
|
2
| null |
536
|
0
| null |
In the case you mention, I recommend to keep the changes as a dictionary, for instance in a .csv file. Write a script that replaces the values in the original data based on the translation in your dictionary. That way, you separate the corrections from the script itself.
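A minimal sketch of that idea in Python (the file contents and the `old`/`new` column names are just placeholders; in practice the dictionary would live in its own corrections.csv next to the script):

```python
import csv
import io

# Sketch with made-up file contents; in practice `corrections_file` would be
# open("corrections.csv"), kept separate from the analysis script itself.
corrections_file = io.StringIO("old,new\nNYC,New York\nSF,San Francisco\n")
corrections = {row["old"]: row["new"] for row in csv.DictReader(corrections_file)}

data = ["NYC", "Boston", "SF"]
cleaned = [corrections.get(v, v) for v in data]  # unknown values pass through
print(cleaned)  # ['New York', 'Boston', 'San Francisco']
```

Keeping the mapping in a data file means corrections can be reviewed and extended without touching the code.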
| null |
CC BY-SA 3.0
| null |
2014-10-15T11:37:47.090
|
2014-10-15T11:37:47.090
| null | null |
906
| null |
2281
|
2
| null |
2258
|
2
| null |
I would say... it really depends. You may need to:
- use machine learning algorithms: this will be useful for specific applications you may have. In this situation what you need is some programming skill and a taste for testing (practicing will make you strong). Here maths are not so much required, I would say.
- be able to modify existing algorithms. Your specific application may be resistant to standard algorithms, so you may need to adapt them to get maximum efficiency. Here maths come into play.
- understand the theory behind algorithms. Here maths are necessary, and will help you increase your knowledge of the field of machine learning, develop your own algorithms, and speak the same language as your peers... NN theory may be primitive as said by @Emre, but for instance this is not the case for SVM (the theory behind SVM requires e.g. understanding reproducing kernel Hilbert spaces).
In the medium term you will surely need strong maths. But you don't need to wait for them to come to you; you can start right now with linear algebra, which is beautiful and useful for everything. And if you encounter (possibly temporary) difficulties of any sort with maths, keep practicing the way you already do (many people can talk about the perceptron but are not able to implement a perceptron in Java); this is very valuable.
| null |
CC BY-SA 3.0
| null |
2014-10-15T20:37:40.633
|
2014-10-15T20:37:40.633
| null | null |
3317
| null |
2284
|
1
| null | null |
6
|
245
|
I want to identify separate queries within sentences.
Like - `Who is Bill Gates and where he was born?` or `Who is Bill Gates, where he was born?` contains two queries
- Who is Bill Gates?
- Where Bill Gates was born
I worked on Coreference resolution, so I can identify that `he` points to `Bill Gates` so resolved sentence is "Who is Bill Gates, where Bill Gates was born"
Likewise
```
MGandhi is good guys, Where he was born?
single query
who is MGandhi and where was he born?
2 queries
who is MGandhi, where he was born and died?
3 queries
India won world cup against Australia, when?
1 query (when India won WC against Auz)
```
I can perform coreference resolution (identifying and converting `he` to `Gandhi`), but I don't see how to distinguish the separate queries.
How can this be done?
I checked various sentence parsers, but as this is pure NLP stuff, a sentence parser does not identify it.
I tried to find "sentence disambiguation" along the lines of "word sense disambiguation", but nothing like that seems to exist.
Any help or suggestion would be much appreciated.
|
Is it possible to identify different queries/questions in sentence?
|
CC BY-SA 3.0
| null |
2014-10-16T05:44:40.183
|
2015-02-13T11:08:14.467
|
2014-10-16T09:26:18.960
|
4662
|
4662
|
[
"machine-learning",
"data-mining",
"nlp",
"social-network-analysis"
] |
2285
|
1
| null | null |
2
|
185
|
Recently, in a data analytics job interview for an e-commerce site, they asked me whether I had any knowledge of the buyer classification problem. Unfortunately, I heard this term for the first time.
After the interview I searched for it extensively on Google, but didn't find anything meaningful. Please let me know if you have heard this term before, and share some links explaining the concept. Thanks
|
what is buyer classification problem?
|
CC BY-SA 3.0
| null |
2014-10-16T06:37:09.887
|
2014-10-16T21:12:09.490
| null | null |
713
|
[
"classification"
] |
2286
|
2
| null |
2284
|
2
| null |
The basic thing you can do in that situation is to split your query into N simple sentences, each of which is then processed to obtain a YES/NO answer as to whether the sentence is a query. That way you will receive the following results:
```
Input: Gandhi is good guys, Where he was born?
->
Gandhi is good guys - not query
Where he was born? - query
===
1 query
Input: who is MGandhi and where was he born?
->
who is MGandhi - query
where was he born? - query
===
2 queries
```
This approach will require anaphora resolution (in order to convert `he` into `Gandhi` in first example) and a parser to correctly divide complex sentence into simple ones.
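A very rough sketch of the split-then-check step in plain Python. Note the comma/"and" splitting and the wh-word test are naive stand-ins for the parser and question detector described above, not a real implementation:

```python
import re

# Naive assumptions: clauses are separated by commas or "and", and a clause
# is a query if it starts with a wh-word. A real system would use a parser.
WH_WORDS = {"who", "what", "where", "when", "why", "how", "which"}

def split_clauses(sentence):
    parts = re.split(r",| and ", sentence.rstrip("?. "))
    return [p.strip() for p in parts if p.strip()]

def is_query(clause):
    return clause.split()[0].lower() in WH_WORDS

sentence = "who is MGandhi and where was he born?"
queries = [c for c in split_clauses(sentence) if is_query(c)]
print(queries)  # ['who is MGandhi', 'where was he born'] -> 2 queries
```

Anaphora resolution would run before this step, so that `he` has already been replaced by `MGandhi`.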
| null |
CC BY-SA 3.0
| null |
2014-10-16T08:03:15.593
|
2014-10-16T08:03:15.593
| null | null |
2573
| null |
2287
|
1
|
2288
| null |
2
|
331
|
I am kind of a newbie on machine learning and I would like to ask some questions based on a problem I have.
Let's say I have x, y, z as variables and I have values of these variables as time progresses, like:
t0 = x0 y0 z0
t1 = x1 y1 z1
tn = xn yn zn
Now I want a model that, given 3 values of x, y, z, gives a prediction of them, like:
Input : x_test y_test z_test
Output : x_prediction y_prediction z_prediction
These values are float numbers. What is the best model for this kind of problem?
Thanks in advance for all the answers.
More details:
Ok so let me give some more details about the problems so as to be more specific.
I have run certain benchmarks and taken values of performance counters from the cores of a system per interval.
The performance counters are the x, y, z in the above example. They are dependent on each other. A simple example is x = IPC, y = cache misses, z = energy at core.
So I got this dataset of all these performance counters per interval. What I want to do is create a model that, after learning from the training dataset, is given a certain state of the core (the performance counters) and predicts the performance counters that the core will have in the next interval.
|
Regression Model for explained model(Details inside)
|
CC BY-SA 3.0
| null |
2014-10-16T12:15:32.017
|
2014-10-27T16:04:23.527
|
2014-10-27T16:04:23.527
|
4668
|
4668
|
[
"machine-learning",
"logistic-regression",
"predictive-modeling",
"regression"
] |
2288
|
2
| null |
2287
|
0
| null |
AFAIK if you want to predict the value of one variable, you need to have one or more variables as predictors; i.e.: you assume the behaviour of one variable can be explained by the behaviour of other variables.
In your case you have three independent variables whose value you want to predict, and since you don't mention any other variables, I assume that each variable depends on the others. In that case you could fit three models (for instance, regression models), each of which would predict the value of one variable, based on the others. As an example, to predict x:
```
x_prediction=int+cy*y_test+cz*z_test
```
where `int` is the intercept and `cy`, `cz` the coefficients of the linear regression.
Likewise, in order to predict y and z:
```
y_prediction=int+cx*x_test+cz*z_test
z_prediction=int+cx*x_test+cy*y_test
```
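A minimal plain-Python sketch of fitting one such model (predicting x from y and z) by ordinary least squares via the normal equations; the toy data and the exact relationship x = 1 + 2*y + 3*z are made up for illustration (in practice you would use a library such as scikit-learn or statsmodels):

```python
# Plain-Python OLS sketch for one of the three models (predicting x from y, z).

def solve3(a, b):
    """Solve a 3x3 linear system by Gauss-Jordan elimination with pivoting."""
    m = [row[:] + [bi] for row, bi in zip(a, b)]
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(m[r][i]))
        m[i], m[p] = m[p], m[i]
        for r in range(3):
            if r != i:
                f = m[r][i] / m[i][i]
                m[r] = [u - f * v for u, v in zip(m[r], m[i])]
    return [m[i][3] / m[i][i] for i in range(3)]

def fit_ols(ys, zs, xs):
    """Fit x ~ int + cy*y + cz*z via the normal equations (X'X) b = X'x."""
    rows = [[1.0, y, z] for y, z in zip(ys, zs)]
    xtx = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    xty = [sum(r[i] * x for r, x in zip(rows, xs)) for i in range(3)]
    return solve3(xtx, xty)  # [intercept, cy, cz]

# toy data where x = 1 + 2*y + 3*z exactly, so OLS recovers (1, 2, 3)
ys, zs = [0.0, 1.0, 2.0, 3.0], [1.0, 0.0, 2.0, 1.0]
xs = [1 + 2 * y + 3 * z for y, z in zip(ys, zs)]
intercept, cy, cz = fit_ols(ys, zs, xs)
```

The same fit, repeated with the roles of the variables swapped, gives the other two models.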
| null |
CC BY-SA 3.0
| null |
2014-10-16T14:34:33.180
|
2014-10-16T14:34:33.180
| null | null |
3159
| null |
2290
|
2
| null |
2285
|
2
| null |
Buyer classification is used to categorize users who purchase groups of items. Buyers are categorized in order to be targeted for advertising. When users buy similar items, they are more likely to buy similar items in the future. This is useful information when pricing items on a website.
A clear example, a website selling nutritional supplements may want to target different buyers by category. Men in their 20s are unlikely to purchase menopause supplements, and women in their 50s are also unlikely to buy creatine. So, by analyzing user purchase history and categorizing types of buyers, the site can send intelligent promotions - males in their 20s get ads for creatine while women in their 50s get ads for menopause supplements.
Also, if I want to run a sale to attract customers, I don't want to offer lower prices on items which are often purchased together. I'd rather price one low to attract the buyer, and then hope they buy the complementary item at full price. Categorizing buyers also helps with this problem.
You may want to read up on shopping cart analysis, which is not a new problem. Department stores have been analyzing shopping carts and classifying buyers long before online shopping was popularized. That's why you have to use membership cards to get the 'special' prices.
Fine tuning these details can increase revenue substantially.
| null |
CC BY-SA 3.0
| null |
2014-10-16T21:12:09.490
|
2014-10-16T21:12:09.490
| null | null |
3466
| null |
2293
|
1
|
2296
| null |
10
|
2238
|
I am trying to setup a big data infrastructure using Hadoop, Hive, Elastic Search (amongst others), and I would like to run some algorithms over certain datasets. I would like the algorithms themselves to be scalable, so this excludes using tools such as Weka, R, or even RHadoop. The [Apache Mahout Library](https://mahout.apache.org) seems to be a good option, and it features [algorithms for regression and clustering tasks](https://mahout.apache.org/users/basics/algorithms.html).
What I am struggling to find is a solution for anomaly or outlier detection.
Since Mahout features Hidden Markov Models and a variety of clustering techniques (including k-means), I was wondering if it would be possible to build a model to detect outliers in time series using any of these. I would be grateful if somebody experienced with this could advise me
- if it is possible, and in case it is
- how-to do it, plus
- an estimation of the effort involved and
- accuracy/problems of this approach.
|
Scalable Outlier/Anomaly Detection
|
CC BY-SA 3.0
| null |
2014-10-17T10:47:13.197
|
2017-05-05T11:04:04.157
|
2017-05-04T19:57:23.533
|
31513
|
3159
|
[
"data-mining",
"bigdata",
"algorithms",
"outlier"
] |
2294
|
2
| null |
253
|
3
| null |
It strongly depends on the environment/company you are working with. In my eyes there is a "big data" hype at the moment and a lot of companies try to enter the field with Hadoop-based solutions - which makes Hadoop a buzzword too, but it's not always the best solution.
In my mind, a good data scientist should be able to ask the right questions and keep on asking until it's clear what's really needed. Then a good data scientist - of course - needs to know how to address the problem (or at least know someone who can). Otherwise your stakeholders could get frustrated :-)
So, I would say it is not absolutely necessary to learn Hadoop.
| null |
CC BY-SA 4.0
| null |
2014-10-17T10:55:04.147
|
2021-02-08T01:59:05.257
|
2021-02-08T01:59:05.257
|
29169
|
4676
| null |
2296
|
2
| null |
2293
|
7
| null |
I would take a look at the [t-digest algorithm](https://github.com/tdunning/t-digest). It's [been merged into Mahout](https://issues.apache.org/jira/browse/MAHOUT-1361) and is also part of [some other libraries](http://github.com/addthis/stream-lib/blob/master/src/main/java/com/clearspring/analytics/stream/quantile/TDigest.java) for big data streaming. You can learn more about this algorithm in particular, and big data anomaly detection in general, from the following resources:
- Practical machine learning anomaly detection book.
- Webinar: Anomaly Detection When You Don't Know What You Need to Find
- Anomaly Detection in Elasticsearch.
- Beating Billion Dollar Fraud Using Anomaly Detection: A Signal Processing Approach using Argyle Data on the Hortonworks Data Platform with Accumulo
| null |
CC BY-SA 3.0
| null |
2014-10-17T12:50:56.243
|
2017-05-05T11:04:04.157
|
2017-05-05T11:04:04.157
|
4679
|
4679
| null |
2297
|
1
| null | null |
1
|
494
|
I'm looking for a topic for my master's thesis. Machine learning is my primary domain and I want to work on probabilistic models and applied probability in machine learning. Please suggest some exciting new topics that would make for a good master's thesis subject.
Anything related to Markov chain Monte Carlo, Bayesian methods, probabilistic graphical models, Markov models and so on in the context of machine learning would be great!
|
Masters thesis topics in Applied probability and Probabilistic models in Machine Learning
|
CC BY-SA 3.0
| null |
2014-10-17T13:34:48.540
|
2014-10-17T13:34:48.540
| null | null |
2475
|
[
"machine-learning"
] |
2298
|
1
|
2363
| null |
2
|
355
|
I've built a toy Random Forest model in `R` (using the `German Credit` dataset from the `caret` package), exported it in `PMML 4.0` and deployed onto Hadoop, using the `Cascading Pattern` library.
I've run into an issue where `Cascading Pattern` scores the same data differently (in a binary classification problem) than the same model in `R`. Out of 200 observations, 2 are scored differently.
Why is this? Could it be due to a difference in the implementation of Random Forests?
|
Differences in scoring from PMML model on different platforms
|
CC BY-SA 3.0
| null |
2014-10-17T13:58:39.353
|
2014-10-29T14:37:00.910
| null | null |
1127
|
[
"machine-learning",
"r",
"apache-hadoop",
"random-forest",
"predictive-modeling"
] |
2302
|
1
|
2307
| null |
3
|
5042
|
I want to analyze the effectiveness and efficiency of kernel methods for which I would require 3 different data-set in 2 dimensional space for each of the following cases:
- BAD_kmeans: a data set on which the k-means clustering algorithm will not perform well.
- BAD_pca: a data set on which Principal Component Analysis (PCA) dimension reduction, projecting the original points onto 1-dimensional space (i.e., the first eigenvector), will not perform well.
- BAD_svm: a data set on which the linear Support Vector Machine (SVM) supervised classification method using two classes of points (positive and negative) will not perform well.
Which packages can I use in R to generate a random 2D data set for each of the above cases? A sample script in R would help in understanding.
|
R Script to generate random dataset in 2d space
|
CC BY-SA 3.0
| null |
2014-10-18T04:58:45.100
|
2014-10-19T03:35:17.557
| null | null |
3577
|
[
"machine-learning",
"classification",
"r",
"clustering"
] |
2303
|
1
| null | null |
4
|
2663
|
I would like to learn both Python and R for usage in data science projects.
I am currently unemployed, fresh out of university, scouting around for jobs and thought it would be good if I get some Kaggle projects under my profile.
However, I have very little knowledge of either language. I have used Matlab and C/C++ in the past, but I haven't produced production-quality code or developed an application or software in either language. It has been quick-and-dirty coding for academic use all along.
I have used a little Python in a university project, but I don't know the fundamentals, e.g. what a package is; i.e. I haven't studied the intricacies of the language using a standard Python textbook.
I did some C/C++ coding 3-4 years back, then switched over to Matlab/Octave.
I would like to get started with Python, NumPy, SciPy, scikit-learn, pandas etc., but just reading Wikipedia articles or Python textbooks is going to be infeasible for me.
And same goes with R, except that I have zero knowledge of R.
Does anyone have any suggestions?
|
Python and R good tutorials?
|
CC BY-SA 3.0
| null |
2014-10-18T08:08:25.513
|
2020-08-02T12:49:46.867
| null | null |
3223
|
[
"machine-learning",
"r",
"python"
] |
2304
|
1
| null | null |
1
|
1090
|
Is there a good Java library for doing time series energy consumption forecasting based on weather data and other variables?
|
Energy consumption time series forecasting
|
CC BY-SA 3.0
| null |
2014-10-18T08:19:00.607
|
2021-01-16T00:13:14.563
|
2021-01-16T00:13:14.563
|
29169
|
4692
|
[
"time-series",
"java"
] |
2305
|
2
| null |
2303
|
0
| null |
I can only recommend [Advanced R](http://adv-r.had.co.nz/) by Hadley Wickham. I think it is at the same time incredibly rich in content and easy to read. You say you have zero knowledge in R, but I believe since you already have programming skills in other languages this book can complement very fruitfully any classical "R beginner manual" (for the latter see [here](http://cran.r-project.org/other-docs.html#english)).
| null |
CC BY-SA 3.0
| null |
2014-10-18T17:32:52.583
|
2014-10-18T17:32:52.583
| null | null |
3317
| null |
2307
|
2
| null |
2302
|
2
| null |
None of the algorithms you mention are good with data that has uniform distribution.
```
size <- 20 #length of random number vectors
set.seed(1)
x <- runif(size) # generate samples from uniform distribution (0.0, 1.0)
y <-runif(size)
df <-data.frame(x,y)
# other distributions: rpois, rmvnorm, rnbinom, rbinom, rbeta, rchisq, rexp, rgamma, rlogis, rstab, rt, rgeom, rhyper, rwilcox, rweibull.
```
See [this page](http://statistics.ats.ucla.edu/stat/r/modules/prob_dist.htm) for tutorial on generating random samples from distributions.
---
For specific set of randomized data sets that are 'hard' for these methods (e.r. linearly inseparable n-classes XOR patterns), see this blog post (incl. R code): [http://tjo-en.hatenablog.com/entry/2014/01/06/234155](http://tjo-en.hatenablog.com/entry/2014/01/06/234155).
| null |
CC BY-SA 3.0
| null |
2014-10-19T03:24:22.877
|
2014-10-19T03:35:17.557
|
2014-10-19T03:35:17.557
|
609
|
609
| null |
2308
|
1
|
2332
| null |
1
|
7879
|
I am looking for a thesis topic to complete my M2 master's degree. I will work on a topic in the big data field (creating big data applications), using Hadoop/MapReduce and its ecosystem (visualisation, analysis, ...). Please suggest some topics or projects that would make a good master's thesis subject.
I should add that I have a background in data warehouses, databases, data mining, good skills in programming, system administration and cryptography ...
Thanks
|
Masters thesis topics in big data
|
CC BY-SA 3.0
| null |
2014-10-19T11:02:44.397
|
2015-12-23T11:02:49.097
|
2015-12-23T11:02:49.097
|
3151
|
4705
|
[
"bigdata",
"apache-hadoop",
"research"
] |
2311
|
2
| null |
1253
|
7
| null |
First we need to understand why we need deep learning. To build models, ML needs test data with labels (supervised or unsupervised). In many domains, as the data grows, maintaining the data with labels is hard. Deep learning networks don't need labeled data; the deep learning algorithms can figure out the labels themselves. This obviates the need for domain experts to come up with labels for the data, which is very important in the areas of speech recognition, computer vision, and language understanding. The Google cat image recognition project is a very interesting experiment. It is also interesting to read about ["Geoffrey Hinton"](http://www.wired.com/2014/01/geoffrey-hinton-deep-learning/), the professor who was hired by Google.
You may get more insight as you explore in this framework.
| null |
CC BY-SA 4.0
| null |
2014-10-19T19:12:32.940
|
2020-08-05T08:41:32.857
|
2020-08-05T08:41:32.857
|
98307
|
4711
| null |
2313
|
1
| null | null |
6
|
1561
|
What is the difference between one-class, binary-class and multinomial-class classification?
Suppose I want to classify text into, say, four classes, and I also want the system to be able to tell me that none of these classes matches the unknown/untrained test data.
Couldn't I just use all the methods that I mentioned above to reach my goal?
E.g. I could describe C1, C2, C3 and C4 as four different training sets for binary classification and use the trained models to label an unknown data set ...
Just by saying: the training set for C1 contains class 1 (all good samples for C1) and class 0 (a mix of C2, C3 and C4 as bad samples for C1).
Is unlabeled data C1 -> 1 or 0
Is unlabeled data C2 -> 1 or 0
... and so on ...
For multinomial classification I could just define one training set containing all the good sample data for C1, C2, C3 and C4, and then use the one resulting model for classification ...
But what is the difference between these two methods (except that I have to use different algorithms)?
And how would I define a training set for the described problem of categorizing data into those four classes using one-class classification (is that even possible)?
Excuse me if I'm completely wrong in my thinking. I would appreciate an answer that makes the methodology a little bit clearer to me =)
|
Machine Learning - What is the difference between one-class, binary-class and multinomial-class classification?
|
CC BY-SA 3.0
| null |
2014-10-20T06:38:16.490
|
2015-03-03T14:21:51.033
|
2014-10-20T06:46:20.017
|
4717
|
4717
|
[
"machine-learning",
"data-mining",
"classification",
"categorical-data"
] |
2314
|
1
| null | null |
15
|
6769
|
A Random Forest (RF) is created by an ensemble of Decision Trees (DTs). By using bagging, each DT is trained on a different data subset. Hence, is there any way of implementing an online random forest by adding more decision trees on new data?
For example, we have 10K samples and train 10 DTs. Then we get 1K samples, and instead of training the full RF again, we add a new DT. The prediction is now done by the Bayesian average of 10+1 DTs.
In addition, if we keep all the previous data, the new DTs can be trained mainly on the new data, where the probability of picking a sample is weighted by how many times it has already been picked.
|
On-line random forests by adding more single Decisions Trees
|
CC BY-SA 3.0
| null |
2014-10-20T08:48:42.167
|
2019-03-17T04:10:21.730
| null | null |
4719
|
[
"random-forest",
"online-learning"
] |
2315
|
2
| null |
1028
|
30
| null |
Every ML algorithm with high complexity can overfit. However, the OP is asking whether an RF will not overfit when increasing the number of trees in the forest.
In general, ensemble methods reduce the prediction variance to almost nothing, improving the accuracy of the ensemble. If we define the variance of the expected generalization error of an individual randomized model as:

From [here](http://arxiv.org/abs/1407.7502), the variance of the expected generalization error of an ensemble corresponds to:

where `ρ(x)` is Pearson's correlation coefficient between the predictions of two randomized models trained on the same data from two independent seeds. If we increase the number of DTs in the RF (larger `M`), the variance of the ensemble decreases as long as `ρ(x) < 1`. Therefore, the variance of an ensemble is strictly smaller than the variance of an individual model.
In a nutshell, increasing the number of individual randomized models in an ensemble will never increase the generalization error.
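A quick numeric illustration of the ensemble-variance formula `ρσ² + (1-ρ)σ²/M` (the values ρ = 0.3 and σ² = 1.0 below are made up): the variance decreases monotonically in `M` toward the floor `ρσ²`, and never rises:

```python
# Illustrative assumptions: rho = 0.3, sigma^2 = 1.0.

def ensemble_variance(rho, sigma2, m):
    """Variance of an ensemble of m equally correlated randomized models."""
    return rho * sigma2 + (1.0 - rho) * sigma2 / m

v_10 = ensemble_variance(0.3, 1.0, 10)      # 0.37
v_1000 = ensemble_variance(0.3, 1.0, 1000)  # 0.3007
# the variance approaches, but never drops below, the floor rho * sigma2 = 0.3
```

This is the quantitative sense in which adding trees can only shrink the variance term, not increase the generalization error.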
| null |
CC BY-SA 3.0
| null |
2014-10-20T09:31:18.650
|
2015-11-17T16:19:49.323
|
2015-11-17T16:19:49.323
|
13727
|
4719
| null |
2316
|
1
| null | null |
1
|
48
|
What is the difference between one-class, binary-class and multinomial-class classification?
Suppose I want to classify text into, say, four classes, and I also want the system to be able to tell me that none of these classes matches the unknown/untrained test data.
Couldn't I just use all the methods that I mentioned above to reach my goal?
E.g. I could describe C1, C2, C3 and C4 as four different training sets for binary classification and use the trained models to label an unknown data set ...
Just by saying: the training set for C1 contains class 1 (all good samples for C1) and class 0 (a mix of C2, C3 and C4 as bad samples for C1).
Is unlabeled data C1 -> 1 or 0
Is unlabeled data C2 -> 1 or 0
... and so on ...
For multinomial classification I could just define one training set containing all the good sample data for C1, C2, C3 and C4, and then use the one resulting model for classification ...
But what is the difference between these two methods (except that I have to use different algorithms)?
And how would I define a training set for the described problem of categorizing data into those four classes using one-class classification (is that even possible)?
Excuse me if I'm completely wrong in my thinking. I would appreciate an answer that makes the methodology a little bit clearer to me =)
|
Machine Learning - What is the difference between one-class, binary-class and multinomial-class classification?
|
CC BY-SA 3.0
| null |
2014-10-19T21:03:35.390
|
2014-10-20T10:38:59.437
| null | null |
4717
|
[
"machine-learning",
"classification",
"data-mining"
] |
2317
|
2
| null |
2303
|
0
| null |
The Art of R Programming by Norman Matloff is a great way to find your way towards being an R user. I've recommended this book to several people navigating the tutorial/book universe and to my knowledge they've all stuck with it.
| null |
CC BY-SA 3.0
| null |
2014-10-20T15:32:58.097
|
2014-10-20T15:32:58.097
| null | null |
4724
| null |
2318
|
2
| null |
2303
|
2
| null |
There is an online data science "game" that takes you from learning how to use Python for loading a CSV and using scikit-learn, to machine learning algorithms such as support vector machines. Here is a [blog post](http://mentally-physically-spiritually-strong.com/2014/10/18/explore-data-science-game/) with a demo video, and the actual site is [Explore Data Science](https://exploredatascience.com). Personally, I think it's genius.
| null |
CC BY-SA 3.0
| null |
2014-10-20T16:12:45.427
|
2014-10-20T16:12:45.427
| null | null |
3430
| null |
2320
|
2
| null |
19
|
13
| null |
To me Big Data is primarily about the tools (after all, that's where it started); a "big" dataset is one that's too big to be handled with conventional tools - in particular, big enough to demand storage and processing on a cluster rather than a single machine. This rules out a conventional RDBMS, and demands new techniques for processing; in particular, various Hadoop-like frameworks make it easy to distribute a computation over a cluster, at the cost of restricting the form of this computation. I'll second the reference to [http://www.chrisstucchio.com/blog/2013/hadoop_hatred.html](http://www.chrisstucchio.com/blog/2013/hadoop_hatred.html) ; Big Data techniques are a last resort for datasets which are simply too big to handle any other way. I'd say any dataset for any purpose could qualify if it was big enough - though if the shape of the problem is such that existing "big data" tools aren't appropriate, then it would probably be better to come up with a new name.
Of course there is some overlap; when I (briefly) worked at last.fm, we worked on the same 50TB dataset using Hadoop and also in an SQL database on a fairly ridiculous server (I remember it had 1TB RAM, and this is a few years ago). Which in a sense meant it both was and wasn't big data, depending on which job you were working on. But I think that's an accurate characterization; the people who worked on the Hadoop jobs found it useful to go to Big Data conferences and websites, while the people who worked on the SQL jobs didn't.
| null |
CC BY-SA 3.0
| null |
2014-10-20T19:40:34.193
|
2014-10-20T19:40:34.193
| null | null |
4730
| null |
2321
|
1
| null | null |
6
|
6294
|
I am drawing samples from two classes in the two-dimensional Cartesian space, each of which has the same covariance matrix $[2, 0; 0, 2]$. One class has a mean of $[1.5, 1]$ and the other has a mean of $[1, 1.5]$. If the priors are $4/7$ for the former and $3/7$ for the latter, how would I derive the equation for the ideal decision boundary?
If it turns out that misclassifying the second class is twice as expensive as the first class, and the objective is to minimize the expected cost, what equation would I use for the best decision boundary?
|
Bayes Optimal Decision Boundaries for Gaussian Data with Equal Covariance
|
CC BY-SA 3.0
| null |
2014-10-20T22:13:32.420
|
2015-08-12T20:30:28.753
|
2015-08-12T20:30:28.753
| null |
4736
|
[
"classification",
"discriminant-analysis"
] |
2322
|
1
| null | null |
4
|
255
|
Are there any good sources that explain how decision trees can be implemented in a scalable way on a distributed computing system. Where in a given source is this explained?
|
Distributed Scalable Decision Trees
|
CC BY-SA 3.0
| null |
2014-10-20T22:22:09.660
|
2017-05-17T06:16:47.753
|
2017-05-17T06:16:47.753
|
31513
|
4736
|
[
"classification",
"bigdata",
"decision-trees"
] |
2323
|
1
|
39726
| null |
11
|
6074
|
Can anyone explain how field-aware factorization machines (FFM) compare to standard Factorization Machines (FM)?
Standard:
[http://www.ismll.uni-hildesheim.de/pub/pdfs/Rendle2010FM.pdf](http://www.ismll.uni-hildesheim.de/pub/pdfs/Rendle2010FM.pdf)
"Field Aware":
[http://www.csie.ntu.edu.tw/~r01922136/kaggle-2014-criteo.pdf](http://www.csie.ntu.edu.tw/~r01922136/kaggle-2014-criteo.pdf)
|
Field Aware Factorization Machines
|
CC BY-SA 3.0
| null |
2014-10-21T00:09:40.597
|
2018-10-17T20:53:25.863
|
2016-10-01T16:31:45.827
|
20995
|
1138
|
[
"machine-learning",
"recommender-system"
] |
2324
|
2
| null |
2323
|
1
| null |
Standard factorization machines have fields too. The "novelty" here seems to be the use of GBDT features and the application of the hashing tricks. Not to great effect, it seems: check out the minute range in performance on the last slide.
| null |
CC BY-SA 3.0
| null |
2014-10-21T01:34:13.407
|
2014-10-21T01:34:13.407
| null | null |
381
| null |
2325
|
2
| null |
2314
|
9
| null |
There's [a recent paper on this subject](http://www.vision.cs.chubu.ac.jp/CV-R/pdf/AmirICCVW2009.pdf) (On-line Random Forests), coming from computer vision. [Here's an implementation](http://www.ymer.org/amir/software/online-random-forests/), and a presentation: [Online random forests in 10 minutes](http://www.slideshare.net/CvilleDataScience/online-random-forest-in-10-minutes)
| null |
CC BY-SA 3.0
| null |
2014-10-21T02:43:26.790
|
2014-10-21T02:43:26.790
| null | null |
381
| null |
2326
|
2
| null |
2322
|
3
| null |
[Apache Spark](http://spark.apache.org/) [can do it](http://spark-summit.org/2014/talk/scalable-distributed-decision-trees-in-spark-mllib), using the new [MLLib](http://spark.apache.org/docs/1.1.0/mllib-decision-tree.html) library. [Here's a presentation](https://www.youtube.com/watch?v=3rRrcPXHu98), and [here are some benchmarks](http://databricks.com/blog/2014/09/29/scalable-decision-trees-in-mllib.html). Bindings are available for python, scala, and java.
| null |
CC BY-SA 3.0
| null |
2014-10-21T06:12:51.887
|
2014-10-21T06:12:51.887
| null | null |
381
| null |
2327
|
2
| null |
2303
|
1
| null |
I have found the video tutorial/IPython notebook format really helped me get into the python ecosystem.
There were two tutorials at SciPy 2013 that cover sklearn ([part 1 of 1st tutorial](https://www.youtube.com/watch?v=r4bRUvvlaBw), [github repo for notebooks](https://github.com/jakevdp/sklearn_scipy2013)).
Similar tutorials, from PyCon 2012 and PyData 2012, are out there for pandas, but I don't have the rep to link them; searching for `pandas tutorial` on YouTube should allow you to find them.
Since you mention Kaggle, I guess you will have seen their getting started with python tutorial for the titanic passenger dataset (I don't have the rep here to provide a link but searching for `Getting Started with Python: Kaggle's Titanic Competition` should get you there).
| null |
CC BY-SA 3.0
| null |
2014-10-21T07:06:21.580
|
2014-10-21T07:06:21.580
| null | null |
4739
| null |
2328
|
1
|
2329
| null |
2
|
414
|
There are many data points, each associated with two coordinates and a numeric value, i.e. three coordinates in total, and I would like the plot to be coloured.
I checked the packages "scatterplot3d" and "plot3D", but I couldn't find one like the example I give, which looks like it has a fitted surface.
My data is basically like the following, so I think this kind of plot would be perfectly suitable for it:
```
ki,kt,Top10AverageF1Score
360,41,0.09371256716549396
324,41,0.09539634212851525
360,123,0.09473510831594467
36,164,0.09773486852645874
...
```
But I also may have one more additional variable, which makes it like:
```
NeighborhoodSize,ki,kt,Top10AverageF1Score
10,360,41,0.09371256716549396
15,324,41,0.09539634212851525
15,360,123,0.09473510831594467
20,36,164,0.09773486852645874
...
```
Do you have any good ideas for visualizing the second case as well? What kind of plot, and which packages and functions, etc.?

|
Which packages or functions can I use in R to plot 3D data like this?
|
CC BY-SA 3.0
| null |
2014-10-21T09:42:51.610
|
2014-10-21T12:50:19.847
|
2014-10-21T12:50:19.847
|
3443
|
3443
|
[
"r",
"visualization"
] |
2329
|
2
| null |
2328
|
2
| null |
You could use the `wireframe` function from the `lattice` package:
```
library("lattice")
wireframe(volcano[1:30, 1:30], shade=TRUE, zlab="")
```

| null |
CC BY-SA 3.0
| null |
2014-10-21T10:22:21.390
|
2014-10-21T10:22:21.390
| null | null |
2961
| null |
2332
|
2
| null |
2308
|
3
| null |
Since it's a master's thesis, how about writing something regarding decision trees, and their "upgrades": boosting and Random Forests? And then integrate that with Map/Reduce, together with showing how to scale a Random Forest on Hadoop using M/R?
| null |
CC BY-SA 3.0
| null |
2014-10-22T08:30:00.990
|
2014-10-22T08:30:00.990
| null | null |
1127
| null |
2334
|
1
|
2335
| null |
4
|
811
|
I want to analyze the [MovieLens data set](http://grouplens.org/datasets/movielens/) and have loaded the M1 file on my machine. I actually combine two data files (ratings.dat and movies.dat) and sort the table by the `'userId'` and `'Time'` columns. The head of my DataFrame looks like this (all column values correspond to the original data sets):
```
In [36]: df.head(10)
Out[36]:
userId movieId Rating Time movieName \
40034 1 150 5 978301777 Apollo 13 (1995)
77615 1 1028 5 978301777 Mary Poppins (1964)
550485 1 2018 4 978301777 Bambi (1942)
400889 1 1962 4 978301753 Driving Miss Daisy (1989)
787274 1 1035 5 978301753 Sound of Music, The (1965)
128308 1 938 4 978301752 Gigi (1958)
497972 1 3105 5 978301713 Awakenings (1990)
28417 1 2028 5 978301619 Saving Private Ryan (1998)
6551 1 1961 5 978301590 Rain Man (1988)
35492 1 2692 4 978301570 Run Lola Run (Lola rennt) (1998)
genre
40034 Drama
77615 Children's|Comedy|Musical
550485 Animation|Children's
400889 Drama
787274 Musical
128308 Musical
497972 Drama
28417 Action|Drama|War
6551 Drama
35492 Action|Crime|Romance
[10 rows x 6 columns]
```
I cannot understand how the same user (userId 1) saw or rated different movies (Apollo 13 (Id: 150), Mary Poppins (Id: 1028) and Bambi (Id: 2018)) at exactly the same time (identical timestamps). If somebody has already worked with this data set, please clarify this situation.
|
MovieLens data set
|
CC BY-SA 3.0
| null |
2014-10-22T14:53:42.127
|
2014-10-22T15:50:43.230
|
2014-10-22T15:43:44.803
|
3281
|
3281
|
[
"dataset",
"pandas"
] |
2335
|
2
| null |
2334
|
2
| null |
When you enter ratings on movie lens, you get pages with 10 movies or so. You set all the ratings, then submit by clicking "next page" or something.
So I guess all the ratings for the same page are received at the same time, when you submit the page.
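That batching is visible directly in the question's sample rows. A quick standard-library sketch (the `(userId, timestamp)` pairs below are copied from the DataFrame head shown in the question):

```python
from collections import Counter

# (userId, timestamp) pairs copied from the DataFrame head in the question
stamps = [(1, 978301777), (1, 978301777), (1, 978301777),
          (1, 978301753), (1, 978301753), (1, 978301752),
          (1, 978301713), (1, 978301619), (1, 978301590), (1, 978301570)]

# count how many ratings share one submission time per user
batch_sizes = Counter(stamps)

# three ratings (Apollo 13, Mary Poppins, Bambi) share one timestamp
print(batch_sizes[(1, 978301777)])  # → 3
```

Running the same count over the full data set should show ratings arriving in page-sized batches rather than one at a time.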
| null |
CC BY-SA 3.0
| null |
2014-10-22T15:50:43.230
|
2014-10-22T15:50:43.230
| null | null |
4760
| null |
2337
|
1
|
2339
| null |
6
|
232
|
I am not sure whether I formulated the question correctly. Basically, what I want to do is:
Let's suppose I have a list of 1000 strings which look like this:
cvzxcvzxstringcvzcxvz
otortorotrstringgrptprt
vmvmvmeopstring2vmrprp
vccermpqpstring2rowerm
proorororstring3potrprt
mprto2435string3famerpaer
etc.
I'd like to extract the recurring substrings that occur in the list. What solution should I use? Does anyone know of an algorithm that could do this?
|
Clustering strings inside strings?
|
CC BY-SA 3.0
| null |
2014-10-23T14:51:57.160
|
2014-10-24T18:35:41.350
| null | null |
4774
|
[
"nlp",
"text-mining",
"feature-extraction"
] |
2338
|
2
| null |
326
|
7
| null |
In my experience, the answer depends on the project at hand. For pure research, I prefer R for two reasons: 1) broad variety of libraries and 2) much of the data science literature includes R samples.
If the project requires an interactive interface to be used by laypersons, I've found R to be too constrained. Shiny is a great start, but it's not flexible enough yet. In these cases, I'll start to look at porting my R work over to Python or js.
| null |
CC BY-SA 3.0
| null |
2014-10-23T15:50:09.820
|
2014-10-23T15:50:09.820
| null | null |
4776
| null |
2339
|
2
| null |
2337
|
2
| null |
Interesting question! I have not encountered it before so here is a solution I just made up, inspired by the approach taken by the word2vec paper:
- Define the pair-wise similarity based on the longest common substring (LCS), or the LCS normalized by the products of the string lengths. Cache this in a matrix for any pair of strings considered since it is expensive to calculate. Also consider approximations.
- Find a Euclidean (hyperspherical, perhaps?) embedding that minimizes the error (Euclidean distance if using the ball, and the dot product if using the sphere). Assume random initialization, and use a gradient-based optimization method by taking the Jacobian of the error.
- Now you have a Hilbert space embedding, so cluster using your algorithm of choice!
Response to deleted comment asking how to cluster multiple substrings: The bulk of the complexity lies in the first stage, the calculation of the LCS, so it depends on how efficiently you do that. I've had luck with genetic algorithms. Anyway, what you'd do in this case is define a similarity vector rather than a scalar, whose elements are the k-longest pair-wise LCS; see [this](https://cstheory.stackexchange.com/questions/8361/algorithm-find-the-first-k-longest-substrings-between-two-similar-strings) discussion for algorithms. Then I would define the error by the sum of the errors corresponding to each substring.
Something I did not address is how to choose the dimensionality of the embedding. The word2vec paper might provide some heuristics; see [this](https://groups.google.com/forum/#!topic/word2vec-toolkit/HRvNPIqe6mM) discussion. I recall they used pretty big spaces, on the order of a 1000 dimensions, but they were optimizing something more complicated, so I suggest you start at R^2 and work your way up. Of course, you will want to use a higher dimensionality for the multiple LCS case.
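To make the first stage concrete, here is a minimal Python sketch (not from the answer above; the normalization, the 0.05 threshold, and the greedy single-link grouping are all illustrative choices) that computes the pair-wise LCS with the standard library and groups the question's example strings:

```python
from difflib import SequenceMatcher

def lcs(a, b):
    """Longest common substring of a and b."""
    m = SequenceMatcher(None, a, b).find_longest_match(0, len(a), 0, len(b))
    return a[m.a:m.a + m.size]

def similarity(a, b):
    """LCS length (squared) normalized by the product of the string lengths."""
    return len(lcs(a, b)) ** 2 / (len(a) * len(b))

strings = [
    "cvzxcvzxstringcvzcxvz",
    "otortorotrstringgrptprt",
    "vmvmvmeopstring2vmrprp",
    "vccermpqpstring2rowerm",
]

# Greedy single-link grouping: a cheap stand-in for the embedding +
# clustering stages described above.
clusters = []
for s in strings:
    for c in clusters:
        if similarity(s, c[0]) >= 0.05:  # illustrative threshold
            c.append(s)
            break
    else:
        clusters.append([s])
```

Here all four strings share the substring "string", so they end up in one cluster; with real data you would tune the threshold (or swap in the embedding-based approach) to separate the "string"/"string2"/"string3" groups.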
| null |
CC BY-SA 3.0
| null |
2014-10-23T17:10:40.890
|
2014-10-24T18:35:41.350
|
2017-04-13T12:32:33.517
|
-1
|
381
| null |
2340
|
2
| null |
2304
|
1
| null |
If you just want to get a forecast, use automatic forecasting software like Autobox, ForecastPro, SCA, etc.
- Comparison of different software: https://stats.stackexchange.com/questions/68253/expert-forecasting-software-evaluation
- Worked example: http://people.duke.edu/~rnau/autocomp.htm
If you are doing research on forecasting techniques, there are some Java packages, for example:
- JMotif http://code.google.com/p/jmotif/
- JTSA http://sourceforge.net/projects/jtsa/
However, Java is not very widely used for this; if you must use Java you will likely have a feeling of being out in the cold. R and Python would have much more packages, examples, etc. A tiny sample of what is available in R:
- http://www.amazon.com/Introductory-Time-Series-Paul-Cowpertwait/dp/0387886974/
- http://a-little-book-of-r-for-time-series.readthedocs.org/en/latest/
- http://www.r-bloggers.com/time-series-analysis-and-mining-with-r/
| null |
CC BY-SA 3.0
| null |
2014-10-23T20:11:18.750
|
2014-10-23T20:11:18.750
|
2017-04-13T12:44:20.183
|
-1
|
26
| null |
2341
|
2
| null |
1155
|
1
| null |
I would think about starting with a power analysis: i.e., how many data points do you need to measure the effect (or parameter) you are interested in, to a specified level of confidence, ceteris paribus? Then you estimate a cost.
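A back-of-the-envelope version of this idea (a two-sided two-sample z-test approximation rather than a full t-test power calculation; the effect size and power target below are illustrative):

```python
import math

def norm_cdf(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def power_two_sample(n, effect, sigma=1.0, z_crit=1.96):
    """Approximate power of a two-sided two-sample z-test with n per group,
    for a true mean difference `effect` and known common sigma."""
    se = sigma * math.sqrt(2.0 / n)
    delta = effect / se
    return norm_cdf(delta - z_crit) + norm_cdf(-delta - z_crit)

def smallest_n(effect, target_power=0.8):
    """Smallest per-group sample size reaching the target power."""
    for n in range(2, 10000):
        if power_two_sample(n, effect) >= target_power:
            return n

# e.g. to detect a difference of 0.5 sigma with 80% power:
print(smallest_n(0.5))  # → 63 per group
```

Multiplying the resulting n by a per-observation cost then gives the budget estimate.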
| null |
CC BY-SA 3.0
| null |
2014-10-23T21:12:27.350
|
2014-10-23T21:12:27.350
| null | null |
4779
| null |
2342
|
2
| null |
2322
|
2
| null |
You should read the paper from Google on PLANET, which was their distributed MapReduce-based implementation of random decision forests: [http://static.googleusercontent.com/media/research.google.com/en//pubs/archive/36296.pdf](http://static.googleusercontent.com/media/research.google.com/en//pubs/archive/36296.pdf)
You may or may not like the architecture but there are a number of interesting ideas about scaling up here.
| null |
CC BY-SA 3.0
| null |
2014-10-23T21:59:32.713
|
2014-10-23T21:59:32.713
| null | null |
21
| null |
2343
|
1
| null | null |
2
|
5956
|
I need some serious help. I am supposed to implement a project (non-existent as of now) for my Machine Learning course. I have no background in AI, data mining, or machine learning. I have been searching for a while and am unable to find something that I can finish implementing in 3-4 weeks' time. It carries a huge chunk of my final marks, and no matter how much I try I am unable to understand how it works!
Can the machine learning masters please help me out with this? I need a project suggestion to start with, and I want to know how to proceed after gathering the data set. I am totally blank and running out of time for my graduation :(
Appreciate your suggestions! Thanks in advance.
|
Small project ideas for Machine Learning
|
CC BY-SA 3.0
| null |
2014-10-24T23:13:07.803
|
2014-12-19T07:02:47.587
| null | null |
4794
|
[
"machine-learning",
"bigdata",
"data-mining",
"clustering",
"algorithms"
] |
2344
|
1
| null | null |
2
|
870
|
I have a question about memory usage.
I want to do 4 things:
```
1) make a dataframe from one of several columns from a datasource, say a json string
2) make the third column of the original dataset the index to the dataframe
3) change the name of another column
4) change the series i've created to a dataframe
```
My question is about memory efficiency. It seems that for step 1) I first load a whole dataframe and then run a concat command to concatenate the columns I want.
For step 2, I again need to resave the new dataframe as another object.
For step 3, it seems to stick so nothing there.
Please advise on a more efficient way to go about this, if that exists.
Command:
```
df = pd.DataFrame(jsonobject)
df = df.set_index("columnC")
df.index.names= ["foo"]
df1 = df["foo"].map(lambda x:x["id"])
df2 = pd.DataFrame(df1)
```
|
pandas dataframes memory
|
CC BY-SA 3.0
| null |
2014-10-25T01:36:27.483
|
2014-10-25T01:36:27.483
| null | null |
1223
|
[
"pandas"
] |
2345
|
2
| null |
2343
|
3
| null |
Gather a million Twitter user accounts. Then try to guess their gender based on their avatar, tweets and who they follow.
| null |
CC BY-SA 3.0
| null |
2014-10-25T08:40:14.613
|
2014-10-25T08:40:14.613
| null | null |
4774
| null |
2346
|
1
| null | null |
6
|
224
|
I work in an analytical role at a large financial services firm. We do a ton of daily reporting on metrics that rarely change in a meaningful way from day to day. From this daily reporting, our management is required to extract what was important yesterday and what important trends have developed / are developing over time.
I want to change this to a model of daily exception reporting and weekly trend reporting.
Features might include:
- User report consolidation (so there's only one daily email)
- report ordering based upon level of variance from past performance (see the most important stuff first)
- HTML email support (with my audience, pretty counts)
- Web interface to allow preference changes, including LDAP support (make administration easier)
- Unsubscribe feature at the report level
Here's what I'd like to know:
- What are the practical problems I might run into?
- What is the best way to display the new reports?
- How should I define an "exception"? How can I know if my definition is a good one?
- I assume I'd be using a mix of Python, SQL, and powershell. Anything else I should consider, e.g. R? What are some good resources?
|
Business exception reporting
|
CC BY-SA 3.0
| null |
2014-10-25T22:37:18.523
|
2017-12-14T21:41:01.897
|
2017-12-14T21:41:01.897
|
29575
|
4800
|
[
"recommender-system",
"visualization"
] |
2347
|
2
| null |
2346
|
1
| null |
Here's some practical advice from my own experiences-
- The first thing to do is to convince management that the change will be for the better. A mock-up of a sample report can be very useful here.
- Even if management agrees, they'll still want to know why variances have occurred, and hence you'll need to be able to supply more data.
- Although exception reporting is best, management will want to see everything anyway, as doing so makes them feel as though they are doing something useful.
- Don't change everything at once- too big a change at once can cause resistance.
- For how to best present the data, read Edward Tufte's books, at the very least his first one, "The Visual Display of Quantitative Data".
- Defining what's an exception can be hard, because the recipients will each have their own ideas. Using say, a 95% confidence interval is good, but it won't be universally liked. Some people will consider any change above $X significant, and others will want to see everything that's more than X% different from the prior period. Have fun with this part :(
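For the statistical flavour of "exception", a minimal sketch (the metric names, the trailing-history window, and the z = 2 cutoff are all illustrative choices, not a recommendation):

```python
import statistics

def find_exceptions(history, today, z=2.0):
    """Flag metrics whose latest value sits more than z standard
    deviations from the mean of their trailing history, ordered by
    size of the deviation (most important stuff first)."""
    scores = {}
    for name, values in history.items():
        mean = statistics.mean(values)
        sd = statistics.pstdev(values)
        if sd == 0:
            continue  # no variation on record, nothing to score against
        score = (today[name] - mean) / sd
        if abs(score) > z:
            scores[name] = score
    return dict(sorted(scores.items(), key=lambda kv: -abs(kv[1])))

history = {"daily_trades": [100, 102, 98, 101, 99],
           "settlement_fails": [2, 3, 2, 2, 3]}
today = {"daily_trades": 100, "settlement_fails": 9}
print(find_exceptions(history, today))  # flags only settlement_fails
```

In practice you would also let each recipient override the cutoff per metric (absolute $X or X% thresholds), per the last bullet above.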
| null |
CC BY-SA 3.0
| null |
2014-10-25T23:07:15.843
|
2014-10-30T13:40:45.283
|
2014-10-30T13:40:45.283
|
1241
|
1241
| null |
2348
|
1
| null | null |
2
|
2717
|
I have train and test data; how do I calculate classification accuracy with a confusion matrix? Thanks.
```
@attribute outlook {sunny, overcast, rainy}
@attribute temperature {hot, mild, cool}
@attribute humidity {high, normal}
@attribute windy {TRUE, FALSE}
@attribute play {yes, no}
```
Train:
```
1 sunny hot high FALSE no
2 sunny hot high TRUE no
3 overcast hot high FALSE yes
4 rainy mild high FALSE yes
5 rainy cool normal FALSE yes
6 rainy cool normal TRUE no
7 sunny cool normal FALSE yes
8 rainy mild normal FALSE yes
9 sunny mild normal TRUE yes
10 overcast mild high TRUE yes
11 overcast hot normal FALSE yes
12 rainy mild high TRUE no
```
Test:
```
overcast cool normal TRUE yes
sunny mild high FALSE no
```
Rules found:
```
(humidity,normal), (windy,FALSE) -> (play,yes) [Support=0.33 , Confidence=1.00 , Correctly Classify= 4, 8, 9, 12]
(outlook,overcast) -> (play,yes) [Support=0.25 , Confidence=1.00 , Correctly Classify= 2, 11]
(outlook,rainy), (windy,FALSE) -> (play,yes) [Support=0.25 , Confidence=1.00 , Correctly Classify= 3]
(outlook,sunny), (temperature,hot) -> (play,no) [Support=0.17 , Confidence=1.00 , Correctly Classify= 0, 1]
(outlook,sunny), (humidity,normal) -> (play,yes) [Support=0.17 , Confidence=1.00 , Correctly Classify= 10]
(outlook,rainy), (windy,TRUE) -> (play,no) [Support=0.17 , Confidence=1.00 , Correctly Classify= 5, 13]
```
|
How to calculate classification accuracy with confusion matrix?
|
CC BY-SA 3.0
| null |
2014-10-25T23:46:14.493
|
2014-10-27T17:39:18.997
|
2014-10-27T11:13:51.847
|
3503
|
3503
|
[
"classification",
"accuracy"
] |
2349
|
1
|
2350
| null |
5
|
199
|
For my Computational Intelligence class, I'm working on classifying short text. One of the papers that I've found makes a lot of use of granular computing, but I'm struggling to find a decent explanation of what exactly it is.
From what I can gather from the paper, it sounds to me like granular computing is very similar to fuzzy sets. So what exactly is the difference? I'm asking about rough sets as well, because I'm curious about them and how they relate to fuzzy sets, if at all.
Edit: [Here](http://ijcai.org/papers11/Papers/IJCAI11-298.pdf) is the paper I'm referencing.
|
Rough vs Fuzzy vs Granular Computing
|
CC BY-SA 3.0
| null |
2014-10-26T13:12:23.597
|
2014-12-13T21:30:55.703
|
2014-12-13T21:30:55.703
|
84
|
4804
|
[
"machine-learning",
"data-mining",
"classification"
] |
2350
|
2
| null |
2349
|
4
| null |
"Granularity" refers to the resolution of the variables under analysis. If you are analyzing height of people, you could use course-grained variables that have only a few possible values -- e.g. "above-average, average, below-average" -- or a fine-grained variable, with many or an infinite number of values -- e.g. integer values or real number values.
A measure is "fuzzy" if the distinction between alternative values is not crisp. In the course-grained variable for height, a "crisp" measure would mean that any given individual could only be assigned one value -- e.g. a tall-ish person is either "above-average", or "average". In contrast, a "fuzzy" measure allows for degrees of membership for each value, with "membership" taking values from 0 to 1.0. Thus, a tall-ish person could be a value of "0.5 above-average", "0.5 average", "0.0 below-average".
Finally, a measure is "rough" when two values are given: upper and lower bounds as an estimate of the "crisp" measure. In our example of a tall-ish person, the rough measure would be {UPPER = above-average, LOWER = average}.
Why use granular, fuzzy, or rough measures at all, you might ask? Why not measure everything in nice, precise real numbers? Because many real-world phenomena don't have a good, reliable intrinsic measure and measurement procedure that results in a real number. If you ask married couples to rate the quality of their marriage on a scale from 1 to 10, or 1.00 to 10.00, they might give you a number (or range of numbers), but how reliable are those reports? Using a coarse-grained measure (e.g. "happy", "neutral/mixed", "unhappy"), or fuzzy measure, or rough measure can be more reliable and more credible in your analysis. Generally, it's much better to use rough/crude measures well than to use precise/fine-grained measures poorly.
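The height example can be sketched in code; the centimetre cut-offs and membership ramps below are invented purely for illustration:

```python
def crisp_height(cm):
    """Crisp, coarse-grained measure: exactly one category per person."""
    if cm >= 183:
        return "above-average"
    if cm >= 168:
        return "average"
    return "below-average"

def fuzzy_height(cm):
    """Fuzzy measure: degree of membership (0..1) in each category."""
    def ramp(x, lo, hi):
        return min(1.0, max(0.0, (x - lo) / (hi - lo)))
    above = ramp(cm, 175, 190)
    below = 1.0 - ramp(cm, 160, 175)
    average = 1.0 - max(above, below)  # one simple way to close the set
    return {"above-average": above, "average": average, "below-average": below}

def rough_height(cm, error=3):
    """Rough measure: upper and lower bounds on the crisp value, given
    +/- `error` cm of measurement uncertainty."""
    return {"UPPER": crisp_height(cm + error), "LOWER": crisp_height(cm - error)}

# a tall-ish person, 182 cm
print(crisp_height(182))   # one crisp value: "average"
print(fuzzy_height(182))   # partial membership in two categories
print(rough_height(182))   # {'UPPER': 'above-average', 'LOWER': 'average'}
```

The rough output matches the {UPPER = above-average, LOWER = average} example in the text.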
| null |
CC BY-SA 3.0
| null |
2014-10-26T19:36:07.113
|
2014-10-26T19:36:07.113
| null | null |
609
| null |
2351
|
2
| null |
2298
|
1
| null |
I think the most likely explanation is that the two libraries do not quite support TreeModel in PMML in the same way. Perhaps one only supports a subset of features, and ignores ones it does not understand. This could cause different scoring.
I'd also double-check that the upstream parsing code is the same in both cases. Maybe a missing value is treated differently upstream.
| null |
CC BY-SA 3.0
| null |
2014-10-26T20:10:06.923
|
2014-10-26T20:10:06.923
| null | null |
21
| null |
2352
|
1
| null | null |
2
|
47
|
I have data collected from a computer simulation of football games, which seems to have recurring patterns of the following form:
If Madrid plays Arsenal and the match ends under 3 goals, then in their next match against each other, Madrid will win. If Madrid happens to lose and then plays against Chelsea next, they will win 90% of the time.
How do I find such inferences from simulation-generated data like this? There are other forms of hidden patterns that I believe exist in the dataset.
|
making logical inference from a simuation generated data
|
CC BY-SA 3.0
| null |
2014-10-26T20:11:22.317
|
2022-05-26T00:03:40.363
| null | null |
4811
|
[
"machine-learning",
"data-mining"
] |
2353
|
1
| null | null |
4
|
382
|
I can't seem to figure out why I have a high percentage error.
I'm trying to get a perceptron between X1 and X2 which are Gaussian distributed data sets with distinct means and identical co-variances.
Below is my code:
```
N=200;
C= [2 1; 1 2]; %Covariance
m1=[0 2];
m2=[1.5 0];%mean
X1 = mvnrnd(m1, C, N/2);
X2 = mvnrnd(m2, C, N/2);
X = [X1; X2];
X = [X ones(N,1)]; %bias
y = [-1*ones(N/2,1); ones(N/2,1)]; %classification
%Split data into training and test
ii = randperm(N);
Xtr = X(ii(1:N/2),:);
ytr = X(ii(1:N/2),:);
Xts = X(ii(N/2+1:N),:);
yts = y(ii(N/2+1:N),:);
Nts = N/2;
w = randn(3,1);
eta = 0.001;
%learn from training set
for iter=1:500
j = ceil(rand*N/2);
if( ytr(j)*Xtr(j,:)*w < 0)
w = w + eta*Xtr(j,:)';
end
end
%apply what you have learnt to test set
yhts = Xts * w;
disp([yts yhts])
PercentageError = 100*sum(yts .*yhts < 0)/Nts;
```
What am I doing wrong and how can I address this challenge?
|
MATLAB Perceptron
|
CC BY-SA 3.0
| null |
2014-10-26T20:35:39.767
|
2016-12-21T19:51:44.737
|
2016-12-21T19:51:44.737
|
25652
|
4813
|
[
"machine-learning",
"matlab"
] |
2354
|
2
| null |
172
|
6
| null |
I think your best bet would be [rosetta](http://pythonhosted.org/rosetta/). I'm finding it extremely useful and easy. Check its [pandas methods](http://pythonhosted.org/rosetta/#module-rosetta.parallel.pandas_easy).
You can get it by pip.
| null |
CC BY-SA 3.0
| null |
2014-10-27T04:15:00.747
|
2014-10-27T04:15:00.747
| null | null |
4815
| null |
2355
|
1
|
2371
| null |
3
|
3061
|
Sorry if this topic is not directly connected to Data Science.
I want to understand how the [Graphlab tool](http://graphlab.com/learn/gallery/index.html) works. First I want to execute the toy examples from the Gallery site. When I try to execute the example code, everything is OK except one command: I cannot see the graphlab plot after `show()`. The command `show()` returns some kind of object in IPython and nothing in the IPython Notebook.
If the example code has a plot that depends directly on the matplotlib module, I can produce the real plots and save them on my machine. Consequently, I suppose the error stems from graphlab (or an object of its class).
If somebody has already used this tool and rendered the plot, can they tell me how to execute the plot commands?
```
In [8]: import graphlab
In [9]: from IPython.display import display
from IPython.display import Image
graphlab.canvas.set_target('ipynb')
In [10]:import urllib
url = 'https://s3.amazonaws.com/GraphLab-Datasets/americanMovies/freebase_performances.csv'
urllib.urlretrieve(url, filename='freebase_performances.csv') # downloads an 8MB file to the working directory
Out[10]: ('freebase_performances.csv', <httplib.HTTPMessage instance at 0x7f44e153cf38>)
In [11]: data = graphlab.SFrame.read_csv('remote://freebase_performances.csv', column_type_hints={'year': int})
```
...
...
...
```
In [15]:data.show()
```
No plot after this line
...
...
```
In [19]:print data.show()
<IPython.core.display.Javascript object at 0x7f44e14c0850>
```
The object of graphlab (?) after print command
|
How do I show plots when using Graphlab?
|
CC BY-SA 3.0
| null |
2014-10-27T09:55:36.887
|
2016-11-29T10:53:36.083
|
2016-11-29T10:53:36.083
|
26596
|
3281
|
[
"python",
"graphs",
"ipython"
] |
2356
|
2
| null |
2287
|
0
| null |
OK, so values at time t-1 predict values at time t. That makes sense.
First you should decide whether you think these values are independent or not. Do the x predict the y or z at all? And, do you think just the previous 1 value is predictive, or the previous n?
Either way you could model this as a simple regression problem. What technique is best really depends on what you expect the relationship to be, and what these variables are; I am not sure that's given here.
For example if they're sensor values read fairly rapidly, and the sensor changes slowly, you'd expect some simple model like a moving average to do well. For other types of values this would not be predictive at all.
This looks like the Markov chain model, so you may look into that, but somehow I think it's over-general for what I think the problem is.
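For the slowly-changing-sensor case, the moving-average baseline is only a few lines (the readings below are made up for illustration):

```python
def moving_average_forecast(series, window=3):
    """Predict the value at time t as the mean of the previous `window` values."""
    preds = []
    for t in range(window, len(series)):
        preds.append(sum(series[t - window:t]) / window)
    return preds

# slowly changing sensor readings: the moving average tracks them closely
readings = [20.0, 20.1, 20.2, 20.2, 20.3, 20.4]
print(moving_average_forecast(readings))
```

If this baseline already does well on held-out data, a more elaborate model (e.g. a Markov chain over x, y, z jointly) may not be worth the complexity.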
| null |
CC BY-SA 3.0
| null |
2014-10-27T15:43:00.067
|
2014-10-27T15:43:00.067
| null | null |
21
| null |