Id | PostTypeId | AcceptedAnswerId | ParentId | Score | ViewCount | Body | Title | ContentLicense | FavoriteCount | CreationDate | LastActivityDate | LastEditDate | LastEditorUserId | OwnerUserId | Tags |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
5178
|
1
|
6771
| null |
59
|
28897
|
I am a PhD student in Geophysics and work with large amounts of image data (hundreds of GB, tens of thousands of files). I know `svn` and `git` fairly well and have come to value having a project history, combined with the ability to easily work together and have protection against disk corruption. I also find `git` extremely helpful for consistent backups, but I know that git cannot handle large amounts of binary data efficiently.
In my masters studies I worked on data sets of similar size (also images) and had a lot of problems keeping track of different versions on different servers/devices. Diffing 100GB over the network really isn't fun, and it cost me a lot of time and effort.
I know that others in science seem to have similar problems, yet I couldn't find a good solution.
I want to use the storage facilities of my institute, so I need something that can use a "dumb" server. I also would like to have an additional backup on a portable hard disk, because I would like to avoid transferring hundreds of GB over the network wherever possible. So, I need a tool that can handle more than one remote location.
Lastly, I really need something that other researchers can use, so it does not need to be super simple, but it should be learnable in a few hours.
I have evaluated a lot of different solutions, but none seem to fit the bill:
- svn is somewhat inefficient and needs a smart server
- hg bigfile/largefile can only use one remote
- git bigfile/media can also use only one remote, but is also not very efficient
- attic doesn't seem to have a log, or diffing capabilities
- bup looks really good, but needs a "smart" server to work
I've tried `git-annex`, which does everything I need it to do (and much more), but it is very difficult to use and not well documented. I've used it for several days and couldn't get my head around it, so I doubt any other coworker would be interested.
How do researchers deal with large datasets, and what are other research groups using?
To be clear, I am primarily interested in how other researchers deal with this situation, not just this specific dataset. It seems to me that almost everyone should have this problem, yet I don't know anyone who has solved it. Should I just keep a backup of the original data and forget all this version control stuff? Is that what everyone else is doing?
|
How to deal with version control of large amounts of (binary) data
|
CC BY-SA 3.0
| null |
2015-02-13T10:09:25.177
|
2020-02-05T03:22:42.210
|
2015-02-19T05:31:32.323
|
2452
|
8320
|
[
"bigdata",
"databases",
"binary",
"version-control"
] |
5179
|
2
| null |
5178
|
9
| null |
I have dealt with similar problems with very large synthetic biology datasets, where we have many, many GB of [flow cytometry](http://en.wikipedia.org/wiki/Flow_cytometry) data spread across many, many thousands of files, and need to maintain them consistently between collaborating groups at (multiple) different institutions.
Typical version control like svn and git is not practical for this circumstance, because it's just not designed for this type of dataset. Instead, we have fallen back on using "cloud storage" solutions, particularly [DropBox](https://www.dropbox.com/) and [Bittorrent Sync](http://en.wikipedia.org/wiki/BitTorrent_Sync). DropBox has the advantage that it does at least some primitive logging and version control and manages the servers for you, but the disadvantage that it's a commercial service, you have to pay for large storage, and you're putting your unpublished data in commercial storage; you don't have to pay much, though, so it's a viable option. Bittorrent Sync has a very similar interface, but you run it yourself on your own storage servers and it doesn't have any version control. Both of them hurt my programmer soul, but they're the best solutions my collaborators and I have found so far.
| null |
CC BY-SA 3.0
| null |
2015-02-13T13:38:56.797
|
2015-02-13T13:38:56.797
| null | null | null | null |
5180
|
2
| null |
5164
|
1
| null |
Since you haven't given us much code or context, I'll assume you're working on a binary classification problem:
```
import numpy as np

# rf is assumed to be an already-fitted classifier, e.g. a RandomForestClassifier
X = ...  # input for classification, shape (n_samples, n_features)
y_pred = rf.predict_proba(X)[:, 1]  # index slicing to retrieve a 1d array of positive-class probabilities
y_pred
# array([ 0.18, 0.21, 0.1, 0.2, 0.3])
# Sort the sample indices by predicted probability (ascending)
np.argsort(y_pred)
# [2, 0, 3, 1, 4] => the lowest probability is at index 2 (the 3rd sample)
# The most confident predictions are therefore the last indices of argsort:
X[np.argsort(y_pred)[-5:]]
```
Now you should see the rows in X with the 5 most confident predictions.
| null |
CC BY-SA 3.0
| null |
2015-02-18T14:07:39.487
|
2015-02-18T14:07:39.487
| null | null |
8319
| null |
5181
|
2
| null |
5160
|
1
| null |
Imagine each node in your graph as a user, and each edge as an action such as 'share'. It may also be a bidirectional relationship 'share' and 'view'.
Some social scientists and engineers estimate the probability that a message is 'shared' and 'viewed' given that a particular user decides to share it. This process is called "information diffusion", "information propagation", "cascading", etc.
If you are interested in the details of how these calculations are performed, check out these papers:
[http://cs.stanford.edu/people/jure/pubs/netrate-netsci14.pdf](http://cs.stanford.edu/people/jure/pubs/netrate-netsci14.pdf)
[http://cs.stanford.edu/people/jure/pubs/cascades-www14.pdf](http://cs.stanford.edu/people/jure/pubs/cascades-www14.pdf)
[http://cs.stanford.edu/people/jure/pubs/infopath-wsdm13.pdf](http://cs.stanford.edu/people/jure/pubs/infopath-wsdm13.pdf)
Similar topics:
[http://snap.stanford.edu/papers.html](http://snap.stanford.edu/papers.html)
| null |
CC BY-SA 3.0
| null |
2015-02-18T23:14:02.563
|
2015-02-18T23:14:02.563
| null | null |
3466
| null |
5182
|
2
| null |
5178
|
9
| null |
I have used [Versioning on Amazon S3 buckets](http://docs.aws.amazon.com/AmazonS3/latest/dev/Versioning.html) to manage 10-100GB in 10-100 files. Transfer can be slow, so it has helped to compress and transfer in parallel, or just run computations on EC2. The [boto](https://github.com/boto/boto) library provides a nice python interface.
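For anyone who wants to try this route, here is a minimal sketch of the same idea using the newer boto3 library (boto's successor); the bucket and file names are made up for illustration.
```
import boto3

s3 = boto3.client("s3")

# One-time setup: turn on versioning for the bucket
s3.put_bucket_versioning(
    Bucket="my-research-data",
    VersioningConfiguration={"Status": "Enabled"},
)

# Upload a file; S3 keeps every previous version of the same key
s3.upload_file("results/run_001.h5", "my-research-data", "results/run_001.h5")

# Inspect the version history of that key
resp = s3.list_object_versions(Bucket="my-research-data", Prefix="results/run_001.h5")
for v in resp.get("Versions", []):
    print(v["VersionId"], v["LastModified"], v["Size"])
```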
| null |
CC BY-SA 3.0
| null |
2015-02-19T06:30:45.190
|
2015-02-19T06:30:45.190
| null | null |
8335
| null |
5184
|
2
| null |
5175
|
7
| null |
Finally solved the issue.
You should go to:
```
Data –> your_data_source –> Edit Aliases –> Measure Names
```
There you will see the full mapping between variables/aggregations and aliases.

| null |
CC BY-SA 3.0
| null |
2015-02-19T12:06:08.173
|
2015-02-19T12:06:08.173
| null | null |
97
| null |
5185
|
2
| null |
5171
|
1
| null |
A decaying exponential of the form $e^{-\alpha x}$ (where $x$ is the distance from the observation) is convenient in this situation. It has the nice feature that the weight is equal to $1$ when the observation lies exactly at one of your training points and decays to zero as $x\rightarrow \infty$.
What you need to decide is the scaling factor, $\alpha$, which can greatly affect your results, since the decaying exponential is nonlinear. If $\alpha$ is too small or large, then all $k$ points will be weighted nearly equally (assuming none of the $k$ points have a distance very close to zero). One approach is to pick a global value of $\alpha$ that is suitable for your data set. Another approach is to compute a new value of $\alpha$ for each set of $k$ neighbors under consideration, which will guarantee variability in the weights (as long as all $k$ neighbors are not equidistant).
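For illustration, here is a minimal sketch of this weighting scheme used as a custom weight function in scikit-learn's k-NN regressor; the data and the choice of $\alpha$ are made up.
```
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

def exp_weights(distances, alpha=2.0):
    # w = exp(-alpha * x): equals 1 at distance 0 and decays to 0 as x grows
    return np.exp(-alpha * np.asarray(distances))

rng = np.random.default_rng(0)
X_train = rng.uniform(0, 10, size=(200, 1))
y_train = np.sin(X_train).ravel() + rng.normal(0, 0.1, 200)

knn = KNeighborsRegressor(n_neighbors=5, weights=exp_weights)
knn.fit(X_train, y_train)
print(knn.predict([[3.5]]))
```
Here `alpha=2.0` is arbitrary; as noted above, it should be chosen globally or per set of neighbors.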
| null |
CC BY-SA 3.0
| null |
2015-02-19T14:59:05.007
|
2015-02-19T14:59:05.007
| null | null |
964
| null |
5186
|
1
| null | null |
1
|
184
|
I need to draw a decision tree for the following problem:
>
The research and development manager in an old oil company, which is
considering making some changes, lists the following courses of action
for the company:
(i) Adopt a process developed by another oil company. This would cost
€7 million in royalties and yield a net €20 million profit (before
paying the royalty).
(ii) Carry out one or two (not simultaneously) alternative research
projects :
(R1) the more expensive one has a 0.8 chance of success; net profit
€16 million and a further €6 million in royalties. If it fails there
will be a net loss of €10 million.
(R2) the alternative research programme is less expensive but only has
a 0.7 chance of success with a net profit of €15 million and a further
€5 million in royalties. If it fails a net loss of €6 million will be
incurred.
(iii) Make no changes. After meeting current operating costs the
company expects to make a net €15 million profit from its existing
process. Failure of one research program would still leave open all
remaining courses of action (including the other research programme).
I also need to indicate the different payoffs. This is what I've done so far:

I would like to be sure that I'm going in the right direction since I'm a beginner with decision trees. And then I need to decide the best course of action using Bayes, Maximax and Maximin rules.
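As a sanity check, here is a rough single-stage expected-monetary-value calculation for the three options (my own assumptions, not stated outright in the brief: royalties earned add to the quoted net profit, the royalty paid in option (i) is subtracted from its profit, and follow-on options after a failed project are ignored):
```
# Expected monetary values in millions of euros, single-stage only
emv_adopt    = 20 - 7                          # option (i): 13
emv_r1       = 0.8 * (16 + 6) + 0.2 * (-10)    # option (ii, R1): 15.6
emv_r2       = 0.7 * (15 + 5) + 0.3 * (-6)     # option (ii, R2): 12.2
emv_nochange = 15                              # option (iii)
print(emv_adopt, emv_r1, emv_r2, emv_nochange)
```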
|
Decision Tree Bayes rules / Maximax / Maximin
|
CC BY-SA 3.0
| null |
2015-02-19T16:23:15.093
|
2017-05-11T07:49:57.700
|
2017-05-11T07:49:57.700
|
8878
|
8343
|
[
"data-mining",
"statistics",
"visualization",
"decision-trees"
] |
5188
|
2
| null |
5170
|
2
| null |
No, you can't infer that.
I assume you have the same training set with the same predictors (symptoms). The only difference in the training set is the binary class label for each patient.
The smaller tree just means:
- with the given symptoms, it might be easier to distinguish between people who have condition 1 and those who do not (since you reach the same certainty with less information).
This distinction seems harder for medical condition 2, which is why you have to consider more symptoms to be reasonably sure about your classification. So if condition 2 is a very mild condition that is hard to diagnose even for an expert, it would result in a deeper tree.
| null |
CC BY-SA 3.0
| null |
2015-02-19T22:17:45.773
|
2015-02-19T22:17:45.773
| null | null |
8191
| null |
5192
|
1
|
5234
| null |
5
|
252
|
I'm quite new to Data Science, but I would like to do a project to learn more about it.
My subject will be Data Understanding in Public Health.
So I want to do some introductory research to public health.
I would like to visualize some data with the use of a tool like Tableau.
Which path would you take to develop a good understanding of Data Science? I imagine taking some online courses, eg. Udacity courses on data science, but which courses would you recommend?
Where can I get real data (secondary Dummy Data) to work with?
And are there any good resources on research papers done in Data Science area with the subject of Public Health?
Any suggestions and comments are welcome.
|
Where can I find resources and papers regarding Data Science in the area of Public Health
|
CC BY-SA 3.0
| null |
2015-02-20T12:21:31.560
|
2016-08-17T20:22:51.343
|
2015-02-25T21:49:01.210
|
2452
|
7887
|
[
"visualization",
"tableau",
"research"
] |
5193
|
1
| null | null |
6
|
13465
|
I was wondering which language I should use, R or Python, for my internship in fraud detection in an online banking system: I have to build machine learning algorithms (NN, etc.) that predict transaction fraud.
Thank you very much for your answer.
|
Python or R for implementing machine learning algorithms for fraud detection
|
CC BY-SA 3.0
| null |
2015-02-20T14:09:23.483
|
2015-07-20T21:15:48.320
| null | null |
8357
|
[
"machine-learning",
"r",
"python"
] |
5196
|
1
| null | null |
0
|
106
|
Hello and thanks in advance! I'd like some advice on a scalability issue and the best way to resolve. I'm writing an algorithm in R to produce forecasts for several thousand entities. One entity takes about 43 seconds to generate a forecast and upload the data to my database. That equates to about 80+ hours for the entire set of entities and that's much too long.
I thought about running several R processes in parallel, possibly many on a few different servers, each performing forecasts for a portion of total entities. Though that would work, is there a better way? Can Hadoop help at all? I have little experience with Hadoop so don't really know if it can apply. Thanks again!
|
Scaling thousands of automated forecasts in R
|
CC BY-SA 3.0
| null |
2015-02-20T21:05:16.223
|
2015-02-28T21:42:31.383
| null | null |
8358
|
[
"r",
"scalability",
"forecast"
] |
5197
|
1
| null | null |
1
|
608
|
A usual way to find association rules in R is the "arules" package, which easily lets you calculate rules based on the apriori algorithm.
However, for the data I'm using, I have a lot of NULL cases (baskets where no product A or B is present). This means that I need to calculate some null-invariant measures (Kulczynski, for instance).
Does anyone know of any package or workable code that lets me implement this, as opposed to writing the entire algorithm from scratch?
|
Null-invariant measures of association in R
|
CC BY-SA 3.0
| null |
2015-02-20T21:56:45.690
|
2018-03-05T06:30:12.613
| null | null |
8296
|
[
"data-mining",
"r"
] |
5198
|
1
|
5210
| null |
3
|
572
|
Are there commonly accepted ways to visualize the results of a multivariate regression for a non-quantitative audience? In particular, I'm asking how one should present data on coefficients and T statistics (or p-values) for a regression with around 5 independent variables.
|
How to visualize multivariate regression results
|
CC BY-SA 3.0
| null |
2015-02-20T23:16:18.767
|
2015-02-23T08:17:30.120
| null | null |
6403
|
[
"visualization",
"regression",
"linear-regression"
] |
5199
|
1
|
5201
| null |
1
|
1030
|
This question is likely somewhat naive. I know I (and my colleagues) can install and use Python on local machines. But is that really a best practice? I have no idea.
Is there value in setting up a Python "server"? A box on the network where we develop our data science related Python code. If so, what are the hardware requirements for such a box? Do I need to be concerned about any specific packages or conflicts between projects?
|
What is a good hardware setup for using Python across multiple users
|
CC BY-SA 3.0
| null |
2015-02-21T01:16:34.387
|
2015-02-21T11:59:31.860
| null | null |
8368
|
[
"python"
] |
5200
|
2
| null |
5193
|
9
| null |
I would say that it is your call and purely depends on your comfort with (or desire to learn) the language. Both languages have extensive ecosystems of packages/libraries, including some, which could be used for fraud detection. I would consider anomaly detection as the main theme for the topic. Therefore, the following resources illustrate the variety of approaches, methods and tools for the task in each ecosystem.
Python Ecosystem
- scikit-learn library: for example, see this page;
- LSAnomaly, a Python module, improving OneClassSVM (a drop-in replacement): see this page;
- Skyline: an open source example of implementation, see its GitHub repo;
- A relevant discussion on StackOverflow;
- pyculiarity, a Python port of Twitter's AnomalyDetection R Package (as mentioned in 2nd bullet of R Ecosystem below "Twitter's Anomaly Detection package").
R Ecosystem
- CRAN Task Views, in particular, Time Series, Multivariate and Machine Learning;
- Twitter's Anomaly Detection package;
- Early Warning Signals (EWS) Toolbox, which includes earlywarnings package;
- h2o machine learning platform (interfaces with R) uses deep learning for anomaly detection.
Additional General Information
- A set of slides, mentioning a variety of methods for anomaly detection (AD);
- A relevant discussion on Cross Validated;
- ELKI Data Mining Framework, implementing a large number of AD (and other) algorithms.
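To make the scikit-learn route listed above concrete, here is a minimal anomaly-detection sketch with a one-class SVM; the transaction features are synthetic stand-ins, since I obviously don't have your data.
```
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(42)
normal_tx = rng.normal(loc=0.0, scale=1.0, size=(500, 3))   # historical "normal" transactions
new_tx = np.vstack([
    rng.normal(0.0, 1.0, size=(5, 3)),                      # normal-looking transactions
    rng.normal(6.0, 1.0, size=(3, 3)),                      # clearly anomalous ones
])

# nu roughly bounds the fraction of training points treated as outliers
model = OneClassSVM(nu=0.05, kernel="rbf", gamma="scale").fit(normal_tx)
print(model.predict(new_tx))   # +1 = looks normal, -1 = flagged as a potential fraud
```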
| null |
CC BY-SA 3.0
| null |
2015-02-21T10:08:39.980
|
2015-07-20T21:15:48.320
|
2017-05-23T12:38:53.587
|
-1
|
2452
| null |
5201
|
2
| null |
5199
|
0
| null |
Is installing Python locally a good practice? Yes, if you are going to develop in Python, it is always a good idea to have a local environment where you can break things safely.
Is there value in setting up a Python "server"? Yes, but before doing so, be sure to be able to share your code with your colleagues using a [version control system](http://en.wikipedia.org/wiki/Revision_control). My reasoning would be that, before you move things to a server, you can move a great deal forward by being able to test several different versions in the local environment mentioned above. Examples of VCS are [git](http://git-scm.com), [svn](https://subversion.apache.org), and for the deep nerds, [darcs](http://darcs.net).
Furthermore, a "Python server" where you can deploy your software once it is integrated into a releasable version is something usually called "[staging server](http://en.wikipedia.org/wiki/Staging_site)". There is a whole philosophy in software engineering — [Continuous Integration](http://en.wikipedia.org/wiki/Continuous_integration) — that advocates staging whatever you have in VCS daily or even on each change. In the end, this means that some automated program, running on the staging server, checks out your code, sees that it compiles, runs all defined tests and maybe outputs a package with a version number. Examples of such programs are [Jenkins](http://jenkins-ci.org), [Buildbot](http://buildbot.net) (this one is Python-specific), and [Travis](https://travis-ci.org/recent) (for cloud-hosted projects).
What are the hardware requirements for such a box? None, as far as I can tell. Whenever it runs out of disk space, you will have to clean up. Having more CPU speed and memory will make concurrent builds easier, but there is no real minimum.
Do I need to be concerned about any specific packages or conflicts between projects? Yes, this has been identified as a problem, not only in Python, but in many other systems (see [Dependency hell](http://en.wikipedia.org/wiki/Dependency_hell)). The established practice is to keep projects isolated from each other as far as their dependencies are concerned. This means, avoid installing dependencies on the system Python interpreter, even locally; always define a [virtual environment](http://virtualenv.readthedocs.org/en/latest/) and install dependencies there. Many of the aforementioned CI servers will do that for you anyway.
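As a tiny illustration of that last point, this is one way to create such an isolated environment with Python's standard-library `venv` module (the built-in successor to the virtualenv tool linked above); the directory name is arbitrary.
```
import venv

# Create ./project-env with its own interpreter and pip, so the project's
# dependencies never touch the system Python installation.
venv.create("project-env", with_pip=True)
```
After activating the environment, anything you install with pip stays inside that directory.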
| null |
CC BY-SA 3.0
| null |
2015-02-21T11:59:31.860
|
2015-02-21T11:59:31.860
| null | null |
1367
| null |
5202
|
2
| null |
5193
|
2
| null |
This is largely a subjective question. Trying to list some criteria that seem objective to me:
- the important advantage of Python is that it is a general-purpose language. If you will need to do anything else than statistics with your program (generate a web interface, integrate it with a reporting system, pass it on to other developers for maintenance) you are far better off with Python.
- the important advantage of R is that it is a specialized language. If you already know that there is a technique you want to use, and it is not a usual suspect (like NN), then you probably will find a library in CRAN that makes life easier for you.
And here is another, more subjective advice:
- both languages are not really performance-oriented. If you need to process large quantities of data, or process very fast, or process in parallel, you will run into trouble ... but it is far easier to run into such trouble with R than with Python. In my experience, you find the limits of R within some weeks, and the way to push them is quite arcane; while you can use Python for years, never really missing the speed that C developers always mention as the Holy Grail, and even when you do, you can use Cython to make up for the difference. The only real trouble is concurrency, but for statistics, it is hardly ever an issue.
| null |
CC BY-SA 3.0
| null |
2015-02-21T12:19:34.107
|
2015-02-21T12:19:34.107
| null | null |
1367
| null |
5203
|
2
| null |
5196
|
2
| null |
If you're working with the R language, I would suggest first trying the R ecosystem's abilities to parallelize the processing, if possible. For example, take a look at the packages mentioned in [this CRAN Task View](http://cran.r-project.org/web/views/HighPerformanceComputing.html).
Alternatively, if you're not comfortable or satisfied with the approaches implemented by the above-referenced packages, you can try some other approaches, such as Hadoop or something else. I think that a Hadoop solution would be overkill for such a problem, considering the learning curve associated with it, as well as the fact that, as far as I understand, Hadoop and other MapReduce frameworks/architectures target long-running processes (an average task is ~2 hours, I read somewhere recently). Hope this helps.
| null |
CC BY-SA 3.0
| null |
2015-02-21T12:33:53.923
|
2015-02-21T12:33:53.923
| null | null |
2452
| null |
5204
|
1
|
5237
| null |
1
|
362
|
Most of the literature focuses on either explicit rating data or implicit (like/unknown) data. Are there any good publications on handling like/dislike/unknown data? That is, in the data matrix there are three values, and I'd like to recommend from the unknown entries.
And are there any good open source implementations on this?
Thanks.
|
Matrix factorization for like/dislike/unknown data
|
CC BY-SA 3.0
| null |
2015-02-21T14:02:39.170
|
2015-04-27T12:48:00.957
| null | null |
1376
|
[
"machine-learning",
"recommender-system"
] |
5205
|
1
| null | null |
1
|
1878
|
I have a pandas dataframe with a salary column which contains values like:
>
£36,000 - £40,000 per year plus excellent bene...,
£26,658 to £32,547 etc
I isolated this column and split it with the view to recombining into the data frame later via a column bind in pandas.
I now have an object with columns like the below. I think the columns produced by the split are unnamed because I didn't specify them (I called `df['salary']=df['salary'].astype(str).str.split()`).
So my new object contains this type of information:
[£26,658, to, £32,547],
[Competitive, with, Excellent, Benefits]
What I want to do is:
- Create three columns called minvalue, maxvalue and realvalue
- Find the items starting with £ (something to do with "^£"?)
- Take the rest of each such item, ignoring the £, to get the number out (something like `substr(x, 2, nchar(x))`?)
- If two such items are found in a row, call the first number "minvalue" and the second "maxvalue" and put each below the right column. If there is only one value in the row, put it below the realvalue column.
I am very new to pandas and programming in general, but keen on learning, your help would be appreciated.
|
Split columns by finding "£" and then converting to minvalue, maxvalue
|
CC BY-SA 3.0
| null |
2015-02-22T11:57:57.840
|
2020-07-31T15:36:10.983
|
2015-02-22T12:57:21.040
|
8375
|
8375
|
[
"dataset",
"statistics",
"pandas"
] |
5206
|
2
| null |
5205
|
4
| null |
This is more of a general regex question, rather than pandas specific.
I would first create a function that extracts the numbers you need from strings, and then use the `pandas.DataFrame.apply` function to apply it on the pandas column containing the strings. Here is what I would do:
```
import re

def parseNumbers(salary_txt):
    # find every "£<digits and commas>" pattern and return the numbers as ints
    return [int(item.replace(',', '')) for item in re.findall(r'£([\d,]+)', salary_txt)]

# testing if this works
testcases = ['£23,000 to £100,000', '£34,000', '£10000']
for testcase in testcases:
    print(testcase, parseNumbers(testcase))
```
Here, I just used `re.findall`, which finds all patterns that look like `£([\d,]+)`. This is anything that starts with £ and is followed by an arbitrary sequence of digits and commas. The parentheses tell Python to extract only the bit after the £ sign. The last thing I do is remove the commas and parse the remaining string into an integer. You could be more elegant about this I guess, but it works.
# using this function in pandas
```
df['salary_list'] = df['salary'].apply(parseNumbers)
df['minsalary'] = df['salary'].apply(parseNumbers).apply(min)
df['maxsalary'] = df['salary'].apply(parseNumbers).apply(max)
```
Checking if this all works:
```
import pandas
df = pandas.DataFrame(testcases,columns = ['salary'])
df['minsalary'] = df['salary'].apply(parseNumbers).apply(min)
df['maxsalary'] = df['salary'].apply(parseNumbers).apply(max)
df
salary minsalary maxsalary
0 £23,000 to £100,000 23000 100000
1 £34,000 34000 34000
2 £10000 10000 10000
```
The advantages of moving the parsing logic to a separate function is that:
- it may be reusable in other code
- it is easier to read for others, even if they aren't pandas experts
- it's easier to develop and test the parsing functionality in isolation
| null |
CC BY-SA 3.0
| null |
2015-02-22T13:14:36.200
|
2015-02-22T13:14:36.200
|
2020-06-16T11:08:43.077
|
-1
|
8376
| null |
5208
|
1
|
5211
| null |
1
|
193
|
As an example: if you are trying to classify humans vs. dogs, is it possible to approach this problem by classifying different kinds of animals (birds, fish, reptiles, mammals, ...) or even smaller subsets (dogs, cats, whales, lions, ...)?
Then, when you try to classify a new data set, anything that does not fall into one of those classes can be considered a human.
If this is possible, are there any benefits to breaking a binary class problem into several classes (or perhaps labels)?
The benefits I am looking at are: accuracy/precision of the classifier, and parallel learning.
|
Splitting binary classification into smaller subsets
|
CC BY-SA 3.0
| null |
2015-02-22T20:30:30.630
|
2015-02-23T11:43:11.833
|
2015-02-23T00:19:21.277
|
8381
|
8381
|
[
"machine-learning",
"classification"
] |
5209
|
1
|
5998
| null |
4
|
3472
|
I am performing Named Entity Recognition using Stanford NER. I have successfully trained and tested my model. Now I want to know:
1) What is the general way of measuring the accuracy of an NER model? For example, what techniques or approaches are used?
2) Is there any built-in method in Stanford NER for evaluating the accuracy?
|
Accuracy of Stanford NER
|
CC BY-SA 3.0
| null |
2015-02-23T08:00:03.360
|
2015-10-05T20:39:30.453
| null | null |
8016
|
[
"nlp",
"performance"
] |
5210
|
2
| null |
5198
|
2
| null |
I personally like dotcharts of standardized regression coefficients, possibly with standard error bars to denote uncertainty. Make sure to standardize coefficients (and SEs!) appropriately so they "mean" something to your non-quantitative audience: "As you see, an increase of 1 unit in Z is associated with an increase of 0.3 units in X."
In R (without standardization):
```
set.seed(1)
foo <- data.frame(X=rnorm(30),Y=rnorm(30),Z=rnorm(30))
model <- lm(X~Y+Z,foo)
coefs <- coefficients(model)
std.errs <- summary(model)$coefficients[,2]
dotchart(coefs,pch=19,xlim=range(c(coefs+std.errs,coefs-std.errs)))
lines(rbind(coefs+std.errs,coefs-std.errs,NA),rbind(1:3,1:3,NA))
abline(v=0,lty=2)
```

| null |
CC BY-SA 3.0
| null |
2015-02-23T08:17:30.120
|
2015-02-23T08:17:30.120
| null | null |
2853
| null |
5211
|
2
| null |
5208
|
1
| null |
If you are trying to get the best accuracy, etc. for a given question, you should always learn on a training set that is labeled exactly according to your question. You shouldn't expect to get better results by using more granular class labels. The classifier would then try to pick up the differences between the classes and separate them. Since, in practice, the variables in your training set will not perfectly explain the more granular classification question, you shouldn't expect to get a better answer for your less granular classification problem.
If you are not happy with the accuracy of your model you try the following instead:
- review the explanatory variables. Think about what might influence the classification problem. Maybe there is a clever way to construct new variables (from your existing ones) that helps. It's not possible to give general advice on that, since you have to consider the properties of your classifier
- if your class distribution is very skewed you might consider over/undersampling
- you might run more different classifiers and then classify based on the majority vote. Note that you will most likely sacrifice explainability of your model.
Also, you seem to have a misunderstanding when you write that you would assign a sample to "human" if it doesn't fall into any of the granular classes. Note that you should always pick class labels covering the whole universe (all possible classes); this can always be defined as the complement of the other classes. You will also need instances of each class in your training set.
| null |
CC BY-SA 3.0
| null |
2015-02-23T11:43:11.833
|
2015-02-23T11:43:11.833
| null | null |
8191
| null |
5212
|
1
|
5220
| null |
1
|
515
|
We are currently developing customer relationship management software for SMEs. What I'd like to do for our future CRM is develop it with a social-based approach (Social CRM). We will therefore let our users (SMEs) integrate their CRM with their social network accounts. The CRM will also enhance intercorporate communication within the owning company.
All the processes I've just described will certainly generate lots of unstructured data.
I am wondering how we can integrate big data and data-mining concepts into our project, especially for the data generated by the social networks. I am not an expert in these topics but I really want to start somewhere.
### Basic capabilities of CRM (Modules)
- Contacts: people with whom you have a business relationship.
- Accounts: clients with whom you've done business before.
- Leads: accounts who are your potential customers.
- Opportunities: any business opportunity for an account or a lead.
- Sales Orders
- Calendar
- Tasks
What kinds of unstructured data, or what ideas, could be useful for the modules I've just listed? If you need more specific information please ask in the comments.
|
Big data and data mining for CRM?
|
CC BY-SA 3.0
| null |
2015-02-23T12:03:47.773
|
2015-07-31T08:33:51.563
|
2020-06-16T11:08:43.077
|
-1
|
8386
|
[
"data-mining",
"bigdata",
"software-development"
] |
5214
|
1
|
5216
| null |
0
|
80
|
I am starting to play around with data mining / machine learning and I am stuck on a problem that's probably easy.
I have a report that lists the URL and the number of visits a person made, so each combination of IP and URL has a visit count.
Now I want to run the k-means clustering algorithm on this so I thought I could approach it like this:
This is my data:
```
url ip visits
abc.be 123 5
abc.be/a 123 2
abc.be/b 123 2
abc.be/b 321 4
```
And I would turn it into a feature vector/matrix like so:
```
abc.be abc.be/a abc.be/b impressions
1 0 0 5
0 1 0 2
0 0 1 2
0 0 1 4
```
But I am stuck on how to transform my data set to a feature matrix. Any help would be appreciated.
|
Going from report to feature matrix
|
CC BY-SA 3.0
| null |
2015-02-23T15:45:21.700
|
2015-02-23T22:39:53.290
| null | null |
8389
|
[
"feature-extraction",
"k-means"
] |
5215
|
2
| null |
5205
|
0
| null |
You can check the data types of your columns by doing `df.dtypes`, and if `'salary'` isn't a string, you can convert it using `df['salary'] = df['salary'].astype(str)`. This is what you were already doing before splitting. From there, Ferenc's method should work!
| null |
CC BY-SA 4.0
| null |
2015-02-23T21:57:39.677
|
2020-07-31T15:36:10.983
|
2020-07-31T15:36:10.983
|
98307
|
8391
| null |
5216
|
2
| null |
5214
|
1
| null |
I don't understand what you mean by
>
So I have a report that lists the url and the number of visits a person did. So a combination of ip and url result in an amount of visits.
Assuming that you equate an IP with a user, and you wish to cluster users by their URL visitation frequencies, your matrix, `M`, would have
- One row per IP (user)
- One column for each URL that you are tracking (your features)
- and the entries in M would be "visits" of a given URL by a particular IP
Given these assumptions, and your report, `M` would be:
```
abc.be abc.be/a abc.be/b
123 5 2 2
321 0 0 4
```
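If your report is already in a pandas DataFrame, one way to build `M` is a pivot table; here is a minimal sketch using the numbers from your report (pandas is an assumption on my part, not something you mentioned):
```
import pandas as pd

report = pd.DataFrame({
    "url":    ["abc.be", "abc.be/a", "abc.be/b", "abc.be/b"],
    "ip":     ["123", "123", "123", "321"],
    "visits": [5, 2, 2, 4],
})

# one row per IP, one column per URL, missing combinations filled with 0
M = report.pivot_table(index="ip", columns="url", values="visits",
                       aggfunc="sum", fill_value=0)
print(M)
```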
| null |
CC BY-SA 3.0
| null |
2015-02-23T22:39:53.290
|
2015-02-23T22:39:53.290
| null | null |
8392
| null |
5217
|
2
| null |
2269
|
0
| null |
[https://www.codeschool.com/](https://www.codeschool.com/) is very similar to [https://www.datacamp.com/](https://www.datacamp.com/)
when I tried it I fell in love with R and then found datacamp.
www.codecademy.com is also console-based but R is not yet available.
| null |
CC BY-SA 3.0
| null |
2015-02-24T02:11:45.787
|
2015-02-24T02:11:45.787
| null | null |
8394
| null |
5220
|
2
| null |
5212
|
2
| null |
The two modules where you can really harness data mining and big data techniques are probably Leads and Opportunities. The reason is that, as you've written yourself, both contain 'potential' information that you can harness (through predictive algorithms) to get more customers. Taking Leads as an example, you can use a variety of machine learning algorithms to assign a probability to each account, based on that account's potential for becoming your customer in the near future. Since you already have an Accounts module which gives you information about your current customers, you can use this information to train your machine learning algorithms. This is all at a very high level but hopefully, you're getting the gist of what I'm saying.
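A very rough sketch of that idea, with invented feature names and toy data (not a recommendation of any particular algorithm): train on past leads whose outcome is already known and score the open leads by their probability of converting.
```
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Historical leads with a known outcome (did they become an Account?)
history = pd.DataFrame({
    "num_contacts":  [1, 5, 2, 8, 3, 7],
    "company_size":  [10, 200, 15, 500, 40, 300],
    "emails_opened": [0, 9, 1, 12, 2, 10],
    "converted":     [0, 1, 0, 1, 0, 1],
})
model = LogisticRegression().fit(history.drop(columns="converted"), history["converted"])

# Score the leads that are still open
open_leads = pd.DataFrame({
    "num_contacts":  [2, 6],
    "company_size":  [25, 350],
    "emails_opened": [1, 8],
})
print(model.predict_proba(open_leads)[:, 1])   # estimated conversion probability per lead
```
Logistic regression is just a placeholder here; any classifier that outputs probabilities would fit the same pattern.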
| null |
CC BY-SA 3.0
| null |
2015-02-24T03:51:15.133
|
2015-02-24T03:51:15.133
| null | null |
8395
| null |
5221
|
2
| null |
1151
|
1
| null |
My advice is to begin with a thorough grounding in Statistics, for which a lot of classic and unambiguous material is available online. Topics that you should have a firm grasp on include regression, correlation, hypothesis testing and the bias-variance tradeoff. You don't have to go into too much theoretical depth into any of these topics but you should know what these are before you start studying machine learning. Your study in machine learning should include developing an understanding of hypothesis spaces, Bayesian analysis (can you explain the difference between MLE, MAP and optimal Bayes?), Expectation Maximization, logistic regression, clustering (especially k-means), max-margin classifiers (SVMs), overfitting (can you explain what it is in terms of bias and variance?) and feature selection. Linear algebra is helpful but can be done without in many cases.
As someone who does research in data mining and has worked closely with several companies, I can tell you that if you seriously want to go into machine learning, it is NOT enough to simply know how to code something using Weka or R. Those are easy enough to use once you know the concepts. When companies hire data scientists, they want someone who can take the raw data and do something useful with it. A good grasp of fundamentals is obviously essential, since each company's data has its own quirks (and will typically be too big for you to try 'everything'). Good luck!
| null |
CC BY-SA 3.0
| null |
2015-02-24T04:07:29.860
|
2015-02-24T04:07:29.860
| null | null |
8395
| null |
5222
|
2
| null |
1151
|
2
| null |
So far the answers have focused on learning particular methods. They are fine, but they won't make you a Data Scientist. Being a Data Scientist is not solely or even primarily about having mastery of particular data analysis methods (ML or others).
Most fundamental is problem solving and decision support. Whatever data you collect, whatever analysis methods you apply, and however you improve those methods over time, these must support the over-arching goals of solving problems or making better decisions.
You need to start getting first-hand experience with data in your field. I don't mean Kaggle data (i.e. already cleaned). I mean raw data or nearly raw. A good 50% of a data scientist's time is spent wrangling raw data and cleaning it to the point where it's usable in analysis. You need to learn how to deal with missing data, erroneous data, ambiguous data, misformatted data, and so on.
You should also get some experience with decisions that do not map neatly on to the data. Recommender systems are easy in this regard. For example, you might take on the challenge of evaluating software vulnerabilities to guide vulnerability management decisions.
| null |
CC BY-SA 3.0
| null |
2015-02-24T08:40:05.083
|
2015-02-24T08:40:05.083
| null | null |
609
| null |
5223
|
2
| null |
5166
|
2
| null |
Yes, this is a straightforward application for neural networks. In this case the $y_k$ are the outputs of the last layer ("classifier"); $x_k$ is a feature vector and $y_k$ is what it gets classified into. For simplicity, prepare your data so that $N$ is the same for all. The problem you have is perhaps that in the case of time series you won't have enough data: you need (ideally) many thousands of examples to train a network, which in this case means time series, not points. Look at the specialized literature on neural networks for time series prediction for ideas on network architecture.
Library: try Pylearn2 at [http://deeplearning.net/software/pylearn2/](http://deeplearning.net/software/pylearn2/) It's not the only good option but it should serve you well.
| null |
CC BY-SA 3.0
| null |
2015-02-24T11:52:40.977
|
2015-02-24T11:52:40.977
| null | null |
26
| null |
5224
|
1
|
9227
| null |
41
|
41402
|
I would like to use a neural network for image classification. I'll start with pre-trained CaffeNet and train it for my application.
# How should I prepare the input images?
In this case, all the images are of the same object but with variations (think: quality control). They are at somewhat different scales/resolutions/distances/lighting conditions (and in many cases I don't know the scale). Also, in each image there is an area (known) around the object of interest that should be ignored by the network.
I could (for example) crop the center of each image, which is guaranteed to contain a portion of the object of interest and none of the ignored area; but that seems like it would throw away information, and also the results wouldn't be really the same scale (maybe 1.5x variation).
# Dataset augmentation
I've heard of creating more training data by random crop/mirror/etc, is there a standard method for this? Any results on how much improvement it produces to classifier accuracy?
|
How to prepare/augment images for neural network?
|
CC BY-SA 3.0
| null |
2015-02-24T11:59:36.033
|
2022-06-16T19:08:44.980
|
2022-06-16T19:08:44.980
|
29169
|
26
|
[
"neural-network",
"image-classification",
"convolutional-neural-network",
"preprocessing"
] |
5225
|
2
| null |
5192
|
1
| null |
Data science is a new and somewhat vague term. You will find much more information by using keywords such as: epidemiology, population health, public health surveillance, public health, statistics, evidence-based medicine, biostatistics, statistical epidemiology, clinical decision making, interventions etc.
The question as posted is overly broad. On the other hand you're in luck, since there is a substantial amount of information on this topic. Many of the R/Data Science courses on Coursera from Johns Hopkins have a public health flavor. Also, R, and S-PLUS before it, were primarily used in health settings.
| null |
CC BY-SA 3.0
| null |
2015-02-24T22:13:46.327
|
2015-02-24T22:13:46.327
| null | null |
6467
| null |
5226
|
1
|
5229
| null |
85
|
142811
|
I am doing some problems on an application of decision tree/random forest. I am trying to fit a problem which has numbers as well as strings (such as country name) as features. Now the library, [scikit-learn](http://scikit-learn.org) takes only numbers as parameters, but I want to inject the strings as well as they carry a significant amount of knowledge.
How do I handle such a scenario?
I can convert a string to numbers by some mechanism such as hashing in Python. But I would like to know the best practice on how strings are handled in decision tree problems.
|
strings as features in decision tree/random forest
|
CC BY-SA 4.0
| null |
2015-02-25T01:07:14.717
|
2020-10-29T06:16:43.570
|
2019-10-02T14:32:28.693
|
26686
|
8409
|
[
"machine-learning",
"python",
"scikit-learn",
"random-forest",
"decision-trees"
] |
5227
|
1
|
5235
| null |
1
|
128
|
I am struggling to choose a right data prediction method for the following problem.
Essentially I am trying to model a scheduler's operation, trying to predict its scheduling decisions without knowing the scheduling mechanism and with incomplete data.
(1) There are M available resource blocks that can carry data, N data channels that must be scheduled every time instance i
(2) Inputs into the scheduler:
- Matrix $X_i$ of size M by N, consisting of N column vectors, one from each data source. Each of the M elements is an index from 1 to 32 describing the quality of the data channel for a particular resource block: 1 - really bad quality, 32 - excellent quality.
- Data describing the type of traffic to be carried (voice/internet etc.)
The scheduler decides how many resource blocks each channel occupies at every time instant i.
Given that
- I CAN see resource allocation map every time instant
- I DO have access to matrix $X_i$
- I DON'T know the algorithm of the scheduler, and
- I DON'T have access to the type of data to be scheduled.
I want to make a best guess (prediction) of how the data will be scheduled based on this incomplete information, i.e., which resource block will be occupied by which data channel. What is the best choice of prediction/modelling algorithm?
Any help appreciated!
|
problem of choosing right statistical method for scheduler prediction
|
CC BY-SA 3.0
| null |
2015-02-25T03:24:20.450
|
2015-02-26T05:08:31.993
| null | null |
8410
|
[
"predictive-modeling"
] |
5228
|
1
| null | null |
3
|
585
|
There are several metrics for the quality of a graph clustering, e.g. Newman modularity. These enable you to compare two candidate clusterings of the same graph.
Does anyone know a metric that will answer the question "how modular is this graph"? For example the first of these two graphs is more modular than the second:
o===o-----o====o o----o===o-----o
It would be possible to choose a clustering algorithm, run it, and compute your preferred modularity metric for the best clustering found. But this is only a lower bound, so it doesn't seem very satisfactory.
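For what it's worth, this is what that lower-bound computation looks like with networkx (the example graph is arbitrary, and networkx is just the tool I happened to try):
```
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity

G = nx.barbell_graph(5, 2)                     # two cliques joined by a short path
communities = greedy_modularity_communities(G)
print(modularity(G, communities))              # modularity of the best partition found: a lower bound only
```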
The question matters. For example, the work of life scientists will be easier if the molecular organisation of life is modular than if it is not. It would be good to have a robust test - some of the discussion so far seems to involve wishful thinking.
My best attempt at this is:
- a tree is more modular if the edges near leaves are higher weight
- the modularity of a graph is the modularity of its min cut spanning tree
Does anyone know of an established answer to this question?
|
Graph modularity measure
|
CC BY-SA 3.0
| null |
2015-02-25T10:07:24.563
|
2018-01-25T07:43:29.207
| null | null |
8417
|
[
"clustering"
] |
5229
|
2
| null |
5226
|
75
| null |
In most of the well-established machine learning systems, categorical variables are handled naturally. For example in R you would use factors, in WEKA you would use nominal variables. This is not the case in scikit-learn. The decision trees implemented in scikit-learn uses only numerical features and these features are interpreted always as continuous numeric variables.
Thus, simply replacing the strings with a hash code should be avoided, because being considered as a continuous numerical feature any coding you will use will induce an order which simply does not exist in your data.
One example is to code ['red','green','blue'] with [1,2,3]: this would produce weird things like 'red' being lower than 'blue', and averaging a 'red' and a 'blue' giving a 'green'. Another more subtle example might happen when you code ['low', 'medium', 'high'] with [1,2,3]. In the latter case the ordering might happen to make sense; however, some subtle inconsistencies can arise when 'medium' is not in the middle of 'low' and 'high'.
Finally, the answer to your question lies in coding the categorical feature into multiple binary features. For example, you might code ['red','green','blue'] with 3 columns, one for each category, having 1 when the category match and 0 otherwise. This is called one-hot-encoding, binary encoding, one-of-k-encoding or whatever. You can check documentation here for [encoding categorical features](http://scikit-learn.org/stable/modules/preprocessing.html) and [feature extraction - hashing and dicts](http://scikit-learn.org/stable/modules/feature_extraction.html#dict-feature-extraction). Obviously one-hot-encoding will expand your space requirements and sometimes it hurts the performance as well.
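For illustration, here is a minimal sketch of that binary encoding with pandas (the resulting columns can then be fed to scikit-learn's trees); the toy data is made up.
```
import pandas as pd

df = pd.DataFrame({
    "colour": ["red", "green", "blue", "green"],   # categorical feature
    "size":   [1.2, 3.4, 2.2, 0.7],                # already-numeric feature
})

# one 0/1 column per category, so no artificial ordering is introduced
encoded = pd.get_dummies(df, columns=["colour"])
print(encoded)
```
`pd.get_dummies` is only one way to do it; scikit-learn's own OneHotEncoder or DictVectorizer, covered in the links above, achieve the same thing inside a pipeline.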
| null |
CC BY-SA 3.0
| null |
2015-02-25T10:10:55.683
|
2016-12-04T15:18:47.823
|
2016-12-04T15:18:47.823
|
26596
|
108
| null |
5230
|
2
| null |
1227
|
2
| null |
Have you heard about the [intersection graph](http://en.wikipedia.org/wiki/Intersection_graph)? You can try to draw players as points, connections (team mates) as edges and teams as transparent coloured blobs on top.
As for your original question, I cannot understand your goal. I think your formulation is not well defined / incomplete. Suppose you have teams A [1,2] B [2,3] and C [1,3]. What do you want to display? Do you want to list the parts of the Venn diagram? I think that for more than 3 sets this can become more cumbersome than the bipartite graph itself = simple listing of team compositions.
| null |
CC BY-SA 3.0
| null |
2015-02-25T11:18:26.313
|
2015-02-25T11:18:26.313
| null | null |
6550
| null |
5231
|
1
|
5239
| null |
1
|
566
|
I produced association rules by using the arules package (apriori). I'm left with +/- 250 rules. I would like to test/validate the rules that I have, like answering the question: How do I know that these association rules are true? How can I validate them? What are common practice to test it?
I thought about cross validation (with training data and test data), as I read that it's not impossible to use it on unsupervised learning methods... but I'm not sure if it makes sense since I don't use labeled data.
If someone has a clue, even if it's not specifically about association rules (but testing other unsupervised learning methods), that would also be helpful to me.
I uploaded an example of the data that I use here in case it's relevant: [https://www.mediafire.com/?4b1zqpkbjf15iuy](https://www.mediafire.com/?4b1zqpkbjf15iuy)
|
How to test/validate unlabeled data in association rules in R?
|
CC BY-SA 3.0
| null |
2015-02-25T14:10:04.777
|
2015-02-27T00:45:39.740
| null | null |
8422
|
[
"machine-learning",
"r",
"cross-validation"
] |
5232
|
2
| null |
435
|
1
| null |
First, some caveats
I'm not sure why you can't use your preferred programming (sub-)paradigm*, Inductive Logic Programming (ILP), or what it is that you're trying to classify. Giving more detail would probably lead to a much better answer; especially as it's a little unusual to approach selection of classification algorithms on the basis of the programming paradigm with which they're associated. If your real world example is confidential, then simply make up a fictional-but-analogous example.
Big Data Classification without ILP
Having said that, after ruling out ILP we have 4 other logic programming paradigms in our consideration set:
- Abductive
- Answer Set
- Constraint
- Functional
in addition to the dozens of paradigms and sub-paradigms outside of logic programming.
Within Functional Logic Programming, for instance, there exist extensions of ILP called Inductive Functional Logic Programming, which is based on inversion narrowing (i.e. inversion of the narrowing mechanism). This approach overcomes several limitations of ILP and ([according to some scholars, at least](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.225.4755&rep=rep1&type=pdf)) is just as suitable in terms of representation, with the benefit of allowing problems to be expressed in a more natural way.
Without knowing more about the specifics of your database and the barriers you face to using ILP, I can't know if this solves your problem or suffers from the same problems. As such, I'll throw out a completely different approach as well.
[ILP is contrasted with "classical" or "propositional" approaches to data mining](http://books.google.com/books?hl=en&lr=&id=xzVD8C2YpnQC&oi=fnd&pg=PR11&dq=Predictive%20Data-Mining.%20A%20Practical%20Guide.%20&ots=IN48sFFcSQ&sig=d6TOKBx_6qmitB8BSTk4_nLO9ak). Those approaches include the meat and bones of Machine Learning like decision trees, neural networks, regression, bagging and other statistical methods. Rather than give up on these approaches due to the size of your data, you can join the ranks of many Data Scientists, Big Data engineers and statisticians who utilize High Performance Computing (HPC) to employ these methods on massive data sets (there are also sampling and other statistical techniques you may choose to utilize to reduce the computational resources and time required to analyze the Big Data in your relational database).
HPC includes things like utilizing multiple CPU cores, scaling up your analysis with elastic use of servers with high memory and large numbers of fast CPU cores, using high-performance data warehouse appliances, employing clusters or other forms of parallel computing, etc. I'm not sure what language or statistical suite you're analyzing your data with, but as an example this [CRAN Task View](http://cran.r-project.org/web/views/HighPerformanceComputing.html) lists many HPC resources for the R language which would allow you to scale up a propositional algorithm.
| null |
CC BY-SA 3.0
| null |
2015-02-25T17:08:09.830
|
2015-02-25T17:08:09.830
| null | null |
2723
| null |
5233
|
1
| null | null |
2
|
1602
|
I want to know if there is any SVM R package that can handle more than one response variable (y) at a time, that is, train one model to predict more than one response variable. It could be a regression or a multi-class classification problem.
Thanks for your help
|
training one SVM model for predicting more than one response variable
|
CC BY-SA 3.0
| null |
2015-02-25T18:28:27.903
|
2019-03-20T17:21:16.947
| null | null |
8425
|
[
"machine-learning",
"r",
"algorithms"
] |
5234
|
2
| null |
5192
|
3
| null |
I don't think that you will learn much about data science (meaning, acquire understanding and skills) by using software tools like Tableau. Such tools mainly target advanced users (not data scientists), for example analysts and other subject matter experts, who use a graphical user interface (GUI) to analyze and (mostly) visualize data. Having said that, software tools like Tableau might be good enough for the initial phase of the data science workflow: exploratory data analysis (EDA).
In terms of data science self-education, there are several popular online courses (MOOCs) that you can choose from (most come in both free and paid versions). In addition to the one on Udacity that you've mentioned ([https://www.udacity.com/course/ud359](https://www.udacity.com/course/ud359)), there are two data science courses on Coursera: Introduction to Data Science by University of Washington ([https://www.coursera.org/course/datasci](https://www.coursera.org/course/datasci)) and a set of courses from Data Science specialization by Johns Hopkins University ([https://www.coursera.org/specialization/jhudatascience/1](https://www.coursera.org/specialization/jhudatascience/1)). Note that you can take specialization's individual courses for free at your convenience. There are several other, albeit less popular, data science MOOCs.
In terms of data sources, I'm not sure what you mean by "Dummy Data", but there is a wealth of open data sets, including many in the area of public health. You can review the corresponding resources listed on KDnuggets ([http://www.kdnuggets.com/datasets/index.html](http://www.kdnuggets.com/datasets/index.html)) and choose the ones you're interested in. For country-level analysis, the fastest way to obtain data is to find and visit the corresponding open data government websites. For example, for public health data in the US, I would go to [http://www.healthdata.gov](http://www.healthdata.gov) and [http://www.data.gov](http://www.data.gov) (the latter for corresponding non-medical data that you might want to include in your analysis).
In regard to research papers in the area of public health, I have two comments: 1) most empirical research in that (or any other) area IMHO can be considered a data science study/project; 2) you need to perform a literature review in the area or on the topic of your interest, so you're on your own in that sense.
Finally, a note on software tools. If you're serious about data science, I would suggest to invest some time in learning either R, or Python (if you don't know them already), as those are two most popular open source tools among data scientists nowadays. Both have a variety of feature-rich development environments as well as large ecosystems of packages/libraries and users/developers all over the world.
You might also find useful some of my other related answers here on Data Science StackExchange site. For example, I recommend you to read [this answer](https://datascience.stackexchange.com/a/742/2452), [this answer](https://datascience.stackexchange.com/a/843/2452) and [this answer](https://datascience.stackexchange.com/a/759/2452). Good luck!
| null |
CC BY-SA 3.0
| null |
2015-02-25T22:41:45.663
|
2015-02-25T22:41:45.663
|
2017-04-13T12:50:41.230
|
-1
|
2452
| null |
5235
|
2
| null |
5227
|
2
| null |
Do you know if the scheduler has a memory?
Let us assume for a moment that the scheduler has no memory. This is a straightforward classification (supervised learning) problem: the inputs are X, the outputs are the schedules (N->M maps). Actually, if every N gets scheduled and the only question is which M it gets, the outputs are lists of which channel (or none) is scheduled to each block, and there are only so many possible lists, so you can model them as discrete outputs (classes) with their own probabilities. Use whatever you like (AdaBoost, Naive Bayes, RBF SVM, Random Forest...) as a classifier. I think you will quickly learn about the general behavior of the scheduler.
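A rough sketch of that memoryless formulation, with entirely synthetic stand-in data (the real work is in mapping your observed allocation maps to class labels): each row is a flattened quality matrix $X_i$ and the target is the schedule observed at that instant.
```
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_instants, M, N = 200, 4, 3
X = rng.integers(1, 33, size=(n_instants, M * N))   # flattened M-by-N quality matrices
schedules = rng.integers(0, 10, size=n_instants)    # each observed allocation mapped to a class id

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X[:150], schedules[:150])
print(clf.score(X[150:], schedules[150:]))          # how predictable the scheduler looks on held-out instants
```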
If the scheduler has a memory, then things get complicated. I think you might approach that as a hidden Markov model: but the number of individual states may be quite large, and so it may be essentially impossible to build a complete map of transition probabilities.
| null |
CC BY-SA 3.0
| null |
2015-02-26T05:08:31.993
|
2015-02-26T05:08:31.993
| null | null |
26
| null |
5236
|
2
| null |
5228
|
1
| null |
I am not sure there is a clear answer to this, especially as the problem does not seem to be well-defined right now - your "figure" seems to indicate edge weights but you then mention node weights, something significantly different.
If the question is whether you can find a way to split a graph into two smaller modules, then you might want to look into applying [Sparsest Cut](http://en.wikipedia.org/wiki/Cut_%28graph_theory%29#Sparsest_cut) techniques - a cut with low cost would imply (?) high modularity. I believe these can be easily modified to account for either unlabeled, edge-labeled or node-labeled graphs.
| null |
CC BY-SA 3.0
| null |
2015-02-26T11:19:35.900
|
2015-02-26T11:19:35.900
| null | null |
8433
| null |
5237
|
2
| null |
5204
|
4
| null |
This is very similar to the Netflix problem; most matrix factorization methods can be adapted so that the error function is only evaluated at the known points. For instance, you can take the gradient descent approach to SVD (minimizing the Frobenius norm) but only evaluate the error and calculate the gradient at known points. I believe you can easily find code for this.
Another option would be exploiting the binary nature of your matrix and adapting binary matrix factorization tools in order to enforce binary factors (if you require them). I'm sure you can adapt one of the methods described [here](http://www.hongliangjie.com/2011/03/15/reviews-on-binary-matrix-decomposition/) to work with unknown data using a similar trick as the one above.
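For reference, here is a minimal sketch of the first approach in plain NumPy: gradient descent on the squared error, evaluated only at the known (like/dislike) entries. The toy matrix, learning rate and regularization are made up.
```
import numpy as np

# +1 = like, -1 = dislike, 0 = unknown (excluded from the loss)
R = np.array([[ 1, -1,  0],
              [ 0,  1,  1],
              [-1,  0,  1]], dtype=float)
known = R != 0
n_users, n_items, k = R.shape[0], R.shape[1], 2
rng = np.random.default_rng(0)
U = rng.normal(scale=0.1, size=(n_users, k))
V = rng.normal(scale=0.1, size=(n_items, k))
lr, reg = 0.05, 0.01

for epoch in range(500):
    for u, i in zip(*np.nonzero(known)):      # iterate over observed entries only
        err = R[u, i] - U[u] @ V[i]
        u_row = U[u].copy()
        U[u] += lr * (err * V[i] - reg * U[u])
        V[i] += lr * (err * u_row - reg * V[i])

print(np.round(U @ V.T, 2))                   # predicted scores, including the unknown entries
```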
| null |
CC BY-SA 3.0
| null |
2015-02-26T11:31:04.870
|
2015-02-26T11:31:04.870
| null | null |
8433
| null |
5239
|
2
| null |
5231
|
0
| null |
You may want to consider using your own `APparameter` object to put "significance" constraints on the rules learned by Apriori. See page 13 of the [arules documentation](http://cran.r-project.org/web/packages/arules/arules.pdf). This could reduce the number of uninteresting rules returned in your run.
In lieu of gold standard data for your domain, consider bootstrap resampling as a form of validation, as described [in this article](http://eprints.pascal-network.org/archive/00003198/01/lal.pdf).
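For example, here is a minimal sketch of passing such constraints through `apriori()`'s parameter argument, using the `Groceries` transactions that ship with arules (the thresholds are placeholders, not recommendations for your domain):
```
library(arules)
data(Groceries)                          # example transactions shipped with arules

rules <- apriori(Groceries,
                 parameter = list(support = 0.005,   # prune rare itemsets
                                  confidence = 0.5,  # keep only fairly reliable rules
                                  minlen = 2))       # no empty antecedents
inspect(head(sort(rules, by = "lift"), 5))
```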
| null |
CC BY-SA 3.0
| null |
2015-02-27T00:45:39.740
|
2015-02-27T00:45:39.740
| null | null |
8392
| null |
5241
|
2
| null |
2353
|
0
| null |
I think this line is wrong:
```
ytr = X(ii(1:N/2),:);
```
ytr should be the label of the training data. In this case, it should be
```
ytr = y(ii(1:N/2),:);
```
| null |
CC BY-SA 3.0
| null |
2015-02-27T05:28:25.457
|
2015-02-28T13:38:20.850
|
2015-02-28T13:38:20.850
|
84
|
8415
| null |
5242
|
2
| null |
5233
|
0
| null |
Check out the e1071 package; here is the manual: [http://cran.r-project.org/web/packages/e1071/e1071.pdf](http://cran.r-project.org/web/packages/e1071/e1071.pdf)
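In case a concrete starting point helps, here is a minimal, purely illustrative sketch of fitting and tuning an SVM with e1071 on the built-in iris data (adapt it to your own data and task; the hyperparameter grid is just a placeholder):
```
library(e1071)

# Purely illustrative: fit an SVM classifier on the built-in iris data
fit  <- svm(Species ~ ., data = iris, kernel = "radial", cost = 1)
pred <- predict(fit, iris)
table(pred, iris$Species)

# tune() runs a grid search over hyperparameters with cross-validation
tuned <- tune(svm, Species ~ ., data = iris,
              ranges = list(cost = c(0.1, 1, 10), gamma = c(0.01, 0.1, 1)))
summary(tuned)
```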
| null |
CC BY-SA 3.0
| null |
2015-02-27T08:13:53.847
|
2015-02-27T08:13:53.847
| null | null |
8310
| null |
5243
|
1
|
5286
| null |
0
|
283
|
I have been working in the last years with statistics and have gone pretty deep in programming with R. I have however always felt that I wasn't completely grasping what I was doing, still understanding all passages and procedures conceptually.
I wanted to get a bit deeper into the math behind it all. I've been looking online for texts and tips, but all texts start with a very high level. Any suggestions on where to start?
To be more precise, I'm not looking for an exaustive list of statistical models and how they work, I kind of get those. I was looking for something like "Basics of statistical modelling"
|
Interested in Mathematical Statistics... where to start from?
|
CC BY-SA 3.0
| null |
2015-02-27T10:22:59.577
|
2015-03-06T03:46:41.693
| null | null | null |
[
"statistics",
"predictive-modeling"
] |
5244
|
1
|
5281
| null |
3
|
1203
|
Many of us are very familiar with using R in reproducible, but very much targeted, ad-hoc analysis. Given that R is currently the best collection of cutting-edge scientific methods from world-class experts in each particular field, and given that plenty of libraries exist for data io in R, it seems very natural to extend its applications into production environments for live decision making.
Therefore my questions are:
- did someone of you go into production with pure R (I know of shiny, yhat etc, but would be very interesting to hear of pure R);
- is there a good book/guide/article on the topic of building R into some serious live decision-making pipelines (such as e.g. credit scoring);
- I would like to hear also if you think it's not a good idea at all;
|
R in production
|
CC BY-SA 3.0
| null |
2015-02-27T10:42:35.407
|
2022-03-16T17:41:50.737
| null | null |
8310
|
[
"r",
"predictive-modeling",
"scoring"
] |
5248
|
2
| null |
2575
|
3
| null |
In Hive itself? Unfortunately, the answer is simply no -- as the language definition manual shows, that statistic is simply not built in. In addition to the language manual, you can get more information on statistics in development in Hive [here](https://cwiki.apache.org/confluence/display/Hive/StatsDev) and [here](https://cwiki.apache.org/confluence/display/Hive/StatisticsAndDataMining).
Having said that, there are plenty of ways to calculate Kendall's W on data that's in Hive.
You could write out the data to a file or query it into R or a statistical package such as SAS, Stata, MATLAB, or Excel, then run your calculation and, if necessary, write your results back to Hive.
In R, for instance, you could do something like this:
```
install.packages("RODBC")
require(RODBC)
db <- odbcConnect("Hive_DB")
hql <- "select * from table A"
data <- sqlQuery(db , hql)
kenw <- cor(x = data$a, y = data$b, method="kendall")
sqlSave(db, kenw, tablename = "new_table_of_kendall_coef")
```
Or, if using Linux or Unix, you could use `RHive` without needing an ODBC name.
Another way to go about it would be to take the functions that do exist in Hive (which you linked to) and calculate Kendall's coefficient yourself with a custom function. As to how to specifically implement that, well you'd probably want to post on Cross Validated (stats.stackexchange.com).
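If you go the custom-function route, Kendall's W needs only a few lines once the ratings matrix is pulled out of Hive. A minimal sketch (rows = judges/raters, columns = items being ranked; the toy data is illustrative and tie corrections are ignored):
```
# Kendall's coefficient of concordance W for a ratings matrix
kendall_w <- function(ratings) {
  rk <- t(apply(ratings, 1, rank))     # rank items within each judge
  m  <- nrow(rk)                       # number of judges
  n  <- ncol(rk)                       # number of items
  R  <- colSums(rk)                    # rank sums per item
  S  <- sum((R - mean(R))^2)
  12 * S / (m^2 * (n^3 - n))           # simple formula, no tie correction
}

set.seed(1)
toy <- matrix(sample(1:10, 30, replace = TRUE), nrow = 5)  # 5 judges, 6 items
kendall_w(toy)
```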
| null |
CC BY-SA 3.0
| null |
2015-02-27T19:38:22.287
|
2015-02-27T20:00:37.363
|
2015-02-27T20:00:37.363
|
2723
|
2723
| null |
5249
|
1
|
5259
| null |
1
|
633
|
I start with a data.frame (or a data_frame) containing my dependent Y variable for analysis, my independent X variables, and some "Z" variables -- extra columns that I don't need for my modeling exercise.
What I would like to do is:
- Create an analysis data set without the Z variables;
- Break this data set into random training and test sets;
- Find my best model;
- Predict on both the training and test sets using this model;
- Recombine the training and test sets by rows; and finally
- Recombine these data with the Z variables, by column.
It's the last step, of course, that presents the problem -- how do I make sure that the rows in the recombined training and test sets match the rows in the original data set? We might try to use the row.names variable from the original set, but I agree with Hadley that this is an error-prone kludge (my words, not his) -- why have a special column that's treated differently from all other data columns?
One alternative is to create an ID column that uniquely identifies each row, and then keep this column around when dividing into the train and test sets (but excluding it from all modeling formulas, of course). This seems clumsy as well, and would make all my formulas harder to read.
This must be a solved problem -- could people tell me how they deal with this? Especially using the plyr/dplyr/tidyr package framework?
|
Combining data sets without using row.name
|
CC BY-SA 3.0
| null |
2015-02-28T00:44:04.353
|
2015-03-01T07:16:51.440
| null | null |
3510
|
[
"machine-learning",
"r",
"predictive-modeling"
] |
5251
|
2
| null |
5244
|
2
| null |
R and most of its CRAN modules are licensed using the GPL.
In many companies, legal departments go crazy if you propose to use anything that is GPL in production... It's not reasonable, but you'll see they love Apache, and hate GPL. Before going into production, make sure it's okay with the legal department. (IMHO you are safe to use your modified code for internal products. Integrating R into your commercial product and handing this out to others is very different. But unfortunately, many legal departments try to ban all use of GPL whatsoever.)
Other than that, R is often really slooow unless it is calling Fortran code hidden inside. It's nice when you are still trying to figure out what to do. But for production, you may want maximum performance and full integration with your services. Benchmark for yourself whether R is the best choice for your use case.
On the performance issues with R (I know R advocates are going to downvote me for saying so ...):
>
Morandat, F., Hill, B., Osvald, L., & Vitek, J. (2012). Evaluating the design of the R language. In ECOOP 2012–Object-Oriented Programming (pp. 104-131). Springer Berlin Heidelberg.
(by the TraceR/ProfileR/ReactoR people from Purdue, who are now working on fastR, which tries to execute R code on the JVM?) states:
>
On those benchmarks, R is on average 501 times slower than C and 43 times slower than Python.
and:
>
Observations. R is clearly slow and memory inefficient. Much more so than other dynamic languages. This is largely due to the combination of language features (call-by-value, extreme dynamism, lazy evaluation) and the lack of efficient built-in types. We believe that with some effort it should be possible to improve both time and space usage, but this would likely require a full rewrite of the implementation.
Sorry to break the news. It's not my research, but it aligns with my observations.
| null |
CC BY-SA 3.0
| null |
2015-02-28T17:10:41.147
|
2015-02-28T19:56:59.157
|
2015-02-28T19:56:59.157
|
924
|
924
| null |
5252
|
2
| null |
1165
|
5
| null |
As a fellow CS Ph.D. defending my dissertation in a Big Data-esque topic this year (I started in 2012), the best piece of material I can give you is in a [link](http://www.rpajournal.com/dev/wp-content/uploads/2014/10/A3.pdf).
This is an article written by two Ph.D.s from MIT who have talked about Big Data and MOOCs. Probably, you will find this a good starting point. BTW, along this note, if you really want to come up with a valid topic (that a committee and your adviser will let you propose, research and defend) you need to read LOTS and LOTS of papers. The majority of Ph.D. students make the fatal error of thinking that some 'idea' they have is new, when it's not and has already been done. You'll have to do something truly original to earn your Ph.D. Rather than actually focus on forming an idea right now, you should do a good literature survey and the ideas will 'suggest themselves'. Good luck! It's an exciting time for you.
| null |
CC BY-SA 4.0
| null |
2015-02-28T17:15:39.423
|
2020-08-17T20:32:32.897
|
2020-08-17T20:32:32.897
|
98307
|
8395
| null |
5253
|
2
| null |
5120
|
1
| null |
I would actually try out regression. Also, don't make the mistake of using the serial number in your machine learning algorithms! The reason I'm suggesting regression as opposed to 'better' machine learning algorithms is that you said you wanted to learn, and it's important to understand the algorithms you're using (for the long run, and to truly be good at this stuff). Regression is the easiest tool in the book, and it works quite well! Weka is so easy to use that you'll be able to plug and play different machine learning algorithms just for the sake of it.
Another pointer that's won me several competitions is to do some feature selection before using regression/machine learning. For example, in your case, it is reasonable to assume that a student who scores high in Physics probably has a better chance of scoring high in Math as opposed to someone who scores high in English (but not necessarily Physics). If you have enough data, the algorithm itself will be able to deduce these positive/negative correlations and train the model accordingly. Sometimes there isn't enough data, and you have to do some feature selection yourself (a minimal sketch of the basic workflow is at the end of this answer).
Good luck! I'm a regular participant on Kaggle myself, and I think it's great that you're taking the 'hacker' route to learn more. It's the best way to get your hands dirty on real data and engineering problems.
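The sketch below shows the idea in R; the `scores` data frame and variable names are made up purely for illustration. Fit a full linear model, then let stepwise AIC decide which features pull their weight:
```
set.seed(42)
# Hypothetical student scores: physics is related to math, english is not
n <- 200
physics <- rnorm(n, mean = 60, sd = 10)
english <- rnorm(n, mean = 60, sd = 10)
math    <- 0.7 * physics + rnorm(n, sd = 5)
scores  <- data.frame(math, physics, english)

full     <- lm(math ~ physics + english, data = scores)
selected <- step(full, trace = 0)   # stepwise AIC-based feature selection
summary(selected)
```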
| null |
CC BY-SA 3.0
| null |
2015-02-28T17:21:46.300
|
2015-02-28T17:21:46.300
| null | null |
8395
| null |
5254
|
2
| null |
5196
|
0
| null |
If you know your approach is working, you can try to implement it more efficiently. Identify the crucial points, try to vectorize them better, for example.
The R interpreter isn't the fastest. There are some efforts underway, but they are not yet ready. Vectorization (which means less interpreter, more low-level code) often yields a factor of 2x-5x, but you can sometimes get a factor of 100x by implementing it e.g. in C. (And the R advocates are going to hate me for saying this...)
Once you know that your approach is working, this may be worth the effort.
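As a small illustration of the kind of gain vectorization gives (exact timings will vary by machine):
```
x <- runif(1e6)

f_loop <- function(x) {          # interpreted loop
  s <- 0
  for (i in seq_along(x)) s <- s + x[i]^2
  s
}
f_vec <- function(x) sum(x^2)    # pushed down to compiled code

system.time(f_loop(x))
system.time(f_vec(x))
all.equal(f_loop(x), f_vec(x))   # same result, very different cost
```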
| null |
CC BY-SA 3.0
| null |
2015-02-28T21:42:31.383
|
2015-02-28T21:42:31.383
| null | null |
924
| null |
5255
|
1
|
5256
| null |
12
|
1256
|
Terms like 'data science' and 'data scientist' are increasingly used these days.
Many companies are hiring 'data scientists', but I don't think it's a completely new job.
Data have existed in the past, and someone had to deal with them.
I guess the term 'data scientist' has become more popular because it sounds fancier and more 'sexy'.
How were data scientists called in the past?
|
What is an 'old name' of data scientist?
|
CC BY-SA 3.0
| null |
2015-02-28T22:10:58.473
|
2016-01-29T01:33:57.403
| null | null |
8478
|
[
"bigdata"
] |
5256
|
2
| null |
5255
|
13
| null |
In reverse chronological order: data miner, statistician, (applied) mathematician.
| null |
CC BY-SA 3.0
| null |
2015-02-28T23:21:40.430
|
2015-02-28T23:21:40.430
| null | null |
381
| null |
5257
|
1
| null | null |
1
|
100
|
I want to turn reviews of up to 5 stars and the number of reviews into upvotes. What's a good algorithm for doing this?
A venue with 10 reviews total with a 5-star average rating should obviously get more upvotes than a venue with 10 reviews total with a 3-star average rating. Also, a venue with 60 ratings and a 4-star rating should probably get more upvotes than the one with 10 reviews and a 5-star rating.
I need this rating to be based off of the total number of reviews and the average star rating, but I would also like the number to stay below a variable number (for example, say upvotes stay below 100, but I can also plug in 200 and it would stay below 200).
|
Good formula for turning star reviews into upvotes
|
CC BY-SA 3.0
| null |
2015-03-01T02:56:05.590
|
2015-11-20T09:15:55.650
|
2015-10-21T08:05:21.093
|
13413
|
8482
|
[
"algorithms"
] |
5259
|
2
| null |
5249
|
1
| null |
You need neither the row names nor an additional ID column. Here is an approach based on the indices of the training set.
An example data set:
```
set.seed(1)
dat <- data.frame(Y = rnorm(10),
X1 = rnorm(10),
X2 = rnorm(10),
Z1 = rnorm(10),
Z2 = rnorm(10))
```
Now, your steps:
- Create an analysis data set without the Z variables
dat2 <- dat[grep("Z", names(dat), invert = TRUE)]
dat2
# Y X1 X2
# 1 -0.6264538 1.51178117 0.91897737
# 2 0.1836433 0.38984324 0.78213630
# 3 -0.8356286 -0.62124058 0.07456498
# 4 1.5952808 -2.21469989 -1.98935170
# 5 0.3295078 1.12493092 0.61982575
# 6 -0.8204684 -0.04493361 -0.05612874
# 7 0.4874291 -0.01619026 -0.15579551
# 8 0.7383247 0.94383621 -1.47075238
# 9 0.5757814 0.82122120 -0.47815006
# 10 -0.3053884 0.59390132 0.41794156
- Break this data set into random training and test sets
train_idx <- sample(nrow(dat2), 0.8 * nrow(dat2))
train_idx
# [1] 7 4 3 10 9 2 1 5
train <- dat2[train_idx, ]
train
# Y X1 X2
# 7 0.4874291 -0.01619026 -0.15579551
# 4 1.5952808 -2.21469989 -1.98935170
# 3 -0.8356286 -0.62124058 0.07456498
# 10 -0.3053884 0.59390132 0.41794156
# 9 0.5757814 0.82122120 -0.47815006
# 2 0.1836433 0.38984324 0.78213630
# 1 -0.6264538 1.51178117 0.91897737
# 5 0.3295078 1.12493092 0.61982575
test_idx <- setdiff(seq(nrow(dat2)), train_idx)
test_idx
# [1] 6 8
test <- dat2[test_idx, ]
test
# Y X1 X2
# 6 -0.8204684 -0.04493361 -0.05612874
# 8 0.7383247 0.94383621 -1.47075238
- Find my best model
...
- Predict on both the training and test sets using this model
...
- Recombine the training and test sets by rows
idx <- order(c(train_idx, test_idx))
dat3 <- rbind(train, test)[idx, ]
identical(dat3, dat2)
# [1] TRUE
- Recombine these data with the Z variables, by column
dat4 <- cbind(dat3, dat[grep("Z", names(dat))])
identical(dat, dat4)
# [1] TRUE
In summary, we can use the indices of the training and test data to combine the data in the rows in the original order.
| null |
CC BY-SA 3.0
| null |
2015-03-01T07:16:51.440
|
2015-03-01T07:16:51.440
| null | null |
106
| null |
5261
|
1
| null | null |
4
|
758
|
I'm struggling with a for loop in R. I have the following data frame with sentences and two dictionaries with pos and neg words:
```
library(stringr)
library(plyr)
library(dplyr)
library(stringi)
library(qdap)
library(qdapRegex)
library(reshape2)
library(zoo)
# Create data.frame with sentences
sent <- data.frame(words = c("great just great right size and i love this notebook", "benefits great laptop at the top",
"wouldnt bad notebook and very good", "very good quality", "bad orgtop but great",
"great improvement for that great improvement bad product but overall is not good", "notebook is not good but i love batterytop"), user = c(1,2,3,4,5,6,7),
number = c(1,1,1,1,1,1,1), stringsAsFactors=F)
# Create pos/negWords
posWords <- c("great","improvement","love","great improvement","very good","good","right","very","benefits",
"extra","benefit","top","extraordinarily","extraordinary","super","benefits super","good","benefits great",
"wouldnt bad")
negWords <- c("hate","bad","not good","horrible")
```
And now I'm going to replicate the original data frame to simulate big data:
```
# Replicate original data.frame - big data simulation (700.000 rows of sentences)
df.expanded <- as.data.frame(replicate(100000,sent$words))
sent <- coredata(sent)[rep(seq(nrow(sent)),100000),]
sent$words <- paste(c(""), sent$words, c(""), collapse = NULL)
rownames(sent) <- NULL
```
For my further approach, I'll have to order the dictionary words in descending order (by length), together with their sentiment score (pos word = 1 and neg word = -1).
```
# Ordering words in pos/negWords
wordsDF <- data.frame(words = posWords, value = 1,stringsAsFactors=F)
wordsDF <- rbind(wordsDF,data.frame(words = negWords, value = -1))
wordsDF$lengths <- unlist(lapply(wordsDF$words, nchar))
wordsDF <- wordsDF[order(-wordsDF[,3]),]
wordsDF$words <- paste(c(""), wordsDF$words, c(""), collapse = NULL)
rownames(wordsDF) <- NULL
```
Then I have the following function with a for loop: 1) match exact words, 2) count them, 3) compute the score, 4) remove matched words from the sentence for the next iteration:
```
scoreSentence_new <- function(sentence){
score <- 0
for(x in 1:nrow(wordsDF)){
sd <- function(text) {stri_count(text, regex=wordsDF[x,1])} # count matched words
results <- sapply(sentence, sd, USE.NAMES=F) # count matched words
score <- (score + (results * wordsDF[x,2])) # compute score
sentence <- str_replace_all(sentence, wordsDF[x,1], " ") # remove matched words from sentence for next iteration
}
score
}
```
When I call that function
```
SentimentScore_new <- scoreSentence_new(sent$words)
sent_new <- cbind(sent, SentimentScore_new)
sent_new$words <- str_trim(sent_new$words, side = "both")
```
it results in the desired output:
```
words user SentimentScore_new
great just great right size and i love this notebook 1 4
benefits great laptop at the top 2 2
wouldnt bad notebook and very good 3 2
very good quality 4 1
bad orgtop but great 5 0
great improvement for that great improvement bad product but overall is not good 6 0
notebook is not good but i love batterytop 7 0
```
In reality I'm using dictionaries of about 7,000 pos/neg words and I have 200,000 sentences. When I used my approach on 1,000 sentences it took 45 minutes. Please, could anyone help me with a faster approach using vectorization or a parallel solution? Because of my beginner R programming skills I'm at the end of my efforts :-( Thank you very much in advance for any advice or solution.
I was wondering about something like this:
```
n <- 1:nrow(wordsDF)
score <- 0
try_1 <- function(ttt) {
sd <- function(text) {stri_count(text, regex=wordsDF[ttt,1])}
results <- sapply(sent$words, sd, USE.NAMES=F)
score <- (score + (results * wordsDF[ttt,2])) # compute score (count * sentValue)
sent$words <- str_replace_all(sent$words, wordsDF[ttt,1], " ")
score
}
a <- unlist(sapply(n, try_1))
apply(a,1,sum)
```
But it doesn't work :-(
|
Vectorization of for loop in sentiment analysis
|
CC BY-SA 3.0
| null |
2015-03-01T14:28:22.123
|
2015-04-06T22:19:59.417
|
2015-03-02T08:20:08.023
|
8488
|
8488
|
[
"r"
] |
5263
|
2
| null |
851
|
3
| null |
Try using [Earth Mover's Distance](http://ai.stanford.edu/%7Erubner/papers/rubnerIjcv00.pdf). It measures how much it "costs" to optimally reshape one histogram into another, where an elementary transform (using the terminology of edit distance-like distance measures) is moving a unit of mass from one bin to another. If you have an implementation of EMD, then your distance is dist = EMD(H1, H2, D), where
```
H1 = [ 14, 13, 40, 31 ]; -- histogram #1 (first image)
H2 = [ 82, 8, 7 ]; -- histogram #2 (second image)
D = [ D11, D12, D13; ... ; D41, D42, D43 ]; -- (cross-bin) ground distance
```
Dij is a distance from the i'th bin of the first histogram to the j'th bin of the second histogram. For example, D21 is a distance between 'White' and 'Pink'. It may be defined differently, based on which color space you want to use, and further, based on how you define what it means for two colors to be similar. If I use RGB, and I do not care much about how the human eye perceives colors, then D21 can be the $\ell_2$ norm of the difference [1, 1, 1] - [1, 0.05, 0.7]. (As far as I remember, the paper I mentioned uses LAB instead of RGB.)
P.S. Besides color, you may include spatial information into comparison. EMD can handle it well, as long as you properly combine similarity of colors with spatial similarity in the definition of ground distance D.
| null |
CC BY-SA 4.0
| null |
2015-03-02T05:05:55.497
|
2020-08-17T20:32:41.403
|
2020-08-17T20:32:41.403
|
98307
|
8462
| null |
5264
|
1
|
5269
| null |
1
|
815
|
Does reinforcement learning always need a grid world problem to be applied to?
Can anyone give me any other example of how reinforcement learning can be applied to something which does not have a grid world scenario?
|
Does reinforcement learning only work on grid world?
|
CC BY-SA 4.0
| null |
2015-03-02T05:10:01.897
|
2019-12-03T06:39:57.647
|
2019-12-03T06:39:57.647
|
52298
|
8013
|
[
"machine-learning",
"reinforcement-learning"
] |
5265
|
2
| null |
5264
|
3
| null |
Reinforcement learning does not depend on a grid world. It can be applied to any space of possibilities where there is a "fitness function" that maps between points in the space to a fitness metric.
Topological spaces have formally-defined "neighborhoods" but do not necessarily conform to a grid or any dimensional representation. In a topological space, the only way to get from "here" to "there" is via some "paths", which are sets of contiguous neighborhoods. Continuous fitness functions can be defined over topological spaces.
For what it is worth, reinforcement learning is not the be-all-end-all (family of) learning algorithms in fitness landscapes. In a sufficiently rugged fitness landscape, other learning algorithms can perform better. Also, if there are regions of the space where there is no well-defined fitness function at a given point in time, it may be indeterminate as to which learning algorithms are optimal, if any.
| null |
CC BY-SA 3.0
| null |
2015-03-02T10:06:27.483
|
2015-03-02T10:06:27.483
| null | null |
609
| null |
5267
|
1
| null | null |
4
|
1261
|
I took on a project to predict the outcome of soccer matches, but it turned out to be a very challenging task. I tried out different models but only got 50-54% accuracy on my test dataset.
Some of the models were created in such a way that one model predicts whether a team will win, draw, or lose a match, and the same kind of model also predicts whether the opponent of that team will win, draw, or lose the match; each of these models predicts with an accuracy of about 50% for each team separately. The second set of models I tried takes the combined data from both teams and predicts which class the match belongs to (home win, away win, draw).
In the system, only 10 matches are given every day to be predicted. That means that if I predict the 10 matches using the second model, I have a chance of predicting 5 correctly. In this project, I only need to predict 3 matches correctly out of the 10 matches given in a day. Is there a way of knowing which 3 matches my models have the best chance of predicting correctly? I only need to get 3 correct predictions; I usually get 5 correct, but I don't know how to select my 3 best matches.
Note: The first type of model uses about 50 features for prediction while the second uses 101. I've tried ensembles; they still give me ~50% accuracy. I'm still about to set up a system that selects matches where the prediction for the home team does not contradict the prediction for the away team using the first type of models.
|
Predicting Soccer: guessing which matches a model will predict correctly
|
CC BY-SA 4.0
| null |
2015-03-02T21:15:34.480
|
2019-03-19T17:47:38.893
|
2019-03-19T17:47:38.893
|
69822
|
4811
|
[
"machine-learning",
"predictive-modeling"
] |
5268
|
1
|
5674
| null |
7
|
804
|
The project I am working on allows users to create Stock Screeners based on both technical and fundamental criteria. Stock Screeners are then "backtested" by simulating the results of applying in over the last 10 years using Point-in-Time data. I get back the list of trades and overall graph of performance.
(If that is unclear, I have an overview [here](https://www.equitieslab.com/features/stock-screener/) and [there](https://www.equitieslab.com/wiki/QuickStart/StockScreener) with more details).
Now a common problem is that users create overfitted stock screeners. I would love to give them a warning when the screen is likely to be over-fitted.
Fields I have to work with
- All trades made by the Stock Screener
Stock, Start Date, Start Price, End Date, End Price
- S&P 500 performance for the same time frame
- Market Cap, Sector, and Industry of each Stock
|
How to detect overfitting of a stock screener
|
CC BY-SA 3.0
| null |
2015-03-02T23:02:45.583
|
2015-05-03T00:30:18.570
| null | null |
8344
|
[
"machine-learning",
"data-mining",
"classification",
"bigdata",
"statistics"
] |
5269
|
2
| null |
5264
|
6
| null |
The short answer is no! Reinforcement Learning is not limited to discrete spaces. But most of the introductory literature does deal with discrete spaces.
As you might know by now, there are three important components in any Reinforcement Learning problem: rewards, states and actions. The first is a scalar quantity, and theoretically the latter two can be either discrete or continuous. The convergence proofs and analyses of the various algorithms are easier to understand for the discrete case, and the corresponding algorithms are also easier to code. That is one of the reasons most introductory material focuses on them.
Having said that, it should be interesting to note that the early research on Reinforcement Learning actually focussed on continuous state representations. It was only in the 90s that the literature started presenting all the standard algorithms for discrete spaces, as we had a lot of proofs for them.
Finally, if you noticed carefully, I said continuous states only. Mapping continuous states and continuous actions is hard. Nevertheless, we do have some solutions for now. But it is an active area of research in RL.
This [paper by Sutton](http://webdocs.cs.ualberta.ca/~sutton/papers/SSR-98.pdf) from '98 should be a good start for your exploration!
| null |
CC BY-SA 3.0
| null |
2015-03-03T01:35:31.727
|
2015-03-03T01:35:31.727
| null | null |
8491
| null |
5270
|
2
| null |
4980
|
4
| null |
Training a model, related to information extraction, in general, and named entity recognition/resolution (NER), in particular, is described in detail in Chapter 7 of the [NLTK Book](http://www.nltk.org/book), available online at this URL: [http://www.nltk.org/book/ch07.html](http://www.nltk.org/book/ch07.html).
Additionally, I think that you might find useful my [related answer](https://stats.stackexchange.com/a/136760/31372) on Cross Validated site. It has a lot of references to relevant sources on NER and related topics as well as to various related software tools.
| null |
CC BY-SA 3.0
| null |
2015-03-03T05:59:08.820
|
2015-03-03T05:59:08.820
|
2017-04-13T12:44:20.183
|
-1
|
2452
| null |
5273
|
1
| null | null |
6
|
251
|
The training set contains $p$ papers. Each paper is annotated as research or non-research. To develop the research paper filter, we consider the $W$ most frequent phrases in a paper. The research paper filter will use the presence/absence of these $W$ phrases to decide if the paper is indeed a research paper or not.
- How many possible hypotheses are there?
I have $2^W$ possible hypotheses, but if we are including conjunctive hypothesis symbols, then I would say $4^W$. Is my logic correct?
- How many of them are consistent?
For this question, I have no idea where to begin. Any insight?
|
How many possible hypotheses
|
CC BY-SA 3.0
| null |
2015-03-03T16:12:13.860
|
2015-12-22T08:29:16.733
|
2020-06-16T11:08:43.077
|
-1
|
8505
|
[
"machine-learning",
"classification"
] |
5275
|
2
| null |
5255
|
4
| null |
I do think it is a new job: basically, a data scientist has to apply mathematical algorithms to data under considerable constraints in terms of 1) run time of the application and 2) resource use of the application. If these constraints are not present, I would not call the job data science. Moreover, these algorithms often need to be run on distributed systems, which is another dimension of the problem.
Of course, this has been done before, in some combination of statistics, mathematics and programming, but it was not widespread enough to give rise to the new term. The real rise of data science comes from the ability to gather large amounts of data, and thus the need to process it.
| null |
CC BY-SA 3.0
| null |
2015-03-04T04:39:35.380
|
2015-03-04T04:39:35.380
| null | null |
3070
| null |
5276
|
2
| null |
5255
|
5
| null |
Terms that covered more or less the same topics that Data Science covers today:
- Pattern Recognition
- Machine Learning
- Data Mining
- Quantitative methods
| null |
CC BY-SA 3.0
| null |
2015-03-04T06:05:05.320
|
2015-03-04T06:05:05.320
| null | null |
8517
| null |
5277
|
1
|
5278
| null |
16
|
28405
|
So, our data set this week has 14 attributes and each column has very different values. One column has values below 1 while another column has values that go from three to four whole digits.
We learned normalization last week and it seems like you're supposed to normalize data when they have very different values. For decision trees, is the case the same?
I'm not sure about this but would normalization affect the resulting decision tree from the same data set? It doesn't seem like it should but...
|
Do you have to normalize data when building decision trees using R?
|
CC BY-SA 3.0
| null |
2015-03-04T08:05:45.003
|
2015-03-04T12:15:10.117
|
2015-03-04T08:15:57.703
|
8318
|
8318
|
[
"r",
"beginner"
] |
5278
|
2
| null |
5277
|
21
| null |
The most common types of decision trees you encounter are not affected by any monotonic transformation. So, as long as you preserve the order, the decision trees are the same (obviously by the same tree I mean the same decision structure, not the same threshold values for each test in each node of the tree).
The reason is how the usual impurity functions work. To find the best split, the algorithm searches each dimension (attribute) for a split point, which is basically an if clause that puts on one side the target values of instances whose test value is less than the split value, and on the other side the values greater than or equal to it. This happens for numerical attributes (which I think is your case, because I do not know how to normalize a nominal attribute). Now you might note that the criterion is only less than or greater than. This means that the only information the attributes contribute to finding the split (and the whole tree) is the order of the values. So as long as you transform your attributes in such a way that the original ordering is preserved, you will get the same tree; a quick check of this is sketched below.
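A quick sanity check, assuming the rpart implementation of CART (the split thresholds change under the transform, but the induced partition and the predictions do not):
```
library(rpart)

fit_raw <- rpart(Species ~ Sepal.Length + Petal.Length, data = iris)

iris_log <- transform(iris,
                      Sepal.Length = log(Sepal.Length),   # monotonic transform
                      Petal.Length = log(Petal.Length))
fit_log <- rpart(Species ~ Sepal.Length + Petal.Length, data = iris_log)

# Thresholds differ between the two trees, but the fitted classes match
all.equal(predict(fit_raw, type = "class"),
          predict(fit_log, type = "class"))
```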
Not all models are insensitive to this kind of transformation. For example, linear regression models give the same results if you multiply an attribute by something different from zero: you will get different regression coefficients, but the predicted values will be the same. This is not the case when you take a log transformation. So for linear regression, for example, normalizing is useless since it will give the same result.
However, this is not the case for penalized linear regression, like ridge regression. In penalized linear regressions a constraint is applied to the coefficients; the idea is that the constraint is applied to the sum of a function of the coefficients. Now if you inflate an attribute, its coefficient will be deflated, which means that in the end the penalization for that coefficient is artificially modified. In this kind of situation, you normalize the attributes so that each coefficient is constrained 'fairly'.
Hope it helps
| null |
CC BY-SA 3.0
| null |
2015-03-04T12:15:10.117
|
2015-03-04T12:15:10.117
| null | null |
108
| null |
5279
|
1
| null | null |
4
|
1014
|
I was given a target function to design neural network and train: (y = (x1 ∧ x2) ∨ (x3 ∧ x4))
The number of input and number of output seems obvious (4 and 1). And the training data can use truth table.
However, in order to train as a multilayer artificial neural network, I need to choose number of hidden units. May I know where can I find some general guideline for this?
Thank you!
|
How to select topology for neural network?
|
CC BY-SA 3.0
| null |
2015-03-04T14:19:00.713
|
2015-04-04T03:28:34.233
| null | null |
8524
|
[
"neural-network"
] |
5280
|
1
| null | null |
2
|
124
|
I want to generate an $N\times N$ matrix $A$ so as to hit a target $N$-vector of row sums while, simultaneously, every column sums to 1. In addition to this, I have a prefixed set of elements which are set to zero.
For example, beginning with:
$$
\left[\begin{array}{rrr}
0 & 1 & 1 \\
1 & 0 & 0 \\
1 & 1 & 0
\end{array}\right]
$$
and the row sum vector $[1.5, 0.25, 1]^{T}$, I want to end up with
$$
\left[\begin{array}{rrr}
0 & a_{12} & a_{13} \\
a_{21} & 0 & 0 \\
a_{31} & a_{32} & 0
\end{array}\right]
$$
under the following conditions:
$a_{12} + a_{13} = 1.5$
$ a_{21} = 0.25$
$a_{31}+a_{32} = 1$
$a_{21}+a_{31} = 1$
$a_{12}+a_{32} = 1$
$a_{13} = 1$
While this is simplistic, in general, I have $2N$ equations in $N^{2}-Z$ unknowns, where $Z$ is the number of elements fixed to zero. So, this system of equations could be overdetermined or underdetermined, but I would like to be able to generate matrices like this such that all nonzero elements lie in $(0,1]$.
|
Computation of a column-stochastic matrix with target row sums
|
CC BY-SA 3.0
| null |
2015-03-04T14:34:51.013
|
2015-05-05T10:37:22.430
|
2015-03-06T09:35:47.037
|
8462
|
8526
|
[
"statistics",
"algorithms",
"graphs",
"social-network-analysis"
] |
5281
|
2
| null |
5244
|
1
| null |
Speed of code execution is rarely an issue. The important speed in business is almost always the speed of designing, deploying, and maintaining the application. An experienced programmer can optimize where necessary to get code execution fast enough. In these cases, R can make a lot of sense in production.
In cases where speed of execution IS an issue, you are already going to find an optimized C++ or some such real-time decision engine. So your choices are integrate an R process, or add the bits you need to the engine. The latter is probably the only option, not because of the speed of R, but because you don't have the time to incorporate any external process. If the company has nothing to start with, I can't imagine everyone saying "let's build our time critical real-time engine in R because of the great statistical libraries".
I'll give a few examples from my corporate experiences, where I use R in production:
- Delivering Shiny applications dealing with data that is not/ not yet institutionalized. I will generally load already-processed data frames and use Shiny to display different graphs and charts. Computation is minimal.
- Decision making analysis that requires heavy use of advanced libraries (mcclust, machine learning) but done on a daily or longer time-scale. In this case there is no reason to use any other language. I've already done the prototyping in R, so my fastest and best option is to keep things there.
I did not use R for production when integrating with a real-time C++ decision engine. Issues:
- An additional layer of complication to spawn R processes and integrate the results
- A suitable machine-learning library (Waffles) was available in C++
The caveat in the latter case: I still use R to generate the training files.
| null |
CC BY-SA 3.0
| null |
2015-03-04T15:53:46.530
|
2015-03-04T15:53:46.530
| null | null |
1077
| null |
5282
|
2
| null |
5279
|
1
| null |
Your problem is linearly separable, so you can use a single layer perceptron.
Hidden units are necessary only for non linear problems (xor is a classic example).
As for general guidelines, I don't think anyone has ever needed more than one hidden layer. Also the number of neurons in the hidden layer doesn't need to exceed the number of inputs.
| null |
CC BY-SA 3.0
| null |
2015-03-04T21:47:35.453
|
2015-03-05T03:17:42.347
|
2015-03-05T03:17:42.347
|
8517
|
8517
| null |
5284
|
1
| null | null |
2
|
1020
|
I'm a Newbie to Hadoop!
From web searches and from going through the Hadoop guides, it is clear that HDFS doesn't allow us to edit a file; to append some data to an existing file, it temporarily creates a new instance and appends the new data to it.
That being said,
1. I would like to know whether the new/temp file is created in the same block or in a different block?
2. What will happen if the revised file exceeds the previously allocated block size?
Any help would be greatly appreciated!
|
How append works in hdfs? Where the newly created instance of file is placed?
|
CC BY-SA 3.0
| null |
2015-03-05T18:28:43.367
|
2015-03-06T16:51:02.073
|
2015-03-06T16:51:02.073
|
8540
|
8540
|
[
"data-mining",
"bigdata",
"apache-hadoop"
] |
5285
|
2
| null |
2543
|
2
| null |
Here are two ideas that I was thinking about.
- Bug Prediction
Based on previous activity, future bugs can be predicted. There are a number of papers available on the net; I remember there was a paper about bug prediction from SVN data. (Just google for software bug prediction.)
- Error position from software traces
Errors may look like rare events or anomalies in the trace output. Maybe you can classify the error and pinpoint the procedure that caused it. I am currently thinking about a system like that now (not sure about its success though). I asked a question here: [https://stats.stackexchange.com/questions/140232/error-position-in-software-trace-file](https://stats.stackexchange.com/questions/140232/error-position-in-software-trace-file)
| null |
CC BY-SA 3.0
| null |
2015-03-06T02:43:38.727
|
2015-03-06T02:43:38.727
|
2017-04-13T12:44:20.183
|
-1
|
8516
| null |
5286
|
2
| null |
5243
|
1
| null |
When looking for texts to learn advanced topics, I start with a web search for relevant grad courses and textbooks, or background tech/math books like those from Dover.
To wit, Theoretical Statistics by Keener looks relevant:
[http://www.springer.com/statistics/statistical+theory+and+methods/book/978-0-387-93838-7](http://www.springer.com/statistics/statistical+theory+and+methods/book/978-0-387-93838-7)
And this:
"Looking for a good Mathematical Statistics self-study book (I'm a physics student and my class & current book are useless to me)"
[http://www.reddit.com/r/statistics/comments/1n6o19/looking_for_a_good_mathematical_statistics/](http://www.reddit.com/r/statistics/comments/1n6o19/looking_for_a_good_mathematical_statistics/)
| null |
CC BY-SA 3.0
| null |
2015-03-06T03:46:41.693
|
2015-03-06T03:46:41.693
| null | null |
3556
| null |
5287
|
1
| null | null |
4
|
98
|
Any suggestions on what kind of dataset, let's say $n \times d$ ($n$ rows, $d$ columns), would give me the same eigenvectors?
I believe it should be one with the same absolute value in each cell, like alternating +1 and -1, but it seems to work otherwise.
Any pointers?
|
Dataset to give same eigenvectors?
|
CC BY-SA 3.0
| null |
2015-03-06T04:09:56.030
|
2015-03-10T09:49:55.490
|
2015-03-06T09:38:33.380
|
84
|
8547
|
[
"machine-learning",
"data-mining",
"python"
] |
5288
|
2
| null |
5280
|
1
| null |
This problem, being of a more combinatorial than statistical nature, can be thought of as a max-flow problem in a bipartite network. Each column of your matrix corresponds to a "source node" in the left part of the bipartite graph $BP(L, R)$; each row of the matrix corresponds to a "destination node" in the right part $R$. The capacity of each source node is the corresponding column's sum; similarly, capacities are defined for the destinations via row sums. Value $a_{ij}$ in the matrix describes the flow of mass from the $j$'th source to the $i$'th destination along an existing edge $(j \to i)$ of $BP$. Your initially given 0-1 matrix describes which edges are present (e.g., the matrix in your example prescribes the edge $(3, 2)$ to be absent from the network, along with all self-loops). If you denote by $F$ the sum of all column sums (or, alternatively, the total capacity of all sources), then your problem narrows down to finding a max-flow from $L$ to $R$ in $BP$ and checking whether the volume of the discovered flow equals $F$.
Note 1 (Feasibility): Before plunging into finding best flows, you may want to check your row sums: a necessary condition for your problem to have a solution is that the sum of all column sums (which is $N$ in your particular case of a column-stochastic matrix) should equal the sum of row sums, as both numbers describe the same thing -- the sum of all elements of the target matrix. (For example, for $N = 3$ and the vector of row sums from your example, the problem is unfeasible.)
Note 2 (Upper bound): Since $a_{ij}$ describes the flow from $j$'th source, and the capacities of all sources are 1 in your setting, then each source cannot send more than 1 unit of mass to a single destination. Thus, each $a_{ij}$ will not exceed 1.
Note 3 (Lower bound): To make $a_{ij}$ for each existing edge $(j \to i)$ strictly positive, you can augment the max-flow problem with lower bounds on edge capacities. In a general setting, edge capacity bounds are $0$ and $+\infty$. You may want to replace the lower bound with some positive number, small enough not to interfere with the discovery of the best flow and large enough to satisfy your edge saturation needs.
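For a concrete starting point, here is a minimal sketch with igraph, using a feasible variant of your example (row sums 1.75, 0.25, 1 so that they total $N = 3$); `s` and `t` are the added super-source and super-sink, and all names are illustrative:
```
library(igraph)

edges <- rbind(
  # super-source -> column nodes, capacity = column sums (all 1)
  data.frame(from = "s", to = c("c1", "c2", "c3"), capacity = 1),
  # column -> row edges where the 0/1 pattern allows a nonzero a_ij,
  # capped at 1 as in Note 2
  data.frame(from = c("c1", "c1", "c2", "c2", "c3"),
             to   = c("r2", "r3", "r1", "r3", "r1"), capacity = 1),
  # row nodes -> super-sink, capacity = target row sums
  data.frame(from = c("r1", "r2", "r3"), to = "t",
             capacity = c(1.75, 0.25, 1))
)
g <- graph_from_data_frame(edges, directed = TRUE)

mf <- max_flow(g, source = "s", target = "t", capacity = E(g)$capacity)
mf$value                                     # 3 here, i.e. equal to F, so feasible
cbind(edges[4:8, 1:2], a_ij = mf$flow[4:8])  # one valid assignment of the a_ij
```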
| null |
CC BY-SA 3.0
| null |
2015-03-06T05:45:03.720
|
2015-03-06T05:45:03.720
| null | null |
8462
| null |
5289
|
2
| null |
5287
|
2
| null |
The only such existing "dataset" is a 1-by-1 matrix with an arbitrary value of its sole item.
If the eigenvectors, being the columns of matrix $V$, are "the same" (precisely up to a scalar constant factor), then matrix $V$ is singular (its columns are linearly dependent; its determinant is zero), and, hence, non-invertible $\Rightarrow$ such an eigendecomposition $V \cdot \Lambda \cdot V^{-1}$ cannot exist.
| null |
CC BY-SA 3.0
| null |
2015-03-06T06:11:10.997
|
2015-03-06T09:45:35.087
|
2015-03-06T09:45:35.087
|
8462
|
8462
| null |
5290
|
2
| null |
5287
|
4
| null |
I will restrict myself to the finite-dimensional situation.
First, if we are talking about normal eigenvalues and not generalized ones, we need a square matrix. If $M\in\mathbb{R}^{n\times n}$ (or $\mathbb{C}^{n\times n}$) is a [diagonalizable matrix](http://en.wikipedia.org/wiki/Diagonalizable_matrix), it has an [eigendecomposition](http://en.wikipedia.org/wiki/Eigendecomposition_of_a_matrix#Eigendecomposition_of_a_matrix)
\begin{equation}
M = V\Lambda V^{-1}
\textrm{,}
\end{equation}
with $V:=\begin{bmatrix}v_{1},v_{2},\ldots,v_{n}\end{bmatrix}$ the matrix with its $n$ eigenvectors $v_{i}$ as columns and $\Lambda :=\operatorname{diag}\left(\lambda_{1},\lambda_{2},\ldots,\lambda_{n}\right)$ the diagonal matrix with the corresponding eigenvalues on its main diagonal.
As you can see from this equation if you want to get matrices with the same eigenvectors, i.e. $V$ fixed, you can only change the eigenvalues $\lambda_{i}$. In general the relationship between the eigenvalues and the entries of the original matrix $M$ is non-trivial.
The easiest situation arises if the eigenvectors $v_{i}$ form the standard basis, i.e. only $1$ at the $i$th position, $0$ otherwise. Then $M=I\Lambda I^{-1}=\Lambda $ ($I$ the identity matrix) and you can change every entry on the main diagonal without changing any eigenvector.
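A small illustration in R: diagonal matrices share the standard basis as eigenvectors, whatever values sit on the diagonal (kept in decreasing order here so `eigen()` reports the columns in the same order):
```
M1 <- diag(c(3, 2, 1))
M2 <- diag(c(30, 20, 10))

e1 <- eigen(M1)
e2 <- eigen(M2)

e1$vectors                                    # identity columns (up to sign)
all.equal(abs(e1$vectors), abs(e2$vectors))   # same eigenvectors
e1$values; e2$values                          # only the eigenvalues differ
```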
## Edit
I understood the question differently than victor. I thought the question asks what kind of different datasets can be described by the same eigenvectors.
| null |
CC BY-SA 3.0
| null |
2015-03-06T07:56:19.903
|
2015-03-10T09:49:55.490
|
2015-03-10T09:49:55.490
|
8249
|
8249
| null |
5291
|
1
|
5763
| null |
1
|
185
|
I am using the twitteR package to retrieve timeline data. My request looks as follows:
`tweets <- try(userTimeline(user , n=50),silent=TRUE)`
and this worked quite well for a time, but now I receive this error message:
```
Error in function (type, msg, asError = TRUE) : easy handle already used in multi handle
```
In a related question on Stack Overflow, one answer is to use RCurl directly, but this does not seem to work with the twitteR package. Anybody got an idea on this?
|
Error using twitteR package's userTimeline
|
CC BY-SA 3.0
| null |
2015-03-06T10:11:02.133
|
2015-05-12T09:30:09.273
| null | null |
8549
|
[
"data-mining",
"r"
] |
5292
|
1
| null | null |
2
|
224
|
I have a huge collection of objects from which only a tiny fraction are in a class of interest. The collection is initially unlabelled, but labels can be added using an expensive operation (for example, by human).
Currently I use the simple generic machine learning strategy:
- Use hand-crafted rules to select a smaller subset of objects (thus leaving out a fraction of interesting ones).
- Label part of the smaller subset, and use these for training and choosing a classification algorithm and its parameters.
- Classify the remaining objects in the smaller set (and also perhaps in the big set).
This has two drawbacks:
- The labeller still needs to see a huge number of uninteresting objects, and therefore is able to label only a very small fraction of interesting ones.
- The objects not in the smaller set are completely ignored in the learning phase, resulting in a loss of some information (the classification algorithm might not work well on this complement).
It seems that it would be better to use online learning: i.e., select the objects to show to the labeller based on the previous labels. But then it is no longer obvious that the result of the classification algorithm retains its nice theoretical properties (i.e., statistical consistency).
Is there a general framework for active object detection which works either theoretically or practically (or both)? I could not get the complete picture from the Wikipedia article [active learning](http://en.wikipedia.org/wiki/Active_learning_%28machine_learning%29).
|
Generic strategy for object detection
|
CC BY-SA 3.0
| null |
2015-03-06T10:28:04.483
|
2017-03-30T14:06:53.130
|
2017-02-28T12:38:05.317
|
13727
|
6550
|
[
"machine-learning",
"classification",
"object-recognition",
"semi-supervised-learning",
"active-learning"
] |
5294
|
1
|
5298
| null |
4
|
870
|
This is an interview question
>
How are neural nets related to Fourier transforms?
I could find papers that talk about methods to process the Discrete Fourier Transform (DFT) by a single-layer neural network with a linear transfer function. Is there some other correlation that I'm missing?
|
How are neural nets related to Fourier transforms?
|
CC BY-SA 3.0
| null |
2015-03-09T16:53:10.930
|
2015-03-09T21:56:00.827
| null | null |
7960
|
[
"neural-network"
] |
5295
|
2
| null |
778
|
8
| null |
Python has some very good tools for working with big data:
## numpy
Numpy's memory-mapped arrays let you access a file saved on disk as though it were an array. Only the parts of the array you are actively working with need to be loaded into memory. It can be used pretty much the same as an ordinary array.
## h5py and pytables
These two libraries provide access to HDF5 files. These files allow access to just part of the data. Further, thanks to the underlying libraries used to access the data, many mathematical operations and other manipulations of the data can be done without loading it into a python data structure. Massive, highly structured files are possible, much bigger than 5 TB. It also allows seamless, lossless compression.
## databases
There are various types of databases that allow you to store big data sets and load just the parts you need. Many databases allow you to do manipulations without loading the data into a python data structure at all.
## pandas
This allows higher-level access to various types of data, including HDF5 data, csv files, databases, even websites. For big data, it provides wrappers around HDF5 file access that makes it easier to do analysis on big data sets.
## mpi4py
This is a tool for running your python code in a distributed way across multiple processors or even multiple computers. This allows you to work on parts of your data simultaneously.
## dask
It provides a version of the normal numpy array that supports many of the normal numpy operations in a multi-core manner that can work on data too large to fit into memory.
## blaze
A tool specifically designed for big data. It is basically a wrapper around the above libraries, providing consistent interfaces to a variety of different methods of storing large amounts of data (such as HDF5 or databases) and tools to make it easy to manipulate, do mathematical operations on, and analyze data that is too big to fit into memory.
| null |
CC BY-SA 3.0
| null |
2015-03-09T18:21:32.663
|
2015-03-09T18:21:32.663
| null | null |
8110
| null |
5298
|
2
| null |
5294
|
4
| null |
They are not related in any meaningful sense. Sure, you can use them both to extract features, or do any number of things, but the same can be said about many techniques. I would have asked "what kind of neural network?" to see if the interviewer had something specific in mind.
| null |
CC BY-SA 3.0
| null |
2015-03-09T21:54:19.297
|
2015-03-09T21:54:19.297
| null | null |
381
| null |
5299
|
2
| null |
5294
|
3
| null |
The similarity is regression.
NNs can be used for regression, and the Fourier transform is at its heart just a curve fit of multiple sine and cosine functions to some data.
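A small illustration of that view: fitting sine/cosine terms by ordinary least squares recovers the amplitudes of the underlying frequencies (toy data below):
```
set.seed(1)
x <- seq(0, 2 * pi, length.out = 256)
y <- 1.5 * sin(2 * x) + 0.5 * cos(3 * x) + rnorm(length(x), sd = 0.1)

fit <- lm(y ~ sin(x) + cos(x) + sin(2 * x) + cos(2 * x) + sin(3 * x) + cos(3 * x))
round(coef(fit), 2)   # the sin(2x) and cos(3x) coefficients come out near 1.5 and 0.5
```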
| null |
CC BY-SA 3.0
| null |
2015-03-09T21:56:00.827
|
2015-03-09T21:56:00.827
| null | null |
5316
| null |
5300
|
2
| null |
5255
|
2
| null |
Also: "Business Intelligence developer"
| null |
CC BY-SA 3.0
| null |
2015-03-10T09:29:06.893
|
2015-03-10T09:29:06.893
| null | null |
816
| null |
5301
|
1
|
5353
| null |
4
|
499
|
I need to implement an algorithm for multiple extended string matching in text. Algorithms for matching regular expressions would perhaps be too slow.
Extended means the presence of wildcards (any number of characters instead of a star), for example:
```
abc*def //matches abcdef, abcpppppdef etc.
```
Multiple means that the search is going on simultaneously for multiple string patterns (not a separate search for each pattern), for example:
```
abc*def
abc
whatever
some*string
```
QUESTION:
What is the fast algorithm that can do multiple extended string matching?
Preferably, optimized for SIMD instructions and multicore implementation. Open source implementation (C/C++/Python) would be great as well. I'm interested in 10 Gbps+ performance on a single core of a modern CPU.
Thank you
|
Algorithm for multiple extended string matching
|
CC BY-SA 3.0
| null |
2015-03-10T10:20:22.510
|
2015-03-19T07:38:57.917
|
2015-03-10T19:08:52.490
|
-1
|
8597
|
[
"algorithms",
"search"
] |
5302
|
1
| null | null |
1
|
2022
|
I am attempting to work with a very large data-set (~1.5mil lines) for the first time in SAS and I am having some difficulty. The data-set I have is formatted as a "long" .txt file as follows:
```
'cat1/: Topic1_Variable1'
'cat2/: Topic1_Variable2'
'cat3/: Topic1_Variable3'
'cat4/: Topic1_Variable4'
'cat1/: Topic2_Variable1'
'cat2/: Topic2_Variable2'
'cat3/: Topic2_Variable3'
'cat4/: Topic2_Variable4'
'cat1/: Topic3_Variable1'
'cat2/: Topic3_Variable2'
'cat3/: Topic3_Variable3'
'cat4/: Topic3_Variable4'
...
```
For analysis and sharing with others, I really would like to see it formatted as follows:
```
cat1 cat2 cat3 cat4
Topic1_Variable1 Topic1_Variable2 Topic1_Variable3 Topic1_Variable4
Topic2_Variable1 Topic2_Variable2 Topic2_Variable3 Topic2_Variable4
Topic3_Variable1 Topic3_Variable2 Topic3_Variable3 Topic3_Variable4
```
I think that this may be easier in R, but I honestly am drawing a complete blank in SAS. I've even played with MS Access to try to get it to look the way I want, but the program crashes every time (due to the size?). At any rate, I have looked into some of the statements in PROC TRANSPOSE and PROC SQL, but it seems that most functions within those procedures are utilized to combine duplicate 'Topics'. In the data I have been provided, each "group" represents an individual response to a question, with several thousand individuals repeated; I want to retain the independence of each occurrence and not perform a UNION as defined in PROC SQL. At this point, I feel like I am over-thinking this but I just can't get around the mental block and actually do what I am working toward.
Any help or guidance is much appreciated. I'm open to trying all suggestions or ideas, I think I have access to most statistical computing programs.
|
Transposing Every nth row to column in a large dataset
|
CC BY-SA 3.0
| null |
2015-03-10T18:03:32.227
|
2015-12-08T04:58:43.387
| null | null |
8605
|
[
"bigdata",
"dataset",
"data-formats"
] |
5303
|
1
| null | null |
5
|
966
|
I am trying my first 'project' concerning machine learning and I am a bit stuck.
However, I am not sure if it's even possible, but here goes my question.
What I want to achieve is clustering users into groups based on the number of visits a user makes to certain websites.
So I started out with this feature matrix:
```
USER abc.be abc.be/a abc.be/b xyz.be xyz.be/a
123 0 0 0 0 1
456 1 0 1 0 0
789 2 3 1 0 0
321 1 0 1 0 1
654 1 1 1 1 1
987 0 1 0 3 0
```
So I got in this example 5 features (my 5 different websites).
So then I used PCA to come to 2 dimensions, so I could plot it and see how it went.
My feature matrix (in my example) is 5 columns * 6 rows.
My PCA matrix is 2 columns * 6 rows.
I came to this plot (please note that this plot uses different data than the example, but the idea is the same)

The green points are my PCA points
The red circles are my K-Means centroids.
But the part I am struggling with is this: I got my clusters (red circles), but how can I use this to say: "Looks like most users go to either site A or site B"?
So how can I couple my clusters to a feature label from my feature matrix?
Or how does one approach this?
Any help is appreciated :)
|
Couple PCA plot and clusters to labels
|
CC BY-SA 3.0
| null |
2015-03-10T19:51:59.277
|
2015-08-15T19:12:31.760
| null | null |
8389
|
[
"machine-learning",
"feature-extraction",
"k-means"
] |
5304
|
1
| null | null |
2
|
243
|
Problem: I want to know methods to perform an effective sampling from a database.
The size of the database is about `250K` text documents and in this case each text document is related to some majors (Electrical Engineering, Medicine and so on). So far I have seen some simple techniques like [Simple random sample](http://en.wikipedia.org/wiki/Simple_random_sample) and [Stratified sampling](http://en.wikipedia.org/wiki/Stratified_sampling); however, I don't think it's a good idea to apply them for the following reasons:
- In the case of a simple random sample, for instance, there are only a few documents in the database that talk about majors like Naval Engineering or Arts. Therefore I think they are less likely to be sampled with this method, but I would like to have samples from every major if possible.
- In the case of stratified sampling, most of the documents talk about more than one major so I cannot divide the database in subgroups because they will not be mutually exclusive, at least for the case in which each major is a subgroup.
Finally, I cannot use the whole database due to the expensive computational cost of processing it. So I would really appreciate any suggestions on other sampling methods. Thanks in advance for any help.
|
How to make an effective sampling from a database of text documents?
|
CC BY-SA 3.0
| null |
2015-03-11T04:18:21.610
|
2015-03-13T15:41:35.577
| null | null |
8609
|
[
"dataset",
"databases",
"sampling"
] |
5305
|
1
| null | null |
1
|
1311
|
I am a newbie. I would like to do visualization on Twitter data: top trends by country (over a map) and time variations (for each month in a year, or for each year). Can someone tell me where I can get a Twitter data set? Any advice on how to start would be really helpful.
Thanks
|
How do I get Twitter Dataset for Visualization
|
CC BY-SA 3.0
| null |
2015-03-11T05:25:47.053
|
2015-03-13T12:16:11.643
| null | null |
8610
|
[
"data-mining",
"clustering",
"data-cleaning",
"visualization"
] |