Id | PostTypeId | AcceptedAnswerId | ParentId | Score | ViewCount | Body | Title | ContentLicense | FavoriteCount | CreationDate | LastActivityDate | LastEditDate | LastEditorUserId | OwnerUserId | Tags |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
787
|
1
|
884
| null |
3
|
11269
|
I am a relatively new user to Hadoop (using version 2.4.1). I installed hadoop on my first node without a hitch, but I can't seem to get the Resource Manager to start on my second node.
I cleared up some "shared library" problems by adding this to yarn-env.sh and hadoop-env.sh:
>
export HADOOP_HOME="/usr/local/hadoop"
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"
I also added this to hadoop-env.sh:
>
export HADOOP_COMMON_LIB_NATIVE_DIR=${HADOOP_PREFIX}/lib/native
based on the advice of this post at Hortonworks: [http://hortonworks.com/community/forums/topic/hdfs-tmp-dir-issue/](http://hortonworks.com/community/forums/topic/hdfs-tmp-dir-issue/)
That cleared up all of my error messages; when I run /sbin/start-yarn.sh I get this:
>
starting yarn daemons
starting resourcemanager,
logging to /usr/local/hadoop/logs/yarn-hduser-resourcemanager-HdNode.out
localhost: starting nodemanager,
logging to /usr/local/hadoop/logs/yarn-hduser-nodemanager-HdNode.out
The only problem is, JPS says that the Resource Manager isn't running.
What's going on here?
|
Hadoop Resource Manager Won't Start
|
CC BY-SA 3.0
| null |
2014-07-19T20:51:58.527
|
2017-01-30T17:07:41.170
| null | null |
2614
|
[
"apache-hadoop"
] |
788
|
2
| null |
787
|
1
| null |
Check your version of JPS and make sure it's the same as the version of java that you are running. Sometimes you start out with an out of the box jdk install, upgrade, set alternatives for the java bin, but still have the original jps binary being referenced.
Run `ps -ef |grep java` and look for the resource manager threads. Maybe it's actually running. If it is, try `update-alternatives --config jps` to see what binary jps is pointing at and compare it with the java binary that you are using.
If the resource manager is not running, it's time to do some basic Linux troubleshooting. Check the log files and, barring that, check the actual command output.
On the system I'm looking at now, the log files for resource manager are placed in the `hadoop-install/logs` directory in `yarn-username-resourcemanager-hostname.log` and `yarn-user-resourcemanager-hostname.out`. Your configuration may place them in /var/logs or what have you. Also, have a look at the syslog.
If the logs don't yield any good information, which can happen, my process is to generally try to figure out the command line from the startup script (usually by prefixing the command line with `echo`), and then trying to run the command directly to see the output as it comes out.
---
I have actually run into this problem before, but I can't remember the specific issue. I'm sure the same result can manifest itself from a variety of problems. Considering that you are as far as you are in the process of getting set up, I believe it's likely to be a minor configuration issue.
| null |
CC BY-SA 3.0
| null |
2014-07-20T21:37:27.857
|
2014-07-20T21:48:25.257
|
2014-07-20T21:48:25.257
|
434
|
434
| null |
789
|
2
| null |
786
|
7
| null |
You can pull stock data very easily in Python and R (probably other languages as well) with the following packages:
In python with [ystockquote](https://pypi.python.org/pypi/ystockquote)
[This](http://nbviewer.ipython.org/github/twiecki/financial-analysis-python-tutorial/blob/master/1.%20Pandas%20Basics.ipynb) is also a really nice tutorial in iPython which shows you how to pull the stock data and play with it
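For illustration only, pulling a price history with `ystockquote` might look like the minimal sketch below (the `get_historical_prices` call follows the package's documented interface, but the exact date format and return structure vary between versions, and the Yahoo service it wraps may no longer respond):
```
import ystockquote

# Hypothetical usage sketch: symbol and dates are arbitrary examples.
# Depending on the ystockquote version, dates may need to be 'YYYY-MM-DD'
# or 'YYYYMMDD', and the result may be a dict keyed by date or a list of rows.
prices = ystockquote.get_historical_prices('GOOG', '2013-01-01', '2013-12-31')
print(prices)
```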
In R with [quantmod](http://www.quantmod.com/)
HTH.
| null |
CC BY-SA 4.0
| null |
2014-07-20T23:54:09.070
|
2020-08-16T18:02:25.707
|
2020-08-16T18:02:25.707
|
98307
|
802
| null |
790
|
2
| null |
778
|
4
| null |
Absolutely. When you're working with data at that scale it's common to use a big data framework, in which case python or whatever language you're using is merely an interface. See for example [Spark's Python Programming Guide](http://spark.apache.org/docs/0.9.1/python-programming-guide.html). What kind of data do you have and what do you want to do with it?
| null |
CC BY-SA 3.0
| null |
2014-07-21T00:35:13.440
|
2014-07-21T00:35:13.440
| null | null |
381
| null |
791
|
1
| null | null |
8
|
1838
|
I'm trying to define a metric between job titles in IT field. For this I need some metric between words of job titles that are not appearing together in the same job title, e.g. metric between the words
>
senior, primary, lead, head, vp, director, stuff, principal, chief,
or the words
>
analyst, expert, modeler, researcher, scientist, developer, engineer, architect.
How can I get all such possible words with their distance ?
|
Job title similarity
|
CC BY-SA 3.0
| null |
2014-07-21T09:00:04.917
|
2014-07-23T15:46:48.977
|
2014-07-22T09:00:27.183
|
921
|
921
|
[
"machine-learning",
"dataset"
] |
792
|
2
| null |
778
|
3
| null |
To handle such amounts of data, the programming language is not the main concern; the programming framework is. Frameworks such as MapReduce or Spark have bindings to many languages, including Python. These frameworks certainly have many ready-to-use packages for data analysis tasks. But in the end it all comes down to your requirements, i.e., what is your task? People have different definitions of data analysis tasks, some of which can be easily solved with relational databases. In that case, SQL is much better than all the other alternatives.
| null |
CC BY-SA 3.0
| null |
2014-07-21T11:59:49.203
|
2014-07-21T11:59:49.203
| null | null |
743
| null |
793
|
1
|
797
| null |
21
|
8179
|
How can [NoSQL](http://en.wikipedia.org/wiki/NoSQL) databases like [MongoDB](http://en.wikipedia.org/wiki/MongoDB) be used for data analysis? What are the features in them that can make data analysis faster and more powerful?
|
Uses of NoSQL database in data science
|
CC BY-SA 3.0
| null |
2014-07-21T13:41:13.427
|
2015-11-07T07:51:08.303
|
2014-07-27T07:36:51.510
|
2523
|
2643
|
[
"bigdata",
"nosql",
"mongodb"
] |
794
|
2
| null |
694
|
9
| null |
Pylearn2 seems to be the library of choice, however I find their YAML configuration files off-putting.
Python itself was designed to be an easy language for prototyping, so why would you not use it to define the network properties themselves? We have great editors with autocompletion that would make your life much easier, and Python is not like C++, where you have to wait for long builds to finish before you can run your code.
YAML files, on the other hand, have to be edited with a standard text editor with no assistance whatsoever, and this makes the learning curve even steeper.
I may be missing the big picture, but I still don't understand what they were thinking; I don't think prototyping in code would be much slower. For that reason I'm considering Theanets or using Theano directly.
| null |
CC BY-SA 3.0
| null |
2014-07-21T15:35:59.540
|
2014-07-21T15:35:59.540
| null | null |
2648
| null |
795
|
2
| null |
791
|
1
| null |
Not sure if this is exactly what you're looking for, but r-base has a function called "adist" which creates a distance matrix of approximate string distances (according to the Levenshtein distance). Type '?adist' for more.
```
words = c("senior", "primary", "lead", "head", "vp", "director", "stuff", "principal", "chief")
adist(words)
[,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9]
[1,] 0 6 5 5 6 5 5 7 5
[2,] 6 0 6 6 7 7 7 6 6
[3,] 5 6 0 1 4 7 5 8 5
[4,] 5 6 1 0 4 7 5 8 4
[5,] 6 7 4 4 0 8 5 8 5
[6,] 5 7 7 7 8 0 8 8 7
[7,] 5 7 5 5 5 8 0 9 4
[8,] 7 6 8 8 8 8 9 0 8
[9,] 5 6 5 4 5 7 4 8 0
```
Also, if R isn't an option, the Levenshtein distance algorithm is implemented in many languages here:
[http://en.wikibooks.org/wiki/Algorithm_Implementation/Strings/Levenshtein_distance](http://en.wikibooks.org/wiki/Algorithm_Implementation/Strings/Levenshtein_distance)
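If you end up outside R, here is a minimal pure-Python sketch of the same idea: the standard dynamic-programming Levenshtein distance, applied pairwise to the word list from the question (no external libraries assumed):
```
def levenshtein(a, b):
    """Classic dynamic-programming edit distance between strings a and b."""
    previous = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        current = [i]
        for j, cb in enumerate(b, start=1):
            insert_cost = current[j - 1] + 1
            delete_cost = previous[j] + 1
            replace_cost = previous[j - 1] + (ca != cb)
            current.append(min(insert_cost, delete_cost, replace_cost))
        previous = current
    return previous[-1]

words = ["senior", "primary", "lead", "head", "vp",
         "director", "stuff", "principal", "chief"]
# Pairwise distance matrix, analogous to R's adist(words)
matrix = [[levenshtein(w1, w2) for w2 in words] for w1 in words]
print(matrix[0])  # distances from "senior" to every other word
```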
| null |
CC BY-SA 3.0
| null |
2014-07-21T16:15:07.010
|
2014-07-21T16:15:07.010
| null | null |
375
| null |
796
|
2
| null |
778
|
14
| null |
There are a couple of things you need to understand when dealing with Big data -
### What is Big data?
You might be aware of the famous V's of Big data: Volume, Velocity, Variety... So, Python may not be suitable for all of them, and the same goes for every other data science tool available. You need to know which tool is good for which purpose.
If dealing with a large Volume of data:
- Pig/Hive/Shark - Data cleaning and ETL work
- Hadoop/Spark - Distributed parallel computing
- Mahout/ML-Lib - Machine Learning
Now, you can use R/Python in intermediate stages, but you'll realize that they become a bottleneck in your entire process.
If dealing with high Velocity of data:
- Kafka/Storm - High throughput system
People are trying R/Python here too, but again it depends on the kind of parallelism you want and your model complexity.
### What sort of analysis do you wish to do?
If your model demands that the entire data first be brought into memory, then your model should not be complex, because if the intermediate data is large the code will break. And if you think of writing it to disk, you'll face additional delay, because disk read/write is slow compared to RAM.
## Conclusion
You can definitely use Python in the Big data space (definitely, since people are trying with R, why not Python), but know your data and business requirement first. There may be better tools available for the same task, and always remember:
>
Your tools shouldn’t determine how you answer questions. Your questions should determine what tools you use.
| null |
CC BY-SA 3.0
| null |
2014-07-21T16:58:22.660
|
2017-02-27T15:03:17.713
|
2020-06-16T11:08:43.077
|
-1
|
2433
| null |
797
|
2
| null |
793
|
26
| null |
To be perfectly honest, most NoSQL databases are not very well suited to applications in big data. For the vast majority of all big data applications, the performance of [MongoDB](https://en.wikipedia.org/wiki/MongoDB) compared to a relational database like [MySQL](http://en.wikipedia.org/wiki/MySQL) is [significantly](http://www.moredevs.com/mysql-vs-mongodb-performance-benchmark/) worse, poor enough to warrant staying away from something like MongoDB entirely.
With that said, there are a couple of really useful properties of NoSQL databases that certainly work in your favor when you're working with large data sets, though the chance of those benefits outweighing the generally poor performance of NoSQL compared to [SQL](http://en.wikipedia.org/wiki/SQL) for read-intensive operations (most similar to typical big data use cases) is low.
- No Schema - If you're working with a lot of unstructured data, it might be hard to actually decide on and rigidly apply a schema. NoSQL databases in general are very supportive of this, and will allow you to insert schema-less documents on the fly, which is certainly not something an SQL database will support (see the short sketch after this list).
- JSON - If you happen to be working with JSON-style documents instead of with CSV files, then you'll see a lot of advantage in using something like MongoDB for a database-layer. Generally the workflow savings don't outweigh the increased query-times though.
- Ease of Use - I'm not saying that SQL databases are always hard to use, or that Cassandra is the easiest thing in the world to set up, but in general NoSQL databases are easier to set up and use than SQL databases. MongoDB is a particularly strong example of this, known for being one of the easiest database layers to use (outside of SQLite). SQL also deals with a lot of normalization and there's a large legacy of SQL best practices that just generally bogs down the development process.
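As a minimal illustration of the schema-less point above, here is a hedged pymongo sketch (it assumes a MongoDB server on localhost; the database and collection names are invented):
```
from pymongo import MongoClient

# Assumed local MongoDB instance; "demo_db" and "events" are made-up names.
client = MongoClient("localhost", 27017)
collection = client["demo_db"]["events"]

# Two documents with different shapes go into the same collection,
# with no schema declared up front.
collection.insert_one({"user": "alice", "clicks": 3})
collection.insert_one({"user": "bob", "device": {"os": "linux"}, "tags": ["beta"]})
```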
Personally I might suggest you also check out [graph databases](http://en.wikipedia.org/wiki/Graph_database) such as [Neo4j](http://en.wikipedia.org/wiki/Neo4j) that show really good performance for certain types of queries if you're looking into picking out a backend for your data science applications.
| null |
CC BY-SA 3.0
| null |
2014-07-21T19:06:43.223
|
2015-11-07T07:51:08.303
|
2015-11-07T07:51:08.303
|
2580
|
548
| null |
798
|
2
| null |
791
|
2
| null |
If I understand your question, you can look at the co-occurrence matrix formed using the terms following the title; e.g., senior FOO, primary BAR, etc. Then you can compute the similarity between any pair of terms, such as "senior" and "primary", using a suitable metric; e.g., the cosine similarity.
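A minimal sketch of that idea in Python, using invented tokenized titles and counting how often two words appear in the same title as a crude proxy for shared context:
```
import numpy as np
from itertools import combinations

# Invented toy corpus of tokenized job titles
titles = [
    ["senior", "data", "scientist"],
    ["lead", "data", "engineer"],
    ["senior", "software", "engineer"],
    ["principal", "data", "scientist"],
]

vocab = sorted({w for t in titles for w in t})
index = {w: i for i, w in enumerate(vocab)}

# Symmetric co-occurrence counts: how often two words share a title
cooc = np.zeros((len(vocab), len(vocab)))
for title in titles:
    for w1, w2 in combinations(set(title), 2):
        cooc[index[w1], index[w2]] += 1
        cooc[index[w2], index[w1]] += 1

def cosine(u, v):
    denom = np.linalg.norm(u) * np.linalg.norm(v)
    return float(u @ v / denom) if denom else 0.0

# Similarity between two modifier words via their co-occurrence profiles
print(cosine(cooc[index["senior"]], cooc[index["principal"]]))
```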
| null |
CC BY-SA 3.0
| null |
2014-07-21T20:42:00.143
|
2014-07-21T20:42:00.143
| null | null |
381
| null |
799
|
2
| null |
783
|
5
| null |
I would use a visual analysis. Since you know there is a repetition every 256 bytes, create an image 256 pixels wide by however many deep, and encode the data using brightness. In (i)python it would look like this:
```
import os, numpy, matplotlib.pyplot as plt
%matplotlib inline

def read_in_chunks(infile, chunk_size=256):
    while True:
        chunk = infile.read(chunk_size)
        if chunk:
            yield chunk
        else:
            # The chunk was empty, which means we're at the end
            # of the file
            return

fname = 'enter something here'
srcfile = open(fname, 'rb')
# Integer division so the array shape is an int (one image row per 256-byte chunk)
height = 1 + os.path.getsize(fname) // 256
data = numpy.zeros((height, 256), dtype=numpy.uint8)
for i, line in enumerate(read_in_chunks(srcfile)):
    vals = list(map(int, line))   # bytes of the chunk as 0-255 brightness values
    data[i, :len(vals)] = vals
plt.imshow(data, aspect=1e-2);
```
This is what a PDF looks like:

A 256 byte periodic pattern would have manifested itself as vertical lines. Except for the header and tail it looks pretty noisy.
| null |
CC BY-SA 3.0
| null |
2014-07-21T21:17:07.303
|
2014-07-22T00:13:21.200
|
2014-07-22T00:13:21.200
|
381
|
381
| null |
800
|
2
| null |
793
|
4
| null |
One benefit of the schema-free NoSQL approach is that you don't commit prematurely and you can apply the right schema at query time using an appropriate tool like [Apache Drill](http://incubator.apache.org/drill/). See [this presentation](http://wiki.apache.org/incubator/DrillProposal?action=AttachFile&do=get&target=Drill+slides.pdf) for details. MySQL wouldn't be my first choice in a big data setting.
| null |
CC BY-SA 3.0
| null |
2014-07-21T21:29:26.270
|
2014-07-21T21:29:26.270
| null | null |
381
| null |
801
|
2
| null |
791
|
4
| null |
That's an interesting problem; thanks for bringing it up here on Stack Exchange.
I think this problem is similar to applying [LSA (Latent Semantic Analysis)](http://en.wikipedia.org/wiki/Latent_semantic_analysis) in sentiment analysis to find lists of positive and negative words, with polarity measured with respect to some predefined positive and negative words.
Good reads:
- Learning Word Vectors for Sentiment Analysis
- Unsupervised Learning of Semantic Orientation from a Hundred-Billion-Word Corpus
So, in my view, LSA is the best approach to begin with in this situation, as it learns the underlying relations between words from the corpus, and that's probably what you are looking for.
| null |
CC BY-SA 3.0
| null |
2014-07-21T21:32:12.963
|
2014-07-21T21:32:12.963
| null | null |
2433
| null |
802
|
1
| null | null |
13
|
3019
|
I'm working on an application which requires creating a very large database of n-grams that exist in a large text corpus.
I need three efficient operation types: Lookup and insertion indexed by the n-gram itself, and querying for all n-grams that contain a sub-n-gram.
This sounds to me like the database should be a gigantic document tree, and document databases, e.g. Mongo, should be able to do the job well, but I've never used those at scale.
Knowing the Stack Exchange question format, I'd like to clarify that I'm not asking for suggestions on specific technologies, but rather a type of database that I should be looking for to implement something like this at scale.
|
Efficient database model for storing data indexed by n-grams
|
CC BY-SA 3.0
| null |
2014-07-21T23:53:11.120
|
2014-08-16T10:25:19.687
| null | null |
1163
|
[
"nlp",
"databases"
] |
803
|
2
| null |
802
|
1
| null |
I haven't done this before but it sounds like a job for a graph database given the functionality you want. [Here's a demo in neo4j](http://www.rene-pickhardt.de/download-google-n-gram-data-set-and-neo4j-source-code-for-storing-it/).
| null |
CC BY-SA 3.0
| null |
2014-07-22T00:06:10.500
|
2014-07-22T00:06:10.500
| null | null |
381
| null |
804
|
1
|
805
| null |
4
|
87
|
Basically, both are software systems that are based on data and algorithms.
|
What's the difference between data products and intelligent systems?
|
CC BY-SA 3.0
| null |
2014-07-22T01:12:03.860
|
2014-07-22T06:06:31.513
| null | null |
1117
|
[
"tools",
"education",
"definitions"
] |
805
|
2
| null |
804
|
1
| null |
This is a very vague question. However, I will try to make sense of it. Considering rules of logic as well as your statement that both entities are "software systems that are based on data and algorithms", it appears that data products are intelligent systems and intelligent systems are, to some degree, data products. Therefore, it can be argued that the difference between the terms "data products" and "intelligent systems" is purely one of focus for each type of system: the source of information (data) versus the purpose of the system (intelligence/algorithms).
| null |
CC BY-SA 3.0
| null |
2014-07-22T02:27:43.710
|
2014-07-22T06:06:31.513
|
2014-07-22T06:06:31.513
|
2452
|
2452
| null |
806
|
1
|
807
| null |
97
|
101137
|
I was starting to look into the area under the curve (AUC) and am a little confused about its usefulness. When it was first explained to me, AUC seemed to be a great measure of performance, but in my research I've found that some claim its advantage is mostly marginal, in that it is best for catching 'lucky' models with high standard accuracy measurements and low AUC.
So should I avoid relying on AUC for validating models or would a combination be best?
|
Advantages of AUC vs standard accuracy
|
CC BY-SA 4.0
| null |
2014-07-22T03:43:20.327
|
2023-05-04T17:16:21.923
|
2023-05-04T17:16:21.923
|
43000
|
2653
|
[
"machine-learning",
"accuracy"
] |
807
|
2
| null |
806
|
84
| null |
Really great question, and one that I find that most people don't really understand on an intuitive level. `AUC` is in fact often preferred over accuracy for binary classification for a number of different reasons. First though, let's talk about exactly what `AUC` is. Honestly, for being one of the most widely used efficacy metrics, it's surprisingly obtuse to figure out exactly how `AUC` works.
`AUC` stands for `Area Under the Curve`, which curve you ask? Well, that would be the `ROC` curve. `ROC` stands for [Receiver Operating Characteristic](http://en.wikipedia.org/wiki/Receiver_operating_characteristic), which is actually slightly non-intuitive. The implicit goal of `AUC` is to deal with situations where you have a very skewed sample distribution, and don't want to overfit to a single class.
A great example is in spam detection. Generally, spam datasets are STRONGLY biased towards ham, or not-spam. If your data set is 90% ham, you can get a pretty damn good accuracy by just saying that every single email is ham, which is obviously something that indicates a non-ideal classifier. Let's start with a couple of metrics that are a little more useful for us, specifically the true positive rate (`TPR`) and the false positive rate (`FPR`):

Now in this graph, `TPR` is specifically the ratio of true positives to all positives, and `FPR` is the ratio of false positives to all negatives. (Keep in mind, this is only for binary classification.) On a graph like this, it should be pretty straightforward to figure out that a prediction of all 0's or all 1's will result in the points of `(0,0)` and `(1,1)` respectively. If you draw a line through these points you get something like this:

That looks basically like a diagonal line (it is), and by some easy geometry you can see that the `AUC` of such a model would be `0.5` (height and base are both 1, so the triangle's area is half of the unit square). Similarly, if you predict a random assortment of 0's and 1's, let's say 90% 1's, you could get the point `(0.9, 0.9)`, which again falls along that diagonal line.
Now comes the interesting part. What if we weren't only predicting 0's and 1's? What if instead we set a cutoff, above which every result is a 1 and below which every result is a 0? This would mean that at the extremes you get the original situation where you have all 0's and all 1's (at a cutoff of 1 and 0 respectively), but also a series of intermediate states that fall within the `1x1` graph that contains your `ROC`. In practice you get something like this:

So basically, what you're actually getting when you do an `AUC` over accuracy is something that will strongly discourage people going for models that are representative, but not discriminative, as this will only actually select for models that achieve false positive and true positive rates that are significantly above random chance, which is not guaranteed for accuracy.
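To make the threshold-sweeping concrete, here is a small sketch on invented labels and scores using scikit-learn's standard `roc_curve` and `auc` helpers, with plain accuracy at a fixed cutoff shown for comparison:
```
import numpy as np
from sklearn.metrics import roc_curve, auc

# Invented binary labels and classifier scores (higher score = more "positive")
y_true = np.array([0, 0, 0, 0, 1, 1, 1, 0, 1, 0])
y_score = np.array([0.1, 0.3, 0.2, 0.4, 0.8, 0.7, 0.6, 0.5, 0.9, 0.35])

# Sweep the decision threshold: each threshold yields one (FPR, TPR) point
fpr, tpr, thresholds = roc_curve(y_true, y_score)
print("AUC =", auc(fpr, tpr))

# Compare with plain accuracy at a single fixed cutoff of 0.5
accuracy = np.mean((y_score >= 0.5).astype(int) == y_true)
print("Accuracy at 0.5 =", accuracy)
```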
| null |
CC BY-SA 4.0
| null |
2014-07-22T04:10:18.353
|
2019-02-08T16:20:53.637
|
2019-02-08T16:20:53.637
|
65710
|
548
| null |
808
|
1
|
809
| null |
11
|
2297
|
I want to become a data scientist. I studied applied statistics (actuarial science), so I have a strong statistical background (regression, stochastic processes, time series, just to mention a few). But now, I am going to do a master's degree in Computer Science with a focus on Intelligent Systems.
Here is my study plan:
- Machine learning
- Advanced machine learning
- Data mining
- Fuzzy logic
- Recommendation Systems
- Distributed Data Systems
- Cloud Computing
- Knowledge discovery
- Business Intelligence
- Information retrieval
- Text mining
In the end, with all my statistical and computer science knowledge, can I call myself a data scientist, or am I wrong?
Thanks for the answers.
|
Statistics + Computer Science = Data Science?
|
CC BY-SA 3.0
| null |
2014-07-22T08:39:33.810
|
2020-08-01T13:12:43.027
|
2016-02-25T13:36:09.703
|
11097
|
1117
|
[
"machine-learning",
"statistics",
"career"
] |
809
|
2
| null |
808
|
1
| null |
I think that you're on the right track toward becoming an expert data scientist. Recently I have answered related question [here on Data Science StackExchange](https://datascience.stackexchange.com/a/742/2452) (pay attention to the definition I mention there, as it essentially answers your question by itself, as well as to aspects of practicing software engineering and applying knowledge to solving real-world problems). I hope that you will find all that useful. Good luck in your career!
| null |
CC BY-SA 4.0
| null |
2014-07-22T09:03:49.270
|
2020-08-01T13:12:43.027
|
2020-08-01T13:12:43.027
|
98307
|
2452
| null |
810
|
1
|
811
| null |
59
|
15562
|
My 'machine learning' task is of separating benign Internet traffic from malicious traffic. In the real world scenario, most (say 90% or more) of Internet traffic is benign. Thus I felt that I should choose a similar data setup for training my models as well. But I came across a research paper or two (in my area of work) which have used a "class balancing" data approach to training the models, implying an equal number of instances of benign and malicious traffic.
In general, if I am building machine learning models, should I go for a dataset which is representative of the real world problem, or is a balanced dataset better suited for building the models (since certain classifiers do not behave well with class imbalance, or due to other reasons not known to me)?
Can someone shed more light on the pros and cons of both choices, and how to decide which one to choose?
|
Should I go for a 'balanced' dataset or a 'representative' dataset?
|
CC BY-SA 3.0
| null |
2014-07-22T12:29:10.050
|
2021-02-20T23:23:35.410
|
2016-11-29T10:52:50.630
|
26596
|
2661
|
[
"machine-learning",
"dataset",
"class-imbalance"
] |
811
|
2
| null |
810
|
41
| null |
I would say the answer depends on your use case. Based on my experience:
- If you're trying to build a representative model -- one that describes the data rather than necessarily predicts -- then I would suggest using a representative sample of your data.
- If you want to build a predictive model, particularly one that performs well by measure of AUC or rank-order and plan to use a basic ML framework (e.g. Decision Tree, SVM, Naive Bayes, etc), then I would suggest you feed the framework a balanced dataset. Much of the literature on class imbalance finds that random undersampling (down sampling the majority class to the size of the minority class) can drive performance gains; see the sketch after this list.
- If you're building a predictive model, but are using a more advanced framework (e.g. something that determines sampling parameters via wrapper or a modification of a bagging framework that samples to class equivalence), then I would suggest again feeding the representative sample and letting the algorithm take care of balancing the data for training.
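A minimal sketch of that random undersampling, using plain pandas on an invented frame with a binary `label` column:
```
import pandas as pd

def undersample(df, label_col="label", random_state=0):
    """Downsample every class to the size of the smallest class."""
    counts = df[label_col].value_counts()
    minority_size = counts.min()
    parts = [
        df[df[label_col] == cls].sample(n=minority_size, random_state=random_state)
        for cls in counts.index
    ]
    # Shuffle the concatenated result so classes are interleaved
    return pd.concat(parts).sample(frac=1, random_state=random_state)

# Invented example: 90% benign (0), 10% malicious (1)
df = pd.DataFrame({"feature": range(1000), "label": [0] * 900 + [1] * 100})
balanced = undersample(df)
print(balanced["label"].value_counts())
```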
| null |
CC BY-SA 3.0
| null |
2014-07-22T14:22:49.137
|
2014-07-22T14:22:49.137
| null | null |
2513
| null |
812
|
1
|
814
| null |
27
|
25487
|
What is the best tool to use to visualize (draw the vertices and edges) a graph with 1000000 vertices? There are about 50000 edges in the graph. And I can compute the location of individual vertices and edges.
I am thinking about writing a program to generate a svg. Any other suggestions?
|
Visualizing a graph with a million vertices
|
CC BY-SA 3.0
| null |
2014-07-22T15:17:04.810
|
2021-10-07T13:16:31.077
|
2014-07-22T18:34:44.740
|
471
|
192
|
[
"visualization",
"graphs"
] |
813
|
2
| null |
812
|
8
| null |
[https://gephi.github.io/](https://gephi.github.io/) says it can handle a million edges. If your graph has 1000000 vertices and only 50000 edges then most of your vertices won't have any edges anyway.
In fact the Gephi spec is the dual of your example: "Networks up to 50,000 nodes and 1,000,000 edges"
| null |
CC BY-SA 3.0
| null |
2014-07-22T15:44:04.160
|
2014-07-22T15:44:04.160
| null | null |
471
| null |
814
|
2
| null |
812
|
22
| null |
I also suggest `Gephi` software ([https://gephi.github.io](https://gephi.github.io)), which seems to be quite powerful. Some additional information on using `Gephi` with large networks can be found [here](https://forum.gephi.org/viewtopic.php?t=1554) and, more generally, [here](https://forum.gephi.org/viewforum.php?f=25). `Cytoscape` ([http://www.cytoscape.org](http://www.cytoscape.org)) is an alternative to `Gephi`, being an another popular platform for complex network analysis and visualization.
If you'd like to work with networks programmatically (including visualization) in R, Python or C/C++, you can check `igraph` collection of libraries. Speaking of R, you may find interesting the following blog posts: on using R with Cytoscape ([http://www.vesnam.com/Rblog/viznets1](http://www.vesnam.com/Rblog/viznets1)) and on using R with Gephi ([http://www.vesnam.com/Rblog/viznets2](http://www.vesnam.com/Rblog/viznets2)).
For extensive lists of network analysis and visualization software, including some comparison and reviews, you might want to check the following pages: 1) [http://wiki.cytoscape.org/Network_analysis_links](http://wiki.cytoscape.org/Network_analysis_links); 2) [http://www.kdnuggets.com/software/social-network-analysis.html](http://www.kdnuggets.com/software/social-network-analysis.html); 3) [http://www.activatenetworks.net/social-network-analysis-sna-software-review](http://www.activatenetworks.net/social-network-analysis-sna-software-review).
| null |
CC BY-SA 3.0
| null |
2014-07-22T16:40:17.733
|
2014-07-22T16:40:17.733
| null | null |
2452
| null |
815
|
1
|
819
| null |
5
|
172
|
I'm currently facing a project that I could solve with a relational database in a relatively painful way. Having heard so much about NOSQL, I'm wondering if there is not a more appropriate way of tackling it:
Suppose we are tracking a group of animals in a forest (n ~ 500) and would like to keep a record of a set of observations (this is a fictional scenario).
We would like to store the following information in a database:
- a unique identifier for each animal
- a description of the animal with structured fields: Species, Genus, Family, ...
- a free text field with additional information
- each time-point at which it was detected close to a reference point
- a picture of the animal
- an indication whether two given animals are siblings
And:
- there might be additional features appearing later as more data comes in
We would like to be able to execute the following types of queries:
- return all the animals spotted between in a given time interval
- return all the animals of a given Species or Family
- perform a text search on the free text field
Which particular database system would you recommend ? Is there any tutorial / examples that I could use as a starting point ?
|
is this a good case for NOSQL?
|
CC BY-SA 3.0
| null |
2014-07-22T19:20:47.580
|
2014-07-23T07:01:01.140
| null | null |
906
|
[
"nosql",
"databases"
] |
816
|
2
| null |
808
|
9
| null |
Well, it depends on what kind of "Data Science" you wish to get into. For basic analytics and reporting, statistics will certainly help, but for Machine Learning and Artificial Intelligence you'll want a few more skills:
- Probability theory - you must have a solid background in pure probability so that you can decompose any problem, whether seen before or not, into probabilistic principles. Statistics helps a lot for already solved problems, but new and unsolved problems require a deep understanding of probability so that you can design appropriate techniques.
- Information Theory - this (relative to statistics) is quite a new field (though still decades old). The most important work was by Shannon, but an even more important and often neglected contribution is work by Hobson, who proved that Kullback-Leibler divergence is the only mathematical definition that truly captures the notion of a "measure of information". Fundamental to artificial intelligence is being able to quantify information. I suggest reading "Concepts in Statistical Mechanics" - Arthur Hobson (very expensive book, only available in academic libraries).
- Complexity Theory - A big problem faced by many data scientists without a solid complexity theory background is that their algorithms do not scale, or just take an extremely long time to run on large data. Take PCA for example: it is many people's favourite answer to the interview question "how do you reduce the number of features in our dataset", but even if you tell the candidate "the data set is really really really large" they still propose various forms of PCA that are O(n^3). If you want to stand out, you want to be able to solve each problem on its own, NOT throw some textbook solution at it designed a long time ago before Big Data was such a hip thing. For that you need to understand how long things take to run, not only theoretically but practically - so how to use a cluster of computers to distribute an algorithm, or which data structures take up less memory.
- Communication Skills - A huge part of Data Science is understanding business. Whether it's inventing a product driven by data science, or giving business insight driven by data science, being able to communicate well with the Project and Product Managers, the tech teams, and your fellow data scientists is very important. You can have an amazing idea, say an awesome AI solution, but if you cannot effectively (a) communicate WHY it will make the business money, (b) convince your colleagues it will work and (c) explain to tech people how you need their help to build it, then it won't get done.
| null |
CC BY-SA 3.0
| null |
2014-07-22T21:12:08.333
|
2014-07-22T21:12:08.333
| null | null |
2668
| null |
817
|
2
| null |
808
|
6
| null |
"Data scientist" is (to me) a big umbrella term. I would see a data scientist as a person who can proficiently use techniques from the fields of data mining, machine learning, pattern classification, and statistics.
However, those fields are intertwined: machine learning is tied together with pattern classification, and data mining also overlaps with them when it comes to finding patterns in data. And all of these techniques have their underlying statistical principles. I always picture this as a Venn diagram with a huge intersection.
Computer science is related to all those fields too. I would say that you need "data science" techniques to do computer-scientific research, but computer science knowledge is not necessarily implied by "data science". However, programming skills - I see programming and computer science as different professions, where programming is more the tool used to solve problems - are also important for working with the data and conducting data analysis.
You have a really nice study plan, and it all makes sense. But I am not sure if you "want" to call yourself just a "data scientist"; I have the impression that "data scientist" is such an ambiguous term that it can mean everything or nothing. What I want to convey is that you will end up being something more - more "specialized" - than "just" a data scientist.
| null |
CC BY-SA 3.0
| null |
2014-07-22T21:24:33.820
|
2014-07-22T21:24:33.820
| null | null | null | null |
818
|
2
| null |
812
|
8
| null |
I think that `Gephi` could face lack-of-memory issues; you will need at least 8 GB of RAM. That said, the number of edges here is not extremely large.
Possibly a more appropriate tool in this case would be [GraphViz](http://www.graphviz.org/). It's a command line tool for network visualization, and presumably would be more tolerant of graph size. Moreover, as I remember, in `GraphViz` it is possible to use precomputed coordinates to facilitate the computations.
I've tried to find real-world examples of using `GraphViz` with huge graphs, but didn't succeed. Though I did find a similar discussion on [Computational Science](https://scicomp.stackexchange.com/questions/3315/visualizing-very-large-link-graphs).
| null |
CC BY-SA 3.0
| null |
2014-07-23T05:29:06.903
|
2014-07-23T05:29:06.903
|
2017-04-13T12:53:55.957
|
-1
|
941
| null |
819
|
2
| null |
815
|
7
| null |
Three tables: animal, observation, and sibling. The observation has an animal_id column which links to the animal table, and the sibling table has animal_1_id and animal_2_id columns that indicates two animals are siblings for each row.
Even with 5000 animals and 100000 observations I don't think query time will be a problem for something like PostgreSQL for most reasonable queries (obviously you can construct unreasonable queries but you can do that in any system).
So I don't see how this is "relatively painful". Relative to what? The only complexity is the sibling table. In NOSQL you might store the full list of siblings in the record for each animal, but then when you add a sibling relationship you have to add it to both sibling's animal records. With the relational table approach I've outlined, it only exists once, but at the expense of having to test against both columns to find an animal's siblings.
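A minimal sketch of that three-table layout, written here with Python's built-in sqlite3 just to keep the example self-contained (PostgreSQL DDL would look essentially the same; all names and dates are illustrative):
```
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE animal (
    id INTEGER PRIMARY KEY,
    species TEXT, genus TEXT, family TEXT,
    notes TEXT                                  -- free text field
);
CREATE TABLE observation (
    id INTEGER PRIMARY KEY,
    animal_id INTEGER REFERENCES animal(id),
    observed_at TIMESTAMP,
    reference_point TEXT
);
CREATE TABLE sibling (
    animal_1_id INTEGER REFERENCES animal(id),
    animal_2_id INTEGER REFERENCES animal(id)
);
""")

# Example query: all animals spotted in a given time interval
rows = conn.execute("""
    SELECT DISTINCT a.id, a.species
    FROM animal a JOIN observation o ON o.animal_id = a.id
    WHERE o.observed_at BETWEEN '2014-07-01' AND '2014-07-31'
""").fetchall()
print(rows)
```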
I'd use PostgreSQL, and that gives you the option of using PostGIS if you have location data - this is a geospatial extension to PostgreSQL that lets you do spatial queries (point in polygon, points near a point etc) which might be something for you.
I really don't think the properties of NOSQL databases are a problem here for you - you aren't changing your schema every ten minutes, you probably do care that your database is ACID-compliant, and you don't need something web-scale.
[http://www.mongodb-is-web-scale.com/](http://www.mongodb-is-web-scale.com/) [warning: strong language]
| null |
CC BY-SA 3.0
| null |
2014-07-23T07:01:01.140
|
2014-07-23T07:01:01.140
| null | null |
471
| null |
820
|
1
|
822
| null |
7
|
112
|
Apologies if this is a very broad question; what I would like to know is how effective A/B testing (or other methods) is at measuring the effects of a design decision.
For instance we can analyse user interactions or click results, purchase/ browse decisions and then modify/tailor the results presented to the user.
We could then test the effectiveness of this design change by randomly subjecting 10% of users to the alternative model, but then how objective is this?
How do we avoid influencing the user by the model change? For instance, we could decide that search queries for 'David Beckham' are probably about football, so search results become biased towards this; but we could equally say that his lifestyle is just as relevant, yet that never makes it into the top 10 results that are returned.
I am curious how this is dealt with and how to measure this effectively.
My thoughts are that you could be in danger of pushing a model that you think is correct and the user obliges and this becomes a self-fulfilling prophecy.
I've read an article on this: [http://techcrunch.com/2014/06/29/ethics-in-a-data-driven-world/](http://techcrunch.com/2014/06/29/ethics-in-a-data-driven-world/) and also the book: [http://shop.oreilly.com/product/0636920028529.do](http://shop.oreilly.com/product/0636920028529.do) which discussed this so it piqued my interest.
|
How can we effectively measure the impact of our data decisions
|
CC BY-SA 3.0
| null |
2014-07-23T08:06:21.417
|
2014-07-23T13:49:01.423
| null | null |
95
|
[
"search"
] |
821
|
2
| null |
784
|
7
| null |
"Because its there".
The data has a seasonal pattern. So you model it. The data has a trend. So you model it. Maybe the data is correlated with the number of sunspots. So you model that. Eventually you hope to get nothing left to model than uncorrelated random noise.
But I think you've screwed up your STL computation here. Your residuals are clearly not serially uncorrelated. I rather suspect you've not told the function that your "seasonality" is a 24-hour cycle rather than an annual one. But hey you haven't given us any code or data so we don't really have a clue what you've done, do we? What do you think "seasonality" even means here? Do you have any idea?
Your data seems to have three peaks every 24 hours. Really? Is this 'gas'='gasoline'='petrol' or gas in some heating/electric generating system? Either way, if you know a priori that there's an 8 hour cycle, or an 8 hour cycle on top of a 24 hour cycle on top of what looks like a very high frequency one or two hour cycle, you put that in your model.
Actually you don't even say what your x-axis is, so maybe it's days, and then I'd fit a daily cycle, a weekly cycle, and then an annual cycle. But given how it all changes at time=85 or so, I'd not expect a model to do well on both sides of that.
With statistics (which is what this is, sorry to disappoint you but you're not a data scientist yet) you don't just robotically go "And.. Now.. I.. Fit.. An... S TL model....". You look at your data, try to get some understanding, then propose a model, fit it, test it, and use the parameters to make inferences about the data. Fitting cyclic seasonal patterns is part of that.
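For what it's worth, if you end up doing this kind of decomposition in Python rather than R, here is a minimal sketch of telling the model about a 24-hour cycle using statsmodels' STL; the series here is a placeholder, and `period=24` simply says one seasonal cycle equals 24 hourly observations:
```
import pandas as pd
from statsmodels.tsa.seasonal import STL

# Placeholder hourly series; replace with the real gas-consumption data.
consumption = pd.Series(
    range(240),
    index=pd.date_range("2014-01-01", periods=240, freq="h"),
)

# period=24 declares a daily cycle on hourly data instead of an annual one.
result = STL(consumption, period=24).fit()
result.plot()
```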
| null |
CC BY-SA 3.0
| null |
2014-07-23T12:57:37.073
|
2014-07-23T12:57:37.073
| null | null |
471
| null |
822
|
2
| null |
820
|
4
| null |
In A/B testing, bias is handled very well by ensuring visitors are randomly assigned to either version A or version B of the site. This creates independent samples drawn from the same population. Because the groups are independent and, on average, only differ in the version of the site seen, the test measures the effect of the design decision.
Slight aside: Now you might argue that the A group or B group may differ in some demographic. That commonly happens by random chance. To a certain degree this can be taken care of by covariate adjusted randomization. It can also be taken care of by adding covariates to the model that tests the effect of the design decision. It should be noted that there is still some discussion about the proper way to do this within the statistics community. Essentially A/B testing is an application of a [Randomized Control Trial](http://en.wikipedia.org/wiki/Randomized_controlled_trial) to website design. Some people disagree with adding covariates to the test. Others, such as Frank Harrel (see [Regression Modeling Strategies](http://rads.stackoverflow.com/amzn/click/0387952322)) argue for the use of covariates in such models.
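As a toy illustration of the basic comparison (before any covariate adjustment), here is a sketch of a two-proportion z-test on invented conversion counts for the two versions:
```
from math import sqrt
from scipy.stats import norm

# Invented numbers: conversions / visitors for each arm of the test
conv_a, n_a = 120, 2400   # version A (control)
conv_b, n_b = 165, 2450   # version B (new design)

p_a, p_b = conv_a / n_a, conv_b / n_b
p_pool = (conv_a + conv_b) / (n_a + n_b)
se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))

z = (p_b - p_a) / se
p_value = 2 * norm.sf(abs(z))   # two-sided
print(f"lift = {p_b - p_a:.4f}, z = {z:.2f}, p = {p_value:.4f}")
```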
I would offer the following suggestions:
- Design the study in advance so as to take care of as many sources of bias and variation as possible.
- Let the data speak for itself. As you get more data (like about searches for David Beckham), let it dominate your assumptions about how the data should be (just as the posterior dominates the prior in Bayesian analysis when the sample size becomes large).
- Make sure your data matches the assumptions of the model.
| null |
CC BY-SA 3.0
| null |
2014-07-23T13:49:01.423
|
2014-07-23T13:49:01.423
| null | null |
178
| null |
823
|
1
|
1162
| null |
18
|
311
|
There was a recent furore with [facebook experimenting on their users to see if they could alter user's emotions](http://online.wsj.com/articles/furor-erupts-over-facebook-experiment-on-users-1404085840) and now [okcupid](http://www.bbc.co.uk/news/technology-28542642).
Whilst I am not a professional data scientist I read about [data science ethics](http://columbiadatascience.com/2013/11/25/data-science-ethics/) from [Cathy O'Neill's book 'Doing Data Science'](http://shop.oreilly.com/product/0636920028529.do) and would like to know if this is something that professionals are taught at academic level (I would expect so) or something that is ignored or is lightly applied in the professional world. Particularly for those who ended up doing data science accidentally.
Whilst the linked article touched on data integrity, the book also discussed the moral ethics behind understanding the impact of the data models that are created and the impact of those models which can have adverse effects when used inappropriately (sometimes unwittingly) or when the models are inaccurate, again producing adverse results.
The article discusses a code of practice and mentions the [Data Science Association's Code of conduct](http://www.datascienceassn.org/code-of-conduct.html), is this something that is in use? Rule 7 is of particular interest (quoted from their website):
>
(a) A person who consults with a data scientist about the possibility
of forming a client-data scientist relationship with respect to a
matter is a prospective client.
(b) Even when no client-data scientist relationship ensues, a data
scientist who has learned information from a prospective client shall
not use or reveal that information.
(c) A data scientist subject to paragraph (b) shall not provide
professional data science services for a client with interests
materially adverse to those of a prospective client in the same or a
substantially related industry if the data scientist received
information from the prospective client that could be significantly
harmful to that person in the matter
Is this something that is practiced professionally? Many users blindly accept that we get some free service (mail, social network, image hosting, blog platform etc..) and agree to an EULA in order to have ads pushed at us.
Finally, how is this regulated? I often read about users being up in arms when the terms of a service change, but it seems that it requires some civil liberties organisation, class action or a [senator](http://www.cnet.com/news/senator-asks-ftc-to-investigate-facebooks-mood-study/) to react to such things before anything happens.
By the way I am not making any judgements here or saying that all data scientists behave like this, I'm interested in what is taught academically and practiced professionally.
|
How should ethics be applied in data science
|
CC BY-SA 3.0
| null |
2014-07-23T14:04:31.057
|
2016-05-27T14:02:34.900
|
2014-07-29T12:39:59.630
|
95
|
95
|
[
"social-network-analysis"
] |
824
|
2
| null |
791
|
1
| null |
(too long for a comment)
Basically, @Emre's answer is correct: a simple co-occurrence matrix and cosine distance should work well*. There's one subtlety, though - job titles are too short to carry important context. Let me explain this.
Imagine LinkedIn profiles (which are a pretty good source of data). Normally, they contain 4-10 sentences describing a person's skills and qualifications. It's pretty likely that you'll find phrases like "lead data scientist" and "professional knowledge of Matlab and R" in the same profile, but it's very unlikely to also see "junior Java developer" in it. So we may say that "lead" and "professional" (as well as "data scientist", "Matlab" and "R") often occur in the same contexts, but they are rarely found together with "junior" and "Java".
A co-occurrence matrix shows exactly this. The more two words occur in the same contexts, the more similar their vectors in the matrix will look. And cosine distance is just a good way to measure this similarity.
But what about job titles? Normally they are much shorter and don't actually create enough context to catch similarities. Luckily, you don't need the source data to be titles themselves - you need to find similarities between skills in general, not specifically in titles. So you can simply build the co-occurrence matrix from (long) profiles and then use it to measure the similarity of titles.
* - in fact, it's already worked for me on a similar project.
| null |
CC BY-SA 3.0
| null |
2014-07-23T15:46:48.977
|
2014-07-23T15:46:48.977
| null | null |
1279
| null |
825
|
1
| null | null |
5
|
208
|
Most vehicle license/number plate extractors I've found involve reading a plate from an image (OCR) but I'm interested in something that could tag instances of license plates in a body of text. Are there any such annotators out there?
|
Are there any annotators or Named Entity Recognition for license plate numbers?
|
CC BY-SA 3.0
| null |
2014-07-24T00:01:40.760
|
2014-07-24T02:23:23.703
| null | null |
1192
|
[
"text-mining"
] |
826
|
2
| null |
825
|
-1
| null |
This can be done using `regular expressions`.
2 letters followed by a number (\d denotes digits) would be
```
[A-Z]{2} \d*
```
2 or 3 letters followed by a number is
```
[A-Z]{2,3} \d*
```
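A quick illustration of applying such a pattern in Python; the plate format here is invented (two or three letters, an optional space, then digits), so the expression would need adjusting for real formats:
```
import re

# Two or three upper-case letters, an optional space, then one or more digits
plate_pattern = re.compile(r"\b[A-Z]{2,3} ?\d+\b")

text = "Witness reported vehicles AB 1234 and XYZ 987 leaving the scene."
print(plate_pattern.findall(text))   # ['AB 1234', 'XYZ 987']
```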
| null |
CC BY-SA 3.0
| null |
2014-07-24T00:34:41.130
|
2014-07-24T00:34:41.130
| null | null |
325
| null |
827
|
2
| null |
825
|
4
| null |
There are a lot of pretty decent tools out there for text annotation in general, and given the broad nature of the task you're approaching (license plates are about as general as words), the annotation tools you are looking at should probably come from the more classical tools for annotation.
There was actually a pretty good discussion about annotation tools on [this](https://datascience.stackexchange.com/questions/223/how-to-annotate-text-documents-with-meta-data/404#404) question, which should actually apply to this problem. The most relied-upon thing in annotation right now is probably `brat`. You can learn more about `brat` [here](http://brat.nlplab.org/).
Hope that helps! Let me know if you've got any more questions.
| null |
CC BY-SA 3.0
| null |
2014-07-24T02:23:23.703
|
2014-07-24T02:23:23.703
|
2017-04-13T12:50:41.230
|
-1
|
548
| null |
828
|
2
| null |
812
|
4
| null |
Reporting back: I ended up coding GraphML and using yEd for visualization (just because I am familiar with this combination; I bet Gephi or GraphViz would work fine and might even be better). Since I computed the location of all nodes, memory was not as big of an issue. Coding GraphML is a little easier compared to coding SVG, since I don't have to explicitly specify the placement of edges.
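For anyone going the same route programmatically, a minimal sketch of writing a graph with precomputed coordinates to GraphML via networkx might look like this (note that yEd stores node geometry in its own GraphML extension, so a viewer may ignore these plain x/y attributes or need them mapped):
```
import networkx as nx

# Toy graph; in the real case the nodes, edges and coordinates come from your data.
g = nx.Graph()
g.add_edge("a", "b")
g.add_edge("b", "c")

# Store precomputed layout coordinates as ordinary node attributes.
positions = {"a": (0.0, 0.0), "b": (1.0, 0.5), "c": (2.0, 0.0)}
for node, (x, y) in positions.items():
    g.nodes[node]["x"] = x
    g.nodes[node]["y"] = y

nx.write_graphml(g, "graph_with_positions.graphml")
```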
| null |
CC BY-SA 3.0
| null |
2014-07-24T03:31:32.087
|
2014-07-24T03:31:32.087
| null | null |
192
| null |
829
|
1
|
830
| null |
4
|
96
|
While running the below Pig script I am getting an error on line 4:
If I use `GROUP` then I get an error.
If I change `GROUP` to `group` on line 4, then the script runs.
What is the difference between `group` and `GROUP`?
```
LINES = LOAD '/user/cloudera/datapeople.csv' USING PigStorage(',') AS ( firstname:chararray, lastname:chararray, address:chararray, city:chararray, state:chararray, zip:chararray );
WORDS = FOREACH LINES GENERATE FLATTEN(TOKENIZE(zip)) AS ZIPS;
WORDSGROUPED = GROUP WORDS BY ZIPS;
WORDBYCOUNT = FOREACH WORDSGROUPED GENERATE GROUP AS ZIPS, COUNT(WORDS);
WORDSSORT = ORDER WORDBYCOUNT BY $1 DESC;
DUMP WORDSSORT;
```
|
Pig script code error?
|
CC BY-SA 4.0
| null |
2014-07-24T06:26:07.290
|
2019-06-07T17:09:20.403
|
2019-06-07T17:09:20.403
|
29169
|
1314
|
[
"bigdata",
"apache-hadoop"
] |
830
|
2
| null |
829
|
2
| null |
'group' in strictly lower case in the FOREACH is the thing you are looping/grouping over.
[http://squarecog.wordpress.com/2010/05/11/group-operator-in-apache-pig/](http://squarecog.wordpress.com/2010/05/11/group-operator-in-apache-pig/) says:
>
When you group a relation, the result is a new relation with two
columns: “group” and the name of the original relation.
Column names are case sensitive, so you have to use lower-case 'group' in your FOREACH.
'GROUP' in upper case is the grouping operator. You can't mix them. So don't do that.
| null |
CC BY-SA 3.0
| null |
2014-07-24T07:00:13.187
|
2014-07-24T07:00:13.187
| null | null |
471
| null |
831
|
1
| null | null |
5
|
96
|
Have you heard of the "Data Science Association"?
URL: [http://www.datascienceassn.org/](http://www.datascienceassn.org/)
Do you expect it to become a professional body like the Actuaries Institute?
If yes, then why?
If no, then why not and do you see anyone else becoming the professional body?
Lastly, is this question "on-topic" ?
|
Data Science Association?
|
CC BY-SA 3.0
| null |
2014-07-24T07:38:07.163
|
2014-07-24T09:32:20.100
|
2014-07-24T09:32:20.100
|
366
|
366
|
[
"knowledge-base"
] |
832
|
1
| null | null |
7
|
1621
|
My data contains a set of start times and duration for an action. I would like to plot this so that for a given time slice I can see how many actions are active. I'm currently thinking of this as a histogram with time on the x axis and number of active actions on the y axis.
My question is, how should I adjust the data so that this is able to be plotted?
The times for an action can be between 2 seconds and a minute. And, at any given time I would estimate there could be about 100 actions taking place. Ideally a single plot would be able to show hours of data. The accuracy of the data is in milliseconds.
In the past, the way that I have done this was to count, for each second, how many actions started, ended, or were active. This gave me a count of active actions for each second. The issue I found with this technique was that it made it difficult to adjust the time slice that I was looking at: aggregating to a time slice of a minute was difficult to compute, and looking at time slices of less than a second was impossible.
I'm open to any advice on how to think about this issue.
Thanks in advance!
|
How do you plot overlapping durations?
|
CC BY-SA 3.0
| null |
2014-07-24T14:04:09.533
|
2014-07-26T21:03:13.470
|
2014-07-25T18:30:26.490
|
2702
|
2702
|
[
"visualization"
] |
833
|
2
| null |
832
|
1
| null |
This can be done in `R` using `ggplot`. Based on [this](https://stackoverflow.com/questions/18102224/drawing-gantt-charts-with-r-to-sub-second-accuracy) question, it could be done with this code where `limits` is the date range of the plot.
```
library(ggplot2)
library(reshape2)  # for melt()
library(scales)    # for date_breaks()

tasks <- c("Task1", "Task2")
dfr <- data.frame(
  name       = factor(tasks, levels = tasks),
  start.date = c("2014-08-07 09:03:25.815", "2014-08-07 09:03:25.956"),
  end.date   = c("2014-08-07 09:03:28.300", "2014-08-07 09:03:30.409")
)
mdfr <- melt(dfr, measure.vars = c("start.date", "end.date"))
mdfr$time <- as.POSIXct(mdfr$value)

ggplot(mdfr, aes(time, name)) +
  geom_line(size = 6) +
  xlab("") + ylab("") +
  theme_bw() +
  scale_x_datetime(breaks = date_breaks("2 sec"),
                   limits = as.POSIXct(c('2014-08-07 09:03:24', '2014-08-07 09:03:29')))
```

| null |
CC BY-SA 3.0
| null |
2014-07-24T21:14:45.793
|
2014-07-24T21:14:45.793
|
2017-05-23T12:38:53.587
|
-1
|
325
| null |
834
|
1
| null | null |
17
|
8539
|
I am trying to build a recommendation system using collaborative filtering. I have the usual `[user, movie, rating]` information. I would like to incorporate an additional feature like 'language' or 'duration of movie'. I am not sure what techniques I could use for such a problem.
Please suggest references or packages in python/R.
|
Recommending movies with additional features using collaborative filtering
|
CC BY-SA 4.0
| null |
2014-07-25T00:58:12.253
|
2021-02-18T20:43:25.357
|
2019-06-08T03:47:52.223
|
29169
|
1131
|
[
"python",
"r",
"recommender-system"
] |
835
|
2
| null |
793
|
4
| null |
Consider, try, and perhaps even use multiple databases. It's not just a "performance" issue at play here. It's really going to come down to your requirements. How much data are you talking about? What kind of data? How fast do you need it? Are you more read heavy or write heavy?
Here's one thing you can't do in a SQL database: Calculate sentiment. [http://www.slideshare.net/shift8/mongodb-machine-learning](http://www.slideshare.net/shift8/mongodb-machine-learning)
Of course the speed in that case may not be fast enough for your needs, but it is something that's possible. With some caching of specific aggregate values, it was quite acceptable even. Why would you do this? Convenience.
Convenience really is something that you're going to be persuaded by. That's exactly why (in my opinion) NoSQL databases were created. Performance too of course, but I'm trying to discount benchmarks and focus more on other concerns.
MongoDB (and some other NoSQL) databases have some very powerful features such as built-in map/reduce. This could result in a savings both in cost and time over using something like Hadoop. Or it could provide a prototype or MVP to launch a larger business.
What about graph databases? They're "NoSQL" too. Look at databases like OrientDB. If you want to argue performance ...I don't think you're gonna show me a SQL database that's faster there =) ...and graph databases have some really amazing application based on what you need to do.
Rule of technology (and the internet): don't get too comfortable with one thing. You're gonna be limited and set yourself up for failure.
| null |
CC BY-SA 3.0
| null |
2014-07-25T03:05:38.107
|
2014-07-25T03:05:38.107
| null | null |
2711
| null |
836
|
2
| null |
834
|
17
| null |
Here some resources that might be helpful:
- Recommenderlab - a framework and open source software for developing and testing recommendation algorithms. Corresponding R package recommenderlab.
- The following blog post illustrates the use of recommenderlab package (which IMHO can be generalized for any open source recommendation engine) for building movie recommendation application, based on collaborative filtering.
- Research on recommender systems - a nice webpage with resources on the topic, maintained by Recommenderlab's lead developer Michael Hahsler.
- Mortar Recommendation Engine - an open source customizable recommendation engine for Hadoop and Pig, written in Python and Java. Company, sponsoring the development of this project, Mortar Data, offers general commercial cloud platform for development and hosting data science software projects, including ones based on the Mortar Recommendation Engine (development and hosting of public projects are free): http://www.mortardata.com. Mortar Data provides help in form of public Q&A forum (https://answers.mortardata.com) as well as a comprehensive tutorial on building recommendation engine using open technologies (http://help.mortardata.com/data_apps/recommendation_engine).
- "Introduction to Recommender Systems" - a relevant Coursera course (MOOC), which content and description provide additional resources on the topic.
- PredictionIO - an open source machine learning server software, which allows building data science applications, including recommendation systems (source code is available on GitHub). PredictionIO includes a built-in recommendation engine (http://docs.prediction.io/current/engines/itemrec/index.html) and supports a wide range of programming languages and frameworks via RESTful APIs as well as SDKs/plug-ins. PredictionIO maintains an Amazon Machine Image on AWS Marketplace for deploying applications on the AWS infrastructure.
- Additional open source software projects, relevant to the topic (discovered via MLOSS website on machine learning open source software: http://www.mloss.org):
  - Jubatus
  - MyMediaLite
  - TBEEF
  - PREA
  - CofiRank
- The following relevant R blog posts are also interesting:
"Simple tools for building a recommendation engine"
"Recommendation System in R"
| null |
CC BY-SA 4.0
| null |
2014-07-25T04:54:07.787
|
2021-02-18T20:43:25.357
|
2021-02-18T20:43:25.357
|
29169
|
2452
| null |
837
|
2
| null |
783
|
1
| null |
I would look at the `raster` package for this, which can read in raw binary data and present it as NxM grids. It can even extract subsets of large binary grids without having to read in the whole file (the R raster object itself is just a proxy to the data, not the data itself).
| null |
CC BY-SA 3.0
| null |
2014-07-25T12:59:00.850
|
2014-07-25T12:59:00.850
| null | null |
471
| null |
838
|
1
| null | null |
3
|
109
|
While running the Pig script below I get an error on line 4 when I use `GROUP`. If I change `GROUP` to `group` in line 4, then the script runs. What is the difference between `group` and `GROUP`?
```
LINES = LOAD '/user/cloudera/datapeople.csv' USING PigStorage(',') AS ( firstname:chararray, lastname:chararray, address:chararray, city:chararray, state:chararray, zip:chararray );
WORDS = FOREACH LINES GENERATE FLATTEN(TOKENIZE(zip)) AS ZIPS;
WORDSGROUPED = GROUP WORDS BY ZIPS;
WORDBYCOUNT = FOREACH WORDSGROUPED GENERATE GROUP AS ZIPS, COUNT(WORDS);
WORDSSORT = ORDER WORDBYCOUNT BY $1 DESC;
DUMP WORDSSORT;
```
|
Pig latin code error
|
CC BY-SA 3.0
| null |
2014-07-24T06:34:50.083
|
2014-07-25T15:02:21.217
| null | null |
1314
|
[
"apache-hadoop",
"bigdata"
] |
839
|
2
| null |
838
|
0
| null |
When we group data, Pig creates a new key named "group", puts all the tuples matching that key into a bag, and associates the bag with the key. So after the group operation, the schema of the grouped data will be something like:
```
raw = load '$input' using PigStorage('\u0001') as (id1:int, name:chararray);
groupdata1 = group raw by (id1,name);
describe groupdata1;
{group: (id1: int,name: chararray),raw: {(id1: int,name: chararray)}}
```
The `GROUP` you are trying to access in line 4 is one of the attributes of the schema from the last statement. These attribute names are case-sensitive, so using `GROUP` will produce an error saying it doesn't exist in the schema. You need to use `group` to access it.
| null |
CC BY-SA 3.0
| null |
2014-07-25T09:05:09.493
|
2014-07-25T09:05:09.493
| null | null |
63174
| null |
840
|
1
| null | null |
2
|
553
|
I've been trying to create a similarity matrix in pandas with a matrix multiplication operation on a document-term count matrix with 2264 rows and 20475 columns.
The calculation completes in IPython but inspection shows the results all come back as `NaN`.
I've also tried doing the same job in numpy, tried converting the original matrix `to_sparse` and even re-casting the values as integers, but still no joy.
Can anyone suggest the best approach to tackle the problem?
EDIT: Here's my code thus far:
```
path = "../../reuters.db"
%pylab inline
import pandas as pd
import numpy as np
import pandas.io.sql as psql
import sqlite3 as lite
con = lite.connect(path)
with con:
sql = "SELECT * FROM Frequency"
df = psql.frame_query(sql, con)
print df.shape
df = df.rename(columns={"term":"term_id", "count":"count_id"})
pivoted = df.pivot('docid', 'term_id', 'count_id')
pivoted.to_sparse()
similarity_matrix = pivoted.dot(pivoted.T)
```
|
How to fix similarity matrix in pandas returning all NaNs?
|
CC BY-SA 4.0
| null |
2014-07-25T17:18:21.393
|
2020-07-31T14:25:36.850
|
2020-07-31T14:25:36.850
|
98307
|
974
|
[
"pandas",
"similarity"
] |
841
|
2
| null |
834
|
5
| null |
Instead of collaborative filtering I would use the matrix factorization approach, wherein users and movies alike are represented by vectors of latent features whose dot products yield the ratings. Normally one merely selects the rank (number of features) without regard to what the features represent, and the algorithm does the rest. Like PCA, the result is not immediately interpretable, but it yields good results. What you want to do is extend the movie matrix to include the additional features you mentioned and make sure that they stay fixed as the algorithm estimates the two matrices using regularization. The corresponding entries in the user matrix will be initialized randomly, then estimated by the matrix factorization algorithm. It's a versatile and performant approach, but it takes some understanding of machine learning, or linear algebra at least.
I saw a nice ipython notebook a while back but I can't find it right now, so I'll refer you to [another one](http://nbviewer.ipython.org/github/diktat/CPSC540machinelearning/blob/master/1.4%20Collaborative%20Filtering%20for%20Movie%20Recommendation.ipynb) which, while not as nice, still clarifies some of the maths.
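In the meantime, here is a bare-bones sketch of the idea (my own toy code, not taken from either notebook): factorize an explicit ratings matrix by gradient steps on the observed entries only, with L2 regularization. Extending it as described above would mean appending fixed feature columns to the item matrix and simply skipping their gradient updates.
```
import numpy as np

ratings = np.array([            # 0 means "not rated"
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [0, 0, 5, 4],
], dtype=float)
mask = ratings > 0

rank, lam, lr = 2, 0.1, 0.01
rng = np.random.default_rng(0)
U = rng.normal(scale=0.1, size=(ratings.shape[0], rank))   # user factors
V = rng.normal(scale=0.1, size=(ratings.shape[1], rank))   # item factors

for _ in range(2000):
    err = mask * (ratings - U @ V.T)      # error on observed ratings only
    U += lr * (err @ V - lam * U)         # gradient step with regularization
    V += lr * (err.T @ U - lam * V)

print(np.round(U @ V.T, 1))               # reconstructed rating matrix
```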
| null |
CC BY-SA 3.0
| null |
2014-07-25T18:12:49.847
|
2014-07-25T18:12:49.847
| null | null |
381
| null |
842
|
1
|
843
| null |
28
|
42811
|
I don't know if this is a right place to ask this question, but a community dedicated to Data Science should be the most appropriate place in my opinion.
I have just started with Data Science and Machine learning. I am looking for long term project ideas which I can work on for like 8 months.
A mix of Data Science and Machine learning would be great.
A project big enough to help me understand the core concepts and also implement them at the same time would be very beneficial.
|
Data Science Project Ideas
|
CC BY-SA 3.0
| null |
2014-07-25T18:36:31.340
|
2020-08-20T18:57:12.080
|
2014-07-27T03:35:06.853
|
1352
|
2725
|
[
"machine-learning",
"bigdata",
"dataset"
] |
843
|
2
| null |
842
|
29
| null |
I would try to analyze and solve one or more of the problems published on [Kaggle Competitions](https://www.kaggle.com/competitions). Note that the competitions are grouped by their expected complexity, from `101` (bottom of the list) to `Research` and `Featured` (top of the list). A color-coded vertical band is a visual guideline for grouping. You can assess the time you could spend on a project by adjusting the expected length of the corresponding competition, based on your skills and experience.
A number of data science project ideas can be found by browsing [Coursolve](https://www.coursolve.org/browse-needs?query=Data%20Science) webpage.
If you have skills and desire to work on a real data science project, focused on social impacts, visit [DataKind](http://www.datakind.org/projects) projects page. More projects with social impacts focus can be found at [Data Science for Social Good](http://dssg.io/projects) webpage.
Science Project Ideas page at [My NASA Data](http://mynasadata.larc.nasa.gov/804-2) site looks like another place to visit for inspiration.
If you would like to use open data, this long list of applications on `Data.gov` can provide you with some interesting [data science](http://www.data.gov/applications) project ideas.
| null |
CC BY-SA 4.0
| null |
2014-07-25T20:50:14.540
|
2020-08-20T18:57:12.080
|
2020-08-20T18:57:12.080
|
98307
|
2452
| null |
844
|
1
| null | null |
7
|
2523
|
so I'm using Spark to do sentiment analysis, and I keep getting errors with the serializers it uses (I think) to pass python objects around.
```
PySpark worker failed with exception:
Traceback (most recent call last):
File "/Users/abdul/Desktop/RSI/spark-1.0.1-bin- hadoop1/python/pyspark/worker.py", line 77, in main
serializer.dump_stream(func(split_index, iterator), outfile)
File "/Users/abdul/Desktop/RSI/spark-1.0.1-bin- hadoop1/python/pyspark/serializers.py", line 191, in dump_stream
self.serializer.dump_stream(self._batched(iterator), stream)
File "/Users/abdul/Desktop/RSI/spark-1.0.1-bin- hadoop1/python/pyspark/serializers.py", line 123, in dump_stream
for obj in iterator:
File "/Users/abdul/Desktop/RSI/spark-1.0.1-bin- hadoop1/python/pyspark/serializers.py", line 180, in _batched
for item in iterator:
TypeError: __init__() takes exactly 3 arguments (2 given)
```
and the code for serializers is available [here](https://spark.apache.org/docs/latest/api/python/pyspark.serializers-pysrc.html#PickleSerializer.dumps)
and my code is [here](https://github.com/seashark97/Scalable-Sentiment-Analysis/blob/master/spark_test.py)
|
Using Apache Spark to do ML. Keep getting serializing errors
|
CC BY-SA 3.0
| null |
2014-07-25T21:03:44.663
|
2016-05-10T13:59:41.870
|
2016-05-10T13:59:41.870
|
21
|
2726
|
[
"apache-spark",
"pyspark",
"sentiment-analysis"
] |
845
|
2
| null |
844
|
10
| null |
Most often serialization error in (Py)Spark means that some part of your distributed code (e.g. functions passed to `map`) has dependencies on non-serializable data. Consider following example:
```
rdd = sc.parallelize(range(5))
rdd = rdd.map(lambda x: x + 1)
rdd.collect()
```
Here you have distributed collection and lambda function to send to all workers. Lambda is completely self-containing, so it's easy to copy its binary representation to other nodes without any worries.
Now let's make things a bit more interesting:
```
f = open("/etc/hosts")
rdd = sc.parallelize(range(100))
rdd = rdd.map(lambda x: f.read())
rdd.collect()
f.close()
```
Boom! Strange error in the serialization module! What just happened is that we attempted to pass `f`, which is a file object, to the workers. Obviously, a file object is a handle to local data and thus cannot be sent to other machines.
---
So what's happening in your specific code? Without actual data and knowing record format, I cannot debug it completely, but I guess that problem goes from this line:
```
def vectorizer(text, vocab=vocab_dict):
```
In Python, default values for keyword arguments are evaluated where the function is defined. When you call `sc.parallelize(...).map(vectorizer)` just after its definition, `vocab_dict` is available locally, but the remote workers know absolutely nothing about it. Thus the function is called with fewer parameters than it expects, which results in the `__init__() takes exactly 3 arguments (2 given)` error.
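One way around it (a sketch under my own assumptions about your data; the vocabulary and texts are placeholders) is to ship the dictionary explicitly as a broadcast variable instead of relying on a default argument:
```
from pyspark import SparkContext

sc = SparkContext(appName="sentiment")           # assumes a local/standalone setup
vocab_dict = {"good": 0, "bad": 1, "great": 2}   # hypothetical vocabulary
bc_vocab = sc.broadcast(vocab_dict)              # serialized once, sent to every worker

def vectorizer(text):
    vocab = bc_vocab.value                       # worker-side copy of the dictionary
    vec = [0] * len(vocab)
    for word in text.split():
        if word in vocab:
            vec[vocab[word]] += 1
    return vec

texts = sc.parallelize(["good movie", "bad plot but great cast"])
print(texts.map(vectorizer).collect())
```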
Also note that you are following a very inefficient pattern of `sc.parallelize(...)...collect()` calls. First you spread your collection across the entire cluster, do some computations, and then pull the result back. But sending data back and forth is pretty pointless here. Instead, you can just do these computations locally, and run Spark's parallel processes only when you work with really big datasets (like your main `amazon_dataset`, I guess).
| null |
CC BY-SA 3.0
| null |
2014-07-26T00:11:03.637
|
2014-07-26T00:11:03.637
| null | null |
1279
| null |
846
|
2
| null |
842
|
6
| null |
Take something from your everyday life. Create a predictor of traffic jams in your region, craft a personalised music recommender, analyse the car market, etc. Choose a real problem that you want to solve - this will not only keep you motivated, but also make you go through the whole development cycle, from data collection to hypothesis testing.
| null |
CC BY-SA 3.0
| null |
2014-07-26T01:12:08.167
|
2014-07-26T01:12:08.167
| null | null |
1279
| null |
847
|
2
| null |
842
|
3
| null |
The [Introduction to Data Science](https://www.coursera.org/course/datasci) course that is being run on Coursera now includes a real-world project assignment where companies post their problems and students are encouraged to solve them. This is done via [coursolve.com](https://www.coursolve.org/) (already mentioned here).
More information [here](https://class.coursera.org/datasci-002/wiki/OptionalRealWorldProject) (you have to be enrolled in the course to see that link)
| null |
CC BY-SA 3.0
| null |
2014-07-26T08:37:52.823
|
2014-07-26T08:37:52.823
| null | null |
816
| null |
848
|
2
| null |
761
|
71
| null |
K-means is not the most appropriate algorithm here.
The reason is that k-means is designed to minimize variance. This is, of course, appealing from a statistical and signal processing point of view, but your data is not "linear".
Since your data is in latitude, longitude format, you should use an algorithm that can handle arbitrary distance functions, in particular geodetic distance functions. Hierarchical clustering, PAM, CLARA, and DBSCAN are popular examples of this.
[This](https://www.youtube.com/watch?v=QsGOoWdqaT8) video recommends OPTICS clustering.
The problems of k-means are easy to see when you consider points close to the +-180 degrees wrap-around. Even if you hacked k-means to use Haversine distance, in the update step, when it recomputes the mean, the result will be badly screwed up. Worst case is, k-means will never converge!
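As a rough illustration (not part of the original answer; the coordinates and the 50 km threshold are made up), scikit-learn's DBSCAN can cluster lat/lon points with a geodetic distance if you feed it radians and use the haversine metric:
```
import numpy as np
from sklearn.cluster import DBSCAN

coords_deg = np.array([
    [52.52, 13.40],   # Berlin
    [52.50, 13.42],
    [48.85, 2.35],    # Paris
    [48.86, 2.33],
])

earth_radius_km = 6371.0
eps_km = 50.0   # points within ~50 km end up in the same cluster

db = DBSCAN(
    eps=eps_km / earth_radius_km,   # haversine works in radians on a unit sphere
    min_samples=2,
    metric="haversine",
).fit(np.radians(coords_deg))

print(db.labels_)   # e.g. [0 0 1 1]
```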
| null |
CC BY-SA 4.0
| null |
2014-07-26T16:04:25.363
|
2020-08-02T14:02:53.883
|
2020-08-02T14:02:53.883
|
98307
|
924
| null |
849
|
2
| null |
832
|
2
| null |
Since you want to show so much data, I think that your best choice is going interactive. Check out this [demo](http://demo.zunzun.se/intervals/index.html), it is close to what you want but not quite.
It is very difficult to show a lot of data in a single diagram, together with the finest details and the bird's-eye view. But you can let the user interact and look for the details. To show counts, one option is to use color-coding, as in the example whose code is [here](https://gist.github.com/dsign/4c598cfbfc81f6d491d5).
There the RGB channels have been used to encode (the logarithm of) the number of active events (red), events starting (green) and events ending (blue) for windows of different sizes. The X axis is time, and the Y axis represents window size, that is, duration. Thus, a point with coordinates (10, 4) represents the interval of time that goes from 10 to 14.
To make a lot of data more detailed, it could be a good idea to make the diagram zoomable (like in the demo before), and to give the user the possibility of visualizing just one channel/magnitude.
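For what it's worth, here is a toy sketch of the colour-coding idea described above (my own code, with random event counts purely for illustration): three counts per (time, window-size) cell become the R, G and B channels of an image.
```
import numpy as np
import matplotlib.pyplot as plt

n_time, n_window = 200, 50
rng = np.random.default_rng(0)
active   = rng.poisson(20, (n_window, n_time))   # red: active events
starting = rng.poisson(5,  (n_window, n_time))   # green: events starting
ending   = rng.poisson(5,  (n_window, n_time))   # blue: events ending

img = np.stack([active, starting, ending], axis=-1).astype(float)
img = np.log1p(img)          # log scale, as described above
img /= img.max()             # normalise into [0, 1] for display

plt.imshow(img, origin="lower", aspect="auto")
plt.xlabel("time")
plt.ylabel("window size (duration)")
plt.show()
```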
| null |
CC BY-SA 3.0
| null |
2014-07-26T21:03:13.470
|
2014-07-26T21:03:13.470
| null | null |
2575
| null |
850
|
2
| null |
748
|
6
| null |
If you've got prior information then you should certainly not use simple mean in a split test. I assume you're trying to just predict which group will produce the greatest amount of revenue overall, by trying to emulate the underlying distribution.
Firstly, it's worth noting that any metrics you choose will actually reduce to mean in a pretty trivial way. Eventually mean will necessarily work out, though using a standard bayesian method to estimate the mean is probably your best bet.
If you've got a prior then using a standard bayesian approach to update the prior on your mean revenue is probably the best way to do it. Basically, just take the individual results you get and update a multinomial distribution representing your prior in each case.
If you want a fuller background on multinomial distributions as Bayesian priors, [this](http://research.microsoft.com/en-us/um/people/minka/papers/minka-multinomial.pdf) Microsoft paper does a pretty good job of outlining it. In general, I wouldn't care so much about the fact that your distribution is technically discrete, as a multinomial distribution will effectively interpolate across your solution space, giving you a continuous distribution that is a very good approximation of your discrete space.
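To make the update concrete, here is a hedged sketch using the standard Dirichlet-multinomial machinery (all numbers are invented; your revenue values and prior pseudo-counts would come from your own data): a Dirichlet prior over a small set of possible per-visit revenue values, updated with the observed counts in each group, and compared by Monte Carlo sampling.
```
import numpy as np

revenue_values = np.array([0.0, 5.0, 10.0, 50.0])   # discrete outcomes per visit
prior = np.array([8.0, 1.0, 0.5, 0.1])              # pseudo-counts from prior knowledge

counts_a = np.array([900, 60, 30, 10])              # observed outcomes, variant A
counts_b = np.array([880, 80, 35, 5])               # observed outcomes, variant B

post_a = prior + counts_a                           # Dirichlet posterior parameters
post_b = prior + counts_b

# Posterior probability that A's expected revenue per visit beats B's.
samples_a = np.random.dirichlet(post_a, 10000) @ revenue_values
samples_b = np.random.dirichlet(post_b, 10000) @ revenue_values
print((samples_a > samples_b).mean())
```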
| null |
CC BY-SA 3.0
| null |
2014-07-27T03:58:22.907
|
2014-07-27T03:58:22.907
| null | null |
548
| null |
851
|
1
| null | null |
6
|
730
|
## Image Similarity based on Color Palette Distribution
I am trying to compute similarity between two images based on their color palette distribution, let's say I have two sets of key value pairs as follows,
Img1: `{'Brown': 14, 'White': 13, 'Black': 40, 'Gray': 31}`
Img2: `{'Pink': 82, 'Brown': 8, 'White': 7}`
Where the numbers denote the % of that color present in the image. What would be the best way to compute similarity on a scale of 0-100 between the two images?
|
Computing Image Similarity based on Color Distribution
|
CC BY-SA 3.0
| null |
2014-07-27T21:54:05.003
|
2020-08-17T20:32:41.403
| null | null |
2744
|
[
"data-mining",
"clustering",
"similarity"
] |
853
|
1
|
858
| null |
13
|
2818
|
I have implemented an NER system using the CRF algorithm with my handcrafted features, and it gave quite good results. The thing is that I used lots of different features, including POS tags and lemmas.
Now I want to build the same NER for a different language. The problem here is that I can't use POS tags and lemmas. I started reading articles about deep learning and unsupervised feature learning.
My question is:
Is it possible to use methods for unsupervised feature learning with the CRF algorithm? Did anyone try this and get any good results? Is there any article or tutorial about this matter?
I still don't completely understand this way of feature creation, so I don't want to spend too much time on something that won't work. So any information would be really helpful. Creating a whole NER system based on deep learning is a bit too much for now.
|
Unsupervised feature learning for NER
|
CC BY-SA 3.0
| null |
2014-07-28T07:19:49.877
|
2018-04-19T02:02:47.127
|
2017-06-30T14:45:37.653
|
31513
|
2750
|
[
"nlp",
"text-mining",
"feature-extraction"
] |
854
|
2
| null |
554
|
5
| null |
I like @Kallestad's answer very much, but I would like to add a meta-step: make sure that you understand how the data were collected, and what types of constraints there are.
I think it is very common to assume that there were no non-obvious steps when the data were collected, but this is not the case: most of the time, some process or individual did something with the data, and these steps can and will influence the shape of the data.
Two examples:
I had a study recently where the data were collected by various contractors worldwide. I was not at the briefing, so that was opaque to me. Unfortunately, the measurements were off for some parts of France: people all liked ice cream, but we expected a random distribution. There was no obvious reason for this uniformity, so I began to hunt for the errors. When I queried the contractors, one had misunderstood the briefing and selected only ice-cream lovers from his database.
The second error was more challenging: when doing some geographic analysis, I found that a lot of people had extremely large movement patterns, which suggested that a lot of them traveled from Munich to Hamburg in minutes. When I spoke with people upstream, they found a subtle bug in their data aggregation software, which had gone unnoticed before.
Conclusions:
- Do not assume that your data was collected by perfect processes /humans.
- Do try to understand the limits of your data providers.
- Look at individual patterns / values and try to determine if they are logical (easy for movement / geographic data)
| null |
CC BY-SA 3.0
| null |
2014-07-28T08:36:53.633
|
2014-07-28T08:36:53.633
| null | null |
791
| null |
855
|
2
| null |
554
|
4
| null |
Below you can find a copy of my answer to a related (however, focused on data cleaning aspect) question here on Data Science StackExchange ([https://datascience.stackexchange.com/a/722/2452](https://datascience.stackexchange.com/a/722/2452)), provided in its entirety for readers' convenience. I believe that it partially answers your question as well and hope it is helpful. While the answer is focused on `R` ecosystem, similar packages and/or libraries can be found for other data analysis environments. Moreover, while the two cited papers on data preparation also contain examples in R, these papers present general workflow (framework) and best practices that are applicable to any data analysis environment.
R contains some standard functions for data manipulation, which can be used for data cleaning, in its base package (`gsub`, `transform`, etc.), as well as in various third-party packages, such as stringr, reshape, reshape2, and plyr. Examples and best practices of usage for these packages and their functions are described in the following paper: [http://vita.had.co.nz/papers/tidy-data.pdf](http://vita.had.co.nz/papers/tidy-data.pdf).
Additionally, R offers some packages specifically focused on data cleaning and transformation:
- editrules
- deducorrect
- StatMatch
- MatchIt
- DataCombine
A comprehensive and coherent approach to data cleaning in R, including examples and use of editrules and deducorrect packages, as well as a description of workflow (framework) of data cleaning in R, is presented in the following paper, which I highly recommend: [http://cran.r-project.org/doc/contrib/de_Jonge+van_der_Loo-Introduction_to_data_cleaning_with_R.pdf](http://cran.r-project.org/doc/contrib/de_Jonge+van_der_Loo-Introduction_to_data_cleaning_with_R.pdf).
| null |
CC BY-SA 4.0
| null |
2014-07-28T12:07:25.573
|
2021-03-13T23:23:44.427
|
2021-03-13T23:23:44.427
|
29169
|
2452
| null |
858
|
2
| null |
853
|
6
| null |
Yes, it is entirely possible to combine unsupervised learning with the CRF model. In particular, I would recommend that you explore the possibility of using [word2vec](https://code.google.com/p/word2vec/) features as inputs to your CRF.
Word2vec trains a model to distinguish between words that are appropriate for a given context and words that are randomly selected. Selected weights of the model can then be interpreted as a dense vector representation of a given word.
These dense vectors have the appealing property that words that are semantically or syntactically similar have similar vector representations. Basic vector arithmetic even reveals some interesting learned relationships between words.
For example, vector("Paris") - vector("France") + vector("Italy") yields a vector that is quite similar to vector("Rome").
At a high level, you can think of word2vec representations as being similar to LDA or LSA representations, in the sense that you can convert a sparse input vector into a dense output vector that contains word similarity information.
For that matter, LDA and LSA are also valid options for unsupervised feature learning -- both attempt to represent words as combinations of "topics" and output dense word representations.
For English text Google distributes word2vec models pretrained on a huge 100 billion word Google News dataset, but for other languages you'll have to train your own model.
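As a minimal sketch of what that looks like in practice (my own example, not from any particular paper; parameter names follow gensim 4.x, where older releases use `size` instead of `vector_size`), you could train vectors with gensim and hand the per-token vectors to your CRF as dense features:
```
from gensim.models import Word2Vec

# Stand-in for your real tokenized corpus in the target language.
sentences = [
    ["the", "bank", "approved", "the", "loan"],
    ["she", "sat", "on", "the", "river", "bank"],
    ["the", "loan", "was", "approved", "quickly"],
]

model = Word2Vec(sentences, vector_size=50, window=3, min_count=1, epochs=50)

vec = model.wv["bank"]                 # dense feature vector for one token
print(model.wv.most_similar("loan"))   # nearest neighbours in vector space

# CRF toolkits that expect discrete features often take bucketed versions of
# each vector dimension, e.g. a feature like "w2v_3=+0.2" per token.
```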
| null |
CC BY-SA 3.0
| null |
2014-07-28T14:48:37.743
|
2014-07-29T03:29:42.940
|
2014-07-29T03:29:42.940
|
684
|
684
| null |
859
|
1
| null | null |
8
|
1076
|
What are the different classes of data science problems that can be solved using mapreduce programming model?
|
Data science and MapReduce programming model of Hadoop
|
CC BY-SA 3.0
| null |
2014-07-28T16:17:49.823
|
2016-08-05T09:30:28.207
|
2014-07-30T03:13:17.827
| null |
2643
|
[
"apache-hadoop",
"map-reduce"
] |
860
|
2
| null |
554
|
1
| null |
I'll add one thing - if possible, do a reasonableness check by comparing your data against some other source. It seems that whenever I fail to do this, I get burnt :(
| null |
CC BY-SA 3.0
| null |
2014-07-28T16:23:39.600
|
2014-07-28T16:23:39.600
| null | null |
1241
| null |
861
|
2
| null |
859
|
9
| null |
Data Science has many different sub-areas, as described in [my post](http://www.datasciencecentral.com/profiles/blogs/ingredients-of-data-science-1). For nearly every area, scientists and developers have made significant contributions. To learn more about what can be done, please look at the following websites:
- Data Mining Algorithms & Machine Learning -> Apache Mahout
- Statistics -> RHadoop
- Data Warehousing & Database Querying -> SQL-MapReduce
- Social Network Analysis -> Article
- Bio-informatics -> Article - 1 , Article - 2
Also, there are some work on MapReduce + Excel + Cloud combination but I have not found the link.
>
What are the different classes of Data Science problems ...
Each "classes" is not purely homogeneous problem domain, i.e. some problem cannot be solved via map and reduce approach due to its communication cost, or algorithm behavior. What I mean by behavior is that some problem wants to have control on all data sets instead of chunks. Thus, I refuse to list type of problem "classes".
Do not forget that knowing what MapReduce can do is not enough for Data Science. You should also aware of [What MapReduce can't do](http://www.analyticbridge.com/profiles/blogs/what-mapreduce-can-t-do), too.
| null |
CC BY-SA 3.0
| null |
2014-07-28T16:39:45.703
|
2014-07-28T16:49:55.573
|
2014-07-28T16:49:55.573
| null | null | null |
862
|
2
| null |
52
|
5
| null |
I think that there is no universal technique for "cleaning" data before doing actual research. On the other hand, I'm aiming to do as much [reproducible research](https://en.wikipedia.org/wiki/Reproducibility#Reproducible_research) as possible. By doing reproducible research, if you used cleaning techniques with bugs or with poor parameters/assumptions, it could be spotted by others.
There is a nice R package, [knitr](http://yihui.name/knitr/), which helps a lot in reproducible research.
Of course, not all research can be fully reproduced (for example live Twitter data), but at least you can document the cleaning, formatting and preprocessing steps easily.
You can check my [assessment](http://rpubs.com/QuatnumDamage/storm) prepared for [Reproducible Research course at Coursera](https://www.coursera.org/course/repdata).
| null |
CC BY-SA 3.0
| null |
2014-07-28T17:33:43.280
|
2014-07-28T17:33:43.280
| null | null |
82
| null |
863
|
2
| null |
859
|
13
| null |
Let's first split it into parts.
Data Science is about making knowledge from raw data. It uses machine learning, statistics and other fields to simplify (or even automate) decision making. Data science techniques may work with any data size, but more data means better predictions and thus more precise decisions.
Hadoop is a common name for a set of tools intended to work with large amounts of data. Two most important components in Hadoop are HDFS and MapReduce.
HDFS, or Hadoop Distributed File System, is a special distributed storage capable of holding really large data amounts. Large files on HDFS are split into blocks, and for each block HDFS API exposes its location.
MapReduce is framework for running computations on nodes with data. MapReduce heavily uses data locality exposed by HDFS: when possible, data is not transferred between nodes, but instead code is copied to the nodes with data.
So basically any problem (including data science tasks) that doesn't break data locality principle may be efficiently implemented using MapReduce (and a number of other problems may be solved not that efficiently, but still simply enough).
---
Let's take some examples. Very often analyst only needs some simple statistics over his tabular data. In this case [Hive](https://hive.apache.org/), which is basically SQL engine over MapReduce, works pretty well (there are also Impala, Shark and others, but they don't use Hadoop's MapReduce, so more on them later).
In other cases analyst (or developer) may want to work with previously unstructured data. Pure MapReduce is pretty good for transforming and standardizing data.
Some people are used to exploratory statistics and visualization using tools like R. It's possible to apply this approach to big data amounts using [RHadoop](https://github.com/RevolutionAnalytics/RHadoop/wiki) package.
And when it comes to MapReduce-based machine learning [Apache Mahout](https://mahout.apache.org/) is the first to mention.
---
There's, however, one type of algorithms that work pretty slowly on Hadoop even in presence of data locality, namely, iterative algorithms. Iterative algorithms tend to have multiple Map and Reduce stages. Hadoop's MR framework reads and writes data to disk on each stage (and sometimes in between), which makes iterative (as well as any multi-stage) tasks terribly slow.
Fortunately, there are alternative frameworks that can both - use data locality and keep data in memory between stages. Probably, the most notable of them is [Apache Spark](https://spark.apache.org/). Spark is complete replacement for Hadoop's MapReduce that uses its own runtime and exposes pretty rich API for manipulating your distributed dataset. Spark has several sub-projects, closely related to data science:
- Shark and Spark SQL provide alternative SQL-like interfaces to data stored on HDFS
- Spark Streaming makes it easy to work with continuous data streams (e.g. Twitter feed)
- MLlib implements a number of machine learning algorithms with a pretty simple and flexible API
- GraphX enables large-scale graph processing
So there's pretty large set of data science problems that you can solve with Hadoop and related projects.
| null |
CC BY-SA 3.0
| null |
2014-07-29T13:43:39.060
|
2014-07-29T13:43:39.060
| null | null |
1279
| null |
864
|
2
| null |
859
|
2
| null |
map/reduce is most appropriate for parallelizable offline computations. To be more precise, it works best when the result can be found from the result of some function of a partition of the input. Averaging is a trivial example; you can do this with map/reduce by summing each partition, returning the sum and the number of elements in the partition, then computing the overall mean using these intermediate results. It is less appropriate when the intermediate steps depend on the state of the other partitions.
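A tiny illustration of that averaging example (my own code, plain Python rather than any particular MapReduce framework): each partition maps to a partial (sum, count) pair, the reduce step merges partials, and only then is the division done.
```
from functools import reduce

partitions = [
    [1.0, 2.0, 3.0],
    [10.0, 20.0],
    [4.0, 4.0, 4.0, 4.0],
]

def map_partition(part):
    return (sum(part), len(part))        # partial result per partition

def combine(a, b):
    return (a[0] + b[0], a[1] + b[1])    # merge partial sums and counts

total, count = reduce(combine, map(map_partition, partitions))
print(total / count)                     # overall mean
```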
| null |
CC BY-SA 3.0
| null |
2014-07-30T07:49:14.467
|
2014-07-30T07:49:14.467
| null | null |
381
| null |
865
|
1
|
875
| null |
1
|
1712
|
I need to build a parse tree for some source code (in Python, or any programming language that can be described by a CFG).
So, I have source code in some programming language and the BNF for this language.
Can anybody give some advice how can I build parse tree in this case?
Preferably, with tools for Python.
|
How to build parse tree with BNF
|
CC BY-SA 3.0
| null |
2014-07-30T10:24:54.180
|
2014-07-30T23:40:54.700
| null | null |
988
|
[
"python",
"parsing"
] |
866
|
1
|
881
| null |
12
|
1028
|
I am currently working with a large set of health insurance claims data that includes some laboratory and pharmacy claims. The most consistent information in the data set, however, is made up of diagnosis (ICD-9CM) and procedure codes (CPT, HCSPCS, ICD-9CM).
My goals are to:
- Identify the most influential precursor conditions (comorbidities) for a medical condition like chronic kidney disease;
- Identify the likelihood (or probability) that a patient will develop a medical condition based on the conditions they have had in the past;
- Do the same as 1 and 2, but with procedures and/or diagnoses.
- Preferably, the results would be interpretable by a doctor
I have looked at things like the [Heritage Health Prize Milestone papers](https://www.heritagehealthprize.com/c/hhp/details/milestone-winners) and have learned a lot from them, but they are focused on predicting hospitalizations.
So here are my questions: What methods do you think work well for problems like this? And, what resources would be most useful for learning about data science applications and methods relevant to healthcare and clinical medicine?
EDIT #2 to add plaintext table:
CKD is the target condition, "chronic kidney disease", ".any" denotes that they have acquired that condition at any time, ".isbefore.ckd" means they had that condition before their first diagnosis of CKD. The other abbreviations correspond with other conditions identified by ICD-9CM code groupings. This grouping occurs in SQL during the import process. Each variable, with the exception of patient_age, is binary.
|
Predicting next medical condition from past conditions in claims data
|
CC BY-SA 3.0
| null |
2014-07-30T11:45:08.313
|
2017-10-02T13:27:02.367
|
2017-10-02T13:27:02.367
|
2781
|
2781
|
[
"machine-learning",
"r"
] |
867
|
1
| null | null |
3
|
886
|
Does anyone know what (from your experience) are the best open source natural language generators (NLG) out there? What are the relative merits of each?
I'm looking to do sophisticated text summarization and would like to use theme extraction/semantic modeling in conjunction with NLG tools to create accurate, context-aware, and natural-sounding text summaries.
|
Relative merits of different open source natural language generators
|
CC BY-SA 3.0
| null |
2014-07-30T13:55:36.187
|
2015-05-04T18:25:08.213
|
2014-08-12T16:52:12.833
|
84
|
2785
|
[
"nlp",
"text-mining"
] |
868
|
2
| null |
859
|
4
| null |
There is a paper you should look into:
[MapReduce: Distributed Computing for Machine Learning](http://cs.smith.edu/dftwiki/images/6/68/MapReduceDistributedComputingMachineLearning.pdf)
They distinguish 3 classes of machine-learning problems that are reasonable to address with MapReduce:
- Single pass algorithms
- Iterative algorithms
- Query based algorithms
They also give examples for each class.
| null |
CC BY-SA 3.0
| null |
2014-07-30T15:32:44.557
|
2014-07-30T15:32:44.557
| null | null |
2787
| null |
869
|
1
|
871
| null |
33
|
25187
|
So, I'm just starting to learn how a neural network can operate to recognize patterns and categorize inputs, and I've seen how an artificial neural network can parse image data and categorize the images ([demo with convnetjs](http://cs.stanford.edu/people/karpathy/convnetjs/demo/mnist.html)), and the key there is to downsample the image and each pixel stimulates one input neuron into the network.
However, I'm trying to wrap my head around if this is possible to be done with string inputs? The use-case I've got is a "recommendation engine" for movies a user has watched. Movies have lots of string data (title, plot, tags), and I could imagine "downsampling" the text down to a few key words that describe that movie, but even if I parse out the top five words that describe this movie, I think I'd need input neurons for every english word in order to compare a set of movies? I could limit the input neurons just to the words used in the set, but then could it grow/learn by adding new movies (user watches a new movie, with new words)? Most of the libraries I've seen don't allow adding new neurons after the system has been trained?
Is there a standard way to map string/word/character data to inputs into a neural network? Or is a neural network really not the right tool for the job of parsing string data like this (what's a better tool for pattern-matching in string data)?
|
Neural Network parse string data?
|
CC BY-SA 3.0
| null |
2014-07-30T16:27:45.177
|
2017-08-26T15:57:49.753
| null | null |
2790
|
[
"neural-network"
] |
870
|
2
| null |
869
|
3
| null |
This is not a problem about neural networks per se, but about representing textual data in machine learning. You can represent the movies, cast, and theme as categorical variables. The plot is more complicated; you'd probably want a [topic model](http://en.wikipedia.org/wiki/Topic_model) for that, but I'd leave that out until you get the hang of things. It does precisely that textual "downsampling" you mentioned.
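For a quick feel of what that looks like in code, here is a toy sketch (column names and values are invented) of one-hot encoding categorical movie attributes so they can be fed to a network:
```
import pandas as pd

movies = pd.DataFrame({
    "genre":    ["sci-fi", "drama", "sci-fi"],
    "director": ["a", "b", "a"],
})

encoded = pd.get_dummies(movies, columns=["genre", "director"])
print(encoded)   # one binary input column per category value
```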
Take a look at [this](http://visualstudiomagazine.com/articles/2013/07/01/neural-network-data-normalization-and-encoding.aspx) tutorial to learn how to encode categorical variables for neural networks. And good luck!
| null |
CC BY-SA 3.0
| null |
2014-07-30T17:42:09.797
|
2014-07-30T17:42:09.797
| null | null |
381
| null |
871
|
2
| null |
869
|
18
| null |
Using a neural network for prediction on natural language data can be a tricky task, but there are tried and true methods for making it possible.
In the Natural Language Processing (NLP) field, text is often represented using the bag of words model. In other words, you have a vector of length n, where n is the number of words in your vocabulary, and each word corresponds to an element in the vector. In order to convert text to numeric data, you simply count the number of occurrences of each word and place that value at the index of the vector that corresponds to the word. [Wikipedia does an excellent job of describing this conversion process.](https://en.wikipedia.org/wiki/Bag-of-words_model) Because the length of the vector is fixed, it's difficult to deal with new words that don't map to an index, but there are ways to help mitigate this problem (lookup [feature hashing](http://en.wikipedia.org/wiki/Feature_hashing)).
This method of representation has many disadvantages -- it does not preserve the relationship between adjacent words, and results in very sparse vectors. Looking at [n-grams](http://en.wikipedia.org/wiki/N-gram) helps to fix the problem of preserving word relationships, but for now let's focus on the second problem, sparsity.
It's difficult to deal directly with these sparse vectors (many linear algebra libraries do a poor job of handling sparse inputs), so often the next step is dimensionality reduction. For that we can refer to the field of [topic modeling](http://en.wikipedia.org/wiki/Topic_model): Techniques like [Latent Dirichlet Allocation](http://en.wikipedia.org/wiki/Latent_Dirichlet_allocation) (LDA) and [Latent Semantic Analysis](http://en.wikipedia.org/wiki/Latent_semantic_analysis) (LSA) allow the compression of these sparse vectors into dense vectors by representing a document as a combination of topics. You can fix the number of topics used, and in doing so fix the size of the output vector produced by LDA or LSA. This dimensionality reduction process drastically reduces the size of the input vector while attempting to lose a minimal amount of information.
Finally, after all of these conversions, you can feed the outputs of the topic modeling process into the inputs of your neural network.
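Here is a condensed sketch of that pipeline (my own wiring, with toy documents and labels; the library choices are just one reasonable option): bag-of-words counts, an LSA-style reduction, then a small neural network.
```
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

docs = [
    "space robots explore a distant planet",
    "a quiet family drama about loss",
    "robots and lasers in deep space",
    "two sisters reconcile after years apart",
]
labels = ["sci-fi", "drama", "sci-fi", "drama"]

model = make_pipeline(
    CountVectorizer(),                 # sparse bag-of-words counts
    TruncatedSVD(n_components=2),      # dense "topic"-like vectors (LSA)
    MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0),
)
model.fit(docs, labels)
print(model.predict(["lonely robot drifts through space"]))
```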
| null |
CC BY-SA 3.0
| null |
2014-07-30T17:53:40.330
|
2017-03-14T01:25:26.937
|
2017-03-14T01:25:26.937
|
29939
|
684
| null |
872
|
1
|
873
| null |
8
|
8176
|
I am attempting to use the tm package to convert a vector of text strings to a corpus element.
My code looks something like this
```
Corpus(d1$Yes)
```
where `d1$Yes` is a factor with 124 levels, each containing a text string.
For example, `d1$Yes[246] = "So we can get the boat out!"`
I'm receiving the following error:
```
"Error: inherits(x, "Source") is not TRUE"
```
I'm not sure how to remedy this.
|
R error using package tm (text-mining)
|
CC BY-SA 4.0
| null |
2014-07-30T18:45:13.790
|
2020-03-11T21:33:18.877
|
2020-03-11T21:33:18.877
|
375
|
2792
|
[
"r",
"text-mining"
] |
873
|
2
| null |
872
|
9
| null |
You have to tell Corpus what kind of source you are using. Try:
```
Corpus(VectorSource(d1$Yes))
```
| null |
CC BY-SA 3.0
| null |
2014-07-30T19:15:09.383
|
2014-07-30T19:15:09.383
| null | null |
375
| null |
874
|
2
| null |
423
|
4
| null |
I have some thoughts about your question. I hope they may help you solve your problem.
>
I'm planning to run some experiments with very large data sets, and I'd like to distribute the computation.
In [one of my posts](http://www.datasciencecentral.com/profiles/blogs/meeting-justice-of-data-science), I have done research on the topic of evaluation methods for Data Science. With a learning curve, you can evaluate your experiment's learning ability. In short, you fix a commodity configuration and then run the same experiment on the same number of machines with data sets of different sizes (starting from a small chunk and incrementally increasing the size until you reach the whole data set).
One point to note: you should avoid ending up with a power-law distribution in the results of the performance tests run with different data set sizes. To avoid this, choose the step size (step size = the amount of each increment) carefully.
>
I have about ten machines available, each with 200 GB of free space on hard disk. However, I would like to perform experiments on a greater number of nodes, to measure scalability more precisely.
For this type of question, I have intuitively searched and read materials, and afterwards published them as a [blog post](http://www.datasciencecentral.com/profiles/blogs/scale-your-vision). At the end of the post, I briefly talk about how to test your hypothesis on a real complex system. With your permission, I will briefly summarize it here:
First of all, a base requirement should be established in order to run the data set as a whole. This minimum requirement gives you your baseline evaluation score, which is calculated with one (or a combination) of the evaluation metrics you have chosen, or with one (or a combination) of the methods used to calculate `Running Time = Computation complexity + Communication cost + Synchronization cost`.
After those steps, following an evaluation strategy, add new elements (e.g. a new node) to the system on which you are running the scalability test; for each addition, measure performance with respect to the new system configuration.
Just to note, the evaluation strategy must be planned with the default behavior of parallel and distributed systems in mind. For example, what I mean by behavior is that adding more cores will, after some point, automatically drop the performance of the system, and not because of your algorithm's characteristics: more cores need more RAM, more disks, etc. In other words, there is an N-way relationship between hardware components. As a second example, adding more nodes to a distributed system will punish you with more communication and synchronization costs.
As a last step, sketch two different graphs with your evaluation results via a data analysis program or language (as a recommendation, use GNU Plot or the R programming language). Print out those results, put them on your desk, and start to examine them carefully. Based on your investigation, modify/discard and rebuild the evaluation strategy and redo the performance test.
>
Are there commodity services which would grant me that only my application would be running at a given time? Has anyone used such services yet?
I don't have much experience with commodity services, but I can say that whether they grant this or not depends on your configuration of the services. If you configure, say, Hadoop on your node as the only service, Hadoop will ensure that only your code is running at any given time.
| null |
CC BY-SA 3.0
| null |
2014-07-30T21:53:13.790
|
2014-07-30T21:53:13.790
| null | null | null | null |
875
|
2
| null |
865
|
2
| null |
I suggest you use [ANTLR](http://www.antlr.org/), which is a very powerful parser generator. It has a good GUI for entering your BNF. It has a [Python target](https://theantlrguy.atlassian.net/wiki/display/ANTLR4/Python+Target) capability.
| null |
CC BY-SA 3.0
| null |
2014-07-30T23:40:54.700
|
2014-07-30T23:40:54.700
| null | null |
609
| null |
876
|
2
| null |
866
|
3
| null |
"Identify the most influential precursor conditions (comorbidities) for a medical condition like chronic kidney disease"
I'm not sure that it's possible to ID the most influential conditions; I think it will depend on what model you're using. Just yesterday I fit a random forest and a boosted regression tree to the same data, and the order and relative importance each model gave for the variables were quite different.
| null |
CC BY-SA 3.0
| null |
2014-07-31T16:39:37.127
|
2014-07-31T16:39:37.127
| null | null |
1241
| null |
877
|
1
|
878
| null |
8
|
214
|
Well, this looks like the best-suited place for this question.
Every website collects data about its users, some just for usability and personalization, but the majority, like social networks, track every move on the web, and some free apps on your phone scan text messages, call history and so on.
Is all this data siphoning just for selling your profile to advertisers?
|
What is the use of user data collection besides serving ads?
|
CC BY-SA 3.0
| null |
2014-07-31T18:52:56.307
|
2020-08-16T21:51:11.787
| null | null |
2798
|
[
"data-mining"
] |
878
|
2
| null |
877
|
7
| null |
A couple of days ago developers from one product company asked me how they can understand why new users were leaving their website. My first question to them was what these users' profiles looked like and how they were different from those who stayed.
Advertising is only the tip of the iceberg. User profiles (either filled in by users themselves or computed from users' behaviour) hold information about:
- user categories, i.e. what kind of people tend to use your website/product
- paying client portraits, i.e. who is more likely to use your paid services
- UX component performance, e.g. how long it takes people to find the button they need
- action performance comparison, e.g. which was more effective - a lower price for a weekend or offering a gift with each purchase, etc.
So it's more about improving the product and making a better user experience rather than selling this data to advertisers.
| null |
CC BY-SA 3.0
| null |
2014-07-31T23:03:33.767
|
2014-07-31T23:03:33.767
| null | null |
1279
| null |
879
|
2
| null |
877
|
6
| null |
Most companies won't sell the data, not on any small scale anyways. Most will use it internally.
User tracking data is important for understanding a lot of things. There's basic A/B testing where you provide different experiences to see which is more effective. There is understanding how your UI is utilized. Categorizing your end users in different ways for a variety of reasons. Figuring out where your end user base is, and within that group where the end users that matter are. Correlating user experiences with social network updates. Figuring out what will draw people to your product and what drives them away. The list of potential for data mining and analysis projects could go on for days.
Data storage is cheap. If you track everything out of the gate, you can figure out what you want to do with that data later.
Scanning text messages is sketchy territory when there isn't a good reason for it. Even when there is a good reason it's sketchy territory. I'd love to say that nobody does it, but there have been instances where big companies have done it and there are a lot of cases where no-name apps at least require access to that kind of data for installation. I generally frown on that kind of thing myself as a consumer, but the data analyst in me would love to see if I could build anything useful from a set of information like that.
| null |
CC BY-SA 3.0
| null |
2014-08-01T06:03:47.700
|
2014-08-01T06:03:47.700
| null | null |
434
| null |
880
|
2
| null |
810
|
6
| null |
There is always the option of trying both approaches and keeping the one that maximizes the expected performance.
In your case, I would assume you prefer minimizing false negatives at the cost of some false positive, so you want to bias your classifier against the strong negative prior, and address the imbalance by reducing the number of negative examples in your training set.
Then compute the precision/recall, or sensitivity/specificity, or whatever criterion suits you on the full, imbalanced, dataset to make sure you haven't ignored a significant pattern present in the real data while building the model on the reduced data.
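A hedged sketch of that workflow (synthetic data and an arbitrary model, just to show the shape of it): train on a rebalanced sample, then report precision/recall on the full imbalanced set.
```
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report

X, y = make_classification(n_samples=5000, weights=[0.95, 0.05], random_state=0)

# Undersample the negative (majority) class for training only.
neg, pos = np.where(y == 0)[0], np.where(y == 1)[0]
keep_neg = np.random.RandomState(0).choice(neg, size=len(pos), replace=False)
train_idx = np.concatenate([keep_neg, pos])

clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])

# Evaluate on the full, imbalanced data to check nothing important was lost.
print(classification_report(y, clf.predict(X)))
```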
| null |
CC BY-SA 3.0
| null |
2014-08-01T12:27:41.580
|
2014-08-01T12:27:41.580
| null | null |
172
| null |
881
|
2
| null |
866
|
7
| null |
I've never worked with medical data, but from general reasoning I'd say that relations between variables in healthcare are pretty complicated. Different models, such as random forests, regression, etc. could capture only part of relations and ignore others. In such circumstances it makes sense to use general statistical exploration and modelling.
For example, the very first thing I would do is find out the correlations between possible precursor conditions and diagnoses. E.g. in what percentage of cases was chronic kidney disease preceded by a long flu? If it is high, it [doesn't always mean causality](http://en.wikipedia.org/wiki/Correlation_does_not_imply_causation), but it gives pretty good food for thought and helps to better understand relations between different conditions.
Another important step is data visualisation. Does CKD happen in males more often than in females? What about their place of residence? What is the distribution of CKD cases by age? It's hard to grasp a large dataset as a set of numbers; plotting them out makes it much easier.
When you have an idea of what's going on, perform [hypothesis testing](http://en.wikipedia.org/wiki/Statistical_hypothesis_testing) to check your assumption. If you reject null hypothesis (basic assumption) in favour of alternative one, congratulations, you've made "something real".
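As a small sketch of that kind of check (my own fake data; the column names only mimic the binary flags from the question), you can cross-tabulate a candidate precursor condition against CKD and run a chi-square test of independence:
```
import pandas as pd
from scipy.stats import chi2_contingency

df = pd.DataFrame({
    "diabetes.isbefore.ckd": [1, 0, 1, 1, 0, 0, 1, 0],   # hypothetical precursor flag
    "ckd.any":               [1, 0, 1, 1, 0, 1, 1, 0],   # hypothetical target flag
})

table = pd.crosstab(df["diabetes.isbefore.ckd"], df["ckd.any"])
chi2, p, dof, expected = chi2_contingency(table)
print(table)
print("p-value:", p)
```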
Finally, when you have a good understanding of your data, try to create complete model. It may be something general like [PGM](https://www.coursera.org/course/pgm) (e.g. manually-crafted Bayesian network), or something more specific like linear regression or [SVM](http://en.wikipedia.org/wiki/Support_vector_machine), or anything. But in any way you will already know how this model corresponds to your data and how you can measure its efficiency.
---
As a good starting resource for learning statistical approach I would recommend [Intro to Statistics](https://www.udacity.com/course/st101) course by Sebastian Thrun. While it's pretty basic and doesn't include advanced topics, it describes most important concepts and gives systematic understanding of probability theory and statistics.
| null |
CC BY-SA 3.0
| null |
2014-08-01T14:08:13.043
|
2014-08-01T14:08:13.043
| null | null |
1279
| null |
882
|
2
| null |
877
|
3
| null |
Here's a practical example of using web data for something other than advertising. Distil Networks (disclaimer, I work there) uses network traffic to determine whether page accesses are from humans or bots - scrapers, click fraud, form spam, etc.
Another example is some of the work that Webtrends is doing. They allow site users to build a model for each visitor to predict whether they'll leave, buy, add to cart, etc. Then, based on the probability of each action, you can change the user's experience (e.g. if they're about to leave, give them a coupon). [Here](http://www.oscon.com/oscon2014/public/schedule/detail/34809) are the slides from a talk by them.
| null |
CC BY-SA 4.0
| null |
2014-08-01T14:10:38.267
|
2020-08-16T21:51:11.787
|
2020-08-16T21:51:11.787
|
98307
|
403
| null |
883
|
1
|
886
| null |
3
|
5901
|
I am working on a data set that has multiple traffic speed measurements per day. My data is from the city of Chicago, and it is taken every minute for about six months. I wanted to consolidate this data into days only, so this is what I did:
```
traffic <- read.csv("path.csv",header=TRUE)
traffic2 <- aggregate(SPEED~DATE, data=traffic, FUN=mean)
```
this was perfect because it took all of my data and averaged it by date. For example, my original data looked something like this:
```
DATE SPEED
12/31/2012 22
12/31/2012 25
12/31/2012 23
...
```
and the final looked like this:
```
DATE SPEED
10/1/2012 22
10/2/2012 23
10/3/2012 22
...
```
The only problem, is my data is supposed to start at 9/1/2012. I plotted this data, and it turns out the data goes from 10/1/2012-12/31/2012 and then 9/1/2012-9/30/2012.
What in the world is going on here?
|
R aggregate() with dates
|
CC BY-SA 3.0
| null |
2014-08-01T18:13:06.063
|
2014-10-18T03:26:04.953
| null | null |
2614
|
[
"r",
"dataset",
"beginner"
] |
884
|
2
| null |
787
|
1
| null |
So I was never able to find the error by looking through my logs. I ended up reinstalling it with CDH5 (which was MUCH easier than installing "poor" Hadoop)
Now everything runs fine!
I'm still having trouble getting things to save to HDFS, but that's a question for another day...
| null |
CC BY-SA 3.0
| null |
2014-08-01T18:16:57.543
|
2014-08-01T18:16:57.543
| null | null |
2614
| null |
886
|
2
| null |
883
|
4
| null |
I am going to agree with @user1683454's comment. After importing, your DATE column is of either `character`, or `factor` class (depending on your settings for `stringsAsFactors`). Therefore, I think that you can solve this issue in at least several ways, as follows:
1) Convert data to correct type during import. To do this, just use the following options of `read.csv()`: `stringsAsFactors` (or `as.is`) and `colClasses`. By default, you can specify conversion to `Date` or `POSIXct` classes. If you need a non-standard format, you have two options. First, if you have a single Date column, you can use `as.Date.character()` to pass the desired format to `colClasses`. Second, if you have multiple Date columns, you can write a function for that and pass it to `colClasses` via `setAs()`. Both options are discussed here: [https://stackoverflow.com/questions/13022299/specify-date-format-for-colclasses-argument-in-read-table-read-csv](https://stackoverflow.com/questions/13022299/specify-date-format-for-colclasses-argument-in-read-table-read-csv).
2) Convert data to correct format after import. Thus, after calling `read.csv()`, you would have to execute the following code: `dateColumn <- as.Date(dateColumn, "%m/%d/%Y")` or `dateColumn <- strptime(dateColumn, "%m/%d/%Y")` (adjust the format to whatever Date format you need).
| null |
CC BY-SA 3.0
| null |
2014-08-01T23:21:34.360
|
2014-08-01T23:21:34.360
|
2017-05-23T12:38:53.587
|
-1
|
2452
| null |
887
|
1
|
889
| null |
1
|
100
|
I am currently on a project that will build a model (train and test) on Client-side Web data, but evaluate this model on Sever-side Web data. Unfortunately building the model on Server-side data is not an option, nor is it an option to evaluate this model on Client-side data.
This model will be based on metrics collected on specific visitors. This is a real time system that will be calculating a likelihood based on metrics collected while visitors browse the website.
I am looking for approaches to ensure the highest possible accuracy on the model evaluation.
So far I have the following ideas,
- Clean the Server-side data by removing webpages that are never seen Client-side.
- Collect additional data Server-side data to make the Server-side data more closely resemble Client-side data.
- Collect data on the Client and send this data to the Server. This is possible and may be the best solution, but is currently undesirable.
- Build one or more models that estimate Client-side Visitor metrics from Server-side Visitor metrics and use these estimates in the Likelihood model.
Any other thoughts on evaluating over one Population while training (and testing) on another Population?
|
Modelling on one Population and Evaluating on another Population
|
CC BY-SA 3.0
| null |
2014-08-02T00:07:09.267
|
2014-08-02T04:54:14.757
| null | null |
776
|
[
"data-cleaning",
"model-evaluations"
] |
888
|
2
| null |
887
|
0
| null |
I'm not an expert on this, so take my advice with a grain of salt. It's not clear for me what is the relationship between server-side and client-side data. Are they both representative of the same population? If Yes, I think it's OK to use different data sets for testing/training and evaluating your models. If No, I think it might be a good idea to use some resampling technique, such as bootstrapping, jackknifing or cross-validation.
| null |
CC BY-SA 3.0
| null |
2014-08-02T04:21:49.927
|
2014-08-02T04:21:49.927
| null | null |
2452
| null |
889
|
2
| null |
887
|
2
| null |
If the users you are getting client-side data from are drawn from the same population as the users you would get server-side data from, then you aren't really training on one population and applying to another. The main difference is that the client-side data happened in the past (by necessity, unless you are constantly refitting your model) and the server-side data will come in the future.
Let's reformulate the question in terms of models rather than web clients and servers.
You are fitting a model on one dataset and applying it to another. That is the classic use of predictive modeling/machine learning. Models use features from the data to make estimates of some parameter or parameters. Once you have a fitted (and tested) model, all that you need is the same set of features to feed into the model to get your estimates.
Just make sure to model on a set of features (aka variables) that are available on the client-side and server-side. If that isn't possible, ask that question separately.
| null |
CC BY-SA 3.0
| null |
2014-08-02T04:54:14.757
|
2014-08-02T04:54:14.757
| null | null |
2666
| null |
890
|
2
| null |
423
|
1
| null |
If your work is parallelizable enough for a distributed network of cpus to make a difference, why not try to run it on the gpu instead? That will require rather less investment than a large network of cpus with individual software licenses and still provide parallel processing which you can do runtime tracking on yourself.
| null |
CC BY-SA 3.0
| null |
2014-08-02T14:55:37.347
|
2014-08-02T14:55:37.347
| null | null |
2809
| null |