Id | PostTypeId | AcceptedAnswerId | ParentId | Score | ViewCount | Body | Title | ContentLicense | FavoriteCount | CreationDate | LastActivityDate | LastEditDate | LastEditorUserId | OwnerUserId | Tags |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2590
|
1
| null | null |
2
|
89
|
I would like to pose a question about how to treat additional holders in the propensity-to-buy models of banking products.
Up to now I have only been taking into consideration clients who are first holders.
For example, if a client ‘1’ appears as the first holder of a saving account ‘A’ with a balance at the end of the month of 100€ and as an additional holder of a saving account ‘B’ with a balance at the end of the month of 50€, the saving balance at the end of the month for the client is considered to be just 100€.
Moreover, if a client only appears as an additional holder (and he/she is not the first holder of ANY product), he/she is excluded from the model.
However, I have been told to include additional holders in the models (additional holders have the same rights as first holders).
One possibility is to recalculate all the variables, summing up the positions as first and additional holder (in the previous example, the end-of-month balance of client '1' would be 150€). Together with this, I would create a variable that represents the maximum degree of intervention of the client in the account (e.g. 'first holder', 'second holder').
Another possibility would be to "double" all the variables, considering the client separately as first and as additional holder (in the example, we would create two variables: the end-of-month balance as FH = 100€ and the end-of-month balance as AH = 50€).
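To make the two options concrete, here is a minimal pandas sketch (the table and all column names are hypothetical):
```python
import pandas as pd

# Hypothetical holdings table: one row per (client, account) relationship.
holdings = pd.DataFrame({
    "client": [1, 1],
    "account": ["A", "B"],
    "holder_rank": [1, 2],          # 1 = first holder, 2+ = additional holder
    "eom_balance": [100.0, 50.0],   # balance at the end of the month
})

# Option 1: sum positions across all holder roles, keep the best (lowest) rank
# as the "maximum degree of intervention" variable.
opt1 = holdings.groupby("client").agg(
    eom_balance=("eom_balance", "sum"),
    max_involvement=("holder_rank", "min"),
)

# Option 2: "double" the variables, one column per role (FH vs AH).
role = holdings["holder_rank"].map(lambda r: "FH" if r == 1 else "AH")
opt2 = holdings.pivot_table(index="client", columns=role,
                            values="eom_balance", aggfunc="sum")

print(opt1, opt2, sep="\n\n")       # client 1: 150€ total, or FH=100€ / AH=50€
```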
Did any of you encounter a similar problem? It would be very helpful to understand how you solved it.
Thanks
|
Cross-sell models and additional holders
|
CC BY-SA 3.0
| null |
2014-12-03T14:18:34.003
|
2014-12-03T14:18:34.003
| null | null |
5265
|
[
"data-mining",
"dataset",
"predictive-modeling",
"logistic-regression",
"data-cleaning"
] |
2591
|
1
| null | null |
6
|
151
|
I want to create a model to predict the propensity to buy a certain product. As my proportion of 1's is very low, I decided to apply oversampling (to get 10% of 1's and 90% of 0's).
Now, I want to discretize some of the variables. To do so I run a tree for each variable against the target.
Should I define the prior probabilities when I do this (run the trees), or does it not matter, so that I can use the over-sampled dataset as is?
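To make the question concrete, here is a minimal scikit-learn sketch of the re-weighting option; the assumed original priors (1% ones / 99% zeros) and all the other numbers are made up:
```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
y = (rng.random(1000) < 0.10).astype(int)     # oversampled: ~10% ones
X = rng.normal(loc=2.0 * y).reshape(-1, 1)    # toy predictor

# Re-weight each class back toward the assumed original 1%/99% priors,
# i.e. weight = original_prior / oversampled_prior per class.
weights = {0: 0.99 / 0.90, 1: 0.01 / 0.10}
tree = DecisionTreeClassifier(max_depth=3, class_weight=weights).fit(X, y)
print(tree.get_n_leaves())
```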
|
Decision trees, categorization and oversampling
|
CC BY-SA 4.0
| null |
2014-12-03T14:23:38.830
|
2020-08-02T17:05:55.560
|
2020-06-20T13:41:21.260
|
29575
|
5265
|
[
"classification",
"predictive-modeling",
"sampling"
] |
2593
|
1
|
2635
| null |
8
|
123
|
I want to write a data-mining service in [Google Go](http://golang.org) which collects data through scraping and APIs.
However, as Go lacks good ML support, I would like to do the ML stuff in Python.
Having a web background, I would connect both services with something like RPC, but as I believe this is a common problem in data science, I suspect there is a better solution.
For example, most (web) protocols fall short at:
- buffering between processes
- clustering over multiple instances
So what (type of libraries) do data scientists use to connect different languages/processes?
Bodo
|
How to connect a data-mining process with a machine-learning process
|
CC BY-SA 3.0
| null |
2014-12-03T15:56:50.687
|
2014-12-07T11:29:14.057
| null | null |
5266
|
[
"machine-learning",
"data-mining"
] |
2594
|
2
| null |
2579
|
3
| null |
For those not familiar, item-item recommenders calculate similarities between items, as opposed to user-user (or user-based) recommenders, which calculate similarities between users. Although some algorithms can be used for both, this question is in regard to item-item algorithms (thanks for being specific in your question).
Accuracy or effectiveness of recommenders is evaluated based on comparing recommendations to a previously collected data set (training set). For example, I have shopping cart data from the last six months; I'll use the first 5 months as training data, then run my various algorithms, and compare the quality against what really happened during the 6th month.
The reason Mahout ships with so many algorithms is because different algorithms are more or less effective in each data set you may work with. So, ideally, you do some testing as I described with many algorithms and compare the accuracy, then choose the winner.
Interestingly, you can also take other factors into account, such as the need to minimize the data set (for performance reasons), and run your tests with only a certain portion of the training data available. In such a case, one algorithm may work better with the smaller data set, but another may work better with the complete set. Then, you get to weigh performance vs. accuracy vs. implementation effort (such as deploying on a Hadoop cluster).
Therefore, different algorithms are suited to different projects. However, there are some general rules:
- Algorithms generally do better with unreduced data sets (more data is better).
- More complex algorithms aren't necessarily better.
I suggest starting with a simple algorithm and ensuring you have high quality data. If you have additional time, you can implement more complex algorithms and create a comparison which is unique to your data set.
Most of my info comes from [This study](http://ai.arizona.edu/intranet/papers/comparative.ieeeis.pdf). You'll find lots of detail about implementation there.
| null |
CC BY-SA 3.0
| null |
2014-12-03T18:22:29.333
|
2014-12-03T18:22:29.333
| null | null |
3466
| null |
2595
|
2
| null |
2582
|
6
| null |
Within each class, you'll have distributions of values for the features. That in itself is not a reason for concern.
From a slightly theoretical point of view, you can ask yourself why you should scale your features and why you should scale them in exactly the chosen way.
One reason may be that your particular training algorithm is known to converge faster (better) with values around 0 - 1 than with features which cover other orders of magnitude. In that case, you're probably fine. My guess is that your SVM is fine: you want to avoid too large numbers because of the inner product, but a max of 1.2 vs. a max of 1.0 won't make much of a difference.
(OTOH, if you knew, for example, that your algorithm did not accept negative values, you'd obviously be in trouble.)
The practical question is whether your model performs well for cases that are slightly outside the range covered by training. This, I believe, can best (and possibly only) be answered by testing with such cases and inspecting the test results for a performance drop on cases outside the training domain. It is a valid concern, and looking into this would be part of the validation of your model.
Observing differences of the size you describe is IMHO a reason to have a pretty close look at model stability.
| null |
CC BY-SA 3.0
| null |
2014-12-03T18:36:00.880
|
2014-12-03T18:57:22.773
|
2014-12-03T18:57:22.773
|
5268
|
5268
| null |
2596
|
1
|
2597
| null |
5
|
122
|
I've been toying with this idea for a while. I think there is probably some method in the text mining literature, but I haven't come across anything just right...
What are some methods for tackling a problem where the number of variables is itself a variable? This is not a missing data problem, but one where the nature of the problem fundamentally changes. Consider the following example:
Suppose I want to predict who will win a race, a simple multinomial classification problem. I have lots of past data on races, plenty to train on. Let's further suppose I have observed each contestant run multiple races. The problem, however, is that the number of racers is variable. Sometimes there are only 2 racers, sometimes there are as many as 100.
One solution might be to train a separate model for each number of racers, resulting in 99 models in this case, using any method I choose (e.g. 99 random forests).
Another solution might be to include an additional variable called 'number_of_contestants', have input fields for 100 racers, and simply leave them blank when a racer is not present. Intuitively, it seems that this method would have difficulty predicting the outcome of a 100-contestant race if the number of racers follows a Poisson distribution (which I didn't originally specify in the problem, but am stating here).
Thoughts?
|
Method for solving problem with variable number of predictors
|
CC BY-SA 3.0
| null |
2014-12-03T21:47:34.907
|
2014-12-04T23:26:15.803
| null | null |
5247
|
[
"machine-learning",
"data-mining",
"classification",
"statistics",
"nlp"
] |
2597
|
2
| null |
2596
|
2
| null |
I don't see the problem. All you need is a learner to map a bit string as long as the total number of contestants, representing the subset who are taking part, to another bit string (with only one bit set) representing the winner, or a ranked list, if you want them all (assuming you have the whole list in your training data). In the latter case you would have a learning-to-rank problem.
If the contestant landscape can change it would help to find a vector space embedding for them so you can use the previous embeddings as an initial guess and rank anyone, even hypothetical, given their vector representation. As the number of users increases the embedding should stabilize and retraining should become less costly. The question is how to find the embedding, of course. If you have a lot of training data, you could probably find a randomized one along with the ranking function. If you don't, you would have to generate the embedding by some algorithm and estimate only the ranking function. I have not faced your problem before so I can't direct you to a particular paper, but the recent NLP literature should give you some inspiration, e.g. [this](http://jmlr.org/papers/volume13/shalit12a/shalit12a.pdf). I still think it is feasible.
| null |
CC BY-SA 3.0
| null |
2014-12-04T03:42:48.357
|
2014-12-04T20:11:17.383
|
2014-12-04T20:11:17.383
|
381
|
381
| null |
2598
|
1
|
2599
| null |
16
|
30469
|
I would like to know how exactly Mahout's user-based and item-based recommendations differ from each other.
The documentation describes them as follows:
[User-based](https://mahout.apache.org/users/recommender/userbased-5-minutes.html): Recommend items by finding similar users. This is often harder to scale because of the dynamic nature of users.
[Item-based](https://mahout.apache.org/users/recommender/intro-itembased-hadoop.html): Calculate similarity between items and make recommendations. Items usually don't change much, so this often can be computed off line.
Although there are two kinds of recommendation available, my understanding is that both take the same data model (say `1,2` or `1,2,.5` as item1,item2,value or user1,user2,value, where the value is not mandatory), both perform all calculations using whichever similarity measure and built-in recommender function we choose, and we can run both user- and item-based recommendation on the same data (is this a correct assumption?).
So I would like to know how exactly, and in which aspects, these two types of algorithms differ.
|
Item based and user based recommendation difference in Mahout
|
CC BY-SA 3.0
| null |
2014-12-04T05:18:03.720
|
2020-08-16T13:02:03.173
|
2015-11-24T12:20:21.137
|
5091
|
5091
|
[
"machine-learning",
"data-mining",
"algorithms",
"recommender-system"
] |
2599
|
2
| null |
2598
|
12
| null |
You are correct that both models work on the same data without any problem. Both models operate on a matrix of user-item ratings.
In the user-based approach, the algorithm produces a rating for an item `i` by a user `u` by combining the ratings of other users `u'` that are similar to `u`. Similar here means that the two users' rating vectors have a high Pearson correlation, cosine similarity, or the like.
In the item-based approach, we produce a rating for `i` by `u` by looking at the set of items `i'` similar to `i` (in the same sense as above, except that now we'd be looking at the ratings items have received from users) that `u` has rated, and then combining the ratings by `u` of those `i'` into a predicted rating by `u` for `i`.
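For concreteness, the item-based prediction is usually a similarity-weighted average; with $s(i, i')$ the item-item similarity and $N(i)$ the set of items similar to $i$ that $u$ has rated:
$$\hat{r}_{u,i} = \frac{\sum_{i' \in N(i)} s(i, i')\, r_{u,i'}}{\sum_{i' \in N(i)} \left| s(i, i') \right|}$$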
The item-based approach was invented at [Amazon](http://dl.acm.org/citation.cfm?id=642471) to address their scaling challenges with user-based filtering. The number of items they sell is far smaller and far less dynamic than their number of users, so the item-item similarities can be computed offline and accessed when needed.
| null |
CC BY-SA 4.0
| null |
2014-12-04T06:08:25.000
|
2020-08-16T13:02:03.173
|
2020-08-16T13:02:03.173
|
98307
|
2724
| null |
2600
|
1
| null | null |
2
|
205
|
I am interested in graph problems like 2-color, max-clique, stable sets, etc but the documentation for scipy.optimize.anneal seems to be for ordinary functions. How would one apply this library towards graph formulations?
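To make the question concrete, here is the kind of encoding I imagine, applied to maximum independent set: relax the node-selection bit vector to $[0,1]^n$, round inside the objective, and penalize edges inside the selected set. (The sketch uses `scipy.optimize.dual_annealing`, the successor of the since-removed `anneal`, which took a similar `func(x)` objective; the penalty weight of 10 is arbitrary.)
```python
import numpy as np
from scipy.optimize import dual_annealing

# Toy graph: a 5-cycle. Goal: maximum independent set (size 2 here).
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
n = 5

def objective(x):
    b = np.round(x)                      # snap the relaxed vector to {0, 1}
    conflicts = sum(b[i] * b[j] for i, j in edges)
    return -b.sum() + 10.0 * conflicts   # reward set size, penalize edges

result = dual_annealing(objective, bounds=[(0, 1)] * n,
                        maxiter=500, no_local_search=True, seed=0)
print(np.round(result.x), -objective(result.x))
```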
|
How does one feed graph optimization problems into Python's anneal function in SciPy?
|
CC BY-SA 3.0
| null |
2014-12-04T07:47:32.740
|
2022-04-09T15:50:19.863
|
2022-04-09T15:50:19.863
|
1330
|
5273
|
[
"python",
"optimization",
"graphs",
"scipy"
] |
2601
|
1
| null | null |
4
|
2096
|
I came across a package in R which has a function called `sann` for simulated annealing.
`sann` uses parameters `fn` and `gr` to optimize and to select new points, respectively.
For something like the max-clique or max-stable-set problems, `fn` would be a summing function, but it's less clear how one would formulate `gr` for these graph computations. In these cases, how would `gr` "select"?
|
How to use "sann" function in R to solve graph problems?
|
CC BY-SA 3.0
| null |
2014-12-04T08:54:32.863
|
2015-05-05T03:35:17.127
|
2014-12-05T23:04:05.373
|
2723
|
5273
|
[
"r",
"graphs"
] |
2602
|
2
| null |
2598
|
23
| null |
Item Based Algorithm
```
for every item i that u has no preference for yet
for every item j that u has a preference for
compute a similarity s between i and j
add u's preference for j, weighted by s, to a running average
return the top items, ranked by weighted average
```
User Based Algorithm
```
for every item i that u has no preference for yet
for every other user v that has a preference for i
compute a similarity s between u and v
add v's preference for i, weighted by s, to a running average
return the top items, ranked by weighted average
```
Item vs User based:
1) Recommenders scale with the number of items or users they must deal with, so there are scenarios in which each type can perform better than the other
2) Similarity estimates between items are more likely to converge over time than similarities between users
3) We can compute and cache similarities that converge, which can give item based recommenders a performance advantage
4) Item based recommenders begin with a list of a user's preferred items and therefore do not need a nearest-user neighborhood the way user based recommenders do
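The user-based pseudocode above translates almost line for line into Python; a minimal sketch, with the similarity function (e.g. Pearson or cosine over co-rated items) passed in:
```python
from collections import defaultdict

def user_based_recommend(prefs, u, similarity, top_n=10):
    """prefs: {user: {item: rating}}; similarity(u, v) -> float."""
    totals, sim_sums = defaultdict(float), defaultdict(float)
    for v, v_prefs in prefs.items():
        if v == u:
            continue
        s = similarity(u, v)
        if s <= 0:
            continue                          # ignore dissimilar users
        for i, rating in v_prefs.items():
            if i not in prefs[u]:             # items u has no preference for yet
                totals[i] += s * rating       # v's preference, weighted by s
                sim_sums[i] += s
    ranked = sorted(((totals[i] / sim_sums[i], i) for i in totals), reverse=True)
    return [i for _, i in ranked[:top_n]]

prefs = {"u1": {"a": 5, "b": 3}, "u2": {"a": 4, "c": 5}, "u3": {"b": 2, "c": 4}}
print(user_based_recommend(prefs, "u1", similarity=lambda u, v: 1.0))  # ['c']
```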
| null |
CC BY-SA 3.0
| null |
2014-12-04T15:12:57.327
|
2014-12-04T19:08:33.323
|
2014-12-04T19:08:33.323
|
5043
|
5043
| null |
2604
|
2
| null |
155
|
18
| null |
Custom Google Search
You can use the Custom Google Search for datasets:
[Google Custom Search: Datasets](https://www.google.com/cse/publicurl?cx=002720237717066476899:v2wv26idk7m)
It includes 230 sources and meta-sources of datasets, including all of those mentioned in this question. Please feel free to exclude .gov or any other websites from the results by adding "-.gov" or "-site.com" to the search line; other Google search operators work as well.
Don't hesitate to contact me if you have ideas what websites to add.
IOGDS
The following service categorizes more than 1,000,000 public datasets:
[IOGDS: International Open Government Dataset Search](http://logd.tw.rpi.edu/node/9903)
| null |
CC BY-SA 3.0
| null |
2014-12-04T18:36:57.947
|
2014-12-04T18:36:57.947
| null | null |
5279
| null |
2606
|
2
| null |
2586
|
2
| null |
Classes related to Artificial Intelligence are typically taught in Computer Science departments. Looking at the [IT Project Subjects offered by your university](https://my.feit.uts.edu.au/pages/course/undergraduate/it_project_subjects), I suspect Data Analytics would indeed be more relevant to AI than Internetworking and Applications.
Looking at the courses offered by your department, the following likely involve aspects of AI:
- Image Processing and Pattern Recognition
- Intelligent Agents
- Building Intelligent Agents
For self-directed study in AI, I recommend starting with Russell & Norvig's essential textbook [Artificial Intelligence: A Modern Approach](http://aima.cs.berkeley.edu/).
As to what it will take to create a human-like strong AI, I recommend this collection of essays: [The Philosophy of Artificial Intelligence](http://rads.stackoverflow.com/amzn/click/0198248547)... even though the material is getting a bit out-of-date by now.
Good luck!
| null |
CC BY-SA 3.0
| null |
2014-12-04T22:13:30.747
|
2014-12-04T22:13:30.747
| null | null |
819
| null |
2607
|
2
| null |
2596
|
1
| null |
Would it be possible to use Approximate Bayesian Computation (ABC)? If you assume a distribution for the number of competitors (e.g. Poisson), select a subset of competitors at each iteration, and simulate your data using multinomial distributions with probabilities based on the competitors' features, then, after discarding parameters that don't match your training data, you should be able to obtain parameters for each competitor (that is, posterior distributions) and generate more races.
This might not work if the number of competitors is so important that it affects the coefficients of features for each competitor.
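A toy sketch of the ABC rejection loop for a single competitor's "strength" parameter; the observed win rate, the Poisson field-size model, and the win-probability formula below are all invented for illustration:
```python
import numpy as np

rng = np.random.default_rng(0)
observed_win_rate = 0.30                    # competitor A's win rate in training
accepted, tol, n_races = [], 0.05, 200

for _ in range(5000):
    strength = rng.uniform(0, 5)            # proposed parameter for A
    rivals = rng.poisson(4, n_races) + 1    # varying number of competitors
    p_win = strength / (strength + rivals)  # crude multinomial-style win model
    sim_rate = rng.binomial(1, p_win).mean()
    if abs(sim_rate - observed_win_rate) < tol:
        accepted.append(strength)           # keep parameters matching the data

print(len(accepted), np.mean(accepted))     # approximate posterior for A
```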
| null |
CC BY-SA 3.0
| null |
2014-12-04T23:26:15.803
|
2014-12-04T23:26:15.803
| null | null |
4621
| null |
2608
|
1
| null | null |
3
|
96
|
My data looks like this:

Why is this error showing up?
|
Simple Excel Question: VLookup Error
|
CC BY-SA 3.0
| null |
2014-12-05T00:26:12.983
|
2014-12-05T15:31:06.383
| null | null |
2647
|
[
"data-cleaning"
] |
2609
|
1
|
2616
| null |
8
|
3992
|
Can anyone suggest any good books for learning Hadoop and MapReduce basics?
Also something for Spark and Spark Streaming?
Thanks
|
Good books for Hadoop, Spark, and Spark Streaming
|
CC BY-SA 3.0
| null |
2014-12-05T05:50:29.903
|
2014-12-05T15:38:09.867
| null | null |
5283
|
[
"apache-hadoop"
] |
2610
|
2
| null |
2609
|
1
| null |
For Hadoop, try [Hadoop in Action](http://www.amazon.in/Hadoop-Action-Chuck-Lam/dp/8177228137?tag=googinhydr18418-21) or [Hadoop: The Definitive Guide](http://www.amazon.in/Hadoop-The-Definitive-Guide-White/dp/9350237563?tag=googinhydr18418-21).
| null |
CC BY-SA 3.0
| null |
2014-12-05T11:05:15.257
|
2014-12-05T11:05:15.257
| null | null |
5091
| null |
2611
|
2
| null |
2609
|
2
| null |
Here's a good [open source book for spark](http://databricks.gitbooks.io/databricks-spark-reference-applications/)
| null |
CC BY-SA 3.0
| null |
2014-12-05T11:22:01.423
|
2014-12-05T11:22:01.423
| null | null |
4679
| null |
2612
|
1
| null | null |
6
|
620
|
Although I have seen a few good questions asked about data anonymization, I was wondering if there were answers to this more specific variant.
I am seeking a tool (or to design one) that will anonymize human names from a specific country, particularly first names in unstructured text. Many of the tools that I have seen consider the wider dimensions of data anonymization, with an equal focus on dates of birth, addresses, etc.
An imperative aspect is that it needs to have near-absolute recall. The major pitfalls, as far as I can see, are diminutive variants ("Tommy" instead of "Thomas", "Ben" instead of "Benjamin", etc.) and typos. These two factors rule out a simple regex built from a database of names (based on censuses, etc.).
|
Name Anonymization Software
|
CC BY-SA 3.0
| null |
2014-12-05T14:03:58.777
|
2015-01-09T00:18:05.037
| null | null |
5290
|
[
"text-mining",
"anonymization"
] |
2613
|
2
| null |
1007
|
0
| null |
Ruby together with [Nokogiri](http://www.nokogiri.org/) allows you to access HTML and XML documents via XPath and CSS selectors. Here is a [tutorial](http://thaiwood.io/screen-scraping-with-a-saw-a-nokogiri-tutorial-with-examples/).
| null |
CC BY-SA 3.0
| null |
2014-12-05T15:28:39.640
|
2014-12-05T15:28:39.640
| null | null |
223
| null |
2614
|
2
| null |
2612
|
2
| null |
I don't think you really need special software, but rather to employ existing tools, such as encryption algorithms.
Why not just encrypt the names with any key-based algorithm and store the key securely?
If you didn't need to be able to recover the names, but just to identify variation to the level of differences in diminutives, then you could simply use hashing rather than encryption.
I'm not sure what environment you want to carry this out in, but any language such as `R`, or a SQL/NoSQL database, could do this programmatically.
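A minimal sketch of the keyed-hash idea in Python; the key and the 12-character truncation are arbitrary, and folding diminutives ("Tommy" vs "Thomas") would still need a lookup table applied before hashing:
```python
import hashlib
import hmac

SECRET_KEY = b"store-me-somewhere-safe"      # hypothetical key, kept off the data

def pseudonymize(name):
    # Keyed hash: the same name always maps to the same token,
    # but the token is irreversible without the key.
    canonical = name.strip().lower()         # fold case/whitespace variants
    digest = hmac.new(SECRET_KEY, canonical.encode(), hashlib.sha256)
    return digest.hexdigest()[:12]

print(pseudonymize("Thomas"), pseudonymize("  thomas "))   # identical tokens
```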
| null |
CC BY-SA 3.0
| null |
2014-12-05T15:29:04.743
|
2014-12-05T15:29:04.743
| null | null |
2723
| null |
2615
|
2
| null |
2608
|
1
| null |
This is a common result of imprecise matching, such as with whitespace problems.
For example, if you imagine that the grey shading used in codeblocks on StackExchange represents the whitespaces in your Excel cells, then
`AK`
is technically the same as
```
AK
```
You can trim and/or use wildcard chars to allow matching to skip spaces. For example:
`=VLOOKUP(CONCATENATE("*",TRIM(A2),"*"), $E$2:$F$5, 2,FALSE)`
| null |
CC BY-SA 3.0
| null |
2014-12-05T15:31:06.383
|
2014-12-05T15:31:06.383
| null | null |
2723
| null |
2616
|
2
| null |
2609
|
4
| null |
There's such an overwhelming amount of literature that, for programming, databases, and Big Data, I like to stick to the O'Reilly series as my go-to source. O'Reilly books are extremely popular in the industry and I've been very satisfied.
A current version of
- Hadoop: The Definitive Guide,
- MapReduce Design Patterns, and
- Learning Spark
might suit your needs by providing high-quality, immediately useful information while avoiding information overload; all are published by O'Reilly.
Spark Streaming is covered in Chapter 13 of "Learning Spark".
| null |
CC BY-SA 3.0
| null |
2014-12-05T15:38:09.867
|
2014-12-05T15:38:09.867
| null | null |
2723
| null |
2617
|
2
| null |
867
|
3
| null |
In terms of open source NLG components, I'm most familiar with Mumble and FUF/SURGE. They've got both similarities and differences, so it's hard to say which is better...
[Mumble](https://code.google.com/p/sparser/source/browse/#svn%2Ftrunk%2FMumble):
- written in Lisp
- EPL license
- based on tree-adjoining grammar
- focuses on linguistic message planning
[FUF/SURGE](http://www.cs.bgu.ac.il/surge/index.html):
- written in Lisp
- GPL license
- based on functional unification grammar
- focuses on syntactic realization
Since it sounds like you're interested in abstractive summarization (which is much harder than traditional extractive summarization), I'd recommend the following academic papers:
- Text Generation for Abstractive Summarization
- Framework for Abstractive Summarization using Text-to-Text Generation
- Towards a Framework for Abstractive Summarization of Multimodal Documents -- full disclosure: I'm the author of this one
Also, consider checking out this textbook to get started: [Building Natural Language Generation Systems](http://rads.stackoverflow.com/amzn/click/0521620368)
| null |
CC BY-SA 3.0
| null |
2014-12-05T15:43:40.720
|
2014-12-05T15:43:40.720
| null | null |
819
| null |
2618
|
1
|
2624
| null |
13
|
366
|
Few things in life give me pleasure like scraping structured and unstructured data from the Internet and making use of it in my models.
For instance, the Data Science Toolkit (or `RDSTK` for R programmers) allows me to pull lots of good location-based data using IPs or addresses, and the `tm.webmining.plugin` for R's `tm` package makes scraping financial and news data straightforward. When going beyond such (semi-)structured data I tend to use `XPath`.
However, I'm constantly getting throttled by limits on the number of queries you're allowed to make. I think Google limits me to about 50,000 requests per 24 hours, which is a problem for Big Data.
From a technical perspective getting around these limits is easy -- just switch IP addresses and purge other identifiers from your environment. However, this presents both ethical and financial concerns (I think?).
Is there a solution that I'm overlooking?
|
Ethically and Cost-effectively Scaling Data Scrapes
|
CC BY-SA 3.0
| null |
2014-12-05T15:51:54.690
|
2017-10-02T06:16:35.300
|
2015-11-19T17:42:19.730
|
11097
|
2723
|
[
"text-mining",
"scraping"
] |
2619
|
2
| null |
2494
|
1
| null |
I'm not sure I fully understand your question, but it seems to me that you're trying to determine the category of the string/entity "titanic" out of context. Your data tells you that "titanic" could be a book, a movie, or a product, and you want to figure out which one is correct -- is that what you're trying to do?
If so, the problem is that you've dropped the context in which the string/entity "titanic" appears in your original text. For example...
- In the sentence "I couldn't stop reading Titanic," the word "titanic" refers to a book.
- In the sentence "Titanic was one of the highest-grossing films of all time," the word "titanic" refers to a movie.
- In the sentence "The Titanic was the world's largest ocean liner," the word "titanic" refers to a product.
Without that context, there's no way to know which is the correct category. I'd suggest looking into how named entity recognition tools like [Stanford NER](http://nlp.stanford.edu/software/CRF-NER.shtml) work -- that will help you better understand how to do something like this. You'll see that the input to an NER tool generally needs to be a sentence, in order to take advantage of the context to properly categorize the extracted entities.
| null |
CC BY-SA 3.0
| null |
2014-12-05T15:57:07.737
|
2014-12-05T15:57:07.737
| null | null |
819
| null |
2620
|
2
| null |
954
|
1
| null |
From an older version of the [OpenNLP README](http://opennlp.sourceforge.net/README.html):
>
Training the Tools
There are training tools for all components except the coref component. Please consult the help message of the tool and the javadoc to figure out how to train the tools.
The tutorials in our wiki might also be helpful.
The following modules currently support training via the WordFreak opennlp.plugin v1.4 (http://wordfreak.sourceforge.net/plugins.html).
coreference: org.annotation.opennlp.OpenNlpCoreferenceAnnotator (use opennlp 1.4.3 for training, models are compatible)
Note: In order to train a model you need all the training data. There is not currently a mechanism to update the models distributed with the project with additional data.
As you can see, OpenNLP does not provide training tools for the coreference component. However, it seems at one point it was possible to train new models for OpenNLP's coref component using the third-party WordFreak plugin... however, it hasn't been updated in over a decade, so your mileage may vary.
| null |
CC BY-SA 3.0
| null |
2014-12-05T16:14:01.097
|
2014-12-05T16:14:01.097
| null | null |
819
| null |
2621
|
2
| null |
2601
|
1
| null |
First, some clarification on terminology.
A package in R is a collection of R functions, data, and compiled code in a well-defined format.
SANN (`sann`) is not a package. It is a method used within optimization functions; in base R, the function `optim` (in the `stats` package) exposes it as `method = "SANN"`.
Related optimizers appear in functions and packages such as `constrOptim`, `trustOptim`, and `consPlan`.
In `optim`, the SANN method is used as:
```
> func <- function(x){
+   out <- (x[1]-2)^2 + (x[2]-1)^2
+   return(out)
+ }
> optim(par=c(0,0), fn=func, gr = NULL,
+       method = "SANN",
+       lower = -Inf, upper = Inf,
+       control = list(), hessian = TRUE)
```
As you said, for the "SANN" (`sann`) method `gr` is used to generate a new candidate point. If it is NULL a default Gaussian Markov kernel is used.
Now in your use case -- the case of a graph -- what you probably want to do is to [use par and value to pass values to fn and gr](http://www.inside-r.org/r-doc/graphics/par). This is a nice feature of this implementation of SANN in `optim` which is covered a little more than half way through this [documentation page](http://www.inside-r.org/r-doc/stats/optim).
`par` is the best set of parameters found and `value` is the value of `fn` corresponding to `par`.
| null |
CC BY-SA 3.0
| null |
2014-12-05T16:35:42.910
|
2014-12-05T16:35:42.910
| null | null |
2723
| null |
2622
|
2
| null |
2593
|
4
| null |
The `Data Science Toolkit` is [a powerful library](http://www.datasciencetoolkit.org/) (or collection of libraries, technically) which are available in a number of languages. For instance, I use the implementation called `RDSTK` in R.
In the case of your preferred language, Google Go, there's [a list](https://code.google.com/p/go-wiki/wiki/Projects#Web_Libraries) of web-related libraries here which looks very useful.
| null |
CC BY-SA 3.0
| null |
2014-12-05T16:37:54.327
|
2014-12-05T16:37:54.327
| null | null |
2723
| null |
2623
|
1
| null | null |
2
|
485
|
The `mnlogit` package in R allows for the fast estimation of multinomial logit models.
The specification of formulas is a bit different from most other regression models/packages in R, however.
Using the `Fish` dataset as a reproducible example,
```
> require(mnlogit)
Loading required package: mnlogit
Package: mnlogit
Version: 1.1.1
Multinomial Logit Choice Models.
Scientific Computing Group, Sentrana Inc, 2013.
> data(Fish, package ='mnlogit')
> head(Fish)
mode income alt price catch chid
1.beach FALSE 7083.332 beach 157.930 0.0678 1
1.boat FALSE 7083.332 boat 157.930 0.2601 1
1.charter TRUE 7083.332 charter 182.930 0.5391 1
1.pier FALSE 7083.332 pier 157.930 0.0503 1
2.beach FALSE 1250.000 beach 15.114 0.1049 2
2.boat FALSE 1250.000 boat 10.534 0.1574 2
```
I'm trying to understand the difference between the model specification of
```
fm <- formula(mode ~ 0 + price | income | catch)
```
and
```
fm <- formula(mode ~ 0 + price | income + catch)
```
While the documentation covers the details of such changes in the general-coefficient area of the formula (i.e. where `price` is), I don't see an explanation of how operators like `+` affect the alternative-specific area of the formula/code, relative to `|`.
|
How to Interpret Multinomial Specification in R's `mnlogit` package
|
CC BY-SA 3.0
| null |
2014-12-05T16:54:13.967
|
2014-12-05T16:54:13.967
| null | null |
2723
|
[
"r",
"logistic-regression",
"regression"
] |
2624
|
2
| null |
2618
|
6
| null |
For many APIs (most I've seen), rate limiting is a function of your API key or OAuth credentials (Google, Twitter, NOAA, Yahoo, Facebook, etc.). The good news is you won't need to spoof your IP; you just need to swap out credentials as they hit their rate limit.
A bit of shameless self promotion here but I wrote a python package specifically for handling this problem.
[https://github.com/rawkintrevo/angemilner](https://github.com/rawkintrevo/angemilner)
[https://pypi.python.org/pypi/angemilner/0.2.0](https://pypi.python.org/pypi/angemilner/0.2.0)
It requires a mongodb daemon; basically, you make a page for each one of your keys. Say you have 4 email addresses, each with a separate key assigned. When you load a key in, you specify the maximum calls per day and the minimum time between uses.
Load keys:
```
from angemilner import APIKeyLibrarian
l= APIKeyLibrarian()
l.new_api_key("your_assigned_key1", 'noaa', 1000, .2)
l.new_api_key("your_assigned_key2", 'noaa', 1000, .2)
```
Then, when you run your scraper, for instance against the NOAA API:
```
url= 'http://www.ncdc.noaa.gov/cdo-web/api/v2/stations'
payload= { 'limit': 1000,
'datasetid': 'GHCND',
'startdate': '1999-01-01' }
r = requests.get(url, params=payload, headers= {'token': 'your_assigned_key'})
```
becomes:
```
url= 'http://www.ncdc.noaa.gov/cdo-web/api/v2/stations'
payload= { 'limit': 1000,
'datasetid': 'GHCND',
'startdate': '1999-01-01' }
r = requests.get(url, params=payload, headers= {'token': l.check_out_api_key('noaa')['key']})
```
so if you have 5 keys, `l.check_out_api_key` returns the key that has the least uses and waits until enough time has elapsed for it to be used again.
Finally, to see how often your keys have been used and the remaining usage available:
```
pprint(l.summary())
```
I didn't write this for R because most scraping is done in python (most of MY scraping, anyway). It could be easily ported.
That's how you can technically get around rate limiting. Ethically ...
UPDATE: An example using the Google Places API is [here](https://github.com/rawkintrevo/angemilner/blob/master/example2.py)
| null |
CC BY-SA 3.0
| null |
2014-12-05T17:13:14.073
|
2014-12-05T18:53:24.863
|
2014-12-05T18:53:24.863
|
5247
|
5247
| null |
2625
|
2
| null |
2593
|
2
| null |
If your only motivation for using Google Go is web scraping, and you want to do your ML in Python, I would recommend the following stack:
[Python requests for scraping data](http://docs.python-requests.org/en/latest/)
[MongoDB for caching data](http://www.mongodb.org/) (MongoDB's page oriented format makes it a natural home for storing JSON objects commonly returned by APIs)
[pymongo for interfacing python and mongodb](http://api.mongodb.org/python/current/)
[scikit-learn for doing your machine learning](http://scikit-learn.org/stable/)
This all happens in Python, and you can extend it to multiple processors with
[multiprocessing](https://docs.python.org/2/library/multiprocessing.html) or to multiple nodes with django
| null |
CC BY-SA 3.0
| null |
2014-12-05T18:09:27.670
|
2014-12-05T18:43:57.927
|
2014-12-05T18:43:57.927
|
5247
|
5247
| null |
2626
|
2
| null |
2432
|
0
| null |
If you are willing to pay for a vendor solution, [Teradata](http://www.teradata.com/) is designed to solve the problem of large scale joins with low latency.
| null |
CC BY-SA 3.0
| null |
2014-12-05T20:13:17.793
|
2014-12-05T20:13:17.793
| null | null |
1330
| null |
2627
|
1
| null | null |
6
|
332
|
I'm trying to create 3D bars on this map. Can anyone please advise if this is possible, and how?
[http://leafletjs.com/examples/choropleth.html](http://leafletjs.com/examples/choropleth.html)
My data: UFO sightings in the USA (location wise).
Count of these sightings per location will be the height of the 3D bar.
Base map is a choropleth with US population density values.
I don't mind integrating Javascript or d3.js into the code to create the 3D bars.
|
3D map using leaflet
|
CC BY-SA 3.0
| null |
2014-12-06T00:41:24.933
|
2016-07-10T10:39:59.713
| null | null |
2647
|
[
"javascript",
"visualization"
] |
2628
|
1
| null | null |
2
|
571
|
Using SAS Studio (online, student version)...
Need to do a "nested likelihood ratio test" for a logistic regression.
The entirety of the instructions is: "Perform a nested likelihood ratio test comparing your full model (all predictors included) to a reduced model of interest."
The two models I have are:
```
Proc Logistic Data=Project_C;
Model Dem (event='1') = VEP TIF Income NonCit Unemployed Swing;
Run;
```
and
```
Proc Logistic Data=Project_C;
Model Dem (Event='1') = VEP TIF Income / clodds=Wald clparm=Wald expb rsquare;
Run;
```
I honestly have no idea where to even start.
Any suggestions would be appreciated.
Thanks!
|
SAS Nested Likelihood Ratio Test for a Logistic Model
|
CC BY-SA 3.0
| null |
2014-12-06T01:10:30.817
|
2022-04-09T15:50:01.330
|
2022-04-09T15:50:01.330
|
1330
|
5298
|
[
"logistic-regression",
"sas"
] |
2629
|
1
|
2630
| null |
3
|
429
|
In recent years, the term "data" seems to have become a term widely used without specific definition. Everyone seems to use the phrase. Even people as technology-impaired as my grandparents use the term and seem to understand words like "data breach." But I don't understand what makes "data science" a new discipline. Data has been the foundation of science for centuries. Without data, there would be no Mendel, no Schrödinger, etc. You can't have science without interpreting and analyzing data.
But clearly it means something. Everyone is talking about it. So what exactly do people mean by data when they use terms like "big data" and why has this become a discipline in itself? Also, if it is an emerging discipline, where can I find more serious/in-depth information so I can better educate myself?
Thanks!
|
What is "data science"?
|
CC BY-SA 3.0
| null |
2014-12-06T06:53:14.617
|
2014-12-06T13:02:19.373
| null | null |
5301
|
[
"bigdata",
"definitions"
] |
2630
|
2
| null |
2629
|
5
| null |
I get asked this question all the time, so earlier this year I wrote an article ([What is Data Science?](http://oss4ds.com/what-is-data-science.html)) based on a presentation I've given a few times. Here's the gist...
First, a few definitions of data science offered by others:
[Josh Wills from Cloudera says](https://twitter.com/josh_wills/status/198093512149958656) a data scientist is someone "who is better at statistics than any software engineer and better at software engineering than any statistician."
[A frequently-heard joke](https://twitter.com/nivertech/status/180109930139893761) is that a "Data Scientist" is a Data Analyst who lives in California.
[According to Big Data Borat](https://twitter.com/BigDataBorat/status/372350993255518208), Data Science is statistics on a Mac.
In [Drew Conway's famous Data Science Venn Diagram](http://drewconway.com/zia/2013/3/26/the-data-science-venn-diagram), it's the intersection of Hacking Skills, Math & Statistics Knowledge, and Substantive Expertise.
Here's another good definition I found on the [ITProPortal blog](http://www.itproportal.com/2014/02/11/how-to-pick-a-data-scientist-the-right-way/):
>
"A data scientist is someone who understands the domains of programming, machine learning, data mining, statistics, and hacking"
Here's how we define Data Science at Altamira (my current employer):

The bottom four rows are the table stakes -- the cost of admission just to play the game. These are foundational skills that all aspiring data scientists must obtain. Every data scientist must be a competent programmer. He or she must also have a solid grasp of math, statistics, and analytic methodology. Data science and "big data" go hand-in-hand, so all data scientists need to be familiar with frameworks for distributed computing. Finally, data scientists must have a basic understanding of the domains in which they operate, as well as excellent communications skills and the ability to tell a good story with data.
With these basics covered, the next step is to develop deep expertise in one or more of the vertical areas. "Data Science" is really an umbrella term for a collection of interrelated techniques and approaches taken from a variety of disciplines, including mathematics, statistics, computer science, and software engineering. The goal of these diverse methods is to extract actionable intelligence from data of all kinds, enabling clients to make better data-driven decisions. No one person can ever possibly master all aspects of data science; doing so would require multiple lifetimes of training and experience. The best data scientists are therefore "T-shaped" individuals -- that is, they possess a breadth of knowledge across all areas of data science, along with deep expertise in at least one. Accordingly, the best data science teams bring together a set of individuals with complementary skillsets spanning the entire spectrum.
| null |
CC BY-SA 3.0
| null |
2014-12-06T13:02:19.373
|
2014-12-06T13:02:19.373
| null | null |
819
| null |
2631
|
1
| null | null |
7
|
2272
|
For simplicity let's assume the feature space is the XY plane.
|
What would be a good way to use clustering for outlier detection?
|
CC BY-SA 3.0
| null |
2014-12-06T15:04:03.823
|
2020-08-17T18:55:17.380
|
2016-03-02T15:51:27.790
|
13727
|
5306
|
[
"machine-learning",
"data-mining",
"clustering",
"anomaly-detection"
] |
2632
|
1
| null | null |
3
|
41
|
I'm running tests of a MapReduce algorithm in different environments, such as Hadoop and MongoDB, and using different types of data. What are the different methods or techniques to find out the execution time of a query?
If I'm inserting a huge amount of data, say 2-3GB, what are the methods to find out the time for the process to complete?
|
Timing sequence in MapReduce
|
CC BY-SA 3.0
| null |
2014-12-06T17:56:53.157
|
2022-05-02T06:00:45.083
|
2014-12-13T00:14:33.460
|
84
|
5310
|
[
"efficiency",
"map-reduce",
"performance",
"experiments"
] |
2633
|
1
| null | null |
0
|
1327
|
I'm trying to set up a cluster (1 namenode, 1 datanode) on AWS.
I'm using the free one-year trial period of AWS, but the challenge is that the instance is created with 1GB of RAM.
As I'm a student, I cannot afford much. Can anyone please suggest a solution?
Also, it would be great if you could provide any links for setting up a multi-node Hadoop cluster with Spark on AWS.
Note: I cannot try in GCE as my trial period is exhausted.
|
Can Hadoop with Spark be configured with 1GB RAM?
|
CC BY-SA 3.0
| null |
2014-12-07T04:40:53.677
|
2015-02-28T13:28:18.943
|
2015-02-28T13:28:18.943
|
84
|
5172
|
[
"bigdata",
"apache-hadoop",
"nosql",
"aws"
] |
2634
|
1
| null | null |
2
|
156
|
Having:
- a set of soft fuzzy classifiers (classification onto overlapping sets) $C_i(x) \to [0,1]$;
- a corresponding set of weak estimators $R_i(z)$ of the form $R_i(z) = \mathbb{E}(y \mid z)$.
The estimators $R_i$ are just some kind of regression, Kalman or particle filters. The classifiers $C_i$ are fixed and static. How to make a strong estimator out of a weighted combination of the form:
$$L(x, z) = \sum_{i}C_i(x)R_i(z)Q_i$$
In other words, how does one choose the weights $Q_i$? Is there some kind of online approach to this problem?
Here is a brief description of a practical application. When an event $E$ is registered, multiple measurements are made. Based on these measurements, the classifiers $C_i$ make a soft assignment of the event to multiple overlapping categories. What we get is fit ratios for the soft clusters.
Now there is some chance that event $E$ may trigger a subsequent event $D$, depending on another variable $z$ -- independent from the event $E$. We know that all the soft cluster "memberships" may influence the probability of event $D$ being triggered.
We want to estimate the probability that $E$ triggers $D$, given the $C_i$ fitness ratios and value of $z$.
|
Linear combination of weak estimators over fuzzy classifiers?
|
CC BY-SA 3.0
| null |
2014-12-07T10:43:01.697
|
2014-12-13T00:11:04.017
|
2014-12-13T00:11:04.017
|
84
|
5314
|
[
"machine-learning",
"regression",
"online-learning"
] |
2635
|
2
| null |
2593
|
1
| null |
I am not 100% sure whether a message queue library is the right tool for this job, but so far it looks like it to me.
With a messaging library like:
- nsq
- zeromq
- mqtt (?)
You can connect processes running in different environments through a TCP-based protocol. As these systems run distributed, it is possible to connect multiple nodes.
For nsq we even have libraries in both Python and Go!
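As a sketch of the pattern with zeromq's Python bindings (pyzmq): the Python/ML side binds a PULL socket and the Go scraper pushes records to it. The port and the "url" field are hypothetical:
```python
import zmq

ctx = zmq.Context()
sock = ctx.socket(zmq.PULL)
sock.bind("tcp://*:5557")          # the Go PUSH socket connects to this port

for _ in range(10):                # bounded loop for the sketch; blocks on recv
    record = sock.recv_json()      # one scraped document per message
    print(record["url"])           # hand it to the ML pipeline here
```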
| null |
CC BY-SA 3.0
| null |
2014-12-07T11:29:14.057
|
2014-12-07T11:29:14.057
| null | null |
5266
| null |
2636
|
2
| null |
2631
|
1
| null |
Perhaps you could cluster the items; those items with the furthest distance from the midpoint of their cluster would then be candidates for outliers.
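A minimal scikit-learn sketch of that idea; the cluster count, the three-candidate cutoff, and the planted outlier are arbitrary:
```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (100, 2)),     # two well-behaved blobs...
               rng.normal(8, 1, (100, 2)),
               [[4.0, 15.0]]])                 # ...plus one planted outlier

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
dists = np.linalg.norm(X - km.cluster_centers_[km.labels_], axis=1)
candidates = np.argsort(dists)[-3:]            # furthest from their own centroid
print(candidates)                              # index 200 (the plant) shows up
```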
| null |
CC BY-SA 3.0
| null |
2014-12-07T12:16:10.207
|
2014-12-07T12:16:10.207
| null | null |
819
| null |
2637
|
2
| null |
2633
|
4
| null |
So if 4GB of RAM isn't sufficient, 1GB isn't going to be. That is really too little to run an HDFS namenode, a datanode, YARN, and a Spark driver alone, let alone leave room for your workers.
Much more reasonable is to simply run Spark locally on that instance without Hadoop at all.
But I would question whether Spark is the right choice if you are definitely limited to such a small machine.
| null |
CC BY-SA 3.0
| null |
2014-12-07T13:37:40.373
|
2014-12-07T13:37:40.373
| null | null |
21
| null |
2639
|
2
| null |
2634
|
1
| null |
You might find a solution for this by checking out the Viola-Jones face detection algorithm (and object detection in general) [http://www.cs.ubc.ca/~lowe/425/slides/13-ViolaJones.pdf](http://www.cs.ubc.ca/~lowe/425/slides/13-ViolaJones.pdf), particularly the AdaBoost algorithm for building a strong classifier from weak classifiers: [https://www.cs.princeton.edu/~schapire/papers/explaining-adaboost.pdf](https://www.cs.princeton.edu/~schapire/papers/explaining-adaboost.pdf)
In that context AdaBoost is used particularly for feature selection: the aim is to select just the features that, grouped together, classify better, discarding the others as noisy features.
The approach to obtaining this strong classifier is to have a set of examples X, a set of weights W, and a set of expected results (classifications) Y. For a two-class example (e.g. face or no-face image), a first weak classifier is selected, and the classification error between this classifier's output and the expected output is computed. The weights of the misclassified samples are then increased, so that the next weak classifier chosen is biased towards correcting these misclassifications.
The algorithm (AdaBoost) converges when the sign of the sum of the weak classifiers' outputs for each sample gives the correct classification.
For example:
```
Y = +1 -1 +1 +1
WC1=+1 +1 +1 +1
WC2=-1 -1 -1 -1
WC3=+1 -1 +1 +1
```
So the strong classifier is WC1+WC2+WC3:
```
SC=+1 -1 +1 +1 == Y
```
Hope this answers your question.
| null |
CC BY-SA 3.0
| null |
2014-12-07T15:39:00.947
|
2014-12-07T15:39:00.947
| null | null |
5143
| null |
2640
|
2
| null |
2631
|
6
| null |
A clustering algorithm that is very robust against outliers is [PFCM from Bezdek](http://www.comp.ita.br/%7Eforster/CC-222/material/fuzzyclust/fuzzy01492404.pdf).
In this paper Bezdek proposes Possibilistic-Fuzzy-C-Means, which is an improvement on the various earlier variants of fuzzy and possibilistic clustering. This algorithm is particularly good at detecting outliers and preventing them from influencing the clustering. So using PFCM you could find which points are identified as outliers and at the same time obtain a very robust fuzzy clustering of your data.
| null |
CC BY-SA 4.0
| null |
2014-12-07T15:46:40.670
|
2020-08-17T16:15:44.087
|
2020-08-17T16:15:44.087
|
98307
|
5143
| null |
2641
|
2
| null |
2568
|
1
| null |
The following link contains a list of words with positive and negative polarity scores on a scale of [-5, 5]. Just sum up the scores of the matched words and you can get an overall movie-review score.
[AFINN](http://www2.imm.dtu.dk/pubdb/views/publication_details.php?id=6010)
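A toy sketch of that word-match scoring; the five-word lexicon below is invented, while the real AFINN file maps a few thousand words to integer valences in [-5, 5]:
```python
import re

# Stand-in lexicon; load the real AFINN word list for actual use.
lexicon = {"good": 3, "great": 3, "bad": -3, "terrible": -3, "boring": -2}

def review_score(text):
    # Sum the valence of every matched word; unknown words count as 0.
    return sum(lexicon.get(w, 0) for w in re.findall(r"[a-z']+", text.lower()))

print(review_score("great acting but a terrible, boring plot"))  # 3 - 3 - 2 = -2
```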
| null |
CC BY-SA 4.0
| null |
2014-12-07T16:17:20.613
|
2021-03-30T20:43:25.393
|
2021-03-30T20:43:25.393
|
85045
|
1131
| null |
2642
|
1
| null | null |
5
|
4398
|
Are there any algorithms that were developed using partial differential equations for tackling machine learning problems? Most works I see online are in the field of computer vision, with a few unusual ones in topic modelling. I'm just curious whether anyone has used them, or seen them used, for decision-making or classification problems.
|
Machine Learning & Partial Differential Equations
|
CC BY-SA 3.0
| null |
2014-12-07T16:24:43.997
|
2014-12-13T00:14:11.770
| null | null |
1131
|
[
"machine-learning",
"algorithms"
] |
2643
|
1
| null | null |
4
|
1002
|
I'm new to Apache Spark.
Is it possible to configure a multi-node Spark cluster without Hadoop?
If so, can you please provide the steps?
I would like to create clusters on Google Compute Engine (1-master, 1-worker)
|
How to set up a multi-node Spark cluster without Hadoop on Google Compute Engine
|
CC BY-SA 3.0
| null |
2014-12-07T16:31:57.913
|
2015-05-16T20:06:16.560
| null | null |
5172
|
[
"bigdata",
"apache-hadoop",
"scala"
] |
2645
|
1
| null | null |
1
|
691
|
Say I used spectral clustering to cluster a data set $D$ of points $X_0, \dots, X_n$ into a number $C$ of clusters. How can I efficiently assign a new single point $X_{n+1}$ to its appropriate cluster?
Do I have to redo the clustering from the beginning (destroy all the clusters and apply the algorithm to the data set $X_0, \dots, X_{n+1}$), or is there an optimized way to extend the clustering to the point $X_{n+1}$?
|
Assign new point to a class using spectral clustering
|
CC BY-SA 3.0
| null |
2014-12-08T10:47:52.340
|
2014-12-12T21:01:42.370
|
2014-12-12T21:01:42.370
|
84
|
5332
|
[
"machine-learning",
"data-mining",
"clustering"
] |
2646
|
1
|
2648
| null |
20
|
23373
|
Are there any articles or discussions about extracting the part of a text that holds the most information about the current document?
For example, I have a large corpus of documents from the same domain. There are parts of the text that hold the key information about what a single document discusses. I want to extract some of those parts and use them as a kind of summary of the text. Is there any useful documentation about how to achieve something like this?
It would be really helpful if someone could point me in the right direction as to what I should search for or read to get some insight into work that might already have been done in this field of natural language processing.
|
Extract most informative parts of text from documents
|
CC BY-SA 3.0
| null |
2014-12-08T14:51:27.613
|
2019-03-19T15:01:33.430
| null | null |
2750
|
[
"nlp",
"text-mining"
] |
2648
|
2
| null |
2646
|
25
| null |
What you're describing is often achieved using a simple combination of [TF-IDF](http://en.wikipedia.org/wiki/Tf%E2%80%93idf) and [extractive summarization](http://en.wikipedia.org/wiki/Automatic_summarization#Extraction-based_summarization).
In a nutshell, TF-IDF tells you the relative importance of each word in each document, in comparison to the rest of your corpus. At this point, you have a score for each word in each document approximating its "importance." Then you can use these individual word scores to compute a composite score for each sentence by summing the scores of each word in each sentence. Finally, simply take the top-N scoring sentences from each document as its summary.
Earlier this year, I put together an iPython Notebook that culminates with an implementation of this in Python using NLTK and Scikit-learn: [A Smattering of NLP in Python](https://github.com/charlieg/A-Smattering-of-NLP-in-Python).
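A condensed sketch of that pipeline, using scikit-learn for the TF-IDF weights (the regex sentence splitter is a crude stand-in for a proper tokenizer such as nltk.sent_tokenize):
```python
import re
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

def summarize(document, corpus, top_n=2):
    """Return the top_n sentences of document, scored by summed TF-IDF."""
    vectorizer = TfidfVectorizer(stop_words="english").fit(corpus)
    weights = vectorizer.transform([document]).toarray()[0]
    vocab = vectorizer.vocabulary_
    analyzer = vectorizer.build_analyzer()

    sentences = re.split(r"(?<=[.!?])\s+", document.strip())
    scores = [sum(weights[vocab[w]] for w in analyzer(s) if w in vocab)
              for s in sentences]
    best = sorted(np.argsort(scores)[::-1][:top_n])   # keep original order
    return [sentences[i] for i in best]
```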
| null |
CC BY-SA 3.0
| null |
2014-12-08T15:48:25.647
|
2014-12-08T15:48:25.647
| null | null |
819
| null |
2649
|
2
| null |
2645
|
1
| null |
I think you do not have to, since I was able to find several papers that propose algorithms for exactly this:
- Incremental spectral clustering by efficiently updating the eigen-system
- Incremental kernel spectral clustering for online learning of non-stationary data
- A Fast Incremental Spectral Clustering for Large Data Sets
Such algorithms are called sequential, incremental, streaming, or online. Armed with this knowledge, you can find more papers on your own.
| null |
CC BY-SA 3.0
| null |
2014-12-08T20:55:55.677
|
2014-12-08T20:55:55.677
| null | null |
381
| null |
2650
|
2
| null |
2642
|
1
| null |
Neil is correct. There are partial derivatives everywhere in the gradient computations used for training machine learning models.
For instance, you can look at the gradient descent method used in the backpropagation algorithm for a neural network. Andrew Ng's course on Coursera describes it very well.
| null |
CC BY-SA 3.0
| null |
2014-12-08T22:32:05.817
|
2014-12-13T00:14:11.770
|
2014-12-13T00:14:11.770
|
5342
|
5342
| null |
2651
|
1
| null | null |
29
|
9873
|
I am looking for a paper detailing the very basics of deep learning, ideally something like the Andrew Ng course but for deep learning. Do you know where I can find this?
|
Deep learning basics
|
CC BY-SA 3.0
| null |
2014-12-08T22:37:32.777
|
2020-08-20T18:59:26.887
| null | null |
5342
|
[
"machine-learning",
"deep-learning"
] |
2652
|
1
|
5132
| null |
3
|
3037
|
I'm trying to integrate Hadoop and R. I installed the packages rJava and Rhipe in R, and I do these steps to start Hadoop and R:
- starting the Hadoop services,
- loading the rJava and Rhipe packages with the library function,
- calling rhinit() to initialize Rhipe.
The problem is that when I call the rhinit() function, it shows this error:
>
Initializing Rhipe v0.73
Error in .jnew("org/godhuli/rhipe/PersonalServer") :
java.lang.NoClassDefFoundError: org/apache/hadoop/fs/FSDataInputStream'
Please, any help fixing this problem would be appreciated.
|
java.lang.NoClassDefFoundError: org/apache/hadoop/fs/FSDataInputStream
|
CC BY-SA 3.0
| null |
2014-12-09T00:58:03.057
|
2015-08-13T00:36:49.403
| null | null |
4705
|
[
"r",
"apache-hadoop"
] |
2653
|
2
| null |
2651
|
19
| null |
The subject is new so most of the wisdom is scattered in papers, but here are two recent books:
- Deep Learning, Yoshua Bengio, Ian J. Goodfellow, Aaron Courville.
- Deep Learning: Methods and Applications, Li Deng and Dong Yu.
And [some practical material](http://deeplearning.net/tutorial/).
- ACL 2012 + NAACL 2013 Tutorial: Deep Learning for NLP (without Magic)
| null |
CC BY-SA 4.0
| null |
2014-12-09T01:19:32.290
|
2020-08-20T18:58:20.740
|
2020-08-20T18:58:20.740
|
98307
|
381
| null |
2654
|
1
|
2664
| null |
1
|
135
|
I'm wondering if there is a web framework well suited for placing recommendations on content.
In most cases, a data scientist goes through after the fact and builds (or uses) a completely different tool to create recommendations. This involves analyzing traffic logs, a history of shopping cart data, ratings, and so forth. The data usually comes from multiple sources (the web server, the application's database, Google Analytics, etc.) and then has to be cleaned up, processed, and THEN delivered back to the application in a way it understands.
Is there a web framework on the market which handles collecting this data up front, so as to minimize the retrospective data wrangling?
|
Web Framework Built for Recommendations
|
CC BY-SA 3.0
| null |
2014-12-09T02:28:51.430
|
2014-12-10T02:05:33.430
| null | null |
3466
|
[
"predictive-modeling",
"data-cleaning"
] |
2655
|
2
| null |
2651
|
14
| null |
Neural Networks and Deep Learning by Michael Nielsen. The book is still in progress, but it looks quite interesting and promising. And it's free! [Here](http://neuralnetworksanddeeplearning.com/)'s the link.
There are only 5 chapters so far, and the most of them talk about usual neural networks, but it's still worth having a look.
Update: the book has been finished!
| null |
CC BY-SA 4.0
| null |
2014-12-09T08:26:57.997
|
2020-08-20T18:58:36.587
|
2020-08-20T18:58:36.587
|
98307
|
816
| null |
2656
|
2
| null |
2491
|
2
| null |
Found it myself.
- Open the context menu by right-clicking the dimension field.
- Go to Aliases... and change the labels.

| null |
CC BY-SA 3.0
| null |
2014-12-09T08:30:28.000
|
2014-12-09T08:30:28.000
| null | null |
97
| null |
2658
|
1
| null | null |
9
|
2200
|
I have a huge file of customer complaints about the products my company owns and I would like to do a data analysis on those descriptions and tag a category to each of them.
For example: I need to figure out the number of complaints about the software and hardware sides of my product from the customer complaints. Currently, I am using Excel to do the data analysis, which requires a significant amount of manual work to attach a tag to each complaint.
Is there a way in NLP to build and train a model to automate this process? I have been reading about NLP for the past couple of days, and it looks like NLP has a lot of good features for getting a head start on addressing this issue. Could someone please guide me on the way I should use NLP to address it?
|
Using NLP to automate the categorization of user description
|
CC BY-SA 3.0
| null |
2014-12-09T20:49:37.093
|
2020-08-17T01:00:02.987
| null | null |
5043
|
[
"data-mining",
"classification",
"nlp",
"categorical-data"
] |
2659
|
1
|
2662
| null |
1
|
6643
|
EDIT It was pointed out in the Answers-section that I am confusing k-means and kNN. Indeed I was thinking about kNN but wrote k-means since I'm still new to this topic and confuse the terms quite often. So here is the changed question.
I was looking at kNN today and something struck me as odd, or, to be more precise, something I was unable to find information about, namely the following situation.
Imagine that we pick kNN for some dataset. I want to remain as general as possible, so $k$ will not be specified here. Further, we select, at some point, an observation where the number of neighbors that fulfill the requirement to be in the neighborhood is actually more than the specified $k$ (e.g. several neighbors tied at the same distance).
What criterion or criteria should be applied here if we are restricted to the specified $k$ and thus cannot alter the structure of the neighborhood (the number of neighbors)? Which observations will be left out, and why? Also, is this a problem that occurs often, or is it something of an anomaly?
|
kNN - what happens if more than K observation have the same distance to the centroid of the cluster
|
CC BY-SA 3.0
| null |
2014-12-09T21:13:10.797
|
2014-12-12T21:05:30.020
|
2014-12-12T21:05:30.020
|
84
|
5356
|
[
"machine-learning",
"classification"
] |
2660
|
2
| null |
2658
|
7
| null |
One way to handle this is to use 'supervised classification'. In this model, you manually classify a subset of the data and use it to train your algorithm. Then, you feed the remaining data into your software to classify it.
This is accomplished with NLTK for Python (nltk.org).
If you are simply looking for strings like "hardware" and "software", this is a simple use case, and you will likely get decent results using a 'feature extractor', which informs your classifier which phrases in the document are relevant.
While it's possible to implement an automated method for finding the keywords, it sounds like you have a list in mind already, so you can skip that step and just use the tags you are aware of. (If your results aren't satisfactory the first time, this is something you might try later on).
That's an overview for getting started. If you are unhappy with the initial results, you can refine your classifier by introducing more complex methods, such as sentence segmentation, identification of dialogue act types, and decision trees. The sky is the limit (or more likely, your time is the limit)!
More info [here](http://www.nltk.org/book/ch06.html).
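For illustration, here is a minimal NLTK sketch of this approach; the complaint texts and labels below are made up, and a real training set would need far more annotated examples:
```
from nltk.classify import NaiveBayesClassifier

# Hypothetical hand-labeled sample: (complaint text, category)
train = [
    ("The screen cracked after a week", "hardware"),
    ("App crashes whenever I open settings", "software"),
    ("Battery drains within two hours", "hardware"),
    ("The update broke the login page", "software"),
]

def features(text):
    # Bag-of-words feature extractor: which words occur in the complaint
    return {word.lower(): True for word in text.split()}

classifier = NaiveBayesClassifier.train([(features(t), c) for t, c in train])
print(classifier.classify(features("The login page freezes")))  # likely 'software'
```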
| null |
CC BY-SA 4.0
| null |
2014-12-09T21:19:23.747
|
2020-08-17T01:00:02.987
|
2020-08-17T01:00:02.987
|
98307
|
3466
| null |
2661
|
2
| null |
2659
|
3
| null |
K-means does not make an assumption regarding how many observations should be assigned to each cluster. `K` is simply the number of clusters one chooses to generate. During each iteration, each observation is assigned to the cluster having the nearest mean. There is no assumption that all clusters should have a comparable number of observations assigned (i.e., for `N` observations, there is no expectation that each cluster should have ~ `N/K` observations assigned).
It is quite possible that the numbers of observations in the various clusters are highly imbalanced. This can be due to the distribution of the data, the number of clusters chosen (`K`), or even how the cluster means are initialized.
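To see this concretely, here is a small scikit-learn sketch on toy data; even a simple two-blob dataset yields clusters of very different sizes:
```
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.RandomState(0)
# 90 points in one blob, 10 in another: a perfectly valid k-means input
X = np.vstack([rng.normal(0, 1, (90, 2)), rng.normal(8, 1, (10, 2))])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(np.bincount(labels))  # e.g. [90 10] -- no N/K balance is enforced
```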
| null |
CC BY-SA 3.0
| null |
2014-12-09T21:33:31.200
|
2014-12-09T21:33:31.200
| null | null |
964
| null |
2662
|
2
| null |
2659
|
4
| null |
You are mixing up kNN classification and k-means.
There is nothing wrong with having more than k observations near a center in k-means. In fact, this is the usual case; you shouldn't choose k too large. If you have 1 million points, a k of 100 may be okay. K-means does not guarantee clusters of a particular size. Worst case, clusters in k-means can have only one element (outliers) or even disappear.
What you probably meant to write, but got mixed up, is what to do if a point is at the same distance to two centers.
From a statistical point of view, it doesn't matter. Both have the same squared error.
From an implementation point of view, choose any deterministic rule, so that your algorithm converges and doesn't go into an infinite loop of reassignment.
Update: with respect to kNN classification:
There are many ways to resolve this, that will surprisingly often work just as good as the other, without a clear advantage of one over the other:
- randomly choose a winner from the tied objects
- take all into account with equal weighting
- if you have m objects tied at the cutoff distance but only r of the k slots remaining, then put a weight of r/m on each of them.
E.g. k=5.
```
distance label weight
0 A 1
1 B 1
1 A 1
2 A 2/3
2 B 2/3
2 B 2/3
```
yields A=2.66, B=2.33
The reason that randomly choosing works just as good as the others is that usually, the majority decision in kNN will not be changed by contributions with a weight of less than 1; in particular when k is larger than say 10.
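For concreteness, a small sketch of the weighting rule above (a hypothetical helper, not from any library) that reproduces the worked example:
```
from collections import Counter

def knn_vote(neighbors, k):
    """neighbors: list of (distance, label) pairs, sorted by distance.
    Objects tied at the cutoff distance share the remaining weight."""
    votes = Counter()
    taken = 0
    i = 0
    while i < len(neighbors) and taken < k:
        d = neighbors[i][0]
        tied = [label for dist, label in neighbors if dist == d]
        r = k - taken                          # slots still to fill
        w = 1.0 if len(tied) <= r else r / len(tied)
        for label in tied:
            votes[label] += w
        taken += len(tied)
        i += len(tied)
    return votes

# Reproduces the table above (k=5): A gets 2.67, B gets 2.33
print(knn_vote([(0, 'A'), (1, 'B'), (1, 'A'), (2, 'A'), (2, 'B'), (2, 'B')], 5))
```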
| null |
CC BY-SA 3.0
| null |
2014-12-09T22:32:22.660
|
2014-12-10T16:21:24.247
|
2014-12-10T16:21:24.247
|
924
|
924
| null |
2663
|
2
| null |
2631
|
2
| null |
Gaussian mixture modeling can - if your data is nicely gaussian-like - be used for outlier detection. Points with a low density in every cluster are likely to be outliers.
Works well in idealistic scenarios.
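A hedged scikit-learn sketch of the idea (toy data; the 2% density cutoff is an arbitrary choice you'd tune):
```
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.RandomState(0)
X = np.vstack([rng.normal(0, 1, (100, 2)),   # gaussian-like bulk
               [[8.0, 8.0], [-7.0, 9.0]]])   # two obvious outliers

gmm = GaussianMixture(n_components=1, random_state=0).fit(X)
log_density = gmm.score_samples(X)           # per-point log-likelihood

# Flag, say, the lowest 2% of densities as outliers (threshold is a judgment call)
threshold = np.percentile(log_density, 2)
print(np.where(log_density < threshold)[0])  # indices of the flagged points
```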
| null |
CC BY-SA 3.0
| null |
2014-12-09T22:33:38.640
|
2014-12-09T22:33:38.640
| null | null |
924
| null |
2664
|
2
| null |
2654
|
1
| null |
I haven't seen anything like that and very much doubt that such frameworks exist, at least, as complete frameworks. The reason for this is IMHO the fact that data transformation and cleaning is very domain- and project-specific. Having said that, there are multiple tools that can help with these activities in terms of partial automation and integration with and between existing statistical and Web frameworks.
For example, for Python, the use of data manipulation library `pandas` as well as machine learning library `scikit-learn` can be easily integrated with Web frameworks (especially Python-based, but not necessarily), as these libraries are also Python-based. These and other Python data science tools that might be of interest can be found here: [http://pydata.org/downloads](http://pydata.org/downloads). Specifically, for cleaning and pre-processing tasks, which you asked about, `pandas` seem to be the first tool to explore. Again, for Python, the following discussion on StackOverflow on methods and approaches might be helpful: [https://stackoverflow.com/q/14262433/2872891](https://stackoverflow.com/q/14262433/2872891).
Consider another platform as an example. The use of `pandas` for data transformation and cleaning is rather low-level. The platform that I like very much and currently use as my platform of choice for data science tasks is `R`. The rich ecosystem of R packages especially shines in the area of data transformation and cleaning. This is because, in addition to very flexible low-level methods of performing these tasks, there are some R packages which take a higher-level approach to the problem, which may potentially improve developer productivity and decrease the number of defects. In particular, I'm talking about two packages which I find very promising: `editrules` and `deducorrect`. You can find more detailed information about these and other R packages for data transformation and cleaning in another answer of mine here on Data Science StackExchange (the paper that I reference in the last link there could be especially useful, as it presents an approach to data transformation and cleaning that is generic enough that it could be used as a framework on any decent platform): [https://datascience.stackexchange.com/a/722/2452](https://datascience.stackexchange.com/a/722/2452).
UPDATE: On the topic of recommender systems and their integration with data wrangling tools and Web frameworks, you may find my other answer here on DS SE useful: [https://datascience.stackexchange.com/a/836/2452](https://datascience.stackexchange.com/a/836/2452).
| null |
CC BY-SA 3.0
| null |
2014-12-10T01:59:48.700
|
2014-12-10T02:05:33.430
|
2017-05-23T12:38:53.587
|
-1
|
2452
| null |
2667
|
1
| null | null |
6
|
3530
|
How can I connect to Titan database from Python ?
What I understand is that Titan (a graph database) provides an interface (Blueprints) to Cassandra (a column store), and
Bulbs is a Python interface to graph databases.
Now how can I start programming in python to connect with titan DB?
Is there any good documentation/tutorial available ?
|
Python interface to Titan Database
|
CC BY-SA 3.0
| null |
2014-12-10T04:33:31.350
|
2015-05-27T03:38:45.067
|
2014-12-11T09:03:16.463
|
5091
|
5091
|
[
"python",
"databases",
"nosql"
] |
2668
|
1
|
3696
| null |
15
|
596
|
I know that Spark is fully integrated with Scala. Its use case is specifically for large data sets. Which other tools have good Scala support? Is Scala best suited for larger data sets? Or is it also suited for smaller data sets?
|
Data Science Tools Using Scala
|
CC BY-SA 3.0
| null |
2014-12-10T06:37:17.193
|
2021-03-11T20:14:30.817
| null | null |
3466
|
[
"scalability",
"scala"
] |
2669
|
2
| null |
2427
|
0
| null |
It is important to see all the rules if one of the states of the target column is more important than the others. For example, if you are predicting fraudulent transactions, you might want to flag something as fraud even if it has only a 5% probability.
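For example, with a scikit-learn-style classifier this amounts to thresholding `predict_proba` instead of taking the default argmax (toy data, purely illustrative):
```
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.RandomState(0)
X = rng.normal(size=(200, 3))
y = (X[:, 0] + rng.normal(scale=2, size=200) > 3).astype(int)  # rare positive class

model = LogisticRegression().fit(X, y)
proba = model.predict_proba(X)[:, 1]     # column 1 = P(fraud)
flagged = proba > 0.05                   # flag at 5% instead of the 50% default
print(flagged.sum(), "transactions flagged out of", len(y))
```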
| null |
CC BY-SA 3.0
| null |
2014-12-10T07:12:33.640
|
2014-12-12T20:00:23.620
|
2014-12-12T20:00:23.620
|
84
|
5366
| null |
2670
|
1
|
2679
| null |
15
|
4056
|
I'm trying to find an equivalent of Hinton Diagrams for multilayer networks to plot the weights during training.
The trained network is somewhat similar to a Deep SRN, i.e. it has a high number of multiple weight matrices which would make the simultaneous plot of several Hinton Diagrams visually confusing.
Does anyone know of a good way to visualize the weight update process for recurrent networks with multiple layers?
I haven't found many papers on the topic. If I can't come up with something better, I'm thinking of displaying time-related information on the weights per layer instead, e.g. the weight-delta over time for each layer (omitting the use of every single connection). PCA is another possibility, though I'd like to avoid much additional computation, since the visualization is done online during training.
|
Visualizing deep neural network training
|
CC BY-SA 3.0
| null |
2014-12-10T10:15:00.940
|
2020-08-05T14:57:38.660
| null | null |
5316
|
[
"machine-learning",
"neural-network",
"visualization",
"deep-learning"
] |
2671
|
2
| null |
2659
|
2
| null |
[Since you've updated your question to refer to a different algorithm (changed k-means to kNN), I'm adding this as a separate answer specifically for kNN.]
It appears you may still be confusing `kNN` with `k-means`. `kNN` does not have multiple stages of refinement, nor does it require computing centroids. It is a [lazy learner](http://en.wikipedia.org/wiki/Lazy_learning) that classifies a new observation by examining the `k` training observations that are closest to the new observation and picks whichever class is most prevalent among those `k` training observations. Note that the distances are not relative to a cluster centroid. You don't have to worry about there being more than `k` training observations in the neighborhood of the new observation because the neighborhood isn't based on a threshold distance - it is defined simply by the `k` points nearest to the new observation (i.e., it changes for each new observation being evaluated).
A possible pathological case is when multiple training observations lie at exactly the same distance from the new observation, which would require you to evaluate more than `k` neighbors. But you would still simply pick the class that is most prevalent among that group of observations.
| null |
CC BY-SA 3.0
| null |
2014-12-10T13:54:43.770
|
2014-12-10T13:54:43.770
| null | null |
964
| null |
2673
|
1
| null | null |
3
|
1250
|
I'm currently going into the world of machine learning and Neural Networks, thanks to [synaptic (js)](https://github.com/cazala/synaptic) that interests me a lot.
So I read a lot, Wikipedia links and [synaptic's NN 101](https://github.com/cazala/synaptic/wiki/Neural-Networks-101), but there are a lot of basic questions that I don't understand (but I'd like to) about the use of machine learning (NN) and the point of these technologies.
Let's say I want my network to (kind of) learn (something like) gravity, so to train it I use as input 10 objects, each with a mass and a position x, y (and z), and as output the new x, y (and z) of each object.
I guess I should give it several configurations, but here is the question: would it then be able to compute the interactions between 10,000 or 100,000 objects?
At this stage in my learning, what I don't clearly get is the point of teaching/training neurons to compute XOR as shown in [synaptic's documentation](https://github.com/cazala/synaptic#documentation):
```
var trainingSet = [
{
input: [0,0],
output: [0]
},
{
input: [0,1],
output: [1]
},
{
input: [1,0],
output: [1]
},
{
input: [1,1],
output: [0]
},
];
var trainer = new Trainer(myNetwork);
trainer.train(trainingSet);
```
where we just give it all the possible inputs and outputs of XOR.
Well, as I'm new to these technologies, I think my questions may be full of nonsense, but thanks for reading and for any help you might bring :)
|
Training Neural Networks with unknown length of input
|
CC BY-SA 3.0
| null |
2014-12-10T14:18:36.933
|
2022-04-22T13:15:54.290
| null | null |
5369
|
[
"neural-network"
] |
2675
|
2
| null |
2658
|
1
| null |
Sheldon is correct, this sounds like a fairly typical use case for supervised classification. If all of your customer complaints are either software or hardware (i.e., zero individual complaints cover both categories, and zero are outside these two classes), then all you need is a binary classifier, which makes things simpler than they otherwise could be.
If you're looking for a Java-based NLP toolkit that supports something like this, you should check out the [Stanford Classifier](http://nlp.stanford.edu/software/classifier.shtml) -- it's licensed as open source software under the GPL.
[Their wiki page](http://www-nlp.stanford.edu/wiki/Software/Classifier) should help you get started using the classifier -- keep in mind that you'll need to manually annotate a large sample of your data as a training set, as Sheldon mentioned.
| null |
CC BY-SA 4.0
| null |
2014-12-10T15:37:41.853
|
2020-08-17T00:59:48.127
|
2020-08-17T00:59:48.127
|
98307
|
819
| null |
2676
|
2
| null |
2673
|
1
| null |
Gravity
Basically you'd like to find a function that maps an input (an object with a mass and a location) to an output (new location). It's not necessary to have one set of input neurons for each different object. It's sufficient to encode the input variables generically for all objects.
Like a function:
$$f( x, y, z, mass ) \to ( x_n, y_n, z_n )$$
XOR
Training the XOR function is special because the early neural networks, the perceptrons, were unable to learn the XOR function (it is not linearly separable). Multi-layer perceptrons, however, are able to learn the XOR function. This is just to demonstrate that the NN implementation of Synaptic is capable of learning problems that are not linearly separable.
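As a language-neutral aside, the same demonstration in Python with scikit-learn might look like this (a sketch; convergence on such a tiny dataset can depend on the random seed):
```
from sklearn.neural_network import MLPClassifier

X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 1, 1, 0]                          # the XOR truth table

# One hidden layer suffices for XOR; a single-layer perceptron cannot learn it.
mlp = MLPClassifier(hidden_layer_sizes=(4,), solver='lbfgs', random_state=1)
mlp.fit(X, y)
print(mlp.predict(X))                     # expected: [0 1 1 0]
```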
| null |
CC BY-SA 3.0
| null |
2014-12-10T15:41:03.987
|
2014-12-12T23:53:01.607
|
2014-12-12T23:53:01.607
|
84
|
3132
| null |
2677
|
1
|
2678
| null |
8
|
1509
|
I have a document classification project where I am getting site content and then assigning one of numerous labels to the website according to content.
I found out that [tf-idf](http://en.wikipedia.org/wiki/Tf%E2%80%93idf) could be very useful for this. However, I was unsure as to when exactly to use it.
Assuming a website that is concerned with a specific topic makes repeated mention of it, this was my current process:
- Retrieve site content, parse for plain text
- Normalize and stem content
- Tokenize into unigrams (maybe bigrams too)
- Retrieve a count of each unigram for the given document, filtering low length and low occurrence words
- Train a classifier such as NaiveBayes on the resulting set
My question is the following: Where would tf-idf fit in here? Before normalizing/stemming? After normalizing but before tokenizing? After tokenizing?
Any insight would be greatly appreciated.
---
Edit:
Upon closer inspection, I think I may have run into a misunderstanding as to how TF-IDF operates. At step 4 above, would I have to feed the entirety of my data into TF-IDF at once? If, for example, my data is as follows:
```
[({tokenized_content_site1}, category_string_site1),
({tokenized_content_site2}, category_string_site2),
...
({tokenized_content_siten}, category_string_siten)}]
```
Here, the outermost structure is a list, containing tuples, containing a dictionary (or hashmap) and a string.
Would I have to feed the entirety of that data into the TF-IDF calculator at once to achieve the desired effect? Specifically, I have been looking at the [scikit-learn](http://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.TfidfVectorizer.html) TfidfVectorizer to do this, but I am a bit unsure as to its use as examples are pretty sparse.
|
Document classification: tf-idf prior to or after feature filtering?
|
CC BY-SA 3.0
| null |
2014-12-10T16:08:03.537
|
2014-12-11T16:38:53.367
|
2014-12-11T16:38:53.367
|
5199
|
5199
|
[
"classification",
"feature-selection",
"feature-extraction"
] |
2678
|
2
| null |
2677
|
8
| null |
As you've described it, Step 4 is where you want to use TF-IDF. Essentially, TF-IDF will count each term in each document, and assign a score based on each term's relative frequency across the collection of documents.
There's one big step missing from your process, however: annotating a training set. Before you train your classifier, you'll need to manually annotate a sample of your data with the labels you want to be able to apply automatically using the classifier.
To make all of this easier, you might want to consider using the [Stanford Classifier](http://nlp.stanford.edu/software/classifier.shtml). It will perform the feature extraction and build the classifier model (supporting several different machine learning algorithms), but you'll still need to annotate the training data by hand.
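As a sketch of how Step 4 might look with scikit-learn's TfidfVectorizer (the document strings and labels below are placeholders): `fit_transform` must see the whole training collection at once, since IDF is computed across all documents, not per document.
```
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB

# Placeholder pre-processed documents (post normalizing/stemming) and labels
docs = ["python panda zoo trip", "zoo animal panda bear", "stock market crash"]
labels = ["animals", "animals", "finance"]

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(docs)       # fit on the whole collection at once

clf = MultinomialNB().fit(X, labels)

# New documents reuse the fitted vocabulary/IDF via transform(), not fit_transform()
print(clf.predict(vectorizer.transform(["bear panda habitat"])))
```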
| null |
CC BY-SA 3.0
| null |
2014-12-10T17:36:20.550
|
2014-12-10T17:36:20.550
| null | null |
819
| null |
2679
|
2
| null |
2670
|
10
| null |
The closest thing I know of is [ConvNetJS](http://cs.stanford.edu/people/karpathy/convnetjs/):
>
ConvNetJS is a Javascript library for training Deep Learning models (mainly Neural Networks) entirely in your browser. Open a tab and you're training. No software requirements, no compilers, no installations, no GPUs, no sweat.
Demos on this site plot weights and how they change with time (bear in mind, that is many parameters, as practical networks have a lot of neurons). Moreover, if you are not satisfied with their plotting, you have access to the network's parameters and can plot them however you wish (since it is JavaScript).
| null |
CC BY-SA 3.0
| null |
2014-12-10T19:50:17.797
|
2014-12-10T19:50:17.797
| null | null |
289
| null |
2682
|
1
|
4941
| null |
4
|
5867
|
I have used Neo4J to implement a content recommendation engine. I like Cypher, and find graph databases to be intuitive.
Looking at scaling to a larger data set, I am not confident Neo4J + Cypher will be performant. Spark has the GraphX project, which I have not used in the past.
Has anybody switched from Neo4J to Spark GraphX? Do the use cases overlap, aside from scalability? Or, does GraphX address a completely different problem set than Neo4J?
|
Use Cases of Neo4J and Spark GraphX
|
CC BY-SA 3.0
| null |
2014-12-10T22:47:49.880
|
2015-02-02T07:31:51.050
| null | null |
3466
|
[
"scalability",
"graphs",
"neo4j"
] |
3685
|
2
| null |
2670
|
5
| null |
Based on my cursory understanding of the topics associated with your question, I think that [Gephi](https://gephi.github.io) (the original gephi.org link redirects there) should be able to handle dynamic neural network visualization. It seems that, in order to achieve your goal, you need to [stream](https://forum.gephi.org/viewtopic.php?t=1875) your graph(s) with the corresponding weights. For streaming, you will most likely need this [plug-in](https://marketplace.gephi.org/plugin/graph-streaming).
UPDATE: You may also find [SoNIA software](http://web.stanford.edu/group/sonia/) useful.
| null |
CC BY-SA 4.0
| null |
2014-12-11T07:08:00.540
|
2020-08-05T14:57:38.660
|
2020-08-05T14:57:38.660
|
98307
|
2452
| null |
3687
|
1
| null | null |
1
|
758
|
I am trying to connect with cassandra in R using [RJDBC](http://cran.r-project.org/web/packages/RJDBC/index.html). When I execute
```
casscon <- dbConnect(cassdrv, "jdbc:cassandra://ipaddrs:9160/demodb")
```
I am getting
```
Error in .jcall(drv@jdrv, "Ljava/sql/Connection;", "connect", as.character(url)[1],
: java.lang.StringIndexOutOfBoundsException: String index out of range: -1
```
I can't figure out the problem. I need a solution for this.
|
R-Cassandra connection with RJDBC
|
CC BY-SA 3.0
| null |
2014-12-11T13:37:29.430
|
2014-12-12T04:13:53.157
|
2014-12-12T04:13:53.157
|
2961
|
5161
|
[
"r",
"databases"
] |
3688
|
1
| null | null |
10
|
1437
|
I have a corpus of text with corresponding topics. For example `"A rapper Tupac was shot in LA"` was labelled as `["celebrity", "murder"]`. So basically each feature vector can have many labels (not necessarily the same number: the first feature vector can have 3 labels, the second 1, the third 5).
If I would have just one label corresponded to each text, I would try a [Naive Bayes classifier](http://en.wikipedia.org/wiki/Naive_Bayes_classifier), but I do not really know how should I proceed if I can have many labels.
Is there any way to transform Naive Bayes into multi label classification problem (if there is a better approach - please let me know)?
P.S. few things about the data I have.
- approximately 10.000 elements in the dataset
- text is approximately 2-3 sentences
- maximum 7 labels per text
|
Multiple labels in supervised learning algorithm
|
CC BY-SA 3.0
| null |
2014-12-11T19:23:55.907
|
2014-12-12T00:13:10.680
| null | null |
6387
|
[
"machine-learning",
"text-mining"
] |
3689
|
2
| null |
2668
|
4
| null |
From listening to presentations by Martin Odersky, the creator of Scala, it is especially well suited for building highly scalable systems by leveraging functional programming constructs in conjunction with object orientation and flexible syntax. It is also useful for development of small systems and rapid prototyping because it takes fewer lines of code than some other languages and it has an interactive mode for fast feedback. One notable Scala framework is Akka, which uses the actor model of concurrent computation. Many of Odersky's presentations are on YouTube and there is a list of tools implemented with Scala on wiki.scala-lang.org.
An implicit point is that tools and frameworks written in Scala inherently have Scala integration and usually a Scala API. Then other APIs may be added to support other languages, beginning with Java, since Scala is already integrated with and in fact critically depends on Java. If a tool or framework is not written in Scala, it is unlikely to offer any support for Scala. That is why, in answer to your question, I have pointed towards tools and frameworks written in Scala, and Spark is one example. Scala currently has a minor share of the market, but its adoption rate is growing, and the high growth rate of Spark will enhance that. The reason I use Scala is that Spark's API for Scala is richer than the Java and Python APIs.
The main reason I prefer Scala generally is that it is much more expressive than Java, because it allows and facilitates the use of functions as objects and values while retaining object-oriented modularity. This enables the development of complex and correct programs with far less code than Java, which I had previously preferred because of its widespread use, clarity and excellent documentation.
| null |
CC BY-SA 3.0
| null |
2014-12-11T20:27:27.847
|
2014-12-24T20:10:01.487
|
2014-12-24T20:10:01.487
|
5280
|
5280
| null |
3690
|
1
|
3703
| null |
0
|
1424
|
I'm wondering if it's possible to export a model trained in R to OpenCV's Machine Learning (ML) library format. The latter appears to save/read models in [XML/YAML](http://docs.opencv.org/modules/ml/doc/statistical_models.html#cvstatmodel-load), whereas the former might be exportable via [PMML](http://cran.r-project.org/web/packages/pmml/index.html). Specifically, I'm working with Random Forests, which are classifiers available both in R and OpenCV's ML library.
Any advice on how I can get the two to share models would be greatly appreciated.
|
Exporting R model to OpenCV's Machine Learning Library
|
CC BY-SA 3.0
| null |
2014-12-11T23:56:03.023
|
2014-12-14T03:42:25.213
| null | null |
6390
|
[
"machine-learning",
"r",
"open-source"
] |
3691
|
2
| null |
3688
|
6
| null |
For starters, Naive Bayes is probably not appropriate here. It assumes independence among the inputs (hence the "Naive") and words in a sentence are very dependent.
But, assuming you really want to run with NB as an algorithm to start your experimentation, there are two options I'd consider:
## Ungraceful: Lots of NB classifiers
This would be an alternative approach. Make a corpus of all the words observed as your vector of inputs. Make a corpus of all the tags that are observed as your vector of outputs. An NB classifier with multiple outputs is the equivalent of having multiple NB classifiers with one output each (so do whichever is easier to implement in whatever software framework you're using). Treat each element as a training sample where a given input (a word) is a `1` if that word is present and a `0` if that word isn't. Use the same binary scheme for the output.
This brute-forces the application of the NB classifier to your data, and leaves you to find meaning by still having to mine the huge set of classifiers you'll be left with.
## More Graceful: Process your data
This is the approach I'd recommend if you want to run with one multiple-class NB Classifier.
Your goal here is to figure out how to map each set of tags to a single class. I'm sure there is some sort of clustering scheme or network analysis (perhaps ["celebrity"] linked to ["murder"] could become the segment ["debauchery"]) that will sensibly map your tags to a single cluster. If you treat tags as nodes and two given tags appearing together as links, then you'll want to look into community detection algorithms (which is where I'd start). But, if you just want something working, then some sort of hack on the tags that converts a list of tags to the tag most commonly seen in your dataset would be enough.
This method front-loads the work of cleaning your data and would make the NB Classifier's output easier to understand.
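For what it's worth, the "lots of NB classifiers" option maps directly onto scikit-learn's one-vs-rest wrapper; a toy sketch (two documents only, so the fitted model is purely illustrative):
```
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.multiclass import OneVsRestClassifier
from sklearn.naive_bayes import MultinomialNB
from sklearn.preprocessing import MultiLabelBinarizer

texts = ["A rapper Tupac was shot in LA",
         "Stock markets fell sharply today"]
tags = [["celebrity", "murder"], ["finance"]]

vec = CountVectorizer(binary=True)       # 0/1 word-presence inputs, as above
X = vec.fit_transform(texts)

mlb = MultiLabelBinarizer()              # 0/1 tag-presence outputs
Y = mlb.fit_transform(tags)

# One binary NB classifier per tag, wrapped into a single estimator
clf = OneVsRestClassifier(MultinomialNB()).fit(X, Y)
print(mlb.inverse_transform(clf.predict(vec.transform(["Tupac seen in LA"]))))
```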
| null |
CC BY-SA 3.0
| null |
2014-12-12T00:13:10.680
|
2014-12-12T00:13:10.680
| null | null |
6391
| null |
3692
|
2
| null |
2631
|
2
| null |
- Apply your clustering algorithm
- Calculate distance from all data points to its assigned cluster
- Label the data points furthest from a center as an outlier
Randomly generating 100 data points from three gaussians, clustering them with k-means, and marking the 10 'furthest from a center' data points gave the following graph:

see [this notebook](http://nbviewer.ipython.org/github/TheGrimmScientist/Snippets/blob/master/OutlierDetectionWithClustering/OutlierDetectionWithClustering.ipynb) for the full example
The burden of deciding what "distance" means will already have been dealt with in order to run a clustering algorithm. It will still be up to you to decide what distance makes a point an outlier. In this example, I just picked the N most distant data points, though you'll probably want to flag any data points more than a certain number of standard deviations from a center.
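The three steps compress to a few lines with scikit-learn (a sketch under the same assumptions as the notebook: k-means clustering and an N-furthest rule):
```
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.RandomState(0)
X = np.vstack([rng.normal(c, 1, (33, 2)) for c in (0, 5, 10)])  # three gaussians

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
# Step 2: distance from every point to its assigned center
dists = np.linalg.norm(X - km.cluster_centers_[km.labels_], axis=1)
# Step 3: label the N furthest points as outliers
outliers = np.argsort(dists)[-10:]
print(outliers)
```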
| null |
CC BY-SA 3.0
| null |
2014-12-12T01:30:26.723
|
2014-12-12T01:30:26.723
| null | null |
6391
| null |
3693
|
1
|
3701
| null |
8
|
2905
|
I'm working on a project which asks fellow students to share their original text data for further analysis using data mining techniques, and, I think it would be appropriate to anonymize student names with their submissions.
Setting aside the better solution of a URL where students submit their work and a backend script inserts the anonymized ID, what sort of solutions could I direct students to implement on their own to anonymize their own names?
I'm still a noob in this area and I don't know what the norms are. I was thinking the solution could be a hashing algorithm. That sounds like a better solution than making up a fake name, as two people could pick the same fake name. What are some of the concerns I should be aware of?
|
What are the best practices to anonymize user names in data?
|
CC BY-SA 3.0
| null |
2014-12-12T03:00:57.507
|
2014-12-13T21:47:49.987
| null | null |
2742
|
[
"machine-learning",
"data-cleaning"
] |
3694
|
2
| null |
3693
|
2
| null |
A standard practice in psychology (where you want to code participants in order to link different measurements together) is to have participants choose their mother's maiden name initials and birthdate, e.g., in the format XX-YYMMDD.
This of course can still run into conflicts. Then again, I don't think there is any surefire conflict-free anonymization algorithm your students could apply without knowing all the other students. Mothers' names and birthdates could be identical, own birthdates could be identical, shoe sizes could be, favorite superhero characters... The only thing I could think of would be (US) Social Security numbers, but you [really don't want to use them](http://www.ssa.gov/pubs/EN-05-10064.pdf).
Bottom line: anonymize on the backend. Or, as [@Emre suggests](https://datascience.stackexchange.com/questions/3693/what-are-the-best-practices-to-anonymize-user-names-in-data#comment3919_3693), think about whether you really need an identifier at all. Maybe the DB-generated index is enough?
| null |
CC BY-SA 3.0
| null |
2014-12-12T09:02:16.350
|
2014-12-12T09:02:16.350
|
2017-04-13T12:50:41.230
|
-1
|
2853
| null |
3696
|
2
| null |
2668
|
11
| null |
## Re: size of data
### The short answer
Scala works for both small and large data, but its creation and development is motivated by needing something scalable. [Scala is an acronym for “Scalable Language”](http://www.scala-lang.org/what-is-scala.html).
### The long answer
Scala is a [functional programming language](http://en.wikipedia.org/wiki/Functional_programming) that runs on the [jvm](http://en.wikipedia.org/wiki/Java_virtual_machine). The 'functional' part of this is a fundamental difference in the language that makes you think differently about programming. If you like that way of thinking, it lets you quickly work with small data. Whether you like it or not, functional languages are fundamentally easier to massively scale. The jvm piece is also important because the jvm is basically everywhere and, thus, Scala code can run basically everywhere. (Note there are plenty of other [languages written on the jvm](http://en.wikipedia.org/wiki/List_of_JVM_languages) and plenty of other [functional programming languages](http://en.wikipedia.org/wiki/List_of_programming_languages_by_type#Functional_languages), and languages beyond Scala do appear in both lists.)
[This talk](https://www.youtube.com/watch?v=3jg1AheF4n0) give a good overview of the motivation behind Scala.
## Re: other tools that have good Scala support:
As you mentioned, Spark (distributable batch processing better at iteratative algorithms than its counterpart) is a big one. With Spark comes its libraries [Mllib](https://spark.apache.org/mllib/) for machine learning and [GraphX](https://spark.apache.org/graphx/) for graphs. As mentioned by Erik Allik and Tris Nefzger, [Akka](http://akka.io) and [Factorie](https://github.com/factorie/factorie) exist. There is also [Play](https://www.playframework.com/).
Generally, I can't tell if there is a specific use case you're digging for (if so, make that a part of your question), or just want a survey of big data tools and happen to know Scala a bit and want to start there.
| null |
CC BY-SA 3.0
| null |
2014-12-12T22:39:58.470
|
2014-12-12T22:39:58.470
| null | null |
6391
| null |
3697
|
2
| null |
19
|
11
| null |
Data becomes "big" when a single [commodity computer](http://en.wikipedia.org/wiki/Commodity_computing) can no longer handle the amount of data you have. It denotes the point at which you need to start thinking about building supercomputers or using clusters to process your data.
| null |
CC BY-SA 3.0
| null |
2014-12-12T22:48:41.970
|
2014-12-12T22:48:41.970
| null | null |
6391
| null |
3698
|
2
| null |
2631
|
4
| null |
If your data points are dense and the noise points lie away from the dense regions, you can try the [DBSCAN algorithm](http://en.wikipedia.org/wiki/DBSCAN).

Tweak its parameters until you get the best fit.
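A minimal scikit-learn sketch on toy data (`eps` and `min_samples` are the parameters to tweak); points labeled -1 are the noise/outliers:
```
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.RandomState(0)
X = np.vstack([rng.normal(0, 0.3, (50, 2)),   # one dense region
               rng.normal(4, 0.3, (50, 2)),   # another dense region
               [[2.0, 2.0], [-3.0, 5.0]]])    # isolated noise points

labels = DBSCAN(eps=0.5, min_samples=5).fit_predict(X)
print(np.where(labels == -1)[0])              # indices DBSCAN marks as noise
```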
| null |
CC BY-SA 4.0
| null |
2014-12-13T05:28:04.013
|
2020-08-17T18:55:17.380
|
2020-08-17T18:55:17.380
|
98307
|
6412
| null |
3699
|
1
| null | null |
10
|
8100
|
For an imbalanced data set, is it better to choose an L1 or L2 regularization?
Is there a cost function more suitable for imbalanced datasets to improve the model score (`log_loss` in particular)?
|
What cost function and penalty are suitable for imbalanced datasets?
|
CC BY-SA 4.0
| null |
2014-12-13T17:42:54.927
|
2020-08-13T11:46:33.743
|
2019-06-07T17:12:43.867
|
29169
|
6418
|
[
"machine-learning",
"classification",
"deep-learning",
"scikit-learn",
"logistic-regression"
] |
3700
|
1
|
3725
| null |
0
|
135
|
In [his thesis](http://web.cse.ohio-state.edu/~mbelkin/papers/PLM_UCTHESIS_03.pdf) (section 2.3.3) Belkin uses the heat equation to derive an approximation for $\mathcal{L}f$:
$$\mathcal{L}f(x_i)\approx \frac{1}{t}\Big(f(x_i)-\alpha \sum_{x_j, ||x_i-x_j||<\epsilon}e^{-\frac{||x_i-x_j||^2}{4t}}f(x_j)\Big)$$
where $$\alpha=\Big(\sum_{x_j, ||x_i-x_j||<\epsilon}e^{-\frac{||x_i-x_j||^2}{4t}}\Big)^{-1}$$.
However, I'm not sure how these considerations lead to this choice of weights for the weight matrix (which will be used to construct the Laplacian):
$$W_{ij} =
\begin{cases}
e^{-\frac{||x_i-x_j||^2}{4t}} & if\ ||x_i-x_j||<\epsilon \\
0 & otherwise
\end{cases}$$
A very vague idea of mine was that the factors $\alpha$ and $\frac{1}{t}$ don't change for a given $x_i$, so if one chooses the weights as above, the resulting discrete Laplacian would (leaving aside those two constants) converge to the continuous version.
Any ideas or tips on what I'd have to read up on in order to get a better understanding?
|
Choice of weights for the Laplacian Eigenmaps algorithm
|
CC BY-SA 3.0
| null |
2014-12-13T19:27:56.213
|
2014-12-19T01:20:39.907
| null | null |
6415
|
[
"machine-learning"
] |
3701
|
2
| null |
3693
|
5
| null |
I suspected you were using the names as identifiers. You shouldn't; they're not unique and they raise this privacy issue. Use instead their student numbers, which you can verify from their IDs, stored in hashed form. Use the student's last name as a salt, for good measure (form the string to be hashed by concatenating the ID number and the last name).
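A minimal Python sketch of this scheme (the field values below are made up):
```
import hashlib

def anon_id(student_number: str, last_name: str) -> str:
    """Salted hash: concatenate the ID number and the last name, then hash."""
    return hashlib.sha256((student_number + last_name).encode("utf-8")).hexdigest()

print(anon_id("20141234", "Garcia"))  # a stable, hard-to-reverse identifier
```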
| null |
CC BY-SA 3.0
| null |
2014-12-13T20:37:07.843
|
2014-12-13T21:47:49.987
|
2014-12-13T21:47:49.987
|
381
|
381
| null |
3702
|
1
|
3713
| null |
4
|
1339
|
Is the k-nearest neighbour algorithm a discriminative or a generative classifier? My first thought on this was that it was generative, because it actually uses Bayes' theorem to compute the posterior. Searching further, it seems like it is a discriminative model. But I couldn't find the explanation.
So, first of all, is kNN discriminative? And if it is, is that because it doesn't model the priors or the likelihood?
|
K nearest neighbour
|
CC BY-SA 4.0
| null |
2014-12-13T23:08:53.930
|
2022-02-28T19:28:12.320
|
2022-02-28T19:28:12.320
|
132929
|
6419
|
[
"classification"
] |
3703
|
2
| null |
3690
|
1
| null |
Instead of exporting your models, consider creating an R-based interoperable environment for your modeling needs. Such an environment would consist of the R environment proper as well as integration layers for your third-party libraries. In particular, for the OpenCV project, consider either using the `r-opencv` open source project ([https://code.google.com/p/r-opencv](https://code.google.com/p/r-opencv)), or integration via the OpenCV C++ APIs and the R `Rcpp` package ([http://dirk.eddelbuettel.com/code/rcpp.html](http://dirk.eddelbuettel.com/code/rcpp.html)). Finally, if you want to add PMML support to the mix and create a deployable-to-cloud solution, take a look at the following excellent blog post with relevant examples: [http://things-about-r.tumblr.com/post/37861967022/predictive-modeling-using-r-and-the](http://things-about-r.tumblr.com/post/37861967022/predictive-modeling-using-r-and-the).
| null |
CC BY-SA 3.0
| null |
2014-12-14T03:42:25.213
|
2014-12-14T03:42:25.213
| null | null |
2452
| null |
3705
|
1
| null | null |
3
|
2459
|
I'm trying to get to grips with sci-kit learn for some simple machine learning projects but I'm coming unstuck with Pipelines and wonder what I've done wrong...
I'm trying to work through a [tutorial on Kaggle](http://www.kaggle.com/c/data-science-london-scikit-learn/data)
Here's my code:
```
import pandas as pd
train = pd.read_csv(local path to training data)
train_labels = pd.read_csv(local path to labels)
from numpy import arange  # needed for the arange() calls below
from sklearn.decomposition import PCA
from sklearn.svm import LinearSVC
from sklearn.grid_search import GridSearchCV
pca = PCA()
clf = LinearSVC()
n_components = arange(1, 39)
loss = ['l1','l2']
penalty = ['l1','l2']
C = arange(0, 1, .1)
whiten = [True, False]
from sklearn.pipeline import Pipeline
#set up pipeline
pipe = Pipeline(steps=[('pca', pca), ('clf', clf)])
#set up GridsearchCV
estimator = GridSearchCV(pipe, dict(pca__n_components = n_components, pca__whiten = whiten, clf__loss = loss, clf__penalty = penalty, clf__C = C))
> estimator
```
Returns:
```
GridSearchCV(cv=None,
estimator=Pipeline(steps=[('pca', PCA(copy=True, n_components=None, whiten=False)), ('clf', LinearSVC(C=1.0, class_weight=None, dual=True, fit_intercept=True,
intercept_scaling=1, loss='l2', multi_class='ovr', penalty='l2',
random_state=None, tol=0.0001, verbose=0))]),
fit_params={}, iid=True, loss_func=None, n_jobs=1,
param_grid={'clf__penalty': ['l1', 'l2'], 'clf__loss': ['l1', 'l2'], 'clf__C': array([ 0. , 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]), 'pca__n_components': array([ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17,
18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34,
35, 36, 37, 38]), 'pca__whiten': [True, False]},
pre_dispatch='2*n_jobs', refit=True, score_func=None, scoring=None,
verbose=0)
```
But when I try to train the data:
```
estimator.fit(train, train_labels)
```
the error is:
```
    428         for test_fold_idx, per_label_splits in enumerate(zip(*per_label_cvs)):
    429             for label, (_, test_split) in zip(unique_labels, per_label_splits):
--> 430                 label_test_folds = test_folds[y == label]
    431                 # the test split can be too big because we used
    432                 # KFold(max(c, self.n_folds), self.n_folds) instead of
IndexError: too many indices for array
```
|
Sci-kit Pipeline and GridsearchCV returns indexError: too many indices for array
|
CC BY-SA 3.0
| null |
2014-12-16T01:19:12.477
|
2015-06-20T19:58:59.403
|
2015-05-21T19:28:55.737
|
9433
|
974
|
[
"scikit-learn"
] |
3706
|
2
| null |
3705
|
1
| null |
It turns out that the Pandas dataframe is the wrong shape.
```
estimator.fit(train.values, train_labels[0].values)
```
works, although I also had to drop the penalty term.
| null |
CC BY-SA 3.0
| null |
2014-12-16T02:44:28.950
|
2014-12-16T02:44:28.950
| null | null |
974
| null |
3707
|
2
| null |
421
|
2
| null |
There are a lot of online tutorials, especially on YouTube. If you want more rigorous material, have a look at [http://ttic.uchicago.edu/~shai/icml08tutorial/](http://ttic.uchicago.edu/~shai/icml08tutorial/) or [http://cs229.stanford.edu/materials.html](http://cs229.stanford.edu/materials.html).
| null |
CC BY-SA 3.0
| null |
2014-12-16T06:12:35.423
|
2014-12-16T06:12:35.423
| null | null |
6444
| null |