Columns: Id (string, 1–6 chars); PostTypeId (string, 6 distinct values); AcceptedAnswerId (string, 2–6 chars); ParentId (string, 1–6 chars); Score (string, 1–3 chars); ViewCount (string, 1–6 chars); Body (string, 0–32.5k chars); Title (string, 15–150 chars); ContentLicense (string, 2 distinct values); FavoriteCount (string, 2 distinct values); CreationDate (string, 23 chars); LastActivityDate (string, 23 chars); LastEditDate (string, 23 chars); LastEditorUserId (string, 1–6 chars); OwnerUserId (string, 1–6 chars); Tags (list)
2476
2
null
2469
1
null
Here are some things to try. - Plot a bar graph. The bar graph will clearly show that jobless people often choose NO. Try a 1-way ANOVA test. If p < delta (e.g. delta = 0.05), try a post-hoc test (e.g. Tukey's HSD) to do a pairwise comparison. - Like I said earlier, try a multiple comparison test (1-way ANOVA) first; if there is a statistically significant difference, you can try a pairwise comparison test (post-hoc test). - Maybe try a clustering algorithm? Be careful, because the marginal sums (by rows or columns) are not equal. Maybe create a similarity matrix by profession? To me, it seems that Employees and Businessmen are in one group (very similar), while Workers and Jobless are each in their own group. If you turn those frequencies into proportions, then you might just have 2 groups: one for employees + workers + businessmen, and one for jobless. - Use contingency table analysis to see if the responses (yes/no/don't know) are associated with profession.
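A minimal R sketch of the contingency-table test and the ANOVA + Tukey HSD steps suggested above, using made-up data (the professions and answer proportions are placeholders, not the asker's numbers):

```r
# Made-up example data: one row per respondent
set.seed(1)
df <- data.frame(
  profession = factor(rep(c("Employee", "Businessman", "Worker", "Jobless"), each = 50)),
  answer     = factor(sample(c("yes", "no", "dont_know"), 200, replace = TRUE))
)

# Contingency-table analysis: is the answer associated with profession?
tab <- table(df$profession, df$answer)
chisq.test(tab)

# One-way ANOVA on a 0/1 coding of "yes", followed by Tukey's HSD post-hoc test
df$said_yes <- as.numeric(df$answer == "yes")
fit <- aov(said_yes ~ profession, data = df)
summary(fit)
TukeyHSD(fit)
```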
null
CC BY-SA 3.0
null
2014-11-15T01:26:16.583
2014-11-15T01:26:16.583
null
null
3083
null
2477
2
null
2451
1
null
How about - Ordinary Least Squares (OLS) regression? Since you have a class imbalance, you might want to combine it with boosting algorithms. - If you have a function to quantify the cost involved with FPs and FNs, use any optimization technique you can find. My favorite is genetic algorithms. You may also try linear programming.
null
CC BY-SA 3.0
null
2014-11-15T01:30:00.870
2014-11-15T01:30:00.870
null
null
3083
null
2478
2
null
1075
1
null
Data volume is not the only criterion for using Hadoop. Big Data is often characterized by the 3 V's: - volume, - velocity, and - variety. More V's than these 3 have been invented since. I suppose the V's were a catchy way to characterize what Big Data is. But as hinted, computational intensity is a perfect reason for using Hadoop (if your algorithm is computationally expensive). And, as hinted, the problem you describe is perfect for Hadoop, especially since it is embarrassingly parallel in nature. Is Hadoop a good choice for you? I would argue, yes. Why? Because - Hadoop is open source (compared with proprietary systems, which may be expensive and black boxes), - your problem lends itself well to the MapReduce paradigm (embarrassingly parallel, shared-nothing), - Hadoop is easily scalable with commodity hardware (as opposed to specialized hardware): you should get linear speed-up by just throwing hardware at the problem, and you can spin up a cluster as needed on cloud service providers, - Hadoop allows multiple client languages (Java is only one of many supported languages), - there might already be a library available to do your cross-product operation, and - you're shipping compute code, not data, around the network (which you should benefit from, as opposed to other distributed platforms where you are shipping data to compute nodes, which is the bottleneck). Please note, Hadoop is not a distributed file system (as mentioned, and corrected already). Hadoop is a distributed storage and processing platform. The distributed storage component of Hadoop is called the Hadoop Distributed File System (HDFS), and the distributed processing component is called MapReduce. Hadoop has now evolved slightly. They keep the HDFS part for distributed storage, but they have a new component called YARN (Yet Another Resource Negotiator), which serves to allocate resources (CPU, RAM) for any compute task (including MapReduce). On the "overhead" part, there is noticeable overhead with starting/stopping a Java Virtual Machine (JVM) per task (map tasks, reduce tasks). You can specify for your MapReduce jobs to reuse JVMs to mitigate this issue. If "overhead" is really an issue, look into Apache Spark, which is part of the Hadoop ecosystem and is orders of magnitude faster than MapReduce, especially for iterative algorithms. I have used Hadoop to compute pairwise comparisons (e.g. correlation matrix, similarity matrix) that are O(N^2) (n choose 2) in worst-case running time complexity. Imagine computing the correlations between 16,000 variables (16,000 choose 2); Hadoop can easily process and store the results if you have the commodity resources to support the cluster. I did this using the preeminent cloud service provider (I won't name it, but you can surely guess who it is), and it cost me < $100 and took under 18 hours.
null
CC BY-SA 3.0
null
2014-11-15T03:18:13.223
2014-11-15T03:18:13.223
null
null
3083
null
2479
2
null
2473
1
null
You can try Bayesian belief networks (BBNs). BBNs can easily handle categorical variables and give you a picture of the multivariable interactions. Furthermore, you may use sensitivity analysis to observe how each variable influences your class variable. Once you learn the structure of the BBN, you can identify the Markov blanket of the class variable. The variables in the Markov blanket of the class variable are a subset of all the variables, and you may use optimization techniques to see which combination of values in this Markov blanket maximizes your class prediction.
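A minimal sketch of the structure-learning and Markov-blanket steps, assuming the R package bnlearn; the learning.test data set used below ships with that package and is only a stand-in for the asker's data:

```r
library(bnlearn)
data(learning.test)            # small all-categorical example data set

dag <- hc(learning.test)       # learn the BBN structure via hill climbing
plot(dag)

mb(dag, node = "A")            # Markov blanket of the variable of interest, here "A"
```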
null
CC BY-SA 3.0
null
2014-11-15T09:34:47.653
2014-11-15T09:40:33.473
2014-11-15T09:40:33.473
3083
3083
null
2480
1
null
null
2
288
I am new to D3 programming (any programming, for that matter). I have protein-protein interaction data in JSON and CSV format. I would like to use that data for network visualization. Data attributes: Protein Name, Protein Group, Protein Type, Protein Source Node, Protein Target Node. Can anyone suggest good network visualizations for such data? How would it work with hive plots?
Visualization using D3
CC BY-SA 3.0
null
2014-11-15T15:49:07.213
2016-10-16T06:21:39.837
null
null
2647
[ "visualization", "javascript" ]
2482
2
null
2480
3
null
Why don't you have a look at the following example? [http://bl.ocks.org/mbostock/2066421](http://bl.ocks.org/mbostock/2066421) You can also find a fiddle here: [http://jsfiddle.net/boatrokr/rk2s5/](http://jsfiddle.net/boatrokr/rk2s5/) Pay attention to the part where the links are defined.
null
CC BY-SA 3.0
null
2014-11-16T01:05:02.603
2014-11-16T01:05:02.603
null
null
5041
null
2483
2
null
2474
2
null
If you have R, it is quite simple: - Copy the lines into a file, let's say "mydata.json". - Be sure you have installed the rjson package: install.packages("rjson") - Import your data: library("rjson") json_data <- fromJSON(file = "mydata.json")
null
CC BY-SA 3.0
null
2014-11-16T01:13:21.363
2014-11-16T01:13:21.363
null
null
5041
null
2486
1
2496
null
3
6403
I was wondering if anyone was aware of any methods for visualizing an SVM model where there are more than three continuous explanatory variables. In my particular situation, my response variable is binomial, with 6 continuous explanatory variables (predictors) and one categorical explanatory variable (predictor). I have already reduced the number of predictors, and I am primarily using R for my analysis. (I am unaware if such a task is possible or worth pursuing.) Thanks for your time.
Visualizing Support Vector Machines (SVM) with Multiple Explanatory Variables
CC BY-SA 3.0
null
2014-11-16T21:03:09.037
2014-11-21T18:55:07.860
2014-11-21T10:39:38.337
847
5023
[ "machine-learning", "classification", "r", "visualization", "svm" ]
2487
1
null
null
1
94
This is my first ever Stack Exchange question. I'm trying to build a tool right now, and one of its features is the ability to break down a product or service into its associated attributes/properties/classes/keywords/entities (choose whichever word best suits, as I have no idea). For example, if we had a Camera as the product, I would like to be able to generate a breakdown of everything that is associated with a camera, such as: Digital, Film, Optical, LCD, Glass, CCD, CMOS, RGB, Lens, Shutter, Negative, Polaroid, Darkroom, Flash, Resolution, Stabilisation, Batteries, Zoom, Angle, Telephoto, Macro, Filters, Memory, CF, SD. The list could go on for quite some time; those were just a few off the top of my head. How on earth could I go about retrieving such attributes automatically? Is there a database out there that has such info? Are there any special tricks anyone has up their sleeve to accumulate datasets such as the example above? Very interested in your answers. Thanks :)
Ontology database
CC BY-SA 3.0
null
2014-11-17T11:54:39.933
2014-11-18T01:26:51.900
null
null
5066
[ "data-mining" ]
2488
2
null
2487
1
null
It seems to me as if a good starting point would be to read up on the semantic web, perhaps starting with [DBpedia](http://dbpedia.org/About) and maybe LinkedData. You could go from there and build up your own database. Example of a [SPARQL query](http://dbpedia.org/snorql/?query=%0D%0Aselect%20%3Flabel%20where%20%7B%0D%0A%20%20%23%3ACamera%20dbpedia-owl%3Aabstract%20%3Fabstract.%0D%0A%20%20%23FILTER%20langMatches%28%20lang%28%3Fabstract%29%2C%20%27en%27%29.%0D%0A%20%20%23%3Fprod%20dbpedia2%3Aproducts%20%3Frelated%20.%0D%0A%20%20%3Fprod%20dbpedia-owl%3Aproduct%20%3ACamera%20.%0D%0A%20%20%3Fprod%20dcterms%3Asubject%20%3Fcategories%20.%0D%0A%20%20%3Fentity%20dcterms%3Asubject%20%3Fcategories.%0D%0A%20%20%3Fentity%20rdf%3Atype%20yago%3APhysicalEntity100001930%20.%0D%0A%20%20%3Fentity%20rdfs%3Alabel%20%3Flabel%20.%0D%0A%20%20filter%20langMatches%28%20lang%28%3Flabel%29%2C%20%27en%27%29.%0D%0A%7D) starting with the DBpedia page of 'Camera': ``` select ?label where { ?prod dbpedia-owl:product :Camera . ?prod dcterms:subject ?categories . ?entity dcterms:subject ?categories. ?entity rdf:type yago:PhysicalEntity100001930 . ?entity rdfs:label ?label . filter langMatches( lang(?label), 'en'). } ``` Generating a lot of words somehow related to 'Camera'. ``` ... "Shutter button"@en "Rangefinder camera"@en "Still camera"@en "Lomo LC-A"@en "Flexaret"@en "Land Camera"@en "Robot (camera)"@en "Speed Graphic"@en "Ansco Panda"@en "Image trigger"@en "Still video camera"@en "Hidden camera"@en "Mainichi Shimbun"@en "Ōhiradai Station"@en "Depth-of-field adapter"@en "Banquet camera"@en "Digital versus film photography"@en "Fernseh"@en "Remote camera"@en "Professional video camera"@en .... ``` The above result is just an excerpt.
null
CC BY-SA 3.0
null
2014-11-18T01:26:51.900
2014-11-18T01:26:51.900
null
null
5073
null
2489
1
null
null
4
137
The questionnaire for the data is [here](http://www.cc.gatech.edu/gvu/user_surveys/survey-1997-10/questions/general.html). The first question allows multiple entries for the same question, and I want to reduce this to a single variable. How do I do it? The clean data is available [here](http://wikisend.com/download/586046/DataRaw.arff). NB: the column CompuPlat has missing values. Part of the dataset: `CMFam CMHobb CMNone CMOther CMPol CMProf CMRel 0 0 1 0 0 0 0 0 0 0 0 0 0 0 1 1 0 0 0 1 0 0 0 0 1 0 0 0 0 0 0 0 1 1 0 1 0 0 0 0 1 1 Community Membership_Family Community Membership_Hobbies Community Membership_None Community Membership_Other Community Membership_Political Community Membership_Professional Community Membership_Religious Community Membership_Support ` I want to combine all of them into a single variable CM.
Reduction of multiple answers to single variable
CC BY-SA 3.0
null
2014-11-18T09:07:00.867
2015-05-19T14:14:42.537
2015-05-19T14:14:42.537
8953
5075
[ "dataset", "dimensionality-reduction" ]
2490
2
null
2463
1
null
Your dataset can be viewed as a directed graph. Each party's location (latitude and longitude) can be denoted as a node, and a directed edge can denote who referred whom. Once the dataset is viewed this way, the problem boils down to joining coordinates with lines.
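A hypothetical sketch of this with the igraph package in R (all names and coordinates below are made up):

```r
library(igraph)

# Nodes: parties with their coordinates; edges: who referred whom
nodes <- data.frame(name = c("A", "B", "C"),
                    lon  = c(-0.12, 2.35, 13.40),
                    lat  = c(51.51, 48.86, 52.52))
edges <- data.frame(from = c("A", "A"), to = c("B", "C"))

g <- graph_from_data_frame(edges, directed = TRUE, vertices = nodes)

# Plot the edges at the actual coordinates
plot(g, layout = as.matrix(nodes[, c("lon", "lat")]))
```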
null
CC BY-SA 3.0
null
2014-11-18T09:48:19.333
2014-11-18T09:48:19.333
null
null
847
null
2491
1
2656
null
1
14146
The data sample contains a single feature: a random integer from 1 to 4. Is it possible to change the `1,2,3,4` representation on the filter card to some custom names, say `Type1,Type2,Type3,Type4`, without changing the data set? ![enter image description here](https://i.stack.imgur.com/4ZPYM.png)
Change aliases of filter items in Tableau
CC BY-SA 3.0
null
2014-11-18T09:56:06.157
2014-12-09T08:30:28.000
2014-11-18T10:02:47.110
97
97
[ "visualization", "tableau" ]
2492
2
null
2489
2
null
The variable represents the answer to the first question. One straightforward way is to allow for all possible combinations of answers in this variable. For example, if there are 5 options for this answer, you will have to treat it as a categorical variable with 2^5 = 32 categories. However, the number of categories increases exponentially with the number of options (check boxes) provided for the answer. In that case, it might be better to restrict the number of categories to, for example, 5. This can be done by leaving the top 4 choices/options (by count) as they are and treating every other choice as "other".
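A small made-up R sketch of the "top categories plus other" reduction described above (the category names and proportions are hypothetical):

```r
set.seed(1)
cm <- sample(c("Family", "Hobbies", "Professional", "Religious",
               "Political", "Support", "None"),
             size = 500, replace = TRUE,
             prob = c(.30, .25, .20, .10, .07, .05, .03))

top4       <- names(sort(table(cm), decreasing = TRUE))[1:4]
cm_reduced <- ifelse(cm %in% top4, cm, "other")   # 5 categories in total
table(cm_reduced)
```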
null
CC BY-SA 3.0
null
2014-11-18T10:19:59.510
2014-11-18T10:32:32.317
2014-11-18T10:32:32.317
847
847
null
2493
1
null
null
0
264
Currently we regularly analyze sets of paragraphs every month. I would like to automate this and split each paragraph into chunks of data. To do this I would like to employ a neural network. However, I am not really very familiar with creating neural networks. Any ideas or starting points on how to do this using Neuroph, or maybe with other frameworks/approaches? Edit for more info as suggested: I have very little experience with neural networks, though I had some introduction to them in college. However, I am very familiar with Java. The data is only around 3 megabytes and consists of rules and relationships for a single domain. This means that the data is complex but relatively limited, though still free-form English language.
Analyze paragraphs using Neuroph
CC BY-SA 3.0
null
2014-11-18T11:04:28.837
2014-11-19T11:34:03.507
2014-11-19T11:34:03.507
5077
5077
[ "text-mining", "clustering", "neural-network", "java" ]
2494
1
null
null
3
466
I have 1-4 gram text data from Wikipedia for 14 categories, which I am using for NE classification. I feed the named entity from a sentence to a Lucene indexer, which searches for the named entity in these 14 categories. The issue I am facing is that for a single entity I get multiple classes as a result with the same score. For example, when searching `titanic`, the indexer gives this result: Score - 11.23 Title - titanic Category - Book Score - 11.23 Title - titanic Category - Movie Score - 11.23 Title - titanic Category - Product Now the problem is which class should be considered? I have already tried classifiers (NB, ME in nltk, scikit-learn), but as they consider each entity from the dataset as a feature, they work as an indexer only. Why Lucene? ![enter image description here](https://i.stack.imgur.com/Tz8Uy.jpg)
What is the best practice to classify the category of a named entity in a sentence
CC BY-SA 3.0
null
2014-11-18T12:41:54.973
2015-05-04T20:25:06.967
null
null
5079
[ "machine-learning", "data-mining", "classification", "nlp" ]
2495
1
2511
null
3
752
I am running an SVM algorithm in R. It is taking a long time to run. I have a system with 32 GB of RAM. How can I use that whole RAM to speed up my process?
How to run R programs on multicore using doParallel package?
CC BY-SA 3.0
null
2014-11-18T14:03:26.760
2016-02-24T15:33:03.407
null
null
3551
[ "r" ]
2496
2
null
2486
3
null
Does it matter that the model is created in the form of an SVM? If not, I have seen a clever 6-D visualization; its varieties are becoming popular in medical presentations. 3 dimensions are shown as usual, in orthographic projection. Dimension 4 is color (0..255). Dimension 5 is the thickness of the symbol. Dimension 6 requires animation: it is the frequency of vibration of a dot on the screen. In static, printed versions, one can replace the frequency of vibration with blur around the point, for a comparable visual perception. If yes, and you specifically need to draw separating hyperplanes and make them look like lines/planes, the previous trick will not produce good results. Multiple 3-D images are better.
null
CC BY-SA 3.0
null
2014-11-18T19:52:54.050
2014-11-18T19:52:54.050
null
null
5083
null
2499
1
null
null
3
1013
I'm looking for an (ideally free) API that would have time series avg/median housing prices by zip code or city/state. Quandl almost fits the bill, but it returns inconsistent results across different zip codes and the data is not as up to date as I'd like (it's mid November, and the last month is August). I also looked at Zillow, but storing their data is against TOS, and at 1,000 calls daily--it would take forever to pull in the necessary data. Any suggestions (even if they aren't free) would be much appreciated!
API for historical housing prices
CC BY-SA 3.0
null
2014-11-19T01:07:10.283
2014-12-01T00:26:39.927
null
null
5086
[ "dataset" ]
2500
1
null
null
3
1119
I have a list of user data: user name, age, sex, address, location etc., and a set of product data: product name, cost, description etc. Now I would like to build a recommendation engine that will be able to: 1. Figure out similar products, e.g.: name : category : cost : ingredients x : x1 : 15 : xx1, xx2, xx3 y : y1 : 14 : yy1, yy2, yy3 z : x1 : 12 : xx1, xy1 Here x and z are similar. 2. Recommend relevant products from the product list to a user. How can I implement this using Mahout?
Recommendation engine with mahout
CC BY-SA 3.0
null
2014-11-19T09:26:58.700
2018-12-21T22:38:05.487
2014-11-19T12:46:27.353
5091
5091
[ "machine-learning", "data-mining", "recommender-system" ]
2501
1
5626
null
5
204
I am currently doing some data experiments with the [Graphlab toolkit](http://graphlab.com/products/create/docs/). I have an SFrame with the three columns: ``` Users Items Rating ``` Each (`Users`, `Items`) pair in a row forms a unique key, and `Rating` is the corresponding float value. These values are not normalised. First of all, I apply the following normalisation myself: - Division of every rating value of a specific user by that user's maximum rating (scaling between 0 and 1) - Taking the logarithm of every rating value Afterwards I create a recommender model and evaluate the basic metrics for it. In this topic I invite everybody to discuss other interesting normalisation methods. If anybody could suggest a good method for data preparation, it would be great. The results can be evaluated with the metrics, and I can publish them here. PS My dataset comes from a music site where users rated some tracks. I have approximately 100,000 users and 300,000 tracks. The total number of ratings is over 3 million (the matrix is actually sparse). This is the simplest dataset that I am analyzing now. In the future I can (and will) use some additional information about the users and tracks (e.g. duration, year, genre, band etc). At the moment I am just interested in collecting methods for rating normalisation that do not use additional information (user & item features). My problem is that the dataset does not have any `Rating` at first. I create the `Rating` column myself, based on the number of events for each unique `User-Item` pair (I have this information). As you can understand, some users listen to some tracks many times, while other users listen only once. Consequently the dispersion is very high and I want to reduce it (normalise the rating values).
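A tiny R sketch of the two normalisations described above (per-user max scaling and a log transform), with made-up play counts standing in for the derived ratings:

```r
d <- data.frame(user   = c("u1", "u1", "u1", "u2", "u2"),
                item   = c("i1", "i2", "i3", "i1", "i3"),
                rating = c(40, 5, 1, 3, 2))      # e.g. number of plays

# Scale each user's ratings by that user's maximum (range becomes (0, 1])
d$scaled <- d$rating / ave(d$rating, d$user, FUN = max)

# Log transform to damp very heavy listeners
d$logged <- log1p(d$rating)
d
```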
Data scheduling for recommender
CC BY-SA 3.0
null
2014-11-19T12:38:05.597
2015-04-27T11:58:50.443
2014-11-20T14:56:14.813
3281
3281
[ "recommender-system", "data-cleaning" ]
2502
1
null
null
4
1311
I'm working through the [Coursera NLP course by Jurafsky & Manning](https://www.coursera.org/course/nlp), and the [lecture on Good-Turing smoothing](https://class.coursera.org/nlp/lecture/32) struck me odd. The example given was: > You are fishing (a scenario from Josh Goodman), and caught: 10 carp, 3 perch, 2 whitefish, 1 trout, 1 salmon, 1 eel = 18 fish ... How likely is it that the next species is new (i.e. catfish or bass) Let's use our estimate of things-we-saw-once to estimate the new things. 3/18 (because N_1=3) I get the intuition of using the count of uniquely seen items to estimate the number of unseen item types (N = 3), but the next steps seem counterintuitive. Why is the denominator left unchanged instead of incremented by the estimate of unseen item types? I.e., I would expect the probabilities to become: > Carp : 10 / 21 Perch : 3 / 21 Whitefish : 2 / 21 Trout : 1 / 21 Salmon : 1 / 21 Eel : 1 / 21 Something new : 3 / 21 It seems like the Good-Turing count penalizes seen items too much (trout, salmon, & eel are each taken down to 1/27); coupled with the need to adjust the formula for gaps in the counts (e.g., Perch & Carp would be zeroed out otherwise), it just feels like a bad hack.
Good-Turing Smoothing Intuition
CC BY-SA 3.0
null
2014-11-19T19:25:02.770
2014-11-21T10:44:06.783
null
null
5095
[ "nlp" ]
2503
1
null
null
2
337
I am trying to build an item-item similarity matching recommendation engine with Mahout. The data set is in the following format (attributes are in text, not in numeric format): ``` name : category : cost : ingredients x : xx1 : 15 : xxx1, xxx2, xxx3 y : yy1 : 14 : yyy1, yyy2, yyy3 z : xx1 : 12 : xxx1, xxy1 ``` So, in order to use this data set to train Mahout, what is the right way to convert it into the numeric (CSV Boolean data set) format accepted by Mahout?
Creating Data model for mahout recommendation engine
CC BY-SA 3.0
null
2014-11-20T05:38:27.303
2022-04-18T18:15:12.270
2022-04-18T18:15:12.270
1330
5091
[ "machine-learning", "dataset", "data-mining", "recommender-system", "apache-mahout" ]
2504
1
5152
null
50
41238
I have a big data problem with a large dataset (take for example 50 million rows and 200 columns). The dataset consists of about 100 numerical columns and 100 categorical columns and a response column that represents a binary class problem. The cardinality of each of the categorical columns is less than 50. I want to know a priori whether I should go for deep learning methods or ensemble tree based methods (for example gradient boosting, adaboost, or random forests). Are there some exploratory data analysis or some other techniques that can help me decide for one method over the other?
Deep Learning vs gradient boosting: When to use what?
CC BY-SA 3.0
null
2014-11-20T06:49:00.357
2020-08-20T18:33:44.403
null
null
847
[ "machine-learning", "classification", "deep-learning" ]
2505
2
null
2486
3
null
Have you looked into the tourr package in R? This package does hyperplane reduction. In addition, it has an optimizer that tries to find the best reduction. There is a very nice video at [https://www.youtube.com/watch?v=iSXNfZESR5I](https://www.youtube.com/watch?v=iSXNfZESR5I) that shows what R is capable of even beyond the tourr package. Also I refer you to [https://stackoverflow.com/questions/8017427/plotting-data-from-an-svm-fit-hyperplane](https://stackoverflow.com/questions/8017427/plotting-data-from-an-svm-fit-hyperplane)
null
CC BY-SA 3.0
null
2014-11-20T09:33:20.320
2014-11-20T09:33:20.320
2017-05-23T12:38:53.587
-1
5100
null
2506
2
null
2463
0
null
Also take a look at Neo4j (a graph database that is useful for social network analysis); it may be helpful.
null
CC BY-SA 3.0
null
2014-11-20T13:07:45.903
2014-11-20T13:07:45.903
null
null
5091
null
2507
1
null
null
10
3588
I'm trying to build a data set from several log files of one of our products. The different log files have their own layout and content; I successfully grouped them together, with only one step remaining... Indeed, the log "messages" are the best information. I don't have a comprehensive list of all those messages, and it's a bad idea to hard-code based on them because that list can change every day. What I would like to do is to separate the identification text from the value text (for example: "Loaded file XXX" becomes (identification: "Loaded file", value: "XXX")). Unfortunately, this example is simple, and in the real world there are different layouts and sometimes multiple values. I was thinking about using string kernels, but they are intended for clustering ... and clustering is not applicable here (I don't know the number of different types of messages, and even if I did, it would be too many). Do you have any ideas? Thanks for your help. P.S: For those who program, this can be easier to understand. Let's say that the code writes logs like printf("blabla %s", "xxx") -> I would like to have "blabla" and "xxx" separated.
Log file analysis: extracting information part from value part
CC BY-SA 3.0
null
2014-11-20T14:26:10.463
2014-12-29T08:38:44.673
null
null
3024
[ "text-mining", "clustering" ]
2508
1
2519
null
2
1115
I would like to run an R script using a single command (e.g. a bat file or shortcut). This R script asks the user to choose a file and then plots information about that file. Everything is done via dialog boxes. I don't want the user to go inside R, because they don't know it at all. So I was using R CMD and other similar tools, but as soon as the plots are displayed, R exits and closes the plots. What can I do? Thanks for your help.
How to run R scripts without closing X11
CC BY-SA 3.0
null
2014-11-20T15:02:17.937
2014-11-21T21:39:29.617
null
null
3024
[ "r" ]
2509
2
null
2507
3
null
How about considering each string as a process trace and applying the alpha algorithm? That would give you a graph, and nodes with a large number of out-edges will most likely point to values. You can mark these nodes and, for every new string, parse/traverse the graph until you reach those areas.
null
CC BY-SA 3.0
null
2014-11-20T16:28:48.343
2014-11-20T16:28:48.343
null
null
5041
null
2510
1
null
null
1
1267
I'm currently finishing up a B.S. in mathematics and would like to attend graduate school (a master's degree for starters, with the possibility of a subsequent Ph.D.) with an eye toward entering the field of data science. I'm also particularly interested in machine learning. What are the graduate degree choices that would get me to where I want to go? Is there a consensus as to whether a graduate degree in applied mathematics, statistics, or computer science would put me in a better position to enter the field of data science? Thank you all for the help, this is a big choice for me and any input is very much appreciated. Typically I ask my questions on Mathematics Stack Exchange, but I thought asking here would give me a broader and better rounded perspective.
Graduate Degree Choices for Data Science
CC BY-SA 3.0
null
2014-11-20T17:54:11.990
2016-07-28T17:08:46.557
null
null
null
[ "career" ]
2511
2
null
2495
1
null
I would add a comment but I do not have enough reputation points. I might suggest using "Revolution R Open". It is a build of R that includes a lot of native support for multi-core processing. I have not used it much as my computer is very old, but it is definitely worth looking at. Plus, it is free.
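Since the question title asks about the doParallel package specifically, here is a minimal sketch (an assumption on my part, not taken from the answer above) of parallelising repeated SVM fits across cores, assuming the doParallel, foreach and e1071 packages are installed:

```r
library(doParallel)
library(foreach)
library(e1071)

cl <- makeCluster(detectCores() - 1)   # leave one core free
registerDoParallel(cl)

data(iris)
# Fit one SVM per cost value in parallel; each worker loads e1071 itself
fits <- foreach(C = c(0.1, 1, 10, 100), .packages = "e1071") %dopar% {
  svm(Species ~ ., data = iris, cost = C)
}

stopCluster(cl)
length(fits)   # 4 fitted models
```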
null
CC BY-SA 3.0
null
2014-11-20T21:11:12.187
2014-11-20T21:11:12.187
null
null
5023
null
2512
2
null
2510
2
null
UCL - CSML. It covers computer science, machine learning and statistics. Firstly, the reputation of the university. Secondly, you are from a Mathematics background, hence I assume you don't have sufficient programming knowledge. Thirdly, Statistics and Machine Learning dominate this field; employers would prefer these two over Mathematics. In short, this course provides everything that you are lacking. HOWEVER, they don't teach programming languages like Java, C++, ... but Matlab, R, and Mathematica. Hence, it would be essential to pick up the former somewhere.
null
CC BY-SA 3.0
null
2014-11-21T10:24:18.623
2014-11-21T10:47:20.207
2014-11-21T10:47:20.207
5110
5110
null
2513
1
5046
null
7
195
From [A_Roadmap_to_SVM_SMO.pdf](http://nshorter.com/ResearchPapers/MachineLearning/A_Roadmap_to_SVM_SMO.pdf), pg 12. [](https://i.stack.imgur.com/Hzn23.png) (source: [postimg.org](https://s13.postimg.org/9dx9t4w47/whatwhat.png)) Assuming I am using a linear kernel, how will I be able to get both the first and second inner product? My guess: the inner product of the data point with data point j labelled class A for the first inner product of the equation, and the inner product of data point j with the data points labelled class B for the second inner product?
Please enlighten me with Platt's SMO algorithm (for SVM)
CC BY-SA 4.0
null
2014-11-21T10:34:31.673
2019-02-16T16:57:28.567
2020-06-16T11:08:43.077
-1
5110
[ "svm" ]
2514
2
null
2502
3
null
There are no unseen item types in the given data, by definition. 3 is the count of items seen once, and they are already included in the denominator 18. If the next item were previously unseen, it would become seen once when it appears. Since 3-of-18 examples were seen-once items, this is an estimate of the probability that the next item will be seen-once too on its first appearance. It is certainly a heuristic. There is no way to know whether there are 0 or 1000 other types out there.
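A tiny sketch of the estimate being discussed, using the fish counts from the question:

```r
# Counts from the question: 10 carp, 3 perch, 2 whitefish, 1 trout, 1 salmon, 1 eel
counts <- c(carp = 10, perch = 3, whitefish = 2, trout = 1, salmon = 1, eel = 1)

N  <- sum(counts)        # 18 fish caught in total
N1 <- sum(counts == 1)   # 3 species seen exactly once

N1 / N                   # Good-Turing estimate that the next fish is a new species: 3/18
```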
null
CC BY-SA 3.0
null
2014-11-21T10:44:06.783
2014-11-21T10:44:06.783
null
null
21
null
2515
2
null
1246
10
null
First of all, the word "sample" is normally used to describe a [subset of a population](http://en.wikipedia.org/wiki/Sample_%28statistics%29), so I will refer to the same thing as an "example". Your SGD implementation is slow because of this line: ``` for each training example i: ``` Here you explicitly use exactly one example for each update of the model parameters. By definition, vectorization is a technique for converting operations on one element into operations on a vector of such elements. Thus, no, you cannot process examples one by one and still use vectorization. You can, however, approximate true SGD by using mini-batches. A mini-batch is a small subset of the original dataset (say, 100 examples). You calculate the error and parameter updates based on mini-batches, but you still iterate over many of them without global optimization, making the process stochastic. So, to make your implementation much faster it's enough to change the previous line to: ``` batches = split dataset into mini-batches for batch in batches: ``` and calculate the error from the batch, not from a single example. Though pretty obvious, I should also mention vectorization at the per-example level. That is, instead of something like this: ``` theta = np.array([...]) # parameter vector x = np.array([...]) # example y = 0 # predicted response for i in range(len(x)): y += x[i] * theta[i] error = (true_y - y) ** 2 # true_y - true value of response ``` you should definitely do something like this: ``` error = (true_y - np.dot(x, theta)) ** 2 # np.dot gives the vectorized inner product ``` which, again, is easy to generalize for mini-batches: ``` true_y = np.array([...]) # vector of response values X = np.array([[...], [...]]) # mini-batch errors = true_y - np.dot(X, theta) # one prediction per row of X error = np.sum(errors ** 2) ```
null
CC BY-SA 3.0
null
2014-11-21T11:50:47.717
2014-11-21T11:50:47.717
null
null
1279
null
2516
1
null
null
5
78
To all: I have been wracking my brain at this for a while and thought maybe someone here would know of a package or algorithm to handle the following: I have nominal multivariate time series that look like the following: ``` Time Var1 Var2 Var3 Var4 Var5 ... VarN 0 A A B C A ... H 1 A A B D D ... H 2 B A C D D ... H .. ``` And so on from times 0 to 1,000,000. What I would like to do is search the time series for rules of the type: Given Var3 is in state B in the previous step and Var5 is in state D in the previous step, then Var1 will be in state B. What I want is to have rules that include the time interval explicitly. A simpler case of interest would simply be to reduce the time series to ``` Time Var1 Var2 Var3 Var4 Var5 ... VarN 0 0 0 0 0 0 ... 0 1 0 0 0 1 1 ... 0 2 1 0 1 0 0 ... 0 ``` where the variable is 1 if its state is different from the previous step and zero otherwise. Then I just want to have rules that say something like: If Var4 and Var5 changed in the previous step, then Var1 will change in the current step. This would be easy for a lag of one, as I could just make the data into something like: ``` Var1 Var2 Var3 Var4 Var5 ... VarN Var1_t-1 Var2_t-1 Var3_t-1 ... ``` and then do sequence mining, but if I want to have rules that aren't just a single lag but could be lags from 1 to 500, then my data set begins to be a little difficult to work with. Any help would be greatly appreciated. Edit to respond to comment: Each column could be in one of 7 different states. As far as a target, it is non-specific; any rules between the columns would be of interest. However, predicting columns 30-40 and 62-75 would be particularly interesting.
Relation mining of multivariate categorical time series without excluding the temporal nature
CC BY-SA 3.0
null
2014-11-21T15:23:37.360
2014-12-25T09:27:16.340
2014-11-24T13:47:38.860
5134
5113
[ "data-mining", "statistics", "text-mining", "time-series", "categorical-data" ]
2517
2
null
2486
2
null
Dimension reduction (like PCA) is an excellent way to visualize the results of classification on a high-dimensional feature space. The simplest approach is to project the features to some low-d (usually 2-d) space and plot them. Then either project the decision boundary onto the space and plot it as well, or simply color/label the points according to their predicted class. You can even use, say, shape to represent ground-truth class, and color to represent predicted class. This is true for any categorical classifier, but here's an SVM-specific example: [http://www.ece.umn.edu/users/cherkass/predictive_learning/Resources/Visualization%20and%20Interpretation%20of%20SVM%20Classifiers.pdf](http://www.ece.umn.edu/users/cherkass/predictive_learning/Resources/Visualization%20and%20Interpretation%20of%20SVM%20Classifiers.pdf) In particular, see figures 1a and 2a.
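A minimal sketch of the projection idea in R, using iris and e1071::svm purely as stand-ins for the asker's data and model:

```r
library(e1071)

data(iris)
fit  <- svm(Species ~ ., data = iris)
pred <- predict(fit, iris)

# Project the 4-D feature space onto the first two principal components
pc <- prcomp(iris[, 1:4], scale. = TRUE)

plot(pc$x[, 1:2],
     col = as.integer(pred),            # colour = predicted class
     pch = as.integer(iris$Species))    # shape  = ground-truth class
```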
null
CC BY-SA 3.0
null
2014-11-21T18:55:07.860
2014-11-21T18:55:07.860
null
null
1156
null
2518
2
null
2500
1
null
Try using the item-based similarity algorithm available under Apache Mahout. It is easy to implement, and you will get a good sense of how the recommendation system for your data set will work. You could provide ingredients and category as the major inputs to get similar products. As a neophyte to this field, I would say that this approach is an easy way for all neophytes to get a good heads-up on what kind of result one can expect from building a recommendation system of their own.
null
CC BY-SA 3.0
null
2014-11-21T21:33:06.777
2014-11-21T21:33:06.777
null
null
5043
null
2519
2
null
2508
2
null
[This](https://stackoverflow.com/questions/24220676/r-script-using-x11-window-only-opens-for-a-second) looks like a similar kind of problem. Solutions (taken from the above source): - Just sleep via Sys.sleep(10), which would wait ten seconds. - Wait for user input via readLines(stdin()) or something like that [untested]. - Use the tcltk package, which comes with R and is available on all platforms, to pop up a window the user has to click to make it disappear. That solution has been posted a few times over the years on r-help. The 2nd option is nicer for the user; a sketch follows below. P.S. Since I did not come up with the answer myself, I tried to put it in a comment, but my reputation is too low for that.
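A small sketch of the "wait for user input" option, assuming the script is launched non-interactively with Rscript (the file name is hypothetical):

```r
# plot_and_wait.R
x11()                                        # open a plotting window
plot(rnorm(100), main = "My plot")

cat("Press <Enter> in the console to close the plot and exit.\n")
invisible(readLines(file("stdin"), n = 1))   # block until the user presses Enter
```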
null
CC BY-SA 3.0
null
2014-11-21T21:39:29.617
2014-11-21T21:39:29.617
2017-05-23T12:38:53.587
-1
4622
null
2520
2
null
2510
3
null
Every field has its own variation of "data science," so I would suggest choosing a subject that interests you and going from there. I can't say what the go-to subject is for your particular interest. A graduate degree that would "get you where you want to go" is quite a personal matter, so I can't answer that. But what I will say is, from my own personal experience: when I graduated with my undergrad degree in economics, I was really interested in data science, and economics allowed me to use data science in a field I'm really interested in. So I applied to Ph.D. programs to further my knowledge and am using data science extensively in many different forms of analysis. My suggestion is to apply to graduate degrees that have subject matter that is interesting to you and that will allow you to use data science for understanding it. You would fit well in an economics degree because of your background :)
null
CC BY-SA 3.0
null
2014-11-22T08:10:40.480
2014-11-22T08:10:40.480
null
null
4697
null
2521
1
2526
null
2
256
I browsed a sample for available data at [http://dbpedia.org/page/Sachin_Tendulkar](http://dbpedia.org/page/Sachin_Tendulkar). I wanted these properties as columns, so I downloaded the CSV files from [http://wiki.dbpedia.org/DBpediaAsTables](http://wiki.dbpedia.org/DBpediaAsTables). Now, when I browse the data for the same entity "Sachin_Tendulkar", I find that many of the properties are not available. e.g. the property "dbpprop:bestBowling" is not present. How can I get all the properties that I can browse through the direct resource page.
DBPedia as Table not having all the properties
CC BY-SA 3.0
null
2014-11-22T11:54:54.780
2014-11-23T02:27:31.947
null
null
5126
[ "dataset" ]
2522
2
null
2510
4
null
Why not do an MSc in ooh... Data Science? I wrote a quick review of [UK Data Science Masters](http://barry.rowlingson.com/blog/uk-data-science-masters.html)' offerings recently. That should help you get an idea what is offered. Mostly they are mashups of stats and computing, but there are specialisms (health, finance for example) that might interest you. Note that list was compiled for courses that have already started, so some of those courses might not be available for starting next October, or have different syllabus contents.
null
CC BY-SA 3.0
null
2014-11-22T16:08:49.147
2014-11-22T18:16:40.737
2014-11-22T18:16:40.737
471
471
null
2523
1
null
null
11
11139
Common model validation statistics like the [Kolmogorov–Smirnov test](https://en.wikipedia.org/wiki/Kolmogorov%E2%80%93Smirnov_test) (KS), [AUROC](https://en.wikipedia.org/wiki/Receiver_operating_characteristic), and [Gini coefficient](https://en.wikipedia.org/wiki/Gini_coefficient) are all functionally related. However, my question has to do with proving how these are all related. I am curious if anyone can help me prove these relationships. I haven't been able to find anything online, but I am just genuinely interested how the proofs work. For example, I know Gini=2AUROC-1, but my best proof involves pointing at a graph. I am interested in formal proofs. Any help would be greatly appreciated!
Relationship between KS, AUROC, and Gini
CC BY-SA 3.0
null
2014-11-23T01:05:06.473
2018-11-25T05:15:13.307
2015-04-18T15:49:57.947
9146
5132
[ "data-mining", "statistics", "predictive-modeling", "accuracy" ]
2524
2
null
2521
0
null
The official answer: ``` Date: Sun, 23 Nov 2014 03:08:19 +0100 From: Petar Ristoski <[email protected]> To: 'Barry Carter' <[email protected]> Subject: RE: CSV tables don't have all properties? Hi Carter, The question was already answered on the DBpedia mailing list, but I will try to clarify it again. On the DBpedia as Tables web page says that "For each class in the DBpedia ontology (such as Person, Radio Station, Ice Hockey Player, or Band) we provide a single CSV/JSON file which contains all instances of this class. Each instance is described by its URI, an English label and a short abstract, the MAPPING-BASED INFOBOX data describing the instance (extracted from the English edition of Wikipedia), geo-coordinates, and external links." As you can see we only provide the mapping-based infobox properties (dbpedia-owl namespace), while the properties from the dbpprop (raw infobox properties) namespace are completely ignored. Therefore, dbpprop:bestBowling is missing from the file. Also, there is a section "Generating your own Custom Tables" [1], where we explain how to generate your own tables that will contain the properties you need. Regards, Petar [1] http://wiki.dbpedia.org/DBpediaAsTables#h347-4 ```
null
CC BY-SA 3.0
null
2014-11-23T02:26:42.470
2014-11-23T02:26:42.470
null
null
null
null
2525
1
2589
null
4
480
I am having some difficulty seeing the connection between PCA on the second-order moment matrix and estimating the parameters of Gaussian Mixture Models. Can anyone connect the two?
Can someone explain how PCA is relevant in extracting parameters of Gaussian Mixture Models
CC BY-SA 3.0
null
2014-11-23T02:27:10.670
2014-12-03T13:55:16.150
null
null
4686
[ "clustering" ]
2526
2
null
2521
0
null
The question was already answered on the DBpedia-discussion mailing list, by Daniel: > Hi Abhay, the DBpediaAsTables dataset only contains the properties in the dbpedia-owl namespace (mapping-based infobox data) and not those from the dbpprop (raw infobox properties) namespace (regarding the differences see [1]). However, as you are only interested in the data about specific entities, take a look at the CSV link at the bottom of the entity's description page, e.g., for your example this link is [2]. Cheers, Daniel [1] wiki.dbpedia.org/Datasets#h434-10 [2] dbpedia.org/sparql?default-graph-uri=http%3A%2F%2Fdbpedia.org&query=DESCRIBE+%3Chttp://dbpedia.org/resource/Sachin_Tendulkar%3E&format=text%2Fcsv On the [DBpediaAsTables web page](http://wiki.dbpedia.org/DBpediaAsTables), you can find out which datasets were used to generate the tables: instance_types_en, labels, short_abstracts_en, mappingbased_properties_en, geo_coordinates_en. Also, I want to clarify that DBpediaAsTables contains all instances from DBpedia 2014, and with "we provide some of the core DBpedia data" we want to say that not all datasets are included in the tables (but only the 5 I stated before) If you want to generate your own tables that will contain custom properties, please refer to the section [Generate your own Custom Tables](http://wiki.dbpedia.org/DBpediaAsTables#h347-4). Cheers, Petar
null
CC BY-SA 3.0
null
2014-11-23T02:27:31.947
2014-11-23T02:27:31.947
2020-06-16T11:08:43.077
-1
5133
null
2527
1
2562
null
11
499
Hi, this is my first question on the Data Science stack. I want to create an algorithm for text classification. Suppose I have a large set of texts and articles, let's say around 5000 plain texts. I first use a simple function to determine the frequency of all words of four or more characters. I then use this as the feature of each training sample. Now I want my algorithm to be able to cluster the training samples according to their features, which here are the frequencies of each word in the article. (Note that in this example, each article would have its own unique feature vector, since each article has different frequencies; for example, one article has 10 "water" and 23 "pure" and another has 8 "politics" and 14 "leverage".) Can you suggest the best possible clustering algorithm for this example?
Using Clustering in text processing
CC BY-SA 3.0
null
2014-11-23T14:58:34.127
2017-06-08T00:24:37.560
null
null
5138
[ "text-mining", "clustering" ]
2528
2
null
2527
6
null
If you want to proceed on your existing path I suggest normalizing each term's frequency by its popularity in the entire corpus, so rare and hence predictive words are promoted. Then use random projections to reduce the dimensionality of these very long vectors down to size so your clustering algorithm will work better (you don't want to cluster in high dimensional spaces). But there are other ways of topic modeling. Read [this](http://www.cs.columbia.edu/~blei/papers/Blei2012.pdf) tutorial to learn more.
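A toy R sketch of the first step (tf-idf style down-weighting of corpus-wide popular terms, then clustering); the document-term counts are made up, and the random-projection step is omitted for brevity:

```r
# Rows = documents, columns = terms (counts are hypothetical)
tf <- matrix(c(10, 0, 3,
                8, 1, 0,
                0, 7, 5,
                1, 6, 4),
             nrow = 4, byrow = TRUE,
             dimnames = list(paste0("doc", 1:4), c("water", "politics", "pure")))

idf   <- log(nrow(tf) / colSums(tf > 0))       # rarer terms get larger weights
tfidf <- sweep(tf, 2, idf, `*`)                # weight each term column

km <- kmeans(tfidf, centers = 2, nstart = 10)  # cluster documents on weighted features
km$cluster
```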
null
CC BY-SA 3.0
null
2014-11-23T17:49:04.487
2017-06-08T00:24:37.560
2017-06-08T00:24:37.560
33169
381
null
2529
2
null
1227
1
null
Store the edges (relations) in your server: ``` (TeamID, playerID) ``` When you want to find common elements, just filter all edges where: ``` TeamID="TeamA" or TeamID="TeamB" ``` (You could use indexes to speed it up, etc.) Then group by playerID and check how many items are in each group. The groups with two items belong to both teams and are shared.
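The same filter / group-by / count logic in a small R sketch (team and player names are made up):

```r
edges <- data.frame(team   = c("TeamA", "TeamA", "TeamA", "TeamB", "TeamB"),
                    player = c("p1",    "p2",    "p3",    "p2",    "p4"))

sub    <- subset(edges, team %in% c("TeamA", "TeamB"))
counts <- table(sub$player)
names(counts)[counts == 2]   # players appearing in both teams: "p2"
```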
null
CC BY-SA 3.0
null
2014-11-23T23:00:30.007
2014-11-23T23:00:30.007
null
null
5041
null
2530
2
null
2500
3
null
I recommend you take a look at Oryx ([https://github.com/OryxProject/oryx](https://github.com/OryxProject/oryx)). Oryx is based on Apache Mahout (it was actually built by Sean Owen, one of the creators of Mahout) and provides recommendation using collaborative filtering. Oryx is a very practical tool for implementing recommendation. I have used it in several projects: recommending products in retail stores (small businesses), building an e-commerce recommender, and computing user similarity from mobile app interactions. You just have to represent data in the form: UserId ItemId Value where Value is a (subjective) measure of the importance or influence of the interaction between that user and the item. User and item can be anything, actually, and the same procedure can be used for tagging. For example, for recommending songs, finding similar songs and bands, and finding similar users according to their music tastes, you can represent data as UserId SongId NumberOfPlays where NumberOfPlays is the number of times a song has been played by the user (in an online music service, for example). This example was given in Myrrix, the predecessor of Oryx. They also show how to recommend tags to questions in StackOverflow. The github site is not that well documented, but it will be enough to get it running (and working :))
null
CC BY-SA 4.0
null
2014-11-24T09:20:47.543
2018-12-21T22:38:05.487
2018-12-21T22:38:05.487
64659
5143
null
2534
1
2535
null
3
346
[https://archive.ics.uci.edu/ml/datasets/YearPredictionMSD](https://archive.ics.uci.edu/ml/datasets/YearPredictionMSD) According to the description given in the above link, the Attribute information specifies "average and covariance over all 'segments', each segment being described by a 12-dimensional timbre vector". So the covariance matrix should have 12*12 = 144 elements. But why is the number of timbre covariance features only 78 ?
Confused about description of YearPrediction Dataset
CC BY-SA 3.0
null
2014-11-25T03:08:34.843
2014-11-25T03:38:33.600
null
null
4947
[ "dataset" ]
2535
2
null
2534
1
null
You are right, the covariance matrix should have n^2 elements. However, since cov_{i,j} = cov_{j,i}, there is no need to have a repeated feature cov_{j,i} if cov_{i,j} is already accounted for. Hence there will be only n*(n+1)/2 = 12*13/2 = 78 unique covariances and thus only 78 unique covariance based features (n of those will be variances).
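A quick check of the arithmetic in R:

```r
n <- 12
n * (n + 1) / 2     # 78 unique entries in a symmetric 12x12 covariance matrix
choose(n, 2) + n    # same count: 66 off-diagonal pairs plus 12 variances
```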
null
CC BY-SA 3.0
null
2014-11-25T03:38:33.600
2014-11-25T03:38:33.600
null
null
847
null
2536
2
null
2516
2
null
This problem is one of estimating the lag. Once that is estimated, you could create additional features representing the lagged values and move forward with "sequence mining", as you have already suggested in the question itself. For each variable, Var_i, you will have to estimate its lag l_i. This lag can be calculated by estimating the order of a Markov chain with seven symbols (you could use either [BIC](http://en.wikipedia.org/wiki/Bayesian_information_criterion) or [AIC](http://en.wikipedia.org/wiki/Akaike_information_criterion) to estimate this order; both require calculating the likelihood of candidate orders and picking the order that optimizes the chosen criterion). Once you are done calculating the order of the Markov chain for each of the variables, you could represent your dataset such that each row has the current value of Var_i and its preceding values, all the way back to its estimated lag l_i. While this methodology is laborious, it pays rich dividends as it is an automated and parsimonious way of representing the necessary information.
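A rough sketch of the order-selection step for one symbolic series, using the usual BIC convention (−2·log-likelihood plus a complexity penalty, lower is better); everything below is illustrative, not taken from the answer:

```r
# BIC of an order-k Markov chain fitted to a symbol sequence x
markov_bic <- function(x, k) {
  n   <- length(x)
  ctx <- sapply(1:(n - k), function(i) paste(x[i:(i + k - 1)], collapse = "|"))
  nxt <- x[(k + 1):n]
  tab <- table(ctx, nxt)                        # transition counts
  p   <- tab / rowSums(tab)                     # transition probabilities
  loglik   <- sum(tab * log(p), na.rm = TRUE)   # 0 * log(0) terms dropped
  n_params <- nrow(tab) * (ncol(tab) - 1)
  -2 * loglik + n_params * log(n - k)
}

set.seed(1)
x    <- sample(LETTERS[1:7], 5000, replace = TRUE)   # made-up 7-state series
bics <- sapply(1:3, function(k) markov_bic(x, k))
which.min(bics)                                      # estimated lag for this variable
```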
null
CC BY-SA 3.0
null
2014-11-25T09:17:45.020
2014-11-25T09:17:45.020
null
null
847
null
2537
1
10952
null
8
225
GBMs, like random forests, build each tree on a different sample of the dataset and hence, in the spirit of ensemble models, produce higher accuracies. However, I have not seen GBM being used with dimension sampling at every split of the tree, as is common practice with random forests. Are there any tests, either in the literature or from practical experience, showing that dimension sampling with GBM would decrease its accuracy, which is why it is avoided?
Why isn't dimension sampling used with gradient boosting machines (GBM)?
CC BY-SA 3.0
null
2014-11-25T09:40:20.040
2016-03-30T06:58:54.947
null
null
847
[ "random-forest", "accuracy", "gbm", "ensemble-modeling" ]
2538
1
null
null
5
716
I have the historic errors of a time series forecast. I want to analyze the error series to improve the forecast series. Are there any methods to do this?
Error analysis for better accuracy
CC BY-SA 3.0
null
2014-11-25T10:30:17.567
2014-12-28T13:16:47.620
null
null
5099
[ "time-series", "forecast" ]
2540
2
null
1166
2
null
I have just finished my Ph.D. and used some NLP in it. My university didn't offer any NLP courses, so I ended up teaching myself NLP. I used this [book](http://www.nltk.org/book_1ed/), which serves as a great introduction to NLP using NLTK (Natural Language Toolkit). It also gives a good introduction to programming with Python, which is handy if you've never programmed in Python before. I would highly suggest using NLTK from nltk.org (sorry, can't post more than two links). The book I used is now out of date, as NLTK is now on version 3.0; the book mentioned previously is for NLTK 2.x. But the authors are working on a new version of the book for NLTK 3.x; you can view the unfinished book [here](http://www.nltk.org/book/). I would highly suggest using NLTK, and if you're new to natural language processing, I would highly suggest you try to get yourself a copy of the following book: [Foundations of Statistical Natural Language Processing by Manning and Schütze](https://nlp.stanford.edu/fsnlp/) Even though it doesn't contain any code, it serves as a great introduction to natural language processing.
null
CC BY-SA 3.0
null
2014-11-26T02:07:06.460
2017-12-14T12:37:39.437
2017-12-14T12:37:39.437
29575
5170
null
2541
1
null
null
8
269
What is the standard way of evaluating and comparing different algorithms while developing a recommendation system? Do we need to have a predetermined, annotated, ranked dataset and then compare the precision/recall/F-measure of different algorithms? Is this the best way to evaluate? Or is there any other way to compare the results of various recommendation algorithms?
Evaluating Recommendation engines
CC BY-SA 3.0
null
2014-11-26T04:40:17.840
2023-01-28T03:46:59.557
null
null
5091
[ "machine-learning", "data-mining", "dataset", "statistics", "recommender-system" ]
2543
1
null
null
6
519
I'm from a programming background and am now learning analytics. I'm learning concepts from basic statistics to model building, like linear regression, logistic regression, time-series analysis, etc. As my previous experience is completely in programming, I would like to do some analysis on the data which a programmer has. Say we have the details below (I'm using an SVN repository): person name, code check-in date, file checked in, number of times checked in, branch, check-in date and time, build version, number of defects, defect date, file that has the defect, build version, defect fix date, defect fix hours (please feel free to add/remove as many variables as needed). I just need a trigger / starting point for what can be done with these data. Can I derive any insights from this data? Or can you provide any links that have information about similar work that has been done?
Can Machine Learning be applied in software development
CC BY-SA 3.0
null
2014-11-26T08:47:49.650
2017-05-19T16:12:37.980
2017-05-19T16:12:37.980
21
5172
[ "machine-learning", "predictive-modeling", "software-development" ]
2544
2
null
2543
3
null
Definitely - yes. Good question; I was thinking about it myself. (1) Collect the data. The first problem you have: gather enough data. All the attributes you mentioned (date, name, check-in title/comment, number of defects, etc.) are potentially useful - gather as much as possible. As soon as you have a big project, a number of developers, many branches, frequent commits, and you have started collecting all the data, you are ready to go further. (2) Ask good questions. The next question you should ask yourself: what effect are you going to measure, estimate and maybe predict? Frequency of possible bugs? Tracking inaccurate "committers"? Risky branches? Do you want to see some groups of users/bugs/commits according to some metrics? (3) Select the model. As soon as you have the questions formulated, you should follow the general approach in data science - extract the needed features from your data, select an appropriate model, train your model, test it, and apply it. This is too broad a process to discuss in this thread, so please use this site to get the right answers.
null
CC BY-SA 3.0
null
2014-11-26T09:37:43.970
2014-11-26T09:58:47.527
2014-11-26T09:58:47.527
97
97
null
2545
2
null
2543
1
null
As you are also looking for examples, GitHub is a good place to check out. I took a random repository and went to "Graphs" on the right-hand side, which opens up a [contribution frequency graph](https://github.com/mbostock/d3/graphs/contributors). There are several tabs next to it that display other aspects of a repository and its commit history graphically - commits, code frequency, punch card, etc.
null
CC BY-SA 3.0
null
2014-11-26T09:57:00.047
2014-11-26T09:57:00.047
null
null
587
null
2546
2
null
2507
1
null
If you're simply trying to separate textual and numeric information, then there is a solution based on regular expressions or even just string splitting. You could even do something like finding the first numeric character and splitting the text in half right before it. With regular expressions you can match all numeric characters that follow each other. The pattern would be `([0-9]+)` with a global flag. It would match all the groups of numbers, and you can do whatever you want with them afterwards. [Regex Tester](http://www.regextester.com/index.html) is good for playing around with that stuff.
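A small R sketch of that idea, splitting a log message into its identification part and its numeric value(s) with the `[0-9]+` pattern (the example messages are made up):

```r
msgs <- c("Loaded file 1234", "Retrying 3 of 5")

values <- regmatches(msgs, gregexpr("[0-9]+", msgs))  # every run of digits
ident  <- trimws(gsub("[0-9]+", "", msgs))            # what is left over

ident    # "Loaded file"   "Retrying  of"
values   # list of character vectors: "1234" and c("3", "5")
```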
null
CC BY-SA 3.0
null
2014-11-26T11:18:13.227
2014-11-26T11:18:13.227
null
null
587
null
2547
1
null
null
3
92
I am trying to determine whether or not we are 90% confident that the mean of a proposed population is at least 2 times the mean of the incumbent population, based on samples from each population, which is all the data I have right now. Here are the data: incumbentvalues = (7.3, 8.4, 8.4, 8.5, 8.7, 9.1, 9.8, 11.0, 11.1, 11.9) proposedvalues = (17.3, 17.9, 19.2, 20.3, 20.5, 20.6, 21.1, 21.2, 21.3, 21.7) I have no idea if either population is or will be normal. The ratio of the sample means does exceed 2.0, but how does that translate to 90% confidence that the proposed population mean will be at least twice the mean of the incumbent population? Can re-sampling (bootstrapping with replacement) help answer this question?
Statistical comparison of 2 small data sets for 2X increase in the population mean
CC BY-SA 3.0
null
2014-11-26T20:01:23.187
2014-11-27T21:58:15.113
null
null
5180
[ "statistics", "sampling" ]
2549
2
null
2543
3
null
Without a doubt you can. The key is to have a set of hypotheses (i.e. assumptions / scenarios that you want to evaluate) and wrangle the data together to prove / disprove what you thought was true. Here are a few things to watch out for: - Be ready for disappointments: Oftentimes, once they have invested time and energy in building these models, analysts tend to get biased towards publishing results (publication bias). Treat this as an exploration with a lot of dead ends, where the goal is to find the paths that are not. - Know your data: You cannot will your data into doing things magically without truly understanding it. Ensure that you know the different attributes (predictors and dependents) very well. Knowing your data well will allow you to cleanse it and think about appropriate models. Not all models work equally well on all data - data that has a lot of categorical variables might require creative solutions like dimension reduction before it can be modeled. - Know the "operational" processes: Knowing how things operate within your firm will help you refine the set of hypotheses that you want to test. For example, in your scenario above, knowing how developers work with your change management software and what types of administrative setups have been done will help you figure out why the data is coming in the way it is. Some developers might only be focused on certain modules that are more mature than others, or might work only certain shifts, and that might limit how many lines of code are checked in, how many bugs are found, etc. Having said that, here are some scenarios you might want to test: - Developer effectiveness: How different developers working on the same modules over time has resulted in an increase or decrease in bugs. Do more lines of code result in more bugs? Maybe this is an indicator that the programs need to be split further into smaller components. Folks might be more productive during certain times of day than others - does time of day affect bug introductions? - Module maturity: Which modules have the most issues? Are they worked on by more developers or fewer? Do defects keep aging for a long time before they are fixed? Of course, these questions will change depending on what you are working on. Hope this helps.
null
CC BY-SA 3.0
null
2014-11-26T21:20:52.973
2014-11-26T21:20:52.973
null
null
5182
null
2550
1
null
null
4
1159
I am doing a text classification task (5000 essays evenly distributed over 10 labels). I explored `LinearSVC` and got an accuracy of 80%. Now I wonder whether accuracy could be raised by using an `ensemble` classifier with `SVM` as the base estimator. However, I do not know how to employ an `ensemble` classifier incorporating all the features. Please note that I do not want to combine the different features directly in a single vector. Therefore, my first question: in order to improve the current accuracy, is it possible to use an `ensemble` classifier with `svm` as the base estimator? My second question: how do I employ an `ensemble` classifier incorporating all features?
How to ensemble classifier incorporating all features in python?
CC BY-SA 3.0
null
2014-11-27T03:21:11.110
2016-07-01T09:27:40.830
null
null
4950
[ "machine-learning", "python", "nlp", "scikit-learn", "ensemble-modeling" ]
2551
2
null
2474
0
null
If these data are available in the actual excel spreadsheet cells (ie, before you export them to the JSON format provided in your question), you can use the following to get them into R: - highlight the region of interest within excel - copy it to the clipboard (eg. Ctrl-C) - At an R prompt type: d <- read.delim('clipboard') The data will now be available as a data.frame in R. ``` d from response value 1 4 TRUE 20 2 8 TRUE 20 3 9 TRUE 20 4 3 TRUE 20 5 14 FALSE 20 6 15 TRUE 20 7 17 FALSE 20 8 13 TRUE 20 ```
null
CC BY-SA 3.0
null
2014-11-27T03:27:46.193
2014-11-27T03:27:46.193
null
null
5153
null
2552
2
null
2541
3
null
The standard way to evaluate a recommendation engine is by using the [RMSE (root mean square error)](http://en.wikipedia.org/wiki/Root-mean-square_deviation) between the predicted values and the ground truth. It is almost standard operating procedure that, after finishing developing a recommendation engine, we evaluate it by comparing its RMSE with that of other well-known, common recommendation algorithms such as [SVD](http://en.wikipedia.org/wiki/Singular_value_decomposition), [traditional CF](http://en.wikipedia.org/wiki/Collaborative_filtering), and even [RBM](http://en.wikipedia.org/wiki/Restricted_Boltzmann_machine), etc. Some terms mentioned above may not seem related to recommendation, but you can easily find on the internet how these techniques are used in this topic.
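For illustration only (not from the original answer; the ratings are made up), a minimal RMSE computation in Python, assuming you already have arrays of predicted and true ratings:

```
import numpy as np

# Hypothetical predicted ratings and ground-truth ratings for the same user/item pairs
predicted = np.array([3.5, 4.0, 2.1, 5.0, 3.3])
actual    = np.array([4.0, 4.0, 2.0, 4.5, 3.0])

# Root mean square error: square root of the mean squared difference
rmse = np.sqrt(np.mean((predicted - actual) ** 2))
print(rmse)
```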
null
CC BY-SA 3.0
null
2014-11-27T03:37:08.010
2014-11-27T03:37:08.010
null
null
5184
null
2553
2
null
2547
2
null
Yes, in principle, resampling can help answer this question. ``` incumbent <- c(7.3, 8.4, 8.4, 8.5, 8.7, 9.1, 9.8, 11.0, 11.1, 11.9) proposed <- c(17.3, 17.9, 19.2, 20.3, 20.5, 20.6, 21.1, 21.2, 21.3, 21.7) set.seed(42) M <- 2000 rs <- double(M) for (i in 1:M) { rs[i] <- mean(sample(proposed, replace=T)) - 2 * mean(sample(incumbent, replace=T)) } ``` To make the assessment, you should choose one (not both) of the following: A. The (two-tailed) 90% confidence interval for the difference in the (weighted) means using Hall's method is: ``` ci.hall <- 2 * (mean(proposed)-2*mean(incumbent)) - rev(quantile(rs,prob=c(0.05, 0.95))) names(ci.hall) <- rev(names(ci.hall)) ci.hall 5% 95% -0.29 2.95 ``` This is appropriate if you have any concern about missing the possibility that mean(proposed) might actually be less than 2 * mean(incumbent). B. The proportion of resample means >= 0 provides the (one-tailed) estimate that mean(proposed) is at least twice mean(incumbent): ``` sum(rs>=0)/M [1] 0.8915 ``` The problem is that the samples are really rather small and resampling estimates can be unstable for small n. The same issue applies if you want to assess normality and go with parametric comparisons. If you can get to, say, n >= 30, the approach described here should be fine.
null
CC BY-SA 3.0
null
2014-11-27T04:23:31.500
2014-11-27T05:03:58.203
2014-11-27T05:03:58.203
5153
5153
null
2555
2
null
641
3
null
Check these: Repository of Test Domains for Information Extraction: [http://www.isi.edu/info-agents/RISE/repository.html](http://www.isi.edu/info-agents/RISE/repository.html) DBpedia: [http://wiki.dbpedia.org/Downloads32](http://wiki.dbpedia.org/Downloads32) ([mirror](https://web.archive.org/web/20150415010508/http://wiki.dbpedia.org:80/Downloads32)) Link updated: [http://www.isi.edu/integration/RISE/](http://www.isi.edu/integration/RISE/) [https://github.com/dbpedia/extraction-framework/wiki/The-DBpedia-Data-Set](https://github.com/dbpedia/extraction-framework/wiki/The-DBpedia-Data-Set)
null
CC BY-SA 3.0
null
2014-11-27T07:21:56.277
2017-05-23T02:57:59.047
2017-05-23T02:57:59.047
843
5091
null
2557
2
null
2547
0
null
Here is what I programmed within a loop. - randomly take 10 values (with replacement) from the incumbent sample and determine its mean - randomly take 10 values (with replacement) from the proposed sample and determine its mean - form the ratio of the above two means and append it to a master list - repeat steps 1 through 3 many times (I chose 1 million) - % confidence = (number of ratios that equal or exceed 2.0 / 1,000,000) * 100 Results: Exactly 897,450 ratios were found to be greater than or equal to 2.0, producing a confidence of 89.745%. Conclusion: We are less than 90% confident that the proposed population will have a mean at least twice that of the incumbent population.
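As a rough sketch of the loop described above (my own assumption about the implementation, not the poster's actual code), in Python:

```
import random

# Sample data from the question
incumbent = [7.3, 8.4, 8.4, 8.5, 8.7, 9.1, 9.8, 11.0, 11.1, 11.9]
proposed = [17.3, 17.9, 19.2, 20.3, 20.5, 20.6, 21.1, 21.2, 21.3, 21.7]

M = 1_000_000          # number of bootstrap iterations (slow in pure Python, but fine)
hits = 0               # count of ratios that reach 2.0 or more

random.seed(42)
for _ in range(M):
    # resample each group with replacement and take the means
    inc_mean = sum(random.choices(incumbent, k=len(incumbent))) / len(incumbent)
    prop_mean = sum(random.choices(proposed, k=len(proposed))) / len(proposed)
    if prop_mean / inc_mean >= 2.0:
        hits += 1

confidence = 100 * hits / M
print(f"Estimated confidence: {confidence:.3f}%")
```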
null
CC BY-SA 3.0
null
2014-11-27T21:58:15.113
2014-11-27T21:58:15.113
null
null
5180
null
2558
1
4896
null
3
66
What are some possible techniques for smoothing proportions across very large categories, in order to take into account the sample size? The application of interest here is to use the proportions as input into a predictive model, but I am wary of using the raw proportions in cases where there is little evidence, and I don't want to overfit. Here is an example, where the ID denotes a customer, and impressions and clicks are the number of ads shown and the number of clicks the customer has made, respectively. ![enter image description here](https://i.stack.imgur.com/3oHzQ.jpg)
Smoothing Proportions :: Massive User Database
CC BY-SA 3.0
null
2014-11-28T03:02:44.323
2015-01-17T05:20:32.763
null
null
1138
[ "machine-learning", "predictive-modeling", "feature-extraction" ]
2559
2
null
2543
1
null
Data analysis is always driven by the request. It could be: "I want to find out this, so I need to collect those data first. Then I would use this model to analyze." If you just want to practice, here is one based on your data set: Task: Which issue affects the "number of check-ins" most? Data set: what you have Model: Correlation (e.g. Spearman, which is a nonparametric measure of statistical dependence between two variables)
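As a small illustrative sketch (the column names and numbers here are placeholders, not from the original post), the Spearman correlation could be computed like this in Python:

```
import pandas as pd

# Hypothetical table: one row per module, with an issue metric and check-in counts
df = pd.DataFrame({
    "open_bugs": [3, 10, 1, 7, 4, 12],
    "check_ins": [40, 15, 55, 22, 35, 10],
})

# Spearman rank correlation between the issue metric and the number of check-ins
rho = df["open_bugs"].corr(df["check_ins"], method="spearman")
print(rho)
```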
null
CC BY-SA 3.0
null
2014-11-28T06:02:59.210
2014-11-28T06:02:59.210
null
null
5198
null
2561
2
null
2527
2
null
I cannot say it is the best one, but Latent Semantic Analysis could be one option. Basically it is based on co-occurrence, and you need to weight the terms first. [http://en.wikipedia.org/wiki/Latent_semantic_analysis](http://en.wikipedia.org/wiki/Latent_semantic_analysis) [http://lsa.colorado.edu/papers/dp1.LSAintro.pdf](http://lsa.colorado.edu/papers/dp1.LSAintro.pdf) The problem is that LSA does not have firm statistical support. Have fun
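For illustration (an assumed setup, not from the original answer), a minimal LSA pipeline in Python using TF-IDF weighting plus truncated SVD from scikit-learn:

```
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

# A few toy documents standing in for your texts/articles
docs = [
    "the camera has a great lens and sensor",
    "this lens produces sharp photos",
    "the election results were announced today",
    "voters went to the polls for the election",
]

# Weight the terms (TF-IDF), then project into a low-dimensional latent space
tfidf = TfidfVectorizer(stop_words="english")
X = tfidf.fit_transform(docs)

lsa = TruncatedSVD(n_components=2, random_state=0)
X_lsa = lsa.fit_transform(X)

print(X_lsa.shape)   # (4, 2): each document as a 2-d latent vector
```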
null
CC BY-SA 3.0
null
2014-11-28T06:17:34.480
2014-11-28T06:17:34.480
null
null
5198
null
2562
2
null
2527
5
null
I don't know if you have ever read SenseClusters by Ted Pedersen: [http://senseclusters.sourceforge.net/](http://senseclusters.sourceforge.net/). Very good work on sense clustering. Also, when you analyze words, bear in mind that "computer", "computers", "computing", ... represent one concept, so they should count as only one feature. This is very important for a correct analysis. As for the clustering algorithm, you could use [hierarchical clustering](http://en.wikipedia.org/wiki/Hierarchical_clustering). At each step of the algorithm, you merge the 2 most similar texts according to their features (using a measure of dissimilarity, Euclidean distance for example). With that measure of dissimilarity, you are able to find the best number of clusters and so the best clustering for your texts and articles. Good luck :)
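A minimal hierarchical-clustering sketch in Python (an illustration with made-up feature vectors, not the original poster's data):

```
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical document feature vectors (e.g. TF-IDF values for a few terms)
X = np.array([
    [1.0, 0.2, 0.0],
    [0.9, 0.3, 0.1],
    [0.0, 0.1, 1.0],
    [0.1, 0.0, 0.9],
])

# Agglomerative clustering with Euclidean distance; 'average' linkage is one common choice
Z = linkage(X, method="average", metric="euclidean")

# Cut the dendrogram into 2 clusters
labels = fcluster(Z, t=2, criterion="maxclust")
print(labels)   # e.g. [1 1 2 2]
```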
null
CC BY-SA 3.0
null
2014-11-28T08:55:25.813
2014-11-28T08:55:25.813
null
null
5165
null
2563
1
null
null
3
88
I have a visualization problem. I am creating a comparison report of PR event efficiency - say, a show or exhibition. There are two dimensions of comparison: - compare against the same event's performance in past years - compare against other analogous/competitive events There are also a number of comparison aspects: - Audience - Media Coverage - Social Buzz - ROI - .... etc Each aspect is a set of final KPIs (just numbers, which can be compared across the other "dimensions"), plus maybe some descriptive text and pictures (which cannot be metrics but should be attached to the report). So finally it looks like a three-dimensional cube: - Years - Other Events - Aspects If I put it in plain Word or PPT, it will look like a document with dozens of slides/pages and a linear structure. Any ideas how to compile an elegant, user-friendly report?
Visualization of three-dimensional report
CC BY-SA 3.0
null
2014-11-28T09:19:32.867
2014-11-30T08:44:27.780
2014-11-30T08:44:27.780
97
97
[ "marketing", "infographics", "visualization" ]
2564
1
null
null
5
430
A short while ago, I came across this ML framework that has implemented several different algorithms ready for use. The site also provides a handy API that you can access with an API key. I need the framework to solve a website classification problem where I basically need to categorize several thousand websites based on their HTML content. As I don't want to be bound to their existing API, I wanted to use the framework to implement my own. However, besides some introductory-level data mining courses and associated reading, I know very little about what exactly I would need to use. Specifically, I'm at a loss as to what exactly I need to do to train the classifier and then model the data. The framework already includes some classification algorithms like NaiveBayes, which I know is well suited to the task of text classification, but I'm not exactly sure how to apply it to the problem. Can anyone give me rough guidelines as to what exactly I would need to do to accomplish this task?
Using the Datumbox Machine Learning Framework for website classification - guidelines?
CC BY-SA 3.0
null
2014-11-28T11:39:09.537
2019-12-31T13:02:20.290
null
null
5199
[ "machine-learning", "classification", "java" ]
2565
2
null
806
28
null
AUC and accuracy are fairly different things. AUC applies to binary classifiers that have some notion of a decision threshold internally. For example logistic regression returns positive/negative depending on whether the logistic function is greater/smaller than a threshold, usually 0.5 by default. When you choose your threshold, you have a classifier. You have to choose one. For a given choice of threshold, you can compute accuracy, which is the proportion of true positives and negatives in the whole data set. AUC measures how true positive rate (recall) and false positive rate trade off, so in that sense it is already measuring something else. More importantly, AUC is not a function of threshold. It is an evaluation of the classifier as threshold varies over all possible values. It is in a sense a broader metric, testing the quality of the internal value that the classifier generates and then compares to a threshold. It is not testing the quality of a particular choice of threshold. AUC has a different interpretation, and that is that it's also the probability that a randomly chosen positive example is ranked above a randomly chosen negative example, according to the classifier's internal value for the examples. AUC is computable even if you have an algorithm that only produces a ranking on examples. AUC is not computable if you truly only have a black-box classifier, and not one with an internal threshold. These would usually dictate which of the two is even available to a problem at hand. AUC is, I think, a more comprehensive measure, although applicable in fewer situations. It's not strictly better than accuracy; it's different. It depends in part on whether you care more about true positives, false negatives, etc. F-measure is more like accuracy in the sense that it's a function of a classifier and its threshold setting. But it measures precision vs recall (true positive rate), which is not the same as either above.
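As a small illustration (not part of the original answer; the labels and scores are made up), here is how accuracy at a fixed threshold and threshold-free AUC can be computed side by side in Python:

```
import numpy as np
from sklearn.metrics import accuracy_score, roc_auc_score

# Hypothetical true labels and classifier scores (e.g. logistic-regression probabilities)
y_true  = np.array([0, 0, 1, 1, 0, 1, 1, 0])
y_score = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.7, 0.55, 0.45])

# Accuracy requires picking a threshold (0.5 here)
y_pred = (y_score >= 0.5).astype(int)
print("accuracy @0.5:", accuracy_score(y_true, y_pred))

# AUC evaluates the ranking produced by the scores, over all thresholds
print("AUC:", roc_auc_score(y_true, y_score))
```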
null
CC BY-SA 3.0
null
2014-11-28T12:48:14.443
2014-11-28T12:48:14.443
null
null
21
null
2566
2
null
2538
3
null
NARMAX methodology and residual analysis both address this issue. Search for the following articles (Error = Residual = Noise): - Chaotic Time Series Prediction with Residual Analysis Method Using Hybrid Elman–NARX Neural Networks, Muhammad Ardalani-Farsa (2010) - Orthogonal Least Squares Methods and their Application to Non-Linear System Identification, S. Chen, S. A. Billings, W. Luo (1989) - Any article working on NARMAX, NARMA and residual analysis. Remember that in NARX and NAR there is no error estimation and analysis. In general you can follow these steps, as sketched after this list: - Estimate the time series and calculate the errors or residuals with any method. - Consider the errors or residuals as a new time series, and try to estimate this error time series. Now you can add these estimates to your initial model. - You can repeat this residual analysis as many times as you need. In practice 2 or 3 times suffices. Remember that in practice residual time series are noisy and the SNR in these series is small, so you should use noise-robust methods for the residual analysis.
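A rough sketch of the idea in Python, purely illustrative - the AR models, the synthetic data, and the use of statsmodels are my assumptions, not the methods from the cited papers:

```
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

# Hypothetical noisy time series
rng = np.random.default_rng(0)
y = np.sin(np.linspace(0, 20, 300)) + 0.3 * rng.standard_normal(300)

# Step 1: fit a first model and compute its in-sample residuals
model1 = AutoReg(y, lags=5).fit()
resid = y[5:] - model1.predict(start=5, end=len(y) - 1)

# Step 2: treat the residuals as a new series and model them too
model2 = AutoReg(resid, lags=5).fit()
resid_hat = model2.predict(start=5, end=len(resid) - 1)

# Step 3: add the residual-model estimates to the initial model's predictions
# (aligned on the overlapping range: original indices 10 .. end)
combined = model1.predict(start=10, end=len(y) - 1) + resid_hat
print(combined[:5])
```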
null
CC BY-SA 3.0
null
2014-11-28T13:16:02.520
2014-11-28T13:16:02.520
null
null
5200
null
2567
1
null
null
11
1306
I want to make a prediction for the result of the parliamentary elections. My output will be the % each party receives. There are more than 2 parties, so logistic regression is not a viable option. I could make a separate regression for each party, but in that case the results would be in some manner independent of each other. It would not ensure that the sum of the results is 100%. What regression (or other method) should I use? Is it possible to use this method in R or Python via a specific library?
What regression to use to calculate the result of election in a multiparty system?
CC BY-SA 3.0
null
2014-11-29T16:05:08.810
2017-08-09T15:35:01.097
2014-12-03T13:50:12.587
847
5211
[ "classification", "r", "python", "regression", "predictive-modeling" ]
2568
1
null
null
2
1748
I have some text files containing movie reviews, and I need to find out whether each review is good or bad. I tried the following code but it's not working:

```
import nltk

with open("c:/users/user/desktop/datascience/moviesr/movies-1-32.txt", 'r') as m11:
    mov_rev = m11.read()

mov_review1 = nltk.word_tokenize(mov_rev)

bon = "crap aweful horrible terrible bad bland trite sucks unpleasant boring dull moronic dreadful disgusting distasteful flawed ordinary slow senseless unoriginal weak wacky uninteresting unpretentious "
bag_of_negative_words = nltk.word_tokenize(bon)

bop = "Absorbing Big-Budget Brilliant Brutal Charismatic Charming Clever Comical Dazzling Dramatic Enjoyable Entertaining Excellent Exciting Expensive Fascinating Fast-Moving First-Rate Funny Highly-Charged Hilarious Imaginative Insightful Inspirational Intriguing Juvenile Lasting Legendary Pleasant Powerful Ripping Riveting Romantic Sad Satirical Sensitive Sentimental Surprising Suspenseful Tender Thought Provoking Tragic Uplifting Uproarious"
bop.lower()
bag_of_positive_words = nltk.word_tokenize(bop)

vec = []
for i in bag_of_negative_words:
    if i in mov_review1:
        vec.append(1)
    else:
        for w in bag_of_positive_words:
            if w in moview_review1:
                vec.append(5)
```

So I am trying to check whether the review contains a positive word or a negative word. If it contains a negative word then a value of 1 will be assigned to the vector vec, otherwise a value of 5 will be assigned. But the output I am getting is an empty vector. Please help. Also, please suggest other ways of solving this problem.
Sentiment analysis using python
CC BY-SA 4.0
null
2014-11-30T00:42:28.237
2021-04-01T08:23:10.203
2021-04-01T08:23:10.203
85045
5214
[ "python", "nlp", "sentiment-analysis" ]
2569
2
null
2567
5
null
Robert is right, multinomial logistic regression is the best tool to use. Although you would need to have an integer value representing the party as the dependent variable, for example: 1 = Conservative majority, 2 = Labour majority, 3 = Liberal majority....(and so on) You can perform this in R using the nnet package. [Here](https://stats.idre.ucla.edu/r/dae/multinomial-logistic-regression/) is a good place to quickly run through how to use it.
null
CC BY-SA 3.0
null
2014-11-30T20:01:23.930
2017-08-09T15:15:09.553
2017-08-09T15:15:09.553
8878
5219
null
2570
2
null
2567
3
null
On what do you want to base your prediction? For my thesis I tried to predict multiparty election results based on previous years, and then, using this year's results from some polling stations, predict the results in all other polling stations. The linear model I compared against estimated the number of votes each party would obtain by regressing on the votes from previous years. If you have the estimated number of votes for all parties, you can calculate the percentages from that. See [Forecasts From Nonrandom Samples](http://amstat.tandfonline.com/doi/abs/10.1198/016214504000001835) for the relevant paper, which extends the linear model.
null
CC BY-SA 3.0
null
2014-11-30T20:14:15.560
2017-08-09T15:35:01.097
2017-08-09T15:35:01.097
381
5220
null
2571
2
null
2499
1
null
The United States Census Bureau has many free housing datasets (some of which are updated more than once every 10 years). There is an [API for American Community Survey 1 Year Data](http://www.census.gov/data/developers/data-sets/acs-survey-1-year-data.html) that includes housing data. There are raw data sets at [American Fact Finder](http://factfinder2.census.gov/faces/nav/jsf/pages/searchresults.xhtml?refresh=t).
null
CC BY-SA 3.0
null
2014-12-01T00:23:30.580
2014-12-01T00:23:30.580
null
null
1330
null
2572
2
null
2499
1
null
There is real estate data for sale at [DataQuick](http://www.dataquick.com/) or [Real Quest](http://www.realquest.com/).
null
CC BY-SA 3.0
null
2014-12-01T00:26:39.927
2014-12-01T00:26:39.927
null
null
1330
null
2573
2
null
2568
0
null
Try:
```
vec = []

# collect a 1 for every negative word found in the review
for word in bag_of_negative_words:
    if word in mov_review1:
        vec.append(1)

# collect a 5 for every positive word found in the review
for word in bag_of_positive_words:
    if word in mov_review1:
        vec.append(5)
```
Note: make sure the variable is `mov_review1` - the `moview_review1` in your original code is a typo and is never defined.
null
CC BY-SA 3.0
null
2014-12-01T09:58:00.280
2014-12-01T09:58:00.280
null
null
5091
null
2574
2
null
660
1
null
I suggest using machine learning libraries that already ship a working linear regression implementation, such as [Spark MLlib](https://spark.apache.org/docs/1.1.0/mllib-guide.html) or [hivemall](https://github.com/myui/hivemall).
null
CC BY-SA 3.0
null
2014-12-01T12:23:03.083
2014-12-01T12:23:03.083
null
null
5224
null
2575
1
null
null
4
1849
In this [wiki page](https://cwiki.apache.org/confluence/display/Hive/LanguageManual+UDF) there is a function `corr()` that calculates the Pearson coefficient of correlation, but my question is: is there any function in Hive that can calculate the [Kendall coefficient](http://en.wikipedia.org/wiki/Kendall%27s_W) of correlation for a pair of numeric columns in a group?
Hive: How to calculate the Kendall coefficient of correlation of a pair of a numeric columns in the group?
CC BY-SA 3.0
null
2014-12-01T14:52:31.827
2015-04-28T21:46:01.880
2014-12-08T14:22:40.260
21
5224
[ "apache-hadoop", "correlation", "hive" ]
2576
1
null
null
5
630
I'm looking for API suggestions for enriching data on companies. Currently I use the Crunchbase API to look up a company's name or domain and I am trying to gather the domain/name (if I don't already have both), contact email (this one is a long shot), and the location of their headquarters. This works incredibly well if Crunchbase has the company in their API, but I'd say this only happens about 25% of the time. I'd love to get some suggestions on some free APIs that I could use along with Crunchbase. I'd also love to see if anyone has had positive or negative experiences with paid APIs!
API for Company Data Enrichment Suggestions
CC BY-SA 3.0
null
2014-12-01T20:10:29.967
2016-01-26T11:08:14.953
null
null
5227
[ "dataset" ]
2577
2
null
2567
2
null
This is not a regression but a multi-class classification problem. The output is typically the probabilities of all classes for any given test instance (test row). So in your case, the output for any given test row from the trained model will be of the form: ``` prob_1, prob_2, prob_3,..., prob_k ``` where prob_i denotes the probability of the i-th class (in your case i-th party), assuming there are k classes in the response variable. Note that the sum of these k probabilities is going to be 1. The class prediction in this case is going to be the class that has the maximum probability. There are many classifiers in R that do multi-class classification. You could use logistic regression with multi-class support through the [nnet](http://cran.r-project.org/web/packages/nnet/nnet.pdf) package in R and invoking the `multinom` command. As an alternative, you could also use the [gbm](http://cran.r-project.org/web/packages/gbm/gbm.pdf) package in R and invoke the `gbm` command. To create a multi-class classifier, just use `distribution="multinomial"` while using the `gbm` function.
null
CC BY-SA 3.0
null
2014-12-01T21:59:47.540
2014-12-01T22:18:54.130
2014-12-01T22:18:54.130
847
847
null
2578
1
null
null
2
281
I have a question regarding the choice of the better implementation. I would like to know the differences and advantages of [Mahout Apache](https://mahout.apache.org/) (Java implementation) versus [Graphlab](http://graphlab.com/index.html) (Python implementation) in the area of data science, especially in the area of recommenders and classifiers. Can anybody here give some (qualified) feedback about both options?
Graphlab vs Mahout
CC BY-SA 3.0
null
2014-12-02T10:58:30.950
2015-01-04T22:29:14.980
2014-12-03T07:43:18.727
3466
3281
[ "bigdata", "classification", "python", "recommender-system", "java" ]
2579
1
2594
null
5
2342
Which of the following is best (or most widely used) for calculating the item-item similarity measure in Mahout, and why? ``` Pearson Correlation Spearman Correlation Euclidean Distance Tanimoto Coefficient LogLikelihood Similarity ``` Is there any rule of thumb to choose from this set of algorithms, and how do they differ from each other?
Mahout Similarity algorithm comparison
CC BY-SA 3.0
null
2014-12-02T11:12:06.103
2014-12-11T06:16:59.327
null
null
5091
[ "machine-learning", "data-mining", "statistics", "algorithms", "recommender-system" ]
2580
2
null
2578
0
null
The advantages of Mahout are that it is scalable, Apache-licensed, and has good community and documentation support. It also allows fast prototyping and evaluation: to evaluate a different configuration of the same algorithm, we just need to update a parameter and run again.
null
CC BY-SA 3.0
null
2014-12-02T14:35:33.447
2014-12-02T16:00:07.753
2014-12-02T16:00:07.753
5091
5091
null
2581
1
null
null
7
2640
I am trying to match new product descriptions with the existing ones. A product description looks like this: Panasonic DMC-FX07EB digital camera silver. These are the steps to be performed: - Tokenize the description and recognize attributes: Panasonic => Brand, DMC-FX07EB => Model, etc. - Get a few candidates with similar features - Get the best candidate. I am having a problem with the first step (1). In order to get 'Panasonic => Brand', DMC-FX07EB => Model, silver => color, I need an index where each token of the product description corresponds to a certain attribute name (Brand, Model, Color, etc.) in the existing database. The problem is that in my database product descriptions are presented as one atomic attribute, e.g. 'description' (no separated product attributes). Basically I don't have training data, so I am trying to build an index of all product attributes so I can build training data. So far I have attributes from the bestbuy.com and semantics3.com APIs, but both sources lack most of the attributes or contain irrelevant ones. Any suggestions for better APIs to get product attributes? A better approach to do this? P.S. For every product there is a matched product description in the database, which is likewise in the form of one atomic attribute. I have checked this [question on SO](https://stackoverflow.com/questions/18496925/how-to-parse-product-titles-unstructured-into-structured-data); it helped me and it seems we have the same approach, but I am still trying to get training data.
Attributes extraction from unstructured product descriptions
CC BY-SA 3.0
null
2014-12-02T16:09:35.333
2014-12-03T14:35:06.510
2017-05-23T12:38:53.587
-1
5241
[ "machine-learning", "nlp", "feature-extraction" ]
2582
1
2595
null
11
2129
I am currently using SVM and scaling my training features to the range of [0,1]. I first fit/transform my training set and then apply the same transformation to my testing set. For example: ``` ### Configure transformation and apply to training set min_max_scaler = MinMaxScaler(feature_range=(0, 1)) X_train = min_max_scaler.fit_transform(X_train) ### Perform transformation on testing set X_test = min_max_scaler.transform(X_test) ``` Let's assume that a given feature in the training set has a range of [0,100], and that same feature in the testing set has a range of [-10,120]. In the training set that feature will be scaled appropriately to [0,1], while in the testing set that feature will be scaled to a range outside the one first specified, something like [-0.1,1.2]. I was wondering what the consequences are of the testing set features being outside the range of those used to train the model. Is this a problem?
Consequence of Feature Scaling
CC BY-SA 3.0
null
2014-12-02T16:19:19.043
2014-12-03T18:57:22.773
null
null
802
[ "machine-learning", "svm", "feature-scaling" ]
2583
2
null
2568
1
null
Try searching the database of official "bad words" that Google publishes at this link: [Google's official list of bad words](http://fffff.at/googles-official-list-of-bad-words/). Also, here is the link for the good words: [Not the official list of good words](http://www.enchantedlearning.com/wordlist/positivewords.shtml) For the code, I would do it like this:

```
import difflib

# Read the review text and the word lists; each file is split into words.
# Bad words should be listed like this for the split function to work:
# "*** ****** **** ****" (the stars are for the censorship :P)
textArray = open('dir_to_your_text', 'r').read().split()
badArray = open('dir_to_your_bad_word_file', 'r').read().split()
goodArray = open('dir_to_your_good_word_file', 'r').read().split()

# Use the matching algorithm from difflib on every (good/bad word, text word) pair
goodMatchingCounter = 0.0
badMatchingCounter = 0.0

for iGood in range(len(goodArray)):
    for iWord in range(len(textArray)):
        goodMatchingCounter += difflib.SequenceMatcher(None, goodArray[iGood], textArray[iWord]).ratio()

for iBad in range(len(badArray)):
    for iWord in range(len(textArray)):
        badMatchingCounter += difflib.SequenceMatcher(None, badArray[iBad], textArray[iWord]).ratio()

# Normalize by the number of comparisons to get percentages
goodMatchingCounter *= 100.0 / (len(goodArray) * len(textArray))
badMatchingCounter *= 100.0 / (len(badArray) * len(textArray))

print('Good measurement of the text in %: ' + str(goodMatchingCounter))
print('Bad measurement of the text in %: ' + str(badMatchingCounter))
print('Hotness of the text: ' + str(len(textArray) * goodMatchingCounter))
```

The code will be slow but accurate :) I didn't run and test it, so please try it and post back, because I want to test it too :)
null
CC BY-SA 4.0
null
2014-12-02T20:07:14.887
2021-03-30T20:43:19.287
2021-03-30T20:43:19.287
85045
null
null
2584
2
null
2581
5
null
Left you a quick response on SO. The gist is that you can collect a lot of information from electronics shops and manufacturers' web sites, and lots you can annotate manually. If your goal is to only get training data, that's all you need. My answer from the cross-post: "Having developed a commercial analyzer of this kind, I can tell you that there is no easy solution for this problem. But there are multiple shortcuts, especially if your domain is limited to cameras/electronics. Firstly, you should look at more sites. Many have the product brand annotated in the page (proper html annotations, bold font, all caps in the beginning of the name). Some sites have entire pages with brand selectors for search purposes. This way you can create a pretty good starter dictionary of brand names. Same with product line names and even with models. Alphanumeric models can be extracted in bulk by regular expressions and filtered pretty quickly. There are plenty of other tricks, but I'll try to be brief. Just a piece of advice here: there is always a trade-off between manual work and algorithms. Always keep in mind that both approaches can be mixed and both have return-on-invested-time curves, which people tend to forget. If your goal is not to create an automatic algorithm to extract product brands and models, this problem should have a limited time budget in your plan. You can realistically create a dictionary of 1000 brands in a day, and for decent performance on a known data source of electronic goods (we are not talking Amazon here, or are we?) a dictionary of 4000 brands may be all you need for your work. So do the math before you invest weeks into the latest neural network named entity recognizer."
null
CC BY-SA 3.0
null
2014-12-02T22:23:33.700
2014-12-03T14:35:06.510
2014-12-03T14:35:06.510
5249
5249
null
2585
2
null
2582
7
null
This was meant as a comment but it is too long. The fact that your test set has a different range might be a sign that the training set is not a good representation of the test set. However, if the difference is really small, as in your example, it is likely that it won't affect your predictions. Unfortunately, I don't think I have a good reason to believe it won't affect an SVM in any circumstance. Notice that the rationale for using MinMaxScaler is (according to the documentation): > The motivation to use this scaling include robustness to very small standard deviations of features and preserving zero entries in sparse data. Therefore, it is important for you to make sure that your data fits that case. If you are really concerned about having a different range, you should use regular standardization (such as `preprocessing.scale`) instead.
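To make the point concrete, a small sketch (with invented numbers, not from the original comment) showing how a min-max scaler fitted on the training data can push test values outside [0, 1], whereas standardization just shifts and rescales without fixed bounds:

```
import numpy as np
from sklearn.preprocessing import MinMaxScaler, StandardScaler

X_train = np.array([[0.0], [50.0], [100.0]])   # training range [0, 100]
X_test  = np.array([[-10.0], [120.0]])          # test range [-10, 120]

mm = MinMaxScaler(feature_range=(0, 1)).fit(X_train)
print(mm.transform(X_test).ravel())    # [-0.1  1.2] -> outside [0, 1]

ss = StandardScaler().fit(X_train)
print(ss.transform(X_test).ravel())    # standardized values, no fixed bounds
```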
null
CC BY-SA 3.0
null
2014-12-03T05:00:30.500
2014-12-03T05:00:30.500
null
null
4621
null
2586
1
2606
null
2
140
I'm very passionate about how computers can be made able to think intelligently and independently (in our favour, of course!). I'm currently studying a Bachelor of Science in Information Technology at UTS (University of Technology, Sydney). I have two months before I start my second year, and have not yet been able to decide which major I should select to lead me towards dedicated study of Artificial Intelligence (which I love with my life). I have the following majors available: - Internetworking and Applications - Data Analytics - (there are two others as well, but they are business-oriented). [Here](http://uts.edu.au) is the link to my subjects. I believe that being able to play with data is a sign of intelligence (I may be wrong too!). Will one of these majors give me a good foundation for further study in A.I.? Or should I jump into Engineering? Or pure Science?
Can data analytics be a basis for artificial intelligence?
CC BY-SA 3.0
null
2014-12-03T07:46:30.967
2014-12-12T23:48:02.230
2014-12-12T23:48:02.230
84
5185
[ "bigdata", "career" ]
2587
1
null
null
3
438
Is there a method/class available in Apache Mahout to perform n-fold cross validation? If yes, how can it be done?
N - fold cross validation in mahout
CC BY-SA 3.0
null
2014-12-03T11:05:59.490
2016-01-29T06:16:35.847
2014-12-03T13:50:48.697
21
5091
[ "machine-learning", "data-mining", "java", "apache-mahout" ]
2588
2
null
2313
4
null
The difference between these methods is the assumptions they make about the task. [Multi-class classification](http://en.wikipedia.org/wiki/Multiclass_classification) assumes that each document has exactly one label. So a document can either be about sports or weather, not both. [Multi-label classification](http://en.wikipedia.org/wiki/Multi-label_classification) allows a document to have any combination of labels, including none. So a document can be about only sports, only weather, sports AND weather, or neither. You could train a multi-label classifier with data where each document has exactly one label, but there is no guarantee that the predictions made at test time will have only one label. Also you are forcing the classifier to do more work (and potentially make more errors) by considering more possible labelings than it needs to. Therefore, if the multi-class assumption makes sense for your problem, you are better off with a multi-class classifier. The method that you describe for training individual binary classifiers corresponds to multi-label classification. The binary classifiers that you use could each be trained from one-class data or two-class data. However, this is only one of the many ways to do multi-label classification (see the wikipedia page above for more). Unfortunately, the problem that you describe does not cleanly fit into either multi-class or multi-label classification, since you want each document to have at most one label.
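For illustration (an assumed scikit-learn setup, not part of the original answer), the practical difference shows up in the shape of the label array and of the predictions:

```
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier

X = np.random.RandomState(0).rand(6, 4)

# Multi-class: each document has exactly one label
y_multiclass = np.array([0, 1, 2, 0, 1, 2])
clf = LogisticRegression().fit(X, y_multiclass)
print(clf.predict(X))          # one label per document

# Multi-label: each document has a binary indicator per label (any combination, including none)
Y_multilabel = np.array([[1, 0], [1, 1], [0, 0], [0, 1], [1, 0], [1, 1]])
ovr = OneVsRestClassifier(LogisticRegression()).fit(X, Y_multilabel)
print(ovr.predict(X))          # a 0/1 vector per document
```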
null
CC BY-SA 3.0
null
2014-12-03T13:21:34.153
2014-12-03T13:21:34.153
null
null
5263
null
2589
2
null
2525
5
null
I believe the claim that you are referring to is that the maximum-likelihood estimate of the component means in a GMM must lie in the span of the eigenvectors of the second moment matrix. This follows from two steps: - Each component mean in the maximum-likelihood estimate is a linear combination of the data points. (You can show this by setting the gradient of the log-likelihood function to zero.) - Any linear combination of the data points must lie in the span of the eigenvectors of the second moment matrix. (You can show this by first showing that any individual data point must lie in the span, and therefore any linear combination must also be in the span.)
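As a sketch of step 1 (my own filling-in of the standard derivation, not text from the original answer): writing r_ik for the posterior responsibility of component k for point x_i, setting the gradient of the log-likelihood with respect to mu_k to zero gives

```
\frac{\partial}{\partial \mu_k} \sum_i \log p(x_i)
  = \sum_i r_{ik}\, \Sigma_k^{-1} (x_i - \mu_k) = 0
\quad\Longrightarrow\quad
\mu_k = \frac{\sum_i r_{ik}\, x_i}{\sum_i r_{ik}}
```

so each maximum-likelihood component mean is indeed a linear combination of the data points, as stated in step 1.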
null
CC BY-SA 3.0
null
2014-12-03T13:55:16.150
2014-12-03T13:55:16.150
null
null
5263
null