Dataset column summary (name: type, observed range):

- Id: string, lengths 1-6
- PostTypeId: string, 6 distinct values
- AcceptedAnswerId: string, lengths 2-6
- ParentId: string, lengths 1-6
- Score: string, lengths 1-3
- ViewCount: string, lengths 1-6
- Body: string, lengths 0-32.5k
- Title: string, lengths 15-150
- ContentLicense: string, 2 distinct values
- FavoriteCount: string, 2 distinct values
- CreationDate: string, length 23
- LastActivityDate: string, length 23
- LastEditDate: string, length 23
- LastEditorUserId: string, lengths 1-6
- OwnerUserId: string, lengths 1-6
- Tags: list

Each record below lists its fields in this order.
1002
1
1006
null
10
1420
Caveat: I am a complete beginner when it comes to machine learning, but eager to learn. I have a large dataset and I'm trying to find patterns in it. There may or may not be correlations across the data, either with known variables or with variables that are contained in the data but which I haven't yet realised are actually variables / relevant. I'm guessing this is a familiar problem in the world of data analysis, so I have a few questions:

- The 'silver bullet' would be to throw all this data into a stats / data analysis program and have it crunch the data looking for known / unknown patterns and relations. Is SPSS suitable, or are there other applications which may be better suited?
- Should I learn a language like R and figure out how to manually process the data? Wouldn't this compromise finding relations, as I would have to manually specify what to analyse and how?
- How would a professional data miner approach this problem, and what steps would s/he take?
What initial steps should I use to make sense of large data sets, and what tools should I use?
CC BY-SA 3.0
null
2014-08-19T17:50:52.583
2020-08-16T18:02:41.913
2016-07-17T14:45:20.330
9420
2861
[ "machine-learning", "data-mining", "tools", "beginner" ]
1003
1
null
null
3
221
I recently read [Similarity Measures for Short Segments of Text](http://research.microsoft.com/en-us/um/people/sdumais/ecir07-metzlerdumaismeek-final.pdf) (Metzler et al.). It describes basic methods for measuring query similarity, and in the paper, the data consists of queries and their top results. Results are lists of page urls, page titles, and short page snippets. In the paper, the authors collect 200 results per query. When using the public Google APIs to retrieve results, I was only able to collect 4-10 results per query. There's a substantial difference between 10 and 200. Hence, how much data is commonly used in practice to measure query similarity (e.g., how many results per query)? References are a plus!
Query similarity: how much data is used in practice?
CC BY-SA 3.0
null
2014-08-19T18:59:03.013
2015-05-24T22:46:06.903
2014-08-20T12:14:01.593
97
1097
[ "machine-learning", "dataset", "text-mining", "search" ]
1005
2
null
997
3
null
Another idea is to combine OpenStreetMap project map data, for example, using corresponding nice R package ([http://www.r-bloggers.com/the-openstreetmap-package-opens-up](http://www.r-bloggers.com/the-openstreetmap-package-opens-up)), with census data (population census data, such as the US data: [http://www.census.gov/data/data-tools.html](http://www.census.gov/data/data-tools.html), as well as census data in other categories: [http://national.census.okfn.org](http://national.census.okfn.org)) to analyze temporal patterns of geosocial trends. HTH.
null
CC BY-SA 3.0
null
2014-08-20T03:06:01.753
2014-08-20T03:06:01.753
null
null
2452
null
1006
2
null
1002
11
null
I will try to answer your questions, but first I'd like to note that using the term "large dataset" is misleading, as "large" is a relative concept. You have to provide more details. If you're dealing with big data, then this fact will most likely affect the selection of preferred tools, approaches and algorithms for your data analysis. I hope that the following thoughts of mine on data analysis address your sub-questions. Please note that the numbering of my points does not match the numbering of your sub-questions. However, I believe that it better reflects the general data analysis workflow, at least as I understand it.

- Firstly, I think that you need to have at least some kind of conceptual model in mind (or, better, on paper). This model should guide you in your exploratory data analysis (EDA). The presence of a dependent variable (DV) in the model means that in your machine learning (ML) phase later in the analysis you will deal with so-called supervised ML, as opposed to unsupervised ML in the absence of an identified DV.
- Secondly, EDA is a crucial part. IMHO, EDA should include multiple iterations of producing descriptive statistics and data visualization, as you refine your understanding of the data. Not only will this phase give you valuable insights about your datasets, but it will feed your next important phase: data cleaning and transformation. Just throwing your raw data into a statistical software package won't give much - for any valid statistical analysis, data should be clean, correct and consistent. This is often the most time- and effort-consuming, but absolutely necessary, part. For more details on this topic, read this nice paper (by Hadley Wickham) and this one (by Edwin de Jonge and Mark van der Loo).
- Now, as you're hopefully done with EDA as well as data cleaning and transformation, you're ready to start some more statistically involved phases. One such phase is exploratory factor analysis (EFA), which will allow you to extract the underlying structure of your data. For datasets with a large number of variables, the positive side effect of EFA is dimensionality reduction. And, while in that sense EFA is similar to principal component analysis (PCA) and other dimensionality reduction approaches, I think that EFA is more important, as it allows you to refine your conceptual model of the phenomena that your data "describe", thus making sense of your datasets. Of course, in addition to EFA, you can/should perform regression analysis as well as apply machine learning techniques, based on your findings in previous phases.

Finally, a note on software tools. In my opinion, the current state of statistical software packages is at such a point that practically all major software packages have comparable offerings feature-wise. If you study or work in an organization that has certain policies and preferences in terms of software tools, then you are constrained by them. However, if that is not the case, I would heartily recommend open source statistical software, based on your comfort with its specific programming language, learning curve and your career perspectives. My current platform of choice is the R Project, which offers mature, powerful, flexible, extensive and open statistical software, along with an amazing ecosystem of packages, experts and enthusiasts. Other nice choices include Python, Julia and specific open source software for processing big data, such as Hadoop, Spark, NoSQL databases and WEKA.
For more examples of open source software for data mining, which include general and specific statistical and ML software, see this section of a [Wikipedia page](http://en.wikipedia.org/wiki/Data_mining#Free_open-source_data_mining_software_and_applications). UPDATE: Forgot to mention [Rattle](http://rattle.togaware.com), which is also a very popular open source R-oriented GUI for data mining.
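To make the EDA phase above concrete, here is a minimal sketch in Python with pandas; the file name `data.csv` and its columns are placeholders for your own dataset, not anything from the question.

```python
import pandas as pd

# Load the raw data; "data.csv" is a placeholder path.
df = pd.read_csv("data.csv")

# Descriptive statistics for every numeric column:
# count, mean, std, min, quartiles, max.
print(df.describe())

# How much is missing, per column - this feeds the cleaning phase.
print(df.isna().sum())

# Pairwise correlations between numeric columns, as a quick way
# to spot candidate relationships worth modelling further.
print(df.select_dtypes("number").corr())
```

A few histograms (`df.hist()`) or a scatter matrix on top of this usually reveal the obvious patterns before any modelling is attempted.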
null
CC BY-SA 4.0
null
2014-08-20T05:43:08.610
2020-08-16T18:02:41.913
2020-08-16T18:02:41.913
98307
2452
null
1007
1
1008
null
-3
1376
I want to scrape some data from a website. I have used import.io but I am still not satisfied with it. Can any of you make a suggestion? What is the best tool to get unstructured data from the web?
Looking for Web scraping tool for unstructured data
CC BY-SA 3.0
null
2014-08-20T14:12:03.870
2015-01-06T01:26:41.587
2014-08-21T11:52:35.660
471
867
[ "tools", "crawling" ]
1008
2
null
1007
3
null
Try BeautifulSoup - [http://www.crummy.com/software/BeautifulSoup/](http://www.crummy.com/software/BeautifulSoup/) From the website: "Beautiful Soup is a Python library designed for quick turnaround projects like screen-scraping." I have not personally used it, but it often comes up as a nice library for scraping. Here's a blog post on using it to scrape Craigslist: [http://www.gregreda.com/2014/07/27/scraping-craigslist-for-tickets/](http://www.gregreda.com/2014/07/27/scraping-craigslist-for-tickets/)
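As a rough illustration of what such a scrape looks like, here is a minimal sketch using `requests` plus BeautifulSoup; the URL and the CSS class are made-up placeholders rather than anything from the question.

```python
import requests
from bs4 import BeautifulSoup

# Placeholder URL - substitute the page you actually want to scrape.
url = "https://example.com/listings"
html = requests.get(url, timeout=10).text

soup = BeautifulSoup(html, "html.parser")

# Pull the text and target of every link inside elements with a
# (hypothetical) "item" class; adapt the selectors to the real page.
for item in soup.find_all("div", class_="item"):
    link = item.find("a")
    if link is not None:
        print(link.get_text(strip=True), link.get("href"))
```

The parser turns even messy HTML into a navigable tree, which is exactly the point made in the other answer about using a parser rather than hand-rolled string matching.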
null
CC BY-SA 3.0
null
2014-08-20T15:34:00.830
2014-08-20T15:34:00.830
null
null
403
null
1009
2
null
997
9
null
If you have R and the `spacetime` package then you are only `data(package="spacetime")` away from a list of space-time data sets bundled with the package:

```
Data sets in package 'spacetime':

DE_NUTS1 (air)   Air quality data, rural background PM10 in Germany, daily averages 1998-2009
fires            Northern Los Angeles County Fires
rural (air)      Air quality data, rural background PM10 in Germany, daily averages 1998-2009
```

then for example:

```
> data(fires)
> str(fires)
'data.frame':   313 obs. of  3 variables:
 $ Time: int  5863 5870 6017 6018 6034 6060 6176 6364 6366 6372 ...
 $ X   : num  63.9 64.3 64.1 64 64.4 ...
 $ Y   : num  19.4 20.1 19.7 19.8 20.3 ...
```
null
CC BY-SA 3.0
null
2014-08-20T15:53:29.403
2014-08-20T15:53:29.403
null
null
471
null
1010
2
null
1007
2
null
You don't mention what language you're programming in (please consider adding it as a tag), so as general help I would suggest seeking out an HTML parser and using that to pull the data. Some websites have simply awful HTML and can be very difficult to scrape, and just when you think you have it... An HTML parser will parse all the HTML and allow you to access it in a structured sort of way, whether that's from an array, an object, etc.
null
CC BY-SA 3.0
null
2014-08-20T19:08:39.807
2014-08-20T19:08:39.807
null
null
2861
null
1013
1
1037
null
2
173
I've just started reading about A/B testing as it pertains to optimizing website design. I find it interesting that most of the methods assume that changes to the layout and appearance are independent of each other. I understand that the most common method of optimization is the ['multi-armed bandit'](http://en.wikipedia.org/wiki/Multi-armed_bandit) procedure. While I grasp the concept of it, it seems to ignore the fact that changes (changes to the website in this case) are not independent of each other. For example, if a company is testing the placement and color of the logo on the website, they might find the optimal color first and then the optimal placement. Not that I'm some expert on human psychology, but shouldn't these be related? Can the multi-armed bandit method be efficiently used in this case or in more complicated cases? My first instinct is to say no. On that note, why haven't people used heuristic algorithms to optimize over complicated A/B testing sample spaces? For example, I thought someone might have used a genetic algorithm to optimize a website layout, but I can find no examples of anything like this out there. This leads me to believe that I'm missing something important in my understanding of A/B testing as it applies to website optimization. Why isn't heuristic optimization used on more complicated websites?
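For reference, the multi-armed bandit procedure mentioned above can be sketched in a few lines of Python; the epsilon-greedy variant below uses made-up conversion rates for three hypothetical page variants.

```python
import random

# Hypothetical true conversion rates of three page variants (unknown in practice).
true_rates = [0.04, 0.05, 0.06]

counts = [0] * len(true_rates)     # times each variant was shown
rewards = [0.0] * len(true_rates)  # conversions observed per variant
epsilon = 0.1                      # exploration probability

for _ in range(10_000):
    if random.random() < epsilon:
        arm = random.randrange(len(true_rates))  # explore a random variant
    else:
        means = [rewards[i] / counts[i] if counts[i] else 0.0
                 for i in range(len(true_rates))]
        arm = max(range(len(true_rates)), key=means.__getitem__)  # exploit the best so far
    counts[arm] += 1
    rewards[arm] += 1.0 if random.random() < true_rates[arm] else 0.0

print(counts)  # the best variant should end up with most of the traffic
```

Note that each "arm" here is a whole variant; treating color and placement as separate arms is exactly the independence assumption the question objects to, and capturing interactions would need a combinatorial bandit or a full-factorial design.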
Using Heuristic Methods for AB Testing
CC BY-SA 3.0
null
2014-08-20T21:12:49.927
2014-08-25T16:52:44.817
null
null
375
[ "optimization", "consumerweb" ]
1015
1
null
null
2
251
I have installed [Drake](https://github.com/Factual/drake) on Windows 7 64-bit. I am using JDK 1.7.0_51. I tried both using the pre-compiled jar file and compiling from the Clojure source using [leiningen](https://github.com/technomancy/leiningen). The resulting Drake version is 0.1.6, the current development version. When running Drake, I get the current version number. Next, I tried to go through [the tutorial](https://github.com/Factual/drake/wiki/Tutorial). The command:

```
java -jar drake.jar -w .\workflow.d
```

results in the following exception:

```
java.lang.Exception: no input data found in locations: D:\tools\drake\in.csv
```

even though the file exists and has text inside it. The same scenario works in a similar installation on Ubuntu 12.04. Am I doing something wrong, or is this a Windows-specific bug?
Making Factual drake work on Windows 7 64-bit
CC BY-SA 3.0
null
2014-08-21T06:31:50.197
2014-08-21T06:31:50.197
null
null
895
[ "tools" ]
1017
1
null
null
3
127
I'm studying reinforcement learning in order to implement a kind of time series pattern analyzer, for example for a market. Most examples I have seen are based on a maze environment. But in a real market environment the signal changes endlessly as time passes, and I cannot figure out how to model the environment and states. Another question is about buy-sell modeling. Let's assume that the agent randomly buys at time $t$ and sells at time $t + \alpha$. It's simple to calculate the reward. The problem is how to model the $Q$ matrix and how to model the signals between the buy and sell actions. Can you share some source code or guidance for a similar situation?
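For orientation, the tabular Q-learning bookkeeping that underlies most maze examples can be adapted to a discretised price signal; the sketch below (Python) uses a deliberately crude state (the sign of the last price change), an invented action set and toy prices, so it illustrates the mechanics only, not a trading strategy.

```python
import random
from collections import defaultdict

actions = ["hold", "buy", "sell"]
alpha, gamma, epsilon = 0.1, 0.95, 0.1

# Q "matrix" as a table: state -> action -> value. The state here is just the
# sign of the last price change, a deliberately crude illustrative choice.
Q = defaultdict(lambda: {a: 0.0 for a in actions})

def state_of(prev_price, price):
    return "up" if price > prev_price else "down"

def q_update(state, action, reward, next_state):
    # Standard Q-learning: move Q(s,a) toward r + gamma * max_a' Q(s',a').
    best_next = max(Q[next_state].values())
    Q[state][action] += alpha * (reward + gamma * best_next - Q[state][action])

# Toy price series; the reward is the per-step price change while holding a position.
prices = [100, 101, 103, 102, 104, 103, 105]
position = 0  # 0 = flat, 1 = long
for t in range(1, len(prices) - 1):
    s = state_of(prices[t - 1], prices[t])
    a = random.choice(actions) if random.random() < epsilon else max(Q[s], key=Q[s].get)
    if a == "buy":
        position = 1
    elif a == "sell":
        position = 0
    reward = position * (prices[t + 1] - prices[t])
    q_update(s, a, reward, state_of(prices[t], prices[t + 1]))
```

The real modelling work is in `state_of`: a useful state would summarise the recent signal (returns over several windows, indicators, current position) rather than one price change.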
How can I model open environment in reinforcement learning?
CC BY-SA 3.0
null
2014-08-21T10:13:54.130
2016-01-18T14:36:33.730
2016-01-18T14:36:33.730
8820
3030
[ "machine-learning", "reinforcement-learning" ]
1019
2
null
810
9
null
I think it always depends on the scenario. Using a representative data set is not always the solution. Assume that your training set has 1000 negative examples and 20 positive examples. Without any modification, your classifier will tend to classify all new examples as negative. In some scenarios this is OK, but in many cases the cost of missing positive examples is high, so you have to find a solution for it. In such cases you can use a cost-sensitive machine learning algorithm, for example in medical diagnosis data analysis. In summary: classification errors do not have the same cost!
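One concrete way to make a classifier cost-sensitive, sketched here with scikit-learn (my choice of library, not the answerer's): most of its estimators accept a `class_weight` argument that penalises mistakes on the rare class more heavily.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy imbalanced data: 1000 negatives, 20 positives, one noisy feature.
rng = np.random.default_rng(0)
X = np.r_[rng.normal(0, 1, (1000, 1)), rng.normal(2, 1, (20, 1))]
y = np.r_[np.zeros(1000), np.ones(20)]

# "balanced" reweights each class inversely to its frequency; an explicit
# dict such as {0: 1, 1: 50} lets you encode your actual misclassification costs.
clf = LogisticRegression(class_weight="balanced").fit(X, y)
print(clf.predict([[1.5]]))
```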
null
CC BY-SA 3.0
null
2014-08-22T09:03:13.333
2017-07-25T23:18:26.250
2017-07-25T23:18:26.250
10806
979
null
1020
1
null
null
4
84
I am hoping to model the characteristics of the users of a specific page on Facebook, which has roughly 2 million likes. I have been looking at the Facebook SDK/API, but I can't really see whether what I would like to do is possible. It seems that users share quite different amounts of data, so I will probably discard a lot of users and only use the ones with a fairly open public profile. I would like to have the following data:

1) The individuals that have 'liked' the page.
2) The list of friends for each person that has 'liked' the page.
3) The gender of each person (optional).
4) Other pages that each person has liked (optional).

Could anyone tell me if it is possible to get this data? As mentioned earlier, it is okay if I discard data for users that don't want to share this data.
Available data about 'likers' of a page on Facebook
CC BY-SA 3.0
null
2014-08-22T14:01:23.640
2014-10-22T09:51:35.993
2014-08-23T09:22:19.073
21
3044
[ "social-network-analysis" ]
1021
1
null
null
10
403
I have thousands of lists of strings, and each list has about 10 strings. Most strings in a given list are very similar, though some strings are (rarely) completely unrelated to the others and some strings contain irrelevant words. They can be considered to be noisy variations of a canonical string. I am looking for an algorithm or a library that will convert each list into this canonical string. Here is one such list.

- Star Wars: Episode IV A New Hope | StarWars.com
- Star Wars Episode IV - A New Hope (1977)
- Star Wars: Episode IV - A New Hope - Rotten Tomatoes
- Watch Star Wars: Episode IV - A New Hope Online Free
- Star Wars (1977) - Greatest Films
- [REC] 4 poster promises death by outboard motor - SciFiNow

For this list, any string matching the regular expression `^Star Wars:? Episode IV (- )?A New Hope$` would be acceptable. I have looked at Andrew Ng's course on Machine Learning on Coursera, but I was not able to find a similar problem.
Extract canonical string from a list of noisy strings
CC BY-SA 3.0
null
2014-08-22T15:59:07.097
2014-08-25T08:11:49.307
2014-08-25T08:11:49.307
3047
3047
[ "nlp", "similarity", "information-retrieval" ]
1022
2
null
1020
3
null
I want to wish you good luck. Some time ago I faced the same problem, but I didn't find any satisfying solution. First of all, there is no way to get the list of users who "liked" a particular page, even if you are an administrator of that page (I was). One can only get a list of the last three to five hundred users. Friendship data for most of the users is also inaccessible. It looks like gender is the only thing from your list that you can get. Data about the pages that a given user "likes" should be available (as it's written in the docs), but in reality, through the API you can collect something only for friends and friends-of-friends, even though this data is available through the web interface. So the only way is to try a dirty trick with parsing and scraping (but remember that I didn't advise it ;) ).
null
CC BY-SA 3.0
null
2014-08-22T18:27:04.377
2014-08-22T18:27:04.377
null
null
941
null
1023
2
null
1021
4
null
As a naive solution I would suggest first selecting the strings which contain the most frequent tokens inside the list. In this way you can get rid of irrelevant strings. In the second phase I would do majority voting. Consider these 3 sentences:

- Star Wars: Episode IV A New Hope | StarWars.com
- Star Wars Episode IV - A New Hope (1977)
- Star Wars: Episode IV - A New Hope - Rotten Tomatoes

I would go through the tokens one by one. We start with "Star". It wins, as all the strings start with it. "Wars" will also win. The next one is ":". It will also win. All the tokens will win in majority voting until "Hope". The next token after "Hope" will be either "|", "(" or "-". None of them will win in majority voting, so I would stop there. Another solution would probably be to use the [longest common subsequence](http://en.wikipedia.org/wiki/Longest_common_subsequence_problem). As I said, I have not thought about it much, so there might be much better solutions to your problem :-)
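A minimal sketch of that token-wise majority vote in plain Python; the whitespace tokenisation and the stop-at-no-majority rule are my own simplifying assumptions, and the positional alignment only happens to work for strings as similar as these (a proper alignment, e.g. via longest common subsequence, would be needed for messier data).

```python
from collections import Counter

titles = [
    "Star Wars: Episode IV A New Hope | StarWars.com",
    "Star Wars Episode IV - A New Hope (1977)",
    "Star Wars: Episode IV - A New Hope - Rotten Tomatoes",
]

token_lists = [t.split() for t in titles]
canonical = []
for position in range(min(len(toks) for toks in token_lists)):
    token, votes = Counter(toks[position] for toks in token_lists).most_common(1)[0]
    if votes <= len(token_lists) // 2:  # no majority at this position: stop
        break
    canonical.append(token)

print(" ".join(canonical))
```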
null
CC BY-SA 3.0
null
2014-08-23T09:19:08.577
2014-08-23T09:19:08.577
null
null
979
null
1024
1
1030
null
3
194
I've fit a GLM (Poisson) to a data set where one of the variables is categorical for the year a customer bought a product from my company, ranging from 1999 to 2012. There's a linear trend of the coefficients for the values of the variable as the year of sale increases. Is there any problem with trying to improve predictions for 2013 and maybe 2014 by extrapolating to get the coefficients for those years?
Extrapolating GLM coefficients for year a product was sold into future years?
CC BY-SA 3.0
null
2014-08-23T13:47:01.907
2014-08-26T11:36:01.420
2014-08-26T11:36:01.420
21
1241
[ "statistics", "glm", "regression" ]
1025
1
null
null
10
630
I have been developing a chess program which makes use of the alpha-beta pruning algorithm and an evaluation function that evaluates positions using the following features: material, king safety, mobility, pawn structure, trapped pieces, etc. My evaluation function is

$$f(p) = w_1 \cdot \text{material} + w_2 \cdot \text{kingsafety} + w_3 \cdot \text{mobility} + w_4 \cdot \text{pawn-structure} + w_5 \cdot \text{trapped pieces}$$

where $w_i$ is the weight assigned to each feature. At this point I want to tune the weights of my evaluation function using temporal difference learning, where the agent plays against itself and in the process gathers training data from its environment (which is a form of reinforcement learning). I have read some books and articles in order to get an insight into how to implement this in Java, but they seem to be theoretical rather than practical. I need a detailed explanation and pseudocode on how to automatically tune the weights of my evaluation function based on previous games.
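For what it's worth, the core bookkeeping of temporal-difference tuning for a linear evaluation function fits in a short sketch. The version below is Python rather than the Java asked about, and the feature extractor, learning rate and lambda are placeholder assumptions of mine, loosely in the spirit of TD(lambda)-style self-play tuning rather than a verified implementation.

```python
import numpy as np

ALPHA = 0.01   # learning rate
LAMBDA = 0.7   # trace-decay parameter

def features(position):
    # Placeholder: should return [material, king_safety, mobility,
    # pawn_structure, trapped_pieces] for the given position.
    return np.zeros(5)

def evaluate(position, w):
    return float(np.dot(w, features(position)))

def td_update(w, positions, final_result):
    # One self-play game: `positions` are the states the agent saw in order,
    # `final_result` is +1 / 0 / -1 from the agent's point of view.
    values = [evaluate(p, w) for p in positions] + [float(final_result)]
    for t in range(len(positions)):
        grad = features(positions[t])  # gradient of a linear evaluator is just the features
        for j in range(t, len(positions)):
            delta = values[j + 1] - values[j]           # temporal-difference error at step j
            w += ALPHA * (LAMBDA ** (j - t)) * delta * grad  # decayed credit back to step t
    return w

w = np.zeros(5)
# After each self-play game: w = td_update(w, game_positions, result)
```

In practice the values would come from the alpha-beta search (evaluating the leaf of the principal variation, as TDLeaf does), but the weight-update arithmetic is the part shown here.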
implementing temporal difference in chess
CC BY-SA 3.0
null
2014-08-23T13:56:43.813
2016-03-29T04:52:04.283
2016-01-18T14:36:50.730
8820
3052
[ "machine-learning", "algorithms", "reinforcement-learning" ]
1026
2
null
810
3
null
I think there are two separate issues to consider: training time and prediction accuracy. Take a simple example: suppose you have two classes that follow a multivariate normal distribution. Basically, you need to estimate the respective class means and class covariances. Now the first thing you care about is your estimate of the difference in the class means, but your performance is limited by the accuracy of the worst estimated mean: it's no good estimating one mean to the 100th decimal place if the other mean is only estimated to 1 decimal place. So it's a waste of computing resources to use all the data - you can instead undersample the more common class AND reweight the classes appropriately. (Those computing resources can then be used to explore different input variables, etc.) The second issue is predictive accuracy: different algorithms use different error metrics, which may or may not agree with your own objectives. For example, logistic regression will penalize overall probability error, so if most of your data is from one class, then it will tend to improve the probability estimates (e.g. 90% vs 95% probability) of that one class rather than trying to identify the rare class. In that case, you would definitely want to reweight to emphasize the rare class (and subsequently adjust the estimate [by adjusting the bias term] to get the probability estimates realigned).
null
CC BY-SA 3.0
null
2014-08-23T14:15:58.660
2016-11-29T10:18:41.597
2016-11-29T10:18:41.597
1256
1256
null
1027
2
null
1025
2
null
As a first remark, you should watch 'WarGames' to know what you're getting yourself into. What you want is an f(p) that is as close as possible to the strength of the position. A very simple solution using a genetic algorithm would be to set up 10,000 players with different weights and see which win. Then keep the top 1,000 winners' weights, copy them 10 times, alter them slightly to explore the weight space, and run the simulation again. That's a standard GA: given a functional form, find the best coefficients for it. Another solution is to extract positions, so you have a table '(material, kingsafety, mobility, pawn-structure, trappedpieces) -> goodness of position', where goodness of position is some objective measure (win/lose outcome computed using the simulations above or known matches, depth of the available tree, number of moves under the tree where one of the 5 factors gets better). You can then try different functional forms for your f(p): regression, SVM, etc.
null
CC BY-SA 3.0
null
2014-08-23T15:25:44.903
2014-08-23T15:25:44.903
null
null
3053
null
1028
1
2315
null
37
55112
I have been reading around about random forests but I cannot really find a definitive answer about the problem of overfitting. According to Breiman's original paper, they should not overfit when increasing the number of trees in the forest, but it seems that there is no consensus about this. This is causing me quite some confusion about the issue. Maybe someone more expert than me can give me a more concrete answer or point me in the right direction to better understand the problem.
Do Random Forests overfit?
CC BY-SA 3.0
null
2014-08-23T16:54:06.380
2021-02-08T02:11:50.497
null
null
3054
[ "machine-learning", "random-forest" ]
1029
1
null
null
-3
1119
Which will be the dominant programming language for analytics and machine learning over the next 5 years: R versus Python versus SAS? What are the advantages and disadvantages of each?
Which will be the dominant programming language for analytics and machine learning over the next 5 years: R, Python or SAS?
CC BY-SA 3.0
0
2014-08-23T19:34:09.417
2014-08-24T11:03:08.773
null
null
3057
[ "machine-learning", "r", "python" ]
1030
2
null
1024
4
null
I believe that this is a case for applying time series analysis, in particular time series forecasting ([http://en.wikipedia.org/wiki/Time_series](http://en.wikipedia.org/wiki/Time_series)). Consider the following resources on time series regression: - http://www.wiley.com/WileyCDA/WileyTitle/productCd-0471363553.html - http://www.stats.uwo.ca/faculty/aim/tsar/tsar.pdf (especially section 4.6) - http://arxiv.org/abs/0802.0219 (Bayesian approach)
null
CC BY-SA 3.0
null
2014-08-23T20:05:38.700
2014-08-23T20:37:43.057
2014-08-23T20:37:43.057
2452
2452
null
1031
2
null
1029
2
null
There is a great [survey](http://blog.revolutionanalytics.com/2014/01/in-data-scientist-survey-r-is-the-most-used-tool-other-than-databases.html) published by O'Reilly, collected at Strata. You can see that SAS is not widely popular, and there is no reason why that should change at this point, so one can rule it out. R is barely ahead of Python, 43% vs 41%. You can find many blogs describing the rise of Python in data science. I would go with Python in the near future. But 5 years is a very long time. I think Golang will steal a lot of developers from Python in general, and this might spill over to data science usage as well. Code can be written to execute in parallel very easily, which makes it a perfect vehicle for big data processing. [Julia's](http://julialang.org/) benchmarks for technical computing are even more impressive, and you can have IPython-like tooling with IJulia. Hence Python is likely to lose some steam to both. But there are ways to call Julia functions from R and Python, so you can experiment using the best sides of each.
null
CC BY-SA 3.0
null
2014-08-24T00:46:03.813
2014-08-24T00:46:03.813
null
null
3051
null
1032
2
null
1028
12
null
You may want to check [Cross Validated](https://stats.stackexchange.com/) - a Stack Exchange site for many things, including machine learning. In particular, this question (with exactly the same title) has already been answered multiple times. Check [these links](https://stats.stackexchange.com/search?q=random%20forest%20overfit). But I can give you the short answer: yes, it does overfit, and sometimes you need to control the complexity of the trees in your forest, or even prune them when they grow too much - but this depends on the library you use for building the forest. E.g., in `randomForest` in R you can only control the complexity.
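As an illustration of controlling tree complexity (shown with Python's scikit-learn as a stand-in for whatever library you use; the R `randomForest` package exposes different knobs), limiting depth and leaf size constrains how far each tree can chase noise:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic data, just for comparing the two settings.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

# An unconstrained forest versus one whose trees are deliberately kept shallow.
for params in ({}, {"max_depth": 5, "min_samples_leaf": 10}):
    forest = RandomForestClassifier(n_estimators=200, random_state=0, **params)
    score = cross_val_score(forest, X, y, cv=5).mean()
    print(params or "default (fully grown trees)", round(score, 3))
```

Comparing cross-validated scores like this, rather than training error, is what tells you whether the extra complexity is actually overfitting on your data.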
null
CC BY-SA 4.0
null
2014-08-24T08:22:23.497
2020-08-20T18:50:02.613
2020-08-20T18:50:02.613
98307
816
null
1033
2
null
1029
4
null
Due to the very big increase in Big Data (pun intended) and the desire for robust, stable, scalable applications, I actually believe it will be Scala. Spark will inevitably become the main Big Data machine learning tool, and its main API is in Scala. Furthermore, you simply cannot build a product with scripting languages like Python and R; one can only experiment with these languages. What Scala brings is a way to BOTH experiment and produce a product. More reasons:

- Think functionally - write faster and more readable code.
- Scala means the end of the two-team development cycle: better product ownership, more agile cross-functional teams, and half as many employees required to make a product, as we will no longer need both a "research" team and an engineering team - data scientists will be able to do both. This is because Scala is a production-quality language: static typing, but with the flexibility of dynamic typing thanks to implicits.
- Interoperable with the rest of the Java world (Apache Commons Math, Cassandra, HBase, HDFS, Akka, Storm, many, many databases, and more Spark components such as GraphX and Spark Streaming).
- You can step into Spark code easily and understand it, which also helps with debugging.
- Scala is awesome: amazing IDE support due to static typing, property-based tests with ScalaCheck for insane unit testing, and a very concise language that suits mathematicians (especially pure mathematicians) perfectly.
- A little more efficient, as it is compiled rather than interpreted.
- The Python Spark API sits on top of the Scala API and therefore will always be behind it.
- Much easier to do mathematics in Scala, as it is a scalable language where one can easily define DSLs, and because it is so functional.
- Akka - another way, other than Storm, to handle high velocity.
- The pimp-my-library pattern makes adding methods to Spark RDDs really easy.
null
CC BY-SA 3.0
null
2014-08-24T11:03:08.773
2014-08-24T11:03:08.773
null
null
2668
null
1034
1
null
null
13
4355
I am exploring different types of parse tree structures. The two widely known parse tree structures are a) constituency-based parse trees and b) dependency-based parse trees. I am able to generate both types of parse tree using the Stanford NLP package. However, I am not sure how to use these tree structures for my classification task. For example, if I want to do sentiment analysis and want to categorize text into positive and negative classes, what features can I derive from parse tree structures for my classification task?
What features are generally used from Parse trees in classification process in NLP?
CC BY-SA 3.0
null
2014-08-24T17:09:40.510
2017-04-20T09:20:31.550
null
null
3064
[ "machine-learning", "nlp", "feature-selection", "feature-extraction" ]
1035
2
null
1021
3
null
First compute the edit distance between all pairs of strings. See [http://en.wikipedia.org/wiki/Edit_distance](http://en.wikipedia.org/wiki/Edit_distance) and [http://web.stanford.edu/class/cs124/lec/med.pdf](http://web.stanford.edu/class/cs124/lec/med.pdf). Then exclude any outlier strings based on some distance threshold. With the remaining strings, you can use the distance matrix to identify the most central string. Depending on the method you use, you might get ambiguous results for some data. No method is perfect for all possibilities. For your purposes, all you need is some heuristic rules to resolve ambiguities - i.e., pick two or more candidates. Maybe you don't want to pick the "most central" of your list of strings, but instead want to generate a regular expression that captures the pattern common to all the non-outlier strings. One way to do this is to synthesize a string that is equidistant from all the non-outlier strings. You can work out the required edit distances from the matrix, and then randomly generate regular expressions using those distances as constraints. Then you'd test candidate regular expressions and accept the first one that fits the constraints and also accepts all the strings in your non-outlier list. (Start building regular expressions from longest-common-substring lists, because those are non-wildcard characters.)
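A rough sketch of the first part of that recipe (pairwise edit distances, a crude outlier cut, then the most central string) in plain Python, with a hand-rolled Levenshtein distance so no external library is assumed; the outlier rule is a simplistic placeholder.

```python
def levenshtein(a, b):
    # Classic dynamic-programming edit distance with a rolling row.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def canonical(strings, outlier_factor=2.0):
    # Total distance from each string to all others.
    totals = [sum(levenshtein(s, t) for t in strings) for s in strings]
    # Drop strings whose total is far above the median (crude outlier rule).
    median = sorted(totals)[len(totals) // 2]
    kept = [s for s, d in zip(strings, totals) if d <= outlier_factor * median]
    # The medoid: the kept string closest to all other kept strings.
    return min(kept, key=lambda s: sum(levenshtein(s, t) for t in kept))

titles = [
    "Star Wars: Episode IV A New Hope | StarWars.com",
    "Star Wars Episode IV - A New Hope (1977)",
    "[REC] 4 poster promises death by outboard motor - SciFiNow",
]
print(canonical(titles))
```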
null
CC BY-SA 3.0
null
2014-08-24T22:02:01.050
2014-08-24T22:02:01.050
null
null
609
null
1036
1
null
null
7
1198
I am exploring how to model a data set using normal distributions with both mean and variance defined as linear functions of independent variables, something like N(f(x), g(x)). I generate a random sample like this:

```
def draw(x):
    return norm(5 * x + 2, 3 * x + 4).rvs(1)[0]
```

So I want to retrieve 5, 2, 3 and 4 as the parameters for my distribution. I generate my sample:

```
smp = np.zeros((100, 2))
for i in range(0, len(smp)):
    smp[i][0] = i
    smp[i][1] = draw(i)
```

The likelihood function is:

```
def lh(p):
    p_loc_b0 = p[0]
    p_loc_b1 = p[1]
    p_scl_b0 = p[2]
    p_scl_b1 = p[3]
    l = 1
    for i in range(0, len(smp)):
        x = smp[i][0]
        y = smp[i][1]
        l = l * norm(p_loc_b0 + p_loc_b1 * x, p_scl_b0 + p_scl_b1 * x).pdf(y)
    return -l
```

So the parameters of the linear functions used in the model are given in the 4-variable vector p. Using scipy.optimize, I can try to solve for the MLE parameters using an extremely low xtol, even giving the solution as the starting point:

```
fmin(lh, x0=[2, 5, 3, 4], xtol=1e-35)
```

which does not work too well:

```
Warning: Maximum number of function evaluations has been exceeded.
array([ 3.27491346,  4.69237042,  5.70317719,  3.30395462])
```

Raising the xtol to higher values does no good. So I try using a starting solution far from the real solution:

```
>>> fmin(lh, x0=[1,1,1,1], xtol=1e-8)
Optimization terminated successfully.
         Current function value: -0.000000
         Iterations: 24
         Function evaluations: 143
array([ 1.,  1.,  1.,  1.])
```

Which makes me think: PDFs are largely concentrated around the mean and have very low gradients only a few standard deviations away from it, which must not be good for numerical methods. So how does one go about doing this kind of numerical estimation for functions whose gradient is very near zero away from the solution?
How to numerically estimate MLE estimators in python when gradients are very small far from the optimal solution?
CC BY-SA 3.0
null
2014-08-25T00:28:09.003
2020-08-02T14:02:59.000
null
null
3068
[ "python", "statistics" ]
1037
2
null
1013
2
null
If I understand your question correctly, there are two reasons why a genetic algorithm might not be a good idea for optimizing website features: 1) Feedback data comes in too slowly, say once a day, so a genetic algorithm might take a while to converge. 2) In the process of testing, a genetic algorithm will probably come up with combinations that are 'strange', and that might not be a risk the company wants to take.
null
CC BY-SA 3.0
null
2014-08-25T03:01:09.837
2014-08-25T16:52:44.817
2014-08-25T16:52:44.817
3070
3070
null
1038
1
null
null
1
530
I have created an external table in Hive at the HDFS path 'hdfs://localhost.localdomain:8020/user/hive/training'. If I apply the describe command, I can find the table path as shown below. But when I browse through the NameNode web page, the table name is not showing up in that path.

```
hive> describe extended testtable4;
OK
firstname   string
lastname    string
address     string
city        string
state       string
country     string

Detailed Table Information  Table(tableName:testtable4, dbName:default, owner:cloudera, createTime:1408765301, lastAccessTime:0, retention:0, sd:StorageDescriptor(cols:[FieldSchema(name:firstname, type:string, comment:null), FieldSchema(name:lastname, type:string, comment:null), FieldSchema(name:address, type:string, comment:null), FieldSchema(name:city, type:string, comment:null), FieldSchema(name:state, type:string, comment:null), FieldSchema(name:country, type:string, comment:null)], location:hdfs://localhost.localdomain:8020/user/hive/training, inputFormat:org.apache.hadoop.mapred.TextInputFormat, outputFormat:org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat, compressed:false, numBuckets:-1, serdeInfo:SerDeInfo(name:null, serializationLib:org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe, parameters:{serialization.format=,, field.delim=,, line.delim= }), bucketCols:[], sortCols:[], parameters:{}, skewedInfo:SkewedInfo(skewedColNames:[], skewedColValues:[], skewedColValueLocationMaps:{}), storedAsSubDirectories:false), partitionKeys:[], parameters:{EXTERNAL=TRUE, transient_lastDdlTime=1408765301}, viewOriginalText:null, viewExpandedText:null, tableType:EXTERNAL_TABLE)
Time taken: 0.7 seconds
```
Hive external table not showing in NameNode (Cloudera QuickStart VM)
CC BY-SA 3.0
null
2014-08-25T09:53:13.823
2014-08-25T09:53:13.823
null
null
1314
[ "bigdata" ]
1039
2
null
1034
0
null
I think dependencies can be used to improve the accuracy of your sentiment classifier. Consider the following example: E1: "Bill is not a scientist", and assume that the token "scientist" has a positive sentiment in a specific domain. Knowing the dependency neg(scientist, not), we can see that the example above has a negative sentiment. Without knowing this dependency, we would probably classify the sentence as positive. Other types of dependencies can probably be used in the same way to improve the accuracy of classifiers.
null
CC BY-SA 3.0
null
2014-08-25T11:29:35.047
2014-08-25T11:29:35.047
null
null
979
null
1041
2
null
1024
6
null
If you suspect your response is linear in year, then put year into your model as a numeric term rather than a categorical one. Extrapolation is then perfectly valid under the usual assumptions of the GLM family. Make sure you correctly get the errors on your extrapolated estimates. Just extrapolating the parameters of a categorical variable is wrong for a number of reasons. The first one I can think of is that there may be more observations in some years than in others, so any linear extrapolation needs to weight those years' estimates more. Just eyeballing a line - or even fitting a line to the coefficients - won't do this.
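A minimal sketch of that advice with Python's statsmodels (the question does not name a language, so the library choice and the toy counts below are purely illustrative):

```python
import numpy as np
import statsmodels.api as sm

# Fake yearly counts with an upward trend, 1999-2012.
years = np.arange(1999, 2013)
counts = np.array([12, 14, 13, 17, 19, 22, 21, 26, 28, 31, 33, 38, 41, 44])

# Year enters as a single numeric term (plus an intercept), not as 14 dummies.
X = sm.add_constant(years.astype(float))
model = sm.GLM(counts, X, family=sm.families.Poisson()).fit()

# Extrapolate to 2013-2014; the fitted slope and its standard error now
# carry over to these predictions instead of being undefined for unseen categories.
X_new = sm.add_constant(np.array([2013.0, 2014.0]))
print(model.predict(X_new))
```

Because each year contributes its own observations to the single slope estimate, years with more data automatically get more weight, which is exactly the point made above about not fitting a line to the categorical coefficients by eye.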
null
CC BY-SA 3.0
null
2014-08-26T07:14:54.767
2014-08-26T07:20:52.097
2014-08-26T07:20:52.097
471
471
null
1042
1
null
null
4
126
I am seeking a basic list of key data analysis methods used for studying social media platforms online. Are there such key methods, or does this process generally vary according to topic? And is there a standard order in which these methods are applied? (The particular context I'm interested in is how the news is impacting social media.)
Studying social media platforms - key data analysis methods?
CC BY-SA 3.0
null
2014-08-26T07:33:40.080
2014-09-29T17:46:58.617
2014-08-26T17:25:29.930
3058
3058
[ "social-network-analysis" ]
1044
1
null
null
-6
297
As described in the title, we are especially interested in libraries for dealing with big data (their efficiency and stability) that are used in industry, not just in experiments or at universities. Thanks!
Which programming language has a large library ecosystem for machine learning algorithms: R, MATLAB or Python?
CC BY-SA 3.0
null
2014-08-26T14:53:40.647
2014-08-28T11:24:38.743
null
null
3097
[ "machine-learning", "bigdata", "data-mining", "python", "r" ]
1045
1
null
null
2
103
Has anyone used Shark as a repository for datasets produced by Apache Spark? I'm starting some tests with Spark and read about this database technology. Has anyone been using it?
Using Shark with Apache Spark
CC BY-SA 3.0
null
2014-08-26T21:37:12.107
2014-08-26T21:37:12.107
null
null
3050
[ "apache-hadoop" ]
1046
2
null
492
3
null
Try [this](http://deeplearning4j.org/word2vec.html). This has an implementation of Word2Vec used instead of Bag of Words for NER and other NLP tasks.
null
CC BY-SA 4.0
null
2014-08-26T21:51:07.247
2020-08-02T12:40:00.340
2020-08-02T12:40:00.340
98307
3100
null
1047
2
null
1002
3
null
- SPSS is a great tool, but you can accomplish a great deal with resources that you already have on your computer, like Excel, or that are free, like the R project. Although these tools are powerful and can help you identify patterns, you'll need to have a firm grasp of your data before running analyses (I'd recommend running descriptive statistics on the data and exploring it with graphs to make sure everything looks normal). In other words, the tool that you use won't offer a "silver bullet", because the output will only be as valuable as the input (you know the saying... "garbage in, garbage out"). Much of what I'm saying has already been stated in the reply by Aleksandr - spot on.
- R can be challenging for those of us who aren't savvy with coding, but the free resources associated with R and its packages are abundant. If you practice learning the program, you'll quickly gain traction. Again, you'll need to be familiar with your data and the analyses you want to run anyway, and that fact remains regardless of the statistical tools you utilize.
- I'd begin by getting super familiar with my data (follow the steps outlined in the reply from Aleksandr, for starters). You might consider picking up John Foreman's book called Data Smart. It's a hands-on book: John provides datasets and you follow along with his examples (using Excel) to learn various ways of navigating and exploring data. For beginners, it's a great resource.
null
CC BY-SA 3.0
null
2014-08-26T22:48:16.617
2014-08-26T22:48:16.617
null
null
3101
null
1048
2
null
1044
3
null
I've done some research over the last few months and found more libraries, more content and a more active community for Python. I'm actually using it for ETL processes, some mining jobs and map/reduce.
null
CC BY-SA 3.0
null
2014-08-27T03:41:20.317
2014-08-27T03:41:20.317
null
null
3050
null
1050
1
11966
null
10
894
General description of the problem

I have a graph where some vertices are labeled with a type with 3 or 4 possible values. For the other vertices, the type is unknown. My goal is to use the graph to predict the type for the unlabeled vertices.

Possible framework

I suspect this fits into the general framework of label propagation problems, based on my reading of the literature (e.g., see [this paper](http://lvk.cs.msu.su/~bruzz/articles/classification/zhu02learning.pdf) and [this paper](http://www.csc.ncsu.edu/faculty/samatova/practical-graph-mining-with-R/slides/pdf/Frequent_Subgraph_Mining.pdf)). Another method that is mentioned often is Frequent Subgraph Mining, which includes algorithms like `SUBDUE`, `SLEUTH`, and `gSpan`.

Found in R

The only label propagation implementation I managed to find in `R` is `label.propagation.community()` from the `igraph` library. However, as the name suggests, it is mostly used to find communities, not for classifying unlabeled vertices. There also seem to be several references to a `subgraphMining` library (here, for example), but it looks like it is missing from CRAN.

Question

Do you know of a library or framework for the task described?
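While looking for library pointers, it may help to see that the propagation step itself is short enough to sketch by hand; below is a minimal iterative version in Python with NumPy (not one of the R packages asked about), with a toy adjacency matrix and labels invented for illustration, in the spirit of the Zhu and Ghahramani paper linked above.

```python
import numpy as np

# Toy undirected graph over 5 vertices (adjacency matrix).
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)

n_classes = 2
labels = {0: 0, 4: 1}  # vertices 0 and 4 are labeled; the rest are unknown

# One-hot label matrix; unlabeled rows start out uniform.
F = np.full((A.shape[0], n_classes), 1.0 / n_classes)
for v, c in labels.items():
    F[v] = np.eye(n_classes)[c]

# Row-normalized transition matrix.
P = A / A.sum(axis=1, keepdims=True)

for _ in range(100):
    F = P @ F                    # propagate labels from neighbours
    for v, c in labels.items():  # clamp the known labels after every step
        F[v] = np.eye(n_classes)[c]

print(F.argmax(axis=1))          # predicted class for every vertex
```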
Libraries for (label propagation algorithms/frequent subgraph mining) for graphs in R
CC BY-SA 3.0
null
2014-08-27T13:01:14.643
2016-05-27T18:36:35.830
2015-10-25T05:44:40.023
609
3108
[ "classification", "r", "graphs" ]
1051
2
null
1003
2
null
> When using the public Google APIs to retrieve results, I was only able to collect 4-10 results per query. Here's how to get more than 10 results per query: [https://support.google.com/customsearch/answer/1361951?hl=en](https://support.google.com/customsearch/answer/1361951?hl=en) > Google Custom Search and Google Site Search return up to 10 results per query. If you want to display more than 10 results to the user, you can issue multiple requests (using the start=0, start=11 ... parameters) and display the results on a single page. In this case, Google will consider each request as a separate query, and if you are using Google Site Search, each query will count towards your limit. There are other search engine APIs as well (e.g., [Bing](http://datamarket.azure.com/dataset/bing/search))
null
CC BY-SA 3.0
null
2014-08-27T18:33:23.110
2014-08-27T18:33:23.110
null
null
819
null
1053
1
null
null
6
1209
I would like to [summarize](http://stat.ethz.ch/R-manual/R-devel/library/base/html/summary.html) (as in R) the contents of a CSV (possibly after [loading](http://www.endmemo.com/program/R/readcsv.php) it, or storing it somewhere; that's not a problem). The summary should contain the quartiles, mean, median, min and max of the data in a CSV file for each numeric (integer or real-valued) dimension. The standard deviation would be cool as well. I would also like to generate some plots to visualize the data, for example 3 plots for the 3 pairs of variables that are most correlated ([correlation coefficient](http://www.r-tutor.com/elementary-statistics/numerical-measures/correlation-coefficient)) and 3 plots for the 3 pairs of variables that are least correlated. R requires only a few lines to implement this. Are there any libraries (or tools) that would allow a similarly simple (and efficient, if possible) implementation in Java or Scala? PS: This is a specific use case for a [previous (too broad) question](https://datascience.stackexchange.com/questions/948/any-clear-winner-for-data-science-in-scala).
Summarize and visualize a CSV in Java/Scala?
CC BY-SA 3.0
null
2014-08-28T01:36:40.540
2014-09-01T08:10:56.967
2017-04-13T12:50:41.230
-1
1281
[ "tools", "visualization", "scala", "csv" ]
1054
2
null
1036
8
null
There are several reasons why you are getting erroneous results. First, you should consider using the log likelihood instead of the likelihood. There are numerical issues with multiplying many small numbers (imagine if you had millions of samples: you would have to multiply millions of small numbers for the likelihood). Also, taking gradients for optimization methods that require them is often easier when you are dealing with the log likelihood. In general, it is good to have an objective which is a sum rather than a product of variables when dealing with optimization problems. Second, fmin uses the Nelder-Mead simplex algorithm, which has no convergence guarantees according to the [scipy documentation](http://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.fmin.html#scipy.optimize.fmin). This means convergence is not assured and you should not expect to find parameters close to the originals. To get around this, I would suggest using a gradient-based method like stochastic gradient descent or BFGS. Since you know the generative model (the rvs are Gaussian distributed), you can write the likelihood and log likelihood as:

![equations](https://i.stack.imgur.com/bfsvr.png)

where a, b, c and d are your model parameters 5, 2, 3 and 4, respectively. Then take the [gradient](http://en.wikipedia.org/wiki/Gradient) with respect to [a, b, c, d] and feed that into the `fprime` input of fmin_bfgs. Note that due to the varying variance, what could have been solved by plain linear regression is now a nastier problem. Finally, you may also want to check generalized least squares [here](http://en.wikipedia.org/wiki/Linear_regression#Least-squares_estimation_and_related_techniques) and [here](http://en.wikipedia.org/wiki/Heteroscedasticity), which discuss your problem and offer several available solutions. Good luck!
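A rough sketch of the first suggestion (negative log-likelihood plus a gradient-based optimizer) using `scipy.optimize.minimize` with a numerically approximated gradient; the data generation mirrors the question's setup, and the positivity guard on the scale is a crude assumption of mine.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
x = np.arange(100, dtype=float)
y = rng.normal(5 * x + 2, 3 * x + 4)   # true parameters: a=5, b=2, c=3, d=4

def neg_log_likelihood(p):
    a, b, c, d = p
    scale = c * x + d
    if np.any(scale <= 0):             # keep the standard deviation positive
        return np.inf
    return -np.sum(norm.logpdf(y, loc=a * x + b, scale=scale))

result = minimize(neg_log_likelihood, x0=[1.0, 1.0, 1.0, 1.0], method="BFGS")
print(result.x)                        # should end up close to [5, 2, 3, 4]
```

Summing log-densities keeps the objective well scaled far from the optimum, which is what fixes the flat-gradient problem described in the question; a more robust variant would reparameterise the scale (e.g. optimise log c and log d) instead of relying on the positivity guard.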
null
CC BY-SA 4.0
null
2014-08-28T06:49:05.210
2020-08-02T14:02:59.000
2020-08-02T14:02:59.000
98307
1350
null
1055
2
null
1044
1
null
Scala is the only real language that has Big Data at its core. You have MLlib, which sits on Spark, and as Scala is functional it makes parallel computing really natural. R, Python and Matlab are not suitable for industry productization; some would say Python's horrible dynamic typing can be handled a little using special build tools, but really it's not type safe and there is no way to solve that problem.
null
CC BY-SA 3.0
null
2014-08-28T11:24:38.743
2014-08-28T11:24:38.743
null
null
2668
null
1056
2
null
985
2
null
It sounds as if you want to use unsupervised learning to create a training set. Am I right? You use your cluster analysis to determine which docs come from the UK, US or Oz - or which docs are talking about soccer, football or Australian football, respectively? Then feed those tagged docs into a supervised learning algorithm of some sort? How well this works will depend entirely on how well you can distinguish UK, US and Oz. I would have thought it would be fairly straightforward to find documents where the national origin was known, so that you could build a supervised algorithm for detecting the language variant. You wouldn't even need a corpus that talked about football, since dialectical differences show up in other ways that are subject matter independent. (For example, I am clearly from North America, since I just wrote "in ways that are subject matter independent" rather than "since dialectical differences do not depend on subject matter".) However, the answer to your question, "can I use unsupervised learning and then supervised learning", is no, if you are looking for supervised learning. If the results of an unsupervised learning algorithm are fed to a supervised learning algorithm, the net result is unsupervised - there are still no grown-ups in the room. And the classification errors of the resulting process will contain error terms from both stages. You won't get the same performance as you would if you did an SVM with properly tagged training data. This doesn't mean you shouldn't use the method you propose... it might still work well... but it won't be a supervised learning algorithm.
null
CC BY-SA 3.0
null
2014-08-28T16:56:45.577
2014-08-29T19:26:39.963
2014-08-29T19:26:39.963
3077
3077
null
1057
2
null
1042
1
null
You might want to try the book [Mining the Social Web](http://shop.oreilly.com/product/0636920030195.do) for an overview of different techniques. Obviously, the methods you need will depend on the use case. A lot of people do interesting things with graphs, displaying relationships between users with respect to certain topics. Or you might simply do a timeline showing how a news topic builds in interest and wanes.
null
CC BY-SA 3.0
null
2014-08-28T18:53:57.297
2014-08-28T18:53:57.297
null
null
3077
null
1058
1
1066
null
-2
177
I was wondering if there is any research or study that estimates the volume of storage space used by all scientific articles. The articles could be in PDF, plain text, compressed, or any other format. Is there even a way to measure it? Can someone point me towards realizing this study? Regards and thanks.
How much data space is used by all scientific articles?
CC BY-SA 3.0
null
2014-08-29T01:36:36.320
2015-05-26T06:35:34.860
2015-05-26T06:35:34.860
75
3128
[ "bigdata", "research" ]
1059
1
null
null
7
1114
I have a question regarding the use of neural networks. I am currently working with R (the [neuralnet package](http://cran.r-project.org/web/packages/neuralnet/index.html)) and I am facing the following issue: my predictions on the testing and validation sets always lag behind the historical data. Is there a way of correcting this? Maybe something is wrong in my analysis:

- I use the daily log return.
- I normalise my data with the sigmoid function (sigma and mu computed on my whole set).
- I train my neural networks on windows of 10 dates, and the output is the normalised value that follows these 10 dates.

I tried to add the trend, but there is no improvement; I observe a 1-2 day lag. My process seems OK to me - what do you think about it?
Forecasting Foreign Exchange with Neural Network - Lag in Prediction
CC BY-SA 3.0
0
2014-08-29T06:00:53.420
2015-04-08T14:15:39.407
2014-11-07T08:28:10.637
97
3055
[ "r", "neural-network", "time-series", "forecast" ]
1060
2
null
1053
2
null
Check out Breeze and Apache Commons Math for the maths, and ScalaLab for some nice examples of how to plot things in Scala. I've managed to get an environment set up where this would just be a couple of lines. I don't actually use ScalaLab; rather, I borrow some of its code and use IntelliJ worksheets instead.
null
CC BY-SA 3.0
null
2014-08-29T10:42:14.957
2014-08-29T10:42:14.957
null
null
2668
null
1061
1
null
null
1
37
I'm looking for the best solution to manage and host datasets for journalistic pursuits. I am assessing [https://www.documentcloud.org](https://www.documentcloud.org) and [http://datahub.io/](http://datahub.io/). Can anyone explain the differences between them, or recommend a superior solution?
Apps to manage/host data sets
CC BY-SA 3.0
null
2014-08-29T11:40:28.657
2014-08-29T11:40:28.657
null
null
3133
[ "dataset", "optimization" ]
1062
2
null
985
5
null
You can definitely try to first cluster your data, and then see if the cluster information helps your classification task. For example, if your data looked like this (in 1D):

```
AA A AA A A BBB B B B BB BB BB AA AA A A AAA
```

then it may be reasonable to run a clustering algorithm on each class, to obtain two different kinds of A, learn two separate classifiers for A1 and A2, and just drop the cluster distinction for the final output. Other commonly used unsupervised techniques include PCA. As for your football example, the problem is that the unsupervised algorithm does not know what it should be looking for. Instead of learning to separate American football and soccer, it may just as well decide to cluster on international vs. national games. Or Europe vs. the U.S., which may look like it learned about American football and soccer at first, but it would put American soccer into the same cluster as American football, and American football teams in Europe into the Europe cluster... because it does not have guidance on what structure you are interested in, and the continents are a valid structure, too! So usually, I would not blindly assume that unsupervised techniques yield a distinction that matches your desired result. They can yield any kind of structure, and you will want to carefully inspect what they found before using it. If you use it blindly, make sure you spend enough time on evaluation (e.g., if the clustering improves your classifier performance, then it probably worked as intended...).
null
CC BY-SA 3.0
null
2014-08-29T16:19:06.903
2014-08-29T16:19:06.903
null
null
924
null
1063
2
null
1053
1
null
If your data is numeric, try loading it into ELKI (Java). With the `NullAlgorithm` it will give you scatterplots, histograms and parallel coordinate plots. It's fast at reading the data; only the current Apache Batik-based visualization is slow, because it's using SVG. :-( I mostly use it "headless". It also has classes for various statistics (including higher-order moments on data streams), but I haven't seen them in the default UI yet.
null
CC BY-SA 3.0
null
2014-08-30T17:05:23.197
2014-08-30T17:05:23.197
null
null
924
null
1064
2
null
89
4
null
MapReduce is not used in searching. It was used a long time ago to build the index, but it is a batch processing framework, and most of the web does not change all the time, so the newer architectures are all incremental instead of batch oriented. Search in Google will largely work the same way it works in Lucene and Elasticsearch, except for a lot of fine-tuned extra weighting and optimizations. But at the very heart, they will use some form of an inverted index. In other words, they do not search several terabytes when you enter a search query (even when it is not cached). They likely don't look at the actual documents at all. Instead, they use a lookup table that lists which documents match your query term (with stemming, misspellings, synonyms etc. all preprocessed). They probably retrieve the list of the top 10000 documents for each word (10k integers - just a few kB!) and compute the best matches from that. Only if there aren't good matches in these lists do they expand to the next such blocks, etc. Queries for common words can easily be cached, and via preprocessing you can build a list of the top 10k results and then rerank them according to the user profile. There is nothing to be gained by computing an "exact" answer either. Looking at the top 10k results is likely enough; there is no correct answer, and if a better result somewhere at position 10001 is missed, nobody will know or notice (or care). It likely was already ranked down in preprocessing and would not have made it into the top 10 that is presented to the user at the end (or the top 3 the user actually looks at). Rare terms, on the other hand, aren't much of a challenge either - one of the lists contains only a few matching documents, and you can immediately discard all others. I recommend reading this article:

> The Anatomy of a Large-Scale Hypertextual Web Search Engine. Sergey Brin and Lawrence Page. Computer Science Department, Stanford University, Stanford, CA 94305. http://infolab.stanford.edu/~backrub/google.html

And yes, that's the Google founders who wrote this. It's not the latest state of the art, but it will already work at a pretty large scale.
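To illustrate the inverted-index idea, here is a tiny sketch in Python; the documents are invented, and a real engine layers stemming, ranking, caching and compression on top of this lookup structure.

```python
from collections import defaultdict

docs = {
    1: "apache spark machine learning",
    2: "apache hive external table",
    3: "spark streaming with scala",
}

# Build the inverted index: term -> set of document ids containing it.
index = defaultdict(set)
for doc_id, text in docs.items():
    for term in text.split():
        index[term].add(doc_id)

def search(query):
    # Intersect the posting lists of the query terms; no document text is scanned.
    postings = [index.get(term, set()) for term in query.split()]
    return set.intersection(*postings) if postings else set()

print(search("apache spark"))  # -> {1}
```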
null
CC BY-SA 3.0
null
2014-08-30T18:14:05.577
2014-08-30T18:40:02.403
2014-08-30T18:40:02.403
924
924
null
1065
2
null
730
10
null
State of the art as in: used in practice or worked on in theory? APRIORI is used everywhere, except in developing new frequent itemset algorithms. It's easy to implement, and easy to reuse in very different domains. You'll find hundreds of APRIORI implementations of varying quality. And it's easy to get APRIORI wrong, actually. FP-growth is much harder to implement, but also much more interesting. So from an academic point of view, everybody tries to improve FP-growth - getting work based on APRIORI accepted will be very hard by now. If you have a good implementation, every algorithm has its good and its bad situations, in my opinion. A good APRIORI implementation will only need to scan the database k times to find all frequent itemsets of length k. In particular, if your data fits into main memory this is cheap. What can kill APRIORI is too many frequent 2-itemsets (in particular when you don't use a trie and similar acceleration techniques, etc.). It works best on large data with a low number of frequent itemsets. Eclat works on columns, but it needs to read each column much more often. There is some work on diffsets to reduce this work. If your data does not fit into main memory, Eclat probably suffers more than Apriori. By going depth-first, it will also be able to return a first interesting result much earlier than Apriori, and you can use these results to adjust parameters, so you need fewer iterations to find good parameters. But by design, it cannot exploit pruning as neatly as Apriori does. FP-growth compresses the data set into a tree. This works best when you have lots of duplicate records. You could probably reap quite some gains for Apriori and Eclat too if you can presort your data and merge duplicates into weighted vectors. FP-growth does this at an extreme level. The drawback is that the implementation is much harder, and once this tree does not fit into memory anymore it gets messy to implement. As for performance results and benchmarks - don't trust them. There are so many things to implement incorrectly. Try 10 different implementations, and you get 10 very different performance results. In particular for APRIORI, I have the impression that most implementations are broken in the sense of missing some of the main contributions of APRIORI... and of those that have these parts right, the quality of optimizations varies a lot. There are actually even papers on how to implement these algorithms efficiently:

> Efficient Implementations of Apriori and Eclat. Christian Borgelt. Workshop of Frequent Item Set Mining Implementations (FIMI 2003, Melbourne, FL, USA).

You may also want to read these surveys on this domain:

- Goethals, Bart. "Survey on frequent pattern mining." Univ. of Helsinki (2003).
- Ferenc Bodon. "A Survey on Frequent Itemset Mining." Technical Report, Budapest University of Technology and Economics, 2006.
- "Frequent Item Set Mining." Christian Borgelt. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery 2(6):437-456, 2012.
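For readers who just want to see the generate-and-prune loop rather than a tuned implementation, here is a deliberately naive APRIORI sketch in Python (no trie, no diffsets, tiny invented transactions); it only illustrates the candidate generation and pruning the answer describes.

```python
from itertools import combinations

transactions = [
    {"bread", "milk"},
    {"bread", "butter", "milk"},
    {"butter", "milk"},
    {"bread", "butter"},
]
min_support = 2  # absolute support threshold

def support(itemset):
    return sum(itemset <= t for t in transactions)

# Level 1: frequent single items.
items = {i for t in transactions for i in t}
frequent = [{frozenset([i]) for i in items if support(frozenset([i])) >= min_support}]

k = 2
while frequent[-1]:
    # Candidate generation: unions of frequent (k-1)-itemsets that have size k...
    candidates = {a | b for a in frequent[-1] for b in frequent[-1] if len(a | b) == k}
    # ...pruned so every (k-1)-subset is itself frequent (the Apriori property),
    # then filtered with one counting pass over the database.
    level = {c for c in candidates
             if all(frozenset(s) in frequent[-1] for s in combinations(c, k - 1))
             and support(c) >= min_support}
    frequent.append(level)
    k += 1

for level in frequent[:-1]:
    print(sorted(map(sorted, level)))
```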
null
CC BY-SA 3.0
null
2014-08-30T18:36:07.490
2014-08-30T18:36:07.490
null
null
924
null
1066
2
null
1058
1
null
Perhaps you are looking to quantify the amount of file space used by a specific subset of data that we will label as "academic publications." Well, to estimate, you could find stats on how many publications are housed at all the leading libraries (JSTOR, EBSCO, AcademicHost, etc.) and then get the average size of each. Multiply that by the number of articles and whamo, you've got yourself an estimate.

Here's the problem, though: PDF files store the text from a string `s` differently (in size) than, say, a text document stores that same string. Likewise, a compressed JPEG will store an amount of information `i` differently than a non-compressed JPEG. So you see we could have two copies of the same article containing the same information `i` but taking up different amounts of memory `m`.

Are you looking to get a word count of the amount of scientific literature? Or are you looking to get an approximation of the file system space used to store all academically published content in the world?
null
CC BY-SA 3.0
null
2014-09-01T00:01:58.917
2014-09-01T00:01:58.917
null
null
3152
null
1067
2
null
1059
0
null
It is likely to be very hard to draw any conclusion if you are training with only 10 input samples. With more data, your diagnosis that the model is predicting lagged values would have more plausibility. As it stands, it seems pretty likely that your model is just saying that the last observed value is pretty close to correct. This isn't the same as a real lag model, but it is a very reasonable thing to guess if you haven't seen enough data.
null
CC BY-SA 3.0
null
2014-09-01T00:17:56.070
2014-09-01T00:17:56.070
null
null
3153
null
1068
2
null
1053
0
null
I'd have a closer look at one of Apache Spark's modules: [MLlib](https://spark.apache.org/mllib/).
null
CC BY-SA 3.0
null
2014-09-01T08:10:56.967
2014-09-01T08:10:56.967
null
null
3150
null
1069
1
null
null
6
234
I am trying to evaluate and compare several different machine learning models built with different parameters (e.g. downsampling, outlier removal) and different classifiers (e.g. Bayes Net, SVM, Decision Tree).

I am performing a type of cross validation where I randomly select 67% of the data for use in the training set and 33% of the data for use in the testing set. I perform this for several iterations, say, 20. Now, from each iteration I am able to generate a confusion matrix and compute a kappa. My question is, what are some ways to aggregate these across the iterations? I am also interested in aggregating accuracy and expected accuracy, among other things.

For the kappa, accuracy, and expected accuracy, I have just been taking the average up to this point. One of the problems is that when I recompute kappa from the aggregated average accuracy and expected accuracy, it is not the same as the aggregated (averaged) kappa.

For the confusion matrix, I have first been normalizing the confusion matrix from each iteration and then averaging them, in an attempt to avoid the issue of confusion matrices with different numbers of total cases (which is possible with my cross validation scheme). When I recompute the kappa from this aggregated confusion matrix, it is also different from the previous two.

Which one is most correct? Is there another way of computing an average kappa that is more correct? Thanks, and if more concrete examples are needed in order to illustrate my question, please let me know.
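For concreteness, here is a small NumPy sketch (with hypothetical counts) of how kappa comes out of a confusion matrix. The aggregation strategies above amount to either averaging the per-iteration kappas or computing one kappa from an aggregated matrix; the two generally disagree:

```python
import numpy as np

def kappa_from_confusion(cm):
    """Cohen's kappa from a (classes x classes) confusion matrix of counts."""
    cm = np.asarray(cm, dtype=float)
    total = cm.sum()
    observed = np.trace(cm) / total                                   # accuracy
    expected = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / total ** 2   # chance agreement
    return (observed - expected) / (1.0 - expected)

# Hypothetical confusion matrices from two train/test iterations.
folds = [np.array([[50, 10], [5, 35]]),
         np.array([[45, 15], [8, 32]])]

mean_of_kappas = np.mean([kappa_from_confusion(cm) for cm in folds])
kappa_of_pooled = kappa_from_confusion(sum(folds))   # pool the counts, then one kappa
print(mean_of_kappas, kappa_of_pooled)               # generally not identical
```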
Kappa From Combined Confusion Matrices
CC BY-SA 3.0
null
2014-09-02T03:24:12.793
2016-02-18T05:18:34.363
2016-02-18T05:18:34.363
3169
3169
[ "machine-learning", "confusion-matrix" ]
1070
1
1072
null
3
1577
I am interested in knowing the differences in functionality between SAP HANA and Exasol. Since this is a bit of an open ended question let me be clear. I am not interested in people debating which is "better" or faster. I am only interested in what each was designed to do so please keep your opinions out of it. I suspect it is a bit like comparing HANA to Oracle Exalytics where there is some overlap but the functionality goals are different.
SAP HANA vs Exasol
CC BY-SA 3.0
null
2014-09-02T08:47:38.737
2014-09-02T14:01:07.407
null
null
2511
[ "bigdata" ]
1071
1
null
null
1
57
How is the concept of data different for different disciplines? Obviously, for physicists and sociologists, "data" is something different.
How is the concept of data different for different disciplines?
CC BY-SA 3.0
null
2014-09-02T09:48:08.150
2014-09-05T16:36:03.227
null
null
3178
[ "definitions" ]
1072
2
null
1070
1
null
There's not an enormous difference between what you can do with the two databases; it's more a question of the focus and the way the functionality is implemented, and that's where it becomes difficult to explain without using words like "better" and "faster" (and for sure words like "cheaper").

EXASOL was designed for speed and ease of use with analytical processing and is designed to run on clusters of commodity hardware. SAP HANA is more complex, aims to do more than "just" analytical processing, and runs only on a range of "approved" hardware.

What type of differences did you have in mind?
null
CC BY-SA 3.0
null
2014-09-02T14:01:07.407
2014-09-02T14:01:07.407
null
null
3181
null
1073
1
null
null
10
3764
I am looking for packages (either in python, R, or a standalone package) to perform online learning to predict stock data. I have found and read about Vowpal Wabbit ([https://github.com/JohnLangford/vowpal_wabbit/wiki](https://github.com/JohnLangford/vowpal_wabbit/wiki)), which seems to be quite promising but I am wondering if there are any other packages out there. Thanks in advance.
Libraries for Online Machine Learning
CC BY-SA 3.0
null
2014-09-02T19:17:43.210
2022-09-20T18:13:11.997
null
null
802
[ "machine-learning", "online-learning" ]
1074
1
null
null
5
7342
In SVMs the polynomial kernel is defined as:

(scale * crossprod(x, y) + offset)^degree

How do the scale and offset parameters affect the model and what range should they be in? (intuitively please)

Are the scale and offset for numeric stability only (that's what it looks like to me), or do they influence prediction accuracy as well?

Can good values for scale and offset be calculated/estimated when the data is known, or is a grid search required?

The caret package always sets the offset to 1, but it does a grid search for scale. (Why) is an offset of 1 a good value?

Thanks

PS.: Wikipedia didn't really help my understanding:

> For degree-d polynomials, the polynomial kernel is defined as $K(x, y) = (x^T y + c)^d$ where x and y are vectors in the input space, i.e. vectors of features computed from training or test samples, and $c \geq 0$ is a constant trading off the influence of higher-order versus lower-order terms in the polynomial. When $c = 0$, the kernel is called homogeneous. (A further generalized polykernel divides $x^T y$ by a user-specified scalar parameter $a$.)

Neither did ?polydot's explanation in R's help system:

> scale: The scaling parameter of the polynomial and tangent kernel is a convenient way of normalizing patterns (<-!?) without the need to modify the data itself
> offset: The offset used in a polynomial or hyperbolic tangent kernel (<- lol thanks)
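A quick numerical sketch (plain NumPy, made-up vectors) of the kernel formula above may help build intuition: scale rescales the dot product before exponentiation, and offset shifts it so that lower-order terms keep some weight:

```python
import numpy as np

def poly_kernel(x, y, scale=1.0, offset=1.0, degree=2):
    # same formula as (scale * crossprod(x, y) + offset)^degree in kernlab notation
    return (scale * np.dot(x, y) + offset) ** degree

x = np.array([1.0, 2.0, 3.0])
y = np.array([0.5, 1.0, -1.0])

# Same vectors, different hyperparameters: the kernel values change a lot in
# magnitude, which is one reason scale is usually tuned (or the data standardized).
for scale, offset in [(1.0, 1.0), (0.1, 1.0), (1.0, 0.0)]:
    print(scale, offset, poly_kernel(x, y, scale, offset))
```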
Polynomial Kernel Parameters in SVMs
CC BY-SA 3.0
null
2014-09-02T19:29:07.490
2015-05-18T07:34:41.633
2015-05-18T07:34:41.633
9667
676
[ "machine-learning", "classification", "svm" ]
1075
1
null
null
4
777
Background: I run a product that compares sets of data (data matching and data reconciliation). To get the result we need to compare each row in a data set with every N rows in the opposing data set.

Now, however, we get sets of up to 300 000 rows of data in each set to compare and are getting 90 billion computations to handle.

So my question is this: even though we don't have the data volumes to use Hadoop, we have the computational need for something distributed. Is Hadoop a good choice for us?
Hadoop for grid computing
CC BY-SA 3.0
null
2014-09-04T18:13:57.343
2014-11-15T03:18:13.223
null
null
3203
[ "apache-hadoop" ]
1076
2
null
1073
3
null
You could look at the scikit-learn or Orange modules in Python. Scikit-learn has an SGD classifier and regressor that can do a partial fit on the data for the online learning case. In R, take a look at the caret package.
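For illustration, a minimal scikit-learn sketch of the partial-fit (online) interface mentioned above, using made-up mini-batches in place of a real price stream:

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

clf = SGDClassifier()          # linear model trained with stochastic gradient descent
classes = np.array([0, 1])     # all classes must be declared on the first call

rng = np.random.RandomState(0)
for _ in range(10):            # pretend each loop is a new batch of streaming data
    X_batch = rng.randn(32, 5)
    y_batch = (X_batch[:, 0] > 0).astype(int)
    clf.partial_fit(X_batch, y_batch, classes=classes)

print(clf.predict(rng.randn(3, 5)))
```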
null
CC BY-SA 3.0
null
2014-09-05T06:17:28.533
2016-01-23T22:19:03.783
2016-01-23T22:19:03.783
15527
3211
null
1077
2
null
1075
4
null
Your job seems like a map-reduce job and hence might be a good fit for Hadoop. Hadoop has a zoo of an ecosystem though. Hadoop is built around a distributed file system: it distributes data on a cluster, and because this data is split up it can be analysed in parallel. Out of the box, Hadoop allows you to write map-reduce jobs on the platform, and this is why it might help with your problem. The following technologies work on Hadoop:

- If the data can be represented in a table format, you might want to check out technologies like Hive and Impala. Impala uses all the distributed memory across a cluster and is very performant, while still allowing you to work with a table structure.
- A newer, but promising, alternative might also be Spark, which allows more iterative procedures to be run on the cluster.

Don't underestimate the amount of time needed to set Hadoop up and the amount of time needed to understand it.
null
CC BY-SA 3.0
null
2014-09-05T11:00:22.053
2014-09-05T11:00:22.053
null
null
3213
null
1078
1
null
null
13
6941
I'm going to classify unstructured text documents, namely web sites of unknown structure. The number of classes into which I am classifying is limited (at this point, I believe there are no more than three). Does anyone have a suggestion for how I might get started?

Is the "bag of words" approach feasible here? Later, I could add another classification stage based on document structure (perhaps decision trees).

I am somewhat familiar with Mahout and Hadoop, so I prefer Java-based solutions. If needed, I can switch to Scala and/or the Spark engine (and its ML library).
Unstructured text classification
CC BY-SA 3.0
null
2014-09-05T12:08:11.347
2017-03-25T07:31:56.423
2015-10-24T00:20:42.453
13413
3215
[ "machine-learning", "classification", "text-mining", "beginner" ]
1079
1
null
null
3
3616
I'm searching for data sets for evaluating text retrieval quality. TF-IDF is a popular similarity measure, but is it the best choice? And which variant is the best choice? [Lucenes Scoring](https://lucene.apache.org/core/3_6_1/api/all/org/apache/lucene/search/Similarity.html) for example uses IDF^2, and IDF defined as 1+log(numdocs/(docFreq+1)). TF in lucene is defined as sqrt(frequency)... Many more variants exist, including [Okapi BM25](https://en.wikipedia.org/wiki/Okapi_BM25), which is used by the [Xapian search engine](http://xapian.org/docs/bm25.html)... I'd like to study the different variants, and I'm looking for evaluation data sets. Thanks!
Data sets for evaluating text retrieval quality
CC BY-SA 3.0
null
2014-09-05T14:47:52.127
2014-11-12T19:00:05.013
null
null
2920
[ "dataset", "text-mining", "similarity", "information-retrieval" ]
1080
1
null
null
3
508
I was curious about the ANOVA RBF kernel provided by the kernlab package available in R. I tested it with a numeric dataset of 34 input variables and one output variable. For each variable I have 700 different values. Compared with other kernels, I got very bad results with this kernel. For example, using the simple RBF kernel I could predict with an R2 of 0.88; however, with the ANOVA RBF I could only get an R2 of 0.33. I thought that the ANOVA RBF would be a very good kernel. Any thoughts? Thanks

The code is as follows:

```
set.seed(100) #use the same seed to train different models
svrFitanovaacv <- train(R ~ ., data = trainSet,
                        method = SVManova,
                        preProc = c("center", "scale"),
                        trControl = ctrl,
                        tuneLength = 10) #By default, RMSE and R2 are computed for regression (in all cases, selects the tuning and cross-val model with best value)
, metric = "ROC"
```

define custom model in caret package:

```
library(caret)

#RBF ANOVA KERNEL
SVManova <- list(type = "Regression", library = "kernlab", loop = NULL)

prmanova <- data.frame(parameter = c("C", "sigma", "degree", "epsilon"),
                       class = rep("numeric", 4),
                       label = c("Cost", "Sigma", "Degree", "Epsilon"))
SVManova$parameters <- prmanova

svmGridanova <- function(x, y, len = NULL) {
  library(kernlab)
  sigmas <- sigest(as.matrix(x), na.action = na.omit, scaled = TRUE, frac = 1)
  expand.grid(sigma = mean(sigmas[-2]), epsilon = 0.000001,
              C = 2^(-40:len), degree = 1:2) # len = tuneLength in train
}
SVManova$grid <- svmGridanova

svmFitanova <- function(x, y, wts, param, lev, last, weights, classProbs, ...) {
  ksvm(x = as.matrix(x), y = y,
       kernel = "anovadot",
       kpar = list(sigma = param$sigma, degree = param$degree),
       C = param$C, epsilon = param$epsilon,
       prob.model = classProbs, ...) #default type = "eps-svr"
}
SVManova$fit <- svmFitanova

svmPredanova <- function(modelFit, newdata, preProc = NULL, submodels = NULL)
  predict(modelFit, newdata)
SVManova$predict <- svmPredanova

svmProb <- function(modelFit, newdata, preProc = NULL, submodels = NULL)
  predict(modelFit, newdata, type="probabilities")
SVManova$prob <- svmProb

svmSortanova <- function(x) x[order(x$C), ]
SVManova$sort <- svmSortanova
```

load data:

```
dataA2 <- read.csv("C:/results/A2.txt", header = TRUE, blank.lines.skip = TRUE, sep = ",")

set.seed(1)
inTrainSet <- createDataPartition(dataA2$R, p = 0.75, list = FALSE) #[[1]]
trainSet <- dataA2[inTrainSet,]
testSet <- dataA2[-inTrainSet,]

#-----------------------------------------------------------------------------
#K-folds resampling method for fitting svr
ctrl <- trainControl(method = "repeatedcv", number = 10, repeats = 10,
                     allowParallel = TRUE) #10 separate 10-fold cross-validations
```

link to data:

```
wuala.com/jpcgandre/Documents/Data%20SVR/?key=BOD9NTINzRHG
```
ANOVA RBF kernel returns very poor results
CC BY-SA 3.0
null
2014-09-05T16:34:55.170
2014-09-05T16:34:55.170
null
null
3216
[ "machine-learning", "r" ]
1081
2
null
1071
1
null
Data is, at its most basic reduction, a raw element of something. Data is a raw "thing" that exists in any form and from which we can construct intelligence through analysis. When I was an Intelligence Analyst, we used to define data as "anything and everything that could be used to construct a hypothesis." Thus, data for any discipline is interchangeable; as a sociologist, I have a vector of discrete variables indicating ethnicity, as an economist I have a vector with housing prices, and as an anthropologist I have a vector of tablet names used in some long-gone civilization. Data is data.
null
CC BY-SA 3.0
null
2014-09-05T16:36:03.227
2014-09-05T16:36:03.227
null
null
3152
null
1082
2
null
1078
4
null
Here are a couple of really great open source software packages for text classification that should help get you started: - MALLET is a CPL-licensed Java-based machine learning toolkit built by UMass for working with text data. It includes implementations of several classification algorithms (e.g., naïve Bayes, maximum entropy, decision trees). - The Stanford Classifier from the Stanford NLP Group is a GPL-licensed Java implementation of a maximum entropy classifier designed to work with text data.
null
CC BY-SA 3.0
null
2014-09-05T19:13:39.993
2014-09-05T19:13:39.993
null
null
819
null
1084
2
null
1078
14
null
Let's work it out from the ground up. Classification (also known as categorization) is an example of supervised learning. In supervised learning you have:

- model - something that approximates the internal structure of your data, enabling you to reason about it and make useful predictions (e.g. predict the class of an object); normally a model has parameters that you want to "learn"
- training and testing datasets - sets of objects that you use for training your model (finding good values for its parameters) and for further evaluation
- training and classification algorithms - the first describes how to learn the model from the training dataset, the second shows how to derive the class of a new object given a trained model

Now let's take a simple case of spam classification. Your training dataset is a corpus of emails + corresponding labels - "spam" or "not spam". The testing dataset has the same structure, but is made from some independent emails (normally one just splits the dataset and uses, say, 9/10 of it for training and 1/10 for testing). One way to model emails is to represent each of them as a set (bag) of words. If we assume that words are independent of each other, we can use a Naive Bayes classifier, that is, calculate prior probabilities for each word and each class (training algorithm) and then apply Bayes' theorem to find the posterior probability of a new document belonging to a particular class.

So, basically we have:

```
raw model + training set + training algorithm -> trained model
trained model + classification algorithm + new object -> object label
```

Now note that we represented our objects (documents) as a bag of words. But is that the only way? In fact, we can extract much more from raw text. For example, instead of words as-is we can use their [stems or lemmas](http://nlp.stanford.edu/IR-book/html/htmledition/stemming-and-lemmatization-1.html), throw out noisy [stop words](http://en.wikipedia.org/wiki/Stop_words), add [POS tags](http://nlp.stanford.edu/software/tagger.shtml) of words, extract [named entities](http://nlp.stanford.edu/software/CRF-NER.shtml) or even explore the HTML structure of the document. In fact, a more general representation of a document (and, in general, any object) is a feature vector. E.g. for text:

```
actor, analogue, bad, burn, ..., NOUN, VERB, ADJ, ..., in_bold, ... | label
    0,        0,   1,    1, ...,    5,    7,   2, ...,       2, ... | not spam
    0,        1,   0,    0, ...,    3,   12,  10, ...,       0, ... | spam
```

Here the first row is a list of possible features and the subsequent rows show how many times each feature occurs in a document. E.g. in the first document there are no occurrences of the word "actor", 1 occurrence of the word "burn", 5 nouns, 2 adjectives and 2 pieces of text in bold. The last column corresponds to the resulting class label.

Using a feature vector you can incorporate any properties of your texts. Though finding a good set of features may take some time.

And what about the model and algorithms? Are we bound to Naive Bayes? Not at all. Logistic regression, SVM, decision trees - just to mention a few popular classifiers. (Note that when we say "classifier", in most cases we mean the model + the corresponding algorithms for training and classification.)

As for implementation, you can divide the task into 2 parts:

- Feature extraction - transforming raw texts into feature vectors.
- Object classification - building and applying the model.

The first point is well worked out in many [NLP libraries](https://stackoverflow.com/questions/4115526/natural-language-processing). The second is about machine learning, so, depending on your dataset, you can use either [Weka](http://www.cs.waikato.ac.nz/ml/weka/) or [MLlib](https://spark.apache.org/docs/latest/mllib-guide.html).
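To make the pipeline above concrete, here is a minimal scikit-learn sketch in Python with a tiny hypothetical corpus (the asker prefers Java, so treat this purely as an illustration of the steps; Weka or MLlib expose the same building blocks):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Tiny hypothetical training corpus: texts + class labels.
texts = ["cheap pills buy now", "meeting agenda attached",
         "win money now", "project status update"]
labels = ["spam", "not spam", "spam", "not spam"]

# Feature extraction (bag of words) + model (Naive Bayes) in one pipeline.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(texts, labels)                       # training algorithm

print(model.predict(["buy cheap pills"]))      # classification algorithm -> ['spam']
```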
null
CC BY-SA 3.0
null
2014-09-07T01:08:04.330
2014-09-07T01:08:04.330
2017-05-23T12:38:53.587
-1
1279
null
1085
2
null
1075
1
null
If I understand your description correctly, Hadoop seems like a huge overhead for the wrong problem. Basically you just need a standard distributed architecture: don't you just have to pass pairs of rows around? E.g. MPI, IPython.parallel, ...
null
CC BY-SA 3.0
null
2014-09-07T08:02:39.670
2014-09-07T08:02:39.670
null
null
1256
null
1086
1
null
null
3
210
We have ~500 biomedical documents, each of some 1-2 MB. We want to use a non-query-based method to rank the documents in order of their unique content score. I'm calling it "unique content" because our researchers want to know from which document to start reading. All the documents are on the same topic, and in the biomedical world we know that there is always a lot of content overlap. So all we want to do is arrange the documents in the order of their unique content. Most Information Retrieval literature suggests query-based ranking, which does not fit our need.
non query-based document ranking
CC BY-SA 3.0
null
2014-09-07T22:51:27.490
2015-09-01T15:30:20.963
2015-09-01T15:30:20.963
10189
3232
[ "machine-learning", "data-mining", "text-mining", "information-retrieval" ]
1087
2
null
1086
1
null
Here's a simple initial approach to try: - Calculate the TF-IDF score of each word in each document. - Sort the documents by the average TF-IDF score of their words. - The higher the average TF-IDF score, the more unique a document is with respect to the rest of the collection. You might also try a clustering-based approach where you look for outliers, or perhaps something with the [Jaccard index](http://en.wikipedia.org/wiki/Jaccard_index) using a bag-of-words model.
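A minimal Python sketch of steps 1-3 with scikit-learn (toy documents stand in for the real collection, which would be loaded from disk):

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "gene expression in tumor cells",
    "gene expression profiling of tumor samples",
    "a novel zebrafish model of cardiac regeneration",
]

vec = TfidfVectorizer(stop_words="english")
X = vec.fit_transform(docs)                    # documents x vocabulary, sparse

# Average TF-IDF over the words that actually occur in each document.
avg_tfidf = np.asarray(X.sum(axis=1)).ravel() / np.maximum(X.getnnz(axis=1), 1)

ranking = np.argsort(-avg_tfidf)               # most "unique" documents first
print(ranking, avg_tfidf)
```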
null
CC BY-SA 3.0
null
2014-09-08T00:47:21.590
2014-09-08T00:47:21.590
null
null
819
null
1088
2
null
1078
5
null
Topic Modeling would be a very appropriate method for your problem. Topic Models are a form of unsupervised learning/discovery, where a specified (or discovered) number of topics are defined by a list of words that have a high probability of appearing together. In a separate step, you can label each topic using subject matter experts, but for your purposes this isn't necessary since you are only interested in getting to three clusters. You treat each document as a bag of words, and pre-process to remove stop words, etc. With the simplest methods, you pre-specify the number of topics. In your case, you could either specify "3", which is your fixed limit on categories, or pick a larger number of topics (between 10 and 100), and then in a separate step, form three clusters for documents with common emphasis on topics. K-means or other clustering methods could be used. (I'd recommend the latter approach) You don't need to code topic modeling software from scratch. Here's a [web page with many resources, including software libraries/packages](https://web.archive.org/web/20161027083945/http://www.cs.princeton.edu/~blei/topicmodeling.html). None are in Java, but there are ways to run C++ and Python under Java.
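For illustration only (the asker prefers Java, but the steps translate directly), here is a minimal Python/scikit-learn sketch of the topic-model-then-cluster route, on made-up documents; with real data you would use many more topics and three clusters, as suggested above:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.cluster import KMeans

docs = [
    "stock markets fell on interest rate fears",
    "the central bank raised interest rates again",
    "the team won the championship final",
    "injury forces star player to miss the final",
]

X = CountVectorizer(stop_words="english").fit_transform(docs)

# Toy settings: 2 topics and 2 clusters because the corpus is tiny.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(X)          # documents x topic proportions

# Second step: cluster documents by their topic mixture.
groups = KMeans(n_clusters=2, random_state=0, n_init=10).fit_predict(doc_topics)
print(groups)
```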
null
CC BY-SA 3.0
null
2014-09-08T02:45:42.653
2017-03-25T07:31:56.423
2017-03-25T07:31:56.423
29575
609
null
1089
2
null
1086
2
null
You could use Topic Modeling as described in this paper: [http://faculty.chicagobooth.edu/workshops/orgs-markets/pdf/KaplanSwordWin2014.pdf](http://faculty.chicagobooth.edu/workshops/orgs-markets/pdf/KaplanSwordWin2014.pdf) They performed Topic Modeling on abstracts of patents (limited to 150 words). They identified papers as "novel" if they were the first to introduce a topic, and measured degree of novelty by how many papers in the following year used the same topic. (Read the paper for details). I suggest that you follow their lead and only process paper abstracts. Processing the body of each paper might reveal some novelty that the abstract does not, but you also run the risk of having much more noise in your topic model (i.e. extraneous topics, extraneous words). While you say that all 500 papers are on the same "topic", it's probably safer to say that they are all on the same "theme" or in the same "sub-category" of Bio-medicine. Topic modeling permits decomposition of the "theme" into "topics". The good news is that there are plenty of good packages/libraries for Topic Modeling. You still have to do preprocessing, but you don't have to code the algorithms yourself. See this page for many resources: [http://www.cs.princeton.edu/~blei/topicmodeling.html](http://www.cs.princeton.edu/~blei/topicmodeling.html)
null
CC BY-SA 3.0
null
2014-09-08T03:07:07.413
2014-09-08T03:07:07.413
null
null
609
null
1090
1
null
null
1
38
It may be unlikely that anyone knows this, but I have a specific question about Freebase. Here is the Freebase page for the [Ford Taurus automotive model](http://www.freebase.com/m/014_d3). It has a property called "Related Models". Does anyone know how this list of related models was compiled? What is the similarity measure that they use? I don't think it is only about other Wikipedia pages that link to or from this page. Alternatively, it may be that this is user generated. Does anyone know for sure?
Freebase Related Models
CC BY-SA 3.0
null
2014-09-08T14:57:11.223
2014-09-08T14:57:11.223
null
null
387
[ "dataset" ]
1091
1
null
null
2
1504
What is the best technology to use to create my custom bag of words with N-grams? I am looking for functionality that can be achieved through a GUI. I cannot use Spotfire as it is not available in the organization, though I can get SAP HANA or RHadoop. But RHadoop is a bit challenging; any suggestions?
Creating Bag of words
CC BY-SA 3.0
null
2014-09-08T19:33:00.253
2014-11-05T17:07:02.563
null
null
3244
[ "bigdata", "text-mining" ]
1092
1
1097
null
14
2054
Are there any machine learning libraries for Ruby that are relatively complete (including a wide variety of algorithms for supervised and unsupervised learning), robustly tested, and well-documented? I love Python's [scikit-learn](http://scikit-learn.org/) for its incredible documentation, but a client would prefer to write the code in Ruby since that's what they're familiar with. Ideally I am looking for a library or set of libraries which, like `scikit` and `numpy`, can implement a wide variety of data structures like sparse matrices, as well as learners. Some examples of things we'll need to do are binary classification using SVMs, and implementing bag of words models which we hope to concatenate with arbitrary numeric data, as described in [this StackOverflow post](https://stackoverflow.com/q/20106940/1435804).
Machine learning libraries for Ruby
CC BY-SA 4.0
null
2014-09-08T21:25:26.183
2018-12-29T02:25:42.790
2018-12-29T02:25:42.790
134
2487
[ "machine-learning" ]
1094
1
4897
null
2
249
Problem

For my machine learning task, I create a set of predictors. Predictors come in "bundles" - multi-dimensional measurements (3- or 4-dimensional in my case). The whole "bundle" makes sense only if it has been measured and taken all together. The problem is, different "bundles" of predictors can be measured only for a small part of the sample, and those parts don't necessarily intersect for different "bundles". As the parts are small, imputing leads to a considerable decrease in accuracy (catastrophic, to be more precise).

Possible solutions

I could create dummy variables that would mark whether the measurement has taken place for each variable. The problem is, when a random forest draws random variables, it does so individually. So there are two basic ways to solve this problem:

1) Combine each "bundle" into one predictor. That is possible, but it seems information will be lost.

2) Make the random forest draw variables not individually, but by obligatory "bundles".

Problem for random forest

As the random forest draws variables randomly, it takes features that are useless (or much less useful) without the others from their "bundle". I have a feeling that this leads to a loss of accuracy.

Example

For example, I have variables `a`, `a_measure`, `b`, `b_measure`. The problem is, variable `a_measure` makes sense only if variable `a` is present, and the same holds for `b`. So I either have to combine `a` and `a_measure` into one variable, or make the random forest draw both in case at least one of them is drawn.

Question

What are the best practice solutions for problems where different sets of predictors are measured for small parts of the overall population, and these sets of predictors come in obligatory "bundles"?

Thank you!
Creating obligatory combinations of variables for drawing by random forest
CC BY-SA 3.0
null
2014-09-09T06:33:00.730
2015-04-17T06:30:43.227
2014-09-09T06:45:25.510
3108
3108
[ "machine-learning", "r", "random-forest" ]
1095
1
null
null
33
67014
The problem refers to decision tree building. According to Wikipedia, the '[Gini coefficient](http://en.wikipedia.org/wiki/Gini_coefficient)' should not be confused with '[Gini impurity](http://en.wikipedia.org/wiki/Decision_tree_learning#Gini_impurity)'. However, both measures can be used when building a decision tree - these can support our choices when splitting the set of items.

1) 'Gini impurity' - a standard decision-tree splitting metric (see the link above);

2) 'Gini coefficient' - each split can be assessed based on the AUC criterion. For each splitting scenario we can build a ROC curve and compute the AUC metric. According to Wikipedia, AUC = (GiniCoeff + 1)/2.

The question is: are both these measures equivalent? On the one hand, I am informed that the Gini coefficient should not be confused with Gini impurity. On the other hand, both these measures can be used in doing the same thing - assessing the quality of a decision tree split.
Gini coefficient vs Gini impurity - decision trees
CC BY-SA 3.0
null
2014-09-09T12:44:16.967
2023-03-13T18:03:52.697
null
null
3250
[ "data-mining" ]
1096
2
null
1091
3
null
> create my custom bag of words with N-grams to apply to My initial recommendation would be to use the [NLTK library for Python](http://www.nltk.org/). NLTK offers methods for [easily extracting bigrams from text](http://www.nltk.org/book/ch01.html#bigrams_index_term) or [ngrams of arbitrary length](https://nltk.googlecode.com/svn/trunk/doc/api/nltk.util-module.html#ngrams), as well as methods for analyzing the [frequency distribution of those items](http://www.nltk.org/book/ch01.html#frequency_distribution_index_term). However, all of this requires a bit of programming. > a functionality that can be achieved over GUI That's tricky. Have you looked into [GATE](https://gate.ac.uk/)? I'm not exactly sure if/how GATE does what you want, but it does offer a GUI.
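For instance, a short Python/NLTK sketch of building an n-gram bag of words from raw text (toy example; a real pipeline would use a proper tokenizer and stop-word filtering):

```python
from collections import Counter
from nltk.util import ngrams

text = "the quick brown fox jumps over the lazy dog the quick brown fox"
tokens = text.lower().split()          # naive whitespace tokenization for the sketch

# Unigrams + bigrams combined into one "bag"
bag = Counter(tokens)
bag.update(" ".join(g) for g in ngrams(tokens, 2))

print(bag.most_common(5))
```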
null
CC BY-SA 3.0
null
2014-09-09T15:05:46.093
2014-09-09T15:05:46.093
null
null
819
null
1097
2
null
1092
8
null
I'll go ahead and post an answer for now; if someone has something better I'll accept theirs. At this point the most powerful option appears to be accessing WEKA using jRuby. We spent yesterday scouring the 'net, and this combination was even used by a [talk at RailsConf 2012](http://www.confreaks.com/videos/867-railsconf2012-practical-machine-learning-and-rails), so I would guess if there were a comparable pure ruby package, they would have used it. Note that if you know exactly what you need, there are plenty of individual libraries that either [wrap standalone packages like libsvm](https://github.com/febeling/rb-libsvm) or [re-implement some individual algorithms like Naive Bayes in pure Ruby](https://github.com/alexandru/stuff-classifier) and will spare you from using jRuby. But for a general-purpose library, WEKA and jRuby seem to be the best bet at this time.
null
CC BY-SA 3.0
null
2014-09-09T16:58:10.537
2014-09-09T16:58:10.537
null
null
2487
null
1098
2
null
1091
1
null
You can use scikit-learn, a Python library. It is the simplest method I know of, requiring minimal code. You can follow this link: [http://scikit-learn.org/stable/modules/feature_extraction.html](http://scikit-learn.org/stable/modules/feature_extraction.html)
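For example, a minimal sketch of building a bag of words with n-grams using scikit-learn's CountVectorizer (toy sentences):

```python
from sklearn.feature_extraction.text import CountVectorizer

docs = ["the cat sat on the mat", "the dog sat on the log"]

# ngram_range=(1, 2) keeps both single words and two-word phrases
vectorizer = CountVectorizer(ngram_range=(1, 2))
X = vectorizer.fit_transform(docs)     # documents x n-gram counts (sparse matrix)

print(sorted(vectorizer.vocabulary_))  # the n-gram "dictionary"
print(X.toarray())
```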
null
CC BY-SA 3.0
null
2014-09-10T07:33:06.470
2014-09-10T07:33:06.470
null
null
3259
null
1099
2
null
1095
34
null
No, despite their names they are not equivalent or even that similar. - Gini impurity is a measure of misclassification, which applies in a multiclass classifier context. - Gini coefficient applies to binary classification and requires a classifier that can in some way rank examples according to the likelihood of being in a positive class. Both could be applied in some cases, but they are different measures for different things. Impurity is what is commonly used in [decision trees](https://en.wikipedia.org/wiki/Decision_tree).
null
CC BY-SA 3.0
null
2014-09-10T08:15:17.343
2016-11-30T23:06:49.983
2016-11-30T23:06:49.983
26596
21
null
1100
1
null
null
11
741
I'm looking at pybrain for taking server monitor alarms and determining the root cause of a problem. I'm happy with training it using supervised learning and curating the training data sets. The data is structured something like this:

- Server Type A #1
  - Alarm type 1
  - Alarm type 2
- Server Type A #2
  - Alarm type 1
  - Alarm type 2
- Server Type B #1
  - Alarm type 99
  - Alarm type 2

So there are n servers, with x alarms that can be `UP` or `DOWN`. Both `n` and `x` are variable. If Server A1 has alarms 1 & 2 as `DOWN`, then we can say that service a is down on that server and is the cause of the problem. If alarm 1 is down on all servers, then we can say that service a is the cause. There can potentially be multiple options for the cause, so straight classification doesn't seem appropriate.

I would also like to tie later sources of data to the net, such as scripts that ping some external service. All the appropriate alarms may not be triggered at once, due to serial service checks, so it can start with one server down and then another server down 5 minutes later.

I'm trying to do some basic stuff at first:

```
from pybrain.tools.shortcuts import buildNetwork
from pybrain.datasets import SupervisedDataSet
from pybrain.supervised.trainers import BackpropTrainer

INPUTS = 2
OUTPUTS = 1

# Build network
# 2 inputs, 3 hidden, 1 output neurons
net = buildNetwork(INPUTS, 3, OUTPUTS)

# Build dataset
# Dataset with 2 inputs and 1 output
ds = SupervisedDataSet(INPUTS, OUTPUTS)

# Add one sample, iterable of inputs and iterable of outputs
ds.addSample((0, 0), (0,))

# Train the network with the dataset
trainer = BackpropTrainer(net, ds)

# Train 10 epochs
for x in xrange(10):
    trainer.train()

# Train infinite epochs until the error rate is low
trainer.trainUntilConvergence()

# Run an input over the network
result = net.activate([2, 1])
```

But I'm having a hard time mapping variable numbers of alarms to static numbers of inputs. For example, if we add an alarm to a server, or add a server, the whole net needs to be rebuilt. If that is something that needs to be done, I can do it, but I want to know if there's a better way.

Another option I'm trying to think of is to have a different net for each type of server, but I don't see how I can draw an environment-wide conclusion, since it will just make evaluations on a single host, instead of all hosts at once.

Which type of algorithm should I use, and how do I map the dataset to draw environment-wide conclusions as a whole with variable inputs?
Neural net for server monitoring
CC BY-SA 3.0
null
2014-09-10T14:50:13.720
2016-03-07T17:23:56.783
null
null
3263
[ "machine-learning", "neural-network" ]
1101
2
null
1095
0
null
I think they both represent the same concept. In classification trees, the Gini index is used to compute the impurity of a data partition. Assume the data partition D consists of 4 classes, each with equal probability. Then the Gini index (Gini impurity) will be:

$Gini(D) = 1 - (0.25^2 + 0.25^2 + 0.25^2 + 0.25^2) = 0.75$

In CART we perform binary splits. So the Gini index of a split is computed as the weighted sum of the Gini impurity of the resulting partitions, and we select the split with the smallest Gini index. So the use of Gini impurity (Gini index) is not limited to binary situations.

Another term for Gini impurity is Gini coefficient, which is normally used as a measure of income distribution.
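A small Python sketch of the impurity computation described above (toy labels):

```python
import numpy as np

def gini_impurity(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

# Four classes with equal probability -> 1 - 4 * 0.25^2 = 0.75
print(gini_impurity(["a", "b", "c", "d"]))

# Weighted Gini of a candidate binary split, as used when growing a CART tree
def split_gini(left, right):
    n = len(left) + len(right)
    return len(left) / n * gini_impurity(left) + len(right) / n * gini_impurity(right)

print(split_gini(["a", "a", "b"], ["b", "b"]))
```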
null
CC BY-SA 4.0
null
2014-09-10T15:52:04.743
2021-02-08T02:17:02.927
2021-02-08T02:17:02.927
29169
979
null
1102
1
1103
null
4
1302
The CoreNLP part-of-speech tagger and named entity recognition tagger are pretty good out of the box, but I'd like to improve the accuracy further so that the overall program runs better. To explain more about accuracy -- there are situations in which the POS/NER is wrongly tagged. For instance:

- "Oversaw car manufacturing" gets tagged as NNP-NN-NN rather than VB* or something similar, since it's a verb-like phrase (I'm not a linguist, so take this with a grain of salt).

So what's the best way to improve accuracy?

- Are there better models out there for POS/NER that can be incorporated into CoreNLP?
- Should I switch to other NLP tools?
- Or create training models with exception rules?
Improve CoreNLP POS tagger and NER tagger?
CC BY-SA 3.0
null
2014-09-11T17:09:52.313
2014-09-12T00:40:07.877
null
null
2785
[ "nlp", "language-model" ]
1103
2
null
1102
1
null
Your best bet is to train your own models on the kind of data you're going to be working with.
null
CC BY-SA 3.0
null
2014-09-12T00:40:07.877
2014-09-12T00:40:07.877
null
null
819
null
1104
1
null
null
1
50
I have been tasked with creating a pipeline chart with the live data and the budgeted numbers. I know the probability of each phase reaching the next. The problem is I have no idea what to do about the pipeline budgeting with regard to time. For instance, over what period of time should I show closed sales in the chart? I have honestly been working on trying to figure it out, but each successive revision gets me farther from the answer.
Modeling Pipeline Budget
CC BY-SA 3.0
null
2014-09-12T04:22:38.823
2014-09-12T04:22:38.823
null
null
3279
[ "recommender-system", "time-series" ]
1105
1
1150
null
2
1195
I am currently trying to implement logistic regression with iteratively reweighted least squares, according to "Pattern Recognition and Machine Learning" by C. Bishop. In a first approach I tried to implement it in C#, where I used Gauss' algorithm to solve eq. 4.99. For a single feature it gave very promising (nearly exact) results, but whenever I tried to run it with more than one feature my system matrix became singular, and the weights did not converge. I first thought that it was my implementation, but when I implemented it in SciLab the results persisted. The SciLab (more concise due to matrix operators) code I used is

```
phi = [1; 0; 1; 1];
t = [1; 0; 0; 0];
w = [1];

w' * phi(1,:)'

for in=1:100
    y = [];
    R = zeros(size(phi,1));
    R_inv = zeros(size(phi,1));
    for i=1:size(phi,1)
        y(i) = 1/(1+ exp(-(w' * phi(i,:)')));
        R(i,i) = y(i)*(1 - y(i));
        R_inv(i,i) = 1/R(i,i);
    end
    z = phi * w - R_inv*(y - t)
    w = inv(phi'*R*phi)*phi'*R*z
end
```

With the values for phi (input/features) and t (output/classes), it yields a weight of -0.6931472, which corresponds to a predicted probability of pretty much 1/3. That seems fine to me, for there is a 1/3 probability of being assigned to class 1 if feature 1 is present (please forgive me if my terms do not comply with ML language completely, for I am a software developer).

If I now add an intercept feature, which would correspond to

```
phi = [1, 1; 1, 0; 1, 1; 1, 1];
w = [1; 1];
```

my R matrix becomes singular and the last weight values are

```
w =
  - 5.8151677
    1.290D+30
```

which - to my reading - would mean that the probability of belonging to class 1 would be close to 1 if feature 1 is present, and about 3% for the rest. There has got to be some error I made, but I do not get which one. Since both implementations yield the same results, I suspect that there is some point I've been missing or gotten wrong, but I do not understand which one.
Logistic Regression implementation does not converge
CC BY-SA 3.0
null
2014-09-12T11:06:48.203
2014-09-23T02:20:24.670
null
null
3283
[ "logistic-regression" ]
1106
1
null
null
9
505
I have an HTML string and want to find out if a word I supply is relevant to that string. Relevancy could be measured based on frequency in the text. An example to illustrate my problem:

```
this is an awesome bike store
bikes can be purchased online.
the bikes we own rock.
check out our bike store now
```

Now I want to test a few other words:

```
bike repairs
dog poo
```

`bike repairs` should be marked as relevant whereas `dog poo` should not be marked as relevant.

Questions:

- How could this be done?
- How do I filter out ambiguous words like "in" or "or"?

Thanks for your ideas! I guess it's something Google does to figure out what keywords are relevant to a website. I am basically trying to reproduce their on-page rankings.
How to build a textual search engine?
CC BY-SA 3.0
null
2014-09-12T11:48:21.617
2020-08-17T01:00:18.380
2014-09-15T01:19:20.410
381
3284
[ "machine-learning", "data-mining" ]
1107
1
1112
null
35
15673
I have a classification problem with approximately 1000 positive and 10000 negative samples in the training set, so this data set is quite unbalanced. A plain random forest just tries to mark all test samples as the majority class. Some good answers about sub-sampling and weighted random forests are given here: [What are the implications for training a Tree Ensemble with highly biased datasets?](https://datascience.stackexchange.com/questions/454/what-are-the-implications-for-training-a-tree-ensemble-with-highly-biased-datase) Which classification methods besides RF can handle the problem in the best way?
Quick guide into training highly imbalanced data sets
CC BY-SA 3.0
null
2014-09-12T15:20:51.767
2016-07-15T22:10:08.333
2017-04-13T12:50:41.230
-1
97
[ "machine-learning", "classification", "dataset", "class-imbalance" ]
1108
1
1121
null
6
4363
As mentioned [before](https://datascience.stackexchange.com/questions/1107/quick-guide-into-training-highly-imbalanced-data-sets), I have a classification problem and an unbalanced data set. The majority class contains 88% of all samples. I have trained a Generalized Boosted Regression model using `gbm()` from the `gbm` package in `R` and get the following output:

```
interaction.depth  n.trees  Accuracy  Kappa  Accuracy SD  Kappa SD
1                  50       0.906     0.523  0.00978      0.0512
1                  100      0.91      0.561  0.0108       0.0517
1                  150      0.91      0.572  0.0104       0.0492
2                  50       0.908     0.569  0.0106       0.0484
2                  100      0.91      0.582  0.00965      0.0443
2                  150      0.91      0.584  0.00976      0.0437
3                  50       0.909     0.578  0.00996      0.0469
3                  100      0.91      0.583  0.00975      0.0447
3                  150      0.911     0.586  0.00962      0.0443
```

Looking at the 90% accuracy, I assume that the model has labeled all the samples as the majority class. That's clear. What is not transparent is how Kappa is calculated.

- What do these Kappa values (near 60%) really mean? Is it enough to say that the model is not classifying just by chance?
- What do Accuracy SD and Kappa SD mean?
Kappa near to 60% in unbalanced (1:10) data set
CC BY-SA 3.0
null
2014-09-12T16:26:15.827
2020-04-01T21:11:58.403
2017-04-13T12:50:41.230
-1
97
[ "r", "class-imbalance", "gbm" ]
1109
2
null
1107
20
null
Undersampling the majority class is usually the way to go in such situations. If you think that you have too few instances of the positive class, you may perform oversampling, for example, sample 5n instances with replacement from the dataset of size n. Caveats: - Some methods may be sensitive to changes in the class distribution, e.g. for Naive Bayes - it affects the prior probabilities. - Oversampling may lead to overfitting
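A minimal Python sketch of both options using scikit-learn's resample utility (the data here is synthetic, standing in for an existing feature matrix X and binary label vector y):

```python
import numpy as np
from sklearn.utils import resample

# Synthetic imbalanced data: 1000 positives (1), 10000 negatives (0)
rng = np.random.RandomState(0)
X = rng.randn(11000, 5)
y = np.array([1] * 1000 + [0] * 10000)

pos_idx = np.where(y == 1)[0]
neg_idx = np.where(y == 0)[0]

# Undersample the majority class down to the minority size
neg_down = resample(neg_idx, replace=False, n_samples=len(pos_idx), random_state=0)
keep = np.concatenate([pos_idx, neg_down])
X_under, y_under = X[keep], y[keep]

# Oversample the minority class 5x with replacement (beware of overfitting)
pos_up = resample(pos_idx, replace=True, n_samples=5 * len(pos_idx), random_state=0)
keep = np.concatenate([pos_up, neg_idx])
X_over, y_over = X[keep], y[keep]
```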
null
CC BY-SA 3.0
null
2014-09-12T20:30:51.740
2014-09-12T20:30:51.740
null
null
816
null
1110
1
1118
null
6
1585
I want to cluster a set of long-tailed / pareto-like data into several bins (actually the bin number is not determined yet). Which algorithm or model would anyone recommend?
Binning long-tailed / pareto data before clustering
CC BY-SA 3.0
null
2014-09-13T06:33:17.360
2017-05-30T14:50:23.443
2017-05-30T14:50:23.443
14372
3289
[ "clustering", "k-means" ]
1112
2
null
1107
24
null
- Max Kuhn covers this well in Ch. 16 of Applied Predictive Modeling.
- As mentioned in the linked thread, imbalanced data is essentially a cost-sensitive training problem. Thus any cost-sensitive approach is applicable to imbalanced data.
- There are a large number of such approaches, not all implemented in R: C50 and weighted SVMs are options, as is Jous-boost. Rusboost, I think, is only available as Matlab code.
- I don't use Weka, but believe it has a large number of cost-sensitive classifiers.
- Handling imbalanced datasets: A review. Sotiris Kotsiantis, Dimitris Kanellopoulos, Panayiotis Pintelas.
- On the Class Imbalance Problem. Xinjian Guo, Yilong Yin, Cailing Dong, Gongping Yang, Guangtong Zhou.
null
CC BY-SA 3.0
null
2014-09-13T15:36:19.867
2014-09-13T15:36:19.867
null
null
3294
null
1113
1
1146
null
2
158
I have a general methodological question. I have two columns of data, with one a column a numeric variable for age and another column a short character variable for text responses to a question. My goal is to group the age variable (that is, create cut points for the age variable), based on the text responses. I'm unfamiliar with any general approaches for doing this sort of analysis. What general approaches would you recommend? Ideally I'd like to categorize the age variable based on linguistic similarity of the text responses.
General approaches for grouping a continuous variable based on text data?
CC BY-SA 3.0
null
2014-09-13T17:13:23.373
2015-08-21T07:33:06.983
2015-08-21T07:33:06.983
4647
36
[ "bigdata", "clustering", "text-mining" ]
1114
2
null
1107
14
null
Gradient boosting is also a good choice here. You can use the gradient boosting classifier in scikit-learn, for example. Gradient boosting is a principled method of dealing with class imbalance by constructing successive training sets based on incorrectly classified examples.
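A minimal scikit-learn sketch of that suggestion (synthetic imbalanced data for illustration; with real data you would also tune the decision threshold rather than using the default 0.5):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# ~90/10 class imbalance in a synthetic dataset
X, y = make_classification(n_samples=5000, weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

clf = GradientBoostingClassifier(n_estimators=200, learning_rate=0.05, max_depth=3)
clf.fit(X_tr, y_tr)

# Work with predicted probabilities and pick a threshold from precision/recall
proba = clf.predict_proba(X_te)[:, 1]
```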
null
CC BY-SA 3.0
null
2014-09-13T18:17:33.253
2014-09-13T18:17:33.253
null
null
92
null