Id | PostTypeId | AcceptedAnswerId | ParentId | Score | ViewCount | Body | Title | ContentLicense | FavoriteCount | CreationDate | LastActivityDate | LastEditDate | LastEditorUserId | OwnerUserId | Tags |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
4826
|
2
| null |
3804
|
1
| null |
Suppose that a forum post has, on average, 2,000 characters, which is more or less the equivalent of a page of text; then the total memory needed is about 10 MB if the text is ASCII. Even if the text is Unicode-encoded in an Asian language, it will take about 40 MB.
This is a trivially small amount for modern computers, so a simple text format is best: it can be parsed very quickly and loaded into RAM all at once.
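A back-of-the-envelope sketch of that arithmetic in Python (the post count of 5,000 is an assumption implied by the 10 MB figure, not something stated in the question):
```
# Hypothetical numbers: ~5,000 posts of ~2,000 characters each
n_posts = 5000
chars_per_post = 2000

ascii_bytes = n_posts * chars_per_post        # 1 byte per character
unicode_bytes = n_posts * chars_per_post * 4  # worst case: 4 bytes per character

print(ascii_bytes / 1e6, "MB")    # ~10 MB
print(unicode_bytes / 1e6, "MB")  # ~40 MB
```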
| null |
CC BY-SA 3.0
| null |
2015-01-07T07:39:04.027
|
2015-01-07T07:39:04.027
| null | null |
108
| null |
4827
|
1
| null | null |
1
|
278
|
Is there any way to use the `dplyr` package in RStudio with R base 3.0.2?
I am not interested in the `plyr` package.
|
Can we use package "dplyr" on R base 3.0.2?
|
CC BY-SA 4.0
| null |
2015-01-07T16:50:18.803
|
2018-12-28T00:50:36.973
|
2018-12-28T00:50:36.973
|
134
|
7702
|
[
"data-mining"
] |
4828
|
2
| null |
4827
|
5
| null |
dplyr 0.3 requires R 3.1+. If you're stuck on R 3.0.x, you have to use dplyr 0.2.x.
| null |
CC BY-SA 3.0
| null |
2015-01-07T22:03:37.357
|
2015-01-07T22:03:37.357
| null | null |
404
| null |
4829
|
2
| null |
713
|
1
| null |
We faced this problem and analysed the issue.
Neither Cloudera, Hue, nor Hive reported any error, but users going through Beeswax were unable to run queries.
What turned out to be worth checking:
Duplicate PIDs for the Hive server (check all Hive servers; if you have a tool to detect duplicate PIDs, use it, or use kinit to log in to the Cloudera admin node and inspect the processes).
We killed the duplicate PID and restarted the Hive server, which fixed the issue.
| null |
CC BY-SA 3.0
| null |
2015-01-08T00:25:05.277
|
2015-01-08T00:25:05.277
| null | null |
7707
| null |
4830
|
2
| null |
3804
|
0
| null |
I think one has to be very careful when storing textual data. If the texts are user comments, then for security reasons it is better to encode them in some format before storage.
A protobuf schema can then be defined to handle the encoding and decoding.
The choice of database should depend on the query pattern and the acceptable retrieval latency. Just a recommendation: if the idea is to store comments over a period of time for each user, consider HBase or Cassandra; they are optimized for time-range queries.
Recommended read: [http://info.mapr.com/rs/mapr/images/Time_Series_Databases.pdf](http://info.mapr.com/rs/mapr/images/Time_Series_Databases.pdf)
| null |
CC BY-SA 3.0
| null |
2015-01-08T07:55:31.067
|
2015-01-08T07:55:31.067
| null | null |
5179
| null |
4831
|
1
|
4833
| null |
9
|
4161
|
Could anyone recommend a good similarity measure for objects which have multiple classes, where each class is part of a hierarchy?
For example, let's say the classes look like:
```
1 Produce
1.1 Eggs
1.1.1 Duck eggs
1.1.2 Chicken eggs
1.2 Milk
1.2.1 Cow milk
1.2.2 Goat milk
2 Baked goods
2.1 Cakes
2.1.1 Cheesecake
2.1.2 Chocolate
```
An object might be tagged with items from the above at any level, e.g.:
```
Omelette: eggs, milk (1.1, 1.2)
Duck egg omelette: duck eggs, milk (1.1.1, 1.2)
Goat milk chocolate cheesecake: goat milk, cheesecake, chocolate (1.2.2, 2.1.1, 2.1.2)
Beef: produce (1)
```
If the classes weren't part of a hierarchy, I'd probably look at cosine similarity (or equivalent) between the classes assigned to an object, but I'd like to use the fact that different classes with the same parents also have some similarity value (e.g. in the example above, beef has some small similarity to omelette, since both are tagged with items from the class '1 Produce').
If it helps, the hierarchy has ~200k classes, with a maximum depth of 5.
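Just to make the idea concrete, here is a rough Python sketch of the kind of measure I have in mind (the prefix-depth similarity and the best-match averaging are only illustrative, not something I am committed to):
```
def class_sim(a, b):
    """Similarity of two class codes like '1.2.2', based on shared prefix depth."""
    pa, pb = a.split("."), b.split(".")
    shared = 0
    for x, y in zip(pa, pb):
        if x != y:
            break
        shared += 1
    return shared / max(len(pa), len(pb))

def object_sim(tags1, tags2):
    """Symmetrised average best-match similarity between two sets of class codes."""
    def one_way(src, dst):
        return sum(max(class_sim(s, d) for d in dst) for s in src) / len(src)
    return 0.5 * (one_way(tags1, tags2) + one_way(tags2, tags1))

print(object_sim(["1.1", "1.2"], ["1"]))             # omelette vs beef: non-zero via '1 Produce'
print(object_sim(["1.1", "1.2"], ["1.1.1", "1.2"]))  # omelette vs duck egg omelette: higher
```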
|
Similarity measure based on multiple classes from a hierarchical taxonomy?
|
CC BY-SA 3.0
| null |
2015-01-08T10:09:32.010
|
2015-01-08T12:08:43.113
| null | null |
474
|
[
"similarity"
] |
4833
|
2
| null |
4831
|
11
| null |
While I don't have enough expertise to advise you on selecting the best similarity measure, I've seen a number of them in various papers. The following collection of research papers will hopefully be useful in determining the optimal measure for your research. Note that, for the sake of more comprehensive coverage, I intentionally included papers using both frequentist and Bayesian approaches to hierarchical classification that incorporate class information.
Frequentist approach:
- Semantic similarity based on corpus statistics and lexical taxonomy
- Can’t see the forest for the leaves: Similarity and distance measures for hierarchical taxonomies with a patent classification example (also see additional results and data)
- Learning hierarchical similarity metrics
- A new similarity measure for taxonomy based on edge counting
- Hierarchical classification of real life documents
- Hierarchical document classification using automatically generated hierarchy
- Split-Order distance for clustering and classification hierarchies
- A hierarchical k-NN classifier for textual data
Bayesian approach:
- Improving classification when a class hierarchy is available using a hierarchy-based prior
- Bayesian aggregation for hierarchical genre classification
- Hierarchical classification for multiple, distributed web databases
| null |
CC BY-SA 3.0
| null |
2015-01-08T12:08:43.113
|
2015-01-08T12:08:43.113
| null | null |
2452
| null |
4834
|
1
|
4835
| null |
1
|
35
|
I am working on a project where we would like to take the ratio of two measurements, A/B, and subject these ratios to a ranking algorithm. The ratio is normalized prior to ranking (though the ranking/normalization are not that important to my question).
In most cases measurement A (the starting measurement) is a count with values greater than 1000. We expect an increase in measurement B for positive effects and a decrease in measurement B for negative effects.
Here is the issue: some of our starting counts are nearly zero, which we believe is an artifact of experimental preparation. This of course leads to some really high ratios/scaling issues for these data points.
What is the best way to adjust these values in order to better understand their real role in our experiment?
One suggestion we received was to add 1000 to all counts (from measurements A and B) to scale the values and remove the bias of such a low starting count. Is this a viable option? Thank you in advance for your assistance, and let me know if I am not being clear enough.
|
Correcting Datasets with artificially low starting values
|
CC BY-SA 3.0
| null |
2015-01-08T13:44:20.160
|
2015-01-08T23:25:55.630
|
2015-01-08T23:25:55.630
|
1367
|
7713
|
[
"statistics",
"data-cleaning"
] |
4835
|
2
| null |
4834
|
1
| null |
Yes, the general idea is to add a baseline small count to every category. The technical term for this is [Laplace smoothing](http://en.wikipedia.org/wiki/Additive_smoothing). Really it's not so much a hack as encoding the idea that you think there is some (uniform?) prior distribution of the events occurring.
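To make that concrete, here is a minimal Python sketch of adding the pseudo-count suggested in the question before taking the ratio (whether 1000 is the right magnitude is something only your data can tell you):
```
def smoothed_ratio(a, b, pseudo=1000):
    """Ratio of the two measurements after adding the same pseudo-count to both."""
    return (a + pseudo) / (b + pseudo)

# A near-zero starting count no longer produces an extreme ratio...
print(3 / 600, smoothed_ratio(3, 600))          # 0.005 vs ~0.63
# ...while large counts are barely affected
print(2000 / 2400, smoothed_ratio(2000, 2400))  # ~0.83 vs ~0.88
```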
| null |
CC BY-SA 3.0
| null |
2015-01-08T14:22:33.347
|
2015-01-08T23:25:48.103
|
2015-01-08T23:25:48.103
|
1367
|
21
| null |
4836
|
1
| null | null |
9
|
10739
|
I would like to know if you have some good tutorials (fast and straightforward) about topic models and LDA, which teach intuitively how to set the parameters, what they mean, and, if possible, with some real examples.
|
Tutorials on topic models and LDA
|
CC BY-SA 3.0
| null |
2015-01-08T15:47:34.103
|
2018-11-30T15:24:04.310
| null | null |
7715
|
[
"topic-model",
"lda"
] |
4837
|
1
| null | null |
0
|
471
|
(Me: Never learned calculus or advanced math and I started Stanford openclasses for machine learning. I know basic matrix calculations.)
One chapter of my course is about the cost function. I have been trying to find an example calculation of it with actual numbers. Googling only turns up the same formula every time, plus implementations in Octave. But I want to do the calculation first with pen and paper, and without that I cannot understand it. Please give me a very simple example of using the formula with numbers. Thanks a lot.
I require a cost function calculation example for the following sample dataset:
```
#Rooms = Rent
1 = 4000
2 = 10000
3 = 22000
4 = 30000
```
|
Learning cost function for linear regression
|
CC BY-SA 3.0
| null |
2015-01-08T16:13:36.613
|
2015-01-10T11:11:10.110
|
2015-01-10T11:11:10.110
|
1367
|
7712
|
[
"machine-learning"
] |
4838
|
2
| null |
4836
|
8
| null |
I highly recommend this tutorial: [Getting Started with Topic Modeling and MALLET](http://programminghistorian.org/lessons/topic-modeling-and-mallet)
Here are some additional links to help you get started...
Good introductory materials (including links to research papers): [http://www.cs.princeton.edu/~blei/topicmodeling.html](http://www.cs.princeton.edu/~blei/topicmodeling.html)
Software:
- MALLET (Java): http://mallet.cs.umass.edu/topics.php
topic modeling developer's guide: http://mallet.cs.umass.edu/topics-devel.php
- gensim (Python): http://radimrehurek.com/gensim/
- topicmodels (R): http://cran.r-project.org/web/packages/topicmodels/index.html
- Stanford Topic Modeling Toolbox (designed for use by social scientists): http://www-nlp.stanford.edu/software/tmt/tmt-0.4/
- Mr.LDA (scalable topic modeling using MapReduce): http://lintool.github.io/Mr.LDA/
If you're working with massive amounts of input text, you might want to consider using Mr.LDA to build your topic models -- its MapReduce-based approach might be more efficient when working with lots of data.
Even more here on the Biased Estimates blog: [Topic Models Reading List](http://www.biasedestimates.com/p/topic-models-reading-list.html)
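And if you end up trying gensim, fitting a first LDA model only takes a few lines; here is a minimal sketch (the toy corpus and the parameter values are just placeholders to experiment with):
```
from gensim import corpora, models

# Toy documents, already tokenized; in practice you would also remove stopwords, etc.
texts = [["cat", "dog", "pet"],
         ["dog", "bone", "pet"],
         ["stock", "market", "price"],
         ["price", "market", "trade"]]

dictionary = corpora.Dictionary(texts)                 # word <-> id mapping
corpus = [dictionary.doc2bow(text) for text in texts]  # bag-of-words vectors

# num_topics and passes are the main knobs to play with
lda = models.LdaModel(corpus, id2word=dictionary, num_topics=2, passes=10)
print(lda.print_topics())
```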
| null |
CC BY-SA 3.0
| null |
2015-01-08T20:58:44.547
|
2015-01-08T20:58:44.547
| null | null |
819
| null |
4839
|
2
| null |
3804
|
1
| null |
Let me assume you intend to use Python libraries to analyze the data, since you are using Scrapy to gather the data.
If this is true, then a factor to consider for storage would be compatibility with other Python libraries. Of course, plain text is compatible with anything. But e.g. Pandas has a [host of IO tools](http://pandas.pydata.org/pandas-docs/stable/io.html) that simplifies reading from certain formats. If you intend to use `scikit-learn` for modeling, then Pandas can still read the data in for you, if you then cast it from a DataFrame to a Numpy array as an intermediate step.
These tools allow you to read CSV and JSON, but also HDF5 ... particularly, I would draw your attention to the [experimental support for msgpack](http://pandas.pydata.org/pandas-docs/stable/io.html#io-msgpack), which seems to be a [binary version of JSON](http://msgpack.org). Binary means here that the stored files will be smaller and therefore faster to read and write. A somewhat similar alternative is [BSON](http://bsonspec.org), which has a Python implementation — no Pandas or Numpy involved.
Considering these formats only makes sense if you intend to give at least some formatting to the stored text, e.g. storing the post title separately from the post content, or storing all posts in a thread in order, or storing the timestamp ... If you considered JSON at all, then I suppose this is what you intended. If you just intend to store the plain post contents, then use plain text.
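Just to illustrate the Pandas point, here is a small sketch (the field names are invented) of round-tripping structured posts through JSON and handing the result to NumPy:
```
import pandas as pd

# Hypothetical structured posts scraped from the forum
posts = pd.DataFrame([
    {"thread": 1, "title": "First post", "body": "Hello world", "ts": "2015-01-08T21:00:00"},
    {"thread": 1, "title": "Re: First post", "body": "Hi back", "ts": "2015-01-08T21:05:00"},
])

posts.to_json("posts.json", orient="records")              # write structured text
same_posts = pd.read_json("posts.json", orient="records")  # read it back

X = same_posts[["thread"]].values  # cast to a NumPy array, e.g. for scikit-learn
```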
| null |
CC BY-SA 3.0
| null |
2015-01-08T21:07:57.643
|
2015-01-08T21:07:57.643
| null | null |
1367
| null |
4842
|
2
| null |
2612
|
3
| null |
You have a few problems here. The first is cleaning your data. That's a whole separate issue from anonymization and belongs in another question if you're still having problems with it.
The second is your anonymization. Once you have some sort of identifier you're satisfied with (it sounds like you're using people's real names), try hashing the names to generate a new id. This id is useful because you'll always be able to take the original name and compute its id, but you won't be able to derive the real name from just the hashed id (provided your hashing algorithm is good).
Further reading:
- https://security.stackexchange.com/a/61878
- https://stackoverflow.com/a/21563966/4435034
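To make the hashing step concrete, here is a minimal Python sketch (using a keyed hash is my own suggestion; it makes dictionary attacks on known names harder than a plain unsalted hash would):
```
import hashlib
import hmac

SECRET_KEY = b"keep-this-out-of-the-released-data"  # hypothetical secret, stored separately

def anonymize(name):
    """Map a real name to a stable, non-reversible identifier."""
    normalized = name.strip().lower().encode("utf-8")
    return hmac.new(SECRET_KEY, normalized, hashlib.sha256).hexdigest()

print(anonymize("Jane Doe"))    # always the same id for the same name
print(anonymize("John Smith"))
```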
| null |
CC BY-SA 3.0
| null |
2015-01-09T00:18:05.037
|
2015-01-09T00:18:05.037
|
2017-05-23T12:38:53.587
|
-1
|
7719
| null |
4843
|
2
| null |
3795
|
2
| null |
Looks like lots of marketing around a specific case of what everyone was already doing to sound bigger than it is. So, are people implementing mathematical frameworks in software to extract and use the structure within data? Absolutely. Are people using the word 'Correlithm' when doing it? Not as far as I've seen.
| null |
CC BY-SA 3.0
| null |
2015-01-09T00:25:16.067
|
2015-01-09T00:25:16.067
| null | null |
7719
| null |
4844
|
1
|
4846
| null |
1
|
8235
|
I want to build a home server/workstation to run my R projects. Based on what I have gathered, it should probably be Linux based. I want to buy the hardware now, but I am confused with the many available options for processors/ram/motherboards. I want to be able to use parallel processing, at least 64GB? of memory and enough storage space (~10TB?). Software wise, Ubuntu?, R, RStudio, PostgreSQL, some NOSQL database, probably Hadoop. I do a lot of text/geospatial/network analytics that are resource intensive. Budget ~$3000US.
My Questions:
What could an ideal configuration look like? (Hardware + Software)
What type of processor?
Notes:
No, I don't want to use a cloud solution.
I know it is a vague question, but any thoughts will help, please?
If it is off-topic or too vague, I will gladly delete.
Cheers B
|
Hardware requirements for Linux server to run R & RStudio
|
CC BY-SA 3.0
| null |
2015-01-09T03:46:04.837
|
2015-01-09T08:22:40.920
| null | null |
7722
|
[
"r"
] |
4845
|
1
| null | null |
2
|
1013
|
I'm interested in an overview of the modern/state-of-the-art approaches to bag-of-words information retrieval, where you have a single query $q$ and a set of documents which you hope to rank by relevance $d_1,...,d_n$.
I'm specifically interested in approaches which require absolutely no linguistic knowledge and rely on no outside linguistic or lexical resources for boosting performance (such as thesauri or prebuilt word-nets and the like). Thus where a ranking is produced entirely by evaluating query-document similarity and where the problems of synonymy and polysemy are overcome by exploiting inter-document word co-occurence.
I've spent some amount of time in the literature (reading papers/tutorials), however there is so much information out there it's hard to get a bird's eye view. The best I can make out is that modern approaches involve some combination of a weighted vector space model (such as a generalized vector space model, LSI, or a topic-based vector space model using LDA), in conjunction with pseudo-relevance feedback (using either Rocchio or some more advanced approach).
All these vector space models tend to use cosine-similarity as the similarity function, however I have seen some literature discussing similarity functions which are more exotic.
Is this basically where we currently are in attacking this particular type of problem?
|
Approaches to Bag-Of-Words Information Retrieval
|
CC BY-SA 3.0
| null |
2015-01-09T05:42:54.587
|
2015-01-12T16:39:40.640
|
2015-01-09T06:30:31.983
|
7723
|
7723
|
[
"information-retrieval"
] |
4846
|
2
| null |
4844
|
2
| null |
There is no ideal configuration, for `R` or in general - product selection is always a difficult task and many factors are at play. I think that the solution is rather simple - get the best computer that your budget allows.
Having said that, since you want to focus on `R` development and one of `R`'s pressing issues is its [critical dependence](https://stackoverflow.com/q/5171593/2872891) on the amount of available physical memory (RAM), I would suggest favoring more RAM to other parameters. The second most important parameter, in my opinion, would be number of cores (or processors - see details below), due to your potential multiprocessing focus. Finally, the two next most important criteria I'd pay attention to would be compatibility with Linux and system/manufacturer's quality.
As far as storage goes, I suggest considering solid-state drives (SSDs) if you'd rather have a bit more speed than more space (however, if your work will involve intensive disk operations, you might want to investigate the issue of SSD reliability or consult people knowledgeable in this matter). However, I think that for R-focused work, disk operations are much less critical than memory ones, as I've mentioned above.
When choosing a specific Linux distribution, I suggest using a well-supported one, such as Debian or, even better, Ubuntu (if you care more about support, choose their LTS version). I'd rather not buy parts and assemble a custom box, but some people would definitely prefer that route - for that you really need to know hardware well, and potential compatibility could still be an issue. The next paragraph provides some examples for both commercial-off-the-shelf (COTS) and custom solutions.
Should you be interested in the custom-system route, [this discussion](http://www.tomshardware.com/forum/295552-28-multiprocessor-multicore-system-rendering-workstation) might be worth reading, as it contains some interesting pricing numbers (just to get an idea of potential savings) and also sheds some light on multiprocessor vs. multi-core alternatives (obviously, the context is different, but it could nevertheless be useful). As I said, I would go the COTS route, mainly due to reliability and compatibility issues. In terms of single-processor multi-core systems, your budget is more than enough. However, when we move to multiprocessor workstations (I'm not even talking about servers), even two-processor configurations can easily go over your budget. Some are not far off, such as the [HP Z820 Workstation](http://www8.hp.com/us/en/campaigns/workstations/z820.html). It starts from 2439 USD, but in a minimal configuration. When you upgrade it to match your desired specs (if that's even possible), I'm sure we'll be talking about the 5K USD price range (extrapolating from the series' higher-level models). What I like about the HP Z820, though, is that the system is [Ubuntu certified](http://www.ubuntu.com/certification/hardware/201011-6754). Considering system compatibility and assuming your desire to run Ubuntu, the best way to approach your problem is to go through the [Ubuntu-certified hardware lists](http://www.ubuntu.com/certification) and shortlist systems that you like. Just for the sake of completeness, take a look at this [interesting multiprocessor system](http://www.mediaworkstations.net/i-x2.html), which in a compatible configuration might cost less than one from HP or other major vendors. However, it is multimedia-oriented, its reliability and compatibility are unknown, and it is well over your specified budget.
In terms of `R` and `R`-focused software, I highly recommend using RStudio Server instead of RStudio, as that will let you work from any Internet-enabled location (provided your computer is running, obviously). Another piece of advice is to keep an eye on alternative `R` distributions. I'm not talking about expensive commercial ones, but about emerging open source projects, such as `pqR`: [http://www.pqr-project.org](http://www.pqr-project.org). Will update as needed. I hope this is helpful.
| null |
CC BY-SA 3.0
| null |
2015-01-09T08:22:40.920
|
2015-01-09T08:22:40.920
|
2017-05-23T12:38:53.587
|
-1
|
2452
| null |
4847
|
2
| null |
4837
|
2
| null |
There is a worked out example [in the Wikipedia page for "simple linear regression"](http://en.wikipedia.org/wiki/Simple_linear_regression#Numerical_example)
Just for the sake of it, let me plug in your example into the formulas:
The fitted model should be a straight line with parameters $\alpha$ (value at $x = 0$) and $\beta$ (the slope):
$$f(x) = \alpha + \beta x$$
The values for these parameters that minimize the distance between line and data points are called $\hat{\alpha}$ and $\hat{\beta}$. They can be computed out of the data point values by using these formulae, [derived here](http://en.wikipedia.org/wiki/Simple_linear_regression#Fitting_the_regression_line):
$$\begin{align}
\hat{\beta} & = \frac{ \overline{xy} - \bar{x}\bar{y} }{ \overline{x^2} - \bar{x}^2 } ,
\\
\\
\hat{\alpha} & = \bar{y} - \hat{\beta}\bar{x}
\end{align}
$$
where an expression with an overline $\overline{xy}$ means the sample average of that expression: $\overline{xy} = \tfrac{1}{n} \sum_{i=1}^n{x_iy_i}$.
Here are the values I find for the datapoints you have listed in your question:
$$
\begin{align}
\overline{xy} &= \frac{1}{4} \sum{<(1 \times 4000), (2 \times 10000), (3 \times 22000), (4 \times 30000)>} \\
&= \frac{1}{4}(4000 + 20000 + 66000 + 120000) = 52500, \\
\overline{x} &= \frac{1}{4} \sum{<1, 2, 3, 4>} = 2.5 ,\\
\overline{y} &= \frac{1}{4} \sum{<4000, 10000, 22000, 30000>} = 16500 , \\
\overline{x^2} &= \frac{1}{4} \sum{<1^2, 2^2, 3^2, 4^2>} = 7.5 , \\
\overline{x}^2 &= 2.5^2 = 6.25
\end{align}
$$
and the fitted line should be:
$$
\begin{align}
\hat{\beta} &= \frac{52500 - 2.5 \times 16500}{7.5 - 6.25} = \frac{11250}{1.25} = 9000 , \\
\hat{\alpha} &= 16500 - 9000 \times 2.5 = - 6000 , \\
\Rightarrow f(x) &= - 6000 + 9000 x
\end{align}
$$
Therefore, the model would predict, for a house with 10 rooms, a rent of:
$$ f(x) = -6000 + 9000 \times 10 = 84000 $$
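If you want to check the arithmetic by machine, here is a small sketch assuming NumPy (just a verification of the numbers above):
```
import numpy as np

x = np.array([1, 2, 3, 4], dtype=float)
y = np.array([4000, 10000, 22000, 30000], dtype=float)

beta = (np.mean(x * y) - x.mean() * y.mean()) / (np.mean(x**2) - x.mean()**2)
alpha = y.mean() - beta * x.mean()

print(alpha, beta)        # -6000.0 9000.0
print(alpha + beta * 10)  # predicted rent for 10 rooms: 84000.0
```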
| null |
CC BY-SA 3.0
| null |
2015-01-09T10:02:08.437
|
2015-01-09T10:55:33.910
|
2015-01-09T10:55:33.910
|
1367
|
1367
| null |
4848
|
2
| null |
3797
|
3
| null |
What you are asking about is, in my view, the main problem of implementing a lambda architecture. Here are some suggestions on how to solve it.
The combination of Spark and [Spark Streaming](https://spark.apache.org/docs/1.1.1/streaming-programming-guide.html) largely supersedes the original lambda architecture (which usually involved Hadoop and Storm). [Read here](http://blog.cloudera.com/blog/2014/08/building-lambda-architecture-with-spark-streaming/) an example of how to use a `SparkContext` and a separate `StreamingContext` to produce different `RDD`s, one for batch processed results and another for real-time results.
Once you have replicated that in your system, you still have to think about how to query both kinds of `RDD`s. The trivial case would be to just `union` both of them:
```
scala> rdd1.union(rdd2).collect
```
Or maybe you can create a new `DStream`, similar to `stateStream` in the linked example, where some keys are kept for real-time results, and others for batch results.
| null |
CC BY-SA 3.0
| null |
2015-01-09T11:25:11.623
|
2015-01-09T11:25:11.623
| null | null |
1367
| null |
4850
|
2
| null |
2346
|
3
| null |
I think this point is the core of your question:
>
How should I define an "exception"? How can I know if my definition is a good one?
I would go about it as follows:
- Go back to the managers who receive the current reporting and ask them to give you examples of reported exceptions that were actually exceptions, i.e. they were acted upon and the action yielded some benefit.
- Ask them also if there were any features in that report that were not plainly viewable, but led them to think it was an exception, e.g. the month-on-month difference was not reported, but it could be computed from the week-on-week difference.
Treat the examples from the first as labels for training; you want to learn "what is an exception" and you need to have an expert answer that question for you.
Treat the suggested features as new features for your classification.
If you cannot get the experts to answer the questions you ask, then try to infer the answers from their interaction with your reports: how many of those reports were downloaded, how many times are they mentioned, how are they ever used in decision-making ... Try to separate the important ones from the uninteresting ones, and there you have your labels.
| null |
CC BY-SA 3.0
| null |
2015-01-09T18:26:08.933
|
2015-01-09T18:26:08.933
| null | null |
1367
| null |
4851
|
2
| null |
3781
|
3
| null |
I don't have enough reputation to ask clarifying questions in a comment before answering, so I'm going to do both here.
### Here are the things that would help answering this question for now:
Can you post part of the `process_set.hpc`?
What's its format? Is it a numpy array? Is it a Pandas dataframe?
What's the value of `cclasses`?
### And now the answer:
First of all, the `k-means` algorithm is able to find clusters in any n-dimensional data. If `n` is too big, it is better to use PCA, but for `n=3` that wouldn't necessarily add any value.
The second thing that looks suspicious to me is that in the documentation for `kmeans` in `scikit-learn`, there is no `compute_labels` option, as seen [here](http://scikit-learn.org/stable/modules/generated/sklearn.cluster.KMeans.html). However, that option exists for `MiniBatchKMeans` as seen [here](http://scikit-learn.org/stable/modules/generated/sklearn.cluster.MiniBatchKMeans.html).
Also, if you make your data in the form of a pandas dataframe (if it is not already so), things would be much easier to track and you won't have to reattach the timing information to your data afterwards.
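To illustrate the pandas suggestion, here is a minimal sketch (the column names are invented) that keeps the timestamps attached while clustering only the numeric columns:
```
import pandas as pd
from sklearn.cluster import KMeans

# Hypothetical data: a timestamp plus three numeric features
df = pd.DataFrame({
    "timestamp": pd.date_range("2015-01-01", periods=6, freq="D"),
    "a": [1.0, 1.1, 0.9, 5.0, 5.2, 4.8],
    "b": [2.0, 2.1, 1.9, 9.0, 9.1, 8.9],
    "c": [0.1, 0.2, 0.1, 3.0, 3.1, 2.9],
})

km = KMeans(n_clusters=2)
df["cluster"] = km.fit_predict(df[["a", "b", "c"]])  # labels line up row by row

print(df)  # the timing information never left the frame
```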
I may be able to give you a more thorough answer if I know a bit more about the format of the data.
Good luck!
| null |
CC BY-SA 3.0
| null |
2015-01-09T19:05:38.040
|
2015-01-09T19:05:38.040
|
2020-06-16T11:08:43.077
|
-1
|
7735
| null |
4852
|
1
|
4853
| null |
0
|
5332
|
I have a mysql database with the following format:
```
id string
1 foo1...
2 foo2...
.. ...
```
There are >100k entries in this db.
What I want to do is, for each string, compare it to every other string and store some metric of the comparison. Doing this will essentially yield a 2D matrix of size `NxN`, where `N` is the number of rows in the db.
My initial thought was creating another db where each row corresponds to a string in the first db (by id) and each column is the value from comparing two strings. For example, id 1, column 2 in the second db would be the value output from comparing id 1 and id 2 in the first db.
The format of the second db:
```
id col1 col2 col3 ....
1 1 0.4 0.5 .....
... ... ... ...
```
This way of creating the second db would result in 100k rows x 100k columns, which is the issue at hand. What is the best way to handle large data sets like this? Is storing the data in a text file more efficient (say, each text file corresponds to one row in the second db)?
|
Storing Big Matrix in DataBase
|
CC BY-SA 3.0
| null |
2015-01-10T00:15:45.990
|
2015-04-13T05:26:58.717
| null | null |
7739
|
[
"bigdata",
"relational-dbms"
] |
4853
|
2
| null |
4852
|
2
| null |
It is usually more practical to compute the distances on the fly rather than storing $N^2$ values. If possible, you will want to use a heuristic to select only the items that could be sufficiently similar, so you don't waste time computing distances to irrelevant items. PostgreSQL can create an [index on character trigrams](http://www.postgresql.org/docs/9.1/static/pgtrgm.html) in strings that enables efficient querying for other strings that are sufficiently similar (measured by the Jaccard similarity coefficient).
Storing an $N$ column wide table is not possible because the maximum number of columns allowed in MySQL is 4096. Even if that were not the case, using an RDBMS in such a way is considered bad practice and such a table would be difficult to work with.
If for some reason you must store a dense distance matrix, a format such as [HDF5](http://www.hdfgroup.org/HDF5/) would be more efficient than an RDBMS.
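If you do go the dense-matrix route, a sketch with h5py (the chunk layout and dtype are arbitrary choices, and `n` is kept small here for the demo) would look like:
```
import numpy as np
import h5py

n = 1000  # number of strings (100k in your case)

with h5py.File("distances.h5", "w") as f:
    dset = f.create_dataset("dist", shape=(n, n), dtype="f4",
                            chunks=(1, n))   # one row per chunk
    # fill it row by row as the comparisons are computed
    row = np.random.rand(n).astype("f4")     # placeholder for real distances
    dset[0, :] = row

with h5py.File("distances.h5", "r") as f:
    print(f["dist"][0, :10])  # read back part of a row without loading everything
```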
| null |
CC BY-SA 3.0
| null |
2015-01-10T04:33:10.407
|
2015-01-13T04:21:38.920
|
2015-01-13T04:21:38.920
|
7740
|
7740
| null |
4855
|
1
| null | null |
1
|
32
|
I am currently collecting second-by-second data on buyer- vs seller-initiated trades for different financial instruments (securities mostly). If there are more buyer-initiated trades in a given second, then that second's data point contains a positive value in the pertinent feature. If there are more seller-initiated trades, it contains a negative value. And if there is an equal amount of buyer- vs seller-initiated trades, OR there are simply no trades at all in a given second, there is a 0 for that feature in that data point.
Along with this feature, there are several other features that are based on what occurred in the preceding seconds (e.g. if the value discussed above was 12 for the data point immediately preceding the current point, then the second feature for the current data point would be 12 - please let me know if this is not clear).
After much troubleshooting, I have concluded that if there are too many data points with too many 0's for features, the classifier simply won't work. When I print out the probabilities of evaluation data points falling into different classes, I simply get
```
0:NaN,1:NaN
```
for all model evaluation points I try to classify. (I am using logistic regression from Apache Mahout. In total I have 183 features and over 40 million data points. There are three categories to which a data point can be classified.)
I have found that if I set the default value to 1, then I no longer encounter this error, e.g. if there are no trades, the value will be a 1, if there is one seller initiated trade, the value will be 0.
So with all this in mind, I have two related questions:
1) Has anyone else encountered this issue? E.g. if you have a vector with x features, and for a majority of the data points a majority of the features contain 0's, is this known to cause issues?
2) Is shifting all values up by a constant (such as 1) a valid fix to this issue? I assume that if this constant is applied to all values, then it shouldn't skew the data, but I figure it won't hurt to check with the experts.
---
Also, I'm new to this, so if you believe that my question could use more info please let me know, and if you could give me ideas of what information to include, it would be greatly appreciated.
thanks in advance
|
Shifting dataPoints up by a constant (Is there an issue with too many 0's for features?)
|
CC BY-SA 3.0
| null |
2015-01-10T22:53:14.353
|
2015-04-12T11:25:44.157
| null | null |
6636
|
[
"feature-selection",
"logistic-regression"
] |
4856
|
2
| null |
3771
|
1
| null |
The derivative is a linear transform, and you're using a linear model. As you've demonstrated, nothing is gained by adding linear combination of other features (like the derivative) as a new feature in a linear model.
You may want a derivative coefficient to help interpret the model. You could use the normal model $x_{n+1} = a * x_n + b * x_{n-1}$ and then report the value $a-b$ to the client. A more general approach is to examine the [frequency response](http://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.signal.freqz.html) of the model (the derivative is a high-pass filter, your model is an FIR filter).
Alternately you can rotate the basis of the input such that the derivative is a feature and the dimensions are still independent as follows:
You can think of the independent variables $x_n$ and $x_{n-1}$ as forming a [standard basis](https://en.wikipedia.org/wiki/Standard_basis) for the two dimensions of your model's input, that is $\langle 1, 0\rangle$ and $\langle 0, 1\rangle$.
Neither $a\langle 1, 0 \rangle$ nor $b \langle 0, 1 \rangle$ is orthogonal to $c\langle 1, -1\rangle$.
Therefore, if you want to use $c(x_n - x_{n-1})$ as a feature, I would also use the orthogonal feature $d(x_n + x_{n-1})$. That is,
$$ x_{n+1} = c(x_n - x_{n-1}) + d(x_n + x_{n-1}) $$.
You could think of this model as the derivative plus the integral.
The same logic applies to the choice of scaling function for the Haar wavelet.
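A quick numerical illustration of the first point, as a sketch assuming NumPy: fitting on $(x_n, x_{n-1})$ or on the rotated pair $(x_n - x_{n-1}, x_n + x_{n-1})$ gives different coefficients but exactly the same fitted values.
```
import numpy as np

rng = np.random.RandomState(0)
series = np.cumsum(rng.randn(200))                 # a toy time series
X = np.column_stack([series[1:-1], series[:-2]])   # x_n, x_{n-1}
y = series[2:]                                     # x_{n+1}

# Original basis
coef1, *_ = np.linalg.lstsq(X, y, rcond=None)

# Rotated basis: "derivative" and "integral" features
R = np.column_stack([X[:, 0] - X[:, 1], X[:, 0] + X[:, 1]])
coef2, *_ = np.linalg.lstsq(R, y, rcond=None)

print(coef1, coef2)                       # different coefficients...
print(np.allclose(X @ coef1, R @ coef2))  # ...identical predictions: True
```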
| null |
CC BY-SA 3.0
| null |
2015-01-11T01:57:49.490
|
2015-01-13T22:55:35.117
|
2015-01-13T22:55:35.117
|
7738
|
7738
| null |
4859
|
1
| null | null |
3
|
3850
|
I am very new to machine learning. I have annotated data with category, aspect, opinion word and sentiment. For example, for the text below:
"The apple was really tasty"
I have category->food, aspect-> apple, opinion word ->tasty and sentiment->positive. I have training data like this format.
How can I train a SVM classifier using this format of training set?
How to extract features like n-gram, POS and sentiment word to train the classifier?
Could you please suggest any beginning step for this aspect based sentiment analysis using machine learning algorithms?
|
Aspect based sentiment analysis using machine learning approach
|
CC BY-SA 3.0
| null |
2015-01-11T11:43:38.253
|
2015-01-13T17:00:04.647
| null | null |
7750
|
[
"machine-learning",
"svm",
"feature-extraction"
] |
4860
|
2
| null |
4859
|
1
| null |
I would recommend starting by reading the draft of the introductory book ["Sentiment analysis and opinion mining"](http://rads.stackoverflow.com/amzn/click/1608458849) by Bing Liu. The draft, in PDF format, is available for free [here](http://www.cs.uic.edu/~liub/FBS/SentimentAnalysis-and-OpinionMining.pdf).
More details about the new upcoming book of this author, as well as comprehensive information on the topic of aspect-based sentiment analysis, with references and links to data sets, are available at this page: [http://www.cs.uic.edu/~liub/FBS/sentiment-analysis.html](http://www.cs.uic.edu/~liub/FBS/sentiment-analysis.html).
Another interesting resource is the survey book ["Opinion mining and sentiment analysis"](http://www.cs.cornell.edu/home/llee/opinion-mining-sentiment-analysis-survey.html) by Bo Pang and Lillian Lee. The book is available in print and as a downloadable PDF e-book in a [published version](http://www.cs.cornell.edu/home/llee/omsa/omsa-published.pdf) or an [author-formatted version](http://www.cs.cornell.edu/home/llee/omsa/omsa.pdf), which are almost identical in terms of contents.
| null |
CC BY-SA 3.0
| null |
2015-01-11T13:45:04.360
|
2015-01-11T13:45:04.360
| null | null |
2452
| null |
4861
|
2
| null |
4845
|
1
| null |
While reading blogs and papers is helpful to identify the latest and greatest, having a solid foundation helps a lot, too. But I assume you already have gone over Manning's great (and free in e-book form) book on IR, right?
[http://nlp.stanford.edu/IR-book/](http://nlp.stanford.edu/IR-book/)
It contains information on creating your own thesaurus from your document collection to solve synonymy problems, LSA for polysemy, etc..
As for similarity measures, you will see there that Okapi BM25 (Robertson et al.) is considered superior to cosine similarity (but more expensive to implement and run).
Regarding the current state of the art, there was a small emergence of Bayesian Network-based classifiers in the early nineties (starting with Turtle & Croft), but that went quiet for a while.
However, right now, using BNs for IR is again finding some revival, particularly in biomedical IR.
In that respect, I think most ongoing work is directed towards using Bayesian models incl. topic models and deep learning for word-sense disambiguation (WSD) and semantic similarity.
Here is a pointer to a recent paper with good references on the topic.
[http://arxiv.org/abs/1412.6629](http://arxiv.org/abs/1412.6629)
| null |
CC BY-SA 3.0
| null |
2015-01-11T17:38:19.187
|
2015-01-11T17:38:19.187
| null | null |
6672
| null |
4862
|
2
| null |
3781
|
0
| null |
If you can provide more details about the processing you're doing to the data, I think the responses would be a little more helpful.
Here are a few things to consider:
- What's the shape of your data going into k-means? Are you aggregating up to the week level? If so, then your data won't look like the example you posted, since that seems like daily data.
- Try using .shape on your output data set. Does the number of rows match the data set you put into it? (This ties in to question #1: you'll be unable to join back directly if your original data is daily and the k-means data is weekly.)
- I'm not sure if this is what you're attempting, but just declaring data_classes as the labels isn't going to add them back in to your original data. Like oxtay suggested, using pandas is a good choice because it will allow you to join.
| null |
CC BY-SA 3.0
| null |
2015-01-12T05:47:37.127
|
2015-01-12T05:47:37.127
| null | null |
7761
| null |
4863
|
2
| null |
2436
|
2
| null |
I would like to recommend to check the following open data repositories and meta-repositories (they are not focused on categorical data, but I'm sure that many data sets, listed there, contain such data):
- http://www.kdnuggets.com/datasets
- http://www.data.gov
- http://www.datasciencecentral.com/profiles/blogs/big-data-sets-available-for-free
Also check built-in data sets in the open source software Parallel Sets, which is focused on the categorical data visualization: [https://eagereyes.org/parallel-sets](https://eagereyes.org/parallel-sets).
| null |
CC BY-SA 3.0
| null |
2015-01-12T07:58:39.780
|
2015-01-12T07:58:39.780
| null | null |
2452
| null |
4864
|
1
| null | null |
0
|
996
|
I have 11 (used) lottery tickets and I have discovered that in each ticket, the 3rd digit's value is the 6th digit's value plus 1. I have 11 tickets, and each ticket is composed of 16 digits. Would someone (anyone) help me find the logic used here in creating the other digits and their interrelation? I am a complete noob in data analysis, and any help would be greatly appreciated. For those who would like to know, I discovered this particular pattern by accident, and it made me realize these numbers are not totally random; if I could find the underlying pattern I'd be able to predict them. The excel file is here: [Excel file](http://s000.tinyupload.com/index.php?file_id=39010250377074241779)
|
What is the underlying pattern, or algorithm(s) used in these numbers?
|
CC BY-SA 3.0
| null |
2015-01-12T09:27:15.250
|
2015-01-14T09:59:34.743
|
2015-01-12T10:33:17.913
|
6496
|
7762
|
[
"data-mining",
"algorithms"
] |
4866
|
2
| null |
4864
|
6
| null |
- 11 instances of data are insufficient for discovering the behavior of (or even finding patterns in) such presumably complex systems. You need at least thousands of them.
- These numbers are encoded versions of numbers generated by serial-key generation algorithms, which in my experience are usually implemented in Python. So it's not at all strange that there may be a correlation between some of their digits, since the system needs to recognize its own generated numbers and distinguish them from numbers created by other lotteries.
- Even if there were a correlation between all the instances of lottery numbers, correlation doesn't imply causation, and these patterns, if any were found, couldn't really give any information about the key generation system itself.
- If you are really serious about it, you could go to this page and enter your data as a sequence; it will show you the pattern, if there is any:
The On-Line Encyclopedia of Integer Sequences
| null |
CC BY-SA 3.0
| null |
2015-01-12T10:37:27.507
|
2015-01-14T09:59:34.743
|
2015-01-14T09:59:34.743
|
2853
|
6496
| null |
4867
|
2
| null |
4855
|
1
| null |
From your question I am not sure how you are using logistic regression. The "vanilla" version of logistic regression gives a binary classifier, but your target has three values: a positive value, a negative value, and zero. The Apache site, on the page for logistic regression, refers to the thesis of Paul Komarek, in which "logistic regression" refers to the binary classifier. So your problem might be that your target variable has 3 values.
Assuming that that is the case, you have many options, for example:
- a) use "multi-nomial logistic",
- b) use ordinary logistic regression 3 times to fit 3 models, one per level, to distinguish it from the other two combined
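For what it's worth, here is a purely illustrative sketch of option b) in Python with scikit-learn (the question uses Mahout, and the data below is random, only to show that a three-valued target is handled without NaNs):
```
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier

rng = np.random.RandomState(0)
X = rng.randn(300, 5)                 # placeholder features
y = rng.choice([-1, 0, 1], size=300)  # three classes: negative, zero, positive

# Option b): one binary logistic model per class, each class vs. the rest
clf = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(X, y)
print(clf.predict_proba(X[:3]))       # one probability per class, no NaNs
```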
| null |
CC BY-SA 3.0
| null |
2015-01-12T10:41:57.830
|
2015-01-12T10:41:57.830
| null | null |
7763
| null |
4868
|
2
| null |
4845
|
0
| null |
The similarity function at the core of the method will define all the values for your distances $d_1,d_2, \ldots, d_n$. The initial query should have some words as a reference point to compare to the words in the document. Not knowing whether the query is a sentence or arbitrary list, you are restricted to a method that does some kind of histogram comparison of the frequency of the words matching in the documents. You can perform naive summations of keyword mappings counts, look at keyword likelihoods in the normalized distributions, or give a distribution of weighting based on the strongest matches. More exotic functions will be based on your prior belief of how the words should be compared. Working within a Bayesian Framework you can see your prior assumptions explicitly. Cosine similarity or any other vector based measure will be slightly arbitrary without knowing the desired nature of comparison between query and document.
There is not much more you can do without looking at some type of features, or attempting to cross-compare the documents, or using the initial query's structure. In short, my answer is to use normalized frequency similarities between the documents and the query to produce a ranking, and, with more specific goals in mind, to try measures like cosine similarity on test datasets in order to search for the best measure.
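To make the "normalized frequency similarities" suggestion concrete, here is a minimal ranking sketch in Python (the documents and query are toy placeholders):
```
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = ["the cat sat on the mat",
        "dogs and cats are pets",
        "stock prices fell sharply today"]
query = "cat on a mat"

vec = TfidfVectorizer()
D = vec.fit_transform(docs)       # document-term matrix, tf-idf weighted
q = vec.transform([query])

scores = cosine_similarity(q, D).ravel()
ranking = scores.argsort()[::-1]  # best-matching documents first
print(list(zip(ranking, scores[ranking])))
```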
| null |
CC BY-SA 3.0
| null |
2015-01-12T16:39:40.640
|
2015-01-12T16:39:40.640
| null | null |
34
| null |
4870
|
2
| null |
4859
|
3
| null |
Aleksandar Blekh has given some really nice links about the big picture of how to do sentiment analysis; I'll try to provide some links to software and talk about the nitty-gritty of how to make it work. I'll point you to the example using scikit-learn ([http://scikit-learn.org/stable/](http://scikit-learn.org/stable/) ), a machine learning library in Python.
You would first want to take your dataset and load it into scikit-learn in a sparse format. This link ([http://scikit-learn.org/stable/tutorial/text_analytics/working_with_text_data.html](http://scikit-learn.org/stable/tutorial/text_analytics/working_with_text_data.html) ) gives examples of how to load text in a bag-of-words representation, and the same module (scikit-learn.feature_extraction.text) can also count n-grams. It then describes how to run Naive Bayes and SVM on that dataset. You can take that example and start playing with it.
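As a bare-bones sketch of that pipeline (the toy data here stands in for your annotated examples; building a good labeled set is the real work):
```
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

texts = ["the apple was really tasty",
         "the soup was awful",
         "great service and tasty food",
         "terrible, bland apple pie"]
labels = ["positive", "negative", "positive", "negative"]

# unigrams + bigrams as features, linear SVM as the classifier
clf = make_pipeline(CountVectorizer(ngram_range=(1, 2)), LinearSVC())
clf.fit(texts, labels)
print(clf.predict(["the apple pie was tasty"]))
```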
| null |
CC BY-SA 3.0
| null |
2015-01-13T17:00:04.647
|
2015-01-13T17:00:04.647
| null | null |
7786
| null |
4872
|
2
| null |
3771
|
1
| null |
Using derivatives as features is almost the same as using past values, as both reconstruct the phase (state) space of the dynamic system behind the time series. But they differ in some respects, like noise amplification and how they carry information.
(See Ref: State space reconstruction in the presence of noise; Martin Casdagli; Physica D - 1991 - section 2)
Notice that all the information is embedded in the time series; using derivatives reinterprets this information, which may or may not be useful.
In your case, if you use all parameters and terms, I believe there is no benefit. But when using some algorithms, like orthogonal forward regression (OFR), it may be beneficial. (See Ref: Orthogonal least squares methods and their application to non-linear system identification; S. CHEN, S. A. BILLINGS; INT. J. CONTROL, 1989)
| null |
CC BY-SA 3.0
| null |
2015-01-14T01:08:04.997
|
2015-01-14T01:08:04.997
| null | null |
5200
| null |
4873
|
1
|
4874
| null |
3
|
15055
|
How can I get information about an entity from DBpedia using Python?
Eg: I need to get all DBpedia information about [USA](http://dbpedia.org/page/United_States). So I need to write the query from python (SPARQL) and need to get all attributes on USA as result.
I tried :
```
PREFIX db: <http://dbpedia.org/resource/>
SELECT ?p ?o
WHERE { db:United_States ?p ?o }
```
But this does not display all of the DBpedia information.
How can I do this, and which plugins/APIs are available for Python to connect with DBpedia?
Also, what would the SPARQL query be to produce the result described above?
|
Querying DBpedia from Python
|
CC BY-SA 4.0
| null |
2015-01-14T09:08:00.483
|
2019-03-28T16:07:29.957
|
2019-03-28T16:07:29.957
|
24968
|
5091
|
[
"python"
] |
4874
|
2
| null |
4873
|
5
| null |
You do not need a wrapper for DBPedia, you need a library that can issue a SPARQL query to its SPARQL endpoint. Here is [an option for the library](https://pypi.python.org/pypi/SPARQLWrapper/1.6.4) and here is the URL to point it to: [http://dbpedia.org/sparql](http://dbpedia.org/sparql)
You need to issue a DESCRIBE query on the United_States resource page:
```
PREFIX dbres: <http://dbpedia.org/resource/>
DESCRIBE dbres:United_States
```
Please note this is a huge download of resulting triplets.
Here is how you would issue the query:
```
from SPARQLWrapper import SPARQLWrapper, JSON

def get_country_description():
    # The DESCRIBE query from above, as a literal string
    query = """
        PREFIX dbres: <http://dbpedia.org/resource/>
        DESCRIBE dbres:United_States
    """
    sparql = SPARQLWrapper("http://dbpedia.org/sparql")
    sparql.setReturnFormat(JSON)
    sparql.setQuery(query)
    return sparql.query().convert()
```
| null |
CC BY-SA 3.0
| null |
2015-01-14T12:46:41.357
|
2015-01-14T16:38:14.707
|
2015-01-14T16:38:14.707
|
1367
|
1367
| null |
4875
|
1
| null | null |
5
|
1065
|
Given a dataset that has a binary (0/1) dependent variable and a large collection of continuous and categorical independent variables, is there a process and ideally a R package that can find combinations/subsets/segments of the IVs that are highly correlated with the DV?
Simple example:
DV: college education (0/1), and IVs: age (20 to 120), income (0 to 1 million), race (white, black, hispanic etc), gender (0/1), state, etc.
Then finding correlations combining IVs and subsets of IVs (e.g. women between 30 and 50, with incomes over 100k are highly positively correlated with the DV), and then being able to compare the combinations (e.g. to find out women between 30 and 40, with incomes over 100k have a higher correlation than women between 40 and 50, with incomes over 100k)
|
Determine highly correlated segments
|
CC BY-SA 3.0
| null |
2015-01-14T15:42:48.013
|
2015-03-01T03:58:26.763
|
2015-01-14T16:15:04.760
|
102
|
102
|
[
"machine-learning",
"r",
"correlation"
] |
4876
|
1
|
4878
| null |
1
|
826
|
I have data, which looks like this:

These data are only for one subject. I will have a lot more.
These data will be analyzed in R.
Now I'm storing them like this:
```
subject <- rep(1, times = 24)
measurement <- factor(x = rep(x = 1:3, each = 8),
labels = c("Distance", "Frequency", "Energy"))
speed <- factor(x = rep(x = 1:2, each = 4, times = 3),
labels = c("speed1", "speed2"))
condition <- factor(x = rep(x = 1:2, each = 2, times = 6),
labels = c("Control", "Experm"))
Try <- factor(x = rep(x = 1:2, times = 12),
labels = c("Try1", "Try2"))
result <- c(1:8,
11:18,
21:28)
dt <- data.frame(subject, measurement, speed, condition, Try, result)
```
What is the appropriate way to store these data in R (in a data frame)?
|
Appropriate way to store data in R
|
CC BY-SA 3.0
| null |
2015-01-14T15:52:10.633
|
2015-03-27T11:22:00.087
|
2015-03-27T11:22:00.087
|
2853
|
3377
|
[
"r",
"dataset",
"data-formats"
] |
4877
|
2
| null |
4875
|
1
| null |
I am no expert in that particular case, but doing a bit of research, it seems that the measure you want to construct is called "[Point-biserial correlation coefficient](https://en.wikipedia.org/wiki/Point-biserial_correlation_coefficient)", i.e. the inferred correlation between a continuous variable $X$ and a categorical variable $Y$, e.g. $Y∈\{−1,0,1\}$. See a related question on [Cross Validated SE](https://stats.stackexchange.com/questions/74116/pearson-correlation-between-discrete-variable-thats-mostly-0-and-a-standard-nor).
And yes, [there is an R package for that](http://cran.r-project.org/web/packages/polycor/index.html) :)
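If you also want a quick way to compute the coefficient outside R, here is a sketch assuming SciPy (the data is random, just to show the call):
```
import numpy as np
from scipy import stats

rng = np.random.RandomState(0)
educated = rng.randint(0, 2, size=500)                      # binary DV
income = 40000 + 30000 * educated + rng.randn(500) * 10000  # continuous IV

r, p = stats.pointbiserialr(educated, income)
print(r, p)
```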
| null |
CC BY-SA 3.0
| null |
2015-01-14T17:06:58.750
|
2015-01-14T17:06:58.750
|
2017-04-13T12:44:20.183
|
-1
|
1367
| null |
4878
|
2
| null |
4876
|
3
| null |
Without more information all I can say is that:
- the way you're storing it is fine in general
- you can further transform/store your data depending on your use case
To expand on #2, if I want to study Distance vs Energy across all subjects, then I would format my data like this:
```
> library(reshape2)
> dt2 <- dt[dt$measurement %in% c('Distance','Energy'),]
> dt_cast <- dcast(dt2, subject+Try~measurement+speed+condition, value.var='result')
```
The transformed data (dt_cast) would then look like:
```
subject Try Distance_speed1_Control Distance_speed1_Experm Distance_speed2_Control
1 1 Try1 1 3 5
2 1 Try2 2 4 6
Distance_speed2_Experm Energy_speed1_Control Energy_speed1_Experm Energy_speed2_Control
1 7 21 23 25
2 8 22 24 26
Energy_speed2_Experm
1 27
2 28
```
Allowing me to, for example, look at the relationship between the Distance_speed1_Control vs Energy_speed1_Control columns.
Basically subset/aggregate your data and then use the dcast to get the rows and columns the computer needs.
| null |
CC BY-SA 3.0
| null |
2015-01-14T17:56:58.563
|
2015-01-14T17:56:58.563
| null | null |
525
| null |
4879
|
1
|
4994
| null |
0
|
48
|
As an example, say the input is an array of numbers representing an audio snippet and the output is a transformed/filtered version of it.
What would be the proper name for that? Which are examples of algorithms for the job?
EDIT:
More specifically, I want to train audio source separation. The input is a mixed sound (spectrogram) and the output is the sound with some energy removed in certain frequencies. The function needs to recognize some pattern in the input and decide what to remove.
|
Training a function that maps n-dim to n-dim
|
CC BY-SA 3.0
| null |
2015-01-15T00:48:48.383
|
2015-01-30T21:14:25.563
|
2015-01-24T14:09:47.810
|
7806
|
7806
|
[
"algorithms"
] |
4880
|
1
| null | null |
2
|
569
|
I'm wondering if there's a way to automatically generate a list of tags for live chat transcripts without domain knowledge. I've tried applying NLP chunking to the chat transcripts and keep only the noun phrases as tag candidates. However, this approach would generate too many useless noun phrases. I could use some rules to prune out some of them, but it would be hard to generalize the rules.
|
Generate tags for live chat transcripts
|
CC BY-SA 3.0
| null |
2015-01-15T01:57:38.387
|
2020-08-07T17:05:08.823
|
2015-12-15T08:15:38.597
|
11097
|
7807
|
[
"machine-learning",
"nlp"
] |
4881
|
1
| null | null |
1
|
2074
|
This question might sound silly. But I have been wondering why we assume that there is
a hidden probability distribution over input-output pairs in the machine learning setup.
For example, if we want to learn a function $f: \mathcal{X} \rightarrow \mathcal{Y}$, we generally tend to assume a probability distribution $\rho(x,y)$ on $Z=\mathcal{X} \times \mathcal{Y} $ and try to minimize the error
$$
\mathcal{E}(f) = \int (f(x)-y)^2 \ d\rho(x,y)
$$
Is the probability distribution $\rho$ inherent to the very nature of $Z$, or does it depend on $f$?
Can anyone please provide a good intuitive explanation for this ?
|
Probability distribution in input-output pairs
|
CC BY-SA 3.0
| null |
2015-01-15T11:35:08.843
|
2015-01-19T18:03:17.467
|
2015-01-18T10:14:42.397
|
7811
|
7811
|
[
"machine-learning",
"statistics"
] |
4883
|
1
| null | null |
3
|
126
|
I am currently working on a recommendation system for daily news. At first, I evaluated all the recommender algorithms and their corresponding settings (e.g., similarities, factorizers, ...etc) implemented in Mahout. Since we want to recommend daily news for users, we use the reading behavior of each user collected two days ago as training set, data of the next day as the testing set. The evaluated RMSE is good, the best recommender is SVD+SGD, so we implemented the recommender on our system for several days of trial run.
However, the result, the actually recommended news, seems to be not very attractive for real users ("not attractive" here means, the users feel like "why you recommend this to me?"). So we decided another approach: use the tags and categories and their relationship to do the main job of recommendation, the result from CF is for just supporting.
This makes me wonder whether CF is inappropriate for some kinds of content. I have also worked on movie and music recommendation, where CF is a good tool, but for news that does not seem to be the case.
Can anyone explain why this is happening, and also give some guidelines about how to choose appropriate recommendation methods? Thanks :)
|
What kind of data is not appropriate using CF to do recommendation?
|
CC BY-SA 3.0
| null |
2015-01-15T15:13:54.713
|
2015-05-03T19:02:51.487
|
2015-01-16T05:22:00.583
|
5184
|
5184
|
[
"recommender-system"
] |
4884
|
1
|
4887
| null |
0
|
205
|
I was wondering whether we could list the winning machine learning methods to apply in many fields of interest: NLP, image, vision, medical, deep packet inspection, etc. I mean, if someone is starting a new ML project, which ML methods should not be forgotten?
|
What are the current killer machine learning methods?
|
CC BY-SA 3.0
| null |
2015-01-15T18:32:36.883
|
2015-01-16T01:00:45.890
| null | null |
6560
|
[
"machine-learning"
] |
4885
|
2
| null |
4836
|
10
| null |
If you're working in R, Carson Sievert's tutorial on using LDA to model topics in movie reviews is an excellent starting point:
[http://cpsievert.github.io/LDAvis/reviews/reviews.html](http://cpsievert.github.io/LDAvis/reviews/reviews.html)
This tutorial makes use of LDAvis, an interactive visualization of topic and word distributions that can really aid intuition.
Also, although not short, David M. Blei's lectures on topic models are a great resource for understanding the meaning behind the parameters: [http://videolectures.net/mlss09uk_blei_tm/](http://videolectures.net/mlss09uk_blei_tm/)
| null |
CC BY-SA 3.0
| null |
2015-01-15T20:07:51.903
|
2015-01-15T20:07:51.903
| null | null |
686
| null |
4886
|
2
| null |
853
|
4
| null |
I am just 5 months late, but with CRFSuite you can actually use those float features as numbers, not as strings. For this, you just need to invent a unique label for each dimension, then add a ":" followed by the value.
For example, a word "jungle" is represented in 5 dimensions:
0.1 0.4 0.8 0.2 0.9
Then CRFSuite would take that word + feature as:
LABEL f1:0.1 f2:0.4 f3:0.8 f4:0.2 f5:0.9
where of course you replace "LABEL" with an actual string, and you separate all fields with tabs (that's the format for CRFSuite).
Not sure though for other packages.
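Here is a small Python sketch that writes one token's feature vector in that tab-separated CRFSuite format (the label and dimension names are arbitrary):
```
def crfsuite_line(label, values):
    """Format 'LABEL<TAB>f1:v1<TAB>f2:v2...' for one token."""
    feats = ["f%d:%g" % (i + 1, v) for i, v in enumerate(values)]
    return "\t".join([label] + feats)

print(crfsuite_line("NOUN", [0.1, 0.4, 0.8, 0.2, 0.9]))
# NOUN    f1:0.1  f2:0.4  f3:0.8  f4:0.2  f5:0.9
```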
| null |
CC BY-SA 3.0
| null |
2015-01-15T23:27:20.597
|
2015-01-15T23:27:20.597
| null | null |
7818
| null |
4887
|
2
| null |
4884
|
0
| null |
The question is very general. However, there are studies that test which algorithms perform relatively well across a broad range of problems (I'll add links to papers later), concerning regression and classification.
Lately, Random Decision Forests, Support Vector Machines and certain variations of Neural Networks are said to achieve the best results for a very broad variety of problems.
This does not mean that these are "the best algorithms" for any given problem - such a thing does not exist, and it is not very realistic to pursue. It should also be noted that both RDF and SVM are rather easy methods to initially grasp and obtain good results with, so they are becoming really popular. NNs have been used intensively for a couple of decades (after they were revived), so they appear often in implementations.
If you are interested in learning further, you should pick a specific area and deal with a problem that can be solved nicely by machine learning, to understand the main ideas (and why it is impossible to find the single best method).
You will find common themes: the task of predicting the expected behavior of something given some known or observable characteristics (learning the function that models the problem from input data), the issues related to dealing with data in high-dimensional spaces, the need for good-quality data, the notable improvements that data pre-processing can give, and many others.
| null |
CC BY-SA 3.0
| null |
2015-01-16T01:00:45.890
|
2015-01-16T01:00:45.890
| null | null |
5143
| null |
4888
|
2
| null |
1253
|
28
| null |
### Why to use deep networks?
Let's first try to solve a very simple classification task. Say you moderate a web forum which is sometimes flooded with spam messages. These messages are easily identifiable - most often they contain specific words like "buy", "porn", etc. and a URL to outside resources. You want to create a filter that will alert you about such suspicious messages. It turns out to be pretty easy - you get a list of features (e.g. a list of suspicious words and the presence of a URL) and train a simple logistic regression (a.k.a. perceptron), i.e. a model like:
```
g(w0 + w1*x1 + w2*x2 + ... + wn*xn)
```
where `x1..xn` are your features (either the presence of a specific word or of a URL), `w0..wn` are the learned coefficients and `g()` is a [logistic function](http://en.wikipedia.org/wiki/Logistic_function) that keeps the result between 0 and 1. It's a very simple classifier, but for this simple task it may give very good results, creating a linear decision boundary. Assuming you used only 2 features, this boundary may look something like this:

Here the 2 axes represent features (e.g. the number of occurrences of a specific word in a message, normalized around zero), red points stand for spam, blue points for normal messages, and the black line shows the separation boundary.
But soon you notice that some good messages contain a lot of occurrences of the word "buy" but no URLs, or an extended discussion of [porn detection](https://stackoverflow.com/questions/713247/what-is-the-best-way-to-programatically-detect-porn-images) without actually referring to porn movies. A linear decision boundary simply cannot handle such situations. Instead you need something like this:

This new non-linear decision boundary is much more flexible, i.e. it can fit the data much more closely. There are many ways to achieve this non-linearity - you can use polynomial features (e.g. `x1^2`) or their combinations (e.g. `x1*x2`) or project them into a higher dimension as in [kernel methods](http://en.wikipedia.org/wiki/Kernel_method). But in neural networks it's common to solve it by combining perceptrons or, in other words, by building a [multilayer perceptron](http://en.wikipedia.org/wiki/Multilayer_perceptron). The non-linearity here comes from the logistic function between layers. The more layers, the more sophisticated patterns an MLP can cover. A single layer (perceptron) can handle simple spam detection, a network with 2-3 layers can catch tricky combinations of features, and networks of 5-9 layers, used by large research labs and companies like Google, may model a whole language or detect cats in images.
This is the essential reason to have deep architectures - they can model more sophisticated patterns.
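For concreteness, here is a tiny hedged numpy sketch of the single-layer model above; the weights and feature values are made up, not a trained spam filter:
```
import numpy as np

def g(z):
    # logistic function squashing the score into (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

w0 = -1.0                          # bias term (assumed value)
w = np.array([2.5, 1.8])           # learned coefficients (assumed values)
x = np.array([0.7, 0.0])           # e.g. [normalized count of "buy", has URL]

p_spam = g(w0 + np.dot(w, x))      # probability that the message is spam
print(round(p_spam, 3))
```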
### Why deep networks are hard to train?
With only one feature and a linear decision boundary it's in fact enough to have only 2 training examples - one positive and one negative. With several features and/or a non-linear decision boundary you need several orders of magnitude more examples to cover all possible cases (e.g. you need to find not only examples with `word1`, `word2` and `word3`, but also with all their possible combinations). And in real life you need to deal with hundreds and thousands of features (e.g. words in a language or pixels in an image) and at least several layers to have enough non-linearity. The size of a data set needed to fully train such networks easily exceeds 10^30 examples, making it totally impossible to get enough data. In other words, with many features and many layers our decision function becomes too flexible to be able to learn it precisely.
There are, however, ways to learn it approximately. For example, if we were working in probabilistic settings, then instead of learning frequencies of all combinations of all features we could assume that they are independent and learn only individual frequencies, reducing full and unconstrained [Bayes classifier](http://en.wikipedia.org/wiki/Bayes_classifier) to a [Naive Bayes](http://en.wikipedia.org/wiki/Naive_Bayes_classifier) and thus requiring much, much less data to learn.
In neural networks there have been several attempts to (meaningfully) reduce the complexity (flexibility) of the decision function. For example, convolutional networks, extensively used in image classification, assume only local connections between nearby pixels and thus try to learn only combinations of pixels inside small "windows" (say, 16x16 pixels = 256 input neurons) as opposed to full images (say, 100x100 pixels = 10000 input neurons). Other approaches include feature engineering, i.e. searching for specific, human-discovered descriptors of the input data.
Manually discovered features are actually very promising. In natural language processing, for example, it's sometimes helpful to use special dictionaries (like those containing spam-specific words) or to catch negation (e.g. "not good"). And in computer vision, things like [SURF descriptors](http://en.wikipedia.org/wiki/SURF) or [Haar-like features](http://en.wikipedia.org/wiki/Haar-like_features) are almost irreplaceable.
But the problem with manual feature engineering is that it takes literally years to come up with good descriptors. Moreover, these features are often specific to a particular domain or task.
### Unsupervised pretraining
But it turns out that we can obtain good features automatically right from the data using algorithms such as autoencoders and restricted Boltzmann machines. I described them in detail in my other [answer](https://stats.stackexchange.com/questions/114385/what-is-the-difference-between-convolutional-neural-networks-restricted-boltzma/117188#117188), but in short they allow you to find repeated patterns in the input data and transform them into higher-level features. For example, given only raw pixel values as input, these algorithms may identify edges and pass them up, then from these edges construct figures, and so on, until you get really high-level descriptors like variations in faces.

After such (unsupervised) pretraining, the network is usually converted into an MLP and used for normal supervised training. Note that pretraining is done layer-wise. This significantly reduces the solution space for the learning algorithm (and thus the number of training examples needed), since it only needs to learn the parameters inside each layer without taking the other layers into account.
### And beyond...
Unsupervised pretraining has been around for some time now, but recently other algorithms were found to improve learning, both together with pretraining and without it. One notable example of such algorithms is [dropout](http://www.cs.toronto.edu/~rsalakhu/papers/srivastava14a.pdf) - a simple technique that randomly "drops out" some neurons during training, creating some distortion and preventing the network from following the data too closely. This is still a hot research topic, so I leave it to the reader.
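For intuition only, a minimal numpy sketch of the (inverted) dropout idea; the layer size and keep probability are arbitrary:
```
import numpy as np

rng = np.random.default_rng(0)
activations = rng.random(8)               # pretend hidden-layer outputs
keep_prob = 0.5

mask = rng.random(activations.shape) < keep_prob
dropped = activations * mask / keep_prob  # rescale so the expected value is unchanged
print(dropped)
```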
| null |
CC BY-SA 3.0
| null |
2015-01-16T01:10:21.240
|
2017-04-20T09:20:40.137
|
2017-05-23T12:38:53.587
|
-1
|
1279
| null |
4889
|
2
| null |
4880
|
0
| null |
If you have existing properly tagged chat transcripts, you can try treating it as a supervised learning problem. If you're starting from a blank slate, that won't work.
| null |
CC BY-SA 3.0
| null |
2015-01-16T04:44:23.923
|
2015-01-16T04:44:23.923
| null | null |
7786
| null |
4890
|
2
| null |
4883
|
6
| null |
The key is establishing a proper validation metric.
I notice you talk about how you tried different recommendation algorithms, but at the end of the day you evaluated them all with RMSE. But there's no particular reason to believe that minimizing RMSE generates a "subjectively better" recommendation experience for the user - it just happens to be convenient, and happens to work well in some industries, but there is no real reason why it must.
RMSE is measuring how well your recommender algorithm is predicting user behavior. But that's not the same as measuring recommendation quality. Maybe users value something else - familiarity, or serendipity, or some other quality of the item being recommended. Users don't really care about being predicted.
Given your results, if you want to understand your users further, I'd focus my efforts on coming up with a mathematical metric that more closely matches the target you care about - user satisfaction - rather than RMSE. Once you know what metric you're trying to optimize, the algorithm to optimize it is much easier to select!
| null |
CC BY-SA 3.0
| null |
2015-01-16T04:54:40.550
|
2015-01-16T04:54:40.550
| null | null |
7786
| null |
4891
|
2
| null |
441
|
-1
| null |
Machine learning is a good example of a problem type where Spark-based solutions are light-years ahead of MapReduce-based solutions, despite the young age of Spark-on-YARN.
| null |
CC BY-SA 3.0
| null |
2015-01-16T05:30:35.787
|
2015-01-16T05:30:35.787
| null | null |
7786
| null |
4892
|
2
| null |
4880
|
2
| null |
You can try RAKE (Rapid Automatic Keyword Extraction); there is a Python implementation [here](https://github.com/aneesha/RAKE). RAKE is a document-oriented keyword extraction algorithm and it is also language-independent (theoretically, since RAKE uses a generated stop-word list to partition candidate keywords, so for different languages we need a better way to generate the stop-word list). For English documents, however, RAKE can extract keywords (or tags) with acceptable precision and recall. RAKE is also efficient, because we don't have to train it on a whole corpus: RAKE generates a keyword list by calculating each word's degree and frequency, computing a score for every candidate keyword, and then picking the top N.
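For intuition, here is a simplified, hedged sketch of RAKE-style degree/frequency scoring (not the linked library's API; the stop-word list is a tiny made-up example):
```
import re
from collections import defaultdict

STOP = {"a", "an", "the", "of", "and", "is", "in", "to", "for"}

def rake_scores(text):
    # split into candidate phrases on stop words
    words = re.findall(r"[a-zA-Z]+", text.lower())
    phrases, current = [], []
    for w in words:
        if w in STOP:
            if current:
                phrases.append(current)
            current = []
        else:
            current.append(w)
    if current:
        phrases.append(current)

    # word score = (degree + frequency) / frequency
    freq, degree = defaultdict(int), defaultdict(int)
    for phrase in phrases:
        for w in phrase:
            freq[w] += 1
            degree[w] += len(phrase) - 1
    word_score = {w: (degree[w] + freq[w]) / freq[w] for w in freq}

    # phrase score = sum of its word scores, highest first
    return sorted(((" ".join(p), sum(word_score[w] for w in p)) for p in phrases),
                  key=lambda kv: -kv[1])

print(rake_scores("Rapid automatic keyword extraction for a collection of documents"))
```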
Hope this answer helps you or gives you a direction for your next step of investigation.
| null |
CC BY-SA 4.0
| null |
2015-01-16T05:44:41.907
|
2020-08-07T17:05:08.823
|
2020-08-07T17:05:08.823
|
98307
|
1003
| null |
4894
|
2
| null |
3763
|
1
| null |
AdaBoost is a supervised learning method; it starts with a table of 'correct' answers and generates a predictive model for a target, which is known. It is then possible to inspect this model to figure out how it works and what it judged to be more important. With that in mind, here is my guess at what he did:
First, he created a training dataset. The dataset was created, according to the article, like this:
>
Now he’d do the same for love. First he’d need data. While his
dissertation work continued to run on the side, he set up 12 fake
OkCupid accounts and wrote a Python script to manage them. The script
would search his target demographic (heterosexual and bisexual women
between the ages of 25 and 45), visit their pages, and scrape their
profiles for every scrap of available information: ethnicity, height,
smoker or nonsmoker, astrological sign—“all that crap,” he says.
To find the survey answers, he had to do a bit of extra sleuthing.
OkCupid lets users see the responses of others, but only to questions
they’ve answered themselves. McKinlay set up his bots to simply answer
each question randomly—he wasn’t using the dummy profiles to attract
any of the women, so the answers didn’t matter—then scooped the
women’s answers into a database. "
So he used python scripts to collect lots of information! At the end of this, he had a table of data, where each row had three pieces of information:
- A bot's answer to all the questions, and their weights
- A woman's answer to all the questions
- Their match percentage.
On this, he could use AdaBoost to create a predictive model, predicting the match percentage from the available information. The weak learners were probably decision stumps, greedily choosing one variable at a time to split on; that's the standard people refer to when talking about AdaBoost.
Once the predictive model was in place, it could be used for optimization - determining which weights to put on the questions while holding all other variables constant, maximizing the average match percentage to women in his target audience.
Of course, this is just a guess. The article doesn't have much detail. But it's a potential way to use AdaBoost for that purpose.
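To make the guess concrete, here is a hedged scikit-learn sketch of that setup; the random data merely stands in for the scraped profiles and match percentages, which we don't have:
```
import numpy as np
from sklearn.ensemble import AdaBoostRegressor

rng = np.random.default_rng(42)
X = rng.random((500, 10))    # stand-in for encoded profile answers and weights
y = rng.random(500)          # stand-in for match percentages in [0, 1]

# AdaBoost with shallow decision trees as the default weak learners
model = AdaBoostRegressor(n_estimators=100, random_state=0)
model.fit(X, y)
print(model.predict(X[:3]))
```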
| null |
CC BY-SA 3.0
| null |
2015-01-17T04:29:24.393
|
2015-01-17T04:29:24.393
| null | null |
7786
| null |
4895
|
2
| null |
1208
|
0
| null |
Your results are reasonable. Your data brings several ideas to mind:
1) It is quite reasonable that as you change the available features, this will change the relative performance of machine learning methods. This happens quite a lot. Which machine learning method performs best often depends on the features, so as you change the features the best method changes.
2) It is reasonable that in some cases, disparate models will reach the exact same results. This is most likely in the case where the number of data points is low enough or the data is separable enough that both models reach the exact same conclusions for all test points.
| null |
CC BY-SA 3.0
| null |
2015-01-17T04:42:47.530
|
2015-01-17T04:42:47.530
| null | null |
7786
| null |
4896
|
2
| null |
2558
|
1
| null |
A simple way would be to consider Laplace Smoothing ([http://en.wikipedia.org/wiki/Additive_smoothing](http://en.wikipedia.org/wiki/Additive_smoothing) ) or something like it.
Basically, instead of calculating your response rate as (Clicks)/(Impressions) you calculate (Clicks + X)/(Impressions + Y), with X and Y chosen, for example, so that X/Y is the global average of clicks/impressions.
When Clicks and Impressions are both high, this smoothed response rate is basically equal to the true response rate (the signal dominates the prior). When Clicks and Impressions are both low, this smoothed response rate will be close to the global average response rate - a good guess when you have little data and don't want to put much weight on it!
The absolute scale of X and Y will determine how many data points you consider "enough data". It's been argued that the right thing to do is to set X to 1 and choose Y appropriately given that.
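A minimal sketch of this smoothing, assuming an illustrative global rate of 2% and a pseudo-count strength of 50:
```
def smoothed_rate(clicks, impressions, global_rate=0.02, strength=50):
    x = global_rate * strength   # pseudo-clicks
    y = strength                 # pseudo-impressions
    return (clicks + x) / (impressions + y)

print(smoothed_rate(0, 3))        # little data -> close to the global 2%
print(smoothed_rate(500, 10000))  # lots of data -> close to the raw 5%
```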
| null |
CC BY-SA 3.0
| null |
2015-01-17T05:20:32.763
|
2015-01-17T05:20:32.763
| null | null |
7786
| null |
4897
|
2
| null |
1094
|
3
| null |
You may want to consider gradient boosted trees rather than random forests. They're also an ensemble tree-based method, but since this method doesn't sample dimensions, it won't run into the problem of not having a useful predictor available to split on at any particular time.
Different implementations of GBDT have different ways of handling missing values, which will make a big difference in your case; I believe the R implementation does ternary splits, which is likely to work fine.
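As a hedged illustration in Python (a different implementation from the R package discussed above), scikit-learn's histogram-based gradient boosting handles `NaN` values natively, routing them down a learned branch at each split; the data here is purely synthetic:
```
import numpy as np
from sklearn.ensemble import HistGradientBoostingRegressor

X = np.array([[1.0, 2.0], [np.nan, 3.0], [4.0, np.nan], [5.0, 6.0]] * 25)
y = np.arange(100, dtype=float)

model = HistGradientBoostingRegressor().fit(X, y)  # no imputation needed
print(model.predict(X[:4]))
```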
| null |
CC BY-SA 3.0
| null |
2015-01-17T05:30:01.813
|
2015-01-17T05:30:01.813
| null | null |
7786
| null |
4898
|
2
| null |
155
|
17
| null |
Late answer, but here is an eclectic list of [100+ Interesting Data Sets](http://rs.io/100-interesting-data-sets-for-statistics/)
The blog post is fun and easy to read through (I have no affiliation). It's worth scanning through; a few scraped from the top:
- Last words of every Texas inmate executed since 1984
- 10,000 annotated images of cats
- 2.2 million chess matches
| null |
CC BY-SA 3.0
| null |
2015-01-17T16:53:08.033
|
2015-01-17T16:53:08.033
| null | null |
7842
| null |
4899
|
2
| null |
4881
|
0
| null |
I may still misunderstand what you mean, but the general simple formulation is to minimize the sum of the loss over all training examples. Converting to your formulation, that 'assumes' the joint distribution of the input and output is just the empirical distribution found in the data. That's the best assumption you can make without additional information. If you had reason to assume something else, you would.
| null |
CC BY-SA 3.0
| null |
2015-01-18T10:22:35.777
|
2015-01-18T10:22:35.777
| null | null |
21
| null |
4900
|
2
| null |
2489
|
1
| null |
If I've understood your question correctly, one easy solution would be to concatenate the bits together to make a 7-bit binary sequence and then convert it to an integer. So, for the sample dataset quoted in the question you would get:
`
CM
16
0
98
8
6
67
`
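In Python, a minimal hedged sketch of that conversion (the example row is made up, since the full sample data isn't shown here):
```
row = [0, 0, 1, 0, 0, 0, 0]                   # seven 0/1 features
value = int("".join(str(b) for b in row), 2)  # "0010000" read as base 2
print(value)                                  # 16
```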
| null |
CC BY-SA 3.0
| null |
2015-01-18T19:14:41.420
|
2015-01-19T08:13:58.220
|
2015-01-19T08:13:58.220
|
6496
|
6496
| null |
4901
|
1
|
4912
| null |
1
|
216
|
I have a data set of video watching records in a 3G network. In this data set, 2 different kind of features are included:
- user-side information, e.g., age, gender, data plan and etc;
- Video watching records of these users, each of which associated with a download ratio and some detailed network condition metrics, say, download speed, RTT, and something similar.
Under the scenario of internet streaming, a video is divided into several chunks and downloaded to end device one by one, so we have download ratio = download bytes / file size in bytes
Now, Given this data set, I want to predict the download ratio of each video.
Since it is a regression problem, so I use gradient boosting regression tree as model and run 10-fold cross validation.
However, I have tried different model parameter configurations and even different models (linear regression, decision regression tree), and the best root-mean-square error I can get is 0.3790, which is quite high, because if I don't use any complex model and just use the mean value of the known labels as the prediction, I still get an RMSE of 0.3890. There is no obvious difference.
For this problem, I have some questions:
- Does this high error rate imply that the label in the data set is unpredictable?
- Apart from the feature problem, are there any other possibilities? If yes, how can I validate them?
|
Does high error rate in regression imply the data set is unpredictable?
|
CC BY-SA 3.0
| null |
2015-01-19T09:37:56.340
|
2015-01-27T11:58:33.430
|
2015-01-27T11:58:33.430
|
7867
|
7867
|
[
"feature-selection",
"regression"
] |
4902
|
2
| null |
3730
|
2
| null |
It is always helpful to just Google the exact error that you are seeing, excluding specifics like the actual memory size of the vector. For me, the first hit was an interesting documentation page called "[R: Memory limits of R](http://stat.ethz.ch/R-manual/R-devel/library/base/html/Memory-limits.html)", where, under "Unix", one can read:
>
The address-space limit is system-specific: 32-bit OSes imposes a limit of no more than 4Gb: it is often 3Gb. Running 32-bit executables on a 64-bit OS will have similar limits: 64-bit executables will have an essentially infinite system-specific limit (e.g., 128Tb for Linux on x86_64 cpus).
See the OS/shell's help on commands such as limit or ulimit for how to impose limitations on the resources available to a single process. For example a bash user could use
ulimit -t 600 -v 4000000
whereas a csh user might use
limit cputime 10m
limit vmemoryuse 4096m
to limit a process to 10 minutes of CPU time and (around) 4Gb of virtual memory. (There are other options to set the RAM in use, but they are not generally honoured.)
So, you should check
- What type of OS are you running on your EC2 instance
- What type of R build are you running on that OS, and make sure you run a 64bit version
- If both are already 64bit, then use ulimit to set memory to e.g. 8Gb: ulimit -v 8000000
| null |
CC BY-SA 3.0
| null |
2015-01-19T10:07:11.270
|
2015-01-19T10:07:11.270
|
2020-06-16T11:08:43.077
|
-1
|
1367
| null |
4903
|
1
|
4904
| null |
16
|
18086
|
Can anybody tell me what the purpose of feature generation is? And why feature space enrichment is needed before classifying an image? Is it a necessary step?
Is there any method to enrich feature space?
|
What is the difference between feature generation and feature extraction?
|
CC BY-SA 4.0
| null |
2015-01-19T14:26:57.117
|
2021-03-11T20:19:17.107
|
2021-03-11T20:19:17.107
|
29169
|
7873
|
[
"machine-learning",
"classification"
] |
4904
|
2
| null |
4903
|
15
| null |
Feature Generation -- This is the process of taking raw, unstructured data and defining features (i.e. variables) for potential use in your statistical analysis. For instance, in the case of text mining you may begin with a raw log of thousands of text messages (e.g. SMS, email, social network messages, etc) and generate features by removing low-value words (i.e. stopwords), using certain size blocks of words (i.e. n-grams) or applying other rules.
Feature Extraction -- After generating features, it is often necessary to test transformations of the original features and select a subset of this pool of potential original and derived features for use in your model (i.e. feature extraction and selection). Testing derived values is a common step because the data may contain important information which has a non-linear pattern or relationship with your outcome, thus the importance of the data element may only be apparent in its transformed state (e.g. higher order derivatives). Using too many features can result in multicollinearity or otherwise confound statistical models, whereas extracting the minimum number of features to suit the purpose of your analysis follows the principle of parsimony.
Enhancing your feature space in this way is often a necessary step in classification of images or other data objects because the raw feature space is typically filled with an overwhelming amount of unstructured and irrelevant data that comprises what's often referred to as "noise" in the paradigm of a "signal" and "noise" (which is to say that some data has predictive value and other data does not). By enhancing the feature space you can better identify the important data which has predictive or other value in your analysis (i.e. the "signal") while removing confounding information (i.e. "noise").
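As a small, hedged illustration of both steps on toy text (the messages and parameter choices are made up):
```
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import TruncatedSVD

messages = ["cheap meds buy now", "meeting moved to friday",
            "buy cheap meds", "lunch on friday?"]

# Feature generation: raw text -> word/bigram counts, dropping stop words
vectorizer = CountVectorizer(stop_words="english", ngram_range=(1, 2))
X = vectorizer.fit_transform(messages)

# Feature extraction: project the generated features onto a few derived components
X_small = TruncatedSVD(n_components=2).fit_transform(X)
print(X.shape, "->", X_small.shape)
```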
| null |
CC BY-SA 4.0
| null |
2015-01-19T14:55:25.580
|
2020-07-07T05:58:21.463
|
2020-07-07T05:58:21.463
|
100328
|
2723
| null |
4905
|
2
| null |
4881
|
1
| null |
Some notes:
- In the case of supervised learning you assume there is a function $f:\mathcal{X} \rightarrow \mathcal{Y}$, which means there is somehow a connection between inputs and outputs that you want to use in order to predict. But how might this function look? Without considering the model itself, what usually happens is that there is some noise. This noise can be in $Y$ and it can also be in $X$. This noise gives us the probabilistic setup of the problem, because without it we would only have to solve some equations.
- Now it is important to understand that the noise defines the distribution. So a random variable can be imagined as a function having something fixed and well defined and something not fixed that takes values according to a distribution. If the variable part did not exist, you would not have a random variable; it would be a simple formula. But it is not. Now $P(X)$ describes what happens in $X$ alone, and $P(Y)$ what happens in $Y$ alone. When you predict, decision theory says that you are interested in the most probable value $y_i$ given some input value $x_i$. So you are interested in finding $P(Y|X)$.
- A joint probability is not always completely described by the marginal probabilities. In fact, it is completely described by them only when the marginals are independent. This means that for r.v. $X, Y$, knowing $P(X)$ and $P(Y)$ does not put you in a position to know the joint density $P(X, Y)$ (though under independence you have $P(X,Y)=P(X)P(Y)$).
- From here you could go directly and try to estimate $P(Y|X)$. In fact, if your only interest is prediction, this might be a fair bet. A lot of supervised learning algorithms try to estimate this probability directly. They are called discriminative classifiers. The reason is that they are used to classify an instance into the class with maximal conditional probability: you discriminate by choosing the maximum value.
- Now to your question. Notice the obvious $P(X,Y) = P(Y|X)P(X)$; then you see that by trying to learn the joint probability, you also learn the conditional (what you need for prediction). This kind of approach is called generative, because knowing the joint probability you can not only predict but also generate new data for your problem. More than that, knowing the joint probability can give you more insight into how your model works. You can find information which is not contained in the marginal distributions alone.
Some final notes:
- Technically you do not minimize the error function, but its expectation. The error function remains as it is.
- $\mathcal{Z}$ is only a domain; it's impossible to describe a probability distribution by its domain alone.
It's late, I hope I was not completely incoherent.
| null |
CC BY-SA 3.0
| null |
2015-01-19T18:03:17.467
|
2015-01-19T18:03:17.467
| null | null |
108
| null |
4906
|
5
| null | null |
0
| null | null |
CC BY-SA 3.0
| null |
2015-01-19T19:11:09.507
|
2015-01-19T19:11:09.507
|
2015-01-19T19:11:09.507
|
-1
|
-1
| null |
|
4907
|
4
| null | null |
0
| null | null |
CC BY-SA 3.0
| null |
2015-01-19T19:11:09.507
|
2015-01-19T19:11:09.507
|
2015-01-19T19:11:09.507
|
-1
|
-1
| null |
|
4908
|
5
| null | null |
0
| null |
## What is Machine Learning?
Machine Learning is a subfield of computer science that draws on elements from algorithmic analysis, computational statistics, mathematics, optimization, etc. It is mainly concerned with the use of data to construct models that have high predictive/forecasting ability. Topics include model building, applications, theory, etc. Machine Learning is often used on large datasets to help draw predictive relationships between underlying features within the data.
Modern applications of Machine Learning are wide ranging including those in Bioinformatics, Astronomy, Computational Physics, Economics, Natural Language Processing, Image Recognition/Object Detection, Robotics, Recommendation Systems, etc.
---
## machine-learning Tag usage
When posting questions about Machine Learning, please make sure to take the following into consideration:
- All questions should include both sufficient detail and clarity to be able to solve the problem at hand. This includes links to original data sources, code used for model construction, links to tutorials/other resources used, etc.
- Questions should generally be more specific than "which model should I use" or "how can I achieve this" and explain what has been attempted/done so far.
- Unless directly related to the problem, all questions regarding where to get data (sources, APIs, datasets, etc.) should not be posted on Stack Exchange Data Science, but rather on: Open Data Stack Exchange.
---
## Types
Please see below for a (non-exhaustive) list of the types of Machine Learning:
- Supervised Learning supervised-learning
- Unsupervised Learning unsupervised-learning
- Semisupervised Learning semisupervised-learning
- Reinforcement Learning reinforcement-learning
- Deep Learning deep-learning
---
## External Resources
- scikit-learn: Machine Learning in Python
---
## Machine Learning Journals
- Journal of Machine Learning Research
- Machine Learning
- IEEE Neural Networks and Learning Systems
- Journal of Artificial Intelligence Research
- IEEE Pattern Analysis and Machine Intelligence
- Neurocomputing
| null |
CC BY-SA 4.0
| null |
2015-01-19T19:17:36.820
|
2021-04-03T23:49:53.930
|
2021-04-03T23:49:53.930
|
29169
|
11097
| null |
4909
|
4
| null | null |
0
| null |
Machine Learning is a subfield of computer science that draws on elements from algorithmic analysis, computational statistics, mathematics, optimization, etc. It is mainly concerned with the use of data to construct models that have high predictive/forecasting ability. Topics include model building, applications, theory, etc.
| null |
CC BY-SA 4.0
| null |
2015-01-19T19:17:36.820
|
2021-03-23T14:23:11.697
|
2021-03-23T14:23:11.697
|
29169
|
11097
| null |
4910
|
5
| null | null |
0
| null |
In machine learning and statistics, [classification](http://en.wikipedia.org/wiki/Statistical_classification) refers to the problem of predicting category memberships based on a set of pre-labeled examples. It is thus a type of [supervised learning](http://en.wikipedia.org/wiki/Supervised_learning).
Some of the most important classification algorithms are [support vector machines](http://en.wikipedia.org/wiki/Support_vector_machines) [svm](/questions/tagged/svm), [logistic regression](http://en.wikipedia.org/wiki/Logistic_regression), [naive Bayes](http://en.wikipedia.org/wiki/Naive_Bayes_classifier), [random forest](https://en.wikipedia.org/wiki/Random_forest) [random-forest](/questions/tagged/random-forest) and [artificial neural networks](http://en.wikipedia.org/wiki/Artificial_neural_network) [neural-network](/questions/tagged/neural-network).
When we wish to associate inputs with continuous values in a supervised framework, the problem is instead known as [regression](/questions/tagged/regression). The unsupervised counterpart to classification is known as [clustering](/questions/tagged/clustering) (or cluster analysis), and involves grouping data into categories based on some measure of inherent similarity.
| null |
CC BY-SA 3.0
| null |
2015-01-19T19:22:11.877
|
2017-12-26T04:14:54.953
|
2017-12-26T04:14:54.953
|
29575
|
29575
| null |
4911
|
4
| null | null |
0
| null |
A type of supervised learning that identifies the category or categories to which a new instance of a dataset belongs.
| null |
CC BY-SA 3.0
| null |
2015-01-19T19:22:11.877
|
2015-01-20T23:20:11.470
|
2015-01-20T23:20:11.470
|
6496
|
6496
| null |
4912
|
2
| null |
4901
|
1
| null |
It's a little hasty to make too many conclusions about your data based on what you presented here. At the end of the day, all the information you have right now is that "GBT did not work well for this prediction problem and this metric", summed up by a single RMSE comparison. This isn't very much information - it could be that this is a bad dataset for GBT and some
other model would work, it could be that the label can't be predicted from these features with any model, or there could be some error in model setup/validation.
I'd recommend checking the following hypotheses:
1) Maybe, with your dataset size and the features you have, GBT isn't a very high-performance model. Try something completely different - maybe just a simple linear regression, or a random forest, or GBDT with very different parameter settings, or something else (a small sketch comparing a few such models follows below). This will help you diagnose whether it's an issue with the choice of model or with something else; if a few very different approaches give you roughly similar results, you'll know that it's not the model choice that is causing these results, and if one of those models behaves differently, that gives you additional information to help diagnose the issue.
2) Maybe there's some issue with model setup and validation? I would recommend doing some exploration to get some intuition as to whether the RMSE you're getting is reasonable or whether you should expect better. Your post contained very little detail about what the data actually represents, what you know about the features and labels, etc. Perhaps you know those things but didn't include them here, but if not, you should go back and try to get additional understanding of the data before continuing. Look at some random data points, plot the columns against the target, look at the histograms of your features and labels, that sort of thing. There's no substitute for looking at the data.
3) Maybe there just aren't enough data points to justify complex models. When you have low numbers of data points (< 100), a simpler parametric model built with domain expertise and knowledge of what the features are may very well outperform a nonparametric model.
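To illustrate point 1), here is a hedged scikit-learn sketch; the random `X` and `y` only stand in for the real watching records, which aren't available here:
```
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X, y = rng.random((300, 8)), rng.random(300)

# same 10-fold CV and RMSE for very different model families
for name, model in [("linear", LinearRegression()),
                    ("random forest", RandomForestRegressor(random_state=0)),
                    ("gbdt", GradientBoostingRegressor(random_state=0))]:
    scores = cross_val_score(model, X, y, cv=10,
                             scoring="neg_root_mean_squared_error")
    print(name, round(-scores.mean(), 4))
```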
| null |
CC BY-SA 3.0
| null |
2015-01-19T21:18:54.347
|
2015-01-19T21:18:54.347
| null | null |
7786
| null |
4914
|
1
| null | null |
40
|
2644
|
Recently in a Machine Learning class from professor Oriol Pujol at UPC/Barcelona he described the most common algorithms, principles and concepts to use for a wide range of machine learning related task. Here I share them with you and ask you:
- is there any comprehensive framework matching tasks with approaches or methods related to different types of machine learning related problems?
---
How do I learn a simple Gaussian?
Probability, random variables, distributions; estimation, convergence and asymptotics, confidence interval.
How do I learn a mixture of Gaussians (MoG)? Likelihood, Expectation-Maximization (EM); generalization, model selection, cross-validation; k-means, hidden Markov models (HMM)
How do I learn any density? Parametric vs. non-parametric estimation, Sobolev and other functional spaces; L2 error; kernel density estimation (KDE), optimal kernel, KDE theory
How do I predict a continuous variable (regression)? Linear regression, regularization, ridge regression, and LASSO; local linear regression; conditional density estimation.
How do I predict a discrete variable (classification)? Bayes classifier, naive Bayes, generative vs. discriminative; perceptron, weight decay, linear support vector machine; nearest neighbor classifier and theory
Which loss function should I use? Maximum likelihood estimation theory; L2 estimation; Bayesian estimation; minimax and decision theory, Bayesianism vs. frequentism
Which model should I use? AIC and BIC; Vapnik-Chervonenkis theory; cross-validation theory; bootstrapping; Probably Approximately Correct (PAC) theory; Hoeffding-derived bounds
How can I learn fancier (combined) models? Ensemble learning theory; boosting; bagging; stacking
How can I learn fancier (nonlinear) models? Generalized linear models, logistic regression; Kolmogorov theorem, generalized additive models; kernelization, reproducing kernel Hilbert spaces, non-linear SVM, Gaussian process regression
How can I learn fancier (compositional) models? Recursive models, decision trees, hierarchical clustering; neural networks, back propagation, deep belief networks; graphical models, mixtures of HMMs, conditional random fields, max-margin Markov networks; log-linear models; grammars
How do I reduce or relate features? Feature selection vs dimensionality reduction, wrapper methods for feature selection; causality vs correlation, partial correlation, Bayes net structure learning
How do I create new features? principal component analysis (PCA), independent component analysis (ICA), multidimensional scaling, manifold learning, supervised dimensionality reduction, metric learning
How do I reduce or relate the data? Clustering, bi-clustering, constrained clustering; association rules and market basket analysis; ranking/ordinal regression; link analysis; relational data
How do I treat time series? ARMA; Kalman filter and state-space models, particle filter; functional data analysis; change-point detection; cross-validation for time series
How do I treat non-ideal data? covariate shift; class imbalance; missing data, irregularly sampled data, measurement errors; anomaly detection, robustness
How do I optimize the parameters? Unconstrained vs constrained/Convex optimization, derivative-free methods, first- and second-order methods, backfitting; natural gradient; bound optimization and EM
How do I optimize linear functions? computational linear algebra, matrix inversion for regression, singular value decomposition (SVD) for dimensionality reduction
How do I optimize with constraints? Convexity, Lagrange multipliers, Karush-Kuhn-Tucker conditions, interior point methods, SMO algorithm for SVM
How do I evaluate deeply-nested sums? Exact graphical model inference, variational bounds on sums, approximate graphical model inference, expectation propagation
How do I evaluate large sums and searches? Generalized N-body problems (GNP), hierarchical data structures, nearest neighbor search, fast multipole method; Monte Carlo integration, Markov Chain Monte Carlo, Monte Carlo SVD
How do I treat even larger problems? Parallel/distributed EM, parallel/distributed GNP; stochastic subgradient methods, online learning
How do I apply all this in the real world? Overview of the parts of the ML, choosing between the methods to use for each task, prior knowledge and assumptions; exploratory data analysis and information visualization; evaluation and interpretation, using confidence intervals and hypothesis test, ROC curves; where the research problems in ML are
|
When to use what - Machine Learning
|
CC BY-SA 3.0
| null |
2015-01-20T15:27:38.160
|
2015-01-28T14:23:32.613
|
2015-01-28T14:23:32.613
|
5143
|
5143
|
[
"machine-learning",
"algorithms"
] |
4915
|
2
| null |
4914
|
3
| null |
That's a good list covering a lot. I've used some of these methods since before anything was called machine learning, and I think you will see some of the methods you list coming in and out of use over time. If a method has been out of favour for too long, it might be time for a revisit. Some methods can hide behind different names arising from different fields of study.
One of the main areas I have used these methods in is mineral potential modelling, which is geospatial; to support that, you could add some additional categories relating to spatial and oriented-data methods.
Taking your broad question to specific fields will probably be where you find more examples of methods not in your comprehensive list. For example, two methods I've seen in mineral potential modelling are backward stepwise regression and weights-of-evidence modelling. I'm not a statistician; perhaps these would be considered covered in the list under linear regression and Bayesian methods.
| null |
CC BY-SA 3.0
| null |
2015-01-21T03:06:39.243
|
2015-01-21T21:52:52.590
|
2015-01-21T21:52:52.590
|
6646
|
6646
| null |
4916
|
2
| null |
4914
|
6
| null |
I agree with @geogaffer. This is a very good list, indeed. However, I see some issues with the list as it is currently formulated. For example, one issue is that the suggested solutions are of different granularity levels - some of them represent approaches, some methods, some algorithms, and some others just concepts (in other words, terms within a topic's domain terminology). In addition - and I believe this is much more important than the above - I think it would be much more valuable if all the solutions in the list were arranged within a unified thematic statistical framework. This idea was inspired by reading an excellent book by Lisa Harlow, "The Essence of Multivariate Thinking". Hence, I recently initiated a corresponding, albeit currently somewhat limited, [discussion](https://stats.stackexchange.com/q/131320/31372) on StackExchange's Cross Validated site. Don't let the title confuse you - my implied intention and hope is to build a unified framework, as mentioned above.
| null |
CC BY-SA 3.0
| null |
2015-01-21T07:50:53.953
|
2015-01-21T07:50:53.953
|
2017-04-13T12:44:20.183
|
-1
|
2452
| null |
4917
|
1
| null | null |
1
|
174
|
I want to visualize goal achievment progress.
This is my first idea:
- use area chart to show progress in current metric
- use horizontal band to show the goal value
- colorize areas under/above the band into "positive" and "negative" colors


Is this approach informative enough? Are there better choices?
Additional info:
- charts made in Tableau
- two data sources: metric progress & goals
|
Visualize performance, % of goal implementation
|
CC BY-SA 3.0
| null |
2015-01-21T09:15:54.750
|
2015-01-23T23:16:54.220
| null | null |
97
|
[
"visualization",
"marketing",
"tableau"
] |
4920
|
2
| null |
4914
|
1
| null |
I think your approach is a little backwards.
"What is the mean of a Gaussian distribution fitted to this data?" is never the problem statement, so "how do I fit a Gaussian?" is never the problem you actually want to solve.
The difference is more than semantic. Consider the question of "how do I construct new features?" If your goal is to develop an index, you might use some type of factor analysis. If your goal is to simply reduce the feature space before fitting a linear model, you might skip the step entirely and use elastic net regression instead.
A better approach would be to compile a list of actual data analysis tasks you would like to be able to tackle. Questions like:
>
How do I predict whether customers will return to my shopping website?
How do I learn how many "major" consumer shopping patterns there are, and what are they?
How do I construct an index of "volatility" for different items in my online store?
Also your list right now includes an enormous amount of material; far too much to "review" and gain more than a surface-level understanding. Having an actual purpose in mind can help you sort out your priorities.
| null |
CC BY-SA 3.0
| null |
2015-01-21T14:17:17.970
|
2015-01-21T14:17:17.970
| null | null |
1156
| null |
4921
|
1
|
4922
| null |
2
|
157
|
I am COMPLETELY new to the field of Data Science, mainly because every employer I have worked for simply COULDN'T sell any customers anything that would use techniques learned in this field.
Of particular interest to me is machine learning/Predictive Analysis.
I have attempted many "test projects" myself, but I seem to NEED some sort of outside "catalyst" to tell me a specific goal, and a specific set of guidelines, when I am trying to learn something.
Otherwise, I tend to lose focus, and jump from one interesting topic to the next, without ever gaining any experience.
Thank you!!
|
Could someone please offer me some guidance on some kind of particular, SPECIFIC project that I could attempt, to "get my feet wet", so to speak?
|
CC BY-SA 3.0
| null |
2015-01-22T00:52:51.710
|
2015-01-22T17:25:40.690
| null | null |
7909
|
[
"machine-learning",
"data-mining",
"bigdata",
"predictive-modeling"
] |
4922
|
2
| null |
4921
|
3
| null |
I would suggest Kaggle learning projects - [http://www.kaggle.com/competitions](http://www.kaggle.com/competitions)
Look for the ones in the 101 section that offer knowledge. There's many pre-made solutions ready, which you can ingest and try variations of.
Also, I have bookmarked a [Comprehensive learning path – Data Science in Python](http://www.analyticsvidhya.com/blog/learning-path-data-science-python/), which among other things gives a few answers to your specific question.
| null |
CC BY-SA 3.0
| null |
2015-01-22T10:44:37.240
|
2015-01-22T10:44:37.240
| null | null |
587
| null |
4923
|
1
| null | null |
2
|
619
|
In SIFT feature extraction, how are the key points generated and how are the features stored in the database? For an image, will the bag of visual words consist of images or text words?
|
Bag of Words creation in image
|
CC BY-SA 3.0
| null |
2015-01-22T13:36:24.967
|
2015-01-23T06:15:40.773
| null | null |
7873
|
[
"feature-extraction"
] |
4924
|
2
| null |
4921
|
2
| null |
You can use some public data sets to play around with.
The iris flower data set is great for classification problems. As an example you can try to classify the flower species with k-nearest neighbor model or a decision tree (CART) model.
You can find this and other sample machine learning datasets here: [http://archive.ics.uci.edu/ml/datasets.html](http://archive.ics.uci.edu/ml/datasets.html)
This list identifies what type of machine learning tasks can be performed with each dataset.
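As a hedged starter sketch for the iris suggestion above (scikit-learn, default settings):
```
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# k-nearest neighbors vs. a CART-style decision tree
for model in (KNeighborsClassifier(n_neighbors=5), DecisionTreeClassifier(random_state=0)):
    model.fit(X_train, y_train)
    print(type(model).__name__, model.score(X_test, y_test))
```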
| null |
CC BY-SA 3.0
| null |
2015-01-22T17:25:40.690
|
2015-01-22T17:25:40.690
| null | null |
6648
| null |
4925
|
1
| null | null |
28
|
3255
|
There are numerous tools available for data science tasks, and it's cumbersome to install everything and build up a perfect system.
Is there a Linux/Mac OS image with Python, R and other open-source data science tools installed and available for people to use right away? An Ubuntu or another lightweight OS with the latest versions of Python and R (including IDEs) and other open-source data visualization tools installed would be ideal. I haven't come across one in my quick search on Google.
Please let me know if there are any, or if any of you have created one for yourself. I assume some universities might have their own VM images. Please share such links.
|
VM image for data science projects
|
CC BY-SA 3.0
| null |
2015-01-22T21:34:57.443
|
2021-02-09T04:28:40.193
|
2017-04-24T17:37:47.470
|
31513
|
7923
|
[
"python",
"r",
"tools"
] |
4926
|
2
| null |
4925
|
13
| null |
There is another choice which has become popular recently: Docker ([https://www.docker.com](https://www.docker.com)). Docker is a container platform that lets you create and maintain a working environment very easily and quickly.
- install essential tools for data science in python
https://registry.hub.docker.com/u/ceshine/python-datascience/
- use r language to do data science
https://github.com/rocker-org/rocker
Hope that would help you.
| null |
CC BY-SA 3.0
| null |
2015-01-23T02:08:59.950
|
2015-01-23T02:08:59.950
| null | null |
1003
| null |
4927
|
2
| null |
4925
|
8
| null |
While Docker images are now more trendy, I personally find Docker technology not user-friendly, even for advanced users. If you are OK with using non-local VM images and can use Amazon Web Services (AWS) EC2, consider R-focused images for data science projects, pre-built by Louis Aslett. The images contain very recent, if not the latest, versions of Ubuntu LTS, R and RStudio Server. You can access them [here](http://www.louisaslett.com/RStudio_AMI).
Besides main components I've listed above, the images contain many useful data science tools built-in as well. For example, the images support LaTeX, ODBC, OpenGL, Git, optimized numeric libraries and more.
| null |
CC BY-SA 3.0
| null |
2015-01-23T02:26:23.773
|
2015-01-23T02:32:18.343
|
2015-01-23T02:32:18.343
|
2452
|
2452
| null |
4928
|
2
| null |
4923
|
2
| null |
How to calculate key points:
- Take your image $I(x,y)$ and convolve it using a Gaussian $G(x,y,k\sigma)$ at different scales $k\sigma$ to obtain $L(x,y,k\sigma)$. This produces several versions (one for each scale $k\sigma$) of your image $I(x,y)$ with different degrees of blurring.
- Separate your blurred images according to octaves (an octave is usually taken as $2\sigma$). Within a given octave, take the difference between adjacent blurred images $L(x,y,k_{i}\sigma)$ and $L(x,y,k_{j}\sigma)$. This difference is called a Difference of Gaussians (DoG) $D(x,y,\sigma) = L(x,y,k_{i}\sigma) - L(x,y,k_{j}\sigma)$. At this point, an image will be helpful (source: opencv):

As you can see, you have the blurred/convolved/filtered images on the left and the differences between adjacent images on the right.
- Then, take one pixel on a DoG and compare it to its 8 neighbors in the same DoG, as well as to the 9 equivalent pixels of the DoG located in the next scale and to the other 9 pixels in the previous DoG. In the drawing, this corresponds to the DoG's located above and below. If this pixel is a local extrema, then you have a candidate keypoint. Keep in mind that candidates can be discarded based on other criteria.
Once you have these keypoints, you proceed to generate descriptors. For each keypoint, calculate an image gradient. As you should know, gradients tell you the direction of maximum rate of change. Therefore, you can construct a grid around each keypoint that is oriented according to the dominant gradient around that point. This grid has subregions (usually 16 subregions) and for each subregion, calculate an 8-bin histogram. Finally, concatenate every histogram you obtained from every subregion in your grid and that is the feature vector for that keypoint (actually, the full feature vector also includes the location and rotation angle.) A helpful illustration (from Solem's book Programming Computer Vision with Python):

At this point, you have a feature vector with 132 values for each keypoint. As for your second question, I'm not sure about the best approach to store an array in a database. Maybe others can expand on this point. Of course, there are several options:
- Create an array datatype to store your data
- Simply use a VARCHAR field
- Store it as a binary file
- Use a database specifically designed to handle arrays.
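Coming back to the extraction itself, here is a hedged sketch using OpenCV (assumes OpenCV 4.4+, where `SIFT_create` lives in the main module); a random grayscale image stands in for a real photo:
```
import cv2
import numpy as np

img = np.random.randint(0, 256, (256, 256), dtype=np.uint8)  # placeholder image
sift = cv2.SIFT_create()
keypoints, descriptors = sift.detectAndCompute(img, None)

print(len(keypoints), "keypoints detected")
if descriptors is not None:
    print(descriptors.shape)  # (n_keypoints, 128): the concatenated 8-bin histograms
```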
| null |
CC BY-SA 3.0
| null |
2015-01-23T06:10:37.417
|
2015-01-23T06:15:40.773
|
2015-01-23T06:15:40.773
|
4621
|
4621
| null |
4929
|
2
| null |
4925
|
12
| null |
If you are looking for a VM with a bunch of tools preinstalled, try the [Data Science Toolbox](http://datasciencetoolbox.org/).
| null |
CC BY-SA 3.0
| null |
2015-01-23T10:22:36.327
|
2015-01-23T10:22:36.327
| null | null |
21
| null |
4930
|
2
| null |
4925
|
5
| null |
Did you try Cloudera's QuickStart VM?:
- http://www.cloudera.com/content/cloudera/en/downloads/quickstart_vms/cdh-5-3-x.html
I found it very easy to run it and it includes open source software such as [Mahout](http://mahout.apache.org/) and [Spark](https://spark.apache.org/).
| null |
CC BY-SA 3.0
| null |
2015-01-23T14:00:52.367
|
2015-01-23T14:00:52.367
| null | null |
7919
| null |
4931
|
2
| null |
4925
|
5
| null |
Today I used [this repository](https://github.com/sequenceiq/docker-spark) and built it with Docker. It is a Docker image that builds Spark on top of the same owner's Hadoop image. If you want to use Spark, it has a Python API called [pyspark](http://spark.apache.org/docs/latest/api/python/).
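For a quick check that Spark works inside the container, a minimal hedged pyspark example (the app name and numbers are arbitrary):
```
from pyspark import SparkContext

sc = SparkContext("local", "sanity-check")
squares = sc.parallelize(range(10)).map(lambda x: x * x).collect()
print(squares)
sc.stop()
```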
| null |
CC BY-SA 4.0
| null |
2015-01-23T15:34:07.570
|
2021-02-09T04:28:40.193
|
2021-02-09T04:28:40.193
|
29169
|
7926
| null |
4934
|
2
| null |
4917
|
2
| null |
What information should be shown on the charts is based on what the viewer wants. From your sample charts, the x axis indicates 'date'. For time series analysis, people usually want to know the trend of the metrics and to predict future performance.
Here are 2 suggestions:
- People may want to know how it will perform in the future. Add a trendline; a trendline can help people predict future performance.
http://onlinehelp.tableausoftware.com/current/pro/online/mac/en-us/trendlines.html
- People may want to know how far away from the goal they are. When people mouse over the points on the metric line, you can show how far the current metric is from the goal.
Try to understand what information people want to know and visualise it. Hope my answer helps you.
| null |
CC BY-SA 3.0
| null |
2015-01-23T23:16:54.220
|
2015-01-23T23:16:54.220
| null | null |
1003
| null |
4935
|
1
| null | null |
0
|
93
|
In the big data world there is lot of talk about implementing an "active archive". I see cloudera talk about it a lot.
The value prop is that you move the low value (and less used) data from EDW to Hadoop and then save on expensive EDW storage by using Hadoop.
So in my company say we keep 10 years of data on EDW and say we don't use anything below 2 years very actively. So I move 8 years of data from EDW to Hadoop.
I can setup an Impala (or equivalent) product to query the 8 years of data as well.
But the problem is: how do you order and sort a query which requires some data from the EDW and some data from Impala?
This kind of grouping, sorting, ordering, etc. will have to be done in memory, and the apps which queried the EDW were not written for such operations and don't have the capacity to sort, group and process that much data.
So how are people implementing the Active Archive?
|
Questions on "Active Archive"
|
CC BY-SA 3.0
| null |
2015-01-24T15:15:27.777
|
2015-01-24T16:51:33.300
|
2015-01-24T15:22:51.663
|
6644
|
6644
|
[
"apache-hadoop"
] |
4936
|
1
| null | null |
13
|
5530
|
What happens when we train a basic support vector machine (linear kernel and no soft-margin) on non-linearly separable data? The optimisation problem is not feasible, so what does the minimisation algorithm return?
|
What happens when we train a linear SVM on non-linearly separable data?
|
CC BY-SA 3.0
| null |
2015-01-24T15:52:28.727
|
2018-02-02T14:40:20.767
|
2015-01-25T09:39:27.340
|
1003
|
7944
|
[
"machine-learning",
"svm"
] |
4937
|
2
| null |
4935
|
1
| null |
I think the idea is that you have all data in HDFS, and query it with Impala, and also keep some small amount of data in your data warehouse. That is, keep all 10 years in Hadoop, and also 2 years in an EDW. The cost of also having the 2 years in Hadoop is small.
| null |
CC BY-SA 3.0
| null |
2015-01-24T16:51:33.300
|
2015-01-24T16:51:33.300
| null | null |
21
| null |
4938
|
2
| null |
4936
|
13
| null |
I think basic support vector machine means hard-margin SVM. So, let's review:
### What is a Hard-Margin SVM
In short, we want to find the hyperplane with the largest margin that is able to separate all observations correctly in our training sample space.
### The optimisation problem in hard-margin SVM
Given the above definition, what is the optimisation problem which we need to solve?
- The largest margin hyperplane: We want max(margin)
- Be able to separate all observations correctly: We need to optimise margin and also satisfy the constraint: No in-sample errors
### What happens when we train a linear SVM on non-linearly separable data?
Back to your question: since the training data set is not linearly separable, with a hard-margin SVM and no feature transformations it's impossible to find any hyperplane which satisfies "no in-sample errors".
Normally we solve the SVM optimisation problem by Quadratic Programming, because it can handle optimisation tasks with constraints. If you use Gradient Descent or another optimisation algorithm that does not enforce the constraints of the hard-margin SVM, you will still get a result, but it won't be a hard-margin SVM hyperplane.
By the way, with non-linearly separable data, we usually choose:
- hard-margin SVM + feature transformations
- use soft-margin SVM directly (in practice, soft-margin SVM usually gets good results; see the sketch below)
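As a hedged scikit-learn sketch (scikit-learn's SVC is a soft-margin SVM), this shows that a linear SVM still returns a hyperplane on overlapping classes, and that a very large C only approximates the hard margin:
```
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(1, 1, (50, 2))])  # overlapping classes
y = np.array([0] * 50 + [1] * 50)

soft = SVC(kernel="linear", C=1.0).fit(X, y)
almost_hard = SVC(kernel="linear", C=1e6).fit(X, y)
print(soft.score(X, y), almost_hard.score(X, y))
```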
| null |
CC BY-SA 3.0
| null |
2015-01-25T01:29:53.470
|
2018-02-02T14:40:20.767
|
2020-06-16T11:08:43.077
|
-1
|
1003
| null |
4941
|
2
| null |
2682
|
7
| null |
Neo4j and Spark GraphX are meant for solving problems at different levels and they are complementary to each other.
They can be connected by Neo4j's Mazerunner extension:
>
Mazerunner is a Neo4j unmanaged extension and distributed graph
processing platform that extends Neo4j to do big data graph processing
jobs while persisting the results back to Neo4j.
Mazerunner uses a message broker to distribute graph processing jobs
to Apache Spark's GraphX module. When an agent job is dispatched, a
subgraph is exported from Neo4j and written to Apache Hadoop HDFS.
After Neo4j exports a subgraph to HDFS, a separate Mazerunner service
for Spark is notified to begin processing that data. The Mazerunner
service will then start a distributed graph processing algorithm using
Scala and Spark's GraphX module. The GraphX algorithm is serialized
and dispatched to Apache Spark for processing.
Once the Apache Spark job completes, the results are written back to
HDFS as a Key-Value list of property updates to be applied back to
Neo4j.
Neo4j is then notified that a property update list is available from
Apache Spark on HDFS. Neo4j batch imports the results and applies the
updates back to the original graph.
Check out this tutorial to get an idea on how to combine the two:
[http://www.kennybastani.com/2014/11/using-apache-spark-and-neo4j-for-big.html](http://www.kennybastani.com/2014/11/using-apache-spark-and-neo4j-for-big.html)
| null |
CC BY-SA 3.0
| null |
2015-01-25T14:43:21.033
|
2015-02-02T07:31:51.050
|
2015-02-02T07:31:51.050
|
7950
|
7950
| null |
4942
|
1
| null | null |
17
|
11106
|
Due to various [curses of dimensionality](http://en.wikipedia.org/wiki/Curse_of_dimensionality), the accuracy and speed of many of the common predictive techniques degrade on high dimensional data. What are some of the most useful techniques/tricks/heuristics that help deal with high-dimensional data effectively? For example,
- Do certain statistical/modeling methods perform well on high-dimensional datasets?
- Can we improve the performance of our predictive models on high-dimensional data by using certain distance measures (that define alternative notions of distance) or kernels (that define alternative notions of dot product)?
- What are the most useful techniques of dimensionality reduction for high-dimensional data?
|
High-dimensional data: What are useful techniques to know?
|
CC BY-SA 3.0
| null |
2015-01-25T22:52:23.437
|
2017-02-14T16:31:51.417
|
2015-01-25T22:59:05.897
|
5203
|
5203
|
[
"machine-learning",
"statistics",
"dimensionality-reduction"
] |
4943
|
1
| null | null |
14
|
40293
|
How does varying the regularization parameter in an SVM change the decision boundary for a non-separable dataset? A visual answer and/or some commentary on the limiting behaviors (for large and small regularization) would be very helpful.
|
Intuition for the regularization parameter in SVM
|
CC BY-SA 3.0
| null |
2015-01-25T23:03:02.297
|
2021-01-10T16:58:56.070
| null | null |
5203
|
[
"svm"
] |
4944
|
1
| null | null |
4
|
1698
|
What are the most useful techniques for learning a binary classifier from a dataset with a high degree of imbalance (i.e., a dataset with the "target" class being much rarer than the "background" class)? For example,
- Should one first down-sample the majority/background class to reduce its frequency and then readjust the probabilities reported by the learning algorithm? How should one do the readjustment?
- Should one use different approaches for different learning algorithms, i.e., are there different techniques for dealing with imbalance in SVM, random forests, logistic regression, etc.?
|
How to learn a classifier from a dataset with high imbalance
|
CC BY-SA 3.0
| null |
2015-01-25T23:09:22.863
|
2017-06-19T11:24:33.477
|
2015-01-26T16:43:42.273
|
97
|
5203
|
[
"machine-learning",
"classification",
"class-imbalance"
] |