GUI and Desktop Applications (int64, 0 to 1) | A_Id (int64, 5.3k to 72.5M) | Networking and APIs (int64, 0 to 1) | Python Basics and Environment (int64, 0 to 1) | Other (int64, 0 to 1) | Database and SQL (int64, 0 to 1) | Available Count (int64, 1 to 13) | is_accepted (bool, 2 classes) | Q_Score (int64, 0 to 1.72k) | CreationDate (string, length 23) | Users Score (int64, -11 to 327) | AnswerCount (int64, 1 to 31) | System Administration and DevOps (int64, 0 to 1) | Title (string, length 15 to 149) | Q_Id (int64, 5.14k to 60M) | Score (float64, -1 to 1.2) | Tags (string, length 6 to 90) | Answer (string, length 18 to 5.54k) | Question (string, length 49 to 9.42k) | Web Development (int64, 0 to 1) | Data Science and Machine Learning (int64, 1 to 1) | ViewCount (int64, 7 to 3.27M) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 63,844,295 | 0 | 0 | 0 | 0 | 1 | false | 3 | 2019-08-22T04:46:00.000 | 0 | 1 | 0 | Dash graph selected legend items as input to callback | 57,602,189 | 0 | python,callback,plotly-dash,legend-properties,plotly-python | Use restyleData input in the callback: Input("graph-id", "restyleData") | I have a Dash app with a dcc.Graph object and a legend for multiple deselectable traces. How can I pass the list of traces selected in the legend as an input to a callback? | 0 | 1 | 542 |
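A minimal sketch of the restyleData suggestion in the answer above. The component IDs, the toy figure, and the Dash 1.x-style imports are assumptions for illustration, not taken from the original question:

```python
import dash
import dash_core_components as dcc
import dash_html_components as html
from dash.dependencies import Input, Output

app = dash.Dash(__name__)
app.layout = html.Div([
    dcc.Graph(id="graph-id", figure={"data": [
        {"y": [1, 2, 3], "name": "trace A"},
        {"y": [3, 2, 1], "name": "trace B"},
    ]}),
    html.Div(id="legend-state"),
])

@app.callback(Output("legend-state", "children"),
              [Input("graph-id", "restyleData")])
def on_legend_click(restyle_data):
    # restyle_data is None before any interaction; after a legend click it looks
    # like [{'visible': ['legendonly']}, [0]] (new property values, trace indices).
    return str(restyle_data)

if __name__ == "__main__":
    app.run_server(debug=True)
```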
0 | 57,614,302 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-08-22T10:10:00.000 | 0 | 2 | 0 | Specify log-normal family in BAMBI model | 57,606,946 | 0 | python-3.x,bambi | Unless I'm misunderstanding something, I think all you need to do is specify link='log' in the fit() call. If your assumption is correct, the exponentiated linear prediction will be normally distributed, and the default error distribution is gaussian, so I don't think you need to build a custom family for this—the default gaussian family with a log link should work fine. But feel free to clarify if this doesn't address your question. | I'm trying to fit a simple Bayesian regression model to some right-skewed data. Thought I'd try setting family to a log-normal distribution. I'm using pymc3 wrapper BAMBI. Is there a way to build a custom family with a log-normal distribution? | 0 | 1 | 162 |
0 | 57,712,280 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2019-08-23T00:00:00.000 | 1 | 1 | 1 | Dask worker seem die but cannot find the worker log to figure out why | 57,618,323 | 0.197375 | python,dask | Worker logs are usually managed by whatever system you use to set up Dask.
Perhaps you used something like Kubernetes or Yarn or SLURM?
These systems all have ways to get logs back.
Unfortunately, once a Dask worker is no longer running, Dask itself has no ability to collect logs for you. You need to use the system that you use to launch Dask. | I have a piece of Dask code that runs on my local machine and works 90% of the time but sometimes gets stuck. By "stuck" I mean: no crash, no error printed, no CPU usage; it just never ends.
I googled and think it may be due to a dead worker. It would be very useful if I could see the worker log and figure out why.
But I cannot find my worker log. I edited config.yaml to add logging but still see nothing on stderr.
Then I go to dashboard --> info --> logs and see a blank page.
The code where it gets stuck is
X_test = df_test.to_dask_array(lengths=True)
or
proba = y_pred_proba_train[:, 1].compute()
and my ~/.config/dask/config.yaml or ~.dask/config.yaml look like
logging:
distributed: info
distributed.client: warning
distributed.worker: debug
bokeh: error
I am using
python 3.6
dask 1.1.4
All I need is a way to see the log so that I can try to figure out what goes wrong.
Thanks
Joseph | 0 | 1 | 149 |
0 | 57,627,460 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-08-23T12:22:00.000 | 0 | 1 | 0 | TFIDF vs Word2Vec | 57,626,276 | 0 | python,machine-learning,data-science,word2vec,tf-idf | In Example 1, the word2vec model probably doesn't have the words "Bills" and "CHAPS" in its vocabulary. That being said, once those words are taken out, the sentences are the same.*
In Example 2, the word2vec tokenization may have treated "requirements:" as one token and "requirements" as a different one. That's why their vectors are a bit different, so the documents aren't exactly the same.
*Word2vec computes the sentence vector by taking the average of its word vectors. If a word isn't in the word2vec vocabulary, it gets vector=[0,0,...,0] (a small sketch of this averaging follows this entry). | I am trying to find a similarity score between two documents (containing around 15000 records).
I am using two methods in python:
1. TFIDF (Scikit learn) 2. Word2Vec (gensim, google pre-trained vectors)
Example1
Doc1- Click on "Bills" tab
Doc2- Click on "CHAPS" tab
First method gives 0.9 score.
Second method gives 1 score
Example2
Doc1- See following requirements:
Doc2- See following requirements
First method gives 1 score.
Second method gives 0.98 score
Can anyone tell me:
why in Example1 Word2Vec gives 1 even though the documents are very different,
and why in Example2 Word2Vec gives 0.98 even though they differ only by a ":" | 0 | 1 | 2,587
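As an illustration of the averaging footnote in the answer above, here is a toy sketch; the tiny embedding table stands in for the real pretrained vectors and is purely invented:

```python
import numpy as np

# Toy "embedding table"; in the question this would be the pretrained Google vectors.
vectors = {"click": np.array([1.0, 0.0]),
           "on":    np.array([0.0, 1.0]),
           "tab":   np.array([1.0, 1.0])}
dim = 2

def sentence_vector(tokens):
    # Average of word vectors; out-of-vocabulary words ("bills", "chaps")
    # contribute zero vectors and so cannot distinguish the two sentences.
    vecs = [vectors.get(tok, np.zeros(dim)) for tok in tokens]
    return np.mean(vecs, axis=0)

v1 = sentence_vector("click on bills tab".split())
v2 = sentence_vector("click on chaps tab".split())
print(np.allclose(v1, v2))  # True: both documents look identical to the model
```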
0 | 57,636,395 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-08-24T07:50:00.000 | 0 | 2 | 0 | How to get a callback when the specified epoch number is over? | 57,636,091 | 0 | python,keras | Actually, given the way Keras works, this is probably not the best way to go; it would be much better to treat this as fine tuning, meaning that you finish the 10 epochs, save the model, and then load the model (from another script) and continue training with the learning rate and data you fancy (a short sketch follows this entry).
There are several reasons for this.
It is much clearer and easier to debug. You check your model properly after the 10 epochs, verify that it works, and carry on.
It is much better to do several experiments this way, starting from epoch 10.
Good luck! | I want to fine tune my model when using Keras, and I want to change my training data and learning rate once training reaches epoch 10. So how do I get a callback when the specified epoch number is over? | 0 | 1 | 484
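A minimal sketch of the save-then-resume fine-tuning workflow described in the answer above; the toy data, layer sizes, file name, and learning rates are placeholders:

```python
import numpy as np
from tensorflow import keras

# Toy data standing in for "training data" and "new training data".
x_a, y_a = np.random.rand(100, 8), np.random.randint(0, 2, 100)
x_b, y_b = np.random.rand(100, 8), np.random.randint(0, 2, 100)

model = keras.Sequential([
    keras.layers.Dense(16, activation="relu", input_shape=(8,)),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer=keras.optimizers.Adam(1e-3), loss="binary_crossentropy")
model.fit(x_a, y_a, epochs=10, verbose=0)   # the first 10 epochs
model.save("stage1.h5")

# Later, possibly from another script: reload, lower the learning rate, swap data.
model = keras.models.load_model("stage1.h5")
model.compile(optimizer=keras.optimizers.Adam(1e-4), loss="binary_crossentropy")
model.fit(x_b, y_b, epochs=10, verbose=0)   # continue training
```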
0 | 57,638,917 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-08-24T11:40:00.000 | 0 | 1 | 0 | OpenCV feature pairs to point cloud | 57,637,608 | 0 | python,opencv | As you say, you have no calibration, so let’s forget about rectification. What you want is the depth of the points, so you can project them into 3D (which then uses just the intrinsic calibration of one camera, mainly the focal length).
Since you have no rectification, you cannot expect exact results, so let’s try to get as close as possible:
Depth is focal length times baseline divided by disparity, with disparity and focal length in pixels, and depth and baseline in the same unit (meters are recommended).
For accurate disparity you need a rectified camera and correspondences between your features in both images. Since without calibration, you have no hope of rectification, you could try to just use the original images instead. It will work fine the more parallel the cameras are. If they are not parallel, you will introduce an error here and your results will become less accurate. If this becomes bad you must find a way to calibrate your camera.
But most importantly, you need correspondences between your features in both images. Running SIFT in both images won't do. A better approach would be running SIFT in just one image and then finding the corresponding pixels for each of the features in the other image. There are plenty of methods for that; I believe OpenCV has some simple block matching built in (see the sketch after this entry). | I have some SIFT features in two stereo images, and I'm trying to place them in 3D space. I've found triangulatePoints, which seems to be what I want, however, I'm having trouble with the arguments.
triangulatePoints takes 4 arguments, projMatr1 and projMatr2, which is where my issues start, and projPoints1 and projPoints2, which are my feature points. The OpenCV docs suggest using stereoRectify to find the projection matrices.
stereoRectify takes the intrinsic camera matrices (which I've calculated prior with calibrateCamera) and the image size from calibration. As well as two arguments R (rotation matrix) and T (translation vector), which can be found with stereoCalibrate.
However, stereoCalibrate takes "object points", which I'm pretty sure I can't calculate for images without a reference, which is a bit of a roadblock.
Is this the best way to be calculating 3D positions from pairs of features? If so, how can I calculate projMatr1 and projMatr2 without stereoCalibrate? | 0 | 1 | 135 |
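A sketch of the approach in the answer above, assuming roughly parallel cameras so the left camera frame can serve as the world frame; the intrinsics, baseline, and matched points are made-up values:

```python
import numpy as np
import cv2

K = np.array([[700.0, 0.0, 320.0],
              [0.0, 700.0, 240.0],
              [0.0,   0.0,   1.0]])   # intrinsic matrix from calibrateCamera
baseline = 0.1                        # metres between the (assumed parallel) cameras

P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])                     # left camera
P2 = K @ np.hstack([np.eye(3), np.array([[-baseline, 0.0, 0.0]]).T])  # right camera

# 2xN arrays of corresponding feature coordinates in the left/right images.
pts1 = np.array([[100.0, 200.0], [120.0, 220.0]]).T
pts2 = np.array([[ 90.0, 200.0], [110.0, 220.0]]).T

pts4d = cv2.triangulatePoints(P1, P2, pts1, pts2)
pts3d = (pts4d[:3] / pts4d[3]).T       # homogeneous -> 3D points

# Back-of-envelope depth from the formula above: Z = f * baseline / disparity
disparity = pts1[0] - pts2[0]
print(pts3d[:, 2], K[0, 0] * baseline / disparity)
```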
0 | 57,641,090 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-08-24T18:30:00.000 | 1 | 3 | 0 | Best data type (in terms of speed/RAM) for millions of pairs of a single int paired with a batch (2 to 100) of ints | 57,640,595 | 0.066568 | python,numpy | Use numpy. It is the most efficient, and you can use it easily with a machine learning model. | I have about 15 million pairs that consist of a single int, paired with a batch of (2 to 100) other ints.
If it makes a difference, the ints themselve range from 0 to 15 million.
I have considered using:
Pandas, storing the batches as python lists
Numpy, where the batch is stored as its own numpy array (since numpy doesn't allow variable-length rows in its 2D data structures)
Python List of Lists.
I also looked at Tensorflow tfrecords but not too sure about this one.
I only have about 12 GB of RAM. I will also be using it to train a machine learning algorithm so | 0 | 1 | 71
0 | 67,599,276 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-08-24T20:59:00.000 | 0 | 2 | 0 | Detecting questions in text | 57,641,504 | 0 | python-3.x,machine-learning,text,nlp,analytics | Please use NLP preprocessing methods before running the sentiment analysis. Use TF-IDF or Word2Vec to create vectors from the given dataset, and then try the sentiment analysis. You may also need GloVe vectors for conducting the analysis. | I have a project where I need to analyze a text to extract information about whether the user who posted it needs help with something or not. I tried to use sentiment analysis but it didn't work as expected. My idea was to take the negative posts, extract the main words in each post, and suggest some articles about that subject to the user. If there is another way that can help me, please post it below. Thanks.
As for the dataset I used, it was a dataset for sentiment analysis, but now I have found that it's not working and I need a dataset suited for this subject. | 0 | 1 | 578
0 | 57,797,731 | 0 | 1 | 0 | 0 | 3 | false | 1 | 2019-08-24T22:45:00.000 | 1 | 3 | 0 | PyTorch not downloading | 57,642,019 | 0.066568 | python,pip,pytorch | I've been in the same situation.
My problem was the Python version... I mean, its bitness.
The Python I'd installed was 32-bit.
You should check whether the Python you installed is 32- or 64-bit (a one-line check is sketched after this entry).
You can check it in the Settings app: search for Python and you will see which bit version you've installed.
After I installed the 64-bit version of Python, it was solved.
I hope you figure it out!
environment : win 10 | I go to the PyTorch website and select the following options
PyTorch Build: Stable (1.2)
Your OS: Windows
Package: pip
Language: Python 3.7
CUDA: None
(All of these are correct)
Then it displays a command to run
pip3 install torch==1.2.0+cpu torchvision==0.4.0+cpu -f https://download.pytorch.org/whl/torch_stable.html
I have already tried mixing the different options around, but none of them has worked.
ERROR: ERROR: Could not find a version that satisfies the requirement torch==1.2.0+cpu (from versions: 0.1.2, 0.1.2.post1, 0.1.2.post2)
ERROR: No matching distribution found for torch==1.2.0+cpu
I tried to do pip install pytorch but pytorch doesn't support pypi | 0 | 1 | 2,971 |
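A quick way to run the bitness check suggested in the first answer (plain Python, nothing PyTorch-specific):

```python
import platform
import struct

print(platform.architecture())    # e.g. ('64bit', 'WindowsPE')
print(struct.calcsize("P") * 8)   # 64 on a 64-bit interpreter, 32 otherwise
```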
0 | 57,642,037 | 0 | 1 | 0 | 0 | 3 | false | 1 | 2019-08-24T22:45:00.000 | 0 | 3 | 0 | PyTorch not downloading | 57,642,019 | 0 | python,pip,pytorch | It looks like it can't find a version called "1.2.0+cpu" in its list of versions that it can find (0.1.2, 0.1.2.post1, 0.1.2.post2). Try looking for one of those versions on the PyTorch website. | I go to the PyTorch website and select the following options
PyTorch Build: Stable (1.2)
Your OS: Windows
Package: pip
Language: Python 3.7
CUDA: None
(All of these are correct)
Then it displays a command to run
pip3 install torch==1.2.0+cpu torchvision==0.4.0+cpu -f https://download.pytorch.org/whl/torch_stable.html
I have already tried mixing the different options around, but none of them has worked.
ERROR: ERROR: Could not find a version that satisfies the requirement torch==1.2.0+cpu (from versions: 0.1.2, 0.1.2.post1, 0.1.2.post2)
ERROR: No matching distribution found for torch==1.2.0+cpu
I tried to do pip install pytorch but pytorch doesn't support pypi | 0 | 1 | 2,971 |
0 | 57,648,698 | 0 | 1 | 0 | 0 | 3 | false | 1 | 2019-08-24T22:45:00.000 | 0 | 3 | 0 | PyTorch not downloading | 57,642,019 | 0 | python,pip,pytorch | So it looks like I can't install PyTorch because I am running 32-bit Python. This may or may not be the problem, but it is the only possible cause of the error that I could see. | I go to the PyTorch website and select the following options
PyTorch Build: Stable (1.2)
Your OS: Windows
Package: pip
Language: Python 3.7
CUDA: None
(All of these are correct)
Then it displays a command to run
pip3 install torch==1.2.0+cpu torchvision==0.4.0+cpu -f https://download.pytorch.org/whl/torch_stable.html
I have already tried mixing the different options around, but none of them has worked.
ERROR: ERROR: Could not find a version that satisfies the requirement torch==1.2.0+cpu (from versions: 0.1.2, 0.1.2.post1, 0.1.2.post2)
ERROR: No matching distribution found for torch==1.2.0+cpu
I tried to do pip install pytorch but pytorch doesn't support pypi | 0 | 1 | 2,971 |
0 | 57,658,971 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-08-26T02:56:00.000 | 0 | 1 | 0 | Can someone explain or summarize the input shape of keras under different type of neural networks? | 57,651,278 | 0 | python,keras | Generally, a CNN takes 4-dimensional input data. Whenever you train a CNN model in Keras it will automatically convert the input data into 4D. If you want to predict using your CNN model, you have to make sure that the output data, or the data you want to run inference on, has the same dimensions as the input data. You can simply add the missing axis to the data by using numpy's expand_dims function (see the sketch after this entry). | I am very new to Keras in Python, and with my current understanding of Keras I'm confused about its input shapes. I feel that under different neural networks I need to reshape my data into different shapes.
For example, if I'm building a simple ANN, my training data should be a matrix like [m, n], where m is the number of samples and n is the number of features. But recently I've been learning about 1D convolutional neural networks, and I found that the tutorial constructs the training data as [a, b, c], where a is the number of samples, b is the number of timesteps, and c is the number of features (equal to 1). But why can't I simply reshape the data into [a, b], since c will always be 1 for a 1D convolutional neural network?
I'm not sure if I understand the above correctly. I am just wondering whether there is a summary of the training-data shapes for different neural networks, or whether there is any logic behind the shape of the data, so I can always make sure my training data has the right format.
By different neural networks I mean ANN, 1D CNN, 2D CNN, RNN and so on. | 0 | 1 | 54
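A small sketch of the reshaping discussed above: a 2-D [samples, features] matrix for a plain dense network versus the 3-D [samples, timesteps, channels] array a Conv1D/LSTM layer expects; the shapes are made up:

```python
import numpy as np

x = np.random.rand(32, 100)          # [m, n]: 32 samples, 100 features -> fine for Dense

# Conv1D / LSTM layers expect [samples, timesteps, channels], so the trailing
# channel axis of size 1 must be added explicitly even though it is "always 1".
x_seq = np.expand_dims(x, axis=-1)
print(x.shape, x_seq.shape)          # (32, 100) (32, 100, 1)
```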
0 | 57,660,353 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-08-26T13:46:00.000 | 0 | 1 | 0 | combined different objects contains plots and data frames (tables) and paragraphs (markdowns) in to single html report | 57,659,156 | 0 | python,html,reporting | Use Jupyter or Zeppelin notebooks. They provide all of the functionality you described and can export to PDF. Reports can even be run/emailed on a predetermined schedule. | I want to generate a single report that integrates different objects, as my analysis includes plots (matplotlib, seaborn and bokeh), pandas data frames (tables) and paragraphs (markdown), into an HTML report in Python | 1 | 1 | 16
0 | 57,661,488 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-08-26T15:52:00.000 | 1 | 1 | 0 | Does the simple parameters also change during Hyper-parameter tuning | 57,661,188 | 1.2 | python,machine-learning,cross-validation | The short answer is NO, they are not fixed.
That is because hyper-parameters directly influence your simple parameters. For a neural network, the number of hidden layers to use is a hyper-parameter, while the weights and biases in each layer can be called simple parameters. Of course, you can't keep the weights of individual layers constant when the number of layers of the network (a hyper-parameter) itself is variable. Similarly, in linear regression your regularization hyper-parameter directly impacts the weights learned.
So the goal of tuning hyper-parameters is to find values that lead to the best set of those simple parameters. The simple parameters are the ones you actually care about, and they are the ones used in the final prediction/deployment. Tuning hyper-parameters while keeping them fixed would therefore be meaningless. | During hyper-parameter tuning, are the parameters (weights already learned during model training) also optimized, or are they fixed while optimal values are found only for the hyper-parameters? Please explain. | 0 | 1 | 34
0 | 57,663,266 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2019-08-26T17:30:00.000 | 2 | 1 | 0 | Is it appropriate to train W2V model on entire corpus? | 57,662,405 | 1.2 | python,machine-learning,nlp,word2vec | The answer to most questions like these in NLP is "try both" :-)
Contamination of test vs train data is not relevant or a problem in generating word vectors. That is a relevant issue in the model you use the vectors with. I found performance to be better with whole corpus vectors in my use cases.
Word vectors improve in quality with more data. If you don't use test corpus, you will need to have a method for initializing out-of-vocabulary vectors and understanding the impact they may have on your model performance. | I have a corpus of free text medical narratives, for which I am going to use for a classification task, right now for about 4200 records.
To begin, I wish to create word embeddings using w2v, but I have a question about a train-test split for this task.
When I train the w2v model, is it appropriate to use all of the data for the model creation? Or should I only use the train data for creating the model?
Really, my question sort of comes down to: do I take the whole dataset, create the w2v model, transform the narratives with the model, and then split, or should I split, create w2v, and then transform the two sets independently?
Thanks!
EDIT
I found an internal project at my place of work which was built by a vendor; they create the split, and create the w2v model on ONLY the train data, then transform the two sets independently in different jobs; so it's the latter of the two options that I specified above. This is what I thought would be the case, as I wouldn't want to contaminate the w2v model with any of the test data. | 0 | 1 | 1,011
0 | 57,681,370 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-08-26T22:11:00.000 | 0 | 1 | 0 | PySpark Group and apply UDF row by row operation | 57,665,530 | 0 | python,pyspark | you need an aggregation ?
df.groupBy("tag").agg({"date":"min"})
what about that ? | I have a dataset that contains 'tag' and 'date'. I need to group the data by 'tag' (this is pretty easy), then within each group count the number of row that the date for them is smaller than the date in that specific row. I basically need to loop over the rows after grouping the data. I don't know how to write a UDF which takes care of that in PySpark. I appreciate your help. | 0 | 1 | 62 |
0 | 58,252,317 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-08-27T08:01:00.000 | 0 | 1 | 0 | Estimator.train() and .predict() are too slow for small data sets | 57,670,160 | 0 | python,tensorflow-estimator | Convert to a tf.keras.Model instead of an Estimator, and use tf.keras.Model.fit() instead of Estimator.train(). fit() doesn't have the fixed delay that train() does. The Keras predict() doesn't either. | I'm trying to implement a DQN which makes many calls to Estimator.train() followed by Estimator.predict() on the same model with a small number of examples each. But each call takes a minimum of a few hundred milliseconds to over a second which is independent of the number of examples for small numbers like 1-20.
I think these delays are caused by rebuilding the graph and saving checkpoints on each call. Is there are way to keep the same graph and parameters in memory for fast train-predict iterations or otherwise speed it up? | 0 | 1 | 148 |
0 | 62,469,494 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-08-27T12:03:00.000 | 0 | 1 | 0 | Poor performance transfer learning ResNet50 | 57,674,274 | 0 | python,tensorflow,keras,deep-learning | I have read a few articles about the same topic - I have 12k JPEG images from 3 classes, and after 3 epochs the accuracy dropped to 0. I am awaiting delivery of a new graphics card to improve performance (it's currently taking 90-120 minutes per epoch) and hope to give more feedback. I am just wondering if the fact that this model was designed for ImageNet and its 21k classes might be part of the problem - it's too wide and deep, and therefore too sensitive to changes to the weights. I would be interested in others' views. | I have a dataset of 11k images labeled for semantic segmentation. About 8.8k belong to 'group 1' and the rest to 'group 2'
I am trying to simulate what would happen if we lost access to 'group 1' imagery but not a network trained from them.
So I trained ResNet50 on group 1 only. Then used that network as a starting point for training group 2 only.
Results are essentially slightly better than not training with group 2 imagery (3% in average per class accuracy) but less than 1% better than if I just started with imagenet weights. I tested freezing blocks of resnet50 and a range of learning rates.
Group 1 and 2 are part of the same problem domain but are a bit different. They are taken at different regions (in fact the whole set covers a bunch of areas but group 1 and 2 are disjoint in this regard) and a different camera/resolution. They are resized to a fixed size though this fixed size is closer to group 1 average size.
They are very different to imagenet images. They are monochrome, rectangular and are essentially one type of object that I'm segmenting.
I'm not seeking to get the same result as training on all the images at once but surely there must be a bump in doing this over just training from imagenet. | 0 | 1 | 150 |
0 | 57,680,106 | 0 | 0 | 0 | 0 | 1 | true | 2 | 2019-08-27T17:48:00.000 | 2 | 1 | 0 | Saving large numpy 2d arrays | 57,679,863 | 1.2 | python,numpy | As pointed out in the comments, 1e6 rows * 4800 columns * 4 bytes per float32 is 18GiB. Writing a float to text takes ~9 bytes of text (estimating 1 for integer, 1 for decimal, 5 for mantissa and 2 for separator), which comes out to 40GiB. This takes a long time to do, since just the conversion to text itself is non-trivial, and disk I/O will be a huge bottle-neck.
One way to optimize this process may be to convert the entire array to text on your own terms, and write it in blocks using Python's binary I/O. I doubt that will give you too much benefit though.
A much better solution would be to write the binary data to a file instead of text. Aside from the obvious advantages of space and speed, binary has the advantage of being searchable and not requiring transformation after loading. You know where every individual element is in the file, if you are clever, you can access portions of the file without loading the entire thing. Finally, a binary file is more likely to be highly compressible than a relatively low-entropy text file.
Disadvantages of binary are that it is not human-readable, and not as portable as text. The latter is not a problem, since transforming into an acceptable format will be trivial. The former is likely a non-issue given the amount of data you are attempting to process anyway.
Keep in mind that human readability is a relative term. A human cannot read 40GiB of numerical data with understanding. A human can process A) a graphical representation of the data, or B) scan through relatively small portions of the data. Both cases are suitable for binary representations. Case A) is straightforward: load, transform and plot the data. This will be much faster if the data is already in a binary format that you can pass directly to the analysis and plotting routines. Case B) can be handled with something like a memory mapped file. You only ever need to load a small portion of the file, since you can't really show more than say a thousand elements on screen at one time anyway. Any reasonable modern platform should be able to keep up with the I/O and binary-to-text conversion associated with a user scrolling around a table widget or similar. In fact, binary makes it easier since you know exactly where each element belongs in the file. | I have an array with ~1,000,000 rows, each of which is a numpy array of 4,800 float32 numbers.
I need to save this as a csv file, however using numpy.savetxt has been running for 30 minutes and I don't know how much longer it will run for.
Is there a faster method of saving the large array as a csv?
Many thanks,
Josh | 0 | 1 | 266 |
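A sketch of the binary route recommended in the answer above; the shape here is shrunk for illustration (the real array would be ~1,000,000 x 4,800 float32):

```python
import numpy as np

arr = np.random.rand(1000, 4800).astype(np.float32)   # stand-in for the real array

np.save("data.npy", arr)                      # raw binary + small header, much faster than savetxt
loaded = np.load("data.npy", mmap_mode="r")   # memory-mapped: slice without loading it all
print(loaded[10:20, :5])

# If a CSV is still required downstream, convert pieces on demand later, e.g.:
# np.savetxt("subset.csv", loaded[:100], delimiter=",", fmt="%.6e")
```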
0 | 59,026,590 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2019-08-29T04:06:00.000 | 2 | 2 | 0 | How to implement fixed-point binary support in numpy | 57,702,835 | 0.197375 | python,numpy-ndarray | For anyone interested, this turned out to be too hard to do by extending Numpy from Python, or it just didn't fit the data model. I ended up writing a separate Python library of types implementing the behaviours I wanted, which uses Numpy arrays of integers under the hood for speed.
It works OK and does the strict binary range calculation and checking that I wanted, but it suffers from Python-level speed overhead, especially with small arrays. If I had time I'm sure it could be done much better/faster as a C library.
The fixed-point support under the hood works on integers, and separate tracking of fixed-point format data (number of integer and fractional bits) for range checking and type conversion.
I have been reading the numpy documentation on ndarray subclassing and dtype, it seems like I might want at least a custom dtype, or separate dtype object for every unique range/precision configuration of fixed-point numbers. I tried subclassing numpy.dtype in Python but that is not allowed.
I'm not sure if I can write something to interoperate with numpy in the way I want without writing C level code - everything so far is pure Python, I have avoided looking under the covers at how to work on the C-based layer of numpy. | 0 | 1 | 2,338 |
0 | 57,712,269 | 0 | 0 | 0 | 0 | 1 | true | 3 | 2019-08-29T13:17:00.000 | 8 | 1 | 0 | Difference between tf.data.Dataset.repeat() vs iterator.initializer | 57,711,103 | 1.2 | python,tensorflow,repeat | As we know, each epoch in the training process of a model takes in the whole dataset and breaks it into batches. This happens on every epoch.
Suppose we have a dataset with 100 samples. On every epoch, the 100 samples are broken into 5 batches (of 20 each) to feed them to the model. But if I have to train the model for, say, 5 epochs, then I need to repeat the dataset 5 times. Meaning, the repeated dataset will have a total of 500 samples (100 samples multiplied 5 times).
Now, this job is done by the tf.data.Dataset.repeat() method. Usually we pass the num_epochs argument to the method.
The iterator.get_next() is just a way of getting the next batch of data from the tf.data.Dataset. You are iterating the dataset batch by batch.
That's the difference. The tf.data.Dataset.repeat() repeats the samples in the dataset whereas iterator.get_next() one-by-one fetches the data in the form of batches. | Tensorflow has tf.data.Dataset.repeat(x) that iterates through the data x number of times. It also has iterator.initializer which when iterator.get_next() is exhausted, iterator.initializer can be used to restart the iteration. My question is is there difference when using tf.data.Dataset.repeat(x) technique vs iterator.initializer? | 0 | 1 | 1,504 |
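A small TF 1.x-style sketch of the behaviour described above: 100 toy samples, batches of 20, repeated for 5 epochs, with get_next() pulling one batch per call:

```python
import numpy as np
import tensorflow as tf  # written against the TF 1.x API used in the question

data = np.arange(100, dtype=np.float32)
dataset = tf.data.Dataset.from_tensor_slices(data).batch(20).repeat(5)

iterator = dataset.make_one_shot_iterator()
next_batch = iterator.get_next()

with tf.Session() as sess:
    batches = 0
    try:
        while True:
            sess.run(next_batch)   # one call = one batch
            batches += 1
    except tf.errors.OutOfRangeError:
        pass
print(batches)  # 25: 5 batches per pass x 5 repeats
```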
0 | 57,732,477 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-08-29T16:15:00.000 | 0 | 1 | 0 | Spark Arrow, toPandas() and wide transformation | 57,714,161 | 1.2 | python,pandas,apache-spark,apache-arrow | toPandas() takes your Spark dataframe object and pulls all partitions onto the client driver machine as a pandas dataframe. Any operations on this new object (a pandas dataframe) will run on a single machine with Python; therefore no wide transformations will be possible, because you aren't using Spark's cluster distributed computing anymore (i.e. no partitions/worker node interaction). | What does toPandas() actually do when using Arrow optimization?
Is the resulting pandas dataframe safe for wide transformations (that requires data shuffling) on the pandas dataframe eg..merge operations? what about group and aggregate? What kind of performance limitation should I expect?
I am trying to standardize to Pandas dataframe where possible, due to ease of unit testing and swapability with in-memory objects without starting the monstrous spark instance. | 0 | 1 | 279 |
0 | 66,201,863 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2019-08-29T21:30:00.000 | 0 | 1 | 0 | 'Chart' object has no attribute 'configure_facet_cell' | 57,717,942 | 0 | python,bar-chart,configure,facet,altair | As per the comments this was created on an early version of the software and is no longer reproducible on current Altair versions. | I am using the Altair package; when I use the following objects I get the following error message.
AttributeError: 'Chart' object has no attribute 'configure_facet_cell'
In order to use the attribute above, what should I install or add?
Thank you in advance. | 0 | 1 | 549 |
0 | 57,727,108 | 0 | 1 | 0 | 0 | 1 | true | 1 | 2019-08-30T08:15:00.000 | 0 | 1 | 0 | Update Tensorboard while keeping Tensorflow old with conda | 57,723,038 | 1.2 | python,tensorflow,conda,tensorboard | as @jdehesa suggested in the comments, it's better to have a different conda environment for pytorch and then install just the tb there
!pip install tb-nightly | I have some legacy Keras/Tensorflow code, which is unstable using latest Tensorflow versions (1.13+). It works just fine with previous versions. However i want to use Pytorch's Tensorboard support which requires it to be 1.14+. I've installed all Tensorflow-related packages to 1.10 and wanted to do just conda install tensorboard=1.14 but it removes tensorflow=1.10 as a requirement. I know that these packages are generally independent. How to upgrade tensorboard while keeping tensorflow old? Preferably i would like to use a single conda environment. | 0 | 1 | 1,279 |
0 | 57,726,399 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-08-30T11:39:00.000 | 2 | 1 | 0 | Binary cross entropy Vs categorical cross entropy with 2 classes | 57,726,064 | 0.379949 | python,pytorch,cross-entropy | If you are using softmax on top of the two output network you get an output that is mathematically equivalent to using a single output with sigmoid on top.
Do the math and you'll see.
In practice, from my experience, if you look at the raw "logits" of the two-output net (before softmax) you'll see that one is exactly the negative of the other. This is a result of the gradients pulling each neuron in exactly the opposite direction.
Therefore, since both approaches are equivalent, the single-output configuration has fewer parameters and requires less computation, so it is more advantageous to use a single output with a sigmoid on top (a small numerical check follows this entry).
Is there any reason for that?
Thanks | 0 | 1 | 636 |
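A tiny numerical check of the equivalence claimed in the answer above: a two-way softmax over logits (z0, z1) gives the same class-1 probability as a sigmoid over the single logit z1 - z0:

```python
import numpy as np

rng = np.random.RandomState(0)
z = rng.randn(5, 2)                                       # two-output "logits"

softmax_p1 = np.exp(z[:, 1]) / np.exp(z).sum(axis=1)      # P(class 1) via softmax
sigmoid_p1 = 1.0 / (1.0 + np.exp(-(z[:, 1] - z[:, 0])))   # P(class 1) via sigmoid

print(np.allclose(softmax_p1, sigmoid_p1))  # True
```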
0 | 57,743,094 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-08-30T22:08:00.000 | 0 | 1 | 0 | what are the options to implement random search? | 57,733,690 | 0 | python,ray | Generally, you don't need to use ray.tune.suggest.BasicVariantGenerator().
For the other two choices, it's up to what suits your need. tune.randint() is just a thin wrapper around tune.sample_from(lambda spec: np.random.randint(...)). You can do more expressive/conditional searches with the latter, but the former is easier to use (a sketch follows this entry). | So I want to implement random search, but there is no clear-cut example of how to do this. I am confused between the following methods:
tune.randint()
ray.tune.suggest.BasicVariantGenerator()
tune.sample_from(lambda spec: blah blah np.random.choice())
Can someone please explain how and why these methods are the same/different for implementing random search? | 0 | 1 | 60
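A rough sketch of how the two equivalent search-space definitions from the answer above might sit in a tune.run() call; the trainable, the ranges, and the reporting call are placeholders, and exact signatures vary between Ray versions:

```python
import numpy as np
from ray import tune

def trainable(config):
    # Stand-in objective; a real run would train and evaluate a model here.
    tune.report(score=config["a"] + config["b"])

analysis = tune.run(
    trainable,
    num_samples=20,   # 20 independent random draws = random search
    config={
        "a": tune.randint(0, 10),                                      # convenience wrapper
        "b": tune.sample_from(lambda spec: np.random.randint(0, 10)),  # equivalent, more flexible
    },
)
```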
0 | 57,747,241 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-09-01T14:17:00.000 | 0 | 1 | 0 | How to get the day difference between date-column and maximum date of same column or different column in Python? | 57,746,737 | 0 | python,pandas,datetime64 | Although I was busy with this question for 2 days, now I realized that I had a big mistake. Sorry to everyone.
The reason it could not take the maximum value as a date is shown below.
Existing one: t=data_signups[["date_joined"]].max()
Must-be-One: t=data_signups["date_joined"].max()
So it works as below.
data_signups['joined_to_today'] = (data_signups['date_joined'].max() - data_signups['date_joined']).dt.days
data_signups.head(3)
There should be no double brackets. Such a stupid mistake. Thank you. | I am setting up a new column as the day difference in Python (in a Jupyter notebook).
I computed the day difference between the date column and the current day. I also computed the day difference between the date column and a new day created from the current day (current day -/+ input days with the timedelta function).
However, whenever I use max() of the same column or a different column, the day difference column has NaN values. It does not make sense to me; maybe I am missing something about the date type. When I checked the types, all of them seem to be datetime64 (already converted to datetime64 by me).
I thought that the reason was not having a big enough date. However, it happens with any specific date like max(datecolumn)+timedelta(days=i).
t=data_signups[["date_joined"]].max()
date_joined 2019-07-18 07:47:24.963450
dtype: datetime64[ns]
t = t + timedelta(30)
date_joined 2019-08-17 07:47:24.963450
dtype: datetime64[ns]
data_signups['joined_to_today'] = (t - data_signups['date_joined']).dt.days
data_signups.head(2)
shortened...
date_joined_______________// joined_to_today________
2019-05-31 10:52:06.327341 // nan
2019-04-02 09:20:26.520272 // nan
However it worked on Current day task like below.
Currentdate = datetime.datetime.now()
print(Currentdate)
2019-09-01 17:05:48.934362
before_days=int(input("Enter the number of days before today for analysis "))
30
Done
last_day_for_analysis = Currentdate - timedelta(days=before_days)
print(last_day_for_analysis)
2019-08-02 17:05:48.934362
data_signups['joined_to_today'] = (last_day_for_analysis - data_signups['date_joined']).dt.days
data_signups.head(2)
shortened...
date_joined_______________// joined_to_today________
2019-05-31 10:52:06.327341 // 63
2019-04-02 09:20:26.520272 // 122
I expect that there is a date-type problem. However, I could not figure it out since all of them are datetime64. There are no NaN values in the columns.
Thank you for your help. I am a newbie and I try to learn continuously every day. | 0 | 1 | 50
0 | 57,791,481 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-09-04T09:45:00.000 | 0 | 1 | 0 | Creating a python algorithm to train a keras model to predict a large sequence of integers | 57,785,752 | 0 | python,tensorflow,machine-learning,keras | One-hot encoding increases the number of columns according to the number of unique categories in the data set. I think you should check the performance of the model using just the tokenizer, not both, because most of the time the tokenizer alone performs very well. | I'm new to machine learning but I'm trying to apply it to a project I have. I was able to train a model to convert words from one language to another using LSTM layers. Say I use A as input to my model and I get B as output. What I do is:
'original word' -> word embedding -> one-hot encode (A) -> MODEL -> one-hot encoded output (B) -> word embedding -> 'translated word'
This is relatively simple as I'm using a character-level tokenizer to encode the words and that does not require much memory (small sequences, one for each word).
However, I now have to train a model that takes B as input and gives me C (no longer a translation problem). C is later going to be used for different purposes. The difference is that C can have a length of say 315 numbers and each of them can be one of 5514 unique values i.e., shape(215, 5514). Generically what I want to do is, for example:
'banana' -> (some processing, word embedding or one-hot) -> MODEL -> [434, 434, 410, 321, 225, 146, 86, 43, 13, -8, -23, -32, -38, -41, -13, 101, 227, 332, 411, 470, 515, 550, 577, 597, 611, 622, 628, 622, 608, 593, 580, 570, 561, 554, 549, 547, 548, 548, 549, 555, 564, 572, 579, 584, 587, 589, 590, 591, 591, 591, 590, 590, 584, 567, 550, 535, 524, 516, 511, 506, 503, 503, 507, 511, 518, 530, 543, 553, 561, 568, 573, 577, 580, 582, 584, 585, 586, 586, 587, 587, 588, 588, 588, 588, 588, 586]
So the problem is that I don't have enough memory to perform a one-hot encoding of the output sequences. I tried using generators to load each sequence from the disk instead of loading all of them into memory, but it doesn't seem to be working.
Do you have any suggestions as to how I should approach this problem?
EDIT:
The dataset I'm using has the following format: n lines, each line contains 2 columns separated by a tab. The first column is the input word and the second column is the sequence I want to obtain if the input is that word. | 0 | 1 | 201 |
0 | 57,786,810 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2019-09-04T10:38:00.000 | 0 | 2 | 0 | Preprocessing Dataset with Large Categorical Variables | 57,786,660 | 0 | python,pandas,machine-learning,data-analysis,preprocessor | You can check whether your categorical variables are suitable for a Spearman rank correlation, which ranks the categorical variables and calculates the correlation coefficient (a possible sketch follows this entry). However, be careful about collinearity between the categorical variables. | I have tried to find basic answers to this question, but none on Stack Overflow seems to be the best fit.
I have a dataset with 40 columns and 55,000 rows. Only 8 out of these columns are numerical. The remaining 32 are categorical with string values in each.
Now I wish to do an exploratory data analysis for a predictive model and I need to drop certain irrelevant columns that do not show high correlation with the target (variable to predict). But since all of these 32 variables are categorical what can I do to see their relevance with the target variable?
What I am thinking to try:
LabelEncoding all 32 columns then run a Dimensional Reduction via PCA, and then create a predictive model. (If I do this, then how can I clean my data by removing the irrelevant columns that have low corr() with target?)
One Hot Encoding all 32 columns and directly run a predictive model on it.
(If I do this, then the concept of cleaning data is lost totally, and the number of columns will skyrocket and the model will consider all relevant and irrelevant variables for its prediction!)
What should be the best practice in such a situation to make a predictive model in the end where you have many categorical columns? | 0 | 1 | 173 |
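One possible way to run the Spearman check suggested in the first answer; note it is only really meaningful where the categories have a natural order, and the column names and encoding here are invented:

```python
import pandas as pd
from scipy.stats import spearmanr

df = pd.DataFrame({
    "size":   ["small", "medium", "large", "medium", "large", "small"],  # ordinal categorical
    "target": [1.0, 2.1, 3.2, 2.0, 3.0, 1.1],
})

order = {"small": 0, "medium": 1, "large": 2}      # explicit ordinal encoding
rho, p = spearmanr(df["size"].map(order), df["target"])
print(rho, p)
```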
0 | 62,684,892 | 0 | 0 | 0 | 0 | 2 | true | 0 | 2019-09-04T10:38:00.000 | 0 | 2 | 0 | Preprocessing Dataset with Large Categorical Variables | 57,786,660 | 1.2 | python,pandas,machine-learning,data-analysis,preprocessor | You've got to check the correlation. There are two scenarios I can think of:
if the target variable is continuous and independent variable is categorical, you can go with Kendall Tau correlation
if both target and independent variable are categorical, you can go with CramersV correlation
There's a package in Python which can do all of this for you, and you can select only the columns that you need:
pip install ctrl4ai
from ctrl4ai import automl
automl.preprocess(dataframe, learning type)
use help(automl.preprocess) to understand more about the hyper parameters and you can customise your preprocessing in the way you want to..
please check automl.master_correlation which checks correlation based on the approach I explained above. | I have tried to find out basic answers for this question, but none on Stack Overflow seems a best fit.
I have a dataset with 40 columns and 55,000 rows. Only 8 out of these columns are numerical. The remaining 32 are categorical with string values in each.
Now I wish to do an exploratory data analysis for a predictive model and I need to drop certain irrelevant columns that do not show high correlation with the target (variable to predict). But since all of these 32 variables are categorical what can I do to see their relevance with the target variable?
What I am thinking to try:
LabelEncoding all 32 columns then run a Dimensional Reduction via PCA, and then create a predictive model. (If I do this, then how can I clean my data by removing the irrelevant columns that have low corr() with target?)
One Hot Encoding all 32 columns and directly run a predictive model on it.
(If I do this, then the concept of cleaning data is lost totally, and the number of columns will skyrocket and the model will consider all relevant and irrelevant variables for its prediction!)
What should be the best practice in such a situation to make a predictive model in the end where you have many categorical columns? | 0 | 1 | 173 |
0 | 57,807,337 | 0 | 0 | 1 | 0 | 1 | false | 0 | 2019-09-04T13:56:00.000 | 0 | 2 | 0 | Improving sound latencies for video presentation in python | 57,790,029 | 0 | python,audio,video,latency,psychopy | Ultimately, yes, the issues are the same for audio-visual sync whether or not they are embedded in a movie file. By the time the computer plays them they are simply visual images on a graphics card and an audio stream on a sound card. The streams just happen to be bundled into a single (mp4) file. | I am creating an experiment in python. It includes the presentation of many mp4 videos, that include both image and sound. The sound is timed so that it appears at the exact same time as a certain visual image in the video. For the presentation of videos, I am using psychopy, namely the visual.MovieStim3 function.
Because I do not know much about technical sound issues, I am not sure if I should/can take measures to improve possible latencies. I know that different sound settings make a difference for the presentation for sound stimuli alone in python, but is this also the case, if the sound is embedded in the video? And if so, can I improve this by choosing a different sound library?
Thank you for any input.
Juliane | 0 | 1 | 82 |
0 | 57,849,959 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-09-04T22:29:00.000 | 1 | 1 | 0 | Pandas Interpolation Method 'Cubic' - spline or polynomial? | 57,796,327 | 1.2 | python-3.x,pandas,interpolation | In interpolation methods, 'polynomial' generally means that you generate a polynomial with the same number of coefficients as you have data points. So, for 10 data points you would get an order 9 polynomial.
'cubic' generally means piecewise 3rd order polynomials. A sliding window of 4 data points is used to generate these cubic polynomials. | I am trying to understand interpolation in pandas and I don't seem to understand if the method 'cubic' is a polynomial interpolation of order 3 or a spline. Does anybody know what pandas uses behind that method? | 0 | 1 | 180 |
0 | 60,118,279 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-09-05T03:55:00.000 | 0 | 1 | 0 | How to take multi-GPU support to the OpenNMT-py (pytorch)? | 57,798,219 | 0 | python-2.7,pytorch,opennmt | Maybe you can check whether your torch and Python versions fit the OpenNMT requirement.
I remember their torch requirement is 1.0 or 1.2 (1.0 is better). You may have to downgrade your latest version of torch. Hope that works. | I used Python 2.7 to run PyTorch with GPU support. I used this command to train the dataset using multiple GPUs.
Can someone please tell me how can I fix this error with PyTorch in OpenNMT-py or is there a way to take pytorch support for multi-GPU using python 2.7?
Here is the command that I tried.
CUDA_VISIBLE_DEVICES=1,2
python train.py -data data/demo -save_model demo-model -world_size 2 -gpu_ranks 0 1
This is the error:
Traceback (most recent call last):
File "train.py", line 200, in
main(opt)
File "train.py", line 60, in main
mp = torch.multiprocessing.get_context('spawn')
AttributeError: 'module' object has no attribute 'get_context' | 0 | 1 | 166 |
0 | 57,807,062 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2019-09-05T12:59:00.000 | 0 | 1 | 0 | cannot import name '_pywrap_utils' from 'tensorflow.python' | 57,806,063 | 0 | python,python-3.x,tensorflow,tensorrt | Try pip3 install pywrap and pip3 install tensorflow; pywrap utils should be included with tensorflow. If it is not found, that means TF was not installed correctly. | I am working on pose estimation using OpenPose. For that I installed TensorFlow GPU and installed all the requirements including the CUDA development kit.
While running the Python script:
C:\Users\abhi\Anaconda3\lib\site-packages\tensorflow\python\ops\resource_variable_ops.py, I encountered the following error:
ImportError: cannot import name '_pywrap_utils' from
'tensorflow.python'
(C:\Users\abhi\Anaconda3\lib\site-packages\tensorflow\python__init__.py)
I tried searching for _pywrap_utils file but there was no such file. | 0 | 1 | 1,170 |
0 | 57,814,580 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-09-05T19:56:00.000 | 1 | 2 | 0 | Time Series model for predicting online student grades? | 57,812,231 | 0.099668 | python,pandas,scikit-learn,time-series,forecasting | Look at the fbprophet module. This can separate a time series into components such as trend, seasonality and noise. The module was originally developed for web traffic.
You can incorporate this into your regression model in a number of ways by constructing additional variables, for example:
Ratio of trend at start of term to end of term
The magnitude of the weekly seasonal pattern
The variance of the white noise series.
etc.
Not to say any of these constructed variables will be significant in your model, but it is the type of thing I would try. You could feasibly construct some of these variables without any complex time series model at all; for instance, the ratio of time spent watching videos at the start of the course vs the end of the course could be calculated in Excel (a small fbprophet sketch follows this entry). | I have a dataset with daily activities for online students (time spent, videos watched etc). Based on this data I want to predict if each student will pass or not. Until this point I have been treating it as a classification problem, training a model for each week with the student activity to date and their final outcomes.
This model works pretty well, but it ignores behavior over time. I am interested in doing some kind of time series analysis where the model takes into account all datapoints for each student over time to make the final prediction.
The time series models I've been looking at aim to forecast a specific metric for a population (demand, revenue etc) at future time steps. In my case I am less interested in the aggregated timestep metrics and more interested in the final outcome by individual.
In other words, mine is more of a classification or regression problem, but I am hoping to be able to leverage each individual student's usage patterns over time for this. Is there a way to combine the two? Basically, build a better classifier that understands patterns over time. | 0 | 1 | 217
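A rough sketch of the fbprophet decomposition idea from the answer above; the ds/y column names are the library's convention, while the toy series and the derived features (mirroring the bullet points) are only examples:

```python
import pandas as pd
from fbprophet import Prophet   # the fbprophet package of that era

# Daily activity for one student: ds = date, y = e.g. minutes spent.
df = pd.DataFrame({"ds": pd.date_range("2019-01-01", periods=60, freq="D"),
                   "y":  [i % 7 + i * 0.1 for i in range(60)]})

m = Prophet(weekly_seasonality=True, yearly_seasonality=False, daily_seasonality=False)
m.fit(df)
decomp = m.predict(df)   # has 'trend', 'weekly', 'yhat', ... columns

features = {
    "trend_start_end_ratio": decomp["trend"].iloc[0] / decomp["trend"].iloc[-1],
    "weekly_amplitude":      decomp["weekly"].max() - decomp["weekly"].min(),
    "residual_variance":     (df["y"] - decomp["yhat"]).var(),
}
print(features)   # candidate per-student inputs for the pass/fail classifier
```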
0 | 58,536,847 | 0 | 0 | 0 | 0 | 1 | false | 3 | 2019-09-06T16:27:00.000 | 2 | 1 | 0 | Tensorflow train.py throws Windows fatal exception | 57,825,630 | 0.379949 | python,tensorflow,machine-learning,tensorflow-datasets | I decided to share what solved my problem; it might help others. I reinstalled Tensorflow itself in a virtual environment and upgraded it to version 1.8 (it requires Python 3.6 and is not compatible with higher versions; mine is 3.6.5 in particular). Make sure your PYTHONPATH variable is pointing to the right folder. Also, on Windows, this error message can occur when you use generate_tfrecord.py. I ran into it many times, and it usually happened because I had image(s) that Tensorflow did not like (I'm not completely sure about the cause); at first, try removing .webp, .gif, etc. (non-.png/.jpg) files. I even got the exception after renaming an image downloaded from the internet, and TF would not accept it anymore. | I've been working with Tensorflow for quite a while now, had some issues, but they never remained unresolved. Today I wanted to train a new model, when things got interesting. At first, the training stopped after one step without any reason. It happened before, and opening a new cmd window solved it. Not this time though. After I tried again, train.py started to throw this:
Windows fatal exception: access violation
Current thread 0x000018d4 (most recent call first):
File
"C:\windows\system32\venv\lib\site-packages\tensorflow\python\lib\io\file_io.py",
line 84 in _preread_check File
"C:\windows\system32\venv\lib\site-packages\tensorflow\python\lib\io\file_io.py",
line 122 in read File
"C:\Users\xx\source\TensorFlow\models\research\object_detection\utils\label_map_util.py",
line 133 in load_labelmap File
"C:\Users\xx\source\TensorFlow\models\research\object_detection\utils\label_map_util.py",
line 164 in get_label_map_dict File
"C:\Users\xx\source\TensorFlow\models\research\object_detection\data_decoders\tf_example_decoder.py",
line 59 in init File
"C:\Users\xx\source\TensorFlow\models\research\object_detection\data_decoders\tf_example_decoder.py",
line 314 in init File
"C:\Users\xx\source\TensorFlow\models\research\object_detection\builders\dataset_builder.py",
line 130 in build File "train.py", line 121 in get_next File
"C:\Users\xx\source\TensorFlow\models\research\object_detection\legacy\trainer.py",
line 59 in create_input_queue File
"C:\Users\xx\source\TensorFlow\models\research\object_detection\legacy\trainer.py",
line 280 in train File "train.py", line 180 in main File
"C:\windows\system32\venv\lib\site-packages\tensorflow\python\util\deprecation.py",
line 324 in new_func File "C:\Program Files (x86)\Microsoft Visual
Studio\Shared\Python37_64\lib\site-packages\absl\app.py", line 251 in
_run_main File "C:\Program Files (x86)\Microsoft Visual Studio\Shared\Python37_64\lib\site-packages\absl\app.py", line 300 in
run File
"C:\windows\system32\venv\lib\site-packages\tensorflow\python\platform\app.py",
line 40 in run File "train.py", line 184 in
The last time I saw this issue, it was because I was using data downloaded from the internet, and there was one particular picture that TF did not like, but removing that one from the dataset solved the issue. I was wondering if this was the case, but no. I couldn't start it with previously tried datasets either... I decided to reinstall TensorFlow and set up a new virtual environment, but still nothing. I've been looking for hours at what the problem could be, both on the internet and on my own trying different things, but nothing worked; the same exception each time. Did anybody encounter anything similar? | 0 | 1 | 3,049
0 | 57,835,318 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-09-06T17:54:00.000 | 1 | 1 | 0 | Calculate camera matrix with KNOWN parameters (Python)? | 57,826,605 | 1.2 | python,opencv,camera,computer-vision,projection-matrix | The projection matrix is simply a 3x4 matrix whose [0:3,0:3] left square is occupied by the product K.dot(R) of the camera intrinsic calibration matrix K and its camera-from-world rotation matrix R, and the last column is K.dot(t), where t is the camera-from-world translation. To clarify, R is the matrix that brings into camera coordinates a vector decomposed in world coordinates, and t is the vector whose tail is at the camera center, and whose tip is at the world origin.
The OpenCV calibration routines produce the camera orientations as rotation vectors, not matrices, but you can use cv.Rodrigues to convert them. | OpenCV provides methods to calibrate a camera. I want to know if it also has a way to simply generate a view projection matrix if and when the parameters are known.
i.e I know the camera position, rotation, up, FOV... and whatever else is needed, then call MagicOpenCVCamera(parameters) and obtain a 4x4 transformation matrix.
I have searched this up but I can only find information about calibrating the camera, not about creating one if you already know the parameters. | 0 | 1 | 2,898 |
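A short sketch of the construction described in the answer above; K, the rotation vector, and the translation are example values that would normally come from your own known parameters:

```python
import numpy as np
import cv2

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0,   0.0,   1.0]])        # intrinsic calibration matrix
rvec = np.array([0.01, 0.02, 0.03])        # camera-from-world rotation (Rodrigues vector)
tvec = np.array([[0.1], [0.0], [2.0]])     # camera-from-world translation

R, _ = cv2.Rodrigues(rvec)                 # rotation vector -> 3x3 matrix
P = K @ np.hstack([R, tvec])               # 3x4 projection matrix [K.R | K.t]
print(P)
```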
0 | 57,833,508 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2019-09-07T11:50:00.000 | 1 | 1 | 0 | LightGBM unexpected behaviour outside of jupyter | 57,833,411 | 0.197375 | python,jupyter-notebook,lightgbm | It can't be a Jupyter problem since Jupyter is just an interface to communicate with Python. The problem could be that you are using a different Python environment and a different version of LightGBM... Check import lightgbm as lgb and lgb.__version__ in both Jupyter and your Python terminal and make sure they are the same (or check whether there have been some major changes between these versions). | I have this strange bug when I'm using a LightGBM model to calculate some predictions.
I trained a LightGBM model inside of jupyter and dumped it into a file using pickle. This model is used in an external class.
My problem is when I call my prediction function from this external class outside of jupyter it always predicts an output of 0.5 (on all rows). When I use the exact same class inside of jupyter I get the expected output. In both cases the exact same model is used with the exact same data.
How can this behavior be explained, and how can I get the same results outside of Jupyter? Does it have something to do with the fact that I trained the model inside of Jupyter? (I can't imagine why it would, but at the moment I have no clue where this bug is coming from.)
Edit: Used versions:
Both times the same lgb version is used (2.2.3), I also checked the python version which are equal (3.6.8) and all system paths (sys.path output). The paths are equal except of '/home/xxx/.local/lib/python3.6/site-packages/IPython/extensions' and '/home/xxx/.ipython'.
Edit 2: I copied the code I used inside of my jupyter and ran it as a normal python file. The model made this way works now inside of jupyter and outside of it. I still wonder why this bug accrued. | 0 | 1 | 100 |
0 | 57,839,165 | 0 | 0 | 0 | 0 | 1 | true | 2 | 2019-09-08T00:50:00.000 | 2 | 1 | 0 | Elastic Beanstalk won't recognize absolute path to file, returns FileNotFoundError | 57,838,423 | 1.2 | python,flask,amazon-elastic-beanstalk | Problem solved! My process was being carried out by rq workers. The rq workers are running from my local machine and I did not realize that it would be the workers looking for the file path. I figured this out by printing os.getcwd() and noticing that the current working directory was still my local path. So, I threw an exception on a FileNotFoundError for the workers to use a local path instead as necessary. | I am running a Flask application using AWS Elastic Beanstalk. The application deploys successfully, but there is a task in my code where I use pandas read_csv to pull data out of a csv file. The code line is:
form1 = pd.read_csv('/opt/python/current/app/application/model/static2/form1.csv')
When I try to execute that task in the application, I receive a FileNotFoundError:
FileNotFoundError: [Errno 2] File b'/opt/python/current/app/application/model/static2/form1.csv' does not exist: b'/opt/python/current/app/application/model/static2/form1.csv'
The problem does not occur when I execute the program locally, but only if I use the full, absolute path to the file. This is due to the way my dependencies are set up.
When I first deployed the application, I received errors because I was still using the local path to the file, and so I changed it to the one you see above, which is what I think is the absolute path to the file uploaded on Beanstalk. I think this because I copied it from a static image that I was having an issue with earlier.
I should note that I cannot verify the absolute path because I am unable to remote into Elastic Beanstalk using EB CLI. I have been trying to get EB CLI set up on my machine for days and repeatedly failed, I think because of weird version and file issues on my machine. So I can't obtain information or fix the problem using command line.
So, is the path that I am using above consistent with an EB absolute path? Can this be solved by adding to my static file configurations? If so, how? Is there anything I could add to the .config file?
Any help is greatly appreciated. | 1 | 1 | 432 |
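A small sketch of the fallback described in the accepted answer, using the deployed path from the question and a hypothetical relative path for local runs:

import os
import pandas as pd

print(os.getcwd())  # on the rq worker this printed the local working directory, not the EB path

DEPLOYED_PATH = "/opt/python/current/app/application/model/static2/form1.csv"
LOCAL_PATH = "application/model/static2/form1.csv"   # hypothetical path used on the local machine

try:
    form1 = pd.read_csv(DEPLOYED_PATH)    # works on the Elastic Beanstalk instance
except FileNotFoundError:
    form1 = pd.read_csv(LOCAL_PATH)       # workers running locally fall back to the local copy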
0 | 57,839,378 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-09-08T00:59:00.000 | 1 | 1 | 0 | Is it feasible to perform feature extraction in a Flutter app? | 57,838,465 | 1.2 | python,android,tensorflow,machine-learning,flutter | If you are targeting mobile, check the integration with “native” code. E.g. look for a java/kotlin library that can do the same on android. And a swift/objC one for iOS.
Then, you could wrap that functionality in a platform-specific module. | I am attempting to implement an audio classifier in my mobile app. When training the data, I used melspectrogram extracted from the raw audio. I am using Tensorflow Lite to integrate the model into the app.
The problem is that I need to perform the same feature extraction on the input audio from the mic before passing it into the tflite model. Python's Librosa library implements all of the functions that I need. My initial idea was to run Python in Flutter (there is the starflut Flutter package but I couldn't get it to work).
Am I going about this in the wrong way? If so, what should I be doing? I could potentially rewrite the Librosa functions in dart lang, but I don't particularly want to do that. | 0 | 1 | 552 |
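For reference, a minimal sketch of the librosa feature extraction the question describes (the file name and mel-band count are assumptions); whatever replaces it on-device has to reproduce these steps:

import numpy as np
import librosa

y, sr = librosa.load("clip.wav", sr=22050)                      # hypothetical audio clip
mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=64)     # mel spectrogram used for training
log_mel = librosa.power_to_db(mel, ref=np.max)                  # log-scaled version, common model input
print(log_mel.shape)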
0 | 57,859,057 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2019-09-09T00:45:00.000 | 0 | 1 | 0 | How can I change a column of a df to the index of the df? | 57,846,743 | 0 | python-3.x,date,dataframe,indexing | Using set_index we can set a column as the index of the df:
df = df.set_index('Date') | I have a "Date" column in my df and I wish to use it as the index of the df
Date values in 'Date' column are in the correct format as per DateTime (yyyy-mm-dd) | 0 | 1 | 40 |
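A short, self-contained sketch of the whole pattern (hypothetical data), including the to_datetime conversion so the index really is a DatetimeIndex:

import pandas as pd

df = pd.DataFrame({"Date": ["2019-01-01", "2019-01-02"], "value": [10, 20]})
df["Date"] = pd.to_datetime(df["Date"])   # make sure the column is a proper datetime dtype
df = df.set_index("Date")                 # or df.set_index("Date", inplace=True)
print(df.index)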
0 | 57,863,047 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-09-09T04:58:00.000 | 0 | 1 | 0 | Getting the parameters from lmfit | 57,848,079 | 0 | python,lmfit | Including a complete, minimal example that shows what you are doing is always a good idea. In addition, your subject is not a good reflection of your question. You have now asked enough questions about lmfit here on SO to know better.
You probably want to use ModelResult.eval() to evaluate the ModelResult (probably your out) for a given independent variable. If you need more help, ask an answerable question that might be useful to others... | I am doing a fit in python with lmfit and after I define my model (i.e. the function I want to use for the fit) I do out = model.fit(...) and in order to visualize the result I do plt.plot(x, out.best_fit). This works fine; however, it computes the value of the function only at the points used for the fit. How can I apply the parameters of the fit to any x vector (to get a smoother curve), something like x_1 = np.arange(xi,xf,i), plt.plot(x_1,out.best_fit(x_1))? Thank you! | 0 | 1 | 114
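A self-contained sketch of ModelResult.eval() on a denser grid, using lmfit's built-in GaussianModel as a stand-in for the question's model:

import numpy as np
import matplotlib.pyplot as plt
from lmfit.models import GaussianModel

x = np.linspace(-5, 5, 25)
y = np.exp(-x**2 / 2) + 0.05 * np.random.randn(x.size)   # noisy synthetic data

model = GaussianModel()
out = model.fit(y, model.guess(y, x=x), x=x)

x_dense = np.linspace(-5, 5, 500)                        # any x vector, not just the fitted points
plt.plot(x, y, "o", x_dense, out.eval(x=x_dense), "-")   # best-fit parameters on the finer grid
plt.show()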
0 | 66,486,493 | 0 | 0 | 0 | 0 | 1 | false | 38 | 2019-09-09T20:26:00.000 | -1 | 2 | 0 | pandas pd.options.display.max_rows not working as expected | 57,860,775 | -0.099668 | python,pandas | min_rows controls how many rows are displayed from the top (head) and from the bottom (tail); the count is split evenly between them, even if you set an odd number. If you only want a set number of rows without reading the whole file into memory,
another way is to use the nrows argument,
e.g. results = pd.read_csv('ex6.csv', nrows=5) # read only the first 5 rows (0 - 4)
If the dataframe has about 100 rows and you want to display only the first 5 rows from the top (no tail), use nrows. | I'm using pandas 0.25.1 in Jupyter Lab and the maximum number of rows I can display is 10, regardless of what pd.options.display.max_rows is set to.
However, if pd.options.display.max_rows is set to less than 10 it takes effect and if pd.options.display.max_rows = None then all rows show.
Any idea how I can get a pd.options.display.max_rows of more than 10 to take effect? | 0 | 1 | 32,532 |
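A short sketch of the option pair that matters here: in pandas 0.25 a truncated frame is shown with display.min_rows (default 10), so raising max_rows alone appears to have no effect:

import pandas as pd

pd.set_option("display.max_rows", 100)   # how many rows before truncation kicks in
pd.set_option("display.min_rows", 100)   # how many rows the truncated view actually shows
# setting display.min_rows to None should make the truncated view follow max_rows instead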
0 | 57,863,569 | 0 | 0 | 0 | 0 | 2 | false | 1 | 2019-09-10T02:59:00.000 | 1 | 3 | 0 | DataFrame each column multiply param then sum | 57,863,464 | 0.066568 | python,pandas,dataframe | I think df * param.to_list() is good. | I have a Dataframe whose columns are ['a','b','c'] and a Series param containing three values, which are the params of the Dataframe. The param.index is ['a','b','c']. I want to compute df['a'] * param['a'] + df['b'] * param['b'] + df['c'] * param['c']. Because there are too many columns and params in my code, is there any concise and elegant code that can do this? | 0 | 1 | 41
0 | 57,864,150 | 0 | 0 | 0 | 0 | 2 | false | 1 | 2019-09-10T02:59:00.000 | 1 | 3 | 0 | DataFrame each column multiply param then sum | 57,863,464 | 0.066568 | python,pandas,dataframe | df*param is enough; it will align automatically according to the index.
You can change the series index to ['b','c','a'] for testing. | I have a Dataframe whose columns are ['a','b','c'] and a Series param containing three values, which are the params of the Dataframe. The param.index is ['a','b','c']. I want to compute df['a'] * param['a'] + df['b'] * param['b'] + df['c'] * param['c']. Because there are too many columns and params in my code, is there any concise and elegant code that can do this? | 0 | 1 | 41
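A tiny sketch of the full expression from the question: the multiplication aligns param on the column names, and summing across columns gives the combined value per row:

import pandas as pd

df = pd.DataFrame({"a": [1, 2], "b": [3, 4], "c": [5, 6]})
param = pd.Series({"a": 0.5, "b": 2.0, "c": -1.0})

combined = (df * param).sum(axis=1)   # df['a']*param['a'] + df['b']*param['b'] + df['c']*param['c']
print(combined)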
0 | 57,882,320 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-09-10T10:29:00.000 | 1 | 1 | 0 | Uploading a file from client to server in python bokeh | 57,868,893 | 1.2 | javascript,python,webserver,bokeh,bokehjs | I'm not sure where you are getting your information. The FileInput widget added in Bokeh 1.3.0 can upload any file the user chooses, not just JSON. | We have set up a bokeh server in our institute, which works properly. We also have a python-based code to analyse fMRI data which at the moment uses matplotlib to plot and save. But I want to transfer the code to bokeh server and allow everybody to upload files into the server from the client and when the analysis is done in the server, save the output plots in their local HDD. This transfer file procedure seems to be lacking in bokeh atm. I saw a new feature recently added in github to upload json files, but my problem is fMRI files come in various formats, and asking (not necessarily tech-savvy) users to convert the files into a certain format beats the purpose. Also, I do not know any JS or the like, hence I do not know what solutions people usually use for such web-based applications.
If anybody has any solutions to get around this issue, I'd be happy to hear them. Even if it is a solution independent of bokeh (which would mean users need to open a separate page to upload the files, a page to run the analysis, and a page to save the output), please let me know. It won't be ideal, but at least better than no solution, which is the case in bokeh right now. Thanks! | 1 | 1 | 184
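A minimal sketch of the FileInput widget mentioned in the answer (Bokeh 1.3+); the uploaded value arrives base64-encoded, and the CSV reader here is just a placeholder for whatever fMRI format parser you actually need:

import base64
import io

import pandas as pd
from bokeh.io import curdoc
from bokeh.models import FileInput

file_input = FileInput(accept=".csv")          # widen or restrict the accepted extensions as needed

def upload_handler(attr, old, new):
    raw = base64.b64decode(new)                # FileInput.value holds the base64-encoded file body
    df = pd.read_csv(io.BytesIO(raw))          # placeholder: replace with your own fMRI reader
    print("uploaded frame:", df.shape)

file_input.on_change("value", upload_handler)
curdoc().add_root(file_input)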
0 | 62,965,796 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-09-10T16:13:00.000 | 0 | 1 | 0 | How to change location of .flair directory | 57,874,651 | 0 | python,python-3.x | When importing datasets into flair, one can specify a custom path to import from. Copy the flair datasets to a folder you choose on your larger harddrive and then specify that path when loading a dataset.
flair.datasets.WASSA_FEAR(data_folder="E:/flair_datasets/") | I'm currently using flair for sentiment analysis and its datasets. The datasets for flair are quite large in size and are currently installed on my quite small SSD in my user folder. Is there any way that I can move the .flair folder from my user folder on my SSD to my other drive without breaking anything?
Thanks in advance | 0 | 1 | 90 |
0 | 57,887,051 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-09-10T21:47:00.000 | 1 | 1 | 0 | Error with tf.nn.sparse_softmax_cross_entropy_with_logits | 57,878,623 | 0.197375 | python,tensorflow,neural-network,entropy | I don't understand how having a shape [50,1] is not the same as being 1D.
While you can reshape a [50, 1] 2D matrix into a [50] 1D matrix just with a simple squeeze, Tensorflow will never do that automatically.
The only heuristic the tf.nn.sparse_softmax_cross_entropy_with_logits uses to check if the input shape is correct is to check the number of dimensions it has. If it's not 1D, it fails without trying other heuristics like checking if the input could be squeezed. This is a security feature. | I am using tf.nn.sparse_softmax_cross_entropy_with_logits and when I pass through the labels and logits I get the following error
tensorflow.python.framework.errors_impl.InvalidArgumentError: labels must be 1-D, but got shape [50,1]
I don't understand how having a shape [50,1] is not the same as being 1D. | 0 | 1 | 108
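A small sketch (TF 2.x style) of the squeeze the answer describes, so the labels become the 1-D tensor the op expects:

import tensorflow as tf

labels_2d = tf.constant([[1], [0], [3]], dtype=tf.int64)   # shape [batch, 1], what caused the error
labels_1d = tf.squeeze(labels_2d, axis=-1)                 # shape [batch]; tf.reshape(labels_2d, [-1]) works too
logits = tf.random.normal([3, 5])
loss = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=labels_1d, logits=logits)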
0 | 57,888,733 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-09-11T00:46:00.000 | 0 | 1 | 0 | Using tensorflow object detection for either or detection | 57,879,708 | 1.2 | python-3.x,tensorflow,object-detection | In case you're only expecting input images of tiles, either with defects or not, you don't need a class for no defect.
The API adds a background class for everything which is not the other classes.
So you simply need to state one class - defect - and tiles which are not detected as such are not defective.
So in your training set, simply give bounding boxes for defects, and no bounding box in case of no defect, and then your model should learn to detect the defects as mentioned above. | I have used Tensorflow object detection for quite a while now. I am more of a user; I don't really know how it works. I am wondering, is it possible to train it to recognize that an object is something and not something? For example, I want to detect cracks on tiles. Can I use object detection to do so, where I show an image of a tile and it can tell me if there is a crack (and also show the location), or tell me if there is no crack on the tile?
I have tried to train using pictures with and without defects, using 2 classes (1 for defect and 1 for no defect). But the results keep showing both (if the picture has a defect) in 1 picture. Is there a way to show only the one with the defect?
Basically I would like to do defect checking. This is a simplistic case of 1 defect, but the actual case will have a few defects.
Thank you. | 0 | 1 | 92 |
0 | 57,907,511 | 0 | 0 | 0 | 0 | 1 | false | 3 | 2019-09-12T09:18:00.000 | 4 | 2 | 0 | Interpreting a sigmoid result as probability in neural networks | 57,903,518 | 0.379949 | python,tensorflow,sigmoid | As pointed out by Teja, the short answer is no, however, depending on the loss you use, it may be closer to truth than you may think.
Imagine you try to train your network to differentiate numbers into two arbitrary categories, beautiful and ugly. Say your input numbers are either 0 or 1, and 0s have a 0.2 probability of being labelled ugly whereas 1s have a 0.6 probability of being ugly.
Imagine that your neural network takes as inputs 0s and 1s, passes them into some layers, and ends in a softmax function. If your loss is binary cross-entropy, then the optimal solution for your network is to output 0.2 when it sees a 0 in input and 0.6 when it sees a 1 in input (this is a property of the cross-entropy which is minimized when you output the true probabilities of each label). Therefore, you can interpret these numbers as probabilities.
Of course, real world examples are not that easy and are generally deterministic so the interpretation is a little bit tricky. However, I believe that it is not entirely false to think of your results as probabilities as long as you use the cross-entropy as a loss.
I'm sorry, this answer is not black or white, but reality is sometimes complex ;) | I've created a neural network with a sigmoid activation function in the last layer, so I get results between 0 and 1. I want to classify things in 2 classes, so I check "is the number > 0.5, then class 1 else class 0". All basic.
However, I would like to say "the probability of it being in class 0 is x and in class 1 is y".
How can I do this?
Does a number like 0.73 tell me it's 73% sure to be in class 1? And then 1-0.73 = 0.27 so 27% in class 0?
When it's 0.27, does that mean it's 27% sure in class 0, 73% in class 1? Makes no sense.
Should I work with the 0.5 and look "how far away from the center is the number, and then that's the percentage"?
Or am I misunderstanding the result of the NN? | 0 | 1 | 1,575 |
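A tiny illustration of the reading the answer endorses when the network is trained with binary cross-entropy (0.73 is just an example output):

p = 0.73                      # sigmoid output for one sample
prob_class_1 = p              # read as P(class 1)
prob_class_0 = 1 - p          # and P(class 0) = 0.27
predicted_class = int(p > 0.5)
print(predicted_class, prob_class_1, prob_class_0)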
0 | 57,905,788 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-09-12T09:51:00.000 | 0 | 2 | 0 | Is it possible to explain sklearn isolation forest prediction? | 57,904,088 | 1.2 | python,unsupervised-learning,anomaly-detection | You are creating an ensemble of trees, so the path of a given instance will be different for each tree in the ensemble. To detect an anomaly the isolation forest takes the average path length (number of splits to isolate a sample) over all the trees for a given instance and uses this to determine if it is an anomaly (shorter average path lengths indicate anomalies). As you are looking at the average of a set of trees, there is no 'exact' path.
To my knowledge, your best bet would be to use something like SHAP, as you mentioned, but you could also train only a few estimators and look at the path taken for a given instance in these trees to get an insight into the decisions. | I'm using the isolation forest algorithm from sklearn to do some unsupervised anomaly detection.
I need to explain the predictions and I was wondering if there is any way to get the paths that lead to the decision for each sample.
I usually use SHAP or ELI5 but I'd like to do something more custom, so I need the exact path. | 0 | 1 | 2,005
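A hedged sketch of the per-tree inspection the answer suggests, using sklearn's IsolationForest on synthetic data; it assumes the default max_features so every tree was fit on all columns and decision_path() can take the raw sample:

import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.RandomState(0)
X = rng.normal(size=(200, 3))
iso = IsolationForest(n_estimators=10, random_state=0).fit(X)

sample = X[:1]
for i, tree in enumerate(iso.estimators_):
    node_indicator = tree.decision_path(sample)   # sparse (1, n_nodes) indicator matrix
    path = node_indicator.indices                 # node ids visited from root to leaf
    print(f"tree {i}: path of {len(path)} nodes -> {path}")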
0 | 57,910,171 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-09-12T15:26:00.000 | 0 | 2 | 0 | Best practice convert dataframe to dictionary? | 57,909,996 | 0 | python,pandas,dataframe,dictionary | I think it's easier to plot from pandas than from a dict. Try using df.plot(). You can subset your df as required to only plot the information you're interested in. | I'm trying to take x number of columns from an existing df and convert them into a dictionary.
My questions are:
Is the method shown below considered good practice? I think it's repetitive and I'm sure the code could be more elegant.
Should I convert from df to dictionary if my idea is to build a plot? Or is it an unnecessary step?
I've tried the code below:
familiarity_dic = familiarity[{'Question':'Question','SCORE':'SCORE'}]
familiarity_dic
Expected result is correct but I want to know if it's the best practice for Pandas.
Question SCORE
36 Invesco 100
35 Schroders 96
34 Fidelity 96
31 M&G 95
0 BlackRock 95 | 0 | 1 | 52 |
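A short sketch of the df.plot() route the answer recommends, rebuilding the frame from the values shown in the question (column names taken from there):

import pandas as pd
import matplotlib.pyplot as plt

familiarity = pd.DataFrame({"Question": ["Invesco", "Schroders", "Fidelity", "M&G", "BlackRock"],
                            "SCORE": [100, 96, 96, 95, 95]})
familiarity.plot(x="Question", y="SCORE", kind="barh", legend=False)   # no dict conversion needed
plt.show()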
0 | 57,921,590 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2019-09-13T09:40:00.000 | 0 | 1 | 0 | Storing/Loading huge numpy array with less memory | 57,921,092 | 0 | python,numpy,ram | I think that you can do a lot of things.
First of all you can change the data format to be stored in different ways:
in a file in your secondary memory to be read iteratively (dumping a python object on secondary memory is not efficient. You need to find a better format. For example a text file in which the lines are the rows of the matrix)
or in a database. The point is always to make the data readable in an iterative manner.
Second, and most important, you need to change your algorithm. If you cannot fit all the data in memory, you need to use other kinds of methods, in which you use batches of data instead of all the data.
For machine learning, for example, there are a lot of methods in which you do incremental updates of the model with batches of data.
Third, there are methods with which you can reduce the dimensionality of your training set, for example PCA, feature selection, etc. | I have a numpy array of shape (20000, 600, 768). I need to store it, so later I can load it back into my code.
The main problem is memory usage when you load it back.
I have just 16GB RAM.
For example, I tried pickle. When it loads it all I almost have no memory left to do anything else. Especially to train the model.
I tried writing and loading back with hdf5 (h5py), just a small piece (1000, 600, 768). But it seems like it "eats" even more memory.
Also tried csv.. That's just a no-no. Takes TOO much time to write data in.
I would be grateful for any suggestions on how I could store my array so that when I load it back it doesn't take that much memory.
P.S. The data I store is vector representation of texts which I later use for training my model. | 0 | 1 | 947 |
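One concrete option along the lines of the answer: a hedged sketch using numpy's own binary format plus memory-mapping on reload, so only the slices you touch are pulled into RAM (array sizes shrunk here just for the demo):

import numpy as np

arr = np.random.rand(100, 600, 768).astype(np.float32)   # stand-in for the real (20000, 600, 768) data
np.save("embeddings.npy", arr)                           # plain .npy file on disk

batch = np.load("embeddings.npy", mmap_mode="r")[0:32]   # memory-mapped: reads only this slice
print(batch.shape)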
0 | 57,937,143 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2019-09-14T15:55:00.000 | 2 | 2 | 0 | Is it possible to validate a deep learning model by training small data subset? | 57,937,097 | 0.197375 | python,tensorflow,keras,resnet,vgg-net | Short answer: No, because Deep Learning works well on huge amount of data.
Long answer: No. The problem is that learning only one face could overfit your model to that specific face, without learning features not present in your examples. For example, the model may have learned to detect your face thanks to a specific, very simple pattern in that face (that's called overfitting).
To give a stupidly simple example, your model has learned to detect that face because there is a mole on your right cheek, and it has learned to identify it.
To make your model perform well in the general case, you need a huge amount of data, making your model capable of learning different kinds of patterns.
Suggestion:
Because the training of a deep neural network is a time consuming task, usually one does not train one single neural network at a time; instead, many neural networks are trained in parallel, with different hyperparameters (layers, nodes, activation functions, learning rate, etc.).
Edit because of the discussion below:
If your dataset is small, it is quite impossible to get good performance in the general case, because the neural network will learn the easiest pattern, which is usually not the general/better one.
By adding data you force the neural network to extract good patterns that work in the general case.
It's a tradeoff, but usually training on a small dataset will not lead to a good classifier for the general case.
edit2: rephrasing everything to make it clearer. Good performance on a small dataset doesn't tell you whether your model, when trained on the whole dataset, will be a good model. That's why you train on
the majority of your dataset and test/validate on a smaller dataset. | I am looking to train a large model (resnet or vgg) for face identification.
Is it a valid strategy to train on a few faces (1 to 3) to validate a model?
In other words - if a model learns one face well - is it evidence that the model is good for the task?
The point here is that I don't want to spend a week of expensive GPU time only to find out that my model is no good, my data has errors, or my TF code has a bug. | 0 | 1 | 221
0 | 57,948,589 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-09-15T21:33:00.000 | 0 | 3 | 0 | structured numpy ndarray, how to get values | 57,948,331 | 0 | python,numpy,dictionary,key-value,numpy-ndarray | Just found out that I can use la.tolist(); it returns a dictionary, somehow, when I wanted a list, but from there on I was able to solve my problem. | I have a structured numpy ndarray la = {'val1':0,'val2':1} and I would like to return the vals using the 0 and 1 as keys, so I wish to return val1 when I have 0 and val2 when I have 1. This should have been straightforward, however my attempts have failed, as I am not familiar with this structure.
How do I return only the corresponding val, or an array of all vals so that I can read in order? | 0 | 1 | 2,969 |
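A hedged sketch of one way to read a structured array like this, assuming la is a 0-d structured scalar with fields val1 and val2; dtype.names gives the field names, from which a value-to-name lookup can be built:

import numpy as np

la = np.array((0, 1), dtype=[("val1", "i4"), ("val2", "i4")])    # stand-in for the array in the question
value_to_field = {int(la[name]): name for name in la.dtype.names}
print(value_to_field[0], value_to_field[1])                      # -> val1 val2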
0 | 57,962,750 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2019-09-16T11:58:00.000 | 1 | 1 | 0 | Is there any alternative for pandas.DataFrame function for Python? | 57,956,382 | 1.2 | python-3.x,pandas,numpy,kivy,buildozer | There are several alternatives similar to pandas.DataFrame.
As a database, you likely know SQLite (in python see SQLAlchemy and sqlite3).
For raw tables (i.e., pure matrix-like data) there is Numpy (numpy.ndarray); it lacks some database functionalities compared to Pandas, but it is fast and you could easily implement what you need. You can find many comparisons between Pandas and Numpy.
Finally, depending on your needs, some simple python dictionaries, maybe OrderedDict. | I am developing an application for Android with Kivy and packaging it with Buildozer. The core of my application uses pandas, and especially the DataFrame function. It failed when I tried to package it with Buildozer even though I had put pandas in the requirements. So I want to use another library that can be used with Buildozer. Does anyone know of a good alternative to the pandas.DataFrame function, with the numpy library for example or another one?
Thanks a lot for your help. :) | 0 | 1 | 2,855 |
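A tiny sketch of the lightest-weight alternative mentioned in the answer - plain dictionaries of numpy arrays - which packages far more easily than pandas as long as numpy itself builds:

import numpy as np

table = {"name": np.array(["a", "b", "c"]),
         "score": np.array([1.0, 2.5, 0.3])}
mask = table["score"] > 1.0          # boolean filtering, DataFrame-style
print(table["name"][mask])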
0 | 58,013,231 | 1 | 0 | 0 | 0 | 1 | true | 0 | 2019-09-16T16:45:00.000 | 1 | 1 | 0 | How to create button based chatbot | 57,961,205 | 1.2 | python,networkx,flowchart,rasa | Sure, you can.
You just need each button to point to an intent. Each button should have the /intent_value as its payload, and this will cause the NLU to skip evaluation and simply predict the intent. Then you can just bind a trigger to the intent or use the utter_ method.
Hope that helps. | I have created a chatbot using RASA to work with free text and it is working fine. As per my new requirement I need to build a button-based chatbot which should follow a flowchart-like structure. I don't know how to do that. What I thought is to convert the flowchart into a graph data structure using networkx, but I am not sure whether it has that capability. I did search, but most of the examples are using dialogue or chat fuel. Can I do it using networkx?
Please help. | 0 | 1 | 614 |
0 | 57,965,103 | 0 | 0 | 0 | 0 | 2 | false | 1 | 2019-09-16T22:04:00.000 | 0 | 2 | 0 | Should a function accept/return values in (row,col) or (col,row) order? | 57,964,943 | 0 | c#,python,c++,conventions | The convention is to address (column, row) like (x, y).
Column refers to the type of value and row indicates the value for the column.
Column is a key and with Row you have the value.
But using (row, column) is fine too.
It depends on what you're doing.
In SQL/LINQ, you always refer to columns to get rows. | Say I have a function that accepts a row and a column as parameters, or returns a tuple of a row and a column as its return value. I know it doesn't actually make a difference, but is there a convention as to whether to put the row first or the column first? Coming from math, if I think of the pair as coordinates into the table, I would intuitively put the column first, as in a cartesian point (x,y). But if I think of it as a whole matrix, I would put the row first, as in the MxN size of a matrix.
If there are different conventions for different languages, I would be interested especially in c++, c# and python.
By "convention" I mean preferably that that language's standard library does it a certain way, and if not that, then my second choice would be only if it were so universal that all major third-party libraries for that language would do it that way, preferably with an explanation why. | 0 | 1 | 75
0 | 57,965,076 | 0 | 0 | 0 | 0 | 2 | false | 1 | 2019-09-16T22:04:00.000 | 1 | 2 | 0 | Should a function accept/return values in (row,col) or (col,row) order? | 57,964,943 | 0.099668 | c#,python,c++,conventions | You shouldn't think of what is "horizontal" and what is "vertical". You should think of which convention is widely used not to introduce a lot of surprise to the developers who would use your code. The same is true for naming the parameters: use (x, y, z) for coordinates, (i, j) for indexes in matrix and (row, column) for the cells of the table (in this order).
I agree with @ParalysisByAnalysis that sometimes it depends, but the rule of thumb is to follow the conventions of the subject. | Say I have a function that accepts a row and a column as parameters, or returns a tuple of a row and a column as its return value. I know it doesn't actually make a difference, but is there a convention as to whether put the row first or the column first? Coming from math, if I think of the pair as coordinates into the table, I would intuitively put the column first, as in a cartesian point (x,y). But if I think of it as a whole matrix, I would put row first, as in MxN size of a matrix.
If there are different conventions for different languages, I would be interested especially in c++, c# and python.
By "convention" I mean preferably that that language's standard library does it a certain way, and if not that, then my second choice would be only if it were so universal that all major third-party libraries for that language would do it that way, preferably with an explanation why. | 0 | 1 | 75
0 | 57,980,615 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-09-17T07:53:00.000 | 0 | 1 | 0 | Python3, word2vec, How can I get the list of similarity rank about "price" in my model | 57,969,707 | 1.2 | python,gensim,word2vec,similarity,cosine-similarity | If you call wv.most_similar('price', topn=len(wv)), with a topn argument of the full vocabulary count of your model, you'll get back a ranked list of every word's similarity to 'price'.
If you call with topn=0, you'll get the raw similarities with all model words, unsorted (in the order the words appear inside wv.index2entity). | In gensim's word2vec in python, I want to get the list of cosine similarities for "price".
I read the documentation of gensim word2vec, but it only describes the most_similar and n_similarity functions.
I want the whole list of similarities between 'price' and all other words. | 0 | 1 | 204
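A short sketch of the full ranking described in the answer, assuming a gensim 3.x model (where wv.vocab exists) loaded from a hypothetical path:

from gensim.models import Word2Vec

model = Word2Vec.load("my_skipgram.model")                           # hypothetical saved model
ranked = model.wv.most_similar("price", topn=len(model.wv.vocab))    # every word, sorted by cosine similarity
print(ranked[:10])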
0 | 57,975,781 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-09-17T13:31:00.000 | 0 | 1 | 0 | how to deal with high cardinal categorical feature into numeric for predictive machine learning model? | 57,975,387 | 0 | python,machine-learning,data-science,data-cleaning,data-processing | One approach could be to group your categorical levels into smaller buckets using business rules. In your case, for the feature area_id you could simply group them based on their geographical location, say all area_ids from a single district (or, for that matter, any other level of aggregation) are replaced by a single id. Similarly, for page_entry you could group similar pages based on some attributes, like the nature of the web page (sports, travel, etc.). In this way you could significantly reduce the number of dimensions of your variables.
Hope this helps! | I have two columns with high-cardinality categorical values: one column (area_id) has 21878 unique values and the other (page_entry) has 800 unique values. I am building a predictive ML model to predict the hits on a webpage.
column information:
area_id: all the locations that were visited during the session. (has location code number of different areas of a webpage)
page_entry: describes the landing page of the session.
How can I change these two columns into numerical features, apart from one-hot encoding?
thank you. | 0 | 1 | 150 |
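A hedged sketch of two common numeric encodings for these columns: the bucket-by-business-rule idea from the answer (the district lookup here is hypothetical) and frequency encoding as another one-hot-free option:

import pandas as pd

df = pd.DataFrame({"area_id": [101, 101, 102, 103, 103, 103],
                   "page_entry": ["home", "sports", "home", "travel", "home", "sports"]})

# frequency encoding: replace each level by how often it occurs
for col in ["area_id", "page_entry"]:
    df[col + "_freq"] = df[col].map(df[col].value_counts(normalize=True))

# bucket mapping via a hypothetical business-rule lookup; unknown ids fall into "other"
district_of = {101: "north", 102: "north", 103: "south"}
df["district"] = df["area_id"].map(district_of).fillna("other")
print(df)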
0 | 57,978,592 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-09-17T16:17:00.000 | 0 | 1 | 0 | Why not evaluate over test fit results in RandomizedSearchCV? | 57,978,263 | 0 | python,optimization,hyperparameters,gridsearchcv | When we are training a model we usually divide the data into train, validation and test sets. Let's look at the purpose of each set.
Train Set: It is used by the model to learn its parameters. Usually the model reduces its cost on the train set and selects the parameters that give minimum cost.
Validation Set: As the name suggests, the validation set is used to validate that the model will also perform well on data it hasn't seen yet. That gives us confidence that the model is not memorizing the training data, performing very well on the training data but poorly on new data. If the model is complex enough, there is a risk of the model memorizing the training data to improve its performance on the training set while not doing well on the validation data.
Usually we use cross validation, in which we divide the training set into n equal parts; then for each iteration we select one part as the validation set and the remaining parts as the training set.
Test Set: The test set is set aside and only used at the end, when we are satisfied with our model, to estimate how well the final model will perform on new data in the wild. One main difference with the validation set is that the test set is not used in any way to improve or change the model, whereas the validation set helps us select the final model. We do this because we don't want a model that is biased towards the test data but will not perform well with data in the wild. | I'm trying to optimize hyperparameters for classifiers and regression methods in sklearn. And I have a question. Why, when you evaluate the results, do you choose for example the best train accuracy, instead of evaluating this result over the test set, and iterating other values with other train accuracies to obtain the best test accuracy? Because clearly the parameters for the best train accuracy are not the same as the parameters for the best test accuracy.
Thanks! | 0 | 1 | 266 |
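A self-contained sketch of the workflow the answer describes: hyperparameters are chosen on cross-validation folds of the training data, and the held-out test set is scored exactly once at the end (dataset and parameter grid are just placeholders):

from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV, train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

search = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    param_distributions={"n_estimators": [50, 100, 200], "max_depth": [3, 5, None]},
    n_iter=5, cv=5, random_state=0)
search.fit(X_train, y_train)                          # tuning uses only train/validation folds
print(search.best_params_, search.best_score_)
print("test score:", search.score(X_test, y_test))    # test set touched once, at the very end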
0 | 57,999,219 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2019-09-18T18:44:00.000 | 0 | 1 | 0 | how to display plot images outside of jupyter notebook? | 57,999,071 | 0 | python,jupyter-notebook | You can try running a matplotlib example in the python console or ipython console. It will show you a window with your plot.
Also, you can use Spyder instead of those consoles. It is free, and works well with python libraries for data science. Of course, you can check your plots in Spyder. | So, this might be an utterly dumb question, but I have just started working with python and its data science libs, and I would like to see seaborn plots displayed, but I prefer to work with editors I have experience with, like VS Code or PyCharm instead of Jupyter notebook. Of course, when I run the python code, the console does not display the plots as those are images. So how do I get to display and see the plots when not using jupyter? | 0 | 1 | 427
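A minimal sketch of what usually makes seaborn plots appear outside a notebook - an explicit plt.show() (the built-in tips dataset is used only as an example and needs an internet connection on first load):

import seaborn as sns
import matplotlib.pyplot as plt

tips = sns.load_dataset("tips")
sns.scatterplot(x="total_bill", y="tip", data=tips)
plt.show()    # opens the plot window when running a plain .py script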
0 | 58,011,558 | 0 | 0 | 0 | 0 | 1 | true | 2 | 2019-09-19T11:49:00.000 | 1 | 1 | 0 | How do I feed data into my neural network? | 58,010,363 | 1.2 | python,neural-network,backpropagation | So, you're saying you implemented a neural network on your own ?
Well, in this case, basically each neuron in the input layer must be assigned a feature of a certain row; then just iterate through each layer and each neuron in that layer and calculate as instructed.
I'm sure you are familiar with the back-propagation algorithm so you'll know when to stop.
Once you're done with that row, do it again for the next row: assign each feature to each of the input neurons and start the iterations again.
Once you're done with all records, that's an epoch.
I hope that answers your question.
Also, I would recommend you try out Keras; it's easy to use and a good tool to be experienced with. | I've coded a simple neural network for XOR in python. While there is loads of information online about how to program this, there isn't much on how to feed the data through it. I've tested the change in weights after one cycle for inputs [1,1] to compare my results with my lecture slides and it's 100% the same, so I believe the code works. I can train the network for that same input, but when I change the input (and corresponding target) every cycle the error doesn't go down.
Should I allow changing the weights and inputs after every cycle, or should I run through all the possible inputs first, get an average error and then change the weights? (But the weight changes depend on the output, so what output would I use then?)
I can share my code, if needed, but I'm pretty certain it's correct.
Please give me some advice? Thank you in advance. | 0 | 1 | 704 |
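For concreteness, a hedged from-scratch sketch of the per-row (stochastic) scheme the answer describes: every epoch loops over all four XOR patterns and updates the weights after each one; hidden size, learning rate and epoch count are arbitrary choices and may need tweaking for a given random seed:

import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)      # 2 inputs -> 4 hidden units
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)      # 4 hidden -> 1 output
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 0.5

for epoch in range(5000):                            # one epoch = one pass over all rows
    for xi, yi in zip(X, y):                         # weights updated after every single row
        h = sigmoid(xi @ W1 + b1)                    # forward pass
        out = sigmoid(h @ W2 + b2)
        d_out = (out - yi) * out * (1 - out)         # backprop deltas (squared-error loss)
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= lr * np.outer(h, d_out); b2 -= lr * d_out
        W1 -= lr * np.outer(xi, d_h);  b1 -= lr * d_h

print(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2).round(3))   # should approach 0, 1, 1, 0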
0 | 58,028,414 | 0 | 0 | 0 | 0 | 1 | false | 4 | 2019-09-20T11:50:00.000 | 3 | 2 | 0 | How to fine-tune a keras model with existing plus newer classes? | 58,027,839 | 0.291313 | python,tensorflow,keras,deep-learning,classification | With transfer learning, you can make the trained model classify among the new classes on which you just trained, using both the features learned from the new dataset and the features the model learned from the dataset on which it was trained in the first place. Unfortunately, you cannot make the model classify between all the classes (original dataset classes + the classes of the second dataset), because when you add the new classes, the classification layer keeps weights only for those new classes.
But let's say, for experimentation, you change the number of output neurons (equal to the number of old + new classes) in the last layer; it will then give random weights to these neurons, which on prediction will not give you meaningful results.
This whole idea of making the model classify among old + new classes is still an active research area.
However, one way you can achieve it is to train your model from scratch on the whole data (old + new). | Good day!
I have a celebrity dataset on which I want to fine-tune a keras built-in model. So far, from what I have explored and done, we remove the top layers of the original model (or preferably, pass include_top=False) and add our own layers, and then train our newly added layers while keeping the previous layers frozen. This whole thing is pretty intuitive.
Now what I require is, that my model learns to identify the celebrity faces, while also being able to detect all the other objects it has been trained on before. Originally, the models trained on imagenet come with an output layer of 1000 neurons, each representing a separate class. I'm confused about how it should be able to detect the new classes? All the transfer learning and fine-tuning articles and blogs tell us to replace the original 1000-neuron output layer with a different N-neuron layer (N=number of new classes). In my case, I have two celebrities, so if I have a new layer with 2 neurons, I don't know how the model is going to classify the original 1000 imagenet objects.
I need a pointer on this whole thing: how exactly can I have a pre-trained model learn two new celebrity faces while also maintaining its ability to recognize all 1000 imagenet objects?
Thanks! | 0 | 1 | 1,312 |
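A hedged sketch of the standard fine-tuning setup the question describes (new 2-class head, frozen base); as the answer notes, this head only knows the two celebrities - it does not keep the original 1000 ImageNet outputs:

from tensorflow.keras import applications, layers, models

base = applications.ResNet50(weights="imagenet", include_top=False, pooling="avg")
for layer in base.layers:
    layer.trainable = False                       # keep the pre-trained features frozen

outputs = layers.Dense(2, activation="softmax")(base.output)   # 2 = number of celebrities
model = models.Model(inputs=base.input, outputs=outputs)
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
# model.fit(...) on the celebrity images then trains only the new head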
0 | 58,042,103 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-09-20T13:43:00.000 | 2 | 1 | 0 | Nvenc session limit per GPU | 58,029,589 | 1.2 | python,ffmpeg,python-imageio,nvenc | Nvidia limits it 2 per system Not 2 per GPU. The limitation is in the driver, not the hardware. There have been unofficially drivers posted to github which remove the limitation | I'm using Imageio, the python library that wraps around ffmpeg to do hardware encoding via nvenc. My issue is that I can't get more than 2 sessions to launch (I am using non-quadro GPUs). Even using multiple GPUs. I looked over NVIDIA's support matrix and they state only 2 sessions per gpu, but it seems to be per system.
For example I have 2 GPUs in a system. I can either use the env variable CUDA_VISIBLE_DEVICES or set the ffmpeg flag -gpu to select the GPU. I've verified gpu usage using Nvidia-smi cli. I can get 2 encoding sessions working on a single gpu. Or 1 session working on 2 separate gpus each. But I can't get 2 encoding sessions working on 2 gpus.
Even more strangely if I add more gpus I am still stuck at 2 sessions. I can't launch a third encoding session on a 3rd gpu. I am always stuck at 2 regardless of the # of gpus. Any ideas on how to fix this? | 0 | 1 | 967 |
0 | 58,031,057 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-09-20T14:25:00.000 | 0 | 1 | 0 | Do a vlookup with pandas in python | 58,030,255 | 1.2 | python,pandas,replace,vlookup | results = df2.merge(df1,on="sku", how="outer") | I am struggling with a vlookup in python.
I have two datasets.
The first is called "output_apu_stock1". Here I have quantities and prices that should update the second dataset.
The second is called "Angebote_Master_File".
Now, if I run my code, the new dataset "results" contains only the values that match. This leads to the problem that my "Angebote_Master_File", which originally has around 1600 observations, shrinks to around 400 observations.
import pandas as pd
df1 = pd.read_csv("C:/Users/Desktop/output_apu_stock1.csv")
df2 = pd.read_csv("C:/Users/Desktop/Angebote_Master_File.csv")
results = df2.merge(df1,on="sku")
I get the point that the final dataset contains only the matched observations (the identifier is the column "sku") and drops the others...
I need the merged file containing all observations from the "Angebote_Master_File" without any losses.
Thanks for your help!
Best
Michael | 0 | 1 | 85 |
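A short sketch of the merge that keeps every master row, assuming the same file names as in the question; how="left" preserves all ~1600 offers (unmatched skus just get NaN in the stock columns), while the accepted how="outer" additionally keeps stock rows with no matching offer:

import pandas as pd

df1 = pd.read_csv("C:/Users/Desktop/output_apu_stock1.csv")      # stock: sku, quantity, price
df2 = pd.read_csv("C:/Users/Desktop/Angebote_Master_File.csv")   # master file with all offers

results = df2.merge(df1, on="sku", how="left")   # every master row survives the merge
print(len(df2), len(results))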
0 | 58,035,832 | 0 | 1 | 0 | 0 | 1 | false | 2 | 2019-09-20T21:44:00.000 | 0 | 2 | 0 | Why can't python vectorize map() or list comprehensions | 58,035,479 | 0 | python,parallel-processing,vectorization,python-multiprocessing,simd | You want vectorization or JIT compilation use numba, pypy or cython but be warned the speed comes at the cost of flexibility.
numba is a python module that will jit compile certain functions for you but it does not support many kinds of input and barfs on some (many) python constructs. It is really fast when it works but can be difficult to wrangle. It is also very targeted at working with numpy arrays.
pypy is a complete replacement for the cpython interpreter that is a JIT. It supports the entire python spec but does not integrate with extensions well so some libraries will not work.
cython is an extension of python which compiles to a binary which will behave like a python module. However it does require you to use special syntax to take advantage of the speed gains and requires you to explicitly declare things with C types to really get any advantage.
My recommendation is:
use pypy if you are pure python. (if it works for you it's basically effortless)
Use numba if you need to speed up numeric calculations that numpy doesn't have a good way to do.
Use cython if you need the speed and the other 2 don't work. | I don't know that much about vectorization, but I am interested in understanding why a language like python can not provide vectorization on iterables through a library interface, much like it provides threading support. I am aware that many numpy methods are vectorized, but it can be limiting to have to work with numpy for generic computations.
My current understanding is that python is not capable of vectorizing functions even if they match the "SIMD" pattern. For instance, in theory shouldn't any list comprehension or use of the map() function be vectorizable because they output a list which is the result of running the same function on independent inputs from an input list?
With my naive understanding, it seems that anytime I use map(), in theory, I should be able to create an instruction set that represents the function; then each element in the input just needs to be run through the same function that was compiled. What is the technical challenge to designing a tool that provides simd_map(func, iterable), which attempts to compile func "just in time" and then grabs batches of input from iterable and utilizes the processor's simd capabilities to run those batches through func()?
Thanks! | 0 | 1 | 637 |
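A hedged sketch of the numba route from the answer: a plain Python loop compiled with @njit, which is the closest practical thing to the simd_map idea (whether the compiler actually emits SIMD instructions depends on the loop body):

import numpy as np
from numba import njit

@njit                                    # JIT-compiles this function on first call
def square_all(arr):
    out = np.empty_like(arr)
    for i in range(arr.size):            # simple loop the compiler can vectorize
        out[i] = arr[i] * arr[i]
    return out

x = np.arange(1_000_000, dtype=np.float64)
print(square_all(x)[:5])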
0 | 58,045,304 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-09-21T03:59:00.000 | 0 | 1 | 0 | What is difference between the result of using GPU or not? | 58,037,171 | 0 | python-3.x,keras,gpu,conv-neural-network | You probably don't have enough memory to fit all the images in the CPU during training. Using a GPU will only help if it has more memory. If this is happening because you have too many images or they're resolution is too high, you can try using keras' ImageDataGenerator and any of the flow methods to feed your data in batches. | I have a CNN with 2 hidden layers. When i use keras on cpu with 8GB RAM, sometimes i have "Memory Error" or sometimes precision class was 0 but some classes at the same time were 1.00. If i use keras on GPU,will it solve my problem? | 0 | 1 | 72 |
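A hedged sketch of the batched feeding the answer recommends, assuming images are laid out in class subfolders under a hypothetical data/images directory; only one batch is held in RAM at a time:

from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

gen = ImageDataGenerator(rescale=1.0 / 255)
train_flow = gen.flow_from_directory("data/images", target_size=(64, 64),
                                     batch_size=32, class_mode="categorical")

model = Sequential([
    Conv2D(16, 3, activation="relu", input_shape=(64, 64, 3)),
    MaxPooling2D(),
    Conv2D(32, 3, activation="relu"),
    MaxPooling2D(),
    Flatten(),
    Dense(train_flow.num_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.fit_generator(train_flow, epochs=5)   # streams batches from disk instead of loading everything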
0 | 62,098,032 | 0 | 0 | 0 | 0 | 1 | true | 3 | 2019-09-22T04:50:00.000 | 12 | 2 | 0 | Can someone give a good math/stats explanation as to what the parameter var_smoothing does for GaussianNB in scikit learn? | 58,046,129 | 1.2 | python,machine-learning,scikit-learn,gaussian | A Gaussian curve can serve as a "low pass" filter, allowing only the samples close to its mean to "pass." In the context of Naive Bayes, assuming a Gaussian distribution is essentially giving more weights to the samples closer to the distribution mean. This might or might not be appropriate depending if what you want to predict follows a normal distribution.
The variable, var_smoothing, artificially adds a user-defined value to the distribution's variance (whose default value is derived from the training data set). This essentially widens (or "smooths") the curve and accounts for more samples that are further away from the distribution mean. | I am aware of this parameter var_smoothing and how to tune it, but I'd like an explanation from a math/stats aspect that explains what tuning it actually does - I haven't been able to find any good ones online. | 0 | 1 | 4,054 |
0 | 58,234,695 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2019-09-22T06:19:00.000 | 1 | 2 | 0 | How to Compare Sentences with an idea of the positions of keywords? | 58,046,570 | 0.099668 | python,nlp,nltk | Semantic Similarity is a bit tricky this way, since even if you use context counts (which would be n-grams > 5) you cannot cope with antonyms (e.g. black and white) well enough. Before using different methods, you could try using a shallow parser or dependency parser for extracting subject-verb or subject-verb-object relations (e.g. ), which you can use as dimensions. If this does not give you the expected similarity (or values adequate for your application), use word embeddings trained on really large data. | I want to compare the two sentences. As a example,
sentence1="football is good,cricket is bad"
sentence2="cricket is good,football is bad"
Generally these senteces have no relationship that means they are different meaning. But when I compare with python nltk tools it will give 100% similarity. How can I fix this Issue? I need Help. | 0 | 1 | 566 |
0 | 58,048,268 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2019-09-22T09:08:00.000 | 1 | 1 | 0 | What's the meaning of the number before the progress bar when tensorflow is training | 58,047,736 | 1.2 | python-3.x,tensorflow,tensor,tensorflow-estimator | 10 and 49 corresponds to the number of batches which your dataset has been divided into in each epoch.
For example, in your train dataset, there are totally 10000 images and your batch size is 64, then there will be totally math.ceil(10000/64) = 157 batches possible in each epoch. | Could anyone tell me what's the meaning of '10' and '49' in the following log of tensorflow?
Much Thanks
INFO:tensorflow:Started compiling
INFO:tensorflow:Finished compiling. Time elapsed: 5.899410247802734 secs
10/10 [==============================] - 23s 2s/step - loss: 2.6726 - acc: 0.1459
49/49 [==============================] - 108s 2s/step - loss: 2.3035 - acc: 0.2845 - val_loss: 2.6726 - val_acc: 0.1459
Epoch 2/100
10/10 [==============================] - 1s 133ms/step - loss: 2.8799 - acc: 0.1693
49/49 [==============================] - 17s 337ms/step - loss: 1.9664 - acc: 0.4042 - val_loss: 2.8799 - val_acc: 0.1693 | 0 | 1 | 173 |
0 | 58,049,249 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-09-22T12:12:00.000 | 1 | 2 | 0 | How do i retrain the model without losing the earlier model data with new set of data | 58,049,090 | 0.099668 | python-3.x,tensorflow,keras,deep-learning,face-recognition | With transfer learning you would copy an existing pre-trained model and use it for a different, but similar, dataset from the original one. In your case this would be what you need to do if you want to train the model to recognize your specific 100 people.
If you already did this and you want to add another person to the database without having to retrain the complete model, then I would freeze all layers (set layer.trainable = False for all layers) except for the final fully-connected layer (or the final few layers). Then I would replace the last layer (which had 100 nodes) to a layer with 101 nodes. You could even copy the weights to the first 100 nodes and maybe freeze those too (I'm not sure if this is possible in Keras). In this case you would re-use all the trained convolutional layers etc. and teach the model to recognise this new face. | for my current requirement, I'm having a dataset of 10k+ faces from 100 different people from which I have trained a model for recognizing the face(s). The model was trained by getting the 128 vectors from the facenet_keras.h5 model and feeding those vector value to the Dense layer for classifying the faces.
But the issue I'm facing currently is
if want to train one person face, I have to retrain the whole model once again.
How should I get on with this challenge? I have read about a concept called transfer learning but I have no clues about how to implement it. Please give your suggestion on this issue. What can be the possible solutions to it? | 0 | 1 | 675 |
0 | 58,049,979 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-09-22T13:36:00.000 | 0 | 1 | 0 | Linear models : Given the fact that both the models perform equally well on the test data set, which one would you prefer and why? | 58,049,729 | 0 | python,linear-regression | I'd probably go with the second one, just because the numbers in the second one are rounded more, and if they still do equally well, the extra digits in the first one are unnecessary and just make it look worse.
(As a side note, this question doesn't seem related to programming so you may want to post it in a different community.) | Consider two linear models:
L1: y = 39.76x + 32.648628
And
L2: y = 43.2x + 19.8
Given the fact that both the models perform equally well on the test data set, which one would you prefer and why? | 0 | 1 | 183 |
0 | 58,056,413 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-09-23T05:52:00.000 | 1 | 1 | 0 | Check inputs in csv file | 58,056,352 | 0.197375 | python,pandas | Simple approach that can be modified:
Open df using df = pandas.from_csv(<path_to_csv>)
For each column, use df['<column_name>'] = df['<column_name>'].astype(str) (str = string, int = integer, float = float64, ..etc).
You can check column types using df.dtypes | I`m new to python. I have a csv file. I need to check whether the inputs are correct or not. The ode should scan through each rows.
All columns for a particular row should contain values of same type: Eg:
All columns of second row should contain only string,
All columns of third row should contain only numbers... etc
I tried the following approach, (it may seem blunder):
I have only 15 rows, but no idea on number of columns(Its user choice)
df.iloc[1].str.isalpha()
This checks for string. I don`t know how to check ?? | 0 | 1 | 59 |
0 | 58,059,345 | 0 | 1 | 0 | 0 | 1 | true | 1 | 2019-09-23T09:20:00.000 | 0 | 1 | 0 | Remove "days 00:00:00"from dataframe | 58,059,278 | 1.2 | python,pandas,dataframe,days | Check this format,
df['date'] = pd.to_timedelta(df['date'], errors='coerce').days
also, check .normalize() function in pandas. | So, I have a pandas dataframe with a lot of variables including start/end date of loans.
I subtract these two in order to get their difference in days.
The result I get is of the type i.e. 349 days 00:00:00.
How can I keep only for example the number 349 from this column? | 0 | 1 | 1,164 |
0 | 58,080,853 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-09-23T17:28:00.000 | 0 | 1 | 0 | Tensorflow: OneHot-encoding with variable sized length | 58,067,427 | 0 | python,tensorflow,machine-learning,one-hot-encoding | This is how I solved the issue: The problem was that I was using someone else's code and the depth argument in tf.one_hot was derived from someTensor.get_shape().as_list()[1]. The problem here is that if the shape of someTensor is unknown, the argument is a Python-None which is not a valid argument for tf.one_hot. However, using tf.shape(someTensor)[1] solved that problem as it returns a Dimension with unknown shape instead of a Python-None. A Dimension with unknown shape is a valid depth-argument for tf.one_hot. | I need to onehot-encode some positions with TensorFlow.
However, the length of the input sequences (and therefore the depth-argument in tf.one_hot) is None as I work with variable sized inputs.
This throws the following error:
"ValueError: Tried to convert 'depth' to a tensor and failed. Error: None values not supported.".
Is there a workaround for this?
I have already tried to set the depth to the correct sequence length before each individual call (through a variable that has some arbitrary initialization value) for a given sequence but as the computational graph is already built, the changes do not come into effect and the depth is stuck at the initialization value. | 0 | 1 | 129 |
0 | 58,072,232 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2019-09-23T20:00:00.000 | 1 | 3 | 0 | gensim word2vec extremely big and what are the methods to make file size smaller? | 58,069,421 | 1.2 | python,gensim,word2vec | The size of a full Word2Vec model is chiefly determined by the chosen vector-size, and the size of the vocabulary.
So your main options for big savings is to train smaller vectors, or a smaller vocabulary.
Discarding a few hundred stop-words or punctuation-tokens won't make a noticeable dent in the model size.
Discarding many of the least-frequent words can make a big difference in model size – and often those less-frequent words aren't as important as you might think. (While there are a lot of them in total, each only appears rarely. And because they're rare in the training data, they often tend not to have very good vectors, anyway – based on few examples, and their training influence is swamped by the influence of more-frequent words.)
The easiest way to limit the vocabulary size is to use a higher min_count value during training (ignoring all words with fewer occurrences), or a fixed max_final_vocab cap (which will keep only that many of the most-frequent words).
Note also that if you've been saving/reloading full Word2Vec models (via the gensim-internal .save()/.load() methods), you're retaining model internal weights that are only needed for continued training, and will nearly double the model-size on disk or re-load.
You may want to save just the raw word-vectors in the .wv property instead (via either the gensim-internal .save() or the .save_word2vec_format() methods). | I have a pre-trained word2vec bin file by using skipgram. The file is pretty big (vector dimension of 200 ), over 2GB. I am thinking some methods to make the file size smaller. This bin file contains vectors for punctuation, some stop words. So, I want to know what are the options to decrease the file size for this word2vec. Is it safe to delete those punctuation and stop words rows and what would be the most effective way ? | 0 | 1 | 2,014 |
0 | 58,070,875 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-09-23T22:18:00.000 | 0 | 2 | 0 | how to remove duplicates when using pandas concat to combine two dataframe | 58,070,840 | 0 | python,pandas,concat | drop_duplicates() only removes rows that are completely identical.
what you're looking for is pd.merge().
pd.merge(df1, df2, on='id) | I have two data from.
df1 with columns: id,x1,x2,x3,x4,....xn
df2 with columns: id,y.
df3 =pd.concat([df1,df2],axis=1)
when I use pandas concat to combine them, it became
id,y,id,x1,x2,x3...xn.
there are two id here.How can I get rid of one.
I have tried :
df3=pd.concat([df1,df2],axis=1).drop_duplicates().reset_index(drop=True).
but not work. | 0 | 1 | 1,418 |
0 | 58,075,290 | 0 | 0 | 0 | 0 | 4 | false | 1 | 2019-09-24T06:23:00.000 | 0 | 4 | 0 | How to increase true positive in your classification Machine Learning model? | 58,074,203 | 0 | python,machine-learning,statistics,data-science | What is the size of your dataset?How many rows are we talking here?
Your dataset is not balanced and so its kind of normal for a simple classification algorithm to predict the 'majority-class' most of the times and give you an accuracy of 90%. Can you collect more data that will have more positive examples in it.
Or, just try oversampling/ under-sampling. see if that helps.
You can also use penalized version of the algorithm to impose penalty, whenever a wrong class is predicted. That may help. | I am new to Machine Learning
I have a dataset which has highly unbalanced classes(dominated by negative class) and contains more than 2K numeric features and the target is [0,1]. I have trained a logistics regression though I am getting an accuracy of 89% but from confusion matrix, it was found the model True positive is very low. Below are the scores of my model
Accuracy Score : 0.8965989500114129
Precision Score : 0.3333333333333333
Recall Score : 0.029545454545454545
F1 Score : 0.05427974947807933
How I can increase my True Positives? Should I be using a different classification model?
I have tried the PCA and represented my data in 2 components, it increased the model accuracy up to 90%(approx) however True Positives was decreased again | 0 | 1 | 4,004 |
0 | 58,082,997 | 0 | 0 | 0 | 0 | 4 | false | 1 | 2019-09-24T06:23:00.000 | 0 | 4 | 0 | How to increase true positive in your classification Machine Learning model? | 58,074,203 | 0 | python,machine-learning,statistics,data-science | You can try many different solutions.
If you have quite a lot data points. For instance you have 2k 1s and 20k 0s. You can try just dump those extra 0s only keep 2k 0s. Then train it. And also you can try to use different set of 2k 0s and same set of 2k 1s. To train multiple models. And make decision based on multiple models.
You also can try adding weights at the output layer. For instance, you have 10 times 0s than 1s. Try to multiply 10 at the 1s prediction value.
Probably you also can try to increase dropout?
And so on. | I am new to Machine Learning
I have a dataset which has highly unbalanced classes(dominated by negative class) and contains more than 2K numeric features and the target is [0,1]. I have trained a logistics regression though I am getting an accuracy of 89% but from confusion matrix, it was found the model True positive is very low. Below are the scores of my model
Accuracy Score : 0.8965989500114129
Precision Score : 0.3333333333333333
Recall Score : 0.029545454545454545
F1 Score : 0.05427974947807933
How I can increase my True Positives? Should I be using a different classification model?
I have tried the PCA and represented my data in 2 components, it increased the model accuracy up to 90%(approx) however True Positives was decreased again | 0 | 1 | 4,004 |
0 | 58,074,603 | 0 | 0 | 0 | 0 | 4 | false | 1 | 2019-09-24T06:23:00.000 | 0 | 4 | 0 | How to increase true positive in your classification Machine Learning model? | 58,074,203 | 0 | python,machine-learning,statistics,data-science | I'm assuming that your purpose is to obtain a model with good classification accuracy on some test set, regardless of the form of that model.
In that case, if you have access to the computational resources, try Gradient-Boosted Trees. That's a ensemble classifier using multiple decision trees on subsets of your data, then a voting ensemble to make predictions. As far as I know, it can give good results with unbalanced class counts.
SciKitLearn has the function sklearn.ensemble.GradientBoostingClassifier for this. I have not used that particular one, but I use the regression version often and it seems good. I'm pretty sure MATLAB has this as a package too, if you have access.
2k features might be difficult for the SKL algorithm - I don't know I've never tried. | I am new to Machine Learning
I have a dataset which has highly unbalanced classes(dominated by negative class) and contains more than 2K numeric features and the target is [0,1]. I have trained a logistics regression though I am getting an accuracy of 89% but from confusion matrix, it was found the model True positive is very low. Below are the scores of my model
Accuracy Score : 0.8965989500114129
Precision Score : 0.3333333333333333
Recall Score : 0.029545454545454545
F1 Score : 0.05427974947807933
How I can increase my True Positives? Should I be using a different classification model?
I have tried the PCA and represented my data in 2 components, it increased the model accuracy up to 90%(approx) however True Positives was decreased again | 0 | 1 | 4,004 |
0 | 58,074,754 | 0 | 0 | 0 | 0 | 4 | false | 1 | 2019-09-24T06:23:00.000 | 4 | 4 | 0 | How to increase true positive in your classification Machine Learning model? | 58,074,203 | 0.197375 | python,machine-learning,statistics,data-science | There are several ways to do this :
You can change your model and test whether it performs better or not
You can fix a different prediction threshold: here I guess you predict 0 if the output of your regression is < 0.5; you could change the 0.5 to 0.25, for example. It would increase your true positive rate, but of course at the price of some more false positives.
You can duplicate every positive example in your training set so that your classifier has the feeling that classes are actually balanced.
You could change the loss of the classifier in order to penalize more False Negatives (this is actually pretty close to duplicating your positive examples in the dataset)
I'm sure many other tricks could apply; this is just my favorite short-list (a sketch of the threshold and class-weight ideas follows this row). | I am new to Machine Learning
I have a dataset with highly unbalanced classes (dominated by the negative class), more than 2K numeric features, and a target in [0,1]. I have trained a logistic regression and I am getting an accuracy of 89%, but the confusion matrix shows that the model's true positives are very low. Below are the scores of my model:
Accuracy Score : 0.8965989500114129
Precision Score : 0.3333333333333333
Recall Score : 0.029545454545454545
F1 Score : 0.05427974947807933
How can I increase my true positives? Should I be using a different classification model?
I have tried PCA and represented my data in 2 components; it increased the model accuracy to about 90%, however the true positives decreased again. | 0 | 1 | 4,004
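Hedged sketches of two of the tricks listed above: lowering the decision threshold and re-weighting the loss. The 0.25 threshold, the class_weight value and the synthetic data are illustrative choices only.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    X, y = make_classification(n_samples=10000, weights=[0.9, 0.1], random_state=0)

    # keep the model, move the threshold from 0.5 down to 0.25
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    proba = clf.predict_proba(X)[:, 1]
    y_pred = (proba >= 0.25).astype(int)        # more true positives, at the cost of more false positives

    # penalize false negatives more via class weights (close to duplicating positives)
    weighted = LogisticRegression(max_iter=1000, class_weight={0: 1, 1: 10}).fit(X, y)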
0 | 58,078,010 | 0 | 1 | 0 | 0 | 1 | false | 1 | 2019-09-24T09:39:00.000 | 1 | 3 | 0 | Use numpy structured array instead of dict to save space and keep speed | 58,077,373 | 0.066568 | python,numpy,dictionary,time-complexity,structured-array | A numpy array uses a contiguous block of memory and can store only one type of object (int, float, string or other object), with each item allocated a fixed number of bytes in memory.
Numpy also provides a set of functions for operations like traversing the array, arithmetic and some string operations on the stored items, all implemented in C. As these operations don't carry Python's overhead, they are normally more efficient in terms of both memory and processing power.
As you need key-value pairs, you can store them in a numpy structured array, similar to a C struct, but it won't have dict features like looking up an item by key, checking whether a key exists, filtering, etc.; you have to do those yourself using array functionality.
A better option for you may be a pandas Series, which also uses a numpy array to store its data but provides lots of functionality on top of it; a sketch of both options follows this row. | Are numpy structured arrays an alternative to Python dict?
I would like to save memory and I cannot affort much of a performance decline.
In my case, the keys are str and the values are int.
Can you give a quick conversion line in case they actually are an alternative?
I also don't mind if you can suggest a different alternative.
I need to save memory, because some dictionaries get larger than 50Gb in memory and I need to open multiple at a time with 'only' 192 GB RAM available. | 0 | 1 | 1,173 |
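A rough sketch of both options mentioned above (a numpy structured array and a pandas Series) for a str -> int mapping. The field width and the example keys are assumptions for illustration.

    import numpy as np
    import pandas as pd

    pairs = [("apple", 3), ("banana", 7), ("cherry", 1)]   # stand-in for the real dict items

    # structured array: fixed-width string key + int64 value, stored contiguously
    arr = np.array(pairs, dtype=[("key", "U20"), ("value", "i8")])
    value = arr["value"][arr["key"] == "banana"][0]        # lookup is a linear scan, not O(1)

    # pandas Series: numpy storage underneath, but dict-like indexing on top
    s = pd.Series(dict(pairs))
    value2 = s["banana"]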
0 | 58,079,908 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-09-24T11:39:00.000 | 0 | 1 | 0 | How does decision tree recognize the features from a given text dataset? | 58,079,493 | 1.2 | python,machine-learning,scikit-learn,decision-tree,text-processing | The Decision Tree won't recognize from which features the attributes are coming. | I have a binary classification text data in which there are 10 text features.
I use various techniques like Bag of words, TFIDF etc. to convert them to numerical.
I use hstack() to stack all those features together again after processing them.
After converting them to numerical features, each feature now spans a large number of columns; hence, after conversion, my dataset has around 3000 columns.
My question is: when I fit this dataset into a decision tree classifier (sklearn), how does the classifier recognize the columns which belong to a particular feature?
For example, the first 51 columns out of 3000 belong to the US_states bag of words.
Now, how will the DT recognize it?
PS: Data before processing is in pandas Dataframe.
After processing, it is a stacked numpy array being input in the classifier. | 0 | 1 | 145 |
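Since the tree itself only sees anonymous columns, one way to keep the mapping is to record each vectorizer's output width or feature names when stacking. A rough sketch with hypothetical text columns; on older scikit-learn versions use get_feature_names() instead of get_feature_names_out().

    from scipy.sparse import hstack
    from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

    docs_state = ["california new york", "texas"]        # hypothetical text feature 1
    docs_title = ["data scientist", "ml engineer"]       # hypothetical text feature 2

    bow = CountVectorizer().fit(docs_state)
    tfidf = TfidfVectorizer().fit(docs_title)

    X = hstack([bow.transform(docs_state), tfidf.transform(docs_title)])

    # keep your own record of which columns came from which original feature
    feature_names = (["state_" + f for f in bow.get_feature_names_out()] +
                     ["title_" + f for f in tfidf.get_feature_names_out()])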
0 | 58,088,639 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2019-09-24T19:49:00.000 | 0 | 2 | 0 | Create array from image of chessboard | 58,087,263 | 0 | python,opencv,computer-vision | Based on either the edge detector or the red/green square detector, calculate the center coordinates of each square on the game board. For example, average the x-coordinate of the left and right edge of a square to get the x-coordinate of the square's center. Similarly, average the y-coordinate of the top and bottom edge to get the y-coordinate of the center.
It might also be possible to find the top, left, bottom and right edge of the board and then interpolate to find the centers of all the squares. The sides of each square are probably more than a hundred pixels in length, so the calculations don't need to be that accurate.
To determine where the pieces are, iterate over a list of the center coordinates and look at the color of the pixel. If it is red or green, the square is empty. If it is black or white, the square has a corresponding piece in it. Use that to fill an array with the information for the AI (a sketch of this center-and-color check follows this row).
If the images are noisy, it might be necessary to average several pixels near the center or to average the center pixel over several frames.
It would work best if the camera is above the center of the board. If it is off to the side, the edges wouldn't be parallel/orthogonal in the picture, which might complicate the math for finding the centers. | Basically, I'm working on a robot arm that will play checkers.
There is a camera attached above the board supplying pictures (or even video material, but I guess that is just a series of images, and since checkers is not really a fast-paced game I can just take a picture every few seconds and go from there).
I need to find a way to translate the visual board into e.g. a 2D array to feed into the AI to compute the robot's moves.
I have line detection working which draws lines along the edges of the squares (and also returns edges from Canny as a prior step). Moreover, I detect green and red (the squares of my board are green and red) and return both as masks.
I also have sphere detection in place to detect the position of the pieces, and some black and white color detection returning a mask each for the black or white detected areas.
My question is how I can now combine these things I have and, as a result, get some type of array from which I can deduce which squares my pieces are in.
Like, how would I build or connect an 8x8 array to the image of the board with the lines and/or the masks of the red/green tiles? I guess I have to do some type of calibration?
And secondly, is there a way to somehow overlay the masks so that I then know which pieces are in which squares? | 0 | 1 | 424
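A rough sketch of the center-and-color idea described in the answer above. The image name, the assumption that the board fills the frame, and the color thresholds are all placeholders to be adapted.

    import cv2
    import numpy as np

    img = cv2.imread("board.jpg")                       # hypothetical top-down photo of the board
    h, w = img.shape[:2]
    sq_h, sq_w = h // 8, w // 8                         # assumes the board fills the whole frame

    board = [[None] * 8 for _ in range(8)]
    for r in range(8):
        for c in range(8):
            cy, cx = r * sq_h + sq_h // 2, c * sq_w + sq_w // 2   # center of square (r, c)
            b, g, rch = img[cy, cx]                                # BGR pixel at the center
            if max(b, g, rch) < 60:
                board[r][c] = "black"                              # dark piece
            elif min(b, g, rch) > 190:
                board[r][c] = "white"                              # light piece
            else:
                board[r][c] = None                                 # red/green square -> empty

Averaging a small patch around each center (or over several frames) makes this more robust to noise, as the answer suggests.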
0 | 58,087,392 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2019-09-24T19:49:00.000 | 2 | 2 | 0 | Create array from image of chessboard | 58,087,263 | 0.197375 | python,opencv,computer-vision | Well, first of all, remember that chess always starts with the same pieces in the same positions, e.g. the black knight starts at 8-B, which can be [1][7] in your 2D array. If I were you I would start with a 2D array holding the starting positions of all the chess pieces.
As to knowing which pieces are where: you do not need to recognize the pieces themselves. What I would do if I were you is detect the empty spots on the chessboard which is actually quite easy in comparison to really recognizing the different chess pieces.
Once your detection system detects that one of the previously empty spots is no longer empty, you know that a piece was moved there. Since you can also detect a newly open spot (the spot where the piece came from), you also know exactly which piece was moved. If you keep track of this during the whole game, you can always know which pieces have moved and which pieces are where (a small diff sketch follows this row).
Edit:
As noted in the comments my answer was based on chess instead of checkers. The idea is however still the same but instead of chess pieces you can now put men and kings in the 2D array. | Basically, I'm working on a robot arm that will play checkers.
There is a camera attached above the board supplying pictures (or even video material, but I guess that is just a series of images, and since checkers is not really a fast-paced game I can just take a picture every few seconds and go from there).
I need to find a way to translate the visual board into e.g. a 2D array to feed into the AI to compute the robot's moves.
I have line detection working which draws lines along the edges of the squares (and also returns edges from Canny as a prior step). Moreover, I detect green and red (the squares of my board are green and red) and return both as masks.
I also have sphere detection in place to detect the position of the pieces, and some black and white color detection returning a mask each for the black or white detected areas.
My question is how I can now combine these things I have and, as a result, get some type of array from which I can deduce which squares my pieces are in.
Like, how would I build or connect an 8x8 array to the image of the board with the lines and/or the masks of the red/green tiles? I guess I have to do some type of calibration?
And secondly, is there a way to somehow overlay the masks so that I then know which pieces are in which squares? | 0 | 1 | 424
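A tiny sketch of the move-tracking idea in the answer above: compare the previous occupancy grid with the new one and infer which piece moved. The grids are assumed to be 8x8 lists with None for empty squares; captures would need extra handling.

    def find_move(prev_board, new_board):
        """Return ((from_r, from_c), (to_r, to_c)) by diffing two 8x8 occupancy grids."""
        src = dst = None
        for r in range(8):
            for c in range(8):
                if prev_board[r][c] is not None and new_board[r][c] is None:
                    src = (r, c)                      # square that became empty
                elif prev_board[r][c] is None and new_board[r][c] is not None:
                    dst = (r, c)                      # square that became occupied
        return src, dst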
0 | 58,090,124 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-09-25T00:25:00.000 | 0 | 1 | 0 | Supremum Metric in Python for Knn with Uncertain Data | 58,089,636 | 1.2 | python,knn | I found that using scipy's spatial distance functions and tweaking the for-loops in a standard k-NN helps a lot (see the sketch below). | I'm trying to make a classifier for uncertain data (e.g. ranged data) using Python. In the certain dataset, the list is a 2D array or array of records (containing float numbers for data and a string for labels), whereas in the uncertain dataset the list is a 3D array (containing ranges of float numbers for data and a string for labels). I managed to turn a certain dataset into an uncertain one using a uniform probability distribution. A research paper says that I have to use the supremum distance metric. How do I implement this metric in Python? Note that in the uncertain dataset both the test set and the training set are uncertain | 0 | 1 | 119
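For reference, the supremum (L-infinity) metric is available in SciPy as the Chebyshev distance, which is probably what the accepted answer refers to; the points below are made up for illustration.

    import numpy as np
    from scipy.spatial.distance import chebyshev, cdist

    a = np.array([1.0, 5.0, 2.0])
    b = np.array([4.0, 5.5, 0.0])
    d = chebyshev(a, b)                          # max(|a_i - b_i|) = 3.0

    # pairwise supremum distances between a test set and a training set
    train = np.random.rand(100, 3)
    test = np.random.rand(10, 3)
    D = cdist(test, train, metric="chebyshev")   # shape (10, 100), usable for k-NN neighbor search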
0 | 58,096,338 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-09-25T05:01:00.000 | 0 | 1 | 0 | OpenCV camera calibration - Intrinsic matrix values are off | 58,091,445 | 0 | python-3.x,opencv,camera,camera-calibration | Cx and Cy are the coordinates (in pixels) of the principal point in your image. Usually a good approximation is (image_width/2, image_height/2).
An average reprojection error of 0.08 pixels seems quite good. | I used OpenCV's camera calibration function to calibrate my camera. I captured around 50 images with different angles and with the pattern near the image borders.
The Cx and Cy values in the intrinsic matrix are around 300 px off. Is that alright? My average reprojection error is around 0.08, though. | 0 | 1 | 476
0 | 58,106,916 | 0 | 0 | 0 | 0 | 1 | true | 2 | 2019-09-25T13:28:00.000 | 2 | 1 | 0 | Training a model from multiple corpus | 58,099,559 | 1.2 | python,artificial-intelligence,gensim,training-data,fasttext | Adjusting more-generic models with your specific domain training data is often called "fine-tuning".
The gensim implementation of FastText allows an existing model to expand its known-vocabulary via what's seen in new training data (via build_vocab(..., update=True)) and then for further training cycles including that new vocabulary to occur (through train()).
But, doing this particular form of updating introduces murky issues of balance between older and newer training data, with no clear best practices.
As just one example, to the extent there are tokens/ngrams in the original model that don't recur in the new data, new training pulls the tokens that do appear in the new data into new positions that are optimal for the new data, but potentially arbitrarily far from compatibility with the older tokens/ngrams.
Further, it's likely some model modes (like negative-sampling versus hierarchical-softmax), and some mixes of data, have a better chance of net-benefiting from this approach than others – but you pretty much have to hammer out the tradeoffs yourself, without general rules to rely upon.
(There may be better fine-tuning strategies for other kinds of models; this is just speaking to the ability of the gensim FastText to update its vocabulary and repeat training.)
But perhaps, your domain of interest is scientific texts. And maybe you also have a lot of representative texts – perhaps even, at training time, the complete universe of papers you'll want to compare.
In that case, are you sure you want to deal with the complexity of starting with a more-generic word-model? Why would you want to contaminate your analysis with any of the dominant word-senses in generic reference material, like Wikipedia, if in fact you already have sufficiently-varied and representative examples of your domain words in your domain contexts?
So I would recommend 1st trying to train your own model, from your own representative data. And only if you then fear you're missing important words/senses, try mixing in Wikipedia-derived senses. (At that point, another way to mix in that influence would be to mix Wikipedia texts with your other corpus. And you should also be ready to test whether that really helps or hurts – because it could be either.)
Also, to the extent your real goal is comparing full papers, you might want to look into other document-modeling strategies, including bag-of-words representations, the Doc2Vec ('Paragraph Vector') implementation in gensim, or others. Those approaches will not necessarily require per-word vectors as an input, but might still work well for quantifying text-to-text similarities. (A small gensim sketch of the update-training flow follows this row.) | Imagine I have a fastText model that has been trained on Wikipedia articles (as explained on the official website).
Would it be possible to train it again with another corpus (scientific documents) that could add new or more pertinent links between words, especially for the scientific ones?
To summarize, I would need the classic links that exist between all the English words coming from Wikipedia, but I would like to enhance this model with new documents about specific sectors. Is there a way to do that? And if yes, is there a way to maybe weight the training runs so that relations coming from my custom documents would be 'more important'?
My final wish is to compute cosine similarity between documents that can be very scientific (that's why to have better results I thought about adding more scientific documents) | 0 | 1 | 254 |
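A hedged sketch of the vocabulary-update flow described in the answer above, using gensim's FastText. The corpora are placeholders, and the parameter names follow gensim 4.x (in gensim 3.x, vector_size was called size).

    from gensim.models import FastText

    generic_corpus = [["the", "cat", "sat"], ["wikipedia", "style", "text"]]        # placeholder
    domain_corpus = [["gene", "expression", "profiling"], ["protein", "folding"]]   # placeholder

    model = FastText(vector_size=100, min_count=1)
    model.build_vocab(generic_corpus)
    model.train(generic_corpus, total_examples=len(generic_corpus), epochs=5)

    # expand the vocabulary with the domain data, then continue training on it
    model.build_vocab(domain_corpus, update=True)
    model.train(domain_corpus, total_examples=len(domain_corpus), epochs=5)

As the answer warns, the balance between old and new training data after such an update is murky, so compare against a model trained only on the domain corpus.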
0 | 58,140,537 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-09-26T10:05:00.000 | 0 | 1 | 0 | How to get data from elastic search, if new data came then update it, and again inject it? | 58,114,367 | 0 | python-3.x,pandas,elasticsearch,elasticsearch-py | I'd recommend not worrying about it and just loading everything into Elasticsearch. As long as your _ids are consistent, the existing documents will be overwritten instead of duplicated. So just be sure to specify an _id for each document and you are fine; the bulk helpers in the elasticsearch-py client all support setting an _id value for each document already (see the sketch below). | I have nearly 200,000 lines of tuples in my Pandas DataFrame. I injected that data into Elasticsearch. Now, when I run the program, it should check whether the data is already present in Elasticsearch and, if not, insert it. | 0 | 1 | 59
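A small sketch of the consistent-_id approach from the answer, using the bulk helper from elasticsearch-py. The index name, id column and the bare Elasticsearch() connection (local default cluster) are assumptions.

    import pandas as pd
    from elasticsearch import Elasticsearch
    from elasticsearch.helpers import bulk

    df = pd.DataFrame({"id": [1, 2], "value": ["a", "b"]})   # stand-in for the real dataframe
    es = Elasticsearch()                                     # newer clients may require an explicit URL

    actions = (
        {"_index": "mydata", "_id": row["id"], "_source": row.to_dict()}
        for _, row in df.iterrows()
    )
    bulk(es, actions)     # same _id -> the existing document is overwritten, not duplicated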
0 | 58,118,967 | 0 | 1 | 0 | 0 | 1 | true | 3 | 2019-09-26T12:35:00.000 | 1 | 1 | 0 | Close all variable explorer windows in Spyder | 58,116,958 | 1.2 | python,window,spyder | (Spyder maintainer here) We don't have a command to do that, sorry. | Does anyone know of a quick way to close all open variable explorer windows in Spyder? (i.e. the windows that open when you click on a variable).
In Matlab, you can close all pop-up windows with close all. Does anything like that exist for Spyder? | 0 | 1 | 603 |
0 | 58,117,742 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-09-26T12:46:00.000 | 0 | 1 | 0 | ParserError: Error tokenizing data. C error | 58,117,142 | 1.2 | python-3.x,csv,dataframe | Did you try saving these two .csv files as ANSI? I had problems with .csv files when they were saved as UTF-8 (a hedged read_csv alternative follows this row). | I'm using a script ScriptGlobal.py that will call and execute 2 other scripts, script1.py and script2.py, via exec(open("./script2.py").read()) and exec(open("./script1.py").read())
The output of my script1 is the creation of csv file.
df1.to_csv('file1.csv',index=False)
The output of my script2 is the creation of another csv file.
df2.to_csv('file2.csv',index=False)
In my ScriptGlobal.py I want to read the 2 files file1.csv and file2.csv, and then I get this error.
ParserError: Error tokenizing data. C error: Expected 1 fields in line 16, saw 3
Is there a solution that avoids doing the manipulation manually in Excel?
Thank you | 0 | 1 | 160 |
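If saving as ANSI is not an option, a hedged alternative is to tell pandas the encoding and delimiter explicitly when reading. The file names come from the question; the encoding and separator values are guesses to adapt to what the files actually contain.

    import pandas as pd

    # try an explicit encoding and separator; adjust to whatever the files actually use
    df1 = pd.read_csv("file1.csv", encoding="utf-8", sep=",")
    df2 = pd.read_csv("file2.csv", encoding="latin-1", sep=";")
    # in recent pandas, on_bad_lines="skip" can also skip malformed rows while debugging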
0 | 58,326,018 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2019-09-27T02:10:00.000 | 0 | 2 | 0 | What object types can be used for fetures in decision trees? Do I need to convert my "object" type to another type? | 58,126,842 | 0 | python,types,scikit-learn,decision-tree | I used one hot encoding to convert my categorical data because the scikit-learn decision tree packages do not support categorical data. | I imported a table using pandas and I was able to set independent variables (features) and my dependent variable (target). Two of my independent variables are "object type" and my others are int64 and float64. Do I need to convert my "object" type features to "class" or another type? How can I handle these in Sci-kit learn decision trees? | 0 | 1 | 346 |
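A minimal illustration of the one-hot-encoding approach mentioned in the answer, using pd.get_dummies; the column names and values are hypothetical.

    import pandas as pd
    from sklearn.tree import DecisionTreeClassifier

    df = pd.DataFrame({"color": ["red", "blue", "red"],       # object-typed (categorical) feature
                       "size": [1.0, 2.5, 3.0],
                       "target": [0, 1, 0]})

    X = pd.get_dummies(df[["color", "size"]], columns=["color"])  # one-hot encode the object column
    clf = DecisionTreeClassifier().fit(X, df["target"])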
0 | 58,137,213 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-09-27T15:05:00.000 | 0 | 2 | 0 | find the row of a matrix with the largest element in column i | 58,137,114 | 0 | python | To my knowledge, there is no builtin function in Python itself. I would recommend just building the utility yourself, since it's basically a max over the rows keyed by the specified column, which isn't hard to implement (see the sketch below). | Beginner python,
I want to create a method like: max(mat,i)= the row with the maximum value in the column i of matrix mat.
For example, I have a matrix a=[[1,2,3],[4,5,6],[7,8,9]], then the largest value of the i=3 column is 9 and so max(a,3)=[7,8,9].
I'm wondering if there is a builtin function in python? | 0 | 1 | 43 |
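A short sketch of the suggested helper. The column index here is zero-based; pass i-1 if you want 1-based counting as in the question's example.

    def max_row(mat, i):
        """Return the row of mat whose i-th (zero-based) column entry is largest."""
        return max(mat, key=lambda row: row[i])

    a = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
    print(max_row(a, 2))   # [7, 8, 9], i.e. the example's "column 3" with 1-based counting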
0 | 58,138,420 | 0 | 1 | 0 | 0 | 1 | true | 1 | 2019-09-27T16:26:00.000 | 0 | 2 | 0 | Change column from Pandas date object to python datetime | 58,138,314 | 1.2 | python,pandas,datetime | type(data_raw['pandas_date']) will always return pandas.core.series.Series, because the object data_raw['pandas_date'] is of type pandas.core.series.Series. What you want is to get the dtype, so you could just do data_raw['pandas_date'].dtype.
data_raw['pandas_date'] = pd.to_datetime(data_raw['pandas_date'])
This is correct, and if you do data_raw['pandas_date'].dtype again afterwards, you will see that it is datetime64[ns] (a short sketch follows this row).
I want to convert the whole column into date time object so I can extract and process year/month/day from each row as required.
I used pd.to_datetime(data_raw['pandas_date']) and it printed output with dtype: datetime64[ns] in the last line of the output. I assume that values were converted to datetime.
but when I run type(data_raw['pandas_date']) again, it still says pandas.core.series.Series and anytime I try to run .dt function on it, it gives me an error saying this is not a datetime object.
So, my question is - it looks like to_datetime function changed my data into datetime object, but how to I apply/save it to the pandas_date column? I tried
data_raw['pandas_date'] = pd.to_datetime(data_raw['pandas_date'])
but this doesn't work either, I get the same result when I check the type. Sorry if this is too basic. | 0 | 1 | 384 |
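A short, self-contained version of the assignment described in the answer, on a made-up frame, showing that the reassigned column exposes the .dt accessor.

    import pandas as pd

    data_raw = pd.DataFrame({"pandas_date": ["2011-01-01", "2011-02-15"]})
    data_raw["pandas_date"] = pd.to_datetime(data_raw["pandas_date"])

    print(data_raw["pandas_date"].dtype)        # datetime64[ns]
    print(data_raw["pandas_date"].dt.year)      # 2011, 2011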
0 | 58,139,262 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-09-27T17:37:00.000 | 1 | 2 | 0 | scipy ndimage has no attribute filter? | 58,139,189 | 0.099668 | python | Found it!
It should have been scipy.ndimage.filters (plural) instead of filter, i.e. b=scipy.ndimage.filters.gaussian_filter(i,sigma=10) (a sketch of the corrected call follows this row).
then i'm importing it as import scipy.ndimage
then i'm doing the following line b=scipy.ndimage.filter.gaussian_filter(i,sigma=10)
and I get AttributeError: module 'scipy.ndimage' has no attribute 'filter'
Anyone encountered this before? | 0 | 1 | 384 |
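For completeness, a hedged sketch of the corrected call; in current SciPy the function is also exposed directly as scipy.ndimage.gaussian_filter, which avoids the filters submodule entirely. The image array here is random stand-in data.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    i = np.random.rand(256, 256)            # stand-in for the real image array
    b = gaussian_filter(i, sigma=10)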
0 | 58,364,117 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-09-30T06:16:00.000 | 1 | 1 | 0 | pin and allocate tensorflow on specific NUMA node | 58,162,375 | 0.197375 | python,tensorflow,numa | no, answer ...
I'm using numactl --cpunodebind=1 --membind=1, which binds execution and memory allocation to NUMA node 1.
The NN models are trained via single-machine multi-GPU data parallelism using Keras' multi_gpu_model.
How can TF be instructed to allocate memory and execute the TF workers (merging weights) only on NUMA node 1? For performance reasons I'd like to prevent accessing memory through the QPI.
tf.device():
1) Does tf.device('/cpu:0') refer to a physical CPU or a physical core or is it simply a 'logical device' (thread|pool?) that is moved between all physical cores that are online?
2) How can the TF scheduler be influenced to map the logical device to a set of physical cores?
3) In the case of memory allocation on NUMA systems - does TF support allocating memory on specific nodes? Or do I have to fall back to set_mempolicy()/numactl (LINUX)? | 0 | 1 | 366 |
0 | 58,167,125 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2019-09-30T11:20:00.000 | 4 | 2 | 0 | Why does iloc use [] and not ()? | 58,166,876 | 0.379949 | python,pandas | In import pandas as pd, pd is the Python module.
In pd.DataFrame(...), if you pay attention to the naming convention, DataFrame is a class here.
df.reindex() is a method called on the instance itself.
df.columns has no brackets because it is an attribute of the object, not a method.
df.iloc is meant to get an item by index, so to show its index-able nature [] makes more sense here (a tiny sketch follows this row). | I am relatively new to Python, and it seems to me (probably because I don't understand) that the syntax is sometimes slightly inconsistent.
Suppose we are working with the pandas package import pandas as pd. Then any method within this package can be accessed by pd.method, i.e. pd.DataFrame(...). Now, there are certain objects within the pandas package that have certain methods, i.e. df.reindex() (notice circular brackets), or certain attributes, i.e. df.columns (notice no brackets).
My question is two fold:
First of all, is what I have said above correct?
Secondly, why does the iloc method not maintain the above syntax? If it is a method then surely I should use df.iloc(0,0) instead of df.iloc[0,0] to obtain the top left value of a data frame...
Thanks | 0 | 1 | 363 |
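A tiny sketch contrasting the three kinds of access discussed above, on a throwaway frame.

    import pandas as pd

    df = pd.DataFrame({"a": [1, 2], "b": [3, 4]})

    cols = df.columns          # attribute: no parentheses
    df2 = df.reindex([1, 0])   # method: called with parentheses
    top_left = df.iloc[0, 0]   # indexer: square brackets, like any subscripting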
0 | 58,176,536 | 0 | 1 | 0 | 0 | 1 | true | 0 | 2019-09-30T23:30:00.000 | 1 | 1 | 0 | replace do not work even with inplace=True | 58,176,409 | 1.2 | python,pandas | Without setting the regex flag to True, replace will look for an exact match.
To get a partial match, just use df.likes = df.likes.replace(' others', '', regex=True) (a runnable sketch follows this row). | The replace function failed to work even with inplace=True.
data:
0 245778 others
1 245778 others
2 245778 others
4 245778 others
code:
df.likes=df.likes.astype('str')
df.likes.replace('others','',inplace=True)
Result:
0 245778 others
1 245778 others
2 245778 others
4 245778 others
Expected Result:
0 245778
1 245778
2 245778
4 245778 | 0 | 1 | 43 |
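A self-contained version of the fix from the accepted answer, on a rebuilt copy of the question's data; only the column name and the ' others' suffix come from the post.

    import pandas as pd

    df = pd.DataFrame({"likes": ["245778 others"] * 4})

    # an exact-match replace does nothing here; a regex (partial) replace strips the suffix
    df["likes"] = df["likes"].replace(" others", "", regex=True)
    print(df["likes"].head())        # 245778 in every row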