GUI and Desktop Applications | A_Id | Networking and APIs | Python Basics and Environment | Other | Database and SQL | Available Count | is_accepted | Q_Score | CreationDate | Users Score | AnswerCount | System Administration and DevOps | Title | Q_Id | Score | Tags | Answer | Question | Web Development | Data Science and Machine Learning | ViewCount |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 71,804,014 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-10-30T01:43:00.000 | 0 | 2 | 0 | 'KeyError: (Timestamp('1993-01-29 00:00:00'), 'colName') | 58,617,655 | 0 | python-3.x,pandas,datetime,yahoo-finance | I think it may be related to the fact that Jan 29th, 1993 was a Saturday
Try shifting the date to the next trading day | I am trying to create a new column on my stock market data frame that was imported from Yahoo. I am dealing with just one symbol at the moment.
symbol['profit']= [[symbol.loc[ei, 'close1']-symbol.loc[ei, 'close']] if symbol[ei, 'shares']==1 else 0 for ei in symbol.index]
I am expecting to have a new column in the dataframe labeled 'profit', but instead I am getting this as an output:
KeyError: (Timestamp('1993-01-29 00:00:00), 'shares')
I imported the csv to a df with
parse_dates=True
index_col='Date' setting the 'Date' column as a datetimeindex which has been working. I am not sure how to overcome this roadblock at the moment. Any help would be appreciated! | 0 | 1 | 671 |
0 | 58,631,153 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-10-30T14:01:00.000 | 1 | 1 | 0 | How to deal with label that not included in training set when doing prediction | 58,627,102 | 1.2 | python,machine-learning | You could set a certain threshold for prediction the known classes. Your model should predict from the known classes only if it predicts it with a certain threshold value, otherwise, it will be classified as unknown.
The other (and less preferable) way to deal with this problem is to have another class called unknown even during training, and put some random faces as corresponding examples of this class. | For example, using supervised learning to classify 5 different people's faces.
But when tested on a 6th person's face that is not in the training set, the model will still predict one of the 5 people.
How can the model predict the 6th person onwards as unknown when it has not been trained on them before? | 0 | 1 | 539 |
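A minimal sketch of the thresholding idea from the answer above, assuming `model` is a trained classifier that outputs probabilities over the 5 known people; the threshold value is a made-up placeholder to be tuned on validation data.

```python
import numpy as np

THRESHOLD = 0.8  # hypothetical value; tune on a validation set

probs = model.predict(face_batch)        # shape (n_samples, 5)
confidence = probs.max(axis=1)
labels = probs.argmax(axis=1)
labels = np.where(confidence >= THRESHOLD, labels, -1)  # -1 marks "unknown"
```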
0 | 58,631,271 | 0 | 0 | 0 | 1 | 1 | false | 2 | 2019-10-30T14:47:00.000 | 3 | 2 | 0 | Python - Pandas read sql modifies float values columns | 58,627,984 | 0.291313 | python,sql,pandas | I've found a workaround for now.
Convert the column you want to a string, then after reading it with Pandas you can convert the string back to whatever type you want.
Even though this works, it doesn't feel right to do so. | I'm trying to use Pandas read_sql to validate some fields in my app.
When I read my DB using SQL Developer, I get these values:
603.29
1512.00
488.61
488.61
But reading the same SQL query using Pandas, the decimal places are ignored and added to the whole-number part. So I end up getting these values:
60329.0
1512.0
48861.0
48861.0
How can I fix it? | 0 | 1 | 309 |
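A sketch of the workaround described in the answer above: cast the numeric column to text on the database side, then convert it back after reading. The table/column names and the CAST syntax are illustrative; the exact cast function depends on your SQL dialect (e.g. TO_CHAR on Oracle).

```python
import pandas as pd

query = "SELECT CAST(amount AS VARCHAR(32)) AS amount FROM payments"  # hypothetical table
df = pd.read_sql(query, con=connection)
df['amount'] = df['amount'].astype(float)   # back to a numeric type in pandas
```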
0 | 58,649,241 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-10-31T18:10:00.000 | 0 | 2 | 0 | pandas saves my data only into one column in a csv-file | 58,649,100 | 0 | python,pandas,csv | It seems they are two columns already, as you gave example in your question
codes[W9KBJ-95X9T-ZC3KW-BJTJT-5FF3T]
date_posted[13:14 - 28. Okt. 2019]
Read this file in pandas again with an explicit "," delimiter, and you'll be able to read it in CSV format.
Let me know if you are still not clear.
Thanks. | I have 2 lists and I want to save them in a csv file but they always end up in one column.
dates=['13:14 - 28. Okt. 2019', '14:30 - 27. Okt. 2019', '11:33 - 26. Okt. 2019', '15:54 - 25. Okt. 2019']
codes=['W9KBJ-95X9T-ZC3KW-BJTJT-5FF3T', 'CZWJJ-X6XHJ-9CJC5-JTT3J-WZ6WC', 'KZK3T-K6RSJ-ZWTCK-JTJ3T-T3HJJ', 'CHCBT-TF6HB-ZC3WC-BT333-KBR3B']
I checked the documentation but without success.
def save_as_csv(codes, dates, save_location):
raw_data = {'codes': codes, 'date_posted': dates}
df = pd.DataFrame(data=raw_data)
df.to_csv(save_location, columns=['codes', 'date_posted'], index=False)
-----
codes,date_posted
W9KBJ-95X9T-ZC3KW-BJTJT-5FF3T,13:14 - 28. Okt. 2019
CZWJJ-X6XHJ-9CJC5-JTT3J-WZ6WC,14:30 - 27. Okt. 2019
KZK3T-K6RSJ-ZWTCK-JTJ3T-T3HJJ,11:33 - 26. Okt. 2019
CHCBT-TF6HB-ZC3WC-BT333-KBR3B,15:54 - 25. Okt. 2019
This is my result, but they are all in one column. | 0 | 1 | 141 |
0 | 58,717,599 | 0 | 0 | 1 | 0 | 1 | false | 0 | 2019-11-02T02:40:00.000 | 1 | 1 | 0 | How to plot a very large audio file with low latency and time to save file? | 58,667,844 | 0.197375 | python,matplotlib,audio,plot,julia | Assuming that your audio file has a sample rate of 44 kHz (which is the most common sampling rate), then there are 60*60*44_000 = 158400000 samples per hour. This number should be compared to a high-resolution screen which is ~4000 pixels wide (4k resolution). If you would print time series with a 600 dpi printer, 1 hour would be 60*60*44_000 / (600 * 2.54 * 100) = 1039 meters long if every sample should be resolved. (so please don't print this :-))
Instead have a look at PyPlot.jl functions psd (power spectral density) and specgram (spectrogram) which are often used to visualize frequencies present in an audio recording. | I have an audio file sampled at 44 kbps and it has a few hours of recording. I would like to view the raw waveform in a plot (figure) with something like matplotlib (or GR in Julia) and then to save the figure to disk. Currently this takes a considerable amount of time and would like to reduce that time.
What are some common strategies to do so? Are there any special circumstances to consider on approaches of reducing the number of points in the figure? I expect that some type of subsampling of the time points will be needed and that some interpolation or smoothing will be used. (Python or Julia solutions would be ideal but other languages like R or MATLAB are similar enough to understand the approach.) | 0 | 1 | 144 |
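Besides psd/specgram, a common Python trick for plotting hours of audio quickly is to downsample to a per-block min/max envelope, so only a few thousand points are actually drawn. A hedged matplotlib sketch, assuming `samples` is a 1-D NumPy array of the raw audio:

```python
import numpy as np
import matplotlib.pyplot as plt

block = 1000                                # samples per drawn point (arbitrary choice)
n = len(samples) // block
chunks = samples[:n * block].reshape(n, block)
lo, hi = chunks.min(axis=1), chunks.max(axis=1)

fig, ax = plt.subplots(figsize=(12, 3))
ax.fill_between(np.arange(n), lo, hi, linewidth=0)
fig.savefig("waveform.png", dpi=150)        # fast, because only ~n points are drawn
```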
0 | 58,670,829 | 0 | 0 | 0 | 1 | 1 | true | 2 | 2019-11-02T08:54:00.000 | 2 | 1 | 0 | Is there a way to append data to an excel file without reading its contents, in python? | 58,669,599 | 1.2 | python,excel,pandas | It isn't possible to just append to an xlsx file like a text file. An xlsx file is a collection of XML files in a Zip container so to append data you would need to unzip the file, read the XML data, add the new data, rewrite the XML file(s) and then rezip them.
This is effectively what OpenPyXL does. | I have a huge master data dump excel file. I have to append data to it on a regular basis. The data to be appended is stored as a pandas dataframe. Is there a way to append this data to the master dump file without having to read its contents.
The dump file is huge and takes a considerable amount of time for the program to load the file (using pandas).
I have already tried openpyxl and XlsxWriter but it didn't work. | 0 | 1 | 152 |
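A sketch of appending rows with openpyxl, as referenced in the answer above; the whole workbook is still read and rewritten under the hood (there is no true append for xlsx). File and sheet choices here are hypothetical.

```python
from openpyxl import load_workbook

wb = load_workbook("master_dump.xlsx")
ws = wb.active
for row in new_data_df.itertuples(index=False):   # new_data_df is the pandas DataFrame to append
    ws.append(list(row))
wb.save("master_dump.xlsx")
```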
0 | 63,735,920 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-11-03T00:04:00.000 | 0 | 1 | 0 | Facing this error : AttributeError: Can't get attribute 'DeprecationDict' on <module 'sklearn.utils.deprecation' | 58,676,350 | 0 | python,scikit-learn | You used a new version of scikit-learn to load a model that was trained by an older version of scikit-learn.
Therefore, the options are:
Retrain the model with the current version of scikit-learn if you have a training text and data.
Or go back to the older scikit-learn version reported in the warning message
AttributeError: Can't get attribute 'DeprecationDict' on | 0 | 1 | 404 |
0 | 58,684,334 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-11-03T14:51:00.000 | 0 | 1 | 0 | pandas vs numpy packages in python | 58,681,323 | 0 | python-3.x | Yes, there is a speed difference.
Feel free to post your timeit benchmark figures. | Are pandas iterrows fast compare to np.where on a smaller dataset? I heard numpy is always efficient compared to pandas?
I was surprised to see that when I used iterrow in my code vs numpy's np.where on a small dataset, iterrows execution was fast. | 0 | 1 | 31 |
0 | 58,685,507 | 0 | 0 | 0 | 0 | 1 | true | 2 | 2019-11-03T23:03:00.000 | 3 | 2 | 0 | Project organization with Tensorflow.keras. Should one subclass tf.keras.Model? | 58,685,407 | 1.2 | python,tensorflow,tensorflow-estimator,tf.keras | Subclass only if you absolutely need to. I personally prefer following the following order of implementation. If the complexity of the model you are designing, can not be achieved using the first two options, then of course subclassing is the only option left.
tf.keras Sequential API
tf.keras Functional API
Subclass tf.keras.Model | I'm using Tensorflow 1.14 and the tf.keras API to build a number (>10) of different neural networks. (I'm also interested in the answers to this question using Tensorflow 2). I'm wondering how I should organize my project.
I convert the keras models into estimators using tf.keras.estimator.model_to_estimator and Tensorboard for visualization. I'm also sometimes using model.summary(). Each of my models has a number (>20) of hyperparameters and takes as input one of three types of input data. I sometimes use hyperparameter optimization, such that I often manually delete models and use tf.keras.backend.clear_session() before trying the next set of hyperparameters.
Currently I'm using functions that take hyperparameters as arguments and return the respective compiled keras model to be turned into an estimator. I use three different "Main_Datatype.py" scripts to train models for the three different input data types. All data is loaded from .tfrecord files and there is an input function for each data type, which is used by all estimators taking that type of data as input. I switch between models (i.e. functions returning a model) in the Main scripts. I also have some building blocks that are part of more than one model, for which I use helper functions returning them, piecing together the final result using the Keras functional API.
The slight incompatibilities of the different models are beginning to confuse me and I've decided to organise the project using classes. I'm planning to make a class for each model that keeps track of hyperparameters and correct naming of each model and its model directory. However, I'm wondering if there are established or recommended ways to do this in Tensorflow.
Question: Should I be subclassing tf.keras.Model instead of using functions to build models or python classes that encapsulate them? Would subclassing keras.Model break (or require much work to enable) any of the functionality that I use with keras estimators and tensorboard? I've seen many issues people have with using custom Model classes and am somewhat reluctant to put in the work only to find that it doesn't work for me. Do you have other suggestions how to better organize my project?
Thank you very much in advance. | 0 | 1 | 349 |
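For reference, the three styles mentioned in the answer above, in increasing order of flexibility (layer sizes are arbitrary placeholders):

```python
import tensorflow as tf

# 1. Sequential API
seq = tf.keras.Sequential([tf.keras.layers.Dense(32, activation="relu"),
                           tf.keras.layers.Dense(1)])

# 2. Functional API
inp = tf.keras.Input(shape=(16,))
hidden = tf.keras.layers.Dense(32, activation="relu")(inp)
func = tf.keras.Model(inp, tf.keras.layers.Dense(1)(hidden))

# 3. Subclassing tf.keras.Model
class MyModel(tf.keras.Model):
    def __init__(self):
        super().__init__()
        self.hidden = tf.keras.layers.Dense(32, activation="relu")
        self.out = tf.keras.layers.Dense(1)

    def call(self, x):
        return self.out(self.hidden(x))
```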
0 | 69,961,017 | 0 | 1 | 0 | 0 | 1 | false | 28 | 2019-11-04T06:52:00.000 | 1 | 2 | 0 | Force Anaconda to install tensorflow 1.14 | 58,688,481 | 0.099668 | python,python-3.x,tensorflow,anaconda,version | first find the python version of tensorflow==1.14.0, then find the Anaconda version by python version.
e.g. tensorflow 1.14.0 can work well on python36, and Anaconda 3.5.1 has python36. So install the Anaconda 3.5.1, then install tensorflow==1.14.0 by pip | Now, the official TensorFlow on Anaconda is 2.0. My question is how to force Anaconda to install an earlier version of TensorFlow instead. So, for example, I would like Anaconda to install TensorFlow 1.14 as plenty of my projects are depending on this version. | 0 | 1 | 51,822 |
0 | 58,690,132 | 0 | 0 | 0 | 0 | 1 | true | 2 | 2019-11-04T07:33:00.000 | 2 | 1 | 0 | Difference between context-sensitive tensors and word vectors | 58,688,938 | 1.2 | python,nlp,spacy | Word vectors are stored in a big table in the model and when you look up cat, you always get the same vector from this table.
The context-sensitive tensors are dense feature vectors computed by the models in the pipeline while analyzing the text. You will get different vectors for cat in different texts. If you use en_core_web_sm, the token cat in I have a cat will not have the same vector as in The cat is black. Having the context-sensitive tensors available when the model doesn't include word vectors lets the similarity functions work to some degree, but the results are very different than with word vectors.
For most purposes, you probably want to use the _md or _lg model with word vectors. | I am currently working in python with spacy and there are different pre-trained models like the en_core_web_sm or the en_core_web_md. One of them is using words vectors to find word similarity and the other one is using context-sensitive tensors.
What is the difference between using context-sensitive tensors and using word vectors? And what are context-sensitive tensors exactly? | 0 | 1 | 492 |
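A quick way to see the difference described in the answer, assuming both models have been downloaded (e.g. python -m spacy download en_core_web_md); exact similarity values will vary:

```python
import spacy

nlp_md = spacy.load("en_core_web_md")               # has static word vectors
a, b = nlp_md("I have a cat"), nlp_md("The cat is black")
print(a[3].similarity(b[1]))                        # "cat" vs "cat": 1.0, same table lookup

nlp_sm = spacy.load("en_core_web_sm")               # no vectors; falls back to context tensors
a, b = nlp_sm("I have a cat"), nlp_sm("The cat is black")
print(a[3].similarity(b[1]))                        # below 1.0, context-dependent (spaCy warns about this)
```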
0 | 58,692,931 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-11-04T10:30:00.000 | 1 | 1 | 0 | How can deep learning models found via cross validation be combined? | 58,691,535 | 0.197375 | python,keras,scikit-learn,deep-learning,cross-validation | A (more) correct reflection of your performance on your dataset would be to average the N fold-results on your validation set.
As per the three resulting models, you can have an average prediction (voting ensemble) for a new data point. In other words, whenever a new data point arrives, predict with all your three models and average the results.
Please note a very important thing: The purpose of K-fold cross-validation is model checking, not model building. By using K-fold cross-validation you ensure that when you randomly split your data, say in an 80-20 percent fashion, you do not create a very easy test set. Creating a very easy test set would lead the developer to consider that he/she has a very good model, and when subjected to test data the model would perform much worse.
In essence and eventually, what you would want to do is to take all the data that you are using for both train and test and using it only for training. | I'm training a keras deep learning model with 3 fold cross validation. For every fold I'm receiving a best performing model and in the end my algorithm is giving out the combined score of the three best models.
My question now is whether there is a possibility to combine the 3 models in the end, or whether it would be a legitimate solution to just take the best performing one of those 3 models? | 1 | 1 | 415 |
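A minimal sketch of the voting/averaging ensemble suggested in the answer above, where `models` holds the Keras models kept from the three folds:

```python
import numpy as np

def ensemble_predict(models, x):
    preds = [m.predict(x) for m in models]
    return np.mean(preds, axis=0)        # average the predicted probabilities
```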
0 | 58,701,942 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-11-04T21:03:00.000 | 1 | 1 | 0 | Pandas, .agg('sum') vs .sum() | 58,701,025 | 0.197375 | python,pandas | The value in column d5['Name'] might contains null values.
Groupby will ignore those rows with None in d5['Name']. | At the end of my code I sum by dataframe below, then export to csv:
sumbyname = d5.groupby(['Name'])['Value'].agg('sum')
I sum the value of each person by name. Now if I sum this column in Excel using SUM, I get +12
Now if I do d5['Value'].sum() in my code to find the total sum, I get -11.
Is there a difference in the way i'm summing these 2 values? I thought they should be the same. | 0 | 1 | 658 |
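A quick check for the explanation above: rows whose Name is missing are dropped by groupby, so their contribution explains the gap between the grouped total and d5['Value'].sum():

```python
missing = d5['Name'].isna()
print(missing.sum())                          # number of rows without a name
print(d5.loc[missing, 'Value'].sum())         # their contribution to the overall total
print(d5.loc[~missing, 'Value'].sum())        # should match sumbyname.sum()
```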
0 | 58,704,122 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2019-11-05T03:30:00.000 | 0 | 2 | 0 | Python Pandas dataFrame - Columns selection | 58,704,076 | 0 | python,pandas,dataframe | No difference
pd.crosstab(train_df['ColA'], train_df['ColB']) is recommended to prevent possible errors.
For example, if you have a column named count and you type train_df.count, you get the DataFrame.count method instead of the column, which leads to errors. train_df['count'] won't give an error.
I obtain the same results when I code:
pd.crosstab(train_df['ColA'], train_df['ColB'])
or
pd.crosstab(train_df.ColA, train_df.ColB)
Is there any difference in these 2 ways of selecting columns?
When I request to print the type it's the same : pandas.core.series.Series | 0 | 1 | 65 |
0 | 58,704,174 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2019-11-05T03:30:00.000 | 0 | 2 | 0 | Python Pandas dataFrame - Columns selection | 58,704,076 | 0 | python,pandas,dataframe | If you only want to select a single column, there is no difference between the two ways.
However, the dot notation doesn't allow you to select multiple columns, whereas you can use dataframe[['col1', 'col2']] to select multiple columns (which returns a pandas.core.frame.DataFrame instead of a pandas.core.series.Series). | I have a Pandas dataFrame object train_df with say a column called "ColA" and a column "ColB". It has been loaded from a csv file with columns header using read_csv
I obtain the same results when I code:
pd.crosstab(train_df['ColA'], train_df['ColB'])
or
pd.crosstab(train_df.ColA, train_df.ColB)
Is there any difference in these 2 ways of selecting columns?
When I request to print the type it's the same : pandas.core.series.Series | 0 | 1 | 65 |
0 | 58,704,657 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2019-11-05T04:35:00.000 | 1 | 1 | 0 | Does Tensorflow Use the Best Weights or Most Recent Weights When Testing in the Same Session? | 58,704,575 | 1.2 | python-3.x,tensorflow | It doesn't.
The model relies on a single set of weights, which are variables. You can store the best model with a saver and save the training progress as a separate checkpoint.
Other option would be to have a duplicate set of variables and copy weights once a better model is found.
Yet, it is normally uncommon to judge whether a model at epoch X is better than at epoch Y from training metrics, since training accuracy might be misleading (read: overfitting). Therefore, one usually evaluates the model after every epoch and saves the checkpoint if performance got better during evaluation. This way there is no need to maintain multiple copies of the same model.
Say I have an LSTM network in Tensorflow, and am training it using the Adam Optimizer to minimize a cost function by feeding X and Y variables a set of X and Y dict's during training, and then IN THE SAME SESSION, feeding the variables new X and Y dict's for testing, does Tensorflow automatically use the best model found during it's training (i.e. using the weights that brought about the lowest cost value during training), or just the most recent one in it's run (i.e. the latest epoch)?
Wondering if I need to set up a model.saver function to capture the best model as a new lower cost value is reached, close the current session, and re-open a new one using that saved model, OR if I can just assume that when I test in the same session as training, it will use the best model.
Thanks! | 0 | 1 | 30 |
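A sketch of the saver-based suggestion in TF1-style session code (matching the question's setup); `train_op`, `cost` and the feed dicts are assumed to exist already from the question's pipeline.

```python
import tensorflow as tf

saver = tf.train.Saver(max_to_keep=1)
best_val = float("inf")
for epoch in range(num_epochs):
    sess.run(train_op, feed_dict=train_feed)
    val_loss = sess.run(cost, feed_dict=val_feed)
    if val_loss < best_val:                       # only keep the best model seen so far
        best_val = val_loss
        saver.save(sess, "best_model/ckpt")
# Before testing in the same (or a new) session:
# saver.restore(sess, "best_model/ckpt")
```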
0 | 63,646,557 | 0 | 1 | 0 | 0 | 1 | false | 1 | 2019-11-05T06:27:00.000 | 0 | 1 | 0 | Cannot Import name 'spaces' from gym | 58,705,609 | 0 | python-3.x,python-import,importerror,openai-gym | There are probably multiple reasons for this error message. On Windows 10, this can be due to access permissions to gym-related folder. Make sure your Windows user account is granted access to gym and/or python libraries more broadly. | Everything was working fine, but suddenly running a python task which imports gym and from gym imports spaces leads to an error(though it was working fine before):
ImportError: cannot import name 'spaces'
I have tried reinstalling gym, but then my tensorflow needs the bleach version to be 1.5 while gym requires an upgraded version.
I tried upgrading tensorflow from 1.8.0 to 1.12.0; this again throws an error:
ImportError: libcublas.so.10.0: cannot open shared object file: No such file or directory | 0 | 1 | 796 |
0 | 59,372,216 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-11-05T12:18:00.000 | 0 | 1 | 0 | How to set prunable layers for tfmot.sparsity.keras.prune_low_magnitude? | 58,711,222 | 1.2 | python,machine-learning,keras,tensorflow2.0,pruning | In the end I found that you can also apply prune_low_magnitude() per layer.
So the workaround would be to define a list containing the names or types of the layers that shall be pruned, and iterate the layer-wise pruning over all layers in this list. | I am applying the pruning function from tensorflow_model_optimization, tfmot.sparsity.keras.prune_low_magnitude() to MobileNetV2.
Is there any way to set only some layers of the model to be prunable? For training, there is a method "set_trainable", but I haven't found any equivalent for pruning.
Any ideas or comments will be appreciated! :) | 0 | 1 | 321 |
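A sketch of the per-layer workaround described in the answer: clone the model and wrap only whitelisted layers. The layer names below are hypothetical and should be replaced by the MobileNetV2 layer names you actually want to prune.

```python
import tensorflow as tf
import tensorflow_model_optimization as tfmot

PRUNABLE = {"block_15_project", "block_16_project", "Conv_1"}   # hypothetical names

def maybe_prune(layer):
    if layer.name in PRUNABLE:
        return tfmot.sparsity.keras.prune_low_magnitude(layer)
    return layer

pruned_model = tf.keras.models.clone_model(model, clone_function=maybe_prune)
```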
0 | 58,724,694 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2019-11-05T12:45:00.000 | 1 | 2 | 0 | Cluster identification with NN | 58,711,675 | 0.099668 | python,tensorflow,neural-network,cluster-analysis | If you want to treat clustering as a classification problem, then you can try to train the network to predict whether two points belong to the same clusters or to different clusters.
This does not ultimately solve your problems, though - to cluster the data, this labeling needs to be transitive (which it likely will not be) and you have to label n² pairs, which is expensive.
Furthermore, because your clustering is density-based, your network may need to know about further data points to judge which ones should be connected... | I have a dataframe containing the coordinates of millions of particles which I want to use to train a Neural network. These particles build individual clusters which are already identified and labeled; meaning that every particle is already assigned to its correct cluster (this assignment is done by a density estimation but for my purpose not that relevant).
The challenge is now to build a network which does this clustering after learning from the huge data set. There are also a few more features in the dataframe like cluster size, number of particles in a cluster, etc.
Since this is not a classification problem but more of a cluster-identification challenge, what kind of neural network should I use? I also have problems building this network: for example, for a CNN which classifies whether there is a dog or a cat in the picture, the output is obviously binary, so the last layer just consists of two outputs which represent the probability of being 1 or 0. But how can I implement the last layer when I want to identify clusters?
During my research I heard about self-organizing maps. Would these networks do the job?
thank you | 0 | 1 | 97 |
0 | 58,712,729 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2019-11-05T12:45:00.000 | 0 | 2 | 0 | Cluster identification with NN | 58,711,675 | 0 | python,tensorflow,neural-network,cluster-analysis | These particles build individual clusters which are already identified
and labeled; meaning that every particle is already assigned to its
correct cluster (this assignment is done by a density estimation but
for my purpose not that relevant).
the challenge is now to build a network which does this clustering
after learning from the huge data.
Sounds pretty much like a classification problem to me. Images themselves can build clusters in their image space (e.g. a vector space of dimension width * height * RGB).
since this is not a classification problem but more a identification
of clusters-challenge what kind of neural network should i use?
You have data of coordinates, you have labels. Start with a simple fully connected single/multi-layer-perceptron i.e. vanilla NN, with as many outputs as number of clusters and softmax-activation function.
There are tons of blogs and tutorials for Deep Learning libraries like keras out there in the internet. | I have a dataframe containing the coordinates of millions of particles which I want to use to train a Neural network. These particles build individual clusters which are already identified and labeled; meaning that every particle is already assigned to its correct cluster (this assignment is done by a density estimation but for my purpose not that relevant).
The challenge is now to build a network which does this clustering after learning from the huge data set. There are also a few more features in the dataframe like cluster size, number of particles in a cluster, etc.
Since this is not a classification problem but more of a cluster-identification challenge, what kind of neural network should I use? I also have problems building this network: for example, for a CNN which classifies whether there is a dog or a cat in the picture, the output is obviously binary, so the last layer just consists of two outputs which represent the probability of being 1 or 0. But how can I implement the last layer when I want to identify clusters?
During my research I heard about self-organizing maps. Would these networks do the job?
thank you | 0 | 1 | 97 |
0 | 58,715,086 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2019-11-05T15:56:00.000 | 0 | 1 | 0 | Matrix multiplication using numpy array | 58,715,034 | 1.2 | python,numpy,regression | You are feeding them in the wrong order
Instead of feeding (100,2) * (2,100), you are feeding (2,100) * (100,2) | I am trying to do a linear regression using Matrix multiplication.
X is the feature matrix, and I have 100 data points. As per the normal equation, the dot product of X and of the transpose of X is required.
Having added a column of ones as required, the shape of X is 100×2 while for the transpose of X it is 2×100.
However, when I am doing the dot product, the result (which is given in the book) comes accordingly, a 2×2 matrix. Shouldn't it be a 100×100 matrix as per laws of matrix multiplication using dot product?
Conceptually, where am I going wrong? | 0 | 1 | 48 |
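The normal equation worked through with the shapes from the question, to make the 2×2 shape concrete:

```python
import numpy as np

X = np.hstack([np.ones((100, 1)), np.random.rand(100, 1)])   # (100, 2) after adding the column of ones
y = np.random.rand(100, 1)

XtX = X.T @ X                                  # (2, 100) @ (100, 2) -> (2, 2), as in the book
theta = np.linalg.inv(XtX) @ X.T @ y           # (2, 1): intercept and slope
```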
0 | 58,717,243 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-11-05T18:11:00.000 | 0 | 3 | 0 | How to resize image by mainitaining aspect ratio in python3? | 58,717,150 | 1.2 | python-3.x,numpy | Well, you choose which dimension you want to enforce and then you adjust the other one by calculating either new_width = new_height*aspect_ratio or new_height = new_width/aspect_ratio.
You might want to round those numbers and convert them to int too. | I have an image with image.shape=(20,10) and I want to resize this image so that the new image size would be image.size = 90.
I want to use np.resize(image,(new_width, new_height)), but how can I calculate new_width and new_height so that the aspect ratio stays the same as in the original image? | 0 | 1 | 52 |
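A sketch of the calculation from the answer, using the numbers in the question (shape (20, 10), target of about 90 pixels). Note that np.resize repeats or truncates data rather than interpolating, so for real images cv2.resize or PIL is usually preferable.

```python
import numpy as np

h, w = image.shape                 # 20, 10
aspect = w / h                     # 0.5
target_size = 90                   # desired total number of pixels
new_h = int(round(np.sqrt(target_size / aspect)))   # ~13
new_w = int(round(new_h * aspect))                  # ~7, so 13 x 7 = 91 pixels
resized = np.resize(image, (new_h, new_w))
```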
0 | 58,730,741 | 0 | 0 | 0 | 0 | 1 | false | 3 | 2019-11-06T07:45:00.000 | 2 | 1 | 0 | How to set a threshold value from signal to be processed in wavelet thresholding in python | 58,725,295 | 0.379949 | python,wavelet | There are some helpful graphics on pywt webpage that help visualize what these thresholds are and what they do.
The threshold applies to the coefficients as opposed to your raw signal. So for denoising, this will typically be the last couple of entries returned by pywt.wavedec that will need to be zeroed/thresholded.
An initial guess could be 0.5*np.std of each coefficient level you want to threshold. | I'm trying to denoise my signal using the discrete wavelet transform in Python with the pywt package. But I cannot decide what threshold value I should set in the pywt.threshold() function
I have no idea what the best threshold value is in order to reconstruct a signal with minimal noise
I used ordinary code:
pywt.threshold(mysignal, threshold, 'soft')
Yes, I intend to do soft thresholding
I want to know if the threshold value can be determined by looking at my signal, or in some other way | 0 | 1 | 956 |
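A sketch of per-level soft thresholding with the rule of thumb from the answer (0.5 * std of each detail-coefficient level); the wavelet and decomposition level are arbitrary choices here.

```python
import numpy as np
import pywt

coeffs = pywt.wavedec(mysignal, 'db4', level=4)
denoised = [coeffs[0]]                         # keep approximation coefficients untouched
for c in coeffs[1:]:
    denoised.append(pywt.threshold(c, 0.5 * np.std(c), mode='soft'))
reconstructed = pywt.waverec(denoised, 'db4')
```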
0 | 65,612,208 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2019-11-06T09:06:00.000 | 0 | 2 | 0 | Should we stop training discriminator while training generator in CycleGAN tutorial? | 58,726,483 | 0 | python,tensorflow,deep-learning,generative-adversarial-network | For the training to happen in an adversarial way the gradients of the discriminator and generator networks should be updated separately. The discriminator becomes stronger because generator produces more realistic samples and vise versa. If you update these networks together the "adversarial" training is not happening - to the best of my knowledge you are unlikely to obtain pleasing synthetic samples this way. | In the code provided by tensorlfow tutorial for CycleGAN, they have trained discriminator and generator simultaneously.
def train_step(real_x, real_y):
# persistent is set to True because the tape is used more than
# once to calculate the gradients.
with tf.GradientTape(persistent=True) as tape:
# Generator G translates X -> Y
# Generator F translates Y -> X.
fake_y = generator_g(real_x, training=True)
cycled_x = generator_f(fake_y, training=True)
fake_x = generator_f(real_y, training=True)
cycled_y = generator_g(fake_x, training=True)
# same_x and same_y are used for identity loss.
same_x = generator_f(real_x, training=True)
same_y = generator_g(real_y, training=True)
disc_real_x = discriminator_x(real_x, training=True)
disc_real_y = discriminator_y(real_y, training=True)
disc_fake_x = discriminator_x(fake_x, training=True)
disc_fake_y = discriminator_y(fake_y, training=True)
# calculate the loss
gen_g_loss = generator_loss(disc_fake_y)
gen_f_loss = generator_loss(disc_fake_x)
total_cycle_loss = calc_cycle_loss(real_x, cycled_x) + calc_cycle_loss(real_y, cycled_y)
# Total generator loss = adversarial loss + cycle loss
total_gen_g_loss = gen_g_loss + total_cycle_loss + identity_loss(real_y, same_y)
total_gen_f_loss = gen_f_loss + total_cycle_loss + identity_loss(real_x, same_x)
disc_x_loss = discriminator_loss(disc_real_x, disc_fake_x)
disc_y_loss = discriminator_loss(disc_real_y, disc_fake_y)
# Calculate the gradients for generator and discriminator
generator_g_gradients = tape.gradient(total_gen_g_loss,
generator_g.trainable_variables)
generator_f_gradients = tape.gradient(total_gen_f_loss,
generator_f.trainable_variables)
discriminator_x_gradients = tape.gradient(disc_x_loss,
discriminator_x.trainable_variables)
discriminator_y_gradients = tape.gradient(disc_y_loss,
discriminator_y.trainable_variables)
# Apply the gradients to the optimizer
generator_g_optimizer.apply_gradients(zip(generator_g_gradients,
generator_g.trainable_variables))
generator_f_optimizer.apply_gradients(zip(generator_f_gradients,
generator_f.trainable_variables))
discriminator_x_optimizer.apply_gradients(zip(discriminator_x_gradients,
discriminator_x.trainable_variables))
discriminator_y_optimizer.apply_gradients(zip(discriminator_y_gradients,
discriminator_y.trainable_variables))
But while training a GAN network we need to stop training discriminator when we are training generator network.
What's the benefit of using it? | 0 | 1 | 413 |
0 | 58,727,960 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2019-11-06T09:06:00.000 | 0 | 2 | 0 | Should we stop training discriminator while training generator in CycleGAN tutorial? | 58,726,483 | 0 | python,tensorflow,deep-learning,generative-adversarial-network | In the GANs, you don't stop training D or G. They are trained simultaneously.
Here they first calculate the gradient values for each network (not to change D or G before calculating the current loss), then update the weights using those.
It's not clear in your question, what's the benefit of what? | In the code provided by tensorlfow tutorial for CycleGAN, they have trained discriminator and generator simultaneously.
def train_step(real_x, real_y):
# persistent is set to True because the tape is used more than
# once to calculate the gradients.
with tf.GradientTape(persistent=True) as tape:
# Generator G translates X -> Y
# Generator F translates Y -> X.
fake_y = generator_g(real_x, training=True)
cycled_x = generator_f(fake_y, training=True)
fake_x = generator_f(real_y, training=True)
cycled_y = generator_g(fake_x, training=True)
# same_x and same_y are used for identity loss.
same_x = generator_f(real_x, training=True)
same_y = generator_g(real_y, training=True)
disc_real_x = discriminator_x(real_x, training=True)
disc_real_y = discriminator_y(real_y, training=True)
disc_fake_x = discriminator_x(fake_x, training=True)
disc_fake_y = discriminator_y(fake_y, training=True)
# calculate the loss
gen_g_loss = generator_loss(disc_fake_y)
gen_f_loss = generator_loss(disc_fake_x)
total_cycle_loss = calc_cycle_loss(real_x, cycled_x) + calc_cycle_loss(real_y, cycled_y)
# Total generator loss = adversarial loss + cycle loss
total_gen_g_loss = gen_g_loss + total_cycle_loss + identity_loss(real_y, same_y)
total_gen_f_loss = gen_f_loss + total_cycle_loss + identity_loss(real_x, same_x)
disc_x_loss = discriminator_loss(disc_real_x, disc_fake_x)
disc_y_loss = discriminator_loss(disc_real_y, disc_fake_y)
# Calculate the gradients for generator and discriminator
generator_g_gradients = tape.gradient(total_gen_g_loss,
generator_g.trainable_variables)
generator_f_gradients = tape.gradient(total_gen_f_loss,
generator_f.trainable_variables)
discriminator_x_gradients = tape.gradient(disc_x_loss,
discriminator_x.trainable_variables)
discriminator_y_gradients = tape.gradient(disc_y_loss,
discriminator_y.trainable_variables)
# Apply the gradients to the optimizer
generator_g_optimizer.apply_gradients(zip(generator_g_gradients,
generator_g.trainable_variables))
generator_f_optimizer.apply_gradients(zip(generator_f_gradients,
generator_f.trainable_variables))
discriminator_x_optimizer.apply_gradients(zip(discriminator_x_gradients,
discriminator_x.trainable_variables))
discriminator_y_optimizer.apply_gradients(zip(discriminator_y_gradients,
discriminator_y.trainable_variables))
But while training a GAN network we need to stop training discriminator when we are training generator network.
What's the benefit of using it? | 0 | 1 | 413 |
0 | 62,236,017 | 0 | 0 | 0 | 0 | 1 | false | 6 | 2019-11-06T13:14:00.000 | 2 | 2 | 0 | What is the difference between interpolation and imputation? | 58,731,044 | 0.197375 | python-3.x,pandas | I will answer the second part of your question i.e. when to use what.
We use both techniques depending upon the use case.
Imputation:
If you are given a dataset of patients with a disease (say pneumonia) and there is a feature called body temperature, then if there are null values for this feature you can replace them with the average value, i.e. imputation.
Interpolation:
If you are given a dataset of the share price of a company, you know that every Saturday and Sunday the market is closed, so those are missing values. These values can be filled with the average of Friday's and Monday's values, i.e. interpolation.
So, you can choose the technique depending upon the use case. | I just learned that you can handle missing data/ NaN with imputation and interpolation, what i just found is interpolation is a type of estimation, a method of constructing new data points within the range of a discrete set of known data points while imputation is replacing the missing data of the mean of the column. But is there any differences more than that? When is the best practice to use each of them? | 0 | 1 | 4,779 |
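The two techniques from the answer, side by side on a small pandas Series:

```python
import pandas as pd

s = pd.Series([1.0, None, 3.0, None, 7.0])

imputed = s.fillna(s.mean())      # imputation: every gap gets the column mean
interpolated = s.interpolate()    # interpolation: gaps follow the neighbouring points
```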
0 | 58,952,487 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-11-06T13:30:00.000 | 0 | 1 | 0 | How to use GCP vision trained model in Object Detection API | 58,731,323 | 0 | python-3.x,tensorflow,google-cloud-platform,object-detection-api | You need to save the model in the local drive from the cloud and load this model into the object detection api, the function load_model usually downloads pre-trained model from URL, you need to give the path of the local saved model here. The API that you are looking for is tf.saved_model.load, also update the path for labels correctly in PATH_TO_LABELS. | I trained a object detection model through vision in GCP, How can i use that model in normal tensorflow object detection api provided by google in GitHub?
It gives 3 options for exporting the model which one to use & how? | 0 | 1 | 51 |
0 | 58,738,193 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-11-06T19:18:00.000 | 0 | 1 | 0 | Is there a way to mutate a neural network in tensorflow/keras? | 58,737,140 | 0 | python,tensorflow,keras,evolutionary-algorithm | Your question is a little bit too vague, but I would assume that coding your own evolutionary algorithm shouldn’t be too difficult for you given what you have done with neural networks so far.
A good starting point for you would be to research the following EA concepts…
Encoding.
Fitness.
Crossover and/or Mutation. | I would like to create a neuroevolution project using python and tensorflow/keras but I couldn't find any good way of mutating the neural network.
I am aware that there are librarys like NEAT, but I wanted to try and code it myself.
Would appreciate it if anyone can tell me something. | 0 | 1 | 181 |
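A minimal sketch of the "Mutation" concept from the list above, applied to a Keras model's weights; the mutation rate and noise scale are arbitrary hyperparameters.

```python
import numpy as np

def mutate(model, rate=0.05, scale=0.1):
    weights = model.get_weights()                 # list of NumPy arrays (copies)
    for w in weights:
        mask = np.random.rand(*w.shape) < rate    # which entries to perturb
        w += mask * np.random.normal(0.0, scale, size=w.shape)
    model.set_weights(weights)
```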
0 | 58,767,741 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-11-08T13:21:00.000 | 0 | 1 | 0 | Select a "mature" curve that best matches the slope of a new "immature" curve | 58,767,395 | 0 | python | This seems more like a mathematical problem than a coding problem, but I do have a solution.
If you want to find how similar two curves are, you can use box-differences or just differences.
You calculate or take the y-values of the two curves for each x value shared by both the curves (or, if they share no x-values because, say, one has even and the other odd values, you can interpolate those values).
Then you take the difference of the two y-values for every x-value.
Then you sum up those differences for all x-values.
The resulting number represents how different the two curves are.
Optionally, you can square all the values before summing up, but that depends on what definition of "likeness" you are using. | I have a multitude of mature curves (days are plotted on X axis and data is >= 90 days old so the curve is well developed).
Once a week I get a new set of data that is anywhere between 0 and 14 days old.
All of the data (old and new), when plotted, follows a log curve (in shape) but with different slopes. So some weeks have a higher slope, curve goes higher, some smaller slope, curve is lower. At 90 days all curves flatten.
From the set of "mature curves" I need to select the one whose slope matches the best the slope of my newly received date. Also, from the mature curve I then select the Y-value at 90 days and associate it with my "immature"/new curve.
Any suggestions how to do this? I can seem to find any info.
Thanks much! | 0 | 1 | 53 |
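A sketch of the comparison described in the answer: resample each mature curve onto the days of the new curve, sum the squared differences, and keep the closest one. `mature_curves` is assumed to be a list of (days, values) array pairs.

```python
import numpy as np

def best_match(mature_curves, new_days, new_values):
    scores = []
    for days, values in mature_curves:
        resampled = np.interp(new_days, days, values)   # mature curve at the young curve's days
        scores.append(np.sum((resampled - new_values) ** 2))
    return int(np.argmin(scores))                       # index of the most similar mature curve
```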
0 | 59,092,212 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2019-11-09T14:07:00.000 | 0 | 1 | 0 | How to generate frozen_inference_graphe.pb and .pbtxt files with tensorflow 2 | 58,780,057 | 0 | python,opencv,tensorflow,keras,tensorflow2.0 | The .pb file gets generated when using keras.callbacks.ModelCheckpoint().
However, I don't know how to create the .pbtxt file. | I'd like to use my own tensorflow 2 / keras model with opencv (cv.dnn.readNetFromTensorflow( bufferModel[, bufferConfig] ). But, I didn't manage to generate the required files :
bufferModel : buffer containing the content of the pb file (frozen_inference_graphe)
bufferConfig : buffer containing the content of the pbtxt file (model configuration file)
Everything I've found rely on "freeze_graph.py" or other solution that only work with tensorflow 1.x. How should I do with tensorflow 2 ? | 0 | 1 | 243 |
0 | 58,784,283 | 0 | 1 | 0 | 0 | 1 | false | 2 | 2019-11-09T22:53:00.000 | 0 | 3 | 0 | Get N random non-overlapping substrings of length K | 58,784,258 | 0 | python,string,random | You could simply run a loop, and inside the loop use the random package to pick a starting index, and extract the substring starting at that index. Keep track of the starting indices that you have used so that you can check that each substring is non-overlapping. As long as k isn't too large, this should work quickly and easily.
The reason I mention the size of k is because if it is large enough, then it could be possible to select substrings that don't allow you to find 8 non-overlapping ones. But that only needs to be considered if k is quite large with respect to the length of the original string. | Let's say we have a string R of length 20000 (or another arbitrary length). I want to get 8 random non-overlapping sub strings of length k from string R.
I tried to partition string R into 8 equal length partitions and get the [:k] of each partition but that's not random enough to be used in my application, and the condition of the method to work can not easily be met.
I wonder if I could use the built-in random package to accomplish the job, but I can't think of a way to do it, how can I do it? | 0 | 1 | 284 |
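A sketch of the loop described in the answer: draw random start indices and keep only those that do not overlap earlier picks (two length-k substrings starting at s and t overlap exactly when |s - t| < k).

```python
import random

def random_substrings(r, k, n=8, max_tries=10_000):
    starts = []
    for _ in range(max_tries):
        if len(starts) == n:
            break
        s = random.randrange(len(r) - k + 1)
        if all(abs(s - t) >= k for t in starts):   # no overlap with earlier picks
            starts.append(s)
    return [r[s:s + k] for s in starts]
```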
0 | 58,798,707 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-11-10T00:17:00.000 | 0 | 1 | 0 | Retrain Tensorflow Model on the go | 58,784,730 | 0 | python,tensorflow,deep-learning,image-recognition | As for step two(getting the name of the person), I don't think you would need any retraining to achieve this.
You could use Convolutional LSTM or a similar nn. input shape could be (None,image_dimension_x,y,3) (3 is the color channel, for RGB)
where None would be the current total number of images in the database. It passes all the images in the database into the nn and returns a number as an index.
Or alternatively, you could use a normal convolution (without the None)and make it output the confidence it has for each image in the database to be the person on camera right now. Then choose the person with the highest confidence.
I would say the second one is easier and probably better, that's my suggestion anyway.
Hope it helps :) | I m trying to create an application that captures the feed of one camera, detects the faces in the feed, then takes pictures of them and adds them to the image database. Simultaneously another camera feed will be captured and another neural network will compare the faces in the second camera feed with the face images in the database and then will display the name of the person.
Ideally, the new face images should be loaded into the neural network model without it completely retraining.
Right now I'm trying to achieve that with TensorFlow and OpenCV.
Would a dynamic neural network be possible with TensorFlow? | 0 | 1 | 40 |
0 | 58,789,056 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-11-10T13:06:00.000 | -1 | 2 | 0 | The smallest valid alpha value in matplotlib? | 58,788,958 | -0.099668 | python,matplotlib | There's no lower limit; the lines just appear to be invisible for very small alpha values.
If you draw one line with alpha=0.01 the difference in color is too small for your screen / eyes to discern. If you draw 100 lines with a=0.01 on top of each other, you will see them.
As for your problem, you can just add a small number to the alpha value of each draw call so that lines that would otherwise have alpha < 0.1 still appear. | Some of my plots have several million lines. I dynamically adjust the alpha value, by the number of lines, so that the outliers more or less disappear, while the most prominent features appear clear. But for some alpha's, the lines just disappear.
What is the smallest valid alpha value for line plots in matplotlib? And why is there a lower limit? | 0 | 1 | 418 |
0 | 58,789,387 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-11-10T13:51:00.000 | 1 | 1 | 0 | How to choose python pandas arrangement columns vs rows | 58,789,312 | 1.2 | python,pandas,indexing,row,multiple-columns | Generally in pandas, we follow a practice that instances are columns (here doc number) and features are columns (here words). So, prefer to use the approach 'b'. | I am quite new with pandas (couple of months) and I am starting building up a project that will be based on a pandas data array.
Such a pandas data array will consist of a table including the different kinds of words present in a collection of texts (around 100k docs, and around 200 key-words).
imagine for instance the words "car" and the word "motorbike" and documents numbered doc1, doc2 etc.
how should I go about the arrangement?
a) The name of every column is the doc number and the index the words "car" and "motorbike" or
b) the other way around; the index being the docs numbers and the columns head the words?
I don't have enough insight into pandas to be able to foresee what the consequences of such a choice will be. And all the code will be based on that decision.
As a side note, the array is not static; there will be more documents and more words added to the array every now and again.
what would you recommend? a or b? and why?
thanks. | 0 | 1 | 28 |
0 | 58,818,221 | 0 | 0 | 0 | 1 | 4 | false | 1 | 2019-11-10T15:08:00.000 | 0 | 5 | 0 | How to remove rows from a datascience table in python | 58,789,936 | 0 | python-3.x,jupyter-notebook | use the 'drop.isnull()' function. | I have a table with 4 columns filled with integer. Some of the rows have a value "null" as its more than 1000 records with this "null" value, how can I delete these rows all at once? I tried the delete method but it requires the index of the row its theres over 1000 rows. Is there as faster way to do it?
Thanks | 0 | 1 | 1,193 |
0 | 63,076,734 | 0 | 0 | 0 | 1 | 4 | false | 1 | 2019-11-10T15:08:00.000 | 0 | 5 | 0 | How to remove rows from a datascience table in python | 58,789,936 | 0 | python-3.x,jupyter-notebook | To remove a row in a datascience package:
name_of_your_table.remove() # number of the row in the bracket | I have a table with 4 columns filled with integer. Some of the rows have a value "null" as its more than 1000 records with this "null" value, how can I delete these rows all at once? I tried the delete method but it requires the index of the row its theres over 1000 rows. Is there as faster way to do it?
Thanks | 0 | 1 | 1,193 |
0 | 66,193,032 | 0 | 0 | 0 | 1 | 4 | false | 1 | 2019-11-10T15:08:00.000 | 0 | 5 | 0 | How to remove rows from a datascience table in python | 58,789,936 | 0 | python-3.x,jupyter-notebook | #df is the original dataframe#
#The '~' operator negates the null mask, so only the non-null rows are kept and re-assigned to df#
df = df[~(df['Column'].isnull())] | I have a table with 4 columns filled with integer. Some of the rows have a value "null" as its more than 1000 records with this "null" value, how can I delete these rows all at once? I tried the delete method but it requires the index of the row its theres over 1000 rows. Is there as faster way to do it?
Thanks | 0 | 1 | 1,193 |
0 | 68,397,031 | 0 | 0 | 0 | 1 | 4 | false | 1 | 2019-11-10T15:08:00.000 | 0 | 5 | 0 | How to remove rows from a datascience table in python | 58,789,936 | 0 | python-3.x,jupyter-notebook | use dataframe_name.isnull() #To check the is there any missing values in your table.
use dataframe_name.isnull().sum() #To get the total number of missing values.
use dataframe_name.dropna() # To drop or delete the missing values. | I have a table with 4 columns filled with integer. Some of the rows have a value "null" as its more than 1000 records with this "null" value, how can I delete these rows all at once? I tried the delete method but it requires the index of the row its theres over 1000 rows. Is there as faster way to do it?
Thanks | 0 | 1 | 1,193 |
1 | 58,790,592 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2019-11-10T16:14:00.000 | 0 | 2 | 0 | How to get pixel values inside of a rectangle within an image | 58,790,535 | 0 | python,image,opencv | It is not very easy to iterate through a slanted rectangle. Therefore, what you can do is to rotate the whole image such that the rectangle is parallel to the sides again.
For this, you can compute the slope of one side as the difference in the y coordinates over the difference in the x coordinates of its corners. The arctangent of that slope is the angle to the horizontal, and you need to rotate the image by the opposite (negative) of this value.
To make it more efficient, you can crop the image a bit first. | I have an image, where four corner points are defined. Now I want to get the pixel values of the region that is defined by the 4 corners. Problem is, although it is a rectangle, it has a "slope", which means neither the two upper corner points nor the lower ones are at the same height. How can I still solve this issue?
I have not found anything for this yet.. I'd appreciate any kind of support! :) | 0 | 1 | 2,611 |
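A hedged OpenCV sketch of the rotate-then-slice idea from the answer; `corners` is assumed to hold the four (x, y) points with corners[0]→corners[1] along the top edge, and the sign of the angle may need flipping depending on your corner ordering and image coordinate convention.

```python
import numpy as np
import cv2

(x0, y0), (x1, y1) = corners[0], corners[1]
angle = np.degrees(np.arctan2(y1 - y0, x1 - x0))        # slope of the top edge, in degrees

h, w = image.shape[:2]
M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
straight = cv2.warpAffine(image, M, (w, h))
# The rectangle is now axis-aligned and can be sliced directly,
# e.g. roi = straight[y_top:y_bottom, x_left:x_right]
```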
0 | 58,801,419 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2019-11-11T12:09:00.000 | 1 | 1 | 0 | Error when using loaded Keras classifier with custom metrics function | 58,801,078 | 1.2 | python,keras | Do you mean keras.models.load_model(path)? It sounds very strange to have model.load_model().
You are probably missing the argument custom_objects = {'roc_auc': roc_auc} in load_model. Keras cannot create a model if it doesn't know what roc_auc means. | I have a keras model, which uses custom function for metrics:
model.compile(optimizer = tf.keras.optimizers.Adam(), loss = 'binary_crossentropy', metrics = ['accuracy', roc_auc])
The function works fine and model behaves as expected. However, when saving the model via model.save() and then loading it via model.load_model(), I get ValueError: Unknown metric function:roc_auc when running following code: model.predict(X). Interestingly this error does not appear when I run the same command again, through command shell, it only occurs during first run. Is this a bug? | 0 | 1 | 24 |
0 | 58,802,847 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2019-11-11T13:42:00.000 | 0 | 1 | 0 | How to create custom report in PDF using matplotlib and python | 58,802,543 | 0 | python,matplotlib | Even though the question is not very clear. If I have to do what I understand from your question, I will use Jupyter Notebook and save it as PDF. In this notebook, I will have:
Exploratory analysis of the data (What data scientists call EDA)
Discussion and other mathematical formulas as they may apply to your case
The plots
You can save jupyter notebooks to PDF using nbconvert module in Python.
If you don't have it installed on your computer, do so with this command:
pip install nbconvert
To save your notebook as a PDF, go to the folder containing your Jupyter notebook file and run the following command:
jupyter nbconvert --to pdf MyNotebook.ipynb | i am working on a project where i have to present the Chart /graph created using matplotlib with python3 into a PDF format. The PDF must carry the data, custom titles along with the chart/graph. PDF can be multiple page report as well. I know that we can store the matplotlib charts in PDF. But i am looking for any solution if we can achieve Data, chart and custom text in PDF format. | 0 | 1 | 284 |
0 | 58,808,296 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2019-11-11T16:51:00.000 | 1 | 3 | 0 | Adding a column to a pandas dataframe based on other columns | 58,805,531 | 0.066568 | python,pandas,list-comprehension | Thanks guys! With your help I was able to solve my problem.
Like Prince Francis suggested I first did
df['temp'] = df.apply(lambda x : [i for i, e in enumerate(x['WD']) if e == x['Max_WD']], axis=1)
to get the indices of the 'Max_WD' value within 'WD'. In a second step I could then add the actual column 'Max_LF' by doing
df['LF_Max'] = df.apply(lambda x: [x['LF'][e] for e in (x['temp'])],axis=1)
Thanks a lot guys! | Problem description
Introductory remark: For the code have a look below
Let's say we have a pandas dataframe consisting of 3 columns and 2 rows.
I'd like to add a 4th column called 'Max_LF' that will consist of an array. The value of the cell is retrieved by having a look at the column 'Max_WD'. For the first row that would be 0.35, which will then be compared to the values in the column 'WD', where 0.35 can be found at the third position. Therefore, the third value of the column 'LF' should be written into the column 'Max_LF'. If the value of 'Max_WD' occurs multiple times in 'WD', then all corresponding items of 'LF' should be written into 'Max_LF'.
Failed attempt
So far I had various attemps on first retrieving the index of the item in 'Max_WD' in 'WD'. After potentially retrieving the index the idea was to then get the items of 'LF' via their index:
df4['temp_indices'] = [i for i, x in enumerate(df4['WD']) if x == df4['Max_WD']]
However, a ValueError occured:
raise ValueError('Lengths must match to compare')
ValueError: Lengths must match to compare
This is what the example dataframe looks like
df = pd.DataFrame(data={'LF': [[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]] , 'WD': [[0.28, 0.34, 0.35, 0.18], [0.42, 0.45, 0.45, 0.18], [0.31, 0.21, 0.41, 0.41]], 'Max_WD': [0.35, 0.45, 0.41]})
The expected outcome should look like
df=pd.DataFrame(data={'LF': [[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]] , 'WD': [[0.28, 0.34, 0.35, 0.18], [0.42, 0.45, 0.45, 0.18], [0.31, 0.21, 0.41, 0.41]], 'Max_WD': [0.35, 0.45, 0.41], 'Max_LF': [[3] ,[2,3], [3,4]]}) | 0 | 1 | 1,805 |
0 | 58,812,577 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-11-11T22:02:00.000 | 2 | 1 | 1 | Apache beam DirectRunner vs "normal" parallel processes | 58,809,283 | 0.379949 | python,google-cloud-platform,cloud,apache-beam,dataflow | You question is broad. However, I will try to provide you some inputs. It's hard to compare a DirectRunner and a DataflowRunner.
DirectRunner launches your pipeline on your current VM and use the capability of this only VM. It's your VM, you have to set it up, patch it, take care to free disk/partition/logs file, (...)
DataflowRunner launches the pipeline to a managed platform. Dataflow, according with its metrics and "prediction" (no ML here!) chooses to scale up or down the number of VM to execute as quickly as possible your pipeline. You can set small VM (1 vCPU for example) and Dataflow will spawn a lot of them, or bigger VM and, maybe that dataflow will spawn only 1 because it's enough for the pipeline.
Pro tips: the VM bandwidth is limited to 2Gbs per vCPU up to 8 vCPU. Take care of the network bottleneck and choose wisely the VM size (I recommend VM with 4 or 8 vCPU usually)
On one side, you have only one VM to manage, on the other side, you only have to set parameters and let Dataflow managing and scaling your pipeline.
I don't know your growth perspective, but vertical scalability (adding more vCPU/memory on your single VM) can reach a limit a day. With Dataflow, it's elastic and you don't worry about this; in addition of server management and patching.
Finally, answer to your question "faster or slower", too hard to answer... Dataflow, if it run on several VM, will add network latency, dataflow internal management overhead, but can scale to use more vCPU in parallel at some point of time compare to your current VM. Is your pipeline can leverage of this parallelism or not? Is it solve some of your current bottleneck? Too hard to answer on my side. | I currently have a pipeline running on GCP. The entire thing is written using pandas to manipulate CSVs and do some transformations, as well as side inputs from external sources. (It makes use of bigquery and storage APIs). The thing is, it runs on a 32vCPUs/120GB RAM Compute Engine instance (VM) and it does simple parallel processing with python's multiprocessing library. We are currently thinking about switching to Dataflow, and what I'd like to know is: if I were to implement the same pipeline using Beam's DirectRunner, how should I expect the performance to compare to that of the current implementation? Would it be faster or slower and why? Will the DirectRunner use well all the machine resources or is it limited somehow? | 0 | 1 | 564 |
0 | 58,844,839 | 0 | 1 | 0 | 0 | 1 | true | 0 | 2019-11-12T04:53:00.000 | 0 | 1 | 0 | TensorFlow CondaVerificationError - Mixing Pip with Conda | 58,812,244 | 1.2 | python,windows,tensorflow,pip,conda | Update:
I reinstalled TensorFlow 2.0 with pip and even though it successfully installed, I was still facing issues when I tried to run my code. A file called cudart64_100.dll could not be located. I eventually managed to get my TF 2.0 code to run successfully by installing an older version of CUDA. I'm still not sure what the issue is with conda, but I'll settle for the current working version of anaconda. | I'm running Anaconda 64 bit on Windows 10 and I've encountered a CondaVerificationError when I try installing TensorFlow 2.0 on one of my computers. I believe the error stems from mixing pip installations with conda installations for the same package. I originally installed then uninstalled TensorFlow with pip and then tried installing with conda and now I'm stuck trying to resolve this issue.
I've tried reinstalling Anaconda but the issue persists. Thoughts? | 0 | 1 | 74 |
0 | 69,829,396 | 0 | 1 | 0 | 0 | 2 | false | 4 | 2019-11-12T08:24:00.000 | 0 | 3 | 0 | Matplotlib: Command errored out with exit status 1 | 58,814,671 | 0 | python,python-3.x,matplotlib,pip,python-packaging | pip install --pre -U scikit-learn
this command worked for me
I have found that this error comes from having duplicates of the same libraries. | I want to install the matplotlib package for Python with pip install matplotlib in the command prompt, but suddenly the lines turn red and the following error appears:
ERROR: Command errored out with exit status 1: 'c:\users\pol\appdata\local\programs\python\python38\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\Pol\\AppData\\Local\\Temp\\pip-install-v44y041t\\matplotlib\\setup.py'"'"'; __file__='"'"'C:\\Users\\Pol\\AppData\\Local\\Temp\\pip-install-v44y041t\\matplotlib\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record 'C:\Users\Pol\AppData\Local\Temp\pip-record-d5re6a86\install-record.txt' --single-version-externally-managed --compile Check the logs for full command output.
I'm using Windows and my Python version is 3.8.0. I have already tried python -m pip install matplotlib but it doesn't work. | 0 | 1 | 11,426 |
0 | 58,814,829 | 0 | 1 | 0 | 0 | 2 | false | 4 | 2019-11-12T08:24:00.000 | -1 | 3 | 0 | Matplotlib: Command errored out with exit status 1 | 58,814,671 | -0.066568 | python,python-3.x,matplotlib,pip,python-packaging | Try running your command prompt with administrator privileges
If the problem further persists try reinstalling pip | I want to install the matplotlib package for Python with pip install matplotlib in the command prompt but suddenly the lines get red and the next error appears:
ERROR: Command errored out with exit status 1: 'c:\users\pol\appdata\local\programs\python\python38\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\Pol\\AppData\\Local\\Temp\\pip-install-v44y041t\\matplotlib\\setup.py'"'"'; __file__='"'"'C:\\Users\\Pol\\AppData\\Local\\Temp\\pip-install-v44y041t\\matplotlib\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record 'C:\Users\Pol\AppData\Local\Temp\pip-record-d5re6a86\install-record.txt' --single-version-externally-managed --compile Check the logs for full command output.
I'm using Windows and my Python version is 3.8.0. I have already tried python -m pip install matplotlib but it doesn't work. | 0 | 1 | 11,426 |
0 | 58,822,290 | 0 | 0 | 0 | 0 | 1 | true | 2 | 2019-11-12T15:49:00.000 | 2 | 3 | 0 | Read a pickle with a different version of pandas | 58,822,129 | 1.2 | python,pandas | You will need the same version (or a later one) of pandas as the one used to_pickle.
When pandas converts a dataframe to a pickle, the compression/serialization process is specific to that version.
I advise contacting your administrator and having them convert the pickle to CSV; that way you can open it with any version of pandas.
Unless the dataframe contains objects, CSV should be fine. | I can't read a pickle file saved with a different version of Python pandas. I know this has been asked here before, but the solution offered, using pd.read_pickle("my_file.pkl"), is not working either. I think (but I am not sure) that these pickle files were created with a newer version of pandas than the one on the machine I am working on now.
Unfortunately, I am not the administrator and I cannot change the version of pandas. How can I read my files? Are they irrecoverable? | 0 | 1 | 4,370 |
0 | 58,825,987 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-11-12T20:02:00.000 | 0 | 1 | 0 | How to Sort a Pandas Column by specific values | 58,825,887 | 0 | python,python-3.x,pandas,sorting | What are you sorting by? Alphabetical would be ['four', 'nine', 'one', 'six', 'two'] | Let's say, for the sake of this question, I have a column titled Blah filled with the following data points (I will give it in a list for clarity):
Values = ['one', 'two', 'four', 'six', 'nine']
How could I choose to sort by specific values in this column? For example, I would like to sort this column, Blah, filled with the values above into the following: ['nine', 'four', 'two', 'six', 'one'].
Unfortunately, it is not as easy as just sort_values and choose alphabetical! | 0 | 1 | 55 |
0 | 58,841,007 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-11-13T12:06:00.000 | 1 | 1 | 0 | Is there support for functional layers api support in tensorflow 2.0? | 58,836,772 | 1.2 | python-3.x,tensorflow,tensorflow2.0 | Tensorflow 2.0 is more or less made around the keras apis. You can use the tf.keras.Model for creating both sequential as well as functional apis. | I'm working on converting our model from tensorflow 1.8.0 to 2.0 but using sequential api's is quite difficult for our current model.So if there any support for functional api's in 2.0 as it is not easy to use sequential api's. | 0 | 1 | 35 |
0 | 58,842,847 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-11-13T15:15:00.000 | 0 | 1 | 0 | How to split parallel corpora while keeping alignment? | 58,840,145 | 1.2 | python,pandas,unix,scikit-learn,dataset | I found that I can use the shuf command on the file with the random-source parameter, like this shuf tgt-full.txt -o tgt-fullshuf.txt --random-source=tgt-full.txt. | I have two text files containing parallel text in two languages (potentially millions of lines). I am trying to generate random train/validate/test files from that single file, as train_test_split does in sklearn. However when I try to import it into pandas using read_csv I get errors from many of the lines because of erroneous data in there and it would be way too much work to try and fix the broken lines. If I try and set the error_bad_lines=false then it will skip some lines in one of the files and possibly not the other which would ruin the alignment. If I split it manually using unix split it works fine for my needs though so I'm not concerned with cleaning it, but the data that is returned is not random.
How should I go about splitting this dataset into train/validate/test sets?
I'm using python but I can also use linux commands if that would be easier. | 0 | 1 | 85 |
0 | 58,847,337 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-11-13T20:18:00.000 | 1 | 1 | 0 | CNN model have better accuracy than combined CNN-SVM model | 58,844,965 | 0.197375 | python,classification,svm | It depends on a large number of factors , but yes if the underlying data is image - cnn have proven to deliver better results. | I was trying to compare the accuracy results of CNN model and combined CNN-SVM model for classification. However I found that CNN model have better accuracy than combined CNN-SVM model. Is That correct or it can happen? | 0 | 1 | 144 |
0 | 62,375,398 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2019-11-13T20:39:00.000 | -1 | 2 | 0 | Matplotlib and Google Colab: Using ipympl | 58,845,278 | -0.099668 | python,matplotlib,google-colaboratory | Available matplotlib backends: ['tk', 'gtk', 'gtk3', 'wx', 'qt4', 'qt5', 'qt', 'osx', 'nbagg', 'notebook', 'agg', 'inline', 'ipympl', 'widget'] | Whenever I try to plot a figure in a Google Colab notebook using matplotlib, a plot is displayed whenever I use %matplotlib inline but is not displayed when I do %matplotlib ipympl or %matplotlib widget. How can I resolve this issue. My goal is to get the plot to be interactive.
Clarification: when I run %matplotlib --list I get the following output
Available matplotlib backends: ['tk', 'gtk', 'gtk3', 'wx', 'qt4', 'qt5', 'qt', 'osx', 'nbagg', 'notebook', 'agg', 'inline', 'ipympl', 'widget']
Thanks for your help! | 0 | 1 | 1,694 |
0 | 58,858,660 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-11-13T22:30:00.000 | 0 | 2 | 0 | Filter out range of frequencies using band stop filter in Python and confirm it using Fourier Transform FFT | 58,846,626 | 0 | python,numpy,scipy,fft | Why magnitude for frequency 50 Hz decreased from 1 to 0.7 after Fast Fourier Transform? | Supposing that I have following signal:
y = np.sin(50.0 * 2.0*np.pi*x) + 0.5*np.sin(100.0 * 2.0*np.pi*x) + 0.2*np.sin(200 * 2.0*np.pi*x)
how can I filter out, for example, 100 Hz using a band-stop filter in Python? In this signal there are peaks at 50 Hz, 100 Hz and 200 Hz. It would be helpful if it could be visualized using FFT in order to confirm that this frequency has been filtered correctly. | 0 | 1 | 978
0 | 58,852,959 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2019-11-14T06:33:00.000 | 1 | 3 | 0 | CNN on python with Keras | 58,850,711 | 0.066568 | python,tensorflow,keras | What you want to do is called "transfer learning" using the learned weights of a net to solve a new problem.
Please note that this is very hard and works only under many constraints, e.g. adapting a CNN that can detect cars to detect trucks is simpler than adapting a CNN trained to detect people to also detect cats.
In any case you would use your pre-trained model, load the weights and continue to train it with new data and examples.
Whether this is faster or indeed even better than simply training a new model on all desired classes depends on the actual implementation and problem.
TL;DR:
Transfer learning is hard! Unless you know what you are doing or have a specific reason, just train a new model on all classes. | I made a simple CNN that classifies dogs and cats and I want this CNN to detect images that aren't cats or dogs and this must be a third different class. How to implement this? should I use R-CNN or something else?
P.S I use Keras for CNN | 0 | 1 | 125 |
0 | 58,852,922 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2019-11-14T06:33:00.000 | 0 | 3 | 0 | CNN on python with Keras | 58,850,711 | 0 | python,tensorflow,keras | You can train that almost with the same architecture (of course it depends on this architecture, if it is already bad then it will not be useful on more classes too. I would suggest to use the state of the art model architecture for dogs and cats classification) but you will also need the dogs and cats dataset in addition to this third class dataset. Unfortunately, it is not possible to use pre-trained for making predictions between all 3 classes by only training on the third class later.
So, to cut it short, you will need all three datasets and to train the model from scratch if you want to make predictions across these three classes; otherwise, use the pre-trained model and, after training it on the third class, it can only predict whether an image belongs to this third class or not. | I made a simple CNN that classifies dogs and cats and I want this CNN to detect images that aren't cats or dogs and this must be a third different class. How to implement this? should I use R-CNN or something else?
P.S I use Keras for CNN | 0 | 1 | 125 |
0 | 58,861,981 | 0 | 0 | 0 | 1 | 1 | false | 0 | 2019-11-14T11:38:00.000 | 0 | 2 | 0 | Pandas not assuming dtypes when using read_sql? | 58,855,925 | 0 | python,sql,pandas | So it turns out all the data types in the database are defined as varchar.
It seems read_sql reads the schema and assumes data types based on that. What's strange is that I then couldn't convert those data types using infer_objects().
The only way to do it was to write to a CSV and then read that CSV back using pd.read_csv(). | I have a table in SQL I'm looking to read into a pandas dataframe. I can read the table in, but all column dtypes are being read in as objects. When I write the table to a CSV and then re-read it back in using read_csv, the correct data types are assumed. Obviously this intermediate step is inefficient and I just want to be able to read the data directly from SQL with the correct data types assumed.
I have 650 columns in the df so obviously manually specifying the data types is not possible. | 0 | 1 | 651 |
0 | 58,863,619 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-11-14T12:42:00.000 | 0 | 1 | 0 | Automatically create a new object from GridSearchCV best_params_ | 58,857,069 | 0 | python,scikit-learn | I figured out, if you unpack the parameters it is doable, i.e if
best_par=RFC_grid_search.best_params_ then you can create the optimal RFC with the parameters in best_params_ by
rfc_opt=RFC(**best_par) | Assume I want to fit a random forest, RFC, and grid search using sklearn's GridSearchCV.
We can get the best parameters using RFC.best_params_ but if I then want to create a random forest I need manually to write those parameters in e.g RFC(n_estimators=12,max_depth=7) afterwards. Is there a way,something like RFC_opt=RFC(best_params_) to do it automatically? | 0 | 1 | 75 |
0 | 58,858,073 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-11-14T13:27:00.000 | 0 | 2 | 0 | Softmax or sigmoid for multiclass problem | 58,857,899 | 0 | python,image-processing,deep-learning,data-science | In general, if you are dealing with multi-class classification problems, you should use a softmax, because you are then guaranteed that the probabilities of all classes sum to 1, by weighting them individually and computing the joint distribution, whereas with a sigmoid you'd be predicting the probability of each class individually, but not necessarily weighted. If you are not careful and aware of the difference, you can run into some issues with your output. | I am using the VGG16 model and fine-tuned it on my data. I am predicting the ethnicity of images (faces). I have 5 output classes: White, Black, Asian, Sub-continent and Others. Should I use softmax or sigmoid? And why? | 0 | 1 | 403
0 | 58,877,449 | 0 | 0 | 1 | 0 | 1 | false | 1 | 2019-11-14T16:37:00.000 | 0 | 1 | 0 | cut out part of a point cloud | 58,861,669 | 0 | python-3.x,list-comprehension,point-clouds | Instead of the list comprehension, I used numpy and came up with this statement:
inPathPoints = pc[(pc[:, 0] > -0.5) & (pc[:, 0] < 0.5) & (pc[:, 2] > 0.2) & (pc[:, 2] < 2)]
This is so fast it does not even show up in the profile output. | I have an intel D415 depth camera and want to identify obstacles in the path of my robot.
I want to reduce the points from the cam pc=(102720,3) to only a rectangular area where the robot has to pass through
I came up with this list comprehension, p[0] is the x-axis, p[2] the distance and the values are in meters, the robot needs around a 1 meter "door" and I limit the distance to 2 meters.
inPathPoints = np.asarray([p for p in pc if p[0] > -0.5 and p[0] < 0.5 and p[2] > 0.2 and p[2] < 2])
On my laptop cProfile shows a runtime of 0.25 seconds for this evaluation.
As the robot needs to check for obstacles while moving I wanted to repeat this check about 5..10 times a
second. Any hints what I could try to speed it up? | 0 | 1 | 34 |
0 | 58,885,009 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2019-11-14T21:51:00.000 | 2 | 2 | 0 | Keras - RTX 2080 ti training slower than both CPU-only and GTX 1070? | 58,867,071 | 1.2 | python,tensorflow,keras | I figured it out! Thanks to the suggestion of a friend who got a 2060 recently, he noted that the default power mode is maximum power savings in the Nvidia Control Panel, or P8 power mode according to nvidia-smi (which is half clock speeds). After setting to prefer maximum performance in 3D settings, training times have significantly been reduced. | I just got my 2080 ti today and hooked it right up to experiment with Keras on my models. But for some reason, when I train on a dense model the 2080 ti is 2 times slower than my CPU (an i7 4790k) and definitely slower than my old GTX 1070 (don't have exact numbers to compare it to).
To train one epoch on my CPU it takes 27 seconds while the 2080 ti is taking 67 seconds with nothing about the model or data changing. Same batch size of 128, etc. This is also significantly slower than my 1070 I just had in the machine last night. I checked the GPU usage while training and the memory usage goes up to max, and the GPU usage goes up to about 20%, while idle is 4%. I have CUDA 10, and the latest CuDNN on NVIDIA's site: v7.6.5. TensorFlow is 1.15
Does anyone have any clue what is going on here? If any more details are needed, just comment I can add them. | 0 | 1 | 884 |
0 | 59,451,889 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2019-11-14T22:38:00.000 | 0 | 1 | 0 | smop has issues when translating a >= statment in MATLAB | 58,867,582 | 0 | python,matlab | you can simply edit lexer.py for greater or equal by putting "\" in front of "<" or ">".
example:
from "<="
to "\<=" or "<\=", both of them works the same on your converted python code. | I'm using SMOP version 0.41 to translate my MATLAB code to anaconda python however
whenever there is a statement with a greater-than-or-equal-to operator, for example:
if numFFT >= 2
I get the following error
SyntaxError: Unexpected "=" (parser)
Has anyone experienced this? | 0 | 1 | 246 |
0 | 58,879,617 | 0 | 0 | 0 | 1 | 1 | true | 0 | 2019-11-15T01:16:00.000 | 0 | 1 | 0 | Is there a way to set columns to null within dask read_sql_table? | 58,868,931 | 1.2 | python,pandas,dask | If possible, I recommend setting this up on the oracle side, making a view with the correct data types, and using read_sql_table with that.
You might be able to do it directly, since read_sql_table accepts SQLAlchemy expressions. If you can phrase it as such, it ought to work. | I'm connecting to an Oracle database and trying to bring across a table with roughly 77 million rows. At first I tried using chunksize in pandas, but I always got a memory error no matter what chunksize I set. I then tried using Dask since I know it's better for large amounts of data. However, there are some columns that need to be made NULL; is there a way to do this within the read_sql_table query, like there is in pandas when you can write out your SQL query?
Cheers | 0 | 1 | 83 |
0 | 58,869,681 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2019-11-15T02:38:00.000 | 3 | 2 | 0 | Can we read the data in a pickle file with python generators | 58,869,572 | 1.2 | python,python-3.x,generator,pickle | Nope. The pickle file format isn't like JSON or something else where you can just read part of it and decode it incrementally. A pickle file is a list of instructions for building a Python object, and just like following half the instructions to bake a cake won't bake half a cake, reading half a pickle won't give you half the pickled object. | I have a large pickle file and I want to load the data from pickle file to train a deep learning model. Is there any way if I can use a generator to load the data for each key? The data is in the form of a dictionary in the pickle file. I am using pickle.load(filename), but I am afraid that it will occupy too much RAM while running the model. I used pickle.HIGHEST_PROTOCOL to dump the data to the pickle file initially. | 0 | 1 | 513 |
0 | 58,874,622 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-11-15T03:34:00.000 | 1 | 1 | 0 | Is it possible to train the sentiment classification model with the labeled data and then use it to predict sentiment on data that is not labeled? | 58,869,955 | 1.2 | nltk,python-3.7,sentiment-analysis,text-classification,training-data | Is it possible to train the sentiment classification model with the labeled data and then use it to predict sentiment on data that is not labeled?
Yes. This is basically the definition of what supervised learning is.
I.e., you train on data that has labels, so that you can then put the model into production to categorize your data that does not have labels.
(Any book on supervised learning will have code examples.)
I wonder if your question might really be: can I use supervised learning to make a model, assign labels to another 500 articles, then do further machine learning on all 600 articles? Well the answer is still yes, but the quality will fall somewhere between these two extremes:
Assign random labels to the 500. Bad results.
Get a domain expert to assign correct labels to those 500. Good results.
Your model could fall anywhere between those two extremes. It is useful to know where it is, so you know whether it is worth using the data. You can get an estimate of that by taking a sample, say 25 records, and having them also labeled by a domain expert. If all 25 match, there is a reasonable chance your other 475 records have also been given good labels. If, e.g., only 10 of the 25 match, the model is much closer to the random end of the spectrum, and using the other 475 records is probably a bad idea.
("10", "25", etc. are arbitrary examples; choose based on the number of different labels, and your desired confidence in the results.) | I want to do sentiment analysis using machine learning (text classification) approach. For example nltk Naive Bayes Classifier.
But the issue is that a small amount of my data is labeled. (For example, 100 articles are labeled positive or negative) and 500 articles are not labeled.
I was thinking that I train the classifier with labeled data and then try to predict sentiments of unlabeled data.
Is it possible?
I am a beginner in machine learning and don't know much about it.
I am using python 3.7.
Thank you in advance. | 0 | 1 | 299 |
0 | 58,950,331 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-11-15T03:50:00.000 | 1 | 1 | 0 | Which algorithm in Deep learning can verfity the relationship of column into a matrix | 58,870,076 | 1.2 | python,algorithm,machine-learning,deep-learning | You can all simply use DNN, however the results are not that good compared with ML but there is no way to solve that. | I'm reading these days about deep learning and its utilization and the methods we can use it. I had a general question regarding the image verification or let's say a simple matrix.
Suppose I have a matrix of size X = (4,4) and a vector of size Y = (1,4), and I multiplied the vector Y by only one column from X, let's say the second column. Hence, Z = Y.*X(:,2). Supposing I know the matrix X and the resulting vector Z, can I use deep learning to verify which column from X was multiplied, based on Z and X?
I know we can simply use a maximum likelihood decoder, or divide X/Z; in reality I need to avoid these conventional ways and go to deep learning. Can we do that using deep learning? Which algorithm can be used in that case? | 0 | 1 | 30
0 | 58,872,562 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2019-11-15T07:47:00.000 | 2 | 3 | 0 | How do you read files on desktop with jupyter notebook? | 58,872,437 | 0.132549 | python,python-3.x,pandas | This is an obvious path problem, because your notebook is not booted on the desktop path, you must indicate the absolute path to the desktop file, or the relative path relative to the jupyter boot directory. | I launched Jupyter Notebook, created a new notebook in python, imported the necessary libraries and tried to access a .xlsx file on the desktop with this code:
haber = pd.read_csv('filename.xlsx')
but an error keeps popping up. I want a reliable way of accessing this file on my desktop without running into any errors. | 0 | 1 | 3,030
0 | 58,882,586 | 0 | 1 | 0 | 0 | 1 | false | 4 | 2019-11-15T12:03:00.000 | 2 | 2 | 0 | What is the time complexity of .at and .loc in pandas? | 58,876,676 | 0.197375 | python,pandas,performance,data-structures,time-complexity | Alright so it would appear that:
1) You can build your own index on a dataframe with .set_index in O(n) time where n is the number of rows in the dataframe
2) The index is lazily initialized and built (in O(n) time) the first time you try to access a row using that index. So accessing a row for the first time using that index takes O(n) time
3) All subsequent row access takes constant time.
So it looks like the indexes are hash tables and not btrees. | I'm looking for the time complexity of these methods as a function of the number of rows in a dataframe, n.
Another way of asking this question is: Are indexes for dataframes in pandas btrees (with log(n) time look ups) or hash tables (with constant time lookups)?
Asking this question because I'd like a way to do constant time look ups for rows in a dataframe based on a custom index. | 0 | 1 | 2,034 |
0 | 59,198,937 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2019-11-17T01:08:00.000 | 0 | 2 | 0 | MNIST dataset with Sklearn | 58,896,645 | 0 | python,mnist,sklearn-pandas | If you only need to recognize 4s it's a binary classification problem, so you just need to create a new target variable: Y=1 if class is 4, Y=0 if class is not 4.
Train_X will be unchanged
Train_Y will be your new target variable related to Train_X
Test_X will be unchanged
Test_Y will be your new target variable related to Test_X.
Data will be a bit unbalanced but it should not be an issue! | I’m training linear model on MNIST dataset, but I wanted to train only on one digit that is 4. How do I choose my X_test,X_train, y_test, y_train? | 0 | 1 | 157 |
0 | 58,896,802 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2019-11-17T01:08:00.000 | 0 | 2 | 0 | MNIST dataset with Sklearn | 58,896,645 | 0 | python,mnist,sklearn-pandas | Your classifier needs to learn to discriminate between sets of different classes.
If you only care about digit 4, you should split your training and testing set into:
Class 4 instances
Not class 4 instances: union of all other digits
Otherwise the train/test split is still the typical one, where you want to have no overlap. | I’m training linear model on MNIST dataset, but I wanted to train only on one digit that is 4. How do I choose my X_test,X_train, y_test, y_train? | 0 | 1 | 157 |
0 | 58,917,762 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-11-17T03:41:00.000 | 0 | 1 | 0 | No bouding boxes create when running my trained model | 58,897,297 | 0 | python,python-3.x,tensorflow,object-detection,object-detection-api | Did you update the path of your model in the object_detection_tutorial.ipynb file, "tf.saved_model.load" is the API where you have to give the path of your trained model. | I have tried to train my model using ssd_mobilenet_v1_coco_11_06_2017 ,ssd_mobilenet_v1_coco_2018_01_28 and faster_rcnn_inception_v2_coco_2018_01_28 and did it successfully however when i tried to run object_detection_tutorial.ipynb and test my test_images all i get is images without bounding boxes, i trained my model using model_main.py and also tried train.py and i aquired a loss of < 1 in both. i am using tensorflow = 1.14 and i tried it on tensorflow = 2.0. im stuck in this final step. i am positive i create my tfrecords correctly. and also when i run the models(ssd_mobilenet_v1_coco_11_06_2017 ,ssd_mobilenet_v1_coco_2018_01_28 and faster_rcnn_inception_v2_coco_2018_01_28) that i trained them on they worked perfectly, so i suspect that there is something wrong with my model | 0 | 1 | 46 |
0 | 72,333,300 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-11-17T19:35:00.000 | 0 | 2 | 0 | How do I correctly download a map from osmnx as .svg? | 58,904,416 | 0 | python,svg,jupyter-notebook,openstreetmap,osmnx | Instead of
filename='image', file_format='svg'
Use:
filepath='image.svg' | I am new to Python. Just working with OSMnx and wanted to open a map as an svg in Illustrator. This was posted in the GitHub documentation:
# you can also plot/save figures as SVGs to work with in Illustrator later
fig, ax = ox.plot_graph(G_projected, save=True, file_format='svg')
I tried it in JupyterLab and it downloaded to my files but when I open it, it is just text. How can I correctly download it and open as SVG? - Thanks! | 0 | 1 | 548 |
0 | 58,910,192 | 0 | 0 | 0 | 0 | 1 | false | 5 | 2019-11-18T07:50:00.000 | 7 | 2 | 0 | Sorting performance comparison between numpy array, python list, and Fortran | 58,910,042 | 1 | python,performance,numpy | You seem to be misunderstanding what NumPy does to speed up computations.
The speedup you get in NumPy does not come from NumPy using some smart way of storing data, or from compiling your Python code to C automatically.
Instead, NumPy implements many useful algorithms in C or Fortran, numpy.sort() being one of them. These functions understand np.ndarrays as input and loop over the data in a C/Fortran loop.
If you want to write fast NumPy code there are really three ways to do that:
Break down your code into NumPy operations (multiplications, dot-product, sort, broadcasting etc.)
Write the algorithm you want to implement in C/Fortran and also write bindings to Python that accept np.ndarrays (internally they're a contiguous array of the type you've chosen).
Use Numba to speed up your function by having Python Just-In-Time compile your code to machine code (with some limitations) | I have been using Fortran for my computational physics related work for a long time, and recently started learning and playing around with Python. I am aware of the fact that being an interpreted language Python is generally slower than Fortran for primarily CPU-intensive computational work. But then I thought using numpy would significantly improve the performance for a simple task like sorting.
So my test case was sorting an array/a list of size 10,000 containing random floats using bubble sort (just a test case with many array operations, so no need to comment on the performance of the algorithm itself). My timing results are as follows (all functions use identical algorithm):
Python3 (using numpy array, but my own function instead of numpy.sort): 33.115s
Python3 (using list): 9.927s
Fortran (gfortran) : 0.291s
Python3 (using numpy.sort): 0.269s (not a fair comparison, since it uses a different algorithm)
I was surprised that operating with numpy array is ~3 times slower than with python list, and ~100 times slower than Fortran. So at this point my questions are:
Why operating with numpy array is significantly slower than python list for this test case?
In case an algorithm that I need is not already implemented in scipy/numpy, and I need to write my own function within Python framework with best performance in mind, which data type I should operate with: numpy array or list?
If my applications are performance oriented, and I want to write functions with equivalent performance as in-built numpy functions (e.g. np.sort), what tools/framework I should learn/use? | 0 | 1 | 3,448 |
0 | 58,939,946 | 0 | 0 | 1 | 0 | 1 | false | 0 | 2019-11-19T12:37:00.000 | 1 | 2 | 0 | fast light and accurate person-detection algorithm to run on raspberry pi | 58,934,308 | 0.099668 | python,computer-vision,raspberry-pi3,object-detection,robotics | The Raspberry Pi does not have the computational capacity for object detection plus RealSense driver support; check the processor load once you start the RealSense application. One of the simplest models for person detection is OpenCV's HOGDescriptor, which you have already used. | Hope you are doing well.
I am trying to build a robot that follows a person.
I have a Raspberry Pi and a calibrated stereo camera setup. Using the camera setup, I can find the depth value of any pixel with respect to the reference frame of the camera.
My plan is to use the camera feed to detect a person, then use the stereo camera to find the average depth value and thus the distance, and from that calculate the position of the person with respect to the camera and drive the motors of my robot accordingly using PID.
Now I have the robot running and person detection working using the HOGDescriptor that comes with OpenCV. But the problem is, even with non-max suppression, the detector is not stable enough to use on a robot, as too many false positives and losses of tracking occur pretty often.
So my question is: can you suggest a good way to track only people? Maybe a light NN of some sort, as I plan to run it on a Raspberry Pi 3B+.
I am using intel d435 as my depth camera.
TIA | 0 | 1 | 527 |
0 | 58,935,738 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-11-19T12:48:00.000 | 1 | 1 | 0 | Share python modules across computers in a dask.distributed cluster | 58,934,528 | 1.2 | python,dask | Such things are possible with networking solutions such as NFS or SSH remote mounts, but that's a pretty big field and beyond the scope of Dask itself. If you are lucky, other answers will appear here, others have solved similar problems, but more likely copying is the simpler solution. | I have a ssh dask.distributed cluster with a main computer containing all modules for my script and another one with only a few, including dask itself of course.
Is it possible to change the syspath of the other computer so that it also looks for modules in the main one? Of course, I could simply upload them via sftp but since I keep making a lot of modules that would be very annoying to do repeatedly. | 0 | 1 | 43 |
0 | 58,937,944 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-11-19T15:26:00.000 | 1 | 2 | 0 | How to find the intersecting area of two sub images using OpenCv? | 58,937,483 | 0.099668 | python,opencv | MatchTemplate returns the most probable position of a template inside a picture. You could do the following steps:
Find the (x,y) origin, width and height of each picture inside the larger one
Save them as rectangles with that data(cv::Rect r1, cv::Rect r2)
Using the & operator, find the overlap area between both rectangles (r1&r2) | Let's say there are two sub images of a large image. I am trying to detect the overlapping area of two sub images. I know that template matching can help to find the templates. But i'm not sure how to find the intersected area and remove them in either one of the sub images. Please help me out. | 0 | 1 | 420 |
0 | 58,942,213 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-11-19T19:57:00.000 | 2 | 2 | 0 | How to scale numpy matrix in Python? | 58,941,935 | 0.197375 | python,numpy,machine-learning,scaling,numpy-ndarray | subtract each column's minimum from itself
for each column of the result divide by its maximum
for column 0 of that result multiply by 11-1.5
for column 1 of that result multiply by 5--0.5
add 1.5 to column zero of that result
add -0.5 to column one of that result
You could probably combine some of those steps. | I have this numpy matrix:
x = np.random.randn(700,2)
What I want to do is scale the values of the first column to the range 1.5 to 11, and the values of the second column to the range -0.5 to 5.0. Does anyone have an idea how I could achieve this? Thanks in advance | 0 | 1 | 4,738
0 | 58,959,742 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-11-20T16:53:00.000 | 0 | 2 | 0 | Is there any Python code for Convolutional Neural Network, but without Tensorflow/Theano/Scikit etc? | 58,959,547 | 0 | python,tensorflow,deep-learning,conv-neural-network | A lot of Deep Learning courses will ask the student to implement a CNN in Python with just numpy, then teach them to achieve the same result with Tensorflow etc. You can just search on Github for "Deep-Learning-Coursera" and you will probably find something like this https://github.com/enggen/Deep-Learning-Coursera/blob/master/Convolutional%20Neural%20Networks/Week1/Convolution%20model%20-%20Step%20by%20Step%20-%20v2.ipynb, where the CNN functions are implemented without Tensorflow. | I hope there will be some code where the Convolutional Neural Network will be implemented without Tensorflow OR theano OR Scikit etc. I searched over the google, but google is so crazy some time :), if i write "CNN without Tensorflow" it just grab the tesorflow part and show me all the results with tesorflow :( and if i skip the tensorflow, it again shows me some how similar results. any help please. | 0 | 1 | 1,274 |
0 | 58,965,640 | 0 | 0 | 0 | 0 | 1 | true | 13 | 2019-11-20T19:15:00.000 | 16 | 1 | 0 | set `torch.backends.cudnn.benchmark = True` or not? | 58,961,768 | 1.2 | python,pytorch | If your model does not change and your input sizes remain the same - then you may benefit from setting torch.backends.cudnn.benchmark = True.
However, if your model changes: for instance, if you have layers that are only "activated" when certain conditions are met, or you have layers inside a loop that can be iterated a different number of times, then setting torch.backends.cudnn.benchmark = True might stall your execution. | I am using pytorch and I wonder if I should use torch.backends.cudnn.benchmark = True. I find on google that I should use it when computation graph does not change. What is computation graph in pytorch? | 0 | 1 | 9,045 |
0 | 58,965,054 | 0 | 0 | 0 | 0 | 2 | true | 2 | 2019-11-20T23:26:00.000 | 2 | 5 | 0 | How to check machine learning accuracy without cross validation | 58,964,954 | 1.2 | python,machine-learning,scikit-learn,neural-network,random-forest | Splitting your data is critical for evaluation.
There is no way you could train your model on 100% of the data and still get a correct evaluation accuracy unless you expand your dataset. I mean, you could change your train/test split, or try to optimize your model in other ways, but I guess the simple answer to your question would be no. | I have training samples X_train and Y_train to train on, and X_estimated.
I was given the task of making my classifier learn as accurately as it can, and then predicting a vector of results over X_estimated to get results close to Y_estimated (which I have now, and which I have to match as precisely as possible). If I split my training data roughly 75/25 to train and test it, I can get accuracy using sklearn.metrics.accuracy_score and a confusion matrix. But I am losing that 25% of samples, which would make my predictions more accurate.
Is there any way, I could learn by using 100% of the data, and still be able to see accuracy score (or percentage), so I can predict it many times, and save best (%) result?
I am using random forest with 500 estimators, and usually get like 90% accuracy. I want to save best prediction vector as possible for my task, without splitting any data (not wasting anything), but still be able to calculate accuracy (so I can save best prediction vector) from multiple attempts (random forest always shows different results)
Thank you | 0 | 1 | 2,055 |
0 | 58,975,977 | 0 | 0 | 0 | 0 | 2 | false | 2 | 2019-11-20T23:26:00.000 | 0 | 5 | 0 | How to check machine learning accuracy without cross validation | 58,964,954 | 0 | python,machine-learning,scikit-learn,neural-network,random-forest | It is not necessary to do a 75/25 split of your data all the time; 75/25 is kind of old school now. It greatly depends on the amount of data that you have. For example, if you have 1 billion sentences for training a language model, it is not necessary to reserve 25% for testing.
Also, I second the previous answer of trying K-fold cross-validation. As a side note, you could consider looking at the other metrics like precision and recall as well. | I have training sample X_train, and Y_train to train and X_estimated.
I got task to make my classificator learn as accurate as it can, and then predict vector of results over X_estimated to get close results to Y_estimated (which i have now, and I have to be as much precise as it can). If I split my training data to like 75/25 to train and test it, I can get accuracy using sklearn.metrics.accuracy_score and confusion matrix. But I am losing that 25% of samples, that would make my predictions more accurate.
Is there any way, I could learn by using 100% of the data, and still be able to see accuracy score (or percentage), so I can predict it many times, and save best (%) result?
I am using random forest with 500 estimators, and usually get like 90% accuracy. I want to save best prediction vector as possible for my task, without splitting any data (not wasting anything), but still be able to calculate accuracy (so I can save best prediction vector) from multiple attempts (random forest always shows different results)
Thank you | 0 | 1 | 2,055 |
0 | 58,966,156 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2019-11-21T01:53:00.000 | 1 | 2 | 0 | Can we create an ensemble of deep learning models without increasing the classification time? | 58,966,086 | 0.099668 | python,deep-learning,ensemble-learning | There is no magic pill for doing what you want. Extra computation cannot come free.
So one way this can be achieved is by using multiple worker machines to run inference in parallel.
Each model could run on a different machine using tensorflow serving.
For every new inference do the following:
Have a primary machine which takes up the job of running the inference
This primary machine, submits requests to different workers (all of which can run in parallel)
The primary machine collects results from each individual worker, and creates the final output by combining them based upon your ensemble logic. | I want to improve my ResNet model by creating an ensemble of X number of this model, taking the X best one I have trained. For what I've seen, a technique like bagging will take X time longer to classify an image, which is really not an option in my case.
Is there a way to create an ensemble without increasing the required classification time? Note that I don't care about increasing the training time, because it only needs to be done once, compared to classification, which could be done a very large number of times. | 0 | 1 | 293
0 | 58,969,593 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-11-21T02:08:00.000 | 1 | 1 | 0 | How to add an additional binary variable with CPLEX and Python? | 58,966,188 | 0.197375 | python,mathematical-optimization,linear-programming,cplex | You did not say whether you use the CPLEX Python API or docplex. But in either case, you can call the functions that create variables multiple times.
So in the CPLEX Python API call Cplex.variables.add() again to add another set of variables.
In docplex, just call Model.binary_var_dict() (or whatever method you used to create X) again for the Y variables. | I have an integer programming problem with a decision variable X_i_j_k_t that is 1 if job i was assigned to worker j on day k and shift t. I am maximizing the benefit of assigning orders to my workers. I have an additional binary variable Y_i_k_t that is 1 if the job was executed on a given day and shift (jobs might require more than one worker). How can I add this variable in CPLEX, so as to form, for example, sum(i, k, t)(Y_i_k_t) <= 1 (the order can't be done more than once)?
Thank you in advance | 0 | 1 | 1,334 |
0 | 58,976,553 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-11-21T13:21:00.000 | 1 | 1 | 0 | Neural Networks - Checking for node activation | 58,976,062 | 0.197375 | python,neural-network,artificial-intelligence | Somehow techytushar's comment nudged my brain into a new line of reasoning, which I think has been very helpful:
So the problem I'm addressing is: 'There can be no dormant code.' Be that lines of C or array elements that are never, and can never, be accessed.
So when the trained NN runs as a compiled C application, the application will calculate the value of each neuron and evaluate its activation function irrespective of the node's input value(s). So there is actually no such thing as dormant code or array elements in this regard: just a true/false output for that node's activation at that moment. It might change in the next moment. It will all be recalculated, even if mathematically the result is always no activation.
So the question then moves away from this subject, to ensuring that no combinations of node activation can result in the system being in a dangerous state. That's off topic of the original question, so I think I can draw a line under this...? | I'm involved in a research project that is looking at using Neural Networks in a safety critical environment. Part of the regulatory framework this research is targeted towards states that there must be no dormant code within the system. There must be a pathway through every part of the system and that pathway must be testable/verifiable.
Obviously the neural network is comprised of many nodes. The input/output nodes are easy to test for activation, but does anyone know of a method of testing activation of hidden layer nodes?
Obviously the activation depends on the node's input values and activation function, and there may be a mathematical approach to this.
Ultimately the code will be in C/C++, but we're doing the NN development in Python. So any ideas involving related toolsets would be gratefully received. I could also export/import the NN structure and matrices to another package or environment if that helps with this testing.
Hopefully you'll all be overflowing with ideas, because Google didn't offer anything. :(
Thanks. | 0 | 1 | 108 |
0 | 66,943,729 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2019-11-21T23:17:00.000 | 1 | 3 | 0 | opencv imwrite, image get rotated | 58,985,183 | 0.066568 | python,image,opencv,png,jpeg | One possible solution is to change cv2.IMREAD_UNCHANGED to cv2.IMREAD_COLOR while loading image with imdecode. From some reason "unchanged" is not able to read EXIF metadata correctly | I am using opencv in python (cv2) to do some processing on images with format jpg, png and jpeg, JPG. I am doing a test that write image to disk using "cv2.imwrite" right after reading from "cv2.imread". I found part of the image get rotated, some of them rotate 90d, some rotate 180d. But most image keep the right orientation. I cannot conclude a pattern that causes this rotation. Anyone knows more details? Thanks! | 0 | 1 | 2,287 |
0 | 58,993,348 | 0 | 1 | 0 | 0 | 1 | true | 1 | 2019-11-22T11:22:00.000 | 2 | 1 | 0 | Convert a str(numpy array) representaion to a numpy array - Python | 58,993,231 | 1.2 | python,numpy | Try numpy.array([int(v) for v in your_str[1:-1].split()]) | Let's say I have a numpy array a = numpy.array([1,2,3,4]). Now
str(a) will give me "[1 2 3 4]". How do I convert the string "[1 2 3 4]" back to a numpy.array([1,2,3,4])? | 0 | 1 | 77 |
0 | 58,994,326 | 0 | 0 | 0 | 1 | 1 | false | 1 | 2019-11-22T12:25:00.000 | 1 | 2 | 0 | How to integrate spreadsheet/excel kind of view to my application using python? | 58,994,269 | 0.099668 | python | I think working with PyQt for large application is the best option ( for large applications ) but tkinter is the secondary option for fast small apps. | I am trying to create an application using python, In which I would like to able to read a .csv or .xlsx file and display its contents on my application, I believe there should be some packages which helps to do this in python, can I have some suggestions?
Regards,
Ram | 0 | 1 | 36 |
0 | 59,466,483 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-11-22T17:36:00.000 | 0 | 1 | 0 | False positive number Bloom filter | 58,999,182 | 0 | python,dataframe,hash,bloom-filter | Yes, and it is very simple.
Count the number of bits that are 'on' and divide that by the total number of bits. This will give you your fill-rate.
When querying, all elements that were inserted earlier will hit 'on' bits and return positive. For elements which were not inserted into the filter, the probability of hitting an 'on' bit is your fill-rate. Therefore, with 3 hash functions, your error-rate will be (fill_rate^3).
Though 0.5 is the optimal fill-rate that maximizes space vs. error-rate, any other fill rate is possible but it will either take too much space or has a higher error-rate than required. So you may be better off using 4 hash functions with less space. It really depends on your use case. What is your requirement? what error-rate are you looking for? | I implemented a bloom filter with 3 hash functions, and now I should calculate the exact number of false positives (not possibility) in that filter. Is there an efficient way to calculate that? The number of items in the filter is 200 million and the size of bit array is 400 million | 0 | 1 | 308 |
0 | 59,011,075 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2019-11-23T09:38:00.000 | 1 | 2 | 0 | Will pandas.read_excel preserve column order? | 59,006,318 | 1.2 | python,python-3.x,pandas | pandas will return to you the column order exactly as in the original file. If the order changes in the file, the order of columns in the dataframe will change too.
You can define the column order yourself when reading in the data. Sometimes you'd also load the data, check what columns are present (with dataframe.columns.values) and then apply certain heuristic to preprocess them. | I need to read a sheet in excel file. But the number of columns(approx 100 to 150), column names and column position may change everyday in the sheet. Will pandas.read_excel return a dataframe with columns in the same order as they are in my daily excel sheet ? I'm using pandas 0.25.3 | 0 | 1 | 1,802 |
0 | 59,016,457 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-11-24T09:49:00.000 | 0 | 1 | 0 | Issues installing sklearn_pandas package | 59,016,428 | 0 | python | Your pip doesn't recognized and constantly showing this message while executing: 'pip' is not recognized as an internal or external command, operable program or batch file. If your python version is 3.x.x format then you use pip3 not pip anymore. The usage is pip3 is exactly the same as pip | I have been trying to install the sklearn_pandas package. I tried two methods which I found online:
1) By running 'pip install sklearn-pandas' in the Windows command line in the same location as my Python working directory:
This resulted in the error ''pip' is not recognized as an internal or external command, operable program or batch file.' So I tried 'python -m pip install sklearn-pandas'. This got executed but showed nothing (no message/warning etc) in terms of output.
After this I attempted to import a function from sklearn_pandas in a code (using Spyder IDE), but got an error saying 'No module named 'sklearn_pandas''.
2) After the above, I attempted another suggestion which was to execute 'easy_install sklearn_pandas'. I ran this in the Spyder IDE and got an error saying invalid syntax.
Could someone help me out with this? Thanks | 0 | 1 | 225 |
0 | 59,034,403 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2019-11-25T07:23:00.000 | 0 | 1 | 0 | Auto encoder and decoder on numerical data-set | 59,026,939 | 0 | python,neural-network,deep-learning,cryptography,autoencoder | Sure you can apply classical Autoencoders to numerical data. In its simplest form its just a matrix multiplication to a lower dimensional space and then a matrix multiplication to the original space and a L2-Loss based on the reconstruction and the input. You could start from there and add layers if your performance is insufficient. | i am working on cryptography.The data set i am using is numerical data to resolve the dimensional reduction issue i want to apply auto encoder and decoder neural network. is it possible to apply Auto Encoder on numerical data set if yes then how? | 0 | 1 | 226 |
0 | 59,027,703 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2019-11-25T08:18:00.000 | 0 | 2 | 0 | Which Python version should I download to run tensorflow GPU | 59,027,575 | 0 | python,tensorflow,gpu | you can use any
but it's better to use Python 3 with pip version 19.0
as for the CUDA version, you need to check which CUDA version can be run on your GPU
and the cuDNN version will be decided by your CUDA version | I am trying to set up my CUDA TensorFlow on my Windows 10 machine.
I would like to know what is the newest version of Python that works without bugs with the CUDA tensorflow and in which version.
Also what version of cuDNN should i use?
Thanks a lot. | 0 | 1 | 357 |
0 | 59,032,616 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2019-11-25T12:48:00.000 | 0 | 1 | 0 | Finding an element of pandas series at a certain time (date) | 59,032,213 | 0 | python-3.x,pandas,numpy,time-series | If your dataframe is indexed by date, you can:
df[date] to access all the rows indexed by such date (e.g. df['2019-01-01']);
df[date1:date2] to access all the rows with date index between date1 and date2 (e.g. df['2019-01-01': '2019-11-25']);
df[:date] to access all the rows with index before date value (e.g. df[:'2019-01-01']);
df[date:] to access all the rows with index after date value (e.g. df['2019-01-01':]). | I have some pandas series with the type "pandas.core.series.Series". I know that I can see its datetimeindex when I add ".index" to the end of it.
But what if I want to get the element of the series at that time? And what if I have a "pandas._libs.tslibs.timestamps.Timestamp" and want to get the element of the series at that time? | 0 | 1 | 135
0 | 66,849,256 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2019-11-25T21:18:00.000 | 0 | 2 | 0 | MultiLabel Soft Margin Loss in PyTorch | 59,040,237 | 0 | python,pytorch,loss-function,softmax | In pytorch 1.8.1, I think the right way to do is fill the front part of the target with labels and pad the rest part of the target with -1. It is the same as the MultiLabelMarginLoss, and I got that from the example of MultiLabelMarginLoss. | I want to implement a classifier which can have 1 of 10 possible classes. I am trying to use the MultiClass Softmax Loss Function to do this. Going through the documentation I'm not clear with what input is required for the function.
The documentation says it needs two matrices of [N, C] of which one is input and the other is target. As much as I understand, input matrix would be the one my Neural Network would calculate, which would have probabilities given by the neural network to each of the 10 classes. The target is the one that I have from my dataset.
The documentation says - "Target(N, C) - label targets padded by -1 ensuring same shape as the input." What does this mean? Do I pass zeros in incorrect classes and -1 for the correct one?
It would be great if someone could elaborate on this and show even a sample 2d matrix which could be passed as a target matrix. | 0 | 1 | 3,994 |
0 | 59,041,168 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2019-11-25T22:22:00.000 | 1 | 2 | 0 | Deploy function in AWS Lamda (package size exceeds) | 59,040,958 | 0.099668 | python,amazon-web-services,numpy,tensorflow,aws-lambda | when the zip file size is bigger than 49 mb, You can upload the zip file to Amazon S3 and use it to update the function code.
aws lambda update-function-code --function-name calculateMath --region us-east-1 --s3-bucket calculate-math-bucket --s3-key 100MBFile.zip | I am trying to deploy my function on AWS Lambda. I need the following packages for my code to function:
keras-tensorflow
Pillow
scipy
numpy
pandas
I tried installing using docker and uploading the zip file, but it exceeds the file size.
Is there a workaround for this? How can I use these packages with my Lambda function? | 0 | 1 | 273
0 | 59,877,757 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-11-26T01:56:00.000 | 0 | 1 | 0 | Tensorflow-GPU getting stuck saving checkpoint during training - also not using entire GPU, not sure why | 59,042,621 | 0 | python,tensorflow | Had the same issues till I upgradet the Nvidia driver from Version 441.28 to the newest Version.
After this, the training runs without stops or freezes. | GPU: Nvidia GTX 2070
Python Version: 3.5
Tensorflow: 1.13.1
CUDA: 10
cuDNN: 7.4
Model: Faster-RCNN-Inception-V2
I am using the legacy method of training my model (train.py), and when I run it as such
python train.py --logtostderr --train_dir=training/ --pipeline_config_path=training/faster_rcnn_inception_v2_pets.config
The training runs for some random amount of time (it usually gets stuck around the 150th step, but it will often make it up to 300-700 sometimes when I try it) and then will get stuck attempting to save a checkpoint. I reach the point where it just says
INFO:tensorflow:global step 864: loss = 0.4430 (0.996 sec/step)
INFO:tensorflow:Saving checkpoint to path training/model.ckpt
INFO:tensorflow:Saving checkpoint to path training/model.ckpt
And does not move past that point. Once it reaches this point, I also become incapable of killing the program no matter which methods I try and am forced to simply close the terminal window if I want the process to stop.
Additionally, based on what I have read, the program should theoretically be using up close to 100% of my GPU while it trains but it only ends up using about 10%. I'm not sure if those two things are related but I feel it is probably worth mentioning, especially considering I would like to have it train as fast as possible if I do manage to get it working.
I've seen others post about similar issues in the past but none seem to have any answers. If anyone has any idea please let me know! Thanks. | 0 | 1 | 778 |
0 | 59,050,279 | 0 | 0 | 0 | 1 | 1 | false | 1 | 2019-11-26T11:34:00.000 | 0 | 2 | 0 | Fill an existing Excel file with data from a Pandas DataFrame | 59,050,052 | 0 | python,python-3.x,pandas,dataframe | If your # of columns and order is same then you may try xlsxwriter and also mention the sheet name to want to refresh:
df.to_excel('filename.xlsx', engine='xlsxwriter', sheet_name='sheetname', index=False) | I have a Pandas DataFrame with a bunch of rows and labeled columns.
I also have an Excel file that I prepared with one sheet which contains no data, only
labeled columns in row 1, and each column is formatted as it should be: for example, if I
expect percentages in one column, then that column will automatically convert a raw number to a percentage.
What I want to do is fill the raw data from my DataFrame into that Excel sheet in such a way
that row 1 remains intact so the column names remain. The data from the DataFrame should fill
the Excel rows starting from row 2, and the pre-formatted columns should take care of converting
the raw numbers to their appropriate type; filling the data should not override the column format.
I tried using openpyxl, but it ended up creating a new sheet and overwriting everything.
Any help? | 0 | 1 | 1,888 |
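A rough sketch of the openpyxl route the asker mentions, writing under the existing header instead of creating a new sheet; 'template.xlsx', 'Sheet1', and the sample DataFrame are assumed names, not details from the question.

import pandas as pd
from openpyxl import load_workbook

df = pd.DataFrame({"name": ["a", "b"], "share": [0.25, 0.5]})  # hypothetical data

wb = load_workbook("template.xlsx")  # assumed file name
ws = wb["Sheet1"]                    # assumed sheet name

# Write values starting at row 2 so the formatted header row in row 1 stays intact.
# Whether pre-applied column number formats carry over to the new cells depends on
# how the template was formatted in Excel.
for r, row in enumerate(df.itertuples(index=False), start=2):
    for c, value in enumerate(row, start=1):
        ws.cell(row=r, column=c, value=value)

wb.save("template.xlsx")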
0 | 59,053,007 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2019-11-26T14:11:00.000 | 0 | 2 | 0 | how to find cosine similarity in a pre-computed matrix with a new vector? | 59,052,818 | 0 | python,pandas,machine-learning,scikit-learn,computer-vision | The initial (5000,5000) matrix encodes the similarity values of all your 5000 items in pairs (i.e. symmetric matrix).
To have the similarities in case of a new item, concatenate and make a (5001, 2048) matrix and then estimate similarity again to get (5001,5001)
In other words, you can not directly use the (5000,5000) precomputed matrix to get the similarity with the new (1,2048) vector. | I have a dataframe with 5000 items(rows) and 2048 features(columns).
Shape of my dataframe is (5000, 2048).
When I calculate the cosine matrix using pairwise distances in sklearn, I get a (5000,5000) matrix.
Here I can compare every item with every other item.
But now, if I have a new vector of shape (1,2048), how can I find the cosine similarity of this item with the earlier dataframe, using the (5000,5000) cosine matrix which I have already calculated?
EDIT
PS: I can append this new vector to my dataframe and calculate the cosine similarity again, but for large amounts of data this gets slow. Is there any other fast and accurate distance metric? | 0 | 1 | 241 |
0 | 59,053,865 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2019-11-26T14:11:00.000 | 0 | 2 | 0 | how to find cosine similarity in a pre-computed matrix with a new vector? | 59,052,818 | 0 | python,pandas,machine-learning,scikit-learn,computer-vision | Since cosine similarity is symmetric. You can compute the similarity meassure with the old data matrix, that is similarity between the new sample (1,2048) and old matrix (5000,2048) this will give you a vector of (5000,1) you can append this vector into the column dimension of the pre-computed cosine matrix making it (5000,5001) now since you know the cosine similarity of the new sample to itself. you can append this similarity to itself, back into the previously computed vector making it of size (5001,1), this vector you can append in the row dimension of the new cosine matrix that will make it (5001,5001) | I have a dataframe with 5000 items(rows) and 2048 features(columns).
Shape of my dataframe is (5000, 2048).
When I calculate the cosine matrix using pairwise distances in sklearn, I get a (5000,5000) matrix.
Here I can compare every item with every other item.
But now, if I have a new vector of shape (1,2048), how can I find the cosine similarity of this item with the earlier dataframe, using the (5000,5000) cosine matrix which I have already calculated?
EDIT
PS: I can append this new vector to my dataframe and calculate the cosine similarity again, but for large amounts of data this gets slow. Is there any other fast and accurate distance metric? | 0 | 1 | 241 |
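A rough sketch of the incremental update described above, using scikit-learn; X and new_vec are random stand-ins for the question's data.

import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

X = np.random.rand(5000, 2048)      # stand-in for the existing feature matrix
sim = cosine_similarity(X)          # the precomputed (5000, 5000) matrix
new_vec = np.random.rand(1, 2048)   # the new item

# Only the new row/column needs computing: the new item against the old data.
new_sim = cosine_similarity(new_vec, X)          # shape (1, 5000)

# Grow the matrix to (5001, 5001) instead of recomputing everything.
sim = np.hstack([sim, new_sim.T])                # (5000, 5001)
new_row = np.hstack([new_sim, [[1.0]]])          # similarity of the item with itself
sim = np.vstack([sim, new_row])                  # (5001, 5001)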
0 | 59,063,566 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-11-27T04:57:00.000 | 0 | 1 | 0 | How should I group these elements such that overall variance is minimized? | 59,063,240 | 0 | python,r,algorithm,optimization,minimization | I would sort the numbers into increasing order and then use dynamic programming to work out where to place the boundaries between groups of contiguous elements. For example, if the only constraint is that every number must be in exactly one group, work from left to right. At each stage, for i=1..n work out the set of boundaries that produces minimum variance computed among the elements seen so far for i groups. For i=1 there is no choice. For i>1 consider every possible location for the boundary of the last group, and look up the previously computed answer for the best allocation of items before this boundary into i-1 groups, and use the figure previously computed here to work out the contribution of the variance of the previous i-1 groups.
(I haven't done the algebra, but I believe that if you have groups A and B where mean(A) < mean(B) but there are elements a in A and b in B such that a > b, you can reduce the variance by swapping these between groups. So the lower variance must come from groups that are contiguous when the elements are written out in sorted order). | I have a set of elements, which is for example
x= [250,255,273,180,400,309,257,368,349,248,401,178,149,189,46,277,293,149,298,223]
I want to group these into n groups A, B, C, ... such that the sum of all group variances is minimized. The groups need not have the same number of elements.
I would like an optimization approach in Python or R. | 0 | 1 | 343 |
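A small Python sketch of the sort-then-dynamic-programming idea above; the choice of 3 groups is arbitrary, since the question leaves n open.

import numpy as np

def min_total_variance_groups(values, n_groups):
    # Split sorted values into contiguous groups minimising the sum of group variances.
    xs = sorted(values)
    n = len(xs)
    INF = float("inf")
    # best[k][i]: minimal cost of splitting the first i sorted elements into k groups
    best = [[INF] * (n + 1) for _ in range(n_groups + 1)]
    cut = [[0] * (n + 1) for _ in range(n_groups + 1)]
    best[0][0] = 0.0
    for k in range(1, n_groups + 1):
        for i in range(k, n + 1):
            for j in range(k - 1, i):  # last group is xs[j:i]
                cost = best[k - 1][j] + float(np.var(xs[j:i]))
                if cost < best[k][i]:
                    best[k][i] = cost
                    cut[k][i] = j
    # Walk the stored cut points backwards to recover the groups.
    groups, i = [], n
    for k in range(n_groups, 0, -1):
        j = cut[k][i]
        groups.append(xs[j:i])
        i = j
    return list(reversed(groups)), best[n_groups][n]

x = [250, 255, 273, 180, 400, 309, 257, 368, 349, 248,
     401, 178, 149, 189, 46, 277, 293, 149, 298, 223]
groups, total_variance = min_total_variance_groups(x, 3)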
0 | 59,095,464 | 0 | 0 | 0 | 0 | 1 | false | 4 | 2019-11-27T20:57:00.000 | 1 | 2 | 0 | Compare Number of Equal Elements in Tensors | 59,078,318 | 0.099668 | python,pytorch,equality,tensor | Something like
equal_count = len((tensor_1.flatten() == tensor_2.flatten()).nonzero().flatten())
should work. | I have two tensors of dimension 1000 * 1. I want to check how many of the 1000 elements are equal in the two tensors. I think I should be able to do this in one line, as in NumPy, but couldn't find a similar function. | 0 | 1 | 4,942 |
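An equivalent, slightly more direct variant, assuming both tensors are PyTorch tensors of the same shape; the random tensors below are stand-ins for the question's data.

import torch

tensor_1 = torch.randint(0, 5, (1000, 1))  # stand-in values
tensor_2 = torch.randint(0, 5, (1000, 1))

# Element-wise comparison yields a boolean tensor; summing it counts the matches.
equal_count = (tensor_1 == tensor_2).sum().item()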