Title | A_Id | Users Score | Q_Score | ViewCount | Database and SQL | Tags | Answer | GUI and Desktop Applications | System Administration and DevOps | Networking and APIs | Other | CreationDate | AnswerCount | Score | is_accepted | Q_Id | Python Basics and Environment | Data Science and Machine Learning | Web Development | Available Count | Question |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
how to save jupyter output into a pdf file | 41,493,134 | 3 | 6 | 19,046 | 0 | python-2.7,pdf,jupyter-notebook | When I want to save a Jupyter Notebook I right click the mouse, select print, then change Destination to Save as PDF. This does not save the analysis outputs though. So if I want to save a regression output, for example, I highlight the output in Jupyter Notebook, right click, print, Save as PDF. This process creates fairly nice looking documents with code, interpretation and graphics all-in-one. There are programs that allow you to save more directly but I haven't been able to get them to work. | 0 | 0 | 0 | 0 | 2016-07-29T10:55:00.000 | 2 | 0.291313 | false | 38,657,054 | 0 | 1 | 1 | 1 | I am doing some data science analysis on jupyter and I wonder how to get all the output of my cell saved into a pdf file ?
thanks |
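The answer above notes that there are tools which export a notebook (outputs included) more directly; one such tool is nbconvert, which ships with Jupyter. A minimal sketch, assuming nbconvert plus a LaTeX toolchain are installed, and using a hypothetical notebook named analysis.ipynb:

import subprocess

# Export the notebook, including cell outputs, to PDF via nbconvert.
# "analysis.ipynb" is a hypothetical file name; a LaTeX toolchain is required
# for PDF output (use "--to html" if LaTeX is not available).
subprocess.check_call(["jupyter", "nbconvert", "--to", "pdf", "analysis.ipynb"])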
What's the advantage of making your appengine app thread safe? | 38,665,116 | 0 | 1 | 171 | 0 | python,multithreading,google-app-engine | Multithreading is not related to billing in any way - you still pay for one instance even if 10 threads are running in parallel on that instance. | 0 | 1 | 0 | 0 | 2016-07-29T17:54:00.000 | 1 | 0 | false | 38,664,788 | 0 | 0 | 1 | 1 | I have an appserver that is getting painfully complicated in that it has to buffer data from incoming requests then push those buffers out, via pubsub, after enough has been received. The buffering isn't the problem, but efficient locking is... hairy, and I'm concerned that it's slowing down my service. I'm considering dropping thread safety in order to remove all the locking, but I'm worried that my app instance count will have to double (or more) to handle the same user load.
My understanding is that a threadsafe app is one where each thread is a billed app instance. In other words, I get billed for two instances by allowing multiple threads to run in a process, with the only advantage being that the threads can share memory and therefore, have a smaller overall footprint.
So to rephrase, does a multithreaded app instance handle multiple simultaneous connections, or is each billed app instance a separate thread - only capable of handling one request at a time? If I remove thread safety, am I going to need to run a larger pool of app instances? |
Can I use Hendrix to run Falcon app? | 38,676,247 | 2 | 2 | 52 | 0 | python,tornado,wsgi,falcon,hendrix | So found the solution. Created a python file according to hendrix's docs. And imported my app's wsgi callable there. | 0 | 1 | 0 | 0 | 2016-07-30T11:42:00.000 | 1 | 0.379949 | false | 38,673,550 | 0 | 0 | 1 | 1 | Hendrix is a WSGI compatible server written in Tornado. I was wondering if it can be used to run an app written in Falcon ? |
python django: create a new virtualenv for each django project? | 38,676,435 | 2 | 0 | 1,824 | 0 | python,django,virtualenv | Personally I do.
Virtualenvs help you keep the dependencies required for a project organised and manageable. If you have a django 1.7 project, it will require django1.7 and thus install it in your virtualenv. Without a virtualenv, you might decide to take on a project that requires django1.10. This means your django1.7 project might break. To avoid such a scenario use a virtual environment. | 0 | 0 | 0 | 0 | 2016-07-30T16:50:00.000 | 4 | 0.099668 | false | 38,676,315 | 1 | 0 | 1 | 1 | Do you create a new virtualenv every time you start a new project?
I'm going through some tutorials on the web and they create a virtualenv first then pip install django in the virtualenv. But there's one tutorial that i saw saying that you wouldn't create a project within the virtualenv and its only used for dependencies. |
Web2PY caching password | 38,837,806 | 0 | 0 | 71 | 0 | python | Just wanted to close out on this in case someone in the future is looking at this as well.
I was able to capture the password used to login by adding the following to my db.py
def on_ldap_connect(form):
    username = request.vars.username
    password = request.vars.password
    # Save the username/password to a session variable or a secure file
    # to use for authenticating to other services.

auth.settings.login_onaccept.append(on_ldap_connect) | 0 | 0 | 1 | 0 | 2016-08-01T12:33:00.000 | 1 | 0 | false | 38,699,035 | 0 | 0 | 1 | 1 | I'm just starting to use Web2PY.
My basic one-page app authenticates users against an AD-based LDAP service.
I need to collect other data via REST API calls on behalf of the user from the server side of the app.
I'd like to cache the username and password of the user for a session so the user doesn't have to be prompted for credentials multiple times.
Is there an easy way to do this ? |
Django: Know when a file is already saved after usign storage.save() | 38,701,216 | 0 | 0 | 49 | 0 | python,django,amazon-web-services,amazon-s3 | No magic solution here. You have to manage states on your model, especially when working with celery tasks. You might need another field called state with the states: NONE (no action is being done), PROCESSING (task was sent to celery to process) and DONE (image was rotated).
NONE is the default state. You should set the PROCESSING state before calling the celery task (and not inside the celery task; I already had bugs because of that), and finally the celery task should set the status to DONE when finished.
When the task is fast the user will not see any difference but when it takes some time you might want to add a message "image is being processed, please try again" or something like that
At least that's how I do it... Hope this helps | 0 | 1 | 0 | 0 | 2016-08-01T13:15:00.000 | 1 | 0 | false | 38,699,927 | 0 | 0 | 1 | 1 | So I've a Django model which has a FileField. This FileField, contains generally an image. After I receipt the picture from a request, I need to run some picture analysis processes.
The problem is that sometimes, I need to rotate the picture before running the analysis (which runs in celery, loading the model again getting the instance by the id). So I get the picture, rotate it and save it with:
storage.save(image_name, new_image_file), where storage is the django default storage (using AWS S3)
The problem is that in some minor cases (let's say 1 in 1000), the picture is not yet rotated when the analysis process runs in celery, even though the rotation process was executed; yet if I open the image afterwards, it is already rotated. So it seems that the save method takes some time to update the file in the storage (asynchronously)...
Has anyone had a similar issue? Is there a way to check if the file was already updated, like with a callback or a kind of handler?
Many thanks! |
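A minimal sketch of the state-field pattern described in the answer above; the model, field and task names are hypothetical, and the celery task is assumed to re-load the instance by id and set DONE when it finishes:

from django.db import models

class Picture(models.Model):
    # Hypothetical model illustrating the NONE -> PROCESSING -> DONE states
    STATES = [("NONE", "NONE"), ("PROCESSING", "PROCESSING"), ("DONE", "DONE")]
    image = models.FileField(upload_to="pictures/")
    state = models.CharField(max_length=16, choices=STATES, default="NONE")

def start_analysis(picture):
    # Set PROCESSING *before* queueing the task, as the answer recommends.
    picture.state = "PROCESSING"
    picture.save(update_fields=["state"])
    analyze_picture.delay(picture.pk)  # hypothetical celery task defined elsewhere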
Using Google Forms to write to multiple tables? | 39,074,108 | 0 | 0 | 638 | 1 | python,sql,google-forms | You can add a script in the Google spreadsheet with an onsubmit trigger. Then you can do whatever you want with the submitted data. | 0 | 0 | 0 | 0 | 2016-08-01T16:37:00.000 | 1 | 0 | false | 38,703,892 | 0 | 0 | 1 | 1 | I am creating a web project where I take in Form data and write to a SQL database.
The forms will be a questionnaire with logic branching. Due to the nature of the form, and the fact that this is an MVP project, I've opted to use an existing form service (e.g Google Forms/Typeform).
I was wondering if it's feasible to have form data submitted to multiple different tables (e.g CustomerInfo, FormDataA, FormDataB, etc.). While this might be possible with a custom form application, I do not think it's possible with Google Forms and/or Typeform.
Does anyone have any suggestions on how to parse user submitted Form data into multiple tables when using Google Forms or Typeform? |
Service inside docker container stops after some time | 38,712,037 | 1 | 0 | 1,799 | 0 | python,rest,nginx,docker | Consider doing a docker ps -a to get the stopped container's identifier.
-a here just means listing all of the containers you got on your machine.
Then do docker inspect and look for the LogPath attribute.
Open up the container's log file and see if you could identify the root cause on why the process died inside the container. (You might need root permission to do this)
Note: A process can die because of anything, e.g. code fault
If nothing suspicious is presented in the log file then you might want to check on the State attribute. Also check the ExitCode attribute to see if you can work backwards to see which line of your application could have exited using that code.
Also check the OOMKilled flag, if this is true then it means your container could be killed due to out of memory error.
Well if you still can't figure out why then you might need to add more logging into your application to give you more insight on why it died. | 0 | 1 | 0 | 0 | 2016-08-02T04:14:00.000 | 1 | 0.197375 | false | 38,711,658 | 0 | 0 | 1 | 1 | I have deployed a rest service inside a docker container using uwsgi and nginx.
When I run this Python Flask REST service inside the Docker container, the service works fine for the first hour, but after some time nginx and the REST service stop for some reason.
Has anyone faced a similar issue?
Is there any known fix for this issue? |
Google App Engine - run task on publish | 38,742,762 | 0 | 1 | 66 | 0 | python,google-app-engine | The main question will be how to ensure it only runs once for a particular version.
Here is an outline on how you might approach it.
You create a HasRun model, which you use to store the version of the deployed app; this indicates whether the one-time code has been run.
Then make sure you increment your version whenever you deploy new code.
In your warmup handler or appengine_config.py, grab the deployed version,
then in a transaction try and fetch the new HasRun entity by Key (version number).
If you get the Entity then don't run the one time code.
If you can not find it then create it and run the one time code, either in a task (make sure the process is idempotent, as tasks can be retried) or in the warmup/front facing request.
Now you will probably want to wrap all of that in a memcache CAS operation to provide a lock of some sort, to prevent some other instance from trying to do the same thing.
Alternately if you want to use the task queue, consider naming the task the version number, you can only submit a task with a particular name once.
It still needs to be idempotent (again could be scheduled to retry) but there will only ever be one task scheduled for that version - at least for a few weeks.
Or a combination/variation of all of the above. | 0 | 1 | 0 | 0 | 2016-08-02T14:46:00.000 | 2 | 0 | false | 38,723,681 | 0 | 0 | 1 | 2 | I have been looking for a solution for my app that does not seem to be directly discussed anywhere. My goal is to publish an app and have it reach out, automatically, to a server I am working with. This just needs to be a simple Post. I have everything working fine, and am currently solving this problem with a cron job, but it is not quite sufficient - I would like the job to execute automatically once the app has been published, not after a minute (or whichever the specified time it may be set to).
In concept I am trying to have my app register itself with my server, and to do this I'd like for it to run once on publish and never be run again.
Is there a solution to this problem? I have looked at Task Queues and am unsure if it is what I am looking for.
Any help will be greatly appreciated.
Thank you. |
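A rough sketch of the HasRun-per-version idea from the first answer above, using NDB; the model name is hypothetical and CURRENT_VERSION_ID is assumed to be available in the App Engine environment:

import os
from google.appengine.ext import ndb

class HasRun(ndb.Model):
    pass  # keyed by version string; its existence is the "already ran" flag

@ndb.transactional
def _claim(version):
    key = ndb.Key(HasRun, version)
    if key.get() is not None:
        return False          # some instance already claimed this version
    HasRun(key=key).put()
    return True

def maybe_run_once(one_time_code):
    version = os.environ.get("CURRENT_VERSION_ID", "unknown")
    if _claim(version):
        one_time_code()       # keep this idempotent in case of retries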
Google App Engine - run task on publish | 38,794,445 | 2 | 1 | 66 | 0 | python,google-app-engine | Personally, this makes more sense to me as a responsibility of your deploy process, rather than of the app itself. If you have your own deploy script, add the post request there (after a successful deploy). If you use google's command line tools, you could wrap that in a script. If you use a 3rd party tool for something like continuous integration, they probably have deploy hooks you could use for this purpose. | 0 | 1 | 0 | 0 | 2016-08-02T14:46:00.000 | 2 | 0.197375 | false | 38,723,681 | 0 | 0 | 1 | 2 | I have been looking for a solution for my app that does not seem to be directly discussed anywhere. My goal is to publish an app and have it reach out, automatically, to a server I am working with. This just needs to be a simple Post. I have everything working fine, and am currently solving this problem with a cron job, but it is not quite sufficient - I would like the job to execute automatically once the app has been published, not after a minute (or whichever the specified time it may be set to).
In concept I am trying to have my app register itself with my server, and to do this I'd like for it to run once on publish and never be run again.
Is there a solution to this problem? I have looked at Task Queues and am unsure if it is what I am looking for.
Any help will be greatly appreciated.
Thank you. |
Sending pandas dataframe to java application | 38,724,313 | 0 | 2 | 2,769 | 0 | java,python,pandas,numpy,jython | Have you tried using xml to transfer the data between the two applications ?
My next suggestion would be to output the data in JSON format in a txt file and then call the java application which will read the JSON from the text file. | 0 | 0 | 0 | 0 | 2016-08-02T15:10:00.000 | 2 | 0 | false | 38,724,255 | 0 | 1 | 1 | 2 | I have created a python script for predictive analytics using pandas,numpy etc. I want to send my result set to java application . Is their simple way to do it. I found we can use Jython for java python integration but it doesn't use many data analysis libraries. Any help will be great . Thank you . |
Sending pandas dataframe to java application | 57,166,461 | 0 | 2 | 2,769 | 0 | java,python,pandas,numpy,jython | A better approach here is to use a Java pipe input, like python pythonApp.py | java read. The output of the Python application can be used as input for the Java application as long as the format of the data is consistent and known. The above solution of creating a file and then reading it also works but is more error-prone. | 0 | 0 | 0 | 0 | 2016-08-02T15:10:00.000 | 2 | 0 | false | 38,724,255 | 0 | 1 | 1 | 2 | I have created a python script for predictive analytics using pandas,numpy etc. I want to send my result set to java application . Is their simple way to do it. I found we can use Jython for java python integration but it doesn't use many data analysis libraries. Any help will be great . Thank you . |
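Both answers above amount to agreeing on a serialized format between the two processes; a minimal sketch of the JSON-file variant with pandas, where the file name is a hypothetical path both programs know about:

import pandas as pd

# Write the result set as JSON for the Java application to read.
df = pd.DataFrame({"id": [1, 2], "prediction": [0.7, 0.3]})
df.to_json("results.json", orient="records")  # one JSON object per row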
App Engine serving old version intermittently | 38,788,820 | 1 | 0 | 190 | 0 | python,google-app-engine | You have multiple layers of caches beyond memcache,
Google's edge cache will definitely cache static content, especially if your app is referenced by your domain and not appspot.com.
You will probably need to use some cache busting techniques.
You can test this by requesting the URL that is presenting old content, but appending something like ?x=1 to it.
If you then get current content then the edge cache is your problem and therefore the need to use cache busting techniques. | 0 | 1 | 0 | 0 | 2016-08-03T10:43:00.000 | 1 | 1.2 | true | 38,741,327 | 0 | 0 | 1 | 1 | I've deployed a new version which contains just one image replacement. After migrating traffic (100%) to the new version I can see that only this version now has active instances. However 2 days later and App engine is still intermittently serving the old image. So I assume the previous version. When I ping the domain I can see that the latest version has one IP address and the old version has another.
My question is: how do I force App Engine to only serve the new version? I'm not using traffic splitting either.
Any help would be much appreciated
Regards,
Danny |
How to use a Seafile generated upload-link w/o authentication token from command line | 38,743,242 | 9 | 5 | 2,271 | 0 | python,curl,urllib2,http-upload,seafile-server | It took me two hours to find a solution with curl; it needs two steps:
Make a GET request to the public upload-link URL with the repo id as a query parameter, as follows:
curl 'https://cloud.seafile.com/ajax/u/d/98233edf89/upload/?r=f3e30b25-aad7-4e92-b6fd-4665760dd6f5' -H 'Accept: application/json' -H 'X-Requested-With: XMLHttpRequest'
The answer is (JSON) an id-link to use in the next upload POST, e.g.:
{"url": "https://cloud.seafile.com/seafhttp/upload-aj/c2b6d367-22e4-4819-a5fb-6a8f9d783680"}
Use this link to initiate the upload post:
curl 'https://cloud.seafile.com/seafhttp/upload-aj/c2b6d367-22e4-4819-a5fb-6a8f9d783680' -F file=@./tmp/index.html -F filename=index.html -F parent_dir="/my-repo-dir/"
The answer is json again, e.g.
[{"name": "index.html", "id": "0a0742facf24226a2901d258a1c95e369210bcf3", "size": 10521}]
done ;) | 0 | 1 | 1 | 0 | 2016-08-03T11:54:00.000 | 2 | 1 | false | 38,742,893 | 0 | 0 | 1 | 1 | With Seafile one is able to create a public upload link (e.g. https://cloud.seafile.com/u/d/98233edf89/) to upload files via Browser w/o authentication.
Seafile webapi does not support any upload w/o authentication token.
How can I use such kind of link from command line with curl or from python script? |
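For the Python part of the question, the same two-step flow as the curl commands above can be written with the requests library; this sketch assumes the same endpoints, repo id and file as the example:

import requests

share_link = "https://cloud.seafile.com/ajax/u/d/98233edf89/upload/"
repo_id = "f3e30b25-aad7-4e92-b6fd-4665760dd6f5"

# Step 1: ask the public upload link for a one-time upload URL.
resp = requests.get(
    share_link,
    params={"r": repo_id},
    headers={"Accept": "application/json", "X-Requested-With": "XMLHttpRequest"},
)
upload_url = resp.json()["url"]

# Step 2: post the file to that URL.
with open("./tmp/index.html", "rb") as f:
    requests.post(
        upload_url,
        files={"file": ("index.html", f)},
        data={"filename": "index.html", "parent_dir": "/my-repo-dir/"},
    )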
Using Models in Django-Rest Framework | 38,746,370 | 1 | 0 | 911 | 0 | python,django,django-rest-framework | you don't need to use models, but you really should. django's ORM (the way it handles reading/writing to databases) functionality is fantastic and really useful.
if you're executing raw sql statements all the time, you either have a highly specific case where django's functions fail you, or you're using django inefficiently and should rethink why you're using django to begin with. | 0 | 0 | 0 | 0 | 2016-08-03T13:29:00.000 | 3 | 0.066568 | false | 38,745,067 | 0 | 0 | 1 | 1 | I am new to Django-Rest Framework and I wanted to develop API calls.
I am currently using a MySQL database, so if I have to make changes in the database, do I have to write models in my project, or can I execute raw SQL operations on my database directly?
Like:
This is my urls.py file, which contains a list of URLs; if any of the URLs is hit,
it directly calls the view function present in the views.py file, and I do the particular operation in that function, like connecting to the MySQL database, executing SQL queries and returning a JSON response to the front end.
Is this a good approach to making API calls? If not, please guide me.
Any advice or help will be appreciated. |
Sharing data with multiple python programms | 38,751,112 | -1 | 0 | 87 | 0 | python,macos,os.system | You can try writing the data you want to share to a file and have the other script read and interpret it. Have the other script run in a loop to check if there is a new file or the file has been changed. | 0 | 0 | 1 | 0 | 2016-08-03T18:22:00.000 | 3 | 1.2 | true | 38,751,024 | 0 | 0 | 1 | 2 | i am scraping data through multiple websites.
To do that I have written multiple web scrapers using Selenium and PhantomJS.
Those scrapers return values.
My question is: is there a way I can feed those values to a single Python program that will sort through that data in real time?
What I want to do is not save that data to analyze it later; I want to send it to a program that will analyze it in real time.
What I have tried: I have no idea where to even start. |
Sharing data with multiple python programms | 38,751,162 | -1 | 0 | 87 | 0 | python,macos,os.system | Simply use files for data exchange and a trivial locking mechanism.
Each writer or reader (only one reader, it seems) gets a unique number.
If a writer or reader wants to write to the file, it renames it to its original name + the number and then writes or reads, renaming it back after that.
The others wait until the file is available again under its own name and then access it by locking it in a similar way.
Of course you have shared memory and such, or memmapped files and semaphores. But this mechanism has worked flawlessly for me for over 30 years, on any OS, over any network. Since it's trivially simple.
It is in fact a poor man's mutex semaphore.
To find out if a file has changed, look to its writing timestamp.
But the locking is necessary too, otherwise you'll land into a mess. | 0 | 0 | 1 | 0 | 2016-08-03T18:22:00.000 | 3 | -0.066568 | false | 38,751,024 | 0 | 0 | 1 | 2 | i am scraping data through multiple websites.
To do that I have written multiple web scrapers using Selenium and PhantomJS.
Those scrapers return values.
My question is: is there a way I can feed those values to a single Python program that will sort through that data in real time?
What I want to do is not save that data to analyze it later; I want to send it to a program that will analyze it in real time.
What I have tried: I have no idea where to even start. |
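A small sketch of the rename-based locking described in the second answer above; the function and file names are hypothetical and the retry loop is deliberately simple:

import os
import time

def with_file_lock(path, worker_id, action, poll=0.1):
    # "Lock" the shared file by renaming it to a per-worker name; rename is
    # atomic on the same filesystem, so only one process can succeed at a time.
    locked = "%s.%s" % (path, worker_id)
    while True:
        try:
            os.rename(path, locked)
            break
        except OSError:
            time.sleep(poll)     # another writer/reader currently holds the file
    try:
        action(locked)           # read or append the shared data here
    finally:
        os.rename(locked, path)  # release the "lock"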
How to solve mysql daily analytics that happens when date changes | 38,770,424 | 0 | 0 | 53 | 1 | python,mysql,analytics | You have to put a date on your data and instead of using now() use it. | 0 | 0 | 0 | 0 | 2016-08-04T12:10:00.000 | 1 | 0 | false | 38,766,962 | 0 | 0 | 1 | 1 | I have two separate programs; one counts the daily view stats and another calculates earning based on the stats.
Counter runs first, followed by the Earning Calculator a few seconds later.
Earning Calculator works by getting stats from counter table using date(created_at) > date(now()).
The problem I'm facing is that let's say at 23:59:59 Counter added 100 views stats and by the time the Earning Calculator ran it's already the next day.
Since I'm using date(created_at) > date(now()), I will miss out the last 100 views added by the Counter.
One way to solve my problem is to summarise the previous daily report at 00:00:10 every day. But I do not like this.
Is there any other ways to solve this issue?
Thanks. |
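One way to read the answer above: have the Counter stamp every row with the reporting day it belongs to, and pass that same day to the Earning Calculator instead of re-deriving it from now() and possibly crossing midnight. A hypothetical sketch (the table and column names are made up):

import datetime

report_date = datetime.date.today()   # computed once, shared by both programs

def record_views(cursor, views, report_date):
    # The Counter stores the reporting day alongside the stats.
    cursor.execute(
        "INSERT INTO counter (views, report_date) VALUES (%s, %s)",
        (views, report_date),
    )

def daily_earnings(cursor, report_date):
    # The Earning Calculator filters on the same explicit day.
    cursor.execute(
        "SELECT SUM(views) FROM counter WHERE report_date = %s",
        (report_date,),
    )
    return cursor.fetchone()[0]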
What exactly happens on the computer when multiple requests came to the webserver serving django or pyramid application? | 38,782,897 | 4 | 7 | 1,964 | 0 | python,django,multithreading,webserver,pyramid | There's no magic in pyramid or django that gets you past process boundaries. The answers depend entirely on the particular server you've selected and the settings you've selected. For example, uwsgi has the ability to run multiple threads and multiple processes. If uwsgi spins up multiple processes then they will each have their own copies of data which are not shared unless you took the time to create some IPC (this is why you should keep state in a third party like a database instead of in-memory objects which are not shared across processes). Each process initializes a WSGI object (let's call it app) which the server calls via body_iter = app(environ, start_response). This app object is shared across all of the threads in the process and is invoked concurrently, thus it needs to be threadsafe (usually the structures the app uses are either threadlocal or readonly to deal with this, for example a connection pool to the database).
In general the answers to your questions are that things happen concurrently, and objects may or may not be shared based on your server model but in general you should take anything that you want to be shared and store it somewhere that can handle concurrency properly (a database). | 0 | 0 | 0 | 1 | 2016-08-04T12:40:00.000 | 2 | 0.379949 | false | 38,767,616 | 0 | 0 | 1 | 2 | I am having a hard time trying to figure out the big picture of the handling of multiple requests by the uwsgi server with django or pyramid application.
My understanding at the moment is this:
When multiple HTTP requests are sent to the uwsgi server concurrently, the server creates separate processes or threads (copies of itself) for every request (or assigns the request to them), and every process/thread loads the web application's code (say django or pyramid) into the computer's memory, executes it and returns the response. In between, every copy of the code can access the session, cache or database. There is usually a separate database server, and it can also handle concurrent requests to the database.
So here some questions I am fighting with.
Is my above understanding correct or not?
Do the copies of code interact with each other somehow, or are they wholly separated from each other?
What about the session or cache? Are they shared between them or are they local to each copy?
How are they created: by the webserver or by copies of python code?
How are responses returned to the requesters: by each process concurrently, or are they put into some kind of queue and sent synchronously?
I have googled these questions and have found very interesting answers on StackOverflow but anyway can't get the whole picture and the whole process remains a mystery for me. It would be fantastic if someone can explain the whole picture in terms of django or pyramid with uwsgi or whatever webserver.
Sorry for asking kind of dumb questions, but they really torment me every night and I am looking forward to your help:) |
What exactly happens on the computer when multiple requests came to the webserver serving django or pyramid application? | 38,767,725 | 3 | 7 | 1,964 | 0 | python,django,multithreading,webserver,pyramid | The power and weakness of webservers is that they are in principle stateless. This enables them to be massively parallel. So indeed for each page request a different thread may be spawned. Whether or not this indeed happens depends on the configuration settings of your webserver. There's also a cost to spawning many threads, so if possible threads are reused from a thread pool.
Almost all serious webservers have page cache. So if the same page is requested multiple times, it can be retrieved from cache. In addition, browsers do their own caching. A webserver has to be clever about what to cache and what not. Static pages aren't a big problem, although they may be replaced, in which case it is quite confusing to still get the old page served because of the cache.
One way to defeat the cache is by adding (dummy) parameters to the page request.
The statelessness of the web was initially welcomed as a necessity to achieve scalability, where webpages of busy sites could even be served concurrently from different servers at nearby or remote locations.
However the trend is to have stateful apps. State can be maintained on the browser, easing the burden on the server. If it's maintained on the server it requires the server to know 'who's talking'. One way to do this is saving and recognizing cookies (small identifiable bits of data) on the client.
For databases the story is a bit different. As soon as anything gets stored that relates to a particular user, the application is in principle stateful. While there's no conceptual difference between retaining state on disk and in RAM memory, traditionally statefulness was left to the database, which in turned used thread pools and load balancing to do its job efficiently.
With the advent of very large internet shops like Amazon and Google, mandatory disk access to achieve statefulness created a performance problem. The answer was in-memory databases. While they may be accessed traditionally using e.g. SQL, they offer much more flexibility in the way data is stored conceptually.
A type of database that enjoys growing popularity is persistent object store. With this database, while the distinction still can be made formally, the boundary between webserver and database is blurred. Both have their data in RAM (but can swap to disk if needed), both work with objects rather than flat records as in SQL tables. These objects can be interconnected in complex ways.
In short there's an explosion of innovative storage / thread pooling / caching/ persistence / redundance / synchronisation technology, driving what has become popularly know as 'the cloud'. | 0 | 0 | 0 | 1 | 2016-08-04T12:40:00.000 | 2 | 0.291313 | false | 38,767,616 | 0 | 0 | 1 | 2 | I am having a hard time trying to figure out the big picture of the handling of multiple requests by the uwsgi server with django or pyramid application.
My understanding at the moment is this:
When multiple HTTP requests are sent to the uwsgi server concurrently, the server creates separate processes or threads (copies of itself) for every request (or assigns the request to them), and every process/thread loads the web application's code (say django or pyramid) into the computer's memory, executes it and returns the response. In between, every copy of the code can access the session, cache or database. There is usually a separate database server, and it can also handle concurrent requests to the database.
So here some questions I am fighting with.
Is my above understanding correct or not?
Do the copies of code interact with each other somehow, or are they wholly separated from each other?
What about the session or cache? Are they shared between them or are they local to each copy?
How are they created: by the webserver or by copies of python code?
How are responses returned to the requesters: by each process concurrently, or are they put into some kind of queue and sent synchronously?
I have googled these questions and have found very interesting answers on StackOverflow but anyway can't get the whole picture and the whole process remains a mystery for me. It would be fantastic if someone can explain the whole picture in terms of django or pyramid with uwsgi or whatever webserver.
Sorry for asking kind of dumb questions, but they really torment me every night and I am looking forward to your help:) |
Performance between Python and Java drivers with OrientDB | 38,802,276 | 0 | 0 | 127 | 1 | java,python,performance,orientdb | AFAIK on remote connection (with a standalone OrientDB server) performance would be the same.
The great advantage of using the Java native driver is the option to go embedded. If your deployment scenario allows it, you can avoid the standalone server and use OrientDB embedded into your Java application, avoiding network overhead. | 0 | 0 | 0 | 0 | 2016-08-05T18:15:00.000 | 1 | 0 | false | 38,795,545 | 0 | 0 | 1 | 1 | I want to develop a project that need a noSQL database. After searching a lot, I chose OrientDB. I want to make an API Rest that can connect to OrientDB.
Firstly, I wanted to use Flask for development, but I don't know whether it's better to use the Java native driver or the Python binary driver to connect to the database.
Does anyone have performance results comparing these drivers? |
Why does the Google App Engine NDB datastore have both "—" and "null" for unkown data? | 38,815,611 | 4 | 1 | 184 | 1 | python,google-app-engine,null,google-cloud-datastore,app-engine-ndb | You have to specifically set the value to NULL, otherwise it will not be stored in the Datastore and you see it as missing in the Datastore viewer.
This is an important distinction. NULL values can be indexed, so you can retrieve a list of entities where date of birth, for example, is null. On the other hand, if you do not set a date of birth when it is unknown, there is no way to retrieve a list of entities with date of birth property missing - you'll have to iterate over all entities to find them.
Another distinction is that NULL values take space in the Datastore, while missing values do not. | 0 | 1 | 0 | 0 | 2016-08-07T13:35:00.000 | 1 | 1.2 | true | 38,814,666 | 0 | 0 | 1 | 1 | I recently updated an entity model to include some extra properties, and noticed something odd. For properties that have never been written, the Datastore query page shows a "—", but for ones that I've explicitly set to None in Python, it shows "null".
In SQL, both of those cases would be null. When I query an entity that has both types of unknown properties, they both read as None, which fits with that idea.
So why does the NDB datastore viewer differentiate between "never written" and "set to None", if I can't differentiate between them programmatically? |
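A small sketch of the distinction the answer above draws, using ndb; the model and property names are hypothetical:

from google.appengine.ext import ndb

class Person(ndb.Model):
    dob = ndb.DateProperty()

p = Person()
p.dob = None   # explicitly set to None: stored and indexed as null
p.put()

# Null values can be queried for; properties that were never written cannot.
unknown_dob = Person.query(Person.dob == None).fetch()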
Cannot type password. (creating a Django admin superuser) | 38,841,784 | 16 | 3 | 4,366 | 0 | python,django,django-admin,python-3.5 | While it does not show you what you are typing, it is still taking the input. So just type in the password both times, press enter and it will work even though it does not show up. | 0 | 0 | 0 | 0 | 2016-08-09T03:49:00.000 | 1 | 1.2 | true | 38,841,720 | 0 | 0 | 1 | 1 | I am doing python manage.py createsuperuser in PowerShell and CMD, and I can type when it prompts me for the Username and Email, but when it gets to Password it just won't let me type. It is not freezing though, because when I press enter it re-prompts me for the password...
Using Django 1.10 and Windows 10. |
Internationalization for datetime | 38,843,085 | 1 | 1 | 224 | 0 | python,postgresql,internationalization,datetime-format,datetimeoffset | Keep as much in UTC as possible. Do your timezone conversion at your edges (client display and input processing), but keep anything stored server side in UTC. | 0 | 0 | 0 | 1 | 2016-08-09T05:23:00.000 | 1 | 1.2 | true | 38,842,666 | 1 | 0 | 1 | 1 | I am building a mobile app and would like to follow best practice for datetime. Initially, we launched it in India and made our server, database and app time to IST.
Now we are launching the app in other countries (timezones); how should I store the datetime? Should the server time be set to UTC, with the app displaying times based on the user's timezone?
What's the best practice to follow in terms of storing datetimes and exchanging datetime formats between client and server? Should the client send datetimes to the server in UTC, or in its own timezone along with the locale? |
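A small sketch of the pattern recommended in the answer: store and exchange UTC, and convert only for display. The pytz package and the example timezone name are assumptions:

from datetime import datetime
import pytz

# What gets stored in the database and exchanged with clients.
utc_now = datetime.utcnow().replace(tzinfo=pytz.utc)

def for_display(utc_dt, zone_name):
    # Convert at the edge, e.g. "Asia/Kolkata" for users in India.
    return utc_dt.astimezone(pytz.timezone(zone_name))

print(for_display(utc_now, "Asia/Kolkata").isoformat())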
Trying to make a development instance for a Python pyramid project | 38,886,655 | 1 | 2 | 166 | 1 | python,pyramid,pylons | Here's how I managed my last Pyramid app:
I had both a development.ini and a production.ini. I actually had a development.local.ini in addition to the other two - one for local development, one for our "test" system, and one for production. I used git for version control, and had a main branch for production deployments. On my prod server I created the virtual environment, etc., then would pull my main branch and run using the production.ini config file. Updates basically involved jumping back into the virtualenv and pulling latest updates from the repo, then restarting the pyramid server. | 0 | 0 | 0 | 1 | 2016-08-09T06:17:00.000 | 2 | 0.099668 | false | 38,843,404 | 0 | 0 | 1 | 1 | So I have this Python pyramid-based application, and my development workflow has basically just been to upload changed files directly to the production area.
Coming close to launch, and obviously that's not going to work anymore.
I managed to edit the connection strings and development.ini and point the development instance to a secondary database.
Now I just have to figure out how to create another copy of the project somewhere where I can work on things and then make the changes live.
At first, I thought that I could just make a copy of the project directory somewhere else and run it with different arguments pointing to the new location. That didn't work.
Then, I basically set up an entirely new project called myproject-dev. I went through the setup instructions:
I used pcreate, and then setup.py develop, and then I copied over my development.ini from my project and carefully edited the various references to myproject-dev instead of myproject.
Then,
initialize_myproject-dev_db /var/www/projects/myproject/development.ini
Finally, I get a nice pyramid welcome page that everything is working correctly.
I thought at that point I could just blow out everything in the project directory and copy over the main project files, but then I got that feeling in the pit of my stomach when I noticed that a lot of things weren't working, like static URLs.
Apparently, I'm referencing myproject in includes and also static URLs, and who knows where else.
I don't think this idea is going to work, so I've given up for now.
Can anyone give me an idea of how people go about setting up a development instance for a Python pyramid project? |
Can I convert an Odoo browse object to JSON | 38,847,214 | 1 | 3 | 3,015 | 0 | python,json,openerp,odoo-8,odoo-10 | maybe this will help you:
Step 1: Create JS that sends a request to /example_link
Step 2: Create a controller that listens on that link: @http.route('/example_link', type="json")
Step 3: Return JSON from that function: return json.dumps(res), where res is a Python dictionary. Also don't forget to import json.
Thats all, it's not very hard, hope I helped you, good luck. | 0 | 0 | 1 | 0 | 2016-08-09T07:31:00.000 | 1 | 0.197375 | false | 38,844,651 | 1 | 0 | 1 | 1 | I'm trying to integrate reactjs with Odoo, and successfully created components. Now my problem is that I cant get the JSON via odoo. The odoo programmer has to write special api request to make this happen. This is taking more time and code repetitions are plenty.
I tried many suggestions and none worked.
Is there a better way to convert the browse objects that Odoo generates to JSON?
Note: Entirely new to python and odoo, please forgive my mistakes, if any mentioned above. |
Apache server seems to be caching requests | 38,869,412 | 0 | 0 | 83 | 1 | apache,flask,python-requests | Dirn was completely right, it turned out not to be an Apache issue at all. It was SQL Alchemy all along.
I imagine that SQL Alchemy knows not to do any 'caching' when it requests data on the development server but decides that it's a good idea in production, which makes perfect sense really. It was not using the committed data on every search, which is why restarting the Apache server fixed it because it also reset the connection.
I guess that's what dirn meant by "How are you loading data in your application?" I had assumed that since I turned off Flask's debugging on the development server it would behave just like it would in deployment but it looks like something has slipped through. | 0 | 0 | 0 | 0 | 2016-08-09T15:06:00.000 | 1 | 0 | false | 38,854,382 | 0 | 0 | 1 | 1 | I am running a Flask app on an Apache 2.4 server. The app sends requests to an API built by a colleague using the Requests library. The requests are in a specific format and constructed by data stored in a MySQL database. The site is designed to show the feedback from the API on the index, and the user can edit the data stored in the MySQL database (and by extension, the data sent in the request) by another page, the editing page.
So let's say for example a custom field date is set to be "2006", I would access the index page, a request would be sent, the API does its magic and sends back data relevant to 2006. If I then went and changed the date to "2007" then the new field is saved in MySQL and upon navigating back to index the new request is constructed, sent off and data for 2007 should be returned.
Unfortunately that's not happening.
When I change details on my editing page they are definitely stored to the database, but when I navigate back to the index the request sends the previous set of data. I think that Apache is causing the problem for two reasons:
When I reset the server (service apache2 restart) the data sent back is the 'proper' data, even though I haven't touched the database. That is, the index is initially requesting 2006 data, I change it to request 2007 data, it still requests 2006 data, I restart the server, refresh the index and only then does it request 2007 data like it should have been doing since I edited it.
When I run this on my local Flask development server, navigating to the index page after editing an entry immediately returns the right result - it feeds off the same database and is essentially identical to the deployed server except that it's not running on apache.
Is there a way that Apache could be caching requests or something? I can't figure out why the server would keep sending old requests until I restart it.
EDIT:
The requests themselves are large and ungainly and the responses would return data that I'm not comfortable with making available for examples for privacy reasons.
I am almost certain that Apache is the issue because as previously stated, the Flask development server has no issues with returning the correct dataset. I have also written some requests to run through Postman, and these also return the data as requested, so the request structure must be fine. The only difference I can see between the local Flask app and the deployed one is Apache, and given that restarting the Apache server 'updates' the requests until the data is changed again, I think that it's quite clearly doing something untoward. |
How do you selectively sync a database in Django? | 38,858,633 | -1 | 0 | 218 | 1 | mysql,django,oracle,python-2.7 | Basically write models that match what you want your destination tables to be and then write something to migrate data between the two. I'd make this a comment if I could but not enough rep. | 0 | 0 | 0 | 0 | 2016-08-09T19:04:00.000 | 1 | -0.197375 | false | 38,858,553 | 0 | 0 | 1 | 1 | Right now in Django, I have two databases:
A default MySQL database for my app and
an external Oracle database that, for my purposes, is read-only
There are far more tables in the external database than I need data from, and also I would like to modify the db layout slightly. Is there a way I can selectively choose what data in the external database I would like to sync to my database? The external database is dynamic, and I would like my app to reflect that.
Ex I would like to do something like this:
Say the external database has two tables (out of 100) as follows:
Table47: Eggs, Spam, Sausage
Table48: Name, Age, Color
And I want to keep the data like:
Foo: Eggs, Spam, Type (a foreign key)
Bar: Name, Age, Type (foreign key)
Type: some fields
Is there a way I could do this in Django? |
Share Odoo Dashboard to all users | 38,877,250 | 0 | 0 | 1,484 | 0 | python,openerp,odoo-8,dashboard | I fear without changing some base elements of Odoo there is no other solution than duplicating the views and change the user, because the field user is required. | 0 | 0 | 0 | 0 | 2016-08-10T13:44:00.000 | 3 | 0 | false | 38,875,222 | 0 | 0 | 1 | 2 | How can I share my customized dashboard for all users, I found that every customized dashboard created is stored on customized views, then to share a dashboard you should duplicate the customized view corresponding to that dashboard, and change the user field.
Is there a better solution ? |
Share Odoo Dashboard to all users | 69,828,393 | 0 | 0 | 1,484 | 0 | python,openerp,odoo-8,dashboard | There is a record rule that prevents other users from seeing one's dashboard; disabling that record rule will make one's dashboard visible to others. | 0 | 0 | 0 | 0 | 2016-08-10T13:44:00.000 | 3 | 0 | false | 38,875,222 | 0 | 0 | 1 | 2 | How can I share my customized dashboard for all users, I found that every customized dashboard created is stored on customized views, then to share a dashboard you should duplicate the customized view corresponding to that dashboard, and change the user field.
Is there a better solution ? |
Django hstore field in sqlite | 38,875,962 | 3 | 2 | 912 | 1 | python,django,postgresql,sqlite,django-models | hstore is specific to Postgres. It won't work on sqlite.
If you just want to store JSON, and don't need to search within it, then you can use one of the many third-party JSONField implementations. | 0 | 0 | 0 | 0 | 2016-08-10T14:15:00.000 | 1 | 1.2 | true | 38,875,927 | 0 | 0 | 1 | 1 | I am using sqlite (development stage) database for my django project. I would like to store a dictionary field in a model. In this respect, i would like to use django-hstore field in my model.
My question is, can i use django-hstore dictionary field in my model even though i am using sqlite as my database?
As per my understanding django-hstore can be used along with PostgreSQL (Correct me if i am wrong). Any suggestion in the right direction is highly appreciated. Thank you. |
Handle Flask requests concurrently with threaded=True | 38,876,910 | 9 | 81 | 80,066 | 0 | python,flask | How many requests will my application be able to handle concurrently with this statement?
This depends drastically on your application. Each new request will have a thread launched- it depends on how many threads your machine can handle. I don't see an option to limit the number of threads (like uwsgi offers in a production deployment).
What are the downsides to using this? If i'm not expecting more than a few requests concurrently, can I just continue to use this?
Switching from a single thread to multi-threaded can lead to concurrency bugs... if you use this be careful about how you handle global objects (see the g object in the documentation!) and state. | 0 | 0 | 0 | 0 | 2016-08-10T14:47:00.000 | 2 | 1 | false | 38,876,721 | 1 | 0 | 1 | 1 | What exactly does passing threaded = True to app.run() do?
My application processes input from the user, and takes a bit of time to do so. During this time, the application is unable to handle other requests. I have tested my application with threaded=True and it allows me to handle multiple requests concurrently. |
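A tiny sketch of the behaviour being asked about: with threaded=True the slow handler below no longer blocks other requests on the built-in development server (the sleep only simulates slow input processing):

import time
from flask import Flask

app = Flask(__name__)

@app.route("/slow")
def slow():
    time.sleep(10)   # stands in for the slow user-input processing
    return "done"

@app.route("/fast")
def fast():
    return "ok"      # still answered immediately while /slow is busy

if __name__ == "__main__":
    app.run(threaded=True)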
Is there a way to schedule sending an e-mail through Google App Engine Mail API (Python)? | 38,884,139 | 2 | 0 | 197 | 0 | python,email,google-app-engine,cron | You can easily accomplish what you need with Task API. When you create a task, you can set an ETA parameter (when to execute). ETA time can be up to 30 days into the future.
If 30 days is not enough, you can store a "send_email" entity in the Datastore, and set one of the properties to the date/time when this email should be sent. Then you create a cron job that runs once a month (week). This cron job will retrieve all "send_email" entities that need to be send the next month (week), and create tasks for them, setting ETA to the exact date/time when they should be executed. | 0 | 1 | 0 | 1 | 2016-08-10T18:04:00.000 | 2 | 1.2 | true | 38,880,555 | 0 | 0 | 1 | 1 | I want to be able to schedule an e-mail or more of them to be sent on a specific date, preferably using GAE Mail API if possible (so far I haven't found the solution).
Would using Cron be an acceptable workaround and if so, would I even be able to create a Cron task with Python? The dates are various with no specific pattern so I can't use the same task over and over again.
Any suggestions how to solve this problem? All help appreciated |
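A sketch of the ETA approach from the answer above, using the App Engine task queue API; the worker URL and e-mail parameters are hypothetical:

import datetime
from google.appengine.api import taskqueue

# Schedule a task that will run the hypothetical /tasks/send_email handler
# at the given time (ETA may be up to about 30 days in the future).
send_at = datetime.datetime(2016, 9, 1, 9, 0)
taskqueue.add(
    url="/tasks/send_email",
    params={"to": "user@example.com"},
    eta=send_at,
)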
Troubleshooting Amazon's Alexa Skill Kit (ASK) Lambda interaction | 38,895,935 | 0 | 2 | 3,387 | 0 | python,amazon-web-services,amazon-dynamodb,aws-lambda,alexa-skills-kit | My guess would be that you missed a step on setup. There's one where you have to set the "event source". IF you don't do that, I think you get that message.
But the debug options are limited. I wrote EchoSim (the original one on GitHub) before the service simulator was written and, although it is a bit out of date, it does a better job of giving diagnostics.
Lacking debug options, the best is to do what you've done. Partition and re-test. Do static replies until you can work out where the problem is. | 0 | 0 | 1 | 1 | 2016-08-11T04:02:00.000 | 4 | 0 | false | 38,887,061 | 0 | 0 | 1 | 3 | I'm starting with ASK development. I'm a little confused by some behavior and I would like to know how to debug errors from the "service simulator" console. How can I get more information on the The remote endpoint could not be called, or the response it returned was invalid. errors?
Here's my situation:
I have a skill and three Lambda functions (ARN:A, ARN:B, ARN:C). If I set the skill's endpoint to ARN:A and try to test it from the skill's service simulator, I get an error response: The remote endpoint could not be called, or the response it returned was invalid.
I copy the lambda request, I head to the lambda console for ARN:A, I set the test even, paste the request from the service simulator, I test it and I get a perfectly fine ASK response. Then I head to the lambda console for ARN:B and I make a dummy handler that returns exactly the same response that ARN:A gave me from the console (literally copy and paste). I set my skill's endpoint to ARN:B, test it using the service simulator and I get the anticipated response (therefore, the response is well formatted) albeit static. I head to the lambda console again and copy and paste the code from ARN:A into a new ARN:C. Set the skill's endpoint to ARN:C and it works perfectly fine. Problem with ARN:C is that it doesn't have the proper permissions to persist data into DynamoDB (I'm still getting familiar with the system, not sure wether I can share an IAM role between different lambdas, I believe not).
How can I figure out what's going on with ARN:A? Is that logged somewhere? I can't find any entry in cloudwatch/logs related to this particular lambda or for the skill.
Not sure if relevant, I'm using python for my lambda runtime, the code is (for now) inline on the web editor and I'm using boto3 for persisting to DynamoDB. |
Troubleshooting Amazon's Alexa Skill Kit (ASK) Lambda interaction | 38,902,127 | 3 | 2 | 3,387 | 0 | python,amazon-web-services,amazon-dynamodb,aws-lambda,alexa-skills-kit | tl;dr: The remote endpoint could not be called, or the response it returned was invalid. also means there may have been a timeout waiting for the endpoint.
I was able to narrow it down to a timeout.
Seems like the Alexa service simulator (and the Alexa itself) is less tolerant to long responses than the lambda testing console. During development I had increased the timeout of ARN:1 to 30 seconds (whereas I believe the default is 3 seconds). The DynamoDB table used by ARN:1 has more data and it takes slightly longer to process than ARN:3 which has an almost empty table. As soon as I commented out some of the data loading stuff it was running slightly faster and the Alexa service simulator was working again. I can't find the time budget documented anywhere, I'm guessing 3 seconds? I most likely need to move to another backend, DynamoDB+Python on lambda is too slow for very trivial requests. | 0 | 0 | 1 | 1 | 2016-08-11T04:02:00.000 | 4 | 1.2 | true | 38,887,061 | 0 | 0 | 1 | 3 | I'm starting with ASK development. I'm a little confused by some behavior and I would like to know how to debug errors from the "service simulator" console. How can I get more information on the The remote endpoint could not be called, or the response it returned was invalid. errors?
Here's my situation:
I have a skill and three Lambda functions (ARN:A, ARN:B, ARN:C). If I set the skill's endpoint to ARN:A and try to test it from the skill's service simulator, I get an error response: The remote endpoint could not be called, or the response it returned was invalid.
I copy the lambda request, I head to the lambda console for ARN:A, I set the test even, paste the request from the service simulator, I test it and I get a perfectly fine ASK response. Then I head to the lambda console for ARN:B and I make a dummy handler that returns exactly the same response that ARN:A gave me from the console (literally copy and paste). I set my skill's endpoint to ARN:B, test it using the service simulator and I get the anticipated response (therefore, the response is well formatted) albeit static. I head to the lambda console again and copy and paste the code from ARN:A into a new ARN:C. Set the skill's endpoint to ARN:C and it works perfectly fine. Problem with ARN:C is that it doesn't have the proper permissions to persist data into DynamoDB (I'm still getting familiar with the system, not sure wether I can share an IAM role between different lambdas, I believe not).
How can I figure out what's going on with ARN:A? Is that logged somewhere? I can't find any entry in cloudwatch/logs related to this particular lambda or for the skill.
Not sure if relevant, I'm using python for my lambda runtime, the code is (for now) inline on the web editor and I'm using boto3 for persisting to DynamoDB. |
Troubleshooting Amazon's Alexa Skill Kit (ASK) Lambda interaction | 39,245,816 | 1 | 2 | 3,387 | 0 | python,amazon-web-services,amazon-dynamodb,aws-lambda,alexa-skills-kit | I think the problem you having for ARN:1 is you probably didn't set a trigger to alexa skill in your lambda function.
Or it can be the alexa session timeout which is by default set to 8 seconds. | 0 | 0 | 1 | 1 | 2016-08-11T04:02:00.000 | 4 | 0.049958 | false | 38,887,061 | 0 | 0 | 1 | 3 | I'm starting with ASK development. I'm a little confused by some behavior and I would like to know how to debug errors from the "service simulator" console. How can I get more information on the The remote endpoint could not be called, or the response it returned was invalid. errors?
Here's my situation:
I have a skill and three Lambda functions (ARN:A, ARN:B, ARN:C). If I set the skill's endpoint to ARN:A and try to test it from the skill's service simulator, I get an error response: The remote endpoint could not be called, or the response it returned was invalid.
I copy the lambda request, I head to the lambda console for ARN:A, I set the test even, paste the request from the service simulator, I test it and I get a perfectly fine ASK response. Then I head to the lambda console for ARN:B and I make a dummy handler that returns exactly the same response that ARN:A gave me from the console (literally copy and paste). I set my skill's endpoint to ARN:B, test it using the service simulator and I get the anticipated response (therefore, the response is well formatted) albeit static. I head to the lambda console again and copy and paste the code from ARN:A into a new ARN:C. Set the skill's endpoint to ARN:C and it works perfectly fine. Problem with ARN:C is that it doesn't have the proper permissions to persist data into DynamoDB (I'm still getting familiar with the system, not sure wether I can share an IAM role between different lambdas, I believe not).
How can I figure out what's going on with ARN:A? Is that logged somewhere? I can't find any entry in cloudwatch/logs related to this particular lambda or for the skill.
Not sure if relevant, I'm using python for my lambda runtime, the code is (for now) inline on the web editor and I'm using boto3 for persisting to DynamoDB. |
python process takes time to start in django project running on nginx and uwsgi | 39,093,474 | 0 | 4 | 739 | 0 | python,django,nginx,process,uwsgi | I figured out a workaround (Don't know if it will qualify as an answer).
I wrote the background process as a job in the database and used a cron job to check whether I have any jobs pending; if there are any, the cron starts a background process for that job and exits.
The cron will run every minute so that there is not much delay. This helped in improved performance as it helped me execute heavy tasks like this to run separate from main application. | 0 | 0 | 0 | 0 | 2016-08-11T09:04:00.000 | 2 | 1.2 | true | 38,891,879 | 0 | 0 | 1 | 1 | I am starting a process using python's multiprocessing module. The process is invoked by a post request sent in a django project. When I use development server (python manage.py runserver), the post request takes no time to start the process and finishes immediately.
I deployed the project on production using nginx and uwsgi.
Now when I send the same post request, it takes around 5-7 minutes to complete. It only happens with those post requests where I am starting a process; other post requests work fine.
What could be the reason for this delay, and how can I solve this? |
ImportError: No module named jira | 38,902,162 | 1 | 1 | 7,884 | 0 | python,bamboo,suse,python-jira | That is very hard to understand what you problem is. From what I understood you are saying that when you run your module as standalone file, everything works, but when you imoprt it you get an error. Here are some steps towards solving the problem.
Make sure that your script is in Python package. In order to do that, verify that there is (usually) empty __init__.py file in the same directory where the package is located.
Make sure that your script does not import something else in the block that gets executed only when you run the file as script (if __name__ == "__main__")
Make sure that the python path includes your package and visible to the script (you can do this by running print os.environ['PYTHONPATH'].split(os.pathsep) | 0 | 0 | 0 | 0 | 2016-08-11T16:44:00.000 | 2 | 0.099668 | false | 38,901,974 | 1 | 0 | 1 | 2 | Hi I'm running a python script that transitions tickets from "pending build" to "in test" in Jira. I've ran it on my local machine (Mac OS X) and it works perfectly but when I try to include it as a build task in my bamboo deployment, I get the error
"from jira import JIRA
ImportError: No module named jira"
I'm calling the python file from a script task like the following "python myFile.py" and then I supply the location to the myFile.py in the working subdirectory field. I don't think that is a problem because the error shows that it is finding my script fine. I've checked multiple times and the jira package is in site-packages and is in the path. I installed using pip and am running python 2.7.8. The OS is SuSE on our server |
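A small debugging sketch for this kind of failure: printing the interpreter and search path at the top of myFile.py shows whether the Bamboo agent runs the same Python that has the jira package installed. The print statements use Python 2 syntax to match the question and are meant to be removed afterwards.
# temporary debug lines at the top of myFile.py
import os
import sys
print sys.executable                    # which interpreter Bamboo actually uses
print os.environ.get('PYTHONPATH')      # what the agent's PYTHONPATH contains
print sys.path                          # where imports are searched
from jira import JIRA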
ImportError: No module named jira | 48,773,686 | 0 | 1 | 7,884 | 0 | python,bamboo,suse,python-jira | Confirm that you don't have another file or directory that shares the same name as the module you are trying to import. | 0 | 0 | 0 | 0 | 2016-08-11T16:44:00.000 | 2 | 0 | false | 38,901,974 | 1 | 0 | 1 | 2 | Hi I'm running a python script that transitions tickets from "pending build" to "in test" in Jira. I've ran it on my local machine (Mac OS X) and it works perfectly but when I try to include it as a build task in my bamboo deployment, I get the error
"from jira import JIRA
ImportError: No module named jira"
I'm calling the python file from a script task like the following "python myFile.py" and then I supply the location to the myFile.py in the working subdirectory field. I don't think that is a problem because the error shows that it is finding my script fine. I've checked multiple times and the jira package is in site-packages and is in the path. I installed using pip and am running python 2.7.8. The OS is SuSE on our server |
command error after start project in django | 38,926,190 | 0 | 2 | 961 | 0 | python,django-upgrade | You might have Django installed twice. Running "pip uninstall django" twice, and then reinstalling new version again should help. | 0 | 0 | 0 | 0 | 2016-08-12T19:59:00.000 | 2 | 0 | false | 38,925,638 | 0 | 0 | 1 | 2 | After upgrading my django from 1.8 to 1.10, when i start a project(django-admin startproject lwc) there is an error:
CommandError: C:\Python34\binesh\lwc\lwc\settings.py already exists, overlaying
a project or app into an existing directory won't replace conflicting files.
It creates a folder for lwc, with manage.py and another lwc folder in it, and settings.py in the second lwc folder.
What is wrong with it? |
command error after start project in django | 39,836,064 | 1 | 2 | 961 | 0 | python,django-upgrade | Uninstall django, delete your python/Lib/site-packages/django directory completely, then reinstall.
The installation of the new version, even though it claims to uninstall the old version, leaves old files hanging around, and they are quietly brought into the new version in various ways (e.g., manage.py can bring in syncdb if a sync.py is left over in the django directories). | 0 | 0 | 0 | 0 | 2016-08-12T19:59:00.000 | 2 | 0.099668 | false | 38,925,638 | 0 | 0 | 1 | 2 | After upgrading my django from 1.8 to 1.10, when i start a project(django-admin startproject lwc) there is an error:
CommandError: C:\Python34\binesh\lwc\lwc\settings.py already exists, overlaying
a project or app into an existing directory won't replace conflicting files.
It creates a folder for lwc, with manage.py and another lwc folder in it, and settings.py in the second lwc folder.
What is wrong with it? |
Why WSGI Server need reload Python file when modified but PHP not need? | 38,941,344 | 1 | 2 | 291 | 0 | php,python,django | Django used to do the same back when CGI was the most common way to run dynamic web applications. It would create a new python process on each request, which would load all the files on the fly. But while PHP is optimized for this use-case with a fast startup time, Python, as a general purpose language, isn't, and there were some pretty heavy performance drawbacks. WSGI (and FastCGI before it) solves this performance issue by running the Python code in a persistent background process.
So while WSGI gives a lot of benefits, one of the "drawbacks" is that it only loads code when the process is (re)started, so you have to restart the process for any changes to take effect. In development this is easily solved by using an autoreloader, such as the one in Django's manage.py runserver command.
In production, there are quite a few reasons why you would want to delay the restart until the environment is ready. For example, if you pull in code changes that include a migration to add a database field, the new version of your code wouldn't be able to run before you've run the migration. In such a case, you don't want the new code to run until you've actually run all the necessary migrations. | 0 | 0 | 0 | 0 | 2016-08-14T00:39:00.000 | 1 | 1.2 | true | 38,938,191 | 0 | 0 | 1 | 1 | I'm confused about this question. When doing Django development, if I have modified a py file or a static file, the built-in server will reload. But in PHP app development, if I have modified the files, the Apache server does not need to reload and the modified content will show in the browser.
Why? |
jupyter custom.css removal | 38,944,682 | 5 | 2 | 770 | 0 | ipython,jupyter,jupyter-notebook | Reposting as an answer:
When your changes don't seem to be taking effect in an HTML interface, browser caching is often a culprit. The browser saves time by not asking for files again. You can:
Try force-refreshing with Ctrl-F5. It may get some things from the cache anyway, though sometimes mashing it several times is effective.
Use a different browser profile, or private browsing mode, to load the page.
There may be a setting to disable caching under developer options. I think Chrome has this. May only apply while developer tools are open.
If all else fails, load the page using a different browser. If it still doesn't change, it's likely the problem is not (just) browser caching. | 0 | 0 | 0 | 0 | 2016-08-14T16:27:00.000 | 1 | 0.761594 | false | 38,944,204 | 1 | 0 | 1 | 1 | By mistake, I updated this file to customize css.
D:\Continuum\Anaconda2\Lib\site-packages\notebook\static\custom\custom.css
To rollback the above change,
1) I put back the original file that I saved before. still the new css shows up in jupyter.
2) I removed all .ipython and .jupyter dir and it didn't work either.
3) I even uninstalled anaconda and still that css shows up.
I'm really stuck here. Does anyone know how to go back to the default css of jupyter ? |
Steps to Troubleshoot "django.db.utils.ProgrammingError: permission denied for relation django_migrations" | 62,814,973 | 4 | 53 | 32,047 | 1 | python,django,apache,postgresql,github | If you receive this error and are using the Heroku hosting platform its quite possible that you are trying to write to a Hobby level database which has a limited number of rows.
Heroku will allow you to pg:push the database even if you exceed the limits, but it will be read-only so any modifications to content won't be processed and will throw this error. | 0 | 0 | 0 | 0 | 2016-08-14T17:06:00.000 | 4 | 0.197375 | false | 38,944,551 | 0 | 0 | 1 | 1 | What are some basic steps for troubleshooting and narrowing down the cause for the "django.db.utils.ProgrammingError: permission denied for relation django_migrations" error from Django?
I'm getting this message after what was initially a stable production server but has since had some changes to several aspects of Django, Postgres, Apache, and a pull from Github. In addition, it has been some time since those changes were made and I don't recall or can't track every change that may be causing the problem.
I get the message when I run python manage.py runserver or any other python manage.py ... command except python manage.py check, which states the system is good. |
monitoring jboss process with icinga/nagios | 39,367,798 | 0 | 0 | 1,024 | 0 | python,shell,jboss,nagios,icinga | I did this by monitoring the jboss process using ps aux | grep "\-D\[Standalone\]" for standalone mode and ps aux | grep "\-D\[Server" for domain mode. | 0 | 1 | 0 | 1 | 2016-08-14T18:26:00.000 | 3 | 1.2 | true | 38,945,299 | 0 | 0 | 1 | 1 | I want to monitor whether jboss is running or not through Icinga.
I don't want to check /etc/init.d/jboss status because sometimes the service is up but some of the jboss processes are killed or hung and jboss doesn't work properly.
I would like to create a script to monitor all of its processes from the ps output. But some servers are running in standalone mode and others in domain (master/slave) mode, and the processes are different in each case.
I'm not sure where to start. Has anyone here done the same before? Just looking for an idea of how to do this. |
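A rough sketch of how the answer's ps-based check could be wrapped as a Nagios/Icinga-style Python plugin; the match strings, mode names and exit codes are assumptions, not part of the original answer.
#!/usr/bin/env python
# check_jboss.py - exit 0 (OK) if a matching JBoss process is found, 2 (CRITICAL) otherwise
import subprocess
import sys

MODE_PATTERNS = {
    "standalone": "-D[Standalone]",
    "domain": "-D[Server",
}

def main(mode):
    output = subprocess.check_output(["ps", "aux"]).decode("utf-8", "replace")
    pattern = MODE_PATTERNS[mode]
    if any(pattern in line for line in output.splitlines()):
        print("OK - jboss (%s) is running" % mode)
        return 0
    print("CRITICAL - jboss (%s) is not running" % mode)
    return 2

if __name__ == "__main__":
    sys.exit(main(sys.argv[1] if len(sys.argv) > 1 else "standalone"))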
Cache busting with Django | 38,956,967 | 0 | 1 | 865 | 0 | python,django,caching,static,cdn | Here's my workaround:
On deployment (from a bash script), I get the shasum of my CSS stylesheet.
I put this variable inside the environment.
I have a context processor for the template engine that will read from the environment. | 0 | 0 | 0 | 0 | 2016-08-15T11:48:00.000 | 2 | 1.2 | true | 38,954,505 | 0 | 0 | 1 | 1 | I'm working on a website built with Django.
When I'm doing updates on the static files, the users have to hard refresh the website to get the latest version.
I'm using a CDN server to deliver my static files so using the built-in static storage from Django.
I don't know about the best practices but my idea is to generate a random string when I redeploy the website and have something like style.css?my_random_string.
I don't know how to handle such a global variable through the project (Using Gunicorn in production).
I have a RedisDB running, I can store the random string in it and clear it on redeployment.
I was thinking to have this variable globally available in templates with a context_processors.
What are your thoughts on this ? |
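A minimal sketch of the accepted answer's approach; the environment variable name, module path and template usage are assumptions.
# myproject/context_processors.py (hypothetical module)
import os

def static_version(request):
    # the deploy script is assumed to export STATIC_VERSION=<shasum of the stylesheet>
    return {"STATIC_VERSION": os.environ.get("STATIC_VERSION", "dev")}

# settings.py: add "myproject.context_processors.static_version" to the context_processors list.
# Template usage: <link rel="stylesheet" href="{{ STATIC_URL }}style.css?{{ STATIC_VERSION }}">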
How to remote click on links from a 3rd party website | 38,979,448 | 0 | 0 | 38 | 0 | javascript,jquery,python,cookies | If you want to interact with another web service, the solution is to send a POST/GET request and parse the response.
Question is what is your goal? | 0 | 0 | 1 | 1 | 2016-08-16T15:42:00.000 | 1 | 0 | false | 38,979,170 | 0 | 0 | 1 | 1 | I have a problem that I am trying to conceptualize whether possible or not. Nothing too fancy (i.e. remote login or anything etc.)
I have Website A and Website B.
On website A a user selects a few links from website B. I would then like to remotely click those links on behalf of the user (as Website B creates a cookie with the clicked information), so that when the user gets redirected to Website B, the cookie (and the links) are pre-selected and the user does not need to click on them one by one.
Can this be done? |
jpype accessing java method/variable whose name is reserved name in python | 39,027,662 | 1 | 1 | 358 | 0 | java,python,type-conversion,reserved-words,jpype | Figured out that jpype appends an "_" at the end for those methods/fields in its source code. So you can access it by Jpype.JClass("Foo").pass_
Wish it's documented somewhere | 0 | 0 | 0 | 0 | 2016-08-17T23:37:00.000 | 2 | 0.099668 | false | 39,007,823 | 1 | 0 | 1 | 2 | Any idea how this can be done? ie, if we have a variable defined in java as below
public class Foo {
    String pass = "foo";
}
how can I access this via jpype since pass is a reserved keyword? I tried
getattr(Jpype.JClass(Foo)(), "pass") but it fails to find the attribute named pass |
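A small sketch of the accepted answer's trailing-underscore access, assuming the JVM is started with the compiled Foo class reachable on the classpath:
import jpype

jpype.startJVM(jpype.getDefaultJVMPath())  # assumes Foo.class is on the classpath
Foo = jpype.JClass("Foo")
print(Foo().pass_)  # jpype exposes the reserved name 'pass' as 'pass_'
jpype.shutdownJVM()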
jpype accessing java method/variable whose name is reserved name in python | 39,007,952 | 0 | 1 | 358 | 0 | java,python,type-conversion,reserved-words,jpype | Unfortunately, fields or methods conflicting with a Python keyword can't be accessed | 0 | 0 | 0 | 0 | 2016-08-17T23:37:00.000 | 2 | 0 | false | 39,007,823 | 1 | 0 | 1 | 2 | Any idea how this can be done? ie, if we have a variable defined in java as below
public class Foo {
    String pass = "foo";
}
how can I access this via jpype since pass is a reserved keyword? I tried
getattr(Jpype.JClass(Foo)(), "pass") but it fails to find the attribute named pass |
Django and celery on different servers and celery being able to send a callback to django once a task gets completed | 39,065,804 | 0 | 5 | 1,298 | 0 | python,django,asynchronous,rabbitmq,celery | I've used the following set up on my application:
Task is initiated from Django - information is extracted from the model instance and passed to the task as a dictionary. NB - this will be more future proof as Celery 4 will default to JSON encoding
Remote server runs task and creates a dictionary of results
Remote server then calls an update task that is only listened for by a worker on the Django server.
Django worker read results dictionary and updates model.
The Django worker listens to a separate queue, though this isn't strictly necessary. Results backend isn't used - data needed is just passed to the task | 0 | 1 | 0 | 0 | 2016-08-18T11:59:00.000 | 2 | 0 | false | 39,017,678 | 0 | 0 | 1 | 1 | I have a django project where I am using celery with rabbitmq to perform a set of async tasks. So the setup I have planned goes like this.
Django app running on one server.
Celery workers and rabbitmq running from another server.
My initial issue is: how do I access Django models from the celery tasks residing on another server?
And assuming I am not able to access the Django models, is there a way, once a task gets completed, to send a callback to the Django application passing values, so that I can update Django's database based on the values passed? |
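A rough sketch of the two-queue flow described in the answer above; the task names, queue name and Report model are assumptions. The Django-side worker would be started with something like celery -A proj worker -Q django_updates.
# tasks.py - tasks registered by name on both sides
from celery import shared_task

@shared_task(name="remote.process_data")
def process_data(payload):
    # runs on the remote worker; 'payload' is a plain dict extracted from the model
    result = {"object_id": payload["object_id"], "value": 42}
    # hand the result to a task that only the worker next to Django consumes
    update_model.apply_async(args=[result], queue="django_updates")

@shared_task(name="django.update_model")
def update_model(result):
    # runs on the Django-side worker, which can import the models
    from myapp.models import Report  # assumed model
    Report.objects.filter(pk=result["object_id"]).update(value=result["value"])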
How to add Google places autocomplete to flask? | 54,605,309 | 0 | 2 | 983 | 0 | javascript,jquery,python,flask,autocomplete | If you managed to implement the search box itself in your flask app (it's being rendered and everything) but there are no drop-down suggestions you should be able to find out the exact error message in the developer tools of your browser.
One of the reasons could be that the URL of your web app is not included in your API key to accept requests from. | 0 | 0 | 0 | 0 | 2016-08-18T12:17:00.000 | 2 | 0 | false | 39,018,026 | 0 | 0 | 1 | 1 | I would like to add the Google places autocomplete library to an input field but am not able to in my flask app (it doesn't give the dropdown suggestions), although it works fine in a standalone HTML file. |
importance of virtual environment setup for django with python | 39,055,884 | 1 | 4 | 6,137 | 0 | python,django,virtualenv | In most simple way, a virtual environment provides you a development environment independent of the host operating system. You can install and use necessary software in the /bin folder of the virtualenv, instead of using the software installed in the host machine.
Python development generally depends on various libraries and dependencies. For example, if you install the latest version of Django using sudo pip install django, that specific version of Django will be available system-wide. Now, if you need to use another version of Django for a project, you can simply create a virtualenv, install that version of Django in it, and use it, without touching the Django version installed in the OS.
Yes, it is strongly recommended to setup separate virtualenv for each project. Once you are used to it, it will seem fairly trivial and highly useful for development, removing a lot of future headaches. | 0 | 0 | 0 | 0 | 2016-08-20T15:22:00.000 | 4 | 0.049958 | false | 39,055,728 | 1 | 0 | 1 | 4 | I am very much new to the process of development of a web-application with django, and i came across this setting up and using virtual environment for python.
So I landed with some basic questions.
What does this virtual environment exactly mean?
Does it have any sort of importance in the development of a web application using Django and Python modules?
Do I have to worry about setting up a virtual environment each time
in the development process? |
importance of virtual environment setup for django with python | 66,072,526 | 0 | 4 | 6,137 | 0 | python,django,virtualenv | It allows you to switch between different dependencies and versions of Python and other systems like PIP and Django.
It is similar to using Docker where you can pick and choose each version. It is definitely recommended. If you are starting fresh and using the latest versions, you do not NEED to use it however it is good practice to just install virtualenv and start using it before you install django. | 0 | 0 | 0 | 0 | 2016-08-20T15:22:00.000 | 4 | 0 | false | 39,055,728 | 1 | 0 | 1 | 4 | I am very much new to the process of development of a web-application with django, and i came across this setting up and using virtual environment for python.
So I landed with some basic questions.
What does this virtual environment exactly mean?
Does it have any sort of importance in the development of a web application using Django and Python modules?
Do I have to worry about setting up a virtual environment each time
in the development process? |
importance of virtual environment setup for django with python | 39,055,882 | 17 | 4 | 6,137 | 0 | python,django,virtualenv | A virtual environment is a way for you to have multiple versions of
python on your machine without them clashing with each other, each
version can be considered as a development environment and you can
have different versions of python libraries and modules all isolated
from one another
Yes it's very important. For example without a virtualenv, if you're
working on an open source project that uses django 1.5 but locally on
your machine, you installed django 1.9 for other personal projects.
It's almost impossible for you to contribute because you'll get a lot of
errors due to the difference in django versions. If you decide to
downgrade to django 1.5 then you can't work on your personal projects
anymore because they depend on django 1.9.
A virtualenv handles all this for you by enabling you to create separate
virtual (development) environments that aren't tied to each other and can
be activated and deactivated easily when you're done. You can also have
different versions of python
You're not forced to but you should, it's as easy as:
virtualenv newenv
cd newenv
source bin/activate # The current shell uses the virtual environment
Moreover it's very important for testing, let's say you want to port
a django web app from 1.5 to 1.9, you can easily do that by creating
different virtualenvs and installing different versions of django.
it's impossible to do this without uninstalling one version (except
you want to mess with sys.path which isn't a good idea) | 0 | 0 | 0 | 0 | 2016-08-20T15:22:00.000 | 4 | 1.2 | true | 39,055,728 | 1 | 0 | 1 | 4 | I am very much new to the process of development of a web-application with django, and i came across this setting up and using virtual environment for python.
So I landed with some basic questions.
What does this virtual environment exactly mean?
Does it have any sort of importance in the development of a web application using Django and Python modules?
Do I have to worry about setting up a virtual environment each time
in the development process? |
importance of virtual environment setup for django with python | 39,055,867 | 2 | 4 | 6,137 | 0 | python,django,virtualenv | While I can't directly describe the experience with Django and virtual environments, I suspect its pretty similar to how I have been using Flask and virtualenv.
A virtual environment does exactly what it says - an environment is set up for you to develop your app (including your web app) that does not impact the libraries that you run on your machine. It creates a blank slate, so to speak, with just the core Python modules. You can use pip to install new modules and freeze them into a requirements.txt file so that any users (including yourself) can see which external libraries are needed.
It has a lot of importance because of the ability to track external libraries. For instance, I program between two machines and I have a virtual environment set up for either machine. The requirements.txt file allows me to install only the libraries I need with the exact versions of those libraries. This guarantees that when I am ready to deploy on a production machine, I know what libraries that I need. This prevents any modules that I have installed outside of a virtual environment from impacting the program that I run within a virtual environment.
Yes and no. I think it is good practice to use a virtual environment for those above reasons and keeps your projects clean. Not to mention, it is not difficult to set up a virtual environment and maintain it. If you're just running a small script to check on an algorithm or approach, you may not need a virtual environment. But I would still recommend doing so to keep your runtime environments clean and well managed. | 0 | 0 | 0 | 0 | 2016-08-20T15:22:00.000 | 4 | 0.099668 | false | 39,055,728 | 1 | 0 | 1 | 4 | I am very much new to the process of development of a web-application with django, and i came across this setting up and using virtual environment for python.
So I landed with some basic questions.
What does this virtual environment exactly mean?
Does it have any sort of importance in the development of a web application using Django and Python modules?
Do I have to worry about setting up a virtual environment each time
in the development process? |
Xcode 8: Loading a plug-in failed | 47,449,527 | 0 | 3 | 2,895 | 0 | python,ios,xcode | I had the exact thing happen to me except on High Sierra. I had deleted the old version folders of Python in /System/Library/Frameworks/Python.framework/Versions/, which was a mistake seeing that these are the Apple installed Python files. After trying to launch Xcode, Xcode could no longer access the Python files it needed. Unfortunately I had deleted them and emptied the trash, so the only way I could restore those files was by reinstalling High Sierra.
So if you run into this plugin error and you've messed with Python files, you need to recover those files either by taking them back out of the trash or by reinstalling your operating system (reinstalling doesn't erase the data on your computer, but it will add missing files, such as the Python ones I deleted).
Hope that helps someone in a similar situation. | 0 | 0 | 0 | 0 | 2016-08-21T08:21:00.000 | 4 | 0 | false | 39,062,263 | 1 | 0 | 1 | 1 | when launching Xcode beta 8 on a macOS Sierra beta I'm getting this error:
Loading a plug-in failed.
The plug-in or one of its prerequisite plug-ins may be missing or damaged and may need to be reinstalled.
After searching, it seems that the issue is related with python and the new security measures that Apple introduced after XCode Ghost.
I couldn't find a solution, anybody can help?
EDIT
By looking at the Xcode logs, I noticed that it has NOTHING (apparently) to do with Python.
I see a whole bunch of
Requested but did not find extension point with identifier Xcode.*
errors
I have to say that I also have Xcode 7 installed on my machine. |
changing s3 storages with django-storages | 39,062,626 | 0 | 0 | 82 | 1 | django,amazon-s3,python-django-storages | The URL is relative to the amazon storage address you provide in your settings. so you only need to move the images to a new bucket and update your settings. | 0 | 0 | 0 | 0 | 2016-08-21T09:08:00.000 | 1 | 0 | false | 39,062,605 | 0 | 0 | 1 | 1 | I have a Django application where I use django-storages and amazon s3 to store images.
I need to move those images to a different account: different user different bucket.
I wanted to know how do I migrate those pictures?
My main concern is the links in my database to all those images; how do I update them? |
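If the objects themselves also need to be copied into the new account's bucket, a boto3 sketch along these lines could do it; the bucket names are placeholders and the credentials are assumed to be able to read the old bucket and write the new one.
import boto3

s3 = boto3.resource("s3")
old_bucket = s3.Bucket("old-bucket-name")   # placeholder
new_bucket_name = "new-bucket-name"         # placeholder

for obj in old_bucket.objects.all():
    copy_source = {"Bucket": old_bucket.name, "Key": obj.key}
    s3.meta.client.copy(copy_source, new_bucket_name, obj.key)
# The image fields in the database keep the same relative keys, so only the
# bucket name (and credentials) in the django-storages settings need updating.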
Running Python scripts on Amazon Web Services? Do I need to use Boto? | 39,086,478 | 2 | 1 | 561 | 0 | python,amazon-web-services,boto3,boto | Boto is a Python wrapper for AWS APIs. If you want to interact with AWS using its published APIs, you need boto/boto3 library installed. Boto will not be supported for long. So if you are starting to use Boto, use Boto3 which is much simpler than Boto.
Boto3 supports (almost) all AWS services. | 0 | 0 | 1 | 1 | 2016-08-22T18:30:00.000 | 2 | 0.197375 | false | 39,086,388 | 0 | 0 | 1 | 2 | Maybe this is a silly question, I just set up free Amazon Linux instance according to the tutorial, what I want to do is simply running python scripts.
Then I googled AWS and Python, Amazon mentioned Boto.
I don't know why I would use Boto, because if I type python, it is already installed.
What I want to do is run a script during the day.
Is there a need for me to read about Boto, or can I just run xx.py on AWS?
Any help is appreciated. |
Running Python scripts on Amazon Web Services? Do I need to use Boto? | 39,086,443 | 3 | 1 | 561 | 0 | python,amazon-web-services,boto3,boto | Boto is a python interface to Amazon Services (like copying to S3, etc).
You don't need it to just run regular python as you would on any linux instance with python installed, except to access AWS services from your EC2 instance. | 0 | 0 | 1 | 1 | 2016-08-22T18:30:00.000 | 2 | 1.2 | true | 39,086,388 | 0 | 0 | 1 | 2 | Maybe this is a silly question, I just set up free Amazon Linux instance according to the tutorial, what I want to do is simply running python scripts.
Then I googled AWS and Python, Amazon mentioned Boto.
I don't know why I would use Boto, because if I type python, it is already installed.
What I want to do is run a script during the day.
Is there a need for me to read about Boto, or can I just run xx.py on AWS?
Any help is appreciated. |
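For illustration only, this is the kind of thing boto3 is for, calling AWS service APIs from Python; a plain script that just runs on the instance does not need it at all.
import boto3

s3 = boto3.client("s3")
for bucket in s3.list_buckets()["Buckets"]:
    print(bucket["Name"])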
django testing without email backend | 39,087,027 | 1 | 0 | 969 | 0 | python,django,email,testing | You don't need to define the EMAIL_BACKEND setting (it has a default), but you do need to define a setting module. You can set the DJANGO_SETTINGS_MODULE in your shell environment, or set os.environ['DJANGO_SETTINGS_MODULE'] to point to your settings module.
Note that calling python manage.py shell will set up the Django environment for you, which includes setting DJANGO_SETTINGS_MODULE and calling django.setup(). You still need to call setup_test_environment() to manually run tests in your python shell. | 0 | 0 | 0 | 1 | 2016-08-22T18:33:00.000 | 1 | 0.197375 | false | 39,086,434 | 0 | 0 | 1 | 1 | I want to test a view in my Django application. So I open the python shell by typing python and then I type from django.test.utils import setup_test_environment. It seems to work fine. Then I type setup_test_environment() and it says
django.core.exceptions.ImproperlyConfigured: Requested setting
EMAIL_BACKEND, but settings are not configured. You must either define
the environment variable DJANGO_SETTINGS_MODULE or call
settings.configure() before accessing settings.
I don't need to send mails in my test, so why does Django want me to configure an email backend?
Are we forced to configure an email backend for any test even if it doesn't need it? |
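A minimal sketch of the setup the answer describes, typed into a plain python shell; the settings module path is a placeholder.
import os
import django
from django.test.utils import setup_test_environment

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myproject.settings")  # placeholder path
django.setup()
setup_test_environment()
# from here on, views and the test client can be exercised manually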
Webpage contained within one dynamic page, unable to use driver interactions with Selenium (Python) | 39,091,712 | 0 | 1 | 69 | 0 | python,selenium,testing | As suggested by saurabh , use
self.wait.until(EC.visibility_of_element_located((By.CSS_SELECTOR, OR.Sub_categories)))
Otherwise, put in a sleep and see; however, it is not advisable to rely on that. It may also be that the xpath you have changes at the time of page load. | 0 | 0 | 1 | 0 | 2016-08-23T00:53:00.000 | 2 | 0 | false | 39,090,768 | 0 | 0 | 1 | 1 | The homepage for the web application I'm testing has a loading screen when you first load it, then a username/password box appears. It is a dynamically generated UI element and the cursor defaults to being inside the username field.
I looked around and someone suggested using action chains. When I use action chains, I can immediately input text into the username and password fields and then press enter and the next page loads fine. Unfortunately, action chains are not a viable long-term answer for me due to my particular setup.
When I use the webdriver's find_element_by_id I am able to locate it and I am not able to send_keys to the element though because it is somehow not visible. I receive
selenium.common.exceptions.ElementNotVisibleException: Message: element not visible.
I'm also not able to click the field or otherwise interact with it without getting this error.
I have also tried identifying and interacting with the elements via other means, such as "xpaths" and css, to no avail. They are always not visible.
Strangely, it works with dynamic page titles. When the page first loads it is Loading... and when finished it is Login. The driver will return the current title when driver.title is called.
Does anyone have a suggestion? |
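A sketch of the explicit-wait idea from the answer above, with a placeholder URL and element ids for the login form:
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Firefox()
driver.get("https://example.com/login")  # placeholder URL

wait = WebDriverWait(driver, 30)
username = wait.until(EC.visibility_of_element_located((By.ID, "username")))  # placeholder id
username.send_keys("my_user")
driver.find_element_by_id("password").send_keys("my_password" + Keys.RETURN)  # placeholder id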
How to build model in DynamoDB if each night I need to process the daily records and then delete them? | 39,114,304 | 2 | 2 | 29 | 1 | python,database-design,amazon-dynamodb | You can use RabbitMQ to schedule jobs asynchronously. This would be faster than multiple DB queries. Basically, this tool allows you to create a job queue (Containing UserID, StoreID & Timestamp) where workers can remove (at midnight if you want) and create your reports (or whatever your heart desires).
This also allows you to scale your system horizontally across nodes. Your workers can be different machines executing these tasks. You will also be safe if your DB crashes (though you may still have to design redundancy for a machine running RabbitMQ service).
DB should be used for persistent storage and not as a queue for processing. | 0 | 0 | 0 | 0 | 2016-08-23T22:22:00.000 | 1 | 1.2 | true | 39,111,598 | 0 | 0 | 1 | 1 | I need to store some daily information in DynamoDB. Basically, I need to store user actions: UserID, StoreID, ActionID and Timestamp.
Each night I would like to process the information generated that day, do some aggregations and some reports, and then safely delete those records.
How should I model this? I mean the hash key and the sort key... I need to have the full timestamp of each action for the reports but in order to query DynamoDB I guess it is easier to also save the date only.
I have some PKs as UserID and StoreID but anyhow I need to process all data each night, not the data related to one user or one store...
Thanks!
Patricio |
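A rough sketch of publishing such a job to RabbitMQ with the pika library; the queue name and payload shape are assumptions.
import json
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()
channel.queue_declare(queue="nightly_jobs", durable=True)

job = {"UserID": "u123", "StoreID": "s456", "ActionID": "view", "Timestamp": "2016-08-24T10:00:00Z"}
channel.basic_publish(exchange="", routing_key="nightly_jobs", body=json.dumps(job))
connection.close()
# A worker drains this queue at night, builds the reports, then deletes the processed rows.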
Virtualenv gives different versions for different os | 39,124,070 | 0 | 0 | 89 | 0 | python,django,virtualenv | Thanks to @Oliver and @Daniel's comments that lead me to the answer why it did not work.
I started the virtual environment on my Debian with python 3. virtualenv made the virtual environment but it was specifically for Debian.
When I used it on my Mac, it could not run the Python executable in the virtual environment (since it is only compatible with Debian), so it used my Mac's system Python, which is Python 2.7.10.
In summary, as virtualenv uses the python executable on the system, when the python executable is run on another system, it will not work. | 0 | 1 | 0 | 0 | 2016-08-24T12:43:00.000 | 2 | 0 | false | 39,123,699 | 0 | 0 | 1 | 1 | I am working on a django project on two separate systems, Debian Jessie and Mac El Capitan. The project is hosted on github where both systems will pull from or push to.
However, I noticed that on my Debian, when I run python --version, it gives me Python 3.4.2 but on my Mac, it gives me Python 2.7.10 despite being in the same virtual environment. Moreover, when I run django-admin --version on my Debian, it gives me 1.10 while on my Mac, 1.8.3.
This happens even when I freshly clone the projects from github and run the commands.
Why is it that the virtual environment does not keep the same version of python and django? |
Running code on Django application start | 39,127,214 | 1 | 0 | 242 | 0 | python,django | I agree with the comments; there are prettier approaches than this.
You could add your code to the __init__.py of your app | 0 | 0 | 0 | 0 | 2016-08-24T14:42:00.000 | 1 | 0.197375 | false | 39,126,445 | 0 | 0 | 1 | 1 | I need to run some code every time my application starts. I need to be able to manipulate models, just like I would in actual view code. Specifically, I am trying to hack the built-in User model to support longer usernames, so my code is like this
def username_length_hack(sender, *args, **kwargs):
    model = sender._meta.model
    model._meta.get_field("username").max_length = 254
But I cannot seem to find the right place to do it. I tried adding a class_prepared signal handler in either models.py or app.py of the app that uses the User model (expecting that User will be loaded by the time this app's models are loaded). The post_migrate and pre_migrate signals only run on the migrate command. Adding code into settings.py seems weird, and besides, nothing is loaded at that point anyway. So far, the only thing that worked was connecting it to a pre_init signal and having it run every time a User instance is spawned. But that seems like a resource drain. I am using Django 1.8. How can I run this on every app load? |
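A sketch of the answer's suggestion, wiring the handler up in the app package's __init__.py so it is connected early; the app name is a placeholder, and on Django 1.8 the app would have to be loaded before django.contrib.auth for the signal to fire in time.
# myapp/__init__.py (placeholder app name)
from django.db.models.signals import class_prepared

def username_length_hack(sender, *args, **kwargs):
    # only touch the auth User model
    if sender.__name__ == "User" and sender._meta.app_label == "auth":
        sender._meta.get_field("username").max_length = 254

class_prepared.connect(username_length_hack)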
How do I structure a database cache (memcached/Redis) for a Python web app with many different variables for querying? | 39,128,415 | 0 | 2 | 488 | 1 | python,database,caching,redis,memcached | I had this exact question myself, with a PHP project, though. My solution was to use ElasticSearch as an intermediate cache between the application and database.
The trick to this is the ORM. I designed it so that when Entity.save() is called it is first stored in the database, then the complete object (with all references) is pushed to ElasticSearch and only then the transaction is committed and the flow is returned back to the caller.
This way I maintained full functionality of a relational database (atomic changes, transactions, constraints, triggers, etc.) and still have all entities cached with all their references (parent and child relations) together with the ability to invalidate individual cached objects.
Hope this helps. | 0 | 0 | 0 | 0 | 2016-08-24T16:03:00.000 | 2 | 0 | false | 39,128,100 | 0 | 0 | 1 | 1 | For my app, I am using Flask, however the question I am asking is more general and can be applied to any Python web framework.
I am building a comparison website where I can update details about products in the database. I want to structure my app so that 99% of users who visit my website will never need to query the database, where information is instead retrieved from the cache (memcached or Redis).
I require my app to be realtime, so any update I make to the database must be instantly available to any visitor to the site. Therefore I do not want to cache views/routes/html.
I want to cache the entire database. However, because there are so many different variables when it comes to querying, I am not sure how to structure this. For example, if I were to cache every query and then later need to update a product in the database, I would basically need to flush the entire cache, which isn't ideal for a large web app.
What I would prefer is to cache individual rows within the database. The problem is, how do I structure this so I can flush the cache appropriately when an update is made to the database? Also, how can I map all of this together from the cache?
I hope this makes sense. |
Determining the Order Files Are Run in a Website Built By Someone Else | 39,154,045 | 0 | 0 | 22 | 0 | javascript,python,django | Try opening the page in Chrome and hitting F12 - there's a tonne of developer tools and web page debuggers in there.
For your particular question about loading order, check the Network tab, then hit refresh on your page - it'll show you every file that the browser loads, starting with the HTML in your browsers address bar.
If you're trying to figure out javascript, check out the Sources tab. It even allows you to create breakpoints - very handy for following along with what a page is doing. | 0 | 0 | 0 | 0 | 2016-08-25T19:59:00.000 | 1 | 0 | false | 39,153,790 | 0 | 0 | 1 | 1 | Ok, this question is going to sound pretty dumb, but I'm an absolute novice when it comes to web development and have been tasked with fixing a website for my job (that has absolutely nothing in the way of documentation).
Basically, I'm wondering if there is any tool or method for tracking the order a website loads files when it is used. I just want to know a very high-level order of the pipeline. The app I've been tasked with maintaining is written in a mix of django, javascript, and HTML (none of which I really know, besides some basic django). I can understand how django works, and I kind of understand what's going on with HTML, but (for instance) I'm at a complete loss as to how the HTML code is calling javascript, and how that information is transfered back to HTML. I wish I could show the code I'm using, but it can't be released publicly.
I'm looking for what amounts to a debugger that will let me step through each file of code, but I don't think it works like that for web development.
Thank you |
Can someone explain debugging / Pycharm's debugger in an easy to understand way? | 39,164,574 | 1 | 0 | 109 | 0 | python,django,debugging,pycharm | It's really easy. You can debug your script by pressing Alt+F5 or the bug button in the Pycharm IDE. After that, the debugger handles the execution of the script. Now you can debug line by line with F10, and step into a function or other object by pressing F11. There is also a Watch window where you can trace your variable values while debugging. I really encourage you to search for blogs on the internet; there are lots of tutorials in this area | 0 | 0 | 0 | 0 | 2016-08-25T22:00:00.000 | 2 | 0.099668 | false | 39,155,391 | 1 | 0 | 1 | 1 | I have reached the stage in developing my Django project where I need to start debugging my code, as my site is breaking and I don't know why. I'm using Pycharm's IDE to code, and the debugger that comes with it is super intimidating!
Maybe because I am a total newbie to programming (been coding only since May) but I don't really understand how debugging, as a basic concept, works. I've read the Pycharm docs about debugging, but I'm still confused. What is the debugger supposed to do/how is it supposed to interact with your program? What helpful info about the code is debugging supposed to offer?
When I previously thought about debugging I imagined that it would be a way of running through the code line by line, say, and finding out "my program is breaking at this line of code," but "stepping through my code" seems to take me into files that aren't even part of my project (e.g. stepping into my code in admin.py will take me into the middle of a function in widgets.py?) etc. and seems to present lots of extra/confusing info. How do I use debugging productively? How can I use it to debug my Django webapp?
Please help! TIA :) |
How to get the flag/state of current operation in Odoo 9? | 39,180,410 | 0 | 1 | 107 | 0 | python,web,openerp | There is no such thing as 'flag/state'.
What you are probably trying to say is that you want to know which operations are taking place on a record. The easiest method is to take a look at your log. There will be statements there in the form /web/dataset/call_kw/model/operation where model is your ORM model and operation could be a search, read, unlink etc. RPC calls are logged in there as well. The format of the log output is a little bit different between different versions of odoo. You can go to a lower level by monitoring sql transactions on postgresql but I do not think that this is what you want. | 0 | 0 | 0 | 0 | 2016-08-26T07:34:00.000 | 1 | 0 | false | 39,160,816 | 0 | 0 | 1 | 1 | I am new in odoo, I want to know how we get the current flag/state of every operation.
For example: when we create a new record how do we know the current flag/state is "add"? or when we view a record how do we know the current flag/state is "view"?
Is it something like the current user id that is stored in the session as "uid"? Is there something similar to get the current flag/state for every operation? |
Shopify app: adding a new shipping address via webhook | 39,170,945 | 0 | 0 | 256 | 0 | python,django,shopify | Here is the recipe:
create a Proxy in your App to accept incoming Ajax call from customer
create a form and button in customer liquid that submits to your Proxy
in the App Proxy, validate the call from Shopify and when valid, look for your form params.
open the customer record with the ID of the customer you sent along with the form data, and add an address to their account
Done. Simple. | 0 | 0 | 0 | 0 | 2016-08-26T08:30:00.000 | 1 | 1.2 | true | 39,161,806 | 0 | 0 | 1 | 1 | I'm planning to create a simple app using Django/Python that shows a nice button when installed by the store owner on user's account.
Clicking on that button should trigger a webhook request to our servers that would send back the generated shipping address for the user.
My questions:
Is it possible to create such a button through the Shopify API, or is this something the store owner must manually add?
Is it possible to add a shipping address upon user request?
Thanks |
How to force application version on AWS Elastic Beanstalk | 42,735,371 | 11 | 18 | 10,464 | 0 | python,django,amazon-web-services,amazon-ec2,amazon-elastic-beanstalk | I've realised that the problem was that Elastic Beanstalk, for some reason, kept the unsuccessfully deployed versions under .elasticbeanstalk. The solution, at least in my case, was to remove those temporary (or whatever you call them) versions of the application. | 0 | 1 | 0 | 0 | 2016-08-27T20:42:00.000 | 2 | 1.2 | true | 39,185,570 | 0 | 0 | 1 | 1 | I'm trying to deploy a new version of my Python/Django application using eb deploy.
It unfortunately fails due to unexpected version of the application. The problem is that somehow eb deploy screwed up the version and I don't know how to override it. The application I upload is working fine, only the version number is not correct, hence, Elastic Beanstalk marks it as Degraded.
When executing eb deploy, I get this error:
"Incorrect application version "app-cca6-160820_155843" (deployment
161). Expected version "app-598b-160820_152351" (deployment 159). "
The same says in the health status at AWS Console.
So, my question is the following: How can I force Elastic Beanstalk to make the uploaded application version the current one so it doesn't complain? |
What is the difference between table level operation and record-level operation? | 39,187,633 | 1 | 3 | 471 | 0 | python,django,database,django-models | I do not know specifically how Django people use the terms, but 'record-level operation' should mean an operation on 1 or more records while a 'table-level operation' should mean an operation of the table as a whole. I am not quite sure what an operation on all rows should be -- perhaps both, perhaps it depends on the result.
In Python, the usual term for 'record-level' would be 'element-wise'. Python builtins operate on a collection as a whole: bool([0, 1, 0, 3]) = True. For numpy arrays, operations are (at least usually) element-wise: np.array([0, 1, 0, 2]).astype(bool) = [False, True, False, True]. Also compare [1, 2, 3] * 2 = [1, 2, 3, 1, 2, 3] for a plain list versus np.array([1, 2, 3]) * 2 = [2, 4, 6] for a numpy array.
I hope this helps. See if it makes sense in context. | 0 | 0 | 0 | 0 | 2016-08-28T00:41:00.000 | 2 | 0.099668 | false | 39,187,032 | 0 | 0 | 1 | 1 | While going through the Django documentation to deepen my knowledge, I encountered the terms 'table level operation' and 'record level operation'. What is the difference between them? Could anyone please explain these two terms with an example? Do they have other names too?
P.S I am not asking their difference just because i feel they are alike but i feel it can be more clear to comprehend this way. |
Change name of the "Authentication and Authorization" menu in Django/python | 39,212,494 | 0 | 3 | 2,299 | 0 | python,django,admin | This can't be done at least cleanly via templates..
You can put the auth app verbose name "authentication and authorization" in your own .po file (& follow Django docs on translation)
This way Django will normally use your name. | 0 | 0 | 0 | 0 | 2016-08-29T16:11:00.000 | 4 | 0 | false | 39,210,668 | 0 | 0 | 1 | 1 | I'm learning python/Django and setting up my first project. All is going well but I've been searching like crazy on something very simple.
There is a default menu item "Authentication and Authorization" and I want to change the name. I've searched in the template if I need to extend something, I've searched if there's a .po file or what not but I can't find it nor a hint on which parameter I should overwrite in admin.py to set it.
I'm not trying to install multi language or some advanced localization, just want to change the name of that one menu item :)
Any ideas? |
Check django permission or operator? | 39,233,603 | 0 | 3 | 594 | 0 | python,django,model-view-controller,permissions | You could write your own decorator for this.
Or use django.contrib.auth.decorators.user_passes_test(your_test_func) to create a custom decorator.
In both cases, have a look at the source code of the permission_required decorator in the above module. | 0 | 0 | 0 | 0 | 2016-08-30T17:09:00.000 | 1 | 0 | false | 39,233,347 | 0 | 0 | 1 | 1 | For my view, I am checking the permission through the @permission_required decorator, but I really wish to check for "either" permission A or permission B, so that if the user has at least one of the two permissions, the view is executed...
Is there a way to do this? |
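A minimal sketch of the user_passes_test approach with two placeholder permission codenames:
from django.contrib.auth.decorators import user_passes_test

def has_either_permission(user):
    return user.has_perm("app.perm_a") or user.has_perm("app.perm_b")  # placeholders

@user_passes_test(has_either_permission)
def my_view(request):
    ...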
how to install libhdf5-dev? (without yum, rpm nor apt-get) | 67,224,754 | 0 | 6 | 20,706 | 0 | python,linux,installation,hdf5 | For Centos 8, I got the below warning message :
Warning: Couldn't find any HDF5 C++ libraries. Disabling HDF5 support.
and I solved it using the command :
sudo yum -y install hdf5-devel | 0 | 1 | 0 | 1 | 2016-08-30T19:53:00.000 | 4 | 0 | false | 39,236,025 | 0 | 0 | 1 | 1 | I want to use h5py which needs libhdf5-dev to be installed. I installed hdf5 from the source, and thought that any options with compiling that one would offer me the developer headers, but doesn't look like it.
Anyone know how I can do this? Is there some other source i need to download? (I cant find any though)
I am on amazon linux, yum search libhdf5-dev doesn't give me any result and I cant use rpm nor apt-get there, hence I wanted to compile it myself. |
How would Auth work between Django and Discourse (working together) | 39,243,456 | 1 | 2 | 696 | 0 | python,django,discourse | That sounds plausible. To make sure a user is logged in to both, you may put one of the auths in front of the other. For example, if discourse is in front of Django, you can use something like the builtin RemoteUserMiddleware.
In general, if they are going to be hosted on different domains, take a look at JWT. It has been gainining ground to marry different services and the only thing you need is to be able to decode the JWT token, which a lot of languages have nowadays in the form of libraries. | 0 | 0 | 0 | 0 | 2016-08-31T05:06:00.000 | 1 | 1.2 | true | 39,241,151 | 0 | 0 | 1 | 1 | I need a modern looking forum solution that is self hosted (to go with a django project)
The only reasonable thing I can see using is discourse, but that gives me a problem... How can I take care of auth between the two? It will need to be slightly deeper than just auth because I will need a few User tables in my django site as well.
I have been reading about some SSO options, but I am unclear on how to approach the problem down the road. Here is the process that I have roughly in my head... Let me know if it sounds coherent...
Use Discourse auth (since it already has social auth, profiles and a lot of user tables).
Make some SSO hook for django so that it will accept the Discourse login
Upon account creation of the Discourse User, I will send (from the discourse instance) an API request that will create a user in my django instance with the proper user tables for my django site.
Does this sound like a good idea? |
How to test RPC of SOAP web services? | 39,275,854 | 1 | 1 | 458 | 0 | python,django,testing,rpc,spyne | I believe if you are using a service inside a test, that test should not be a unit test.
You might want to consider using factory_boy or mock; both of them are Python modules to mock or fake an object, for instance to fake an object that gives a response to your rpc call. | 0 | 0 | 0 | 1 | 2016-09-01T14:54:00.000 | 1 | 0.197375 | false | 39,274,850 | 0 | 0 | 1 | 1 | I am currently learning to build SOAP web services with django and spyne. I have successfully tested my model using unit tests. However, when I tried to test all those @rpc functions, I have no luck there at all.
What I have tried in testing those @rpc functions:
1. Get dummy data in model database
2. Start a server at localhost:8000
3. Create a suds.Client object that can communicate with localhost:8000
4. Try to invoke @rpc functions from the suds.Client object, and test if the output matches what I expected.
However, when I run the test, I believe the test got blocked by the running server at localhost:8000 thus no test code can be run while the server is running.
I tried to make the server run on a different thread, but that messed up my test even more.
I have searched as much as I could online and found no materials that can answer this question.
TL;DR: how do you test @rpc functions using unit test? |
how to build an deep learning image processing server | 48,819,337 | 0 | 0 | 906 | 0 | python,amazon-web-services,lua,server,torch | It does make sense to look at the whole task and how it fits to your actual server, Nginx or Lighttpd or Apache since you are serving static content. If you are going to call a library to create the static content, the integration of your library to your web framework would be simpler if you use Flask but it might be a fit for AWS S3 and Lambda services.
It may be worth it to roughly design the whole site and match your content to the tools at hand. | 0 | 0 | 0 | 0 | 2016-09-04T16:49:00.000 | 2 | 0 | false | 39,319,275 | 0 | 0 | 1 | 1 | I am building an application to process users' photos on a server. Basically, a user uploads a photo to the server and it does some filtering using a deep learning model. Once the filtering is done, the user can download the new photo. The filter program is based on a deep learning algorithm, using the torch framework, and it runs on python/lua. I currently run this filter code on my local ubuntu machine. I just wonder how to turn this into a web service. I have 0 server-side knowledge. I did some research; maybe I should use flask or tornado, or another architecture?
Exposing API Documentation Using Flask and Swagger | 66,651,876 | 0 | 3 | 1,718 | 0 | python,flask,swagger | There are three ways of doing it:
via Restful-Api (Api.doc)
via getting swagger templates
via registering blueprints (from flask-swagger-ui or smth). | 0 | 0 | 0 | 0 | 2016-09-05T00:12:00.000 | 3 | 0 | false | 39,322,550 | 0 | 0 | 1 | 1 | I have build a small service with flask and already wrote a swagger yaml file to describe it's API. How can I expose the swagger file through the flask app?
I didn't mean to expose the file itself (send_from_directory) but to create new endpoint that will show it as swagger-ui (interactive, if possible) |
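A sketch of the blueprint option mentioned in the answer, using the flask-swagger-ui package; the paths are placeholders and the helper name should be checked against that package's documentation.
from flask import Flask
from flask_swagger_ui import get_swaggerui_blueprint

app = Flask(__name__)

SWAGGER_URL = "/docs"               # where the interactive UI is served
API_URL = "/static/swagger.yaml"    # the spec file, exposed as a static asset

swaggerui_blueprint = get_swaggerui_blueprint(SWAGGER_URL, API_URL)
app.register_blueprint(swaggerui_blueprint, url_prefix=SWAGGER_URL)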
Celery Worker - Consume from Queue matching a regex | 39,340,088 | -2 | 5 | 409 | 0 | python,celery,django-celery | Something along these lines would work: (\b(dev.)(\w+)).
Then refer to the second group for the stuff after "dev.".
You'll need to set it up to capturing repeated instances if you want to get multiple. | 0 | 1 | 0 | 0 | 2016-09-06T02:32:00.000 | 1 | -0.379949 | false | 39,339,804 | 0 | 0 | 1 | 1 | Background
Celery worker can be started against a set of queues using -Q flag. E.g.
-Q dev.Q1,dev.Q2,dev.Q3
So far I have seen examples where all the queue names are explicitly listed as comma separated values. It is troublesome if I have a very long list.
Question
Is there a way I can specify queue names as a regex & celery worker will start consuming from all queues satisfying that regex.
E.g.
-Q dev.*
This should consume from all queues starting with dev, i.e. dev.Q1, dev.Q2, dev.Q3. But what I have seen is that it creates a queue named dev..*
Also how can I tune the regex so that it doesn't pick ERROR queues e.g. dev.Q1.ERROR, dev.Q2.ERROR. |
Sending SMS from django app | 39,379,920 | 1 | 1 | 765 | 0 | python,django,sms | Using Twilio is not mandatory, but I do recommend it. Twilio does the heavy lifting, your Django App just needs to make the proper API Requests to Twilio, which has great documentation on it.
Twilio has Webhooks as well which you can 'hook' to specific Django Views and process certain events. As for the 'programmable' aspect of your app you can use django-celery, django-cron, RabbitMQ or other task-queueing software. | 0 | 0 | 0 | 1 | 2016-09-07T20:46:00.000 | 1 | 0.197375 | false | 39,378,728 | 0 | 0 | 1 | 1 | I came to the requirement to send SMS from my django app. Its a dashboard from multiple clients, and each client will have the ability to send programable SMS.
Is this achievable with django smsish? I have found some packages that aren't updated, and sending SMS via email is not possible.
All answers found are old and I have tried all approaches suggested.
Do I have to use services like twilio mandatorily? Thanks |
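For reference, sending one message through the current Twilio helper library looks roughly like this; all credentials and numbers are placeholders.
from twilio.rest import Client

client = Client("ACCOUNT_SID", "AUTH_TOKEN")  # placeholder credentials
client.messages.create(
    to="+15551234567",      # placeholder recipient
    from_="+15557654321",   # placeholder Twilio number
    body="Hello from the dashboard",
)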
Change IronPython version in Edge.js? | 43,118,997 | 0 | 1 | 102 | 0 | node.js,ironpython,edgejs | The problem come from Edge, where the version number is hardcoded. | 0 | 0 | 0 | 0 | 2016-09-08T09:56:00.000 | 1 | 1.2 | true | 39,387,888 | 0 | 0 | 1 | 1 | I am testing Edge.JS since I need to run Python functions from Node.js. The problem is that Edge seems to want another version of IronPython:
Could not load file or assembly 'IronPython, Version=2.7.0.40, Culture=neutral, PublicKeyToken=7f709c5b713576e1' or one of its dependencies. The system cannot find the file specified.
I have 2.7.6.3 installed, should I downgrade? Or is there a way to set the version in edge? |
How do I get Django to log why an sql transaction failed? | 39,447,127 | 1 | 0 | 223 | 1 | python,mysql,django,pootle | Install django debug toolbar, you can easily check all of the queries that have been executed | 0 | 0 | 0 | 0 | 2016-09-08T10:01:00.000 | 1 | 1.2 | true | 39,387,983 | 0 | 0 | 1 | 1 | I am trying to debug a Pootle (pootle is build on django) installation which fails with a django transaction error whenever I try to add a template to an existing language. Using the python debugger I can see that it fails when pootle tries to save a model as well as all the queries that have been made in that session.
What I can't see is what specifically causes the save to fail. I figure pootle/django must have added some database constraint; how do I figure out which one? MySql (the database being used) apparently can't log just failed transactions.
How to translate generated language in Pelican? | 60,045,659 | 1 | 1 | 538 | 0 | python,blogs,pelican | If you have a non-standard theme installed, go to the folder of that theme and navigate to the templates folder. There are a lot of different html files. If you want to translate the generated text, like "read more" or "Other articles", open the index.html file inside the template folder and search for the text you want to translate, replace it with yours and regenerate your page. Be cautious not to break the syntax of the template file, tho. | 0 | 0 | 0 | 0 | 2016-09-08T13:28:00.000 | 2 | 0.099668 | false | 39,392,297 | 0 | 0 | 1 | 1 | I am setting up a new Pelican blog and stumbled upon a bit of a problem. I am German, the blog is going to be in german so I want the generated text (dates, 'Page 1/5'...) to be in german. (In my post date I include the weekday)
In pelicanconf.py I tried
DEFAULT_LANG = u'ger' and
DEFAULT_LANG = u'de' and
DEFAULT_LANG = u'de_DE'
but I only get everything in en. |
Managing contents of requirements.txt for a Python virtual environment | 53,108,990 | 0 | 14 | 35,512 | 0 | python,pip,virtualenv,requirements.txt | If you only want to see what packages you have installed then just do pip freeze.
But if you want all these packages in your requirements.txt, then do
pip freeze > requirements.txt | 0 | 0 | 0 | 0 | 2016-09-09T07:32:00.000 | 4 | 0 | false | 39,406,177 | 1 | 0 | 1 | 2 | So I am creating a brand new Flask app from scratch. As all good developers do, my first step was to create a virtual environment.
The first thing I install in the virtual environment is Flask==0.11.1. Flask installs its following dependencies:
click==6.6
itsdangerous==0.24
Jinja2==2.8
MarkupSafe==0.23
Werkzeug==0.11.11
wheel==0.24.0
Now, I create a requirements.txt to ensure everyone cloning the repository has the same version of the libraries. However, my dilemma is this:
Do I mention each of the Flask dependencies in the requirements.txt along with the version numbers
OR
Do I just mention the exact Flask version number in the requirements.txt and hope that when they do a pip install requirements.txt, Flask will take care of the dependency management and they will download the right versions of the dependent libraries |
Managing contents of requirements.txt for a Python virtual environment | 39,406,537 | 5 | 14 | 35,512 | 0 | python,pip,virtualenv,requirements.txt | Both approaches are valid and work. But there is a little difference. When you enter all the dependencies in the requirements.txt you will be able to pin the versions of them. If you leave them out, there might be a later update and if Flask has something like Werkzeug>=0.11 in its dependencies, you will get a newer version of Werkzeug installed.
So it comes down to updates vs. defined environment. Whatever suits you better. | 0 | 0 | 0 | 0 | 2016-09-09T07:32:00.000 | 4 | 1.2 | true | 39,406,177 | 1 | 0 | 1 | 2 | So I am creating a brand new Flask app from scratch. As all good developers do, my first step was to create a virtual environment.
The first thing I install in the virtual environment is Flask==0.11.1. Flask installs its following dependencies:
click==6.6
itsdangerous==0.24
Jinja2==2.8
MarkupSafe==0.23
Werkzeug==0.11.11
wheel==0.24.0
Now, I create a requirements.txt to ensure everyone cloning the repository has the same version of the libraries. However, my dilemma is this:
Do I mention each of the Flask dependencies in the requirements.txt along with the version numbers
OR
Do I just mention the exact Flask version number in the requirements.txt and hope that when they run pip install -r requirements.txt, pip will take care of the dependency management (through Flask's declared dependencies) and they will download the right versions of the dependent libraries?
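For illustration only, here is roughly what the two styles look like for the Flask 0.11.1 setup described in the question; the first is what pip freeze produces, the second is a hand-maintained top-level file that lets pip resolve the rest at install time:

    # requirements.txt, fully pinned (output of pip freeze):
    click==6.6
    Flask==0.11.1
    itsdangerous==0.24
    Jinja2==2.8
    MarkupSafe==0.23
    Werkzeug==0.11.11

    # requirements.txt, top-level only:
    Flask==0.11.1

The trade-off is reproducibility (pinned) versus automatically picking up newer versions of the dependencies (top-level only).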
Django: no module named http_utils | 39,409,980 | 0 | 0 | 217 | 0 | python,django | Make sure the package is set up correctly (include an __init__.py file).
Make sure there are no other utils files at the same directory level. That is, if you are doing from utils import http_utils in views.py, there should not be a utils.py in the same folder; a conflict occurs because of that.
You don't have to include the folder in the INSTALLED_APPS setting, because the utils folder is a package and should be available for importing. | 0 | 0 | 0 | 0 | 2016-09-09T10:34:00.000 | 2 | 1.2 | true | 39,409,581 | 0 | 0 | 1 | 1 | I created a new utils package and an http_utils file with some decorators and HTTP utility functions in there. I imported them wherever I am using them and the IDE reports no errors, and I also added the utils module to the INSTALLED_APPS list.
However, when launching the server I am getting an import error:
ImportError: No module named http_utils
What am I missing? What else do I need to do to register a new module? |
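A minimal sketch of the layout the answer describes; the project and app names are placeholders, and the key points are the __init__.py file and the absence of a competing utils.py next to the package:

    myproject/
        manage.py
        utils/
            __init__.py      # makes 'utils' an importable package
            http_utils.py    # decorators and HTTP helpers
        myapp/
            views.py         # here: from utils import http_utils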
Providing 'default' User when not logged in instead of AnonymousUser | 39,411,527 | 1 | 1 | 49 | 0 | python,django | You could create a custom middleware (called after AuthenticationMiddleware) that checks whether the user is logged in, and if not, replaces the current user object attached to the request with the user of your choice. | 0 | 0 | 0 | 0 | 2016-09-09T11:34:00.000 | 1 | 1.2 | true | 39,410,656 | 0 | 0 | 1 | 1 | Is there a way to globally provide a custom instance of the User class instead of AnonymousUser?
It is not possible to assign AnonymousUser instances where a User is expected (for example in forms, there is a need to check for authentication and so on), so the idea is that an ordinary User with the name 'anonymous' (so that we could search for it in the DB) would be globally returned whenever a non-authenticated user visits the page. Would implementing some custom authentication mechanism do the trick? I also want to ask whether such an idea is a standard approach before diving into this.
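A minimal sketch of the middleware the answer suggests, written for Django 1.x-style middleware classes; the 'anonymous' username is only an assumed convention, and a real app would want to cache the lookup:

    # Hypothetical middleware: swap AnonymousUser for a real 'anonymous' User row.
    # List it after AuthenticationMiddleware in MIDDLEWARE_CLASSES.
    from django.contrib.auth import get_user_model

    class DefaultUserMiddleware(object):
        def process_request(self, request):
            if not request.user.is_authenticated():
                # Assumes a User with username 'anonymous' already exists in the DB.
                request.user = get_user_model().objects.get(username='anonymous')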
pandas.read_html not support decimal comma | 52,673,944 | 20 | 14 | 4,626 | 0 | python,pandas,decimal,xlm | This did not start working for me until I used both decimal=',' and thousands='.'
Pandas version: 0.23.4
So try to use both decimal and thousands:
i.e.:
pd.read_html(io="http://example.com", decimal=',', thousands='.')
Before, I would only use decimal=',' and the number columns would be saved as type str with the comma simply omitted (weird behaviour): for example 0,7 would become "07" and "1,9" would become "19".
The values are still saved in the dataframe as type str, but at least I don't have to manually put in the dots; the numbers are displayed correctly, e.g. 0,7 -> "0.7" | 0 | 0 | 0 | 0 | 2016-09-09T13:35:00.000 | 4 | 1 | false | 39,412,829 | 0 | 1 | 1 | 1 | I was reading an XML file using pandas.read_html and it works almost perfectly; the problem is that the file has commas as decimal separators instead of dots (dots being the default in read_html).
I could easily replace the commas with dots in one file, but I have almost 200 files with that configuration.
With pandas.read_csv you can define the decimal separator, but I don't know why in pandas.read_html you can only define the thousands separator.
Any guidance on this matter? Is there another way to automate the comma/dot replacement before the file is opened by pandas?
Thanks in advance!
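Building on the answer's observation that the parsed columns may still come back as strings, a hedged follow-up (the URL and the use of the first table are placeholders) is to coerce them afterwards:

    import pandas as pd

    # Parse with European separators, then convert the still-string columns to numbers.
    tables = pd.read_html("http://example.com/data.html", decimal=',', thousands='.')
    df = tables[0]
    df = df.apply(pd.to_numeric, errors='ignore')   # leaves genuinely non-numeric columns alone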
Scraping font-size from HTML and CSS | 39,420,644 | 0 | 0 | 556 | 0 | python,html,css,web-scraping | You can use selenium with Firefox, or with PhantomJS if you're on a headless machine; the browser will render the page, and then you can locate the element and get its attributes.
In Python the method to get attributes is self-explanatory: Element_obj.get_attribute('attribute_name') | 0 | 0 | 0 | 1 | 2016-09-09T21:46:00.000 | 1 | 1.2 | true | 39,420,152 | 0 | 0 | 1 | 1 | I am trying to scrape the font-size of each section of text in an HTML page. I have spent the past few days trying to do it, but I feel like I am trying to re-invent the wheel. I have looked at Python libraries like cssutils and beautiful-soup, but haven't had much luck sadly. I have made my own HTML parser that finds the font size inside the HTML only, but it doesn't look at stylesheets, which is really important. Any tips to get me headed in the right direction?
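Since font-size usually comes from a stylesheet rather than from an HTML attribute, the computed style is what matters; a small Selenium sketch (the URL and the CSS selector are placeholders, and value_of_css_property reads the browser's computed value):

    from selenium import webdriver

    driver = webdriver.Firefox()                 # or webdriver.PhantomJS() on a headless box
    driver.get("http://example.com")
    for el in driver.find_elements_by_css_selector("p, h1, h2, span"):
        print(el.text[:40], el.value_of_css_property("font-size"))
    driver.quit()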
How do you control user access to records in a key-value database? | 39,518,000 | 2 | 8 | 171 | 1 | python,database,authorization,key-value,nosql | Both of the solutions you described have some limitations.
You point out yourself that including the owner ID in the key does not solve the problem of shared data. However, this solution may be acceptable if you add another key/value pair containing the IDs of the contents shared with this user (key: userId:shared, value: [id1, id2, id3...]).
Your second proposal, in which you include the list of users who were granted access to a given content, is OK if and only if your application needs to make a query to retrieve the list of users who have access to a particular content. If your need is to list all contents a given user can access, this design will lead to poor performance, as the K/V store will have to scan all records, and this type of database engine usually doesn't allow you to create an index to optimise this kind of request.
From a more general point of view, with NoSQL databases and especially Key/Value stores, the model has to be defined according to the requests to be made by the application. It may lead you to duplicate some information. The application has the responsibility of maintaining the consistency of the data.
For example, if you need to get all contents for a given user, whether this user is the owner of the content or the contents were shared with him, I suggest you create a key for the user containing the list of content IDs for that user, as I already said. But if your app also needs to get the list of users allowed to access a given content, you should add their IDs in a field of this content. This would result in something like:
key: contentID, value: { ..., [userId1, userId2...] }
When you remove access to a given content for a user, your app (and not the datastore) has to remove the userId from the content value, and the contentId from the list of contents for this user.
This design may require your app to make multiple requests: for example, one to get the list of userIds allowed to access a given content, and one or more to get those user profiles. However, this should not really be a problem, as K/V stores usually have very high performance. | 0 | 0 | 0 | 0 | 2016-09-10T07:31:00.000 | 1 | 1.2 | true | 39,423,756 | 0 | 0 | 1 | 1 | I have a web application that accesses large amounts of JSON data.
I want to use a key value database for storing JSON data owned/shared by different users of the web application (not users of the database). Each user should only be able to access the records they own or share.
In a relational database, I would add a column Owner to the record table, or manage shared ownerships in a separate table, and check access on the application side (Python). For key value stores, two approaches come to mind.
User ID as part of the key
What if I use keys like USERID_RECORDID and then write code to check the USERID before accessing the record? Is that a good idea? It wouldn't work with records that are shared between users.
User ID as part of the value
I could store one or more USERIDs in the value data and check if the data contains the ID of the user trying to access the record. Performance is probably slower than having the user ID as part of the key, but shared ownerships are possible.
What are typical patterns to do what I am trying to do? |
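To make the layout discussed in the answer concrete, here is a tiny illustration with a plain Python dict standing in for the key/value store; the key naming is just the convention proposed above:

    # Toy model of the proposed K/V layout; a real store replaces this dict.
    kv = {
        "user:42:owned":  ["content:1", "content:7"],    # contents user 42 owns
        "user:42:shared": ["content:9"],                 # contents shared with user 42
        "content:9": {"json": "{...}", "allowed_users": ["42", "17"]},
    }

    def can_access(user_id, content_id):
        # The application, not the datastore, enforces access and keeps both sides in sync.
        return user_id in kv[content_id]["allowed_users"]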
The best Python/Django architecture for image heavy web application | 39,457,291 | 1 | 1 | 452 | 0 | python,django,image-processing,imagekit,photologue | If you use django-photologue, you can define a thumbnail size and specify that the thumbnail should not be generated at upload time - instead, it gets generated the first time the thumbnail is requested for display.
If you have lots of different sized thumbnails for a photo, this trick can help a user upload their photos faster.
Source: I maintain django-photologue. | 0 | 0 | 0 | 0 | 2016-09-11T22:09:00.000 | 2 | 1.2 | true | 39,441,109 | 0 | 0 | 1 | 1 | I am building a web application that allows users to upload images to their accounts - similar to flickr and 500px.
I want to know the best setup for such an application. I'm using Python 3.4 and Django 1.9
I'm currently thinking about the following:
Heroku
AWS S3
Postgres
I'm struggling to find a suitable image processing library. I've looked at ImageKit and Photologue. But I find Photologue to be a little bit heavy for what I want to do.
I'm basically looking for a way to allow users to upload images of a certain size without locking up the Heroku dynos. Any suggestions?
Thanks |
what's the difference between google.appengine.ext.ndb and gcloud.datastore? | 39,453,571 | 2 | 14 | 1,307 | 0 | google-app-engine,google-cloud-datastore,app-engine-ndb,google-app-engine-python | The reason for the two implementations is that originally the Datastore (called the App Engine Datastore) was only available from inside App Engine (through a private RPC API). In Python, the only way to access this API was through an ORM-like library (NDB). As you can see from the import, it is part of the App Engine API.
Now Google has made the Datastore available outside of App Engine through a RESTful API called the Cloud Datastore API. The gcloud library is a client library that allows access to different REST APIs from Google Cloud, including the Cloud Datastore API. | 0 | 1 | 0 | 0 | 2016-09-11T23:53:00.000 | 2 | 0.197375 | false | 39,441,764 | 0 | 0 | 1 | 1 | ndb: (from google.appengine.ext import ndb)
datastore: (from gcloud import datastore)
What's the difference? I've seen both of them used, and hints that they both save data to the Google Datastore. Why are there two different implementations?
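To illustrate the difference in flavour (an ORM-style model class versus a REST client), here is a rough side-by-side sketch; the kind, field and project names are made up, and the gcloud client constructor arguments have varied between library versions:

    # Inside App Engine, with the ORM-like NDB library:
    from google.appengine.ext import ndb

    class Greeting(ndb.Model):
        content = ndb.StringProperty()

    Greeting(content='hello').put()

    # From anywhere, through the Cloud Datastore REST API via the gcloud client:
    from gcloud import datastore

    client = datastore.Client(project='my-project')
    entity = datastore.Entity(key=client.key('Greeting'))
    entity['content'] = 'hello'
    client.put(entity)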
Where is the Folder for shared library in a django project | 39,454,692 | 0 | 0 | 628 | 0 | django,boost-python | Put them in $PROJECT_HOME/lib, just like a normal package. | 0 | 0 | 0 | 1 | 2016-09-12T09:44:00.000 | 1 | 0 | false | 39,447,513 | 0 | 0 | 1 | 1 | I am working on a python/django project which calls a C++ shared library. I am using the boost_python C++ library.
It works fine: I can call C++ methods from the Python interpreter. I can also call these methods from my django project. But I am wondering something: where is the best folder for my C++ shared library?
I actually put this binary shared library in the django app folder (the same folder as view.py). It works, but I think this is ugly... Is there a specific folder for shared libraries in the django directory structure?
Thanks |
How to print the \n character in Jinja2 | 39,458,104 | 5 | 4 | 12,017 | 0 | python,flask,jinja2 | I'll answer my own question; maybe it helps someone who has the same one.
This works: {{thestring.encode('string_escape')}} | 0 | 0 | 0 | 0 | 2016-09-12T19:38:00.000 | 2 | 1.2 | true | 39,457,587 | 1 | 0 | 1 | 1 | I have the following string in Python: thestring = "123\n456"
In my Jinja2 template, I use {{thestring}} and the output is:
123
456
The only way I can get Jinja2 to print the exact representation 123\n456 (including the \n) is by escaping thestring = "123\\n456".
Is there any other way this can be done directly in the template? |
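One alternative to encode('string_escape') (which is a Python 2-only codec) is a small custom filter registered on the Flask app; this is just a sketch of one possible approach, with the filter name made up:

    from flask import Flask

    app = Flask(__name__)

    @app.template_filter('show_newlines')
    def show_newlines(s):
        # Turn real newlines back into the two-character sequence \n for display.
        return s.replace('\n', '\\n')

    # In the template: {{ thestring|show_newlines }}  renders as  123\n456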
pelican make serve error with broken pipe? | 61,891,341 | 0 | 0 | 151 | 0 | python,python-2.7,ubuntu,makefile,server | I can report that:
I encountered the same problem with a python3 / pip3 installation (which is recommended now).
The problem was apparently with the permissions on Python. I simply had to run pelican --listen with superuser rights to make the local server work.
Also, be careful to reinstall with sudo any packages you might have installed without superuser rights, in order to have a fully working installation when running with sudo. | 0 | 1 | 0 | 0 | 2016-09-13T05:48:00.000 | 2 | 0 | false | 39,462,958 | 0 | 0 | 1 | 2 | I was trying to make a blog with pelican, and in the make serve step I got the errors below. From searching online it looks like a web issue (I'm not familiar with these at all) and I didn't see a clear solution. Could anyone shed some light on this? I was running on Ubuntu with Python 2.7. Thanks!
Python info:
Python 2.7.6 (default, Jun 22 2015, 17:58:13) [GCC 4.8.2] on linux2
Error info:
127.0.0.1 - - [13/Sep/2016 13:23:35] "GET / HTTP/1.1" 200 -
WARNING:root:Unable to find / file.
WARNING:root:Unable to find /.html file.
127.0.0.1 - - [13/Sep/2016 13:24:31] "GET / HTTP/1.1" 200 -
----------------------------------------
Exception happened during processing of request from ('127.0.0.1', 51036)
Traceback (most recent call last):
  File "/usr/lib/python2.7/SocketServer.py", line 295, in _handle_request_noblock
    self.process_request(request, client_address)
  File "/usr/lib/python2.7/SocketServer.py", line 321, in process_request
    self.finish_request(request, client_address)
  File "/usr/lib/python2.7/SocketServer.py", line 334, in finish_request
    self.RequestHandlerClass(request, client_address, self)
  File "/usr/lib/python2.7/SocketServer.py", line 651, in __init__
    self.finish()
  File "/usr/lib/python2.7/SocketServer.py", line 710, in finish
    self.wfile.close()
  File "/usr/lib/python2.7/socket.py", line 279, in close
    self.flush()
  File "/usr/lib/python2.7/socket.py", line 303, in flush
    self._sock.sendall(view[write_offset:write_offset+buffer_size])
error: [Errno 32] Broken pipe
pelican make serve error with broken pipe? | 39,462,999 | 0 | 0 | 151 | 0 | python,python-2.7,ubuntu,makefile,server | Well, I installed pip on Ubuntu and then it all worked.
Not sure if it is a version thing. | 0 | 1 | 0 | 0 | 2016-09-13T05:48:00.000 | 2 | 0 | false | 39,462,958 | 0 | 0 | 1 | 2 | I was trying to make a blog with pelican, and in the make serve step I got the errors below. From searching online it looks like a web issue (I'm not familiar with these at all) and I didn't see a clear solution. Could anyone shed some light on this? I was running on Ubuntu with Python 2.7. Thanks!
Python info:
Python 2.7.6 (default, Jun 22 2015, 17:58:13) [GCC 4.8.2] on linux2
Error info:
127.0.0.1 - - [13/Sep/2016 13:23:35] "GET / HTTP/1.1" 200 -
WARNING:root:Unable to find / file.
WARNING:root:Unable to find /.html file.
127.0.0.1 - - [13/Sep/2016 13:24:31] "GET / HTTP/1.1" 200 -
----------------------------------------
Exception happened during processing of request from ('127.0.0.1', 51036)
Traceback (most recent call last):
  File "/usr/lib/python2.7/SocketServer.py", line 295, in _handle_request_noblock
    self.process_request(request, client_address)
  File "/usr/lib/python2.7/SocketServer.py", line 321, in process_request
    self.finish_request(request, client_address)
  File "/usr/lib/python2.7/SocketServer.py", line 334, in finish_request
    self.RequestHandlerClass(request, client_address, self)
  File "/usr/lib/python2.7/SocketServer.py", line 651, in __init__
    self.finish()
  File "/usr/lib/python2.7/SocketServer.py", line 710, in finish
    self.wfile.close()
  File "/usr/lib/python2.7/socket.py", line 279, in close
    self.flush()
  File "/usr/lib/python2.7/socket.py", line 303, in flush
    self._sock.sendall(view[write_offset:write_offset+buffer_size])
error: [Errno 32] Broken pipe
Should a connection to Redis cluster be made on each Flask request? | 39,465,104 | 2 | 0 | 1,131 | 0 | python,flask,redis | This is about performance and scale. To get those 2 buzzwords buzzing you'll in fact need persistent connections.
Any eventual race conditions will be no different than with a reconnect on every request, so that shouldn't be a problem. Any race conditions will depend on how you're using Redis, but if it's just caching there's not much room for error.
I understand the desired statelessness of an API from the client-side point of view, but I'm not so sure what you mean about the server side.
I'd suggest you put the connections in the application context, not in sessions (those could become too numerous), whereas the app context gives you the optimal one connection per process (created immediately at startup). Scaling this way becomes easy-peasy: you'll never have to worry about hitting the max connection count on the Redis box (and the less multiplexing the better). | 0 | 0 | 0 | 0 | 2016-09-13T07:47:00.000 | 2 | 0.197375 | false | 39,464,748 | 0 | 0 | 1 | 1 | I have a Flask API; it connects to a Redis cluster for caching purposes. Should I be creating and tearing down a Redis connection on each Flask API call? Or should I try to maintain a connection across requests?
My argument against the second option is that I should really try to keep the API as stateless as possible, and I also don't know whether keeping something persistent across requests might cause thread race conditions or other side effects.
However, if I want to persist a connection, should it be saved on the session or on the application context? |
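A common pattern matching the advice above is a single module-level connection pool created at startup, so each request borrows a pooled connection instead of reconnecting; a minimal sketch with placeholder host, port and route:

    import redis
    from flask import Flask

    app = Flask(__name__)
    # One pool per process, created once at import time; redis-py's pool is
    # thread-safe, so no per-request setup/teardown is needed.
    redis_pool = redis.ConnectionPool(host='localhost', port=6379, db=0)

    @app.route('/cached/<key>')
    def cached(key):
        r = redis.StrictRedis(connection_pool=redis_pool)   # cheap: reuses pooled sockets
        return r.get(key) or 'miss'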
Celery - Single AMQP queue bound to multiple exchanges | 40,389,543 | 0 | 0 | 177 | 0 | python,django,rabbitmq,celery | CELERY_QUEUES is used only for "internal" celery communication with its workers, not for your custom queues in RabbitMQ that are independent of celery.
What are you trying to accomplish with two exchanges on the same queue? | 0 | 1 | 0 | 0 | 2016-09-13T10:08:00.000 | 1 | 0 | false | 39,467,399 | 0 | 0 | 1 | 1 | I have a RabbitMQ topology (set up independently of celery) with a queue that is bound to two exchanges with the same routing key. Now I want to set up a celery instance to post to the exchanges and another one to consume from the queue.
I have the following questions in the context of both the producer and the consumer:
Is the CELERY_QUEUES setting necessary in the first place if I specify only the exchange name and routing key in apply_async and the queue name while starting up the consumer? From my understanding of AMQP, this should be enough...
If it is necessary, I can only set one exchange per queue there. Does this mean that the other binding will not work(producer can't post to the other exchange, consumer can't receive messages routed through the other exchange)? Or, can I post and receive messages from the other exchange regardless of the binding in CELERY_QUEUES? |
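For reference, a hedged sketch of how such a binding is usually declared with kombu objects, and how a producer can still target the other exchange per call; the exchange, queue and routing-key names are placeholders, and whether CELERY_QUEUES is needed at all depends on the broker-side setup described in the question:

    # Celery settings sketch using kombu primitives.
    from kombu import Exchange, Queue

    CELERY_QUEUES = (
        Queue('shared_queue', Exchange('exchange_a', type='direct'), routing_key='my.key'),
    )

    # Producer side: route a task through the other exchange explicitly, e.g.
    # my_task.apply_async(args=(1, 2), exchange='exchange_b', routing_key='my.key')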
Git Blog - the pelican template disappears in the new deployed blog but exists in localhost | 39,476,013 | 1 | 1 | 55 | 0 | python,git,markdown,blogs,pelican | From your question, what I understood is that you are having a problem publishing a pelican site on GitHub. To my knowledge, below is the way to publish it. I don't know why you got the 404 error, though.
Step 1:
First you need to create a repository on GitHub. To create it, follow the steps below:
Go to github.com -> sign in -> select GitHub Pages (Project Pages) -> click '+' to create a new repository -> give the repository a name (e.g. Blog, Order System) -> check the 'Public' radio button -> check the 'Initialize with a README' checkbox -> click Create Repository.
Note: make sure you use a .gitignore file before committing files.
Step 2:
Once the repository is created you will be on the master branch.
Click on master -> create a gh-pages branch -> in the branches section set gh-pages as the default branch -> click on 'Code' in the menu bar and delete the master branch.
Now you need to download the README file to your local machine.
Copy the README file from the gh-pages branch -> go to the directory where all the files of your project are stored on your machine -> open a command prompt -> cd into that directory (e.g. here we have "order systems") -> then run:
order systems> git add .          (press Enter)
order systems> git commit -a -m "initialize"          (press Enter)
order systems> git push origin gh-pages          (press Enter)
It will ask you to enter your git credentials. Enter those and sign in.
Go to Settings; you can see the pages are published.
I hope this is helpful for you. | 0 | 0 | 0 | 0 | 2016-09-13T15:32:00.000 | 1 | 0.197375 | false | 39,473,867 | 0 | 0 | 1 | 1 | Sorry if I didn't express the question correctly - I am trying to set up a blog on Git using pelican, but I am new to both of them.
So I followed some websites and tried to release one page; however, when I ran make serve on my local drive the blog looked OK on localhost:8000.
But after pushing to Git, the template of the blog disappears and the webpage looks pretty ugly. Also, if I click on the "Read more" hyperlink, the page navigates to a 404 error.
Did I miss anything here? Many thanks if anyone could shed some light on this!
OperationalError: Can't connect to local MySQL server through socket | 39,475,119 | 1 | 0 | 295 | 1 | python,mysql,django | You can't install mysql through pip; it's a database, not a Python library (and it's currently in version 5.7). You need to install the binary package for your operating system. | 0 | 0 | 0 | 0 | 2016-09-13T16:29:00.000 | 1 | 0.197375 | false | 39,474,896 | 0 | 0 | 1 | 1 | I'm trying to run a server in python/django and I'm getting the following error:
django.db.utils.OperationalError: (2002, "Can't connect to local MySQL server through socket '/tmp/mysql.sock' (2)")
I have MySQL-python installed (version 1.2.5) and mysql installed (0.0.1), both via pip, so I'm not sure why I can't connect to the MySQL server. Does anyone know why? Thanks!
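If a MySQL server is in fact installed and running, another frequent cause of this exact message is Django trying to reach it through the Unix socket; forcing a TCP connection in settings.py is a quick check (the database name and credentials below are placeholders):

    # settings.py sketch: HOST = '127.0.0.1' makes the MySQL driver connect over TCP
    # instead of looking for /tmp/mysql.sock, which only exists for a running local server.
    DATABASES = {
        'default': {
            'ENGINE': 'django.db.backends.mysql',
            'NAME': 'mydb',
            'USER': 'myuser',
            'PASSWORD': 'secret',
            'HOST': '127.0.0.1',
            'PORT': '3306',
        }
    }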