Dataset column schema (name, type, observed min/max):

Column | Type | Min | Max
---|---|---|---
Question | string | 25 chars | 7.47k chars
Q_Score | int64 | 0 | 1.24k
Users Score | int64 | -10 | 494
Score | float64 | -1 | 1.2
Data Science and Machine Learning | int64 | 0 | 1
is_accepted | bool | 2 classes |
A_Id | int64 | 39.3k | 72.5M
Web Development | int64 | 0 | 1
ViewCount | int64 | 15 | 1.37M
Available Count | int64 | 1 | 9
System Administration and DevOps | int64 | 0 | 1
Networking and APIs | int64 | 0 | 1
Q_Id | int64 | 39.1k | 48M
Answer | string | 16 chars | 5.07k chars
Database and SQL | int64 | 1 | 1
GUI and Desktop Applications | int64 | 0 | 1
Python Basics and Environment | int64 | 0 | 1
Title | string | 15 chars | 148 chars
AnswerCount | int64 | 1 | 32
Tags | string | 6 chars | 90 chars
Other | int64 | 0 | 1
CreationDate | string | 23 chars | 23 chars
Which is more efficient? Is there a downside to using open() -> write() -> close() compared to using logger.info()?
PS. We are accumulating query logs for a university, so there's a chance that it becomes big data soon (the query logs run 3GB-9GB per day and the system will run 24/7 indefinitely). It would be appreciated if you could explain and differentiate in detail the time-efficiency and error-proneness aspects. | 3 | 4 | 1.2 | 0 | true | 36,819,569 | 0 | 1,554 | 2 | 0 | 0 | 36,819,540 | Use the method that more closely describes what you're trying to do. Are you making log entries? Use logger.*. If (and only if!) that becomes a performance issue, then change it. Until then it's an optimization that you don't know if you'll ever need.
Pros for logging:
It's semantic. When you see logging.info(...), you know you're writing a log message.
It's idiomatic. This is how you write Python logs.
It's efficient. Maybe not extremely efficient, but it's so thoroughly used that it has lots of nice optimizations (like not running string interpolation on log messages that won't be emitted because of loglevels, etc.).
Cons for logging:
It's not as much fun as inventing your own solution (which will invariably turn into an unfeatureful, poorly tested, less efficient version of logging).
Until you know that it's not efficient enough, I highly recommend you use it. Again, you can always replace it later if data proves that it's not sufficient. | 1 | 0 | 0 | Python logging vs. write to file | 2 | python,logging,file-writing,bigdata | 0 | 2016-04-24T05:15:00.000 |
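A minimal sketch of the logging setup recommended above; the file name, format, and level are assumptions, and a rotating handler would likely be needed at 3-9GB/day:

```
import logging

# configure once at startup
logging.basicConfig(
    filename="query.log",              # assumed log file name
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(message)s",
)
logger = logging.getLogger("querylog")

# lazy %-interpolation: the message string is only built if the record is emitted
logger.info("query from %s took %.3f ms", "10.0.0.5", 12.7)
```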
My database on Amazon currently has only a little data in it (I am making a web app but it is still in development) and I am looking to delete it, make changes to the schema, and put it back up again. The past few times I have done this, I have completely recreated my elasticbeanstalk app, but it seems like there is a better way. On my local machine, I will take the following steps:
"dropdb databasename" and then "createdb databasename"
python manage.py makemigrations
python manage.py migrate
Is there something like this that I can do on Amazon to delete my database and put it back online again without deleting the entire application? When I tried just deleting the RDS instance a while ago and making a new one, I was having problems with Elastic Beanstalk. | 1 | 4 | 1.2 | 0 | true | 36,820,728 | 1 | 5,860 | 1 | 0 | 0 | 36,820,171 | The easiest way to accomplish this is to SSH to one of your EC2 instances that has access to the RDS DB, and then connect to the DB from there. Make sure that your Python scripts can read your app configuration to access the configured DB, or add arguments for the DB hostname. To drop and recreate your DB, you just need to add the necessary connection arguments. For example:
$ createdb -h <RDS endpoint> -U <user> -W ebdb
You can also create a RDS snapshot when the DB is empty, and use the RDS instance actions Restore to Point in Time or Migrate Latest Snapshot. | 1 | 0 | 0 | How to drop table and recreate in amazon RDS with Elasticbeanstalk? | 1 | python,django,amazon-web-services,amazon-elastic-beanstalk,amazon-rds | 0 | 2016-04-24T06:51:00.000 |
There is a way to define MongoDB collection schema using mongoose in NodeJS. Mongoose verifies the schema at the time of running the queries.
I have been unable to find a similar thing for Motor in Python/Tornado. Is there a way to achieve a similar effect in Motor, or is there a package which can do that for me? | 2 | 2 | 1.2 | 0 | true | 36,842,258 | 0 | 1,449 | 1 | 1 | 0 | 36,841,121 | No there isn't. Motor is a MongoDB driver, it does basic operations but doesn't provide many conveniences. An Object Document Mapper (ODM) library like MongoTor, built on Motor, provides higher-level features like schema validation.
I don't vouch for MongoTor. Proceed with caution. Consider whether you really need an ODM: mongodb's raw data format is close enough to Python types that most applications don't need a layer between their code and the driver. | 1 | 0 | 0 | Is there a way to define a MongoDB schema using Motor? | 2 | python,mongodb,tornado-motor,motordriver | 0 | 2016-04-25T12:50:00.000 |
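Since Motor itself has no schema layer, validation has to live in application code. A hypothetical sketch (collection, field names, and the Tornado coroutine style are assumptions; only MotorClient and insert_one are part of Motor's actual API here):

```
import motor.motor_tornado
from tornado import gen

client = motor.motor_tornado.MotorClient("mongodb://localhost:27017")
db = client.mydb                      # assumed database name

REQUIRED = {"name", "email"}          # hypothetical "schema"

def validate(doc):
    missing = REQUIRED - set(doc)
    if missing:
        raise ValueError("missing fields: %s" % ", ".join(sorted(missing)))

@gen.coroutine
def save_user(doc):
    validate(doc)                     # enforced in app code, not by MongoDB
    result = yield db.users.insert_one(doc)
    raise gen.Return(result.inserted_id)
```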
When inserting rows via INSERT INTO tbl VALUES (...), (...), ...;, what is the maximum number of values I can use?
To clarify, PostgreSQL supports using VALUES to insert multiple rows at once. My question isn't how many columns I can insert, but rather how many rows of columns I can insert into a single VALUES clause. The table in question has only ~10 columns.
Can I insert 100K+ rows at a time using this format?
I am assembling my statements using SQLAlchemy Core / psycopg2 if that matters. | 7 | 5 | 0.761594 | 0 | false | 36,879,218 | 0 | 5,268 | 1 | 0 | 0 | 36,879,127 | As pointed out by Gordon, there doesn't appear to be a predefined limit on the number of value sets you can have in your statement. But you would want to keep this to a reasonable limit to avoid consuming too much memory at both the client and the server: the client has to build the whole string and the server has to parse it.
If you want to insert a large number of rows speedily, COPY FROM is what you are looking for. | 1 | 0 | 0 | What is the maximum number of VALUES that can be put in a PostgreSQL INSERT statement? | 1 | python,postgresql,sqlalchemy,psycopg2 | 0 | 2016-04-27T02:17:00.000 |
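A sketch of the COPY-based bulk load mentioned above, using psycopg2's copy_from; the DSN, table, and column names are hypothetical:

```
import io
import psycopg2

conn = psycopg2.connect("dbname=mydb user=me")        # assumed DSN
cur = conn.cursor()

rows = [(1, "a", 0.5), (2, "b", 1.25)]                # hypothetical data
buf = io.StringIO(u"".join(u"%d\t%s\t%s\n" % r for r in rows))

# COPY streams the data instead of parsing one giant INSERT ... VALUES
cur.copy_from(buf, "my_table", sep="\t", columns=("id", "label", "score"))
conn.commit()
```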
So basically I have this collection where objects are stored with a string parameter.
example:
{"string_": "MSWCHI20160501"}
The last part of that string is a date, so my question is this: Is there a way of writing a mongo query which will take that string, convert part of it into an IsoDate object and then filter objects by that IsoDate?
P.S.
I know I can do a migration but I wonder if I can achieve that without one. | 2 | 1 | 1.2 | 0 | true | 37,026,108 | 0 | 61 | 1 | 0 | 0 | 36,888,098 | Depending on the schema of your objects, you could hypothetically write an aggregation pipeline that would first transform the documents, then filter on the transformed value, and finally return those filtered results.
The main reason I would not recommend this way though is that, given a fairly large size for your dataset, the aggregation is going to fail because of memory problems.
And that is without mentioning the long execution time for this command. | 1 | 0 | 1 | Write a query in MongoDB's client pymongo that converts a part of the string to a date on the fly | 1 | python,mongodb,pymongo | 0 | 2016-04-27T11:11:00.000 |
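A hypothetical sketch of such a pipeline with pymongo, assuming the prefix before the date is always 6 characters and the date part is YYYYMMDD (in that case plain string comparison already sorts chronologically, so no real date conversion is needed):

```
pipeline = [
    # pull the 8-character date part out of the string
    {"$project": {"string_": 1, "date_part": {"$substr": ["$string_", 6, 8]}}},
    # lexicographic comparison works because YYYYMMDD sorts like a date
    {"$match": {"date_part": {"$gte": "20160101", "$lt": "20160601"}}},
]
for doc in collection.aggregate(pipeline):   # `collection` from your pymongo client
    print(doc)
```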
Using openpyxl, the charts inserted into my worksheet have a border on them. Is there any way to set the style of the chart (pie/bar) to either via the styles.Style/styles.borders module to have no border, or at least a thin white border so that they would print borderless?
The only option I see on the object is .style = <int>, which doesn't seem to actually affect the design of the final graphic. | 1 | 0 | 0 | 0 | false | 36,988,605 | 0 | 1,846 | 1 | 0 | 0 | 36,988,384 | This isn't easy but should be possible. You will need to work through the XML source of a suitably formatted sample chart and see which particular variables need setting or changing. openpyxl implements the complete chart API, but this is unfortunately very complicated. | 1 | 0 | 0 | openpyxl - Ability to remove border from charts? | 2 | python,python-2.7,charts,openpyxl | 0 | 2016-05-02T17:42:00.000 |
When I am trying to create a database on the server it shows an error:
Error Code: 1006. Can't create database 'mmmm' (errno: 2)
How can I solve this error?
The server is mysql. | 0 | 0 | 0 | 0 | false | 37,046,039 | 0 | 1,074 | 1 | 0 | 0 | 36,995,801 | Likely you do not have permission to create a database. | 1 | 0 | 0 | Error Code: 1006. Can't create database 'mmmm' (errno: 2) | 1 | mysql,mysql-workbench,mysql-python | 0 | 2016-05-03T04:48:00.000 |
I'm using Google Analytics API to make a Python program.
For now it's capable of making specific queries, but...
Is it possible to obtain a large JSON with all the data in a Google Analytics account?
I've been searching and haven't found any answer.
Someone know if it's possible and how? | 1 | 0 | 0 | 0 | false | 37,029,398 | 0 | 488 | 1 | 0 | 1 | 36,998,698 | Google Analytics stores a ton (technical term) of data; there are a lot of metrics and dimensions, and some of them (such as the users metric) have to be calculated specifically for every query. It's easy to underestimate the flexibility of Google Analytics, but the fact that it's easy to apply a carefully defined segment to three-year old data in real time means that the data will be stored in a horrendously complicated format, which is kept away from you for proprietary purposes.
So the data set would be vast, and incomprehensible. On top of that, there would be serious ramifications with regard to privacy, because of the way that Google stores the data (an issue which they can circumvent so long as you can only access the data through their protocols).
Short answer, you can take as much data as you can accurately describe and ask for, but there's no 'download all' button. | 1 | 0 | 0 | Query all data of a Google Analytcs account | 2 | python,google-analytics,google-api,google-analytics-api | 0 | 2016-05-03T07:59:00.000 |
I want to install MySQL-python-1.2.5 on an old Linux (Debian) embedded system.
Python is already installed together with a working MariaDB/MySQL database.
Problems:
1) the system is managed remotely and has no direct internet access;
2) bandwidth is extremely limited, so I would avoid installing further apps/libraries if possible;
3) gcc and mysql_config not installed.
My question: is there any way to get the MySQL-python package already compiled and ready to be imported by a Python script (ideally as a single file), without going through a painful local upgrade?
My dream: prepare the working package/file on my local PC and then transfer it using SCP.
Note: the remote system and my working pc are compatible and I don't need any special toolchain. | 0 | 3 | 1.2 | 0 | true | 37,024,496 | 0 | 570 | 1 | 0 | 0 | 37,022,937 | have you tried the .deb files from here packages.debian.org/wheezy/python-mysqldb | 1 | 0 | 0 | How to install MySQL for python WITHOUT internet access | 1 | python,mysql | 0 | 2016-05-04T08:56:00.000 |
I am working on a Django project with another developer. I had initially created a model which I had migrated and was synced correctly with a MySQL database.
The other developer had later pulled the code I had written so far from the repository and added some additional fields to my model.
When I pulled his changes through to my local machine the model had his changes, and additionally a second migration file had been pulled.
So I then executed the migration commands:
python manage.py makemigrations myapp, then python manage.py migrate in order to update my database schema. The response was that no changes had been made.
I tried removing the migration folder in my app and running the commands again. A new migrations folder had been generated and again my database schema had not been updated.
Is there something I am missing here? I thought that any changes to model can simply be migrated to alter the database schema.
Any help would be greatly appreciated. (Using Django version 1.9). | 0 | 0 | 0 | 0 | false | 56,626,356 | 0 | 881 | 1 | 0 | 0 | 37,044,634 | After you pull, do not delete the migrations file or folder. Simply do python manage.py migrate. If even after this there is no change in the database schema, then open the migration file which came through the git pull and remove the migration code for the model whose table is not being created in the database. Then run makemigrations and migrate again. I had this same problem. This worked for me.
As the title suggests, I'm trying to use a environment variable in a config file for a Flask project (in windows 10).
I'm using a virtual env and this far i have tried to add set "DATABASE_URL=sqlite:///models.db" to /Scripts/activate.bat in the virtualenv folder.
But it does not seem to work. Any suggestions? | 4 | 0 | 0 | 0 | false | 37,236,689 | 1 | 2,106 | 1 | 0 | 0 | 37,046,677 | The problem was that PyCharm does not activate the virtualenvironment when pressing the run button. It only uses the virtualenv python.exe. | 1 | 0 | 0 | Setting environment variables in virtualenv (Python, Windows) | 2 | python,windows,flask,pycharm,virtualenv | 0 | 2016-05-05T09:10:00.000 |
Afternoon,
I have a really simple Python script in which the user is asked to input a share purchase price; the script looks up the current price and returns whether the user is up or down.
Currently the input and text output are done in the CMD prompt, which is not ideal. I would love to have in Excel a box for inputting the purchase price, a button to press, and then a cell in which the output is printed.
Are there any straightforward ways to put the Python code in the button code where you would normally have VBA? Or alternative hacks?
Thanks in advance | 0 | 0 | 0 | 0 | false | 37,052,827 | 0 | 890 | 1 | 0 | 0 | 37,052,571 | For windows, the win32com package will allow you to control excel from a python script. It's not quite the same as embedding the code, but it will allow you to read and write from the spreadsheet. | 1 | 0 | 1 | Python script input and output in excel | 3 | python,excel,vba | 0 | 2016-05-05T13:58:00.000 |
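A rough sketch of the win32com approach the answer describes; the file path, cell positions, and lookup_price() helper are all hypothetical placeholders:

```
import win32com.client

excel = win32com.client.Dispatch("Excel.Application")
excel.Visible = True
wb = excel.Workbooks.Open(r"C:\data\shares.xlsx")      # assumed workbook

ws = wb.Worksheets(1)
purchase_price = float(ws.Cells(2, 1).Value)           # assumed input cell A2
current_price = lookup_price()                          # hypothetical: your existing lookup

ws.Cells(2, 2).Value = "up" if current_price > purchase_price else "down"
wb.Save()
```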
I have downloaded a PG database backup from my Heroku App, it's in my repository folder as latest.dump
I have installed postgres locally, but I can't use pg_restore on the windows command line, I need to run this command:
pg_restore --verbose --clean --no-acl --no-owner -j 2 -h localhost -d DBNAME latest.dump
But the command is not found! | 4 | 5 | 0.761594 | 0 | false | 37,104,332 | 1 | 10,221 | 1 | 1 | 0 | 37,104,193 | Since you're on windows, you probably just don't have pg_restore on your path.
You can find pg_restore in the bin of your postgresql installation e.g. c:\program files\PostgreSQL\9.5\bin.
You can navigate to the correct location or simply add the location to your path so you won't need to navigate always. | 1 | 0 | 0 | How to use pg_restore on Windows Command Line? | 1 | python,django,database,postgresql,heroku | 0 | 2016-05-08T19:55:00.000 |
I have to compare data from two MySQL databases; I want to compare the two MySQL schemas and find out the differences between them.
I have created two variables, Old_Release_DB and New_Release_DB. In Old_Release_DB I have stored the old release schema. Then, after some modifications (I deleted some columns, added some columns, renamed some columns, and changed column properties such as increasing a datatype size, e.g. varchar(10) to varchar(50)), it became the new release schema, which I have stored in New_Release_DB.
Now I want the table name and the list of columns which have changed in New_Release_DB, along with the change for each column.
Example,
Table_A Column_Name Add(if it is added),
Table_A Column_Name Delete(if it is deleted),
Table_A Column_Name Change(if its property has changed)
I am trying it in Shell script in Linux, But I am not getting it. Please let me know If I can use other script like python or java. | 1 | 0 | 0 | 0 | false | 39,398,778 | 0 | 3,461 | 2 | 0 | 0 | 37,109,762 | I use mysql Workbench which has the schema synchronization utility. Very handy when trying to apply changes from development server to a production server. | 1 | 0 | 0 | How to compare two MySql database | 3 | javascript,python,mysql,linux,shell | 0 | 2016-05-09T07:15:00.000 |
I have to compare data from two MySQL databases; I want to compare the two MySQL schemas and find out the differences between them.
I have created two variables, Old_Release_DB and New_Release_DB. In Old_Release_DB I have stored the old release schema. Then, after some modifications (I deleted some columns, added some columns, renamed some columns, and changed column properties such as increasing a datatype size, e.g. varchar(10) to varchar(50)), it became the new release schema, which I have stored in New_Release_DB.
Now I want the table name and the list of columns which have changed in New_Release_DB, along with the change for each column.
Example,
Table_A Column_Name Add(if it is added),
Table_A Column_Name Delete(if it is deleted),
Table_A Column_Name Change(if its property has changed)
I am trying it in Shell script in Linux, But I am not getting it. Please let me know If I can use other script like python or java. | 1 | 0 | 0 | 0 | false | 37,110,095 | 0 | 3,461 | 2 | 0 | 0 | 37,109,762 | You can compare two databases by creating database dumps:
mysqldump -u your-database-user your-database-name > database-dump-file.sql - if you're using a password to connect to a database, also add -p option to a mysqldump command.
And then compare them with diff:
diff new-database-dump-file.sql old-database-dump-file.sql
Optionally, you can save the results of diff execution to a file with STDOUT redirecting by adding > databases_diff to a previous command.
However, that kind of comparison would require some eye work - you will get literally a difference between two files. | 1 | 0 | 0 | How to compare two MySql database | 3 | javascript,python,mysql,linux,shell | 0 | 2016-05-09T07:15:00.000 |
I'm working on a project which involves a huge external dataset (~490Gb) loaded in an external database (MS SQL through django-pyodbc-azure). I've generated the Django models marked managed=False in their meta. In my application this works fine, but I can't seem to figure out how to run my unit tests. I can think of two approaches: mocking the data in a test database, and giving the unit tests (and CI) read-only access to the production dataset. Both options are acceptable, but I can't figure out either of them:
Option 1: Mocked data
Because my models are marked managed=False, there are no migrations, and as a result, the test runner fails to create the database.
Option 2: Live data
django-pyodbc-azure will attempt to create a test database, which fails because it has a read-only connection. Also I suspect that even if it were allowed to do so, the resulting database would be missing the required tables.
Q How can I run my unittests? Installing additional packages, or reconfiguring the database is acceptable. My setup uses django 1.9 with postgresql for the main DB. | 1 | 2 | 0.379949 | 0 | false | 37,130,919 | 1 | 299 | 1 | 0 | 0 | 37,115,070 | After a day of staring at my screen, I found a solution:
I removed managed = False from the models, and generated migrations. To prevent actual migrations from running against the production database, I used my database router to block them (returning False from allow_migrate for the appropriate app and database).
In my settings I detect whether unittests are being run, and then just don't define the database router or the external database. With the migrations present, the unit tests run. | 1 | 0 | 0 | Unit tests with an unmanaged external read-only database | 1 | python,django,unit-testing | 0 | 2016-05-09T11:50:00.000 |
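A hypothetical router along the lines the answer describes (the app label and database alias are assumptions):

```
class ExternalRouter(object):
    """Route the external app's models to the 'external' DB and block
    migrations against it, so makemigrations/migrate never touch it."""

    app_label = "externaldata"            # assumed app name

    def db_for_read(self, model, **hints):
        if model._meta.app_label == self.app_label:
            return "external"
        return None

    def allow_migrate(self, db, app_label, model_name=None, **hints):
        if app_label == self.app_label and db == "external":
            return False                  # never migrate the real external DB
        return None
```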
Just set up an IPython Notebook on Ubuntu 16.04 but I can't use %load_ext sql.
I get: ImportError: No module named sql
I've tried using pip and pip3 with and without sudo to install ipython-sql. All 4 times it installed without issue but nothing changes on the notebook.
Thanks in advance! | 12 | 1 | 0.049958 | 0 | false | 70,772,212 | 0 | 19,909 | 3 | 0 | 0 | 37,149,748 | I know this answer will be (very) late to contribute to the discussion but maybe it will help someone. I found out what worked for me by following Thomas, who commented above. However, with a bit of a caveat, that I was using pyenv to setup and manage python on my local machine.
So when running sys.executable in a jupyter notebook cell I found out my python path was /usr/local/Cellar/jupyterlab/3.2.8/libexec/bin/python3.9, while I expected it to be somewhere along the lines of '/Users/<USER_NAME>/.pyenv/versions/3.9.2/bin/python'.
This error was attributed to me having installed jupyter through command brew install jupyter instead of pyenv exec pip install jupyter. I proceeded to uninstall jupyter with brew and then executing the second command, which now got jupyter up and running!
(note that you would first have to have pyenv setup properly). | 1 | 0 | 1 | IPython Notebook and SQL: 'ImportError: No module named sql' when running '%load_ext sql' | 4 | python,pip,ipython,ipython-sql | 0 | 2016-05-10T22:04:00.000 |
Just set up an IPython Notebook on Ubuntu 16.04 but I can't use %load_ext sql.
I get: ImportError: No module named sql
I've tried using pip and pip3 with and without sudo to install ipython-sql. All 4 times it installed without issue but nothing changes on the notebook.
Thanks in advance! | 12 | 0 | 0 | 0 | false | 54,436,119 | 0 | 19,909 | 3 | 0 | 0 | 37,149,748 | I doubt you're using different IPython Notebook kernel other than which you've installed ipython-sql in.
IPython Notebook can have more than one kernel. If it is the case, make sure you're in the right place first. | 1 | 0 | 1 | IPython Notebook and SQL: 'ImportError: No module named sql' when running '%load_ext sql' | 4 | python,pip,ipython,ipython-sql | 0 | 2016-05-10T22:04:00.000 |
Just set up an IPython Notebook on Ubuntu 16.04 but I can't use %load_ext sql.
I get: ImportError: No module named sql
I've tried using pip and pip3 with and without sudo to install ipython-sql. All 4 times it installed without issue but nothing changes on the notebook.
Thanks in advance! | 12 | 5 | 0.244919 | 0 | false | 43,972,590 | 0 | 19,909 | 3 | 0 | 0 | 37,149,748 | I know it's been a long time, but I faced the same issue, and Thomas' advice solved my problem. Just outlining what I did here.
When I ran sys.executable in the notebook I saw /usr/bin/python2, while the pip I used to install the package was /usr/local/bin/pip (to find out what pip you are using, just do which pip or sudo which pip if you are installing packages system-wide). So I reinstalled ipython-sql using the following command, and everything worked out just fine.
sudo -H /usr/bin/python2 -m pip install ipython-sql
This is odd since I always install my packages using pip. I'm wondering maybe there's something special about the magic functions in Jupyter. | 1 | 0 | 1 | IPython Notebook and SQL: 'ImportError: No module named sql' when running '%load_ext sql' | 4 | python,pip,ipython,ipython-sql | 0 | 2016-05-10T22:04:00.000 |
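A quick way to check for (and fix) this kind of interpreter/pip mismatch from inside the notebook itself (the commented `!` line is IPython cell syntax, not plain Python):

```
import sys

print(sys.executable)   # the interpreter the notebook kernel is actually using

# In a notebook cell, install into exactly that interpreter:
# !{sys.executable} -m pip install ipython-sql
```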
I'm on Odoo 9. I have an issue when launching the Odoo server with $odoo.py -r odoo -w password: localhost:8069 doesn't load and I get an error in the terminal, "Peer authentication failed for user "odoo"".
I have already created a user "odoo" in Postgres.
When launching $odoo.py I can load the Odoo page in the browser, but I can't create a database (as the default user).
It was working and I had already created databases, but when I logged out I couldn't connect to my database account anymore.
Any ideas ? | 4 | 5 | 0.321513 | 0 | false | 37,199,710 | 1 | 23,159 | 1 | 0 | 0 | 37,193,143 | This helped me.
sudo nano /etc/postgresql/9.3/main/pg_hba.conf
then add
local all odoo trust
then restart postgres
sudo service postgresql restart | 1 | 0 | 0 | Peer authentication failed for user "odoo" | 3 | python,postgresql,openerp,odoo-9 | 0 | 2016-05-12T16:57:00.000 |
I'm trying to use a PHP site button to kick off a python script on my server. When I run it, everything seems fine, on the server I can "ps ax" and see that the script is running.
The Python script attempts to process some files and write the results to a MySQL database. When I ultimately check to see that the changes were made to the DB, nothing has happened. Also, redirecting output shows no errors.
I have checked to make sure that it's executing (the ps ax)
I've made sure that all users have access to writing to the output directory (for saving the error report, if there is one)
I've made sure that the logon to MySql is correct...
I'm not sure what else to do or check. | 0 | 0 | 0 | 0 | false | 37,213,822 | 0 | 40 | 1 | 0 | 0 | 37,213,568 | You can check in your web-server logs.(/var/www/log/apache2/error.log) if you have apache as your webserver.. | 1 | 0 | 0 | Running a python (with MySQL) script from PHP | 1 | php,python,mysql | 1 | 2016-05-13T15:10:00.000 |
I just created a script with openpyxl to update an xlsx file that we usually update manually every month.
It works fine, but the new file lost all the graphs and images that were in the workbook. Is there a way to keep them? | 5 | 5 | 1.2 | 0 | true | 37,215,619 | 0 | 3,060 | 1 | 0 | 0 | 37,214,983 | openpyxl version 2.5 will preserve charts in existing files.
I am trying to get a project from one machine to another. This project contains a massive log db table. It is too massive. So I exported and imported all db tables except this one via phpmyadmin.
Now if I run the migrate command I expect Django to create everything that is missing, but it does not.
How to make django check for and create missing db tables?
What am I missing, why is it not doing this? I feel like the old syncdb did the job. But the new --run-syncdb does not.
Thank you for your help. | 1 | 1 | 0.197375 | 0 | false | 37,232,610 | 1 | 2,300 | 1 | 0 | 0 | 37,231,032 | When you exported the data and re-imported it on the other database part of that package would have included the django_migrations table. This is basically a log of all the migrations successfully executed by django.
Since you have left out only the log table according to you, that should really be the only table that's missing from your schema. Find the entries in django_migrations that correspond to this table and delete them. Then run ./manage.py migrate again and the table will be created. | 1 | 0 | 0 | How to create missing DB tables in django? | 1 | python,django,django-migrations,django-database,django-1.9 | 0 | 2016-05-14T19:35:00.000 |
I am retrieving structured numerical data (float 2-3 decimal spaces) via http requests from a server. The data comes in as sets of numbers which are then converted into an array/list. I want to then store each set of data locally on my computer so that I can further operate on it.
Since there are very many of these data sets which need to be collected, simply writing each data set that comes in to a .txt file does not seem very efficient. On the other hand, I am aware that there are various solutions such as MongoDB, Python-to-SQL interfaces, etc., but I'm unsure which one I should use and which would be the most appropriate and efficient for this scenario.
Also the database that is created must be able to interface and be queried from different languages such as MATLAB. | 1 | 1 | 0.066568 | 1 | false | 37,246,905 | 0 | 989 | 1 | 0 | 0 | 37,246,342 | Have you considered HDF5? It's very efficient for numerical data, and is supported by both Python and Matlab. | 1 | 0 | 1 | Python, computationally efficient data storage methods | 3 | python,sql,arrays,mongodb,database | 0 | 2016-05-16T03:38:00.000 |
I'm creating a little game of my own, and I need to decide where to store the player's inventory: in the database as JSON (a text field) or directly in database tables. Which method consumes less RAM, and which is faster overall?
The game server will be written in Python. | 1 | 1 | 1.2 | 0 | true | 37,282,081 | 0 | 285 | 2 | 0 | 0 | 37,281,218 | For your requirements I would prefer a NoSQL database, e.g. Mongo, with Redis for in-memory data storage; that gives you more flexibility and performance. It is object-based, which helps you fetch data faster. | 1 | 0 | 1 | What is better - store player's inventory in JSON(text field in database) or database directly? | 2 | python,mysql,json,database | 0 | 2016-05-17T16:01:00.000 |
I'm creating a little game of my own, and I need to decide where to store the player's inventory: in the database as JSON (a text field) or directly in database tables. Which method consumes less RAM, and which is faster overall?
Game server will be written on Python | 1 | 1 | 0.099668 | 0 | false | 37,284,304 | 0 | 285 | 2 | 0 | 0 | 37,281,218 | It's probably better to use database tables directly. That way you can take advantage of other database features such as foreign keys, unique constraints, triggers, and so on. | 1 | 0 | 1 | What is better - store player's inventory in JSON(text field in database) or database directly? | 2 | python,mysql,json,database | 0 | 2016-05-17T16:01:00.000 |
I want to use pymssql on a 24/7 Linux production app and am worried about stability. As soon as I hear ODBC I start to have reservations, especially on Linux.
Does pymssql use ODBC or is it straight to FreeTDS? | 0 | 0 | 0 | 0 | false | 37,366,489 | 0 | 184 | 1 | 1 | 0 | 37,357,318 | No, pymssql does not use ODBC; it talks to SQL Server through FreeTDS directly. | 1 | 0 | 0 | Does Linux pymssql use ODBC? | 1 | python,pymssql | 0 | 2016-05-20T23:34:00.000 |
I was running Mongo 2.4 with a replicaset which contains 1 primary, 1 secondary and an arbiter. We were using pymongo 2.6 and mongoengine 0.8.2.
Recently, we performed an upgrade to Mongo 2.6, and also upgraded pymongo to 2.7.2 and mongoengine to 0.8.7.
This setup worked fine for almost 12 hours, after which we started getting the below error :
[initandlisten] connection refused because too many open connections: 819
The ulimit is 1024 which worked perfectly with Mongo 2.4 and hence we have not increased it. Increasing it might solve the problem temporarily but we will hit it again in the future. Somehow the root cause seems to be the inability of Pymongo to close the connections. Any pointers why this happens?
P.S : When using Mongo 2.4 with pymongo 2.7.2 and mongoengine 0.8.7, everything works well and the max connections are about 250. Only with Mongo 2.6, pymongo 2.7.2 and mongoengine 0.8.7, the number of connections shoot up to 819. | 0 | 1 | 0.197375 | 0 | false | 37,915,282 | 0 | 586 | 1 | 0 | 0 | 37,383,545 | So after a lot of struggle and a lot of load testing, we solved the problem by upgrading PyMongo to 2.8.1. PyMongo 2.7.2 is the first version to support MongoDB 2.6 and it sure does have some problems handling connections. Upgrading to PyMongo 2.8.1 helped us resolve the issue. With the same load, the connections do not exceed 250-300. | 1 | 0 | 0 | Too many open connections with mongo 2.6 + pymongo 2.7.2 | 1 | python,mongodb,database-connection,pymongo,mongoengine | 0 | 2016-05-23T06:01:00.000 |
I have three programs running, one of which iterates over a table in my database non-stop (over and over again in a loop), just reading from it, using a SELECT statement.
The other programs have a line where they insert a row into the table and a line where they delete it. The problem is, that I often get an error sqlite3.OperationalError: database is locked.
I'm trying to find a solution, but I don't understand the exact source of the problem (is reading and writing at the same time what makes this error occur? Or the writing and deleting? Maybe neither combination is supposed to work).
Either way, I'm looking for a solution. If it were a single program, I could match the database I/O with mutexes and other multithreading tools, but it's not. How can I wait until the database is unlocked for reading/writing/deleting without using too much CPU? | 0 | 0 | 0 | 0 | false | 37,414,407 | 0 | 369 | 1 | 0 | 0 | 37,413,919 | you need to switch databases..
I would use the following:
postgresql as my database
psycopg2 as the driver
the syntax is fairly similar to SQLite and the migration shouldn't be too hard for you | 1 | 0 | 0 | Lock and unlock database access - database is locked | 1 | python,sqlite | 0 | 2016-05-24T12:41:00.000 |
I am getting a UnicodeDecodeError in my Python script and I know that the unique character is not in Latin (or English), and I know what row it is in (there are thousands of columns). How do I go through my SQL code to find this unique character/these unique characters? | 0 | 1 | 1.2 | 0 | true | 37,892,058 | 0 | 30 | 1 | 0 | 0 | 37,425,230 | Do a binary search. Break the files (or the scripts, or whatever) in half, and process both files. One will (should) fail, and the other shouldn't. If they both have errors, doesn't matter, just pick one.
Continue splitting the broken files until you've narrowed it down to something more manageable that you can probe into, even down to the actual line that's failing. | 1 | 0 | 0 | How do I find unique, non-English characters in a SQL script that has a lot of tables, scripts, etc. related to it? | 1 | python,sql | 0 | 2016-05-24T22:57:00.000 |
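As a complement to the binary search, a small script (file name assumed) can point straight at the offending characters:

```
import io

def find_non_ascii(path):
    # read permissively so the scan itself never raises UnicodeDecodeError
    with io.open(path, "r", encoding="utf-8", errors="replace") as f:
        for lineno, line in enumerate(f, 1):
            for col, ch in enumerate(line, 1):
                if ord(ch) > 127:
                    print("line %d, column %d: %r" % (lineno, col, ch))

find_non_ascii("script.sql")   # hypothetical file name
```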
Is it possible [in any way, even a poorly hacked solution] to share an in-memory database between many processes? My application has one process that opens the in-memory database and the others are running only SELECT queries on the database.
NOTE: I need solution only for python 2.7, and btw if it matters the module I use for making new processes is multiprocessing. | 3 | 3 | 0.53705 | 0 | false | 39,020,351 | 0 | 850 | 1 | 0 | 0 | 37,434,949 | On Linux you can just use /dev/shm as the file location of your sqlite.
This is a memory-backed filesystem (tmpfs) suited exactly for that. | 1 | 0 | 0 | Share in-memory database between processes sqlite | 1 | python,database,sqlite,multiprocessing | 0 | 2016-05-25T10:52:00.000 |
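A minimal sketch of that idea: every process opens the same tmpfs-backed file instead of ':memory:' (the file name and table are hypothetical):

```
import sqlite3

conn = sqlite3.connect("/dev/shm/shared.db")        # lives in RAM, visible to all processes
cur = conn.execute("SELECT COUNT(*) FROM items")    # hypothetical table
print(cur.fetchone())
```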
I have a Python script which user's drag and drop KML files into for easy use. It takes the dropped file as sys.arg[1]. When entered into the command line as myScript.py Location.kml everything works fine. But when I drag and drop the file in an error is thrown saying no module named xlsxwriter. xlsxwriter is in the same folder as my Python script in another folder named Packages. Why does it work for the command line but not when dragged and dropped? Is there some trick I am missing? | 0 | 0 | 0 | 0 | false | 37,449,798 | 0 | 290 | 1 | 0 | 0 | 37,449,333 | Thanks to erkysun this issue was solved! eryksun's solution worked perfectly and I found another reason it wasn't working. This was because when I dragged and dropped the file into the python script then ran os.getcwd() no matter where the file was it returned C:\WINDOWS\system32. To counteract this wherever I had os.getcwd() I changed it to os.path.abspath(os.path.dirname(__file__)) and then it worked! | 1 | 0 | 0 | Module not found when drag and drop Python file | 1 | python,windows,drag-and-drop,xlsxwriter | 0 | 2016-05-25T23:41:00.000 |
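A sketch of the fix described above, adding the local Packages folder to sys.path relative to the script itself (the folder layout is assumed from the question):

```
import os
import sys

HERE = os.path.abspath(os.path.dirname(__file__))
sys.path.insert(0, os.path.join(HERE, "Packages"))   # folder next to the script

import xlsxwriter  # noqa: E402

kml_path = sys.argv[1]   # the dropped file, regardless of the working directory
```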
I have a results-analysing spreadsheet where I need to enter my raw data into an 8x6-cell 'plate' which has formatted cells to produce the output based on a graph created. This .xlsx file is heavily formatted with formulas for the analysis, and it is a commercial spreadsheet so I cannot replicate these formulas.
I am using python 2.7 to obtain the raw results into a list and I have tried using xlwt and xlutils to copy the spreadsheet to enter the results. When I do this it loses all formatting when I save the file. I am wondering whether there is a different way in which I can make a copy of the spreadsheet to enter my results.
Also when I have used xlutils.copy I can only save the file as a .xls file, not an xlsx, is this the reason why it loses formatting? | 2 | 0 | 0 | 0 | false | 50,905,858 | 0 | 2,214 | 1 | 0 | 0 | 37,455,466 | While Destrif is correct, xlutils uses xlwt which doesn't support the .xlsx file format.
However, you will also find that xlsxwriter is unable to write xlrd-formatted objects.
Similarly, the python-excel-cookbook he recommends only works if you are running Windows and have excel installed. A better alternative for this would be xlwings as it works for Windows and Mac with Excel installed.
If you are looking for something more platform agnostic, you could try writing your xlsx file using openpyxl. | 1 | 0 | 0 | How to enter values to an .xlsx file and keep formatting of cells | 2 | python,excel,formatting,xlwt,xlutils | 0 | 2016-05-26T08:28:00.000 |
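A hypothetical openpyxl sketch for the original 8x6 'plate' scenario (file, sheet, and start cell are assumptions); openpyxl generally keeps formulas and cell styles, though some features such as charts were only preserved in later releases:

```
from openpyxl import load_workbook

wb = load_workbook("analysis.xlsx")            # the formatted commercial template
ws = wb["Plate"]                               # assumed sheet name
results = [[0.12, 0.34, 0.56],                 # hypothetical raw readings
           [0.78, 0.90, 0.11]]

for r, row in enumerate(results, start=2):     # assumed: raw data block starts at B2
    for c, value in enumerate(row, start=2):
        ws.cell(row=r, column=c, value=value)

wb.save("analysis_filled.xlsx")
```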
About MySQLdb.
I know that many processes should not share the same connection, because that is normally a problem.
BUT when I run the code below, the MySQL requests block: many processes start to query at the same time (the SQL is "select sleep(10)"), yet they execute one by one.
I did not find any code about a lock/mutex in MySQLdb/mysql.c, so why is there no problem? I would expect problems with sharing the same connection fd in general, but in my test the I/O only blocks and no errors arise. Where is the lock?
import time
import multiprocessing
import MySQLdb
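# NOTE: this connection is created once in the parent process; every child
# process started below inherits it after fork, so all workers share one socket.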
db = MySQLdb.connect("127.0.0.1","root","123123","rui" )
def func(*args):
while 1:
cursor = db.cursor()
cursor.execute("select sleep(10)")
data = cursor.fetchall()
print len(data)
cursor.close()
print time.time()
time.sleep(0.1)
if __name__ == "__main__":
task = []
for i in range(20):
p = multiprocessing.Process(target = func, args = (i,))
p.start()
task.append(p)
for i in task:
i.join()
result log, we found each request interval is ten seconds.
1
1464325514.82
1
1464325524.83
1
1464325534.83
1
1464325544.83
1
1464325554.83
1
1464325564.83
1
1464325574.83
1
1464325584.83
1
1464325594.83
1
1464325604.83
1
1464325614.83
1
1464325624.83
tcpdump log:
we found each request interval is ten seconds too.
13:07:04.827304 IP localhost.44281 > localhost.mysql: Flags [.], ack 525510411, win 513, options [nop,nop,TS val 2590846552 ecr 2590846552], length 0
0x0000: 4508 0034 23ad 4000 4006 190d 7f00 0001 E..4#.@.@.......
0x0010: 7f00 0001 acf9 0cea fc09 7cf9 1f52 a70b ..........|..R..
0x0020: 8010 0201 ebe9 0000 0101 080a 9a6d 2e58 .............m.X
0x0030: 9a6d 2e58 .m.X
13:07:04.928106 IP localhost.44281 > localhost.mysql: Flags [P.], seq 0:21, ack 1, win 513, options [nop,nop,TS val 2590846653 ecr 2590846552], length 21
0x0000: 4508 0049 23ae 4000 4006 18f7 7f00 0001 E..I#.@.@.......
0x0010: 7f00 0001 acf9 0cea fc09 7cf9 1f52 a70b ..........|..R..
0x0020: 8018 0201 fe3d 0000 0101 080a 9a6d 2ebd .....=.......m..
0x0030: 9a6d 2e58 1100 0000 0373 656c 6563 7420 .m.X.....select.
0x0040: 736c 6565 7028 3130 29 sleep(10)
13:07:14.827526 IP localhost.44281 > localhost.mysql: Flags [.], ack 65, win 513, options [nop,nop,TS val 2590856553 ecr 2590856552], length 0
0x0000: 4508 0034 23af 4000 4006 190b 7f00 0001 E..4#.@.@.......
0x0010: 7f00 0001 acf9 0cea fc09 7d0e 1f52 a74b ..........}..R.K
0x0020: 8010 0201 9d73 0000 0101 080a 9a6d 5569 .....s.......mUi
0x0030: 9a6d 5568 .mUh
13:07:14.927960 IP localhost.44281 > localhost.mysql: Flags [P.], seq 21:42, ack 65, win 513, options [nop,nop,TS val 2590856653 ecr 2590856552], length 21
0x0000: 4508 0049 23b0 4000 4006 18f5 7f00 0001 E..I#.@.@.......
0x0010: 7f00 0001 acf9 0cea fc09 7d0e 1f52 a74b ..........}..R.K
0x0020: 8018 0201 fe3d 0000 0101 080a 9a6d 55cd .....=.......mU.
0x0030: 9a6d 5568 1100 0000 0373 656c 6563 7420 .mUh.....select.
0x0040: 736c 6565 7028 3130 29 sleep(10)
end. | 0 | 0 | 0 | 0 | false | 37,517,765 | 0 | 571 | 1 | 0 | 0 | 37,475,338 | It works by coincidence.
MySQL is a request-response protocol.
When two processes send queries, they aren't mixed together unless the queries are large.
The MySQL server (1) receives one query, (2) sends the response to (1), (3) receives the next query, (4) sends the response to (3).
When the first response is sent from the MySQL server, one of the two processes receives it. Since the response is small enough, it is received atomically.
And the next response is received by the other process.
Try sending "SELECT 1+2" from one process and "SELECT 1+3" from another process. "1+2" may be 4 by chance and "SELECT 1+3" may be 3 by chance. | 1 | 0 | 0 | use multiprocessing to query in same mysqldb connect , block? | 1 | mysql,linux,mysql-python,pymysql | 0 | 2016-05-27T05:15:00.000 |
I was writing a PL/Python function for PostgreSQl, with Python 2.7 and Python 3.5 already installed on Linux.
When I was trying to create the extension plpythonu, I got an error; I then fixed it by executing in the terminal the command $ sudo apt-get install postgresql-contrib-9.3 postgresql-plpython-9.3. I understand that this is a separate package.
If I do not have Python 2.7/3.5 installed and I install the plpython package, will the user-defined function still work? Is PL/Python somehow dependent on Python? | 0 | 0 | 1.2 | 0 | true | 42,190,637 | 0 | 278 | 1 | 1 | 0 | 37,487,072 | Yes, it will; the package is independent of the standard Python installation. | 1 | 0 | 0 | PL/Python in PostgreSQL | 1 | python,postgresql,plpython | 0 | 2016-05-27T15:19:00.000 |
I tried to use pandas to read an excel sheet into a dataframe but for floating point columns, the data is read incorrectly. I use the function read_excel() to do the task
In Excel, the value is 225789.479905466 while in the dataframe the value is 225789.47990546614, which creates a discrepancy when I import data from Excel into a database.
Does anyone face the same issue with pandas.read_excel()? I have no issue reading a CSV into a dataframe.
Jeremy | 4 | 0 | 0 | 1 | false | 37,596,921 | 0 | 4,533 | 1 | 0 | 0 | 37,492,173 | Excel might be truncating your values, not pandas. If you export to .csv from Excel and are careful about how you do it, you should then be able to read with pandas.read_csv and maintain all of your data. pandas.read_csv also has an undocumented float_precision kwarg, that might be useful, or not useful. | 1 | 0 | 0 | loss of precision when using pandas to read excel | 3 | python,excel,pandas,dataframe,precision | 0 | 2016-05-27T20:59:00.000 |
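If the extra digits only matter when loading into the database, a hedged option is to round to the precision you actually store; the file name, column name, and 9-decimal precision below are assumptions based on the value in the question:

```
import pandas as pd

df = pd.read_excel("data.xlsx")          # hypothetical file
df["Amount"] = df["Amount"].round(9)     # 225789.47990546614 -> 225789.479905466
```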
py.test sets up a test database.
I'd like to use the real database set in settings.py file.
(Since I'm on test machine with test data already)
Would it be possible? | 1 | 0 | 0 | 0 | false | 37,513,426 | 1 | 1,804 | 1 | 0 | 0 | 37,507,458 | yeah you can override the settings on the setUp
set the real database for the tests and load your database fixtures. But I think it's not good practice, since you want to run your tests without modifying your "real" app environment.
You should try pytest-django.
with this lib you can reuse, create drop your databases test. | 1 | 0 | 0 | Django py.test run on real database? | 3 | python,django,pytest | 0 | 2016-05-29T07:46:00.000 |
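A hypothetical pytest-django setup along those lines (the settings module and test body are placeholders); --reuse-db keeps the test database between runs instead of recreating it:

```
# pytest.ini (assumed):
#   [pytest]
#   DJANGO_SETTINGS_MODULE = myproject.settings
#   addopts = --reuse-db

import pytest

@pytest.mark.django_db
def test_weights_exist():
    from myapp.models import Weight        # hypothetical model
    assert Weight.objects.count() >= 0
```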
So I have a database that stores a lot of information about many different objects; to simplify it, just imagine a database that stores information about the weights of 100 dogs and 100 cats over a period of a few years. I made a GUI, and I want one of the tabs to allow the user to enter a newly taken weight or change the past weight of a pet if it was wrong.
I created the GUI using Qt Designer. Before, I communicated with the database using SqlAlchemy. However, PyQt4 also offers QtSql, which I think is another ORM? Could someone explain to me how the ORM exactly works, and what the difference is between SqlAlchemy, QtSql, and even Sql Connector? | 0 | 0 | 1.2 | 0 | true | 37,579,094 | 0 | 2,414 | 1 | 0 | 0 | 37,578,763 | Qt Sql is the SQL Framework that comes with Qt library. It provides basic (and classic) classes to access a database, execute queries and fetch the results.*Qt can be recompiled to support various DBMS such as MySQL, Postgres etc.
Sql Connector
I assume you refer to MySql Connector ?
If so, it's a set of C++ classes to access natively a MySql database.
Classes are almost the same as those in QtSQL, but you don't have to carry the wrapper layer of Qt. Of course, you cannot access any database other than MySQL.
Sql Alchemy
It's hard to briefly explain the complexity of an ORM.
Object Relation Mapping. Wikipedia says:
"A technique for converting data between incompatible type systems in
object-oriented programming languages"
It's a good definition. It's basically a technique to map table / queries data into Object-Oriented data structure.
For example, an ORM engine hides the process that explicitly maps the fields of a TABLE to an OO class.
In addition, it works the same whatever the database you're accessing (as long as the ORM knows the DBMS dialect).
For such a purpose, Python language and philosophy perfectly fits an ORM.
But an ORM such as SqlAlchemy is everything but an Object-Oriented Database !
It has some limitations though.
If you need to make complex queries (and believe me, it often happens in specific contexts), it becomes a bit tricky to use it properly and you might experience performance penalties.
If you just need to access a single table with hundreds of records, it won't be worth it, as the initialization process is a bit laborious.
Z. | 1 | 0 | 0 | Which ORM to use for Python and MySql? | 1 | python,mysql,orm,sqlalchemy,pyqt | 0 | 2016-06-01T21:05:00.000 |
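For a feel of what the ORM mapping looks like in SQLAlchemy, a hypothetical model for the pet-weights example from the question (all names and the connection URL are assumptions):

```
from sqlalchemy import Column, Date, Float, Integer, String, create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

Base = declarative_base()

class WeightRecord(Base):                 # one row per weighing
    __tablename__ = "weight_records"
    id = Column(Integer, primary_key=True)
    pet_name = Column(String(64))
    species = Column(String(16))          # 'dog' or 'cat'
    measured_on = Column(Date)
    weight_kg = Column(Float)

engine = create_engine("mysql://user:pw@localhost/pets")   # hypothetical URL
Session = sessionmaker(bind=engine)
session = Session()                       # session.query(WeightRecord)... as needed
```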
Use case: I'm writing a backend using MongoDB (and Flask). At the moment this is not using any ORM like Mongoose/Mongothon. I'd like to store the _id of the user which created each document in the document. I'd like it to be impossible to modify that field after creation. The backend currently allows arbitrary updates using (essentially) collection.update_one({"_id": oid}, {"$set": request.json})
I could filter out the _creator_id field from request.json (something like del request.json["_creator_id"]) but I'm concerned that doesn't cover all possible ways in which the syntax could be modified to cause the field to be updated (hmm, dot notation?). Ideally I'd like a way to make fields write-once in MongoDB itself, but failing that, some bulletproof way to prevent updates of a field in code. | 2 | 3 | 0.291313 | 0 | false | 38,043,469 | 1 | 3,632 | 1 | 0 | 0 | 37,580,165 | imho there is no know methods to prevent updates inside mongo.
As you can control app behavior, then someone will still able to make this update outside the app. Mongo don't have triggers - which in sql world have the possibility to play as a data guards and prevent field changes.
As you re not using ODM, then all you can have is CQRS pattern which will allow you to control app behavior and prevent such updates. | 1 | 0 | 0 | How to make a field immutable after creation in MongoDB? | 2 | python,json,node.js,mongodb,immutability | 0 | 2016-06-01T23:03:00.000 |
From my understanding BigTable is a Column Oriented NoSQL database. Although Google Cloud Datastore is built on top of Google’s BigTable infrastructure I have yet to see documentation that expressively says that Datastore itself is a Column Oriented database. The fact that names reserved by the Python API are enforced in the API, but not in the Datastore itself makes me question the extent Datastore mirrors the internal workings of BigTable. For example, validation features in the ndb.Model class are enforced in the application code but not the datastore. An entity saved using the ndb.Model class can be retrieved someplace else in the app that doesn't use the Model class, modified, properties added, and then saved to datastore without raising an error until loaded into a new instance of the Model class. With that said, is it safe to say Google Cloud Datastore is a Column Oriented NoSQL database? If not, then what is it? | 1 | 3 | 0.53705 | 0 | false | 37,609,672 | 1 | 429 | 1 | 1 | 0 | 37,602,604 | Strictly speaking, Google Cloud Datastore is distributed multi-dimensional sorted map. As you mentioned it is based on Google BigTable, however, it is only a foundation.
From high level point of view Datastore actually consists of three layers.
BigTable
This is a necessary base for Datastore. Maps row key, column key and timestamp (three-dimensional mapping) to an array of bytes. Data is stored in lexicographic order by row key.
High scalability and availability
Strong consistency for single row
Eventual consistency for multi-row level
Megastore
This layer adds transactions on top of the BigTable.
Datastore
A layer above Megastore. Enables to run queries as index scans on BigTable. Here index is not used for performance improvement but is required for queries to return results.
Furthermore, it optionally adds strong consistency for multi-row level via ancestor queries. Such queries force the respective indexes to update before executing actual scan. | 1 | 0 | 0 | Is Google Cloud Datastore a Column Oriented NoSQL database? | 1 | python,google-app-engine,google-cloud-datastore | 0 | 2016-06-02T21:41:00.000 |
Is it possible to access the same MySQL database using Python and PHP? I am developing a video-searching website based on semantics. Purposely I have to use Python and Java EE, so I have to make a database to store video data. But it should be accessible through both Python and Java EE; I can use PHP to interface between Java EE and the MySQL database. My question is whether Python can access the same database.
I am new here and new to development. I appreciate your kindness and think I can get a good solution. | 0 | 1 | 0.099668 | 0 | false | 37,759,626 | 0 | 96 | 2 | 0 | 0 | 37,757,927 | It's a database. It doesn't care what language or application you're using to access it. That's one of the benefits of having standards like the MySQL protocol, SQL in general, or even things like TCP/IP: They allow different systems to seamlessly inter-operate. | 1 | 0 | 0 | can python and php access same mysqldb? | 2 | php,python,mysql | 0 | 2016-06-10T22:22:00.000 |
Is it possible to access the same MySQL database using Python and PHP? I am developing a video-searching website based on semantics. Purposely I have to use Python and Java EE, so I have to make a database to store video data. But it should be accessible through both Python and Java EE; I can use PHP to interface between Java EE and the MySQL database. My question is whether Python can access the same database.
I am new here and new to development. I appreciate your kindness and think I can get a good solution. | 0 | 0 | 0 | 0 | false | 37,760,285 | 0 | 96 | 2 | 0 | 0 | 37,757,927 | Like @tadman said, yes.
All you care about is making a new connection and obtaining a cursor in each of your programs (no matter what language).
The cursor is what does what you want (analogous to executing an actual query in whatever program you're using). | 1 | 0 | 0 | can python and php access same mysqldb? | 2 | php,python,mysql | 0 | 2016-06-10T22:22:00.000 |
As the title says, simple question...
When to use pyodbc and when to use jaydebeapi in Python 2/3?
Let me elaborate with a couple of example scenarios...
If I were a solution architect looking at a Pyramid web server that needs to access multiple RDBMS types (HSQLDB, MariaDB, Oracle, etc.), with the expectation of heavy to massive concurrency and tight latency requirements in a monolithic web server, which paradigm would be chosen? And why?
If I were to implement an enterprise microservice solution (a.k.a. the new SOA) with each microservice accessing a specific targeted RDBMS, but each with heavy load and performance/latency requirements, which paradigm would be chosen? And why?
Traditionally JDBC performed significantly better in large enterprise solutions requiring good concurrency. Are the same idiosyncrasies prevalent in Python? Is there another way besides the two mentioned above?
I am new to Python so please be patient if my question doesn't make sense and I'll attempt to elaborate further. It is best to think about my question from a high-level solution design then going from the ground up as a developer. What would you mandate as the paradigm if you were the sol-architect? | 1 | 2 | 1.2 | 0 | true | 37,793,124 | 0 | 1,110 | 1 | 0 | 0 | 37,792,956 | Simple answer - until more details given in question:
In case you want to speak ODBC with the database: Go with pyodbc or for a pure python solution with pypyodbc
Else if you want to talk JDBC with the database try jaydebeapi
This should depend more on the channel you want to use between Python and the database and less on the version of Python you are using. | 1 | 0 | 1 | When to use pyodbc and when to use jaydebeapi in Python 2/3? | 1 | python,python-3.x,pyodbc,jaydebeapi | 0 | 2016-06-13T14:54:00.000 |
I'm using a python driver (mysql.connector) and do the following:
_db_config = {
'user': 'root',
'password': '1111111',
'host': '10.20.30.40',
'database': 'ddb'
}
_connection = mysql.connector.connect(**_db_config) # connect to a remote server
_cursor = _connection.cursor(buffered=True)
_cursor.execute("""SELECT * FROM database LIMIT 1;""")
In some cases, the call to _cursor.execute() hangs with no exception
By the way, when connecting to a local MySQL server it seems to be ok | 1 | -1 | -0.099668 | 0 | false | 37,839,311 | 0 | 1,546 | 1 | 0 | 0 | 37,809,163 | Moving to MySQLdb (instead of mysql.connector) solved all the issues :-) | 1 | 0 | 0 | Call to MySQL cursor.execute() (Python driver) hangs | 2 | python,mysql | 0 | 2016-06-14T10:15:00.000 |
I'm running a python 3.5 worker on heroku.
self.engine = create_engine(os.environ.get("DATABASE_URL"))
My code works on local, passes Travis CI, but gets an error on heroku - OperationalError: (psycopg2.OperationalError) FATAL: database "easnjeezqhcycd" does not exist.
easnjeezqhcycd is my user, not my database name. As I'm not using Flask's SQLAlchemy, I haven't found a single person dealing with the same problem.
I tried destroying my addon database and created standalone postgres db on heroku - same error.
What's different about heroku's URL that SQLAlchemy doesn't accept it? Is there a way to establsih connection using psycopg2 and pass it to SQLAlchemy? | 1 | 1 | 1.2 | 0 | true | 46,558,758 | 1 | 2,039 | 2 | 0 | 0 | 37,910,066 | Old question, but the answer seems to be that database_exists and create_database have special case code for when the engine URL starts with postgresql, but if the URL starts with just postgres, these functions will fail. However, SQLAlchemy in general works fine with both variants.
So the solution is to make sure the database URL starts with postgresql:// and not postgres://. | 1 | 0 | 0 | Heroku SQLAlchemy database does not exist | 3 | python,postgresql,heroku,sqlalchemy,heroku-postgres | 0 | 2016-06-19T17:44:00.000 |
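A small sketch of the workaround implied by the answers: normalize the scheme before handing the URL to SQLAlchemy.

```
import os
from sqlalchemy import create_engine

url = os.environ.get("DATABASE_URL", "").strip()   # .strip() also guards the trailing-space case below
if url.startswith("postgres://"):
    url = url.replace("postgres://", "postgresql://", 1)

engine = create_engine(url)
```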
I'm running a python 3.5 worker on heroku.
self.engine = create_engine(os.environ.get("DATABASE_URL"))
My code works on local, passes Travis CI, but gets an error on heroku - OperationalError: (psycopg2.OperationalError) FATAL: database "easnjeezqhcycd" does not exist.
easnjeezqhcycd is my user, not my database name. As I'm not using Flask's SQLAlchemy, I haven't found a single person dealing with the same problem.
I tried destroying my addon database and created standalone postgres db on heroku - same error.
What's different about heroku's URL that SQLAlchemy doesn't accept it? Is there a way to establsih connection using psycopg2 and pass it to SQLAlchemy? | 1 | 0 | 0 | 0 | false | 62,351,512 | 1 | 2,039 | 2 | 0 | 0 | 37,910,066 | so I was getting the same error and after checking several times I found that I was giving a trailing space in my DATABASE_URL. Which was like DATABASE_URL="url<space>".
After removing the space my code runs perfectly fine. | 1 | 0 | 0 | Heroku SQLAlchemy database does not exist | 3 | python,postgresql,heroku,sqlalchemy,heroku-postgres | 0 | 2016-06-19T17:44:00.000 |
I'm quite new to Python and trying to fetch data in HTML and saved to excels using xlwt.
So far the program seems work well (all the output are correctly printed on the python console when running the program) except that when I open the excel file, an error message saying 'We found a problem with some content in FILENAME, Do you want us to try to recover as much as we can? If you trust the source of this workbook, click Yes.' And after I click Yes, I found that a lot of data fields are missing.
It seems that roughly the first 150 lines are fine and the problem begins to rise after that (In total around 15000 lines). And missing data fields concentrate at several columns with relative high data volume.
I'm wondering if it's related to some sort of cache-allocation mechanism in xlwt?
Thanks a lot for your help here. | 0 | -2 | -0.379949 | 0 | false | 37,931,428 | 0 | 219 | 1 | 0 | 0 | 37,925,969 | seems like a caching issue.
Try sheet.flush_row_data() every 100 rows or so ? | 1 | 0 | 0 | Python XLWT: Excel generated by Python xlwt contains missing value | 1 | python,xlwt | 0 | 2016-06-20T15:11:00.000 |
I have stored a pyspark sql dataframe in parquet format. Now I want to save it as xml format also. How can I do this? Solution for directly saving the pyspark sql dataframe in xml or converting the parquet to xml anything will work for me. Thanks in advance. | 0 | -1 | -0.099668 | 0 | false | 37,989,050 | 0 | 992 | 1 | 0 | 1 | 37,945,725 | You can map each row to a string with xml separators, then save as text file | 1 | 0 | 0 | How to save a pyspark sql DataFrame in xml format | 2 | xml,python-2.7,pyspark,spark-dataframe,parquet | 0 | 2016-06-21T13:24:00.000 |
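A rough sketch of that row-to-string approach (element names are assumptions; this does no XML escaping and writes plain text files with one XML fragment per line, not a single well-formed document):

```
def row_to_xml(row):
    cells = "".join("<%s>%s</%s>" % (k, v, k) for k, v in row.asDict().items())
    return "<row>%s</row>" % cells

df.rdd.map(row_to_xml).saveAsTextFile("/tmp/df_as_xml")   # df is the pyspark DataFrame
```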
Data retrieval is too slow when I query for all the data at once in MongoDB using the query db.find({}, {'_id':0}).
I am using PyMongo
How can I retrieve all the documents faster using the Python driver?
I think indexing can make data retrieval faster, but how do I apply indexing to the whole collection so that a db.find({}) query over the whole collection runs faster? | 1 | 0 | 0 | 0 | false | 37,991,450 | 0 | 117 | 1 | 0 | 0 | 37,991,245 | Indexing will drastically speed up finding subsets of documents within a collection, but will not (to my knowledge) speed up pulling the entire collection.
The reason indexing speeds up finding subsets is that mongo does not have to iterate through each document to see if they match the query- instead mongo can just go to each specific document location by looking it up in the index.
If you are returning the entire collection- then the index has no effect. Mongo fundamentally has to iterate through every document in the collection.
The storage engine you choose will affect the speed, so I suggest you read up on the differences between Wired Tiger and mmapV1. I know there are third-party ones but can't think of them off the top of my head. | 1 | 0 | 1 | How to apply indexing in mongodb to read the whole data at once faster | 1 | python,mongodb,pymongo | 0 | 2016-06-23T12:10:00.000
I have an Excel file that I want to convert, but the default type for numbers is float. How can I change it so xlwings explicitly uses strings and not numbers?
This is how I read the value of a field:
xw.Range(sheet, fieldname ).value
The problem is that numbers like 40 get converted to 40.0 if I create a string from that. I strip it with: str(xw.Range(sheetFronius, fieldname ).value).rstrip('0').rstrip('.') but that is not very helpful and leads to errors because sometimes the same field can contain both a number and a string. (Not at the same time, the value is chosen from a list) | 1 | 0 | 0 | 0 | false | 70,226,530 | 0 | 3,124 | 1 | 0 | 0 | 37,996,435 | In my case the conclusion was to just add one extra row below the last row of raw data.
Write any text into the column you want read as str, save, load, and then delete that last row again. | 1 | 0 | 1 | How can I read every field as string in xlwings? | 2 | python,converter,type-conversion,xlwings | 0 | 2016-07-19T15:52:00.000
I am new at using raspberry pi.
I have a python 3.4 program that connects to a database on hostinger server.
I want to install mysql connector on the raspberry pi. I searched a lot but was not able to find answers. Any help would be appreciated | 4 | 1 | 0.066568 | 0 | false | 48,172,697 | 0 | 31,314 | 1 | 0 | 0 | 38,007,240 | Just use $sudo apt-get install python3-mysqldb and it works on pi-3. | 1 | 0 | 0 | installing mysql connector for python 3 in raspberry pi | 3 | mysql,python-3.x,raspberry-pi2 | 0 | 2016-06-24T06:50:00.000
I have a python program that accesses SQL databases with the database login currently encoded in base64 in a text file. I'd like to encode the login instead using MD5 and store it in a config file, but after some research, I couldn't find much on the topic. Could someone point me in the right direction on where to start? | 0 | 1 | 1.2 | 0 | true | 38,016,382 | 0 | 397 | 1 | 0 | 0 | 38,016,242 | MD5, unfortunately, is a hash signature protocol, not an encryption protocol. It is used to generate strings that are used to detect even the very-slightest change to the value from which the MD5 hash was produced. But . . . (by design) . . . you cannot recover the value that was originally used to produce the signature!
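To illustrate the one-way nature, a minimal hashlib sketch (the sample login string is made up):
import hashlib
digest = hashlib.md5(b"db_login:password").hexdigest()  # made-up sample value
print(digest)  # easy to produce...
# ...but there is no reverse operation: the original login cannot be recovered from the digest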
If you are working in a corporate, "intra-net" setting, consider using LDAP (Microsoft OpenDirectory) or some other form of authorization/authentication, in lieu of "passwords." In this scenario, the security department authorizes your application to do certain things, and they provide you with an otherwise-meaningless token with which to do it. The database uses the presented token, along with other rules controlled only by the security department, to recognize your script and to grant selected access to it. The token is "useless if stolen."
If you do still need to use passwords, you'll need to find a different way to securely store them. MD5 cannot be used. | 1 | 0 | 0 | Encrypting a SQL Login in a Python program using MD5 | 2 | python,sql,python-3.x,config,md5 | 0 | 2016-06-24T14:49:00.000 |
simple question - if I run apache 32bit version, on 64bit OS, with a lot of memory (32GB RAM). Does this mean all the memory will go to waste since 32bit apache can't use more then 3GB ram? | 0 | 0 | 0 | 0 | false | 38,040,277 | 1 | 203 | 1 | 0 | 0 | 38,040,240 | I would assume so. You should definitely go for a 64-bit version of Apache to make use of all the memory available. | 1 | 0 | 0 | Apache web server 32bit on 64bit computer | 1 | python,django,apache,32bit-64bit,32-bit | 0 | 2016-06-26T15:45:00.000 |
I'm new to pythonanywhere. I wonder how to load data from local csv files (there are many of them, over 1,000) into a mysql table. Let's say the path for the folder of the csv files is d:/data. How can I write code to let pythonanywhere access the local files? Thank you very much! | 2 | 2 | 1.2 | 0 | true | 38,056,627 | 0 | 1,111 | 1 | 0 | 0 | 38,045,616 | You cannot get PythonAnywhere to read the files directly off your machine. At the very least, you need to upload the file to PythonAnywhere first. You can do that from the Files tab. Then the link that Rptk99 provided will show you how to import the file into MySQL. | 1 | 0 | 0 | Pythonanywhere Loading data from local files | 1 | database,python-2.7,pythonanywhere | 0 | 2016-06-27T03:46:00.000
In Python, what is a database "cursor" most like?
A method within a class
A Python dictionary
A function
A file handle
I have searched on the internet but have not found a proper justification for this question. | 1 | 2 | 0.197375 | 0 | false | 38,067,434 | 0 | 2,948 | 1 | 0 | 0 | 38,067,324 | Probably it is most like a file handle.
That does not mean that it is a file handle, and a cursor is actually an object - an instance of a Cursor class (depending on the actual db driver in use).
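For instance, a minimal sqlite3 sketch showing the Cursor object and how its rows are consumed:
import sqlite3
conn = sqlite3.connect(":memory:")
cur = conn.cursor()  # a Cursor object, not a file handle
cur.execute("CREATE TABLE t (x INTEGER)")
cur.executemany("INSERT INTO t VALUES (?)", [(1,), (2,), (3,)])
cur.execute("SELECT x FROM t")
print(cur.fetchone())  # -> (1,)
print(cur.fetchall())  # -> [(2,), (3,)]; already-consumed rows are not revisited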
The reason that it's similar to a file handle is that you can consume data from it, but (in general) you can't go back to previously consumed data. Consumption of data is therefore unidirectional. Reading from a file handle returns characters/bytes, reading from a cursor returns rows. | 1 | 0 | 1 | Database Cursor | 2 | python,database,cursor | 0 | 2016-06-28T04:55:00.000 |
I am trying to migrate database from SQL Server is in 172.16.12.116 to MariaDB (Windows) is in 172.16.12.107 through MySQL Workbench 6.1.4.
Source selection succeeded. But when I try to connect to the target I get this error:
Error during Check target DBMS connection: MySQLError("Host '172.16.12.116' is not allowed to connect to this MariaDB server (code 1130)"): error calling Python module function DbMySQLFE.connect
What possibly could be a problem? | 0 | 0 | 0 | 0 | false | 38,116,416 | 0 | 806 | 1 | 0 | 0 | 38,100,722 | MySQL Workbench only works with MySQL servers, with the exception of migration sources (which can be Postgres, Sybase and others).
What you can do however is first to migrate to a MySQL server and then dump the imported data and import that in MariaDB. Might require a few adjustments then. | 1 | 0 | 0 | Error during Check target DBMS connection | 1 | mysql,python-2.7,mysql-workbench,mariadb | 0 | 2016-06-29T13:15:00.000 |
I have a list of variables with unicode characters, some of them for chemicals like Ozone gas: like 'O\u2083'. All of them are stored in a sqlite database which is read in a Python code to produce O3. However, when I read I get 'O\\u2083'. The sqlite database is created using an csv file that contains the string 'O\u2083' among others. I understand that \u2083 is not being stored in sqlite database as unicode character but as 6 unicode characters (which would be \,u,2,0,8,3). Is there any way to recognize unicode characters in this context? Now my first option to solve it is to create a function to recognize set of characters and replace for unicode characters. Is there anything like this already implemented? | 0 | 2 | 0.132549 | 0 | false | 38,146,103 | 0 | 1,263 | 1 | 0 | 0 | 38,106,808 | SQLite allows you to read/write Unicode text directly. u'O\u2083' is two characters u'O' and u'\u2083' (your question has a typo: 'u\2083' != '\u2083').
I understand that u\2083 is not being stored in sqlite database as unicode character but as 6 unicode characters (which would be u,\,2,0,8,3)
Don't confuse u'u\2083' and u'\u2083': the latter is a single character while the former is 4-character sequence: u'u', u'\x10' ('\20' is interpreted as octal in Python), u'8', u'3'.
If you save a single Unicode character u'\u2083' into a SQLite database; it is stored as a single Unicode character (the internal representation of Unicode inside the database is irrelevant as long as the abstraction holds).
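If the database really does hold the literal backslash-u text, one hedged way to turn it back into the character (separate from simply storing proper Unicode in the first place) is the unicode_escape codec, which only behaves well for ASCII escape text:
import codecs
raw = 'O\\u2083'  # the seven literal characters read back from the database
fixed = codecs.decode(raw, 'unicode_escape')  # -> u'O\u2083', i.e. 'O' followed by subscript three
print(len(raw), len(fixed))  # 7 2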
On Python 2, if there is no from __future__ import unicode_literals at the top of the module then 'abc' string literal creates a bytestring instead of a Unicode string -- in that case both 'u\2083' and '\u2083' are sequences of bytes, not text characters (\uxxxx is not recognized as a unicode escape sequence inside bytestrings). | 1 | 0 | 0 | Reading unicode characters from file/sqlite database and using it in Python | 3 | python,sqlite,unicode | 0 | 2016-06-29T17:51:00.000 |
I have been trying to connect to SQL Server (I have SQL Server 2014 installed on my machine and SQL Native Client 11.0 32bit as the driver) using Python, specifically pyodbc, but I did not manage to establish any connection.
This is the connection string i am using:
conn = pyodbc.connect('''DRIVER={SQL Server Native Client 11.0}; SERVER=//123.45.678.910; DATABASE=name_database;UID=blabla;PWD=password''')
The error message i am getting is this:
Error: ('08001', '[08001] [Microsoft][SQL Server Native Client 11.0]Named Pipes Provider: Could not open a connection to SQL Server [161]. (161) (SQLDriverConnect)')
Now, is this caused by the fact that both Python (I have version 3.5.1) and pyodbc are 64bit while the SQL driver is 32bit?
If yes, how do I go about solving this problem?
How do I adapt pyodbc to query a 32bit database?
I am experiencing the same problem with Oracle database OraCLient11g32_home1
For your information, my machine runs Anaconda 2.5.0 (64-bit).
Any help would be greatly appreciated. Thank you very much in advance. | 0 | 0 | 0 | 0 | false | 38,146,099 | 0 | 228 | 1 | 0 | 0 | 38,145,048 | I may be missing something here. Why don't you connect to your Oracle database as a SQL Server linked server (or the other way around)? | 1 | 0 | 0 | Database Connection SQL Server / Oracle | 1 | python,sql-server,database,oracle,python-3.x | 0 | 2016-07-01T12:08:00.000
I just started using Ipython in Pycharm.
What's the shortcut for inserting a cell for IPython in PyCharm?
To insert a cell between the 2nd and 3rd cell.
To insert a cell at the end of code
According to the PyCharm documentation, the way to add a cell is as follows.
But it doesn't work for me. Anyone find the same issue?
Since the new cell is added below the current one, click the cell with import statement - its frame becomes green. Then on the toolbar click add (or press Alt+Insert). | 2 | 1 | 0.099668 | 0 | false | 70,563,736 | 0 | 2,299 | 1 | 0 | 0 | 38,151,292 | I've had the same concern and right now I remembered that you can just write #%% (for code cell) or #%% md (for markdown cell) anywhere you want and it will create a new cell | 1 | 0 | 1 | Shortcut for insert a cell below for Ipython in Pycharm? | 2 | ipython,pycharm | 0 | 2016-07-01T17:49:00.000 |
I want to do a bulk insertion in SQLAlchemy and would prefer to remove an index prior to making the insertion, re-adding it when the insertion is complete.
I see adding and removing indexes is supported by Alembic for migrations, but is this possible with SQLAlchemy? If so, how? | 1 | 1 | 1.2 | 0 | true | 38,949,945 | 0 | 1,562 | 1 | 0 | 0 | 38,238,858 | The best method is to just execute SQL. In this case session.execute("DROP INDEX ...") | 1 | 0 | 0 | Drop and read index using SQLAlchemy | 1 | python,sqlalchemy,migration | 0 | 2016-07-07T06:34:00.000
I'm currently using a mix of smart view and power query(sql) to load data into Excel models, however my excel always crashes when smart view is used. I'm required to work in Excel but I'm now looking at finding a way to periodically load data from Essbase into my SQL server database and only use power query(sql) for all my models. What would be my best options in doing this? Being a Python enthusiast I found essbasepy.py however there isn't much documentation on it. Please help | 0 | 0 | 1.2 | 0 | true | 38,299,572 | 0 | 2,203 | 1 | 0 | 0 | 38,270,552 | There are a couple of ways to go. The most straightforward is to export all of the data from your Essbase database using column export, then design a process to load the data into SQL Server (such as using the import functionality or BULK IMPORT, or SSIS...).
Another approach is to use the DataExport calc script command to export either to a file (that you then load into SQL) or directly to the relational database (DataExport can be configured to export data directly to relational).
In either case, you will need privileges that are greater than normal user privileges, and either approach involves Essbase automation that may require you to coordinate with the Essbase admin. | 1 | 0 | 0 | Load data from Essbase into SQL database | 1 | python,sql-server,excel,hyperion,essbase | 0 | 2016-07-08T15:37:00.000 |
I work on a raspberry pi project and use Python + Kivy for such reasons:
I read some string values comming from a device installed in a field every 300ms.
As soon as I see certain value I trigger a python thread to run another function which takes the string and stores it in a list and timestamp it.
My kivy app displays the value stored in the list and run some other functions.
The question is: is it a better approach to save the received strings into a DB and let Kivy read the DB, or is it better for Python to append to a list and run another function that goes through the list and triggers the Kivy task? | 0 | 0 | 0 | 0 | false | 38,304,824 | 0 | 56 | 1 | 0 | 0 | 38,295,148 | Both approaches have pros and cons.
A database is designed to store and query data. You can query data easily (SQL) from multiple processes. If you don't have multiple processes or complicated queries, a database doesn't really offer that much. Maybe persistence, if that is a concern for you. If you don't need the features a database offers, don't use one.
If you simply want to store a bit of data, a list is better. It's probably faster because you don't need inter-process communication. Also, if you store the data in the database you will still need to get it into the Python process somehow, and then you will probably put it in a list.
Based on your requests a database doesn't offer any features you need, so you should go with a simple list. | 1 | 0 | 1 | is it better to read from LIST or from Database? | 1 | python,database,kivy | 0 | 2016-07-10T18:24:00.000 |
In the Python 3 docs, it states that the dbm module will use gdbm if it's installed. In my script I use from dbm.gnu import open as dbm_open to try and import the module. It always returns with the exception ImportError: No module named '_gdbm'. I've gone to the gnu website and have downloaded the latest version. I installed it using
./configure --enable-libgdbm-compat, make; make check; make install, and it installed with no errors. I can access the man page for the library but I still can't import it into Python 3.5.2 (Anaconda). How do I install the Python module for gdbm? | 5 | 0 | 0 | 0 | false | 70,973,357 | 0 | 2,327 | 1 | 1 | 0 | 38,385,630 | I got similar issue though I am not sure which platform you are using.
Steps are:
Look for a file named _gdbm.cpython-<python version>-.so, for example _gdbm.cpython-39-darwin.so.
Once you find that path, check which Python version appears in the directory path.
Try creating a venv with that same Python version.
Execute your code.
Before this, make sure you have the appropriate gdbm version installed on the host machine; the package name is different on macOS than on Ubuntu. | 1 | 0 | 1 | Python: How to Install gdbm for dbm.gnu | 1 | python-3.x,gdbm | 0 | 2016-07-14T23:13:00.000
Overview: I have data something like this (each row is a string):
81:0A:D7:19:25:7B, 2016-07-14 14:29:13, 2016-07-14 14:29:15, -69, 22:22:22:22:22:23,null,^M
3B:3F:B9:0A:83:E6, 2016-07-14 01:28:59, 2016-07-14 01:29:01, -36, 33:33:33:33:33:31,null,^M
B3:C0:6E:77:E5:31, 2016-07-14 08:26:45, 2016-07-14 08:26:47, -65, 33:33:33:33:33:32,null,^M
61:01:55:16:B5:52, 2016-07-14 06:25:32, 2016-07-14 06:25:34, -56, 33:33:33:33:33:33,null,^M
And I want to sort each row based on the first timestamp that is present in the each String, which for these four records is:
2016-07-14 01:28:59
2016-07-14 06:25:32
2016-07-14 08:26:45
2016-07-14 14:29:13
Now I know the sort() method, but I don't understand how I can use it here to sort all the rows based on this (timestamp) quantity, and I do need to keep the final sorted data in the same format as some other service is going to use it.
I also understand I can make the key() but I am not clear how that can be made to sort on the timestamp field. | 5 | 1 | 0.066568 | 1 | false | 38,389,853 | 0 | 358 | 1 | 0 | 0 | 38,388,799 | You can use str.split as the key, e.g. rows.sort(key=lambda line: line.split(',')[1]), which sorts on the first timestamp field. | 1 | 0 | 0 | Sort A list of Strings Based on certain field | 3 | python,list,python-2.7,sorting | 0 | 2016-07-15T05:57:00.000
I have a dataframe in Python. Can I write this data to Redshift as a new table?
I have successfully created a db connection to Redshift and am able to execute simple sql queries.
Now I need to write a dataframe to it. | 32 | 5 | 0.141893 | 1 | false | 42,047,026 | 0 | 57,271 | 1 | 0 | 0 | 38,402,995 | Assuming you have access to S3, this approach should work:
Step 1: Write the DataFrame as a csv to S3 (I use AWS SDK boto3 for this)
Step 2: You know the columns, datatypes, and key/index for your Redshift table from your DataFrame, so you should be able to generate a create table script and push it to Redshift to create an empty table
Step 3: Send a copy command from your Python environment to Redshift to copy data from S3 into the empty table created in step 2
Works like a charm every time.
Step 4: Before your cloud storage folks start yelling at you delete the csv from S3
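Roughly, a hedged sketch of those four steps in one function (boto3 and psycopg2 are assumed; the bucket, key, table and connection details below are placeholders, and the target table is assumed to already exist):
import io
import boto3
import psycopg2

def df_to_redshift(df, table, bucket, key, conn_params, iam_role):
    # Step 1: dump the DataFrame to an in-memory CSV and push it to S3
    buf = io.StringIO()
    df.to_csv(buf, index=False, header=False)
    s3 = boto3.client("s3")
    s3.put_object(Bucket=bucket, Key=key, Body=buf.getvalue().encode("utf-8"))
    # Steps 2-3: COPY from S3 into the (already created) target table
    with psycopg2.connect(**conn_params) as conn, conn.cursor() as cur:
        cur.execute(
            "COPY {} FROM 's3://{}/{}' IAM_ROLE '{}' CSV".format(table, bucket, key, iam_role)
        )
    # Step 4: delete the temporary csv
    s3.delete_object(Bucket=bucket, Key=key)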
If you see yourself doing this several times, wrapping all four steps in a function, roughly as sketched above, keeps it tidy. | 1 | 0 | 0 | How to write data to Redshift that is a result of a dataframe created in Python? | 7 | python,pandas,dataframe,amazon-redshift,psycopg2 | 0 | 2016-07-15T18:33:00.000
I have an Excel file(xlsx) that already has lots of data in it. Now I am trying to use Python to write new data into this Excel file. I looked at xlwt, xldd, xlutils, and openpyxl, all of these modules requires you to load the data of my excel sheet, then apply changes and save to a new Excel file. Is there any way to just change the data in the existing excel sheet rather than load workbook or saving to new files? | 1 | 7 | 1.2 | 0 | true | 38,463,454 | 0 | 5,172 | 1 | 0 | 0 | 38,463,258 | This is not possible because XLSX files are zip archives and cannot be modified in place. In theory it might be possible to edit only a part of the archive that makes up an OOXML package but, in practice this is almost impossible because relevant data may be spread across different files. | 1 | 0 | 0 | Modifying and writing data in an existing excel file using Python | 2 | python,openpyxl,xlrd,xlwt,xlutils | 0 | 2016-07-19T15:52:00.000 |
I already have an OWL ontology which contains classes, instances and object properties. How can I map them to a relational database such as MySQL using Python as the programming language (I prefer Python)?
For example, an ontology can contains the classes: "Country and city" and instances like: "United states and NYC".
So I need to store them in relational database tables. I would like to know if there are some Python libraries to do so. | 1 | 0 | 0 | 0 | false | 38,534,519 | 0 | 1,036 | 1 | 0 | 0 | 38,498,290 | Use the right tool for the job. You're using RDF, that it's OWL axioms is immaterial, and you want to store and query it. Use an RDF database. They're optimized for storing and querying RDF. It's a waste of your time to homegrow storage & query in MySQL when other folks have already figured out how best to do this.
As an aside, there is a way to map RDF to a relational database. There's a formal specification for this, it's called R2RML. | 1 | 0 | 0 | How can I map an ontology components to a relational database? | 2 | python,mysql,relational-database,semantic-web,ontology | 0 | 2016-07-21T07:56:00.000 |
I need to create a dashboard based upon an excel table and I know excel has a feature for creating dashboards. I have seen tutorials on how to do it and have done my research, but in my case, the excel table on which the dashboard would be based is updated every 2 minutes by a python script. My question is, does the dashboard display automatically if a value in the table has modified, or does it need to be reopened, reloaded, etc..? | 0 | 1 | 0.197375 | 0 | false | 38,520,493 | 0 | 1,488 | 1 | 0 | 0 | 38,518,227 | If the "dashboard" is in Excel and if it contains charts that refer to data in the current workbook's worksheets, then the charts will update automatically when the data is refreshed, unless the workbook calculation mode is set to "manual". By default calculation mode is set to "automatic", so changes in data will immediately reflect in charts based on that data.
If the "dashboard" lives in some other application that looks at the Excel workbook for the source data, you may need to refresh the data connections in the dashboard application after the Excel source data has been refreshed. | 1 | 0 | 0 | Can Excel Dashboards update automatically? | 1 | python,excel,dashboard | 0 | 2016-07-22T04:33:00.000 |
Would it be possible to programmatically create a new OGC WMS (1.1/1.3) service using:
Python
MapProxy
Mapnik
PostGIS/Postgres
Any script/gist or sample would be more than appreciated.
Cheers, M | 0 | 0 | 0 | 0 | false | 39,417,434 | 0 | 662 | 1 | 0 | 0 | 38,524,212 | If we are looking to publish data in Postgres to WMS, enable a tile cache, and use a more advanced rendering engine like Mapnik, then I would say the one component that could be missing is the GIS server.
So if I am guessing your requirement correctly, as mentioned earlier, here is what the system design could be:
Use Postgres/PostGIS as the database connection.
Write your own server-side program using Python to create the service definition file for the dynamic WMS (a mapfile if you are going to use MapServer).
Have your program handle tile caching/tile seeding by changing the configuration file (.yaml) in MapProxy.
Then escalate the WMS to Mapnik for rendering and expose the output.
Like someone else mentioned, it would be easy to have a template configuration file for each step and do parameter substitution. | 1 | 0 | 0 | python script for creating maproxy OGC WMS service using Mapnik and PostGIS | 2 | python,postgis,mapnik | 0 | 2016-07-22T10:32:00.000
I am writing a Django application that will have entries entered by users of the site. Now suppose that everything goes well, and I get the expected number of visitors (unlikely, but I'm planning for the future). This would result in hundreds of millions of entries in a single PostgreSQL database.
As iterating through such a large number of entries and checking their values is not a good idea, I am considering ways of grouping entries together.
Is grouping entries in to sets of (let's say) 100 a better idea for storing this many entries? Or is there a better way that I could optimize this? | 1 | 1 | 1.2 | 0 | true | 38,587,539 | 1 | 66 | 1 | 0 | 0 | 38,585,719 | Store one at a time until you absolutely cannot anymore, then design something else around your specific problem.
SQL is a declarative language, meaning "give me all records matching X" doesn't tell the db server how to do this. Consequently, you have a lot of ways to help the db server do this quickly even when you have hundreds of millions of records. Additionally RDBMSs are optimized for this problem over a lot of years of experience so to a certain point, you will not beat a system like PostgreSQL.
So as they say, premature optimization is the root of all evil.
So let's look at two ways PostgreSQL might go through a table to give you the results.
The first is a sequential scan, where it iterates over a series of pages, scans each page for the values and returns the records to you. This works better than any other method for very small tables. It is slow on large tables. Complexity is O(n) where n is the size of the table, for any number of records.
So a second approach might be an index scan. Here PostgreSQL traverses a series of pages in a b-tree index to find the records. Complexity is O(log(n)) to find each record.
Internally PostgreSQL stores the rows in batches with fixed sizes, as pages. It already solves this problem for you. If you try to do the same, then you have batches of records inside batches of records, which is usually a recipe for bad things. | 1 | 0 | 0 | Storing entries in a very large database | 1 | python,django,database,postgresql,saas | 0 | 2016-07-26T09:13:00.000 |
I have python 2.7.12 installed on my server. I'm using PuTTY to connect to my server. When running my python script I get the following.
File "home/myuser/python/lib/python2.7/site-packages/peewee.py", line 3657, in _connect
raise ImproperlyConfigured('pysqlite or sqlite3 must be installed.')
peewee.ImproperlyConfigured: pysqlite or sqlite3 must be installed.
I thought sqlite was installed with python 2.7.12, so I'm assuming the issue is something else. Haven't managed to find any posts on here yet that have been helpful.
Am I missing something?
Thanks in advance | 1 | 0 | 0 | 0 | false | 38,715,072 | 0 | 955 | 1 | 0 | 0 | 38,589,963 | Peewee will use either the standard library sqlite3 module or, if you did not compile Python with SQLite, Peewee will look for pysqlite2.
The problem is most definitely not with Peewee on this one, as Peewee requires a SQLite driver to use the SqliteDatabase class... If that driver does not exist, then you need to install it. | 1 | 0 | 0 | Python - pysqlite or sqlite3 must be installed | 1 | python,sqlite | 0 | 2016-07-26T12:30:00.000 |
I am attempting to run a python 2.7 program on HTCondor, however after submitting the job and using 'condor_q' to assess the job status, I see that the job is put in 'held'.
After querying using 'condor_q -analyse jobNo.' the error message is "Hold reason: Error from Ubuntu: Failed to execute '/var/lib/condor/execute/dir_12033/condor_exec.exe': (errno=8: 'Exec format error').
I am unsure how to resolve this error, any help would be much appreciated. As I am relatively new to HTCondor and Ubuntu could any guidance be step wise and easy to follow
I am running Ubuntu 16.04 and the latest release of HTCondor | 0 | 0 | 0 | 0 | false | 38,618,691 | 0 | 197 | 1 | 1 | 0 | 38,593,488 | Update, managed to solve my problem. I needed to make sure that all directory paths were correct as I found that HTCondor was looking within its own files for the resources my submission program used. I therefore needed to define a variable in the .py file that contains the directory to the resource | 1 | 0 | 0 | Unable to submit python files to HTCondor- placed in 'held' | 1 | python,ubuntu | 0 | 2016-07-26T15:03:00.000 |
Is there any way in SQLAlchemy by reflection or any other means to get the name that a column has in the corresponding model? For example i have the person table with a column group_id. In my Person class this attribute is refered to as 'group' is there a way to dynamically and generically getting this without importing or call the Person class? | 0 | 0 | 0 | 0 | false | 38,652,751 | 0 | 937 | 1 | 0 | 0 | 38,639,948 | Unfortunately it is most likely not possible... | 1 | 0 | 0 | SQLAlchemy get attribute name from table and column name | 2 | python,sqlalchemy | 0 | 2016-07-28T14:55:00.000 |
I have a Flask webapp running on Pythonanywhere. I've recently been having a look at using Google Cloud's MYSQL service. It requires a list of IP addresses to be whitelisted for access.
How can I find this? I've tried 50.19.109.98, which is the IP address for Python Anywhere, but unless there is a secondary issue that's not it.
Thanks,
Ben | 2 | 2 | 0.379949 | 0 | false | 38,704,698 | 0 | 1,260 | 1 | 0 | 0 | 38,686,528 | Your code running on PythonAnywhere could be on a whole bunch of IPs that could change at any time. You could try to add all the IPs, but that might not be the best/most sustainable. | 1 | 0 | 0 | Pythonanywhere: getting the IP address for database access whitelist | 1 | pythonanywhere | 0 | 2016-07-31T17:17:00.000 |
I am creating a web project where I take in Form data and write to a SQL database.
The forms will be a questionnaire with logic branching. Due to the nature of the form, and the fact that this is an MVP project, I've opted to use an existing form service (e.g Google Forms/Typeform).
I was wondering if it's feasible to have form data submitted to multiple different tables (e.g CustomerInfo, FormDataA, FormDataB, etc.). While this might be possible with a custom form application, I do not think it's possible with Google Forms and/or Typeform.
Does anyone have any suggestions on how to parse user submitted Form data into multiple tables when using Google Forms or Typeform? | 0 | 0 | 0 | 0 | false | 39,074,108 | 1 | 638 | 1 | 0 | 0 | 38,703,892 | You can add a script in the Google spreadsheet with an onsubmit trigger. Then you can do whatever you want with the submitted data. | 1 | 0 | 0 | Using Google Forms to write to multiple tables? | 1 | python,sql,google-forms | 0 | 2016-08-01T16:37:00.000 |
I'm trying to serialize results from a SQLAlchemy query. I'm new to the ORM so I'm not sure how to filter a result set after I've retrieved it. The result set looks like this, if I were to flatten the objects:
A1 B1 V1
A1 B1 V2
A2 B2 V3
I need to serialize these into a list of objects, 1 per unique value for A, each with a list of the V values. I.E.:
Object1:
A: A1
B: B1
V: {V1, V2}
Object2:
A: A2
B: B2
V: {V3}
Is there a way to iterate through all unique values on a given column, but with the ability to return a list of values from the other columns? | 0 | 0 | 0 | 0 | false | 38,882,242 | 0 | 591 | 1 | 0 | 0 | 38,708,645 | Turns out I needed to use association tables and the joinedload() function. The documentation is a bit wonky but I got there after playing with it for a while. | 1 | 0 | 0 | How do I filter SQLAlchemy results based on a columns' value? | 2 | python,orm,sqlalchemy | 0 | 2016-08-01T21:46:00.000 |
I want to draw some simple shapes in an Excel file, such as "arrow, line, rectangle, oval", using XLSXWriter, but I can't find any example of how to do it. Is it possible? If not, what Python library can do that?
Thanks! | 2 | 0 | 0 | 0 | false | 38,740,317 | 0 | 1,339 | 1 | 0 | 0 | 38,738,706 | is it possible?
Unfortunately not. Shapes aren't supported in XlsxWriter, apart from Textbox. | 1 | 0 | 0 | How to drawing shapes using XLSXWriter | 1 | python,xlsxwriter | 0 | 2016-08-03T08:46:00.000 |
I have two separate programs; one counts the daily view stats and another calculates earning based on the stats.
Counter runs first and followed by Earning Calculator a few seconds later.
Earning Calculator works by getting stats from counter table using date(created_at) > date(now()).
The problem I'm facing is that let's say at 23:59:59 Counter added 100 views stats and by the time the Earning Calculator ran it's already the next day.
Since I'm using date(created_at) > date(now()), I will miss out the last 100 views added by the Counter.
One way to solve my problem is to summarise the previous daily report at 00:00:10 every day. But I do not like this.
Is there any other ways to solve this issue?
Thanks. | 0 | 0 | 0 | 0 | false | 38,770,424 | 1 | 53 | 1 | 0 | 0 | 38,766,962 | You have to put a date on your data and instead of using now() use it. | 1 | 0 | 0 | How to solve mysql daily analytics that happens when date changes | 1 | python,mysql,analytics | 0 | 2016-08-04T12:10:00.000 |
I want to develop a project that needs a NoSQL database. After searching a lot, I chose OrientDB. I want to make a REST API that can connect to OrientDB.
Firstly, I wanted to use Flask for development, but I don't know whether it's better to use the Java native driver or the Python binary driver to connect to the database.
Anyone have results of performance between these drivers? | 0 | 0 | 0 | 0 | false | 38,802,276 | 1 | 127 | 1 | 0 | 0 | 38,795,545 | AFAIK on remote connection (with a standalone OrientDB server) performance would be the same.
The great advantage of using the Java native driver is the option to go embedded. If your deployment scenario allows it, you can avoid the standalone server and use OrientDB embedded into your Java application, avoiding network overhead. | 1 | 0 | 0 | Performance between Python and Java drivers with OrientDB | 1 | java,python,performance,orientdb | 0 | 2016-08-05T18:15:00.000 |
I recently updated an entity model to include some extra properties, and noticed something odd. For properties that have never been written, the Datastore query page shows a "—", but for ones that I've explicitly set to None in Python, it shows "null".
In SQL, both of those cases would be null. When I query an entity that has both types of unknown properties, they both read as None, which fits with that idea.
So why does the NDB datastore viewer differentiate between "never written" and "set to None", if I can't differentiate between them programatically? | 1 | 4 | 1.2 | 0 | true | 38,815,611 | 1 | 184 | 1 | 1 | 0 | 38,814,666 | You have to specifically set the value to NULL, otherwise it will not be stored in the Datastore and you see it as missing in the Datastore viewer.
This is an important distinction. NULL values can be indexed, so you can retrieve a list of entities where date of birth, for example, is null. On the other hand, if you do not set a date of birth when it is unknown, there is no way to retrieve a list of entities with date of birth property missing - you'll have to iterate over all entities to find them.
Another distinction is that NULL values take space in the Datastore, while missing values do not. | 1 | 0 | 0 | Why does the Google App Engine NDB datastore have both "—" and "null" for unkown data? | 1 | python,google-app-engine,null,google-cloud-datastore,app-engine-ndb | 0 | 2016-08-07T13:35:00.000 |
So I have this Python pyramid-based application, and my development workflow has basically just been to upload changed files directly to the production area.
Coming close to launch, and obviously that's not going to work anymore.
I managed to edit the connection strings and development.ini and point the development instance to a secondary database.
Now I just have to figure out how to create another copy of the project somewhere where I can work on things and then make the changes live.
At first, I thought that I could just make a copy of the project directory somewhere else and run it with different arguments pointing to the new location. That didn't work.
Then, I basically set up an entirely new project called myproject-dev. I went through the setup instructions:
I used pcreate, and then setup.py develop, and then I copied over my development.ini from my project and carefully edited the various references to myproject-dev instead of myproject.
Then,
initialize_myproject-dev_db /var/www/projects/myproject/development.ini
Finally, I get a nice pyramid welcome page that everything is working correctly.
I thought at that point I could just blow out everything in the project directory and copy over the main project files, but then I got that feeling in the pit of my stomach when I noticed that a lot of things weren't working, like static URLs.
Apparently, I'm referencing myproject in includes and also static URLs, and who knows where else.
I don't think this idea is going to work, so I've given up for now.
Can anyone give me an idea of how people go about setting up a development instance for a Python pyramid project? | 2 | 1 | 0.099668 | 0 | false | 38,886,655 | 1 | 166 | 1 | 0 | 0 | 38,843,404 | Here's how I managed my last Pyramid app:
I had both a development.ini and a production.ini. I actually had a development.local.ini in addition to the other two - one for local development, one for our "test" system, and one for production. I used git for version control, and had a main branch for production deployments. On my prod server I created the virtual environment, etc., then would pull my main branch and run using the production.ini config file. Updates basically involved jumping back into the virtualenv and pulling latest updates from the repo, then restarting the pyramid server. | 1 | 0 | 0 | Trying to make a development instance for a Python pyramid project | 2 | python,pyramid,pylons | 1 | 2016-08-09T06:17:00.000 |
I know it's possible to import Google BigQuery tables to R through bigrquery library. But is it possible to export tables/data frames created in R to Google BigQuery as new tables?
Basically, is there an R equivalent of Python's temptable.insert_data(df) or df.to_sql() ?
thanks for your help,
Kasia | 0 | 1 | 1.2 | 1 | true | 39,119,230 | 0 | 635 | 1 | 0 | 0 | 38,847,743 | It looks like bigrquery package does the job with insert_upload_job(). In the package documentation, it says this function
> is only suitable for relatively small datasets
but it doesn't specify any size limits. For me, it's been working for tens of thousands of rows. | 1 | 0 | 0 | Exporting R data.frame/tbl to Google BigQuery table | 1 | python,r,dataframe,google-bigquery | 0 | 2016-08-09T10:03:00.000 |
I am running a Flask app on an Apache 2.4 server. The app sends requests to an API built by a colleague using the Requests library. The requests are in a specific format and constructed by data stored in a MySQL database. The site is designed to show the feedback from the API on the index, and the user can edit the data stored in the MySQL database (and by extension, the data sent in the request) by another page, the editing page.
So let's say for example a custom field date is set to be "2006", I would access the index page, a request would be sent, the API does its magic and sends back data relevant to 2006. If I then went and changed the date to "2007" then the new field is saved in MySQL and upon navigating back to index the new request is constructed, sent off and data for 2007 should be returned.
Unfortunately that's not happening.
My when I change details on my editing page they are definitely stored to the database, but when I navigate back to the index the request sends the previous set of data. I think that Apache is causing the problem because of two reasons:
When I reset the server (service apache2 restart) the data sent back is the 'proper' data, even though I haven't touched the database. That is, the index is initially requesting 2006 data, I change it to request 2007 data, it still requests 2006 data, I restart the server, refresh the index and only then does it request 2007 data like it should have been doing since I edited it.
When I run this on my local Flask development server, navigating to the index page after editing an entry immediately returns the right result - it feeds off the same database and is essentially identical to the deployed server except that it's not running on apache.
Is there a way that Apache could be caching requests or something? I can't figure out why the server would keep sending old requests until I restart it.
EDIT:
The requests themselves are large and ungainly and the responses would return data that I'm not comfortable with making available for examples for privacy reasons.
I am almost certain that Apache is the issue because as previously stated, the Flask development server has no issues with returning the correct dataset. I have also written some requests to run through Postman, and these also return the data as requested, so the request structure must be fine. The only difference I can see between the local Flask app and the deployed one is Apache, and given that restarting the Apache server 'updates' the requests until the data is changed again, I think that it's quite clearly doing something untoward. | 0 | 0 | 0 | 0 | false | 38,869,412 | 1 | 83 | 1 | 0 | 0 | 38,854,382 | Dirn was completely right, it turned out not to be an Apache issue at all. It was SQL Alchemy all along.
I imagine that SQL Alchemy knows not to do any 'caching' when it requests data on the development server but decides that it's a good idea in production, which makes perfect sense really. It was not using the committed data on every search, which is why restarting the Apache server fixed it because it also reset the connection.
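For anyone hitting the same thing, a hedged sketch of one common fix for this kind of stale-session behaviour (not necessarily the exact change made here), removing the scoped session at the end of each request so the next request reads freshly committed data; the connection string is a placeholder:
from flask import Flask
from sqlalchemy import create_engine
from sqlalchemy.orm import scoped_session, sessionmaker

app = Flask(__name__)
engine = create_engine("mysql://user:pw@host/dbname")  # placeholder connection string
Session = scoped_session(sessionmaker(bind=engine))

@app.teardown_appcontext
def remove_session(exc=None):
    Session.remove()  # discard the session so the next request starts a fresh transaction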
I guess that's what dirn meant by "How are you loading data in your application?" I had assumed that since I turned off Flask's debugging on the development server it would behave just like it would in deployment but it looks like something has slipped through. | 1 | 0 | 0 | Apache server seems to be caching requests | 1 | apache,flask,python-requests | 0 | 2016-08-09T15:06:00.000 |
Right now in Django, I have two databases:
A default MySQL database for my app and
an external Oracle database that, for my purposes, is read-only
There are far more tables in the external database than I need data from, and also I would like to modify the db layout slightly. Is there a way I can selectively choose what data in the external database I would like to sync to my database? The external database is dynamic, and I would like my app to reflect that.
Ex I would like to do something like this:
Say the external database has two tables (out of 100) as follows:
Table47
Eggs
Spam
Sausage
Table48
Name
Age
Color
And I want to keep the data like:
Foo
Eggs
Spam
Type (a foreign key)
Bar
Name
Age
Type (foreign key)
Type
Some fields
Is there a way I could do this in Django? | 0 | -1 | -0.197375 | 0 | false | 38,858,633 | 1 | 218 | 1 | 0 | 0 | 38,858,553 | Basically write models that match what you want your destination tables to be and then write something to migrate data between the two. I'd make this a comment if I could but not enough rep. | 1 | 0 | 0 | How do you selectively sync a database in Django? | 1 | mysql,django,oracle,python-2.7 | 0 | 2016-08-09T19:04:00.000 |
I am writing an application that uses historical time series data to perform simulations.
Is it better for application to load the data from the database into local data wrapper classes before executing the main loop (up to 30 years day by day) or connect to the database each day to pull the required data?
Which is more elegant and efficient? | 0 | 0 | 1.2 | 0 | true | 38,870,995 | 0 | 15 | 1 | 0 | 0 | 38,870,823 | For current computers 30 years of day by day data amounts to almost nothing if your daily data remains below say 10kB. Since your simulation may need efficient retrieval, especially if it combines data from different dates, I'd read all the data in memory in one query and then start processing.
What is considered elegant is changing. Quite some years ago loading everything into memory would have been considered a cardinal sin. Nowadays in-memory databases are common. Since databases in general offer set-level access (especially when queried by SQL, which you probably use) it would be quite inefficient to retrieve data item by item in a loop (although your database may be clever enough to cache things). | 1 | 0 | 0 | Database to wrapper-classes or direct connectivity to database for time-series simulation application? | 1 | python,database,matlab,oop,time-series | 0 | 2016-08-10T10:28:00.000
I am using sqlite (development stage) database for my django project. I would like to store a dictionary field in a model. In this respect, i would like to use django-hstore field in my model.
My question is, can I use a django-hstore dictionary field in my model even though I am using sqlite as my database?
As per my understanding django-hstore can be used along with PostgreSQL (Correct me if i am wrong). Any suggestion in the right direction is highly appreciated. Thank you. | 2 | 3 | 1.2 | 0 | true | 38,875,962 | 1 | 912 | 1 | 0 | 0 | 38,875,927 | hstore is specific to Postgres. It won't work on sqlite.
If you just want to store JSON, and don't need to search within it, then you can use one of the many third-party JSONField implementations. | 1 | 0 | 0 | Django hstore field in sqlite | 1 | python,django,postgresql,sqlite,django-models | 0 | 2016-08-10T14:15:00.000 |
What are some basic steps for troubleshooting and narrowing down the cause for the "django.db.utils.ProgrammingError: permission denied for relation django_migrations" error from Django?
I'm getting this message after what was initially a stable production server but has since had some changes to several aspects of Django, Postgres, Apache, and a pull from Github. In addition, it has been some time since those changes were made and I don't recall or can't track every change that may be causing the problem.
I get the message when I run python manage.py runserver or any other python manage.py ... command except python manage.py check, which states the system is good. | 53 | 4 | 0.197375 | 0 | false | 62,814,973 | 1 | 32,047 | 1 | 0 | 0 | 38,944,551 | If you receive this error and are using the Heroku hosting platform its quite possible that you are trying to write to a Hobby level database which has a limited number of rows.
Heroku will allow you to pg:push the database even if you exceed the limits, but it will be read-only so any modifications to content won't be processed and will throw this error. | 1 | 0 | 0 | Steps to Troubleshoot "django.db.utils.ProgrammingError: permission denied for relation django_migrations" | 4 | python,django,apache,postgresql,github | 0 | 2016-08-14T17:06:00.000 |
I am working on a project that requires me to read a spreadsheet provided by the user and I need to build a system to check that the contents of the spreadsheet are valid. Specifically I want to validate that each column contains a specific datatype.
I know that this could be done by iterating over every cell in the spreadsheet, but I was hoping there is a simpler way to do it. | 1 | 2 | 0.197375 | 0 | false | 39,077,066 | 0 | 109 | 2 | 0 | 0 | 38,961,360 | In openpyxl you'll have to go cell by cell.
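Alternatively, a hedged openpyxl sketch (the file name and range are placeholders) that attaches Excel's own whole-number Data Validation rule to a column instead of checking each cell in Python:
from openpyxl import load_workbook
from openpyxl.worksheet.datavalidation import DataValidation

wb = load_workbook("input.xlsx")  # hypothetical file name
ws = wb.active
dv = DataValidation(type="whole", operator="between", formula1=0, formula2=1000000)
ws.add_data_validation(dv)
dv.add("A2:A1000")  # the column range that should only hold whole numbers
wb.save("validated.xlsx")
# note: the rule constrains new entries; existing bad cells still need Excel's
# "Circle Invalid Data" (or a manual pass) to be flagged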
You could use Excel's builtin Data Validation or Conditional Formatting, which openpyxl supports (roughly as sketched above), for this. Let Excel do the work and talk to it using xlwings. | 1 | 0 | 0 | Use openpyxl to verify the structure of a spreadsheet | 2 | python,excel,openpyxl | 0 | 2016-08-15T19:05:00.000
I am working on a project that requires me to read a spreadsheet provided by the user and I need to build a system to check that the contents of the spreadsheet are valid. Specifically I want to validate that each column contains a specific datatype.
I know that this could be done by iterating over every cell in the spreadsheet, but I was hoping there is a simpler way to do it. | 1 | 1 | 1.2 | 0 | true | 39,088,757 | 0 | 109 | 2 | 0 | 0 | 38,961,360 | I ended up just manually looking at each cell. I have to read them all into my data structures before I can process anything anyways so it actually made sense to check then. | 1 | 0 | 0 | Use openpyxl to verify the structure of a spreadsheet | 2 | python,excel,openpyxl | 0 | 2016-08-15T19:05:00.000 |
I am facing a strange problem right now. I am using pypyodbc to insert data into a test database hosted by AWS. This database that I created was by hand and did not imitate all relations and whatnot between tables. All I did was create a table with the same columns and the same datatypes as the original (let's call it master) database. When I run my code and insert the data it works in the test environment. Then I change it over to the master database and the code runs all the way through but no data is actually inputted. Is there any chance that there are security protocols in place which prevent me from inputting data in through the Python script rather than through a normal SQL query? Is there something I am missing? | 0 | 3 | 1.2 | 0 | true | 39,027,828 | 0 | 54 | 1 | 0 | 0 | 39,027,681 | It sounds like it's not pointing to the correct database. Have you made sure the connection information changes to point to the correct DB? So the server name is correct, the login credentials are good, etc.? | 1 | 0 | 0 | Same code inserts data into one database but not into another | 1 | python,sql-server,pypyodbc | 0 | 2016-08-18T21:18:00.000 |
I have a Django application where I use django-storages and amazon s3 to store images.
I need to move those images to a different account: different user different bucket.
I wanted to know how do I migrate those pictures?
my main concern is the links in my database to all those images, how do I update it? | 0 | 0 | 0 | 0 | false | 39,062,626 | 1 | 82 | 1 | 0 | 0 | 39,062,605 | The URL is relative to the amazon storage address you provide in your settings. so you only need to move the images to a new bucket and update your settings. | 1 | 0 | 0 | changing s3 storages with django-storages | 1 | django,amazon-s3,python-django-storages | 0 | 2016-08-21T09:08:00.000 |
I need to store some daily information in DynamoDB. Basically, I need to store user actions: UserID, StoreID, ActionID and Timestamp.
Each night I would like to process the information generated that day, do some aggregations, some reports, and then I can safely deleted those records.
How should I model this? I mean the hash key and the sort key... I need to have the full timestamp of each action for the reports but in order to query DynamoDB I guess it is easier to also save the date only.
I have some PKs as UserID and StoreID but anyhow I need to process all data each night, not the data related to one user or one store...
Thanks!
Patricio | 2 | 2 | 1.2 | 0 | true | 39,114,304 | 1 | 29 | 1 | 0 | 0 | 39,111,598 | You can use RabbitMQ to schedule jobs asynchronously. This would be faster than multiple DB queries. Basically, this tool allows you to create a job queue (Containing UserID, StoreID & Timestamp) where workers can remove (at midnight if you want) and create your reports (or whatever your heart desires).
This also allows you to scale your system horizontally across nodes. Your workers can be different machines executing these tasks. You will also be safe if your DB crashes (though you may still have to design redundancy for a machine running RabbitMQ service).
DB should be used for persistent storage and not as a queue for processing. | 1 | 0 | 0 | How to build model in DynamoDB if each night I need to process the daily records and then delete them? | 1 | python,database-design,amazon-dynamodb | 0 | 2016-08-23T22:22:00.000 |
For my app, I am using Flask, however the question I am asking is more general and can be applied to any Python web framework.
I am building a comparison website where I can update details about products in the database. I want to structure my app so that 99% of users who visit my website will never need to query the database, where information is instead retrieved from the cache (memcached or Redis).
I require my app to be realtime, so any update I make to the database must be instantly available to any visitor to the site. Therefore I do not want to cache views/routes/html.
I want to cache the entire database. However, because there are so many different variables when it comes to querying, I am not sure how to structure this. For example, if I were to cache every query and then later need to update a product in the database, I would basically need to flush the entire cache, which isn't ideal for a large web app.
I would prefer is to cache individual rows within the database. The problem is, how do I structure this so I can flush the cache appropriately when an update is made to the database? Also, how can I map all of this together from the cache?
I hope this makes sense. | 2 | 0 | 0 | 0 | false | 39,128,415 | 1 | 488 | 1 | 0 | 0 | 39,128,100 | I had this exact question myself, with a PHP project, though. My solution was to use ElasticSearch as an intermediate cache between the application and database.
The trick to this is the ORM. I designed it so that when Entity.save() is called it is first stored in the database, then the complete object (with all references) is pushed to ElasticSearch and only then the transaction is committed and the flow is returned back to the caller.
This way I maintained full functionality of a relational database (atomic changes, transactions, constraints, triggers, etc.) and still have all entities cached with all their references (parent and child relations) together with the ability to invalidate individual cached objects.
Hope this helps. | 1 | 0 | 0 | How do I structure a database cache (memcached/Redis) for a Python web app with many different variables for querying? | 2 | python,database,caching,redis,memcached | 0 | 2016-08-24T16:03:00.000 |
Background
I studied and found that bigQuery doesn't accept schemas defined by online tools (which have different formats, even though meaning is same).
So, I found that if I want to load data (where no. of columns keeps varying and increasing dynamically) into a table which has a fixed schema.
Thoughts
What i could do as a workaround is:
First check if the data being loaded has extra fields.
If it has, a schema mismatch will occur, so first you create a temporary table in BQ and load this data into the table using "autodetect" parameter, which gives me a schema (that is in a format,which BQ accepts schema files).
Now i can download this schema file and use it,to update my exsisting table in BQ and load it with appropriate data.
Suggestion
Any thoughts on this, if there is a better approach please share. | 0 | 1 | 1.2 | 0 | true | 39,172,452 | 0 | 353 | 1 | 0 | 0 | 39,141,642 | We are in the process of releasing a new feature that can update the schema of the destination table within a load/query job. With autodetect and the new feature you can directly load the new data to the existing table, and the schema will be updated as part of the load job. Please stay tuned. The current ETA is 2 weeks. | 1 | 0 | 0 | schema free solution to BigQuery Load job | 1 | python,google-analytics,google-bigquery,google-cloud-platform | 0 | 2016-08-25T09:34:00.000 |
I am using Python with SQLite currently and wondering if it is safe to have multiple threads reading and writing to the database simultaneously. Does SQLite handle data coming in as a queue or have sort of mechanism that will stop the data from getting corrupt? | 2 | 3 | 0.291313 | 0 | false | 39,163,285 | 0 | 1,006 | 2 | 0 | 0 | 39,158,621 | This is my issue too. SQLite uses a locking mechanism which prevents you from doing concurrent operations on a DB. But here is a trick which I use when my DB is small: you can select all your table data into memory, operate on it, and then update the original table.
As I said, this is just a trick and it does not always solve the problem.
I'd advise you to come up with your own trick along those lines. | 1 | 0 | 1 | Can you have multiple read/writes to SQLite database simultaneously? | 2 | python,multithreading,sqlite | 0 | 2016-08-26T05:05:00.000
I am using Python with SQLite currently and wondering if it is safe to have multiple threads reading and writing to the database simultaneously. Does SQLite handle data coming in as a queue or have sort of mechanism that will stop the data from getting corrupt? | 2 | 2 | 0.197375 | 0 | false | 39,158,655 | 0 | 1,006 | 2 | 0 | 0 | 39,158,621 | SQLite has a number of robust locking mechanisms to ensure the data doesn't get corrupted, but the problem with that is if you have a number of threads reading and writing to it simultaneously you'll suffer pretty badly in terms of performance as they all trip over the others. It's not intended to be used this way, even if it does work.
You probably want to look at using a shared database server of some sort if this is your intended usage pattern. They have much better support for concurrent operations. | 1 | 0 | 1 | Can you have multiple read/writes to SQLite database simultaneously? | 2 | python,multithreading,sqlite | 0 | 2016-08-26T05:05:00.000 |
I would like to create a script in Python for logging into MKS Integrity and calling an already defined MKS query. Since I am a newbie in programming, I was wondering if there is any script example for the task. That would be a great help for getting me started.
Thank you! | 0 | 0 | 0 | 0 | false | 41,023,731 | 0 | 1,411 | 1 | 0 | 0 | 39,165,180 | I can't help you with Python, but for the MKS CLI:
Connect to a host: im connect --hostname=%host% --port=%port%
Run a query: im runquery --hostname=%host% --port=%port% %query_name%
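A hedged Python wrapper around those two commands using subprocess (the host, port, credentials, and query name are placeholders, and I'm assuming the im executable is on the PATH; check im connect -? for the exact connect options your version expects):

    import subprocess

    HOST, PORT, QUERY = "mks.example.com", "7001", "My Defined Query"   # placeholders

    def im(*args):
        """Run an Integrity CLI command and return its output; raises on failure."""
        result = subprocess.run(["im", *args], capture_output=True, text=True, check=True)
        return result.stdout

    # --user/--password are assumed flags; without them the CLI may prompt interactively
    im("connect", f"--hostname={HOST}", f"--port={PORT}",
       "--user=myuser", "--password=secret")
    print(im("runquery", f"--hostname={HOST}", f"--port={PORT}", QUERY))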
You can see the help for each command if you just run im <command> -? | 1 | 0 | 0 | Python script for MKS integrity query | 1 | python,mks-integrity | 0 | 2016-08-26T11:23:00.000
I have encountered a problem that I cannot figure out.
I'm working on an application written in Python with a Sybase ASE database, using sybpydb to communicate with the database.
Now I need to update a row where one of the columns in the WHERE clause is of the numeric(10) data type.
When selecting the row, Python treats the data as a float; no problem there.
But when I try to update the row using the numeric value I just got from the select, I get an "Invalid data type" error.
My first thought was to try converting the float to an integer, but that still gives the same error. | 0 | 0 | 0 | 0 | false | 39,202,310 | 0 | 152 | 1 | 0 | 0 | 39,168,251 | You need to capture the actual SQL query text that is sent to the ASE server before conclusions can be drawn. | 1 | 0 | 0 | Sybase numeric datatype and Python | 1 | python,sap-ase | 0 | 2016-08-26T14:06:00.000
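One hedged way to act on that advice for the Sybase question above: print the exact statement and parameters before executing, and try passing the key as a Decimal rather than a float. This assumes sybpydb exposes the usual DB-API cursor with ? placeholders; the connection arguments, table, and column names are all placeholders:

    from decimal import Decimal
    import sybpydb

    conn = sybpydb.connect(user="sa", password="secret")   # server taken from the environment/interfaces file
    cur = conn.cursor()

    key = Decimal("1234567890")        # the numeric(10) value that came back as a float
    sql = "UPDATE posts SET title = ? WHERE post_id = ?"   # hypothetical table and columns

    print("about to send:", sql, ("new title", key))       # capture what actually goes to ASE
    cur.execute(sql, ("new title", key))
    conn.commit()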
I have this huge Excel (xls) file that I have to read data from. I tried using the xlrd library, but it is pretty slow. I then found out that converting the Excel file to a CSV file manually and reading the CSV file is orders of magnitude faster.
But I cannot ask my client to save the xls as csv manually every time before importing the file, so I thought of converting the file on the fly, before reading it.
Has anyone done any benchmarking as to which procedure is faster:
Open the Excel file with the xlrd library and save it as a CSV file, or
Open the Excel file with the win32com library and save it as a CSV file?
I am asking because the slowest part is the opening of the file, so if I can get a performance boost from using win32com, I would gladly try it. | 1 | 1 | 0.197375 | 0 | false | 39,360,812 | 0 | 582 | 1 | 0 | 0 | 39,179,880 | If you need to read the file frequently, I think it is better to save it as CSV. Otherwise, just read it on the fly.
For performance, I think win32com wins; however, considering cross-platform compatibility, xlrd is better.
win32com is more powerful. With it, one can handle Excel in all ways (e.g. reading/writing cells or ranges).
However, if you are seeking a quick file conversion, I think pandas.read_excel also works.
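For example, both conversions can be sketched roughly like this (the file paths are placeholders; the win32com variant is Windows-only and assumes Excel is installed):

    import pandas as pd

    # Option A: pandas (uses an Excel reader such as xlrd under the hood for .xls files)
    pd.read_excel("huge_report.xls", sheet_name=0).to_csv("huge_report.csv", index=False)

    # Option B: let Excel itself do the conversion via COM
    import win32com.client

    excel = win32com.client.Dispatch("Excel.Application")
    excel.DisplayAlerts = False
    wb = excel.Workbooks.Open(r"C:\data\huge_report.xls")
    wb.SaveAs(r"C:\data\huge_report.csv", FileFormat=6)   # 6 = xlCSV
    wb.Close(SaveChanges=False)
    excel.Quit()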
I am using another package, xlwings, so I am also interested in a comparison among these packages.
In my opinion:
I would use pandas.read_excel for a quick file conversion.
If I needed more processing inside Excel, I would choose win32com. | 1 | 0 | 0 | XLRD vs Win32 COM performance comparison | 1 | python,excel,csv,win32com | 1 | 2016-08-27T10:07:00.000
I want to execute a script (probably written in Python) when an update query is executed on a MySQL database. The query is going to be executed from an external system written in PHP to which I don't have access, so I can't edit the source code. The MySQL server is installed on our machine. Any ideas how I can accomplish this, or is it even possible? | 1 | 0 | 1.2 | 0 | true | 39,244,221 | 0 | 54 | 1 | 0 | 0 | 39,243,626 | No, it is not possible to call external scripts from MySQL.
The only thing you can do is add an ON UPDATE trigger that writes into some queue table. Then you have the Python script polling the queue and doing whatever it's supposed to do with the rows it finds (a rough sketch follows below). | 1 | 0 | 0 | Executing script when SQL query is executed | 1 | php,python,mysql,database | 0 | 2016-08-31T07:45:00.000
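A hedged sketch of that trigger-plus-polling pattern; every table, column, and credential name below is invented for illustration, and it assumes the mysql-connector-python package:

    import time
    import mysql.connector

    # One-time setup (run once on an admin connection):
    #   CREATE TABLE change_queue (id INT AUTO_INCREMENT PRIMARY KEY, order_id INT);
    #   CREATE TRIGGER orders_after_update AFTER UPDATE ON orders
    #       FOR EACH ROW INSERT INTO change_queue (order_id) VALUES (NEW.id);

    def handle_update(order_id):
        print("order changed:", order_id)          # stand-in for the real processing

    def poll_forever():
        conn = mysql.connector.connect(user="app", password="secret", database="shop")
        cur = conn.cursor()
        while True:
            cur.execute("SELECT id, order_id FROM change_queue ORDER BY id")
            for queue_id, order_id in cur.fetchall():
                handle_update(order_id)
                cur.execute("DELETE FROM change_queue WHERE id = %s", (queue_id,))
            conn.commit()
            time.sleep(5)                          # poll interval

    if __name__ == "__main__":
        poll_forever()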