Web Development | Data Science and Machine Learning | Question | is_accepted | Q_Id | Score | Other | Database and SQL | Users Score | Answer | Python Basics and Environment | ViewCount | System Administration and DevOps | Q_Score | CreationDate | Tags | Title | Networking and APIs | Available Count | AnswerCount | A_Id | GUI and Desktop Applications |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1 | 0 | I'm trying to start a Python script that will parse a CSV file uploaded from the UI by the user. On the client side, how do I make a call to start the Python script (I've read AJAX HTTP requests work)? And secondly, how do I take the user input (just a simple file upload with the HTML input tag) so that it can be read by the Python script?
The back-end Python script works perfectly through the command line; I just need to create a front end for easier use. | false | 42,660,299 | 0 | 0 | 0 | 0 | Many (if not all) server-side technologies can solve your problem: CGI, Java Servlets, NodeJS, Python, PHP, etc.
The steps are:
In browser, upload file via AJAX request.
In server, receive file sent from browser, save it somewhere in server disk.
After file is saved, invoke your python script to handle the file.
As your current script is written in Python, I'd say Python is the best choice for the server-side technology. | 0 | 899 | 0 | 0 | 2017-03-07T23:29:00.000 | javascript,jquery,python,ajax,http | Start python script from client side | 1 | 1 | 1 | 42,662,740 | 0 |
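A minimal sketch of the three steps in the answer above, assuming Flask (the framework choice, the /upload route, and the parse_csv.py script name are illustrative, not from the answer):

```python
# Receive the AJAX-uploaded CSV, save it to disk, then invoke the existing script.
import subprocess
from flask import Flask, request

app = Flask(__name__)

@app.route('/upload', methods=['POST'])
def upload():
    f = request.files['file']            # the <input type="file"> field
    path = '/tmp/' + f.filename          # step 2: save somewhere on the server disk
    f.save(path)
    # step 3: invoke the existing command-line script on the saved file
    output = subprocess.check_output(['python', 'parse_csv.py', path])
    return output

if __name__ == '__main__':
    app.run()
```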
0 | 0 | My question is:
How do I join my Telegram bot to a public Telegram channel that I am not an administrator of, without asking the channel's admin to add my bot to the channel?
Maybe via the chat ID of the channel, or through the channel's link?
Thank you in advance :)
Edit:
I have heard that some people claim to do this: join their bot to channels and scrape data.
So if Telegram does not allow it, how can they do it? Can you think of any workaround?
Appreciate your time! | true | 42,674,340 | 1.2 | 1 | 0 | 10 | To date, only the Channel Creator can add a bot (as Administrator or Member) to a Channel, whether public or private. The other Channel Administrators cannot even add a normal member, let alone a bot; they can only post into the channel.
As for joining the bot via an invite link, there is as yet no such method in the Bot API. All claims of adding a bot to a channel by anyone other than the Creator are false. | 0 | 24,623 | 0 | 16 | 2017-03-08T14:44:00.000 | python,telegram,telegram-bot,python-telegram-bot | How to join my Telegram Bot to PUBLIC channel | 1 | 1 | 3 | 42,696,153 | 0
1 | 0 | I am attempting to write an endpoint in Python Flask that requires inputs from 2 users to run the function. I would like to have it so that user 1 would send a request with inputs to the backend and then wait for user 2 to send inputs as well. The endpoint would then calculate a result and output it to both users. What is the most efficient way to do this? | true | 42,715,005 | 1.2 | 0 | 0 | 0 | This is best done by some asynchronous mechanism. The best solution will depend on your exact use-case.
Webhook - If user 1 and user 2 are other applications using your api, the simplest way would be with a webhook mechanism. Where user 1 and user 2 would subscribe to the api by depositing a url that your application calls with the results once both inputs are sent.
Polling - You provide an endpoint that both users need to poll to check if the api is ready to send back the results.
Email - You simply email both users the results once you receive both inputs. Or SMS, or IM message ...
Persistent connections - With a mechanism like WebSockets or HTTP/2 push. You can achieve this with a Python application, but this is the most complex solution and in most cases not needed. | 0 | 124 | 0 | 1 | 2017-03-10T09:39:00.000 | python,flask | Multiple Users Connecting to an API Endpoint | 1 | 1 | 1 | 42,715,400 | 0
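A hedged sketch of the polling option (2), assuming Flask; all route names, fields, and the placeholder calculation are illustrative:

```python
from flask import Flask, request, jsonify

app = Flask(__name__)
inputs = {}      # in production, use a database or cache instead of module state
result = None

@app.route('/submit/<user>', methods=['POST'])
def submit(user):
    global result
    inputs[user] = request.get_json()
    if len(inputs) == 2:                      # both users have now submitted
        result = sum(v['value'] for v in inputs.values())   # placeholder calculation
    return jsonify(status='received')

@app.route('/result')
def poll():
    if result is None:
        return jsonify(ready=False)           # both clients keep polling
    return jsonify(ready=True, value=result)
```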
1 | 0 | I need to develop an application that takes as input an url of an e-commerce website and scrap the products titles, prices with the categories and sub-categories.
Scrapy seems like a good solution for scraping data, so my question is: how can I tell Scrapy where the titles, prices, categories and subcategories are so it can extract them, knowing that websites have different structures and don't really use the same tags?
EDIT: Let me rephrase my question: can't we write a generic spider that takes the start URL, allowed domains, and XPath or CSS selectors as arguments? | false | 42,716,807 | -0.197375 | 0 | 0 | -1 | Categories and subcategories are usually in the breadcrumbs.
In general the css selector for those will be .breadcrumb a and that will probably work for 80% of modern e-commerce websites. | 0 | 1,132 | 0 | 0 | 2017-03-10T11:02:00.000 | python,scrapy,web-crawler,e-commerce,screen-scraping | Scraping products data with categories from e-commerce | 1 | 1 | 1 | 42,729,874 | 0 |
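On the EDIT: yes, a spider can take its start URL and selectors as -a arguments. A sketch (the default selectors, including the breadcrumb one from the answer, are assumptions):

```python
import scrapy

class GenericProductSpider(scrapy.Spider):
    # run as: scrapy crawl generic -a start=http://... -a title_css='h1::text'
    name = 'generic'

    def __init__(self, start=None, title_css='h1::text',
                 price_css='.price::text', crumb_css='.breadcrumb a::text', **kwargs):
        super(GenericProductSpider, self).__init__(**kwargs)
        self.start_urls = [start]
        self.title_css, self.price_css, self.crumb_css = title_css, price_css, crumb_css

    def parse(self, response):
        crumbs = response.css(self.crumb_css).extract()
        yield {
            'title': response.css(self.title_css).extract_first(),
            'price': response.css(self.price_css).extract_first(),
            'category': crumbs[1] if len(crumbs) > 1 else None,       # from breadcrumbs
            'subcategory': crumbs[2] if len(crumbs) > 2 else None,
        }
```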
I have been updating about 1000 sheets using Python. Each takes about 2-3 minutes to update. The job ran most of the day yesterday (~8 hrs). And when I look at my quotas for the Google Sheets API in console.developers.google.com, I have used about 3k in the read group and 4k in the write group, which is nowhere near the 40k quota that is given.
Now all of the 1000 sheets interact with one sheet because all of the keys are on that one sheet.
In fact, I have tried using 2 different project sign-ins, one through my company domain and one through my Gmail, both of which have access to these files. When I run it with the company credentials, it also gives me an HttpError 429, even though 0 requests have been made with those credentials.
Is there some hidden quota I don't know about? Like calls to one spreadsheet? That's what it seems like. Google, are you cutting me off to the spreadsheet because I accessed it for 8hrs yesterday?
It is bombing on spreadsheets().values().update and spreadsheets().batchUpdate | false | 42,735,670 | 0 | 0 | 0 | 0 | I had this issue with a long-running script... I am putting batches of data in spreadsheets, and every 100k rows I start on a new spreadsheet. The data is rolled up on a separate spreadsheet using IMPORTRANGE(). The first 3 were fine, but the 4th was bombing with the "Resource has been exhausted" error. I noticed that when I saw this error, the IMPORTRANGE() was also failing in the browser. The error must indicate something wrong with the server where the spreadsheet is stored/served, and is not API-related. Switching to a new spreadsheet fixed the error for me. | 0 | 10,665 | 0 | 3 | 2017-03-11T13:18:00.000 | python,google-sheets,google-sheets-api,quota | Google sheets API v4: HttpError 429. "Resource has been exhausted (e.g. check quota)." | 1 | 1 | 3 | 59,957,826 | 0
0 | 0 | I would like to use ANTLR4 with Python 2.7 and for this I did the following:
I installed the package antlr4-4.6-1 on Arch Linux with sudo pacman -S antlr4.
I wrote a MyGrammar.g4 file and successfully generated Lexer and Parser Code with antlr4 -Dlanguage=Python2 MyGrammar.g4
Now executing for example the generated Lexer code with python2 MyGrammarLexer.py results in the error ImportError: No module named antlr4.
What could be the problem? FYI: I have both Python2 and Python3 installed - I don't know if that might cause any trouble. | false | 42,737,716 | 0.066568 | 1 | 0 | 1 | The problem was that antlr4 was only installed for Python3 and not Python2. I simply copied the antlr4 files from /usr/lib/python3.6/site-packages/ to /usr/lib/python2.7/site-packages/ and this solved the problem! | 0 | 13,812 | 0 | 5 | 2017-03-11T16:33:00.000 | python,antlr,antlr4 | No module named antlr4 | 1 | 1 | 3 | 42,750,891 | 0
1 | 0 | I want to get the INSPECT ELEMENT data of a website, let's say Truecaller, so that I can get the name of the person whose mobile number I searched.
But whenever I make a Python script, it gives me the PAGE SOURCE, which does not contain the required information.
Kindly help me. I am a beginner, so kindly excuse any mistake in the question. | true | 42,757,866 | 1.2 | 0 | 0 | 0 | INSPECT ELEMENT and VIEW PAGE SOURCE are not the same.
View source shows you the original HTML source of the page. When you view source from the browser, you get the HTML as it was delivered by the server, not after javascript does its thing.
The inspector shows you the DOM as it was interpreted by the browser. This includes for example changes made by javascript which cannot be seen in the HTML source. | 0 | 3,156 | 0 | 0 | 2017-03-13T06:46:00.000 | javascript,python,html,python-2.7,dom | How do I get the data of a website as shown in INSPECT ELEMENT and not in VIEW PAGE SOURCE? | 1 | 2 | 3 | 42,757,939 | 0 |
1 | 0 | I want to get the INSPECT ELEMENT data of a website, let's say Truecaller, so that I can get the name of the person whose mobile number I searched.
But whenever I make a Python script, it gives me the PAGE SOURCE, which does not contain the required information.
Kindly help me. I am a beginner, so kindly excuse any mistake in the question. | false | 42,757,866 | 0 | 0 | 0 | 0 | What you see in the element inspector is not the source code anymore.
You see a JavaScript-manipulated version.
Instead of trying to execute all the scripts on your own, which may lead to multiple problems like cross-origin security and so on,
search the network tab for the actual search request and its parameters.
Then request the data from there; that is the trick.
Also, it seems you need to be logged in to search on the URL you provided, so you will eventually need to adapt cookies/session/headers and so on, just like a request from your browser would.
So what I want to say is: if the data you are looking for is not in the source, analyse where it is coming from. | 0 | 3,156 | 0 | 0 | 2017-03-13T06:46:00.000 | javascript,python,html,python-2.7,dom | How do I get the data of a website as shown in INSPECT ELEMENT and not in VIEW PAGE SOURCE? | 1 | 2 | 3 | 42,758,772 | 0
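A sketch of "request the data from there": replay the request found in the network tab with the same headers and cookies. The URL, parameter, and cookie values are placeholders to be copied from your own browser session:

```python
import requests

session = requests.Session()
session.headers['User-Agent'] = 'Mozilla/5.0'           # mimic a browser request
session.cookies.set('sessionid', '<copied-from-browser>')

resp = session.get('https://example.com/search', params={'q': '<number>'})
print(resp.text)    # many such endpoints return JSON rather than HTML
```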
1 | 0 | I'm trying to use ZEEP v1.2.0 to connect to some service and ran into this issue.
I just execute: python -mzeep http://fulfill.sfcservice.com/default/svc/wsdl
Result:
zeep.exceptions.LookupError: No type 'string' in namespace
http://www.chinafulfill.com/CffSvc/. Available types are: [...]
Am I missing anything here to test this? | true | 42,759,735 | 1.2 | 0 | 0 | 1 | No, this is a bug in the WSDL file: it defines an element with type "tns:string"; I assume they meant "xsd:string".
See the following line in the wsdl: | 0 | 996 | 0 | 0 | 2017-03-13T09:00:00.000 | python,soap,wsdl,zeep | ZEEP WSDL LookupError: No type 'string' in namespace | 1 | 1 | 1 | 42,760,121 | 0 |
0 | 0 | How can you, with Python 2.7 and Selenium, read the POST or GET request that the driver sends when it clicks on a button that fires a request? And is it also possible to read the response? | false | 42,774,789 | 0 | 0 | 0 | 0 | I don't know whether any libraries will do this for you, but I think you can simply set up a thread to run something like tcpdump to capture all HTTP packets and store them somewhere while the test is running in the main process.
You can start the thread before clicking the buttons and analyse the captured packets after your test to find the ones containing the request you want. | 0 | 754 | 0 | 0 | 2017-03-13T22:44:00.000 | python,python-2.7,selenium,request,http-post | Python Selenium, read POST/GET Request when clicking a button | 1 | 1 | 1 | 42,774,970 | 0
0 | 0 | In the AWS CLI we can set the output format to JSON or table. Now I can get JSON output with json.dumps; is there any way to achieve output in table format?
I tried PrettyTable but with no success | false | 42,787,327 | 0.197375 | 0 | 1 | 1 | Python Boto3 does not return the data in tabular format. You will need to parse the data and use another Python library to output it as a table. PrettyTable works well for me; read the PrettyTable docs and debug your code. | 0 | 980 | 0 | 0 | 2017-03-14T13:27:00.000 | json,python-2.7,boto3,aws-cli,prettytable | Is it possible to get Boto3 | python output in tabular format | 1 | 1 | 1 | 42,952,740 | 0
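A sketch of the parse-then-render approach: Boto3 returns plain dicts, and PrettyTable renders them. The service call and chosen fields are illustrative:

```python
import boto3
from prettytable import PrettyTable

ec2 = boto3.client('ec2')
table = PrettyTable(['InstanceId', 'Type', 'State'])

# flatten the nested Reservations/Instances structure into table rows
for reservation in ec2.describe_instances()['Reservations']:
    for instance in reservation['Instances']:
        table.add_row([instance['InstanceId'], instance['InstanceType'],
                       instance['State']['Name']])

print(table)
```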
0 | 0 | I need to implement an upload function that can resume from the point of the last interruption via SFTP.
I'm trying paramiko, but I cannot find any example of this. Can anybody give me some advice?
Best regards | true | 42,802,024 | 1.2 | 1 | 0 | 1 | SFTP.open(mode='a') opens a file in append mode. So first you can call SFTP.stat() to get the current size of the file (on the remote side), then open(mode='a') it and append new data to it. | 0 | 41 | 0 | 0 | 2017-03-15T05:53:00.000 | python,sftp,paramiko | Does paramiko have a function to resume an upload from the point of the last interruption? | 1 | 1 | 2 | 42,807,503 | 0
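A sketch of the stat-then-append approach with paramiko's SFTPClient (host, credentials, and file names are placeholders):

```python
import paramiko

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect('host', username='user', password='secret')
sftp = ssh.open_sftp()

try:
    done = sftp.stat('remote.bin').st_size    # bytes already on the server
except IOError:
    done = 0                                  # remote file doesn't exist yet

src = open('local.bin', 'rb')
src.seek(done)                                # skip what was already uploaded
dst = sftp.open('remote.bin', 'a')            # append mode, as described above
while True:
    chunk = src.read(32768)
    if not chunk:
        break
    dst.write(chunk)
dst.close()
src.close()
sftp.close()
ssh.close()
```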
0 | 0 | I am trying to send an image from a Node.js script to a Python script using python-shell. From what I know, I should use a binary format.
I know that on the Python side I can use these 2 functions:
import sys
sys.stdout.write() and sys.stdin.read()
But I am not sure what the Node.js side is going to look like. (Which functions can I use, and how can I use them?) | false | 42,804,006 | 0 | 0 | 0 | 0 | I tried to encode the image and send it, but it did not work. So I used socket programming instead and it worked wonderfully. | 0 | 631 | 0 | 1 | 2017-03-15T08:00:00.000 | javascript,python,node.js | Sending Images from nodejs to Python script via standard input/output | 1 | 1 | 2 | 42,963,477 | 0
1 | 0 | I am downloading an .apk file from a link.
I open browser > Go to URL > Chrome browser is displayed.
How do I click on "OK" from the alert.
I am using AppiumLibrary. | false | 42,817,322 | 0 | 0 | 0 | 0 | capabilities.setCapability("autoAcceptAlerts", true); | 0 | 480 | 0 | 1 | 2017-03-15T17:59:00.000 | android,appium,robotframework,python-appium | Handle browser alert/pop up in Robot framework + appium | 1 | 1 | 1 | 62,967,696 | 0
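The answer above is Java; the same capability set from Python Appium might look like the sketch below. Note that autoAcceptAlerts is honoured by iOS drivers; for browser alerts on Android you may need driver.switch_to.alert.accept() instead. All capability values are illustrative:

```python
from appium import webdriver

caps = {
    'platformName': 'Android',
    'deviceName': 'emulator-5554',
    'browserName': 'Chrome',
    'autoAcceptAlerts': True,     # auto-dismiss alerts where the driver supports it
}
driver = webdriver.Remote('http://127.0.0.1:4723/wd/hub', caps)
```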
0 | 0 | I'm developing a custom packet sniffer in Python3.
It does not have to be platform independant. I'm using Linux.
The method I use is to recvfrom() from a socket (AF_PACKET, SOCK_RAW).
It works fine but I have problem with info returned by recvfrom().
recvfrom() returns a tuple with 5 components.
Example: ('eno1', 2054, 0, 1, b'\x00!\x9b\x16\xfa\xd1')
How do I interpret the last 4 components?
Where is it documented?
I prefer not to use libpcap or scapy.
OK! Here's a code fragment:
s = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.ntohs(0x0003))
...
packet, pktAdr = s.recvfrom(65565)
print('pktAdr:' + str(pktAdr))
Thanks! | false | 42,821,309 | 0.197375 | 0 | 0 | 1 | It's not documented on docs.python.org so I did some research.
I'm now in a position to answer my own question.
The tuple returned by recvfrom is similar to a sockaddr_ll structure returned by the Linux kernel.
The tuple contains 5 components:
- [0]: interface name (eg 'eth0')
- [1]: protocol at the physical level (defined in linux/if_ether.h)
- [2]: packet type (defined in linux/if_packet.h)
- [3]: ARPHRD (defined in linux/if_arp.h)
- [4]: physical address
The example provided in the question can be decoded into:
- 'eno1'
- ARP protocol (0x806)
- Incoming packet
- Ethernet frame
- MAC address
In case of a WiFi interface in monitor mode, the [3] element would be 803 (meaning 'IEEE802.11 + RadioTap header').
Hope this will help somebody | 0 | 873 | 0 | 1 | 2017-03-15T21:47:00.000 | python-3.x,sockets,packet-sniffers | How to interpret result of recvfrom (raw socket) | 1 | 1 | 1 | 45,215,859 | 0 |
0 | 0 | How should be the timeout of the socket be estimated in a stop-and-wait protocol over UDP? Can it be just any arbitrary integer multiple of the round-trip-time (RTT)? | true | 42,871,248 | 1.2 | 0 | 0 | 0 | Ideally, you want the timeout to be equal to the exact time it takes from the moment you've sent a packet, until the moment you receive the acknowledgement from the other party - this pretty much the RTT.
But it's almost impossible to know the exact ideal timeout in advance, so we have to guess. Let's consider what happens if we guess wrong.
If we use a timeout which happens to be lower than the actual RTT, we'll time out before the ack has had time to arrive. This is known as a premature timeout, and it's bad - we're re-transmitting a packet even though the transmission was successful.
If we use a timeout which happens to be higher than the actual RTT, it'll take us longer to identify and re-transmit a lost packet. This is also bad - we could have identified the lost packet earlier and re-transmitted it.
Now, about your question:
Can it be just any arbitrary integer multiple of the round-trip-time
(RTT)?
First of all, the answer is yes. You could use any positive integer, from 1 to basically infinity. Moreover, you don't even have to use an integer; what's wrong with a multiplier of x2.5?
But what's important to understand is what would happen with different multipliers. If you pick a low multiplier such as 1, you'd run into quite a few premature timeouts. If you pick a large number, such as 100, you'd have many late timeouts, which will cause your transmissions to halt for long periods of time. | 0 | 1,651 | 0 | 0 | 2017-03-18T06:24:00.000 | python,sockets,network-programming,udp | Setting timeout in Stop-and-Wait protocol in UDP | 1 | 1 | 2 | 42,891,009 | 0
0 | 0 | I'm trying to work with the Google Contacts API using Python 3.5. This presents an issue because the gdata library that is supposed to be used is not up to date for Python 3.5. I can use OAuth2 to grab the contact data in JSON and use that in my project, but part of the application is also adding a contact to the user's contact list. I cannot find any documentation on this part besides using the gdata library, which I cannot do. The majority of the project requires Python 3, so switching to Python 2 is not something I could easily do. Is there any further documentation, or a workaround for using the gdata library with Python 3? I'm actually very surprised that the Contacts API seems so thinly supported in Python. If anyone has any further information it would be much appreciated. | false | 42,891,919 | 0.099668 | 0 | 0 | 1 | For me, I had to install it like pip install git+https://github.com/dvska/gdata-python3 (without the egg), since the package itself contains a src dir; otherwise import gdata would fail. (Python 3.6.5 in a virtual env) | 1 | 715 | 0 | 1 | 2017-03-19T20:32:00.000 | python-3.x,google-contacts-api | Python 3.5 support for Google-Contacts V3 API | 1 | 2 | 2 | 52,516,469 | 0
0 | 0 | I'm trying to work with the Google Contacts API using Python 3.5. This presents an issue because the gdata library that is supposed to be used is not up to date for Python 3.5. I can use OAuth2 to grab the contact data in JSON and use that in my project, but part of the application is also adding a contact to the user's contact list. I cannot find any documentation on this part besides using the gdata library, which I cannot do. The majority of the project requires Python 3, so switching to Python 2 is not something I could easily do. Is there any further documentation, or a workaround for using the gdata library with Python 3? I'm actually very surprised that the Contacts API seems so thinly supported in Python. If anyone has any further information it would be much appreciated. | false | 42,891,919 | 0 | 0 | 0 | 0 | GData Py3k version: pip install -e git+https://github.com/dvska/gdata-python3#egg=gdata | 1 | 715 | 0 | 1 | 2017-03-19T20:32:00.000 | python-3.x,google-contacts-api | Python 3.5 support for Google-Contacts V3 API | 1 | 2 | 2 | 44,453,995 | 0
0 | 0 | Is there a way to get a list of all tweets sent to a twitter user?
I know I can get all tweets sent by the user using api.GetUserTimeline(screen_name='realDonaldTrump'), but is there a way to retrieve tweets to that user? | true | 42,937,469 | 1.2 | 1 | 0 | 0 | Use: api.GetSearch(raw_query='q=to%3ArealDonaldTrump') | 0 | 397 | 0 | 0 | 2017-03-21T20:24:00.000 | twitter,python-twitter | Get all tweets to user python-twitter | 1 | 1 | 1 | 44,587,156 | 0 |
0 | 0 | Can someone please tell me if there is any way to parse an XML file (size = 600 MB) with untangle/Python?
In fact, I use untangle.parse(file.xml) and I get this error message:
Process finished with exit code 137
Is there any way to parse this file in blocks, for example, or another option for the untangle.parse() function, or a specific Linux configuration...?
Thanks | false | 42,972,563 | -0.099668 | 0 | 0 | -1 | Is it possible to use SAX with untangle? I mean, load the file with SAX and read it with untangle, because I have a lot of code written using untangle that I developed over a long time, and I don't want to restart from scratch
Thanks | 0 | 428 | 0 | 0 | 2017-03-23T09:52:00.000 | python,xml,linux,fedora | Parsing Huge XML Files With 600M | 1 | 1 | 2 | 42,973,365 | 0 |
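For context: exit code 137 means the process was killed (SIGKILL), almost always by the Linux OOM killer, because untangle builds the entire 600 MB document in memory. A streaming parse keeps memory flat; a stdlib sketch (the repeating tag name 'record' is a placeholder for whatever element repeats in your file):

```python
import xml.etree.cElementTree as ET

def handle(elem):
    pass    # your per-record processing goes here

for event, elem in ET.iterparse('file.xml', events=('end',)):
    if elem.tag == 'record':
        handle(elem)
        elem.clear()    # free the processed element so memory stays flat
```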
0 | 0 | I have a Python application that uses eventlet green threads (a pool of 1000) to make HTTP connections. Whenever the client fires more than 1000 parallel requests, ETIMEDOUT occurs. Can anyone help me out with the possible reason? | false | 42,977,272 | 0 | 0 | 0 | 0 | The most likely reason in this case: DNS server request throttling. You can easily check if that's the case by eliminating DNS resolving (request http://{ip-address}/path, and don't forget to add a proper Host: header). If you do web crawling these steps are not optional, you absolutely must:
control concurrency automatically (without human action) based on aggregate (i.e. average) execution time. This applies at all levels independently. Back off concurrent DNS requests if you get DNS responses slower. Back off TCP concurrency if you get response speed (body size / time) slower. Back off overall request concurrency if your CPU is overloaded - don't request more than you can process.
retry on temporary failures, each time increase wait-before-retry period, search backoff algorithm. How to decide if an error is temporary? Mostly research, trial and error.
run local DNS server, find and configure many upstreams
Next popular problem with high concurrency that you'll likely face is OS limit of number of open connections and file descriptors. Search sysctl somaxconn and ulimit nofile to fix those. | 0 | 194 | 0 | 0 | 2017-03-23T13:20:00.000 | python,rest,http,networking,eventlet | ETIMEDOUT occurs when client(jmeter) fired more than 1000 parallel HTTP requests | 1 | 1 | 1 | 42,987,615 | 0 |
0 | 0 | I am using the socket library to emulate sending packets over the network.
Documentation for socket.settimeout() method says..
... socket.settimeout(value)
Set a timeout on blocking socket
operations. The value argument can be a nonnegative float expressing
seconds, or None. If a float is given, subsequent socket operations
will raise a timeout exception if the timeout period value has elapsed
before the operation has completed. Setting a timeout of None disables
timeouts on socket operations. s.settimeout(0.0) is equivalent to
s.setblocking(0); s.settimeout(None) is equivalent to
s.setblocking(1).
What exactly are the blocking socket operations? Is it just recv* calls, or does it also include send calls?
Thank you in advance. | true | 42,983,291 | 1.2 | 0 | 0 | 1 | Blocking operations are operations which cannot be fully handled locally, but where the socket might need to wait for the peer of the connection. For TCP sockets this therefore obviously includes accept, connect and recv. But it also includes send: send might block if the local socket write buffer is full, i.e. no more data can be written to it. In this case it must wait for the peer to receive and acknowledge enough data that it can be removed from the write buffer, leaving room to write new data. | 0 | 641 | 0 | 1 | 2017-03-23T17:43:00.000 | python,sockets | Python socket - what exactly are the "blocking" socket operations? | 1 | 1 | 1 | 42,983,798 | 0
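A small illustration: once settimeout() is set, connect, send, and recv all raise socket.timeout when the deadline passes. Host and port are placeholders:

```python
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.settimeout(2.0)                      # applies to subsequent blocking operations
try:
    s.connect(('example.com', 80))
    s.sendall(b'GET / HTTP/1.0\r\nHost: example.com\r\n\r\n')
    data = s.recv(4096)                # raises socket.timeout after 2 s of silence
except socket.timeout:
    print('blocking operation timed out')
finally:
    s.close()
```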
0 | 0 | I'm getting started with websockets. Trying to write a python server for a browser based (javascript) client.
I have also never really done asynchronous programming before (except "events"). I was trying to avoid it - I have searched and searched for an example of websocket use that did not involve importing tornado or asyncio. But I've found nothing, even the "most basic examples" do it.
So now I'm internalising it, but clear it up for me - is "full duplex" server code necessarily asynchronous? | true | 42,988,519 | 1.2 | 0 | 0 | 0 | Full-duplex servers are necessarily concurrent. In Tornado and asyncio, concurrency is based on the asynchronous programming model, so if you use a websocket library based on one of those packages, your code will need to be asynchronous.
But that's not the only way: full-duplex websockets could be implemented in a synchronous way by dedicating a thread to reading from the connection (in addition to whatever other threads you're using). I don't know if there are any python websocket implementations that support this kind of multithreading model for full-duplex, but that's how Go's websocket implementation works for example.
That said, the asynchronous/event-driven model is a natural fit for websockets (it's how everything works on the javascript side), so I would encourage you to get comfortable with that model instead of trying to find a way to work with websockets synchronously. | 0 | 93 | 0 | 0 | 2017-03-23T22:59:00.000 | python,websocket,tornado,python-asyncio | Does server code involving both sending and receiving have to be asynchronous? | 1 | 1 | 1 | 43,017,774 | 0 |
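For a feel of the asynchronous model, a minimal full-duplex echo sketch in Tornado (an illustration, not from the original answer):

```python
import tornado.ioloop
import tornado.web
import tornado.websocket

class EchoHandler(tornado.websocket.WebSocketHandler):
    def on_message(self, message):                 # fired when the client sends
        self.write_message(u'echo: ' + message)    # the server can also push any time

app = tornado.web.Application([(r'/ws', EchoHandler)])

if __name__ == '__main__':
    app.listen(8888)
    tornado.ioloop.IOLoop.current().start()
```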
0 | 0 | In my workplace, we have a Dymo printer that picks up data from a database and places it in a template label that I made; a program prints it automatically with Python.
Recently we bought a Zebra thermal printer, and I need to update the program to do the same thing, but with the Zebra printer.
I was looking around and found ZebraDesigner for XML, and I designed a few labels like the ones I need, but the zebra package for Python is not able to print the XML format, and when I tried to print the .lbl format I wasn't able to either.
Note that .lbl files can't be edited as text... and I need to do this...
Is there any solution? | true | 42,995,988 | 1.2 | 0 | 0 | 2 | Finally I found a way to do this.
With ZebraDesigner (not Pro) I design the label template for my automated labels and export it to a file by changing the printer's destination in Windows preferences.
With Labelary's online ZPL viewer and minimal knowledge of ZPL (always with the manual near), I modified the label to make it editable from Python with the use of .format() and {0}, {1}, etc.
Finally, having done this, I call a batch file with the command PRINT 'FILE' 'ZEBRAPORT', like PRINT C:\FILE.ZPL USB003, to print the specific modified label.
If someone wants specific code for how I do this, please just ask me. | 0 | 2,011 | 0 | 1 | 2017-03-24T09:40:00.000 | python,xml,printing,zebra-printers | Print XML on Zebra printer using Python | 1 | 1 | 1 | 43,110,314 | 0
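Since the answerer offered code on request, here is a hedged sketch of that workflow: fill a ZPL template with .format(), write it to a file, and hand it to the printer port with the Windows PRINT command. The template fields and the USB003 port are examples:

```python
import subprocess

ZPL_TEMPLATE = (
    '^XA'
    '^FO50,50^A0N,40,40^FD{0}^FS'        # {0} -> product name
    '^FO50,120^BCN,80,Y,N,N^FD{1}^FS'    # {1} -> barcode value (Code 128)
    '^XZ'
)

label = ZPL_TEMPLATE.format('Sample product', '0123456789012')
with open(r'C:\FILE.ZPL', 'w') as f:
    f.write(label)

subprocess.call(r'PRINT C:\FILE.ZPL USB003', shell=True)
```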
1 | 0 | I use microsoft bot framework and a flask based bot server in my application.
When someone installs the bot, the Botframework stores the JSON POSTed by slack, including data like SLACK_TEAM_ID, SLACK_USER_ID and BOT_ACCESS_TOKEN. Its great, that from this point whenever, an user mentions or directmessages the botuser, the Bot Framework POSTs a JSON to the flask server.
What I would like is, right when the user installs the bot, the Bot Framework does a POST call to the flask server, so that I can (say) congratulate the user for installing my bot.
In short: How to get my flask application notified as to who installs my bot as soon as they install it? | false | 43,002,043 | 0 | 0 | 0 | 0 | ConversationUpdate should be sent any time a bot is added to a channel with a user in it. Are you not seeing that on install? | 0 | 51 | 0 | 0 | 2017-03-24T14:26:00.000 | python,rest,botframework | How can my bot server know INSTANTLY when someone installed the bot using Add-to-slack button? | 1 | 1 | 1 | 43,031,505 | 0 |
0 | 0 | Using the python coinbase API-- The functions-- get_buy_price, get_sell_price, get_spot_price, get_historical_data, etc... all seem to return bitcoin prices only. Is there a way of querying Ethereum prices?
It would seem that currency_pair = 'BTC-USD' could be changed to something akin to currency_pair = 'ETH-USD' although this has no effect.
I would expect that the API simply doesn't support this, except that the official documentation explicitly states:
Get the total price to buy one bitcoin or ether
I can work around this somewhat by using the quote='true' flag in the buy/sell request. However, this only works moving forward; I would like historical data. | false | 43,037,896 | 0 | 0 | 0 | 0 | Something similar worked for me with a problem calling exchange rates.
Try changing the params in
coinbase\wallet\client.py
from
response = self._get('v2', 'prices', 'spot', data=params)
to
response = self._get('v2', 'prices', 'spot', params=params) | 0 | 9,048 | 0 | 10 | 2017-03-27T04:21:00.000 | python,python-3.x,ethereum,coinbase-api | Historical ethereum prices - Coinbase API | 1 | 1 | 3 | 48,160,272 | 0 |
0 | 0 | Is there a way to install Selenium WebDriver as a Python project dependency?
I need this so that when the project goes to an OS that doesn't have Selenium WebDriver installed, it won't be an issue for the project to run properly on that OS.
Thank you in advance.
PS: Please take a look at my own answer to this question.
Stefan | true | 43,039,601 | 1.2 | 0 | 0 | 1 | This is how I have done it for all my projects.
Create a text file which lists all the project dependencies. Make sure you pin the versions as well.
Example: requirement.txt
pytest==2.9.1
selenium==2.35.1
Create a shell script or batch file which creates a new virtual environment, installs all the dependencies, and runs the tests.
0 | 0 | People of Earth! Hello. As I can see in the docs, I'm able to download only one file using one API request. In order to download 10 files - I have to make 10 requests that makes me sad... Google Drive UI allows us to download archived files after selecting files and clicking on "download". Is there the same feature in the API that would allow me to download the desired number of files at once? I need Google Drive API to compress files and let me download an archive. | true | 43,050,388 | 1.2 | 0 | 0 | 2 | No there isn't. I believe (haven't tested it) that Google will respect Accept-Encoding: gzip for content downloads. | 0 | 332 | 0 | 0 | 2017-03-27T15:22:00.000 | google-api,google-drive-api,google-api-client,google-api-python-client | Is there a way to force Google Drive API to compress files and let me download an archive? | 1 | 1 | 1 | 43,052,062 | 0 |
1 | 0 | I'd like to reverse engineer Selenium WebDriver to write my tests for me as I use it. This would entail opening a WebDriver on screen, and clicking around and using it as normal. It will output instructions like self.driver.find_element_by_id('username-box') or whatnot for me, instead of the time-wasting of right-clicking the "Inspect element" each time I write a test.
Ideally this will give me a nice xpath which is more exact. How do I retrieve the Xpath/way to recreate actions when manually using Selenium WebDriver? | true | 43,051,032 | 1.2 | 0 | 0 | 0 | As Nameless said, it won't solve the "make me an efficient XPath", etc. problem that you are talking about but you can install Selenium IDE (a FF plugin) and record your scenarios and then export them into various languages. It doesn't write the best code but you can get an idea of what it does with a quick download and install. | 0 | 43 | 0 | 0 | 2017-03-27T15:53:00.000 | java,python,selenium,selenium-webdriver | How to retrieve HTML information about specific actions using WebDriver | 1 | 1 | 1 | 43,053,997 | 0 |
0 | 0 | I am thinking of using AWS API Gateway and AWS Lambda (Python) to create serverless APIs, but while designing this I was thinking about some aspects like pagination, security, caching, versioning, etc.
So my question is:
What is the best approach, performance- and cost-wise, to implement API pagination with very big data (1 million records)?
Should I implement the pagination in the PostgreSQL db? (I think this would be slow.)
Or should I skip PostgreSQL pagination, cache all the results I get from the db in AWS ElastiCache, and then do server-side pagination in Lambda?
I appreciate your help guys. | false | 43,113,198 | 0.53705 | 0 | 1 | 3 | If your data is going to live in a postgresql data base anyway I would start with your requests hitting the database and profile the performance. You've made assumptions about it being slow but you haven't stated what your requirements for latency are or what your schema is, so any assertions people would make about whether or not it would fit your case is completely speculative.
If, after profiling, you decide that it is not fast enough, then adding a cache would make sense, though storing the entire contents in the cache seems wasteful unless you can guarantee your clients will always iterate through all results. You may want to consider a mechanism that prefetches blocks of data that would service a few requests, rather than trying to cache the whole dataset.
TL;DR : Don't prematurely optimize your solution. Quantify how you want your system to respond and test and validate your assumptions. | 0 | 1,679 | 0 | 1 | 2017-03-30T09:04:00.000 | python,postgresql,amazon-web-services,aws-lambda,aws-api-gateway | AWS API Gateway & Lambda - API Pagination | 1 | 1 | 1 | 43,126,859 | 0 |
0 | 0 | Today I'm dealing with a Python 3 script that has to make an HTTP POST request and send an email.
The Python script is launched on a Windows PC that is in a corporate network protected by Forefront.
The user is logged in with his secret credentials and can access the internet through a proxy.
Like the other non-Microsoft applications (e.g. Chrome), I want my script to connect to the internet without prompting the user for his username and password.
How can I do this? | false | 43,122,092 | 0.197375 | 0 | 0 | 1 | On Microsoft OSes, the authentication used is Kerberos, so you won't be able to use your ID + password directly.
I'm on Linux, so I can't test it directly, but I think you can create a proxy with Fiddler which can negotiate the authentication for you, and you can use this proxy with Python.
Fiddler's Composer will automatically respond to authentication challenges (including the Negotiate protocol which wraps Kerberos) if you tick the Authentication box on the Options subtab, in the menus. | 0 | 218 | 0 | 8 | 2017-03-30T15:25:00.000 | python,windows,python-3.x,proxy,ldap | A Python script, a proxy and Microsoft Forefront - Auto-Authentication | 1 | 1 | 1 | 43,866,290 | 0 |
0 | 0 | I would like to implement the following in Python but not sure where to start. Are there good modules for shortest path problems of this type?
I am trying to define the shortest path from a particular atom (node) to all other atoms (nodes) in a given collection of xyz coordinates for a 3D chemical structure (the graph). The bonds between the atoms (nodes) are the edges for which travel from node to node is allowed.
I am trying to filter out certain atoms (nodes) from the molecule (graph) based on the connectivity outward from a selected central node.
**For the paths considered, I want to FORBID certain atoms (nodes) from being crossed. If the shortest path from A to B is through forbidden node, this answer is disallowed. The shortest path from A to B must not include the forbidden node **
If the shortest path from the selected centre atom (A) to another node (B) INCLUDES the forbidden node, AND there is no other path available from A to B through the available edges (bonds), then node B should be deleted from the final set of xyz coordinates (nodes) to be saved.
This should be repeated for A to C, A to D, A to E, etc. for all other atoms (nodes) in the structure (graph).
Thanks in advance for any help you can offer. | true | 43,140,267 | 1.2 | 0 | 0 | 0 | Make sure all edges that would lead to the forbidden node(s) have an infinite cost, and whichever graph-traversing algorithm you use will deal with it automatically.
Alternatively, just remove the forbidden nodes from being considered by the graph traversal algorithm. | 0 | 380 | 0 | 0 | 2017-03-31T12:05:00.000 | python,graph,shortest-path,chemistry,cheminformatics | Calculating shortest path from set node to all other nodes, with some nodes forbidden from path | 1 | 1 | 2 | 43,140,304 | 0 |
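A sketch of the second suggestion with networkx: remove the forbidden atoms, then keep only atoms still reachable from the centre. The toy graph and labels are illustrative:

```python
import networkx as nx

G = nx.Graph()
G.add_edges_from([('A', 'X'), ('X', 'B'), ('A', 'C'), ('C', 'B'), ('A', 'D')])

forbidden = {'X'}
H = G.copy()
H.remove_nodes_from(forbidden)     # paths can no longer cross these atoms

# shortest paths from centre atom A to every atom still reachable
paths = nx.single_source_shortest_path(H, 'A')
keep = set(paths)                  # B survives via A-C-B; cut-off atoms are dropped
print(sorted(keep))
```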
0 | 0 | I'm new to Selenium. I have this situation and I don't know how to solve it:
If I use a direct link in driver.get(), I can find and count elements without problems using:
elements = driver.find_elements_by_xpath(...)
print(len(elements))
I get the correct printed result.
If I use the home page instead in driver.get():
locate the search button;
send keys and submit;
elements = driver.find_elements_by_xpath(...)
print(len(elements))
The test passes but the result is 0. Any idea what I'm doing wrong? | true | 43,214,668 | 1.2 | 0 | 0 | 0 | You need to wait after the action before you validate. WebDrivers provide "wait conditions" that you can use to verify you reached your desired checkpoint before performing other validation operations.
We are not talking about system waits, we are talking about wait conditions where the driver can poll the browser until a given condition is met. There are many conditions and many options for these, and you will need them fairly universally to accomplish your goals. That is why I'm not providing an example. | 0 | 82 | 0 | 0 | 2017-04-04T18:13:00.000 | python,selenium | Python Selenium: Geting different results using driver.find_elements | 1 | 1 | 1 | 43,214,788 | 0 |
0 | 0 | I want to create a Telegram bot for a home project, and I want the bot to talk to only 3 people. How can I do this?
I thought of creating a file with the chat ID of each of us and checking it before responding to any command; I think that will work. The bot will send the correct info if it's one of us, and "goodbye" to anyone else.
But is there any other way to block any other conversation with my bot?
Pd: I'm using python-telegram-bot | true | 43,240,152 | 1.2 | 1 | 0 | 1 | For the first part of your question you can make a private group and add your bot as one of its administrators. Then it can talk to the members and answer to their commands.
Even if you don't want to do that, it is possible by checking the chat ID of each update the bot receives. If the chat ID exists in the file, database, or even a simple array, the bot answers the command; if not, it just ignores it or sends a simple text like you said, "goodbye".
Note that bots cannot block people; they can only ignore their messages. | 0 | 986 | 0 | 3 | 2017-04-05T19:37:00.000 | telegram,telegram-bot,python-telegram-bot | Telegram-bot user control | 1 | 1 | 1 | 43,241,747 | 0
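A sketch of that chat-ID check with python-telegram-bot (pre-v12 callback style, matching the question's era); the IDs and token are placeholders:

```python
from telegram.ext import Updater, MessageHandler, Filters

ALLOWED = {111111111, 222222222, 333333333}    # the three chat IDs

def reply(bot, update):
    if update.message.chat_id in ALLOWED:
        update.message.reply_text('here is the info...')
    else:
        update.message.reply_text('goodbye')    # or simply return, to ignore

updater = Updater('BOT:TOKEN')
updater.dispatcher.add_handler(MessageHandler(Filters.text, reply))
updater.start_polling()
```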
0 | 0 | I've built an API that delivers live data all at once when a user submits a search for content. I'd like to take this API to the next level by delivering the API content to a user as the content is received instead of waiting for all of the data to be received before displaying.
How does one go about this? | false | 43,269,842 | 0 | 0 | 0 | 0 | From your explanation
We're pulling our data from multiple sources with each user search.
Being directly connected to the scrapers for those sources, we display
the content as each scraper completes content retrieval. I was
originally looking to mimic this in the API, which is obviously quite
different from traditional pagination - hope this clarifies.
So in your API, you want to:
take a query from the user
initiate live scrapers
get the data back to the user when the scrapers finish the job!
(correct me if I'm wrong)
My Answer
This might feel a little complicated, but it's the best one I can think of.
When the user submits the query:
1. Initiate the live scrapers into a Celery queue (take care of the priority).
2. Once the queue is finished, get back to the user with the information you have via sockets (this is how Facebook or any website sends users notifications). In your case you will send the results' HTML data over the socket.
3. Since you will already have the data, moved into the db as you scraped, you can paginate it like a normal db.
But this approach gives you a lag of a few seconds or a minute before replying to the user; meanwhile you keep the user busy with something on the UI front. | 0 | 92 | 0 | 0 | 2017-04-07T04:47:00.000 | javascript,python,django,performance,api | Building an API | 1 | 2 | 3 | 43,271,723 | 0
0 | 0 | I've built an API that delivers live data all at once when a user submits a search for content. I'd like to take this API to the next level by delivering the API content to a user as the content is received instead of waiting for all of the data to be received before displaying.
How does one go about this? | false | 43,269,842 | 0 | 0 | 0 | 0 | I think the better way to do this is to set a limit in your query. For example, if you have 1000 records in your database, retrieving all the data at once takes time. So, if a user searches for the word 'apple', you initially send the database request with a limit of 10, and you can provide a pagination or scroll feature on your front end. If the user clicks the next page or scrolls, you send the database request again with another limit of 10, so that the database read will not take much time for the limited data. | 0 | 92 | 0 | 0 | 2017-04-07T04:47:00.000 | javascript,python,django,performance,api | Building an API | 1 | 2 | 3 | 43,269,984 | 0
1 | 0 | I noticed that I sometimes get blocked while scraping because of a session cookie being used on too many pages.
Is there a way to simply clear all cookies completely during crawling to get back to the initial state of the crawler? | false | 43,275,741 | 0 | 0 | 0 | 0 | Facing a similar situation myself. I can get away easily here, but one idea I have is to subclass CookieMiddleware and then write a method to tweak the jar variable directly. It's dirty, but maybe worth considering.
Another option would be to file a feature request to at least have a function to clear the cookies. It could easily take another year to implement, if deemed needed at all; I don't particularly trust the Scrapy devs here.
It just occurred to me that you can use your own cookiejar meta, and if you want to return to a clean state, you simply use a different value (something like incrementing an integer would do). | 0 | 879 | 0 | 2 | 2017-04-07T10:25:00.000 | python,cookies,scrapy | Clear cookies on scrapy completely instead of changing them | 1 | 1 | 1 | 44,177,895 | 0
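A sketch of that cookiejar-meta idea: bump the jar id to continue the crawl from a clean cookie state (the blocking heuristic is a placeholder):

```python
import scrapy

class RotatingCookiesSpider(scrapy.Spider):
    name = 'rotating'
    start_urls = ['http://example.com/']
    jar = 0

    def parse(self, response):
        if response.status == 403:     # placeholder "we look blocked" check
            self.jar += 1              # fresh, empty cookie jar from here on
        yield scrapy.Request(response.urljoin('/next'),
                             meta={'cookiejar': self.jar},
                             callback=self.parse,
                             dont_filter=True)
```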
1 | 0 | How do I change the browser used by the view(response) command in the scrapy shell? It defaults to safari on my machine but I'd like it to use chrome as the development tools in chrome are better. | false | 43,299,688 | 0 | 0 | 0 | 0 | This fixed it for me:
If you're on windows 10, find or create a random html-file on your system.
Right click the html-file
Open with
Choose another app
Select your browser (e.g Google Chrome) and check the box "Always use this app to open .html"
Now attempt to use view(response) in the Scrapy shell again and it should work. | 0 | 1,418 | 0 | 2 | 2017-04-08T19:51:00.000 | python,scrapy | How do I change the browser used by the scrapy view command? | 1 | 1 | 4 | 62,831,415 | 0 |
0 | 0 | How can I install the tor browser to make it useable in Python using Selenium?
I have tried sudo apt-get install tor-browser, but I don't know where it gets installed, hence what to put in the PATH variable (or in executable-path).
My goal is to
install Tor browser
open Tor Browser with Python Selenium
go to a website. | false | 43,322,038 | 0 | 0 | 0 | 0 | To see your TorBrowser path and binary open Tor and under the three stripe menu on the top right go Help>Troubleshooting Information | 0 | 2,433 | 0 | 3 | 2017-04-10T11:31:00.000 | python,selenium,ubuntu,tor | Ubuntu: Install tor browser & use it with Selenium Python | 1 | 1 | 3 | 64,649,803 | 0 |
0 | 0 | I'm a newbie in automation testing.
Currently I'm doing manual testing and trying to automate the process with Selenium WebDriver using Python.
I'm creating a test suite which will run different scripts. Each script will be running tests on different functionality.
And I got stuck.
I'm working on a financial web application. The initial script will create a financial deal, and all other scripts will be testing different functionality on this deal.
I'm not sure how to handle this situation. Should I just pass the URL from the first script (the newly created deal) into all other scripts in the suite, so all the tests run on the same deal and don't create a new one for each test? How do I do this?
Or may be there is a better way to do this?
Deeply appreciate any advise!!! Thank you! | false | 43,327,551 | 0 | 1 | 0 | 0 | Preferably you would have each test be able to run in isolation. If you have a way to create the deal through an API or Database rather than creating one through the UI, you could call that for each test. And, if possible, also clean up that data after your test runs.
If this is not possible, you could also record some data from a test in a database, xml, or json file. Then your following tests could read in that data to get what it needs to run the test. In this case it would be some reference to your financial deal.
The 2nd option is not ideal, but might be appropriate in some cases. | 0 | 284 | 0 | 1 | 2017-04-10T15:43:00.000 | python,selenium,webdriver | How to run all the tests on the same deal. Selenium Webdriver + Python | 1 | 2 | 2 | 43,327,713 | 0 |
0 | 0 | I'm a newbie in automation testing.
Currently I'm doing manual testing and trying to automate the process with Selenium WebDriver using Python.
I'm creating a test suite which will run different scripts. Each script will be running tests on different functionality.
And I got stuck.
I'm working on a financial web application. The initial script will create a financial deal, and all other scripts will be testing different functionality on this deal.
I'm not sure how to handle this situation. Should I just pass the URL from the first script (the newly created deal) into all other scripts in the suite, so all the tests run on the same deal and don't create a new one for each test? How do I do this?
Or may be there is a better way to do this?
Deeply appreciate any advise!!! Thank you! | false | 43,327,551 | 0 | 1 | 0 | 0 | There's a couple of approaches here that might help, and some of it depends on if you're using a framework, or just building from scratch using the selenium api.
Use setup and teardown methods at the suite or test level.
This is probably the easiest method, and close to what you asked in your post. Every framework I've worked in supports some sort of setup and teardown method out of the box, and even if it doesn't, they're not hard to write. In your case, you've got a script that calls each of the test cases, so just add a before() method at the beginning of the suite that creates the financial deal you're working on.
If you'd like a new deal made for each individual test, just put the before() method in the parent class of each test case so they inherit and run it with every case.
Use Custom Test Data
This is probably the better way to do this, but it assumes you have db access or a good relationship with your DBM. You generally don't want the success of one test case to rely on the success of another (what the first answer meant by isolation). If the creation of the document fails in some way, every single test downstream of that will fail as well, even though they're testing a different feature that might be working. This results in a lot of lost coverage.
So, instead of creating a new financial document every time, speak to your DBM and see if it's possible to create a set of test data that either sits in your test db or is inserted at the beginning of the test suite.
This way you have 1 test that tests document creation, and X tests that verify its functionality based on the test data, and those tests do not rely on each other. | 0 | 284 | 0 | 1 | 2017-04-10T15:43:00.000 | python,selenium,webdriver | How to run all the tests on the same deal. Selenium Webdriver + Python | 1 | 2 | 2 | 43,328,191 | 0
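A sketch of the setup/teardown idea as a pytest fixture: one deal per session (change the scope for one per test). create_deal()/delete_deal() stand in for your own API, DB, or UI helpers:

```python
import pytest

def create_deal():
    return 'http://app/deals/123'    # placeholder: create via API, DB, or the UI

def delete_deal(url):
    pass                             # placeholder cleanup

@pytest.fixture(scope='session')
def deal():
    deal_url = create_deal()
    yield deal_url                   # every test in the session gets the same deal
    delete_deal(deal_url)            # teardown: clean up the test data

def test_deal_details(deal):
    assert deal.startswith('http')
```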
0 | 0 | I had been using urllib2 to parse data from html webpages. It was working perfectly for some time and stopped working permanently from one website.
Not only did the script stop working, but I was no longer able to access the website at all, from any browser. In fact, the only way I could reach the website was from a proxy, leading me to believe that requests from my computer were blocked.
Is this possible? Has this happened to anyone else? If that is the case, is there anyway to get unblocked? | false | 43,334,233 | 0.197375 | 0 | 0 | 1 | It is indeed possible, maybe the sysadmin noticed that your IP was making way too many requests and decided to block it.
It could also be that the server has a limit of requests that you exceeded.
If you don't have a static IP, a restart of your router should reset your IP, making the ban useless. | 0 | 608 | 0 | 0 | 2017-04-10T23:05:00.000 | python,urllib2,blocked | Python urllib2: Getting blocked from a website? | 1 | 1 | 1 | 43,334,265 | 0 |
0 | 0 | xunitmerge creates duplicate tests when merging the py.test results of two different sets of tests.
I have two test folders and ran them separately using py.test, which created results-1.xml & results-2.xml. After that I merge as below:
xunitmerge results-1.xml results-2.xml results.xml
which created results.xml. When I publish the results using Jenkins (publish xUnit results), I see the recorded tests shown as duplicates, even though the tests in results-1.xml and results-2.xml are unique.
How do I avoid duplicate test results during the merge? | false | 43,351,833 | 0 | 1 | 0 | 0 | Check that you didn't include the original results-1.xml and results-2.xml files in the path that Jenkins scans for results.
If you're not sure about it, try deleting the original files after the merge (and before running the xUnit report action). | 0 | 272 | 0 | 0 | 2017-04-11T16:47:00.000 | python,unit-testing,jenkins,pytest,xunit | xunitmerge creates duplicate tests when merging the py.test results of two different sets of tests | 1 | 1 | 1 | 45,173,514 | 0
1 | 0 | I am raising ValueError in my API because the input parameter of a particular function is not valid, as below:
Password doesn't match
User doesn't exist in db or the value is negative
The client provided a valid argument as per the API norms, so I don't think this is a client-side error (400-series code).
So should I return status code 200 because the request was fully processed, or is there an HTTP status code for this? | true | 43,363,636 | 1.2 | 0 | 0 | 1 | As there are various invalid types, you should use the most appropriate HTTP status code for each different situation, case by case.
For Password doesn't match, I think 403 Forbidden is the best choice.
For User doesn't exist in db, 204 No Content is the best one.
For value is negative, it depends on why value is negative. | 0 | 2,334 | 0 | 1 | 2017-04-12T07:59:00.000 | python,http,http-status-codes,valueerror | What is the http status code when Api raises ValueError? | 1 | 2 | 3 | 43,363,842 | 0 |
1 | 0 | I am raising ValueError in my API because the input parameter of a particular function is not valid, as below:
Password doesn't match
User doesn't exist in db or the value is negative
The client provided a valid argument as per the API norms, so I don't think this is a client-side error (400-series code).
So should I return status code 200 because the request was fully processed, or is there an HTTP status code for this? | false | 43,363,636 | 0.066568 | 0 | 0 | 1 | You should send another status code.
A good example of a processed request which gives another status than 200 is the redirection 3xx. After submitting a form through a POST request, it is recommended that the server gives a 307 Temporary Redirect. However, the request was totally processed, and even succeeded.
In your case, something happened (an exception has been raised). Let the client know by sending an adequate status. I would recommend 403 Forbidden or 401 Unauthorized. | 0 | 2,334 | 0 | 1 | 2017-04-12T07:59:00.000 | python,http,http-status-codes,valueerror | What is the http status code when Api raises ValueError? | 1 | 2 | 3 | 43,363,778 | 0
0 | 0 | I am developing a customer care chat bot to resolve basic customer queries for my e-commerce site. My order ID is 13 digits long. To read queries like
"Please check my order status with id 9876566765432"
api.ai is unable to understand that it is an order ID. I have set the entity type to @sys.number. It is able to identify smaller numbers like 343434, etc. I have tried @sys.number-integer and @sys.number-sequence, but they are not working for long numbers. Please advise... | false | 43,368,500 | 0 | 0 | 0 | 0 | If you are using the enterprise edition you can use @sys.number-sequence. | 0 | 135 | 0 | 4 | 2017-04-12T11:38:00.000 | python,dialogflow-es | Unable to read very large number in api.ai as parameter | 1 | 1 | 1 | 54,820,198 | 0
0 | 0 | How would I go about using the Google Custom Search API, using the python GCS library, to only return Google Shopping results?
I have the basic implementation already for standard search queries, which searches the whole web and returns related sites, but how would I only return shopping results?
Thank You. | false | 43,379,256 | 0.197375 | 0 | 0 | 1 | There is a 'search outside of Google' checkbox in the dashboard; you will get the results you want after you check it. It took me a while to find it. The default setting only returns search results from within Google's own websites. | 0 | 463 | 0 | 2 | 2017-04-12T20:40:00.000 | python,google-api,google-custom-search,google-api-python-client | Google Custom Search Python Shopping Results Only? | 1 | 1 | 1 | 43,379,411 | 0
0 | 0 | Is Celery mostly just a high level interface for message queues like RabbitMQ? I am trying to set up a system with multiple scheduled workers doing concurrent http requests, but I am not sure if I would need either of them. Another question I am wondering is where do you write the actual task in code for the workers to complete, if I am using Celery or RabbitMQ? | false | 43,379,554 | 0.099668 | 0 | 0 | 1 | Celery is the task management framework--the API you use to schedule jobs, the code that gets those jobs started, the management tools (e.g. Flower) you use to monitor what's going on.
RabbitMQ is one of several "backends" for Celery. It's an oversimplification to say that Celery is a high-level interface to RabbitMQ. RabbitMQ is not actually required for Celery to run and do its job properly. But, in practice, they are often paired together, and Celery is a higher-level way of accomplishing some things that you could do at a lower level with just RabbitMQ (or another queue or message delivery backend). | 0 | 3,260 | 1 | 6 | 2017-04-12T20:59:00.000 | python,rabbitmq,celery | What is the relationship between Celery and RabbitMQ? | 1 | 1 | 2 | 43,379,719 | 0 |
0 | 0 | I am interacting with svg elements in a web page.
I am able to locate the SVG elements by XPath, but not able to click them. The error says that methods like click() and onclick() are not available.
Any suggestions on how we can make them clickable? Please advise. | false | 43,412,779 | 0 | 0 | 0 | 0 | Try to make use of the Actions or Robot class. | 0 | 106 | 0 | 0 | 2017-04-14T13:42:00.000 | python,selenium,svg,webdriver | SVG Elements: Able to locate elements using xpath but not able to click | 1 | 1 | 1 | 43,422,726 | 0
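A sketch of the Actions route in Python's Selenium bindings. Note SVG nodes usually need the name() trick in XPath; the locator here is a placeholder:

```python
from selenium import webdriver
from selenium.webdriver.common.action_chains import ActionChains

driver = webdriver.Firefox()
driver.get('http://example.com/chart')

node = driver.find_element_by_xpath("//*[name()='svg']//*[name()='circle']")
ActionChains(driver).move_to_element(node).click().perform()
```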
0 | 0 | I need a Telegram bot which can delete a message in a group channel.
I googled it but did not find a solution.
Could you tell me the library and the method for this, if you know? Thank you in advance.
Not yet officially announced | 0 | 14,759 | 0 | 1 | 2017-04-14T17:56:00.000 | python,bots,telegram | How to delete message on channel telegram? | 1 | 1 | 4 | 43,864,589 | 0 |
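A sketch of calling that method directly over HTTPS with requests (token, chat, and message IDs are placeholders; the bot needs delete rights in the chat):

```python
import requests

TOKEN = '123456:ABC-DEF'
resp = requests.post(
    'https://api.telegram.org/bot{}/deleteMessage'.format(TOKEN),
    data={'chat_id': '@your_channel', 'message_id': 42},
)
print(resp.json())    # {'ok': True, ...} on success
```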
1 | 0 | I have seen in Firebug that my browser sends requests even for static files. This happened when I had enabled caching for static files.
I also saw the server respond with a 304 status code.
Now, my question:
Why should the browser send requests for all static files when the cache is enabled?
Is there a way to make the browser send no requests at all for static files until the cache expires? | true | 43,424,090 | 1.2 | 1 | 0 | -1 | Browsers still send requests for cached files in order to find out whether there is newer content to fetch. The 304 response code is the server telling the browser that the contents it has cached are still valid, so it doesn't have to download them again. | 0 | 149 | 0 | 0 | 2017-04-15T08:50:00.000 | caching,browser,static,python-requests | Why does browser send requests for static files? | 1 | 1 | 1 | 43,425,459 | 0
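To actually stop the browser from sending any request until the cache expires (which the answer above does not cover), the server has to send an explicit freshness lifetime such as Cache-Control: max-age. A hedged sketch with Flask; any framework or web-server config can set the same header:

    from flask import Flask, send_from_directory

    app = Flask(__name__, static_folder=None)  # disable the built-in static route

    @app.route('/static/<path:filename>')
    def static_files(filename):
        response = send_from_directory('static', filename)
        # max-age tells the browser the copy is fresh for 24 hours: no request,
        # not even a conditional one, until that time passes
        response.headers['Cache-Control'] = 'public, max-age=86400'
        return response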
0 | 0 | I've been using Google Trends for my project alongside a Python API called Pytrends, and the project is due soon. Today, just a couple of hours ago, I suddenly became unable to use Google Trends. Every time I search for a word, I get the following error:
Oops! There was a problem displaying this page.
Please try again
Hence Pytrends doesn't work either.
I read that it may be due to an ad blocker preventing access to Google Trends, but my anti-virus (Norton 360) allows Google Chrome, and I don't have any ad-blocking Chrome extension either.
Can someone please help provide a solution? I need one really soon. Many thanks. | false | 43,439,795 | 0 | 0 | 0 | 0 | I got this error because I accessed Google Trends far more frequently than most people due to my project being centered on Pytrend's API.
Therefore, Google presumably thought I was a bot or that my activity was suspicious.
I simply restarted my modem, and now both Google Trends and Pytrends work again.
So simply restart your modem and possibly your computer as well just to be on the safe side. And it should work right after. | 0 | 1,009 | 0 | 0 | 2017-04-16T17:09:00.000 | python,google-chrome,google-trends | Google Trends not working on my computer | 1 | 1 | 1 | 43,440,788 | 0 |
0 | 0 | I'm having trouble understanding IntelliJ's import policy for Python with import os. As far as I know, the import order is supposed to be standard library first, then third-party packages, then company packages, and finally intra-package or relative imports. For the most part IntelliJ orders everything correctly, but it keeps pushing import os into the third-party packages. Am I missing something? Isn't os a standard library package? | false | 43,456,232 | 0.099668 | 0 | 0 | 1 | The answer I got from a co-worker a couple of years ago is that os was originally a third-party package; IntelliJ left it where it is for some backward compatibility issue. | 1 | 86 | 0 | 2 | 2017-04-17T17:23:00.000 | python,intellij-idea | Intellij keeps re-ordering my `import os` | 1 | 1 | 2 | 43,456,673 | 0
0 | 0 | This sends around 5 requests per second to the server. I need to send around 40 requests per second. The server does not limit my requests (I have run 10 instances of this Python script and it has worked) and my internet does not limit the requests.
It's my code which limits my requests per second.
Is it possible to make my Python script send more requests? | false | 43,466,412 | 0 | 0 | 0 | 0 | I agree with @anekix, using twisted is the best way to do it.
One thing to know: whichever method you use to send 10,000 HTTP requests, there is an open file descriptor limit, typically 1024 on Linux.
This means you can only hold about that many concurrent TCP connections. However, you can raise the limit via /etc/security/limits.conf on Linux. | 0 | 2,702 | 0 | 0 | 2017-04-18T07:40:00.000 | python,python-2.7 | What is the fastest way to send 10,000 HTTP requests in Python 2.7? | 1 | 1 | 3 | 43,467,319 | 0
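A minimal Python 2.7 sketch that raises the request rate without Twisted, using a thread pool from the standard library (multiprocessing.dummy provides a thread-backed Pool); the URL list and pool size are illustrative, and the pool size should stay below your file descriptor limit:

    from multiprocessing.dummy import Pool  # threads, not processes
    import requests

    urls = ['http://example.com/%d' % i for i in range(10000)]

    def fetch(url):
        # each thread issues one blocking request at a time
        return requests.get(url).status_code

    pool = Pool(200)                 # 200 concurrent connections
    results = pool.map(fetch, urls)
    pool.close()
    pool.join()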
0 | 0 | OS: Windows 10
I use an Ethernet switch to send UDP packets to two other systems (connected directly to that same switch) simultaneously via Python 3.4.4. The same code works on two other dev/testing PC's so I know it's not the Python code, but for some reason it doesn't work on the PC that I want the system to be utilized by.
When I use Wireshark to view the UDP traffic at 169.254.255.255 (the target IP for sending the UDP packets to), nothing appears. However, sending packets to 169.X.X.1 works. On the other hand, packets sent to 169.X.X.255 are sent, but I receive time-to-live exceeded messages in return. I am restricted to that target IP, so changing the IP is not a solution. I also have it sending on port 6000 (arbitrary); changing the port number has been no help either. It also won't let me send to 169.254.255.1.
I have the firewalls turned off.
Thanks for your help. | true | 43,475,468 | 1.2 | 0 | 0 | 0 | The strange thing about my problem, was that this exact code worked on the computer in question (and two development computers) previously, but wasn't working at the time that I posted this question.
Wireshark wasn't leading me to my answer (only showing me that the UDP packets were not sent), so I decided to ping the IP via the command prompt. I received one of two errors (destination host unreachable, or request timed out). These errors led me to add my desired target IP (169.254.255.255) to the ARP cache, which solved my problem.
I'd like to thank you for suggesting a possible solution. | 0 | 3,390 | 0 | 1 | 2017-04-18T14:50:00.000 | windows,python-3.x,udp,broadcast,blocked | Windows/Python UDP Broadcast Blocked No Firewall | 1 | 1 | 2 | 43,671,319 | 0 |
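For reference, a Python sender needs the SO_BROADCAST socket option before sendto() to a broadcast address is reliably allowed; a minimal sketch with the address and port taken from the question:

    import socket

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # without SO_BROADCAST, sending to a broadcast address may be refused
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    sock.sendto(b'payload', ('169.254.255.255', 6000))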
1 | 0 | I'm interested in trying a web scraping project. The target sites use Javascript to dynamically load and update content. Most of the discussion online concerning web scraping such sites indicates node.js, casper.js, phantom.js, and nightmare.js are all reasonably popular tools to use when attempting such a project. Node.js seems to be used most often.
If I am running a Flask server and wish to display the results of a node.js scrape, for example, in tabular format on my site, is this possible? Will I run into compatibility issues? Or should I stick with a Python-based approach to scraping like BS4 for the sake of consistency? I ask because node.js is described as a server, so I assume a conflict would arise if I tried to use it and Flask simultaneously. | true | 43,485,072 | 1.2 | 0 | 0 | 1 | If you want to write a web scraper that executes JavaScript, node.js (with something like Phantom.js) is a great choice. Another popular choice is Selenium. You would need to simulate user actions to activate event handlers; let's call this action "scraping". BS4 would not be appropriate because it cannot execute JavaScript.
Once you have your data saved to disk, displaying the results in HTML tabular form (let's call this action "reporting") would require yet another solution. Flask is a suitable choice.
Since the scraping and reporting are separate concerns, no conflict would arise if you wanted to use the two services simultaneously. When using Selenium or node.js as a scraper, you aren't really running a web server. So it's incorrect to think of it as two web-servers in possible conflict. | 0 | 160 | 0 | 0 | 2017-04-19T02:05:00.000 | javascript,node.js,python-3.x,web-scraping | Does running a Flask web server preclude web scraping in Node.JS? | 1 | 1 | 1 | 43,485,536 | 0 |
0 | 0 | How to check if the proxy have a high anonymity level or a transparent level in Python?
I am writing a script to sort the good proxies, but I want to keep only the high-anonymity ones (elite proxies). | false | 43,493,600 | 0.197375 | 1 | 0 | 1 | Launch a test site on the internet. It should perform only one operation: save each received request, with all its headers, into a database or a file. Each request should carry your signature so you can be sure it is your own original request.
Connect your Python script to the site via the proxy being tested. Send all the headers you want to inspect on the other side.
Check the data received: are there headers or other values that could break your anonymity? | 0 | 784 | 0 | 1 | 2017-04-19T10:50:00.000 | python,proxy,anonymity | How to check proxy's anonymity level? | 1 | 1 | 1 | 43,558,215 | 0
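A sketch of steps 2-3 using httpbin.org as a stand-in for the test site (it echoes the request headers back), assuming the requests library; the proxy address is a placeholder:

    import requests

    proxies = {'http': 'http://1.2.3.4:8080', 'https': 'http://1.2.3.4:8080'}
    my_real_ip = requests.get('https://httpbin.org/ip').json()['origin']

    echoed = requests.get('http://httpbin.org/headers',
                          proxies=proxies, timeout=10).json()['headers']
    # a transparent or merely anonymous proxy announces itself in these headers
    leaking = [h for h in ('X-Forwarded-For', 'Via', 'X-Real-Ip') if h in echoed]
    if my_real_ip in str(echoed) or leaking:
        print('transparent or anonymous proxy; leaked headers: %s' % leaking)
    else:
        print('looks like a high-anonymity (elite) proxy')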
0 | 0 | I am trying to make a .sln file for Visual Studio and in that process I am facing a problem
File "socket.py", line 47, in
import _socket
ImportError: DLL load failed: The specified module could not be found.
This socket.py is present in Python27/Lib folder.
I have checked that there is no other version of Python installed which could be clashing with Python27. | false | 43,542,157 | 0 | 0 | 0 | 0 | Check your environment variables. I think the PYTHONHOME variable may be pointing to the wrong directory. | 1 | 1,145 | 0 | 1 | 2017-04-21T11:58:00.000 | python,dll | File "socket.py", line 47, in import _socket ImportError: DLL load failed: The specified module could not be found | 1 | 3 | 3 | 44,690,127 | 0
0 | 0 | I am trying to make a .sln file for Visual Studio and in that process I am facing a problem
File "socket.py", line 47, in
import _socket
ImportError: DLL load failed: The specified module could not be found.
This socket.py is present in Python27/Lib folder.
I have checked that there is no other version of Python installed which could be clashing with Python27. | false | 43,542,157 | 0 | 0 | 0 | 0 | These kinds of problems generally happen when you have multiple virtual environments (venvs) available on your system.
Check the preferences of Visual Studio (or whatever IDE you use); they generally point to a particular venv.
Change it to point to the venv where this module is installed, and then it should work.
Hope it helps
Thanks | 1 | 1,145 | 0 | 1 | 2017-04-21T11:58:00.000 | python,dll | File "socket.py", line 47, in import _socket ImportError: DLL load failed: The specified module could not be found | 1 | 3 | 3 | 53,442,821 | 0 |
0 | 0 | I am trying to make a .sln file for Visual Studio and in that process I am facing a problem
File "socket.py", line 47, in
import _socket
ImportError: DLL load failed: The specified module could not be found.
This socket.py is present in Python27/Lib folder.
I have checked that there is no other version of Python installed which could be clashing with Python27. | false | 43,542,157 | 0 | 0 | 0 | 0 | If the error is that import _socket failed, then the _socket file was not installed or was deleted by mistake. I had the same problem and reinstalling Python fixed it. As for _socket, it is a .pyd file containing compiled C code that the socket module uses. If you want to see this yourself, open Python IDLE, press Alt and M together, type socket, and hit Enter; the source code will open. Scroll down to where the code starts and you'll find the line import _socket. | 1 | 1,145 | 0 | 1 | 2017-04-21T11:58:00.000 | python,dll | File "socket.py", line 47, in import _socket ImportError: DLL load failed: The specified module could not be found | 1 | 3 | 3 | 67,999,990 | 0
1 | 0 | Basically I'm using Python to send serial data to an Arduino so that I can make moving dials using data from the game. This should work because you can use the URL "localhost:8111" to get a list of these stats when in-game. The problem is that urllib and BeautifulSoup seem to be blindly reading the page source without giving me the data I need.
The data I need shows up when I inspect the elements of that page. Other posts suggest that running the page's JavaScript from Python would fix this, but I have found no way of doing it. Any help here would be great, thanks. | false | 43,552,146 | 0 | 0 | 0 | 0 | Your problem might be that the page elements are dynamic (revealed by JavaScript, for example).
Why is this a problem? You can't access those tags or their data with a plain HTTP fetch. You'll have to use a headless/automated browser (learn more about Selenium).
Then drive a session through Selenium and keep feeding the data to the Arduino the way you wanted.
Summary: if you inspect elements you can see the tag, but in view-source you can't. This can't be solved using bs4 or requests alone; you'll have to use a module like Selenium or something similar. | 0 | 853 | 0 | 2 | 2017-04-21T21:15:00.000 | python,html,beautifulsoup,urllib | Trying to read data from War Thunder local host with python | 1 | 1 | 3 | 43,559,174 | 0
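A minimal Selenium sketch that fetches the rendered page instead of the raw source (it assumes chromedriver is on your PATH; the URL comes from the question and the sleep time is illustrative):

    import time
    from selenium import webdriver
    from bs4 import BeautifulSoup

    driver = webdriver.Chrome()
    driver.get('http://localhost:8111')
    time.sleep(2)  # give the page's JavaScript time to run
    # page_source holds the DOM *after* JavaScript execution, so BeautifulSoup
    # now sees the same elements as "inspect element" does
    soup = BeautifulSoup(driver.page_source, 'html.parser')
    driver.quit()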
1 | 0 | I'm wondering if there's an easy way (using python/js/html) to automatically select the form to insert credentials.
Basically at the login-page you don't have to click the 'username' form and can type right away.
Thanks! | false | 43,606,341 | 0.379949 | 0 | 0 | 4 | Simply use the 'autofocus' attribute on the HTML input: <input type="text" name="fname" autofocus> | 0 | 63 | 0 | 2 | 2017-04-25T09:12:00.000 | javascript,python | Type your info without having to click the form | 1 | 1 | 2 | 43,606,517 | 0
0 | 0 | The project I'm doing requires a server. But, with Bottle I can create only a localhost server. I want to be able to access it anywhere. What do I use? I know about pythonanywhere.com, but I'm not sure as to how to go about it. | false | 43,613,798 | 0.379949 | 0 | 0 | 2 | On PythonAnywhere, all you need to do is:
Sign up for an account, and log in.
Go to the "Web" tab
Click the "Add a new web app" button
Select "Bottle"
Select the Python version you want to use
Specify where you want your code files to be
...and then you'll have a bottle server up and running on the Internet, with simple "Hello world" code behind it (a sketch of that starter code is shown below). You can then change that to do whatever you want. | 0 | 63 | 1 | 0 | 2017-04-25T14:40:00.000 | python-3.x,server,bottle | How to create an online Bottle server accessible from any system? | 1 | 1 | 1 | 43,635,182 | 0
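The generated "Hello world" code is roughly the following minimal Bottle app; note that on PythonAnywhere the WSGI config file imports the application object rather than calling run():

    from bottle import default_app, route

    @route('/')
    def hello():
        return 'Hello, world!'

    # PythonAnywhere's WSGI file imports this object; when running locally
    # you would call run(host='localhost', port=8080) instead
    application = default_app()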
0 | 0 | I am working on a small programming game/environment in Python to help my younger brother learn to code. It needs to operate over a network, and I am new to network programming. I am going to explain the concept of the game so that someone can point me in the best direction.
The idea is a simple grid of 25x25 'diodes,' squares with fixed positions and editable color values, essentially simulating a very small screen. In addition to the grid display, there is a command window, where Python code can be entered and sent to an instance of InteractiveConsole, and a chat window. A client needs to be able to send Python commands to the host, which will run the code, and then receive the output in the form of a string representing changes to the grid. My concept for doing this involves maintaining a queue on the host side of incoming and outgoing events to handle and relay to the clients on individual threads. Any given command/chat event will be sent to the host and relayed to all clients, including the client who created the event, so that those events are visible to all clients in their command/chat windows. All changes to the grid will originate with the host as a result of processing commands originated from clients and will also be sent out to all clients.
What I primarily don't understand is how to synchronize between all clients, i.e. how to know when a given item in the queue has been successfully sent out to all clients before clearing it from the queue, since any individual thread doing so prematurely will prevent the item from being sent to other clients. This is an extremely open-ended question because I understand that I will definitely need to consume some learning materials before I'm ready to implement this. I'm not asking for a specific solution but rather for some guidance on what general type of solution could work in my situation. I'm doing this in my spare time, so I don't want to spend a month going through networking tutorials that aren't pointing me in a direction that will be applicable to this project. | true | 43,617,007 | 1.2 | 0 | 0 | 1 | My approach would be to use a udp server that can broadcast to multiple clients. So basically, all the clients would connect to this server during a game session, and the server would broadcast the game state to the clients as it is updated. Since your game is relatively simple this approach would give you real time updates. | 0 | 343 | 0 | 0 | 2017-04-25T17:10:00.000 | python,multithreading,networking,server,multicast | How To Send Data To Multiple Clients From A Queue | 1 | 1 | 1 | 43,617,374 | 0 |
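A sketch of that idea: the server remembers every client address it has heard from and pushes each state update to all of them, so no per-client queue bookkeeping is needed (the port and the placeholder "processing" step are illustrative):

    import socket

    server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    server.bind(('0.0.0.0', 9999))
    clients = set()

    while True:
        event, addr = server.recvfrom(4096)  # command/chat event from a client
        clients.add(addr)                    # remember everyone who has connected
        new_state = event                    # placeholder: run the event, compute the grid diff
        for client in clients:               # relay the update to every known client
            server.sendto(new_state, client)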
0 | 0 | For example, I am interested in gathering daily information on a specific NBA player.
As far as I know, Google does not allow scraping of its results. Does Google offer other options for machine queries? Are there Python packages to perform those queries?
Google Custom Search
Google Alerts
Bing API
There are also some services that sell access to what you want. Off the top of my head, I know of:
Brightplanet
Webhose
(I'm not affiliated with any of these, but I have used all of them in the past.) | 0 | 70 | 0 | 0 | 2017-04-25T20:59:00.000 | python,python-3.x,web-crawler,scrapy-spider,bigdata | How to gather information on a given theme using python? | 1 | 1 | 1 | 43,621,692 | 0 |
0 | 0 | I have a problem with UDP sockets in Python:
I have a piece of software which receives a message from a socket and then does some processing before waiting for the next message from the socket.
Let's suppose that in the meanwhile more messages arrive:
If I'm right, they go into a buffer (FIFO), and every time I read from the socket I get the oldest one, right?
Is there a way to clear the buffer and always read only the newest message? I want to ignore all the older messages...
Another problem is that I get tons of messages every second. How can I empty the buffer if they keep filling it? | false | 43,630,471 | 0.197375 | 0 | 0 | 1 | I ran into the same problem. The solution I chose is to close the socket when I don't need to receive data and reopen it when I do; that way the data sitting in the buffer is discarded. | 0 | 2,313 | 0 | 3 | 2017-04-26T09:41:00.000 | python,sockets,udp,buffer | UDP Socket in Python: How to clear the buffer and ignore oldes messages | 1 | 1 | 1 | 48,019,506 | 0
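An alternative to closing and reopening the socket is to drain the OS buffer in non-blocking mode and keep only the newest datagram; a sketch, assuming sock is your bound UDP socket:

    import socket

    sock.setblocking(False)
    latest = None
    while True:
        try:
            latest, addr = sock.recvfrom(4096)  # keep overwriting: the FIFO drains
        except socket.error:                    # buffer is empty (EWOULDBLOCK)
            break
    sock.setblocking(True)
    # 'latest' now holds the newest message; all older ones were discarded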
1 | 0 | Is there any library available which allows running Python code or any executable binary on the client machine using JavaScript? I have a strange scenario: all client machines use a web application hosted on a server, but authentication should be done with a device on the client machine over the ttyUSB0 interface. Since a web application cannot access the client machine, is it possible to create a client application using a library like Pywebview which would allow calling it directly from the web application using JavaScript? | false | 43,672,788 | 0 | 0 | 0 | 0 | Could you not call a specific page of your web application (assuming it uses a Python web framework) via AJAX from JavaScript, and have that page execute a Python function behind the scenes? | 0 | 66 | 0 | 0 | 2017-04-28T05:43:00.000 | javascript,python | Call Python function from javascript inside client application | 1 | 1 | 1 | 43,673,110 | 0
0 | 0 | I am trying to create a Python program which will listen to 2 websockets indefinitely. The websockets will be used as a one-way pipeline, and each socket will store its data in one variable; when new data comes in, the old data will be replaced.
What would be the best way to do it? I was looking at websocket servers, but I wonder if there is an easier way to just listen to 2 ports/websockets. | false | 43,725,625 | 0 | 0 | 0 | 0 | Using the websockets library and asyncio it is easy: connect two sockets and run a listener on each. Depending on where message handling should happen, either use a forever-running coroutine per socket that handles its own messages, or a higher-level handler coroutine that asyncio.waits on two listener tasks, each essentially awaiting ws.recv(). | 1 | 3,757 | 0 | 0 | 2017-05-01T19:59:00.000 | python,websocket,listener | Python websocket listener | 1 | 2 | 4 | 55,833,478 | 0
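A sketch with the websockets library and asyncio (Python 3.5+ and a websockets version recent enough to support async iteration; the URLs are placeholders). Each listener overwrites one shared slot with the newest message:

    import asyncio
    import websockets

    latest = {'a': None, 'b': None}  # one slot per socket; old data is replaced

    async def listen(name, url):
        async with websockets.connect(url) as ws:
            async for message in ws:  # runs until the connection closes
                latest[name] = message

    loop = asyncio.get_event_loop()
    loop.run_until_complete(asyncio.gather(
        listen('a', 'ws://example.com:8888'),
        listen('b', 'ws://example.com:8889'),
    ))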
0 | 0 | I am trying to create a Python program which will listen to 2 websockets indefinitely. The websockets will be used as a one-way pipeline, and each socket will store its data in one variable; when new data comes in, the old data will be replaced.
What would be the best way to do it? I was looking at websocket servers, but I wonder if there is an easier way to just listen to 2 ports/websockets. | false | 43,725,625 | 0 | 0 | 0 | 0 | I need to listen to exactly 2 ports, for example 8888 and 8889. The data on the ports will be true or false, and I should be able to get the data into the main program, so I can act when, for example, both ports are true. | 1 | 3,757 | 0 | 0 | 2017-05-01T19:59:00.000 | python,websocket,listener | Python websocket listener | 1 | 2 | 4 | 43,742,288 | 0
I'm doing some crawling with Tweepy.
I want the program to run automatically every 12 hours
Could you tell me the syntax?
I'm doing it with Python, and I have all the Tweepy crawling code already. | false | 43,730,867 | 0.099668 | 1 | 0 | 1 | You can try executing your program from the command line and scheduling it as a task using the Task Scheduler built into Windows. There are many alternatives, but that was the easiest I found when I did such a task a few months ago. | 0 | 59 | 0 | 2 | 2017-05-02T05:43:00.000 | python,twitter,save,tweepy | i want your help abut program to run automatically every 12 hours | 1 | 1 | 2 | 43,730,944 | 0
1 | 0 | Is it possible to get lifetime data using the facebookads API in Python? I tried using date_preset:lifetime and time_increment:1, but got a server error instead. Then I found this on their website:
"We use data-per-call limits to prevent a query from retrieving too much data beyond what the system can handle. There are 2 types of data limits:
By number of rows in response, and
By number of data points required to compute the total, such as summary row."
Is there any way I can do this? And another question: is there any way to pull raw data from a Facebook ad account, like a dump of all the data that resides on Facebook for that account? | true | 43,731,132 | 1.2 | 0 | 0 | 0 | The first thing to try is adding the limit parameter, which limits the number of results returned per page.
However, if the account has a large amount of history, the likelihood is that the total amount of data is too great, and in this case, you'll have to query ranges of data.
As you're looking for data by individual day, I'd start trying to query for month blocks, and if this is still too much data, query for each date individually. | 0 | 108 | 0 | 0 | 2017-05-02T06:05:00.000 | python,facebook,facebook-graph-api,facebook-ads-api | Get all data since creation until today from ad account using facebook ads api for python | 1 | 1 | 1 | 44,020,372 | 0 |
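A rough sketch of the month-block approach with the facebookads SDK; treat the parameter names as assumptions to double-check against the Insights API reference, and the token, account id, and date ranges as placeholders:

    from facebookads.api import FacebookAdsApi
    from facebookads.adobjects.adaccount import AdAccount

    FacebookAdsApi.init(access_token='<ACCESS_TOKEN>')
    account = AdAccount('act_<AD_ACCOUNT_ID>')

    for since, until in [('2017-01-01', '2017-01-31'),
                         ('2017-02-01', '2017-02-28')]:
        insights = account.get_insights(params={
            'time_range': {'since': since, 'until': until},
            'time_increment': 1,  # one row per day
            'limit': 500,         # page size, keeps each response small
        })
        for row in insights:
            print(dict(row))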
0 | 0 | I was using Behave and Selenium to test something that uses a large amount of data. The data tables were becoming too big and making the Gherkin documentation unreadable.
I would like to move most of the data from the data tables to an external file such as JSON, but I couldn't find any examples online. | false | 43,736,100 | 0 | 0 | 0 | 0 | I cannot offer an example at the moment, but I would create the JSON file as needed, reference it in a Given or Background step, then load it in the corresponding decorated step method. | 1 | 1,133 | 0 | 0 | 2017-05-02T10:53:00.000 | json,selenium,cucumber,bdd,python-behave | Move Gherkin's (i.e. Cucumber and Behave) Data Table to JSON | 1 | 1 | 1 | 64,931,158 | 0
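A sketch of that pattern with behave; the step text and file layout are illustrative:

    # features/steps/data_steps.py
    import json
    from behave import given

    @given('the order data from "{path}"')
    def load_order_data(context, path):
        # stash the external data on the context for later steps to use
        with open(path) as f:
            context.order_data = json.load(f)

The feature file then reads: Given the order data from "data/orders.json" instead of carrying a huge inline table.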
0 | 0 | I'm using boto3 and trying to upload files. It would be helpful if someone could explain the exact difference between the upload_file() and put_object() S3 methods in boto3.
Is there any performance difference?
Does either of them handle multipart uploads behind the scenes?
What are the best use cases for both? | true | 43,739,415 | 1.2 | 0 | 1 | 51 | The upload_file method is handled by the S3 Transfer Manager, this means that it will automatically handle multipart uploads behind the scenes for you, if necessary.
The put_object method maps directly to the low-level S3 API request. It does not handle multipart uploads for you. It will attempt to send the entire body in one request. | 0 | 16,616 | 0 | 49 | 2017-05-02T13:40:00.000 | python,amazon-web-services,amazon-s3,boto3 | What is the Difference between file_upload() and put_object() when uploading files to S3 using boto3 | 1 | 1 | 3 | 43,744,495 | 0 |
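Side by side, assuming an existing bucket and local files (the names are placeholders):

    import boto3

    s3 = boto3.client('s3')

    # high-level: the Transfer Manager splits large files into multipart uploads
    s3.upload_file('backup.tar.gz', 'my-bucket', 'backups/backup.tar.gz')

    # low-level: a single PutObject request; the whole body goes in one shot
    # (and is subject to the 5 GB single-PUT limit)
    with open('small.txt', 'rb') as f:
        s3.put_object(Bucket='my-bucket', Key='small.txt', Body=f)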
0 | 0 | I've been scratching my head trying to figure out if this is possible.
I have a server program running with about 30 different socket connections to it from all over the country. I need to update this server program now and although the client devices will automatically reconnect, its not totally reliable.
I was wondering: is there a way to save the socket object to a file and load it back up when the server restarts, or to forcefully keep a socket open even after the program stops? That way the clients would never disconnect at all.
I could really do with hot-swappable code here!
And as mentioned by @gavinb, you can import importlib and then use importlib.reload(module) to reload a module dynamically.
Be careful: the parameter passed to reload() must be a module object. | 0 | 370 | 0 | 2 | 2017-05-02T13:47:00.000 | python,sockets | Python 3 Sockets - Can I keep a socket open while stopping and re-running a program? | 1 | 1 | 3 | 43,740,468 | 0
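A sketch of the reload call; "handlers" is a placeholder for the module holding the message-handling code you want to swap:

    import importlib
    import handlers  # module with the logic you want to hot-swap

    def reload_handlers():
        # re-executes handlers.py in place; sockets owned by the running
        # process keep living, only the handling code is replaced
        importlib.reload(handlers)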
0 | 0 | How do you detect "bounceed" email replies and other automated responses for failed delivery attempts in Python?
I'm implementing a simple server to relay messages between email and comments inside a custom web application. Because my comment model supports a "reply to all" feature, if two emails in a comment thread become invalid, there would possibly be an infinite email chain where my system would send out an email, get a bounceback email, relay this to the other invalid email, get a bounceback email, relay this back to the first, ad infinitum.
I want to avoid this. Is there a standard error code used for bounced or rejected emails that I could check for, ideally with Python's imaplib package? | false | 43,782,851 | 0 | 1 | 0 | 0 | Avoid reacting to messages whose Return-Path isn't in From, or that contain an Auto-Submitted or X-Loop header field, or that have a bodypart with type multipart/report.
You may also want to specify Auto-Submitted: auto-generated on your outgoing mail. I expect that if you do as Max says that'll take care of the problem, but Auto-Submitted isn't expensive. | 0 | 2,370 | 0 | 2 | 2017-05-04T12:24:00.000 | python,email,imap | How to detect bounce emails | 1 | 1 | 3 | 43,785,616 | 0 |
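A sketch of those checks on a raw message fetched via imaplib, using the standard library email package (the null Return-Path check at the end is an additional common bounce heuristic, not part of the answer above):

    import email

    def is_auto_response(raw_bytes):
        msg = email.message_from_bytes(raw_bytes)
        if msg.get('Auto-Submitted', 'no').lower() != 'no':
            return True  # auto-generated or auto-replied mail
        if msg.get('X-Loop'):
            return True
        if msg.get_content_type() == 'multipart/report':
            return True  # delivery status notification (bounce report)
        # a null Return-Path ("<>") is the classic marker of a bounce message
        return msg.get('Return-Path', '').strip() == '<>'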
0 | 0 | I am running a python script on a linux server 4.9 kernel. This script uses selenium to open a firefox instance and load some website. The script is supposed to be running for days. Since I am running the process over ssh, I have tried both screen and nohup, but the process just stops after a few hours.
I can see the python process using top but its terminal output is just paused. I am unable to understand why is this happening. | false | 43,785,929 | 0 | 0 | 0 | 0 | I met the similar situation recently. It looks like that the antivirus software on my laptop interrupted the geckodriver in the middle. I turn off that software and now it keep running. | 0 | 228 | 0 | 2 | 2017-05-04T14:39:00.000 | python,linux,selenium,background-process | Python selenium script stops working after a few hours | 1 | 1 | 1 | 43,824,917 | 0 |
0 | 0 | Can we run a Python Web-Scraping Code inside SSIS. If Yes, What is the effect of Using Beautiful Soup & Selenium ? Which one can be preferred. Is there a better way to run this.
My Requirement is to, get the data from the website using python script and store it in a table every time I run the package. | true | 43,787,228 | 1.2 | 0 | 0 | 1 | You can run python script from within SSIS by calling the .py script file from execute process task. That being said, the server where this is being run needs to have the Python installed. | 0 | 507 | 0 | 0 | 2017-05-04T15:33:00.000 | python,ssis,web-scraping | Python WebScraping Script inside SSIS Package during ETL | 1 | 1 | 1 | 43,787,287 | 0 |
1 | 0 | I know how to write and read from a file in S3 using boto. I'm wondering if there is a way to append to a file without having to download the file and re-upload an edited version? | true | 43,791,236 | 1.2 | 0 | 0 | 5 | There is no way to append data to an existing object in S3. You would have to grab the data locally, add the extra data, and then write it back to S3. | 0 | 4,234 | 0 | 5 | 2017-05-04T19:24:00.000 | python-3.x,amazon-s3,boto,boto3 | Appending to a text file in S3 | 1 | 1 | 1 | 43,791,579 | 0 |
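So the "append" is really read-modify-write; a sketch with boto3 (bucket and key are placeholders):

    import boto3

    s3 = boto3.client('s3')

    def append_to_s3(bucket, key, new_text):
        # S3 objects are immutable: fetch the whole body, extend it, rewrite it
        body = s3.get_object(Bucket=bucket, Key=key)['Body'].read()
        s3.put_object(Bucket=bucket, Key=key, Body=body + new_text.encode())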
0 | 0 | I'm running rather long documents through Spacy, and would like to retain position markers of paragraphs in the Spacy doc but ignore them in the parse. I'm doing this to avoid creating a lot of different docs for all the paragraphs.
Example using XPath:
\\paragraph[@id="ABC"] This is a test sentence in paragraph ABC
I'm looking for a bit of direction here. Do I need to add entities/types or implement a customized tokenizer? Can I use the matcher with a callback function to affect that specific token?
Your Environment
Installed models: en
Python version: 3.4.2
spaCy version: 1.8.1
Platform: Linux-3.16.0-4-686-pae-i686-with-debian-8.6 | false | 43,799,270 | 0.379949 | 0 | 0 | 2 | spaCy's tokenizer is non-destructive, so you can always find your way back to the original string -- text[token.idx : token.idx + len(token)] will always get you the text of the token.
So, you should never need to embed non-linguistic metadata within the text, and then tell the statistical model to ignore it.
Instead, make the metadata a standoff annotation, that holds a character start and end point. You can always make a labelled Span object after the doc is parsed for your paragraphs.
Btw, in order to keep the alignment, spaCy does have tokens for significant whitespace. This sometimes catches people out. | 0 | 837 | 0 | 0 | 2017-05-05T07:44:00.000 | python,nlp,spacy | Spacy: Retain position markers in string, ignore them in Spacy | 1 | 1 | 1 | 43,799,855 | 0 |
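A sketch of the standoff approach: parse only the clean text, keep the paragraph id next to character offsets, and recover the paragraph's tokens afterwards via token.idx (the text and offsets are taken from the question's example):

    import spacy

    nlp = spacy.load('en')
    text = 'This is a test sentence in paragraph ABC'
    paragraphs = [('ABC', 0, len(text))]  # (id, char_start, char_end) standoff

    doc = nlp(text)
    for pid, start, end in paragraphs:
        # tokens whose original character offsets fall inside the paragraph
        tokens = [t for t in doc if start <= t.idx < end]
        print(pid, tokens)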
0 | 0 | Issue
I can't get the python requests library, easy_install, or pip to work behind the corporate proxy. I can, however, get git to work.
How I got git working
I set the git proxy settings
git config --global http.proxy http ://proxyuser:[email protected]:8080
The corporate proxy server I work behind requires a user name and password and is indeed in the format
http: //username:passsword@ipaddress:port
I did not have to set the https.proxy
Things I have tried
(None of it has worked)
Environment Variables - Pip and Requests library
Method 1
$ export HTTP_PROXY="http://username:passsword@ipaddress:port"
$ export HTTPS_PROXY="http://username:passsword@ipaddress:port"
Method 2
SET HTTP_PROXY="http://username:passsword@ipaddress:port"
SET HTTPS_PROXY="http://username:passsword@ipaddress:port"
I have tried both restarting after setting the proxy variables, and trying them right after setting them
Checking the variables with the 'SET' command shows that both are set correctly
Using Proxy Argument - Requests library
Creating a dictionary with the proxy information and passing it to requests.get()
proxies = {
'http': 'http: //username:passsword@ipaddress:port',
'https': 'http: //username:passsword@ipaddress:port'}
requests.get('http: //example.org', proxies=proxies)
Using Proxy Argument - pip
pip install library_name --proxy=http: //username:passsword@ipaddress:port
pip install library_name --proxy=username:passsword@ipaddress:port
Results - Requests library
Response
Response [407]
Reason
'Proxy Authorization Required'
Header Information
{'Proxy-Authenticate': 'NTLM', 'Date': 'Fri, 05 May 2017 21:49:06 GMT', 'Cache-Control': 'no-cache', 'Pragma': 'no-cache', 'Content-Type': 'text/html; charset="UTF-8"', 'Content-Length': '4228', 'Accept-Ranges': 'none', 'Proxy-Connection': 'keep-alive'}
Results - Pip
Retrying (Retry(total=4, connect=None, read=None, redirect=None)) after connection broken by 'ProxyError('Cannot connect to proxy.', OSError('Tunnel connection failed: 407 Proxy Authorization Required',))'
Note: In regards to this post, I have included a space in all my authentication 'links' between "http" and "://" because stackoverflow won't let me publish this with so many 'links'.
(I set up a new Stackoverflow account as my old account was a login via facebook thing and I can't access it from work) | false | 43,814,526 | 0 | 0 | 0 | 0 | I solved this by installing Fiddler and using it as my local proxy, while Fiddler itself used the corporate proxy. (The 407 response with Proxy-Authenticate: NTLM means the proxy requires NTLM authentication, which pip and requests do not speak natively, so a local relay like Fiddler performs that handshake for them.) | 0 | 939 | 0 | 0 | 2017-05-05T22:20:00.000 | git,proxy,pip,python-requests,ntlm | Requests library & Pip Ntlm Proxy settings issues. - Python | 1 | 1 | 1 | 44,008,516 | 0
1 | 0 | For optimization purposes, I need my spider to skip any website that has timed out once, and not let Scrapy queue it and retry it again and again.
How can this be achieved?
Thanks. | false | 43,827,307 | -0.197375 | 0 | 0 | -1 | I just found out,
In settings.py, set:
RETRY_ENABLED = False, and that will take care of it :) | 0 | 117 | 0 | 0 | 2017-05-07T02:41:00.000 | python,scrapy,scrapy-spider | How to not allow scrapy to retry timed out websites? | 1 | 1 | 1 | 43,827,353 | 0
1 | 0 | I have entered parameters in a webpage and am applying them by clicking the apply button.
1. The apply button gets clicked for web elements outside the frame using the command below:
browser1.find_element_by_name("action").click()
2. The apply button does not get clicked when saving the parameters inside a frame of the web page, using the same command:
browser1.find_element_by_name("action").click() | false | 43,846,875 | 0 | 0 | 0 | 0 | I do not know the Python syntax, but in Selenium you have to switch to the frame much like you would switch to a new tab or window before performing your click.
The syntax in Java would be: driver.switchTo().frame(); | 0 | 1,392 | 0 | 0 | 2017-05-08T11:48:00.000 | python-3.x,selenium | click() method not working inside a frame in selenium python | 1 | 1 | 3 | 43,850,009 | 0 |
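The Python equivalent with the browser1 object from the question (the frame locator is a placeholder; it can be a name, an index, or a WebElement):

    # enter the frame, click, then return to the top-level document
    browser1.switch_to.frame('frame_name_or_index')
    browser1.find_element_by_name('action').click()
    browser1.switch_to.default_content()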
0 | 0 | Title pretty much sums it up.
I have a website with a custom JS file, and in order to write some tests I need to prevent that JS file from loading and inject my own JS file instead, using the execute_script function through ChromeDriver. I'm not sure about a working approach here.
I was thinking about adding rules to the NoScript add-on to prevent the first JS file from loading, then injecting my own JS file, but I'm not sure whether that's possible.
It's possible to alter the page with Selenium with a piece of JavaScript via executeScript. However it can only be done once the page is fully loaded and probably after the execution of the original script.
One way would be to use a proxy to intercept the resource and forward a different one. | 0 | 758 | 0 | 3 | 2017-05-08T13:29:00.000 | python,selenium,webdriver | How to prevent Selenium from loading js file? | 1 | 1 | 1 | 43,851,121 | 0 |
0 | 0 | I would like to serve a Captive Portal - page that would prompt the user to agree to certain terms before starting browsing the web from my wireless network (through WiFi) using http.server on my Raspberry Pi that runs the newest version of Raspbian.
I have http.server (comes with Python3) up an running and have a webpage portal.html locally on the Pi. This is the page that I want users to be redirected to when they connect to my Pi. Lets say that the local IP of that page is 192.168.1.5:80/portal.html
My thought is that I would then somehow allow their connection when they have connected and accepted the terms and conditions.
How would I go about that? | false | 43,849,370 | 0 | 1 | 0 | 0 | You need to work out the terms-and-conditions document and tie acceptance of the conditions to a specific user. You can implement a simple login and link an identifier (name or ID) to a Boolean, for example "accepted = true". If you don't need to store user data, just redirect to your document, and once "agree" is checked, allow the user's connection. | 0 | 2,404 | 0 | 4 | 2017-05-08T13:46:00.000 | python-3.x,networking,simplehttpserver,captivenetwork,captiveportal | How do I implement a simple Captive Portal with `http.server`? | 1 | 1 | 2 | 43,850,242 | 0
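A sketch of the redirect half with http.server: every request from a client that has not yet accepted the terms gets a 302 to portal.html (the address comes from the question; the in-memory accepted set is the simplest possible placeholder for real storage, and actually letting accepted clients out to the internet still needs a matching firewall rule):

    from http.server import BaseHTTPRequestHandler, HTTPServer

    accepted = set()  # client IPs that have agreed to the terms

    class PortalHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            ip = self.client_address[0]
            if self.path == '/agree':
                accepted.add(ip)  # the user clicked "I agree"
                self.send_response(200)
                self.end_headers()
                self.wfile.write(b'Welcome online!')
            elif ip not in accepted:
                self.send_response(302)  # captive redirect
                self.send_header('Location', 'http://192.168.1.5:80/portal.html')
                self.end_headers()
            else:
                self.send_response(204)
                self.end_headers()

    HTTPServer(('0.0.0.0', 80), PortalHandler).serve_forever()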
1 | 0 | Has anyone here ever worked with Chalice? It's an AWS tool for creating APIs. I want to use it to create a single-page application, but I'm not sure how to actually serve HTML from it. I've seen videos where it's explored, but I can't figure out how they actually built the thing. Does anyone have advice on where to go and how to start?
Assuming you are interested in a purely "serverless" application model, I suggest looking into using the API Gateway "Proxy" resource type, forwarding to static resources on S3.
Worth noting that it's probably possible to serve HTML from Chalice, but from an architecture perspective, that's not the intent of the framework and you'd be swimming upstream to get all the capabilities and benefits from tools purpose-built for serving static traffic (full HTTP semantics w/ caching, conditional gets, etc) | 0 | 923 | 0 | 1 | 2017-05-10T16:54:00.000 | python,amazon-web-services,chalice | Using aws chalice to build a single page application? | 1 | 1 | 2 | 44,975,845 | 0 |
1 | 0 | On AWS, I created a new lambda function. I added a role to the lambda that has the policy, AWSLambdaVPCAccessExecutionRole. I placed the lambda in the same VPC as my EC2 instance and made sure the security group assigned to the lambda and EC2 instance have the same default VPC security group created by AWS which allows all traffic within the vpc. On my EC2 instance, I have a tomcat app running on port 8080. I tried to hit the URL by two methods in my lambda function:
Using my load balancer, which has the same assigned security group
Hitting the IP address of the EC2 box with port 8080
Neither of these options works from the Lambda function. I tried them from my local computer and they work fine.
Any suggestions?
Security Group for Inbound
Type = All Traffic
Protocol = All
Port Range = All
Source = Group ID of Security Group | false | 43,923,482 | 0 | 1 | 0 | 0 | Does the security group have port 8080 open to the internet?
To connect Lambdas to a VPC you can't use the default VPC; you have to create one with a NAT gateway.
EDIT: that is only needed if the Lambda function requires access to both the internet and the VPC. | 0 | 1,713 | 0 | 0 | 2017-05-11T18:51:00.000 | python,amazon-web-services,tomcat,amazon-ec2,aws-lambda | AWS Lambda function can't communicate with EC2 Instance | 1 | 1 | 1 | 43,923,646 | 0
0 | 0 | We created an RTMP server using NGINX and have a camera streaming video to that server. We have a Python program that should connect to the RTMP server and then display the video on the computer. When we run the program we keep getting the error below:
RTMP_Connect0, failed to connect socket. 110 (Connection timed out)
I found an RTMP URL online that was used for testing the code and it works, but our RTMP server doesn't. Does anyone know of any settings that need to be changed to get past this error? | false | 43,967,864 | 0 | 0 | 0 | 0 | First, I would check your firewall; TCP port 1935 needs to be open. | 0 | 972 | 0 | 0 | 2017-05-14T19:13:00.000 | python,nginx,streaming,rtmp | RTMP Connection Timeout in Python | 1 | 1 | 1 | 44,003,701 | 0
1 | 0 | We are receiving an error:
ImportError: No module named OAuth2Client
We have noticed scores of questions around this topic, many unanswered and at least one answer that describes the solution of copying over files from the Google App Engine SDK.
This approach, however, seems tedious because all the dependencies are unclear. If we copy over oauth2client then run, the next error is another module that is missing. Fix that, then another module is missing, etc., etc.
What is ironic is that we can see all the files and modules needed, listed from Google App Engine SDK right in PyCharm but they seem inaccessible to the script.
Is there no better way to pull in all the files that oauth2client needs for Python to work on App Engine? | false | 44,011,776 | 0 | 0 | 0 | 0 | Run this
sudo python -m pip install oauth2client | 0 | 80,755 | 0 | 49 | 2017-05-16T21:21:00.000 | python,google-app-engine,oauth-2.0 | How to prevent "ImportError: No module named oauth2client.client" on Google App Engine? | 1 | 1 | 5 | 55,574,741 | 0 |
0 | 0 | I'm using Bottle for my webservice. Currently, it's running on Bottle's default WSGI server and handles HTTP requests. I want to encrypt my webservice and handle HTTPS requests. Can someone suggest a method for this? I tried running on the CherryPy server, but the latest version no longer supports pyOpenSSLAdapter. | false | 44,013,107 | 0.132549 | 0 | 0 | 2 | You need to put your WSGI server (certainly not wsgiref) behind a reverse proxy with HTTPS support. Nginx is the most common choice. | 0 | 4,247 | 1 | 8 | 2017-05-16T23:18:00.000 | python-3.x,https,cherrypy,bottle | How to make bottle server HTTPS python | 1 | 1 | 3 | 44,016,254 | 0
0 | 0 | For example,
"view source code" on Internet Explorer
→ <html> aaa(bbb)ccc </html>
requests.get(url).text
→ <html> aaa()ccc </html>
Why?
How can I get the former HTML text in Python? | false | 44,015,911 | 0.53705 | 0 | 0 | 3 | This can have several explanations:
Either the website filters clients by some criterion (like the User-Agent header), so it only sends the content to "real" clients (i.e. browsers),
Or the website loads an empty page and then populates it with JavaScript, which means you only get the dummy page with your GET request (this can only be the case if you use Inspect Element and not View source code). | 0 | 69 | 0 | 0 | 2017-05-17T04:53:00.000 | python,html | What is the difference between html-text by "view source code" on Internet Explorer and by requests.get() method in python? | 1 | 1 | 1 | 44,016,198 | 0
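A quick sketch for the first case: send a browser-like User-Agent so the site treats the script as a "real" client (the header string is just an example):

    import requests

    headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64)'}
    # without this header, requests identifies itself as "python-requests/x.y"
    html = requests.get(url, headers=headers).text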
0 | 0 | I am trying to create a job using the python api. I have created my own config, but the authentication fails. It produces an error message:
File "/usr/lib/python2.7/dist-packages/jenkins/__init__.py", line 415, in create_job
self.server + CREATE_JOB % locals(), config_xml, headers))
File "/usr/lib/python2.7/dist-packages/jenkins/__init__.py", line 236, in jenkins_open
'Possibly authentication failed [%s]' % (e.code)
jenkins.JenkinsException: Error in request.Possibly authentication failed [403]
The config file I have created was copied from another job config file as it was the easiest way to build it:
I am using the python-jenkins module (import jenkins).
The server instance I create is using these credentials:
server = jenkins.Jenkins(jenkins_url, username = 'my_username', password = 'my_APITOKEN')
Any help will be greatly appreciated. | false | 44,023,078 | 0 | 0 | 0 | 0 | Error 403 is basically issued when the user is not allowed to access the resource. Are you able to access the resource manually using the same credentials? If there are some other admin credentials, then you can try using those.
Also, I am not sure, but maybe you can try running the Python script with admin rights. | 0 | 2,119 | 0 | 1 | 2017-05-17T11:05:00.000 | python,authentication,jenkins | Authentication failed with Jenkins using python API | 1 | 2 | 2 | 44,024,658 | 0
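A small snippet to isolate whether the credentials themselves work, using the same python-jenkins module as the question:

    import jenkins

    server = jenkins.Jenkins(jenkins_url, username='my_username',
                             password='my_APITOKEN')
    # both calls hit authenticated endpoints; a 403 here confirms a
    # credential/permission problem rather than a bad config.xml
    print(server.get_whoami())
    print(server.get_version())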
0 | 0 | I am trying to create a job using the python api. I have created my own config, but the authentication fails. It produces an error message:
File "/usr/lib/python2.7/dist-packages/jenkins/__init__.py", line 415, in create_job
self.server + CREATE_JOB % locals(), config_xml, headers))
File "/usr/lib/python2.7/dist-packages/jenkins/__init__.py", line 236, in jenkins_open
'Possibly authentication failed [%s]' % (e.code)
jenkins.JenkinsException: Error in request.Possibly authentication failed [403]
The config file I have created was copied from another job config file as it was the easiest way to build it:
I am using the import jenkins module.
The server instance I create is using these credentials:
server = jenkins.Jenkins(jenkins_url, username = 'my_username', password = 'my_APITOKEN')
Any help will be greatly appreciated. | false | 44,023,078 | 0 | 0 | 0 | 0 | As far as I know for security reasons in Jenkins 2.x only admins are able to create jobs (to be specific - are able to send PUT requests). At least that's what I encountered using Jenkins Job Builder (also Python) and Jenkins 2.x. | 0 | 2,119 | 0 | 1 | 2017-05-17T11:05:00.000 | python,authentication,jenkins | Authentication failed with Jenkins using python API | 1 | 2 | 2 | 44,050,111 | 0 |
0 | 0 | Bit of a python noob, but I was wondering …
I want to start a thread on startup and pass UDP socket data to it as it comes in, for the thread to process and then respond to the client accordingly.
All the examples I have seen so far create a thread, do something, bin it, repeat. I don’t want thousands of threads to be created, just one to handle message data of a particular type.
Is this possible and does anyone know of any examples ?
Thanks | false | 44,026,930 | 0 | 0 | 0 | 0 | Yes, it is possible. Do note however that you will not gain throughput this way unless you are able to process the datagrams using a compiled extension module like NumPy or some custom logic. | 1 | 186 | 0 | 0 | 2017-05-17T13:55:00.000 | python,python-3.x,python-multithreading,python-sockets | Python: how to pass UDP socket data to thread for processing | 1 | 1 | 1 | 44,027,062 | 0 |
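A sketch of the pattern: one long-lived worker thread consuming a Queue that the socket loop feeds (the port and the placeholder processing are illustrative):

    import socket
    import threading
    import queue

    jobs = queue.Queue()

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(('0.0.0.0', 5000))

    def worker():
        while True:
            data, addr = jobs.get()   # blocks until the socket loop feeds us
            reply = data.upper()      # placeholder for the real processing
            sock.sendto(reply, addr)  # respond to the client
            jobs.task_done()

    threading.Thread(target=worker, daemon=True).start()  # one thread, started once

    while True:
        jobs.put(sock.recvfrom(4096))  # hand each datagram to the worker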
0 | 0 | I was having some SSL connectivity issues while connecting to a secured URL. Later I fixed the problem by providing a .pem file path for verification, i.e. verify="file/path.pem".
My question is: should this file be stored in a common place on the server, or should it be part of the project source code and hence under version control?
Please advise. | true | 44,037,702 | 1.2 | 1 | 0 | 4 | In most cases a .pem file contains sensitive or environment-specific information, so it should not be part of the project source code. It should be available on a secured server and downloadable with appropriate authorization. | 0 | 513 | 0 | 0 | 2017-05-18T02:39:00.000 | python,version-control,pem | Python: Should I version control .pem files? | 1 | 1 | 1 | 44,037,794 | 0
0 | 0 | I'm trying to establish a connection using ssl in python (I'm using python 3.6.1).
I'm very new to SSL, so I read the ssl documentation and saw that there is a function called create_default_context that returns a new SSLContext object with default settings, and I didn't fully understand it.
I want to create an SSL server (the client is in JavaScript).
My question is whether I can use only the default context I'm creating, or whether I need to create a self-signed certificate and key file for the server as well. | false | 44,042,709 | 0 | 0 | 0 | 0 | Certificate verification can be disabled by setting verify_mode to CERT_NONE (the default for server-side contexts in Python 3.6's ssl module). In this mode only encryption of the data is performed; the identity of the server and the client is not verified. Be careful: this is insecure. | 1 | 1,300 | 0 | 1 | 2017-05-18T08:39:00.000 | python,ssl | python default context object ssl | 1 | 1 | 1 | 44,043,522 | 0
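To make the distinction concrete: even with client verification off, a TLS server must present its own certificate during the handshake, so you do need a cert/key pair (a self-signed one is fine for testing). A sketch; the file names and port are placeholders:

    import socket
    import ssl

    context = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
    context.verify_mode = ssl.CERT_NONE  # don't ask clients for certificates
    # the server still needs its own certificate for the TLS handshake
    context.load_cert_chain(certfile='server.pem', keyfile='server.key')

    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.bind(('0.0.0.0', 8443))
        sock.listen(5)
        with context.wrap_socket(sock, server_side=True) as ssock:
            conn, addr = ssock.accept()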
0 | 0 | I often open hundreds of tabs when using web browsers, and this slows down my computer. So I want to write a browser manager in Python and Selenium which opens tabs and saves their URLs, so I can reopen them later.
But it seems like the only way to get the url of a tab in Python Selenium is calling get_current_url.
I'm wondering if there's a way to get the URL of a tab without switching to it. | false | 44,066,098 | 0 | 0 | 0 | 0 | Just locate the link that opens the other tab and save its @href attribute into a string or list before clicking it. | 0 | 1,256 | 0 | 1 | 2017-05-19T09:18:00.000 | python,selenium,selenium-webdriver,window-handles,getcurrenturl | Selenium: How to get current url of a tab without switching to it? | 1 | 1 | 4 | 44,066,234 | 0
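A sketch of that idea: capture the URL from the link before the new tab ever opens (driver and saved_urls are placeholders for your own objects):

    link = driver.find_element_by_css_selector('a[target="_blank"]')
    url = link.get_attribute('href')  # the tab's future URL, no switching needed
    saved_urls.append(url)            # store it so the tab can be reopened later
    link.click()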
1 | 0 | After several weeks looking for some information here and google, I've decided to post it here to see if anyone with the same problem can raise me a hand.
I have a java application developed in Eclipse Ganymede using tomcat to connect with my local database. The problem is that I want to send a simple message ("Hello World") to a Kafka Topic published on a public server. I've imported the libraries and developed the Kafka function but something happens when I run in debug mode. I have no issues or visible errors when compiling, but when I run the application and push the button to raise this function it stops in KafkaProducer function because there is NoClassDefFoundError kafka.producer..... It seems like it is not finding the library properly, but I have seen that it is in the build path properly imported.
I am not sure if the problem is compatibility between Kafka and this Eclipse version or the Java SDK (3.6); could it be? Does anyone know the minimum required version of Java for Kafka?
Also, I have found that Kafka itself makes heavy use of Scala, but I want to know if I can keep this Eclipse IDE version so as not to change anything.
Another solution I found is to use a Python script called from the Java application, but I have not managed to call it from there; I followed several tutorials and nothing worked. Still, I want to keep trying this since it seems the easier option. I have developed the .py script and it works with the Kafka server; now I have to find a way to exchange variables between Java and Python. If anyone knows a good tutorial for this, please let me know.
After this summary of my days of hitting my head against the wall: maybe someone has run into this error before and can help me find the solution. I really appreciate it, and sorry for the long story. | false | 44,087,849 | 0 | 0 | 0 | 0 | Please include the Kafka client library within the WAR file of the Java application which you are deploying to Tomcat. | 0 | 370 | 0 | 0 | 2017-05-20T15:44:00.000 | python,eclipse,apache-kafka,kafka-producer-api | Send message to a kafka topic using java | 1 | 3 | 3 | 44,092,800 | 0
1 | 0 | After several weeks looking for some information here and google, I've decided to post it here to see if anyone with the same problem can raise me a hand.
I have a java application developed in Eclipse Ganymede using tomcat to connect with my local database. The problem is that I want to send a simple message ("Hello World") to a Kafka Topic published on a public server. I've imported the libraries and developed the Kafka function but something happens when I run in debug mode. I have no issues or visible errors when compiling, but when I run the application and push the button to raise this function it stops in KafkaProducer function because there is NoClassDefFoundError kafka.producer..... It seems like it is not finding the library properly, but I have seen that it is in the build path properly imported.
I am not sure if the problem is with Kafka and the compatibility with Eclipse or Java SDK (3.6), it could be?. Anyone knows the minimum required version of Java for Kafka?
Also, I have found that with Kafka is really used Scala but I want to know if I can use this Eclipse IDE version for not change this.
Another solution that I found is to use a Python script called from the Java application, but I have no way to call it from there since I follow several tutorials but then nothing works, but I have to continue on this because it seems an easier option. I have developed the .py script and works with the Kafka server, now I have to found the solution to exchange variables from Java and Python. If anyone knows any good tutorial for this, please, let me know.
After this resume of my days and after hitting my head with the walls, maybe someone has found this error previously and can help me to find the solution, I really appreciate it and sorry for the long history. | false | 44,087,849 | 0 | 0 | 0 | 0 | Please use org.apache.kafka.clients.producer.KafkaProducer rather than kafka.producer.Producer (which is the old client API) and make sure you have the Kafka client library on the classpath. The client library is entirely in Java. It's the old API that's written in scala, as is the server-side code. You don't need to import the server library in your code or add it to the classpath if you use the new client API. | 0 | 370 | 0 | 0 | 2017-05-20T15:44:00.000 | python,eclipse,apache-kafka,kafka-producer-api | Send message to a kafka topic using java | 1 | 3 | 3 | 44,109,016 | 0 |
1 | 0 | After several weeks looking for some information here and google, I've decided to post it here to see if anyone with the same problem can raise me a hand.
I have a java application developed in Eclipse Ganymede using tomcat to connect with my local database. The problem is that I want to send a simple message ("Hello World") to a Kafka Topic published on a public server. I've imported the libraries and developed the Kafka function but something happens when I run in debug mode. I have no issues or visible errors when compiling, but when I run the application and push the button to raise this function it stops in KafkaProducer function because there is NoClassDefFoundError kafka.producer..... It seems like it is not finding the library properly, but I have seen that it is in the build path properly imported.
I am not sure if the problem is with Kafka and the compatibility with Eclipse or Java SDK (3.6), it could be?. Anyone knows the minimum required version of Java for Kafka?
Also, I have found that with Kafka is really used Scala but I want to know if I can use this Eclipse IDE version for not change this.
Another solution that I found is to use a Python script called from the Java application, but I have no way to call it from there since I follow several tutorials but then nothing works, but I have to continue on this because it seems an easier option. I have developed the .py script and works with the Kafka server, now I have to found the solution to exchange variables from Java and Python. If anyone knows any good tutorial for this, please, let me know.
After this resume of my days and after hitting my head with the walls, maybe someone has found this error previously and can help me to find the solution, I really appreciate it and sorry for the long history. | true | 44,087,849 | 1.2 | 0 | 0 | 0 | At the end the problem was related with the library that was not well added. I had to add it in the build.xml file, importing here the library. Maybe this is useful for the people who use an old Eclipse version.
So now it finds the library but I have to update Java version, other matter. So it is solved | 0 | 370 | 0 | 0 | 2017-05-20T15:44:00.000 | python,eclipse,apache-kafka,kafka-producer-api | Send message to a kafka topic using java | 1 | 3 | 3 | 44,161,576 | 0 |
1 | 0 | I've built a small telegram bot using python-telegram-bot.
When a conversation is started, I add a periodic job to the job queue and then message the user every X minutes.
The problem is that when my bot goes offline (maintenance, failures, etc.), the job queue is lost and clients no longer receive updates unless they send /start again.
I could store all chat_ids in a persistent queue and restore them at startup, but how do I send a message without responding to an update? | false | 44,090,457 | 0 | 0 | 0 | 0 | You have lots of options. First you need to store all the chat_ids; you can do that in a database or a simple text file.
Then you need a trigger in order to start sending messages. I'm not familiar with your exact stack, but I just created a simple service to do it. | 0 | 778 | 0 | 1 | 2017-05-20T20:19:00.000 | telegram-bot,python-telegram-bot | Restoring job-queue between telegram-bot restarts | 1 | 1 | 1 | 44,094,915 | 0
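A sketch of both halves with python-telegram-bot (pre-v12 callback style, matching the library's 2017-era API; the token, filename, and interval are illustrative): persist the chat ids to a file whenever /start runs, and on startup re-create the periodic jobs without waiting for any update:

    import json
    from telegram.ext import Updater

    updater = Updater(token='YOUR_BOT_TOKEN')

    def periodic(bot, job):
        # messages can be pushed at any time; no incoming update is required
        bot.send_message(chat_id=job.context, text='still alive')

    with open('chat_ids.json') as f:  # ids saved whenever /start runs
        for chat_id in json.load(f):
            updater.job_queue.run_repeating(periodic, interval=600,
                                            context=chat_id)

    updater.start_polling()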