Web Development (int64, 0 to 1) | Data Science and Machine Learning (int64, 0 to 1) | Question (string, 28 to 6.1k chars) | is_accepted (bool, 2 classes) | Q_Id (int64, 337 to 51.9M) | Score (float64, -1 to 1.2) | Other (int64, 0 to 1) | Database and SQL (int64, 0 to 1) | Users Score (int64, -8 to 412) | Answer (string, 14 to 7k chars) | Python Basics and Environment (int64, 0 to 1) | ViewCount (int64, 13 to 1.34M) | System Administration and DevOps (int64, 0 to 1) | Q_Score (int64, 0 to 1.53k) | CreationDate (string, 23 chars) | Tags (string, 6 to 90 chars) | Title (string, 15 to 149 chars) | Networking and APIs (int64, 1 to 1) | Available Count (int64, 1 to 12) | AnswerCount (int64, 1 to 28) | A_Id (int64, 635 to 72.5M) | GUI and Desktop Applications (int64, 0 to 1) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 0 | I have a Raspberry Pi running a Python script posting data to a database on my server. So I would like to do the inverse of this. I need this raspberry pi to do some actions when they are called from the website.
What would be the best approach?
Maybe open some port and start listening for events there? | false | 45,697,524 | 0 | 1 | 0 | 0 | There are two simple ways to achieve this. You haven't described what sort of actions you are processing so the following is quite generic.
Polling
Have a master that all of the workers (pi) connect to and poll to get any work. The workers can do the work and send data back to the master.
Event driven
Run an API on each pi that your master can call for each event. This is going to be the most performant but will probably require more work. | 0 | 33 | 0 | 0 | 2017-08-15T16:47:00.000 | php,python,api,raspberry-pi | Communicate to a local device from website | 1 | 1 | 1 | 45,698,143 | 0 |
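A minimal sketch of the polling approach described in the answer above, assuming the master exposes a small HTTP API; the URL, endpoint paths and payload fields are invented for illustration:

```python
# Hypothetical polling worker for the Pi: the master URL, endpoint names and
# payload fields are placeholders, not part of the original answer.
import time
import requests

MASTER_URL = "http://example.com/api"  # assumed master endpoint

def do_action(job):
    # Placeholder for whatever the Pi actually does with a job.
    return {"job_id": job.get("id"), "status": "done"}

def poll_forever(interval=5):
    while True:
        try:
            # Ask the master whether there is any work queued for this Pi.
            job = requests.get(f"{MASTER_URL}/next-job", timeout=10).json()
            if job:
                result = do_action(job)
                requests.post(f"{MASTER_URL}/results", json=result, timeout=10)
        except requests.RequestException as exc:
            print("poll failed:", exc)
        time.sleep(interval)
```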
1 | 0 | I am trying to scrape reviews from a website and am not able to scrape reviews having a 'read more' option.
I am only able to get data till read more.
I am using BeautifulSoup.
Any help is appreciated. | false | 45,735,733 | 0 | 0 | 0 | 0 | You will have to use Selenium's click action: find the "read more" tag or class and click it, and click it again each time it reappears. Once it no longer shows up, scrape the content you require. | 0 | 2,339 | 0 | 0 | 2017-08-17T12:55:00.000 | python,web-scraping,beautifulsoup,bs4 | How to Scrape reviews with read more from Webpages using BeautifulSoup | 1 | 1 | 3 | 45,735,827 | 0
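A rough sketch of that click-until-gone idea, assuming the "read more" elements share a CSS class; the URL, class names and iteration cap are hypothetical, not taken from the original site:

```python
# Assumes selenium plus a matching browser driver are installed; the URL and
# the "read-more" / "review-text" class names are invented for illustration.
from bs4 import BeautifulSoup
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.com/reviews")

for _ in range(20):  # safety cap so the loop can never run forever
    buttons = [b for b in driver.find_elements(By.CLASS_NAME, "read-more")
               if b.is_displayed()]
    if not buttons:
        break
    for button in buttons:
        try:
            button.click()
        except Exception:
            pass  # the element may have gone stale after the DOM changed

soup = BeautifulSoup(driver.page_source, "html.parser")
reviews = [p.get_text(" ", strip=True) for p in soup.select(".review-text")]
driver.quit()
```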
1 | 0 | I have a list of 1,000 URLs of articles published by different agencies, and of course each has its own HTML layout.
I am writing a python code to extract ONLY the article body from each URL. Can this be done by only looking at the <p></p> paragraph tags?
Will I be missing some content? or including irrelevant content by this approach?
Thanks | false | 45,741,991 | 0 | 0 | 0 | 0 | To answer your question, it's highly unlikely you can get ONLY article content targeting <p></p> tags. You WILL get a lot of unnecessary content that will take a ton of effort to filter through, guaranteed.
Try to find an RSS feed for these websites. That will make scraping target data much easier than parsing an entire HTML page. | 0 | 80 | 0 | 0 | 2017-08-17T17:54:00.000 | python,html-parsing | How to extract article contents from websites with different layouts | 1 | 2 | 2 | 45,742,170 | 0 |
1 | 0 | I have a list of 1,000 URLs of articles published by different agencies, and of course each has its own HTML layout.
I am writing a python code to extract ONLY the article body from each URL. Can this be done by only looking at the <p></p> paragraph tags?
Will I be missing some content? or including irrelevant content by this approach?
Thanks | true | 45,741,991 | 1.2 | 0 | 0 | 0 | For some articles you will be missing content, and for others you will include irrelevant content. There is really no way to grab just the article body from a URL since each site layout will likely vary significantly.
One thing you could try is grabbing text contained in multiple consecutive p tags inside the body tag, but there is still no guarantee you will get just the body of the article.
It would be a lot easier if you broke the list of URLs into a list for each distinct site; that way you could define what the article body is case by case. | 0 | 80 | 0 | 0 | 2017-08-17T17:54:00.000 | python,html-parsing | How to extract article contents from websites with different layouts | 1 | 2 | 2 | 45,742,189 | 0
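As a rough illustration of the consecutive-paragraph heuristic from the accepted answer above, here is a hedged sketch; the container tag names and the paragraph threshold are arbitrary assumptions:

```python
# Heuristic only: picks the container holding the most direct <p> children,
# on the assumption that the article body is the densest paragraph block.
import requests
from bs4 import BeautifulSoup

def guess_article_body(url, min_paragraphs=3):
    soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
    body = soup.body or soup
    candidates = body.find_all(["article", "section", "div"]) or [body]
    best = max(candidates, key=lambda tag: len(tag.find_all("p", recursive=False)))
    paragraphs = best.find_all("p", recursive=False)
    if len(paragraphs) < min_paragraphs:
        return ""  # nothing that looks like an article body
    return "\n\n".join(p.get_text(" ", strip=True) for p in paragraphs)
```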
0 | 0 | The first process will receive and send some data (to complete the authentication) after accepting an sslsocket, then send the sslsocket to another process.
I know that multiprocessing.reduction.send_handle can send a socket, but it didn't work with an sslsocket.
Please help. | false | 45,781,750 | 0.197375 | 0 | 0 | 1 | This is not possible.
SSL sockets in Python are implemented using OpenSSL. For each SSL socket in Python there is user-space state managed by OpenSSL. Transferring an SSL socket to another process would require this internal SSL state to be transferred too. But Python has no direct access to this state, because it only uses the OpenSSL library through its API, and thus cannot transfer it. | 0 | 41 | 0 | 0 | 2017-08-20T11:40:00.000 | ssl,python-3.6 | How to send a sslsocket to a running process | 1 | 1 | 1 | 45,781,816 | 0
0 | 0 | I am trying to build a python application that reads AND writes data to a file online that other instances of the application have access to. I know I could use sockets with a dedicated server, but I don't have one. Is there any service that does this, or should I get a server?
Thanks | false | 45,787,580 | 0 | 0 | 0 | 0 | There are quite a few ways one can imagine handling this. However, the right solution is almost certainly setting up a database. AWS offers free services below a certain tier of usage. Look there if this is a small personal project.
Since you are using python, you should be using sqlalchemy to define a model and interact with your persistent data. You can setup such a database on an ec2 instance for free if you keep it small enough. RDS makes database management easier, but I'm not sure there is a free tier for it. | 0 | 46 | 0 | 0 | 2017-08-20T23:14:00.000 | python,sockets,server | Python Data Acquisition Without Server | 1 | 1 | 1 | 45,787,646 | 0 |
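A minimal sketch of the "define a model with SQLAlchemy" suggestion, assuming SQLAlchemy 1.4+; the table, columns and SQLite URL are placeholders (in practice the URL would point at a database hosted on EC2/RDS):

```python
# Minimal shared-data model; the table, fields and connection URL are
# invented placeholders for whatever the application actually stores.
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class Reading(Base):
    __tablename__ = "readings"
    id = Column(Integer, primary_key=True)
    source = Column(String(64))
    value = Column(String(256))

engine = create_engine("sqlite:///shared_data.db")  # e.g. a server-hosted DB URL
Base.metadata.create_all(engine)
Session = sessionmaker(bind=engine)

session = Session()
session.add(Reading(source="instance-1", value="42"))
session.commit()
print(session.query(Reading).count())
session.close()
```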
1 | 0 | I am using python-social-auth for Google authentication in my Django application. Can I override the python-social-auth URLs ? By default, it's http://mydomain/login/google-oauth2/ and I need to change the URL as part of my view (get request) ; which has the end-point as http://mydomain/login/. | false | 45,793,618 | 0.379949 | 0 | 0 | 2 | The only way to override the URLs is to define your own ones pointing to the views and link it into your main urls.py file.
If what you are after is to make /login automatically handle the Google auth backend, then you need to define a custom view for it that calls python-social-auth's views to fire up the process. | 0 | 339 | 0 | 1 | 2017-08-21T09:27:00.000 | python,django,python-social-auth | override python-social-auth built-in urls? | 1 | 1 | 1 | 45,806,230 | 0
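One hedged way that custom /login view could look, assuming a recent Django and social_django's default URL names; the "social:begin" name and the backend string are assumptions, not taken from the answer:

```python
# urls.py sketch: /login/ immediately hands off to the Google OAuth2 backend.
# Assumes social_django is installed and included under the "social" namespace.
from django.shortcuts import redirect
from django.urls import include, path

def login_view(request):
    # Redirect straight into the python-social-auth flow for this backend.
    return redirect("social:begin", "google-oauth2")

urlpatterns = [
    path("login/", login_view, name="login"),
    path("", include("social_django.urls", namespace="social")),
]
```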
1 | 0 | I have to download the source code of a website like www.humkinar.pk in plain HTML form. Content on the site is dynamically generated. I have tried Selenium's driver.page_source, but it does not download the page completely; images and JavaScript files are left out. How can I download the complete page? Is there any better and easier solution available in Python? | false | 45,796,411 | -0.066568 | 0 | 0 | -1 | It's not allowed to download a website without permission. If you knew that, you would also know that there is hidden code on the hosting server which you, as a visitor, have no access to. | 0 | 1,442 | 0 | 3 | 2017-08-21T11:50:00.000 | javascript,python,html,selenium,web-scraping | Download entire webpage (html, image, JS) by Selenium Python | 1 | 1 | 3 | 45,796,569 | 0
0 | 0 | Temporary disable logging completely
I am trying to write a new log-handler in Python which will post a json-representation of the log-message to a HTTP endpoint, and I am using the request library for the posting. The problem is that both request and urllib3 (used by request) logs, and their loggers has propagate=True, meaning that the logs they log will be propagated to any parent loggers. If the user of my log-handler creates a logger with no name given, it becomes the root logger, so it will receive this messages, causing an infinite loop of logging. I am a bit lost on how to fix this, and I have two suggestions which both seem brittle.
1) Get the "reguest" and "urllib3" loggers, set their propagate values to false, post the log message before setting the propagate values back to their old values.
2) Check if the incoming record has a name which contains ".request" or ".urllib3", and if it does then ignore the record.
Both of these will break badly if the request library either replaces urllib3 with something else or changes the name of its logger. It also seems likely that method 1 will be problematic in a multi-threaded or multi-process case.
What I would want is some way of disabling all logging for the current thread from some point and then enable it again after we have posted the message, but I don't know any way to do this.
Any suggestions? | false | 45,799,786 | 0.197375 | 0 | 0 | 1 | Importing os.devnull and setting it as a default file handler for parent logger maybe?
I usually flush all logs to devnull except that were explicitly set up (dunno if it's a good or bad practice). | 1 | 197 | 0 | 1 | 2017-08-21T14:34:00.000 | python,logging | Temporary disable logging completely in Python | 1 | 1 | 1 | 45,800,031 | 0 |
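Besides routing to devnull, the per-thread guard that the question asks about can be approximated inside the handler itself; a minimal sketch (the ingest URL is a placeholder):

```python
# HTTP log handler that ignores records generated while it is itself busy
# posting, so requests/urllib3 log output cannot loop back into it.
import json
import logging
import threading

import requests

class HTTPJSONHandler(logging.Handler):
    _local = threading.local()

    def emit(self, record):
        if getattr(self._local, "busy", False):
            return  # a record triggered by our own HTTP call: drop it
        self._local.busy = True
        try:
            payload = {"name": record.name,
                       "level": record.levelname,
                       "message": record.getMessage()}
            requests.post("https://logs.example.com/ingest",  # placeholder URL
                          data=json.dumps(payload),
                          headers={"Content-Type": "application/json"},
                          timeout=5)
        except Exception:
            self.handleError(record)
        finally:
            self._local.busy = False
```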
0 | 0 | What is a responsible / ethical time delay to put in a web crawler that only crawls one root page?
I'm using time.sleep(#) between the following calls
requests.get(url)
I'm looking for a rough idea on what timescales are:
1. Way too conservative
2. Standard
3. Going to cause problems / get you noticed
I want to touch every page (at least 20,000, probably a lot more) meeting certain criteria. Is this feasible within a reasonable timeframe?
EDIT
This question is less about avoiding being blocked (though any relevant info. would be appreciated) and rather what time delays do not cause issues to the host website / servers.
I've tested with 10 second time delays and around 50 pages. I just don't have a clue if I'm being over cautious. | true | 45,807,417 | 1.2 | 0 | 0 | 1 | I'd check their robots.txt. If it lists a crawl-delay, use it! If not, try something reasonable (this depends on the size of the page). If it's a large page, try 2/second. If it's a simple .txt file, 10/sec should be fine.
If all else fails, contact the site owner to see what they're capable of handling nicely.
(I'm assuming this is an amateur server with minimal bandwidth) | 0 | 968 | 0 | 1 | 2017-08-22T00:52:00.000 | python,web-crawler,delay,responsibility | Responsible time delays - web crawling | 1 | 1 | 1 | 45,808,077 | 0 |
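A small sketch of the robots.txt-first advice from the accepted answer, using the standard library's robot parser (Python 3.6+ for crawl_delay); the domain, user agent and fallback delay are placeholders:

```python
# Reads the site's robots.txt, honours a declared Crawl-delay, and otherwise
# falls back to a conservative default. URL and user agent are placeholders.
import time
import urllib.robotparser

import requests

USER_AGENT = "my-little-crawler"
rp = urllib.robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()

delay = rp.crawl_delay(USER_AGENT) or 2.0  # fall back to 2 seconds

for url in ["https://example.com/page1", "https://example.com/page2"]:
    if rp.can_fetch(USER_AGENT, url):
        requests.get(url, headers={"User-Agent": USER_AGENT}, timeout=10)
    time.sleep(delay)
```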
0 | 0 | I'd like to set up Chrome in headless mode and the ChromeDriver for Selenium testing on my PythonAnywhere instance. I can't find any instructions on how to sort this out. Does anyone have any advice/pointers to docs please? | false | 45,822,897 | 0.197375 | 0 | 0 | 2 | PythonAnywhere dev here -- unfortunately Chrome (headless or otherwise) doesn't work in our virtualization environment right now, so it won't work :-(
[edit] ...but now it does! See @Ralf Zosel's answer for more details. | 0 | 365 | 0 | 0 | 2017-08-22T16:33:00.000 | selenium-chromedriver,pythonanywhere,google-chrome-headless | How to Set Up Chrome Headless on PythonAnywhere? | 1 | 1 | 2 | 45,845,020 | 0 |
0 | 0 | I'm writing a basic socket program in Python3, which consists of three different programs - sender.py, channel.py, and receiver.py. The sender should send a packet through the channel to the receiver, then receiver sends an acknowledgement packet back.
It works for sending one packet - it goes through the channel to the receiver, and the receiver sends an acknowledgement packet through the channel to the sender, which gets it successfully. But when the sender tries to send a second packet, it attempts to send it but gets no response, so it sends it again. When it does, it gets BrokenPipeError: [Errno 32] Broken pipe. The channel gives no indication that it receives the second packet, and just sits there waiting. What does this mean and how can it be avoided?
I never call close() on any of the sockets. | true | 45,838,549 | 1.2 | 0 | 0 | 0 | It was because I was generating a new socket each time, rather than just re-using one socket. | 0 | 56 | 0 | 0 | 2017-08-23T11:32:00.000 | python,sockets | How do I avoid a BrokenPipeError while using the sockets module in Python? | 1 | 1 | 1 | 51,373,042 | 0 |
1 | 0 | I am writing a lambda function on Amazon AWS Lambda. It accesses the URL of an EC2 instance, on which I am running a web REST API. The lambda function is triggered by Alexa and is coded in the Python language (python3.x).
Currently, I have hard coded the URL of the EC2 instance in the lambda function and successfully ran the Alexa skill.
I want the lambda function to automatically obtain the IP from the EC2 instance, which keeps changing whenever I start the instance. This would ensure that I don't have to go into the code and hard-code the URL each time I start the EC2 instance.
I stumbled upon a similar question on SO, but it was unanswered. However, there was a reply which indicated updating IAM roles. I have already created IAM roles for other purposes before, but I am still not used to it.
Is this possible? Will it require managing of security groups of the EC2 instance?
Do I need to set some permissions/configurations/settings? How can the lambda code achieve this?
Additionally, I pip installed the requests library on my system, and I tried uploading a '.zip' file with the structure :
REST.zip/
requests library folder
index.py
I am currently using the urllib library
When I use zip files for my code upload (I currently edit code inline), it can't even access the index.py file to run the code. | true | 45,854,752 | 1.2 | 1 | 0 | 2 | You could do it using boto3, but I would advise against that architecture. A better approach would be to use a load balancer (even if you only have one instance), and then use the CNAME record of the load balancer in your application (this will not change for as long as the LB exists).
An even better way, if you have access to your own domain name, would be to create a CNAME record and point it to the address of the load balancer. Then you can happily use the DNS name in your Lambda function without fear that it would ever change. | 0 | 613 | 0 | 0 | 2017-08-24T06:49:00.000 | python,python-3.x,amazon-web-services,amazon-ec2,aws-lambda | Obtain EC2 instance IP from AWS lambda function and use requests library | 1 | 1 | 1 | 45,856,165 | 0 |
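For completeness, a hedged sketch of the boto3 lookup that the accepted answer mentions (and advises against in favour of a load balancer CNAME); the instance ID and path are placeholders, and the Lambda's role would need ec2:DescribeInstances:

```python
# Looks up the current public IP of a known instance at runtime instead of
# hard-coding it. The instance ID and API path are placeholders.
import urllib.request

import boto3

def lambda_handler(event, context):
    ec2 = boto3.client("ec2")
    resp = ec2.describe_instances(InstanceIds=["i-0123456789abcdef0"])
    instance = resp["Reservations"][0]["Instances"][0]
    ip = instance.get("PublicIpAddress")
    if not ip:
        return {"error": "instance has no public IP (is it running?)"}
    with urllib.request.urlopen(f"http://{ip}/api/health", timeout=10) as r:
        return {"status": r.status}
```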
0 | 1 | I am trying to validate a system to detect more than 2 cluster in a network graph. For this i need to create a synthetic graph with some cluster. The graph should be very large, more than 100k nodes at least. I s there any system to do this? Any known dataset with more than 2 cluster would also be suffice. | false | 45,857,602 | 0 | 0 | 0 | 0 | If you stick to networkx, you can generate two large complete graphs with nx.complete_graph(), merge them, and then add some edges connecting randomly chosen nodes in each graph. If you want a more realistic example, build dense nx.erdos_renyi_graph()s instead of the complete graphs. | 0 | 182 | 0 | 0 | 2017-08-24T09:15:00.000 | python,graph,networkx | Synthetic network graph | 1 | 1 | 1 | 45,891,842 | 0 |
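A possible way to generate a large synthetic graph with two planted clusters along the lines of that answer; the sizes, probabilities and bridge count are arbitrary choices:

```python
# Builds two random communities and joins them with a few sparse "bridge"
# edges; all sizes and probabilities below are arbitrary illustrative values.
import random

import networkx as nx

n_per_cluster = 50_000        # ~100k nodes in total
inside_p = 0.0005             # denser inside each community
n_bridges = 200               # sparse links between the two communities

g1 = nx.fast_gnp_random_graph(n_per_cluster, inside_p, seed=1)
g2 = nx.fast_gnp_random_graph(n_per_cluster, inside_p, seed=2)
g = nx.disjoint_union(g1, g2)  # relabels g2's nodes to n_per_cluster..2n-1

for _ in range(n_bridges):
    u = random.randrange(n_per_cluster)
    v = random.randrange(n_per_cluster, 2 * n_per_cluster)
    g.add_edge(u, v)

print(g.number_of_nodes(), g.number_of_edges())
```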
1 | 0 | Would it be possible to upload a file and store it in a session and send that file using requests in python ? Can someone get back with a detailed answer | false | 45,892,395 | 0 | 0 | 0 | 0 | I'm having a hard time understanding your need for this type of implementation.
Are you trying to pass a JSON file between sessions? or just any type of file?
It would help to know your idea behind this implementation. | 0 | 35 | 0 | 0 | 2017-08-26T06:00:00.000 | django,python-2.7,python-requests | Handling file field values in session | 1 | 1 | 1 | 45,893,315 | 0 |
0 | 0 | I am writing an application in Python to acquire data over a serial connection. I use the pyserial library for establishing the communication. What is the best approach to request data at an interval (e.g. every 2 seconds)? I always have to send a request, wait for the answer, and start the process again. | false | 45,912,973 | 0 | 0 | 0 | 0 | If this is a "slow" process that does not need accurate timing precision, use a while loop and time.sleep(2) to pause the process for 2 seconds between requests. | 0 | 241 | 0 | 0 | 2017-08-28T06:35:00.000 | python,pyserial,data-acquisition | pyserial time based repeated data request | 1 | 1 | 2 | 45,913,039 | 0
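A minimal polling loop in that spirit; the port name, baud rate and request/response framing are placeholders for whatever the instrument actually expects:

```python
# Simple request/response polling every ~2 seconds. Port name, baud rate and
# the request bytes are placeholders for the real device protocol.
import time

import serial

ser = serial.Serial("/dev/ttyUSB0", 9600, timeout=1)

try:
    while True:
        ser.write(b"READ\r\n")          # device-specific request command
        reply = ser.readline()          # waits up to `timeout` seconds
        if reply:
            print(reply.decode(errors="replace").strip())
        time.sleep(2)                   # crude 2-second interval
finally:
    ser.close()
```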
1 | 0 | I am a student and totally new to scraping. Today my supervisor gave me the task of getting the list of followers of a user or page (celebrity, etc.).
The list should contain information about every user (i.e. user name, screen name, etc.).
After a long search I found that I can't get the age and gender of any user on Twitter.
Secondly, I found help on getting the list of my own followers, but I couldn't find anything about how to get the follower list of a public account.
Kindly tell me whether this is possible or not, and if it is possible, what the ways are to reach my goal.
thank you in advance | false | 45,923,490 | 0 | 1 | 0 | 0 | Hardly so, and even if you manage to somehow do it, you'll most likely get blacklisted. Also, please read the community guidelines when it comes to posting questions. | 0 | 279 | 0 | 0 | 2017-08-28T16:26:00.000 | python,twitter,web-scraping,text-mining,scrape | is it possible to scrape list of followers of a public twitter acount (page) | 1 | 1 | 1 | 45,959,340 | 0 |
0 | 0 | Trying to install Google Cloud SDK(Python) on Windows 10 for All Users. Getting the following error.
This is a new machine that I am building up fresh. Python 2.7 was installed prior to this.
Please help me to resolve this.
Output folder: C:\Program Files (x86)\Google\Cloud SDK Downloading
Google Cloud SDK core. Extracting Google Cloud SDK core. Create Google
Cloud SDK bat file: C:\Program Files (x86)\Google\Cloud
SDK\cloud_env.bat Installing components. Welcome to the Google Cloud
SDK! This will install all the core command line tools necessary for
working with the Google Cloud Platform. Traceback (most recent call
last): File "C:\Program Files (x86)\Google\Cloud
SDK\google-cloud-sdk\bin\bootstrapping\install.py", line 214, in
main() File "C:\Program Files (x86)\Google\Cloud SDK\google-cloud-sdk\bin\bootstrapping\install.py", line 192, in main
Install(pargs.override_components, pargs.additional_components) File "C:\Program Files (x86)\Google\Cloud
SDK\google-cloud-sdk\bin\bootstrapping\install.py", line 134, in
Install
InstallOrUpdateComponents(to_install, update=update) File "C:\Program Files (x86)\Google\Cloud
SDK\google-cloud-sdk\bin\bootstrapping\install.py", line 177, in
InstallOrUpdateComponents
['--quiet', 'components', verb, '--allow-no-backup'] + component_ids) File "C:\Program Files (x86)\Google\Cloud
SDK\google-cloud-sdk\lib\googlecloudsdk\calliope\cli.py", line 813, in
Execute
self._HandleAllErrors(exc, command_path_string, specified_arg_names) File "C:\Program Files (x86)\Google\Cloud
SDK\google-cloud-sdk\lib\googlecloudsdk\calliope\cli.py", line 787, in
Execute
resources = args.calliope_command.Run(cli=self, args=args) File "C:\Program Files (x86)\Google\Cloud
SDK\google-cloud-sdk\lib\googlecloudsdk\calliope\backend.py", line
754, in Run
resources = command_instance.Run(args) File "C:\Program Files (x86)\Google\Cloud
SDK\google-cloud-sdk\lib\surface\components\update.py", line 99, in
Run
version=args.version) File "C:\Program Files (x86)\Google\Cloud SDK\google-cloud-sdk\lib\googlecloudsdk\core\updater\update_manager.py",
line 850, in Update
command_path='components.update') File "C:\Program Files (x86)\Google\Cloud
SDK\google-cloud-sdk\lib\googlecloudsdk\core\updater\update_manager.py",
line 591, in _GetStateAndDiff
command_path=command_path) File "C:\Program Files (x86)\Google\Cloud
SDK\google-cloud-sdk\lib\googlecloudsdk\core\updater\update_manager.py",
line 574, in _GetLatestSnapshot
*effective_url.split(','), command_path=command_path) File "C:\Program Files (x86)\Google\Cloud
SDK\google-cloud-sdk\lib\googlecloudsdk\core\updater\snapshots.py",
line 165, in FromURLs
for url in urls] File "C:\Program Files (x86)\Google\Cloud SDK\google-cloud-sdk\lib\googlecloudsdk\core\updater\snapshots.py",
line 186, in _DictFromURL
response = installers.ComponentInstaller.MakeRequest(url, command_path) File "C:\Program Files (x86)\Google\Cloud
SDK\google-cloud-sdk\lib\googlecloudsdk\core\updater\installers.py",
line 285, in MakeRequest
return ComponentInstaller._RawRequest(req, timeout=timeout) File "C:\Program Files (x86)\Google\Cloud
SDK\google-cloud-sdk\lib\googlecloudsdk\core\updater\installers.py",
line 329, in _RawRequest
should_retry_if=RetryIf, sleep_ms=500) File "C:\Program Files (x86)\Google\Cloud
SDK\google-cloud-sdk\lib\googlecloudsdk\core\util\retry.py", line 155,
in TryFunc
return func(*args, kwargs), None File "C:\Program Files (x86)\Google\Cloud
SDK\google-cloud-sdk\lib\googlecloudsdk\core\url_opener.py", line 73,
in urlopen
return opener.open(req, data, timeout) File "c:\users\cpa8161\appdata\local\temp\tmpxcdivh\python\lib\urllib2.py",
line 429, in open
response = self._open(req, data) File "c:\users\cpa8161\appdata\local\temp\tmpxcdivh\python\lib\urllib2.py",
line 447, in _open
'_open', req) File "c:\users\cpa8161\appdata\local\temp\tmpxcdivh\python\lib\urllib2.py",
line 407, in _call_chain
result = func(*args) File "C:\Program Files (x86)\Google\Cloud SDK\google-cloud-sdk\lib\googlecloudsdk\core\url_opener.py", line 58,
in https_open
return self.do_open(build, req) File "c:\users\cpa8161\appdata\local\temp\tmpxcdivh\python\lib\urllib2.py",
line 1195, in do_open
h.request(req.get_method(), req.get_selector(), req.data, headers) File
"c:\users\cpa8161\appdata\local\temp\tmpxcdivh\python\lib\httplib.py",
line 1042, in request
self._send_request(method, url, body, headers) File "c:\users\cpa8161\appdata\local\temp\tmpxcdivh\python\lib\httplib.py",
line 1082, in _send_request
self.endheaders(body) File "c:\users\cpa8161\appdata\local\temp\tmpxcdivh\python\lib\httplib.py",
line 1038, in endheaders
self._send_output(message_body) File "c:\users\cpa8161\appdata\local\temp\tmpxcdivh\python\lib\httplib.py",
line 882, in _send_output
self.send(msg) File "c:\users\cpa8161\appdata\local\temp\tmpxcdivh\python\lib\httplib.py",
line 844, in send
self.connect() File "C:\Program Files (x86)\Google\Cloud SDK\google-cloud-sdk\lib\third_party\httplib2__init__.py", line 1081,
in connect
raise SSLHandshakeError(e)
**httplib2.SSLHandshakeError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:661) Failed to install. | false | 45,927,259 | 0.066568 | 0 | 0 | 2 | I just spent hours trying to make the installer run trying to edit ca cert files but the installer keeps wiping the directories as part of the installation process. In order to make the bundle gcloud sdk installer work, I ended up having to create an environment variable SSL_CERT_FILE and setting the path to a ca cert text file that contained the Google CAs + my company's proxy CA cert. Then the installer ran without issue. It seems that env variable is used by the python http client for CA validation.
Then you need to run gcloud config set custom_ca_certs_file before running gcloud init | 0 | 13,684 | 1 | 2 | 2017-08-28T20:56:00.000 | google-cloud-platform,google-cloud-sdk,google-cloud-python | google cloud python sdk installation error - SSL Certification Error | 1 | 1 | 6 | 56,367,158 | 0 |
0 | 0 | I have a Python + Selenium script that helps me scrape information. However, the webpage encounters an error from time to time and then I need to refresh the page and scrape again. The problem is that the error is erratic and it might crash my scraper when I already clicked some buttons or filled some forms.
I need to find an elegant method to refresh the page exactly with the same buttons clicked (I mean, exactly to the same state). Any help? | false | 45,935,059 | 0 | 0 | 0 | 0 | If you build a robust scraping system, you must think about exception handling. I have done some large scraping projects; in those projects we use a logging system to record the exception and then retry whenever a scrape does not succeed. | 0 | 289 | 0 | 0 | 2017-08-29T09:12:00.000 | python,selenium,selenium-webdriver,web-scraping,selenium-chromedriver | How to make Selenium refresh page to last state of its elements? | 1 | 1 | 1 | 45,935,491 | 0
0 | 0 | I'm a fresh-out-of-college programmer with some experience in Python and Javascript, and I'm trying to develop either a website or just a back-end system that will aggregate information from online market websites which don't have any API (or none that I've found, anyway). Ideally I would also want the system that can write to local storage to track changes to the data over time in some kind of database, but that's down the road a bit.
I've already pounded out some javascript that can grab the data I want, but apparently there doesn't seem to be a way to access or act upon data from other websites due to data security protections or to save the data to local storage in order to be read from other pages. I know that there are ways to aggregate data, as I've seen other websites that do this.
I can load websites in Python using the urllib2 and use regular expressions to parse what I want from some pages, but on a couple of the desired sites I would need to log into the website before I can access the data I want to gather.
Since I am relatively new to programming, is there an ideal tool / programming language that would streamline or simplify what I'm trying to do?
If not, could you please point me in the right direction for how I might go about this? After doing some searching, there seems to be a general lack of cross-domain data gathering and aggregation. Maybe I'm not even using the right terminology to describe what I'm trying to do.
Whichever way you would look at this, please help! :-) | false | 45,949,911 | 0 | 0 | 0 | 0 | i suggest you use selenium webdriver to login to get cookie,and use requests library to scrap the message.That is what my company do in the scraping system.if you only use selenium webdriver, you will need to many memory and cpu capacity.
if you are good at html and js,it is a good way for you to use requests library to Simulate logging.
for the website you must log in,the most import thing is to get cookie. | 1 | 123 | 0 | 0 | 2017-08-29T23:54:00.000 | javascript,python,web-scraping,cross-domain,aggregate | Looking for ways to aggregate info/data from different websites | 1 | 1 | 1 | 45,950,238 | 0 |
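A hedged sketch of that "log in once, keep the cookies, scrape with requests" approach; every URL, form field and CSS selector here is invented and must be replaced with the real site's values:

```python
# Logs in once with a persistent session (which stores the cookies), then
# reuses that session for pages behind the login. All names are made up.
import requests
from bs4 import BeautifulSoup

with requests.Session() as session:
    session.post("https://example-market.com/login",
                 data={"username": "me", "password": "secret"},
                 timeout=10)

    page = session.get("https://example-market.com/listings", timeout=10)
    soup = BeautifulSoup(page.text, "html.parser")
    for row in soup.select(".listing"):
        print(row.get_text(" ", strip=True))
```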
0 | 0 | I've tried using Scapy's sniff function to sniff some packets and compared it to Wiresharks output. Upon displaying Scapy's sniffed packets and Wireshark's sniffed packets on the same interface, I discover that Wireshark can sniff some packets that Scapy was apparently not able to sniff and display. Is there a reason why and if so how can I prevent it so Scapy does not 'drop' any packets and sniffs all the packets Wireshark can receive? | false | 45,961,177 | 0 | 0 | 0 | 0 | Scapy itself has many libraries and extensions which are either pre-installed or you will have to install it based on your needs. Your question is a bit vague about what exactly is your comparison factor here between the two, but for example, Scapy will need a HTTPS decoder library installed for decoding the information of those packets. Also in Scapy, you can write your own protocol as you deem. But again if you are doing real-time parsing without a PCAP file Scapy is a good option even with the packet drop ratio. But if you are not concerned about the PCAP file I suggest to use Wireshark/TCPdump and record a PCAP file. You can dissect the PCAP file using Scapy then. Hope this helps. | 0 | 600 | 0 | 0 | 2017-08-30T13:00:00.000 | python,wireshark,scapy,packets,sniffing | Scapy cannot sniff some packets | 1 | 1 | 1 | 48,503,065 | 0 |
0 | 0 | configured AWS Cli on Linux system.
While running any command like "aws ec2 describe-instances" it is showing error "Invalid IPv6 URL" | true | 45,999,285 | 1.2 | 1 | 0 | 6 | Ran into the same error.
Running this command fixed the error for me:
export AWS_DEFAULT_REGION=us-east-1
You might also try specifying the region when running any command:
aws s3 ls --region us-east-1
Hope this helps!
or run aws configure and enter valid region for default region name | 0 | 6,082 | 1 | 0 | 2017-09-01T11:27:00.000 | python-2.7,amazon-ec2,aws-cli | Invalid IPv6 URL while running commands using AWS CLI | 1 | 2 | 2 | 46,288,793 | 0 |
0 | 0 | configured AWS Cli on Linux system.
While running any command like "aws ec2 describe-instances" it is showing error "Invalid IPv6 URL" | false | 45,999,285 | 0.099668 | 1 | 0 | 1 | I ran into this issue due to region being wrongly typed. When you run aws configure during initial setup, if you try to delete a mistaken entry, it will end up having invalid characters in the region name.
Hopefully, running aws configure again will resolve your issue. | 0 | 6,082 | 1 | 0 | 2017-09-01T11:27:00.000 | python-2.7,amazon-ec2,aws-cli | Invalid IPv6 URL while running commands using AWS CLI | 1 | 2 | 2 | 50,748,839 | 0 |
1 | 0 | I am trying to scrape a dynamic content (javascript) page with Python + Selenium + BS4 and the page blocks my requests at random (the soft might be: F5 AMS).
I managed to bypass this thing by changing the user-agent for each of the browsers I have specified. The thing is, only the Chrome driver can pass over the rejection. Same code, adjusted for PhantomJS or Firefox drivers is blocked constantly, like I am not even changing the user agent.
I must say that I am also multithreading, that meaning, starting 4 browsers at the same time.
Why does this happen? What does Chrome Webdriver have to offer that can pass over the firewall and the rest don't?
I really need to get the results because I want to change to Firefox, therefore, I want to make Firefox pass just as Chrome. | false | 46,011,743 | 0.197375 | 0 | 0 | 1 | Two words: Browser Fingerprinting. It's a huge topic in it's own right and as Tarun mentioned would take a decent amount of research to nail this issue on its head. But possible I believe. | 0 | 181 | 0 | 0 | 2017-09-02T08:02:00.000 | python,selenium,web-scraping,selenium-chromedriver,geckodriver | How does Chrome Driver work, but Firefox, PhantomJS and HTMLUnit not? | 1 | 1 | 1 | 46,012,320 | 0 |
0 | 0 | I have a beginner PythonAnywhere account, which, the account comparison page notes, have "Access to External Internet Sites: Specific Sites via HTTP(S) Only."
So I know only certain hosts can be accessed through HTTP protocols, but are there restrictions on use of the socket module? In particular, can I set up a Python server using socket? | false | 46,018,426 | 0 | 0 | 0 | 0 | Nope. PythonAnywhere doesn't support the socket module. | 0 | 1,618 | 0 | 1 | 2017-09-02T21:36:00.000 | python,sockets,pythonanywhere | PythonAnywhere - Are sockets allowed? | 1 | 2 | 2 | 46,066,092 | 0 |
0 | 0 | I have a beginner PythonAnywhere account, which, the account comparison page notes, have "Access to External Internet Sites: Specific Sites via HTTP(S) Only."
So I know only certain hosts can be accessed through HTTP protocols, but are there restrictions on use of the socket module? In particular, can I set up a Python server using socket? | false | 46,018,426 | 0.379949 | 0 | 0 | 4 | PythonAnywhere dev here. Short answer: you can't run a socket server on PythonAnywhere, no.
Longer answer: the socket module is supported, and from paid accounts you can use it for outbound connections just like you could on your normal machine. On a free account, you could also create a socket connection to the proxy server that handles free accounts' Internet access, and then use the HTTP protocol to request a whitelisted site from it (though that would be hard work, and it would be easier to use requests or something like that).
What you can't do on PythonAnywhere is run a socket server that can be accessed from outside our system. | 0 | 1,618 | 0 | 1 | 2017-09-02T21:36:00.000 | python,sockets,pythonanywhere | PythonAnywhere - Are sockets allowed? | 1 | 2 | 2 | 46,093,607 | 0 |
1 | 0 | Does Scrapy ping the website every time I use response.xpath? Or is there one response value stored in memory per request and all subsequent xpath queries are run locally? | true | 46,020,031 | 1.2 | 0 | 0 | 0 | The response is stored in memory. That is, every time you call response.xpath it does not hit the website; it only queries the response that is already held in memory. | 0 | 98 | 0 | 1 | 2017-09-03T03:33:00.000 | python,web-scraping,scrapy,scrapy-spider | Optimizing Scrapy xpath queries | 1 | 1 | 1 | 46,020,248 | 0
0 | 0 | I have a few Python selenium scripts running on a computer, these scripts open, close, and create chromedriver object instances regularly.
After some time of doing this, I get an error "Only one usage of each socket address" on all scripts except for one, the one that doesn't get the error is throwing timeout exceptions.
I'm trying to catch the error but it still is thrown and not caught.
How do I fix the main issue?
Is there too many object instances? | false | 46,046,120 | 0 | 0 | 0 | 0 | I faced the same issue but it was happening with Firefox. On more investigation I found that there were two different Firefox installations on my comp: x86 and x64. I uninstalled the x86 version and the error got resolved.
Check if you've got two different instances of chrome installed on your pc. If yes, then remove one of them. | 0 | 1,448 | 0 | 2 | 2017-09-05T02:18:00.000 | python,selenium,google-chrome,webdriver,selenium-chromedriver | Only one usage of each socket address address (python selenium) | 1 | 1 | 1 | 51,934,022 | 0 |
1 | 0 | I have a python script which resides on a web-server (running node.js) and does some machine learning computation. The data has to be supplied to the python script using javascript running in web-browser. How can this be done? I want to know the complete setup. For now, the server is the localhost only. | false | 46,098,385 | 0 | 0 | 0 | 0 | I believe you need a simple API on the server which accept input from the client which can be done via JavaScript.
There are several technologies you could have a look at:
Ajax.
WebSockets. | 0 | 951 | 0 | 1 | 2017-09-07T14:01:00.000 | javascript,python,node.js,socket.io | communicate between python script on server side and javascript in the web-browser | 1 | 1 | 2 | 46,098,475 | 0 |
1 | 0 | I'm looking for a way to show html to a user if they call from a browser or just give them the API response in JSON if the call is made from an application, terminal with curl or generally any other way.
I know a number of APIs do this and I believe Django's REST framework does this.
I've been able to fool a number of those APIs by passing in my browser's useragent to curl so I know this is done using useragents, but how do I implement this? To cover every single possible or most useragents out there.
There has to be a file/database or a regex, so that I don't have to worry about updating my useragent lists every few months, and worrying that my users on the latest browsers might not be able to access my website. | false | 46,099,439 | 0.291313 | 0 | 0 | 3 | I know this post is a few years old, but since I stumbled upon it...
tldr; Do not use the user agent to determine the return format unless absolutely necessary. Use the Accept header or (less ideal) use a separate endpoint/URL.
The standard and most future-proof way to set the desired return format for a specific endpoint is to use the Accept header. Accept is explicitly designed to allow the client to state what response format they would like returned. The value will be a standard MIME type.
Web browsers, by default, will send text/html as the value of the Accept header. Most Javascript libraries and frontend frameworks will send application/json, but this can usually be explicitly set to something else (e.g. text/xml) if needed. All mobile app frameworks and HTTP client libraries that I am aware of have the ability to set this header if needed.
There are two big problems with trying to use user agent for simply determining the response format:
The list will be massive. You will need to account for every possible client which needs to be supported today. If this endpoint is used internally, this may not be an immediate problem as you might be able to enforce which user agents you will accept (may cause its own set of problems in the future, e.g. forcing your users to a specific version of Internet Explorer indefinitely) which will help keep this list small. If this endpoint is to be exposed externally, you will almost certainly miss something you badly need to accept.
The list will change. You will need to account for every possible client which needs to be supported tomorrow, next week, next year, and in five years. This becomes a self-induced maintenance headache.
Two notes regarding Accept:
Please read up on how to use the Accept header before attempting to implement against it. Here is an actual example from this website: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9. Given this, I would return back HTML.
The value of the header can be */*, which basically just says "whatever" or "I don't care". At that point, the server is allowed to determine the response format. | 0 | 1,500 | 0 | 2 | 2017-09-07T14:52:00.000 | python,api,curl,user-agent | How to detect if a GET request is from a browser or not | 1 | 1 | 2 | 69,885,034 | 0 |
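A small content-negotiation sketch of the Accept-header approach, shown with Flask/Werkzeug's parsed Accept header; the route and payload are purely illustrative:

```python
# Returns JSON or HTML from the same endpoint depending on the Accept header.
# Any WSGI framework also exposes the raw header if you prefer to parse it.
from flask import Flask, jsonify, render_template_string, request

app = Flask(__name__)

@app.route("/status")
def status():
    data = {"service": "demo", "ok": True}
    best = request.accept_mimetypes.best_match(["application/json", "text/html"])
    if best == "text/html":
        # Browsers send text/html with high priority, so they get a page.
        return render_template_string("<h1>{{ d.service }}: {{ d.ok }}</h1>", d=data)
    return jsonify(data)  # curl, scripts and apps default to JSON
```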
0 | 0 | I am going to make a Telegram bot in Python 3 which is a random chat bot. As I am new to Telegram bots, I don't know how to join two different people in a chat bot. Is there a guide available for this? | false | 46,101,394 | 0 | 1 | 0 | 0 | I am not sure I understand your question; can you explain in more detail what you intend to do?
You have a few options, such as creating a group and adding the bot to it.
In a private chat you can only talk with a single user at a time. | 0 | 2,352 | 0 | 1 | 2017-09-07T16:39:00.000 | python,python-3.x,chat,bots,telegram | how can i join two users in a telegram chat bot? | 1 | 2 | 3 | 46,103,183 | 0
0 | 0 | I am going to make a Telegram bot in Python 3 which is a random chat bot. As I am new to Telegram bots, I don't know how to join two different people in a chat bot. Is there a guide available for this? | true | 46,101,394 | 1.2 | 1 | 0 | 0 | You need to make a database with chatID as the primary column and another column, partner, which stores the chat partner's chatID.
Now, when a user sends a message to your bot, you just need to look that user up in the database and forward the message to their chat partner.
After the chat is done, you should empty the partner fields of both users.
As for the pairing part: when a user wants to find a new partner, choose a random row from your database WHERE partnerChatID IS NULL, set it to the first user's ID, and vice versa. | 0 | 2,352 | 0 | 1 | 2017-09-07T16:39:00.000 | python,python-3.x,chat,bots,telegram | how can i join two users in a telegram chat bot? | 1 | 2 | 3 | 46,113,831 | 0
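A tiny, framework-agnostic sketch of that pairing bookkeeping, with an in-memory dict standing in for the chatID/partner table (a real bot would persist this in a database):

```python
# Pairs waiting users and tracks each user's current partner. A plain dict
# stands in for the chatID/partner table described in the answer above.
import random

partners = {}          # chat_id -> partner chat_id, or None while waiting

def find_partner(chat_id):
    waiting = [uid for uid, p in partners.items() if p is None and uid != chat_id]
    partners.setdefault(chat_id, None)
    if not waiting:
        return None                     # stay in the waiting pool
    other = random.choice(waiting)
    partners[chat_id], partners[other] = other, chat_id
    return other

def end_chat(chat_id):
    other = partners.get(chat_id)
    partners[chat_id] = None
    if other is not None:
        partners[other] = None
```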
0 | 0 | I have loaded twisted using pip pip install twisted. Then I tried to import from autobahn.twisted.websocket import WebSocketClientProtocol, I get error when I import 'twisted' has no attribute '__version__'. | false | 46,118,937 | 0 | 0 | 0 | 0 | pip install --upgrade pyopenssl can solve the issue when I am using Ubuntu | 1 | 227 | 0 | 1 | 2017-09-08T14:36:00.000 | python-3.x,twisted,autobahn | 'twisted' has no attribute '__version__' | 1 | 1 | 2 | 46,118,938 | 0 |
0 | 0 | I usually start http.server by typing
python3 -m http.server port and I can access the server by going to localhost:port
My goal is to access the server by simply typing localhost. | true | 46,124,591 | 1.2 | 0 | 0 | 2 | There is always a port; the default is 80, so just run it on 80 and you'll reach it with just localhost. | 0 | 1,128 | 0 | 1 | 2017-09-08T21:10:00.000 | python-3.x,localhost,port,httpserver | How can you run http.server without a port and simply on localhost? | 1 | 1 | 1 | 46,124,616 | 0
0 | 0 | I have a server process which receives requests from a web clients.
The server has to call an external worker process ( another .py ) which streams data to the server and the server streams back to the client.
The server has to monitor these worker processes and send messages to them ( basically kill them or send messages to control which kind of data gets streamed ). These messages are asynchronous ( e.g. depend on the web client )
I thought in using ZeroMQ sockets over an ipc://-transport-class , but the call for socket.recv() method is blocking.
Should I use two sockets ( one for streaming data to the server and another to receive control messages from server )? | false | 46,139,185 | 0.197375 | 0 | 0 | 2 | Using a separate socket for signalling and messaging is always better
While a Poller-instance will help a bit, the cardinal step is to use separate socket for signalling and another one for data-streaming. Always. The point is, that in such setup, both the Poller.poll() and the event-loop can remain socket-specific and spent not more than a predefined amount of time, during a real-time controlled code-execution.
So, do not hesitate to setup a bit richer signalling/messaging infrastructure as an environment where you will only enjoy the increased simplicity of control, separation of concerns and clarity of intents.
ZeroMQ is an excellent tool for doing this - including per-socket IO-thread affinity, so indeed a fine-grain performance tuning is available at your fingertips. | 0 | 563 | 1 | 3 | 2017-09-10T09:27:00.000 | python,sockets,ipc,zeromq | ZeroMQ bidirectional async communication with subprocesses | 1 | 1 | 2 | 46,140,815 | 0 |
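A compact sketch of the separate data and signalling sockets multiplexed with a zmq.Poller, as recommended above; the ipc endpoints and the poll timeout are arbitrary:

```python
# One PULL socket for streamed data and one for control messages, multiplexed
# with a Poller so neither recv() can block the loop. Endpoints are arbitrary.
import zmq

ctx = zmq.Context.instance()

data_sock = ctx.socket(zmq.PULL)
data_sock.bind("ipc:///tmp/worker-data")

ctrl_sock = ctx.socket(zmq.PULL)
ctrl_sock.bind("ipc:///tmp/worker-ctrl")

poller = zmq.Poller()
poller.register(data_sock, zmq.POLLIN)
poller.register(ctrl_sock, zmq.POLLIN)

running = True
while running:
    events = dict(poller.poll(timeout=100))   # milliseconds
    if ctrl_sock in events:
        cmd = ctrl_sock.recv_string()
        if cmd == "kill":
            running = False                   # stop the worker loop
    if data_sock in events:
        chunk = data_sock.recv()
        # ... forward chunk to the web client here ...
```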
0 | 0 | Code:
sh 'python ./selenium/xy_python/run_tests.py'
Error:
Traceback (most recent call last):
File "./selenium/xy_python/run_tests.py", line 6, in
import nose
ImportError: No module named nose | false | 46,166,480 | 0 | 0 | 0 | 0 | I recommend explicitly activating a python env before you run your script in your jenkinsfile to ensure you are in an environment which has nose installed.
Please check out virtualenv, tox, or conda for information on how to do so. | 0 | 5,801 | 1 | 2 | 2017-09-12T01:22:00.000 | python,python-3.x,dsl | Calling a Python Script from Jenkins Pipeline DSL causing import error | 1 | 1 | 2 | 46,199,573 | 0 |
1 | 0 | I have used Beautiful Soup with great success when crawling single pages of a site, but I have a new project in which I have to check a large list of sites to see if they contain a mention or a link to my site. Therefore, I need to check the entire site of each site.
With BS I just don't know yet how to tell my scraper that it is done with a site, so I'm hitting recursion limits. Is that something Scrapy handles out of the box? | true | 46,183,843 | 1.2 | 0 | 0 | 2 | Scrapy uses a link follower to traverse through a site, until the list of available links is gone. Once a page is visited, it's removed from the list and Scrapy makes sure that link is not visited again.
Assuming all the websites pages have links on other pages, Scrapy would be able to visit every page of a website.
I've used Scrapy to traverse thousands of websites, mainly small businesses, and have had no problems. It's able to walk through the whole site. | 0 | 344 | 0 | 1 | 2017-09-12T19:15:00.000 | python,web-scraping,beautifulsoup,scrapy | Does Scrapy 'know' when it has crawled an entire site? | 1 | 1 | 2 | 46,185,425 | 0 |
1 | 0 | In local system elastic search works perfectly but when i'm trying to search in server system it shows in console : "ConnectionTimeout caused by - ReadTimeoutError(HTTPConnectionPool(host=u'localhost', port=9200): Read timed out. (read timeout=10))" | false | 46,188,842 | 0 | 0 | 0 | 0 | It appears that the localhost value you are trying to connect to is a Unicode string host=u'localhost. Not sure how you are getting/assigning that value into a variable, but you should try to encode/convert it to ASCII so that it can be properly interpreted during the HTTP connection routine. | 0 | 3,169 | 1 | 0 | 2017-09-13T04:25:00.000 | python,django,elasticsearch | ConnectionTimeout caused by - ReadTimeoutError(HTTPConnectionPool(host=u'localhost', port=9200): Read timed out. (read timeout=10)) | 1 | 1 | 1 | 46,230,007 | 0 |
1 | 0 | I am trying to do some python based web scraping where execution time is pretty critical.
I've tried phantomjs, selenium, and pyqt4 now, and all three libraries have given me similar response times. I'd post example code, but my problem affects all three, so I believe the problem either lies in a shared dependency or outside of my code. At around 50 concurrent requests, we see a huge desegregation in response time. It takes about 40 seconds to get back all 50 pages, and that time gets exponentially slower with greater page demands. Ideally I'm looking for ~200+ requests in about 10 seconds. I used multiprocessing to spawn each instance of phantonjs/pyqt4/selenium, so each url request gets it's own instance so that I'm not blocked by single threading.
I don't believe it's a hardware bottleneck, it's running on 32 dedicated cpu cores, totaling to 64 threads, and cpu usage doesn't typically spike to over 10-12%. Bandwidth as well sits reasonably comfortably at around 40-50% of my total throughput.
I've read about the GIL, which I believe I've addressed with using multiprocessing. Is webscraping just an inherently slow thing? Should I stop expecting to pull 200ish webpages in ~10 seconds?
My overall question is, what is the best approach to high performance web scraping, where evaluating js on the webpage is a requirement? | false | 46,205,279 | 0.197375 | 0 | 0 | 1 | "evaluating js on the webpage is a requirement" <- I think this is your problem right here. Simply downloading 50 web pages is fairly trivially parallelized and should only take as long as the slowest server takes to respond.
Now, spawning 50 javascript engines in parallel (which is essentially what I guess it is you are doing) to run the scripts on every page is a different matter. Imagine firing up 50 chrome browsers at the same time.
Anyway: profile and measure the parts of your application to find where the bottleneck lies. Only then you can see if you're dealing with an I/O bottleneck (sounds unlikely), a CPU bottleneck (more likely) or a global lock somewhere that serializes stuff (also likely but impossible to say without any code posted) | 0 | 598 | 0 | 0 | 2017-09-13T19:17:00.000 | python,multithreading,web-scraping,multiprocessing | Python & web scraping performance | 1 | 1 | 1 | 46,205,430 | 0 |
0 | 0 | I had a similar issue when a Python script called from a scheduled task on a windows server tried to access a network shared drive. It would run from the IDLE on the server but not from the task. I switched to using a local drive it worked fine. This script works if run from console or IDLE on the server and partially executes when run as a scheduled task. It pulls data from a MSSQL database and creates a local csv. That works called from the task but the part to upload the file to a Google Drive does not. I have, as I did, before try other methods of calling outside of the scheduled task ex Powershell, bat file... but same results. I am using google-api-python-client (1.6.2) and can't find anything. Thanks in advance! | false | 46,206,424 | 0 | 0 | 0 | 0 | I found my answer. In the optional field of the Windows Scheduled Task Action dialog, "Start in" I added the path to the Python Scripts folder and the script now runs perfect. | 0 | 33 | 1 | 0 | 2017-09-13T20:37:00.000 | python-2.7,google-api-python-client | Python27 to upload a file to google drive does not work when run as a windows scheduled task. Why? | 1 | 1 | 1 | 46,245,436 | 0 |
0 | 0 | I am trying to find a way for saving a session object requests.session() to a file and then continue my session from another python script from the same place, include the cookies, the headers etc.
I've tried doing that with pickle, but for some reason the session's cookies and all other attributes are not loaded from the file.
Is there any way to do this? | true | 46,219,917 | 1.2 | 0 | 0 | 1 | You might want to look into serialization, something like pickle.
For example you would open a file and dump the session with pickle.dump(sess, f) and read with pickle.load(f) into a session object | 0 | 1,027 | 0 | 0 | 2017-09-14T13:01:00.000 | python,python-requests,pickle | python requests library , save session to file | 1 | 1 | 1 | 46,219,989 | 0 |
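A short sketch of that pickle round trip; it assumes a reasonably recent requests, whose Session objects support pickling, and the URLs are placeholders (persisting only session.cookies is a common alternative):

```python
# Saves a requests.Session (cookies, headers, etc.) to disk in one script and
# restores it in another. The login URL and form fields are placeholders.
import pickle

import requests

# --- script 1: log in and save the session ---
sess = requests.Session()
sess.post("https://example.com/login", data={"user": "me", "pass": "secret"})
with open("session.pkl", "wb") as f:
    pickle.dump(sess, f)

# --- script 2: restore the session and keep going ---
with open("session.pkl", "rb") as f:
    sess = pickle.load(f)
resp = sess.get("https://example.com/account")
print(resp.status_code)
```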
0 | 0 | Selenium IDE is a very useful tool to create Selenium tests quickly. Sadly, it has been unmaintained for a long time and now not compatible with new Firefox versions.
Here is my work routine to create Selenium test without Selenium IDE:
Open the Inspector
Find and right click on the element
Select Copy CSS Selector
Paste to IDE/Code editor
Type some code
Back to step 2
That is a lot of manual work, switching back and forth. How can I write Selenium tests faster? | false | 46,235,088 | 0 | 1 | 0 | 0 | The best way is to learn how your app is constructed, and to work with the developers to add an id element and/or distinctive names and classes to the items you need to interact with. With that, you aren't dependent on using the fragile xpaths and css paths that the inspector returns and instead can create short, concise expressions that will hold up even when the underlying structure of the page changes. | 0 | 116 | 0 | 0 | 2017-09-15T08:30:00.000 | python,selenium,automated-tests,selenium-ide,web-testing | Since Selenium IDE is unmaintained, how to write Selenium tests quickly? | 1 | 1 | 3 | 46,241,635 | 0
1 | 0 | I want to extract reviews from public facebook pages like airline page, hospital page, to perform sentiment analysis. I have app id and app secret id which i generated from facebook graph API using my facebook account, But to extract the reviews I need page access token and as I am not the owner/admin of the page so I can not generate that page access token. Is any one know how to do it or it requires some paid service?
Kindly help.
Thanks in advance. | false | 46,252,155 | 0.197375 | 0 | 0 | 1 | Without a Page Token of the Page, it is impossible to get the reviews/ratings. You can only get those for Pages you own. There is no paid service either, you can only ask the Page owners to give you access. | 0 | 259 | 0 | 0 | 2017-09-16T09:07:00.000 | r,python-2.7,facebook-graph-api | How to extract reviews from facebook public page without page access token? | 1 | 1 | 1 | 46,254,610 | 0 |
0 | 0 | I've built a Telegram bot for groups. When the bot is added to a group, it will delete messages containing ads. How can I make the bot work for only 30 days in each group and then stop?
That means, for example, the bot is added to group 1 today and to group 2 next week; I need the bot to stop in group 1 after its 30 days, and to stop in group 2 another 37 days from now.
How can I do that? | false | 46,253,238 | 0 | 1 | 0 | 0 | You can't know, at any given time, how long ago your bot was added to a group. :(
You need to log it in your own database, and there is a leaveChat method if you need it. | 0 | 174 | 0 | 0 | 2017-09-16T11:11:00.000 | python,python-telegram-bot | How can a telegram bot change that works only for a specific time in a group? | 1 | 2 | 2 | 46,253,303 | 0
0 | 0 | I've built a Telegram bot for groups. When the bot is added to a group, it will delete messages containing ads. How can I make the bot work for only 30 days in each group and then stop?
That means, for example, the bot is added to group 1 today and to group 2 next week; I need the bot to stop in group 1 after its 30 days, and to stop in group 2 another 37 days from now.
How can I do that? | false | 46,253,238 | 0 | 1 | 0 | 0 | Simply, all you need is a database on the back-end. Just store the group_id and join_date in each row.
At any time, you can query your database. If more than 30 days have passed since join_date, stop the bot or leave the group.
You can also use any other storage rather than a database. A file, index, etc. | 0 | 174 | 0 | 0 | 2017-09-16T11:11:00.000 | python,python-telegram-bot | How can a telegram bot change that works only for a specific time in a group? | 1 | 2 | 2 | 46,253,626 | 0 |
0 | 0 | I need to execute my webbrowser (opera) from python code, how can i get it?
Python version 3.6, OS MS Windows 7 | false | 46,268,174 | 0 | 0 | 0 | 0 | You can use Selenium to launch web browser through python. | 0 | 737 | 0 | 0 | 2017-09-17T19:41:00.000 | python,python-3.x,webbrowser-control,opera | How can i execute my webbrowser in Python? | 1 | 1 | 2 | 46,268,349 | 0 |
1 | 0 | I want to stop crawlers from indexing specific images on my website, but only if they're older than a specific date. However, the crawler shall not stop indexing the page where the image is currently linked.
My initial approach was to write a script which adds the URL of the image to 'robots.txt', but I think the file will become huge, as we talk about a really huge amount of potential images.
My next idea was to use the <meta name="robots" content="noimageindex"> tag, but I think this approach can be error prone, as I could forget to add this tag to a template where I might want to stop crawlers from indexing the image. It's also redundant and the crawler will ignore all images.
My question is: do you know a programmatically way to force a crawler too not index a image, if a condition (in my case the date) is true? Or is my only possibility to stop the crawler from indexing the whole page? | true | 46,275,319 | 1.2 | 0 | 0 | 2 | Building on top of what you had in mind, you could just create a separate place to keep the images that you don't want to be indexed, write a script to move files to that location once they're "expired" and just add the url to the the robots.txt file. Perhaps something like /expired_images*. | 0 | 66 | 0 | 0 | 2017-09-18T08:58:00.000 | python,html,django,seo | Is there a programmatically way to force a crawler to not index specific images? | 1 | 1 | 1 | 46,275,441 | 0 |
0 | 0 | Is it a good idea to use a standard library function from an imported module? For example, I write a xyz.py module and within xyz.py, I've this statetment import json
I have another script where I import xyz. In this script I need to make use of json functions. I can for sure import json in my script but json lib was already imported when I import xyz. So can I use xyz.json() or is it a bad practice? | false | 46,289,557 | 0.379949 | 0 | 0 | 2 | You should use import json again to explicitly declare the dependency.
Python will optimize the way it loads the modules so you don't have to be concerned with inefficiency.
If you later don't need xyz.py anymore and you drop that import, then you still want import json to be there without having to re-analyze your dependencies. | 1 | 39 | 0 | 0 | 2017-09-18T23:27:00.000 | python | Calling a standard library from imported module | 1 | 1 | 1 | 46,289,744 | 0 |
0 | 0 | I am playing around with a discord.py bot, I have pretty much everything working that I need (at this time), but I can't for the life of me figure out how to embed a YouTube video using Embed().
I don't really have any code to post per se, as none of it has worked correctly.
Note: I've tried searching everywhere (here + web), I see plenty of info on embedding images which works great.
I do see detail in the discord API for embedding video, as well as the API Documentation for discord.py; but no clear example of how to pull it off.
I am using commands (the module I am working on as a cog)
Any help would be greatly appreciated.
Environment:
Discord.py Version: 0.16.11
Python: 3.5.2
Platform: OS X
VirtualEnv: Yes | false | 46,306,032 | 0.066568 | 1 | 0 | 1 | Only form of media you can embed is images and gifs. It’s not possible to embed a video | 0 | 7,927 | 0 | 1 | 2017-09-19T17:00:00.000 | python,youtube,embed,discord.py | discord.py embed youtube video without just pasting link | 1 | 2 | 3 | 50,806,683 | 0 |
0 | 0 | I am playing around with a discord.py bot, I have pretty much everything working that I need (at this time), but I can't for the life of me figure out how to embed a YouTube video using Embed().
I don't really have any code to post per se, as none of it has worked correctly.
Note: I've tried searching everywhere (here + web), I see plenty of info on embedding images which works great.
I do see detail in the discord API for embedding video, as well as the API Documentation for discord.py; but no clear example of how to pull it off.
I am using commands (the module I am working on as a cog)
Any help would be greatly appreciated.
Environment:
Discord.py Version: 0.16.11
Python: 3.5.2
Platform: OS X
VirtualEnv: Yes | false | 46,306,032 | 0.26052 | 1 | 0 | 4 | What you are asking for is unfortunately impossible, since Discord API does not allow you to set custom videos in embeds. | 0 | 7,927 | 0 | 1 | 2017-09-19T17:00:00.000 | python,youtube,embed,discord.py | discord.py embed youtube video without just pasting link | 1 | 2 | 3 | 46,324,897 | 0 |
0 | 0 | Is there any way to get the current executed Client information
inside the Task set class? We can get the Thread number in Jmeter.
Like that can we get the Client number in Locust tool?
Suppose I'm executing 20 requests with 5 clients. Can I say that each client is executing 4 requests (20/5 = 4 requests each)? What is the internal mechanism used here to execute those 20 requests with 5 clients?
This question is related to the data given in Question: 2, Is that execution is happened iteration-wise. Like 1st iteration, Client 1, 2, 3 ,4 and 5 are executing request 1, 2,3, 4 and 5 respectively. Next iteration, again Client 1, 2, 3 ,4 and 5 are executing requests 6,7,8,9 and 10 respectively. How could we achieve this type of execution mechanism in Locust tool. Is this possible?
Please help me to clarify above questions. | false | 46,321,590 | 0.099668 | 0 | 0 | 1 | I don't think there's built-in support for it. You could set an id yourself in the on_start method
Each Locust will trigger a new task after the previous task was finished (taking into account the wait period). If your response time has a small variation, you can assume that the requests are equally distributed.
No, the Locusts are picking tasks one after the other, but are not waiting their turn. When a Locust is available, it will pick a task and execute it. I don't think there is support for your requirement. | 0 | 1,436 | 0 | 2 | 2017-09-20T12:04:00.000 | python,locust | How do we control the Clients in the Task set execution in Locust? | 1 | 1 | 2 | 46,342,647 | 0 |
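A rough sketch of that on_start idea with the older HttpLocust/TaskSet API (the endpoint, header name and wait times are made up):
from itertools import count
from locust import HttpLocust, TaskSet, task

# Module-level counter shared by all simulated clients in this process.
_client_ids = count(1)

class UserBehavior(TaskSet):
    def on_start(self):
        # Assign a per-client number once, when the simulated user starts.
        self.client_id = next(_client_ids)

    @task
    def get_home(self):
        # Tag each request with the client number so the server (or logs) can see it.
        self.client.get("/", headers={"X-Client-Id": str(self.client_id)})

class WebsiteUser(HttpLocust):
    task_set = UserBehavior
    min_wait = 1000
    max_wait = 2000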
1 | 0 | I am new to the Serverless Framework and AWS, and I need to create a Lambda function in Python that will send an email whenever an EC2 instance is shut down, but I really don't know how to do it using Serverless. So please, if anyone could help me do that or at least give me some pointers to start with.
You can create a cloudwatch rule
Service Name - Ec2
Event Type - EC2 Instance change notification
Specific state(s) - shutting-down
Then use an SNS target to deliver email. | 0 | 203 | 0 | 1 | 2017-09-20T13:05:00.000 | python,amazon-web-services,amazon-ec2,aws-lambda,serverless-framework | send email whenever ec2 is shut down using serverless | 1 | 1 | 3 | 46,323,508 | 0 |
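For the Lambda side, a minimal boto3 sketch (the SNS topic ARN is a placeholder; the CloudWatch rule described above is what would trigger this handler):
import boto3

# Placeholder ARN - replace with your real SNS topic (with your email subscribed).
TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:ec2-shutdown-alerts"

def handler(event, context):
    # The "EC2 Instance State-change Notification" event carries the
    # instance id and state in event["detail"].
    detail = event.get("detail", {})
    instance_id = detail.get("instance-id", "unknown")
    state = detail.get("state", "unknown")
    boto3.client("sns").publish(
        TopicArn=TOPIC_ARN,
        Subject="EC2 instance %s is %s" % (instance_id, state),
        Message="Instance %s entered state %s" % (instance_id, state),
    )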
0 | 0 | I have a python application that needs to give users a JSON web token for authentication. The token is built using the PyJWT library (import jwt).
From what I have been reading it seems like an acceptable practice to give the token to a client after they have provided some credentials, such as logging in.
The client then uses that token in the HTTP request header in the Authorization Bearer field which must happen over TLS to ensure the token is not exposed.
The part I do not understand is what if the client exposes that token accidentally? Won't that enable anybody with that token to impersonate them?
What is the most secure way to hand off the token to a client? | true | 46,331,570 | 1.2 | 0 | 0 | 2 | You could encrypt the token before handing it off to the client, either using their own public key, or delivering them the key out of band. That secures the delivery, but still does not cover everything.
In short, there's no easy solution. You can perform due diligence and require use of security features, but once the client has decrypted the token, there is still no way to ensure they won't accidentally or otherwise expose it anyway. Good security requires both participants practice good habits.
The nice thing about tokens is you can just give them a preset lifespan, or easily revoke them and generate new ones if you suspect they have been compromised. | 0 | 550 | 0 | 2 | 2017-09-20T20:52:00.000 | python,jwt | How to give clients JSON web token (JWT) in secure fashion? | 1 | 2 | 2 | 46,333,523 | 0 |
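A small PyJWT sketch of the "preset lifespan" idea (the secret and claims are placeholders):
import datetime
import jwt  # PyJWT

SECRET = "replace-with-a-real-secret"

# Issue a short-lived token; the "exp" claim is enforced by jwt.decode and
# limits the damage window if the token leaks.
token = jwt.encode(
    {"sub": "user-123",
     "exp": datetime.datetime.utcnow() + datetime.timedelta(minutes=15)},
    SECRET, algorithm="HS256")

# Raises jwt.ExpiredSignatureError once the lifespan has passed.
claims = jwt.decode(token, SECRET, algorithms=["HS256"])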
0 | 0 | I have a python application that needs to give users a JSON web token for authentication. The token is built using the PyJWT library (import jwt).
From what I have been reading it seems like an acceptable practice to give the token to a client after they have provided some credentials, such as logging in.
The client then uses that token in the HTTP request header in the Authorization Bearer field which must happen over TLS to ensure the token is not exposed.
The part I do not understand is what if the client exposes that token accidentally? Won't that enable anybody with that token to impersonate them?
What is the most secure way to hand off the token to a client? | false | 46,331,570 | 0.099668 | 0 | 0 | 1 | Token will be build based on user provided information and what you back-end decided to be part of the token. For higher security you can just widen your token information to some specific data of the user like current ip address or device mac address, this will give you a more secure way of authentication but will restrict user to every time use the same device, as a matter you can send a confirmation email when a new login happens. | 0 | 550 | 0 | 2 | 2017-09-20T20:52:00.000 | python,jwt | How to give clients JSON web token (JWT) in secure fashion? | 1 | 2 | 2 | 46,337,636 | 0 |
0 | 0 | What's the equivalent of:
driver.get_cookies()
to get the localStorage instead of cookies?
driver.execute_script("return window.localStorage;")
EDIT: this is a quick and short answer. See Florent B.'s answer for a more detailed one. | 0 | 40,399 | 0 | 31 | 2017-09-22T09:37:00.000 | python,selenium,selenium-webdriver,local-storage,selenium-chromedriver | How to get the localStorage with Python and Selenium WebDriver | 1 | 1 | 4 | 46,361,873 | 0 |
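A couple of follow-up snippets (assuming driver is an existing WebDriver instance) for reading or writing individual keys by passing arguments into the injected script:
# Read one key, write one key, and count the stored items.
value = driver.execute_script(
    "return window.localStorage.getItem(arguments[0]);", "my_key")
driver.execute_script(
    "window.localStorage.setItem(arguments[0], arguments[1]);", "my_key", "my_value")
n_items = driver.execute_script("return window.localStorage.length;")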
0 | 0 | I want to simulate a file download via http(s) from within a Python script. The script is used to test a router/modem appliance in my local LAN.
Currently, I have a virtual machine's network configured to use the appliance as its default gateway and DNS. The Python script on my test machine SSHes into the VM using the paramiko library. From within the Python script, over this SSH connection, I run wget https://some.websi.te/some_file.zip. That seems like overkill, but I don't want to reconfigure the network on the machine running the test script.
Is there a way to eliminate the extra VM (the one that runs wget via ssh) and still simulate the download? Of course that should run within my python test script on my local machine?
In other words, can I get Python to use another gateway than the system default from within a script? | true | 46,362,927 | 1.2 | 1 | 0 | 2 | I can provide two ways to do what you want: a bad way, and a clean way.
Let's talk about the bad way:
If your script is using the system's default gateway, then setting up a dedicated route on your system for the destination used by your script, so that the script no longer goes through the default gateway, may be enough to solve your problem. This is a bad way because not only your script will be impacted: all the traffic generated from your host to that destination will be impacted.
Let's talk about the clean way. I suppose your are running Linux, but I think you may be able to do the same in all OS that are supporting multi-routing tables feature into their kernel.
Here is the summary of the steps you need to do:
to prepare the second (or third ...) routing table using the ip command and the 'table table_name' option
to create a traffic-selector rule with ip rule using the uidrange option. This option allows you to select traffic based on the UID of the user.
When you have done these steps, you can test your setup by creating a user account whose UID falls within the range selected in the ip rule command, and verifying your routing from that account.
The last step is to modify your Python script so that it switches its UID to the one selected in the ip rule, and switches back (or exits) when it has finished generating traffic.
To summarize, the clean way does the same as the bad way, but only for a limited set of user UIDs on your Linux system, by using the kernel's multiple-routing-tables feature and switching to the selected user inside your script to use the special routing setup.
The kernel's multiple-routing-tables feature is, I think, the only feature able to merge two different systems - your host and its VM - as you requested, without the need for the VM. And you can activate it by switching from one user to another inside your script, in any language.
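A rough sketch of the script side of the clean way, assuming the ip rule / ip route setup has already been done for a dedicated UID (1500 here is made up) and that the script starts with enough privileges to call os.setuid:
import os
import requests

ROUTED_UID = 1500  # made-up UID covered by the "ip rule ... uidrange" selector

# Drop to the routed user before opening any sockets; from now on the
# kernel applies the alternate routing table to this process's traffic.
os.setuid(ROUTED_UID)

resp = requests.get("https://some.websi.te/some_file.zip", stream=True)
with open("some_file.zip", "wb") as out:
    for chunk in resp.iter_content(chunk_size=65536):
        out.write(chunk)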
0 | 0 | I want to use a Python module like urllib.request but have all the module's dependencies in a file where I can use them on a computer without having the entire Python installation.
Is there a module or tool I can use to just copy a specific module and its dependencies into a file, without having to go through the entire script and copy them manually? I'm using Python 3. | false | 46,383,369 | 0.099668 | 0 | 0 | 1 | Use a container technology.
Docker, for example, gives you the ability to port your code with its dependencies to any machine you want without having to install anything new on the machine, and it saves a lot of time too. | 1 | 1,569 | 0 | 0 | 2017-09-23T19:15:00.000 | python,python-3.x,python-module | How to copy a Python module and it's dependencies to a file | 1 | 1 | 2 | 46,383,961 | 0
1 | 0 | I have a python script that scrapes data from different websites then writes some data into CSV files.
I run this every day from my local computer but I would like to make this automatically running on a server and possibly for free.
I tried PythonAnywhere, but it looks like their whitelist stops me from scraping bloomberg.com.
I then passed to Heroku, I deployed my worker (the python script). Everything seems to work but looking with Heroku bash to the directory where the python script is supposed to write the CSV files nothing appears.
I also realised that I have no idea of how I would download those CSV files in case those were written.
I am wondering if I can actually achieve what I am trying to achieve with Heroku or if the only way to get a python script to work on a server is by paying for PythonAnywhere and avoid scraping restriction? | true | 46,390,149 | 1.2 | 0 | 0 | 1 | Heroku's filesystem is per-dyno and ephemeral. You must not save things to it for later use.
One alternative is to write it to somewhere permanent, such as Amazon S3. You can use the boto library for this. Although you do have to pay for S3 storage and data, it's very inexpensive. | 0 | 484 | 0 | 0 | 2017-09-24T12:42:00.000 | python,csv,heroku,web-scraping,download | Can I use Heroku to Scrape Data to later Download? | 1 | 1 | 1 | 46,390,214 | 0 |
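A minimal boto3 sketch of that (bucket and key names are placeholders; credentials would come from Heroku config vars or any other standard boto3 credential source):
import boto3

s3 = boto3.client("s3")
# Upload the CSV the worker produced, then pull it back down whenever you need it.
s3.upload_file("results.csv", "my-scraper-bucket", "daily/results.csv")
s3.download_file("my-scraper-bucket", "daily/results.csv", "results.csv")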
1 | 0 | Scrapy seems to complete without processing all the requests. I know this because i am logging before and after queueing the request and I can clearly see that.
I am logging in both parse and error callback methods and none of them got called for those missing requests.
How can I debug what happened to those requests? | true | 46,466,345 | 1.2 | 0 | 0 | 0 | You need to add dont_filter=True when re-queueing the request. Though the request may not match other request but Scrapy remembers what requests it has already made and it will filter out if you re-queue it. It will assume it was by mistake. | 0 | 43 | 0 | 0 | 2017-09-28T10:01:00.000 | scrapy,python-3.5,scrapy-spider | requests disappear after queueing in scrapy | 1 | 1 | 1 | 46,466,842 | 0 |
1 | 0 | Whats the best way to handle invalid parameters passed in with a GET or POST request in Flask+Python?
Let's say for the sake of argument I handle a GET request using Flask+Python that requires a parameter that needs to be an integer but the client supplies it as a value that cannot be interpreted as an integer. So, obviously, an exception will be thrown when I try to convert that parameter to an integer.
My question is should I let that exception propagate, thus letting Flask do it's default thing of returning an HTTP status code of 500 back to the client? Or should I handle it and return a proper (IMO) status code of 400 back to the client?
The first option is the easier of the two. The downside is that the resulting error isn't clear as to whose at fault here. A system admin might look at the logs and not knowing anything about Python or Flask might conclude that there's a bug in the code. However, if I return a 400 then it becomes more clear that the problem might be on the client's end.
What do most people do in these situations? | false | 46,488,989 | 0.066568 | 0 | 0 | 1 | If you look at most of the REST APIs, they will return 400 and appropriate error message back to the client if the user sends request parameters of a different type than is expected.
So, you should go with your 2nd option. | 0 | 1,882 | 0 | 1 | 2017-09-29T12:36:00.000 | python,flask | Best way to handle invalid GET/POST request parameters with Flask+Python? | 1 | 2 | 3 | 46,489,288 | 0 |
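A small Flask sketch of that 400 route (the endpoint and parameter names are made up for illustration):
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/items")
def items():
    raw = request.args.get("count", "10")
    try:
        count = int(raw)
    except ValueError:
        # The client sent something that isn't an integer: their fault -> 400.
        return jsonify(error="'count' must be an integer"), 400
    return jsonify(count=count)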
1 | 0 | Whats the best way to handle invalid parameters passed in with a GET or POST request in Flask+Python?
Let's say for the sake of argument I handle a GET request using Flask+Python that requires a parameter that needs to be an integer but the client supplies it as a value that cannot be interpreted as an integer. So, obviously, an exception will be thrown when I try to convert that parameter to an integer.
My question is should I let that exception propagate, thus letting Flask do it's default thing of returning an HTTP status code of 500 back to the client? Or should I handle it and return a proper (IMO) status code of 400 back to the client?
The first option is the easier of the two. The downside is that the resulting error isn't clear as to whose at fault here. A system admin might look at the logs and not knowing anything about Python or Flask might conclude that there's a bug in the code. However, if I return a 400 then it becomes more clear that the problem might be on the client's end.
What do most people do in these situations? | false | 46,488,989 | 0.066568 | 0 | 0 | 1 | A status code of 400 means you tell the client "hey, you've messed up, don't try that again". A status code of 500 means you tell the client "hey, I've messed up, feel free to try again later when I've fixed that bug".
In your case, you should return a 400 since the party that is at fault is the client. | 0 | 1,882 | 0 | 1 | 2017-09-29T12:36:00.000 | python,flask | Best way to handle invalid GET/POST request parameters with Flask+Python? | 1 | 2 | 3 | 46,489,291 | 0 |
0 | 0 | I would like to hide a InlineKeyboardMarkup if no answer is provided. I'm using Python and python-telegram-bot. Is this possible?
Thank you. | true | 46,492,279 | 1.2 | 1 | 0 | 0 | Just edit the message sent without providing a reply_markup. | 0 | 545 | 0 | 0 | 2017-09-29T15:33:00.000 | python,telegram-bot,python-telegram-bot | Hide InlineKeyboardMarkup in no answer provided | 1 | 1 | 2 | 46,501,988 | 0 |
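A rough python-telegram-bot sketch of that (bot, chat_id and message_id are assumed to come from your handler's update, e.g. the callback query of the unanswered keyboard):
# Passing reply_markup=None removes the inline keyboard but keeps the message text.
bot.edit_message_reply_markup(chat_id=chat_id,
                              message_id=message_id,
                              reply_markup=None)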
1 | 0 | I'm looking to create an AWS system with one master EC2 instance which can create other instances.
For now, I managed to create python files with boto able to create ec2 instances.
The script works fine in my local environment, but when I try to deploy it using AWS Elastic Beanstalk with Django (Python 3.4 included) the script doesn't work. I can't configure the AWS CLI (and so Boto) over SSH because the only user I can access is ec2-user and the web server runs as another user.
I could simply hard-code my access key ID and secret key in the Python file, but that would not be secure. What can I do to solve this problem?
I also discovered AWS cloudformation today, is it a better idea to create new instances with that rather than with the boto function run? | false | 46,493,886 | 0.197375 | 0 | 0 | 1 | This sounds like an AWS credentials question, not specifically a "create ec2 instances" question. The answer is to assign the appropriate AWS permissions to the EC2 instance via an IAM role. Then your boto/boto3 code and/or the AWS CLI running on that instance will have permissions to make the necessary AWS API calls without having an access key and secret key stored in your code. | 0 | 161 | 0 | 0 | 2017-09-29T17:15:00.000 | python,amazon-web-services,amazon-ec2,boto,aws-cli | How to create ec2 instances from another instance? boto awscli | 1 | 1 | 1 | 46,494,159 | 0 |
0 | 0 | I have an issue with requests lib :
With a code like
requests.get("HTTPS://api.twitch.tv/helix/...", headers = headers),
with the information that twitch API needs in the variable "headers".
And unfortunately, with except Exception, e: print(e) I get ('Connection aborted.', BadStatusLine("''",)).
I already tried to fake my user agent.
I'm almost sure that it isn't coming from the server (Twitch), because I also use the older API and get the same bug, even though I had used it successfully before (since then, I reset my Raspberry Pi, which may explain it...).
It doesn't do this error every requests, but like 1 on 10 so it's a bit embarrassing.
I also have this error only with Raspbian, but not with Windows.
Thanks for helping me, a young lost coder. | true | 46,520,193 | 1.2 | 0 | 0 | 0 | There are a lot of possible reasons for this error; the main one is that you are violating Twitch's user policy (which directly prohibits using scrapers) and the server has banned some of your requests.
You should try to use sessions when you access site:
session = requests.Session()
and use session.get instead of requests.get
Other things to try are limiting your request rate and rotating different sessions with different headers (don't mix headers between sessions). | 0 | 960 | 0 | 0 | 2017-10-02T05:57:00.000 | python,python-requests,twitch | Python - requests lib - error ('Connection aborted.', BadStatusLine("''",)) | 1 | 1 | 1 | 46,521,360 | 0
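A minimal sketch of the session + rate-limiting idea (the Client-ID header is a placeholder, and current Helix endpoints may additionally require an OAuth token):
import time
import requests

session = requests.Session()
session.headers.update({
    "Client-ID": "your-twitch-client-id",   # placeholder
    "User-Agent": "my-little-bot/0.1",
})

for user in ["user1", "user2", "user3"]:
    resp = session.get("https://api.twitch.tv/helix/users",
                       params={"login": user})
    print(resp.status_code)
    time.sleep(1)  # crude rate limiting between calls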
1 | 0 | I am trying to open a page and get some data from it, using Requests and BeautifulSoup. How can I see the response of a requests.post in my browser as a page instead of the source code? | false | 46,528,522 | 0 | 0 | 0 | 0 | You need to look into Python GUI libraries like PyQt5 and install one of them, so that through the QtWebEngineWidgets module and its QWebEngineView widget you can render the page and display it. | 0 | 987 | 0 | 2 | 2017-10-02T15:19:00.000 | python-requests | python requests: show response in browser | 1 | 1 | 2 | 46,528,848 | 0
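A rough PyQt5 sketch along those lines (the URL and form data are placeholders, and the exact widget behaviour should be checked against your PyQt5 version):
import sys
import requests
from PyQt5.QtCore import QUrl
from PyQt5.QtWidgets import QApplication
from PyQt5.QtWebEngineWidgets import QWebEngineView

resp = requests.post("https://httpbin.org/post", data={"q": "test"})

app = QApplication(sys.argv)
view = QWebEngineView()
# Feed the response body to the embedded browser; the base URL helps
# relative links and assets resolve.
view.setHtml(resp.text, QUrl(resp.url))
view.show()
app.exec_()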
0 | 0 | I'm planning on using the Teradata Python module, which can use either the Teradata REST API or ODBC to connect to Teradata. I'm wondering what the performance would be like for REST vs. ODBC connection methods for fairly large data pulls (> 1 million rows, > 1 GB of results).
Information on Teradata's site suggests that the use case for the REST API is more for direct access of Teradata by browsers or web applications, which implies to me that it may not be optimized for queries that return more data than a browser would be expected to handle. I also wonder if JSON overhead will make it less efficient than the ODBC data format for sending query results over the network.
Does anyone have experience with Teradata REST services performance or can point to any comparisons between REST and ODBC for Teradata? | false | 46,538,313 | 0.379949 | 0 | 0 | 2 | I had exactly the same question. As the rest web server is active for us, I just run a few tests. I tested PyTD with rest and odbc back ends, and jdbc using jaydebeapi + Jpype1. I used Python 3.5, CentOS 7 machine, I got similar results with python 3.6 on centos and on windows.
Rest was the fastest and jdbc was the slowest. It is interesting, because in R JDBC was really fast. That probably means JPype is the bottleneck. Rest was also very fast for writing, but my guess is that could be improved in JDBC using prepared statements appropriately.
We are now going to switch to rest for production. Let's see how it goes, it is also not problem free for sure. Another advantage is that our analysts want also to work on their own pcs/macs and rest is the easiest to install particularly on windows (you do pip install teradata and you are done, while for odbc and jaydebeapi+Jpype you need a compiler, and with odbc spend some time to get it configured right).
If speed is critical I guess another way would be to write a java command line app that fetches the rows, writes them to a csv and then read the csv from python. I did not test, but based on my previous experience on these kind of issues I bet that is going to be faster than anything else.
Selecting 1M rows:
Python 3 - JDBC: 24 min
Python 3 - ODBC: 6.5 min
Python 3 - REST: 4 min
R - JDBC: 35 s
Selecting 100K rows:
Python 3 - JDBC: 141 s
Python 3 - ODBC: 41 s
Python 3 - REST: 16 s
R - JDBC: 5 s
Inserting 100K rows:
Python 3 - JDBC: got errors, too lazy to correct them
Python 3 - ODBC: 7 min
Python 3 - REST: 8 s (batch), 9 min (no batch)
R - JDBC: 8 min | 0 | 2,042 | 0 | 1 | 2017-10-03T06:23:00.000 | python,odbc,teradata | How does the Teradata REST API performance compare to other means of querying Teradata? | 1 | 1 | 1 | 49,409,451 | 0
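A rough sketch of connecting with PyTd's rest method, as mentioned above (system, host and credentials are placeholders; check the exact connect arguments against the teradata module docs for your version):
import teradata

udaExec = teradata.UdaExec(appName="benchmark", version="1.0", logConsole=False)
# method can be swapped between "rest" and "odbc" to compare the two back ends.
with udaExec.connect(method="rest", system="tdprod",
                     host="restserver.example.com",
                     username="dbc", password="dbc") as session:
    for row in session.execute("SELECT TOP 10 * FROM dbc.dbcinfo"):
        print(row)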
1 | 0 | I have a really basic django app to get weather.
I need to get user's location to show them their current location's weather. I am using GeoIP for that.
But an issue has come up that GeoIP does not have information of all the IP addresses. It returns NoneType for those IP addresses.
I want to know if there is any other precise way by which I can get info about the user's current latitude and longitude, maybe a browser API? It should not miss any user's location like GeoIP does. | false | 46,547,897 | 0.379949 | 0 | 0 | 2 | First of all, you can NOT get the exact location of a user by IP.
Some ISP IP addresses correspond not to the user's location but to the ISP's data center (IDC) location.
So if you "really" want your client's location, you should use the client's (browser's) Geolocation API (front-end).
What you have to do is..
get user location by Geolocation API
post user location to your server
return location-based information
update your webpage(DOM) with info. | 0 | 1,361 | 0 | 0 | 2017-10-03T15:11:00.000 | python,django,geolocation,geoip | django - How to get location through browser IP | 1 | 1 | 1 | 46,548,014 | 0 |
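The server side of that flow might look roughly like this in Django (the view name and the weather lookup are placeholders); the browser would POST the coordinates it gets from navigator.geolocation:
import json
from django.http import JsonResponse
from django.views.decorators.csrf import csrf_exempt

def look_up_weather(lat, lng):
    # Stub - replace with your real location-based weather lookup.
    return "sunny"

@csrf_exempt  # or send the CSRF token from the front-end instead
def location_weather(request):
    # The browser posts {"lat": ..., "lng": ...} from the Geolocation API.
    data = json.loads(request.body)
    return JsonResponse({"weather": look_up_weather(data["lat"], data["lng"])})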
1 | 0 | For the last few months i've been working on a Rest API for a web app for the company I work for. The endpoints supply data such as transaction history, user data, and data for support tickets. However, I keep running into one issue that always seems to set me back to some extent.
The issue I seem to keep running into is how do I handle user authentication for the Rest API securely? All data is going to be sent over a SSL connection, but there's a part of me that's paranoid about potential security problems that could arise. As it currently stands when a client attempts to login the client must provide a username or email address, and a password to a login endpoint (E.G "/api/login"). Along with with this information, a browser fingerprint must be supplied through header of the request that's sending the login credentials. The API then validates whether or not the specified user exists, checks whether or not the password supplied is correct, and stores the fingerprint in a database model. To access any other endpoints in the API a valid token from logging in, and a valid browser fingerprint are required.
I've been using browser fingerprints as a means to prevent token-hijacking, and as a way make sure that the same device used to login is being used to make the requests. However, I have noticed a scenario where this practice backfires on me. The client-side library i'm using to generate browser fingerprints isn't always accurate. Sometimes the library spits out a different fingerprint entirely. Which causes some client requests to fail as the different fingerprint isn't recognized by the API as being valid. I would like to keep track of what devices are used to make requests to the API. Is there a more consistent way of doing so, while still protecting tokens from being hijacked?
When thinking of the previous question, there is another one that also comes to mind. How do I store auth tokens on client-side securely, or in a way that makes it difficult for someone to obtain the tokens through malicious means such as a xss-attack? I understand setting a strict Content-Security Policy on browser based clients can be effective in defending against xss-attacks. However, I still get paranoid about storing tokens as cookies or in local storage.
I understand oauth2 is usually a good solution to user authentication, and I have considered using it before to deal with this problem. Although, i'm writing the API using Flask, and i'm also using JSON Web tokens. As it currently stands, Flask's implementation of oauth2 has no way to use JWTs as access tokens when using oauth for authentication.
This is my first large-scale project where I have had to deal with this issue and I am not sure what to do. Any help, advice, or critiques are appreciated. I'm in need of the help right now. | false | 46,562,267 | 0 | 0 | 0 | 0 | Put an API gateway in front of your API: the gateway is publicly exposed (i.e. in the DMZ) while the actual APIs are internal.
You can look into Kong.. | 0 | 260 | 0 | 0 | 2017-10-04T10:10:00.000 | python-3.x,rest,api,security,authentication | How to handle Rest API user authentication securely? | 1 | 1 | 1 | 46,562,640 | 0 |
0 | 0 | I have a "main" process and a few "worker" processes between which I want to pass some messages. The messages could be binary blobs but has a fixed size for each. I want an abstraction which will neatly buffer and separate out each message for me. I don't want to invent my own protocol on top of TCP, and I can't find any simple+lightweight solution that is portable across languages. (As of now, the "main" process is a Node.js server, and the "worker" processes are planned to be in Python.) | false | 46,571,076 | 0.197375 | 0 | 0 | 2 | The question is purely opinion based but I will give it a shot anyway:
WebSocket is an overkill imo. First of all in order to make WebSockets work you have to implement HTTP (or at least some basic form of it) to do the handshake. If you do that then it is better to stick to "normal" HTTP unless there's a reason for full-duplex communication. There are lots of tools everywhere to handle HTTP over (unix domain) sockets.
But this might be an overkill as well. If you have workers then I suppose performance matters. The easiest and (probably) most efficient solution is a following protocol: each message starts with 1-8 (pick a number) bytes which determine the size of the following content. The content is anything you want, e.g. protobuf message. For example if you want to send foo, then you send 0x03 0x00 0x66 0x6f 0x6f. First two bytes correspond to the size of the content (being 3) and then 3 bytes correspond to foo. | 0 | 3,857 | 0 | 3 | 2017-10-04T17:46:00.000 | python,node.js,websocket,ipc | Is using websockets a good idea for IPC? | 1 | 2 | 2 | 46,571,515 | 0 |
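A minimal Python sketch of that length-prefix framing (2-byte little-endian prefix, matching the 0x03 0x00 example above):
import struct

def send_msg(sock, payload):
    # 2-byte little-endian length prefix, then the payload itself.
    sock.sendall(struct.pack("<H", len(payload)) + payload)

def recv_exact(sock, n):
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("socket closed mid-message")
        buf += chunk
    return buf

def recv_msg(sock):
    (length,) = struct.unpack("<H", recv_exact(sock, 2))
    return recv_exact(sock, length)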
0 | 0 | I have a "main" process and a few "worker" processes between which I want to pass some messages. The messages could be binary blobs but has a fixed size for each. I want an abstraction which will neatly buffer and separate out each message for me. I don't want to invent my own protocol on top of TCP, and I can't find any simple+lightweight solution that is portable across languages. (As of now, the "main" process is a Node.js server, and the "worker" processes are planned to be in Python.) | false | 46,571,076 | 0 | 0 | 0 | 0 | It sounds like you need a message broker of some sort.
Your requirement for “buffering” would exclude, for example, ZeroMQ (which is ideal for inter-process communication, but has no built-in message persistence).
This leaves you with options such as RabbitMQ or SQS if you happen to be on AWS. You might also look at Kafka (Kinesis on AWS). These services all offer “buffering” of messages, with RabbitMQ offering the greatest range of configurations, but probably the greatest implementation hurdle.
Another option would be to use Redis as a simple messaging service.
There are a number of options, all of which suit different use-cases and environments. I should add, though, that "simple and lightweight" doesn't really fit with any solution other than - perhaps - ZeroMQ | 0 | 3,857 | 0 | 3 | 2017-10-04T17:46:00.000 | python,node.js,websocket,ipc | Is using websockets a good idea for IPC? | 1 | 2 | 2 | 46,571,589 | 0
0 | 0 | I am receiving the following error message in spyder.
Warning: You are using requests version , which is older than requests-oauthlib expects, please upgrade to 2.0.0 or later.
I am not sure how i upgrade requests. I am using python 2.7 as part of an anaconda installation | false | 46,572,148 | 0 | 0 | 0 | 0 | You should try conda install requests>=2, so conda will take care of all dependencies and try to install a version of requests above 2.0.0 | 0 | 192 | 0 | 0 | 2017-10-04T18:51:00.000 | python,python-2.7,anaconda,spyder | Error message in spyder, upgrade requests | 1 | 1 | 2 | 46,593,691 | 0 |
0 | 0 | I am trying to Power on/off a device using appium. I modified Webdriver.py in Appium python client. I added a function and command for power off. Its not working. Can anyone help me with Appium commands for power on/off. PS - I can not use adb commands | false | 46,604,245 | 0 | 0 | 0 | 0 | I'm interested what function you added, because Appium server does not support device power on/off out of box, the only way you can do it is to use adb directly | 0 | 877 | 0 | 0 | 2017-10-06T11:00:00.000 | webdriver,appium,python-appium | Power on/off using Appium commands | 1 | 2 | 2 | 47,365,628 | 0 |
0 | 0 | I am trying to Power on/off a device using appium. I modified Webdriver.py in Appium python client. I added a function and command for power off. Its not working. Can anyone help me with Appium commands for power on/off. PS - I can not use adb commands | false | 46,604,245 | 0 | 0 | 0 | 0 | adb shell reboot -p - powers off the device
adb reboot - restarts the device | 0 | 877 | 0 | 0 | 2017-10-06T11:00:00.000 | webdriver,appium,python-appium | Power on/off using Appium commands | 1 | 2 | 2 | 46,726,929 | 0
0 | 0 | I am trying to use python and understand SVG drawings. I would like python to behave similar to java script and get information from SVG. I understand that there can be 2 types of information in SVG.
XML based information - such as elementbyID, elementbyTagNames
Structural information - positional information taking transformations in to consideration too - such as getelementfrompoint, getboundingbox
I have searched around and found python libraries such as lxml for xml processing in svg. Also I found libraries such as svgpathtools, svg.path , but as I understand, these deal only with svgpath elements.
So my question is,
Are there any good libraries which support processing svg in python?(similar to java script) | false | 46,624,831 | 0 | 0 | 0 | 0 | Try to use Pygal. It's used for creating interactive .svg pictures. | 0 | 1,202 | 0 | 2 | 2017-10-07T20:32:00.000 | javascript,python,svg,libraries | Processing SVG in Python | 1 | 2 | 4 | 46,625,764 | 0 |
0 | 0 | I am trying to use python and understand SVG drawings. I would like python to behave similar to java script and get information from SVG. I understand that there can be 2 types of information in SVG.
XML based information - such as elementbyID, elementbyTagNames
Structural information - positional information taking transformations in to consideration too - such as getelementfrompoint, getboundingbox
I have searched around and found python libraries such as lxml for xml processing in svg. Also I found libraries such as svgpathtools, svg.path , but as I understand, these deal only with svgpath elements.
So my question is,
Are there any good libraries which support processing svg in python?(similar to java script) | false | 46,624,831 | -0.049958 | 0 | 0 | -1 | Start your search by visiting www.pypi.org and search for "svg". Review what exists and see what suits your needs. | 0 | 1,202 | 0 | 2 | 2017-10-07T20:32:00.000 | javascript,python,svg,libraries | Processing SVG in Python | 1 | 2 | 4 | 46,625,377 | 0 |
0 | 0 | Using the Python requests library, is there a way to fetch the HTTP response headers and only fetch the body over the network when the Content-Type header is some specific type?
I can of course issue a HEAD request, inspect the Content-Type and if the type matches, issue a GET request. But is there a way to avoid fetching the HTTP headers twice? | false | 46,634,665 | 0 | 0 | 0 | 0 | I choose to do requests.head(), inspect the content-type and if the type is something that should be fetched, do requests.get() to get the body.
The extra network I/O of fetching headers twice is outweighed by not fetching bodies of other content types. | 0 | 78 | 0 | 0 | 2017-10-08T18:35:00.000 | python-3.x,python-requests | Fetch content of HTML page using python requests dependig on Content-Type? | 1 | 1 | 1 | 46,694,608 | 0 |
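A small sketch of that flow with requests (the URL is a placeholder):
import requests

url = "https://example.com/maybe-interesting"
head = requests.head(url, allow_redirects=True)
# Only fetch the body when the Content-Type is what we want.
if head.headers.get("Content-Type", "").startswith("text/html"):
    body = requests.get(url).text
Another option worth checking is requests.get(url, stream=True): the headers are available before the body is read, so you can close the response without downloading the body if the Content-Type doesn't match.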
1 | 0 | When I do something like:
ec2_client.describe_images(ImageIds=['ami-123456'])
The response I get is missing the 'Tags'. This is not the case when I do the same call using aws cli:
aws ec2 describe-images --image-ids ami-123456 | false | 46,645,664 | 0 | 0 | 0 | 0 | FYI: The problem is caused by the fact that credentials from another account was used due to our setup with parent and child accounts | 0 | 113 | 0 | 0 | 2017-10-09T11:44:00.000 | python,amazon-web-services,amazon-ec2,botocore | botocore - Tags are missing from 'describe_images' response | 1 | 1 | 1 | 46,651,021 | 0 |
0 | 0 | I am using blpapi 3.5.5. windows python api. I am getting intraday tick data using //blp/refdata, following fields: BEST_BID, BEST_ASK and TRADE. Using Bloomberg terminal I found fields: IN_AUCTION, AUCTION_TYPE and TRADE_STATUS, but none of it works, returning NotFoundException.
Do you know any field containing stock info (e.g. in auction/continuous trading) available in //blp/refdata? | false | 46,665,924 | 0 | 0 | 0 | 0 | Intraday Tick fields are limited to the following fields:
TRADE
BID
ASK
BID_BEST
ASK_BEST
BID_YIELD
ASK_YIELD
MID_PRICE
AT_TRADE
BEST_BID
BEST_ASK
SETTLE
You can optionally include the following fields:
Action Codes
BicMic Codes
Broker Codes
Client Specific Fields
Condition Codes
Eq Ref Price
Exchange Codes
Indicator Codes
Non Plottable Events
Rps Codes
Spread Price
Trade Id
Trade Time
Upfront Price
Yield
As for IN_AUCTION, AUCTION_TYPE and TRADE_STATUS, you can pull them using a ReferenceDataRequest, or subscribe to IN_AUCTION_RT, RT_EXCH_TRADE_STATUS, respectively. | 0 | 1,211 | 0 | 0 | 2017-10-10T11:44:00.000 | python,bloomberg | Bloomberg API /blp/refdata: stockinfo | 1 | 3 | 3 | 46,687,728 | 0 |
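A rough blpapi sketch of pulling those fields with a ReferenceDataRequest (the host/port and the security are placeholders; the element names follow the usual //blp/refdata request pattern, so double-check them against the blpapi examples):
import blpapi

options = blpapi.SessionOptions()
options.setServerHost("localhost")
options.setServerPort(8194)
session = blpapi.Session(options)
session.start()
session.openService("//blp/refdata")
service = session.getService("//blp/refdata")

request = service.createRequest("ReferenceDataRequest")
request.getElement("securities").appendValue("VOD LN Equity")
for field in ("IN_AUCTION", "AUCTION_TYPE", "TRADE_STATUS"):
    request.getElement("fields").appendValue(field)
session.sendRequest(request)

while True:
    event = session.nextEvent(500)
    for msg in event:
        print(msg)
    if event.eventType() == blpapi.Event.RESPONSE:
        break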
0 | 0 | I am using blpapi 3.5.5. windows python api. I am getting intraday tick data using //blp/refdata, following fields: BEST_BID, BEST_ASK and TRADE. Using Bloomberg terminal I found fields: IN_AUCTION, AUCTION_TYPE and TRADE_STATUS, but none of it works, returning NotFoundException.
Do you know any field containing stock info (e.g. in auction/continuous trading) available in //blp/refdata? | false | 46,665,924 | 0 | 0 | 0 | 0 | Those fields are not available for all securities.
For example IN_AUCTION returns a value for VOD LN Equity but not for IBM US Equity. HELP HELP may be able to explain why.
So you need to add some logic and check for the exception. | 0 | 1,211 | 0 | 0 | 2017-10-10T11:44:00.000 | python,bloomberg | Bloomberg API /blp/refdata: stockinfo | 1 | 3 | 3 | 46,666,854 | 0 |
0 | 0 | I am using blpapi 3.5.5. windows python api. I am getting intraday tick data using //blp/refdata, following fields: BEST_BID, BEST_ASK and TRADE. Using Bloomberg terminal I found fields: IN_AUCTION, AUCTION_TYPE and TRADE_STATUS, but none of it works, returning NotFoundException.
Do you know any field containing stock info (e.g. in auction/continuous trading) available in //blp/refdata? | true | 46,665,924 | 1.2 | 0 | 0 | 0 | After communicating with support, we finally found the answer. When sending the request, 'conditionCodes' needs to be set to True; then, depending on the stock exchange, codes (mainly for auctions) will be sent, such as OA for opening auction, IA for intraday auction, etc. The meaning of some of the codes can be found in the terminal using QR <GO> | 0 | 1,211 | 0 | 0 | 2017-10-10T11:44:00.000 | python,bloomberg | Bloomberg API /blp/refdata: stockinfo | 1 | 3 | 3 | 46,724,437 | 0
1 | 0 | I was able to get the callback with the redirect_uri and auth code and I was able to authorize the user and redirect him but I am not getting the account_linking in the request object after successful login ie.., I want to check whether the user is logged in or not for every message he sends. | false | 46,667,754 | 0 | 1 | 0 | 0 | The account linking feature does not support this type of token validation. You would need to send a request to your auth server to check if the person is still logged in. | 0 | 125 | 0 | 0 | 2017-10-10T13:16:00.000 | python,bots,facebook-messenger,messenger,facebook-messenger-bot | Facebook messenger account linking | 1 | 1 | 1 | 46,671,593 | 0 |
0 | 0 | How can I identify a remote host OS (Unix/Windows) using python ? One solution I found is to check whether the port22 is open but came to know that some Windows hosts also having Port 22 open but connections refused. Please let me know the efficient way to do the same. Thanks in advance. | false | 46,669,453 | 0.197375 | 0 | 0 | 1 | For security reasons, most operating systems do not advertise information over the network. While tools such as nmap can deduce the OS running on a remote system by scanning ports over the network the only way to reliably know the OS is to login to the system. In many cases the OS will be reported as part of the login process so establishing a connection over the network will suffice to determine the OS. Running "uname -a" on the remote system will also retrieve the OS type on linux systems.
This runs uname -a on HOST over SSH and retrieves its output, which includes the OS type. Substitute a valid user name for UNAME and a host name for HOST.
#!/usr/bin/env python3
import subprocess

# Command executed on the remote host; its output names the OS.
CMD = "uname -a"

# Run ssh as a subprocess and capture the remote command's output.
conn = subprocess.Popen(["ssh", "UNAME@HOST", CMD],
                        shell=False,
                        stdout=subprocess.PIPE,
                        stderr=subprocess.PIPE)

# Read everything the remote command printed (a list of byte strings).
res = conn.stdout.readlines()
print(res) | 0 | 3,377 | 1 | 2 | 2017-10-10T14:36:00.000 | python,linux,windows,remote-access,remote-server | Efficient way of finding remote host's operating system using Python | 1 | 1 | 1 | 46,670,748 | 0
I have had a rough time getting my scripts to work on my Raspberry Pi Zero W, and the last program I need installed requires Selenium. The script was designed for Windows 10 + Python 2.7 because I write my scripts in that environment.
I was wondering if it is possible to use Selenium on a Raspberry Pi Zero W, preferably headless.
I can't find any info, help or guidelines online anywhere and have no idea how to use pip in raspbian (if it even has pip). | false | 46,675,769 | 0 | 1 | 0 | 0 | I don't see why you couldn't. You can install pip with apt install python-pip, you'll probably need to sudo that command unless you login as root.
Then you can just open a terminal and use the pip install command to get selenium. If that doesn't work you can try running python -m pip install instead. | 0 | 757 | 0 | 0 | 2017-10-10T20:49:00.000 | python-2.7,selenium | Selenium (Maybe headless) on raspberry pi zero w | 1 | 1 | 1 | 46,677,415 | 0 |
0 | 0 | I have a Python script which uses the requests module. I had installed requests on my machine and the script runs fine. Now I want to run this script on a server so that it's always available (otherwise it requires my local machine to be running the script all the time for it to function).
I installed requests (via pip install requests) and when I do pip freeze, it does show requests as one of the installed modules. But when I run the script, I get an error
import requests
ImportError: No module named requests
it is unable to find requests even when I try importing it in python shell on server, gives the same error - No module named requests.
How do I get this going - TIA
EDIT: It was an issue with the version difference - I was trying on python2.6(which happened to be the default on the server) and requests module was for 2.7. After trying to run the script using python2.7, it worked just fine. | false | 46,683,335 | 0 | 0 | 0 | 0 | It was an issue with the python versions. My server's default was 2.6 and requests was 2.7 - tried running python2.7 and then it worked. Closing this. | 1 | 752 | 0 | 1 | 2017-10-11T08:25:00.000 | python,python-requests | unable to import requests in python on server | 1 | 1 | 1 | 46,684,566 | 0 |
1 | 0 | I'm using BOT API for telegram,
Through setGameScore I tried to set the game score of a user with user_id and score, but it's not working...
I used bot.setGameScore(user_id=56443156, score=65)
Iam not using game to set only for inlinequery
I received this error: "Message to set game score not found" | false | 46,685,936 | 0 | 1 | 0 | 0 | Used:
bot.setGameScore(user_id=5432131, score=76, inline_message_id="uygrtfghfxGKJB")
I received these two(user_id,inline_message_id) parameters from update.callback_query | 0 | 489 | 0 | 0 | 2017-10-11T10:30:00.000 | python,python-telegram-bot | How to set game score using setGameScore in python telegram bot | 1 | 1 | 1 | 46,703,372 | 0 |
0 | 0 | I can see that we can create account on PyPI using OpenID as well. Can we also upload python packages to PyPI server using OpenID? Something like generic upload procedure by creating .pypirc file and using PyPI username and password. | true | 46,702,163 | 1.2 | 1 | 0 | 1 | I don't think it's possible. Setup username and password at PyPI and use them in your .pypirc. | 0 | 111 | 0 | 1 | 2017-10-12T05:38:00.000 | python,pip,pypi,python-wheel,twine | Is it possible to upload python package on PyPI using OpenID? | 1 | 1 | 1 | 46,708,728 | 0 |
I have a game where I have to get data from the server (through a REST web service with JSON) but the problem is I don't know when the data will be available on the server. So I decided to use a method that hits the server after a specific interval, or on request on every frame of the game. But certainly this is not the right, scalable and efficient approach. Obviously, hammering is not the right choice.
Now my question is that how do I know that data has arrived at server so that I can use this data to run my game. Or how should I direct the back-end team to design the server in an efficient way that it responds efficiently.
Remember at server side I have Python while client side is C# with unity game-engine. | false | 46,708,236 | 0 | 1 | 0 | 0 | It is clearly difficult to provide an answer with little details. TL;DR is that it depends on what game you are developing. However, polling is very inefficient for at least three reasons:
The former, as you have already pointed out, is inefficient because you generate additional workload when there is no need
The latter, because it requires TCP - server-generated updates can be sent using UDP instead, with some pros and cons (like potential loss of packets due to lack of ACK)
You may get the updates too late, particularly in the case of multiplayer games. Imagine that the last update happened right after the previous poll, and you poll every 5 seconds. The status could already be stale.
The long and the short of it is that if you are developing a turn-based game, poll could be alright. If you are developing (as the use of Unity3D would suggest) a real-time game, then server-generated updates, ideally using UDP, are in my opinion the way to go.
Hope that helps and good luck with your project. | 0 | 46 | 0 | 0 | 2017-10-12T11:16:00.000 | c#,python,json,rest,web-services | Check the data has updated at server without requesting every frame of the game | 1 | 1 | 1 | 46,719,477 | 0 |
0 | 0 | I am trying to web scrape some Tweets from this url using Python 3.5
url = "https://twitter.com/search?l=en&q=ecb%20draghi%20since%3A2012-09-01%20until%3A2012-09-02&src=typd"
My problem is that %20d %20s %20u are already encoded in Python 3.5, so my code does not run on this url. Is there a way to solve this issue?
Thanks in advance,
Best | false | 46,709,569 | 0 | 1 | 0 | 0 | %20 is the URL encoding for space (0x20 being space's ASCII code). Just replace all those %20 by spaces and everything will likely work. | 0 | 622 | 0 | 0 | 2017-10-12T12:25:00.000 | python,hyperlink,web-scraping,percentage | %20d %20s %20u in link Python 3.5 | 1 | 1 | 3 | 46,709,727 | 0 |
0 | 0 | wanna have a script that scrapes the titles of a list of URLs, but it could be super slow if we need to wait until the whole page gets loaded. The title is the only thing I am looking for.
Can we stop page loading when the title gets loaded? Maybe with something like EC.title_contains. | false | 46,751,028 | 0 | 0 | 0 | 0 | The problem is that webdriver.io, for example, waits until the page has fully loaded and the loading spinner in the tab is gone. This is for a good reason: a lot of APIs like .getText do not work until the complete page is loaded, because sometimes an element will only be loaded at the end.
But you can reduce the loading time by:
1. Use an extension like ScriptSafe or another simple script blocker that blocks EVERYTHING JavaScript, inline or external.
2. Go to Chrome settings and disable everything like cookies, JavaScript, Flash etc. - just everything.
3. Go to chrome://flags and disable everything from JavaScript (all APIs like the Gamepad API etc.) to WebGL, Canvas etc. - you can really disable everything; I also have a Chrome profile where I disabled everything.
Now with normal Internet Speed and good CPU you can open every site in 1-3 seconds.
Or alternatively you can try a headless browser. | 0 | 1,464 | 0 | 0 | 2017-10-15T02:04:00.000 | python,css,selenium,web-scraping | Selenium python: How to stop page loading when the head/title gets loaded? | 1 | 1 | 2 | 46,963,554 | 0
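If you go the "disable everything" route from Python, a rough sketch with ChromeOptions might look like this (the preference keys are the commonly cited ones for blocking images and JavaScript - treat them as an assumption and verify against your Chrome/chromedriver version; newer Selenium releases take options= instead of chrome_options=):
from selenium import webdriver

options = webdriver.ChromeOptions()
options.add_experimental_option("prefs", {
    "profile.managed_default_content_settings.images": 2,
    "profile.managed_default_content_settings.javascript": 2,
})
driver = webdriver.Chrome(chrome_options=options)
driver.get("https://example.com/")
print(driver.title)
driver.quit()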
1 | 0 | I know very little about js and I'm trying to create a program that will get information about a browser based javascript game while I play it. I can't use a webdriver as I will be playing the game at the time.
When I inspect the js on google chrome and look at the console, I can see all the information that I want to work with but I don't know how I can save that to a file or access it at the time in order to parse it. Preferably I'd be able to do this with python as that's what I will use for my code that will handle the info once I have it.
Any help or a point in the right direction would be appreciated, thank you :)
ps, I'm on Windows if that's important | false | 46,759,726 | 0 | 0 | 0 | 0 | hacking a game I see. Provided you are aware that what you are doing may diminish the validity of other's playtime as well as potentially committing a crime, I shall provide a solution:
You would need to get a piece of "sniffing" software which allows modifications.
The modifications are likely to be the addition of "Querystring" and "JSON" parsers to read the data traffic. At this point, you can begin learning how their particular system works, slowly replacing traffic with modified versions for your nefarious purposes.
"TCP Sniffing" includes creating a "RAW TCP SOCKET" in whatever language and then repeatedly "READ'ing / RECV'ing" from that socket. The socket MUST be bound TO THE SPECIFIC NETWORK INTERFACE CARD (NIC). Hint: "LOCALHOST" and "127.0.0.1" are NOT the addresses of any NIC.
You would then parse the data as a HTTP req/res stream, ensuring that you can read the contents of the frame correctly.
You would then be looking to either modify the contents of the POST body or the GET querystring. Either, depending on how the game designers designed their network system. | 0 | 36 | 0 | 0 | 2017-10-15T20:35:00.000 | python,screen-scraping | How to scrape javascript while using a webpage normally? | 1 | 1 | 1 | 46,760,481 | 0 |
1 | 0 | For the past few months, I have been using Selenium to automate a very boring but essential part of my job which involves uploading forms.
Recently, the site has implemented a feed back survey which randomly pops up and switches the driver to a new frame. I have written the code which can handle this pop up and switch back to my default frame however the issue is with the randomness.
Is there a way to run my popup handling code as soon the exception generated by the wrong frame is thrown?
I could encase my whole script in a try and except block but how do I direct the script to pick back up from where it left off?
Any advice is much appreciated! | false | 46,776,659 | 0.099668 | 0 | 0 | 1 | I have encountered a similar kind of situation.
My recommendation to you is to speak to your front-end devs or the company which provides the feedback survey and ask them for a script to disable the pop-ups for a browser session that you run.
Then, execute the script using the Selenium library (e.g. via a JavaScript executor such as driver.execute_script) as soon as you open the web page so that you do not see the randomly occurring pop-ups.
1 | 0 | For the past few months, I have been using Selenium to automate a very boring but essential part of my job which involves uploading forms.
Recently, the site has implemented a feed back survey which randomly pops up and switches the driver to a new frame. I have written the code which can handle this pop up and switch back to my default frame however the issue is with the randomness.
Is there a way to run my popup handling code as soon the exception generated by the wrong frame is thrown?
I could encase my whole script in a try and except block but how do I direct the script to pick back up from where it left off?
Any advice is much appreciated! | true | 46,776,659 | 1.2 | 0 | 0 | 2 | I've encountered this on sites that I work on also. What I've found on our site is that it's not really random, it's on a timer. So basically the way it works is a customer comes to the site. If they haven't been prompted to sign up for X, then they get a popup. If they signed up or dismissed the popup, a cookie is written tracking that they already got the message. You want to find that tracking cookie and recreate it to prevent the popups.
What I did was find that cookie, recreate it on an error page on the site domain (some made up page like example.com/some404page), and then start the script as normal. It completely prevents the popup from occurring.
To find out if this is your case, navigate to the home page (or whatever is appropriate) and just wait for the popup. That will give you a sense of how long the timer is. Close the popup and reload the page. If the popup doesn't occur again, then I'm guessing it's the same mechanic.
To find the cookie, clear cookies in your browser and open your site again. (Don't close the popup yet). I use Chrome so I'll describe how to do it. The general approach should work in other browsers but the specific steps and command names may vary.
Open up the devtools (F12)
Navigate to the Application tab and find Cookies (left panel)
Expand Cookies and click on the domain of the site you are working on. There may be a lot of cookies in there from other sites, depending on your site.
Click the Clear All button to clear all the cookies.
Close the popup and watch for cookies to be created. Hopefully you will see one or just a few cookies created.
Note the name of the cookies and delete them one at a time and reload the page and wait the 15s (or whatever) and see if the popup occurs.
Once the popup appears, you know that the last cookie you deleted is the one you need to recreate. You can now write a quick function that creates the cookie using Selenium by navigating to the dummy error page, example.com/some404page, and create the cookie. Call that function at the start of your script. Profit. | 0 | 2,555 | 0 | 5 | 2017-10-16T18:17:00.000 | python,selenium,automation,selenium-chromedriver | Selenium Python - How to handle a random pop up? | 1 | 2 | 2 | 46,777,980 | 0 |
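A tiny sketch of the cookie-recreation step (the cookie name/value and URLs are made up - use whatever you identified in devtools; driver is assumed to be an existing WebDriver instance):
# Land on any page of the same domain first so the cookie can be set there.
driver.get("https://example.com/some404page")
driver.add_cookie({"name": "survey_dismissed", "value": "1", "path": "/"})
driver.get("https://example.com/")   # the popup should no longer appear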
1 | 0 | This is probably not the best title for this question.
So i have a nodejs application running on my server which currently uses a python script for web-scraping but i am looking at moving this to the client-side due to individual client seeing different versions (potentially unique) of the same site.
In an ideal world I would like to use JavaScript to get the HTML response from a page (what I can see in Chrome by right-clicking and choosing View Source) to then be processed in JavaScript.
However from what i have read online this does not seem to be possible. I am aware of sites that provide the response (such as anyorigin.com) that can be scraped. However, these are not really suitable for me as i need to be able to scrape what the user see's as each user can potentially see something different on the site i want to scrape. The python script i am currently using would do this but it would require the user to have python installed in order for me to be able to execute it and this cannot be guaranteed.
Apologies for the block of text.
Is there any solution to this problem ? | true | 46,776,763 | 1.2 | 0 | 0 | 1 | After some research and the suggestions received, i created a chrome extension using the simple guide on the Chrome Developer site and used a CORSrequest to get what i needed.
If anyone finds this question and would like help, i am happy to provide further details/assistance :) | 0 | 1,650 | 0 | 1 | 2017-10-16T18:23:00.000 | javascript,python,node.js,client-side | Web Scraping on client-side | 1 | 2 | 2 | 47,111,903 | 0 |
1 | 0 | This is probably not the best title for this question.
So i have a nodejs application running on my server which currently uses a python script for web-scraping but i am looking at moving this to the client-side due to individual client seeing different versions (potentially unique) of the same site.
In an ideal world I would like to use JavaScript to get the HTML response from a page (what I can see in Chrome by right-clicking and choosing View Source) to then be processed in JavaScript.
However from what i have read online this does not seem to be possible. I am aware of sites that provide the response (such as anyorigin.com) that can be scraped. However, these are not really suitable for me as i need to be able to scrape what the user see's as each user can potentially see something different on the site i want to scrape. The python script i am currently using would do this but it would require the user to have python installed in order for me to be able to execute it and this cannot be guaranteed.
Apologies for the block of text.
Is there any solution to this problem ? | false | 46,776,763 | 0 | 0 | 0 | 0 | I was recently trying to do something very similar, and unfortunately, as far as I know there's not a way to do this on the client-side. You may be able to do some trickery and "post" the data you need back you the server where you deal with it, but I don't imagine that will be very efficient or straight forward.
Though if you do find something, please do share. | 0 | 1,650 | 0 | 1 | 2017-10-16T18:23:00.000 | javascript,python,node.js,client-side | Web Scraping on client-side | 1 | 2 | 2 | 46,776,879 | 0 |
0 | 0 | I'm trying to build a chatbot in FB messenger.
I could SEND typing indicator with sender action API.
However, I can't find information about receiving it.
Is there any way to do that or is it unavailable??
Thank you! | true | 46,783,029 | 1.2 | 1 | 0 | 0 | Nope, there is no way to detect the indicator programmatically. | 0 | 162 | 0 | 0 | 2017-10-17T05:26:00.000 | python,facebook,chatbot,messenger | how to receive typing indicator from user in facebook messenger | 1 | 1 | 1 | 46,815,095 | 0 |
0 | 0 | I am planning to do data extraction from web sources (web scraping) as part of my work. I would like to extract information within a 10 km radius of my company.
I would like to extract information such as condominiums, their addresses, number of units and price per square foot, as well as the number of schools, kindergartens and hotels in the area.
I understand I need to extract from a few sources/webpages. I will also be using Python.
I would like to know which library or libraries I should be using. Is web scraping the only means? Can we extract information from Google Maps?
Also, if anyone has any experience, I would really appreciate it if you could guide me on this.
Thanks a lot, guys. | true | 46,785,536 | 1.2 | 0 | 0 | 0 | For Google Maps, try the API; using web scraping tools for Maps data extraction is strongly discouraged by Google's Terms of Service.
If you are using Python, it has very nice libraries, BeautifulSoup and Scrapy, for this purpose.
Other means? You can extract POIs from OSM data; try the open-source tools. Property info? Maybe it's available for your county/state from a government office, so give it a try. | 0 | 360 | 0 | 0 | 2017-10-17T08:15:00.000 | python,web,beautifulsoup,data-extraction | data extraction from web | 1 | 1 | 1 | 46,786,396 | 0
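To make the "try the API" suggestion concrete, here is a minimal sketch of a Places Nearby Search call with the requests library. The coordinates, API key and place types are assumptions to adjust; property-specific details such as unit counts or price per square foot would still have to come from property portals or government records rather than Maps:

```python
# Minimal sketch: list places of a given type within 10 km of an office.
import requests

API_KEY = "<your-google-api-key>"           # placeholder
OFFICE_LAT, OFFICE_LNG = 3.1390, 101.6869   # assumed example coordinates

def nearby_places(place_type, radius_m=10000):
    """Return name/address pairs for places of one type within radius_m metres."""
    resp = requests.get(
        "https://maps.googleapis.com/maps/api/place/nearbysearch/json",
        params={
            "location": "{},{}".format(OFFICE_LAT, OFFICE_LNG),
            "radius": radius_m,
            "type": place_type,   # e.g. "school" or "lodging" (hotels)
            "key": API_KEY,
        },
    )
    resp.raise_for_status()
    return [
        {"name": p["name"], "address": p.get("vicinity")}
        for p in resp.json().get("results", [])
    ]

if __name__ == "__main__":
    for place in nearby_places("school"):
        print(place)
```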
0 | 0 | I'm trying to download a directory with python3 using the requests library. I wanted to do it by "walking" (like os.walk), but never found the corresponding functions in requests.
I'm struggling to find another way to do it. | true | 46,789,770 | 1.2 | 0 | 0 | 0 | You won't find functions "corresponding" to os.walk() in requests because there's no concept of a "directory" or "directory listing" in the HTTP protocol. All you have are URLs and resources.
Given your specs ("I try to get data from my own server, so it lists directory contents"), the best you can do is to parse the HTML your server generates for the directory listing and issue requests for the links you find. BeautifulSoup is possibly the simplest solution for the parsing. | 0 | 501 | 0 | 0 | 2017-10-17T12:08:00.000 | python,directory,python-requests | Trying to download a directory with requests | 1 | 1 | 2 | 46,790,562 | 0
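A minimal sketch of that approach, assuming a standard autoindex-style HTML listing; the base URL and output directory are placeholders:

```python
# Recursively "walk" a server-generated directory listing with requests + BeautifulSoup.
import os
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup

BASE_URL = "http://example.com/files/"   # placeholder: must end with "/"
OUT_DIR = "downloaded"

def walk_and_download(url, out_dir):
    os.makedirs(out_dir, exist_ok=True)
    html = requests.get(url).text
    for link in BeautifulSoup(html, "html.parser").find_all("a"):
        href = link.get("href")
        if not href or href.startswith(("?", "/", "#", "..")):
            continue  # skip sort links, parent directory, anchors
        target = urljoin(url, href)
        if href.endswith("/"):   # sub-directory: recurse into it
            walk_and_download(target, os.path.join(out_dir, href.rstrip("/")))
        else:                    # file: download it
            resp = requests.get(target)
            with open(os.path.join(out_dir, href), "wb") as fh:
                fh.write(resp.content)

if __name__ == "__main__":
    walk_and_download(BASE_URL, OUT_DIR)
```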
0 | 0 | I'm using the Python kubernetes 3.0.0 library and Kubernetes 1.6.6 on AWS.
I have pods that can disappear quickly. Sometimes when I try to exec into them I get an ApiException with a Handshake status 500 error.
This is happening with in-cluster configuration as well as with kube config.
When the pod/container doesn't exist I get a 404 error, which is reasonable, but 500 is an Internal Server Error. I don't see any 500 errors in kube-apiserver.log, where I do find the 404 ones.
What does it mean, and can someone point me in the right direction? | false | 46,789,946 | 0 | 1 | 0 | 0 | For me, the reason for the 500 was basically that the pod was unable to pull its image from GCR. | 0 | 1,730 | 1 | 2 | 2017-10-17T12:16:00.000 | python,kubernetes | kubectl exec returning `Handshake status 500` | 1 | 2 | 3 | 69,754,077 | 0
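Whatever the root cause, one way to make the exec path in the question more robust is to check the pod first and catch ApiException instead of letting the handshake error propagate. This is a minimal sketch; the namespace, pod name and command are placeholders, and import paths can differ slightly between client versions:

```python
# Guarded exec: skip pods that are not Running and treat 404/500 as recoverable.
from kubernetes import client, config
from kubernetes.client.rest import ApiException
from kubernetes.stream import stream

config.load_kube_config()          # or config.load_incluster_config()
v1 = client.CoreV1Api()

def safe_exec(name, namespace, command):
    try:
        pod = v1.read_namespaced_pod(name, namespace)
        if pod.status.phase != "Running":
            return None            # pod is terminating/pending: don't exec
        return stream(
            v1.connect_get_namespaced_pod_exec,
            name, namespace,
            command=command,
            stderr=True, stdin=False, stdout=True, tty=False,
        )
    except ApiException as exc:
        # 404: pod already gone; 500: handshake failed (pod going away,
        # image pull problems, evicted duplicates, ...). Log and move on.
        print("exec failed with HTTP %s: %s" % (exc.status, exc.reason))
        return None

if __name__ == "__main__":
    print(safe_exec("my-pod", "default", ["/bin/sh", "-c", "echo hello"]))
```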
0 | 0 | I'm using the Python kubernetes 3.0.0 library and Kubernetes 1.6.6 on AWS.
I have pods that can disappear quickly. Sometimes when I try to exec into them I get an ApiException with a Handshake status 500 error.
This is happening with in-cluster configuration as well as with kube config.
When the pod/container doesn't exist I get a 404 error, which is reasonable, but 500 is an Internal Server Error. I don't see any 500 errors in kube-apiserver.log, where I do find the 404 ones.
What does it mean, and can someone point me in the right direction? | false | 46,789,946 | 0 | 1 | 0 | 0 | For me the reason was:
I had two pods with the same label attached; one pod was in the Evicted state and the other was running. I deleted the Evicted pod and the issue was fixed. | 0 | 1,730 | 1 | 2 | 2017-10-17T12:16:00.000 | python,kubernetes | kubectl exec returning `Handshake status 500` | 1 | 2 | 3 | 70,119,831 | 0
0 | 0 | This program listens to a Redis queue. If there is data in Redis, the workers start doing their jobs. All these jobs have to run simultaneously, which is why each worker listens to one particular Redis queue.
My question is: is it common to run more than 20 workers listening to Redis?
python /usr/src/worker1.py
python /usr/src/worker2.py
python /usr/src/worker3.py
python /usr/src/worker4.py
python /usr/src/worker5.py
....
....
python /usr/src/worker6.py | false | 46,808,342 | 0 | 0 | 0 | 0 | If your workers need to do long-running tasks on the data, it's a reasonable solution, but each piece of data must be processed by a single worker.
This way you can easily (without threads, etc.) distribute your tasks; it's even better if your workers don't run on the same server. | 0 | 834 | 1 | 0 | 2017-10-18T10:44:00.000 | python,redis,queue,worker | Is it common to run 20 python workers which uses Redis as Queue ? | 1 | 2 | 2 | 46,808,422 | 0
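A minimal sketch of one such worker using redis-py and a blocking pop, so each idle process just waits on its own queue; the queue name and job handler are placeholders:

```python
# One worker process: block on its own Redis list and process items as they arrive.
import json
import redis

QUEUE_NAME = "queue:worker1"   # placeholder: each workerN.py gets its own queue
r = redis.Redis(host="localhost", port=6379, db=0)

def process_job(payload):
    # placeholder for whatever work this particular worker performs
    print("processing:", payload)

def main():
    while True:
        # BLPOP blocks until something is LPUSH/RPUSHed onto QUEUE_NAME,
        # so an idle worker costs almost nothing while it waits.
        _key, raw = r.blpop(QUEUE_NAME)
        process_job(json.loads(raw.decode("utf-8")))

if __name__ == "__main__":
    main()
```

Whether 20+ of these processes is reasonable depends mostly on what the jobs do; Redis itself handles that many blocked clients cheaply.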