Web scraping is one of the most powerful things you can learn, so let's learn to scrape some data from some websites using Python! (Basic introduction you could probably skip, copied from my other article.) First things first, we will need to have Python installed: read my article here to make sure you have Python and an IDE installed.
This is the third edition of this post. It was originally an intro to web scraping with Python (in Python 2) using the Requests library. It was then updated to cover some extra topics and also updated for Python 3.
The scenario is to download the back catalogue of the excellent MagPi magazine which is published monthly and the PDF is available for free. More info on the background is in the original post.
However, since the original post a fair bit has changed: the MagPi website was updated so the scraping broke, Python has moved on and I found that despite downloading the issues, having them on a Pi meant I never actually read them because I forgot they were there!
So this edition includes updates for all that: it works with the new MagPi website, there are more design / coding thoughts – and additional functionality such as (only) checking for new issues and then uploading to Dropbox.
Let’s get started!
Structure
The basic structure of the code is the same, but what we’d like to do in this version is:
- Start up
- Retrieve the issue number that we most recently downloaded
- Check the MagPi website to see if there is a newer version
  - Additionally, handle paging
- If not, do nothing
- If yes, download the file locally
- Upload the file to Dropbox
As before, this is not supposed to be extensive or complete – it could do with more error checking and so on. A link to a repo is at the bottom.
Also as before, the code was written in Python 3.7.3 running on macOS but we’d like it to run autonomously under Raspbian on a Raspberry Pi Model 3 which only has Python 3.5.3.
We do this by editing the file in macOS and then SFTP-ing it to the Pi. FWIW, I’ve moved from Jetbrains PyCharm to using VS Code for editing. In fact, I’ve moved to VS Code for most things.
Some config
The differences in Python versions between the environments cause some issues. The obvious thing to do would be to update the Pi to have e.g. Python 3.6. However, I've left it as is for two reasons:
1) the Pi I’m using does other things and I don’t want to deal with accidentally borking them by updating it;
2) having this dev/prod-esque environment is quasi-real life since it forces me to do a couple of other things which will be useful.
One such example here is config. The script requires paths to things which are environment specific, and using absolute paths helps. If these paths were hard-coded into the script, they'd work locally but break every time the script is transferred to the Pi, and/or you'd need to change them. This gets tiresome quickly, so what we really want is a single script that knows how to use different paths without having to modify it every time. In other words, we need config.
By far the simplest way to do this in Python is to create a new config.py file, add the variables to that and then import it in the main script. From then on you only ever need to update the main script (and possibly the config if/when you add new variables).
To use it:
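(A sketch, with illustrative variable names and paths rather than the exact originals.)

```python
# config.py: environment-specific values, kept out of the main script
DownloadPath = "/home/pi/magpi/"               # where the PDFs get saved on this machine
LatestFilePath = "/home/pi/magpi/latest.txt"   # the 'latest issue downloaded' file
```

And then in the main script:

```python
import config

print(config.DownloadPath)   # values are available via the module namespace
```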
So this is super easy.
Note: I'm following the general coding principle that variables in config.py are essentially public properties, hence they get Capital Case. Local variables (within the script) are camelCase.
You could obviously do this in other ways, such as using a .json file as config, and this would work fine. However, I rather like the autocomplete I get from VS Code by doing it this way.
Side-note: another good example of why you should do this would be in the case where you have credentials / client secrets etc. You should *never* put these in public source control. So, by externalising them from your main script you can efficiently tell your source control client to ignore them without breaking everything.
Logging
It’s probably about time we did some proper logging as opposed to just writing things out to the screen. Fortunately, this is ridiculously easy in Python, using the built-in logger and some config:
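(A minimal sketch; the log file name and message format are illustrative.)

```python
import logging

# Timestamped messages to a log file rather than just printing to the screen
logging.basicConfig(
    filename="magpi.log",
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(message)s",
)

logging.info("Starting up")
```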
This is the absolute basics. Read the docs to learn more.
Efficient scrapeage
What else. Well, now we’ve sorted some config, the other thing we need to do is store (persist) the latest issue we downloaded, and then refer to this as part of the next run of the script (rather than laboriously checking everything every time.)
There are plenty of ways to do this but in the spirit of keeping things really (really) simple, we'll just store the latest issue number in a file called latest.txt (which is just a file with a number in it, e.g. 1).
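A minimal sketch of the read and write helpers might look like this; GetLatest is a hypothetical counterpart to the WriteLatest method mentioned below, and the exact signatures are assumptions:

```python
import os

def GetLatest(path):
    # Return the most recently downloaded issue number, or 0 if we've never run before
    if not os.path.exists(path):
        return 0
    with open(path) as f:
        return int(f.read().strip())

def WriteLatest(path, issue):
    # Only write the value if it's higher than what's already stored
    if issue > GetLatest(path):
        with open(path, "w") as f:
            f.write(str(issue))
```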
You’ll notice the ‘WriteLatest’ method has a little check in there to only write the value if it’s higher. This is not strictly necessary and is only in there to make the initial scrape of the back catalogue simpler.
Paging
The original version of the MagPi website had all issues on one big page but now it's paged, so we need to handle that. There are plenty of ways to do this but the simplest is to load the home page and look for a specific div by class. It has the text 'x of y' pages in it, so we'll just extract the y value:
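(A sketch; the URL and the pager div's class name are placeholders, so check them against the live page's markup.)

```python
import requests
from bs4 import BeautifulSoup

homePage = requests.get("https://magpi.raspberrypi.org/issues")
soup = BeautifulSoup(homePage.text, "lxml")

pagerDiv = soup.find("div", class_="pagination")         # placeholder class name
pagerText = pagerDiv.text.strip()                        # e.g. "1 of 6 pages"
pageCount = int(pagerText.split(" of ")[-1].split()[0])  # grab the y value
```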
This uses the 'class_' selector introduced in BS4; if you're using an older version of BeautifulSoup, you may need a different way to select a div by class.
Err
First potential problem if you're entirely new to this. Hopefully you've followed the original post to get your Python environment set up, but if not, you may encounter an issue:
bs4.FeatureNotFound: Couldn't find a tree builder with the features you requested: lxml. Do you need to install a parser library?
This can usually be resolved using pip:
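Something along the lines of:

```
pip install lxml
```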
However, if the issue persists, you may also be in need of the python-lxml package:
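On Raspbian / Debian that's something like (python3-lxml being the Python 3 flavour):

```
apt-get install python-lxml
```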
Using sudo if you so need. This should make this particular issue go away.
Stringy
Now that we know the number of pages, we can update our search method to iterate through them:
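(A sketch, reusing pageCount from the pager check above; the URL pattern is illustrative.)

```python
for pageNumber in range(1, pageCount + 1):
    pageUrl = "https://magpi.raspberrypi.org/issues?page={0}".format(pageNumber)
    # ...then scrape pageUrl for issue links, as before
```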
Now we encounter our first language difference to do with string handling / formatting. Python 3.6 introduces the wonderful f-string functionality which is very similar to C#’s string interpolation:
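(Using the same illustrative URL pattern as in the loop above.)

```python
pageUrl = f"https://magpi.raspberrypi.org/issues?page={pageNumber}"
```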
If we don’t have this, then we need to use the older .format method:
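(Again with the illustrative URL pattern.)

```python
pageUrl = "https://magpi.raspberrypi.org/issues?page={0}".format(pageNumber)
```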
Still, this is better than the old %s replace stuff which gives me a few C++ nightmares.
For ease, I’m only using the latter method, but if you have 3.6+ then definitely replace with f-strings.
Dropbox
Moving on. As before, we hunt through the issue page looking for anchor tags. If we find one that matches the format we expect, we extract the issue number from it and, if that's greater than our last retrieved issue, we go ahead and download the PDF.
In the new version of the site, the download link is actually stored somewhere else so we go off and find it using similar methods as before.
We download the file locally and then, as an extra, upload it to Dropbox.
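The download itself is a straight requests fetch; a minimal sketch (the function name and the streaming approach are illustrative) might be:

```python
import requests

def DownloadIssue(pdfUrl, localPath):
    # Stream the PDF to disk so we don't hold the whole file in memory
    response = requests.get(pdfUrl, stream=True)
    with open(localPath, "wb") as f:
        for chunk in response.iter_content(chunk_size=8192):
            f.write(chunk)
```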
I’m not going to go through the full setup because I just followed this which uses the open source Dropbox-Uploader shell script. The only difference is that I created an ‘App Folder’ app as opposed to Full Dropbox. Least concerns and all that.
This then just magically appeared in my Dropbox folder on my machine, which was nice.
I call the dropbox_uploader.sh script from Python using:
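(A sketch using subprocess; the paths are illustrative and would really live in config.py.)

```python
import subprocess

# Push the freshly downloaded PDF to Dropbox via the Dropbox-Uploader script
subprocess.call(["/home/pi/Dropbox-Uploader/dropbox_uploader.sh",
                 "upload", "/home/pi/magpi/MagPi90.pdf", "MagPi90.pdf"])
```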
And that is everything. Should be ready to rock.
Done
Bear in mind that the first time it runs, it will try and download *everything* so if you want to test a bit, then update the latest.txt file to have e.g., 90 in it.
All that’s left to do is to add it to a scheduler of some sort (maybe cron…?)
With previous editions, people have struggled with the files and indentation and so on, so instead I have created a public repo (which, ftr, is my first ever public repo, ftw. Woo.) Should make snagging bugs a bit easier in the future.
As ever please comment with issues / bugs / stupid things below. There are many ways this could be done better or more efficiently but I’ve tried to keep it as simple and easy to follow as possible.
In the meantime, happy scraping!
I've recently had to perform some web scraping from a site that required login. It wasn't as straightforward as I expected, so I've decided to write a tutorial for it.
For this tutorial we will scrape a list of projects from our bitbucket account.
The code from this tutorial can be found on my Github.
We will perform the following steps:
- Extract the details that we need for the login
- Perform login to the site
- Scrape the required data
For this tutorial, I’ve used the following packages (can be found in the requirements.txt):
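(The list boils down to the two libraries used in the code below.)

```
requests
lxml
```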
Open the login page
Go to the following page: “bitbucket.org/account/signin”. You will see the login page (perform a logout in case you're already logged in).
Check the details that we need to extract in order to log in
In this section we will build a dictionary that will hold our details for performing login:
- Right click on the “Username or email” field and select “inspect element”. We will use the value of the “name” attribute for this input, which is “username”. “username” will be the key and our user name / email will be the value (on other sites this might be “email”, “user_name”, “login”, etc.).
- Right click on the “Password” field and select “inspect element”. In the script we will need to use the value of the “name” attribute for this input, which is “password”. “password” will be the key in the dictionary and our password will be the value (on other sites this might be “user_password”, “login_password”, “pwd”, etc.).
- In the page source, search for a hidden input tag called “csrfmiddlewaretoken”. “csrfmiddlewaretoken” will be the key and value will be the hidden input value (on other sites this might be a hidden input with the name “csrf_token”, “authentication_token”, etc.). For example “Vy00PE3Ra6aISwKBrPn72SFml00IcUV8”.
We will end up with a dict that will look like this:
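```python
# Placeholder values: fill in your own credentials and the token scraped from the page
payload = {
    "username": "<USER NAME>",
    "password": "<PASSWORD>",
    "csrfmiddlewaretoken": "<CSRF_TOKEN>"
}
```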
Keep in mind that this is the specific case for this site. While this login form is simple, other sites might require us to check the request log of the browser and find the relevant keys and values that we should use for the login step.
For this script we will only need to import the following:
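```python
from lxml import html
import requests
```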
First, we would like to create our session object. This object will allow us to persist the login session across all our requests.
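For example (the variable name is just a convention):

```python
# Create a session so the login cookies persist across subsequent requests
session_requests = requests.session()
```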
Second, we would like to extract the csrf token from the web page; this token is used during login. For this example we are using lxml and xpath, but we could have used regular expressions or any other method that extracts this data.
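A sketch, with the login URL from the page above and an xpath matching the hidden input described earlier:

```python
login_url = "https://bitbucket.org/account/signin/"
result = session_requests.get(login_url)

# Parse the login page and pull the token out of the hidden csrfmiddlewaretoken input
tree = html.fromstring(result.text)
csrf_token = tree.xpath("//input[@name='csrfmiddlewaretoken']/@value")[0]
```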
** More about xpath and lxml can be found here.
Next, we would like to perform the login phase. In this phase, we send a POST request to the login URL. We use the payload that we created in the previous step as the data. We also add a header to the request with a referer key set to the same URL.
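Something like this, reusing the session and payload from above:

```python
# POST the payload (with the freshly scraped csrf token) back to the login URL,
# sending the same URL as the referer header
payload["csrfmiddlewaretoken"] = csrf_token
result = session_requests.post(
    login_url,
    data=payload,
    headers=dict(referer=login_url)
)
```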
Now that we are able to successfully log in, we will perform the actual scraping from the bitbucket dashboard page.
In order to test this, let's scrape the list of projects from the bitbucket dashboard page. Again, we will use xpath to find the target elements and print out the results. If everything went OK, the output should be the list of buckets / projects that are in your bitbucket account.
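A sketch; the dashboard URL and the xpath are illustrative and depend on the page's current markup:

```python
dashboard_url = "https://bitbucket.org/dashboard/overview"
result = session_requests.get(dashboard_url, headers=dict(referer=dashboard_url))

# Pull the project names out of the page
tree = html.fromstring(result.content)
project_names = tree.xpath("//span[@class='repo-name']/text()")
print(project_names)
```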
You can also validate the request results by checking the returned status code from each request. It won't always let you know that the login phase was successful, but it can be used as an indicator.
For example:
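```python
print(result.ok)            # True for any non-error status
print(result.status_code)   # e.g. 200
```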
That’s it.
Full code sample can be found on Github.