Web Scraping Python Libraries



There are multiple Python libraries for web scraping, such as BeautifulSoup, Scrapy, and Selenium, each with its own trade-offs. Selenium can scrape dynamic content and can be used standalone, while BeautifulSoup handles only static content and depends on other libraries to fetch pages, such as Requests, an HTTP library that is vital to any data science toolkit.

The Internet evolves fast, and modern websites often use dynamic content loading mechanisms to provide the best user experience. On the other hand, this makes it harder to extract data from such web pages, as it requires executing the internal Javascript in the page context while scraping. Let's review several conventional techniques that allow data extraction from dynamic websites using Python.

What is a dynamic website?#

A dynamic website is a type of website that can update or load content after the initial HTML load. The browser receives basic HTML with JS and then loads the content using the received Javascript code. Such an approach increases page load speed and avoids reloading the same layout each time you'd like to open a new page.

Usually, dynamic websites use AJAX to load content dynamically, or even the whole site is based on a Single-Page Application (SPA) technology.

In contrast to dynamic websites, static websites contain all the requested content at page load.

A great example of a static website is example.com:

The whole content of this website is loaded as plain HTML during the initial page load.

To demonstrate the basic idea of a dynamic website, we can create a web page that contains dynamically rendered text. It will not make any request to fetch information; it will just render different HTML after the page load:

<html>
<head>
    <script>
        window.addEventListener('DOMContentLoaded', function() {
            document.getElementById('test').innerHTML = 'I ❤️ ScrapingAnt'
        });
    </script>
</head>
<body>
    <div id="test">Web Scraping is hard</div>
</body>
</html>

All we have here is an HTML file with a single <div> in the body that contains the text "Web Scraping is hard", but after the page load, that text is replaced with the text generated by the Javascript:

window.addEventListener('DOMContentLoaded', function() {
    document.getElementById('test').innerHTML = 'I ❤️ ScrapingAnt'
});

To prove this, let's open this page in the browser and observe the dynamically replaced text:

Alright, so the browser displays the text, and HTML tags wrap it.
Can't we use BeautifulSoup or lxml to parse it? Let's find out.

Extract data from a dynamic web page#

BeautifulSoup is one of the most popular Python libraries across the Internet for HTML parsing. Almost 80% of web scraping Python tutorials use this library to extract required content from the HTML.

Let's use BeautifulSoup for extracting the text inside <div> from our sample above.

import os
from bs4 import BeautifulSoup

test_file = open(os.getcwd() + '/test.html')
soup = BeautifulSoup(test_file, 'html.parser')
print(soup.find(id='test').get_text())

This code snippet uses the os library to open our test HTML file (test.html) from the local directory and creates an instance of BeautifulSoup stored in the soup variable. Using soup, we find the tag with id test and extract the text from it.

Earlier, we've seen that the browser renders the content of the test page as I ❤️ ScrapingAnt, but the code snippet output is the following:

Web Scraping is hard

And the result is different from our expectation (unless you've already figured out what is going on there). Everything is correct from the BeautifulSoup perspective - it parsed the data from the provided HTML file, but we want to get the same result as the browser renders. The reason is the dynamic Javascript, which has not been executed during HTML parsing.

We need the HTML to be run in a browser to see the correct values and then be able to capture those values programmatically.

Below you can find four different ways to execute dynamic website's Javascript and provide valid data for an HTML parser: Selenium, Pyppeteer, Playwright, and Web Scraping API.

Selenium: web scraping with a webdriver#

Selenium is one of the most popular web browser automation tools for Python. It allows communication with different web browsers by using a special connector - a webdriver.

To use Selenium with Chrome/Chromium, we'll need to download the webdriver from the repository and place it into the project folder. Don't forget to install Selenium itself by executing:

pip install selenium

Selenium instantiating and scraping flow is the following:

  • define and setup Chrome path variable
  • define and setup Chrome webdriver path variable
  • define browser launch arguments (to use headless mode, proxy, etc.)
  • instantiate a webdriver with defined above options
  • load a webpage via instantiated webdriver

From the code perspective, it looks like the following:

from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from bs4 import BeautifulSoup
import os

opts = Options()
# opts.add_argument('--headless')  # Uncomment if the headless version is needed
opts.binary_location = '<path to Chrome executable>'

# Set the location of the webdriver
chrome_driver = os.getcwd() + '/<Chrome webdriver filename>'

# Instantiate a webdriver
driver = webdriver.Chrome(options=opts, executable_path=chrome_driver)

# Load the HTML page
driver.get('file://' + os.getcwd() + '/test.html')

# Parse the rendered page with BeautifulSoup
soup = BeautifulSoup(driver.page_source, 'html.parser')
print(soup.find(id='test').get_text())
driver.quit()

And finally, we'll receive the required result: I ❤️ ScrapingAnt

Selenium usage for dynamic website scraping with Python is not complicated and allows you to choose a specific browser and version, but it consists of several moving components that have to be maintained. The code itself contains some boilerplate parts like the setup of the browser, the webdriver, etc.

I like to use Selenium for my web scraping projects, but you can find easier ways to extract data from dynamic web pages below.

Pyppeteer: Python headless Chrome#

Pyppeteer is an unofficial Python port of Puppeteer, the JavaScript (headless) Chrome/Chromium browser automation library. It can do mostly the same things Puppeteer can, but using Python instead of NodeJS.

Puppeteer is a high-level API to control headless Chrome, so it allows you to automate actions you would otherwise do manually in the browser: copy a page's text, download images, save a page as HTML or PDF, etc.

To install Pyppeteer you can execute the following command:

pip install pyppeteer


The usage of Pyppeteer for our needs is much simpler than Selenium:

import asyncio
import os
from bs4 import BeautifulSoup
from pyppeteer import launch

async def main():
    # Launch the browser
    browser = await launch()
    page = await browser.newPage()
    # Create a URI for our test file
    page_path = 'file://' + os.getcwd() + '/test.html'
    await page.goto(page_path)
    # Extract the rendered HTML and parse it with BeautifulSoup
    page_content = await page.content()
    soup = BeautifulSoup(page_content, 'html.parser')
    print(soup.find(id='test').get_text())
    await browser.close()

asyncio.get_event_loop().run_until_complete(main())

I've tried to comment on every atomic part of the code for a better understanding. However, generally, we've just opened a browser page, loaded a local HTML file into it, and extracted the final rendered HTML for further BeautifulSoup processing.

As we can expect, the result is the following: I ❤️ ScrapingAnt
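Beyond grabbing the rendered HTML, Pyppeteer exposes most of Puppeteer's page-level helpers mentioned above, such as screenshots and PDF export. Here is a minimal sketch of capturing a page; the URL and the output file names are arbitrary examples:

import asyncio
from pyppeteer import launch

async def save_page():
    browser = await launch()
    page = await browser.newPage()
    await page.goto('https://example.com')
    # Save a screenshot and a PDF of the rendered page
    await page.screenshot({'path': 'example.png'})
    await page.pdf({'path': 'example.pdf'})
    await browser.close()

asyncio.get_event_loop().run_until_complete(save_page())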

We did it again, without worrying about finding, downloading, and connecting a webdriver to a browser. However, Pyppeteer looks abandoned and not properly maintained. This situation may change in the near future, but I'd suggest looking at a more powerful library.

Playwright: Chromium, Firefox and Webkit browser automation#

Playwright can be considered an extended Puppeteer, as it allows using more browser types (Chromium, Firefox, and Webkit) to automate modern web app testing and scraping. You can use the Playwright API in JavaScript & TypeScript, Python, C#, and Java. And it's excellent, as the original Playwright maintainers support Python.

The API is almost the same as for Pyppeteer, but it has both sync and async versions.

Installation is as simple as always:

pip install playwright
playwright install

Let's rewrite the previous example using Playwright.

import os
from bs4 import BeautifulSoup
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch()
    # Open a new browser page
    page = browser.new_page()
    page_path = 'file://' + os.getcwd() + '/test.html'
    # Open our test file in the opened page
    page.goto(page_path)
    page_content = page.content()
    # Process extracted content with BeautifulSoup
    soup = BeautifulSoup(page_content, 'html.parser')
    print(soup.find(id='test').get_text())
    # Close browser
    browser.close()

As a good tradition, we can observe our beloved output: I ❤️ ScrapingAnt
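Since Playwright also ships an async API, here is a minimal sketch of the same flow in its async flavour, assuming the same local test.html file:

import asyncio
import os
from bs4 import BeautifulSoup
from playwright.async_api import async_playwright

async def main():
    async with async_playwright() as p:
        browser = await p.chromium.launch()
        page = await browser.new_page()
        # Open our test file in the opened page
        await page.goto('file://' + os.getcwd() + '/test.html')
        page_content = await page.content()
        await browser.close()
    # Process extracted content with BeautifulSoup
    soup = BeautifulSoup(page_content, 'html.parser')
    print(soup.find(id='test').get_text())

asyncio.run(main())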

We've gone through several different data extraction methods with Python, but is there any more straightforward way to implement this job? How can we scale our solution and scrape data with several threads?

Meet the web scraping API!

Web Scraping API#

ScrapingAnt web scraping API provides the ability to scrape dynamic websites with only a single API call. It already handles headless Chrome and rotating proxies, so the response provided will already consist of Javascript-rendered content. ScrapingAnt's proxy pool prevents blocking and provides a constant and high data extraction success rate.

Usage of web scraping API is the simplest option and requires only basic programming skills.

You do not need to maintain the browser, the library, proxies, webdrivers, or any other aspect of a web scraper, and you can focus on the most exciting part of the work: data analysis.

As the web scraping API runs on the cloud servers, we have to serve our file somewhere to test it. I've created a repository with a single file: https://github.com/kami4ka/dynamic-website-example/blob/main/index.html

To check it out as HTML, we can use another great tool: HTMLPreview

The final test URL for scraping dynamic web data looks like this: http://htmlpreview.github.io/?https://github.com/kami4ka/dynamic-website-example/blob/main/index.html

The scraping code itself is the simplest one across all four described libraries. We'll use the ScrapingAnt API client library (scrapingant-client) to access the web scraping API.

Let's install it first:

pip install scrapingant-client

And use the installed library:

from bs4 import BeautifulSoup
from scrapingant_client import ScrapingAntClient

# Define a URL with dynamic web content
url = 'http://htmlpreview.github.io/?https://github.com/kami4ka/dynamic-website-example/blob/main/index.html'

# Create a ScrapingAntClient instance
client = ScrapingAntClient(token='<YOUR-SCRAPINGANT-API-TOKEN>')

# Get the rendered HTML page content
page_content = client.general_request(url).content

# Parse content with BeautifulSoup
soup = BeautifulSoup(page_content, 'html.parser')
print(soup.find(id='test').get_text())

To get your API token, please visit the Login page to authorize in the ScrapingAnt User panel. It's free.

And the result is still the required one.

All the headless browser magic happens in the cloud, so you only need to make an API call to get the result.

Check out the documentation for more info about ScrapingAnt API.

Summary#

Today we've checked four free tools that allow scraping dynamic websites with Python. All these libraries use a headless browser (or API with a headless browser) under the hood to correctly render the internal Javascript inside an HTML page. Below you can find links to find out more information about those tools and choose the handiest one:

Happy web scraping, and don't forget to use proxies to avoid blocking 🚀


In this chapter, let us learn various Python modules that we can use for web scraping.


Python Development Environments using virtualenv

Virtualenv is a tool to create isolated Python environments. With the help of virtualenv, we can create a folder that contains all necessary executables to use the packages that our Python project requires. It also allows us to add and modify Python modules without access to the global installation.

You can use the pip command to install virtualenv (a consolidated example session is shown after these steps).

Now, we need to create a directory that will represent the project.

Next, enter that directory.

Now, initialize a virtual environment folder with a name of our choice inside it.

Now, activate the virtual environment. Once it is successfully activated, you will see its name on the left-hand side of the prompt in brackets.

We can install any module in this environment with pip.

To deactivate the virtual environment, we can use the deactivate command.

You can see that (websc) has been deactivated.
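Putting the steps above together, a typical session might look like the sketch below. The project directory name newproj is only an example, websc matches the environment name visible in the prompt above, and the activation command is shown for Windows with the Linux/macOS equivalent in the comment.

pip install virtualenv
mkdir newproj                # example project directory
cd newproj
virtualenv websc             # create an isolated environment named "websc"
websc\Scripts\activate       # Windows; on Linux/macOS: source websc/bin/activate
pip install requests         # install any module inside the active environment
deactivate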

Python Modules for Web Scraping


Web scraping is the process of constructing an agent which can extract, parse, download and organize useful information from the web automatically. In other words, instead of manually saving the data from websites, the web scraping software will automatically load and extract data from multiple websites as per our requirement.

In this section, we are going to discuss about useful Python libraries for web scraping.

Requests

It is a simple Python web scraping library. It is an efficient HTTP library used for accessing web pages. With the help of Requests, we can get the raw HTML of web pages, which can then be parsed to retrieve the data. Before using requests, let us understand its installation.


Installing Requests

We can install it either in our virtual environment or in the global installation. With the help of the pip command, we can easily install it as follows −

pip install requests

Example

In this example, we are making a GET HTTP request for a web page. For this, we first need to import the requests library.

Next, we use requests to make a GET request for the URL https://authoraditiagarwal.com/.

Now we can retrieve the content by using the .text property.

Observe that in the output we get the first 200 characters of the page.
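Putting these steps together, a minimal sketch might look like this (the variable name r is just a convention):

import requests

# Make a GET HTTP request for the page
r = requests.get('https://authoraditiagarwal.com/')

# Retrieve the raw HTML and print its first 200 characters
print(r.text[:200])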

Urllib3

It is another Python library that can be used for retrieving data from URLs, similar to the requests library. You can read more about it in its technical documentation at https://urllib3.readthedocs.io/en/latest/.

Installing Urllib3

Using the pip command, we can install urllib3 either in our virtual environment or in the global installation.

pip install urllib3

Example: Scraping using Urllib3 and BeautifulSoup

In the following example, we are scraping a web page using Urllib3 and BeautifulSoup. We are using Urllib3 in place of the requests library to get the raw data (HTML) from the web page. Then we are using BeautifulSoup to parse that HTML data.
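A minimal sketch of that flow might look like the following; the target URL is reused from the Requests example above, and printing the page title is just an illustrative choice:

import urllib3
from bs4 import BeautifulSoup

# Fetch the raw HTML with Urllib3
http = urllib3.PoolManager()
r = http.request('GET', 'https://authoraditiagarwal.com/')

# Parse the HTML data with BeautifulSoup and print the page title
soup = BeautifulSoup(r.data, 'html.parser')
print(soup.title.get_text())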

You will observe the scraped output when you run this code.

Selenium

It is an open source automated testing suite for web applications across different browsers and platforms. It is not a single tool but a suite of software. We have selenium bindings for Python, Java, C#, Ruby and JavaScript. Here we are going to perform web scraping by using selenium and its Python bindings. You can learn more about Selenium with Java on the link Selenium.


Selenium Python bindings provide a convenient API to access Selenium WebDrivers like Firefox, IE, Chrome, Remote etc. The current supported Python versions are 2.7, 3.5 and above.
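For instance, a minimal sketch of picking different WebDrivers through those bindings could look like this; the Selenium server URL for the Remote driver is a placeholder, and the local drivers must be downloaded as described below:

from selenium import webdriver

# Local browsers (their driver executables must be available on PATH)
firefox_browser = webdriver.Firefox()
chrome_browser = webdriver.Chrome()

# A Remote session pointing at a Selenium server or grid
remote_browser = webdriver.Remote(
    command_executor='http://127.0.0.1:4444/wd/hub',
    desired_capabilities={'browserName': 'firefox'},
)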

Installing Selenium

Using the pip command, we can install selenium either in our virtual environment or in the global installation.

pip install selenium

As Selenium requires a driver to interface with the chosen browser, we need to download it. Drivers are available for the following browsers from their respective project pages:

  • Chrome
  • Edge
  • Firefox
  • Safari

Example

This example shows web scraping using Selenium. It can also be used for testing, which is called Selenium testing.


After downloading the driver for the particular browser version, we can start programming in Python.

First, we need to import webdriver from selenium.

Now, provide the path of the web driver we downloaded as per our requirement.

Now, provide the URL we want to open in the web browser controlled by our Python script.

We can also scrape a particular element by providing its XPath, as in lxml (see the sketch below).
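Putting these steps together, a minimal sketch might look like this; the driver path, the target URL, and the XPath are placeholders to adapt to your own setup:

from selenium import webdriver

# Path to the browser driver downloaded above (placeholder)
path = r'C:\drivers\chromedriver.exe'
browser = webdriver.Chrome(executable_path=path)

# Open the URL we want to scrape in the controlled browser
browser.get('https://authoraditiagarwal.com/')

# Scrape a particular element by providing its XPath (example XPath)
element = browser.find_element_by_xpath('//h1')
print(element.text)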


You can check the browser, controlled by the Python script, for the output.

Scrapy

Scrapy is a fast, open-source web crawling framework written in Python, used to extract data from web pages with the help of selectors based on XPath. Scrapy was first released on June 26, 2008, licensed under BSD, with the milestone 1.0 release in June 2015. It provides us all the tools we need to extract, process, and structure data from websites.

Installing Scrapy

Using the pip command, we can install scrapy either in our virtual environment or in the global installation.

pip install scrapy
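As a tiny illustration of the selector-based extraction mentioned above, a minimal spider might look like the sketch below; the spider name and the target site (a public scraping sandbox) are just examples, and it can be run with scrapy runspider quotes_spider.py:

import scrapy

class QuotesSpider(scrapy.Spider):
    name = 'quotes'
    start_urls = ['https://quotes.toscrape.com/']

    def parse(self, response):
        # Extract every quote on the page with an XPath selector
        for quote in response.xpath('//span[@class="text"]/text()'):
            yield {'quote': quote.get()}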

For a more detailed study of Scrapy, you can go to the link Scrapy.