Scraping Webpages in Python With Beautiful Soup: The Basics

In a previous tutorial, I showed you how to use the Requests module to access webpages using Python. The tutorial covered a lot of topics like making GET/POST requests and downloading things like images or PDFs programmatically. The one thing missing from that tutorial was a guide on scraping webpages you accessed using Requests to extract the information that you need.

In this tutorial, you will learn about Beautiful Soup, which is a Python library to extract data from HTML files. The focus in this tutorial will be on learning the basics of the library, and more advanced topics will be covered in the next tutorial. Please note that this tutorial uses Beautiful Soup 4 for all the examples.

Installation

You can install Beautiful Soup 4 using pip. The package name is beautifulsoup4. It should work on both Python 2 and Python 3.
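For example:

```
pip install beautifulsoup4
```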

If you don’t have pip installed on your system, you can directly download the Beautiful Soup 4 source tarball and install it using setup.py.

Beautiful Soup is originally packaged as Python 2 code. When you install it for use with Python 3, it is automatically converted to Python 3 code. This conversion only happens during installation, so if you run the source directly without installing the package, the code won’t be converted. Here are a few common errors that you might notice:

  • The “No module named HTMLParser” ImportError occurs when you are running the Python 2 version of the code under Python 3.
  • The “No module named html.parser” ImportError occurs when you are running the Python 3 version of the code under Python 2.

Both of the errors above can be corrected by uninstalling and reinstalling Beautiful Soup.

Installing a Parser

Before discussing the differences between different parsers that you can use with Beautiful Soup, let’s write the code to create a soup.

The BeautifulSoup object can accept two arguments. The first argument is the actual markup, and the second argument is the parser that you want to use. The different parsers are: html.parser, lxml, and html5lib. The lxml parser has two versions, an HTML parser and an XML parser.
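For instance, here is a minimal soup built from an in-memory string using the built-in parser:

```python
from bs4 import BeautifulSoup

soup = BeautifulSoup('<p>Hello, <b>world</b>!</p>', 'html.parser')
print(soup.b.text)  # world
```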

The html.parser is a built-in parser, and it does not handle malformed markup well in older versions of Python (before 2.7.3 and 3.2.2). You can install the other parsers using the following commands:
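```
pip install lxml
pip install html5lib
```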

The lxml parser is very fast and can parse the given HTML quickly. On the other hand, the html5lib parser is very slow, but it is also extremely lenient. Here is an example of feeding the same piece of invalid markup to each of these parsers; the exact output may vary slightly between parser versions:
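```python
from bs4 import BeautifulSoup

invalid_markup = '<a></p>'

print(BeautifulSoup(invalid_markup, 'html.parser'))
# <a></a>

print(BeautifulSoup(invalid_markup, 'lxml'))
# <html><body><a></a></body></html>

print(BeautifulSoup(invalid_markup, 'html5lib'))
# <html><head></head><body><a><p></p></a></body></html>
```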

The differences outlined by the above example only matter when you are parsing invalid HTML. However, a lot of the HTML on the web is malformed, so knowing these differences will help you debug parsing errors and decide which parser to use in a project. Generally, the lxml parser is a very good choice.

Objects in Beautiful Soup

Beautiful Soup parses the given HTML document into a tree of Python objects. There are four main Python objects that you need to know about: Tag, NavigableString, BeautifulSoup, and Comment.

The Tag object refers to an actual XML or HTML tag in the document. You can access the name of a tag using tag.name. You can also set a tag’s name to something else. The name change will be visible in the markup generated by Beautiful Soup.

You can access attributes like the class and id of a tag using tag['class'] and tag['id'] respectively. You can also access the whole dictionary of attributes using tag.attrs, and you can add, remove, or modify a tag’s attributes. Attributes like an element’s class, which can take multiple values, are stored as a list.
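Here is a short illustration using a made-up snippet of markup:

```python
from bs4 import BeautifulSoup

soup = BeautifulSoup('<p class="intro lead" id="first">Hello</p>', 'html.parser')
tag = soup.p

print(tag.name)      # p
print(tag['id'])     # first
print(tag['class'])  # ['intro', 'lead'] -- class is multi-valued, so it comes back as a list
print(tag.attrs)     # {'class': ['intro', 'lead'], 'id': 'first'}

tag.name = 'div'     # renaming the tag changes the generated markup
tag['id'] = 'renamed'
print(tag)           # <div class="intro lead" id="renamed">Hello</div>
```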

The text within a tag is stored as a NavigableString in Beautiful Soup. It has a few useful methods, like replace_with("string") to replace the text within a tag. You can also convert a NavigableString to a regular Unicode string using str() (or unicode() in Python 2).

Beautiful Soup also allows you to access the comments in a webpage. These comments are stored as a Comment object, which is a special type of NavigableString.
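Here is a small sketch that ties these objects together, again using made-up markup:

```python
from bs4 import BeautifulSoup, Comment

soup = BeautifulSoup('<p>Old text<!-- a hidden note --></p>', 'html.parser')

text = soup.p.contents[0]     # the NavigableString 'Old text'
comment = soup.p.contents[1]  # the Comment ' a hidden note '

print(str(text))                     # Old text (use unicode() on Python 2)
print(isinstance(comment, Comment))  # True

text.replace_with('New text')        # swap out the string inside the tag
print(soup.p)                        # <p>New text<!-- a hidden note --></p>
```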

You have already learned about the BeautifulSoup object in the previous section. It is used to represent the document as a whole. Since it does not correspond to an actual HTML or XML tag, it has no attributes, and its name is set to the special value '[document]'.

Getting the Title, Headings, and Links

You can extract the page title and other such data very easily using Beautiful Soup. Let’s scrape the Wikipedia page about Python. First, you will have to get the markup of the page with Requests, as covered in the tutorial on accessing webpages with the Requests module:
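A minimal version of that code, assuming the lxml parser is installed:

```python
import requests
from bs4 import BeautifulSoup

req = requests.get('https://en.wikipedia.org/wiki/Python_(programming_language)')
soup = BeautifulSoup(req.text, 'lxml')
```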

Now that you have created the soup, you can get the title of the webpage using the following code:
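```python
print(soup.title)
# <title>Python (programming language) - Wikipedia</title>

print(soup.title.name)    # title
print(soup.title.string)  # the title text; the exact string depends on the live page
```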

You can also scrape the webpage for other information, like the main heading or the first paragraph, along with their classes and id attributes.
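For example (the firstHeading id below is part of Wikipedia’s own markup and may change over time):

```python
print(soup.h1.text)         # text of the main heading
print(soup.h1.get('id'))    # 'firstHeading' in Wikipedia's markup
print(soup.p.text)          # text of the first paragraph
print(soup.p.get('class'))  # its classes, if any
```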

Similarly, you can iterate through all the links or subheadings in a document using the following code:
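```python
# Print the destination of every link on the page.
for link in soup.find_all('a'):
    print(link.get('href'))

# Print the text of every subheading; Wikipedia marks them up as <h2> elements.
for heading in soup.find_all('h2'):
    print(heading.text.strip())
```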

Handling Multi-Valued and Duplicate Attributes

Different elements in an HTML document use a variety of attributes for different purposes. For example, you can add class or id attributes to style, group, or identify elements. Similarly, you can use data attributes to store any additional information. Not all attributes can accept multiple values, but a few can. The HTML specification has a clear set of rules for these situations, and Beautiful Soup tries to follow them all. However, it also allows you to specify how you want to handle the data returned by multi-valued attributes. This feature was added in version 4.8, so make sure that you have installed the right version before using it.

By default, attributes like class, which can have multiple values, will return a list, while attributes like id will return a single string value. You can pass an argument called multi_valued_attributes to the BeautifulSoup constructor with its value set to None. This will make sure that all attributes return their values as plain strings.

Here is an example, using a made-up snippet of markup:
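```python
from bs4 import BeautifulSoup

markup = '<a class="notice light" id="recent" href="#">Link</a>'

# Default behavior: class is multi-valued and comes back as a list.
soup = BeautifulSoup(markup, 'html.parser')
print(soup.a['class'])  # ['notice', 'light']
print(soup.a['id'])     # recent

# With multi_valued_attributes=None, every attribute is returned as a string.
soup = BeautifulSoup(markup, 'html.parser', multi_valued_attributes=None)
print(soup.a['class'])  # notice light
```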

There is no guarantee that the HTML you get from different websites will always be completely valid. It could have many different issues, like duplicated attributes. Starting from version 4.9.1, Beautiful Soup allows you to specify what should be done in such situations by setting a value for the on_duplicate_attribute argument. Different parsers handle this issue differently, and you will need to use the built-in html.parser to force a specific behavior.
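A short sketch of both behaviors, using the built-in parser and a made-up tag with a duplicated href:

```python
from bs4 import BeautifulSoup

markup = '<a href="https://example.com/first" href="https://example.com/second">Link</a>'

# By default, the last value wins.
soup = BeautifulSoup(markup, 'html.parser')
print(soup.a['href'])  # https://example.com/second

# With 'ignore', the first value is kept and later duplicates are discarded.
soup = BeautifulSoup(markup, 'html.parser', on_duplicate_attribute='ignore')
print(soup.a['href'])  # https://example.com/first
```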

Navigating the DOM

You can navigate through the DOM tree using regular tag names. Chaining those tag names can help you navigate the tree more deeply. For example, you can get the first link in the first paragraph of the given Wikipedia page by using soup.p.a. All the links in the first paragraph can be accessed by using soup.p.find_all('a').
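Continuing with the soup created from the Wikipedia page earlier:

```python
# First link inside the first paragraph of the document.
print(soup.p.a)

# Every link inside the first paragraph.
for link in soup.p.find_all('a'):
    print(link.get('href'))
```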

You can also access all the children of a tag as a list by using tag.contents. To get the child at a specific index, you can use tag.contents[index]. You can also iterate over a tag’s children by using the .children attribute.

Both .children and .contents are useful only when you want to access the direct or first-level descendants of a tag. To get all the descendants, you can use the .descendants attribute.
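Here is a quick sketch of the difference, using the <body> tag of the page:

```python
body = soup.body

print(len(body.contents))  # number of direct children
print(body.contents[0])    # the child at index 0 (often a bare newline string)

for child in body.children:  # iterate over direct children only
    print(child.name)        # NavigableStrings have a name of None

# .descendants walks the entire subtree below the tag, not just the first level.
print(len(list(body.descendants)))
```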

You can also access the parent of an element using the .parent attribute. Similarly, you can access all the ancestors of an element using the .parents attribute. The parent of the top-level <html> tag is the BeautifulSoup object itself, and its parent is None.

You can access the previous and next sibling of an element using the .previous_sibling and .next_sibling attributes.

For two elements to be siblings, they must have the same parent. This means that the first child of an element will not have a previous sibling. Similarly, the last child of an element will not have a next sibling. In actual webpages, the previous and next siblings of an element will often be newline characters, because the whitespace between tags is itself parsed as a NavigableString.

You can also iterate over all the siblings of an element using .previous_siblings and .next_siblings.
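A brief sketch of these attributes, again using the soup from earlier:

```python
title = soup.title

print(title.parent.name)      # head
for parent in title.parents:  # walks all the way up to the BeautifulSoup object
    print(parent.name)        # the last name printed is '[document]'

paragraph = soup.p
print(repr(paragraph.previous_sibling))  # often a newline string in real pages
print(repr(paragraph.next_sibling))

for sibling in paragraph.next_siblings:
    if sibling.name is not None:  # skip the bare whitespace strings
        print(sibling.name)
```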

You can go to the element that comes immediately after the current element using the .next_element attribute. To access the element that comes immediately before the current element, use the .previous_element attribute.

Similarly, you can iterate over all the elements that come before and after the current element using .previous_elements and .next_elements respectively.
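For example:

```python
link = soup.a

# .next_element follows parse order, so for a link it is usually
# the text inside the link rather than the next tag.
print(repr(link.next_element))
print(repr(link.previous_element))

# The first few elements that come after the link in parse order.
for element in list(link.next_elements)[:5]:
    print(repr(element))
```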

Parsing Only Part of a Document

Let’s say that you need to process a large amount of data while looking for something specific, and saving processing time or memory is important to you. In that case, you can take advantage of the SoupStrainer class in Beautiful Soup. This class allows you to focus only on specific elements while ignoring the rest of the document. For example, you can use it to ignore everything on the webpage besides images by passing appropriate selectors to the SoupStrainer constructor.

Keep in mind that the SoupStrainer class will not work with the html5lib parser. However, you can use it with both lxml and the built-in parser. Here is an example where we parse the Wikipedia page for the United States and get all the images with the class thumbimage.
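A sketch of that example, assuming the lxml parser and Wikipedia’s current use of the thumbimage class:

```python
import requests
from bs4 import BeautifulSoup, SoupStrainer

req = requests.get('https://en.wikipedia.org/wiki/United_States')

# Parse only <img> tags that have the class "thumbimage"; skip everything else.
thumb_images = SoupStrainer('img', class_='thumbimage')
soup = BeautifulSoup(req.text, 'lxml', parse_only=thumb_images)

for image in soup.find_all('img'):
    print(image['src'])
```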

You should note that I used class_ instead of class to get these elements because class is a reserved keyword in Python.

Final Thoughts

After completing this tutorial, you should have a good understanding of the main differences between the different HTML parsers. You should also be able to navigate through a webpage and extract important data. This can be helpful when you want to analyze all the headings or links on a given website.

In the next part of the series, you will learn how to use the Beautiful Soup library to search and modify the DOM.


This content originally appeared on Envato Tuts+ Tutorials and was authored by Monty Shokeen
