The following tutorial is meant for educational purposes and introduces the basics of web scraping and of using Smartproxy for it. We suggest researching the Requests and BeautifulSoup documentation in order to build upon the given example.
To run our example scraper, you are going to need these libraries: Requests and BeautifulSoup 4.
If you're here, that means you are interested in finding out more about how to scrape and enjoy all the data that you gather. However, before we dive in, we first need to understand what web scraping is. In general terms, scraping is the process of acquiring a web page with all of its information and then extracting selected fields for further processing. Usually, the purpose of gathering that information is so that a person can easily monitor it. Some examples could be reviews, prices, weather reports, billboard hits, and so on.
Just as you are polite and caring in the real world, you should be so online as well. Before you start scraping, make sure that the website you're targeting allows it. You can do that by checking its robots.txt file. If the site doesn't allow crawling or scraping of its content, be kind and respect the owner's wishes. Failing to do so might get your IP blocked or even lead to legal action taken against you, so be wary. Moreover, check if the site you're targeting has an API. If it does, just use that: it will be easier to get the needed data, and you won't put unnecessary load on the site's infrastructure.
In the following tutorial, you will not only see how a basic scraper is written but will also learn how to adjust it to your own needs. Moreover, you will learn how to do it via a proxy!
As mentioned, we will be using these libraries: Requests and BeautifulSoup 4. The page we're going to scrape is http://books.toscrape.com/. It doesn't have a robots.txt, but I think we can agree that the name of the site is asking you to scrape it. Before we carry on with the coding part, though, let's inspect the website first.
So, this is what the main page of the website looks like. We can see it contains books, their titles, prices, ratings, availability information, and a list of genres in the sidebar.
When we select a specific book, we are greeted with even more information, such as its description, how many are in stock, the number of reviews, etc.
Great! Now we can think about what information we'd like to extract from this site. Generally, when scraping, we want to get valuable information which we could use later on. In this example, the most important points would be the price and the title of the book, so we could, for example, make a comparison with books on another website. We can also extract the direct link to a book, so it would be easier to reach later on. Finally, it would be great to know if the book is even available. As a finishing touch, we can scrape its description as well: perhaps it might catch your eye and you'll read it.
So, now that we know exactly what we want to get from the site, we can go on and inspect those elements to see where they can be found later. Just a note: you don't need to memorise everything now; when scraping, you'll have to go back to the HTML code numerous times. Let's have a look at the site's code and inspect the elements we need. To do so, just right-click anywhere on the site with your mouse and select "Inspect".
Once you do that, a gazillion things will open, but don't worry: we don't need to go through all of them. After a quick inspection, we can see that all the data we need is located in an article element with the class name product_pod. The same goes for all the other books as well.
This means that all the data we need will be nested in that article element. Now, let's inspect the price. We can see that the price value is the text of the price_color paragraph. And if we inspect In stock, we can see that it is the text value of the instock availability paragraph. Now go on and get familiar with the rest of the elements we'll be scraping. Once you're done, it's time to get coding and turn our data extraction wishes into reality.
First off, let's import the libraries we'll be using:
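A minimal sketch of those imports (if you don't have the libraries yet, pip install requests beautifulsoup4 will pull both in):

```python
# Install first if needed: pip install requests beautifulsoup4
import requests
from bs4 import BeautifulSoup
```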
We'll need the Requests library to send HTTP requests and BeautifulSoup to parse the responses we receive from the website. Go ahead and import them. Then, we'll need to send a GET request to retrieve the contents of the site. Let's assign the response to the variable r.
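That call might look like this (url and proxies are the two variables we define next):

```python
# Fetch the page through the proxy
r = requests.get(url, proxies=proxies)
```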
The requests.get function has only one required argument, which is the URL of the site you are targeting. However, because we wish to use a proxy to reach the content, we need to pass in an additional proxies parameter. As you can see, both values are already assigned to variables, so let's have a look at them.
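Here's a sketch of those two variables; the username, password, and endpoint below are placeholders for your own Smartproxy credentials and gateway:

```python
# Placeholders: substitute your Smartproxy username, password, and endpoint
proxies = {
    'http': 'http://username:password@endpoint:port',
}

# The address of the site we wish to scrape
url = 'http://books.toscrape.com/'
```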
For the proxy, we first need to specify its kind, in this case, HTTP. Then, we have to enter the Smartproxy user's username and password, separated by a colon, as well as the endpoint which we'll be using to connect to the proxy server. And, well, the url is the address of the site we wish to scrape.
At the moment, the variable r holds the full response data from the website, including the status code, headers, the URL itself, and, most importantly, the content we need. You can print it out with print(r.content), and you'll see that it's the HTML code of the site you inspected previously. However, this time it's on your device! (Except that it's awkwardly formatted and unreadable, but we'll fix that.)
To start working with the HTML code, we first need to parse it with BeautifulSoup, that is, make a parse tree which we can use to extract the necessary information. Let's create a variable called html. We'll use it to store the parsed r.content. To parse the HTML code, we just need to call the BeautifulSoup class and pass in the content and "html.parser" ('cause, you know, we are parsing HTML content here) as arguments. Try printing it out!
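For example:

```python
# Build a parse tree from the raw HTML we downloaded
html = BeautifulSoup(r.content, 'html.parser')
print(html.prettify())
```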
If you noticed in the snippet above, I used prettify(). It's a method that comes with BeautifulSoup, and it makes the HTML even more understandable by adding indents and things like that.
As we found out earlier, all the data we need can be found in the product_pod articles. So, to make our lives easier, we can collect and work only with them. This way, we won't need to parse all of the site's HTML each time we want to get any data about a book. To do so, we can use one of BeautifulSoup's methods, called find_all(); it will find all instances of the specified content.
So, in our case, we need to find and assign all articles with the product_pod class to a variable. Let's call it all_books. Now we need to parse through the html variable which we created earlier and which holds the entire HTML of the site. We'll use the find_all() method to do so. As arguments for the find_all() method, we need to pass in two attributes: "article", which is the tag of the content, and the class product_pod. Please note that because class is a Python keyword, it can't be used as an argument name, so you need to add a trailing underscore. Here's how it should look:
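```python
# Every book on the page lives in an <article class="product_pod"> element
all_books = html.find_all('article', class_='product_pod')
```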
Now, if you print out all_books, you'll see that it contains a list of all the product_pod articles found on the page.
We've narrowed down the HTML to as much as we need. Now we can start gathering data about the books. Because all_books is a list containing all the necessary information about each book on the page, we'll need to cycle through it using a for loop. Like this:
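```python
for book in all_books:
    # on each iteration, book holds one product_pod article
    pass  # we'll fill this in step by step below
```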
book is just a variable we created which we'll be calling to get specific information in each loop. You can name it however you wish, but in our case, book is exactly what we are working with on each iteration of the loop through the all_books list. Remember that we want to find the title, price, availability, description, and the link to each book. Let's get started!
When inspecting the site, we can see that the title is located in the h3 element, which is the only one in the product_pod article we're working with.
BeautifulSoup allows you to find a specific element very easily, by just specifying the HTML tags. To find the title, all we need to write is this:
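```python
# Inside the for loop: the <a> inside the <h3> carries the full title
title = book.h3.a['title']
```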
Once again, book is just the current iteration of the product_pod article, so we can just add .h3 and .a to specify which component's data we want to assign to the title. We could also add .text to get the text value of book.h3.a, which is indeed the title; but, if you noticed, longer titles are cut off and have "..." at the end for styling purposes. That's not really what we need. Instead, we need to get the value of the a element's title attribute, which can be done by just adding 'title' in the square brackets.
If we run print(title), you'll see that we have successfully extracted all the titles of the books on the page.
Some elements are not as easy to extract. They may be located in paragraphs nested in other paragraphs, further nested in other div containers. In such cases, it's easier to use the find() method. It's very similar to the find_all() method; however, it only returns the first matching element, and in our case, that's exactly what we need. To find the price, we want to find the paragraph with the price_color class and extract its text.
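Something like this, chaining .text to grab the paragraph's contents:

```python
# The price is the text of the <p class="price_color"> element
price = book.find('p', class_='price_color').text
```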
To find out if the book is in stock, we need to do the same thing we did with the price, simply specifying a different paragraph: the one with the instock availability class. If you were to print out the availability just like that, you'd see a lot of blank lines; it's just the way the site's HTML is styled. To combat that, we can use a simple Python method called strip(), which removes any leading and trailing whitespace (including those blank lines) from the string. If you've done everything correctly, it should look like this:
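```python
# strip() trims the surrounding whitespace and newlines from the text
availability = book.find('p', class_='instock availability').text.strip()
```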
Furthermore, we need to get the description of the book. The problem is that it's located on another page, dedicated to the specific book. First, we need to get the link to said book and make another HTTP request to retrieve the description. While inspecting, you'll see that the link occupies the same place as the title. You can create a new variable, copy the command you used for the title, and just change the value in the square brackets to 'href', as that's what we're looking for there.
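For instance:

```python
# The same <a> element's href attribute holds the link to the book's page
link_to_book = book.h3.a['href']
```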
But, if you print out link_to_book, you'll see that it contains only a part of the link: the location of where the book can be found on the site, but not the domain. One easy way to solve this is to assign the website's domain to a variable and prepend it to link_to_book, like this:
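```python
# Prepend the domain; on this page the hrefs are relative paths,
# so simple concatenation yields the full URL
base_url = 'http://books.toscrape.com/'
link = base_url + link_to_book
```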
Boom! Now you have the complete link, which we can use to extract the book's description. To get the description, we need to make another request inside the for loop, so we get one for each of the books. Basically, we need to do the same thing we did in the beginning: send a GET request to the link and parse the HTML response with BeautifulSoup.
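A sketch of that second request (r2 and book_html are simply the names used here for the new response and its parse tree):

```python
# Fetch the individual book's page through the same proxy and parse it
r2 = requests.get(link, proxies=proxies)
book_html = BeautifulSoup(r2.content, 'html.parser')
```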
When inspecting the HTML of a book's page, we can see that the description is just plain text stored in a paragraph. However, this paragraph is not the first one in the product_page article and does not have a class. If we just try to use find() without any additional parameters, it will return the price, because that's the value located in the very first paragraph.
In such a case, when using the find() method, we need to state that the paragraph we're looking for has no class (no sass intended). We can do so by specifying that class_ equals None:
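```python
# class_=None matches the first <p> that has no class attribute at all
description = book_html.find('p', class_=None)
```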
And, of course, because we just want to get the text value, we add .text at the very end. That's it! We've gathered all the information that we needed. We can now print it all out and check what we've got. Just a quick note: because the description might be quite long, you can trim it by adding [:x], where x is the number of characters you want to print. Some Python tricks for you!
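Putting the last pieces together, the end of the loop might look something like this (the 150-character trim is just an example):

```python
description = book_html.find('p', class_=None).text

print(title)
print(price)
print(availability)
print(link)
print(description[:150])  # trim long descriptions, here to 150 characters
```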
And the output we get is just beautiful.
To conclude, I would just like to note that there really are a thousand ways to get the data you need, using different functions, loops, and so on. But we sure hope that by the end of this article, you have a better idea of what, when, and how to scrape, and how to do it with proxies!