How To Follow Links With Python Scrapy?
In this article, we will use Scrapy to scrape data presented on linked webpages and collect it by following links. We will scrape data from the website "https://quotes.toscrape.com/".
Creating a Scrapy Project
Scrapy comes with an efficient command-line tool, also called the "Scrapy tool". Commands are used for different purposes and accept a different set of arguments and options. To write the spider code, we begin by creating a Scrapy project, by executing the following command at the terminal:
scrapy startproject gfg_spiderfollowlink
This should create a "gfg_spiderfollowlink" folder in your current directory. It contains "scrapy.cfg", which is the configuration file of the project. The folder structure is as shown below:
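For reference, the layout generated by the startproject command looks roughly like this (the exact files can differ slightly between Scrapy versions):

gfg_spiderfollowlink/
├── scrapy.cfg
└── gfg_spiderfollowlink/
    ├── __init__.py
    ├── items.py
    ├── middlewares.py
    ├── pipelines.py
    ├── settings.py
    └── spiders/
        └── __init__.py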
The folder contains items.py, middlewares.py, and other settings files, along with the "spiders" folder.
Keep the contents of the configuration files as they are currently.
Extracting Data from One Webpage
The code for web scraping is written in the spider code file. To create the spider file, we will make use of the "genspider" command. Please note that this command is executed at the same level where the scrapy.cfg file is present.

We are scraping all quotes present on "http://quotes.toscrape.com/". Hence, we will run the command as:
scrapy genspider gfg_spilink "quotes.toscrape.com"
The above command will create a spider file, "gfg_spilink.py", in the "spiders" folder. The default code for the same is as follows:
Python3
# Import the required libraries
import scrapy


# Spider class name
class GfgSpilinkSpider(scrapy.Spider):
    # Name of the spider
    name = 'gfg_spilink'
    # The domain to be scraped
    allowed_domains = ['quotes.toscrape.com']
    # The URLs to be scraped from the domain
    start_urls = ['http://quotes.toscrape.com/']

    # Default callback method
    def parse(self, response):
        pass
We will scrape the title, author, and tags of all quotes on the website "quotes.toscrape.com". The website landing page looks as shown below:

Scrapy provides us with Selectors, to "select" the desired parts of a webpage. Selectors are CSS or XPath expressions, written to extract data from HTML documents. In this tutorial, we will make use of XPath expressions to select the details we need.
Let us understand the steps for writing the selector syntax in the spider code:
- Firstly, we will write the code in the parse() method. This is the default callback method present in the spider class, responsible for processing the response received. The data extraction code, using Selectors, will be written here.
- For writing the XPath expressions, we will select an element on the webpage, right-click it, and choose the Inspect option. This will allow us to view its CSS attributes.
- When we right-click on the first quote and choose Inspect, we can see that it has the CSS "class" attribute "quote". Similarly, all the other quotes on the webpage have the same CSS "class" attribute. It can be seen below:

Hence, the XPath expression for the same can be written as quotes = response.xpath('//*[@class="quote"]'). This syntax will fetch all elements having "quote" as their CSS "class" attribute. The quotes present on further pages have the same CSS attribute. For example, the quotes present on Page 3 of the website belong to the same "class" attribute, as shown below:
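Before hard-coding the expression into the spider, it can be tried out interactively in the Scrapy shell. Below is a minimal sketch of such a session (the count shown is illustrative; this site serves ten quotes per page):

scrapy shell "http://quotes.toscrape.com/"
>>> quotes = response.xpath('//*[@class="quote"]')
>>> len(quotes)
10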
We need to fetch the title, author, and tags of all the quotes. Hence, we will write XPath expressions for extracting them in a loop.
- The CSS "class" attribute for the quote title is "text". Hence, the XPath expression for the same would be quote.xpath('.//*[@class="text"]/text()').extract_first(). The text() selector extracts the text of the quote title, and the extract_first() method gives the first matching value. The dot operator "." at the start indicates that we are extracting data from a single quote element.
- Both the "class" and "itemprop" attributes of the author element are "author". We can use either of these in the XPath expression. The syntax would be quote.xpath('.//*[@itemprop="author"]/text()').extract(). This will extract the author name, where the "itemprop" attribute is "author".
- Both the "class" and "itemprop" attributes of the tags element are "keywords". We can use either of these in the XPath expression. Since there are many tags for any quote, looping through them would be tedious. Hence, we will extract the "content" attribute from every quote, which holds all of its tags. The XPath expression for the same is quote.xpath('.//*[@itemprop="keywords"]/@content').extract(). This will extract all tag values from the "content" attribute of the quotes.
- We use the "yield" keyword to return the data. We can collect and transfer data to CSV, JSON, and other file formats by using "yield".
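As an aside, recent Scrapy versions recommend get() and getall() as more readable aliases for extract_first() and extract(); the expressions above could equally be written as in the sketch below:

Python3

# The same selector calls using the newer get()/getall() aliases
title = quote.xpath('.//*[@class="text"]/text()').get()
authors = quote.xpath('.//*[@itemprop="author"]/text()').getall()
tags = quote.xpath('.//*[@itemprop="keywords"]/@content').getall()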
If we observe the code till here, it will crawl and extract data for one webpage. The code is as follows:
Python3
# Import the required libraries
import scrapy


# Spider class name
class GfgSpilinkSpider(scrapy.Spider):
    # Name of the spider
    name = 'gfg_spilink'
    # The domain to be scraped
    allowed_domains = ['quotes.toscrape.com']
    # The URLs to be scraped from the domain
    start_urls = ['http://quotes.toscrape.com/']

    # Default callback method
    def parse(self, response):
        # All quotes have the CSS 'class' attribute 'quote'
        quotes = response.xpath('//*[@class="quote"]')
        # Loop through the quote selectors
        # to fetch data for every quote
        for quote in quotes:
            # XPath expression to fetch the text of the quote title;
            # note the 'dot' operator, since we are extracting
            # from a single 'quote' element
            title = quote.xpath('.//*[@class="text"]/text()').extract_first()
            # XPath expression to fetch the author of the quote
            authors = quote.xpath('.//*[@itemprop="author"]/text()').extract()
            # XPath expression to fetch the tags of the quote
            tags = quote.xpath('.//*[@itemprop="keywords"]/@content').extract()
            # Yield the desired data
            yield {"Quote Text": title, "Authors": authors, "Tags": tags}
Following Links
Till now, we have seen the code to extract data from a single webpage. Our final aim is to fetch the quote-related data from all the webpages. To do so, we need to make our spider follow links, so that it can navigate to the subsequent pages. Hyperlinks are usually defined by writing <a> tags. The "href" attribute of an <a> tag indicates the link's destination. We need to extract the "href" attribute to traverse from one page to another. Let us study how to implement the same:

- To traverse to the next page, check the CSS attribute of the "Next" hyperlink.

We need to extract the "href" attribute of the <a> tag. The "href" attribute denotes the URL of the page the link goes to. Hence, we need to fetch it and join it to our current path, so that the spider can navigate to further pages seamlessly. For the first page, the "href" value of the <a> tag is "/page/2", which means it links to the second page.

If you click and observe the "Next" link of the second webpage, it has the CSS "class" attribute "next". For this page, the "href" value of the <a> tag is "/page/3", which means it links to the third page, and so on.

Hence, the XPath expression for the next page link can be written as further_page_url = response.xpath('//*[@class="next"]/a/@href').extract_first(). This will give us the value of "@href", which is "/page/2" for the first page.

The URL above is not sufficient to make the spider crawl to the next page. We need to form an absolute URL by merging the response object's URL with the above relative URL. To do so, we will use the urljoin() method.

The response object's URL is "https://quotes.toscrape.com/". To travel to the next page, we need to join it with the relative URL "/page/2". The syntax for the same is complete_url_next_page = response.urljoin(further_page_url). This will give us the complete path "https://quotes.toscrape.com/page/2/". Similarly, for each further page, the path changes according to the page number, to "https://quotes.toscrape.com/page/3/" and so on.
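To see why this works, the joining behaviour can be sketched with Python's standard urllib.parse.urljoin(), which is what response.urljoin() calls internally, with response.url as the base:

Python3

# A minimal sketch of how relative next-page links resolve to
# absolute URLs; response.urljoin(url) is equivalent to
# urljoin(response.url, url)
from urllib.parse import urljoin

base = 'https://quotes.toscrape.com/'
print(urljoin(base, '/page/2/'))  # https://quotes.toscrape.com/page/2/
print(urljoin(base, '/page/3/'))  # https://quotes.toscrape.com/page/3/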
The parse method will now make a new request using this complete_url_next_page URL.

Hence, our final Request object, for navigating to the second page and crawling it, will be yield scrapy.Request(complete_url_next_page). The complete code of the spider is as follows:
Python3
# Import the required libraries
import scrapy


# Spider class name
class GfgSpilinkSpider(scrapy.Spider):
    # Name of the spider
    name = 'gfg_spilink'
    # The domain to be scraped
    allowed_domains = ['quotes.toscrape.com']
    # The URLs to be scraped from the domain
    start_urls = ['http://quotes.toscrape.com/']

    # Default callback method
    def parse(self, response):
        quotes = response.xpath('//*[@class="quote"]')
        for quote in quotes:
            # XPath expression to fetch the text of the quote title
            title = quote.xpath('.//*[@class="text"]/text()').extract_first()
            # XPath expression to fetch the author of the quote
            authors = quote.xpath('.//*[@itemprop="author"]/text()').extract()
            # XPath expression to fetch the tags of the quote
            tags = quote.xpath('.//*[@itemprop="keywords"]/@content').extract()
            yield {"Quote Text": title, "Authors": authors, "Tags": tags}

        # Check the CSS attribute of the "Next" hyperlink
        # and extract its "href" value
        further_page_url = response.xpath(
            '//*[@class="next"]/a/@href').extract_first()
        # On the last page there is no "Next" link, so
        # extract_first() returns None and we stop following
        if further_page_url is not None:
            # Join the "href" value to the current page URL
            # to form the complete URL of the next page
            complete_url_next_page = response.urljoin(further_page_url)
            # Make the spider crawl to the next page and extract
            # the same data; a new Request with the URL is made
            yield scrapy.Request(complete_url_next_page)
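As a side note, Scrapy 1.4 and later also provide response.follow(), which accepts a relative URL directly and performs the urljoin step internally; the link-following portion at the end of parse() could equally be written as the sketch below:

Python3

# Equivalent link-following with response.follow(), which joins
# the relative URL against response.url internally
next_href = response.xpath('//*[@class="next"]/a/@href').extract_first()
if next_href is not None:
    yield response.follow(next_href, callback=self.parse)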
Execute the spider at the terminal by using the "crawl" command. The syntax is as follows: scrapy crawl spider_name. Hence, we can run our spider as scrapy crawl gfg_spilink. It will crawl the entire website by following links and yield the quotes data. The output is as seen below:

If we check the spider's output statistics, we can see that the spider has crawled over ten webpages by following the links. Also, the number of quotes is close to 100.

We can collect the data in any file format for storage or analysis. To collect it in a JSON file, we can mention the filename in the "crawl" syntax as follows:
scrapy crawl gfg_spilink -o spiderlinks.json
The above command will collect the entire scraped quotes data in a JSON file, "spiderlinks.json". The file contents are as seen below:
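Alternatively, the export can be configured once in the project's settings.py instead of on the command line; a minimal sketch using the FEEDS setting (available in Scrapy 2.1 and later) is:

Python3

# settings.py: equivalent feed export configuration,
# assuming Scrapy >= 2.1 (which introduced the FEEDS setting)
FEEDS = {
    'spiderlinks.json': {'format': 'json'},
}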