Spider Code to Extract Data

The code for web scraping is written in the spider file. To create the spider file, we will make use of the ‘genspider’ command. Please note that this command is executed at the same level where the scrapy.cfg file is present.

We are scraping the reading quotes present on the https://quotes.toscrape.com/tag/reading/ webpage. Hence, we will run the command as –

scrapy genspider spider_name url_to_be_scraped

Use ‘genspider’ command to create Spider file
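For this tutorial, the concrete invocation would look roughly like the following; the spider name matches the file created below and the URL is the page being scraped:

scrapy genspider gfg_spiitemsread quotes.toscrape.com/tag/reading/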

The above command will create a spider file, “gfg_spiitemsread.py”, in the ‘spiders’ folder. The spider name will also be ‘gfg_spiitemsread’. The default code generated for it is as follows:

Python3
# Import the required libraries
import scrapy
 
# Spider Class Created
 
 
class GfgSpiitemsreadSpider(scrapy.Spider):
    # Name of the spider
    name = 'gfg_spiitemsread'
    # The domain to be scraped
    allowed_domains = ['quotes.toscrape.com/tag/reading/']
    # The URLs from domain to scrape
    start_urls = ['http://quotes.toscrape.com/tag/reading//']
 
    # Spider default callback function
    def parse(self, response):
        pass


We will scrape the Quote Title, Author and Tags from the webpage https://quotes.toscrape.com/tag/reading/. Scrapy provides us with Selectors to “select” the desired parts of the webpage. Selectors are CSS or XPath expressions written to extract data from HTML documents. In this tutorial, we will make use of XPath expressions to select the details we need. Let us understand the steps for writing the selector syntax in the spider code.

  • The default callback method present in the spider class, responsible for processing the response received, is the parse() method. We will write the selectors with XPath expressions, responsible for data extraction, here.
  • To select an element to be extracted on the webpage, right-click it and choose the Inspect option. This allows us to view its CSS attributes.
  • When we right-click on the first Quote and choose Inspect, we can see that it has the CSS ‘class’ attribute “quote”. Similarly, all the quotes on the webpage have the CSS ‘class’ attribute “quote”, as seen below:

Right-click the first quote and check its CSS “class” attribute

Based on this, the XPath expressions for the data can be written as –

  • quotes = response.xpath('//*[@class="quote"]'). This syntax will fetch all elements having “quote” as the CSS ‘class’ attribute.
  • We will fetch the Quote Title, Author and Tags of all the Quotes. Hence, we will write XPath expressions for extracting them in a loop. For the Quote Title, the CSS ‘class’ attribute is “text”. Hence, the XPath expression for it is quote.xpath('.//*[@class="text"]/text()').extract_first(). The text() method extracts the text of the Quote title, and extract_first() gives the first matching value. The dot operator ‘.’ at the start indicates that we are extracting data from within a single quote element.
  • Similarly, both the “class” and “itemprop” CSS attributes of the author element are “author”. We can use either of them in the XPath expression. The syntax is quote.xpath('.//*[@itemprop="author"]/text()').extract(). This extracts the author name, where the CSS ‘itemprop’ attribute is ‘author’.
  • Both the “class” and “itemprop” CSS attributes of the tags element are “keywords”. We can use either of them in the XPath expression. Since any quote can have many tags, looping through them would be complex. Hence, we will extract the CSS attribute “content” from every quote. The XPath expression for the same is quote.xpath('.//*[@itemprop="keywords"]/@content').extract(). This extracts all tag values from the “content” attribute of each quote.
  • We use the ‘yield’ keyword to return the data. With ‘yield’, we can collect the data and transfer it to CSV, JSON and other file formats.
  • If we observe the code till here, it will crawl the webpage and extract the data. The XPath expressions can also be tried out interactively before they go into the spider, as shown in the sketch after this list.
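One convenient way to verify these XPath expressions is Scrapy’s interactive shell. The session below is only a sketch of such a check for this page; the variable names are arbitrary:

scrapy shell "https://quotes.toscrape.com/tag/reading/"

Python3

# Inside the shell, the downloaded page is already available as 'response'
quotes = response.xpath('//*[@class="quote"]')

# Inspect the first quote element
quotes[0].xpath('.//*[@class="text"]/text()').extract_first()
quotes[0].xpath('.//*[@itemprop="author"]/text()').extract()
quotes[0].xpath('.//*[@itemprop="keywords"]/@content').extract()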

The code is as follows:

Python3
# Import the required library
import scrapy
 
# The Spider class
class GfgSpiitemsreadSpider(scrapy.Spider):
    # Name of the spider
    name = 'gfg_spiitemsread'
     
    # The domain allowed to scrape
    allowed_domains = ['quotes.toscrape.com/tag/reading']
     
    # The URL to be scraped
    start_urls = ['http://quotes.toscrape.com/tag/reading/']
     
    # Default callback function
    def parse(self, response):
         
        # Fetch all the quote elements on the page
        quotes = response.xpath('//*[@class="quote"]')
         
        # Loop through the Quote selector elements
        # to get details of each
        for quote in quotes:
             
            # XPath expression to fetch text of the Quote title
            title = quote.xpath('.//*[@class="text"]/text()').extract_first()
             
            # XPath expression to fetch author of the Quote
            authors = quote.xpath('.//*[@itemprop="author"]/text()').extract()
             
            # XPath expression to fetch Tags of the Quote
            tags = quote.xpath('.//*[@itemprop="keywords"]/@content').extract()
             
            # Yield all elements
            yield {"Quote Text ": title, "Authors ": authors, "Tags ": tags}


The crawl command is used to run the spider. Mention the spider name (not the file name) in the crawl command. If we run the above code using the crawl command, the output at the terminal would be:

scrapy crawl gfg_spiitemsread

Output:

Quotes scraped as shown by the ‘yield’ statement

Here, the yield statement returns the data as Python dictionary objects.
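Because the spider yields plain dictionary objects, Scrapy’s built-in feed exports can write them directly to a file with the -o option of the crawl command. The file names below are only placeholders:

scrapy crawl gfg_spiitemsread -o quotes_data.json
scrapy crawl gfg_spiitemsread -o quotes_data.csv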

Understanding Python Dictionary and Scrapy Item

The data yielded above are Python dictionary objects. The advantages of using them are –

  • They are convenient, easy-to-handle key-value structures when the data size is small.
  • Use them when no further processing or formatting of the scraped data is required.
  • Use a dictionary when the data you want to scrape is simple and complete.

For using Item objects, we will make changes in the following files –

  • The items.py file of the project.
  • The current spider class generated, i.e. the gfg_spiitemsread.py file.

How to use Scrapy Items?

In this article, we will scrape the Quotes data from the webpage https://quotes.toscrape.com/tag/reading/ using Scrapy Items. The main objective of scraping is to prepare structured data from unstructured resources. Scrapy Items are wrappers around the dictionary data structures. Code can be written such that the extracted data is returned as Item objects, in the format of “key-value” pairs. Using Scrapy Items is beneficial when –

  • As the scraped data volume increases, plain dictionaries become harder to keep consistent.
  • As your data gets complex, it is vulnerable to typos and may at times return faulty data.
  • Formatting of the scraped data is easier, as Item objects can be passed on to Item Pipelines.
  • Cleansing the data is easy if we scrape it as Items.
  • Validating data and handling missing data are easier with Scrapy Items; a minimal pipeline sketch follows this list.
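As a rough illustration of the cleansing and validation points above, the pipeline below is a minimal sketch. The class name and the 'quote_title' field are assumptions rather than part of the project generated so far, and the pipeline would still need to be enabled through the ITEM_PIPELINES setting in settings.py:

Python3

# A hypothetical Item Pipeline that cleans and validates scraped items
from itemadapter import ItemAdapter
from scrapy.exceptions import DropItem


class QuoteCleaningPipeline:
    def process_item(self, item, spider):
        # ItemAdapter gives a common interface for dicts, Items, dataclasses, etc.
        adapter = ItemAdapter(item)

        # Validation: discard items that are missing the quote text (assumed field name)
        if not adapter.get('quote_title'):
            raise DropItem("Missing quote text")

        # Cleansing: strip surrounding whitespace from the quote text
        adapter['quote_title'] = adapter['quote_title'].strip()

        return item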

Via the itemadapter library, Scrapy supports various Item Types. One can choose whichever Item type they want. The following Item Types are supported:

  • Dictionaries – Items can be written in the form of dictionary objects. They are convenient to use.
  • Item objects – They provide a dictionary-like API, wherein we need to declare the fields needed for the Item. Declaring the Item class consists of creating key-value pairs of Field objects. In this tutorial, we are using Item objects; a sketch of such a declaration follows this list.
  • Dataclass objects – They are used when you need to store the scraped values in JSON or CSV files. Here we need to define the datatype of each field needed.
  • attr.s – attr.s allows defining item classes with field names, so that scraped data can be imported to different file formats. They work similarly to Dataclass objects, except that the attrs package needs to be installed.
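To make the Item objects option concrete, the snippet below is a sketch of how the project’s items.py might declare an Item for the quotes data. The class name QuoteItem and the field names are illustrative assumptions; the actual declaration is settled when the spider is updated to use Items:

Python3

# items.py - a hypothetical Item declaration for the quotes data
import scrapy


class QuoteItem(scrapy.Item):
    # Each field is declared as a scrapy.Field object
    quote_title = scrapy.Field()
    author = scrapy.Field()
    tags = scrapy.Field()

In the spider’s parse() method, the extracted values would then be assigned to such an Item instance, for example item['quote_title'] = title, and yielded in place of the plain dictionary.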
