Use Scrapy Items to Collect Data

Now we will learn how to write a Scrapy Item for Quotes. To do so, follow the steps mentioned below:

  • Open the items.py file. It is present at the same level as the ‘spiders’ folder. Mention the fields we need to extract in the file, as shown below:

Python3
# Define here the models for your scraped
# items
# Import the required library
import scrapy
 
# Define the fields for Scrapy item here
# in class
class GfgSpiderreadingitemsItem(scrapy.Item):
     
    # Item key for Title of Quote
    quotetitle = scrapy.Field()
     
    # Item key for Author of Quote
    author = scrapy.Field()
     
    # Item key for Tags of Quote
    tags = scrapy.Field()


As seen in the file above, we have defined one Scrapy Item called ‘GfgSpiderreadingitemsItem’. This class is our blueprint for all elements we will scrape. It persists three fields, namely the quote title, the author name, and the tags. The spider can now populate only the fields mentioned in this class; assigning any other key raises an error.

The Field() class is an alias for the built-in dict class. It provides a way to define all field metadata in one location; it does not add any extra attributes of its own.
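As a minimal, hypothetical sketch (the QuoteItem name below is illustrative, not part of this project), metadata such as a serializer can be stored inside Field() and read back through the class’s fields attribute:

Python3

# Minimal sketch: Field() is just a dict, so arbitrary
# metadata (read by other components, e.g. item exporters)
# can be attached to it
import scrapy

# Hypothetical item class, for illustration only
class QuoteItem(scrapy.Item):
    # 'serializer' here is metadata used by item exporters
    quotetitle = scrapy.Field(serializer=str)
    author = scrapy.Field()

# All declared fields and their metadata are available
# through the 'fields' class attribute
print(QuoteItem.fields['quotetitle'])  # {'serializer': <class 'str'>}
print(QuoteItem.fields['author'])      # {}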

Now modify the spider file to store the values in an object of the Item class, instead of yielding them directly. Please note that you need to import the Item class, as seen in the code below.

Python3
# Import the required library
import scrapy
 
# Import the Item class with fields
# mentioned in the items.py file
from ..items import GfgSpiderreadingitemsItem
 
 
class GfgSpiitemsreadSpider(scrapy.Spider):
    name = 'gfg_spiitemsread'
    # allowed_domains should list domain names only,
    # not URLs with paths
    allowed_domains = ['quotes.toscrape.com']
    start_urls = ['http://quotes.toscrape.com/tag/reading/']
 
    def parse(self, response):
       
        # Write XPath expression to loop through
        # all quotes
        quotes = response.xpath('//*[@class="quote"]')
         
        # Loop through all quotes
        for quote in quotes:
             
            # Create an object of Item class
            item = GfgSpiderreadingitemsItem()
             
            # XPath expression to fetch the text of the
            # quote. Store the title in the item object
            # under the 'quotetitle' key
            item['quotetitle'] = quote.xpath(
                './/*[@class="text"]/text()').extract_first()
             
            # XPath expression to fetch the author of the
            # quote. Store the author in the item object
            # under the 'author' key
            item['author'] = quote.xpath(
                './/*[@itemprop="author"]/text()').extract()
             
            # XPath expression to fetch the tags of the
            # quote. Store the tags in the item object
            # under the 'tags' key
            item['tags'] = quote.xpath(
                './/*[@itemprop="keywords"]/@content').extract()
             
            # Yield the item object
            yield item


 
 

As seen above, the keys mentioned in the Item class can now be used to collect the data scraped by the XPath expressions. Make sure you mention the exact key names in both places. For example, use item[‘author’] when ‘author’ is the key defined in the items.py file.
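To illustrate this, here is a hypothetical snippet (assuming the project package is named gfg_spiderreadingitems, which is not shown in this article); a populated Item behaves like a dictionary, and a misspelled key fails loudly instead of silently creating a new field:

Python3

# Hypothetical snippet; the package name
# 'gfg_spiderreadingitems' is an assumption
from gfg_spiderreadingitems.items import GfgSpiderreadingitemsItem

item = GfgSpiderreadingitemsItem()
item['quotetitle'] = 'A sample quote'

# Reading a declared key works like a dictionary lookup
print(item['quotetitle'])   # 'A sample quote'
print(dict(item))           # {'quotetitle': 'A sample quote'}

# A key that was never declared in items.py raises an error:
# item['title'] = 'oops'
# KeyError: 'GfgSpiderreadingitemsItem does not support field: title'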

 

The items yielded at the terminal are shown below:

[Image: Data extracted from webpage using Scrapy Items]
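This output is produced by running the spider from the project’s top-level directory (the one containing the scrapy.cfg file) with the command scrapy crawl gfg_spiitemsread. Optionally, appending -o quotes.json to the command writes the collected items to a JSON file through Scrapy’s feed exports.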

 



How to use Scrapy Items?

In this article, we will scrape Quotes data using Scrapy Items from the webpage https://quotes.toscrape.com/tag/reading/. The main objective of scraping is to prepare structured data from unstructured resources. Scrapy Items are wrappers around dictionary data structures, and code can be written such that the extracted data is returned as Item objects in the format of key-value pairs. Using Scrapy Items is beneficial when:

  • As the volume of scraped data increases, the data becomes irregular and harder to handle.
  • As your data gets more complex, it becomes vulnerable to typos and may at times contain faulty values.
  • Formatting of the scraped data is easier, as Item objects can be passed further to Item Pipelines.
  • Cleansing the data is easy if we scrape it as Items.
  • Validating data and handling missing fields is easier with Scrapy Items. A minimal pipeline sketch illustrating the last three points follows this list.
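To make the cleansing and validation points concrete, here is a minimal, hypothetical Item Pipeline sketch (the class name and the rules are illustrative, not part of this project; to take effect it would have to be enabled via the ITEM_PIPELINES setting in settings.py):

Python3

# Hypothetical pipeline: cleans the quote title and drops
# items that are missing an author
from itemadapter import ItemAdapter
from scrapy.exceptions import DropItem

class QuoteCleaningPipeline:
    def process_item(self, item, spider):
        adapter = ItemAdapter(item)

        # Cleansing: strip stray whitespace from the title
        if adapter.get('quotetitle'):
            adapter['quotetitle'] = adapter['quotetitle'].strip()

        # Validation: discard items with no author at all
        if not adapter.get('author'):
            raise DropItem(f'Missing author in {item!r}')

        return item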

Via the itemadapter library, Scrapy supports various Item Types. One can choose the Item Type they want. The supported Item Types are listed below; a short comparison sketch follows the list:

  • Dictionaries – Items can be written in the form of plain dictionary objects. They are convenient to use.
  • Item objects – They provide a dictionary-like API, wherein we need to declare the fields needed for the Item. An Item consists of the key-value pairs of the Field objects used while declaring the Item class. This tutorial uses Item objects.
  • Dataclass objects – They are used when you need to store the scraped values in JSON or CSV files. Here we need to define the datatype of each field needed.
  • attr.s – attr.s allows defining item classes with field names, so that scraped data can be exported to different file formats. It works like Dataclass objects, except that the attrs package needs to be installed.
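As a brief, illustrative comparison (all class names below are hypothetical), the same three fields used in this tutorial could be declared with each of the supported Item Types as follows:

Python3

# Hypothetical declarations of the same item using the
# different supported Item Types
from dataclasses import dataclass
from typing import Optional
import scrapy
import attr

# 1. Plain dictionary: no declaration needed; the spider
#    simply yields {'quotetitle': ..., 'author': ..., 'tags': ...}

# 2. Item object (the type used in this tutorial)
class QuoteScrapyItem(scrapy.Item):
    quotetitle = scrapy.Field()
    author = scrapy.Field()
    tags = scrapy.Field()

# 3. Dataclass object: each field carries a type hint
@dataclass
class QuoteDataclass:
    quotetitle: Optional[str] = None
    author: Optional[list] = None
    tags: Optional[list] = None

# 4. attr.s: requires the 'attrs' package (pip install attrs)
@attr.s
class QuoteAttrs:
    quotetitle = attr.ib(default=None)
    author = attr.ib(default=None)
    tags = attr.ib(default=None)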
