Making Requests using Scrapy
When sending requests with Scrapy, we create a Request object, which represents an HTTP request to be sent to a website. To understand requests in Scrapy, consider the following example.
The code consists of a class named `MySpider` with two methods: `start_requests`, which makes a Scrapy request to the provided URL, and `parse`, which is called when a response is received. `parse` is normally used to process the response; here we simply print it.
Python
# performing a scrapy request to get the data from the website
import scrapy

class MySpider(scrapy.Spider):
    name = 'scrapy_example'

    def start_requests(self):
        yield scrapy.Request(url='http://www.example.com', callback=self.parse)

    def parse(self, response):
        # Process the response here
        print(response.body)
To run the code, use the following command:
$ scrapy runspider .\<script_filename>.py
Once the script is executed, it makes an HTTP request to the URL mentioned, and after getting a response it prints the content of the response body as follows.
The first image shows the printed response body, and in the second one we can see all the stats related to this particular Scrapy request.
Scrapy – Requests and Responses
In this article, we will explore the Request and Response functionality of Scrapy through a demonstration in which we scrape some data from a website using a Scrapy request and process the scraped data from the Scrapy response.