Request and Response

Date: 2023-03-09 04:56:54

Request

The Request object is what we construct when the spider sends a request. Its parameters are as follows:

  • url: the URL to request

  • callback: specifies which function will handle the Response returned by this request.

  • method: the HTTP method. Defaults to GET; can be set to "GET", "POST", "PUT", etc., and the string must be uppercase.

  • headers: the headers sent with the request. Usually not needed. Typical contents look like this:

    • Host: media.readthedocs.org

    • User-Agent: Mozilla/5.0 (Windows NT 6.2; WOW64; rv:33.0) Gecko/20100101 Firefox/33.0

    • Accept: text/css,*/*;q=0.1

    • Accept-Language: zh-cn,zh;q=0.8,en-us;q=0.5,en;q=0.3

    • Accept-Encoding: gzip, deflate

    • Referer: http://scrapy-chs.readthedocs.org/zh_CN/0.24/

    • Cookie: _ga=GA1.2.1612165614.1415584110;

    • Connection: keep-alive

    • If-Modified-Since: Mon, 25 Aug 2014 21:59:35 GMT

    • Cache-Control: max-age=0

  • meta: used to pass data between different parse functions; it is a dict. The spider below shows an item being passed from parse to parse_detail via meta:

    # -*- coding: utf-8 -*-
    import scrapy
    from TencentHR.items import TencenthrItem

    class HrSpider(scrapy.Spider):
        name = 'hr'
        # allowed_domains = ['ddd']
        start_urls = ['https://hr.tencent.com/position.php']

        def parse(self, response):
            trs = response.xpath('//table[@class="tablelist"]/tr[@class="odd"] | //table[@class="tablelist"]/tr[@class="even"]')
            # print(len(trs))
            for tr in trs:
                items = TencenthrItem()
                detail_url = tr.xpath('./td/a/@href').extract()[0]
                items['position_name'] = tr.xpath('./td/a/text()').extract()[0]
                try:
                    items['position_type'] = tr.xpath('./td[2]/text()').extract()[0]
                except IndexError:
                    print("Position {} has no type, url: {}".format(items['position_name'], "https://hr.tencent.com/" + detail_url))
                    items['position_type'] = None
                items['position_num'] = tr.xpath('./td[3]/text()').extract()[0]
                items['publish_time'] = tr.xpath('./td[5]/text()').extract()[0]
                items['work_addr'] = tr.xpath('./td[4]/text()').extract()[0]

                detail_url = 'https://hr.tencent.com/' + detail_url
                # Pass the partially filled item on to parse_detail via meta
                yield scrapy.Request(detail_url,
                                     callback=self.parse_detail,
                                     meta={"items": items}
                                     )

            # "下一页" is the "next page" link text on the site
            next_url = response.xpath('//a[text()="下一页"]/@href').extract_first()
            if next_url:  # the last page has no "next page" link
                next_url = 'https://hr.tencent.com/' + next_url
                print(next_url)
                yield scrapy.Request(next_url,
                                     callback=self.parse
                                     )

        def parse_detail(self, response):
            items = response.meta['items']
            items["work_duty"] = response.xpath('//table[@class="tablelist textl"]/tr[3]//li/text()').extract()
            items["work_require"] = response.xpath('//table[@class="tablelist textl"]/tr[4]//li/text()').extract()
            yield items

  • encoding: the default 'utf-8' is fine.

  • dont_filter: tells the scheduler not to filter this request. Use it when you want to execute the same request multiple times, ignoring the duplicates filter. Defaults to False.

  • errback: specifies the function that handles request errors
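
Several of these parameters can be combined in one Request. A minimal sketch (the spider name, URL, and callbacks here are hypothetical, not from the original project):

    import scrapy

    class DemoSpider(scrapy.Spider):
        name = 'request_demo'

        def start_requests(self):
            yield scrapy.Request(
                'https://example.com/api',              # url: the address to request
                method='GET',                           # must be an uppercase string
                headers={'User-Agent': 'Mozilla/5.0'},
                meta={'page': 1},                       # dict handed to the callback
                dont_filter=True,                       # skip the duplicates filter
                callback=self.parse,                    # handles the Response
                errback=self.on_error,                  # handles DNS/timeout/HTTP errors
            )

        def parse(self, response):
            self.logger.info("page %s fetched: %s", response.meta['page'], response.url)

        def on_error(self, failure):
            # failure is a twisted Failure wrapping the original exception
            self.logger.error("request failed: %s", repr(failure))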

Response

Response attributes and methods you can call:

  • meta: the meta dict passed along from another parse function; it keeps data linked across multiple parse functions

  • encoding: the character encoding of the response

  • text: returns the body as a Unicode string

  • body: returns the body as bytes

  • xpath: run an XPath expression against the response to extract data

  • css: run a CSS selector against the response to extract data
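
A minimal sketch of these attributes inside a callback (the selectors here are hypothetical):

    def parse(self, response):
        print(response.encoding)     # e.g. 'utf-8'
        print(type(response.text))   # <class 'str'>, the decoded Unicode body
        print(type(response.body))   # <class 'bytes'>, the raw body
        print(response.meta)         # dict passed in from the requesting callback
        title = response.xpath('//title/text()').extract_first()
        links = response.css('a::attr(href)').extract()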

Sending a POST request

  • When we need to send a POST request, we use FormRequest, a subclass of Request. If the POST request should be sent at the very start of the crawl, override the start_requests(self) method in the spider class; the URLs in start_urls will then no longer be requested.

  • Example: logging in to Douban

    # -*- coding: utf-8 -*-
    import scrapy

    class TestSpider(scrapy.Spider):
        name = 'login'
        allowed_domains = ['www.douban.com']
        # start_urls = ['http://www.baidu.com/']

        def start_requests(self):
            login_url = "https://accounts.douban.com/j/mobile/login/basic"
            headers = {
                'Referer': 'https://accounts.douban.com/passport/login_popup?login_source=anony',
                'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.103 Safari/537.36'
            }
            formdata = {
                'ck': '',
                'name': 'username',      # placeholder: your Douban username
                'password': 'password',  # placeholder: your Douban password
                'remember': 'true',
                'ticket': ''
            }
            request = scrapy.FormRequest(login_url, callback=self.parse, formdata=formdata, headers=headers)
            yield request

        def parse(self, response):
            print(response.text)

  • The returned result shows that the login succeeded:

    {"status":"success","message":"success","description":"处理成功","payload":{"account_info":{"name":"仅此而已","weixin_binded":false,"phone":"手机号","avatar":{"medium":"https://img3.doubanio.com\/icon\/user_large.jpg","median":"https://img1.doubanio.com\/icon\/user_normal.jpg","large":"https://img3.doubanio.com\/icon\/user_large.jpg","raw":"https://img3.doubanio.com\/icon\/user_large.jpg","small":"https://img1.doubanio.com\/icon\/user_normal.jpg","icon":"https://img3.doubanio.com\/pics\/icon\/user_icon.jpg"},"id":"193317985","uid":"193317985"}}}

  • After logging in, request the personal homepage; we can now access pages that require being logged in:

    # -*- coding: utf-8 -*-
    import scrapy

    class TestSpider(scrapy.Spider):
        name = 'login'
        allowed_domains = ['www.douban.com']
        # start_urls = ['http://www.baidu.com/']

        def start_requests(self):
            login_url = "https://accounts.douban.com/j/mobile/login/basic"
            headers = {
                'Referer': 'https://accounts.douban.com/passport/login_popup?login_source=anony',
                'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.103 Safari/537.36'
            }
            formdata = {
                'ck': '',
                'name': 'username',      # placeholder: your Douban username
                'password': 'password',  # placeholder: your Douban password
                'remember': 'true',
                'ticket': ''
            }
            request = scrapy.FormRequest(login_url, callback=self.parse, formdata=formdata, headers=headers)
            yield request

        def parse(self, response):
            print(response.text)
            # After logging in, visit the personal homepage
            url = "https://www.douban.com/people/193317985/"
            yield scrapy.Request(url=url, callback=self.parse_detail)

        def parse_detail(self, response):
            print(response.text)