Crawler: Scraping and Downloading Tuchong Images with the Scrapy Framework

Date: 2022-12-21 09:28:13

items.py — define the data fields your crawl needs.

# -*- coding: utf-8 -*-

# Define here the models for your scraped items
#
# See documentation in:
# https://doc.scrapy.org/en/latest/topics/items.html
import scrapy


class TodayScrapyItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    pass


class TuchongItem(scrapy.Item):
    title = scrapy.Field()      # album title
    views = scrapy.Field()      # number of views
    favorites = scrapy.Field()  # number of favorites
    img_url = scrapy.Field()    # image URL

    # def get_insert_sql(self):
    #     # SQL statement used when persisting the item
    #     sql = 'insert into tuchong(title, views, favorites, img_url)' \
    #           ' VALUES (%s, %s, %s, %s)'
    #     # the data to store
    #     data = (self['title'], self['views'], self['favorites'], self['img_url'])
    #     return (sql, data)
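The commented-out get_insert_sql() points toward MySQL storage. As a hedged illustration (not part of the original post), the sketch below shows how a separate pipeline could consume that method once it is uncommented; the pymysql connection parameters and the tuchong table are assumptions to adapt to your own database:

# Minimal sketch, assuming get_insert_sql() is uncommented in TuchongItem
# and a MySQL table tuchong(title, views, favorites, img_url) exists.
import pymysql


class MysqlPipeline(object):
    def open_spider(self, spider):
        # placeholder connection parameters; adjust to your setup
        self.conn = pymysql.connect(host='localhost', user='root',
                                    password='root', db='spider',
                                    charset='utf8mb4')
        self.cursor = self.conn.cursor()

    def process_item(self, item, spider):
        sql, data = item.get_insert_sql()
        self.cursor.execute(sql, data)  # parameterized query, no manual escaping
        self.conn.commit()
        return item

    def close_spider(self, spider):
        self.cursor.close()
        self.conn.close()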

settings.py — configure the default request headers and register the item pipeline.

# -*- coding: utf-8 -*-

# Scrapy settings for today_scrapy project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
# https://doc.scrapy.org/en/latest/topics/settings.html
# https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
# https://doc.scrapy.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'today_scrapy'

SPIDER_MODULES = ['today_scrapy.spiders']
NEWSPIDER_MODULE = 'today_scrapy.spiders'

# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'today_scrapy (+http://www.yourdomain.com)'

# Obey robots.txt rules
ROBOTSTXT_OBEY = False

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See https://doc.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
#COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
DEFAULT_REQUEST_HEADERS = {
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    'Accept-Language': 'en',
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/68.0.3440.106 Safari/537.36',
}

# Enable or disable spider middlewares
# See https://doc.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'today_scrapy.middlewares.TodayScrapySpiderMiddleware': 543,
#}

# Enable or disable downloader middlewares
# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
#    'today_scrapy.middlewares.TodayScrapyDownloaderMiddleware': 543,
#}

# Enable or disable extensions
# See https://doc.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}

# Configure item pipelines
# See https://doc.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
    # 'today_scrapy.pipelines.TodayScrapyPipeline': 300,
    'today_scrapy.pipelines.TuchongPipeline': 200,
}

# Enable and configure the AutoThrottle extension (disabled by default)
# See https://doc.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
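If you would rather not change the project-wide defaults, the same headers can be scoped to a single spider. The following is a minimal sketch of Scrapy's standard custom_settings class attribute, not something the original post uses:

# Minimal sketch: per-spider settings override settings.py for this spider only.
class TuchongSpider(scrapy.Spider):
    name = 'tuchong'
    custom_settings = {
        'DEFAULT_REQUEST_HEADERS': {
            'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/68.0.3440.106 Safari/537.36',
        },
    }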

pipelines.py — download each image into a folder named after its album.

# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://doc.scrapy.org/en/latest/topics/item-pipeline.html
import os

import requests


class TodayScrapyPipeline(object):
    def process_item(self, item, spider):
        return item


class TuchongPipeline(object):
    def process_item(self, item, spider):
        img_url = item['img_url']  # image URL taken from the item
        img_title = item['title']  # album title, used as the folder name
        headers = {
            'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/68.0.3440.106 Safari/537.36',
            'cookie': 'webp_enabled=1; bad_ide7dfc0b0-b3b6-11e7-b58e-df773034efe4=78baed41-a870-11e8-b7fd-370d61367b46; _ga=GA1.2.1188216139.1535263387; _gid=GA1.2.1476686092.1535263387; PHPSESSID=4k7pb6hmkml8tjsbg0knii25n6'
        }
        # one folder per album
        if not os.path.exists(img_title):
            os.mkdir(img_title)
        filename = img_url.split('/')[-1]
        # the with statement closes the file; no explicit close() needed
        with open(img_title + '/' + filename, 'wb') as f:
            f.write(requests.get(img_url, headers=headers).content)
        return item
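Downloading inside process_item works, but each requests.get call blocks Scrapy's otherwise asynchronous crawl. As an alternative, here is a sketch using Scrapy's built-in ImagesPipeline; note this is not what the original post does, and it assumes IMAGES_STORE is set in settings.py and this class is registered in ITEM_PIPELINES in place of TuchongPipeline (ImagesPipeline also requires Pillow):

# Alternative sketch using Scrapy's built-in ImagesPipeline.
# Assumption: settings.py defines e.g. IMAGES_STORE = './images'.
import scrapy
from scrapy.pipelines.images import ImagesPipeline


class TuchongImagesPipeline(ImagesPipeline):
    def get_media_requests(self, item, info):
        # let Scrapy's downloader fetch the image; carry the album title along
        yield scrapy.Request(item['img_url'], meta={'title': item['title']})

    def file_path(self, request, response=None, info=None):
        # group images into one folder per album, keeping the original filename
        return '{}/{}'.format(request.meta['title'], request.url.split('/')[-1])

With this pipeline, deduplication, retries, and the file layout under IMAGES_STORE are handled by the framework.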

The spider file

tuchong.py

Each image URL can be assembled directly from the user_id and img_id returned by the API.

# -*- coding: utf-8 -*-
import json

import scrapy

from today_scrapy.items import TuchongItem


class TuchongSpider(scrapy.Spider):
    name = 'tuchong'
    allowed_domains = ['tuchong.com']
    start_urls = ['http://tuchong.com/']

    def start_requests(self):
        for pag in range(1, 20):
            # the tag segment of the URL ('自然', "nature") can be swapped for any other tag
            referer_url = 'https://tuchong.com/rest/tags/自然/posts?page={}&count=20'.format(pag)
            form_req = scrapy.Request(url=referer_url, callback=self.parse)
            form_req.headers['referer'] = referer_url
            yield form_req

    def parse(self, response):
        tuchong_info_html = json.loads(response.text)
        # print(tuchong_info_html)
        postList_c = len(tuchong_info_html['postList'])
        # print(postList_c)
        for c in range(postList_c):
            print(c)
            # print(tuchong_info_html['postList'][c])
            title = tuchong_info_html['postList'][c]['title']
            print('Album title: ' + title)
            views = tuchong_info_html['postList'][c]['views']
            print(str(views) + ' views')
            favorites = tuchong_info_html['postList'][c]['favorites']
            print('Favorites: ' + str(favorites))
            images_c = len(tuchong_info_html['postList'][c]['images'])
            for img_c in range(images_c):
                user_id = tuchong_info_html['postList'][c]['images'][img_c]['user_id']
                img_id = tuchong_info_html['postList'][c]['images'][img_c]['img_id']
                img_url = 'https://photo.tuchong.com/{}/f/{}.jpg'.format(user_id, img_id)
                item = TuchongItem()
                item['title'] = title
                item['views'] = views          # fill the remaining declared fields too
                item['favorites'] = favorites
                item['img_url'] = img_url
                # hand the item to the pipeline
                yield item
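Before running the whole project, the endpoint can be sanity-checked with plain requests. The snippet below is an illustrative standalone sketch (not part of the original post) that prints the same fields the spider extracts from one page of the tag feed:

# Standalone sketch: inspect one page of the JSON feed the spider consumes.
import json

import requests

url = 'https://tuchong.com/rest/tags/自然/posts?page=1&count=20'
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/68.0.3440.106 Safari/537.36',
    'referer': url,  # the spider sends the same referer header
}
data = json.loads(requests.get(url, headers=headers).text)
for post in data['postList']:
    print(post['title'], post['views'], post['favorites'], len(post['images']))

Once that looks right, the full crawl is launched from the project root with scrapy crawl tuchong.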
