Environment
Windows 8, Python 3.7, PyCharm
Walkthrough
1. Installing the Scrapy framework
Run the following in a cmd window:
pip install Scrapy
This completes the Scrapy installation.
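To confirm that the installation succeeded, you can check the installed version in the same cmd window:
scrapy version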
2. Creating a Scrapy project
In a cmd window, switch to the directory where you want the project to live; mine is C:\Users\Administrator\PycharmProjects\untitled\Tests\Scrapy
Run the command below to generate the JianShu project folder inside the current "Scrapy" directory.
scrapy startproject JianShu
The folder structure is as follows:
items.py: defines the items to be scraped
middlewares.py: defines the middlewares used during crawling
pipelines.py: defines the item pipelines
settings.py: the project configuration file
scrapy.cfg: the deployment configuration file for Scrapy
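For reference, the directory tree that startproject generates typically looks like this (the inner JianShu package holds the files listed above):
JianShu/
    scrapy.cfg
    JianShu/
        __init__.py
        items.py
        middlewares.py
        pipelines.py
        settings.py
        spiders/
            __init__.py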
3. Creating JianShuSpider
Run the following commands in cmd, one after another, to create a JianShuSpider.py file under the "JianShu/spiders" directory:
cd JianShu
scrapy genspider JianShuSpider JianShuSpider.toscrape.com
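The generated JianShuSpider.py is only a skeleton, roughly like the sketch below (the exact class name may differ slightly between Scrapy versions); it is replaced by the real spider code in step 5:
# -*- coding: utf-8 -*-
import scrapy

class JianshuspiderSpider(scrapy.Spider):
    name = 'JianShuSpider'
    allowed_domains = ['JianShuSpider.toscrape.com']
    start_urls = ['http://JianShuSpider.toscrape.com/']

    def parse(self, response):
        pass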
4. Defining the items to scrape
In items.py, declare the four pieces of information to collect from JianShu's hot topics: the topic title, description, article count, and follower count.
import scrapy
from scrapy.item import Item, Field

class JianshuItem(Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    title = Field()    # topic title
    content = Field()  # topic description
    article = Field()  # article count
    fans = Field()     # follower count
5. Writing the spider
JianShu's hot topics are loaded asynchronously. In the browser's developer tools, filter the Network panel by XHR to find the URL of the asynchronous request: https://www.jianshu.com/recommendations/collections?page=(1,2,3,4.....)&order_by=hot
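A quick way to confirm that this endpoint returns the topic cards (outside of Scrapy) is to request a single page directly. This is just a sketch: it assumes the requests library is installed, and the User-Agent string is only an example.
import requests

url = 'https://www.jianshu.com/recommendations/collections?page=2&order_by=hot'
headers = {'User-Agent': 'Mozilla/5.0'}
resp = requests.get(url, headers=headers)
print(resp.status_code)                # expect 200
print('collection-wrap' in resp.text)  # expect True: the topic cards are in the returned HTML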
Write the main program in JianShuSpider.py:
import scrapy
from scrapy.spiders import CrawlSpider
from scrapy.selector import Selector
from JianShu.items import JianshuItem
from scrapy.http import Request

class JianShu(CrawlSpider):
    name = 'JianShu'
    # must cover www.jianshu.com, otherwise the follow-up requests are filtered as offsite
    allowed_domains = ['www.jianshu.com']
    start_urls = ['https://www.jianshu.com/recommendations/collections?page=1&order_by=hot']

    def parse(self, response):
        # wrap the response in a Selector for XPath parsing
        selector = Selector(response)
        # each topic card sits in a div with class "collection-wrap"
        infos = selector.xpath('//div[@class="collection-wrap"]')
        for info in infos:
            item = JianshuItem()
            title = info.xpath('a[1]/h4/text()').extract()[0]
            content = info.xpath('a[1]/p/text()').extract()
            article = info.xpath('div/a/text()').extract()[0]
            fans = info.xpath('div/text()').extract()[0]
            # the description may be missing: use content[0] if it exists, otherwise ''
            if content:
                content = content[0]
            else:
                content = ''
            item['title'] = title
            item['content'] = content
            item['article'] = article
            item['fans'] = fans
            yield item
        # list comprehension that builds the URLs of the remaining pages
        urls = ['https://www.jianshu.com/recommendations/collections?page={0}&order_by=hot'.format(str(page)) for page in range(2, 37)]
        for url in urls:
            yield Request(url, callback=self.parse)
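Before running the full crawl, the XPath expressions above can be tried out interactively in Scrapy's shell, for example:
scrapy shell "https://www.jianshu.com/recommendations/collections?page=1&order_by=hot"
>>> response.xpath('//div[@class="collection-wrap"]/a[1]/h4/text()').extract()
If JianShu rejects Scrapy's default user agent, the USER_AGENT value from step 7 can be passed to the shell with -s USER_AGENT="...".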
6. Saving to MongoDB
Use an item pipeline to store the data in MongoDB. In pipelines.py, write:
import pymongo

class JianshuPipeline(object):
    def __init__(self):
        '''connect to MongoDB'''
        client = pymongo.MongoClient(host='localhost')
        db = client['test']
        jianshu = db["jianshu"]
        self.post = jianshu

    def process_item(self, item, spider):
        '''write the item to MongoDB'''
        info = dict(item)
        self.post.insert_one(info)
        return item
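After a crawl finishes, a quick way to confirm that the items landed in MongoDB is to query the same database and collection from a separate script. A minimal sketch, assuming pymongo 3.7+ (for count_documents):
import pymongo

client = pymongo.MongoClient(host='localhost')
jianshu = client['test']['jianshu']
print(jianshu.count_documents({}))  # number of stored topics
print(jianshu.find_one())           # one sample document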
7. Configuring settings.py
BOT_NAME = 'JianShu'
SPIDER_MODULES = ['JianShu.spiders']
NEWSPIDER_MODULE = 'JianShu.spiders'
# User-Agent copied from the request headers shown in the browser
USER_AGENT = 'Mozilla/5.0 (Windows NT 6.3; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/68.0.3440.106 Safari/537.36'
ROBOTSTXT_OBEY = True
# wait 5 seconds between requests
DOWNLOAD_DELAY = 5
# enable the item pipeline
ITEM_PIPELINES = {
'JianShu.pipelines.JianshuPipeline': 300,
}
8. Creating main.py
Create a main.py file in the JianShu project directory and add the following code:
from scrapy import cmdline
cmdline.execute('scrapy crawl JianShu'.split())
9. Running main.py
Before running, make sure the MongoDB service has been started; then run main.py and the scraped topics are written to the jianshu collection of the test database.
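If you only need a quick look at the scraped data without MongoDB, Scrapy's built-in feed export can also dump the items to a file; run the following from the project directory:
scrapy crawl JianShu -o jianshu.json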