├── show-images
│   ├── 1.png
│   ├── 2.png
│   ├── 3.png
│   └── 4.png
├── README.md
└── 妹子图.py

/README.md:
--------------------------------------------------------------------------------
# meizitu-spider
### A general-purpose Python crawler: bypassing hotlink protection to scrape mzitu images
#### A small, handy, and powerful crawler written in Python
#### Required libraries
- 1. **requests**
- 2. **BeautifulSoup**
- 3. **os**
- 4. **lxml**

#### It masquerades as the Chrome browser and adds a Referer request header, so the server will not reject its requests.

### The full project is on GitHub: [https://github.com/Ymy214/meizitu-spider](https://github.com/Ymy214/meizitu-spider)


## Implementation approach:
- 1. **Analyze the structure of the page source**
- 2. **Find a suitable entry point**
- 3. **Crawl in a loop, deduplicating while appending to the crawl queue**
- 4. **This reaches essentially all of the images**

## Code idea / program flow:
**Inspecting the mzitu site, I found that although its layout offers no single obvious entry point, every page displays one main image (the main-image div) and carries a *recommended* section at the bottom. So the idea is to use *one page as the entry point*: parse the HTML with BeautifulSoup (or pyquery) to extract the recommended pages and append them to the visit queue. The outermost layer of the program is a while-loop control structure that walks the queue without revisiting any URL, and each page contributes only its single main display image. Looping like this, the program runs without pause; it can even be left crawling on a server, with the results uploaded to a cloud drive and shared with everyone... you know the kind. A minimal sketch of this loop is shown below.**
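Here is that queue-driven loop in miniature, assuming mzitu's page structure as described above (a `main-image` div plus a `widgets_like` recommendation list); the spoofed headers and error handling are omitted here for brevity and handled in the full script further down:

```python
import requests
from bs4 import BeautifulSoup

queue = ['https://www.mzitu.com/177007']  # the seed page acts as the entry point
seen_imgs = set()                         # main images already saved

for page_url in queue:  # the list grows while we iterate, so the crawl continues
    soup = BeautifulSoup(requests.get(page_url).text, 'lxml')
    img = soup.find('div', 'main-image').img.get('src')
    if img not in seen_imgs:  # save each page's main image only once
        seen_imgs.add(img)
        # ... download img here (with the spoofed headers) ...
    for a in soup.find('dl', 'widgets_like').find_all('a'):
        href = a.get('href')
        if href not in queue:  # deduplicate before enqueueing
            queue.append(href)
```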
## Features and results

### Overview
![](./show-images/2.png)

### Crawl results: great achievements
![](./show-images/1.png)

### Crawl results: a rich harvest
![](./show-images/3.png)

### Custom request headers
![](./show-images/4.png)

### Code showcase
### The Python source code is shown below
## I also have these crawlers, aimed at beginners:
- 1. **Honor of Kings high-resolution skin images**
- 2. **Backstory crawler**
### You are welcome to study and support them
### If this is useful or has helped you, please consider giving it a star; I would be very grateful
## Stargazers over time
[![Stargazers over time](https://starchart.cc/Ymy214/meizitu-spider.svg?variant=light)](https://starchart.cc/Ymy214/meizitu-spider)
--------------------------------------------------------------------------------
/妹子图.py:
--------------------------------------------------------------------------------
#!/usr/bin/env python
# -*- encoding: utf-8 -*-
import os
import requests
from bs4 import BeautifulSoup

# Custom request headers: pretend to be Chrome and send a Referer so the
# image server's hotlink protection does not reject the download.
headers = {
    'Referer': 'https://www.mzitu.com',
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 '
                  '(KHTML, like Gecko) Chrome/73.0.3679.0 Safari/537.36'
}

path = 'R:/python123全国等考/meizitu/'
os.makedirs(path, exist_ok=True)  # make sure the output directory exists
meizi_url = []     # queue of page URLs to visit
meizitu_img = []   # main-image URLs already saved (deduplication)

# Seed the queue with the entry page and record its main image.
start_url = 'https://www.mzitu.com/177007'
meizi_url.append(start_url)
r = requests.get(start_url, headers=headers)
soup = BeautifulSoup(r.text, 'lxml')
main_img = soup.find('div', 'main-image').img.get('src')
meizitu_img.append(main_img)

# Enqueue the pages recommended by the entry page.
guess_like = soup.find('dl', 'widgets_like').find_all('a')
for a in guess_like:
    meizi_url.append(a.get('href'))
# Remove the seed URL if desired:
# del meizi_url[0]

# print(meizi_url)
# print(meizitu_img)
with open(path + 'meizi-main-jpg.txt', 'w') as fo:
    x = 1
    y = 1
    # meizi_url grows while we iterate over it, so the crawl keeps going
    # until no new pages are discovered.
    for node_url in meizi_url:
        r = requests.get(node_url, headers=headers)
        soup = BeautifulSoup(r.text, 'lxml')
        main_div = soup.find('div', 'main-image')
        if main_div is None or main_div.img is None:
            continue  # skip pages without a main image
        main_img = main_div.img.get('src')
        # Log the main image and download it if it has not been seen before.
        if main_img not in meizitu_img:
            x += 1
            meizitu_img.append(main_img)
            # Append to the log file
            fo.write(main_img + '\n')
            # Download the main image
            res = requests.get(main_img, headers=headers)
            if res.status_code == 200:
                with open(path + str(x) + '-' + str(y) + '.jpg', 'wb') as f:
                    f.write(res.content)
                    print('Image saved successfully')
        # "Guess you like": enqueue recommended pages for later visits.
        like_box = soup.find('dl', 'widgets_like')
        if like_box is not None:
            for a in like_box.find_all('a'):
                like = a.get('href')
                # Only add pages not already in the queue
                if like not in meizi_url:
                    y += 1
                    meizi_url.append(like)
--------------------------------------------------------------------------------
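As a footnote to the script above: the README's claim is that the Referer header is what gets the image downloads past hotlink protection. Here is a quick, hypothetical way to check that; the image URL below is a placeholder (substitute one logged by the crawler), and whether the server still enforces the block is an assumption:

```python
import requests

# Hypothetical check: fetch the same image with and without the Referer.
# img_url is a placeholder; use a URL from meizi-main-jpg.txt instead.
img_url = 'https://i.meizitu.net/example.jpg'
ua = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64)'}

bare = requests.get(img_url, headers=ua)
spoofed = requests.get(img_url, headers={**ua, 'Referer': 'https://www.mzitu.com'})
# If hotlink protection is active, expect something like 403 vs. 200.
print(bare.status_code, spoofed.status_code)
```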