├── crawler ├── tmSpider │ ├── tmSpider │ │ ├── __init__.py │ │ ├── gooseeker │ │ │ ├── gsextractor.py │ │ │ └── __pycache__ │ │ │ │ └── gsextractor.cpython-35.pyc │ │ ├── __pycache__ │ │ │ ├── __init__.cpython-35.pyc │ │ │ ├── settings.cpython-35.pyc │ │ │ └── downloader.cpython-35.pyc │ │ ├── spiders │ │ │ ├── __pycache__ │ │ │ │ ├── tmall.cpython-35.pyc │ │ │ │ ├── __init__.cpython-35.pyc │ │ │ │ └── runcrawl.cpython-35.pyc │ │ │ ├── __init__.py │ │ │ ├── runcrawl.py │ │ │ └── tmall.py │ │ ├── middlewares │ │ │ ├── __pycache__ │ │ │ │ ├── downloader.cpython-35.pyc │ │ │ │ └── middleware.cpython-35.pyc │ │ │ ├── middleware.py │ │ │ └── downloader.py │ │ ├── pipelines.py │ │ ├── items.py │ │ └── settings.py │ └── scrapy.cfg ├── simpleSpider │ ├── simpleSpider │ │ ├── __init__.py │ │ ├── __pycache__ │ │ │ ├── __init__.cpython-35.pyc │ │ │ └── settings.cpython-35.pyc │ │ ├── spiders │ │ │ ├── __pycache__ │ │ │ │ ├── __init__.cpython-35.pyc │ │ │ │ ├── gooseeker.cpython-35.pyc │ │ │ │ └── simplespider.cpython-35.pyc │ │ │ ├── __init__.py │ │ │ └── simplespider.py │ │ ├── pipelines.py │ │ ├── items.py │ │ └── settings.py │ ├── gooseeker.py │ ├── __pycache__ │ │ └── gooseeker.cpython-35.pyc │ └── scrapy.cfg ├── README ├── crawl_gooseeker_bbs.py ├── anjuke.py ├── douban.py ├── xslt_bbs.xml ├── result1.xml └── result2.xml ├── docs └── README.MD ├── core ├── gooseeker-2.1.zip ├── README └── gooseeker.py ├── test ├── README └── readPdf.py └── README.md /crawler/tmSpider/tmSpider/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /docs/README.MD: -------------------------------------------------------------------------------- 1 | Created on May 26,2016 2 | -------------------------------------------------------------------------------- /crawler/simpleSpider/simpleSpider/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /core/gooseeker-2.1.zip: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Rockyzsu/gooseeker/master/core/gooseeker-2.1.zip -------------------------------------------------------------------------------- /crawler/simpleSpider/gooseeker.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Rockyzsu/gooseeker/master/crawler/simpleSpider/gooseeker.py -------------------------------------------------------------------------------- /crawler/tmSpider/tmSpider/gooseeker/gsextractor.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Rockyzsu/gooseeker/master/crawler/tmSpider/tmSpider/gooseeker/gsextractor.py -------------------------------------------------------------------------------- /crawler/simpleSpider/__pycache__/gooseeker.cpython-35.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Rockyzsu/gooseeker/master/crawler/simpleSpider/__pycache__/gooseeker.cpython-35.pyc -------------------------------------------------------------------------------- /crawler/tmSpider/tmSpider/__pycache__/__init__.cpython-35.pyc: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/Rockyzsu/gooseeker/master/crawler/tmSpider/tmSpider/__pycache__/__init__.cpython-35.pyc -------------------------------------------------------------------------------- /crawler/tmSpider/tmSpider/__pycache__/settings.cpython-35.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Rockyzsu/gooseeker/master/crawler/tmSpider/tmSpider/__pycache__/settings.cpython-35.pyc -------------------------------------------------------------------------------- /crawler/tmSpider/tmSpider/__pycache__/downloader.cpython-35.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Rockyzsu/gooseeker/master/crawler/tmSpider/tmSpider/__pycache__/downloader.cpython-35.pyc -------------------------------------------------------------------------------- /crawler/tmSpider/tmSpider/spiders/__pycache__/tmall.cpython-35.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Rockyzsu/gooseeker/master/crawler/tmSpider/tmSpider/spiders/__pycache__/tmall.cpython-35.pyc -------------------------------------------------------------------------------- /test/README: -------------------------------------------------------------------------------- 1 | # Created at 11:10, Jul 19,2016 2 | # Updated at 11:10, Jul 19,2016 3 | 4 | 目录文件说明 5 | ================ 6 | test 7 | 8 | - readPdf.py 读取pdf文档内容 9 | 10 | 11 | -------------------------------------------------------------------------------- /crawler/simpleSpider/simpleSpider/__pycache__/__init__.cpython-35.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Rockyzsu/gooseeker/master/crawler/simpleSpider/simpleSpider/__pycache__/__init__.cpython-35.pyc -------------------------------------------------------------------------------- /crawler/simpleSpider/simpleSpider/__pycache__/settings.cpython-35.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Rockyzsu/gooseeker/master/crawler/simpleSpider/simpleSpider/__pycache__/settings.cpython-35.pyc -------------------------------------------------------------------------------- /crawler/tmSpider/tmSpider/spiders/__pycache__/__init__.cpython-35.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Rockyzsu/gooseeker/master/crawler/tmSpider/tmSpider/spiders/__pycache__/__init__.cpython-35.pyc -------------------------------------------------------------------------------- /crawler/tmSpider/tmSpider/spiders/__pycache__/runcrawl.cpython-35.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Rockyzsu/gooseeker/master/crawler/tmSpider/tmSpider/spiders/__pycache__/runcrawl.cpython-35.pyc -------------------------------------------------------------------------------- /crawler/tmSpider/tmSpider/gooseeker/__pycache__/gsextractor.cpython-35.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Rockyzsu/gooseeker/master/crawler/tmSpider/tmSpider/gooseeker/__pycache__/gsextractor.cpython-35.pyc -------------------------------------------------------------------------------- /crawler/tmSpider/tmSpider/middlewares/__pycache__/downloader.cpython-35.pyc: 
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/Rockyzsu/gooseeker/master/crawler/tmSpider/tmSpider/middlewares/__pycache__/downloader.cpython-35.pyc -------------------------------------------------------------------------------- /crawler/tmSpider/tmSpider/middlewares/__pycache__/middleware.cpython-35.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Rockyzsu/gooseeker/master/crawler/tmSpider/tmSpider/middlewares/__pycache__/middleware.cpython-35.pyc -------------------------------------------------------------------------------- /crawler/simpleSpider/simpleSpider/spiders/__pycache__/__init__.cpython-35.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Rockyzsu/gooseeker/master/crawler/simpleSpider/simpleSpider/spiders/__pycache__/__init__.cpython-35.pyc -------------------------------------------------------------------------------- /crawler/simpleSpider/simpleSpider/spiders/__pycache__/gooseeker.cpython-35.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Rockyzsu/gooseeker/master/crawler/simpleSpider/simpleSpider/spiders/__pycache__/gooseeker.cpython-35.pyc -------------------------------------------------------------------------------- /crawler/simpleSpider/simpleSpider/spiders/__pycache__/simplespider.cpython-35.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Rockyzsu/gooseeker/master/crawler/simpleSpider/simpleSpider/spiders/__pycache__/simplespider.cpython-35.pyc -------------------------------------------------------------------------------- /crawler/tmSpider/tmSpider/spiders/__init__.py: -------------------------------------------------------------------------------- 1 | # This package will contain the spiders of your Scrapy project 2 | # 3 | # Please refer to the documentation for information on how to create and manage 4 | # your spiders. 5 | -------------------------------------------------------------------------------- /crawler/simpleSpider/simpleSpider/spiders/__init__.py: -------------------------------------------------------------------------------- 1 | # This package will contain the spiders of your Scrapy project 2 | # 3 | # Please refer to the documentation for information on how to create and manage 4 | # your spiders. 
5 | -------------------------------------------------------------------------------- /crawler/tmSpider/scrapy.cfg: -------------------------------------------------------------------------------- 1 | # Automatically created by: scrapy startproject 2 | # 3 | # For more information about the [deploy] section see: 4 | # https://scrapyd.readthedocs.org/en/latest/deploy.html 5 | 6 | [settings] 7 | default = tmSpider.settings 8 | 9 | [deploy] 10 | #url = http://localhost:6800/ 11 | project = tmSpider 12 | -------------------------------------------------------------------------------- /crawler/simpleSpider/scrapy.cfg: -------------------------------------------------------------------------------- 1 | # Automatically created by: scrapy startproject 2 | # 3 | # For more information about the [deploy] section see: 4 | # https://scrapyd.readthedocs.org/en/latest/deploy.html 5 | 6 | [settings] 7 | default = simpleSpider.settings 8 | 9 | [deploy] 10 | #url = http://localhost:6800/ 11 | project = simpleSpider 12 | -------------------------------------------------------------------------------- /crawler/tmSpider/tmSpider/pipelines.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | 3 | # Define your item pipelines here 4 | # 5 | # Don't forget to add your pipeline to the ITEM_PIPELINES setting 6 | # See: http://doc.scrapy.org/en/latest/topics/item-pipeline.html 7 | 8 | 9 | class TmspiderPipeline(object): 10 | def process_item(self, item, spider): 11 | return item 12 | -------------------------------------------------------------------------------- /crawler/tmSpider/tmSpider/items.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | 3 | # Define here the models for your scraped items 4 | # 5 | # See documentation in: 6 | # http://doc.scrapy.org/en/latest/topics/items.html 7 | 8 | import scrapy 9 | 10 | 11 | class TmspiderItem(scrapy.Item): 12 | # define the fields for your item here like: 13 | # name = scrapy.Field() 14 | pass 15 | -------------------------------------------------------------------------------- /crawler/simpleSpider/simpleSpider/pipelines.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | 3 | # Define your item pipelines here 4 | # 5 | # Don't forget to add your pipeline to the ITEM_PIPELINES setting 6 | # See: http://doc.scrapy.org/en/latest/topics/item-pipeline.html 7 | 8 | 9 | class SimplespiderPipeline(object): 10 | def process_item(self, item, spider): 11 | return item 12 | -------------------------------------------------------------------------------- /crawler/simpleSpider/simpleSpider/items.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | 3 | # Define here the models for your scraped items 4 | # 5 | # See documentation in: 6 | # http://doc.scrapy.org/en/latest/topics/items.html 7 | 8 | import scrapy 9 | 10 | 11 | class SimplespiderItem(scrapy.Item): 12 | # define the fields for your item here like: 13 | # name = scrapy.Field() 14 | pass 15 | -------------------------------------------------------------------------------- /crawler/tmSpider/tmSpider/spiders/runcrawl.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | 3 | import scrapy 4 | from twisted.internet import reactor 5 | from scrapy.crawler import CrawlerRunner 6 | 7 | from tmall import TmallSpider 8 
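# Note on the flow below: CrawlerRunner.crawl() schedules the spider to run inside the
# Twisted reactor, join() returns a Deferred that fires once every scheduled crawl has
# finished, and addBoth() chains a callback that stops the reactor so the script exits.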
| 9 | spider = TmallSpider(domain='tmall.com') 10 | crawler = CrawlerRunner() 11 | crawler.crawl(spider) 12 | d = crawler.join() 13 | d.addBoth(lambda _: reactor.stop()) 14 | reactor.run() 15 | -------------------------------------------------------------------------------- /crawler/README: -------------------------------------------------------------------------------- 1 | # Created at 15:10, May 18,2016 2 | # Updated at 15:20, Jul 6,2016 3 | 4 | 目录文件说明 5 | ================ 6 | crawler 7 | 8 | - anjuke.py 采集安居客房产经纪人 9 | - result1.xml 安居客房产经纪人结果文件1 10 | - result2.xml 安居客房产经纪人结果文件2 11 | - crawl_gooseeker_bbs.py 采集集搜客论坛内容 12 | - xslt_bbs.xml 集搜客论坛内容提取本地xslt文件 13 | - douban.py 采集豆瓣小组讨论话题 14 | 15 | - simpleSpider 一个小爬虫(基于Scrapy开源框架) 16 | - tmSpider 采集天猫商品信息(基于Scrapy开源框架) 17 | 18 | -------------------------------------------------------------------------------- /crawler/crawl_gooseeker_bbs.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | # 使用gsExtractor类的示例程序 3 | # 访问集搜客论坛,以xslt为模板提取论坛内容 4 | # xslt保存在xslt_bbs.xml中 5 | from urllib import request 6 | from lxml import etree 7 | from gooseeker import GsExtractor 8 | 9 | # 访问并读取网页内容 10 | url = "http://www.gooseeker.com/cn/forum/7" 11 | conn = request.urlopen(url) 12 | doc = etree.HTML(conn.read()) 13 | 14 | # 生成xsltExtractor对象 15 | bbsExtra = GsExtractor() 16 | # 调用set方法设置xslt内容 17 | bbsExtra.setXsltFromFile("xslt_bbs.xml") 18 | # 调用extract方法提取所需内容 19 | result = bbsExtra.extract(doc) 20 | # 显示提取结果 21 | print(str(result)) 22 | -------------------------------------------------------------------------------- /crawler/tmSpider/tmSpider/middlewares/middleware.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | 3 | from scrapy.exceptions import IgnoreRequest 4 | from scrapy.http import HtmlResponse, Response 5 | 6 | import tmSpider.middlewares.downloader as downloader 7 | 8 | class CustomMiddlewares(object): 9 | def process_request(self, request, spider): 10 | url = str(request.url) 11 | dl = downloader.CustomDownloader() 12 | content = dl.VisitPersonPage(url) 13 | return HtmlResponse(url, status = 200, body = content) 14 | 15 | def process_response(self, request, response, spider): 16 | if len(response.body) == 100: 17 | raise IgnoreRequest("body length == 100") 18 | else: 19 | return response -------------------------------------------------------------------------------- /core/README: -------------------------------------------------------------------------------- 1 | # Created at 15:10, May 18,2016 2 | # 这个目录存放基本类 3 | # 基本类实现:使用xslt作为模板提取html或DOM内容 4 | 5 | # 2016-05-25 Upload gooseeker.py 6 | # Class: gsExtractor V1.0 7 | 8 | # 2016-05-26 gooseeker updated to V2.0 9 | 10 | # Windows下安装使用 11 | # 1. 安装Python3.5.2 12 | # 官网下载链接: https://www.python.org/ftp/python/3.5.2/python-3.5.2.exe 13 | # 下载完成后,双击安装。 14 | # 这个版本会自动安装pip和setuptools,方便安装其它的库 15 | # 2. 安装Lxml 3.6.0 16 | # Lxml官网地址: http://lxml.de/ 17 | # Windows版安装包下载: http://www.lfd.uci.edu/~gohlke/pythonlibs/#lxml 18 | # 对应windows下python3.5的安装文件为 lxml-3.6.0-cp35-cp35m-win32.whl 19 | # 下载完成后,在windows下打开一个命令窗口, 切换到刚下载的whl文件的存放目录 20 | # 运行pip install lxml-3.6.0-cp35-cp35m-win32.whl 21 | # 3.
下载提取器文件 22 | # 下载地址: https://github.com/FullerHua/gooseeker/blob/master/core/gooseeker.py 23 | # 把gooseeker.py保存在项目目录下 24 | 25 | # 2016-07-06 添加Windows下的安装使用说明 26 | -------------------------------------------------------------------------------- /test/readPdf.py: -------------------------------------------------------------------------------- 1 | # _*_coding:utf8_*_ 2 | # readPdf.py 3 | # python读取pdf格式的文档 4 | 5 | from urllib.request import urlopen 6 | from pdfminer.pdfinterp import PDFResourceManager, process_pdf 7 | from pdfminer.converter import TextConverter 8 | from pdfminer.layout import LAParams 9 | from io import StringIO 10 | from io import open 11 | 12 | def readPDF(pdfFile): 13 | rsrcmgr = PDFResourceManager() 14 | retstr = StringIO() 15 | laparams = LAParams() 16 | device = TextConverter(rsrcmgr, retstr, laparams=laparams) 17 | 18 | process_pdf(rsrcmgr, device, pdfFile) 19 | device.close() 20 | 21 | content = retstr.getvalue() 22 | retstr.close() 23 | return content 24 | 25 | pdfFile = urlopen("http://pythonscraping.com/pages/warandpeace/chapter1.pdf") 26 | outputString = readPDF(pdfFile) 27 | print(outputString) 28 | pdfFile.close() -------------------------------------------------------------------------------- /crawler/tmSpider/tmSpider/spiders/tmall.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | import time 3 | import scrapy 4 | 5 | import tmSpider.gooseeker.Gsextractor as gsextractor 6 | 7 | class TmallSpider(scrapy.Spider): 8 | name = "tmall" 9 | allowed_domains = ["tmall.com"] 10 | start_urls = ( 11 | 'https://world.tmall.com/item/526449276263.htm', 12 | ) 13 | 14 | # 获得当前时间戳 15 | def getTime(self): 16 | current_time = str(time.time()) 17 | m = current_time.find('.') 18 | current_time = current_time[0:m] 19 | return current_time 20 | 21 | def parse(self, response): 22 | html = response.body 23 | print("----------------------------------------------------------------------------") 24 | extra=gsextractor.GsExtractor() 25 | extra.setXsltFromAPI("31d24931e043e2d5364d03b8ff9cc77e", "淘宝天猫_商品详情30474","tmall","list") 26 | 27 | result = extra.extract(html) 28 | print(str(result).encode('gbk','ignore').decode('gbk')) 29 | #file_name = 'F:/temp/淘宝天猫_商品详情30474_' + self.getTime() + '.xml' 30 | #open(file_name,"wb").write(result) -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | 项目名称 2 | 3 | ========= 4 | 5 | gooseeker 6 | 7 | 集搜客即时模式网络爬虫项目 8 | 9 | 项目背景 10 | ======== 11 | 在python 即时网络爬虫项目启动说明中我们讨论一个数字:程序员浪费在调测内容提取规则上的时间。 网络数据抓取的工作量有80%是在为各种网站的各种数据结构编写抓取规则。 12 | 13 | 所以我们发起了这个项目,把程序员从繁琐的调测规则中解放出来,投入到更高端的数据处理工作中。 14 | 15 | GooSeeker发布基于xslt的内容提取器,xslt可以通过GooSeeker API获得,让大家能省掉90%的调测正则表达式或者XPath的时间 16 | 17 | 18 | 项目资源 19 | ======== 20 | 入口页 21 | 22 | * http://www.gooseeker.com/land/python.html 23 | 24 | Python交流园地 25 | 26 | * http://www.gooseeker.com/doc/forum-59-1.html 27 | 28 | 知乎专栏 29 | 30 | * https://zhuanlan.zhihu.com/gooseeker 31 | 32 | GooSeeker收割模式网络爬虫 33 | 34 | * http://www.gooseeker.com 35 | 36 | 项目目录文件说明 37 | ================ 38 | gooseeker 39 | 40 | - core/gooseeker.py 提取器类 41 | - core/README 说明文件 42 | 43 | - crawler/anjuke.py 采集安居客房产经纪人 44 | - crawler/result1.xml 安居客房产经纪人结果文件1 45 | - crawler/result2.xml 安居客房产经纪人结果文件2 46 | - crawler/crawl_gooseeker_bbs.py 采集集搜客论坛内容 47 | - crawler/xslt_bbs.xml 集搜客论坛内容提取本地xslt文件 48 | - crawler/douban.py 采集豆瓣小组讨论话题 49 | 50 | - crawler/simpleSpider 
一个小爬虫(基于Scrapy开源框架) 51 | - crawler/tmSpider 采集天猫商品信息(基于Scrapy开源框架) 52 | 53 | - test/readPdf.py python读取pdf文档 -------------------------------------------------------------------------------- /crawler/anjuke.py: -------------------------------------------------------------------------------- 1 | # _*_coding:utf8_*_ 2 | # anjuke.py 3 | # 爬取安居客房产经纪人 4 | 5 | from urllib import request 6 | from lxml import etree 7 | from gooseeker import GsExtractor 8 | 9 | totalpages = 50 10 | 11 | class Spider: 12 | def getContent(self, url): 13 | conn = request.urlopen(url) 14 | output = etree.HTML(conn.read()) 15 | return output 16 | 17 | def saveContent(self, filepath, content): 18 | file_obj = open(filepath, 'w', encoding='UTF-8') 19 | file_obj.write(content) 20 | file_obj.close() 21 | 22 | bbsExtra = GsExtractor() 23 | bbsExtra.setXsltFromAPI("31d24931e043e2d5364d03b8ff9cc77e" , "安居客房产经纪人") # 设置xslt抓取规则,第一个参数是app key,请到会员中心申请 24 | 25 | url = "http://shenzhen.anjuke.com/tycoon/nanshan/p" 26 | anjukeSpider = Spider() 27 | print("爬取开始") 28 | 29 | for pagenumber in range(1 , totalpages): 30 | currenturl = url + str(pagenumber) 31 | print("正在爬取", currenturl) 32 | content = anjukeSpider.getContent(currenturl) 33 | outputxml = bbsExtra.extract(content) 34 | outputfile = "result" + str(pagenumber) + ".xml" 35 | anjukeSpider.saveContent(outputfile , str(outputxml)) 36 | 37 | print("爬取结束") 38 | 39 | 40 | -------------------------------------------------------------------------------- /crawler/tmSpider/tmSpider/middlewares/downloader.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | import time 3 | from scrapy.exceptions import IgnoreRequest 4 | from scrapy.http import HtmlResponse, Response 5 | from selenium import webdriver 6 | import selenium.webdriver.support.ui as ui 7 | 8 | class CustomDownloader(object): 9 | def __init__(self): 10 | # use any browser you wish 11 | cap = webdriver.DesiredCapabilities.PHANTOMJS 12 | cap["phantomjs.page.settings.resourceTimeout"] = 1000 13 | cap["phantomjs.page.settings.loadImages"] = True 14 | cap["phantomjs.page.settings.disk-cache"] = True 15 | cap["phantomjs.page.customHeaders.Cookie"] = 'SINAGLOBAL=3955422793326.2764.1451802953297; ' 16 | self.driver = webdriver.PhantomJS(executable_path='F:/phantomjs/bin/phantomjs.exe', desired_capabilities=cap) 17 | #wait = ui.WebDriverWait(self.driver,10) 18 | 19 | def VisitPersonPage(self, url): 20 | print('正在加载网站.....') 21 | self.driver.get(url) 22 | time.sleep(1) 23 | # 翻到底,详情加载 24 | js="var q=document.documentElement.scrollTop=10000" 25 | self.driver.execute_script(js) 26 | time.sleep(5) 27 | content = self.driver.page_source.encode('gbk','ignore') 28 | print('网页加载完毕.....') 29 | return content 30 | 31 | def __del__(self): 32 | self.driver.quit() -------------------------------------------------------------------------------- /crawler/douban.py: -------------------------------------------------------------------------------- 1 | # _*_coding:utf8_*_ 2 | # douban.py 3 | # 爬取豆瓣小组讨论话题 4 | 5 | from urllib import request 6 | from lxml import etree 7 | from gooseeker import GsExtractor 8 | from selenium import webdriver 9 | 10 | class PhantomSpider: 11 | def getContent(self, url): 12 | browser = webdriver.PhantomJS(executable_path='C:\\phantomjs-2.1.1-windows\\bin\\phantomjs.exe') 13 | browser.get(url) 14 | time.sleep(3) 15 | html = browser.execute_script("return document.documentElement.outerHTML") 16 | output = etree.HTML(html) 17 | return output 18 | 19 | def 
saveContent(self, filepath, content): 20 | file_obj = open(filepath, 'w', encoding='UTF-8') 21 | file_obj.write(content) 22 | file_obj.close() 23 | 24 | doubanExtra = GsExtractor() 25 | # 下面这句调用gooseeker的api来设置xslt抓取规则 26 | # 第一个参数是app key,请到GooSeeker会员中心申请 27 | # 第二个参数是规则名,是通过GooSeeker的图形化工具: 谋数台MS 来生成的 28 | doubanExtra.setXsltFromAPI("ffd5273e213036d812ea298922e2627b" , "豆瓣小组讨论话题") 29 | 30 | url = "https://www.douban.com/group/haixiuzu/discussion?start=" 31 | totalpages = 5 32 | doubanSpider = PhantomSpider() 33 | print("爬取开始") 34 | 35 | for pagenumber in range(1 , totalpages): 36 | currenturl = url + str((pagenumber-1)*25) 37 | print("正在爬取", currenturl) 38 | content = doubanSpider.getContent(currenturl) 39 | outputxml = doubanExtra.extract(content) 40 | outputfile = "result" + str(pagenumber) +".xml" 41 | doubanSpider.saveContent(outputfile , str(outputxml)) 42 | 43 | print("爬取结束") 44 | -------------------------------------------------------------------------------- /crawler/simpleSpider/simpleSpider/spiders/simplespider.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | import time 3 | import scrapy 4 | from lxml import etree 5 | from selenium import webdriver 6 | import gooseeker.GsExtractor as GsExtractor 7 | 8 | class SimpleSpider(scrapy.Spider): 9 | name = "simplespider" 10 | allowed_domains = ["taobao.com"] 11 | start_urls = [ 12 | "https://item.taobao.com/item.htm?spm=a230r.1.14.197.e2vSMY&id=44543058134&ns=1&abbucket=10" 13 | ] 14 | 15 | def __init__(self): 16 | # use any browser you wish 17 | self.browser = webdriver.Firefox() 18 | 19 | # 获得当前时间戳 20 | def getTime(self): 21 | current_time = str(time.time()) 22 | m = current_time.find('.') 23 | current_time = current_time[0:m] 24 | return current_time 25 | 26 | def parse(self, response): 27 | print("start...") 28 | #start browser 29 | self.browser.get(response.url) 30 | #loading time interval 31 | time.sleep(3) 32 | #get xslt 33 | extra=GsExtractor() 34 | extra.setXsltFromAPI("31d24931e043e2d5364d03b8ff9cc77e", "淘宝天猫_商品详情30474") 35 | # get doc 36 | html = self.browser.execute_script("return document.documentElement.outerHTML") 37 | doc = etree.HTML(html) 38 | result = extra.extract(doc) 39 | 40 | # out file 41 | file_name = 'F:/temp/淘宝天猫_商品详情30474_' + self.getTime() + '.xml' 42 | open(file_name,"wb").write(result) 43 | self.browser.close() 44 | print("end") 45 | 46 | 47 | -------------------------------------------------------------------------------- /core/gooseeker.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python 2 | # -*- coding: utf-8 -*- 3 | # 模块名: gooseeker 4 | # 类名: GsExtractor 5 | # Version: 2.1 6 | # 说明: html内容提取器 7 | # 功能: 使用xslt作为模板,快速提取HTML DOM中的内容。 8 | # released by 集搜客(http://www.gooseeker.com) on May 18, 2016 9 | # github: https://github.com/FullerHua/jisou/core/gooseeker.py 10 | 11 | import time 12 | from urllib import request 13 | from urllib.parse import quote 14 | from lxml import etree 15 | 16 | class GsExtractor(object): 17 | def _init_(self): 18 | self.xslt = "" 19 | # 从文件读取xslt 20 | def setXsltFromFile(self , xsltFilePath): 21 | file = open(xsltFilePath , 'r' , encoding='UTF-8') 22 | try: 23 | self.xslt = file.read() 24 | finally: 25 | file.close() 26 | # 从字符串获得xslt 27 | def setXsltFromMem(self , xsltStr): 28 | self.xslt = xsltStr 29 | # 通过GooSeeker API接口获得xslt 30 | def setXsltFromAPI(self , APIKey , theme, middle=None, bname=None): 31 | apiurl = 
"http://www.gooseeker.com/api/getextractor?key="+ APIKey +"&theme="+quote(theme) 32 | if (middle): 33 | apiurl = apiurl + "&middle="+quote(middle) 34 | if (bname): 35 | apiurl = apiurl + "&bname="+quote(bname) 36 | apiconn = request.urlopen(apiurl) 37 | self.xslt = apiconn.read() 38 | # 返回当前xslt 39 | def getXslt(self): 40 | return self.xslt 41 | # 提取方法,入参是一个HTML DOM对象,返回是提取结果 42 | def extract(self , html): 43 | xslt_root = etree.XML(self.xslt) 44 | transform = etree.XSLT(xslt_root) 45 | result_tree = transform(html) 46 | return result_tree 47 | # 提取方法,入参是html源码,返回是提取结果 48 | def extractHTML(self , html): 49 | doc = etree.HTML(html) 50 | return self.extract(doc) 51 | -------------------------------------------------------------------------------- /crawler/xslt_bbs.xml: -------------------------------------------------------------------------------- 1 | 2 | 3 | <论坛列表> 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | <标题> 12 | 13 | 14 | 15 | 16 | 17 | 18 | <发帖人> 19 | 20 | 21 | 22 | 23 | 24 | 25 | <帖子详细链接> 26 | 27 | 28 | 29 | 30 | 31 | 32 | <回复数> 33 | 34 | 35 | 36 | 37 | 38 | 39 | <发帖时间> 40 | 41 | 42 | 43 | 44 | 45 | 46 | <最后回复时间> 47 | 48 | 49 | 50 | 51 | 52 | 53 | 54 | 55 | -------------------------------------------------------------------------------- /crawler/simpleSpider/simpleSpider/settings.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | 3 | # Scrapy settings for simpleSpider project 4 | # 5 | # For simplicity, this file contains only settings considered important or 6 | # commonly used. You can find more settings consulting the documentation: 7 | # 8 | # http://doc.scrapy.org/en/latest/topics/settings.html 9 | # http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html 10 | # http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html 11 | 12 | BOT_NAME = 'simpleSpider' 13 | 14 | SPIDER_MODULES = ['simpleSpider.spiders'] 15 | NEWSPIDER_MODULE = 'simpleSpider.spiders' 16 | 17 | 18 | # Crawl responsibly by identifying yourself (and your website) on the user-agent 19 | #USER_AGENT = 'simpleSpider (+http://www.yourdomain.com)' 20 | 21 | # Obey robots.txt rules 22 | ROBOTSTXT_OBEY = False 23 | 24 | # Configure maximum concurrent requests performed by Scrapy (default: 16) 25 | #CONCURRENT_REQUESTS = 32 26 | 27 | # Configure a delay for requests for the same website (default: 0) 28 | # See http://scrapy.readthedocs.org/en/latest/topics/settings.html#download-delay 29 | # See also autothrottle settings and docs 30 | #DOWNLOAD_DELAY = 3 31 | # The download delay setting will honor only one of: 32 | #CONCURRENT_REQUESTS_PER_DOMAIN = 16 33 | #CONCURRENT_REQUESTS_PER_IP = 16 34 | 35 | # Disable cookies (enabled by default) 36 | #COOKIES_ENABLED = False 37 | 38 | # Disable Telnet Console (enabled by default) 39 | #TELNETCONSOLE_ENABLED = False 40 | 41 | # Override the default request headers: 42 | #DEFAULT_REQUEST_HEADERS = { 43 | # 'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8', 44 | # 'Accept-Language': 'en', 45 | #} 46 | 47 | # Enable or disable spider middlewares 48 | # See http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html 49 | #SPIDER_MIDDLEWARES = { 50 | # 'simpleSpider.middlewares.MyCustomSpiderMiddleware': 543, 51 | #} 52 | 53 | # Enable or disable downloader middlewares 54 | # See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html 55 | #DOWNLOADER_MIDDLEWARES = { 56 | # 'simpleSpider.middlewares.MyCustomDownloaderMiddleware': 543, 57 | #} 58 
| 59 | # Enable or disable extensions 60 | # See http://scrapy.readthedocs.org/en/latest/topics/extensions.html 61 | #EXTENSIONS = { 62 | # 'scrapy.extensions.telnet.TelnetConsole': None, 63 | #} 64 | 65 | # Configure item pipelines 66 | # See http://scrapy.readthedocs.org/en/latest/topics/item-pipeline.html 67 | #ITEM_PIPELINES = { 68 | # 'simpleSpider.pipelines.SomePipeline': 300, 69 | #} 70 | 71 | # Enable and configure the AutoThrottle extension (disabled by default) 72 | # See http://doc.scrapy.org/en/latest/topics/autothrottle.html 73 | #AUTOTHROTTLE_ENABLED = True 74 | # The initial download delay 75 | #AUTOTHROTTLE_START_DELAY = 5 76 | # The maximum download delay to be set in case of high latencies 77 | #AUTOTHROTTLE_MAX_DELAY = 60 78 | # The average number of requests Scrapy should be sending in parallel to 79 | # each remote server 80 | #AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0 81 | # Enable showing throttling stats for every response received: 82 | #AUTOTHROTTLE_DEBUG = False 83 | 84 | # Enable and configure HTTP caching (disabled by default) 85 | # See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings 86 | #HTTPCACHE_ENABLED = True 87 | #HTTPCACHE_EXPIRATION_SECS = 0 88 | #HTTPCACHE_DIR = 'httpcache' 89 | #HTTPCACHE_IGNORE_HTTP_CODES = [] 90 | #HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage' 91 | -------------------------------------------------------------------------------- /crawler/tmSpider/tmSpider/settings.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | 3 | # Scrapy settings for tmSpider project 4 | # 5 | # For simplicity, this file contains only settings considered important or 6 | # commonly used. 
You can find more settings consulting the documentation: 7 | # 8 | # http://doc.scrapy.org/en/latest/topics/settings.html 9 | # http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html 10 | # http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html 11 | 12 | BOT_NAME = 'tmSpider' 13 | 14 | SPIDER_MODULES = ['tmSpider.spiders'] 15 | NEWSPIDER_MODULE = 'tmSpider.spiders' 16 | 17 | 18 | # Crawl responsibly by identifying yourself (and your website) on the user-agent 19 | #USER_AGENT = 'tmSpider (+http://www.yourdomain.com)' 20 | 21 | # Obey robots.txt rules 22 | ROBOTSTXT_OBEY = False 23 | 24 | # Configure maximum concurrent requests performed by Scrapy (default: 16) 25 | #CONCURRENT_REQUESTS = 32 26 | 27 | # Configure a delay for requests for the same website (default: 0) 28 | # See http://scrapy.readthedocs.org/en/latest/topics/settings.html#download-delay 29 | # See also autothrottle settings and docs 30 | #DOWNLOAD_DELAY = 3 31 | # The download delay setting will honor only one of: 32 | #CONCURRENT_REQUESTS_PER_DOMAIN = 16 33 | #CONCURRENT_REQUESTS_PER_IP = 16 34 | 35 | # Disable cookies (enabled by default) 36 | #COOKIES_ENABLED = False 37 | 38 | # Disable Telnet Console (enabled by default) 39 | #TELNETCONSOLE_ENABLED = False 40 | 41 | # Override the default request headers: 42 | #DEFAULT_REQUEST_HEADERS = { 43 | # 'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8', 44 | # 'Accept-Language': 'en', 45 | #} 46 | 47 | # Enable or disable spider middlewares 48 | # See http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html 49 | #SPIDER_MIDDLEWARES = { 50 | # 'tmSpider.middlewares.MyCustomSpiderMiddleware': 543, 51 | #} 52 | 53 | # Enable or disable downloader middlewares 54 | # See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html 55 | DOWNLOADER_MIDDLEWARES = { 56 | 'tmSpider.middlewares.middleware.CustomMiddlewares': 543, 57 | 'scrapy.contrib.downloadermiddleware.useragent.UserAgentMiddleware': None 58 | } 59 | 60 | # Enable or disable extensions 61 | # See http://scrapy.readthedocs.org/en/latest/topics/extensions.html 62 | #EXTENSIONS = { 63 | # 'scrapy.extensions.telnet.TelnetConsole': None, 64 | #} 65 | 66 | # Configure item pipelines 67 | # See http://scrapy.readthedocs.org/en/latest/topics/item-pipeline.html 68 | #ITEM_PIPELINES = { 69 | # 'tmSpider.pipelines.SomePipeline': 300, 70 | #} 71 | 72 | # Enable and configure the AutoThrottle extension (disabled by default) 73 | # See http://doc.scrapy.org/en/latest/topics/autothrottle.html 74 | #AUTOTHROTTLE_ENABLED = True 75 | # The initial download delay 76 | #AUTOTHROTTLE_START_DELAY = 5 77 | # The maximum download delay to be set in case of high latencies 78 | #AUTOTHROTTLE_MAX_DELAY = 60 79 | # The average number of requests Scrapy should be sending in parallel to 80 | # each remote server 81 | #AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0 82 | # Enable showing throttling stats for every response received: 83 | #AUTOTHROTTLE_DEBUG = False 84 | 85 | # Enable and configure HTTP caching (disabled by default) 86 | # See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings 87 | #HTTPCACHE_ENABLED = True 88 | #HTTPCACHE_EXPIRATION_SECS = 0 89 | #HTTPCACHE_DIR = 'httpcache' 90 | #HTTPCACHE_IGNORE_HTTP_CODES = [] 91 | #HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage' 92 | -------------------------------------------------------------------------------- /crawler/result1.xml: 
-------------------------------------------------------------------------------- 1 | 2 | <经纪人><区>南山<区域>华侨城<地产公司>全部<姓名>杨兴登<电话>15099911681 <公司>中原地产<分店>宝能太古城分行<熟悉小区>熟悉澳城花园   大南山紫园(独栋)   半岛城邦   <自我介绍>享受当下!等待浪费生命!<姓名>李文飞<电话>13590146793 <公司>Q房网<分店>麒麟分行<熟悉小区>熟悉麒麟花园三期   麒麟花园一期   名家富居   <自我介绍>竭诚为您服务力争在最短的时间里帮到您!服务项目:实验学位房!<姓名>张文宝<电话>15875554087 <公司>中原地产<分店>皇庭港湾分行<熟悉小区>熟悉三湘海尚   阳光海滨花园   宝能太古城(南区)   <自我介绍>我会珍惜每位客户,我坚信只有优质的服务才能赢的客户的信任<姓名>林映兴<电话>13802234765 <公司>中原地产<分店>南光路分行<熟悉小区>熟悉鸿瑞花园   缤纷假日豪园   现代城华庭   <自我介绍>本人毕业于广州大学建筑工程系,毕业至今一直在中原地产从事三级市场的居间业务.本人对楼房的设计和装修的部署有独到的见解,对居家风水也有一定的认识.<姓名>韦川<电话>13590107925 <公司>中原地产<分店>阳光带海滨城一期分行<熟悉小区>熟悉卓越浅水湾花园   阳光带海滨城二期   <自我介绍>本人在南硅谷片区从事房产行业五年有余,热情 、真诚、善于沟通,精通相关学位物业所有事务!欢迎您的来电相信一定能帮到您!<姓名>郑高贤<电话>13631652867 <公司>Q房网<分店>麒麟分行<熟悉小区>熟悉麒麟花园三期   御林华府   豪方现代豪园   <自我介绍>专做学位房保证价格真实。我的微信:zhengaoxian<姓名>林巧珠<电话>13684955910 <公司>美联物业<分店>雷圳分行<熟悉小区>熟悉诺德国际居住区   雷圳0755   中海阳光玫瑰园   <自我介绍>世纪广场黄金地段精装复式房,138平仅售160万,月租超月供<姓名>段卫国<电话>13714376029 <公司>中原地产<分店>世纪村分行<熟悉小区>熟悉红树西岸   中信红树湾四期(住宅)   中信红树湾五期   <自我介绍>真诚到永远! 细节决定成败!<姓名>李俊峰<电话>13714777452 <公司>阳光带海滨城一期<分店>岸芷汀兰<熟悉小区>熟悉阳光带海滨城一期   岸芷汀兰   卓越浅水湾花园   <自我介绍/><姓名>李明<电话>13510113121 <公司>中海阳光玫瑰园<分店>依云伴山<熟悉小区>熟悉中海阳光玫瑰园   依云伴山   山海津   <自我介绍/><姓名>韦艳玲<电话>13530111380 <公司>海印长城(一到二期)<分店>观海台<熟悉小区>熟悉海印长城(一到二期)   观海台   信和自由广场   <自我介绍/><姓名>黄绍华<电话>13928467313 <公司>中原地产<分店>中信红树湾分行<熟悉小区>熟悉红树西岸   中信红树湾四期(住宅)   中信红树湾一期   <自我介绍>用我的努力、专业、诚信,换来万家灯火通明!<姓名>涂鸿高<电话>15999670809 <公司>美联物业<分店>雷圳分行<熟悉小区>熟悉雷圳0755   华联城市山林一期   诺德国际居住区   <自我介绍>免佣代理中熙君南山,单价6.5万起88-89㎡ 压轴山景单位<姓名>罗志平<电话>13691835401 <公司>中投地产<分店>桑泰丹华分行<熟悉小区>熟悉崇文花园   水木丹华   塘朗城   <自我介绍>人生因梦想而伟大,因学习而改变,因行动而成功。<姓名>邹冬峰<电话>13699839505 <公司>Q房网<分店>前海分行<熟悉小区>熟悉星海名城六期   诺德国际居住区   中海阳光棕榈园   <自我介绍>世华地产高级职业顾问小邹,为您提供真实房源,提供规范,专业,<姓名>詹巧玲<电话>13798353959 <公司>中原地产<分店>花果山分行<熟悉小区>熟悉半山兰溪谷一期   景园大厦(蛇口)   招商桃花园   <自我介绍>中原地产:地产中介的龙头老大,现在代理深圳绝大多数的新楼盘,以及商铺,不需支付佣金,目前正在代理的热销一手楼盘有“太古城”“海境界”……<姓名>邱发娟<电话>18002566137 <公司>家家顺地产<分店>德意名居分行<熟悉小区>熟悉德意名居   西湖林语   丽岛花园   <自我介绍>对人一定要真诚,真心对待每一个有缘人!<姓名>林青松<电话>13631670431 <公司>中原地产<分店>南光路分行<熟悉小区>熟悉南油生活B区   现代城华庭   深蓝季节   <自我介绍>为您提供真实房源,做您值得信赖的经纪人,助您实现你的梦想。<姓名>单王刚<电话>13544189144 <公司>中原地产<分店>前海花园二期分行<熟悉小区>熟悉星海名城一期   山海翠庐   星海名城六期   <自我介绍>为奔波在深圳的你们找到合适的居家之所,我很乐意帮你们的事情!<姓名>刘俊阳<电话>13590468900 <公司>中原地产<分店>依云伴山分行<熟悉小区>熟悉诺德国际居住区   恒立心海湾花园   月亮湾花园   <自我介绍>★★★机会永远是留给有准备的人★★★<姓名>程根<电话>15814683193 <公司>佳兆业前海广场<分店>中海阳光玫瑰园<熟悉小区>熟悉佳兆业前海广场   中海阳光玫瑰园   诺德国际居住区   <自我介绍/><姓名>欧阳珊<电话>13640980101 <公司>中原地产<分店>花果山分行<熟悉小区>熟悉招北小区   花园城三期   招商雍景湾   <自我介绍>温馨提醒:“珍惜时间,选择可靠经纪人”。找家就要找专家!<姓名>朱正良<电话>18927486599 <公司>现代城华庭<分店>鸿瑞花园<熟悉小区>熟悉现代城华庭   鸿瑞花园   缤纷假日豪园   <自我介绍/><姓名>甘波<电话>13723471225 <公司>中原地产<分店>阳光带海滨城二期分行<熟悉小区>熟悉海怡东方花园   阳光带海滨城二期   阳光带海滨城一期   <自我介绍>yes,一切皆有可能!<姓名>牛立虎<电话>15118157959 <公司>美联物业<分店>雷圳分行<熟悉小区>熟悉诺德国际居住区   悠山美地   太子山庄二期   <自我介绍>深圳寸土如金、永远都是求大于供,只有越来越好的、没有退后的,只要合适就可以出手,我将于我最专业的知识与最诚心的态度帮您找到您想要的。<姓名>胡清林<电话>13530802309 <公司>Q房网<分店>碧海云天分行<熟悉小区>熟悉金海燕花园   碧海云天二期   碧海云天一期   <自我介绍>以诚待人,用心做事,本者:专业、规范、创值的信念助愿天下有志者都能够早日在深圳安居乐夜!<姓名>许文友<电话>13670023438 <公司>友邻国际公寓<分店>飞行员公寓<熟悉小区>熟悉友邻国际公寓   飞行员公寓   荟芳园   <自我介绍/><姓名>向红<电话>13714732303 <公司>南海玫瑰花园三期<分店>半山兰溪谷一期<熟悉小区>熟悉南海玫瑰花园三期   半山兰溪谷一期   鲸山觐海   <自我介绍/><姓名>王萌<电话>13510631863 <公司>信和自由广场<分店>鸿瑞花园<熟悉小区>熟悉信和自由广场   鸿瑞花园   蔚蓝海岸二期   <自我介绍/><姓名>郭景<电话>13556863291 <公司>中原地产<分店>麒麟分行<熟悉小区>熟悉麒麟花园三期   名家富居   豪方现代豪园   <自我介绍>以专业,诚信与服务赢得客户的再次回头! 
3 | -------------------------------------------------------------------------------- /crawler/result2.xml: -------------------------------------------------------------------------------- 1 | 2 | <经纪人><区>南山<区域>华侨城<地产公司>全部<姓名>冯志强<电话>13622396265 <公司>美联物业<分店>美联诺德分行<熟悉小区>熟悉诺德国际居住区   雷圳0755   瑞景华庭   <自我介绍>因为了解,所以专业,真诚为您服务,以诚待人!<姓名>罗武群<电话>13632793248 <公司>家家顺地产<分店>桑泰丹华府分行<熟悉小区>熟悉光大江与城   富盈都市华府   爱琴海   <自我介绍>本人从业5年专注西丽二手房,免佣代理,深莞惠一手楼,专车接送<姓名>任静<电话>13480742725 <公司>Q房网<分店>鼎太风华分行<熟悉小区>熟悉鼎太风华七期   鼎太风华三期   中海阳光玫瑰园   <自我介绍>中原地产公司高级顾问,为您提供真实房源,做您值得信赖的经纪人<姓名>王春明<电话>13823541949 <公司>中原地产<分店>华彩天成分行<熟悉小区>熟悉华联城市山林一期   创世纪滨海花园   蔚蓝海岸二期   <自我介绍>生活其实很简单,适合自己的就是最好的!您的需求就是我的使命!<姓名>邓江微<电话>13510465292 <公司>中原地产<分店>中原地产西部电子分行<熟悉小区>熟悉中海阳光棕榈园   悠然天地家园   漾日湾畔   <自我介绍>专业代理一手二手房买卖<姓名>李德全<电话>13689534585 <公司>中原地产<分店>阳光带海滨城二期分行<熟悉小区>熟悉阳光带海滨城二期   珑御府   纯海岸   <自我介绍>中原地产的品牌将会给您带来最专业和最有效率的服务<姓名>段涛古<电话>13428985008 <公司>中原地产<分店>观海台分行<熟悉小区>熟悉滨海之窗   观海台   漾日湾畔   <自我介绍>房源真实!绝对不浪费你的时间 !<姓名>李桂恒<电话>13380359952 <公司>中原地产<分店>花果山分行<熟悉小区>熟悉园景园名苑   百花苑(蛇口)   招北小区   <自我介绍>我最专业二三级房地产租售 让你省心放心 安心 用良心为你服务<姓名>温有来<电话>15986800478 <公司>中原地产<分店>光彩新天地分行<熟悉小区>熟悉阳光里雅居   缤纷假日豪园   向南瑞峰花园   <自我介绍>信任是我们交流的开始,成交不是结束,服务才是结果。<姓名>温练哪<电话>13556814927 <公司>美联物业<分店>蔚蓝海岸分行<熟悉小区>熟悉蔚蓝海岸三期   浪琴屿花园   蔚蓝海岸二期   <自我介绍>放心店铺!本从事房地产行业多年,为了节省您的时间本人郑重承诺本所录房源均为真实有效房源我的网店铺http://shenzhen.anjuke.com/shop.php?bid=118535<姓名>于清峰<电话>13603001571 <公司>Q房网<分店>星海名城分行<熟悉小区>熟悉星海名城六期   星海名城五期   中海阳光棕榈园   <自我介绍>各位房友,大家好!我是中原地产的职业经纪人,对南山楼盘信息十分熟悉,本着诚信,专业,热情的原则,为您找到温馨的家为目的,孜孜不倦为您服务!<姓名>吴丽媚<电话>13554898623 <公司>德意名居<分店>留仙居<熟悉小区>熟悉德意名居   留仙居   新华大厦(二期)   <自我介绍/><姓名>刘博<电话>15899774305 <公司>康乐园<分店>英达钰龙园<熟悉小区>熟悉康乐园   英达钰龙园   鸿瑞花园   <自我介绍/><姓名>黄小花<电话>13537787324 <公司>Q房网<分店>半岛城邦分行<熟悉小区>熟悉半岛城邦二期   半岛城邦   半岛城邦一期   <自我介绍>我专做半岛城邦一期二期,在我这买一定比其他人低20-100万<姓名>王成<电话>15818676341 <公司>中原地产<分店>天鹅堡二期分行<熟悉小区>熟悉波托菲诺纯水岸十五期   波托菲诺纯水岸一期   波托菲诺纯水岸七期   <自我介绍>您的满意就是我无限的动力,(微信号):15818676341<姓名>张帅昌<电话>13421398979 <公司>中原地产<分店>棕榈园分行<熟悉小区>熟悉   <自我介绍>公司免佣代理大量,南山,宝安,龙岗,东莞,惠洲的新楼盘.欢迎来电咨询最新优惠折扣.<<参加团购争取更底的折扣>><姓名>王丰瑜<电话>13640938977 <公司>花果山大厦<分店>园景园名苑<熟悉小区>熟悉花果山大厦   园景园名苑   景园大厦(蛇口)   <自我介绍/><姓名>胡高军<电话>13691937559 <公司>中原地产<分店>红树湾分行<熟悉小区>熟悉红树西岸   中信红树湾   京基御景东方   <自我介绍>专业服务,诚信为本,一定帮你买到满意好房子.<姓名>刘强<电话>13410830683 <公司>阳光带海滨城二期<分店>岸芷汀兰<熟悉小区>熟悉阳光带海滨城二期   岸芷汀兰   珑御府   <自我介绍/><姓名>张涛<电话>15920055667 <公司>中原地产<分店>阳光荔景分行<熟悉小区>熟悉悠然天地家园   缤纷年华   万象新园   <自我介绍>7年房地产置业经验,给我一个电话,让你享受专业的房产置业过程<姓名>谢晓燕<电话>15818796892 <公司>Q房网<分店>前海分行<熟悉小区>熟悉东方银座公馆   鼎太风华   中海阳光棕榈园   <自我介绍>本公司长期免拥代理:惠州、深圳、东莞、香港等一手楼盘,谢谢!<姓名>刘非林<电话>13554994332 <公司>Q房网<分店>半岛城邦分行<熟悉小区>熟悉半岛城邦   半岛城邦二期   半岛城邦一期   <自我介绍>Q房网高级客户经理,以最真诚的态度,最专业的服务。让您满意<姓名>梅玲<电话>15219509713 <公司>美联物业<分店>雷圳分行<熟悉小区>熟悉中海阳光玫瑰园   诺德国际居住区   光彩山居岁月   <自我介绍>只要你想要,我能以最快的速度为您找到安心的家。<姓名>杨可荣<电话>13632765670 <公司>Q房网<分店>香山里分行<熟悉小区>熟悉首地容御   西丽山庄   锦绣花园四期   <自我介绍>Q房网高级客户经理,为您提供真实房源,做您值得信赖的经纪人<姓名>孙露<电话>15099943950 <公司>深圳链家<分店>碧海云天分行<熟悉小区>熟悉金海燕花园   碧海云天二期   碧海云天一期   <自我介绍>主打华侨城周边楼盘,并保证所有的房源都真实!<姓名>刘小斌<电话>13751160340 <公司>一方天地产<分店>中海阳光玫瑰园分行<熟悉小区>熟悉中海阳光玫瑰园   依云伴山   泛海拉菲花园(别墅)   <自我介绍>前海诚信经纪人-刘小斌 为您提供最优质的服务!<姓名>王国强<电话>15899888871 <公司>蔚蓝置业<分店>蔚蓝海岸分行<熟悉小区>熟悉蔚蓝海岸二期   蔚蓝海岸一期   宝能太古城(南区)   <自我介绍>蔚蓝海岸社区租售中心!金牌置业王国强,让您省心省钱!<姓名>周健林<电话>13794492675 <公司>Q房网<分店>麒麟分行<熟悉小区>熟悉麒麟花园三期   荔林春晓   缤纷年华   <自我介绍>客户的事情就是我的事情!~<姓名>王海鸥<电话>13632664507 <公司>中原地产<分店>中原地产西部电子分行<熟悉小区>熟悉创世纪滨海花园   绿海名都   怡园大厦   <自我介绍>为你找到心仪的房子是我的责任,让你更加放心、省心。<姓名>陆协容<电话>13631578750 <公司>中原地产<分店>中原地产鼎泰分行<熟悉小区>熟悉云顶翠峰二期   云顶翠峰三期   嘉意台   <自我介绍>为您,我做到!中原地产陆协容竭诚为您服务,以专业的业务技能,高效的服务态度赢得新老客户的信赖! 3 | --------------------------------------------------------------------------------
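
For quick reference, below is a minimal sketch of how the GsExtractor class in core/gooseeker.py is driven end to end, mirroring crawler/crawl_gooseeker_bbs.py. It assumes gooseeker.py has been copied next to the script (as described in core/README) and reuses that example's forum URL and local xslt_bbs.xml template; the script name is illustrative, and the commented-out setXsltFromAPI() line shows the alternative API path, where the app key and theme name are placeholders obtained from the GooSeeker member center.

# extract_demo.py -- minimal GsExtractor usage sketch (Python 3.5+, requires lxml)
from urllib import request
from gooseeker import GsExtractor   # core/gooseeker.py copied into the working directory

extractor = GsExtractor()
extractor.setXsltFromFile("xslt_bbs.xml")                # load the local xslt template
# extractor.setXsltFromAPI("<app key>", "<theme name>")  # or fetch the rule from the GooSeeker API

html = request.urlopen("http://www.gooseeker.com/cn/forum/7").read()
result = extractor.extractHTML(html)                     # build the DOM and apply the xslt in one call
print(str(result))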