heartfly / ajax_crawler
A flexible web crawler based on Scrapy for fetching Ajax-driven and many other types of web pages. Easy to use: to customize a new crawler, you just write a config file and run.
☆45 · Updated 9 years ago
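The "write a config file and run" workflow can be sketched as follows. The project's actual config schema is not documented here, so the field names below (`start_urls`, `fields`, the CSS-selector values) are illustrative assumptions, not ajax_crawler's real format; the point is only that a declarative config can drive which pages are fetched and which fields are extracted.

```python
import json

# Hypothetical config in the spirit of a config-driven Scrapy crawler.
# The keys and selector syntax are assumptions for illustration only.
CONFIG = json.loads("""
{
    "site": "example",
    "start_urls": ["https://example.com/list?page=1"],
    "fields": {
        "title": "h1::text",
        "price": ".price::text"
    }
}
""")

def build_requests(config):
    """Turn the config's start URLs into (url, extraction-rules) pairs
    that a spider's request loop could consume."""
    return [(url, config["fields"]) for url in config["start_urls"]]

requests = build_requests(CONFIG)
```

A real spider would then fetch each URL and apply the per-field selectors to the response; keeping the rules in data rather than code is what lets a new site be targeted without writing a new spider class.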
Alternatives and similar repositories for ajax_crawler:
Users interested in ajax_crawler are comparing it to the libraries listed below.
- A Taobao crawler prototype based on gevent ☆49 · Updated 11 years ago
- A distributed targeted-crawling cluster ☆71 · Updated 7 years ago
- Redis-based components for Scrapy that allow distributed crawling ☆46 · Updated 10 years ago
- A Scrapy-based Zhihu crawler ☆76 · Updated 6 years ago
- WEIBO_SCRAPY is a multi-threading Sina Weibo data extraction framework in Python. ☆154 · Updated 7 years ago
- A distributed crawler template based on scrapy-redis ☆42 · Updated 7 years ago
- Crawls Baidu Index and Alibaba Index using Selenium, stores data in HBase, with automatic CAPTCHA recognition and multi-thread control ☆32 · Updated 8 years ago
- ☆95 · Updated 10 years ago
- A distributed Sina Weibo Search spider based on Scrapy and Redis. ☆146 · Updated 11 years ago
- Obsolete (deprecated). ☆86 · Updated 7 years ago
- A Python 3 Scrapy crawler for taobao.com that imports its data into MySQL ☆21 · Updated 8 years ago
- A WeChat official-account crawler ☆42 · Updated 8 years ago
- ☆20 · Updated 8 years ago
- Scrapy spiders for various news websites ☆107 · Updated 9 years ago
- Crawler code and analysis for various kinds of Douban data, added incrementally ☆25 · Updated 10 years ago
- Crawls Sina Weibo using urllib2 and BeautifulSoup ☆69 · Updated 9 years ago
- A learning journey through Python web crawling ☆51 · Updated 7 years ago
- A Python 3 crawler for web resources in PDF or DOC format ☆27 · Updated 10 years ago
- A solution to the Knownsec (知道创宇) crawler challenge, continuously updated ☆95 · Updated 10 years ago
- jobSpider is a Scrapy crawler for fetching job-posting information ☆27 · Updated 8 years ago
- A distributed web crawler based on Scrapy and scrapy-redis that fetches property listings and floor-plan images from Sina House (新浪房产), implementing common crawler feature requirements. ☆40 · Updated 8 years ago
- A collection of web-crawling resources ☆17 · Updated 9 years ago
- Simulated Taobao login with Scrapy ☆74 · Updated 4 years ago
- Scrapy examples for crawling Zhihu and GitHub ☆224 · Updated 2 years ago
- Containers for a crawler application, including the modules Scrapy, Mongo, Celery, and RabbitMQ ☆36 · Updated 9 years ago
- Scrapes Zhihu content and user social-network information with Scrapy ☆46 · Updated 11 years ago
- Mirror of a Bitbucket project ☆43 · Updated 8 years ago
- Charlie Lyrics (查理歌词), a WeChat official account, version 1.0; currently supports fast lyric lookup. ☆67 · Updated 10 years ago
- A WeChat crawler for weixin.sogou.com, based on Scrapy ☆28 · Updated 8 years ago
- A proxy pool that scrapes free anonymous proxies and maintains its proxies' availability. ☆93 · Updated 7 years ago