scrapinghub / aduana
Frontera backend to guide a crawl using PageRank, HITS, or other ranking algorithms based on the link structure of the web graph, even for large crawls (on the order of one billion pages).
☆55 · Updated last year
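As a rough illustration of the link-based ranking idea described above (a sketch only, not aduana's actual implementation), the following computes PageRank by power iteration over a small in-memory link graph; a crawl frontier could then schedule the highest-scoring pages first:

```python
# Minimal PageRank sketch (illustrative only; not aduana's code).
# Scores pages purely from the link structure of the graph.

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping page -> iterable of pages it links to."""
    pages = set(links)
    for dsts in links.values():
        pages.update(dsts)
    n = len(pages)
    rank = {page: 1.0 / n for page in pages}

    for _ in range(iterations):
        new_rank = {page: (1.0 - damping) / n for page in pages}
        for src in pages:
            dsts = links.get(src, [])
            if dsts:
                # Distribute this page's rank evenly over its out-links.
                share = damping * rank[src] / len(dsts)
                for dst in dsts:
                    new_rank[dst] += share
            else:
                # Dangling page: spread its rank evenly over all pages.
                for page in pages:
                    new_rank[page] += damping * rank[src] / n
        rank = new_rank
    return rank


if __name__ == "__main__":
    graph = {
        "a": ["b", "c"],
        "b": ["c"],
        "c": ["a"],
        "d": ["c"],
    }
    for page, score in sorted(pagerank(graph).items(), key=lambda kv: -kv[1]):
        print(f"{page}: {score:.3f}")
```

In a real crawler the graph would not fit in memory; aduana's point is to maintain such scores incrementally at crawl scale and feed them back to the frontier.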
Alternatives and similar repositories for aduana
Users interested in aduana are comparing it to the libraries listed below
- Automatic Item List Extraction ☆87 · Updated 9 years ago
- Find which links on a web page are pagination links ☆29 · Updated 8 years ago
- A Python implementation of DEPTA ☆83 · Updated 8 years ago
- High Level Kafka Scanner ☆19 · Updated 7 years ago
- Modularly extensible semantic metadata validator ☆84 · Updated 9 years ago
- NER toolkit for HTML data ☆259 · Updated last year
- Paginating the web ☆37 · Updated 11 years ago
- [UNMAINTAINED] Deploy, run and monitor your Scrapy spiders. ☆11 · Updated 10 years ago
- Small set of utilities to simplify writing Scrapy spiders. ☆49 · Updated 9 years ago
- Scrapy extension which writes crawled items to Kafka ☆30 · Updated 6 years ago
- [NO LONGER MAINTAINED AS OPEN SOURCE - USE SCALETEXT.COM INSTEAD] ☆108 · Updated 12 years ago
- Experimental parallel data analysis toolkit. ☆121 · Updated 3 years ago
- ☆18 · Updated 8 years ago
- Python implementation of the Parsley language for extracting structured data from web pages ☆92 · Updated 7 years ago
- Python search module for fast approximate string matching ☆54 · Updated 2 years ago
- WebAnnotator is a tool for annotating Web pages, implemented as a Firefox extension (https://addons.mozilla.org/en-US/fi…) ☆48 · Updated 3 years ago
- Probabilistic Data Structures in Python (originally presented at PyData 2013) ☆55 · Updated 3 years ago
- MapReduce platform in Python ☆34 · Updated 9 years ago
- Python bindings to the Compact Language Detector ☆33 · Updated 5 years ago
- Code for "Performance shootout between nearest-neighbour libraries": http://radimrehurek.com/2013/11/performance-shootout-of-nearest-neig… ☆99 · Updated 10 years ago
- Fast Python Bloom filter using mmap ☆13 · Updated 12 years ago
- A scalable and efficient crawler with a Docker cluster; crawls a million pages in 2 hours on a single machine ☆97 · Updated last year
- A component that tries to avoid downloading duplicate content ☆27 · Updated 7 years ago
- Easy extraction of keywords and engines from search engine results pages (SERPs). ☆90 · Updated 3 years ago
- Python Logging for Humans ☆119 · Updated 9 years ago
- Tool to flatten a stream of JSON-like objects, configured via schema ☆33 · Updated 5 years ago
- Unofficial git mirror of the http://svn.whoosh.ca svn repo ☆49 · Updated 15 years ago
- Collection of dask example notebooks ☆58 · Updated 7 years ago
- A simple algorithm for clustering web pages, suitable for crawlers ☆34 · Updated 8 years ago
- Scrapy middleware which allows crawling only new content ☆79 · Updated 2 years ago