scrapinghub / portia2code
☆50 · Updated 3 years ago
Alternatives and similar repositories for portia2code
Users interested in portia2code are comparing it to the libraries listed below
- Easy extraction of keywords and engines from search engine results pages (SERPs). ☆90 · Updated 3 years ago
- Simple Web UI for Scrapy spider management via Scrapyd ☆51 · Updated 6 years ago
- Find which links on a web page are pagination links ☆29 · Updated 8 years ago
- Python implementation of the Parsley language for extracting structured data from web pages ☆92 · Updated 7 years ago
- Scrapy middleware to add extra fields to items, like timestamp, response fields, spider attributes, etc. ☆56 · Updated 3 years ago
- Scrapy middleware which allows crawling only new content ☆79 · Updated 2 years ago
- A RabbitMQ Scheduler for Scrapy ☆87 · Updated 2 years ago
- A client interface for Scrapinghub's API (see the first sketch after this list) ☆206 · Updated 3 months ago
- Scrapy Eagle is a tool that allows us to run any Scrapy-based project in a distributed fashion and monitor how it is going and how many… ☆24 · Updated 4 years ago
- Scraper for categories and lists on ecommerce and other listing websites ☆42 · Updated 4 years ago
- MongoDB extensions for Scrapy ☆44 · Updated 10 years ago
- ☆223 · Updated 10 years ago
- PyQuery-based scraping micro-framework. ☆116 · Updated 3 years ago
- ☆29 · Updated 4 years ago
- A decorator to write coroutine-like spider callbacks (see the second sketch after this list). ☆109 · Updated 2 years ago
- Collection of Scrapy utilities (extensions, middlewares, pipelines, etc.) ☆32 · Updated 7 years ago
- Scrapinghub Command Line Client ☆133 · Updated last month
- A complementary proxy to help use SPM with headless browsers ☆108 · Updated 2 years ago
- A Python library to detect and extract listing data from HTML pages. ☆108 · Updated 8 years ago
- A Scrapy pipeline which sends items to an Elasticsearch server ☆98 · Updated 7 years ago
- Analyze scraped data ☆46 · Updated 5 years ago
- Splash + HAProxy + Docker Compose ☆197 · Updated 6 years ago
- A project to attempt to automatically log in to a website given a single seed ☆124 · Updated 2 years ago
- A scalable and efficient crawler with a Docker cluster; crawls a million pages in 2 hours on a single machine ☆97 · Updated last year
- Automatic Item List Extraction ☆87 · Updated 8 years ago
- A Scrapy extension to store request and response information in a storage service ☆26 · Updated 3 years ago
- A Scrapy pipeline to categorize items using MonkeyLearn ☆38 · Updated 8 years ago
- Paginating the web ☆37 · Updated 11 years ago
- ☆143 · Updated 9 years ago
- Detect and classify pagination links ☆103 · Updated 4 years ago
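
The entry above describing "A client interface for Scrapinghub's API" most likely refers to the python-scrapinghub package. A minimal sketch of how such a client is typically used, assuming that package and using placeholder API key, project id, spider name, and job key:

```python
from scrapinghub import ScrapinghubClient

# Placeholder credentials and ids; substitute your own Scrapy Cloud values.
client = ScrapinghubClient("YOUR_API_KEY")
project = client.get_project(123456)

# Schedule a spider run on Scrapy Cloud and print the resulting job key.
job = project.jobs.run("myspider")
print(job.key)

# Iterate over the items produced by a previously finished job.
finished = client.get_job("123456/1/1")
for item in finished.items.iter():
    print(item)
```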
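
The entry describing "A decorator to write coroutine-like spider callbacks" is presumably scrapy-inline-requests. A minimal sketch, assuming that package, of a callback that yields a request and receives its response inline instead of chaining a second callback (the spider name and URLs are illustrative):

```python
from inline_requests import inline_requests
from scrapy import Request, Spider


class TitlesSpider(Spider):
    name = "titles"
    start_urls = ["http://quotes.toscrape.com/"]

    @inline_requests
    def parse(self, response):
        # Yielding a Request inside a decorated callback returns the Response
        # right here, so data from two pages can be combined into one item.
        next_resp = yield Request(response.urljoin("/page/2/"))
        yield {
            "first_title": response.css("title::text").get(),
            "second_title": next_resp.css("title::text").get(),
        }
```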