TeamHG-Memex / sitehound-frontend
Site Hound (previously THH) is a Domain Discovery Tool
☆23 · Updated 3 years ago
Alternatives and similar repositories for sitehound-frontend:
Users interested in sitehound-frontend are comparing it to the libraries listed below:
- A component that tries to avoid downloading duplicate content · ☆27 · Updated 6 years ago
- Word analysis, by domain, on the Common Crawl data set for the purpose of finding industry trends · ☆55 · Updated last year
- Extract the difference between two HTML pages · ☆32 · Updated 6 years ago
- Show a summary of a large number of URLs in a Jupyter Notebook · ☆17 · Updated 3 years ago
- Traptor -- A distributed Twitter feed · ☆26 · Updated 2 years ago
- Exporters is an extensible export pipeline library that supports filters, transforms, and several sources and destinations · ☆40 · Updated 8 months ago
- Aviation grade news article metadata extraction · ☆36 · Updated last year
- Virtual patent marking crawler at iproduct.epfl.ch · ☆14 · Updated 7 years ago
- A classifier for detecting soft 404 pages · ☆57 · Updated last year
- Paginating the web · ☆37 · Updated 11 years ago
- Small set of utilities to simplify writing Scrapy spiders · ☆49 · Updated 9 years ago
- API - extract a list of keywords from a text · ☆18 · Updated 7 years ago
- Broad crawler for domain discovery · ☆19 · Updated 6 years ago
- REST API for Text Summarization and Keywords Extraction · ☆16 · Updated 2 years ago
- Scrapy middleware for the autologin · ☆37 · Updated 6 years ago
- Easy extraction of keywords and engines from search engine results pages (SERPs) · ☆90 · Updated 3 years ago
- WebAnnotator is a tool for annotating Web pages. WebAnnotator is implemented as a Firefox extension (https://addons.mozilla.org/en-US/fi…) · ☆48 · Updated 3 years ago
- Keyword Extraction system using Brown Clustering (this version is trained to extract keywords from job listings) · ☆17 · Updated 10 years ago
- Slides to learn a little natural language processing (NLP) with Python. Written in reST with S5/Docutils · ☆28 · Updated 12 years ago
- A scalable and efficient crawler with a Docker cluster; crawls a million pages in 2 hours on a single machine · ☆97 · Updated 10 months ago
- Get user ids from social network handles · ☆12 · Updated 8 years ago
- Take streaming tweets, extract hashtags & usernames, create graph, export GraphML for Gephi visualisation · ☆34 · Updated 11 years ago
- Common data interchange format for document processing pipelines that apply natural language processing tools to large streams of text · ☆35 · Updated 8 years ago
- [UNMAINTAINED] Firefox addon for Scrapely · ☆5 · Updated 9 years ago
- General Architecture for Text Engineering · ☆48 · Updated 8 years ago
- [UNMAINTAINED] Deploy, run and monitor your Scrapy spiders · ☆11 · Updated 9 years ago
- Data science tools from Moz · ☆22 · Updated 8 years ago
- Automated NLP sentiment predictions: batteries included, or use your own data · ☆18 · Updated 7 years ago
- Pipeline for distributed Natural Language Processing, made in Python · ☆65 · Updated 8 years ago
- Processes data from images which are tagged with the specified Instagram tag · ☆13 · Updated 11 years ago