commoncrawl / gzipstream
gzipstream allows Python to process multi-part gzip files from a streaming source
☆23 · Updated 8 years ago
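Common Crawl archives are concatenations of many independent gzip members arriving over a non-seekable network stream, which is the situation gzipstream was built for. The sketch below shows the underlying technique using only the standard library's `zlib.decompressobj`; it is a minimal illustration of the approach, not gzipstream's actual API, whose class and function names differ.

```python
import zlib

def stream_decompress_multipart(chunks):
    """Decompress a concatenation of gzip members arriving as raw
    byte chunks from a non-seekable source (socket, HTTP body, ...).

    Minimal stdlib sketch of multi-member streaming decompression;
    gzipstream wraps the same idea in a file-like interface.
    """
    # wbits = MAX_WBITS | 16 tells zlib to expect a gzip header.
    decomp = zlib.decompressobj(zlib.MAX_WBITS | 16)
    for chunk in chunks:
        while chunk:
            out = decomp.decompress(chunk)
            if out:
                yield out
            if decomp.eof:
                # A gzip member ended mid-chunk: restart with a fresh
                # decompressor on the leftover bytes (the next member).
                chunk = decomp.unused_data
                decomp = zlib.decompressobj(zlib.MAX_WBITS | 16)
            else:
                chunk = b""
```

The key detail is checking `decomp.eof` after each call: a plain `zlib.decompressobj` stops at the end of the first gzip member, so the leftover bytes in `unused_data` must be fed to a new decompressor for each subsequent member.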
Alternatives and similar repositories for gzipstream
Users interested in gzipstream are comparing it to the libraries listed below.
- Traptor -- A distributed Twitter feed ☆26 · Updated 3 years ago
- Keyword extraction system using Brown clustering (this version is trained to extract keywords from job listings) ☆18 · Updated 10 years ago
- A Python library for learning from dimensionality reduction, supporting sparse and dense matrices ☆78 · Updated 8 years ago
- WebAnnotator is a tool for annotating Web pages. WebAnnotator is implemented as a Firefox extension (https://addons.mozilla.org/en-US/fi… ☆48 · Updated 3 years ago
- Neural Elastic Inference and Search ☆19 · Updated 5 years ago
- Semanticizest: dump parser and client ☆20 · Updated 9 years ago
- A topic modeling toolbox ☆92 · Updated 9 years ago
- A Python binding to the tokenizer Ucto. Tokenisation is one of the first steps in almost any Natural Language Processing task, yet… ☆29 · Updated 8 months ago
- ☆16 · Updated 9 years ago
- A dataset of popular pages (taken from <dir.yahoo.com>) with manually marked-up semantic blocks ☆15 · Updated 11 years ago
- Supervised learning for novelty detection in text ☆78 · Updated 8 years ago
- Data science tools from Moz ☆23 · Updated 8 years ago
- Algorithms for "schema matching" ☆26 · Updated 9 years ago
- Relatively simple text classification powered by spaCy ☆41 · Updated 9 years ago
- Json Wikipedia contains code to convert the Wikipedia XML dump into a JSON dump. Questions? https://gitter.im/idio-opensource/Lobby ☆17 · Updated 3 years ago
- Python search module for fast approximate string matching ☆54 · Updated 2 years ago
- Deprecated module: see Xponents or OpenSextantToolbox as the active code base ☆31 · Updated 12 years ago
- Reduction is a Python script which automatically summarizes a text by extracting the sentences deemed most important ☆54 · Updated 10 years ago
- A repository for the "Combining DBpedia and Topic Modeling" GSoC 2016 idea ☆13 · Updated 9 years ago
- Find which links on a web page are pagination links ☆29 · Updated 8 years ago
- (BROKEN, help wanted) ☆15 · Updated 9 years ago
- Hadoop jobs for the WikiReverse project. Parses Common Crawl data for links to Wikipedia articles ☆38 · Updated 7 years ago
- A Cython implementation of the affine-gap string distance ☆57 · Updated 2 years ago
- An automated ingestion service for blogs to construct a corpus for NLP research ☆86 · Updated 7 years ago
- Web page segmentation and noise removal ☆55 · Updated last year
- Pipeline for distributed Natural Language Processing, made in Python ☆65 · Updated 8 years ago
- [NO LONGER MAINTAINED AS OPEN SOURCE - USE SCALETEXT.COM INSTEAD] ☆108 · Updated 12 years ago
- Frontera backend to guide a crawl using PageRank, HITS or other ranking algorithms based on the link structure of the web graph, even whe… ☆55 · Updated last year
- A tool to segment text based on frequencies and the Viterbi algorithm: "#TheBoyWhoLived" => ['#', 'The', 'Boy', 'Who', 'Lived'] ☆81 · Updated 9 years ago
- Tweets Sentiment Analyzer ☆52 · Updated 13 years ago
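The frequency-plus-Viterbi text segmenter in the list above can be sketched as a unigram dynamic program: score every split of the input by the log-probability of its words and keep the best path. The word counts below are a hypothetical toy vocabulary for illustration, not that tool's data, and its real implementation may differ.

```python
import math

# Hypothetical toy unigram counts; a real segmenter would load
# frequencies from a large corpus.
COUNTS = {"the": 500, "boy": 50, "who": 200, "lived": 30, "#": 10}
TOTAL = sum(COUNTS.values())

def segment(text):
    """Split `text` into the most probable word sequence under a
    unigram model, via Viterbi-style dynamic programming."""
    text = text.lower()
    n = len(text)
    # best[i] = (log-probability, segmentation) of text[:i]
    best = [(-math.inf, [])] * (n + 1)
    best[0] = (0.0, [])
    for i in range(1, n + 1):
        for j in range(max(0, i - 20), i):  # cap candidate word length
            word = text[j:i]
            if word in COUNTS:
                score = best[j][0] + math.log(COUNTS[word] / TOTAL)
                if score > best[i][0]:
                    best[i] = (score, best[j][1] + [word])
    return best[n][1]
```

With these counts, `segment("#theboywholived")` recovers `['#', 'the', 'boy', 'who', 'lived']` (the linked tool additionally preserves the original casing).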