commoncrawl / gzipstream
gzipstream allows Python to process multi-part gzip files from a streaming source
☆23 · Updated 8 years ago
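Common Crawl archives (.warc.gz) are written as many concatenated gzip members, so a plain streaming reader has to restart decompression at every member boundary. The standard-library sketch below illustrates the idea that gzipstream wraps in a file-like interface; it is a conceptual example, not the package's own API, and the function name `iter_multipart_gzip` is invented for illustration.

```python
import zlib


def iter_multipart_gzip(stream, chunk_size=64 * 1024):
    """Yield decompressed bytes from a stream of concatenated gzip members.

    Conceptual sketch only: gzipstream exposes the same behaviour behind a
    file-like object, which is more convenient for downstream parsers.
    """
    # wbits = 16 + MAX_WBITS tells zlib to expect a gzip header/trailer.
    decomp = zlib.decompressobj(16 + zlib.MAX_WBITS)
    while True:
        chunk = stream.read(chunk_size)
        if not chunk:
            break
        data = decomp.decompress(chunk)
        if data:
            yield data
        # When one gzip member ends mid-chunk, the leftover bytes belong to
        # the next member: start a fresh decompressor and feed them to it.
        while decomp.eof and decomp.unused_data:
            leftover = decomp.unused_data
            decomp = zlib.decompressobj(16 + zlib.MAX_WBITS)
            data = decomp.decompress(leftover)
            if data:
                yield data
```

Fed a non-seekable source such as `sys.stdin.buffer` or an S3 streaming body, the generator yields the decompressed bytes of every member in order, which is the case the standard `gzip` module does not handle well without seeking.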
Alternatives and similar repositories for gzipstream
Users who are interested in gzipstream are comparing it to the libraries listed below.
- High Level Kafka Scanner ☆19 · Updated 7 years ago
- Traptor -- A distributed Twitter feed ☆26 · Updated 2 years ago
- ☆16 · Updated 8 years ago
- Data science tools from Moz ☆22 · Updated 8 years ago
- Semanticizest: dump parser and client ☆20 · Updated 9 years ago
- Implicit relation extractor using a natural language model. ☆24 · Updated 7 years ago
- code and slides for my PyGotham 2016 talk, "Higher-level Natural Language Processing with textacy" ☆15 · Updated 8 years ago
- ☆24 · Updated 7 years ago
- Linking Entities in CommonCrawl Dataset onto Wikipedia Concepts ☆59 · Updated 12 years ago
- common data interchange format for document processing pipelines that apply natural language processing tools to large streams of text ☆35 · Updated 8 years ago
- WebAnnotator is a tool for annotating Web pages. WebAnnotator is implemented as a Firefox extension (https://addons.mozilla.org/en-US/fi… ☆48 · Updated 3 years ago
- framework for making streamcorpus data ☆11 · Updated 8 years ago
- Pipeline for distributed Natural Language Processing, made in Python ☆65 · Updated 8 years ago
- Extract statistics from Wikipedia Dump files. ☆26 · Updated 3 years ago
- ☆18 · Updated 8 years ago
- Deprecated Module: See Xponents or OpenSextantToolbox as active code base. ☆31 · Updated 11 years ago
- Algorithms for "schema matching" ☆26 · Updated 8 years ago
- A Cython implementation of the affine gap string distance ☆57 · Updated 2 years ago
- A python module that will check for package updates. ☆28 · Updated 3 years ago
- This is a Python binding to the tokenizer Ucto. Tokenisation is one of the first steps in almost any Natural Language Processing task, yet… ☆29 · Updated 6 months ago
- Automated NLP sentiment predictions - batteries included, or use your own data ☆18 · Updated 7 years ago
- Show summary of a large number of URLs in a Jupyter Notebook ☆17 · Updated 4 years ago
- A component that tries to avoid downloading duplicate content ☆27 · Updated 7 years ago
- Find which links on a web page are pagination links ☆29 · Updated 8 years ago
- Tools to manipulate and extract data from wikipedia dumps ☆46 · Updated 12 years ago
- Hadoop jobs for WikiReverse project. Parses Common Crawl data for links to Wikipedia articles. ☆38 · Updated 6 years ago
- Contains the implementation of algorithms that estimate the geographic location of media content based on their content and metadata. It … ☆15 · Updated 8 years ago
- brat rapid annotation tool (brat) - for all your textual annotation needs ☆10 · Updated 7 years ago
- A web application for tagging and retrieval of arguments in text ☆29 · Updated 2 years ago
- A disk-based key/value store in Python with no dependencies. ☆21 · Updated 10 years ago