commoncrawl / gzipstream
gzipstream allows Python to process multi-part gzip files from a streaming source
☆23 · Updated 7 years ago
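Common Crawl archives are stored as many concatenated gzip members, which plain single-member decompression stops after the first of. The multi-member streaming technique can be sketched with the standard library's `zlib` alone (a minimal sketch, not gzipstream's actual API; `iter_gunzip` is a hypothetical helper name):

```python
import zlib

def iter_gunzip(chunks):
    """Decompress a multi-member gzip stream delivered as byte chunks,
    yielding decompressed bytes as they become available.

    When one gzip member ends, any leftover bytes belong to the next
    member, so a fresh decompressor is started on them.
    """
    decomp = zlib.decompressobj(16 + zlib.MAX_WBITS)  # expect gzip framing
    for chunk in chunks:
        data = chunk
        while data:
            out = decomp.decompress(data)
            if out:
                yield out
            if decomp.eof:
                # Member boundary hit mid-stream: restart on the remainder.
                data = decomp.unused_data
                decomp = zlib.decompressobj(16 + zlib.MAX_WBITS)
            else:
                data = b""
```

Feeding it two concatenated members split into arbitrary chunks yields the full payload, which is the property a seek-free streaming source (such as an HTTP response body) needs.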
Alternatives and similar repositories for gzipstream:
Users interested in gzipstream are comparing it to the libraries listed below.
- ☆16 · Updated 8 years ago
- Semanticizest: dump parser and client ☆20 · Updated 8 years ago
- Traptor: a distributed Twitter feed ☆26 · Updated 2 years ago
- Hidden alignment conditional random field for classifying string pairs ☆24 · Updated 4 months ago
- GraphPipe helpers for TensorFlow ☆22 · Updated 6 years ago
- Learning String Alignments for Entity Aliases ☆37 · Updated 5 years ago
- Code and slides for my PyGotham 2016 talk, "Higher-level Natural Language Processing with textacy" ☆15 · Updated 8 years ago
- Framework for making streamcorpus data ☆11 · Updated 7 years ago
- Collects multimedia content shared through social networks ☆19 · Updated 10 years ago
- Common data interchange format for document processing pipelines that apply natural language processing tools to large streams of text ☆35 · Updated 8 years ago
- Entity Linking for the masses ☆56 · Updated 9 years ago
- Data science tools from Moz ☆22 · Updated 8 years ago
- Similarity search on Wikipedia using gensim in Python ☆60 · Updated 6 years ago
- Neural Elastic Inference and Search ☆19 · Updated 5 years ago
- Pipeline for distributed Natural Language Processing, made in Python ☆65 · Updated 8 years ago
- Keyword extraction system using Brown clustering (this version is trained to extract keywords from job listings) ☆17 · Updated 10 years ago
- Linking entities in the CommonCrawl dataset onto Wikipedia concepts ☆59 · Updated 12 years ago
- This is a Python binding to the tokenizer Ucto. Tokenisation is one of the first steps in almost any Natural Language Processing task, yet… ☆29 · Updated 2 months ago
- Implicit relation extractor using a natural language model ☆25 · Updated 6 years ago
- A dataset of popular pages (taken from <dir.yahoo.com>) with manually marked-up semantic blocks ☆15 · Updated 11 years ago
- Extract the difference between two HTML pages ☆32 · Updated 6 years ago
- A web application for tagging and retrieval of arguments in text ☆29 · Updated last year
- Ranking Entity Types using the Web of Data ☆30 · Updated 8 years ago
- Hadoop jobs for the WikiReverse project; parses Common Crawl data for links to Wikipedia articles ☆38 · Updated 6 years ago
- Algorithms for "schema matching" ☆26 · Updated 8 years ago
- An attempt at creating a silver/gold standard dataset for backtesting yesterday's and today's content extractors ☆34 · Updated 9 years ago
- Scrapy extension that writes crawled items to Kafka ☆30 · Updated 6 years ago
- Automated NLP sentiment predictions: batteries included, or use your own data ☆18 · Updated 7 years ago
- Algorithms for URL classification ☆19 · Updated 9 years ago
- Deprecated module: see Xponents or OpenSextantToolbox for the active code base ☆31 · Updated 11 years ago