commoncrawl / gzipstream
gzipstream allows Python to process multi-part gzip files from a streaming source
☆23 · Updated 8 years ago
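gzipstream's own API is not shown on this page, so as an illustration of the underlying technique — decompressing a stream made of multiple concatenated gzip members (as Common Crawl archives are) without a seekable file — here is a minimal sketch using only the standard library's `zlib`. The function name and chunk size are this sketch's own choices, not gzipstream's interface:

```python
import gzip
import io
import zlib

def iter_multistream_gzip(stream, chunk_size=64 * 1024):
    """Yield decompressed chunks from a non-seekable stream that may
    contain several concatenated gzip members."""
    # wbits = MAX_WBITS | 16 tells zlib to expect a gzip header/trailer.
    decomp = zlib.decompressobj(zlib.MAX_WBITS | 16)
    while True:
        raw = stream.read(chunk_size)
        if not raw:
            break
        data = decomp.decompress(raw)
        if data:
            yield data
        # When one gzip member ends mid-chunk, unused_data holds the start
        # of the next member; restart a fresh decompressor on those bytes.
        while decomp.eof and decomp.unused_data:
            raw = decomp.unused_data
            decomp = zlib.decompressobj(zlib.MAX_WBITS | 16)
            data = decomp.decompress(raw)
            if data:
                yield data

# Example: two gzip members back to back, read as one stream.
blob = gzip.compress(b"first\n") + gzip.compress(b"second\n")
out = b"".join(iter_multistream_gzip(io.BytesIO(blob)))
```

The key detail is the inner loop over `unused_data`: a single `zlib` decompressor stops at the end of the first gzip member, so each member boundary requires a new decompressor seeded with the leftover bytes.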
Alternatives and similar repositories for gzipstream
Users interested in gzipstream are comparing it to the libraries listed below.
- Traptor -- A distributed Twitter feed ☆26 · Updated 3 years ago
- A Python library for learning from dimensionality reduction, supporting sparse and dense matrices. ☆78 · Updated 8 years ago
- Reduction is a python script which automatically summarizes a text by extracting the sentences which are deemed to be most important. ☆54 · Updated 10 years ago
- Relatively simple text classification powered by spaCy ☆41 · Updated 10 years ago
- Data science tools from Moz ☆23 · Updated 8 years ago
- Collects multimedia content shared through social networks. ☆19 · Updated 10 years ago
- Neural Elastic Inference and Search ☆19 · Updated 5 years ago
- Contains the implementation of algorithms that estimate the geographic location of media content based on their content and metadata. It … ☆15 · Updated 9 years ago
- ☆24 · Updated 7 years ago
- Algorithms for "schema matching" ☆26 · Updated 9 years ago
- A dataset of popular pages (taken from <dir.yahoo.com>) with manually marked up semantic blocks. ☆15 · Updated 11 years ago
- Deprecated Module: See Xponents or OpenSextantToolbox as active code base. ☆31 · Updated 12 years ago
- Keyword Extraction system using Brown Clustering - (This version is trained to extract keywords from job listings) ☆18 · Updated 11 years ago
- 💫 Runtime performance comparison of spaCy against other NLP libraries ☆20 · Updated 3 years ago
- This is a Python binding to the tokenizer Ucto. Tokenisation is one of the first steps in almost any Natural Language Processing task, yet… ☆30 · Updated 10 months ago
- A Topic Modeling toolbox ☆92 · Updated 9 years ago
- Hadoop jobs for the WikiReverse project. Parses Common Crawl data for links to Wikipedia articles. ☆38 · Updated 7 years ago
- (BROKEN, help wanted) ☆15 · Updated 9 years ago
- Similarity search on Wikipedia using gensim in Python. ☆60 · Updated 6 years ago
- Code for "Performance shootout between nearest-neighbour libraries": http://radimrehurek.com/2013/11/performance-shootout-of-nearest-neig… ☆98 · Updated 10 years ago
- stav text annotation visualiser ☆34 · Updated 13 years ago
- ☆16 · Updated 9 years ago
- Json Wikipedia, contains code to convert the Wikipedia xml dump into a json dump. Questions? https://gitter.im/idio-opensource/Lobby ☆17 · Updated 3 years ago
- Supervised learning for novelty detection in text ☆78 · Updated 9 years ago
- Semanticizest: dump parser and client ☆20 · Updated 9 years ago
- [NO LONGER MAINTAINED AS OPEN SOURCE - USE SCALETEXT.COM INSTEAD] ☆107 · Updated 12 years ago
- An attempt at creating a silver/gold standard dataset for backtesting yesterday & today's content-extractors ☆35 · Updated 10 years ago
- Tweets Sentiment Analyzer ☆52 · Updated 13 years ago
- Raw Wikipedia counts for entity linking ☆19 · Updated 8 years ago
- An Apache Lucene TokenFilter that uses word2vec vectors for term expansion. ☆24 · Updated 11 years ago