Smerity / cc-warc-examples
CommonCrawl WARC/WET/WAT examples and processing code for Java + Hadoop
☆56 · Updated 3 years ago
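The repository's examples walk WARC (Web ARChive) records, each of which begins with a version line such as `WARC/1.0` followed by `Name: value` header fields. As a minimal illustration of that record layout (a standalone sketch using only the JDK; the `WarcHeaderSketch` class and `parseHeaders` method are hypothetical names, not the cc-warc-examples API):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal sketch of WARC record-header parsing. A WARC record header is a
// version line ("WARC/1.0") followed by "Name: value" fields, each line
// terminated by CRLF. Real processing (as in cc-warc-examples) would use a
// WARC reader over gzipped streams; this only shows the header structure.
public class WarcHeaderSketch {
    public static Map<String, String> parseHeaders(String rawHeader) {
        Map<String, String> headers = new LinkedHashMap<>();
        String[] lines = rawHeader.split("\r\n");
        // First line is the WARC version, e.g. "WARC/1.0"
        headers.put("WARC-Version", lines[0].trim());
        for (int i = 1; i < lines.length; i++) {
            int colon = lines[i].indexOf(':');
            if (colon > 0) {
                headers.put(lines[i].substring(0, colon).trim(),
                            lines[i].substring(colon + 1).trim());
            }
        }
        return headers;
    }

    public static void main(String[] args) {
        String record = "WARC/1.0\r\n"
                + "WARC-Type: response\r\n"
                + "WARC-Target-URI: http://example.com/\r\n"
                + "Content-Length: 1234\r\n";
        Map<String, String> h = parseHeaders(record);
        // prints: response http://example.com/
        System.out.println(h.get("WARC-Type") + " " + h.get("WARC-Target-URI"));
    }
}
```

The `Content-Length` field tells a reader how many bytes of record body follow the blank line after the headers, which is how WARC files can be split into records sequentially.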
Alternatives and similar repositories for cc-warc-examples:
Users interested in cc-warc-examples are comparing it to the libraries listed below.
- Mirror of Apache Stanbol (incubating) ☆112 · Updated last year
- Common web archive utility code. ☆55 · Updated 3 weeks ago
- Solr Dictionary Annotator (Microservice for Spark) ☆71 · Updated 5 years ago
- Behemoth is an open source platform for large scale document analysis based on Apache Hadoop. ☆281 · Updated 6 years ago
- RDF-Centric Map/Reduce Framework and Freebase data conversion tool ☆148 · Updated 3 years ago
- Elasticsearch Latent Semantic Indexing experimentation ☆33 · Updated 5 years ago
- Additional opennlp mapping type for elasticsearch in order to perform named entity recognition ☆136 · Updated 8 years ago
- SIREn - Semi-Structured Information Retrieval Engine ☆107 · Updated 3 years ago
- DKPro C4CorpusTools is a collection of tools for processing CommonCrawl corpus, including Creative Commons license detection, boilerplate… ☆52 · Updated 4 years ago
- SKOS Support for Apache Lucene and Solr ☆56 · Updated 3 years ago
- A text tagger based on Lucene / Solr, using FST technology ☆176 · Updated last year
- Search a single field with different query time analyzers in Solr ☆25 · Updated 5 years ago
- ☆49 · Updated 8 years ago
- Solr Query Segmenter for structuring unstructured queries ☆21 · Updated 3 years ago
- WARC (Web Archive) Input and Output Formats for Hadoop ☆35 · Updated 10 years ago
- A toolkit that wraps various natural language processing implementations behind a common interface. ☆101 · Updated 7 years ago
- Warcbase is an open-source platform for managing and analyzing web archives ☆162 · Updated 7 years ago
- NLP tools developed by Emory University. ☆60 · Updated 8 years ago
- Hadoop jobs for WikiReverse project. Parses Common Crawl data for links to Wikipedia articles. ☆38 · Updated 6 years ago
- Java library for reading and writing WARC files with a typed API ☆48 · Updated 3 months ago
- CommonCrawl WARC/WET/WAT examples and processing code for Java + Hadoop ☆38 · Updated 3 months ago
- An open-source data management platform for knowledge workers (https://github.com/dswarm/dswarm-documentation/wiki) ☆54 · Updated 7 years ago
- The Common Crawl Crawler Engine and Related MapReduce code (2008-2012) ☆213 · Updated 2 years ago
- Software and resources for natural language processing. ☆131 · Updated 8 years ago
- ☆71 · Updated 7 years ago
- Various tools for the DBpedia project - This does NOT contain the DBpedia extraction framework ☆101 · Updated 11 months ago
- Tools for bulk indexing of WARC/ARC files on Hadoop, EMR or local file system. ☆44 · Updated 7 years ago
- Launch AWS Elastic MapReduce jobs that process Common Crawl data. ☆49 · Updated 8 years ago
- Extract statistics from Wikipedia Dump files. ☆26 · Updated 3 years ago
- ☆33 · Updated 10 years ago