commoncrawl / example-warc-java
☆49 · Updated 7 years ago
Alternatives and similar repositories for example-warc-java:
Users interested in example-warc-java are comparing it to the libraries listed below.
- Java port of TLSH (Trend Micro Locality Sensitive Hash) ☆20 · Updated 3 years ago
- RDF store on a cloud-based architecture (previously on https://code.google.com/p/cumulusrdf) ☆31 · Updated 8 years ago
- Storm / Solr integration ☆19 · Updated last year
- Quickly analyze and explore email with advanced analytics and visualization. ☆56 · Updated 3 years ago
- Solr Dictionary Annotator (microservice for Spark) ☆71 · Updated 5 years ago
- Extensions for and tools to work with CoreNLP ☆24 · Updated 2 years ago
- InsightEdge Core ☆20 · Updated 10 months ago
- Provides guidance on creating end-to-end solutions for common SILK use cases ☆13 · Updated 9 years ago
- Scala utilities for teaching computational linguistics and prototyping algorithms. ☆42 · Updated 12 years ago
- Fast and robust NLP components implemented in Java. ☆52 · Updated 4 years ago
- An easy-to-use and highly customizable crawler that enables you to create your own little web archives (WARC/CDX) ☆25 · Updated 7 years ago
- Alenka JDBC is a library for accessing and manipulating data with the open-source GPU database Alenka. ☆19 · Updated 10 years ago
- Compiler for writing DeepDive applications in a Datalog-like language (⚠️ repo moved to DeepDive) ☆19 · Updated 8 years ago
- Tools for exploring the contents of web archive files. ☆39 · Updated 4 years ago
- SKOS support for Apache Lucene and Solr ☆56 · Updated 3 years ago
- VizLinc ☆14 · Updated 9 years ago
- Blazegraph TinkerPop3 implementation ☆61 · Updated 4 years ago
- A library of examples showing how to use the Common Crawl corpus (2008-2012, ARC format) ☆65 · Updated 8 years ago
- Educational example of a custom Lucene Query & Scorer ☆48 · Updated 5 years ago
- Uses Apache Lucene, OpenNLP, and GeoNames to extract locations from text and geocode them. ☆36 · Updated 10 months ago
- How to spot first stories on Twitter using Storm. ☆125 · Updated last year
- Behemoth is an open source platform for large-scale document analysis based on Apache Hadoop. ☆281 · Updated 6 years ago
- Hadoop jobs for the WikiReverse project; parses Common Crawl data for links to Wikipedia articles. ☆38 · Updated 6 years ago
- Document clustering based on Latent Semantic Analysis ☆96 · Updated 14 years ago
- A Java library for stored queries ☆16 · Updated last year
- Mirror of Apache Marmotta ☆54 · Updated 4 years ago
- RDF-centric Map/Reduce framework and Freebase data conversion tool ☆148 · Updated 3 years ago
- Kafka connector for Solr sink ☆16 · Updated 8 years ago
- spark-sparql-connector ☆17 · Updated 9 years ago
- Java text categorization system ☆55 · Updated 7 years ago
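Several of the repositories above (example-warc-java itself, the web-archive exploration tools, the WikiReverse jobs) revolve around Common Crawl's WARC container format. As a rough illustration of what those libraries deal with, here is a minimal JDK-only sketch of parsing one WARC record header; the class and method names are illustrative assumptions, not code from any listed repository:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.StringReader;
import java.io.UncheckedIOException;
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of parsing a single WARC record header (ISO 28500):
// a "WARC/1.0" version line, then "Name: value" fields, terminated
// by a blank line. The record payload would follow the blank line.
public class WarcHeaderSketch {
    public static Map<String, String> parseHeader(String raw) {
        try (BufferedReader in = new BufferedReader(new StringReader(raw))) {
            String version = in.readLine();            // e.g. "WARC/1.0"
            if (version == null || !version.startsWith("WARC/")) {
                throw new IllegalArgumentException("not a WARC record: " + version);
            }
            Map<String, String> fields = new LinkedHashMap<>();
            String line;
            while ((line = in.readLine()) != null && !line.isEmpty()) {
                int colon = line.indexOf(':');
                if (colon > 0) {                       // skip malformed lines
                    fields.put(line.substring(0, colon).trim(),
                               line.substring(colon + 1).trim());
                }
            }
            return fields;
        } catch (IOException e) {
            throw new UncheckedIOException(e);         // unreachable for a StringReader
        }
    }

    public static void main(String[] args) {
        String record = "WARC/1.0\r\n"
                + "WARC-Type: response\r\n"
                + "WARC-Target-URI: http://example.com/\r\n"
                + "Content-Length: 0\r\n"
                + "\r\n";
        Map<String, String> h = parseHeader(record);
        System.out.println(h.get("WARC-Type"));        // prints "response"
    }
}
```

Real Common Crawl files are gzipped streams of many such records, so a production reader also has to handle per-record gzip members and use `Content-Length` to skip each payload; the libraries listed above take care of that.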