dmnapolitano / stanford-thrift
A Stanford CoreNLP server, with example clients, using Apache Thrift.
☆47 · Updated 6 years ago
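The repository exposes Stanford CoreNLP over Apache Thrift, so any language with Thrift bindings can act as a client. Below is a minimal sketch of such a client in Python; the generated stub module, service and method names, and the port are assumptions for illustration only, not the repository's actual interface (see its .thrift IDL and bundled example clients for that).

```python
# Minimal sketch of a Thrift client for a CoreNLP-style server.
# Assumptions (hypothetical, for illustration): generated module "corenlp",
# service "StanfordCoreNLP", method "parse_text", server on localhost:9900.
from thrift.transport import TSocket, TTransport
from thrift.protocol import TBinaryProtocol

from corenlp import StanfordCoreNLP  # hypothetical Thrift-generated stub

# Open a buffered socket transport to the server and wrap it in the binary protocol.
transport = TTransport.TBufferedTransport(TSocket.TSocket("localhost", 9900))
protocol = TBinaryProtocol.TBinaryProtocol(transport)
client = StanfordCoreNLP.Client(protocol)

transport.open()
try:
    # Send raw text to the server; the shape of the result (parse trees,
    # tagged tokens, etc.) depends on the service definition in the IDL.
    result = client.parse_text("Stanford CoreNLP parses this sentence.")
    print(result)
finally:
    transport.close()
```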
Alternatives and similar repositories for stanford-thrift
Users interested in stanford-thrift are comparing it to the libraries listed below.
- A toolkit that wraps various natural language processing implementations behind a common interface. ☆101 · Updated 8 years ago
- RDF-Centric Map/Reduce Framework and Freebase data conversion tool ☆148 · Updated 4 years ago
- Solr Dictionary Annotator (Microservice for Spark) ☆71 · Updated 5 years ago
- NLP tools developed by Emory University. ☆61 · Updated 9 years ago
- Json Wikipedia: contains code to convert the Wikipedia XML dump into a JSON/Avro dump ☆254 · Updated 2 years ago
- Apache Pig utilities to build training corpora for machine learning / NLP out of public Wikipedia and DBpedia dumps. ☆160 · Updated 3 years ago
- ☆92 · Updated 10 years ago
- Additional OpenNLP mapping type for Elasticsearch to perform named entity recognition ☆136 · Updated 9 years ago
- Software and resources for natural language processing. ☆132 · Updated 9 years ago
- Elasticsearch Latent Semantic Indexing experimentation ☆33 · Updated 6 years ago
- DKPro WSD: A Java framework for word sense disambiguation ☆20 · Updated 3 years ago
- DBpedia.org RDF to CSV for import into Neo4j ☆52 · Updated 10 years ago
- Automatically exported from code.google.com/p/deepsyntacticparsing ☆23 · Updated 10 years ago
- The Berkeley Entity Resolution System jointly solves the problems of named entity recognition, coreference resolution, and entity linking… ☆186 · Updated 5 years ago
- ReactiveLDA is a fast, lightweight implementation of the Latent Dirichlet Allocation (LDA) algorithm, using a parallel vanilla Gibbs samp… ☆61 · Updated 10 years ago
- NLP toolkit (tokenizer, POS-tagger, parser, etc.) ☆43 · Updated 8 years ago
- Spark implementation of the Google Correlate algorithm to quickly find highly correlated vectors in huge datasets ☆92 · Updated 9 years ago
- Hadoop jobs for the WikiReverse project. Parses Common Crawl data for links to Wikipedia articles. ☆38 · Updated 7 years ago
- Linking Entities in CommonCrawl Dataset onto Wikipedia Concepts ☆59 · Updated 13 years ago
- Semanticizest: dump parser and client ☆20 · Updated 9 years ago
- Behemoth is an open source platform for large scale document analysis based on Apache Hadoop. ☆283 · Updated 7 years ago
- Mirror of Apache Stanbol (incubating) ☆114 · Updated last year
- A RESTful web service that runs microtasks across multiple crowds, provides quality control techniques, and is easily extensible. ☆52 · Updated 8 years ago
- Fast and robust NLP components implemented in Java. ☆53 · Updated 5 years ago
- Hybrid Question Answering (HAWK) is going to drive forth the OKBQA vision of a hybrid question answering system using Linked Data and fu… ☆16 · Updated 3 years ago
- Entity Linking for the masses ☆56 · Updated 10 years ago
- [NO LONGER MAINTAINED AS OPEN SOURCE - USE SCALETEXT.COM INSTEAD] ☆107 · Updated 12 years ago
- ☆20 · Updated 8 years ago
- A set of hacks to set up a DBpedia endpoint through Neo4j ☆44 · Updated 12 years ago
- Scala utilities for teaching computational linguistics and prototyping algorithms. ☆42 · Updated 12 years ago