zouzias / spark-lucenerdd
Spark RDD with Lucene's query and entity linkage capabilities
☆128 · Updated last month
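For context, a minimal sketch of how spark-lucenerdd's query and entity-linkage features are typically used. The data, field name (`"_1"`), and linker function below are illustrative assumptions rather than details taken from this page; check the project README for the exact API and signatures.

```scala
import org.apache.spark.sql.SparkSession
import org.zouzias.spark.lucenerdd.LuceneRDD
import org.zouzias.spark.lucenerdd._ // implicit conversions to Lucene documents

object LuceneRDDSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("lucenerdd-sketch")
      .master("local[*]")
      .getOrCreate()
    val sc = spark.sparkContext

    // Index a small collection of names; LuceneRDD builds a Lucene index per partition.
    val people = sc.parallelize(Seq("Anna Smith", "John Doe", "Jon Doe"))
    val luceneRDD = LuceneRDD(people)

    // Term query: single-column RDDs are indexed under the "_1" field; return the top 10 hits.
    val hits = luceneRDD.termQuery("_1", "doe", 10)

    // Entity linkage (assumed signature): link an external dataset against the index,
    // generating a Lucene query string per record and keeping the top 3 candidates each.
    val queries = sc.parallelize(Seq("John Doe"))
    val linked = luceneRDD.link(queries, (q: String) => s"_1:(${q.split(" ").mkString(" OR ")})", 3)

    spark.stop()
  }
}
```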
Alternatives and similar repositories for spark-lucenerdd
Users interested in spark-lucenerdd are comparing it to the libraries listed below.
- An efficient updatable key-value store for Apache Spark ☆254 · Updated 8 years ago
- Enabling Spark Optimization through Cross-stack Monitoring and Visualization ☆47 · Updated 8 years ago
- Live-updating Spark UI built with Meteor ☆189 · Updated 4 years ago
- Secondary sort and streaming reduce for Apache Spark ☆78 · Updated 2 years ago
- Drizzle integration with Apache Spark ☆120 · Updated 7 years ago
- Support Highcharts in Apache Zeppelin ☆81 · Updated 8 years ago
- Big Data Toolkit for the JVM ☆145 · Updated 4 years ago
- Schedoscope is a scheduling framework for painfree agile development, testing, (re)loading, and monitoring of your datahub, lake, or what… ☆96 · Updated 5 years ago
- ☆92 · Updated 8 years ago
- Serverless proxy for Spark cluster ☆324 · Updated 4 years ago
- Read SparkSQL parquet file as RDD[Protobuf] ☆93 · Updated 7 years ago
- Spark SQL index for Parquet tables ☆134 · Updated 4 years ago
- Quark is a data virtualization engine over analytic databases. ☆100 · Updated 8 years ago
- something to help you spark ☆64 · Updated 7 years ago
- functionstest ☆33 · Updated 9 years ago
- flink-jpmml is a fresh-made library for dynamic real time machine learning predictions built on top of PMML standard models and Apache Fl… ☆96 · Updated 6 years ago
- Sparkline BI Accelerator provides fast ad-hoc query capability over Logical Cubes. This has been folded into our SNAP Platform(http://bit… ☆282 · Updated 7 years ago
- Druid indexing plugin for using Spark in batch jobs ☆101 · Updated 4 years ago
- Schema Registry integration for Apache Spark ☆40 · Updated 2 years ago
- Bulletproof Apache Spark jobs with fast root cause analysis of failures. ☆73 · Updated 4 years ago
- Tools for reading data from Solr as a Spark RDD and indexing objects from Spark into Solr using SolrJ. ☆445 · Updated last month
- A framework for creating composable and pluggable data processing pipelines using Apache Spark, and running them on a cluster. ☆47 · Updated 9 years ago
- A library you can include in your Spark job to validate the counters and perform operations on success. Goal is scala/java/python support… ☆108 · Updated 7 years ago
- Library for organizing batch processing pipelines in Apache Spark ☆42 · Updated 8 years ago
- Splittable Gzip codec for Hadoop ☆74 · Updated 2 weeks ago
- Low level integration of Spark and Kafka ☆130 · Updated 7 years ago
- Hadoop output committers for S3 ☆111 · Updated 5 years ago
- Profiler for large-scale distributed java applications (Spark, Scalding, MapReduce, Hive, ...) on YARN. ☆128 · Updated 7 years ago
- An example of using Avro and Parquet in Spark SQL ☆60 · Updated 9 years ago
- Scripts for parsing / making sense of yarn logs ☆52 · Updated 9 years ago