nerdammer / spark-additions
Utilities for Apache Spark
☆34 · Updated 8 years ago
Related projects
Alternatives and complementary repositories for spark-additions
- A library to expose more of Apache Spark's metrics system ☆146 · Updated 4 years ago
- Big Data Toolkit for the JVM ☆145 · Updated 4 years ago
- Scripts for parsing / making sense of yarn logs ☆52 · Updated 8 years ago
- Low-level integration of Spark and Kafka ☆130 · Updated 6 years ago
- A framework for creating composable and pluggable data processing pipelines using Apache Spark, and running them on a cluster. ☆47 · Updated 8 years ago
- Support Highcharts in Apache Zeppelin ☆81 · Updated 7 years ago
- Bulletproof Apache Spark jobs with fast root cause analysis of failures. ☆72 · Updated 3 years ago
- File compaction tool that runs on top of the Spark framework. ☆59 · Updated 5 years ago
- Type-class-based data cleansing library for Apache Spark SQL ☆79 · Updated 5 years ago
- Enabling Spark Optimization through Cross-stack Monitoring and Visualization ☆47 · Updated 7 years ago
- sbt plugin for spark-submit ☆96 · Updated 7 years ago
- The iterative broadcast join example code. ☆69 · Updated 7 years ago
- Apache Spark and Apache Kafka integration example ☆124 · Updated 6 years ago
- ScalaCheck for Spark ☆63 · Updated 6 years ago
- Simple Spark example of generating table stats for use in data quality checks ☆28 · Updated 7 years ago
- something to help you spark ☆65 · Updated 6 years ago
- Hadoop output committers for S3 ☆108 · Updated 4 years ago
- Custom state store providers for Apache Spark ☆93 · Updated 2 years ago
- A Spark WordCountJob example as a standalone SBT project with Specs2 tests, runnable on Amazon EMR ☆118 · Updated 8 years ago
- An example of using Avro and Parquet in Spark SQL ☆60 · Updated 9 years ago
- Schedoscope is a scheduling framework for pain-free agile development, testing, (re)loading, and monitoring of your datahub, lake, or what… ☆96 · Updated 5 years ago
- Spark connector for SFTP ☆100 · Updated last year
- MLeap allows for easily putting Spark ML pipelines into production ☆78 · Updated 8 years ago
- A library you can include in your Spark job to validate the counters and perform operations on success. Goal is scala/java/python support… ☆108 · Updated 6 years ago
- Scala + Druid: Scruid. A library that allows you to compose queries in Scala, and parse the result back into typesafe classes. ☆115 · Updated 3 years ago
- A full example of my blog post regarding Spark's stateful streaming (http://asyncified.io/2016/07/31/exploring-stateful-streaming-with-apa…) ☆34 · Updated 7 years ago