isarn / isarn-sketches-spark
Routines and data structures for using isarn-sketches idiomatically in Apache Spark
☆29 · Updated 8 months ago
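isarn-sketches provides t-digest quantile sketches, and this project exposes them as Spark aggregators. For context only, the hedged sketch below uses Spark's built-in approxQuantile (not this library's API) to show the kind of single-pass approximate-quantile query the library targets; isarn-sketches-spark differs in that its t-digest results are mergeable sketch objects that can be stored and combined across jobs.

```scala
import org.apache.spark.sql.SparkSession

// Context sketch using stock Spark only: one-shot approximate quantiles over a column.
// The column name "latencyMs" and the error bound are illustrative assumptions.
val spark = SparkSession.builder()
  .appName("quantile-context")
  .master("local[*]") // local master so the sketch runs standalone
  .getOrCreate()
import spark.implicits._

val df = (1 to 1000000).toDF("latencyMs")

// Median and p95 with a 1% relative-error target, computed in a single pass.
val Array(p50, p95) = df.stat.approxQuantile("latencyMs", Array(0.5, 0.95), 0.01)
println(s"p50=$p50 p95=$p95")
```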
Alternatives and similar repositories for isarn-sketches-spark:
Users interested in isarn-sketches-spark are comparing it to the libraries listed below.
- Joins for skewed datasets in Spark ☆57 · Updated 7 years ago
- something to help you spark ☆65 · Updated 6 years ago
- Miscellaneous functionality for manipulating Apache Spark RDDs. ☆22 · Updated 6 years ago
- A framework for creating composable and pluggable data processing pipelines using Apache Spark, and running them on a cluster. ☆47 · Updated 8 years ago
- Spark Structured Streaming State Tools ☆34 · Updated 4 years ago
- type-class based data cleansing library for Apache Spark SQL ☆79 · Updated 5 years ago
- The iterative broadcast join example code (see the join sketch after this list). ☆69 · Updated 7 years ago
- Secondary sort and streaming reduce for Apache Spark ☆78 · Updated last year
- Bulletproof Apache Spark jobs with fast root cause analysis of failures. ☆72 · Updated 3 years ago
- Drizzle integration with Apache Spark ☆120 · Updated 6 years ago
- Enabling Spark Optimization through Cross-stack Monitoring and Visualization ☆47 · Updated 7 years ago
- A quotation-based Scala DSL for scalable data analysis. ☆63 · Updated 2 years ago
- Scala + Druid: Scruid. A library that allows you to compose queries in Scala, and parse the result back into typesafe classes. ☆115 · Updated 3 years ago
- Custom state store providers for Apache Spark ☆92 · Updated 2 years ago
- Deriving Spark DataFrame schemas from case classes (see the schema sketch after this list) ☆44 · Updated 7 months ago
- ☆104 · Updated last year
- A library you can include in your Spark job to validate the counters and perform operations on success. Goal is scala/java/python support… ☆109 · Updated 6 years ago
- Utilities for Apache Spark ☆34 · Updated 8 years ago
- functionstest ☆33 · Updated 8 years ago
- A library to expose more of Apache Spark's metrics system ☆146 · Updated 5 years ago
- A collection of Apache Parquet add-on modules ☆31 · Updated this week
- Read SparkSQL parquet file as RDD[Protobuf] ☆93 · Updated 6 years ago
- Writing application logic for Spark jobs that can be unit-tested without a SparkContext ☆76 · Updated 6 years ago
- Bucketing and partitioning system for Parquet ☆29 · Updated 6 years ago
- Splittable Gzip codec for Hadoop ☆69 · Updated this week
- ☆22 · Updated 10 years ago
- ScalaCheck for Spark ☆63 · Updated 6 years ago
- Hadoop output committers for S3 ☆108 · Updated 4 years ago
- Schema Registry integration for Apache Spark ☆40 · Updated 2 years ago
- flink-jpmml is a fresh-made library for dynamic real time machine learning predictions built on top of PMML standard models and Apache Fl… ☆96 · Updated 5 years ago
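Two entries above lend themselves to brief illustrations using only stock Spark APIs. Both are hedged sketches with hypothetical column and type names, not code from the listed repositories.

Iterative broadcast join: when the smaller side of a skewed join is still too large to broadcast whole, it can be split into chunks, each chunk broadcast-joined against the large side, and the partial results unioned.

```scala
import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.functions.{broadcast, col, hash, lit, pmod}

// Split the medium-sized side into numChunks pieces, broadcast-join each piece
// against the large side, and union the partial results.
// The join key "id" is a hypothetical column name.
def iterativeBroadcastJoin(large: DataFrame, medium: DataFrame, numChunks: Int): DataFrame = {
  val chunked = medium.withColumn("chunk", pmod(hash(col("id")), lit(numChunks)))
  (0 until numChunks)
    .map(i => large.join(broadcast(chunked.filter(col("chunk") === i).drop("chunk")), Seq("id")))
    .reduce(_ union _)
}
```

Deriving a DataFrame schema from a case class: Spark itself can produce a StructType from a Product type through its encoders; the dedicated library listed above presumably layers its own derivation and customization on top of the same idea.

```scala
import org.apache.spark.sql.Encoders
import org.apache.spark.sql.types.StructType

// Hypothetical record type for illustration.
case class Event(userId: Long, action: String, ts: java.sql.Timestamp)

// Spark's built-in schema derivation from a case class.
val schema: StructType = Encoders.product[Event].schema
println(schema.treeString)
```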