cerndb / sparkMeasure
This is a mirror of https://github.com/LucaCanali/sparkMeasure - sparkMeasure is a tool for performance troubleshooting of Apache Spark workloads. It simplifies the collection and analysis of Spark task metrics.
☆ 14 · Updated last year
Alternatives and similar repositories for sparkMeasure:
Users interested in sparkMeasure are comparing it to the libraries listed below.
- Spark Structured Streaming State Tools ☆ 34 · Updated 4 years ago
- Schema Registry integration for Apache Spark ☆ 40 · Updated 2 years ago
- A type-class-based data cleansing library for Apache Spark SQL ☆ 78 · Updated 5 years ago
- Waimak is an open-source framework that makes it easier to create complex data flows in Apache Spark. ☆ 75 · Updated 11 months ago
- Spark-Radiant is an Apache Spark performance and cost optimizer. ☆ 25 · Updated 3 months ago
- Magic to help Spark pipelines upgrade ☆ 34 · Updated 6 months ago
- functionstest ☆ 33 · Updated 8 years ago
- A Spark datasource for the HadoopOffice library ☆ 38 · Updated 2 years ago
- Circus Train is a dataset replication tool that copies Hive tables between clusters and clouds. ☆ 88 · Updated last year
- Extensible streaming ingestion pipeline on top of Apache Spark ☆ 44 · Updated last year
- ☆ 63 · Updated 5 years ago
- Bulletproof Apache Spark jobs with fast root-cause analysis of failures ☆ 72 · Updated 4 years ago
- A library that brings useful functions from various modern database management systems to Apache Spark ☆ 58 · Updated last year
- Dione, a Spark and HDFS indexing library ☆ 51 · Updated last year
- Shunting Yard is a real-time data replication tool that copies data between Hive Metastores. ☆ 20 · Updated 3 years ago
- ☆ 102 · Updated 5 years ago
- Custom state store providers for Apache Spark ☆ 92 · Updated last month
- ACID data source for Apache Spark based on Hive ACID ☆ 97 · Updated 3 years ago
- Code snippets used in demos recorded for the blog ☆ 30 · Updated this week
- Support for Highcharts in Apache Zeppelin ☆ 81 · Updated 7 years ago
- An opinionated auto-deployer for the Hortonworks Platform ☆ 34 · Updated 4 years ago
- Splittable gzip codec for Hadoop ☆ 70 · Updated last month
- Enabling Spark optimization through cross-stack monitoring and visualization ☆ 47 · Updated 7 years ago
- The iterative broadcast join example code ☆ 69 · Updated 7 years ago
- Sample processing code using Spark 2.1+ and Scala ☆ 51 · Updated 4 years ago
- Task Metrics Explorer ☆ 13 · Updated 6 years ago
- Cascading on Apache Flink® ☆ 54 · Updated last year
- Spooker is a dynamic framework for processing high-volume data streams via processing pipelines. ☆ 29 · Updated 9 years ago
- Rokku project. This project acts as a proxy on top of any S3 storage solution, providing services like authentication, authorization, shor… ☆ 66 · Updated last month
- Spark cloud integration: tests, cloud committers and more ☆ 19 · Updated 2 months ago