Cargill / pipewrench
Data pipeline automation tool
☆26 · Updated last year
Alternatives and similar repositories for pipewrench
Users interested in pipewrench are comparing it to the libraries listed below.
- Bulletproof Apache Spark jobs with fast root cause analysis of failures. ☆73 · Updated 4 years ago
- Spark structured streaming with Kafka data source and writing to Cassandra ☆62 · Updated 6 years ago
- Support Highcharts in Apache Zeppelin ☆81 · Updated 8 years ago
- Lighthouse is a library for data lakes built on top of Apache Spark. It provides high-level APIs in Scala to streamline data pipelines an… ☆62 · Updated last year
- File compaction tool that runs on top of the Spark framework. ☆59 · Updated 6 years ago
- Enabling Spark Optimization through Cross-stack Monitoring and Visualization ☆47 · Updated 8 years ago
- A library you can include in your Spark job to validate the counters and perform operations on success. Goal is scala/java/python support… ☆108 · Updated 7 years ago
- Scripts for parsing / making sense of YARN logs ☆52 · Updated 9 years ago
- Schema Registry integration for Apache Spark ☆40 · Updated 3 years ago
- functionstest ☆33 · Updated 9 years ago
- Schedoscope is a scheduling framework for painfree agile development, testing, (re)loading, and monitoring of your datahub, lake, or what… ☆96 · Updated 6 years ago
- Demonstrates NiFi template deployment and configuration via a REST API ☆70 · Updated 8 years ago
- An Apache access log parser written in Scala ☆73 · Updated 4 years ago
- Low level integration of Spark and Kafka ☆130 · Updated 7 years ago
- Apache Spark and Apache Kafka integration example ☆124 · Updated 8 years ago
- Utilities for writing tests that use Apache Spark. ☆24 · Updated 6 years ago
- Complete Pipeline Training at Big Data Scala By the Bay ☆71 · Updated 10 years ago
- Build configuration-driven ETL pipelines on Apache Spark ☆162 · Updated 3 years ago
- Big Data Toolkit for the JVM ☆145 · Updated 5 years ago
- Custom state store providers for Apache Spark ☆92 · Updated 10 months ago
- A Spark Streaming job reading events from Amazon Kinesis and writing event counts to DynamoDB ☆93 · Updated 5 years ago
- ☆33 · Updated 9 years ago
- Example project to show how to use Spark to read and write Avro/Parquet files ☆50 · Updated 12 years ago
- Streaming Analytics platform, built with Apache Flink and Kafka ☆35 · Updated 2 years ago
- A super simple utility for testing Apache Hive scripts locally for non-Java developers. ☆73 · Updated 8 years ago
- An example of using Avro and Parquet in Spark SQL ☆60 · Updated 10 years ago
- Random implementation notes ☆33 · Updated 12 years ago
- A framework for creating composable and pluggable data processing pipelines using Apache Spark, and running them on a cluster. ☆47 · Updated 9 years ago
- Kite SDK Examples ☆99 · Updated 4 years ago
- Simplify getting Zeppelin up and running ☆56 · Updated 9 years ago