avensolutions / cdc-at-scale-using-spark
Scalable CDC Pattern Implemented using PySpark
☆18 · Updated 5 years ago
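The repository's stated topic is a scalable change-data-capture (CDC) pattern: compare two snapshots of a table and emit the inserts, updates, and deletes between them. As a rough illustration of that idea, here is a minimal, Spark-free sketch in plain Python that detects changes by hashing each row's non-key columns; the function names (`row_hash`, `detect_changes`) are illustrative and not taken from the repository.

```python
import hashlib

def row_hash(row, key):
    # Hash all non-key columns so a changed row is cheap to detect.
    payload = "|".join(str(v) for k, v in sorted(row.items()) if k != key)
    return hashlib.md5(payload.encode()).hexdigest()

def detect_changes(previous, current, key="id"):
    # Map each record's key to a hash of its payload, then diff the two maps.
    prev = {r[key]: row_hash(r, key) for r in previous}
    curr = {r[key]: row_hash(r, key) for r in current}
    inserts = [k for k in curr if k not in prev]
    deletes = [k for k in prev if k not in curr]
    updates = [k for k in curr if k in prev and curr[k] != prev[k]]
    return inserts, updates, deletes
```

At Spark scale the same comparison is typically expressed as a full outer join on the key between the two hashed snapshots, rather than in-memory dictionaries.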
Alternatives and similar repositories for cdc-at-scale-using-spark
Users interested in cdc-at-scale-using-spark are comparing it to the repositories listed below.
- Sample processing code using Spark 2.1+ and Scala ☆52 · Updated 4 years ago
- Magic to help Spark pipelines upgrade ☆35 · Updated 7 months ago
- Provide functionality to build statistical models to repair dirty tabular data in Spark ☆12 · Updated 2 years ago
- A Spark-based data comparison tool at scale which facilitates software development engineers to compare a plethora of pair combinations o… ☆51 · Updated last year
- Apache Spark ETL Utilities ☆40 · Updated 6 months ago
- Waimak is an open-source framework that makes it easier to create complex data flows in Apache Spark. ☆75 · Updated last year
- Testing Scala code with scalatest ☆12 · Updated 2 years ago
- Examples of Spark 3.0 ☆47 · Updated 4 years ago
- A dynamic data completeness and accuracy library at enterprise scale for Apache Spark ☆30 · Updated 6 months ago
- A library that brings useful functions from various modern database management systems to Apache Spark ☆58 · Updated last year
- Examples for High Performance Spark ☆15 · Updated 6 months ago
- Extensible streaming ingestion pipeline on top of Apache Spark ☆44 · Updated last year
- Scala API for Apache Spark SQL high-order functions ☆14 · Updated last year
- Apache-Spark based Data Flow(ETL) Framework which supports multiple read, write destinations of different types and also support multiple… ☆26 · Updated 3 years ago
- Spark stream from kafka(json) to s3(parquet) ☆15 · Updated 6 years ago
- Schema Registry integration for Apache Spark ☆40 · Updated 2 years ago
- ☆63 · Updated 5 years ago
- Delta Lake Examples ☆12 · Updated 5 years ago
- A Spark datasource for the HadoopOffice library ☆38 · Updated 2 years ago
- Yet Another (Spark) ETL Framework ☆21 · Updated last year
- Spark cloud integration: tests, cloud committers and more ☆19 · Updated 3 months ago
- Multi-stage, config driven, SQL based ETL framework using PySpark ☆25 · Updated 5 years ago
- Lighthouse is a library for data lakes built on top of Apache Spark. It provides high-level APIs in Scala to streamline data pipelines an… ☆61 · Updated 8 months ago
- ☆26 · Updated 4 years ago
- Bulletproof Apache Spark jobs with fast root cause analysis of failures. ☆72 · Updated 4 years ago
- type-class based data cleansing library for Apache Spark SQL ☆78 · Updated 5 years ago
- ☆14 · Updated 3 months ago
- Shunting Yard is a real-time data replication tool that copies data between Hive Metastores. ☆20 · Updated 3 years ago
- Spark-Radiant is Apache Spark Performance and Cost Optimizer ☆25 · Updated 4 months ago
- Demos for Nessie. Nessie provides Git-like capabilities for your Data Lake. ☆29 · Updated last week