avensolutions / cdc-at-scale-using-spark
Scalable CDC Pattern Implemented using PySpark
☆18 · Updated 5 years ago
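Since the listing only gives a one-line summary, here is a minimal, hedged sketch of what a hash-based CDC comparison in PySpark can look like. It is not the repository's implementation: the snapshot paths, the `customer_id` key column, and the overall flow are illustrative assumptions.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("cdc-hash-sketch").getOrCreate()

# Hypothetical snapshots: the previously loaded state and the latest source extract.
previous = spark.read.parquet("/data/customers/previous")  # assumed path
current = spark.read.parquet("/data/customers/current")    # assumed path

key_cols = ["customer_id"]  # assumed business key
attr_cols = [c for c in current.columns if c not in key_cols]

def with_row_hash(df):
    # Hash every non-key attribute so a changed row can be detected with one comparison.
    return df.withColumn(
        "row_hash",
        F.sha2(F.concat_ws("||", *[F.col(c).cast("string") for c in attr_cols]), 256),
    )

prev_hashed = with_row_hash(previous).select(*key_cols, F.col("row_hash").alias("prev_hash"))
curr_hashed = with_row_hash(current)

# Full outer join on the key exposes inserts, updates, and deletes in one pass.
joined = curr_hashed.join(prev_hashed, on=key_cols, how="full_outer")

inserts = joined.where(F.col("prev_hash").isNull() & F.col("row_hash").isNotNull())
updates = joined.where(
    F.col("prev_hash").isNotNull()
    & F.col("row_hash").isNotNull()
    & (F.col("prev_hash") != F.col("row_hash"))
)
deletes = joined.where(F.col("row_hash").isNull())
```

The three resulting frames could then be written out as insert/update/delete records or merged into a target table; the actual repository may structure the pattern quite differently.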
Alternatives and similar repositories for cdc-at-scale-using-spark
Users interested in cdc-at-scale-using-spark are comparing it to the libraries listed below.
- Provides functionality to build statistical models to repair dirty tabular data in Spark ☆12 · Updated 2 years ago
- Sample processing code using Spark 2.1+ and Scala ☆51 · Updated 4 years ago
- Schema Registry integration for Apache Spark ☆40 · Updated 2 years ago
- Magic to help Spark pipelines upgrade ☆35 · Updated 8 months ago
- Demonstration of a Hive Input Format for Iceberg ☆26 · Updated 4 years ago
- Examples of Spark 3.0 ☆47 · Updated 4 years ago
- Apache Spark-based Data Flow (ETL) Framework which supports multiple read and write destinations of different types and also supports multiple… ☆26 · Updated 4 years ago
- ☆14 · Updated 3 weeks ago
- Apache Spark ETL Utilities ☆40 · Updated 8 months ago
- Waimak is an open-source framework that makes it easier to create complex data flows in Apache Spark. ☆75 · Updated last year
- A Spark datasource for the HadoopOffice library ☆38 · Updated 2 years ago
- A Spark-based data comparison tool at scale which helps software development engineers compare a plethora of pair combinations o… ☆51 · Updated last week
- Bulletproof Apache Spark jobs with fast root cause analysis of failures. ☆72 · Updated 4 years ago
- Extensible streaming ingestion pipeline on top of Apache Spark ☆45 · Updated last week
- Yet Another (Spark) ETL Framework ☆21 · Updated last year
- A library that brings useful functions from various modern database management systems to Apache Spark ☆59 · Updated last year
- Shunting Yard is a real-time data replication tool that copies data between Hive Metastores. ☆20 · Updated 3 years ago
- Circus Train is a dataset replication tool that copies Hive tables between clusters and clouds. ☆88 · Updated last year
- Type-class-based data cleansing library for Apache Spark SQL ☆78 · Updated 6 years ago
- Testing Scala code with scalatest ☆12 · Updated 2 years ago
- A dynamic data completeness and accuracy library at enterprise scale for Apache Spark ☆29 · Updated 7 months ago
- Nested Data (JSON/AVRO/XML) Parsing and Flattening in Spark ☆16 · Updated last year
- Basic framework utilities to quickly start writing production-ready Apache Spark applications ☆36 · Updated 6 months ago
- Spark UDFs to deserialize Avro messages with schemas stored in Schema Registry. ☆20 · Updated 7 years ago
- Multi-stage, config-driven, SQL-based ETL framework using PySpark ☆25 · Updated 5 years ago
- Dione - a Spark and HDFS indexing library ☆52 · Updated last year
- Utilities for writing tests that use Apache Spark. ☆24 · Updated 6 years ago
- Spark on Kubernetes using Helm ☆34 · Updated 5 years ago
- Spark structured streaming with Kafka data source and writing to Cassandra ☆62 · Updated 5 years ago
- Flink Examples ☆39 · Updated 9 years ago