sparsecode / DaFlow
An Apache Spark-based data flow (ETL) framework that supports multiple read and write destinations of different types, as well as multiple categories of transformation rules.
☆26 · Updated 3 years ago
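The pattern such a framework abstracts is a configurable read → transform → write pipeline. The sketch below illustrates that pattern in plain Spark/Scala; the case classes, method names, and file paths are illustrative assumptions, not DaFlow's actual API.

```scala
// Minimal sketch of a config-driven read -> transform -> write flow.
// All names here (Source, Sink, Rule, run) are hypothetical, not DaFlow's API.
import org.apache.spark.sql.{DataFrame, SparkSession}
import org.apache.spark.sql.functions.col

object EtlSketch {

  // One source, one sink, and an ordered chain of transformation rules.
  final case class Source(format: String, path: String)
  final case class Sink(format: String, path: String, mode: String = "overwrite")
  type Rule = DataFrame => DataFrame

  def run(spark: SparkSession, source: Source, rules: Seq[Rule], sink: Sink): Unit = {
    // Read from the configured source.
    val input = spark.read.format(source.format).load(source.path)

    // Apply each transformation rule in order.
    val output = rules.foldLeft(input)((df, rule) => rule(df))

    // Write to the configured destination.
    output.write.format(sink.format).mode(sink.mode).save(sink.path)
  }

  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("etl-sketch").master("local[*]").getOrCreate()

    // Example: read Parquet, filter and project, write ORC.
    // Paths and column names are placeholders.
    val rules: Seq[Rule] = Seq(
      _.filter(col("status") === "active"),
      _.select("id", "status", "updated_at")
    )
    run(spark, Source("parquet", "/tmp/in"), rules, Sink("orc", "/tmp/out"))
    spark.stop()
  }
}
```

A framework in this style typically moves the `Source`, `Sink`, and rule definitions out of code and into configuration, so pipelines can be assembled declaratively rather than rewritten per job.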
Alternatives and similar repositories for DaFlow:
Users interested in DaFlow are comparing it to the libraries listed below.
- Schema Registry integration for Apache Spark · ☆40 · Updated 2 years ago
- Apache Spark ETL Utilities · ☆40 · Updated 4 months ago
- A Spark datasource for the HadoopOffice library · ☆38 · Updated 2 years ago
- ☆39 · Updated 6 years ago
- Yet Another Spark SQL JDBC/ODBC server based on the PostgreSQL V3 protocol · ☆34 · Updated 2 years ago
- A temporary home for LinkedIn's changes to Apache Iceberg (incubating) · ☆62 · Updated 3 months ago
- Sample processing code using Spark 2.1+ and Scala · ☆51 · Updated 4 years ago
- Bulletproof Apache Spark jobs with fast root cause analysis of failures. · ☆72 · Updated 4 years ago
- A library that brings useful functions from various modern database management systems to Apache Spark · ☆58 · Updated last year
- UberScriptQuery, a SQL-like DSL to make writing Spark jobs super easy · ☆62 · Updated last year
- Flink Examples · ☆39 · Updated 8 years ago
- Waimak is an open-source framework that makes it easier to create complex data flows in Apache Spark. · ☆75 · Updated 10 months ago
- A sink to save Spark Structured Streaming DataFrame into Hive table · ☆23 · Updated 6 years ago
- A dynamic data completeness and accuracy library at enterprise scale for Apache Spark · ☆30 · Updated 4 months ago
- type-class based data cleansing library for Apache Spark SQL · ☆78 · Updated 5 years ago
- ☆26 · Updated 3 years ago
- Examples of Spark 3.0 · ☆47 · Updated 4 years ago
- A Spark-based data comparison tool at scale which facilitates software development engineers to compare a plethora of pair combinations o… · ☆50 · Updated last year
- A light Kafka to HDFS/S3 ETL library based on Apache Spark · ☆41 · Updated 7 years ago
- Yet Another (Spark) ETL Framework · ☆20 · Updated last year
- Scalable CDC Pattern Implemented using PySpark · ☆18 · Updated 5 years ago
- Lab project to showcase Flink's performance differences between using a SQL query and implementing the same logic via the DataStream API · ☆14 · Updated 4 years ago
- Circus Train is a dataset replication tool that copies Hive tables between clusters and clouds. · ☆88 · Updated last year
- Jumbune, an open source BigData APM & Data Quality Management Platform for Data Clouds. Enterprise feature offering is available at http:… · ☆71 · Updated 2 years ago
- ACID Data Source for Apache Spark based on Hive ACID · ☆97 · Updated 3 years ago
- spark-drools tutorials · ☆16 · Updated 10 months ago
- Rocksdb state storage implementation for Structured Streaming. · ☆17 · Updated 4 years ago
- Spark Structured Streaming State Tools · ☆34 · Updated 4 years ago
- Spark structured streaming with Kafka data source and writing to Cassandra · ☆62 · Updated 5 years ago
- Provide functionality to build statistical models to repair dirty tabular data in Spark · ☆12 · Updated last year