FINRAOS / MegaSparkDiff
A Spark-based tool for comparing data at scale, letting software engineers compare many pairwise combinations of possible data sources. Multiple execution modes in multiple environments let the user generate a diff report as a Java/Scala-friendly DataFrame or as a file for later use. Comes with out-of-the-box Spa…
☆52 · Updated 7 months ago
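The headline feature is a diff report that stays usable as a DataFrame. As a rough sketch of that idea in plain Spark SQL (the object and method names below are invented for illustration and are not MegaSparkDiff's own API), two sources can be diffed with `exceptAll` and a union:

```scala
import org.apache.spark.sql.functions.lit
import org.apache.spark.sql.{DataFrame, SparkSession}

// Illustrative only: a plain-Spark approximation of a "diff report as a
// DataFrame". DiffSketch and diff() are made-up names for this sketch,
// not part of MegaSparkDiff.
object DiffSketch {
  // Rows that appear on only one side, tagged with the side they came from.
  def diff(left: DataFrame, right: DataFrame): DataFrame = {
    val leftOnly  = left.exceptAll(right).withColumn("side", lit("left_only"))
    val rightOnly = right.exceptAll(left).withColumn("side", lit("right_only"))
    leftOnly.unionByName(rightOnly) // empty result => the two sources match
  }

  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("diff-sketch")
      .master("local[*]")
      .getOrCreate()
    import spark.implicits._

    val left  = Seq((1, "a"), (2, "b"), (3, "c")).toDF("id", "value")
    val right = Seq((1, "a"), (2, "B"), (4, "d")).toDF("id", "value")

    diff(left, right).show()                        // keep it as a DataFrame...
    // diff(left, right).write.parquet("/tmp/diff") // ...or persist it as a file
    spark.stop()
  }
}
```

MegaSparkDiff itself layers source connectors and the execution modes mentioned above on top of this kind of comparison; the sketch only shows the DataFrame-shaped output.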
Alternatives and similar repositories for MegaSparkDiff
Users interested in MegaSparkDiff are comparing it to the libraries listed below.
- Bulletproof Apache Spark jobs with fast root cause analysis of failures. ☆73 · Updated 4 years ago
- Type-class-based data cleansing library for Apache Spark SQL ☆78 · Updated 6 years ago
- A dynamic data completeness and accuracy library at enterprise scale for Apache Spark ☆29 · Updated last year
- Apache-Spark based Data Flow (ETL) Framework which supports multiple read, write destinations of different types and also supports multiple… ☆26 · Updated 4 years ago
- A library that brings useful functions from various modern database management systems to Apache Spark ☆61 · Updated 2 years ago
- Circus Train is a dataset replication tool that copies Hive tables between clusters and clouds. ☆91 · Updated last year
- JSON schema parser for Apache Spark ☆82 · Updated 3 years ago
- Lighthouse is a library for data lakes built on top of Apache Spark. It provides high-level APIs in Scala to streamline data pipelines an… ☆62 · Updated last year
- An implementation of the DatasourceV2 interface of Apache Spark™ for writing Spark Datasets to Apache Druid™. ☆43 · Updated 3 weeks ago
- Schema Registry integration for Apache Spark ☆40 · Updated 3 years ago
- Waimak is an open-source framework that makes it easier to create complex data flows in Apache Spark. ☆76 · Updated last year
- Extensible streaming ingestion pipeline on top of Apache Spark ☆46 · Updated 6 months ago
- Quark is a data virtualization engine over analytic databases. ☆100 · Updated 8 years ago
- Superglue is a lineage-tracking tool built to help visualize the propagation of data through complex pipelines composed of tables, jobs… ☆160 · Updated 3 years ago
- Spark functions to run popular phonetic and string matching algorithms ☆59 · Updated 3 years ago
- A Spark datasource for the HadoopOffice library ☆37 · Updated 4 months ago
- Schema registry for CSV, TSV, JSON, AVRO and Parquet schema. Supports schema inference and GraphQL API. ☆115 · Updated 5 years ago
- Smart Automation Tool for building modern Data Lakes and Data Pipelines ☆122 · Updated this week
- Spark cloud integration: tests, cloud committers and more ☆20 · Updated last year
- Amundsen Gremlin ☆22 · Updated 3 years ago
- Spark package to "plug" holes in data using SQL based rules ⚡️ 🔌 ☆29 · Updated 5 years ago
- Schedoscope is a scheduling framework for pain-free agile development, testing, (re)loading, and monitoring of your datahub, lake, or what… ☆96 · Updated 6 years ago
- Apache Spark ETL Utilities ☆39 · Updated last year
- Splittable Gzip codec for Hadoop ☆74 · Updated last month
- A tool to validate data, built around Apache Spark. ☆100 · Updated this week
- Yet Another Spark SQL JDBC/ODBC server based on the PostgreSQL V3 protocol ☆34 · Updated 3 years ago
- A simple Spark-powered ETL framework that just works 🍺 ☆182 · Updated 3 months ago
- File compaction tool that runs on top of the Spark framework. ☆59 · Updated 6 years ago
- PySpark for ETL jobs including lineage to Apache Atlas in one script via code inspection ☆18 · Updated 9 years ago
- A framework for creating composable and pluggable data processing pipelines using Apache Spark, and running them on a cluster. ☆47 · Updated 9 years ago