justhackit / spark-utils
☆10 · Updated 2 years ago
Alternatives and similar repositories for spark-utils:
Users who are interested in spark-utils are comparing it to the libraries listed below.
- Examples for High Performance Spark ☆15 · Updated 4 months ago
- Code snippets used in demos recorded for the blog. ☆30 · Updated last month
- Magic to help Spark pipelines upgrade ☆34 · Updated 5 months ago
- A library that brings useful functions from various modern database management systems to Apache Spark ☆58 · Updated last year
- Sample processing code using Spark 2.1+ and Scala ☆51 · Updated 4 years ago
- Spark Structured Streaming State Tools ☆34 · Updated 4 years ago
- PySpark for ETL jobs including lineage to Apache Atlas in one script via code inspection ☆18 · Updated 8 years ago
- ☆12 · Updated 2 years ago
- Spark-Radiant is an Apache Spark Performance and Cost Optimizer ☆25 · Updated 2 months ago
- Schema Registry integration for Apache Spark ☆40 · Updated 2 years ago
- hive_compared_bq compares/validates two (SQL-like) tables and graphically shows the rows/columns that differ. ☆28 · Updated 7 years ago
- Basic framework utilities to quickly start writing production-ready Apache Spark applications ☆35 · Updated 3 months ago
- Examples of Spark 3.0 ☆47 · Updated 4 years ago
- Data validation library for PySpark 3.0.0 ☆33 · Updated 2 years ago
- Shunting Yard is a real-time data replication tool that copies data between Hive Metastores. ☆20 · Updated 3 years ago
- The iterative broadcast join example code. ☆69 · Updated 7 years ago
- Hadoop Data Pipeline using Falcon ☆15 · Updated 8 years ago
- Scalable CDC Pattern Implemented using PySpark ☆18 · Updated 5 years ago
- Type-class based data cleansing library for Apache Spark SQL ☆78 · Updated 5 years ago
- RocksDB state storage implementation for Structured Streaming. ☆17 · Updated 4 years ago
- A Spark datasource for the HadoopOffice library ☆38 · Updated 2 years ago
- ☆63 · Updated 5 years ago
- A curated list of awesome PrestoDB / Trino software, libraries, tools and resources ☆17 · Updated 3 years ago
- Demos for Nessie. Nessie provides Git-like capabilities for your Data Lake. ☆28 · Updated last week
- CDF Tech Bootcamp ☆9 · Updated 5 years ago
- Waimak is an open-source framework that makes it easier to create complex data flows in Apache Spark. ☆75 · Updated 10 months ago
- Data Exploration Using Spark 2.0 ☆14 · Updated 6 years ago
- Smart Automation Tool for building modern Data Lakes and Data Pipelines ☆120 · Updated this week
- Parcel for Apache Airflow ☆17 · Updated 5 years ago
- JSON schema parser for Apache Spark ☆81 · Updated 2 years ago