agile-lab-dev / witboost-starter-kit
Witboost is a versatile platform that addresses a wide range of sophisticated data engineering challenges. The Starter Kit showcases the integration capabilities and provides a "batteries-included" product.
☆20 · Updated this week
Alternatives and similar repositories for witboost-starter-kit:
Users interested in witboost-starter-kit are comparing it to the libraries listed below.
- An open specification for data products in Data Mesh ☆55 · Updated 2 months ago
- Data validation library for PySpark 3.0.0 ☆34 · Updated 2 years ago
- A table-format-agnostic data sharing framework ☆38 · Updated 11 months ago
- A library that brings useful functions from various modern database management systems to Apache Spark ☆58 · Updated last year
- Delta Lake and filesystem helper methods ☆50 · Updated 10 months ago
- Spark and Delta Lake workshop ☆22 · Updated 2 years ago
- Yet Another (Spark) ETL Framework ☆18 · Updated last year
- ☆38 · Updated 7 months ago
- Delta reader for the Ray open-source toolkit for building ML applications ☆43 · Updated 11 months ago
- Code snippets used in demos recorded for the blog ☆29 · Updated this week
- How to evaluate the quality of your data with Great Expectations and Spark ☆29 · Updated last year
- Declarative, text-based tool for data analysts and engineers to extract, load, transform, and orchestrate their data pipelines ☆71 · Updated this week
- Flowchart for debugging Spark applications ☆104 · Updated 3 months ago
- Type-class-based data cleansing library for Apache Spark SQL ☆79 · Updated 5 years ago
- Delta Lake documentation ☆48 · Updated 7 months ago
- Support for dynamically generating modern platforms with services such as Kafka, Spark, Streamsets, HDFS, ... ☆74 · Updated this week
- Magic to help Spark pipelines upgrade ☆34 · Updated 3 months ago
- Kafka connector for Iceberg tables ☆16 · Updated last year
- Visit-sessionization pipeline used for the talk ☆13 · Updated 7 months ago
- The Data Product Descriptor Specification (DPDS) repository ☆76 · Updated this week
- ☆47 · Updated 5 months ago
- A Python library to support running data quality rules while the Spark job is running ⚡ ☆167 · Updated last week
- PyJaws: A Pythonic Way to Define Databricks Jobs and Workflows ☆41 · Updated 6 months ago