adidas / lakehouse-engine
The Lakehouse Engine is a configuration-driven Spark framework, written in Python, serving as a scalable and distributed engine for several lakehouse algorithms, data flows, and utilities for Data Products.
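To illustrate what "configuration-driven" means here, the sketch below shows a tiny pipeline runner whose behavior is described entirely by a config dict, in the spirit of the Lakehouse Engine's approach. This is a hypothetical, minimal illustration: the function name `run_load` and the config keys are invented for this example and do not reflect the library's actual API, and plain lists of dicts stand in for Spark DataFrames.

```python
# Hypothetical sketch of a configuration-driven pipeline runner.
# `run_load` and all config keys are invented for illustration; they are
# NOT the Lakehouse Engine's real API. Lists of dicts stand in for
# Spark DataFrames so the example runs without a Spark cluster.

def run_load(config: dict) -> list[dict]:
    """Execute a small load flow described entirely by a config dict."""
    rows = list(config["input"]["data"])  # stand-in for a Spark reader

    for step in config.get("transformations", []):
        if step["type"] == "filter":
            # Keep only rows whose column matches the configured value.
            rows = [r for r in rows if r[step["column"]] == step["value"]]
        elif step["type"] == "rename":
            # Rename a column in every row.
            rows = [
                {step["to"] if k == step["from"] else k: v for k, v in r.items()}
                for r in rows
            ]
        else:
            raise ValueError(f"unknown transformation: {step['type']}")

    return rows  # stand-in for a Spark writer


config = {
    "input": {"data": [{"id": 1, "region": "EU"}, {"id": 2, "region": "US"}]},
    "transformations": [{"type": "filter", "column": "region", "value": "EU"}],
}
print(run_load(config))  # [{'id': 1, 'region': 'EU'}]
```

The design point is that pipeline logic lives in data, not code: adding or reordering steps means editing the config, while the engine supplies tested, reusable implementations of each step type.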
☆249 · Updated 3 months ago
Alternatives and similar repositories for lakehouse-engine
Users interested in lakehouse-engine are comparing it to the libraries listed below.
- Delta Lake helper methods in PySpark ☆326 · Updated 8 months ago
- A Python Library to support running data quality rules while the spark job is running⚡ ☆188 · Updated this week
- Pythonic Programming Framework to orchestrate jobs in Databricks Workflow ☆216 · Updated 3 weeks ago
- Delta Lake examples ☆225 · Updated 7 months ago
- A Python package that creates fine-grained dbt tasks on Apache Airflow ☆70 · Updated 8 months ago
- Demo DAGs that show how to run dbt Core in Airflow using Cosmos ☆62 · Updated 3 weeks ago
- A repository of sample code to accompany our blog post on Airflow and dbt. ☆173 · Updated last year
- Custom PySpark Data Sources ☆53 · Updated last month
- PySpark test helper methods with beautiful error messages ☆696 · Updated last month
- ☆130 · Updated 10 months ago
- Delta Lake Documentation ☆49 · Updated 11 months ago
- Code snippets for Data Engineering Design Patterns book ☆113 · Updated 2 months ago
- This repo is a collection of tools to deploy, manage and operate a Databricks based Lakehouse. ☆45 · Updated 4 months ago
- Turning PySpark Into a Universal DataFrame API ☆403 · Updated this week
- Home of the Open Data Contract Standard (ODCS). ☆495 · Updated last week
- A portable Datamart and Business Intelligence suite built with Docker, Dagster, dbt, DuckDB and Superset ☆230 · Updated 3 months ago
- Data pipeline with dbt, Airflow, Great Expectations ☆163 · Updated 3 years ago
- Possibly the fastest DataFrame-agnostic quality check library in town. ☆190 · Updated last week
- PyJaws: A Pythonic Way to Define Databricks Jobs and Workflows ☆43 · Updated 10 months ago
- Scalefree's dbt package for a Data Vault 2.0 implementation congruent to the original Data Vault 2.0 definition by Dan Linstedt including… ☆154 · Updated 3 weeks ago
- Modern serverless lakehouse implementing HOOK methodology, Unified Star Schema (USS), and Analytical Data Storage System (ADSS) principle… ☆115 · Updated 2 months ago
- Template for a data contract used in a data mesh. ☆472 · Updated last year
- dbt-spark contains all of the code enabling dbt to work with Apache Spark and Databricks ☆431 · Updated 3 months ago
- A CLI tool to streamline getting started with Apache Airflow™ and managing multiple Airflow projects ☆220 · Updated last month
- A collection of Airflow operators, hooks, and utilities to elevate dbt to a first-class citizen of Airflow. ☆199 · Updated last month
- Great Expectations Airflow operator ☆164 · Updated last week
- Data product portal created by Dataminded ☆186 · Updated this week
- Sample Data Lakehouse deployed in Docker containers using Apache Iceberg, Minio, Trino and a Hive Metastore. Can be used for local testin… ☆71 · Updated last year
- Code samples, etc. for Databricks ☆64 · Updated this week
- Step-by-step tutorial on building a Kimball dimensional model with dbt ☆140 · Updated 10 months ago