data-engineering-helpers / data-contracts
Food for thought around data contracts
☆25, updated last month
Alternatives and similar repositories for data-contracts:
Users interested in data-contracts are comparing it to the libraries listed below.
- Pythonic programming framework to orchestrate jobs in Databricks Workflows (☆215, updated last week)
- Data product portal created by Dataminded (☆183, updated this week)
- [DEPRECATED] Demo repository implementing an end-to-end MLOps workflow on Databricks. Project derived from the dbx basic Python template (☆112, updated 2 years ago)
- Ingesting data with Pulumi, AWS Lambdas and Snowflake in a scalable, fully replayable manner (☆71, updated 3 years ago)
- Example repo to kickstart integration with MLflow pipelines (☆76, updated 2 years ago)
- A simple and easy-to-use Data Quality (DQ) tool built with Python (☆50, updated last year)
- Possibly the fastest DataFrame-agnostic quality check library in town (☆186, updated last week)
- Demo of a Streamlit application with a Databricks SQL endpoint (☆36, updated 2 years ago)
- A project to kickstart your ML development (☆30, updated 8 months ago)
- Delta Lake helper methods in PySpark (☆322, updated 7 months ago)
- Modern serverless lakehouse implementing HOOK methodology, Unified Star Schema (USS), and Analytical Data Storage System (ADSS) principle… (☆111, updated 3 weeks ago)
- 🧱 A collection of supplementary utilities and helper notebooks to perform admin tasks on Databricks (☆54, updated 4 months ago)
- A CLI tool to streamline getting started with Apache Airflow™ and managing multiple Airflow projects (☆219, updated last week)
- Data pipeline with dbt, Airflow, Great Expectations (☆162, updated 3 years ago)
- A Python library to support running data quality rules while the Spark job is running ⚡ (☆183, updated last week)
- Sample configuration to deploy a modern data platform (☆88, updated 3 years ago)
- Learn how to add data validation and documentation to a data pipeline built with dbt and Airflow (☆168, updated last year)
- Joining the modern data stack with the modern ML stack (☆195, updated last year)
- A "modern" Strava data pipeline fueled by dlt, duckdb, dbt, and evidence.dev (☆32, updated 3 months ago)
- ☆28, updated last year
- A portable datamart and business intelligence suite built with Docker, sqlmesh + dbt-core, DuckDB and Superset (☆49, updated 5 months ago)
- A SQL port of Python's scikit-learn preprocessing module, provided as cross-database dbt macros (☆184, updated last year)
- Accompanying solution accelerator notebook for the Databricks blog on parallel training and inference (☆15, updated 2 years ago)
- Kedro plugin to support running workflows on Kubeflow Pipelines (☆53, updated 7 months ago)
- A curated list of awesome blogs, videos, tools and resources about data contracts (☆173, updated 8 months ago)
- Delta Lake, ETL, Spark, Airflow (☆47, updated 2 years ago)
- The Lakehouse Engine is a configuration-driven Spark framework, written in Python, serving as a scalable and distributed engine for sever… (☆244, updated 2 months ago)
- Code snippets for the Data Engineering Design Patterns book (☆80, updated last month)
- First-party plugins maintained by the Kedro team (☆99, updated last week)
- Airbyte made simple (no UI, no database, no cluster) (☆171, updated 2 weeks ago)