tcmlabs / hexagonal-architecture-python-spark
Hexagonal (ports and adapters) architecture applied to a Spark and Python data engineering project
☆33 · Updated last year
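As a sketch of the ports-and-adapters idea the repo applies (the names `TransactionSource`, `total_amount`, and `InMemorySource` are illustrative, not taken from the project), domain logic depends only on a port, while Spark- or test-specific adapters implement it:

```python
from typing import Iterable, Protocol


class TransactionSource(Protocol):
    """Port: the only interface the domain logic knows about."""

    def read(self) -> Iterable[dict]: ...


def total_amount(source: TransactionSource) -> float:
    """Domain logic: pure Python, framework-agnostic, trivially testable."""
    return sum(row["amount"] for row in source.read())


class InMemorySource:
    """Adapter for tests; a production adapter could wrap spark.read instead."""

    def __init__(self, rows: list[dict]) -> None:
        self._rows = rows

    def read(self) -> Iterable[dict]:
        return self._rows


print(total_amount(InMemorySource([{"amount": 2.5}, {"amount": 4.0}])))  # 6.5
```

Because the core never imports PySpark, it can be unit-tested without a Spark session; only the adapters touch the engine.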
Alternatives and similar repositories for hexagonal-architecture-python-spark
Users interested in hexagonal-architecture-python-spark are comparing it to the repositories listed below.
- A simple, easy-to-use Data Quality (DQ) tool built with Python. ☆50 · Updated last year
- A CLI tool to streamline getting started with Apache Airflow™ and managing multiple Airflow projects ☆220 · Updated last month
- Code snippets for the Data Engineering Design Patterns book ☆116 · Updated 2 months ago
- Example repo to create end-to-end tests for a data pipeline. ☆24 · Updated 11 months ago
- A project to kickstart your ML development ☆31 · Updated 9 months ago
- Run, mock and test fake Snowflake databases locally. ☆141 · Updated this week
- A Python package that creates fine-grained dbt tasks on Apache Airflow ☆70 · Updated 8 months ago
- Soda Spark is a PySpark library that helps you test your data in Spark DataFrames ☆63 · Updated 2 years ago
- ☆130 · Updated 10 months ago
- 🏃‍♀️ Minimalist SQL orchestrator ☆244 · Updated this week
- A curated list of awesome blogs, videos, tools and resources about Data Contracts ☆173 · Updated 9 months ago
- Full-stack data engineering tools and infrastructure set-up ☆53 · Updated 4 years ago
- ☆80 · Updated 7 months ago
- Modern serverless lakehouse implementing the HOOK methodology, Unified Star Schema (USS), and Analytical Data Storage System (ADSS) principle… ☆115 · Updated 2 months ago
- Ingesting data with Pulumi, AWS Lambdas and Snowflake in a scalable, fully replayable manner ☆71 · Updated 3 years ago
- Delta Lake helper methods in PySpark ☆326 · Updated 9 months ago
- Sample configuration to deploy a modern data platform. ☆88 · Updated 3 years ago
- A portable Datamart and Business Intelligence suite built with Docker, sqlmesh + dbt-core, DuckDB and Superset ☆52 · Updated 6 months ago
- CSV and flat-file sniffer built in Rust. ☆42 · Updated last year
- Pythonic programming framework to orchestrate jobs in Databricks Workflows ☆216 · Updated 3 weeks ago
- ☆49 · Updated 11 months ago
- The Lakehouse Engine is a configuration-driven Spark framework, written in Python, serving as a scalable and distributed engine for sever… ☆251 · Updated 4 months ago
- A Python PySpark project with Poetry ☆23 · Updated 8 months ago
- ☆43 · Updated 3 years ago
- A write-audit-publish implementation on a data lake without the JVM ☆46 · Updated 9 months ago
- End-to-end data engineering project ☆56 · Updated 2 years ago
- Docker with Airflow + Postgres + Spark cluster + JDK (spark-submit support) + Jupyter Notebooks ☆24 · Updated 3 years ago
- A Python library to support running data quality rules while the Spark job is running ⚡ ☆188 · Updated last week
- Package to assert rows in-line with dbt macros. ☆68 · Updated last month
- A series of notebooks on getting started with Kafka and Python ☆154 · Updated 3 months ago