harrydevforlife / building-lakehouse
Building a data lakehouse with open-source technology. Supports an end-to-end data pipeline, from source data on AWS S3 to the lakehouse, plus visualization and a recommendation app.
☆35 · Updated 3 weeks ago
Alternatives and similar repositories for building-lakehouse
Users interested in building-lakehouse are comparing it to the repositories listed below.
- Code snippets for the Data Engineering Design Patterns book ☆307 · Updated last week
- Open-source stack lakehouse ☆25 · Updated last year
- Repo for everything open table formats (Iceberg, Hudi, Delta Lake) and the overall lakehouse architecture ☆126 · Updated last month
- To provide a deeper understanding of how the modern, open-source data stack consisting of Iceberg, dbt, Trino, and Hive operates within a… ☆44 · Updated last year
- Sample data lakehouse deployed in Docker containers using Apache Iceberg, MinIO, Trino, and a Hive Metastore. Can be used for local testin… ☆75 · Updated 2 years ago
- Cost-Efficient Data Pipelines with DuckDB ☆60 · Updated 7 months ago
- End-to-end data platform: a PoC data platform project utilizing the modern data stack (Spark, Airflow, dbt, Trino, Lightdash, Hive Metastore,… ☆47 · Updated last year
- Build a data warehouse with dbt ☆51 · Updated last year
- Installer for DataKitchen's open-source data observability products. Data breaks. Servers break. Your toolchain breaks. Ensure your team … ☆130 · Updated 2 months ago
- Quick guides from Dremio on several topics ☆80 · Updated last month
- Delta Lake helper methods. No Spark dependency. ☆23 · Updated last year
- Delta Lake examples ☆236 · Updated last year
- A custom end-to-end analytics platform for customer churn ☆11 · Updated 7 months ago
- The Lakehouse Engine is a configuration-driven Spark framework, written in Python, serving as a scalable and distributed engine for sever… ☆279 · Updated 3 months ago
- Open Data Stack Projects: examples of end-to-end data engineering projects ☆91 · Updated 2 years ago
- ☆214 · Updated 11 months ago
- A Docker Compose template that builds an interactive development environment for PySpark with Jupyter Lab, MinIO as object storage, Hive M… ☆47 · Updated last year
- Local environment to practice data engineering ☆143 · Updated last year
- Code for a dbt tutorial ☆166 · Updated 4 months ago
- Creating a modern data pipeline using a combination of Terraform, AWS Lambda and S3, Snowflake, dbt, Mage AI, and Dash ☆15 · Updated 2 years ago
- End-to-end data platform leveraging the modern data stack ☆52 · Updated last year
- New-generation open-source data stack ☆76 · Updated 3 years ago
- Data Product Portal created by Dataminded ☆197 · Updated this week
- A Python package that creates fine-grained dbt tasks on Apache Airflow ☆80 · Updated 2 weeks ago
- Sample code to collect Apache Iceberg metrics for table monitoring ☆29 · Updated last year
- Apache Airflow Best Practices, published by Packt ☆50 · Updated last year
- Data agents are intelligent assistants built by data engineers to help non-data professionals navigate the organization's data infrastruc… ☆19 · Updated 8 months ago
- A sample implementation of stream writes to an Iceberg table on GCS using Flink, and reading it using Trino ☆22 · Updated 3 years ago
- ☆58 · Updated 11 months ago
- 📡 Real-time data pipeline with Kafka, Flink, Iceberg, Trino, MinIO, and Superset. Ideal for learning data systems. ☆58 · Updated 11 months ago