ysfesr / Building-Data-LakeHouse
Creation of a data lakehouse and an ELT pipeline to enable efficient analysis and use of data
☆49 · Updated last year
Alternatives and similar repositories for Building-Data-LakeHouse
Users interested in Building-Data-LakeHouse are comparing it to the repositories listed below
- Delta-Lake, ETL, Spark, Airflow ☆48 · Updated 3 years ago
- Dockerizing an Apache Spark Standalone Cluster ☆43 · Updated 3 years ago
- Docker with Airflow and Spark standalone cluster ☆261 · Updated 2 years ago
- Simple stream processing pipeline ☆110 · Updated last year
- To provide a deeper understanding of how the modern, open-source data stack consisting of Iceberg, dbt, Trino, and Hive operates within a… ☆42 · Updated last year
- Simplified ETL process in Hadoop using Apache Spark. Has complete ETL pipeline for datalake. SparkSession extensions, DataFrame validatio… ☆55 · Updated 2 years ago
- Code for dbt tutorial ☆164 · Updated 2 months ago
- Spark data pipeline that processes movie ratings data. ☆30 · Updated last week
- ☆16 · Updated last year
- A repository of sample code to show data quality checking best practices using Airflow. ☆78 · Updated 2 years ago
- Repo for everything open table formats (Iceberg, Hudi, Delta Lake) and the overall Lakehouse architecture ☆123 · Updated last week
- Playground for Lakehouse (Iceberg, Hudi, Spark, Flink, Trino, DBT, Airflow, Kafka, Debezium CDC) ☆63 · Updated 2 years ago
- Apache Spark Structured Streaming with Kafka using Python (PySpark) ☆40 · Updated 6 years ago
- Execution of DBT models using Apache Airflow through Docker Compose ☆124 · Updated 2 years ago
- Apache Spark 3 - Structured Streaming Course Material ☆125 · Updated 2 years ago
- Trino dbt demo project to mix and load BigQuery data with and in a local PostgreSQL database ☆77 · Updated 4 years ago
- Delta Lake examples ☆233 · Updated last year
- ☆269 · Updated last year
- One click deploy docker-compose with Kafka, Spark Streaming, Zeppelin UI and Monitoring (Grafana + Kafka Manager) ☆120 · Updated 4 years ago
- Sample Data Lakehouse deployed in Docker containers using Apache Iceberg, Minio, Trino and a Hive Metastore. Can be used for local testin… ☆74 · Updated 2 years ago
- In this project, we set up end-to-end data engineering using Apache Spark, Azure Databricks, Data Build Tool (DBT) using Azure as our … ☆36 · Updated last year
- Multi-container environment with Hadoop, Spark and Hive ☆226 · Updated 6 months ago
- New Generation Opensource Data Stack Demo ☆450 · Updated 2 years ago
- ☆23 · Updated 4 years ago
- ☆88 · Updated 3 years ago
- A full data warehouse infrastructure with ETL pipelines running inside docker on Apache Airflow for data orchestration, AWS Redshift for … ☆139 · Updated 5 years ago
- Building a Data Pipeline with an Open Source Stack ☆54 · Updated 4 months ago
- Spark all the ETL Pipelines ☆35 · Updated 2 years ago
- Near real time ETL to populate a dashboard. ☆73 · Updated 2 months ago
- ☆14 · Updated 2 years ago