ysfesr / Building-Data-LakeHouse
Creation of a data lakehouse and an ELT pipeline to enable the efficient analysis and use of data
☆49 · Updated 2 years ago
Alternatives and similar repositories for Building-Data-LakeHouse
Users interested in Building-Data-LakeHouse are comparing it to the repositories listed below
- Simplified ETL process in Hadoop using Apache Spark. Has a complete ETL pipeline for a data lake. SparkSession extensions, DataFrame validatio…☆56 · Updated 2 years ago
- Dockerizing an Apache Spark Standalone Cluster☆42 · Updated 3 years ago
- Docker with Airflow and Spark standalone cluster☆262 · Updated 2 years ago
- Delta Lake, ETL, Spark, Airflow☆48 · Updated 3 years ago
- To provide a deeper understanding of how the modern, open-source data stack consisting of Iceberg, dbt, Trino, and Hive operates within a…☆45 · Updated last year
- ☆14 · Updated 2 years ago
- This project demonstrates how to use Apache Airflow to submit jobs to an Apache Spark cluster in different programming languages using Python…☆48 · Updated last year
- Playground for Lakehouse (Iceberg, Hudi, Spark, Flink, Trino, dbt, Airflow, Kafka, Debezium CDC)☆64 · Updated 2 years ago
- Code for dbt tutorial☆168 · Updated 5 months ago
- Open-source lakehouse stack☆25 · Updated last year
- In this project, we set up an end-to-end data engineering pipeline using Apache Spark, Azure Databricks, and Data Build Tool (DBT), using Azure as our …☆38 · Updated 2 years ago
- Building a Data Pipeline with an Open Source Stack☆56 · Updated 7 months ago
- Simple stream processing pipeline☆110 · Updated last year
- Trino dbt demo project to mix and load BigQuery data with and in a local PostgreSQL database☆76 · Updated 4 years ago
- This project shows how to capture changes from a Postgres database and stream them into Kafka☆40 · Updated last year
- Delta Lake examples☆238 · Updated last year
- Execution of dbt models using Apache Airflow through Docker Compose☆126 · Updated 3 years ago
- Data Engineering with Spark and Delta Lake☆106 · Updated 3 years ago
- End-to-end data engineering project☆58 · Updated 3 years ago
- ☆90 · Updated 3 years ago
- ☆46 · Updated 2 years ago
- ☆16 · Updated last year
- A repository of sample code to show data quality checking best practices using Airflow.☆78 · Updated 2 years ago
- ☆270 · Updated last year
- Sample Data Lakehouse deployed in Docker containers using Apache Iceberg, MinIO, Trino and a Hive Metastore. Can be used for local testin…☆75 · Updated 2 years ago
- Streaming Synthetic Sales Data Generator: streaming sales data generator for Apache Kafka, written in Python☆44 · Updated 3 years ago
- Learn Apache Spark in Scala, Python (PySpark) and R (SparkR) by building your own cluster with a JupyterLab interface on Docker.☆507 · Updated 3 months ago
- Apache Spark Structured Streaming with Kafka using Python (PySpark)☆40 · Updated 6 years ago
- Spark data pipeline that processes movie ratings data.☆31 · Updated last week
- Apache Spark 3 - Structured Streaming Course Material☆126 · Updated 2 years ago