ysfesr / Building-Data-LakeHouse
Creation of a data lakehouse and an ELT pipeline to enable efficient analysis and use of data.
☆43 · Updated last year

Alternatives and similar repositories for Building-Data-LakeHouse:
Users interested in Building-Data-LakeHouse are comparing it to the repositories listed below.
- Spark data pipeline that processes movie ratings data. ☆27 · Updated 3 weeks ago
- In this project, we set up an end-to-end data engineering pipeline using Apache Spark, Azure Databricks, and Data Build Tool (DBT), with Azure as our … ☆26 · Updated last year
- Simplified ETL process in Hadoop using Apache Spark. Has a complete ETL pipeline for a data lake. SparkSession extensions, DataFrame validatio… ☆53 · Updated last year
- Building a data lakehouse with open-source technology. Supports an end-to-end data pipeline, from source data on AWS S3 to the lakehouse, visualize a… ☆19 · Updated 9 months ago
- Sample Data Lakehouse deployed in Docker containers using Apache Iceberg, Minio, Trino, and a Hive Metastore. Can be used for local testin… ☆60 · Updated last year
- Open-source stack lakehouse ☆25 · Updated 11 months ago
- ☆13 · Updated last year
- ☆15 · Updated 11 months ago
- To provide a deeper understanding of how the modern, open-source data stack consisting of Iceberg, dbt, Trino, and Hive operates within a… ☆30 · Updated 11 months ago
- Delta-Lake, ETL, Spark, Airflow ☆46 · Updated 2 years ago
- End-to-end data engineering project ☆53 · Updated 2 years ago
- Dockerizing an Apache Spark Standalone Cluster ☆43 · Updated 2 years ago
- Simple stream processing pipeline ☆98 · Updated 7 months ago
- Trino dbt demo project to mix and load BigQuery data with and into a local PostgreSQL database ☆72 · Updated 3 years ago
- Quick guides from Dremio on several topics ☆67 · Updated 3 weeks ago
- ☆22 · Updated 2 years ago
- A custom end-to-end analytics platform for customer churn ☆10 · Updated 3 weeks ago
- A Docker setup running Airflow with the Hadoop ecosystem (Hive, Spark, and Sqoop) ☆11 · Updated 3 years ago
- Execution of DBT models using Apache Airflow through Docker Compose ☆114 · Updated 2 years ago
- A series on learning Apache Spark (PySpark), with quick tips and workarounds for common day-to-day problems ☆46 · Updated last year
- Demo DAGs that show how to run dbt Core in Airflow using Cosmos ☆53 · Updated 4 months ago
- Code for dbt tutorial ☆151 · Updated 8 months ago
- Data pipeline performing ETL to AWS Redshift using Spark, orchestrated with Apache Airflow ☆136 · Updated 4 years ago
- Code snippets for the Data Engineering Design Patterns book ☆68 · Updated last week
- Building a Data Pipeline with an Open Source Stack ☆45 · Updated 7 months ago
- Apache Spark 3 - Structured Streaming Course Material ☆121 · Updated last year
- ☆40 · Updated 7 months ago
- Spark all the ETL Pipelines ☆32 · Updated last year
- Streaming Synthetic Sales Data Generator: streaming sales data generator for Apache Kafka, written in Python ☆42 · Updated 2 years ago
- A project for exploring how Great Expectations can be used to ensure data quality and validate batches within a data pipeline defined in … ☆21 · Updated 2 years ago