ysfesr / Building-Data-LakeHouse
Creation of a data lakehouse and an ELT pipeline to enable the efficient analysis and use of data
☆48 · Updated last year
Alternatives and similar repositories for Building-Data-LakeHouse
Users interested in Building-Data-LakeHouse are comparing it to the repositories listed below.
- Simplified ETL process in Hadoop using Apache Spark. Has complete ETL pipeline for datalake. SparkSession extensions, DataFrame validatio… ☆55 · Updated 2 years ago
- Delta-Lake, ETL, Spark, Airflow ☆48 · Updated 3 years ago
- Docker with Airflow and Spark standalone cluster ☆261 · Updated 2 years ago
- A repository of sample code to show data quality checking best practices using Airflow. ☆78 · Updated 2 years ago
- Playground for Lakehouse (Iceberg, Hudi, Spark, Flink, Trino, DBT, Airflow, Kafka, Debezium CDC) ☆61 · Updated 2 years ago
- Trino dbt demo project to mix and load BigQuery data with and in a local PostgreSQL database ☆76 · Updated 4 years ago
- Execution of DBT models using Apache Airflow through Docker Compose ☆121 · Updated 2 years ago
- Open source stack lakehouse ☆25 · Updated last year
- Simple stream processing pipeline ☆110 · Updated last year
- One click deploy docker-compose with Kafka, Spark Streaming, Zeppelin UI and Monitoring (Grafana + Kafka Manager) ☆120 · Updated 4 years ago
- Dockerizing an Apache Spark Standalone Cluster ☆43 · Updated 3 years ago
- Code for dbt tutorial ☆162 · Updated last month
- Delta Lake examples ☆230 · Updated last year
- New Generation Opensource Data Stack Demo ☆449 · Updated 2 years ago
- ☆14 · Updated 2 years ago
- Apache Spark 3 - Structured Streaming Course Material ☆124 · Updated 2 years ago
- Real-time Data Warehouse with Apache Flink & Apache Kafka & Apache Hudi ☆117 · Updated last year
- ☆269 · Updated last year
- Sample Data Lakehouse deployed in Docker containers using Apache Iceberg, Minio, Trino and a Hive Metastore. Can be used for local testin… ☆74 · Updated 2 years ago
- A Docker setup running Airflow with the Hadoop ecosystem (Hive, Spark, and Sqoop) ☆12 · Updated 4 years ago
- This project demonstrates how to use Apache Airflow to submit jobs to an Apache Spark cluster in different programming languages using Python… ☆47 · Updated last year
- Spark data pipeline that processes movie ratings data. ☆30 · Updated 3 weeks ago
- To provide a deeper understanding of how the modern, open-source data stack consisting of Iceberg, dbt, Trino, and Hive operates within a… ☆41 · Updated last year
- Big Data infrastructure: Hadoop + NiFi + Spark + Hive using Docker ☆20 · Updated 2 years ago
- A sample implementation of stream writes to an Iceberg table on GCS using Flink and reading it using Trino ☆21 · Updated 3 years ago
- Near real-time ETL to populate a dashboard. ☆72 · Updated last month
- End-to-end data engineering project ☆57 · Updated 3 years ago
- Create streaming data, transfer it to Kafka, transform it with PySpark, and load it into Elasticsearch and MinIO ☆63 · Updated 2 years ago
- Example for article Running Spark 3 with standalone Hive Metastore 3.0 ☆102 · Updated 2 years ago
- Code snippets for Data Engineering Design Patterns book ☆249 · Updated 7 months ago