thanhENC / e2e-data-platform
End-to-end data platform: a proof-of-concept (PoC) data platform built on a modern data stack (Spark, Airflow, dbt, Trino, Lightdash, Hive Metastore, MinIO, Postgres)
☆15, updated 3 weeks ago
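As a rough illustration of how the stack named above typically fits together, a minimal Airflow DAG can submit a Spark ingestion job and then run dbt transformations. This is a hypothetical sketch, not code from this repo: the DAG id, job path, connection id, and dbt directories are all assumptions.

```python
# Hypothetical orchestration sketch for the stack above (Airflow 2.x with the
# apache-airflow-providers-apache-spark package installed). All ids and paths
# below are assumptions, not taken from the repository.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator
from airflow.providers.apache.spark.operators.spark_submit import SparkSubmitOperator

with DAG(
    dag_id="e2e_platform_sketch",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    # Land raw files from MinIO (S3-compatible storage) into warehouse tables via Spark.
    ingest = SparkSubmitOperator(
        task_id="spark_ingest",
        application="/opt/jobs/ingest.py",  # assumed path to the PySpark job
        conn_id="spark_default",
    )

    # Transform staged tables with dbt; in a stack like this, dbt would target Trino.
    transform = BashOperator(
        task_id="dbt_run",
        bash_command="dbt run --project-dir /opt/dbt --profiles-dir /opt/dbt",
    )

    ingest >> transform  # run transformations only after ingestion succeeds
```

In this arrangement, Trino queries the tables registered in the Hive Metastore, and Lightdash sits on top of the dbt project for BI.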
Related projects
Alternatives and complementary repositories for e2e-data-platform
- Nyc_Taxi_Data_Pipeline: a data engineering (DE) project (☆83, updated 3 weeks ago)
- A data pipeline performing ETL into AWS Redshift using Spark, orchestrated with Apache Airflow (☆133, updated 4 years ago)
- A crawler for Vietnam stock prices (☆18, updated last year)
- Projects from Udacity's Data Engineer Nanodegree program (☆94, updated last year)
- Crawls data from the TIKI e-commerce platform, designs a data warehouse, implements an ETL (Extract, Transform, Load) process, and loads the … (☆14, updated last year)
- Creates a data lakehouse and an ELT pipeline to enable efficient analysis and use of data (☆40, updated 11 months ago)
- An introduction to setting up streaming analytics using open-source technologies (☆22, updated last year)
- A Spark application that consumes Kafka events generated by a Python producer (☆12, updated 3 years ago)
- An end-to-end data engineering project (☆49, updated 2 years ago)
- A simple stream processing pipeline (☆91, updated 4 months ago)
- Building a data pipeline with an open-source stack (☆38, updated 4 months ago)
- Classwork projects and homework from the Udacity Data Engineering Nanodegree (☆74, updated 11 months ago)
- An open-source lakehouse stack (☆25, updated 8 months ago)
- (no description; ☆37, updated 4 months ago)
- Docker setup with Airflow and a Spark standalone cluster (☆244, updated last year)
- Shows how to capture changes from a Postgres database and stream them into Kafka; a consumer sketch follows this list (☆31, updated 5 months ago)
- Solutions to all projects of Udacity's Data Engineering Nanodegree: Data Modeling with Postgres & Cassandra, Data Warehouse with Redshift, … (☆56, updated 2 years ago)
- A template repository for creating a data project with IaC, CI/CD, data migrations, and testing (☆237, updated 4 months ago)
- RedditR for Content Engagement and Recommendation (☆21, updated 6 years ago)
- Gets data from an API, runs a scheduled script with Airflow, sends data to Kafka, consumes it with Spark, then writes to Cassandra; a streaming sketch follows this list (☆128, updated last year)
- Provides a deeper understanding of how the modern, open-source data stack consisting of Iceberg, dbt, Trino, and Hive operates within a… (☆25, updated 8 months ago)
- Code for "Efficient Data Processing in Spark" Course☆239Updated last month
- Apartments Data Pipeline using Airflow and Spark.☆18Updated 2 years ago
- DataTalks.Club's Data Engineering Zoomcamp Project☆22Updated 2 years ago
- velib-v2: An ETL pipeline that employs batch and streaming jobs using Spark, Kafka, Airflow, and other tools, all orchestrated with Docke…☆18Updated 2 months ago
- Creating Data Pipelines with Apache Airflow to manage ETL from Amazon S3 into Amazon Redshift☆13Updated 5 years ago
- build dw with dbt☆29Updated 2 weeks ago
- Spark all the ETL Pipelines☆32Updated last year
- Sample Data Lakehouse deployed in Docker containers using Apache Iceberg, Minio, Trino and a Hive Metastore. Can be used for local testin…☆56Updated last year
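The Postgres-to-Kafka change-data-capture entry above follows a common pattern: Debezium (or a similar connector) writes change events to Kafka, and downstream code consumes them. Below is a minimal, hypothetical consumer sketch; the topic name and the Debezium-style event envelope are assumptions that a real deployment would fix in its connector config.

```python
# Hypothetical CDC consumer sketch: reads Debezium-style change events from Kafka.
# Topic name and event envelope are assumptions (pip install kafka-python).
import json

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "pg.public.orders",  # assumed Debezium topic: <server>.<schema>.<table>
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    auto_offset_reset="earliest",
)

for message in consumer:
    payload = message.value.get("payload", {})
    # Debezium tags each change with an op code: c=create, u=update, d=delete.
    op = payload.get("op")
    if op in ("c", "u"):
        print("upsert:", payload.get("after"))
    elif op == "d":
        print("delete:", payload.get("before"))
```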
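For the API-to-Airflow-to-Kafka-to-Spark-to-Cassandra entry, the Kafka-to-Spark leg typically uses Spark Structured Streaming. A minimal sketch follows, assuming a topic named events and a two-field JSON payload (both assumptions); the console sink stands in for the Cassandra writer.

```python
# Hypothetical Kafka -> Spark Structured Streaming sketch. Requires the
# spark-sql-kafka-0-10 package on the classpath; topic and schema are assumptions.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StringType, StructField, StructType

spark = SparkSession.builder.appName("kafka-stream-sketch").getOrCreate()

# Assumed payload shape, for illustration only.
schema = StructType([
    StructField("user_id", StringType()),
    StructField("event", StringType()),
])

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")
    .option("subscribe", "events")  # assumed topic name
    .load()
    # Kafka values arrive as bytes; decode them and parse the JSON payload.
    .select(from_json(col("value").cast("string"), schema).alias("e"))
    .select("e.*")
)

# Console sink for illustration; the referenced project writes to Cassandra
# (e.g. via the spark-cassandra-connector).
query = events.writeStream.format("console").outputMode("append").start()
query.awaitTermination()
```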
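Several entries (the Iceberg/dbt/Trino/Hive one and the sample lakehouse directly above) hinge on one configuration idea: register an Iceberg catalog against a Hive Metastore, with MinIO serving as S3-compatible object storage. The sketch below is hypothetical; endpoints, credentials, bucket, and table names are placeholders.

```python
# Hypothetical lakehouse config sketch: Spark with an Iceberg catalog backed by
# a Hive Metastore and MinIO storage. Requires the iceberg-spark-runtime and
# hadoop-aws jars; all endpoints and credentials below are placeholders.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("lakehouse-sketch")
    .config("spark.sql.extensions",
            "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions")
    .config("spark.sql.catalog.lake", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.lake.type", "hive")
    .config("spark.sql.catalog.lake.uri", "thrift://hive-metastore:9083")  # placeholder host
    .config("spark.sql.catalog.lake.warehouse", "s3a://warehouse/")        # placeholder bucket
    .config("spark.hadoop.fs.s3a.endpoint", "http://minio:9000")           # placeholder endpoint
    .config("spark.hadoop.fs.s3a.access.key", "minio")                     # placeholder credentials
    .config("spark.hadoop.fs.s3a.secret.key", "minio123")
    .config("spark.hadoop.fs.s3a.path.style.access", "true")
    .getOrCreate()
)

# Iceberg tables created here become visible to Trino through the same metastore.
spark.sql("CREATE NAMESPACE IF NOT EXISTS lake.demo")
spark.sql("CREATE TABLE IF NOT EXISTS lake.demo.trips (id BIGINT, fare DOUBLE) USING iceberg")
spark.sql("SELECT * FROM lake.demo.trips").show()
```

Because both Spark and Trino resolve tables through the shared Hive Metastore, either engine can read and write the same Iceberg tables, which is the core appeal of this setup.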