pyjaime / docker-airflow-spark
Docker with Airflow + Postgres + Spark cluster + JDK (spark-submit support) + Jupyter Notebooks
☆24 · Updated 3 years ago
Alternatives and similar repositories for docker-airflow-spark
Users interested in docker-airflow-spark are comparing it to the repositories listed below.
- Docker with Airflow and Spark standalone cluster · ☆262 · Updated 2 years ago
- Delta Lake, ETL, Spark, Airflow · ☆48 · Updated 3 years ago
- ☆41 · Updated 3 years ago
- Get data from an API, run a scheduled script with Airflow, send data to Kafka and consume it with Spark, then write to Cassandra · ☆144 · Updated 2 years ago
- A self-contained, ready-to-run Airflow ELT project. Can be run locally or within Codespaces. · ☆80 · Updated 2 years ago
- ☆46 · Updated last year
- End-to-end data engineering project · ☆58 · Updated 3 years ago
- A series of notebooks on how to start with Kafka and Python · ☆151 · Updated 11 months ago
- Simple stream processing pipeline · ☆110 · Updated last year
- Produce Kafka messages, consume them, and load them into Cassandra and MongoDB · ☆43 · Updated 2 years ago
- Sample project to demonstrate data engineering best practices · ☆202 · Updated last year
- Code for dbt tutorial · ☆168 · Updated 5 months ago
- Code snippets for Data Engineering Design Patterns book · ☆331 · Updated last month
- This repo contains a Spark standalone cluster on Docker for anyone who wants to play with PySpark by submitting their applications · ☆38 · Updated 2 years ago
- ☆93 · Updated last year
- Code for my "Efficient Data Processing in SQL" book · ☆60 · Updated last year
- Project for "Data pipeline design patterns" blog · ☆50 · Updated last year
- A workspace to experiment with Apache Spark, Livy, and Airflow in a Docker environment · ☆38 · Updated 4 years ago
- Near real-time ETL to populate a dashboard · ☆73 · Updated 5 months ago
- Building a Data Pipeline with an Open Source Stack · ☆56 · Updated 7 months ago
- ☆90 · Updated 3 years ago
- Build a data warehouse with dbt · ☆50 · Updated last year
- Delta Lake examples · ☆238 · Updated last year
- PySpark Cheat Sheet - example code to help you learn PySpark and develop apps faster · ☆488 · Updated last year
- ☆30 · Updated 2 years ago
- Create streaming data, send it to Kafka, transform it with PySpark, and load it into Elasticsearch and MinIO · ☆65 · Updated 2 years ago
- Example repo to create end-to-end tests for a data pipeline · ☆25 · Updated last year
- Local environment to practice data engineering · ☆144 · Updated last year
- A series on learning Apache Spark (PySpark), with quick tips and workarounds for everyday problems · ☆56 · Updated 2 years ago
- Code for blog at: https://www.startdataengineering.com/post/docker-for-de/ · ☆40 · Updated last year