mrn-aglic / spark-standalone-cluster
This repo contains a Spark standalone cluster on Docker for anyone who wants to play with PySpark by submitting their own applications.
☆30 · Updated last year
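To give a feel for what "submitting an application" to such a cluster involves, here is a minimal sketch that is not taken from the repo: a tiny PySpark word count wired to a standalone master. The master URL `spark://localhost:7077` (the default standalone master port) is an assumption; adjust it to whatever the project's docker-compose file actually exposes.

```python
# wordcount.py — minimal PySpark job to try against a standalone cluster.
# Assumption: the master is reachable at spark://localhost:7077 (the default
# standalone master port); the exact host/port depends on the compose setup.
from operator import add

from pyspark.sql import SparkSession

if __name__ == "__main__":
    spark = (
        SparkSession.builder
        .appName("wordcount-demo")
        .master("spark://localhost:7077")  # assumed master URL; change as needed
        .getOrCreate()
    )

    lines = spark.sparkContext.parallelize([
        "spark standalone cluster on docker",
        "submit pyspark applications to the cluster",
    ])

    counts = (
        lines.flatMap(lambda line: line.split())   # split lines into words
             .map(lambda word: (word, 1))          # pair each word with a count of 1
             .reduceByKey(add)                     # sum counts per word
    )

    for word, count in counts.collect():
        print(word, count)

    spark.stop()
```

Alternatively, the script can be handed to `spark-submit` from inside the master container, e.g. `spark-submit --master spark://spark-master:7077 wordcount.py`; the `spark-master` service name here is a guess, not something confirmed by the repo.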
Alternatives and similar repositories for spark-standalone-cluster:
Users interested in spark-standalone-cluster are comparing it to the repositories listed below:
- ☆79 · Updated 2 weeks ago
- Sample project to demonstrate data engineering best practices ☆179 · Updated 11 months ago
- End to end data engineering project ☆53 · Updated 2 years ago
- Delta Lake examples ☆217 · Updated 4 months ago
- Sample Data Lakehouse deployed in Docker containers using Apache Iceberg, Minio, Trino and a Hive Metastore. Can be used for local testin… ☆60 · Updated last year
- Near real time ETL to populate a dashboard. ☆73 · Updated 8 months ago
- A template repository to create a data project with IAC, CI/CD, Data migrations, & testing ☆256 · Updated 7 months ago
- velib-v2: An ETL pipeline that employs batch and streaming jobs using Spark, Kafka, Airflow, and other tools, all orchestrated with Docke… ☆18 · Updated 5 months ago
- Code for dbt tutorial ☆151 · Updated 8 months ago
- ☆119 · Updated last week
- Project for "Data pipeline design patterns" blog. ☆43 · Updated 6 months ago
- Execution of DBT models using Apache Airflow through Docker Compose ☆114 · Updated 2 years ago
- Docker with Airflow and Spark standalone cluster ☆249 · Updated last year
- Local Environment to Practice Data Engineering ☆141 · Updated last month
- Docker with Airflow + Postgres + Spark cluster + JDK (spark-submit support) + Jupyter Notebooks ☆22 · Updated 2 years ago
- Delta-Lake, ETL, Spark, Airflow ☆46 · Updated 2 years ago
- A self-contained, ready to run Airflow ELT project. Can be run locally or within codespaces. ☆63 · Updated last year
- 📡 Real-time data pipeline with Kafka, Flink, Iceberg, Trino, MinIO, and Superset. Ideal for learning data systems. ☆36 · Updated last month
- This project demonstrates how to use Apache Airflow to submit jobs to an Apache Spark cluster in different programming languages using Python… ☆38 · Updated 11 months ago
- Code snippets for Data Engineering Design Patterns book ☆69 · Updated 2 weeks ago
- build dw with dbt ☆36 · Updated 3 months ago
- Simple stream processing pipeline ☆98 · Updated 8 months ago
- Building a Data Pipeline with an Open Source Stack ☆45 · Updated 7 months ago
- Create streaming data, transfer it to Kafka, modify it with PySpark, and load it into Elasticsearch and MinIO ☆59 · Updated last year
- Code for "Efficient Data Processing in Spark" Course ☆275 · Updated 4 months ago
- Step-by-step tutorial on building a Kimball dimensional model with dbt ☆126 · Updated 7 months ago
- ☆13 · Updated last year
- In this project, we set up end-to-end data engineering using Apache Spark, Azure Databricks, and Data Build Tool (DBT), using Azure as our … ☆26 · Updated last year
- End-to-end data platform: A PoC Data Platform project utilizing modern data stack (Spark, Airflow, DBT, Trino, Lightdash, Hive metastore,… ☆29 · Updated 4 months ago
- Code for my "Efficient Data Processing in SQL" book. ☆56 · Updated 6 months ago