mrn-aglic / spark-standalone-cluster
This repo contains a Spark standalone cluster on Docker for anyone who wants to play with PySpark by submitting their own applications.
☆23 · Updated last year
Related projects
Alternatives and complementary repositories for spark-standalone-cluster
- A workspace to experiment with Apache Spark, Livy, and Airflow in a Docker environment. ☆39 · Updated 3 years ago
- ☆68 · Updated 5 months ago
- Docker with Airflow and a Spark standalone cluster. ☆245 · Updated last year
- Simple stream processing pipeline. ☆92 · Updated 5 months ago
- Sample data lakehouse deployed in Docker containers using Apache Iceberg, Minio, Trino, and a Hive Metastore. Can be used for local testin… ☆56 · Updated last year
- Delta Lake examples. ☆207 · Updated last month
- End-to-end data engineering project. ☆51 · Updated 2 years ago
- Ultimate guide for mastering Spark performance tuning and optimization concepts and preparing for data engineering interviews. ☆70 · Updated 6 months ago
- Project for the "Data pipeline design patterns" blog post. ☆41 · Updated 3 months ago
- Code for the "Efficient Data Processing in Spark" course. ☆245 · Updated last month
- Resources for the preparation course for the Databricks Data Engineer Professional certification exam. ☆86 · Updated last month
- A Python library to support running data quality rules while a Spark job is running ⚡ ☆163 · Updated last week
- Example of how to leverage Apache Spark's distributed capabilities to call a REST API using a UDF. ☆50 · Updated 2 years ago
- Simple repo to demonstrate how to submit a Spark job to EMR from Airflow. ☆32 · Updated 4 years ago
- Delta Lake, ETL, Spark, Airflow. ☆44 · Updated 2 years ago
- Delta Lake helper methods in PySpark. ☆304 · Updated 2 months ago
- velib-v2: an ETL pipeline that employs batch and streaming jobs using Spark, Kafka, Airflow, and other tools, all orchestrated with Docke… ☆18 · Updated 2 months ago
- Sample project to demonstrate data engineering best practices. ☆166 · Updated 8 months ago
- Custom PySpark data sources. ☆27 · Updated 2 months ago
- ☆38 · Updated 4 months ago
- ☆40 · Updated 10 months ago
- The Lakehouse Engine is a configuration-driven Spark framework, written in Python, serving as a scalable and distributed engine for sever… ☆223 · Updated 3 weeks ago
- Git repo for EDW best practice assets on the Lakehouse. ☆15 · Updated 11 months ago
- ☆32 · Updated last year
- Tutorial for setting up a Spark cluster running inside Docker containers located on different machines. ☆123 · Updated 2 years ago
- ☆113 · Updated last month
- ☆23 · Updated 3 years ago
- Spark style guide. ☆256 · Updated last month