mrn-aglic / spark-standalone-cluster
This repo contains a Spark standalone cluster on Docker for anyone who wants to play with PySpark by submitting their own applications.
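Submitting an application to a standalone cluster is typically done with `spark-submit` pointed at the standalone master. A minimal sketch of building that command in Python — the container hostname `spark-master`, the app path, and the resource settings are assumptions, not taken from this repo (port 7077 is Spark's standalone default):

```python
# Sketch: constructing a spark-submit invocation for a standalone cluster.
# Hostname, app path, and resource flags below are hypothetical examples.
import shlex

def spark_submit_cmd(app_path, master="spark://spark-master:7077",
                     executor_memory="1g", total_executor_cores=2):
    """Return the spark-submit argv for a Spark standalone master."""
    return [
        "spark-submit",
        "--master", master,                 # standalone master URL
        "--executor-memory", executor_memory,
        "--total-executor-cores", str(total_executor_cores),
        app_path,                           # your PySpark script
    ]

print(shlex.join(spark_submit_cmd("/opt/spark-apps/wordcount.py")))
```

In a dockerized setup like this one, the command would usually be run inside the master container (e.g. via `docker exec`), with the application file mounted into a path visible to the cluster.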
☆28 · Updated last year
Alternatives and similar repositories for spark-standalone-cluster:
Users interested in spark-standalone-cluster are comparing it to the repositories listed below.
- ☆75 · Updated 7 months ago
- Docker with Airflow and Spark standalone cluster · ☆247 · Updated last year
- Delta-Lake, ETL, Spark, Airflow · ☆45 · Updated 2 years ago
- Simple stream processing pipeline · ☆94 · Updated 7 months ago
- Simple repo to demonstrate how to submit a Spark job to EMR from Airflow · ☆32 · Updated 4 years ago
- Delta Lake examples · ☆214 · Updated 3 months ago
- Local environment to practice data engineering · ☆112 · Updated 2 weeks ago
- End-to-end data engineering project · ☆53 · Updated 2 years ago
- Sample project to demonstrate data engineering best practices · ☆174 · Updated 10 months ago
- A template repository to create a data project with IaC, CI/CD, data migrations, and testing · ☆252 · Updated 6 months ago
- A self-contained, ready-to-run Airflow ELT project; can be run locally or within Codespaces · ☆62 · Updated last year
- Code for the "Efficient Data Processing in Spark" course · ☆269 · Updated 3 months ago
- Near-real-time ETL to populate a dashboard · ☆71 · Updated 7 months ago
- A workspace to experiment with Apache Spark, Livy, and Airflow in a Docker environment · ☆39 · Updated 3 years ago
- Sample data lakehouse deployed in Docker containers using Apache Iceberg, MinIO, Trino, and a Hive Metastore; can be used for local testin… · ☆58 · Updated last year
- Ultimate guide for mastering Spark performance tuning and optimization concepts and for preparing for data engineering interviews · ☆96 · Updated 7 months ago
- ☆116 · Updated 3 months ago
- A Python library to support running data quality rules while the Spark job is running ⚡ · ☆167 · Updated last week
- Tutorial for setting up a Spark cluster running inside Docker containers located on different machines · ☆126 · Updated 2 years ago
- Example of how to leverage Apache Spark's distributed capabilities to call a REST API using a UDF · ☆50 · Updated 2 years ago
- Code snippets for the Data Engineering Design Patterns book · ☆49 · Updated last week
- The Lakehouse Engine is a configuration-driven Spark framework, written in Python, serving as a scalable and distributed engine for sever… · ☆229 · Updated 2 months ago
- Step-by-step tutorial on building a Kimball dimensional model with dbt · ☆118 · Updated 6 months ago
- Code for a dbt tutorial · ☆149 · Updated 7 months ago
- Learn Apache Spark in Scala, Python (PySpark), and R (SparkR) by building your own cluster with a JupyterLab interface on Docker · ☆473 · Updated 2 years ago
- ☆44 · Updated last year
- Docker with Airflow + Postgres + Spark cluster + JDK (spark-submit support) + Jupyter Notebooks · ☆21 · Updated 2 years ago