mrn-aglic / spark-standalone-cluster
This repo contains a Spark standalone cluster on Docker for anyone who wants to play with PySpark by submitting their own applications.
☆36 · Updated 2 years ago
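The repo is meant for playing with PySpark by submitting applications to the cluster. A minimal session might look like the sketch below; note that the compose file, the container name `spark-master`, the master URL, and the script path are all assumptions for illustration, not taken from the repo:

```shell
# Bring up the standalone cluster (assumes a docker-compose file at the repo root)
docker compose up -d

# Submit a PySpark application to the master.
# Container name, master URL, and application path are hypothetical.
docker exec spark-master spark-submit \
  --master spark://spark-master:7077 \
  /opt/spark-apps/my_app.py
```

Check the repo's own compose file and README for the actual service names and mount points before running anything like this.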
Alternatives and similar repositories for spark-standalone-cluster
Users interested in spark-standalone-cluster are comparing it to the repositories listed below.
- Docker with Airflow and Spark standalone cluster ☆261 · Updated 2 years ago
- Code for dbt tutorial ☆162 · Updated last month
- Local Environment to Practice Data Engineering ☆141 · Updated 10 months ago
- ☆90 · Updated 8 months ago
- A self-contained, ready to run Airflow ELT project. Can be run locally or within codespaces. ☆78 · Updated 2 years ago
- Sample project to demonstrate data engineering best practices ☆197 · Updated last year
- Code for "Efficient Data Processing in Spark" Course ☆345 · Updated 2 weeks ago
- A template repository to create a data project with IAC, CI/CD, Data migrations, & testing ☆279 · Updated last year
- Project for "Data pipeline design patterns" blog. ☆46 · Updated last year
- Delta-Lake, ETL, Spark, Airflow ☆48 · Updated 3 years ago
- The Lakehouse Engine is a configuration driven Spark framework, written in Python, serving as a scalable and distributed engine for sever… ☆268 · Updated 3 weeks ago
- velib-v2: An ETL pipeline that employs batch and streaming jobs using Spark, Kafka, Airflow, and other tools, all orchestrated with Docke… ☆20 · Updated 2 months ago
- Sample Data Lakehouse deployed in Docker containers using Apache Iceberg, Minio, Trino and a Hive Metastore. Can be used for local testin… ☆74 · Updated 2 years ago
- Simple repo to demonstrate how to submit a spark job to EMR from Airflow ☆34 · Updated 5 years ago
- End to end data engineering project ☆57 · Updated 3 years ago
- Code snippets for Data Engineering Design Patterns book ☆249 · Updated 7 months ago
- Project utilising data from the Age of Empires api at 'https://aoestats.io' ☆52 · Updated 10 months ago
- Simple stream processing pipeline ☆110 · Updated last year
- Learn Apache Spark in Scala, Python (PySpark) and R (SparkR) by building your own cluster with a JupyterLab interface on Docker. ☆494 · Updated 2 years ago
- In this repository we store all materials for dlt workshops, courses, etc. ☆233 · Updated 2 weeks ago
- Near real time ETL to populate a dashboard. ☆72 · Updated last month
- Code for blog at: https://www.startdataengineering.com/post/docker-for-de/ ☆40 · Updated last year
- ☆137 · Updated 8 months ago
- Get data from API, run a scheduled script with Airflow, send data to Kafka and consume with Spark, then write to Cassandra ☆143 · Updated 2 years ago
- The resources of the preparation course for Databricks Data Engineer Professional certification exam ☆144 · Updated this week
- Event data simulator. Generates a stream of pseudo-random events from a set of users, designed to simulate web traffic. ☆89 · Updated last year
- Notebooks to learn Databricks Lakehouse Platform ☆35 · Updated last week
- Docker with Airflow + Postgres + Spark cluster + JDK (spark-submit support) + Jupyter Notebooks ☆24 · Updated 3 years ago
- Execution of DBT models using Apache Airflow through Docker Compose ☆121 · Updated 2 years ago
- build dw with dbt ☆47 · Updated last year