mrn-aglic / spark-standalone-cluster
This repo contains a Spark standalone cluster on Docker for anyone who wants to play with PySpark by submitting their own applications.
☆32 · Updated last year
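As a quick orientation, below is a minimal sketch of the kind of PySpark application you would submit to a standalone cluster like this one. The master URL `spark://localhost:7077` (the default standalone master port) and the script name `minimal_job.py` are assumptions for illustration; the actual host, port, and service names depend on the repo's docker-compose setup.

```python
# minimal_job.py — minimal PySpark job sketch for a standalone cluster.
# Assumes the master is reachable at spark://localhost:7077; adjust to
# match the ports/services exposed by the docker-compose configuration.
from pyspark.sql import SparkSession


def main():
    spark = (
        SparkSession.builder
        .appName("standalone-cluster-smoke-test")
        .master("spark://localhost:7077")  # assumed master URL
        .getOrCreate()
    )

    # Build a tiny DataFrame and run a trivial aggregation to verify
    # that executors on the workers actually pick up the job.
    df = spark.createDataFrame(
        [("alice", 3), ("bob", 5), ("alice", 7)],
        ["name", "value"],
    )
    df.groupBy("name").sum("value").show()

    spark.stop()


if __name__ == "__main__":
    main()
```

A typical way to run it would be `spark-submit --master spark://localhost:7077 minimal_job.py`, either from the host or from inside the master container, depending on how the cluster exposes its ports.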
Alternatives and similar repositories for spark-standalone-cluster:
Users interested in spark-standalone-cluster are comparing it to the repositories listed below.
- Docker with Airflow and Spark standalone cluster ☆253 · Updated last year
- Local Environment to Practice Data Engineering ☆143 · Updated 3 months ago
- Simple stream processing pipeline ☆99 · Updated 9 months ago
- Building a Modern Data Lake with Minio, Spark, Airflow via Docker. ☆17 · Updated 10 months ago
- Code for dbt tutorial ☆153 · Updated 9 months ago
- Simple repo to demonstrate how to submit a spark job to EMR from Airflow ☆33 · Updated 4 years ago
- Delta-Lake, ETL, Spark, Airflow ☆46 · Updated 2 years ago
- A template repository to create a data project with IAC, CI/CD, Data migrations, & testing ☆260 · Updated 8 months ago
- Sample project to demonstrate data engineering best practices ☆184 · Updated last year
- Data pipeline performing ETL to AWS Redshift using Spark, orchestrated with Apache Airflow ☆141 · Updated 4 years ago
- Sample Data Lakehouse deployed in Docker containers using Apache Iceberg, Minio, Trino and a Hive Metastore. Can be used for local testin… ☆63 · Updated last year
- Code for "Efficient Data Processing in Spark" Course ☆287 · Updated 5 months ago
- Tutorial for setting up a Spark cluster running inside of Docker containers located on different machines ☆128 · Updated 2 years ago
- A workspace to experiment with Apache Spark, Livy, and Airflow in a Docker environment. ☆38 · Updated 4 years ago
- End to end data engineering project ☆53 · Updated 2 years ago
- ☆82 · Updated last month
- Docker with Airflow + Postgres + Spark cluster + JDK (spark-submit support) + Jupyter Notebooks ☆23 · Updated 2 years ago
- End-to-end data pipeline that ingests, processes, and stores data. It uses Apache Airflow to schedule scripts that fetch data from an API… ☆14 · Updated 8 months ago
- Create a streaming data, transfer it to Kafka, modify it with PySpark, take it to ElasticSearch and MinIO ☆59 · Updated last year
- Near real time ETL to populate a dashboard. ☆73 · Updated 9 months ago
- velib-v2: An ETL pipeline that employs batch and streaming jobs using Spark, Kafka, Airflow, and other tools, all orchestrated with Docke… ☆18 · Updated 6 months ago
- Delta Lake examples ☆218 · Updated 5 months ago
- 📡 Real-time data pipeline with Kafka, Flink, Iceberg, Trino, MinIO, and Superset. Ideal for learning data systems. ☆40 · Updated 2 months ago
- ☆36 · Updated 2 years ago
- End to end data engineering project with kafka, airflow, spark, postgres and docker. ☆86 · Updated this week
- Resources for video demonstrations and blog posts related to DataOps on AWS ☆172 · Updated 3 years ago
- Generate synthetic Spotify music stream dataset to create dashboards. Spotify API generates fake event data emitted to Kafka. Spark consu… ☆67 · Updated last year
- ☆126 · Updated last month
- ☆40 · Updated 8 months ago
- Execution of DBT models using Apache Airflow through Docker Compose ☆116 · Updated 2 years ago