mrn-aglic / pyspark-playground
☆90 · Updated 6 months ago
Alternatives and similar repositories for pyspark-playground
Users interested in pyspark-playground are comparing it to the repositories listed below.
- Docker with Airflow and Spark standalone cluster ☆261 · Updated 2 years ago
- Local Environment to Practice Data Engineering ☆143 · Updated 7 months ago
- An end-to-end data engineering pipeline that orchestrates data ingestion, processing, and storage using Apache Airflow, Python, Apache Ka… ☆268 · Updated 5 months ago
- 📡 Real-time data pipeline with Kafka, Flink, Iceberg, Trino, MinIO, and Superset. Ideal for learning data systems. ☆47 · Updated 6 months ago
- Code snippets for the Data Engineering Design Patterns book ☆142 · Updated 4 months ago
- This repo contains a Spark standalone cluster on Docker for anyone who wants to play with PySpark by submitting their applications. ☆35 · Updated 2 years ago
- Code for the "Efficient Data Processing in Spark" course ☆326 · Updated 2 months ago
- Learn Apache Spark in Scala, Python (PySpark) and R (SparkR) by building your own cluster with a JupyterLab interface on Docker. ☆496 · Updated 2 years ago
- Delta Lake, ETL, Spark, Airflow ☆47 · Updated 2 years ago
- Resources for the preparation course for the Databricks Data Engineer Professional certification exam ☆127 · Updated last month
- Building a Data Pipeline with an Open Source Stack ☆55 · Updated last month
- Code for a dbt tutorial ☆159 · Updated 2 months ago
- Sample project to demonstrate data engineering best practices ☆195 · Updated last year
- A template repository to create a data project with IaC, CI/CD, data migrations, and testing ☆271 · Updated last year
- Nyc_Taxi_Data_Pipeline - DE Project ☆115 · Updated 9 months ago
- Create streaming data, transfer it to Kafka, transform it with PySpark, then write it to Elasticsearch and MinIO ☆63 · Updated 2 years ago
- Playground for Lakehouse (Iceberg, Hudi, Spark, Flink, Trino, dbt, Airflow, Kafka, Debezium CDC) ☆59 · Updated last year
- Sample Data Lakehouse deployed in Docker containers using Apache Iceberg, MinIO, Trino and a Hive Metastore. Can be used for local testin… ☆73 · Updated last year
- A self-contained, ready-to-run Airflow ELT project. Can be run locally or within Codespaces. ☆74 · Updated last year
- Simple stream processing pipeline ☆103 · Updated last year
- End-to-end data engineering project with Kafka, Airflow, Spark, Postgres and Docker ☆98 · Updated 4 months ago
- Python data repo: Jupyter notebooks, Python scripts and data ☆519 · Updated 7 months ago
- Get data from an API, run a scheduled script with Airflow, send data to Kafka and consume it with Spark, then write to Cassandra ☆141 · Updated 2 years ago
- Delta Lake examples ☆227 · Updated 10 months ago
- End-to-end data platform: a PoC data platform project using a modern data stack (Spark, Airflow, dbt, Trino, Lightdash, Hive metastore,… ☆42 · Updated 9 months ago
- Build a data warehouse with dbt ☆48 · Updated 9 months ago
- The Lakehouse Engine is a configuration-driven Spark framework, written in Python, serving as a scalable and distributed engine for sever… ☆257 · Updated last week
- Tutorial for setting up a Spark cluster running inside Docker containers located on different machines ☆133 · Updated 2 years ago
- Multi-container environment with Hadoop, Spark and Hive ☆218 · Updated 3 months ago
- ☆40 · Updated 2 years ago