ongxuanhong / de02-pyspark-optimization
☆14 · Updated 2 years ago
Alternatives and similar repositories for de02-pyspark-optimization
Users interested in de02-pyspark-optimization are comparing it to the repositories listed below.
- Simple stream processing pipeline (☆110, updated last year)
- Code for dbt tutorial (☆162, updated last month)
- (☆268, updated 11 months ago)
- Docker with Airflow and Spark standalone cluster (☆260, updated 2 years ago)
- (☆90, updated 8 months ago)
- Delta Lake examples (☆229, updated last year)
- Sample Data Lakehouse deployed in Docker containers using Apache Iceberg, Minio, Trino, and a Hive Metastore. Can be used for local testin… (☆74, updated 2 years ago)
- Sample project to demonstrate data engineering best practices (☆198, updated last year)
- Creation of a data lakehouse and an ELT pipeline to enable efficient analysis and use of data (☆48, updated last year)
- Building a Data Pipeline with an Open Source Stack (☆54, updated 3 months ago)
- velib-v2: An ETL pipeline that employs batch and streaming jobs using Spark, Kafka, Airflow, and other tools, all orchestrated with Docke… (☆20, updated 2 months ago)
- Code snippets for the Data Engineering Design Patterns book (☆232, updated 7 months ago)
- Playground for Lakehouse (Iceberg, Hudi, Spark, Flink, Trino, dbt, Airflow, Kafka, Debezium CDC) (☆61, updated 2 years ago)
- Repo for everything open table formats (Iceberg, Hudi, Delta Lake) and the overall Lakehouse architecture (☆110, updated 4 months ago)
- End-to-end data engineering project (☆57, updated 2 years ago)
- Delta Lake, ETL, Spark, Airflow (☆48, updated 3 years ago)
- Simple repo to demonstrate how to submit a Spark job to EMR from Airflow (☆34, updated 5 years ago)
- A template repository to create a data project with IaC, CI/CD, data migrations, and testing (☆279, updated last year)
- End-to-end data platform: a PoC data platform project utilizing the modern data stack (Spark, Airflow, dbt, Trino, Lightdash, Hive metastore,… (☆44, updated last year)
- Code for the "Efficient Data Processing in Spark" course (☆344, updated 4 months ago)
- Learn Apache Spark in Scala, Python (PySpark), and R (SparkR) by building your own cluster with a JupyterLab interface on Docker (☆494, updated 2 years ago)
- (☆140, updated 8 months ago)
- A series on learning Apache Spark (PySpark), with quick tips and workarounds for everyday problems (☆56, updated 2 years ago)
- Quick guides from Dremio on several topics (☆78, updated 2 weeks ago)
- Local environment to practice data engineering (☆141, updated 9 months ago)
- A Python package that creates fine-grained dbt tasks on Apache Airflow (☆74, updated last week)
- Tutorial for setting up a Spark cluster running inside Docker containers located on different machines (☆134, updated 2 years ago)
- (☆17, updated last year)
- Delta Lake documentation (☆50, updated last year)
- End-to-end data platform leveraging the modern data stack (☆51, updated last year)