palantir/pyspark-style-guide
A guide to PySpark code style that presents common situations and the associated best practices, based on the most frequently recurring topics across the PySpark repos we've encountered.
☆1,184 · Updated last month
Alternatives and similar repositories for pyspark-style-guide
Users interested in pyspark-style-guide are comparing it to the libraries listed below.
- PySpark test helper methods with beautiful error messages — ☆723 · Updated last month
- Python API for Deequ — ☆801 · Updated 6 months ago
- Data quality testing for the modern data stack (SQL, Spark, and Pandas) https://www.soda.io — ☆2,204 · Updated last week
- pyspark methods to enhance developer productivity 📣 👯 🎉 — ☆675 · Updated 7 months ago
- Code for Data Pipelines with Apache Airflow — ☆802 · Updated last year
- The easiest way to run Airflow locally, with linting & tests for valid DAGs and Plugins — ☆257 · Updated 4 years ago
- Delta Lake helper methods in PySpark — ☆323 · Updated last year
- Port(ish) of Great Expectations to dbt test macros — ☆1,204 · Updated 10 months ago
- (no description) — ☆833 · Updated 6 months ago
- Guides and docs to help you get up and running with Apache Airflow — ☆810 · Updated 3 months ago
- A Data Engineering & Machine Learning Knowledge Hub — ☆1,134 · Updated last year
- A curated list of awesome dbt resources — ☆1,572 · Updated last week
- Run your dbt Core or dbt Fusion projects as Apache Airflow DAGs and Task Groups with a few lines of code — ☆1,066 · Updated last week
- Open Source LeetCode for PySpark, Spark, Pandas and DBT/Snowflake — ☆220 · Updated 4 months ago
- Template for a data contract used in a data mesh — ☆479 · Updated last year
- PySpark Cheat Sheet: example code to help you learn PySpark and develop apps faster — ☆479 · Updated last year
- Accumulated knowledge and experience in the field of Data Engineering — ☆872 · Updated 2 years ago
- Construct Apache Airflow DAGs declaratively via YAML configuration files — ☆1,362 · Updated this week
- The Lakehouse Engine is a configuration-driven Spark framework, written in Python, serving as a scalable and distributed engine for sever… — ☆268 · Updated 3 weeks ago
- This repository goes over how to handle massive variety in data engineering — ☆304 · Updated 2 years ago
- Learn Apache Spark in Scala, Python (PySpark) and R (SparkR) by building your own cluster with a JupyterLab interface on Docker — ☆495 · Updated 2 years ago
- Docker with Airflow and Spark standalone cluster — ☆261 · Updated 2 years ago
- Educational project on how to build an ETL (Extract, Transform, Load) data pipeline, orchestrated with Airflow — ☆335 · Updated 3 years ago
- 🐍 Quick reference guide to common patterns & functions in PySpark — ☆623 · Updated 2 years ago
- Utility functions for dbt projects — ☆1,637 · Updated last month
- Astro SDK allows rapid and clean development of {Extract, Load, Transform} workflows using Python and SQL, powered by Apache Airflow — ☆375 · Updated 5 months ago
- Code for the "Efficient Data Processing in Spark" course — ☆345 · Updated 2 weeks ago
- Implementing best practices for PySpark ETL jobs and applications — ☆2,015 · Updated 2 years ago
- A self-contained dbt project for testing purposes — ☆509 · Updated last year
- Tracking and measuring neighborhood and district-level eviction rates in the city of San Francisco — ☆139 · Updated 5 years ago