MrPowers / farsante
Fake Pandas / PySpark DataFrame creator
☆47 · Updated last year
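For context, farsante generates DataFrames filled with realistic fake data for tests and examples. Below is a minimal sketch of the same idea using Faker and pandas directly; it is not farsante's own API, which wraps exactly this kind of boilerplate.

```python
# Hedged sketch: build a small fake pandas DataFrame with Faker.
# This illustrates the boilerplate a fake-DataFrame creator removes;
# it is NOT farsante's actual API.
import pandas as pd
from faker import Faker

fake = Faker()
n_rows = 5

df = pd.DataFrame(
    {
        "first_name": [fake.first_name() for _ in range(n_rows)],
        "last_name": [fake.last_name() for _ in range(n_rows)],
        "email": [fake.email() for _ in range(n_rows)],
    }
)
print(df)
```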
Alternatives and similar repositories for farsante
Users interested in farsante are comparing it to the libraries listed below.
- Read Delta tables without any Spark (see the sketch after this list) ☆47 · Updated last year
- Possibly the fastest DataFrame-agnostic quality check library in town. ☆195 · Updated last week
- Delta Lake and filesystem helper methods ☆51 · Updated last year
- Dask integration for Snowflake ☆30 · Updated 8 months ago
- Supporting materials/code examples for my course in data engineering for machine learning. ☆38 · Updated 2 years ago
- A simple and easy-to-use Data Quality (DQ) tool built with Python. ☆50 · Updated last year
- Pandas helper functions ☆31 · Updated 2 years ago
- Cost Efficient Data Pipelines with DuckDB ☆54 · Updated 2 months ago
- Data-aware orchestration with dagster, dbt, and airbyte ☆30 · Updated 2 years ago
- Tutorials for Fugue - A unified interface for distributed computing. Fugue executes SQL, Python, and Pandas code on Spark and Dask withou… ☆113 · Updated last year
- A Python library to support running data quality rules while the Spark job is running ⚡ ☆188 · Updated this week
- A Python package to help Databricks Unity Catalog users read and query Delta Lake tables with Polars, DuckDB, or PyArrow. ☆25 · Updated last year
- Weekly Data Engineering Newsletter ☆95 · Updated last year
- Utility functions for dbt projects running on Spark ☆34 · Updated 5 months ago
- JumpSpark - A modern cookiecutter template for PySpark projects with batteries included. ☆10 · Updated 2 years ago
- CSV and flat-file sniffer built in Rust. ☆41 · Updated last year
- PyJaws: A Pythonic Way to Define Databricks Jobs and Workflows ☆43 · Updated 3 weeks ago
- Pythonic Programming Framework to orchestrate jobs in Databricks Workflow ☆217 · Updated 3 weeks ago
- A GitHub Action that makes it easy to use Great Expectations to validate your data pipelines in your CI workflows. ☆80 · Updated last year
- Soda Spark is a PySpark library that helps you test your data in Spark DataFrames ☆64 · Updated 3 years ago
- ✨ A Pydantic to PySpark schema library ☆98 · Updated last week
- Write your dbt models using Ibis ☆68 · Updated 3 months ago
- A write-audit-publish implementation on a data lake without the JVM ☆46 · Updated 11 months ago
- A dbt-Core package for generating models from an activity stream. ☆43 · Updated last year
- ☆75 · Updated 4 months ago
- The Picnic Data Vault framework. ☆128 · Updated last year
- Full stack data engineering tools and infrastructure set-up ☆53 · Updated 4 years ago
- Delta Lake examples ☆226 · Updated 9 months ago
- Delta Lake helper methods in PySpark ☆324 · Updated 10 months ago
- VSCode extension to work with Databricks ☆132 · Updated last month
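The first entry above ("Read Delta tables without any Spark") refers to querying Delta Lake data from plain Python. A minimal sketch using the deltalake package (the delta-rs Python bindings) as one common way to do this; the listed repository's own API may differ.

```python
# Hedged sketch: read a Delta table into pandas without a Spark cluster,
# using the deltalake (delta-rs) Python bindings. The listed repo's API may differ.
from deltalake import DeltaTable

# Hypothetical local table path used for illustration only.
dt = DeltaTable("./data/my_delta_table")

print(dt.version())   # current table version
df = dt.to_pandas()   # materialize the table as a pandas DataFrame
print(df.head())
```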