Nordstrom / bigdata-profiler
Profiles the data, validates the schema, runs data quality checks, and produces a report (a plain-PySpark sketch of this flow is shown below).
☆20 · Updated 6 years ago
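A minimal sketch, in plain PySpark, of the profile / schema-validation / quality-check / report flow described above. This is not bigdata-profiler's actual API; the input path, expected schema, and key column are assumptions for illustration.

```python
# Sketch of a profile -> validate schema -> quality checks -> report flow in plain PySpark.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, LongType

spark = SparkSession.builder.appName("profiler-sketch").getOrCreate()

# Hypothetical input path and expected schema (assumptions for illustration).
df = spark.read.parquet("s3://example-bucket/orders/")
expected = StructType([
    StructField("order_id", LongType(), False),
    StructField("customer_id", StringType(), True),
])

# 1. Profile: basic summary statistics per column.
profile = df.describe()

# 2. Validate schema: compare actual column names and types against the expectation.
schema_ok = {f.name: f.dataType for f in df.schema} == {f.name: f.dataType for f in expected}

# 3. Quality checks: null counts per column and duplicate key rows.
null_counts = df.select([F.count(F.when(F.col(c).isNull(), c)).alias(c) for c in df.columns])
duplicate_keys = df.groupBy("order_id").count().filter("count > 1").count()

# 4. Report: collect results into a simple dict (a real tool would render HTML/JSON).
report = {
    "schema_ok": schema_ok,
    "duplicate_keys": duplicate_keys,
    "null_counts": null_counts.first().asDict(),
}
print(report)
```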
Alternatives and similar repositories for bigdata-profiler
Users interested in bigdata-profiler are comparing it to the libraries listed below.
- Data validation library for PySpark 3.0.0 ☆33 · Updated 2 years ago
- PySpark for ETL jobs including lineage to Apache Atlas in one script via code inspection ☆18 · Updated 8 years ago
- Amundsen Gremlin ☆21 · Updated 2 years ago
- Lighthouse is a library for data lakes built on top of Apache Spark. It provides high-level APIs in Scala to streamline data pipelines an… ☆61 · Updated 9 months ago
- Examples for High Performance Spark ☆16 · Updated 7 months ago
- ☆10 · Updated 3 years ago
- Skeleton project for Apache Airflow training participants to work on. ☆16 · Updated 4 years ago
- Spark app to merge different schemas (see the schema-merge sketch after this list) ☆23 · Updated 4 years ago
- Rules based grant management for Snowflake ☆40 · Updated 6 years ago
- Waimak is an open-source framework that makes it easier to create complex data flows in Apache Spark. ☆75 · Updated last year
- Type-class based data cleansing library for Apache Spark SQL ☆78 · Updated 6 years ago
- A toolset to streamline running Spark Python on EMR ☆20 · Updated 8 years ago
- Yet Another (Spark) ETL Framework ☆21 · Updated last year
- The sane way of building a data layer in Airflow ☆24 · Updated 5 years ago
- Sample processing code using Spark 2.1+ and Scala ☆51 · Updated 4 years ago
- A Spark datasource for the HadoopOffice library ☆38 · Updated 2 years ago
- How to manage Slowly Changing Dimensions with Apache Hive ☆55 · Updated 5 years ago
- A tool to validate data, built around Apache Spark. ☆101 · Updated last month
- Magic to help Spark pipelines upgrade ☆35 · Updated 8 months ago
- Weekly Data Engineering Newsletter ☆96 · Updated 11 months ago
- Code snippets used in demos recorded for the blog. ☆37 · Updated last week
- A library that brings useful functions from various modern database management systems to Apache Spark ☆59 · Updated last year
- 📆 Run, schedule, and manage your dbt jobs using Kubernetes. ☆24 · Updated 6 years ago
- Nested Data (JSON/AVRO/XML) Parsing and Flattening in Spark ☆16 · Updated last year
- A bunch of hacks developed around dbt ☆48 · Updated 5 years ago
- Scalable CDC Pattern Implemented using PySpark ☆18 · Updated 5 years ago
- Examples of Spark 3.0 ☆47 · Updated 4 years ago
- Data Catalog for Databases and Data Warehouses ☆35 · Updated last year
- Flowman is an ETL framework powered by Apache Spark. With its declarative approach, Flowman simplifies the development of complex data pi… ☆95 · Updated this week
- Delta reader for the Ray open-source toolkit for building ML applications ☆46 · Updated last year
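As referenced in the "merge different schemas" entry above, here is a minimal sketch of combining DataFrames whose schemas have drifted, using plain PySpark's unionByName (Spark 3.1+). The sample tables and column names are made up for illustration and are not that project's API.

```python
# Sketch of merging two batches with different schemas by aligning columns on name.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("schema-merge-sketch").getOrCreate()

# Two hypothetical batches whose schemas drifted: the newer one gained a "channel" column.
old_batch = spark.createDataFrame([(1, "alice")], ["id", "name"])
new_batch = spark.createDataFrame([(2, "bob", "web")], ["id", "name", "channel"])

# unionByName(allowMissingColumns=True) aligns columns by name and
# fills columns missing on either side with nulls.
merged = old_batch.unionByName(new_batch, allowMissingColumns=True)
merged.show()
```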