julioasotodv / spark-df-profiling
Create HTML profiling reports from Apache Spark DataFrames
★197 · Updated Feb 2, 2020
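The snippet below is a minimal sketch of how a report might be generated with this library, following its pandas-profiling-style interface; the `ProfileReport` entry point and the `to_file(outputfile=...)` call are assumptions to verify against the project README, and the DataFrame is a stand-in.

```python
from pyspark.sql import SparkSession
import spark_df_profiling

spark = SparkSession.builder.appName("profiling-demo").getOrCreate()

# Any Spark DataFrame works; a small inline example for illustration.
df = spark.createDataFrame(
    [(1, "a", 10.0), (2, "b", None), (3, "a", 5.5)],
    ["id", "category", "value"],
)

# Build the profile and write it out as a standalone HTML report
# (ProfileReport / to_file are assumed from the pandas-profiling-style API).
report = spark_df_profiling.ProfileReport(df)
report.to_file(outputfile="spark_df_profile.html")
```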
Alternatives and similar repositories for spark-df-profiling
Users interested in spark-df-profiling often compare it to the libraries listed below.
- Data Exploration in PySpark made easy: Pyspark_dist_explore provides methods to get fast insights into your Spark DataFrames. · ★102 · Updated Aug 20, 2019
- PySpark methods to enhance developer productivity · ★682 · Updated Mar 6, 2025
- This repository contains NiFi processors for interacting with Snowflake Cloud Data Platform. · ★12 · Updated Dec 13, 2024
- Agile Data Preparation Workflows made easy with Pandas, Dask, cuDF, Dask-cuDF, Vaex and PySpark · ★1,541 · Updated Dec 2, 2024
- Movie Recommendation System Using Spark ML, Akka and Cassandra · ★12 · Updated Oct 4, 2019
- Regularized latent variable mixed membership modeling · ★13 · Updated Aug 12, 2013
- The PEDSnet Data Quality Assessment Toolkit (OMOP CDM) · ★27 · Updated Apr 16, 2021
- Jupyter magics and kernels for working with remote Spark clusters · ★1,363 · Updated Sep 9, 2025
- ★10 · Updated Jun 29, 2023
- ★12 · Updated Aug 6, 2020
- Pandas in black and white: a collection of opinionated pandas flashcards · ★14 · Updated Feb 15, 2019
- DataQuality for BigData · ★147 · Updated Dec 15, 2023
- ★26 · Updated Jul 9, 2023
- 1 line of code data quality profiling & exploratory data analysis for Pandas and Spark DataFrames · ★13,372 · Updated Feb 2, 2026
- This code allows you to load any existing Azure Data Factory project file (*.dfproj) and perform further actions like "Export to ARM Temp…" · ★26 · Updated May 5, 2019
- Spark package for checking data quality · ★222 · Updated Feb 28, 2020
- The easiest way to integrate Kedro and Great Expectations · ★54 · Updated Dec 26, 2022
- Customized Spark processor on NiFi · ★15 · Updated Dec 4, 2015
- Deequ is a library built on top of Apache Spark for defining "unit tests for data", which measure data quality in large datasets (see the sketch after this list). · ★3,580 · Updated Feb 2, 2026
- CLI for data platform · ★20 · Updated Nov 12, 2025
- Publication: Linked electronic health records for research on a nationwide cohort including over 54 million people in England · ★19 · Updated Mar 12, 2023
- Single view demo · ★14 · Updated Feb 13, 2016
- Python automatic data quality check toolkit · ★278 · Updated Sep 15, 2020
- Joblib Apache Spark Backend · ★249 · Updated Apr 7, 2025
- ★19 · Updated Mar 24, 2018
- Examples of metadata driven SQL processes implemented in Databricks · ★16 · Updated May 21, 2021
- Flowchart for debugging Spark applications · ★106 · Updated Sep 25, 2024
- ★39 · Updated Mar 4, 2019
- A Spark datasource for the HadoopOffice library
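The Deequ entry above describes declarative "unit tests for data". As a rough illustration, here is a minimal sketch using the PyDeequ Python bindings (a separate companion package) rather than Deequ's native Scala API; the data, check names, and session configuration are assumptions for demonstration only.

```python
from pyspark.sql import SparkSession
import pydeequ
from pydeequ.checks import Check, CheckLevel
from pydeequ.verification import VerificationSuite, VerificationResult

# Spark session with the Deequ jar pulled in via the Maven coordinates PyDeequ exposes.
spark = (SparkSession.builder
         .config("spark.jars.packages", pydeequ.deequ_maven_coord)
         .config("spark.jars.excludes", pydeequ.f2j_maven_coord)
         .getOrCreate())

# Hypothetical toy DataFrame standing in for a real dataset.
df = spark.createDataFrame([(1, "a"), (2, "b"), (3, None)], ["id", "value"])

# Declare a couple of "unit tests for data": id must be complete and unique.
check = (Check(spark, CheckLevel.Error, "basic checks")
         .isComplete("id")
         .isUnique("id"))

result = (VerificationSuite(spark)
          .onData(df)
          .addCheck(check)
          .run())

# Inspect which constraints passed or failed.
VerificationResult.checkResultsAsDataFrame(spark, result).show(truncate=False)
```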