sporveien / bricker
CLI tool for syncing a Databricks folder structure with a local git repo.
☆ 17 · Updated last month
Related projects:
- ☕⛵ WIP PySpark dependency management ☆ 22 · Updated 6 years ago
- pytest plugin to run tests with PySpark support ☆ 84 · Updated 6 months ago
- Make your libraries magically appear in Databricks. ☆ 46 · Updated last year
- Accelerator to rapidly deploy customized features for your business ☆ 55 · Updated 9 months ago
- Apache (Py)Spark type annotations (stub files). ☆ 115 · Updated 2 years ago
- Presentation about PySpark and how Arrow makes it faster ☆ 22 · Updated 3 years ago
- A PySpark library to validate data quality ☆ 18 · Updated last year
- DEPRECATED: Integrating Jupyter with Databricks via SSH ☆ 71 · Updated 2 years ago
- Unit and integration testing with PySpark can be tough to figure out; let's make that easier. ☆ 22 · Updated 8 years ago
- Example unit tests for Apache Spark Python scripts using the py.test framework ☆ 85 · Updated 8 years ago
- Conversion utility from Zeppelin notes to Jupyter notebooks ☆ 44 · Updated 4 years ago
- Control spark-shell from Vim ☆ 10 · Updated 7 years ago
- Airflow workflow management platform Chef cookbook ☆ 67 · Updated 5 years ago
- An example PySpark project with pytest ☆ 17 · Updated 6 years ago
- Utilities to work with Scala/Java code via py4j ☆ 40 · Updated 8 months ago
- Dask integration for Snowflake ☆ 29 · Updated 2 months ago
- Functional Airflow DAG definitions ☆ 38 · Updated 7 years ago
- (no description) ☆ 54 · Updated 7 years ago
- Example of an Airflow plugin ☆ 49 · Updated 8 years ago
- A simplified, autogenerated API client interface using the databricks-cli package ☆ 60 · Updated last year
- Python API for Deequ ☆ 41 · Updated 3 years ago
- Machine Learning Pipeline Stages for Spark (exposed in Scala/Java + Python) ☆ 74 · Updated 10 months ago
- Python client for Marquez ☆ 12 · Updated 3 years ago
- Code repository for the EVO-ODAS ☆ 31 · Updated 6 years ago
- Supporting materials and code examples for my course on data engineering for machine learning ☆ 37 · Updated last year
- Deploy Dask on YARN clusters ☆ 69 · Updated last month
- A library you can include in your Spark job to validate counters and perform operations on success. Goal is Scala/Java/Python support… ☆ 106 · Updated 6 years ago
- Splittable SAS (.sas7bdat) input format for Hadoop and Spark SQL ☆ 88 · Updated last year
- Create HTML profiling reports from Apache Spark DataFrames ☆ 195 · Updated 4 years ago
- Deploy dask-distributed on Google Container Engine using Kubernetes ☆ 40 · Updated 5 years ago