fithisux / experiment-with-trino-minio-hive
☆13 · Updated 2 years ago
Alternatives and similar repositories for experiment-with-trino-minio-hive
Users interested in experiment-with-trino-minio-hive are comparing it to the libraries listed below.
- Demos for Nessie. Nessie provides Git-like capabilities for your Data Lake.☆30 · Updated this week
- A table-format-agnostic data sharing framework☆42 · Updated last year
- FLaNK AI Weekly covering Apache NiFi, Apache Flink, Apache Kafka, Apache Spark, Apache Iceberg, Apache Ozone, Apache Pulsar, and more...☆22 · Updated last week
- A write-audit-publish implementation on a data lake without the JVM☆45 · Updated last year
- Delta Lake and filesystem helper methods☆51 · Updated last year
- DataOps Observability is part of DataKitchen's Open Source Data Observability. DataOps Observability monitors every data journey from da…☆50 · Updated last month
- Delta Lake Documentation☆51 · Updated last year
- A platform and cloud-based service for data sharing based on the Delta Sharing protocol.☆21 · Updated last year
- Personal finance project to automatically collect Swiss banking transactions into a DWH and visualise them☆25 · Updated last year
- The go-to demo for public and private dbt Learn☆80 · Updated 8 months ago
- DataOps Data Quality TestGen is part of DataKitchen's Open Source Data Observability. DataOps TestGen delivers simple, fast data qualit…☆68 · Updated last week
- Yet Another (Spark) ETL Framework☆21 · Updated 2 years ago
- Data validation library for PySpark 3.0.0☆33 · Updated 3 years ago
- Delta Lake examples☆235 · Updated last year
- Support for generating modern platforms dynamically with services such as Kafka, Spark, Streamsets, HDFS, ...☆78 · Updated this week
- Code that was used as an example during the Data+AI Summit 2020☆15 · Updated 4 years ago
- Utility functions for dbt projects running on Spark☆34 · Updated last week
- Full-stack data engineering tools and infrastructure set-up☆57 · Updated 4 years ago
- Fake Pandas / PySpark DataFrame creator☆48 · Updated last year
- Declarative, text-based tool for data analysts and engineers to extract, load, transform and orchestrate their data pipelines.☆177 · Updated this week
- Magic to help Spark pipelines upgrade☆34 · Updated last year
- Data Quality and Observability platform for the whole data lifecycle, from profiling new data sources to full automation with Data Observ…☆176 · Updated this week
- Sample configuration to deploy a modern data platform.☆89 · Updated 3 years ago
- PDF DataSource for Apache Spark that allows reading PDF files directly into a DataFrame and running OCR on them☆77 · Updated 7 months ago
- Data Catalog for Databases and Data Warehouses☆35 · Updated last year
- A dbt (data build tool) project you can use for testing purposes or experimentation☆36 · Updated 2 years ago
- Dask integration for Snowflake☆30 · Updated 4 months ago
- How to evaluate the quality of your data with Great Expectations and Spark.☆31 · Updated 2 years ago
- A library that brings useful functions from various modern database management systems to Apache Spark☆61 · Updated 2 years ago
- A portable Datamart and Business Intelligence suite built with Docker, SQLMesh + dbt Core, DuckDB and Superset☆55 · Updated 2 months ago