wirelessr / flink-iceberg-playground
MinIO as local object storage and DynamoDB as the Iceberg catalog
☆11 · Updated 6 months ago
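The repo description points to an Iceberg setup in which MinIO stands in for S3 and the Iceberg catalog is backed by DynamoDB. As a minimal PyFlink sketch of what registering such a catalog can look like (not taken from this repo; the endpoint, bucket name, credentials, and DynamoDB table name below are placeholder assumptions):

```python
# Minimal sketch: register an Iceberg catalog backed by DynamoDB, with MinIO
# standing in for S3. Assumes the iceberg-flink-runtime and iceberg-aws-bundle
# jars are on the Flink classpath; AWS region/credentials for the DynamoDB
# client are expected to come from the standard AWS environment variables.
from pyflink.table import EnvironmentSettings, TableEnvironment

t_env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())

# Placeholder values: 's3://warehouse/', 'iceberg_catalog', the MinIO endpoint
# and the default 'minioadmin' credentials are assumptions, not repo settings.
t_env.execute_sql("""
    CREATE CATALOG lake WITH (
        'type' = 'iceberg',
        'catalog-impl' = 'org.apache.iceberg.aws.dynamodb.DynamoDbCatalog',
        'io-impl' = 'org.apache.iceberg.aws.s3.S3FileIO',
        'warehouse' = 's3://warehouse/',
        'dynamodb.table-name' = 'iceberg_catalog',
        's3.endpoint' = 'http://localhost:9000',
        's3.path-style-access' = 'true',
        's3.access-key-id' = 'minioadmin',
        's3.secret-access-key' = 'minioadmin'
    )
""")

# Once registered, the catalog is used like any other Flink catalog.
t_env.execute_sql("CREATE DATABASE IF NOT EXISTS lake.demo")
t_env.execute_sql("""
    CREATE TABLE IF NOT EXISTS lake.demo.events (
        id BIGINT,
        payload STRING
    )
""")
```

The property names follow Iceberg's Flink and AWS integration docs; the concrete values would depend on how the playground's MinIO and DynamoDB instances are actually exposed.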
Related projects
Alternatives and complementary repositories for flink-iceberg-playground
- Using the Parquet file format (with Avro) to process data with Apache Flink ☆14 · Updated 9 years ago
- Demos using Conduktor Gateway ☆16 · Updated 7 months ago
- Dashboard for operating Flink jobs and deployments. ☆25 · Updated this week
- Connect DbVisualizer to Hortonworks HiveServer2 ☆9 · Updated 9 years ago
- Demonstration of a Hive Input Format for Iceberg ☆26 · Updated 3 years ago
- PySpark ETL jobs with lineage reported to Apache Atlas, in one script, via code inspection ☆18 · Updated 7 years ago
- A basic Apache Pinot example for ingesting real-time MySQL change logs using Debezium ☆27 · Updated 3 years ago
- This repository contains recipes for Apache Pinot. ☆24 · Updated last week
- ☆18 · Updated 6 months ago
- Cloud Storage Connector integrates Apache Pulsar with cloud storage. ☆28 · Updated this week
- Data Profiler for AWS Glue Data Catalog application as described in the AWS Big Data Blog post "Build an automatic data profiling and rep… ☆19 · Updated 4 years ago
- Automatically loads new partitions in AWS Athena ☆18 · Updated 4 years ago
- Scalable CDC pattern implemented using PySpark ☆18 · Updated 5 years ago
- Provides functionality to build statistical models that repair dirty tabular data in Spark ☆12 · Updated last year
- A command-line tool for resetting Kafka Connect source connector offsets. ☆27 · Updated 8 months ago
- A set of tools for backup, compaction, and restoration of Apache Kafka® clusters ☆18 · Updated last week
- Demos for Nessie. Nessie provides Git-like capabilities for your Data Lake. ☆28 · Updated 3 weeks ago
- Lab project showcasing Flink's performance differences between a SQL query and the same logic implemented via the DataStream API ☆14 · Updated 4 years ago
- ☆22 · Updated 5 years ago
- Traditionally, engineers were needed to implement business logic via data pipelines before business users could start using it. Using this … ☆12 · Updated last month
- Hadoop/Hive/Spark container to perform CI tests ☆11 · Updated 3 years ago
- Example setup for dbt Cloud using GitHub integrations ☆11 · Updated 4 years ago
- Apiary provides modules which can be combined to create a federated cloud data lake ☆36 · Updated 7 months ago
- Optimizing downstream data processing with Amazon Kinesis Data Firehose and Amazon EMR running Apache Spark ☆13 · Updated last year
- A Java client library for Oxia ☆18 · Updated last week
- Shunting Yard is a real-time data replication tool that copies data between Hive Metastores. ☆20 · Updated 3 years ago
- ☆13 · Updated last week
- Bullet is a streaming query engine that can be plugged into any singular data stream using a Stream Processing framework like Apache Stor… ☆41 · Updated last year
- Code for Apache Hudi, Apache Iceberg and Delta Lake analysis ☆9 · Updated 9 months ago