MartijnVisser / flink-only-sql
Traditionally, engineers were needed to implement business logic in data pipelines before business users could start using it. This demo shows how data analysts and other non-engineers can use only Flink SQL to explore and transform data into insights and actions, without writing any Java or Python code.
☆12 · Updated last week
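As a flavor of the SQL-only approach the demo describes, the sketch below defines a mock source table and runs a windowed aggregation entirely in Flink SQL. It is an illustrative assumption, not code from the repository: the table and column names are hypothetical, though the `datagen` connector and windowing table-valued functions are standard Flink features.

```sql
-- Hypothetical source table; the built-in 'datagen' connector produces mock rows,
-- so no Java/Python code or external system is needed to explore the data.
CREATE TABLE orders (
  order_id BIGINT,
  amount   DOUBLE,
  ts       TIMESTAMP(3),
  WATERMARK FOR ts AS ts - INTERVAL '5' SECOND
) WITH (
  'connector' = 'datagen',
  'rows-per-second' = '10'
);

-- Transform: total order value per one-minute tumbling window, pure SQL.
SELECT window_start, SUM(amount) AS total_amount
FROM TABLE(
  TUMBLE(TABLE orders, DESCRIPTOR(ts), INTERVAL '1' MINUTE))
GROUP BY window_start, window_end;
```

Statements like these can be submitted interactively through the Flink SQL client, which is what makes the workflow accessible to analysts.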
Alternatives and similar repositories for flink-only-sql
Users interested in flink-only-sql are comparing it to the repositories listed below.
- Demonstration of a Hive Input Format for Iceberg ☆26 · Updated 4 years ago
- Code for Apache Hudi, Apache Iceberg and Delta Lake analysis ☆9 · Updated last year
- Java implementation for performing operations on Apache Iceberg and Hive tables ☆19 · Updated last month
- Yet Another (Spark) ETL Framework ☆21 · Updated last year
- ☆40 · Updated 2 years ago
- A tool to benchmark the L (load) step of ETL workloads ☆24 · Updated last month
- Examples for using Apache Flink® with the DataStream API, Table API, Flink SQL, and connectors such as MySQL, JDBC, CDC, and Kafka ☆64 · Updated last year
- ☆58 · Updated 11 months ago
- A table-format-agnostic data sharing framework ☆38 · Updated last year
- ☆22 · Updated last month
- Tools to analyze data around the Apache Flink community ☆18 · Updated last year
- MCP Server for Trino developed via the MCP Python SDK ☆18 · Updated 2 months ago
- Multi-hop declarative data pipelines ☆117 · Updated 3 weeks ago
- Apache Flink ☆18 · Updated 2 years ago
- Ecosystem website for Apache Flink ☆12 · Updated last year
- Db2 JDBC connector for Trino ☆19 · Updated 2 years ago
- A testing framework for Trino ☆26 · Updated 3 months ago
- Mock streaming data generator ☆17 · Updated last year
- Dashboard for operating Flink jobs and deployments ☆37 · Updated 7 months ago
- Code snippets used in demos recorded for the blog ☆37 · Updated 3 weeks ago
- Traffic routing for Trino clusters ☆27 · Updated 3 weeks ago
- In-memory analytics for Kafka using DuckDB ☆129 · Updated this week
- MinIO as local storage and DynamoDB as catalog ☆15 · Updated last year
- Repo for everything open table formats (Iceberg, Hudi, Delta Lake) and the overall Lakehouse architecture ☆89 · Updated 2 weeks ago
- A library for strong, schema-based conversion between 'natural' JSON documents and Avro ☆18 · Updated last year
- Demos for Nessie, which provides Git-like capabilities for your data lake ☆29 · Updated 2 weeks ago
- ☆30 · Updated 3 weeks ago
- An exploration of Flink and change data capture via flink-cdc-connectors ☆11 · Updated 3 years ago
- Kubernetes Operator for the Ververica Platform ☆35 · Updated 2 years ago
- Lab project showcasing Flink's performance differences between using a SQL query and implementing the same logic via the DataStream API ☆14 · Updated 5 years ago