MartijnVisser / flink-only-sql
Traditionally, engineers had to implement business logic in data pipelines before business users could start using it. This demo shows how data analysts and other non-engineers can use only Flink SQL to explore and transform data into insights and actions, without writing any Java or Python code.
☆12 · Updated last week
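To make the premise concrete, here is a minimal sketch of the kind of SQL-only pipeline the demo is about; the table name, schema, topic, and broker address below are hypothetical and not taken from this repository:

```sql
-- Define a Kafka-backed table purely in SQL (hypothetical topic and fields).
CREATE TABLE orders (
  order_id   STRING,
  amount     DECIMAL(10, 2),
  order_time TIMESTAMP(3),
  WATERMARK FOR order_time AS order_time - INTERVAL '5' SECOND
) WITH (
  'connector' = 'kafka',
  'topic' = 'orders',
  'properties.bootstrap.servers' = 'localhost:9092',
  'scan.startup.mode' = 'earliest-offset',
  'format' = 'json'
);

-- Continuous per-minute revenue, again without any Java or Python code.
SELECT window_start, SUM(amount) AS revenue
FROM TABLE(
  TUMBLE(TABLE orders, DESCRIPTOR(order_time), INTERVAL '1' MINUTE))
GROUP BY window_start;
```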
Alternatives and similar repositories for flink-only-sql
Users interested in flink-only-sql are comparing it to the repositories listed below.
- Java implementation for performing operations on Apache Iceberg and Hive tables · ☆19 · Updated last month
- Yet Another (Spark) ETL Framework · ☆21 · Updated last year
- ☆58 · Updated 10 months ago
- Examples for using Apache Flink® with the DataStream API, Table API, Flink SQL, and connectors such as MySQL, JDBC, CDC, and Kafka · ☆64 · Updated last year
- Demos for Nessie. Nessie provides Git-like capabilities for your data lake · ☆29 · Updated 2 weeks ago
- 🌟 Examples of use cases that utilize Decodable, as well as demos for related open-source projects such as Apache Flink, Debezium, and Po… · ☆76 · Updated 2 months ago
- Multi-hop declarative data pipelines · ☆115 · Updated last week
- ☆40 · Updated 2 years ago
- A table-format-agnostic data sharing framework · ☆38 · Updated last year
- Ecosystem website for Apache Flink · ☆12 · Updated last year
- MinIO as local storage and DynamoDB as catalog · ☆15 · Updated last year
- Apache Flink · ☆18 · Updated 2 years ago
- This project contains a couple of tools to analyze data around the Apache Flink community · ☆18 · Updated last year
- ☆80 · Updated last month
- Official repo for the Materialize + Redpanda + dbt Hack Day 2022, including a sample project to get everyone started · ☆60 · Updated 2 years ago
- A testing framework for Trino · ☆26 · Updated 3 months ago
- Minimal example to run Trino, MinIO, and a standalone Hive metastore on Docker · ☆52 · Updated 3 years ago
- ☆38 · Updated 2 years ago
- CLI tool to bulk-migrate tables from one catalog to another without a data copy · ☆79 · Updated 2 months ago
- ☆22 · Updated 6 years ago
- Kafka connector for Iceberg tables · ☆16 · Updated last year
- Docker environment to stream data from Kafka to Iceberg tables · ☆29 · Updated last year
- Presto and Trino with an Apache Hive Postgres metastore · ☆42 · Updated 9 months ago
- The Internals of PySpark · ☆26 · Updated 5 months ago
- In-Memory Analytics for Kafka using DuckDB · ☆126 · Updated this week
- MCP server for Trino developed with the MCP Python SDK · ☆16 · Updated last month
- Web-based query UI for Trino · ☆21 · Updated last month
- Explore Apache Kafka data pipelines in Kubernetes · ☆46 · Updated 3 months ago
- The Control Plane for Apache Iceberg · ☆72 · Updated this week
- Demonstration of a Hive Input Format for Iceberg · ☆26 · Updated 4 years ago