anemos-io / protobeam
☆22 · Updated 5 years ago
Alternatives and similar repositories for protobeam:
Users interested in protobeam are comparing it to the repositories listed below.
- Kafka to Avro Writer based on Apache Beam. It's a generic solution that reads data from multiple Kafka topics and stores it in cloud s…☆25 · Updated 3 years ago
- Paper: A Zero-rename committer for object stores☆20 · Updated 3 years ago
- A protobuf schema registry on steroids. It will keep track of the contracts throughout your organization, making sure no contract is brok…☆43 · Updated 4 years ago
- Bulletproof Apache Spark jobs with fast root cause analysis of failures.☆72 · Updated 3 years ago
- Using the Parquet file format (with Avro) to process data with Apache Flink☆14 · Updated 9 years ago
- Dione - a Spark and HDFS indexing library☆51 · Updated 10 months ago
- ☆81 · Updated last year
- Rokku project. This project acts as a proxy on top of any S3 storage solution, providing services like authentication, authorization, shor…☆66 · Updated 11 months ago
- A library for strong, schema-based conversion between 'natural' JSON documents and Avro☆18 · Updated 11 months ago
- Example program that writes Parquet-formatted data to plain files (i.e., not Hadoop HDFS); Parquet is a columnar storage format.☆38 · Updated 2 years ago
- Example Spark applications that run on Kubernetes and access GCP products, e.g., GCS, BigQuery, and Cloud PubSub☆37 · Updated 7 years ago
- Bullet is a streaming query engine that can be plugged into any singular data stream using a Stream Processing framework like Apache Stor…☆41 · Updated 2 years ago
- A Transactional Metadata Store Backed by Apache Kafka☆22 · Updated 3 weeks ago
- Use SQL to transform your Avro schema/records☆28 · Updated 7 years ago
- Demonstration of a Hive Input Format for Iceberg☆26 · Updated 3 years ago
- Spark streaming from Kafka (JSON) to S3 (Parquet)☆15 · Updated 6 years ago
- Extensible streaming ingestion pipeline on top of Apache Spark☆44 · Updated 10 months ago
- HDFS-compatible distributed filesystem backed by Cassandra☆25 · Updated 9 years ago
- Schema Registry integration for Apache Spark☆40 · Updated 2 years ago
- Circus Train is a dataset replication tool that copies Hive tables between clusters and clouds.☆88 · Updated 11 months ago
- ☆26 · Updated 5 years ago
- Shunting Yard is a real-time data replication tool that copies data between Hive Metastores.☆20 · Updated 3 years ago
- An Operator for scheduling and executing NiFi Flows as Jobs on Kubernetes☆53 · Updated 4 years ago
- Scalable CDC Pattern Implemented using PySpark☆18 · Updated 5 years ago
- Spark cloud integration: tests, cloud committers and more☆19 · Updated 2 weeks ago
- Lenses.io JDBC driver for Apache Kafka☆20 · Updated 3 years ago
- Scala + Druid: Scruid. A library that allows you to compose queries in Scala, and parse the result back into typesafe classes.☆115 · Updated 3 years ago
- Example of a tested Apache Flink application.☆41 · Updated 5 years ago
- Apache Beam Site☆29 · Updated this week
- Extensions available for use in Apiary☆10 · Updated 5 months ago