anemos-io / protobeam
☆22 · Updated 6 years ago
Alternatives and similar repositories for protobeam
Users interested in protobeam are comparing it to the libraries listed below.
- Fast Apache Avro serialization/deserialization library ☆46 · Updated 3 weeks ago
- Bulletproof Apache Spark jobs with fast root cause analysis of failures. ☆73 · Updated 4 years ago
- Schema Registry integration for Apache Spark ☆40 · Updated 3 years ago
- JSON schema parser for Apache Spark ☆82 · Updated 3 years ago
- A native Kafka protocol proxy for Apache Kafka ☆21 · Updated 8 years ago
- Scala + Druid: Scruid. A library that allows you to compose queries in Scala and parse the results back into typesafe classes. ☆117 · Updated 4 years ago
- Spark connector to read and write with Pulsar ☆117 · Updated last week
- An application that records stats about consumer group offset commits and reports them as Prometheus metrics ☆14 · Updated 6 years ago
- Schema registry for CSV, TSV, JSON, Avro and Parquet schemas. Supports schema inference and a GraphQL API. ☆115 · Updated 5 years ago
- ☆81 · Updated 2 years ago
- Demonstration of a Hive Input Format for Iceberg ☆26 · Updated 4 years ago
- Kafka to Avro writer based on Apache Beam. A generic solution that reads data from multiple Kafka topics and stores it in cloud s… ☆25 · Updated 4 years ago
- ☆50 · Updated 5 years ago
- A highly available and infinitely scalable drop-in replacement for Kafka Streams ☆21 · Updated 8 months ago
- Rokku project. Acts as a proxy on top of any S3 storage solution, providing services like authentication, authorization, shor… ☆70 · Updated 5 months ago
- GCS support for avro-tools, parquet-tools and protobuf ☆78 · Updated 9 months ago
- Spark cloud integration: tests, cloud committers and more ☆20 · Updated last year
- A real-time data replication platform that "unbundles" the receiving, transforming, and transport of data streams. ☆82 · Updated last year
- Extensible streaming ingestion pipeline on top of Apache Spark ☆46 · Updated 6 months ago
- A dynamic data completeness and accuracy library at enterprise scale for Apache Spark ☆29 · Updated last year
- Kubernetes Operator for the Ververica Platform ☆35 · Updated 3 years ago
- Spark package to "plug" holes in data using SQL-based rules ⚡️ 🔌 ☆29 · Updated 5 years ago
- Example of a tested Apache Flink application. ☆43 · Updated 6 years ago
- Protobuf converter plugin for Kafka Connect ☆94 · Updated 2 years ago
- Library offering HTTP-based queries on top of Kafka Streams Interactive Queries ☆69 · Updated 2 years ago
- Spark streaming from Kafka (JSON) to S3 (Parquet) ☆15 · Updated 7 years ago
- DBeam exports SQL tables into Avro files using JDBC and Apache Beam ☆193 · Updated 3 months ago
- Example program that writes Parquet-formatted data to plain files (i.e., not Hadoop HDFS); Parquet is a columnar storage format. ☆38 · Updated 3 years ago
- Circus Train is a dataset replication tool that copies Hive tables between clusters and clouds. ☆91 · Updated last year
- ☆36 · Updated 3 years ago