anemos-io / protobeam
☆22 · Updated 6 years ago
Alternatives and similar repositories for protobeam
Users interested in protobeam are comparing it to the libraries listed below.
- Bullet is a streaming query engine that can be plugged into any singular data stream using a Stream Processing framework like Apache Stor…☆41 · Updated 2 years ago
- Demonstration of a Hive Input Format for Iceberg☆26 · Updated 4 years ago
- Use SQL to transform your Avro schemas/records☆28 · Updated 7 years ago
- An application that records stats about consumer group offset commits and reports them as Prometheus metrics☆14 · Updated 6 years ago
- Using the Parquet file format (with Avro) to process data with Apache Flink☆14 · Updated 9 years ago
- A testing framework for Trino☆26 · Updated 2 months ago
- A transactional metadata store backed by Apache Kafka☆24 · Updated last week
- Kafka to Avro writer based on Apache Beam. A generic solution that reads data from multiple Kafka topics and stores it in cloud s…☆25 · Updated 4 years ago
- Bulletproof Apache Spark jobs with fast root-cause analysis of failures☆72 · Updated 4 years ago
- Kafka Streams + Memcached (e.g. AWS ElastiCache) for low-latency in-memory lookups☆13 · Updated 5 years ago
- Apache Amaterasu☆56 · Updated 5 years ago
- Hive Storage Handler for Kinesis☆11 · Updated 10 years ago
- Paper: A zero-rename committer for object stores☆20 · Updated 4 years ago
- Extensible streaming ingestion pipeline on top of Apache Spark☆44 · Updated this week
- Example Spark applications that run on Kubernetes and access GCP products, e.g. GCS, BigQuery, and Cloud Pub/Sub☆37 · Updated 7 years ago
- Dione - a Spark and HDFS indexing library☆52 · Updated last year
- A command-line client for consuming Postgres logical decoding events in the pgoutput format☆11 · Updated 11 months ago
- A protobuf schema registry on steroids. It will keep track of the contracts throughout your organization, making sure no contract is brok…☆43 · Updated 5 years ago
- ☆26 · Updated 5 years ago
- Deequ is a library built on top of Apache Spark for defining "unit tests for data", which measure data quality in large datasets☆9 · Updated 3 years ago
- Kafka Streams DSL vs. Processor API☆16 · Updated 7 years ago
- Apache Beam Site☆29 · Updated 2 weeks ago
- A Kafka Streams process to convert __consumer_offsets to a JSON-readable topic☆13 · Updated 5 years ago
- Kubernetes Operator for the Ververica Platform☆35 · Updated 2 years ago
- MinIO as local storage and DynamoDB as catalog☆15 · Updated last year
- Rokku project. Acts as a proxy on top of any S3 storage solution, providing services like authentication, authorization, shor…☆68 · Updated 3 months ago
- Spark UDFs to deserialize Avro messages with schemas stored in Schema Registry☆19 · Updated 7 years ago
- Data Sketches for Apache Spark☆22 · Updated 2 years ago
- Spooker is a dynamic framework for processing high-volume data streams via processing pipelines☆29 · Updated 9 years ago
- ☆50 · Updated 4 years ago