dvryaboy / idl_storage_guidelines
This document attempts to capture useful patterns and warn about subtle gotchas when it comes to designing and evolving schemas for long-term serialized data. It is not intended as a guide for how to best represent a particular dataset or process.
☆13 · Updated 8 years ago
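The gist's text is not reproduced on this page, but as a hedged illustration of the kind of gotcha such guidelines typically cover, the sketch below assumes Protocol Buffers as the IDL; the message and field names are hypothetical. Reusing the tag number of a deleted field can silently misread records serialized under the old schema, so retired tags and names are marked `reserved`.

```proto
// Illustrative sketch only (not taken from the gist): a common schema-evolution
// gotcha when data is stored long-term. Message and field names are hypothetical.
syntax = "proto3";

package example;

message UserEvent {
  // v1 carried `string legacy_session = 2;`. Reserving the retired tag and name
  // prevents anyone from reassigning them to a field with a different meaning,
  // which would misinterpret records already sitting in long-term storage.
  reserved 2;
  reserved "legacy_session";

  int64 user_id = 1;

  // New fields take fresh tag numbers; readers built against v1 simply skip them.
  string event_type = 3;
}
```

Readers compiled against the old schema skip the unknown tag 3, and readers compiled against the new schema can never misinterpret tag 2; this forward/backward compatibility is the kind of concern the guidelines address.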
Alternatives and similar repositories for idl_storage_guidelines
Users interested in idl_storage_guidelines are comparing it to the libraries listed below.
- Graph Analytics with Apache Kafka ☆106 · Updated last week
- Bullet is a streaming query engine that can be plugged into any singular data stream using a Stream Processing framework like Apache Stor… ☆41 · Updated 3 years ago
- An application that records stats about consumer group offset commits and reports them as prometheus metrics ☆14 · Updated 6 years ago
- No longer maintained. Use https://eventsizer.io instead. ☆62 · Updated 6 years ago
- ## Auto-archived due to inactivity. ## Simple JVM Profiler Using StatsD and Other Metrics Backends ☆15 · Updated 2 years ago
- Cascading on Apache Flink® ☆54 · Updated last year
- Ephemeral Hadoop clusters using Google Compute Platform ☆134 · Updated 3 years ago
- Kafka End to End Encryption ☆52 · Updated 2 years ago
- A Kafka-Connect Sink for S3 with no Hadoop dependencies. ☆57 · Updated 2 years ago
- Java/Scala library for easily authoring Flyte tasks and workflows ☆44 · Updated this week
- Playbook to provision a Confluent Cluster ☆10 · Updated 8 years ago
- Tool for exploring data on an Apache Kafka cluster ☆42 · Updated 5 years ago
- GCS support for avro-tools, parquet-tools and protobuf ☆77 · Updated 8 months ago
- Mirus is a cross data-center data replication tool for Apache Kafka ☆207 · Updated 3 weeks ago
- Example: Convert Protobuf to Parquet using parquet-avro and avro-protobuf ☆30 · Updated 10 years ago
- ☆21 · Updated 2 years ago
- Building Scio from scratch step by step ☆20 · Updated 6 years ago
- ☆26 · Updated 6 years ago
- Schedoscope is a scheduling framework for painfree agile development, testing, (re)loading, and monitoring of your datahub, lake, or what… ☆96 · Updated 6 years ago
- Mutation testing framework and code coverage for Hive SQL ☆24 · Updated 4 years ago
- Bulletproof Apache Spark jobs with fast root cause analysis of failures. ☆73 · Updated 4 years ago
- Schema registry for CSV, TSV, JSON, AVRO and Parquet schema. Supports schema inference and GraphQL API. ☆114 · Updated 5 years ago
- Terraform Modules for Setting up the Confluent Platform in AWS ☆12 · Updated 3 years ago
- Quark is a data virtualization engine over analytic databases. ☆100 · Updated 8 years ago
- DBeam exports SQL tables into Avro files using JDBC and Apache Beam ☆195 · Updated 2 months ago
- CLI and Go Clients to manage Kafka components (Kafka Connect & SchemaRegistry) ☆29 · Updated 8 years ago
- Export Airflow metrics (from mysql) in prometheus format ☆29 · Updated 9 months ago
- kafka-connect-s3 : Ingest data from Kafka to Object Stores(s3) ☆95 · Updated 6 years ago
- Simple Samza Job Using Confluent Platform ☆14 · Updated 9 years ago
- Splittable Gzip codec for Hadoop ☆74 · Updated last month