dvryaboy / idl_storage_guidelines
This document attempts to capture useful patterns and warn about subtle gotchas when it comes to designing and evolving schemas for long-term serialized data. It is not intended as a guide for how to best represent a particular dataset or process.
☆13 · Updated 8 years ago
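For flavor, here is a minimal sketch of the kind of schema-evolution concern the guidelines cover: reading data serialized under an older schema with a newer one. It assumes the third-party fastavro package, and the record and field names are purely illustrative, not taken from the guidelines themselves.

```python
# Minimal, hypothetical example of backward-compatible schema evolution.
# Assumes the fastavro package; names are illustrative only.
from io import BytesIO

from fastavro import schemaless_reader, schemaless_writer

# Version 1 of the schema: what old data on disk was written with.
writer_schema = {
    "type": "record",
    "name": "UserEvent",
    "fields": [
        {"name": "user_id", "type": "long"},
        {"name": "action", "type": "string"},
    ],
}

# Version 2 adds a field. Giving it a default keeps the new reader
# able to resolve records serialized under version 1.
reader_schema = {
    "type": "record",
    "name": "UserEvent",
    "fields": [
        {"name": "user_id", "type": "long"},
        {"name": "action", "type": "string"},
        {"name": "source", "type": "string", "default": "unknown"},
    ],
}

# Serialize a record with the old schema...
buf = BytesIO()
schemaless_writer(buf, writer_schema, {"user_id": 42, "action": "click"})
buf.seek(0)

# ...and read it back with the new one; the missing field is filled
# in from the declared default.
record = schemaless_reader(buf, writer_schema, reader_schema)
print(record)  # {'user_id': 42, 'action': 'click', 'source': 'unknown'}
```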
Alternatives and similar repositories for idl_storage_guidelines
Users interested in idl_storage_guidelines are comparing it to the libraries listed below.
- ☆21 · Updated 2 years ago
- An application that records stats about consumer group offset commits and reports them as prometheus metrics ☆14 · Updated 6 years ago
- Bullet is a streaming query engine that can be plugged into any singular data stream using a Stream Processing framework like Apache Stor… ☆41 · Updated 2 years ago
- ## Auto-archived due to inactivity. ## Simple JVM Profiler Using StatsD and Other Metrics Backends ☆15 · Updated 2 years ago
- Graph Analytics with Apache Kafka ☆106 · Updated 2 weeks ago
- Kafka End to End Encryption ☆53 · Updated 2 years ago
- Cascading on Apache Flink® ☆54 · Updated last year
- Export Airflow metrics (from mysql) in prometheus format ☆29 · Updated 5 months ago
- Use cases built on SnappyData. Use cases contained here: 1. Ad Analytics 2. Streaming data ingestion from RabbitMQ. ☆32 · Updated 3 years ago
- Schema registry for CSV, TSV, JSON, AVRO and Parquet schema. Supports schema inference and GraphQL API. ☆112 · Updated 5 years ago
- Quark is a data virtualization engine over analytic databases. ☆100 · Updated 8 years ago
- Mirus is a cross data-center data replication tool for Apache Kafka ☆206 · Updated 3 months ago
- Schedoscope is a scheduling framework for painfree agile development, testing, (re)loading, and monitoring of your datahub, lake, or what… ☆97 · Updated 5 years ago
- Schema Registry integration for Apache Spark ☆40 · Updated 2 years ago
- Bash completion for Kafka command line utilities. ☆36 · Updated 2 months ago
- Example projects for using Spark and Cassandra With DSE Analytics ☆57 · Updated 2 years ago
- Simple Samza Job Using Confluent Platform ☆14 · Updated 9 years ago
- A Kafka-Connect Sink for S3 with no Hadoop dependencies. ☆57 · Updated 2 years ago
- kafka-connect-s3: Ingest data from Kafka to Object Stores (s3) ☆95 · Updated 6 years ago
- Circus Train is a dataset replication tool that copies Hive tables between clusters and clouds. ☆90 · Updated last year
- Cassandra Node Diagnostics Tools ☆51 · Updated 8 years ago
- ☆26 · Updated 5 years ago
- Splittable Gzip codec for Hadoop ☆73 · Updated last week
- Spooker is a dynamic framework for processing high volume data streams via processing pipelines ☆30 · Updated 9 years ago
- Kafka sink connector for streaming messages to PostgreSQL ☆92 · Updated 4 years ago
- An Operator for scheduling and executing NiFi Flows as Jobs on Kubernetes ☆53 · Updated 5 years ago
- Examples for using Apache Flink® with DataStream API, Table API, Flink SQL and connectors such as MySQL, JDBC, CDC, Kafka. ☆64 · Updated 2 years ago
- Avro Schema Shredder is a REST API that enables storage of Avro Schemas in Apache Atlas. This API enables an organization to use Apache A… ☆13 · Updated 8 years ago
- Using the Parquet file format (with Avro) to process data with Apache Flink ☆14 · Updated 10 years ago
- Terraform Modules for Setting up the Confluent Platform in AWS ☆12 · Updated 3 years ago