tubular / confluent-spark-avro
Spark UDFs to deserialize Avro messages with schemas stored in Schema Registry.
☆20 · Updated 7 years ago
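The listing does not show the library's code, but the description above maps to a common pattern: a Spark UDF that decodes Confluent wire-format Avro (magic byte, 4-byte schema ID, Avro payload) by resolving the schema ID against Schema Registry. The sketch below only illustrates that general pattern; the object and UDF names (`AvroDecoder`, `deserialize_avro`), the registry URL, and the Kafka options are assumptions, not confluent-spark-avro's actual API.

```scala
import io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient
import io.confluent.kafka.serializers.KafkaAvroDeserializer
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.udf

// Hypothetical helper: one Schema Registry-backed deserializer per executor JVM.
// Object/field names and the registry URL are assumptions for illustration.
object AvroDecoder extends Serializable {
  private val schemaRegistryUrl = "http://schema-registry:8081" // assumption

  // Lazily created once per JVM on first use; not shipped inside the closure.
  @transient private lazy val deserializer = {
    val client = new CachedSchemaRegistryClient(schemaRegistryUrl, 128)
    new KafkaAvroDeserializer(client)
  }

  // KafkaAvroDeserializer reads the Confluent wire format (magic byte + 4-byte
  // schema ID + Avro payload), fetches the schema from the registry, and returns
  // a GenericRecord; here it is rendered as a JSON-like string for simplicity.
  def decode(bytes: Array[Byte]): String =
    if (bytes == null) null
    else deserializer.deserialize("unused-topic", bytes).toString
}

object ConfluentAvroUdfSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("confluent-avro-udf-sketch").getOrCreate()

    // Register the UDF so it can be used from SQL / selectExpr.
    spark.udf.register("deserialize_avro", udf(AvroDecoder.decode _))

    // Usage: decode the binary `value` column of a Kafka source
    // (broker address and topic name are assumptions).
    spark.read
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker:9092")
      .option("subscribe", "events")
      .load()
      .selectExpr("deserialize_avro(value) AS record_json")
      .show(truncate = false)
  }
}
```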
Alternatives and similar repositories for confluent-spark-avro
Users interested in confluent-spark-avro are comparing it to the libraries listed below:
- Schema Registry integration for Apache Spark ☆40 · Updated 2 years ago
- POC: Spark consumer for bottledwater-pg Kafka Avro topics ☆16 · Updated 5 years ago
- Spark structured streaming with Kafka data source and writing to Cassandra ☆62 · Updated 5 years ago
- Docker Image and Kubernetes Configurations for Spark 2.x ☆41 · Updated 5 years ago
- Spark cloud integration: tests, cloud committers and more ☆19 · Updated 6 months ago
- Bulletproof Apache Spark jobs with fast root cause analysis of failures ☆73 · Updated 4 years ago
- SQL for Kafka Connectors ☆99 · Updated last year
- Flink Examples ☆39 · Updated 9 years ago
- kafka-connect-s3: Ingest data from Kafka to object stores (S3) ☆95 · Updated 6 years ago
- Schedoscope is a scheduling framework for pain-free agile development, testing, (re)loading, and monitoring of your datahub, lake, or what… ☆97 · Updated 5 years ago
- Starter project for building MemSQL Streamliner Pipelines ☆32 · Updated 8 years ago
- Spark Structured Streaming State Tools ☆34 · Updated 5 years ago
- Cascading on Apache Flink® ☆54 · Updated last year
- something to help you spark ☆64 · Updated 6 years ago
- Circus Train is a dataset replication tool that copies Hive tables between clusters and clouds ☆90 · Updated last year
- Ansible playbook for automated HDP 2.x deployment and install with Kerberos ☆19 · Updated 8 years ago
- A Spark WordCountJob example as a standalone SBT project with Specs2 tests, runnable on Amazon EMR ☆118 · Updated 9 years ago
- Kafka Connect Tooling ☆117 · Updated 4 years ago
- Lighthouse is a library for data lakes built on top of Apache Spark. It provides high-level APIs in Scala to streamline data pipelines an… ☆61 · Updated 11 months ago
- JSON schema parser for Apache Spark ☆81 · Updated 2 years ago
- Spark stream from Kafka (JSON) to S3 (Parquet) ☆15 · Updated 6 years ago
- Hadoop-Unit is a project which allows testing projects that need Hadoop ecosystem components like Kafka, Solr, HDFS, Hive, HBase, ... ☆52 · Updated 2 years ago
- Spark package to "plug" holes in data using SQL-based rules ⚡️ 🔌 ☆29 · Updated 5 years ago
- A connector for SingleStore and Spark ☆162 · Updated last week
- Type-class-based data cleansing library for Apache Spark SQL ☆78 · Updated 6 years ago
- Prescriptive Applications over Kite and Hadoop ☆12 · Updated 9 years ago
- Hadoop MapReduce tool to convert Avro data files to Parquet format ☆34 · Updated 12 years ago
- A dynamic data completeness and accuracy library at enterprise scale for Apache Spark ☆29 · Updated 9 months ago
- Apache Spark and Apache Kafka integration example ☆124 · Updated 7 years ago
- Kite SDK Examples ☆99 · Updated 4 years ago