tubular / confluent-spark-avro
Spark UDFs to deserialize Avro messages with schemas stored in Schema Registry.
☆20 · Updated 8 years ago
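The messages these UDFs deserialize use Confluent's Schema Registry wire format: a magic byte (0x00), a 4-byte big-endian schema ID, then the Avro-encoded payload. As a minimal illustration of that framing — plain Python with no Spark or live Schema Registry, and a hypothetical function name — the header can be split off like this:

```python
import struct

# Confluent wire format: 1 magic byte (0x00), a 4-byte big-endian
# schema ID, then the Avro binary payload.
MAGIC_BYTE = 0

def parse_confluent_header(message: bytes) -> tuple[int, bytes]:
    """Split a Confluent-framed Kafka message into (schema_id, avro_payload)."""
    if len(message) < 5:
        raise ValueError("message too short for Confluent wire format")
    magic, schema_id = struct.unpack(">bI", message[:5])
    if magic != MAGIC_BYTE:
        raise ValueError(f"unexpected magic byte: {magic}")
    return schema_id, message[5:]

# Hypothetical message framed with schema ID 42 and dummy payload bytes.
framed = struct.pack(">bI", 0, 42) + b"\x02a"
schema_id, payload = parse_confluent_header(framed)
# schema_id == 42, payload == b"\x02a"
```

In a real pipeline the schema ID would be used to fetch the writer schema from the registry before Avro-decoding the payload; a Spark UDF would wrap this per-message logic and be applied to the Kafka `value` column.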
Alternatives and similar repositories for confluent-spark-avro
Users interested in confluent-spark-avro are comparing it to the libraries listed below.
- POC: Spark consumer for bottledwater-pg Kafka Avro topics ☆16 · Updated 5 years ago
- Schema Registry integration for Apache Spark ☆40 · Updated 3 years ago
- Spark Structured Streaming with a Kafka data source, writing to Cassandra ☆62 · Updated 6 years ago
- Spark cloud integration: tests, cloud committers, and more ☆20 · Updated 11 months ago
- Flink Examples ☆38 · Updated 9 years ago
- Bulletproof Apache Spark jobs with fast root-cause analysis of failures ☆73 · Updated 4 years ago
- Ansible playbook for automated HDP 2.x deployment with Kerberos ☆19 · Updated 9 years ago
- Docker image and Kubernetes configurations for Spark 2.x ☆41 · Updated 6 years ago
- Starter project for building MemSQL Streamliner pipelines ☆32 · Updated 8 years ago
- A Spark WordCountJob example as a standalone SBT project with Specs2 tests, runnable on Amazon EMR ☆120 · Updated 9 years ago
- Circus Train is a dataset replication tool that copies Hive tables between clusters and clouds ☆91 · Updated last year
- A connector for SingleStore and Spark ☆162 · Updated 3 months ago
- Cascading on Apache Flink® ☆54 · Updated last year
- kafka-connect-s3: ingest data from Kafka to object stores (S3) ☆95 · Updated 6 years ago
- Interactive Audience Analytics with Spark and HyperLogLog ☆55 · Updated 10 years ago
- Spark stream from Kafka (JSON) to S3 (Parquet) ☆15 · Updated 7 years ago
- Demos around Ambari Views, Services, Blueprints ☆63 · Updated 9 years ago
- functionstest ☆33 · Updated 9 years ago
- Kite SDK Examples ☆99 · Updated 4 years ago
- Prescriptive applications over Kite and Hadoop ☆12 · Updated 10 years ago
- Yet Another Spark SQL JDBC/ODBC server based on the PostgreSQL V3 protocol ☆34 · Updated 3 years ago
- Type-class-based data cleansing library for Apache Spark SQL ☆78 · Updated 6 years ago
- Schedoscope is a scheduling framework for pain-free agile development, testing, (re)loading, and monitoring of your datahub, lake, or what… ☆96 · Updated 6 years ago
- A plugin for Apache Airflow that lets you run Spark Submit commands as an operator ☆73 · Updated 6 years ago
- A dynamic data completeness and accuracy library at enterprise scale for Apache Spark ☆29 · Updated last year
- An Apache Spark app for moving data between Apache Hive and Apache Phoenix/HBase ☆14 · Updated 9 years ago
- SQL for Kafka Connectors ☆99 · Updated 2 years ago
- An example Apache Beam project ☆111 · Updated 8 years ago
- ☆48 · Updated 7 years ago
- Apache Spark and Apache Kafka integration example ☆124 · Updated 8 years ago