verisign / python-confluent-schemaregistry
A client for the Confluent Schema Registry API implemented in Python
☆53 · Updated 2 years ago
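For context, this client wraps the Confluent Schema Registry's REST API, where Avro schemas are stored under "subjects" and registered via `POST /subjects/{subject}/versions` with the schema embedded as a JSON-encoded string. A minimal sketch of building such a request (the registry URL is an assumption, and the helper name is illustrative, not part of this client's actual API):

```python
import json

REGISTRY_URL = "http://localhost:8081"  # assumed local registry address

def register_request(subject, avro_schema):
    """Build the URL and payload for registering an Avro schema under a subject."""
    url = f"{REGISTRY_URL}/subjects/{subject}/versions"
    # The REST API expects the schema itself as an escaped JSON string
    # inside a {"schema": "..."} wrapper.
    payload = json.dumps({"schema": json.dumps(avro_schema)})
    return url, payload

schema = {
    "type": "record",
    "name": "User",
    "fields": [{"name": "id", "type": "long"}],
}
url, body = register_request("user-value", schema)
print(url)  # http://localhost:8081/subjects/user-value/versions
```

A real client would send this with an HTTP library and parse the returned schema ID; the sketch only shows the request shape.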
Alternatives and similar repositories for python-confluent-schemaregistry
Users interested in python-confluent-schemaregistry are comparing it to the libraries listed below:
- Serializes data into a JSON format using an Avro schema. ☆138 · Updated 4 years ago
- File compaction tool that runs on top of the Spark framework. ☆59 · Updated 6 years ago
- Examples of how to use the command-line tools in Avro Tools to read and write Avro files. ☆152 · Updated last year
- JSON schema parser for Apache Spark. ☆82 · Updated 3 years ago
- A Spark WordCountJob example as a standalone SBT project with Specs2 tests, runnable on Amazon EMR. ☆120 · Updated 9 years ago
- Hadoop output committers for S3. ☆113 · Updated 5 years ago
- DEPRECATED; please use https://github.com/confluentinc/kafka-connect-bigquery. A Kafka Connect BigQuery sink connector. ☆152 · Updated last year
- Lightweight proxy to expose the UI of an Apache Spark cluster that is behind a firewall. ☆98 · Updated 5 years ago
- A Python implementation of Apache Kafka Streams. ☆311 · Updated 6 years ago
- Builds Airflow DAGs from configuration files; powers all DAGs on the Etsy Data Platform. ☆259 · Updated 2 years ago
- A connector for SingleStore and Spark. ☆162 · Updated 3 months ago
- Apache (Py)Spark type annotations (stub files). ☆118 · Updated 3 years ago
- Google BigQuery support for Spark, SQL, and DataFrames. ☆156 · Updated 6 years ago
- Event data simulator that generates a stream of pseudo-random events from a set of users, designed to simulate web traffic. ☆28 · Updated 8 years ago
- Spark package for checking data quality. ☆222 · Updated 5 years ago
- A rough prototype of a tool for discovering Apache Hive schemas from JSON documents. ☆42 · Updated 2 years ago
- A client for the Confluent Schema Registry API implemented in Python. ☆21 · Updated 7 years ago
- kafka-connect-s3: ingest data from Kafka into object stores (S3). ☆95 · Updated 6 years ago
- Scripts for generating Grafana dashboards for monitoring Spark jobs. ☆241 · Updated 10 years ago
- Circus Train is a dataset replication tool that copies Hive tables between clusters and clouds. ☆91 · Updated last year
- ☆315 · Updated 2 years ago
- A command-line tool for launching Apache Spark clusters. ☆651 · Updated last year
- Data Pipeline Clientlib provides an interface to tail and publish to data pipeline topics. ☆110 · Updated 3 years ago
- A plugin for Apache Airflow that lets you run Spark Submit commands as an Operator. ☆73 · Updated 6 years ago
- Example of an Airflow plugin. ☆49 · Updated 9 years ago
- Kafka Connect Tooling. ☆117 · Updated 4 years ago
- Schema Registry. ☆17 · Updated last year
- Airflow declarative DAGs via YAML. ☆133 · Updated 2 years ago
- Read SparkSQL Parquet files as RDD[Protobuf]. ☆93 · Updated 7 years ago
- Tool for exploring data on an Apache Kafka cluster. ☆42 · Updated 5 years ago