benwatson528 / intellij-avro-parquet-plugin
A Tool Window plugin for IntelliJ that displays Avro and Parquet files and their schemas in JSON.
☆46 · Updated 3 months ago
Alternatives and similar repositories for intellij-avro-parquet-plugin
Users interested in intellij-avro-parquet-plugin are comparing it to the repositories listed below.
- Kafka Connector for Iceberg tables ☆16 · Updated 2 years ago
- Schema Registry integration for Apache Spark ☆40 · Updated 2 years ago
- An implementation of the DatasourceV2 interface of Apache Spark™ for writing Spark Datasets to Apache Druid™. ☆43 · Updated 2 months ago
- A dynamic data completeness and accuracy library at enterprise scale for Apache Spark ☆29 · Updated 10 months ago
- An Avro converter for Kafka Connect without a Schema Registry ☆53 · Updated 5 years ago
- Convert XSD -> AVSC and XML -> AVRO ☆36 · Updated 3 years ago
- Avro schema evolution made easy ☆36 · Updated last year
- Extensible streaming ingestion pipeline on top of Apache Spark ☆45 · Updated 2 months ago
- Dione - a Spark and HDFS indexing library ☆52 · Updated last year
- Collection of utilities for writing Java code that operates across a wide range of Avro versions. ☆84 · Updated this week
- Serializer/deserializer for Kafka to serialize/deserialize Protocol Buffers messages ☆65 · Updated this week
- Utility project for working with Kafka Connect. ☆34 · Updated last year
- Lenses.io JDBC driver for Apache Kafka ☆21 · Updated 4 years ago
- Circus Train is a dataset replication tool that copies Hive tables between clusters and clouds. ☆90 · Updated last year
- A Spark-based data comparison tool at scale which helps software development engineers compare a plethora of pair combinations o… ☆52 · Updated 3 months ago
- Fast Apache Avro serialization/deserialization library ☆45 · Updated 4 years ago
- Sample processing code using Spark 2.1+ and Scala ☆51 · Updated 5 years ago
- Scala API for Apache Spark SQL high-order functions ☆14 · Updated 2 years ago
- A testing framework for Trino ☆26 · Updated 6 months ago
- Schema registry for CSV, TSV, JSON, Avro, and Parquet schemas. Supports schema inference and a GraphQL API. ☆112 · Updated 5 years ago
- Collection of generic Apache Flink operators ☆17 · Updated 8 years ago
- Minimal example code for integration testing of Apache Kafka. ☆25 · Updated 7 years ago
- A library that provides in-memory instances of both Kafka and Confluent Schema Registry to run your tests against. ☆115 · Updated last week
- Example program that writes Parquet-formatted data to plain files (i.e., not Hadoop HDFS); Parquet is a columnar storage format. ☆38 · Updated 3 years ago
- Scalable CDC pattern implemented using PySpark ☆18 · Updated 6 years ago
- JDBC driver for Apache Kafka ☆86 · Updated 3 years ago
- HadoopOffice - Analyze Office documents using the Hadoop ecosystem (Spark/Flink/Hive) ☆63 · Updated 2 years ago
- Apache Amaterasu ☆56 · Updated 5 years ago
- Waimak is an open-source framework that makes it easier to create complex data flows in Apache Spark.