tideworks / arvo2parquet
Example program that writes Parquet-formatted data to plain files (i.e., not Hadoop HDFS); Parquet is a columnar storage format.
☆37 · Updated 2 years ago
Related projects:
- A user-friendly API for checking for and reporting on Avro schema incompatibilities. ☆59 · Updated 6 months ago
- Use SQL to transform your Avro schemas/records ☆28 · Updated 6 years ago
- Scala and SQL happy together. ☆28 · Updated 7 years ago
- Bulletproof Apache Spark jobs with fast root-cause analysis of failures. ☆72 · Updated 3 years ago
- functionstest ☆33 · Updated 7 years ago
- Scala + Druid: Scruid. A library that lets you compose Druid queries in Scala and parse the results back into type-safe classes. ☆115 · Updated 3 years ago
- A Spark-based data comparison tool at scale, which helps software development engineers compare a plethora of pair combinations o… ☆49 · Updated 8 months ago
- ☆35 · Updated 2 years ago
- A library for strong, schema-based conversion between 'natural' JSON documents and Avro ☆18 · Updated 6 months ago
- Spark job for compacting Avro files together ☆12 · Updated 6 years ago
- An engineering report on using transactions in Kafka 0.11.0.0 ☆19 · Updated 6 years ago
- Source code and application accompanying the online inferencing blog ☆2 · Updated last year
- Big Data Toolkit for the JVM ☆145 · Updated 3 years ago
- Library offering HTTP-based query on top of Kafka Streams Interactive Queries ☆69 · Updated last year
- Spark Structured Streaming State Tools ☆34 · Updated 4 years ago
- A dynamic data completeness and accuracy library at enterprise scale for Apache Spark ☆29 · Updated 2 months ago
- Extensible streaming ingestion pipeline on top of Apache Spark ☆43 · Updated 5 months ago
- Circus Train is a dataset replication tool that copies Hive tables between clusters and clouds. ☆86 · Updated 6 months ago
- Spark package to "plug" holes in data using SQL-based rules ⚡️ 🔌 ☆28 · Updated 4 years ago
- Fast Apache Avro serialization/deserialization library ☆43 · Updated 3 years ago
- Collection of utilities for writing Java code that operates across a wide range of Avro versions. ☆76 · Updated 3 weeks ago
- A framework for creating composable and pluggable data processing pipelines using Apache Spark, and running them on a cluster. ☆48 · Updated 8 years ago
- Data-Driven Spark allows quick data exploration based on Apache Spark. ☆28 · Updated 7 years ago
- Library for generating Avro schema files (.avsc) based on DB table structure ☆50 · Updated 6 months ago
- something to help you spark ☆65 · Updated 5 years ago
- Schema registry for CSV, TSV, JSON, Avro, and Parquet schemas. Supports schema inference and a GraphQL API. ☆111 · Updated 4 years ago
- Type-class-based data cleansing library for Apache Spark SQL ☆79 · Updated 5 years ago
- ☆22 · Updated 5 years ago
- Example projects for using Spark and Cassandra with DSE Analytics ☆58 · Updated last year
- Hadoop output committers for S3 ☆108 · Updated 4 years ago