ing-bank / scruid
Scala + Druid: Scruid. A library that lets you compose Druid queries in Scala and parse the results back into type-safe classes.
Related projects:
- Big Data Toolkit for the JVM
- Custom state store providers for Apache Spark
- A library to expose more of Apache Spark's metrics system
- ScalaCheck for Spark
- Thin Scala wrapper around the Kafka Streams Java API
- Thin Scala wrapper for the Kafka Streams API
- Bulletproof Apache Spark jobs with fast root-cause analysis of failures
- Type-class-based data cleansing library for Apache Spark SQL
- An embedded job scheduler
- Dumping ground for random stuff
- Writing application logic for Spark jobs that can be unit-tested without a SparkContext
- A library that provides in-memory instances of both Kafka and the Confluent Schema Registry to run your tests against
- A library you can include in your Spark job to validate the counters and perform operations on success. Goal is scala/java/python support…
- Spark Structured Streaming state tools
- Template for Spark projects
- JSON schema parser for Apache Spark
- Hadoop output committers for S3
- Scala DSL for unit-testing processing topologies in Kafka Streams
- Deriving Spark DataFrame schemas from case classes
- Schedoscope is a scheduling framework for pain-free agile development, testing, (re)loading, and monitoring of your datahub, lake, or what…
- A framework for creating composable and pluggable data processing pipelines using Apache Spark, and running them on a cluster
- Read and write Parquet in Scala. Use Scala classes as schema. No need to start a cluster.
- Lego bricks to build Apache Kafka serializers and deserializers
- Waimak is an open-source framework that makes it easier to create complex data flows in Apache Spark
- Run Spark calculations from Ammonite
- SparkSQL utils for ScalaPB
- Generate Scala case class definitions from Avro schemas