histogrammar / histogrammar-scala
Scala implementation of Histogrammar, with optional front-ends and back-ends as separate Maven projects.
☆15 · Updated last year
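Histogrammar's core idea is that a histogram is an immutable aggregator: it can be filled one datum at a time and merged with another aggregator of the same shape, which makes it a natural fit for distributed folds such as Spark's `aggregate`. The sketch below illustrates that idea in plain Scala with no library dependency; the names (`SimpleHistogram`, `fill`, `combine`) are illustrative assumptions, not the actual histogrammar-scala API.

```scala
// A minimal sketch of a composable histogram aggregator (not the
// histogrammar-scala API): immutable, fillable, and mergeable.
case class SimpleHistogram(low: Double, high: Double, counts: Vector[Long]) {
  private val binWidth = (high - low) / counts.size

  // Fill with one datum, returning a new histogram; out-of-range
  // values are silently dropped in this sketch.
  def fill(x: Double): SimpleHistogram =
    if (x < low || x >= high) this
    else {
      val i = ((x - low) / binWidth).toInt
      copy(counts = counts.updated(i, counts(i) + 1L))
    }

  // Combine partial results, e.g. from independent Spark partitions.
  def combine(that: SimpleHistogram): SimpleHistogram =
    copy(counts = counts.zip(that.counts).map { case (a, b) => a + b })
}

object SimpleHistogram {
  def empty(numBins: Int, low: Double, high: Double): SimpleHistogram =
    SimpleHistogram(low, high, Vector.fill(numBins)(0L))
}

// Two "partitions" filled independently, then merged:
val part1 = List(0.15, 0.25, 0.95)
  .foldLeft(SimpleHistogram.empty(10, 0.0, 1.0))(_ fill _)
val part2 = List(0.15, 0.85)
  .foldLeft(SimpleHistogram.empty(10, 0.0, 1.0))(_ fill _)
val total = part1.combine(part2)
```

Because `empty` is a zero element and `combine` is associative, the same aggregator definition works sequentially, in parallel, or across a cluster without changing the filling logic.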
Alternatives and similar repositories for histogrammar-scala
Users interested in histogrammar-scala are comparing it to the libraries listed below.
- Big Data Toolkit for the JVM ☆145 · Updated 5 years ago
- A collection of Apache Parquet add-on modules ☆30 · Updated this week
- Low-level helpers for Apache Spark libraries and tests ☆16 · Updated 6 years ago
- A framework for creating composable and pluggable data processing pipelines using Apache Spark, and running them on a cluster. ☆47 · Updated 9 years ago
- Writing application logic for Spark jobs that can be unit-tested without a SparkContext ☆76 · Updated 6 years ago
- Scala + Druid: Scruid. A library that allows you to compose queries in Scala, and parse the result back into typesafe classes. ☆116 · Updated 4 years ago
- Type-class based data cleansing library for Apache Spark SQL ☆78 · Updated 6 years ago
- Something to help you spark ☆64 · Updated 7 years ago
- A quotation-based Scala DSL for scalable data analysis. ☆63 · Updated 3 years ago
- Scala bindings for the Bokeh plotting library ☆137 · Updated 2 years ago
- Utilities for writing tests that use Apache Spark. ☆24 · Updated 6 years ago
- Scala and SQL happy together. ☆29 · Updated 8 years ago
- Fast, memory-efficient, minimal-serialization, binary data vectors for Scala and other languages ☆67 · Updated 7 years ago
- A dynamic data completeness and accuracy library at enterprise scale for Apache Spark ☆29 · Updated last year
- Run Spark calculations from Ammonite ☆117 · Updated 3 weeks ago
- Bulletproof Apache Spark jobs with fast root cause analysis of failures. ☆73 · Updated 4 years ago
- ScalaCheck for Spark ☆63 · Updated 7 years ago
- Data-Driven Spark allows quick data exploration based on Apache Spark. ☆29 · Updated 8 years ago
- Scripts for parsing and making sense of YARN logs ☆52 · Updated 9 years ago
- Secondary sort and streaming reduce for Apache Spark ☆78 · Updated 2 years ago
- A library you can include in your Spark job to validate the counters and perform operations on success. Goal is scala/java/python support… ☆108 · Updated 7 years ago
- Enabling Spark Optimization through Cross-stack Monitoring and Visualization ☆47 · Updated 8 years ago
- Deriving Spark DataFrame schemas from case classes ☆44 · Updated last year
- Waimak is an open-source framework that makes it easier to create complex data flows in Apache Spark. ☆76 · Updated last year
- JSON schema parser for Apache Spark ☆82 · Updated 3 years ago
- Schedoscope is a scheduling framework for pain-free agile development, testing, (re)loading, and monitoring of your datahub, lake, or what… ☆96 · Updated 6 years ago
- Library for organizing batch processing pipelines in Apache Spark ☆42 · Updated 8 years ago
- Use Cascading Taps and Scalding DSL with Spark ☆49 · Updated 8 years ago
- Bucketing and partitioning system for Parquet ☆30 · Updated 7 years ago
- sbt plugin for spark-submit ☆96 · Updated 8 years ago