agile-lab-dev / wasp
WASP is a framework for building complex real-time big data applications. It follows a Kappa/Lambda-style architecture built mainly on Kafka and Spark. If you need to ingest huge amounts of heterogeneous data and analyze them through complex pipelines, this is the framework for you.
☆30 · Updated last week
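As a rough illustration of the Kafka-plus-Spark pattern that WASP builds on, the sketch below wires a Spark Structured Streaming job to a Kafka topic and aggregates events per minute. It uses plain Spark APIs rather than WASP's own abstractions; the broker address, topic name, and object name are placeholders for illustration only.

```scala
// Minimal sketch of a Kafka -> Spark Structured Streaming pipeline.
// Requires the spark-sql-kafka connector on the classpath; broker and topic are placeholders.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object KafkaIngestionSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("kafka-ingestion-sketch")
      .getOrCreate()

    import spark.implicits._

    // Read a stream of raw events from Kafka.
    val raw = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "localhost:9092")
      .option("subscribe", "events")
      .load()

    // Kafka records arrive as binary key/value pairs; cast the payload to string.
    val events = raw.selectExpr("CAST(value AS STRING) AS payload", "timestamp")

    // A trivial pipeline stage: count events per one-minute window.
    val counts = events
      .withWatermark("timestamp", "1 minute")
      .groupBy(window($"timestamp", "1 minute"))
      .count()

    // Write results to the console; a real deployment would target Kafka, HDFS, a database, etc.
    counts.writeStream
      .outputMode("update")
      .format("console")
      .start()
      .awaitTermination()
  }
}
```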
Alternatives and similar repositories for wasp:
Users interested in wasp are comparing it to the libraries listed below.
- A dynamic data completeness and accuracy library at enterprise scale for Apache Spark ☆30 · Updated 3 months ago
- Extensible streaming ingestion pipeline on top of Apache Spark ☆44 · Updated 11 months ago
- Sample processing code using Spark 2.1+ and Scala ☆51 · Updated 4 years ago
- Type-class based data cleansing library for Apache Spark SQL ☆79 · Updated 5 years ago
- Code snippets used in demos recorded for the blog. ☆29 · Updated last week
- Waimak is an open-source framework that makes it easier to create complex data flows in Apache Spark. ☆75 · Updated 9 months ago
- Smart Automation Tool for building modern Data Lakes and Data Pipelines ☆118 · Updated last week
- Spark Structured Streaming State Tools ☆34 · Updated 4 years ago
- Basic framework utilities to quickly start writing production ready Apache Spark applications ☆35 · Updated 2 months ago
- A library that brings useful functions from various modern database management systems to Apache Spark ☆58 · Updated last year
- Flowchart for debugging Spark applications ☆104 · Updated 4 months ago
- Nested array transformation helper extensions for Apache Spark ☆37 · Updated last year
- Spark-Radiant is Apache Spark Performance and Cost Optimizer ☆25 · Updated last month
- Schema Registry integration for Apache Spark ☆40 · Updated 2 years ago
- Scala API for Apache Spark SQL high-order functions ☆14 · Updated last year
- The Internals of Spark on Kubernetes ☆70 · Updated 2 years ago
- The official repository for the Rock the JVM Spark Optimization 2 course ☆38 · Updated last year
- The Internals of Delta Lake ☆183 · Updated last month
- A library to support building a coherent set of Flink jobs ☆16 · Updated 4 months ago
- Flink Scala API is a thin wrapper on top of the Flink Java API which supports Scala types for serialisation as well as the latest Scala version ☆82 · Updated this week
- Bulletproof Apache Spark jobs with fast root cause analysis of failures. ☆72 · Updated 3 years ago
- Read and write Parquet in Scala. Use Scala classes as schema. No need to start a cluster. ☆285 · Updated last month
- Custom state store providers for Apache Spark ☆92 · Updated last week
- Avro Schema Evolution made easy ☆34 · Updated last year
- Source code examples for the Second Edition of the Scala Cookbook ☆47 · Updated 2 years ago
- Examples of Spark 3.0 ☆47 · Updated 4 years ago
- A framework for creating composable and pluggable data processing pipelines using Apache Spark, and running them on a cluster. ☆47 · Updated 8 years ago
- JSON schema parser for Apache Spark ☆81 · Updated 2 years ago
- A sink to save Spark Structured Streaming DataFrame into a Hive table ☆23 · Updated 6 years ago