agile-lab-dev / wasp
WASP is a framework for building complex real-time big data applications. It relies on a Kappa/Lambda-style architecture, mainly leveraging Kafka and Spark. If you need to ingest huge amounts of heterogeneous data and analyze them through complex pipelines, this is the framework for you.
☆30 · Updated 4 months ago
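As a rough illustration of the kind of pipeline WASP orchestrates, here is a minimal Kafka-to-Spark Structured Streaming sketch in plain Spark, not WASP's own API; the broker address, topic name, and checkpoint path are placeholder assumptions.

```scala
import org.apache.spark.sql.SparkSession

object KafkaToSparkSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("kafka-ingest-sketch")
      .getOrCreate()

    // Read a stream of raw events from Kafka; broker and topic are placeholders.
    val raw = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "localhost:9092")
      .option("subscribe", "events")
      .load()

    // Kafka delivers keys and values as binary; cast to strings before further parsing.
    val events = raw.selectExpr("CAST(key AS STRING) AS key", "CAST(value AS STRING) AS value")

    // Write the decoded stream to the console, just to keep the sketch self-contained.
    val query = events.writeStream
      .format("console")
      .option("checkpointLocation", "/tmp/checkpoints/kafka-ingest-sketch")
      .start()

    query.awaitTermination()
  }
}
```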
Related projects
Alternatives and complementary repositories for wasp
- Waimak is an open-source framework that makes it easier to create complex data flows in Apache Spark. ☆75 · Updated 6 months ago
- Sample processing code using Spark 2.1+ and Scala ☆51 · Updated 4 years ago
- Type-class based data cleansing library for Apache Spark SQL ☆79 · Updated 5 years ago
- Scala + Druid: Scruid. A library that allows you to compose queries in Scala, and parse the result back into typesafe classes. ☆115 · Updated 3 years ago
- Extensible streaming ingestion pipeline on top of Apache Spark ☆44 · Updated 8 months ago
- ☆50 · Updated 4 years ago
- Code snippets used in demos recorded for the blog. ☆29 · Updated last month
- A dynamic data completeness and accuracy library at enterprise scale for Apache Spark ☆30 · Updated 2 weeks ago
- The iterative broadcast join example code (a sketch of the pattern appears after this list). ☆69 · Updated 7 years ago
- Spark Structured Streaming State Tools ☆34 · Updated 4 years ago
- ☆35 · Updated 2 years ago
- Flowchart for debugging Spark applications ☆101 · Updated last month
- Custom state store providers for Apache Spark ☆93 · Updated 2 years ago
- Nested array transformation helper extensions for Apache Spark ☆36 · Updated last year
- Smart Automation Tool for building modern Data Lakes and Data Pipelines ☆111 · Updated this week
- EtlFlow is an ecosystem of functional libraries in Scala based on ZIO for running complex auditable workflows which can interact with Goo… ☆43 · Updated 2 months ago
- A library that brings useful functions from various modern database management systems to Apache Spark ☆56 · Updated last year
- Schema Registry integration for Apache Spark ☆39 · Updated 2 years ago
- Bulletproof Apache Spark jobs with fast root cause analysis of failures. ☆72 · Updated 3 years ago
- ☆44 · Updated 4 years ago
- Scalable CDC pattern implemented using PySpark ☆18 · Updated 5 years ago
- Spark-Radiant is an Apache Spark performance and cost optimizer ☆25 · Updated 2 years ago
- The official repository for the Rock the JVM Spark Optimization with Scala course ☆55 · Updated 11 months ago
- The official repository for the Rock the JVM Spark Optimization 2 course ☆37 · Updated 11 months ago
- Template for Spark Projects ☆101 · Updated 6 months ago
- Examples of Spark 3.0 ☆47 · Updated 4 years ago
- Streaming Analytics platform, built with Apache Flink and Kafka ☆34 · Updated last year
- Basic framework utilities to quickly start writing production-ready Apache Spark applications ☆36 · Updated 2 months ago
- Lighthouse is a library for data lakes built on top of Apache Spark. It provides high-level APIs in Scala to streamline data pipelines an… ☆60 · Updated 2 months ago
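For the iterative broadcast join item referenced above, here is a hedged Scala sketch of the general pattern, not the linked repository's code: the medium-sized table is split into buckets so each bucket fits under the broadcast threshold, each bucket is broadcast-joined against the large table, and the partial results are unioned. Names such as `iterativeBroadcastJoin`, `chunk_id`, and `numChunks` are illustrative assumptions.

```scala
import org.apache.spark.sql.{DataFrame, functions => F}

object IterativeBroadcastJoinSketch {

  // Joins a large DataFrame with a medium-sized one that is too big to broadcast
  // in a single piece, by broadcasting it one bucket at a time.
  def iterativeBroadcastJoin(
      large: DataFrame,
      medium: DataFrame,
      joinKey: String,
      numChunks: Int = 10): DataFrame = {

    // Assign every row of the medium table to one of numChunks buckets.
    val chunked = medium.withColumn(
      "chunk_id",
      F.pmod(F.hash(F.col(joinKey)), F.lit(numChunks)))

    // Broadcast-join one bucket at a time, then union the partial results.
    (0 until numChunks)
      .map { i =>
        val chunk = chunked.filter(F.col("chunk_id") === i).drop("chunk_id")
        large.join(F.broadcast(chunk), Seq(joinKey))
      }
      .reduce(_ union _)
  }
}
```

The trade-off is more passes over the large table in exchange for avoiding a shuffle-heavy sort-merge join; it tends to pay off when the large side is heavily skewed on the join key.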