saikrishnapujari / Spark-Drools-Integration
☆23 · Updated 5 years ago
Alternatives and similar repositories for Spark-Drools-Integration:
Users who are interested in Spark-Drools-Integration are comparing it to the libraries listed below; a minimal Spark-Drools sketch follows the list.
- Schema Registry integration for Apache Spark ☆40 · Updated 2 years ago
- spark-drools tutorials ☆16 · Updated 10 months ago
- Scalable CDC Pattern Implemented using PySpark ☆18 · Updated 5 years ago
- Apache Spark-based data flow (ETL) framework that supports multiple read/write destinations of different types and also supports multiple… ☆26 · Updated 3 years ago
- Sample processing code using Spark 2.1+ and Scala ☆51 · Updated 4 years ago
- Cloud-based SQL engine using Spark, where data is accessible as a JDBC/ODBC data source via the Spark Thrift Server. ☆31 · Updated 7 years ago
- A sink to save Spark Structured Streaming DataFrames into a Hive table ☆23 · Updated 6 years ago
- Spark cloud integration: tests, cloud committers and more ☆19 · Updated 3 weeks ago
- Collection of examples integrating NiFi with stream processing frameworks. ☆58 · Updated 8 years ago
- Examples of Spark 3.0 ☆47 · Updated 4 years ago
- Code snippets used in demos recorded for the blog. ☆29 · Updated last week
- Testing Scala code with scalatest ☆12 · Updated 2 years ago
- A Spark-based data comparison tool at scale that helps software development engineers compare a plethora of pair combinations o… ☆50 · Updated last year
- Apache Spark ETL Utilities ☆40 · Updated 3 months ago
- A modern real-time streaming application serving as a reference framework for developing a big data pipeline, complete with a broad range… ☆41 · Updated 4 years ago
- Library for generating Avro schema files (.avsc) based on DB table structure ☆50 · Updated 2 months ago
- Flink Examples ☆39 · Updated 8 years ago
- Spark stream from Kafka (JSON) to S3 (Parquet); see the sketch after this list ☆15 · Updated 6 years ago
- SQL for Kafka Connectors ☆98 · Updated last year
- ☆25 · Updated 3 years ago
- Waimak is an open-source framework that makes it easier to create complex data flows in Apache Spark. ☆75 · Updated 9 months ago
- Project to create a configurable ETL via Lightbend configuration using Spark Structured Streaming ☆8 · Updated 6 years ago
- A demo combining Kafka Streams and Drools to create a lightweight real-time rules engine. ☆37 · Updated last year
- HadoopOffice - Analyze Office documents using the Hadoop ecosystem (Spark/Flink/Hive) ☆64 · Updated 2 years ago
- Circus Train is a dataset replication tool that copies Hive tables between clusters and clouds. ☆88 · Updated 11 months ago
- Experiments with Apache Flink. ☆5 · Updated last year
- Spark structured streaming with Kafka data source and writing to Cassandra ☆62 · Updated 5 years ago
- ☆14 · Updated last week
- Generate Avro schema and Avro binary from XSD schema and XML ☆68 · Updated 8 years ago
- Kafka Examples repository. ☆44 · Updated 6 years ago
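
Since the repository this page centers on combines Drools rules with Spark, a minimal sketch of that pattern is included here for orientation. It assumes a hypothetical `Order` fact class and a rules package (`.drl` files plus `kmodule.xml`) already packaged on the executor classpath; it is not code from saikrishnapujari/Spark-Drools-Integration or any of the repositories listed above.

```scala
import scala.beans.BeanProperty

import org.apache.spark.sql.SparkSession
import org.kie.api.KieServices

// Hypothetical fact class for illustration only; @BeanProperty exposes the
// getter/setter style that Drools rules conventionally use.
case class Order(@BeanProperty id: String,
                 @BeanProperty amount: Double,
                 @BeanProperty var flagged: Boolean = false)

object SparkDroolsSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("spark-drools-sketch")
      .master("local[*]")
      .getOrCreate()
    import spark.implicits._

    val orders = Seq(Order("o-1", 120.0), Order("o-2", 9500.0)).toDS()

    // KieContainer/KieSession are not serializable, so build them inside
    // mapPartitions on each executor rather than on the driver.
    val evaluated = orders.mapPartitions { rows =>
      val kieContainer = KieServices.Factory.get().getKieClasspathContainer
      val session = kieContainer.newKieSession() // default session from kmodule.xml on the classpath
      val out = rows.map { order =>
        session.insert(order)
        session.fireAllRules() // rules may mutate the fact, e.g. set flagged = true
        order
      }.toList                 // materialize before disposing the session
      session.dispose()
      out.iterator
    }

    evaluated.show(truncate = false)
    spark.stop()
  }
}
```

The rules artifacts are assumed to ship inside the application jar so that getKieClasspathContainer can resolve them on every executor; broadcasting a prebuilt rules base is another common variant of the same pattern.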
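
Several entries above describe Kafka-to-storage streaming jobs (for example the Kafka JSON to S3 Parquet one). The sketch below shows the general shape of such a pipeline using stock Structured Streaming APIs; the broker address, topic, schema, and bucket paths are placeholders rather than values from any listed project.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{col, from_json}
import org.apache.spark.sql.types.{DoubleType, StringType, StructType}

object KafkaJsonToParquetSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("kafka-json-to-parquet-sketch").getOrCreate()

    // Placeholder schema for the JSON payload carried in the Kafka message value.
    val schema = new StructType()
      .add("id", StringType)
      .add("amount", DoubleType)

    val raw = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "localhost:9092") // placeholder brokers
      .option("subscribe", "events")                        // placeholder topic
      .load()

    // Kafka delivers the value as bytes; cast to string and parse the JSON columns.
    val parsed = raw
      .select(from_json(col("value").cast("string"), schema).as("data"))
      .select("data.*")

    // Write micro-batches as Parquet; streaming sinks require a checkpoint location.
    val query = parsed.writeStream
      .format("parquet")
      .option("path", "s3a://my-bucket/events/")                  // placeholder bucket
      .option("checkpointLocation", "s3a://my-bucket/checkpoints/events/")
      .start()

    query.awaitTermination()
  }
}
```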