bbende / nifi-dependency-example
Demonstrates how to link a processor bundle with a custom controller service.
☆ 21 · Updated last year
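For context, the usual NiFi pattern for linking a processor bundle to a custom controller service is to put the service's API interface in its own module that both the processor NAR and the service implementation NAR depend on; the processor then declares a property that identifies the service by that interface. Below is a minimal, hedged sketch of that pattern, not code from this repository; the `MyService` interface and the property name are hypothetical.

```java
import java.util.Collections;
import java.util.List;

import org.apache.nifi.components.PropertyDescriptor;
import org.apache.nifi.controller.ControllerService;
import org.apache.nifi.processor.AbstractProcessor;
import org.apache.nifi.processor.ProcessContext;
import org.apache.nifi.processor.ProcessSession;
import org.apache.nifi.processor.exception.ProcessException;

// Hypothetical service API interface; in the NiFi pattern this lives in a
// separate "service-api" module shared by the processor and the service NARs.
interface MyService extends ControllerService {
    String lookup(String key);
}

public class MyProcessor extends AbstractProcessor {

    // Property that links the processor to a controller service implementation
    // by its API interface; NiFi offers matching services in the UI dropdown.
    static final PropertyDescriptor MY_SERVICE = new PropertyDescriptor.Builder()
            .name("My Service")
            .description("Controller service this processor delegates to")
            .identifiesControllerService(MyService.class)
            .required(true)
            .build();

    @Override
    protected List<PropertyDescriptor> getSupportedPropertyDescriptors() {
        return Collections.singletonList(MY_SERVICE);
    }

    @Override
    public void onTrigger(ProcessContext context, ProcessSession session) throws ProcessException {
        // Resolve the configured service instance at runtime via the shared interface.
        final MyService service = context.getProperty(MY_SERVICE).asControllerService(MyService.class);
        getLogger().info("Lookup returned: " + service.lookup("example"));
    }
}
```

The important design point in this pattern is that the processor only compiles against the API interface; the concrete service implementation is discovered at runtime through NiFi's NAR class loading, which is what the bundle dependency setup in the repository is meant to demonstrate.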
Alternatives and similar repositories for nifi-dependency-example
Users interested in nifi-dependency-example are comparing it to the repositories listed below.
- Spark structured streaming with Kafka data source and writing to Cassandra ☆ 62 · Updated 5 years ago
- Apache Spark and Apache Kafka integration example ☆ 124 · Updated 7 years ago
- Big Data Toolkit for the JVM ☆ 145 · Updated 4 years ago
- Support Highcharts in Apache Zeppelin ☆ 81 · Updated 8 years ago
- ☆ 50 · Updated 5 years ago
- Scala + Druid: Scruid. A library that allows you to compose queries in Scala, and parse the result back into typesafe classes. ☆ 115 · Updated 4 years ago
- Schema Registry integration for Apache Spark ☆ 40 · Updated 2 years ago
- flink-jpmml is a fresh-made library for dynamic real time machine learning predictions built on top of PMML standard models and Apache Fl… ☆ 96 · Updated 6 years ago
- This application comes as Spark2.1-as-Service-Provider using an embedded, Reactive-Streams-based, fully asynchronous HTTP server ☆ 49 · Updated 2 years ago
- ☆ 70 · Updated 8 years ago
- Schedoscope is a scheduling framework for painfree agile development, testing, (re)loading, and monitoring of your datahub, lake, or what… ☆ 96 · Updated 5 years ago
- A library you can include in your Spark job to validate the counters and perform operations on success. Goal is scala/java/python support… ☆ 108 · Updated 7 years ago
- Spark RDD with Lucene's query and entity linkage capabilities ☆ 128 · Updated last month
- Kite SDK Examples ☆ 99 · Updated 4 years ago
- Demo Ambari service to deploy/manage NiFi on HDP - Deprecated ☆ 75 · Updated 7 years ago
- Wikipedia stream-processing demo using Kafka Connect and Kafka Streams. ☆ 75 · Updated 7 years ago
- Showcase for IoT Platform Blog ☆ 60 · Updated 6 years ago
- Sample custom NiFi processor to process tcpdump ☆ 18 · Updated 9 years ago
- Kafka Connect Cassandra Connector. This project includes source/sink connectors for Cassandra to/from Kafka. ☆ 78 · Updated 9 years ago
- Waimak is an open-source framework that makes it easier to create complex data flows in Apache Spark. ☆ 76 · Updated last year
- Flink Examples ☆ 38 · Updated 9 years ago
- Spark Library for Bulk Loading into Elasticsearch ☆ 13 · Updated 9 years ago
- ☆ 48 · Updated 7 years ago
- Data-Driven Spark allows quick data exploration based on Apache Spark. ☆ 29 · Updated 8 years ago
- ☆ 92 · Updated 8 years ago
- Simple example for reading and writing into Kafka ☆ 55 · Updated 5 years ago
- JDBC driver for Apache Kafka ☆ 86 · Updated 3 years ago
- A framework for creating composable and pluggable data processing pipelines using Apache Spark, and running them on a cluster. ☆ 47 · Updated 9 years ago
- Spooker is a dynamic framework for processing high volume data streams via processing pipelines ☆ 30 · Updated 9 years ago
- Spark Structured Streaming State Tools ☆ 34 · Updated 5 years ago