FlinkML / flink-jpmml
flink-jpmml is a library for dynamic, real-time machine learning predictions, built on top of PMML standard models and the Apache Flink streaming engine.
☆96 · Updated 6 years ago
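For orientation, here is a minimal sketch of the pattern flink-jpmml addresses: loading a PMML model inside a Flink streaming job and scoring events as they arrive. This is not flink-jpmml's own API; it uses Flink's DataStream API together with the JPMML-Evaluator library directly (assuming a recent, string-keyed JPMML-Evaluator release), and the model path, feature names, and the `PmmlScorer` class are illustrative assumptions.

```scala
import java.io.File

import org.apache.flink.api.common.functions.RichMapFunction
import org.apache.flink.configuration.Configuration
import org.apache.flink.streaming.api.scala._
import org.jpmml.evaluator.{Evaluator, EvaluatorUtil, LoadingModelEvaluatorBuilder}

import scala.collection.JavaConverters._

// Loads a PMML model once per task and scores each incoming feature map against it.
class PmmlScorer(modelPath: String) extends RichMapFunction[Map[String, Double], String] {

  // The evaluator is not serializable, so it is built in open() rather than shipped with the job graph.
  @transient private var evaluator: Evaluator = _

  override def open(parameters: Configuration): Unit = {
    evaluator = new LoadingModelEvaluatorBuilder()
      .load(new File(modelPath))
      .build()
    evaluator.verify()
  }

  override def map(features: Map[String, Double]): String = {
    // Box the feature values so they fit JPMML's Object-valued argument map.
    val arguments: java.util.Map[String, AnyRef] =
      features.map { case (k, v) => k -> (java.lang.Double.valueOf(v): AnyRef) }.asJava
    val results = evaluator.evaluate(arguments)
    // Decode JPMML's wrapper types into plain values; kept as a String here for brevity.
    EvaluatorUtil.decodeAll(results).toString
  }
}

object StreamingPrediction {
  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.getExecutionEnvironment

    // Toy source; in a real job this would be a Kafka (or similar) stream of feature vectors.
    val events: DataStream[Map[String, Double]] = env.fromElements(
      Map("sepal_length" -> 5.1, "sepal_width" -> 3.5, "petal_length" -> 1.4, "petal_width" -> 0.2)
    )

    events
      .map(new PmmlScorer("/path/to/model.pmml")) // hypothetical model path
      .print()

    env.execute("pmml-streaming-prediction")
  }
}
```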
Alternatives and similar repositories for flink-jpmml
Users interested in flink-jpmml are comparing it to the libraries listed below.
- A library to expose more of Apache Spark's metrics system ☆146 · Updated 5 years ago
- Big Data Toolkit for the JVM ☆146 · Updated 4 years ago
- Framework for Apache Flink unit tests ☆209 · Updated 6 years ago
- Custom state store providers for Apache Spark ☆92 · Updated 4 months ago
- Mirror of Apache Bahir ☆336 · Updated last year
- Structured Streaming Machine Learning example with Spark 2.0 ☆92 · Updated 8 years ago
- Low-level integration of Spark and Kafka ☆130 · Updated 7 years ago
- PMML evaluator library for the Apache Spark cluster computing system (http://spark.apache.org/) ☆94 · Updated 3 years ago
- Support Highcharts in Apache Zeppelin ☆81 · Updated 7 years ago
- Write your Spark data to Kafka seamlessly ☆174 · Updated 11 months ago
- Druid indexing plugin for using Spark in batch jobs ☆101 · Updated 3 years ago
- Enabling Spark Optimization through Cross-stack Monitoring and Visualization ☆47 · Updated 7 years ago
- Apache Spark and Apache Kafka integration example ☆124 · Updated 7 years ago
- flink-tensorflow - TensorFlow support for Apache Flink ☆215 · Updated 7 years ago
- Read SparkSQL Parquet files as RDD[Protobuf] ☆93 · Updated 6 years ago
- Examples of Spark 2.0 ☆211 · Updated 3 years ago
- Profiler for large-scale distributed Java applications (Spark, Scalding, MapReduce, Hive, ...) on YARN ☆127 · Updated 6 years ago
- How to use Parquet in Flink ☆32 · Updated 8 years ago
- Drizzle integration with Apache Spark ☆120 · Updated 6 years ago
- Scripts for parsing and making sense of YARN logs ☆52 · Updated 8 years ago
- Apache Flink™ training material website ☆78 · Updated 5 years ago
- Spark structured streaming with a Kafka data source and writing to Cassandra ☆62 · Updated 5 years ago
- ☆240 · Updated 3 years ago
- Tools for Spark which we use on a daily basis ☆65 · Updated 4 years ago
- spark + drools ☆102 · Updated 3 years ago
- ☆103 · Updated 5 years ago
- Schedoscope is a scheduling framework for pain-free agile development, testing, (re)loading, and monitoring of your datahub, lake, or what… ☆96 · Updated 5 years ago
- Spark Structured Streaming State Tools ☆34 · Updated 4 years ago
- A tool for monitoring and tuning Spark jobs for efficiency ☆358 · Updated 2 years ago
- A sink to save a Spark Structured Streaming DataFrame into a Hive table ☆23 · Updated 7 years ago