pchanumolu / Spark-Streaming-Apache-Kafka-Apache-HBase
Spark Streaming example project that pulls messages from Kafka and writes them to an HBase table.
☆11 · Updated 9 years ago
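The project's core pattern (consume a micro-batch from Kafka, turn each message into an HBase row, write it) can be illustrated with a minimal, dependency-free sketch. The `fake_kafka` list and `hbase_table` dict below are stand-ins for the real Kafka topic and HBase table, and all names and the CSV message format are illustrative, not taken from the repo.

```python
# Toy sketch of the Kafka -> HBase pattern this project implements.
# An in-memory list stands in for the Kafka topic, and a dict keyed by
# row key stands in for the HBase table; real code would use a Kafka
# direct stream and the HBase client instead.

fake_kafka = [
    "user1,click,2015-01-01",
    "user2,view,2015-01-02",
]

hbase_table = {}  # row key -> {column qualifier: value}

def process_batch(messages):
    """Parse each CSV message and write it as one HBase-style row."""
    for msg in messages:
        user, event, ts = msg.split(",")
        # Row key design: user id + timestamp, a common HBase convention
        # that keeps one user's events sorted together.
        row_key = f"{user}:{ts}"
        hbase_table[row_key] = {"cf:event": event}

process_batch(fake_kafka)
print(hbase_table["user1:2015-01-01"])  # -> {'cf:event': 'click'}
```

In the real project each micro-batch arrives as an RDD rather than a list, but the per-message parse-and-put logic is the same shape.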
Related projects:
- ☆39 · Updated 9 years ago
- Import Kafka data into HBase using Spark Streaming ☆25 · Updated 8 years ago
- A library for bulk-importing data from Spark into HBase ☆43 · Updated 7 years ago
- Code for processing AVRO data in Spark Streaming + Kafka (DirectKafka approach with custom offset management in ZooKeeper) ☆29 · Updated 8 years ago
- Ambari stack for easily installing and managing Redis on an HDP cluster ☆15 · Updated 9 years ago
- A demo repository for "streaming ETL" with Apache Flink ☆43 · Updated 8 years ago
- A Spark SQL HBase connector ☆29 · Updated 9 years ago
- Combines Flume, Spark Streaming and Redis for real-time computing ☆22 · Updated 9 years ago
- Java library to integrate Flink and Kudu ☆53 · Updated 7 years ago
- Captures changes of HBase to Kafka ☆30 · Updated 8 years ago
- An HBase datasource implementation for Spark and [MLSQL](http://www.mlsql.tech) ☆13 · Updated 11 months ago
- Spark Streaming HBase Example ☆96 · Updated 8 years ago
- ☆14 · Updated 7 years ago
- A web application for submitting Spark applications ☆8 · Updated 3 years ago
- ☆12 · Updated 8 years ago
- Streaming using Flink to connect Kafka and Elasticsearch ☆29 · Updated 8 years ago
- Spark streaming data processing that can receive data from flume-ng and Kafka ☆11 · Updated 9 years ago
- Read and write Solr using Hive ☆31 · Updated 2 years ago
- A recommendation engine built on Spark MLlib, with HBase for model storage and Hive for data cleaning; tested on Spark on YARN ☆28 · Updated 7 years ago
- Custom Spark Kafka consumer based on the Kafka SimpleConsumer API ☆22 · Updated 9 years ago
- Uses Spark to load HDFS data from into Kafka at high throughput, then consumes it at high speed with Spark Streaming / Structured Streaming; performance-focused, performance and code optimization suggestions welcome ☆33 · Updated 5 years ago
- Encapsulated APIs for combining Spark with other components (e.g. ES, HBase, Kudu, Kafka, MQ) for ease of use ☆35 · Updated 4 years ago
- Using MySQL to store Kafka offsets in Spark Streaming to guarantee zero data loss ☆45 · Updated 7 years ago
- Flink SQL tutorial ☆34 · Updated 2 years ago
- Flink performance tests ☆29 · Updated 4 years ago
- ☆11 · Updated this week
- An example of real-time stream processing using Spark Streaming, Kafka and Elasticsearch ☆41 · Updated 8 years ago
- DataFibers Data Service ☆31 · Updated 2 years ago
- Kafka delivery semantics in the case of failure depend on how and when offsets are stored. Spark output operations are at-least-once. So … ☆37 · Updated 7 years ago
- Using Spark SQLContext, HiveContext & Spark DataFrames API with ElasticSearch, Cassandra & MongoDB ☆22 · Updated 8 years ago
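Several entries above circle the same idea: Spark's output operations are at-least-once, but storing the Kafka offset in the same transaction as the output makes replayed batches harmless. A minimal, dependency-free sketch of that pattern follows; the `store` dict stands in for a MySQL table, and all names and the toy "sum the messages" workload are illustrative.

```python
# Toy sketch: commit output and consumed offset together so that a
# replayed batch (at-least-once delivery) cannot double-count.
# `store` stands in for a MySQL table; in real code the two updates
# below would be a single MySQL transaction.

store = {"offset": 0, "total": 0}

def process_batch(messages, start_offset):
    """Process a batch idempotently: skip any prefix whose offsets were
    already committed, then commit output and new offset together."""
    if start_offset < store["offset"]:
        # Part (or all) of this batch was already processed; drop the
        # replayed prefix before doing any work.
        messages = messages[store["offset"] - start_offset:]
        start_offset = store["offset"]
    # "Atomic" commit of output plus offset.
    store["total"] += sum(messages)
    store["offset"] = start_offset + len(messages)

process_batch([1, 2, 3], 0)   # first delivery: total=6, offset=3
process_batch([1, 2, 3], 0)   # full replay after a failure: no effect
process_batch([3, 4], 2)      # overlapping replay: only the 4 is new
```

Because the offset advances in the same commit as the output, a crash between output and offset commit is impossible, which is exactly what upgrades at-least-once delivery to effectively-once results.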