Xchunguang / kudu-plus
A visualization tool for Kudu
☆38 · Updated 5 years ago
Alternatives and similar repositories for kudu-plus
Users interested in kudu-plus are comparing it to the libraries listed below.
- datax-web. DataX was open-sourced without an integrated web configuration UI; this project provides the web-side configuration. ☆100 · Updated 6 years ago
- Flink example code ☆43 · Updated 3 years ago
- Full database migration tool based on Alibaba DataX 3.0 ☆98 · Updated 5 years ago
- Real-time incremental import of MySQL data into Hive ☆87 · Updated 8 years ago
- Chinese-language version of the official Atlas documentation ☆69 · Updated 6 years ago
- Learning Flink: Flink CEP, Flink Core, Flink SQL ☆72 · Updated 3 years ago
- Adds remote multi-language invocation (ThriftServer, HttpServer) and distributed execution (DataX on YARN) to DataX (https://github.com/alibaba/DataX) ☆144 · Updated last month
- CDH 5.x parcel-based installation, with a one-click uninstall script ☆38 · Updated 2 years ago
- Real-time ETL developed with Flink, moving data from MySQL to Greenplum. Uses Canal to parse the MySQL binlog, puts it into Kafka, and uses Flink to cons… ☆79 · Updated last year
- ☆42 · Updated 6 years ago
- A Java performance metrics collection tool ☆51 · Updated 6 years ago
- ☆50 · Updated 6 years ago
- Workflow scheduler ☆90 · Updated 7 years ago
- Syncs data to Elasticsearch via HBase Observers ☆55 · Updated 10 years ago
- log, event, time, window, table, SQL, connect, join, async I/O, dimension tables, CEP ☆68 · Updated 2 years ago
- Spark Streaming monitoring platform, with support for job deployment, alerting, and automatic restart ☆128 · Updated 7 years ago
- A Kafka connector plugin that accepts MySQL binlog and JSON input and writes to ClickHouse. Continuously updated. ☆45 · Updated 4 years ago
- flink-sql: a platform for running SQL and building data streams on Flink, based on Apache Flink 1.10.0 ☆110 · Updated 3 years ago
- Secondary indexes for HBase, implemented with HBase + Solr ☆48 · Updated 3 months ago
- A client for executing Flink SQL files ☆25 · Updated 3 years ago
- poseidonX, an integrated real-time computing service platform based on JStorm and Flink ☆55 · Updated 6 years ago
- Data lineage for Hive, Sqoop, HBase, Spark, etc.; events are sent to Kafka, then parsed and loaded into Neo4j to build the lineage graph ☆82 · Updated 3 years ago
- A Spark scaffolding project that standardizes Spark development, deployment, and testing workflows ☆94 · Updated 8 months ago
- Extracts column-level lineage data by parsing the SQL syntax tree ☆61 · Updated 2 years ago
- An Azkaban helper that adds web-based job configuration, remote script invocation, alerting extensions, cross-project dependencies, and more ☆117 · Updated 8 years ago
- Real-time MySQL/Oracle data sync based on Canal / Kafka Connect, plus the Flink REST API, Flink SQL, and UDFs ☆50 · Updated 2 years ago
- Elasticsearch reader and writer plugins for DataX ☆39 · Updated 7 years ago
- Secondary development based on the Flink 1.8 source code; see the project's markdown docs for details ☆82 · Updated 5 years ago
- Column-level lineage for Hive SQL ☆22 · Updated 3 years ago
- Kafka delivery semantics in the case of failure depend on how and when offsets are stored. Spark output operations are at-least-once. So … ☆37 · Updated 8 years ago
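The last item touches on a real pitfall: with at-least-once output, a failed batch may be replayed, so exactly-once results require storing offsets atomically with the output. A minimal sketch of that pattern, using `sqlite3` as a stand-in for a transactional sink (the table names and `process_batch` helper are illustrative, not taken from any listed project):

```python
import sqlite3

# In-memory DB standing in for a transactional sink (e.g. an RDBMS).
db = sqlite3.connect(":memory:")
db.execute(
    "CREATE TABLE results (partition_id INTEGER, msg_offset INTEGER, value TEXT, "
    "PRIMARY KEY (partition_id, msg_offset))")
db.execute("CREATE TABLE offsets (partition_id INTEGER PRIMARY KEY, next_offset INTEGER)")

def process_batch(partition_id, records):
    """Write results and the consumed offset in ONE transaction, so a crash
    commits both or neither; replays after a failure cannot duplicate output."""
    with db:  # a single atomic transaction
        for msg_offset, value in records:
            # INSERT OR IGNORE makes redelivered records idempotent
            db.execute("INSERT OR IGNORE INTO results VALUES (?, ?, ?)",
                       (partition_id, msg_offset, value))
        db.execute(
            "INSERT INTO offsets VALUES (?, ?) "
            "ON CONFLICT(partition_id) DO UPDATE SET next_offset = excluded.next_offset",
            (partition_id, records[-1][0] + 1))

batch = [(0, "a"), (1, "b")]
process_batch(0, batch)
process_batch(0, batch)  # simulated redelivery after a failure
count = db.execute("SELECT COUNT(*) FROM results").fetchone()[0]
print(count)  # prints 2: no duplicates despite the replay
```

The same idea underlies exact-once sinks in Spark/Flink: either make writes idempotent, or commit offsets in the same transaction as the data.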