jcrist / hdfscm
An HDFS-backed ContentsManager implementation for Jupyter
☆12, updated last year
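
As a rough illustration of how an alternative ContentsManager like hdfscm gets wired into Jupyter, here is a minimal configuration sketch. The class path `hdfscm.HDFSContentsManager` and the `root_dir` option are assumptions based on the project description, not confirmed API; check the repository's README for the exact names.

```python
# jupyter_notebook_config.py -- minimal sketch (assumptions noted below)

# Tell the notebook server to use an HDFS-backed contents manager instead of
# the default filesystem-based one. The dotted class path is an assumption;
# verify it against the hdfscm README.
c.NotebookApp.contents_manager_class = "hdfscm.HDFSContentsManager"

# Hypothetical option: the HDFS directory under which notebooks are stored.
c.HDFSContentsManager.root_dir = "/user/myuser/notebooks"
```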
Alternatives and similar repositories for hdfscm
Users interested in hdfscm are comparing it to the libraries listed below.
- Apache Pulsar Adapters (☆24, updated 6 months ago)
- UberScriptQuery, a SQL-like DSL to make writing Spark jobs super easy (☆61, updated last year)
- Dione - a Spark and HDFS indexing library (☆52, updated last year)
- Bullet is a streaming query engine that can be plugged into any singular data stream using a Stream Processing framework like Apache Stor… (☆41, updated 2 years ago)
- A shim for using Cassandra as a backend for OpenTSDB. Not to be used as a general Cassandra client. (☆7, updated 6 years ago)
- Helm Chart for lyft/flinkk8soperator (☆11, updated 5 years ago)
- Apache Calcite Adapter for Apache Kudu (☆28, updated 9 months ago)
- Apache Phoenix Query Server (☆50, updated 2 months ago)
- Yet Another Spark SQL JDBC/ODBC server based on the PostgreSQL V3 protocol (☆34, updated 2 years ago)
- LinkedIn's version of Apache Calcite (☆23, updated this week)
- Demonstration of a Hive Input Format for Iceberg (☆26, updated 4 years ago)
- A HDFS-backed ContentsManager implementation for IPython (☆24, updated 8 months ago)
- An example of building a Kubernetes operator (Flink) using the Abstract operator framework (☆26, updated 6 years ago)
- Docker Image for Kudu (☆38, updated 6 years ago)
- Maelstrom is an open source Kafka integration with Spark that is designed to be developer friendly, high performance (millisecond stream … (☆22, updated 8 years ago)
- An example of using Flink for Fault-Tolerant Stream Processing (☆12, updated 6 years ago)
- Spark plug-in for accelerating Spark SQL performance by using cache and index at the SQL data source layer (☆37, updated 2 years ago)
- Secure HDFS Access from Kubernetes (☆61, updated 5 years ago)
- Read Druid segments from Hadoop (☆10, updated 8 years ago)
- Java event logs collector for Hadoop and frameworks (☆40, updated 3 months ago)
- Fast and scalable timeseries database (☆25, updated 5 years ago)
- A High Performance Cluster Consumer for Kafka that creates Avro (boom) files in Hadoop in time-based directory paths (☆42, updated 9 years ago)
- Using the Parquet file format (with Avro) to process data with Apache Flink (☆14, updated 9 years ago)
- Flink performance tests (☆28, updated 5 years ago)
- Cascading on Apache Flink® (☆54, updated last year)
- Camus Compressor merges files created by Camus and saves them in a compressed format (☆14, updated 2 years ago)
- A tutorial on how to use pulsar-spark-connector (☆11, updated 4 years ago)
- Ambari stack service for easily installing and managing Solr on an HDP cluster (☆19, updated 6 years ago)
- A library for strong, schema-based conversion between 'natural' JSON documents and Avro (☆18, updated last year)
- Kubernetes Operator for the Ververica Platform (☆35, updated 2 years ago)