ven2day / Bigdata-docker-sandbox
Docker Big Data Tools: a docker-compose setup configured to run multiple nodes. It is a Hadoop cluster containing the core tools used in the big data domain, packaged as a collection of Docker containers you can use directly.
☆31 · Updated 4 years ago
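To illustrate the kind of setup this repository packages, below is a minimal docker-compose sketch that stands up a small Hadoop + Spark stack. The images, service names, ports, and environment variables are illustrative assumptions, not taken from this repository.

```yaml
version: "3"

services:
  namenode:
    image: bde2020/hadoop-namenode:2.0.0-hadoop3.2.1-java8  # assumed public image, not from this repo
    environment:
      - CLUSTER_NAME=sandbox
    ports:
      - "9870:9870"          # HDFS NameNode web UI

  datanode:
    image: bde2020/hadoop-datanode:2.0.0-hadoop3.2.1-java8  # assumed public image, not from this repo
    environment:
      - CORE_CONF_fs_defaultFS=hdfs://namenode:9000
    depends_on:
      - namenode

  spark-master:
    image: bitnami/spark:3   # assumed public image, not from this repo
    environment:
      - SPARK_MODE=master
    ports:
      - "8080:8080"          # Spark master web UI

  spark-worker:
    image: bitnami/spark:3
    environment:
      - SPARK_MODE=worker
      - SPARK_MASTER_URL=spark://spark-master:7077
    depends_on:
      - spark-master
```

Running `docker-compose up -d` against a file like this would start the four containers; sandboxes of this kind typically add Hive, Kafka, Zeppelin, and other services to the same file.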
Alternatives and similar repositories for Bigdata-docker-sandbox
Users interested in Bigdata-docker-sandbox are comparing it to the repositories listed below
- Dockerizing an Apache Spark Standalone Cluster☆42 · Updated 3 years ago
- Hadoop-Hive-Spark cluster + Jupyter on Docker☆83 · Updated last year
- Real-time Data Warehouse with Apache Flink & Apache Kafka & Apache Hudi☆119 · Updated 2 years ago
- Base Docker image with just essentials: Hadoop, Hive and Spark.☆69 · Updated 4 years ago
- Hadoop, Hive, Spark, Zeppelin and Livy: all in one Docker-compose file.☆169 · Updated 4 years ago
- Creation of a data lakehouse and an ELT pipeline to enable the efficient analysis and use of data☆49 · Updated 2 years ago
- ☆80 · Updated 2 years ago
- One-click deploy docker-compose with Kafka, Spark Streaming, Zeppelin UI and Monitoring (Grafana + Kafka Manager)☆120 · Updated 4 years ago
- Apache Spark Structured Streaming with Kafka using Python (PySpark)☆40 · Updated 6 years ago
- Example for article Running Spark 3 with standalone Hive Metastore 3.0☆103 · Updated 2 years ago
- Self-contained demo using Flink SQL and Debezium to build a CDC-based analytics pipeline. All you need is Docker!☆25 · Updated 4 years ago
- Apache Flink Demo Projects☆44 · Updated this week
- Hadoop, Hive, Parquet and Hue in docker-compose v3☆42 · Updated 5 years ago
- Dockerfiles and Docker Compose for HDP 2.6 with Blueprints☆23 · Updated 8 years ago
- ☆65 · Updated last year
- apache-nifi-templates☆54 · Updated 4 years ago
- ☆40 · Updated 2 years ago
- Tutorial on how to set up Trino and Apache Ranger using Docker☆41 · Updated last year
- Docker image for Apache Hive Metastore☆73 · Updated 2 years ago
- Multi-container environment with Hadoop, Spark and Hive☆230 · Updated 8 months ago
- ☆32 · Updated 7 years ago
- A sample implementation of stream writes to an Iceberg table on GCS using Flink and reading it using Trino☆22 · Updated 3 years ago
- A complete example of a big data application using: Kubernetes (kops/aws), Apache Spark SQL/Streaming/MLlib, Apache Flink, Scala, Python,…☆209 · Updated 6 years ago
- Cluster in docker with Apache Atlas and a minimal Hadoop ecosystem to perform some basic experiments.☆30 · Updated 2 months ago
- Multi docker container images for main Big Data tools (Hadoop, Spark, Kafka, HBase, Cassandra, Zookeeper, Zeppelin, Drill, Flink, Hive, …☆36 · Updated last year
- A demo of using Kafka, Spark, Hive, Cassandra, etc. with Docker. It produces a production-ready environment for any kind of big d…☆37 · Updated 6 years ago
- Apache Spark Course Material☆96 · Updated 2 years ago
- Smart Automation Tool for building modern Data Lakes and Data Pipelines☆122 · Updated this week
- The goal of this project is to build a docker cluster that gives access to Hadoop, HDFS, Hive, PySpark, Sqoop, Airflow, Kafka, Flume, Pos…☆76 · Updated 2 years ago
- Db2 JDBC connector for Trino☆19 · Updated 3 years ago