myamafuj / hadoop-hive-spark-docker
Hadoop-Hive-Spark cluster + Jupyter on Docker
☆74 · Updated 5 months ago
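Since the repository bundles a Hive metastore and a Spark cluster reachable from Jupyter, the first cell of a notebook in this kind of stack usually just builds a Hive-enabled SparkSession. A minimal PySpark sketch, assuming hypothetical service names `spark-master` and `hive-metastore` (check the actual docker-compose file for the real hostnames and ports):

```python
from pyspark.sql import SparkSession

# Hypothetical hostnames/ports -- substitute whatever the docker-compose file actually exposes.
spark = (
    SparkSession.builder
    .appName("jupyter-smoke-test")
    .master("spark://spark-master:7077")                             # standalone Spark master
    .config("hive.metastore.uris", "thrift://hive-metastore:9083")   # Hive metastore thrift endpoint
    .enableHiveSupport()                                             # expose Hive tables through spark.sql
    .getOrCreate()
)

# Quick checks that the cluster and the metastore are reachable.
spark.sql("SHOW DATABASES").show()
spark.range(5).show()
```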
Alternatives and similar repositories for hadoop-hive-spark-docker
Users who are interested in hadoop-hive-spark-docker are comparing it to the repositories listed below:
- Base Docker image with just essentials: Hadoop, Hive and Spark. ☆69 · Updated 4 years ago
- Hadoop, Hive, Spark, Zeppelin and Livy: all in one Docker-compose file. ☆165 · Updated 4 years ago
- Multi-container environment with Hadoop, Spark and Hive ☆217 · Updated 3 weeks ago
- Real-time Data Warehouse with Apache Flink & Apache Kafka & Apache Hudi ☆113 · Updated last year
- Apache Spark Structured Streaming with Kafka using Python (PySpark); a minimal PySpark sketch of this pattern follows the list below ☆40 · Updated 6 years ago
- Example for the article "Running Spark 3 with standalone Hive Metastore 3.0" ☆98 · Updated 2 years ago
- A Docker setup running Airflow with the Hadoop ecosystem (Hive, Spark, and Sqoop) ☆12 · Updated 4 years ago
- Tutorial for setting up a Spark cluster running inside Docker containers located on different machines ☆130 · Updated 2 years ago
- Infrastructure automation to deploy Hadoop, Hive, Spark, and Airflow nodes on a Docker host ☆20 · Updated 6 years ago
- Docker Compose setup containing the most common big data systems: Apache Hadoop, Apache Hive, Apache Spark, Jupyter, and Flink ☆27 · Updated last year
- Dockerizing an Apache Spark Standalone Cluster ☆43 · Updated 2 years ago
- ☆47 · Updated last year
- Creation of a data lakehouse and an ELT pipeline to enable the efficient analysis and use of data ☆46 · Updated last year
- Apache Flink Training Exercises ☆124 · Updated last week
- Delta-Lake, ETL, Spark, Airflow ☆47 · Updated 2 years ago
- Apache Spark 3 - Structured Streaming Course Material ☆121 · Updated last year
- Docker with Airflow and Spark standalone cluster ☆256 · Updated last year
- This project provides Apache Spark SQL, RDD, DataFrame, and Dataset examples in the Scala language ☆564 · Updated last year
- Apache Flink Docker image ☆193 · Updated 2 years ago
- ETL pipeline using PySpark (Spark with Python) ☆116 · Updated 5 years ago
- O'Reilly book "Data Algorithms with Spark" by Mahmoud Parsian ☆215 · Updated last year
- The goal of this project is to build a Docker cluster that gives access to Hadoop, HDFS, Hive, PySpark, Sqoop, Airflow, Kafka, Flume, Pos… ☆64 · Updated 2 years ago
- PostgreSQL configured to work as the metastore for Hive. ☆32 · Updated 2 years ago
- Delta Lake examples ☆225 · Updated 7 months ago
- Simplified ETL process in Hadoop using Apache Spark. Has complete ETL pipeline for datalake. SparkSession extensions, DataFrame validatio… ☆55 · Updated 2 years ago
- A workspace to experiment with Apache Spark, Livy, and Airflow in a Docker environment. ☆38 · Updated 4 years ago
- Spark Examples ☆125 · Updated 3 years ago
- Data Engineering with Spark and Delta Lake ☆98 · Updated 2 years ago
- A simplified, lightweight ETL Framework based on Apache Spark ☆585 · Updated last year
- Notebooks to learn Databricks Lakehouse Platform ☆28 · Updated this week
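As noted in the Structured Streaming entry above, reading a Kafka topic from PySpark follows the same basic pattern in most of these repositories. A minimal sketch, assuming a hypothetical broker at kafka:9092, a topic named events, and that the spark-sql-kafka connector package is available on the Spark classpath:

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

# Hypothetical broker address and topic name -- placeholders only.
spark = SparkSession.builder.appName("kafka-streaming-sketch").getOrCreate()

# Read the Kafka topic as an unbounded streaming DataFrame.
events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "kafka:9092")
    .option("subscribe", "events")
    .option("startingOffsets", "earliest")
    .load()
)

# Kafka keys/values arrive as binary; cast to strings before parsing further.
decoded = events.select(col("key").cast("string"), col("value").cast("string"))

# Console sink for demonstration; real pipelines write to files/Delta/Hive with checkpointing.
query = decoded.writeStream.format("console").outputMode("append").start()
query.awaitTermination()
```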