mrugankray / Big-Data-Cluster
The goal of this project is to build a Docker cluster that gives access to Hadoop, HDFS, Hive, PySpark, Sqoop, Airflow, Kafka, Flume, Postgres, Cassandra, Hue, Zeppelin, Kadmin, Kafka Control Center, and pgAdmin. This cluster is intended solely for use in a development environment; do not use it to run production workloads.
☆64 · Updated 2 years ago
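Once the containers are up, a quick way to verify the cluster is to point a local PySpark session at it. The sketch below is an assumption about how the compose file publishes the services (Spark master on localhost:7077, Hive metastore on localhost:9083); check the repository's docker-compose.yml for the actual service names and ports.

```python
# Minimal smoke-test sketch: connect a local PySpark session to the
# dockerized cluster. The host/port values are assumptions, not taken
# from the repository's compose file.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("big-data-cluster-smoke-test")
    .master("spark://localhost:7077")  # assumed published Spark master
    .config("hive.metastore.uris", "thrift://localhost:9083")  # assumed metastore
    .enableHiveSupport()  # lets Spark see Hive databases/tables
    .getOrCreate()
)

# List Hive databases and run a trivial distributed computation.
spark.sql("SHOW DATABASES").show()
print(spark.range(1_000).count())  # prints 1000 if executors are reachable

spark.stop()
```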
Alternatives and similar repositories for Big-Data-Cluster
Users who are interested in Big-Data-Cluster are comparing it to the repositories listed below.
- Docker with Airflow and Spark standalone cluster ☆258 · Updated last year
- A series on learning Apache Spark (PySpark), with quick tips and workarounds for common day-to-day problems ☆53 · Updated last year
- This project demonstrates how to use Apache Airflow to submit jobs to an Apache Spark cluster in different programming languages using Python… (see the Airflow DAG sketch after this list) ☆45 · Updated last year
- This repository contains the code for a real-time election voting system. The system is built using Python, Kafka, Spark Streaming, Postgr… ☆41 · Updated last year
- Docker with Airflow + Postgres + Spark cluster + JDK (spark-submit support) + Jupyter Notebooks ☆24 · Updated 3 years ago
- Spark all the ETL Pipelines ☆32 · Updated last year
- Delta-Lake, ETL, Spark, Airflow ☆47 · Updated 2 years ago
- ☆41 · Updated 11 months ago
- Simple stream processing pipeline ☆102 · Updated last year
- Get data from an API, run a scheduled script with Airflow, send data to Kafka and consume it with Spark, then write to Cassandra (see the streaming sketch after this list) ☆139 · Updated last year
- Hadoop-Hive-Spark cluster + Jupyter on Docker ☆75 · Updated 5 months ago
- Apache Spark 3 - Structured Streaming Course Material ☆121 · Updated last year
- ☆39 · Updated 2 years ago
- This project shows how to capture changes from a Postgres database and stream them into Kafka ☆36 · Updated last year
- Sample project to demonstrate data engineering best practices ☆194 · Updated last year
- ☆28 · Updated last year
- End-to-end data engineering project with Kafka, Airflow, Spark, Postgres, and Docker. ☆96 · Updated 3 months ago
- ☆87 · Updated 2 years ago
- Near-real-time ETL to populate a dashboard. ☆72 · Updated last year
- Data Engineering examples for Airflow, Prefect; dbt for BigQuery, Redshift, ClickHouse, Postgres, DuckDB; PySpark for Batch processing; K… ☆66 · Updated last week
- Realtime Data Engineering Project ☆31 · Updated 5 months ago
- An end-to-end data engineering pipeline that orchestrates data ingestion, processing, and storage using Apache Airflow, Python, Apache Ka… ☆257 · Updated 4 months ago
- ☆39 · Updated 2 years ago
- Building a Modern Data Lake with Minio, Spark, Airflow via Docker. ☆20 · Updated last year
- A self-contained, ready-to-run Airflow ELT project. Can be run locally or within Codespaces. ☆71 · Updated last year
- Sample Data Lakehouse deployed in Docker containers using Apache Iceberg, Minio, Trino and a Hive Metastore. Can be used for local testin… ☆72 · Updated last year
- Data pipeline performing ETL to AWS Redshift using Spark, orchestrated with Apache Airflow ☆147 · Updated 5 years ago
- Simplified ETL process in Hadoop using Apache Spark. Has a complete ETL pipeline for a data lake. SparkSession extensions, DataFrame validatio… ☆55 · Updated 2 years ago
- A template repository to create a data project with IaC, CI/CD, data migrations, & testing ☆265 · Updated 11 months ago
- Spark data pipeline that processes movie ratings data. ☆28 · Updated last week
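For the Airflow-to-Spark job-submission pattern referenced in the list above, a minimal hedged sketch of an Airflow DAG using the SparkSubmitOperator from the apache-airflow-providers-apache-spark package. The DAG id, script path, and connection id are hypothetical placeholders, not taken from that repository, and the `schedule` argument assumes Airflow 2.4+.

```python
# Hedged sketch: an Airflow DAG that submits a PySpark script to a Spark
# cluster via spark-submit. Requires apache-airflow-providers-apache-spark.
from datetime import datetime

from airflow import DAG
from airflow.providers.apache.spark.operators.spark_submit import SparkSubmitOperator

with DAG(
    dag_id="submit_pyspark_job",  # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    submit_job = SparkSubmitOperator(
        task_id="spark_submit_example",
        application="/opt/airflow/jobs/wordcount.py",  # hypothetical script path
        conn_id="spark_default",  # Airflow connection pointing at the Spark master
        name="airflow-spark-example",
        verbose=True,
    )
```

The Spark master URL and deploy mode live in the `spark_default` connection, so the same DAG can target a standalone cluster, YARN, or Kubernetes without code changes.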
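And for the API → Airflow → Kafka → Spark → Cassandra flow mentioned above, a sketch of just the Kafka-to-Cassandra leg using Spark Structured Streaming. The broker address, topic, message schema, keyspace, and table are invented for illustration, and the job assumes the spark-sql-kafka and spark-cassandra-connector packages are on the classpath.

```python
# Hedged sketch: consume JSON messages from Kafka with Structured Streaming
# and write each micro-batch to Cassandra. All names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StringType, StructField, StructType

spark = (
    SparkSession.builder
    .appName("kafka-to-cassandra")
    .config("spark.cassandra.connection.host", "localhost")  # assumed Cassandra host
    .getOrCreate()
)

schema = StructType([  # assumed message layout
    StructField("id", StringType()),
    StructField("name", StringType()),
])

events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")  # assumed broker
    .option("subscribe", "users")  # hypothetical topic
    .load()
    .select(from_json(col("value").cast("string"), schema).alias("v"))
    .select("v.*")
)

def write_batch(df, epoch_id):
    # Cassandra has no native streaming sink, so each micro-batch is
    # written through the DataFrame connector instead.
    (df.write
       .format("org.apache.spark.sql.cassandra")
       .options(keyspace="pipeline", table="users")  # hypothetical keyspace/table
       .mode("append")
       .save())

events.writeStream.foreachBatch(write_batch).start().awaitTermination()
```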