mrugankray / Big-Data-Cluster
The goal of this project is to build a Docker cluster that gives access to Hadoop, HDFS, Hive, PySpark, Sqoop, Airflow, Kafka, Flume, Postgres, Cassandra, Hue, Zeppelin, Kadmin, Kafka Control Center and pgAdmin. This cluster is intended solely for use in a development environment; do not run any production workloads on it.
☆64 · Updated 2 years ago
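A quick way to see what "gives access to" means in practice is a smoke test against the cluster from a local PySpark session (or the bundled Zeppelin): read a file from HDFS, then write it to Postgres. This is a minimal sketch; the hostnames, ports, credentials, paths, and the JDBC driver coordinate below are illustrative assumptions, not the repo's documented defaults:

```python
from pyspark.sql import SparkSession

# Assumed endpoints; check the project's docker-compose for the real ones.
HDFS_URL = "hdfs://localhost:9000"
PG_URL = "jdbc:postgresql://localhost:5432/dev_db"

spark = (
    SparkSession.builder
    .appName("cluster-smoke-test")
    # Postgres JDBC driver coordinate is an assumption; add it if missing.
    .config("spark.jars.packages", "org.postgresql:postgresql:42.7.3")
    .getOrCreate()
)

# Read a CSV previously pushed to HDFS (hypothetical path).
df = spark.read.option("header", "true").csv(f"{HDFS_URL}/data/events.csv")

# Write the same data into Postgres to confirm both services are reachable.
(df.write
   .format("jdbc")
   .option("url", PG_URL)
   .option("dbtable", "events")
   .option("user", "postgres")      # assumed credentials
   .option("password", "postgres")
   .mode("overwrite")
   .save())
```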
Alternatives and similar repositories for Big-Data-Cluster
Users interested in Big-Data-Cluster are comparing it to the libraries listed below.
- Docker with Airflow and Spark standalone cluster ☆257 · Updated last year
- Generate synthetic Spotify music stream dataset to create dashboards. Spotify API generates fake event data emitted to Kafka. Spark consu… ☆67 · Updated last year
- This repository contains the code for a realtime election voting system. The system is built using Python, Kafka, Spark Streaming, Postgr… ☆37 · Updated last year
- ☆87 · Updated 2 years ago
- An end-to-end data engineering pipeline that orchestrates data ingestion, processing, and storage using Apache Airflow, Python, Apache Ka… ☆249 · Updated 3 months ago
- Ultimate guide for mastering Spark Performance Tuning and Optimization concepts and for preparing for Data Engineering interviews ☆129 · Updated 11 months ago
- Big Data Engineering practice project, including ETL with Airflow and Spark using AWS S3 and EMR ☆83 · Updated 5 years ago
- Spark all the ETL Pipelines ☆32 · Updated last year
- End to end data engineering project ☆54 · Updated 2 years ago
- This project demonstrates how to use Apache Airflow to submit jobs to an Apache Spark cluster in different programming languages using Python… (see the first sketch after this list) ☆42 · Updated last year
- Get data from an API, run a scheduled script with Airflow, send the data to Kafka and consume it with Spark, then write to Cassandra (see the streaming sketch after this list) ☆139 · Updated last year
- ☆51 · Updated last year
- Sample project to demonstrate data engineering best practices ☆191 · Updated last year
- Delta-Lake, ETL, Spark, Airflow ☆47 · Updated 2 years ago
- Apache Spark 3 - Structured Streaming Course Material ☆122 · Updated last year
- Data Engineering examples for Airflow, Prefect; dbt for BigQuery, Redshift, ClickHouse, Postgres, DuckDB; PySpark for Batch processing; K… ☆64 · Updated 2 months ago
- 😈Complete End to End ETL Pipeline with Spark, Airflow, & AWS ☆45 · Updated 5 years ago
- Simple ETL pipeline using Python ☆26 · Updated last year
- Building a Data Pipeline with an Open Source Stack ☆54 · Updated 10 months ago
- ☆40 · Updated 10 months ago
- End to end data engineering project with Kafka, Airflow, Spark, Postgres and Docker. ☆93 · Updated last month
- Data pipeline performing ETL to AWS Redshift using Spark, orchestrated with Apache Airflow ☆144 · Updated 4 years ago
- Projects done in the Data Engineer Nanodegree Program by Udacity.com ☆161 · Updated 2 years ago
- Stream processing pipeline from Finnhub websocket using Spark, Kafka, Kubernetes and more ☆346 · Updated last year
- PySpark functions and utilities with examples. Assists ETL process of data modeling ☆103 · Updated 4 years ago
- ☆151 · Updated 2 years ago
- ETL pipeline using PySpark (Spark - Python) ☆115 · Updated 5 years ago
- Code snippets for Data Engineering Design Patterns book ☆106 · Updated last month
- Simple repo to demonstrate how to submit a Spark job to EMR from Airflow ☆33 · Updated 4 years ago
- ☆37 · Updated 2 years ago
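The Airflow-to-Spark submission pattern that several entries above revolve around usually comes down to the `SparkSubmitOperator` from the Apache Spark provider package. A minimal sketch, assuming Airflow 2.x with `apache-airflow-providers-apache-spark` installed, a Spark connection named `spark_default`, and a hypothetical `etl_job.py` application path:

```python
from datetime import datetime

from airflow import DAG
from airflow.providers.apache.spark.operators.spark_submit import SparkSubmitOperator

with DAG(
    dag_id="spark_submit_example",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    # Submits a PySpark application to the cluster behind the
    # `spark_default` connection; the application path is a placeholder.
    submit_etl = SparkSubmitOperator(
        task_id="submit_etl",
        application="/opt/airflow/jobs/etl_job.py",
        conn_id="spark_default",
        verbose=True,
    )
```

The same operator takes a `java_class` argument for JAR-based jobs, which is how a single DAG can submit applications written in different languages.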
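Several pipelines above also share the Kafka → Spark → Cassandra shape, such as the scheduled API-to-Cassandra project. A minimal Structured Streaming sketch of that pattern, assuming a `user_events` topic, a `demo.events` Cassandra table, and the `spark-sql-kafka` and `spark-cassandra-connector` packages on the classpath (all names and endpoints here are illustrative):

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StringType, StructField, StructType

spark = (
    SparkSession.builder
    .appName("kafka-to-cassandra")
    .config("spark.cassandra.connection.host", "localhost")  # assumed host
    .getOrCreate()
)

# Assumed event shape; adjust to the real payload.
schema = StructType([
    StructField("user_id", StringType()),
    StructField("event", StringType()),
])

# Consume JSON events from Kafka and parse the value column.
events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")  # assumed broker
    .option("subscribe", "user_events")
    .load()
    .select(from_json(col("value").cast("string"), schema).alias("e"))
    .select("e.*")
)

def write_to_cassandra(batch_df, batch_id):
    # foreachBatch lets the batch DataFrame writer target Cassandra.
    (batch_df.write
        .format("org.apache.spark.sql.cassandra")
        .mode("append")
        .options(table="events", keyspace="demo")
        .save())

query = (
    events.writeStream
    .option("checkpointLocation", "/tmp/checkpoints/kafka-to-cassandra")
    .foreachBatch(write_to_cassandra)
    .start()
)
query.awaitTermination()
```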