mrugankray / Big-Data-Cluster
The goal of this project is to build a Docker cluster that provides access to Hadoop, HDFS, Hive, PySpark, Sqoop, Airflow, Kafka, Flume, Postgres, Cassandra, Hue, Zeppelin, Kadmin, Kafka Control Center, and pgAdmin. The cluster is intended solely for use in a development environment. Do not use it to run production workloads.
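The listing above does not reproduce the project's compose file. As a rough illustration of the pattern such a cluster follows, a docker-compose sketch might wire a few of these services together. All image names, versions, and ports below are illustrative assumptions, not the repo's actual configuration:

```yaml
# Illustrative sketch only — not Big-Data-Cluster's actual compose file.
# Images, versions, ports, and credentials are assumptions for a dev setup.
services:
  namenode:
    image: apache/hadoop:3.3.6          # assumed image; runs the HDFS NameNode
    ports:
      - "9870:9870"                     # NameNode web UI
  postgres:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: example        # dev-only credential, never for production
  zookeeper:
    image: confluentinc/cp-zookeeper:7.4.0
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
  kafka:
    image: confluentinc/cp-kafka:7.4.0
    depends_on:
      - zookeeper                       # broker registers itself in ZooKeeper
    environment:
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092
```

A cluster like this is typically brought up with `docker compose up -d` and torn down with `docker compose down`; because it is development-only, the real project can afford defaults (plaintext listeners, hardcoded passwords) that would be unacceptable in production.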
☆59 · Updated 2 years ago
Alternatives and similar repositories for Big-Data-Cluster:
Users interested in Big-Data-Cluster are comparing it to the repositories listed below:
- Docker with Airflow and Spark standalone cluster ☆251 · Updated last year
- Hadoop-Hive-Spark cluster + Jupyter on Docker ☆67 · Updated 2 months ago
- Spark all the ETL Pipelines ☆32 · Updated last year
- Simple stream processing pipeline ☆99 · Updated 8 months ago
- ☆87 · Updated 2 years ago
- A data pipeline moving data from a relational database management system (RDBMS) to the Hadoop Distributed File System (HDFS) ☆15 · Updated 3 years ago
- Get data from an API, run a scheduled script with Airflow, send the data to Kafka, consume it with Spark, then write it to Cassandra ☆134 · Updated last year
- Data pipeline performing ETL to AWS Redshift using Spark, orchestrated with Apache Airflow ☆139 · Updated 4 years ago
- This repository contains the code for a real-time election voting system. The system is built using Python, Kafka, Spark Streaming, Postgr… ☆34 · Updated last year
- velib-v2: An ETL pipeline that employs batch and streaming jobs using Spark, Kafka, Airflow, and other tools, all orchestrated with Docke… ☆18 · Updated 5 months ago
- 😈 Complete end-to-end ETL pipeline with Spark, Airflow, & AWS ☆43 · Updated 5 years ago
- A template repository to create a data project with IaC, CI/CD, data migrations, & testing ☆256 · Updated 7 months ago
- End-to-end data engineering project with Kafka, Airflow, Spark, Postgres, and Docker ☆80 · Updated 6 months ago
- Create a data stream, send it to Kafka, transform it with PySpark, and load it into Elasticsearch and MinIO ☆59 · Updated last year
- Nyc_Taxi_Data_Pipeline - DE Project ☆98 · Updated 4 months ago
- ☆41 · Updated 7 months ago
- Simple repo to demonstrate how to submit a Spark job to EMR from Airflow ☆32 · Updated 4 years ago
- An end-to-end data engineering pipeline that orchestrates data ingestion, processing, and storage using Apache Airflow, Python, Apache Ka… ☆230 · Updated 2 weeks ago
- Apartments data pipeline using Airflow and Spark ☆19 · Updated 2 years ago
- ETL pipeline using PySpark (Spark - Python) ☆112 · Updated 4 years ago
- Sample project to demonstrate data engineering best practices ☆179 · Updated last year
- ☆45 · Updated last year
- This project shows how to capture changes from a Postgres database and stream them into Kafka ☆35 · Updated 9 months ago
- Ultimate guide for mastering Spark performance tuning and optimization concepts and for preparing for data engineering interviews ☆110 · Updated 9 months ago
- Big data engineering practice project, including ETL with Airflow and Spark using AWS S3 and EMR ☆80 · Updated 5 years ago
- Series on learning Apache Spark (PySpark), with quick tips and workarounds for everyday problems ☆47 · Updated last year
- This project introduces PySpark, a powerful open-source framework for distributed data processing. We explore its architecture, component… ☆27 · Updated 5 months ago
- Near-real-time ETL to populate a dashboard ☆73 · Updated 8 months ago
- ☆28 · Updated last year
- Projects done in the Data Engineer Nanodegree Program by Udacity ☆135 · Updated 2 years ago