mrugankray / Big-Data-Cluster
The goal of this project is to build a Docker cluster that provides access to Hadoop, HDFS, Hive, PySpark, Sqoop, Airflow, Kafka, Flume, Postgres, Cassandra, Hue, Zeppelin, Kadmin, Kafka Control Center, and pgAdmin. The cluster is intended solely for use in a development environment; do not use it to run production workloads.
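Since the repository's own compose file is not reproduced here, below is a minimal sketch of how a few of these services might be wired together in a `docker-compose.yml`. The images, service names, ports, and credentials are illustrative assumptions, not taken from the Big-Data-Cluster repository itself:

```yaml
# Illustrative sketch only: service names, images, and settings are
# assumptions, not the repository's actual compose file.
version: "3.8"
services:
  postgres:
    image: postgres:13
    environment:
      POSTGRES_USER: dev
      POSTGRES_PASSWORD: dev   # development-only credentials
    ports:
      - "5432:5432"
  zookeeper:
    image: confluentinc/cp-zookeeper:7.4.0
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
  kafka:
    image: confluentinc/cp-kafka:7.4.0
    depends_on:
      - zookeeper
    environment:
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1  # single-broker dev setting
```

A stack like this would be started with `docker compose up -d`; the actual repository covers many more components (Hadoop, Hive, Airflow, Hue, and so on) than this fragment shows.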
☆53 · Updated last year
Related projects
Alternatives and complementary repositories for Big-Data-Cluster
- Ultimate guide for mastering Spark Performance Tuning and Optimization concepts and for preparing for Data Engineering interviews ☆67 · Updated 5 months ago
- Docker with Airflow and Spark standalone cluster ☆242 · Updated last year
- Data pipeline performing ETL to AWS Redshift using Spark, orchestrated with Apache Airflow ☆132 · Updated 4 years ago
- Projects done in the Data Engineer Nanodegree Program by Udacity.com ☆94 · Updated last year
- Get data from an API, run a scheduled script with Airflow, send the data to Kafka, consume it with Spark, then write it to Cassandra ☆128 · Updated last year
- Spark all the ETL Pipelines ☆32 · Updated last year
- End-to-end data engineering project with Kafka, Airflow, Spark, Postgres, and Docker ☆64 · Updated 3 months ago
- PySpark functions and utilities with examples; assists the ETL process of data modeling ☆99 · Updated 3 years ago
- An end-to-end data engineering pipeline that orchestrates data ingestion, processing, and storage using Apache Airflow, Python, Apache Ka… ☆196 · Updated last year
- This repository contains the code for a real-time election voting system. The system is built using Python, Kafka, Spark Streaming, Postgr… ☆28 · Updated 10 months ago
- Sample project to demonstrate data engineering best practices ☆163 · Updated 8 months ago
- Create streaming data, transfer it to Kafka, modify it with PySpark, and load it into Elasticsearch and MinIO ☆57 · Updated last year
- Simple stream processing pipeline ☆91 · Updated 4 months ago
- Big Data Engineering practice project, including ETL with Airflow and Spark using AWS S3 and EMR ☆79 · Updated 5 years ago
- End-to-end data engineering project ☆49 · Updated 2 years ago
- ☆86 · Updated 2 years ago
- ☆37 · Updated 4 months ago
- 😈 Complete end-to-end ETL pipeline with Spark, Airflow, & AWS ☆43 · Updated 5 years ago
- Classwork projects and homework done through the Udacity Data Engineering Nanodegree ☆74 · Updated 10 months ago
- Produce Kafka messages, consume them, and load them into Cassandra and MongoDB ☆37 · Updated last year
- A template repository to create a data project with IaC, CI/CD, data migrations, & testing ☆237 · Updated 3 months ago
- Solutions to all projects of Udacity's Data Engineering Nanodegree: Data Modeling with Postgres & Cassandra, Data Warehouse with Redshift,… ☆56 · Updated 2 years ago
- This project shows how to capture changes from a Postgres database and stream them into Kafka ☆31 · Updated 5 months ago
- Write a CSV file to Postgres, read the table, and modify it; write more tables to Postgres with Airflow ☆35 · Updated last year
- This repository will help you learn Databricks concepts with the help of examples. It will include all the important topics which… ☆91 · Updated 3 months ago
- Ravi Azure ADB ADF Repository ☆64 · Updated 6 months ago
- ☆128 · Updated 2 years ago
- Simple repo to demonstrate how to submit a Spark job to EMR from Airflow ☆32 · Updated 4 years ago
- Apache Spark 3 - Structured Streaming Course Material ☆119 · Updated last year
- In this project, we set up an end-to-end data engineering pipeline using Apache Spark, Azure Databricks, Data Build Tool (DBT), with Azure as our … ☆21 · Updated 10 months ago