mrugankray / Big-Data-Cluster
The goal of this project is to build a Docker cluster that gives access to Hadoop, HDFS, Hive, PySpark, Sqoop, Airflow, Kafka, Flume, Postgres, Cassandra, Hue, Zeppelin, Kadmin, Kafka Control Center and pgAdmin. This cluster is intended solely for use in a development environment; do not use it to run any production workloads.
☆63 · Updated 2 years ago
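To give a sense of how a cluster like this is typically used once the containers are running, here is a minimal PySpark sketch. It is illustrative only: the HDFS and Hive metastore hostnames and ports (`namenode:9000`, `hive-metastore:9083`) and the sample paths are assumptions, not endpoints documented by this project.

```python
# Minimal sketch: connect PySpark to an HDFS + Hive setup like the one this dev cluster provides.
# Hostnames, ports, paths and table names below are assumptions for illustration.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("dev-cluster-smoke-test")
    .config("spark.hadoop.fs.defaultFS", "hdfs://namenode:9000")    # assumed HDFS endpoint
    .config("hive.metastore.uris", "thrift://hive-metastore:9083")  # assumed metastore endpoint
    .enableHiveSupport()
    .getOrCreate()
)

# Read a CSV from HDFS and register it as a Hive table.
df = spark.read.option("header", True).csv("/data/sample.csv")      # hypothetical input path
df.write.mode("overwrite").saveAsTable("dev_db.sample")             # hypothetical Hive table

spark.stop()
```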
Alternatives and similar repositories for Big-Data-Cluster:
Users interested in Big-Data-Cluster are comparing it to the libraries listed below.
- ☆87 · Updated 2 years ago
- Docker with Airflow and Spark standalone cluster ☆255 · Updated last year
- An end-to-end data engineering pipeline that orchestrates data ingestion, processing, and storage using Apache Airflow, Python, Apache Ka… ☆244 · Updated 2 months ago
- End-to-end data engineering project with Kafka, Airflow, Spark, Postgres and Docker. ☆91 · Updated last month
- This project demonstrates how to use Apache Airflow to submit jobs to an Apache Spark cluster in different programming languages using Python… ☆42 · Updated last year
- Generate a synthetic Spotify music stream dataset to create dashboards. The Spotify API generates fake event data emitted to Kafka. Spark consu… ☆67 · Updated last year
- Projects done in the Data Engineer Nanodegree Program by Udacity.com ☆160 · Updated 2 years ago
- Apache Spark 3 - Structured Streaming Course Material ☆121 · Updated last year
- This repository contains the code for a real-time election voting system. The system is built using Python, Kafka, Spark Streaming, Postgr… ☆36 · Updated last year
- Spark all the ETL Pipelines ☆32 · Updated last year
- Big Data Engineering practice project, including ETL with Airflow and Spark using AWS S3 and EMR ☆81 · Updated 5 years ago
- ETL pipeline using PySpark (Spark - Python) ☆114 · Updated 5 years ago
- ☆40 · Updated 9 months ago
- PySpark functions and utilities with examples. Assists the ETL process of data modeling ☆101 · Updated 4 years ago
- Get data from an API, run a scheduled script with Airflow, send data to Kafka and consume it with Spark, then write to Cassandra (a pattern sketched after this list) ☆137 · Updated last year
- Produce Kafka messages, consume them and load them into Cassandra and MongoDB. ☆41 · Updated last year
- Sample project to demonstrate data engineering best practices ☆186 · Updated last year
- Simple stream processing pipeline ☆100 · Updated 10 months ago
- This project shows how to capture changes from a Postgres database and stream them into Kafka ☆36 · Updated 11 months ago
- A template repository to create a data project with IaC, CI/CD, data migrations, & testing ☆260 · Updated 9 months ago
- Classwork projects and homework done through the Udacity Data Engineering Nanodegree ☆74 · Updated last year
- This project introduces PySpark, a powerful open-source framework for distributed data processing. We explore its architecture, component… ☆28 · Updated 6 months ago
- Series following Apache Spark (PySpark) learning, with quick tips and workarounds for daily problems at hand ☆49 · Updated last year
- ☆28 · Updated last year
- Create streaming data, send it to Kafka, transform it with PySpark, and load it into Elasticsearch and MinIO ☆60 · Updated last year
- Data pipeline performing ETL to AWS Redshift using Spark, orchestrated with Apache Airflow ☆143 · Updated 4 years ago
- Price Crawler - Tracking Price Inflation ☆185 · Updated 4 years ago
- Writes a CSV file to Postgres, reads the table back and modifies it, then writes more tables to Postgres with Airflow. ☆35 · Updated last year
- 😈 Complete end-to-end ETL pipeline with Spark, Airflow, & AWS ☆45 · Updated 5 years ago
- Solution to all projects of Udacity's Data Engineering Nanodegree: Data Modeling with Postgres & Cassandra, Data Warehouse with Redshift,… ☆56 · Updated 2 years ago
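Several of the projects above share an Airflow-orchestrated Spark pattern: a scheduled DAG submits a PySpark job to the cluster. The sketch below illustrates that pattern under stated assumptions; the DAG id, script path, and the `spark_default` connection are hypothetical and not taken from any specific repository.

```python
# Minimal sketch of a scheduled Airflow DAG that submits a PySpark job.
# Requires the apache-airflow-providers-apache-spark package; names/paths are assumptions.
from datetime import datetime

from airflow import DAG
from airflow.providers.apache.spark.operators.spark_submit import SparkSubmitOperator

with DAG(
    dag_id="example_spark_etl",          # hypothetical DAG id
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                   # Airflow 2.4+ style schedule argument
    catchup=False,
) as dag:
    submit_etl = SparkSubmitOperator(
        task_id="submit_etl_job",
        application="/opt/airflow/jobs/etl_job.py",  # assumed path to the PySpark script
        conn_id="spark_default",                     # assumed Spark connection
        verbose=True,
    )
```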
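Another recurring pattern (for example, the API → Airflow → Kafka → Spark → Cassandra projects) is a Spark Structured Streaming job that consumes a Kafka topic and writes micro-batches to Cassandra. The sketch below is an assumption-laden illustration: the broker address, topic, schema, and keyspace/table names are made up, and it requires the DataStax spark-cassandra-connector on the Spark classpath.

```python
# Minimal sketch of the Kafka -> Spark Structured Streaming -> Cassandra pattern.
# Broker, topic, schema and keyspace/table names are illustrative assumptions.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StringType, StructField, StructType

spark = (
    SparkSession.builder
    .appName("kafka-to-cassandra")
    .config("spark.cassandra.connection.host", "cassandra")  # assumed container hostname
    .getOrCreate()
)

schema = StructType([
    StructField("user_id", StringType()),
    StructField("event", StringType()),
])

# Consume JSON messages from Kafka and parse them into columns.
events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:29092")  # assumed broker address
    .option("subscribe", "user_events")                 # assumed topic name
    .load()
    .select(from_json(col("value").cast("string"), schema).alias("data"))
    .select("data.*")
)

def write_batch(batch_df, batch_id):
    # Write each micro-batch to Cassandra via the spark-cassandra-connector.
    (batch_df.write
        .format("org.apache.spark.sql.cassandra")
        .mode("append")
        .options(keyspace="events", table="user_events")  # assumed keyspace/table
        .save())

query = events.writeStream.foreachBatch(write_batch).start()
query.awaitTermination()
```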