mrugankray / Big-Data-Cluster
The goal of this project is to build a Docker cluster that gives access to Hadoop, HDFS, Hive, PySpark, Sqoop, Airflow, Kafka, Flume, Postgres, Cassandra, Hue, Zeppelin, Kadmin, Kafka Control Center, and pgAdmin. This cluster is intended solely for use in a development environment; do not use it to run production workloads.
☆74 · Updated 2 years ago
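For a sense of how such a cluster is used once it is running, the sketch below points a PySpark session at its HDFS and Hive services. The hostnames and ports (`namenode:9000`, `hive-metastore:9083`) are assumptions about typical docker-compose service names, not values confirmed from this repository; check its compose file for the actual addresses.

```python
# Minimal smoke test against a dev cluster like this one.
# Hostnames/ports below are assumed docker-compose service names.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("big-data-cluster-smoke-test")
    .config("spark.hadoop.fs.defaultFS", "hdfs://namenode:9000")      # assumed HDFS NameNode
    .config("hive.metastore.uris", "thrift://hive-metastore:9083")    # assumed Hive metastore
    .enableHiveSupport()
    .getOrCreate()
)

# Write a small DataFrame to HDFS as a Hive table and read it back.
df = spark.createDataFrame([(1, "hadoop"), (2, "hive")], ["id", "name"])
df.write.mode("overwrite").saveAsTable("default.smoke_test")
spark.sql("SELECT * FROM default.smoke_test").show()

spark.stop()
```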
Alternatives and similar repositories for Big-Data-Cluster
Users interested in Big-Data-Cluster are comparing it to the repositories listed below
- An end-to-end data engineering pipeline that orchestrates data ingestion, processing, and storage using Apache Airflow, Python, Apache Kafka…☆285 · Updated 8 months ago
- Docker with Airflow and Spark standalone cluster☆261 · Updated 2 years ago
- Sample project to demonstrate data engineering best practices☆197 · Updated last year
- Simple stream processing pipeline☆110 · Updated last year
- A template repository to create a data project with IaC, CI/CD, data migrations, & testing☆279 · Updated last year
- Generate a synthetic Spotify music stream dataset to create dashboards. The Spotify API generates fake event data emitted to Kafka. Spark consumes…☆69 · Updated last year
- Stream processing pipeline from Finnhub websocket using Spark, Kafka, Kubernetes and more☆365 · Updated last year
- Projects done in the Data Engineer Nanodegree Program by Udacity.com☆164 · Updated 2 years ago
- Ultimate guide for mastering Spark Performance Tuning and Optimization concepts and for preparing for Data Engineering interviews☆170 · Updated last month
- End-to-end data engineering project with Kafka, Airflow, Spark, Postgres and Docker.☆103 · Updated 7 months ago
- Spark all the ETL Pipelines☆35 · Updated 2 years ago
- ☆161 · Updated 3 years ago
- Data pipeline performing ETL to AWS Redshift using Spark, orchestrated with Apache Airflow☆157 · Updated 5 years ago
- This is a template you can use for your next data engineering portfolio project.☆181 · Updated 4 years ago
- PySpark Cheat Sheet - example code to help you learn PySpark and develop apps faster☆479 · Updated last year
- This project introduces PySpark, a powerful open-source framework for distributed data processing. We explore its architecture, components…☆35 · Updated last year
- Local Environment to Practice Data Engineering☆141 · Updated 10 months ago
- Data Engineering examples for Airflow, Prefect; dbt for BigQuery, Redshift, ClickHouse, Postgres, DuckDB; PySpark for Batch processing; K…☆68 · Updated 4 months ago
- End-to-end data engineering project☆57 · Updated 3 years ago
- This project shows how to capture changes from a Postgres database and stream them into Kafka☆38 · Updated last year
- Sample repo for startdataengineering DE 101 free course☆69 · Updated last year
- This repository contains the code for a real-time election voting system. The system is built using Python, Kafka, Spark Streaming, Postgres…☆42 · Updated last year
- ☆44 · Updated last year
- Code for "Efficient Data Processing in Spark" Course☆345 · Updated 2 weeks ago
- This project demonstrates how to use Apache Airflow to submit jobs to an Apache Spark cluster in different programming languages using Python…☆47 · Updated last year (see the Airflow sketch after this list)
- ☆88 · Updated 3 years ago
- velib-v2: An ETL pipeline that employs batch and streaming jobs using Spark, Kafka, Airflow, and other tools, all orchestrated with Docker…☆20 · Updated 2 months ago
- Get data from an API, run a scheduled script with Airflow, send data to Kafka, consume it with Spark, then write to Cassandra☆143 · Updated 2 years ago (see the streaming sketch after this list)
- Pipeline that extracts data from Crinacle's Headphone and InEarMonitor databases and finalizes data for a Metabase Dashboard. The dashboard…☆242 · Updated 2 years ago
- Data Engineering with AWS, Published by Packt☆332 · Updated 2 years ago
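Several of the pipelines above (notably the API → Airflow → Kafka → Spark → Cassandra project) share the same Structured Streaming pattern. Below is a minimal sketch of it, not any one project's actual code: the topic name, keyspace/table, broker address, and schema are placeholders, and it assumes the `spark-sql-kafka` and `spark-cassandra-connector` packages are on the classpath.

```python
# Sketch of the Kafka -> Spark -> Cassandra pattern; all names are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StringType, StructField, StructType

spark = (
    SparkSession.builder
    .appName("kafka-to-cassandra")
    .config("spark.cassandra.connection.host", "cassandra")  # assumed service name
    .getOrCreate()
)

# Assumed event schema for illustration.
schema = StructType([
    StructField("id", StringType()),
    StructField("name", StringType()),
])

# Read JSON events from an assumed Kafka topic and parse them.
events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:29092")  # assumed broker address
    .option("subscribe", "events")                      # assumed topic name
    .load()
    .select(from_json(col("value").cast("string"), schema).alias("e"))
    .select("e.*")
)

# Stream the parsed rows into a pre-created Cassandra table.
query = (
    events.writeStream
    .format("org.apache.spark.sql.cassandra")
    .option("keyspace", "demo")   # assumed keyspace; table must already exist
    .option("table", "events")
    .option("checkpointLocation", "/tmp/checkpoints/events")
    .start()
)
query.awaitTermination()
```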
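For the Airflow-submits-to-Spark projects above, the usual building block is `SparkSubmitOperator` from the official Spark provider. The sketch below is a generic example under that assumption; the connection id, schedule, and application path are placeholders, and it requires the `apache-airflow-providers-apache-spark` package.

```python
# Sketch of scheduling a spark-submit from Airflow; paths and ids are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.providers.apache.spark.operators.spark_submit import SparkSubmitOperator

with DAG(
    dag_id="spark_submit_example",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # assumed schedule ("schedule_interval" on Airflow < 2.4)
    catchup=False,
) as dag:
    submit_job = SparkSubmitOperator(
        task_id="submit_pyspark_job",
        conn_id="spark_default",                      # Spark connection configured in Airflow
        application="/opt/airflow/jobs/etl_job.py",   # placeholder path to the PySpark script
        verbose=True,
    )
```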