mrugankray / Big-Data-Cluster
The goal of this project is to build a Docker cluster that gives access to Hadoop, HDFS, Hive, PySpark, Sqoop, Airflow, Kafka, Flume, Postgres, Cassandra, Hue, Zeppelin, Kadmin, Kafka Control Center, and pgAdmin. This cluster is intended solely for development environments; do not use it to run any production workloads.
☆64 · Updated 2 years ago
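As an illustration of how a multi-service development cluster like the one described above is typically wired together, the compose-style fragment below sketches two of the listed services. The image tags, ports, and credentials are assumptions for illustration only, not taken from the repository:

```yaml
# Illustrative docker-compose fragment — service names, images, and
# ports are assumptions; consult the repository's own compose file.
services:
  namenode:
    image: apache/hadoop:3        # assumed image tag
    ports:
      - "9870:9870"               # HDFS NameNode web UI (Hadoop 3 default port)
  postgres:
    image: postgres:15            # assumed version
    environment:
      POSTGRES_PASSWORD: example  # development-only credential
```

A stack like this is brought up with `docker compose up -d`. Since the project is explicitly development-only, defaults such as the hard-coded credential above should never reach a production deployment.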
Alternatives and similar repositories for Big-Data-Cluster
Users interested in Big-Data-Cluster are comparing it to the repositories listed below.
- Simple stream processing pipeline ☆103 · Updated 11 months ago
- This project demonstrates how to use Apache Airflow to submit jobs to an Apache Spark cluster in different programming languages using Python… ☆43 · Updated last year
- End-to-end data engineering project with Kafka, Airflow, Spark, Postgres, and Docker. ☆95 · Updated 2 months ago
- Docker with Airflow and Spark standalone cluster ☆256 · Updated last year
- Series on learning Apache Spark (PySpark), with quick tips and workarounds for daily problems ☆53 · Updated last year
- Data engineering examples for Airflow, Prefect; dbt for BigQuery, Redshift, ClickHouse, Postgres, DuckDB; PySpark for batch processing; K… ☆65 · Updated last week
- Creation of a data lakehouse and an ELT pipeline to enable efficient analysis and use of data ☆46 · Updated last year
- Realtime Data Engineering Project ☆30 · Updated 4 months ago
- Hadoop-Hive-Spark cluster + Jupyter on Docker ☆76 · Updated 5 months ago
- A template repository to create a data project with IaC, CI/CD, data migrations, and testing ☆262 · Updated 10 months ago
- Sample project to demonstrate data engineering best practices ☆191 · Updated last year
- An end-to-end data engineering pipeline that orchestrates data ingestion, processing, and storage using Apache Airflow, Python, Apache Ka… ☆252 · Updated 3 months ago
- Welcome to my data engineering projects repository! Here you will find a collection of data engineering projects that I have worked on. ☆17 · Updated 2 years ago
- ☆87 · Updated 2 years ago
- ETL pipeline using PySpark (Spark + Python) ☆116 · Updated 5 years ago
- ☆28 · Updated last year
- Apache Spark 3 - Structured Streaming course material ☆121 · Updated last year
- Big data engineering practice project, including ETL with Airflow and Spark using AWS S3 and EMR ☆84 · Updated 5 years ago
- Data pipeline performing ETL to AWS Redshift using Spark, orchestrated with Apache Airflow ☆146 · Updated 4 years ago
- Generates a synthetic Spotify music-stream dataset to create dashboards. The Spotify API generates fake event data emitted to Kafka; Spark consu… ☆67 · Updated last year
- Dockerizing an Apache Spark standalone cluster ☆43 · Updated 2 years ago
- This repository contains the code for a realtime election voting system. The system is built using Python, Kafka, Spark Streaming, Postgr… ☆39 · Updated last year
- This project shows how to capture changes from a Postgres database and stream them into Kafka ☆36 · Updated last year
- 😈 Complete end-to-end ETL pipeline with Spark, Airflow, and AWS ☆46 · Updated 5 years ago
- ELT data pipeline implementation in a data warehousing environment ☆26 · Updated last month
- PySpark functions and utilities with examples; assists the ETL process of data modeling ☆103 · Updated 4 years ago
- Local environment to practice data engineering ☆142 · Updated 5 months ago
- Spark all the ETL pipelines ☆32 · Updated last year
- End-to-end data engineering project ☆56 · Updated 2 years ago
- This project helps me understand the core concepts of Apache Airflow. I have created custom operators to perform tasks such as staging… ☆88 · Updated 5 years ago