mrugankray / Big-Data-Cluster
The goal of this project is to build a Docker cluster that gives access to Hadoop, HDFS, Hive, PySpark, Sqoop, Airflow, Kafka, Flume, Postgres, Cassandra, Hue, Zeppelin, Kadmin, Kafka Control Center, and pgAdmin. This cluster is intended solely for use in a development environment. Do not use it to run any production workloads.
☆65 · Updated 2 years ago
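As a rough illustration of how a development cluster like this is typically wired together, here is a minimal Docker Compose sketch. The images, service names, ports, and environment variables below are assumptions for illustration only, not taken from the repository; consult the project's actual `docker-compose.yml` for the real configuration.

```yaml
# Hypothetical docker-compose.yml sketch for a development-only big data cluster.
# Images, versions, and ports are illustrative assumptions, not the repo's real file.
version: "3.8"
services:
  namenode:                       # HDFS NameNode (assumed image/tag)
    image: apache/hadoop:3
    ports:
      - "9870:9870"               # NameNode web UI
  postgres:                       # metadata store / sample RDBMS
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: dev_only # development credential, never for production
  kafka:                          # message broker (assumed Confluent image)
    image: confluentinc/cp-kafka:7.4.0
    ports:
      - "9092:9092"               # broker listener
```

Running `docker compose up -d` against a file like this brings the services up on one host network, which is what makes such a stack convenient for local experimentation and unsuitable for production.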
Alternatives and similar repositories for Big-Data-Cluster
Users interested in Big-Data-Cluster are comparing it to the repositories listed below.
- Docker with Airflow and Spark standalone cluster ☆261 · Updated 2 years ago
- An end-to-end data engineering pipeline that orchestrates data ingestion, processing, and storage using Apache Airflow, Python, Apache Ka… ☆268 · Updated 5 months ago
- Get data from API, run a scheduled script with Airflow, send data to Kafka and consume with Spark, then write to Cassandra ☆141 · Updated 2 years ago
- Series follows learning from Apache Spark (PySpark) with quick tips and workaround for daily problems in hand ☆55 · Updated last year
- ☆90 · Updated 6 months ago
- Spark all the ETL Pipelines ☆33 · Updated 2 years ago
- Simple stream processing pipeline ☆103 · Updated last year
- End to end data engineering project with kafka, airflow, spark, postgres and docker. ☆98 · Updated 4 months ago
- Sample project to demonstrate data engineering best practices ☆195 · Updated last year
- ☆88 · Updated 2 years ago
- PySpark Cheat Sheet - example code to help you learn PySpark and develop apps faster ☆476 · Updated 9 months ago
- Data Engineering examples for Airflow, Prefect; dbt for BigQuery, Redshift, ClickHouse, Postgres, DuckDB; PySpark for Batch processing; K… ☆67 · Updated last month
- A template repository to create a data project with IAC, CI/CD, Data migrations, & testing ☆271 · Updated last year
- Create a streaming data, transfer it to Kafka, modify it with PySpark, take it to ElasticSearch and MinIO ☆63 · Updated 2 years ago
- Apache Spark 3 - Structured Streaming Course Material ☆121 · Updated last year
- Data pipeline performing ETL to AWS Redshift using Spark, orchestrated with Apache Airflow ☆149 · Updated 5 years ago
- ☆40 · Updated 2 years ago
- Projects done in the Data Engineer Nanodegree Program by Udacity.com ☆161 · Updated 2 years ago
- End-to-end data platform leveraging the Modern data stack ☆51 · Updated last year
- Local Environment to Practice Data Engineering ☆143 · Updated 7 months ago
- Ultimate guide for mastering Spark Performance Tuning and Optimization concepts and for preparing for Data Engineering interviews ☆153 · Updated last year
- Stream processing with Azure Databricks ☆139 · Updated 8 months ago
- Tutorial for setting up a Spark cluster running inside of Docker containers located on different machines ☆133 · Updated 2 years ago
- End to end data engineering project ☆57 · Updated 2 years ago
- Code snippets for Data Engineering Design Patterns book ☆142 · Updated 4 months ago
- ☆142 · Updated 2 years ago
- Hadoop-Hive-Spark cluster + Jupyter on Docker ☆76 · Updated 7 months ago
- Generate synthetic Spotify music stream dataset to create dashboards. Spotify API generates fake event data emitted to Kafka. Spark consu… ☆68 · Updated last year
- This repository will help you to learn about databricks concept with the help of examples. It will include all the important topics which… ☆100 · Updated last year
- This repository contains the code for a realtime election voting system. The system is built using Python, Kafka, Spark Streaming, Postgr… ☆41 · Updated last year