mrugankray / Big-Data-Cluster
The goal of this project is to build a Docker cluster that gives access to Hadoop, HDFS, Hive, PySpark, Sqoop, Airflow, Kafka, Flume, Postgres, Cassandra, Hue, Zeppelin, Kadmin, Kafka Control Center, and pgAdmin. This cluster is intended solely for development use; do not run production workloads on it.
☆74 · Updated 2 years ago
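Clusters like this are typically wired together with Docker Compose, where each service gets its own container, environment variables, and published ports. As a minimal sketch of that pattern, the fragment below defines just two of the listed services, Postgres and pgAdmin; the service names, credentials, and port mappings are illustrative assumptions, not the repository's actual compose file.

```yaml
# Hypothetical minimal sketch of the compose pattern — not the repo's real file.
version: "3.8"
services:
  postgres:
    image: postgres:15
    environment:
      POSTGRES_USER: dev
      POSTGRES_PASSWORD: dev        # development-only credentials
      POSTGRES_DB: devdb
    ports:
      - "5432:5432"                 # expose Postgres to the host
  pgadmin:
    image: dpage/pgadmin4
    environment:
      PGADMIN_DEFAULT_EMAIL: dev@example.com
      PGADMIN_DEFAULT_PASSWORD: dev
    ports:
      - "8080:80"                   # pgAdmin UI at http://localhost:8080
    depends_on:
      - postgres                    # start the database first
```

With a file like this saved as `docker-compose.yml`, `docker compose up -d` starts both containers; the full cluster in the repository extends the same pattern with Hadoop, Kafka, Airflow, and the other services.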
Alternatives and similar repositories for Big-Data-Cluster
Users interested in Big-Data-Cluster are comparing it to the repositories listed below.
- Docker with Airflow and Spark standalone cluster ☆261 · Updated 2 years ago
- Get data from an API, run a scheduled script with Airflow, send data to Kafka and consume it with Spark, then write to Cassandra ☆143 · Updated 2 years ago
- An end-to-end data engineering pipeline that orchestrates data ingestion, processing, and storage using Apache Airflow, Python, Apache Ka… ☆287 · Updated 9 months ago
- Simple stream processing pipeline ☆110 · Updated last year
- ☆142 · Updated 2 years ago
- This repository contains the code for a real-time election voting system. The system is built using Python, Kafka, Spark Streaming, Postgr… ☆42 · Updated last year
- Stream processing pipeline from the Finnhub websocket using Spark, Kafka, Kubernetes, and more ☆364 · Updated last year
- ☆88 · Updated 3 years ago
- A template repository to create a data project with IaC, CI/CD, data migrations, and testing ☆280 · Updated last year
- Data engineering examples for Airflow, Prefect; dbt for BigQuery, Redshift, ClickHouse, Postgres, DuckDB; PySpark for batch processing; K… ☆68 · Updated 5 months ago
- Price Crawler - Tracking Price Inflation ☆187 · Updated 5 years ago
- This project shows how to capture changes from a Postgres database and stream them into Kafka ☆38 · Updated last year
- Data pipeline performing ETL to AWS Redshift using Spark, orchestrated with Apache Airflow ☆157 · Updated 5 years ago
- This project serves as a comprehensive guide to building an end-to-end data engineering pipeline using TCP/IP Socket, Apache Spark, OpenA… ☆43 · Updated last year
- Source code of the Apache Airflow Tutorial for Beginners on the YouTube channel Coder2j (https://www.youtube.com/c/coder2j) ☆328 · Updated last year
- End-to-end data engineering project with Kafka, Airflow, Spark, Postgres, and Docker ☆105 · Updated 7 months ago
- Sample project to demonstrate data engineering best practices ☆198 · Updated last year
- Python data repo: Jupyter notebooks, Python scripts, and data ☆539 · Updated 11 months ago
- Projects done in the Data Engineer Nanodegree Program by Udacity.com ☆164 · Updated 2 years ago
- Generate a synthetic Spotify music stream dataset to create dashboards. The Spotify API generates fake event data emitted to Kafka; Spark consu… ☆69 · Updated last year
- Spark all the ETL Pipelines ☆35 · Updated 2 years ago
- Ultimate guide for mastering Spark performance tuning and optimization concepts and for preparing for data engineering interviews ☆173 · Updated 2 months ago
- Create streaming data, transfer it to Kafka, modify it with PySpark, and load it into Elasticsearch and MinIO ☆64 · Updated 2 years ago
- End-to-end data engineering project ☆57 · Updated 3 years ago
- A data engineering project with Kafka, Spark Streaming, dbt, Docker, Airflow, Terraform, GCP, and much more! ☆763 · Updated 3 years ago
- ☆91 · Updated 9 months ago
- ☆297 · Updated last year
- Series follows learning Apache Spark (PySpark) with quick tips and workarounds for daily problems at hand ☆56 · Updated 2 years ago
- This is a template you can use for your next data engineering portfolio project ☆181 · Updated 4 years ago
- PySpark functions and utilities with examples; assists the ETL process of data modeling ☆104 · Updated 4 years ago