mrugankray / Big-Data-Cluster
The goal of this project is to build a Docker cluster that gives access to Hadoop, HDFS, Hive, PySpark, Sqoop, Airflow, Kafka, Flume, Postgres, Cassandra, Hue, Zeppelin, Kadmin, Kafka Control Center and pgAdmin. This cluster is intended solely for use in a development environment; do not use it to run any production workloads.
☆64 · Updated 2 years ago
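Since the cluster bundles HDFS and Hive alongside PySpark, a quick connectivity smoke test is a natural first step once the containers are up. The sketch below is a minimal, hypothetical example: the service hostnames and ports (`namenode:9000`, `hive-metastore:9083`) are assumptions based on common defaults, not values taken from this repository, so adjust them to match the actual compose service names.

```python
# Minimal smoke test against a local dev cluster like this one.
# Hostnames/ports below are assumed defaults, not confirmed by this repo.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("dev-cluster-smoke-test")
    # Assumed HDFS namenode address; change to the actual compose service name.
    .config("spark.hadoop.fs.defaultFS", "hdfs://namenode:9000")
    # Assumed Hive metastore URI exposed by the cluster.
    .config("hive.metastore.uris", "thrift://hive-metastore:9083")
    .enableHiveSupport()
    .getOrCreate()
)

# Write a tiny DataFrame to HDFS and read it back to verify connectivity.
df = spark.createDataFrame([(1, "hdfs"), (2, "hive")], ["id", "component"])
df.write.mode("overwrite").parquet("hdfs://namenode:9000/tmp/smoke_test")
print(spark.read.parquet("hdfs://namenode:9000/tmp/smoke_test").count())

spark.stop()
```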
Alternatives and similar repositories for Big-Data-Cluster
Users interested in Big-Data-Cluster are comparing it to the libraries listed below.
- Docker with Airflow and Spark standalone cluster ☆261 · Updated last year
- An end-to-end data engineering pipeline that orchestrates data ingestion, processing, and storage using Apache Airflow, Python, Apache Ka… ☆265 · Updated 5 months ago
- ☆87 · Updated 2 years ago
- Data pipeline performing ETL to AWS Redshift using Spark, orchestrated with Apache Airflow ☆147 · Updated 5 years ago
- Stream processing with Azure Databricks ☆140 · Updated 7 months ago
- PySpark Cheat Sheet - example code to help you learn PySpark and develop apps faster ☆470 · Updated 9 months ago
- This repository contains the code for a real-time election voting system. The system is built using Python, Kafka, Spark Streaming, Postgr… ☆41 · Updated last year
- ETL pipeline using PySpark (Spark + Python) ☆117 · Updated 5 years ago
- Sample project to demonstrate data engineering best practices ☆194 · Updated last year
- Simple stream processing pipeline ☆103 · Updated last year
- A template repository to create a data project with IaC, CI/CD, data migrations, & testing ☆268 · Updated last year
- This repository will help you learn Databricks concepts through examples. It will include all the important topics which… ☆100 · Updated 11 months ago
- In this project, we set up an end-to-end data engineering pipeline using Apache Spark, Azure Databricks, and Data Build Tool (DBT), using Azure as our … ☆33 · Updated last year
- Spark all the ETL Pipelines ☆33 · Updated last year
- Get data from an API, run a scheduled script with Airflow, send the data to Kafka and consume it with Spark, then write to Cassandra ☆139 · Updated last year
- ☆142 · Updated 2 years ago
- End-to-end data engineering project ☆57 · Updated 2 years ago
- Create streaming data, send it to Kafka, transform it with PySpark, and load it into Elasticsearch and MinIO ☆63 · Updated last year
- End-to-end data engineering project with Kafka, Airflow, Spark, Postgres and Docker. ☆98 · Updated 3 months ago
- Data Engineering examples for Airflow, Prefect; dbt for BigQuery, Redshift, ClickHouse, Postgres, DuckDB; PySpark for batch processing; K… ☆66 · Updated 3 weeks ago
- Projects done in the Data Engineer Nanodegree Program by Udacity.com ☆160 · Updated 2 years ago
- Python data repo: Jupyter notebooks, Python scripts, and data. ☆518 · Updated 7 months ago
- Apache Spark 3 - Structured Streaming Course Material ☆121 · Updated last year
- Price Crawler - Tracking Price Inflation ☆186 · Updated 5 years ago
- Solution to all projects of Udacity's Data Engineering Nanodegree: Data Modeling with Postgres & Cassandra, Data Warehouse with Redshift,… ☆57 · Updated 2 years ago
- Ultimate guide for mastering Spark Performance Tuning and Optimization concepts and for preparing for Data Engineering interviews ☆152 · Updated last year
- Git Repository ☆144 · Updated 5 months ago
- This project introduces PySpark, a powerful open-source framework for distributed data processing. We explore its architecture, component… ☆33 · Updated 9 months ago
- PySpark functions and utilities with examples. Assists the ETL process of data modeling. ☆104 · Updated 4 years ago
- This project shows how to capture changes from a Postgres database and stream them into Kafka ☆36 · Updated last year