mrugankray / Big-Data-Cluster
The goal of this project is to build a Docker cluster that gives access to Hadoop, HDFS, Hive, PySpark, Sqoop, Airflow, Kafka, Flume, Postgres, Cassandra, Hue, Zeppelin, Kadmin, Kafka Control Center, and pgAdmin. This cluster is intended solely for use in a development environment; do not run any production workloads on it.
☆72 · Updated 2 years ago
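A quick way to smoke-test such a cluster from the host is to point a local PySpark session at its Spark master and round-trip a small DataFrame through HDFS. A minimal sketch, assuming the compose file publishes the Spark master on spark://localhost:7077 and the HDFS namenode on hdfs://localhost:9000 (hypothetical ports; check the repo's docker-compose.yml):

```python
# Minimal smoke test for a local Big Data dev cluster.
# The endpoints below are assumptions -- adjust them to the ports
# the cluster's docker-compose file actually maps.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("cluster-smoke-test")
    .master("spark://localhost:7077")  # assumed Spark master endpoint
    .getOrCreate()
)

# Round-trip a tiny DataFrame through HDFS to confirm both services work.
df = spark.createDataFrame([(1, "hadoop"), (2, "hive")], ["id", "name"])
df.write.mode("overwrite").parquet("hdfs://localhost:9000/tmp/smoke_test")
print(spark.read.parquet("hdfs://localhost:9000/tmp/smoke_test").count())

spark.stop()
```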
Alternatives and similar repositories for Big-Data-Cluster
Users interested in Big-Data-Cluster are comparing it to the repositories listed below.
- Docker with Airflow and Spark standalone cluster ☆260 · Updated 2 years ago
- An end-to-end data engineering pipeline that orchestrates data ingestion, processing, and storage using Apache Airflow, Python, Apache Ka… ☆281 · Updated 7 months ago
- Get data from an API, run a scheduled script with Airflow, send the data to Kafka, consume it with Spark, then write it to Cassandra; see the streaming sketch after this list ☆143 · Updated 2 years ago
- End-to-end data engineering project with Kafka, Airflow, Spark, Postgres, and Docker. ☆103 · Updated 6 months ago
- This project introduces PySpark, a powerful open-source framework for distributed data processing. We explore its architecture, component… ☆34 · Updated last year
- Data engineering examples for Airflow, Prefect; dbt for BigQuery, Redshift, ClickHouse, Postgres, DuckDB; PySpark for batch processing; K… ☆68 · Updated 3 months ago
- This repository contains the code for a real-time election voting system. The system is built using Python, Kafka, Spark Streaming, Postgr… ☆41 · Updated last year
- Apache Spark 3 - Spark Programming in Python for Beginners ☆497 · Updated last year
- Simple stream processing pipeline ☆110 · Updated last year
- Source code of the Apache Airflow Tutorial for Beginners on the YouTube channel Coder2j (https://www.youtube.com/c/coder2j); see the DAG sketch after this list ☆321 · Updated last year
- Data pipeline performing ETL to AWS Redshift using Spark, orchestrated with Apache Airflow ☆153 · Updated 5 years ago
- Sample project to demonstrate data engineering best practices ☆198 · Updated last year
- Big Data Engineering practice project, including ETL with Airflow and Spark using AWS S3 and EMR ☆87 · Updated 6 years ago
- This project demonstrates how to use Apache Airflow to submit jobs to an Apache Spark cluster in different programming languages using Python… ☆45 · Updated last year
- ☆142 · Updated 2 years ago
- PySpark Cheat Sheet - example code to help you learn PySpark and develop apps faster ☆480 · Updated 11 months ago
- Python data repo: Jupyter notebooks, Python scripts, and data. ☆531 · Updated 10 months ago
- Create streaming data, send it to Kafka, transform it with PySpark, and load it into Elasticsearch and MinIO; see the producer sketch after this list ☆63 · Updated 2 years ago
- A template repository to create a data project with IaC, CI/CD, data migrations, and testing ☆277 · Updated last year
- PySpark functions and utilities with examples. Assists the ETL process of data modeling ☆104 · Updated 4 years ago
- Ultimate guide for mastering Spark performance tuning and optimization concepts and for preparing for data engineering interviews; see the broadcast-join sketch after this list ☆167 · Updated last month
- The resources of the preparation course for the Databricks Data Engineer Associate certification exam ☆495 · Updated 3 weeks ago
- ☆88 · Updated 3 years ago
- Projects done in the Data Engineer Nanodegree Program by Udacity.com ☆164 · Updated 2 years ago
- Solutions to all projects of Udacity's Data Engineering Nanodegree: Data Modeling with Postgres & Cassandra, Data Warehouse with Redshift,… ☆57 · Updated 2 years ago
- PySpark Tutorial for Beginners - practical examples in Jupyter Notebook with Spark version 3.4.1. The tutorial covers various topics like… ☆134 · Updated 2 years ago
- This is a template you can use for your next data engineering portfolio project. ☆182 · Updated 4 years ago
- End-to-end data engineering project ☆57 · Updated 2 years ago
- Stream processing pipeline from the Finnhub websocket using Spark, Kafka, Kubernetes, and more ☆360 · Updated last year
- 😈 Complete end-to-end ETL pipeline with Spark, Airflow, & AWS ☆50 · Updated 6 years ago
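Producer sketch. Several of the pipelines above start by generating events and pushing them into Kafka. A minimal producer, assuming a broker on localhost:9092 and a topic named "events" (both assumptions; match them to your compose setup), not code from any of the linked repos:

```python
# Hypothetical producer that emits fake JSON events to a local Kafka broker.
import json
import time

from kafka import KafkaProducer  # pip install kafka-python

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",  # assumed broker address
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

for i in range(10):
    producer.send("events", {"event_id": i, "ts": time.time()})

producer.flush()
producer.close()
```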
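Streaming sketch. The Kafka → Spark → Cassandra leg of the API-to-Cassandra pipeline above typically looks like Structured Streaming with a foreachBatch write. A sketch under several assumptions: the "events" topic from the producer sketch, a Cassandra keyspace "pipeline" with table "events", and both the Kafka source and the DataStax connector on the classpath (e.g. submit with `--packages org.apache.spark:spark-sql-kafka-0-10_2.12:3.4.1,com.datastax.spark:spark-cassandra-connector_2.12:3.4.1`):

```python
# Sketch: consume JSON events from Kafka and append them to Cassandra.
# Topic, keyspace, table, and broker address are all assumptions.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import DoubleType, LongType, StructField, StructType

spark = SparkSession.builder.appName("kafka-to-cassandra").getOrCreate()

schema = StructType([
    StructField("event_id", LongType()),
    StructField("ts", DoubleType()),
])

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")  # assumed broker
    .option("subscribe", "events")
    .load()
    .select(from_json(col("value").cast("string"), schema).alias("e"))
    .select("e.*")
)

def write_batch(batch_df, batch_id):
    # The Cassandra connector writes in batch mode, hence foreachBatch.
    (batch_df.write.format("org.apache.spark.sql.cassandra")
     .options(keyspace="pipeline", table="events")
     .mode("append")
     .save())

events.writeStream.foreachBatch(write_batch).start().awaitTermination()
```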
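DAG sketch. Beginner Airflow tutorials like the one above usually build up to a two-task DAG. A minimal Airflow 2.x example in that style; the DAG id and task names are illustrative, not taken from the linked repo:

```python
# Minimal two-task DAG: extract then load, scheduled daily.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pulling data...")

def load():
    print("writing data...")

with DAG(
    dag_id="etl_demo",          # illustrative name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_load = PythonOperator(task_id="load", python_callable=load)
    t_extract >> t_load          # load runs only after extract succeeds
```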
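Broadcast-join sketch. One of the standard moves covered by Spark tuning guides like the one above: broadcasting a small dimension table so a join avoids a full shuffle. Table names and sizes here are illustrative:

```python
# Broadcast the small side of a join to turn a shuffle-heavy
# SortMergeJoin into a shuffle-free BroadcastHashJoin.
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.appName("tuning-demo").getOrCreate()

facts = spark.range(10_000_000).withColumnRenamed("id", "key")
dims = spark.createDataFrame(
    [(i, f"dim_{i}") for i in range(100)], ["key", "label"]
)

# broadcast() ships the small table to every executor once.
joined = facts.join(broadcast(dims), "key")
joined.explain()  # the plan should show BroadcastHashJoin
```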