mrugankray / Big-Data-Cluster
The goal of this project is to build a Docker cluster that gives access to Hadoop, HDFS, Hive, PySpark, Sqoop, Airflow, Kafka, Flume, Postgres, Cassandra, Hue, Zeppelin, Kadmin, Kafka Control Center and pgAdmin. This cluster is intended solely for use in a development environment. Do not use it to run any production workloads.
☆66 · Updated 2 years ago
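For a sense of how a development cluster like this is typically exercised, here is a minimal PySpark smoke test. It is only a sketch: the `namenode` hostname and port `8020` are assumptions about the Docker network, not values taken from this repository.

```python
# Minimal sketch: round-trip a tiny DataFrame through HDFS to confirm that
# the Spark <-> HDFS wiring of a local Docker-based cluster works.
# "namenode:8020" is an assumed service name/port, not taken from this repo.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("cluster-smoke-test").getOrCreate()

# Write a small DataFrame to HDFS as Parquet.
df = spark.createDataFrame(
    [(1, "hadoop"), (2, "hive"), (3, "kafka")],
    ["id", "component"],
)
df.write.mode("overwrite").parquet("hdfs://namenode:8020/tmp/smoke_test")

# Read it back and display it to verify the round trip.
spark.read.parquet("hdfs://namenode:8020/tmp/smoke_test").show()

spark.stop()
```

Run it from a container on the same Docker network as the namenode; if the write succeeds and the rows print, Spark and HDFS are wired together correctly.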
Alternatives and similar repositories for Big-Data-Cluster
Users interested in Big-Data-Cluster are comparing it to the repositories listed below.
- Docker with Airflow and Spark standalone cluster ☆261 · Updated 2 years ago
- Get data from an API, run a scheduled script with Airflow, send data to Kafka and consume it with Spark, then write to Cassandra ☆143 · Updated 2 years ago
- Simple stream processing pipeline ☆108 · Updated last year
- An end-to-end data engineering pipeline that orchestrates data ingestion, processing, and storage using Apache Airflow, Python, Apache Ka… ☆273 · Updated 7 months ago
- Data pipeline performing ETL to AWS Redshift using Spark, orchestrated with Apache Airflow ☆155 · Updated 5 years ago
- ☆88 · Updated 3 years ago
- Sample project to demonstrate data engineering best practices ☆196 · Updated last year
- This repository contains the code for a realtime election voting system. The system is built using Python, Kafka, Spark Streaming, Postgr… ☆41 · Updated last year
- Create streaming data, transfer it to Kafka, modify it with PySpark, take it to ElasticSearch and MinIO ☆63 · Updated 2 years ago
- Data Engineering examples for Airflow, Prefect; dbt for BigQuery, Redshift, ClickHouse, Postgres, DuckDB; PySpark for Batch processing; K… ☆67 · Updated 2 months ago
- Hadoop-Hive-Spark cluster + Jupyter on Docker ☆78 · Updated 8 months ago
- ☆40 · Updated 2 years ago
- Python data repo, Jupyter notebooks, Python scripts and data. ☆530 · Updated 9 months ago
- Projects done in the Data Engineer Nanodegree Program by Udacity.com ☆163 · Updated 2 years ago
- ☆142 · Updated 2 years ago
- PySpark Cheat Sheet - example code to help you learn PySpark and develop apps faster ☆478 · Updated 11 months ago
- Near real-time ETL to populate a dashboard. ☆72 · Updated last week
- End-to-end data engineering project ☆57 · Updated 2 years ago
- Price Crawler - Tracking Price Inflation ☆187 · Updated 5 years ago
- This project helps me to understand the core concepts of Apache Airflow. I have created custom operators to perform tasks such as staging… ☆92 · Updated 6 years ago
- Solution to all projects of Udacity's Data Engineering Nanodegree: Data Modeling with Postgres & Cassandra, Data Warehouse with Redshift,… ☆57 · Updated 2 years ago
- Spark all the ETL Pipelines ☆33 · Updated 2 years ago
- Learn Apache Spark in Scala, Python (PySpark) and R (SparkR) by building your own cluster with a JupyterLab interface on Docker. ☆495 · Updated 2 years ago
- This project demonstrates how to use Apache Airflow to submit jobs to an Apache Spark cluster in different programming languages using Python… ☆44 · Updated last year
- A series on learning Apache Spark (PySpark), with quick tips and workarounds for daily problems at hand ☆55 · Updated last year
- Generate a synthetic Spotify music stream dataset to create dashboards. The Spotify API generates fake event data emitted to Kafka. Spark consu… ☆69 · Updated last year
- End-to-end data engineering project with Kafka, Airflow, Spark, Postgres and Docker. ☆102 · Updated 5 months ago
- Building a Data Pipeline with an Open Source Stack ☆55 · Updated 2 months ago
- PySpark functions and utilities with examples. Assists the ETL process of data modeling ☆104 · Updated 4 years ago
- Building a Modern Data Lake with Minio, Spark, Airflow via Docker. ☆21 · Updated last year