dogukannulu / streaming_data_processing
Create streaming data, send it to Kafka, transform it with PySpark, and load it into Elasticsearch and MinIO
☆64 · Updated 2 years ago
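Pipelines like the one this repository describes (stream → Kafka → PySpark → Elasticsearch/MinIO) generally apply a per-record transformation between consuming from Kafka and indexing downstream. Below is a minimal, self-contained Python sketch of such a transform step; the field names (`user`, `action`, `ts`) are hypothetical and not taken from the repository itself:

```python
import json
from datetime import datetime, timezone

def transform_event(raw: str) -> dict:
    """Parse a raw Kafka message (a JSON string) and reshape it for indexing.
    The schema here is hypothetical -- adjust field names to the real payload."""
    event = json.loads(raw)
    return {
        "user": event.get("user", "unknown"),      # default when the field is absent
        "action": event.get("action"),
        # normalize an epoch-seconds timestamp to ISO 8601, which Elasticsearch
        # date fields accept directly
        "ts": datetime.fromtimestamp(event["ts"], tz=timezone.utc).isoformat(),
    }

doc = transform_event('{"user": "alice", "action": "click", "ts": 0}')
print(doc["ts"])  # 1970-01-01T00:00:00+00:00
```

In a real PySpark job this logic would typically live in a UDF or a set of DataFrame column expressions applied to the Kafka value column, rather than a plain function.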
Alternatives and similar repositories for streaming_data_processing
Users interested in streaming_data_processing are comparing it to the repositories listed below.
- Get data from an API, run a scheduled script with Airflow, send the data to Kafka and consume it with Spark, then write to Cassandra ☆143 · Updated 2 years ago
- ☆44 · Updated last year
- This repository contains the code for a real-time election voting system. The system is built using Python, Kafka, Spark Streaming, Postgr… ☆42 · Updated last year
- Produce Kafka messages, consume them, and upload them into Cassandra and MongoDB. ☆42 · Updated 2 years ago
- An end-to-end data engineering pipeline that orchestrates data ingestion, processing, and storage using Apache Airflow, Python, Apache Ka… ☆287 · Updated 9 months ago
- End-to-end data engineering project with Kafka, Airflow, Spark, Postgres, and Docker. ☆105 · Updated 7 months ago
- A series on learning Apache Spark (PySpark), with quick tips and workarounds for everyday problems ☆56 · Updated 2 years ago
- Writes a CSV file to Postgres, reads the table, and modifies it; writes more tables to Postgres with Airflow. ☆37 · Updated 2 years ago
- ☆70 · Updated last week
- Source code of the Apache Airflow Tutorial for Beginners on the YouTube channel Coder2j (https://www.youtube.com/c/coder2j) ☆328 · Updated last year
- ☆29 · Updated 2 years ago
- Code snippets for the Data Engineering Design Patterns book ☆271 · Updated 8 months ago
- Simple stream processing pipeline ☆110 · Updated last year
- Docker with Airflow and a Spark standalone cluster ☆261 · Updated 2 years ago
- This project serves as a comprehensive guide to building an end-to-end data engineering pipeline using TCP/IP Socket, Apache Spark, OpenA… ☆43 · Updated last year
- ☆142 · Updated 2 years ago
- Nyc_Taxi_Data_Pipeline - DE Project ☆129 · Updated last year
- ☆88 · Updated 3 years ago
- Data Engineering Bootcamp ☆30 · Updated 3 months ago
- A Glue ETL job or EMR Spark job that reads from the Data Catalog, modifies the data, and uploads it to S3 and the Data Catalog ☆13 · Updated 2 years ago
- A batch-processing data pipeline using AWS resources (S3, EMR, Redshift, EC2, IAM), provisioned via Terraform and orchestrated from loc… ☆23 · Updated 3 years ago
- This project demonstrates how to use Apache Airflow to submit jobs to an Apache Spark cluster in different programming languages using Python… ☆46 · Updated last year
- Data Engineering with Google Cloud Platform, published by Packt ☆119 · Updated 2 years ago
- Data pipeline performing ETL to AWS Redshift using Spark, orchestrated with Apache Airflow ☆157 · Updated 5 years ago
- In this project, we set up an end-to-end data engineering pipeline using Apache Spark, Azure Databricks, and Data Build Tool (DBT), using Azure as our … ☆36 · Updated last year
- This project helped me understand the core concepts of Apache Airflow. I created custom operators to perform tasks such as staging… ☆94 · Updated 6 years ago
- Classwork projects and homework done through the Udacity Data Engineering Nanodegree ☆74 · Updated last year
- PySpark functions and utilities with examples, assisting the ETL process of data modeling ☆104 · Updated 4 years ago
- A list of all my posts and personal projects ☆74 · Updated last month
- Data Engineering with AWS, 2nd edition - Published by Packt ☆161 · Updated 2 years ago