dogukannulu / streaming_data_processing
Create streaming data, send it to Kafka, transform it with PySpark, and load it into Elasticsearch and MinIO
☆65 · Updated 2 years ago
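The pipeline this repo describes (generate → Kafka → PySpark transform → Elasticsearch/MinIO) centers on a per-record transformation step. Below is a minimal, runnable sketch of what such a transform might look like; the field names (`user`, `event`, `processed_at`) and the message schema are illustrative assumptions, not taken from the repo itself.

```python
import json
from datetime import datetime, timezone

def enrich(raw: bytes) -> dict:
    """Parse one Kafka message value and add a processing timestamp.

    In a pipeline like the one above, logic of this kind would run
    inside a PySpark Structured Streaming job that reads from Kafka
    and writes the result to Elasticsearch and MinIO; it is shown here
    as a plain function so the sketch runs standalone.
    """
    record = json.loads(raw)  # Kafka delivers message values as raw bytes
    record["processed_at"] = datetime.now(timezone.utc).isoformat()
    return record

# Example message as it might arrive from the topic (hypothetical schema)
message = b'{"user": "alice", "event": "click"}'
print(enrich(message)["event"])  # -> click
```

Keeping the transform a pure function like this makes it easy to unit-test before wiring it into a streaming job.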
Alternatives and similar repositories for streaming_data_processing
Users interested in streaming_data_processing are comparing it to the repositories listed below
- Get data from an API, run a scheduled script with Airflow, send data to Kafka and consume it with Spark, then write to Cassandra ☆144 · Updated 2 years ago
- This repository contains the code for a real-time election voting system. The system is built using Python, Kafka, Spark Streaming, Postgr… ☆45 · Updated 2 years ago
- ☆45 · Updated last year
- An end-to-end data engineering pipeline that orchestrates data ingestion, processing, and storage using Apache Airflow, Python, Apache Ka… ☆312 · Updated 11 months ago
- End-to-end data engineering project with Kafka, Airflow, Spark, Postgres, and Docker ☆108 · Updated last month
- ☆88 · Updated 3 years ago
- Produce Kafka messages, consume them, and load them into Cassandra and MongoDB ☆43 · Updated 2 years ago
- ☆30 · Updated 2 years ago
- ☆70 · Updated this week
- Source code of the Apache Airflow Tutorial for Beginners on the YouTube channel Coder2j (https://www.youtube.com/c/coder2j) ☆336 · Updated last year
- This project serves as a comprehensive guide to building an end-to-end data engineering pipeline using TCP/IP Socket, Apache Spark, OpenA… ☆44 · Updated 2 years ago
- This project shows how to capture changes from a Postgres database and stream them into Kafka ☆40 · Updated last year
- Writes a CSV file to Postgres, reads the table, and modifies it; writes more tables to Postgres with Airflow ☆38 · Updated 2 years ago
- Code snippets for the Data Engineering Design Patterns book ☆331 · Updated last month
- Docker with Airflow and a Spark standalone cluster ☆262 · Updated 2 years ago
- A series on learning Apache Spark (PySpark), with quick tips and workarounds for everyday problems ☆56 · Updated 2 years ago
- YouTube tutorial project ☆108 · Updated 2 years ago
- Simple stream processing pipeline ☆110 · Updated last year
- ☆148 · Updated 3 years ago
- Simple ETL pipeline using Python ☆29 · Updated 2 years ago
- Classwork projects and homework done for the Udacity Data Engineering Nanodegree ☆75 · Updated 2 years ago
- Data pipeline performing ETL to AWS Redshift using Spark, orchestrated with Apache Airflow ☆162 · Updated 5 years ago
- This project helps me understand the core concepts of Apache Airflow. I have created custom operators to perform tasks such as staging… ☆97 · Updated 6 years ago
- This project demonstrates how to use Apache Airflow to submit jobs to an Apache Spark cluster in different programming languages using Python… ☆46 · Updated last year
- Projects done in the Data Engineer Nanodegree Program by Udacity.com ☆165 · Updated 3 years ago
- Sample project to demonstrate data engineering best practices ☆202 · Updated last year
- PySpark functions and utilities with examples, to assist the ETL process of data modeling ☆104 · Updated 5 years ago
- A Glue ETL job or EMR Spark job that reads from the Data Catalog, modifies the data, and uploads it to S3 and the Data Catalog ☆13 · Updated 2 years ago
- In this project, we set up an end-to-end data engineering pipeline using Apache Spark, Azure Databricks, and Data Build Tool (DBT), using Azure as our … ☆38 · Updated 2 years ago
- A batch processing data pipeline using AWS resources (S3, EMR, Redshift, EC2, IAM), provisioned via Terraform and orchestrated from loc… ☆23 · Updated 3 years ago
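Many of the repositories above share the same produce → consume → load shape (for example, Kafka messages loaded into Cassandra or MongoDB). A self-contained sketch of that flow follows; the in-memory `Queue` and `dict` stand in for the broker and the sink so the example runs without any infrastructure, and the `id` key is a hypothetical record field chosen for illustration.

```python
from queue import Queue

def produce(broker: Queue, records: list[dict]) -> None:
    """Stand-in producer: push records onto the 'topic'."""
    for record in records:
        broker.put(record)

def consume_and_load(broker: Queue, sink: dict) -> int:
    """Stand-in consumer: drain the topic and upsert each record into
    the sink, keyed by a hypothetical 'id' field. Returns the count."""
    loaded = 0
    while not broker.empty():
        record = broker.get()
        sink[record["id"]] = record  # idempotent upsert by key
        loaded += 1
    return loaded

topic = Queue()
store: dict = {}
produce(topic, [{"id": 1, "city": "Berlin"}, {"id": 2, "city": "Izmir"}])
print(consume_and_load(topic, store))  # -> 2
```

In the real projects, `produce` maps to a Kafka producer, the loop in `consume_and_load` to a consumer poll loop or Spark micro-batch, and the keyed upsert is why sinks like Cassandra pair naturally with a primary-key write path.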