Joshua-omolewa / Stock_streaming_pipeline_project
A real-time streaming pipeline that extracts stock data using Apache NiFi, Debezium, Kafka, and Spark Streaming. The transformed data is loaded into an AWS Glue database and served through real-time Power BI and Tableau dashboards backed by Athena. The pipeline is orchestrated with Airflow.
☆26 · Updated last year
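The core of such a pipeline is the Kafka-to-Spark-to-S3/Glue leg: Spark Structured Streaming consumes stock events from Kafka, parses them into typed columns, and lands Parquet files where Glue and Athena can query them. Below is a minimal sketch of that leg; the topic name, event schema, and S3 paths are illustrative assumptions, not values taken from this repository.

```python
# Minimal sketch: Kafka -> Spark Structured Streaming -> S3 (queryable via Glue/Athena).
# Requires the spark-sql-kafka connector package on the Spark classpath.
from pyspark.sql import SparkSession
from pyspark.sql.functions import from_json, col
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("stock-stream-demo").getOrCreate()

# Assumed shape of a stock tick event produced upstream (e.g. via NiFi/Debezium).
tick_schema = StructType([
    StructField("symbol", StringType()),
    StructField("price", DoubleType()),
    StructField("volume", DoubleType()),
    StructField("event_time", TimestampType()),
])

# Read the raw Kafka stream; "stock_ticks" is a hypothetical topic name.
raw = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")
    .option("subscribe", "stock_ticks")
    .option("startingOffsets", "latest")
    .load()
)

# Parse the JSON payload into typed columns.
ticks = (
    raw.selectExpr("CAST(value AS STRING) AS json")
    .select(from_json(col("json"), tick_schema).alias("t"))
    .select("t.*")
)

# Write Parquet files to an S3 location that a Glue crawler or table definition
# can expose to Athena; bucket and prefixes are placeholders.
query = (
    ticks.writeStream
    .format("parquet")
    .option("path", "s3a://example-bucket/stock_ticks/")
    .option("checkpointLocation", "s3a://example-bucket/checkpoints/stock_ticks/")
    .outputMode("append")
    .start()
)

query.awaitTermination()
```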
Alternatives and similar repositories for Stock_streaming_pipeline_project
Users interested in Stock_streaming_pipeline_project are comparing it to the repositories listed below.
- Sample Data Lakehouse deployed in Docker containers using Apache Iceberg, Minio, Trino and a Hive Metastore. Can be used for local testin… ☆69 · Updated last year
- Data pipeline that scrapes Rust cheater Steam profiles ☆52 · Updated 3 years ago
- ☆151 · Updated 2 years ago
- ☆40 · Updated 10 months ago
- Data pipeline performing ETL to AWS Redshift using Spark, orchestrated with Apache Airflow ☆144 · Updated 4 years ago
- Delta-Lake, ETL, Spark, Airflow ☆47 · Updated 2 years ago
- Generate synthetic Spotify music stream dataset to create dashboards. Spotify API generates fake event data emitted to Kafka. Spark consu… ☆67 · Updated last year
- Pipeline that extracts data from Crinacle's Headphone and InEarMonitor databases and finalizes data for a Metabase Dashboard. The dashboa… ☆231 · Updated 2 years ago
- Code Repository for my 3rd Data Project. ☆14 · Updated last year
- ☆139 · Updated 2 years ago
- Code for "Advanced data transformations in SQL" free live workshop ☆81 · Updated last week
- 📡 Real-time data pipeline with Kafka, Flink, Iceberg, Trino, MinIO, and Superset. Ideal for learning data systems. ☆44 · Updated 3 months ago
- Stream processing pipeline from Finnhub websocket using Spark, Kafka, Kubernetes and more ☆346 · Updated last year
- Spark all the ETL Pipelines ☆32 · Updated last year
- This repo contains "Databricks Certified Data Engineer Professional" Questions and related docs. ☆70 · Updated 9 months ago
- End-to-end data engineering project ☆54 · Updated 2 years ago
- Data Pipeline from the Global Historical Climatology Network DataSet ☆27 · Updated 2 years ago
- Step-by-step instructions to create a production-ready data pipeline ☆50 · Updated 4 months ago
- ☆17 · Updated 2 years ago
- Sample project to demonstrate data engineering best practices ☆191 · Updated last year
- End-to-end data engineering project with Kafka, Airflow, Spark, Postgres and Docker. ☆93 · Updated last month
- This project provides a comprehensive data pipeline solution to extract, transform, and load (ETL) Reddit data into a Redshift data wareh… ☆137 · Updated last year
- A data engineering project with Airflow, dbt, Terraform, GCP and much more! ☆24 · Updated 2 years ago
- A template repository to create a data project with IAC, CI/CD, data migrations, & testing ☆261 · Updated 10 months ago
- ☆38 · Updated 2 years ago
- In this project, we set up an end-to-end data engineering pipeline using Apache Spark, Azure Databricks, and Data Build Tool (DBT), using Azure as our … ☆29 · Updated last year
- Building a Data Pipeline with an Open Source Stack ☆54 · Updated 10 months ago
- velib-v2: An ETL pipeline that employs batch and streaming jobs using Spark, Kafka, Airflow, and other tools, all orchestrated with Docke… ☆19 · Updated 8 months ago
- Data Engineering examples for Airflow, Prefect; dbt for BigQuery, Redshift, ClickHouse, Postgres, DuckDB; PySpark for Batch processing; K… ☆64 · Updated 2 months ago
- ☆87 · Updated 2 years ago