jkoth / Data-Lake-with-Spark-and-AWS-S3
Create a data lake on AWS S3 to store dimensional tables after processing data with Spark on an AWS EMR cluster
☆9 · Updated 5 years ago
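The pipeline this repo describes reads raw data from S3, reshapes it into dimensional tables with Spark, and writes the results back to S3. As a rough, Spark-free sketch of that reshaping step (plain Python with hypothetical field names, not the repo's actual code), building one dimension table amounts to de-duplicating raw event records per business key:

```python
# Hypothetical illustration (stdlib Python, no Spark): building a "users"
# dimension table by keeping the most recent record per user_id -- the same
# shape of transform the repo performs with Spark DataFrames before writing
# the dimension tables back to S3.

RAW_EVENTS = [
    {"user_id": 1, "name": "Ada", "level": "free", "ts": 100},
    {"user_id": 2, "name": "Lin", "level": "free", "ts": 110},
    {"user_id": 1, "name": "Ada", "level": "paid", "ts": 200},  # later upgrade
]

def build_user_dimension(events):
    """De-duplicate events, keeping the latest row per user_id."""
    latest = {}
    for row in sorted(events, key=lambda r: r["ts"]):
        latest[row["user_id"]] = {k: row[k] for k in ("user_id", "name", "level")}
    return sorted(latest.values(), key=lambda r: r["user_id"])

users_dim = build_user_dimension(RAW_EVENTS)
print(users_dim)
```

In the actual project this dedup would be a Spark `dropDuplicates`/window operation over a DataFrame, with the result written to S3 as Parquet.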
Alternatives and similar repositories for Data-Lake-with-Spark-and-AWS-S3:
Users interested in Data-Lake-with-Spark-and-AWS-S3 are comparing it to the repositories listed below.
- Big Data Engineering practice project, including ETL with Airflow and Spark using AWS S3 and EMR ☆80 · Updated 5 years ago
- RedditR for Content Engagement and Recommendation ☆21 · Updated 7 years ago
- Simplified ETL process in Hadoop using Apache Spark. Includes a complete ETL pipeline for a data lake. SparkSession extensions, DataFrame validatio… ☆53 · Updated last year
- 😈 Complete end-to-end ETL pipeline with Spark, Airflow, & AWS ☆43 · Updated 5 years ago
- Udacity Data Engineering Nanodegree Capstone Project ☆35 · Updated 4 years ago
- This repo contains commands that data engineers use in day-to-day work. ☆60 · Updated 2 years ago
- Data pipeline performing ETL to AWS Redshift using Spark, orchestrated with Apache Airflow ☆139 · Updated 4 years ago
- With everything I learned from DEZoomcamp from datatalks.club, this project performs batch processing on AWS for the cycling dataset wh… ☆12 · Updated 2 years ago
- ☆87 · Updated 2 years ago
- ☆14 · Updated 2 years ago
- A production-grade data pipeline designed to automate parsing of user search patterns to analyze user engagement. Extract d… ☆24 · Updated 3 years ago
- Simple ETL pipeline using Python ☆25 · Updated last year
- PySpark functions and utilities with examples. Assists the ETL process of data modeling ☆101 · Updated 4 years ago
- Developed an ETL pipeline for a data lake that extracts data from S3, processes it with Spark, and loads it back into S3 as … ☆16 · Updated 5 years ago
- ☆64 · Updated this week
- A repo to track data engineering projects ☆13 · Updated 2 years ago
- A full data warehouse infrastructure with ETL pipelines running inside Docker on Apache Airflow for data orchestration, AWS Redshift for … ☆134 · Updated 4 years ago
- This project helps me understand the core concepts of Apache Airflow. I created custom operators to perform tasks such as staging… ☆76 · Updated 5 years ago
- Demonstration of using Apache Spark to build robust ETL pipelines while taking advantage of open-source, general-purpose cluster computin… ☆24 · Updated last year
- Databricks Certified Associate Spark Developer preparation toolkit to set up a single-node standalone Spark cluster along with material in t… ☆29 · Updated 10 months ago
- Classwork projects and homework done through the Udacity Data Engineering Nanodegree ☆74 · Updated last year
- My solutions for the Udacity Data Engineering Nanodegree ☆33 · Updated 5 years ago
- PySpark Cheatsheet ☆36 · Updated 2 years ago
- Solutions to all projects of Udacity's Data Engineering Nanodegree: Data Modeling with Postgres & Cassandra, Data Warehouse with Redshift, … ☆56 · Updated 2 years ago
- Developed a data pipeline to automate data warehouse ETL by building custom Airflow operators that handle the extraction, transformation, … ☆90 · Updated 3 years ago
- ☆53 · Updated 4 years ago
- Data engineering interview Q&A for the data community, by the data community ☆64 · Updated 4 years ago
- ☆14 · Updated 5 years ago
- ☆41 · Updated 7 months ago
- Near-real-time ETL to populate a dashboard. ☆73 · Updated 8 months ago