Rajsingh92 / MUST_HAVE_SKILLS
This repo covers the important concepts for data engineers.
☆11 · Updated 11 months ago
Alternatives and similar repositories for MUST_HAVE_SKILLS
Users interested in MUST_HAVE_SKILLS are comparing it to the repositories listed below.
- Simple ETL pipeline using Python ☆29 · Updated 2 years ago
- ☆88 · Updated 3 years ago
- Apache Spark using SQL ☆14 · Updated 4 years ago
- Apache Spark 3 - Structured Streaming Course Material ☆125 · Updated 2 years ago
- PySpark Cheatsheet ☆36 · Updated 2 years ago
- This repository will help you learn Databricks concepts with the help of examples. It will include all the important topics which… ☆103 · Updated 2 months ago
- ☆27 · Updated 3 years ago
- Resources for the free AWS Data Engineering course on YouTube ☆102 · Updated 4 years ago
- My solutions for the Udacity Data Engineering Nanodegree ☆34 · Updated 6 years ago
- ☆70 · Updated last month
- PySpark functions and utilities with examples. Assists the ETL process of data modeling ☆104 · Updated 5 years ago
- GitHub repository related to the course Mastering Elastic Map Reduce for Data Engineers ☆23 · Updated 3 years ago
- In this project, we will build an ETL (Extract, Transform, Load) pipeline using the Spotify API on AWS. The pipeline will retrieve data fro… ☆25 · Updated 2 years ago
- Solutions to all projects of Udacity's Data Engineering Nanodegree: Data Modeling with Postgres & Cassandra, Data Warehouse with Redshift,… ☆57 · Updated 3 years ago
- Mastering Big Data Analytics with PySpark, Published by Packt ☆163 · Updated last year
- ☆21 · Updated 2 years ago
- A production-grade data pipeline designed to automate the parsing of user search patterns to analyze user engagement. Extract d… ☆24 · Updated 4 years ago
- Classwork projects and homework done through the Udacity Data Engineering Nanodegree ☆74 · Updated 2 years ago
- YouTube tutorial project ☆105 · Updated 2 years ago
- Data pipeline performing ETL to AWS Redshift using Spark, orchestrated with Apache Airflow (see the Airflow sketch after this list) ☆158 · Updated 5 years ago
- All Data Engineering notebooks from the Datacamp course ☆115 · Updated 6 years ago
- Repository for Spark-with-Python material, popularly known as PySpark ☆20 · Updated 4 years ago
- Big Data Engineering practice project, including ETL with Airflow and Spark using AWS S3 and EMR ☆88 · Updated 6 years ago
- A course by DataTalks Club that covers Spark, Kafka, Docker, Airflow, Terraform, dbt, BigQuery, etc. ☆14 · Updated 3 years ago
- Code Repository for AWS Certified Big Data Specialty 2019 - In Depth and Hands On!, published by Packt ☆43 · Updated 2 years ago
- Developed an ETL pipeline for a Data Lake that extracts data from S3, processes the data using Spark, and loads the data back into S3 as… (see the PySpark sketch after this list) ☆16 · Updated 6 years ago
- ☆24 · Updated 7 months ago
- ☆117 · Updated 5 years ago
- 😈 Complete End to End ETL Pipeline with Spark, Airflow, & AWS ☆50 · Updated 6 years ago
- Built a Data Pipeline for a Retail store using AWS services that collects data from its transactional database (OLTP) in Snowflake and tr… ☆11 · Updated 2 years ago
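Several of the repositories above follow the same S3 → Spark → S3 data-lake pattern. As a rough illustration only (not code from any of the listed repos), a minimal PySpark job of that shape might look like the sketch below; the bucket names, paths, and column names are placeholders.

```python
# Minimal sketch of an S3 -> Spark -> S3 data-lake ETL job.
# Bucket names, paths, and columns are hypothetical placeholders;
# a real job also needs AWS credentials and the hadoop-aws package configured.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F


def run_etl(input_path: str, output_path: str) -> None:
    spark = SparkSession.builder.appName("s3-data-lake-etl").getOrCreate()

    # Extract: read raw JSON events from the landing zone.
    raw = spark.read.json(input_path)

    # Transform: drop duplicates, keep valid records, derive a partition column.
    cleaned = (
        raw.dropDuplicates(["event_id"])
           .filter(F.col("event_ts").isNotNull())
           .withColumn("event_date", F.to_date("event_ts"))
    )

    # Load: write partitioned Parquet back to the curated zone.
    (cleaned.write
            .mode("overwrite")
            .partitionBy("event_date")
            .parquet(output_path))

    spark.stop()


if __name__ == "__main__":
    run_etl("s3a://example-raw-bucket/events/",
            "s3a://example-curated-bucket/events/")
```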
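For the Airflow-orchestrated entries (e.g. the Spark-to-Redshift pipeline above), a bare-bones DAG might be structured like this; the DAG id, schedule, and task callables are assumptions for illustration, not the actual code from those repos.

```python
# Bare-bones Airflow DAG sketch: a daily Spark transform step followed by a
# Redshift load. Task bodies are stubs; ids, schedule, and connection handling
# are hypothetical and not taken from the listed repositories.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def transform_with_spark(**context):
    # Placeholder: submit or run the Spark job that builds the staging data.
    print("running Spark transformation for", context["ds"])


def load_to_redshift(**context):
    # Placeholder: COPY the transformed data from S3 into Redshift.
    print("loading partition", context["ds"], "into Redshift")


with DAG(
    dag_id="spark_to_redshift_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    transform = PythonOperator(
        task_id="transform_with_spark",
        python_callable=transform_with_spark,
    )
    load = PythonOperator(
        task_id="load_to_redshift",
        python_callable=load_to_redshift,
    )

    # Run the Spark transform before the Redshift load.
    transform >> load
```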