developershomes / SparkETL
Spark all the ETL Pipelines
☆35 · Updated 2 years ago
Alternatives and similar repositories for SparkETL
Users interested in SparkETL are comparing it to the libraries listed below.
- Simple stream processing pipeline ☆110 · Updated last year
- Sample project to demonstrate data engineering best practices ☆198 · Updated last year
- Docker with Airflow and Spark standalone cluster ☆261 · Updated 2 years ago
- Local Environment to Practice Data Engineering ☆141 · Updated 10 months ago
- End-to-end data engineering project ☆57 · Updated 3 years ago
- A series on learning Apache Spark (PySpark), with quick tips and workarounds for everyday problems ☆56 · Updated 2 years ago
- Sample Data Lakehouse deployed in Docker containers using Apache Iceberg, Minio, Trino and a Hive Metastore. Can be used for local testin… ☆74 · Updated 2 years ago
- Code snippets for the Data Engineering Design Patterns book ☆256 · Updated 7 months ago
- This repo contains "Databricks Certified Data Engineer Professional" questions and related docs. ☆119 · Updated last year
- Repo for everything open table formats (Iceberg, Hudi, Delta Lake) and the overall Lakehouse architecture ☆114 · Updated 4 months ago
- End-to-end data engineering project with Kafka, Airflow, Spark, Postgres, and Docker ☆103 · Updated 7 months ago
- End-to-end data platform: A PoC Data Platform project utilizing modern data stack (Spark, Airflow, DBT, Trino, Lightdash, Hive metastore,… ☆46 · Updated last year
- Ultimate guide for mastering Spark Performance Tuning and Optimization concepts and for preparing for Data Engineering interviews ☆173 · Updated 2 months ago
- Projects done in the Data Engineer Nanodegree Program by Udacity.com ☆164 · Updated 2 years ago
- Code for "Efficient Data Processing in Spark" course ☆346 · Updated 3 weeks ago
- A template repository to create a data project with IaC, CI/CD, data migrations, & testing ☆279 · Updated last year
- This project helps me to understand the core concepts of Apache Airflow. I have created custom operators to perform tasks such as staging… ☆93 · Updated 6 years ago
- The goal of this project is to build a docker cluster that gives access to Hadoop, HDFS, Hive, PySpark, Sqoop, Airflow, Kafka, Flume, Pos… ☆74 · Updated 2 years ago
- ☆44 · Updated last year
- Solution to all projects of Udacity's Data Engineering Nanodegree: Data Modeling with Postgres & Cassandra, Data Warehouse with Redshift,… ☆57 · Updated 3 years ago
- PySpark functions and utilities with examples; assists the ETL process of data modeling ☆104 · Updated 4 years ago
- This project demonstrates how to use Apache Airflow to submit jobs to an Apache Spark cluster in different programming languages using Python… ☆46 · Updated last year
- A self-contained, ready to run Airflow ELT project. Can be run locally or within codespaces. ☆78 · Updated 2 years ago
- velib-v2: An ETL pipeline that employs batch and streaming jobs using Spark, Kafka, Airflow, and other tools, all orchestrated with Docke… ☆20 · Updated 2 months ago
- Get data from an API, run a scheduled script with Airflow, send data to Kafka and consume it with Spark, then write to Cassandra (a minimal consume-and-write sketch follows this list) ☆143 · Updated 2 years ago
- Code for dbt tutorial ☆161 · Updated 2 months ago
- ☆88 · Updated 3 years ago
- Code for "Advanced data transformations in SQL" free live workshop ☆86 · Updated 6 months ago
- Stream processing pipeline from Finnhub websocket using Spark, Kafka, Kubernetes and more ☆364 · Updated last year
- ☆29 · Updated last year
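
Several of the entries above share the same backbone: a scheduler (usually Airflow) kicks off a Spark job that consumes a Kafka topic and writes the result to a sink such as Cassandra, Postgres, or object storage. Below is a minimal PySpark sketch of just the consume-and-write step; the broker address, topic name, and output paths are illustrative placeholders rather than values from any listed repository, and it assumes the `spark-sql-kafka-0-10` package is available on the Spark classpath.

```python
# Minimal sketch: read a Kafka topic with Spark Structured Streaming and
# persist the raw events as Parquet. Broker, topic, and paths are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = (
    SparkSession.builder
    .appName("kafka-to-parquet-sketch")
    .getOrCreate()
)

events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")  # placeholder broker
    .option("subscribe", "events")                         # placeholder topic
    .option("startingOffsets", "latest")
    .load()
    # Kafka delivers key/value as binary; cast both to strings for downstream use.
    .select(col("key").cast("string"), col("value").cast("string"))
)

query = (
    events.writeStream
    .format("parquet")
    .option("path", "/tmp/events")                   # placeholder sink path
    .option("checkpointLocation", "/tmp/events_ck")  # required for streaming sinks
    .start()
)

query.awaitTermination()
```

Swapping the Parquet sink for Cassandra or Postgres mainly changes the `writeStream` format and connector options, which is why so many of the projects listed above look structurally alike.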