IhorLuk / medium-materials
☆14 · Updated 6 months ago
Alternatives and similar repositories for medium-materials
Users interested in medium-materials are comparing it to the libraries listed below.
- Write a CSV file to Postgres, read and modify the table, then write more tables to Postgres with Airflow ☆37 · Updated 2 years ago
- AWS ETL Pipeline ☆30 · Updated last year
- Code for my "Efficient Data Processing in SQL" book ☆60 · Updated last year
- Code for blog at https://www.startdataengineering.com/post/python-for-de/ ☆89 · Updated last year
- Docker with Airflow + Postgres + Spark cluster + JDK (spark-submit support) + Jupyter Notebooks ☆24 · Updated 3 years ago
- Produce Kafka messages, consume them, and load them into Cassandra and MongoDB ☆42 · Updated 2 years ago
- Project for the "Data pipeline design patterns" blog ☆47 · Updated last year
- Code snippets for the Data Engineering Design Patterns book ☆271 · Updated 8 months ago
- Portfolio of projects and studies conducted in data engineering ☆34 · Updated 8 months ago
- A course by DataTalks Club that covers Spark, Kafka, Docker, Airflow, Terraform, dbt, BigQuery, etc. ☆14 · Updated 3 years ago
- Apache Airflow Best Practices, published by Packt ☆51 · Updated last year
- ☆26 · Updated 2 years ago
- Get data from an API, run a scheduled script with Airflow, send data to Kafka, consume it with Spark, then write to Cassandra ☆143 · Updated 2 years ago
- Delta-Lake, ETL, Spark, Airflow ☆48 · Updated 3 years ago
- End-to-end data engineering project ☆57 · Updated 3 years ago
- End-to-end data platform leveraging the Modern data stack ☆52 · Updated last year
- Code for dbt tutorial ☆165 · Updated 2 months ago
- ☆88 · Updated 3 years ago
- A portable Datamart and Business Intelligence suite built with Docker, Airflow, dbt, PostgreSQL and Superset ☆46 · Updated last year
- Building a Data Pipeline with an Open Source Stack ☆54 · Updated 4 months ago
- Bigdata on Kubernetes, published by Packt ☆36 · Updated last year
- ☆13 · Updated last year
- Retail data pipeline using Airflow, dbt, Soda, GCP (GCS and BigQuery) and Metabase ☆39 · Updated last year
- Create a data stream, send it to Kafka, transform it with PySpark, and load it into Elasticsearch and MinIO ☆64 · Updated 2 years ago
- ☆40 · Updated 2 years ago
- Full stack data engineering tools and infrastructure set-up ☆57 · Updated 4 years ago
- Code for blog at https://www.startdataengineering.com/post/docker-for-de/ ☆40 · Updated last year
- A self-contained, ready-to-run Airflow ELT project; can be run locally or within Codespaces ☆79 · Updated 2 years ago
- Local environment to practice data engineering ☆143 · Updated 10 months ago
- Data Engineering with Apache Spark ☆42 · Updated 4 years ago