PacktPublishing / Bigdata-on-Kubernetes
Bigdata on Kubernetes, Published by Packt
☆30 · Updated 6 months ago
Alternatives and similar repositories for Bigdata-on-Kubernetes:
Users interested in Bigdata-on-Kubernetes are comparing it to the repositories listed below.
- Produce Kafka messages, consume them, and load them into Cassandra and MongoDB. ☆41 · Updated last year
- Data Engineering with Databricks Cookbook, published by Packt ☆80 · Updated 10 months ago
- Data Engineering with Google Cloud Platform - Second Edition, published by Packt ☆36 · Updated 11 months ago
- Building ETL Pipelines with Python ☆134 · Updated 9 months ago
- This project demonstrates how to use Apache Airflow to submit jobs to an Apache Spark cluster in different programming languages using Python… ☆42 · Updated last year
- Write a CSV file to Postgres, read the table, and modify it; write more tables to Postgres with Airflow. ☆35 · Updated last year
- Data Engineering with AWS, 2nd edition - Published by Packt ☆138 · Updated last year
- This project shows how to capture changes from a Postgres database and stream them into Kafka. ☆36 · Updated 11 months ago
- Apache Airflow Best Practices, published by Packt ☆40 · Updated 5 months ago
- Essential PySpark for Scalable Data Analytics, published by Packt ☆44 · Updated 2 years ago
- ☆64 · Updated last week
- Code for blog at https://www.startdataengineering.com/post/python-for-de/ ☆74 · Updated 10 months ago
- Create streaming data, send it to Kafka, transform it with PySpark, and load it into Elasticsearch and MinIO. ☆60 · Updated last year
- Building a Data Pipeline with an Open Source Stack ☆53 · Updated 9 months ago
- Data engineering with dbt, published by Packt ☆77 · Updated last year
- Local Environment to Practice Data Engineering ☆142 · Updated 3 months ago
- AWS ETL Pipeline ☆28 · Updated 11 months ago
- Duke MIDS: Data Engineering and DataOps Course ☆66 · Updated 3 months ago
- A self-contained, ready-to-run Airflow ELT project. Can be run locally or within Codespaces. ☆67 · Updated last year
- Step-by-step instructions to create a production-ready data pipeline ☆45 · Updated 4 months ago
- ☆40 · Updated 9 months ago
- This project serves as a comprehensive guide to building an end-to-end data engineering pipeline using TCP/IP Socket, Apache Spark, OpenA… ☆34 · Updated last year
- Project for "Data pipeline design patterns" blog. ☆45 · Updated 8 months ago
- Databricks ML in Action, Published by Packt ☆30 · Updated 11 months ago
- Docker with Airflow + Postgres + Spark cluster + JDK (spark-submit support) + Jupyter Notebooks ☆23 · Updated 3 years ago
- This repository contains an Apache Flink application for real-time sales analytics built using Docker Compose to orchestrate the necessar… ☆44 · Updated last year
- Found a data engineering challenge or participated in a selection process? Share it with us! ☆65 · Updated 2 years ago
- Apartments Data Pipeline using Airflow and Spark. ☆20 · Updated 3 years ago
- Data Engineering with Scala, published by Packt ☆23 · Updated last year
- To provide a deeper understanding of how the modern, open-source data stack consisting of Iceberg, dbt, Trino, and Hive operates within a… ☆32 · Updated last year