abxda / micro-data-lake
Micro Data Lake based on Docker Compose
☆17 · Updated 5 years ago
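As a rough illustration of what a Docker Compose based micro data lake can look like, here is a minimal sketch. The service choices (MinIO for object storage, Postgres for a metadata database, a JupyterLab/PySpark notebook for analysis) are illustrative assumptions only, not the actual contents of this repository.

```yaml
# Hypothetical micro data lake stack -- service names and images are
# assumptions for illustration, not taken from abxda/micro-data-lake.
version: "3.8"
services:
  minio:                              # S3-compatible object storage for the lake
    image: minio/minio:latest
    command: server /data --console-address ":9001"
    environment:
      MINIO_ROOT_USER: admin
      MINIO_ROOT_PASSWORD: admin12345
    ports:
      - "9000:9000"
      - "9001:9001"
    volumes:
      - minio-data:/data
  postgres:                           # relational store for metadata / curated tables
    image: postgres:15
    environment:
      POSTGRES_USER: lake
      POSTGRES_PASSWORD: lake
      POSTGRES_DB: metadata
    ports:
      - "5432:5432"
  jupyter:                            # notebook environment with PySpark for analysis
    image: jupyter/pyspark-notebook:latest
    ports:
      - "8888:8888"
    depends_on:
      - minio
      - postgres
volumes:
  minio-data:
```

Bringing the stack up with `docker compose up -d` would expose MinIO on ports 9000/9001, Postgres on 5432, and JupyterLab on 8888; the actual repositories listed below wire up additional services such as Airflow, Hive, or Kafka.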
Alternatives and similar repositories for micro-data-lake
Users interested in micro-data-lake are comparing it to the libraries listed below.
- Docker with Airflow and Spark standalone cluster ☆262 · Updated 2 years ago
- Datalake ☆31 · Updated this week
- Learn Apache Spark in Scala, Python (PySpark) and R (SparkR) by building your own cluster with a JupyterLab interface on Docker. ☆506 · Updated 3 months ago
- ☆29 · Updated 4 years ago
- Big Data infrastructure: Hadoop + NiFi + Spark + Hive using Docker ☆20 · Updated last month
- Tutorial for setting up a Spark cluster running inside of Docker containers located on different machines ☆135 · Updated 3 years ago
- ☆14 · Updated 2 years ago
- Zeppelin docker ☆16 · Updated 5 years ago
- Project with Airflow + Spark + MinIO + Postgres + Python3.8 ☆28 · Updated 3 years ago
- Creation of a data lakehouse and an ELT pipeline to enable the efficient analysis and use of data ☆49 · Updated 2 years ago
- Apache Airflow in Docker Compose (for both versions 1.10.* and 2.*) ☆184 · Updated 2 years ago
- A data engineering project (Twitter monitor app) ☆87 · Updated 3 years ago
- A collection of data engineering projects: data modeling, ETL pipelines, data lakes, infrastructure configuration on AWS, data warehousin… ☆15 · Updated 4 years ago
- How to use Presto (with Hive metastore) and MinIO? ☆27 · Updated 2 years ago
- Grafana dashboards and StatsD exporter config for Airflow monitoring ☆292 · Updated last year
- Source code of the Apache Airflow Tutorial for Beginners on YouTube Channel Coder2j (https://www.youtube.com/c/coder2j) ☆336 · Updated last year
- Docker Airflow - Contains a docker compose file for Airflow 2.0 ☆70 · Updated 3 years ago
- Delta-Lake, ETL, Spark, Airflow ☆48 · Updated 3 years ago
- Multi-container environment with Hadoop, Spark and Hive ☆232 · Updated 9 months ago
- trino monitoring with JMX metrics through Prometheus and Grafana ☆17 · Updated last year
- ☆46 · Updated 2 years ago
- Building a Data Pipeline with an Open Source Stack ☆55 · Updated 7 months ago
- The goal of this project is to build a docker cluster that gives access to Hadoop, HDFS, Hive, PySpark, Sqoop, Airflow, Kafka, Flume, Pos… ☆76 · Updated 2 years ago
- This project implements an ELT (Extract - Load - Transform) data pipeline with the goodreads dataset, using dagster (orchestration), spar… ☆42 · Updated 2 years ago
- ☆41 · Updated 3 years ago
- The Python fake data producer for Apache Kafka® is a complete demo app allowing you to quickly produce JSON fake streaming datasets and … ☆84 · Updated last year
- Create streaming data, transfer it to Kafka, modify it with PySpark, and take it to ElasticSearch and MinIO ☆65 · Updated 2 years ago
- A Series of Notebooks on how to start with Kafka and Python ☆151 · Updated 11 months ago
- Data Engineering examples for Airflow, Prefect; dbt for BigQuery, Redshift, ClickHouse, Postgres, DuckDB; PySpark for Batch processing; K… ☆69 · Updated last week
- Developed an ETL pipeline for a Data Lake that extracts data from S3, processes the data using Spark, and loads the data back into S3 as … ☆16 · Updated 6 years ago