martinkarlssonio / big-data-solution
☆46 · Updated 2 years ago
Alternatives and similar repositories for big-data-solution
Users who are interested in big-data-solution are comparing it to the libraries listed below
- Docker with Airflow and Spark standalone cluster ☆262 · Updated 2 years ago
- Multi-container environment with Hadoop, Spark and Hive ☆232 · Updated 9 months ago
- This project demonstrates how to use Apache Airflow to submit jobs to an Apache Spark cluster in different programming languages using Python… ☆48 · Updated last year
- Learn Apache Spark in Scala, Python (PySpark) and R (SparkR) by building your own cluster with a JupyterLab interface on Docker. ☆507 · Updated 3 months ago
- O'Reilly Book: [Data Algorithms with Spark] by Mahmoud Parsian ☆228 · Updated 2 years ago
- Delta-Lake, ETL, Spark, Airflow ☆48 · Updated 3 years ago
- Get data from API, run a scheduled script with Airflow, send data to Kafka and consume with Spark, then write to Cassandra ☆144 · Updated 2 years ago
- ☆90 · Updated 3 years ago
- Creation of a data lakehouse and an ELT pipeline to enable the efficient analysis and use of data ☆49 · Updated 2 years ago
- End-to-end data engineering project with Kafka, Airflow, Spark, Postgres and Docker. ☆108 · Updated last month
- Simplified ETL process in Hadoop using Apache Spark. Has complete ETL pipeline for datalake. SparkSession extensions, DataFrame validatio… ☆56 · Updated 2 years ago
- PySpark functions and utilities with examples. Assists ETL process of data modeling ☆104 · Updated 5 years ago
- Docker with Airflow + Postgres + Spark cluster + JDK (spark-submit support) + Jupyter Notebooks ☆24 · Updated 3 years ago
- Data Engineering with Spark and Delta Lake ☆106 · Updated 3 years ago
- PySpark Cheat Sheet - example code to help you learn PySpark and develop apps faster ☆488 · Updated last year
- Create streaming data, send it to Kafka, transform it with PySpark, and load it into Elasticsearch and MinIO ☆65 · Updated 2 years ago
- Hadoop-Hive-Spark cluster + Jupyter on Docker ☆83 · Updated last year
- ☆93 · Updated last year
- Building a Modern Data Lake with Minio, Spark, Airflow via Docker. ☆23 · Updated last year
- Apache Spark 3 - Structured Streaming Course Material ☆126 · Updated 2 years ago
- This project helps me to understand the core concepts of Apache Airflow. I have created custom operators to perform tasks such as staging… ☆97 · Updated 6 years ago
- Playground for Lakehouse (Iceberg, Hudi, Spark, Flink, Trino, DBT, Airflow, Kafka, Debezium CDC) ☆64 · Updated 2 years ago
- Building a Data Pipeline with an Open Source Stack ☆56 · Updated 7 months ago
- Simple stream processing pipeline ☆110 · Updated last year
- Write a CSV file to Postgres, read the table back and modify it, then write more tables to Postgres with Airflow. ☆38 · Updated 2 years ago
- Data pipeline performing ETL to AWS Redshift using Spark, orchestrated with Apache Airflow ☆162 · Updated 5 years ago
- Fundamentals of Spark with Python (using PySpark), code examples ☆362 · Updated 3 years ago
- Code snippets for Data Engineering Design Patterns book ☆331 · Updated last month
- Source code of the Apache Airflow Tutorial for Beginners on YouTube Channel Coder2j (https://www.youtube.com/c/coder2j) ☆336 · Updated last year
- The goal of this project is to build a docker cluster that gives access to Hadoop, HDFS, Hive, PySpark, Sqoop, Airflow, Kafka, Flume, Pos… ☆76 · Updated 2 years ago