martinkarlssonio / big-data-solution
☆47 · Updated last year
Alternatives and similar repositories for big-data-solution
Users interested in big-data-solution are comparing it to the repositories listed below.
- Docker with Airflow and Spark standalone cluster ☆258 · Updated last year
- Multi-container environment with Hadoop, Spark and Hive ☆215 · Updated last month
- Hadoop-Hive-Spark cluster + Jupyter on Docker ☆75 · Updated 5 months ago
- This repo contains a Spark standalone cluster on Docker for anyone who wants to play with PySpark by submitting their applications. ☆35 · Updated 2 years ago
- ☆87 · Updated 4 months ago
- Delta-Lake, ETL, Spark, Airflow ☆47 · Updated 2 years ago
- ETL pipeline using PySpark (Spark - Python) ☆117 · Updated 5 years ago
- Simple stream processing pipeline ☆102 · Updated last year
- Creation of a data lakehouse and an ELT pipeline to enable the efficient analysis and use of data ☆46 · Updated last year
- A workspace to experiment with Apache Spark, Livy, and Airflow in a Docker environment. ☆38 · Updated 4 years ago
- Simple repo to demonstrate how to submit a Spark job to EMR from Airflow ☆33 · Updated 4 years ago
- A real-time streaming ETL pipeline for streaming and performing sentiment analysis on Twitter data using Apache Kafka, Apache Spark and D… ☆30 · Updated 4 years ago
- O'Reilly Book: [Data Algorithms with Spark] by Mahmoud Parsian ☆216 · Updated last year
- Tutorial for setting up a Spark cluster running inside of Docker containers located on different machines ☆133 · Updated 2 years ago
- Docker Airflow - Contains a docker compose file for Airflow 2.0 ☆67 · Updated 2 years ago
- Docker with Airflow + Postgres + Spark cluster + JDK (spark-submit support) + Jupyter Notebooks ☆24 · Updated 3 years ago
- ☆14 · Updated 2 years ago
- PySpark functions and utilities with examples. Assists the ETL process of data modeling. ☆103 · Updated 4 years ago
- ☆39 · Updated 2 years ago
- Apache Spark 3 - Structured Streaming Course Material ☆121 · Updated last year
- ☆23 · Updated 4 years ago
- Apache Spark Structured Streaming with Kafka using Python (PySpark) ☆40 · Updated 6 years ago
- Simplified ETL process in Hadoop using Apache Spark. Has complete ETL pipeline for datalake. SparkSession extensions, DataFrame validatio… ☆55 · Updated 2 years ago
- Code for dbt tutorial ☆156 · Updated 2 weeks ago
- ☆87 · Updated 2 years ago
- Get data from an API, run a scheduled script with Airflow, send data to Kafka and consume it with Spark, then write to Cassandra ☆140 · Updated last year
- The resources of the preparation course for the Databricks Data Engineer Professional certification exam ☆117 · Updated this week
- Building a Data Pipeline with an Open Source Stack ☆55 · Updated 11 months ago
- Classwork projects and homework done through the Udacity Data Engineering Nanodegree ☆74 · Updated last year
- Building a Modern Data Lake with MinIO, Spark, Airflow via Docker. ☆20 · Updated last year