adaltas / spark-streaming-pyspark
Build and run Spark Structured Streaming pipelines in Hadoop using PySpark.
☆12 · Updated 5 years ago
Related projects:
- Dockerizing an Apache Spark Standalone Cluster ☆43 · Updated 2 years ago
- This repository contains code for Spark Streaming ☆21 · Updated 3 years ago
- A project for exploring how Great Expectations can be used to ensure data quality and validate batches within a data pipeline defined in … ☆21 · Updated 2 years ago
- Full stack data engineering tools and infrastructure set-up ☆38 · Updated 3 years ago
- Creation of a data lakehouse and an ELT pipeline to enable the efficient analysis and use of data ☆37 · Updated 9 months ago
- Simplified ETL process in Hadoop using Apache Spark. Has a complete ETL pipeline for a data lake. SparkSession extensions, DataFrame validatio… ☆53 · Updated last year
- Materials for the next course ☆22 · Updated last year
- Build & learn data engineering and machine learning over Kubernetes. No-shortcut approach. ☆57 · Updated last year
- Data validation library for PySpark 3.0.0 ☆34 · Updated last year
- Docker Compose and Google Colab demo to build a CDC pipeline with Delta Lake ☆15 · Updated 2 years ago
- Airflow training for the crunch conf ☆105 · Updated 5 years ago
- Apache Spark Structured Streaming with Kafka using Python (PySpark) ☆41 · Updated 5 years ago
- Source code for the MC technical blog post "Data Observability in Practice Using SQL" ☆35 · Updated 2 months ago
- One-click deploy docker-compose with Kafka, Spark Streaming, Zeppelin UI, and monitoring (Grafana + Kafka Manager) ☆120 · Updated 3 years ago
- Event-triggered plugins for Airflow ☆21 · Updated 4 years ago
- Pipeline library for StreamSets Data Collector and Transformer ☆32 · Updated last year
- How to manage Slowly Changing Dimensions with Apache Hive ☆55 · Updated 5 years ago
- An example CI/CD pipeline using GitHub Actions for continuous deployment of AWS Glue jobs built on PySpark and Jupyter Notebooks ☆12 · Updated 3 years ago
- Various demos, mostly based on Docker environments ☆33 · Updated last year
- A demo of using Kafka, Spark, Hive, Cassandra, etc. with Docker. It produces a production-ready environment for any kinds of big d… ☆30 · Updated 4 years ago
- Spark on Kubernetes ☆105 · Updated last year
- Debussy is an opinionated data architecture and engineering framework, enabling data analysts and engineers to build better platforms and… ☆28 · Updated last year
- Example project for consuming an AWS Kinesis stream and saving the data to Amazon Redshift using Apache Spark ☆11 · Updated 6 years ago
- Spark on Kubernetes using Helm ☆34 · Updated 4 years ago
- Spark package for checking data quality ☆25 · Updated last year
- ETL pipeline using PySpark (Spark/Python) ☆106 · Updated 4 years ago