Aiven-Labs / python-fake-data-producer-for-apache-kafka
The Python fake data producer for Apache Kafka® is a complete demo app that lets you quickly produce fake JSON streaming datasets and push them to an Apache Kafka topic.
☆85 · Updated last year
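The core idea behind the project is simple: repeatedly build a random JSON record and hand it to a Kafka producer. Below is a minimal sketch of that loop using only the Python standard library; the real project relies on the Faker library and a configurable Kafka client, and the topic and field names here are purely illustrative.

```python
import json
import random
import time
import uuid

# Hypothetical topic name, for illustration only.
TOPIC = "fake-events"

def fake_record() -> dict:
    """Return one fake event as a plain dict (stand-in for Faker-generated data)."""
    return {
        "id": str(uuid.uuid4()),
        "shop": random.choice(["Luigis", "Marios", "Peppes"]),
        "cost": round(random.uniform(5.0, 25.0), 2),
        "ts": int(time.time() * 1000),
    }

def produce(n: int) -> list[bytes]:
    """Serialize n fake records to JSON bytes, as a Kafka producer would send them."""
    messages = []
    for _ in range(n):
        payload = json.dumps(fake_record()).encode("utf-8")
        # With a real Kafka client this is where the message would be sent,
        # e.g. producer.send(TOPIC, value=payload) with kafka-python.
        messages.append(payload)
    return messages

if __name__ == "__main__":
    for message in produce(3):
        print(message.decode("utf-8"))
```

Keeping record generation separate from delivery, as above, is what makes this pattern easy to point at any topic or serialization format.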
Alternatives and similar repositories for python-fake-data-producer-for-apache-kafka
Users interested in python-fake-data-producer-for-apache-kafka are comparing it to the libraries listed below.
- A repository of sample code showing data quality checking best practices using Airflow ☆77 · Updated 2 years ago
- A series of notebooks on how to start with Kafka and Python ☆154 · Updated 3 months ago
- Docker environment to stream data from Kafka to Iceberg tables ☆29 · Updated last year
- Delta Lake, ETL, Spark, Airflow ☆47 · Updated 2 years ago
- Cost-Efficient Data Pipelines with DuckDB ☆54 · Updated last month
- New-generation open-source data stack ☆68 · Updated 3 years ago
- Delta Lake Documentation ☆49 · Updated last year
- Evaluation Matrix for Change Data Capture ☆25 · Updated 10 months ago
- Code snippets for the Data Engineering Design Patterns book ☆119 · Updated 3 months ago
- ☆49 · Updated 3 years ago
- Full-stack data engineering tools and infrastructure set-up ☆53 · Updated 4 years ago
- Delta Lake helper methods. No Spark dependency. ☆23 · Updated 9 months ago
- Streaming Synthetic Sales Data Generator: a streaming sales data generator for Apache Kafka, written in Python ☆44 · Updated 2 years ago
- Docker Airflow: contains a Docker Compose file for Airflow 2.0 ☆67 · Updated 2 years ago
- Data validation library for PySpark 3.0.0 ☆33 · Updated 2 years ago
- ☆21 · Updated 4 years ago
- Enforce best practices for all your Airflow DAGs ⭐ ☆101 · Updated 2 weeks ago
- Materials for the next course ☆24 · Updated 2 years ago
- ☆18 · Updated last year
- Materials for the official Helm Chart webinar ☆27 · Updated 4 years ago
- A package to run DuckDB queries from Apache Airflow ☆19 · Updated last year
- A simple and easy-to-use Data Quality (DQ) tool built with Python ☆50 · Updated last year
- Quick guides from Dremio on several topics ☆71 · Updated 3 weeks ago
- Provides a deeper understanding of how the modern, open-source data stack consisting of Iceberg, dbt, Trino, and Hive operates within a… ☆35 · Updated last year
- Resources for video demonstrations and blog posts related to DataOps on AWS ☆178 · Updated 3 years ago
- DataHub on AWS demonstration resources ☆10 · Updated 2 years ago
- PySpark boilerplate for running production-ready data pipelines ☆28 · Updated 4 years ago
- Data lake and data warehouse on GCP ☆56 · Updated 3 years ago
- A Python package that creates fine-grained dbt tasks on Apache Airflow ☆70 · Updated 9 months ago
- Trino dbt demo project to mix and load BigQuery data with and in a local PostgreSQL database ☆75 · Updated 3 years ago