redapt / pyspark-s3-parquet-example
This repo demonstrates how to load a sample Parquet-formatted file from an AWS S3 bucket. A Python job is then submitted to an Apache Spark instance running on AWS EMR, which uses a SQLContext to create a temporary table from a DataFrame. SQL queries can then be run against the temporary table.
☆19 · Updated 8 years ago
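For context, here is a minimal sketch of the workflow the repo describes, using the legacy SQLContext API from Spark 1.x to match the description; the bucket path, table name, and query are illustrative placeholders, not the repo's actual code:

```python
# Minimal sketch of the described workflow (illustrative names, not the
# repo's actual script). Submitted to Spark on EMR with, for example:
#   spark-submit --master yarn parquet_job.py
from pyspark import SparkContext
from pyspark.sql import SQLContext

sc = SparkContext(appName="pyspark-s3-parquet-example")
sql_context = SQLContext(sc)

# Load the Parquet file from S3 into a DataFrame
# ("s3://my-bucket/sample.parquet" is a placeholder path).
df = sql_context.read.parquet("s3://my-bucket/sample.parquet")

# Register the DataFrame as a temporary table so it can be queried with SQL.
df.registerTempTable("sample_table")
sql_context.sql("SELECT COUNT(*) FROM sample_table").show()

sc.stop()
```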
Alternatives and similar repositories for pyspark-s3-parquet-example:
Users interested in pyspark-s3-parquet-example are comparing it to the libraries listed below.
- Airflow code accompanying a blog post. ☆21 · Updated 6 years ago
- This service is meant to simplify running Google Cloud operations, especially BigQuery tasks. This means you do not have to worry about … ☆45 · Updated 6 years ago
- AWS Big Data Certification ☆25 · Updated 2 months ago
- ☆17 · Updated 6 years ago
- Real-time report dashboard with Apache Kafka, Apache Spark Streaming and Node.js ☆50 · Updated last year
- ☆15 · Updated 3 years ago
- Composable filesystem hooks and operators for Apache Airflow. ☆17 · Updated 3 years ago
- A simple introduction to using Spark ML pipelines ☆26 · Updated 7 years ago
- Personal finance project to automatically collect Swiss banking transactions into a DWH and visualise them ☆26 · Updated last year
- Airflow workflow management platform Chef cookbook. ☆71 · Updated 5 years ago
- Big Data Demystified meetup and blog examples ☆31 · Updated 7 months ago
- Scaffold of Apache Airflow executing Docker containers ☆85 · Updated 2 years ago
- ☆47 · Updated 3 years ago
- Model management example using Polyaxon, Argo and Seldon ☆23 · Updated 6 years ago
- 📆 Run, schedule, and manage your dbt jobs using Kubernetes. ☆24 · Updated 6 years ago
- Example stream processing job, written in Scala with Apache Beam, for Google Cloud Dataflow ☆30 · Updated 8 years ago
- A curated list of all the awesome examples, articles, tutorials and videos for Apache Airflow. ☆96 · Updated 4 years ago
- BigQuery patterns ☆13 · Updated 7 years ago
- ☆20 · Updated 3 years ago
- Using Luigi to create a machine learning pipeline with the Rossmann Sales data from Kaggle ☆33 · Updated 8 years ago
- A Scalable Data Cleaning Library for PySpark. ☆27 · Updated 6 years ago
- Ingest tweets with Kafka. Use Spark to track popular hashtags and trendsetters for each hashtag ☆29 · Updated 8 years ago
- An example PySpark project with pytest ☆17 · Updated 7 years ago
- Example setup for dbt Cloud using GitHub integrations ☆11 · Updated 5 years ago
- Basic tutorial on using Apache Airflow ☆36 · Updated 6 years ago
- Creating a Streaming Pipeline for user log data in Google Cloud Platform ☆22 · Updated 5 years ago
- Snowflake Guide: Building a Recommendation Engine Using Snowflake & Amazon SageMaker ☆31 · Updated 3 years ago
- ☆34 · Updated 3 months ago
- Example implementation running Airflow as separate services with docker-compose. ☆19 · Updated 6 years ago
- Docker container for Kafka - Spark Streaming - Cassandra ☆98 · Updated 5 years ago