redapt / pyspark-s3-parquet-example
This repo demonstrates how to load a sample Parquet-formatted file from an AWS S3 bucket. A Python job is then submitted to an Apache Spark cluster running on AWS EMR, which uses an SQLContext to create a temporary table from a DataFrame. SQL queries can then be run against the temporary table.
☆19 Updated 8 years ago
Alternatives and similar repositories for pyspark-s3-parquet-example:
Users interested in pyspark-s3-parquet-example are comparing it to the libraries listed below:
- Airflow code accompanying blog post. ☆21 Updated 6 years ago
- Basic tutorial of using Apache Airflow ☆36 Updated 6 years ago
- Composable filesystem hooks and operators for Apache Airflow. ☆17 Updated 3 years ago
- Real-time report dashboard with Apache Kafka, Apache Spark Streaming and Node.js ☆50 Updated last year
- AWS Big Data Certification ☆25 Updated 3 months ago
- Big Data Demystified meetup and blog examples ☆31 Updated 8 months ago
- A simple introduction to using Spark ML pipelines ☆26 Updated 7 years ago
- A self-paced workshop designed to allow you to get hands on with building a real-time data platform using serverless technologies such as… ☆22 Updated 6 years ago
- ☆17 Updated 6 years ago
- This repository has a collection of utilities for Glue Crawlers. These utilities come in the form of AWS CloudFormation templates or AWS … ☆19 Updated 3 years ago
- Airflow workflow management platform chef cookbook. ☆71 Updated 5 years ago
- Snowflake Guide: Building a Recommendation Engine Using Snowflake & Amazon SageMaker ☆31 Updated 3 years ago
- Blog post on ETL pipelines with Airflow ☆23 Updated 4 years ago
- Code examples for the Introduction to Kubeflow course ☆14 Updated 4 years ago
- ☆16 Updated 2 years ago
- Mastering Spark for Data Science, published by Packt ☆47 Updated 2 years ago
- A project for exploring how Great Expectations can be used to ensure data quality and validate batches within a data pipeline defined in … ☆21 Updated 2 years ago
- This service is meant to simplify running Google Cloud operations, especially BigQuery tasks. This means you do not have to worry about … ☆45 Updated 6 years ago
- Quickstart PySpark with Anaconda on AWS/EMR using Terraform ☆47 Updated 3 months ago
- ☆18 Updated 4 years ago
- Streaming ETL with Apache Flink and Amazon Kinesis Data Analytics ☆64 Updated last year
- ☆10 Updated 6 years ago
- Event-triggered plugins for Airflow ☆21 Updated 5 years ago
- Using the Parquet file format with Python ☆15 Updated last year
- Sentiment Analysis of a Twitter Topic with Spark Structured Streaming ☆55 Updated 6 years ago
- Helping you get Airflow running in production. ☆9 Updated 5 years ago
- A project template for developing BYOD Docker images for use in Amazon SageMaker. ☆19 Updated 5 years ago
- Public source code for the Batch Processing with Apache Beam (Python) online course ☆18 Updated 4 years ago
- AWS Lambda function to get events in a Kafka topic when files are uploaded to S3 ☆24 Updated 6 years ago
- Docker Compose files for various Kafka stacks ☆32 Updated 7 years ago