redapt / pyspark-s3-parquet-example
This repo demonstrates how to load a sample Parquet-formatted file from an AWS S3 bucket. A Python job is then submitted to an Apache Spark instance running on AWS EMR, which uses a SQLContext to create a temporary table from a DataFrame. SQL queries can then be run against the temporary table.
☆19, updated 9 years ago
Alternatives and similar repositories for pyspark-s3-parquet-example
Users interested in pyspark-s3-parquet-example are comparing it to the repositories listed below.
- Scaffold of Apache Airflow executing Docker containers (☆85, updated 3 years ago)
- Repo for all my code on the articles I post on Medium (☆106, updated 3 years ago)
- A simple introduction to using Spark ML pipelines (☆26, updated 7 years ago)
- Airflow code accompanying blog post (☆21, updated 6 years ago)
- PySpark phonetic and string matching algorithms (☆41, updated last year)
- Sentiment analysis of a Twitter topic with Spark Structured Streaming (☆55, updated 7 years ago)
- Composable filesystem hooks and operators for Apache Airflow (☆17, updated 4 years ago)
- Basic tutorial of using Apache Airflow (☆36, updated 7 years ago)
- AWS Big Data Certification (☆25, updated last year)
- Ingest tweets with Kafka; use Spark to track popular hashtags and trendsetters for each hashtag (☆29, updated 9 years ago)
- Quickstart PySpark with Anaconda on AWS/EMR using Terraform (☆47, updated last year)
- Udacity Data Pipeline Exercises (☆15, updated 5 years ago)
- Airflow workflow management platform Chef cookbook (☆70, updated 6 years ago)
- A toolset to streamline running Spark Python on EMR (☆20, updated 9 years ago)
- 🚨 Simple, self-contained fraud detection system built with Apache Kafka and Python (☆89, updated 6 years ago)
- A getting-started guide for developing and using Airflow plugins (☆93, updated 7 years ago)
- ☆17, updated this week
- A curated list of all the awesome examples, articles, tutorials and videos for Apache Airflow (☆96, updated 5 years ago)
- CLI tool to launch Spark jobs on AWS EMR (☆67, updated 2 years ago)
- Code to build a simple analytics data pipeline with Python (☆102, updated 8 years ago)
- Open innovation with 60-minute cloud experiments on AWS (☆87, updated last year)
- ☆16, updated 2 years ago
- ☆17, updated last year
- Code examples for the Introduction to Kubeflow course (☆14, updated 5 years ago)
- An extendable Docker image for Airbnb's Superset platform, previously known as Caravel (☆114, updated 3 years ago)
- Build and deploy a serverless data pipeline on AWS with no effort (☆111, updated 2 years ago)
- Real-time data processing pipeline and visualization with Docker, Spark, Kafka and Cassandra (☆85, updated 8 years ago)
- A PySpark job to handle upserts, conversion to Parquet, and creation of partitions on S3 (☆28, updated 5 years ago)
- This workshop demonstrates two methods of machine learning inference for global production using AWS Lambda and Amazon SageMaker (☆58, updated 5 years ago)
- An example PySpark project with pytest (☆17, updated 8 years ago)