redapt / pyspark-s3-parquet-example
This repo demonstrates how to load a sample Parquet-formatted file from an AWS S3 bucket. A Python job is then submitted to an Apache Spark instance running on AWS EMR, which uses a SQLContext to create a temporary table from a DataFrame. SQL queries can then be run against the temporary table.
☆19 · Updated 8 years ago
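A minimal sketch of that flow, assuming the SQLContext API of the Spark 1.x era the repo describes; the bucket path and table name are hypothetical placeholders, not taken from the repo:

```python
# Minimal PySpark sketch: read Parquet from S3, register a temp table, query it.
# The bucket path and table name below are hypothetical placeholders.
from pyspark import SparkContext
from pyspark.sql import SQLContext

sc = SparkContext(appName="pyspark-s3-parquet-example")
sqlContext = SQLContext(sc)

# Load the Parquet file from S3 into a DataFrame.
df = sqlContext.read.parquet("s3://your-bucket/path/to/sample.parquet")

# Register the DataFrame as a temporary table so it can be queried with SQL.
df.registerTempTable("sample_table")

# Run a SQL query against the temporary table.
sqlContext.sql("SELECT COUNT(*) FROM sample_table").show()

sc.stop()
```

On EMR, a job like this would typically be submitted as a step with `spark-submit your_job.py`.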
Alternatives and similar repositories for pyspark-s3-parquet-example:
Users interested in pyspark-s3-parquet-example are comparing it to the repositories listed below.
- Basic tutorial of using Apache Airflow ☆36 · Updated 6 years ago
- AWS Big Data Certification ☆25 · Updated last month
- Blog post on ETL pipelines with Airflow ☆23 · Updated 4 years ago
- Mastering Spark for Data Science, published by Packt ☆47 · Updated 2 years ago
- Big Data Demystified meetup and blog examples ☆31 · Updated 6 months ago
- Helping you get Airflow running in production. ☆9 · Updated 5 years ago
- Real-time report dashboard with Apache Kafka, Apache Spark Streaming and Node.js ☆50 · Updated last year
- Airflow workflow management platform chef cookbook. ☆71 · Updated 5 years ago
- Fully unit-tested utility functions for data engineering. Python 3 only. ☆15 · Updated 5 months ago
- This workshop demonstrates two methods of machine learning inference for global production using AWS Lambda and Amazon SageMaker ☆57 · Updated 4 years ago
- Business Data Analysis by HiPIC of CalStateLA ☆20 · Updated 6 years ago
- ☆17 · Updated 6 years ago
- Composable filesystem hooks and operators for Apache Airflow. ☆17 · Updated 3 years ago
- Airflow code accompanying blog post. ☆21 · Updated 5 years ago
- An example PySpark project with pytest ☆17 · Updated 7 years ago
- A self-paced workshop designed to let you get hands-on with building a real-time data platform using serverless technologies such as… ☆22 · Updated 6 years ago
- PyConDE & PyData Berlin 2019 Airflow Workshop: Airflow for machine learning pipelines. ☆46 · Updated last year
- How to do data science with Optimus, Spark and Python. ☆19 · Updated 5 years ago
- A simple introduction to using Spark ML pipelines ☆26 · Updated 6 years ago
- ☆34 · Updated 2 months ago
- Ingest tweets with Kafka; use Spark to track popular hashtags and trendsetters for each hashtag ☆29 · Updated 8 years ago
- Snowflake Guide: Building a Recommendation Engine Using Snowflake & Amazon SageMaker ☆31 · Updated 3 years ago
- Docker Compose files for various Kafka stacks ☆32 · Updated 6 years ago
- Udacity Data Pipeline Exercises ☆15 · Updated 4 years ago
- A Scalable Data Cleaning Library for PySpark. ☆26 · Updated 5 years ago
- A series of workshop modules introducing the Feast feature store. ☆19 · Updated 2 years ago
- Sentiment Analysis of a Twitter Topic with Spark Structured Streaming ☆55 · Updated 6 years ago
- Build an end-to-end Machine Learning pipeline to predict accessibility of playgrounds in NYC ☆15 · Updated 4 years ago
- 🚨 Simple, self-contained fraud detection system built with Apache Kafka and Python ☆84 · Updated 5 years ago