redapt / pyspark-s3-parquet-example
This repo demonstrates how to load a sample Parquet-formatted file from an AWS S3 bucket. A Python job is then submitted to an Apache Spark cluster running on AWS EMR; the job uses a SQLContext to register a DataFrame as a temporary table, after which SQL queries can be run against that table (see the sketch below).
☆19 · Updated 8 years ago
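As a rough sketch of that flow, the snippet below reads Parquet from S3 and queries it through a temporary table. It is illustrative only: the bucket path and table name are placeholders, not files from this repo, and it uses the modern SparkSession API rather than the Spark 1.x-era SQLContext the original code is built around.

```python
from pyspark.sql import SparkSession

# Assumes a Spark environment with S3 access (e.g. an AWS EMR cluster);
# the bucket path and table name are placeholders, not part of this repo.
spark = SparkSession.builder.appName("pyspark-s3-parquet-example").getOrCreate()

# Load the Parquet file from S3 into a DataFrame.
df = spark.read.parquet("s3://your-bucket/path/to/sample.parquet")

# Register the DataFrame as a temporary view (the modern equivalent of the
# SQLContext/registerTempTable pattern used in Spark 1.x-era code).
df.createOrReplaceTempView("sample_table")

# Run SQL against the temporary table.
spark.sql("SELECT COUNT(*) AS row_count FROM sample_table").show()

spark.stop()
```

On EMR such a job would typically be submitted with something like `spark-submit job.py`, with S3 access granted through the cluster's IAM role rather than embedded credentials.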
Alternatives and similar repositories for pyspark-s3-parquet-example:
Users interested in pyspark-s3-parquet-example are comparing it to the repositories listed below.
- Business Data Analysis by HiPIC of CalStateLA ☆20 · Updated 6 years ago
- Airflow workflow management platform Chef cookbook ☆71 · Updated 5 years ago
- Personal finance project to automatically collect Swiss banking transactions into a DWH and visualise them ☆26 · Updated last year
- AWS Big Data Certification ☆25 · Updated 2 months ago
- Basic tutorial on using Apache Airflow ☆36 · Updated 6 years ago
- ☆17 · Updated 6 years ago
- Real-time anomaly detection using Kafka, a KSQL user-defined function, and a pre-trained model ☆30 · Updated last year
- Code and setup information for Introduction to Machine Learning with Spark ☆12 · Updated 9 years ago
- Spark NLP for Streamlit ☆15 · Updated 3 years ago
- Docker image to submit Spark applications ☆38 · Updated 7 years ago
- A Python package to create a database on the platform using our MoJ data warehousing framework ☆21 · Updated 6 months ago
- Real-time report dashboard with Apache Kafka, Apache Spark Streaming and Node.js ☆50 · Updated last year
- Sentiment analysis of a Twitter topic with Spark Structured Streaming ☆55 · Updated 6 years ago
- A simple introduction to using Spark ML pipelines ☆26 · Updated 6 years ago
- Udacity Data Pipeline Exercises ☆15 · Updated 4 years ago
- This workshop demonstrates two methods of machine learning inference for global production using AWS Lambda and Amazon SageMaker ☆57 · Updated 4 years ago
- Source code for 'PySpark Recipes' by Raju Kumar Mishra ☆25 · Updated 5 years ago
- How to do data science with Optimus, Spark and Python ☆19 · Updated 5 years ago
- Mastering Spark for Data Science, published by Packt ☆47 · Updated 2 years ago
- Event-triggered plugins for Airflow ☆21 · Updated 5 years ago
- A Scalable Data Cleaning Library for PySpark ☆26 · Updated 5 years ago
- ☆15 · Updated 2 years ago
- A Singer.io Target for the Stitch Import API ☆26 · Updated last month
- Code accompanying the AWS blog post "Build a Semantic Search Engine for Tabular Columns with Transformers and Amazon OpenSearch Service" ☆17 · Updated last year
- Notebooks for nlp-on-spark ☆13 · Updated 8 years ago
- This repository has a collection of utilities for Glue Crawlers. These utilities come in the form of AWS CloudFormation templates or AWS … ☆19 · Updated 3 years ago
- Datasets for CS109 ☆28 · Updated 11 years ago
- Example stream processing job, written in Scala with Apache Beam, for Google Cloud Dataflow ☆30 · Updated 7 years ago
- Model management example using Polyaxon, Argo and Seldon ☆23 · Updated 6 years ago
- Deploy sentiment analysis using Flask ☆17 · Updated 5 years ago