PacktPublishing / Amazon-Redshift-Cookbook
Amazon Redshift Cookbook, Published by Packt
☆15 · Updated 2 years ago
Alternatives and similar repositories for Amazon-Redshift-Cookbook:
Users interested in Amazon-Redshift-Cookbook are comparing it to the repositories listed below.
- Developed an ETL pipeline for a Data Lake that extracts data from S3, processes the data using Spark, and loads the data back into S3 as … ☆16 · Updated 5 years ago
- GitHub repository related to the course Mastering Elastic Map Reduce for Data Engineers ☆24 · Updated 2 years ago
- Data lake, data warehouse on GCP ☆56 · Updated 3 years ago
- Udacity Data Streaming Nanodegree Program ☆22 · Updated 4 years ago
- Udacity Data Pipeline Exercises ☆15 · Updated 4 years ago
- A repo to track data engineering projects ☆13 · Updated 2 years ago
- Example of an ETL Pipeline using Airflow (a minimal DAG sketch follows this list) ☆34 · Updated 7 years ago
- Code snippets and tools published on the blog at lifearounddata.com ☆12 · Updated 5 years ago
- Simplify Big Data Analytics with Amazon EMR, published by Packt ☆13 · Updated 2 years ago
- Snowflake Cookbook, published by Packt ☆79 · Updated 2 years ago
- Batch processing, orchestration using Apache Airflow and Google Workflows, Spark Structured Streaming, and a lot more ☆19 · Updated 2 years ago
- (project & tutorial) DAG pipeline tests + CI/CD setup ☆87 · Updated 4 years ago
- Data Engineering pipeline hosted entirely in the AWS ecosystem utilizing DocumentDB as the database ☆13 · Updated 3 years ago
- Built a stream processing data pipeline to get data from disparate systems into a dashboard using Kafka as an intermediary. ☆29 · Updated last year
- Serverless ETL and Analytics with AWS Glue, published by Packt ☆48 · Updated last year
- ☆87 · Updated 2 years ago
- Big Data Demystified meetup and blog examples ☆31 · Updated 8 months ago
- My solutions for the Udacity Data Engineering Nanodegree ☆33 · Updated 5 years ago
- ☆17 · Updated 8 months ago
- A workspace to experiment with Apache Spark, Livy, and Airflow in a Docker environment. ☆38 · Updated 4 years ago
- Data modeling, building data warehouses and data lakes, automating data pipelines, and working with massive datasets. ☆13 · Updated 5 years ago
- ☆21 · Updated 3 years ago
- ☆84 · Updated 2 years ago
- A production-grade data pipeline has been designed to automate the parsing of user search patterns to analyze user engagement. Extract d… ☆24 · Updated 3 years ago
- Airflow Tutorials ☆24 · Updated 4 years ago
- Snowflake Guide: Building a Recommendation Engine Using Snowflake & Amazon SageMaker ☆31 · Updated 3 years ago
- AWS Big Data Certification ☆25 · Updated 3 months ago
- ☆64 · Updated this week
- A PySpark job to handle upserts, convert to Parquet, and create partitions on S3 (see the sketch after this list) ☆26 · Updated 4 years ago
- Resources for video demonstrations and blog posts related to DataOps on AWS ☆175 · Updated 3 years ago
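
For context on the Airflow-based ETL repositories above, here is a minimal sketch of what such a pipeline typically looks like. The DAG id, schedule, and task bodies are illustrative assumptions, not code taken from any of the listed repositories.

```python
# Hypothetical three-step ETL DAG; task bodies are placeholders, not real extract/load logic.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract(**context):
    print("pull raw data, e.g. from S3 or an API")


def transform(**context):
    print("clean and reshape the extracted data")


def load(**context):
    print("write the result to the target store, e.g. Redshift")


with DAG(
    dag_id="example_etl",            # illustrative name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    load_task = PythonOperator(task_id="load", python_callable=load)

    # Linear dependency chain: extract, then transform, then load.
    extract_task >> transform_task >> load_task
```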
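
Likewise, for the PySpark upsert/Parquet item, a hedged sketch of the general pattern: union incoming rows with the existing table, keep the latest version of each key, and rewrite the result as partitioned Parquet on S3. The bucket paths and the `id`/`updated_at` columns are assumptions for illustration, not details from that repository.

```python
# Illustrative upsert into partitioned Parquet on S3 with PySpark.
# Paths, the "id" key, and the "updated_at" column are assumptions, not taken from the repo.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window

spark = SparkSession.builder.appName("s3-parquet-upsert").getOrCreate()

existing = spark.read.parquet("s3a://example-bucket/warehouse/orders/")  # current table
incoming = spark.read.parquet("s3a://example-bucket/landing/orders/")    # new and changed rows

# Union old and new rows (same schema assumed), then keep only the latest version of each key.
combined = existing.unionByName(incoming)
latest_first = Window.partitionBy("id").orderBy(F.col("updated_at").desc())
deduped = (
    combined.withColumn("rn", F.row_number().over(latest_first))
    .filter(F.col("rn") == 1)
    .drop("rn")
)

# Write the merged result as date-partitioned Parquet. Writing to a separate prefix
# avoids overwriting a path that is still being read lazily; swapping prefixes
# (or using a table format such as Hudi/Delta/Iceberg) would handle true in-place upserts.
(
    deduped.withColumn("ingest_date", F.to_date("updated_at"))
    .write.mode("overwrite")
    .partitionBy("ingest_date")
    .parquet("s3a://example-bucket/warehouse/orders_merged/")
)
```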