Scout24 / emr-autoscaling
☆16 · Updated 7 months ago
Alternatives and similar repositories for emr-autoscaling
Users interested in emr-autoscaling are comparing it to the libraries listed below.
- ☆53 · Updated last year
- Hadoop output committers for S3 ☆109 · Updated 4 years ago
- Circus Train is a dataset replication tool that copies Hive tables between clusters and clouds. ☆88 · Updated last year
- File compaction tool that runs on top of the Spark framework. ☆59 · Updated 6 years ago
- kinesis-kafka-connector is a connector based on Kafka Connect to publish messages to Amazon Kinesis streams or Amazon Kinesis Firehose. ☆155 · Updated last year
- Plug-and-play implementation of an Apache Spark custom data source for AWS DynamoDB. ☆176 · Updated 4 years ago
- DynamoDB data source for Apache Spark ☆95 · Updated 3 years ago
- Enables synchronizing metadata changes (Create/Drop table/partition) from Hive Metastore to AWS Glue Data Catalog ☆35 · Updated last year
- Export Redshift data and convert to Parquet for use with Redshift Spectrum or other data warehouses. ☆117 · Updated 2 years ago
- Herd is a managed data lake for the cloud. The Herd unified data catalog helps separate storage from compute in the cloud. Manage petabyt… ☆135 · Updated 2 years ago
- PySpark for ETL jobs including lineage to Apache Atlas in one script via code inspection ☆18 · Updated 8 years ago
- Filling in the Spark function gaps across APIs ☆50 · Updated 4 years ago
- AWS bootstrap scripts for Mozilla's flavoured Spark setup. ☆47 · Updated 5 years ago
- Automated data quality suggestions and analysis with Deequ on AWS Glue ☆85 · Updated 2 years ago
- Kinesis Connector for Structured Streaming ☆136 · Updated 11 months ago
- Utilities for Apache Spark ☆34 · Updated 9 years ago
- ☆26 · Updated 9 years ago
- Autoscaling EMR clusters and Kinesis streams on Amazon Web Services (AWS) ☆47 · Updated last year
- Scripts for parsing / making sense of yarn logs ☆52 · Updated 8 years ago
- This code demonstrates the architecture featured on the AWS Big Data blog (https://aws.amazon.com/blogs/big-data/) which creates a concu… ☆75 · Updated 6 years ago
- A custom AWS credential provider that allows your Hadoop or Spark application to access the S3 file system by assuming a role ☆10 · Updated 7 years ago
- The iterative broadcast join example code. ☆69 · Updated 7 years ago
- Implementations of open source Apache Hadoop/Hive interfaces which allow for ingesting data from Amazon DynamoDB ☆226 · Updated last month
- Cloudformation templates for deploying Airflow in ECS ☆40 · Updated 6 years ago
- Python API for Deequ ☆41 · Updated 4 years ago
- Waimak is an open-source framework that makes it easier to create complex data flows in Apache Spark. ☆75 · Updated last year
- Turbine: the bare metals that gets you Airflow ☆378 · Updated 3 years ago
- Performant Redshift data source for Apache Spark ☆140 · Updated last month
- A client for the Confluent Schema Registry API implemented in Python ☆53 · Updated 2 years ago
- Amazon Elastic MapReduce code samples ☆63 · Updated 9 years ago
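
For context on what the autoscaling entries above generally do: they watch a cluster metric and resize an EMR task instance group in response. The sketch below is illustrative only and is not code from emr-autoscaling or any repository listed here; it assumes boto3 with valid AWS credentials, and the cluster ID, instance group ID, metric choice, bounds, and thresholds are all placeholders.

```python
# Illustrative sketch only (not code from any repository on this page).
# Shows the general pattern behind EMR autoscaling tools: read a cluster
# metric from CloudWatch, then resize a TASK instance group via the EMR API.
# CLUSTER_ID, TASK_GROUP_ID, bounds, and thresholds are placeholders.
from datetime import datetime, timedelta

import boto3

CLUSTER_ID = "j-XXXXXXXXXXXXX"       # placeholder EMR cluster ID
TASK_GROUP_ID = "ig-XXXXXXXXXXXXX"   # placeholder TASK instance group ID
MIN_NODES, MAX_NODES = 2, 20

emr = boto3.client("emr")
cloudwatch = boto3.client("cloudwatch")


def yarn_memory_available_pct() -> float:
    """Return the latest YARNMemoryAvailablePercentage datapoint for the cluster."""
    stats = cloudwatch.get_metric_statistics(
        Namespace="AWS/ElasticMapReduce",
        MetricName="YARNMemoryAvailablePercentage",
        Dimensions=[{"Name": "JobFlowId", "Value": CLUSTER_ID}],
        StartTime=datetime.utcnow() - timedelta(minutes=15),
        EndTime=datetime.utcnow(),
        Period=300,
        Statistics=["Average"],
    )
    points = stats["Datapoints"]
    if not points:
        return 100.0  # no data: treat the cluster as idle
    return max(points, key=lambda p: p["Timestamp"])["Average"]


def resize_task_group(delta: int) -> None:
    """Grow or shrink the task instance group by `delta`, clamped to the bounds."""
    groups = emr.list_instance_groups(ClusterId=CLUSTER_ID)["InstanceGroups"]
    group = next(g for g in groups if g["Id"] == TASK_GROUP_ID)
    target = max(MIN_NODES, min(MAX_NODES, group["RunningInstanceCount"] + delta))
    emr.modify_instance_groups(
        ClusterId=CLUSTER_ID,
        InstanceGroups=[{"InstanceGroupId": TASK_GROUP_ID, "InstanceCount": target}],
    )


if __name__ == "__main__":
    available = yarn_memory_available_pct()
    if available < 15:       # memory pressure: scale out
        resize_task_group(+2)
    elif available > 75:     # mostly idle: scale in
        resize_task_group(-2)
```

Production tools in this space typically add cooldown periods, spot/on-demand handling, and schedule awareness on top of a loop like this.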