zillow / aws-custom-credential-provider
A custom AWS credential provider that allows your Hadoop or Spark application to access the S3 file system by assuming an IAM role
☆10 · Updated 3 weeks ago
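For context on how such a provider is typically wired in: Hadoop's S3A connector instantiates whatever class is named in fs.s3a.aws.credentials.provider to obtain AWS credentials. The sketch below shows that wiring from a Spark job; the provider class name, role ARN, bucket path, and the use of fs.s3a.assumed.role.arn by a custom provider are illustrative assumptions, not this repository's documented configuration.

```scala
import org.apache.spark.sql.SparkSession

// Minimal sketch (not this repo's documented API): plugging a role-assuming
// credential provider into Spark's S3A connector. The class name and role
// ARN below are hypothetical placeholders.
val spark = SparkSession.builder()
  .appName("s3a-assume-role-example")
  // S3A instantiates the listed class to obtain AWS credentials.
  .config("spark.hadoop.fs.s3a.aws.credentials.provider",
    "com.example.AssumeRoleCredentialProvider") // hypothetical class
  // The role to assume; this key is used by Hadoop's built-in assumed-role
  // provider, and a custom provider may define its own key instead.
  .config("spark.hadoop.fs.s3a.assumed.role.arn",
    "arn:aws:iam::123456789012:role/example-data-access-role")
  .getOrCreate()

// Reads through s3a:// now use credentials from the assumed role.
val df = spark.read.parquet("s3a://example-bucket/path/to/data/")
df.show(5)
```

For the simple assume-role case, newer hadoop-aws releases also bundle org.apache.hadoop.fs.s3a.auth.AssumedRoleCredentialProvider, which avoids the need for a custom provider class.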
Alternatives and similar repositories for aws-custom-credential-provider
Users interested in aws-custom-credential-provider are comparing it to the libraries listed below.
- File compaction tool that runs on top of the Spark framework. ☆59 · Updated 6 years ago
- Circus Train is a dataset replication tool that copies Hive tables between clusters and clouds. ☆91 · Updated last year
- Bulletproof Apache Spark jobs with fast root cause analysis of failures. ☆73 · Updated 4 years ago
- Waimak is an open-source framework that makes it easier to create complex data flows in Apache Spark. ☆76 · Updated last year
- Scripts for parsing / making sense of YARN logs ☆52 · Updated 9 years ago
- JSON schema parser for Apache Spark ☆82 · Updated 3 years ago
- Sample processing code using Spark 2.1+ and Scala ☆51 · Updated 5 years ago
- Spark Structured Streaming State Tools ☆34 · Updated 5 years ago
- Plug-and-play implementation of an Apache Spark custom data source for AWS DynamoDB. ☆175 · Updated 4 years ago
- Kinesis Connector for Structured Streaming ☆138 · Updated last year
- Utilities for Apache Spark ☆34 · Updated 9 years ago
- Big Data Toolkit for the JVM ☆146 · Updated 5 years ago
- DynamoDB data source for Apache Spark ☆95 · Updated 4 years ago
- Apiary provides modules which can be combined to create a federated cloud data lake ☆37 · Updated last year
- Hadoop output committers for S3 ☆113 · Updated 5 years ago
- Schema Registry integration for Apache Spark ☆40 · Updated 3 years ago
- A Spark-based data comparison tool at scale that helps software development engineers compare a plethora of pair combinations o… ☆52 · Updated 7 months ago
- Type-class-based data cleansing library for Apache Spark SQL ☆78 · Updated 6 years ago
- The Internals of Delta Lake ☆187 · Updated 2 months ago
- Spark stream from Kafka (JSON) to S3 (Parquet) ☆15 · Updated 7 years ago
- A library you can include in your Spark job to validate the counters and perform operations on success. Goal is Scala/Java/Python support… ☆108 · Updated 8 years ago
- A Spark WordCountJob example as a standalone SBT project with Specs2 tests, runnable on Amazon EMR ☆120 · Updated 9 years ago
- A framework for creating composable and pluggable data processing pipelines using Apache Spark, and running them on a cluster. ☆47 · Updated 9 years ago
- Herd is a managed data lake for the cloud. The Herd unified data catalog helps separate storage from compute in the cloud. Manage petabyt… ☆138 · Updated 3 years ago
- How to use Parquet in Flink ☆32 · Updated 8 years ago
- Apache Spark on AWS Lambda ☆157 · Updated 3 years ago
- kafka-connect-s3: Ingest data from Kafka to object stores (S3) ☆95 · Updated 6 years ago
- The iterative broadcast join example code. ☆70 · Updated 8 years ago
- Build configuration-driven ETL pipelines on Apache Spark ☆161 · Updated 3 years ago
- Scala + Druid: Scruid. A library that allows you to compose queries in Scala, and parse the result back into typesafe classes. ☆117 · Updated 4 years ago