zillow / aws-custom-credential-provider
A custom AWS credential provider that allows your Hadoop or Spark application to access the S3 file system by assuming an IAM role
☆10 · Updated 2 weeks ago
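The provider plugs into Hadoop's S3A credential-provider chain, which is configured through the `fs.s3a.aws.credentials.provider` property. A minimal `core-site.xml` sketch of how such a provider is typically wired in; the custom class name and role ARN below are placeholders, not taken from this repository, and Hadoop's built-in `org.apache.hadoop.fs.s3a.auth.AssumedRoleCredentialProvider` is mentioned only as the stock alternative:

```xml
<!-- Sketch: wiring a custom S3A credential provider.
     The class name and ARN here are hypothetical examples. -->
<configuration>
  <property>
    <name>fs.s3a.aws.credentials.provider</name>
    <!-- Fully qualified class name of the custom provider
         (or org.apache.hadoop.fs.s3a.auth.AssumedRoleCredentialProvider
         for Hadoop's built-in assume-role support) -->
    <value>com.example.CustomAssumeRoleCredentialProvider</value>
  </property>
  <property>
    <!-- Read by Hadoop's stock AssumedRoleCredentialProvider; a custom
         provider may define its own configuration keys instead -->
    <name>fs.s3a.assumed.role.arn</name>
    <value>arn:aws:iam::123456789012:role/example-s3-access-role</value>
  </property>
</configuration>
```

In Spark, the same keys can be passed without editing `core-site.xml` by prefixing them with `spark.hadoop.`, e.g. `--conf spark.hadoop.fs.s3a.aws.credentials.provider=...` on `spark-submit`.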
Alternatives and similar repositories for aws-custom-credential-provider
Users interested in aws-custom-credential-provider are comparing it to the libraries listed below.
- File compaction tool that runs on top of the Spark framework. ☆59 · Updated 6 years ago
- JSON schema parser for Apache Spark. ☆82 · Updated 3 years ago
- Sample processing code using Spark 2.1+ and Scala. ☆51 · Updated 5 years ago
- Circus Train is a dataset replication tool that copies Hive tables between clusters and clouds. ☆91 · Updated last year
- Hadoop output committers for S3. ☆111 · Updated 5 years ago
- Bulletproof Apache Spark jobs with fast root-cause analysis of failures. ☆73 · Updated 4 years ago
- Scripts for parsing and making sense of YARN logs. ☆52 · Updated 9 years ago
- Reference architecture for real-time stream processing with Apache Flink on Amazon EMR, Amazon Kinesis, and Amazon Elasticsearch Service. ☆70 · Updated last year
- Kinesis connector for Structured Streaming. ☆137 · Updated last year
- Spark connector for SFTP. ☆98 · Updated 2 years ago
- Schema Registry integration for Apache Spark. ☆40 · Updated 3 years ago
- The AWS Glue Data Catalog is a fully managed, Apache Hive Metastore-compatible metadata repository. Customers can use the Data Catalog a… ☆226 · Updated 9 months ago
- Spark Structured Streaming state tools. ☆34 · Updated 5 years ago
- The iterative broadcast join example code. ☆70 · Updated 8 years ago
- Type-class-based data cleansing library for Apache Spark SQL. ☆78 · Updated 6 years ago
- A library you can include in your Spark job to validate counters and perform operations on success. Goal is Scala/Java/Python support… ☆108 · Updated 7 years ago
- Plug-and-play implementation of an Apache Spark custom data source for AWS DynamoDB. ☆175 · Updated 4 years ago
- Waimak is an open-source framework that makes it easier to create complex data flows in Apache Spark. ☆76 · Updated last year
- Apache Spark and Apache Kafka integration example. ☆124 · Updated 8 years ago
- A library for strong, schema-based conversion between 'natural' JSON documents and Avro. ☆18 · Updated last year
- Build configuration-driven ETL pipelines on Apache Spark. ☆162 · Updated 3 years ago
- Spark cloud integration: tests, cloud committers, and more. ☆20 · Updated 10 months ago
- ☆81 · Updated 2 years ago
- Developing Spark External Data Sources using the V2 API. ☆48 · Updated 7 years ago
- The Internals of Delta Lake. ☆187 · Updated 3 weeks ago
- DynamoDB data source for Apache Spark. ☆95 · Updated 4 years ago
- Example projects for using Spark and Cassandra with DSE Analytics. ☆58 · Updated 2 months ago
- An example Apache Beam project. ☆111 · Updated 8 years ago
- Spark streaming from Kafka (JSON) to S3 (Parquet). ☆15 · Updated 7 years ago
- ACID data source for Apache Spark based on Hive ACID. ☆97 · Updated 4 years ago