zillow / aws-custom-credential-provider
A custom AWS credential provider that allows your Hadoop or Spark application to access the S3 file system by assuming a role.
☆10 · Updated 8 years ago
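The listing itself carries no usage snippet, so the sketch below shows how a role-assuming credential provider is typically wired into a Spark job through the S3A connector. The class referenced here is Hadoop's built-in org.apache.hadoop.fs.s3a.auth.AssumedRoleCredentialProvider (Hadoop 3.1+), used purely as a stand-in; the provider from this repository would be plugged in through the same fs.s3a.aws.credentials.provider setting, and the role ARN and bucket path are placeholders.

```scala
import org.apache.spark.sql.SparkSession

// Minimal sketch: configure the S3A connector to assume an IAM role.
// AssumedRoleCredentialProvider is Hadoop's built-in implementation; a custom
// provider such as the one in this repository would be referenced the same
// way, with whatever extra configuration keys it defines.
object AssumeRoleExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("s3a-assume-role-example")
      // Plug the credential provider into the S3A filesystem.
      .config("spark.hadoop.fs.s3a.aws.credentials.provider",
              "org.apache.hadoop.fs.s3a.auth.AssumedRoleCredentialProvider")
      // The IAM role to assume; this ARN is a placeholder.
      .config("spark.hadoop.fs.s3a.assumed.role.arn",
              "arn:aws:iam::123456789012:role/example-data-access-role")
      .getOrCreate()

    // Any S3A read now runs with the assumed role's permissions.
    spark.read.parquet("s3a://example-bucket/path/to/data").show(10)

    spark.stop()
  }
}
```

The fs.s3a.assumed.role.* keys belong to the Hadoop S3A connector's assumed-role support; a custom provider like this one is a common route when that built-in support is unavailable in the deployed Hadoop version or when additional behaviour is required.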
Alternatives and similar repositories for aws-custom-credential-provider
Users interested in aws-custom-credential-provider are comparing it to the libraries listed below.
- Circus Train is a dataset replication tool that copies Hive tables between clusters and clouds. ☆90 · Updated last year
- Spark Structured Streaming State Tools ☆34 · Updated 5 years ago
- File compaction tool that runs on top of the Spark framework. ☆59 · Updated 6 years ago
- Spark cloud integration: tests, cloud committers and more ☆20 · Updated 7 months ago
- JSON schema parser for Apache Spark ☆81 · Updated 3 years ago
- Bulletproof Apache Spark jobs with fast root cause analysis of failures. ☆73 · Updated 4 years ago
- Kinesis Connector for Structured Streaming ☆137 · Updated last year
- Hadoop output committers for S3 ☆111 · Updated 5 years ago
- Spark streaming from Kafka (JSON) to S3 (Parquet) ☆15 · Updated 6 years ago
- SQL for Kafka Connectors ☆99 · Updated last year
- ☆81 · Updated last year
- Waimak is an open-source framework that makes it easier to create complex data flows in Apache Spark. ☆76 · Updated last year
- The Internals of Delta Lake ☆186 · Updated 8 months ago
- Scripts for parsing and making sense of YARN logs ☆52 · Updated 9 years ago
- A library to expose more of Apache Spark's metrics system ☆146 · Updated 5 years ago
- The AWS Glue Data Catalog is a fully managed, Apache Hive Metastore-compatible metadata repository. Customers can use the Data Catalog a… ☆225 · Updated 5 months ago
- A library for strong, schema-based conversion between 'natural' JSON documents and Avro ☆18 · Updated last year
- Apiary provides modules which can be combined to create a federated cloud data lake ☆36 · Updated last year
- Build configuration-driven ETL pipelines on Apache Spark ☆161 · Updated 2 years ago
- How to use Parquet in Flink ☆32 · Updated 8 years ago
- Reference architecture for real-time stream processing with Apache Flink on Amazon EMR, Amazon Kinesis, and Amazon Elasticsearch Service. ☆72 · Updated last year
- DynamoDB data source for Apache Spark ☆95 · Updated 4 years ago
- ACID Data Source for Apache Spark based on Hive ACID ☆97 · Updated 4 years ago
- Spark-Radiant is an Apache Spark performance and cost optimizer ☆25 · Updated 8 months ago
- Autoscaling EMR clusters and Kinesis streams on Amazon Web Services (AWS) ☆47 · Updated last year
- Schema Registry integration for Apache Spark ☆40 · Updated 2 years ago
- Collection of open-source Spark tools & frameworks that have made the data engineering and data science teams at Swoop highly productive ☆184 · Updated 2 years ago
- Profiler for large-scale distributed Java applications (Spark, Scalding, MapReduce, Hive, ...) on YARN. ☆128 · Updated 7 years ago
- ☆50 · Updated 4 years ago
- Examples on how to use the command line tools in Avro Tools to read and write Avro files ☆154 · Updated last year