jpmorganchase / inference-server
Deploy your AI/ML model to Amazon SageMaker for Real-Time Inference and Batch Transform using your own Docker container image.
☆51 · Updated 3 months ago
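As a rough sketch of the kind of bring-your-own-container workflow the repository targets, the snippet below uses the generic SageMaker Python SDK rather than inference-server's own API; the ECR image URI, S3 paths, IAM role ARN, and endpoint name are placeholder assumptions, and the request payload format depends entirely on your container's contract.

```python
# Sketch only: deploy a custom-container model for real-time inference and
# run a Batch Transform job using the generic SageMaker Python SDK.
# All URIs, ARNs, and names below are placeholders, not values from inference-server.
import sagemaker
from sagemaker.model import Model
from sagemaker.predictor import Predictor

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # placeholder role ARN

model = Model(
    image_uri="123456789012.dkr.ecr.us-east-1.amazonaws.com/my-inference-image:latest",  # your own container
    model_data="s3://my-bucket/model/model.tar.gz",  # packaged model artifacts
    role=role,
    sagemaker_session=session,
    predictor_cls=Predictor,  # so deploy() returns a Predictor we can call
)

# Real-time inference: provision a persistent HTTPS endpoint.
predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.m5.xlarge",
    endpoint_name="my-realtime-endpoint",
)
# Payload format is whatever your container's /invocations handler expects.
print(predictor.predict(b'{"inputs": [1, 2, 3]}'))

# Batch Transform: score a dataset in S3 offline, no persistent endpoint needed.
transformer = model.transformer(
    instance_count=1,
    instance_type="ml.m5.xlarge",
    output_path="s3://my-bucket/batch-output/",
)
transformer.transform(
    data="s3://my-bucket/batch-input/",
    content_type="application/json",
)
transformer.wait()
```

The two deployment modes share the same model definition: real-time endpoints suit low-latency request/response traffic, while Batch Transform processes whole S3 datasets and tears down compute when the job finishes.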
Alternatives and similar repositories for inference-server
Users interested in inference-server are comparing it to the libraries listed below
- Build a Streamlit app with LangChain and Amazon Bedrock - Use ElastiCache Serverless Redis for chat history, deploy to EKS and manage per… ☆14 · Updated last year
- The repository guides you through generating a synthetic dataset for a QA-RAG application using the Bedrock API, Python and Langchain. ☆19 · Updated 10 months ago
- Generative AI with Amazon Bedrock and SageMaker ☆31 · Updated 5 months ago
- This guidance helps game developers automate the process of creating a non-player character (NPC) for their games, and associated infrast… ☆5 · Updated 4 months ago
- ☆12 · Updated 9 months ago
- Operational Data Processing Framework developed using AWS Glue and Apache Hudi. This framework is suitable for Data Lake and Modern Data … ☆23 · Updated last year
- ☆12 · Updated last year
- Explore and experiment with large language models (LLMs) available in Amazon Bedrock ☆18 · Updated 10 months ago
- This Guidance demonstrates how to configure a proxy in a virtual private cloud (VPC) to connect external services to your Amazon VPC Latt… ☆14 · Updated 8 months ago
- Describes the concepts of lambda architecture and the actual deployment process with an example of building a serverless business intelli… ☆15 · Updated last month
- CCCS security control profiles expressed using OSCAL ☆14 · Updated last month
- This Guidance demonstrates how enterprises can unlock the value of their data through the powerful generative AI capabilities of Amazon Q… ☆14 · Updated 3 weeks ago
- Run WebAssembly workloads on Amazon EKS ☆17 · Updated 9 months ago
- How to build a simplified Corrective RAG assistant with Amazon Bedrock using LLMs, Embeddings model, Knowledge Bases for Amazon Bedrock, … ☆14 · Updated last year
- ☆17 · Updated this week
- Question Answering Generative AI application with Large Language Models (LLMs) and Amazon OpenSearch Service ☆26 · Updated 7 months ago
- ☆12 · Updated 4 months ago
- ☆45 · Updated 3 weeks ago
- ☆13 · Updated 7 months ago
- This solution helps you deploy ETL processes and data storage resources to create an Insurance Lake using Amazon S3 buckets for storage, … ☆27 · Updated last week
- This guidance focuses on the part of payments processing systems that post payments to receiving accounts. In this phase, inbound transac… ☆15 · Updated last month
- Question Answering application with Large Language Models (LLMs) and Amazon PostgreSQL using pgvector ☆16 · Updated 7 months ago
- ☆11 · Updated this week
- Building Product Descriptions with AWS Bedrock and Rekognition ☆10 · Updated 8 months ago
- A PoC application developed using Amazon Transcribe and Amazon Bedrock to capture knowledge. ☆10 · Updated 5 months ago
- Chaos Engineering Framework across Private / Public / Hybrid Cloud Environments ☆13 · Updated last year
- ☆21 · Updated 7 months ago
- 'Talk to your slide deck' (Multimodal RAG) using foundation models (FMs) hosted on Amazon Bedrock and Amazon SageMaker ☆42 · Updated 5 months ago
- ☆8 · Updated 11 months ago
- This Guidance provides a set of artifacts that will guide customers in building a production monitoring architecture with AWS IoT TwinMak… ☆12 · Updated 8 months ago