aws-samples / zero-administration-inference-with-aws-lambda-for-hugging-face
Zero administration inference with AWS Lambda for 🤗
☆63 · Updated 3 years ago
Alternatives and similar repositories for zero-administration-inference-with-aws-lambda-for-hugging-face
Users interested in zero-administration-inference-with-aws-lambda-for-hugging-face are comparing it to the libraries listed below
- ☆267 · Updated 5 months ago
- Deploy llama.cpp compatible Generative AI LLMs on AWS Lambda! ☆173 · Updated last year
- ☆129 · Updated last week
- SageMaker custom deployments made easy ☆62 · Updated 6 months ago
- ☆44 · Updated 2 months ago
- You're one command away from deploying your Streamlit app on AWS Fargate! ☆47 · Updated 4 years ago
- Over 60 example task UIs for Amazon Augmented AI (A2I) ☆99 · Updated 4 years ago
- Post-process Amazon Textract results with Hugging Face transformer models for document understanding ☆99 · Updated 9 months ago
- Use LLMs for building real-world apps ☆112 · Updated 8 months ago
- Parse JSON response of Amazon Textract ☆232 · Updated 10 months ago
- ☆96 · Updated 4 years ago
- ☆145 · Updated 2 years ago
- Serve scikit-learn, XGBoost, TensorFlow, and PyTorch models with AWS Lambda container images support. ☆100 · Updated last year
- ☆39 · Updated last year
- A helper library to connect into Amazon SageMaker with AWS Systems Manager and SSH (Secure Shell) ☆254 · Updated 3 months ago
- Amazon SageMaker Local Mode Examples ☆260 · Updated 5 months ago
- ☆62 · Updated 5 months ago
- This sample shows you how to train BERT on Amazon SageMaker using Spot instances ☆31 · Updated last year
- CLI for building Docker images in SageMaker Studio using AWS CodeBuild. ☆56 · Updated 3 years ago
- A Python framework for multi-modal document understanding with Amazon Bedrock ☆94 · Updated last month
- This repo will teach you how to deploy an ML-powered web app to AWS Fargate from start to finish using Streamlit and AWS CDK ☆108 · Updated 4 years ago
- ☆50 · Updated last year
- ☆69 · Updated 3 weeks ago
- Toolkit for allowing inference and serving with PyTorch on SageMaker. Dockerfiles used for building SageMaker Pytorch Containers are at h… ☆140 · Updated last year
- Analyze documents with Amazon Textract and generate output in multiple formats. ☆464 · Updated 5 months ago
- Context is Key: Combining Embedding-based Retrieval with LLMs for Comprehensive Knowledge Enrichment ☆31 · Updated 2 years ago
- This is a sample solution for bringing your own ML models and inference code, and running them at scale using AWS serverless services. ☆38 · Updated last year
- This sample demonstrates how to set up an Amazon SageMaker MLOps end-to-end pipeline for drift detection ☆62 · Updated last year
- Build Generative AI applications with Langchain on AWS ☆182 · Updated 2 years ago
- ☆57 · Updated 3 years ago