aws-samples / zero-administration-inference-with-aws-lambda-for-hugging-face
Zero administration inference with AWS Lambda for 🤗
⭐ 63 · Updated 2 years ago
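The repository's premise is serverless Hugging Face inference: package a transformers model in a Lambda container image and invoke it on demand. As a rough sketch of that pattern only (the task, cache path, and event shape below are illustrative assumptions, not the repository's actual code):

```python
# Illustrative sketch only (not the repository's code): a minimal Lambda
# handler that serves a Hugging Face pipeline from a container image.
import os

# /tmp is the only writable filesystem in Lambda; cache model files there.
os.environ.setdefault("HF_HOME", "/tmp/hf_cache")

from transformers import pipeline  # imported after setting the cache env var

# Load the model once per container so warm invocations skip the download.
classifier = pipeline("sentiment-analysis")

def handler(event, context):
    """Lambda entry point: classify the text passed in the event payload."""
    text = event.get("text", "")
    if not text:
        return {"error": "no 'text' field in event"}
    return classifier(text)[0]
```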
Alternatives and similar repositories for zero-administration-inference-with-aws-lambda-for-hugging-face:
Users interested in zero-administration-inference-with-aws-lambda-for-hugging-face are comparing it to the libraries listed below.
- Deploy llama.cpp compatible Generative AI LLMs on AWS Lambda! ⭐ 173 · Updated 9 months ago
- Post-process Amazon Textract results with Hugging Face transformer models for document understanding ⭐ 96 · Updated last month
- SageMaker custom deployments made easy ⭐ 58 · Updated last year
- Serve scikit-learn, XGBoost, TensorFlow, and PyTorch models with AWS Lambda container image support. ⭐ 97 · Updated 6 months ago
- You're one command away from deploying your Streamlit app on AWS Fargate! ⭐ 47 · Updated 3 years ago
- Use LLMs for building real-world apps ⭐ 111 · Updated 2 weeks ago
- Context is Key: Combining Embedding-based Retrieval with LLMs for Comprehensive Knowledge Enrichment ⭐ 32 · Updated last year
- AWS Generative AI Conversational RAG Reference (Galileo) ⭐ 73 · Updated 3 weeks ago
- This repository is part of a blog post that guides users through creating an NLU search application using Amazon SageMaker and Amazon Elas… ⭐ 17 · Updated last year
- This sample shows you how to train BERT on Amazon SageMaker using Spot Instances ⭐ 31 · Updated last year
- Easy, fast and very cheap training and inference on AWS Trainium and Inferentia chips. ⭐ 216 · Updated this week
- A helper library to connect into Amazon SageMaker with AWS Systems Manager and SSH (Secure Shell) ⭐ 226 · Updated 2 months ago
- CLI for building Docker images in SageMaker Studio using AWS CodeBuild. ⭐ 56 · Updated 2 years ago
- A generative AI-powered framework for testing virtual agents. ⭐ 160 · Updated last month
- Deploying Llama 2 as an AWS Lambda function for scalable serverless inference ⭐ 22 · Updated last year
- Toolkit for allowing inference and serving with PyTorch on SageMaker. Dockerfiles used for building SageMaker PyTorch Containers are at h… ⭐ 136 · Updated 3 months ago
- This Guidance provides best practices for building and deploying an intelligent document processing (IDP) architecture that scales with w… ⭐ 42 · Updated 3 months ago