philschmid / serverless-bert-huggingface-aws-lambda-docker
☆44 · Updated last year
Alternatives and similar repositories for serverless-bert-huggingface-aws-lambda-docker:
Users interested in serverless-bert-huggingface-aws-lambda-docker are comparing it to the libraries listed below.
- ☆11 · Updated 4 years ago
- Deploy transformers serverless on AWS Lambda ☆122 · Updated 3 years ago
- Alternate Implementation for Zero Shot Text Classification: Instead of reframing NLI/XNLI, this reframes the text backbone of CLIP models… ☆37 · Updated 3 years ago
- ☆28 · Updated 4 years ago
- ☆57 · Updated 2 years ago
- ☆64 · Updated last year
- ☆25 · Updated 3 years ago
- No Teacher BART distillation experiment for NLI tasks ☆26 · Updated 4 years ago
- ☆17 · Updated 4 years ago
- ☆13 · Updated 3 years ago
- [WIP] Behold, semantic-search, built over sentence-transformers to make it easy for search engineers to evaluate, optimise and deploy mod… ☆15 · Updated 2 years ago
- Examples showing use of NGC containers and models within Amazon SageMaker ☆17 · Updated 2 years ago
- Generating boolean (yes/no) questions from any content using the T5 text-to-text transformer model and the BoolQ dataset ☆35 · Updated last year
- KitanaQA: Adversarial training and data augmentation for neural question-answering models ☆57 · Updated last year
- Zero administration inference with AWS Lambda for 🤗 ☆61 · Updated 3 years ago
- TorchServe + Streamlit for easily serving your HuggingFace NER models ☆33 · Updated 2 years ago
- SageMaker custom deployments made easy ☆61 · Updated 3 weeks ago
- An easy-to-use Python module that helps you extract BERT embeddings for a large text dataset (Bengali/English) efficiently ☆36 · Updated last year
- Huggingface inference with GPU Docker on AWS ☆41 · Updated 3 years ago
- This sample shows you how to train BERT on Amazon SageMaker using Spot Instances ☆31 · Updated last year
- NLP examples using the 🤗 libraries ☆41 · Updated 4 years ago
- Fast model deployment on AWS Lambda ☆14 · Updated last year
- ☆15 · Updated 4 years ago
- A BentoML-powered API to transcribe audio and make sense of it ☆39 · Updated 2 years ago
- ☆9 · Updated 4 years ago
- Deploy llama.cpp-compatible generative AI LLMs on AWS Lambda! ☆173 · Updated last year
- A sample solution for bringing your own ML models and inference code and running them at scale using AWS serverless services ☆38 · Updated last year
- Deploy a fastai-trained PyTorch model in TorchServe and host it in an Amazon SageMaker inference endpoint ☆73 · Updated 3 years ago
- Streamlit demo app demonstrating the features of transformers-interpret with multiple models ☆25 · Updated 3 years ago
- CI/CD pipeline with Amazon SageMaker and GitHub Actions ☆23 · Updated 2 years ago