the-full-stack / gpu-deployments
Testing methods for GPU deployment
☆20, updated 2 years ago

Alternatives and similar repositories for gpu-deployments
Users interested in gpu-deployments are comparing it to the libraries listed below.
- Cerule - A Tiny Mighty Vision Model (☆67, updated 9 months ago)
- Manage histories of LLM-based applications (☆89, updated last year)
- 🚀🤗 A collection of templates for Hugging Face Spaces (☆35, updated last year)
- ☆26, updated 5 months ago
- Command-line script for running inference on models such as LLaMA in a chat scenario, with LoRA adaptations (☆33, updated 2 years ago)
- Framework for building and maintaining self-updating prompts for LLMs (☆63, updated 11 months ago)
- 🎨 Imagine what Picasso could have done with AI. Self-host your Stable Diffusion API. (☆50, updated 2 years ago)
- [WIP] A 🔥 interface for running code in the cloud (☆85, updated 2 years ago)
- ☆78, updated last year
- Apps that run on modal.com (☆12, updated last year)
- A miniature version of Modal (☆20, updated 11 months ago)
- Gradio UI for a Cog API (☆66, updated last year)
- Using modal.com to process FineWeb-edu data (☆20, updated 2 months ago)
- Repository containing awesome resources for Hugging Face tooling (☆47, updated last year)
- Run DreamBooth training on Modal (☆22, updated 2 years ago)
- Fast AI Practical Deep Learning for Coders experiments in Stable Diffusion (☆25, updated 2 years ago)
- Showing various ways to serve Keras-based Stable Diffusion (☆110, updated 2 years ago)
- This repository shows various ways of deploying a vision model (TensorFlow) from 🤗 Transformers (☆30, updated 2 years ago)
- ☆70, updated last month
- GPT-2 fine-tuning pipeline with KerasNLP, TensorFlow, and TensorFlow Extended (☆32, updated last year)
- Utilities for loading and running text embeddings with ONNX (☆44, updated 10 months ago)
- ☆23, updated last year
- Retrieve the source code for any model made available on replicate.com (☆34, updated last year)
- ☆48, updated last year
- ☆19, updated 11 months ago
- I learn about and explain quantization (☆26, updated last year)
- ☆77, updated last year
- Fine-tune an LLM to perform batch inference and online serving (☆111, updated last week)
- Evaluate your LLM apps, RAG pipelines, any generated text, and more! (☆1, updated last year)
- ☆22, updated last year