the-full-stack / gpu-deployments
Testing methods for GPU deployment
★20, updated 2 years ago
Alternatives and similar repositories for gpu-deployments
Users interested in gpu-deployments are comparing it to the libraries listed below.
- Showing various ways to serve Keras-based Stable Diffusion (★111, updated 2 years ago)
- 🎨 Imagine what Picasso could have done with AI. Self-host your StableDiffusion API. (★50, updated 2 years ago)
- Manage histories of LLM-based applications (★91, updated last year)
- Source of the FSDL 2022 labs, which are at https://github.com/full-stack-deep-learning/fsdl-text-recognizer-2022-labs (★83, updated last year)
- GPT-2 fine-tuning pipeline with KerasNLP, TensorFlow, and TensorFlow Extended (★33, updated 2 years ago)
- Run DreamBooth training on Modal (★22, updated 2 years ago)
- ★26, updated 9 months ago
- ★78, updated last year
- Seamless interface for using PyTorch distributed with Jupyter notebooks (★49, updated last week)
- ★80, updated last year
- ★69, updated 5 months ago
- Repository containing awesome resources regarding Hugging Face tooling. (★48, updated last year)
- Framework for building and maintaining self-updating prompts for LLMs (★64, updated last year)
- Cerule - A Tiny Mighty Vision Model (★68, updated last year)
- KMD is a collection of conversational exchanges between patients and doctors on various medical topics. It aims to capture the intricaci… (★24, updated last year)
- QR Codes that look nice (★61, updated last month)
- Recipes and resources for building, deploying, and fine-tuning generative AI with Fireworks. (★124, updated last week)
- Machine Learning Pipeline for Semantic Segmentation with TensorFlow Extended (TFX) and various GCP products (★95, updated 2 years ago)
- A clone of OpenAI's Tokenizer page for HuggingFace Models (★45, updated last year)
- Fine-tune an LLM to perform batch inference and online serving. (★112, updated 3 months ago)
- ★21, updated last year
- ★23, updated 2 years ago
- Minimal example scripts of the Hugging Face Trainer, focused on staying under 150 lines (★197, updated last year)
- ML/DL math and method notes (★63, updated last year)
- I learn about and explain quantization (★26, updated last year)
- Utilities for loading and running text embeddings with ONNX (★44, updated last month)
- Command-line script for running chat-style inference with models such as LLaMA, with LoRA adaptations (★33, updated 2 years ago)
- A Python wrapper around HuggingFace's TGI (text-generation-inference) and TEI (text-embedding-inference) servers. (★33, updated last week)
- Hugging Face Deep RL Class notes (★10, updated 2 years ago)
- Doing simple retrieval from LLMs at various context lengths to measure accuracy (★102, updated last year)