bentoml / BentoDiffusion
BentoDiffusion: A collection of diffusion models served with BentoML
☆374 · Updated 5 months ago
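BentoDiffusion packages text-to-image pipelines as BentoML services. A minimal sketch of what such a service might look like, assuming BentoML's 1.2+ `@bentoml.service` API and Hugging Face `diffusers`; the class name, model ID, GPU resource hint, and default step count are illustrative, not taken from the repository:

```python
import bentoml
from PIL.Image import Image


@bentoml.service(resources={"gpu": 1}, traffic={"timeout": 300})
class SDXLService:
    """Illustrative text-to-image service; not the repository's exact code."""

    def __init__(self) -> None:
        import torch
        from diffusers import StableDiffusionXLPipeline

        # Load the pipeline once per worker and keep it on the GPU.
        self.pipe = StableDiffusionXLPipeline.from_pretrained(
            "stabilityai/stable-diffusion-xl-base-1.0",
            torch_dtype=torch.float16,
            use_safetensors=True,
        ).to("cuda")

    @bentoml.api
    def txt2img(self, prompt: str, num_inference_steps: int = 25) -> Image:
        # Each decorated method becomes an HTTP endpoint; returns a PIL image.
        return self.pipe(prompt, num_inference_steps=num_inference_steps).images[0]
```

Run locally with something like `bentoml serve service:SDXLService`, then POST a prompt to the generated endpoint.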
Alternatives and similar repositories for BentoDiffusion
Users interested in BentoDiffusion are comparing it to the libraries listed below.
- NitroFusion: High-Fidelity Single-Step Diffusion through Dynamic Adversarial Training ☆289 · Updated 4 months ago
- A simple "Be My Eyes" web app with a llama.cpp/llava backend ☆492 · Updated last year
- Run Latent Consistency Models on your Mac ☆196 · Updated last year
- This repository implements the idea of "caption upsampling" from DALL-E 3 with Zephyr-7B and gathers results with SDXL (see the sketch after this list). ☆157 · Updated last year
- 3D to Photo is an open-source package by Dabble that combines threeJS and Stable Diffusion to build a virtual photo studio for product p… ☆447 · Updated last year
- An infinite number of monkeys randomly throwing paint at a canvas ☆307 · Updated last year
- The source of the demo app for fal-serverless + Next.js ☆122 · Updated last year
- Image Generation API Server - Similar to https://text-generator.io but for images ☆51 · Updated last month
- ☆320 · Updated last year
- Examples of models deployable with Truss ☆205 · Updated this week
- 🐳 | Dockerfiles for the RunPod container images used for our official templates. ☆208 · Updated 2 weeks ago
- llama.cpp with BakLLaVA model describes what it sees ☆382 · Updated last year
- Python library for designing and training your own Diffusion Models with PyTorch ☆288 · Updated 4 months ago
- ☆436 · Updated last year
- SSD-1B, an open-source text-to-image model that improves on SDXL by being 50% smaller and 60% faster ☆177 · Updated last year
- ☆790 · Updated 3 years ago
- 🧰 | RunPod CLI for pod management ☆339 · Updated last month
- Chat to Compose Video ☆195 · Updated last year
- An implementation of bucketMul LLM inference ☆223 · Updated last year
- Mistral7B playing DOOM ☆138 · Updated last year
- Port of MiniGPT4 in C++ (4bit, 5bit, 6bit, 8bit, 16bit CPU inference with GGML) ☆568 · Updated 2 years ago
- Cog wrapper for ostris/ai-toolkit + post-finetuning cog inference for flux models ☆406 · Updated 4 months ago
- LLaVA server (llama.cpp). ☆183 · Updated 2 years ago
- ☆276 · Updated last year
- Finetune llama2-70b and codellama on MacBook Air without quantization ☆449 · Updated last year
- A playground for creative exploration that uses SDXL Turbo. ☆232 · Updated this week
- ☆127 · Updated last year
- Finetune an LLM to speak like you based on your WhatsApp conversations ☆371 · Updated last year
- A small code base for training large models ☆309 · Updated 5 months ago
- ⚡️ A fast and flexible PyTorch inference server that runs locally, on any cloud or AI HW. ☆145 · Updated last year
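On the caption-upsampling entry above: the idea is to have an instruction-tuned LLM expand a terse prompt into a rich caption before it reaches the image model. A rough sketch under assumed defaults (transformers' text-generation pipeline with Zephyr-7B; the system prompt, example prompt, and sampling settings are made up for illustration and are not the listed repository's code):

```python
from transformers import pipeline

# Instruction-tuned LLM used to rewrite short prompts into detailed captions.
generator = pipeline(
    "text-generation",
    model="HuggingFaceH4/zephyr-7b-beta",
    device_map="auto",
)

messages = [
    {"role": "system",
     "content": "Rewrite the user's short image prompt as one rich, detailed caption."},
    {"role": "user", "content": "a cat in a spacesuit"},
]
chat_prompt = generator.tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

# return_full_text=False keeps only the newly generated caption.
upsampled = generator(
    chat_prompt, max_new_tokens=150, do_sample=True, return_full_text=False
)[0]["generated_text"]

print(upsampled)  # pass this detailed caption to SDXL instead of the original prompt
```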