prawilny / ollama-rocm-docker
☆14 · Updated 11 months ago
Related projects
Alternatives and complementary repositories for ollama-rocm-docker
- AMD (Radeon GPU) ROCm-based setup for popular AI tools on Ubuntu 24.04.1 ☆174 · Updated last month
- Get up and running with Llama 3, Mistral, Gemma, and other large language models, with added AMD GPU support. ☆189 · Updated this week
- An OpenAI API compatible text-to-speech server using Coqui AI's xtts_v2 and/or piper tts as the backend. ☆478 · Updated 3 months ago
- ☆137 · Updated last week
- Croco.Cpp is a 3rd-party testground for KoboldCPP, a simple one-file way to run various GGML/GGUF models with KoboldAI's UI. (for Croco.C… ☆84 · Updated this week
- LLM Benchmark for Throughput via Ollama (Local LLMs) ☆130 · Updated 3 months ago
- 8-bit CUDA functions for PyTorch, ROCm-compatible ☆39 · Updated 7 months ago
- Dolphin System Messages ☆202 · Updated 2 months ago
- Docker variants of oobabooga's text-generation-webui, including pre-built images. ☆397 · Updated last month
- Ollama chat client in Vue, with everything you need to run a private text RPG in the browser ☆99 · Updated last month
- ROCm library files for gfx1103, updated with other AMD GPU architectures, for use on Windows. ☆144 · Updated this week
- ☆322 · Updated this week
- Neo AI integrates into the Linux terminal, capable of executing system commands and providing helpful information. ☆94 · Updated 2 months ago
- Discord Bot that utilizes Ollama to interact with any Large Language Models to talk with users and allow them to host/create their own mo… ☆93 · Updated last week
- ☆40 · Updated last year
- HTTP proxy for on-demand model loading with llama.cpp (or other OpenAI-compatible backends) ☆41 · Updated this week
- An OAI-compatible exllamav2 API that's both lightweight and fast ☆609 · Updated this week
- A daemon that automatically manages the performance states of NVIDIA GPUs. ☆38 · Updated 3 weeks ago
- Dagger functions to import Hugging Face GGUF models into a local ollama instance and optionally push them to ollama.com. ☆110 · Updated 6 months ago
- A fast batching API to serve LLMs ☆172 · Updated 6 months ago
- A simple-to-use Ollama autocompletion engine with exposed options and streaming functionality ☆102 · Updated last month
- Make PyTorch models at least run on APUs. ☆44 · Updated 11 months ago
- 100% Local AGI with LocalAI ☆403 · Updated 5 months ago
- LLM Frontend in a single HTML file ☆259 · Updated 3 weeks ago
- An extension for oobabooga/text-generation-webui that enables the LLM to search the web using DuckDuckGo ☆173 · Updated this week
- Cohere Toolkit is a collection of prebuilt components enabling users to quickly build and deploy RAG applications. ☆25 · Updated this week
- Run an AI-powered Discord bot from the comfort of your laptop. ☆138 · Updated this week
- Parse files (e.g. code repos) and websites to the clipboard or a file for ingestion by AI / LLMs ☆63 · Updated this week
- ☆84 · Updated 2 weeks ago
- API up your Ollama Server. ☆97 · Updated last month