likelovewant / ollama-for-amd
Get up and running with Llama 3, Mistral, Gemma, and other large language models, with added support for more AMD GPUs.
☆189 · Updated this week
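Usage is the same as upstream Ollama; this fork only extends the set of supported AMD GPUs. As a rough illustration (not taken from this repository), here is a minimal sketch that queries a locally running server over Ollama's native HTTP API, assuming the default port 11434 and that a model such as llama3 has already been pulled with `ollama pull llama3`:

```python
# Minimal sketch: query a locally running Ollama (or ollama-for-amd) server
# via its native HTTP API. Assumes the server listens on the default port
# 11434 and that the "llama3" model has already been pulled.
import json
import urllib.request

def generate(prompt: str, model: str = "llama3") -> str:
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # ask for one complete JSON object instead of a stream
    }).encode("utf-8")
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(generate("Why is the sky blue?"))
```

Because only the GPU backend differs, the same request should behave identically against the fork's builds as against upstream Ollama.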
Related projects
Alternatives and complementary repositories for ollama-for-amd
- ROCm library files for gfx1103, updated along with other AMD GPU architectures, for use on Windows. ☆144 · Updated this week
- Prebuilt Windows ROCm libraries for gfx1031 and gfx1032. ☆76 · Updated 2 months ago
- AI inferencing at the edge. A simple one-file way to run various GGML models with KoboldAI's UI, with AMD ROCm offloading. ☆455 · Updated this week
- A minimal web UI for talking to Ollama servers. ☆489 · Updated this week
- Adds AMD support to ZLUDA. ☆30 · Updated this week
- CUDA on AMD GPUs. ☆299 · Updated 2 months ago
- Croco.Cpp is a third-party testground for KoboldCPP, a simple one-file way to run various GGML/GGUF models with KoboldAI's UI. (for Croco.C… ☆84 · Updated this week
- A Python package for extending the official PyTorch to easily obtain performance on Intel platforms. ☆41 · Updated last month
- ☆14 · Updated 11 months ago
- ROCm Docker images with fixes/support for extra architectures, such as gfx803/gfx1010. ☆25 · Updated last year
- The most powerful and modular Stable Diffusion GUI, API, and backend with a graph/nodes interface. Now ZLUDA-enhanced for better AMD GPU p… ☆154 · Updated this week
- 👾 LM Studio CLI ☆1,665 · Updated this week
- Forge for stable-diffusion-webui-amdgpu (formerly stable-diffusion-webui-directml). ☆62 · Updated last week
- A simple frontend for LLMs built with React Native. ☆578 · Updated this week
- An OpenAI-compatible exllamav2 API that's both lightweight and fast. ☆609 · Updated this week
- Make PyTorch models at least run on APUs. ☆44 · Updated 11 months ago
- Web UI for ExLlamaV2. ☆446 · Updated last month
- A modern and easy-to-use client for Ollama. ☆600 · Updated last month
- Your trusty memory-enabled AI companion: a simple RAG chatbot optimized for local LLMs | 12 languages supported | OpenAI API compatible. ☆263 · Updated 2 months ago
- An OpenAI API compatible text-to-speech server using Coqui AI's xtts_v2 and/or Piper TTS as the backend. ☆478 · Updated 3 months ago
- Ollama AI front-end using Windows Forms as a Copilot application. ☆110 · Updated 6 months ago
- Run PyTorch with ROCm hardware acceleration on an RX 590 (or similar GPU). ☆23 · Updated last year
- A self-hosted web UI for 30+ generative AI models. ☆481 · Updated this week
- Microsoft GPU-P (dxgkrnl) on a Hyper-V Ubuntu VM. ☆137 · Updated 6 months ago
- Aims to provide a seamless and privacy-driven chatting experience with open-source technologies (Ollama), particularly open-source LLM… ☆97 · Updated this week
- HTTP proxy for on-demand model loading with llama.cpp (or other OpenAI-compatible backends). ☆41 · Updated this week
- Effortlessly run LLM backends, APIs, frontends, and services with one command. ☆545 · Updated this week
- Docker for Intel Arc GPUs: Intel PyTorch Extension + Stable Diffusion web UI. ☆41 · Updated last year
- Stable Diffusion web UI with added support for Intel oneAPI / Arc GPUs. ☆40 · Updated last year
- A manual to help with using the Tesla P40 GPU. ☆104 · Updated last week