spectral-compute / scale-docs
☆57 · Updated last week
Alternatives and similar repositories for scale-docs
Users interested in scale-docs are comparing it to the repositories listed below.
- ☆58 · Updated 11 months ago
- AMD-related optimizations for transformer models ☆79 · Updated 7 months ago
- llama.cpp fork used by GPT4All ☆55 · Updated 4 months ago
- Fork of ollama for Vulkan support ☆17 · Updated 2 months ago
- An extension to use Kokoro TTS in text generation webui ☆20 · Updated last month
- Hackable and optimized Transformers building blocks, supporting a composable construction. ☆31 · Updated this week
- Source code for Intel's Polite Guard NLP project ☆35 · Updated last month
- Onboarding documentation source for the AMD Ryzen™ AI Software Platform. The AMD Ryzen™ AI Software Platform enables developers to take… ☆66 · Updated this week
- GPU-targeted vendor-agnostic AI library for Windows, and Mistral model implementation. ☆58 · Updated last year
- [DEPRECATED] Moved to ROCm/rocm-libraries repo ☆106 · Updated this week
- General purpose GPU compute framework built on Vulkan to support 1000s of cross-vendor graphics cards (AMD, Qualcomm, NVIDIA & friends). … ☆49 · Updated 4 months ago
- A simple frontend page to interact with an OpenAI-like API ☆17 · Updated 4 months ago
- ☆16 · Updated 3 months ago
- Simple script to quiz LLMs ☆26 · Updated last year
- LLM inference in C/C++ ☆77 · Updated this week
- Extension for Automatic1111's Stable Diffusion WebUI, using Microsoft DirectML to deliver high-performance results on any Windows GPU. ☆56 · Updated last year
- Add-on for the Web Search extension that provides web browsing capabilities without the need for the Extras API. ☆40 · Updated last month
- ☆10 · Updated 4 months ago
- ☆10 · Updated 10 months ago
- ☆24 · Updated this week
- Fork of the Triton repository for OpenXLA uses of the Triton language and compiler ☆11 · Updated 2 weeks ago
- ☆11 · Updated last year
- Web browser version of StarCoder.cpp ☆45 · Updated last year
- Various LLM benchmarks ☆21 · Updated 3 weeks ago
- Web page with political compass quiz results for open LLMs ☆37 · Updated last year
- Local LLM server with GPU and NPU acceleration ☆138 · Updated last week
- Download models from the Ollama library, without Ollama ☆86 · Updated 7 months ago
- Convert downloaded Ollama models back into their GGUF equivalent format ☆36 · Updated 6 months ago
- LLM inference in C/C++ ☆23 · Updated 8 months ago
- A demo of Claude computer use playing Minecraft ☆22 · Updated 8 months ago