AMD-AGI / AMD-LLM
☆191 · Updated last year
Alternatives and similar repositories for AMD-LLM
Users interested in AMD-LLM are comparing it to the libraries listed below.
- Docker-based inference engine for AMD GPUs ☆231 · Updated last year
- ☆200 · Updated 9 months ago
- Code sample showing how to run and benchmark models on Qualcomm's Windows PCs ☆104 · Updated last year
- Run and explore Llama models locally with minimal dependencies on CPU ☆189 · Updated last year
- Algebraic enhancements for GEMM & AI accelerators ☆287 · Updated 11 months ago
- Richard is gaining power ☆200 · Updated 7 months ago
- Absolute minimalistic implementation of a GPT-like transformer using only numpy (<650 lines) ☆254 · Updated 2 years ago
- An implementation of bucketMul LLM inference ☆224 · Updated last year
- Dead Simple LLM Abliteration ☆248 · Updated 11 months ago
- ☆250 · Updated last year
- ☆126 · Updated 8 months ago
- Neurox control helm chart details ☆30 · Updated 9 months ago
- This repo contains a new way to use bloom filters to do lossless video compression ☆250 · Updated 8 months ago
- Bayesian Optimization as a Coverage Tool for Evaluating LLMs. Accurate evaluation (benchmarking) that's 10 times faster with just a few l… ☆288 · Updated 3 weeks ago
- A copy of ONNX models, datasets, and code all in one GitHub repository. Follow the README to learn more. ☆105 · Updated 2 years ago
- Run larger LLMs with longer contexts on Apple Silicon by using differentiated precision for KV cache quantization. KVSplit enables 8-bit … ☆363 · Updated 8 months ago
- Tensor library & inference framework for machine learning ☆117 · Updated 4 months ago
- DiscoGrad - automatically differentiate across conditional branches in C++ programs ☆209 · Updated last year
- High-Performance Implementation of OpenAI's TikToken ☆467 · Updated 7 months ago
- Pytorch script hot swap: Change code without unloading your LLM from VRAM ☆125 · Updated 9 months ago
- Proof of thought: LLM-based reasoning using Z3 theorem proving with multiple backend support (SMT2 and JSON DSL) ☆364 · Updated 3 months ago
- ☆163 · Updated 10 months ago
- A CLI to manage, install, and configure llama inference implementations in multiple languages ☆65 · Updated 2 years ago
- throwaway GPT inference ☆141 · Updated last year
- A GPU Accelerated Binary Vector Store ☆47 · Updated 11 months ago
- See Through Your Models ☆400 · Updated 7 months ago
- ✨ rudimentary simulation of the three-body problem ☆159 · Updated last year
- Online compiler for HIP and NVIDIA® CUDA® code to WebGPU ☆205 · Updated last year
- minimal yet working VPN daemon for Linux ☆106 · Updated 5 months ago
- Felafax is building AI infra for non-NVIDIA GPUs ☆570 · Updated last year