AMD-AGI / AMD-LLM
☆191 · Updated last year
Alternatives and similar repositories for AMD-LLM
Users interested in AMD-LLM are comparing it to the libraries listed below.
- Docker-based inference engine for AMD GPUs ☆230 · Updated last year
- ☆199 · Updated 7 months ago
- Run and explore Llama models locally with minimal dependencies on CPU ☆190 · Updated last year
- Algebraic enhancements for GEMM & AI accelerators ☆284 · Updated 9 months ago
- Code sample showing how to run and benchmark models on Qualcomm's Windows PCs ☆104 · Updated last year
- ☆249 · Updated last year
- Absolute minimalistic implementation of a GPT-like transformer using only numpy (<650 lines). ☆254 · Updated 2 years ago
- Richard is gaining power ☆200 · Updated 6 months ago
- Pytorch script hot swap: Change code without unloading your LLM from VRAM ☆125 · Updated 8 months ago
- ☆126 · Updated 6 months ago
- Bayesian Optimization as a Coverage Tool for Evaluating LLMs. Accurate evaluation (benchmarking) that's 10 times faster with just a few l… ☆287 · Updated 3 months ago
- This repo contains a new way to use bloom filters to do lossless video compression ☆250 · Updated 6 months ago
- A copy of ONNX models, datasets, and code all in one GitHub repository. Follow the README to learn more. ☆105 · Updated 2 years ago
- High-Performance Implementation of OpenAI's TikToken. ☆465 · Updated 5 months ago
- Mistral7B playing DOOM ☆138 · Updated last year
- Run larger LLMs with longer contexts on Apple Silicon by using differentiated precision for KV cache quantization. KVSplit enables 8-bit … ☆362 · Updated 7 months ago
- ☆165 · Updated last year
- An implementation of bucketMul LLM inference ☆222 · Updated last year
- A GPU Accelerated Binary Vector Store ☆47 · Updated 10 months ago
- ☆1,072 · Updated 7 months ago
- Online compiler for HIP and NVIDIA® CUDA® code to WebGPU ☆204 · Updated 11 months ago
- A CLI to manage, install, and configure llama inference implementations in multiple languages ☆65 · Updated last year
- Proof of thought: LLM-based reasoning using Z3 theorem proving with multiple backend support (SMT2 and JSON DSL) ☆364 · Updated 2 months ago
- A BERT that you can train on a (gaming) laptop. ☆210 · Updated 2 years ago
- Felafax is building AI infra for non-NVIDIA GPUs ☆568 · Updated 10 months ago
- Implement llama3 inference step by step: grasp the core concepts, follow the derivation, and write the code. ☆612 · Updated 9 months ago
- throwaway GPT inference ☆141 · Updated last year
- Neurox control helm chart details ☆30 · Updated 7 months ago
- Tensor library & inference framework for machine learning ☆115 · Updated 2 months ago
- ☆164 · Updated 8 months ago