Linaro / tinyBLAS
A fork of OpenBLAS with Armv8-A SVE (Scalable Vector Extension) support
☆17 · Updated 5 years ago
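As a rough illustration of what SVE support means for a BLAS-style library, below is a minimal C sketch of a single-precision dot product written with the Arm SVE ACLE intrinsics. This is not code from the tinyBLAS repository; the function name `sve_sdot` and the loop structure are illustrative assumptions.

```c
#include <arm_sve.h>   /* Arm SVE ACLE intrinsics; compile with -march=armv8-a+sve */
#include <stdint.h>

/* Illustrative SVE single-precision dot product (not from the tinyBLAS sources).
 * The predicate from svwhilelt_b32 covers the loop tail, so the kernel is
 * vector-length agnostic and runs unchanged on any SVE implementation. */
float sve_sdot(const float *x, const float *y, int64_t n) {
    svfloat32_t acc = svdup_f32(0.0f);
    int64_t i = 0;
    svbool_t pg = svwhilelt_b32(i, n);            /* active lanes for this iteration */
    while (svptest_any(svptrue_b32(), pg)) {
        svfloat32_t vx = svld1_f32(pg, x + i);    /* predicated loads */
        svfloat32_t vy = svld1_f32(pg, y + i);
        acc = svmla_f32_m(pg, acc, vx, vy);       /* acc += vx * vy on active lanes */
        i += svcntw();                            /* advance by hardware vector width */
        pg = svwhilelt_b32(i, n);
    }
    return svaddv_f32(svptrue_b32(), acc);        /* horizontal sum of the accumulator */
}
```

Vector-length agnosticism is the main way SVE kernels differ from fixed-width NEON kernels: the same binary adapts to whatever vector width the CPU implements.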
Alternatives and similar repositories for tinyBLAS
Users interested in tinyBLAS are comparing it to the libraries listed below
- Editor with LLM generation tree exploration ☆75 · Updated 7 months ago
- 1.58 Bit LLM on Apple Silicon using MLX ☆223 · Updated last year
- Inference RWKV v7 in pure C. ☆38 · Updated 3 weeks ago
- Tiny code to access the Tenstorrent Blackhole ☆59 · Updated 3 months ago
- 33B Chinese LLM, DPO QLoRA, 100K context, AirLLM 70B inference with a single 4GB GPU ☆13 · Updated last year
- Run multiple resource-heavy Large Models (LM) on the same machine with a limited amount of VRAM/other resources by exposing them on differe… ☆82 · Updated this week
- Inference of Mamba models in pure C ☆191 · Updated last year
- Lightweight inference server for OpenVINO ☆211 · Updated this week
- Lightweight Llama 3 8B inference engine in CUDA C ☆49 · Updated 5 months ago
- GGUF implementation in C as a library and a tools CLI program ☆290 · Updated 3 weeks ago
- A platform to self-host AI on easy mode ☆163 · Updated last week
- Lightweight C inference for Qwen3 GGUF. Multiturn prefix caching & batch processing. ☆18 · Updated 2 weeks ago
- A minimalistic C++ Jinja templating engine for LLM chat templates ☆180 · Updated last week
- The DPAB-α Benchmark ☆29 · Updated 8 months ago
- General purpose GPU compute framework built on Vulkan to support 1000s of cross-vendor graphics cards (AMD, Qualcomm, NVIDIA & friends). … ☆52 · Updated 6 months ago
- Asynchronous/distributed speculative evaluation for llama3 ☆39 · Updated last year
- Prepare for DeepSeek R1 inference: Benchmark CPU, DRAM, SSD, iGPU, GPU, ... with efficient code. ☆73 · Updated 7 months ago
- ☆31 · Updated 5 months ago
- Neurox control Helm chart details ☆30 · Updated 4 months ago
- Train your own small BitNet model ☆75 · Updated 10 months ago
- Mistral 7B playing DOOM ☆136 · Updated last year
- ☆60 · Updated last year
- ☆189 · Updated last year
- The Finite Field Assembly Programming Language ☆36 · Updated 4 months ago
- Thin wrapper around GGML to make life easier ☆40 · Updated 2 months ago
- Samples of good AI-generated CUDA kernels ☆90 · Updated 3 months ago
- noise_step: Training in 1.58b With No Gradient Memory ☆221 · Updated 8 months ago
- ☆196 · Updated 4 months ago
- Inference Llama/Llama2/Llama3 Models in NumPy ☆21 · Updated last year
- LLM Ripper is a framework for component extraction (embeddings, attention heads, FFNs), activation capture, functional analysis, and adap… ☆46 · Updated last week