Linaro / tinyBLAS
A fork of OpenBLAS with Armv8-A SVE (Scalable Vector Extension) support
☆17 · Updated 5 years ago
Alternatives and similar repositories for tinyBLAS
Users interested in tinyBLAS are comparing it to the libraries listed below.
- 1.58-bit LLM on Apple Silicon using MLX ☆221 · Updated last year
- Editor with LLM generation tree exploration ☆73 · Updated 6 months ago
- Tiny code to access the Tenstorrent Blackhole ☆60 · Updated 3 months ago
- Lightweight Llama 3 8B inference engine in CUDA C ☆48 · Updated 5 months ago
- A minimalistic C++ Jinja templating engine for LLM chat templates ☆170 · Updated 3 weeks ago
- Run multiple resource-heavy large models (LM) on the same machine with a limited amount of VRAM/other resources by exposing them on differe… ☆73 · Updated last week
- ☆407 · Updated this week
- GGUF implementation in C as a library and a CLI tool ☆284 · Updated this week
- ☆31 · Updated 5 months ago
- Train your own small BitNet model ☆75 · Updated 10 months ago
- ☆60 · Updated last year
- General-purpose GPU compute framework built on Vulkan to support 1000s of cross-vendor graphics cards (AMD, Qualcomm, NVIDIA & friends). … ☆52 · Updated 6 months ago
- Code sample showing how to run and benchmark models on Qualcomm's Windows PCs ☆101 · Updated 10 months ago
- ☆187 · Updated last year
- Inference of Mamba models in pure C ☆191 · Updated last year
- No-code CLI designed for accelerating ONNX workflows ☆208 · Updated 2 months ago
- Aana SDK is a powerful framework for building AI-enabled multimodal applications. ☆52 · Updated last week
- 33B Chinese LLM, DPO QLoRA, 100K context, AirLLM 70B inference with a single 4GB GPU ☆13 · Updated last year
- ☆311 · Updated this week
- C API for MLX ☆124 · Updated last month
- Lightweight inference server for OpenVINO ☆202 · Updated this week
- The Finite Field Assembly Programming Language ☆36 · Updated 3 months ago
- Pivotal Token Search ☆121 · Updated last month
- Local Qwen3 LLM inference. One easy-to-understand file of C source with no dependencies. ☆102 · Updated last month
- Inference of RWKV v7 in pure C. ☆38 · Updated this week
- LLM Ripper is a framework for component extraction (embeddings, attention heads, FFNs), activation capture, functional analysis, and adap… ☆39 · Updated this week
- Source code for Intel's Polite Guard NLP project ☆37 · Updated 3 weeks ago
- Tensor library & inference framework for machine learning ☆109 · Updated this week
- Lightweight C inference for Qwen3 GGUF, with the smallest model (0.6B) at full precision (FP32) ☆16 · Updated 2 weeks ago
- noise_step: Training in 1.58b with no gradient memory ☆220 · Updated 8 months ago