MagellaX / StreamAttn
A high-performance attention mechanism that computes softmax normalization in a single streaming pass using running accumulators (online softmax). 
☆27 · Updated 2 weeks ago
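The description refers to the online-softmax recurrence (Milakov & Gianinazzi, 2018): the softmax normalizer is computed in one pass by carrying a running maximum and a running sum that is rescaled whenever the maximum changes. Below is a minimal Python sketch of that recurrence for illustration; the function name and shape are assumptions, not StreamAttn's actual API.

```python
import math

def online_softmax(xs):
    """Illustrative online-softmax sketch (not StreamAttn's API).

    One streaming pass maintains a running maximum `m` and a running
    normalizer `d` = sum of exp(x_i - m); when a new maximum appears,
    the accumulated sum is rescaled by exp(m_old - m_new) so earlier
    terms stay consistent with the new reference point.
    """
    m = float("-inf")  # running maximum
    d = 0.0            # running normalizer
    for x in xs:
        m_new = max(m, x)
        d = d * math.exp(m - m_new) + math.exp(x - m_new)
        m = m_new
    # A final pass materializes the probabilities; the normalizer itself
    # was obtained in a single streaming pass over the scores.
    return [math.exp(x - m) / d for x in xs]

print(online_softmax([1.0, 2.0, 3.0]))  # ≈ [0.0900, 0.2447, 0.6652]
```

This is the same rescaling trick FlashAttention-style kernels use to fuse softmax into the attention computation without materializing the full score matrix.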
Alternatives and similar repositories for StreamAttn
Users interested in StreamAttn are comparing it to the libraries listed below.
- Learning about CUDA by writing PTX code. ☆145 · Updated last year
- SIMD quantization kernels ☆89 · Updated last month
- Quantized LLM training in pure CUDA/C++. ☆209 · Updated this week
- Tensor library with autograd using only Rust's standard library ☆70 · Updated last year
- PTX tutorial written purely by AIs (Deep Research from OpenAI and Claude 3.7) ☆66 · Updated 7 months ago
- PyTorch from scratch in pure C/CUDA and Python ☆41 · Updated last year
- PCCL (Prime Collective Communications Library) implements fault-tolerant collective communications over IP ☆133 · Updated last month
- Simple Transformer in JAX ☆139 · Updated last year
- NanoGPT-speedrunning for the poor T4 enjoyers ☆72 · Updated 6 months ago
- moondream in Zig. ☆76 · Updated 5 months ago
- In this repository, I'm going to implement increasingly complex LLM inference optimizations ☆68 · Updated 5 months ago
- An open-source reproduction of NVIDIA's nGPT (Normalized Transformer with Representation Learning on the Hypersphere) ☆107 · Updated 7 months ago
- look how they massacred my boy ☆63 · Updated last year
- Rust implementation of micrograd ☆53 · Updated last year
- NanoGPT (124M) quality in 2.67B tokens ☆28 · Updated last month
- Because it's there. ☆16 · Updated last year
- ☆28 · Updated last year
- Extensive introductory write-up on Zig language features ☆10 · Updated last year
- Ultra-low-overhead NVIDIA GPU telemetry plugin for Telegraf with memory temperature readings ☆63 · Updated last year
- MLX port of xjdr's entropix sampler (mimics the JAX implementation) ☆62 · Updated 11 months ago
- A simple MLX implementation for pretraining LLMs on Apple Silicon. ☆84 · Updated 2 months ago
- A graph visualization of attention ☆57 · Updated 5 months ago
- A tree-based prefix-cache library that allows rapid creation of looms: hierarchical branching pathways of LLM generations. ☆72 · Updated 8 months ago
- Gradient descent is cool and all, but what if we could delete it? ☆104 · Updated 2 months ago
- ☆40 · Updated last year
- Peer-to-peer compute and intelligence network that enables decentralized AI development at scale ☆127 · Updated 3 months ago
- An implementation of a deep learning framework and models in C ☆48 · Updated 6 months ago
- My submission for the GPUMODE/AMD fp8 mm challenge ☆29 · Updated 4 months ago
- Experimental GPU language with meta-programming ☆23 · Updated last year
- A really tiny autograd engine ☆95 · Updated 5 months ago