MagellaX / StreamAttn
☆22 · Updated this week
Alternatives and similar repositories for StreamAttn
Users interested in StreamAttn are comparing it to the libraries listed below.
- Learning about CUDA by writing PTX code. ☆135 · Updated last year
- SIMD quantization kernels ☆83 · Updated last week
- Tensor library with autograd using only Rust's standard library ☆69 · Updated last year
- PyTorch from scratch in pure C/CUDA and Python ☆40 · Updated 10 months ago
- Simple Transformer in JAX ☆139 · Updated last year
- Extensive introductory write-up on Zig language features ☆10 · Updated last year
- Rust implementation of micrograd ☆52 · Updated last year
- PCCL (Prime Collective Communications Library) implements fault-tolerant collective communications over IP ☆106 · Updated this week
- PTX tutorial written purely by AIs (OpenAI's Deep Research and Claude 3.7) ☆66 · Updated 5 months ago
- moondream in Zig. ☆73 · Updated 2 months ago
- In this repository, I'm going to implement increasingly complex LLM inference optimizations ☆66 · Updated 3 months ago
- Could we make an ML stack in 100,000 lines of code? ☆46 · Updated last year
- An implementation of a deep learning framework and models in C ☆48 · Updated 4 months ago
- Because it's there. ☆16 · Updated 11 months ago
- A really tiny autograd engine ☆95 · Updated 3 months ago
- NanoGPT (124M) quality in 2.67B tokens ☆28 · Updated 2 weeks ago
- An open-source reproduction of NVIDIA's nGPT (Normalized Transformer with Representation Learning on the Hypersphere) ☆105 · Updated 5 months ago
- ☆96 · Updated last year
- Gradient descent is cool and all, but what if we could delete it? ☆104 · Updated last week
- A graph visualization of attention ☆57 · Updated 3 months ago
- Qwen3 experiments ☆31 · Updated last month
- Small autograd engine inspired by Karpathy's micrograd and PyTorch ☆278 · Updated 9 months ago
- A simple MLX implementation for pretraining LLMs on Apple Silicon. ☆85 · Updated last week
- Look how they massacred my boy ☆64 · Updated 10 months ago
- Peer-to-peer compute and intelligence network that enables decentralized AI development at scale ☆115 · Updated last month
- A tree-based prefix-cache library that allows rapid creation of looms: hierarchical branching pathways of LLM generations ☆73 · Updated 6 months ago
- Grokking on modular arithmetic in fewer than 150 epochs in MLX ☆14 · Updated 10 months ago
- NanoGPT speedrunning for the poor T4 enjoyers ☆69 · Updated 4 months ago
- MLX port of xjdr's entropix sampler (mimics the JAX implementation) ☆63 · Updated 9 months ago
- LLM training in simple, raw C/CUDA ☆18 · Updated last year