jsgrad-org / jsgrad
jsgrad is a dependency-free ML library in TypeScript for model inference and training, with support for WebGPU and other runtimes.
☆55 · Updated 4 months ago
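As a rough illustration of the kind of tensor code jsgrad targets, here is a minimal TypeScript sketch; the `@jsgrad/jsgrad` import path and the `Tensor`/`matmul`/`tolist` names are assumptions modeled on tinygrad-style APIs, not confirmed from jsgrad's docs.

```ts
// A minimal sketch, assuming a tinygrad-style API; the package name
// "@jsgrad/jsgrad" and the Tensor/matmul/tolist names are hypothetical.
import { Tensor } from '@jsgrad/jsgrad'

async function main() {
  // Two small 2x2 tensors.
  const a = new Tensor([[1, 2], [3, 4]])
  const b = new Tensor([[5, 6], [7, 8]])

  // Matrix multiply; with a WebGPU runtime this would dispatch to the GPU.
  const out = a.matmul(b)

  // Materialize the result back to a nested JS array.
  console.log(await out.tolist()) // [[19, 22], [43, 50]]
}

main()
```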
Alternatives and similar repositories for jsgrad
Users interested in jsgrad are comparing it to the libraries listed below.
- A light tensor library in Zig. ☆78 · Updated 6 months ago
- Tensor library with autograd using only Rust's standard library ☆69 · Updated last year
- SIMD quantization kernels ☆86 · Updated this week
- ☆69 · Updated last year
- A graph visualization of attention ☆57 · Updated 3 months ago
- TOPLOC: a novel method for verifiable inference that enables users to verify that LLM providers are using the correct model configurat… ☆40 · Updated 4 months ago
- Ultra-low-overhead NVIDIA GPU telemetry plugin for Telegraf with memory temperature readings. ☆63 · Updated last year
- It's a baby compiler. (Lean btw.) ☆16 · Updated 3 months ago
- Turing machines, Rule 110, and A::B reversal using Claude 3 Opus. ☆58 · Updated last year
- Simple Transformer in JAX ☆140 · Updated last year
- Peer-to-peer compute and intelligence network that enables decentralized AI development at scale ☆115 · Updated last month
- moondream in Zig. ☆73 · Updated 3 months ago
- Solve puzzles to improve your tinygrad skills! ☆142 · Updated 5 months ago
- A tree-based prefix cache library that allows rapid creation of looms: hierarchical branching pathways of LLM generations. ☆73 · Updated 6 months ago
- An implementation of bucketMul LLM inference ☆223 · Updated last year
- MLX port for xjdr's entropix sampler (mimics the JAX implementation) ☆62 · Updated 9 months ago
- ☆22 · Updated last week
- WebGPU LLM inference tuned by hand ☆151 · Updated 2 years ago
- A native Jupyter notebook frontend with local + remote kernels, reactive cells, and IDE features, implemented in Rust ☆124 · Updated 7 months ago
- Run GGML models with Kubernetes. ☆174 · Updated last year
- Web-optimized vector database (written in Rust). ☆254 · Updated 6 months ago
- ☆89 · Updated 11 months ago
- look how they massacred my boy ☆64 · Updated 10 months ago
- Learning about CUDA by writing PTX code. ☆135 · Updated last year
- noise_step: Training in 1.58b With No Gradient Memory ☆220 · Updated 8 months ago
- ctypes wrappers for HIP, CUDA, and OpenCL ☆130 · Updated last year
- ☆99 · Updated 9 months ago
- Online compiler for HIP and NVIDIA® CUDA® code to WebGPU ☆193 · Updated 7 months ago
- Gradient descent is cool and all, but what if we could delete it? ☆104 · Updated 2 weeks ago
- smol models are fun too ☆93 · Updated 9 months ago