herrmann / rustorch
"PyTorch in Rust"
☆17 · Updated last year
Alternatives and similar repositories for rustorch
Users interested in rustorch are comparing it to the libraries listed below.
- CUDA and Triton implementations of Flash Attention with SoftmaxN. ☆73 · Updated last year
- Rust crate for some audio utilities ☆25 · Updated 8 months ago
- Implement LLaVA using candle ☆15 · Updated last year
- Make Triton easier ☆48 · Updated last year
- See https://github.com/cuda-mode/triton-index/ instead! ☆10 · Updated last year
- ☆23 · Updated 11 months ago
- A collection of optimisers for use with candle ☆43 · Updated 3 months ago
- Implementation of Hyena Hierarchy in JAX ☆10 · Updated 2 years ago
- ☆15 · Updated last year
- Rust implementation of micrograd ☆53 · Updated last year
- Rust bindings for CTranslate2 ☆14 · Updated 2 years ago
- Code for the examples presented in the talk "Training a Llama in your backyard: fine-tuning very large models on consumer hardware" given… ☆14 · Updated 2 years ago
- Efficiently computing & storing token n-grams from large corpora ☆26 · Updated last year
- Engineering the state of RNN language models (Mamba, RWKV, etc.) ☆32 · Updated last year
- Read and write TensorBoard data using Rust ☆23 · Updated last year
- A dashboard for exploring timm learning rate schedulers ☆19 · Updated 11 months ago
- Utilities for training very large models ☆58 · Updated last year
- ☆39 · Updated 3 years ago
- ☆21 · Updated 8 months ago
- LayerNorm(SmallInit(Embedding)) in a Transformer to improve convergence ☆60 · Updated 3 years ago
- JAX bindings for the flash-attention3 kernels ☆16 · Updated last month
- Training hybrid models for dummies. ☆29 · Updated 2 weeks ago
- Because it's there. ☆16 · Updated last year
- JAX/Flax implementation of the Hyena Hierarchy ☆34 · Updated 2 years ago
- PyTorch implementation of the paper "MM1: Methods, Analysis & Insights from Multimodal LLM Pre-training" ☆24 · Updated this week
- ML/DL math and method notes ☆64 · Updated last year
- Code for the note "NF4 Isn't Information Theoretically Optimal (and that's Good)" ☆21 · Updated 2 years ago
- Exploring an idea where one forgets about efficiency and carries out attention across each edge of the nodes (tokens) ☆55 · Updated 7 months ago
- ☆18 · Updated last year
- A place to store reusable transformer components of my own creation or found on the interwebs ☆59 · Updated last month