herrmann / rustorch
"PyTorch in Rust"
☆17, updated last year
Alternatives and similar repositories for rustorch
Users interested in rustorch are comparing it to the libraries listed below.
- ☆16, updated last year
- ☆39, updated 3 years ago
- Read and write tensorboard data using Rust (☆24, updated 2 years ago)
- Make triton easier (☆50, updated last year)
- See https://github.com/cuda-mode/triton-index/ instead! (☆11, updated last year)
- ☆24, updated last year
- Implement LLaVA using candle (☆15, updated last year)
- Engineering the state of RNN language models (Mamba, RWKV, etc.) (☆32, updated last year)
- Code for the examples presented in the talk "Training a Llama in your backyard: fine-tuning very large models on consumer hardware" given… (☆15, updated 2 years ago)
- LayerNorm(SmallInit(Embedding)) in a Transformer to improve convergence (☆61, updated 3 years ago)
- New optimizer (☆20, updated last year)
- Rust bindings for CTranslate2 (☆14, updated 2 years ago)
- Training hybrid models for dummies. (☆29, updated 3 months ago)
- Because it's there. (☆16, updated last year)
- Efficiently computing & storing token n-grams from large corpora (☆26, updated last year)
- ☆135, updated last year
- JAX/Flax implementation of the Hyena Hierarchy (☆34, updated 2 years ago)
- ☆18, updated last year
- 🔭 Interactively explore `onnx` networks in your CLI. (☆26, updated last year)
- HyPe: Better Pre-trained Language Model Fine-tuning with Hidden Representation Perturbation [ACL 2023] (☆14, updated 2 years ago)
- Rust implementation of micrograd (☆53, updated last year)
- ☆12, updated last year
- Implementation of Hyena Hierarchy in JAX (☆10, updated 2 years ago)
- Utilities for Training Very Large Models (☆58, updated last year)
- Visualising Losses in Deep Neural Networks (☆16, updated last year)
- CUDA and Triton implementations of Flash Attention with SoftmaxN. (☆73, updated last year)
- JAX bindings for the flash-attention3 kernels (☆20, updated last month)
- PyTorch implementation of the paper "MM1: Methods, Analysis & Insights from Multimodal LLM Pre-training" (☆26, updated last week)
- Exploring an idea where one forgets about efficiency and carries out attention across each edge of the nodes (tokens) (☆55, updated 10 months ago)
- Simple (fast) transformer inference in PyTorch with torch.compile + lit-llama code (☆10, updated 2 years ago)