graphcore-research / pytorch-tensor-tracker
Flexibly track outputs and grad-outputs of torch.nn.Module.
☆13 · Updated last year
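The tracking described above can be sketched with standard PyTorch hooks. This is a minimal illustration of the underlying mechanism, not pytorch-tensor-tracker's actual API; the `track` helper and `tracked` dict are hypothetical names for this sketch.

```python
import torch
import torch.nn as nn

# Hypothetical store for captured tensors (not part of the library's API).
tracked = {}

def track(module: nn.Module, name: str):
    """Capture a module's output and grad-output via standard PyTorch hooks."""
    def fwd_hook(mod, inputs, output):
        # Called after every forward pass of `module`.
        tracked[f"{name}.output"] = output.detach()

    def bwd_hook(mod, grad_inputs, grad_outputs):
        # Called during backward; grad_outputs holds d(loss)/d(output).
        tracked[f"{name}.grad_output"] = grad_outputs[0].detach()

    module.register_forward_hook(fwd_hook)
    module.register_full_backward_hook(bwd_hook)

model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
for i, layer in enumerate(model):
    track(layer, f"layer{i}")

out = model(torch.randn(3, 4))
out.sum().backward()
```

After the backward pass, `tracked` holds one detached output tensor per layer and the corresponding grad-outputs, which is the kind of data such a tracker exposes for inspection.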
Alternatives and similar repositories for pytorch-tensor-tracker:
Users interested in pytorch-tensor-tracker are comparing it to the libraries listed below:
- Simple implementation of muP, based on the Spectral Condition for Feature Learning. The implementation is SGD-only; don't use it for Adam. ☆73 · Updated 7 months ago
- These papers provide unique, insightful concepts that will broaden your perspective on neural networks and deep learning. ☆47 · Updated last year
- ☆37 · Updated 11 months ago
- The triangle in action! Triton ☆15 · Updated last year
- ☆52 · Updated 5 months ago
- Language models scale reliably with over-training and on downstream tasks ☆96 · Updated 11 months ago
- Stick-breaking attention ☆48 · Updated last week
- Unofficial but efficient implementation of "Mamba: Linear-Time Sequence Modeling with Selective State Spaces" in JAX ☆83 · Updated last year
- Minimal but scalable implementation of large language models in JAX ☆34 · Updated 4 months ago
- A MAD laboratory to improve AI architecture designs 🧪 ☆107 · Updated 3 months ago
- WIP ☆93 · Updated 7 months ago
- Experiment of using Tangent to autodiff Triton ☆76 · Updated last year
- JAX implementation of the Mistral 7B v0.2 model ☆35 · Updated 8 months ago
- ☆51 · Updated last year
- Large-scale 4D parallelism pre-training for 🤗 transformers in Mixture of Experts *(still work in progress)* ☆81 · Updated last year
- LoRA for arbitrary JAX models and functions ☆135 · Updated last year
- Minimal (400 LOC) implementation of maximum (multi-node, FSDP) GPT training ☆122 · Updated 11 months ago
- ☆30 · Updated 3 months ago
- ☆76 · Updated 8 months ago
- Official JAX implementation of MD4 Masked Diffusion Models ☆67 · Updated 3 weeks ago
- Implementation of Infini-Transformer in PyTorch ☆109 · Updated 2 months ago
- ☆33 · Updated 6 months ago
- Supporting PyTorch FSDP for optimizers ☆79 · Updated 3 months ago
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ☆100 · Updated 4 months ago
- CUDA implementation of autoregressive linear attention, with all the latest research findings ☆44 · Updated last year
- ☆43 · Updated last year
- ☆95 · Updated 9 months ago
- An implementation of the Llama architecture, to instruct and delight ☆21 · Updated 2 months ago