PrincetonUniversity / multi_gpu_training
☆339 · Updated 3 months ago
Alternatives and similar repositories for multi_gpu_training
Users interested in multi_gpu_training are comparing it to the libraries listed below.
- Example of how to use Weights & Biases on Slurm ☆115 · Updated 2 years ago
- TensorDict is a dedicated tensor container for PyTorch (sketch after this list) ☆929 · Updated last week
- Annotated version of the Mamba paper ☆485 · Updated last year
- FFCV-SSL: Fast Forward Computer Vision for Self-Supervised Learning ☆208 · Updated last year
- Helpful tools and examples for working with flex-attention (sketch after this list) ☆846 · Updated this week
- Legible, Scalable, Reproducible Foundation Models with Named Tensors and JAX ☆596 · Updated this week
- For optimization algorithm research and development. ☆520 · Updated this week
- Implementation of Diffusion Transformer (DiT) in JAX ☆278 · Updated last year
- Python 3.8+ toolbox for submitting jobs to Slurm (sketch after this list) ☆1,457 · Updated last month
- Implementation of ST-MoE, the latest incarnation of MoE after years of research at Brain, in PyTorch ☆341 · Updated last year
- MLCommons Algorithmic Efficiency is a benchmark and competition measuring neural network training speedups due to algorithmic improvement… ☆384 · Updated this week
- Building blocks for foundation models. ☆511 · Updated last year
- VICReg official code base ☆539 · Updated last year
- FlashFFTConv: Efficient Convolutions for Long Sequences with Tensor Cores ☆319 · Updated 5 months ago
- ☆50 · Updated last year
- Helps you write algorithms in PyTorch that adapt to the available (CUDA) memory ☆438 · Updated 9 months ago
- Reading list for research topics in state-space models ☆298 · Updated 2 weeks ago
- ☆270 · Updated 11 months ago
- TorchOpt is an efficient library for differentiable optimization built upon PyTorch. ☆597 · Updated 3 weeks ago
- A convenient way to trigger synchronizations to wandb / Weights & Biases when your compute nodes don't have internet access (sketch after this list) ☆82 · Updated 3 weeks ago
- A Simplified PyTorch Implementation of Vision Transformer (ViT) ☆192 · Updated last year
- CIFAR-10 speedruns: 94% in 2.6 seconds and 96% in 27 seconds ☆252 · Updated 3 months ago
- ☆229 · Updated 4 months ago
- Implementation of https://srush.github.io/annotated-s4 ☆499 · Updated last week
- ViT Prisma is a mechanistic interpretability library for Vision and Video Transformers (ViTs) ☆275 · Updated 2 weeks ago
- Implementation of a memory-efficient multi-head attention as proposed in the paper "Self-attention Does Not Need O(n²) Memory" ☆379 · Updated last year
- A simple way to keep track of an Exponential Moving Average (EMA) version of your PyTorch model (sketch after this list) ☆592 · Updated 6 months ago
- Scalable and Performant Data Loading ☆278 · Updated this week
- ☆436 · Updated 8 months ago
- Implementation of 💍 Ring Attention, from Liu et al. at Berkeley AI, in PyTorch ☆521 · Updated last month
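Minimal usage sketches for a few of the libraries above follow. Each is a hedged illustration under stated assumptions, not a library's canonical example; project names, folders, and tensor shapes are invented for demonstration.

TensorDict groups tensors that share leading batch dimensions so batch-level operations apply to every entry at once. A small sketch, assuming the documented `TensorDict` constructor and indexing semantics:

```python
import torch
from tensordict import TensorDict

# Tensors sharing the leading batch dimension live under one container,
# so indexing and device moves apply to every entry at once.
data = TensorDict(
    {"obs": torch.randn(32, 4), "reward": torch.randn(32, 1)},
    batch_size=[32],
)

sample = data[:8]           # slices every tensor inside to batch size 8
print(sample["obs"].shape)  # torch.Size([8, 4])
data = data.to("cpu")       # a no-op here; would move all entries to a device
```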
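flex-attention lets you express attention variants as a `score_mod` callback instead of a custom kernel. A sketch assuming PyTorch ≥ 2.5, where `flex_attention` lives under `torch.nn.attention.flex_attention`; the causal mask here follows the pattern from PyTorch's own examples:

```python
import torch
from torch.nn.attention.flex_attention import flex_attention

# score_mod receives each raw attention score plus its (batch, head,
# query-index, key-index) coordinates and returns a modified score;
# returning -inf wherever kv_idx > q_idx yields causal masking.
def causal(score, b, h, q_idx, kv_idx):
    return torch.where(q_idx >= kv_idx, score, -float("inf"))

q = torch.randn(2, 8, 128, 64)  # (batch, heads, seq_len, head_dim)
k = torch.randn(2, 8, 128, 64)
v = torch.randn(2, 8, 128, 64)
out = flex_attention(q, k, v, score_mod=causal)  # same shape as q
```

In practice the function is compiled with `torch.compile` for speed; the eager call above is just to show the interface.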
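The Slurm toolbox above is submitit, which submits plain Python functions as cluster jobs. A sketch following its README-style API; the partition name is site-specific and illustrative:

```python
import submitit

def add(a, b):
    return a + b

# AutoExecutor writes the batch script and submits it via sbatch;
# the folder collects logs and pickled results. It falls back to
# local execution when no Slurm cluster is available.
executor = submitit.AutoExecutor(folder="submitit_logs")
executor.update_parameters(timeout_min=5, slurm_partition="dev")
job = executor.submit(add, 2, 3)
print(job.result())  # blocks until the job finishes, then prints 5
```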
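The wandb-sync item automates pushing runs from air-gapped compute nodes; the sketch below shows only the underlying wandb mechanism it builds on (offline logging plus the `wandb sync` CLI), not that repository's own tooling. The project name is invented:

```python
import wandb

# With mode="offline", runs are written under ./wandb/ instead of
# being streamed; push them later from a machine with internet:
#   wandb sync wandb/offline-run-*
run = wandb.init(project="multi-gpu-demo", mode="offline")
for step in range(10):
    run.log({"loss": 1.0 / (step + 1)})
run.finish()
```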
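The EMA item's description matches the ema-pytorch package; assuming that is the library in question, a sketch of its README-style usage (hyperparameters here are illustrative):

```python
import torch
from ema_pytorch import EMA  # assumption: the ema-pytorch package

net = torch.nn.Linear(512, 512)
opt = torch.optim.SGD(net.parameters(), lr=1e-3)
# beta is the decay; updates start after a warmup and run every N steps
ema = EMA(net, beta=0.9999, update_after_step=100, update_every=10)

for _ in range(3):  # toy training loop
    loss = net(torch.randn(16, 512)).sum()
    opt.zero_grad()
    loss.backward()
    opt.step()
    ema.update()  # call once per optimizer step

out = ema.ema_model(torch.randn(1, 512))  # inference with smoothed weights
```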