PrincetonUniversity / multi_gpu_training
☆349 · Updated 5 months ago
Alternatives and similar repositories for multi_gpu_training
Users interested in multi_gpu_training are comparing it to the libraries listed below.
- Example of how to use Weights & Biases on Slurm · ☆117 · Updated 3 years ago
- Annotated version of the Mamba paper · ☆488 · Updated last year
- Python 3.8+ toolbox for submitting jobs to Slurm · ☆1,495 · Updated 3 months ago
- ☆56 · Updated last year
- Implementation of ST-MoE, the latest incarnation of MoE after years of research at Brain, in PyTorch · ☆359 · Updated last year
- Helpful tools and examples for working with flex-attention · ☆943 · Updated last week
- TensorDict is a PyTorch-dedicated tensor container. · ☆955 · Updated last week
- FFCV-SSL: Fast Forward Computer Vision for Self-Supervised Learning. · ☆207 · Updated 2 years ago
- Building blocks for foundation models. · ☆539 · Updated last year
- A convenient way to trigger synchronizations to wandb / Weights & Biases if your compute nodes don't have internet! · ☆83 · Updated 3 weeks ago
- Implementation of a memory-efficient multi-head attention as proposed in the paper "Self-attention Does Not Need O(n²) Memory" · ☆379 · Updated 2 years ago
- Code for our NeurIPS 2022 paper · ☆369 · Updated 2 years ago
- Implementation of https://srush.github.io/annotated-s4 · ☆500 · Updated 2 months ago
- A Simplified PyTorch Implementation of Vision Transformer (ViT) · ☆200 · Updated last year
- Helps you write algorithms in PyTorch that adapt to the available (CUDA) memory · ☆437 · Updated last year
- Universal Tensor Operations in Einstein-Inspired Notation for Python. · ☆400 · Updated 4 months ago
- Implementation of Diffusion Transformer (DiT) in JAX · ☆292 · Updated last year
- Reading list for research topics in state-space models · ☆319 · Updated 2 months ago
- Tensors, for human consumption · ☆1,281 · Updated 2 months ago
- List of ML conferences with important dates and accepted paper lists · ☆141 · Updated 4 months ago
- Simple, minimal implementation of the Mamba SSM in one PyTorch file, using logcumsumexp (Heisen sequence). · ☆121 · Updated 10 months ago
- CIFAR-10 speedruns: 94% in 2.6 seconds and 96% in 27 seconds · ☆284 · Updated last month
- For optimization algorithm research and development. · ☆530 · Updated this week
- ☆208 · Updated 2 years ago
- TorchOpt is an efficient library for differentiable optimization built upon PyTorch. · ☆612 · Updated 3 weeks ago
- Official code base for VICReg · ☆545 · Updated 2 years ago
- ☆233 · Updated 6 months ago
- MLCommons Algorithmic Efficiency is a benchmark and competition measuring neural network training speedups due to algorithmic improvement… · ☆390 · Updated last week
- Legible, Scalable, Reproducible Foundation Models with Named Tensors and Jax · ☆648 · Updated this week
- A library that contains a rich collection of performant PyTorch model metrics, a simple interface to create new metrics, a toolkit to fac… · ☆236 · Updated last week
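One entry above mentions a minimal Mamba SSM implementation built on `logcumsumexp`. As a rough illustration of that primitive only (not code from that repository), here is a NumPy sketch of a numerically stable log-cumulative-sum-of-exponentials, using `np.logaddexp.accumulate`; the naive `log(cumsum(exp(x)))` overflows for large inputs where this version does not:

```python
import numpy as np

def logcumsumexp(x):
    # Stable log of the cumulative sum of exponentials:
    # mathematically equal to np.log(np.cumsum(np.exp(x))),
    # but computed pairwise in log space so exp() never overflows.
    return np.logaddexp.accumulate(x)

x = np.array([1000.0, 1000.0, 1000.0])

with np.errstate(over="ignore"):
    naive = np.log(np.cumsum(np.exp(x)))  # exp(1000) overflows -> inf

stable = logcumsumexp(x)  # [1000, 1000 + log 2, 1000 + log 3]
print(naive, stable)
```

Stable scans like this are what let sequence models evaluate long multiplicative recurrences in log space without underflow or overflow.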