PrincetonUniversity / multi_gpu_training
☆370 · Updated 4 months ago
Alternatives and similar repositories for multi_gpu_training
Users interested in multi_gpu_training are comparing it to the libraries listed below
- Annotated version of the Mamba paper ☆495 · Updated last year
- Example of how to use Weights & Biases on Slurm ☆119 · Updated 3 years ago
- VICReg official code base ☆553 · Updated 2 years ago
- Implementation of ST-MoE, the latest incarnation of MoE after years of research at Brain, in Pytorch ☆378 · Updated last year
- Implementation of Rotary Embeddings, from the Roformer paper, in Pytorch ☆802 · Updated last week
- Python 3.8+ toolbox for submitting jobs to Slurm ☆1,571 · Updated 3 weeks ago
- For optimization algorithm research and development. ☆558 · Updated 3 weeks ago
- Reliable, minimal and scalable library for pretraining foundation and world models ☆123 · Updated last week
- A convenient way to trigger synchronizations to wandb / Weights & Biases if your compute nodes don't have internet! ☆89 · Updated this week
- ☆57 · Updated last year
- A curated list of papers with interesting empirical studies and insights on deep learning. Continually updating... ☆390 · Updated last month
- MLCommons Algorithmic Efficiency is a benchmark and competition measuring neural network training speedups due to algorithmic improvement… ☆406 · Updated this week
- Helpful tools and examples for working with flex-attention ☆1,118 · Updated 3 weeks ago
- Implementation of Diffusion Transformer (DiT) in JAX ☆306 · Updated last year
- TensorDict is a tensor container dedicated to PyTorch. ☆1,003 · Updated last week
- FFCV-SSL: Fast Forward Computer Vision for Self-Supervised Learning. ☆210 · Updated 2 years ago
- ☆234 · Updated 11 months ago
- Helps you write algorithms in PyTorch that adapt to the available (CUDA) memory ☆438 · Updated last year
- Code for our NeurIPS 2022 paper ☆371 · Updated 3 years ago
- Building blocks for foundation models. ☆599 · Updated 2 years ago
- CIFAR-10 speedruns: 94% in 2.6 seconds and 96% in 27 seconds ☆352 · Updated 2 months ago
- Implementation of a memory-efficient multi-head attention as proposed in the paper, "Self-attention Does Not Need O(n²) Memory" ☆389 · Updated 2 years ago
- Reading list for research topics in state-space models ☆344 · Updated 7 months ago
- ☆246 · Updated last year
- A simple command line tool to show GPU usage on a SLURM cluster ☆115 · Updated last year
- Tensors, for human consumption ☆1,353 · Updated 2 weeks ago
- Universal Notation for Tensor Operations in Python. ☆464 · Updated 10 months ago
- Legible, Scalable, Reproducible Foundation Models with Named Tensors and Jax ☆693 · Updated last week
- Implementation of 💍 Ring Attention, from Liu et al. at Berkeley AI, in Pytorch ☆549 · Updated 8 months ago
- Simple, minimal implementation of the Mamba SSM in one pytorch file. Using logcumsumexp (Heisen sequence). ☆130 · Updated last year