yandex-research / DeDLOC
Official code for "Distributed Deep Learning in Open Collaborations" (NeurIPS 2021)
☆116 · Updated 2 years ago
Related projects
Alternatives and complementary repositories for DeDLOC
- PyTorch implementation of the L2L execution algorithm ☆106 · Updated last year
- A performant, memory-efficient checkpointing library for PyTorch applications, designed with large, complex distributed workloads in mind… ☆146 · Updated 2 weeks ago
- Implementation of a Transformer, but completely in Triton ☆249 · Updated 2 years ago
- DiffQ performs differentiable quantization using pseudo quantization noise. It can automatically tune the number of bits used per weight … (see the pseudo-quantization sketch after this list) ☆234 · Updated last year
- Implementation of the specific Transformer architecture from PaLM (Scaling Language Modeling with Pathways) in JAX (Equinox framework) ☆185 · Updated 2 years ago
- Torch Distributed Experimental ☆116 · Updated 3 months ago
- Some common Hugging Face transformers in maximal update parametrization (µP) ☆76 · Updated 2 years ago
- "Towards Crowdsourced Training of Large Neural Networks using Decentralized Mixture-of-Experts" (NeurIPS 2020), original PyTorch implemen…☆54Updated 4 years ago
- A case study of efficient training of large language models using commodity hardware.☆68Updated 2 years ago
- Amos optimizer with JEstimator lib.☆81Updated 6 months ago
- A minimal PyTorch Lightning OpenAI GPT with DeepSpeed training! ☆111 · Updated last year
- Memory Efficient Attention (O(sqrt(n))) for JAX and PyTorch (see the chunked-attention sketch after this list) ☆179 · Updated last year
- Swarm training framework using Haiku + JAX + Ray for layer-parallel transformer language models on unreliable, heterogeneous nodes ☆237 · Updated last year
- Experiments with generating open-source language model assistants ☆97 · Updated last year
- OSLO: Open Source framework for Large-scale model Optimization ☆306 · Updated 2 years ago
- Learning to Initialize Neural Networks for Stable and Efficient Training ☆136 · Updated 2 years ago
- A GPT made only of MLPs, in JAX ☆55 · Updated 3 years ago
- This repository contains example code to build models on TPUs ☆30 · Updated last year
- Research and development for optimizing transformers ☆125 · Updated 3 years ago
- Python Research Framework ☆107 · Updated 2 years ago
- Train very large language models in JAX. ☆195 · Updated last year
- Context manager to profile the forward and backward times of PyTorch's nn.Module (see the profiling sketch after this list) ☆83 · Updated last year
- Compression schema for gradients of activations in the backward pass ☆44 · Updated last year
- HomebrewNLP in JAX flavour for maintainable TPU training ☆46 · Updated 10 months ago
- XtremeDistil framework for distilling/compressing massive multilingual neural network models to tiny and efficient models for AI at scale ☆153 · Updated 11 months ago
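
The DiffQ entry above describes differentiable quantization via pseudo quantization noise. Below is a minimal sketch of that idea, assuming uniform quantization over the tensor's range; `pseudo_quantize` and the penalty weight are illustrative names, not DiffQ's actual API.

```python
import torch

def pseudo_quantize(w: torch.Tensor, bits: torch.Tensor) -> torch.Tensor:
    """Differentiable stand-in for uniform `bits`-bit quantization of `w`.

    True quantization rounds `w` onto a grid of 2**bits levels, which has
    zero gradient almost everywhere. The pseudo-quantization-noise trick
    replaces rounding with additive uniform noise of the same magnitude as
    the quantization step, so the loss stays differentiable in both `w`
    and `bits`.
    """
    span = w.max() - w.min()
    delta = span / (2.0 ** bits - 1.0)      # step size, differentiable in `bits`
    noise = torch.empty_like(w).uniform_(-0.5, 0.5)
    return w + delta * noise                # noisy proxy for round-to-grid

# Toy usage: `bits` is trainable, and a small penalty on it trades model
# size against task loss (all names here are illustrative).
w = torch.randn(256, 256, requires_grad=True)
bits = torch.tensor(8.0, requires_grad=True)
loss = pseudo_quantize(w, bits).pow(2).mean() + 1e-3 * bits
loss.backward()
print(bits.grad)  # non-zero: the bit width participates in optimization
```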
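
The memory-efficient attention entry above refers to computing exact attention without ever materializing the full n×n score matrix. Here is a minimal single-head PyTorch sketch of the underlying chunked, streaming-softmax computation; `chunked_attention` and the chunk sizes are assumptions for illustration, not the repo's API.

```python
import torch

def chunked_attention(q, k, v, q_chunk=64, k_chunk=64):
    """Exact softmax attention computed block by block.

    Only one (q_chunk x k_chunk) score block exists at any time; a running
    max and denominator keep the online softmax numerically stable.
    """
    n, d = q.shape
    out = torch.empty_like(q)
    scale = d ** -0.5
    for i in range(0, n, q_chunk):
        qi = q[i:i + q_chunk] * scale
        acc = torch.zeros(qi.shape[0], v.shape[1], dtype=q.dtype)
        denom = torch.zeros(qi.shape[0], 1, dtype=q.dtype)
        running_max = torch.full((qi.shape[0], 1), float("-inf"), dtype=q.dtype)
        for j in range(0, n, k_chunk):
            s = qi @ k[j:j + k_chunk].T                    # one score block
            new_max = torch.maximum(running_max, s.max(dim=-1, keepdim=True).values)
            correction = torch.exp(running_max - new_max)  # rescale old state
            p = torch.exp(s - new_max)
            acc = acc * correction + p @ v[j:j + k_chunk]
            denom = denom * correction + p.sum(dim=-1, keepdim=True)
            running_max = new_max
        out[i:i + q_chunk] = acc / denom
    return out

# Sanity check against the naive quadratic-memory formula.
q, k, v = (torch.randn(128, 32) for _ in range(3))
ref = torch.softmax(q @ k.T * 32 ** -0.5, dim=-1) @ v
assert torch.allclose(chunked_attention(q, k, v), ref, atol=1e-5)
```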
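
The profiling entry above is a context manager around forward/backward timing. The sketch below shows one way such a tool can be built from PyTorch's module hooks (CPU wall-clock only; a real implementation would also synchronize CUDA). All names are illustrative, and `register_full_backward_pre_hook` requires PyTorch 2.0+.

```python
import time
from contextlib import contextmanager

import torch
from torch import nn

@contextmanager
def profile_module(module: nn.Module):
    """Accumulate wall-clock seconds spent in `module`'s forward/backward."""
    times = {"forward": 0.0, "backward": 0.0}
    marks = {}

    def fwd_pre(mod, args):
        marks["fwd"] = time.perf_counter()

    def fwd_post(mod, args, output):
        times["forward"] += time.perf_counter() - marks["fwd"]

    def bwd_pre(mod, grad_output):
        marks["bwd"] = time.perf_counter()

    def bwd_post(mod, grad_input, grad_output):
        times["backward"] += time.perf_counter() - marks["bwd"]

    handles = [
        module.register_forward_pre_hook(fwd_pre),
        module.register_forward_hook(fwd_post),
        module.register_full_backward_pre_hook(bwd_pre),  # PyTorch >= 2.0
        module.register_full_backward_hook(bwd_post),
    ]
    try:
        yield times
    finally:
        for h in handles:
            h.remove()  # leave the module unmodified afterwards

model = nn.Linear(512, 512)
with profile_module(model) as times:
    # Input requires grad so the backward hooks are guaranteed to fire.
    model(torch.randn(64, 512, requires_grad=True)).sum().backward()
print(times)  # e.g. {'forward': 0.0004, 'backward': 0.0007}
```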