yandex-research / DeDLOC
Official code for "Distributed Deep Learning in Open Collaborations" (NeurIPS 2021)
☆117 · Updated 3 years ago
Alternatives and similar repositories for DeDLOC
Users interested in DeDLOC are comparing it to the libraries listed below.
- A case study of efficient training of large language models using commodity hardware. ☆68 · Updated 3 years ago
- Implementation of the specific Transformer architecture from PaLM - Scaling Language Modeling with Pathways - in Jax (Equinox framework) ☆187 · Updated 3 years ago
- A minimal PyTorch Lightning OpenAI GPT with DeepSpeed training! ☆113 · Updated 2 years ago
- Amos optimizer with JEstimator lib. ☆82 · Updated last year
- DiffQ performs differentiable quantization using pseudo quantization noise. It can automatically tune the number of bits used per weight … ☆236 · Updated 2 years ago
- HetSeq: Distributed GPU Training on Heterogeneous Infrastructure ☆106 · Updated 2 years ago
- Implementation of a Transformer, but completely in Triton ☆273 · Updated 3 years ago
- ☆251 · Updated last year
- PyTorch implementation of L2L execution algorithm ☆108 · Updated 2 years ago
- Experiments with generating open-source language model assistants ☆97 · Updated 2 years ago
- This repository contains example code to build models on TPUs ☆30 · Updated 2 years ago
- "Towards Crowdsourced Training of Large Neural Networks using Decentralized Mixture-of-Experts" (NeurIPS 2020), original PyTorch implemen…☆57Updated 4 years ago
- OSLO: Open Source for Large-scale Optimization ☆175 · Updated last year
- Lite Inference Toolkit (LIT) for PyTorch ☆161 · Updated 3 years ago
- My explorations into editing the knowledge and memories of an attention network ☆35 · Updated 2 years ago
- Python Research Framework ☆106 · Updated 2 years ago
- Babysit your preemptible TPUs ☆86 · Updated 2 years ago
- Memory Efficient Attention (O(sqrt(n))) for Jax and PyTorch ☆184 · Updated 2 years ago
- Various transformers for FSDP research ☆38 · Updated 2 years ago
- ☆67 · Updated 3 years ago
- GPT, but made only out of MLPs ☆89 · Updated 4 years ago
- OSLO: Open Source framework for Large-scale model Optimization ☆309 · Updated 3 years ago
- XtremeDistil framework for distilling/compressing massive multilingual neural network models to tiny and efficient models for AI at scale ☆155 · Updated last year
- ☆61 · Updated 3 years ago
- Some common Huggingface transformers in maximal update parametrization (µP) ☆82 · Updated 3 years ago
- Swarm training framework using Haiku + JAX + Ray for layer parallel transformer language models on unreliable, heterogeneous nodes ☆241 · Updated 2 years ago
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data, but it should work with any Hugging Face text dataset. ☆93 · Updated 2 years ago
- Torch Distributed Experimental ☆117 · Updated last year
- A GPT, made only of MLPs, in Jax ☆58 · Updated 4 years ago
- Functional deep learning ☆108 · Updated 2 years ago