yandex-research / DeDLOC
Official code for "Distributed Deep Learning in Open Collaborations" (NeurIPS 2021)
☆118 · Updated 3 years ago
Alternatives and similar repositories for DeDLOC
Users interested in DeDLOC are comparing it to the libraries listed below
- Implementation of the specific Transformer architecture from PaLM - Scaling Language Modeling with Pathways - in Jax (Equinox framework) ☆188 · Updated 3 years ago
- Amos optimizer with JEstimator lib. ☆82 · Updated last year
- A case study of efficient training of large language models using commodity hardware. ☆68 · Updated 3 years ago
- HomebrewNLP in JAX flavour for maintainable TPU-Training ☆51 · Updated last year
- DiffQ performs differentiable quantization using pseudo quantization noise. It can automatically tune the number of bits used per weight … (see the pseudo-noise sketch after this list) ☆236 · Updated 2 years ago
- Implementation of a Transformer, but completely in Triton ☆276 · Updated 3 years ago
- PyTorch implementation of the L2L execution algorithm ☆108 · Updated 2 years ago
- Experiments with generating open-source language model assistants ☆97 · Updated 2 years ago
- A minimal PyTorch Lightning OpenAI GPT with DeepSpeed training! ☆113 · Updated 2 years ago
- Python Research Framework ☆106 · Updated 3 years ago
- Torch Distributed Experimental ☆117 · Updated last year
- ☆62 · Updated 3 years ago
- "Towards Crowdsourced Training of Large Neural Networks using Decentralized Mixture-of-Experts" (NeurIPS 2020), original PyTorch implemen… ☆56 · Updated 4 years ago
- ☆252 · Updated last year
- Swarm training framework using Haiku + JAX + Ray for layer parallel transformer language models on unreliable, heterogeneous nodes ☆242 · Updated 2 years ago
- ☆66 · Updated 3 years ago
- My explorations into editing the knowledge and memories of an attention network ☆34 · Updated 2 years ago
- Various transformers for FSDP research ☆38 · Updated 2 years ago
- Exploring finetuning public checkpoints on filtered 8K sequences from the Pile ☆115 · Updated 2 years ago
- HetSeq: Distributed GPU Training on Heterogeneous Infrastructure ☆106 · Updated 2 years ago
- GPT, but made only out of MLPs ☆89 · Updated 4 years ago
- Babysit your preemptible TPUs ☆86 · Updated 2 years ago
- XtremeDistil framework for distilling/compressing massive multilingual neural network models to tiny and efficient models for AI at scale ☆156 · Updated last year
- OSLO: Open Source for Large-scale Optimization ☆174 · Updated 2 years ago
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data, but it should work with any Hugging Face text dataset. ☆96 · Updated 2 years ago
- Implementation of a Transformer that Ponders, using the scheme from the PonderNet paper ☆81 · Updated 4 years ago
- git extension for {collaborative, communal, continual} model development ☆215 · Updated 11 months ago
- Automatically take good care of your preemptible TPUs ☆37 · Updated 2 years ago
- Memory Efficient Attention (O(sqrt(n))) for Jax and PyTorch (see the chunked-attention sketch after this list) ☆183 · Updated 2 years ago
- A GPT, made only of MLPs, in Jax ☆58 · Updated 4 years ago
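
The DiffQ entry above names its core trick: during training, true rounding is replaced by additive uniform noise whose scale matches the quantization step, so gradients can flow to a learnable (even fractional) bit-width. Below is a minimal sketch of that idea in PyTorch; `pseudo_quantize` and the toy size-penalty loss are illustrative stand-ins, not DiffQ's actual API.

```python
import torch

def pseudo_quantize(w: torch.Tensor, bits: torch.Tensor) -> torch.Tensor:
    """Differentiable stand-in for b-bit uniform quantization of w.

    Instead of rounding (non-differentiable), add uniform noise whose
    magnitude matches the quantization step, so gradients reach both
    the weights and the learnable, possibly fractional, bit-width.
    """
    scale = w.detach().abs().max()
    delta = 2 * scale / (2.0 ** bits - 1)       # quantization step size
    noise = (torch.rand_like(w) - 0.5) * delta  # U(-delta/2, delta/2)
    return w + noise

# Hypothetical usage: a size penalty lets training trade task loss
# against the average number of bits per weight.
w = torch.randn(256, 256, requires_grad=True)
bits = torch.tensor(8.0, requires_grad=True)
out = pseudo_quantize(w, bits)
loss = out.pow(2).mean() + 1e-3 * bits          # toy task loss + size penalty
loss.backward()
```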
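Likewise, the Memory Efficient Attention entry refers to computing softmax attention over key/value chunks with running statistics, so the full n×n score matrix is never materialized. The sketch below shows the key-chunking half of that idea for a single head in PyTorch; the repo's full O(sqrt(n)) memory bound additionally chunks the queries and uses checkpointing, which this omits. `chunked_attention` and its `chunk` parameter are hypothetical names.

```python
import torch

def chunked_attention(q, k, v, chunk: int = 64):
    """Softmax attention computed over key/value chunks with a streaming
    (numerically stable) softmax: only one chunk of scores is live at a time."""
    n = k.shape[0]
    m = torch.full((q.shape[0], 1), float("-inf"))  # running row-wise max
    num = torch.zeros(q.shape[0], v.shape[1])       # running weighted sum of v
    den = torch.zeros(q.shape[0], 1)                # running softmax normalizer
    for i in range(0, n, chunk):
        s = q @ k[i:i + chunk].T / q.shape[1] ** 0.5     # scores for this chunk
        m_new = torch.maximum(m, s.max(dim=1, keepdim=True).values)
        p = torch.exp(s - m_new)
        correction = torch.exp(m - m_new)                # rescale old statistics
        num = num * correction + p @ v[i:i + chunk]
        den = den * correction + p.sum(dim=1, keepdim=True)
        m = m_new
    return num / den

# Usage on random single-head inputs of shape (sequence, dim):
q, k, v = (torch.randn(128, 64) for _ in range(3))
out = chunked_attention(q, k, v)   # matches full attention up to float error
```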