yandex-research / DeDLOC
Official code for "Distributed Deep Learning in Open Collaborations" (NeurIPS 2021)
☆116 · Updated 3 years ago
Alternatives and similar repositories for DeDLOC
Users interested in DeDLOC are comparing it to the libraries listed below
- PyTorch implementation of the L2L execution algorithm ☆107 · Updated 2 years ago
- ☆250 · Updated 9 months ago
- Some common Hugging Face transformers in maximal update parametrization (µP) ☆80 · Updated 3 years ago
- Implementation of a Transformer, but completely in Triton ☆265 · Updated 3 years ago
- ☆67 · Updated 2 years ago
- Implementation of the specific Transformer architecture from PaLM (Scaling Language Modeling with Pathways) in Jax (Equinox framework) ☆188 · Updated 2 years ago
- A case study of efficient training of large language models using commodity hardware. ☆69 · Updated 2 years ago
- A minimal PyTorch Lightning OpenAI GPT with DeepSpeed training! ☆111 · Updated 2 years ago
- Various transformers for FSDP research ☆37 · Updated 2 years ago
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data; it should work with any Hugging Face text dataset. ☆93 · Updated 2 years ago
- Torch Distributed Experimental ☆115 · Updated 9 months ago
- OSLO: Open Source for Large-scale Optimization ☆175 · Updated last year
- Experiments with generating open-source language model assistants ☆97 · Updated 2 years ago
- Exploring finetuning public checkpoints on filtered 8K sequences on the Pile ☆115 · Updated 2 years ago
- Amos optimizer with JEstimator lib. ☆82 · Updated last year
- Compression scheme for gradients of activations in the backward pass ☆44 · Updated last year
- HomebrewNLP in JAX flavour for maintainable TPU training ☆50 · Updated last year
- A performant, memory-efficient checkpointing library for PyTorch applications, designed with large, complex distributed workloads in mind… ☆157 · Updated 5 months ago
- Inference code for LLaMA models in JAX ☆118 · Updated 11 months ago
- ☆59 · Updated 3 years ago
- Python Research Framework ☆106 · Updated 2 years ago
- Train very large language models in Jax. ☆204 · Updated last year
- Swarm training framework using Haiku + JAX + Ray for layer-parallel transformer language models on unreliable, heterogeneous nodes ☆239 · Updated 2 years ago
- ☆106 · Updated 11 months ago
- ☆186 · Updated 2 weeks ago
- Memory Efficient Attention (O(sqrt(n))) for Jax and PyTorch ☆184 · Updated 2 years ago
- Blazing-fast training of 🤗 Transformers on Graphcore IPUs ☆85 · Updated last year
- XtremeDistil framework for distilling/compressing massive multilingual neural network models to tiny and efficient models for AI at scale ☆154 · Updated last year
- Official PyTorch implementation of Length-Adaptive Transformer (ACL 2021) ☆101 · Updated 4 years ago
- GPU tester that detects broken and slow GPUs in a cluster ☆70 · Updated 2 years ago