yandex-research / DeDLOC
Official code for "Distributed Deep Learning in Open Collaborations" (NeurIPS 2021)
☆116 · Updated 3 years ago
Alternatives and similar repositories for DeDLOC
Users interested in DeDLOC are comparing it to the libraries listed below.
- PyTorch implementation of the L2L execution algorithm ☆107 · Updated 2 years ago
- ☆250 · Updated 10 months ago
- A case study of efficient training of large language models using commodity hardware. ☆69 · Updated 2 years ago
- A minimal PyTorch Lightning OpenAI GPT with DeepSpeed training! ☆111 · Updated 2 years ago
- ☆67 · Updated 2 years ago
- DiffQ performs differentiable quantization using pseudo quantization noise. It can automatically tune the number of bits used per weight … ☆235 · Updated 2 years ago
- Implementation of the specific Transformer architecture from PaLM - Scaling Language Modeling with Pathways - in Jax (Equinox framework) ☆187 · Updated 2 years ago
- A diff tool for language models ☆42 · Updated last year
- Compression scheme for activation gradients in the backward pass ☆44 · Updated last year
- "Towards Crowdsourced Training of Large Neural Networks using Decentralized Mixture-of-Experts" (NeurIPS 2020), original PyTorch implemen…☆56Updated 4 years ago
- Various transformers for FSDP research☆37Updated 2 years ago
- Implementation of a Transformer, but completely in Triton☆266Updated 3 years ago
- Amos optimizer with JEstimator lib.☆82Updated last year
- Some common Hugging Face transformers in maximal update parametrization (µP) ☆80 · Updated 3 years ago
- Torch Distributed Experimental ☆117 · Updated 10 months ago
- Fast sparse deep learning on CPUs ☆53 · Updated 2 years ago
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data, but it should work with any Hugging Face text dataset. ☆93 · Updated 2 years ago
- Memory-efficient transformer. Work in progress. ☆19 · Updated 2 years ago
- Blazing fast training of 🤗 Transformers on Graphcore IPUs ☆85 · Updated last year
- ☆78 · Updated last year
- HomebrewNLP in JAX flavour for maintainable TPU training ☆50 · Updated last year
- HetSeq: Distributed GPU Training on Heterogeneous Infrastructure ☆106 · Updated last year
- Python Research Framework ☆106 · Updated 2 years ago
- My explorations into editing the knowledge and memories of an attention network ☆35 · Updated 2 years ago
- Memory Efficient Attention (O(sqrt(n))) for JAX and PyTorch ☆184 · Updated 2 years ago
- Swarm training framework using Haiku + JAX + Ray for layer parallel transformer language models on unreliable, heterogeneous nodes ☆239 · Updated 2 years ago
- XtremeDistil framework for distilling/compressing massive multilingual neural network models to tiny and efficient models for AI at scale ☆155 · Updated last year
- OSLO: Open Source for Large-scale Optimization ☆174 · Updated last year
- Code for the Shortformer model, from the ACL 2021 paper by Ofir Press, Noah A. Smith and Mike Lewis. ☆147 · Updated 3 years ago
- Inference code for LLaMA models in JAX ☆118 · Updated last year