yandex-research / DeDLOC
Official code for "Distributed Deep Learning in Open Collaborations" (NeurIPS 2021)
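DeDLOC pools compute from volunteer peers over the internet: each participant contributes gradients locally, and the collaboration averages updates across peers once a global target batch size is reached. The reference implementation builds on the hivemind library. Below is a minimal sketch of that pattern following hivemind's public quickstart; the peer address, run ID, and batch sizes are placeholders, and argument names may differ from the exact API this repository uses:

```python
import torch
import hivemind

# Join the collaboration's DHT (the peer multiaddress below is a placeholder).
dht = hivemind.DHT(
    initial_peers=["/ip4/203.0.113.1/tcp/31337/p2p/PEER_ID_PLACEHOLDER"],
    start=True,
)

model = torch.nn.Linear(512, 10)
base_optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Wrap the local optimizer: with use_local_updates=True each peer steps
# locally and peers periodically average parameters; target_batch_size
# controls how the collaboration paces its synchronization rounds.
optimizer = hivemind.Optimizer(
    dht=dht,
    run_id="demo_run",           # placeholder experiment name
    batch_size_per_step=32,      # samples this peer contributes per step
    target_batch_size=10_000,    # global batch size per collaborative round
    optimizer=base_optimizer,
    use_local_updates=True,
    verbose=True,
)

for _ in range(100):
    loss = model(torch.randn(32, 512)).pow(2).mean()  # dummy objective
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```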
Related projects:
- Implementation of a Transformer, but completely in Triton
- Implementation of the specific Transformer architecture from PaLM - Scaling Language Modeling with Pathways - in Jax (Equinox framework)
- PyTorch implementation of L2L execution algorithm
- Some common Hugging Face transformers in maximal update parametrization (µP)
- Torch Distributed Experimental
- Various transformers for FSDP research
- DiffQ performs differentiable quantization using pseudo quantization noise. It can automatically tune the number of bits used per weight … (a toy sketch of the idea follows this list)
- HomebrewNLP in JAX flavour for maintainable TPU training
- A minimal PyTorch Lightning OpenAI GPT with DeepSpeed training
- A performant, memory-efficient checkpointing library for PyTorch applications, designed with large, complex distributed workloads in mind…
- Amos optimizer with JEstimator lib.
- A case study of efficient training of large language models using commodity hardware.
- Inference code for LLaMA models in JAX
- Swarm training framework using Haiku + JAX + Ray for layer-parallel transformer language models on unreliable, heterogeneous nodes
- OSLO: Open Source for Large-scale Optimization
- Python Research Framework
- Exploring finetuning public checkpoints on filtered 8K sequences from the Pile
- This repository contains example code to build models on TPUs
- Train very large language models in Jax.
- OSLO: Open Source framework for Large-scale model Optimization
- Babysit your preemptible TPUs
- Context manager to profile the forward and backward times of PyTorch's nn.Module (see the sketch after this list)
- XtremeDistil framework for distilling/compressing massive multilingual neural network models to tiny and efficient models for AI at scale
- Repository containing code for "How to Train BERT with an Academic Budget" paper
- Official code for "SWARM Parallelism: Training Large Models Can Be Surprisingly Communication-Efficient"
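The DiffQ entry above describes its core mechanism: instead of rounding weights during training, add uniform noise scaled like the quantization step, which keeps the bit-width differentiable so it can be learned jointly with a model-size penalty. Here is a toy sketch of that idea, not DiffQ's actual API; treating `bits` as a single learnable scalar is an assumption for illustration:

```python
import torch

def pseudo_quantization_noise(weight: torch.Tensor, bits: torch.Tensor) -> torch.Tensor:
    """Simulate b-bit uniform quantization with additive noise.

    Adding U(-delta/2, delta/2) noise, where delta is the quantization step,
    mimics rounding error while keeping `bits` differentiable
    (toy sketch, not DiffQ's actual implementation).
    """
    span = weight.max() - weight.min()
    delta = span / (2.0 ** bits - 1.0)  # quantization step for `bits` bits
    noise = torch.empty_like(weight).uniform_(-0.5, 0.5)
    return weight + delta * noise

# Train a continuous bit-width alongside the weights, trading accuracy
# against model size via a penalty on `bits`.
bits = torch.tensor(4.0, requires_grad=True)
w = torch.randn(256, 256, requires_grad=True)
loss = pseudo_quantization_noise(w, bits).pow(2).mean() + 0.01 * bits
loss.backward()  # gradients flow to both w and bits through delta
```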
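The profiling context manager entry above can be illustrated with PyTorch's standard hook API. This is a generic sketch of the technique, not that repository's actual interface; the class and attribute names are hypothetical:

```python
import time
import torch
import torch.nn as nn

class ModuleTimer:
    """Accumulate wall-clock forward/backward time of a module via hooks.

    Generic sketch (requires PyTorch >= 2.0 for backward pre-hooks), not the
    linked repository's API. For CUDA modules you would additionally need
    torch.cuda.synchronize() or CUDA events for accurate timings.
    """

    def __init__(self, module: nn.Module):
        self.module = module
        self.forward_time = 0.0
        self.backward_time = 0.0
        self._handles = []

    def __enter__(self):
        m = self.module
        self._handles = [
            m.register_forward_pre_hook(self._fwd_start),
            m.register_forward_hook(self._fwd_end),
            m.register_full_backward_pre_hook(self._bwd_start),
            m.register_full_backward_hook(self._bwd_end),
        ]
        return self

    def __exit__(self, *exc):
        for handle in self._handles:  # detach all hooks on exit
            handle.remove()

    def _fwd_start(self, module, args):
        self._t0 = time.perf_counter()

    def _fwd_end(self, module, args, output):
        self.forward_time += time.perf_counter() - self._t0

    def _bwd_start(self, module, grad_output):
        self._t1 = time.perf_counter()

    def _bwd_end(self, module, grad_input, grad_output):
        self.backward_time += time.perf_counter() - self._t1

model = nn.Linear(512, 512)
with ModuleTimer(model) as timer:
    for _ in range(10):
        model(torch.randn(32, 512)).sum().backward()
print(f"forward: {timer.forward_time:.4f}s  backward: {timer.backward_time:.4f}s")
```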