microsoft / mutransformers
Some common Huggingface transformers in maximal update parametrization (µP)
☆87 · Updated 3 years ago
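For reference, µP rescales initializations and per-layer learning rates so that hyperparameters tuned on a narrow model transfer to a wide one. Below is a minimal sketch of the typical base/delta/target workflow with Microsoft's `mup` package; the `mutransformers` imports, config sizes, and learning rate are illustrative assumptions, not details confirmed by this page.

```python
# Hedged sketch of a typical mup workflow; class re-exports and
# hyperparameter values below are assumptions for illustration.
from mup import make_base_shapes, set_base_shapes, MuAdamW
from mutransformers import BertConfig, BertForMaskedLM  # assumed re-exports

# A small "base" model and a "delta" model that differs in every width we scale.
base_model = BertForMaskedLM(
    BertConfig(hidden_size=256, intermediate_size=1024, num_attention_heads=8)
)
delta_model = BertForMaskedLM(
    BertConfig(hidden_size=512, intermediate_size=2048, num_attention_heads=8)
)
base_shapes = make_base_shapes(base_model, delta_model, savefile="bert.bsh")

# The wide "target" model: set base shapes, then re-initialize so the
# µP-aware init takes effect.
target_model = BertForMaskedLM(
    BertConfig(hidden_size=1024, intermediate_size=4096, num_attention_heads=16)
)
set_base_shapes(target_model, base_shapes)
target_model.apply(target_model._init_weights)

# Use a µP optimizer so learning rates scale correctly with width.
optimizer = MuAdamW(target_model.parameters(), lr=1e-3)
```

With this setup, a learning rate tuned on the small base model can be reused for the wide target model, which is the main practical payoff of µP.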
Alternatives and similar repositories for mutransformers
Users who are interested in mutransformers are comparing it to the libraries listed below.
- Large scale 4D parallelism pre-training for 🤗 transformers in Mixture of Experts *(still work in progress)* ☆87 · Updated 2 years ago
- Code for the paper "The Impact of Positional Encoding on Length Generalization in Transformers", NeurIPS 2023 ☆138 · Updated last year
- A MAD laboratory to improve AI architecture designs 🧪 ☆135 · Updated last year
- ☆82 · Updated last year
- Language models scale reliably with over-training and on downstream tasks ☆100 · Updated last year
- ☆76 · Updated last year
- Code repository for the c-BTM paper ☆108 · Updated 2 years ago
- Understand and test language model architectures on synthetic tasks. ☆246 · Updated 2 months ago
- Exploring finetuning public checkpoints on filtered 8K sequences on the Pile ☆116 · Updated 2 years ago
- Randomized Positional Encodings Boost Length Generalization of Transformers ☆83 · Updated last year
- ☆121 · Updated last year
- Code for exploring Based models from "Simple linear attention language models balance the recall-throughput tradeoff" ☆243 · Updated 6 months ago
- Minimal (400 LOC) implementation of maximum (multi-node, FSDP) GPT training ☆132 · Updated last year
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ☆180 · Updated 5 months ago
- The source code of our work "Prepacking: A Simple Method for Fast Prefilling and Increased Throughput in Large Language Models" [AISTATS …] ☆60 · Updated last year
- Experiments for efforts to train a new and improved t5 ☆76 · Updated last year
- ☆167 · Updated 2 years ago
- A fast implementation of T5/UL2 in PyTorch using Flash Attention ☆112 · Updated last month
- ☆53 · Updated last year
- nanoGPT-like codebase for LLM training ☆113 · Updated last month
- Code for Zero-Shot Tokenizer Transfer ☆142 · Updated 11 months ago
- A toolkit for scaling law research ☆53 · Updated 10 months ago
- Supercharge huggingface transformers with model parallelism. ☆77 · Updated 4 months ago
- Simple and efficient pytorch-native transformer training and inference (batched) ☆79 · Updated last year
- Triton Implementation of HyperAttention Algorithm ☆48 · Updated 2 years ago
- Fast, Modern, and Low Precision PyTorch Optimizers ☆118 · Updated 3 months ago
- A set of Python scripts that makes your experience on TPU better ☆54 · Updated 3 months ago
- JAX implementation of the Llama 2 model ☆216 · Updated last year
- Official repository of Pretraining Without Attention (BiGS); BiGS is the first model to achieve BERT-level transfer learning on the GLUE … ☆116 · Updated last year
- Implementation of the Llama architecture with RLHF + Q-learning ☆168 · Updated 10 months ago