microsoft / mutransformers
Some common Hugging Face transformers in maximal update parametrization (µP)
☆82 · Updated 3 years ago
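The repository packages Hugging Face model classes that follow the µP scaling rules, so optimizer hyperparameters tuned on a narrow proxy model transfer to a wider one. As a rough orientation, the sketch below shows the typical setup pattern with the companion `mup` package (`set_base_shapes`, `MuAdamW`); the model classes come from `mutransformers`, and the specific config widths are illustrative assumptions rather than recommended values.

```python
# Minimal sketch, not an official example: wiring a mutransformers model up with the
# companion `mup` package so that optimizer hyperparameters transfer across model widths.
# All config sizes below are illustrative assumptions.
from mutransformers import BertConfig, BertForMaskedLM
from mup import set_base_shapes, MuAdamW

# Small "base" and "delta" models are only used to infer which dimensions scale with width.
base_model = BertForMaskedLM(BertConfig(hidden_size=256, intermediate_size=1024, num_attention_heads=8))
delta_model = BertForMaskedLM(BertConfig(hidden_size=128, intermediate_size=512, num_attention_heads=4))

# The target model is the one actually trained, at whatever width the budget allows.
target_model = BertForMaskedLM(BertConfig(hidden_size=1024, intermediate_size=4096, num_attention_heads=16))

# Register base shapes so the µP scaling rules apply, re-initialize the weights under µP,
# and use a µP-aware optimizer.
set_base_shapes(target_model, base_model, delta=delta_model)
target_model.apply(target_model._init_weights)
optimizer = MuAdamW(target_model.parameters(), lr=1e-3)
```

The point of this setup is that the learning rate (and other optimizer hyperparameters) found for the small proxy model is expected to remain near-optimal for the wider target model.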
Alternatives and similar repositories for mutransformers
Users interested in mutransformers are comparing it to the libraries listed below
- Large-scale 4D parallelism pre-training for 🤗 transformers in Mixture of Experts *(still work in progress)* ☆86 · Updated last year
- Language models scale reliably with over-training and on downstream tasks ☆97 · Updated last year
- A MAD laboratory to improve AI architecture designs 🧪 ☆123 · Updated 7 months ago
- ☆75 · Updated last year
- Understand and test language model architectures on synthetic tasks. ☆221 · Updated 2 weeks ago
- Code for the paper "The Impact of Positional Encoding on Length Generalization in Transformers", NeurIPS 2023 ☆136 · Updated last year
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ☆149 · Updated last month
- ☆166 · Updated 2 years ago
- Exploring finetuning public checkpoints on filtered 8K sequences from the Pile ☆116 · Updated 2 years ago
- Code repository for the c-BTM paper ☆107 · Updated last year
- Inference code for LLaMA models in JAX ☆118 · Updated last year
- ☆81 · Updated last year
- ☆113 · Updated last year
- Code for exploring Based models from "Simple linear attention language models balance the recall-throughput tradeoff" ☆238 · Updated last month
- The source code of our work "Prepacking: A Simple Method for Fast Prefilling and Increased Throughput in Large Language Models" [AISTATS … ☆61 · Updated 9 months ago
- Code for Zero-Shot Tokenizer Transfer ☆133 · Updated 6 months ago
- [NeurIPS 2023] Learning Transformer Programs ☆162 · Updated last year
- ☆53 · Updated last year
- A fast implementation of T5/UL2 in PyTorch using Flash Attention ☆107 · Updated 4 months ago
- ☆45 · Updated last year
- Triton Implementation of HyperAttention Algorithm ☆48 · Updated last year
- Experiments for efforts to train a new and improved T5 ☆76 · Updated last year
- nanoGPT-like codebase for LLM training ☆102 · Updated 2 months ago
- JAX implementation of the Llama 2 model ☆219 · Updated last year
- Minimal (400 LOC) implementation of maximal (multi-node, FSDP) GPT training ☆130 · Updated last year
- Large language models (LLMs) made easy: EasyLM is a one-stop solution for pre-training, finetuning, evaluating and serving LLMs in JAX/Fl… ☆75 · Updated 11 months ago
- ☆39 · Updated last year
- Randomized Positional Encodings Boost Length Generalization of Transformers ☆82 · Updated last year
- A fusion of a linear layer and a cross-entropy loss, written for PyTorch in Triton. ☆70 · Updated last year
- Official repository of Pretraining Without Attention (BiGS); BiGS is the first model to achieve BERT-level transfer learning on the GLUE … ☆114 · Updated last year