warner-benjamin / optimi
Fast, Modern, Memory Efficient, and Low Precision PyTorch Optimizers
☆53 · Updated 2 months ago
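A minimal usage sketch, not taken from this README: it assumes optimi's optimizers mirror the standard `torch.optim` constructor API, that the package installs as `torch-optimi`, and that a `kahan_sum` flag controls Kahan-summation updates (all assumptions inferred from the project's low-precision focus, not confirmed here).

```python
import torch
import torch.nn as nn
from optimi import AdamW  # assumed install: pip install torch-optimi

# Train directly in bfloat16; Kahan summation (assumed flag) is meant to
# compensate for the rounding error of low-precision weight updates.
model = nn.Linear(256, 256, dtype=torch.bfloat16)
optimizer = AdamW(model.parameters(), lr=1e-3, weight_decay=1e-2, kahan_sum=True)

x = torch.randn(32, 256, dtype=torch.bfloat16)
loss = model(x).float().pow(2).mean()
loss.backward()
optimizer.step()
optimizer.zero_grad()
```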
Related projects:
- HomebrewNLP in JAX flavour for maintainable TPU training ☆46 · Updated 7 months ago
- Exploring finetuning public checkpoints on filtered 8K sequences from the Pile ☆115 · Updated last year
- A fast implementation of T5/UL2 in PyTorch using Flash Attention ☆60 · Updated this week
- Various transformers for FSDP research ☆31 · Updated last year
- A library for squeakily cleaning and filtering language datasets. ☆45 · Updated last year
- Yet another random morning idea to be quickly tried and architecture shared if it works; to allow the transformer to pause for any amount… ☆50 · Updated 10 months ago
- Cold Compress is a hackable, lightweight, and open-source toolkit for creating and benchmarking cache compression methods built on top of… ☆73 · Updated last month
- Some common Huggingface transformers in maximal update parametrization (µP) ☆76 · Updated 2 years ago
- Collection of autoregressive model implementations ☆62 · Updated 2 weeks ago
- Experiments with generating open-source language model assistants ☆97 · Updated last year
- Understand and test language model architectures on synthetic tasks. ☆156 · Updated 4 months ago
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data, but it should work with any Hugging Face text dataset. ☆91 · Updated last year
- Experiment in using Tangent to autodiff Triton ☆66 · Updated 7 months ago
- Utilities for PyTorch distributed ☆23 · Updated 11 months ago
- Experiments toward training a new and improved T5 ☆76 · Updated 5 months ago
- QLoRA with Enhanced Multi-GPU Support ☆36 · Updated last year
- QAmeleon introduces synthetic multilingual QA data using PaLM, a 540B large language model. This dataset was generated by prompt tuning P… ☆33 · Updated last year
- Demonstration that finetuning a RoPE model on longer sequences than it was pretrained on extends the model's context limit (see the sketch after this list) ☆62 · Updated last year
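The RoPE context-extension entry above works because rotary embeddings are a pure function of position, so the same pretrained weights can be evaluated, and then finetuned, at positions past the pretraining window. A hypothetical, self-contained sketch (function names, shapes, and the 8192-token example are illustrative, not from any listed repo):

```python
import torch

def rope_angles(dim: int, positions: torch.Tensor, base: float = 10000.0) -> torch.Tensor:
    """Standard RoPE angles theta_{p,i} = p / base^(2i/dim), shape (seq, dim/2)."""
    inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2).float() / dim))
    return torch.outer(positions.float(), inv_freq)

def apply_rope(x: torch.Tensor, angles: torch.Tensor) -> torch.Tensor:
    """GPT-NeoX-style rotation: pair the two channel halves and rotate each pair."""
    cos = torch.cat([angles.cos(), angles.cos()], dim=-1)
    sin = torch.cat([angles.sin(), angles.sin()], dim=-1)
    x1, x2 = x.chunk(2, dim=-1)
    return x * cos + torch.cat([-x2, x1], dim=-1) * sin

# Rotations at positions past a short pretraining window (say 512 tokens) are
# just as well defined; finetuning on long sequences lets attention adapt to
# the previously unseen angles.
q = torch.randn(8192, 64)
q_long = apply_rope(q, rope_angles(64, torch.arange(8192)))
```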