lessw2020 / transformer_central
Various transformers for FSDP research
☆33 · Updated last year
Related projects
Alternatives and complementary repositories for transformer_central
- Fast, Modern, Memory Efficient, and Low Precision PyTorch Optimizers ☆58 · Updated 3 months ago
- HomebrewNLP in JAX flavour for maintainable TPU training ☆46 · Updated 9 months ago
- Scalable and Performant Data Loading ☆42 · Updated this week
- My explorations into editing the knowledge and memories of an attention network ☆34 · Updated last year
- A place to store reusable transformer components of my own creation or found on the interwebs ☆43 · Updated this week
- A dashboard for exploring timm learning rate schedulers ☆18 · Updated last year
- Some common Hugging Face transformers in maximal update parametrization (µP) ☆76 · Updated 2 years ago
- Experiment of using Tangent to autodiff Triton ☆71 · Updated 9 months ago
- A library for squeakily cleaning and filtering language datasets ☆45 · Updated last year
- This repository contains example code to build models on TPUs ☆30 · Updated last year
- Repository for fine-tuning Transformers 🤗 based seq2seq speech models in JAX/Flax ☆34 · Updated last year
- A fast implementation of T5/UL2 in PyTorch using Flash Attention ☆67 · Updated 3 weeks ago
- A case study of efficient training of large language models using commodity hardware ☆68 · Updated 2 years ago
- Learn CUDA with PyTorch ☆14 · Updated this week
- QAmeleon introduces synthetic multilingual QA data using PaLM, a 540B large language model. This dataset was generated by prompt tuning P… ☆34 · Updated last year
- Triton implementation of the HyperAttention algorithm ☆46 · Updated 10 months ago
- Exploring fine-tuning public checkpoints on filtered 8K sequences from the Pile ☆115 · Updated last year
- Utilities for training very large models ☆56 · Updated last month
- Demo of the unit_scaling library, showing how a model can be easily adapted to train in FP8 ☆35 · Updated 3 months ago
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data; it should work with any Hugging Face text dataset ☆92 · Updated last year
- RWKV model implementation ☆38 · Updated last year
- An implementation of the Llama architecture, to instruct and delight ☆21 · Updated 2 months ago