lessw2020 / transformer_central
Various transformers for FSDP research
☆37 · Updated 2 years ago
Alternatives and similar repositories for transformer_central:
Users interested in transformer_central are comparing it to the libraries listed below.
- HomebrewNLP in JAX flavour for maintainable TPU training ☆50 · Updated last year
- Fast, Modern, Memory Efficient, and Low Precision PyTorch Optimizers ☆92 · Updated 9 months ago
- ☆103 · Updated 11 months ago
- ☆20 · Updated last year
- A place to store reusable transformer components of my own creation or found on the interwebs ☆55 · Updated this week
- Some common Hugging Face transformers in maximal update parametrization (µP) ☆80 · Updated 3 years ago
- ☆78 · Updated 10 months ago
- ☆60 · Updated 3 years ago
- ☆17 · Updated last year
- A library for squeakily cleaning and filtering language datasets. ☆47 · Updated last year
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data, but it should work with any Hugging Face text dataset. ☆93 · Updated 2 years ago
- ☆19 · Updated 2 years ago
- A case study of efficient training of large language models using commodity hardware. ☆69 · Updated 2 years ago
- ☆67 · Updated 2 years ago
- QAmeleon introduces synthetic multilingual QA data using PaLM, a 540B large language model. This dataset was generated by prompt tuning P… ☆34 · Updated last year
- Experiment using Tangent to autodiff Triton ☆78 · Updated last year
- Learn CUDA with PyTorch ☆20 · Updated 3 months ago
- Automatically take good care of your preemptible TPUs ☆36 · Updated last year
- Utilities for Training Very Large Models ☆58 · Updated 7 months ago
- Exploring finetuning public checkpoints on filtered 8K sequences from the Pile ☆115 · Updated 2 years ago
- This repository contains example code to build models on TPUs ☆30 · Updated 2 years ago
- An implementation of the Llama architecture, to instruct and delight ☆21 · Updated 3 months ago
- My explorations into editing the knowledge and memories of an attention network ☆34 · Updated 2 years ago
- Triton Implementation of HyperAttention Algorithm ☆47 · Updated last year
- Code for NeurIPS LLM Efficiency Challenge ☆57 · Updated last year
- ☆49 · Updated last year
- Large scale 4D parallelism pre-training for 🤗 transformers in Mixture of Experts *(still work in progress)* ☆82 · Updated last year
- ☆14 · Updated last year
- Training and evaluation code for the paper "Headless Language Models: Learning without Predicting with Contrastive Weight Tying" (https:/… ☆26 · Updated last year
- Repository for Sparse Finetuning of LLMs via a modified version of the MosaicML llmfoundry ☆40 · Updated last year