Modalities / modalities
Modalities is a PyTorch-native framework for distributed and reproducible foundation model training.
☆91 · Updated this week
Alternatives and similar repositories for modalities
Users interested in modalities are comparing it to the libraries listed below.
- nanoGPT-like codebase for LLM training ☆113 · Updated last month
- Some common Hugging Face transformers in maximal update parametrization (µP) ☆87 · Updated 3 years ago
- Interpreting the latent space representations of attention head outputs for LLMs ☆36 · Updated last year
- Implementation of the BatchTopK activation function for training sparse autoencoders (SAEs) ☆57 · Updated 4 months ago
- Flexible library for merging large language models (LLMs) via evolutionary optimization (ACL 2025 Demo). ☆93 · Updated 4 months ago
- Code for NeurIPS 2024 Spotlight: "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations" ☆86 · Updated last year
- ☆82 · Updated last year
- Yet another random morning idea to be quickly tried and architecture shared if it works; to allow the transformer to pause for any amount… ☆53 · Updated 2 years ago
- ☆167 · Updated 2 years ago
- ☆144 · Updated 3 months ago
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ☆179 · Updated 5 months ago
- Official repository of Pretraining Without Attention (BiGS); BiGS is the first model to achieve BERT-level transfer learning on the GLUE … ☆115 · Updated last year
- A MAD laboratory to improve AI architecture designs 🧪 ☆135 · Updated 11 months ago
- Understand and test language model architectures on synthetic tasks. ☆245 · Updated 2 months ago
- Supercharge Hugging Face Transformers with model parallelism. ☆77 · Updated 4 months ago
- SDLG is an efficient method to accurately estimate aleatoric semantic uncertainty in LLMs ☆27 · Updated last year
- Scaling is a distributed training library and installable dependency designed to scale up neural networks, with a dedicated module for tr… ☆66 · Updated 3 weeks ago
- A case study of efficient training of large language models using commodity hardware. ☆68 · Updated 3 years ago
- ☆62 · Updated last year
- PyTorch library for Active Fine-Tuning ☆95 · Updated 2 months ago
- Efficient LLM inference on Slurm clusters using vLLM. ☆86 · Updated this week
- Notebooks accompanying Anthropic's "Toy Models of Superposition" paper ☆131 · Updated 3 years ago
- Machine Learning eXperiment Utilities ☆46 · Updated 4 months ago
- Unofficial re-implementation of "Grokking: Generalization Beyond Overfitting on Small Algorithmic Datasets" ☆80 · Updated 3 years ago
- A fast implementation of T5/UL2 in PyTorch using Flash Attention ☆112 · Updated last month
- Minimum Description Length probing for neural network representations ☆20 · Updated 10 months ago
- Implementation of GateLoop Transformer in PyTorch and JAX ☆91 · Updated last year
- Sparse and discrete interpretability tool for neural networks ☆64 · Updated last year
- One Initialization to Rule them All: Fine-tuning via Explained Variance Adaptation ☆45 · Updated last month
- Code for the paper "The Impact of Positional Encoding on Length Generalization in Transformers", NeurIPS 2023 ☆138 · Updated last year