shreyansh26 / LLM-Sampling
A collection of various LLM sampling methods implemented in pure PyTorch
☆26 · Updated last year
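To give a sense of the kind of method such a collection covers, here is a minimal nucleus (top-p) sampling sketch. It is an illustration only, not code from the repository: the function name and signature are hypothetical, and it uses plain-Python lists instead of PyTorch tensors so the example stays dependency-free.

```python
import math
import random

def top_p_sample(logits, temperature=1.0, top_p=0.9, rng=None):
    """Illustrative nucleus (top-p) sampling over a list of logits.

    Hypothetical helper, not the repository's API: keeps the smallest
    set of highest-probability tokens whose cumulative mass reaches
    top_p, then samples a token index from that set.
    """
    rng = rng or random.Random()
    # Temperature-scaled softmax (max-subtracted for numerical stability).
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Token indices sorted by descending probability.
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    # Smallest prefix whose cumulative probability reaches top_p.
    kept, cum = [], 0.0
    for i in order:
        kept.append(i)
        cum += probs[i]
        if cum >= top_p:
            break
    # Sample from the kept nucleus, renormalized to its own mass.
    mass = sum(probs[i] for i in kept)
    r = rng.random() * mass
    acc = 0.0
    for i in kept:
        acc += probs[i]
        if r <= acc:
            return i
    return kept[-1]
```

With a sharply peaked distribution (e.g. logits `[10.0, 0.0, 0.0]` and `top_p=0.5`), the nucleus collapses to the single most likely token, so sampling becomes deterministic; with flat logits, all tokens stay in the nucleus and the draw is uniform.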
Alternatives and similar repositories for LLM-Sampling
Users interested in LLM-Sampling are comparing it to the libraries listed below.
- ☆48 · Updated last year
- ☆82 · Updated last year
- Yet another random morning idea to be quickly tried and architecture shared if it works; to allow the transformer to pause for any amount… ☆53 · Updated 2 years ago
- Fast, Modern, and Low Precision PyTorch Optimizers ☆124 · Updated last month
- A toolkit implementing advanced methods to transfer models and model knowledge across tokenizers. ☆62 · Updated 7 months ago
- ☆57 · Updated last month
- Experimental playground for benchmarking language model (LM) architectures, layers, and tricks on smaller datasets. Designed for flexible… ☆98 · Updated 2 weeks ago
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment ☆61 · Updated last year
- ☆106 · Updated 8 months ago
- A fast implementation of T5/UL2 in PyTorch using Flash Attention ☆113 · Updated 3 months ago
- Explorations into adversarial losses on top of autoregressive loss for language modeling ☆41 · Updated last month
- A byte-level decoder architecture that matches the performance of tokenized Transformers. ☆67 · Updated last year
- Explorations into the proposal from the paper "Grokfast: Accelerated Grokking by Amplifying Slow Gradients" ☆103 · Updated last year
- Flexible library for merging large language models (LLMs) via evolutionary optimization (ACL 2025 Demo). ☆98 · Updated 6 months ago
- ☆91 · Updated last year
- PyTorch implementation of the PEER block from the paper "Mixture of A Million Experts" by Xu Owen He at DeepMind ☆135 · Updated 3 months ago
- One Initialization to Rule them All: Fine-tuning via Explained Variance Adaptation ☆46 · Updated 3 months ago
- ☆41 · Updated last year
- Official repository for the paper "SwitchHead: Accelerating Transformers with Mixture-of-Experts Attention" ☆102 · Updated last year
- ☆91 · Updated 7 months ago
- Code for the NeurIPS LLM Efficiency Challenge ☆60 · Updated last year
- Collection of autoregressive model implementations ☆85 · Updated 3 weeks ago
- An introduction to LLM Sampling ☆79 · Updated last year
- ☆50 · Updated last year
- Code for Zero-Shot Tokenizer Transfer ☆142 · Updated last year
- Supercharge huggingface transformers with model parallelism. ☆78 · Updated 6 months ago
- Implementation of a Light Recurrent Unit in PyTorch ☆49 · Updated last year
- Aioli: A unified optimization framework for language model data mixing ☆32 · Updated last year
- ☆59 · Updated 2 months ago
- Code for the paper "The Impact of Positional Encoding on Length Generalization in Transformers" (NeurIPS 2023) ☆137 · Updated last year