PythonNut / superbpe
Official code release for "SuperBPE: Space Travel for Language Models"
☆86 · Updated 3 weeks ago
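For context on what the repo implements: per the paper's abstract, SuperBPE is a two-stage BPE curriculum that first learns ordinary subword merges within whitespace pretokens, then lifts the whitespace restriction so later merges can span words, yielding multi-word "superword" tokens. The toy sketch below illustrates that two-stage idea only; it is not the repo's actual implementation or API, and the `transition` hyperparameter and all function names here are illustrative.

```python
from collections import Counter

def pair_counts(seqs):
    """Count adjacent symbol pairs, weighted by sequence frequency."""
    counts = Counter()
    for symbols, freq in seqs:
        for pair in zip(symbols, symbols[1:]):
            counts[pair] += freq
    return counts

def apply_merge(seqs, pair):
    """Rewrite every occurrence of `pair` as one concatenated symbol."""
    out = []
    for symbols, freq in seqs:
        new, i = [], 0
        while i < len(symbols):
            if i + 1 < len(symbols) and (symbols[i], symbols[i + 1]) == pair:
                new.append(symbols[i] + symbols[i + 1])
                i += 2
            else:
                new.append(symbols[i])
                i += 1
        out.append((new, freq))
    return out

def train_toy_superbpe(corpus_lines, n_merges, transition):
    """Learn `n_merges` BPE merges over character sequences (spaces kept
    as symbols). Before `transition` merges, pairs touching a space are
    skipped, so merges stay inside words; afterwards the restriction is
    lifted and merges may cross word boundaries ("superwords")."""
    seqs = [(list(line), freq) for line, freq in Counter(corpus_lines).items()]
    merges = []
    for step in range(n_merges):
        counts = pair_counts(seqs)
        if step < transition:  # stage 1: stay inside whitespace pretokens
            counts = Counter({p: c for p, c in counts.items()
                              if " " not in p[0] and " " not in p[1]})
        if not counts:
            break
        best = counts.most_common(1)[0][0]
        seqs = apply_merge(seqs, best)
        merges.append(best)
    return merges

merges = train_toy_superbpe(["the cat sat on the mat"] * 4,
                            n_merges=12, transition=7)
print(merges)  # later merges cross spaces, e.g. ('the', ' ')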
Alternatives and similar repositories for superbpe
Users interested in superbpe are comparing it to the libraries listed below.
- Code for Zero-Shot Tokenizer Transfer ☆142 · Updated last year
- Long Context Extension and Generalization in LLMs ☆62 · Updated last year
- ☆38 · Updated last year
- Some common Hugging Face transformers in maximal update parametrization (µP) ☆87 · Updated 3 years ago
- Code for the ICML 2025 paper "Metadata Conditioning Accelerates Language Model Pre-training (MeCo)" ☆49 · Updated 7 months ago
- Organize the Web: Constructing Domains Enhances Pre-Training Data Curation ☆76 · Updated 9 months ago
- A toolkit implementing advanced methods to transfer models and model knowledge across tokenizers ☆61 · Updated 7 months ago
- The source code of our work "Prepacking: A Simple Method for Fast Prefilling and Increased Throughput in Large Language Models" [AISTATS … ☆60 · Updated last year
- Language models scale reliably with over-training and on downstream tasks ☆99 · Updated last year
- Official code repo for the paper "Great Memory, Shallow Reasoning: Limits of kNN-LMs" ☆23 · Updated 9 months ago
- [ACL'24 Oral] Analysing the Impact of Sequence Composition on Language Model Pre-Training ☆23 · Updated last year
- Code and configs for "Asynchronous RLHF: Faster and More Efficient RL for Language Models" ☆68 · Updated 9 months ago
- Simple and efficient PyTorch-native transformer training and inference (batched) ☆79 · Updated last year
- Large language models (LLMs) made easy: EasyLM is a one-stop solution for pre-training, finetuning, evaluating and serving LLMs in JAX/Fl… ☆78 · Updated last year
- Code for the NeurIPS 2024 Spotlight "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations" ☆88 · Updated last year
- ☆51 · Updated 2 years ago
- ☆53 · Updated last year
- Reference implementation for "Reward-Augmented Decoding: Efficient Controlled Text Generation With a Unidirectional Reward Model" ☆45 · Updated 4 months ago
- ☆24 · Updated last month
- ☆91 · Updated last year
- [ICLR 2025] Monet: Mixture of Monosemantic Experts for Transformers ☆75 · Updated 7 months ago
- ☆77 · Updated last year
- ☆48 · Updated last year
- Layer-Condensed KV cache with 10× larger batch size, fewer parameters, and less computation; dramatic speed-up with better task performance… ☆157 · Updated 10 months ago
- ☆20 · Updated 3 years ago
- A fusion of a linear layer and a cross-entropy loss, written for PyTorch in Triton ☆75 · Updated last year
- Official code for "M-RᴇᴡᴀʀᴅBᴇɴᴄʜ: Evaluating Reward Models in Multilingual Settings" (ACL 2025 Main) ☆40 · Updated 8 months ago
- A fast implementation of T5/UL2 in PyTorch using Flash Attention ☆113 · Updated 3 months ago
- Code for the paper "The Impact of Positional Encoding on Length Generalization in Transformers" (NeurIPS 2023) ☆137 · Updated last year
- [ICLR'25] Data and code for our paper "Why Does the Effective Context Length of LLMs Fall Short?" ☆78 · Updated last year