PythonNut / superbpe
Official code release for "SuperBPE: Space Travel for Language Models"
☆70 · Updated 2 months ago
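For readers who want to try it out, here is a minimal sketch, assuming the SuperBPE tokenizer is distributed in standard Hugging Face `transformers` format; the checkpoint id below is a placeholder, not a confirmed artifact of this repo.

```python
# Minimal sketch, not the repo's documented API: assumes a SuperBPE tokenizer
# published in Hugging Face `transformers` format.
from transformers import AutoTokenizer

# Hypothetical checkpoint id -- substitute the tokenizer you actually use.
tok = AutoTokenizer.from_pretrained("your-org/superbpe-tokenizer")

text = "SuperBPE can merge token sequences that cross whitespace boundaries."
ids = tok.encode(text)
print(tok.convert_ids_to_tokens(ids))  # inspect the (possibly multi-word) tokens
```

If the tokenizer ships as a raw `tokenizer.json` instead, `tokenizers.Tokenizer.from_file` is the drop-in alternative for loading it.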
Alternatives and similar repositories for superbpe
Users interested in superbpe are comparing it to the repositories listed below.
- Code for Zero-Shot Tokenizer Transfer ☆138 · Updated 9 months ago
- Simple and efficient PyTorch-native transformer training and inference (batched) ☆78 · Updated last year
- ☆48 · Updated last year
- Organize the Web: Constructing Domains Enhances Pre-Training Data Curation ☆67 · Updated 5 months ago
- A fusion of a linear layer and a cross-entropy loss, written for PyTorch in Triton ☆70 · Updated last year
- Long Context Extension and Generalization in LLMs ☆61 · Updated last year
- ☆85 · Updated last year
- Code for the NeurIPS 2024 Spotlight paper "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations" ☆84 · Updated 11 months ago
- Official code repo for the paper "Great Memory, Shallow Reasoning: Limits of kNN-LMs" ☆24 · Updated 5 months ago
- Language models scale reliably with over-training and on downstream tasks ☆100 · Updated last year
- ☆53 · Updated last year
- The simplest implementation of recent sparse attention patterns for efficient LLM inference ☆89 · Updated 2 months ago
- ☆39 · Updated last year
- A toolkit implementing advanced methods for transferring models and model knowledge across tokenizers ☆46 · Updated 3 months ago
- Source code for "Prepacking: A Simple Method for Fast Prefilling and Increased Throughput in Large Language Models" [AISTATS … ☆60 · Updated last year
- Some common Hugging Face transformers in maximal update parametrization (µP) ☆82 · Updated 3 years ago
- ☆52 · Updated last year
- Code for the ICML 2025 paper "Metadata Conditioning Accelerates Language Model Pre-training (MeCo)" ☆42 · Updated 3 months ago
- ☆76 · Updated last year
- From GaLore to WeLore: How Low-Rank Weights Non-uniformly Emerge from Low-Rank Gradients. Ajay Jaiswal, Lu Yin, Zhenyu Zhang, Shiwei Liu,… ☆49 · Updated 5 months ago
- ☆120 · Updated last year
- Code for the NeurIPS 2023 paper "The Impact of Positional Encoding on Length Generalization in Transformers" ☆135 · Updated last year
- Code and configs for "Asynchronous RLHF: Faster and More Efficient RL for Language Models" ☆63 · Updated 5 months ago
- ☆15 · Updated last year
- A fast implementation of T5/UL2 in PyTorch using Flash Attention ☆107 · Updated 6 months ago
- ☆56 · Updated last year
- ☆33 · Updated last year
- Large language models (LLMs) made easy: EasyLM is a one-stop solution for pre-training, finetuning, evaluating, and serving LLMs in JAX/Fl… ☆75 · Updated last year
- Efficient Transformers with Dynamic Token Pooling ☆64 · Updated 2 years ago
- ☆19 · Updated 2 years ago