PythonNut / superbpe
Official code release for "SuperBPE: Space Travel for Language Models"
☆56 · Updated 2 weeks ago
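For orientation before the comparison list: SuperBPE extends byte-pair encoding with a second training stage in which the whitespace pretokenizer is lifted, so later merges can cross word boundaries and produce multi-word "superword" tokens. Below is a minimal sketch of how to inspect this, assuming a tokenizer released with the paper is available on the Hugging Face Hub; the checkpoint name is a hypothetical placeholder, so check the repo README for the actual released names.

```python
from transformers import AutoTokenizer

# Hypothetical checkpoint name -- consult the superbpe README for the
# tokenizers actually released with the paper.
tok = AutoTokenizer.from_pretrained("UW/OLMo2-8B-SuperBPE-t180k")

# Unlike a whitespace-pretokenized BPE, a SuperBPE tokenizer can emit
# single tokens that span several words (e.g. " by the way"), which
# shortens token sequences for the same text.
print(tok.tokenize("By the way, tokens here may cross word boundaries."))
```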
Alternatives and similar repositories for superbpe
Users interested in superbpe are comparing it to the repositories listed below.
- This repo is based on https://github.com/jiaweizzhao/GaLore ☆29 · Updated 9 months ago
- ☆48 · Updated last year
- Code for Zero-Shot Tokenizer Transfer ☆133 · Updated 5 months ago
- Language models scale reliably with over-training and on downstream tasks ☆97 · Updated last year
- Long Context Extension and Generalization in LLMs ☆57 · Updated 9 months ago
- The source code of our work "Prepacking: A Simple Method for Fast Prefilling and Increased Throughput in Large Language Models" [AISTATS … ☆59 · Updated 8 months ago
- ☆53 · Updated last year
- The simplest implementation of recent Sparse Attention patterns for efficient LLM inference. ☆77 · Updated 2 weeks ago
- ☆47 · Updated 10 months ago
- ☆37 · Updated last year
- The official repository for SkyLadder: Better and Faster Pretraining via Context Window Scheduling ☆32 · Updated 3 months ago
- ☆51 · Updated last year
- Simple and efficient PyTorch-native transformer training and inference (batched) ☆76 · Updated last year
- ☆32 · Updated last year
- Repository for the Q-Filters method (https://arxiv.org/pdf/2503.02812) ☆33 · Updated 3 months ago
- A fusion of a linear layer and a cross-entropy loss, written for PyTorch in Triton. ☆68 · Updated 10 months ago
- Official repository of the paper "RNNs Are Not Transformers (Yet): The Key Bottleneck on In-context Retrieval" ☆27 · Updated last year
- ☆38 · Updated last year
- Code for the NeurIPS 2024 Spotlight "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations" ☆74 · Updated 7 months ago
- Code for the ICLR 2025 paper "What is Wrong with Perplexity for Long-context Language Modeling?" ☆90 · Updated last month
- Code for the preprint "Metadata Conditioning Accelerates Language Model Pre-training (MeCo)" ☆39 · Updated last month
- Implementation of the paper "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention" from Google in pyTO… ☆55 · Updated this week
- Large language models (LLMs) made easy; EasyLM is a one-stop solution for pre-training, finetuning, evaluating, and serving LLMs in JAX/Fl… ☆75 · Updated 10 months ago
- [ACL'24 Oral] Analysing The Impact of Sequence Composition on Language Model Pre-Training ☆22 · Updated 10 months ago
- The official implementation of "Gated Attention for Large Language Models: Non-linearity, Sparsity, and Attention-Sink-Free" ☆44 · Updated last month
- ☆79 · Updated 10 months ago
- ☆65 · Updated last year
- Code and configs for "Asynchronous RLHF: Faster and More Efficient RL for Language Models" ☆57 · Updated 2 months ago
- ☆47 · Updated 2 weeks ago
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment ☆57 · Updated 9 months ago