Glaciohound / LM-Infinite
Implementation of the NAACL 2024 Outstanding Paper "LM-Infinite: Simple On-the-Fly Length Generalization for Large Language Models"
☆148 · Updated 7 months ago
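For context on the headline repo: the LM-Infinite paper proposes a Λ-shaped attention mask, where each token attends to the first few tokens of the sequence (a global branch) plus a causal sliding window of recent tokens (a local branch); the paper additionally caps the relative distance used by positional encodings. Below is a minimal sketch of such a mask, assuming PyTorch; the names `build_lambda_mask`, `n_global`, and `n_local` are illustrative, not the repo's actual API, and the distance ceiling is omitted.

```python
# Minimal sketch of a Lambda-shaped attention mask (LM-Infinite style).
# Illustrative only: names and defaults are assumptions, not the repo's API.
import torch

def build_lambda_mask(seq_len: int, n_global: int = 10, n_local: int = 4096) -> torch.Tensor:
    """Return a boolean (seq_len, seq_len) mask; True = attention allowed.

    Query position i may attend to:
      * the first n_global key positions (global branch), and
      * the n_local most recent key positions up to i (local branch).
    """
    q = torch.arange(seq_len).unsqueeze(1)   # query positions as a column
    k = torch.arange(seq_len).unsqueeze(0)   # key positions as a row
    causal = k <= q                          # no attending to future tokens
    global_branch = k < n_global             # always keep the starting tokens
    local_branch = (q - k) < n_local         # causal sliding window
    return causal & (global_branch | local_branch)

# Toy example: at small sizes the Lambda shape is visible in the printout.
print(build_lambda_mask(8, n_global=2, n_local=3).int())
```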
Alternatives and similar repositories for LM-Infinite
Users interested in LM-Infinite are comparing it to the libraries listed below.
- [ACL 2024] Long-Context Language Modeling with Parallel Encodings ☆163 · Updated last year
- [ACL 2024] LooGLE: Long Context Evaluation for Long-Context Language Models ☆184 · Updated last year
- Easy control for Key-Value Constrained Generative LLM Inference (https://arxiv.org/abs/2402.06262) ☆63 · Updated last year
- Ouroboros: Speculative Decoding with Large Model Enhanced Drafting (EMNLP 2024 main) ☆110 · Updated 6 months ago
- Homepage for ProLong (Princeton long-context language models) and the paper "How to Train Long-Context Language Models (Effectively)" ☆231 · Updated last month
- Code and data for "Long-context LLMs Struggle with Long In-context Learning" [TMLR 2025] ☆108 · Updated 7 months ago
- Code for the paper "∞Bench: Extending Long Context Evaluation Beyond 100K Tokens" (https://arxiv.org/abs/2402.13718) ☆351 · Updated last year
- [ICML 2024] The official implementation of "Rethinking Optimization and Architecture for Tiny Language Models" ☆123 · Updated 9 months ago
- [ICLR 2024] CLEX: Continuous Length Extrapolation for Large Language Models ☆78 · Updated last year
- Repository of the LV-Eval benchmark ☆70 · Updated last year
- PyTorch implementation of "Compressed Context Memory for Online Language Model Interaction" (ICLR 2024) ☆61 · Updated last year
- [NeurIPS 2024] Fast Best-of-N Decoding via Speculative Rejection ☆51 · Updated 11 months ago
- [EMNLP 2024] LongAlign: A Recipe for Long Context Alignment of LLMs ☆256 · Updated 10 months ago
- Official GitHub repo for the paper "Compression Represents Intelligence Linearly" [COLM 2024] ☆142 · Updated last year
- ☆106 · Updated 3 months ago
- Official repo of "QuickLLaMA: Query-aware Inference Acceleration for Large Language Models" ☆55 · Updated last year
- Positional Skip-wise Training for Efficient Context Window Extension of LLMs to Extreme Lengths (ICLR 2024) ☆205 · Updated last year
- ☆85 · Updated 9 months ago
- Unofficial implementation of the paper "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" ☆173 · Updated last year
- The official repo for "LLoCO: Learning Long Contexts Offline" ☆117 · Updated last year
- ☆105 · Updated 2 years ago
- Counting-Stars (★) ☆83 · Updated 4 months ago
- REST: Retrieval-Based Speculative Decoding (NAACL 2024) ☆210 · Updated last month
- ☆140 · Updated last year
- Layer-Condensed KV cache with 10× larger batch size, fewer parameters, and less computation. Dramatic speedup with better task performance… ☆155 · Updated 6 months ago
- LongRecipe: Recipe for Efficient Long Context Generalization in Large Language Models ☆76 · Updated last year
- Parameter-Efficient Sparsity Crafting from Dense to Mixture-of-Experts for Instruction Tuning on General Tasks (EMNLP 2024) ☆147 · Updated last year
- Open-source code for the paper "Retrieval Head Mechanistically Explains Long-Context Factuality" ☆215 · Updated last year
- A prototype repo for hybrid training with pipeline parallelism and distributed data parallelism, with comments on core code snippets. Feel free to… ☆57 · Updated 2 years ago
- [ICLR 2025] Data and code for our paper "Why Does the Effective Context Length of LLMs Fall Short?" ☆77 · Updated 10 months ago