Homepage for ProLong (Princeton long-context language models) and paper "How to Train Long-Context Language Models (Effectively)"
☆247 · Sep 12, 2025 · Updated 5 months ago
Alternatives and similar repositories for ProLong
Users interested in ProLong are comparing it to the libraries listed below.
- The HELMET Benchmark ☆203 · Updated this week
- Long Context Extension and Generalization in LLMs ☆63 · Sep 21, 2024 · Updated last year
- LongProc: Benchmarking Long-Context Language Models on Long Procedural Generation ☆33 · Updated this week
- LongRecipe: Recipe for Efficient Long Context Generalization in Large Language Models ☆79 · Oct 16, 2024 · Updated last year
- Code for ICLR 2025 paper "What is Wrong with Perplexity for Long-context Language Modeling?" ☆110 · Oct 11, 2025 · Updated 4 months ago
- Implementation of paper Data Engineering for Scaling Language Models to 128K Context ☆486 · Mar 19, 2024 · Updated last year
- LOFT: A 1 Million+ Token Long-Context Benchmark ☆227 · Feb 11, 2026 · Updated 3 weeks ago
- Open-source code for paper: Retrieval Head Mechanistically Explains Long-Context Factuality ☆232 · Aug 2, 2024 · Updated last year
- BABILong is a benchmark for LLM evaluation using the needle-in-a-haystack approach. ☆239 · Sep 2, 2025 · Updated 6 months ago
- ☆18 · Oct 14, 2024 · Updated last year
- [EMNLP 2024] LongAlign: A Recipe for Long Context Alignment of LLMs ☆260 · Dec 16, 2024 · Updated last year
- 📰 Must-read papers and blogs on LLM based Long Context Modeling 🔥 ☆1,920 · Updated this week
- A method for evaluating the high-level coherence of machine-generated texts. Identifies high-level coherence issues in transformer-based … ☆11 · Mar 18, 2023 · Updated 2 years ago
- Memory optimization and training recipes to extrapolate language models' context length to 1 million tokens, with minimal hardware. ☆754 · Sep 27, 2024 · Updated last year
- Codes for the paper "∞Bench: Extending Long Context Evaluation Beyond 100K Tokens": https://arxiv.org/abs/2402.13718 ☆378 · Sep 25, 2024 · Updated last year
- Positional Skip-wise Training for Efficient Context Window Extension of LLMs to Extremely Long Lengths (ICLR 2024) ☆209 · May 20, 2024 · Updated last year
- ☆301 · Jul 10, 2025 · Updated 7 months ago
- AnchorAttention: Improved attention for LLMs' long-context training ☆214 · Jan 15, 2025 · Updated last year
- LongBench v2 and LongBench (ACL '25 & '24) ☆1,101 · Jan 15, 2025 · Updated last year
- Code for paper: Long cOntext aliGnment via efficient preference Optimization ☆24 · Oct 10, 2025 · Updated 4 months ago
- Stick-breaking attention ☆62 · Jul 1, 2025 · Updated 8 months ago
- Code for the preprint "Cache Me If You Can: How Many KVs Do You Need for Effective Long-Context LMs?" ☆48 · Jul 29, 2025 · Updated 7 months ago
- Revisiting Mid-training in the Era of Reinforcement Learning Scaling ☆183 · Jul 23, 2025 · Updated 7 months ago
- ACL 2024 | LooGLE: Long Context Evaluation for Long-Context Language Models ☆195 · Oct 8, 2024 · Updated last year
- Code and Data for "Long-context LLMs Struggle with Long In-context Learning" [TMLR2025] ☆112 · Feb 20, 2025 · Updated last year
- A Kernel-Based View of Language Model Fine-Tuning https://arxiv.org/abs/2210.05643 ☆78 · Sep 4, 2023 · Updated 2 years ago
- Ring attention implementation with flash attention ☆986 · Sep 10, 2025 · Updated 5 months ago
- [ICML'24] Data and code for our paper "Training-Free Long-Context Scaling of Large Language Models" ☆446 · Oct 16, 2024 · Updated last year
- [ICLR 2025] DuoAttention: Efficient Long-Context LLM Inference with Retrieval and Streaming Heads ☆524 · Feb 10, 2025 · Updated last year
- This repo contains the source code for RULER: What's the Real Context Size of Your Long-Context Language Models? ☆1,462 · Nov 13, 2025 · Updated 3 months ago
- [ACL 2024] Long-Context Language Modeling with Parallel Encodings ☆169 · Jun 13, 2024 · Updated last year
- A Comprehensive Survey on Long Context Language Modeling ☆228 · Nov 24, 2025 · Updated 3 months ago
- [NeurIPS'24 Spotlight, ICLR'25, ICML'25] To speed up long-context LLM inference, uses approximate and dynamic sparse calculation of the attention… ☆1,190 · Sep 30, 2025 · Updated 5 months ago
- [ICML 2024] Quest: Query-Aware Sparsity for Efficient Long-Context LLM Inference ☆374 · Jul 10, 2025 · Updated 7 months ago
- ☆63 · Jun 12, 2025 · Updated 8 months ago
- [ICLR'25] Data and code for our paper "Why Does the Effective Context Length of LLMs Fall Short?" ☆78 · Nov 25, 2024 · Updated last year
- ☆29 · May 4, 2024 · Updated last year
- Transformers components but in Triton ☆34 · May 9, 2025 · Updated 9 months ago
- Triton version of GQA flash attention, based on the tutorial ☆12 · Aug 4, 2024 · Updated last year