A repository sharing literature on long-context large language models, including methodologies and evaluation benchmarks
☆272 · Updated Jul 30, 2024
Alternatives and similar repositories for long-llms-learning
Users interested in long-llms-learning are comparing it to the repositories listed below.
- Implementation of the paper "Data Engineering for Scaling Language Models to 128K Context" · ☆487 · Updated Mar 19, 2024
- Code for the paper "∞Bench: Extending Long Context Evaluation Beyond 100K Tokens" (https://arxiv.org/abs/2402.13718) · ☆378 · Updated Sep 25, 2024
- A repository sharing literature on large language models · ☆106 · Updated Dec 22, 2025
- [ICML'24] Data and code for the paper "Training-Free Long-Context Scaling of Large Language Models" · ☆448 · Updated Oct 16, 2024
- Long Context Extension and Generalization in LLMs · ☆63 · Updated Sep 21, 2024
- Rectified Rotary Position Embeddings · ☆389 · Updated May 20, 2024
- 📰 Must-read papers and blogs on LLM-based Long Context Modeling 🔥 · ☆1,928 · Updated Feb 27, 2026
- LongBench v2 and LongBench (ACL '25 & '24) · ☆1,101 · Updated Jan 15, 2025
- YaRN: Efficient Context Window Extension of Large Language Models · ☆1,676 · Updated Apr 17, 2024
- [ACL 2024] LooGLE: Long Context Evaluation for Long-Context Language Models · ☆195 · Updated Oct 8, 2024
- Simple retrieval from LLMs at various context lengths to measure accuracy · ☆2,194 · Updated Aug 17, 2024
- A scalable implementation of diffusion and flow matching with XGBoost models, applied to calorimeter data · ☆19 · Updated Nov 3, 2024
- [ACL'24 Outstanding] Data and code for L-Eval, a comprehensive long-context language model evaluation benchmark · ☆393 · Updated Jul 9, 2024
- Memory optimization and training recipes to extrapolate language models' context length to 1 million tokens with minimal hardware · ☆754 · Updated Sep 27, 2024
- [NeurIPS D&B 2024] Generative AI for Math: MathPile · ☆419 · Updated Apr 4, 2025
- [ICLR 2024] CLEX: Continuous Length Extrapolation for Large Language Models · ☆78 · Updated Mar 12, 2024
- USP: Unified (a.k.a. Hybrid, 2D) Sequence Parallel Attention for Long-Context Transformer Model Training and Inference · ☆644 · Updated Jan 15, 2026
- Code and documentation for LongLoRA and LongAlpaca (ICLR 2024 Oral) · ☆2,693 · Updated Aug 14, 2024
- LongProc: Benchmarking Long-Context Language Models on Long Procedural Generation · ☆33 · Updated Feb 26, 2026
- Positional Skip-wise Training for Efficient Context Window Extension of LLMs to Extreme Lengths (ICLR 2024) · ☆209 · Updated May 20, 2024
- Homepage for ProLong (Princeton long-context language models) and the paper "How to Train Long-Context Language Models (Effectively)" · ☆247 · Updated Sep 12, 2025
- [NAACL 2025] Source code for MMEvalPro, a more trustworthy and efficient benchmark for evaluating LMMs · ☆24 · Updated Sep 26, 2024
- The HELMET Benchmark · ☆203 · Updated Feb 26, 2026
- Code and data for "Long-context LLMs Struggle with Long In-context Learning" [TMLR 2025] · ☆112 · Updated Feb 20, 2025
- A repository for research on medium-sized language models · ☆78 · Updated May 23, 2024
- ☆302 · Updated Jul 10, 2025
- Official repository for DistFlashAttn: Distributed Memory-efficient Attention for Long-context LLMs Training · ☆222 · Updated Aug 19, 2024
- RAG-RewardBench: Benchmarking Reward Models in Retrieval Augmented Generation for Preference Alignment · ☆16 · Updated Dec 19, 2024
- Official repository for LongChat and LongEval · ☆533 · Updated May 24, 2024
- Extend existing LLMs well beyond their original training length with constant memory usage, without retraining · ☆736 · Updated Apr 10, 2024
- Open-source code for the paper "Retrieval Head Mechanistically Explains Long-Context Factuality" · ☆232 · Updated Aug 2, 2024
- Code for the NeurIPS 2024 Spotlight paper "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations" · ☆92 · Updated Oct 30, 2024
- A family of open-source Mixture-of-Experts (MoE) Large Language Models · ☆1,663 · Updated Mar 8, 2024
- [ACL'24] Code and data for the paper "When is Tree Search Useful for LLM Planning? It Depends on the Discriminator" · ☆54 · Updated Feb 23, 2024
- Implementation of the paper "LongRoPE: Extending LLM Context Window Beyond 2 Million Tokens" · ☆150 · Updated Jul 20, 2024
- ☆320 · Updated Sep 18, 2024
- LongLLaMA is a large language model capable of handling long contexts. It is based on OpenLLaMA and fine-tuned with the Focused Transform… · ☆1,464 · Updated Nov 7, 2023
- 🚀 Efficient implementations of state-of-the-art linear attention models · ☆4,474 · Updated this week
- Counting-Stars (★) · ☆83 · Updated Nov 24, 2025