Strivin0311 / long-llms-learning
A repository sharing the literature about long-context large language models, including the methodologies and the evaluation benchmarks
☆273, updated Jul 30, 2024
Alternatives and similar repositories for long-llms-learning
Users interested in long-llms-learning are comparing it to the repositories listed below.
- Implementation of the paper "Data Engineering for Scaling Language Models to 128K Context" (☆484, updated Mar 19, 2024)
- Code for the paper "∞Bench: Extending Long Context Evaluation Beyond 100K Tokens": https://arxiv.org/abs/2402.13718 (☆373, updated Sep 25, 2024)
- A repository sharing the literature about large language models (☆106, updated Dec 22, 2025)
- [ICML'24] Data and code for the paper "Training-Free Long-Context Scaling of Large Language Models" (☆445, updated Oct 16, 2024)
- Long Context Extension and Generalization in LLMs (☆62, updated Sep 21, 2024)
- Rectified Rotary Position Embeddings (☆388, updated May 20, 2024)
- 📰 Must-read papers and blogs on LLM-based long-context modeling 🔥 (☆1,909, updated Jan 22, 2026)
- LongBench v2 and LongBench (ACL'25 & '24) (☆1,093, updated Jan 15, 2025)
- YaRN: Efficient Context Window Extension of Large Language Models (☆1,669, updated Apr 17, 2024)
- [ACL 2024] LooGLE: Long Context Evaluation for Long-Context Language Models (☆195, updated Oct 8, 2024)
- Doing simple retrieval from LLMs at various context lengths to measure accuracy (☆2,173, updated Aug 17, 2024)
- A scalable implementation of diffusion and flow matching with XGBoost models, applied to calorimeter data (☆19, updated Nov 3, 2024)
- [ACL'24 Outstanding] Data and code for L-Eval, a comprehensive long-context language model evaluation benchmark (☆391, updated Jul 9, 2024)
- Memory optimization and training recipes for extrapolating language models' context length to 1 million tokens with minimal hardware (☆752, updated Sep 27, 2024)
- [NeurIPS D&B 2024] Generative AI for Math: MathPile (☆419, updated Apr 4, 2025)
- [ICLR 2024] CLEX: Continuous Length Extrapolation for Large Language Models (☆78, updated Mar 12, 2024)
- USP: Unified (a.k.a. Hybrid, 2D) Sequence Parallel Attention for long-context Transformer model training and inference (☆643, updated Jan 15, 2026)
- Code and documents for LongLoRA and LongAlpaca (ICLR 2024 Oral) (☆2,696, updated Aug 14, 2024)
- LongProc: Benchmarking Long-Context Language Models on Long Procedural Generation (☆33, updated Oct 11, 2025)
- Positional Skip-wise Training for Efficient Context Window Extension of LLMs to Extreme Lengths (ICLR 2024) (☆209, updated May 20, 2024)
- Homepage for ProLong (Princeton long-context language models) and the paper "How to Train Long-Context Language Models (Effectively)" (☆246, updated Sep 12, 2025)
- [NAACL 2025] Source code for MMEvalPro, a more trustworthy and efficient benchmark for evaluating LMMs (☆24, updated Sep 26, 2024)
- The HELMET Benchmark (☆199, updated Dec 4, 2025)
- Code and data for "Long-context LLMs Struggle with Long In-context Learning" [TMLR 2025] (☆111, updated Feb 20, 2025)
- A repository for research on medium-sized language models (☆77, updated May 23, 2024)
- ☆302, updated Jul 10, 2025
- Official repository for DistFlashAttn: Distributed Memory-efficient Attention for Long-context LLMs Training (☆222, updated Aug 19, 2024)
- RAG-RewardBench: Benchmarking Reward Models in Retrieval-Augmented Generation for Preference Alignment (☆16, updated Dec 19, 2024)
- Official repository for LongChat and LongEval (☆534, updated May 24, 2024)
- Extend existing LLMs well beyond their original training length with constant memory usage and without retraining (☆737, updated Apr 10, 2024)
- Open-source code for the paper "Retrieval Head Mechanistically Explains Long-Context Factuality" (☆231, updated Aug 2, 2024)
- Code for the NeurIPS 2024 Spotlight "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations" (☆89, updated Oct 30, 2024)
- A family of open-source Mixture-of-Experts (MoE) large language models (☆1,657, updated Mar 8, 2024)
- [ACL'24] Code and data for the paper "When is Tree Search Useful for LLM Planning? It Depends on the Discriminator" (☆54, updated Feb 23, 2024)
- 🚀 Efficient implementations of state-of-the-art linear attention models (☆4,379, updated this week)
- Implementation of the paper "LongRoPE: Extending LLM Context Window Beyond 2 Million Tokens" (☆150, updated Jul 20, 2024)
- ☆321, updated Sep 18, 2024
- LongLLaMA is a large language model capable of handling long contexts. It is based on OpenLLaMA and fine-tuned with the Focused Transform… (☆1,463, updated Nov 7, 2023)
- Counting-Stars (★) (☆83, updated Nov 24, 2025)