📰 Must-read papers and blogs on LLM-based Long Context Modeling 🔥
⭐1,967 · Apr 15, 2026 · Updated this week
Alternatives and similar repositories for Awesome-LLM-Long-Context-Modeling
Users interested in Awesome-LLM-Long-Context-Modeling are comparing it to the libraries listed below.
- LongBench v2 and LongBench (ACL'25 & '24) ⭐1,148 · Jan 15, 2025 · Updated last year
- Homepage for ProLong (Princeton long-context language models) and the paper "How to Train Long-Context Language Models (Effectively)" ⭐250 · Sep 12, 2025 · Updated 7 months ago
- Code for the paper "∞Bench: Extending Long Context Evaluation Beyond 100K Tokens": https://arxiv.org/abs/2402.13718 ⭐381 · Sep 25, 2024 · Updated last year
- 📰 Must-read papers on KV Cache Compression (constantly updating 🤗) ⭐689 · Updated this week
- ⭐309 · Jul 10, 2025 · Updated 9 months ago
- Implementation of the paper "Data Engineering for Scaling Language Models to 128K Context" ⭐497 · Mar 19, 2024 · Updated 2 years ago
- [NeurIPS'24 Spotlight, ICLR'25, ICML'25] To speed up long-context LLM inference, approximate and dynamic sparse attention calculation… ⭐1,203 · Apr 8, 2026 · Updated last week
- Open-source code for the paper "Retrieval Head Mechanistically Explains Long-Context Factuality" ⭐237 · Aug 2, 2024 · Updated last year
- ACL 2024 | LooGLE: Long Context Evaluation for Long-Context Language Models ⭐195 · Oct 8, 2024 · Updated last year
- A curated list of Efficient Large Language Models ⭐1,980 · Jun 17, 2025 · Updated 10 months ago
- A Comprehensive Survey on Long Context Language Modeling ⭐238 · Nov 24, 2025 · Updated 4 months ago
- [ICML'24] Data and code for the paper "Training-Free Long-Context Scaling of Large Language Models" ⭐450 · Oct 16, 2024 · Updated last year
- Memory optimization and training recipes to extrapolate language models' context length to 1 million tokens, with minimal hardware ⭐755 · Sep 27, 2024 · Updated last year
- An easy-to-use, scalable, high-performance agentic RL framework based on Ray (PPO & DAPO & REINFORCE++ & VLM & TIS & vLLM & Ray & Asy… ⭐9,340 · Updated this week
- Simple retrieval from LLMs at various context lengths to measure accuracy ⭐2,250 · Aug 17, 2024 · Updated last year
- From Chain-of-Thought prompting to OpenAI o1 and DeepSeek-R1 ⭐3,591 · May 7, 2025 · Updated 11 months ago
- [ACL'24 Outstanding] Data and code for L-Eval, a comprehensive long-context language model evaluation benchmark ⭐402 · Jul 9, 2024 · Updated last year
- Awesome LLM compression research papers and tools ⭐1,806 · Feb 23, 2026 · Updated last month
- [ICLR 2025] DuoAttention: Efficient Long-Context LLM Inference with Retrieval and Streaming Heads ⭐536 · Feb 10, 2025 · Updated last year
- verl: Volcano Engine Reinforcement Learning for LLMs ⭐20,603 · Apr 10, 2026 · Updated last week
- Code for the paper "InfLLM: Unveiling the Intrinsic Capacity of LLMs for Understanding Extremely Long Sequences with Training-Free Mem…" ⭐402 · Apr 20, 2024 · Updated last year
- A collection of LLM papers, blogs, and projects, with a focus on OpenAI o1 and reasoning techniques ⭐6,906 · Dec 17, 2025 · Updated 4 months ago
- A curated list of Awesome LLM/VLM Inference Papers with Codes: Flash-Attention, Paged-Attention, WINT8/4, Parallelism, etc. ⭐5,144 · Apr 9, 2026 · Updated last week
- [ICML 2024] Quest: Query-Aware Sparsity for Efficient Long-Context LLM Inference ⭐380 · Jul 10, 2025 · Updated 9 months ago
- The HELMET Benchmark ⭐211 · Apr 10, 2026 · Updated last week
- Positional Skip-wise Training for Efficient Context Window Extension of LLMs to Extreme Lengths (ICLR 2024) ⭐209 · May 20, 2024 · Updated last year
- A repository sharing the literature on long-context large language models, including methodologies and evaluation benchmarks ⭐274 · Jul 30, 2024 · Updated last year
- Source code for RULER: What's the Real Context Size of Your Long-Context Language Models? ⭐1,509 · Nov 13, 2025 · Updated 5 months ago
- Efficient implementations for emerging model architectures ⭐4,878 · Updated this week
- [EMNLP 2024] LongAlign: A Recipe for Long Context Alignment of LLMs ⭐260 · Dec 16, 2024 · Updated last year
- 📰 Must-read papers and blogs on Speculative Decoding ⚡️ ⭐1,185 · Mar 31, 2026 · Updated 2 weeks ago
- ⭐19 · Oct 14, 2024 · Updated last year
- Ring attention implementation with flash attention ⭐1,006 · Sep 10, 2025 · Updated 7 months ago
- A framework for few-shot evaluation of language models ⭐12,138 · Apr 8, 2026 · Updated last week
- Code and data for "Lost in the Middle: How Language Models Use Long Contexts" ⭐376 · Jan 4, 2024 · Updated 2 years ago
- [ICML'24 Spotlight] LLM Maybe LongLM: Self-Extend LLM Context Window Without Tuning ⭐664 · Jun 1, 2024 · Updated last year
- A curated list of reinforcement learning from human feedback (RLHF) resources (continually updated) ⭐4,348 · Dec 9, 2025 · Updated 4 months ago
- [COLM'25] A Controlled Study on Long Context Extension and Generalization in LLMs ⭐64 · Mar 9, 2026 · Updated last month
- Latest Advances on Multimodal Large Language Models ⭐17,624 · Apr 9, 2026 · Updated last week