☆96 · Dec 6, 2024 · Updated last year
Alternatives and similar repositories for MemLong
Users interested in MemLong are comparing it to the repositories listed below.
- Code for the paper: Long cOntext aliGnment via efficient preference Optimization ☆24 · Oct 10, 2025 · Updated 5 months ago
- Evaluating the faithfulness of long-context language models ☆30 · Oct 21, 2024 · Updated last year
- This repository contains the code for the paper "SirLLM: Streaming Infinite Retentive LLM" ☆60 · May 28, 2024 · Updated last year
- ☆28 · May 24, 2025 · Updated 9 months ago
- PyTorch implementation for "Compressed Context Memory For Online Language Model Interaction" (ICLR'24) ☆63 · Apr 18, 2024 · Updated last year
- The original Shared Recurrent Memory Transformer implementation ☆34 · Jul 11, 2025 · Updated 8 months ago
- OpenBA-V2: a 3B LLM (Large Language Model) with a T5 architecture, built by pruning and continued pretraining from OpenBA-1… ☆25 · May 10, 2024 · Updated last year
- Codebase for pre-training, compressing, extending, and distilling LLMs with Megatron-LM ☆12 · Mar 11, 2024 · Updated 2 years ago
- CMD: a framework for Context-aware Model self-Detoxification (EMNLP 2024 Long Paper) ☆17 · Feb 10, 2025 · Updated last year
- Nexusflow function-call, tool-use, and agent benchmarks ☆30 · Dec 13, 2024 · Updated last year
- [ACL 2024] RelayAttention for Efficient Large Language Model Serving with Long System Prompts ☆40 · Feb 29, 2024 · Updated 2 years ago
- [ICLR 2025] Codebase for "ReMoE: Fully Differentiable Mixture-of-Experts with ReLU Routing", built on Megatron-LM ☆109 · Dec 20, 2024 · Updated last year
- ☆21 · Jul 18, 2024 · Updated last year
- Official implementation of "Multi-Head RAG: Solving Multi-Aspect Problems with LLMs" ☆241 · Feb 26, 2026 · Updated 3 weeks ago
- ☆16 · Sep 4, 2025 · Updated 6 months ago
- ☆23 · Dec 17, 2024 · Updated last year
- Official implementation of Self-Taught Agentic Long Context Understanding (ACL 2025) ☆13 · Sep 22, 2025 · Updated 6 months ago
- Diffusion model improvement method ☆35 · Sep 4, 2023 · Updated 2 years ago
- A comprehensive and efficient long-context model evaluation framework ☆31 · Feb 25, 2026 · Updated 3 weeks ago
- Systematic evaluation framework that automatically rates overthinking behavior in large language models ☆97 · May 16, 2025 · Updated 10 months ago
- The official repo for "LLoCo: Learning Long Contexts Offline" ☆118 · Jun 15, 2024 · Updated last year
- This project aims to build a platform-level intelligent human-computer interaction product, combining an intelligent knowledge base with knowledge retrieval to deliver efficient operation and high-quality service. ☆19 · Apr 29, 2024 · Updated last year
- ☆16 · Jul 23, 2024 · Updated last year
- ☆129 · Jul 23, 2025 · Updated 8 months ago
- Long Context Research ☆29 · Jan 26, 2026 · Updated last month
- Official repo for "LongRAG: Enhancing Retrieval-Augmented Generation with Long-context LLMs" ☆245 · Aug 25, 2024 · Updated last year
- Official repo for "Make Your LLM Fully Utilize the Context" ☆268 · May 15, 2024 · Updated last year
- Official implementation of the paper "A deeper look at depth pruning of LLMs" ☆15 · Jul 24, 2024 · Updated last year
- ☆13 · Mar 5, 2025 · Updated last year
- Layer-Condensed KV cache with a 10× larger batch size, fewer parameters, and less computation; dramatic speed-up with better task performance… ☆157 · Apr 7, 2025 · Updated 11 months ago
- Empowering RAG with a memory-based data interface for all-purpose applications! ☆2,230 · Sep 11, 2025 · Updated 6 months ago
- [ICLR 2025] TidalDecode: Fast and Accurate LLM Decoding with Position Persistent Sparse Attention ☆53 · Aug 6, 2025 · Updated 7 months ago
- Applies ROME and MEMIT to Mamba-S4 models ☆14 · Apr 5, 2024 · Updated last year
- Source code of “Reinforcement Learning with Token-level Feedback for Controllable Text Generation” (NAACL 2024) ☆17 · Dec 8, 2024 · Updated last year
- [NeurIPS'24 Spotlight, ICLR'25, ICML'25] Speeds up long-context LLM inference with approximate, dynamic sparse attention computation… ☆1,198 · Mar 9, 2026 · Updated 2 weeks ago
- ☆46 · Jun 11, 2025 · Updated 9 months ago
- [EMNLP 2024: Demo Oral] RAGLAB: A Modular and Research-Oriented Unified Framework for Retrieval-Augmented Generation ☆310 · Oct 18, 2024 · Updated last year
- Code for the KaLM-Embedding models ☆115 · Jun 30, 2025 · Updated 8 months ago
- A fork of SGLang for hip-attention integration; please refer to hip-attention for details. ☆18 · Dec 23, 2025 · Updated 3 months ago