jeffreysijuntan / lloco
The official repo for "LLoCo: Learning Long Contexts Offline"
☆118 · Updated Jun 15, 2024
Alternatives and similar repositories for lloco
Users interested in lloco are comparing it to the repositories listed below
- ☆302 · Updated Jul 10, 2025
- [ACL 2024] Long-Context Language Modeling with Parallel Encodings · ☆168 · Updated Jun 13, 2024
- This repository contains the code for the paper "SirLLM: Streaming Infinite Retentive LLM" · ☆60 · Updated May 28, 2024
- PyTorch implementation for "Compressed Context Memory For Online Language Model Interaction" (ICLR'24) · ☆62 · Updated Apr 18, 2024
- Linear Attention Sequence Parallelism (LASP) · ☆88 · Updated Jun 4, 2024
- Open-source code for the paper: Retrieval Head Mechanistically Explains Long-Context Factuality · ☆231 · Updated Aug 2, 2024
- [EMNLP 2023] Adapting Language Models to Compress Long Contexts · ☆328 · Updated Sep 9, 2024
- ☆49 · Updated Nov 25, 2024
- The repo for In-context Autoencoder · ☆164 · Updated May 11, 2024
- MPI Code Generation through Domain-Specific Language Models · ☆14 · Updated Nov 19, 2024
- The official implementation of the paper "SimLayerKV: A Simple Framework for Layer-Level KV Cache Reduction" · ☆52 · Updated Oct 18, 2024
- ACL 2024 | LooGLE: Long Context Evaluation for Long-Context Language Models · ☆195 · Updated Oct 8, 2024
- GEAR: An Efficient KV Cache Compression Recipe for Near-Lossless Generative Inference of LLMs · ☆176 · Updated Jul 12, 2024
- Positional Skip-wise Training for Efficient Context Window Extension of LLMs to Extreme Lengths (ICLR 2024) · ☆209 · Updated May 20, 2024
- Reflect-RL: Two-Player Online RL Fine-Tuning for LMs · ☆18 · Updated Jul 19, 2025
- [EMNLP 2024] LongAlign: A Recipe for Long Context Alignment of LLMs · ☆260 · Updated Dec 16, 2024
- [ICML 2024] Quest: Query-Aware Sparsity for Efficient Long-Context LLM Inference · ☆372 · Updated Jul 10, 2025
- ☆62 · Updated Jun 17, 2024
- ☆19 · Updated Oct 14, 2024
- ☆84 · Updated Nov 10, 2025
- ☆28 · Updated May 24, 2025
- ☆35 · Updated Feb 10, 2025
- ☆273 · Updated Oct 31, 2023
- PyTorch implementation of StableMask (ICML'24) · ☆15 · Updated Jun 27, 2024
- The official repository for "Safer-Instruct: Aligning Language Models with Automated Preference Data" · ☆17 · Updated Feb 22, 2024
- ☆54 · Updated Jul 7, 2025
- [ICLR 2025] DuoAttention: Efficient Long-Context LLM Inference with Retrieval and Streaming Heads · ☆524 · Updated Feb 10, 2025
- Code for PHATGOOSE, introduced in "Learning to Route Among Specialized Experts for Zero-Shot Generalization" · ☆91 · Updated Feb 27, 2024
- Official implementation of Hierarchical Context Merging: Better Long Context Understanding for Pre-trained LLMs (ICLR 2024) · ☆43 · Updated Aug 6, 2024
- Dataset Reset Policy Optimization · ☆31 · Updated Apr 12, 2024
- GoldFinch and other hybrid transformer components · ☆45 · Updated Jul 20, 2024
- ☆120 · Updated Aug 28, 2024
- 🌟 Official code of our AAAI26 paper 🔍 WebFilter · ☆35 · Updated Nov 9, 2025
- Vortex: A Flexible and Efficient Sparse Attention Framework · ☆46 · Updated Jan 21, 2026
- Official repo of the paper "Eliminating Position Bias of Language Models: A Mechanistic Approach" · ☆19 · Updated Jun 13, 2025
- A Recipe for Building LLM Reasoners to Solve Complex Instructions · ☆29 · Updated Oct 9, 2025
- [ICLR 2025] COAT: Compressing Optimizer States and Activation for Memory-Efficient FP8 Training · ☆258 · Updated Aug 9, 2025
- [COLM 2024] TriForce: Lossless Acceleration of Long Sequence Generation with Hierarchical Speculative Decoding · ☆276 · Updated Aug 31, 2024
- Block Transformer: Global-to-Local Language Modeling for Fast Inference (NeurIPS 2024) · ☆163 · Updated Apr 13, 2025