☆84 · Updated Nov 10, 2025
Alternatives and similar repositories for GemFilter
Users interested in GemFilter are comparing it to the libraries listed below.
- Large Language Models Can Self-Improve in Long-context Reasoning ☆72 · Updated Nov 24, 2024
- ☆16 · Updated Mar 13, 2023
- sigma-MoE layer ☆21 · Updated Jan 5, 2024
- ROSA+: RWKV's ROSA implementation with fallback statistical predictor ☆32 · Updated Oct 13, 2025
- ☆301 · Updated Jul 10, 2025
- XFT: Unlocking the Power of Code Instruction Tuning by Simply Merging Upcycled Mixture-of-Experts ☆35 · Updated Jul 2, 2024
- Code for paper: Long cOntext aliGnment via efficient preference Optimization ☆24 · Updated Oct 10, 2025
- AdaSkip: Adaptive Sublayer Skipping for Accelerating Long-Context LLM Inference ☆20 · Updated Jan 24, 2025
- Token Omission Via Attention ☆128 · Updated Oct 13, 2024
- A reproduction of the paper - Aligner: Achieving Efficient Alignment through Weak-to-Strong Correction ☆22 · Updated May 29, 2024
- [ICLR 2023] "Sparse MoE as the New Dropout: Scaling Dense and Self-Slimmable Transformers" by Tianlong Chen*, Zhenyu Zhang*, Ajay Jaiswal… ☆56 · Updated Feb 28, 2023
- ☆25 · Updated Dec 12, 2025
- The code for "AttentionPredictor: Temporal Pattern Matters for Efficient LLM Inference", Qingyue Yang, Jie Wang, Xing Li, Zhihai Wang, Ch… ☆28 · Updated Jul 15, 2025
- [NeurIPS'24 Spotlight, ICLR'25, ICML'25] To speed up long-context LLMs' inference, approximate and dynamic sparse calculation of the attention… ☆1,188 · Updated Sep 30, 2025
- Layer-Condensed KV cache w/ 10 times larger batch size, fewer params and less computation. Dramatic speed up with better task performance… ☆156 · Updated Apr 7, 2025
- The official repo for "LLoCo: Learning Long Contexts Offline" ☆118 · Updated Jun 15, 2024
- Keyformer proposes KV Cache reduction through key token identification, without the need for fine-tuning ☆57 · Updated Mar 26, 2024
- This is the official project of the paper: Compress to Impress: Unleashing the Potential of Compressive Memory in Real-World Long-Term Conver… ☆22 · Updated Nov 18, 2024
- Official Implementation of FastKV: Decoupling of Context Reduction and KV Cache Compression for Prefill-Decoding Acceleration ☆29 · Updated Nov 22, 2025
- One Network, Many Masks: Towards More Parameter-Efficient Transfer Learning ☆39 · Updated Jul 1, 2023
- [ICLR 2025] DuoAttention: Efficient Long-Context LLM Inference with Retrieval and Streaming Heads ☆524 · Updated Feb 10, 2025
- Continuous batching and parallel acceleration for RWKV6 ☆22 · Updated Jun 28, 2024
- [ICLR 2025] TidalDecode: A Fast and Accurate LLM Decoding with Position Persistent Sparse Attention ☆52 · Updated Aug 6, 2025
- Lightning Attention-2: A Free Lunch for Handling Unlimited Sequence Lengths in Large Language Models ☆341 · Updated Feb 23, 2025
- ☆52 · Updated Jul 18, 2024
- GEAR: An Efficient KV Cache Compression Recipe for Near-Lossless Generative Inference of LLM ☆177 · Updated Jul 12, 2024
- ☆29 · Updated May 4, 2024
- Code, results and other artifacts from the paper introducing the WildChat-50m dataset and the Re-Wild model family. ☆34 · Updated Apr 1, 2025
- The Official Implementation of Ada-KV [NeurIPS 2025] ☆128 · Updated Nov 26, 2025
- Class materials, homework and videos for probation preparation. ☆19 · Updated Feb 3, 2026
- Official PyTorch implementation of CD-MOE ☆12 · Updated Mar 29, 2025
- ☆10 · Updated May 27, 2024
- Bridging Retrieval and Inference through Evidence Fusion ☆12 · Updated Oct 20, 2025
- Pytorch implementation of our paper accepted by ICML 2024 -- CaM: Cache Merging for Memory-efficient LLMs Inference ☆48 · Updated Jun 19, 2024
- ☆16 · Updated Dec 8, 2024
- ☆13 · Updated May 21, 2023
- ☆23 · Updated Aug 7, 2023
- ☆46 · Updated Jun 11, 2025
- This repo contains the source code for: Model Tells You What to Discard: Adaptive KV Cache Compression for LLMs ☆44 · Updated Aug 14, 2024