☆84 · Nov 10, 2025 · Updated 5 months ago
Alternatives and similar repositories for GemFilter
Users interested in GemFilter are comparing it to the libraries listed below.
- Class materials, homework assignments, and videos for probation preparation. ☆24 · Feb 3, 2026 · Updated 2 months ago
- ROSA+: RWKV's ROSA implementation with fallback statistical predictor ☆34 · Oct 13, 2025 · Updated 6 months ago
- ACL 2026 & NAACL 2025: Bridging Retrieval and Inference through Evidence Fusion ☆13 · Apr 9, 2026 · Updated 3 weeks ago
- ☆311 · Jul 10, 2025 · Updated 9 months ago
- The code for "AttentionPredictor: Temporal Pattern Matters for Efficient LLM Inference", Qingyue Yang, Jie Wang, Xing Li, Zhihai Wang, Ch… ☆28 · Jul 15, 2025 · Updated 9 months ago
- [NeurIPS'24 Spotlight, ICLR'25, ICML'25] To speed up long-context LLM inference, approximate and dynamic sparse attention calculation… ☆1,207 · Apr 8, 2026 · Updated 3 weeks ago
- [ICLR 2025] DuoAttention: Efficient Long-Context LLM Inference with Retrieval and Streaming Heads ☆540 · Feb 10, 2025 · Updated last year
- The official repo for "LLoCo: Learning Long Contexts Offline" ☆117 · Jun 15, 2024 · Updated last year
- Continuous batching and parallel acceleration for RWKV6 ☆22 · Jun 28, 2024 · Updated last year
- Keyformer proposes KV cache reduction through key token identification, without the need for fine-tuning ☆57 · Mar 26, 2024 · Updated 2 years ago
- Large Language Models Can Self-Improve in Long-context Reasoning ☆73 · Nov 24, 2024 · Updated last year
- Codebase for Instruction Following without Instruction Tuning ☆36 · Sep 24, 2024 · Updated last year
- [NeurIPS'23] H2O: Heavy-Hitter Oracle for Efficient Generative Inference of Large Language Models ☆513 · Aug 1, 2024 · Updated last year
- Code for paper "ElasticTrainer: Speeding Up On-Device Training with Runtime Elastic Tensor Selection" (MobiSys'23) ☆14 · Nov 1, 2023 · Updated 2 years ago
- Code, results, and other artifacts from the paper introducing the WildChat-50m dataset and the Re-Wild model family ☆36 · Apr 1, 2025 · Updated last year
- [ICLR 2023] "Sparse MoE as the New Dropout: Scaling Dense and Self-Slimmable Transformers" by Tianlong Chen*, Zhenyu Zhang*, Ajay Jaiswal… ☆56 · Feb 28, 2023 · Updated 3 years ago
- [ICLR 2025] TidalDecode: Fast and Accurate LLM Decoding with Position Persistent Sparse Attention ☆53 · Aug 6, 2025 · Updated 8 months ago
- PyTorch implementation of our paper accepted at ICML 2024: "CaM: Cache Merging for Memory-efficient LLMs Inference" ☆48 · Jun 19, 2024 · Updated last year
- ☆53 · Jul 18, 2024 · Updated last year
- The code of our paper "InfLLM: Unveiling the Intrinsic Capacity of LLMs for Understanding Extremely Long Sequences with Training-Free Mem… ☆402 · Apr 20, 2024 · Updated 2 years ago
- Layer-Condensed KV cache w/ 10 times larger batch size, fewer params and less computation. Dramatic speed up with better task performance… ☆157 · Apr 7, 2025 · Updated last year
- [ICLR 2025] Breaking Throughput-Latency Trade-off for Long Sequences with Speculative Decoding ☆146 · Dec 4, 2024 · Updated last year
- GEAR: An Efficient KV Cache Compression Recipe for Near-Lossless Generative Inference of LLM ☆182 · Jul 12, 2024 · Updated last year
- AdaSkip: Adaptive Sublayer Skipping for Accelerating Long-Context LLM Inference ☆20 · Jan 24, 2025 · Updated last year
- Code for paper: Long cOntext aliGnment via efficient preference Optimization ☆24 · Oct 10, 2025 · Updated 6 months ago
- sigma-MoE layer ☆21 · Jan 5, 2024 · Updated 2 years ago
- QAQ: Quality Adaptive Quantization for LLM KV Cache ☆53 · Mar 27, 2024 · Updated 2 years ago
- Quartet II Official Code ☆69 · Mar 23, 2026 · Updated last month
- The predecessor of CiteLab ☆18 · Feb 3, 2026 · Updated 2 months ago
- Code for paper "The Geometry of Reasoning: Flowing Logics in Representation Space" (ICLR 2026) ☆47 · Jan 31, 2026 · Updated 3 months ago
- Token Omission Via Attention ☆127 · Oct 13, 2024 · Updated last year
- ☆12 · Sep 7, 2024 · Updated last year
- [ACL 2024 Findings] Light-PEFT: Lightening Parameter-Efficient Fine-Tuning via Early Pruning ☆13 · Sep 2, 2024 · Updated last year
- Codebase for Hyperdecoders https://arxiv.org/abs/2203.08304 ☆14 · Oct 11, 2022 · Updated 3 years ago
- [ICML 2024] Quest: Query-Aware Sparsity for Efficient Long-Context LLM Inference ☆381 · Jul 10, 2025 · Updated 9 months ago
- ☆85 · Dec 1, 2023 · Updated 2 years ago
- Official repo of the dataset-decomposition paper [NeurIPS 2024] ☆21 · Jan 8, 2025 · Updated last year
- ☆10 · May 27, 2024 · Updated last year
- [NAACL 2025] Official Implementation of "HMT: Hierarchical Memory Transformer for Long Context Language Processing" ☆80 · Mar 12, 2026 · Updated last month