☆20, updated Sep 24, 2025

Alternatives and similar repositories for ZSMerge

Users interested in ZSMerge are comparing it to the repositories listed below.
- ☆15, updated Apr 11, 2024
- ☆47, updated Nov 25, 2024
- ☆15, updated Jan 24, 2025
- Code based on vLLM for the paper “Cost-Efficient Large Language Model Serving for Multi-turn Conversations with CachedAttention” (☆11, updated Sep 19, 2024)
- Linux kernel technical documentation (☆16, updated Feb 26, 2026)
- Official implementation of the ECCV 2024 paper POA (☆24, updated Aug 8, 2024)
- ☆20, updated Jun 9, 2025
- [ACM MM 2025] LongWriter-V: Enabling Ultra-Long and High-Fidelity Generation in Vision-Language Models (☆23, updated Mar 29, 2025)
- ☆16, updated Apr 15, 2025
- The pmem.io website (☆17, updated Jan 20, 2026)
- This module collects per-page stats and decides, for each page, whether it should be migrated, replicated, or interleaved (☆17, updated Sep 29, 2015)
- ☆23, updated Mar 7, 2025
- Official implementation of the ICLR 2025 paper "Polynomial Composition Activations: Unleashing the Dynamics of Large Language Models" (☆18, updated Apr 25, 2025)
- A comprehensive and efficient long-context model evaluation framework (☆31, updated Feb 25, 2026)
- ☆18, updated Sep 5, 2024
- ☆85, updated Apr 18, 2025
- Dynamic Context Selection for Efficient Long-Context LLMs (☆56, updated May 20, 2025)
- Suri: Multi-constraint instruction following for long-form text generation (EMNLP 2024) (☆27, updated Oct 3, 2025)
- A Compute Express Link (CXL) benchmark suite (☆20, updated Feb 12, 2025)
- Explore Inter-layer Expert Affinity in MoE Model Inference (☆16, updated May 6, 2024)
- AdaSkip: Adaptive Sublayer Skipping for Accelerating Long-Context LLM Inference (☆20, updated Jan 24, 2025)
- Official repository of the paper "Context-DPO: Aligning Language Models for Context-Faithfulness" (☆21, updated Feb 17, 2025)
- Cramming 1568 Tokens into a Single Vector and Back Again: Exploring the Limits of Embedding Space Capacity (ACL 2025, oral) (☆32, updated Jun 14, 2025)
- Official implementation of FastKV: Decoupling of Context Reduction and KV Cache Compression for Prefill-Decoding Acceleration (☆30, updated Nov 22, 2025)
- The original Shared Recurrent Memory Transformer implementation (☆34, updated Jul 11, 2025)
- Codebase for "Instruction Following without Instruction Tuning" (☆36, updated Sep 24, 2024)
- [NeurIPS 2024] Implementation of the paper "D-LLM: A Token Adaptive Computing Resource Allocation Strategy for Large Language Models" (☆23, updated Apr 9, 2025)
- Artifact for "Apparate: Rethinking Early Exits to Tame Latency-Throughput Tensions in ML Serving" [SOSP '24] (☆24, updated Nov 21, 2024)
- Code and scripts for "David's Slingshot: A Strategic Coordination Framework of Small LLMs Matches Large LLMs in Data Synthesis" (☆34, updated Jun 13, 2025)
- RePo: Language Models with Context Re-Positioning (☆74, updated Dec 24, 2025)
- ☆64, updated Jan 12, 2026
- ☆26, updated Mar 31, 2022
- PipeInfer: Accelerating LLM Inference using Asynchronous Pipelined Speculation (☆32, updated Nov 16, 2024)
- Deploy ONNX models with TensorRT and LibTorch (☆19, updated Nov 17, 2021)
- ☆28, updated May 24, 2025
- Artifacts of the EuroSys '24 paper "Exploring Performance and Cost Optimization with ASIC-Based CXL Memory" (☆31, updated Feb 21, 2024)
- Repository for the Q-Filters method (https://arxiv.org/pdf/2503.02812) (☆35, updated Mar 7, 2025)
- ☆36, updated Mar 17, 2025
- MetaLadder: Ascending Mathematical Solution Quality via Analogical-Problem Reasoning Transfer (EMNLP 2025) (☆11, updated Apr 18, 2025)