GeeeekExplorer / transformers-patch
Patches for Hugging Face Transformers to save memory
☆26 · Updated last month
Alternatives and similar repositories for transformers-patch
Users interested in transformers-patch are comparing it to the libraries listed below.
- Accelerate LLM preference tuning via prefix sharing with a single line of code ☆42 · Updated last week
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models ☆135 · Updated last year
- Odysseus: Playground of LLM Sequence Parallelism ☆70 · Updated last year
- ☆74 · Updated last month
- Ouroboros: Speculative Decoding with Large Model Enhanced Drafting (EMNLP 2024 main) ☆107 · Updated 3 months ago
- ☆116 · Updated last month
- [ICLR2025] Breaking Throughput-Latency Trade-off for Long Sequences with Speculative Decoding ☆121 · Updated 7 months ago
- ☆49 · Updated last month
- An efficient implementation of the NSA (Native Sparse Attention) kernel ☆89 · Updated 3 weeks ago
- Efficient, Flexible, and Highly Fault-Tolerant Model Service Management Based on SGLang ☆53 · Updated 8 months ago
- A collection of tricks and tools to speed up transformer models ☆170 · Updated last month
- ☆47 · Updated last month
- CPM.cu is a lightweight, high-performance CUDA implementation for LLMs, optimized for end-device inference and featuring cutting-edge tec… ☆160 · Updated this week
- [ACL 2024] RelayAttention for Efficient Large Language Model Serving with Long System Prompts ☆40 · Updated last year
- Quantized Attention on GPU ☆44 · Updated 7 months ago
- Transformers components but in Triton ☆34 · Updated 2 months ago
- Code for paper: [ICLR2025 Oral] FlexPrefill: A Context-Aware Sparse Attention Mechanism for Efficient Long-Sequence Inference ☆118 · Updated last month
- ☆42 · Updated 2 weeks ago
- Code for the preprint "Cache Me If You Can: How Many KVs Do You Need for Effective Long-Context LMs?" ☆24 · Updated 3 weeks ago
- KV cache compression for high-throughput LLM inference ☆132 · Updated 5 months ago
- Flash-Muon: An Efficient Implementation of Muon Optimizer ☆138 · Updated last month
- Implementation of Speculative Sampling as described in "Accelerating Large Language Model Decoding with Speculative Sampling" by DeepMind ☆98 · Updated last year
- Distributed IO-aware Attention algorithm ☆20 · Updated 10 months ago
- ☆136 · Updated 5 months ago
- ☆87 · Updated 3 months ago
- Vocabulary Parallelism ☆19 · Updated 4 months ago
- ☆56 · Updated 3 weeks ago
- The official implementation of the paper SimLayerKV: A Simple Framework for Layer-Level KV Cache Reduction ☆46 · Updated 8 months ago
- ☆75 · Updated 7 months ago
- GEAR: An Efficient KV Cache Compression Recipe for Near-Lossless Generative Inference of LLM ☆165 · Updated last year