A collection of tricks and tools to speed up transformer models
☆196 · Feb 23, 2026 · Updated 3 weeks ago
Alternatives and similar repositories for transformer-tricks
Users interested in transformer-tricks are comparing it to the libraries listed below.
- Implementation of Computer Vision Models in JAX (equinox) · ☆20 · Jan 15, 2026 · Updated 2 months ago
- [ICML 2025] This is the official PyTorch implementation of "ZipAR: Accelerating Auto-regressive Image Generation through Spatial Locality… · ☆53 · Mar 25, 2025 · Updated 11 months ago
- Code repository for ICLR 2025 paper "LeanQuant: Accurate and Scalable Large Language Model Quantization with Loss-error-aware Grid" · ☆25 · Mar 2, 2025 · Updated last year
- DUTH RISC-V Microprocessor · ☆25 · Dec 4, 2024 · Updated last year
- [ICML 2025] LaCache: Ladder-Shaped KV Caching for Efficient Long-Context Modeling of Large Language Models · ☆17 · Nov 4, 2025 · Updated 4 months ago
- A PyTorch implementation of [VCT](https://github.com/google-research/google-research/tree/master/vct) · ☆10 · Nov 25, 2022 · Updated 3 years ago
- CoV: Chain-of-View Prompting for Spatial Reasoning · ☆52 · Jan 23, 2026 · Updated last month
- [NeurIPS 2024] The official implementation of ZipCache: Accurate and Efficient KV Cache Quantization with Salient Token Identification · ☆31 · Mar 30, 2025 · Updated 11 months ago
- Supporting PyTorch FSDP for optimizers · ☆84 · Dec 8, 2024 · Updated last year
- ☆27 · Nov 25, 2025 · Updated 3 months ago
- ☆36 · Nov 14, 2025 · Updated 4 months ago
- This is a fork of SGLang for hip-attention integration. Please refer to hip-attention for details. · ☆18 · Dec 23, 2025 · Updated 2 months ago
- ☆95 · Jul 7, 2025 · Updated 8 months ago
- [ICML 2024] When Linear Attention Meets Autoregressive Decoding: Towards More Effective and Efficient Linearized Large Language Models · ☆35 · Jun 12, 2024 · Updated last year
- LLM KV cache compression made easy · ☆957 · Mar 13, 2026 · Updated last week
- ☆12 · Dec 22, 2024 · Updated last year
- ☆101 · Feb 26, 2026 · Updated 3 weeks ago
- Towards Economical Inference: Enabling DeepSeek's Multi-Head Latent Attention in Any Transformer-based LLMs · ☆206 · Dec 4, 2025 · Updated 3 months ago
- ☆87 · Jan 23, 2025 · Updated last year
- Flash-Muon: An Efficient Implementation of the Muon Optimizer · ☆242 · Jun 15, 2025 · Updated 9 months ago
- Demonstration that fine-tuning a RoPE model on sequences longer than those seen in pre-training extends the model's context limit · ☆63 · Jun 21, 2023 · Updated 2 years ago
- Code for paper: Long cOntext aliGnment via efficient preference Optimization · ☆24 · Oct 10, 2025 · Updated 5 months ago
- Public Goods Game (PGG) Benchmark: Contribute & Punish is a multi-agent benchmark that tests cooperative and self-interested strategies a… · ☆39 · Apr 10, 2025 · Updated 11 months ago
- Storing long contexts in tiny caches with self-study · ☆249 · Dec 5, 2025 · Updated 3 months ago
- ROSA+: RWKV's ROSA implementation with a fallback statistical predictor · ☆35 · Oct 13, 2025 · Updated 5 months ago
- AirLLM: 70B inference with a single 4GB GPU · ☆19 · Jun 27, 2025 · Updated 8 months ago
- ☆11 · Sep 7, 2024 · Updated last year
- [NeurIPS'25 Spotlight] Boosting Generative Image Modeling via Joint Image-Feature Synthesis · ☆117 · Nov 3, 2025 · Updated 4 months ago
- A browser GUI for nvidia-smi · ☆20 · Mar 17, 2025 · Updated last year
- Fast, lightweight and parallelised simulation-based inference in JAX · ☆23 · Oct 28, 2025 · Updated 4 months ago
- Official implementation of FastKV: Decoupling of Context Reduction and KV Cache Compression for Prefill-Decoding Acceleration · ☆30 · Nov 22, 2025 · Updated 3 months ago
- Patches for Hugging Face Transformers to save memory · ☆35 · Jun 2, 2025 · Updated 9 months ago
- Optimized Fused-SSIM · ☆72 · Feb 26, 2025 · Updated last year
- Official PyTorch implementation of "GuidedQuant: Large Language Model Quantization via Exploiting End Loss Guidance" (ICML 2025) · ☆51 · Jul 6, 2025 · Updated 8 months ago
- Various test models in WNNX format. They can be viewed with `pip install wnetron && wnetron` · ☆12 · Jun 22, 2022 · Updated 3 years ago
- ROSA-Tuning · ☆70 · Feb 4, 2026 · Updated last month
- Autonomously train research-agent LLMs on custom data using reinforcement learning and self-verification · ☆684 · Mar 22, 2025 · Updated 11 months ago
- Memory Efficient Training Framework for Large Video Generation Model · ☆25 · Apr 22, 2024 · Updated last year
- ☆62 · Oct 29, 2024 · Updated last year