OpenMachine-ai / transformer-tricks
A collection of tricks and tools to speed up transformer models
☆193 · Updated last week
Alternatives and similar repositories for transformer-tricks
Users interested in transformer-tricks are comparing it to the repositories listed below.
- ☆66 · Updated 9 months ago
- Efficient LLM Inference over Long Sequences ☆394 · Updated 6 months ago
- RWKV-7: Surpassing GPT ☆102 · Updated last year
- [ICLR 2025] Official PyTorch Implementation of Gated Delta Networks: Improving Mamba2 with Delta Rule ☆404 · Updated 3 months ago
- Flash-Muon: An Efficient Implementation of Muon Optimizer ☆223 · Updated 6 months ago
- ☆260 · Updated 6 months ago
- ☆133 · Updated 6 months ago
- The evaluation framework for training-free sparse attention in LLMs ☆106 · Updated 2 months ago
- ☆148 · Updated last year
- Normalized Transformer (nGPT) ☆194 · Updated last year
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU clusters ☆131 · Updated last year
- Towards Economical Inference: Enabling DeepSeek's Multi-Head Latent Attention in Any Transformer-based LLMs ☆198 · Updated 3 weeks ago
- Fused Qwen3 MoE layer for faster training, compatible with HF Transformers, LoRA, 4-bit quant, Unsloth ☆217 · Updated this week
- Work in progress. ☆75 · Updated last month
- ☆101 · Updated 10 months ago
- Official repository for the paper "NeuZip: Memory-Efficient Training and Inference with Dynamic Compression of Neural Networks". This rep… ☆60 · Updated last year
- CUDA-L1: Improving CUDA Optimization via Contrastive Reinforcement Learning ☆277 · Updated last month
- ☆59 · Updated 2 years ago
- QeRL enables RL for 32B LLMs on a single H100 GPU. ☆470 · Updated last month
- Chain of Experts (CoE) enables communication between experts within Mixture-of-Experts (MoE) models ☆226 · Updated last month
- Repo for "LoLCATs: On Low-Rank Linearizing of Large Language Models" ☆249 · Updated 10 months ago
- Official implementation for Training LLMs with MXFP4 ☆116 · Updated 8 months ago
- An extension of the nanoGPT repository for training small MoE models. ☆219 · Updated 9 months ago
- Training-free Post-training Efficient Sub-quadratic Complexity Attention. Implemented with OpenAI Triton. ☆148 · Updated last month
- ☆113 · Updated last month
- Block Transformer: Global-to-Local Language Modeling for Fast Inference (NeurIPS 2024) ☆162 · Updated 8 months ago
- ☆63 · Updated 7 months ago
- Efficient Triton implementation of Native Sparse Attention. ☆256 · Updated 7 months ago
- Memory-optimized Mixture of Experts ☆72 · Updated 5 months ago
- Cookbook of SGLang - Recipe ☆45 · Updated this week