goodevening13 / aquakv
☆19 · Updated 2 months ago
Alternatives and similar repositories for aquakv
Users interested in aquakv are comparing it to the libraries listed below.
- The simplest implementation of recent Sparse Attention patterns for efficient LLM inference. ☆90 · Updated 5 months ago
- ☆161 · Updated 6 months ago
- Code for data-aware compression of DeepSeek models ☆66 · Updated 3 weeks ago
- Cold Compress is a hackable, lightweight, and open-source toolkit for creating and benchmarking cache compression methods built on top of… ☆146 · Updated last year
- Work in progress. ☆76 · Updated last month
- ☆114 · Updated last month
- Load compute kernels from the Hub ☆357 · Updated 3 weeks ago
- Code for studying the super weight in LLM ☆121 · Updated last year
- This repository contains the experimental PyTorch native float8 training UX ☆227 · Updated last year
- ☆92 · Updated last year
- Prune transformer layers ☆74 · Updated last year
- Fast, Modern, and Low Precision PyTorch Optimizers ☆119 · Updated last week
- Supporting PyTorch FSDP for optimizers ☆84 · Updated last year
- QuIP quantization ☆61 · Updated last year
- The evaluation framework for training-free sparse attention in LLMs ☆108 · Updated 2 months ago
- Official implementation for Training LLMs with MXFP4 ☆116 · Updated 8 months ago
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ☆181 · Updated 6 months ago
- Boosting 4-bit inference kernels with 2:4 Sparsity ☆90 · Updated last year
- ☆124 · Updated last year
- MoE training for Me and You and maybe other people ☆309 · Updated last week
- 🚀 Efficiently (pre)training foundation models with native PyTorch features, including FSDP for training and SDPA implementation of Flash… ☆277 · Updated last month
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU clusters ☆131 · Updated last year
- ☆114 · Updated 2 weeks ago
- Triton-based implementation of Sparse Mixture of Experts. ☆259 · Updated 3 months ago
- A fusion of a linear layer and a cross entropy loss, written for PyTorch in Triton. ☆74 · Updated last year
- ☆225 · Updated last month
- The source code of our work "Prepacking: A Simple Method for Fast Prefilling and Increased Throughput in Large Language Models" [AISTATS …] ☆60 · Updated last year
- Fast low-bit matmul kernels in Triton ☆416 · Updated 2 weeks ago
- Repo for "LoLCATs: On Low-Rank Linearizing of Large Language Models" ☆249 · Updated 11 months ago
- Muon FSDP 2 ☆47 · Updated 4 months ago