goodevening13 / aquakv
☆16 · Updated this week
Alternatives and similar repositories for aquakv
Users interested in aquakv are comparing it to the libraries listed below.
- Load compute kernels from the Hub ☆290 · Updated last week
- ☆153 · Updated 3 months ago
- Work in progress. ☆74 · Updated 3 months ago
- ☆89 · Updated last year
- The simplest implementation of recent Sparse Attention patterns for efficient LLM inference. ☆89 · Updated 2 months ago
- Cold Compress is a hackable, lightweight, and open-source toolkit for creating and benchmarking cache compression methods built on top of… ☆146 · Updated last year
- This repository contains the experimental PyTorch native float8 training UX ☆224 · Updated last year
- ☆98 · Updated last month
- The evaluation framework for training-free sparse attention in LLMs ☆98 · Updated 3 months ago
- Prune transformer layers ☆69 · Updated last year
- A repository to unravel the language of GPUs, making their kernel conversations easy to understand ☆193 · Updated 4 months ago
- Code for studying the super weight in LLM ☆119 · Updated 10 months ago
- Triton-based implementation of Sparse Mixture of Experts. ☆241 · Updated last month
- ☆221 · Updated 7 months ago
- Supporting PyTorch FSDP for optimizers ☆84 · Updated 9 months ago
- QuIP quantization ☆60 · Updated last year
- Experiment of using Tangent to autodiff Triton ☆81 · Updated last year
- nanoGPT-like codebase for LLM training ☆107 · Updated 4 months ago
- Code for data-aware compression of DeepSeek models ☆54 · Updated 3 months ago
- ☆15 · Updated 2 years ago
- Official implementation of the paper "Linear Transformers with Learnable Kernel Functions are Better In-Context Models" ☆163 · Updated 8 months ago
- Repo for "LoLCATs: On Low-Rank Linearizing of Large Language Models" ☆246 · Updated 8 months ago
- Fast low-bit matmul kernels in Triton ☆373 · Updated last week
- ☆122 · Updated last year
- A library for unit scaling in PyTorch ☆130 · Updated 2 months ago
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ☆164 · Updated 3 months ago
- Explore training for quantized models ☆24 · Updated 2 months ago
- Code for exploring Based models from "Simple linear attention language models balance the recall-throughput tradeoff" ☆240 · Updated 3 months ago
- Learn CUDA with PyTorch ☆84 · Updated last week
- Efficient optimizers ☆265 · Updated last week