Zyphra / tree_attention
Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU clusters
☆126 · Updated 5 months ago
Alternatives and similar repositories for tree_attention
Users interested in tree_attention are comparing it to the libraries listed below.
- Layer-Condensed KV cache w/ 10 times larger batch size, fewer params and less computation. Dramatic speed up with better task performance… ☆149 · Updated last month
- Code for exploring Based models from "Simple linear attention language models balance the recall-throughput tradeoff" ☆233 · Updated 3 months ago
- Fast and memory-efficient exact attention ☆68 · Updated 2 months ago
- Repo for "LoLCATs: On Low-Rank Linearizing of Large Language Models" ☆237 · Updated 4 months ago
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ☆126 · Updated 3 weeks ago
- ☆79 · Updated 9 months ago
- Token Omission Via Attention ☆126 · Updated 7 months ago
- Block Transformer: Global-to-Local Language Modeling for Fast Inference (NeurIPS 2024) ☆155 · Updated last month
- A MAD laboratory to improve AI architecture designs 🧪 ☆116 · Updated 5 months ago
- RWKV-7: Surpassing GPT ☆88 · Updated 6 months ago
- The simplest implementation of recent Sparse Attention patterns for efficient LLM inference. ☆62 · Updated 4 months ago
- ☆92 · Updated 8 months ago
- Triton-based implementation of Sparse Mixture of Experts. ☆216 · Updated 6 months ago
- Understand and test language model architectures on synthetic tasks. ☆195 · Updated 2 months ago
- ☆44 · Updated last year
- ☆78 · Updated 10 months ago
- Load compute kernels from the Hub ☆139 · Updated this week
- The source code of our work "Prepacking: A Simple Method for Fast Prefilling and Increased Throughput in Large Language Models" [AISTATS … ☆59 · Updated 7 months ago
- ☆129 · Updated 3 months ago
- PyTorch implementation of models from the Zamba2 series. ☆181 · Updated 4 months ago
- ☆50 · Updated 7 months ago
- ☆68 · Updated 10 months ago
- ☆197 · Updated 5 months ago
- An efficient implementation of the method proposed in "The Era of 1-bit LLMs" ☆153 · Updated 7 months ago
- A scalable asynchronous reinforcement learning implementation with in-flight weight updates. ☆117 · Updated this week
- Work in progress. ☆67 · Updated this week
- Cold Compress is a hackable, lightweight, and open-source toolkit for creating and benchmarking cache compression methods built on top of… ☆132 · Updated 9 months ago
- Pytorch implementation of the PEER block from the paper, Mixture of A Million Experts, by Xu Owen He at Deepmind ☆124 · Updated 9 months ago
- Normalized Transformer (nGPT) ☆181 · Updated 6 months ago
- EvaByte: Efficient Byte-level Language Models at Scale ☆98 · Updated last month