Zyphra / tree_attention
Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU clusters
☆132 · Dec 3, 2024 · Updated last year
Alternatives and similar repositories for tree_attention
Users interested in tree_attention are comparing it to the libraries listed below.
- This is a fork of SGLang for hip-attention integration. Please refer to hip-attention for details. ☆18 · Dec 23, 2025 · Updated last month
- Linear Attention Sequence Parallelism (LASP) ☆88 · Jun 4, 2024 · Updated last year
- ☆303 · Jul 10, 2025 · Updated 7 months ago
- FlexAttention w/ FlashAttention3 Support ☆27 · Oct 5, 2024 · Updated last year
- Ring attention implementation with flash attention ☆980 · Sep 10, 2025 · Updated 5 months ago
- Long Context Extension and Generalization in LLMs ☆62 · Sep 21, 2024 · Updated last year
- LLM KV cache compression made easy ☆876 · Jan 28, 2026 · Updated 2 weeks ago
- A repository for research on medium-sized language models. ☆77 · May 23, 2024 · Updated last year
- ☆56 · Nov 6, 2024 · Updated last year
- Code and dataset for the paper "IsarStep: a Benchmark for High-level Mathematical Reasoning" ☆12 · Mar 15, 2021 · Updated 4 years ago
- Latent Large Language Models ☆19 · Aug 24, 2024 · Updated last year
- Layer-Condensed KV cache w/ 10 times larger batch size, fewer params and less computation. Dramatic speed up with better task performance… ☆157 · Apr 7, 2025 · Updated 10 months ago
- Repo for "LoLCATs: On Low-Rank Linearizing of Large Language Models" ☆251 · Jan 31, 2025 · Updated last year
- Large Context Attention ☆766 · Oct 13, 2025 · Updated 4 months ago
- Measuring the Signal to Noise Ratio in Language Model Evaluation ☆28 · Aug 19, 2025 · Updated 5 months ago
- [NeurIPS'24 Spotlight, ICLR'25, ICML'25] To speed up long-context LLMs' inference, approximate and dynamic sparse calculation of the attention… ☆1,183 · Sep 30, 2025 · Updated 4 months ago
- OpenDiLoCo: An Open-Source Framework for Globally Distributed Low-Communication Training ☆562 · Jan 13, 2025 · Updated last year
- [ACL 24 Findings] Implementation of Resonance RoPE and the PosGen synthetic dataset. ☆24 · Mar 5, 2024 · Updated last year
- Online Preference Alignment for Language Models via Count-based Exploration ☆17 · Jan 14, 2025 · Updated last year
- 🐳 Efficient Triton implementations for "Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention" ☆964 · Feb 5, 2026 · Updated last week
- 🤖FFPA: Extend FlashAttention-2 with Split-D, ~O(1) SRAM complexity for large headdim, 1.8x~3x↑🎉 vs SDPA EA. ☆250 · Feb 5, 2026 · Updated last week
- Prepare for DeepSeek R1 inference: Benchmark CPU, DRAM, SSD, iGPU, GPU, ... with efficient code. ☆74 · Feb 2, 2025 · Updated last year
- Training hybrid models for dummies. ☆29 · Nov 1, 2025 · Updated 3 months ago
- Official repo of dataset-decomposition paper [NeurIPS 2024] ☆21 · Jan 8, 2025 · Updated last year
- Efficient LLM Inference over Long Sequences ☆394 · Jun 25, 2025 · Updated 7 months ago
- ring-attention experiments ☆165 · Oct 17, 2024 · Updated last year
- Lightning Attention-2: A Free Lunch for Handling Unlimited Sequence Lengths in Large Language Models ☆341 · Feb 23, 2025 · Updated 11 months ago
- Block Transformer: Global-to-Local Language Modeling for Fast Inference (NeurIPS 2024) ☆163 · Apr 13, 2025 · Updated 10 months ago
- Sparse Backpropagation for Mixture-of-Expert Training ☆29 · Jul 2, 2024 · Updated last year
- [ICLR 2025] DuoAttention: Efficient Long-Context LLM Inference with Retrieval and Streaming Heads ☆524 · Feb 10, 2025 · Updated last year
- The official implementation of paper: SimLayerKV: A Simple Framework for Layer-Level KV Cache Reduction. ☆52 · Oct 18, 2024 · Updated last year
- ☆33 · Mar 1, 2023 · Updated 2 years ago
- Code repository for Black Mamba ☆261 · Feb 8, 2024 · Updated 2 years ago
- Minimal (400 LOC) implementation, maximum (multi-node, FSDP) GPT training ☆132 · Apr 17, 2024 · Updated last year
- HomebrewNLP in JAX flavour for maintainable TPU training ☆51 · Jan 20, 2024 · Updated 2 years ago
- ☆34 · May 14, 2025 · Updated 9 months ago
- JAX bindings for Flash Attention v2 ☆103 · Feb 5, 2026 · Updated last week
- ☆53 · May 20, 2024 · Updated last year
- Code for paper: Long cOntext aliGnment via efficient preference Optimization ☆24 · Oct 10, 2025 · Updated 4 months ago