Training-free Post-training Efficient Sub-quadratic Complexity Attention. Implemented with OpenAI Triton.
☆150, updated Nov 3, 2025
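As a rough illustration of the general idea behind sub-quadratic sparse attention (a hedged NumPy sketch only; hip-attention's actual algorithm is implemented as Triton GPU kernels and differs in its selection mechanism, so every name and parameter below is an assumption for illustration): each query block attends only to a few key blocks chosen by a coarse score, so compute scales with the number of selected blocks rather than with full sequence length.

```python
import numpy as np

def block_sparse_attention(q, k, v, block=16, top_k=2):
    """Illustrative top-k block-sparse attention (NOT hip-attention's algorithm).

    Each query block attends only to its top_k key blocks, chosen by a
    coarse score against mean-pooled keys, so per-block cost depends on
    top_k rather than on the full sequence length.
    """
    n, d = q.shape
    nb = n // block
    # Coarse representation: the mean vector of each key block.
    k_pooled = k.reshape(nb, block, d).mean(axis=1)          # (nb, d)
    out = np.zeros_like(q)
    for i in range(nb):
        qb = q[i * block:(i + 1) * block]                    # (block, d)
        # Score this query block against pooled keys; keep top_k key blocks.
        coarse = qb.mean(axis=0) @ k_pooled.T                # (nb,)
        sel = np.argsort(coarse)[-top_k:]
        ks = np.concatenate([k[j * block:(j + 1) * block] for j in sel])
        vs = np.concatenate([v[j * block:(j + 1) * block] for j in sel])
        # Dense softmax attention restricted to the selected key blocks.
        logits = qb @ ks.T / np.sqrt(d)
        w = np.exp(logits - logits.max(axis=-1, keepdims=True))
        w /= w.sum(axis=-1, keepdims=True)
        out[i * block:(i + 1) * block] = w @ vs
    return out

# Usage: 64 tokens, 8-dim head; each 16-token query block reads 2 key blocks.
rng = np.random.default_rng(0)
q, k, v = (rng.standard_normal((64, 8)) for _ in range(3))
o = block_sparse_attention(q, k, v)
print(o.shape)  # (64, 8)
```

With `top_k` fixed, the work per query block is constant, giving roughly O(n · top_k · block · d) total instead of the O(n² · d) of dense attention; production kernels fuse the selection and attention steps on the GPU rather than looping in Python.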
Alternatives and similar repositories for hip-attention
Users interested in hip-attention are comparing it to the libraries listed below.
- This is a fork of SGLang for hip-attention integration. Please refer to hip-attention for details. (☆18, updated Dec 23, 2025)
- Official Implementation of SEA: Sparse Linear Attention with Estimated Attention Mask (ICLR 2024) (☆11, updated Jun 20, 2025)
- [ACL 2021] Learning to Perturb Word Embeddings for Out-of-distribution QA (☆16, updated May 11, 2022)
- This is an official PyTorch implementation of Task-Adaptive Neural Network Search with Meta-Contrastive Learning (NeurIPS 2021, Spotlight… (☆19, updated Nov 24, 2023)
- Implementation of SmoothCache, a project aimed at speeding up Diffusion Transformer (DiT) based GenAI models with error-guided caching. (☆48, updated Jul 17, 2025)
- Implementation for the paper "CMoE: Fast Carving of Mixture-of-Experts for Efficient LLM Inference" (☆35, updated Mar 6, 2025)
- Upscale a Twitch stream and restream it to Twitch, an RTMP endpoint, or a file. (☆16, updated Sep 16, 2023)
- [ICLR 2023] RC-MAE (☆52, updated Dec 18, 2023)
- Layer-Condensed KV cache w/ 10 times larger batch size, fewer params and less computation. Dramatic speed up with better task performance… (☆157, updated Apr 7, 2025)
- The official implementation of the paper "SimLayerKV: A Simple Framework for Layer-Level KV Cache Reduction" (☆50, updated Oct 18, 2024)
- Official Code Repository for the paper "Generative Modeling on Manifolds Through Mixture of Riemannian Diffusion Processes" (ICML 2024) (☆15, updated Jul 21, 2024)
- Automated Design of Agentic Systems (☆10, updated Sep 7, 2024)
- DPO, but faster 🚀 (☆49, updated Dec 6, 2024)
- Implementation applying Weight Agnostic Neural Networks to Spiking Neural Networks (☆10, updated Jan 26, 2021)
- PyTorch implementation of "MetaPerturb: Transferable Regularizer for Heterogeneous Tasks and Architectures" (NeurIPS 2020 spotlight) (☆13, updated Jul 22, 2021)
- Fast Matrix Multiplications for Lookup Table-Quantized LLMs (☆389, updated Apr 13, 2025)
- [NeurIPS'24 Spotlight, ICLR'25, ICML'25] Speeds up long-context LLM inference with approximate and dynamic sparse computation of the attention… (☆1,198, updated Mar 9, 2026)
- Work in progress. (☆79, updated Nov 25, 2025)
- Official repository for the paper "NeuZip: Memory-Efficient Training and Inference with Dynamic Compression of Neural Networks". This rep… (☆60, updated Oct 31, 2024)
- Q-GaLore: Quantized GaLore with INT4 Projection and Layer-Adaptive Low-Rank Gradients (☆204, updated Jul 17, 2024)
- Implements several methods for LLM KV cache sparsity (☆41, updated Jun 6, 2024)
- [ECCV 2024] Efficient Inference of Vision Instruction-Following Models with Elastic Cache (☆43, updated Jul 26, 2024)
- (☆15, updated Sep 22, 2024)
- [ICCV 2025] Dynamic-VLM (☆28, updated Dec 16, 2024)
- [EMNLP 2024] Official implementation of "Hierarchical Deconstruction of LLM Reasoning: A Graph-Based Framework for Analyzing Knowledge Ut… (☆23, updated Dec 4, 2024)
- Proteus is an experimental platform that combines the power of Large Language Models with the Genesis physics engine (☆26, updated Dec 20, 2024)
- Official Code Repository for the paper "Knowledge-Augmented Reasoning Distillation for Small Language Models in Knowledge-intensive Tasks… (☆44, updated Nov 24, 2024)
- PyTorch implementation of "Large-Scale Meta-Learning with Continual Trajectory Shifting" (ICML 2021) (☆19, updated Jul 22, 2021)
- [EMNLP 2025] Code for the paper "Table-R1: Inference-Time Scaling for Table Reasoning" (☆29, updated Jun 3, 2025)
- [COLM 2024] TriForce: Lossless Acceleration of Long Sequence Generation with Hierarchical Speculative Decoding (☆278, updated Aug 31, 2024)
- A Chrome extension that rewrites awkward Korean sentences into natural ones. (☆36, updated Jan 3, 2021)
- [ICML 2024] Quest: Query-Aware Sparsity for Efficient Long-Context LLM Inference (☆377, updated Jul 10, 2025)
- [ACL 2024] RelayAttention for Efficient Large Language Model Serving with Long System Prompts (☆40, updated Feb 29, 2024)
- (☆29, updated Jun 5, 2025)
- Triton kernels for Flux (☆22, updated Jul 7, 2025)
- Official repository for KoMT-Bench, built by LG AI Research (☆71, updated Aug 8, 2024)
- Prepare for DeepSeek R1 inference: benchmark CPU, DRAM, SSD, iGPU, GPU, ... with efficient code. (☆73, updated Feb 2, 2025)
- Code for the paper "Long cOntext aliGnment via efficient preference Optimization" (☆24, updated Oct 10, 2025)
- (☆87, updated Jan 23, 2025)