[NeurIPS'25 Spotlight] Adaptive Attention Sparsity with Hierarchical Top-p Pruning
☆94 · Updated Apr 20, 2026
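The title names the core idea: rather than keeping a fixed top-k of attention entries per query, prune to the smallest set of keys whose cumulative softmax mass reaches a threshold p. The snippet below is a minimal sketch of that top-p selection step only — it is flat rather than hierarchical, the function name `top_p_prune` is hypothetical, and leaving the surviving weights unrenormalized is a simplification, so treat it as an illustration rather than Twilight's actual kernel.

```python
import torch

def top_p_prune(attn_scores: torch.Tensor, p: float = 0.95) -> torch.Tensor:
    """Zero out attention weights outside the smallest top-p set per query.

    attn_scores: raw (pre-softmax) scores, shape [..., num_keys].
    Hypothetical helper for illustration; not Twilight's API.
    """
    probs = torch.softmax(attn_scores, dim=-1)
    sorted_probs, order = probs.sort(dim=-1, descending=True)
    cum_mass = sorted_probs.cumsum(dim=-1)
    # Keep a key if the mass accumulated *before* it is still below p,
    # so the entry that first crosses the threshold is retained too.
    keep_sorted = (cum_mass - sorted_probs) < p
    # Scatter the keep/drop decisions back to the original key order.
    mask = torch.zeros_like(probs).scatter(-1, order, keep_sorted.float())
    return probs * mask  # pruned weights; renormalize downstream if needed

# Example: one query over 8 keys.
scores = torch.randn(1, 8)
print(top_p_prune(scores, p=0.9))
```

The point of top-p over top-k is that the per-query budget adapts to the score distribution: a peaked distribution keeps very few keys while a flat one keeps many, which a fixed top-k cannot do.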
Alternatives and similar repositories for Twilight
Users interested in Twilight are comparing it to the libraries listed below.
- A WebUI for Side-by-Side Comparison of Media (Images/Videos) Across Multiple Folders · ☆27 · Updated Feb 21, 2025
- ArkVale: Efficient Generative LLM Inference with Recallable Key-Value Eviction (NIPS'24) · ☆53 · Updated Dec 17, 2024
- [ICLR 2025] TidalDecode: Fast and Accurate LLM Decoding with Position Persistent Sparse Attention · ☆53 · Updated Aug 6, 2025
- STREAMer: Benchmarking remote volatile and non-volatile memory bandwidth · ☆18 · Updated Aug 21, 2023
- A sparse attention kernel supporting mixed sparse patterns · ☆509 · Updated Jan 18, 2026
- A Survey of Efficient Attention Methods: Hardware-efficient, Sparse, Compact, and Linear Attention · ☆296 · Updated Dec 1, 2025
- ☆32 · Updated Mar 12, 2026
- [ICLR 2025 Spotlight] MagicPIG: LSH Sampling for Efficient LLM Generation · ☆253 · Updated Dec 16, 2024
- ☆248 · Updated Nov 19, 2025
- [SIGMOD 2025] PQCache: Product Quantization-based KVCache for Long Context LLM Inference · ☆88 · Updated Dec 7, 2025
- ☆20 · Updated Jun 17, 2024
- InfiniGen: Efficient Generative Inference of Large Language Models with Dynamic KV Cache Management (OSDI'24) · ☆184 · Updated Jul 10, 2024
- A deep learning intermediate representation for multi-platform compilation optimization · ☆10 · Updated Oct 28, 2024
- [ICML 2025, NeurIPS 2025 Spotlight] Sparse VideoGen 1 & 2: Accelerating Video Diffusion Transformers with Sparse Attention · ☆664 · Updated Mar 6, 2026
- Accelerating Large-Scale Reasoning Model Inference with Sparse Self-Speculative Decoding · ☆100 · Updated Dec 2, 2025
- Debug print operator for cudagraph debugging · ☆15 · Updated Aug 2, 2024
- [ICML 2025] XAttention: Block Sparse Attention with Antidiagonal Scoring · ☆277 · Updated Jul 6, 2025
- ☆19 · Updated Mar 11, 2025
- [ICML 2025] SpargeAttention: A training-free sparse attention method that accelerates inference for any model · ☆990 · Updated Feb 25, 2026
- Python Script to Open SJTU Dormitory Smart Lock · ☆10 · Updated Sep 12, 2022
- The Official Implementation of Ada-KV [NeurIPS 2025] · ☆132 · Updated Nov 26, 2025
- [ICML 2024] Quest: Query-Aware Sparsity for Efficient Long-Context LLM Inference · ☆384 · Updated Jul 10, 2025
- An HBM FPGA-based SpMV Accelerator · ☆18 · Updated Aug 29, 2024
- テスト姬 (评测姬, roughly "Test-hime / Evaluation-hime") · ☆34 · Updated this week
- This is the official Python version of CoreInfer: Accelerating Large Language Model Inference with Semantics-Inspired Adaptive Sparse Act… · ☆17 · Updated Oct 25, 2024
- ArcFlow: Unleashing 2-Step Text-to-Image Generation via High-Precision Non-Linear Flow Distillation · ☆121 · Updated Feb 17, 2026
- [ICLR 2025] Code and data for the paper: Not All Heads Matter: A Head-Level KV Cache Compression Method with Integrated Retrieval and Reasonin… · ☆42 · Updated Mar 10, 2025
- 16-fold memory access reduction with nearly no loss · ☆108 · Updated Mar 26, 2025
- Compare different hardware platforms via the Roofline Model for LLM inference tasks · ☆119 · Updated Mar 13, 2024
- [ICML 2025 Spotlight] ShadowKV: KV Cache in Shadows for High-Throughput Long-Context LLM Inference · ☆297 · Updated May 1, 2025
- ☆63 · Updated Jun 12, 2025
- Official implementation of "TailorKV: A Hybrid Framework for Long-Context Inference via Tailored KV Cache Optimization" (Findings of ACL … · ☆21 · Updated Jul 25, 2025
- LongProc: Benchmarking Long-Context Language Models on Long Procedural Generation · ☆34 · Updated Feb 26, 2026
- ClusterKV: Manipulating LLM KV Cache in Semantic Space for Recallable Compression (DAC'25) · ☆27 · Updated Feb 26, 2026
- [VLDB 26, NeurIPS 25] Scalable long-context LLM decoding that leverages sparsity by treating the KV cache as a vector storage system · ☆134 · Updated Feb 22, 2026
- [ACL 2025] Squeezed Attention: Accelerating Long Prompt LLM Inference · ☆60 · Updated Nov 20, 2024
- From Automated Idea Factory to Realization · ☆595 · Updated May 1, 2026
- TMMA: A Tiled Matrix Multiplication Accelerator for Self-Attention Projections in Transformer Models, optimized for edge deployment on Xi… · ☆31 · Updated Apr 7, 2026
- Reinforcement Learning Framework for Visual Generation · ☆108 · Updated Feb 13, 2026