[NeurIPS'25 Spotlight] Adaptive Attention Sparsity with Hierarchical Top-p Pruning
☆91 · Nov 29, 2025 · Updated 4 months ago
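Per the title, Twilight prunes attention with a top-p criterion: rather than a fixed token budget, it keeps the smallest set of keys whose attention mass reaches a threshold p, applied hierarchically. A minimal sketch of the core selection rule in plain PyTorch (the function name and single-query shape are illustrative assumptions, not the repo's actual API):

```python
import torch

def top_p_key_mask(scores: torch.Tensor, p: float = 0.95) -> torch.Tensor:
    """Select the smallest set of keys whose softmax attention mass >= p.

    scores: raw attention logits for one query, shape [seq_len].
    Returns a boolean mask over key positions. Hypothetical helper for
    illustration; the repo applies this criterion hierarchically in kernels.
    """
    probs = torch.softmax(scores.float(), dim=-1)
    sorted_probs, order = torch.sort(probs, descending=True)
    cumulative = torch.cumsum(sorted_probs, dim=-1)
    # Number of top entries needed before cumulative mass first reaches p.
    keep = int((cumulative < p).sum().item()) + 1
    mask = torch.zeros_like(probs, dtype=torch.bool)
    mask[order[:keep]] = True
    return mask
```

Because the budget adapts to the attention distribution, a flat distribution keeps many keys while a peaked one keeps few; a fixed top-k budget cannot do both.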
Alternatives and similar repositories for Twilight
Users interested in Twilight are comparing it to the libraries listed below.
- A WebUI for Side-by-Side Comparison of Media (Images/Videos) Across Multiple Folders ☆26 · Feb 21, 2025 · Updated last year
- ArkVale: Efficient Generative LLM Inference with Recallable Key-Value Eviction (NIPS'24) ☆52 · Dec 17, 2024 · Updated last year
- [ICLR 2025] TidalDecode: Fast and Accurate LLM Decoding with Position Persistent Sparse Attention ☆53 · Aug 6, 2025 · Updated 8 months ago
- STREAMer: Benchmarking remote volatile and non-volatile memory bandwidth ☆18 · Aug 21, 2023 · Updated 2 years ago
- A sparse attention kernel supporting mixed sparse patterns ☆497 · Jan 18, 2026 · Updated 3 months ago
- A Survey of Efficient Attention Methods: Hardware-efficient, Sparse, Compact, and Linear Attention ☆292 · Dec 1, 2025 · Updated 4 months ago
- ☆31 · Mar 12, 2026 · Updated last month
- ☆241 · Nov 19, 2025 · Updated 4 months ago
- [ICLR 2025 Spotlight] MagicPIG: LSH Sampling for Efficient LLM Generation ☆253 · Dec 16, 2024 · Updated last year
- [SIGMOD 2025] PQCache: Product Quantization-based KVCache for Long Context LLM Inference ☆85 · Dec 7, 2025 · Updated 4 months ago
- ☆20 · Jun 17, 2024 · Updated last year
- InfiniGen: Efficient Generative Inference of Large Language Models with Dynamic KV Cache Management (OSDI'24) ☆182 · Jul 10, 2024 · Updated last year
- A deep learning intermediate representation for multi-platform compilation optimization ☆10 · Oct 28, 2024 · Updated last year
- Accelerating Large-Scale Reasoning Model Inference with Sparse Self-Speculative Decoding ☆98 · Dec 2, 2025 · Updated 4 months ago
- ☆18 · Mar 11, 2025 · Updated last year
- Python Script to Open SJTU Dormitory Smart Lock ☆10 · Sep 12, 2022 · Updated 3 years ago
- [ICML 2025, NeurIPS 2025 Spotlight] Sparse VideoGen 1 & 2: Accelerating Video Diffusion Transformers with Sparse Attention ☆655 · Mar 6, 2026 · Updated last month
- The Official Implementation of Ada-KV [NeurIPS 2025] ☆131 · Nov 26, 2025 · Updated 4 months ago
- [ICML 2024] Quest: Query-Aware Sparsity for Efficient Long-Context LLM Inference ☆380 · Jul 10, 2025 · Updated 9 months ago
- An HBM FPGA-based SpMV Accelerator ☆18 · Aug 29, 2024 · Updated last year
- テスト姬 (评测姬, lit. "Test Princess") ☆34 · Mar 13, 2026 · Updated last month
- [ICML 2025] SpargeAttention: A training-free sparse attention method that accelerates inference for any model. ☆976 · Feb 25, 2026 · Updated last month
- This is the official Python version of CoreInfer: Accelerating Large Language Model Inference with Semantics-Inspired Adaptive Sparse Act… ☆17 · Oct 25, 2024 · Updated last year
- ArcFlow: Unleashing 2-Step Text-to-Image Generation via High-Precision Non-Linear Flow Distillation ☆117 · Feb 17, 2026 · Updated 2 months ago
- [ICLR 2025] Code and data for paper: Not All Heads Matter: A Head-Level KV Cache Compression Method with Integrated Retrieval and Reasonin… ☆42 · Mar 10, 2025 · Updated last year
- Compare different hardware platforms via the Roofline Model for LLM inference tasks (see the roofline sketch after this list). ☆118 · Mar 13, 2024 · Updated 2 years ago
- [ICML 2025 Spotlight] ShadowKV: KV Cache in Shadows for High-Throughput Long-Context LLM Inference ☆294 · May 1, 2025 · Updated 11 months ago
- ☆63 · Jun 12, 2025 · Updated 10 months ago
- Official implementation of "TailorKV: A Hybrid Framework for Long-Context Inference via Tailored KV Cache Optimization" (Findings of ACL … ☆21 · Jul 25, 2025 · Updated 8 months ago
- LongProc: Benchmarking Long-Context Language Models on Long Procedural Generation ☆33 · Feb 26, 2026 · Updated last month
- [VLDB 26, NeurIPS 25] Scalable long-context LLM decoding that leverages sparsity by treating the KV cache as a vector storage system. ☆135 · Feb 22, 2026 · Updated last month
- [ACL 2025] Squeezed Attention: Accelerating Long Prompt LLM Inference ☆59 · Nov 20, 2024 · Updated last year
- TMMA: A Tiled Matrix Multiplication Accelerator for Self-Attention Projections in Transformer Models, optimized for edge deployment on Xi… ☆31 · Apr 7, 2026 · Updated last week
- An experimentation platform for LLM inference optimisation ☆36 · Sep 19, 2024 · Updated last year
- [ICLR 2025] DuoAttention: Efficient Long-Context LLM Inference with Retrieval and Streaming Heads ☆536 · Feb 10, 2025 · Updated last year
- A Suite for Parallel Inference of Diffusion Transformers (DiTs) on multi-GPU Clusters ☆58 · Jul 23, 2024 · Updated last year
- [COLM 2024] TriForce: Lossless Acceleration of Long Sequence Generation with Hierarchical Speculative Decoding ☆279 · Aug 31, 2024 · Updated last year
- [HPCA'21] SpAtten: Efficient Sparse Attention Architecture with Cascade Token and Head Pruning ☆128 · Aug 27, 2024 · Updated last year
- Flexible and Pluggable Serving Engine for Diffusion LLMs ☆68 · Updated this week
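For the roofline comparison entry above: the Roofline Model is a one-line formula in which attainable throughput is the minimum of the compute peak and memory bandwidth times arithmetic intensity. A minimal sketch with illustrative numbers (not vendor specs):

```python
def attainable_tflops(peak_tflops: float, mem_bw_gbps: float,
                      arithmetic_intensity: float) -> float:
    """Roofline model: throughput is capped by compute or by memory bandwidth.

    arithmetic_intensity is in FLOPs per byte moved; mem_bw_gbps in GB/s.
    Returns attainable throughput in TFLOP/s.
    """
    # Memory-bound ceiling: GB/s * FLOP/B = GFLOP/s; divide by 1000 for TFLOP/s.
    memory_bound_tflops = mem_bw_gbps * arithmetic_intensity / 1000.0
    return min(peak_tflops, memory_bound_tflops)

# Decode-phase attention is close to a GEMV (~1 FLOP/B), so it sits far
# below the compute roof even on a high-bandwidth GPU (numbers illustrative):
print(attainable_tflops(peak_tflops=300.0, mem_bw_gbps=2000.0,
                        arithmetic_intensity=1.0))  # -> 2.0 TFLOP/s
```

This is why most entries on this page attack memory traffic (sparsity, KV-cache eviction, quantization) rather than FLOPs: decode-phase inference is usually bandwidth-bound.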