TEAL (☆159, updated Feb 15, 2025)
Alternatives and similar repositories for TEAL
Users interested in TEAL are comparing it to the libraries listed below; a brief illustrative sketch of the list's common theme, training-free activation sparsity, follows the list.
- [ACL 2025] Squeezed Attention: Accelerating Long Prompt LLM Inference (☆57, updated Nov 20, 2024)
- SpInfer: Leveraging Low-Level Sparsity for Efficient Large Language Model Inference on GPUs (☆60, updated Mar 25, 2025)
- [ICML 2024] Quest: Query-Aware Sparsity for Efficient Long-Context LLM Inference (☆372, updated Jul 10, 2025)
- Official PyTorch implementation of "Outlier Weighed Layerwise Sparsity (OWL): A Missing Secret Sauce for Pruning LLMs to High Sparsity" (☆81, updated Jul 7, 2025)
- A sparse attention kernel supporting mixed sparse patterns (☆467, updated Jan 18, 2026)
- 16-fold memory access reduction with nearly no loss (☆109, updated Mar 26, 2025)
- [NeurIPS 24 Spotlight] MaskLLM: Learnable Semi-structured Sparsity for Large Language Models (☆187, updated Jan 1, 2025)
- An algorithm for weight-activation quantization (W4A4, W4A8) of LLMs, supporting both static and dynamic quantization (☆172, updated Nov 26, 2025)
- Fast Matrix Multiplications for Lookup Table-Quantized LLMs (☆388, updated Apr 13, 2025)
- Fast and Robust Early-Exiting Framework for Autoregressive Language Models with Synchronized Parallel Decoding (EMNLP 2023 Long) (☆65, updated Sep 28, 2024)
- [AAAI 2024] Fluctuation-based Adaptive Structured Pruning for Large Language Models (☆70, updated Jan 6, 2024)
- KV cache compression for high-throughput LLM inference (☆154, updated Feb 5, 2025)
- [ICML 2025] XAttention: Block Sparse Attention with Antidiagonal Scoring (☆269, updated Jul 6, 2025)
- Fast Hadamard transform in CUDA, with a PyTorch interface (☆285, updated Oct 19, 2025)
- [MLSys'25] QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving; [MLSys'25] LServe: Efficient Long-sequence LLM Se… (☆816, updated Mar 6, 2025)
- Boosting 4-bit inference kernels with 2:4 Sparsity (☆93, updated Sep 4, 2024)
- PyTorch implementation of the ICML 2024 paper "CaM: Cache Merging for Memory-efficient LLMs Inference" (☆48, updated Jun 19, 2024)
- Official implementation of the ACL 2023 paper "Don't Parse, Choose Spans! Continuous and Discontinuous Constituency Parsing via Autoregressive Span …" (☆14, updated Aug 25, 2023)
- Code for the paper "QMoE: Practical Sub-1-Bit Compression of Trillion-Parameter Models" (☆279, updated Nov 3, 2023)
- Fast low-bit matmul kernels in Triton (☆433, updated Feb 1, 2026)
- The open-source materials for the paper "Sparsing Law: Towards Large Language Models with Greater Activation Sparsity" (☆30, updated Nov 12, 2024)
- [ICLR 2025] DuoAttention: Efficient Long-Context LLM Inference with Retrieval and Streaming Heads (☆524, updated Feb 10, 2025)
- [ICML 2025 Spotlight] ShadowKV: KV Cache in Shadows for High-Throughput Long-Context LLM Inference (☆283, updated May 1, 2025)
- Code for the paper "FlexPrefill: A Context-Aware Sparse Attention Mechanism for Efficient Long-Sequence Inference" [ICLR 2025 Oral] (☆160, updated Oct 13, 2025)
- [NeurIPS 2023] Sparse Modular Activation for Efficient Sequence Modeling (☆40, updated Dec 2, 2023)
- [ICML 2024] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding (☆1,315, updated Mar 6, 2025)
- Kernel Library Wheel for SGLang (☆17, updated this week)
- GoldFinch and other hybrid transformer components (☆12, updated Dec 9, 2025)
- Code for the NeurIPS 2024 paper "QuaRot": end-to-end 4-bit inference of large language models (☆483, updated Nov 26, 2024)
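Since the listing above names these projects without describing their methods, here is a minimal, hypothetical sketch of the theme they share with TEAL: training-free, magnitude-based activation sparsity, assuming PyTorch. The function name, the per-row quantile threshold, and the 50% sparsity level are illustrative assumptions, not TEAL's actual implementation.

```python
import torch

def sparsify_activations(x: torch.Tensor, sparsity: float = 0.5) -> torch.Tensor:
    """Zero out the lowest-magnitude fraction of activations in each row.

    Illustrative only: a magnitude threshold is taken at the requested
    quantile of |x| per row, and entries below it are dropped.
    """
    thresholds = x.abs().quantile(sparsity, dim=-1, keepdim=True)
    return torch.where(x.abs() >= thresholds, x, torch.zeros_like(x))

# Example: sparsify a batch of hidden states before a linear projection.
hidden = torch.randn(4, 4096)
sparse_hidden = sparsify_activations(hidden, sparsity=0.5)
print((sparse_hidden == 0).float().mean().item())  # ~0.5
```

In practice, kernels like those in the repositories above exploit the resulting zeros to skip memory traffic and compute; this sketch shows only the thresholding step.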