☆32 · Nov 11, 2024 · Updated last year
Alternatives and similar repositories for CATS
Users interested in CATS are comparing it to the libraries listed below.
- ☆159 · Feb 15, 2025 · Updated last year
- [EMNLP 2024] Quantize LLM to extremely low-bit, and finetune the quantized LLMs · ☆15 · Jul 18, 2024 · Updated last year
- ☆14 · Jun 4, 2024 · Updated last year
- RL with Experience Replay · ☆55 · Jul 27, 2025 · Updated 7 months ago
- ☆39 · Aug 27, 2024 · Updated last year
- Low-Rank Llama Custom Training · ☆23 · Mar 27, 2024 · Updated last year
- ☆25 · Oct 31, 2024 · Updated last year
- ☆15 · Nov 7, 2024 · Updated last year
- Official Code For Dual Grained Quantization: Efficient Fine-Grained Quantization for LLM · ☆14 · Dec 27, 2023 · Updated 2 years ago
- [EMNLP 25] An effective and interpretable weight-editing method for mitigating overly short reasoning in LLMs, and a mechanistic study un… · ☆17 · Dec 17, 2025 · Updated 2 months ago
- Code repository for "RL Grokking Recipe: How RL Unlocks and Transfers New Algorithms in LLMs" · ☆30 · Oct 12, 2025 · Updated 4 months ago
- [ICML 2021] "Auto-NBA: Efficient and Effective Search Over the Joint Space of Networks, Bitwidths, and Accelerators" by Yonggan Fu, Yonga… · ☆16 · Jan 3, 2022 · Updated 4 years ago
- [NeurIPS '25] Multi-Token Prediction Needs Registers · ☆26 · Dec 14, 2025 · Updated 2 months ago
- SLiM: One-shot Quantized Sparse Plus Low-rank Approximation of LLMs (ICML 2025) · ☆32 · Nov 28, 2025 · Updated 3 months ago
- ☆21 · Mar 7, 2024 · Updated last year
- ☆35 · Oct 23, 2025 · Updated 4 months ago
- Tender: Accelerating Large Language Models via Tensor Decomposition and Runtime Requantization (ISCA '24) · ☆25 · Jul 4, 2024 · Updated last year
- This repository contains code for the MicroAdam paper. · ☆21 · Dec 14, 2024 · Updated last year
- ☆34 · Aug 27, 2025 · Updated 6 months ago
- Official PyTorch implementation of "Outlier Weighed Layerwise Sparsity (OWL): A Missing Secret Sauce for Pruning LLMs to High Sparsity" · ☆81 · Jul 7, 2025 · Updated 7 months ago
- The open-source materials for the paper "Sparsing Law: Towards Large Language Models with Greater Activation Sparsity". · ☆30 · Nov 12, 2024 · Updated last year
- The code for "AttentionPredictor: Temporal Pattern Matters for Efficient LLM Inference", Qingyue Yang, Jie Wang, Xing Li, Zhihai Wang, Ch… · ☆28 · Jul 15, 2025 · Updated 7 months ago
- Code for "RSQ: Learning from Important Tokens Leads to Better Quantized LLMs" · ☆20 · Jun 11, 2025 · Updated 8 months ago
- ☆19 · Jan 3, 2025 · Updated last year
- ☆20 · Jun 3, 2023 · Updated 2 years ago
- ☆54 · Oct 29, 2024 · Updated last year
- [ICML 2024 Spotlight] Fine-Tuning Pre-trained Large Language Models Sparsely · ☆24 · Jun 26, 2024 · Updated last year
- SpInfer: Leveraging Low-Level Sparsity for Efficient Large Language Model Inference on GPUs · ☆60 · Mar 25, 2025 · Updated 11 months ago
- [ICLR 2025🔥] SVD-LLM & [NAACL 2025🔥] SVD-LLM V2 · ☆281 · Aug 28, 2025 · Updated 6 months ago
- Official implementation of "MetaLA: Unified Optimal Linear Approximation to Softmax Attention Map" (NeurIPS 2024 Oral) · ☆34 · Jan 18, 2025 · Updated last year
- My implementation of Q-Sparse: All Large Language Models Can Be Fully Sparsely-Activated · ☆33 · Aug 14, 2024 · Updated last year
- Unofficial implementations of block/layer-wise pruning methods for LLMs · ☆77 · Apr 29, 2024 · Updated last year
- ☆33 · Dec 17, 2025 · Updated 2 months ago
- 🎬 3.7× faster video generation E2E 🖼️ 1.6× faster image generation E2E ⚡ ColumnSparseAttn 9.3× vs FlashAttn‑3 💨 ColumnSparseGEMM 2.5× … · ☆101 · Sep 8, 2025 · Updated 5 months ago
- GPU operators for sparse tensor operations · ☆35 · Mar 11, 2024 · Updated last year
- Repository for the Q-Filters method (https://arxiv.org/pdf/2503.02812) · ☆35 · Mar 7, 2025 · Updated 11 months ago
- Preparing for ML interviews · ☆54 · Jan 12, 2026 · Updated last month
- ☆42 · Apr 22, 2025 · Updated 10 months ago
- Landing repository for the paper "Softpick: No Attention Sink, No Massive Activations with Rectified Softmax" · ☆87 · Sep 12, 2025 · Updated 5 months ago