☆32 · Nov 11, 2024 · Updated last year
Alternatives and similar repositories for CATS
Users who are interested in CATS are comparing it to the libraries listed below.
- ☆162 · Feb 15, 2025 · Updated last year
- ☆14 · Jun 4, 2024 · Updated last year
- [EMNLP 2024] Quantize LLMs to extremely low bit-widths, and finetune the quantized LLMs · ☆15 · Jul 18, 2024 · Updated last year
- RL with Experience Replay · ☆55 · Jul 27, 2025 · Updated 7 months ago
- ☆39 · Aug 27, 2024 · Updated last year
- ☆25 · Oct 31, 2024 · Updated last year
- ☆13 · Jan 15, 2025 · Updated last year
- Low-Rank Llama Custom Training · ☆23 · Mar 27, 2024 · Updated last year
- ☆56 · Jul 7, 2025 · Updated 8 months ago
- ☆12 · Jul 25, 2023 · Updated 2 years ago
- SpInfer: Leveraging Low-Level Sparsity for Efficient Large Language Model Inference on GPUs · ☆62 · Mar 25, 2025 · Updated 11 months ago
- [EMNLP 25] An effective and interpretable weight-editing method for mitigating overly short reasoning in LLMs, and a mechanistic study un… · ☆17 · Dec 17, 2025 · Updated 3 months ago
- ☆15 · Nov 7, 2024 · Updated last year
- ☆21 · Mar 7, 2024 · Updated 2 years ago
- Code repository for "RL Grokking Recipe: How RL Unlocks and Transfers New Algorithms in LLMs" · ☆31 · Oct 12, 2025 · Updated 5 months ago
- Materials for season 3 (2022/23) of the UCL Artificial Intelligence Society's machine learning tutorial series · ☆12 · Mar 8, 2023 · Updated 3 years ago
- GPU operators for sparse tensor operations · ☆35 · Mar 11, 2024 · Updated 2 years ago
- The open-source materials for the paper "Sparsing Law: Towards Large Language Models with Greater Activation Sparsity" · ☆30 · Nov 12, 2024 · Updated last year
- ☆36 · Aug 27, 2025 · Updated 6 months ago
- ☆32 · Aug 24, 2022 · Updated 3 years ago
- ☆53 · Oct 29, 2024 · Updated last year
- This repository contains code for the MicroAdam paper · ☆21 · Dec 14, 2024 · Updated last year
- Tender: Accelerating Large Language Models via Tensor Decomposition and Runtime Requantization (ISCA'24) · ☆31 · Jul 4, 2024 · Updated last year
- Triton-based implementation of Sparse Mixture of Experts · ☆270 · Oct 3, 2025 · Updated 5 months ago
- Official Pytorch Implementation of "Outlier Weighed Layerwise Sparsity (OWL): A Missing Secret Sauce for Pruning LLMs to High Sparsity" · ☆81 · Jul 7, 2025 · Updated 8 months ago
- SLiM: One-shot Quantized Sparse Plus Low-rank Approximation of LLMs (ICML 2025) · ☆35 · Nov 28, 2025 · Updated 3 months ago
- My Implementation of Q-Sparse: All Large Language Models can be Fully Sparsely-Activated · ☆34 · Aug 14, 2024 · Updated last year
- [ICLR 2025] Official implementation of the paper "Dynamic Low-Rank Sparse Adaptation for Large Language Models" · ☆24 · Mar 16, 2025 · Updated last year
- [NAACL 2025] A Closer Look into Mixture-of-Experts in Large Language Models · ☆61 · Feb 7, 2025 · Updated last year
- Official Code For Dual Grained Quantization: Efficient Fine-Grained Quantization for LLM · ☆14 · Dec 27, 2023 · Updated 2 years ago
- ☆166 · Jul 22, 2024 · Updated last year
- Minimum viable code for the Decodable Information Bottleneck paper. Pytorch Implementation · ☆11 · Oct 20, 2020 · Updated 5 years ago
- ☆40 · Apr 3, 2022 · Updated 3 years ago
- Code for "RSQ: Learning from Important Tokens Leads to Better Quantized LLMs" · ☆21 · Updated this week
- The code for "AttentionPredictor: Temporal Pattern Matters for Efficient LLM Inference", Qingyue Yang, Jie Wang, Xing Li, Zhihai Wang, Ch… · ☆28 · Jul 15, 2025 · Updated 8 months ago
- Explore visualization tools for understanding Transformer-based large language models (LLMs) · ☆22 · Dec 1, 2024 · Updated last year
- [ICML 2024 Spotlight] Fine-Tuning Pre-trained Large Language Models Sparsely · ☆24 · Jun 26, 2024 · Updated last year
- [NeurIPS 2024] Low-rank memory-efficient optimizer without SVD · ☆33 · Jul 1, 2025 · Updated 8 months ago
- [NeurIPS '25] Multi-Token Prediction Needs Registers · ☆28 · Dec 14, 2025 · Updated 3 months ago