enyac-group / Quamba
The official repository of Quamba
☆17 · Updated 2 months ago
Alternatives and similar repositories for Quamba:
Users interested in Quamba are comparing it to the repositories listed below.
- Repository for Sparse Finetuning of LLMs via a modified version of the MosaicML llmfoundry ☆40 · Updated last year
- ☆23 · Updated 2 months ago
- Boosting 4-bit inference kernels with 2:4 Sparsity ☆64 · Updated 4 months ago
- ShiftAddLLM: Accelerating Pretrained LLMs via Post-Training Multiplication-Less Reparameterization ☆101 · Updated 3 months ago
- Repository for CPU Kernel Generation for LLM Inference ☆25 · Updated last year
- ☆58 · Updated last week
- A block-oriented training approach for inference-time optimization. ☆32 · Updated 5 months ago
- GPU operators for sparse tensor operations ☆30 · Updated 10 months ago
- Memory Optimizations for Deep Learning (ICML 2023) ☆62 · Updated 10 months ago
- [ICML 2024] When Linear Attention Meets Autoregressive Decoding: Towards More Effective and Efficient Linearized Large Language Models ☆28 · Updated 7 months ago
- A sparse attention kernel supporting mixed sparse patterns ☆98 · Updated 3 months ago
- ☆27 · Updated 10 months ago
- Patch convolution to avoid large GPU memory usage of Conv2D ☆82 · Updated last week
- SliM-LLM: Salience-Driven Mixed-Precision Quantization for Large Language Models ☆26 · Updated 5 months ago
- [ICML 2024 Oral] This project is the official implementation of our Accurate LoRA-Finetuning Quantization of LLMs via Information Retenti… ☆60 · Updated 9 months ago
- ACL 2023 ☆38 · Updated last year
- FlexAttention w/ FlashAttention3 Support ☆27 · Updated 3 months ago
- ☆31 · Updated 6 months ago
- ☆97 · Updated 5 months ago
- [EMNLP 2024] RoLoRA: Fine-tuning Rotated Outlier-free LLMs for Effective Weight-Activation Quantization ☆29 · Updated 4 months ago
- Experiment of using Tangent to autodiff triton ☆74 · Updated last year
- 16-fold memory access reduction with nearly no loss ☆72 · Updated 2 months ago
- Implementation of IceFormer: Accelerated Inference with Long-Sequence Transformers on CPUs (ICLR 2024) ☆22 · Updated 7 months ago
- ☆39 · Updated last year
- Fast Hadamard transform in CUDA, with a PyTorch interface ☆135 · Updated 8 months ago
- ☆100 · Updated last month
- Fast and memory-efficient exact attention ☆57 · Updated last month
- ☆157 · Updated last year
- ☆21 · Updated 6 months ago
- Tritonbench is a collection of PyTorch custom operators with example inputs to measure their performance. ☆86 · Updated this week