thu-ml / 2by4-pretrain-acc-examples
Code for "Accelerating Transformer Pre-training with 2:4 Sparsity"
☆24 · Updated 8 months ago
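For context, 2:4 (semi-structured) sparsity keeps at most two non-zero values in every group of four consecutive weights, which is the pattern NVIDIA sparse tensor cores can accelerate. Below is a minimal PyTorch sketch of building such a mask; it is illustrative only, not taken from this repository, and the helper name `two_four_mask` is hypothetical.

```python
import torch

def two_four_mask(weight: torch.Tensor) -> torch.Tensor:
    """Return a boolean mask keeping the 2 largest-magnitude weights
    in every group of 4 along the last dimension (2:4 sparsity)."""
    assert weight.shape[-1] % 4 == 0, "last dim must be divisible by 4"
    # Group the last dimension into chunks of 4 and rank by magnitude.
    groups = weight.abs().reshape(*weight.shape[:-1], -1, 4)
    topk = groups.topk(2, dim=-1).indices
    # Mark the top-2 positions in each group of 4 as kept.
    mask = torch.zeros_like(groups, dtype=torch.bool)
    mask.scatter_(-1, topk, True)
    return mask.reshape(weight.shape)

# Example: prune a weight matrix to the 2:4 pattern.
w = torch.randn(8, 16)
w_sparse = w * two_four_mask(w)
```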
Alternatives and similar repositories for 2by4-pretrain-acc-examples
Users interested in 2by4-pretrain-acc-examples are comparing it to the libraries listed below.
- Efficient 2:4 sparse training algorithms and implementations ☆56 · Updated 8 months ago
- ☆51 · Updated last year
- Code Repository of Evaluating Quantized Large Language Models ☆129 · Updated 11 months ago
- A sparse attention kernel supporting mixed sparse patterns ☆267 · Updated 5 months ago
- [ICML 2024] Quest: Query-Aware Sparsity for Efficient Long-Context LLM Inference ☆311 · Updated last month
- [NeurIPS 2024 Oral🔥] DuQuant: Distributing Outliers via Dual Transformation Makes Stronger Quantized LLMs. ☆165 · Updated 10 months ago
- 16-fold memory access reduction with nearly no loss ☆103 · Updated 4 months ago
- [ICLR 2025] Palu: Compressing KV-Cache with Low-Rank Projection ☆128 · Updated 5 months ago
- Fast Hadamard transform in CUDA, with a PyTorch interface ☆215 · Updated last year
- [ICML 2025] XAttention: Block Sparse Attention with Antidiagonal Scoring ☆215 · Updated last month
- [ICLR 2025] OSTQuant: Refining Large Language Model Quantization with Orthogonal and Scaling Transformations for Better Distribution Fitt… ☆72 · Updated 4 months ago
- This repository contains integer operators on GPUs for PyTorch. ☆211 · Updated last year
- ☆54 · Updated last year
- SpInfer: Leveraging Low-Level Sparsity for Efficient Large Language Model Inference on GPUs ☆51 · Updated 4 months ago
- Code implementation of GPTAQ (https://arxiv.org/abs/2504.02692) ☆56 · Updated last week
- A GPU-optimized system for efficient long-context LLM decoding with low-bit KV cache. ☆56 · Updated this week
- ☆23 · Updated last year
- Code release for AdapMoE, accepted by ICCAD 2024 ☆30 · Updated 3 months ago
- An algorithm for weight-activation quantization (W4A4, W4A8) of LLMs, supporting both static and dynamic quantization ☆145 · Updated 2 months ago
- ☆54 · Updated 8 months ago
- ☆80 · Updated 6 months ago
- ☆150 · Updated last year
- [COLM 2024] SKVQ: Sliding-window Key and Value Cache Quantization for Large Language Models ☆24 · Updated 10 months ago
- [ICLR 2025] COAT: Compressing Optimizer States and Activation for Memory-Efficient FP8 Training ☆221 · Updated last month
- ☆42 · Updated 2 years ago
- 🎓 Automatically Update circult-eda-mlsys-tinyml Papers Daily using GitHub Actions (updates every 8 hours) ☆10 · Updated this week
- [ICLR'25] ViDiT-Q: Efficient and Accurate Quantization of Diffusion Transformers for Image and Video Generation ☆109 · Updated 4 months ago
- [ICML 2024 Oral] Any-Precision LLM: Low-Cost Deployment of Multiple, Different-Sized LLMs ☆111 · Updated last month
- [NeurIPS'23] H2O: Heavy-Hitter Oracle for Efficient Generative Inference of Large Language Models. ☆463 · Updated last year
- ☆50 · Updated last year