NoakLiu / LLMEasyQuant
A Serving System for Distributed and Parallel LLM Quantization [Efficient ML System]
☆26 Updated 4 months ago
Alternatives and similar repositories for LLMEasyQuant
Users interested in LLMEasyQuant are comparing it to the libraries listed below.
- GraphSnapShot: Caching Local Structure for Fast Graph Learning [Efficient ML System] ☆40 Updated 3 weeks ago
- Official implementation of the ICLR paper "Streamlining Redundant Layers to Compress Large Language Models" ☆30 Updated 5 months ago
- Adaptive Topology Reconstruction for Robust Graph Representation Learning [Efficient ML Model] ☆10 Updated 8 months ago
- [TMLR 2025] Efficient Reasoning Models: A Survey ☆271 Updated this week
- TransMLA: Multi-Head Latent Attention Is All You Need (NeurIPS 2025 Spotlight) ☆382 Updated 3 weeks ago
- Official code implementation for the ICLR 2025 accepted paper "Dobi-SVD: Differentiable SVD for LLM Compression and Some New Perspectives" ☆45 Updated 2 weeks ago
- [ICLR'25] ARB-LLM: Alternating Refined Binarizations for Large Language Models ☆27 Updated 2 months ago
- Awesome LLM pruning papers: an all-in-one repository integrating all useful resources and insights. ☆125 Updated 2 months ago
- Implementation of Switch Transformers from the paper: "Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficien… ☆125 Updated 2 weeks ago
- Survey Paper List - Efficient LLM and Foundation Models ☆258 Updated last year
- 📰 Must-read papers on KV Cache Compression (constantly updating 🤗). ☆559 Updated 2 weeks ago
- PyTorch implementation of "Multi-Level Optimal Transport for Universal Cross-Tokenizer Knowledge Distillation on Language Models", AAAI 2… ☆37 Updated 5 months ago
- [TKDE'25] The official GitHub page for the survey paper "A Survey on Mixture of Experts in Large Language Models". ☆435 Updated 2 months ago
- Accelerating Multitask Training Through Adaptive Transition [Efficient ML Model] ☆12 Updated 4 months ago
- [ICML 2024] PyTorch implementation for "Diversified Batch Selection for Training Acceleration" ☆10 Updated last year
- [ICLR 2024] This is the official PyTorch implementation of "QLLM: Accurate and Efficient Low-Bitwidth Quantization for Large Language Mod… ☆39 Updated last year
- [ICML'24] Official code for the paper "Revisiting Zeroth-Order Optimization for Memory-Efficient LLM Fine-Tuning: A Benchmark". ☆111 Updated 3 months ago
- This repo contains the code for studying the interplay between quantization and sparsity methods ☆23 Updated 7 months ago