☆12 · Oct 9, 2023 · Updated 2 years ago
Alternatives and similar repositories for BESA
Users that are interested in BESA are comparing it to the libraries listed below
- [AAAI 2024] Fluctuation-based Adaptive Structured Pruning for Large Language Models · ☆70 · Jan 6, 2024 · Updated 2 years ago
- An implementation of the DISP-LLM method from the NeurIPS 2024 paper "Dimension-Independent Structural Pruning for Large Language Models" · ☆23 · Aug 6, 2025 · Updated 6 months ago
- [NeurIPS 2024] Search for Efficient LLMs · ☆16 · Jan 16, 2025 · Updated last year
- (untitled repository) · ☆56 · Jun 10, 2024 · Updated last year
- [ICLR 2024] Jaiswal, A., Gan, Z., Du, X., Zhang, B., Wang, Z., & Yang, Y. "Compressing LLMs: The Truth Is Rarely Pure and Never Simple" · ☆27 · Apr 21, 2025 · Updated 10 months ago
- Source code of the ACL 2023 paper "AD-KD: Attribution-Driven Knowledge Distillation for Language Model Compression" · ☆12 · Jun 14, 2023 · Updated 2 years ago
- [ICML 2024] Pruner-Zero: Evolving Symbolic Pruning Metric from Scratch for LLMs · ☆98 · Nov 25, 2024 · Updated last year
- Several classic examples in AI, ML, data analysis, and data visualization, including traffic-jam simulation (NagelSchreckenberg), a Monte Carlo queuing problem, face recognition (RecognitionFace), and genetic-algorithm image inference (IconGenetic) · ☆10 · Oct 14, 2018 · Updated 7 years ago
- Official PyTorch implementation of the ICLR 2024 paper "Dynamic Sparse No Training: Training-Free Fine-tuning for Sparse LLM…" · ☆50 · Apr 9, 2024 · Updated last year
- Effective Attention Sheds Light On Interpretability · Findings of ACL 2021 · ☆11 · May 16, 2021 · Updated 4 years ago
- DiTAS: Quantizing Diffusion Transformers via Enhanced Activation Smoothing (WACV 2025) · ☆12 · Feb 7, 2026 · Updated 3 weeks ago
- A hand pose estimation system using dual KD-trees · ☆10 · Nov 3, 2015 · Updated 10 years ago
- Code for the paper "Unraveling the Shift of Visual Information Flow in MLLMs: From Phased Interaction to Efficient Inference" · ☆13 · Jun 7, 2025 · Updated 8 months ago
- PyTorch implementation of the Reinforced Mnemonic Reader + Answer Verifier model (https://arxiv.org/abs/1808.05759) · ☆10 · Nov 23, 2018 · Updated 7 years ago
- Code for "Remember and Reuse: Cross-Task Blind Image Quality Assessment via Relevance-aware Incremental Learning" (ACM Multimedia 2021)