[NeurIPS 2024] Search for Efficient LLMs
☆16 · Updated Jan 16, 2025
Alternatives and similar repositories for search-llm
Users interested in search-llm are comparing it to the repositories listed below.
- Reproduction of "AM-LFS: AutoML for Loss Function Search" ☆14 · Updated May 20, 2020
- ☆23 · Updated Nov 26, 2024
- Code for "ECoFLaP: Efficient Coarse-to-Fine Layer-Wise Pruning for Vision-Language Models" (ICLR 2024) ☆20 · Updated Feb 16, 2024
- [NeurIPS 2024 Spotlight] Code and data for the paper "Finding Transformer Circuits with Edge Pruning" ☆66 · Updated Aug 15, 2025
- ☆12 · Updated Oct 9, 2023
- (ECCV 2022) EAGAN: Efficient Two-stage Evolutionary Architecture Search for GANs ☆12 · Updated Sep 15, 2022
- WanJuan-CC: a high-quality dataset built from CommonCrawl through data extraction, rule-based cleaning, deduplication, safety filtering, and quality filtering. ☆14 · Updated Apr 18, 2024
- ☆12 · Updated Feb 18, 2025
- [ICML 2024] Pruner-Zero: Evolving Symbolic Pruning Metric from scratch for LLMs ☆98 · Updated Nov 25, 2024
- [Preprint] Why is the State of Neural Network Pruning so Confusing? On the Fairness, Comparison Setup, and Trainability in Network Prunin… ☆41 · Updated Sep 9, 2025
- ☆13 · Updated Aug 9, 2022
- [CVPR 2024] DiffAgent: Fast and Accurate Text-to-Image API Selection with Large Language Model ☆19 · Updated Apr 16, 2024
- Code for the paper NPENAS ☆12 · Updated Feb 13, 2022
- Code for RepNAS ☆14 · Updated Dec 21, 2021
- BESA is a differentiable weight-pruning technique for large language models ☆17 · Updated Mar 4, 2024
- Benchmarks for Macro Neural Architecture Search; used and described in the paper "Local Search is a Remarkably Strong Baseline for Neural… ☆12 · Updated Jul 25, 2024
- An implementation of the DISP-LLM method from the NeurIPS 2024 paper "Dimension-Independent Structural Pruning for Large Language Models" ☆24 · Updated Aug 6, 2025
- [IJCAI 2023 Workshop] Expanding datasets for 2D medical image segmentation using diffusion models ☆15 · Updated Feb 28, 2023
- ☆20 · Updated Aug 16, 2021
- ViT architecture with Mamba instead of a transformer backbone ☆18 · Updated Dec 8, 2023
- GRAIN: Gradient-based Intra-attention Pruning on Pre-trained Language Models ☆19 · Updated Jul 12, 2023
- To appear in the 11th International Conference on Learning Representations (ICLR 2023) ☆18 · Updated Feb 24, 2023
- Python toolkit for computer vision research ☆15 · Updated Jun 25, 2021
- The official implementation of "DOTS: Decoupling Operation and Topology in Differentiable Architecture Search" ☆20 · Updated Apr 19, 2021
- Elucidated Dataset Condensation (NeurIPS 2024) ☆20 · Updated Oct 5, 2024
- One-stop solutions for Mixture-of-Experts modules in PyTorch ☆26 · Updated Feb 10, 2026
- Repository for "Accelerating Neural Architecture Search using Performance Prediction" (ICLR Workshop 2018) ☆18 · Updated Mar 21, 2018
- [ICLR 2024] Jaiswal, A., Gan, Z., Du, X., Zhang, B., Wang, Z., & Yang, Y. "Compressing LLMs: The truth is rarely pure and never simple" ☆27 · Updated Apr 21, 2025
- Official implementation of the ECCV 2022 paper LIMPQ, "Mixed-Precision Neural Network Quantization via Learned Layer-wise Importance" ☆61 · Updated Mar 19, 2023
- TF-FD ☆20 · Updated Nov 19, 2022
- ☆28 · Updated Apr 26, 2023
- MLPNAS code for the Paperspace series on Neural Architecture Search ☆23 · Updated May 29, 2023
- ☆23 · Updated May 21, 2018
- ☆26 · Updated Dec 10, 2020
- A PyTorch implementation of NASBench ☆52 · Updated May 12, 2023
- Personal digest of NAS (under construction 🛠) ☆25 · Updated Nov 24, 2020
- Implementation for the paper "CMoE: Fast Carving of Mixture-of-Experts for Efficient LLM Inference" ☆35 · Updated Mar 6, 2025
- Official PyTorch implementation of "Meta-prediction Model for Distillation-Aware NAS on Unseen Datasets" (ICLR 2023, notable top 25%) ☆26 · Updated Mar 18, 2024
- ☆232 · Updated Sep 21, 2021