mingkai-zheng / GENIUS
Can GPT-4 Perform Neural Architecture Search?
☆89 · Updated 2 years ago
Alternatives and similar repositories for GENIUS
Users interested in GENIUS are comparing it to the libraries listed below.
- ☆27 · Updated 2 years ago
- AutoMoE: Neural Architecture Search for Efficient Sparsely Activated Transformers ☆48 · Updated 3 years ago
- [ICML 2024] Junk DNA Hypothesis: A Task-Centric Angle of LLM Pre-trained Weights through Sparsity; Lu Yin*, Ajay Jaiswal*, Shiwei Liu, So… ☆16 · Updated 7 months ago
- 32 times longer context window than vanilla Transformers and up to 4 times longer than memory efficient Transformers. ☆49 · Updated 2 years ago
- [ICML 2024 Spotlight] Fine-Tuning Pre-trained Large Language Models Sparsely ☆24 · Updated last year
- Revisiting Efficient Training Algorithms For Transformer-based Language Models (NeurIPS 2023) ☆81 · Updated 2 years ago
- Recycling diverse models ☆46 · Updated 2 years ago
- [NeurIPS'2021] "MEST: Accurate and Fast Memory-Economic Sparse Training Framework on the Edge", Geng Yuan, Xiaolong Ma, Yanzhi Wang et al… ☆18 · Updated 3 years ago
- ☆37 · Updated 2 years ago
- ☆70 · Updated 5 months ago
- [ICLR 2023] "Sparse MoE as the New Dropout: Scaling Dense and Self-Slimmable Transformers" by Tianlong Chen*, Zhenyu Zhang*, Ajay Jaiswal… ☆56 · Updated 2 years ago
- [ICML 2024] Pruner-Zero: Evolving Symbolic Pruning Metric from scratch for LLMs ☆98 · Updated last year
- ☆33 · Updated 11 months ago
- ☆106 · Updated last year
- BESA is a differentiable weight pruning technique for large language models. ☆17 · Updated last year
- PyTorch codes for the paper "An Empirical Study of Multimodal Model Merging" ☆37 · Updated 2 years ago
- [ICLR 2025] Linear Combination of Saved Checkpoints Makes Consistency and Diffusion Models Better ☆16 · Updated 9 months ago
- Official PyTorch implementation of "Rapid Neural Architecture Search by Learning to Generate Graphs from Datasets" (ICLR 2021) ☆64 · Updated last year
- [ICLR 2021] "Long Live the Lottery: The Existence of Winning Tickets in Lifelong Learning" by Tianlong Chen*, Zhenyu Zhang*, Sijia Liu, S… ☆25 · Updated 3 years ago
- Tree prompting: easy-to-use scikit-learn interface for improved prompting. ☆40 · Updated 2 years ago
- Architecture embeddings independent from the parametrization of the search space ☆15 · Updated 4 years ago
- The repository contains code for Adaptive Data Optimization ☆29 · Updated last year
- [NeurIPS'24 LanGame workshop] On The Planning Abilities of OpenAI's o1 Models: Feasibility, Optimality, and Generalizability ☆41 · Updated 5 months ago
- Official implementation for Sparse MetA-Tuning (SMAT) ☆18 · Updated 4 months ago
- Triton Implementation of HyperAttention Algorithm ☆48 · Updated 2 years ago
- Model Stock: All we need is just a few fine-tuned models ☆127 · Updated 4 months ago
- Code for "Training Neural Networks with Fixed Sparse Masks" (NeurIPS 2021). ☆59 · Updated 3 years ago
- [EVA ICLR'23; LARA ICML'22] Efficient attention mechanisms via control variates, random features, and importance sampling ☆87 · Updated 2 years ago
- ACL 2023 ☆39 · Updated 2 years ago
- ☆13 · Updated 3 years ago