xiaofangxd / LLM_EA
Evolutionary Algorithms and Large Language Models
☆16 · Updated 5 months ago
Alternatives and similar repositories for LLM_EA:
Users interested in LLM_EA are comparing it to the repositories listed below.
- Large Language Model for MOEA ☆47 · Updated 10 months ago
- Evolution of Heuristics ☆150 · Updated 2 months ago
- Official implementation of the paper "Chain-of-Experts: When LLMs Meet Complex Operation Research Problems" ☆92 · Updated 2 months ago
- A list of awesome papers and resources at the intersection of Large Language Models and Evolutionary Computation. ☆84 · Updated last month
- The LLMOPT project offers a comprehensive set of resources, including the model, dataset, training framework, and inference code, enablin… ☆53 · Updated this week
- This repo is for our EMNLP 2023 short paper (Findings): InstOptima: Evolutionary Multi-objective Instruction Optimization via Large Langua… ☆12 · Updated last year
- [NeurIPS 2024] Search for Efficient LLMs ☆13 · Updated 3 months ago
- [IJCAI 2023] Black-box Prompt Tuning for Vision-Language Model as a Service ☆17 · Updated last year
- [ICML 2024] Official code for the paper "Revisiting Zeroth-Order Optimization for Memory-Efficient LLM Fine-Tuning: A Benchmark" ☆98 · Updated 9 months ago
- Can GPT-4 Perform Neural Architecture Search? ☆87 · Updated last year
- ☆13 · Updated 2 years ago
- This repository contains the publishable code for the CVPR 2021 paper TransNAS-Bench-101: Improving Transferability and Generalizability of … ☆22 · Updated 2 years ago
- Code for "ECoFLaP: Efficient Coarse-to-Fine Layer-Wise Pruning for Vision-Language Models" (ICLR 2024) ☆19 · Updated last year
- [NeurIPS 2024] ReEvo: Large Language Models as Hyper-Heuristics with Reflective Evolution ☆173 · Updated last week
- A Collection on Large Language Models for Optimization ☆226 · Updated 5 months ago
- ☆12 · Updated 2 months ago
- BESA is a differentiable weight pruning technique for large language models. ☆16 · Updated last year
- [ACL'24] Beyond One-Preference-Fits-All Alignment: Multi-Objective Direct Preference Optimization ☆75 · Updated 8 months ago
- Official implementation of our NeurIPS 2024 paper "Boundary Matters: A Bi-Level Active Finetuning Method" ☆11 · Updated 2 months ago
- LISA: Layerwise Importance Sampling for Memory-Efficient Large Language Model Fine-Tuning ☆31 · Updated last year
- [ICML 2024] "MVMoE: Multi-Task Vehicle Routing Solver with Mixture-of-Experts" ☆67 · Updated 8 months ago
- Official PyTorch implementation of our paper accepted at ICLR 2024: Dynamic Sparse No Training: Training-Free Fine-tuning for Sparse LLM… ☆47 · Updated last year
- ☆38 · Updated 2 years ago
- Representation Surgery for Multi-Task Model Merging (ICML 2024) ☆44 · Updated 6 months ago
- Official repo for SparseLLM: Global Pruning of LLMs (NeurIPS 2024) ☆56 · Updated 3 weeks ago
- Code for the paper "Merging Multi-Task Models via Weight-Ensembling Mixture of Experts" ☆24 · Updated 10 months ago
- Code for the paper "A Sober Look at Progress in Language Model Reasoning" ☆36 · Updated last week
- [NeurIPS 2024] Official code of $\beta$-DPO: Direct Preference Optimization with Dynamic $\beta$ ☆43 · Updated 6 months ago
- SLTrain: a sparse plus low-rank approach for parameter- and memory-efficient pretraining (NeurIPS 2024) ☆30 · Updated 5 months ago
- [NeurIPS 2024 Spotlight] EMR-Merging: Tuning-Free High-Performance Model Merging ☆56 · Updated last month