AGI-Edgerunners / LLM-Adapters
Code for our EMNLP 2023 Paper: "LLM-Adapters: An Adapter Family for Parameter-Efficient Fine-Tuning of Large Language Models"
⭐1,157 · Updated last year
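The repository implements a family of adapter methods (LoRA, prefix tuning, series/parallel adapters) on top of Hugging Face PEFT. As a minimal sketch of the core idea, assuming the standard `peft` API — the base-model name and hyperparameters below are illustrative assumptions, not the paper's settings:

```python
# Minimal LoRA fine-tuning sketch using Hugging Face `peft`.
# Model name and hyperparameters are illustrative, not from the paper.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "meta-llama/Llama-2-7b-hf"  # assumption: any causal LM works here
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# Inject low-rank adapters into the attention projections; only the
# adapter weights (a small fraction of all parameters) are trained.
config = LoraConfig(
    r=8,                 # rank of the low-rank update
    lora_alpha=16,       # scaling factor applied to the update
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # reports trainable vs. total params
```

Only the injected low-rank matrices are updated during training, which is what makes adapter methods cheap relative to full fine-tuning.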
Alternatives and similar repositories for LLM-Adapters:
Users interested in LLM-Adapters are comparing it to the libraries listed below
- Reading list of instruction tuning. A trend starts from Natural-Instructions (ACL 2022), FLAN (ICLR 2022), and T0 (ICLR 2022). ⭐768 · Updated last year
- Paper List for In-context Learning ⭐854 · Updated 7 months ago
- Open Academic Research on Improving LLaMA to SOTA LLM ⭐1,621 · Updated last year
- Aligning Large Language Models with Human: A Survey ⭐728 · Updated last year
- This repository contains code to quantitatively evaluate instruction-tuned models such as Alpaca and Flan-T5 on held-out tasks. ⭐546 · Updated last year
- [COLM 2024] LoraHub: Efficient Cross-Task Generalization via Dynamic LoRA Composition ⭐630 · Updated 9 months ago
- [NIPS2023] RRHF & Wombat ⭐807 · Updated last year
- Papers and Datasets on Instruction Tuning and Following. ✨✨✨ ⭐492 · Updated last year
- A collection of open-source datasets to train instruction-following LLMs (ChatGPT, LLaMA, Alpaca) ⭐1,117 · Updated last year
- Reading list of hallucination in LLMs. Check out our new survey paper: "Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models" ⭐1,013 · Updated 5 months ago
- Benchmarking large language models' complex reasoning ability with chain-of-thought prompting ⭐2,723 · Updated 9 months ago
- ⛷️ LLaMA-MoE: Building Mixture-of-Experts from LLaMA with Continual Pre-training (EMNLP 2024) ⭐959 · Updated 5 months ago
- [ACL 2023] Reasoning with Language Model Prompting: A Survey ⭐952 · Updated 3 weeks ago
- OpenICL is an open-source framework to facilitate research, development, and prototyping of in-context learning. ⭐557 · Updated last year
- A plug-and-play library for parameter-efficient-tuning (Delta Tuning) ⭐1,026 · Updated 7 months ago
- [ICLR 2024] Sheared LLaMA: Accelerating Language Model Pre-training via Structured Pruning ⭐604 · Updated last year
- LOMO: LOw-Memory Optimization ⭐985 · Updated 10 months ago
- A family of open-sourced Mixture-of-Experts (MoE) Large Language Models ⭐1,526 · Updated last year
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ⭐1,386 · Updated last year
- Reference implementation for DPO (Direct Preference Optimization); a minimal sketch of the DPO loss appears after this list. ⭐2,560 · Updated 8 months ago
- An automatic evaluator for instruction-following language models. Human-validated, high-quality, cheap, and fast. ⭐1,736 · Updated 4 months ago
- Expanding natural instructions ⭐996 · Updated last year
- The official GitHub page for the survey paper "A Survey on Evaluation of Large Language Models". ⭐1,519 · Updated last month
- A trend starts from "Chain of Thought Prompting Elicits Reasoning in Large Language Models" (see the prompt sketch after this list). ⭐2,035 · Updated last year
- A library with extensible implementations of DPO, KTO, PPO, ORPO, and other human-aware loss functions (HALOs). ⭐838 · Updated this week
- YaRN: Efficient Context Window Extension of Large Language Models ⭐1,479 · Updated last year
- [ACL 2023] We introduce LLM-Blender, an innovative ensembling framework to attain consistently superior performance by leveraging the diverse strengths of multiple open-source large language models. ⭐941 · Updated 6 months ago
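Since two entries above center on Direct Preference Optimization, here is a minimal sketch of the DPO objective in PyTorch. It is illustrative only: the function name and argument layout are assumptions, not the reference implementation's API.

```python
# Minimal sketch of the DPO loss; names are illustrative, not the
# reference implementation's API.
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """Each argument is a batch of summed token log-probs for the chosen
    or rejected response under the policy or the frozen reference model."""
    # Log-ratio of policy vs. reference for each response.
    chosen_logratios = policy_chosen_logps - ref_chosen_logps
    rejected_logratios = policy_rejected_logps - ref_rejected_logps
    # Maximize the margin between the two implicit rewards.
    return -F.logsigmoid(beta * (chosen_logratios - rejected_logratios)).mean()

# Toy usage with random log-probs:
batch = torch.randn(4), torch.randn(4), torch.randn(4), torch.randn(4)
print(dpo_loss(*batch))
```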
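And for the chain-of-thought entries, a minimal prompt-construction sketch. The few-shot exemplar is the well-known tennis-ball example from the CoT paper; the helper function name is hypothetical.

```python
# Minimal chain-of-thought prompt: prepend a worked example so the
# model imitates step-by-step reasoning before giving its answer.
FEW_SHOT = """Q: Roger has 5 tennis balls. He buys 2 more cans of 3 balls each. How many balls does he have?
A: Roger starts with 5 balls. 2 cans of 3 balls is 6 balls. 5 + 6 = 11. The answer is 11.
"""

def cot_prompt(question: str) -> str:
    # Hypothetical helper: few-shot exemplar plus the standard CoT trigger.
    return f"{FEW_SHOT}Q: {question}\nA: Let's think step by step."

print(cot_prompt("If a train travels 60 km in 1.5 hours, what is its average speed?"))
```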