declare-lab / instruct-eval
This repository contains code to quantitatively evaluate instruction-tuned models such as Alpaca and Flan-T5 on held-out tasks.
☆546 · Updated last year
Alternatives and similar repositories for instruct-eval:
Users interested in instruct-eval are comparing it to the libraries listed below.
- Official repository of NEFTune: Noisy Embeddings Improve Instruction Finetuning ☆395 · Updated 11 months ago
- [ICLR 2024] Sheared LLaMA: Accelerating Language Model Pre-training via Structured Pruning ☆604 · Updated last year
- [ACL'24 Outstanding] Data and code for L-Eval, a comprehensive long-context language model evaluation benchmark ☆376 · Updated 9 months ago
- [COLM 2024] LoraHub: Efficient Cross-Task Generalization via Dynamic LoRA Composition ☆630 · Updated 9 months ago
- Implementation of the paper "Data Engineering for Scaling Language Models to 128K Context" ☆459 · Updated last year
- Deita: Data-Efficient Instruction Tuning for Alignment [ICLR 2024] ☆553 · Updated 4 months ago
- All available datasets for instruction tuning of large language models ☆250 · Updated last year
- Code and data for "Lost in the Middle: How Language Models Use Long Contexts" ☆342 · Updated last year
- Official repository for LongChat and LongEval ☆519 · Updated 11 months ago
- Official repository for ORPO ☆450 · Updated 11 months ago
- A simulation framework for RLHF and alternatives. Develop your RLHF method without collecting human data. ☆807 · Updated 10 months ago
- [ACL 2023] We introduce LLM-Blender, an innovative ensembling framework to attain consistently superior performance by leveraging the dive… ☆940 · Updated 6 months ago
- Challenging BIG-Bench Tasks and Whether Chain-of-Thought Can Solve Them ☆487 · Updated 10 months ago
- Inference-Time Intervention: Eliciting Truthful Answers from a Language Model ☆521 · Updated 3 months ago
- A library with extensible implementations of DPO, KTO, PPO, ORPO, and other human-aware loss functions (HALOs) ☆837 · Updated this week
- Official implementation of the paper "DoLa: Decoding by Contrasting Layers Improves Factuality in Large Language Models" ☆482 · Updated 3 months ago
- RewardBench: the first evaluation tool for reward models ☆562 · Updated 2 months ago
- DSIR: a large-scale data selection framework for language model training ☆246 · Updated last year
- OpenICL: an open-source framework to facilitate research, development, and prototyping of in-context learning ☆557 · Updated last year
- PyTorch implementation of DoReMi, a method for optimizing the data mixture weights in language modeling datasets ☆323 · Updated last year
- [NeurIPS 2023] RRHF & Wombat ☆806 · Updated last year
- Code for fine-tuning Platypus-family LLMs using LoRA ☆629 · Updated last year
- Distributed trainer for LLMs ☆573 · Updated 11 months ago
- Codebase for Merging Language Models (ICML 2024) ☆818 · Updated last year
- Run evaluation on LLMs using the HumanEval benchmark ☆409 · Updated last year
- Reading list for instruction tuning, a trend that starts from Natural-Instructions (ACL 2022), FLAN (ICLR 2022), and T0 (ICLR 2022) ☆768 · Updated last year