jasonvanf / llama-trl
LLaMA-TRL: Fine-tuning LLaMA with PPO and LoRA
⭐ 217 · Updated 2 years ago
Alternatives and similar repositories for llama-trl
Users interested in llama-trl are comparing it to the libraries listed below.
- An unofficial implementation of Self-Alignment with Instruction Backtranslation. ⭐ 140 · Updated 3 weeks ago
- ⭐ 276 · Updated 4 months ago
- Codes and Data for Scaling Relationship on Learning Mathematical Reasoning with Large Language Models ⭐ 264 · Updated 8 months ago
- A large-scale, fine-grained, diverse preference dataset (and models). ⭐ 340 · Updated last year
- Deita: Data-Efficient Instruction Tuning for Alignment [ICLR2024] ⭐ 555 · Updated 5 months ago
- Generative Judge for Evaluating Alignment ⭐ 238 · Updated last year
- [NAACL'24] Self-data filtering of LLM instruction-tuning data using a novel perplexity-based difficulty score, without using any other mo… ⭐ 370 · Updated 8 months ago
- RewardBench: the first evaluation tool for reward models. ⭐ 590 · Updated this week
- Data and Code for Program of Thoughts (TMLR 2023) ⭐ 274 · Updated last year
- All available datasets for Instruction Tuning of Large Language Models ⭐ 250 · Updated last year
- Source code repository for Self-Evaluation Guided MCTS for online DPO. ⭐ 314 · Updated 9 months ago
- InsTag: A Tool for Data Analysis in LLM Supervised Fine-tuning ⭐ 261 · Updated last year
- Multi-agent Social Simulation + Efficient, Effective, and Stable alternative of RLHF. Code for the paper "Training Socially Aligned Langu…" ⭐ 346 · Updated last year
- Finetuning LLaMA with RLHF (Reinforcement Learning from Human Feedback) based on DeepSpeed Chat ⭐ 115 · Updated 2 years ago
- Official repository of NEFTune: Noisy Embeddings Improve Instruction Finetuning ⭐ 396 · Updated last year
- ToolQA, a new dataset to evaluate the capabilities of LLMs in answering challenging questions with external tools. It offers two levels… ⭐ 265 · Updated last year
- Code for STaR: Bootstrapping Reasoning With Reasoning (NeurIPS 2022) ⭐ 206 · Updated 2 years ago
- [ACL'24] Superfiltering: Weak-to-Strong Data Filtering for Fast Instruction-Tuning ⭐ 157 · Updated 8 months ago
- ⭐ 330 · Updated 3 months ago
- [ACL'24 Outstanding] Data and code for L-Eval, a comprehensive long-context language model evaluation benchmark ⭐ 378 · Updated 10 months ago
- [ICML 2024] LESS: Selecting Influential Data for Targeted Instruction Tuning ⭐ 453 · Updated 7 months ago
- Project for the paper "Instruction Tuning for Large Language Models: A Survey" ⭐ 178 · Updated 6 months ago
- Repository of HaluEval, a large-scale hallucination evaluation benchmark for Large Language Models. ⭐ 474 · Updated last year
- PyTorch implementation of DoReMi, a method for optimizing data mixture weights in language modeling datasets ⭐ 328 · Updated last year
- ⭐ 280 · Updated last year
- FireAct: Toward Language Agent Fine-tuning ⭐ 278 · Updated last year
- Official implementation of the paper "DoLa: Decoding by Contrasting Layers Improves Factuality in Large Language Models" ⭐ 493 · Updated 4 months ago
- Scripts for fine-tuning Llama2 via SFT and DPO. ⭐ 200 · Updated last year
- DSIR: a large-scale data selection framework for language model training ⭐ 249 · Updated last year
- Collection of papers on scalable automated alignment. ⭐ 90 · Updated 7 months ago