jasonvanf / llama-trl
LLaMA-TRL: Fine-tuning LLaMA with PPO and LoRA
⭐220 · Updated 2 years ago
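For context, the pattern llama-trl implements, PPO fine-tuning of LLaMA with LoRA adapters, can be sketched with Hugging Face TRL and PEFT roughly as below. This is a minimal illustration, not the repository's code; the model name, hyperparameters, and the older TRL `PPOTrainer` API (which accepts a tokenizer directly) are assumptions.

```python
# Minimal sketch of PPO + LoRA fine-tuning with TRL/PEFT. Assumes an older TRL
# release (~0.7.x) whose PPOTrainer takes a tokenizer; not llama-trl's exact code.
import torch
from transformers import AutoTokenizer
from peft import LoraConfig
from trl import AutoModelForCausalLMWithValueHead, PPOConfig, PPOTrainer

model_name = "huggyllama/llama-7b"  # placeholder base checkpoint

# LoRA adapters on the attention projections (a typical choice for LLaMA)
lora_config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM",
)

tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token

# Policy model with a value head; TRL attaches the LoRA adapters via peft_config
model = AutoModelForCausalLMWithValueHead.from_pretrained(
    model_name, peft_config=lora_config
)

ppo_config = PPOConfig(batch_size=1, mini_batch_size=1, learning_rate=1.4e-5)
ppo_trainer = PPOTrainer(config=ppo_config, model=model, tokenizer=tokenizer)

# One PPO update: query -> generated response -> scalar reward -> optimization step
query = tokenizer("Explain LoRA in one sentence.", return_tensors="pt").input_ids[0]
response = ppo_trainer.generate([query], return_prompt=False, max_new_tokens=64)[0]
reward = torch.tensor(1.0)  # placeholder; normally produced by a reward model
stats = ppo_trainer.step([query], [response], [reward])
```

In practice the scalar reward comes from a separately trained reward model scoring each (query, response) pair; only the LoRA parameters and the value head are updated.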
Alternatives and similar repositories for llama-trl
Users interested in llama-trl are comparing it to the repositories listed below.
- An unofficial implementation of Self-Alignment with Instruction Backtranslation. ⭐140 · Updated 3 months ago
- Codes and Data for Scaling Relationship on Learning Mathematical Reasoning with Large Language Models ⭐268 · Updated 10 months ago
- Deita: Data-Efficient Instruction Tuning for Alignment [ICLR 2024] ⭐561 · Updated 7 months ago
- A large-scale, fine-grained, diverse preference dataset (and models). ⭐345 · Updated last year
- ⭐278 · Updated 7 months ago
- Finetuning LLaMA with RLHF (Reinforcement Learning from Human Feedback) based on DeepSpeed Chat ⭐114 · Updated 2 years ago
- Generative Judge for Evaluating Alignment ⭐244 · Updated last year
- InsTag: A Tool for Data Analysis in LLM Supervised Fine-tuning ⭐265 · Updated last year
- PyTorch implementation of DoReMi, a method for optimizing the data mixture weights in language modeling datasets ⭐340 · Updated last year
- All available datasets for Instruction Tuning of Large Language Models ⭐255 · Updated last year
- Official repository of NEFTune: Noisy Embeddings Improve Instruction Finetuning ⭐397 · Updated last year
- A curated list of Human Preference Datasets for LLM fine-tuning, RLHF, and eval. ⭐372 · Updated last year
- [NAACL'24] Self-data filtering of LLM instruction-tuning data using a novel perplexity-based difficulty score, without using any other mo… ⭐381 · Updated last month
- ⭐269 · Updated last year
- [ACL'24] Superfiltering: Weak-to-Strong Data Filtering for Fast Instruction-Tuning ⭐166 · Updated last month
- ⭐337 · Updated 2 months ago
- [ACL'24 Outstanding] Data and code for L-Eval, a comprehensive long-context language model evaluation benchmark ⭐388 · Updated last year
- [ICML 2024] LESS: Selecting Influential Data for Targeted Instruction Tuning ⭐474 · Updated 9 months ago
- Multi-agent Social Simulation + Efficient, Effective, and Stable alternative of RLHF. Code for the paper "Training Socially Aligned Langu… ⭐352 · Updated 2 years ago
- LLaMA fine-tuning with LoRA ⭐139 · Updated last year
- Data and Code for Program of Thoughts [TMLR 2023] ⭐280 · Updated last year
- Large Language Models Are Reasoning Teachers (ACL 2023) ⭐341 · Updated 4 months ago
- Official implementation for the paper "DoLa: Decoding by Contrasting Layers Improves Factuality in Large Language Models" ⭐504 · Updated 6 months ago
- Code and data for "Lost in the Middle: How Language Models Use Long Contexts" ⭐354 · Updated last year
- [EMNLP 2024] LongAlign: A Recipe for Long Context Alignment of LLMs ⭐253 · Updated 7 months ago
- ToolQA, a new dataset to evaluate the capabilities of LLMs in answering challenging questions with external tools. It offers two levels… ⭐272 · Updated last year
- Papers and Datasets on Instruction Tuning and Following. ✨✨✨ ⭐498 · Updated last year
- Project for the paper `Instruction Tuning for Large Language Models: A Survey` ⭐180 · Updated 8 months ago
- ⭐284 · Updated last year
- Repository containing the source code for Self-Evaluation Guided MCTS for online DPO. ⭐319 · Updated last year