tianyi-lab / Reflection_Tuning
[ACL'24] Selective Reflection-Tuning: Student-Selected Data Recycling for LLM Instruction-Tuning
☆353 · Updated 8 months ago
Alternatives and similar repositories for Reflection_Tuning
Users interested in Reflection_Tuning are comparing it to the libraries listed below.
- Official repository for ORPO ☆453 · Updated last year
- RewardBench: the first evaluation tool for reward models ☆582 · Updated this week
- Implementation of the paper "Data Engineering for Scaling Language Models to 128K Context" ☆461 · Updated last year
- [ICLR 2025] Alignment Data Synthesis from Scratch by Prompting Aligned LLMs with Nothing. Your efficient and high-quality synthetic data … ☆705 · Updated 2 months ago
- Reproducible, flexible LLM evaluations ☆203 · Updated 3 weeks ago
- Automatic evals for LLMs ☆399 · Updated this week
- Official repo for the paper "Reinforcement Learning for Reasoning in Small LLMs: What Works and What Doesn't" ☆231 · Updated 3 weeks ago
- The official evaluation suite and dynamic data release for MixEval ☆241 · Updated 6 months ago
- Official implementation of the paper "DoLa: Decoding by Contrasting Layers Improves Factuality in Large Language Models" ☆489 · Updated 4 months ago
- [ICML'24] Data and code for the paper "Training-Free Long-Context Scaling of Large Language Models" ☆410 · Updated 7 months ago
- Repo for Rho-1: Token-level Data Selection & Selective Pretraining of LLMs ☆417 · Updated last year
- A project to improve the skills of large language models ☆413 · Updated this week
- [EMNLP 2023] Adapting Language Models to Compress Long Contexts ☆305 · Updated 8 months ago
- Direct Preference Optimization from scratch in PyTorch ☆92 · Updated last month
- Deita: Data-Efficient Instruction Tuning for Alignment [ICLR 2024] ☆554 · Updated 5 months ago
- Official PyTorch implementation of DistiLLM: Towards Streamlined Distillation for Large Language Models (ICML 2024) ☆217 · Updated 2 months ago
- Official repository of NEFTune: Noisy Embeddings Improves Instruction Finetuning ☆396 · Updated last year
- Benchmarking LLMs with Challenging Tasks from Real Users ☆223 · Updated 6 months ago
- A library for easily merging multiple LLM experts and efficiently training the merged LLM ☆477 · Updated 9 months ago
- Code for the paper "Rethinking Benchmark and Contamination for Language Models with Rephrased Samples" ☆302 · Updated last year
- L1: Controlling How Long A Reasoning Model Thinks With Reinforcement Learning ☆213 · Updated 2 weeks ago
- [ICML 2025] Programming Every Example: Lifting Pre-training Data Quality Like Experts at Scale ☆248 · Updated 2 weeks ago
- Generative Representational Instruction Tuning ☆639 · Updated 2 months ago
- Code and data for "Lost in the Middle: How Language Models Use Long Contexts" ☆344 · Updated last year