tianyi-lab / Reflection_Tuning
[ACL'24] Selective Reflection-Tuning: Student-Selected Data Recycling for LLM Instruction-Tuning
☆359 · Updated 10 months ago
Alternatives and similar repositories for Reflection_Tuning
Users interested in Reflection_Tuning are comparing it to the repositories listed below.
- Official repository for ORPO ☆461 · Updated last year
- Repo for Rho-1: Token-level Data Selection & Selective Pretraining of LLMs ☆428 · Updated last year
- Implementation of the paper "Data Engineering for Scaling Language Models to 128K Context" ☆467 · Updated last year
- The official evaluation suite and dynamic data release for MixEval ☆242 · Updated 8 months ago
- RewardBench: the first evaluation tool for reward models ☆619 · Updated last month
- Reproducible, flexible LLM evaluations ☆226 · Updated 3 weeks ago
- Official implementation for the paper "DoLa: Decoding by Contrasting Layers Improves Factuality in Large Language Models" ☆504 · Updated 6 months ago
- ☆298 · Updated last year
- ☆525 · Updated 8 months ago
- Benchmarking LLMs with Challenging Tasks from Real Users ☆233 · Updated 8 months ago
- ☆128 · Updated 4 months ago
- FuseAI Project ☆578 · Updated 6 months ago
- [ICLR 2025] Alignment Data Synthesis from Scratch by Prompting Aligned LLMs with Nothing. Your efficient and high-quality synthetic data … ☆736 · Updated 4 months ago
- ☆269 · Updated last year
- A simplified implementation for experimenting with RLVR on GSM8K; this repository provides a starting point for exploring reasoning ☆117 · Updated 5 months ago
- ☆311 · Updated last year
- [ICML'24] Data and code for our paper "Training-Free Long-Context Scaling of Large Language Models" ☆431 · Updated 9 months ago
- ☆306 · Updated 2 months ago
- [EMNLP 2023] Adapting Language Models to Compress Long Contexts ☆309 · Updated 10 months ago
- Deita: Data-Efficient Instruction Tuning for Alignment [ICLR 2024] ☆561 · Updated 7 months ago
- [EMNLP 2023] The CoT Collection: Improving Zero-shot and Few-shot Learning of Language Models via Chain-of-Thought Fine-Tuning ☆245 · Updated last year
- Official repository of NEFTune: Noisy Embeddings Improve Instruction Finetuning ☆397 · Updated last year
- [ACL 2024] LLM2LLM: Boosting LLMs with Novel Iterative Data Enhancement ☆186 · Updated last year
- Official PyTorch implementation of DistiLLM: Towards Streamlined Distillation for Large Language Models (ICML 2024) ☆225 · Updated 4 months ago
- LOFT: A 1 Million+ Token Long-Context Benchmark ☆207 · Updated last month
- [ICML 2025] Programming Every Example: Lifting Pre-training Data Quality Like Experts at Scale ☆255 · Updated 3 weeks ago
- EvolKit is an innovative framework designed to automatically enhance the complexity of instructions used for fine-tuning Large Language M… ☆229 · Updated 9 months ago
- A simple unified framework for evaluating LLMs ☆229 · Updated 3 months ago
- Code for the paper "Rethinking Benchmark and Contamination for Language Models with Rephrased Samples" ☆306 · Updated last year
- ☆187 · Updated 3 months ago