tianyi-lab / Reflection_Tuning
[ACL'24] Selective Reflection-Tuning: Student-Selected Data Recycling for LLM Instruction-Tuning
☆364 · Updated last year
Alternatives and similar repositories for Reflection_Tuning
Users interested in Reflection_Tuning are comparing it to the repositories listed below.
- Official repository for ORPO ☆467 · Updated last year
- Repo for Rho-1: Token-level Data Selection & Selective Pretraining of LLMs ☆447 · Updated last year
- Benchmarking LLMs with Challenging Tasks from Real Users ☆246 · Updated last year
- The official evaluation suite and dynamic data release for MixEval ☆253 · Updated last year
- ☆556 · Updated last year
- Reproducible, flexible LLM evaluations ☆286 · Updated last week
- [ICML'24] Data and code for our paper "Training-Free Long-Context Scaling of Large Language Models" ☆443 · Updated last year
- Implementation of paper Data Engineering for Scaling Language Models to 128K Context ☆478 · Updated last year
- ☆315 · Updated last year
- A simplified implementation for experimenting with RLVR on GSM8K; this repository provides a starting point for exploring reasoning ☆145 · Updated 9 months ago
- ☆272 · Updated 2 years ago
- EvolKit is an innovative framework designed to automatically enhance the complexity of instructions used for fine-tuning Large Language M… ☆243 · Updated last year
- ☆327 · Updated 6 months ago
- ☆157 · Updated last month
- [EMNLP 2023] Adapting Language Models to Compress Long Contexts ☆318 · Updated last year
- [ICML 2025] Programming Every Example: Lifting Pre-training Data Quality Like Experts at Scale ☆263 · Updated 4 months ago
- [ACL 2024] LLM2LLM: Boosting LLMs with Novel Iterative Data Enhancement ☆191 · Updated last year
- Official implementation for the paper "DoLa: Decoding by Contrasting Layers Improves Factuality in Large Language Models" ☆524 · Updated 10 months ago
- RewardBench: the first evaluation tool for reward models ☆660 · Updated 5 months ago
- Official repo for "Make Your LLM Fully Utilize the Context" ☆261 · Updated last year
- Code for the paper "Rethinking Benchmark and Contamination for Language Models with Rephrased Samples" ☆315 · Updated last year
- Positional Skip-wise Training for Efficient Context Window Extension of LLMs to Extremely Long Lengths (ICLR 2024) ☆205 · Updated last year
- [ACL 2024] Progressive LLaMA with Block Expansion ☆513 · Updated last year
- FuseAI Project ☆584 · Updated 10 months ago
- X-LoRA: Mixture of LoRA Experts ☆252 · Updated last year
- LOFT: A 1 Million+ Token Long-Context Benchmark ☆218 · Updated 5 months ago
- [ICLR 2025] Alignment Data Synthesis from Scratch by Prompting Aligned LLMs with Nothing. Your efficient and high-quality synthetic data … ☆793 · Updated 8 months ago
- Official repository of NEFTune: Noisy Embeddings Improve Instruction Finetuning ☆405 · Updated last year
- Automatic evals for LLMs ☆559 · Updated 5 months ago
- ☆313 · Updated last year