tianyi-lab / Reflection_Tuning
[ACL'24] Selective Reflection-Tuning: Student-Selected Data Recycling for LLM Instruction-Tuning
☆332 · Updated 2 months ago
Related projects
Alternatives and complementary repositories for Reflection_Tuning
- The official evaluation suite and dynamic data release for MixEval. ☆222 · Updated last week
- Official repository for ORPO ☆420 · Updated 5 months ago
- Official repository for "Alignment Data Synthesis from Scratch by Prompting Aligned LLMs with Nothing". Your efficient and high-quality s… ☆476 · Updated this week
- Repo for Rho-1: Token-level Data Selection & Selective Pretraining of LLMs. ☆302 · Updated 6 months ago
- ☆445 · Updated last week
- Implementation of paper Data Engineering for Scaling Language Models to 128K Context ☆435 · Updated 7 months ago
- RewardBench: the first evaluation tool for reward models. ☆424 · Updated 2 weeks ago
- OLMoE: Open Mixture-of-Experts Language Models ☆435 · Updated this week
- FuseAI Project ☆448 · Updated 2 months ago
- [ICML'24] Data and code for our paper "Training-Free Long-Context Scaling of Large Language Models" ☆351 · Updated 3 weeks ago
- An Open Source Toolkit For LLM Distillation ☆350 · Updated last month
- ☆246 · Updated last year
- ☆488 · Updated 3 weeks ago
- Generative Representational Instruction Tuning ☆562 · Updated this week
- ☆211 · Updated 3 months ago
- Large Reasoning Models ☆457 · Updated this week
- A family of compressed models obtained via pruning and knowledge distillation ☆279 · Updated this week
- [ACL 2024] LLM2LLM: Boosting LLMs with Novel Iterative Data Enhancement ☆153 · Updated 7 months ago
- A library for easily merging multiple LLM experts and efficiently training the merged LLM. ☆401 · Updated 2 months ago
- Benchmarking LLMs with Challenging Tasks from Real Users ☆194 · Updated this week
- A simple unified framework for evaluating LLMs ☆138 · Updated this week
- Memory optimization and training recipes to extrapolate language models' context length to 1 million tokens, with minimal hardware. ☆642 · Updated last month
- Deita: Data-Efficient Instruction Tuning for Alignment [ICLR 2024] ☆494 · Updated 5 months ago
- EvolKit is an innovative framework designed to automatically enhance the complexity of instructions used for fine-tuning Large Language M… ☆169 · Updated last week
- Positional Skip-wise Training for Efficient Context Window Extension of LLMs to Extreme Lengths (ICLR 2024) ☆200 · Updated 5 months ago
- [EMNLP 2023] Adapting Language Models to Compress Long Contexts ☆275 · Updated 2 months ago
- Official Repo for "Programming Every Example: Lifting Pre-training Data Quality Like Experts at Scale" ☆190 · Updated 3 weeks ago
- Official repo for "Make Your LLM Fully Utilize the Context" ☆241 · Updated 5 months ago
- Code for Quiet-STaR ☆639 · Updated 2 months ago
- Official implementation for the paper "DoLa: Decoding by Contrasting Layers Improves Factuality in Large Language Models" ☆428 · Updated 6 months ago