princeton-nlp / LESS
[ICML 2024] LESS: Selecting Influential Data for Targeted Instruction Tuning
☆456 · Updated 8 months ago
Alternatives and similar repositories for LESS
Users interested in LESS are comparing it to the repositories listed below.
- Deita: Data-Efficient Instruction Tuning for Alignment [ICLR 2024] ☆556 · Updated 6 months ago
- A series of technical reports on Slow Thinking with LLMs ☆699 · Updated last week
- [NAACL'24] Self-data filtering of LLM instruction-tuning data using a novel perplexity-based difficulty score, without using any other mo… ☆372 · Updated 9 months ago
- RewardBench: the first evaluation tool for reward models ☆604 · Updated last week
- ☆540 · Updated 5 months ago
- A Survey on Data Selection for Language Models ☆235 · Updated last month
- Official implementation for the paper "DoLa: Decoding by Contrasting Layers Improves Factuality in Large Language Models" ☆498 · Updated 5 months ago
- ReST-MCTS*: LLM Self-Training via Process Reward Guided Tree Search (NeurIPS 2024) ☆639 · Updated 5 months ago
- ☆288 · Updated 10 months ago
- Code and data for Scaling Relationship on Learning Mathematical Reasoning with Large Language Models ☆264 · Updated 9 months ago
- Awesome-Long2short-on-LRMs is a collection of state-of-the-art, novel, exciting long2short methods on large reasoning models. It contains… ☆225 · Updated 2 weeks ago
- Repository for "Label Words are Anchors: An Information Flow Perspective for Understanding In-Context Learning" ☆164 · Updated last year
- Implementation for "Step-DPO: Step-wise Preference Optimization for Long-chain Reasoning of LLMs" ☆369 · Updated 5 months ago
- Collection of training data management explorations for large language models ☆326 · Updated 10 months ago
- A simple toolkit for benchmarking LLMs on mathematical reasoning tasks. 🧮✨ ☆226 · Updated last year
- [NeurIPS 2024] SimPO: Simple Preference Optimization with a Reference-Free Reward ☆897 · Updated 4 months ago
- LoRAMoE: Revolutionizing Mixture of Experts for Maintaining World Knowledge in Language Model Alignment ☆345 · Updated last year
- A collection of research papers on Self-Correcting Large Language Models with Automated Feedback ☆531 · Updated 7 months ago
- InsTag: A Tool for Data Analysis in LLM Supervised Fine-tuning ☆261 · Updated last year
- PyTorch implementation of DoReMi, a method for optimizing data mixture weights in language modeling datasets ☆332 · Updated last year
- Related works and background techniques for OpenAI o1 ☆221 · Updated 5 months ago
- LLM hallucination paper list ☆318 · Updated last year
- ☆331 · Updated 2 weeks ago
- Code for the paper "∞Bench: Extending Long Context Evaluation Beyond 100K Tokens" (https://arxiv.org/abs/2402.13718) ☆336 · Updated 8 months ago
- LongBench v2 and LongBench (ACL '25 & '24) ☆903 · Updated 5 months ago
- [ACL 2024] A Survey of Chain of Thought Reasoning: Advances, Frontiers and Future ☆451 · Updated 5 months ago
- Official repository of NEFTune: Noisy Embeddings Improve Instruction Finetuning ☆396 · Updated last year
- Repository for In-context Autoencoder ☆128 · Updated last year
- Project for the paper "Instruction Tuning for Large Language Models: A Survey" ☆179 · Updated 6 months ago
- ☆202 · Updated 4 months ago