Re-Align / URIAL
☆313 · Updated last year
Alternatives and similar repositories for URIAL
Users who are interested in URIAL are comparing it to the repositories listed below.
- Benchmarking LLMs with Challenging Tasks from Real Users ☆242 · Updated 11 months ago
- Reformatted Alignment ☆112 · Updated last year
- Positional Skip-wise Training for Efficient Context Window Extension of LLMs to Extremely Long Lengths (ICLR 2024) ☆204 · Updated last year
- Implementation of the paper Data Engineering for Scaling Language Models to 128K Context ☆477 · Updated last year
- A simple unified framework for evaluating LLMs ☆254 · Updated 6 months ago
- [EMNLP 2023] Adapting Language Models to Compress Long Contexts ☆314 · Updated last year
- ☆319 · Updated last year
- ☆122 · Updated last year
- Open Source WizardCoder Dataset ☆160 · Updated 2 years ago
- Code and Data for "Long-context LLMs Struggle with Long In-context Learning" [TMLR 2025] ☆108 · Updated 8 months ago
- [ICML 2025] Programming Every Example: Lifting Pre-training Data Quality Like Experts at Scale ☆263 · Updated 3 months ago
- Generative Judge for Evaluating Alignment ☆247 · Updated last year
- LongEmbed: Extending Embedding Models for Long Context Retrieval (EMNLP 2024) ☆144 · Updated 11 months ago
- The official evaluation suite and dynamic data release for MixEval ☆250 · Updated 11 months ago
- Unofficial implementation of AlpaGasus ☆93 · Updated 2 years ago
- Official code for "MAmmoTH2: Scaling Instructions from the Web" [NeurIPS 2024] ☆148 · Updated last year
- [ACL 2024] LLM2LLM: Boosting LLMs with Novel Iterative Data Enhancement ☆190 · Updated last year
- ☆312 · Updated last year
- [EMNLP 2023] The CoT Collection: Improving Zero-shot and Few-shot Learning of Language Models via Chain-of-Thought Fine-Tuning ☆248 · Updated last year
- Scripts for generating synthetic finetuning data for reducing sycophancy ☆117 · Updated 2 years ago
- Official repo for "Make Your LLM Fully Utilize the Context" ☆261 · Updated last year
- Conifer: Improving Complex Constrained Instruction-Following Ability of Large Language Models ☆88 · Updated last year
- [ACL'24 Outstanding] Data and code for L-Eval, a comprehensive long-context language model evaluation benchmark ☆390 · Updated last year
- Code repo for "Agent Instructs Large Language Models to be General Zero-Shot Reasoners" ☆116 · Updated last week
- Repo for the paper Shepherd: A Critic for Language Model Generation ☆217 · Updated 2 years ago
- Self-Alignment with Principle-Following Reward Models ☆169 · Updated last month
- Code accompanying "How I learned to start worrying about prompt formatting" ☆112 · Updated 4 months ago
- FireAct: Toward Language Agent Fine-tuning ☆282 · Updated 2 years ago
- Reproducible, flexible LLM evaluations ☆257 · Updated last week
- ☆274 · Updated 2 years ago