Re-Align / URIAL
☆310 · Updated last year
Alternatives and similar repositories for URIAL
Users interested in URIAL are comparing it to the repositories listed below.
- Benchmarking LLMs with Challenging Tasks from Real Users ☆228 · Updated 8 months ago
- [EMNLP 2023] Adapting Language Models to Compress Long Contexts ☆307 · Updated 10 months ago
- This is the repo for the paper Shepherd -- A Critic for Language Model Generation ☆219 · Updated last year
- Positional Skip-wise Training for Efficient Context Window Extension of LLMs to Extreme Lengths (ICLR 2024) ☆204 · Updated last year
- ☆319 · Updated 9 months ago
- ☆121 · Updated last year
- FireAct: Toward Language Agent Fine-tuning ☆279 · Updated last year
- Generative Judge for Evaluating Alignment ☆244 · Updated last year
- [ICML 2025] Programming Every Example: Lifting Pre-training Data Quality Like Experts at Scale ☆251 · Updated this week
- Implementation of the paper Data Engineering for Scaling Language Models to 128K Context ☆463 · Updated last year
- Open Source WizardCoder Dataset ☆159 · Updated 2 years ago
- Official code for "MAmmoTH2: Scaling Instructions from the Web" [NeurIPS 2024] ☆145 · Updated 8 months ago
- The official evaluation suite and dynamic data release for MixEval ☆242 · Updated 8 months ago
- Reformatted Alignment ☆113 · Updated 9 months ago
- A simple unified framework for evaluating LLMs ☆221 · Updated 2 months ago
- ☆294 · Updated 11 months ago
- Code and data for "MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning" [ICLR 2024] ☆376 · Updated 10 months ago
- LongEmbed: Extending Embedding Models for Long Context Retrieval (EMNLP 2024) ☆138 · Updated 8 months ago
- [EMNLP 2023] The CoT Collection: Improving Zero-shot and Few-shot Learning of Language Models via Chain-of-Thought Fine-Tuning ☆244 · Updated last year
- A large-scale, fine-grained, diverse preference dataset (and models) ☆343 · Updated last year
- [ACL'24 Outstanding] Data and code for L-Eval, a comprehensive long-context language model evaluation benchmark ☆383 · Updated last year
- Code and Data for "Long-context LLMs Struggle with Long In-context Learning" [TMLR 2025]