OpenBMB / Eurus
☆320 · Updated last year
Alternatives and similar repositories for Eurus
Users interested in Eurus are comparing it to the repositories listed below.
- [ICML 2025] Programming Every Example: Lifting Pre-training Data Quality Like Experts at Scale ☆263 · Updated 4 months ago
- ☆315 · Updated last year
- A lightweight reproduction of DeepSeek-R1-Zero with in-depth analysis of self-reflection behavior ☆248 · Updated 7 months ago
- [ACL 2024] LLM2LLM: Boosting LLMs with Novel Iterative Data Enhancement ☆191 · Updated last year
- Reformatted Alignment ☆113 · Updated last year
- ☆173 · Updated 7 months ago
- ☆122 · Updated last year
- FireAct: Toward Language Agent Fine-tuning ☆286 · Updated 2 years ago
- ☆313 · Updated last year
- ☆327 · Updated 6 months ago
- [EMNLP 2024] LongAlign: A Recipe for Long Context Alignment of LLMs ☆256 · Updated 11 months ago
- Implementation of the paper "Data Engineering for Scaling Language Models to 128K Context" ☆478 · Updated last year
- ☆130 · Updated last year
- Code and data for "MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning" [ICLR 2024] ☆376 · Updated last year
- Code for "Critique Fine-Tuning: Learning to Critique is More Effective than Learning to Imitate" [COLM 2025]☆179Updated 4 months ago
- Benchmark and research code for the paper SWEET-RL Training Multi-Turn LLM Agents onCollaborative Reasoning Tasks☆253Updated 7 months ago
- ☆96Updated 11 months ago
- [COLM 2025] An Open Math Pre-trainng Dataset with 370B Tokens.☆108Updated 8 months ago
- A large-scale, fine-grained, diverse preference dataset (and models).☆356Updated last year
- ☆213 · Updated 9 months ago
- ☆65 · Updated last year
- ToolkenGPT: Augmenting Frozen Language Models with Massive Tools via Tool Embeddings - NeurIPS 2023 (oral) ☆264 · Updated last year
- Codes and Data for Scaling Relationship on Learning Mathematical Reasoning with Large Language Models ☆267 · Updated last year
- RL Scaling and Test-Time Scaling (ICML'25) ☆112 · Updated 10 months ago
- An O1 replication for coding ☆337 · Updated 11 months ago
- ☆83 · Updated last year
- Benchmarking LLMs with Challenging Tasks from Real Users ☆246 · Updated last year
- Codes for the paper "∞Bench: Extending Long Context Evaluation Beyond 100K Tokens" (https://arxiv.org/abs/2402.13718) ☆358 · Updated last year
- Generative Judge for Evaluating Alignment ☆248 · Updated last year
- ☆51 · Updated last year