OpenBMB / Eurus
☆300 · Updated 3 months ago
Alternatives and similar repositories for Eurus:
Users interested in Eurus are comparing it to the libraries listed below.
- Official Repo for "Programming Every Example: Lifting Pre-training Data Quality Like Experts at Scale" ☆208 · Updated 3 months ago
- ☆247 · Updated 5 months ago
- Codes for the paper "∞Bench: Extending Long Context Evaluation Beyond 100K Tokens": https://arxiv.org/abs/2402.13718 ☆303 · Updated 3 months ago
- Implementation of the paper "Data Engineering for Scaling Language Models to 128K Context" ☆447 · Updated 9 months ago
- [EMNLP 2024] LongAlign: A Recipe for Long Context Alignment of LLMs ☆237 · Updated last month
- A series of technical reports on Slow Thinking with LLMs ☆297 · Updated last week
- A large-scale, fine-grained, diverse preference dataset (and models). ☆325 · Updated last year
- Reformatted Alignment ☆113 · Updated 3 months ago
- ☆295 · Updated last month
- RewardBench: the first evaluation tool for reward models. ☆491 · Updated last week
- [ICML'24] Data and code for our paper "Training-Free Long-Context Scaling of Large Language Models" ☆377 · Updated 3 months ago
- Deita: Data-Efficient Instruction Tuning for Alignment [ICLR 2024] ☆524 · Updated last month
- ACL 2024 | LooGLE: Long Context Evaluation for Long-Context Language Models ☆171 · Updated 3 months ago
- Codes and Data for Scaling Relationship on Learning Mathematical Reasoning with Large Language Models ☆233 · Updated 4 months ago
- Implementation of the paper "LongRoPE: Extending LLM Context Window Beyond 2 Million Tokens" ☆125 · Updated 5 months ago
- FireAct: Toward Language Agent Fine-tuning ☆261 · Updated last year
- Building Open LLM Web Agents with Self-Evolving Online Curriculum RL ☆282 · Updated 3 weeks ago
- ☆303 · Updated 7 months ago
- ☆120 · Updated 7 months ago
- Generative Judge for Evaluating Alignment ☆223 · Updated last year
- Benchmarking LLMs with Challenging Tasks from Real Users ☆206 · Updated 2 months ago
- An Analytical Evaluation Board of Multi-turn LLM Agents ☆270 · Updated 7 months ago
- [ACL 2024] LLM2LLM: Boosting LLMs with Novel Iterative Data Enhancement ☆172 · Updated 9 months ago
- [ACL'24 Outstanding] Data and code for L-Eval, a comprehensive long-context language model evaluation benchmark ☆368 · Updated 6 months ago
- A simple toolkit for benchmarking LLMs on mathematical reasoning tasks. 🧮✨ ☆145 · Updated 8 months ago
- The code of our paper "InfLLM: Unveiling the Intrinsic Capacity of LLMs for Understanding Extremely Long Sequences with Training-Free Mem… ☆320 · Updated 8 months ago
- Official repository for "Alignment Data Synthesis from Scratch by Prompting Aligned LLMs with Nothing". Your efficient and high-quality s… ☆565 · Updated last week
- [ACL 2024] T-Eval: Evaluating Tool Utilization Capability of Large Language Models Step by Step ☆256 · Updated 9 months ago
- A new tool learning benchmark aiming at well-balanced stability and reality, based on ToolBench. ☆125 · Updated 4 months ago
- Official implementation of the paper "Cumulative Reasoning with Large Language Models" (https://arxiv.org/abs/2308.04371) ☆288 · Updated 4 months ago