Quehry / HelloBench
HelloBench: Evaluating Long Text Generation Capabilities of Large Language Models
☆38 · Updated 2 months ago
Alternatives and similar repositories for HelloBench:
Users interested in HelloBench are comparing it to the repositories listed below.
- The official implementation of "Ada-LEval: Evaluating long-context LLMs with length-adaptable benchmarks" ☆53 · Updated 9 months ago
- This is the official repository of the paper "OlympicArena: Benchmarking Multi-discipline Cognitive Reasoning for Superintelligent AI" ☆92 · Updated 2 months ago
- [ICLR 2025] SuperCorrect: Supervising and Correcting Language Models with Error-Driven Insights ☆52 · Updated this week
- Large Language Models Can Self-Improve in Long-context Reasoning ☆62 · Updated 2 months ago
- [NeurIPS 2024] The official implementation of the paper "Chain of Preference Optimization: Improving Chain-of-Thought Reasoning in LLMs" ☆93 · Updated 3 months ago
- We introduce ScaleQuest, a scalable, novel, and cost-effective data synthesis method to unleash the reasoning capability of LLMs ☆58 · Updated 3 months ago
- Co-LLM: Learning to Decode Collaboratively with Multiple Language Models ☆107 · Updated 9 months ago
- Long Context Extension and Generalization in LLMs ☆48 · Updated 4 months ago
- Official repository for the paper "Weak-to-Strong Extrapolation Expedites Alignment" ☆72 · Updated 8 months ago
- [ICLR'24 spotlight] Tool-Augmented Reward Modeling ☆44 · Updated last month
- A simple GPT-based evaluation tool for multi-aspect, interpretable assessment of LLMs ☆83 · Updated last year
- This is the implementation of LeCo ☆30 · Updated 3 weeks ago
- Codebase for Instruction Following without Instruction Tuning ☆33 · Updated 4 months ago
- Source code of "Reasons to Reject? Aligning Language Models with Judgments" ☆58 · Updated 11 months ago
- [NeurIPS 2024] Train LLMs with diverse system messages reflecting individualized preferences to generalize to unseen system messages ☆42 · Updated 2 months ago
- What Happened in LLMs Layers when Trained for Fast vs. Slow Thinking: A Gradient Perspective ☆58 · Updated 3 months ago
- Official repository of "Are Your LLMs Capable of Stable Reasoning?" ☆18 · Updated last week
- The demo, code, and data of FollowRAG ☆69 · Updated 2 months ago
- [EMNLP 2024 (Oral)] Leave No Document Behind: Benchmarking Long-Context LLMs with Extended Multi-Doc QA ☆111 · Updated 3 months ago
- [NAACL 2025] Source code for MMEvalPro, a more trustworthy and efficient benchmark for evaluating LMMs ☆22 · Updated 4 months ago
- This is the official repo of "QuickLLaMA: Query-aware Inference Acceleration for Large Language Models" ☆45 · Updated 7 months ago
- The official repository of the Omni-MATH benchmark ☆71 · Updated last month