open-compass / T-Eval
[ACL2024] T-Eval: Evaluating Tool Utilization Capability of Large Language Models Step by Step
☆231 · Updated 7 months ago
Related projects
Alternatives and complementary repositories for T-Eval
- [ACL2024 Findings] Agent-FLAN: Designing Data and Methods of Effective Agent Tuning for Large Language Models ☆326 · Updated 7 months ago
- Evaluating LLMs' multi-round chat capability by assessing conversations generated by two LLM instances. ☆139 · Updated last year
- [NAACL'24] Self-data filtering of LLM instruction-tuning data using a novel perplexity-based difficulty score, without using any other mo… ☆307 · Updated 2 months ago
- InsTag: A Tool for Data Analysis in LLM Supervised Fine-tuning ☆218 · Updated last year
- Official Repo for "Programming Every Example: Lifting Pre-training Data Quality Like Experts at Scale" ☆191 · Updated last month
- [EMNLP 2024] LongAlign: A Recipe for Long Context Alignment of LLMs ☆217 · Updated 6 months ago
- A new tool-learning benchmark aiming at a balance of stability and realism, built on ToolBench. ☆114 · Updated 2 months ago
- Deita: Data-Efficient Instruction Tuning for Alignment [ICLR2024] ☆498 · Updated 6 months ago
- Generative Judge for Evaluating Alignment ☆217 · Updated 10 months ago
- Implementation for "Step-DPO: Step-wise Preference Optimization for Long-chain Reasoning of LLMs" ☆285 · Updated 4 months ago
- LongQLoRA: Extend Context Length of LLMs Efficiently ☆159 · Updated last year
- Related works and background techniques for OpenAI o1 ☆142 · Updated last week
- Implementation of the paper "Data Engineering for Scaling Language Models to 128K Context" ☆438 · Updated 8 months ago
- Code for the paper "∞Bench: Extending Long Context Evaluation Beyond 100K Tokens": https://arxiv.org/abs/2402.13718 ☆285 · Updated last month
- [ACL'24] Superfiltering: Weak-to-Strong Data Filtering for Fast Instruction-Tuning ☆125 · Updated 2 months ago
- ACL 2024 | LooGLE: Long Context Evaluation for Long-Context Language Models ☆167 · Updated last month
- Official repository for "Alignment Data Synthesis from Scratch by Prompting Aligned LLMs with Nothing". Your efficient and high-quality s… ☆491 · Updated 2 weeks ago
- RLAIF-V: Aligning MLLMs through Open-Source AI Feedback for Super GPT-4V Trustworthiness ☆241 · Updated 2 weeks ago
- ReST-MCTS*: LLM Self-Training via Process Reward Guided Tree Search (NeurIPS 2024) ☆316 · Updated last month
- FireAct: Toward Language Agent Fine-tuning ☆255 · Updated last year
- A repository sharing the literature on long-context large language models, including methodologies and evaluation benchmarks ☆252 · Updated 3 months ago
- [ACL 2024 Demo] Official GitHub repo for UltraEval: An open source framework for evaluating foundation models. ☆220 · Updated 3 weeks ago
- An automated pipeline for evaluating LLMs for role-playing. ☆136 · Updated 2 months ago
- [ACL 2024] AUTOACT: Automatic Agent Learning from Scratch for QA via Self-Planning ☆178 · Updated last month