GAIR-NLP / auto-j
Generative Judge for Evaluating Alignment
★228 · Updated last year
Alternatives and similar repositories for auto-j:
Users interested in auto-j are comparing it to the libraries listed below.
- InsTag: A Tool for Data Analysis in LLM Supervised Fine-tuning ★240 · Updated last year
- An unofficial implementation of Self-Alignment with Instruction Backtranslation. ★136 · Updated 7 months ago
- ★258 · Updated 6 months ago
- [NAACL'24] Self-data filtering of LLM instruction-tuning data using a novel perplexity-based difficulty score, without using any other mo… ★337 · Updated 5 months ago
- Deita: Data-Efficient Instruction Tuning for Alignment [ICLR2024] ★534 · Updated 2 months ago
- ToolkenGPT: Augmenting Frozen Language Models with Massive Tools via Tool Embeddings - NeurIPS 2023 (oral) ★256 · Updated 10 months ago
- ACL 2024 | LooGLE: Long Context Evaluation for Long-Context Language Models ★175 · Updated 4 months ago
- FireAct: Toward Language Agent Fine-tuning ★265 · Updated last year
- Unofficial implementation of AlpaGasus ★90 · Updated last year
- Data and Code for Program of Thoughts (TMLR 2023) ★259 · Updated 9 months ago
- A large-scale, fine-grained, diverse preference dataset (and models). ★329 · Updated last year
- Codes and Data for Scaling Relationship on Learning Mathematical Reasoning with Large Language Models ★245 · Updated 5 months ago
- [ACL'24 Outstanding] Data and code for L-Eval, a comprehensive long context language models evaluation benchmark ★370 · Updated 7 months ago
- Code and data for "Lost in the Middle: How Language Models Use Long Contexts" ★333 · Updated last year
- [ACL'24] Superfiltering: Weak-to-Strong Data Filtering for Fast Instruction-Tuning ★142 · Updated 5 months ago
- Reformatted Alignment ★114 · Updated 4 months ago
- ToolQA, a new dataset to evaluate the capabilities of LLMs in answering challenging questions with external tools. It offers two levels … ★251 · Updated last year
- [EMNLP 2024 (Oral)] Leave No Document Behind: Benchmarking Long-Context LLMs with Extended Multi-Doc QA ★111 · Updated 3 months ago
- Official Repo for "Programming Every Example: Lifting Pre-training Data Quality Like Experts at Scale" ★217 · Updated this week
- ★137 · Updated last year
- Codes for the paper "∞Bench: Extending Long Context Evaluation Beyond 100K Tokens": https://arxiv.org/abs/2402.13718 ★307 · Updated 4 months ago
- Evaluating LLMs' multi-round chatting capability via assessing conversations generated by two LLM instances. ★144 · Updated last year
- [ACL 2024] AutoAct: Automatic Agent Learning from Scratch for QA via Self-Planning ★207 · Updated last month
- [EMNLP 2023] Adapting Language Models to Compress Long Contexts ★293 · Updated 5 months ago
- A new tool learning benchmark aiming at well-balanced stability and reality, based on ToolBench. ★129 · Updated 5 months ago
- ★139 · Updated 7 months ago
- [ICLR 2024] Evaluating Large Language Models at Evaluating Instruction Following ★119 · Updated 7 months ago
- [ICML 2024] LESS: Selecting Influential Data for Targeted Instruction Tuning ★412 · Updated 4 months ago
- Counting-Stars (★) ★78 · Updated 5 months ago
- [EMNLP 2024] LongAlign: A Recipe for Long Context Alignment of LLMs ★241 · Updated 2 months ago