GAIR-NLP / Safety-J
Safety-J: Evaluating Safety with Critique
☆16 · Updated 7 months ago
Alternatives and similar repositories for Safety-J:
Users interested in Safety-J are comparing it to the repositories listed below
- ☆73 · Updated 10 months ago
- BeHonest: Benchmarking Honesty in Large Language Models ☆31 · Updated 7 months ago
- ☆41 · Updated last year
- ☆13 · Updated 8 months ago
- Code and data for "ConflictBank: A Benchmark for Evaluating the Influence of Knowledge Conflicts in LLM" (NeurIPS 2024 Track Datasets and…) ☆37 · Updated 5 months ago
- ☆15 · Updated 4 months ago
- ☆26 · Updated last month
- Code & Data for our Paper "Alleviating Hallucinations of Large Language Models through Induced Hallucinations" ☆63 · Updated last year
- [NeurIPS 2024] Can Language Models Learn to Skip Steps? ☆14 · Updated last month
- ☆43 · Updated 4 months ago
- [EMNLP 2023] MQuAKE: Assessing Knowledge Editing in Language Models via Multi-Hop Questions ☆107 · Updated 6 months ago
- [AAAI 2025 oral] Evaluating Mathematical Reasoning Beyond Accuracy ☆55 · Updated 3 months ago
- [EMNLP 2024] Source code for the paper "Learning Planning-based Reasoning with Trajectory Collection and Process Rewards Synthesizing" ☆73 · Updated 2 months ago
- ☆68 · Updated 3 months ago
- [ICLR 25 Oral] RM-Bench: Benchmarking Reward Models of Language Models with Subtlety and Style ☆27 · Updated this week
- [EMNLP 2024] The official GitHub repo for the survey paper "Knowledge Conflicts for LLMs: A Survey" ☆107 · Updated 6 months ago
- ☆38 · Updated 4 months ago
- [EMNLP 2023] Plan, Verify and Switch: Integrated Reasoning with Diverse X-of-Thoughts ☆26 · Updated last year
- The repository of the project "Fine-tuning Large Language Models with Sequential Instructions"; code base comes from open-instruct and LA… ☆29 · Updated 4 months ago
- In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation (ICML 2024) ☆53 · Updated 11 months ago
- A Survey on the Honesty of Large Language Models ☆56 · Updated 3 months ago
- [ICML 2024] Can AI Assistants Know What They Don't Know? ☆79 · Updated last year
- FeatureAlignment = Alignment + Mechanistic Interpretability ☆28 · Updated 2 weeks ago
- ☆15 · Updated 9 months ago
- [ACL 2024] Code for the paper "ALaRM: Align Language Models via Hierarchical Rewards Modeling" ☆25 · Updated 11 months ago
- Code and Results of the Paper: On the Reliability of Psychological Scales on Large Language Models ☆30 · Updated 6 months ago
- ☆11 · Updated 6 months ago
- The official code repository for PRMBench ☆68 · Updated last month
- ☆59 · Updated 6 months ago
- Code for the EMNLP 2024 paper "Neuron-Level Knowledge Attribution in Large Language Models" ☆29 · Updated 4 months ago