Jarviswang94 / Multilingual_safety_benchmark
Multilingual safety benchmark for Large Language Models
☆50 · Updated 9 months ago
Alternatives and similar repositories for Multilingual_safety_benchmark
Users interested in Multilingual_safety_benchmark are comparing it to the repositories listed below.
- MTTM: Metamorphic Testing for Textual Content Moderation Software ☆32 · Updated 2 years ago
- ☆38 · Updated 4 months ago
- ☆29 · Updated 3 months ago
- Benchmarking LLMs' Gaming Ability in Multi-Agent Environments ☆75 · Updated last month
- basically all the things I used for this article ☆24 · Updated 4 months ago
- Benchmarking LLMs' Emotional Alignment with Humans ☆103 · Updated 3 months ago
- ☆30 · Updated 2 months ago
- Benchmarking LLMs' Psychological Portrayal ☆116 · Updated 5 months ago
- Code & data for the paper "Alleviating Hallucinations of Large Language Models through Induced Hallucinations" ☆64 · Updated last year
- [ACL 2024 Findings] CriticBench: Benchmarking LLMs for Critique-Correct Reasoning ☆25 · Updated last year
- Recent papers on (1) Psychology of LLMs; (2) Biases in LLMs ☆49 · Updated last year
- BeHonest: Benchmarking Honesty in Large Language Models ☆33 · Updated 9 months ago
- A Survey on the Honesty of Large Language Models ☆57 · Updated 5 months ago
- [EMNLP 2024] The official GitHub repo for the paper "Course-Correction: Safety Alignment Using Synthetic Preferences" ☆19 · Updated 8 months ago
- Public code repo for the COLING 2025 paper "Aligning LLMs with Individual Preferences via Interaction" ☆27 · Updated 2 months ago
- The rule-based evaluation subset and code implementation of Omni-MATH ☆22 · Updated 5 months ago
- ☆49 · Updated 11 months ago
- The reinforcement learning code for the SPA-VL dataset ☆33 · Updated 11 months ago
- ☆74 · Updated last year
- [AAAI 2025 oral] Evaluating Mathematical Reasoning Beyond Accuracy ☆61 · Updated 5 months ago
- [EMNLP 2024] Source code for the paper "Learning Planning-based Reasoning with Trajectory Collection and Process Rewards Synthesizing" ☆78 · Updated 4 months ago
- ☆24 · Updated 2 years ago
- ☆41 · Updated 8 months ago
- ☆38 · Updated 2 months ago
- [ACL 2024 main] Aligning Large Language Models with Human Preferences through Representation Engineering (https://aclanthology.org/2024.… ☆25 · Updated 8 months ago
- Code for the EMNLP 2024 paper "Neuron-Level Knowledge Attribution in Large Language Models" ☆34 · Updated 6 months ago
- ☆10 · Updated 3 months ago
- ☆21 · Updated 2 months ago
- [ICLR'24] RAIN: Your Language Models Can Align Themselves without Finetuning ☆93 · Updated last year
- Repo for the paper "Examining LLMs' Uncertainty Expression Towards Questions Outside Parametric Knowledge" ☆13 · Updated last year