Jarviswang94 / Multilingual_safety_benchmark
Multilingual safety benchmark for Large Language Models
☆24 · Updated 2 months ago
Related projects
Alternatives and complementary repositories for Multilingual_safety_benchmark
- Code & Data for our Paper "Alleviating Hallucinations of Large Language Models through Induced Hallucinations" ☆59 · Updated 8 months ago
- ☆33 · Updated last year
- Official code for the paper: Evaluating Copyright Takedown Methods for Language Models ☆15 · Updated 4 months ago
- R-Judge: Benchmarking Safety Risk Awareness for LLM Agents (EMNLP Findings 2024) ☆61 · Updated last month
- [ICLR'24] RAIN: Your Language Models Can Align Themselves without Finetuning ☆84 · Updated 5 months ago
- Official repository for the ICML 2024 paper "On Prompt-Driven Safeguarding for Large Language Models" ☆71 · Updated 2 months ago
- BeHonest: Benchmarking Honesty in Large Language Models ☆30 · Updated 3 months ago
- [NAACL 2024 Outstanding Paper] Source code for the NAACL 2024 paper entitled "R-Tuning: Instructing Large Language Models to Say 'I Don't… ☆83 · Updated 4 months ago
- Lightweight tool to identify Data Contamination in LLMs evaluation ☆42 · Updated 8 months ago
- GitHub repository for "FELM: Benchmarking Factuality Evaluation of Large Language Models" (NeurIPS 2023) ☆54 · Updated 10 months ago
- Recent papers on (1) Psychology of LLMs; (2) Biases in LLMs ☆43 · Updated last year
- Source code of "Reasons to Reject? Aligning Language Models with Judgments" ☆56 · Updated 8 months ago
- In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation (ICML 2024) ☆45 · Updated 7 months ago
- Restore safety in fine-tuned language models through task arithmetic ☆26 · Updated 7 months ago
- A Survey on the Honesty of Large Language Models ☆46 · Updated last month
- ☆54 · Updated 2 months ago
- Official code for the ICML 2024 paper on Persona In-Context Learning (PICLe) ☆21 · Updated 4 months ago
- Repo accompanying our paper "Do Llamas Work in English? On the Latent Language of Multilingual Transformers" ☆58 · Updated 8 months ago
- [EMNLP 2024] The official GitHub repo for the paper "Course-Correction: Safety Alignment Using Synthetic Preferences" ☆19 · Updated last month
- Let's Sample Step by Step: Adaptive-Consistency for Efficient Reasoning with LLMs ☆31 · Updated 9 months ago
- [NeurIPS 2024] The official implementation of the paper "Chain of Preference Optimization: Improving Chain-of-Thought Reasoning in LLMs" ☆63 · Updated last month
- Min-K%++: Improved baseline for detecting pre-training data of LLMs (https://arxiv.org/abs/2404.02936) ☆26 · Updated 5 months ago
- [EMNLP 2024] Model Editing Harms General Abilities of Large Language Models: Regularization to the Rescue ☆33 · Updated this week
- ☆38 · Updated last year
- ☆65 · Updated 5 months ago
- The official implementation of the paper "Learning From Failure: Integrating Negative Examples when Fine-tuning Large Language Models as Agen… ☆21 · Updated 8 months ago
- ☆44 · Updated 2 months ago
- [EMNLP 2024] Source code for the paper "Learning Planning-based Reasoning with Trajectory Collection and Process Rewards Synthesizing" ☆40 · Updated 2 months ago
- Semi-Parametric Editing with a Retrieval-Augmented Counterfactual Model ☆65 · Updated 2 years ago
- ☆23 · Updated 2 months ago