Flossiee / HonestyLLM
[NeurIPS 2024] HonestLLM: Toward an Honest and Helpful Large Language Model
☆29 · Updated 7 months ago
Alternatives and similar repositories for HonestyLLM
Users interested in HonestyLLM are comparing it to the repositories listed below.
- [ICLR'24] RAIN: Your Language Models Can Align Themselves without Finetuning ☆98 · Updated last year
- ☆30 · Updated last year
- Safe Unlearning: A Surprisingly Effective and Generalizable Solution to Defend Against Jailbreak Attacks ☆32 · Updated last year
- Lightweight Adapting for Black-Box Large Language Models ☆24 · Updated last year
- Interpretable Contrastive Monte Carlo Tree Search Reasoning ☆50 · Updated last year
- Exploring whether LLMs perform case-based or rule-based reasoning ☆30 · Updated last year
- This repository contains the code and data for the paper "SelfIE: Self-Interpretation of Large Language Model Embeddings" by Haozhe Chen, … ☆55 · Updated last year
- Reasoning Activation in LLMs via Small Model Transfer (NeurIPS 2025) ☆21 · Updated 3 months ago
- [ICML 2024] Agent Smith: A Single Image Can Jailbreak One Million Multimodal LLM Agents Exponentially Fast ☆118 · Updated last year
- ☆33 · Updated last year
- [NeurIPS 2024 Oral] Aligner: Efficient Alignment by Learning to Correct ☆191 · Updated last year
- [ACL'24] Beyond One-Preference-Fits-All Alignment: Multi-Objective Direct Preference Optimization ☆95 · Updated last year
- In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation (ICML 2024) ☆62 · Updated last year
- Code release for "Debating with More Persuasive LLMs Leads to More Truthful Answers" ☆124 · Updated last year
- [TMLR 2025] A Survey on the Honesty of Large Language Models ☆64 · Updated last year
- R-Judge: Benchmarking Safety Risk Awareness for LLM Agents (EMNLP Findings 2024) ☆98 · Updated 3 weeks ago
- [ICML 2025] Weak-to-Strong Jailbreaking on Large Language Models ☆92 · Updated 9 months ago
- [ACL 2024] SALAD benchmark & MD-Judge ☆170 · Updated 10 months ago
- [ACL 2024] Code and data for "Machine Unlearning of Pre-trained Large Language Models" ☆66 · Updated last year
- [ICLR 2025] SuperCorrect: Advancing Small LLM Reasoning with Thought Template Distillation and Self-Correction ☆87 · Updated 10 months ago
- The official repository of "Improving Large Language Models via Fine-grained Reinforcement Learning with Minimum Editing Constraint" ☆39 · Updated 2 years ago
- Code associated with Tuning Language Models by Proxy (Liu et al., 2024) ☆127 · Updated last year
- ☆56 · Updated 4 months ago
- ☆46 · Updated 10 months ago
- Röttger et al. (NAACL 2024): "XSTest: A Test Suite for Identifying Exaggerated Safety Behaviours in Large Language Models" ☆126 · Updated 11 months ago
- [ACL 2025] SafeChain: Safety of Language Models with Long Chain-of-Thought Reasoning Capabilities ☆27 · Updated 10 months ago
- BeaverTails is a collection of datasets designed to facilitate research on safety alignment in large language models (LLMs). ☆175 · Updated 2 years ago
- Official repository for the paper "Safety Alignment Should Be Made More Than Just a Few Tokens Deep" ☆172 · Updated 9 months ago
- Trial and Error: Exploration-Based Trajectory Optimization of LLM Agents (ACL 2024 Main Conference) ☆159 · Updated last year
- [ACL 2024] Shifting Attention to Relevance: Towards the Predictive Uncertainty Quantification of Free-Form Large Language Models ☆60 · Updated last year