Flossiee / HonestyLLM
[NeurIPS 2024] HonestLLM: Toward an Honest and Helpful Large Language Model
☆26 · Updated 2 months ago
Alternatives and similar repositories for HonestyLLM
Users interested in HonestyLLM are comparing it to the repositories listed below.
- This repository contains the code and data for the paper "SelfIE: Self-Interpretation of Large Language Model Embeddings" by Haozhe Chen,… ☆50 · Updated 8 months ago
- ☆30 · Updated last year
- In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation (ICML 2024) ☆61 · Updated last year
- ☆34 · Updated last year
- [NeurIPS 2024] Knowledge Circuits in Pretrained Transformers ☆153 · Updated 5 months ago
- Safe Unlearning: A Surprisingly Effective and Generalizable Solution to Defend Against Jailbreak Attacks ☆29 · Updated last year
- ☆31 · Updated last year
- [ACL 2024 main] Aligning Large Language Models with Human Preferences through Representation Engineering (https://aclanthology.org/2024.… ☆26 · Updated 10 months ago
- ☆39 · Updated last year
- ☆26 · Updated 4 months ago
- [ACL 2024] Code and data for "Machine Unlearning of Pre-trained Large Language Models" ☆59 · Updated 10 months ago
- ☆39 · Updated 5 months ago
- [2025-TMLR] A Survey on the Honesty of Large Language Models ☆58 · Updated 8 months ago
- A novel approach to improving the safety of large language models, enabling them to transition effectively from an unsafe to a safe state ☆63 · Updated 2 months ago
- Interpretable Contrastive Monte Carlo Tree Search Reasoning ☆48 · Updated 9 months ago
- Lightweight Adapting for Black-Box Large Language Models ☆23 · Updated last year
- [ICLR'24] RAIN: Your Language Models Can Align Themselves without Finetuning ☆96 · Updated last year
- [ICLR 2025] SuperCorrect: Advancing Small LLM Reasoning with Thought Template Distillation and Self-Correction ☆76 · Updated 4 months ago
- Exploring whether LLMs perform case-based or rule-based reasoning ☆30 · Updated last year
- Implementation for the paper "The Surprising Effectiveness of Negative Reinforcement in LLM Reasoning" ☆85 · Updated last month
- [ICML 2025] Weak-to-Strong Jailbreaking on Large Language Models ☆83 · Updated 3 months ago
- [ACL'24] Chain of Thought (CoT) is significant in improving the reasoning abilities of large language models (LLMs). However, the correla… ☆46 · Updated 3 months ago
- This is the official implementation of the paper "S²R: Teaching LLMs to Self-verify and Self-correct via Reinforcement Learning" ☆69 · Updated 3 months ago
- [NeurIPS 2024] The official implementation of the paper "Chain of Preference Optimization: Improving Chain-of-Thought Reasoning in LLMs" ☆127 · Updated 4 months ago
- ☆48 · Updated 9 months ago
- [ACL'24] Beyond One-Preference-Fits-All Alignment: Multi-Objective Direct Preference Optimization ☆85 · Updated 11 months ago
- Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision ☆123 · Updated 11 months ago
- Official repository for the paper "Safety Alignment Should Be Made More Than Just a Few Tokens Deep" ☆144 · Updated 3 months ago
- ☆41 · Updated 9 months ago
- Official repository for the paper "O1-Pruner: Length-Harmonizing Fine-Tuning for O1-Like Reasoning Pruning" ☆86 · Updated 5 months ago