Jiuzhouh / Uncertainty-Aware-Language-Agent
This is the official repo for "Towards Uncertainty-Aware Language Agent".
☆32 · Updated last year
Alternatives and similar repositories for Uncertainty-Aware-Language-Agent
Users interested in Uncertainty-Aware-Language-Agent are comparing it to the libraries listed below.
- ☆46 · Updated last year
- In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation (ICML 2024) ☆62 · Updated last year
- ☆29 · Updated last year
- [ICLR 2025] Unintentional Unalignment: Likelihood Displacement in Direct Preference Optimization ☆32 · Updated 3 weeks ago
- Lightweight Adapting for Black-Box Large Language Models ☆24 · Updated last year
- Restore safety in fine-tuned language models through task arithmetic ☆31 · Updated last year
- Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision ☆124 · Updated last year
- Official implementation of Bootstrapping Language Models via DPO Implicit Rewards ☆47 · Updated 9 months ago
- [EMNLP Findings 2024 & ACL 2024 NLRSE Oral] Enhancing Mathematical Reasoning… ☆51 · Updated last year
- [EMNLP 2024] Model Editing Harms General Abilities of Large Language Models: Regularization to the Rescue ☆38 · Updated 8 months ago
- This repository contains data, code and models for contextual noncompliance. ☆24 · Updated last year
- The official repository of "Improving Large Language Models via Fine-grained Reinforcement Learning with Minimum Editing Constraint" ☆39 · Updated 2 years ago
- ☆46 · Updated 2 years ago
- Directional Preference Alignment ☆58 · Updated last year
- Semi-Parametric Editing with a Retrieval-Augmented Counterfactual Model ☆71 · Updated 3 years ago
- ☆19 · Updated last year
- ☆57 · Updated 8 months ago
- Code repo for the ICLR 2024 paper "Can LLMs Express Their Uncertainty? An Empirical Evaluation of Confidence Elicitation in LLMs" ☆143 · Updated last year
- ☆15 · Updated 6 months ago
- Align your LM to express calibrated verbal statements of confidence in its long-form generations. ☆29 · Updated last year
- ☆103 · Updated 2 years ago
- Source codes for "Preference-grounded Token-level Guidance for Language Model Fine-tuning" (NeurIPS 2023). ☆17 · Updated last year
- Long Is More for Alignment: A Simple but Tough-to-Beat Baseline for Instruction Fine-Tuning [ICML 2024] ☆21 · Updated last year
- [TMLR 2025] A Survey on the Honesty of Large Language Models ☆64 · Updated last year
- Reproduction of "RLCD: Reinforcement Learning from Contrast Distillation for Language Model Alignment" ☆69 · Updated 2 years ago
- Official repository for MATES: Model-Aware Data Selection for Efficient Pretraining with Data Influence Models [NeurIPS 2024] ☆79 · Updated last year
- ☆103 · Updated last year
- ☆102 · Updated 2 years ago
- Code for most of the experiments in the paper "Understanding the Effects of RLHF on LLM Generalisation and Diversity" ☆47 · Updated 2 years ago
- Official repository for the ICLR 2024 Spotlight paper "Large Language Models Are Not Robust Multiple Choice Selectors" ☆42 · Updated 8 months ago