OATML / semantic-entropy-probes
☆28 · Updated 9 months ago
Alternatives and similar repositories for semantic-entropy-probes
Users interested in semantic-entropy-probes are comparing it to the repositories listed below.
- ☆50 · Updated last year
- Codebase for reproducing the experiments of the semantic uncertainty paper (paragraph-length experiments). ☆57 · Updated last year
- ☆165 · Updated 10 months ago
- ☆44 · Updated 9 months ago
- [NAACL'25 Oral] Steering Knowledge Selection Behaviours in LLMs via SAE-Based Representation Engineering ☆57 · Updated 5 months ago
- Code associated with Tuning Language Models by Proxy (Liu et al., 2024) ☆109 · Updated last year
- ☆88 · Updated 10 months ago
- ☆51 · Updated last month
- In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation (ICML 2024) ☆57 · Updated last year
- ☆36 · Updated 4 months ago
- The Paper List on Data Contamination for Large Language Models Evaluation. ☆93 · Updated last month
- PaCE: Parsimonious Concept Engineering for Large Language Models (NeurIPS 2024) ☆35 · Updated 6 months ago
- GitHub repository for "FELM: Benchmarking Factuality Evaluation of Large Language Models" (NeurIPS 2023) ☆59 · Updated last year
- Official PyTorch implementation of EMoE: Unlocking Emergent Modularity in Large Language Models (main conference @ NAACL 2024) ☆29 · Updated 11 months ago
- ☆29 · Updated last year
- LoFiT: Localized Fine-tuning on LLM Representations ☆38 · Updated 4 months ago
- ☆69 · Updated 3 months ago
- Semi-Parametric Editing with a Retrieval-Augmented Counterfactual Model ☆68 · Updated 2 years ago
- [ICLR 2024] Evaluating Large Language Models at Evaluating Instruction Following ☆125 · Updated 10 months ago
- ☆31 · Updated 2 months ago
- This repository contains data, code, and models for contextual noncompliance. ☆22 · Updated 9 months ago
- ☆94 · Updated last year
- Evaluate the Quality of Critique ☆35 · Updated 11 months ago
- Codes and datasets for the paper Measuring and Enhancing Trustworthiness of LLMs in RAG through Grounded Attributions and Learning to Ref… ☆56 · Updated 2 months ago
- Scalable Meta-Evaluation of LLMs as Evaluators ☆42 · Updated last year
- ☆40 · Updated last year
- ☆43 · Updated last year
- A Mechanistic Understanding of Alignment Algorithms: A Case Study on DPO and Toxicity ☆72 · Updated 2 months ago
- ☆74 · Updated 11 months ago
- [ICLR'24 Spotlight] "Adaptive Chameleon or Stubborn Sloth: Revealing the Behavior of Large Language Models in Knowledge Conflicts" ☆69 · Updated last year