Awenbocc / LLM-OOD
☆14 · Updated last year
Alternatives and similar repositories for LLM-OOD
Users interested in LLM-OOD are comparing it to the repositories listed below.
- Source code for reproducing the experimental results in the semantic density paper (NeurIPS 2024) ☆18 · Updated 4 months ago
- [ICLR'26, NAACL'25 Demo] Toolkit & Benchmark for evaluating the trustworthiness of generative foundation models. ☆125 · Updated 5 months ago
- Official implementation of the ICLR'24 paper "Curiosity-driven Red Teaming for Large Language Models" (https://openreview.net/pdf?id=4KqkizX… ☆88 · Updated last year
- LLM Unlearning ☆181 · Updated 2 years ago
- Papers about training data quality management for ML models. ☆107 · Updated 3 months ago
- [NeurIPS 2024] "Can Language Models Perform Robust Reasoning in Chain-of-thought Prompting with Noisy Rationales?" ☆39 · Updated 6 months ago
- Source code for the NeurIPS'24 paper "HaloScope: Harnessing Unlabeled LLM Generations for Hallucination Detection" ☆65 · Updated 10 months ago
- ☆45 · Updated 2 months ago
- ☆37 · Updated last year
- Code for paper: Aligning Large Language Models with Representation Editing: A Control Perspective ☆35 · Updated last year
- A resource repository for representation engineering in large language models ☆148 · Updated last year
- In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation (ICML 2024) ☆62 · Updated last year
- ☆174 · Updated 3 months ago
- ☆25 · Updated 3 months ago
- Using Explanations as a Tool for Advanced LLMs ☆69 · Updated last year
- JAILJUDGE: A comprehensive evaluation benchmark which includes a wide range of risk scenarios with complex malicious prompts (e.g., synth… ☆58 · Updated last year
- Code and data for the paper: On the Resilience of LLM-Based Multi-Agent Collaboration with Faulty Agents ☆42 · Updated last month
- Code for Fine-grained Uncertainty Quantification for LLMs from Semantic Similarities (NeurIPS'24) ☆34 · Updated last year
- Code repo for the ICLR 2024 paper "Can LLMs Express Their Uncertainty? An Empirical Evaluation of Confidence Elicitation in LLMs" ☆144 · Updated last year
- Code and dataset for the paper: "Can Editing LLMs Inject Harm?" ☆21 · Updated last month
- [ICLR'24] RAIN: Your Language Models Can Align Themselves without Finetuning ☆98 · Updated last year
- [ACL 2024] Shifting Attention to Relevance: Towards the Predictive Uncertainty Quantification of Free-Form Large Language Models ☆60 · Updated last year
- Improved Few-Shot Jailbreaking Can Circumvent Aligned Language Models and Their Defenses (NeurIPS 2024) ☆65 · Updated last year
- Awesome SAE papers ☆71 · Updated 8 months ago
- [EMNLP 2023] Poisoning Retrieval Corpora by Injecting Adversarial Passages https://arxiv.org/abs/2310.19156 ☆47 · Updated 2 years ago
- ☆64 · Updated 8 months ago
- [ICML 2024] Assessing the Brittleness of Safety Alignment via Pruning and Low-Rank Modifications ☆89 · Updated 10 months ago
- [ACL 2024] CodeAttack: Revealing Safety Generalization Challenges of Large Language Models via Code Completion ☆58 · Updated 4 months ago
- [ICLR 2025] Dissecting adversarial robustness of multimodal language model agents ☆123 · Updated 11 months ago
- Official repository for "Safety in Large Reasoning Models: A Survey" - Exploring safety risks, attacks, and defenses for Large Reasoning … ☆88 · Updated 5 months ago