ChnQ / TracingLLM
☆30 · Updated last year
Alternatives and similar repositories for TracingLLM
Users interested in TracingLLM are comparing it to the repositories listed below.
- [ACL 2024] Code and data for "Machine Unlearning of Pre-trained Large Language Models" ☆60 · Updated last year
- [ICLR'24] RAIN: Your Language Models Can Align Themselves without Finetuning ☆99 · Updated last year
- In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation (ICML 2024) ☆61 · Updated last year
- ☆41 · Updated last year
- A Sober Look at Language Model Reasoning ☆86 · Updated 2 weeks ago
- [2025-TMLR] A Survey on the Honesty of Large Language Models ☆61 · Updated 10 months ago
- [ICLR 2025] Code & data for the paper "Super(ficial)-alignment: Strong Models May Deceive Weak Models in Weak-to-Strong Generalization" ☆13 · Updated last year
- ☆25 · Updated 7 months ago
- Lightweight Adapting for Black-Box Large Language Models ☆23 · Updated last year
- ☆21 · Updated last month
- The official repo for "Towards Uncertainty-Aware Language Agent" ☆29 · Updated last year
- Code repo for the ICLR 2024 paper "Can LLMs Express Their Uncertainty? An Empirical Evaluation of Confidence Elicitation in LLMs" ☆133 · Updated last year
- The official repository of "Improving Large Language Models via Fine-grained Reinforcement Learning with Minimum Editing Constraint" ☆38 · Updated last year
- ☆25 · Updated 6 months ago
- ☆41 · Updated last year
- [NeurIPS 2024] Official code of $\beta$-DPO: Direct Preference Optimization with Dynamic $\beta$ ☆49 · Updated last year
- [ACL 2024] Shifting Attention to Relevance: Towards the Predictive Uncertainty Quantification of Free-Form Large Language Models ☆57 · Updated last year
- ☆22 · Updated last year
- Official implementation of "Bootstrapping Language Models via DPO Implicit Rewards" ☆44 · Updated 6 months ago
- ☆32 · Updated last year
- ☆31 · Updated 5 months ago
- The code of "Improving Weak-to-Strong Generalization with Scalable Oversight and Ensemble Learning" ☆17 · Updated last year
- [NeurIPS'24] Weak-to-Strong Search: Align Large Language Models via Searching over Small Language Models ☆62 · Updated 10 months ago
- Official code for "SEAL: Steerable Reasoning Calibration of Large Language Models for Free" ☆44 · Updated 6 months ago
- [EMNLP 2024] The official GitHub repo for the paper "Course-Correction: Safety Alignment Using Synthetic Preferences" ☆19 · Updated last year
- [ICLR 2025 Oral] RM-Bench: Benchmarking Reward Models of Language Models with Subtlety and Style ☆64 · Updated 3 months ago
- Safe Unlearning: A Surprisingly Effective and Generalizable Solution to Defend Against Jailbreak Attacks ☆32 · Updated last year
- Official implementation for "ALI-Agent: Assessing LLMs' Alignment with Human Values via Agent-based Evaluation" ☆20 · Updated 2 months ago
- [ACL 2025] SafeChain: Safety of Language Models with Long Chain-of-Thought Reasoning Capabilities ☆20 · Updated 6 months ago
- A curated list of resources on Reinforcement Learning with Verifiable Rewards (RLVR) and the reasoning capability boundary of Large Language Models ☆70 · Updated this week