ChnQ / TracingLLM
☆25 · Updated 10 months ago
Alternatives and similar repositories for TracingLLM:
Users interested in TracingLLM are comparing it to the libraries listed below.
- ☆34 · Updated 6 months ago
- Lightweight Adapting for Black-Box Large Language Models ☆22 · Updated last year
- ☆26 · Updated 9 months ago
- [ACL 2024] Code and data for "Machine Unlearning of Pre-trained Large Language Models" ☆57 · Updated 6 months ago
- [ICML 2024] Assessing the Brittleness of Safety Alignment via Pruning and Low-Rank Modifications ☆76 · Updated 3 weeks ago
- The official repository of "Improving Large Language Models via Fine-grained Reinforcement Learning with Minimum Editing Constraint" ☆38 · Updated last year
- A Survey on the Honesty of Large Language Models ☆57 · Updated 4 months ago
- [ICLR'24] RAIN: Your Language Models Can Align Themselves without Finetuning ☆91 · Updated 10 months ago
- [ICLR 2025] Code & data for the paper "Super(ficial)-alignment: Strong Models May Deceive Weak Models in Weak-to-Strong Generalization" ☆13 · Updated 10 months ago
- ☆9 · Updated 5 months ago
- Official code for "SEAL: Steerable Reasoning Calibration of Large Language Models for Free" ☆14 · Updated 2 weeks ago
- [EMNLP 2024] Model Editing Harms General Abilities of Large Language Models: Regularization to the Rescue ☆35 · Updated 5 months ago
- Official repository for the paper "Safety Alignment Should Be Made More Than Just a Few Tokens Deep" ☆89 · Updated 9 months ago
- Codebase for decoding compressed trust ☆23 · Updated 11 months ago
- SafeChain: Safety of Language Models with Long Chain-of-Thought Reasoning Capabilities ☆12 · Updated 2 weeks ago
- [NeurIPS 2024 D&B] Evaluating Copyright Takedown Methods for Language Models ☆17 · Updated 9 months ago
- [ICML 2024] In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation ☆57 · Updated last year
- Code for "A Sober Look at Progress in Language Model Reasoning" paper☆33Updated this week
- Official code for "Decoding-Time Language Model Alignment with Multiple Objectives" ☆22 · Updated 5 months ago
- [EMNLP 2024] The official GitHub repo for the paper "Course-Correction: Safety Alignment Using Synthetic Preferences" ☆19 · Updated 6 months ago
- [ACL 2024] Shifting Attention to Relevance: Towards the Predictive Uncertainty Quantification of Free-Form Large Language Models ☆47 · Updated 7 months ago
- A Mechanistic Understanding of Alignment Algorithms: A Case Study on DPO and Toxicity ☆71 · Updated last month
- ☆29 · Updated 11 months ago
- [NeurIPS 2024] "Can Language Models Perform Robust Reasoning in Chain-of-thought Prompting with Noisy Rationales?" ☆35 · Updated 3 months ago
- ☆37 · Updated last year
- ☆21 · Updated last month
- [NeurIPS 2024] Official code of $\beta$-DPO: Direct Preference Optimization with Dynamic $\beta$ ☆42 · Updated 5 months ago
- ☆19 · Updated last month
- ☆22 · Updated 6 months ago
- [NeurIPS 2024] RWKU: Benchmarking Real-World Knowledge Unlearning for Large Language Models ☆72 · Updated 6 months ago