tmlr-group / NoisyRationales
[NeurIPS 2024] "Can Language Models Perform Robust Reasoning in Chain-of-thought Prompting with Noisy Rationales?"
☆31 · Updated last month
Alternatives and similar repositories for NoisyRationales:
Users who are interested in NoisyRationales are comparing it to the repositories listed below.
- Official repository for the paper "Safety Alignment Should Be Made More Than Just a Few Tokens Deep" ☆71 · Updated 7 months ago
- Source code for the NeurIPS'24 paper "HaloScope: Harnessing Unlabeled LLM Generations for Hallucination Detection" ☆31 · Updated last month
- [ICML 2024] Assessing the Brittleness of Safety Alignment via Pruning and Low-Rank Modifications ☆69 · Updated 4 months ago
- Translation of the VHL repo into Paddle ☆25 · Updated last year
- Code for the paper "Aligning Large Language Models with Representation Editing: A Control Perspective" ☆24 · Updated 3 weeks ago
- AdaMerging: Adaptive Model Merging for Multi-Task Learning (ICLR 2024) ☆65 · Updated 3 months ago
- [ICLR 2025] Code and data for the paper "Super(ficial)-alignment: Strong Models May Deceive Weak Models in Weak-to-Strong Generalization" ☆12 · Updated 7 months ago
- Official code for "Decoding-Time Language Model Alignment with Multiple Objectives" ☆18 · Updated 3 months ago
- Official repo for the EMNLP'24 paper "SOUL: Unlocking the Power of Second-Order Optimization for LLM Unlearning" ☆18 · Updated 4 months ago
- Official repository of "Localizing Task Information for Improved Model Merging and Compression" [ICML 2024] ☆39 · Updated 3 months ago
- [ACL 2024] Code and data for "Machine Unlearning of Pre-trained Large Language Models" ☆52 · Updated 4 months ago
- Official code for the paper "Probing the Decision Boundaries of In-context Learning in Large Language Models" (https://arxiv.org/abs/2406.11233) ☆16 · Updated 5 months ago
- [ICML 2023] "On Strengthening and Defending Graph Reconstruction Attack with Markov Chain Approximation" ☆32 · Updated last year
- Localize-and-Stitch: Efficient Model Merging via Sparse Task Arithmetic ☆22 · Updated last month
- DataInf: Efficiently Estimating Data Influence in LoRA-tuned LLMs and Diffusion Models (ICLR 2024) ☆62 · Updated 4 months ago
- Code for "Fine-grained Uncertainty Quantification for LLMs from Semantic Similarities" (NeurIPS'24) ☆16 · Updated 2 months ago
- [NeurIPS 2023] "Diversified Outlier Exposure for Out-of-Distribution Detection via Informative Extrapolation" ☆11 · Updated last year
- [ACL 2024] CodeAttack: Revealing Safety Generalization Challenges of Large Language Models via Code Completion ☆36 · Updated 3 months ago
- Accepted LLM papers at NeurIPS 2024 ☆33 · Updated 4 months ago
- "In-Context Unlearning: Language Models as Few Shot Unlearners" by Martin Pawelczyk, Seth Neel*, and Himabindu Lakkaraju*; ICML 2024 ☆23 · Updated last year
- Official code for the paper "Vaccine: Perturbation-aware Alignment for Large Language Models" (NeurIPS 2024) ☆35 · Updated 3 months ago