tmlr-group / NoisyRationales
[NeurIPS 2024] "Can Language Models Perform Robust Reasoning in Chain-of-thought Prompting with Noisy Rationales?"
☆30 · Updated this week
Alternatives and similar repositories for NoisyRationales:
Users interested in NoisyRationales are comparing it to the repositories listed below.
- Translation of the VHL repo in Paddle ☆25 · Updated last year
- AdaMerging: Adaptive Model Merging for Multi-Task Learning (ICLR 2024) ☆61 · Updated 2 months ago
- Official repo for the EMNLP 2024 paper "SOUL: Unlocking the Power of Second-Order Optimization for LLM Unlearning" ☆17 · Updated 3 months ago
- Official code for the paper "Assessing the Brittleness of Safety Alignment via Pruning and Low-Rank Modifications" ☆65 · Updated 3 months ago
- [ICML 2023] "On Strengthening and Defending Graph Reconstruction Attack with Markov Chain Approximation" ☆32 · Updated last year
- Official repository for the paper "Safety Alignment Should Be Made More Than Just a Few Tokens Deep" ☆63 · Updated 6 months ago
- [ACL 2024] Code and data for "Machine Unlearning of Pre-trained Large Language Models" ☆53 · Updated 3 months ago
- Official repository of "Localizing Task Information for Improved Model Merging and Compression" [ICML 2024] ☆39 · Updated 2 months ago
- Localize-and-Stitch: Efficient Model Merging via Sparse Task Arithmetic ☆22 · Updated last week
- Improved Few-Shot Jailbreaking Can Circumvent Aligned Language Models and Their Defenses (NeurIPS 2024) ☆53 · Updated this week
- Official code for the paper "Vaccine: Perturbation-aware Alignment for Large Language Models" (NeurIPS 2024) ☆32 · Updated 2 months ago
- [ACL 2024] Shifting Attention to Relevance: Towards the Predictive Uncertainty Quantification of Free-Form Large Language Models ☆42 · Updated 4 months ago
- Source code for the NeurIPS 2024 paper "HaloScope: Harnessing Unlabeled LLM Generations for Hallucination Detection" ☆31 · Updated 3 weeks ago
- Official implementation for "ALI-Agent: Assessing LLMs' Alignment with Human Values via Agent-based Evaluation" ☆15 · Updated 3 months ago
- Accepted LLM papers at NeurIPS 2024 ☆32 · Updated 3 months ago
- DataInf: Efficiently Estimating Data Influence in LoRA-tuned LLMs and Diffusion Models (ICLR 2024) ☆57 · Updated 3 months ago
- In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation (ICML 2024) ☆50 · Updated 9 months ago
- [ICML 2024] Safety Fine-Tuning at (Almost) No Cost: A Baseline for Vision Large Language Models ☆51 · Updated 4 months ago
- Official code for "Decoding-Time Language Model Alignment with Multiple Objectives" ☆18 · Updated 2 months ago
- Code and data for the paper "Super(ficial)-alignment: Strong Models May Deceive Weak Models in Weak-to-Strong Generalization" ☆11 · Updated 6 months ago
- Representation Surgery for Multi-Task Model Merging (ICML 2024) ☆34 · Updated 3 months ago
- [NeurIPS 2024 Spotlight] EMR-Merging: Tuning-Free High-Performance Model Merging ☆42 · Updated 2 months ago
- RWKU: Benchmarking Real-World Knowledge Unlearning for Large Language Models (NeurIPS 2024) ☆65 · Updated 3 months ago