tianjunz / TEMPERA
☆46 · Updated 2 years ago
Alternatives and similar repositories for TEMPERA
Users who are interested in TEMPERA are comparing it to the libraries listed below.
- This is the official repo for "Towards Uncertainty-Aware Language Agent". ☆29 · Updated last year
- Source code for the TMLR paper "Black-Box Prompt Learning for Pre-trained Language Models" ☆56 · Updated 2 years ago
- ☆19 · Updated last year
- ☆46 · Updated last year
- [ICLR 2025] Unintentional Unalignment: Likelihood Displacement in Direct Preference Optimization ☆31 · Updated 9 months ago
- Official code of the paper "Large Language Models Are Implicitly Topic Models: Explaining and Finding Good Demonstrations for In-Context Learning" ☆75 · Updated last year
- [ICLR 2023] Code for our paper "Selective Annotation Makes Language Models Better Few-Shot Learners" ☆111 · Updated 2 years ago
- Let's Sample Step by Step: Adaptive-Consistency for Efficient Reasoning with LLMs ☆39 · Updated last year
- Reproduction of "RLCD: Reinforcement Learning from Contrast Distillation for Language Model Alignment" ☆69 · Updated 2 years ago
- Online Adaptation of Language Models with a Memory of Amortized Contexts (NeurIPS 2024) ☆70 · Updated last year
- Active Example Selection for In-Context Learning (EMNLP'22) ☆49 · Updated last year
- Teaching Models to Express Their Uncertainty in Words ☆39 · Updated 3 years ago
- [EMNLP 2024] Model Editing Harms General Abilities of Large Language Models: Regularization to the Rescue ☆37 · Updated 5 months ago
- TRACE: A Comprehensive Benchmark for Continual Learning in Large Language Models ☆80 · Updated last year
- Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision ☆125 · Updated last year
- [NeurIPS'23] Aging with GRACE: Lifelong Model Editing with Discrete Key-Value Adaptors ☆82 · Updated 10 months ago
- Official implementation of "Bootstrapping Language Models via DPO Implicit Rewards" ☆44 · Updated 7 months ago
- ☆30 · Updated last year
- Directional Preference Alignment ☆57 · Updated last year
- The official repository of "Improving Large Language Models via Fine-grained Reinforcement Learning with Minimum Editing Constraint" ☆38 · Updated last year
- ☆25 · Updated 8 months ago
- Code for most of the experiments in the paper "Understanding the Effects of RLHF on LLM Generalisation and Diversity" ☆47 · Updated last year
- [NeurIPS 2024] Official code of $\beta$-DPO: Direct Preference Optimization with Dynamic $\beta$ ☆49 · Updated last year
- [ACL'24, Outstanding Paper] Emulated Disalignment: Safety Alignment for Large Language Models May Backfire! ☆38 · Updated last year
- Self-Supervised Alignment with Mutual Information ☆21 · Updated last year
- [NeurIPS'24] Weak-to-Strong Search: Align Large Language Models via Searching over Small Language Models ☆63 · Updated 11 months ago
- ☆35 · Updated 2 years ago
- Analyzing and Reducing Catastrophic Forgetting in Parameter Efficient Tuning ☆36 · Updated last year
- [ICML 2024] Long Is More for Alignment: A Simple but Tough-to-Beat Baseline for Instruction Fine-Tuning ☆19 · Updated last year
- ☆51 · Updated last year