lifan-yuan / OOD_NLP
[NeurIPS 2023 D&B Track] Code and data for paper "Revisiting Out-of-distribution Robustness in NLP: Benchmarks, Analysis, and LLMs Evaluations".
☆31 · Updated last year
Alternatives and similar repositories for OOD_NLP:
Users who are interested in OOD_NLP are comparing it to the repositories listed below:
- Semi-Parametric Editing with a Retrieval-Augmented Counterfactual Model ☆66 · Updated 2 years ago
- ☆29 · Updated 8 months ago
- Github repository for "FELM: Benchmarking Factuality Evaluation of Large Language Models" (NeurIPS 2023) ☆57 · Updated last year
- Steering Knowledge Selection Behaviours in LLMs via SAE-Based Representation Engineering ☆39 · Updated last month
- ☆44 · Updated 4 months ago
- [ICML 2023] Code for our paper “Compositional Exemplars for In-context Learning”. ☆95 · Updated last year
- Methods and evaluation for aligning language models temporally ☆27 · Updated 10 months ago
- ☆84 · Updated 2 years ago
- LoFiT: Localized Fine-tuning on LLM Representations ☆30 · Updated this week
- Grade-School Math with Irrelevant Context (GSM-IC) benchmark is an arithmetic reasoning dataset built upon GSM8K, by adding irrelevant se… ☆58 · Updated last year
- ☆44 · Updated last year
- Restore safety in fine-tuned language models through task arithmetic ☆26 · Updated 9 months ago
- ☆24 · Updated 3 months ago
- ☆47 · Updated 9 months ago
- Repo for the paper "Large Language Models Struggle to Learn Long-Tail Knowledge" ☆74 · Updated last year
- ☆36 · Updated last year
- AbstainQA, ACL 2024 ☆25 · Updated 3 months ago
- Repo for paper: Examining LLMs' Uncertainty Expression Towards Questions Outside Parametric Knowledge ☆12 · Updated 10 months ago
- Min-K%++: Improved baseline for detecting pre-training data of LLMs https://arxiv.org/abs/2404.02936 ☆30 · Updated 7 months ago
- Code & Data for our Paper "Alleviating Hallucinations of Large Language Models through Induced Hallucinations" ☆62 · Updated 10 months ago
- Official repository for MATES: Model-Aware Data Selection for Efficient Pretraining with Data Influence Models [NeurIPS 2024] ☆56 · Updated 2 months ago
- A Mechanistic Understanding of Alignment Algorithms: A Case Study on DPO and Toxicity. ☆61 · Updated 2 months ago
- ☆29 · Updated 8 months ago
- [ICLR'24 Spotlight] "Adaptive Chameleon or Stubborn Sloth: Revealing the Behavior of Large Language Models in Knowledge Conflicts" ☆64 · Updated 9 months ago
- This is the official repository for "Parameter-Efficient Multi-task Tuning via Attentional Mixtures of Soft Prompts" (EMNLP 2022) ☆100 · Updated 2 years ago
- An open-source library for contamination detection in NLP datasets and Large Language Models (LLMs). ☆46 · Updated 5 months ago
- [ACL'24 Oral] Analysing The Impact of Sequence Composition on Language Model Pre-Training ☆18 · Updated 4 months ago
- ☆43 · Updated 5 months ago
- ☆21 · Updated 3 months ago
- ☆25 · Updated last year