huawei-lin / RapidInLinks
RapidIn: Scalable Influence Estimation for Large Language Models (LLMs). The official implementation of the paper "Token-wise Influential Training Data Retrieval for Large Language Models" (accepted at ACL 2024).
☆20 · Updated 6 months ago
Alternatives and similar repositories for RapidIn
Users interested in RapidIn are comparing it to the libraries listed below.
- Official repository for MATES: Model-Aware Data Selection for Efficient Pretraining with Data Influence Models [NeurIPS 2024] ☆76 · Updated last year
- Code associated with Tuning Language Models by Proxy (Liu et al., 2024) ☆123 · Updated last year
- [NAACL 2024 Outstanding Paper] Source code for the NAACL 2024 paper entitled "R-Tuning: Instructing Large Language Models to Say 'I Don't… ☆125 · Updated last year
- Official Code Repository for the LM-Steer Paper: "Word Embeddings Are Steers for Language Models" (ACL 2024 Outstanding Paper Award) ☆129 · Updated 4 months ago
- An open-source library for contamination detection in NLP datasets and Large Language Models (LLMs). ☆57 · Updated last year
- ☆54 · Updated last year
- ☆29 · Updated last year
- LoFiT: Localized Fine-tuning on LLM Representations ☆45 · Updated 10 months ago
- [NAACL'25 Oral] Steering Knowledge Selection Behaviours in LLMs via SAE-Based Representation Engineering ☆67 · Updated last year
- [EMNLP 2023] MQuAKE: Assessing Knowledge Editing in Language Models via Multi-Hop Questions ☆119 · Updated last year
- Semi-Parametric Editing with a Retrieval-Augmented Counterfactual Model ☆69 · Updated 3 years ago
- ☆36 · Updated last year
- [ACL 2024] Learning to Edit: Aligning LLMs with Knowledge Editing ☆36 · Updated last year
- Official repository for the ICLR 2024 Spotlight paper "Large Language Models Are Not Robust Multiple Choice Selectors" ☆42 · Updated 6 months ago
- AbstainQA, ACL 2024 ☆28 · Updated last year
- Evaluating the Ripple Effects of Knowledge Editing in Language Models ☆55 · Updated last year
- A Survey of Hallucination in Large Foundation Models ☆55 · Updated last year
- This repository provides an original implementation of Detecting Pretraining Data from Large Language Models by *Weijia Shi, *Anirudh Aji… ☆234 · Updated 2 years ago
- ☆41 · Updated 2 years ago
- ☆57 · Updated 2 years ago
- Code for the ACL 2025 publication "Fine-Tuning on Diverse Reasoning Chains Drives Within-Inference CoT Refinement in LLMs" ☆33 · Updated 5 months ago
- Repo accompanying the paper "Do Llamas Work in English? On the Latent Language of Multilingual Transformers" ☆80 · Updated last year
- ☆101 · Updated 2 years ago
- [ICLR 2024] Evaluating Large Language Models at Evaluating Instruction Following ☆134 · Updated last year
- Official repository for the ACL 2025 paper "Model Extrapolation Expedites Alignment" ☆76 · Updated 6 months ago
- [ICLR'24 Spotlight] "Adaptive Chameleon or Stubborn Sloth: Revealing the Behavior of Large Language Models in Knowledge Conflicts" ☆77 · Updated last year
- Lightweight tool to identify data contamination in LLM evaluation ☆52 · Updated last year
- ☆21 · Updated last year
- In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation (ICML 2024) ☆62 · Updated last year
- ☆17 · Updated last year