huawei-lin / RapidInLinks
RapidIn: Scalable Influence Estimation for Large Language Models (LLMs). The implementation for the paper "Token-wise Influential Training Data Retrieval for Large Language Models" (accepted at ACL 2024).
☆20 · Updated 5 months ago
Alternatives and similar repositories for RapidIn
- Official repository for MATES: Model-Aware Data Selection for Efficient Pretraining with Data Influence Models [NeurIPS 2024] ☆74 · Updated 11 months ago
- Semi-Parametric Editing with a Retrieval-Augmented Counterfactual Model ☆70 · Updated 2 years ago
- LoFiT: Localized Fine-tuning on LLM Representations ☆41 · Updated 9 months ago
- AI Logging for Interpretability and Explainability 🔬 ☆129 · Updated last year
- Repo accompanying our paper "Do Llamas Work in English? On the Latent Language of Multilingual Transformers" ☆80 · Updated last year
- [NAACL'25 Oral] Steering Knowledge Selection Behaviours in LLMs via SAE-Based Representation Engineering ☆64 · Updated 10 months ago
- [NeurIPS'23] Aging with GRACE: Lifelong Model Editing with Discrete Key-Value Adaptors ☆81 · Updated 9 months ago
- Official code repository for the LM-Steer paper "Word Embeddings Are Steers for Language Models" (ACL 2024 Outstanding Paper Award) ☆124 · Updated 3 months ago
- Evaluating the Ripple Effects of Knowledge Editing in Language Models ☆56 · Updated last year
- [EMNLP 2023] MQuAKE: Assessing Knowledge Editing in Language Models via Multi-Hop Questions ☆116 · Updated last year
- Confidence Regulation Neurons in Language Models (NeurIPS 2024) ☆12 · Updated 8 months ago
- Code associated with "Tuning Language Models by Proxy" (Liu et al., 2024) ☆121 · Updated last year
- An open-source library for contamination detection in NLP datasets and Large Language Models (LLMs) ☆56 · Updated last year
- The Paper List on Data Contamination for Large Language Models Evaluation ☆100 · Updated last month
- Stanford NLP Python library for benchmarking the utility of LLM interpretability methods ☆136 · Updated 3 months ago
- Lightweight tool to identify Data Contamination in LLMs evaluation ☆52 · Updated last year
- A Survey of Hallucination in Large Foundation Models ☆54 · Updated last year
- [NeurIPS 2023] GitHub repository for "Composing Parameter-Efficient Modules with Arithmetic Operations" ☆61 · Updated last year
- Official repository for the ICLR 2024 Spotlight paper "Large Language Models Are Not Robust Multiple Choice Selectors" ☆41 · Updated 4 months ago
- AbstainQA, ACL 2024 ☆28 · Updated last year
- A Survey on Data Selection for Language Models ☆250 · Updated 5 months ago
- [NeurIPS 2023 D&B Track] Code and data for the paper "Revisiting Out-of-distribution Robustness in NLP: Benchmarks, Analysis, and LLMs Evalua…" ☆35 · Updated 2 years ago
- [ACL'24 Oral] Analysing The Impact of Sequence Composition on Language Model Pre-Training ☆22 · Updated last year
- GitHub repository for "FELM: Benchmarking Factuality Evaluation of Large Language Models" (NeurIPS 2023) ☆59 · Updated last year