hartvigsen-group / composable-interventions
☆29 · Updated 9 months ago
Alternatives and similar repositories for composable-interventions
Users interested in composable-interventions are comparing it to the libraries listed below.
- Lightweight Adapting for Black-Box Large Language Models ☆24 · Updated last year
- Exploration of automated dataset selection approaches at large scales. ☆50 · Updated 9 months ago
- ☆33 · Updated 11 months ago
- PaCE: Parsimonious Concept Engineering for Large Language Models (NeurIPS 2024) ☆41 · Updated last year
- Confidence Regulation Neurons in Language Models (NeurIPS 2024) ☆15 · Updated 10 months ago
- ☆51 · Updated last year
- ☆16 · Updated last year
- ☆52 · Updated 8 months ago
- Code for "Seeking Neural Nuggets: Knowledge Transfer in Large Language Models from a Parametric Perspective" ☆33 · Updated last year
- An official implementation of "Catastrophic Failure of LLM Unlearning via Quantization" (ICLR 2025) ☆35 · Updated 9 months ago
- ☆20 · Updated last month
- Stanford NLP Python library for benchmarking the utility of LLM interpretability methods ☆151 · Updated 5 months ago
- [ICLR 2025] Unintentional Unalignment: Likelihood Displacement in Direct Preference Optimization ☆31 · Updated 10 months ago
- ☆40 · Updated last year
- Is In-Context Learning Sufficient for Instruction Following in LLMs? [ICLR 2025] ☆31 · Updated 10 months ago
- ☆41 · Updated 2 years ago
- Test-time-training on nearest neighbors for large language models ☆48 · Updated last year
- LoFiT: Localized Fine-tuning on LLM Representations ☆45 · Updated 10 months ago
- Interpreting the latent space representations of attention head outputs for LLMs ☆34 · Updated last year
- Codebase for Inference-Time Policy Adapters ☆24 · Updated 2 years ago
- ICML 2024 - Official Repository for EXO: Towards Efficient Exact Optimization of Language Model Alignment ☆57 · Updated last year
- Uncertainty quantification for in-context learning of large language models ☆16 · Updated last year
- [ICLR 2024] Unveiling the Pitfalls of Knowledge Editing for Large Language Models ☆22 · Updated last year
- Restore safety in fine-tuned language models through task arithmetic ☆29 · Updated last year
- A comprehensive and rigorous evaluation framework for LLM calibration ☆13 · Updated last year
- ☆13 · Updated 5 months ago
- ☆29 · Updated last year
- Optimize Any User-defined Compound AI Systems ☆63 · Updated 3 months ago
- [ACL 2024] Shifting Attention to Relevance: Towards the Predictive Uncertainty Quantification of Free-Form Large Language Models ☆59 · Updated last year
- ☆21 · Updated 3 months ago