mlfoundations/task_vectors
Editing Models with Task Arithmetic
☆444 · Updated last year
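For context, task arithmetic edits a pre-trained model by adding or subtracting "task vectors": the element-wise differences between fine-tuned and pre-trained weights. A minimal PyTorch sketch of the idea (hypothetical helper names and scaling value, not the repo's actual API):

```python
import torch

# Hypothetical helpers sketching task arithmetic over state dicts
# (mappings from parameter names to torch tensors).

def task_vector(pretrained_sd, finetuned_sd):
    """tau = theta_ft - theta_pre for every shared parameter."""
    return {k: finetuned_sd[k] - pretrained_sd[k]
            for k in pretrained_sd if k in finetuned_sd}

def apply_task_vectors(pretrained_sd, taus, scaling=0.4):
    """theta_new = theta_pre + lambda * sum_i tau_i."""
    edited = {k: v.clone() for k, v in pretrained_sd.items()}
    for tau in taus:
        for k, delta in tau.items():
            edited[k] += scaling * delta
    return edited
```

Adding a scaled task vector steers the model toward that task, subtracting it suppresses the behavior, and sums of task vectors compose multiple edits.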
Alternatives and similar repositories for task_vectors:
Users interested in task_vectors are comparing it to the repositories listed below
- Tools for understanding how transformer predictions are built layer-by-layer ☆459 · Updated 7 months ago
- Source code of "Task arithmetic in the tangent space: Improved editing of pre-trained models". ☆92 · Updated last year
- ☆159 · Updated 11 months ago
- ☆184 · Updated 10 months ago
- Implementation of ST-MoE, the latest incarnation of MoE after years of research at Brain, in PyTorch ☆297 · Updated 7 months ago
- Inference-Time Intervention: Eliciting Truthful Answers from a Language Model ☆493 · Updated 3 months ago
- Using sparse coding to find distributed representations used by neural networks. ☆207 · Updated last year
- A fast, effective data attribution method for neural networks in PyTorch ☆187 · Updated last month
- Repo for Rho-1: Token-level Data Selection & Selective Pretraining of LLMs. ☆382 · Updated 9 months ago
- Sparse autoencoders ☆407 · Updated this week
- ☆404 · Updated 5 months ago
- Code accompanying the paper "Massive Activations in Large Language Models" ☆133 · Updated 10 months ago
- Model Merging in LLMs, MLLMs, and Beyond: Methods, Theories, Applications and Opportunities. arXiv:2408.07666. ☆286 · Updated this week
- ☆210 · Updated 8 months ago
- Create feature-centric and prompt-centric visualizations for sparse autoencoders (like those from Anthropic's published research). ☆176 · Updated last month
- ☆63 · Updated 2 years ago
- ☆82 · Updated 11 months ago
- Function Vectors in Large Language Models (ICLR 2024) ☆131 · Updated 3 months ago
- Mass-editing thousands of facts into a transformer memory (ICLR 2023) ☆458 · Updated 11 months ago
- An Extensible Continual Learning Framework Focused on Language Models (LMs) ☆263 · Updated 11 months ago
- Code for T-Few from "Few-Shot Parameter-Efficient Fine-Tuning is Better and Cheaper than In-Context Learning" ☆438 · Updated last year
- ☆201 · Updated 3 months ago
- Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time (see the weight-averaging sketch after this list) ☆436 · Updated 6 months ago
- Keeping language models honest by directly eliciting knowledge encoded in their activations. ☆192 · Updated this week
- Steering Llama 2 with Contrastive Activation Addition ☆113 · Updated 7 months ago
- LLM-Merging: Building LLMs Efficiently through Merging ☆184 · Updated 3 months ago
- AI Logging for Interpretability and Explainability 🔬 ☆97 · Updated 7 months ago
- ☆258 · Updated 10 months ago
- [COLM 2024] LoraHub: Efficient Cross-Task Generalization via Dynamic LoRA Composition ☆609 · Updated 5 months ago
- This repository collects all relevant resources about interpretability in LLMs ☆305 · Updated 2 months ago
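The model-soups entry above describes a simple recipe worth spelling out: average the weights of several checkpoints fine-tuned from the same initialization. A minimal sketch of a uniform soup (hypothetical helper, not the repo's actual API):

```python
import torch

def uniform_soup(state_dicts):
    """Element-wise mean of parameters across fine-tuned checkpoints
    that share one architecture and pre-trained initialization."""
    soup = {k: v.clone().float() for k, v in state_dicts[0].items()}
    for sd in state_dicts[1:]:
        for k in soup:
            soup[k] += sd[k].float()
    return {k: v / len(state_dicts) for k, v in soup.items()}
```

Because the soup is a single set of weights, inference costs the same as one model; the paper's "greedy soup" variant adds a checkpoint only when it improves held-out accuracy.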