mlfoundations / task_vectors
Editing Models with Task Arithmetic
☆464 · Updated last year
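As a quick orientation, the repo's core idea is task arithmetic: a task vector is the element-wise difference between fine-tuned and pretrained weights, and editing a model means adding (or negating) scaled task vectors. Below is a minimal sketch of that idea assuming plain PyTorch state dicts; the function names and usage are illustrative, not the repo's actual API.

```python
import torch

def task_vector(pretrained_sd, finetuned_sd):
    """Task vector tau = finetuned weights minus pretrained weights, per parameter."""
    return {k: finetuned_sd[k] - pretrained_sd[k]
            for k in pretrained_sd
            if torch.is_floating_point(pretrained_sd[k])}

def apply_task_vector(pretrained_sd, tau, scaling=1.0):
    """Edit the pretrained model: theta_new = theta_pre + scaling * tau.
    A negative scaling "forgets" a task; summing several taus merges tasks."""
    return {k: v + scaling * tau[k] if k in tau else v
            for k, v in pretrained_sd.items()}

# Illustrative usage (model objects are placeholders):
# pre = pretrained_model.state_dict(); ft = finetuned_model.state_dict()
# tau = task_vector(pre, ft)
# pretrained_model.load_state_dict(apply_task_vector(pre, tau, scaling=0.5))
```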
Alternatives and similar repositories for task_vectors:
Users interested in task_vectors are comparing it to the libraries listed below.
- ☆173 · Updated last year
- Source code of "Task arithmetic in the tangent space: Improved editing of pre-trained models". ☆101 · Updated last year
- Code accompanying the paper "Massive Activations in Large Language Models" ☆154 · Updated last year
- Tools for understanding how transformer predictions are built layer-by-layer ☆485 · Updated 10 months ago
- Model Merging in LLMs, MLLMs, and Beyond: Methods, Theories, Applications and Opportunities. arXiv:2408.07666. ☆367 · Updated this week
- Repo for Rho-1: Token-level Data Selection & Selective Pretraining of LLMs. ☆406 · Updated last year
- Using sparse coding to find distributed representations used by neural networks. ☆230 · Updated last year
- ☆201 · Updated last year
- Code for T-Few from "Few-Shot Parameter-Efficient Fine-Tuning is Better and Cheaper than In-Context Learning" ☆449 · Updated last year
- ☆264 · Updated last year
- ☆66 · Updated 3 years ago
- Inference-Time Intervention: Eliciting Truthful Answers from a Language Model ☆517 · Updated 2 months ago
- ☆450 · Updated 8 months ago
- Code release for Dataless Knowledge Fusion by Merging Weights of Language Models (https://openreview.net/forum?id=FCnohuR6AnM) ☆88 · Updated last year
- LLM-Merging: Building LLMs Efficiently through Merging ☆193 · Updated 6 months ago
- ☆93 · Updated last year
- ViT Prisma is a mechanistic interpretability library for Vision Transformers (ViTs). ☆218 · Updated this week
- Function Vectors in Large Language Models (ICLR 2024) ☆156 · Updated last month
- Create feature-centric and prompt-centric visualizations for sparse autoencoders (like those from Anthropic's published research). ☆194 · Updated 4 months ago
- Scaling Data-Constrained Language Models ☆335 · Updated 6 months ago
- A framework for merging models solving different tasks with different initializations into one multi-task model without any additional tr… ☆298 · Updated last year
- RewardBench: the first evaluation tool for reward models. ☆553 · Updated last month
- ☆255 · Updated last year
- ☆121 · Updated last year
- ☆174 · Updated last year
- A fast, effective data attribution method for neural networks in PyTorch ☆204 · Updated 5 months ago
- Implementation of ST-MoE, the latest incarnation of MoE after years of research at Brain, in PyTorch ☆327 · Updated 10 months ago
- Simple Parameter-efficient Fine-tuning for Transformer-based Masked Language-models ☆140 · Updated 2 years ago
- Offsite-Tuning: Transfer Learning without Full Model ☆372 · Updated last year
- A curated reading list of research in Adaptive Computation, Inference-Time Computation & Mixture of Experts (MoE). ☆143 · Updated 3 months ago