prateeky2806 / ties-merging
☆187 · Updated last year
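For context, ties-merging implements TIES-Merging (Yadav et al., NeurIPS 2023), which builds a single model from several fine-tuned checkpoints by trimming each task vector, electing a per-parameter sign, and averaging only the entries that agree with that sign. The snippet below is a minimal NumPy sketch of that recipe on flattened weight vectors, not the repository's actual API; the function name `ties_merge` and the `density`/`lam` parameters are placeholders chosen here for illustration.

```python
import numpy as np

def ties_merge(base, finetuned, density=0.2, lam=1.0):
    """Hypothetical sketch of a TIES-style merge of fine-tuned checkpoints.

    base:      1-D array of pretrained weights
    finetuned: list of 1-D arrays with the same shape as `base`
    density:   fraction of largest-magnitude entries kept per task vector
    lam:       scaling applied to the merged task vector
    """
    task_vectors = [w - base for w in finetuned]

    # 1) Trim: zero out all but the top-`density` entries (by magnitude) of each task vector.
    trimmed = []
    for tv in task_vectors:
        k = max(1, int(density * tv.size))
        thresh = np.sort(np.abs(tv))[-k]
        trimmed.append(np.where(np.abs(tv) >= thresh, tv, 0.0))
    stacked = np.stack(trimmed)                      # shape: (num_models, num_params)

    # 2) Elect sign: per parameter, keep the sign carrying the larger total magnitude.
    elected = np.sign(stacked.sum(axis=0))
    elected[elected == 0] = 1.0

    # 3) Disjoint merge: average only the entries whose sign agrees with the elected sign.
    agree = (np.sign(stacked) == elected) & (stacked != 0)
    counts = np.maximum(agree.sum(axis=0), 1)
    merged_tv = (stacked * agree).sum(axis=0) / counts

    return base + lam * merged_tv


if __name__ == "__main__":
    # Toy usage: merge three synthetic "fine-tuned" weight vectors around a shared base.
    rng = np.random.default_rng(0)
    base = rng.normal(size=1000)
    models = [base + rng.normal(scale=0.1, size=1000) for _ in range(3)]
    merged = ties_merge(base, models, density=0.2)
    print(merged.shape)  # (1000,)
```

Many of the repositories listed below (e.g. AdaMerging, task arithmetic in the tangent space, Dataless Knowledge Fusion) vary exactly these three choices: how task vectors are sparsified, how sign conflicts are resolved, and how the surviving entries are combined.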
Alternatives and similar repositories for ties-merging
Users who are interested in ties-merging are comparing it to the repositories listed below.
- Code release for Dataless Knowledge Fusion by Merging Weights of Language Models (https://openreview.net/forum?id=FCnohuR6AnM) · ☆90 · Updated 2 years ago
- Code accompanying the paper "Massive Activations in Large Language Models" · ☆176 · Updated last year
- Function Vectors in Large Language Models (ICLR 2024) · ☆177 · Updated 4 months ago
- ☆73 · Updated 3 years ago
- AI Logging for Interpretability and Explainability🔬 · ☆125 · Updated last year
- ☆269 · Updated last year
- ☆35 · Updated last year
- PASTA: Post-hoc Attention Steering for LLMs · ☆122 · Updated 9 months ago
- AdaMerging: Adaptive Model Merging for Multi-Task Learning (ICLR 2024) · ☆88 · Updated 9 months ago
- LLM-Merging: Building LLMs Efficiently through Merging · ☆203 · Updated 11 months ago
- ☆96 · Updated last year
- ☆30 · Updated last year
- Stanford NLP Python library for benchmarking the utility of LLM interpretability methods · ☆124 · Updated 2 months ago
- Code for In-context Vectors: Making In Context Learning More Effective and Controllable Through Latent Space Steering · ☆181 · Updated 6 months ago
- [NeurIPS 2024 Main Track] Code for the paper titled "Instruction Tuning With Loss Over Instructions" · ☆38 · Updated last year
- Code associated with Tuning Language Models by Proxy (Liu et al., 2024) · ☆114 · Updated last year
- Code and Data for "Long-context LLMs Struggle with Long In-context Learning" [TMLR 2025] · ☆107 · Updated 6 months ago
- Test-time training on nearest neighbors for large language models · ☆45 · Updated last year
- ☆50 · Updated last year
- Official repository of "Localizing Task Information for Improved Model Merging and Compression" [ICML 2024] · ☆47 · Updated 10 months ago
- ☆120 · Updated 5 months ago
- ☆187 · Updated 4 months ago
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks (EMNLP'24) · ☆147 · Updated 11 months ago
- The official implementation of the paper "What Matters in Transformers? Not All Attention is Needed" · ☆174 · Updated 4 months ago
- [ACL'25 Oral] What Happened in LLMs Layers when Trained for Fast vs. Slow Thinking: A Gradient Perspective · ☆71 · Updated 2 months ago
- Repo for Rho-1: Token-level Data Selection & Selective Pretraining of LLMs · ☆431 · Updated last year
- AnchorAttention: Improved attention for LLM long-context training · ☆212 · Updated 7 months ago
- [NeurIPS'23] Aging with GRACE: Lifelong Model Editing with Discrete Key-Value Adaptors · ☆79 · Updated 8 months ago
- Source code of "Task arithmetic in the tangent space: Improved editing of pre-trained models" · ☆103 · Updated 2 years ago
- [NeurIPS 2024] Knowledge Circuits in Pretrained Transformers · ☆153 · Updated 6 months ago