prateeky2806 / ties-merging
☆192 · Updated last year
Alternatives and similar repositories for ties-merging
Users interested in ties-merging are comparing it to the repositories listed below.
- Code release for Dataless Knowledge Fusion by Merging Weights of Language Models (https://openreview.net/forum?id=FCnohuR6AnM) ☆91 · Updated 2 years ago
- ☆269 · Updated last year
- PASTA: Post-hoc Attention Steering for LLMs ☆123 · Updated 10 months ago
- Code accompanying the paper "Massive Activations in Large Language Models" ☆183 · Updated last year
- Code for In-context Vectors: Making In Context Learning More Effective and Controllable Through Latent Space Steering ☆190 · Updated 7 months ago
- Function Vectors in Large Language Models (ICLR 2024) ☆180 · Updated 5 months ago
- Code associated with Tuning Language Models by Proxy (Liu et al., 2024) ☆120 · Updated last year
- ☆97 · Updated last year
- LLM-Merging: Building LLMs Efficiently through Merging ☆203 · Updated last year
- AdaMerging: Adaptive Model Merging for Multi-Task Learning (ICLR 2024) ☆90 · Updated 11 months ago
- ☆75 · Updated 3 years ago
- AI Logging for Interpretability and Explainability 🔬 ☆128 · Updated last year
- A curated list of Model Merging methods ☆92 · Updated last year
- Test-time-training on nearest neighbors for large language models ☆46 · Updated last year
- Repo for Rho-1: Token-level Data Selection & Selective Pretraining of LLMs ☆434 · Updated last year
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks (EMNLP'24) ☆148 · Updated last year
- ☆192 · Updated 5 months ago
- ☆127 · Updated 6 months ago
- ☆30 · Updated 2 years ago
- [ACL'25 Oral] What Happened in LLMs Layers when Trained for Fast vs. Slow Thinking: A Gradient Perspective ☆74 · Updated 3 months ago
- Source code of "Task arithmetic in the tangent space: Improved editing of pre-trained models" ☆105 · Updated 2 years ago
- AnchorAttention: Improved attention for LLMs long-context training ☆212 · Updated 8 months ago
- Stanford NLP Python library for benchmarking the utility of LLM interpretability methods ☆134 · Updated 3 months ago
- [NeurIPS'23] Aging with GRACE: Lifelong Model Editing with Discrete Key-Value Adaptors ☆81 · Updated 9 months ago
- ☆228 · Updated last year
- [EMNLP 2023, Main Conference] Sparse Low-rank Adaptation of Pre-trained Language Models ☆83 · Updated last year
- [NeurIPS'24 Spotlight] Observational Scaling Laws ☆57 · Updated last year
- The official implementation of the paper "What Matters in Transformers? Not All Attention is Needed" ☆176 · Updated 6 months ago
- [NeurIPS 2024 Spotlight] EMR-Merging: Tuning-Free High-Performance Model Merging ☆69 · Updated 7 months ago
- Self-Alignment with Principle-Following Reward Models ☆165 · Updated 2 weeks ago