naver-ai / model-stock
Model Stock: All we need is just a few fine-tuned models
☆127 · Updated 4 months ago
Alternatives and similar repositories for model-stock
Users interested in model-stock are comparing it to the libraries listed below.
- [ACL 2024 Findings & ICLR 2024 WS] An Evaluator VLM that is open-source, offers reproducible evaluation, and is inexpensive to use. Specific… ☆78 · Updated last year
- [Technical Report] Official PyTorch implementation code for realizing the technical part of Phantom of Latent representing equipped with … ☆61 · Updated last year
- Patching open-vocabulary models by interpolating weights ☆91 · Updated 2 years ago
- ☆200 · Updated last year
- Code for T-MARS data filtering ☆35 · Updated 2 years ago
- ☆41 · Updated last year
- Official implementation of Hierarchical Context Merging: Better Long Context Understanding for Pre-trained LLMs (ICLR 2024). ☆43 · Updated last year
- Official code for the ICML 2024 paper "The Entropy Enigma: Success and Failure of Entropy Minimization" ☆55 · Updated last year
- Matryoshka Multimodal Models ☆120 · Updated 10 months ago
- Official code for "TOAST: Transfer Learning via Attention Steering" ☆188 · Updated 2 years ago
- [ICLR 2025] Monet: Mixture of Monosemantic Experts for Transformers ☆74 · Updated 5 months ago
- ☆29 · Updated 3 years ago
- Language Quantized AutoEncoders ☆111 · Updated 2 years ago
- Official implementation of MAIA, A Multimodal Automated Interpretability Agent ☆99 · Updated last month
- Code and benchmark for the paper: "A Practitioner's Guide to Continual Multimodal Pretraining" [NeurIPS'24] ☆60 · Updated last year
- Model Merging with SVD to Tie the KnOTS [ICLR 2025] ☆80 · Updated 8 months ago
- [EMNLP 2024] Official PyTorch implementation code for realizing the technical part of Traversal of Layers (TroL) presenting new propagati… ☆99 · Updated last year
- Residual Prompt Tuning: a method for faster and better prompt tuning. ☆56 · Updated 2 years ago
- One Initialization to Rule them All: Fine-tuning via Explained Variance Adaptation ☆45 · Updated last month
- Official implementation of "DoRA: Weight-Decomposed Low-Rank Adaptation" ☆124 · Updated last year
- PyTorch codes for the paper "An Empirical Study of Multimodal Model Merging" ☆37 · Updated 2 years ago
- Official repository of "LiNeS: Post-training Layer Scaling Prevents Forgetting and Enhances Model Merging" ☆31 · Updated last year
- ☆187 · Updated last year
- ☆70 · Updated last year
- Language models scale reliably with over-training and on downstream tasks ☆100 · Updated last year
- https://arxiv.org/abs/2209.15162 ☆53 · Updated 2 years ago
- Source code of "Calibrating Large Language Models Using Their Generations Only", ACL 2024 ☆22 · Updated last year
- ☆33 · Updated 11 months ago
- [EMNLP Findings 2024 & ACL 2024 NLRSE Oral] Enhancing Mathematical Reasonin… ☆51 · Updated last year
- Latest Weight Averaging (NeurIPS HITY 2022) ☆32 · Updated 2 years ago
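
Several of the repositories above (model-stock itself, the weight-interpolation patching repo, KnOTS, LiNeS, and Latest Weight Averaging) revolve around merging fine-tuned checkpoints in weight space. As a rough illustration of that shared idea only — not the specific Model Stock algorithm or any repo's actual API — here is a minimal PyTorch sketch of uniform weight averaging; the checkpoint paths and `make_model` factory are hypothetical placeholders.

```python
# Minimal sketch: uniform weight averaging of a few fine-tuned checkpoints
# that share one architecture. Illustrative only; this is NOT the Model Stock
# procedure, just the plain-average baseline that merging methods build on.
import torch


def average_state_dicts(state_dicts):
    """Return the element-wise mean of several compatible state dicts."""
    merged = {}
    for key, ref in state_dicts[0].items():
        stacked = torch.stack([sd[key].float() for sd in state_dicts])
        merged[key] = stacked.mean(dim=0).to(ref.dtype)  # keep original dtype
    return merged


# Hypothetical usage (paths and make_model are placeholders):
# paths = ["ft_seed0.pt", "ft_seed1.pt", "ft_seed2.pt"]
# state_dicts = [torch.load(p, map_location="cpu") for p in paths]
# model = make_model()
# model.load_state_dict(average_state_dicts(state_dicts))
```

Judging by their descriptions, methods such as KnOTS (SVD-based merging) and LiNeS (post-training layer-wise scaling) refine this uniform average with more structured weighting, so treat the sketch only as the common starting point those projects improve on.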