mit-han-lab / offsite-tuning
Offsite-Tuning: Transfer Learning without Full Model
☆382, updated last year
Alternatives and similar repositories for offsite-tuning
Users interested in offsite-tuning are comparing it to the libraries listed below.
- [COLM 2024] LoraHub: Efficient Cross-Task Generalization via Dynamic LoRA Composition (☆658, updated last year)
- Code release for "Dataless Knowledge Fusion by Merging Weights of Language Models" (https://openreview.net/forum?id=FCnohuR6AnM) (☆92, updated 2 years ago)
- Official code for ReLoRA, from the paper "Stack More Layers Differently: High-Rank Training Through Low-Rank Updates" (☆468, updated last year)
- Code accompanying the paper "Massive Activations in Large Language Models" (☆186, updated last year)
- Shepherd: a foundational framework enabling federated instruction tuning for large language models (☆247, updated 2 years ago)
- AdaLoRA: Adaptive Budget Allocation for Parameter-Efficient Fine-Tuning (ICLR 2023) (☆361, updated 2 years ago)
- Simple Parameter-Efficient Fine-Tuning for Transformer-based Masked Language Models (☆142, updated 3 years ago)
- Collection of tools and papers related to Adapters / Parameter-Efficient Transfer Learning / Fine-Tuning (☆200, updated last year)
- Official code for our CVPR'22 paper "Vision Transformer Slimming: Multi-Dimension Searching in Continuous Optimization Space" (☆250, updated 2 months ago)
- Official PyTorch implementation of QA-LoRA (☆143, updated last year)
- DSIR: a large-scale data selection framework for language model training (☆266, updated last year)
- Editing Models with Task Arithmetic (☆511, updated last year)
- Code for "SemDeDup", a simple method for identifying and removing semantic duplicates from a dataset (data pairs which are semantically s…) (☆148, updated 2 years ago)
- Scaling Data-Constrained Language Models (☆341, updated 4 months ago)
- The official implementation of the paper "What Matters in Transformers? Not All Attention is Needed" (☆180, updated last week)
- A framework for merging models solving different tasks with different initializations into one multi-task model without any additional tr… (☆308, updated last year)
- OpenICL is an open-source framework to facilitate research, development, and prototyping of in-context learning (☆578, updated 2 years ago)
- Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time (☆497, updated last year); a minimal sketch of the averaging step follows this list
- AutoPEFT: Automatic Configuration Search for Parameter-Efficient Fine-Tuning (Zhou et al., TACL 2024) (☆50, updated last year)
- Unofficial implementation of the paper "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" (☆175, updated last year)
- Official repository for NEFTune: Noisy Embeddings Improve Instruction Finetuning (☆404, updated last year)
- A curated list of model merging methods (☆92, updated last year)
- A curated list of early-exit papers, benchmarks, and misc. (☆119, updated 2 years ago)
- A simple and effective LLM pruning approach (☆820, updated last year)
- Repo for Rho-1: Token-level Data Selection & Selective Pretraining of LLMs (☆445, updated last year)
- An extensible continual learning framework focused on language models (LMs) (☆290, updated last year)
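
Several of the entries above (model soups, task arithmetic, dataless knowledge fusion) revolve around combining the weights of fine-tuned models. As a point of reference, here is a minimal sketch of the uniform-averaging step behind the model-soups entry, not the repo's own implementation; it assumes PyTorch, checkpoints that share one architecture, and every path and name below is hypothetical.

```python
# Minimal "uniform soup" sketch: element-wise mean of the parameters of
# several fine-tuned checkpoints with identical architectures.
# All paths/names are hypothetical, for illustration only.
import torch

def uniform_soup(state_dicts):
    """Return a state_dict whose tensors are the element-wise mean
    of the corresponding tensors in `state_dicts`."""
    return {
        key: torch.stack([sd[key].float() for sd in state_dicts]).mean(dim=0)
        for key in state_dicts[0]
    }

# Hypothetical usage:
# paths = ["finetune_seed0.pt", "finetune_seed1.pt", "finetune_seed2.pt"]
# soup = uniform_soup([torch.load(p, map_location="cpu") for p in paths])
# model.load_state_dict(soup)
```

The published model-soups recipe also includes a "greedy soup" variant that only keeps a checkpoint in the average if it improves held-out accuracy; the uniform average above is the simplest baseline.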