mit-han-lab / offsite-tuning
Offsite-Tuning: Transfer Learning without Full Model
☆375 · Updated last year
Alternatives and similar repositories for offsite-tuning
Users interested in offsite-tuning are comparing it to the libraries listed below.
- [COLM 2024] LoraHub: Efficient Cross-Task Generalization via Dynamic LoRA Composition ☆645 · Updated last year (see the composition sketch after this list)
- Official code for our CVPR'22 paper “Vision Transformer Slimming: Multi-Dimension Searching in Continuous Optimization Space” ☆250 · Updated last year
- Shepherd: A foundational framework enabling federated instruction tuning for large language models ☆238 · Updated 2 years ago
- ☆185 · Updated last year
- A framework for merging models solving different tasks with different initializations into one multi-task model without any additional training ☆304 · Updated last year
- AdaLoRA: Adaptive Budget Allocation for Parameter-Efficient Fine-Tuning (ICLR 2023). ☆338 · Updated 2 years ago
- Collection of Tools and Papers related to Adapters / Parameter-Efficient Transfer Learning / Fine-Tuning ☆197 · Updated last year
- OpenICL is an open-source framework to facilitate research, development, and prototyping of in-context learning. ☆569 · Updated last year
- ☆269 · Updated last year
- A simple and effective LLM pruning approach. ☆782 · Updated last year (see the pruning-score sketch after this list)
- A curated list of Model Merging methods. ☆92 · Updated 10 months ago (see the weight-averaging sketch after this list)
- An Extensible Continual Learning Framework Focused on Language Models (LMs) ☆283 · Updated last year
- Editing Models with Task Arithmetic ☆490 · Updated last year (see the task-vector sketch after this list)
- This repository provides an original implementation of Detecting Pretraining Data from Large Language Models by *Weijia Shi, *Anirudh Ajith, et al. ☆229 · Updated last year
- Scaling Data-Constrained Language Models ☆338 · Updated last month
- Official implementation of TransNormerLLM: A Faster and Better LLM ☆247 · Updated last year
- Simple Parameter-efficient Fine-tuning for Transformer-based Masked Language-models ☆142 · Updated 2 years ago
- Survey Paper List - Efficient LLM and Foundation Models ☆253 · Updated 10 months ago
- DSIR large-scale data selection framework for language model training ☆258 · Updated last year
- Code accompanying the paper "Massive Activations in Large Language Models" ☆174 · Updated last year
- [ICLR 2023] "Learning to Grow Pretrained Models for Efficient Transformer Training" by Peihao Wang, Rameswar Panda, Lucas Torroba Hennigen, et al. ☆92 · Updated last year
- Official PyTorch implementation of QA-LoRA ☆138 · Updated last year
- ☆223 · Updated last year
- ICML'2022: Black-Box Tuning for Language-Model-as-a-Service & EMNLP'2022: BBTv2: Towards a Gradient-Free Future with Large Language Models ☆270 · Updated 2 years ago
- Official code for ReLoRA from the paper Stack More Layers Differently: High-Rank Training Through Low-Rank Updates ☆460 · Updated last year (see the merge-and-reset sketch after this list)
- [ICML 2023] UPop: Unified and Progressive Pruning for Compressing Vision-Language Transformers. ☆105 · Updated 7 months ago
- Code for "SemDeDup", a simple method for identifying and removing semantic duplicates from a dataset (data pairs which are semantically s…☆139Updated last year
- This repository contains code to quantitatively evaluate instruction-tuned models such as Alpaca and Flan-T5 on held-out tasks ☆547 · Updated last year
- Must-read Papers of Parameter-Efficient Tuning (Delta Tuning) Methods on Pre-trained Models. ☆285 · Updated 2 years ago
- Official implementation of "DoRA: Weight-Decomposed Low-Rank Adaptation"☆124Updated last year