mit-han-lab / offsite-tuning
Offsite-Tuning: Transfer Learning without Full Model
☆375 · Updated last year
Alternatives and similar repositories for offsite-tuning
Users interested in offsite-tuning are comparing it to the libraries listed below.
- Shepherd: A foundational framework enabling federated instruction tuning for large language models ☆232 · Updated last year
- [COLM 2024] LoraHub: Efficient Cross-Task Generalization via Dynamic LoRA Composition ☆637 · Updated 10 months ago
- Must-read Papers of Parameter-Efficient Tuning (Delta Tuning) Methods on Pre-trained Models ☆285 · Updated last year
- AdaLoRA: Adaptive Budget Allocation for Parameter-Efficient Fine-Tuning (ICLR 2023) ☆329 · Updated 2 years ago
- OpenICL is an open-source framework to facilitate research, development, and prototyping of in-context learning ☆561 · Updated last year
- ICML'2022: Black-Box Tuning for Language-Model-as-a-Service & EMNLP'2022: BBTv2: Towards a Gradient-Free Future with Large Language Model… ☆269 · Updated 2 years ago
- DSIR large-scale data selection framework for language model training ☆249 · Updated last year
- Simple Parameter-efficient Fine-tuning for Transformer-based Masked Language-models ☆142 · Updated 2 years ago
- Code for T-Few from "Few-Shot Parameter-Efficient Fine-Tuning is Better and Cheaper than In-Context Learning" ☆451 · Updated last year
- A simple and effective LLM pruning approach ☆756 · Updated 9 months ago
- [ICLR 2024] Sheared LLaMA: Accelerating Language Model Pre-training via Structured Pruning ☆612 · Updated last year
- Official code for our CVPR'22 paper “Vision Transformer Slimming: Multi-Dimension Searching in Continuous Optimization Space” ☆250 · Updated last year
- Official PyTorch implementation of QA-LoRA ☆137 · Updated last year
- Implementation of paper "Towards a Unified View of Parameter-Efficient Transfer Learning" (ICLR 2022) ☆530 · Updated 3 years ago
- Scaling Data-Constrained Language Models ☆334 · Updated 8 months ago
- [NeurIPS 2022] A Fast Post-Training Pruning Framework for Transformers ☆190 · Updated 2 years ago
- [ACL 2022] Structured Pruning Learns Compact and Accurate Models (https://arxiv.org/abs/2204.00408) ☆195 · Updated 2 years ago
- Collection of Tools and Papers related to Adapters / Parameter-Efficient Transfer Learning / Fine-Tuning ☆193 · Updated last year
- A framework for merging models solving different tasks with different initializations into one multi-task model without any additional tr… ☆300 · Updated last year
- Editing Models with Task Arithmetic ☆479 · Updated last year
- Official repository of NEFTune: Noisy Embeddings Improve Instruction Finetuning ☆396 · Updated last year
- This repository contains code to quantitatively evaluate instruction-tuned models such as Alpaca and Flan-T5 on held-out tasks ☆547 · Updated last year
- PyTorch codes for "LST: Ladder Side-Tuning for Parameter and Memory Efficient Transfer Learning" ☆238 · Updated 2 years ago
- Code for the paper "Rethinking Benchmark and Contamination for Language Models with Rephrased Samples" ☆302 · Updated last year
- An Extensible Continual Learning Framework Focused on Language Models (LMs) ☆280 · Updated last year
- This repository provides an original implementation of Detecting Pretraining Data from Large Language Models by *Weijia Shi, *Anirudh Aji… ☆223 · Updated last year
- A curated list of Model Merging methods ☆92 · Updated 8 months ago