mit-han-lab / offsite-tuning
Offsite-Tuning: Transfer Learning without Full Model
☆380 · Updated last year
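The headline repo implements Offsite-Tuning, in which the model owner shares only a pair of trainable adapter layers plus a compressed, frozen "emulator" of the network's middle layers, and the data owner fine-tunes the adapters locally without ever seeing the full model. Below is a minimal sketch of that split, not the repo's actual API: the `split_for_offsite_tuning` helper, the layer counts, and the uniform layer-drop emulator are invented for illustration (the paper distills the emulator rather than simply dropping layers).

```python
# Sketch of the offsite-tuning split: trainable top/bottom adapters plus a
# frozen, compressed emulator of the middle layers. Hypothetical helper;
# the real repo distills the emulator instead of uniform layer dropping.
import copy
import torch
import torch.nn as nn

def split_for_offsite_tuning(layers: nn.ModuleList, n_adapter: int = 2,
                             keep_every: int = 2):
    bottom = layers[:n_adapter]        # trainable adapter (updated by data owner)
    top = layers[-n_adapter:]          # trainable adapter
    middle = layers[n_adapter:-n_adapter]
    # Emulator: a stand-in for the middle layers; here we just keep every
    # k-th layer and freeze it so it never receives gradient updates.
    emulator = copy.deepcopy(middle[::keep_every])
    for p in emulator.parameters():
        p.requires_grad_(False)
    return bottom, emulator, top

# Toy usage with simple blocks standing in for transformer layers:
blocks = nn.ModuleList([nn.Sequential(nn.Linear(64, 64), nn.GELU())
                        for _ in range(12)])
bottom, emulator, top = split_for_offsite_tuning(blocks)

h = torch.randn(4, 64)
for blk in list(bottom) + list(emulator) + list(top):
    h = blk(h)
loss = h.pow(2).mean()
loss.backward()  # gradients reach only the adapter layers, not the emulator
```

In the actual protocol, only `bottom`, `emulator`, and `top` leave the model owner; after local fine-tuning, the data owner returns the adapters, which are plugged back into the full model.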
Alternatives and similar repositories for offsite-tuning
Users interested in offsite-tuning are comparing it to the libraries listed below.
- ☆196 · Updated last year
- [COLM 2024] LoraHub: Efficient Cross-Task Generalization via Dynamic LoRA Composition ☆656 · Updated last year
- Shepherd: A foundational framework enabling federated instruction tuning for large language models ☆247 · Updated 2 years ago
- A simple and effective LLM pruning approach. ☆811 · Updated last year
- AdaLoRA: Adaptive Budget Allocation for Parameter-Efficient Fine-Tuning (ICLR 2023). ☆356 · Updated 2 years ago
- Official code for our CVPR'22 paper “Vision Transformer Slimming: Multi-Dimension Searching in Continuous Optimization Space” ☆250 · Updated 2 months ago
- A curated list of Model Merging methods. ☆92 · Updated last year
- ☆233 · Updated last year
- A framework for merging models solving different tasks with different initializations into one multi-task model without any additional training ☆308 · Updated last year
- OpenICL is an open-source framework to facilitate research, development, and prototyping of in-context learning. ☆575 · Updated 2 years ago
- Official implementation of TransNormerLLM: A Faster and Better LLM ☆247 · Updated last year
- DSIR large-scale data selection framework for language model training ☆265 · Updated last year
- [NeurIPS 2022] A Fast Post-Training Pruning Framework for Transformers ☆192 · Updated 2 years ago
- Official PyTorch implementation of QA-LoRA ☆141 · Updated last year
- Code accompanying the paper "Massive Activations in Large Language Models" ☆184 · Updated last year
- Editing Models with Task Arithmetic ☆508 · Updated last year
- Official code for ReLoRA from the paper "Stack More Layers Differently: High-Rank Training Through Low-Rank Updates" ☆466 · Updated last year
- ☆271 · Updated 2 years ago
- Code release for "Dataless Knowledge Fusion by Merging Weights of Language Models" (https://openreview.net/forum?id=FCnohuR6AnM) ☆90 · Updated 2 years ago
- Collection of Tools and Papers related to Adapters / Parameter-Efficient Transfer Learning / Fine-Tuning ☆199 · Updated last year
- Scaling Data-Constrained Language Models ☆342 · Updated 4 months ago
- Survey Paper List - Efficient LLM and Foundation Models ☆258 · Updated last year
- A curated list of Early Exiting papers, benchmarks, and misc. ☆119 · Updated 2 years ago
- ☆140 · Updated last year
- ☆43 · Updated last year
- [ICLR 2023] "Learning to Grow Pretrained Models for Efficient Transformer Training" by Peihao Wang, Rameswar Panda, Lucas Torroba Hennigen, et al. ☆92 · Updated last year
- This PyTorch package implements MoEBERT: from BERT to Mixture-of-Experts via Importance-Guided Adaptation (NAACL 2022). ☆112 · Updated 3 years ago
- Official repository of NEFTune: Noisy Embeddings Improves Instruction Finetuning ☆401 · Updated last year
- The official implementation of the paper "What Matters in Transformers? Not All Attention is Needed". ☆179 · Updated 7 months ago
- Explorations into some recent techniques surrounding speculative decoding ☆288 · Updated 10 months ago