mit-han-lab / offsite-tuning
Offsite-Tuning: Transfer Learning without Full Model
☆375 · Updated last year
Alternatives and similar repositories for offsite-tuning
Users interested in offsite-tuning are comparing it to the libraries listed below.
- Shepherd: A foundational framework enabling federated instruction tuning for large language models ☆235 · Updated last year
- [COLM 2024] LoraHub: Efficient Cross-Task Generalization via Dynamic LoRA Composition ☆640 · Updated 11 months ago
- DSIR large-scale data selection framework for language model training ☆251 · Updated last year
- OpenICL is an open-source framework to facilitate research, development, and prototyping of in-context learning. ☆563 · Updated last year
- Must-read Papers of Parameter-Efficient Tuning (Delta Tuning) Methods on Pre-trained Models. ☆286 · Updated 2 years ago
- Official code for our CVPR'22 paper “Vision Transformer Slimming: Multi-Dimension Searching in Continuous Optimization Space” ☆250 · Updated last year
- Scaling Data-Constrained Language Models ☆335 · Updated 9 months ago
- A simple and effective LLM pruning approach. ☆763 · Updated 10 months ago
- Official PyTorch implementation of QA-LoRA ☆137 · Updated last year
- Official repository of NEFTune: Noisy Embeddings Improve Instruction Finetuning ☆396 · Updated last year
- Editing Models with Task Arithmetic ☆480 · Updated last year
- ☆183 · Updated last year
- NeurIPS Large Language Model Efficiency Challenge: 1 LLM + 1 GPU + 1 Day ☆257 · Updated last year
- [NeurIPS 2022] A Fast Post-Training Pruning Framework for Transformers ☆190 · Updated 2 years ago
- Official implementation of TransNormerLLM: A Faster and Better LLM ☆245 · Updated last year
- Tutel MoE: Optimized Mixture-of-Experts Library, supports DeepSeek FP8/FP4 ☆844 · Updated this week
- Explorations into some recent techniques surrounding speculative decoding ☆269 · Updated 6 months ago
- Simple next-token-prediction for RLHF ☆227 · Updated last year
- [ICCV 2023] Dataset Quantization ☆259 · Updated last year
- Distributed trainer for LLMs ☆577 · Updated last year
- [ICML 2022] Black-Box Tuning for Language-Model-as-a-Service & [EMNLP 2022] BBTv2: Towards a Gradient-Free Future with Large Language Models ☆269 · Updated 2 years ago
- This repository contains code to quantitatively evaluate instruction-tuned models such as Alpaca and Flan-T5 on held-out tasks. ☆545 · Updated last year
- A framework for merging models solving different tasks with different initializations into one multi-task model without any additional training ☆300 · Updated last year
- AdaLoRA: Adaptive Budget Allocation for Parameter-Efficient Fine-Tuning (ICLR 2023). ☆332 · Updated 2 years ago
- DataComp: In search of the next generation of multimodal datasets ☆719 · Updated last month
- PyTorch codes for "LST: Ladder Side-Tuning for Parameter and Memory Efficient Transfer Learning" ☆237 · Updated 2 years ago
- [ICLR 2024] Sheared LLaMA: Accelerating Language Model Pre-training via Structured Pruning ☆617 · Updated last year
- Simple Parameter-efficient Fine-tuning for Transformer-based Masked Language-models ☆142 · Updated 2 years ago
- [ACL 2022] Structured Pruning Learns Compact and Accurate Models (https://arxiv.org/abs/2204.00408) ☆195 · Updated 2 years ago
- [EMNLP 2023] The CoT Collection: Improving Zero-shot and Few-shot Learning of Language Models via Chain-of-Thought Fine-Tuning ☆243 · Updated last year