microsoft / mttl
Building modular LMs with parameter-efficient fine-tuning.
☆114 · Updated 2 months ago
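The tagline above refers to parameter-efficient fine-tuning (PEFT), where a small number of trainable parameters are added to a frozen pretrained model. A minimal LoRA-style sketch of the idea is below; it is illustrative only and does not use mttl's actual API (the class name and shapes are assumptions):

```python
# Minimal LoRA-style adapter sketch (illustrative only; not mttl's actual API).
# A frozen weight matrix W is augmented with a low-rank update B @ A, so only
# rank * (d_in + d_out) parameters are trained instead of d_out * d_in.
import numpy as np

class LoRALinear:
    def __init__(self, d_in, d_out, rank=4, alpha=8, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.standard_normal((d_out, d_in))        # frozen pretrained weight
        self.A = rng.standard_normal((rank, d_in)) * 0.01  # trainable down-projection
        self.B = np.zeros((d_out, rank))                   # trainable up-projection,
                                                           # zero-init so the update starts at 0
        self.scale = alpha / rank

    def __call__(self, x):
        # y = W x + (alpha / rank) * B A x
        return self.W @ x + self.scale * (self.B @ (self.A @ x))

layer = LoRALinear(d_in=16, d_out=8)
x = np.ones(16)
# Because B is zero-initialized, the adapted layer initially matches the frozen one.
assert np.allclose(layer(x), layer.W @ x)
```

Swapping adapters of this kind in and out of a shared backbone is one way to build the "modular" LMs the repository describes.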
Alternatives and similar repositories for mttl
Users interested in mttl are comparing it to the repositories listed below.
- ☆203 · Updated last year
- ☆80 · Updated 3 years ago
- Code release for "Dataless Knowledge Fusion by Merging Weights of Language Models" (https://openreview.net/forum?id=FCnohuR6AnM) ☆92 · Updated 2 years ago
- PASTA: Post-hoc Attention Steering for LLMs ☆132 · Updated last year
- Code accompanying the paper "Massive Activations in Large Language Models" ☆191 · Updated last year
- Simple parameter-efficient fine-tuning for Transformer-based masked language models ☆143 · Updated 3 years ago
- ☆103 · Updated 2 years ago
- We view large language models as stochastic language layers in a network, where the learnable parameters are the natural language prompts… ☆95 · Updated last year
- Self-Alignment with Principle-Following Reward Models ☆169 · Updated 3 months ago
- ☆51 · Updated last year
- ☆29 · Updated 2 years ago
- Language models scale reliably with over-training and on downstream tasks ☆100 · Updated last year
- Implementation of the paper "AdaMix: Mixture-of-Adaptations for Parameter-efficient Model Tuning" (https://arxiv.org/abs/2205.1…) ☆136 · Updated 2 years ago
- AI Logging for Interpretability and Explainability 🔬 ☆138 · Updated last year
- Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision ☆124 · Updated last year
- Revisiting Efficient Training Algorithms for Transformer-based Language Models (NeurIPS 2023) ☆81 · Updated 2 years ago
- Learning adapter weights from task descriptions ☆19 · Updated 2 years ago
- [ICLR 2023] "Sparse MoE as the New Dropout: Scaling Dense and Self-Slimmable Transformers" by Tianlong Chen*, Zhenyu Zhang*, Ajay Jaiswal… ☆56 · Updated 2 years ago
- Reproduction of "RLCD: Reinforcement Learning from Contrast Distillation for Language Model Alignment" ☆69 · Updated 2 years ago
- Test-time training on nearest neighbors for large language models ☆49 · Updated last year
- Official GitHub repo for the paper "Compression Represents Intelligence Linearly" [COLM 2024] ☆147 · Updated last year
- Skill-It! A Data-Driven Skills Framework for Understanding and Training Language Models ☆48 · Updated 2 years ago
- ☆61 · Updated 7 months ago
- [NeurIPS'24 Spotlight] Observational Scaling Laws ☆59 · Updated last year
- Function Vectors in Large Language Models (ICLR 2024) ☆189 · Updated 8 months ago
- ☆274 · Updated 2 years ago
- ☆52 · Updated 9 months ago
- One Network, Many Masks: Towards More Parameter-Efficient Transfer Learning ☆40 · Updated 2 years ago
- ☆150 · Updated 2 years ago
- ☆39 · Updated last year