adapter-hub / adapters
A Unified Library for Parameter-Efficient and Modular Transfer Learning
☆2,681 · Updated last week
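The adapters library implements parameter-efficient transfer learning: small bottleneck modules are inserted into a frozen pretrained transformer, and only those modules are trained. A minimal, illustrative sketch of the underlying bottleneck-adapter idea in plain PyTorch (this is not the library's actual API; the class and sizes here are hypothetical):

```python
import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    """Bottleneck adapter: down-project, nonlinearity, up-project,
    added back to the input via a residual connection."""

    def __init__(self, hidden_size: int, bottleneck_size: int):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck_size)
        self.up = nn.Linear(bottleneck_size, hidden_size)
        self.act = nn.GELU()

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # Residual connection keeps the frozen model's representation intact
        return hidden_states + self.up(self.act(self.down(hidden_states)))

x = torch.randn(2, 8, 768)           # (batch, seq_len, hidden) — sizes are illustrative
adapter = BottleneckAdapter(hidden_size=768, bottleneck_size=64)
out = adapter(x)
print(out.shape)                     # torch.Size([2, 8, 768])
```

Because only the two small projections are trainable, the adapter adds far fewer parameters than a full dense layer, which is what makes this style of fine-tuning parameter-efficient.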
Alternatives and similar repositories for adapters:
Users interested in adapters are comparing it to the libraries listed below.
- BERT score for text generation ☆1,721 · Updated 8 months ago
- Longformer: The Long-Document Transformer ☆2,104 · Updated 2 years ago
- Foundation Architecture for (M)LLMs ☆3,067 · Updated last year
- Prefix-Tuning: Optimizing Continuous Prompts for Generation ☆919 · Updated 11 months ago
- A modular RL library to fine-tune language models to human preferences ☆2,297 · Updated last year
- Dense Passage Retriever: a set of tools and models for the open-domain Q&A task. ☆1,777 · Updated 2 years ago
- The implementation of DeBERTa ☆2,069 · Updated last year
- PyTorch extensions for high-performance and large-scale training. ☆3,293 · Updated this week
- Code for the EMNLP 2023 paper "LLM-Adapters: An Adapter Family for Parameter-Efficient Fine-Tuning of Large Language Models" ☆1,150 · Updated last year
- [EMNLP 2021] SimCSE: Simple Contrastive Learning of Sentence Embeddings (https://arxiv.org/abs/2104.08821) ☆3,535 · Updated 5 months ago
- A plug-and-play library for parameter-efficient tuning (Delta Tuning) ☆1,022 · Updated 6 months ago
- 🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, automatic mixed precision (i… ☆8,608 · Updated this week
- A research project for natural language generation, containing the official implementations by the MSRA NLC team. ☆715 · Updated 8 months ago
- 🤗 Evaluate: A library for easily evaluating machine learning models and datasets. ☆2,180 · Updated 3 months ago
- Toolkit for creating, sharing, and using natural language prompts. ☆2,816 · Updated last year
- TorchMultimodal is a PyTorch library for training state-of-the-art multimodal multi-task models at scale. ☆1,569 · Updated last week
- Accessible large language models via k-bit quantization for PyTorch. ☆6,918 · Updated this week
- Code for the paper "Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer" ☆6,325 · Updated last month
- Ongoing research training transformer language models at scale, including BERT & GPT-2 ☆1,382 · Updated last year
- A novel method to tune language models; code and datasets for the paper "GPT Understands, Too". ☆929 · Updated 2 years ago
- BertViz: Visualize Attention in NLP Models (BERT, GPT2, BART, etc.) ☆7,306 · Updated last year
- Beyond the Imitation Game: a collaborative benchmark for measuring and extrapolating the capabilities of language models ☆3,017 · Updated 8 months ago
- Data augmentation for NLP ☆4,539 · Updated 9 months ago
- Reference implementation for DPO (Direct Preference Optimization) ☆2,512 · Updated 8 months ago
- Original implementation of Prompt Tuning from Lester et al., 2021 ☆677 · Updated last month
- A repo for distributed training of language models with Reinforcement Learning from Human Feedback (RLHF) ☆4,621 · Updated last year
- General technology for enabling AI capabilities with LLMs and MLLMs ☆3,923 · Updated 3 weeks ago
- Implementation of the paper "Towards a Unified View of Parameter-Efficient Transfer Learning" (ICLR 2022) ☆524 · Updated 3 years ago