r-three / t-few
Code for T-Few from "Few-Shot Parameter-Efficient Fine-Tuning is Better and Cheaper than In-Context Learning"
☆444 · Updated last year
Alternatives and similar repositories for t-few:
Users interested in t-few are comparing it to the libraries listed below.
- Reproduce results and replicate training of T0 (Multitask Prompted Training Enables Zero-Shot Task Generalization) ☆463 · Updated 2 years ago
- Code repository for supporting the paper "Atlas: Few-shot Learning with Retrieval Augmented Language Models" (https://arxiv.org/abs/2208.03…) ☆526 · Updated last year
- An original implementation of "MetaICL: Learning to Learn In Context" by Sewon Min, Mike Lewis, Luke Zettlemoyer and Hannaneh Hajishirzi ☆260 · Updated last year
- Reading list for instruction tuning, a trend that starts from Natural-Instructions (ACL 2022), FLAN (ICLR 2022) and T0 (ICLR 2022). ☆764 · Updated last year
- DSIR: large-scale data selection framework for language model training ☆241 · Updated 10 months ago
- MEND: Fast Model Editing at Scale ☆242 · Updated last year
- ☆344 · Updated 3 years ago
- Official code for ReLoRA from the paper "Stack More Layers Differently: High-Rank Training Through Low-Rank Updates" ☆445 · Updated 10 months ago
- Reverse Instructions to generate instruction tuning data with corpus examples ☆208 · Updated 11 months ago
- Original implementation of Prompt Tuning from Lester et al., 2021 ☆669 · Updated 2 months ago
- Contriever: Unsupervised Dense Information Retrieval with Contrastive Learning ☆713 · Updated last year
- PyTorch implementation of DoReMi, a method for optimizing the data mixture weights in language modeling datasets ☆312 · Updated last year
- Fusion-in-Decoder ☆561 · Updated last year
- Inference-Time Intervention: Eliciting Truthful Answers from a Language Model ☆502 · Updated last month
- Scaling Data-Constrained Language Models ☆333 · Updated 5 months ago
- Expanding natural instructions ☆980 · Updated last year
- Official repository of NEFTune: Noisy Embeddings Improve Instruction Finetuning ☆392 · Updated 9 months ago
- [EMNLP 2023] Adapting Language Models to Compress Long Contexts ☆293 · Updated 5 months ago
- Code for the ALiBi method for transformer language models (ICLR 2022) ☆516 · Updated last year
- Scalable training for dense retrieval models. ☆279 · Updated this week
- Accompanying repo for the RLPrompt paper ☆320 · Updated 8 months ago
- Simple next-token-prediction for RLHF ☆222 · Updated last year
- Implementation of RETRO, DeepMind's retrieval-based attention net, in PyTorch ☆857 · Updated last year
- [COLM 2024] LoraHub: Efficient Cross-Task Generalization via Dynamic LoRA Composition ☆615 · Updated 7 months ago
- A library for finding knowledge neurons in pretrained transformer models. ☆154 · Updated 3 years ago
- Tk-Instruct is a Transformer model that is tuned to solve many NLP tasks by following instructions. ☆179 · Updated 2 years ago
- This repository contains code to quantitatively evaluate instruction-tuned models such as Alpaca and Flan-T5 on held-out tasks. ☆542 · Updated 11 months ago
- OpenICL is an open-source framework to facilitate research, development, and prototyping of in-context learning. ☆548 · Updated last year
- A package to evaluate factuality of long-form generation. Original implementation of our EMNLP 2023 paper "FActScore: Fine-grained Atomic…" ☆323 · Updated 9 months ago
- Mass-editing thousands of facts into a transformer memory (ICLR 2023) ☆464 · Updated last year