declare-lab / flan-alpaca
This repository contains code for extending the Stanford Alpaca synthetic instruction-tuning approach to existing instruction-tuned models such as Flan-T5.
☆350 · Updated last year
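As a rough illustration of what this kind of training involves, here is a minimal sketch of fine-tuning Flan-T5 on Alpaca-format instruction data with Hugging Face `transformers`. This is not the repository's actual training script; the dataset id, prompt format, and hyperparameters are illustrative assumptions.

```python
# Minimal sketch (not the repo's actual code): instruction-tuning Flan-T5
# on Alpaca-style records, which have "instruction", "input", and "output".
from datasets import load_dataset
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

model_name = "google/flan-t5-base"  # assumed base model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

dataset = load_dataset("tatsu-lab/alpaca", split="train")

def preprocess(example):
    # Concatenate instruction and optional input into the encoder prompt.
    prompt = example["instruction"]
    if example["input"]:
        prompt += "\n" + example["input"]
    model_inputs = tokenizer(prompt, max_length=512, truncation=True)
    labels = tokenizer(text_target=example["output"], max_length=256, truncation=True)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

tokenized = dataset.map(preprocess, remove_columns=dataset.column_names)

trainer = Seq2SeqTrainer(
    model=model,
    args=Seq2SeqTrainingArguments(
        output_dir="flan-alpaca-out",   # illustrative output path
        per_device_train_batch_size=8,  # assumed hyperparameters
        num_train_epochs=3,
        learning_rate=3e-4,
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```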
Alternatives and similar repositories for flan-alpaca:
Users interested in flan-alpaca are comparing it to the libraries listed below
- ☆456 · Updated last year
- ☆268 · Updated last year
- LaMini-LM: A Diverse Herd of Distilled Models from Large-Scale Instructions ☆818 · Updated last year
- Landmark Attention: Random-Access Infinite Context Length for Transformers ☆421 · Updated last year
- ☆536 · Updated last year
- Tune any FALCON in 4-bit ☆466 · Updated last year
- Code for fine-tuning the Platypus family of LLMs using LoRA (see the LoRA sketch after this list) ☆626 · Updated last year
- batched loras ☆338 · Updated last year
- Reverse Instructions to generate instruction tuning data with corpus examples ☆208 · Updated 11 months ago
- ☆177 · Updated last year
- Crosslingual Generalization through Multitask Finetuning ☆525 · Updated 4 months ago
- ☆172 · Updated last year
- [EMNLP 2023] The CoT Collection: Improving Zero-shot and Few-shot Learning of Language Models via Chain-of-Thought Fine-Tuning ☆229 · Updated last year
- This repository contains code to quantitatively evaluate instruction-tuned models such as Alpaca and Flan-T5 on held-out tasks. ☆541 · Updated 11 months ago
- Learning to Compress Prompts with Gist Tokens - https://arxiv.org/abs/2304.08467 ☆276 · Updated this week
- A bagel, with everything. ☆316 · Updated 10 months ago
- Mass-editing thousands of facts into a transformer memory (ICLR 2023) ☆462 · Updated last year
- Open Instruction Generalist is an assistant trained on massive synthetic instructions to perform many millions of tasks ☆208 · Updated last year
- Tk-Instruct is a Transformer model that is tuned to solve many NLP tasks by following instructions. ☆179 · Updated 2 years ago
- TART: A plug-and-play Transformer module for task-agnostic reasoning ☆195 · Updated last year
- Code and model release for the paper "Task-aware Retrieval with Instructions" by Asai et al. ☆162 · Updated last year
- This project is an attempt to create a common metric to test LLMs for progress in eliminating hallucinations, which is the most serious c… ☆222 · Updated last year
- Reproduce results and replicate training of T0 (Multitask Prompted Training Enables Zero-Shot Task Generalization) ☆461 · Updated 2 years ago
- Official code for ReLoRA from the paper "Stack More Layers Differently: High-Rank Training Through Low-Rank Updates" ☆443 · Updated 9 months ago
- Load multiple LoRA modules simultaneously and automatically switch the appropriate combination of LoRA modules to generate the best answe… ☆149 · Updated last year
- Official codebase for "SelFee: Iterative Self-Revising LLM Empowered by Self-Feedback Generation" ☆226 · Updated last year
- 🥤🧑🏻‍🚀 Code and dataset for our EMNLP 2023 paper - "SODA: Million-scale Dialogue Distillation with Social Commonsense Contextualization… ☆223 · Updated last year
- Repo for fine-tuning Causal LLMs ☆455 · Updated 10 months ago
- OpenAlpaca: A Fully Open-Source Instruction-Following Model Based On OpenLLaMA ☆301 · Updated last year
- Code repository supporting the paper "Atlas: Few-shot Learning with Retrieval Augmented Language Models" (https://arxiv.org/abs/2208.03…) ☆526 · Updated last year
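Several entries in the list above (4-bit FALCON tuning, Platypus, batched loras, ReLoRA, multi-LoRA switching) revolve around LoRA-style low-rank adapters. As a rough illustration of the common pattern, here is a minimal sketch using the Hugging Face `peft` library; the base model name, target modules, and rank settings are illustrative assumptions, not taken from any repository above.

```python
# Minimal sketch, assuming the Hugging Face peft library: wrapping a causal
# LM with LoRA adapters so only small low-rank update matrices are trained.
import torch
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, AutoTokenizer

base = "openlm-research/open_llama_3b"  # assumed placeholder base model
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.float16)

config = LoraConfig(
    r=16,                                 # rank of the low-rank update
    lora_alpha=32,                        # scaling factor for the update
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # only the LoRA weights require gradients
```

From here the wrapped model can be trained with an ordinary `Trainer` loop; because only the adapter weights receive gradients, the memory and storage footprint is a small fraction of full fine-tuning, which is what makes the 4-bit and multi-adapter variants above practical.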