yizhongw / Tk-Instruct
Tk-Instruct is a Transformer model that is tuned to solve many NLP tasks by following instructions.
☆180 · Updated 2 years ago
Alternatives and similar repositories for Tk-Instruct
Users interested in Tk-Instruct are comparing it to the repositories listed below.
- A framework for few-shot evaluation of autoregressive language models. ☆105 · Updated 2 years ago
- Code for the paper InstructDial: Improving Zero and Few-shot Generalization in Dialogue through Instruction Tuning ☆100 · Updated 2 years ago
- [ICLR 2023] Guess the Instruction! Flipped Learning Makes Language Models Stronger Zero-Shot Learners ☆116 · Updated 9 months ago
- ☆135 · Updated 5 months ago
- Code and model release for the paper "Task-aware Retrieval with Instructions" by Asai et al. ☆162 · Updated last year
- ☆180 · Updated 2 years ago
- A unified benchmark for math reasoning ☆88 · Updated 2 years ago
- Scalable training for dense retrieval models. ☆298 · Updated 2 weeks ago
- Code for Editing Factual Knowledge in Language Models ☆138 · Updated 3 years ago
- [AAAI 2024] Investigating the Effectiveness of Task-Agnostic Prefix Prompt for Instruction Following ☆79 · Updated 9 months ago
- PyTorch + HuggingFace code for RetoMaton: "Neuro-Symbolic Language Modeling with Automaton-augmented Retrieval" (ICML 2022), including an… ☆274 · Updated 2 years ago
- An experimental implementation of the retrieval-enhanced language model ☆74 · Updated 2 years ago
- The original implementation of Min et al. "Nonparametric Masked Language Modeling" (paper: https://arxiv.org/abs/2212.01349) ☆157 · Updated 2 years ago
- Reproduce results and replicate training of T0 (Multitask Prompted Training Enables Zero-Shot Task Generalization) ☆464 · Updated 2 years ago
- Code for the ACL 2023 paper: Pre-Training to Learn in Context ☆108 · Updated 11 months ago
- A library for finding knowledge neurons in pretrained transformer models. ☆158 · Updated 3 years ago
- Token-level Reference-free Hallucination Detection ☆94 · Updated last year
- ☆97 · Updated 2 years ago
- Open Instruction Generalist is an assistant trained on massive synthetic instructions to perform millions of tasks ☆208 · Updated last year
- Code accompanying the paper Pretraining Language Models with Human Preferences ☆182 · Updated last year
- Official code and model checkpoints for our EMNLP 2022 paper "RankGen - Improving Text Generation with Large Ranking Models" (https://arx… ☆135 · Updated last year
- ☆179 · Updated 2 weeks ago
- Code for the arXiv paper: "LLMs as Factual Reasoners: Insights from Existing Benchmarks and Beyond" ☆59 · Updated 5 months ago
- Inspecting and Editing Knowledge Representations in Language Models ☆116 · Updated last year
- [ICML 2023] Code for our paper “Compositional Exemplars for In-context Learning”. ☆101 · Updated 2 years ago
- ☆66 · Updated 3 years ago
- All available datasets for Instruction Tuning of Large Language Models ☆253 · Updated last year
- Implementation of "The Power of Scale for Parameter-Efficient Prompt Tuning" ☆167 · Updated 3 years ago
- ☆75 · Updated last year
- ☆159 · Updated 2 years ago