yizhongw / Tk-Instruct
Tk-Instruct is a Transformer model that is tuned to solve many NLP tasks by following instructions.
☆182 · Updated 3 years ago
Alternatives and similar repositories for Tk-Instruct
Users that are interested in Tk-Instruct are comparing it to the libraries listed below
- ☆141 · Updated 10 months ago
- The original implementation of Min et al., "Nonparametric Masked Language Modeling" (paper: https://arxiv.org/abs/2212.01349) ☆158 · Updated 2 years ago
- A framework for few-shot evaluation of autoregressive language models. ☆104 · Updated 2 years ago
- PyTorch + HuggingFace code for RetoMaton: "Neuro-Symbolic Language Modeling with Automaton-augmented Retrieval" (ICML 2022), including an… ☆282 · Updated 3 years ago
- ☆180 · Updated 2 years ago
- Code and model release for the paper "Task-aware Retrieval with Instructions" by Asai et al. ☆165 · Updated 2 years ago
- Code for the arXiv paper "LLMs as Factual Reasoners: Insights from Existing Benchmarks and Beyond" ☆60 · Updated 9 months ago
- Scalable training for dense retrieval models. ☆297 · Updated 5 months ago
- The official code of EMNLP 2022, "SCROLLS: Standardized CompaRison Over Long Language Sequences". ☆69 · Updated last year
- A unified benchmark for math reasoning ☆89 · Updated 2 years ago
- Code for Editing Factual Knowledge in Language Models ☆142 · Updated 3 years ago
- Code for the paper "InstructDial: Improving Zero and Few-shot Generalization in Dialogue through Instruction Tuning" ☆100 · Updated 2 years ago
- [ICLR 2023] Guess the Instruction! Flipped Learning Makes Language Models Stronger Zero-Shot Learners ☆116 · Updated 4 months ago
- ☆39 · Updated 3 years ago
- [AAAI 2024] Investigating the Effectiveness of Task-Agnostic Prefix Prompt for Instruction Following ☆78 · Updated last year
- Token-level Reference-free Hallucination Detection ☆96 · Updated 2 years ago
- [EMNLP 2022] Training Language Models with Memory Augmentation (https://arxiv.org/abs/2205.12674) ☆196 · Updated 2 years ago
- A Multilingual Replicable Instruction-Following Model ☆95 · Updated 2 years ago
- Open Instruction Generalist is an assistant trained on massive synthetic instructions to perform many millions of tasks ☆209 · Updated last year
- ☆75 · Updated 2 years ago
- ☆159 · Updated 2 years ago
- A library for finding knowledge neurons in pretrained transformer models. ☆158 · Updated 3 years ago
- The official code of TACL 2021, "Did Aristotle Use a Laptop? A Question Answering Benchmark with Implicit Reasoning Strategies". ☆81 · Updated 3 years ago
- ☆189 · Updated 4 months ago
- ☆86 · Updated 3 years ago
- An original implementation of "MetaICL: Learning to Learn In Context" by Sewon Min, Mike Lewis, Luke Zettlemoyer and Hannaneh Hajishirzi ☆271 · Updated 2 years ago
- Reproduce results and replicate training of T0 (Multitask Prompted Training Enables Zero-Shot Task Generalization) ☆462 · Updated 3 years ago
- ☆97 · Updated 3 years ago
- Repo for the paper "Large Language Models Struggle to Learn Long-Tail Knowledge" ☆78 · Updated 2 years ago
- Code associated with the ACL 2021 DExperts paper ☆118 · Updated 2 years ago