luohongyin / SAIL · Links
SAIL: Search Augmented Instruction Learning
☆157 · Updated 2 months ago
Alternatives and similar repositories for SAIL
Users interested in SAIL are comparing it to the libraries listed below.
- This project is an attempt to create a common metric to test LLMs for progress in eliminating hallucinations, which is the most serious c… ☆222 · Updated 2 years ago
- TART: A plug-and-play Transformer module for task-agnostic reasoning ☆201 · Updated 2 years ago
- Code repository for the c-BTM paper ☆107 · Updated 2 years ago
- Flacuna was developed by fine-tuning Vicuna on Flan-mini, a comprehensive instruction collection encompassing various tasks. Vicuna is al… ☆111 · Updated 2 years ago
- ☆180 · Updated 2 years ago
- This is the repo for the paper Shepherd -- A Critic for Language Model Generation ☆218 · Updated 2 years ago
- ☆173 · Updated 2 years ago
- A set of utilities for running few-shot prompting experiments on large language models ☆122 · Updated last year
- ModuleFormer is a MoE-based architecture that includes two different types of experts: stick-breaking attention heads and feedforward exp… ☆224 · Updated last week
- Official implementation for "Extending LLMs’ Context Window with 100 Samples" ☆80 · Updated last year
- [ICLR 2023] Guess the Instruction! Flipped Learning Makes Language Models Stronger Zero-Shot Learners ☆116 · Updated 3 months ago
- Code repo for "Agent Instructs Large Language Models to be General Zero-Shot Reasoners" ☆116 · Updated last year
- ☆154 · Updated last year
- Reverse Instructions to generate instruction-tuning data with corpus examples ☆215 · Updated last year
- Spherical merge of PyTorch/HF-format language models with minimal feature loss. ☆138 · Updated 2 years ago
- [NeurIPS 2023] This is the code for the paper `Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias`. ☆153 · Updated last year
- Pre-training code for Amber 7B LLM ☆168 · Updated last year
- Scripts for generating synthetic finetuning data for reducing sycophancy. ☆116 · Updated 2 years ago
- Open Instruction Generalist is an assistant trained on massive synthetic instructions to perform many millions of tasks ☆210 · Updated last year
- Official codebase for "SelFee: Iterative Self-Revising LLM Empowered by Self-Feedback Generation" ☆229 · Updated 2 years ago
- ☆135 · Updated last year
- Load multiple LoRA modules simultaneously and automatically switch the appropriate combination of LoRA modules to generate the best answe… ☆158 · Updated last year
- The data processing pipeline for the Koala chatbot language model ☆118 · Updated 2 years ago
- This is the repository for our paper "INTERS: Unlocking the Power of Large Language Models in Search with Instruction Tuning" ☆204 · Updated 9 months ago
- ☆84 · Updated 2 years ago
- ☆274 · Updated 2 years ago
- Unofficial implementation of AlpaGasus ☆93 · Updated 2 years ago
- Evaluating LLMs with CommonGen-Lite ☆91 · Updated last year
- Code of ICLR paper: https://openreview.net/forum?id=-cqvvvb-NkI ☆94 · Updated 2 years ago
- [NeurIPS 2023] PyTorch code for Can Language Models Teach? Teacher Explanations Improve Student Performance via Theory of Mind ☆66 · Updated last year