luohongyin / SAIL
SAIL: Search Augmented Instruction Learning
☆158 · Updated 3 months ago
Alternatives and similar repositories for SAIL
Users interested in SAIL are comparing it to the repositories listed below.
- ☆173 · Updated 2 years ago
- This project is an attempt to create a common metric to test LLMs for progress in eliminating hallucinations, which is the most serious c… ☆222 · Updated 2 years ago
- TART: A plug-and-play Transformer module for task-agnostic reasoning ☆200 · Updated 2 years ago
- Code repository for the c-BTM paper ☆107 · Updated 2 years ago
- ☆179 · Updated 2 years ago
- This is the repo for the paper Shepherd -- A Critic for Language Model Generation ☆217 · Updated 2 years ago
- Spherical Merge PyTorch/HF format Language Models with minimal feature loss ☆140 · Updated 2 years ago
- Pre-training code for the Amber 7B LLM ☆169 · Updated last year
- Flacuna was developed by fine-tuning Vicuna on Flan-mini, a comprehensive instruction collection encompassing various tasks. Vicuna is al… ☆111 · Updated 2 years ago
- Reverse Instructions to generate instruction tuning data with corpus examples ☆214 · Updated last year
- [ICLR 2023] Guess the Instruction! Flipped Learning Makes Language Models Stronger Zero-Shot Learners ☆116 · Updated 4 months ago
- Official codebase for "SelFee: Iterative Self-Revising LLM Empowered by Self-Feedback Generation" ☆228 · Updated 2 years ago
- Mixing Language Models with Self-Verification and Meta-Verification ☆109 · Updated 11 months ago
- ModuleFormer is a MoE-based architecture that includes two different types of experts: stick-breaking attention heads and feedforward exp… ☆224 · Updated last month
- ☆95 · Updated 2 years ago
- ☆134 · Updated 2 years ago
- The data processing pipeline for the Koala chatbot language model ☆118 · Updated 2 years ago
- Small and Efficient Mathematical Reasoning LLMs ☆72 · Updated last year
- [NeurIPS 2023] This is the code for the paper `Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias` ☆155 · Updated 2 years ago
- Scripts for generating synthetic finetuning data for reducing sycophancy.