luohongyin / SAIL
SAIL: Search Augmented Instruction Learning
☆158 · Updated last year
Alternatives and similar repositories for SAIL:
Users interested in SAIL are comparing it to the repositories listed below.
- Code repository for the c-BTM paper ☆105 · Updated last year
- Scripts for generating synthetic finetuning data for reducing sycophancy. ☆108 · Updated last year
- ☆171 · Updated last year
- Unofficial implementation of AlpaGasus ☆90 · Updated last year
- LongEmbed: Extending Embedding Models for Long Context Retrieval (EMNLP 2024) ☆126 · Updated 2 months ago
- [EMNLP 2023] The CoT Collection: Improving Zero-shot and Few-shot Learning of Language Models via Chain-of-Thought Fine-Tuning ☆221 · Updated last year
- Spherical Merge PyTorch/HF format Language Models with minimal feature loss. ☆115 · Updated last year
- Official implementation for "Extending LLMs' Context Window with 100 Samples" ☆76 · Updated last year
- Pre-training code for Amber 7B LLM ☆160 · Updated 8 months ago
- ☆86 · Updated last year
- Flacuna was developed by fine-tuning Vicuna on Flan-mini, a comprehensive instruction collection encompassing various tasks. Vicuna is al… ☆111 · Updated last year
- Code repo for "Agent Instructs Large Language Models to be General Zero-Shot Reasoners" ☆97 · Updated 4 months ago
- TART: A plug-and-play Transformer module for task-agnostic reasoning ☆194 · Updated last year
- ☆177 · Updated last year
- [NeurIPS 2023] This is the code for the paper `Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias`. ☆149 · Updated last year
- Experiments on speculative sampling with Llama models ☆122 · Updated last year
- ☆129 · Updated last year
- This is the repo for the paper Shepherd -- A Critic for Language Model Generation ☆217 · Updated last year
- MiniCheck: Efficient Fact-Checking of LLMs on Grounding Documents [EMNLP 2024] ☆115 · Updated last week
- Official codebase for "SelFee: Iterative Self-Revising LLM Empowered by Self-Feedback Generation" ☆224 · Updated last year
- Benchmark baseline for retrieval QA applications ☆96 · Updated 9 months ago
- ModuleFormer is a MoE-based architecture that includes two different types of experts: stick-breaking attention heads and feedforward experts. ☆215 · Updated 9 months ago
- Official repository for "Scaling Retrieval-Based Language Models with a Trillion-Token Datastore". ☆153 · Updated last month
- Code and model release for the paper "Task-aware Retrieval with Instructions" by Asai et al. ☆162 · Updated last year
- This project is an attempt to create a common metric to test LLMs for progress in eliminating hallucinations, which is the most serious c… ☆221 · Updated last year
- This is the repository for our paper "INTERS: Unlocking the Power of Large Language Models in Search with Instruction Tuning" ☆200 · Updated last month
- Data preparation code for Amber 7B LLM ☆84 · Updated 8 months ago
- Official code for "MAmmoTH2: Scaling Instructions from the Web" [NeurIPS 2024] ☆129 · Updated 2 months ago
- ☆94 · Updated last year
- Self-Alignment with Principle-Following Reward Models ☆151 · Updated 10 months ago