r-three/phatgoose
Code for PHATGOOSE introduced in "Learning to Route Among Specialized Experts for Zero-Shot Generalization"
☆81 · Updated last year
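The paper behind this repo routes each token among separately trained specialized experts at inference time. Below is a minimal, hypothetical sketch of token-level top-k routing in that spirit; the function name, tensor shapes, and gate representation are illustrative assumptions, not the repository's actual API.

```python
# Hypothetical sketch: top-k token routing over specialized expert modules.
# All names and shapes are assumptions for illustration only.
import torch
import torch.nn.functional as F


def route_tokens(hidden, gates, expert_outputs, k=2):
    """Route each token to its top-k experts and mix their outputs.

    hidden:          (batch, seq, dim)                token activations
    gates:           (num_experts, dim)               one gate vector per expert
    expert_outputs:  (num_experts, batch, seq, dim)   per-expert outputs
    """
    # Cosine-style affinity between each token and each expert's gate vector.
    scores = torch.einsum(
        "bsd,ed->bse",
        F.normalize(hidden, dim=-1),
        F.normalize(gates, dim=-1),
    )  # (batch, seq, num_experts)

    topk_scores, topk_idx = scores.topk(k, dim=-1)   # (batch, seq, k)
    weights = topk_scores.softmax(dim=-1)            # (batch, seq, k)

    # Gather the chosen experts' outputs and average them per token.
    per_token = expert_outputs.permute(1, 2, 0, 3)   # (batch, seq, num_experts, dim)
    idx = topk_idx.unsqueeze(-1).expand(-1, -1, -1, per_token.size(-1))
    chosen = per_token.gather(2, idx)                # (batch, seq, k, dim)
    return (weights.unsqueeze(-1) * chosen).sum(dim=2)


if __name__ == "__main__":
    B, S, D, E = 2, 4, 8, 5
    out = route_tokens(
        torch.randn(B, S, D),
        torch.randn(E, D),
        torch.randn(E, B, S, D),
    )
    print(out.shape)  # torch.Size([2, 4, 8])
```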
Alternatives and similar repositories for phatgoose:
Users interested in phatgoose are comparing it to the libraries listed below.
- ☆118 · Updated 5 months ago
- ☆59 · Updated 10 months ago
- Code for EMNLP 2024 paper "Learn Beyond The Answer: Training Language Models with Reflection for Mathematical Reasoning" ☆52 · Updated 5 months ago
- The official repo for "LLoCo: Learning Long Contexts Offline" ☆114 · Updated 8 months ago
- Repository for NPHardEval, a quantified-dynamic benchmark of LLMs ☆52 · Updated 11 months ago
- SILO Language Models code repository ☆81 · Updated last year
- Code for the arXiv preprint "The Unreasonable Effectiveness of Easy Training Data" ☆46 · Updated last year
- PASTA: Post-hoc Attention Steering for LLMs ☆113 · Updated 3 months ago
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment ☆54 · Updated 6 months ago
- Code for the ICLR 2024 paper "How to catch an AI liar: Lie detection in black-box LLMs by asking unrelated questions" ☆65 · Updated 8 months ago
- ☆73 · Updated 10 months ago
- Replicating O1 inference-time scaling laws ☆83 · Updated 3 months ago
- Aligning with Human Judgement: The Role of Pairwise Preference in Large Language Model Evaluators (Liu et al.; COLM 2024) ☆44 · Updated last month
- ☆58 · Updated 3 months ago
- Codebase for Instruction Following without Instruction Tuning ☆33 · Updated 5 months ago
- Co-LLM: Learning to Decode Collaboratively with Multiple Language Models ☆108 · Updated 9 months ago
- PyTorch implementation for "Compressed Context Memory for Online Language Model Interaction" (ICLR'24) ☆53 · Updated 10 months ago
- ☆95 · Updated 8 months ago
- ☆72 · Updated 6 months ago
- Code release for Dataless Knowledge Fusion by Merging Weights of Language Models (https://openreview.net/forum?id=FCnohuR6AnM) ☆86 · Updated last year
- Reference implementation for Reward-Augmented Decoding: Efficient Controlled Text Generation With a Unidirectional Reward Model ☆42 · Updated last year
- Repository for "Propagating Knowledge Updates to LMs Through Distillation" (NeurIPS 2023) ☆25 · Updated 6 months ago
- Self-Alignment with Principle-Following Reward Models ☆154 · Updated last year
- PyTorch library for Active Fine-Tuning ☆58 · Updated 2 weeks ago
- Scalable Meta-Evaluation of LLMs as Evaluators ☆43 · Updated last year
- ☆39 · Updated 6 months ago
- ☆28 · Updated last month
- ☆141 · Updated 10 months ago