openai / automated-interpretability
☆1,031 · Updated last year
Alternatives and similar repositories for automated-interpretability
Users interested in automated-interpretability are comparing it to the libraries listed below.
- Human preference data for "Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback" ☆1,777 · Updated 2 months ago
- Dromedary: towards helpful, ethical and reliable LLMs. ☆1,148 · Updated 3 months ago
- Representation Engineering: A Top-Down Approach to AI Transparency ☆866 · Updated last year
- 800,000 step-level correctness labels on LLM solutions to MATH problems ☆2,041 · Updated 2 years ago
- A simulation framework for RLHF and alternatives. Develop your RLHF method without collecting human data. ☆824 · Updated last year
- Inference-Time Intervention: Eliciting Truthful Answers from a Language Model ☆543 · Updated 7 months ago
- PaL: Program-Aided Language Models (ICML 2023) ☆505 · Updated 2 years ago
- Locating and editing factual associations in GPT (NeurIPS 2022) ☆660 · Updated last year
- Mass-editing thousands of facts into a transformer memory (ICLR 2023) ☆513 · Updated last year
- The hub for EleutherAI's work on interpretability and learning dynamics ☆2,602 · Updated 2 months ago
- Implementation of Toolformer, Language Models That Can Use Tools, by MetaAI ☆2,047 · Updated last year
- Public repo for the NeurIPS 2023 paper "Unlimiformer: Long-Range Transformers with Unlimited Length Input" ☆1,063 · Updated last year
- [ACL 2023] We introduce LLM-Blender, an innovative ensembling framework to attain consistently superior performance by leveraging the diverse strengths of multiple open-source LLMs ☆958 · Updated 10 months ago
- [NeurIPS 2023] MeZO: Fine-Tuning Language Models with Just Forward Passes. https://arxiv.org/abs/2305.17333 ☆1,125 · Updated last year
- An open-source implementation of Google's PaLM models ☆822 · Updated last year
- Codes for "Chameleon: Plug-and-Play Compositional Reasoning with Large Language Models".☆1,135Updated last year
- Tools for understanding how transformer predictions are built layer-by-layer ☆521 · Updated 3 weeks ago
- A prize for finding tasks that cause large language models to show inverse scaling ☆614 · Updated last year
- Code for fine-tuning the Platypus family of LLMs using LoRA ☆628 · Updated last year
- Measuring Massive Multitask Language Understanding | ICLR 2021 ☆1,483 · Updated 2 years ago
- Implementation of the training framework proposed in "Self-Rewarding Language Models", from MetaAI ☆1,399 · Updated last year
- Evolution Through Large Models ☆731 · Updated last year
- TruthfulQA: Measuring How Models Imitate Human Falsehoods ☆796 · Updated 7 months ago
- [NeurIPS 2022] [AAAI 2024] Recurrent Transformer-based long-context architecture. ☆768 · Updated 10 months ago
- Ask Me Anything language model prompting ☆546 · Updated 2 years ago
- This repository contains code to quantitatively evaluate instruction-tuned models such as Alpaca and Flan-T5 on held-out tasks. ☆547 · Updated last year