openai / automated-interpretability
☆1,027 · Updated last year
Alternatives and similar repositories for automated-interpretability
Users interested in automated-interpretability are comparing it to the libraries listed below.
- Representation Engineering: A Top-Down Approach to AI Transparency ☆855 · Updated 11 months ago
- A simulation framework for RLHF and alternatives. Develop your RLHF method without collecting human data. ☆822 · Updated last year
- Dromedary: towards helpful, ethical and reliable LLMs. ☆1,148 · Updated 3 months ago
- Human preference data for "Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback" ☆1,770 · Updated last month
- 800,000 step-level correctness labels on LLM solutions to MATH problems ☆2,035 · Updated 2 years ago
- [NeurIPS 2023] MeZO: Fine-Tuning Language Models with Just Forward Passes. https://arxiv.org/abs/2305.17333 ☆1,119 · Updated last year
- Mass-editing thousands of facts into a transformer memory (ICLR 2023) ☆510 · Updated last year
- A prize for finding tasks that cause large language models to show inverse scaling ☆613 · Updated last year
- Locating and editing factual associations in GPT (NeurIPS 2022) ☆655 · Updated last year
- Public repo for the NeurIPS 2023 paper "Unlimiformer: Long-Range Transformers with Unlimited Length Input" ☆1,061 · Updated last year
- The hub for EleutherAI's work on interpretability and learning dynamics ☆2,582 · Updated 2 months ago
- TruthfulQA: Measuring How Models Imitate Human Falsehoods ☆786 · Updated 6 months ago
- PaL: Program-Aided Language Models (ICML 2023) ☆503 · Updated 2 years ago
- Implementation of Toolformer, Language Models That Can Use Tools, by MetaAI ☆2,043 · Updated last year
- Ask Me Anything language model prompting ☆547 · Updated 2 years ago
- Inference-Time Intervention: Eliciting Truthful Answers from a Language Model ☆540 · Updated 6 months ago
- Salesforce open-source LLMs with 8k sequence length. ☆721 · Updated 6 months ago
- Code for fine-tuning Platypus family LLMs using LoRA ☆628 · Updated last year
- An automatic evaluator for instruction-following language models. Human-validated, high-quality, cheap, and fast. ☆1,825 · Updated 7 months ago
- [ACL 2023] We introduce LLM-Blender, an innovative ensembling framework to attain consistently superior performance by leveraging the dive… ☆957 · Updated 9 months ago
- Official implementation of our NeurIPS 2023 paper "Augmenting Language Models with Long-Term Memory". ☆805 · Updated last year
- Tools for understanding how transformer predictions are built layer-by-layer ☆512 · Updated last year
- [NeurIPS 22] [AAAI 24] Recurrent Transformer-based long-context architecture. ☆767 · Updated 9 months ago
- ☆1,532 · Updated this week
- Implementation of MEGABYTE, Predicting Million-byte Sequences with Multiscale Transformers, in PyTorch ☆647 · Updated 7 months ago
- Stanford NLP Python library for understanding and improving PyTorch models via interventions ☆786 · Updated this week
- OpenICL is an open-source framework to facilitate research, development, and prototyping of in-context learning. ☆569 · Updated last year
- ☆1,036 · Updated 2 years ago
- Reading list on instruction tuning, a trend starting from Natural-Instructions (ACL 2022), FLAN (ICLR 2022), and T0 (ICLR 2022). ☆769 · Updated 2 years ago
- Measuring Massive Multitask Language Understanding | ICLR 2021 ☆1,470 · Updated 2 years ago