openai / automated-interpretability
☆990 · Updated 11 months ago
Alternatives and similar repositories for automated-interpretability:
Users interested in automated-interpretability are comparing it to the repositories listed below.
- Human preference data for "Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback" ☆1,679 · Updated last year
- 800,000 step-level correctness labels on LLM solutions to MATH problems ☆1,884 · Updated last year
- [NeurIPS 2023] MeZO: Fine-Tuning Language Models with Just Forward Passes. https://arxiv.org/abs/2305.17333 ☆1,081 · Updated last year
- PaL: Program-Aided Language Models (ICML 2023) ☆481 · Updated last year
- The hub for EleutherAI's work on interpretability and learning dynamics ☆2,368 · Updated 2 months ago
- TruthfulQA: Measuring How Models Imitate Human Falsehoods ☆669 · Updated last month
- A simulation framework for RLHF and alternatives. Develop your RLHF method without collecting human data. ☆793 · Updated 7 months ago
- [ACL 2023] We introduce LLM-Blender, an innovative ensembling framework to attain consistently superior performance by leveraging the dive… ☆911 · Updated 3 months ago
- A framework for the evaluation of autoregressive code generation language models. ☆884 · Updated 3 months ago
- Code for "Chameleon: Plug-and-Play Compositional Reasoning with Large Language Models". ☆1,109 · Updated last year
- Inference-Time Intervention: Eliciting Truthful Answers from a Language Model ☆496 · Updated 2 weeks ago
- ☆1,025 · Updated last year
- Salesforce open-source LLMs with 8k sequence length. ☆717 · Updated 2 weeks ago
- Locating and editing factual associations in GPT (NeurIPS 2022) ☆605 · Updated 9 months ago
- Dromedary: towards helpful, ethical, and reliable LLMs. ☆1,136 · Updated last year
- Data and code for the NeurIPS 2022 paper "Learn to Explain: Multimodal Reasoning via Thought Chains for Science Question Answering". ☆629 · Updated 4 months ago
- Mass-editing thousands of facts into a transformer memory (ICLR 2023) ☆462 · Updated last year
- An automatic evaluator for instruction-following language models. Human-validated, high-quality, cheap, and fast. ☆1,646 · Updated last month
- Reading list for instruction tuning, a trend that starts with Natural-Instructions (ACL 2022), FLAN (ICLR 2022), and T0 (ICLR 2022). ☆760 · Updated last year
- Representation Engineering: A Top-Down Approach to AI Transparency ☆787 · Updated 6 months ago
- Code for fine-tuning Platypus fam LLMs using LoRA ☆626 · Updated last year
- ☆1,497 · Updated this week
- Expanding natural instructions ☆973 · Updated last year
- Measuring Massive Multitask Language Understanding (ICLR 2021) ☆1,302 · Updated last year
- Inference code for Persimmon-8B ☆416 · Updated last year
- Code to quantitatively evaluate instruction-tuned models such as Alpaca and Flan-T5 on held-out tasks. ☆541 · Updated 11 months ago
- A library with extensible implementations of DPO, KTO, PPO, ORPO, and other human-aware loss functions (HALOs). ☆798 · Updated this week
- Implementation of MEGABYTE, Predicting Million-byte Sequences with Multiscale Transformers, in PyTorch ☆636 · Updated last month
- Tools for understanding how transformer predictions are built layer by layer ☆472 · Updated 8 months ago
- LLMs can generate feedback on their own work, use it to improve the output, and repeat this process iteratively. ☆666 · Updated 4 months ago