microsoft / automated-brain-explanations
Generating and validating natural-language explanations for the brain.
☆57 · Updated this week
Alternatives and similar repositories for automated-brain-explanations
Users interested in automated-brain-explanations are comparing it to the libraries listed below.
- We view Large Language Models as stochastic language layers in a network, where the learnable parameters are the natural language prompts…☆94 · Updated last year
- Code and dataset for Learning to Solve Complex Tasks by Talking to Agents☆24 · Updated 3 years ago
- Code and data from the paper "Human Feedback is not Gold Standard"☆19 · Updated last year
- Minimum Description Length probing for neural network representations☆20 · Updated 8 months ago
- Interpretable and efficient predictors using pre-trained language models. Scikit-learn compatible.☆44 · Updated 7 months ago
- Reference implementation for Reward-Augmented Decoding: Efficient Controlled Text Generation With a Unidirectional Reward Model☆43 · Updated last week
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment☆60 · Updated last year
- [NeurIPS 2023 Main Track] Repository for the paper "Don't Stop Pretraining? Make Prompt-based Fine-tuning Powerful Lea…☆75 · Updated last year
- Code Prompting Elicits Conditional Reasoning Abilities in Text+Code LLMs (EMNLP 2024)☆27 · Updated 10 months ago
- SILO Language Models code repository☆82 · Updated last year
- ☆22 · Updated 8 months ago
- Official repo of the research work "Interactive Editing for Text Summarization"☆22 · Updated 2 years ago
- A framework for pitting LLMs against each other in an evolving library of games ⚔☆34 · Updated 5 months ago
- Repo for: When to Make Exceptions: Exploring Language Models as Accounts of Human Moral Judgment☆38 · Updated 2 years ago
- ReBase: Training Task Experts through Retrieval Based Distillation☆29 · Updated 8 months ago
- Aioli: A unified optimization framework for language model data mixing☆27 · Updated 8 months ago
- Everything for the paper "Evoke: Evoking Critical Thinking Abilities in LLMs via Reviewer-Author Prompt Editing"☆17 · Updated last year
- ☆44 · Updated 10 months ago
- ☆11 · Updated 2 years ago
- Explaining ML models using LLMs☆22 · Updated 11 months ago
- ☆26 · Updated last year
- A weak supervision framework for (partial) labeling functions☆16 · Updated last year
- A benchmark for evaluating learning agents based on just language feedback☆89 · Updated 4 months ago
- Code for the EMNLP '22 paper "Fixing Model Bugs with Natural Language Patches"☆19 · Updated 2 years ago
- ☆26 · Updated 2 years ago
- ☆28 · Updated last year
- Python package for generating datasets to evaluate reasoning and retrieval of large language models☆19 · Updated 2 weeks ago
- ☆27 · Updated 7 months ago
- A metric for evaluating the faithfulness of text generated by LLMs. The work behind this repository can be found he…☆31 · Updated 2 years ago
- Repo for the "Smart Word Suggestions" (SWS) task and benchmark☆20 · Updated last year