inverse-scaling / prize
A prize for finding tasks that cause large language models to show inverse scaling
☆620 · Updated Oct 11, 2023
Alternatives and similar repositories for prize
Users interested in prize are comparing it to the repositories listed below.
- Beyond the Imitation Game collaborative benchmark for measuring and extrapolating the capabilities of language models (☆3,199 · Updated Jul 19, 2024)
- A repo for distributed training of language models with Reinforcement Learning via Human Feedback (RLHF) (☆4,742 · Updated Jan 8, 2024)
- The hub for EleutherAI's work on interpretability and learning dynamics (☆2,731 · Updated Nov 15, 2025)
- Scaling Data-Constrained Language Models (☆340 · Updated Jun 28, 2025)
- Human preference data for "Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback" (☆1,814 · Updated Jun 17, 2025)
- A modular RL library to fine-tune language models to human preferences (☆2,377 · Updated Mar 1, 2024)
- Challenging BIG-Bench Tasks and Whether Chain-of-Thought Can Solve Them (☆548 · Updated Jun 25, 2024)
- Expanding natural instructions (☆1,030 · Updated Dec 11, 2023)
- PyTorch + HuggingFace code for RetoMaton: "Neuro-Symbolic Language Modeling with Automaton-augmented Retrieval" (ICML 2022), including an… (☆286 · Updated Oct 20, 2022)
- Benchmarking large language models' complex reasoning ability with chain-of-thought prompting (☆2,768 · Updated Aug 4, 2024)
- 🤖ConvRe🤯: An Investigation of LLMs' Inefficacy in Understanding Converse Relations (EMNLP 2023) (☆24 · Updated Oct 10, 2023)
- Toolkit for creating, sharing, and using natural language prompts (☆2,997 · Updated Oct 23, 2023)
- maximal update parametrization (µP) (☆1,676 · Updated Jul 17, 2024)
- Measuring Massive Multitask Language Understanding | ICLR 2021 (☆1,552 · Updated May 28, 2023)
- A framework for few-shot evaluation of language models (☆11,393 · Updated this week)
- ☆2,947 · Updated Jan 15, 2026
- ☆1,559 · Updated Feb 5, 2026
- pair2vec: Compositional Word-Pair Embeddings for Cross-Sentence Inference (☆61 · Updated Dec 8, 2022)
- Reproduce results and replicate training of T0 (Multitask Prompted Training Enables Zero-Shot Task Generalization) (☆465 · Updated Nov 5, 2022)
- ☆284 · Updated Mar 2, 2024
- Code release for the "Broken Neural Scaling Laws" (BNSL) paper (☆59 · Updated Oct 29, 2023)
- TruthfulQA: Measuring How Models Imitate Human Falsehoods (☆880 · Updated Jan 16, 2025)
- Holistic Evaluation of Language Models (HELM) is an open-source Python framework created by the Center for Research on Foundation Models… (☆2,667 · Updated Feb 9, 2026)
- Tk-Instruct is a Transformer model that is tuned to solve many NLP tasks by following instructions (☆183 · Updated Oct 28, 2022)
- ☆19 · Updated Jan 21, 2023
- Repo for external large-scale work (☆6,544 · Updated Apr 27, 2024)
- Beyond Accuracy: Behavioral Testing of NLP Models with CheckList (☆2,048 · Updated Jan 9, 2024)
- 800,000 step-level correctness labels on LLM solutions to MATH problems (☆2,092 · Updated Jun 1, 2023)
- Source-to-Source Debuggable Derivatives in Pure Python (☆15 · Updated Jan 23, 2024)
- Running large language models on a single GPU for throughput-oriented scenarios (☆9,384 · Updated Oct 28, 2024)
- Code for the paper "Fine-Tuning Language Models from Human Preferences" (☆1,377 · Updated Jul 25, 2023)
- Task-based datasets, preprocessing, and evaluation for sequence models (☆594 · Updated Feb 3, 2026)
- A dataset of alignment research and code to reproduce it (☆78 · Updated Jun 22, 2023)
- Keeping language models honest by directly eliciting knowledge encoded in their activations (☆217 · Updated Feb 9, 2026)
- A Unified Library for Parameter-Efficient and Modular Transfer Learning (☆2,802 · Updated Oct 12, 2025)
- Cramming the training of a (BERT-type) language model into limited compute (☆1,361 · Updated Jun 13, 2024)
- Evals is a framework for evaluating LLMs and LLM systems, and an open-source registry of benchmarks (☆17,663 · Updated Nov 3, 2025)
- ☆250 · Updated Dec 21, 2022
- Dense Passage Retriever is a set of tools and models for the open-domain Q&A task (☆1,858 · Updated Apr 6, 2023)