TristanThrush / perplexity-correlations
Simple and scalable tools for data-driven pretraining data selection.
☆20 · Updated 2 months ago
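The name suggests the core idea: rank candidate pretraining data by how well per-domain perplexity across many existing LLMs correlates with their benchmark performance. Below is a minimal sketch of that idea, not the library's actual API; the inputs (`loss`, `bench`) are synthetic stand-ins, and Spearman rank correlation is an assumed choice of estimator — see the repo for its real estimators and selection procedure.

```python
# Minimal sketch of perplexity-correlation-based data selection.
# Assumptions: synthetic data; `loss` is an (n_models, n_domains) matrix of
# per-domain log-losses from public LLMs, `bench` is each model's benchmark
# accuracy. This is NOT the perplexity-correlations library's API.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_models, n_domains = 50, 200
loss = rng.normal(size=(n_models, n_domains))  # per-domain log-loss per model
# Toy benchmark signal: accuracy driven by loss on the first 20 domains.
bench = -loss[:, :20].mean(axis=1) + 0.1 * rng.normal(size=n_models)

# Score each domain by how strongly low loss on it tracks high benchmark
# performance across models (rank correlation ignores scale differences).
scores = np.empty(n_domains)
for j in range(n_domains):
    rho, _ = spearmanr(-loss[:, j], bench)
    scores[j] = rho

# Keep the domains whose perplexity best predicts benchmark accuracy.
top_k = np.argsort(scores)[::-1][:25]
print("selected domains:", top_k)
```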
Alternatives and similar repositories for perplexity-correlations:
Users interested in perplexity-correlations are comparing it to the repositories listed below.
- Language models scale reliably with over-training and on downstream tasks ☆96 · Updated last year
- Code for NeurIPS 2024 Spotlight: "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations" ☆71 · Updated 5 months ago
- This repository contains the code used for the experiments in the paper "Fine-Tuning Enhances Existing Mechanisms: A Case Study on Entity…" ☆25 · Updated last year
- ☆16 · Updated 2 months ago
- Exploration of automated dataset selection approaches at large scales. ☆39 · Updated last month
- ☆30 · Updated 9 months ago
- ☆18 · Updated 9 months ago
- Yet another random morning idea to be quickly tried and architecture shared if it works; to allow the transformer to pause for any amount… ☆53 · Updated last year
- Provides the answer to "How to do patching on all available SAEs on GPT-2?". The official repository of the implementation of the p… ☆11 · Updated 2 months ago
- Code for "Tracing Knowledge in Language Models Back to the Training Data"☆37Updated 2 years ago
- ☆23Updated 2 months ago
- ☆66Updated 3 years ago
- Investigating the generalization behavior of LM probes trained to predict truth labels: (1) from one annotator to another, and (2) from e…☆26Updated 11 months ago
- Stanford NLP Python library for benchmarking the utility of LLM interpretability methods☆70Updated 3 weeks ago
- A library for efficient patching and automatic circuit discovery.☆62Updated 2 months ago
- ☆91Updated 2 months ago
- ☆41Updated last year
- Sparse Autoencoder Training Library☆48Updated 5 months ago
- Reference implementation for Reward-Augmented Decoding: Efficient Controlled Text Generation With a Unidirectional Reward Model☆45Updated last year
- Code release for Dataless Knowledge Fusion by Merging Weights of Language Models (https://openreview.net/forum?id=FCnohuR6AnM)☆88Updated last year
- ☆47Updated last year
- ☆72Updated 11 months ago
- ☆51Updated 11 months ago
- Revisiting Efficient Training Algorithms For Transformer-based Language Models (NeurIPS 2023)☆80Updated last year
- ☆54Updated last year
- ☆23Updated 7 months ago
- Align your LM to express calibrated verbal statements of confidence in its long-form generations.☆22Updated 10 months ago
- ☆12Updated 10 months ago
- A Kernel-Based View of Language Model Fine-Tuning https://arxiv.org/abs/2210.05643☆75Updated last year