Nix07 / finetuning
This repository contains the code used for the experiments in the paper "Fine-Tuning Enhances Existing Mechanisms: A Case Study on Entity Tracking".
☆25 · Updated last year
Alternatives and similar repositories for finetuning
Users interested in finetuning are comparing it to the libraries listed below.
- ☆33 · Updated last week
- ☆82 · Updated 9 months ago
- A library for efficient patching and automatic circuit discovery. ☆64 · Updated 3 weeks ago
- Stanford NLP Python library for benchmarking the utility of LLM interpretability methods. ☆77 · Updated last month
- Code for my NeurIPS 2024 ATTRIB paper "Attribution Patching Outperforms Automated Circuit Discovery". ☆31 · Updated 11 months ago
- Simple and scalable tools for data-driven pretraining data selection. ☆23 · Updated 3 months ago
- Code and data repository for the CoNLL paper "Future Lens: Anticipating Subsequent Tokens from a Single Hidden State". ☆18 · Updated last year
- Open-source replication of Anthropic's Crosscoders for model diffing. ☆55 · Updated 6 months ago
- Evaluate interpretability methods on localizing and disentangling concepts in LLMs. ☆46 · Updated 7 months ago
- ☆94 · Updated last year
- Algebraic value editing in pretrained language models. ☆65 · Updated last year
- How do transformer LMs encode relations? ☆48 · Updated last year
- Sparse autoencoder training library. ☆49 · Updated last week
- General-purpose activation steering library. ☆66 · Updated last week
- Code for the OpenMOSS Mechanistic Interpretability Team's Sparse Autoencoder (SAE) research. ☆113 · Updated this week
- ☆92 · Updated 3 months ago
- ☆114 · Updated 9 months ago
- [ICLR 2025] Monet: Mixture of Monosemantic Experts for Transformers. ☆68 · Updated 3 months ago
- Full code for the sparse probing paper. ☆56 · Updated last year
- Answers the question "How to do patching on all available SAEs on GPT-2?"; the official repository for the implementation of the p… ☆11 · Updated 3 months ago
- Datasets from the paper "Towards Understanding Sycophancy in Language Models". ☆75 · Updated last year
- ☆33 · Updated last year
- ☆42 · Updated last year
- Language models scale reliably with over-training and on downstream tasks. ☆97 · Updated last year
- Accompanying code for "Transformer Feed-Forward Layers Are Key-Value Memories" by Mor Geva, Roei Schuster, Jonathan Berant, and Omer Le… ☆91 · Updated 3 years ago
- Function Vectors in Large Language Models (ICLR 2024). ☆166 · Updated 3 weeks ago
- Code for reproducing our paper "Not All Language Model Features Are Linear". ☆73 · Updated 5 months ago
- Repo accompanying our paper "Do Llamas Work in English? On the Latent Language of Multilingual Transformers". ☆73 · Updated last year
- ☆52 · Updated 11 months ago
- ☆24 · Updated 3 months ago