Nix07 / finetuning
This repository contains the code used for the experiments in the paper "Fine-Tuning Enhances Existing Mechanisms: A Case Study on Entity Tracking".
☆27 · Updated last year
Alternatives and similar repositories for finetuning
Users interested in finetuning are comparing it to the repositories listed below.
- A library for efficient patching and automatic circuit discovery. ☆73 · Updated 2 weeks ago
- ☆89 · Updated 11 months ago
- Code for my NeurIPS 2024 ATTRIB paper titled "Attribution Patching Outperforms Automated Circuit Discovery" ☆40 · Updated last year
- Stanford NLP Python library for benchmarking the utility of LLM interpretability methods ☆112 · Updated last month
- Open source replication of Anthropic's Crosscoders for Model Diffing ☆57 · Updated 9 months ago
- Function Vectors in Large Language Models (ICLR 2024) ☆175 · Updated 3 months ago
- ☆121 · Updated last year
- ☆103 · Updated 5 months ago
- [ICLR 2025] Monet: Mixture of Monosemantic Experts for Transformers ☆70 · Updated last month
- ☆23 · Updated 6 months ago
- ☆96 · Updated last year
- [EMNLP Findings 2024 & ACL 2024 NLRSE Oral] Enhancing Mathematical Reasonin… ☆51 · Updated last year
- Code for reproducing our paper "Not All Language Model Features Are Linear" ☆77 · Updated 8 months ago
- ☆15 · Updated last year
- ☆21 · Updated 3 months ago
- Sparse Autoencoder Training Library ☆54 · Updated 3 months ago
- ☆45 · Updated last week
- ☆77 · Updated 4 months ago
- ☆88 · Updated last year
- Code repository for the paper "Mission: Impossible Language Models." ☆52 · Updated 3 months ago
- Evaluate interpretability methods on localizing and disentangling concepts in LLMs. ☆52 · Updated 10 months ago
- Repo accompanying our paper "Do Llamas Work in English? On the Latent Language of Multilingual Transformers". ☆78 · Updated last year
- [NeurIPS 2024 Spotlight] Code and data for the paper "Finding Transformer Circuits with Edge Pruning". ☆59 · Updated last week
- Code and data repo for the CoNLL paper "Future Lens: Anticipating Subsequent Tokens from a Single Hidden State" ☆18 · Updated last year
- Algebraic value editing in pretrained language models ☆65 · Updated last year
- Language models scale reliably with over-training and on downstream tasks ☆97 · Updated last year
- Multi-Layer Sparse Autoencoders (ICLR 2025) ☆23 · Updated 5 months ago
- Code for the ICLR 2024 paper "How to catch an AI liar: Lie detection in black-box LLMs by asking unrelated questions" ☆71 · Updated last year
- For OpenMOSS Mechanistic Interpretability Team's Sparse Autoencoder (SAE) research. ☆141 · Updated last week
- Code for NeurIPS 2024 Spotlight: "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations" ☆77 · Updated 9 months ago