ndif-team / ndif
The NDIF server, which performs deep inference and serves nnsight requests remotely
☆36 · Updated this week
Alternatives and similar repositories for ndif
Users interested in ndif are comparing it to the libraries listed below.
- Notebooks accompanying Anthropic's "Toy Models of Superposition" paper ☆130 · Updated 3 years ago
- ☆144 · Updated 3 months ago
- Investigating the generalization behavior of LM probes trained to predict truth labels: (1) from one annotator to another, and (2) from e… ☆28 · Updated last year
- ☆81 · Updated last week
- Open source replication of Anthropic's Crosscoders for Model Diffing ☆63 · Updated last year
- Delphi was the home of a temple to Phoebus Apollo, which famously had the inscription, 'Know Thyself.' This library lets language models … ☆231 · Updated last week
- ☆132 · Updated 2 years ago
- ☆132 · Updated last year
- A library for efficient patching and automatic circuit discovery. ☆80 · Updated 4 months ago
- Code for reproducing our paper "Not All Language Model Features Are Linear" ☆84 · Updated last year
- Create feature-centric and prompt-centric visualizations for sparse autoencoders (like those from Anthropic's published research). ☆232 · Updated 11 months ago
- Sparse Autoencoder Training Library ☆55 · Updated 7 months ago
- Steering vectors for transformer language models in PyTorch / Hugging Face ☆130 · Updated 9 months ago
- ☆58 · Updated last year
- Code for my NeurIPS 2024 ATTRIB paper titled "Attribution Patching Outperforms Automated Circuit Discovery" ☆43 · Updated last year
- ☆95 · Updated last year
- ☆70 · Updated 3 years ago
- Measuring the situational awareness of language models ☆39 · Updated last year
- A toolkit for describing model features and intervening on those features to steer behavior. ☆220 · Updated last year
- ☆36 · Updated last year
- ☆259 · Updated last year
- Implementation of the BatchTopK activation function for training sparse autoencoders (SAEs) ☆55 · Updated 4 months ago
- ☆111 · Updated 10 months ago
- Stanford NLP Python library for benchmarking the utility of LLM interpretability methods ☆151 · Updated 5 months ago
- Steering Llama 2 with Contrastive Activation Addition ☆196 · Updated last year
- Keeping language models honest by directly eliciting knowledge encoded in their activations. ☆215 · Updated this week
- ☆66 · Updated 2 months ago
- Erasing concepts from neural representations with provable guarantees ☆239 · Updated 10 months ago
- Open source interpretability artefacts for R1. ☆164 · Updated 7 months ago
- Datasets from the paper "Towards Understanding Sycophancy in Language Models" ☆99 · Updated 2 years ago