guidelabs / infembed
Find the samples in the test data on which your (generative) model makes mistakes.
☆28 · Updated 9 months ago
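The repository's stated goal — surfacing the test samples a model handles worst — can be illustrated with a simple per-sample loss ranking. This is a minimal sketch of the idea, not infembed's actual influence-embedding method; the arrays and the `worst_samples` helper are hypothetical:

```python
import numpy as np

def worst_samples(probs: np.ndarray, labels: np.ndarray, k: int = 3) -> np.ndarray:
    """Return indices of the k test samples with the highest
    negative log-likelihood under the model's predicted probabilities."""
    # Per-sample NLL: -log p(correct class), with a small epsilon for stability.
    nll = -np.log(probs[np.arange(len(labels)), labels] + 1e-12)
    return np.argsort(nll)[::-1][:k]

# Toy example: 4 samples, 2 classes; sample 1 is badly misclassified.
probs = np.array([[0.90, 0.10],
                  [0.20, 0.80],
                  [0.60, 0.40],
                  [0.05, 0.95]])
labels = np.array([0, 0, 1, 1])
print(worst_samples(probs, labels, k=2))  # → [1 2]
```

In practice, methods like infembed go beyond raw loss ranking and group or explain errors, but loss ranking is the baseline such tools improve on.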
Alternatives and similar repositories for infembed
Users interested in infembed are comparing it to the libraries listed below.
- ☆107 · Updated this week
- A fast, effective data attribution method for neural networks in PyTorch ☆213 · Updated 8 months ago
- Steering vectors for transformer language models in PyTorch / Hugging Face ☆117 · Updated 5 months ago
- ☆101 · Updated 5 months ago
- NeuroSurgeon is a package that enables researchers to uncover and manipulate subnetworks within Hugging Face Transformers models ☆41 · Updated 5 months ago
- ☆34 · Updated last year
- Erasing concepts from neural representations with provable guarantees ☆231 · Updated 5 months ago
- Delphi was the home of a temple to Phoebus Apollo, which famously had the inscription "Know Thyself." This library lets language models … ☆195 · Updated last week
- Sparse Autoencoder for Mechanistic Interpretability ☆256 · Updated last year
- ☆181 · Updated last week
- AI Logging for Interpretability and Explainability 🔬 ☆124 · Updated last year
- Stanford NLP Python library for benchmarking the utility of LLM interpretability methods ☆102 · Updated 3 weeks ago
- Create feature-centric and prompt-centric visualizations for sparse autoencoders (like those from Anthropic's published research) ☆206 · Updated 7 months ago
- Improving Alignment and Robustness with Circuit Breakers ☆220 · Updated 9 months ago
- Influence Functions with (Eigenvalue-corrected) Kronecker-Factored Approximate Curvature ☆157 · Updated 3 weeks ago
- ☆315 · Updated last week
- ☆231 · Updated 9 months ago
- Repository for PURE: Turning Polysemantic Neurons Into Pure Features by Identifying Relevant Circuits, accepted at the CVPR 2024 XAI4CV Works… ☆16 · Updated last year
- ☆22 · Updated 11 months ago
- Sparsify transformers with SAEs and transcoders ☆590 · Updated last week
- ☆72 · Updated 2 years ago
- Tools for understanding how transformer predictions are built layer by layer ☆508 · Updated last year
- Evaluate interpretability methods on localizing and disentangling concepts in LLMs ☆50 · Updated 9 months ago
- ☆273 · Updated last year
- ☆100 · Updated 5 months ago
- Stanford NLP Python library for understanding and improving PyTorch models via interventions ☆776 · Updated last week
- ☆123 · Updated last year
- Python package for measuring memorization in LLMs ☆160 · Updated this week
- Steering Llama 2 with Contrastive Activation Addition ☆165 · Updated last year
- General-purpose activation steering library ☆85 · Updated 2 months ago
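Several entries in this list (steering vectors, Contrastive Activation Addition, general activation steering) share a common core idea: derive a direction from contrasting activations on paired prompts, then add it to a model's hidden states at some layer. The NumPy sketch below shows only that core idea under simplifying assumptions; the function names are hypothetical and real libraries hook into actual transformer layers rather than random matrices:

```python
import numpy as np

def contrastive_steering_vector(pos_acts: np.ndarray, neg_acts: np.ndarray) -> np.ndarray:
    """Mean difference between activations collected on 'positive' and
    'negative' prompts — the basic contrastive steering direction."""
    return pos_acts.mean(axis=0) - neg_acts.mean(axis=0)

def apply_steering(hidden: np.ndarray, vector: np.ndarray, strength: float = 1.0) -> np.ndarray:
    """Add the scaled steering vector to a hidden-state matrix
    (tokens x hidden_dim), as would be done at a chosen layer."""
    return hidden + strength * vector

# Toy data standing in for layer activations (16 prompts, hidden dim 8).
rng = np.random.default_rng(0)
pos = rng.normal(1.0, 0.1, size=(16, 8))   # activations for one side of the contrast pair
neg = rng.normal(-1.0, 0.1, size=(16, 8))  # activations for the other side
v = contrastive_steering_vector(pos, neg)
steered = apply_steering(rng.normal(size=(4, 8)), v, strength=0.5)
print(steered.shape)  # → (4, 8)
```

The libraries above differ mainly in where the vector is injected (which layer, which token positions), how the contrast pairs are built, and how the strength is scheduled during decoding.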