goodfire-ai / goodfire-sdk
Ember is a hosted API/SDK that lets you shape a model's behavior by directly controlling its internal units of computation, or "features". With Ember, you can modify features to precisely control model outputs, or use them as building blocks for tasks like classification.
☆37 · Updated 4 months ago
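The listing itself carries no usage snippet, so here is a minimal sketch of the feature-steering workflow the description names: search for a feature, amplify it on a model variant, and sample from the edited model. The method names (`goodfire.Client`, `goodfire.Variant`, `features.search`, `Variant.set`, `chat.completions.create`) follow Goodfire's published quickstart but are assumptions about the current SDK surface and should be checked against the official docs; the API key and model name are placeholders.

```python
# Minimal sketch of feature steering with the Ember SDK; method names follow
# Goodfire's published quickstart but may differ across SDK versions.
import goodfire

client = goodfire.Client(api_key="YOUR_API_KEY")  # placeholder key

# A Variant wraps a base model whose internal features can be edited.
variant = goodfire.Variant("meta-llama/Meta-Llama-3.1-8B-Instruct")

# Search for features matching a concept, then amplify the top match.
features = client.features.search("pirate speech", model=variant, top_k=3)
variant.set(features[0], 0.5)  # positive weights strengthen the feature

# Generate from the steered model variant.
response = client.chat.completions.create(
    messages=[{"role": "user", "content": "Tell me about treasure maps."}],
    model=variant,
)
print(response.choices[0].message["content"])
```

Per Goodfire's docs, negative weights suppress a feature rather than amplify it, which is what "precisely control model outputs" amounts to in practice.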
Alternatives and similar repositories for goodfire-sdk
Users interested in goodfire-sdk are comparing it to the libraries listed below.
- METR Task Standard ☆167 · Updated 9 months ago
- Vivaria is METR's tool for running evaluations and conducting agent elicitation research. ☆120 · Updated last week
- Mechanistic Interpretability Visualizations using React ☆301 · Updated 11 months ago
- Notebooks accompanying Anthropic's "Toy Models of Superposition" paper ☆130 · Updated 3 years ago
- Machine Learning for Alignment Bootcamp ☆81 · Updated 3 years ago
- A toolkit for describing model features and intervening on those features to steer behavior. ☆214 · Updated last year
- Open source interpretability artefacts for R1. ☆163 · Updated 7 months ago
- ControlArena is a collection of settings, model organisms, and protocols for running control experiments. ☆128 · Updated this week
- Create feature-centric and prompt-centric visualizations for sparse autoencoders (like those from Anthropic's published research). ☆227 · Updated 11 months ago
- Contains random samples referenced in the paper "Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training". ☆122 · Updated last year
- Inference API for many LLMs and other useful tools for empirical research ☆80 · Updated this week
- Resources for skilling up in AI alignment research engineering. Covers basics of deep learning, mechanistic interpretability, and RL. ☆232 · Updated 3 months ago
- Open-source interpretability platform 🧠 ☆486 · Updated this week
- (Model-written) LLM evals library ☆18 · Updated 11 months ago
- Keeping language models honest by directly eliciting knowledge encoded in their activations. ☆212 · Updated last week
- Collection of evals for Inspect AI ☆284 · Updated this week
- Sparse Autoencoder for Mechanistic Interpretability ☆284 · Updated last year
- Decoder-only transformer, built from scratch with PyTorch ☆31 · Updated 2 years ago