google-deepmind / mishax
☆130 · Updated last month
Alternatives and similar repositories for mishax:
Users interested in mishax are comparing it to the libraries listed below.
- Delphi was the home of a temple to Phoebus Apollo, which famously had the inscription, 'Know Thyself.' This library lets language models … ☆170 · Updated this week
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ☆105 · Updated this week
- Create feature-centric and prompt-centric visualizations for sparse autoencoders (like those from Anthropic's published research). ☆199 · Updated 4 months ago
- Code for reproducing our paper "Not All Language Model Features Are Linear". ☆73 · Updated 5 months ago
- Open source interpretability artefacts for R1. ☆103 · Updated 2 weeks ago
- Functional Benchmarks and the Reasoning Gap. ☆85 · Updated 7 months ago
- Code to reproduce "Transformers Can Do Arithmetic with the Right Embeddings", McLeish et al. (NeurIPS 2024). ☆189 · Updated 11 months ago
- ☆26 · Updated last year
- Steering vectors for transformer language models in PyTorch / Hugging Face. ☆99 · Updated 2 months ago
- ☆38 · Updated 2 months ago
- Extract full next-token probabilities via language model APIs. ☆242 · Updated last year
- Experiments for efforts to train a new and improved T5. ☆77 · Updated last year
- Code for the NeurIPS 2024 paper "Grokked Transformers are Implicit Reasoners: A Mechanistic Journey to the Edge of Generalization". ☆189 · Updated 5 months ago
- Applying SAEs for fine-grained control. ☆17 · Updated 4 months ago
- ☆91 · Updated 2 months ago
- nanoGPT-like codebase for LLM training. ☆94 · Updated last month
- Archon provides a modular framework for combining different inference-time techniques and LMs with just a JSON config file. ☆172 · Updated last month
- ☆54 · Updated this week
- A toolkit for describing model features and intervening on those features to steer behavior. ☆182 · Updated 5 months ago
- Sparse Autoencoder Training Library. ☆49 · Updated this week
- ☆73 · Updated last week
- Evaluating LLMs with fewer examples. ☆151 · Updated last year
- OpenCoconut implements a latent reasoning paradigm where we generate thoughts before decoding. ☆172 · Updated 3 months ago
- ☆39 · Updated 5 months ago
- Minimal (400 LOC) implementation of maximum (multi-node, FSDP) GPT training. ☆123 · Updated last year
- Open source replication of Anthropic's Crosscoders for Model Diffing. ☆54 · Updated 6 months ago
- Mechanistic Interpretability Visualizations using React. ☆242 · Updated 4 months ago
- Notebooks accompanying Anthropic's "Toy Models of Superposition" paper. ☆120 · Updated 2 years ago
- Repository for the paper "Stream of Search: Learning to Search in Language". ☆145 · Updated 3 months ago
- ☆121 · Updated last year