jnward / monosemanticity-repro
☆28 · Updated 9 months ago
Alternatives and similar repositories for monosemanticity-repro:
Users interested in monosemanticity-repro are comparing it to the libraries listed below.
- Steer LLM outputs towards a certain topic/subject and enhance response capabilities using activation engineering by adding steering vecto… (☆222, updated this week)
- Create feature-centric and prompt-centric visualizations for sparse autoencoders (like those from Anthropic's published research). (☆182, updated 2 months ago)
- Functional Benchmarks and the Reasoning Gap (☆82, updated 4 months ago)
- Code to reproduce "Transformers Can Do Arithmetic with the Right Embeddings", McLeish et al. (NeurIPS 2024) (☆185, updated 8 months ago)
- Just a bunch of benchmark logs for different LLMs (☆119, updated 6 months ago)
- RAFT, or Retrieval-Augmented Fine-Tuning, is a method comprising a fine-tuning phase and a RAG-based retrieval phase. It is particularly sui… (☆85, updated 5 months ago)
- OpenCoconut implements a latent reasoning paradigm where we generate thoughts before decoding. (☆167, updated last month)
- Mixing Language Models with Self-Verification and Meta-Verification (☆100, updated 2 months ago)
- An easy-to-understand framework for LLM samplers that rewind and revise generated tokens (☆130, updated this week)
- Code for "In-Context Vectors: Making In-Context Learning More Effective and Controllable Through Latent Space Steering" (☆162, updated last week)
- Small and Efficient Mathematical Reasoning LLMs (☆71, updated last year)
- Improving Alignment and Robustness with Circuit Breakers (☆185, updated 4 months ago)
- An implementation of Self-Extend, to expand the context window via grouped attention (☆118, updated last year)
- Sparse autoencoders for Contra text embedding models (☆25, updated 9 months ago)
- Truth Forest: Toward Multi-Scale Truthfulness in Large Language Models through Intervention without Tuning (☆45, updated last year)
- Comprehensive analysis of the performance differences between QLoRA, LoRA, and full fine-tunes (☆82, updated last year)
- Fine-tune Falcon, LLaMA, MPT, and RedPajama on consumer hardware using PEFT LoRA (☆101, updated 6 months ago)
- Evaluating LLMs with fewer examples (☆145, updated 10 months ago)
- Set of scripts to fine-tune LLMs (☆36, updated 10 months ago)
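Several entries above (the monosemanticity reproduction itself, the SAE visualization tools, and the text-embedding SAEs) revolve around sparse autoencoders trained on model activations. As a rough orientation, not the implementation from any listed repository, here is a minimal NumPy sketch of the core objective: reconstruct activations through an overcomplete dictionary while an L1 penalty keeps feature activations sparse. All dimensions, names, and constants are illustrative assumptions.

```python
import numpy as np

# Minimal sparse-autoencoder sketch (illustrative; the dimensions and
# coefficient below are assumptions, not taken from any repo above).
rng = np.random.default_rng(0)

d_model, d_hidden = 16, 64          # activation dim, dictionary size (overcomplete)
W_enc = rng.normal(0, 0.1, (d_model, d_hidden))
b_enc = np.zeros(d_hidden)
W_dec = rng.normal(0, 0.1, (d_hidden, d_model))
b_dec = np.zeros(d_model)

def encode(x):
    # ReLU yields non-negative feature activations, which the L1
    # penalty can drive exactly to zero (the "sparse" part).
    return np.maximum(0.0, x @ W_enc + b_enc)

def decode(f):
    # Each active feature adds its dictionary direction (a row of W_dec).
    return f @ W_dec + b_dec

x = rng.normal(size=(8, d_model))   # fake batch of residual-stream activations
f = encode(x)
x_hat = decode(f)

# Objective: reconstruction error plus an L1 sparsity penalty.
l1_coeff = 1e-3
loss = np.mean((x - x_hat) ** 2) + l1_coeff * np.abs(f).mean()
print(float(loss))
```

In a real training loop this loss would be minimized over a large corpus of cached activations; the visualization repositories listed above then inspect which inputs maximally activate each learned dictionary feature.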