vgel / repeng
A library for making RepE (representation engineering) control vectors
☆595 · Updated 4 months ago
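For orientation, here is a minimal sketch of training and applying a control vector with repeng, following the usage pattern in the project's README; the model name, layer range, strength coefficient, and dataset contents are illustrative assumptions, not fixed requirements.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from repeng import ControlVector, ControlModel, DatasetEntry

# Illustrative model choice; any supported decoder-only HF model works.
model_name = "mistralai/Mistral-7B-Instruct-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token_id = 0
base = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16)

# Wrap the model so the chosen hidden layers can be read and steered
# (negative indices count back from the final layer).
model = ControlModel(base, list(range(-5, -18, -1)))

# Contrastive pairs: the vector is derived from the difference between
# activations on "positive" and "negative" prompts (toy example here).
dataset = [
    DatasetEntry(
        positive="Act extremely happy. How was your day?",
        negative="Act extremely sad. How was your day?",
    ),
    # ... more contrastive pairs generally yield a cleaner vector
]

control_vector = ControlVector.train(model, tokenizer, dataset)

# Apply the vector with a strength coefficient, generate, then reset.
model.set_control(control_vector, 1.5)
# model.generate(...) as with any Hugging Face model
model.reset()
```

A positive coefficient pushes generations toward the "positive" side of the contrast; a negative coefficient inverts the effect.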
Alternatives and similar repositories for repeng
Users interested in repeng are comparing it to the libraries listed below.
- Sparsify transformers with SAEs and transcoders ☆547 · Updated last week
- Steer LLM outputs towards a certain topic/subject and enhance response capabilities using activation engineering by adding steering vectors… ☆238 · Updated 3 months ago
- Visualize the intermediate output of Mistral 7B ☆362 · Updated 4 months ago
- Stop messing around with finicky sampling parameters and just use DRµGS! ☆349 · Updated last year
- A complete end-to-end pipeline for LLM interpretability with sparse autoencoders (SAEs) using Llama 3.2, written in pure PyTorch and full… ☆615 · Updated 2 months ago
- MiniHF is an inference, human preference data collection, and fine-tuning tool for local language models. It is intended to help the user… ☆172 · Updated last week
- Extract full next-token probabilities via language model APIs ☆248 · Updated last year
- Fast & more realistic evaluation of chat language models. Includes leaderboard. ☆187 · Updated last year
- Automatically evaluate your LLMs in Google Colab ☆631 · Updated last year
- Toolkit for attaching, training, saving and loading of new heads for transformer models ☆279 · Updated 3 months ago
- Utilities for decoding deep representations (like sentence embeddings) back to text ☆814 · Updated last week
- The repository for the code of the UltraFastBERT paper ☆514 · Updated last year
- A comprehensive repository of reasoning tasks for LLMs (and beyond) ☆443 · Updated 8 months ago
- This is our own implementation of 'Layer Selective Rank Reduction' ☆238 · Updated last year
- Training Sparse Autoencoders on Language Models ☆802 · Updated last week
- Create feature-centric and prompt-centric visualizations for sparse autoencoders (like those from Anthropic's published research). ☆200 · Updated 5 months ago
- Mass-editing thousands of facts into a transformer memory (ICLR 2023) ☆494 · Updated last year
- ☆412 · Updated last year
- Visualizing attention for LLM users ☆212 · Updated 5 months ago
- Erasing concepts from neural representations with provable guarantees ☆227 · Updated 4 months ago
- ☆517 · Updated 6 months ago
- Draw more samples ☆190 · Updated 11 months ago
- Code to train and evaluate Neural Attention Memory Models to obtain universally-applicable memory systems for transformers. ☆309 · Updated 7 months ago
- ☆287 · Updated last month
- Generate Synthetic Data Using OpenAI, MistralAI or AnthropicAI ☆221 · Updated last year
- Landmark Attention: Random-Access Infinite Context Length for Transformers ☆421 · Updated last year
- Fine-tune Mistral-7B on 3090s, A100s, H100s ☆711 · Updated last year
- ☆536 · Updated 9 months ago
- [ICML'24 Spotlight] LLM Maybe LongLM: Self-Extend LLM Context Window Without Tuning ☆651 · Updated last year
- Batched LoRAs ☆343 · Updated last year