apple / ml-aura
Whispering Experts: Neural Interventions for Toxicity Mitigation in Language Models, ICML 2024
☆21 · Updated last year
Alternatives and similar repositories for ml-aura
Users interested in ml-aura are comparing it to the repositories listed below.
- ☆55 · Updated last year
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment ☆60 · Updated last year
- [NeurIPS 2024] Goldfish Loss: Mitigating Memorization in Generative LLMs ☆90 · Updated 9 months ago
- Aioli: A unified optimization framework for language model data mixing ☆27 · Updated 7 months ago
- ☆33 · Updated last year
- Code for the ICLR 2024 paper "How to catch an AI liar: Lie detection in black-box LLMs by asking unrelated questions" ☆72 · Updated last year
- ☆22 · Updated 2 weeks ago
- ☆44 · Updated 9 months ago
- PyTorch library for Active Fine-Tuning ☆89 · Updated 6 months ago
- MEXMA: Token-level objectives improve sentence representations ☆41 · Updated 7 months ago
- Code for reproducing our paper "Not All Language Model Features Are Linear" ☆77 · Updated 9 months ago
- ☆26 · Updated last year
- ☆27 · Updated last year
- This repository contains the code used for the experiments in the paper "Fine-Tuning Enhances Existing Mechanisms: A Case Study on Entity… ☆27 · Updated last year
- ☆54 · Updated 9 months ago
- ☆85 · Updated last year
- ☆28 · Updated 6 months ago
- Language models scale reliably with over-training and on downstream tasks ☆98 · Updated last year
- Exploration of automated dataset selection approaches at large scales. ☆47 · Updated 5 months ago
- Reference implementation for Reward-Augmented Decoding: Efficient Controlled Text Generation With a Unidirectional Reward Model ☆44 · Updated last year
- One Initialization to Rule them All: Fine-tuning via Explained Variance Adaptation ☆41 · Updated 10 months ago
- Lottery Ticket Adaptation ☆39 · Updated 9 months ago
- We introduce EMMET and unify model editing with popular algorithms ROME and MEMIT. ☆24 · Updated 8 months ago
- Minimum Description Length probing for neural network representations ☆18 · Updated 7 months ago
- The Official Repository for "Bring Your Own Data! Self-Supervised Evaluation for Large Language Models" ☆107 · Updated last year
- Yet another random morning idea to be quickly tried and architecture shared if it works; to allow the transformer to pause for any amount… ☆53 · Updated last year
- Understanding the correlation between different LLM benchmarks ☆29 · Updated last year
- ☆76 · Updated last week
- Code for PHATGOOSE introduced in "Learning to Route Among Specialized Experts for Zero-Shot Generalization" ☆88 · Updated last year
- [ICLR 2025] Monet: Mixture of Monosemantic Experts for Transformers ☆70 · Updated 2 months ago