ahans30 / goldfish-loss
[NeurIPS 2024] Goldfish Loss: Mitigating Memorization in Generative LLMs
☆93 · Updated last year
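The listing does not reproduce the repository's README, but from the paper title the idea is to drop a pseudo-random subset of tokens from the training objective so the model never receives supervision on a full sequence and cannot regurgitate it verbatim. Below is a minimal PyTorch sketch of that idea, assuming a simple random drop mask; the repo's actual masking rule (e.g. a hashed or strided mask) may differ, and `goldfish_loss`, `k`, and `seed` are illustrative names only.

```python
import torch
import torch.nn.functional as F


def goldfish_loss(logits: torch.Tensor, labels: torch.Tensor,
                  k: int = 4, seed: int = 0) -> torch.Tensor:
    """Next-token cross-entropy that ignores roughly 1/k of target tokens.

    logits: (batch, seq_len, vocab); labels: (batch, seq_len).
    Dropped tokens contribute no gradient, so no training sequence is
    ever supervised in full -- the "goldfish memory" intuition.
    """
    # Standard causal-LM shift: predict token t+1 from the prefix up to t.
    shift_logits = logits[:, :-1, :]
    shift_labels = labels[:, 1:]

    # Per-token loss, no reduction yet.
    per_token = F.cross_entropy(
        shift_logits.reshape(-1, shift_logits.size(-1)),
        shift_labels.reshape(-1),
        reduction="none",
    ).view(shift_labels.shape)

    # Pseudo-random keep mask: drop each target with probability 1/k.
    gen = torch.Generator().manual_seed(seed)
    keep = (torch.rand(shift_labels.shape, generator=gen) >= 1.0 / k).to(per_token)

    return (per_token * keep).sum() / keep.sum().clamp(min=1.0)
```

In the paper the drop mask is tied to the sequence content (e.g. via a hash of the local context) so the same positions are always withheld for a given document; the fixed seed here is purely for simplicity of the sketch.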
Alternatives and similar repositories for goldfish-loss
Users interested in goldfish-loss are comparing it to the libraries listed below
- Gemstones: A Model Suite for Multi-Faceted Scaling Laws (NeurIPS 2025) ☆30 · Updated 3 months ago
- Replicating O1 inference-time scaling laws ☆91 · Updated last year
- The repository contains code for Adaptive Data Optimization ☆30 · Updated last year
- Reference implementation for Reward-Augmented Decoding: Efficient Controlled Text Generation With a Unidirectional Reward Model ☆45 · Updated 2 months ago
- Stanford NLP Python library for benchmarking the utility of LLM interpretability methods ☆161 · Updated 6 months ago
- PyTorch library for Active Fine-Tuning ☆95 · Updated 3 months ago
- ☆75 · Updated last year
- Code for the ICLR 2024 paper "How to catch an AI liar: Lie detection in black-box LLMs by asking unrelated questions" ☆71 · Updated last year
- [ICLR 2025] Monet: Mixture of Monosemantic Experts for Transformers ☆74 · Updated 6 months ago
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment ☆62 · Updated last year
- Repository for NPHardEval, a quantified-dynamic benchmark of LLMs ☆61 · Updated last year
- Exploration of automated dataset selection approaches at large scales. ☆52 · Updated 9 months ago
- Language models scale reliably with over-training and on downstream tasks ☆100 · Updated last year
- ☆51 · Updated last year
- ☆91 · Updated last year
- ☆33 · Updated 11 months ago
- Aioli: A unified optimization framework for language model data mixing ☆31 · Updated 11 months ago
- Code for the arXiv preprint "The Unreasonable Effectiveness of Easy Training Data" ☆48 · Updated last year
- Is In-Context Learning Sufficient for Instruction Following in LLMs? [ICLR 2025] ☆32 · Updated 11 months ago
- Official repository of "LiNeS: Post-training Layer Scaling Prevents Forgetting and Enhances Model Merging" ☆31 · Updated last year
- Code for NeurIPS 2024 Spotlight: "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations" ☆86 · Updated last year
- PostTrainBench measures how well CLI agents like Claude Code or Codex CLI can post-train base LLMs on a single H100 GPU in 10 hours ☆55 · Updated this week
- Code for reproducing our paper "Low Rank Adapting Models for Sparse Autoencoder Features" ☆17 · Updated 9 months ago
- PyTorch implementation for "Compressed Context Memory For Online Language Model Interaction" (ICLR'24) ☆62 · Updated last year
- ☆100 · Updated last year
- Code for reproducing our paper "Not All Language Model Features Are Linear" ☆83 · Updated last year
- The Official Repository for "Bring Your Own Data! Self-Supervised Evaluation for Large Language Models" ☆107 · Updated 2 years ago
- PaCE: Parsimonious Concept Engineering for Large Language Models (NeurIPS 2024)